Enforcement of entailment constraints in distributed service-based business processes.
Hummer, Waldemar; Gaubatz, Patrick; Strembeck, Mark; Zdun, Uwe; Dustdar, Schahram
2013-11-01
A distributed business process is executed in a distributed computing environment. The service-oriented architecture (SOA) paradigm is a popular option for the integration of software services and execution of distributed business processes. Entailment constraints, such as mutual exclusion and binding constraints, are important means to control process execution. Mutually exclusive tasks result from the division of powerful rights and responsibilities to prevent fraud and abuse. In contrast, binding constraints define that a subject who performed one task must also perform the corresponding bound task(s). We aim to provide a model-driven approach for the specification and enforcement of task-based entailment constraints in distributed service-based business processes. Based on a generic metamodel, we define a domain-specific language (DSL) that maps the different modeling-level artifacts to the implementation level. The DSL integrates elements from role-based access control (RBAC) with the tasks that are performed in a business process. Process definitions are annotated using the DSL, and our software platform uses automated model transformations to produce executable WS-BPEL specifications which enforce the entailment constraints. We evaluate the impact of constraint enforcement on runtime performance for five selected service-based processes from existing literature. Our evaluation demonstrates that the approach correctly enforces task-based entailment constraints at runtime. The performance experiments illustrate that the runtime enforcement operates with an overhead that scales well up to the order of several tens of thousands of logged invocations. Using our DSL annotations, the user-defined process definition remains declarative and clean of security enforcement code. Our approach decouples the concerns of (non-technical) domain experts from technical details of entailment constraint enforcement. The developed framework integrates seamlessly with WS-BPEL and the Web
Clinical Processes - The Killer Application for Constraint-Based Process Interactions
DEFF Research Database (Denmark)
Jiménez-Ramírez, Andrés; Barba, Irene; Reichert, Manfred
2018-01-01
For more than a decade, the interest in aligning information systems in a process-oriented way has been increasing. To enable operational support for business processes, the latter are usually specified in an imperative way. The resulting process models, however, tend to be too rigid to meet [...] examples. However, to the best of our knowledge, they have not been used to model complex, real-world scenarios that comprise constraints going beyond control-flow. In this paper, we propose the use of a declarative language for modeling a sophisticated healthcare process scenario from the real world [...]. The scenario is subject to complex temporal constraints and entails the need for coordinating the constraint-based interactions among the processes related to a patient treatment process. As demonstrated in this work, the selected real process scenario can be suitably modeled through a declarative approach.
International Nuclear Information System (INIS)
Yang Jie; Wang Guozhen; Xuan Fuzhen; Tu Shandong
2013-01-01
Background: Constraint can significantly alter a material's fracture toughness. Purpose: To increase the accuracy of structural integrity assessment, the effect of constraint on the fracture toughness of nuclear power materials and structures needs to be considered, and a unified measure which can reflect both in-plane and out-of-plane constraint is needed. Methods: In this paper, the finite element numerical simulation method was used; a unified measure and characterization parameter of in-plane and out-of-plane constraint based on crack-tip equivalent plastic strain was investigated. Results: The results show that the area surrounded by the ε_p isoline correlates well with the material's fracture toughness under different constraint conditions, so it may be a suitable parameter. Based on the area A_PEEQ, a unified constraint characterization parameter √A_p is defined. It was found that there exists a single linear relation between the normalized fracture toughness J_IC/J_ref and √A_p regardless of the in-plane constraint, the out-of-plane constraint, and the selection of the ε_p isolines. A single J_IC/J_ref-√A_p line exists for a given material. For different materials, the slope of the J_IC/J_ref-√A_p reference line differs. A material with a larger slope has a higher J_IC/J_ref and is more sensitive to constraint at the same magnitude of the normalized unified parameter. Conclusions: The unified J_IC/J_ref-√A_p reference line may be used to assess the safety of a cracked component at any constraint level, whether in-plane, out-of-plane, or both. (authors)
Improved constraints on cosmological parameters from SNIa data
International Nuclear Information System (INIS)
March, M.C.; Trotta, R.
2011-02-01
We present a new method based on a Bayesian hierarchical model to extract constraints on cosmological parameters from SNIa data obtained with the SALT-II lightcurve fitter. We demonstrate with simulated data sets that our method delivers considerably tighter statistical constraints on the cosmological parameters and that it outperforms the usual χ² approach 2/3 of the time. As a further benefit, a full posterior probability distribution for the dispersion of the intrinsic magnitude of SNe is obtained. We apply this method to recent SNIa data and find that it improves statistical constraints on cosmological parameters from SNIa data alone by about 40% w.r.t. the standard approach. From the combination of SNIa, CMB and BAO data we obtain Ω_m = 0.29±0.01, Ω_Λ = 0.72±0.01 (assuming w = -1) and Ω_m = 0.28±0.01, w = -0.90±0.04 (assuming flatness; statistical uncertainties only). We constrain the intrinsic dispersion of the B-band magnitude of the SNIa population, obtaining σ_μ^int = 0.13±0.01 mag. Applications to systematic uncertainties will be discussed in a forthcoming paper. (orig.)
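The standard χ² approach that the hierarchical method is compared against can be sketched as follows. This is a minimal illustration, not the authors' code: the fiducial parameters, step counts and error bars are hypothetical, and only the statistical χ² (no systematics, no marginalization over the intrinsic dispersion) is shown.

```python
import numpy as np

C_KM_S = 299792.458  # speed of light [km/s]

def e_of_z(z, om, ol):
    """Dimensionless Hubble rate E(z), allowing spatial curvature."""
    ok = 1.0 - om - ol
    return np.sqrt(om * (1 + z)**3 + ok * (1 + z)**2 + ol)

def distance_modulus(z, om, ol, h0=70.0, n=400):
    """mu(z) = 5 log10(d_L / 10 pc); comoving distance by trapezoidal rule."""
    z = np.atleast_1d(z).astype(float)
    mu = np.empty_like(z)
    for i, zi in enumerate(z):
        zz = np.linspace(0.0, zi, n)
        f = 1.0 / e_of_z(zz, om, ol)
        dc = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zz)) * C_KM_S / h0  # Mpc
        mu[i] = 5.0 * np.log10((1 + zi) * dc) + 25.0  # d_L in Mpc -> +25
    return mu

def chi2_sn(z, mu_obs, sigma, om, ol):
    """The chi^2 statistic the hierarchical model is reported to outperform."""
    r = (mu_obs - distance_modulus(z, om, ol)) / sigma
    return float(np.sum(r * r))
```

Minimizing `chi2_sn` over (Ω_m, Ω_Λ) reproduces the baseline fit; the paper's point is that replacing this with a hierarchical posterior tightens the resulting contours.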
Improved constraints on cosmological parameters from SNIa data
Energy Technology Data Exchange (ETDEWEB)
March, M.C.; Trotta, R. [Imperial College, London (United Kingdom). Astrophysics Group; Berkes, P. [Brandeis Univ., Waltham (United States). Volen Centre for Complex Systems; Starkman, G.D. [Case Western Reserve Univ., Cleveland (United States). CERCA and Dept. of Physics; Vaudrevange, P.M. [Case Western Reserve Univ., Cleveland (United States). CERCA and Dept. of Physics; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2011-02-15
We present a new method based on a Bayesian hierarchical model to extract constraints on cosmological parameters from SNIa data obtained with the SALT-II lightcurve fitter. We demonstrate with simulated data sets that our method delivers considerably tighter statistical constraints on the cosmological parameters and that it outperforms the usual χ² approach 2/3 of the time. As a further benefit, a full posterior probability distribution for the dispersion of the intrinsic magnitude of SNe is obtained. We apply this method to recent SNIa data and find that it improves statistical constraints on cosmological parameters from SNIa data alone by about 40% w.r.t. the standard approach. From the combination of SNIa, CMB and BAO data we obtain Ω_m = 0.29±0.01, Ω_Λ = 0.72±0.01 (assuming w = -1) and Ω_m = 0.28±0.01, w = -0.90±0.04 (assuming flatness; statistical uncertainties only). We constrain the intrinsic dispersion of the B-band magnitude of the SNIa population, obtaining σ_μ^int = 0.13±0.01 mag. Applications to systematic uncertainties will be discussed in a forthcoming paper. (orig.)
Architectural setup for online monitoring and control of process parameters in robot-based ISF
Störkle, Denis Daniel; Thyssen, Lars; Kuhlenkötter, Bernd
2017-10-01
This article describes new developments in an incremental, robot-based sheet metal forming process (Roboforming) for the production of sheet metal components for small lot sizes and prototypes. The dieless, kinematic-based generation of the shape is implemented by means of two industrial robots, which are interconnected to form a cooperating robot system. Compared to other incremental sheet forming (ISF) machines, this system offers high geometrical design flexibility without the need for any part-dependent tools. However, the industrial application of ISF is still limited by certain constraints, e.g. the low geometrical accuracy. Responding to these constraints, the authors introduce a new architectural setup that extends the current one with a superordinate process control. This control consists of two modules: compensation of the two industrial robots' low structural stiffness, and a combined force/torque control. It is expected that this contribution will lead to future research and development projects in which the authors will thoroughly investigate ISF process parameters influencing the geometric accuracy of the forming results.
Chandra Cluster Cosmology Project III: Cosmological Parameter Constraints
DEFF Research Database (Denmark)
Vikhlinin, A.; Kravtsov, A. V.; Burenin, R. A.
2009-01-01
function evolution to be used as a useful growth-of-structure-based dark energy probe. In this paper, we present cosmological parameter constraints obtained from Chandra observations of 37 clusters with ⟨z⟩ = 0.55 derived from the 400 deg² ROSAT serendipitous survey and the 49 brightest z ≈ 0.05 clusters...
Fiedler, Anna; Raeth, Sebastian; Theis, Fabian J; Hausser, Angelika; Hasenauer, Jan
2016-08-22
Ordinary differential equation (ODE) models are widely used to describe (bio-)chemical and biological processes. To enhance the predictive power of these models, their unknown parameters are estimated from experimental data. These experimental data are mostly collected in perturbation experiments, in which the processes are pushed out of steady state by applying a stimulus. The fact that the initial condition is a steady state of the unperturbed process provides valuable information, as it restricts the dynamics of the process and thereby the parameters. However, implementing steady-state constraints in the optimization often results in convergence problems. In this manuscript, we propose two new methods for solving optimization problems with steady-state constraints. The first method exploits ideas from optimization algorithms on manifolds and introduces a retraction operator, essentially reducing the dimension of the optimization problem. The second method is based on the continuous analogue of the optimization problem. This continuous analogue is an ODE whose equilibrium points are the optima of the constrained optimization problem. This equivalence enables the use of adaptive numerical methods for solving optimization problems with steady-state constraints. Both methods are tailored to the problem structure and exploit the local geometry of the steady-state manifold and its stability properties. A parameterization of the steady-state manifold is not required. The efficiency and reliability of the proposed methods are evaluated using one toy example and two applications. The first application example uses published data while the second uses a novel dataset for Raf/MEK/ERK signaling. The proposed methods demonstrated better convergence properties than state-of-the-art methods employed in systems and computational biology. Furthermore, the average computation time per converged start is significantly lower. In addition to the theoretical results, the
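The dimension-reduction idea, using the steady-state condition to eliminate the initial state from the optimization, can be illustrated on a toy one-state model. The rates, stimulus and data below are hypothetical; the paper's retraction operator and continuous analogue are far more general than this single-state elimination.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Toy model: dx/dt = k1*u - k2*x. The pre-stimulus input u = 1 gives the
# steady state x0 = k1/k2; at t = 0 the stimulus doubles u to 2.

def simulate(k1, k2, t):
    x0 = k1 / k2                           # steady-state constraint removes x0
    rhs = lambda t_, x: k1 * 2.0 - k2 * x  # perturbed dynamics (u = 2)
    sol = solve_ivp(rhs, (t[0], t[-1]), [x0], t_eval=t, rtol=1e-8)
    return sol.y[0]

def fit(t, data):
    """Least-squares fit of (k1, k2); log-parameters keep the rates positive."""
    loss = lambda p: float(np.sum((simulate(*np.exp(p), t) - data) ** 2))
    res = minimize(loss, np.log([0.5, 0.5]), method="Nelder-Mead")
    return np.exp(res.x)
```

Because x0 is computed from (k1, k2) rather than fitted, the optimizer searches a two-dimensional space in which the steady-state constraint holds by construction, which is the spirit of the retraction approach.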
Utilising temperature differences as constraints for estimating parameters in a simple climate model
International Nuclear Information System (INIS)
Bodman, Roger W; Karoly, David J; Enting, Ian G
2010-01-01
Simple climate models can be used to estimate the global temperature response to increasing greenhouse gases. Changes in the energy balance of the global climate system are represented by equations that necessitate the use of uncertain parameters. The values of these parameters can be estimated from historical observations, model testing, and tuning to more complex models. Efforts have been made at estimating the possible ranges for these parameters. This study continues this process, but demonstrates two new constraints. Previous studies have shown that land-ocean temperature differences are only weakly correlated with global mean temperature for natural internal climate variations. Hence, these temperature differences provide additional information that can be used to help constrain model parameters. In addition, an ocean heat content ratio can also provide a further constraint. A pulse response technique was used to identify relative parameter sensitivity which confirmed the importance of climate sensitivity and ocean vertical diffusivity, but the land-ocean warming ratio and the land-ocean heat exchange coefficient were also found to be important. Experiments demonstrate the utility of the land-ocean temperature difference and ocean heat content ratio for setting parameter values. This work is based on investigations with MAGICC (Model for the Assessment of Greenhouse-gas Induced Climate Change) as the simple climate model.
Safety constraints applied to an adaptive Bayesian condition-based maintenance optimization model
International Nuclear Information System (INIS)
Flage, Roger; Coit, David W.; Luxhøj, James T.; Aven, Terje
2012-01-01
A model is described that determines an optimal inspection and maintenance scheme for a deteriorating unit subject to a stochastic degradation process with independent and stationary increments and for which the parameters are uncertain. This model and the resulting maintenance plans offer some distinct benefits compared to prior research because the uncertainty of the degradation process is accommodated by a Bayesian approach and two new safety constraints have been applied to the problem: (1) with a given subjective probability (degree of belief), the limiting relative frequency of one or more failures during a fixed time interval is bounded; or (2) the subjective probability of one or more failures during a fixed time interval is bounded. In the model, the parameter(s) of a condition-based inspection scheduling function and a preventive replacement threshold are jointly optimized upon each replacement and inspection so as to minimize the expected long-run cost per unit of time while also satisfying one of the specified safety constraints. A numerical example is included to illustrate the effect of imposing each of the two different safety constraints.
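The interplay between the cost-rate objective and a failure-probability bound can be sketched with a Monte Carlo over a gamma degradation process. This is a simplified stand-in for the paper's model: the bound here is a per-renewal-cycle failure probability, the inspection interval is fixed at one time unit, and all numbers (costs, gamma parameters, thresholds) are hypothetical.

```python
import numpy as np

def simulate_policy(threshold, failure_level=10.0, shape=1.0, scale=0.5,
                    c_prev=1.0, c_fail=10.0, n_cycles=2000, seed=0):
    """Monte Carlo of a replace-on-threshold policy: inspect every unit of
    time, add a gamma-distributed degradation increment, replace preventively
    at `threshold`, correctively (at higher cost) at `failure_level`.
    Returns (long-run cost rate, per-cycle failure probability)."""
    rng = np.random.default_rng(seed)
    total_cost, total_time, failures = 0.0, 0, 0
    for _ in range(n_cycles):
        x, t = 0.0, 0
        while True:
            x += rng.gamma(shape, scale)   # stationary, independent increments
            t += 1
            if x >= failure_level:         # corrective replacement
                total_cost += c_fail
                failures += 1
                break
            if x >= threshold:             # preventive replacement
                total_cost += c_prev
                break
        total_time += t
    return total_cost / total_time, failures / n_cycles

def best_threshold(candidates, p_max=0.05):
    """Minimize the cost rate subject to the failure-probability bound."""
    feasible = []
    for th in candidates:
        cost_rate, p_fail = simulate_policy(th)
        if p_fail <= p_max:
            feasible.append((cost_rate, th))
    return min(feasible)[1]
```

Raising the threshold lengthens cycles (lower cost rate) but lets more jumps overshoot into failure, which is exactly the trade-off the safety constraint caps.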
Li, Hao; Ma, Yong; Liang, Kun; Tian, Yong; Wang, Rui
2012-01-01
Wavelet parameters (e.g., wavelet type, level of decomposition) affect the performance of the wavelet denoising algorithm in hyperspectral applications. Current studies select the best wavelet parameters for a single spectral curve by comparing similarity criteria such as the spectral angle (SA). However, a method to find the best parameters for a spectral library that contains multiple spectra has not been studied. In this paper, a criterion named the normalized spectral angle (NSA) is proposed. By comparing NSA values, the best combination of parameters for a spectral library can be selected. Moreover, a fast algorithm based on threshold constraints and machine learning is developed to reduce the time of a full search. After several iterations of learning, the combination of parameters that consistently surpasses a threshold is selected. The experiments showed that by using the NSA criterion, the SA values decreased significantly, and the fast algorithm reduced time consumption by 80% while the denoising performance was not noticeably impaired.
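A minimal sketch of spectral-angle-based parameter selection over a library. The exact NSA definition is the paper's; here we assume a simple aggregate form (mean per-spectrum SA scaled into [0, 1] by π), and substitute a moving-average smoother with one window parameter for the wavelet type/level search, so the example stays self-contained.

```python
import numpy as np

def spectral_angle(a, b):
    """SA between two spectra, in radians; 0 means identical shape."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def normalized_spectral_angle(lib_a, lib_b):
    """Assumed aggregate criterion: mean per-spectrum SA, scaled by pi."""
    return float(np.mean([spectral_angle(a, b)
                          for a, b in zip(lib_a, lib_b)]) / np.pi)

def moving_average(x, w):
    """Stand-in 'denoiser' with one tunable parameter (odd window w)."""
    xp = np.pad(x, w // 2, mode="edge")
    return np.convolve(xp, np.ones(w) / w, mode="valid")

def select_best_parameter(clean_lib, noisy_lib, windows):
    """Full search: the parameter whose denoised library minimizes NSA."""
    scores = {w: normalized_spectral_angle(
                  clean_lib, [moving_average(s, w) for s in noisy_lib])
              for w in windows}
    return min(scores, key=scores.get)
```

The full search above is what the paper's threshold-and-learning algorithm accelerates: instead of scoring every parameter combination on every spectrum, it discards combinations that fail a threshold early.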
Ottaviani, Gianluigi; Tsakalos, James L; Keppel, Gunnar; Mucina, Ladislav
2018-01-01
Complex processes related to biotic and abiotic forces can impose limitations to assembly and composition of plant communities. Quantifying the effects of these constraints on plant functional traits across environmental gradients, and among communities, remains challenging. We define ecological constraint (C_i) as the combined, limiting effect of biotic interactions and environmental filtering on trait expression (i.e., the mean value and range of functional traits). Here, we propose a set of novel parameters to quantify this constraint by extending the trait-gradient analysis (TGA) methodology. The key parameter is ecological constraint, which is dimensionless and can be measured at various scales, for example, on population and community levels. It facilitates comparing the effects of ecological constraints on trait expressions across environmental gradients, as well as within and among communities. We illustrate the implementation of the proposed parameters using the bark thickness of 14 woody species along an aridity gradient on granite outcrops in southwestern Australia. We found a positive correlation between increasing environmental stress and strength of ecological constraint on bark thickness expression. Also, plants from more stressful habitats (shrublands on shallow soils and in sun-exposed locations) displayed higher ecological constraint for bark thickness than plants in more benign habitats (woodlands on deep soils and in sheltered locations). The relative ease of calculation and dimensionless nature of C_i allow it to be readily implemented at various scales and make it widely applicable. It therefore has the potential to advance the mechanistic understanding of the ecological processes shaping trait expression. Some future applications of the new parameters could be investigating the patterns of ecological constraints (1) among communities from different regions, (2) on different traits across similar environmental gradients, and (3) for the same
Estimating kinetic mechanisms with prior knowledge I: Linear parameter constraints.
Salari, Autoosa; Navarro, Marco A; Milescu, Mirela; Milescu, Lorin S
2018-02-05
To understand how ion channels and other proteins function at the molecular and cellular levels, one must decrypt their kinetic mechanisms. Sophisticated algorithms have been developed that can be used to extract kinetic parameters from a variety of experimental data types. However, formulating models that not only explain new data, but are also consistent with existing knowledge, remains a challenge. Here, we present a two-part study describing a mathematical and computational formalism that can be used to enforce prior knowledge into the model using constraints. In this first part, we focus on constraints that enforce explicit linear relationships involving rate constants or other model parameters. We develop a simple, linear algebra-based transformation that can be applied to enforce many types of model properties and assumptions, such as microscopic reversibility, allosteric gating, and equality and inequality parameter relationships. This transformation converts the set of linearly interdependent model parameters into a reduced set of independent parameters, which can be passed to an automated search engine for model optimization. In the companion article, we introduce a complementary method that can be used to enforce arbitrary parameter relationships and any constraints that quantify the behavior of the model under certain conditions. The procedures described in this study can, in principle, be coupled to any of the existing methods for solving molecular kinetics for ion channels or other proteins. These concepts can be used not only to enforce existing knowledge but also to formulate and test new hypotheses. © 2018 Salari et al.
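The reduction the authors describe, mapping linearly interdependent rate parameters to a smaller set of independent ones, is in essence a particular solution plus a null-space basis. A generic numpy sketch of that linear-algebra step (the two example constraints in the usage are hypothetical, not from the paper):

```python
import numpy as np

def reduce_parameters(A, b):
    """Given linear constraints A @ k = b on the full parameter vector k,
    return (k_p, N) such that every k = k_p + N @ u satisfies them:
    k_p is a particular solution, the columns of N span the null space of A."""
    k_p, *_ = np.linalg.lstsq(A, b, rcond=None)
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-10 * s[0]))
    N = vt[rank:].T                # orthonormal null-space basis
    return k_p, N

def expand(k_p, N, u):
    """Map the reduced free parameters u back to the full vector."""
    return k_p + N @ u
```

The search engine then optimizes only over u: any point it proposes automatically satisfies the equality constraints, which is what makes properties like fixed allosteric ratios enforceable without penalty terms.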
Constraints on cosmological parameters in power-law cosmology
International Nuclear Information System (INIS)
Rani, Sarita; Singh, J.K.; Altaibayeva, A.; Myrzakulov, R.; Shahalam, M.
2015-01-01
In this paper, we examine observational constraints on power-law cosmology, essentially dependent on two parameters: H_0 (Hubble constant) and q (deceleration parameter). We investigate the constraints on these parameters using the latest 28 points of H(z) data and 580 points of Union2.1 compilation data, and compare the results with those of ΛCDM. We also forecast constraints using a simulated data set for the future JDEM supernovae survey. Our study gives better insight into power-law cosmology than the earlier analysis by Kumar [arXiv:1109.6924], indicating that it fits well with the Union2.1 compilation data but not with the H(z) data. However, the average constraints obtained on H_0 and q using the simulated data set for the future JDEM supernovae survey are found to be inconsistent with the values obtained from the H(z) and Union2.1 compilation data. We also perform the statefinder analysis and find that the power-law cosmological models approach the standard ΛCDM model as q → -1. Finally, we observe that although power-law cosmology explains several prominent features of the evolution of the Universe, it fails in the details.
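In power-law cosmology, a(t) ∝ t^α gives a constant deceleration parameter q = 1/α - 1, so the Hubble rate reduces to H(z) = H_0 (1+z)^(1+q). A brute-force grid scan over (H_0, q), as a crude stand-in for the paper's likelihood analysis (the grids and mock H(z) points are hypothetical):

```python
import numpy as np

def hubble_power_law(z, h0, q):
    """H(z) in power-law cosmology a(t) ~ t^alpha, where q = 1/alpha - 1 is
    constant, so that H(z) = H0 * (1 + z)**(1 + q)."""
    return h0 * (1.0 + z) ** (1.0 + q)

def chi2_h(z, h_obs, sigma, h0, q):
    """Chi^2 of a (H0, q) pair against Hubble-parameter measurements."""
    r = (h_obs - hubble_power_law(z, h0, q)) / sigma
    return float(np.sum(r * r))

def grid_fit(z, h_obs, sigma, h0_grid, q_grid):
    """Brute-force scan of the (H0, q) plane; returns the best-fit pair."""
    best = min((chi2_h(z, h_obs, sigma, h0, q), h0, q)
               for h0 in h0_grid for q in q_grid)
    return best[1], best[2]
```

The same scaffolding applied to SNIa distance moduli instead of H(z) is what allows the two data sets to pull the (H_0, q) constraints in different directions, which is the tension the abstract reports.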
Constraint processing in our extensible language for cooperative imaging system
Aoki, Minoru; Murao, Yo; Enomoto, Hajime
1996-02-01
The extensible WELL (Window-based Elaboration Language) has been developed using the concept of a common platform, where client and server can communicate with each other with support from a communication manager. This extensible language is based on an object-oriented design that introduces constraint processing. Every service in the extensible language, including imaging, is controlled by constraints. Interactive functions between client and server are extended by introducing agent functions, including a request-respond relation. Necessary service integrations are satisfied by cooperative processes using constraints. Constraints are treated similarly to data, because the system should have flexibility in the execution of many kinds of services. The corresponding control process is defined using intensional logic. There are two kinds of constraints: temporal and modal constraints. In rendering the constraints, the predicate format, as the relation between attribute values, can serve as a warrant for entities' validity as data. As an imaging example, a processing procedure of interaction between multiple objects is shown as an image application for the extensible system. This paper describes how the procedure proceeds in the system and how the constraints work for generating moving pictures.
γ parameter and Solar System constraint in chameleon-Brans-Dicke theory
International Nuclear Information System (INIS)
Saaidi, Kh.; Mohammadi, A.; Sheikhahmadi, H.
2011-01-01
The post-Newtonian parameter is considered in the chameleon-Brans-Dicke model. In the first step, the general form of this parameter and also the effective gravitational constant is obtained. An arbitrary function f(Φ), which indicates the coupling between matter and the scalar field, is introduced to investigate the validity of the Solar System constraint. It is shown that the chameleon-Brans-Dicke model can satisfy the Solar System constraint and gives an ω parameter of order 10^4, which is comparable to the constraint indicated in [19].
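For orientation, in standard Brans-Dicke gravity (without the chameleon coupling of this paper) the PPN parameter is γ = (1+ω)/(2+ω), so 1-γ = 1/(2+ω), and a Cassini-type bound |γ-1| ≲ 2.3×10⁻⁵ forces ω ≳ 4×10⁴, the order quoted above. A two-line check; the bound value is the standard Cassini figure, not a number from this paper:

```python
def gamma_brans_dicke(omega):
    """PPN gamma for standard Brans-Dicke gravity: (1 + omega)/(2 + omega)."""
    return (1.0 + omega) / (2.0 + omega)

def min_omega(gamma_bound=2.3e-5):
    """Smallest omega compatible with |gamma - 1| <= gamma_bound,
    using 1 - gamma = 1/(2 + omega)."""
    return 1.0 / gamma_bound - 2.0
```

The chameleon mechanism modifies this relation through the matter coupling f(Φ), which is how the model of the paper satisfies the same observational bound.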
Constraints on a generalized deceleration parameter from cosmic chronometers
Mamon, Abdulla Al
2018-04-01
In this paper, we have proposed a generalized parametrization for the deceleration parameter q in order to study the evolutionary history of the universe. We have shown that the proposed model can reproduce three well known q-parametrized models for some specific values of the model parameter α. We have used the latest compilation of the Hubble parameter measurements obtained from the cosmic chronometer (CC) method (in combination with the local value of the Hubble constant H0) and the Type Ia supernova (SNIa) data to place constraints on the parameters of the model for different values of α. We have found that the resulting constraints on the deceleration parameter and the dark energy equation of state support the ΛCDM model within 1σ confidence level at the present epoch.
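Any parametrization q(z) determines the expansion history through H(z) = H_0 exp(∫₀^z [1+q(z')]/(1+z') dz'), which is the relation used to confront such models with cosmic-chronometer H(z) data. A numerical sketch (the grid and fiducial values are hypothetical, and a constant q stands in for the paper's generalized form):

```python
import numpy as np

def hubble_from_q(z_grid, q_of_z, h0=70.0):
    """H(z) = H0 * exp( integral_0^z (1 + q(z')) / (1 + z') dz' ),
    evaluated on a grid by cumulative trapezoidal integration."""
    f = (1.0 + q_of_z(z_grid)) / (1.0 + z_grid)
    steps = 0.5 * (f[1:] + f[:-1]) * np.diff(z_grid)
    integral = np.concatenate(([0.0], np.cumsum(steps)))
    return h0 * np.exp(integral)
```

For constant q this reduces to H(z) = H_0 (1+z)^(1+q), which gives a closed-form check of the quadrature; for the paper's α-dependent q(z) the same routine applies unchanged.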
CHANDRA CLUSTER COSMOLOGY PROJECT III: COSMOLOGICAL PARAMETER CONSTRAINTS
International Nuclear Information System (INIS)
Vikhlinin, A.; Forman, W. R.; Jones, C.; Murray, S. S.; Kravtsov, A. V.; Burenin, R. A.; Voevodkin, A.; Ebeling, H.; Hornstrup, A.; Nagai, D.; Quintana, H.
2009-01-01
Chandra observations of large samples of galaxy clusters detected in X-rays by ROSAT provide a new, robust determination of the cluster mass functions at low and high redshifts. Statistical and systematic errors are now sufficiently small, and the redshift leverage sufficiently large, for the mass function evolution to be used as a useful growth-of-structure-based dark energy probe. In this paper, we present cosmological parameter constraints obtained from Chandra observations of 37 clusters with ⟨z⟩ = 0.55 derived from the 400 deg² ROSAT serendipitous survey and the 49 brightest z ∼ 0.05 clusters detected in the All-Sky Survey. Evolution of the mass function between these redshifts requires Ω_Λ > 0 with a ∼5σ significance, and constrains the dark energy equation-of-state parameter to w_0 = -1.14 ± 0.21, assuming a constant w and a flat universe. Cluster information also significantly improves constraints when combined with other methods. Fitting our cluster data jointly with the latest supernovae, Wilkinson Microwave Anisotropy Probe, and baryonic acoustic oscillation measurements, we obtain w_0 = -0.991 ± 0.045 (stat) ± 0.039 (sys), a factor of 1.5 reduction in statistical uncertainties, and nearly a factor of 2 improvement in systematics compared with constraints that can be obtained without clusters. The joint analysis of these four data sets puts a conservative upper limit on the masses of light neutrinos, Σm_ν; we also obtain measurements of Ω_M h and σ_8 from the low-redshift cluster mass function.
Multiparameter elastic full waveform inversion with facies-based constraints
Zhang, Zhen-dong; Alkhalifah, Tariq; Naeini, Ehsan Zabihi; Sun, Bingbing
2018-06-01
Full waveform inversion (FWI) incorporates all the data characteristics to estimate the parameters described by the assumed physics of the subsurface. However, current efforts to utilize FWI beyond improved acoustic imaging, as in reservoir delineation, face inherent challenges related to the limited resolution and the potential trade-off between the elastic model parameters. Some anisotropic parameters are insufficiently updated because of their minor contributions to the surface collected data. Adding rock physics constraints to the inversion helps mitigate such limited sensitivity, but current approaches to add such constraints are based on including them as a priori knowledge mostly valid around the well or as a global constraint for the whole area. Since similar rock formations inside the Earth admit consistent elastic properties and relative values of elasticity and anisotropy parameters (this enables us to define them as a seismic facies), utilizing such localized facies information in FWI can improve the resolution of inverted parameters. We propose a novel approach to use facies-based constraints in both isotropic and anisotropic elastic FWI. We invert for such facies using Bayesian theory and update them at each iteration of the inversion using both the inverted models and a priori information. We take the uncertainties of the estimated parameters (approximated by radiation patterns) into consideration and improve the quality of estimated facies maps. Four numerical examples corresponding to different acquisition, physical assumptions and model circumstances are used to verify the effectiveness of the proposed method.
Multiparameter Elastic Full Waveform Inversion with Facies-based Constraints
Zhang, Zhen-dong; Alkhalifah, Tariq; Naeini, Ehsan Zabihi; Sun, Bingbing
2018-03-01
Full waveform inversion (FWI) incorporates all the data characteristics to estimate the parameters described by the assumed physics of the subsurface. However, current efforts to utilize full waveform inversion beyond improved acoustic imaging, as in reservoir delineation, face inherent challenges related to the limited resolution and the potential trade-off between the elastic model parameters. Some anisotropic parameters are insufficiently updated because of their minor contributions to the surface collected data. Adding rock physics constraints to the inversion helps mitigate such limited sensitivity, but current approaches to add such constraints are based on including them as a priori knowledge mostly valid around the well or as a global constraint for the whole area. Since similar rock formations inside the Earth admit consistent elastic properties and relative values of elasticity and anisotropy parameters (this enables us to define them as a seismic facies), utilizing such localized facies information in FWI can improve the resolution of inverted parameters. We propose a novel approach to use facies-based constraints in both isotropic and anisotropic elastic FWI. We invert for such facies using Bayesian theory and update them at each iteration of the inversion using both the inverted models and a priori information. We take the uncertainties of the estimated parameters (approximated by radiation patterns) into consideration and improve the quality of estimated facies maps. Four numerical examples corresponding to different acquisition, physical assumptions and model circumstances are used to verify the effectiveness of the proposed method.
Multiparameter Elastic Full Waveform Inversion with Facies-based Constraints
Zhang, Zhendong
2018-03-20
Full waveform inversion (FWI) incorporates all the data characteristics to estimate the parameters described by the assumed physics of the subsurface. However, current efforts to utilize full waveform inversion beyond improved acoustic imaging, as in reservoir delineation, face inherent challenges related to the limited resolution and the potential trade-off between the elastic model parameters. Some anisotropic parameters are insufficiently updated because of their minor contributions to the surface collected data. Adding rock physics constraints to the inversion helps mitigate such limited sensitivity, but current approaches to add such constraints are based on including them as a priori knowledge mostly valid around the well or as a global constraint for the whole area. Since similar rock formations inside the Earth admit consistent elastic properties and relative values of elasticity and anisotropy parameters (this enables us to define them as a seismic facies), utilizing such localized facies information in FWI can improve the resolution of inverted parameters. We propose a novel approach to use facies-based constraints in both isotropic and anisotropic elastic FWI. We invert for such facies using Bayesian theory and update them at each iteration of the inversion using both the inverted models and a priori information. We take the uncertainties of the estimated parameters (approximated by radiation patterns) into consideration and improve the quality of estimated facies maps. Four numerical examples corresponding to different acquisition, physical assumptions and model circumstances are used to verify the effectiveness of the proposed method.
Cosmological constraints with clustering-based redshifts
Kovetz, Ely D.; Raccanelli, Alvise; Rahman, Mubdi
2017-07-01
We demonstrate that observations lacking reliable redshift information, such as photometric and radio continuum surveys, can produce robust measurements of cosmological parameters when empowered by clustering-based redshift estimation. This method infers the redshift distribution based on the spatial clustering of sources, using cross-correlation with a reference data set with known redshifts. Applying this method to the existing Sloan Digital Sky Survey (SDSS) photometric galaxies, and projecting to future radio continuum surveys, we show that sources can be efficiently divided into several redshift bins, increasing their ability to constrain cosmological parameters. We forecast constraints on the dark-energy equation of state and on local non-Gaussianity parameters. We explore several pertinent issues, including the trade-off between including more sources and minimizing the overlap between bins, the shot-noise limitations on binning and the predicted performance of the method at high redshifts, and most importantly pay special attention to possible degeneracies with the galaxy bias. Remarkably, we find that once this technique is implemented, constraints on dynamical dark energy from the SDSS imaging catalogue can be competitive with, or better than, those from the spectroscopic BOSS survey and even future planned experiments. Further, constraints on primordial non-Gaussianity from future large-sky radio-continuum surveys can outperform those from the Planck cosmic microwave background experiment and rival those from future spectroscopic galaxy surveys. The application of this method thus holds tremendous promise for cosmology.
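The core of clustering-based redshift estimation is inverting cross-correlation amplitudes with a reference sample into a redshift distribution. A toy version, assuming a constant bias for the unknown sample and w_rr(z) ∝ b_r(z)² w_DM(z), a simplified Menard-style scaling rather than the exact estimator used in the paper:

```python
import numpy as np

def estimate_dndz(w_ur, w_rr, dz):
    """Clustering-redshift estimator: dN/dz proportional to
    w_ur(z) / sqrt(w_rr(z)), which cancels the reference bias b_r(z)
    under the stated assumptions. Normalized to unit integral."""
    dndz = np.clip(w_ur / np.sqrt(w_rr), 0.0, None)
    return dndz / np.sum(dndz * dz)
```

In practice w_ur and w_rr are measured in narrow reference-redshift slices, and the residual degeneracy with the unknown sample's own bias evolution is precisely the issue the abstract flags.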
Constraint-based Attribute and Interval Planning
Jonsson, Ari; Frank, Jeremy
2013-01-01
In this paper we describe Constraint-based Attribute and Interval Planning (CAIP), a paradigm for representing and reasoning about plans. The paradigm enables the description of planning domains with time, resources, concurrent activities, mutual exclusions among sets of activities, disjunctive preconditions and conditional effects. We provide a theoretical foundation for the paradigm, based on temporal intervals and attributes. We then show how the plans are naturally expressed by networks of constraints, and show that the process of planning maps directly to dynamic constraint reasoning. In addition, we define compatibilities, a compact mechanism for describing planning domains. We describe how this framework can incorporate the use of constraint reasoning technology to improve planning. Finally, we describe EUROPA, an implementation of the CAIP framework.
Wang, Yujie; Pan, Rui; Liu, Chang; Chen, Zonghai; Ling, Qiang
2018-01-01
The battery power capability is intimately correlated with the climbing, braking and accelerating performance of electric vehicles. Accurate power capability prediction can not only guarantee safety but also regulate driving behavior and optimize battery energy usage. However, the nonlinearity of the battery model is very complex, especially for lithium iron phosphate batteries. Moreover, the hysteresis loop in the open-circuit voltage curve can easily cause large errors in model prediction. In this work, a multi-parameter-constraint dynamic estimation method is proposed to predict the battery's continuous-period power capability. A high-fidelity battery model which considers the battery polarization and hysteresis phenomena is presented to approximate the high nonlinearity of the lithium iron phosphate battery. Explicit analyses of power capability with multiple constraints are elaborated; specifically, the state-of-energy is considered in the power capability assessment. Furthermore, to solve the problem of nonlinear system state estimation and suppress noise interference, an unscented Kalman filter (UKF) based state observer is employed for power capability prediction. The performance of the proposed methodology is demonstrated by experiments under different dynamic characterization schedules. The charge and discharge power capabilities of the lithium iron phosphate batteries are quantitatively assessed under different time scales and temperatures.
Zweben, Monte
1993-01-01
The GERRY scheduling system, developed by NASA Ames with assistance from the Lockheed Space Operations Company and the Lockheed Artificial Intelligence Center, uses a method called constraint-based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle ground processing problem, which entails the scheduling of all the inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocation for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its application to the Space Shuttle problem. We also speculate as to how the system could be applied to manufacturing, transportation, and military problems.
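Constraint-based iterative repair is easy to illustrate on a toy version of the ground processing problem: encode the rules as checkable constraints, then repeatedly pick a violated constraint and move one of the offending tasks. The sketch below is a generic min-conflicts-style repair loop, not GERRY's actual code; the task names, durations and single-crew no-overlap rule are invented for illustration:

```python
import random

random.seed(1)

durations = {"inspect": 3, "repair": 2, "test": 2, "fuel": 1}
horizon = 10  # allowed discrete start times: 0..horizon

def violations(schedule):
    """Violated constraints: pairs of tasks whose intervals overlap (one crew)."""
    names = list(schedule)
    bad = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            sa, sb = schedule[a], schedule[b]
            if sa < sb + durations[b] and sb < sa + durations[a]:
                bad.append((a, b))
    return bad

def iterative_repair(schedule, max_steps=2000):
    """Pick a violated constraint and repair it by moving one involved task."""
    for _ in range(max_steps):
        bad = violations(schedule)
        if not bad:
            return schedule
        a, b = random.choice(bad)          # a violated constraint
        task = random.choice((a, b))       # one task to move
        # Greedy repair: the start time that leaves the fewest violations.
        schedule[task] = min(range(horizon + 1),
                             key=lambda s: len(violations({**schedule, task: s})))
    return schedule

sched = iterative_repair({t: 0 for t in durations})
print(sched, violations(sched))
```

Hard rules would be kept as mandatory constraints, while GERRY-style preference criteria would enter as weighted terms in the score being minimized.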
Tchamna, Rodrigue; Lee, Moonyong
2018-01-01
This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set exclusive of complex optimization steps are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Energy optimization of bread baking process undergoing quality constraints
International Nuclear Information System (INIS)
Papasidero, Davide; Pierucci, Sauro; Manenti, Flavio
2016-01-01
International home energy rating regulations are pushing toward efficient cooking equipment and processes for energy saving and sustainability. For this reason, gas ovens are being replaced by electric ones to attain the highest energy rating. Consequently, the study of technologies related to energy efficiency in cooking is developing rapidly. Indeed, large industries have been working on the energy optimization of their processes for decades, while there is still considerable room for energy optimization in single household appliances. Achieving higher efficiency can have a large impact on society only if the use of modern equipment becomes widespread. The combination of several energy sources (e.g. forced convection, irradiation, microwave, etc.) and their optimization is an emerging target for oven manufacturers towards optimal oven design. In this work, an energy consumption analysis and optimization is applied to the case of bread baking. Each source of energy is given its due importance and the process conditions are compared. A basic quality standard is guaranteed by taking into account quality markers that are relevant from a consumer viewpoint. - Highlights: • Energy optimization is based on a validated finite-element model for bread baking. • Quality parameters for product acceptability are introduced as constraints. • Dynamic optimization leads to 20% energy saving compared to the non-optimized case. • The approach is applicable to many products, quality parameters and thermal processes. • Other heating processes can be easily integrated in the presented model.
Directory of Open Access Journals (Sweden)
Arnaud Gotlieb
2013-02-01
Iterative imperative programs can be considered as infinite-state systems computing over possibly unbounded domains. Studying reachability in these systems is challenging, as it requires dealing with an infinite number of states using standard backward or forward exploration strategies. An approach that we call constraint-based reachability is proposed to address reachability problems by exploring program states using a constraint model of the whole program. The key point of the approach is to interpret imperative constructions such as conditionals, loops, array and memory manipulations with the fundamental notion of a constraint over a computational domain. By combining constraint filtering and abstraction techniques, constraint-based reachability is able to solve reachability problems which are usually outside the scope of backward or forward exploration strategies. This paper proposes an interpretation of classical filtering consistencies used in constraint programming as abstract domain computations, and shows how this approach can be used to produce a constraint solver that efficiently generates solutions for reachability problems that are unsolvable by other approaches.
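The core move, interpreting a program construct as a constraint and filtering variable domains, can be illustrated with a toy interval domain. The sketch below is an illustration of the general idea only, not the authors' solver: it forward-propagates an assignment and then filters backward through a branch condition to characterize the inputs that can reach the branch:

```python
# Interval abstract domain with simple forward/backward constraint filtering,
# in the spirit of constraint-based reachability (toy sketch).

def meet(a, b):
    """Intersection of two intervals; None means empty (unreachable)."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

# Program: assume x in [0, 100]; y = 2*x + 1; target: reach branch "y <= 9".
x = (0, 100)
y = (2 * x[0] + 1, 2 * x[1] + 1)       # forward propagation of y = 2x + 1

y = meet(y, (float("-inf"), 9))        # filter with the branch condition
assert y is not None                   # the branch is reachable
# Backward filtering: from y = 2x + 1 and the refined y, refine x.
x = meet(x, ((y[0] - 1) / 2, (y[1] - 1) / 2))
print(x)  # → (0, 4.0): only these inputs can reach the target
```

A real solver would iterate such filtering steps to a fixpoint over all program constraints and combine them with abstraction to handle loops.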
Constraints on the parameters of the CKM matrix by End 1998
Parodi, F; Stocchi, A
1999-01-01
A review of the current status of the Cabibbo-Kobayashi-Maskawa (CKM) matrix is presented. This paper is an update of the results published in [1]. The experimental constraints imposed by the measurements of \epsilon_K, V_{ub}/V_{cb} and \Delta m_d, and by the limit on \Delta m_s, are used. Values of the constraints and of the parameters entering into the constraints, which restrict the range of the \bar{\rho} and \bar{\eta} parameters, include recent measurements presented at the 1998 Summer Conferences and progress obtained by lattice QCD collaborations. The results are: \bar{\rho} = 0.202^{+0.053}_{-0.059}, \bar{\eta} = 0.340 \pm 0.035, from which the angles are derived: \alpha (^{+0.29}_{-0.28}), \sin 2\beta = 0.725^{+0.050}_{-0.060}, \gamma = (59.5^{+8.5}_{-7.5})^{\circ}. Without using the constraint from \epsilon_K, external measurements or theoretical inputs have been removed, in turn, from the constraints and their respective probability density functions have been obtained. Central values and uncertainties on these quantit...
GA based CNC turning center exploitation process parameters optimization
Directory of Open Access Journals (Sweden)
Z. Car
2009-01-01
This paper presents machining parameter (turning process) optimization based on the use of artificial intelligence. To obtain greater efficiency and productivity of the machine tool, optimal cutting parameters have to be obtained. In order to find optimal cutting parameters, the genetic algorithm (GA) has been used as an optimal solution finder. The optimization has to yield minimum machining time and minimum production cost, while considering technological and material constraints.
Joint cosmic microwave background and weak lensing analysis: constraints on cosmological parameters.
Contaldi, Carlo R; Hoekstra, Henk; Lewis, Antony
2003-06-06
We use cosmic microwave background (CMB) observations together with the red-sequence cluster survey weak lensing results to derive constraints on a range of cosmological parameters. This particular choice of observations is motivated by their robust physical interpretation and complementarity. Our combined analysis, including a weak nucleosynthesis constraint, yields accurate determinations of a number of parameters, including the amplitude of fluctuations σ_8 = 0.89 ± 0.05 and matter density Ω_m = 0.30 ± 0.03. We also find a value for the Hubble parameter of H_0 = 70 ± 3 km s⁻¹ Mpc⁻¹, in good agreement with the Hubble Space Telescope key-project result. We conclude that the combination of CMB and weak lensing data provides some of the most powerful constraints available in cosmology today.
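The benefit of combining complementary data sets can be illustrated with the simplest possible case: two independent Gaussian constraints on one parameter combine by inverse-variance weighting, always yielding a smaller uncertainty than either alone. The numbers below are hypothetical, chosen only to show the mechanics, not the paper's likelihoods:

```python
def combine(mu1, sig1, mu2, sig2):
    """Inverse-variance (Fisher) combination of two independent Gaussian
    constraints on the same parameter."""
    w1, w2 = 1.0 / sig1**2, 1.0 / sig2**2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    sig = (w1 + w2) ** -0.5
    return mu, sig

# Hypothetical constraints on one amplitude parameter from two probes.
mu, sig = combine(0.88, 0.08, 0.90, 0.07)
print(round(mu, 3), round(sig, 3))
```

The actual analysis in the paper is a joint likelihood over many correlated parameters, but the same principle drives the gain from complementarity.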
Constraint-based scheduling applying constraint programming to scheduling problems
Baptiste, Philippe; Nuijten, Wim
2001-01-01
Constraint Programming is a problem-solving paradigm that establishes a clear distinction between two pivotal aspects of a problem: (1) a precise definition of the constraints that define the problem to be solved and (2) the algorithms and heuristics enabling the selection of decisions to solve the problem. It is because of these capabilities that Constraint Programming is increasingly being employed as a problem-solving tool to solve scheduling problems. Hence the development of Constraint-Based Scheduling as a field of study. The aim of this book is to provide an overview of the most widely used Constraint-Based Scheduling techniques. Following the principles of Constraint Programming, the book consists of three distinct parts: The first chapter introduces the basic principles of Constraint Programming and provides a model of the constraints that are the most often encountered in scheduling problems. Chapters 2, 3, 4, and 5 are focused on the propagation of resource constraints, which usually are responsibl...
Cosmological Constraints on Mirror Matter Parameters
International Nuclear Information System (INIS)
Wallemacq, Quentin; Ciarcelluti, Paolo
2014-01-01
Up-to-date estimates of the cosmological parameters are presented as a result of numerical simulations of the cosmic microwave background and large scale structure, considering a flat Universe in which the dark matter is made entirely or partly of mirror matter and the primordial perturbations are scalar, adiabatic and in the linear regime. A statistical analysis using the Markov Chain Monte Carlo method allows us to obtain constraints on the cosmological parameters. As a result, we show that a Universe with pure mirror dark matter is statistically equivalent to the case of an admixture with cold dark matter. The upper limits for the ratio of the temperatures of the ordinary and mirror sectors are around 0.3 for both cosmological models, which show the presence of a dominant fraction of mirror matter, 0.06 ≲ Ω_mirror h² ≲ 0.12.
Optimization of Nano-Process Deposition Parameters Based on Gravitational Search Algorithm
Directory of Open Access Journals (Sweden)
Norlina Mohd Sabri
2016-06-01
This research focuses on the radio frequency (RF) magnetron sputtering process, a physical vapor deposition technique which is widely used in thin film production. This process requires an optimized combination of deposition parameters in order to obtain the desirable thin film. The conventional method of optimizing the deposition parameters has been reported to be costly and time consuming due to its trial-and-error nature. Thus, the gravitational search algorithm (GSA) technique has been proposed to solve this nano-process parameter optimization problem. In this research, the optimized parameter combination was expected to produce the desirable electrical and optical properties of the thin film. The performance of GSA in this research was compared with that of Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), Artificial Immune System (AIS) and Ant Colony Optimization (ACO). Based on the overall results, the GSA-optimized parameter combination generated the best electrical properties and acceptable optical properties of the thin film compared to the others. This computational experiment is expected to overcome the problem of having to conduct repetitive laboratory experiments to obtain the most optimized parameter combination. Based on this initial experiment, the adaptation of GSA to this problem could offer a more efficient and productive way of depositing quality thin film in the fabrication process.
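The gravitational search algorithm treats candidate parameter combinations as masses that attract each other: better-fit agents receive larger masses and therefore pull the population toward promising regions, while the gravitational "constant" decays over the iterations to shift from exploration to exploitation. A minimal sketch of this scheme follows; the test function and all hyperparameter values are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def gsa(f, bounds, n_agents=20, iters=100, g0=100.0, alpha=20.0, seed=0):
    """Gravitational search algorithm, minimal sketch: better-fit agents get
    larger masses and attract the others; gravity decays over time."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_agents, len(lo)))
    v = np.zeros_like(x)
    best_x, best_f = None, np.inf
    for t in range(iters):
        fit = np.array([f(xi) for xi in x])
        if fit.min() < best_f:
            best_f, best_x = float(fit.min()), x[fit.argmin()].copy()
        # Masses from normalized fitness (smaller objective -> larger mass).
        m = (fit.max() - fit) / (fit.max() - fit.min() + 1e-12)
        mass = m / (m.sum() + 1e-12)
        g = g0 * np.exp(-alpha * t / iters)   # decaying gravitational constant
        acc = np.zeros_like(x)
        for i in range(n_agents):
            diff = x - x[i]
            r = np.linalg.norm(diff, axis=1) + 1e-12
            acc[i] = (g * rng.random(n_agents) * mass / r) @ diff
        v = rng.random(x.shape) * v + acc
        x = np.clip(x + v, lo, hi)
    return best_x, best_f

# Toy objective: minimize the sphere function on [-5, 5]^2.
best_x, best_f = gsa(lambda p: float((p ** 2).sum()),
                     (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
print(best_x, best_f)
```

In the deposition setting each agent would encode a parameter combination (power, pressure, deposition time, etc.) and the objective would score the predicted film properties.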
High-Precision Registration of Point Clouds Based on Sphere Feature Constraints
Directory of Open Access Journals (Sweden)
Junhui Huang
2016-12-01
Point cloud registration is a key process in multi-view 3D measurements. Its precision affects the measurement precision directly. However, in the case of point clouds with non-overlapping areas or curvature-invariant surfaces, it is difficult to achieve a high precision. In this paper, a high-precision registration method based on sphere feature constraints is presented to overcome this difficulty. Some known sphere features with constraints are used to construct virtual overlapping areas. The virtual overlapping areas provide more accurate corresponding point pairs and reduce the influence of noise. Then the transformation parameters between the registered point clouds are solved by an optimization method with a weight function. In this way, the impact of large noise in the point clouds can be reduced and a high-precision registration is achieved. Simulation and experiments validate the proposed method.
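Once matching sphere centers have been extracted from two scans, the rigid transformation between the point clouds can be solved in closed form; a common choice is the weighted Kabsch/SVD solution, where the weights can encode per-sphere confidence. The sketch below is one plausible realization of such a solver, not the paper's exact optimization method; it recovers a known rotation and translation from four synthetic sphere centers:

```python
import numpy as np

def rigid_transform(src, dst, w=None):
    """Least-squares rigid transform (Kabsch/SVD) mapping matched sphere
    centers src -> dst; w are optional per-sphere confidence weights."""
    w = np.ones(len(src)) if w is None else np.asarray(w, float)
    w = w / w.sum()
    cs, cd = w @ src, w @ dst                    # weighted centroids
    h = (src - cs).T @ ((dst - cd) * w[:, None])  # weighted cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cd - r @ cs

# Synthetic check: four sphere centers, a known rotation and translation.
rng = np.random.default_rng(3)
src = rng.uniform(-1, 1, (4, 3))
theta = 0.4
r_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.2, -0.1, 0.5])
dst = src @ r_true.T + t_true

r_est, t_est = rigid_transform(src, dst)
```

With noisy center estimates, down-weighting less reliable spheres in `w` plays the role of the weight function mentioned in the abstract.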
Simic, Vladimir
2016-06-01
As the number of end-of-life vehicles (ELVs) is estimated to increase to 79.3 million units per year by 2020 (e.g., 40 million units were generated in 2010), there is strong motivation to effectively manage this fast-growing waste flow. Intensive work on the management of ELVs is necessary in order to more successfully tackle this important environmental challenge. This paper proposes an interval-parameter chance-constraint programming model for end-of-life vehicle management under rigorous environmental regulations. The proposed model can incorporate various types of uncertainty information in the modeling process. The complex relationships between different ELV management sub-systems are successfully addressed. In particular, the formulated model can help identify optimal patterns of procurement from multiple sources of ELV supply, production and inventory planning in multiple vehicle recycling factories, and allocation of sorted material flows to multiple final destinations under rigorous environmental regulations. A case study is conducted in order to demonstrate the potential and applicability of the proposed model. Various constraint-violation probability levels are examined in detail. Influences of parameter uncertainty on model solutions are thoroughly investigated. Useful solutions for the management of ELVs are obtained under different probabilities of violating the system constraints. The formulated model is able to tackle a hard ELV management problem involving uncertainties. The presented model has advantages in providing a basis for determining long-term ELV management plans with desired compromises between the economic efficiency of the vehicle recycling system and system-reliability considerations. The results are helpful for supporting the generation and improvement of ELV management plans. Copyright © 2016 Elsevier Ltd. All rights reserved.
Shaw, Jeremy A.; Daescu, Dacian N.
2017-08-01
This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.
Model-based verification method for solving the parameter uncertainty in the train control system
International Nuclear Information System (INIS)
Cheng, Ruijun; Zhou, Jin; Chen, Dewang; Song, Yongduan
2016-01-01
This paper presents a parameter analysis method to solve the parameter uncertainty problem for hybrid system and explore the correlation of key parameters for distributed control system. For improving the reusability of control model, the proposed approach provides the support for obtaining the constraint sets of all uncertain parameters in the abstract linear hybrid automata (LHA) model when satisfying the safety requirements of the train control system. Then, in order to solve the state space explosion problem, the online verification method is proposed to monitor the operating status of high-speed trains online because of the real-time property of the train control system. Furthermore, we construct the LHA formal models of train tracking model and movement authority (MA) generation process as cases to illustrate the effectiveness and efficiency of the proposed method. In the first case, we obtain the constraint sets of uncertain parameters to avoid collision between trains. In the second case, the correlation of position report cycle and MA generation cycle is analyzed under both the normal and the abnormal condition influenced by packet-loss factor. Finally, considering stochastic characterization of time distributions and real-time feature of moving block control system, the transient probabilities of wireless communication process are obtained by stochastic time petri nets. - Highlights: • We solve the parameters uncertainty problem by using model-based method. • We acquire the parameter constraint sets by verifying linear hybrid automata models. • Online verification algorithms are designed to monitor the high-speed trains. • We analyze the correlation of key parameters and uncritical parameters. • The transient probabilities are obtained by using reliability analysis.
Diffusion Processes Satisfying a Conservation Law Constraint
Directory of Open Access Journals (Sweden)
J. Bakosi
2014-01-01
We investigate coupled stochastic differential equations governing N nonnegative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires a set of fluctuating variables to be nonnegative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model, to be realizable, must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the nonnegativity and the unit-sum conservation law constraints are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner's generalized Dirichlet processes.
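The unit-sum part of the constraint can be checked numerically for a Wright-Fisher-style diffusion: with a mean-reverting drift a(x) = κ(w − x) (κ and the simplex target w are assumed here purely for illustration) and the coupled, nonlinear diffusion matrix B(x) = diag(x) − x xᵀ, both the drift and the infinitesimal variance of S = Σᵢ xᵢ vanish on the unit simplex, which is what keeps S = 1 invariant:

```python
import numpy as np

rng = np.random.default_rng(7)

# A state on the unit simplex and Wright-Fisher-style drift/diffusion terms.
n = 5
x = rng.dirichlet(np.ones(n))          # current state, sum(x) = 1
w = rng.dirichlet(np.ones(n))          # drift target, also on the simplex
kappa = 2.0

drift = kappa * (w - x)                # a_i: mean reversion toward w
bmat = np.diag(x) - np.outer(x, x)     # B_ij = x_i (delta_ij - x_j)

ones = np.ones(n)
drift_of_sum = ones @ drift            # kappa * (sum(w) - sum(x)) = 0
var_of_sum = ones @ bmat @ ones        # S - S^2 = 0 when S = 1
print(drift_of_sum, var_of_sum)
```

Both quantities are zero up to floating-point round-off, confirming that this drift/diffusion pair satisfies the unit-sum constraint derived in the paper (nonnegativity additionally requires the diffusion to vanish at the simplex boundary, which B(x) does).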
International Nuclear Information System (INIS)
Teuber, T; Steidl, G; Chan, R H
2013-01-01
In this paper, we analyze the minimization of seminorms ‖L · ‖ on R^n under the constraint of a bounded I-divergence D(b, H · ) for rather general linear operators H and L. The I-divergence is also known as Kullback–Leibler divergence and appears in many models in imaging science, in particular when dealing with Poisson data but also in the case of multiplicative Gamma noise. Often H represents, e.g., a linear blur operator and L is some discrete derivative or frame analysis operator. A central part of this paper consists in proving relations between the parameters of I-divergence constrained and penalized problems. To solve the I-divergence constrained problem, we consider various first-order primal–dual algorithms which reduce the problem to the solution of certain proximal minimization problems in each iteration step. One of these proximation problems is an I-divergence constrained least-squares problem which can be solved based on Morozov’s discrepancy principle by a Newton method. We prove that these algorithms produce not only a sequence of vectors which converges to a minimizer of the constrained problem but also a sequence of parameters which converges to a regularization parameter so that the corresponding penalized problem has the same solution. Furthermore, we derive a rule for automatically setting the constraint parameter for data corrupted by multiplicative Gamma noise. The performance of the various algorithms is finally demonstrated for different image restoration tasks both for images corrupted by Poisson noise and multiplicative Gamma noise. (paper)
CBDS: Constraint-based diagnostic system for malfunction identification in the nuclear power plant
International Nuclear Information System (INIS)
Ha, J.
1992-01-01
Traditional rule-based diagnostic expert systems use the experience of experts in the form of rules that associate symptoms with underlying faults. A commonly recognized failing of such systems is their narrow range of expertise and their inability to recognize problems outside this range. A model-based diagnostic system for isolating malfunctioning components, CBDS (the Constraint-Based Diagnostic System), has been developed. Since the intended behavior of a device is more predictable than unintended behaviors (faults), a model-based system using the intended behavior has the potential to diagnose unexpected malfunctions by considering faults as "anything other than the intended behavior." As a knowledge base, the CBDS generates and decomposes a constraint network based on the structure and behavior model, which are represented symbolically in algebraic equations. Behaviors of generic components are organized in a component model library. Once the library is available, actual domain knowledge can be represented by declaring component types and their connections. To capture various kinds of plant knowledge, a mixed model was developed which allows the use of different parameter types in one equation by defining various operators. The CBDS uses the general idea of model-based diagnosis. It detects a discrepancy between observation and prediction using constraint propagation, which carries and accumulates the assumptions made when parameter values are deduced. When measured plant parameters are asserted into the constraint network and propagated through it, a discrepancy will be detected if any malfunctioning component exists. The CBDS was tested on the Recirculation Flow Control System of a BWR and has been shown to be able to diagnose unexpected events.
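The discrepancy-detection idea, propagating predictions through component models while accumulating the assumptions they rest on, fits in a few lines. The sketch below is a toy illustration of this style of diagnosis, not CBDS itself; the two-component plant and its parameters are invented:

```python
# Toy model-based diagnosis by constraint propagation with assumption
# tracking. Each derived value carries the set of components whose correct
# behavior it assumes; a mismatch with an observation implicates that set.

def diagnose(obs, models, tol=1e-6):
    values = dict(obs)                       # variable -> known value
    assume = {v: frozenset() for v in obs}   # components a value relies on
    conflicts = []
    # One pass suffices here because models are listed in topological order.
    for comp, (src, dst, fn) in models.items():
        if src not in values:
            continue
        pred = fn(values[src])
        deps = assume[src] | {comp}
        if dst in obs:
            if abs(pred - obs[dst]) > tol:
                conflicts.append(deps)       # some member of deps is faulty
        else:
            values[dst], assume[dst] = pred, deps
    return conflicts

# Intended behavior: the pump doubles its input flow; the valve passes 90%
# of the pump's output.
models = {"pump": ("inflow", "midflow", lambda f: 2.0 * f),
          "valve": ("midflow", "outflow", lambda f: 0.9 * f)}

# Observed outflow disagrees with the predicted 18.0, so the conflict set
# {pump, valve} is reported: at least one of them is malfunctioning.
conflicts = diagnose({"inflow": 10.0, "outflow": 12.0}, models)
print(conflicts)
```

An extra sensor on the intermediate variable would split the conflict set and isolate the faulty component, mirroring how measured plant parameters narrow the diagnosis in CBDS.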
Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel
2016-01-01
Ordinary differential equation models have become a widespread approach to analyzing dynamical systems and understanding their underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated into the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time, multiplicity of steady-state solutions is avoided, which is otherwise problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to the most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization.
Armando, Alessandro; Giunchiglia, Enrico; Ponta, Serena Elisa
We present an approach to the formal specification and automatic analysis of business processes under authorization constraints based on the action language C. The use of C allows for a natural and concise modeling of the business process and the associated security policy, and for the automatic analysis of the resulting specification using the Causal Calculator (CCALC). Our approach improves upon previous work by greatly simplifying the specification step while retaining the ability to perform a fully automatic analysis. To illustrate the effectiveness of the approach, we describe its application to a version of a business process taken from the banking domain and use CCALC to determine resource allocation plans complying with the security policy.
Energy Technology Data Exchange (ETDEWEB)
Thu, Hien Cao Thi; Lee, Moonyong [Yeungnam University, Gyeongsan (Korea, Republic of)
2013-12-15
A novel analytical design method of industrial proportional-integral (PI) controllers was developed for the optimal control of first-order processes with operational constraints. The control objective was to minimize a weighted sum of the controlled variable error and the rate of change in the manipulated variable under the maximum allowable limits in the controlled variable, manipulated variable and the rate of change in the manipulated variable. The constrained optimal servo control problem was converted to an unconstrained optimization to obtain an analytical tuning formula. A practical shortcut procedure for obtaining optimal PI parameters was provided based on graphical analysis of global optimality. The proposed PI controller was found to guarantee global optimum and deal explicitly with the three important operational constraints.
An Efficient Energy Constraint Based UAV Path Planning for Search and Coverage
Directory of Open Access Journals (Sweden)
German Gramajo
2017-01-01
A path planning strategy is presented for a search and coverage mission for a small UAV that maximizes the area covered based on stored energy and maneuverability constraints. The proposed formulation has a high level of autonomy, without requiring an exact choice of optimization parameters, and is appropriate for real-time implementation. The computed trajectory maximizes spatial coverage while closely satisfying terminal constraints on the position of the vehicle and minimizing the time of flight. Comparison of this formulation with a path planning algorithm based on a time constraint shows equivalent coverage performance but improved prediction of overall mission duration and accuracy of the terminal position of the vehicle.
Directory of Open Access Journals (Sweden)
Zhiqiang GENG
2014-01-01
Output noise is strongly related to the input in a closed-loop control system, which makes closed-loop model identification difficult, and sometimes infeasible in practice. The forward channel model is chosen to isolate disturbances in the output noise from the input, and is identified by optimizing the dynamic characteristics of the process based on closed-loop operation data. The characteristic parameters of the process, such as dead time and time constant, are calculated and estimated from the PI/PID controller parameters and the closed-loop process input/output data. These characteristic parameters are adopted to define the search space of the optimization-based identification algorithm. A PSO-SQP optimization algorithm is applied to integrate the global search ability of PSO with the local search ability of SQP to identify the model parameters of the forward channel. The validity of the proposed method has been verified by simulation. Its practicability is confirmed by PI/PID controller parameter tuning based on the identified forward channel model.
Directory of Open Access Journals (Sweden)
Urszula Ledzewicz
1993-01-01
In this paper, a general distributed parameter control problem in Banach spaces with an integral cost functional and with given initial and terminal data is considered. An extension of the Dubovitskii-Milyutin method to the case of nonregular operator equality constraints, based on Avakov's generalization of the Lusternik theorem, is presented. This result is applied to obtain an extension of the Extremum Principle to the case of abnormal optimal control problems. Then a version of this problem with nonoperator equality constraints is discussed and the Extremum Principle for this problem is presented.
Bu, Xiangwei; Wu, Xiaoyan; Tian, Mingyan; Huang, Jiaqi; Zhang, Rui; Ma, Zhen
2015-09-01
In this paper, an adaptive neural controller is exploited for a constrained flexible air-breathing hypersonic vehicle (FAHV) based on high-order tracking differentiator (HTD). By utilizing functional decomposition methodology, the dynamic model is reasonably decomposed into the respective velocity subsystem and altitude subsystem. For the velocity subsystem, a dynamic inversion based neural controller is constructed. By introducing the HTD to adaptively estimate the newly defined states generated in the process of model transformation, a novel neural based altitude controller that is quite simpler than the ones derived from back-stepping is addressed based on the normal output-feedback form instead of the strict-feedback formulation. Based on minimal-learning parameter scheme, only two neural networks with two adaptive parameters are needed for neural approximation. Especially, a novel auxiliary system is explored to deal with the problem of control inputs constraints. Finally, simulation results are presented to test the effectiveness of the proposed control strategy in the presence of system uncertainties and actuators constraints. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Constraint-Muse: A Soft-Constraint Based System for Music Therapy
Hölzl, Matthias; Denker, Grit; Meier, Max; Wirsing, Martin
Monoidal soft constraints are a versatile formalism for specifying and solving multi-criteria optimization problems with dynamically changing user preferences. We have developed a prototype tool for interactive music creation, called Constraint Muse, that uses monoidal soft constraints to ensure that a dynamically generated melody harmonizes with input from other sources. Constraint Muse provides an easy-to-use interface based on Nintendo Wii controllers and is intended to be used in music therapy for people with Parkinson’s disease and for children with high-functioning autism or Asperger’s syndrome.
An Efficient Energy Constraint Based UAV Path Planning for Search and Coverage
Gramajo, German; Shankar, Praveen
2017-01-01
A path planning strategy for a search and coverage mission for a small UAV that maximizes the area covered based on stored energy and maneuverability constraints is presented. The proposed formulation has a high level of autonomy, without requiring an exact choice of optimization parameters, and is appropriate for real-time implementation. The computed trajectory maximizes spatial coverage while closely satisfying terminal constraints on the position of the vehicle and minimizing the time of ...
Directory of Open Access Journals (Sweden)
Hea-Jung Kim
2016-05-01
Full Text Available This paper proposes a two-stage maximum entropy prior to elicit uncertainty regarding a multivariate interval constraint of the location parameter of a scale mixture of normal model. Using Shannon’s entropy, this study demonstrates how the prior, obtained by using two stages of a prior hierarchy, appropriately accounts for the information regarding the stochastic constraint and suggests an objective measure of the degree of belief in the stochastic constraint. The study also verifies that the proposed prior plays the role of bridging the gap between the canonical maximum entropy prior of the parameter with no interval constraint and that with a certain multivariate interval constraint. It is shown that the two-stage maximum entropy prior belongs to the family of rectangle screened normal distributions that is conjugate for samples from a normal distribution. Some properties of the prior density, useful for developing a Bayesian inference of the parameter with the stochastic constraint, are provided. We also propose a hierarchical constrained scale mixture of normal model (HCSMN), which uses the prior density to estimate the constrained location parameter of a scale mixture of normal model and demonstrates the scope of its applicability.
Constraint-based Word Segmentation for Chinese
DEFF Research Database (Denmark)
Christiansen, Henning; Bo, Li
2014-01-01
-hoc and statistically based methods. In this paper, we show experiments of implementing different approaches to CWSP in the framework of CHR Grammars [Christiansen, 2005] that provides a constraint solving approach to language analysis. CHR Grammars are based upon Constraint Handling Rules, CHR [Frühwirth, 1998, 2009......], which is a declarative, high-level programming language for specification and implementation of constraint solvers....
Uncertainty management by relaxation of conflicting constraints in production process scheduling
Dorn, Juergen; Slany, Wolfgang; Stary, Christian
1992-01-01
Mathematical-analytical methods as used in Operations Research approaches are often insufficient for scheduling problems. This is due to three reasons: the combinatorial complexity of the search space, conflicting objectives for production optimization, and the uncertainty in the production process. Knowledge-based techniques, especially approximate reasoning and constraint relaxation, are promising ways to overcome these problems. A case study from an industrial CIM environment, namely high-grade steel production, is presented to demonstrate how knowledge-based scheduling with the desired capabilities could work. By using fuzzy set theory, the applied knowledge representation technique covers the uncertainty inherent in the problem domain. Based on this knowledge representation, a classification of jobs according to their importance is defined which is then used for the straightforward generation of a schedule. A control strategy which comprises organizational, spatial, temporal, and chemical constraints is introduced. The strategy supports the dynamic relaxation of conflicting constraints in order to improve tentative schedules.
Characteristic Model-Based Robust Model Predictive Control for Hypersonic Vehicles with Constraints
Directory of Open Access Journals (Sweden)
Jun Zhang
2017-06-01
Full Text Available Designing robust control for hypersonic vehicles in reentry is difficult, due to the features of the vehicles including strong coupling, non-linearity, and multiple constraints. This paper proposes a characteristic model-based robust model predictive control (MPC) for hypersonic vehicles with reentry constraints. First, the hypersonic vehicle is modeled by a characteristic model composed of a linear time-varying system and a lumped disturbance. Then, the identification data are regenerated by the accumulative sum idea in gray theory, which weakens the effects of random noise and strengthens the regularity of the identification data. Based on the regenerated data, the time-varying parameters and the disturbance are estimated online by gray identification. Finally, the mixed H2/H∞ robust predictive control law is proposed based on linear matrix inequalities (LMIs) and receding horizon optimization techniques. Because MPC actively handles system constraints, the input and state constraints are satisfied in the closed-loop control system. The validity of the proposed control is verified theoretically using Lyapunov theory and illustrated by simulation results.
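The accumulative-sum data regeneration mentioned in the abstract is the standard 1-AGO operation of grey theory; a minimal Python sketch (illustrative data, not the paper's) shows how accumulation damps zero-mean noise while the inverse operation recovers the original series:

```python
import random

def ago(xs):
    """1-AGO (first-order accumulated generating operation) of grey theory:
    partial sums of the series. Accumulation damps zero-mean noise, so the
    regenerated data expose the underlying trend more regularly."""
    out, s = [], 0.0
    for x in xs:
        s += x
        out.append(s)
    return out

def iago(ys):
    """Inverse AGO: first differences recover the original series."""
    return ys[:1] + [ys[k] - ys[k - 1] for k in range(1, len(ys))]

random.seed(0)
trend = [0.5 * k for k in range(10)]                  # underlying ramp
noisy = [v + random.gauss(0, 0.2) for v in trend]     # measured data
regen = ago(noisy)                                    # smoother, near-monotone
```

The identification in the paper is then run on `regen` rather than on the raw noisy series.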
Density-Based Clustering with Geographical Background Constraints Using a Semantic Expression Model
Directory of Open Access Journals (Sweden)
Qingyun Du
2016-05-01
Full Text Available A semantics-based method for density-based clustering with constraints imposed by geographical background knowledge is proposed. In this paper, we apply an ontological approach to the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm in the form of knowledge representation for constraint clustering. When used in the process of clustering geographic information, semantic reasoning based on a defined ontology and its relationships is primarily intended to overcome the lack of knowledge of the relevant geospatial data. Better constraints on the geographical knowledge yield more reasonable clustering results. This article uses an ontology to describe the four types of semantic constraints for geographical backgrounds: “No Constraints”, “Constraints”, “Cannot-Link Constraints”, and “Must-Link Constraints”. This paper also reports the implementation of a prototype clustering program. Based on the proposed approach, DBSCAN can be applied with both obstacle and non-obstacle constraints as a semi-supervised clustering algorithm and the clustering results are displayed on a digital map.
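A Cannot-Link constraint of the kind described can be grafted onto a bare-bones DBSCAN as sketched below; the ontology-driven reasoning of the paper is reduced here to a plain pairwise check over a hypothetical point set:

```python
from math import dist

def dbscan_cannot_link(pts, eps, min_pts, cannot_link):
    """Bare-bones DBSCAN sketch with 'Cannot-Link' constraints: a point is
    never absorbed into a cluster that already contains a point it must not
    link with. O(n^2) neighbour search, for illustration only."""
    def neigh(i):
        return [j for j in range(len(pts)) if dist(pts[i], pts[j]) <= eps]

    def forbidden(i, members):
        return any((i, m) in cannot_link or (m, i) in cannot_link
                   for m in members)

    labels, cid = {}, 0
    for i in range(len(pts)):
        if i in labels or len(neigh(i)) < min_pts:
            continue
        labels[i] = cid
        members, frontier = [i], neigh(i)
        while frontier:
            j = frontier.pop()
            if j in labels or forbidden(j, members):
                continue
            labels[j] = cid
            members.append(j)
            nj = neigh(j)
            if len(nj) >= min_pts:       # j is a core point: keep expanding
                frontier.extend(nj)
        cid += 1
    return labels

# Two dense groups; points 0 and 2 carry a Cannot-Link constraint.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
labels = dbscan_cannot_link(pts, eps=1.5, min_pts=2, cannot_link={(0, 2)})
```

Without the constraint the left-hand group forms one cluster; with it, point 2 is forced into a cluster of its own.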
International Nuclear Information System (INIS)
Rao, R. Venkata; Rai, Dhiraj P.
2017-01-01
Submerged arc welding (SAW) is characterized as a multi-input process. Selection of optimum combination of process parameters of SAW process is a vital task in order to achieve high quality of weld and productivity. The objective of this work is to optimize the SAW process parameters using a simple optimization algorithm, which is fast, robust and convenient. Therefore, in this work a very recently proposed optimization algorithm named Jaya algorithm is applied to solve the optimization problems in SAW process. In addition, a modified version of Jaya algorithm with oppositional based learning, named “Quasi-oppositional based Jaya algorithm” (QO-Jaya) is proposed in order to improve the performance of the Jaya algorithm. Three optimization case studies are considered and the results obtained by Jaya algorithm and QO-Jaya algorithm are compared with the results obtained by well-known optimization algorithms such as Genetic algorithm (GA), Particle swarm optimization (PSO), Imperialist competitive algorithm (ICA) and Teaching learning based optimization (TLBO).
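The Jaya update rule referenced above is simple enough to sketch: each candidate moves towards the current best solution and away from the current worst, with no algorithm-specific tuning parameters. The bounds and test function below are illustrative, not the SAW case studies:

```python
import random

def jaya(f, bounds, pop=20, iters=200, seed=1):
    """Minimal Jaya sketch: x' = x + r1*(best - |x|) - r2*(worst - |x|),
    clipped to the variable bounds, with greedy acceptance."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(iters):
        vals = [f(x) for x in X]
        best = X[vals.index(min(vals))]
        worst = X[vals.index(max(vals))]
        for k in range(pop):
            cand = [min(max(X[k][j]
                            + rng.random() * (best[j] - abs(X[k][j]))
                            - rng.random() * (worst[j] - abs(X[k][j])), lo), hi)
                    for j, (lo, hi) in enumerate(bounds)]
            if f(cand) < vals[k]:          # keep a candidate only if it improves
                X[k] = cand
    return min(X, key=f)

# Toy problem: minimise the 2-D sphere function over [-5, 5]^2.
sphere = lambda x: sum(v * v for v in x)
sol = jaya(sphere, [(-5.0, 5.0), (-5.0, 5.0)])
```

The quasi-oppositional variant (QO-Jaya) additionally seeds the population with quasi-opposite points; that refinement is omitted here.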
Energy Technology Data Exchange (ETDEWEB)
Rao, R. Venkata; Rai, Dhiraj P. [Sardar Vallabhbhai National Institute of Technology, Gujarat (India)
2017-05-15
Submerged arc welding (SAW) is characterized as a multi-input process. Selection of optimum combination of process parameters of SAW process is a vital task in order to achieve high quality of weld and productivity. The objective of this work is to optimize the SAW process parameters using a simple optimization algorithm, which is fast, robust and convenient. Therefore, in this work a very recently proposed optimization algorithm named Jaya algorithm is applied to solve the optimization problems in SAW process. In addition, a modified version of Jaya algorithm with oppositional based learning, named “Quasi-oppositional based Jaya algorithm” (QO-Jaya) is proposed in order to improve the performance of the Jaya algorithm. Three optimization case studies are considered and the results obtained by Jaya algorithm and QO-Jaya algorithm are compared with the results obtained by well-known optimization algorithms such as Genetic algorithm (GA), Particle swarm optimization (PSO), Imperialist competitive algorithm (ICA) and Teaching learning based optimization (TLBO).
Coverage-based constraints for IMRT optimization
Mescher, H.; Ulrich, S.; Bangert, M.
2017-09-01
Radiation therapy treatment planning requires an incorporation of uncertainties in order to guarantee an adequate irradiation of the tumor volumes. In current clinical practice, uncertainties are accounted for implicitly with an expansion of the target volume according to generic margin recipes. Alternatively, it is possible to account for uncertainties by explicit minimization of objectives that describe worst-case treatment scenarios, the expectation value of the treatment or the coverage probability of the target volumes during treatment planning. In this note we show that approaches relying on objectives to induce a specific coverage of the clinical target volumes are inevitably sensitive to variation of the relative weighting of the objectives. To address this issue, we introduce coverage-based constraints for intensity-modulated radiation therapy (IMRT) treatment planning. Our implementation follows the concept of coverage-optimized planning that considers explicit error scenarios to calculate and optimize patient-specific probabilities q(d̂, v̂) of covering a specific target volume fraction v̂ with a certain dose d̂. Using a constraint-based reformulation of coverage-based objectives we eliminate the trade-off between coverage and competing objectives during treatment planning. In-depth convergence tests including 324 treatment plan optimizations demonstrate the reliability of coverage-based constraints for varying levels of probability, dose and volume. General clinical applicability of coverage-based constraints is demonstrated for two cases. A sensitivity analysis regarding penalty variations within this planning study based on IMRT treatment planning using (1) coverage-based constraints, (2) coverage-based objectives, (3) probabilistic optimization, (4) robust optimization and (5) conventional margins illustrates the potential benefit of coverage-based constraints that do not require tedious adjustment of target volume objectives.
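The coverage probability over discrete error scenarios can be estimated as in this sketch (made-up dose values purely for illustration):

```python
def coverage_probability(scenario_doses, d_hat, v_hat):
    """Estimate q(d, v): the probability, over equally likely discrete error
    scenarios, that at least a fraction v_hat of the target voxels receives
    a dose of at least d_hat."""
    covered = 0
    for doses in scenario_doses:
        frac = sum(1 for d in doses if d >= d_hat) / len(doses)
        if frac >= v_hat:
            covered += 1
    return covered / len(scenario_doses)

# Three equally likely setup-error scenarios for a 4-voxel target (toy data):
scenarios = [
    [60, 61, 59, 62],   # nominal: all voxels near prescription
    [60, 61, 45, 62],   # shifted: one voxel underdosed
    [40, 42, 41, 39],   # gross error: target missed
]
q = coverage_probability(scenarios, d_hat=55, v_hat=0.9)
```

A coverage-based constraint then requires q to stay above a chosen level instead of penalizing shortfall in the objective.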
Fuzzy Constraint-Based Agent Negotiation
Institute of Scientific and Technical Information of China (English)
Menq-Wen Lin; K. Robert Lai; Ting-Jung Yu
2005-01-01
Conflicts between two or more parties arise for various reasons and perspectives. Thus, resolution of conflicts frequently relies on some form of negotiation. This paper presents a general problem-solving framework for modeling multi-issue multilateral negotiation using fuzzy constraints. Agent negotiation is formulated as a distributed fuzzy constraint satisfaction problem (DFCSP). Fuzzy constrains are thus used to naturally represent each agent's desires involving imprecision and human conceptualization, particularly when lexical imprecision and subjective matters are concerned. On the other hand, based on fuzzy constraint-based problem-solving, our approach enables an agent not only to systematically relax fuzzy constraints to generate a proposal, but also to employ fuzzy similarity to select the alternative that is subject to its acceptability by the opponents. This task of problem-solving is to reach an agreement that benefits all agents with a high satisfaction degree of fuzzy constraints, and move towards the deal more quickly since their search focuses only on the feasible solution space. An application to multilateral negotiation of a travel planning is provided to demonstrate the usefulness and effectiveness of our framework.
Directory of Open Access Journals (Sweden)
Gang Qiao
2016-05-01
Full Text Available Landslides are one of the most destructive geo-hazards that can bring about great threats to both human lives and infrastructures. Landslide monitoring has always been a research hotspot. In particular, landslide simulation experimentation is an effective tool in landslide research to obtain critical parameters that help understand the mechanism and evaluate the triggering and controlling factors of slope failure. Compared with other traditional geotechnical monitoring approaches, the close-range photogrammetry technique shows potential in tracking and recording the 3D surface deformation and failure processes. In such cases, image matching usually plays a critical role in stereo image processing for the 3D geometric reconstruction. However, the complex imaging conditions such as rainfall, mass movement, illumination, and ponding will reduce the texture quality of the stereo images, bringing about difficulties in the image matching process and resulting in very sparse matches. To address this problem, this paper presents a multiple-constraints based robust image matching approach for poor-texture close-range images particularly useful in monitoring a simulated landslide. The Scale Invariant Feature Transform (SIFT) algorithm was first applied to the stereo images for generation of scale-invariant feature points, followed by a two-step matching process: feature-based image matching and area-based image matching. In the first feature-based matching step, the triangulation process was performed based on the SIFT matches filtered by the Fundamental Matrix (FM) and a robust checking procedure, to serve as the basic constraints for feature-based iterated matching of all the non-matched SIFT-derived feature points inside each triangle. In the following area-based image-matching step, the corresponding points of the non-matched features in each triangle of the master image were predicted in the homologous triangle of the searching image by using geometric
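The Fundamental-Matrix filtering step boils down to an algebraic residual check per candidate match; a sketch with a toy F for a pure horizontal translation between assumed-calibrated views (not the paper's pipeline):

```python
def epipolar_residual(F, x1, x2):
    """Algebraic epipolar residual x2^T F x1 for homogeneous pixel
    coordinates; geometrically consistent matches give values near zero."""
    Fx1 = [sum(F[i][j] * x1[j] for j in range(3)) for i in range(3)]
    return sum(x2[i] * Fx1[i] for i in range(3))

def filter_matches(F, matches, tol=1e-6):
    """Keep only candidate matches consistent with the fundamental matrix."""
    return [(p, q) for p, q in matches if abs(epipolar_residual(F, p, q)) < tol]

# For calibrated cameras and translation t = (1, 0, 0), F reduces to the
# essential matrix [t]_x: correct matches must lie on the same image row.
F = [[0, 0, 0],
     [0, 0, -1],
     [0, 1, 0]]
matches = [((10, 5, 1), (14, 5, 1)),   # same row: consistent
           ((10, 5, 1), (14, 8, 1))]   # row jump: outlier
good = filter_matches(F, matches)
```

In practice F is estimated robustly (e.g. with RANSAC) and the tolerance is set in pixels, but the accept/reject logic is exactly this residual test.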
Observational constraints on Hubble parameter in viscous generalized Chaplygin gas
Thakur, P.
2018-04-01
Cosmological model with viscous generalized Chaplygin gas (in short, VGCG) is considered here to determine observational constraints on its equation of state parameters (in short, EoS) from background data. These data consist of H(z)-z (OHD) data, the Baryonic Acoustic Oscillations peak parameter, the CMB shift parameter and SN Ia data (Union 2.1). Best-fit values of the EoS parameters including the present Hubble parameter (H0) and their acceptable ranges at different confidence limits are determined. In this model the permitted ranges for the present Hubble parameter and the transition redshift (zt) at 1σ confidence limits are H0=70.24^{+0.34}_{-0.36} and zt=0.76^{+0.07}_{-0.07} respectively. These EoS parameters are then compared with those of other models. The present age of the Universe (t0) has also been determined here. The Akaike information criterion and Bayesian information criterion for model selection have been adopted for comparison with other models. It is noted that the VGCG model satisfactorily accommodates the present accelerating phase of the Universe.
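Background-data fitting of this kind amounts to chi-square minimization over H(z) measurements. A grid-search sketch with mock OHD points and, for simplicity, a flat-ΛCDM expansion law standing in for the VGCG one:

```python
from math import sqrt

def H_model(z, H0, Om=0.3):
    """Flat-LCDM H(z), used here as a simple stand-in for the paper's
    viscous generalized Chaplygin gas expansion law."""
    return H0 * sqrt(Om * (1 + z) ** 3 + 1 - Om)

def chi2(H0, data):
    """Chi-square of the model against (z, H_obs, sigma) measurements."""
    return sum(((Hobs - H_model(z, H0)) / sig) ** 2 for z, Hobs, sig in data)

# Mock OHD points (z, H in km/s/Mpc, sigma) -- illustrative, not the real data:
data = [(0.1, 73.4, 3.0), (0.4, 86.5, 4.0), (0.9, 116.0, 6.0), (1.3, 146.5, 8.0)]
grid = [60.0 + 0.1 * i for i in range(201)]        # H0 grid over [60, 80]
best_H0 = min(grid, key=lambda H0: chi2(H0, data))
```

Confidence limits then follow from the Δχ² contours around the minimum; the paper additionally folds in BAO, CMB shift and SN Ia terms.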
Automatic Verification of Timing Constraints for Safety Critical Space Systems
Fernandez, Javier; Parra, Pablo; Sanchez Prieto, Sebastian; Polo, Oscar; Bernat, Guillem
2015-09-01
This paper presents an automatic verification process, focusing on the verification of scheduling analysis parameters. The proposal is part of a process based on Model Driven Engineering to automate verification and validation of on-board satellite software, and it is implemented in the software control unit of the energy particle detector that is a payload of the Solar Orbiter mission. From the design model, a scheduling analysis model and its verification model are generated. The verification constraints are defined as finite timed automata. When the system is deployed on the target, verification evidence is extracted at instrumented points. The constraints are fed with this evidence; if any constraint is not satisfied by the on-target evidence, the scheduling analysis is not valid.
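The evidence-versus-constraint check can be sketched as below (hypothetical task names and timings; the paper's constraints are full timed automata rather than the simple response-time bounds used here):

```python
def verify_response_times(trace, constraints):
    """Check scheduling constraints against instrumented evidence.
    trace: (task, start_time, finish_time) records extracted from the target.
    constraints: task -> maximum allowed response time (same time unit).
    Returns the violations; an empty list means the evidence satisfies the
    scheduling analysis."""
    return [(task, t1 - t0, constraints[task])
            for task, t0, t1 in trace
            if task in constraints and t1 - t0 > constraints[task]]

# Instrumentation evidence from one hyperperiod (illustrative numbers, in ms):
trace = [("acq", 0, 4), ("proc", 5, 14), ("tm", 14, 17)]
violations = verify_response_times(trace, {"acq": 5, "proc": 8, "tm": 5})
```

A non-empty result invalidates the scheduling analysis, mirroring the paper's fed-constraint decision rule.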
Adaptive control of a quadrotor aerial vehicle with input constraints and uncertain parameters
Tran, Trong-Toan; Ge, Shuzhi Sam; He, Wei
2018-05-01
In this paper, we address the problem of adaptive bounded control for the trajectory tracking of a Quadrotor Aerial Vehicle (QAV) while the input saturations and uncertain parameters with the known bounds are simultaneously taken into account. First, to deal with the underactuated property of the QAV model, we decouple and construct the QAV model as a cascaded structure which consists of two fully actuated subsystems. Second, to handle the input constraints and uncertain parameters, we use a combination of the smooth saturation function and smooth projection operator in the control design. Third, to ensure the stability of the overall system of the QAV, we develop the technique for the cascaded system in the presence of both the input constraints and uncertain parameters. Finally, the region of stability of the closed-loop system is constructed explicitly, and our design ensures the asymptotic convergence of the tracking errors to the origin. The simulation results are provided to illustrate the effectiveness of the proposed method.
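The smooth saturation function mentioned above can be realized, for example, with tanh; this is an illustrative choice, not necessarily the exact function used in the paper:

```python
from math import tanh

def smooth_sat(u, u_max):
    """Smooth saturation: output stays strictly inside (-u_max, u_max), is
    close to the identity for small commands, and is differentiable
    everywhere, unlike a hard clip -- the property that makes it usable
    inside Lyapunov-based adaptive control designs."""
    return u_max * tanh(u / u_max)

nominal = smooth_sat(0.5, u_max=5.0)     # nearly pass-through
clipped = smooth_sat(50.0, u_max=5.0)    # bounded just below 5.0
```

With a hard clip the control law would have a non-differentiable corner exactly where saturation engages; the smooth version avoids that in the stability analysis.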
Multi-objective Optimization of Process Parameters in Friction Stir Welding
DEFF Research Database (Denmark)
Tutum, Cem Celal; Hattel, Jesper Henri
The objective of this paper is to investigate optimum process parameters in Friction Stir Welding (FSW) to minimize residual stresses in the work piece and maximize production efficiency meanwhile satisfying process specific constraints as well. More specifically, the choices of tool rotational...... speed and traverse welding speed have been sought in order to achieve the goals mentioned above using an evolutionary multi-objective optimization (MOO) algorithm, i.e. non-dominated sorting genetic algorithm (NSGA-II), integrated with a transient, 2- dimensional sequentially coupled thermo...
Directory of Open Access Journals (Sweden)
B. Kuldeep
2015-06-01
Full Text Available Fractional calculus has recently been identified as a very important mathematical tool in the field of signal processing. Digital filters designed by fractional derivatives give more accurate frequency response in the prescribed frequency region. Digital filters are most important part of multi-rate filter bank systems. In this paper, an improved method based on fractional derivative constraints is presented for the design of two-channel quadrature mirror filter (QMF bank. The design problem is formulated as minimization of L2 error of filter bank transfer function in passband, stopband interval and at quadrature frequency, and then Lagrange multiplier method with fractional derivative constraints is applied to solve it. The proposed method is then successfully applied for the design of two-channel QMF bank with higher order filter taps. Performance of the QMF bank design is then examined through study of various parameters such as passband error, stopband error, transition band error, peak reconstruction error (PRE, stopband attenuation (As. It is found that, the good design can be obtained with the change of number and value of fractional derivative constraint coefficients.
Management of Constraint Generators in Fashion Store Design Processes
DEFF Research Database (Denmark)
Borch Münster, Mia; Haug, Anders
2017-01-01
of the literature and eight case studies of fashion store design projects. Findings: The paper shows that the influence of the constraint generators decreases during the design process except for supplier-generated constraints, which increase in the final stages of the design process. The paper argues...... is on fashion store design, the findings may, to some degree, be applicable to other types of store design projects. Practical implications: The understandings provided by this paper may help designers to deal proactively with constraints, reducing the use of resources to alter design proposals. Originality......Purpose: Retail design concepts are complex designs meeting functional and aesthetic demands from various constraint generators. However, the literature on this topic is sparse and offers only little support for store designers to deal with such challenges. To address this issue, the purpose...
Using soft constraints to guide users in flexible business process management systems
DEFF Research Database (Denmark)
Stefansen, Christian; Borch, Signe Ellegård
2008-01-01
Current Business Process Management Systems (BPMS) allow designers to specify processes in highly expressive languages supporting numerous control flow constructs, exceptions, complex predicates, etc., but process specifications are expressed in terms of hard constraints, and this leads...... to an unfortunate trade off: information about preferred practices must either be abandoned or promoted to hard constraints. If abandoned, the BPMS cannot guide its users; if promoted to hard constraints, it becomes a hindrance when unanticipated deviations occur. Soft constraints can make this trade-off less...... painful. Soft constraints specify what rules can be violated and by how much. With soft constraints, the BPMS knows what deviations it can permit, and it can guide the user through the process. The BPMS should allow designers to easily specify soft goals and allow its users to immediately see...
Design variables and constraints in fashion store design processes
DEFF Research Database (Denmark)
Haug, Anders; Borch Münster, Mia
2015-01-01
is to identify the most important store design variables, organise these variables into categories, understand the design constraints between categories, and determine the most influential stakeholders. Design/methodology/approach: – Based on a discussion of existing literature, the paper defines a framework...... into categories, provides an understanding of constraints between categories of variables, and identifies the most influential stakeholders. The paper demonstrates that the fashion store design task can be understood through a system perspective, implying that the store design task becomes a matter of defining......Purpose: – Several frameworks of retail store environment variables exist, but as shown by this paper, they are not particularly well-suited for supporting fashion store design processes. Thus, in order to provide an improved understanding of fashion store design, the purpose of this paper...
A Hybrid Autonomic Computing-Based Approach to Distributed Constraint Satisfaction Problems
Directory of Open Access Journals (Sweden)
Abhishek Bhatia
2015-03-01
Full Text Available Distributed constraint satisfaction problems (DisCSPs) are among the widely endeavored problems using agent-based simulation. Fernandez et al. formulated the sensor and mobile tracking problem as a DisCSP, known as SensorDCSP. In this paper, we adopt a customized ERE (environment, reactive rules and entities) algorithm for the SensorDCSP, which is otherwise proven to be a computationally intractable problem. An amalgamation of the autonomy-oriented computing (AOC) based algorithm (ERE) and a genetic algorithm (GA) provides an early solution of the modeled DisCSP. Incorporation of the GA into ERE facilitates auto-tuning of the simulation parameters, thereby leading to an early solution of constraint satisfaction. This study further contributes towards a model, built in the NetLogo simulation environment, to infer the efficacy of the proposed approach.
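The reactive local-search flavour of an ERE-style solver can be approximated by min-conflicts search; the sketch below is a simplified stand-in, not the ERE/GA hybrid itself, with hypothetical sensor names and topology:

```python
import random

def min_conflicts(variables, domains, conflicts, steps=1000, seed=3):
    """Local-search sketch in the reactive spirit of ERE: at each step one
    conflicted variable greedily moves to its least-conflicting value."""
    rng = random.Random(seed)
    assign = {v: rng.choice(domains[v]) for v in variables}
    for _ in range(steps):
        bad = [v for v in variables if conflicts(v, assign[v], assign)]
        if not bad:
            return assign               # all constraints satisfied
        v = rng.choice(bad)
        assign[v] = min(domains[v], key=lambda val: conflicts(v, val, assign))
    return None                         # gave up within the step budget

# Toy stand-in for SensorDCSP: neighbouring sensors must track different targets.
edges = [("s1", "s2"), ("s2", "s3"), ("s1", "s3")]
def conflicts(v, val, assign):
    return sum(1 for a, b in edges
               if (v == a and assign.get(b) == val)
               or (v == b and assign.get(a) == val))
variables = ["s1", "s2", "s3"]
domains = {v: ["t1", "t2", "t3"] for v in variables}
sol = min_conflicts(variables, domains, conflicts)
```

In the paper, the GA layer sits on top of such a reactive core and auto-tunes its simulation parameters.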
Optimization of Wireless Transceivers under Processing Energy Constraints
Wang, Gaojian; Ascheid, Gerd; Wang, Yanlu; Hanay, Oner; Negra, Renato; Herrmann, Matthias; Wehn, Norbert
2017-09-01
The focus of this article is on achieving maximum data rates under a processing energy constraint. For a given amount of processing energy per information bit, the overall power consumption increases with the data rate. When targeting data rates beyond 100 Gb/s, the system's overall power consumption soon exceeds the power which can be dissipated without forced cooling. To achieve a maximum data rate under this power constraint, the processing energy per information bit must be minimized. Therefore, in this article, suitable processing-efficient transmission schemes together with energy-efficient architectures and their implementations are investigated in a true cross-layer approach. Target use cases are short-range wireless transmitters working at carrier frequencies around 60 GHz and bandwidths between 1 GHz and 10 GHz.
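The headline trade-off is simple arithmetic: the sustainable data rate is the dissipation budget divided by the processing energy per information bit. The figures below are assumptions for illustration, not numbers from the article:

```python
def max_data_rate(p_dissipation_w, energy_per_bit_j):
    """Maximum sustainable data rate under a processing-power budget:
    rate = P_max / E_bit, so halving the energy per bit doubles the rate."""
    return p_dissipation_w / energy_per_bit_j

# Assumed figures: a 2 W passive-cooling budget and 10 pJ per information bit.
rate_bps = max_data_rate(2.0, 10e-12)   # 2e11 b/s, i.e. 200 Gb/s
```

This is why the article pushes on energy per bit: under a fixed cooling budget it is the only lever left for raising the rate.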
Eleiwi, Fadi
2017-05-08
An Observer-based Perturbation Extremum Seeking Control (PESC) is proposed for a Direct-Contact Membrane Distillation (DCMD) process. The process is described with a dynamic model that is based on a 2D Advection-Diffusion Equation (ADE) model which has pump flow rates as process inputs. The objective of the controller is to optimize the trade-off between the permeate mass flux and the energy consumption by the pumps inside the process. Cases of single and multiple control inputs are considered through the use of only the feed pump flow rate or both the feed and the permeate pump flow rates. A nonlinear Lyapunov-based observer is designed to provide an estimation for the temperature distribution all over the designated domain of the DCMD process. Moreover, control inputs are constrained with an anti-windup technique to be within feasible and physical ranges. Performance of the proposed structure is analyzed, and simulations based on real DCMD process parameters for each control input are provided.
Eleiwi, Fadi; Laleg-Kirati, Taous Meriem
2018-06-01
An observer-based perturbation extremum seeking control is proposed for a direct-contact membrane distillation (DCMD) process. The process is described with a dynamic model that is based on a 2D advection-diffusion equation model which has pump flow rates as process inputs. The objective of the controller is to optimise the trade-off between the permeate mass flux and the energy consumption by the pumps inside the process. Cases of single and multiple control inputs are considered through the use of only the feed pump flow rate or both the feed and the permeate pump flow rates. A nonlinear Lyapunov-based observer is designed to provide an estimation for the temperature distribution all over the designated domain of the DCMD process. Moreover, control inputs are constrained with an anti-windup technique to be within feasible and physical ranges. Performance of the proposed structure is analysed, and simulations based on real DCMD process parameters for each control input are provided.
Wang, H.; Chen, S.; Tao, C.; Qiu, L.
2017-12-01
High-density, high-fold and wide-azimuth seismic data acquisition methods are widely used to cope with increasingly sophisticated exploration targets, but acquisition periods are becoming longer and acquisition costs higher. We carry out a study of highly efficient seismic data acquisition and processing methods based on sparse representation theory (or compressed sensing theory), and achieve some innovative results. The theoretical principles of highly efficient acquisition and processing are studied first. We reveal sparse representation theory based on the wave equation, then study highly efficient seismic sampling methods and present an optimized piecewise-random sampling method based on sparsity prior information. Finally, a reconstruction strategy with a sparsity constraint is developed, and a two-step recovery approach combining a sparsity-promoting method and the hyperbolic Radon transform is put forward. These three aspects constitute the enhanced theory of highly efficient seismic data acquisition. The specific implementation strategies of highly efficient acquisition and processing are then studied according to the highly efficient acquisition theory expounded above. First, we propose a highly efficient acquisition network designing method with the help of the optimized piecewise-random sampling method. Second, we propose two types of highly efficient seismic data acquisition methods based on (1) single sources and (2) blended (or simultaneous) sources. Third, reconstruction procedures corresponding to these two types of acquisition are proposed to obtain the seismic data on the regular acquisition network. The impact of blended shooting on the imaging result is also discussed. In the end, we implement numerical tests based on the Marmousi model. The achieved results show: (1) the theoretical framework of highly efficient seismic data acquisition and processing
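Piecewise-random (jittered) sampling can be sketched as one random draw per equal segment of the receiver line; the geometry below is illustrative, and the paper further optimizes the mask under a sparsity prior:

```python
import random

def piecewise_random_sampling(n_total, n_keep, seed=7):
    """Jittered ('piecewise-random') sampling: split n_total stations into
    n_keep equal segments and draw one station per segment. This bounds the
    largest acquisition gap, unlike a fully random mask, which helps the
    sparse reconstruction."""
    rng = random.Random(seed)
    seg = n_total / n_keep
    return [min(n_total - 1, int(k * seg + rng.random() * seg))
            for k in range(n_keep)]

# Keep 30 of 120 stations: exactly one pick inside each 4-station segment.
mask = piecewise_random_sampling(n_total=120, n_keep=30)
```

The retained stations are irregular enough to suppress coherent aliasing but never leave a gap wider than two segments.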
Energy Technology Data Exchange (ETDEWEB)
Chen, Y.L.; Wang, G.Z., E-mail: gzwang@ecust.edu.cn; Xuan, F.Z.; Tu, S.T.
2015-04-15
Highlights: • Solution of constraint parameter τ* for through-wall cracked pipes has been obtained. • Constraint increases with increasing crack length and radius–thickness ratio of pipes. • Constraint-dependent LBB curve for through-wall cracked pipes has been constructed. • For increasing accuracy of LBB assessments, constraint effect should be considered. - Abstract: The leak-before-break (LBB) concept has been widely applied in the structural integrity assessments of pressured pipes in nuclear power plants. However, the crack-tip constraint effects in LBB analyses and designs cannot be incorporated. In this paper, by using three-dimensional finite element calculations, the modified load-independent T-stress constraint parameter τ* for circumferential through-wall cracked pipes with different geometries and crack sizes has been analyzed under different loading conditions, and the solutions of the crack-tip constraint parameter τ* have been obtained. Based on the τ* solutions and constraint-dependent J–R curves of a steel, the constraint-dependent LBB (leak-before-break) curves have been constructed. The results show that the constraint τ* increases with increasing crack length θ, mean radius R_m and radius–thickness ratio R_m/t of the pipes. In LBB analyses, the critical crack length calculated by the J–R curve of the standard high-constraint specimen for pipes with shorter cracks is over-conservative, and the degree of conservatism increases with decreasing crack length θ, R_m and R_m/t. Therefore, the constraint-dependent LBB curves should be constructed to modify the over-conservatism and increase the accuracy of LBB assessments.
KiDS-450: the tomographic weak lensing power spectrum and constraints on cosmological parameters
Köhlinger, F.; Viola, M.; Joachimi, B.; Hoekstra, H.; van Uitert, E.; Hildebrandt, H.; Choi, A.; Erben, T.; Heymans, C.; Joudaki, S.; Klaes, D.; Kuijken, K.; Merten, J.; Miller, L.; Schneider, P.; Valentijn, E. A.
2017-11-01
We present measurements of the weak gravitational lensing shear power spectrum based on 450 deg² of imaging data from the Kilo Degree Survey. We employ a quadratic estimator in two and three redshift bins and extract band powers of redshift autocorrelation and cross-correlation spectra in the multipole range 76 ≤ ℓ ≤ 1310. The cosmological interpretation of the measured shear power spectra is performed in a Bayesian framework assuming a ΛCDM model with spatially flat geometry, while accounting for small residual uncertainties in the shear calibration and redshift distributions as well as marginalizing over intrinsic alignments, baryon feedback and an excess-noise power model. Moreover, massive neutrinos are included in the modelling. The cosmological main result is expressed in terms of the parameter combination S_8 ≡ σ_8 √(Ω_m/0.3), yielding S_8 = 0.651 ± 0.058 (three z-bins), confirming the recently reported tension in this parameter with constraints from Planck at 3.2σ (three z-bins). We cross-check the results of the three z-bin analysis with the weaker constraints from the two z-bin analysis and find them to be consistent. The high-level data products of this analysis, such as the band power measurements, covariance matrices, redshift distributions and likelihood evaluation chains, are available at http://kids.strw.leidenuniv.nl.
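For readers unfamiliar with the S_8 parametrization, the quoted quantity and a rough Gaussian estimate of the tension can be reproduced in a few lines (the Planck-like reference value below is an illustrative placeholder, not taken from this paper):

```python
import math

def s8(sigma8, omega_m):
    """S_8 = sigma_8 * sqrt(Omega_m / 0.3)."""
    return sigma8 * math.sqrt(omega_m / 0.3)

# KiDS-450 quadratic-estimator result (three z-bins), from the abstract:
s8_kids, err_kids = 0.651, 0.058
# Planck-like reference values: illustrative placeholders only.
s8_planck, err_planck = 0.84, 0.02

# Naive tension estimate in sigma, treating both errors as Gaussian:
tension = abs(s8_planck - s8_kids) / math.sqrt(err_kids**2 + err_planck**2)
print(round(tension, 1))  # ~3.1 with these placeholder numbers
```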
Nucleon EDM from Atomic Systems and Constraints on Supersymmetry Parameters
Oshima, Sachiko; Nihei, Takeshi; Fujita, Takehisa
2005-01-01
The nucleon EDM is shown to be directly related to the EDM of atomic systems. From the observed EDM values of the atomic Hg system, the neutron EDM can be extracted, which gives a very stringent constraint on the supersymmetry parameters. It is also shown that the measurement of Nitrogen and Thallium atomic systems should provide important information on the flavor dependence of the quark EDM. We perform numerical analyses on the EDM of neutron, proton and electron in the minimal supersymmetr...
Big Bang Nucleosynthesis and Cosmological Constraints on Neutrino Oscillation Parameters
Kirilova, Daniela P; Kirilova, Daniela; Chizhov, Mihail
2001-01-01
We present a review of cosmological nucleosynthesis (CN) with neutrino oscillations, discussing the different effects of oscillations on CN, namely: increase of the effective degrees of freedom during CN, spectrum distortion of the oscillating neutrinos, neutrino number density depletion, and growth of neutrino-antineutrino asymmetry due to active-sterile oscillations. We discuss the importance of these effects for the primordial yield of helium-4. The primordially produced He-4 value is obtained in a self-consistent study of the nucleons and the oscillating neutrinos. The effects of spectrum distortion, depletion and neutrino-antineutrino asymmetry growth on helium-4 production are explicitly calculated. An update of the cosmological constraints on active-sterile neutrino oscillation parameters is presented, giving the values: delta m^2 sin^8(2 theta) ≤ … eV^2 for delta m^2 > 0, and |delta m^2| < 8.2 x 10^{-10} eV^2 at large mixing angles for delta m^2 < 0. According to these constraints, besides the active-sterile LMA solution,...
Managing Constraint Generators in Retail Design Processes
DEFF Research Database (Denmark)
Münster, Mia Borch; Haug, Anders
Retail design concepts are complex designs meeting functional and aesthetic demands. During a design process a retail designer has to consider various constraint generators, such as stakeholder interests, physical limitations and restrictions. Obviously the architectural site, legislators and landlords need to be considered, as well as the interests of the client and brand owner. Furthermore, the users need to be taken into account in order to develop an interesting and functional shopping and working environment. Finally, suppliers and competitors may influence the design. Little research has addressed how these constraint generators shape retail design processes; based on six case studies of fashion store design projects, the present paper addresses this gap and sheds light on the types of constraints generated by the relevant constraint generators.
Lutchen, K R
1990-08-01
A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications involve four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2 to 64 Hz. The criterion function form was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. The proper choice of weighting can make all three criterion variables comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, the use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz, reducing data acquisition requirements from a 16-s to a 5.33- to 8-s breath-holding period. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.
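The core of such a sensitivity analysis — propagating a weighting choice through the linearized normal equations, cov ≈ (JᵀWJ)⁻¹σ², to predict parameter uncertainties — can be sketched for a two-parameter model (a generic toy, not the paper's respiratory models):

```python
def linearized_param_uncertainty(jacobian, weights, noise_var=1.0):
    """Approximate parameter covariance (J^T W J)^{-1} * noise_var for a
    weighted least-squares fit with two parameters (diagonal W)."""
    # Accumulate the 2x2 normal matrix J^T W J.
    a = b = c = 0.0
    for (j0, j1), w in zip(jacobian, weights):
        a += w * j0 * j0
        b += w * j0 * j1
        c += w * j1 * j1
    det = a * c - b * b
    cov = [[c / det, -b / det], [-b / det, a / det]]
    return [[noise_var * v for v in row] for row in cov]

# Toy model y = p0 + p1*x sampled at a few frequencies; Jacobian rows are (1, x).
xs = [0.125, 0.25, 0.5, 1.0, 2.0, 4.0]
J = [(1.0, x) for x in xs]
uniform = linearized_param_uncertainty(J, [1.0] * len(xs))
# Down-weighting the low-frequency points inflates the slope uncertainty:
lowcut = linearized_param_uncertainty(J, [0.1, 0.1, 1, 1, 1, 1])
print(lowcut[1][1] > uniform[1][1])  # True
```

The same machinery lets one compare criterion-variable choices (magnitude/phase vs. real/imaginary parts) by re-deriving the Jacobian and weights for each.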
Efficient classification of complete parameter regions based on semidefinite programming
Directory of Open Access Journals (Sweden)
Parrilo Pablo A
2007-01-01
Full Text Available Abstract Background Current approaches to parameter estimation are often inappropriate or inconvenient for the modelling of complex biological systems. For systems described by nonlinear equations, the conventional approach is to first numerically integrate the model and then, in a second a posteriori step, check for consistency with experimental constraints. Hence, only single parameter sets can be considered at a time. Consequently, it is impossible to conclude that the "best" solution was identified or that no good solution exists, because parameter spaces typically cannot be explored in a reasonable amount of time. Results We introduce a novel approach based on semidefinite programming to directly identify consistent steady state concentrations for systems consisting of mass action kinetics, i.e., polynomial equations and inequality constraints. The duality properties of semidefinite programming allow one to rigorously certify infeasibility for whole regions of parameter space, thus enabling the simultaneous multi-dimensional analysis of entire parameter sets. Conclusion Our algorithm reduces the computational effort of parameter estimation by several orders of magnitude, as illustrated through conceptual sample problems. Of particular relevance for systems biology, the approach can discriminate between structurally different candidate models by proving inconsistency with the available data.
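The key idea — certifying that an entire parameter region is infeasible rather than testing single parameter sets — can be illustrated with interval arithmetic standing in for the semidefinite-programming duality certificate (a deliberately simplified sketch; the example constraint k·x − 1 = 0 is hypothetical):

```python
def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    prods = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(prods), max(prods))

def certify_infeasible(k_box, x_box):
    """Certify that no (k, x) in the given boxes satisfies the steady-state
    balance k*x - 1 = 0, by bounding the residual's range over the whole box.
    Interval bounds here play the role of the SDP duality certificate."""
    r = interval_add(interval_mul(k_box, x_box), (-1.0, -1.0))
    return r[0] > 0 or r[1] < 0  # residual cannot be zero anywhere in the box

# For k in [2, 3] and x in [1, 2], k*x ranges over [2, 6], so k*x = 1 is impossible:
print(certify_infeasible((2.0, 3.0), (1.0, 2.0)))  # True
# For k in [0.4, 0.6] and x in [1, 3], the residual straddles zero: no certificate.
print(certify_infeasible((0.4, 0.6), (1.0, 3.0)))  # False
```

A returned `True` rules out the whole box in one step, which is exactly the advantage over integrate-then-check on single parameter sets.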
Nucleon EDM from atomic systems and constraints on supersymmetry parameters
International Nuclear Information System (INIS)
Oshima, Sachiko; Nihei, Takeshi; Fujita, Takehisa
2005-01-01
The nucleon EDM is shown to be directly related to the EDM of atomic systems. From the observed EDM values of the atomic Hg system, the neutron EDM can be extracted, which gives a very stringent constraint on the supersymmetry parameters. It is also shown that the measurement of Nitrogen and Thallium atomic systems should provide important information on the flavor dependence of the quark EDM. We perform numerical analyses on the EDM of neutron, proton and electron in the minimal supersymmetric standard model with CP-violating phases. We demonstrate that the new limit on the neutron EDM extracted from atomic systems excludes a wide parameter region of supersymmetry breaking masses above 1 TeV, while the old limit excludes only a small mass region below 1 TeV. (author)
Mandelbaum, Rachel; Slosar, Anže; Baldauf, Tobias; Seljak, Uroš; Hirata, Christopher M.; Nakajima, Reiko; Reyes, Reinabelle; Smith, Robert E.
2013-06-01
Recent studies have shown that the cross-correlation coefficient between galaxies and dark matter is very close to unity on scales outside a few virial radii of galaxy haloes, independent of the details of how galaxies populate dark matter haloes. This finding makes it possible to determine the dark matter clustering from measurements of galaxy-galaxy weak lensing and galaxy clustering. We present new cosmological parameter constraints based on large-scale measurements of spectroscopic galaxy samples from the Sloan Digital Sky Survey (SDSS) data release 7. We generalize the approach of Baldauf et al. to remove small-scale information (below 2 and 4 h^-1 Mpc for lensing and clustering measurements, respectively), where the cross-correlation coefficient differs from unity. We derive constraints for three galaxy samples covering 7131 deg², containing 69 150, 62 150 and 35 088 galaxies with mean redshifts of 0.11, 0.28 and 0.40. We clearly detect scale-dependent galaxy bias for the more luminous galaxy samples, at a level consistent with theoretical expectations. When we vary both σ8 and Ωm (and marginalize over non-linear galaxy bias) in a flat Λ cold dark matter model, the best-constrained quantity is σ8(Ωm/0.25)^0.57 = 0.80 ± 0.05 (1σ, stat. + sys.), where statistical and systematic errors (photometric redshift and shear calibration) have comparable contributions, and we have fixed ns = 0.96 and h = 0.7. These strong constraints on the matter clustering suggest that this method is competitive with cosmic shear in current data, while having very complementary and in some ways less serious systematics. We therefore expect that this method will play a prominent role in future weak lensing surveys. When we combine these data with Wilkinson Microwave Anisotropy Probe 7-year (WMAP7) cosmic microwave background (CMB) data, constraints on σ8, Ωm, H0, wde and ∑mν become 30-80 per cent tighter than with CMB data alone, since our data break several parameter
First passage time for a diffusive process under a geometric constraint
International Nuclear Information System (INIS)
Tateishi, A A; Michels, F S; Dos Santos, M A F; Lenzi, E K; Ribeiro, H V
2013-01-01
We investigate the solutions, survival probability, and first passage time for a two-dimensional diffusive process subjected to the geometric constraints of a backbone structure. We consider this process governed by a fractional Fokker–Planck equation, taking into account the boundary conditions ρ(0,y;t) = ρ(∞,y;t) = 0, ρ(x, ± ∞;t) = 0, and an arbitrary initial condition. Our results show an anomalous spreading and, consequently, unusual behavior of the survival probability and the first passage time distribution, which may be characterized by different regimes. In addition, depending on the choice of the parameters present in the fractional Fokker–Planck equation, the survival probability indicates that part of the system may be trapped in the branches of the backbone structure. (paper)
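The trapping role of the backbone's branches can be seen in a toy Monte Carlo simulation of an ordinary discrete random walk on a comb with an absorbing boundary (illustrative only; this is not the fractional Fokker–Planck dynamics analyzed in the paper, and all parameters are arbitrary):

```python
import random

def survival_on_comb(checkpoints, n_walkers=2000, seed=1):
    """Monte Carlo survival probability for a walker on a comb structure:
    motion along the backbone (x) is only possible at y == 0; otherwise the
    walker diffuses along a branch (y). Absorbing boundary at x == 0."""
    rng = random.Random(seed)
    t_max = max(checkpoints)
    alive_at = {t: 0 for t in checkpoints}
    for _ in range(n_walkers):
        x, y = 5, 0
        absorbed_at = None
        for t in range(1, t_max + 1):
            if y == 0 and rng.random() < 0.5:
                x += rng.choice((-1, 1))   # backbone move
            else:
                y += rng.choice((-1, 1))   # branch move (temporarily traps the walker)
            if x == 0:
                absorbed_at = t
                break
        for t in checkpoints:
            if absorbed_at is None or absorbed_at > t:
                alive_at[t] += 1
    return {t: alive_at[t] / n_walkers for t in checkpoints}

s = survival_on_comb([100, 400])
print(s[100] >= s[400])  # True: survivors can only decrease with time
```

Because backbone moves are only possible at y = 0, excursions into the branches slow the approach to the absorbing boundary, which is the qualitative mechanism behind the trapped fraction discussed above.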
Centrifugal compressor fault diagnosis based on qualitative simulation and thermal parameters
Lu, Yunsong; Wang, Fuli; Jia, Mingxing; Qi, Yuanchen
2016-12-01
This paper concerns fault diagnosis of centrifugal compressors based on thermal parameters. An improved qualitative simulation (QSIM) based fault diagnosis method is proposed to diagnose faults of a centrifugal compressor in a gas-steam combined-cycle power plant (CCPP). Qualitative models under normal and two faulty conditions have been built through analysis of the operating principle of the centrifugal compressor. To solve the problem of qualitatively describing the observations of system variables, a qualitative trend extraction algorithm is applied to extract the trends of the observations. For qualitative state matching, a sliding-window based matching strategy is proposed that combines variable operating-range constraints and qualitative constraints. The matching results are used to determine which QSIM model is more consistent with the running state of the system. The correct diagnosis of two typical faults, seal leakage and valve sticking, in the centrifugal compressor has validated the targeted performance of the proposed method, showing the advantage that fault root causes are contained in the thermal parameters.
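A minimal sketch of the two ingredients described above — qualitative trend extraction and sliding-window matching against qualitative model states — might look like this (the state patterns and threshold are hypothetical stand-ins, not the paper's QSIM models):

```python
def qualitative_trend(series, eps=0.05):
    """Map a numeric series to qualitative symbols: '+' rising, '-' falling,
    '0' steady (changes smaller than eps are treated as noise)."""
    trend = []
    for prev, cur in zip(series, series[1:]):
        d = cur - prev
        trend.append('+' if d > eps else '-' if d < -eps else '0')
    return trend

def match_state(trend, model_states, window=3):
    """Sliding-window match: score each qualitative model state by how many
    window symbols it explains; return the best-matching state."""
    scores = {}
    for name, pattern in model_states.items():
        score = 0
        for i in range(len(trend) - window + 1):
            win = trend[i:i + window]
            score += sum(1 for got, exp in zip(win, pattern) if got == exp)
        scores[name] = score
    return max(scores, key=scores.get)

# Hypothetical qualitative models: discharge pressure steady vs. falling.
states = {'normal': ['0', '0', '0'], 'seal_leakage': ['-', '-', '-']}
pressure = [10.0, 9.7, 9.5, 9.2, 8.9, 8.7]  # steadily dropping observation
print(match_state(qualitative_trend(pressure), states))  # seal_leakage
```

A production version would add the operating-range constraints the paper describes, rejecting a state whose variables fall outside their admissible ranges before scoring.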
Theory of constraints for publicly funded health systems.
Sadat, Somayeh; Carter, Michael W; Golden, Brian
2013-03-01
Originally developed in the context of publicly traded for-profit companies, theory of constraints (TOC) improves system performance through leveraging the constraint(s). While the theory seems to be a natural fit for resource-constrained publicly funded health systems, there is a lack of literature addressing the modifications required to adopt TOC and define the goal and performance measures. This paper develops a system dynamics representation of the classical TOC's system-wide goal and performance measures for publicly traded for-profit companies, which forms the basis for developing a similar model for publicly funded health systems. The model is then expanded to include some of the factors that affect system performance, providing a framework to apply TOC's process of ongoing improvement in publicly funded health systems. Future research is required to more accurately define the factors affecting system performance and populate the model with evidence-based estimates for various parameters in order to use the model to guide TOC's process of ongoing improvement.
Analysis report for WIPP colloid model constraints and performance assessment parameters
Energy Technology Data Exchange (ETDEWEB)
Mariner, Paul E.; Sassani, David Carl
2014-03-01
An analysis of the Waste Isolation Pilot Plant (WIPP) colloid model constraints and parameter values was performed. The focus of this work was primarily on intrinsic colloids, mineral fragment colloids, and humic substance colloids, with a lesser focus on microbial colloids. Comments by the US Environmental Protection Agency (EPA) concerning intrinsic Th(IV) colloids and Mg-Cl-OH mineral fragment colloids were addressed in detail, assumptions and data used to constrain colloid model calculations were evaluated, and inconsistencies between data and model parameter values were identified. This work resulted in a list of specific conclusions regarding model integrity, model conservatism, and opportunities for improvement related to each of the four colloid types included in the WIPP performance assessment.
Setting priorities in health care organizations: criteria, processes, and parameters of success.
Gibson, Jennifer L; Martin, Douglas K; Singer, Peter A
2004-09-08
Hospitals and regional health authorities must set priorities in the face of resource constraints. Decision-makers seek practical ways to set priorities fairly in strategic planning, but find limited guidance from the literature. Very little has been reported from the perspective of Board members and senior managers about what criteria, processes and parameters of success they would use to set priorities fairly. We facilitated workshops for board members and senior leadership at three health care organizations to assist them in developing a strategy for fair priority setting. Workshop participants identified 8 priority setting criteria, 10 key priority setting process elements, and 6 parameters of success that they would use to set priorities in their organizations. Decision-makers in other organizations can draw lessons from these findings to enhance the fairness of their priority setting decision-making. Lessons learned in three workshops fill an important gap in the literature about what criteria, processes, and parameters of success Board members and senior managers would use to set priorities fairly.
Symmetry Parameter Constraints from a Lower Bound on Neutron-matter Energy
Energy Technology Data Exchange (ETDEWEB)
Tews, Ingo [Institute for Nuclear Theory, University of Washington, Seattle, WA 98195-1550 (United States); Lattimer, James M. [Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794-3800 (United States); Ohnishi, Akira [Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502 (Japan); Kolomeitsev, Evgeni E., E-mail: itews@uw.edu, E-mail: james.lattimer@stonybrook.edu, E-mail: ohnishi@yukawa.kyoto-u.ac.jp, E-mail: e.kolomeitsev@gsi.de [Faculty of Natural Sciences, Matej Bel University, Tajovskeho 40, SK-97401 Banska Bystrica (Slovakia)
2017-10-20
We propose the existence of a lower bound on the energy of pure neutron matter (PNM) on the basis of unitary-gas considerations. We discuss its justification from experimental studies of cold atoms as well as from theoretical studies of neutron matter. We demonstrate that this bound results in limits to the density-dependent symmetry energy, which is the difference between the energies of symmetric nuclear matter and PNM. In particular, this bound leads to a lower limit to the volume symmetry energy parameter S_0. In addition, for assumed values of S_0 above this minimum, this bound implies both upper and lower limits to the symmetry energy slope parameter L, which describes the lowest-order density dependence of the symmetry energy. A lower bound on neutron-matter incompressibility is also obtained. These bounds are found to be consistent with both recent calculations of the energies of PNM and constraints from nuclear experiments. Our results are significant because several equations of state that are currently used in astrophysical simulations of supernovae and neutron star mergers, as well as in nuclear physics simulations of heavy-ion collisions, have symmetry energy parameters that violate these bounds. Furthermore, below the nuclear saturation density, the bound on neutron-matter energies leads to a lower limit to the density-dependent symmetry energy, which leads to upper limits to the nuclear surface symmetry parameter and the neutron-star crust–core boundary. We also obtain a lower limit to the neutron-skin thicknesses of neutron-rich nuclei. Above the nuclear saturation density, the bound on neutron-matter energies also leads to an upper limit to the symmetry energy, with implications for neutron-star cooling via the direct Urca process.
Facilitators and constraints at each stage of the migration decision process.
Kley, Stefanie
2017-10-01
Behavioural models of migration emphasize the importance of migration decision-making for the explanation of subsequent behaviour. But empirical migration research regularly finds considerable gaps between those who intend to migrate and those who actually realize their intention. This paper applies the Theory of Planned Behaviour, enriched by the Rubicon model, to test specific hypotheses about distinct effects of facilitators and constraints on specific stages of migration decision-making and behaviour. The data come from a tailor-made panel survey based on random samples of people drawn from two German cities in 2006-07. The results show that in conventional models the effects of facilitators and constraints on migration decision-making are likely to be underestimated. Splitting the process of migration decision-making into a pre-decisional and a pre-actional phase helps to avoid bias in the estimated effects of facilitators and constraints on both migration decision-making and migration behaviour.
Design constraints for electron-positron linear colliders
International Nuclear Information System (INIS)
Mondelli, A.; Chernin, D.
1991-01-01
A prescription for examining the design constraints in the e⁺e⁻ linear collider is presented. By specifying limits on certain key quantities, an allowed region of parameter space can be presented, hopefully clarifying some of the design options. The model starts with the parameters at the interaction point (IP), where the expressions for the luminosity, the disruption parameter, beamstrahlung, and average beam power constitute four relations among eleven IP parameters. By specifying the values of five of these quantities and using these relationships, the unknown parameter space can be reduced to a two-dimensional space. Curves of constraint can be plotted in this space to define an allowed operating region. An accelerator model, based on a modified, scaled SLAC structure, can then be used to derive the corresponding parameter space, including the constraints derived from power consumption and wake field effects. The results show that longer, lower-gradient accelerators are advantageous.
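The first of the IP relations mentioned above — the geometric luminosity — is standard and can serve as a small worked example of how fixing some beam parameters pins down the rest of the design space (the numerical values below are illustrative placeholders, not a proposed design):

```python
import math

def luminosity(n_per_bunch, rep_rate_hz, sigma_x_m, sigma_y_m, n_bunches=1):
    """Geometric collider luminosity L = N^2 f n_b / (4 pi sx sy), in m^-2 s^-1.
    No pinch/enhancement or hourglass factors: a first-cut IP design relation."""
    return (n_per_bunch**2 * rep_rate_hz * n_bunches) / (
        4 * math.pi * sigma_x_m * sigma_y_m)

# Illustrative SLC-era-like numbers (placeholders, not a specific machine):
L = luminosity(n_per_bunch=1e10, rep_rate_hz=120, sigma_x_m=1e-6, sigma_y_m=1e-6)
print(f"{L * 1e-4:.2e} cm^-2 s^-1")  # ~1e29 for these parameters
```

Holding L fixed and sweeping two remaining free parameters traces out exactly the kind of constraint curves in a two-dimensional design space that the abstract describes.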
Evaluating Direct Manipulation Operations for Constraint-Based Layout
Zeidler , Clemens; Lutteroth , Christof; Stuerzlinger , Wolfgang; Weber , Gerald
2013-01-01
Part 11: Interface Layout and Data Entry; International audience; Layout managers are used to control the placement of widgets in graphical user interfaces (GUIs). Constraint-based layout managers are more powerful than other ones. However, they are also more complex and their layouts are prone to problems that usually require direct editing of constraints. Today, designers commonly use GUI builders to specify GUIs. The complexities of traditional approaches to constraint-based layouts pose c...
Judgement of Design Scheme Based on Flexible Constraint in ICAD
Institute of Scientific and Technical Information of China (English)
无
2000-01-01
The concept of the flexible constraint is proposed in this paper. The solution of a flexible constraint lies within a specified range and may differ between different instances of the same design scheme. The paper emphasizes how to evaluate and optimize a design scheme with flexible constraints, based on a satisfaction degree function defined on the flexible constraints. The concept of the flexible constraint is used to resolve constraint conflicts and to optimize designs in complicated constraint-based assembly design within the PFM parametric assembly design system. A gear-box design instance is used to verify the optimization method.
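A satisfaction degree function defined on a flexible constraint, and a weighted aggregate for scoring a design scheme, might be sketched as follows (a hedged toy: the triangular membership form and the gear-box numbers are assumptions, not the PFM system's actual functions):

```python
def satisfaction_degree(value, low, high, ideal):
    """Satisfaction degree of a flexible constraint: 0 outside (low, high),
    1 at the ideal value, linear in between (triangular membership)."""
    if value <= low or value >= high:
        return 0.0
    if value <= ideal:
        return (value - low) / (ideal - low)
    return (high - value) / (high - ideal)

def scheme_score(values, constraints, weights):
    """Weighted aggregate satisfaction of a design scheme over its flexible
    constraints; higher is better, enabling comparison of scheme instances."""
    total = sum(weights)
    return sum(w * satisfaction_degree(v, *c)
               for v, c, w in zip(values, constraints, weights)) / total

# Hypothetical gear-box scheme: center distance (mm) and gear ratio as
# flexible constraints, each given as (low, high, ideal).
constraints = [(80.0, 120.0, 100.0), (2.0, 4.0, 3.0)]
score = scheme_score([105.0, 2.5], constraints, weights=[2.0, 1.0])
print(round(score, 4))  # 0.6667
```

Comparing such scores across candidate instances is one simple way to rank schemes when flexible constraints conflict.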
Event-related Potentials Reflecting the Processing of Phonological Constraint Violations
Domahs, Ulrike; Kehrein, Wolfgang; Knaus, Johannes; Wiese, Richard; Schlesewsky, Matthias
2009-01-01
How are violations of phonological constraints processed in word comprehension? The present article reports the results of an event-related potentials (ERP) study on a phonological constraint of German that disallows identical segments within a syllable or word (CC(i)VC(i)). We examined three
Observational constraint on spherical inhomogeneity with CMB and local Hubble parameter
Tokutake, Masato; Ichiki, Kiyotomo; Yoo, Chul-Moon
2018-03-01
We derive an observational constraint on a spherical inhomogeneity of the void centered at our position from the angular power spectrum of the cosmic microwave background (CMB) and local measurements of the Hubble parameter. The late-time behaviour of the void is assumed to be well described by the so-called Λ-Lemaître-Tolman-Bondi (ΛLTB) solution. Then, we restrict the models to asymptotically homogeneous models, each of which is approximated by a flat Friedmann-Lemaître-Robertson-Walker model. The late-time ΛLTB models are parametrized by four parameters, including the value of the cosmological constant and the local Hubble parameter. The other two parameters are used to parametrize the observed distance-redshift relation. Then, the ΛLTB models are constructed so that they are compatible with the given distance-redshift relation. Including conventional parameters for the CMB analysis, we characterize our models by seven parameters in total. The local Hubble measurements are reflected in the prior distribution of the local Hubble parameter. As a result of a Markov chain Monte Carlo analysis of the CMB temperature and polarization anisotropies, we found that inhomogeneous universe models with vanishing cosmological constant are ruled out, as expected. However, a significant under-density around us is still compatible with the angular power spectrum of the CMB and the local Hubble parameter.
Jung-Woon Yoo, John
2016-06-01
Since customer preferences change rapidly, there is a need for design processes with shorter product development cycles. Modularization plays a key role in achieving mass customization, which is crucial in today's competitive global market environments. Standardized interfaces among modularized parts have facilitated computational product design. To incorporate product size and weight constraints during computational design procedures, a mixed integer programming formulation is presented in this article. Product size and weight are two of the most important design parameters, as evidenced by recent smart-phone products. This article focuses on the integration of geometric, weight and interface constraints into the proposed mathematical formulation. The formulation generates the optimal selection of components for a target product, which satisfies geometric, weight and interface constraints. The formulation is verified through a case study and experiments are performed to demonstrate the performance of the formulation.
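The component-selection problem described above can be illustrated with a brute-force stand-in for the mixed integer program: choose one component per module, enforce interface compatibility and size/weight limits, and minimize cost (the catalog data are hypothetical, and a MIP solver would replace the exhaustive loop at realistic scale):

```python
from itertools import product

def select_components(catalog, max_size, max_weight, interface):
    """Pick one component per module so that total size and weight stay within
    limits and every interface matches; return the minimum-cost selection."""
    best = None
    for combo in product(*catalog.values()):
        if any(c['iface'] != interface for c in combo):
            continue  # interface constraint
        size = sum(c['size'] for c in combo)
        weight = sum(c['weight'] for c in combo)
        if size <= max_size and weight <= max_weight:  # geometric/weight constraints
            cost = sum(c['cost'] for c in combo)
            if best is None or cost < best[0]:
                best = (cost, combo)
    return best

# Hypothetical two-module catalog (e.g. battery and display for a phone):
catalog = {
    'battery': [{'size': 8, 'weight': 40, 'cost': 5, 'iface': 'v1'},
                {'size': 6, 'weight': 30, 'cost': 9, 'iface': 'v1'}],
    'display': [{'size': 10, 'weight': 25, 'cost': 12, 'iface': 'v1'},
                {'size': 9,  'weight': 20, 'cost': 20, 'iface': 'v2'}],
}
cost, combo = select_components(catalog, max_size=16, max_weight=60, interface='v1')
print(cost)  # 21: the cheap battery+display pair violates the size limit
```

In the MIP formulation each component gets a binary selection variable with a one-per-module equality constraint; the brute-force search above enumerates the same feasible set.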
Event-related potentials reflecting the processing of phonological constraint violations
Domahs, U.; Kehrein, W.; Knaus, J.; Wiese, R.; Schlesewsky, M.
2009-01-01
How are violations of phonological constraints processed in word comprehension? The present article reports the results of an event-related potentials (ERP) study on a phonological constraint of German that disallows identical segments within a syllable or word (CC(i)VC(i)). We examined three types of
Affine-Invariant Geometric Constraints-Based High Accuracy Simultaneous Localization and Mapping
Directory of Open Access Journals (Sweden)
Gangchen Hua
2017-01-01
Full Text Available In this study we describe a new appearance-based loop-closure detection method for online incremental simultaneous localization and mapping (SLAM) using affine-invariant geometric constraints. Unlike other pure bag-of-words-based approaches, our proposed method uses geometric constraints as a supplement to improve accuracy. By establishing an affine-invariant hypothesis, the proposed method excludes incorrect visual words and calculates the dispersion of correctly matched visual words to improve the accuracy of the likelihood calculation. In addition, the camera's intrinsic parameters and distortion coefficients are sufficient for this method; 3D measurement is not necessary. We use the mechanism of Long-Term Memory and Working Memory (WM) to manage the memory. Only a limited size of the WM is used for loop-closure detection; therefore the proposed method is suitable for large-scale real-time SLAM. We tested our method using the CityCenter and Lip6Indoor datasets. Our proposed method can effectively correct the typical false-positive localizations of previous methods, thus achieving better recall ratios and better precision.
Optimization of cutting parameters for machining time in turning process
Mavliutov, A. R.; Zlotnikov, E. G.
2018-03-01
This paper describes the most effective methods for nonlinear constrained optimization of cutting parameters in the turning process. Among them are the linear programming method with a dual-simplex algorithm, the interior point method, and the augmented Lagrangian genetic algorithm (ALGA). Each of them is tested on an actual example: the minimization of machining time in the turning process. The computation was conducted in the MATLAB environment. The comparative results obtained from the application of these methods show that the optimal values of the linearized objective and the original function are the same. ALGA gives sufficiently accurate values; however, when the algorithm uses the hybrid function with the interior point algorithm, the resulting values have the maximal accuracy.
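As a worked illustration of the underlying optimization problem (not the paper's MATLAB implementation), the single-pass turning time T = πDL/(1000·v·f) can be minimized over cutting speed v and feed f under a constraint; the power limit and its coefficient below are assumptions for the sketch:

```python
import math

def machining_time_min(d_mm, l_mm, v_m_min, f_mm_rev):
    """Single-pass turning time T = pi*D*L / (1000*v*f):
    D, L in mm; v in m/min; f in mm/rev; T in minutes."""
    return math.pi * d_mm * l_mm / (1000.0 * v_m_min * f_mm_rev)

def grid_minimize(d_mm, l_mm, v_range, f_range, power_limit):
    """Crude grid search over speed and feed under a hypothetical cutting-power
    constraint k*v*f <= power_limit; the solver-based methods in the paper
    (dual-simplex, interior point, ALGA) replace this loop at scale."""
    k = 2.0  # illustrative specific-cutting-power coefficient (assumption)
    best = None
    for i in range(51):
        v = v_range[0] + (v_range[1] - v_range[0]) * i / 50
        for j in range(51):
            f = f_range[0] + (f_range[1] - f_range[0]) * j / 50
            if k * v * f > power_limit:
                continue  # infeasible operating point
            t = machining_time_min(d_mm, l_mm, v, f)
            if best is None or t < best[0]:
                best = (t, v, f)
    return best

t, v, f = grid_minimize(50, 200, v_range=(60, 180), f_range=(0.1, 0.4),
                        power_limit=100)
print(t < machining_time_min(50, 200, 60, 0.1))  # True: optimum beats slowest setting
```

Because T falls monotonically in v·f, the optimum sits on the power-constraint boundary, which is why the constrained solvers compared in the paper matter.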
On capacity tradeoffs in secure DS-CDMA packet communications with QOS constraints
International Nuclear Information System (INIS)
Sattar, F.; Mufti, M.
2012-01-01
This paper presents a mathematical framework for analyzing the effect of counter mode (CTR) encryption on the traffic capacity of packet communication systems based on direct-sequence code-division multiple access (DS-CDMA). We specify QoS constraints in terms of the minimum acceptable mean opinion score (MOS) of voice payload, the maximum permissible resource utilization for CTR-mode re-keying, and the DS-CDMA processing gain, and we quantify the trade-offs in system capacity as a function of these constraints. Results show that applying CTR encryption causes error expansion, and that respecting the QoS constraints while satisfying the desired encryption parameters reduces traffic capacity. (author)
Mechanical properties correlation to processing parameters for advanced alumina based refractories
Directory of Open Access Journals (Sweden)
Dimitrijević Marija M.
2012-01-01
Full Text Available Alumina-based refractories are usually used in metallurgical furnaces, and their thermal shock resistance is of great importance. In order to improve the thermal shock resistance and mechanical properties of alumina-based refractories, short ceramic fibers were added to the material. The SEM technique was used to compare the microstructures of the specimens, and the observed images gave the porosity and morphological characteristics of the pores in the specimens. A standard compression test was used to determine the modulus of elasticity and compression strength. Results obtained from thermal shock testing and mechanical property measurements were used to establish regression models that correlate specimen properties to process parameters.
Consistency maintenance for constraint in role-based access control model
Institute of Scientific and Technical Information of China (English)
韩伟力; 陈刚; 尹建伟; 董金祥
2002-01-01
Constraint is an important aspect of role-based access control and is sometimes argued to be the principal motivation for role-based access control (RBAC). But so far few authors have discussed consistency maintenance for constraints in RBAC models. Based on research on constraints among roles and the types of inconsistency among constraints, this paper introduces corresponding formal rules, rule-based reasoning, and corresponding methods to detect, avoid and resolve these inconsistencies. Finally, the paper briefly introduces the application of consistency maintenance in ZD-PDM, an enterprise-oriented product data management (PDM) system.
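One elementary inconsistency check of this kind — flagging task pairs that are simultaneously declared mutually exclusive (different subjects required) and subject-bound (same subject required) — can be sketched as follows (a toy illustration; the task names are hypothetical and this is not ZD-PDM's rule engine):

```python
def find_conflicts(mutual_exclusions, bindings):
    """Detect a basic inconsistency among RBAC entailment constraints: a pair
    of tasks cannot be both mutually exclusive (must be done by different
    subjects) and subject-bound (must be done by the same subject)."""
    me = {frozenset(p) for p in mutual_exclusions}
    sb = {frozenset(p) for p in bindings}
    return sorted(tuple(sorted(p)) for p in me & sb)

# Hypothetical constraint sets over tasks of a business process:
mutex = [('approve_payment', 'request_payment'), ('audit', 'approve_payment')]
bind = [('request_payment', 'approve_payment')]
print(find_conflicts(mutex, bind))  # [('approve_payment', 'request_payment')]
```

A fuller rule engine would also close bindings transitively before intersecting, since a chain of binding constraints can collide with a mutual exclusion even when no single declared pair does.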
Risk constraint measures developed for the outcome-based strategy for tank waste management
International Nuclear Information System (INIS)
Harper, B.L.; Gajewski, S.J.; Glantz, C.L.
1996-09-01
This report is one of a series of supporting documents for the outcome-based characterization strategy developed by PNNL. This report presents a set of proposed risk measures with risk constraint (acceptance) levels for use in the Value of Information process used in the NCS. The characterization strategy has developed a risk-based Value of Information (VOI) approach for comparing the cost-effectiveness of characterizing versus mitigating particular waste tanks or tank clusters. The preference between characterizing or mitigating in order to prevent an accident depends on the cost of those activities relative to the cost of the consequences of the accident. The consequences are defined as adverse impacts measured across a broad set of risk categories such as worker dose, public cancers, ecological harm, and sociocultural impacts. Within each risk measure, various "constraint levels" have been identified that reflect regulatory standards or conventionally negotiated thresholds of harm to Hanford resources and values. The cost of consequences includes the "costs" of exceeding those constraint levels as well as a strictly linear costing per unit of impact within each of the risk measures. In actual application, VOI-based decision making is an iterative process, with a preliminary low-precision screen of potential technical options against the major risk constraints, followed by VOI analysis to determine the cost-effectiveness of gathering additional information and to select a preferred technical option, and finally a posterior screen to determine whether the preferred option meets all relevant risk constraints and acceptability criteria.
Setting priorities in health care organizations: criteria, processes, and parameters of success
Directory of Open Access Journals (Sweden)
Martin Douglas K
2004-09-01
Full Text Available Abstract Background Hospitals and regional health authorities must set priorities in the face of resource constraints. Decision-makers seek practical ways to set priorities fairly in strategic planning, but find limited guidance from the literature. Very little has been reported from the perspective of Board members and senior managers about what criteria, processes and parameters of success they would use to set priorities fairly. Discussion We facilitated workshops for board members and senior leadership at three health care organizations to assist them in developing a strategy for fair priority setting. Workshop participants identified 8 priority setting criteria, 10 key priority setting process elements, and 6 parameters of success that they would use to set priorities in their organizations. Decision-makers in other organizations can draw lessons from these findings to enhance the fairness of their priority setting decision-making. Summary Lessons learned in three workshops fill an important gap in the literature about what criteria, processes, and parameters of success Board members and senior managers would use to set priorities fairly.
Relaxation of the lower frit loading constraint for DWPF process control
International Nuclear Information System (INIS)
Brown, K.G.
2000-01-01
The lower limit on the frit loading parameter when measurement uncertainty is introduced has impacted DWPF performance during immobilization of Tank 42 Sludge; therefore, any defensible relaxation or omission of this constraint should correspondingly increase DWPF waste loading and efficiency. Waste loading should be increased because the addition of frit is the current remedy for exceeding the lower frit loading constraint. For example, frit was added to DWPF SME Batches 94, 97 and 98 to remedy these batches for low frit loading. Attempts were also made to add frit beyond the computed optimum to ensure the lower frit loading constraint would be satisfied; however, approximately half of the SME Batches produced after Batch 98 have violated the lower frit loading constraint. If the DWPF batches did not have to be remediated with additional frit because of the lower frit loading limit, then both the performance of the DWPF process and the waste loading in the glass produced would increase. Before determining whether or not the lower frit loading limit can be relaxed or omitted, the origin of this and the other constraints related to durability prediction must be examined. The lower frit loading constraint results from the need to make highly durable glass in DWPF. It is required that DWPF demonstrate that the glass produced would have durability that is at least two standard deviations greater than that of the Environmental Assessment (EA) glass. Glass durability cannot be measured in situ; it must be predicted from composition, which can be measured. Fortunately, the leaching characteristics of homogeneous waste glasses are strongly related to the total molar free energy of the constituent species. Thus the waste acceptance specification has been translated into a requirement that the total molar free energy associated with the glass composition that would be produced from a DWPF melter feed batch be less than that of the EA glass accounting for
Directory of Open Access Journals (Sweden)
Hiroyuki Goto
2013-07-01
Full Text Available A model predictive control-based scheduler for a class of discrete event systems is designed and developed. We focus on repetitive, multiple-input, multiple-output, and directed acyclic graph structured systems on which capacity constraints can be imposed. The target system’s behaviour is described by linear equations in max-plus algebra, referred to as state-space representation. Assuming that the system’s performance can be improved by paying additional cost, we adjust the system parameters and determine control inputs for which the reference output signals can be observed. The main contribution of this research is twofold: (1) for systems with capacity constraints, we derived an output prediction equation as a function of adjustable variables in recursive form; (2) regarding the construct for the system’s representation, we improved the structure to accomplish general operations which are essential for adjusting the system parameters. The result of numerical simulation in a later section demonstrates the effectiveness of the developed controller.
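The max-plus state-space representation mentioned above, x(k) = A ⊗ x(k-1) ⊕ B ⊗ u(k), y(k) = C ⊗ x(k), where ⊕ is maximization and ⊗ is ordinary addition, can be sketched in a few lines. The matrices below describe an invented two-machine line and are not taken from the paper.

```python
# Minimal max-plus algebra sketch of a state-space recursion
#   x(k) = A (x) x(k-1)  (+)  B (x) u(k),   y(k) = C (x) x(k)
# where (+) is max and (x) is ordinary addition.
NEG_INF = float("-inf")  # the max-plus zero element (often written epsilon)

def mp_matvec(M, v):
    """Max-plus matrix-vector product: (M (x) v)_i = max_j (M[i][j] + v[j])."""
    return [max(m_ij + v_j for m_ij, v_j in zip(row, v)) for row in M]

def mp_add(a, b):
    """Element-wise max-plus addition ((+) = max)."""
    return [max(p, q) for p, q in zip(a, b)]

# Illustrative two-machine line: entries encode processing/transfer times.
A = [[3, NEG_INF],
     [8, 5]]
B = [[0], [5]]          # raw-material release feeds the line
C = [[NEG_INF, 0]]      # output observed at the second machine

x = [0, 5]              # initial completion times
u = [[0], [10], [20]]   # release times of successive jobs
for k in range(1, 3):
    x = mp_add(mp_matvec(A, x), mp_matvec(B, u[k]))
    y = mp_matvec(C, x)
    print(f"k={k}: completion times {x}, output {y}")
```

Each iteration of the loop is one step of the prediction recursion; capacity constraints and adjustable parameters would enter by modifying the entries of A and B.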
Lankford, Christopher L; Does, Mark D
2018-02-01
Quantitative MRI may require correcting for nuisance parameters which can or must be constrained to independently measured or assumed values. The noise and/or bias in these constraints propagate to fitted parameters. For example, the case of refocusing pulse flip angle constraint in multiple spin echo T2 mapping is explored. An analytical expression for the mean-squared error of a parameter of interest was derived as a function of the accuracy and precision of an independent estimate of a nuisance parameter. The expression was validated by simulations and then used to evaluate the effects of flip angle (θ) constraint on the accuracy and precision of T̂2 for a variety of multi-echo T2 mapping protocols. Constraining θ improved T̂2 precision when the θ-map signal-to-noise ratio was greater than approximately one-half that of the first spin echo image. For many practical scenarios, constrained fitting was calculated to reduce not just the variance but the full mean-squared error of T̂2, for bias in θ̂ ≲ 6%. The analytical expression derived in this work can be applied to inform experimental design in quantitative MRI. The example application to T2 mapping provided specific cases, depending on θ̂ accuracy and precision, in which θ̂ measurement and constraint would be beneficial to T̂2 variance or mean-squared error. Magn Reson Med 79:673-682, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
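The core trade-off in the abstract, that constraining a nuisance parameter reduces variance while its bias propagates into the fitted parameter, can be illustrated with a toy Monte Carlo. A linear model stands in for the actual multi-echo T2 signal model, and all numbers are invented.

```python
# Toy Monte Carlo: constraining a (slightly biased) nuisance parameter
# trades bias for variance in the parameter of interest. This is a linear
# stand-in for the multi-echo T2 problem, not the paper's signal model.
import random

random.seed(0)
t = [1.0, 2.0, 3.0, 4.0]
a_true, b_true, sigma = 2.0, 0.5, 0.3   # a = nuisance, b = of interest
a_hat_bias = 0.05                       # bias of the independent estimate of a

def fit_joint(y):
    """Ordinary least squares for y = a + b*t with both parameters free."""
    n, st, sy = len(t), sum(t), sum(y)
    sty = sum(ti * yi for ti, yi in zip(t, y))
    stt = sum(ti * ti for ti in t)
    return (n * sty - st * sy) / (n * stt - st * st)

def fit_constrained(y, a_fixed):
    """Least squares for b with the nuisance a constrained to a_fixed."""
    return sum(ti * (yi - a_fixed) for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)

def mse(estimates):
    """Full mean-squared error (bias^2 + variance) around the true b."""
    return sum((b - b_true) ** 2 for b in estimates) / len(estimates)

joint, constrained = [], []
for _ in range(5000):
    y = [a_true + b_true * ti + random.gauss(0.0, sigma) for ti in t]
    joint.append(fit_joint(y))
    constrained.append(fit_constrained(y, a_true + a_hat_bias))

print(f"MSE joint fit:       {mse(joint):.5f}")
print(f"MSE constrained fit: {mse(constrained):.5f}")
```

Despite the 5% bias in the constrained nuisance value, the constrained fit here has the lower full mean-squared error, which is the regime the abstract identifies for small θ̂ bias.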
Time Optimal Run-time Evaluation of Distributed Timing Constraints in Process Control Software
DEFF Research Database (Denmark)
Drejer, N.; Kristensen, C.H.
1993-01-01
This paper considers run-time evaluation of an important class of constraints; Timing constraints. These appear extensively in process control systems. Timing constraints are considered in distributed systems, i.e. systems consisting of multiple autonomous nodes......
Consistency maintenance for constraint in role-based access control model
Institute of Scientific and Technical Information of China (English)
韩伟力; 陈刚; 尹建伟; 董金祥
2002-01-01
Constraint is an important aspect of role-based access control and is sometimes argued to be the principal motivation for role-based access control (RBAC). But so far few authors have discussed consistency maintenance for constraint in RBAC model. Based on researches of constraints among roles and types of inconsistency among constraints, this paper introduces corresponding formal rules, rule-based reasoning and corresponding methods to detect, avoid and resolve these inconsistencies. Finally, the paper introduces briefly the application of consistency maintenance in ZD-PDM, an enterprise-oriented product data management (PDM) system.
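Two of the inconsistency types such rule-based reasoning must catch can be sketched directly: a binding constraint that bonds roles declared statically mutually exclusive, and a user assignment containing a mutually exclusive role pair. The role names and rule labels below are hypothetical, not the paper's formal rules.

```python
# Hypothetical consistency checks among RBAC constraints: detect
# (1) a binding constraint over a statically mutually exclusive role pair,
# (2) a user assigned two statically mutually exclusive roles.
mutually_exclusive = {frozenset({"purchaser", "approver"}),
                      frozenset({"cashier", "auditor"})}
bindings = [("purchaser", "approver")]           # inconsistent on purpose
assignments = {"alice": {"purchaser", "cashier"},
               "bob": {"cashier", "auditor"}}    # bob violates exclusion

def conflicts():
    found = []
    for r1, r2 in bindings:                      # binding vs. exclusion
        if frozenset({r1, r2}) in mutually_exclusive:
            found.append(("binding-vs-sme", r1, r2))
    for user, roles in assignments.items():      # assignment vs. exclusion
        for pair in mutually_exclusive:
            if pair <= roles:
                found.append(("assignment-vs-sme", user, *sorted(pair)))
    return found

for c in conflicts():
    print(c)
```

A real maintenance component would go further and resolve the conflicts (e.g., by revoking one role or dropping the binding) rather than merely reporting them.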
Detecting relic gravitational waves in the CMB: Optimal parameters and their constraints
International Nuclear Information System (INIS)
Zhao, W.; Baskaran, D.
2009-01-01
The prospect of detecting relic gravitational waves, through their imprint in the cosmic microwave background radiation, provides an excellent opportunity to study the very early Universe. In the simplest viable theoretical models the relic gravitational wave background is characterized by two parameters, the tensor-to-scalar ratio r and the tensor spectral index n_t. In this paper, we analyze the potential joint constraints on these two parameters, r and n_t, using the data from the upcoming cosmic microwave background radiation experiments. Introducing the notion of the best-pivot multipole l_t*, we find that at this pivot multipole the parameters r and n_t are uncorrelated, and have the smallest variances. We derive the analytical formulas for the best-pivot multipole number l_t* and the variances of the parameters r and n_t. We verify these analytical calculations using numerical simulation methods, and find agreement to within 20%. The analytical results provide a simple way to estimate the detection ability for the relic gravitational waves by the future observations of the cosmic microwave background radiation.
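The best-pivot idea has a simple analogue in ordinary linear fitting: for y = a + n·(x − x0), the estimates of a and n become uncorrelated exactly when the pivot x0 is the mean of the abscissa values, mirroring the choice of l_t* that decorrelates r and n_t. The numbers below are illustrative only.

```python
# Pivot choice and parameter decorrelation in a toy linear fit,
# analogous to the best-pivot multipole. Illustrative values only.
import random

random.seed(1)
x = [1.0, 2.0, 3.0, 5.0, 9.0]

def fit(ys, x0):
    """OLS of y = a + n*(x - x0); returns (a_hat, n_hat)."""
    u = [xi - x0 for xi in x]
    ubar = sum(u) / len(u)
    n_hat = sum((ui - ubar) * yi for ui, yi in zip(u, ys)) / \
            sum((ui - ubar) ** 2 for ui in u)
    a_hat = sum(ys) / len(ys) - n_hat * ubar
    return a_hat, n_hat

def covariance(samples):
    a_mean = sum(a for a, _ in samples) / len(samples)
    n_mean = sum(n for _, n in samples) / len(samples)
    return sum((a - a_mean) * (n - n_mean) for a, n in samples) / len(samples)

covs = {}
for label, x0 in [("x0 = 0", 0.0), ("x0 = mean(x)", sum(x) / len(x))]:
    samples = []
    for _ in range(4000):
        ys = [0.3 + 0.1 * xi + random.gauss(0.0, 0.2) for xi in x]
        samples.append(fit(ys, x0))
    covs[label] = covariance(samples)
    print(f"{label}: cov(a_hat, n_hat) = {covs[label]:+.4f}")
```

At the mean-of-x pivot the sampled covariance collapses to statistical noise, while an arbitrary pivot leaves a systematic correlation between the two estimates.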
Hower, Walter; Graf, Winfried H.
1995-01-01
The present work compiles numerous papers in the area of computer-aided design, graphics, layout configuration, and user interfaces in general. There is nearly no conference on graphics, multimedia, and user interfaces that does not include a section on constraint-based graphics; on the other hand most conferences on constraint processing favour applications in graphics. This work of bibliographical pointers may serve as a basis for a detailed and comprehensive survey of this important and ch...
Export constraints facing Lesotho-based manufacturing enterprises
Directory of Open Access Journals (Sweden)
Motšelisi C. Mokhethi
2015-07-01
Full Text Available Orientation: Exporting is preferred by many enterprises as the mode of foreign entry as it requires less commitment of organisational resources and offers flexibility of managerial actions. However, enterprises face a number of challenges when attempting to initiate exports or expand their export operations. Research purpose: This study was undertaken to determine the characteristics and composition of export barriers constraining exporting by Lesotho-based manufacturing enterprises. Motivation for the study: Lesotho is faced with low destination diversity and low diversity in export products. Research design, approach and method: Data was collected from 162 Lesotho-based manufacturing enterprises through a self-administered questionnaire. Main findings: In its findings, the study firstly identified international constraints, distribution constraints and financial constraints as factors constraining exporting. Secondly, it was determined that three exporting constraints, all internal to the enterprise and all related to one factor (namely financial constraint), hampered exporting. Lastly, the ANOVA results revealed that the perceptions of export constraints differed according to the enterprise characteristics: enterprise size, ownership and type of industry. Contribution/value-add: With the majority of enterprises in this study being identified as micro-enterprises, the government of Lesotho needs to pay particular attention to addressing the export needs of these enterprises in order to enable them to participate in exporting activities − especially considering that they can play a pivotal role in the alleviation of poverty, job creation and economic rejuvenation.
CD4+ T-cell epitope prediction using antigen processing constraints.
Mettu, Ramgopal R; Charles, Tysheena; Landry, Samuel J
2016-05-01
T-cell CD4+ epitopes are important targets of immunity against infectious diseases and cancer. State-of-the-art methods for MHC class II epitope prediction rely on supervised learning methods in which an implicit or explicit model of sequence specificity is constructed using a training set of peptides with experimentally tested MHC class II binding affinity. In this paper we present a novel method for CD4+ T-cell epitope prediction based on modeling antigen-processing constraints. Previous work indicates that dominant CD4+ T-cell epitopes tend to occur adjacent to sites of initial proteolytic cleavage. Given an antigen with known three-dimensional structure, our algorithm first aggregates four types of conformational stability data in order to construct a profile of stability that allows us to identify regions of the protein that are most accessible to proteolysis. Using this profile, we then construct a profile of epitope likelihood based on the pattern of transitions from unstable to stable regions. We validate our method using 35 datasets of experimentally measured CD4+ T cell responses of mice bearing I-Ab or HLA-DR4 alleles as well as of human subjects. Overall, our results show that antigen processing constraints provide a significant source of predictive power. For epitope prediction in single-allele systems, our approach can be combined with sequence-based methods, or used in instances where little or no training data is available. In multiple-allele systems, sequence-based methods can only be used if the allele distribution of a population is known. In contrast, our approach does not make use of MHC binding prediction, and is thus agnostic to MHC class II genotypes. Copyright © 2016 Elsevier B.V. All rights reserved.
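The transition-scanning step described above can be sketched as a pass over a per-residue stability profile: windows immediately downstream of an unstable-to-stable transition are flagged as elevated epitope likelihood. The threshold, window size, and profile values below are hypothetical, not the paper's.

```python
# Hypothetical sketch of scanning a conformational-stability profile for
# unstable -> stable transitions, the pattern the method associates with
# elevated CD4+ epitope likelihood. Threshold and window are illustrative.
def epitope_candidates(stability, threshold=0.5, window=5):
    """Return index ranges starting where an unstable run meets a stable run."""
    candidates = []
    for i in range(1, len(stability)):
        if stability[i - 1] < threshold <= stability[i]:
            candidates.append((i, min(i + window, len(stability))))
    return candidates

# Toy profile: low values = flexible / proteolysis-accessible, high = stable.
profile = [0.2, 0.3, 0.1, 0.8, 0.9, 0.7, 0.6, 0.3, 0.2, 0.9, 0.8]
print(epitope_candidates(profile))
```

In the actual method the profile itself is aggregated from four kinds of conformational stability data rather than supplied directly.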
International Nuclear Information System (INIS)
Frida Iswinning Diah; Slamet Santosa
2012-01-01
The design and construction of a process parameter identification device, using a personal computer communicating serially with an M-series PLC, has been completed. The function of this device is to identify the process parameters of a system (plan), which are then analyzed so that follow-up action on the plan can be taken by the user. The main components of this device are the M-Series T100MD1616 PLC and a personal computer (PC). In this device, the plan parameter data are obtained from the corresponding sensor outputs in the form of voltage or current. The analog parameter data are matched to the ADC analog input of the PLC using a signal conditioning system. Each parameter is then processed by the PLC and sent to a PC via RS232 to be displayed in the form of graphs or tables and stored in the database. The database software is created using Visual Basic 6. The device operation test was performed by measuring the temperature and vacuum level on the plasma nitriding machine. The results indicate that the device functions as a process parameter identification device for the plasma nitriding machine. (author)
International Nuclear Information System (INIS)
Joseph, Joby; Muthukumaran, S.
2016-01-01
Abundant improvements have occurred in materials handling, especially in metal joining. Pulsed current gas tungsten arc welding (PCGTAW) is one of the consequential fusion techniques. In this work, PCGTAW of AISI 4135 steel engendered through powder metallurgy (P/M) has been executed, and the process parameters have been highlighted applying Taguchi's L9 orthogonal array. The results show that the peak current (Ip), gas flow rate (GFR), welding speed (WS) and base current (Ib) are the critical constraints that strongly determine the tensile strength (TS) as well as the percentage of elongation (% Elong) of the joint. The practical impact of applying Genetic algorithm (GA) and Simulated annealing (SA) to the PCGTAW process has been validated by calculating the deviation between predicted and experimental welding process parameters.
Assessment of Constraint Effects based on Local Approach
International Nuclear Information System (INIS)
Lee, Tae Rin; Chang, Yoon Suk; Choi, Jae Boong; Seok, Chang Sung; Kim, Young Jin
2005-01-01
Traditional fracture mechanics has been used to ensure structural integrity, in which geometry independence is assumed in crack tip deformation and fracture toughness. However, the assumption is applicable only within limited conditions. To address fracture covering a broad range of loading and crack geometries, the two-parameter global approach and the local approach have been proposed. The two-parameter global approach can quantify the load and crack geometry effects by adopting the T-stress or Q-parameter, but it is time-consuming and expensive since many experiments and finite element (FE) analyses are necessary. On the other hand, the local approach evaluates the load and crack geometry effects based on a damage model. Once material-specific fitting constants are determined from a few experiments and FE analyses, the fracture resistance characteristics can be obtained by numerical simulation. The purpose of this paper is to investigate constraint effects for compact tension (CT) specimens with different in-plane or out-of-plane sizes using the local approach. Both the modified GTN model and the Rousselier model are adopted to examine the ductile fracture behavior of SA515 Gr.60 carbon steel at high temperature. The fracture resistance (J-R) curves are estimated through numerical analysis and compared with corresponding experimental results, and then crack length, thickness and side-groove effects are evaluated.
Screening key parameters related to passive system performance based on Analytic Hierarchy Process
International Nuclear Information System (INIS)
Ma, Guohang; Yu, Yu; Huang, Xiong; Peng, Yuan; Ma, Nan; Shan, Zuhua; Niu, Fenglei; Wang, Shengfei
2015-01-01
Highlights: • An improved AHP method is presented for screening key parameters used in passive system reliability analysis. • We take special bottom-level parameters as criteria for calculation, and the effect of abrupt changes in the results is verified. • Combination weights are also affected by the uncertainty of input parameters. - Abstract: Passive safety systems are widely used in new generation nuclear power plant (NPP) designs such as AP1000 to improve reactor safety, benefiting from their simple construction and reduced need for human intervention. However, functional failure induced by uncertainty in the system thermal–hydraulic (T–H) performance becomes one of the main contributors to system operational failure, since the system operates based on natural circulation; this should be considered in the system reliability evaluation. In order to improve calculation efficiency, the key parameters which significantly affect the system T–H characteristics can be screened and then analyzed in detail. The Analytic Hierarchy Process (AHP) is one of the efficient methods to analyze the influence of the parameters on a passive system based on the experts’ experience. The passive containment cooling system (PCCS) in AP1000 is one of the typical passive safety systems; nevertheless, many parameters need to be analyzed and the T–H model itself is complicated, so the traditional AHP method should be amended in order to screen key parameters efficiently. In this paper, we improve the method in hierarchy construction and the integration of experts’ opinions: some parameters that lie at the bottom of the traditional hierarchy are treated as the criterion layer in the improved AHP, and the rationality of the method and the effect of abrupt changes in the data are verified. The passive containment cooling system (PCCS) in AP1000 is evaluated as an example, and four key parameters are selected from 49 inputs.
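The standard AHP building block underlying such screening, deriving priority weights from a pairwise comparison matrix via its principal eigenvector and checking Saaty's consistency ratio, can be sketched briefly. The comparison matrix below is invented; it is not the paper's modified hierarchy.

```python
# Standard AHP step (not the paper's improved hierarchy): priority weights
# from a pairwise comparison matrix via power iteration, plus the
# consistency ratio CR = CI / RI.
def ahp_weights(M, iters=100):
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):                      # power iteration
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [vi / s for vi in v]
    # Estimate the principal eigenvalue and Saaty's consistency ratio.
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)                    # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # Saaty's random index
    return w, ci / ri

# Pairwise judgements for three hypothetical parameters (Saaty 1-9 scale).
M = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
weights, cr = ahp_weights(M)
print("weights:", [round(x, 3) for x in weights], "CR:", round(cr, 3))
```

A CR below 0.1 is the conventional acceptance threshold for the judgements; the paper's contribution concerns how the hierarchy is built and how experts' matrices are combined, not this basic eigenvector step.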
Robust and Accurate Image-Based Georeferencing Exploiting Relative Orientation Constraints
Cavegn, S.; Blaser, S.; Nebiker, S.; Haala, N.
2018-05-01
Urban environments with extended areas of poor GNSS coverage as well as indoor spaces that often rely on real-time SLAM algorithms for camera pose estimation require sophisticated georeferencing in order to fulfill our high requirements of a few centimeters for absolute 3D point measurement accuracies. Since we focus on image-based mobile mapping, we extended the structure-from-motion pipeline COLMAP with georeferencing capabilities by integrating exterior orientation parameters from direct sensor orientation or SLAM as well as ground control points into bundle adjustment. Furthermore, we exploit constraints for relative orientation parameters among all cameras in bundle adjustment, which leads to a significant robustness and accuracy increase especially by incorporating highly redundant multi-view image sequences. We evaluated our integrated georeferencing approach on two data sets, one captured outdoors by a vehicle-based multi-stereo mobile mapping system and the other captured indoors by a portable panoramic mobile mapping system. We obtained mean RMSE values for check point residuals between image-based georeferencing and tachymetry of 2 cm in an indoor area, and 3 cm in an urban environment where the measurement distances are a multiple compared to indoors. Moreover, in comparison to a solely image-based procedure, our integrated georeferencing approach showed a consistent accuracy increase by a factor of 2-3 at our outdoor test site. Due to pre-calibrated relative orientation parameters, images of all camera heads were oriented correctly in our challenging indoor environment. By performing self-calibration of relative orientation parameters among respective cameras of our vehicle-based mobile mapping system, remaining inaccuracies from suboptimal test field calibration were successfully compensated.
An adaptive ES with a ranking based constraint handling strategy
Directory of Open Access Journals (Sweden)
Kusakci Ali Osman
2014-01-01
Full Text Available To solve a constrained optimization problem, equality constraints can be used to eliminate a problem variable. If it is not feasible, the relations imposed implicitly by the constraints can still be exploited. Most conventional constraint handling methods in Evolutionary Algorithms (EAs) do not consider the correlations between problem variables imposed by the constraints. This paper relies on the idea that a proper search operator, which captures mentioned implicit correlations, can improve performance of evolutionary constrained optimization algorithms. To realize this, an Evolution Strategy (ES) along with a simplified Covariance Matrix Adaptation (CMA) based mutation operator is used with a ranking based constraint-handling method. The proposed algorithm is tested on 13 benchmark problems as well as on a real life design problem. The outperformance of the algorithm is significant when compared with conventional ES-based methods.
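One widely used ranking-based constraint-handling rule, used here as a generic stand-in for the paper's method, is feasibility ranking: feasible solutions outrank infeasible ones, two feasible solutions compare by objective, and two infeasible ones compare by total constraint violation. The toy problem below is invented.

```python
# Deb-style feasibility ranking as a generic ranking-based constraint
# handler (a stand-in, not the paper's exact method). Toy problem:
# minimize f subject to g1: x0 + x1 <= 1 and g2: x0 >= 0.
from functools import cmp_to_key

def violation(x):
    """Summed constraint violation; zero means feasible."""
    return max(0.0, x[0] + x[1] - 1.0) + max(0.0, -x[0])

def objective(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def compare(a, b):
    va, vb = violation(a), violation(b)
    if va == 0.0 and vb == 0.0:            # both feasible: by objective
        return -1 if objective(a) < objective(b) else 1
    if va == 0.0 or vb == 0.0:             # feasible beats infeasible
        return -1 if va == 0.0 else 1
    return -1 if va < vb else 1            # both infeasible: by violation

population = [(2.0, 1.0), (0.5, 0.4), (0.9, 0.1), (-1.0, 0.5)]
ranked = sorted(population, key=cmp_to_key(compare))
print(ranked)
```

In an ES, this ranking replaces raw fitness when selecting parents, so no penalty weights need tuning; the paper combines such a ranking with a CMA-style mutation operator.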
DEFF Research Database (Denmark)
Tutum, Cem Celal; Schmidt, Henrik Nikolaj Blicher; Hattel, Jesper Henri
2008-01-01
In the present paper, numerical optimization of the process parameters, i.e. tool rotation speed and traverse speed, aiming at minimization of the two conflicting objectives, i.e. the residual stresses and welding time, subjected to process-specific thermal constraints in friction stir welding, is investigated. The welding process is simulated in 2-dimensions with a sequentially coupled transient thermo-mechanical model using ANSYS. The numerical optimization problem is implemented in modeFRONTIER and solved using the Multi-Objective Genetic Algorithm (MOGA-II). An engineering-wise evaluation or ranking...
Multiparameter Elastic Full Waveform Inversion With Facies Constraints
Zhang, Zhendong
2017-08-17
Full waveform inversion (FWI) aims to fully benefit from all the data characteristics to estimate the parameters describing the assumed physics of the subsurface. However, current efforts to utilize full waveform inversion as a tool beyond acoustic imaging applications, for example in reservoir analysis, face inherent challenges related to the limited resolution and the potential trade-off between the elastic model parameters. Adding rock physics constraints does help to mitigate these issues, but current approaches to add such constraints are based on including them as a priori knowledge, mostly valid around the well, or as a boundary condition for the whole area. Since certain rock formations inside the Earth admit consistent elastic properties and relative values of elastic and anisotropic parameters (facies), utilizing such localized facies information in FWI can improve the resolution of inverted parameters. We propose a novel confidence-map-based approach to utilize the facies-based constraints in both isotropic and anisotropic elastic FWI. We invert for such a confidence map using Bayesian theory, in which the confidence map is updated at each iteration of the inversion using both the inverted models and prior information. The numerical examples show that the proposed method can reduce the trade-offs and also can improve the resolution of the inverted elastic and anisotropic properties.
Chang, Su-Chao; Chou, Chi-Min
2012-11-01
The objective of this study was to determine empirically the role of constraint-based and dedication-based influences as drivers of the intention to continue using online shopping websites. Constraint-based influences consist of two variables: trust and perceived switching costs. Dedication-based influences consist of three variables: satisfaction, perceived usefulness, and trust. The current results indicate that both constraint-based and dedication-based influences are important drivers of the intention to continue using online shopping websites. The data also shows that trust has the strongest total effect on online shoppers' intention to continue using online shopping websites. In addition, the results indicate that the antecedents of constraint-based influences, technical bonds (e.g., perceived operational competence and perceived website interactivity) and social bonds (e.g., perceived relationship investment, community building, and intimacy) have indirect positive effects on the intention to continue using online shopping websites. Based on these findings, this research suggests that online shopping websites should build constraint-based and dedication-based influences to enhance user's continued online shopping behaviors simultaneously.
Route constraints model based on polychromatic sets
Yin, Xianjun; Cai, Chao; Wang, Houjun; Li, Dongwu
2018-03-01
With the development of unmanned aerial vehicle (UAV) technology, the fields of its application are constantly expanding. The mission planning of a UAV is especially important, and the planning result directly influences whether the UAV can accomplish its task. In order to make the results of mission planning for unmanned aerial vehicles more realistic, it is necessary to consider not only the physical properties of the aircraft, but also the constraints among the various equipment on the UAV. However, the constraints among the equipment of a UAV are complex, and the equipment has strong diversity and variability, which makes these constraints difficult to describe. In order to solve the above problem, this paper, referring to the polychromatic sets theory used in the advanced manufacturing field to describe complex systems, presents a mission constraint model of a UAV based on polychromatic sets.
Simulating non-holonomic constraints within the LCP-based simulation framework
DEFF Research Database (Denmark)
Ellekilde, Lars-Peter; Petersen, Henrik Gordon
2006-01-01
In this paper, we will extend the linear complementarity problem-based rigid-body simulation framework with non-holonomic constraints. We consider three different types of such, namely equality, inequality and contact constraints. We show how non-holonomic equality and inequality constraints can be incorporated directly, and derive formalism for how the non-holonomic contact constraints can be modelled as a combination of non-holonomic equality constraints and ordinary contact constraints. For each of these three we are able to guarantee solvability, when using Lemke's algorithm. A number of examples are included to demonstrate the non-holonomic constraints.
A Constraint programming-based genetic algorithm for capacity output optimization
Directory of Open Access Journals (Sweden)
Kate Ean Nee Goh
2014-10-01
Full Text Available Purpose: The manuscript presents an investigation into a constraint programming-based genetic algorithm for capacity output optimization in a back-end semiconductor manufacturing company. Design/methodology/approach: In the first stage, constraint programming defining the relationships between variables was formulated into the objective function. A genetic algorithm model was created in the second stage to optimize capacity output. Three demand scenarios were applied to test the robustness of the proposed algorithm. Findings: CPGA improved both the machine utilization and capacity output once the minimum requirements of a demand scenario were fulfilled. Capacity outputs of the three scenarios were improved by 157%, 7%, and 69%, respectively. Research limitations/implications: The work relates to aggregate planning of machine capacity in a single case study. The constraints and constructed scenarios were therefore industry-specific. Practical implications: Capacity planning in a semiconductor manufacturing facility needs to consider multiple mutually influencing constraints in resource availability, process flow and product demand. The findings prove that CPGA is a practical and efficient alternative to optimize the capacity output and to allow the company to review its capacity with quick feedback. Originality/value: The work integrates two contemporary computational methods for a real industry application conventionally reliant on human judgement.
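The CPGA combination can be sketched as a genetic algorithm whose fitness folds constraint-programming-style feasibility rules (capacity limits, minimum demand) into a penalty. The machines, rates, and demand figures below are invented for illustration and bear no relation to the case-study data.

```python
# Hedged sketch of a constraint-aware GA for capacity output: hours are
# allocated to machines, and capacity/demand constraints enter the fitness
# as penalties. All figures are hypothetical.
import random

random.seed(7)
RATES = [40, 55, 30]          # units/hour per machine (hypothetical)
HOURS = 160                   # capacity limit per machine
MIN_DEMAND = 6000             # minimum required total output

def output(alloc):
    return sum(r * h for r, h in zip(RATES, alloc))

def fitness(alloc):
    """Output minus penalties for capacity and demand violations."""
    penalty = sum(max(0, h - HOURS) for h in alloc) * 100
    penalty += max(0, MIN_DEMAND - output(alloc))
    return output(alloc) - penalty

def mutate(alloc):
    i = random.randrange(len(alloc))
    child = list(alloc)
    child[i] = max(0, min(HOURS, child[i] + random.randint(-20, 20)))
    return child

pop = [[random.randint(0, HOURS) for _ in RATES] for _ in range(20)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)            # elitist selection
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
best = max(pop, key=fitness)
print("best allocation:", best, "output:", output(best))
```

A full CPGA would let a constraint solver prune infeasible genomes before evaluation instead of relying on penalties alone; this sketch keeps only the penalty side of that idea.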
Read, S J; Vanman, E J; Miller, L C
1997-01-01
We argue that recent work in connectionist modeling, in particular the parallel constraint satisfaction processes that are central to many of these models, has great importance for understanding issues of both historical and current concern for social psychologists. We first provide a brief description of connectionist modeling, with particular emphasis on parallel constraint satisfaction processes. Second, we examine the tremendous similarities between parallel constraint satisfaction processes and the Gestalt principles that were the foundation for much of modern social psychology. We propose that parallel constraint satisfaction processes provide a computational implementation of the principles of Gestalt psychology that were central to the work of such seminal social psychologists as Asch, Festinger, Heider, and Lewin. Third, we then describe how parallel constraint satisfaction processes have been applied to three areas that were key to the beginnings of modern social psychology and remain central today: impression formation and causal reasoning, cognitive consistency (balance and cognitive dissonance), and goal-directed behavior. We conclude by discussing implications of parallel constraint satisfaction principles for a number of broader issues in social psychology, such as the dynamics of social thought and the integration of social information within the narrow time frame of social interaction.
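The settling dynamics of a parallel constraint satisfaction network of this kind can be sketched as follows. The update rule is one common Grossberg-style variant, not necessarily the exact equations of the models discussed; the node names and weights in the usage example are invented for illustration.

```python
def settle(weights, clamped, steps=200, decay=0.1):
    """Relax a parallel constraint satisfaction network.

    weights: dict {(i, j): w} of symmetric links; positive w = the two
    nodes support each other, negative w = they conflict.
    clamped: dict {node: fixed activation} for evidence nodes.
    Returns the activation (roughly in [-1, 1]) of every node.
    """
    nodes = {i for pair in weights for i in pair} | set(clamped)
    act = {i: clamped.get(i, 0.0) for i in nodes}
    # Build a symmetric adjacency list so updates see links in both directions.
    sym = {}
    for (i, j), w in weights.items():
        sym.setdefault(i, []).append((j, w))
        sym.setdefault(j, []).append((i, w))
    for _ in range(steps):
        new = {}
        for i in nodes:
            if i in clamped:
                new[i] = clamped[i]
                continue
            net = sum(w * act[j] for j, w in sym.get(i, []))
            # Excitation pulls activation toward +1, inhibition toward -1.
            if net > 0:
                delta = net * (1 - act[i])
            else:
                delta = net * (act[i] + 1)
            new[i] = (1 - decay) * act[i] + decay * delta
        act = new
    return act
```

With an evidence node clamped positive, a supported hypothesis settles to a positive activation while a conflicting one is driven negative — the "soft constraint" resolution the Gestalt analogy refers to.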
Parallel constraint satisfaction in memory-based decisions.
Glöckner, Andreas; Hodges, Sara D
2011-01-01
Three studies sought to investigate decision strategies in memory-based decisions and to test the predictions of the parallel constraint satisfaction (PCS) model for decision making (Glöckner & Betsch, 2008). Time pressure was manipulated and the model was compared against simple heuristics (take the best and equal weight) and a weighted additive strategy. From PCS we predicted that fast intuitive decision making is based on compensatory information integration and that decision time increases and confidence decreases with increasing inconsistency in the decision task. In line with these predictions we observed a predominant usage of compensatory strategies under all time-pressure conditions and even with decision times as short as 1.7 s. For a substantial number of participants, choices and decision times were best explained by PCS, but there was also evidence for use of simple heuristics. The time-pressure manipulation did not significantly affect decision strategies. Overall, the results highlight intuitive, automatic processes in decision making and support the idea that human information-processing capabilities are less severely bounded than often assumed.
Transforming Existing Procedural Business Processes into a Constraint-Based Formalism
dr. Martijn Zoet; Eline de Haan; Floor Vermeer; Jeroen van Grondelle; Slinger Jansen
2013-01-01
Many organizations use business process management to manage and model their processes. Currently, flow-based process formalisms, such as BPMN, are considered the standard for modeling processes. However, recent literature describes several limitations of this type of formalism that can be solved by
Tie Points Extraction for SAR Images Based on Differential Constraints
Xiong, X.; Jin, G.; Xu, Q.; Zhang, H.
2018-04-01
Automatically extracting tie points (TPs) on large-size synthetic aperture radar (SAR) images is still challenging because the efficiency and correct ratio of the image matching need to be improved. This paper proposes an automatic TP extraction method based on differential constraints for large-size SAR images obtained from approximately parallel tracks, between which the relative geometric distortions are small in the azimuth direction and large in the range direction. Image pyramids are built first, and then corresponding layers of the pyramids are matched from the top to the bottom. In the process, similarity is measured by the normalized cross correlation (NCC) algorithm, which is calculated from a rectangular window with the long side parallel to the azimuth direction. False matches are removed by the differential constrained random sample consensus (DC-RANSAC) algorithm, which imposes strong constraints in the azimuth direction and weak constraints in the range direction. Matching points in the lower pyramid images are predicted with the local bilinear transformation model in the range direction. Experiments performed on ENVISAT ASAR and Chinese airborne SAR images validated the efficiency, correct ratio and accuracy of the proposed method.
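The similarity measure used in the matching step, normalized cross correlation, can be written directly from its definition. A minimal pure-Python sketch; the azimuth-aligned rectangular windowing, pyramid construction, and DC-RANSAC filtering of the paper are omitted:

```python
import math

def ncc(a, b):
    """Normalized cross correlation of two equally sized image windows.

    a, b: 2-D lists of grey values (same dimensions).
    Returns a score in [-1, 1]; 1 means identical up to brightness/contrast.
    """
    flat_a = [v for row in a for v in row]
    flat_b = [v for row in b for v in row]
    n = len(flat_a)
    mean_a = sum(flat_a) / n
    mean_b = sum(flat_b) / n
    da = [v - mean_a for v in flat_a]
    db = [v - mean_b for v in flat_b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0
```

In the paper's setting the window would be rectangular with its long side along azimuth, and the score would be evaluated over candidate shifts mainly in the range direction.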
A novel constraint for thermodynamically designing DNA sequences.
Directory of Open Access Journals (Sweden)
Qiang Zhang
Biotechnological and biomolecular advances have introduced novel uses for DNA such as DNA computing, storage, and encryption. For these applications, DNA sequence design requires maximal desired (and minimal undesired) hybridizations, the formation of a single new DNA duplex from two single DNA strands. Here, we propose a novel constraint to design DNA sequences based on thermodynamic properties. Existing constraints for DNA design are based on the Hamming distance, a constraint that does not address the thermodynamic properties of the DNA sequence. Using a unique, improved genetic algorithm, we designed DNA sequence sets which satisfy different distance constraints and employ a free energy gap based on a minimum free energy (MFE) to gauge DNA sequences based on set thermodynamic properties. When compared to the best constraints of the Hamming distance, our method yielded better thermodynamic qualities. We then used our improved genetic algorithm to obtain lower-bound DNA sequence sets. Here, we discuss the effects of novel constraint parameters on the free energy gap.
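A Hamming-distance design constraint of the kind the paper compares against can be sketched as a simple set check. The specific pair of conditions below (pairwise distance, and distance of every sequence to every reverse complement) is one common formulation and an assumption here, not necessarily the exact constraint set used in the paper:

```python
def hamming(s, t):
    # Number of positions at which two equal-length sequences differ.
    return sum(a != b for a, b in zip(s, t))

def reverse_complement(s):
    comp = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(comp[b] for b in reversed(s))

def satisfies_distance_constraints(seqs, d):
    """Check classical Hamming-style design constraints on a DNA set:
    every pair of distinct sequences must differ in at least d positions,
    and every sequence must differ in at least d positions from the
    reverse complement of every sequence (including itself), which limits
    undesired cross-hybridizations."""
    for i, s in enumerate(seqs):
        for j, t in enumerate(seqs):
            if i < j and hamming(s, t) < d:
                return False
            if hamming(s, reverse_complement(t)) < d:
                return False
    return True
```

The paper's point is that such combinatorial checks say nothing about minimum free energy; its free-energy-gap constraint would replace the distance test with a thermodynamic one.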
Development of safety analysis and constraint detection techniques for process interaction errors
Energy Technology Data Exchange (ETDEWEB)
Fan, Chin-Feng, E-mail: csfanc@saturn.yzu.edu.tw [Computer Science and Engineering Dept., Yuan-Ze University, Taiwan (China); Tsai, Shang-Lin; Tseng, Wan-Hui [Computer Science and Engineering Dept., Yuan-Ze University, Taiwan (China)
2011-02-15
Among the new failure modes introduced by computers into safety systems, the process interaction error is the most unpredictable and complicated failure mode, which may cause disastrous consequences. This paper presents safety analysis and constraint detection techniques for process interaction errors among hardware, software, and human processes. Among interaction errors, the most dreadful ones are those that involve run-time misinterpretation from a logic process. We call them the 'semantic interaction errors'. Such abnormal interaction is not adequately emphasized in current research. In our static analysis, we provide a fault tree template focusing on semantic interaction errors by checking conflicting pre-conditions and post-conditions among interacting processes. Thus, far-fetched, but highly risky, interaction scenarios involving interpretation errors can be identified. For run-time monitoring, a range of constraint types is proposed for checking abnormal signs at run time. We extend current constraints to a broader relational level and a global level, considering process/device dependencies and physical conservation rules in order to detect process interaction errors. The proposed techniques can reduce abnormal interactions; they can also be used to assist in safety-case construction.
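The static-analysis idea, flagging conflicting pre-conditions and post-conditions among interacting processes, can be illustrated with a toy checker. The representation of a condition as a (variable, value) pair and the process names in the test are invented for the example, not the paper's notation:

```python
def find_semantic_conflicts(processes):
    """Flag candidate semantic interaction errors.

    processes: list of dicts with keys 'name', 'pre' and 'post', where
    'pre' and 'post' are sets of (variable, value) conditions.
    A conflict is reported when one process's postcondition asserts a
    value for a variable that another process's precondition assumes
    to have a different value."""
    conflicts = []
    for producer in processes:
        for consumer in processes:
            if producer is consumer:
                continue
            for var, val in producer["post"]:
                for var2, val2 in consumer["pre"]:
                    if var == var2 and val != val2:
                        conflicts.append(
                            (producer["name"], consumer["name"], var))
    return conflicts
```

In the paper this check seeds a fault tree template; here it only shows why contradictory assumptions between a logic process and its environment are mechanically detectable.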
Energy Technology Data Exchange (ETDEWEB)
Ye Shengying [College of Food Science, South China Agricultural University, Wushan, Guangzhou, GD 510640 (China)], E-mail: yesy@scau.edu.cn; Qiu Yuanxin; Song Xianliang; Luo Shucan [College of Food Science, South China Agricultural University, Wushan, Guangzhou, GD 510640 (China)
2009-03-15
The processing parameters for ultrasound and ⁶⁰Co-γ irradiation were optimized for their ability to inactivate Lactobacillus sporogenes in tomato paste using a systematic experimental design based on response surface methodology. Ultrasonic power, ultrasonic processing time and irradiation dose were explored and a central composite rotation design was adopted as the experimental plan, and a least-squares regression model was obtained. The significant influential factors for the inactivation rate of L. sporogenes were obtained from the quadratic model and the t-test analyses for each process parameter. Confirmation of the experimental results indicated that the proposed model was reasonably accurate and could be used to describe the efficacy of the treatments for inactivating L. sporogenes within the limits of the factors studied. The optimized processing parameters were found to be an ultrasonic power of 120 W with a processing time of 25 min and an irradiation dose of 6.5 kGy. These were measured under the constraints of parameter limitation, based on the Monte Carlo searching method and the quadratic model of the response surface methodology, including the a/b value of the Hunter color scale of tomato paste. Nevertheless, the ultrasound treatment prior to irradiation for the inactivation of L. sporogenes in tomato paste was unsuitable for reducing the irradiation dose.
Detection of Common Problems in Real-Time and Multicore Systems Using Model-Based Constraints
Directory of Open Access Journals (Sweden)
Raphaël Beamonte
2016-01-01
Multicore systems are complex in that multiple processes are running concurrently and can interfere with each other. Real-time systems add time constraints on top of that, making results invalid as soon as a deadline has been missed. Tracing is often the most reliable and accurate tool available to study and understand those systems. However, tracing requires that users understand the kernel events and their meaning. It is therefore not very accessible. Using modeling to generate source code or represent applications’ workflow is handy for developers and has emerged as part of the model-driven development methodology. In this paper, we propose a new approach to system analysis using model-based constraints, on top of userspace and kernel traces. We introduce the constraints representation and how traces can be used to follow the application’s workflow and check the constraints we set on the model. We then present a number of common problems that we encountered in real-time and multicore systems and describe how our model-based constraints could have helped to save time by automatically identifying the unwanted behavior.
Location-Dependent Query Processing Under Soft Real-Time Constraints
Directory of Open Access Journals (Sweden)
Zoubir Mammeri
2009-01-01
In recent years, mobile devices and applications have developed rapidly. In the database field, this development required methods to handle new query types like location-dependent queries (i.e., the query results depend on the location of the query issuer). Although several studies addressed problems related to location-dependent query processing, only a few works considered the timing requirements that may be associated with queries (i.e., the query results must be delivered to mobile clients on time). The main objective of this paper is to propose a solution for location-dependent query processing under soft real-time constraints. Hence, we propose methods to take into account client location-dependency and to maximize the percentage of queries respecting their deadlines. We validate our proposal by implementing a prototype based on the Oracle DBMS. Performance evaluation results show that the proposed solution optimizes the percentage of queries meeting their deadlines and the communication cost.
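One standard way to maximize the percentage of queries respecting their deadlines is earliest-deadline-first ordering. The sketch below is a generic single-server illustration, not the paper's Oracle-based prototype; query names, costs, and deadlines are invented:

```python
import heapq

def edf_schedule(queries, now=0.0):
    """Process queries in earliest-deadline-first order.

    queries: list of (name, processing_time, deadline) tuples.
    Returns the processing order and the fraction of queries whose
    (soft) deadline was met.
    """
    # Heap ordered by deadline: the most urgent query is served first.
    heap = [(deadline, name, cost) for name, cost, deadline in queries]
    heapq.heapify(heap)
    t = now
    met = 0
    order = []
    while heap:
        deadline, name, cost = heapq.heappop(heap)
        t += cost          # serve the query to completion
        order.append(name)
        if t <= deadline:  # soft deadline: lateness is counted, not fatal
            met += 1
    return order, met / len(queries)
```

A location-dependent system would additionally re-evaluate the query region as the client moves; that aspect is orthogonal to the deadline ordering shown here.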
Ren, Luquan; Zhou, Xueli; Song, Zhengyi; Zhao, Che; Liu, Qingping; Xue, Jingze; Li, Xiujuan
2017-03-16
Recently, with a broadening range of available materials and alteration of feeding processes, several extrusion-based 3D printing processes for metal materials have been developed. An emerging process is applicable for the fabrication of metal parts into electronics and composites. In this paper, some critical parameters of extrusion-based 3D printing processes were optimized by a series of experiments with a melting extrusion printer. The raw materials were copper powder and a thermoplastic organic binder system and the system included paraffin wax, low density polyethylene, and stearic acid (PW-LDPE-SA). The homogeneity and rheological behaviour of the raw materials, the strength of the green samples, and the hardness of the sintered samples were investigated. Moreover, the printing and sintering parameters were optimized with an orthogonal design method. The influence factors in regard to the ultimate tensile strength of the green samples can be described as follows: infill degree > raster angle > layer thickness. As for the sintering process, the major factor on hardness is sintering temperature, followed by holding time and heating rate. The highest hardness of the sintered samples was very close to the average hardness of commercially pure copper material. Generally, the extrusion-based printing process for producing metal materials is a promising strategy because it has some advantages over traditional approaches for cost, efficiency, and simplicity.
Constraints on pre-big-bang parameter space from CMBR anisotropies
International Nuclear Information System (INIS)
Bozza, V.; Gasperini, M.; Giovannini, M.; Veneziano, G.
2003-01-01
The so-called curvaton mechanism--a way to convert isocurvature perturbations into adiabatic ones--is investigated both analytically and numerically in a pre-big-bang scenario where the role of the curvaton is played by a sufficiently massive Kalb-Ramond axion of superstring theory. When combined with observations of CMBR anisotropies at large and moderate angular scales, the present analysis allows us to constrain quite considerably the parameter space of the model: in particular, the initial displacement of the axion from the minimum of its potential and the rate of evolution of the compactification volume during pre-big-bang inflation. The combination of theoretical and experimental constraints favors a slightly blue spectrum of scalar perturbations, and/or a value of the string scale in the vicinity of the SUSY GUT scale.
Constraints on pre-big bang parameter space from CMBR anisotropies
Bozza, Valerio; Giovannini, Massimo; Veneziano, Gabriele
2003-01-01
The so-called curvaton mechanism --a way to convert isocurvature perturbations into adiabatic ones-- is investigated both analytically and numerically in a pre-big bang scenario where the role of the curvaton is played by a sufficiently massive Kalb--Ramond axion of superstring theory. When combined with observations of CMBR anisotropies at large and moderate angular scales, the present analysis allows us to constrain quite considerably the parameter space of the model: in particular, the initial displacement of the axion from the minimum of its potential and the rate of evolution of the compactification volume during pre-big bang inflation. The combination of theoretical and experimental constraints favours a slightly blue spectrum of scalar perturbations, and/or a value of the string scale in the vicinity of the SUSY-GUT scale.
Constraint-based solver for the Military unit path finding problem
CSIR Research Space (South Africa)
Leenen, L
2010-04-01
...a constraint-based approach because it requires flexibility in modelling. The authors formulate the MUPFP as a constraint satisfaction problem and a constraint-based extension of the search algorithm. The concept demonstrator uses a provided map, for example taken from Google...
Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta
2016-06-01
With the increased trend in automation of the modern manufacturing industry, the human intervention in routine, repetitive and data-specific activities of manufacturing is greatly reduced. In this paper, an attempt has been made to reduce the human intervention in the selection of optimal cutting tools and process parameters for metal cutting applications, using artificial intelligence techniques. Generally, the selection of the appropriate cutting tool and parameters in metal cutting is carried out by an experienced technician or cutting tool expert based on his or her knowledge, or by an extensive search of a huge cutting tool database. The proposed approach replaces the existing practice of physically searching for tools in databooks/tool catalogues with an intelligent knowledge-based selection system. This system employs artificial intelligence techniques such as artificial neural networks, fuzzy logic and genetic algorithms for decision making and optimization. This intelligence-based optimal tool selection strategy was developed and implemented using MathWorks MATLAB version 7.11.0. The cutting tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail the methodology and strategies employed for the selection of appropriate cutting tools and the optimization of process parameters based on multi-objective optimization criteria, considering material removal rate, tool life and tool cost.
HOROPLAN: computer-assisted nurse scheduling using constraint-based programming.
Darmoni, S J; Fajner, A; Mahé, N; Leforestier, A; Vondracek, M; Stelian, O; Baldenweck, M
1995-01-01
Nurse scheduling is a difficult and time-consuming task. The schedule has to determine the day-to-day shift assignments of each nurse for a specified period of time in a way that satisfies the given requirements as much as possible, taking into account the wishes of the nurses as closely as possible. This paper presents a constraint-based, artificial intelligence approach by describing a prototype implementation developed with the Charme language and the first results of its use in the Rouen University Hospital. Horoplan implements a non-cyclical constraint-based scheduling, using some heuristics. Four levels of constraints were defined to give a maximum of flexibility: French level (e.g. number of worked hours in a year), hospital level (e.g. specific day off), department level (e.g. specific shift) and care unit level (e.g. specific pattern for week-ends). Some constraints must always be verified and cannot be overruled, while others can be overruled at a certain cost. Rescheduling is possible at any time, especially in the case of an unscheduled absence.
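The distinction between constraints that must always be verified and constraints that can be overruled at a certain cost can be illustrated with a deliberately tiny exhaustive search. Real systems like Horoplan use constraint programming rather than enumeration, and every name and parameter below is a hypothetical stand-in:

```python
from itertools import product

def schedule(nurses, days, shifts_per_day, max_shifts, wishes):
    """Tiny illustration of hard vs. overrulable (soft) constraints.

    Hard: every (day, shift) slot is staffed and no nurse exceeds
    max_shifts. Soft: wishes is a set of (nurse, day) day-off requests;
    violating one adds a cost of 1. Returns the cheapest feasible
    assignment found by exhaustive search (only viable for toy sizes)."""
    slots = [(d, s) for d in range(days) for s in range(shifts_per_day)]
    best, best_cost = None, None
    for assign in product(nurses, repeat=len(slots)):
        counts = {n: assign.count(n) for n in nurses}
        if any(c > max_shifts for c in counts.values()):
            continue  # hard constraint: workload cap, never overruled
        # soft constraint: each violated wish costs 1
        cost = sum(1 for (d, _), n in zip(slots, assign) if (n, d) in wishes)
        if best_cost is None or cost < best_cost:
            best, best_cost = dict(zip(slots, assign)), cost
    return best, best_cost
```

The four constraint levels of the paper (national, hospital, department, care unit) would each contribute further hard checks or cost terms to this same skeleton.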
Wu, Kaihua; Shao, Zhencheng; Chen, Nian; Wang, Wenjie
2018-01-01
The wearing degree of the wheel set tread is one of the main factors that influence the safety and stability of a running train. The geometrical parameters mainly include flange thickness and flange height. Line-structure laser light was projected onto the wheel tread surface, and the geometrical parameters can be deduced from the profile image. An online image acquisition system was designed based on asynchronous reset of the CCD and a CUDA parallel processing unit. The image acquisition was accomplished in hardware interrupt mode. A high-efficiency parallel segmentation algorithm based on CUDA was proposed. The algorithm first divides the image into smaller squares, and extracts the squares of the target by a fusion of the k-means and STING clustering image segmentation algorithms. Segmentation time is less than 0.97 ms. A considerable acceleration ratio compared with serial CPU calculation was obtained, which greatly improved the real-time image processing capacity. When a wheel set is running at a limited speed, the system placed along the railway line can measure the geometrical parameters automatically. The maximum measuring speed is 120 km/h.
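The k-means step of the segmentation can be sketched in serial, one-dimensional form (the STING fusion, the square tiling, and the CUDA parallelisation described in the paper are omitted; the grey values in the test are invented):

```python
import random

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Plain k-means on grey values: the clustering primitive that the
    paper fuses with STING and runs in parallel on the GPU."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)  # initial centers from the data
    for _ in range(iters):
        # Assignment step: each value joins its nearest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[i].append(v)
        # Update step: centers move to cluster means (kept if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    labels = [min(range(k), key=lambda c: abs(v - centers[c]))
              for v in values]
    return centers, labels
```

On a laser-profile image the same assignment/update loop runs per pixel, which is what makes it a good fit for a CUDA thread-per-pixel layout.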
Energy Technology Data Exchange (ETDEWEB)
Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)
2015-04-15
Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of optimization. However, earlier studies have drawbacks: there are three phases in the optimization loop, and empirical parameters are required. We propose a united sampling criterion to simplify the algorithm and to achieve the global optimum of problems with constraints without any empirical parameters. It is able to select the points located in a feasible region with high model uncertainty as well as the points along the boundary of the constraint at the lowest objective value. The mean squared error determines which criterion is more dominant between the infill sampling criterion and the boundary sampling criterion. Also, the method guarantees the accuracy of the surrogate model because the sample points are not located within extremely small regions as in super-EGO. The performance of the proposed method, such as the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
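The switch between an infill (exploration) criterion and a boundary (exploitation) criterion driven by the surrogate's mean squared error can be caricatured as follows. The threshold, the candidate representation, and the tie-breaking rule are assumptions made for illustration, not the paper's actual united criterion:

```python
def select_sample(candidates, mse_threshold=0.05):
    """Choose the next sample point, loosely mimicking a united criterion.

    candidates: list of dicts with the surrogate's predicted objective
    'f', prediction variance 'mse', and predicted constraint value 'g'
    (feasible if g <= 0). While the surrogate is still uncertain (high
    average MSE), explore the feasible region where uncertainty is
    highest; once it is accurate, exploit the constraint boundary at
    low objective values."""
    avg_mse = sum(c["mse"] for c in candidates) / len(candidates)
    feasible = [c for c in candidates if c["g"] <= 0] or candidates
    if avg_mse > mse_threshold:
        # Infill sampling criterion: most uncertain feasible point.
        return max(feasible, key=lambda c: c["mse"])
    # Boundary sampling criterion: nearest the constraint boundary,
    # breaking ties by objective value.
    return min(feasible, key=lambda c: (abs(c["g"]), c["f"]))
```

The real method derives both criteria from the kriging model itself; this sketch only shows how a single MSE-driven switch removes the separate hand-tuned phases.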
Energy Technology Data Exchange (ETDEWEB)
Simard, G.; et al.
2017-12-20
We report constraints on cosmological parameters from the angular power spectrum of a cosmic microwave background (CMB) gravitational lensing potential map created using temperature data from 2500 deg$^2$ of South Pole Telescope (SPT) data supplemented with data from Planck in the same sky region, with the statistical power in the combined map primarily from the SPT data. We fit the corresponding lensing angular power spectrum to a model including cold dark matter and a cosmological constant ($\Lambda$CDM), and to models with single-parameter extensions to $\Lambda$CDM. We find constraints that are comparable to and consistent with constraints found using the full-sky Planck CMB lensing data. Specifically, we find $\sigma_8 \Omega_{\rm m}^{0.25}=0.598 \pm 0.024$ from the lensing data alone with relatively weak priors placed on the other $\Lambda$CDM parameters. In combination with primary CMB data from Planck, we explore single-parameter extensions to the $\Lambda$CDM model. We find $\Omega_k = -0.012^{+0.021}_{-0.023}$ or $M_{\
Optimizing Environmental Flow Operation Rules based on Explicit IHA Constraints
Dongnan, L.; Wan, W.; Zhao, J.
2017-12-01
Multi-objective reservoir operation is increasingly asked to consider environmental flow to support ecosystem health. Indicators of Hydrologic Alteration (IHA) are widely used to describe environmental flow regimes, but few studies have explicitly formulated them into optimization models, making it difficult to direct reservoir releases. In an attempt to incorporate the benefit of environmental flow into economic achievement, a two-objective reservoir optimization model is developed and all 33 hydrologic parameters of IHA are explicitly formulated into constraints. The economic benefit is defined by hydropower production (HP), while the benefit of environmental flow is transformed into an Eco-Index (EI) that combines 5 of the 33 IHA parameters chosen by the principal component analysis method. Five scenarios (A to E) with different constraints are tested and solved by nonlinear programming. The case study of the Jing Hong reservoir, located in the upstream Mekong basin, China, shows: 1. A Pareto frontier is formed by maximizing only the HP objective in scenario A and only the EI objective in scenario B. 2. Scenario D, using IHA parameters as constraints, obtains optimal economic and ecological benefits. 3. A sensitive weight coefficient is found in scenario E, but the trade-offs between the HP and EI objectives are not within the Pareto frontier. 4. When the fraction of reservoir utilizable capacity reaches 0.8, both HP and EI capture acceptable values. Finally, to make this model more conveniently applicable to everyday practice, a simplified operation rule curve is extracted.
Constraints based analysis of extended cybernetic models.
Mandli, Aravinda R; Venkatesh, Kareenhalli V; Modak, Jayant M
2015-11-01
The cybernetic modeling framework provides an interesting approach to model the regulatory phenomena occurring in microorganisms. In the present work, we adopt a constraints based approach to analyze the nonlinear behavior of the extended equations of the cybernetic model. We first show that the cybernetic model exhibits linear growth behavior under the constraint of no resource allocation for the induction of the key enzyme. We then quantify the maximum achievable specific growth rate of microorganisms on mixtures of substitutable substrates under various kinds of regulation and show its use in gaining an understanding of the regulatory strategies of microorganisms. Finally, we show that Saccharomyces cerevisiae exhibits suboptimal dynamic growth with a long diauxic lag phase when growing on a mixture of glucose and galactose and discuss its potential to achieve optimal growth with a significantly reduced diauxic lag period. The analysis carried out in the present study illustrates the utility of adopting a constraints based approach to understand the dynamic growth strategies of microorganisms.
Directory of Open Access Journals (Sweden)
Geert Deconinck
2015-12-01
The charging of electric vehicles (EVs) impacts the distribution grid, and its cost depends on the price of electricity when charging. An aggregator that is responsible for a large fleet of EVs can use a market-based control algorithm to coordinate the charging of these vehicles, in order to minimize the costs. In such an optimization, the operational parameters of the distribution grid, to which the EVs are connected, are not considered. This can lead to violations of the technical constraints of the grid (e.g., under-voltage, phase unbalances), for example, because many vehicles start charging simultaneously when the price is low. An optimization that simultaneously takes the economic and technical aspects into account is complex, because it has to combine time-driven control at the market level with event-driven control at the operational level. Different case studies investigate under which circumstances the market-based control, which coordinates EV charging, conflicts with the operational constraints of the distribution grid. Especially in weak grids, phase unbalance and voltage issues arise with a high share of EVs. A low-level voltage droop controller at the charging point of the EV can be used to avoid many grid constraint violations, by reducing the charge power if the local voltage is too low. While this action implies a deviation from the cost-optimal operating point, it is shown that this has a very limited impact on the business case of an aggregator, and is able to comply with the technical distribution grid constraints, even in weak distribution grids with many EVs.
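A low-level voltage droop controller of the kind described, reducing charge power when the local voltage sags, can be sketched as a piecewise-linear rule. The per-unit thresholds below are illustrative assumptions, not values from the study:

```python
def droop_charge_power(v_local, p_requested, v_min=0.9, v_full=0.95):
    """Piecewise-linear voltage droop at an EV charging point.

    v_local: measured local voltage in per-unit.
    p_requested: the market-optimal charge power (e.g. in kW).
    Below v_min charging stops entirely; above v_full the requested
    power passes through; in between, power is reduced linearly with
    the local voltage. The thresholds are illustrative assumptions.
    """
    if v_local <= v_min:
        return 0.0
    if v_local >= v_full:
        return p_requested
    frac = (v_local - v_min) / (v_full - v_min)
    return p_requested * frac
```

Because the rule acts only on the local measurement, it enforces the grid constraint event-driven at the operational level while the aggregator's time-driven market optimization remains unchanged.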
Intelligent methods for the process parameter determination of plastic injection molding
Gao, Huang; Zhang, Yun; Zhou, Xundao; Li, Dequn
2018-03-01
Injection molding is one of the most widely used material processing methods for producing plastic products with complex geometries and high precision. The determination of process parameters is important in obtaining qualified products and maintaining product quality. This article reviews recent studies and developments of the intelligent methods applied in the process parameter determination of injection molding. These intelligent methods are classified into three categories: case-based reasoning methods, expert system-based methods, and data fitting and optimization methods. A framework of process parameter determination is proposed after comprehensive discussions. Finally, the conclusions and future research topics are discussed.
The evolution of process-based hydrologic models
Clark, Martyn P.; Bierkens, Marc F.P.; Samaniego, Luis; Woods, Ross A.; Uijlenhoet, Remko; Bennett, Katrina E.; Pauwels, Valentijn R.N.; Cai, Xitian; Wood, Andrew W.; Peters-Lidard, Christa D.
2017-01-01
The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this
Goldratt's thinking process applied to the budget constraints of a Texas MHMR facility.
Taylor, Lloyd J; Churchwell, Lana
2004-01-01
Managers for years have known that the best way to run a business is to constantly be looking for ways to improve the way to do business. The barrier has been the ability to identify and solve the right problems. Eliyahu Goldratt (1992c), in his book The Goal, uses a love story format to illustrate his "Theory of Constraints." In Goldratt's (1994) next book, It's Not Luck, he further illustrates this powerful technique called "The Thinking Process," which is based on the Socratic method, using the "if ... then" reasoning process. The first step is to identify UDEs, or undesirable effects, within the organization and then use these UDEs to create a Current Reality Tree (CRT), which helps to identify the core problem. Next, use an Evaporating Cloud to come up with ideas and a way to break the constraint. Finally, use the injections in the Evaporating Cloud to create a Future Reality Tree, further validating the idea and making sure it does not create any negative effects. In this article, the "Thinking Process" will be used to identify and solve problems related to the General Medical Department of an MHMR State Hospital.
Conditions and constraints of food processing in space
Fu, B.; Nelson, P. E.; Mitchell, C. A. (Principal Investigator)
1994-01-01
Requirements and constraints of food processing in space include a balanced diet, food variety, stability for storage, hardware weight and volume, plant performance, build-up of microorganisms, and waste processing. Lunar, Martian, and space station environmental conditions include variations in atmosphere, day length, temperature, gravity, magnetic field, and radiation environment. Weightlessness affects fluid behavior, heat transfer, and mass transfer. Concerns about microbial behavior include survival on Martian and lunar surfaces and in enclosed environments. Many present technologies can be adapted to meet space conditions.
Beyond mechanistic interaction: Value-based constraints on meaning in language
Directory of Open Access Journals (Sweden)
Joanna eRączaszek-Leonardi
2015-10-01
According to situated, embodied, distributed approaches to cognition, language is a crucial means for structuring social interactions. Recent approaches that emphasize the coordinative function of language treat language as a system of replicable constraints that work both on individuals and on interactions. In this paper we argue that the integration of the replicable-constraints approach to language with the ecological view on values allows for a deeper insight into processes of meaning creation in interaction. Such a synthesis of these frameworks draws attention to important sources of structuring interactions beyond the sheer efficiency of a collective system in its current task situation. Most importantly, the workings of linguistic constraints will be shown as embedded in more general fields of values, which are realized on multiple time-scales. Since the ontogenetic timescale offers a convenient window into the process of the emergence of linguistic constraints, we present illustrations of concrete mechanisms through which values may become embodied in language use in development.
Sentence processing in an artificial language: Learning and using combinatorial constraints.
Amato, Michael S; MacDonald, Maryellen C
2010-07-01
A study combining artificial grammar and sentence comprehension methods investigated the learning and online use of probabilistic, nonadjacent combinatorial constraints. Participants learned a small artificial language describing cartoon monsters acting on objects. Self-paced reading of sentences in the artificial language revealed comprehenders' sensitivity to nonadjacent combinatorial constraints, without explicit awareness of the probabilities embedded in the language. These results show that even newly-learned constraints have an identifiable effect on online sentence processing. The rapidity of learning in this paradigm relative to others has implications for theories of implicit learning and its role in language acquisition.
Directory of Open Access Journals (Sweden)
Suryakant B. Chandgude
2015-09-01
The optimum selection of process parameters has played an important role in improving the surface finish, minimizing tool wear, increasing material removal rate and reducing machining time of any machining process. In this paper, optimum parameters for machining AISI D2 hardened steel using a solid carbide TiAlN coated end mill have been investigated. For optimization of process parameters along with multiple quality characteristics, the principal component analysis method has been adopted in this work. The confirmation experiments have revealed that, to improve cutting performance, the principal component analysis method would be a useful tool.
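The PCA-based aggregation of multiple quality characteristics can be sketched as follows: normalized responses are combined with weights taken from the squared loadings of the first principal component, and the run with the best composite index is preferred. The response values below are made up for illustration, not taken from the cited experiments:

```python
import numpy as np

# Illustrative normalized responses (larger is better) for four trial runs;
# columns = surface finish, tool wear, material removal rate.
responses = np.array([
    [0.80, 0.70, 0.60],
    [0.90, 0.60, 0.70],
    [0.60, 0.90, 0.50],
    [0.70, 0.80, 0.90],
])

# PCA on the correlation matrix of the responses; the squared loadings of
# the first principal component serve as weights that collapse the multiple
# quality characteristics into a single composite index.
corr = np.corrcoef(responses, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
weights = eigvecs[:, -1] ** 2             # squared first-PC loadings, sum to 1
composite = responses @ weights
best_run = int(np.argmax(composite))      # preferred parameter setting
```

Squaring the loadings keeps the weights non-negative and summing to one, which sidesteps the sign ambiguity of eigenvectors.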
A constraint-based approach to intelligent support of nuclear reactor design
International Nuclear Information System (INIS)
Furuta, Kazuo
1993-01-01
Constraint is a powerful representation to formulate and solve problems in design; a constraint-based approach to intelligent support of nuclear reactor design is proposed. We first discuss the features of the approach, and then present the architecture of a nuclear reactor design support system under development. In this design support system, the knowledge base contains constraints useful to structure the design space as object class definitions, and several types of constraint resolvers are provided as design support subsystems. The adopted methods of constraint resolution are explained in detail. The usefulness of the approach is demonstrated using two design problems: Design window search and multiobjective optimization in nuclear reactor design. (orig./HP)
Tie points extraction for SAR images based on differential constraints
Directory of Open Access Journals (Sweden)
X. Xiong
2018-04-01
Automatically extracting tie points (TPs) on large-size synthetic aperture radar (SAR) images is still challenging because the efficiency and correctness ratio of the image matching need to be improved. This paper proposes an automatic TPs extraction method based on differential constraints for large-size SAR images obtained from approximately parallel tracks, between which the relative geometric distortions are small in the azimuth direction and large in the range direction. Image pyramids are built first, and then corresponding layers of the pyramids are matched from the top to the bottom. In the process, the similarity is measured by the normalized cross correlation (NCC) algorithm, which is calculated from a rectangular window with the long side parallel to the azimuth direction. False matches are removed by the differential constrained random sample consensus (DC-RANSAC) algorithm, which applies strong constraints in the azimuth direction and weak constraints in the range direction. Matching points in the lower pyramid images are predicted with the local bilinear transformation model in the range direction. Experiments performed on ENVISAT ASAR and Chinese airborne SAR images validated the efficiency, correctness ratio and accuracy of the proposed method.
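The NCC similarity measure used in the matching step can be sketched as follows; the function below scores two equally sized windows (in the paper the window is rectangular with its long side along the azimuth direction):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross correlation between two equally sized windows.

    Returns a value in [-1, 1]; 1 means the windows differ only by an
    affine brightness change. Illustrative sketch, not the paper's code.
    """
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:
        return 0.0                 # flat window: correlation undefined
    return float((a * b).sum() / denom)
```

Because the mean is subtracted and the result is normalized, the score is invariant to the gain/offset differences common between SAR acquisitions from different tracks.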
Optimizing Maintenance of Constraint-Based Database Caches
Klein, Joachim; Braun, Susanne
Caching data reduces user-perceived latency and often enhances availability in case of server crashes or network failures. DB caching aims at local processing of declarative queries in a DBMS-managed cache close to the application. Query evaluation must produce the same results as if done at the remote database backend, which implies that all data records needed to process such a query must be present and controlled by the cache, i.e., to achieve “predicate-specific” loading and unloading of such record sets. Hence, cache maintenance must be based on cache constraints such that “predicate completeness” of the caching units currently present can be guaranteed at any point in time. We explore how cache groups can be maintained to provide the data currently needed. Moreover, we design and optimize loading and unloading algorithms for sets of records keeping the caching units complete, before we empirically identify the costs involved in cache maintenance.
Constraint-based modeling and kinetic analysis of the Smad dependent TGF-beta signaling pathway.
Directory of Open Access Journals (Sweden)
Zhike Zi
BACKGROUND: Investigation of the dynamics and regulation of the TGF-beta signaling pathway is central to the understanding of complex cellular processes such as growth, apoptosis, and differentiation. In this study, we aim at using a systems biology approach to provide a dynamic analysis of this pathway. METHODOLOGY/PRINCIPAL FINDINGS: We proposed a constraint-based modeling method to build a comprehensive mathematical model for the Smad dependent TGF-beta signaling pathway by fitting the experimental data and incorporating the qualitative constraints from the experimental analysis. The performance of the model generated by the constraint-based modeling method is significantly improved compared to the model obtained by only fitting the quantitative data. The model agrees well with the experimental analysis of the TGF-beta pathway, such as the time course of nuclear phosphorylated Smad, the subcellular location of Smad and the signal response of Smad phosphorylation to different doses of TGF-beta. CONCLUSIONS/SIGNIFICANCE: The simulation results indicate that the signal response to TGF-beta is regulated by the balance between clathrin dependent endocytosis and non-clathrin mediated endocytosis. This model is a useful basis to build upon as new, more precise experimental data emerge. The constraint-based modeling method can also be applied to quantitative modeling of other signaling pathways.
Parameter optimization of electrochemical machining process using black hole algorithm
Singh, Dinesh; Shukla, Rajkamal
2017-12-01
Advanced machining processes are significant as higher accuracy in machined components is required in the manufacturing industries. Parameter optimization of machining processes gives optimum control to achieve the desired goals. In this paper, the electrochemical machining (ECM) process is considered to evaluate the performance of the considered process using the black hole algorithm (BHA). BHA is built on the fundamental idea of black hole theory and has few operating parameters to tune. The two performance parameters, material removal rate (MRR) and overcut (OC), are considered separately to get optimum machining parameter settings using BHA. The variations of process parameters with respect to the performance parameters are reported for better and effective understanding of the considered process using a single objective at a time. The results obtained using BHA are found to be better when compared with the results of other metaheuristic algorithms, such as genetic algorithm (GA), artificial bee colony (ABC) and bio-geography based optimization (BBO), attempted by previous researchers.
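A minimal sketch of the black hole algorithm for a single-objective model might look as follows; the star/black-hole update and the event-horizon respawn follow the usual description of BHA, and all parameter values are illustrative, not the authors' settings:

```python
import random

def black_hole_optimize(f, bounds, n_stars=20, n_iter=200, seed=1):
    """Minimize f over box bounds with a basic black hole algorithm.

    Illustrative sketch: the best star acts as the black hole, other
    stars drift toward it, and stars crossing the event horizon are
    respawned at random positions to keep exploring.
    """
    rng = random.Random(seed)
    stars = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_stars)]
    costs = [f(s) for s in stars]
    bh_idx = min(range(n_stars), key=costs.__getitem__)   # best star = black hole
    for _ in range(n_iter):
        bh = stars[bh_idx]
        for i in range(n_stars):
            if i == bh_idx:
                continue
            # move the star toward the black hole by a random fraction
            stars[i] = [x + rng.random() * (b - x) for x, b in zip(stars[i], bh)]
            costs[i] = f(stars[i])
            if costs[i] < costs[bh_idx]:
                bh_idx = i                                # star becomes black hole
        # event horizon radius: best cost relative to the swarm's total cost
        total = sum(costs) or 1e-12
        radius = costs[bh_idx] / total
        for i in range(n_stars):
            if i == bh_idx:
                continue
            dist = sum((a - b) ** 2 for a, b in zip(stars[i], stars[bh_idx])) ** 0.5
            if dist < radius:
                stars[i] = [rng.uniform(lo, hi) for lo, hi in bounds]
                costs[i] = f(stars[i])
                if costs[i] < costs[bh_idx]:              # respawn may be better
                    bh_idx = i
    return stars[bh_idx], costs[bh_idx]
```

For a machining application, `f` would be a regression model of MRR or OC in the process parameters; here any objective works, e.g. the sphere function.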
Correlation-based decimation in constraint satisfaction problems
International Nuclear Information System (INIS)
Higuchi, Saburo; Mezard, Marc
2010-01-01
We study hard constraint satisfaction problems using some decimation algorithms based on mean-field approximations. The message-passing approach is used to estimate, besides the usual one-variable marginals, the pair correlation functions. The identification of strongly correlated pairs allows the use of a new decimation procedure, where the relative orientation of a pair of variables is fixed. We apply this novel decimation to locked occupation problems, a class of hard constraint satisfaction problems where the usual belief-propagation guided decimation performs poorly. The pair-decimation approach provides a significant improvement.
An experimental study on effect of process parameters in deep ...
African Journals Online (AJOL)
The effects of various deep drawing process parameters were determined by experimental study with the use of Taguchi fractional factorial design and analysis of variance for AA6111 Aluminum alloy. The optimum process parameters were determined based on their influence on the thickness variation at different regions ...
Huang, Yan; Bi, Duyan; Wu, Dongpeng
2018-01-01
There are many artificial parameters involved when fusing infrared and visible images. To overcome the lack of detail in the fused image caused by artifacts, a novel fusion algorithm for infrared and visible images based on different constraints in the non-subsampled shearlet transform (NSST) domain is proposed. The images are decomposed by the NSST into high-frequency and low-frequency bands. After analyzing the characteristics of the bands, the high-frequency bands are fused under a gradient constraint, so the fused image retains more detail; the low-frequency bands are fused under a saliency constraint, so the targets are more salient. Before the inverse NSST, the Nash equilibrium is used to update the coefficients. The fused images and the quantitative results demonstrate that our method is more effective in preserving details and highlighting targets when compared with other state-of-the-art methods. PMID:29641505
Reduction of robot base parameters
International Nuclear Information System (INIS)
Vandanjon, P.O.
1995-01-01
This paper is a new step in the search for the minimum dynamic parameters of robots. In spite of planning exciting trajectories and using base parameters, some parameters remain unidentifiable due to perturbation effects. In this paper, we propose methods to reduce the set of base parameters in order to get an essential set of parameters. This new set defines a simplified identification model which improves the noise immunity of the estimation process. It also contributes to reducing the computation burden of a simplified dynamic model. The proposed methods are classified in two parts: methods which perform reduction and identification together, coming from the statistical field, and methods which reduce the model before identification thanks to a priori information, coming from the numerical field, such as QR factorization. Statistical tools and QR reduction are shown to be efficient and well suited to determining the essential parameters. They can be applied to open-loop or graph-structured rigid robots, as well as flexible-link robots. An application to the PUMA 560 robot is given. (authors). 9 refs., 4 tabs
Reduction of robot base parameters
Energy Technology Data Exchange (ETDEWEB)
Vandanjon, P O [CEA Centre d` Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Procedes et Systemes Avances; Gautier, M [Nantes Univ., 44 (France)
1996-12-31
This paper is a new step in the search for the minimum dynamic parameters of robots. In spite of planning exciting trajectories and using base parameters, some parameters remain unidentifiable due to perturbation effects. In this paper, we propose methods to reduce the set of base parameters in order to get an essential set of parameters. This new set defines a simplified identification model which improves the noise immunity of the estimation process. It also contributes to reducing the computation burden of a simplified dynamic model. The proposed methods are classified in two parts: methods which perform reduction and identification together, coming from the statistical field, and methods which reduce the model before identification thanks to a priori information, coming from the numerical field, such as QR factorization. Statistical tools and QR reduction are shown to be efficient and well suited to determining the essential parameters. They can be applied to open-loop or graph-structured rigid robots, as well as flexible-link robots. An application to the PUMA 560 robot is given. (authors). 9 refs., 4 tabs.
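The QR-based reduction described in these records can be sketched as follows: columns of the observation matrix whose R diagonal entry is negligible correspond to parameters that are barely excited by the trajectories and can be dropped from the essential set. The matrix below is a synthetic toy, not a real robot regressor; for harder cases a column-pivoted QR (e.g. `scipy.linalg.qr(..., pivoting=True)`) is the more robust rank-revealing choice:

```python
import numpy as np

# Toy observation matrix: three well-excited regressor columns plus one
# nearly unexcited column standing in for a non-essential parameter.
rng = np.random.default_rng(0)
strong = rng.standard_normal((50, 3))
weak = 1e-8 * rng.standard_normal((50, 1))   # nearly unexcited parameter
W = np.hstack([strong, weak])

# QR decomposition: a negligible |R[j, j]| flags column j as contributing
# almost nothing to the identification model.
Q, R = np.linalg.qr(W)
diag = np.abs(np.diag(R))
tol = diag.max() * 1e-6
essential = np.where(diag > tol)[0]          # indices of essential parameters
```

Dropping the flagged columns yields the simplified identification model with better noise immunity mentioned in the abstract.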
Proposal of Constraints Analysis Method Based on Network Model for Task Planning
Tomiyama, Tomoe; Sato, Tatsuhiro; Morita, Toyohisa; Sasaki, Toshiro
Deregulation has been accelerating several activities toward reengineering business processes, such as railway through service and modal shift in logistics. To make those activities successful, business entities have to define new business rules or know-how (we call them ‘constraints’). According to the new constraints, they need to manage business resources such as instruments, materials, workers and so on. In this paper, we propose a constraint analysis method to define constraints for task planning of the new business processes. To visualize each constraint's influence on planning, we propose a network model which represents allocation relations between tasks and resources. The network can also represent task ordering relations and resource grouping relations. The proposed method formalizes the way of defining constraints manually as repeatedly checking the network structure and finding conflicts between constraints. Application to crew scheduling problems shows that the method can adequately represent and define constraints of some task planning problems with the following fundamental features: (1) specifying a work pattern for some resources, (2) restricting the number of resources for some works, (3) requiring multiple resources for some works, (4) prior allocation of some resources to some works and (5) considering the workload balance between resources.
Directory of Open Access Journals (Sweden)
Vijay Kumar
2016-01-01
Fault detection process (FDP) and fault correction process (FCP) are important phases of the software development life cycle (SDLC). It is essential for software to undergo a testing phase, during which faults are detected and corrected. The main goal of this article is to allocate the testing resources in an optimal manner to minimize the cost during the testing phase using FDP and FCP under a dynamic environment. In this paper, we first assume there is a time lag between fault detection and fault correction. Thus, removal of a fault is performed after the fault is detected. In addition, the detection process and correction process are taken to be independent simultaneous activities with different budgetary constraints. A structured optimal policy based on optimal control theory is proposed for software managers to optimize the allocation of the limited resources with the reliability criteria. Furthermore, the release policy for the proposed model is also discussed. A numerical example is given in support of the theoretical results.
International Nuclear Information System (INIS)
Casanova, R; Yang, L; Hairston, W D; Laurienti, P J; Maldjian, J A
2009-01-01
Recently we have proposed the use of Tikhonov regularization with temporal smoothness constraints to estimate the BOLD fMRI hemodynamic response function (HRF). The temporal smoothness constraint was imposed on the estimates by using second derivative information while the regularization parameter was selected based on the generalized cross-validation function (GCV). Using one-dimensional simulations, we previously found this method to produce reliable estimates of the HRF time course, especially its time to peak (TTP), being at the same time fast and robust to over-sampling in the HRF estimation. Here, we extend the method to include simultaneous temporal and spatial smoothness constraints. This method does not need Gaussian smoothing as a pre-processing step as usually done in fMRI data analysis. We carried out two-dimensional simulations to compare the two methods: Tikhonov regularization with temporal (Tik-GCV-T) and spatio-temporal (Tik-GCV-ST) smoothness constraints on the estimated HRF. We focus our attention on quantifying the influence of the Gaussian data smoothing and the presence of edges on the performance of these techniques. Our results suggest that the spatial smoothing introduced by regularization is less severe than that produced by Gaussian smoothing. This allows more accurate estimates of the response amplitudes while producing similar estimates of the TTP. We illustrate these ideas using real data. (note)
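The penalized estimate at the core of the approach above can be sketched as follows; the design matrix, data, and the fixed regularization parameter are illustrative (the paper selects the parameter by GCV), and only the temporal smoothness constraint is shown:

```python
import numpy as np

def tikhonov_second_derivative(X, y, lam):
    """Solve min ||X h - y||^2 + lam * ||D2 h||^2 for the HRF estimate h.

    D2 is the second-difference operator, imposing temporal smoothness
    on the estimated hemodynamic response. Illustrative sketch only.
    """
    n = X.shape[1]
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]   # discrete second derivative
    # Normal equations of the regularized least-squares problem.
    A = X.T @ X + lam * (D2.T @ D2)
    return np.linalg.solve(A, X.T @ y)
```

With `lam = 0` the estimator reduces to ordinary least squares; as `lam` grows, the second differences of the estimate are driven toward zero, i.e. the estimated time course becomes nearly linear.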
Transactive-Market-Based Operation of Distributed Electrical Energy Storage with Grid Constraints
Directory of Open Access Journals (Sweden)
M. Nazif Faqiry
2017-11-01
In a transactive energy market, distributed energy resources (DERs) such as dispatchable distributed generators (DGs), electrical energy storages (EESs), distribution-scale load aggregators (LAs), and renewable energy sources (RESs) have to earn their share of supply or demand through a bidding process. In such a market, the distribution system operator (DSO) may optimally schedule these resources, first in a forward market, i.e., day-ahead, and in a real-time market later on, while maintaining a reliable and economic distribution grid. In this paper, an efficient day-ahead scheduling of these resources, in the presence of interaction with the wholesale market at the locational marginal price (LMP), is studied. Due to the inclusion of EES units with integer constraints, a detailed mixed integer linear programming (MILP) formulation that incorporates simplified DistFlow equations to account for grid constraints is proposed. Convex quadratic line and transformer apparent power flow constraints have been linearized using an outer approximation. The proposed model schedules DERs based on the distribution locational marginal price (DLMP), which is obtained as the Lagrange multiplier of the real power balance constraint at each distribution bus while maintaining physical grid constraints such as line limits, transformer limits, and bus voltage magnitudes. Case studies are performed on a modified IEEE 13-bus system with high DER penetration. Simulation results show the validity and efficiency of the proposed model.
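The pricing notion above can be illustrated with a toy single-bus dispatch: the DLMP at a bus is the Lagrange multiplier of that bus's power balance constraint, which in an uncongested single-bus case reduces to the marginal offer price. The offers and demand below are made up for illustration; the paper's actual model is a MILP with DistFlow grid constraints:

```python
def merit_order_dispatch(offers, demand):
    """Dispatch the cheapest resources first.

    offers: list of (price, capacity) pairs; demand: total load to serve.
    Returns (dispatch, price) where price is the offer of the marginal
    (price-setting) resource -- the single-bus analogue of the DLMP.
    Illustrative sketch with hypothetical names.
    """
    remaining = demand
    dispatch, price = [], 0.0
    for p, cap in sorted(offers):          # cheapest offers first
        g = min(cap, remaining)
        dispatch.append((p, g))
        if g > 0:
            price = p                      # last resource with output sets price
        remaining -= g
    if remaining > 1e-9:
        raise ValueError("demand exceeds total capacity")
    return dispatch, price
```

With offers of 20 $/MWh (100 MW) and 50 $/MWh (100 MW) against 150 MW of demand, the cheap unit saturates and the expensive unit becomes marginal, so the clearing price is 50 $/MWh.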
Modeling and Analysis of CNC Milling Process Parameters on Al3030 based Composite
Gupta, Anand; Soni, P. K.; Krishna, C. M.
2018-04-01
The machining of Al3030 based composites on Computer Numerical Control (CNC) high speed milling machines has assumed importance because of their wide application in the aerospace, marine and automotive industries. Industries mainly focus on surface irregularities, material removal rate (MRR) and tool wear rate (TWR), which usually depend on input process parameters, namely cutting speed, feed in mm/min, depth of cut and step over ratio. Many researchers have carried out research in this area, but very few have taken step over ratio, or radial depth of cut, as one of the input variables. In this research work, the study of the characteristics of Al3030 is carried out on a high speed CNC milling machine over the speed range of 3000 to 5000 r.p.m. Step over ratio, depth of cut and feed rate are the other input variables taken into consideration in this research work. A total of nine experiments are conducted according to a Taguchi L9 orthogonal array. The machining is carried out on a high speed CNC milling machine using a flat end mill of diameter 10 mm. Flatness, MRR and TWR are taken as output parameters. Flatness has been measured using a portable Coordinate Measuring Machine (CMM). Linear regression models have been developed using Minitab 18 software and the results are validated by conducting a selected additional set of experiments. The selection of input process parameters in order to get the best machining outputs is the key contribution of this research work.
New Boundary Constraints for Elliptic Systems used in Grid Generation Problems
Kaul, Upender K.; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper discusses new boundary constraints for elliptic partial differential equations as used in grid generation problems in generalized curvilinear coordinate systems. These constraints, based on the principle of local conservation of thermal energy in the vicinity of the boundaries, are derived using the Green's Theorem. They uniquely determine the so-called decay parameters in the source terms of these elliptic systems. These constraints are designed for boundary clustered grids where large gradients in physical quantities need to be resolved adequately. It is observed that the present formulation also works satisfactorily for mild clustering. Therefore, a closure for the decay parameter specification for elliptic grid generation problems has been provided, resulting in a fully automated elliptic grid generation technique. Thus, there is no need for a parametric study of these decay parameters since the new constraints fix them uniquely. It is also shown that for Neumann type boundary conditions, these boundary constraints uniquely determine the solution to the internal elliptic problem, thus eliminating the non-uniqueness of the solution of an internal Neumann boundary value grid generation problem.
An Interval-Valued Approach to Business Process Simulation Based on Genetic Algorithms and the BPMN
Directory of Open Access Journals (Sweden)
Mario G.C.A. Cimino
2014-05-01
Simulating organizational processes characterized by interacting human activities, resources, business rules and constraints is a challenging task, because of the inherent uncertainty, inaccuracy, variability and dynamicity. With regard to this problem, currently available business process simulation (BPS) methods and tools are unable to efficiently capture the process behavior along its lifecycle. In this paper, a novel approach to BPS is presented. To build and manage simulation models according to the proposed approach, a simulation system is designed, developed and tested on pilot scenarios, as well as on real-world processes. The proposed approach exploits interval-valued data to represent model parameters, in place of conventional single-valued or probability-valued parameters. Indeed, an interval-valued parameter is comprehensive; it is the easiest to understand and express and the simplest to process among multi-valued representations. In order to compute the interval-valued output of the system, a genetic algorithm is used. The resulting process model allows forming mappings at different levels of detail and, therefore, at different model resolutions. The system has been developed as an extension of a publicly available simulation engine, based on the Business Process Model and Notation (BPMN) standard.
Goossens, S. J.; Sabaka, T. J.; Genova, A.; Mazarico, E. M.; Nicholas, J. B.; Neumann, G. A.; Lemoine, F. G.
2017-12-01
The crust of a terrestrial planet is formed by differentiation processes in its early history, followed by magmatic evolution of the planetary surface. It is further modified through impact processes. Knowledge of the crustal structure can thus place constraints on the planet's formation and evolution. In particular, the average bulk density of the crust is a fundamental parameter in geophysical studies, such as the determination of crustal thickness, studies of the mechanisms of topography support, and the planet's thermo-chemical evolution. Yet even with in-situ samples available, the crustal density is difficult to determine unambiguously, as exemplified by the results for the Gravity Recovery and Interior Laboratory (GRAIL) mission, which found an average crustal density for the Moon that was lower than generally assumed. The GRAIL results were possible owing to the combination of its high-resolution gravity and high-resolution topography obtained by the Lunar Orbiter Laser Altimeter (LOLA) onboard the Lunar Reconnaissance Orbiter (LRO), and high correlations between the two datasets. The crustal density can be determined by its contribution to the gravity field of a planet, but at long wavelengths flexure effects can dominate. On the other hand, short-wavelength gravity anomalies are difficult to measure, and either not determined well enough (other than at the Moon), or their power is suppressed by the standard 'Kaula' regularization constraint applied during inversion of the gravity field from satellite tracking data. We introduce a new constraint that has infinite variance in one direction, called xa. For constraint damping factors that go to infinity, it can be shown that the solution x becomes equal to a scale factor times xa. This scale factor is completely determined by the data, and we call our constraint rank-minus-1 (RM1). If we choose xa to be topography-induced gravity, then we can estimate the average bulk crustal density directly from the data.
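The RM1 scale-factor property can be illustrated with a toy linear model: when the damping forces the solution onto the constraint direction xa, the data determine only the scalar s, which for a least-squares fit has the closed form s = (A xa)·d / ||A xa||². All quantities below are synthetic stand-ins, not mission data:

```python
import numpy as np

# Toy linear observation model d = A x with the solution constrained to
# the direction xa (the analogue of topography-induced gravity); the
# scale factor s then plays the role of the bulk crustal density.
rng = np.random.default_rng(42)
A = rng.standard_normal((30, 5))      # linearized observation model (synthetic)
xa = rng.standard_normal(5)           # constraint direction (synthetic)
true_s = 2.5                          # scale factor to recover
d = A @ (true_s * xa)                 # noise-free synthetic observations

# Least-squares scale factor: project the data onto the model response
# of the constraint direction.
Axa = A @ xa
s = float(Axa @ d / (Axa @ Axa))
```

In the noise-free case the recovery is exact; with noisy data s is still the best least-squares scale, which is the sense in which it is "completely determined by the data."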
Janardhanan, S.; Datta, B.
2011-12-01
saltwater intrusion are considered. The salinity levels resulting at strategic locations due to this pumping are predicted using the ensemble surrogates and are constrained to be within pre-specified levels. Different realizations of the concentration values are obtained from the ensemble predictions corresponding to each candidate pumping solution. The reliability concept is incorporated as the percentage of surrogate models that satisfy the imposed constraints. The methodology was applied to a realistic coastal aquifer system in the Burdekin delta area in Australia. It was found that all optimal solutions corresponding to a reliability level of 0.99 satisfy all the constraints, and as the reliability level decreases, the constraint violation increases. Thus, ensemble-surrogate-model-based simulation-optimization was found to be useful in deriving multi-objective optimal pumping strategies for coastal aquifers under parameter uncertainty.
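The reliability measure described above can be sketched directly: for a candidate pumping strategy, reliability is the fraction of ensemble surrogate models whose predicted concentration respects the limit. The function and names below are hypothetical illustrations, not the study's code:

```python
def ensemble_reliability(surrogates, pumping, c_max):
    """Fraction of surrogate models satisfying the salinity constraint.

    surrogates: callables mapping a pumping strategy to a predicted
    concentration; c_max: the pre-specified salinity limit.
    Illustrative sketch of the reliability concept described above.
    """
    ok = sum(1 for predict in surrogates if predict(pumping) <= c_max)
    return ok / len(surrogates)
```

A reliability level of 0.99 then simply means that at least 99% of the ensemble members predict the candidate strategy to stay within the limit.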
Dimensional modeling: beyond data processing constraints.
Bunardzic, A
1995-01-01
The focus of information processing requirements is shifting from on-line transaction processing (OLTP) issues to on-line analytical processing (OLAP) issues. While the former serves to ensure the feasibility of real-time on-line transaction processing (which has already exceeded a level of 1,000 transactions per second under normal conditions), the latter aims at enabling more sophisticated analytical manipulation of data. The OLTP requirements, or how to efficiently get data into the system, have been solved by applying the Relational theory in the form of the Entity-Relationship model. There is presently no theory related to OLAP that would resolve the analytical processing requirements as efficiently as the Relational theory did for transaction processing. The "relational dogma" also provides the mathematical foundation for the Centralized Data Processing paradigm, in which mission-critical information is incorporated as 'one and only one instance' of data, thus ensuring data integrity. In such surroundings, the information that supports business analysis and decision support activities is obtained by running predefined reports and queries that are provided by the IS department. In today's intensified competitive climate, businesses are finding that this traditional approach is not good enough. The only way to stay on top of things, and to survive and prosper, is to decentralize the IS services. The newly emerging Distributed Data Processing, with its increased emphasis on empowering the end user, does not seem to find enough merit in the relational database model to justify relying upon it. Relational theory proved too rigid and complex to accommodate the analytical processing needs. In order to satisfy the OLAP requirements, or how to efficiently get the data out of the system, different models, metaphors, and theories have been devised. All of them are pointing to the need for simplifying the highly non-intuitive mathematical constraints found
Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis
Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song
2018-01-01
To resolve the problems of slow computation and low matching accuracy in image registration, a new image registration algorithm based on parallax constraint and clustering analysis is proposed. First, the Harris corner detection algorithm is used to extract the feature points of the two images. Second, the Normalized Cross Correlation (NCC) function is used to perform approximate matching of the feature points, yielding the initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes the feature point pairs with obvious errors introduced in the approximate matching step. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to optimize the feature points and obtain the final feature point matching result, realizing fast and accurate image registration. The experimental results show that the proposed image registration algorithm improves the accuracy of image matching while ensuring the real-time performance of the algorithm.
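The matching pipeline in this abstract can be illustrated with a small sketch. The fragment below shows only the parallax-constraint clustering step (a 2-cluster K-means over match displacement vectors, keeping the dominant cluster); the corner detection, NCC matching, and RANSAC stages are omitted, and the point arrays are synthetic toy data, not the authors' code.

```python
import numpy as np

def filter_matches_by_parallax(pts1, pts2, iters=20):
    """Cluster the match displacement (parallax) vectors into two groups
    with K-means and keep only the dominant cluster, discarding matches
    whose parallax disagrees with the majority."""
    disp = pts2 - pts1                                   # per-match parallax
    norms = np.linalg.norm(disp, axis=1)
    # deterministic init: smallest-norm and largest-norm displacements
    centers = disp[[norms.argmin(), norms.argmax()]].astype(float)
    for _ in range(iters):
        d = ((disp[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)                        # nearest center
        for j in (0, 1):
            if np.any(labels == j):
                centers[j] = disp[labels == j].mean(axis=0)
    keep = labels == np.bincount(labels, minlength=2).argmax()
    return keep

# toy data: 40 consistent matches shifted by (5, 2), plus 5 gross mismatches
rng = np.random.default_rng(1)
p1 = rng.uniform(0, 100, (45, 2))
p2 = p1 + np.array([5.0, 2.0]) + rng.normal(0, 0.1, (45, 2))
p2[40:] += np.array([60.0, -40.0])                       # wrong matches
keep = filter_matches_by_parallax(p1, p2)
print(keep[:40].all(), keep[40:].any())                  # True False
```

In the full algorithm the surviving pairs would then be passed to RANSAC for final homography estimation.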
Satellite-based emission constraint for nitrogen oxides: Capability and uncertainty
Lin, J.; McElroy, M. B.; Boersma, F.; Nielsen, C.; Zhao, Y.; Lei, Y.; Liu, Y.; Zhang, Q.; Liu, Z.; Liu, H.; Mao, J.; Zhuang, G.; Roozendael, M.; Martin, R.; Wang, P.; Spurr, R. J.; Sneep, M.; Stammes, P.; Clemer, K.; Irie, H.
2013-12-01
Vertical column densities (VCDs) of tropospheric nitrogen dioxide (NO2) retrieved from satellite remote sensing have been employed widely to constrain emissions of nitrogen oxides (NOx). A major strength of satellite-based emission constraints is the analysis of emission trends and variability, while a crucial limitation is errors both in satellite NO2 data and in the model simulations relating NOx emissions to NO2 columns. Through a series of studies, we have explored these aspects over China. We separate anthropogenic from natural sources of NOx by exploiting their different seasonality. We infer trends of NOx emissions in recent years and the effects of a variety of socioeconomic events at different spatiotemporal scales, including general economic growth, the global financial crisis, the Chinese New Year, and the Beijing Olympics. We further investigate the impact of growing NOx emissions on particulate matter (PM) pollution in China. As part of recent developments, we identify and correct errors in both the satellite NO2 retrieval and the model simulation that ultimately affect the NOx emission constraint. We improve the treatments of aerosol optical effects, clouds and surface reflectance in the NO2 retrieval process, using ground-based MAX-DOAS measurements as a reference to evaluate the improved retrieval results. We analyze the sensitivity of simulated NO2 to errors in the model representation of major meteorological and chemical processes, with a subsequent correction of model bias. Future studies will implement these improvements to re-constrain NOx emissions.
DEFF Research Database (Denmark)
Dove, Graham; Biskjær, Michael Mose; Lundqvist, Caroline Emilie
2017-01-01
groups of students building three models each. We studied groups building with traditional plastic bricks and also using a digital environment. The building tasks students undertake, and our subsequent analysis, are informed by the role constraints and ambiguity play in creative processes. Based...
FORECASTING COSMOLOGICAL PARAMETER CONSTRAINTS FROM NEAR-FUTURE SPACE-BASED GALAXY SURVEYS
International Nuclear Information System (INIS)
Pavlov, Anatoly; Ratra, Bharat; Samushia, Lado
2012-01-01
The next generation of space-based galaxy surveys is expected to measure the growth rate of structure to a level of about one percent over a range of redshifts. The rate of growth of structure as a function of redshift depends on the behavior of dark energy and so can be used to constrain parameters of dark energy models. In this work, we investigate how well these future data will be able to constrain the time dependence of the dark energy density. We consider parameterizations of the dark energy equation of state, such as XCDM and ωCDM, as well as a consistent physical model of time-evolving scalar field dark energy, φCDM. We show that if the standard, spatially flat cosmological model is taken as a fiducial model of the universe, these near-future measurements of structure growth will be able to constrain the time dependence of scalar field dark energy density to a precision of about 10%, which is almost an order of magnitude better than what can be achieved from a compilation of currently available data sets.
Directory of Open Access Journals (Sweden)
Guo Yu
2016-01-01
As a major step in surface mount technology, the reflow process is the key factor affecting the quality of the final product. The setting parameters and the characteristic values of the temperature curve show a nonlinear relationship, so the impact of the parameters on the characteristic values is analyzed and a parameter adjustment process based on orthogonal experiments is proposed in this paper. First, the setting parameters are determined and the orthogonal test is designed according to production conditions. Then each characteristic value of the temperature profile is calculated. Further, the multi-index orthogonal experiment is analyzed to identify the setting parameters that have the greatest impact on PCBA product quality. Finally, reliability prediction is carried out considering the main influencing parameters, providing a theoretical basis for parameter adjustment and product quality evaluation in the engineering process.
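The multi-index orthogonal-experiment analysis mentioned above is commonly carried out by range analysis: average the characteristic value at each level of each setting parameter and rank parameters by the range of those level means. A minimal sketch with a hypothetical L4(2^3) array and made-up response values (not the paper's data):

```python
import numpy as np

# L4(2^3) orthogonal array: 4 runs, 3 two-level factors (hypothetical levels)
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
# measured characteristic value (e.g. peak temperature) per run -- toy numbers
y = np.array([182.0, 190.0, 205.0, 211.0])

def range_analysis(design, response):
    """For each factor, compute the mean response at each level and the range
    (max - min of level means); a larger range means a stronger effect."""
    ranges = []
    for f in range(design.shape[1]):
        means = [response[design[:, f] == lv].mean()
                 for lv in np.unique(design[:, f])]
        ranges.append(max(means) - min(means))
    return np.array(ranges)

R = range_analysis(L4, y)
print(R)                # [22.  7.  1.] -- factor 0 dominates in this toy data
print(int(R.argmax()))  # 0: index of the most influential parameter
```

The factors with the largest ranges would then be carried into the reliability-prediction step.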
Large-Scale Constraint-Based Pattern Mining
Zhu, Feida
2009-01-01
We studied the problem of constraint-based pattern mining for three different data formats, item-set, sequence and graph, and focused on mining patterns of large sizes. Colossal patterns in each data format are studied to discover pruning properties that are useful for direct mining of these patterns. For item-set data, we observed robustness of…
Defect assessments of pipelines based on the FAD approach incorporating constraint effects
Energy Technology Data Exchange (ETDEWEB)
Ruggieri, Claudio; Cravero, Sebastian [Sao Paulo Univ., SP (Brazil)
2005-07-01
This work presents a framework for including constraint effects in the failure assessment diagram (FAD) approach. The procedure builds upon the constraint-based Q methodology to correct measured toughness values using low-constraint fracture specimens, which modifies the shape of the FAD curve. The approach is applied to predict the failure (burst pressure) of high pressure pipelines with planar defects having different geometries (i.e., crack depth and crack length). The FAD curves are corrected for effects of constraint based on the Lr-Q trajectories for pin-loaded SE(T) specimens. The article shows that inclusion of constraint effects in the FAD approach provides better agreement between experimentally measured burst pressures and predicted values for high pressure pipelines with planar defects. (author)
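For context, the uncorrected FAD curve to which such constraint corrections are applied is often the material-independent Option 1 curve of BS 7910 / R6. The sketch below evaluates that baseline (constraint-uncorrected) curve and a simple acceptability check; the cut-off Lr_max is material-dependent, and the 1.25 used here is only a placeholder.

```python
import math

def fad_option1(Lr):
    """Material-independent FAD curve, as in BS 7910 / R6 Option 1:
    Kr = (1 - 0.14 Lr^2) * (0.3 + 0.7 exp(-0.65 Lr^6))."""
    return (1.0 - 0.14 * Lr**2) * (0.3 + 0.7 * math.exp(-0.65 * Lr**6))

def is_acceptable(Lr, Kr, Lr_max=1.25):
    """An assessment point (Lr, Kr) is acceptable if it lies inside the FAD."""
    return Lr <= Lr_max and Kr <= fad_option1(Lr)

print(round(fad_option1(0.0), 3))   # 1.0 at the origin
print(is_acceptable(0.5, 0.6))      # True: point lies well inside the curve
print(is_acceptable(1.2, 0.9))      # False: high Kr near plastic collapse
```

A constraint correction of the kind described in the abstract would raise the curve for low-constraint geometries, enlarging the acceptable region.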
Directory of Open Access Journals (Sweden)
Li-lian Huang
2013-01-01
The synchronization of nonlinear uncertain chaotic systems is investigated. We propose a sliding mode state observer scheme which combines sliding mode control with observer theory and apply it to an uncertain chaotic system with unknown parameters and bounded interference. Based on Lyapunov stability theory, the constraints for synchronization and their proof are given. This method can not only realize the synchronization of chaotic systems but also identify the unknown parameters and obtain correct parameter estimates. Moreover, the synchronization of chaotic systems with unknown parameters and bounded external disturbances is made robust by the design of the sliding surface. Finally, numerical simulations on the Liu chaotic system with unknown parameters and disturbances are carried out. The simulation results show that synchronization and parameter identification are fully achieved, verifying the effectiveness of the method.
Constraint-Based Local Search for Constrained Optimum Paths Problems
Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal
Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have been traditionally approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees to find high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.
Strict Constraint Feasibility in Analysis and Design of Uncertain Systems
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enables us to determine if the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
Yen, Haw; Bailey, Ryan T; Arabi, Mazdak; Ahmadi, Mehdi; White, Michael J; Arnold, Jeffrey G
2014-09-01
Watershed models typically are evaluated solely through comparison of in-stream water and nutrient fluxes with measured data using established performance criteria, whereas processes and responses within the interior of the watershed that govern these global fluxes often are neglected. Due to the large number of parameters at the disposal of these models, circumstances may arise in which excellent global results are achieved using inaccurate magnitudes of these "intra-watershed" responses. When used for scenario analysis, a given model hence may inaccurately predict the global, in-stream effect of implementing land-use practices at the interior of the watershed. In this study, data regarding internal watershed behavior are used to constrain parameter estimation to maintain realistic intra-watershed responses while also matching available in-stream monitoring data. The methodology is demonstrated for the Eagle Creek Watershed in central Indiana. Streamflow and nitrate (NO3-) loading are used as global in-stream comparisons, with two process responses, the annual mass of denitrification and the ratio of NO3- losses from subsurface and surface flow, used to constrain parameter estimation. Results show that imposing these constraints not only yields realistic internal watershed behavior but also provides good in-stream comparisons. Results further demonstrate that in the absence of incorporating intra-watershed constraints, evaluation of nutrient abatement strategies could be misleading, even though typical performance criteria are satisfied. Incorporating intra-watershed responses yields a watershed model that more accurately represents the observed behavior of the system and hence a tool that can be used with confidence in scenario evaluation. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Proceedings of the 5th International Workshop on Constraints and Language Processing (CSLP 2008)
DEFF Research Database (Denmark)
This research report constitutes the proceedings of the 5th International Workshop on Constraints and Language Processing (CSLP 2008) which is part of the European Summer School in Logic, Language, and Information (ESSLLI 2008), Hamburg, Germany, August 2008.
Microstructure based procedure for process parameter control in rolling of aluminum thin foils
Kronsteiner, Johannes; Kabliman, Evgeniya; Klimek, Philipp-Christoph
2018-05-01
In the present work, a microstructure-based procedure is used for the numerical prediction of the strength properties of Al-Mg-Sc thin foils during a hot rolling process. For this purpose, the following techniques were developed and implemented. First, a toolkit for the numerical analysis of experimental stress-strain curves obtained during hot compression testing with a deformation dilatometer was developed. The implemented techniques allow for the correction of the temperature increase in samples due to adiabatic heating and for the determination of the yield strength needed to separate the elastic and plastic deformation regimes during numerical simulation of multi-pass hot rolling. In the next step, an asymmetric hot rolling simulator (adjustable table inlet/outlet height as well as separate roll infeed) was developed in order to match the exact processing conditions of a semi-industrial rolling procedure. At each element of the finite element mesh, the total strength is calculated by an in-house flow stress model based on the evolution of the mean dislocation density. The strength values obtained by numerical modelling were found to be in reasonable agreement with the results of tensile tests on thin Al-Mg-Sc foils. Thus, the proposed simulation procedure may allow the processing parameters to be optimized with respect to microstructure development.
Architectural constraints in IEC 61508: Do they have the intended effect?
International Nuclear Information System (INIS)
Lundteigen, Mary Ann; Rausand, Marvin
2009-01-01
The standards IEC 61508 and IEC 61511 employ architectural constraints to prevent quantitative assessments alone from being used to determine the hardware layout of safety instrumented systems (SIS). This article discusses the role of the architectural constraints, and particularly the safe failure fraction (SFF) as a design parameter to determine the hardware fault tolerance (HFT) and the redundancy level for SIS. The discussion is based on examples from the offshore oil and gas industry, but should be relevant for all applications of SIS. The article concludes that architectural constraints may be required to compensate for systematic failures, but the architectural constraints should not be determined based on the SFF. The SFF is considered to be an unnecessary concept.
Directory of Open Access Journals (Sweden)
L. W. de Jonge
2009-08-01
Soil functions and their impact on health, economy, and the environment are evident at the macro scale but determined at the micro scale, based on interactions between soil micro-architecture and the transport and transformation processes occurring in the soil infrastructure comprising pore and particle networks and at their interfaces. Soil structure formation and its resilience to disturbance are highly dynamic features affected by management (energy input), moisture (matric potential), and solids composition and complexation (organic matter and clay interactions). In this paper we review and put into perspective preliminary results of the newly started research program "Soil-it-is" on functional soil architecture. To identify and quantify biophysical constraints on soil structure changes and resilience, we claim that new approaches are needed to better interpret processes and parameters measured at the bulk soil scale and their links to the seemingly chaotic soil inner space behavior at the micro scale. As a first step, we revisit the soil matrix (solids phase) and pore system (water and air phases), constituting the complementary and interactive networks of soil infrastructure. For a field pair with contrasting soil management, we suggest new ways of analyzing measured soil-gas transport parameters at different moisture conditions to evaluate controls of soil matrix and pore network formation. Results imply that some soils form sponge-like pore networks (mostly healthy soils in terms of agricultural and environmental functions), while other soils form pipe-like structures (agriculturally poorly functioning soils), with the difference related to both complexation of organic matter and degradation of soil structure. The recently presented Dexter et al. (2008) threshold (ratio of clay to organic carbon of 10 kg kg^{−1}) is found to be a promising constraint for a soil's ability to maintain or regenerate functional structure.
Constraint-aware interior layout exploration for pre-cast concrete-based buildings
Liu, Han
2013-05-03
Creating desirable layouts of building interiors is a complex task as designers have to manually adhere to various local and global considerations arising from competing practical and design considerations. In this work, we present an interactive design tool to create desirable floorplans by computationally conforming to such design constraints. Specifically, we support three types of constraints: (i) functional constraints such as number of rooms, connectivity among the rooms, target room areas, etc.; (ii) design considerations such as user modifications and preferences, and (iii) fabrication constraints such as cost and convenience of manufacturing. Based on user specifications, our system automatically generates multiple floor layouts with associated 3D geometry that all satisfy the design specifications and constraints, thus exposing only the desirable family of interior layouts to the user. In this work, we focus on pre-cast concrete-based constructions, which lead to interesting discrete and continuous optimization possibilities. We test our framework on a range of complex real-world specifications and demonstrate the control and expressiveness of the exposed design space relieving the users of the task of manually adhering to non-local functional and fabrication constraints. © 2013 Springer-Verlag Berlin Heidelberg.
Constraint-based job shop scheduling with ILOG SCHEDULER
Nuijten, W.P.M.; Le Pape, C.
1998-01-01
We introduce constraint-based scheduling and discuss its main principles. An approximation algorithm based on tree search is developed for the job shop scheduling problem using ILOG SCHEDULER. A new way of calculating lower bounds on the makespan of the job shop scheduling problem is presented and
Sun, Weiwei; Ma, Jun; Yang, Gang; Du, Bo; Zhang, Liangpei
2017-06-01
A new Bayesian method named Poisson Nonnegative Matrix Factorization with Parameter Subspace Clustering Constraint (PNMF-PSCC) is presented to extract endmembers from Hyperspectral Imagery (HSI). First, the method integrates the linear spectral mixture model with the Bayesian framework and formulates endmember extraction as a Bayesian inference problem. Second, the Parameter Subspace Clustering Constraint (PSCC) is incorporated into the statistical program to consider the clustering of all pixels in the parameter subspace. The PSCC enlarges differences among ground objects and helps find endmembers with smaller spectrum divergences. Meanwhile, the PNMF-PSCC method utilizes the Poisson distribution as prior knowledge of spectral signals to better explain the quantum nature of light in the imaging spectrometer. Third, the optimization problem of PNMF-PSCC is formulated as maximizing the joint density via the Maximum A Posteriori (MAP) estimator. The program is finally solved by iteratively optimizing two sub-problems via the Alternating Direction Method of Multipliers (ADMM) framework and the FURTHESTSUM initialization scheme. Five state-of-the-art methods are implemented for comparison with PNMF-PSCC on both synthetic and real HSI datasets. Experimental results show that PNMF-PSCC outperforms all five methods in Spectral Angle Distance (SAD) and Root-Mean-Square Error (RMSE), and in particular it identifies good endmembers for ground objects with smaller spectrum divergences.
Monocular Visual Odometry Based on Trifocal Tensor Constraint
Chen, Y. J.; Yang, G. L.; Jiang, Y. X.; Liu, X. Y.
2018-02-01
For the problem of real-time precise localization in urban streets, a monocular visual odometry based on extended Kalman fusion of optical-flow tracking and a trifocal tensor constraint is proposed. To diminish the influence of moving objects, such as pedestrians, we estimate the motion of the camera by extracting features on the ground, which improves the robustness of the system. The observation equation based on the trifocal tensor constraint is derived, which forms the Kalman filter along with the state transition equation. An extended Kalman filter is employed to cope with the nonlinear system. Experimental results demonstrate that, compared with Yu's 2-step EKF method, the algorithm is more accurate and meets the needs of real-time accurate localization in cities.
Logic-based methods for optimization combining optimization and constraint satisfaction
Hooker, John
2011-01-01
A pioneering look at the fundamental role of logic in optimization and constraint satisfaction While recent efforts to combine optimization and constraint satisfaction have received considerable attention, little has been said about using logic in optimization as the key to unifying the two fields. Logic-Based Methods for Optimization develops for the first time a comprehensive conceptual framework for integrating optimization and constraint satisfaction, then goes a step further and shows how extending logical inference to optimization allows for more powerful as well as flexible
Directory of Open Access Journals (Sweden)
Zhenggang Du
2015-03-01
To improve models for accurate projections, data assimilation, an emerging statistical approach to combine models with data, has recently been developed to probe initial conditions, parameters, data content, response functions and model uncertainties. Quantifying how much information is contained in different data streams is essential to predict future states of ecosystems and the climate. This study uses a data assimilation approach to examine the information content of flux- and biometric-based data for constraining parameters in a terrestrial carbon (C) model, which includes canopy photosynthesis and vegetation–soil C transfer submodels. Three assimilation experiments were constructed with either net ecosystem exchange (NEE) data only, biometric data only [including foliage and woody biomass, litterfall, soil organic C (SOC) and soil respiration], or both NEE and biometric data to constrain model parameters by a probabilistic inversion application. The results showed that NEE data mainly constrained parameters associated with gross primary production (GPP) and ecosystem respiration (RE) but were almost invalid for C transfer coefficients, while biometric data were more effective in constraining C transfer coefficients than other parameters. NEE and biometric data constrained about 26% (6) and 30% (7) of a total of 23 parameters, respectively, but their combined application constrained about 61% (14) of all parameters. The complementarity of NEE and biometric data was obvious in constraining most of the parameters. The poor constraint by only NEE or biometric data was probably attributable to either the lack of long-term C dynamic data or errors from measurements. Overall, our results suggest that flux- and biometric-based data, containing different processes in ecosystem C dynamics, have different capacities to constrain parameters related to photosynthesis and C transfer coefficients, respectively. Multiple data sources could also
Directory of Open Access Journals (Sweden)
Wei Huang
2017-03-01
Geospatial big data analysis (GBDA) is extremely significant for time-constrained applications such as disaster response. However, time-constrained analysis is not yet a trivial task in the cloud computing environment. Spatial query processing (SQP) is typically computation-intensive and indispensable for GBDA, and the spatial range query, join query, and nearest neighbor query algorithms are not scalable without using MapReduce-like frameworks. Parallel SQP algorithms (PSQPAs) are trapped in screw-processing, which is a known issue in Geoscience. To satisfy time-constrained GBDA, we propose an elastic SQP approach in this paper. First, Spark is used to implement PSQPAs. Second, Kubernetes-managed Core Operation System (CoreOS) clusters provide self-healing Docker containers for running Spark clusters in the cloud. Spark-based PSQPAs are submitted to Docker containers, where Spark master instances reside. Finally, the horizontal pod auto-scaler (HPA) scales out and scales in Docker containers to supply on-demand computing resources. Combined with an auto-scaling group of virtual instances, HPA helps to find each of the five nearest neighbors for 46,139,532 query objects from 834,158 spatial data objects in less than 300 s. The experiments conducted on an OpenStack cloud demonstrate that auto-scaling containers can satisfy time-constrained GBDA in clouds.
Estimation of metallurgical parameters of flotation process from froth visual features
Directory of Open Access Journals (Sweden)
Mohammad Massinaei
2015-06-01
The estimation of metallurgical parameters of the flotation process from froth visual features is the ultimate goal of a machine vision based control system. In this study, a batch flotation system was operated under different process conditions, and metallurgical parameters and froth image data were determined simultaneously. Algorithms have been developed for measuring textural and physical froth features from the captured images. The correlation between the froth features and the metallurgical parameters was successfully modeled using artificial neural networks. It has been shown that the performance parameters of the flotation process can be accurately estimated from the extracted image features, which is of great importance for developing automatic control systems.
Modeling external constraints: Applying expert systems to nuclear plants
International Nuclear Information System (INIS)
Beck, C.E.; Behera, A.K.
1993-01-01
Artificial Intelligence (AI) applications in nuclear plants have received much attention over the past decade. Specific applications that have been addressed include development of models and knowledge bases, plant maintenance, operations, procedural guidance, risk assessment, and design tools. This paper examines the issue of external constraints, with a focus on the use of AI and expert systems as design tools. It also provides several suggested methods for addressing these constraints within the AI framework. These methods include a State Matrix scheme, a layered structure for the knowledge base, and application of the dynamic parameter concept.
Data assimilation method based on the constraints of confidence region
Li, Yong; Li, Siming; Sheng, Yao; Wang, Luheng
2018-03-01
The ensemble Kalman filter (EnKF) is a distinguished data assimilation method that is widely used and studied in various fields including meteorology and oceanography. However, due to the limited sample size or imprecise dynamics model, it is usually easy for the forecast error variance to be underestimated, which further leads to the phenomenon of filter divergence. Additionally, the assimilation results of the initial stage are poor if the initial condition settings differ greatly from the true initial state. To address these problems, the variance inflation procedure is usually adopted. In this paper, we propose a new method based on the constraints of a confidence region constructed by the observations, called EnCR, to estimate the inflation parameter of the forecast error variance of the EnKF method. In the new method, the state estimate is more robust to both inaccurate forecast models and initial condition settings. The new method is compared with other adaptive data assimilation methods in the Lorenz-63 and Lorenz-96 models under various model parameter settings. The simulation results show that the new method performs better than the competing methods.
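A minimal sketch of one EnKF analysis step with the multiplicative variance inflation discussed above. The EnCR method itself estimates the inflation parameter from a confidence region; here the factor is simply fixed, and the state, observation operator, and data are toy values, not the paper's experiments.

```python
import numpy as np

def enkf_analysis(ensemble, y_obs, H, R, inflation=1.05, rng=None):
    """One stochastic-EnKF analysis step with multiplicative variance
    inflation: the ensemble is spread about its mean by `inflation`
    before the Kalman update, a common fix for underestimated
    forecast error variance."""
    rng = np.random.default_rng() if rng is None else rng
    mean = ensemble.mean(axis=1, keepdims=True)
    ensemble = mean + inflation * (ensemble - mean)       # inflate spread
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    n = ensemble.shape[1]
    P = X @ X.T / (n - 1)                                 # forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # Kalman gain
    # perturbed observations, one per ensemble member
    Y = y_obs[:, None] + rng.multivariate_normal(np.zeros(len(y_obs)), R, n).T
    return ensemble + K @ (Y - H @ ensemble)

# toy example: 3-state system, observe the first component only
rng = np.random.default_rng(0)
truth = np.array([1.0, -0.5, 2.0])
ens = truth[:, None] + rng.normal(0, 1.0, (3, 200))       # prior ensemble
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.01]])
post = enkf_analysis(ens, np.array([truth[0]]), H, R, rng=rng)
# with an accurate observation, the analysis mean of the observed state
# moves close to the observed value
print(abs(post.mean(axis=1)[0] - truth[0]) < 0.1)         # True
```

In a full assimilation cycle this analysis step alternates with a forecast step that propagates each member through the (e.g. Lorenz-63) dynamics.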
Line-feature-based calibration method of structured light plane parameters for robot hand-eye system
Qi, Yuhan; Jing, Fengshui; Tan, Min
2013-03-01
For monocular structured light vision measurement, it is essential to calibrate the structured light plane parameters in addition to the camera intrinsic parameters. A line-feature-based calibration method of structured light plane parameters for a robot hand-eye system is proposed. Structured light stripes are selected as the calibrating primitive elements, and the robot moves from one calibrating position to another under a constraint such that two misaligned stripe lines are generated. The images of the stripe lines can then be captured by the camera fixed at the robot's end link. During calibration, the equations of the two stripe lines in the camera coordinate system are calculated, and then the structured light plane can be determined. As the robot's motion may affect the effectiveness of calibration, the robot's motion constraints are analyzed. A calibration experiment and two vision measurement experiments are implemented, and the results reveal that the calibration accuracy can meet the precision requirement of robot thick plate welding. Finally, analysis and discussion are provided to illustrate that the method has a high efficiency fit for industrial in-situ calibration.
DEFF Research Database (Denmark)
Mödersheim, Sebastian Alexander; Basin, David; Viganò, Luca
2010-01-01
We introduce constraint differentiation, a powerful technique for reducing search when model-checking security protocols using constraint-based methods. Constraint differentiation works by eliminating certain kinds of redundancies that arise in the search space when using constraints to represent ... Results show that constraint differentiation substantially reduces search and considerably improves the performance of OFMC, enabling its application to a wider class of problems.
Directory of Open Access Journals (Sweden)
Suresh Kumar
2014-10-01
In this paper, we study a cosmological model in general relativity within the framework of spatially flat Friedmann–Robertson–Walker space–time filled with ordinary matter (baryonic), radiation, dark matter and dark energy, where the latter two components are described by Chevallier–Polarski–Linder equation of state parameters. We utilize the observational data sets from SNLS3, BAO and Planck + WMAP9 + WiggleZ measurements of the matter power spectrum to constrain the model parameters. We find that the current observational data offer tight constraints on the equation of state parameter of dark matter. We consider the perturbations and study the behavior of dark matter by observing its effects on the CMB and matter power spectra. We find that the current observational data favor the cold dark matter scenario with cosmological constant type dark energy at the present epoch.
International Nuclear Information System (INIS)
Kumar, Suresh; Xu, Lixin
2014-01-01
In this paper, we study a cosmological model in general relativity within the framework of spatially flat Friedmann–Robertson–Walker space–time filled with ordinary matter (baryonic), radiation, dark matter and dark energy, where the latter two components are described by Chevallier–Polarski–Linder equation of state parameters. We utilize the observational data sets from SNLS3, BAO and Planck + WMAP9 + WiggleZ measurements of matter power spectrum to constrain the model parameters. We find that the current observational data offer tight constraints on the equation of state parameter of dark matter. We consider the perturbations and study the behavior of dark matter by observing its effects on CMB and matter power spectra. We find that the current observational data favor the cold dark matter scenario with the cosmological constant type dark energy at the present epoch.
Genetic Algorithms for a Parameter Estimation of a Fermentation Process Model: A Comparison
Directory of Open Access Journals (Sweden)
Olympia Roeva
2005-12-01
In this paper, the problem of parameter estimation using genetic algorithms is examined. A case study considering the estimation of 6 parameters of a nonlinear dynamic model of E. coli fermentation is presented as a test problem. The parameter estimation problem is stated as a nonlinear programming problem subject to nonlinear differential-algebraic constraints. This problem is known to be frequently ill-conditioned and multimodal, so traditional (gradient-based) local optimization methods fail to arrive at satisfactory solutions. To overcome their limitations, the use of different genetic algorithms as stochastic global optimization methods is explored. These algorithms have proved very suitable for the optimization of highly nonlinear problems with many variables, and are robust in searching for global optima. These facts make them advantageous for parameter identification of fermentation models. A comparison between simple, modified and multi-population genetic algorithms is presented. The best result is obtained using the modified genetic algorithm. The considered algorithms converged to very similar cost values, but the modified algorithm is several times faster than the other two.
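The workflow above can be illustrated with a minimal real-coded genetic algorithm. This is a sketch, not the authors' implementation: it fits a deliberately simple two-parameter toy model (a linear stand-in, not the E. coli fermentation dynamics), and uses truncation selection, arithmetic crossover and Gaussian mutation as generic operators.

```python
import random

random.seed(42)

# Hypothetical "true" parameters and synthetic observations; the optimizer
# only sees the data, and fitness is the sum of squared residuals.
TRUE = (0.5, 1.8)
DATA = [(t, TRUE[0] * t + TRUE[1]) for t in range(10)]

def fitness(ind):
    # Sum of squared residuals between model output and observations.
    return sum((ind[0] * t + ind[1] - y) ** 2 for t, y in DATA)

def evolve(pop_size=40, gens=60, bounds=(0.0, 3.0), mut=0.1):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi), random.uniform(lo, hi)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]               # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
            child = [min(hi, max(lo, g + random.gauss(0, mut)))
                     for g in child]                      # Gaussian mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Because the best half of the population always survives, the best-so-far cost is monotonically non-increasing, which is the elitism the modified variants in the paper also rely on.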
Sequential grouping constraints on across-channel auditory processing
DEFF Research Database (Denmark)
Oxenham, Andrew J.; Dau, Torsten
2005-01-01
Søren Buus was one of the pioneers in the study of across-channel auditory processing. His influential 1985 paper showed that introducing slow fluctuations to a low-frequency masker could reduce the detection thresholds of a high-frequency signal by as much as 25 dB [S. Buus, J. Acoust. Soc. Am. 78, 1958–1965 (1985)]. Søren explained this surprising result in terms of the spread of masker excitation and across-channel processing of envelope fluctuations. A later study [S. Buus and C. Pan, J. Acoust. Soc. Am. 96, 1445–1457 (1994)] pioneered the use of the same stimuli in tasks where across-channel processing could either help or hinder performance. In the present set of studies we also use paradigms in which across-channel processing can lead to either improvement or deterioration in performance. We show that sequential grouping constraints can affect both types of paradigm. In particular…
Future Cosmological Constraints From Fast Radio Bursts
Walters, Anthony; Weltman, Amanda; Gaensler, B. M.; Ma, Yin-Zhe; Witzemann, Amadeus
2018-03-01
We consider the possible observation of fast radio bursts (FRBs) with planned future radio telescopes, and investigate how well the dispersions and redshifts of these signals might constrain cosmological parameters. We construct mock catalogs of FRB dispersion measure (DM) data and employ Markov Chain Monte Carlo analysis, with which we forecast and compare with existing constraints in the flat ΛCDM model, as well as some popular extensions that include dark energy equation of state and curvature parameters. We find that the scatter in DM observations caused by inhomogeneities in the intergalactic medium (IGM) poses a big challenge to the utility of FRBs as a cosmic probe. Only in the most optimistic case, with a high number of events and low IGM variance, do FRBs aid in improving current constraints. In particular, when FRBs are combined with CMB+BAO+SNe+H_0 data, we find the biggest improvement comes in the Ω_b h² constraint. Also, we find that the dark energy equation of state is poorly constrained, while the constraint on the curvature parameter, Ω_k, shows some improvement when combined with current constraints. When FRBs are combined with future baryon acoustic oscillation (BAO) data from 21 cm Intensity Mapping, we find little improvement over the constraints from BAOs alone. However, the inclusion of FRBs introduces an additional parameter constraint, Ω_b h², which turns out to be comparable to existing constraints. This suggests that FRBs provide valuable information about the cosmological baryon density in the intermediate redshift universe, independent of high-redshift CMB data.
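The forecasting machinery can be sketched in miniature. The snippet below is a toy stand-in, not the authors' pipeline: it models the mean dispersion measure as DM(z) ≈ A·z with a single lumped parameter A (rather than the full cosmological DM–z relation), adds Gaussian "IGM scatter" to a mock catalog, and recovers A with a plain Metropolis sampler; all numbers are illustrative.

```python
import math
import random
import statistics

random.seed(7)

# Mock catalog: DM(z) = A*z plus Gaussian scatter mimicking IGM inhomogeneity.
A_TRUE, SIGMA = 1000.0, 200.0          # pc cm^-3 per unit z; scatter width
zs = [0.1 * i for i in range(1, 51)]
dms = [A_TRUE * z + random.gauss(0, SIGMA) for z in zs]

def log_like(a):
    # Gaussian log-likelihood of the mock DM data given slope a.
    return -0.5 * sum((dm - a * z) ** 2 for z, dm in zip(zs, dms)) / SIGMA ** 2

def metropolis(n=20000, step=20.0, start=800.0):
    # One-parameter Metropolis sampler with a flat prior on a.
    chain, a, ll = [], start, log_like(start)
    for _ in range(n):
        prop = a + random.gauss(0, step)
        llp = log_like(prop)
        if math.log(random.random()) < llp - ll:
            a, ll = prop, llp
        chain.append(a)
    return chain[5000:]                 # discard burn-in

chain = metropolis()
print(statistics.mean(chain), statistics.stdev(chain))
```

The posterior width shrinks as the IGM scatter SIGMA drops or the number of mock events grows, which is the qualitative trade-off the abstract describes.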
Hysteresis modeling based on saturation operator without constraints
International Nuclear Information System (INIS)
Park, Y.W.; Seok, Y.T.; Park, H.J.; Chung, J.Y.
2007-01-01
This paper proposes a simple way to model complex hysteresis in a magnetostrictive actuator by employing saturation operators without constraints. Having no constraints causes a singularity problem, i.e. the inverse matrix cannot be obtained when calculating the weights. To overcome this, a pseudoinverse concept is introduced. Simulation results are compared with experimental data from a Terfenol-D actuator. The proposed model is much closer to the experimental data than the modified PI model: the relative error is 12% with the modified PI model and less than 1% with the proposed model.
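The singularity-and-pseudoinverse step can be illustrated with a small least-squares sketch, assuming NumPy. The clipping operators and threshold values below are invented stand-ins for the paper's saturation operators: two identical thresholds make the regressor matrix rank-deficient on purpose, so the normal equations have no unique solution, while the Moore–Penrose pseudoinverse still returns the minimum-norm weights.

```python
import numpy as np

def sat(x, r):
    # Toy saturation operator: clips the input at level r.
    return np.clip(x, -r, r)

x = np.linspace(-2, 2, 41)
levels = [0.5, 0.5, 1.5]               # duplicate threshold -> rank-deficient
Phi = np.column_stack([sat(x, r) for r in levels])

# Hypothetical measured hysteresis-free curve built from the same operators.
target = 0.4 * sat(x, 0.5) + 0.6 * sat(x, 1.5)

# Phi.T @ Phi is singular, so invert with the pseudoinverse instead.
w = np.linalg.pinv(Phi) @ target
print(w, np.max(np.abs(Phi @ w - target)))
```

The pseudoinverse splits the weight of the duplicated operator evenly (0.2 and 0.2 rather than an arbitrary 0.4/0.0 split), which is exactly the minimum-norm behavior that resolves the singularity.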
Reduction of Constraints: Applicability of the Homogeneity Constraint for Macrobatch 3
International Nuclear Information System (INIS)
Peeler, D.K.
2001-01-01
The Product Composition Control System (PCCS) is used to determine the acceptability of each batch of Defense Waste Processing Facility (DWPF) melter feed in the Slurry Mix Evaporator (SME). This control system imposes several constraints on the composition of the contents of the SME to define acceptability. These constraints relate process or product properties to composition via prediction models. A SME batch is deemed acceptable if its sample composition measurements lead to acceptable property predictions after accounting for modeling, measurement and analytic uncertainties. The baseline document guiding the use of these data and models is "SME Acceptability Determination for DWPF Process Control (U)" by Brown and Postles [1996]. A minimum of three PCCS constraints support the prediction of the glass durability from a given SME batch. The Savannah River Technology Center (SRTC) is reviewing all of the PCCS constraints associated with durability. The purpose of this review is to revisit these constraints in light of the additional knowledge gained since the beginning of radioactive operations at DWPF and to identify any supplemental studies needed to amplify this knowledge so that redundant or overly conservative constraints can be eliminated or replaced by more appropriate constraints.
Beyond mechanistic interaction: value-based constraints on meaning in language.
Rączaszek-Leonardi, Joanna; Nomikou, Iris
2015-01-01
According to situated, embodied, and distributed approaches to cognition, language is a crucial means for structuring social interactions. Recent approaches that emphasize this coordinative function treat language as a system of replicable constraints on individual and interactive dynamics. In this paper, we argue that the integration of the replicable-constraints approach to language with the ecological view on values allows for a deeper insight into processes of meaning creation in interaction. Such a synthesis of these frameworks draws attention to important sources of structuring interactions beyond the sheer efficiency of a collective system in its current task situation. Most importantly, the workings of linguistic constraints will be shown as embedded in more general fields of values, which are realized on multiple timescales. Because the ontogenetic timescale offers a convenient window into the emergence of linguistic constraints, we present illustrations of concrete mechanisms through which values may become embodied in language use in development.
Koutrouli, Natalia; Anagnostopoulos, Fotios; Griva, Fay; Gourounti, Kleanthi; Kolokotroni, Filippa; Efstathiou, Vasia; Mellon, Robert; Papastylianou, Dona; Niakas, Dimitris; Potamianos, Gregory
2016-01-01
Posttraumatic growth (the perception of positive life changes after an encounter with a trauma) often occurs among breast cancer patients and can be influenced by certain demographic, medical, and psychosocial parameters. Social constraints on disclosure (the deprivation of the opportunity to express feelings and thoughts regarding the trauma) and the cognitive processing of the disease seem to be involved in the development of posttraumatic growth. Through the present study the authors aim to: investigate the levels of posttraumatic growth in a sample of 202 women with breast cancer in Greece, explore the relationships between posttraumatic growth and particular demographic, medical, and psychosocial variables according to a proposed model, and test the role of social constraints in the relationship between automatic and deliberate cognitive processing of the trauma. The results showed that posttraumatic growth was evident in the majority of the sample and was associated inversely with age at diagnosis (β = -0.174) and psychological distress (β = -0.394, p = .001), directly with time since diagnosis (β = 0.181), and indirectly with psychological distress, through reflective rumination (β = 0.323, p = .001). Social constraints were found to moderate the relationship between intrusions and reflective rumination. Implications of the results and suggestions for future research and practice are outlined.
Process for integrating surface drainage constraints on mine planning
Energy Technology Data Exchange (ETDEWEB)
Sawatsky, L.F; Ade, F.L.; McDonald, D.M.; Pullman, B.J. [Golder Associates Ltd., Calgary, AB (Canada)
2009-07-01
Surface drainage for mine closures must be considered during all phases of mine planning and design in order to minimize environmental impacts and reduce costs. This paper discussed methods of integrating mine drainage criteria and associated mine planning constraints into the mine planning process. Drainage constraints included stream diversions; fish compensation channels; collection receptacles for the re-use of process water; separation of closed circuit water from fresh water; and the provision of storage ponds. The geomorphic approach replicated the ability of natural channels to respond to local and regional changes in hydrology as well as channel disturbances from extreme flood events, sedimentation, debris, ice jams, and beaver activity. The approach was designed to enable a sustainable system and provide conveyance capacity for extreme floods without spillage to adjacent watersheds. Channel dimensions, bank and bed materials, sediment loads, bed material supplies and the hydrologic conditions of the analogue stream were considered. Hydrologic analyses were conducted to determine design flood flow. Channel routes, valley slopes, sinuosity, width, and depth were established. It was concluded that by incorporating the geomorphic technique, mine operators and designers can construct self-sustaining drainage systems that require little or no maintenance in the long-term. 7 refs.
Machine self-teaching methods for parameter optimization. Final report, October 1984-August 1986
Energy Technology Data Exchange (ETDEWEB)
Dillard, R.A.
1986-12-01
The problem of determining near-optimum parameter-control logic is addressed for cases where a sensor or communication system is highly flexible and the logic cannot be determined analytically. A system that supports human-like learning of optimum parameters is outlined. The major subsystems are (1) a simulation system (described for a radar example), (2) a performance monitoring system, (3) the learning system, and (4) the initial knowledge used by all subsystems. The initial knowledge is expressed modularly as specifications (e.g., radar constraints, performance measures, and target characteristics), relationships (among parameters, intermediate measures, and component performance measures), and formulas. The intent of the learning system is to relieve the human from the very tedious trial-and-error process of examining performance, selecting and applying curve-fitting methods, and selecting the next trial set of parameters. A learning system to design a simple radar meeting specific performance constraints is described in detail, for experimental purposes, in generic object-based code.
Parameter constraints of grazing response functions. Implications for phytoplankton bloom initiation
Directory of Open Access Journals (Sweden)
Jordi Solé
2016-09-01
Phytoplankton blooms are events of production and accumulation of phytoplankton biomass that influence ecosystem dynamics and may also have effects on socio-economic activities. Among the biological factors that affect bloom dynamics, prey selection by zooplankton may play an important role. Here we consider the initial state of development of an algal bloom and analyse how a reduced grazing pressure can allow an algal species with a lower intrinsic growth rate than a competitor to become dominant. We use a simple model with two microalgal species and one zooplankton grazer to derive general relationships between phytoplankton growth and zooplankton grazing. These relationships are applied to two common grazing response functions in order to deduce the mathematical constraints that the parameters of these functions must obey to allow the dominance of the lower growth rate competitor. To assess the usefulness of the deduced relationships in a more general framework, the results are applied in the context of a multispecies ecosystem model (ERSEM).
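The kind of parameter constraint described here can be sketched for a Holling type II grazing response. The sketch below is a generic illustration, not the paper's actual model or its ERSEM application: all symbols (growth rates r1 < r2, grazing preferences p1 < p2, grazer density Z) and numbers are hypothetical.

```python
def holling2(food, g=1.0, k=0.5):
    # Holling type II functional response: grazing uptake saturates with food.
    return g * food / (k + food)

def slow_grower_dominates(r1, r2, p1, p2, Z, food, g=1.0, k=0.5):
    """True when the lower-growth-rate species (r1 < r2) has the higher net
    per-capita growth despite its handicap, because grazing falls more
    heavily on the competitor (p2 > p1)."""
    c = Z * holling2(food, g, k) / food   # per-unit-preference grazing pressure
    # r1 - p1*c > r2 - p2*c  <=>  c > (r2 - r1) / (p2 - p1)
    return c > (r2 - r1) / (p2 - p1)

# Low grazer density: the fast grower wins; higher density: the slow,
# less-grazed species can dominate the bloom onset.
print(slow_grower_dominates(0.5, 0.8, 0.2, 1.0, Z=0.2, food=0.25))
print(slow_grower_dominates(0.5, 0.8, 0.2, 1.0, Z=0.4, food=0.25))
```

The inequality in the comment is the type of closed-form constraint on the response-function parameters that the paper derives: the grazing pressure must exceed the growth-rate gap divided by the preference gap.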
Rotta, Davide; Sebastiano, Fabio; Charbon, Edoardo; Prati, Enrico
2017-06-01
Even the quantum simulation of an apparently simple molecule such as Fe2S2 requires a considerable number of qubits of the order of 10^6, while more complex molecules such as alanine (C3H7NO2) require about a hundred times more. In order to assess such a multimillion scale of identical qubits and control lines, the silicon platform seems to be one of the most indicated routes as it naturally provides, together with qubit functionalities, the capability of nanometric, serial, and industrial-quality fabrication. The scaling trend of microelectronic devices predicting that computing power would double every 2 years, known as Moore's law, according to the new slope set after the 32-nm node of 2009, suggests that the technology roadmap will achieve the 3-nm manufacturability limit proposed by Kelly around 2020. Today, circuital quantum information processing architectures are predicted to take advantage from the scalability ensured by silicon technology. However, the maximum amount of quantum information per unit surface that can be stored in silicon-based qubits and the consequent space constraints on qubit operations have never been addressed so far. This represents one of the key parameters toward the implementation of quantum error correction for fault-tolerant quantum information processing and its dependence on the features of the technology node. The maximum quantum information per unit surface virtually storable and controllable in the compact exchange-only silicon double quantum dot qubit architecture is expressed as a function of the complementary metal-oxide-semiconductor technology node, so the size scale optimizing both physical qubit operation time and quantum error correction requirements is assessed by reviewing the physical and technological constraints. According to the requirements imposed by the quantum error correction method and the constraints given by the typical strength of the exchange coupling, we determine the workable operation frequency.
Teaching Database Design with Constraint-Based Tutors
Mitrovic, Antonija; Suraweera, Pramuditha
2016-01-01
Design tasks are difficult to teach, due to large, unstructured solution spaces, underspecified problems, non-existent problem solving algorithms and stopping criteria. In this paper, we comment on our approach to develop KERMIT, a constraint-based tutor that taught database design. In later work, we re-implemented KERMIT as EER-Tutor, and…
Model-based control strategies for systems with constraints of the program type
Jarzębowska, Elżbieta
2006-08-01
The paper presents a model-based tracking control strategy for constrained mechanical systems. The constraints we consider can be material or non-material ones, referred to as program constraints. The program constraint equations represent tasks put upon system motions; they can be differential equations of order higher than one or two, and can be non-integrable. The tracking control strategy relies upon two dynamic models: a reference model, which is a dynamic model of a system with arbitrary-order differential constraints, and a dynamic control model. The reference model serves as a motion planner, which generates inputs to the dynamic control model. It is based upon the generalized program motion equations (GPME) method. The method makes it possible to combine material and program constraints and merge them both into the motion equations. Lagrange's equations with multipliers are a special case of the GPME, since they apply to systems with first-order constraints. Our tracking strategy, referred to as a model reference program motion tracking control strategy, enables tracking of any program motion predefined by the program constraints. It extends "trajectory tracking" to "program motion tracking". We also demonstrate that our tracking strategy can be extended to hybrid program motion/force tracking.
Ozkat, Erkan Caner; Franciosa, Pasquale; Ceglarek, Dariusz
2017-08-01
Remote laser welding technology offers opportunities for high production throughput at a competitive cost. However, the remote laser welding process of zinc-coated sheet metal parts in lap joint configuration poses a challenge due to the difference between the melting temperature of the steel (∼1500 °C) and the vapourizing temperature of the zinc (∼907 °C). In fact, the zinc layer at the faying surface is vapourized and the vapour might be trapped within the melting pool, leading to weld defects. Various solutions have been proposed to overcome this problem over the years. Among them, laser dimpling has been adopted by manufacturers because of its flexibility and effectiveness along with its cost advantages. In essence, the dimple works as a spacer between the two sheets in the lap joint and allows the zinc vapour to escape during the welding process, thereby preventing weld defects. However, there is a lack of comprehensive characterization of the dimpling process for effective implementation in a real manufacturing system, taking into consideration inherent changes in the variability of process parameters. This paper introduces a methodology to develop (i) a surrogate model for dimpling process characterization considering a multiple-inputs (i.e. key control characteristics) and multiple-outputs (i.e. key performance indicators) system by conducting physical experimentation and using multivariate adaptive regression splines; (ii) a process capability space (Cp-Space) based on the developed surrogate model that allows the estimation of a desired process fallout rate in the case of violation of process requirements in the presence of stochastic variation; and (iii) selection and optimization of the process parameters based on the process capability space. The proposed methodology provides a unique capability to: (i) simulate the effect of process variation as generated by the manufacturing process; (ii) model quality requirements with multiple and coupled quality requirements; and (iii
Automated Generation of OCL Constraints: NL based Approach vs Pattern Based Approach
Directory of Open Access Journals (Sweden)
IMRAN SARWAR BAJWA
2017-04-01
This paper presents an approach for automated generation of software constraints. In this model, a semi-formal SBVR (Semantics of Business Vocabulary and Rules) representation is obtained from the syntactic and semantic analysis of a natural language (NL) sentence (such as English). An SBVR representation is easy to translate into other formal languages, as SBVR is based on higher-order logic like other formal languages such as OCL (Object Constraint Language). The proposed model provides a systematic and powerful way of incorporating NL knowledge into formal languages. A prototype is constructed in Java (an Eclipse plug-in) as a proof of concept. The performance was tested on a few sample texts taken from existing research thesis reports and books.
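The pattern-based side of such a pipeline can be sketched with a single hard-coded rule pattern. This is an illustration only: real systems use many patterns plus full NL parsing via SBVR, and the rule text, class model and attribute naming below are all invented.

```python
import re

# One SBVR-style pattern: "It is necessary that each <class> has at
# most/least <n> <attribute>s" -> an OCL multiplicity invariant.
PATTERN = re.compile(
    r"It is necessary that each (\w+) has at (most|least) (\d+) (\w+?)s?$")

def to_ocl(rule):
    m = PATTERN.match(rule)
    if not m:
        return None                     # rule does not fit this pattern
    ctx, bound, n, attr = m.groups()
    op = "<=" if bound == "most" else ">="
    return (f"context {ctx.capitalize()} inv: "
            f"self.{attr}->size() {op} {n}")

print(to_ocl("It is necessary that each customer has at most 3 accounts"))
# -> context Customer inv: self.account->size() <= 3
```

The lazy quantifier `(\w+?)s?$` strips a trailing plural "s" so the generated navigation uses the singular association name, a simplification a production tool would replace with model lookup.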
Directory of Open Access Journals (Sweden)
Mungall Christopher J
2010-10-01
Background: The Gene Ontology project supports categorization of gene products according to their location of action, the molecular functions that they carry out, and the processes that they are involved in. Although the ontologies are intentionally developed to be taxon neutral, and to cover all species, there are inherent taxon specificities in some branches. For example, the process 'lactation' is specific to mammals and the location 'mitochondrion' is specific to eukaryotes. The lack of an explicit formalization of these constraints can lead to errors and inconsistencies in automated and manual annotation. Results: We have formalized the taxonomic constraints implicit in some GO classes, and specified these at various levels in the ontology. We have also developed an inference system that can be used to check for violations of these constraints in annotations. Using the constraints in conjunction with the inference system, we have detected and removed errors in annotations and improved the structure of the ontology. Conclusions: Detection of inconsistencies in taxon-specificity enables gradual improvement of the ontologies, the annotations, and the formalized constraints. This is progressively improving the quality of our data. The full system is available for download, and new constraints or proposed changes to constraints can be submitted online at https://sourceforge.net/tracker/?atid=605890&group_id=36855.
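A minimal version of such a constraint checker can be sketched as follows. The taxonomy, constraint table, and annotation tuples are invented for illustration; the real system works over the NCBI taxonomy and GO's only-in-taxon/never-in-taxon relations with reasoner support.

```python
# Tiny illustrative taxonomy: child -> parent.
PARENT = {"mammal": "vertebrate", "vertebrate": "eukaryote",
          "human": "mammal", "yeast": "eukaryote", "ecoli": "bacterium"}

def ancestors(taxon):
    # The taxon itself plus every ancestor up to a root.
    seen = set()
    while taxon is not None:
        seen.add(taxon)
        taxon = PARENT.get(taxon)
    return seen

# Constraints in the spirit of the paper's examples.
CONSTRAINTS = {
    "lactation": [("only_in", "mammal")],
    "mitochondrion": [("only_in", "eukaryote")],
}

def violations(annotations):
    # annotations: (gene, GO class, species) triples.
    bad = []
    for gene, go_class, species in annotations:
        for kind, taxon in CONSTRAINTS.get(go_class, []):
            in_taxon = taxon in ancestors(species)
            # "only_in" is violated outside the taxon; "never_in" inside it.
            if (kind == "only_in") != in_taxon:
                bad.append((gene, go_class, species))
    return bad

print(violations([("g1", "lactation", "human"),
                  ("g2", "lactation", "yeast")]))
```

Annotating lactation to a yeast gene product trips the mammal-only constraint, which is exactly the class of annotation error the inference system flags.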
Directory of Open Access Journals (Sweden)
Tore Bakka
2012-01-01
The problem of robust ℋ∞ dynamic output feedback control design with pole placement constraints is studied for a linear parameter-varying model of a floating wind turbine. A nonlinear model is obtained and linearized using the FAST software developed for wind turbines. The main contributions of this paper are threefold. Firstly, a family of linear models is represented based on an affine parameter-varying model structure for a wind turbine system. Secondly, the bounded parameter-varying parameters are removed using upper bounded inequalities in the control design process. Thirdly, the control problem is formulated in terms of linear matrix inequalities (LMIs). The simulation results show a comparison between a controller design based on a constant linear model and a controller design for the linear parameter-varying model. The results show the effectiveness of our proposed design technique.
Clark, Martyn; Samaniego, Luis; Freer, Jim
2014-05-01
Multi-model and multi-physics approaches are a popular tool in environmental modelling, with many studies focusing on optimally combining output from multiple model simulations to reduce predictive errors and better characterize predictive uncertainty. However, a careful and systematic analysis of different hydrological models reveals that individual models are simply small permutations of a master modeling template, and inter-model differences are overwhelmed by uncertainty in the choice of the parameter values in the model equations. Furthermore, inter-model differences do not explicitly represent the uncertainty in modeling a given process, leading to many situations where different models provide the wrong results for the same reasons. In other cases, the available morphological data does not support the very fine spatial discretization of the landscape that typifies many modern applications of process-based models. To make the uncertainty characterization problem worse, the uncertain parameter values in process-based models are often fixed (hard-coded), and the models lack the agility necessary to represent the tremendous heterogeneity in natural systems. This presentation summarizes results from a systematic analysis of uncertainty in process-based hydrological models, where we explicitly analyze the myriad of subjective decisions made throughout both the model development and parameter estimation process. Results show that much of the uncertainty is aleatory in nature - given a "complete" representation of dominant hydrologic processes, uncertainty in process parameterizations can be represented using an ensemble of model parameters. Epistemic uncertainty associated with process interactions and scaling behavior is still important, and these uncertainties can be represented using an ensemble of different spatial configurations. Finally, uncertainty in forcing data can be represented using ensemble methods for spatial meteorological analysis. Our systematic
International Nuclear Information System (INIS)
Gan, Dongming; Dias, Jorge; Seneviratne, Lakmal; Dai, Jian S.
2014-01-01
This paper investigates various topologies and the mobility of a class of metamorphic parallel mechanisms synthesized with reconfigurable rTPS limbs. Based on the reconfigurable Hooke (rT) joint, the rTPS limb has two phases which result in parallel mechanisms having the ability to change their mobility. While in one phase the limb has no constraint to the platform, in the other it constrains the spherical joint center to lie on a plane, which is used to demonstrate different topologies of the nrTPS metamorphic parallel mechanisms by investigating various relations (parallel or intersecting) among the n constraint planes (n = 2, 3, …, 6). Geometric constraint equations of the platform rotation matrix and translation vector are set up based on the point-plane constraint, which reveals mobility and redundant geometric conditions of the mechanism topologies. By altering the limbs into the non-constraint phase without a constraint plane, new mechanism phases are deduced with mobility change based on each mechanism topology.
CERN LEP2 constraint on 4D QED having a dynamically generated spatial dimension
International Nuclear Information System (INIS)
Cho, G.-C.; Izumi, Etsuko; Sugamoto, Akio
2002-01-01
We study 4D QED in which one spatial dimension is dynamically generated from the 3D action, following the mechanism proposed by Arkani-Hamed, Cohen, and Georgi. In this model, the generated fourth dimension is discretized by an interval parameter a. We examine the phenomenological constraint on the parameter a coming from collider experiments on the QED process e⁺e⁻ → γγ. It is found that the CERN e⁺e⁻ collider LEP2 experiments give the constraint 1/a ≳ 461 GeV. The expected bound on the same parameter a at a future e⁺e⁻ linear collider is briefly discussed.
Home-based Constraint Induced Movement Therapy Poststroke
Stephen Isbel HScD; Christine Chapparo PhD; David McConnell PhD; Judy Ranka PhD
2014-01-01
Background: This study examined the efficacy of a home-based Constraint Induced Movement Therapy (CI Therapy) protocol with eight poststroke survivors. Method: Eight ABA, single case experiments were conducted in the homes of poststroke survivors. The intervention comprised restraint of the intact upper limb in a mitt for 21 days combined with a home-based and self-directed daily activity regime. Motor changes were measured using the Wolf Motor Function Test (WMFT) and the Motor Activity Log.
Reis, Felipe; Machín, Leandro; Rosenthal, Amauri; Deliza, Rosires; Ares, Gastón
2016-12-01
People do not usually process all the available information on packages when making their food choices; they rely on heuristics for making their decisions, particularly when they have limited time. However, most consumer studies encourage participants to invest a lot of time in making their choices. Therefore, imposing a time constraint in consumer studies may increase their ecological validity. In this context, the aim of the present work was to evaluate the influence of a time constraint on consumer evaluation of pomegranate/orange juice bottles using a rating-based conjoint task. A consumer study with 100 participants was carried out, in which they had to evaluate 16 pomegranate/orange fruit juice bottles, differing in bottle design, front-of-pack nutritional information, nutrition claim and processing claim, and to rate their intention to purchase. Half of the participants evaluated the bottle images without a time constraint and the other half had a time constraint of 3 s for evaluating each image. Eye movements were recorded during the evaluation. Results showed that a time constraint when evaluating intention to purchase did not largely modify the way in which consumers visually processed the bottle images. Regardless of the experimental condition (with or without time constraint), they tended to evaluate the same product characteristics and to give them the same relative importance. However, a trend towards a more superficial evaluation of the bottles, skipping complex information, was observed. Regarding the influence of product characteristics on consumer intention to purchase, bottle design was the variable with the largest relative importance in both conditions, overriding the influence of nutritional or processing characteristics, which stresses the importance of graphic design in shaping consumer perception.
Precision constraints on the top-quark effective field theory at future lepton colliders
Energy Technology Data Exchange (ETDEWEB)
Durieux, Gauthier
2017-08-15
We examine the constraints that future lepton colliders would impose on the effective field theory describing modifications of top-quark interactions beyond the standard model, through measurements of the e⁺e⁻ → bW⁺ b̄W⁻ process. Statistically optimal observables are exploited to constrain simultaneously and efficiently all relevant operators. Their constraining power is sufficient for quadratic effective-field-theory contributions to have negligible impact on limits, which are therefore basis independent. This is contrasted with the measurements of cross sections and forward-backward asymmetries. An overall measure of constraint strength, the global determinant parameter, is used to determine which run parameters impose the strongest restriction on the multidimensional effective-field-theory parameter space.
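A determinant-based measure of overall constraint strength can be illustrated with a toy calculation. The construction below is our assumption, not the paper's definition: we take the 2n-th root of the determinant of a coefficient covariance matrix as an average per-direction width of the allowed parameter region, and the matrices for two hypothetical run configurations are invented.

```python
def det2x2(m):
    # Determinant of a 2x2 matrix given as nested lists.
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def global_det_param(cov):
    # det(C)^(1/2n): the geometric-mean 1-sigma width per parameter
    # direction of the n-dimensional allowed region (toy definition).
    n = len(cov)
    return det2x2(cov) ** (1.0 / (2 * n))

# Hypothetical coefficient covariances from two run configurations.
run_a = [[0.04, 0.01], [0.01, 0.09]]        # looser constraints
run_b = [[0.010, 0.002], [0.002, 0.020]]    # tighter constraints

print(global_det_param(run_a), global_det_param(run_b))
```

A smaller value means a smaller allowed volume per direction, so comparing the scalar across run configurations ranks them the way the abstract describes, even when individual one-dimensional limits cross over.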
Process parameter influence on Electro-sinter-forging (ESF) of titanium discs
DEFF Research Database (Denmark)
Cannella, Emanuele; Nielsen, Chris Valentin; Bay, Niels
Electro-sinter-forging (ESF) is a sintering process based on the resistance heating principle, which makes it faster than conventional sintering. The process is investigated as a function of the main process parameters, namely compacting pressure, electrical current density and sintering time. The present work is focused on analysing the influence of these process parameters on the final density of a disc sample made from commercially pure titanium powder. Applying the design of experiments (DoE) approach, the electrical current was seen to be of largest influence. The maximum obtained density …
Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem
Luo, Yabo; Waden, Yongo P.
2017-06-01
The Job Shop Scheduling Problem (JSSP) is an NP-hard problem whose uncertainty and complexity cannot be handled by linear methods. Current studies on the JSSP therefore concentrate mainly on improving heuristics for its optimization. However, many obstacles to efficient optimization remain, namely low efficiency and poor reliability, which can easily trap the optimization process in local optima. To address this problem, this paper studies an Ant Colony Optimization (ACO) algorithm combined with constraint-handling tactics. The problem is subdivided into three parts: (1) analysis of processing time tolerance-based constraint features in the JSSP, performed with a constraint satisfaction model; (2) satisfaction of the constraints using consistency technology and a constraint spreading algorithm to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; and (3) demonstration of the reliability and efficiency of the proposed method through comparative experiments on benchmark problems. The results obtained by the proposed method are better, and the technique can be applied to optimizing the JSSP.
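The abstract does not give the improved ACO algorithm itself, but the basic mechanics it builds on (pheromone-biased solution construction, evaporation, reinforcement of the best-so-far solution) can be sketched on a toy single-machine sequencing problem. The total-completion-time objective and all parameter values below are illustrative assumptions, not the paper's JSSP formulation.

```python
import random

def aco_sequence(proc_times, n_ants=20, n_iters=50, rho=0.1, seed=1):
    """Toy ACO: sequence jobs on one machine to minimise total completion time."""
    rng = random.Random(seed)
    n = len(proc_times)
    tau = [[1.0] * n for _ in range(n)]      # tau[pos][job]: pheromone trails
    best_seq, best_cost = None, float("inf")
    for _ in range(n_iters):
        for _ant in range(n_ants):
            remaining = list(range(n))
            seq = []
            for pos in range(n):
                # pheromone times a greedy 1/processing-time heuristic
                weights = [tau[pos][j] / proc_times[j] for j in remaining]
                j = rng.choices(remaining, weights=weights)[0]
                seq.append(j)
                remaining.remove(j)
            t = cost = 0
            for j in seq:                     # total completion time of this ant
                t += proc_times[j]
                cost += t
            if cost < best_cost:
                best_seq, best_cost = seq, cost
        for pos in range(n):                  # evaporate, then reinforce best-so-far
            for k in range(n):
                tau[pos][k] *= 1 - rho
            tau[pos][best_seq[pos]] += 1.0 / best_cost
    return best_seq, best_cost
```

On three jobs with processing times 3, 1 and 2, the sampler converges on the shortest-processing-time order, which minimises total completion time.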
Constraints on small-scale cosmological fluctuations from SNe lensing dispersion
International Nuclear Information System (INIS)
Ben-Dayan, Ido; Takahashi, Ryuichi
2015-04-01
We provide predictions on the small-scale cosmological density power spectrum from supernova lensing dispersion. Parameterizing the primordial power spectrum with the running α and the running of the running β of the spectral index, we exclude large positive α and β parameters, which induce lensing dispersions above the current observational upper bound. We ran cosmological N-body simulations of collisionless dark matter particles to investigate the non-linear evolution of the primordial power spectrum with positive running parameters. The initial small-scale enhancement of the power spectrum is largely erased on entering the non-linear regime. For example, even if the linear power spectrum at k > 10 h Mpc⁻¹ is enhanced by 1-2 orders of magnitude, the enhancement decreases to a factor of 2-3 at late times (z ≤ 1.5). Therefore, the lensing dispersion induced by the dark matter fluctuations only weakly constrains the running parameters. When baryon-cooling effects (which strongly enhance small-scale clustering) are included, the constraint is comparable to or tighter than the PLANCK constraint, depending on the UV cut-off. Further investigation of the non-linear matter spectrum with baryonic processes is needed to reach a firm constraint.
Experiential Learning as a Constraint-Led Process: An Ecological Dynamics Perspective
Brymer, Eric; Davids, Keith
2014-01-01
In this paper we present key ideas for an ecological dynamics approach to learning that reveal the importance of learner-environment interactions to frame outdoor experiential learning. We propose that ecological dynamics provides a useful framework for understanding the interacting constraints of the learning process and for designing learning…
Robust Parameter Coordination for Multidisciplinary Design
Institute of Scientific and Technical Information of China (English)
无
2006-01-01
This paper introduces a robust parameter coordination method for analyzing parameter uncertainties in order to predict conflicts and coordinate parameters in multidisciplinary design. The proposed method is based on a constraint network, which gives a formal model for analyzing the coupling effects between design variables and product specifications. In this model, interval boxes are adopted to describe the uncertainty of design parameters quantitatively and to enhance design robustness. To solve the constraint network model, a general consistency algorithm framework is designed and implemented with interval arithmetic and a genetic algorithm, which can handle both algebraic and ordinary differential equations. With this method, designers can infer the consistent solution space from the given specifications. A case study involving the design of a bogie dumping system demonstrates the usefulness of this approach.
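The interval-box idea can be illustrated with a minimal hull-consistency contractor: each parameter is an interval, and a constraint is used to shrink every interval to the values still consistent with the others. The single constraint x + y = z and the bounds in the usage below are invented for illustration, not taken from the paper.

```python
def intersect(a, b):
    """Intersection of two closed intervals; an empty result means a conflict."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo > hi:
        raise ValueError("inconsistent constraint network")
    return (lo, hi)

def contract_sum(x, y, z):
    """Hull-consistency contraction of interval boxes under the constraint x + y = z."""
    z = intersect(z, (x[0] + y[0], x[1] + y[1]))
    x = intersect(x, (z[0] - y[1], z[1] - y[0]))
    y = intersect(y, (z[0] - x[1], z[1] - x[0]))
    return x, y, z
```

For example, with x in [0, 10], y in [2, 3] and the specification z in [4, 5], the contractor tightens x to [1, 3] without excluding any consistent design.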
International Nuclear Information System (INIS)
Turner, Sam
2011-01-01
The phenomenon of process damping as a stabilising effect in milling has been encountered by machinists since milling and turning began. It is of great importance when milling aerospace alloys where maximum surface speed is limited by excessive tool wear and high speed stability lobes cannot be attained. Much of the established research into regenerative chatter and chatter avoidance has focussed on stability lobe theory with different analytical and time domain models developed to expand on the theory first developed by Tlusty and Tobias. Process damping is a stabilising effect that occurs when the surface speed is low relative to the dominant natural frequency of the system and has been less successfully modelled and understood. Process damping is believed to be influenced by the interference of the relief face of the cutting tool with the waveform traced on the cut surface, with material properties and the relief geometry of the tool believed to be key factors governing performance. This study combines experimental trials with Finite Element (FE) simulation in an attempt to identify and understand the key factors influencing process damping performance in titanium milling. Rake angle, relief angle and chip thickness are the variables considered experimentally with the FE study looking at average radial and tangential forces and surface compressive stress. For the experimental study a technique is developed to identify the critical process damping wavelength as a means of measuring process damping performance. For the range of parameters studied, chip thickness is found to be the dominant factor with maximum stable parameters increased by a factor of 17 in the best case. Within the range studied, relief angle was found to have a lesser effect than expected whilst rake angle had an influence.
A constraint-based model of Scheffersomyces stipitis for improved ethanol production
Directory of Open Access Journals (Sweden)
Liu Ting
2012-09-01
Background: As one of the best xylose-utilizing microorganisms, Scheffersomyces stipitis exhibits great potential for efficient lignocellulosic biomass fermentation. A comprehensive understanding of its unique physiological and metabolic characteristics is therefore required to further improve its performance in cellulosic ethanol production. Results: A constraint-based genome-scale metabolic model for S. stipitis CBS 6054 was developed on the basis of its genomic, transcriptomic and literature information. The model iTL885 consists of 885 genes, 870 metabolites, and 1240 reactions. During the reconstruction process, 36 putative sugar transporters were reannotated and the metabolisms of 7 sugars were illuminated. An essentiality study was conducted to predict essential genes on different growth media. Key factors affecting cell growth and ethanol formation were investigated using constraint-based analysis. Furthermore, the uptake systems and metabolic routes of xylose were elucidated, and optimization strategies for the overproduction of ethanol were proposed from both genetic and environmental perspectives. Conclusions: Systems biology modelling has proven to be a powerful tool for targeting metabolic changes. This systematic investigation of the metabolism of S. stipitis can thus serve as a starting point for future experimental designs aimed at identifying the metabolic bottlenecks of this important yeast.
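Essentiality screens like the one described are normally run with flux balance analysis on the full stoichiometric model; a stdlib-only qualitative stand-in is sketched below, where a reaction is flagged essential if deleting it makes the target metabolite unreachable from the growth medium. The toy xylose-to-ethanol network in the usage is invented for illustration and is not part of iTL885.

```python
def producible(reactions, seeds, target):
    """Forward-propagate: which metabolites are reachable from the seed set."""
    avail = set(seeds)
    changed = True
    while changed:
        changed = False
        for subs, prods in reactions:
            if set(subs) <= avail and not set(prods) <= avail:
                avail |= set(prods)
                changed = True
    return target in avail

def essential_reactions(reactions, seeds, target):
    """Indices of reactions whose single deletion makes the target unproducible."""
    return [i for i in range(len(reactions))
            if not producible(reactions[:i] + reactions[i + 1:], seeds, target)]
```

In a toy network with two routes from xylose to pyruvate but only one reaction producing ethanol, only that final reaction is flagged essential.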
Towards automatic parameter tuning of stream processing systems
Bilal, Muhammad
2017-09-27
Optimizing the performance of big-data streaming applications has become a daunting and time-consuming task: parameters may be tuned from a space of hundreds or even thousands of possible configurations. In this paper, we present a framework for automating parameter tuning for stream-processing systems. Our framework supports standard black-box optimization algorithms as well as a novel gray-box optimization algorithm. We demonstrate the multiple benefits of automated parameter tuning in optimizing three benchmark applications in Apache Storm. Our results show that a hill-climbing algorithm that uses a new heuristic sampling approach based on Latin Hypercube provides the best results. Our gray-box algorithm provides comparable results while being two to five times faster.
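The Latin Hypercube sampling that underlies the heuristic mentioned above is easy to sketch: each dimension of the configuration space is cut into n equal strata, exactly one sample is drawn per stratum, and the strata are shuffled independently per dimension. This is a generic stdlib sketch of the sampling step only, not the paper's tuning framework.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin Hypercube sample: one point per equal-width stratum per dimension."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        strata = list(range(n_samples))
        rng.shuffle(strata)                      # independent shuffle per dimension
        width = (hi - lo) / n_samples
        cols.append([lo + (s + rng.random()) * width for s in strata])
    return [tuple(col[i] for col in cols) for i in range(n_samples)]
```

Each of the 10 samples below occupies a distinct tenth of both parameter ranges, which is what gives LHS better coverage than plain uniform sampling for the same budget.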
An analysis to optimize the process parameters of friction stir welded ...
African Journals Online (AJOL)
The friction stir welding (FSW) of steel is a challenging task. Experiments are conducted here, with a tool having a conical pin of 0.4mm clearance. The process parameters are optimized by using the Taguchi technique based on Taguchi's L9 orthogonal array. Experiments have been conducted based on three process ...
Parameter-free resolution of the superposition of stochastic signals
Energy Technology Data Exchange (ETDEWEB)
Scholz, Teresa, E-mail: tascholz@fc.ul.pt [Center for Theoretical and Computational Physics, University of Lisbon (Portugal); Raischel, Frank [Center for Geophysics, IDL, University of Lisbon (Portugal); Closer Consulting, Av. Eng. Duarte Pacheco Torre 1 15" 0, 1070-101 Lisboa (Portugal); Lopes, Vitor V. [DEIO-CIO, University of Lisbon (Portugal); UTEC–Universidad de Ingeniería y Tecnología, Lima (Peru); Lehle, Bernd; Wächter, Matthias; Peinke, Joachim [Institute of Physics and ForWind, Carl-von-Ossietzky University of Oldenburg, Oldenburg (Germany); Lind, Pedro G. [Institute of Physics and ForWind, Carl-von-Ossietzky University of Oldenburg, Oldenburg (Germany); Institute of Physics, University of Osnabrück, Osnabrück (Germany)
2017-01-30
This paper presents a direct method to obtain the deterministic and stochastic contribution of the sum of two independent stochastic processes, one of which is an Ornstein–Uhlenbeck process and the other a general (non-linear) Langevin process. The method is able to distinguish between the stochastic processes, retrieving their corresponding stochastic evolution equations. This framework is based on a recent approach for the analysis of multidimensional Langevin-type stochastic processes in the presence of strong measurement (or observational) noise, which is here extended to impose neither constraints nor parameters and extract all coefficients directly from the empirical data sets. Using synthetic data, it is shown that the method yields satisfactory results.
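The flavor of this kind of analysis can be shown on a single Ornstein-Uhlenbeck process: the drift coefficient is recovered directly from data via the first conditional (Kramers-Moyal) moment. Unlike the paper's parameter-free multidimensional method, this stdlib sketch assumes the linear drift form -θx is known and only fits θ.

```python
import math
import random

def simulate_ou(theta, sigma, dt, n, seed=0):
    """Euler-Maruyama path of dx = -theta*x dt + sigma dW."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n):
        x += -theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def estimate_drift(path, dt):
    """Least-squares fit of the first Kramers-Moyal moment to a linear drift."""
    num = sum(x * (y - x) for x, y in zip(path, path[1:]))
    den = sum(x * x for x in path[:-1]) * dt
    return -num / den                            # estimate of theta
```

On a sufficiently long synthetic path the estimate lands close to the true θ, mirroring the paper's validation-on-synthetic-data strategy.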
A Foot-Mounted Inertial Measurement Unit (IMU) Positioning Algorithm Based on Magnetic Constraint.
Wang, Yan; Li, Xin; Zou, Jiaheng
2018-03-01
With the development of related applications, indoor positioning techniques have become more and more widely used. Based on Wi-Fi, Bluetooth low energy (BLE) and geomagnetism, indoor positioning techniques often rely on the physical location of fingerprint information. The focus and difficulty of establishing the fingerprint database lie in obtaining a relatively accurate physical location with as little given information as possible. This paper presents a foot-mounted inertial measurement unit (IMU) positioning algorithm under the loop closure constraint based on magnetic information. It can provide relatively reliable position information without maps or geomagnetic information and provides relatively accurate coordinates for the collection of a fingerprint database. In the experiment, the features extracted by the multi-level Fourier transform method proposed in this paper are validated and the validity of loop closure matching is tested with a RANSAC-based method. Moreover, the loop closure detection results show that the cumulative error of the trajectory processed by the graph optimization algorithm is significantly suppressed, showing good accuracy. The average error of the trajectory under the loop closure constraint is controlled below 2.15 m.
Reliability based topology optimization for continuum structures with local failure constraints
DEFF Research Database (Denmark)
Luo, Yangjun; Zhou, Mingdong; Wang, Michael Yu
2014-01-01
This paper presents an effective method for stress-constrained topology optimization problems under load and material uncertainties. Based on the Performance Measure Approach (PMA), the optimization problem is formulated as minimizing the objective function under a large number of (stress-related) target performance constraints. In order to overcome the stress singularity phenomenon caused by the combined stress and reliability constraints, a reduction strategy on the target reliability index is proposed and utilized together with the ε-relaxation approach. Meanwhile, an enhanced aggregation method is employed to aggregate the selected active constraints using a general K-S function, which avoids the expensive computational cost arising from the large-scale nature of local failure constraints. Several numerical examples are given to demonstrate the validity of the present method.
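The K-S (Kreisselmeier-Steinhauser) aggregation referred to above replaces many constraints g_i ≤ 0 with one smooth, conservative envelope; a minimal sketch follows, with the draw-down factor ρ = 50 chosen purely for illustration.

```python
import math

def ks_aggregate(gs, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of constraint values g_i <= 0.
    Conservative and smooth: max(g) <= KS <= max(g) + ln(m)/rho."""
    gmax = max(gs)   # shift by the max for numerical stability of exp()
    return gmax + math.log(sum(math.exp(rho * (g - gmax)) for g in gs)) / rho
```

Because the envelope is differentiable, a gradient-based topology optimizer only needs one adjoint solve for the aggregate instead of one per local constraint, which is the cost saving the abstract alludes to.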
Temporal Concurrent Constraint Programming
DEFF Research Database (Denmark)
Nielsen, Mogens; Valencia Posso, Frank Dan
2002-01-01
The ntcc calculus is a model of non-deterministic temporal concurrent constraint programming. In this paper we study behavioral notions for this calculus. In the underlying computational model, concurrent constraint processes are executed in discrete time intervals. The behavioral notions studied reflect the reactive interactions between concurrent constraint processes and their environment, as well as internal interactions between individual processes. Relationships between the suggested notions are studied, and they are all proved to be decidable for a substantial fragment of the calculus …
Directory of Open Access Journals (Sweden)
Maryam Iman
2017-08-01
Microbial remediation of nitroaromatic compounds (NACs) is a promising, environmentally friendly and cost-effective approach to the removal of these life-threatening agents. Escherichia coli (E. coli) has shown remarkable capability for the biotransformation of 2,4,6-trinitrotoluene (TNT). Efforts to develop E. coli as an efficient TNT-degrading biocatalyst will benefit from a holistic flux-level description of the interactions between the multiple TNT-transforming pathways operating in the strain. To gain such insight, we extended the genome-scale constraint-based model of E. coli to account for a curated version of the major TNT transformation pathways known, or plausibly hypothesized, to be active in E. coli in the presence of TNT. Using constraint-based analysis (CBA) methods, we then performed several series of in silico experiments to elucidate the contribution of these pathways, individually or in combination, to the TNT transformation capacity of E. coli. The results of our analyses were validated by replicating several experimentally observed TNT degradation phenotypes in E. coli cultures. We further used the extended model to explore the influence of process parameters, including aeration regime, TNT concentration, cell density, and carbon source, on TNT degradation efficiency. We also conducted an in silico metabolic engineering study to design a series of E. coli mutants capable of degrading TNT at higher yield than the wild-type strain. Our study therefore extends the application of CBA to the bioremediation of nitroaromatics and demonstrates the usefulness of this approach to inform bioremediation research.
The MELISSA food data base: space food preparation and process optimization
Creuly, Catherine; Poughon, Laurent; Pons, A.; Farges, Berangere; Dussap, Claude-Gilles
Life support systems have to deal with the air, water and food requirements of a crew, with waste management, and with the crew's habitability and safety constraints. Food can be provided from stocks (open loops) or produced during the space flight or on an extraterrestrial base (which usually implies a closed-loop system). It is generally accepted that only biological processes can fulfil the food requirement of a life support system. Today, only a strictly vegetarian source range is considered, limited to a very small number of crops compared to the variety available on Earth. Despite these constraints, a successful diet should have enough variety in terms of ingredients and recipes, sufficiently high acceptability in terms of acceptance ratings for individual dishes to remain interesting and palatable over a period of several months, and an adequate level of nutrients commensurate with space nutritional requirements. In addition to the nutritional aspects, other parameters have to be considered for the pertinent selection of dishes, such as energy consumption (for food production and transformation), quantity of generated waste, preparation time, and food processes. This work concerns a global approach called the MELISSA Food Database, intended to facilitate the creation and management of these menus under the associated nutritional, mass, energy and time constraints. The MELISSA Food Database is composed of a database (MySQL-based) containing, among other information, crew composition, menus, dishes, recipes, plant and nutritional data, and of a web interface (PHP-based) to interactively access the database and manage its content. In the current version, a crew is defined and a 10-day menu scenario can be created using dishes that could be cooked from a limited set of fresh plants assumed to be produced in the life support system. The nutritional coverage and the waste, mass, time and energy requirements are calculated, allowing evaluation of the menu scenario and its …
Directory of Open Access Journals (Sweden)
Fan Chen
2016-01-01
In order to achieve precise and efficient processing of nanocomposite ceramics, the ultrasound-aided electrolytic in-process dressing method was proposed. How to optimize the grinding parameters, that is, to maximize processing efficiency while guaranteeing the best workpiece quality, is a problem that needs to be solved urgently. Firstly, this research investigated the influence of the grinding parameters on the material removal rate and the critical ductile depth, and mathematical models based on existing models were developed to simulate the material removal process. Then, on the basis of parameter sensitivity analysis using partial derivatives, sensitivity models of the material removal rate with respect to the grinding parameters were established and computed quantitatively in MATLAB, and the key grinding parameter for an optimal grinding process was found. Finally, the theoretical analyses were verified by experiments: the material removal rate increases with the grinding parameters, including grinding depth (ap), axial feed speed (fa), workpiece speed (Vw), and wheel speed (Vs); the parameter sensitivity of the material removal rate was in the descending order ap > fa > Vw > Vs; and the most sensitive parameter (ap) was optimized, with the best machining result obtained when ap was about 3.73 μm.
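The partial-derivative sensitivity analysis described above (computed in MATLAB in the paper) can be sketched numerically in Python: for a power-law removal-rate model, the normalised sensitivity of each parameter equals its exponent, so a ranking such as ap > fa > Vw > Vs falls out directly. The model form and exponents below are invented for illustration; the paper's models and values differ.

```python
def numeric_sensitivity(f, params, h=1e-6):
    """Normalised sensitivities S_p = (dF/dp) * p / F via central differences."""
    base = f(**params)
    out = {}
    for name, value in params.items():
        hi = f(**dict(params, **{name: value * (1 + h)}))
        lo = f(**dict(params, **{name: value * (1 - h)}))
        out[name] = (hi - lo) / (2 * value * h) * value / base
    return out

def removal_rate(ap, fa, vw, vs):
    """Hypothetical power-law removal-rate model; exponents are invented."""
    return ap ** 1.2 * fa ** 1.0 * vw ** 0.6 * vs ** 0.3
```

For a pure power law the normalised sensitivity is exact and independent of the operating point, which is why such rankings transfer across parameter settings.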
DEFF Research Database (Denmark)
Michelsen, Aage U.
2004-01-01
The reasoning behind the Theory of Constraints and the Drum-Buffer-Rope planning principle, together with a sketch of the Thinking Process.
Cosmological parameters from CMB and other data: A Monte Carlo approach
International Nuclear Information System (INIS)
Lewis, Antony; Bridle, Sarah
2002-01-01
We present a fast Markov chain Monte Carlo exploration of cosmological parameter space. We perform a joint analysis of results from recent cosmic microwave background (CMB) experiments and provide parameter constraints, including σ₈, from the CMB independent of other data. We next combine data from the CMB, the HST Key Project, the 2dF galaxy redshift survey, type Ia supernovae and big-bang nucleosynthesis. The Monte Carlo method allows the rapid investigation of a large number of parameters, and we present results from 6- and 9-parameter analyses of flat models, and an 11-parameter analysis of non-flat models. Our results include constraints on the neutrino mass (mν ≲ 3 eV), the equation of state of the dark energy, and the tensor amplitude, as well as demonstrating the effect of additional parameters on the base parameter constraints. In a series of appendices we describe the many uses of importance sampling, including computing results from new data and correcting results generated with an approximate method. We also discuss the different ways of converting parameter samples to parameter constraints, the effect of the prior, assess goodness of fit and consistency, and describe the use of analytic marginalization over normalization parameters.
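The two ingredients named here, a Metropolis sampler and importance re-weighting of an existing chain to incorporate new data, can be sketched in a few lines. The one-dimensional Gaussian toy posteriors in the usage are illustrative assumptions, standing in for the real multi-parameter cosmological likelihoods.

```python
import math
import random

def metropolis(logpost, x0, n, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a 1-D log-posterior."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)
        lp_prop = logpost(prop)
        # accept with probability min(1, exp(lp_prop - lp))
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

def importance_mean(chain, logpost_old, logpost_new):
    """Posterior mean under a new likelihood by importance-weighting an old chain."""
    w = [math.exp(logpost_new(x) - logpost_old(x)) for x in chain]
    return sum(wi * xi for wi, xi in zip(w, chain)) / sum(w)
```

Re-weighting avoids rerunning the chain when new data arrive, which is exactly the use of importance sampling the abstract describes; it is reliable only when the new posterior overlaps the old one well.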
Reduction Of Constraints For Coupled Operations
International Nuclear Information System (INIS)
Raszewski, F.; Edwards, T.
2009-01-01
The homogeneity constraint was implemented in the Defense Waste Processing Facility (DWPF) Product Composition Control System (PCCS) to help ensure that the current durability models would be applicable to the glass compositions being processed during DWPF operations. While the homogeneity constraint is typically an issue at lower waste loadings (WLs), it may impact the operating windows for DWPF operations, where the glass-forming systems may be limited to lower waste loadings based on fissile or heat-load limits. In the sludge batch 1b (SB1b) variability study, application of the homogeneity constraint at the measurement acceptability region (MAR) limit eliminated much of the potential operating window for DWPF. As a result, Edwards and Brown developed criteria that allowed DWPF to relax the homogeneity constraint from the MAR to the property acceptance region (PAR) criterion, which opened up the operating window for DWPF operations. These criteria are defined as: (1) use the alumina constraint as currently implemented in PCCS (Al₂O₃ ≥ 3 wt%) and add a sum-of-alkali constraint with an upper limit of 19.3 wt% (ΣM₂O ≤ 19.3 wt%), or (2) increase the lower limit of the Al₂O₃ constraint to 4 wt% (Al₂O₃ ≥ 4 wt%). Herman et al. previously demonstrated that these criteria could be used to replace the homogeneity constraint for future sludge-only batches. The compositional region encompassing coupled-operations flowsheets could not be bounded, as these flowsheets were unknown at the time. With the initiation of coupled operations at DWPF in 2008, the need to revisit the homogeneity constraint was realized. This constraint was specifically addressed through the variability study for SB5, where it was shown that the homogeneity constraint could be ignored if the alumina and alkali constraints were imposed. Additional benefit could be gained if the homogeneity constraint could be replaced by the Al₂O₃ and sum-of-alkali constraints for future coupled-operations processing based on projections from Revision 14 of …
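The relaxed criteria read as a simple disjunctive acceptance test on a glass composition given in wt% oxides. The sketch below encodes that reading; the oxide key names are an assumed convention, the alkali list is illustrative, and the real PCCS applies many further constraints beyond these two.

```python
def pccs_relaxed_ok(comp):
    """Disjunctive reading of the relaxed criteria; comp maps oxide -> wt%."""
    al2o3 = comp.get("Al2O3", 0.0)
    alkali = sum(comp.get(k, 0.0) for k in ("Li2O", "Na2O", "K2O", "Cs2O"))
    criterion_1 = al2o3 >= 3.0 and alkali <= 19.3   # alumina plus sum-of-alkali limit
    criterion_2 = al2o3 >= 4.0                      # tightened alumina limit alone
    return criterion_1 or criterion_2
```

A composition with moderate alumina passes only while its total alkali stays under the 19.3 wt% ceiling; pushing alumina to 4 wt% removes that dependence.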
Optimization of CNC end milling process parameters using PCA ...
African Journals Online (AJOL)
Optimization of CNC end milling process parameters using PCA-based Taguchi method. ... International Journal of Engineering, Science and Technology ... To meet the basic assumption of Taguchi method; in the present work, individual response correlations have been eliminated first by means of Principal Component ...
Majumder, Himadri; Maity, Kalipada
2018-03-01
Shape memory alloys have a unique capability to return to their original shape after physical deformation upon the application of heat, thermo-mechanical, or magnetic load. In this experimental investigation, desirability function analysis (DFA), a multi-attribute decision-making method, was utilized to find the optimum input parameter setting for the wire electrical discharge machining (WEDM) of Ni-Ti shape memory alloy. Four critical machining parameters, namely pulse-on time (TON), pulse-off time (TOFF), wire feed (WF) and wire tension (WT), were taken as machining inputs for experiments to optimize three interconnected responses: cutting speed, kerf width, and surface roughness. The input parameter combination TON = 120 μs, TOFF = 55 μs, WF = 3 m/min and WT = 8 kg-F was found to produce the optimum results. The optimum process parameters for each desired response were also obtained using Taguchi's signal-to-noise ratio. A confirmation test was carried out to validate the optimum machining parameter combination, affirming that DFA is a competent approach for selecting optimum input parameters for the desired response quality in the WEDM of Ni-Ti shape memory alloy.
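Desirability function analysis maps each response onto an individual desirability in [0, 1] and combines them with a geometric mean, so one fully unacceptable response rejects the whole parameter setting. A minimal sketch for a larger-the-better response follows; the bounds and weights are illustrative, not the paper's.

```python
def desirability_larger(y, y_min, y_max, weight=1.0):
    """Larger-the-better individual desirability, clipped to [0, 1]."""
    if y <= y_min:
        return 0.0
    if y >= y_max:
        return 1.0
    return ((y - y_min) / (y_max - y_min)) ** weight

def composite_desirability(ds):
    """Geometric mean; one fully undesirable response zeroes the composite."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))
```

Smaller-the-better responses such as kerf width or surface roughness use the mirrored mapping; the parameter setting with the largest composite desirability across all responses is selected.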
WMAP constraints on the Cardassian model
International Nuclear Information System (INIS)
Sen, A.A.; Sen, S.
2003-01-01
We investigate the constraints on the Cardassian model using the recent results from the Wilkinson Microwave Anisotropy Probe for the locations of the peaks of the cosmic microwave background (CMB) anisotropy spectrum. We find that the model is consistent with the recent observational data for a certain range of the model parameter n and the cosmological parameters. We find that the Cardassian model is favored over the ΛCDM model for a higher spectral index (nₛ ≈ 1) together with a lower value of the Hubble parameter h (h ≤ 0.71). For smaller values of nₛ, however, the ΛCDM and Cardassian models are equally favored. Also, irrespective of supernova constraints, the CMB data alone predict the current acceleration of the Universe in this model. We have also studied the constraint on σ₈, the rms density fluctuation at the 8 h⁻¹ Mpc scale.
Içten, Elçin; Giridhar, Arun; Nagy, Zoltan K; Reklaitis, Gintaras V
2016-04-01
The features of a drop-on-demand-based system developed for the manufacture of melt-based pharmaceuticals have been previously reported. In this paper, a supervisory control system, designed to ensure reproducible production of high-quality melt-based solid oral dosages, is presented. This control system enables the production of individual dosage forms with the desired critical quality attributes, namely the amount of active ingredient and the drug morphology, by monitoring and controlling critical process parameters such as drop size and product and process temperatures. The effects of these process parameters on the final product quality are investigated, and the properties of the produced dosage forms are characterized using various techniques, such as Raman spectroscopy, optical microscopy, and dissolution testing. A crystallization temperature control strategy, including controlled temperature cycles, is presented to tailor the crystallization behavior of drug deposits and to achieve consistent drug morphology. This control strategy can be used to achieve the desired bioavailability of the drug by mitigating variations in the dissolution profiles. The supervisory control strategy enables the application of the drop-on-demand system to the production of the individualized dosages required for personalized drug regimens.
Directory of Open Access Journals (Sweden)
Dong Hyun Moon
2017-07-01
The constraint effect is a key issue in structural integrity assessments based on two-parameter fracture mechanics (TPFM) for making precise predictions of the load-bearing capacity of cracked structural components. In this study, a constraint-based failure assessment diagram (FAD) was used to assess the fracture behavior of an Al 5083-O weldment with various flaws at cryogenic temperature. The results were compared with those of the BS 7910 Option 1 FAD in terms of the maximum allowable stress. A series of fracture toughness tests were conducted with compact tension (CT) specimens at room and cryogenic temperatures. The Q parameter for the Al 5083-O weldment was evaluated to quantify the constraint level, which is the difference between the actual stress and the Hutchinson-Rice-Rosengren (HRR) stress field near the crack tip. Nonlinear 3D finite element analysis was carried out to calculate the Q parameter at cryogenic temperature. Based on the experimental and numerical results, the influence of the constraint-level correction on the allowable applied stress was investigated using the FAD methodology. The results showed that the constraint-based FAD procedure is essential to avoid an overly conservative allowable stress prediction in an Al 5083-O weldment with flaws.
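For orientation, a failure assessment diagram separates acceptable from unacceptable assessment points (Lr, Kr) with an assessment line. The sketch below uses one commonly quoted BS 7910/R6-style Option 1 form, which should be verified against the standard before any real use; the cut-off Lr,max = 1.0 is an illustrative placeholder, since the real cut-off depends on the material's flow properties.

```python
import math

def fad_option1(lr):
    """Option 1 assessment line Kr = f(Lr), one commonly quoted BS 7910/R6 form."""
    return (1 + 0.5 * lr ** 2) ** -0.5 * (0.3 + 0.7 * math.exp(-0.6 * lr ** 6))

def acceptable(kr, lr, lr_max=1.0):
    """A flaw assessment point (Lr, Kr) inside the line and cut-off is acceptable."""
    return lr <= lr_max and kr <= fad_option1(lr)
```

A constraint-corrected assessment, as in the study above, effectively raises this line for low-constraint geometries, enlarging the acceptable region and reducing conservatism.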
Cosmological constraints on Brans-Dicke theory.
Avilez, A; Skordis, C
2014-07-04
We report strong cosmological constraints on the Brans-Dicke (BD) theory of gravity using cosmic microwave background data from Planck. We consider two types of models. First, the initial condition of the scalar field is fixed to give the same effective gravitational strength Geff today as the one measured on Earth, GN. In this case, the BD parameter ω is constrained to ω > 692 at the 99% confidence level, an order of magnitude improvement over previous constraints. In the second type, the initial condition for the scalar is a free parameter, leading to a somewhat stronger constraint of ω > 890, while Geff is constrained to 0.981 ≤ Geff/GN ≤ 1.285. These constraints are generic within the BD theory and are valid for any Horndeski theory, the most general second-order scalar-tensor theory, which approximates the BD theory on cosmological scales. In this sense, our constraints place strong limits on possible modifications of gravity that might explain cosmic acceleration.
Executable specifications for hypothesis-based reasoning with Prolog and Constraint Handling Rules
DEFF Research Database (Denmark)
Christiansen, Henning
2009-01-01
Constraint Handling Rules (CHR) is an extension to Prolog which opens up a spectrum of hypothesis-based reasoning in logic programs without additional interpretation overhead. Abduction with integrity constraints is one example of hypothesis-based reasoning which can be implemented directly in Prolog and CHR with a straightforward use of available and efficiently implemented facilities. The present paper clarifies the semantic foundations for this way of doing abduction in CHR and Prolog, as well as other examples of hypothesis-based reasoning that are possible, including assumptive logic…
Escape from the island. Processing constraints on wh-extraction in Danish
DEFF Research Database (Denmark)
Christensen, Ken Ramshøj; Kizach, Johannes; Nyvad, Anne Mette
2012-01-01
In the formal syntax literature, it is commonly assumed that there is a constraint on linguistic competence that blocks extraction of WH-expressions (e.g. what or which book) from embedded questions, referred to as WH-islands. Furthermore, it is assumed that there is an argument/adjunct asymmetry. … Our results reveal that WH-island violations, though degraded, are grammatical in Danish. Since the standard assumptions cannot account for the range of results, we argue in favor of a processing account referring to Locality (processing domains) and Working Memory.
A constraints-based approach to the acquisition of expertise in outdoor adventure sports
Davids, Keith; Brymer, Eric; Seifert, Ludovic; Orth, Dominic
2013-01-01
A constraints-based framework enables a new understanding of expertise in outdoor adventure sports by considering performer-environment couplings through emergent and self-organizing behaviours in relation to interacting constraints. Expert adventure athletes, conceptualized as complex, dynamical…
DEFF Research Database (Denmark)
Wang, Yong; Cai, Zixing; Zhou, Yuren
2009-01-01
A novel approach to deal with numerical and engineering constrained optimization problems, which incorporates a hybrid evolutionary algorithm and an adaptive constraint-handling technique, is presented in this paper. The hybrid evolutionary algorithm simultaneously uses simplex crossover and two mutation operators to generate the offspring population. Additionally, the adaptive constraint-handling technique distinguishes three main situations; in each situation, one constraint-handling mechanism is designed based on the current population state. Experiments on 13 benchmark test functions and four well-known constrained design problems verify the effectiveness and efficiency of the proposed method. The experimental results show that integrating the hybrid evolutionary algorithm with the adaptive constraint-handling technique is beneficial, and the proposed method achieves competitive…
Optimisation of wire-cut EDM process parameter by Grey-based response surface methodology
Kumar, Amit; Soota, Tarun; Kumar, Jitendra
2018-03-01
Wire electric discharge machining (WEDM) is one of the advanced machining processes. Response surface methodology coupled with Grey relational analysis has been proposed and used to optimise the machining parameters of WEDM. A face-centred cubic design is used for conducting experiments on high speed steel (HSS) M2 grade workpiece material. The regression model of significant factors such as pulse-on time, pulse-off time, peak current, and wire feed is considered for optimising the response variables: material removal rate (MRR), surface roughness, and kerf width. The optimal machining condition was obtained using the Grey relational grade. ANOVA is applied to determine the significance of the input parameters in optimising the Grey relational grade.
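The Grey relational grade computation described above can be sketched as follows. The WEDM response values are invented for illustration (none come from the paper), and the distinguishing coefficient ζ = 0.5 is the conventional choice:

```python
# Hypothetical WEDM runs: columns are (MRR, surface roughness Ra, kerf width);
# none of these numbers come from the paper.
runs = [
    (3.2, 1.8, 0.30),
    (2.5, 2.4, 0.34),
    (2.9, 2.1, 0.32),
]

def normalize(col, larger_better):
    """Grey relational normalization to [0, 1] (1 = ideal)."""
    lo, hi = min(col), max(col)
    return [((x - lo) if larger_better else (hi - x)) / (hi - lo) for x in col]

cols = list(zip(*runs))
norm = list(zip(
    normalize(cols[0], True),    # MRR: larger-the-better
    normalize(cols[1], False),   # Ra: smaller-the-better
    normalize(cols[2], False),   # kerf: smaller-the-better
))

# Deviation of each run from the ideal (all-ones) reference sequence.
deltas = [[1.0 - v for v in row] for row in norm]
d_min = min(min(row) for row in deltas)
d_max = max(max(row) for row in deltas)

zeta = 0.5  # distinguishing coefficient; 0.5 is the conventional choice

def coeff(d):
    """Grey relational coefficient for one deviation value."""
    return (d_min + zeta * d_max) / (d + zeta * d_max)

# Grey relational grade = mean coefficient over the responses of a run.
grades = [sum(coeff(d) for d in row) / len(row) for row in deltas]
best = grades.index(max(grades))  # run closest to the ideal on all responses
```

A run that is best on every response gets a grade of exactly 1; ranking the grades converts the multi-response problem into a single-criterion one, which is what makes the Grey relational grade usable as the optimisation target for ANOVA.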
Energy Technology Data Exchange (ETDEWEB)
Tencate, Alister J. [Department of Chemistry, Idaho State University, Pocatello, ID 83209 (United States); Kalivas, John H., E-mail: kalijohn@isu.edu [Department of Chemistry, Idaho State University, Pocatello, ID 83209 (United States); White, Alexander J. [Department of Physics and Optical Engineering, Rose-Hulman Institute of Technology, Terre Huate, IN 47803 (United States)
2016-05-19
New multivariate calibration methods and other processes are being developed that require selection of multiple tuning parameter (penalty) values to form the final model. With one or more tuning parameters, using only one measure of model quality to select final tuning parameter values is not sufficient. Optimization of several model quality measures is challenging. Thus, three fusion ranking methods are investigated for simultaneous assessment of multiple measures of model quality for selecting tuning parameter values. One is a supervised learning fusion rule named sum of ranking differences (SRD). The other two are non-supervised learning processes based on the sum and median operations. The effect of the number of models evaluated on the three fusion rules are also evaluated using three procedures. One procedure uses all models from all possible combinations of the tuning parameters. To reduce the number of models evaluated, an iterative process (only applicable to SRD) is applied and thresholding a model quality measure before applying the fusion rules is also used. A near infrared pharmaceutical data set requiring model updating is used to evaluate the three fusion rules. In this case, calibration of the primary conditions is for the active pharmaceutical ingredient (API) of tablets produced in a laboratory. The secondary conditions for calibration updating is for tablets produced in the full batch setting. Two model updating processes requiring selection of two unique tuning parameter values are studied. One is based on Tikhonov regularization (TR) and the other is a variation of partial least squares (PLS). The three fusion methods are shown to provide equivalent and acceptable results allowing automatic selection of the tuning parameter values. Best tuning parameter values are selected when model quality measures used with the fusion rules are for the small secondary sample set used to form the updated models. In this model updating situation, evaluation of
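The two non-supervised fusion rules (sum and median of ranks) can be illustrated with a small sketch. The model names and quality-measure values are invented, and lower values are assumed better for every measure:

```python
import statistics

# Invented quality measures for four candidate tuning-parameter settings;
# lower is better for every measure (e.g. RMSE-like criteria).
models = {
    "A": (0.10, 0.30, 0.22),
    "B": (0.12, 0.25, 0.20),
    "C": (0.30, 0.40, 0.50),
    "D": (0.11, 0.28, 0.21),
}

def ranks(values):
    """Map each model name to its rank (1 = best) under one measure."""
    order = sorted(values, key=values.get)
    return {name: i + 1 for i, name in enumerate(order)}

n_measures = len(next(iter(models.values())))
per_measure = [ranks({m: v[j] for m, v in models.items()})
               for j in range(n_measures)]

# Non-supervised fusion: combine the per-measure ranks by sum and by median.
sum_fused = {m: sum(r[m] for r in per_measure) for m in models}
median_fused = {m: statistics.median(r[m] for r in per_measure) for m in models}

best_by_sum = min(sum_fused, key=sum_fused.get)  # consensus best model
```

Note that model A wins the first measure outright yet loses the fusion: B is never worse than second on any measure, which is exactly the kind of consensus behaviour fusion ranking is meant to reward.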
Effect of Processing Parameters on 3D Printing of Cement - based Materials
Lin, Jia Chao; Wang, Jun; Wu, Xiong; Yang, Wen; Zhao, Ri Xu; Bao, Ming
2018-06-01
3D printing is a new research direction for building methods in recent years. The applicability of 3D printing equipment to cement-based materials is analyzed, and the influence of 3D printing operation parameters on the printing effect is explored in this paper. Results showed the appropriate ranges of the 3D printing operation parameters: a print height to nozzle diameter ratio between 0.4 and 0.6, and a printing speed of 4-8 cm/s with a pump rate of 9 × 10⁻² m³/h.
Proportional reasoning as a heuristic-based process: time constraint and dual task considerations.
Gillard, Ellen; Van Dooren, Wim; Schaeken, Walter; Verschaffel, Lieven
2009-01-01
The present study interprets the overuse of proportional solution methods from a dual process framework. Dual process theories claim that analytic operations involve time-consuming executive processing, whereas heuristic operations are fast and automatic. In two experiments to test whether proportional reasoning is heuristic-based, the participants solved "proportional" problems, for which proportional solution methods provide correct answers, and "nonproportional" problems known to elicit incorrect answers based on the assumption of proportionality. In Experiment 1, the available solution time was restricted. In Experiment 2, the executive resources were burdened with a secondary task. Both manipulations induced an increase in proportional answers and a decrease in correct answers to nonproportional problems. These results support the hypothesis that the choice for proportional methods is heuristic-based.
He, Kaifei; Xu, Tianhe; Förste, Christoph; Petrovic, Svetozar; Barthelmes, Franz; Jiang, Nan; Flechtner, Frank
2016-01-01
When applying the Global Navigation Satellite System (GNSS) for precise kinematic positioning in airborne and shipborne gravimetry, multiple GNSS receivers are often fixed-mounted on the kinematic platform carrying the gravimetry instrumentation. Thus, the distances among these GNSS antennas are known and invariant. This information can be used to improve the accuracy and reliability of the state estimates. For this purpose, the known distances between the antennas are applied as a priori constraints within the state parameter adjustment. These constraints are introduced in such a way that their accuracy is taken into account. To test this approach, GNSS data from a Baltic Sea shipborne gravimetric campaign have been used. The results of our study show that applying distance constraints improves the accuracy of GNSS kinematic positioning, for example, by about 4 mm for the radial component. PMID:27043580
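A minimal 1-D sketch of the idea: the known antenna separation enters a least-squares adjustment as a weighted pseudo-observation, so its weight reflects its accuracy. The coordinates, separation, and weights below are hypothetical, and the actual method operates on full 3-D GNSS state estimates:

```python
def adjust_with_distance_constraint(y1, y2, d, w_pos=1.0, w_con=100.0):
    """Weighted least-squares adjustment of two 1-D antenna coordinates.
    Minimizes w_pos*(x1-y1)^2 + w_pos*(x2-y2)^2 + w_con*(x2-x1-d)^2,
    i.e. the known separation d enters as a pseudo-observation whose
    weight w_con reflects its accuracy. Solved via the 2x2 normal equations."""
    a11, a12 = w_pos + w_con, -w_con
    a21, a22 = -w_con, w_pos + w_con
    b1 = w_pos * y1 - w_con * d
    b2 = w_pos * y2 + w_con * d
    det = a11 * a22 - a12 * a21
    x1 = (b1 * a22 - b2 * a12) / det
    x2 = (a11 * b2 - a21 * b1) / det
    return x1, x2

# Noisy measured coordinates 10.05 m apart; true separation known to be 10 m.
x1, x2 = adjust_with_distance_constraint(0.02, 10.07, 10.0)
```

With equal position weights the adjustment preserves the measured midpoint exactly while pulling the estimated separation toward the known value, which is the accuracy gain the abstract reports.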
Medical image segmentation by means of constraint satisfaction neural network
International Nuclear Information System (INIS)
Chen, C.T.; Tsao, C.K.; Lin, W.C.
1990-01-01
This paper applies the concept of a constraint satisfaction neural network (CSNN) to the problem of medical image segmentation. Constraint satisfaction (or constraint propagation), the procedure for achieving global consistency through local computation, is an important paradigm in artificial intelligence. A CSNN can be viewed as a three-dimensional neural network, with the two-dimensional image matrix as its base, augmented by various constraint labels for each pixel. These constraint labels can be interpreted as the connections and the topology of the neural network. Through parallel and iterative processes, the CSNN approaches a solution that satisfies the given constraints, thus providing segmented regions with global consistency.
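The constraint-propagation idea can be sketched with a toy relaxation-labeling loop on a 1-D "image". This is a simplified stand-in for the CSNN (no actual network topology), with invented intensities and a plain smoothness constraint between neighbouring pixels:

```python
# Toy 1-D "image" with two regions and one ambiguous pixel (index 2);
# intensities and label prototypes are invented for illustration.
pixels = [0.10, 0.15, 0.55, 0.12, 0.85, 0.88, 0.92]
labels = (0.1, 0.9)  # prototype intensities: label 0 = dark, label 1 = bright

def init_probs(v):
    """Initial label probabilities from intensity similarity alone."""
    sims = [1.0 / (abs(v - c) + 1e-6) for c in labels]
    s = sum(sims)
    return [x / s for x in sims]

probs = [init_probs(v) for v in pixels]

# Iteratively propagate a smoothness constraint: neighbouring pixels lend
# support to matching labels, driving the labeling to global consistency
# through purely local computation.
for _ in range(20):
    new = []
    for i, p in enumerate(probs):
        nbrs = [probs[j] for j in (i - 1, i + 1) if 0 <= j < len(probs)]
        support = [p[k] * (1.0 + sum(n[k] for n in nbrs))
                   for k in range(len(labels))]
        s = sum(support)
        new.append([x / s for x in support])
    probs = new

# The ambiguous pixel is absorbed into the dark region by its neighbours.
segmentation = [max(range(len(labels)), key=p.__getitem__) for p in probs]
```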
An Improved Hierarchical Genetic Algorithm for Sheet Cutting Scheduling with Process Constraints
Directory of Open Access Journals (Sweden)
Yunqing Rao
2013-01-01
For the first time, an improved hierarchical genetic algorithm for the sheet cutting problem, which involves n cutting patterns for m non-identical parallel machines with process constraints, is proposed in the integrated cutting stock model. The objective of the cutting scheduling problem is to minimize the weighted completion time. A mathematical model for this problem is presented, an improved hierarchical genetic algorithm (an ant colony hierarchical genetic algorithm) is developed for a better solution, and a hierarchical coding method is used based on the characteristics of the problem. Furthermore, to speed up convergence and resolve local convergence issues, an adaptive crossover probability and mutation probability are used in this algorithm. The computational results and comparison prove that the presented approach is quite effective for the considered problem.
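The abstract does not give the adaptive probability formulas. One common scheme in this spirit (Srinivas and Patnaik's fitness-scaled rates, with hypothetical constants) might look like:

```python
def adaptive_rates(f, f_avg, f_max, k1=0.9, k2=0.6, k3=0.1, k4=0.05):
    """Adaptive crossover (pc) and mutation (pm) probabilities in the spirit
    of Srinivas & Patnaik: above-average individuals are disturbed less the
    closer they are to the best, while below-average individuals get fixed,
    higher rates. Maximization is assumed; the constants are hypothetical."""
    if f >= f_avg and f_max > f_avg:
        scale = (f_max - f) / (f_max - f_avg)
        return k1 * scale, k3 * scale
    return k2, k4
```

The effect is that the current best solution is protected (both rates go to zero for it) while poor solutions are aggressively recombined and mutated, which is how such schemes speed convergence without collapsing diversity.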
Optimization and Simulation of Machining Parameters in Radial-axial Ring Rolling Process
Directory of Open Access Journals (Sweden)
Shuiyuan Tang
2011-05-01
Ring rolling is a complicated process in which rolling parameters directly influence the quality of the ring. It is a process with high productivity and little material waste, widely used in transportation industries including automotive, shipbuilding, and aerospace. During the rolling of large-sized parts, wrinkles and hollows often appear on the surface due to inconsistency between the rolling motions and the deformation of the ring part. Based on the radial-axial ring rolling system configuration, the motions and forces in the rolling process are analyzed, and a dynamic model is formulated. Errors in the ring's end flatness and roundness are defined as the characteristic parameters of ring quality. The relationships between the core roller feed speed, drive roller speed, upper taper roller feed speed, and the quality of the ring part are analyzed. The stress and strain of the part are simulated with the finite element method using DEFORM software. The simulation results provide a reference for defining ring rolling process parameters, making it possible to keep the deformation of the part consistent with the process parameters and to improve product quality considerably.
Penalty parameter of the penalty function method
DEFF Research Database (Denmark)
Si, Cheng Yong; Lan, Tian; Hu, Junjie
2014-01-01
The penalty parameter of the penalty function method is systematically analyzed and discussed. For the problem that Deb's feasibility-based rule does not give detailed instruction on how to rank two solutions when they have the same constraint violation, an improved Deb's feasibility-based rule is…
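The gap described above, ranking two solutions with the same constraint violation, can be sketched as a pairwise comparison rule. The exact tie-break used in the paper is not given in this truncated record, so falling back to the objective value is an assumption:

```python
def better(a, b):
    """Pairwise comparison in the spirit of Deb's feasibility rule, with a
    tie-break for equal violations. Each solution is a tuple
    (objective_value, constraint_violation); minimization is assumed, and
    violation 0 means feasible. The equal-violation fallback to the
    objective is an assumption, not necessarily the paper's exact rule."""
    fa, va = a
    fb, vb = b
    if va == 0 and vb == 0:            # both feasible: better objective wins
        return a if fa <= fb else b
    if va == 0 or vb == 0:             # feasible always beats infeasible
        return a if va == 0 else b
    if va != vb:                       # both infeasible: smaller violation wins
        return a if va < vb else b
    return a if fa <= fb else b        # equal violation: objective decides
```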
Optimization of dissolution process parameters for uranium ore concentrate powders
Energy Technology Data Exchange (ETDEWEB)
Misra, M.; Reddy, D.M.; Reddy, A.L.V.; Tiwari, S.K.; Venkataswamy, J.; Setty, D.S.; Sheela, S.; Saibaba, N. [Nuclear Fuel Complex, Hyderabad (India)
2013-07-01
Nuclear Fuel Complex processes Uranium Ore Concentrate (UOC) to produce the uranium dioxide powder required for the fabrication of fuel assemblies for Pressurized Heavy Water Reactors (PHWRs) in India. UOC is dissolved in nitric acid and further purified by a solvent extraction process to produce nuclear-grade UO₂ powder. Dissolution of UOC in nitric acid involves complex nitric oxide based reactions, since the UOC is in the form of uranium octoxide (U₃O₈) or uranium dioxide (UO₂). The process kinetics of UOC dissolution are largely influenced by parameters such as the concentration and flow rate of nitric acid, temperature, and air flow rate, which are found to affect the recovery of nitric oxide as nitric acid. The plant-scale dissolution of a 2 MT batch in a single reactor was studied, and excellent recovery of the oxides of nitrogen (NOₓ) as nitric acid was observed. The dissolution process is automated by a PLC-based Supervisory Control and Data Acquisition (SCADA) system for accurate control of process parameters, and around 200 metric tons of UOC have been successfully dissolved. The paper covers the complex chemistry involved in the UOC dissolution process as well as the SCADA system. The solid and liquid reactions were studied along with the multiple stoichiometries of the nitrous oxide generated. (author)
Atmospheric neutrino oscillation analysis with external constraints in Super-Kamiokande I-IV
Abe, K.; Bronner, C.; Haga, Y.; Hayato, Y.; Ikeda, M.; Iyogi, K.; Kameda, J.; Kato, Y.; Kishimoto, Y.; Marti, Ll.; Miura, M.; Moriyama, S.; Nakahata, M.; Nakajima, T.; Nakano, Y.; Nakayama, S.; Okajima, Y.; Orii, A.; Pronost, G.; Sekiya, H.; Shiozawa, M.; Sonoda, Y.; Takeda, A.; Takenaka, A.; Tanaka, H.; Tasaka, S.; Tomura, T.; Akutsu, R.; Irvine, T.; Kajita, T.; Kametani, I.; Kaneyuki, K.; Nishimura, Y.; Okumura, K.; Richard, E.; Tsui, K. M.; Labarga, L.; Fernandez, P.; Blaszczyk, F. d. M.; Gustafson, J.; Kachulis, C.; Kearns, E.; Raaf, J. L.; Stone, J. L.; Sulak, L. R.; Berkman, S.; Tobayama, S.; Goldhaber, M.; Carminati, G.; Elnimr, M.; Kropp, W. R.; Mine, S.; Locke, S.; Renshaw, A.; Smy, M. B.; Sobel, H. W.; Takhistov, V.; Weatherly, P.; Ganezer, K. S.; Hartfiel, B. L.; Hill, J.; Hong, N.; Kim, J. Y.; Lim, I. T.; Park, R. G.; Akiri, T.; Himmel, A.; Li, Z.; O'Sullivan, E.; Scholberg, K.; Walter, C. W.; Wongjirad, T.; Ishizuka, T.; Nakamura, T.; Jang, J. S.; Choi, K.; Learned, J. G.; Matsuno, S.; Smith, S. N.; Amey, J.; Litchfield, R. P.; Ma, W. Y.; Uchida, Y.; Wascko, M. O.; Cao, S.; Friend, M.; Hasegawa, T.; Ishida, T.; Ishii, T.; Kobayashi, T.; Nakadaira, T.; Nakamura, K.; Oyama, Y.; Sakashita, K.; Sekiguchi, T.; Tsukamoto, T.; Abe, KE.; Hasegawa, M.; Suzuki, A. T.; Takeuchi, Y.; Yano, T.; Hayashino, T.; Hirota, S.; Huang, K.; Ieki, K.; Jiang, M.; Kikawa, T.; Nakamura, KE.; Nakaya, T.; Patel, N. D.; Suzuki, K.; Takahashi, S.; Wendell, R. A.; Anthony, L. H. V.; McCauley, N.; Pritchard, A.; Fukuda, Y.; Itow, Y.; Mitsuka, G.; Murase, M.; Muto, F.; Suzuki, T.; Mijakowski, P.; Frankiewicz, K.; Hignight, J.; Imber, J.; Jung, C. K.; Li, X.; Palomino, J. L.; Santucci, G.; Vilela, C.; Wilking, M. J.; Yanagisawa, C.; Ito, S.; Fukuda, D.; Ishino, H.; Kayano, T.; Kibayashi, A.; Koshio, Y.; Mori, T.; Nagata, H.; Sakuda, M.; Xu, C.; Kuno, Y.; Wark, D.; Di Lodovico, F.; Richards, B.; Tacik, R.; Kim, S. 
B.; Cole, A.; Thompson, L.; Okazawa, H.; Choi, Y.; Ito, K.; Nishijima, K.; Koshiba, M.; Totsuka, Y.; Suda, Y.; Yokoyama, M.; Calland, R. G.; Hartz, M.; Martens, K.; Quilain, B.; Simpson, C.; Suzuki, Y.; Vagins, M. R.; Hamabe, D.; Kuze, M.; Yoshida, T.; Ishitsuka, M.; Martin, J. F.; Nantais, C. M.; de Perio, P.; Tanaka, H. A.; Konaka, A.; Chen, S.; Wan, L.; Zhang, Y.; Wilkes, R. J.; Minamino, A.; Super-Kamiokande Collaboration
2018-04-01
An analysis of atmospheric neutrino data from all four run periods of Super-Kamiokande, optimized for sensitivity to the neutrino mass hierarchy, is presented. Confidence intervals for Δm²₃₂, sin²θ₂₃, sin²θ₁₃, and δCP are presented for the normal and inverted neutrino mass hierarchy hypotheses, based on atmospheric neutrino data alone. Additional constraints from reactor data on θ₁₃ and from published binned T2K data on muon neutrino disappearance and electron neutrino appearance are added to the atmospheric neutrino fit to give enhanced constraints on the above parameters. Over the range of parameters allowed at the 90% confidence level, the normal mass hierarchy is favored by between 91.9% and 94.5% based on the combined Super-Kamiokande plus T2K result.
Material Behavior Based Hybrid Process for Sheet Draw-Forging Thin Walled Magnesium Alloys
International Nuclear Information System (INIS)
Sheng, Z.Q.; Shivpuri, R.
2005-01-01
Magnesium alloys are conventionally formed at elevated temperatures. The thermally improved formability is sensitive to temperature and strain rate. Due to limitations in forming speeds and tooling strength, and to narrow processing windows, complex thin-walled parts cannot be made by traditional warm drawing or hot forging processes. A hybrid process, based on the deformation mechanism of magnesium alloys at elevated temperature, is proposed that combines warm drawing and hot forging modes to produce an aggressive geometry at acceptable forming speed. The process parameters, such as temperatures and forming speeds, are determined by FEM modeling and simulation. A sensitivity analysis under the constraints of the forming limits of the Mg alloy sheet material and the strength of the tooling material is carried out. The proposed approach is demonstrated on a conical geometry with thin walls and bottom features. Results show that the designed geometry can be formed in about 8 seconds; it cannot be formed by conventional forging at all, while around 1000 s would be required for warm drawing. The process is being further investigated through controlled experiments.
Directory of Open Access Journals (Sweden)
Dan Selişteanu
2015-01-01
Monoclonal antibodies (mAbs) are at present one of the fastest-growing products of the pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAb production processes is predominantly based on empirical knowledge, with improvements achieved through trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. Using a dynamical model of such processes, an optimization-based technique for estimating the kinetic parameters of the mammalian cell culture model is developed. The estimation is achieved by minimizing an error function with a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed using a particular model of mammalian cell culture as a case study, but it is generic for this class of bioprocesses. The presented case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies.
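The PSO-based estimation loop can be sketched as follows. The logistic growth model, the synthetic data, and the PSO coefficients are all hypothetical stand-ins: the actual mammalian cell culture model has more states and parameters, but the structure (minimize a model-vs-data error function with PSO) is the same:

```python
import math
import random

random.seed(1)

# Hypothetical stand-in for the bioprocess model: logistic growth x(t; r, K).
def model(t, r, K, x0=0.1):
    return K / (1.0 + (K / x0 - 1.0) * math.exp(-r * t))

# Synthetic "measurements" generated with known parameters r=0.8, K=5.0,
# so the estimator can be checked against ground truth.
data = [(t, model(t, 0.8, 5.0)) for t in range(10)]

def error(params):
    """Sum-of-squares error between model predictions and measurements."""
    r, K = params
    return sum((model(t, r, K) - x) ** 2 for t, x in data)

def pso(n=30, iters=200, bounds=((0.01, 2.0), (1.0, 10.0))):
    """Minimal global-best PSO with inertia weight and a clamped search box."""
    dim = len(bounds)
    pos = [[random.uniform(*bounds[d]) for d in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_err = [error(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_err[i])
    gbest, gbest_err = pbest[g][:], pbest_err[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            e = error(pos[i])
            if e < pbest_err[i]:
                pbest[i], pbest_err[i] = pos[i][:], e
                if e < gbest_err:
                    gbest, gbest_err = pos[i][:], e
    return gbest, gbest_err

params, err = pso()  # params should recover (r, K) near (0.8, 5.0)
```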
Masternak, Tadeusz J.
This research determines temperature-constrained optimal trajectories for a scramjet-based hypersonic reconnaissance vehicle by developing an optimal control formulation and solving it using a variable order Gauss-Radau quadrature collocation method with a Non-Linear Programming (NLP) solver. The vehicle is assumed to be an air-breathing reconnaissance aircraft that has specified takeoff/landing locations, airborne refueling constraints, specified no-fly zones, and specified targets for sensor data collections. A three degree of freedom scramjet aircraft model is adapted from previous work and includes flight dynamics, aerodynamics, and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and coverage of high-value targets. To solve the optimal control formulation, a MATLAB-based package called General Pseudospectral Optimal Control Software (GPOPS-II) is used, which transcribes continuous time optimal control problems into an NLP problem. In addition, since a mission profile can have varying vehicle dynamics and en-route imposed constraints, the optimal control problem formulation can be broken up into several "phases" with differing dynamics and/or varying initial/final constraints. Optimal trajectories are developed using several different performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum range. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for larger-scale operational and campaign planning and execution.
DEFF Research Database (Denmark)
… such that conventional LDF (linear driving force) type models are extended to inactive zones without losing their generality. Based on a limiting component constraint, an exchange probability kernel is developed for multi-component systems. The LDF-type model with the kernel is continuous in time and in the axial direction. Two tuning parameters, the concentration layer thickness and the function change rate at the threshold point, are needed for the probability kernels; these are not sensitive to the problems considered.
International Nuclear Information System (INIS)
Attaran, Seyed Mohammad; Yusof, Rubiyah; Selamat, Hazlina
2016-01-01
Highlights: • Decoupling of a heating, ventilation, and air conditioning (HVAC) system is presented. • RBF models were identified by the epsilon constraint method for temperature and humidity. • Control settings were derived from optimization of the decoupled model. • An epsilon constraint-RBF based PID controller was implemented to maintain thermal comfort and minimize energy. • Enhancements of the controller parameters of the HVAC system are desired. - Abstract: The energy efficiency of a heating, ventilating and air conditioning (HVAC) system optimized using a radial basis function neural network (RBFNN) combined with the epsilon constraint (EC) method is reported. The new method adopts the advanced RBFNN algorithm for the HVAC system to estimate the residual errors, increase the control signal, and reduce the error results. The objective of this study is to develop and simulate an EC-RBFNN self-tuning PID controller for a decoupled bilinear HVAC system to control the temperature and relative humidity (RH) produced by the system. A case study indicates that the EC-RBFNN algorithm has much better accuracy than the optimized PID alone and than PID-RBFNN.
Lithium-Ion Battery Online Rapid State-of-Power Estimation under Multiple Constraints
Directory of Open Access Journals (Sweden)
Shun Xiang
2018-01-01
The paper aims to realize rapid online estimation of the state-of-power (SOP) of a lithium-ion battery under multiple constraints. Firstly, based on an improved first-order resistance-capacitance (RC) model with one-state hysteresis, a linear state-space battery model is built; then, using the dual extended Kalman filtering (DEKF) method, the battery parameters and states, including the open-circuit voltage (OCV), are estimated. Secondly, by employing the estimated OCV as the observed value for a second pair of dual Kalman filters, the battery SOC is estimated. Thirdly, a novel rapid peak power/SOP calculation method with multiple constraints is proposed, in which the battery's peak state is determined according to a bisection judgment method; then, one or two instantaneous peak powers are used to determine the peak power over T seconds. In addition, the constraint that actually binds the battery during operation is analyzed specifically. Finally, three simplified Federal Urban Driving Schedule (SFUDS) experiments with inserted pulses are conducted to verify the effectiveness and accuracy of the proposed online SOP estimation method.
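The "multiple constraints" idea for instantaneous peak power can be sketched with a simpler internal-resistance model: compute the limiting current implied by each constraint, let the most restrictive one bind, and evaluate the power there. All limits and parameter values below are hypothetical, and the paper's actual method uses the RC-hysteresis model plus a bisection over the T-second horizon:

```python
def peak_discharge_power(ocv, r0, soc, dt,
                         v_min=3.0, i_max=120.0,
                         soc_min=0.1, capacity_as=7200.0):
    """Instantaneous peak discharge power of a simple internal-resistance
    (Rint) battery model under three constraints: terminal-voltage floor,
    maximum current, and SOC floor over the next dt seconds. All parameter
    values are hypothetical; the paper uses an RC model with hysteresis."""
    i_volt = (ocv - v_min) / r0                 # current at which v_t = v_min
    i_soc = (soc - soc_min) * capacity_as / dt  # current draining SOC to floor
    i_pk = min(i_volt, i_max, i_soc)            # the binding constraint wins
    v_t = ocv - r0 * i_pk                       # terminal voltage at peak current
    return v_t * i_pk                           # peak power in watts
```

Which constraint binds depends on the operating point: at low OCV the voltage floor limits the current, while at high OCV the current limit takes over, mirroring the abstract's analysis of the "actual constraint" the battery is under.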
Cheng, Yu; Ye, Dong; Sun, Zhaowei; Zhang, Shijie
2018-03-01
This paper proposes a novel feedback control law for spacecraft to deal with attitude constraints, input saturation, and stochastic disturbances during the attitude reorientation maneuver. Applying a parameter selection method to improve the existence conditions for the repulsive potential function, the universality of the potential-function-based algorithm is enhanced. Moreover, utilizing an auxiliary system driven by the difference between the saturated torque and the command torque, a backstepping control law is presented that satisfies the input saturation constraint and guarantees spacecraft stability. Unlike methods that passively rely on the inherent characteristics of the existing controller to stabilize the adverse effects of external stochastic disturbance, this paper puts forward a nonlinear disturbance observer to compensate for the disturbance in real time, which achieves better robustness. The simulation results validate the effectiveness, reliability, and universality of the proposed control law.
Zahariev, Federico; De Silva, Nuwan; Gordon, Mark S; Windus, Theresa L; Dick-Perez, Marilu
2017-03-27
A newly created object-oriented program for automating the process of fitting molecular-mechanics parameters to ab initio data, termed ParFit, is presented. ParFit uses a hybrid of deterministic and stochastic genetic algorithms. ParFit can simultaneously handle several molecular-mechanics parameters in multiple molecules and can also apply symmetric and antisymmetric constraints on the optimized parameters. The simultaneous handling of several molecules enhances the transferability of the fitted parameters. ParFit is written in Python, uses a rich set of standard and nonstandard Python libraries, and can be run in parallel on multicore computer systems. As an example, a series of phosphine oxides, important for metal extraction chemistry, are parametrized using ParFit. ParFit is an open-source program available for free on GitHub ( https://github.com/fzahari/ParFit ).
Biondi, Daniela; De Luca, Davide Luciano
2015-04-01
The use of rainfall-runoff models represents an alternative to statistical approaches (such as at-site or regional flood frequency analysis) for design flood estimation, and constitutes an answer to the increasing need for synthetic design hydrographs (SDHs) associated with a specific return period. However, the lack of streamflow observations and the consequent high uncertainty associated with parameter estimation usually pose serious limitations on the use of process-based approaches in ungauged catchments, which in contrast represent the majority in practical applications. This work presents the application of a Bayesian procedure that, for a predefined rainfall-runoff model, allows for the assessment of the posterior parameter distribution, using the limited and uncertain information available for the response of an ungauged catchment (Bulygina et al. 2009; 2011). The use of regional estimates of river flow statistics, interpreted as hydrological signatures that measure theoretically relevant system process behaviours (Gupta et al. 2008), within this framework represents a valuable option and has seen significant development in recent literature as a way to constrain the plausible model response and to reduce the uncertainty in ungauged basins. In this study we rely on the first three L-moments of annual streamflow maxima, for which regressions are available from previous studies (Biondi et al. 2012; Laio et al. 2011). The methodology was carried out for a catchment located in southern Italy, and used within a Monte Carlo scheme (MCs) considering both event-based and continuous simulation approaches for design flood estimation. The applied procedure offers promising perspectives for model calibration and uncertainty analysis in ungauged basins; moreover, in the context of design flood estimation, process-based methods coupled with the MCs approach have the advantage of providing an uncertainty analysis of the simulated floods, which represents an asset in risk-based decision-making.
Constraints on vacuum energy from structure formation and Nucleosynthesis
Energy Technology Data Exchange (ETDEWEB)
Adams, Fred C.; Grohs, Evan [Physics Department, University of Michigan, 450 Church Street, Ann Arbor, MI, 48109 (United States); Alexander, Stephon [Physics Department, Brown University, 6127 Wilder Laboratory, Providence, RI, 02912 (United States); Mersini-Houghton, Laura, E-mail: fca@umich.edu, E-mail: stephon_alexander@brown.edu, E-mail: egrohs@umich.edu, E-mail: mersini@physics.unc.edu [Physics Department, University of North Carolina, 120 E. Cameron Avenue, Chapel Hill, NC, 27599 (United States)
2017-03-01
This paper derives an upper limit on the density ρ_Λ of dark energy based on the requirement that cosmological structure forms before being frozen out by the eventual acceleration of the universe. By allowing for variations in both the cosmological parameters and the strength of gravity, the resulting constraint is a generalization of previous limits. The specific parameters under consideration include the amplitude Q of the primordial density fluctuations, the Planck mass M_pl, the baryon-to-photon ratio η, and the density ratio Ω_M/Ω_b. In addition to structure formation, we use considerations from stellar structure and Big Bang Nucleosynthesis (BBN) to constrain these quantities. The resulting upper limit on the dimensionless density of dark energy becomes ρ_Λ/M_pl^4 < 10^−90, which is ∼30 orders of magnitude larger than the value in our universe, ρ_Λ/M_pl^4 ∼ 10^−120. This new limit is much less restrictive than previous constraints because additional parameters are allowed to vary. With these generalizations, a much wider range of universes can develop cosmic structure and support observers. To constrain the constituent parameters, new BBN calculations are carried out in the regime where η and G = M_pl^−2 are much larger than in our universe. If the BBN epoch were to process all of the protons into heavier elements, no hydrogen would be left behind to make water, and the universe would not be viable. However, our results show that some hydrogen is always left over, even under conditions of extremely large η and G, so that a wide range of alternate universes are potentially habitable.
Study of pipe-whip parameters in pipelines
International Nuclear Information System (INIS)
Guerreiro, J.N.C.; Loula, A.F.D.; Galeao, A.C.N.R.
1980-01-01
The problem of high-energy pipe whip, assuming elastic-plastic behavior for the tube material and taking into account the internal pressure, is studied. The constraints are simulated as bilinear springs and viscous dampers. A general analysis tool, based on the finite element method, was developed to analyse the phenomenon. The influence of the following parameters is studied: gap, damping coefficient, stiffness, constraint positioning and internal pressure of the tube. (Author)
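The gap-plus-bilinear-spring-plus-damper restraint idealisation described above can be sketched as a simple force law. All numerical values here are illustrative placeholders, not taken from the paper:

```python
# Restraint force for a whipping pipe: zero until the pipe closes the
# gap, then a bilinear spring (stiff elastic branch up to a yield
# penetration, softer branch beyond) plus a viscous damper.
def restraint_force(x, v, gap=0.05, k1=1e7, k2=1e6, x_yield=0.01, c=1e4):
    d = x - gap                     # penetration past the gap [m]
    if d <= 0.0:
        return 0.0                  # pipe has not reached the restraint
    if d <= x_yield:
        f_spring = k1 * d           # first (elastic) branch
    else:
        f_spring = k1 * x_yield + k2 * (d - x_yield)  # second branch
    return f_spring + c * v         # add viscous damping during contact

print(restraint_force(0.04, 1.0))   # still inside the gap: 0.0
print(restraint_force(0.055, 1.0))  # spring and damper engaged
```

In a finite element analysis this force would enter the equations of motion at the restraint node at each time step.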
Constraints on stellar evolution from pulsations
International Nuclear Information System (INIS)
Cox, A.N.
1983-01-01
Consideration of the many types of intrinsic variable stars, that is, those that pulsate, reveals that perhaps a dozen classes can indicate constraints that affect the results of stellar evolution calculations, or some interpretations of observations. Many of these constraints are not very strong or may not even be well defined yet. In this review we discuss only the case for six classes: classical Cepheids with their measured Wesselink radii, the observed surface effective temperatures of the eleven known double-mode Cepheids, the pulsation periods and measured surface effective temperatures of three R CrB variables, the delta Scuti variable VZ Cnc with a very large ratio of its two observed periods, the nonradial oscillations of our sun, and the period ratios of the newly discovered double-mode RR Lyrae variables. Unfortunately, the present state of knowledge about the exact compositions; mass loss and its dependence on the mass, radius, luminosity, and composition; and internal mixing processes, as well as sometimes the more basic parameters such as luminosities and surface effective temperatures, prevents us from applying strong constraints in every case where the possibility currently exists.
Nanohydroxyapatite synthesis using optimized process parameters ...
Indian Academy of Sciences (India)
3Energy Research Group, School of Engineering, Taylor's University, 47500 ... influence of different ultrasonication parameters on the prop- ... to evaluate multiple process parameters and their interaction. ..... dent and dependent variables by a 3-D representation of .... The intensities of O–H functional groups are seen to.
Constraint-based modeling in microbial food biotechnology
Rau, Martin H.
2018-01-01
Genome-scale metabolic network reconstruction offers a means to leverage the value of the exponentially growing genomics data and integrate it with other biological knowledge in a structured format. Constraint-based modeling (CBM) enables both the qualitative and quantitative analyses of the reconstructed networks. The rapid advancements in these areas can benefit both the industrial production of microbial food cultures and their application in food processing. CBM provides several avenues for improving our mechanistic understanding of physiology and genotype–phenotype relationships. This is essential for the rational improvement of industrial strains, which can further be facilitated through various model-guided strain design approaches. CBM of microbial communities offers a valuable tool for the rational design of defined food cultures, where it can catalyze hypothesis generation and provide unintuitive rationales for the development of enhanced community phenotypes and, consequently, novel or improved food products. In the industrial-scale production of microorganisms for food cultures, CBM may enable a knowledge-driven bioprocess optimization by rationally identifying strategies for growth and stability improvement. Through these applications, we believe that CBM can become a powerful tool for guiding the areas of strain development, culture development and process optimization in the production of food cultures. Nevertheless, in order to make the correct choice of the modeling framework for a particular application and to interpret model predictions in a biologically meaningful manner, one should be aware of the current limitations of CBM. PMID:29588387
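At its core, the constraint-based modeling (CBM) described above reduces to a linear program: maximise a flux objective subject to the steady-state mass balance S·v = 0 and flux bounds. A minimal sketch on a toy three-reaction network (the network, bounds and objective are illustrative, not from any genome-scale reconstruction):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network, reactions as columns:
#   R1: (uptake) -> A,  R2: A -> B,  R3: B -> (biomass sink)
S = np.array([[ 1, -1,  0],    # metabolite A balance
              [ 0,  1, -1]])   # metabolite B balance
lb = [0.0, 0.0, 0.0]
ub = [10.0, 1000.0, 1000.0]    # uptake R1 capped at 10 mmol/gDW/h
c = np.array([0.0, 0.0, 1.0])  # objective: maximise flux through R3

# linprog minimises, so negate the objective
res = linprog(-c, A_eq=S, b_eq=np.zeros(2),
              bounds=list(zip(lb, ub)), method="highs")
print(res.x)  # optimal flux distribution; limited by the uptake bound
```

In real applications S comes from the genome-scale reconstruction and the objective is typically a biomass reaction; the same LP skeleton underlies model-guided strain design methods.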
Constraints on low energy QCD parameters from η → 3π and ππ scattering
Kolesár, Marián; Novotný, Jiří
2018-03-01
The η → 3π decays are a valuable source of information on low energy QCD. Yet they had not been used for an extraction of the three flavor chiral symmetry breaking order parameters until now. We use a Bayesian approach in the framework of resummed chiral perturbation theory to obtain constraints on the quark condensate and pseudoscalar decay constant in the chiral limit. We compare our results with recent CHPT and lattice QCD fits and find some tension, as the η → 3π data seem to prefer a larger ratio of the chiral order parameters. The results also disfavor a very large value of the pseudoscalar decay constant in the chiral limit, which was found by some recent work. In addition, we present results of a combined analysis including η → 3π decays and ππ scattering; although the picture does not change appreciably, we find some tension between the data we use. We also try to extract information on the mass difference of the light quarks, but the uncertainties prove to be large.
Directory of Open Access Journals (Sweden)
Wiktor Jakowluk
2014-11-01
System identification, in practice, is carried out by perturbing processes or plants under operation. That is why in many industrial applications a plant-friendly input signal would be preferred for system identification. The goal of the study is to design the optimal input signal which is then employed in the identification experiment, and to examine the relationships between the index of friendliness of this input signal and the accuracy of parameter estimation when the measured output signal is significantly affected by noise. In this case, the objective function was formulated through maximisation of the Fisher information matrix determinant (D-optimality) expressed in conventional Bolza form. Since under such conditions of the identification experiment one can only speak of D-suboptimality, we quantify the plant trajectories using the D-efficiency measure. An additional constraint, imposed on the D-efficiency of the solution, should allow one to attain the most adequate information content from the plant whose operating point is perturbed in the least invasive (most friendly) way. A simple numerical example, which clearly demonstrates the idea presented in the paper, is included and discussed.
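The D-efficiency measure mentioned above compares the Fisher information of a candidate (plant-friendly) input against a reference design via the normalised determinant ratio. A minimal sketch for a simple two-parameter FIR model; the model structure and signals are hypothetical illustrations, not the paper's example:

```python
import numpy as np

def fisher_information(u):
    """FIM for the FIR model y(t) = theta1*u(t) + theta2*u(t-1) + e(t):
    M = sum_t phi(t) phi(t)^T with regressor phi(t) = [u(t), u(t-1)]."""
    phi = np.column_stack([u[1:], u[:-1]])
    return phi.T @ phi

def d_efficiency(M, M_ref):
    """D-efficiency = (det M / det M_ref)^(1/p), p = number of parameters."""
    p = M.shape[0]
    return (np.linalg.det(M) / np.linalg.det(M_ref)) ** (1.0 / p)

rng = np.random.default_rng(0)
u_ref = rng.choice([-1.0, 1.0], size=200)        # aggressive binary input
u_friendly = np.sin(0.05 * np.arange(200))       # smoother, plant-friendly input
eff = d_efficiency(fisher_information(u_friendly), fisher_information(u_ref))
print(eff)  # below 1: information lost relative to the aggressive input
```

A D-efficiency constraint in the input design then trades friendliness against a guaranteed fraction of the reference information content.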
Hand-Geometry Recognition Based on Contour Parameters
Veldhuis, Raymond N.J.; Bazen, A.M.; Booij, W.D.T.; Hendrikse, A.J.; Jain, A.K.; Ratha, N.K.
This paper demonstrates the feasibility of a new method of hand-geometry recognition based on parameters derived from the contour of the hand. The contour is completely determined by the black-and-white image of the hand and can be derived from it by means of simple image-processing techniques. It
Multi-atlas Based Segmentation Editing with Interaction-Guided Constraints
Park, Sang Hyun; Gao, Yaozong; Shen, Dinggang
2015-01-01
We propose a novel multi-atlas based segmentation method to address the editing scenario, when given an incomplete segmentation along with a set of training label images. Unlike previous multi-atlas based methods, which depend solely on appearance features, we incorporate interaction-guided constraints to find appropriate training labels and derive their voting weights. Specifically, we divide user interactions, provided on erroneous parts, into multiple local interaction combinations, and th...
Kumar, S.; Singh, A.; Dhar, A.
2017-08-01
The accurate estimation of photovoltaic parameters is fundamental to gain insight into the physical processes occurring inside a photovoltaic device and thereby to optimize its design, fabrication processes, and quality. A simulative approach to accurately determining the device parameters is crucial for cell array and module simulation when applied in practical on-field applications. In this work, we have developed a global particle swarm optimization (GPSO) approach to estimate the different solar cell parameters, viz., ideality factor (η), short circuit current (Isc), open circuit voltage (Voc), shunt resistance (Rsh), and series resistance (Rs), with a wide search range of over ±100% for each model parameter. After validating the accuracy and global search power of the proposed approach with synthetic and noisy data, we applied the technique to extract the PV parameters of ZnO/PCDTBT based hybrid solar cells (HSCs) prepared under different annealing conditions. Further, we examine the variation of the extracted model parameters to unveil the physical processes occurring when different annealing temperatures are employed during device fabrication, and establish the role of improved charge transport in polymer films from independent FET measurements. The evolution of surface morphology, optical absorption, and chemical composition of PCDTBT co-polymer films as a function of processing temperature has also been captured in the study and correlated with the findings from the PV parameters extracted using the GPSO approach.
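A global PSO of the kind used above can be sketched in a few lines: particles track personal and global bests while minimising the fit error over wide parameter bounds. The swarm settings and the exponential test curve below are illustrative stand-ins, not the paper's single-diode model or tuning:

```python
import numpy as np

def pso(objective, lb, ub, n_particles=30, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global PSO: inertia w, cognitive c1 and social c2
    accelerations (illustrative values), positions clipped to bounds."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    x = rng.uniform(lb, ub, (n_particles, len(lb)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Hypothetical use: recover (a, b, c) of an I-V-like curve
# i(v) = a - b*(exp(v/c) - 1) from noiseless synthetic data.
v_pts = np.linspace(0, 0.6, 50)
true = (3.0, 1e-3, 0.05)
i_meas = true[0] - true[1] * (np.exp(v_pts / true[2]) - 1)
rmse = lambda p: np.sqrt(np.mean(
    (p[0] - p[1] * (np.exp(v_pts / p[2]) - 1) - i_meas) ** 2))
best, err = pso(rmse, lb=[0, 1e-6, 0.01], ub=[10, 1e-1, 0.2])
print(best, err)
```

The real extraction problem replaces `rmse` with the error of an implicit single-diode model against measured I-V data.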
Invalid-point removal based on epipolar constraint in the structured-light method
Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin
2018-06-01
In structured-light measurement, there unavoidably exist many invalid points caused by shadows, image noise and ambient light. According to the property of the epipolar constraint, because the retrieved phase of the invalid point is inaccurate, the corresponding projector image coordinate (PIC) will not satisfy the epipolar constraint. Based on this fact, a new invalid-point removal method based on the epipolar constraint is proposed in this paper. First, the fundamental matrix of the measurement system is calculated, which will be used for calculating the epipolar line. Then, according to the retrieved phase map of the captured fringes, the PICs of each pixel are retrieved. Subsequently, the epipolar line in the projector image plane of each pixel is obtained using the fundamental matrix. The distance between the corresponding PIC and the epipolar line of a pixel is defined as the invalidation criterion, which quantifies the satisfaction degree of the epipolar constraint. Finally, all pixels with a distance larger than a certain threshold are removed as invalid points. Experiments verified that the method is easy to implement and demonstrates better performance than state-of-the-art measurement systems.
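The invalidation criterion described above is the point-to-line distance between the projector image coordinate (PIC) and the epipolar line l = F·x of the corresponding camera pixel. A minimal sketch; the fundamental matrix and coordinates are hypothetical, chosen only so the geometry is easy to check:

```python
import numpy as np

def epipolar_distance(F, cam_pt, proj_pt):
    """Distance from a projector image coordinate to the epipolar line
    l = F @ x of the camera pixel x (homogeneous coordinates)."""
    x = np.array([cam_pt[0], cam_pt[1], 1.0])
    l = F @ x                      # line (a, b, c): a*u + b*v + c = 0
    u, v = proj_pt
    return abs(l[0] * u + l[1] * v + l[2]) / np.hypot(l[0], l[1])

def remove_invalid(F, cam_pts, proj_pts, threshold=2.0):
    """Keep correspondences whose PIC lies within `threshold` pixels
    of its epipolar line; the rest are flagged as invalid points."""
    d = np.array([epipolar_distance(F, c, p)
                  for c, p in zip(cam_pts, proj_pts)])
    return d <= threshold

# Hypothetical fundamental matrix for a rectified-like geometry
F = np.array([[0, 0, 0], [0, 0, -1.0], [0, 1.0, 0]])
cam = [(100, 50), (200, 80)]
proj = [(140, 50.5), (300, 120)]     # second PIC violates the constraint
print(remove_invalid(F, cam, proj))  # [ True False]
```

In the full pipeline F is calibrated once per measurement system and the PICs come from the retrieved phase map.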
International Nuclear Information System (INIS)
Horseman, S.T.
1994-01-01
The purpose of this report, commissioned by ENRESA, is to examine the characteristics, properties and responses of argillaceous media (clays and more indurated mudrocks) in some detail in order to identify the main parameters that will influence the radiological safety of a deep underground facility for the disposal of high-level radioactive wastes (HLW) and to highlight possible constraints and other important issues relating to the construction, operation and performance of such a facility
Optimization of process parameters in precipitation for consistent quality UO2 powder production
International Nuclear Information System (INIS)
Tiwari, S.K.; Reddy, A.L.V.; Venkataswamy, J.; Misra, M.; Setty, D.S.; Sheela, S.; Saibaba, N.
2013-01-01
Nuclear reactor grade natural uranium dioxide powder is produced through the precipitation route and is further processed before being converted into the sintered pellets used in the fabrication of PHWR fuel assemblies for 220 and 540 MWe type reactors. The precipitation of Uranyl Nitrate Pure Solution (UNPS) is an important step in the UO2 powder production line, wherein soluble uranium is transformed into the solid form of Ammonium Uranate (AU), which in turn reflects and decides the powder characteristics. Precipitation of UNPS with vapour ammonia is carried out in a semi-batch process, and process parameters such as ammonia flow rate, temperature, concentration of UNPS and free acidity of UNPS are very critical and decide the UO2 powder quality. Variation in these critical parameters influences the powder characteristics, which in turn influence the sinterability of the UO2 powder. In order to obtain consistent powder quality and sinterability, the critical parameter of ammonia flow rate during precipitation is studied, optimized and validated. The critical process parameters are controlled through PLC based automated on-line data acquisition systems to achieve consistent powder quality with increased recovery and production. The present paper covers the optimization of process parameters and powder characteristics. (author)
Optimization of a Cu CMP process modeling parameters of nanometer integrated circuits
International Nuclear Information System (INIS)
Ruan Wenbiao; Chen Lan; Ma Tianyu; Fang Jingjing; Zhang He; Ye Tianchun
2012-01-01
A copper chemical mechanical polishing (Cu CMP) process is reviewed and analyzed from the viewpoint of chemical physics. A three-step Cu CMP process model is set up based on the actual manufacturing process and the pattern-density-step-height (PDSH) model from MIT. To capture the pattern dependency, a 65 nm test chip was designed and processed in the foundry. Following the model parameter extraction procedure, the model parameters are extracted and verified against test data from the 65 nm test chip. A comparison between the model predictions and test data shows that the former has the same trend as the latter and the largest deviation is less than 5 nm. Third-party test data give further evidence of the performance of the model parameter optimization. Since precise CMP process modeling is used for design for manufacturability (DFM) checks, critical hotspots are displayed and eliminated, which will assure good yield and production capacity of ICs. (semiconductor technology)
On selecting a prior for the precision parameter of Dirichlet process mixture models
Dorazio, R.M.
2009-01-01
In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
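The link between the precision parameter α and the level of clustering is explicit: for n observations, the prior expected number of clusters is E[K | α, n] = Σ_{i=0}^{n-1} α/(α+i). A minimal sketch of centring a prior guess on a target cluster count (the target of 5 clusters among n = 100 is a hypothetical illustration, not the paper's procedure):

```python
import numpy as np

def expected_clusters(alpha, n):
    """E[K | alpha, n] = sum_{i=0}^{n-1} alpha / (alpha + i) for a
    Dirichlet process with precision alpha and n observations."""
    i = np.arange(n)
    return np.sum(alpha / (alpha + i))

# Solve expected_clusters(alpha, n) = target by bisection
# (the expectation is monotonically increasing in alpha).
n, target = 100, 5
lo, hi = 1e-3, 100.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if expected_clusters(mid, n) < target:
        lo = mid
    else:
        hi = mid
alpha_star = 0.5 * (lo + hi)
print(alpha_star, expected_clusters(alpha_star, n))
```

Such a mapping is one way to see why inferences about clustering are sensitive to the prior placed on α.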
Clark, M. P.; Nijssen, B.; Wood, A.; Mizukami, N.; Newman, A. J.
2017-12-01
The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.
Yoo, C. J.; Shin, B. S.; Kang, B. S.; Yun, D. H.; You, D. B.; Hong, S. M.
2017-09-01
In this paper, we propose a new porous polymer printing technology based on a chemical blowing agent (CBA), and describe the optimization process according to the process parameters. By mixing polypropylene (PP) and CBA, a hybrid CBA filament was manufactured; the diameter of the filament ranged between 1.60 mm and 1.75 mm. A porous polymer structure was manufactured based on the traditional fused deposition modelling (FDM) method. The process parameters of the three-dimensional (3D) porous polymer printing (PPP) process included nozzle temperature, printing speed, and CBA density. Porosity increases with an increase in nozzle temperature and CBA density; conversely, porosity increases with a decrease in printing speed. The resulting porous structures have good mechanical properties. We manufactured a simple 3D shape using the 3D PPP technology. In the future, we will study the mechanical properties of 3D PPP technology further and apply it to various safety fields.
Linear determining equations for differential constraints
International Nuclear Information System (INIS)
Kaptsov, O V
1998-01-01
A construction of differential constraints compatible with partial differential equations is considered. Certain linear determining equations with parameters are used to find such differential constraints. They generalize the classical determining equations used in the search for admissible Lie operators. As applications of this approach, the equations of an ideal incompressible fluid and non-linear heat equations are discussed.
Directory of Open Access Journals (Sweden)
Meleiro L.A.C.
2000-01-01
Most advanced computer-aided control applications rely on good dynamic process models. The performance of the control system depends on the accuracy of the model used. Typically, such models are developed by conducting off-line identification experiments on the process. These identification experiments often result in input-output data with a small output signal-to-noise ratio, and using these data results in inaccurate model parameter estimates [1]. In this work, a multivariable adaptive self-tuning controller (STC) was developed for a biotechnological process application. Due to the difficulties involved in the measurements, or the excessive number of variables normally found in industrial processes, it is proposed to develop "soft sensors" based fundamentally on artificial neural networks (ANN). A second approach was based on hybrid models, resulting from the association of deterministic models (which incorporate the available prior knowledge about the process being modeled) with artificial neural networks. In this case, kinetic parameters, which are very hard to determine accurately in real-time operation of industrial plants, were obtained using ANN predictions. These methods are especially suitable for the identification of time-varying and nonlinear models. This advanced control strategy was applied to a fermentation process producing ethyl alcohol (ethanol) on an industrial scale. The reaction rates considered for substrate consumption, cell growth and ethanol production were validated with industrial data for typical operating conditions. The results obtained show that the procedure proposed in this work has great potential for application.
Equation-free analysis of agent-based models and systematic parameter determination
Thomas, Spencer A.; Lloyd, David J. B.; Skeldon, Anne C.
2016-12-01
Agent-based models (ABMs) are increasingly used in social science, economics, mathematics, biology and computer science to describe time-dependent systems in circumstances where a description in terms of equations is difficult. Yet few tools are currently available for the systematic analysis of ABM behaviour. Numerical continuation and bifurcation analysis is a well-established tool for the study of deterministic systems. Recently, equation-free (EF) methods have been developed to extend numerical continuation techniques to systems where the dynamics are described at a microscopic scale and continuation of a macroscopic property of the system is considered. To date, the practical use of EF methods has been limited by: (1) the overhead of application-specific implementation; (2) the laborious configuration of problem-specific parameters; and (3) large ensemble sizes (potentially) leading to computationally restrictive run-times. In this paper we address these issues with our tool for the EF continuation of stochastic systems, which includes algorithms to systematically configure problem-specific parameters and enhance robustness to noise. Our tool is generic, can be applied to any 'black-box' simulator, and determines the essential EF parameters prior to EF analysis. Robustness is significantly improved using our convergence-constraint with corrector-repeat (C3R) method. This algorithm automatically detects outliers based on the dynamics of the underlying system, enabling both an order of magnitude reduction in ensemble size and continuation of systems at much higher levels of noise than classical approaches. We demonstrate our method with application to several ABM models, revealing parameter dependence, bifurcation and stability analysis of these complex systems and giving a deep understanding of the dynamical behaviour of the models in a way that is not otherwise easily obtainable. In each case we demonstrate our systematic parameter determination stage for
Franco-Watkins, Ana M; Davis, Matthew E; Johnson, Joseph G
2016-11-01
Many decisions are made under suboptimal circumstances, such as time constraints. We examined how different experiences of time constraints affected decision strategies on a probabilistic inference task and whether individual differences in working memory accounted for complex strategy use across different levels of time. To examine information search and attentional processing, we used an interactive eye-tracking paradigm where task information was occluded and only revealed by an eye fixation to a given cell. Our results indicate that although participants change search strategies during the most restricted times, the occurrence of the shift in strategies depends both on how the constraints are applied as well as individual differences in working memory. This suggests that, in situations that require making decisions under time constraints, one can influence performance by being sensitive to working memory and, potentially, by acclimating people to the task time gradually.
Microcontroller-based process emulator
Jovrea Titus Claudiu
2009-01-01
This paper describes the design of a microcontroller-based emulator for a conventional industrial process. The emulator is built around a microcontroller and is used for testing and evaluating the performance of industrial regulators. The parameters of the emulated process are fully customizable online and can be downloaded through a serial link from a personal computer.
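The core of such an emulator is a discrete process update executed once per sample period. A minimal sketch of a first-order-plus-dead-time emulation loop; the gain, time constant and dead time are hypothetical placeholders for the online-customisable parameters:

```python
# First-order process with dead time, discretised with an implicit
# Euler step: y[k+1] = y[k] + dt/(tau+dt) * (K*u_delayed - y[k]).
K, tau, dt = 2.0, 5.0, 0.1          # gain, time constant [s], sample period [s]
dead_samples = 10                   # 1 s of dead time as a delay line
a = dt / (tau + dt)

buf = [0.0] * dead_samples          # dead-time buffer (initially at rest)
y = 0.0
log = []
for k in range(300):
    u = 1.0                         # step input from the regulator under test
    buf.append(u)
    u_delayed = buf.pop(0)          # input seen by the process 1 s later
    y += a * (K * u_delayed - y)
    log.append(y)
print(log[-1])                      # settles near the steady-state gain K
```

On the microcontroller the same update would run in a timer interrupt, with `K`, `tau` and the dead time received over the serial link.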
International Nuclear Information System (INIS)
Mir, S.A.; Padma, T.
2016-01-01
Due to the overwhelmingly complex and vague nature of interactions between the multiple factors describing agriculture, Multi-Criteria Decision Making (MCDM) methods are widely used from farm to fork to facilitate systematic and transparent decision support, figure out multiple decision outcomes and equip the decision maker with confident decision choices in order to choose the best alternative. This research proposes a Fuzzy Analytical Hierarchy Process (FAHP) based decision support system to evaluate and prioritize important factors of rice production practices and constraints under temperate climatic conditions, and provides estimates of weightings, which measure the relative importance of critical factors of the crop under biotic, abiotic, socio-economic and technological settings. The results show that flood, drought, water logging, late sali, temperature and rainfall are important constraints. However, regulating transplantation time; maintaining planting density; providing training to educated farmers; introducing high-productivity varieties like Shalimar Rice-1 and Jhelum; and better management of nutrients, weeds and diseases are the most important opportunities to enhance rice production in the region. Therefore, the proposed system supplies farmers with precise decision information about important rice production practices, opportunities and constraints.
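The prioritisation step underlying AHP-style methods derives criterion weights from a pairwise comparison matrix via its principal eigenvector, with a consistency check on the judgments. A minimal sketch of the crisp AHP step (the FAHP variant aggregates fuzzy pairwise judgments first); the 3×3 matrix and criterion names are hypothetical:

```python
import numpy as np

# Hypothetical Saaty-scale pairwise comparisons for three constraint
# criteria, e.g. flood vs drought vs temperature.
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   3.0],
              [1/5.0, 1/3.0, 1.0]])

w, v = np.linalg.eig(A)
k = np.argmax(w.real)               # principal eigenvalue lambda_max
weights = np.abs(v[:, k].real)
weights /= weights.sum()            # priority vector (sums to 1)

lam_max = w.real[k]
n = A.shape[0]
ci = (lam_max - n) / (n - 1)        # consistency index
cr = ci / 0.58                      # consistency ratio, Saaty's RI for n = 3
print(weights, cr)                  # CR < 0.1 => acceptably consistent
```

The resulting weights are what rank the production constraints and opportunities against each other.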
Constraints on variations in inflaton decay rate from modulated preheating
Energy Technology Data Exchange (ETDEWEB)
Mazumdar, Arindam [Theory Division, Saha Institute of Nuclear Physics, 1/AF Bidhannagar, Kolkata-64 (India); Modak, Kamakshya Prasad, E-mail: arindam.mazumdar@saha.ac.in, E-mail: kamakshya.modak@saha.ac.in [Astroparticle Physics and Cosmology Division, Saha Institute of Nuclear Physics, 1/AF Bidhannagar, Kolkata-64 (India)
2016-06-01
Modulated (p)reheating is thought to be an alternative mechanism for producing super-horizon curvature perturbations in the CMB. But the large non-Gaussianity and isocurvature perturbations produced by this mechanism rule out its acceptability as the sole process responsible for generating CMB perturbations. We explore the situation where CMB perturbations are mostly generated by the usual quantum fluctuations of the inflaton during inflation, but a modulated coupling constant between the inflaton and a secondary scalar affects the preheating process and produces some extra curvature perturbations. If the modulating scalar field is considered to be a dark matter candidate, the coupling constant between the fields has to be unnaturally fine-tuned in order to keep the local-form non-Gaussianity and the amplitude of isocurvature perturbations within observational limits; otherwise the parameters of the models have to be tightly constrained. Those constraints imply that the curvature perturbations generated by modulated preheating should be less than 15% of the total observed CMB perturbations. On the other hand, if the modulating scalar field is not a dark matter candidate, the parameters of the models cannot be constrained, but the constraints on the maximum amount of curvature perturbations coming from modulated preheating remain valid.
Parameters Tuning of Model Free Adaptive Control Based on Minimum Entropy
Institute of Scientific and Technical Information of China (English)
Chao Ji; Jing Wang; Liulin Cao; Qibing Jin
2014-01-01
The dynamic-linearization-based model-free adaptive control (MFAC) algorithm has been widely used in practical systems, in which some parameters should be tuned before it is successfully applied to process industries. Considering the random noise existing in real processes, a parameter tuning method based on minimum entropy optimization is proposed, in which the feature of entropy is used to accurately describe the system uncertainty. For cases of Gaussian and non-Gaussian stochastic noise, an entropy recursive optimization algorithm is derived based on an approximate or identified model. Extensive simulation results show the effectiveness of minimum entropy optimization for the partial-form dynamic-linearization-based MFAC. The parameters tuned by the minimum entropy optimization index show stronger stability and more robustness than those tuned by traditional indices, such as the integral of squared error (ISE) or the integral of time-weighted absolute error (ITAE), when system stochastic noise exists.
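As a rough illustration of the minimum-entropy tuning idea (this is not the authors' algorithm; the plant, candidate gains and heavy-tailed noise model below are all invented for the sketch), one can score candidate controller gains by a histogram-based entropy estimate of the closed-loop error and pick the gain with the lowest error entropy:

```python
import math
import random

def run_loop(gain, noise, setpoint=1.0):
    """Simulate y[k+1] = 0.9*y[k] + 0.1*u[k] + noise[k] under proportional
    control u = gain * (setpoint - y); return the error sequence."""
    y, errors = 0.0, []
    for w in noise:
        e = setpoint - y
        errors.append(e)
        y = 0.9 * y + 0.1 * gain * e + w
    return errors

def histogram_entropy(samples, bins=20):
    """Crude Shannon entropy estimate (in nats) from a histogram."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for s in samples:
        counts[min(int((s - lo) / width), bins - 1)] += 1
    n = len(samples)
    return -sum(c / n * math.log(c / n) for c in counts if c)

random.seed(0)
# Heavy-tailed (non-Gaussian) disturbance: small noise plus rare shocks.
noise = [random.gauss(0, 0.01) + (0.2 if random.random() < 0.02 else 0.0)
         for _ in range(400)]

gains = [0.5 * (i + 1) for i in range(10)]  # candidate controller gains
best = min(gains, key=lambda g: histogram_entropy(run_loop(g, noise)))
print("entropy-optimal gain:", best)
```

Unlike ISE/ITAE, the entropy score ignores where the error distribution is centered and penalizes only its dispersion, which is why it behaves differently under non-Gaussian shocks.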
A possibility for obtaining constraints in extended supersymmetry
International Nuclear Information System (INIS)
Hruby, J.
1980-01-01
Based on models in which central charges appear, an idea is proposed for constructing supersymmetry models where constraints are given automatically. The idea rests on the deep relation between number systems (complex numbers, quaternions, octonions) and supersymmetry. It is shown that in supermodels with topological excitations, which are equivalent to the super CP model, the central charges appear due to the O(2) extended supersymmetry. In O(2) extended supersymmetry the central charge is proportional to the mass parameter.
Data assimilation with inequality constraints
Thacker, W. C.
If values of variables in a numerical model are limited to specified ranges, these restrictions should be enforced when data are assimilated. The simplest option is to assimilate without regard for constraints and then to correct any violations without worrying about additional corrections implied by correlated errors. This paper addresses the incorporation of inequality constraints into the standard variational framework of optimal interpolation with emphasis on our limited knowledge of the underlying probability distributions. Simple examples involving only two or three variables are used to illustrate graphically how active constraints can be treated as error-free data when background errors obey a truncated multi-normal distribution. Using Lagrange multipliers, the formalism is expanded to encompass the active constraints. Two algorithms are presented, both relying on a solution ignoring the inequality constraints to discover violations to be enforced. While explicitly enforcing a subset can, via correlations, correct the others, pragmatism based on our poor knowledge of the underlying probability distributions suggests the expedient of enforcing them all explicitly to avoid the computationally expensive task of determining the minimum active set. If additional violations are encountered with these solutions, the process can be repeated. Simple examples are used to illustrate the algorithms and to examine the nature of the corrections implied by correlated errors.
An agent-based analysis of the German electricity market with transmission capacity constraints
International Nuclear Information System (INIS)
Veit, Daniel J.; Weidlich, Anke; Krafft, Jacob A.
2009-01-01
While some agent-based models have been developed for analyzing the German electricity market, there has been little research done on the emerging issue of intra-German congestion and its effects on the bidding behavior of generator agents. Yet, studies of other markets have shown that transmission grid constraints considerably affect strategic behavior in electricity markets. In this paper, the implications of transmission constraints on power markets are analyzed for the case of Germany. Market splitting is applied in the case of congestion in the grid. For this purpose, the agent-based modeling of electricity systems (AMES) market package developed by Sun and Tesfatsion is modified to fit the German context, including a detailed representation of the German high-voltage grid and its interconnections. Implications of transmission constraints on prices and social welfare are analyzed for scenarios that include strategic behavior of market participants and high wind power generation. It can be shown that strategic behavior and transmission constraints are inter-related and may pose severe problems in the future German electricity market.
Klamt, Steffen; Müller, Stefan; Regensburger, Georg; Zanghellini, Jürgen
2018-02-07
The optimization of metabolic rates (as linear objective functions) represents the methodical core of flux-balance analysis techniques which have become a standard tool for the study of genome-scale metabolic models. Besides (growth and synthesis) rates, metabolic yields are key parameters for the characterization of biochemical transformation processes, especially in the context of biotechnological applications. However, yields are ratios of rates, and hence the optimization of yields (as nonlinear objective functions) under arbitrary linear constraints is not possible with current flux-balance analysis techniques. Despite the fundamental importance of yields in constraint-based modeling, a comprehensive mathematical framework for yield optimization is still missing. We present a mathematical theory that allows one to systematically compute and analyze yield-optimal solutions of metabolic models under arbitrary linear constraints. In particular, we formulate yield optimization as a linear-fractional program. For practical computations, we transform the linear-fractional yield optimization problem to a (higher-dimensional) linear problem. Its solutions determine the solutions of the original problem and can be used to predict yield-optimal flux distributions in genome-scale metabolic models. For the theoretical analysis, we consider the linear-fractional problem directly. Most importantly, we show that the yield-optimal solution set (like the rate-optimal solution set) is determined by (yield-optimal) elementary flux vectors of the underlying metabolic model. However, yield- and rate-optimal solutions may differ from each other, and hence optimal (biomass or product) yields are not necessarily obtained at solutions with optimal (growth or synthesis) rates. Moreover, we discuss phase planes/production envelopes and yield spaces, in particular, we prove that yield spaces are convex and provide algorithms for their computation. We illustrate our findings by a small
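The linear-fractional character of yield optimization described above can be illustrated with a toy two-flux model (the bounds and the capacity constraint below are invented for the sketch; the paper's actual device is a transformation of the linear-fractional program to a higher-dimensional linear program, which is not reproduced here). The sketch shows the abstract's key point: the yield optimum and the rate optimum sit at different boundary points of the flux polytope.

```python
# Toy model: substrate uptake flux v1 in [1, 10], product flux v2 bounded by
# an invented capacity constraint v2 <= 2 + 0.5*v1.
# Product yield is the flux ratio v2/v1 -- a linear-fractional objective.

def max_rate():
    """Rate optimum: push v1 and v2 to their upper bounds."""
    v1 = 10.0
    return v1, 2.0 + 0.5 * v1

def max_yield(step=0.001):
    """Yield optimum by scanning v1 (at the optimum, v2 sits on its bound,
    so the scan stays on the boundary of the polytope)."""
    best = None
    v1 = 1.0
    while v1 <= 10.0:
        v2 = 2.0 + 0.5 * v1
        y = v2 / v1
        if best is None or y > best[0]:
            best = (y, v1, v2)
        v1 += step
    return best

y_best, v1_y, v2_y = max_yield()
v1_r, v2_r = max_rate()
print("yield-optimal flux:", (v1_y, v2_y), "yield =", y_best)
print("rate-optimal flux :", (v1_r, v2_r), "yield =", v2_r / v1_r)
```

Here the yield-optimal solution (v1 = 1, yield 2.5) and the rate-optimal solution (v2 = 7, yield 0.7) differ, mirroring the abstract's observation that optimal yields are not necessarily obtained at rate-optimal solutions.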
MCO closure welding process parameter development and qualification
International Nuclear Information System (INIS)
CANNELL, G.R.
2003-01-01
One of the key elements in the SNF process is final closure of the MCO by welding. Fuel is loaded into the MCO (approximately 2 ft. in diameter and 13 ft. long) and a heavy shield plug is inserted into the top, creating a mechanical seal. The plug contains several process ports for various operations, including vacuum drying and inert-gas backfilling of the packaged fuel. When fully processed, the Canister Cover Assembly (CCA) is placed over the shield plug and the final closure is made by welding. The following reports on the joint effort between Amer Industrial Technology (AIT) and Fluor Hanford (FH) to develop and qualify the welding process for making the final closure, with primary emphasis on developing a set of robust parameters for deposition of the root pass. Work was carried out in three phases: (1) initial welding process and equipment selection with subsequent field demonstration testing; (2) development and qualification of a specific process technique and parameters; and (3) validation of the process and parameters at the CSB under mock production conditions. This work establishes the process technique and parameters that provide a high level of confidence that acceptable MCO closure welds will be made on a consistent and repeatable basis.
Optimization of machining parameters of turning operations based on multi performance criteria
Directory of Open Access Journals (Sweden)
N.K.Mandal
2013-01-01
The selection of optimum machining parameters plays a significant role in ensuring product quality, reducing manufacturing cost and increasing productivity in computer-controlled manufacturing processes. For many years, multi-objective optimization of turning, owing to the inherent complexity of the process, has been a competitive engineering issue. This study investigates multi-response optimization of the turning process for an optimal parametric combination yielding minimum power consumption, surface roughness and frequency of tool vibration, using Grey relational analysis (GRA). A confirmation test is conducted for the optimal machining parameters to validate the result. Various turning parameters, such as spindle speed, feed and depth of cut, are considered. Experiments are designed and conducted based on a full factorial design of experiment.
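A minimal Grey relational analysis sketch (the four trial responses below are invented, and smaller-the-better normalization is assumed for all three responses, matching the abstract's objectives):

```python
# Hypothetical responses for four parameter combinations:
# (power consumption W, surface roughness um, tool vibration Hz),
# all smaller-the-better.
trials = {
    "A": [320.0, 1.8, 140.0],
    "B": [290.0, 2.4, 150.0],
    "C": [310.0, 1.5, 120.0],
    "D": [350.0, 2.1, 160.0],
}

def grey_relational_grade(data, zeta=0.5):
    cols = list(zip(*data.values()))
    # Smaller-the-better normalization to [0, 1].
    norm = {k: [(max(c) - x) / (max(c) - min(c))
                for x, c in zip(v, cols)] for k, v in data.items()}
    # Deviation from the ideal (all-ones) sequence.
    dev = {k: [1.0 - x for x in v] for k, v in norm.items()}
    dmin = min(min(v) for v in dev.values())
    dmax = max(max(v) for v in dev.values())
    # Grey relational coefficients, averaged into one grade per trial.
    coef = {k: [(dmin + zeta * dmax) / (d + zeta * dmax) for d in v]
            for k, v in dev.items()}
    return {k: sum(v) / len(v) for k, v in coef.items()}

grades = grey_relational_grade(trials)
best = max(grades, key=grades.get)
print("grey relational grades:", grades, "-> best trial:", best)
```

The single grade per trial is what turns the multi-response problem into a one-dimensional ranking, which is the practical appeal of GRA.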
On state estimation and fusion with elliptical constraints
Energy Technology Data Exchange (ETDEWEB)
Rao, Nageswara S. [ORNL; Liu, Qiang [ORNL
2017-11-01
We consider tracking of a target with elliptical nonlinear constraints on its motion dynamics. The state estimates are generated by sensors and sent over long-haul links to a remote fusion center for fusion. We show that the constraints can be projected onto the known ellipse and hence incorporated into the estimation and fusion process. In particular, two methods based on (i) direct connection to the center, and (ii) shortest distance to the ellipse are discussed. A tracking example is used to illustrate the tracking performance using projection-based methods with various fusers in the lossy long-haul tracking environment.
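The two projection methods mentioned in the abstract can be sketched for a 2-D ellipse (the axis lengths and the test point below are invented, and the nearest-point search is a brute-force scan over the parametrization rather than the paper's formulation):

```python
import math

# Ellipse (x/a)^2 + (y/b)^2 = 1, centered at the origin (illustrative axes).
A_AX, B_AX = 4.0, 2.0

def project_center_ray(x, y):
    """Method (i): move the estimate along the ray to the ellipse center."""
    t = 1.0 / math.hypot(x / A_AX, y / B_AX)
    return x * t, y * t

def project_nearest(x, y, n=20000):
    """Method (ii): nearest ellipse point, found by scanning the
    parametrization (a*cos(theta), b*sin(theta))."""
    return min(
        ((A_AX * math.cos(2 * math.pi * k / n),
          B_AX * math.sin(2 * math.pi * k / n)) for k in range(n)),
        key=lambda q: (q[0] - x) ** 2 + (q[1] - y) ** 2)

p = (5.0, 0.5)                 # raw state estimate violating the constraint
ray_pt = project_center_ray(*p)
near_pt = project_nearest(*p)
print("center-ray projection:", ray_pt)
print("nearest-point projection:", near_pt)
```

Both projected points satisfy the ellipse equation; the shortest-distance projection moves the estimate by no more than the center-ray one does, at the cost of a search.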
Liang, Mei; Sun, Xiao-gang; Luan, Mei-sheng
2015-10-01
Temperature measurement is one of the important factors for ensuring product quality, reducing production cost and ensuring experiment safety in industrial manufacture and scientific experiments. Radiation thermometry is the main method for non-contact temperature measurement. The second measurement (SM) method is one of the common methods in multispectral radiation thermometry; however, the SM method cannot be applied to on-line data processing. To solve these problems, a rapid inversion method for multispectral radiation true temperature measurement is proposed, and constraint conditions for the emissivity model are introduced based on the multispectral brightness temperature model. For a non-blackbody, it can be shown from the relationship of brightness temperatures at different wavelengths that emissivity is an increasing function in an interval if the brightness temperature is an increasing or constant function in that range, and that emissivity satisfies an inequality relating emissivity and wavelength in an interval if the brightness temperature is a decreasing function in that range. With the emissivity model constraint conditions applied on the basis of brightness temperature information, the construction of emissivity assumption values is reduced from multiple classes to one class, and unnecessary emissivity constructions are avoided. Simulation experiments and comparisons for two different temperature points are carried out based on five measured targets with five representative variation trends of real emissivity: decreasing monotonically, increasing monotonically, first decreasing with wavelength and then increasing, first increasing and then decreasing, and fluctuating with wavelength randomly. The simulation results show that, compared with the SM method, for the same target under the same initial temperature and emissivity search range, the processing speed of the proposed algorithm is increased by 19.16%-43.45% with the same precision and the same calculation results.
Estimating Soil Hydraulic Parameters using Gradient Based Approach
Rai, P. K.; Tripathi, S.
2017-12-01
The conventional way of estimating the parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from a forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting up of initial and boundary conditions, and formation of difference equations in case the forward solution is obtained numerically. Gaussian-process-based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of ordinary differential equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to partial differential equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimate the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require explicitly setting up initial and boundary conditions, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
International Nuclear Information System (INIS)
Milewski, J.O.; Lambrakos, S.G.
1995-01-01
This report presents a general overview of a method of numerically modelling deep penetration welding processes using geometric constraints based on boundary information obtained from experiment. General issues are considered concerning accurate numerical calculation of temperature and velocity fields in regions of the meltpool where the flow of fluid is characterized by quasi-stationary Stokes flow. It is this region of the meltpool which is closest to the heat-affected-zone (HAZ) and which represents a significant fraction of the fusion zone (FZ)
Directory of Open Access Journals (Sweden)
Guiqiang Li
2016-12-01
Electrical efficiency can be increased by combining photovoltaic (PV) and thermoelectric (TE) systems. However, a simple and cursory combination is unsuitable, because the negative impact of temperature on PV may be greater than its positive impact on TE. This study analyzed the primary constraint conditions based on a hybrid system model consisting of a PV module and a TE generator (TEG), which includes TE material with temperature-dependent properties. The influences of the geometric size, solar irradiation and cold-side temperature on the hybrid system performance are discussed based on the simulation. Furthermore, the effective range of parameters is demonstrated using the image area method, and the change trend of the area with different parameters illustrates the constraint conditions of an efficient PV-TE hybrid system. These results provide a benchmark for efficient PV-TEG design.
Search strategies in practice: Influence of information and task constraints.
Pacheco, Matheus M; Newell, Karl M
2018-01-01
The practice of a motor task has been conceptualized as a process of search through a perceptual-motor workspace. The present study investigated the influence of information and task constraints on the search strategy, as reflected in the sequential relations of the outcome, in a discrete-movement virtual projectile task. The results showed that the relation of trial-to-trial changes in movement outcome to performance level was dependent on the landscape of the task dynamics and the influence of inherent variability. Furthermore, the search was confined to a constrained parameter region of the perceptual-motor workspace that depended on the task constraints. These findings show that there is not a single function of trial-to-trial change over practice, but rather that local search strategies (proportional, discontinuous, constant) adapt to the level of performance and the confluence of constraints on action. Copyright © 2017 Elsevier B.V. All rights reserved.
Thermo-mechanical simulation and parameters optimization for beam blank continuous casting
International Nuclear Information System (INIS)
Chen, W.; Zhang, Y.Z.; Zhang, C.J.; Zhu, L.G.; Lu, W.G.; Wang, B.X.; Ma, J.H.
2009-01-01
The objective of this work is to optimize the process parameters of beam blank continuous casting in order to ensure high quality and productivity. A transient thermo-mechanical finite element model is developed to compute the temperature and stress profiles in beam blank continuous casting. By comparing the calculated data with the metallurgical constraints, the key factors causing defects of the beam blank can be found. Then, based on the subproblem approximation method, an optimization program is developed to search for the optimum cooling parameters. Those optimum parameters make it possible to run the caster at maximum productivity and minimum cost and to reduce defects. Online verification of this optimization scheme has now been put into practice, demonstrating that it is very useful for controlling actual production.
Wang, Shiyao; Deng, Zhidong; Yin, Gang
2016-02-24
A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively fulfill precise error correction over a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by the grid size. Finally, we perform extensive field tests on a diversity of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car.
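The predict-then-gate idea behind the outlier removal can be sketched with a constant-velocity predictor standing in for the paper's bank of ARMA models (the track, noise levels, injected jump positions and gating threshold below are all invented):

```python
import random

random.seed(7)

# Simulated 1-D position track: smooth motion plus injected multipath jumps.
true_pos = [0.1 * k for k in range(100)]
meas = [p + random.gauss(0, 0.05) for p in true_pos]
for k in (30, 60, 85):                  # GPS multipath outliers
    meas[k] += 3.0

# Constant-velocity predictor (a crude stand-in for the ARMA model bank);
# the velocity is estimated from a clean warm-up window.
v = (meas[24] - meas[0]) / 24
THRESH = 0.6                            # consensus gate on prediction error

filtered, outliers = [meas[0]], []
for k in range(1, len(meas)):
    pred = filtered[-1] + v
    if abs(meas[k] - pred) > THRESH:    # reject: hold the model prediction
        outliers.append(k)
        filtered.append(pred)
    else:                               # fuse prediction and measurement
        filtered.append(0.5 * pred + 0.5 * meas[k])

print("rejected measurement indices:", outliers)
```

Measurements consistent with the model prediction are fused; the three injected jumps fail the gate and are replaced by the prediction, so they never corrupt the fused track.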
New constraints on the running-mass inflation model
International Nuclear Information System (INIS)
Covi, L.; Lyth, D.H.; Melchiorri, A.
2002-10-01
We evaluate new observational constraints on the two-parameter scale-dependent spectral index predicted by the running-mass inflation model by combining the latest cosmic microwave background (CMB) anisotropy measurements with the recent 2dFGRS data on the matter power spectrum, with Lyman α forest data and finally with theoretical constraints on the reionization redshift. We find that present data still allow significant scale-dependence of n, which occurs in a physically reasonable regime of parameter space. (orig.)
Unitarity constraints on trimaximal mixing
International Nuclear Information System (INIS)
Kumar, Sanjeev
2010-01-01
When the neutrino mass eigenstate ν 2 is trimaximally mixed, the mixing matrix is called trimaximal. The middle column of the trimaximal mixing matrix is identical to tribimaximal mixing and the other two columns are subject to unitarity constraints. This corresponds to a mixing matrix with four independent parameters in the most general case. Apart from the two Majorana phases, the mixing matrix has only one free parameter in the CP conserving limit. Trimaximality results in interesting interplay between mixing angles and CP violation. A notion of maximal CP violation naturally emerges here: CP violation is maximal for maximal 2-3 mixing. Similarly, there is a natural constraint on the deviation from maximal 2-3 mixing which takes its maximal value in the CP conserving limit.
Temporal Concurrent Constraint Programming
DEFF Research Database (Denmark)
Nielsen, Mogens; Palamidessi, Catuscia; Valencia, Frank Dan
2002-01-01
The ntcc calculus is a model of non-deterministic temporal concurrent constraint programming. In this paper we study behavioral notions for this calculus. In the underlying computational model, concurrent constraint processes are executed in discrete time intervals. The behavioral notions studied...
van Doorn, Erik A.
2015-01-01
We study the decay parameter (the rate of convergence of the transition probabilities) of a birth-death process on $\\{0,1,...\\}$, which we allow to evanesce by escape, via state 0, to an absorbing state -1. Our main results are representations for the decay parameter under four different scenarios,
Decisive Constraints as a Creative Resource in Interaction Design
DEFF Research Database (Denmark)
Biskjaer, Michael Mose; Halskov, Kim
2014-01-01
This article explores the observation that highly limiting, creative decisions of voluntary self-binding that radically prune the design solution space may in fact fuel and accelerate the process toward an innovative final design. To gain insight into this phenomenon, we propose the concept of 'decisive constraints', based on a review of current, but dispersed, studies into creativity constraints. We build decisive constraints on two definitional conditions related to radical decision-making and creative turning points. To test our concept analytically and ensure its relevance to creative practice, we apply the two definitional conditions to three media façade installation projects in which our interaction design research lab has been involved. In accord with insights from these case analyses, we argue that decisive constraints may inform current research into design processes and act as a creative resource in interaction design.
Financing Constraints and Entrepreneurship
William R. Kerr; Ramana Nanda
2009-01-01
Financing constraints are one of the biggest concerns impacting potential entrepreneurs around the world. Given the important role that entrepreneurship is believed to play in the process of economic growth, alleviating financing constraints for would-be entrepreneurs is also an important goal for policymakers worldwide. We review two major streams of research examining the relevance of financing constraints for entrepreneurship. We then introduce a framework that provides a unified perspective ...
International Nuclear Information System (INIS)
Khan, Mohd Shariq; Lee, Moonyong
2013-01-01
The particle swarm paradigm is employed to optimize the single mixed refrigerant (SMR) natural gas liquefaction process. Liquefaction design involves multivariable problem solving, and non-optimal settings of these variables can waste energy and contribute to process irreversibilities. Design optimization requires these variables to be optimized simultaneously; minimizing the compression energy requirement is selected as the optimization objective. Liquefaction is modeled using Honeywell UniSim Design™ and the resulting rigorous model is connected with the particle swarm paradigm coded in MATLAB. Design constraints are folded into the objective function using the penalty function method. Optimization successfully improved efficiency by reducing the compression energy requirement by ca. 10% compared with the base case. -- Highlights: ► The particle swarm paradigm (PSP) is employed for design optimization of the SMR NG liquefaction process. ► A rigorous SMR process model based on UniSim is connected with the PSP coded in MATLAB. ► Stochastic features of the PSP give more confidence in the optimality of complex nonlinear problems. ► Optimization with the PSP notably improves the energy efficiency of the SMR process.
Travieso Rodriguez, Jose Antonio; Gómez Gras, David; García Vilana, Silvia; Mainau Noguer, Ferran; Jerez Mesa, Ramón
2015-01-01
This paper aims to find the key process parameters for machining different parts of an automobile gearbox, commissioned by a company that needs to replace its current machining process based on cutting fluids with an MQL (minimum quantity lubrication) system. It particularly focuses on the definition of appropriate cutting parameters for machining under the MQL condition through the statistical method of Design of Experiments (DOE). Using a combination of recommended parameters, significant improvements in the...
Directory of Open Access Journals (Sweden)
Lei NIE
2015-11-01
Surface roughness is a very important index in silicon direct bonding, and it is affected by processing parameters in the wet-activation process. These parameters include the concentration of the activation solution, holding time and treatment temperature. The effects of these parameters were investigated by means of orthogonal experiments. In order to analyze the wafer roughness more accurately, the bearing ratio of the surface was used as the evaluation index. From the results of the experiments, it could be concluded that the concentration of the activation solution affected the roughness directly: the higher the concentration, the lower the roughness. Holding time did not affect the roughness as acutely as the concentration did, but a reduced activation time decreased the roughness perceptibly. It was also discovered that the treatment temperature had only a weak correlation with the surface roughness. Based on these conclusions, the parameters of concentration, temperature and holding time were optimized as NH4OH:H2O2 = 1:1 (without water), 70 °C and 5 min, respectively. The results of bonding experiments proved the validity of the conclusions of the orthogonal experiments. DOI: http://dx.doi.org/10.5755/j01.ms.21.4.9711
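The orthogonal-experiment analysis described above can be sketched as a range analysis over a hypothetical L4(2^3) array (the response values below are invented so that they merely mirror the abstract's qualitative ranking: concentration strongest, holding time moderate, temperature weakest):

```python
# Hypothetical L4(2^3) orthogonal array for a wet-activation experiment:
# factor levels are (concentration, holding time, temperature) and the
# response is a roughness-type index (values invented for the sketch).
runs = [
    ((0, 0, 0), 0.52),
    ((0, 1, 1), 0.48),
    ((1, 0, 1), 0.35),
    ((1, 1, 0), 0.30),
]

def level_means(runs, factor):
    """Mean response at each level of one factor."""
    acc = {}
    for levels, y in runs:
        acc.setdefault(levels[factor], []).append(y)
    return {lvl: sum(ys) / len(ys) for lvl, ys in acc.items()}

ranges = {}
for f, name in enumerate(["concentration", "time", "temperature"]):
    means = level_means(runs, f)
    ranges[name] = abs(means[0] - means[1])
    print(name, "level means:", means, "range:", round(ranges[name], 3))
```

The range (difference between level means) orders the factors by influence, which is exactly the kind of conclusion the orthogonal experiments in the abstract support.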
Wang, Shijun; Liu, Peter; Turkbey, Baris; Choyke, Peter; Pinto, Peter; Summers, Ronald M
2012-01-01
In this paper, we propose a new pharmacokinetic model for parameter estimation of dynamic contrast-enhanced (DCE) MRI by using Gaussian process inference. Our model is based on the Tofts dual-compartment model for the description of tracer kinetics and the observed time series from DCE-MRI is treated as a Gaussian stochastic process. The parameter estimation is done through a maximum likelihood approach and we propose a variant of the coordinate descent method to solve this likelihood maximization problem. The new model was shown to outperform a baseline method on simulated data. Parametric maps generated on prostate DCE data with the new model also provided better enhancement of tumors, lower intensity on false positives, and better boundary delineation when compared with the baseline method. New statistical parameter maps from the process model were also found to be informative, particularly when paired with the PK parameter maps.
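The Tofts forward model underlying this kind of parameter estimation can be sketched numerically (the arterial input function coefficients and the kinetic parameters below are invented for the sketch, and a plain Riemann-sum convolution replaces a proper numerical scheme):

```python
import math

def tofts_curve(ktrans, kep, t_grid, aif):
    """Standard Tofts model: Ct(t) = Ktrans * integral_0^t Cp(tau) *
    exp(-kep*(t - tau)) dtau, evaluated by a simple Riemann sum."""
    dt = t_grid[1] - t_grid[0]
    return [ktrans * dt * sum(aif[j] * math.exp(-kep * (t - t_grid[j]))
                              for j in range(i + 1))
            for i, t in enumerate(t_grid)]

# Illustrative biexponential arterial input function; the coefficients are
# made up for this sketch, not taken from the paper.
t_grid = [0.05 * i for i in range(120)]                        # minutes
aif = [5.0 * math.exp(-0.6 * t) + 1.0 * math.exp(-0.05 * t) for t in t_grid]

curve = tofts_curve(ktrans=0.25, kep=0.9, t_grid=t_grid, aif=aif)
print("peak tissue concentration:", round(max(curve), 4))
```

Fitting Ktrans and kep then amounts to maximizing the likelihood of the observed time series given curves like this one, which is where the paper's Gaussian process treatment and coordinate descent come in.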
International Nuclear Information System (INIS)
Snyder, P.B.; Ferron, J.R.; Wilson, H.R.
2004-01-01
We review and test the peeling-ballooning model for edge localized modes (ELMs) and pedestal constraints, a model based upon theoretical analysis of magnetohydrodynamic (MHD) instabilities that can limit the pedestal height and drive ELMs. A highly efficient MHD stability code, ELITE, is used to calculate quantitative stability constraints on the pedestal, including constraints on the pedestal height. Because of the impact of collisionality on the bootstrap current, these pedestal constraints are dependent on the density and temperature separately, rather than simply on the pressure. ELITE stability calculations are directly compared with experimental data for a series of plasmas in which the density is varied and ELM characteristics change. In addition, a technique is developed whereby peeling-ballooning pedestal constraints are calculated as a function of key equilibrium parameters via ELITE calculations using series of model equilibria. This technique is used to successfully compare the expected pedestal height as a function of density, triangularity and plasma current with experimental data. Furthermore, the technique can be applied for parameter ranges beyond the purview of present experiments, and we present a brief projection of peeling-ballooning pedestal constraints for burning plasma tokamak designs. (author)
Chen, Pang-Chia
2013-01-01
This paper investigates multi-objective controller design approaches for nonlinear boiler-turbine dynamics subject to actuator magnitude and rate constraints. System nonlinearity is handled by a suitable linear parameter varying system representation with drum pressure as the system varying parameter. Variation of the drum pressure is represented by suitable norm-bounded uncertainty and affine dependence on system matrices. Based on linear matrix inequality algorithms, the magnitude and rate constraints on the actuator and the deviations of fluid density and water level are formulated while the tracking abilities on the drum pressure and power output are optimized. Variation ranges of drum pressure and magnitude tracking commands are used as controller design parameters, determined according to the boiler-turbine's operation range. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
An Improved Hierarchical Genetic Algorithm for Sheet Cutting Scheduling with Process Constraints
Yunqing Rao; Dezhong Qi; Jinling Li
2013-01-01
For the first time, an improved hierarchical genetic algorithm for the sheet cutting problem, which involves n cutting patterns for m non-identical parallel machines with process constraints, has been proposed in the integrated cutting stock model. The objective of the cutting scheduling problem is to minimize the weighted completion time. A mathematical model for this problem is presented, and an improved hierarchical genetic algorithm (ant colony—hierarchical genetic algorithm) is developed for better ...
Real-time parameter optimization based on neural network for smart injection molding
Lee, H.; Liau, Y.; Ryu, K.
2018-03-01
The manufacturing industry has been facing several challenges, including sustainability, performance, and quality of production. Manufacturers attempt to enhance the competitiveness of their companies by implementing CPS (Cyber-Physical Systems) through the convergence of IoT (Internet of Things) and ICT (Information & Communication Technology) at the manufacturing-process level. The injection molding process has a short cycle time and high productivity, features that make it suitable for mass production. In addition, this process is used to produce precise parts in various industry fields such as automobiles, optics, and medical devices. Injection molding involves a mixture of discrete and continuous variables. In order to optimize quality, the variables generated in the injection molding process must be considered. Furthermore, optimal parameter setting is time-consuming work for predicting the optimum quality of the product, since the process parameters cannot be easily corrected during process execution. In this research, we propose a neural-network-based real-time process parameter optimization methodology that sets optimal process parameters by using mold data, molding machine data, and response data. This paper is expected to make an academic contribution as a novel study of parameter optimization during production, in contrast with the pre-production parameter optimization of typical studies.
Parameters Online Detection and Model Predictive Control during the Grain Drying Process
Directory of Open Access Journals (Sweden)
Lihui Zhang
2013-01-01
Full Text Available In order to improve grain drying quality and the level of automation, and in line with the structural characteristics of the cross-flow circulation grain dryer designed and developed by us, temperature, moisture, and other parameter-measuring sensors were placed on the dryer to achieve online automatic detection of process parameters during the grain drying process. A drying model predictive control system was set up. A grain drying predictive control model at constant velocity and variable temperature was established, in which the entire process is dried at constant velocity (i.e., the precipitation rate per hour is a constant) and variable temperature. Combining a PC with a PLC, and based on LabVIEW, a system control platform was designed.
Nanohydroxyapatite synthesis using optimized process parameters
Indian Academy of Sciences (India)
Nanohydroxyapatite; ultrasonication; response surface methodology; calcination; ... Three independent process parameters: temperature () (70, 80 and 90°C), ... Bangi, Selangor, Malaysia; Energy Research Group, School of Engineering, ...
Energy Technology Data Exchange (ETDEWEB)
Leiva, R.; Donoso, J. R.; Muehlich, U.; Labbe, F.
2004-07-01
The effect of the mismatched weld metal on the stress field close to the crack tip in an idealized weld joint made up of base metal (BM) and weld metal (WM), with the crack located in WM, parallel to the BM/WM interface, was numerically analyzed. The analysis was performed with a J-Q type two-parameter approach, with a Modified Boundary Layer (MBL) model subject to a remote displacement field solely controlled by K₁, in order to eliminate the effect of the geometry constraint. The numerical results show that the constraint level decreases for overmatched welds (yield stress of WM higher than that of BM) and increases for undermatched welds (yield stress of WM lower than that of BM). The constraint level depends on the degree of mismatch, on the width of the weld, and on the applied load level. (Author) 21 refs.
Processing parameters for ZnO-based thick film varistors obtained by screen printing
Directory of Open Access Journals (Sweden)
de la Rubia, M. A.
2006-06-01
Full Text Available Thick film varistors based on the ZnO-Bi2O3-Sb2O3 system have been prepared by screen printing on dense alumina substrates. Different processing parameters, such as paste viscosity, burn-out and sintering cycles, and green and sintered thickness, have been studied to improve the processing of ZnO-based thick film varistors. The starting powders were pre-treated in two different ways in order to control both the Bi-rich liquid phase formation and the excessive volatilization of Bi2O3 during sintering due to the high area/volume ratio of the thick films. Significant changes have been observed in the electrical properties related to the different firing schedules and the selection of the starting powders.
Model–Based Techniques for Virtual Sensing of Longitudinal Flight Parameters
Directory of Open Access Journals (Sweden)
Seren Cédric
2015-03-01
Full Text Available Introduction of fly-by-wire and increasing levels of automation significantly improve the safety of civil aircraft, and result in advanced capabilities for detecting, protecting and optimizing A/C guidance and control. However, this higher complexity requires extending the availability of some key flight parameters. Hence, the monitoring and consolidation of those signals is a significant issue, usually achieved via many functionally redundant sensors to extend the way those parameters are measured. This solution penalizes the overall system performance in terms of weight, maintenance, and so on. Other alternatives rely on signal processing or model-based techniques that make global use of all or part of the available sensor data, supplemented by a model-based simulation of the flight mechanics. That processing achieves real-time estimates of the critical parameters and yields dissimilar signals. Filtered and consolidated information is delivered in fault-free conditions by estimating an extended state vector, including wind components, and can replace failed signals in degraded conditions. Accordingly, this paper describes two model-based approaches allowing the longitudinal flight parameters of a civil A/C to be estimated on-line. Results are displayed to evaluate the performance in different simulated and real flight conditions, including realistic external disturbances and modeling errors.
Directory of Open Access Journals (Sweden)
Yuhan Chen
Full Text Available The formation of the complex network architecture of neural systems is subject to multiple structural and functional constraints. Two obvious but apparently contradictory constraints are low wiring cost and high processing efficiency, characterized by short overall wiring length and a small average number of processing steps, respectively. Growing evidence shows that neural networks result from a trade-off between the physical cost and the functional value of the topology. However, the relationship between these competing constraints and complex topology is not well understood quantitatively. We explored this relationship systematically by reconstructing two known neural networks, Macaque cortical connectivity and C. elegans neuronal connections, from combinatory optimization of wiring cost and processing efficiency constraints, using a control parameter α, and comparing the reconstructed networks to the real networks. We found that in both neural systems, the reconstructed networks derived from the two constraints can reveal some important relations between the spatial layout of nodes and the topological connectivity, and match several properties of the real networks. The reconstructed and real networks had a similar modular organization in a broad range of α, resulting from spatial clustering of network nodes. Hubs emerged due to the competition of the two constraints, and their positions were close to, and partly coincided with, the real hubs in a range of α values. The degree of nodes was correlated with the density of nodes in their spatial neighborhood in both reconstructed and real networks. Generally, the rebuilt network matched a significant portion of real links, especially short-distance ones. These findings provide clear evidence to support the hypothesis of trade-off between multiple constraints on brain networks. The two constraints of wiring cost and processing efficiency, however, cannot explain all salient features in the real
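The trade-off explored in this abstract can be made concrete as a combined objective. The sketch below scores a candidate network by α × total wiring length + (1 − α) × average number of processing steps (hop count); the circular geometry and the ring/shortcut networks in the usage example are invented for illustration, not the Macaque or C. elegans data:

```python
import math

def total_wiring(positions, edges):
    """Wiring cost: sum of Euclidean edge lengths."""
    return sum(math.dist(positions[u], positions[v]) for u, v in edges)

def avg_path_length(n, edges):
    """Processing efficiency proxy: mean shortest-path hop count,
    via BFS from every node (assumes a connected graph)."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    total, pairs = 0, 0
    for s in range(n):
        dist = {s: 0}
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        nxt.append(w)
            frontier = nxt
        for t in range(n):
            if t != s:
                total += dist[t]
                pairs += 1
    return total / pairs

def combined_cost(alpha, positions, edges):
    """alpha weights wiring cost against processing steps."""
    n = len(positions)
    return alpha * total_wiring(positions, edges) + (1 - alpha) * avg_path_length(n, edges)
```

Comparing a ring of 8 nodes on a circle against the same ring plus one long chord shows the α-dependence: the cheap ring wins when wiring dominates (large α), while the shortcut wins when processing steps dominate (small α).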
Constraint-based query distribution framework for an integrated global schema
DEFF Research Database (Denmark)
Malik, Ahmad Kamran; Qadir, Muhammad Abdul; Iftikhar, Nadeem
2009-01-01
and replicated data sources. The provided system is all XML-based which poses query in XML form, transforms, and integrates local results in an XML document. Contributions include the use of constraints in our existing global schema which help in source selection and query optimization, and a global query...
Liu, Xinran; Kofman, Jonathan
2017-07-10
A new fringe projection method for surface-shape measurement was developed using four high-frequency phase-shifted background modulation fringe patterns. The pattern frequency is determined using a new fringe-wavelength geometry-constraint model that allows only two corresponding-point candidates in the measurement volume. The correct corresponding point is selected with high reliability using a binary pattern computed from intensity background encoded in the fringe patterns. Equations of geometry-constraint parameters permit parameter calculation prior to measurement, thus reducing measurement computational cost. Experiments demonstrated the ability of the method to perform 3D shape measurement for a surface with geometric discontinuity, and for spatially isolated objects.
A Taguchi approach on optimal process control parameters for HDPE pipe extrusion process
Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa
2017-06-01
High-density polyethylene (HDPE) pipes find versatile applicability for the transportation of water, sewage and slurry from one place to another. Hence, these pipes sustain tremendous pressure from the fluid carried. The present work entails the optimization of the withstanding pressure of HDPE pipes using the Taguchi technique. The traditional heuristic methodology stresses a trial-and-error approach and relies heavily upon the accumulated experience of the process engineers for determining the optimal process control parameters. This results in the setting of less-than-optimal values. Hence, there arises a need to determine optimal process control parameters for the pipe extrusion process, which can ensure robust pipe quality and process reliability. In the proposed optimization strategy, design of experiments (DoE) is conducted wherein different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of the signal-to-noise ratio (S/N ratio) is applied, and ultimately the optimum values of the process control parameters are obtained as: a pushing zone temperature of 166 °C, a dimmer speed of 8 rpm, and a die head temperature of 192 °C. A confirmation experimental run is also conducted to verify the analysis, and the results proved to be in agreement with the main experimental findings, with the withstanding pressure showing a significant improvement from 0.60 to 1.004 MPa.
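The S/N-ratio step of a Taguchi analysis like this one can be sketched as follows. For a larger-the-better response such as withstanding pressure, S/N = −10 log₁₀((1/n) Σ 1/yᵢ²), and the preferred level of each factor is the one with the highest mean S/N across the runs where it appears. The factor names and response values in the test are hypothetical, not the study's data:

```python
import math

def sn_larger_is_better(values):
    """Taguchi larger-the-better signal-to-noise ratio:
    S/N = -10 * log10((1/n) * sum(1 / y_i**2))."""
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in values) / n)

def best_levels(runs):
    """runs: list of (factor->level dict, [replicate responses]).
    Returns, per factor, the level with the highest mean S/N."""
    acc = {}  # (factor, level) -> list of S/N ratios
    for levels, responses in runs:
        sn = sn_larger_is_better(responses)
        for factor, level in levels.items():
            acc.setdefault((factor, level), []).append(sn)
    best = {}
    for (factor, level), sns in acc.items():
        mean_sn = sum(sns) / len(sns)
        if factor not in best or mean_sn > best[factor][1]:
            best[factor] = (level, mean_sn)
    return {f: lv for f, (lv, _) in best.items()}
```

In a real orthogonal-array study the same per-level averaging is applied to each control factor column, followed by a confirmation run at the selected levels.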
Computationally based methodology for reengineering the high-level waste planning process at SRS
International Nuclear Information System (INIS)
Paul, P.K.; Gregory, M.V.; Wells, M.N.
1997-01-01
The Savannah River Site (SRS) has started processing its legacy of 34 million gallons of high-level radioactive waste into its final disposable form. The SRS high-level waste (HLW) complex consists of 51 waste storage tanks, 3 evaporators, 6 waste treatment operations, and 2 waste disposal facilities. It is estimated that processing the wastes to clean up all tanks will take 30+ yr of operation. Integrating all the highly interactive facility operations through the entire life cycle in an optimal fashion, while meeting all the budgetary, regulatory, and operational constraints and priorities, is a complex and challenging planning task. The waste complex operating plan for the entire time span is periodically published as an SRS report. A computationally based integrated methodology has been developed that has streamlined the planning process while showing how to run the operations at economically and operationally optimal conditions. The integrated computational model replaced a host of disconnected spreadsheet calculations and the analysts' trial-and-error solutions using various scenario choices. This paper presents the important features of the integrated computational methodology and highlights the parameters that are core components of the planning process.
Thareja, R.; Haftka, R. T.
1986-01-01
There has been recent interest in multidisciplinary multilevel optimization applied to large engineering systems. The usual approach is to divide the system into a hierarchy of subsystems with ever increasing detail in the analysis focus. Equality constraints are usually placed on various design quantities at every successive level to ensure consistency between levels. In many previous applications these equality constraints were eliminated by reducing the number of design variables. In complex systems this may not be possible and these equality constraints may have to be retained in the optimization process. In this paper the impact of such a retention is examined for a simple portal frame problem. It is shown that the equality constraints introduce numerical difficulties, and that the numerical solution becomes very sensitive to optimization parameters for a wide range of optimization algorithms.
Method for Determining the Time Parameter
Directory of Open Access Journals (Sweden)
K. P. Baslyk
2014-01-01
Full Text Available This article proposes a method for calculating one of the characteristics that defines the flight program of the first stage of a ballistic rocket, namely the time parameter of the attack-angle program. In simulating the placing of the payload by the first stage, a flight program is used which consists of three segments: a vertical climb of the rocket, a segment of programmed reversal by attack angle, and a segment of gravitational reversal with zero angle of attack. The programmed reversal by attack angle is modeled as a rapidly decreasing and then increasing function. This function depends on the attack angle amplitude, time, and the time parameter. If the design and ballistic parameters and the amplitude of the attack angle have been determined, this coefficient is calculated from the constraint that the rocket velocity equals 0.8 of the sound velocity (0.264 km/s) when the angle of attack becomes zero. This constraint is transformed into a nonlinear equation, which can be solved using Newton's method. The attack angle amplitude is unknown at the design-analysis stage. Exceeding some maximum admissible value of this parameter may lead to excessive trajectory collapsing (foreshortening), which can be identified by an arising negative trajectory angle. It is therefore necessary to compute the maximum value of the attack angle amplitude subject to the following constraints: the trajectory angle remains positive during the entire first-stage flight, and the rocket velocity equals 0.264 km/s by the end of the attack-angle program. The problem can be formulated as a nonlinear programming task, the minimization of the modified Lagrange function, which is solved using the method of multipliers. If the multipliers and the penalty parameter are held constant, an optimization problem without constraints results. Using the coordinate descent method allows solving the unconstrained minimization of the modified Lagrange function with fixed
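The Newton step used to solve the velocity constraint can be sketched generically. The velocity profile v(t) below is a made-up monotone placeholder (not the rocket's actual dynamics), and the parameter values are assumptions for illustration; only the root-finding scheme for v(t) = 0.264 km/s is the point:

```python
import math

def newton(f, x0, tol=1e-10, max_iter=50, h=1e-6):
    """Newton's method with a central-difference derivative,
    stopping when |f(x)| falls below tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = (f(x + h) - f(x - h)) / (2 * h)
        x -= fx / dfx
    return x

# Toy monotone velocity profile (an assumption, not the paper's model):
# v(t) = v_max * (1 - exp(-t / tau)); find t with v(t) = 0.264 km/s.
v_max, tau = 0.9, 40.0
f = lambda t: v_max * (1.0 - math.exp(-t / tau)) - 0.264
t_star = newton(f, x0=10.0)
```

Because the toy profile is invertible, the Newton result can be checked against the closed-form root t = −τ ln(1 − 0.264/v_max).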
Ply-based Optimization of Laminated Composite Shell Structures under Manufacturing Constraints
DEFF Research Database (Denmark)
Sørensen, Rene; Lund, Erik
2012-01-01
This work concerns a new ply-based parameterization for performing simultaneous material selection and topology optimization of fiber reinforced laminated composite structures while ensuring that a series of different manufacturing constraints are fulfilled. The material selection can either...
Directory of Open Access Journals (Sweden)
Xuefeng Li
2014-04-01
Full Text Available Based on numerically solving the generalized nonlinear Langevin equation, which describes the nonlinear dynamics of stochastic resonance, by the fourth-order Runge-Kutta method, an aperiodic stochastic resonance based on an optical bistable system is numerically investigated. The numerical results show that a parameter-tuning stochastic resonance system can be realized by choosing appropriate optical bistable parameters, and that it performs well in reconstructing aperiodic signals from a very high level of background noise. The influences of the optical bistable parameters on the stochastic resonance effect are numerically analyzed via cross-correlation, and a maximum cross-correlation gain of 8 is obtained by optimizing the optical bistable parameters. This provides a prospective method for reconstructing noise-hidden weak signals in all-optical signal processing systems.
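The fourth-order Runge-Kutta integration named in this abstract can be sketched for the deterministic part of a bistable system. The double-well parameters and the weak drive below are illustrative assumptions, and the stochastic noise term of the Langevin equation is omitted; only the classical RK4 stepping scheme is the point:

```python
import math

def rk4_step(f, t, x, dt):
    """One classical fourth-order Runge-Kutta step for dx/dt = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + dt / 2, x + dt * k1 / 2)
    k3 = f(t + dt / 2, x + dt * k2 / 2)
    k4 = f(t + dt, x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def integrate(f, x0, t0, t1, n):
    """Integrate from t0 to t1 in n RK4 steps, returning x(t1)."""
    dt = (t1 - t0) / n
    t, x = t0, x0
    for _ in range(n):
        x = rk4_step(f, t, x, dt)
        t += dt
    return x

# Deterministic part of a bistable system driven by a weak signal
# (illustrative parameters; the noise term is omitted):
a, b, amp = 1.0, 1.0, 0.1
bistable = lambda t, x: a * x - b * x ** 3 + amp * math.sin(0.01 * t)
```

The accuracy of the stepper can be confirmed on dx/dt = −x, whose exact solution is known; the bistable trajectory then relaxes toward the well near x = +1.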
Directory of Open Access Journals (Sweden)
Mehran Tamjidy
2017-05-01
Full Text Available The development of Friction Stir Welding (FSW has provided an alternative approach for producing high-quality welds, in a fast and reliable manner. This study focuses on the mechanical properties of the dissimilar friction stir welding of AA6061-T6 and AA7075-T6 aluminum alloys. The FSW process parameters such as tool rotational speed, tool traverse speed, tilt angle, and tool offset influence the mechanical properties of the friction stir welded joints significantly. A mathematical regression model is developed to determine the empirical relationship between the FSW process parameters and mechanical properties, and the results are validated. In order to obtain the optimal values of process parameters that simultaneously optimize the ultimate tensile strength, elongation, and minimum hardness in the heat affected zone (HAZ, a metaheuristic, multi objective algorithm based on biogeography based optimization is proposed. The Pareto optimal frontiers for triple and dual objective functions are obtained and the best optimal solution is selected through using two different decision making techniques, technique for order of preference by similarity to ideal solution (TOPSIS and Shannon’s entropy.
Tamjidy, Mehran; Baharudin, B T Hang Tuah; Paslar, Shahla; Matori, Khamirul Amin; Sulaiman, Shamsuddin; Fadaeifard, Firouz
2017-05-15
Harmony Search Based Parameter Ensemble Adaptation for Differential Evolution
Directory of Open Access Journals (Sweden)
Rammohan Mallipeddi
2013-01-01
Full Text Available In differential evolution (DE algorithm, depending on the characteristics of the problem at hand and the available computational resources, different strategies combined with a different set of parameters may be effective. In addition, a single, well-tuned combination of strategies and parameters may not guarantee optimal performance because different strategies combined with different parameter settings can be appropriate during different stages of the evolution. Therefore, various adaptive/self-adaptive techniques have been proposed to adapt the DE strategies and parameters during the course of evolution. In this paper, we propose a new parameter adaptation technique for DE based on ensemble approach and harmony search algorithm (HS. In the proposed method, an ensemble of parameters is randomly sampled which form the initial harmony memory. The parameter ensemble evolves during the course of the optimization process by HS algorithm. Each parameter combination in the harmony memory is evaluated by testing them on the DE population. The performance of the proposed adaptation method is evaluated using two recently proposed strategies (DE/current-to-pbest/bin and DE/current-to-gr_best/bin as basic DE frameworks. Numerical results demonstrate the effectiveness of the proposed adaptation technique compared to the state-of-the-art DE based algorithms on a set of challenging test problems (CEC 2005.
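The harmony-search update of a parameter memory can be sketched in minimal form. Here each "harmony" is simply a real-valued parameter vector scored by a stand-in objective; in the paper's method the score would instead come from testing each parameter combination (e.g. an F/CR ensemble) on the DE population. All numeric settings below are illustrative assumptions:

```python
import random

def harmony_search(objective, bounds, memory_size=8, iters=200,
                   hmcr=0.9, par=0.3, bw=0.05, seed=1):
    """Minimal harmony search: new vectors are composed from memory
    with probability hmcr (pitch-adjusted with probability par),
    drawn fresh otherwise, and replace the worst member if better."""
    rng = random.Random(seed)
    mem = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(memory_size)]
    scores = [objective(h) for h in mem]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                x = mem[rng.randrange(memory_size)][d]
                if rng.random() < par:
                    x += rng.uniform(-bw, bw) * (hi - lo)  # pitch adjustment
            else:
                x = rng.uniform(lo, hi)  # random consideration
            new.append(min(max(x, lo), hi))
        s = objective(new)
        worst = max(range(memory_size), key=scores.__getitem__)
        if s < scores[worst]:
            mem[worst], scores[worst] = new, s
    best = min(range(memory_size), key=scores.__getitem__)
    return mem[best], scores[best]
```

Minimizing a simple quadratic stand-in objective over the unit square shows the memory converging toward the optimum.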
Directory of Open Access Journals (Sweden)
Niti Ashish Kumar Desai
2015-12-01
Full Text Available Business strategies are formulated based on an understanding of customer needs. This requires developing a strategy to understand customer behaviour and buying patterns, both current and future. It involves understanding, first, how an organization currently understands customer needs and, second, how to predict future trends to drive growth. This article focuses on the purchase trends of customers, where the timing of a purchase is more important than the association of the items purchased, and which can be found with Sequential Pattern Mining (SPM) methods. Conventional SPM algorithms work purely on frequency, identifying patterns that are more frequent, but suffer from challenges such as the generation of a huge number of uninteresting patterns, the lack of patterns the user is interested in, the rare item problem, etc. The article attempts a solution through the development of an SPM algorithm based on various constraints, namely Gap, Compactness, Item, Recency, Profitability, and Length, along with the Frequency constraint. The six additional constraints are incorporated to ensure that all patterns are recently active (Recency), active for a certain time span (Compactness), profitable (Profitability), and indicative of the next timeline for purchase (Length, Item, Gap). The article also attempts to throw light on how the proposed constraint-based PrefixSpan algorithm helps to understand the buying behaviour of customers, which is in a formative stage.
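Three of the constraints can be illustrated with a small occurrence check. The sketch below tests whether a pattern occurs in a timestamped purchase sequence subject to a gap constraint (consecutive matched items close in time), a compactness constraint (bounded total span), and a recency constraint (last matched item recent enough). It is a simplified reading of those constraints, not the proposed PrefixSpan extension itself:

```python
def satisfies_constraints(sequence, pattern, max_gap, max_span, recency_after):
    """sequence: list of (timestamp, item) sorted by time.
    True iff `pattern` (list of items) occurs in order such that
    consecutive matched items are at most max_gap apart (Gap), the
    occurrence spans at most max_span (Compactness), and the last
    matched item occurs at or after recency_after (Recency)."""
    def search(idx, p_idx, first_t, last_t):
        if p_idx == len(pattern):
            return last_t - first_t <= max_span and last_t >= recency_after
        for i in range(idx, len(sequence)):
            t, item = sequence[i]
            if p_idx > 0 and t - last_t > max_gap:
                break  # timestamps are sorted, so the gap only grows
            if item != pattern[p_idx]:
                continue
            if search(i + 1, p_idx + 1, first_t if p_idx else t, t):
                return True
        return False
    return search(0, 0, 0, 0)
```

A mining algorithm would push such checks into pattern growth to prune candidates early, rather than testing each pattern after the fact as done here.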
General squark flavour mixing: constraints, phenomenology and benchmarks
De Causmaecker, Karen; Herrmann, Bjoern; Mahmoudi, Farvah; O'Leary, Ben; Porod, Werner; Sekmen, Sezen; Strobbe, Nadja
2015-11-19
We present an extensive study of non-minimal flavour violation in the squark sector in the framework of the Minimal Supersymmetric Standard Model. We investigate the effects of multiple non-vanishing flavour-violating elements in the squark mass matrices by means of a Markov Chain Monte Carlo scanning technique and identify parameter combinations that are favoured by both current data and theoretical constraints. We then detail the resulting distributions of the flavour-conserving and flavour-violating model parameters. Based on this analysis, we propose a set of benchmark scenarios relevant for future studies of non-minimal flavour violation in the Minimal Supersymmetric Standard Model.
Differential constraints and exact solutions of nonlinear diffusion equations
International Nuclear Information System (INIS)
Kaptsov, Oleg V; Verevkin, Igor V
2003-01-01
The differential constraints are applied to obtain explicit solutions of nonlinear diffusion equations. Certain linear determining equations with parameters are used to find such differential constraints. They generalize the determining equations used in the search for classical Lie symmetries.
Laser Processing of Multilayered Thermal Spray Coatings: Optimal Processing Parameters
Tewolde, Mahder; Zhang, Tao; Lee, Hwasoo; Sampath, Sanjay; Hwang, David; Longtin, Jon
2017-12-01
Laser processing offers an innovative approach for the fabrication and transformation of a wide range of materials. As a rapid, non-contact, and precision material removal technology, lasers are natural tools to process thermal spray coatings. Recently, a thermoelectric generator (TEG) was fabricated using thermal spray and laser processing. The TEG device represents a multilayer, multimaterial functional thermal spray structure, with laser processing serving an essential role in its fabrication. Several unique challenges are presented when processing such multilayer coatings, and the focus of this work is on the selection of laser processing parameters for optimal feature quality and device performance. A parametric study is carried out using three short-pulse lasers, where laser power, repetition rate and processing speed are varied to determine the laser parameters that result in high-quality features. The resulting laser patterns are characterized using optical and scanning electron microscopy, energy-dispersive x-ray spectroscopy, and electrical isolation tests between patterned regions. The underlying laser interaction and material removal mechanisms that affect the feature quality are discussed. Feature quality was found to improve both by using a multiscanning approach and an optional assist gas of air or nitrogen. Electrically isolated regions were also patterned in a cylindrical test specimen.
Application of the Theory of Constraints in Project Based Structures
Martynas Sarapinas; Vytautas Pranas Sūdžius
2011-01-01
The article deals with the application of the Theory of Constraints (TOC) in project management. It involves a short introduction to TOC as a project management method and a deep analysis of project management specifics using TOC: TOC-based project planning, timetable management, task synchronization, project control and the “relay runner work ethic”. Moreover, the article compares traditional and TOC-based project management theories and emphasizes the main be...
Salloum, Ahmed
Constraint relaxation by definition means that certain security, operational, or financial constraints are allowed to be violated in the energy market model for a predetermined penalty price. System operators utilize this mechanism in an effort to impose a price cap on shadow prices throughout the market. In addition, constraint relaxations can serve as corrective approximations that help in reducing the occurrence of infeasible or extreme solutions in the day-ahead markets. This work aims to capture the impact constraint relaxations have on system operational security. Moreover, this analysis also provides a better understanding of the correlation between DC market models and AC real-time systems and analyzes how relaxations in market models propagate to real-time systems. This information can be used not only to assess the criticality of constraint relaxations, but also as a basis for determining penalty prices more accurately. The practice of constraint relaxation was replicated in this work using a test case and a real-life large-scale system, while capturing both energy market aspects and AC real-time system performance. The system performance investigation included static and dynamic security analysis for base-case and post-contingency operating conditions. PJM peak hour loads were dynamically modeled in order to capture delayed voltage recovery and sustained depressed voltage profiles as a result of reactive power deficiency caused by constraint relaxations. Moreover, the impacts of constraint relaxations on operational system security were investigated when risk-based penalty prices are used. Transmission lines in the PJM system were categorized according to their risk index, and each category was assigned a different penalty price accordingly in order to avoid real-time overloads on high-risk lines. This work also extends the investigation of constraint relaxations to post-contingency relaxations, where emergency limits are allowed to be relaxed in energy market models
Home-based Constraint Induced Movement Therapy Poststroke
Directory of Open Access Journals (Sweden)
Stephen Isbel HScD
2014-10-01
Full Text Available Background: This study examined the efficacy of a home-based Constraint Induced Movement Therapy (CI Therapy) protocol with eight poststroke survivors. Method: Eight ABA single-case experiments were conducted in the homes of poststroke survivors. The intervention comprised restraint of the intact upper limb in a mitt for 21 days, combined with a home-based and self-directed daily activity regime. Motor changes were measured using the Wolf Motor Function Test (WMFT) and the Motor Activity Log (MAL). Results: Grouped results showed statistically and clinically significant differences on the WMFT (timed items: mean 7.28 seconds, SEM 1.41, 95% CI 4.40–10.18, p = 0.000; functional ability: z = -4.63, p = 0.000). Seven out of the eight participants exceeded the minimal detectable change on both subscales of the MAL. Conclusion: This study offers positive preliminary data regarding the feasibility of a home-based CI Therapy protocol. This requires further study through an appropriately powered controlled trial.
Kinetics parameters of a slurry remediation process in rotating drum bioreactors
International Nuclear Information System (INIS)
Esquivel-Rios, I.; Rodriguez-Meza, M. A.; Barrera-Cortes, J.
2009-01-01
Knowledge of the dynamics of pollutant biotransformation in any system is important for the design and optimization of the biochemical processes involved. This focuses on the determination of kinetic parameters such as the maximum specific growth rate (μmax), the saturation constant (Ks), the biomass yield (YX/S; X: biomass, S: substrate) and the oxygen consumption yield (YO2/S; O2: oxygen). Several approximations, based on the Monod equation, have been developed for estimating kinetic parameters in terms of the concentration and type of substrate, the type of bioprocess, and the available microflora. (Author)
On Lifecycle Constraints of Artifact-Centric Workflows
Kucukoguz, Esra; Su, Jianwen
Data plays a fundamental role in modeling and management of business processes and workflows. Among the recent "data-aware" workflow models, artifact-centric models are particularly interesting. (Business) artifacts are the key data entities that are used in workflows and can reflect both the business logic and the execution states of a running workflow. The notion of artifacts succinctly captures the fluidity aspect of data during workflow executions. However, much of the technical dimension concerning artifacts in workflows is not well understood. In this paper, we study a key concept of an artifact "lifecycle". In particular, we allow declarative specifications/constraints of artifact lifecycles in the spirit of DecSerFlow, and formulate the notion of lifecycle as the set of all possible paths an artifact can navigate through. We investigate two technical problems: (Compliance) does a given workflow (schema) contain only lifecycles allowed by a constraint? And (automated construction) from a given lifecycle specification (constraint), is it possible to construct a "compliant" workflow? The study is based on a new formal variant of artifact-centric workflow model called "ArtiNets" and two classes of lifecycle constraints named "regular" and "counting" constraints. We present a range of technical results concerning compliance and automated construction, including: (1) compliance is decidable when the workflow is atomic or the constraints are regular, (2) for each constraint, we can always construct a workflow that satisfies the constraint, and (3) sufficient conditions where atomic workflows can be constructed.
Null Space Integration Method for Constrained Multibody Systems with No Constraint Violation
International Nuclear Information System (INIS)
Terze, Zdravko; Lefeber, Dirk; Muftic, Osman
2001-01-01
A method for integrating equations of motion of constrained multibody systems with no constraint violation is presented. A mathematical model, shaped as a differential-algebraic system of index 1, is transformed into a system of ordinary differential equations using the null-space projection method. Equations of motion are set in a non-minimal form. During integration, violations of the constraints are corrected by solving the constraint equations at the position and velocity level, utilizing the metric of the system's configuration space and applying a projective criterion to the coordinate partitioning method. The method is applied to the dynamic simulation of a 3D constrained biomechanical system. The simulation results are evaluated by comparing them to the values of characteristic parameters obtained by kinematic analysis of the analyzed motion based on measured kinematics data.
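The position- and velocity-level corrections described in this abstract can be sketched for a planar pendulum; the radial projection and null-space projector below are illustrative simplifications, not the paper's full metric-based scheme:

```python
import numpy as np

def correct_position(q, L):
    # Pull a drifted pendulum position back onto the constraint
    # manifold x^2 + y^2 = L^2 (simple radial projection).
    return q * (L / np.linalg.norm(q))

def correct_velocity(q, v):
    # Remove the velocity component violating the velocity-level
    # constraint J(q) v = 0, where J = dC/dq = 2 q^T for C = q.q - L^2.
    J = q.reshape(1, -1)                      # row Jacobian (up to a factor of 2)
    P = np.eye(2) - J.T @ J / (J @ J.T)       # orthogonal projector onto null(J)
    return P @ v

q = correct_position(np.array([0.8, 0.7]), L=1.0)
v = correct_velocity(q, np.array([1.0, 1.0]))
# After correction both constraint levels hold (to round-off):
# |q| == L and q . v == 0.
```

Applying both projections after every integration step is what keeps the constraint violation from accumulating over time.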
Revisiting the simplicity constraints and coherent intertwiners
International Nuclear Information System (INIS)
Dupuis, Maite; Livine, Etera R
2011-01-01
In the context of loop quantum gravity and spinfoam models, the simplicity constraints are essential in that they allow one to write general relativity as a constrained topological BF theory. In this work, we apply the recently developed U(N) framework for SU(2) intertwiners to the issue of imposing the simplicity constraints on spin network states. More particularly, we focus on solving them on individual intertwiners in the 4D Euclidean theory. We review the standard way of solving the simplicity constraints using coherent intertwiners and explain how these fit within the U(N) framework. Then we show how these constraints can be written as a closed u(N) algebra, and we propose a set of U(N) coherent states that solves all the simplicity constraints weakly for an arbitrary Immirzi parameter.
FPGA based image processing for optical surface inspection with real time constraints
Hasani, Ylber; Bodenstorfer, Ernst; Brodersen, Jörg; Mayer, Konrad J.
2015-02-01
Today, high-quality printing products like banknotes, stamps, or vouchers are automatically checked by optical surface inspection systems. In a typical optical surface inspection system, several digital cameras acquire the printing products with fine resolution from different viewing angles and at multiple wavelengths of the visible and also near infrared spectrum of light. The cameras deliver data streams with a huge amount of image data that have to be processed by an image processing system in real time. Due to the printing industry's demand for higher throughput together with the necessity to check finer details of the print and its security features, the data rates to be processed tend to explode. In this contribution, a solution is proposed where the image processing load is distributed between FPGAs and digital signal processors (DSPs) in such a way that the strengths of both technologies can be exploited. The focus lies upon the implementation of image processing algorithms in an FPGA and its advantages. In the presented application, FPGA-based image preprocessing enables real-time implementation of an optical color surface inspection system with a spatial resolution of 100 μm and for object speeds over 10 m/s. For the implementation of image processing algorithms in the FPGA, pipeline parallelism with clock frequencies up to 150 MHz, together with spatial parallelism based on multiple instantiations of modules for parallel processing of multiple data streams, is exploited for the processing of image data of two cameras and three color channels. Due to their flexibility and their fast response times, it is shown that FPGAs are ideally suited for realizing a configurable all-digital PLL for the processing of camera line-trigger signals with frequencies of about 100 kHz, using pure synchronous digital circuit design.
New evaluation parameter for wearable thermoelectric generators
Wijethunge, Dimuthu; Kim, Woochul
2018-04-01
Wearable devices constitute a key application area for thermoelectric devices. However, owing to new constraints in wearable applications, a few conventional device optimization techniques are not appropriate, and material evaluation parameters such as the figure of merit (zT) and power factor (PF) tend to be inadequate. We illustrated the incompleteness of zT and PF by performing simulations and considering different thermoelectric materials. The results indicate a weak correlation between device performance and zT and PF. In this study, we propose a new evaluation parameter, zT_wearable, which is better suited for wearable applications than the conventional zT. Owing to size restrictions, gap-filler-based device optimization is extremely critical in wearable devices. For the cases in which gap fillers are used, expressions for power, effective thermal conductivity (k_eff), and optimum load electrical ratio (m_opt) are derived. According to the new parameter, the thermal conductivity of the material becomes much more critical. The proposed evaluation parameter, zT_wearable, is extremely useful for selecting an appropriate thermoelectric material among various candidates prior to the commencement of the actual design process.
Energy Technology Data Exchange (ETDEWEB)
Tiwari, S.K.; Reddy, A.L.V.; Venkataswamy, J.; Misra, M.; Setty, D.S.; Sheela, S.; Saibaba, N., E-mail: misra@nfc.gov.in [Nuclear Fuel Complex, Hyderabad (India)
2013-07-01
Nuclear reactor grade natural uranium dioxide powder is being produced through the precipitation route, which is further processed before being converted into the sintered pellets used in the fabrication of PHWR fuel assemblies for 220 and 540 MWe type reactors. The precipitation of Uranyl Nitrate Pure Solution (UNPS) is an important step in the UO₂ powder production line, wherein soluble uranium is transformed into the solid form of Ammonium Uranate (AU), which in turn reflects and decides the powder characteristics. Precipitation of UNPS with vapour ammonia is carried out in a semi-batch process, and process parameters like ammonia flow rate, temperature, concentration of UNPS and free acidity of UNPS are very critical and decide the UO₂ powder quality. Variation in these critical parameters influences powder characteristics, which in turn influences the sinterability of the UO₂ powder. In order to obtain consistent powder quality and sinterability, the critical parameter of ammonia flow rate during precipitation was studied, optimized and validated. The critical process parameters are controlled through PLC-based automated on-line data acquisition systems to achieve consistent powder quality with increased recovery and production. The present paper covers the optimization of process parameters and powder characteristics. (author)
A chaos-based image encryption algorithm with variable control parameters
International Nuclear Information System (INIS)
Wang Yong; Wong, K.-W.; Liao Xiaofeng; Xiang Tao; Chen Guanrong
2009-01-01
In recent years, a number of image encryption algorithms based on the permutation-diffusion structure have been proposed. However, the control parameters used in the permutation stage are usually fixed in the whole encryption process, which favors attacks. In this paper, a chaos-based image encryption algorithm with variable control parameters is proposed. The control parameters used in the permutation stage and the keystream employed in the diffusion stage are generated from two chaotic maps related to the plain-image. As a result, the algorithm can effectively resist all known attacks against permutation-diffusion architectures. Theoretical analyses and computer simulations both confirm that the new algorithm possesses high security and fast encryption speed for practical image encryption.
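The plaintext-dependent permutation-diffusion idea in this abstract can be illustrated with a toy cipher; the logistic map, key handling, and byte mixing below are all hypothetical simplifications, not the paper's actual scheme:

```python
def logistic(x, r=3.99):
    # Chaotic logistic map on (0, 1).
    return r * x * (1.0 - x)

def encrypt(data, key=0.3141):
    # Toy permutation-diffusion sketch: the chaotic initial condition is
    # perturbed by the plain-image itself, so both the permutation and
    # the keystream become plaintext-dependent.
    n = len(data)
    x = (key + sum(data) / (255.0 * n + 1)) % 1.0
    # Permutation stage: sort indices by chaotic sequence values.
    seq = []
    for _ in range(n):
        x = logistic(x)
        seq.append(x)
    perm = sorted(range(n), key=lambda i: seq[i])
    permuted = [data[i] for i in perm]
    # Diffusion stage: XOR with a keystream from further iterates,
    # chained through the previous ciphertext byte.
    out, prev = [], 0
    for b in permuted:
        x = logistic(x)
        k = int(x * 256) % 256
        c = b ^ k ^ prev
        out.append(c)
        prev = c
    return out
```

Because the initial condition depends on the plaintext, changing a single input byte changes the entire keystream, which is the property that defeats chosen-plaintext attacks on fixed-parameter permutation stages.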
International Nuclear Information System (INIS)
Nadammal, Naresh; Kailas, Satish V.; Suwas, Satyam
2015-01-01
Highlights: • An experimental bottom-up approach has been developed for optimizing the process parameters for friction stir processing. • Samples processed with the optimum parameters were tested and characterized in detail. • An ultimate tensile strength of 1.3 times the base metal strength was obtained. • Residual stresses on the processed surface were only 10% of the yield strength of the base metal. • Microstructure observations revealed fine equiaxed grains with precipitate particles at the grain boundaries. - Abstract: Friction stir processing (FSP) is emerging as one of the most competent severe plastic deformation (SPD) methods for producing bulk ultra-fine grained materials with improved properties. Optimizing the process parameters for a defect-free process is one of the challenging aspects of FSP to mark its commercial use. For a commercial aluminium alloy 2024-T3 plate of 6 mm thickness, a bottom-up approach has been attempted to optimize major independent parameters of the process, such as plunge depth, tool rotation speed and traverse speed. Tensile properties of the optimum friction stir processed sample were correlated with the microstructural characterization done using Scanning Electron Microscopy (SEM) and Electron Back-Scattered Diffraction (EBSD). Optimum parameters from the bottom-up approach have led to a defect-free FSP having a maximum strength of 93% of the base material strength. Micro tensile testing of samples taken from the center of the processed zone has shown an increased strength of 1.3 times the base material. The measured maximum longitudinal residual stress on the processed surface was only 30 MPa, which was attributed to the solid-state nature of FSP. Microstructural observation reveals significant grain refinement with less variation in the grain size across the thickness and a large amount of grain boundary precipitation compared to the base metal. The proposed experimental bottom-up approach can be applied as an effective method for
Game-Based Approaches, Pedagogical Principles and Tactical Constraints: Examining Games Modification
Serra-Olivares, Jaime; García-López, Luis M.; Calderón, Antonio
2016-01-01
The purpose of this study was to analyze the effect of modification strategies based on the pedagogical principles of the Teaching Games for Understanding approach on tactical constraints of four 3v3 soccer small-sided games. The game performance of 21 U-10 players was analyzed in a game similar to the adult game; one based on keeping-the-ball;…
Exact run length distribution of the double sampling x-bar chart with estimated process parameters
Directory of Open Access Journals (Sweden)
Teoh, W. L.
2016-05-01
Full Text Available Since the run length distribution is generally highly skewed, a significant concern about focusing too much on the average run length (ARL) criterion is that we may miss some crucial information about a control chart's performance. Thus it is important to investigate the entire run length distribution of a control chart for an in-depth understanding before implementing the chart in process monitoring. In this paper, the percentiles of the run length distribution for the double sampling (DS) X̄ chart with estimated process parameters are computed. Knowledge of the percentiles of the run length distribution provides a more comprehensive understanding of the expected behaviour of the run length. This additional information includes the early false alarm, the skewness of the run length distribution, and the median run length (MRL). A comparison of the run length distribution between the optimal ARL-based and MRL-based DS X̄ charts with estimated process parameters is presented in this paper. Examples of applications are given to aid practitioners in selecting the best design scheme of the DS X̄ chart with estimated process parameters, based on their specific purpose.
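For the classical known-parameter Shewhart chart the run length is geometric, which makes the skewness this abstract describes easy to see; this simple case (not the DS X̄ chart itself) can be sketched as:

```python
import math

def run_length_percentile(p, q):
    # For a per-sample signal probability p, the run length is
    # geometric; return its q-th percentile.
    return math.ceil(math.log(1.0 - q) / math.log(1.0 - p))

p = 0.0027                         # classic 3-sigma false-alarm rate
arl = 1.0 / p                      # average run length (about 370)
mrl = run_length_percentile(p, 0.5)    # median run length
p5 = run_length_percentile(p, 0.05)    # early-false-alarm percentile
```

The median run length (257) sits far below the ARL (about 370), and the 5th percentile (19) quantifies the early-false-alarm risk, which is exactly the information the ARL alone hides.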
Parameter uncertainty effects on variance-based sensitivity analysis
International Nuclear Information System (INIS)
Yu, W.; Harris, T.J.
2009-01-01
In the past several years there has been considerable commercial and academic interest in methods for variance-based sensitivity analysis. The industrial focus is motivated by the importance of attributing variance contributions to input factors. A more complete understanding of these relationships enables companies to achieve goals related to quality, safety and asset utilization. In a number of applications, it is possible to distinguish between two types of input variables: regressive variables and model parameters. Regressive variables are those that can be influenced by process design or by a control strategy. With model parameters, there are typically no opportunities to directly influence their variability. In this paper, we propose a new method to perform sensitivity analysis through a partitioning of the input variables into these two groupings: regressive variables and model parameters. A sequential analysis is proposed, where first a sensitivity analysis is performed with respect to the regressive variables. In the second step, the uncertainty effects arising from the model parameters are included. This strategy can be quite useful in understanding process variability and in developing strategies to reduce overall variability. When this method is used for nonlinear models which are linear in the parameters, analytical solutions can be utilized. In the more general case of models that are nonlinear in both the regressive variables and the parameters, either first-order approximations can be used, or numerically intensive methods must be used.
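The variance attribution underlying such analyses is commonly estimated with first-order Sobol indices; a minimal pick-freeze Monte Carlo estimator (a generic sketch, not the paper's two-group partitioning method) might look like:

```python
import random

def sobol_first_order(f, d, n=100_000, seed=42):
    # Pick-freeze Monte Carlo estimator of first-order Sobol indices
    # S_i = Var(E[Y|X_i]) / Var(Y) for d independent U(0,1) inputs.
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    yA = [f(x) for x in A]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    S = []
    for i in range(d):
        # Mixed sample: column i taken from A, all others from B.
        yBAi = [f([a[j] if j == i else b[j] for j in range(d)])
                for a, b in zip(A, B)]
        cov = sum(ya * y for ya, y in zip(yA, yBAi)) / n - mean ** 2
        S.append(cov / var)
    return S

# Linear test model Y = X1 + 2*X2 has S1 = 1/5 and S2 = 4/5 analytically.
S = sobol_first_order(lambda x: x[0] + 2 * x[1], d=2)
```

For an additive model the first-order indices sum to one, which gives a quick sanity check on the Monte Carlo estimate.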
Evolving chemometric models for predicting dynamic process parameters in viscose production
Energy Technology Data Exchange (ETDEWEB)
Cernuda, Carlos [Department of Knowledge-Based Mathematical Systems, Johannes Kepler University Linz (Austria); Lughofer, Edwin, E-mail: edwin.lughofer@jku.at [Department of Knowledge-Based Mathematical Systems, Johannes Kepler University Linz (Austria); Suppan, Lisbeth [Kompetenzzentrum Holz GmbH, St. Peter-Str. 25, 4021 Linz (Austria); Roeder, Thomas; Schmuck, Roman [Lenzing AG, 4860 Lenzing (Austria); Hintenaus, Peter [Software Research Center, Paris Lodron University Salzburg (Austria); Maerzinger, Wolfgang [i-RED Infrarot Systeme GmbH, Linz (Austria); Kasberger, Juergen [Recendt GmbH, Linz (Austria)
2012-05-06
Highlights: • Quality assurance of process parameters in viscose production. • Automatic prediction of spin-bath concentrations based on FT-NIR spectra. • Evolving chemometric models for efficiently handling changing system dynamics over time (no time-intensive re-calibration needed). • Significant reduction of huge errors produced by statistical state-of-the-art calibration methods. • Sufficient flexibility achieved by gradual forgetting mechanisms. - Abstract: In viscose production, it is important to monitor three process parameters in order to assure a high quality of the final product: the concentrations of H₂SO₄, Na₂SO₄ and ZnSO₄. During on-line production these process parameters usually show quite high dynamics depending on the fiber type that is produced. Thus, conventional chemometric models, which are trained based on collected calibration spectra from Fourier transform near infrared (FT-NIR) measurements and kept fixed during the whole life-time of the on-line process, show a quite imprecise and unreliable behavior when predicting the concentrations of new on-line data. In this paper, we demonstrate evolving chemometric models which are able to adapt automatically to varying process dynamics by updating their inner structures and parameters in a single-pass incremental manner. These models exploit the Takagi-Sugeno fuzzy model architecture, being able to flexibly model different degrees of non-linearity implicitly contained in the mapping between near infrared (NIR) spectra and reference values. Updating the inner structures is achieved by moving the position of already existing local regions and by evolving (increasing non-linearity) or merging (decreasing non-linearity) local linear predictors on demand, guided by distance-based and similarity criteria. Gradual
Stochastic User Equilibrium Assignment in Schedule-Based Transit Networks with Capacity Constraints
Directory of Open Access Journals (Sweden)
Wangtu Xu
2012-01-01
Full Text Available This paper proposes a stochastic user equilibrium (SUE) assignment model for a schedule-based transit network with capacity constraints. We consider a situation in which passengers do not have full knowledge about the condition of the network and select paths that minimize a generalized cost function encompassing five components: (1) ride time, which is composed of in-vehicle and waiting times, (2) overload delay, (3) fare, (4) transfer constraints, and (5) departure time difference. We split passenger demands among connections, which are the space-time paths between OD pairs of the network. All transit vehicles have a fixed capacity and operate according to preset timetables. When the capacity constraint of a transit line segment is reached, we show that the Lagrange multipliers of the mathematical programming problem are equivalent to the equilibrium passenger overload delay in the congested transit network. The proposed model can simultaneously predict how passengers choose their transit vehicles to minimize their travel costs and estimate the associated costs in a schedule-based congested transit network. A numerical example is used to illustrate the performance of the proposed model.
Updating parameters of the chicken processing line model
DEFF Research Database (Denmark)
Kurowicka, Dorota; Nauta, Maarten; Jozwiak, Katarzyna
2010-01-01
A mathematical model of chicken processing that quantitatively describes the transmission of Campylobacter on chicken carcasses from slaughter to chicken meat product has been developed in Nauta et al. (2005). This model was quantified with expert judgment. Recent availability of data allows...... updating parameters of the model to better describe processes observed in slaughterhouses. We propose Bayesian updating as a suitable technique to update expert judgment with microbiological data. Berrang and Dickens’s data are used to demonstrate performance of this method in updating parameters...... of the chicken processing line model....
A triangle voting algorithm based on double feature constraints for star sensors
Fan, Qiaoyun; Zhong, Xuyang
2018-02-01
A novel autonomous star identification algorithm is presented in this study. In the proposed algorithm, each sensor star constructs multi-triangle with its bright neighbor stars and obtains its candidates by triangle voting process, in which the triangle is considered as the basic voting element. In order to accelerate the speed of this algorithm and reduce the required memory for star database, feature extraction is carried out to reduce the dimension of triangles and each triangle is described by its base and height. During the identification period, the voting scheme based on double feature constraints is proposed to implement triangle voting. This scheme guarantees that only the catalog star satisfying two features can vote for the sensor star, which improves the robustness towards false stars. The simulation and real star image test demonstrate that compared with the other two algorithms, the proposed algorithm is more robust towards position noise, magnitude noise and false stars.
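The (base, height) feature reduction and voting scheme can be sketched as follows; the catalog layout and tolerance handling are hypothetical simplifications of the proposed algorithm:

```python
import math
from itertools import combinations

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def triangle_feature(p1, p2, p3):
    # Reduce a star triangle to (base, height): base = longest side,
    # height = altitude onto it, via Heron's formula for the area.
    d12, d13, d23 = dist(p1, p2), dist(p1, p3), dist(p2, p3)
    base = max(d12, d13, d23)
    s = (d12 + d13 + d23) / 2.0
    area = math.sqrt(max(s * (s - d12) * (s - d13) * (s - d23), 0.0))
    return base, 2.0 * area / base

def vote(sensor_stars, catalog, tol=1e-3):
    # Each catalog star gains one vote per sensor triangle whose
    # (base, height) pair matches its catalog triangle within tolerance.
    cat_feats = {sid: triangle_feature(*tri) for sid, tri in catalog.items()}
    votes = {sid: 0 for sid in catalog}
    for tri in combinations(sensor_stars, 3):
        b, h = triangle_feature(*tri)
        for sid, (cb, ch) in cat_feats.items():
            if abs(b - cb) < tol and abs(h - ch) < tol:
                votes[sid] += 1
    return votes

sensor = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
catalog = {"A": ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)),
           "B": ((0.0, 0.0), (2.0, 0.0), (0.0, 2.0))}
votes = vote(sensor, catalog)   # only star "A"'s triangle matches
```

Requiring both features to match is what filters out false stars: a spurious point is unlikely to reproduce both the base and the height of a catalog triangle.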
Deepening Contractions and Collateral Constraints
DEFF Research Database (Denmark)
Jensen, Henrik; Ravn, Søren Hove; Santoro, Emiliano
and occasionally non-binding credit constraints. Easier credit access increases the likelihood that constraints become slack in the face of expansionary shocks, while contractionary shocks are further amplified due to tighter constraints. As a result, busts gradually become deeper than booms. Based...
The Adjoint Method for Gradient-based Dynamic Optimization of UV Flash Processes
DEFF Research Database (Denmark)
Ritschel, Tobias Kasper Skovborg; Capolei, Andrea; Jørgensen, John Bagterp
2017-01-01
This paper presents a novel single-shooting algorithm for gradient-based solution of optimal control problems with vapor-liquid equilibrium constraints. Dynamic optimization of UV flash processes is relevant in nonlinear model predictive control of distillation columns, certain two-phase flow pro......-component flash process which demonstrate the importance of the optimization solver, the compiler, and the linear algebra software for the efficiency of dynamic optimization of UV flash processes....
Joudaki, Shahab; Blake, Chris; Johnson, Andrew; Amon, Alexandra; Asgari, Marika; Choi, Ami; Erben, Thomas; Glazebrook, Karl; Harnois-Déraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Hoekstra, Henk; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Mead, Alexander; Miller, Lance; Parkinson, David; Poole, Gregory B.; Schneider, Peter; Viola, Massimo; Wolf, Christian
2018-03-01
We perform a combined analysis of cosmic shear tomography, galaxy-galaxy lensing tomography, and redshift-space multipole power spectra (monopole and quadrupole) using 450 deg² of imaging data by the Kilo Degree Survey (KiDS-450) overlapping with two spectroscopic surveys: the 2-degree Field Lensing Survey (2dFLenS) and the Baryon Oscillation Spectroscopic Survey (BOSS). We restrict the galaxy-galaxy lensing and multipole power spectrum measurements to the overlapping regions with KiDS, and self-consistently compute the full covariance between the different observables using a large suite of N-body simulations. We methodically analyse different combinations of the observables, finding that the galaxy-galaxy lensing measurements are particularly useful in improving the constraint on the intrinsic alignment amplitude, while the multipole power spectra are useful in tightening the constraints along the lensing degeneracy direction. The fully combined constraint on S_8 ≡ σ_8√(Ω_m/0.3) = 0.742 ± 0.035, which is an improvement by 20 per cent compared to KiDS alone, corresponds to a 2.6σ discordance with Planck, and is not significantly affected by fitting to a more conservative set of scales. Given the tightening of the parameter space, we are unable to resolve the discordance with an extended cosmology that is simultaneously favoured in a model selection sense, including the sum of neutrino masses, curvature, evolving dark energy and modified gravity. The complementarity of our observables allows for constraints on modified gravity degrees of freedom that are not simultaneously bounded with either probe alone, and up to a factor of three improvement in the S_8 constraint in the extended cosmology compared to KiDS alone.
Directory of Open Access Journals (Sweden)
Prashant J. Patil
2016-09-01
Full Text Available Close tolerances and a good surface finish are achieved by means of the grinding process. This study was carried out for multi-objective optimization of MQL grinding process parameters. Water-based Al2O3 and CuO nanofluids of various concentrations were used as lubricants for the MQL system. Grinding experiments were carried out on an instrumented surface grinding machine. For experimentation purposes, Taguchi's method was used. Important process parameters that affect the G ratio and surface finish in MQL grinding are depth of cut, type of lubricant, feed rate, grinding wheel speed, coolant flow rate, and nanoparticle size. Grinding performance was evaluated by measuring the G ratio and surface finish. To improve the grinding process, a multi-objective process parameter optimization was performed using Taguchi-based grey relational analysis. To identify the most significant process factors, analysis of variance (ANOVA) was used.
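The grey relational analysis step can be sketched as follows for larger-the-better responses such as the G ratio; the normalization and the distinguishing coefficient ζ = 0.5 follow common practice, not necessarily this paper's exact setup:

```python
def grey_relational_grade(trials, zeta=0.5):
    # trials: one response vector per experimental trial, all responses
    # larger-the-better. Steps: (1) normalize each response to [0, 1],
    # (2) take the deviation from the ideal sequence (all ones),
    # (3) compute grey relational coefficients, (4) average into a grade.
    m = len(trials[0])
    cols = list(zip(*trials))
    grades = []
    for t in trials:
        coeffs = []
        for j in range(m):
            lo, hi = min(cols[j]), max(cols[j])
            x = (t[j] - lo) / (hi - lo) if hi > lo else 1.0
            # Deviation from ideal is (1 - x); Δmin = 0, Δmax = 1 after
            # normalization, so the coefficient is ζ / (Δ + ζ).
            coeffs.append(zeta / ((1.0 - x) + zeta))
        grades.append(sum(coeffs) / m)
    return grades

grades = grey_relational_grade([[1, 10], [2, 20], [3, 30]])
# The third trial dominates both responses, so it receives the top grade.
```

Ranking trials by grade converts the multi-objective problem into a single-objective one, which is exactly how the Taguchi-based analysis picks the best parameter combination.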
Sensitivity Analysis of features in tolerancing based on constraint function level sets
International Nuclear Information System (INIS)
Ziegler, Philipp; Wartzack, Sandro
2015-01-01
Usually, the geometry of the manufactured product inherently varies from the nominal geometry. This may negatively affect the product functions and properties (such as quality and reliability), as well as the assemblability of the single components. In order to avoid this, the geometric variation of these component surfaces and associated geometry elements (like hole axes) is restricted by tolerances. Since tighter tolerances lead to significantly higher manufacturing costs, tolerances should be specified carefully. Therefore, the impact of deviating component surfaces on functions, properties and assemblability of the product has to be analyzed. As physical experiments are expensive, statistical tolerance analysis methods are widely used in engineering design. Current tolerance simulation tools lack an appropriate indicator for the impact of deviating component surfaces. In the adoption of Sensitivity Analysis methods, there are several challenges, which arise from the specific framework in tolerancing. This paper presents an approach to adopt Sensitivity Analysis methods in current tolerance simulations with an interface module, which is based on level sets of constraint functions for the parameters of the simulation model. The paper is an extension and generalization of Ziegler and Wartzack [1]. Mathematical properties of the constraint functions (convexity, homogeneity), which are important for the computational costs of the Sensitivity Analysis, are shown. The practical use of the method is illustrated in a case study of a plain bearing. - Highlights: • Alternative definition of Deviation Domains. • Proof of mathematical properties of the Deviation Domains. • Definition of the interface between Deviation Domains and Sensitivity Analysis. • Sensitivity analysis of a gearbox to show the method's practical use
International Nuclear Information System (INIS)
Thieke, Christian; Bortfeld, Thomas; Niemierko, Andrzej; Nill, Simeon
2003-01-01
Optimization algorithms in inverse radiotherapy planning need information about the desired dose distribution. Usually the planner defines physical dose constraints for each structure of the treatment plan, either in the form of minimum and maximum doses or as dose-volume constraints. The concept of equivalent uniform dose (EUD) was designed to describe dose distributions with a higher clinical relevance. In this paper, we present a method to consider the EUD as an optimization constraint by using the method of projections onto convex sets (POCS). In each iteration of the optimization loop, for the actual dose distribution of an organ that violates an EUD constraint, a new dose distribution is calculated that satisfies the EUD constraint, leading to voxel-based physical dose constraints. The new dose distribution is found by projecting the current one onto the convex set of all dose distributions fulfilling the EUD constraint. The algorithm is easy to integrate into existing inverse planning systems, and it allows the planner to choose between physical and EUD constraints separately for each structure. A clinical case of a head and neck tumor is optimized using three different sets of constraints: physical constraints for all structures, physical constraints for the target and EUD constraints for the organs at risk, and EUD constraints for all structures. The results show that the POCS method converges stably and that the given EUD constraints are closely met.
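For the generalized EUD with exponent a, a correction step has a particularly simple form because gEUD is positively homogeneous: rescaling a violating distribution lands exactly on the constraint surface. A minimal sketch (illustrative uniform rescaling, not the paper's full POCS projection):

```python
def gEUD(dose, a):
    # Generalized equivalent uniform dose: the a-norm mean of voxel doses.
    n = len(dose)
    return (sum(d ** a for d in dose) / n) ** (1.0 / a)

def enforce_eud(dose, eud_max, a):
    # One constraint-correction step: if the organ violates its EUD
    # limit, scale the whole distribution back onto the constraint
    # surface. Scaling is exact here because gEUD(c * dose) = c * gEUD(dose).
    eud = gEUD(dose, a)
    if eud <= eud_max:
        return dose
    scale = eud_max / eud
    return [d * scale for d in dose]

oar = [10.0, 30.0, 60.0]                           # voxel doses in Gy (illustrative)
corrected = enforce_eud(oar, eud_max=30.0, a=8.0)  # large a for serial organs
```

Large exponents a make the gEUD track the maximum dose (serial organs), while a = 1 reduces it to the mean dose (parallel organs), which is why the constraint translates into different voxel-level corrections per structure.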
International Nuclear Information System (INIS)
Jia, Zhiwei; Yan, Guozheng; Zhu, Bingquan
2015-01-01
An implanted telemetry system for experimental animals with or without anaesthesia can be used to continuously monitor physiological parameters. This system is significant not only in the study of organisms but also in the evaluation of drug efficacy, artificial organs, and auxiliary devices. The system is composed of a miniature electronic capsule, a wireless power transmission module, a data-recording device, and a processing module. An electrocardiograph, a temperature sensor, and a pressure sensor are integrated in the miniature electronic capsule, in which the signals are transmitted in vitro by wireless communication after filtering, amplification, and A/D sampling. To overcome the power shortage of batteries, a wireless power transmission module based on electromagnetic induction was designed. The transmitting coil of a rectangular-section solenoid and a 3D receiving coil are proposed according to stability and safety constraints. Experiments show that at least 150 mW of power could be picked up by the load in a volume of Φ10.5 mm × 11 mm, with a transmission efficiency of 2.56%. Vivisection experiments verified the feasibility of the integrated radio-telemetry system
Energy Technology Data Exchange (ETDEWEB)
Jia, Zhiwei, E-mail: jiayege@hotmail.com [College of Electrical and Information Engineering, Changsha University of Science and Technology, Changsha (China); Yan, Guozheng; Zhu, Bingquan [820 Institute, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China)
2015-04-15
An implanted telemetry system for experimental animals with or without anaesthesia can be used to continuously monitor physiological parameters. This system is significant not only in the study of organisms but also in the evaluation of drug efficacy, artificial organs, and auxiliary devices. The system is composed of a miniature electronic capsule, a wireless power transmission module, a data-recording device, and a processing module. An electrocardiograph, a temperature sensor, and a pressure sensor are integrated in the miniature electronic capsule, in which the signals are transmitted in vitro by wireless communication after filtering, amplification, and A/D sampling. To overcome the power shortage of batteries, a wireless power transmission module based on electromagnetic induction was designed. The transmitting coil of a rectangular-section solenoid and a 3D receiving coil are proposed according to stability and safety constraints. Experiments show that at least 150 mW of power could be picked up by the load in a volume of Φ10.5 mm × 11 mm, with a transmission efficiency of 2.56%. Vivisection experiments verified the feasibility of the integrated radio-telemetry system.
An Improved Constraint-Based System for the Verification of Security Protocols
Corin, R.J.; Etalle, Sandro
We propose a constraint-based system for the verification of security protocols that improves upon the one developed by Millen and Shmatikov [30]. Our system features (1) a significantly more efficient implementation, and (2) a monotonic behavior, which also makes it possible to detect flaws associated with partial
An Improved Constraint-based system for the verification of security protocols
Corin, R.J.; Etalle, Sandro; Hermenegildo, Manuel V.; Puebla, German
We propose a constraint-based system for the verification of security protocols that improves upon the one developed by Millen and Shmatikov. Our system features (1) a significantly more efficient implementation, and (2) a monotonic behavior, which also makes it possible to detect flaws associated with partial runs
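Constraint-based protocol verification of this kind reduces to deciding whether an intruder can derive required messages from accumulated knowledge. One ingredient of that decision can be sketched as closing a Dolev-Yao knowledge set under the destructor rules, unpairing and symmetric decryption with a known key (a toy illustration only, not the Millen-Shmatikov constraint solver; term encoding and rules are invented for this sketch, and constructor rules such as building new pairs are omitted):

```python
# Toy Dolev-Yao derivability check: can the intruder derive `goal` from `knowledge`?
# Terms are tuples: ('pair', a, b), ('enc', msg, key), or atomic strings.

def saturate(knowledge):
    """Close the knowledge set under unpairing and decryption with known keys."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(known):
            if isinstance(t, tuple) and t[0] == 'pair':
                for part in t[1:]:               # both components become known
                    if part not in known:
                        known.add(part); changed = True
            if isinstance(t, tuple) and t[0] == 'enc' and t[2] in known:
                if t[1] not in known:            # decrypt with a known key
                    known.add(t[1]); changed = True
    return known

def derivable(goal, knowledge):
    return goal in saturate(knowledge)

# The intruder sees {secret}_k and the pair (k, nonce): unpair, then decrypt.
kb = [('enc', 'secret', 'k'), ('pair', 'k', 'nonce')]
print(derivable('secret', kb))  # True
```

The fixed-point loop terminates because each pass only adds subterms of existing terms; the full verification problem additionally interleaves such derivability checks with symbolic constraints on message variables.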
Towards automatic parameter tuning of stream processing systems
Bilal, Muhammad; Canini, Marco
2017-01-01
for automating parameter tuning for stream-processing systems. Our framework supports standard black-box optimization algorithms as well as a novel gray-box optimization algorithm. We demonstrate the multiple benefits of automated parameter tuning in optimizing
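The black-box side of such automated tuning can be sketched with plain random search over a discrete configuration space, scoring each candidate with a user-supplied cost function (the knob names and the synthetic latency model below are invented for illustration; the paper's gray-box algorithm is not reproduced here):

```python
import random

def tune(cost, space, n_trials=500, seed=42):
    """Black-box tuning by random search: sample configurations, keep the best."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float('inf')
    for _ in range(n_trials):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        c = cost(cfg)
        if c < best_cost:
            best_cfg, best_cost = cfg, c
    return best_cfg, best_cost

# Hypothetical stream-processing knobs and a synthetic latency model.
space = {'batch_size': [64, 128, 256, 512], 'parallelism': [1, 2, 4, 8]}
latency = lambda c: abs(c['batch_size'] - 256) / 64 + abs(c['parallelism'] - 4)

cfg, cost = tune(latency, space)
print(cfg, cost)  # with enough trials this finds batch_size=256, parallelism=4
```

Gray-box methods improve on this by exploiting structural knowledge of the system (e.g. which knobs interact), so they need far fewer trials than uninformed sampling.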
Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis
Chiou, Jin-Chern
1990-01-01
Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAEs) viewpoint. To minimize constraint violations during time integration, penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference scheme to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained with an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computations, the constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm were efficiently implemented in parallel. The DAEs and the constraint treatment techniques were transformed into arrowhead matrices, from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
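The arrowhead/Schur-complement idea can be illustrated on a small linear system: eliminate the cheap block-diagonal part first, solve the small Schur-complement system for the coupling variables, then back-substitute (a generic numerical sketch with made-up data, not the thesis's parallel algorithm):

```python
import numpy as np

# Arrowhead system: [[A, B], [B.T, C]] [x; y] = [f; g], with A block diagonal.
rng = np.random.default_rng(0)
n, m = 6, 2
A = np.diag(rng.uniform(2.0, 3.0, n))   # cheap to invert (diagonal here)
B = rng.standard_normal((n, m))
C = 10.0 * np.eye(m)
f, g = rng.standard_normal(n), rng.standard_normal(m)

# Schur complement S = C - B.T A^{-1} B; solve for y, then back-substitute for x.
Ainv_B = B / np.diag(A)[:, None]
Ainv_f = f / np.diag(A)
S = C - B.T @ Ainv_B
y = np.linalg.solve(S, g - B.T @ Ainv_f)
x = Ainv_f - Ainv_B @ y

# Agreement with a direct solve of the assembled system.
K = np.block([[A, B], [B.T, C]])
direct = np.linalg.solve(K, np.concatenate([f, g]))
print(np.allclose(np.concatenate([x, y]), direct))  # True
```

In the parallel setting the point is that the block solves with A decouple across processors, and only the small system S (here 2 × 2) requires global communication; the thesis solves it with a preconditioned conjugate gradient method rather than a dense factorization.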
International Nuclear Information System (INIS)
Hazwani Halim; Syed Yusainee Syed Yahya; Sahrim Ahmad; Tarawneh, M.A.; Shamsul Bahri, A.R.
2011-01-01
In this study, the optimal processing parameters, including mixing time, rotor speed and temperature, of CNT and TPNR (NR/LNR/LLDPE) composites were determined from tensile properties. To prepare the composite, the NR/LLDPE matrix was compatibilized using liquid natural rubber (LNR) in the proportion 40 % NR, 10 % LNR and 50 % LLDPE. Then, 2 % CNTs were incorporated into the matrix at different processing temperatures, rotor speeds and mixing times. Temperatures of 135, 140, 145, 150 and 155 °C and rotor speeds of 45, 50, 55, 60 and 65 rpm were used. Five different mixing times were investigated: 9, 11, 13, 15 and 17 min. The results for Young's modulus and elongation at break show a maximum for the composite prepared at 140 °C, 55 rpm and 13 min; both properties decreased when the temperature was increased further. Based on these results, we conclude that the optimum processing parameters for this CNT composite are quite similar to those of the matrix (TPNR) itself. (author)
A Tuning Procedure for ARX-based MPC of Multivariate Processes
DEFF Research Database (Denmark)
Olesen, Daniel; Huusom, Jakob Kjøbsted; Jørgensen, John Bagterp
2013-01-01
We present an optimization based tuning procedure with certain robustness properties for an offset free Model Predictive Controller (MPC). The MPC is designed for multivariate processes that can be represented by an ARX model. The stochastic part of the ARX model identified from input-output data ... is modified with an ARMA model designed as part of the MPC-design procedure to ensure offset-free control. The MPC is designed and implemented based on a state space model in innovation form. Expressions for the closed-loop dynamics of the unconstrained system are used to derive the sensitivity function ... to a constraint on the maximum of the sensitivity function. The latter constraint provides a robustness measure that is essential for the procedure. The method is demonstrated for two simulated examples: a Wood-Berry distillation column example and a cement mill example.
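The ARX model at the core of such an MPC design is identified from input-output data by linear least squares. A minimal sketch with a second-order single-input ARX model and synthetic, noise-free data (illustrative only; not the paper's identification or tuning procedure):

```python
import numpy as np

# ARX(2,2): y[k] = -a1*y[k-1] - a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
# (the noise term e[k] of a full ARX model is omitted for this sketch).
rng = np.random.default_rng(1)
a1, a2, b1, b2 = -1.2, 0.5, 0.8, 0.3   # "true" parameters (stable poles)
N = 500
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = -a1 * y[k-1] - a2 * y[k-2] + b1 * u[k-1] + b2 * u[k-2]

# Regressor matrix Phi and the least-squares estimate of [a1, a2, b1, b2].
Phi = np.column_stack([-y[1:-1], -y[:-2], u[1:-1], u[:-2]])
theta = np.linalg.lstsq(Phi, y[2:], rcond=None)[0]
print(np.round(theta, 3))  # [-1.2  0.5  0.8  0.3] on noise-free data
```

With noisy data the same least-squares step yields consistent estimates for ARX structures, and the identified polynomials feed directly into the innovation-form state space model the MPC is built on.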
Proceedings of the 4th International Workshop on Constraints and Language Processing, CSLP 2007
DEFF Research Database (Denmark)
, and the Programming, Logic and Intelligent Systems Research Group & the Department of Communication, Business and Information Technologies, Roskilde University, Denmark. Finally, we dedicate this volume to Peter Rossen Skadhauge, who sadly and unexpectedly passed away last year. He had been a member of the program ... our invited speaker, Annelies Braffort, who will talk on the modelling of spatio-temporal constraints in sign language processing. We also express special thanks to Barbara Hemforth, who accepted to send us a late paper which emphasizes the importance of the psycholinguistic dimension ... in this research. This volume contains papers accepted for the workshop based on an open call; each paper was reviewed by three or four members of the program committee. As editors, we would also like to thank the other members of the organization committee, Philippe Blache and Veronica Dahl, whose ...
International Nuclear Information System (INIS)
Heilbron Filho, Paulo Fernando Lavalle; Xavier, Ana Maria
2005-01-01
The revision process of the international radiological protection regulations has resulted in the adoption of new concepts, such as practice, intervention, avoidable dose, and restriction of dose (dose constraint). The latter deserves special mention, since it may involve an a priori reduction of the dose limits established both for the public and for occupationally exposed individuals, values that can be further reduced depending on the application of the principle of optimization. Starting from the criteria adopted to define dose constraint values for the public, this article presents a methodology to establish dose constraint values for occupationally exposed individuals, as well as an example of the application of this methodology to the practice of industrial radiography.
Jannin, Vincent; Rodier, Jean-David; Musakhanian, Jasmine
2014-05-15
Lipid-based formulations are a viable option to address modern drug delivery challenges such as increasing the oral bioavailability of poorly water-soluble active pharmaceutical ingredients (APIs), or sustaining the drug release of molecules intended for chronic diseases. Esters of fatty acids and glycerol (glycerides) and polyethylene-glycols (polyoxylglycerides) are two main classes of lipid-based excipients used by oral, dermal, rectal, vaginal or parenteral routes. These lipid-based materials are more and more commonly used in pharmaceutical drug products, but there is still a lack of understanding of how the manufacturing processes, processing aids, or additives can impact the chemical stability of APIs within the drug product. In that regard, this review summarizes the key parameters to look at when formulating with lipid-based excipients in order to anticipate a possible impact on drug stability or variation of excipient functionality. The introduction presents the chemistry of natural lipids and fatty acids and their properties in relation to the extraction and refinement processes. Then, the key parameters during the manufacturing process influencing the quality of lipid-based excipients are provided. Finally, their critical characteristics are discussed in relation to their intended functionality and ability to interact with APIs and other excipients within the formulation. Copyright © 2014. Published by Elsevier B.V.
Optimization of processing parameters of amaranth grits before grinding into flour
Zharkova, I. M.; Safonova, Yu A.; Slepokurova, Yu I.
2018-05-01
This paper presents the results of experimental studies of the influence of infrared (IR) treatment parameters, applied to amaranth grits before grinding into flour, on the composition and properties of the resulting product. Using regression-factor analysis, the optimal conditions for thermal processing of the amaranth grits were obtained: conveyor belt speed of 0.049 m/s; grit temperature in the tempering silo of 65.4 °C; grit layer thickness on the belt of 3-5 mm; and lamp power of 69.2 kW/m². The research confirmed that thermal treatment of the amaranth grains in the IR setting yields flour with smaller starch grains, increased water-holding ability, and a changed glycemic index. Mathematical processing of the experimental data established the dependence of the structural and technological characteristics of the amaranth flour on the IR processing parameters of the grits. The obtained results are quite consistent with the experimental ones, which confirms the effectiveness of optimization based on mathematical planning of the experiment for determining the influence of the optimal heat treatment parameters of amaranth grits on the functional and technological properties of the resulting flour.
Disease-induced resource constraints can trigger explosive epidemics
Böttcher, L.; Woolley-Meza, O.; Araújo, N. A. M.; Herrmann, H. J.; Helbing, D.
2015-11-01
Advances in mathematical epidemiology have led to a better understanding of the risks posed by epidemic spreading and informed strategies to contain disease spread. However, a challenge that has been overlooked is that, as a disease becomes more prevalent, it can limit the availability of the capital needed to effectively treat those who have fallen ill. Here we use a simple mathematical model to gain insight into the dynamics of an epidemic when the recovery of sick individuals depends on the availability of healing resources that are generated by the healthy population. We find that epidemics spiral out of control into “explosive” spread if the cost of recovery is above a critical cost. This can occur even when the disease would die out without the resource constraint. The onset of explosive epidemics is very sudden, exhibiting a discontinuous transition under very general assumptions. We find analytical expressions for the critical cost and the size of the explosive jump in infection levels in terms of the parameters that characterize the spreading process. Our model and results apply beyond epidemics to contagion dynamics that self-induce constraints on recovery, thereby amplifying the spreading process.
International Nuclear Information System (INIS)
Wang, J.S.Y.; Narasimhan, T.N.
1993-06-01
This report discusses conceptual models and mathematical equations, analyzes distributions and correlations among hydrological parameters of soils and tuff, introduces new path integration approaches, and outlines scaling procedures to model potential-driven fluid flow in heterogeneous media. To properly model the transition from fracture-dominated flow under saturated conditions to matrix-dominated flow under partially saturated conditions, characteristic curves and permeability functions for fractures and matrix need to be improved and validated. Couplings from two-phase flow, heat transfer, solute transport, and rock deformation to liquid flow are also important. For stochastic modeling of alternating units of welded and nonwelded tuff or formations bounded by fault zones, correlations and constraints on average values of saturated permeability and air entry scaling factor between different units need to be imposed to avoid unlikely combinations of parameters and predictions. Large-scale simulations require efficient and verifiable numerical algorithms. New path integration approaches based on postulates of minimum work and mass conservation to solve flow geometry and potential distribution simultaneously are introduced. This verifiable integral approach, together with fractal scaling procedures to generate statistical realizations with parameter distribution, correlation, and scaling taken into account, can be used to quantify uncertainties and generate the cumulative distribution function for groundwater travel times
Optimisation Of Cutting Parameters Of Composite Material Laser Cutting Process By Taguchi Method
Lokesh, S.; Niresh, J.; Neelakrishnan, S.; Rahul, S. P. Deepak
2018-03-01
The aim of this work is to develop a laser cutting process model that can predict the relationship between the process input parameters and the resultant surface roughness and kerf width characteristics. The research is based on Design of Experiment (DOE) analysis. Response Surface Methodology (RSM) is used in this work; it is one of the most practical and effective techniques for developing a process model. Although RSM has been used for the optimization of laser processes before, this research investigates laser cutting of materials such as composite wood (veneer) to determine the best laser cutting conditions using RSM. The input parameters evaluated are focal length, power supply and cutting speed, the output responses being kerf width, surface roughness, and temperature. To efficiently optimize and customize the kerf width and surface roughness characteristics, a laser cutting process model using the Taguchi L9 orthogonal methodology was proposed.
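A Taguchi L9 design assigns three levels of up to four factors to nine runs so that every pair of columns is balanced. A small sketch constructing the standard L9(3⁴) array and verifying its orthogonality (the array itself is standard; any mapping of its columns to factors such as focal length, power and speed would be application-specific):

```python
# Standard L9(3^4) orthogonal array, levels coded 0/1/2, columns = factors.
L9 = [
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
]

# Balance: each level appears exactly 3 times in every column ...
for col in range(4):
    counts = [sum(1 for row in L9 if row[col] == lv) for lv in range(3)]
    assert counts == [3, 3, 3]

# ... and every (level, level) pair appears exactly once for any two columns.
for c1 in range(4):
    for c2 in range(c1 + 1, 4):
        assert len({(row[c1], row[c2]) for row in L9}) == 9

print("L9 array is orthogonal")
```

The balance property is what lets main effects of each factor be estimated from only nine runs instead of the 3⁴ = 81 runs of a full factorial.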
The impact of structural error on parameter constraint in a climate model
McNeall, Doug; Williams, Jonny; Booth, Ben; Betts, Richard; Challenor, Peter; Wiltshire, Andy; Sexton, David
2016-11-01
Uncertainty in the simulation of the carbon cycle contributes significantly to uncertainty in the projections of future climate change. We use observations of forest fraction to constrain carbon cycle and land surface input parameters of the global climate model FAMOUS, in the presence of an uncertain structural error. Using an ensemble of climate model runs to build a computationally cheap statistical proxy (emulator) of the climate model, we use history matching to rule out input parameter settings where the corresponding climate model output is judged sufficiently different from observations, even allowing for uncertainty. Regions of parameter space where FAMOUS best simulates the Amazon forest fraction are incompatible with the regions where FAMOUS best simulates other forests, indicating a structural error in the model. We use the emulator to simulate the forest fraction at the best set of parameters implied by matching the model to the Amazon, Central African, South East Asian, and North American forests in turn. We can find parameters that lead to a realistic forest fraction in the Amazon, but that using the Amazon alone to tune the simulator would result in a significant overestimate of forest fraction in the other forests. Conversely, using the other forests to tune the simulator leads to a larger underestimate of the Amazon forest fraction. We use sensitivity analysis to find the parameters which have the most impact on simulator output and perform a history-matching exercise using credible estimates for simulator discrepancy and observational uncertainty terms. We are unable to constrain the parameters individually, but we rule out just under half of joint parameter space as being incompatible with forest observations. We discuss the possible sources of the discrepancy in the simulated Amazon, including missing processes in the land surface component and a bias in the climatology of the Amazon.
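History matching rules out parameter settings whose emulated output is implausibly far from the observation, using an implausibility measure of the form I(x) = |z − E[f(x)]| / sqrt(emulator variance + observation variance + discrepancy variance). A generic one-dimensional sketch (the linear "emulator" and every number below are invented for illustration; they are not FAMOUS outputs):

```python
import math

def implausibility(pred_mean, pred_var, obs, obs_var, disc_var):
    """I(x): standardized distance between emulator prediction and observation."""
    return abs(obs - pred_mean) / math.sqrt(pred_var + obs_var + disc_var)

# Toy "emulator" of forest fraction as a function of one input parameter.
emulate = lambda x: (0.9 - 0.5 * x, 0.0012)   # (mean, variance), made up

obs, obs_var, disc_var = 0.6, 0.0005, 0.0009  # observation + structural error budget

# Keep only parameter settings that are Not Ruled Out Yet (NROY): I(x) < 3.
candidates = [i / 10.0 for i in range(11)]    # x in [0, 1]
nroy = [x for x in candidates
        if implausibility(*emulate(x), obs, obs_var, disc_var) < 3.0]
print(nroy)  # the sub-interval of parameter space compatible with the observation
```

Note that the discrepancy variance term is exactly where the paper's structural error enters: inflating it widens the NROY region, which is why credible discrepancy estimates matter for how much of parameter space can be ruled out.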
Effect of different ICT processing parameters on the quality of tomograms
International Nuclear Information System (INIS)
Zhou Jiang; Sun Lingxia; Ye Yunchang
2009-01-01
The quality of ICT tomograms is affected by the detection and processing parameters and by the image processing methods, in addition to the performance of the ICT system itself. Optimal processing parameters and image processing methods can improve not only the quality of the tomograms but also the resolution. Research was carried out on processing parameters and image processing methods, including the choice of collimator, filter, and false-color composite imaging, and some examples are given in this paper, which can serve as a reference for ICT analysts. (authors)
Momentum constraint relaxation
International Nuclear Information System (INIS)
Marronetti, Pedro
2006-01-01
Full relativistic simulations in three dimensions invariably develop runaway modes that grow exponentially and are accompanied by violations of the Hamiltonian and momentum constraints. Recently, we introduced a numerical method (Hamiltonian relaxation) that greatly reduces the Hamiltonian constraint violation and helps improve the quality of the numerical model. We present here a method that controls the violation of the momentum constraint. The method is based on the addition of a longitudinal component to the traceless extrinsic curvature Ã_ij, generated by a vector potential w_i, as outlined by York. The components of w_i are relaxed to solve approximately the momentum constraint equations, slowly pushing the evolution towards the space of solutions of the constraint equations. We test this method with simulations of binary neutron stars in circular orbits and show that it effectively controls the growth of the aforementioned violations. We also show that a full numerical enforcement of the constraints, as opposed to the gentle correction of the momentum relaxation scheme, results in the development of instabilities that stop the runs shortly
Model-independent cosmological constraints from growth and expansion
L'Huillier, Benjamin; Shafieloo, Arman; Kim, Hyungjin
2018-05-01
Reconstructing the expansion history of the Universe from Type Ia supernovae data, we fit the growth rate measurements and put model-independent constraints on some key cosmological parameters, namely, Ωm, γ, and σ8. The constraints are consistent with those from the concordance model within the framework of general relativity, but the current quality of the data is not sufficient to rule out modified gravity models. Adding the condition that dark energy density should be positive at all redshifts, independently of its equation of state, further constrains the parameters and interestingly supports the concordance model.
A constraint-based bottom-up counterpart to definite clause grammars
DEFF Research Database (Denmark)
Christiansen, Henning
2004-01-01
A new grammar formalism, CHR Grammars (CHRG), is proposed that provides a constraint-solving approach to language analysis, built on top of the programming language of Constraint Handling Rules in the same way as Definite Clause Grammars (DCG) on Prolog. CHRG works bottom-up and adds the following ..., integrity constraints, operators a la assumption grammars, and to incorporate other constraint solvers. (iv) Context-sensitive rules that apply for disambiguation, coordination in natural language and tagger-like rules.
Application of a methodology based on the Theory of Constraints in the sector of tourism services
Directory of Open Access Journals (Sweden)
Reyner Pérez Campdesuñer
2017-04-01
Full Text Available Purpose: The research aimed to implement the theory of constraints under the operating conditions of a hotel, which differs from the traditional processes to which this method has been applied owing to the great heterogeneity of resources needed to meet customer demand. Design/methodology/approach: To achieve this purpose, a method of generating conversion equations was developed that allowed all the resources of the organization under study to be expressed as a function of the number of customers to be served, facilitating comparison between the different resources and the demand estimated through traditional forecasting techniques; these features were integrated into the classical methodology of the theory of constraints. Findings: The application of the tools designed for hospitality organizations demonstrated the applicability of the theory of constraints to entities under conditions different from the usual ones, produced a set of conversion equations for the different resources that facilitates comparison with demand, and consequently improved the efficiency and effectiveness of the organization. Originality/value: The originality of the research lies in the application of the theory of constraints under conditions very different from the usual ones, covering 100% of the processes and resources of hospitality organizations.
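The conversion-equation idea, expressing each heterogeneous resource as capacity in a common unit of "customers served" so it can be compared with forecast demand, reduces constraint (bottleneck) detection to a minimum over resources. A hypothetical hotel example (all quantities invented for illustration; the paper's actual equations are not reproduced):

```python
# Each resource expressed in a common unit: customers it can serve per day.
# Quantities are invented for illustration.
resources = {
    'rooms':        {'units': 120, 'customers_per_unit': 2.0},
    'restaurant':   {'units': 80,  'customers_per_unit': 3.5},
    'reception':    {'units': 3,   'customers_per_unit': 90.0},
    'housekeeping': {'units': 10,  'customers_per_unit': 20.0},
}
demand = 250  # forecast customers per day

# Conversion equation: capacity_i = units_i * customers_per_unit_i.
capacity = {name: r['units'] * r['customers_per_unit']
            for name, r in resources.items()}

# The system constraint (bottleneck) is the resource with the least capacity.
bottleneck = min(capacity, key=capacity.get)
print(bottleneck, capacity[bottleneck], capacity[bottleneck] >= demand)
```

In theory-of-constraints terms, the subsequent steps (exploit, subordinate, elevate) then focus improvement effort on the identified bottleneck rather than on resources with slack capacity.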
Energy Technology Data Exchange (ETDEWEB)
Alayli, N. [Université Paris 13, Sorbonne Paris Cité, Laboratoire des Sciences des Procédés et des Matériaux, Centre National de la Recherche Scientifique, Unité Propre de Recherche 3407, 99 avenue Jean Baptiste Clément, F-93430 Villetaneuse (France); Université de Versailles-Saint-Quentin-en-Yvelines, Sorbonne Universités, Université Pierre et Marie Curie, Université Paris 06, Centre National de la Recherche Scientifique/INSU, Laboratoire Atmosphères Milieux Observations Spatiales-IPSL, Quartier des Garennes, 11 Boulevard d'Alembert, F-78280 Guyancourt (France); Schoenstein, F., E-mail: frederic.schoenstein@univ-paris13.fr [Université Paris 13, Sorbonne Paris Cité, Laboratoire des Sciences des Procédés et des Matériaux, Centre National de la Recherche Scientifique, Unité Propre de Recherche 3407, 99 avenue Jean Baptiste Clément, F-93430 Villetaneuse (France); Girard, A. [Office National d'Études et de Recherches Aérospatiales, Laboratoire d'Étude des Microstructures, Centre National de la Recherche Scientifique, Unité Mixte de Recherche 104, 29 avenue de la Division Leclerc, F-92322 Châtillon (France); and others
2014-11-14
Processing parameters of the Spark Plasma Sintering (SPS) technique were constrained to process nano-sized silver particles bound in a paste for interconnection in power electronic devices. A novel strategy combining a debinding step and the consolidation process (SPS) to elaborate nano-structured silver bulk material is investigated. Optimum parameters were sought for industrial power electronics packaging from the microstructural and morphological properties of the sintered material. The latter was studied by scanning electron microscopy (SEM) and X-ray diffraction (XRD) to determine the density and the crystallite grain size. Two types of samples, termed S1 (bulk) and S2 (multilayer), were elaborated and characterized. They are homogeneous with a low degree of porosity and good adhesion to the substrate, and the process parameters are compatible with industrial constraints. As the experimental results show, the mean crystallite size is between 60 nm and 790 nm with a density between 50% and 92%, resulting in mechanical and thermal properties that are better than those of lead-free solder. The best SPS sintering parameters, the applied pressure, the temperature and the processing time, were determined as being 3 MPa, 300 °C and 1 min respectively, when the desizing time of the preprocessing step was kept below 5 min at 150 °C. Using these processing parameters, acceptable for the automotive packaging industry, a semiconductor power chip was successfully connected to a metalized substrate by sintered silver, with thermal and electrical properties better than those of current solders and with thermomechanical properties allowing absorption of thermoplastic stresses. - Highlights: • The sintered silver joints have a nanometric structure. • Grain growth was controlled by the SPS sintering parameters. • The new connection material improves on the thermal and electrical properties of current solders. • The interconnection's plastic strain can absorb thermo
International Nuclear Information System (INIS)
Sathiya, P.; Ajith, P. M.; Soundararajan, R.
2013-01-01
The present study is focused on welding of super austenitic stainless steel sheet using the gas metal arc welding process with AISI 904 L super austenitic stainless steel solid wire of 1.2 mm diameter. The experiments are carried out based on the Box-Behnken design technique. The ranges of the input parameters (gas flow rate, voltage, travel speed and wire feed rate) are selected based on the filler wire thickness and base material thickness, and the corresponding output variables, bead width (BW), bead height (BH) and depth of penetration (DP), are measured using optical microscopy. Based on the experimental data, mathematical models are developed by regression analysis using Design Expert 7.1 software. An attempt is made to minimize the bead width and bead height and maximize the depth of penetration using a genetic algorithm.
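The regression-then-optimize workflow can be sketched generically: fit a response-surface model to designed-experiment data by least squares, then search the fitted surface for the best settings. The sketch below uses a synthetic one-response example with two coded factors and a small face-centred design (not the paper's Box-Behnken matrix), and a grid search stands in for the genetic algorithm:

```python
import numpy as np

# Synthetic "depth of penetration" over two coded factors in [-1, 1]
# (hidden ground truth, invented for illustration).
true_dp = lambda v, s: 3.0 + 0.8 * v - 0.5 * s - 0.6 * v * v

# Designed runs at coded levels and their "measured" responses.
runs = [(-1, -1), (-1, 1), (1, -1), (1, 1), (0, 0),
        (-1, 0), (1, 0), (0, -1), (0, 1)]
ys = np.array([true_dp(v, s) for v, s in runs])

# Fit the response-surface model y ~ b0 + b1*v + b2*s + b3*v^2 by least squares.
X = np.array([[1.0, v, s, v * v] for v, s in runs])
beta = np.linalg.lstsq(X, ys, rcond=None)[0]

# Maximize the fitted surface; a grid search stands in for the paper's GA.
grid = [(-1 + i / 50, -1 + j / 50) for i in range(101) for j in range(101)]
fitted = lambda v, s: beta @ np.array([1.0, v, s, v * v])
best = max(grid, key=lambda p: fitted(*p))
print(np.round(beta, 3), best)  # beta recovers [3.0, 0.8, -0.5, -0.6]
```

With real, noisy measurements the fitted coefficients would carry uncertainty, and a genetic algorithm becomes worthwhile once the factor space is too large or the fitted model too rough for exhaustive search.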
Energy Technology Data Exchange (ETDEWEB)
Sathiya, P. [National Institute of Technology Tiruchirappalli (India); Ajith, P. M. [Department of Mechanical Engineering Rajiv Gandhi Institute of Technology, Kottayam (India); Soundararajan, R. [Sri Krishna College of Engineering and Technology, Coimbatore (India)
2013-08-15
Rigid Body Time Integration by Convected Base Vectors with Implicit Constraints
DEFF Research Database (Denmark)
Krenk, Steen; Nielsen, Martin Bjerre
2013-01-01
of the kinetic energy used in the present formulation is deliberately chosen to correspond to a rigid body rotation, and the orthonormality constraints are introduced via the equivalent Green strain components of the base vectors. The particular form of the extended inertia tensor used here implies a set...
Constraints from jet calculus on quark recombination
International Nuclear Information System (INIS)
Jones, L.M.; Lassila, K.E.; Willen, D.
1979-01-01
Within the QCD jet calculus formalism, we deduce an equation describing recombination of quarks and antiquarks into mesons within a quark or gluon jet. This equation relates the recombination function R(x₁, x₂, x) used in current literature to the fragmentation function for producing that same meson out of the parton initiating the jet. We submit currently used recombination functions to our consistency test, taking as input mainly the u-quark fragmentation data into π⁺ mesons, but also s-quark fragmentation into K⁻ mesons. The constraint is well satisfied at large Q² for large moments. Our results depend on one parameter, Q₀², the constraint equation being satisfied for small values of this parameter
Neubert, M.; Winkler, J.
2012-12-01
This contribution continues an article series [1,2] about the nonlinear model-based control of the Czochralski crystal growth process. The key idea of the presented approach is to use a sophisticated combination of nonlinear model-based and conventional (linear) PI controllers for tracking of both crystal radius and growth rate. Using heater power and pulling speed as manipulated variables, several controller structures are possible. The present part tries to systematize the properties of the materials to be grown in order to obtain unambiguous decision criteria for the most profitable choice of controller structure. For this purpose, a material-specific constant M called the interface mobility and a more process-specific constant S called the system response number are introduced. While the first summarizes important material properties like thermal conductivity and latent heat, the latter characterizes the process by evaluating the average axial thermal gradients at the phase boundary and the actual growth rate at which the crystal is grown. Furthermore, these characteristic numbers are useful for establishing a scheduling strategy for the PI controller parameters in order to improve controller performance. Finally, both numbers give a better understanding of the general thermal system dynamics of the Czochralski technique.
Bio-oil from fast pyrolysis of lignin: Effects of process and upgrading parameters.
Fan, Liangliang; Zhang, Yaning; Liu, Shiyu; Zhou, Nan; Chen, Paul; Cheng, Yanling; Addy, Min; Lu, Qian; Omar, Muhammad Mubashar; Liu, Yuhuan; Wang, Yunpu; Dai, Leilei; Anderson, Erik; Peng, Peng; Lei, Hanwu; Ruan, Roger
2017-10-01
Effects of process parameters on the yield and chemical profile of bio-oil from fast pyrolysis of lignin, and processes for upgrading lignin-derived bio-oil, are reviewed. Various process parameters including pyrolysis temperature, reactor type, lignin characteristics, residence time, and feeding rate are discussed, and the optimal parameter settings for improved bio-oil yield and quality are identified. For upgrading lignin-derived bio-oil, three routes have been investigated: pretreatment of lignin, catalytic upgrading, and co-pyrolysis with hydrogen-rich materials. Zeolite cracking and hydrodeoxygenation (HDO) are the two main methods for catalytic upgrading of lignin-derived bio-oil. Factors affecting zeolite activity and the main zeolite catalytic mechanisms for lignin conversion are analyzed. Noble metal-based catalysts and metal sulfide catalysts are normally used as HDO catalysts, and conversion mechanisms involving a series of reactions have been proposed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Sensitivity of Austempering Heat Treatment of Ductile Irons to Changes in Process Parameters
Boccardo, A. D.; Dardati, P. M.; Godoy, L. A.; Celentano, D. J.
2018-03-01
Austempered ductile iron (ADI) is frequently obtained by means of a three-step austempering heat treatment. The parameters of this process play a crucial role on the microstructure of the final product. This paper considers the influence of some process parameters (i.e., the initial microstructure of ductile iron and the thermal cycle) on key features of the heat treatment (such as minimum required time for austenitization and austempering and microstructure of the final product). A computational simulation of the austempering heat treatment is reported in this work, which accounts for a coupled thermo-metallurgical behavior in terms of the evolution of temperature at the scale of the part being investigated (the macroscale) and the evolution of phases at the scale of microconstituents (the microscale). The paper focuses on the sensitivity of the process by looking at a sensitivity index and scatter plots. The sensitivity indices are determined by using a technique based on the variance of the output. The results of this study indicate that both the initial microstructure and the thermal cycle parameters play a key role in the production of ADI. This work also provides a guideline to help selecting values of the appropriate process parameters to obtain parts with a required microstructural characteristic.
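A variance-based sensitivity index of the kind used here is, in essence, S_i = Var(E[Y | X_i]) / Var(Y): the share of output variance explained by input i alone. A toy computation on a function where one input clearly dominates (a generic sketch on a uniform grid; the inputs, model, and coefficients are invented, not the paper's thermo-metallurgical simulation):

```python
import statistics

def first_order_indices(model, n=21):
    """Crude first-order sensitivity indices on a uniform grid over [0,1]^2."""
    xs = [i / (n - 1) for i in range(n)]
    ys = [model(a, b) for a in xs for b in xs]
    var_y = statistics.pvariance(ys)
    # Variance of conditional means: average over the other input, then Var.
    s1 = statistics.pvariance(
        [statistics.fmean(model(a, b) for b in xs) for a in xs]) / var_y
    s2 = statistics.pvariance(
        [statistics.fmean(model(a, b) for a in xs) for b in xs]) / var_y
    return s1, s2

# Austempering-flavoured toy: "temperature" dominates, "time" matters less.
model = lambda temp, time: 4.0 * temp + 0.5 * time
s_temp, s_time = first_order_indices(model)
print(round(s_temp, 3), round(s_time, 3))  # 0.985 0.015
```

For this additive model the two first-order indices sum to one; interactions between inputs would leave a gap, which is what total-effect indices capture in a full Sobol analysis.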
Burkatovskaya, Yuliya Borisovna; Kabanova, T.; Khaustov, Pavel Aleksandrovich
2016-01-01
The CUSUM algorithm for controlling chain state switching in the Markov-modulated Poisson process was investigated via simulation. Recommendations concerning the parameter choice were given subject to the characteristics of the process. A procedure for estimating the process parameters was described.
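A one-sided CUSUM detector of the kind investigated above can be sketched in a few lines: it accumulates deviations above the in-control mean plus a drift allowance and raises an alarm when the cumulative sum crosses a threshold. The stylized rate switch and the parameter values below are illustrative assumptions, not the paper's Markov-modulated Poisson setup.

```python
def cusum_detect(samples, target_mean, drift=0.5, threshold=5.0):
    """One-sided CUSUM: accumulate deviations above the in-control mean
    plus a drift allowance; alarm when the sum crosses the threshold.
    Returns the index of the first alarm, or -1 if none occurs."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - target_mean - drift))
        if s > threshold:
            return i
    return -1

# Stylized rate switch: the observed rate jumps from 1.0 to 3.0 at index 100
data = [1.0] * 100 + [3.0] * 100
alarm = cusum_detect(data, target_mean=1.0)   # flags shortly after the switch
```

On this deterministic sequence the statistic grows by 1.5 per post-switch sample, so the alarm fires at index 103; the drift and threshold trade off detection delay against false alarms.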
Constraint-based Student Modelling in Probability Story Problems with Scaffolding Techniques
Directory of Open Access Journals (Sweden)
Nabila Khodeir
2018-01-01
Constraint-based student modelling (CBM) is an important technique employed in intelligent tutoring systems to model student knowledge and provide relevant assistance. This paper introduces the Math Story Problem Tutor (MAST), a Web-based intelligent tutoring system for probability story problems, which is able to generate problems of different contexts, types and difficulty levels for self-paced learning. Constraints in MAST are specified at a low level of granularity to allow fine-grained diagnosis of student errors. Furthermore, MAST extends CBM to address errors due to misunderstanding of the narrative story: it can locate and highlight keywords that may have been overlooked or misunderstood, leading to an error. This is achieved by utilizing the role of sentences and keywords that are defined through the Natural Language Generation (NLG) methods deployed in the story problem generation. MAST also integrates CBM with scaffolding questions and feedback to provide various forms of help and guidance to the student, allowing the student to discover and correct errors in his/her solution. MAST has been preliminarily evaluated empirically, and the results show its potential effectiveness in tutoring students, with a decrease in the percentage of violated constraints along the learning curve. Additionally, there is a significant improvement in the post-test exam results, in comparison to the pre-test exam, of the students using MAST relative to those relying on the textbook.
An ontological knowledge based system for selection of process monitoring and analysis tools
DEFF Research Database (Denmark)
Singh, Ravendra; Gernaey, Krist; Gani, Rafiqul
2010-01-01
monitoring and analysis tools for a wide range of operations has made their selection a difficult, time consuming and challenging task. Therefore, an efficient and systematic knowledge base coupled with an inference system is necessary to support the optimal selection of process monitoring and analysis tools......, satisfying the process and user constraints. A knowledge base consisting of the process knowledge as well as knowledge on measurement methods and tools has been developed. An ontology has been designed for knowledge representation and management. The developed knowledge base has a dual feature. On the one...... procedures has been developed to retrieve the data/information stored in the knowledge base....
ASPIRE: An Authoring System and Deployment Environment for Constraint-Based Tutors
Mitrovic, Antonija; Martin, Brent; Suraweera, Pramuditha; Zakharov, Konstantin; Milik, Nancy; Holland, Jay; McGuigan, Nicholas
2009-01-01
Over the last decade, the Intelligent Computer Tutoring Group (ICTG) has implemented many successful constraint-based Intelligent Tutoring Systems (ITSs) in a variety of instructional domains. Our tutors have proven their effectiveness not only in controlled lab studies but also in real classrooms, and some of them have been commercialized.…
Sampling-based exploration of folded state of a protein under kinematic and geometric constraints
Yao, Peggy
2011-10-04
Flexibility is critical for a folded protein to bind to other molecules (ligands) and achieve its functions. The conformational selection theory suggests that a folded protein deforms continuously and its ligand selects the most favorable conformations to bind to. Therefore, one of the best options to study protein-ligand binding is to sample conformations broadly distributed over the protein's folded state. This article presents a new sampler, called the kino-geometric sampler (KGS). This sampler encodes dominant energy terms implicitly by simple kinematic and geometric constraints. Two key technical contributions of KGS are (1) a robotics-inspired Jacobian-based method to simultaneously deform a large number of interdependent kinematic cycles without any significant break-up of the closure constraints, and (2) a diffusive strategy to generate conformation distributions that diffuse quickly throughout the protein folded state. Experiments on four very different test proteins demonstrate that KGS can efficiently compute distributions containing conformations close to target (e.g., functional) conformations. These targets are not given to KGS, hence are not used to bias the sampling process. In particular, for a lysine-binding protein, KGS was able to sample conformations in both the intermediate and functional states without the ligand, while previous work using molecular dynamics simulation had required the ligand to be taken into account in the potential function. Overall, KGS demonstrates that kino-geometric constraints characterize the folded subset of a protein conformation space and that this subset is small enough to be approximated by a relatively small distribution of conformations. © 2011 Wiley Periodicals, Inc.
Data Based Parameter Estimation Method for Circular-scanning SAR Imaging
Directory of Open Access Journals (Sweden)
Chen Gong-bo
2013-06-01
The circular-scanning synthetic aperture radar (SAR) is a novel working mode, and its image quality is closely related to the accuracy of the imaging parameters, especially given the inaccuracy of the platform's real motion speed. According to the characteristics of the circular-scanning mode, a new data-based method for estimating the velocities of the radar platform and the scanning angle of the radar antenna is proposed in this paper. By referring to the basic concepts of the Doppler navigation technique, the mathematical model and formulations for the parameter estimation are first derived. The optimal parameter approximation based on the least-squares criterion is then realized by solving the equations derived from the data processing. The simulation results verify the validity of the proposed scheme.
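The least-squares step described in this abstract can be illustrated generically: given a linear measurement model, the optimal parameter approximation minimizes the squared residual between predicted and observed data. The scan-angle measurement model below is a hypothetical stand-in for the paper's actual Doppler formulation; all numbers are illustrative.

```python
import numpy as np

# Hypothetical linear measurement model: each Doppler sample is a known,
# scan-angle-dependent combination of the unknown platform velocities
# (vx, vy) plus noise. (A stand-in for the paper's actual formulation.)
angles = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
A = np.column_stack([np.cos(angles), np.sin(angles)])  # design matrix
v_true = np.array([120.0, -15.0])                      # unknown parameters
rng = np.random.default_rng(2)
doppler = A @ v_true + rng.normal(0.0, 0.5, size=len(angles))

# Least-squares criterion: v_est = argmin_v ||A v - doppler||^2
v_est, *_ = np.linalg.lstsq(A, doppler, rcond=None)
```

With 50 samples and modest noise, the estimate recovers the assumed velocities to well under 1 unit; the same normal-equations machinery applies to any linearized parameter model.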
Raw material changes and their processing parameters in an extrusion cooking process
DEFF Research Database (Denmark)
Cheng, Hongyuan; Friis, Alan
In this work, the effects of raw material and process parameters on product expansion in a fish feed extrusion process were investigated. Four different recipes were studied with a pilot-scale twin-screw co-rotating extruder according to a set of pre-defined processing conditions. In the four rec...
Trajectory reshaping based guidance with impact time and angle constraints
Directory of Open Access Journals (Sweden)
Zhao Yao
2016-08-01
This study presents a novel impact time and angle constrained guidance law for homing missiles. The guidance law is first developed with the prior-assumption of a stationary target, which is followed by the practical extension to a maneuvering target scenario. To derive the closed-form guidance law, the trajectory reshaping technique is utilized and it results in defining a specific polynomial function with two unknown coefficients. These coefficients are determined to satisfy the impact time and angle constraints as well as the zero miss distance. Furthermore, the proposed guidance law has three additional guidance gains as design parameters which make it possible to adjust the guided trajectory according to the operational conditions and missile’s capability. Numerical simulations are presented to validate the effectiveness of the proposed guidance law.
International Nuclear Information System (INIS)
Yap, J.T.; Chen, C.T.; Cooper, M.
1995-01-01
The authors have previously developed a knowledge-based method of factor analysis to analyze dynamic nuclear medicine image sequences. In this paper, the authors analyze dynamic PET cerebral glucose metabolism and neuroreceptor binding studies. These methods have shown the ability to reduce the dimensionality of the data, enhance the image quality of the sequence, and generate meaningful functional images and their corresponding physiological time functions. The new information produced by the factor analysis has now been used to improve the estimation of various physiological parameters. A principal component analysis (PCA) is first performed to identify statistically significant temporal variations and remove the uncorrelated variations (noise) due to Poisson counting statistics. The statistically significant principal components are then used to reconstruct a noise-reduced image sequence as well as provide an initial solution for the factor analysis. Prior knowledge such as the compartmental models or the requirement of positivity and simple structure can be used to constrain the analysis. These constraints are used to rotate the factors to the most physically and physiologically realistic solution. The final result is a small number of time functions (factors) representing the underlying physiological processes and their associated weighting images representing the spatial localization of these functions. Estimation of physiological parameters can then be performed using the noise-reduced image sequence generated from the statistically significant PCs and/or the final factor images and time functions. These results are compared to the parameter estimation using standard methods and the original raw image sequences. Graphical analysis was performed at the pixel level to generate comparable parametric images of the slope and intercept (influx constant and distribution volume)
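The PCA-based noise reduction described in this abstract (keep the statistically significant components, discard the low-variance noise directions, then reconstruct the sequence) can be sketched as follows. The synthetic two-component image sequence is an illustrative assumption, not real PET data.

```python
import numpy as np

def pca_denoise(frames, n_components):
    """Reconstruct a dynamic image sequence from its leading principal
    components, discarding low-variance directions that are mostly
    counting noise. `frames` has shape (n_frames, n_pixels)."""
    mean = frames.mean(axis=0)
    centered = frames - mean
    # SVD of the centered data: left vectors are temporal patterns,
    # right vectors are their spatial weighting images
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    approx = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    return approx + mean

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 40)[:, None]
# Two underlying "physiological" time functions spread over 500 pixels
signal = np.exp(-3.0 * t) @ rng.random((1, 500)) + t @ rng.random((1, 500))
noisy = signal + rng.normal(0.0, 0.2, signal.shape)
denoised = pca_denoise(noisy, n_components=2)

err_noisy = np.linalg.norm(noisy - signal)
err_denoised = np.linalg.norm(denoised - signal)
```

Because the underlying signal here has rank two, truncating to two components removes most of the added noise while preserving the temporal structure.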
Homography-based control scheme for mobile robots with nonholonomic and field-of-view constraints.
López-Nicolás, Gonzalo; Gans, Nicholas R; Bhattacharya, Sourabh; Sagüés, Carlos; Guerrero, Josechu J; Hutchinson, Seth
2010-08-01
In this paper, we present a visual servo controller that effects optimal paths for a nonholonomic differential drive robot with field-of-view constraints imposed by the vision system. The control scheme relies on the computation of homographies between current and goal images, but unlike previous homography-based methods, it does not use the homography to compute estimates of pose parameters. Instead, the control laws are directly expressed in terms of individual entries in the homography matrix. In particular, we develop individual control laws for the three path classes that define the language of optimal paths: rotations, straight-line segments, and logarithmic spirals. These control laws, as well as the switching conditions that define how to sequence path segments, are defined in terms of the entries of homography matrices. The selection of the corresponding control law requires the homography decomposition before starting the navigation. We provide a controllability and stability analysis for our system and give experimental results.
An Introduction to 'Creativity Constraints'
DEFF Research Database (Denmark)
Onarheim, Balder; Biskjær, Michael Mose
2013-01-01
Constraints play a vital role as both restrainers and enablers in innovation processes by governing what the creative agent/s can and cannot do, and what the output can and cannot be. Notions of constraints are common in creativity research, but current contributions are highly dispersed due to n...
Application of the Theory of Constraints in Project Based Structures
Directory of Open Access Journals (Sweden)
Martynas Sarapinas
2011-04-01
The article deals with the application of the Theory of Constraints (TOC) in project management. It includes a short introduction to TOC as a project management method and a deep analysis of project management practices using TOC: TOC-based project planning, timetable management, task synchronization, project control, and the “relay runner work ethic”. Moreover, the article compares traditional and TOC-based project management theories, and emphasizes the main benefits observed as results of the study. Article in Lithuanian
Iterated non-linear model predictive control based on tubes and contractive constraints.
Murillo, M; Sánchez, G; Giovanini, L
2016-05-01
This paper presents a predictive control algorithm for non-linear systems based on successive linearizations of the non-linear dynamics around a given trajectory. A linear time-varying model is obtained, and the non-convex constrained optimization problem is transformed into a sequence of locally convex ones. The robustness of the proposed algorithm is addressed by adding a convex contractive constraint. To account for linearization errors and to obtain more accurate results, an inner iteration loop is added to the algorithm. A simple methodology to obtain an outer bounding tube for state trajectories is also presented. The convergence of the iterative process and the stability of the closed-loop system are analyzed. The simulation results show the effectiveness of the proposed algorithm in controlling a quadcopter-type unmanned aerial vehicle. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Creativity from Constraints in Engineering Design
DEFF Research Database (Denmark)
Onarheim, Balder
2012-01-01
This paper investigates the role of constraints in limiting and enhancing creativity in engineering design. Based on a review of literature relating constraints to creativity, the paper presents a longitudinal participatory study from Coloplast A/S, a major international producer of disposable...... and ownership of formal constraints played a crucial role in defining their influence on creativity – along with the tacit constraints held by the designers. The designers were found to be highly constraint focused, and four main creative strategies for constraint manipulation were observed: blackboxing...
International Nuclear Information System (INIS)
Yao, Ji; Ishak, Mustapha; Lin, Weikang; Troxel, Michael
2017-01-01
Intrinsic alignments (IA) of galaxies have been recognized as one of the most serious contaminants to weak lensing. These systematics need to be isolated and mitigated in order for ongoing and future lensing surveys to reach their full potential. The IA self-calibration (SC) method was shown in previous studies to be able to reduce the GI contamination by up to a factor of 10 for the 2-point and 3-point correlations. The SC method does not require the assumption of an IA model in its working and can extract the GI signal from the same photo-z survey offering the possibility to test and understand structure formation scenarios and their relationship to IA models. In this paper, we study the effects of the IA SC mitigation method on the precision and accuracy of cosmological parameter constraints from future cosmic shear surveys LSST, WFIRST and Euclid. We perform analytical and numerical calculations to estimate the loss of precision and the residual bias in the best fit cosmological parameters after the self-calibration is performed. We take into account uncertainties from photometric redshifts and the galaxy bias. We find that the confidence contours are slightly inflated from applying the SC method itself while a significant increase is due to the inclusion of the photo-z uncertainties. The bias of cosmological parameters is reduced from several-σ, when IA is not corrected for, to below 1-σ after SC is applied. These numbers are comparable to those resulting from applying the method of marginalizing over IA model parameters despite the fact that the two methods operate very differently. We conclude that implementing the SC for these future cosmic-shear surveys will not only allow one to efficiently mitigate the GI contaminant but also help to understand their modeling and link to structure formation.
A compendium of chameleon constraints
Energy Technology Data Exchange (ETDEWEB)
Burrage, Clare [School of Physics and Astronomy, University of Nottingham, Nottingham, NG7 2RD (United Kingdom); Sakstein, Jeremy, E-mail: clare.burrage@nottingham.ac.uk, E-mail: jeremy.sakstein@port.ac.uk [Center for Particle Cosmology, Department of Physics and Astronomy, University of Pennsylvania, 209 S. 33rd St., Philadelphia, PA 19104 (United States)
2016-11-01
The chameleon model is a scalar field theory with a screening mechanism that explains how a cosmologically relevant light scalar can avoid the constraints of intra-solar-system searches for fifth-forces. The chameleon is a popular dark energy candidate and also arises in f ( R ) theories of gravity. Whilst the chameleon is designed to avoid historical searches for fifth-forces it is not unobservable and much effort has gone into identifying the best observables and experiments to detect it. These results are not always presented for the same models or in the same language, a particular problem when comparing astrophysical and laboratory searches making it difficult to understand what regions of parameter space remain. Here we present combined constraints on the chameleon model from astrophysical and laboratory searches for the first time and identify the remaining windows of parameter space. We discuss the implications for cosmological chameleon searches and future small-scale probes.
Distributed Unmixing of Hyperspectral Data with Sparsity Constraint
Khoshsokhan, S.; Rajabi, R.; Zayyani, H.
2017-09-01
Spectral unmixing (SU) is a data processing problem in hyperspectral remote sensing. The significant challenge in SU is how to identify endmembers and their weights accurately. For estimation of the signature and fractional abundance matrices in a blind problem, nonnegative matrix factorization (NMF) and its developments are widely used in SU. One of the constraints added to NMF is a sparsity constraint, regularized by the L1/2 norm. In this paper, a new algorithm based on distributed optimization is used for spectral unmixing. In the proposed algorithm, a network of single-node clusters is employed, with each pixel in the hyperspectral image considered as a node in this network. The distributed unmixing with sparsity constraint is optimized with the diffusion LMS strategy, and the update equations for the fractional abundance and signature matrices are then obtained. Simulation results based on defined performance metrics illustrate the advantage of the proposed algorithm in spectral unmixing of hyperspectral data compared with other methods. The results show that the AAD and SAD of the proposed approach are improved by about 6% and 27%, respectively, compared with distributed unmixing, at SNR = 25 dB.
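A centralized (non-distributed) sketch of L1/2-regularized NMF conveys the core of the unmixing model described above: multiplicative updates factor the data into nonnegative signatures and abundances, with an extra denominator term penalizing non-sparse abundances. The synthetic scene, the λ value, and the specific update scheme are illustrative assumptions, not the paper's diffusion-LMS algorithm.

```python
import numpy as np

def sparse_nmf(X, k, lam=0.05, iters=200, eps=1e-9, seed=0):
    """NMF with an L1/2 sparsity penalty on the abundance matrix H,
    solved with multiplicative updates. X: (bands, pixels),
    W: (bands, k) endmember signatures, H: (k, pixels) abundances."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], k))
    H = rng.random((k, X.shape[1]))
    for _ in range(iters):
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        # Extra denominator term: gradient of lam * ||H||_{1/2}
        H *= (W.T @ X) / (W.T @ W @ H + 0.5 * lam * (H + eps) ** -0.5 + eps)
    return W, H

rng = np.random.default_rng(4)
W_true = rng.random((30, 3))                       # 3 signatures, 30 bands
H_true = rng.dirichlet(np.ones(3) * 0.3, 400).T    # sparse-ish abundances
X = W_true @ H_true                                # noiseless synthetic scene
W, H = sparse_nmf(X, k=3)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

On this rank-3 synthetic scene the factorization reaches a small relative reconstruction error; the distributed variant in the paper spreads the H update across per-pixel nodes via diffusion adaptation.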
Relationship between process parameters and properties of multifunctional needlepunched geotextiles
CSIR Research Space (South Africa)
Rawal, A
2006-04-01
, and filtration. In this study, the effect of process parameters, namely, feed rate, stroke frequency, and depth of needle penetration has been investigated on various properties of needlepunched geotextiles. These process parameters are then empirically related...
Constraint-Based Abstraction of a Model Checker for Infinite State Systems
DEFF Research Database (Denmark)
Banda, Gourinath; Gallagher, John Patrick
Abstract interpretation-based model checking provides an approach to verifying properties of infinite-state systems. In practice, most previous work on abstract model checking is either restricted to verifying universal properties, or develops special techniques for temporal logics such as modal t...... to implementation of abstract model checking algorithms for abstract domains based on constraints, making use of an SMT solver....
International Nuclear Information System (INIS)
Stieler, Florian; Yan, Hui; Lohr, Frank; Wenz, Frederik; Yin, Fang-Fang
2009-01-01
Parameter optimization in the process of inverse treatment planning for intensity modulated radiation therapy (IMRT) is mainly conducted by human planners in order to create a plan with the desired dose distribution. To automate this tedious process, an artificial intelligence (AI) guided system was developed and examined. The AI system can automatically accomplish the optimization process based on prior knowledge operated by several fuzzy inference systems (FIS). Prior knowledge, which was collected from human planners during their routine trial-and-error process of inverse planning, has first to be 'translated' to a set of 'if-then rules' for driving the FISs. To minimize subjective error which could be costly during this knowledge acquisition process, it is necessary to find a quantitative method to automatically accomplish this task. A well-developed machine learning technique, based on an adaptive neuro fuzzy inference system (ANFIS), was introduced in this study. Based on this approach, prior knowledge of a fuzzy inference system can be quickly collected from observation data (clinically used constraints). The learning capability and the accuracy of such a system were analyzed by generating multiple FIS from data collected from an AI system with known settings and rules. Multiple analyses showed good agreements of FIS and ANFIS according to rules (error of the output values of ANFIS based on the training data from FIS of 7.77 ± 0.02%) and membership functions (3.9%), thus suggesting that the 'behavior' of an FIS can be propagated to another, based on this process. The initial experimental results on a clinical case showed that ANFIS is an effective way to build FIS from practical data, and analysis of ANFIS and FIS with clinical cases showed good planning results provided by ANFIS. OAR volumes encompassed by characteristic percentages of isodoses were reduced by a mean of between 0 and 28%. The study demonstrated a feasible way
DEFF Research Database (Denmark)
Bannow, J; Benjamins, J-W; Wohlert, J
2017-01-01
to verify the wet-foam stability at different pHs. The pH influenced the amount of solubilized drug and the processing-window was very narrow at high drug loadings. The results were compared to real foaming-experiments and solid state analysis of the final cellular solids. The parameters were assembled...... into a processing chart, highlighting the importance of the right combination of processing parameters (pH and time-point of pH adjustment) in order to successfully prepare cellular solid materials with up to 46 wt% drug loading....
Selection of parameters for advanced machining processes using firefly algorithm
Directory of Open Access Journals (Sweden)
Rajkamal Shukla
2017-02-01
Advanced machining processes (AMPs) are widely utilized in industry for machining complex geometries and intricate profiles. In this paper, two significant processes, electric discharge machining (EDM) and abrasive water jet machining (AWJM), are considered in order to obtain optimum values of the responses for the given range of process parameters. The firefly algorithm (FA) is applied to the considered processes to obtain optimized parameters, and the results are compared with those given by previous researchers. The variation of the process parameters with respect to the responses is plotted to confirm the optimum results obtained using FA. In the EDM process, the performance parameter “MRR” is increased from 159.70 gm/min to 181.6723 gm/min, while “Ra” and “REWR” are decreased from 6.21 μm to 3.6767 μm and from 6.21% to 6.324 × 10−5%, respectively. In the AWJM process, the values of the “kerf” and “Ra” are decreased from 0.858 mm to 0.3704 mm and from 5.41 mm to 4.443 mm, respectively. In both processes, the obtained results show a significant improvement in the responses.
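A generic firefly algorithm of the kind applied above can be sketched as follows: each firefly moves toward every brighter (lower-objective) one with an attractiveness that decays exponentially with squared distance, plus an annealed random walk. The toy response surface stands in for the machining response models; all parameter values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def firefly_minimize(f, bounds, n=25, iters=100, alpha=0.2,
                     beta0=1.0, gamma=1.0, seed=0):
    """Basic firefly algorithm: every firefly moves toward each brighter
    (lower-objective) one, with attractiveness decaying exponentially
    with squared distance, plus an annealed random walk."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pos = rng.uniform(lo, hi, (n, len(lo)))
    light = np.array([f(p) for p in pos])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:          # j is brighter: attract i
                    beta = beta0 * np.exp(-gamma * np.sum((pos[i] - pos[j]) ** 2))
                    step = alpha * rng.uniform(-0.5, 0.5, len(lo))
                    pos[i] = np.clip(pos[i] + beta * (pos[j] - pos[i]) + step, lo, hi)
                    light[i] = f(pos[i])
        alpha *= 0.97                            # shrink the random walk
    best = int(np.argmin(light))
    return pos[best], light[best]

def roughness(p):
    """Hypothetical quadratic response surface standing in for a machining model."""
    return float(np.sum((p - np.array([0.3, -0.2])) ** 2))

x_best, f_best = firefly_minimize(roughness, [(-1.0, 1.0), (-1.0, 1.0)])
```

Because the brightest firefly never moves, the best solution found is monotonically non-worsening, and the annealed step size lets the swarm settle near the optimum.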
Variation of physicochemical parameters during a composting process
International Nuclear Information System (INIS)
Faria C, D.M.; Ballesteros, M.I.; Bendeck, M.
1999-01-01
Two composting processes were carried out; they lasted about 165 days. In one of the processes the decomposition of the material was performed by microorganisms only (direct composting), and in the other by microorganisms and earthworms (Eisenia foetida) (indirect composting). The first was carried out in a composting system called "camas"; the indirect one was carried out in its initial phase in a system of "panelas", after which the wastes were transferred to a "cama". In both processes the materials were treated with lime, ammonium nitrate and microorganisms. Periodic samples were taken from different places in the pile, and temperature was monitored weekly. The following physicochemical parameters were analyzed in each sample: humidity, color, pH (soil:water ratios of 1:5 and 1:10), ash, organic matter, CIC, contents of carbon and nitrogen, and C/N ratio. In the aqueous extract, the C/N ratio and the percentage of hydrosolubles were analyzed. A germination assay was also performed, measuring the percentage of garden cress seeds (Lepidium sativum) that germinated in the aqueous extract. The variation of the parameters in each process established that the greatest changes in the material occurred in the initial (thermophilic and mesophilic) phases and that the presence of microorganisms was the limiting factor in the dynamics of the process; on the other hand, the addition of earthworms did not accelerate the mineralization of organic matter. The results established that color determination is not an effective parameter for evaluating the degree of maturity of the compost, whereas parameters such as temperature and germination percentage can be used as routine tests to determine the process rate. Determination of CIC, ash and hydrosolubles content is recommended to evaluate the optimal maturity degree of the material. Changes are proposed, such as reducing the composting time to a maximum of 100 days and to
Notes on Timed Concurrent Constraint Programming
DEFF Research Database (Denmark)
Nielsen, Mogens; Valencia, Frank D.
2004-01-01
A constraint is a piece of (partial) information on the values of the variables of a system. Concurrent constraint programming (ccp) is a model of concurrency in which agents (also called processes) interact by telling and asking information (constraints) to and from a shared store (a constraint store). Timed concurrent constraint programming (tccp) extends this model to specify and program reactive systems. This note provides a comprehensive introduction to the background for and central notions from the theory of tccp. Furthermore, it surveys recent results on a particular tccp calculus, ntcc, and it provides a classification of the expressive power of various tccp languages.
Model-based reasoning and the control of process plants
International Nuclear Information System (INIS)
Vaelisuo, Heikki
1993-02-01
In addition to feedback control, safe and economic operation of industrial process plants requires discrete-event type logic control, for example automatic control sequences, interlocks, etc. A lot of complex routine reasoning is involved in the design and in the verification and validation (V&V) of such automatics. Similar reasoning tasks are encountered during plant operation in action planning and fault diagnosis. The low-level part of the required problem solving is so straightforward that it could be accomplished by a computer if only there were plant models which allow versatile mechanised reasoning. Such plant models and corresponding inference algorithms are the main subject of this report. Deep knowledge and qualitative modelling play an essential role in this work. Deep knowledge refers to mechanised reasoning based on the first principles of the phenomena in the problem domain. Qualitative modelling refers to knowledge representation formalisms and related reasoning methods which allow solving problems on a higher abstraction level than, for example, traditional simulation and optimisation. Prolog is a commonly used platform for artificial intelligence (AI) applications. Constraint logic languages like CLP(R) and Prolog-III extend the scope of logic programming to numeric problem solving; in addition, they allow a programming style which often reduces the computational complexity significantly. An approach to model-based reasoning implemented in the constraint logic programming language CLP(R) is presented. The approach is based on some of the principles of QSIM, an algorithm for qualitative simulation. It is discussed how model-based reasoning can be applied in the design and V&V of plant automatics and in action planning during plant operation. A prototype tool called ISIR is discussed and some initial results obtained during the development of the tool are presented. The results presented originate from preliminary tests of the prototype obtained
Ergodicity and Parameter Estimates for Infinite-Dimensional Fractional Ornstein-Uhlenbeck Process
International Nuclear Information System (INIS)
Maslowski, Bohdan; Pospisil, Jan
2008-01-01
Existence and ergodicity of a strictly stationary solution for linear stochastic evolution equations driven by cylindrical fractional Brownian motion are proved. Ergodic behavior of non-stationary infinite-dimensional fractional Ornstein-Uhlenbeck processes is also studied. Based on these results, strong consistency of suitably defined families of parameter estimators is shown. The general results are applied to linear parabolic and hyperbolic equations perturbed by a fractional noise
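The consistency of drift estimators described above can be illustrated in the simplest finite-dimensional setting. The sketch below is an illustration only (an ordinary, non-fractional, scalar Ornstein-Uhlenbeck process, not the paper's infinite-dimensional fractional construction): it simulates dX = -θX dt + σ dW and recovers θ with a least-squares estimator, which becomes consistent as the observation window grows.

```python
import numpy as np

def simulate_ou(theta, sigma, dt, n, x0=0.0, seed=0):
    """Euler-Maruyama simulation of dX = -theta*X dt + sigma dW."""
    rng = np.random.default_rng(seed)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] * (1.0 - theta * dt) + noise[i]
    return x

def estimate_theta(x, dt):
    """Least-squares drift estimator: regress increments on the state."""
    dx = np.diff(x)
    xs = x[:-1]
    # theta_hat minimizes sum_i (dx_i + theta * x_i * dt)^2
    return -np.dot(xs, dx) / (np.dot(xs, xs) * dt)

path = simulate_ou(theta=2.0, sigma=0.5, dt=0.01, n=100_000)
theta_hat = estimate_theta(path, dt=0.01)
```

With a long path (T = 1000 here), `theta_hat` concentrates around the true value 2.0, mirroring the strong-consistency statements of the abstract.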
Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F
2011-09-01
Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, so-called 'brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who found the Gröbner basis. In the method, objective functions combining the symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces a given model to an equivalent system of differential equations. Second, since this equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for improving parameter accuracy is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.
Choi, Mi-Ri; Jeon, Sang-Wan; Yi, Eun-Surk
2018-04-01
The purpose of this study is to analyze the differences among hospitalized cancer patients in their perception of exercise and physical activity constraints based on their medical history. The study used a questionnaire survey as the measurement tool for 194 cancer patients (male or female, aged 20 or older) living in the Seoul metropolitan area (Seoul, Gyeonggi, Incheon). The collected data were analyzed using frequency analysis, exploratory factor analysis, reliability analysis, t-test, and one-way ANOVA with the statistical program SPSS 18.0. The following results were obtained. First, there was no statistically significant difference between cancer stage and exercise recognition/physical activity constraint. Second, there was a significant difference between cancer stage and sociocultural constraint/facility constraint/program constraint. Third, there was a significant difference between cancer operation history and physical/sociocultural/facility/program constraint. Fourth, there was a significant difference between cancer operation history and negative perception/facility/program constraint. Fifth, there was a significant difference between ancillary cancer treatment method and negative perception/facility/program constraint. Sixth, there was a significant difference between hospitalization period and positive perception/negative perception/physical constraint/cognitive constraint. In conclusion, this study will provide information necessary to create a patient-centered healthcare service system by analyzing exercise recognition of hospitalized cancer patients based on their medical history and by investigating the constraint factors that prevent patients from actually making efforts to exercise.
The dose-volume constraint satisfaction problem for inverse treatment planning with field segments
International Nuclear Information System (INIS)
Michalski, Darek; Xiao, Ying; Censor, Yair; Galvin, James M
2004-01-01
The prescribed goals of radiation treatment planning are often expressed in terms of dose-volume constraints. We present a novel formulation of a dose-volume constraint satisfaction search for the discretized radiation therapy model. This approach does not rely on any explicit cost function. Inverse treatment planning uses an aperture-based approach with segmental fields predefined according to geometric rules. The solver utilizes the simultaneous version of the cyclic subgradient projection algorithm, a deterministic iterative method designed for solving convex feasibility problems. A prescription is expressed with a set of inequalities imposed on the dose at the voxel resolution. Additional constraint functions control the compliance with selected points of the expected cumulative dose-volume histograms. The performance of the method is tested on prostate and head-and-neck cases. The relationships with other models and algorithms of similar conceptual origin are discussed. The demonstrated advantages of the method are: the equivalence of the algorithmic and prescription parameters, the intuitive setup of free parameters, and the improved speed of the method as compared to similar iterative as well as other techniques. The technique reported here will deliver approximate solutions for inconsistent prescriptions
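The simultaneous-projection idea behind such feasibility solvers can be sketched for plain linear inequality constraints. The code below is a toy illustration, not the authors' planning system: each sweep averages the orthogonal projections of the current "dose" vector onto all currently violated half-spaces a_i·x ≤ b_i (a Cimmino-type scheme), driving the iterate into the feasible set.

```python
import numpy as np

def simultaneous_projection(A, b, x0, iters=500):
    """Simultaneous projection onto the polyhedron {x : A x <= b}.

    Each iteration averages the orthogonal projections of x onto
    every violated half-space a_i . x <= b_i (equal weights 1/m).
    """
    m, _ = A.shape
    w = np.full(m, 1.0 / m)
    x = np.asarray(x0, dtype=float).copy()
    row_norm2 = np.einsum('ij,ij->i', A, A)   # squared row norms
    for _ in range(iters):
        residual = A @ x - b
        violated = residual > 0
        if not violated.any():                # already feasible
            break
        # weighted average of the per-constraint projection steps
        step = (w[violated] * residual[violated] / row_norm2[violated]) @ A[violated]
        x -= step
    return x

# toy "dose" problem: two voxels, box constraints 0 <= d <= 1 on each
A = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
b = np.array([1., 1., 0., 0.])
x = simultaneous_projection(A, b, x0=np.array([2.0, -0.5]))
```

Starting from the infeasible point (2.0, -0.5), the iterate converges geometrically to the nearest feasible corner (1, 0); for inconsistent systems the same scheme settles on a weighted-least-violation compromise, matching the abstract's remark about approximate solutions for inconsistent prescriptions.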
Directory of Open Access Journals (Sweden)
Zhenhua Wang
2016-04-01
In this article, a cutting parameter optimization method for aluminum alloy AlMn1Cu in high-speed milling was studied in order to properly select the high-speed cutting parameters. First, a back propagation neural network model for predicting the surface roughness of AlMn1Cu was proposed. The prediction model improves prediction accuracy and captures the higher-order nonlinear relationship between surface roughness and cutting parameters. Second, considering the constraints of technical requirements on surface roughness, a mathematical model for optimizing cutting parameters based on the neural network prediction model of surface roughness was established so as to obtain the maximum machining efficiency. A genetic algorithm adopting homogeneous design to initialize the population as well as steady-state reproduction without duplicates was also presented. The application indicates that the algorithm can effectively avoid premature convergence, strengthen global optimization, and increase the calculation efficiency. Finally, a case was presented on the application of the proposed cutting parameter optimization algorithm.
Self-adaptive Green-Ampt infiltration parameters obtained from measured moisture processes
Directory of Open Access Journals (Sweden)
Long Xiang
2016-07-01
The Green-Ampt (G-A) infiltration model is often used to characterize the infiltration process in hydrology. The parameters of the G-A model are critical in applications for the prediction of infiltration and associated rainfall-runoff processes. Previous approaches to determining the G-A parameters have depended on pedotransfer functions (PTFs) or estimates from experimental results, usually without providing optimum values. In this study, rainfall simulators with soil moisture measurements were used to generate rainfall in various experimental plots. Observed runoff data and soil moisture dynamics were jointly used to yield the infiltration processes, and an improved self-adaptive method was used to optimize the G-A parameters for various types of soil under different rainfall conditions. The two G-A parameters, i.e., the effective hydraulic conductivity and the effective capillary drive at the wetting front, were determined simultaneously to describe the relationships between rainfall, runoff, and infiltration processes. Through a designed experiment, the method for determining the G-A parameters was proved to be reliable in reflecting the effects of pedologic background in G-A type infiltration cases and deriving the optimum G-A parameters. Unlike PTF methods, this approach estimates the G-A parameters directly from infiltration curves obtained from rainfall simulation experiments, so it can be used to determine site-specific parameters. This study provides a self-adaptive method for optimizing the G-A parameters through designed field experiments. The parameters derived from field-measured rainfall-infiltration processes are more reliable and applicable to hydrological models.
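For reference, the Green-Ampt relations themselves are compact enough to sketch. Assuming the standard ponded-infiltration form (the parameter values below are arbitrary illustrations, not the paper's optimized estimates), cumulative infiltration F(t) solves the implicit equation F − ψΔθ·ln(1 + F/(ψΔθ)) = K·t, and the infiltration capacity follows as f = K·(1 + ψΔθ/F).

```python
import math

def green_ampt_cumulative(K, psi, dtheta, t, tol=1e-10):
    """Cumulative infiltration F(t) from the implicit Green-Ampt relation
    F - psi*dtheta*ln(1 + F/(psi*dtheta)) = K*t,
    solved by fixed-point iteration (contractive since s/(s+F) < 1)."""
    s = psi * dtheta                      # effective capillary drive term
    F = K * t if t > 0 else 0.0           # initial guess
    for _ in range(200):
        F_new = K * t + s * math.log(1.0 + F / s)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F

def green_ampt_rate(K, psi, dtheta, F):
    """Infiltration capacity f = K * (1 + psi*dtheta / F)."""
    return K * (1.0 + psi * dtheta / F)

# illustrative values: K = 1 cm/h, psi = 10 cm, dtheta = 0.3, t = 2 h
F = green_ampt_cumulative(K=1.0, psi=10.0, dtheta=0.3, t=2.0)
f = green_ampt_rate(K=1.0, psi=10.0, dtheta=0.3, F=F)
```

A self-adaptive calibration like the one in the abstract would wrap an optimizer around these two functions, adjusting K and ψΔθ until the modeled infiltration curve matches the field-measured one.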
International Nuclear Information System (INIS)
Hanra, M.S.; Verma, R.K.; Ramani, M.P.S.
1982-01-01
Desalination Experimental Facility (DEF), based on the multistage flash desalination process, has been set up by the Desalination Division of the Bhabha Atomic Research Centre, Bombay. The design parameters of DEF and the materials used for various equipment and parts of DEF are mentioned. DEF was operated for 2300 hours in six operational runs. The range of operational parameters maintained during operation and observations on the performance of the materials of construction are given. A detailed comparison has been made between the process parameters in DEF and those in a large size plant. (M.G.B.)
Interpolation of final geometry and result fields in process parameter space
Misiun, Grzegorz Stefan; Wang, Chao; Geijselaers, Hubertus J.M.; van den Boogaard, Antonius H.; Saanouni, K.
2016-01-01
Different routes to produce a product in a bulk forming process can be described by a limited set of process parameters. The parameters determine the final geometry as well as the distribution of state variables in the final shape. Ring rolling has been simulated using different parameter settings.
Koltsov, A. G.; Shamutdinov, A. H.; Blokhin, D. A.; Krivonos, E. V.
2018-01-01
A new classification of parallel kinematics mechanisms based on a symmetry coefficient, which is proportional to mechanism stiffness and to the machining accuracy of the technological equipment under study, is proposed. A new version of the Stewart platform with a high symmetry coefficient is presented for analysis. The workspace of the mechanism under study, a complex solid figure, is described. The workspace end points are reached by the center of the mobile platform, which moves parallel to the base plate. Parameters affecting the processing accuracy, namely the static and dynamic stiffness and the natural vibration frequencies, are determined. A capability assessment of the mechanism operation under various loads, taking into account resonance phenomena at different points of the workspace, was conducted. The study proved that the stiffness, and therefore the processing accuracy, of the above-mentioned mechanisms is comparable with the stiffness and accuracy of medium-sized series-produced machines.
Covariant constraints for generic massive gravity and analysis of its characteristics
DEFF Research Database (Denmark)
Deser, S.; Sandora, M.; Waldron, A.
2014-01-01
We perform a covariant constraint analysis of massive gravity valid for its entire parameter space, demonstrating that the model generically propagates 5 degrees of freedom; this is also verified by a new and streamlined Hamiltonian description. The constraint's covariant expression permits...
Evaluation of Control Parameters for the Activated Sludge Process
Stall, T. Ray; Sherrard, Josephy H.
1978-01-01
An evaluation of the use of the parameters currently being used to design and operate the activated sludge process is presented. The advantages and disadvantages for the use of each parameter are discussed. (MR)
Han, Zhenyu; Sun, Shouzheng; Fu, Yunzhong; Fu, Hongya
2017-10-01
Viscosity is an important physical indicator for assessing the fluidity of resin, which helps bring the resin into effective contact with the fibers and reduces manufacturing defects during the automated fiber placement (AFP) process. However, the effect of processing parameters on viscosity evolution during AFP is rarely studied. In this paper, viscosities at different scales are analyzed based on a multi-scale analysis method. First, the viscous dissipation energy (VDE) within a meso-unit under different processing parameters is assessed using the finite element method (FEM). According to a multi-scale energy transfer model, the meso-unit energy is used as the boundary condition for microscopic analysis. Furthermore, the molecular structure of the micro-system is built by the molecular dynamics (MD) method, and viscosity curves are then obtained by integrating the stress autocorrelation function (SACF) over time. Finally, the correlation of processing parameters to viscosity is revealed using the gray relational analysis method (GRAM). A group of processing parameters is found that achieves stable viscosity and better fluidity of the resin.
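The step "integrating the stress autocorrelation function over time" is the Green-Kubo route to shear viscosity, η = V/(k_B·T)·∫₀^∞ ⟨σ_xy(0)σ_xy(t)⟩ dt. A minimal sketch, using a synthetic exponentially decaying SACF in place of molecular-dynamics output (all numerical values below are assumptions chosen for illustration, not material data from the paper):

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def viscosity_green_kubo(sacf, dt, volume, temperature):
    """Green-Kubo estimate: eta = V/(kB*T) * integral of the stress
    autocorrelation function (SACF), via the trapezoid rule."""
    integral = dt * (0.5 * sacf[0] + sacf[1:-1].sum() + 0.5 * sacf[-1])
    return volume / (kB * temperature) * integral

# synthetic SACF: C(t) = C0 * exp(-t/tau), standing in for MD output
C0, tau, dt = 2.0e8, 5.0e-12, 1.0e-14          # Pa^2, s, s (assumed)
t = np.arange(0.0, 100e-12, dt)                # integrate out to 20*tau
sacf = C0 * np.exp(-t / tau)
eta = viscosity_green_kubo(sacf, dt, volume=1e-25, temperature=400.0)
```

For the exponential SACF the integral is analytically C0·τ, which makes the numerical estimate easy to check; with real MD data one would instead truncate the integral once the SACF has decayed into noise.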
Conservative constraints on early cosmology with MONTE PYTHON
International Nuclear Information System (INIS)
Audren, Benjamin; Lesgourgues, Julien; Benabed, Karim; Prunet, Simon
2013-01-01
Models for the latest stages of the cosmological evolution rely on a less solid theoretical and observational ground than the description of earlier stages like BBN and recombination. As suggested in a previous work by Vonlanthen et al., it is possible to tweak the analysis of CMB data in such a way as to avoid making assumptions on the late evolution, and obtain robust constraints on "early cosmology parameters". We extend this method in order to marginalise the results over CMB lensing contamination, and present updated results based on recent CMB data. Our constraints on the minimal early cosmology model are weaker than in a standard ΛCDM analysis, but do not conflict with this model. Besides, we obtain conservative bounds on the effective neutrino number and neutrino mass, showing no hints for extra relativistic degrees of freedom, and proving in a robust way that neutrinos experienced their non-relativistic transition after the time of photon decoupling. This analysis is also an occasion to describe the main features of the new parameter inference code MONTE PYTHON, which we release together with this paper. MONTE PYTHON is a user-friendly alternative to other public codes like COSMOMC, interfaced with the Boltzmann code CLASS.
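At its core, a parameter inference code of this kind wraps a Markov chain Monte Carlo sampler around a log-posterior. A minimal random-walk Metropolis sketch on a toy two-parameter posterior (this illustrates generic MCMC only; it is not MONTE PYTHON's actual interface or likelihood):

```python
import numpy as np

def metropolis(log_post, x0, step, n, seed=0):
    """Random-walk Metropolis sampler over a parameter vector."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((n, x.size))
    for i in range(n):
        prop = x + step * rng.standard_normal(x.size)   # symmetric proposal
        lp_prop = log_post(prop)
        # accept with probability min(1, posterior ratio)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# toy "posterior": standard normal in 2 parameters
log_post = lambda x: -0.5 * np.dot(x, x)
chain = metropolis(log_post, x0=[3.0, -3.0], step=0.8, n=20_000)
mean = chain[5_000:].mean(axis=0)   # discard burn-in before summarizing
```

In a cosmological application `log_post` would call a Boltzmann solver to evaluate the data likelihood at each proposed parameter point; the sampler itself is unchanged.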
Investigation of Laser Welding of Ti Alloys for Cognitive Process Parameters Selection
Directory of Open Access Journals (Sweden)
Fabrizia Caiazzo
2018-04-01
Laser welding of titanium alloys is attracting increasing interest as an alternative to traditional joining techniques for industrial applications, with particular reference to the aerospace sector, where welded assemblies allow for the reduction of the buy-to-fly ratio compared to other traditional mechanical joining techniques. In this research work, an investigation of laser welding of Ti–6Al–4V alloy plates is carried out through an experimental testing campaign under different process conditions, in order to characterize the produced weld bead geometry, with the final aim of developing a cognitive methodology able to support decision-making about the selection of suitable laser welding process parameters. The methodology is based on the employment of artificial neural networks able to identify correlations between the laser welding process parameters, with particular reference to the laser power, welding speed and defocusing distance, and the weld bead geometric features, on the basis of the collected experimental data.
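The mapping from (laser power, welding speed, defocusing distance) to a weld bead geometric feature can be approximated by a small feed-forward network. The sketch below trains a one-hidden-layer network with plain gradient descent on synthetic data; the functional form, value ranges, and architecture are invented for illustration and do not reproduce the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic stand-in data: (power kW, speed m/min, defocus mm) -> bead width mm
# the target function is an assumption made up for this sketch
X = rng.uniform([1.0, 1.0, -2.0], [3.0, 5.0, 2.0], size=(200, 3))
y = 0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * X[:, 2] ** 2 + 1.5

Xn = (X - X.mean(0)) / X.std(0)          # normalize inputs
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);      b2 = 0.0

def forward(Xn):
    h = np.tanh(Xn @ W1 + b1)            # hidden layer
    return h, h @ W2 + b2                # linear output

for _ in range(3000):                    # full-batch gradient descent
    h, pred = forward(Xn)
    err = pred - y                       # dL/dpred for L = mean(err^2)/2
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)    # backprop through tanh
    gW1 = Xn.T @ dh / len(y); gb1 = dh.mean(0)
    lr = 0.1
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

rmse = np.sqrt(np.mean((forward(Xn)[1] - y) ** 2))
```

With real experimental data the same structure applies: inputs are the measured process parameters, targets are the measured bead geometry, and a held-out test split replaces the training-set RMSE used here.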
Directory of Open Access Journals (Sweden)
Ilija KOVACEVIC
2016-09-01
The paper analyzes the friction stir welding (FSW) technology. The mechanism of the thermo-mechanical process of the FSW method has been identified and a correlation between the weld zone and its microstructure established. The basic analytical formulations for the definition of temperature fields are presented. An analysis of the influential parameters of FSW on the weld zone material and the mechanical properties of the realized joint was performed. Influential welding parameters were defined based on tool geometry, technological parameters of processing, and the axial load of the tool. Specific problems of the FSW process are related to gaps (holes) left behind by the tool at the end of the process and to the inflexibility of welding with regard to variation of material thickness. Numerical simulation of the FSW process was carried out on the example of Aluminum Alloy (AA) 2219 using the ANSYS Mechanical APDL (Transient Thermal) software package. The temperature field in the welding process was defined at specified time intervals.
DOI: http://dx.doi.org/10.5755/j01.ms.22.3.10022
Sadashiva, M.; Shivanand, H. K.; Vidyasagar, H. N.
2018-04-01
The current work aims to investigate the effect of process parameters in friction stir welding of Aluminium 2024 base alloy and an Aluminium 2024 matrix alloy reinforced with E-glass and silicon carbide reinforcements. The process involved a set of synthesis techniques incorporating stir casting methodology, resulting in fabrication of the composite material. This synthesized composite material is then machined to obtain a plate of dimensions 100 mm × 50 mm × 6 mm. The plate is then friction stir welded at different sets of parameters, viz. spindle speeds of 600 rpm, 900 rpm and 1200 rpm and feed rates of 40 mm/min, 80 mm/min and 120 mm/min, to analyze the process capability. The study of the given set of parameters is predominantly important to understand the physics of the process, which may lead to better properties of the joint; this is very important with respect to its use in advanced engineering applications, especially in the aerospace domain, which uses Aluminium 2024 alloy for wing and fuselage structures under tension.
Biogeochemical and hydrological constraints on concentration-discharge curves
Moatar, Florentina; Abbott, Ben; Minaudo, Camille; Curie, Florence; Pinay, Gilles
2017-04-01
The relationship between concentration and discharge (C-Q) can give insight into the location, abundance, rate of production or consumption, and transport dynamics of elements in coupled terrestrial-aquatic ecosystems. Consequently, the investigation of C-Q relationships for multiple elements at multiple spatial and temporal scales can be a powerful tool to address three of ecohydrology's fundamental questions: where does water come from, how long does it stay, and what happens to the solutes and particulates it carries along the way. We analyzed long-term water quality data from 300 monitoring stations covering nearly half of France to investigate how elemental properties, catchment characteristics, and hydrological parameters influence C-Q. Based on previous work, we segmented the hydrograph, calculating independent C-Q slopes for flows above and below the median discharge. We found that most elements only expressed two of the nine possible C-Q modalities, indicating strong elemental control of C-Q shape. Catchment characteristics including land use and human population had a strong impact on concentration but typically did not influence the C-Q slopes, also suggesting inherent constraints on elemental production and transport. Biological processes appeared to regulate the C-Q slope at low flows for biologically reactive elements, but at high flows these processes became unimportant, and most parameters expressed chemostatic behavior. This study provides a robust description of possible C-Q shapes for a wide variety of catchments and elements and demonstrates the value of low-frequency, long-term data collected by water quality agencies.
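The hydrograph-segmented analysis described above amounts to fitting separate log-log C-Q slopes for flows above and below the median discharge. A minimal sketch on a synthetic record (the chemostatic/dilution behavior and slope values are chosen for illustration, not taken from the French dataset):

```python
import numpy as np

def cq_slopes(q, c):
    """Log-log C-Q slopes fitted separately above and below median discharge."""
    logq, logc = np.log10(q), np.log10(c)
    med = np.median(logq)
    slopes = {}
    for name, mask in (("below", logq <= med), ("above", logq > med)):
        slope, _ = np.polyfit(logq[mask], logc[mask], 1)
        slopes[name] = slope
    return slopes

# synthetic record: chemostatic at low flow (slope ~0),
# dilution at high flow (slope ~ -0.5), with small lognormal noise
rng = np.random.default_rng(0)
q = 10 ** rng.normal(1.0, 0.5, 1000)                  # discharge, median ~10
c = np.where(q > 10, 50 * (q / 10) ** -0.5, 50.0)
c = c * 10 ** rng.normal(0, 0.02, 1000)
s = cq_slopes(q, c)
```

The fitted pair of slopes (here roughly 0 below the median and -0.5 above it) is what classifies a station-element combination into one of the C-Q modalities discussed in the abstract.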
International Nuclear Information System (INIS)
Seljak, Uros; Makarov, Alexey; McDonald, Patrick; Anderson, Scott F.; Bahcall, Neta A.; Cen, Renyue; Gunn, James E.; Lupton, Robert H.; Schlegel, David J.; Brinkmann, J.; Burles, Scott; Doi, Mamoru; Ivezic, Zeljko; Kent, Stephen; Loveday, Jon; Munn, Jeffrey A.; Nichol, Robert C.; Ostriker, Jeremiah P.; Schneider, Donald P.; Berk, Daniel E. Vanden
2005-01-01
We combine the constraints from the recent Lyα forest analysis of the Sloan Digital Sky Survey (SDSS) and the SDSS galaxy bias analysis with previous constraints from SDSS galaxy clustering, the latest supernovae, and first-year WMAP cosmic microwave background anisotropies. We find significant improvements on all of the cosmological parameters compared to previous constraints, which highlights the importance of combining Lyα forest constraints with other probes. Combining WMAP and the Lyα forest we find for the primordial slope n_s = 0.98 ± 0.02. We see no evidence of running, dn/dln k = -0.003 ± 0.010, a factor of 3 improvement over previous constraints. We also find no evidence of tensors; the V ∝ φ² model is within the 2-sigma contour, while V ∝ φ⁴ is outside the 3-sigma contour. For the amplitude we find σ_8 = 0.90 ± 0.03 from the Lyα forest and WMAP alone. We find no evidence of neutrino mass: for the case of 3 massive neutrino families with an inflationary prior, the bound is … eV and the mass of the lightest neutrino is m_1 < … eV. We find Ω_Λ = 0.72 ± 0.02 and w(z=0.3) = -0.98 (-0.12, +0.10), the latter changing to w(z=0.3) = -0.92 (-0.10, +0.09) if tensors are allowed. We find no evidence for variation of the equation of state with redshift, w(z=1) = -1.03 (-0.28, +0.21). These results rely on the current understanding of the Lyα forest and other probes, which need to be explored further both observationally and theoretically, but extensive tests reveal no evidence of inconsistency among the different data sets used here
Li, Fengxian; Yi, Jianhong; Eckert, Jürgen
2017-12-01
Powder forged connecting rods have the problem of non-uniform density distributions because of their complex geometric shape. The densification behaviors of powder metallurgy (PM) connecting rod preforms during hot forging processes play a significant role in optimizing the connecting rod quality. The deformation behaviors of a connecting rod preform, a Fe-3Cu-0.5C (wt pct) alloy compacted and sintered by the powder metallurgy route (PM Fe-Cu-C), were investigated using the finite element method, while damage and friction behaviors of the material were considered in the complicated forging process. The calculated results agree well with the experimental results. The relationship between the processing parameters of hot forging and the relative density of the connecting rod was revealed. The results showed that the relative density of the hot forged connecting rod at the central shank changed significantly compared with the relative density at the big end and at the small end. Moreover, the relative density of the connecting rod was sensitive to the processing parameters such as the forging velocity and the initial density of the preform. The optimum forging processing parameters were determined and presented by using an orthogonal design method. This work suggests that the processing parameters can be optimized to prepare a connecting rod with uniform density distribution and can help to better meet the requirements of the connecting rod industry.
General constraints on the age and chemical evolution of the Galaxy
International Nuclear Information System (INIS)
Meyer, B.S.; Schramm, D.N.
1986-05-01
The formalism of Schramm and Wasserburg (1970) for determining the mean age of the elements is extended. Model-independent constraints (constraints that are independent of a specific form for the effective nucleosynthesis rate and Galactic chemical evolution over time) are derived on the first four terms in the expansion giving the mean age of the elements, and from these constraints limits are derived on the total duration of nucleosynthesis. These limits require only input of the Schramm-Wasserburg parameter Δ_max and of the ratio of the mean time for formation of the elements to the total duration of nucleosynthesis, t_ν/T. The former quantity is a function of nuclear input parameters. Limits on the latter are obtained from constraints on the relative rate of nucleosynthesis derived from the ²³²Th/²³⁸U, ²³⁵U/²³⁸U, and shorter-lived chronometric pairs.
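As a concrete instance of how a chronometric pair constrains timescales, the sudden-synthesis approximation gives a closed-form elapsed time from a production ratio and an observed ratio. The sketch below implements only that textbook limiting case (the paper's model-independent expansion is more general), and the numerical inputs are illustrative assumptions, not values from the paper:

```python
import math

def pair_age(prod_ratio, obs_ratio, half_life_1, half_life_2):
    """Sudden-synthesis age from a chronometric pair.

    If both nuclides were produced in a single event with abundance ratio
    prod_ratio and are observed later with ratio obs_ratio, the elapsed
    time t satisfies  obs = prod * exp(-(lambda_1 - lambda_2) * t).
    """
    lam1 = math.log(2) / half_life_1
    lam2 = math.log(2) / half_life_2
    return math.log(prod_ratio / obs_ratio) / (lam1 - lam2)

# illustrative (assumed) numbers for the 235U/238U pair: production ratio
# ~1.35, ratio at solar-system formation ~0.317, half-lives in Gyr
t = pair_age(1.35, 0.317, 0.704, 4.468)   # Gyr elapsed since synthesis
```

Continuous-production models replace the single exponential with an integral over the nucleosynthesis rate, which is exactly where the expansion-term constraints of the abstract enter.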
Fundamental Activity Constraints Lead to Specific Interpretations of the Connectome.
Directory of Open Access Journals (Sweden)
Jannis Schuecker
2017-02-01
The continuous integration of experimental data into coherent models of the brain is an increasing challenge of modern neuroscience. Such models provide a bridge between structure and activity, and identify the mechanisms giving rise to experimental observations. Nevertheless, structurally realistic network models of spiking neurons are necessarily underconstrained even if experimental data on brain connectivity are incorporated to the best of our knowledge. Guided by physiological observations, any model must therefore explore the parameter ranges within the uncertainty of the data. Based on simulation results alone, however, the mechanisms underlying stable and physiologically realistic activity often remain obscure. We here employ a mean-field reduction of the dynamics, which allows us to include activity constraints into the process of model construction. We shape the phase space of a multi-scale network model of the vision-related areas of macaque cortex by systematically refining its connectivity. Fundamental constraints on the activity, i.e., prohibiting quiescence and requiring global stability, prove sufficient to obtain realistic layer- and area-specific activity. Only small adaptations of the structure are required, showing that the network operates close to an instability. The procedure identifies components of the network critical to its collective dynamics and creates hypotheses for structural data and future experiments. The method can be applied to networks involving any neuron model with a known gain function.
¹H-MRS processing parameters affect metabolite quantification
DEFF Research Database (Denmark)
Bhogal, Alex A; Schür, Remmelt R; Houtepen, Lotte C
2017-01-01
Proton magnetic resonance spectroscopy (¹H-MRS) can be used to quantify in vivo metabolite levels, such as lactate, γ-aminobutyric acid (GABA) and glutamate (Glu). However, there are considerable analysis choices which can alter the accuracy or precision of ¹H-MRS metabolite quantification. We investigated the influence of model parameters and spectral quantification software on fitted metabolite concentration values. Sixty spectra in 30 individuals (repeated measures) were acquired using a 7-T MRI scanner. Data were processed by four independent research groups with the freedom to choose their own processing approaches. Reported outcome measures included NAA + NAAG/Cr + PCr and Glu/Cr + PCr. Metabolite quantification using identical ¹H-MRS data was influenced by processing parameters, basis sets and software choice. Locally preferred processing choices affected metabolite quantification, even when using identical software.
Mueller, Christina J; White, Corey N; Kuchinke, Lars
2017-11-27
The goal of this study was to replicate findings of diffusion model parameters capturing emotion effects in a lexical decision task and to investigate whether these findings extend to other tasks of implicit emotion processing. Additionally, we were interested in the stability of diffusion model parameters across emotional stimuli and tasks for individual subjects. Responses to words in a lexical decision task were compared with responses to faces in a gender categorization task for stimuli of the emotion categories happy, neutral and fear. Main effects of emotion, as well as the stability of emerging response style patterns as evident in diffusion model parameters across these tasks, were analyzed. Based on earlier findings, drift rates were assumed to be more similar in response to stimuli of the same emotion category than to stimuli of a different emotion category. Results showed that the emotion effects of the tasks differed, with a processing advantage for happy followed by neutral and fear-related words in the lexical decision task, and a processing advantage for neutral followed by happy and fearful faces in the gender categorization task. Both emotion effects were captured in the estimated drift rate parameters, and, in the case of the lexical decision task, also in the non-decision time parameters. A principal component analysis showed that, contrary to our hypothesis, drift rates were more similar within a specific task context than within a specific emotion category. Individual response patterns of subjects across tasks were evident in significant correlations of diffusion model parameters including response styles, non-decision times and information accumulation.
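The diffusion model underlying these analyses can be sketched directly: evidence accumulates at a drift rate toward one of two decision boundaries, and a non-decision time is added to the first-passage time. The parameter values below are illustrative assumptions, not the paper's fitted estimates; a higher drift rate (e.g., a processing advantage for happy words) yields faster and more accurate responses.

```python
import numpy as np

def simulate_ddm(drift, boundary, ndt, n_trials, dt=0.002, noise=1.0, seed=0):
    """Simulate a two-boundary drift-diffusion process starting at 0.

    Returns per-trial response (1 = upper/correct, 0 = lower/error)
    and reaction time (first-passage time plus non-decision time)."""
    rng = np.random.default_rng(seed)
    resp = np.empty(n_trials, dtype=int)
    rt = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        resp[i] = int(x >= boundary)
        rt[i] = t + ndt
    return resp, rt

# illustrative contrast: high vs low drift with shared boundary and ndt
resp_hi, rt_hi = simulate_ddm(drift=2.0, boundary=1.0, ndt=0.3, n_trials=300)
resp_lo, rt_lo = simulate_ddm(drift=0.5, boundary=1.0, ndt=0.3, n_trials=300,
                              seed=1)
```

Fitting the model to data inverts this simulation: drift, boundary and non-decision time are adjusted until the predicted accuracy and RT distributions match the observed ones, which is how the drift-rate and non-decision-time effects in the abstract are estimated.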
Roy, K.; Peltier, W. R.
2017-12-01
Our understanding of the Earth-Ice-Ocean interactions that have accompanied the large glaciation-deglaciation process characteristic of the last half of the Pleistocene has benefited significantly from the development of high-quality models of the Glacial Isostatic Adjustment (GIA) process. These models provide fundamental insight on the large changes in sea level and land ice cover over this time period, as well as key constraints on the viscosity structure of the Earth's interior. Their development has benefited from the recent availability of high-quality constraints from regions of forebulge collapse. In particular, over North America, the joint use of high-quality sea level data from the U.S. East coast, together with the vast network of precise space-geodetic observations of crustal motion existing over most of the interior of the continent, has led to the latest ICE-7G_NA (VM7) model (Roy & Peltier, GJI, 2017). In this paper, exciting opportunities provided by such high-quality observations related to the GIA process will be discussed, not only in the context of the continuing effort to refine global models of this phenomenon, but also in terms of the fundamental insight they may provide on outstanding issues in high-pressure geophysics, paleoclimatology or hydrogeology. Specific examples where such high-quality observations can be used (either separately, or using a combination of independent sources) will be presented, focusing particularly on constraints from the North American continent and from the Mediterranean basin. This work will demonstrate that, given the high-quality of currently available constraints on the GIA process, considerable further geophysical insight can be obtained based upon the use of spherically-symmetric models of the viscosity structure of the planet.
Freedom and constraint analysis and optimization
Brouwer, Dannis Michel; Boer, Steven; Aarts, Ronald G.K.M.; Meijaard, Jacob Philippus; Jonker, Jan B.
2011-01-01
Many mathematical and intuitive methods for constraint analysis of mechanisms have been proposed. In this article we compare three methods. Method one is based on Grübler's equation. Method two uses an intuitive analysis method based on opening kinematic loops and evaluating the constraints at the
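Method one above is based on Grübler's equation. A minimal sketch of the classical mobility count is shown below; the planar Kutzbach form of the formula and the four-bar example are standard textbook assumptions for illustration, not details taken from this article.

```python
# Grübler/Kutzbach mobility count for planar mechanisms (a minimal sketch):
# M = 3*(n - 1) - 2*j1 - j2, where n is the number of links (including the
# fixed link), j1 the number of one-DOF joints, j2 the number of two-DOF joints.
def grubler_planar(n_links: int, j1: int, j2: int = 0) -> int:
    """Return the mobility (degrees of freedom) of a planar mechanism."""
    return 3 * (n_links - 1) - 2 * j1 - j2

# A four-bar linkage: 4 links, 4 revolute (one-DOF) joints -> 1 DOF.
print(grubler_planar(4, 4))  # -> 1
```

The count is purely topological, which is why the article complements it with loop-opening and constraint-evaluation methods that detect geometry-dependent (over)constraints the formula misses.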
Influence of processing parameters on PZT thick films
International Nuclear Information System (INIS)
Huang, Oliver; Bandyopadhyay, Amit; Bose, Susmita
2005-01-01
We have studied the influence of processing parameters on the microstructure and ferroelectric properties of lead zirconate titanate (PZT)-based thick films in the range of 5-25 μm. PZT and 2% La-doped PZT thick films were processed using a modified sol-gel process. In this process, PZT and La-doped PZT powders were first prepared via sol-gel. These powders were calcined and then used with the respective sols to form a slurry. The slurry composition was optimized to spin-coat thick films on a platinized Si substrate (Si/SiO₂/Ti/Pt). Spinning rate, acceleration and slurry deposition techniques were optimized to form thick films with uniform thickness and without any cracking. Increasing solids loading was found to enhance the surface smoothness of the film and decrease porosity. Films were tested for their electrical properties and ferroelectric fatigue response. The maximum polarization obtained was 40 μC/cm² at 250 kV/cm for the PZT thick film and 30 μC/cm² at 450 kV/cm for the La-doped PZT thick film. After 10⁹ cycles of fatiguing at 35 kHz, La-doped PZT showed better resistance to ferroelectric fatigue compared with un-doped PZT films
Observational Constraints on Quark Matter in Neutron Stars
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
We study the observational constraints of mass and redshift on the properties of the equation of state (EOS) for quark matter in compact stars based on the quasi-particle description. We discuss two scenarios: strange stars and hybrid stars. We construct the equations of state utilizing an extended MIT bag model taking the medium effect into account for quark matter and the relativistic mean field theory for hadron matter. We show that quark matter may exist in strange stars and in the interior of neutron stars. The bag constant is a key parameter that strongly affects the mass of strange stars. The medium effect can lead to the stiffer hybrid-star EOS approaching the pure hadronic EOS, due to the reduction of quark matter, and hence the existence of heavy hybrid stars. We find that a middle-range coupling constant may be the best choice for hybrid stars to be compatible with the observational constraints.
Directory of Open Access Journals (Sweden)
Paweł Sitek
2016-01-01
This paper proposes a hybrid programming framework for modeling and solving constraint satisfaction problems (CSPs) and constraint optimization problems (COPs). Two paradigms, CLP (constraint logic programming) and MP (mathematical programming), are integrated in the framework. The integration is supplemented with an original method of problem transformation, used in the framework as a presolving method. The transformation substantially reduces the feasible solution space. The framework automatically generates CSP and COP models based on current values of data instances, questions asked by a user, and the set of predicates and facts of the problem being modeled, which altogether constitute a knowledge base for the given problem. This dynamic generation of dedicated models, based on the knowledge base together with externally changing parameters such as the user's questions, is an implementation of the autonomous search concept. The models are solved using internal or external solvers integrated with the framework. The architecture of the framework, as well as an outline of its implementation, is also included in the paper. The effectiveness of the framework regarding modeling and solution search is assessed through illustrative examples relating to scheduling problems with additional constrained resources.
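The presolving idea above, transforming the problem so the feasible space shrinks before any solver runs, can be illustrated with a toy CSP. This is a sketch under invented assumptions (two variables, two simple constraints), not the paper's CLP/MP framework:

```python
# Toy CSP: x, y in 0..9 with constraints x + y == 10 and x <= 4.
# A single propagation pass prunes the domains before enumeration,
# shrinking the candidate space from 100 pairs to 16.
from itertools import product

domains = {"x": set(range(10)), "y": set(range(10))}

def prune(domains):
    """One pass of constraint propagation for x <= 4 and x + y == 10."""
    d = {v: set(s) for v, s in domains.items()}
    d["x"] = {x for x in d["x"] if x <= 4}
    d["x"] = {x for x in d["x"] if any(x + y == 10 for y in d["y"])}
    d["y"] = {y for y in d["y"] if any(x + y == 10 for x in d["x"])}
    return d

reduced = prune(domains)
space_before = len(domains["x"]) * len(domains["y"])  # 100 candidate pairs
space_after = len(reduced["x"]) * len(reduced["y"])   # 16 candidate pairs
solutions = [(x, y) for x, y in product(sorted(reduced["x"]), sorted(reduced["y"]))
             if x + y == 10 and x <= 4]
print(space_before, space_after, solutions)
```

The same principle scales: the fewer candidate assignments survive presolving, the less work the downstream CLP or MP solver has to do.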
Two non-parametric methods for derivation of constraints from radiotherapy dose–histogram data
International Nuclear Information System (INIS)
Ebert, M A; Kennedy, A; Joseph, D J; Gulliford, S L; Buettner, F; Foo, K; Haworth, A; Denham, J W
2014-01-01
Dose constraints based on histograms provide a convenient and widely-used method for informing and guiding radiotherapy treatment planning. Methods of derivation of such constraints are often poorly described. Two non-parametric methods for derivation of constraints are described and investigated in the context of determination of dose-specific cut-points—values of the free parameter (e.g., percentage volume of the irradiated organ) which best reflect resulting changes in complication incidence. A method based on receiver operating characteristic (ROC) analysis and one based on a maximally-selected standardized rank sum are described and compared using rectal toxicity data from a prostate radiotherapy trial. Multiple test corrections are applied using a free step-down resampling algorithm, which accounts for the large number of tests undertaken to search for optimal cut-points and the inherent correlation between dose–histogram points. Both methods provide consistent significant cut-point values, with the rank sum method displaying some sensitivity to the underlying data. The ROC method is simple to implement and can utilize a complication atlas, though an advantage of the rank sum method is the ability to incorporate all complication grades without the need for grade dichotomization. (note)
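The ROC-based cut-point search described above can be sketched as follows. This is an assumed illustration, not the authors' code: each candidate value of the free parameter dichotomizes the cohort, and the split is scored here by Youden's J (sensitivity + specificity − 1) on invented toy data.

```python
# ROC-style search for a dose-volume cut-point (illustrative sketch).
def best_cutpoint(values, complication):
    """values: free-parameter value per patient (e.g. % volume at a dose level);
    complication: 1 if the patient experienced the complication, else 0."""
    best_j, best_c = -1.0, None
    for c in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, complication) if v >= c and y == 1)
        fn = sum(1 for v, y in zip(values, complication) if v < c and y == 1)
        tn = sum(1 for v, y in zip(values, complication) if v < c and y == 0)
        fp = sum(1 for v, y in zip(values, complication) if v >= c and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0  # Youden's J for this dichotomization
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

# Toy data: complications cluster above ~60% irradiated volume.
vols = [20, 35, 40, 55, 62, 70, 75, 80]
comp = [0,  0,  0,  0,  1,  1,  0,  1]
print(best_cutpoint(vols, comp))
```

In practice, as the abstract notes, the many correlated tests across dose-histogram points require multiple-test correction (e.g. free step-down resampling) before a cut-point is declared significant.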
New constraints in absorptive capacity and the optimum rate of petroleum output
Energy Technology Data Exchange (ETDEWEB)
El Mallakh, R
1980-01-01
Economic policy in four oil-producing countries is analyzed within a framework that combines a qualitative assessment of the policy-making process with an empirical formulation based on historical and current trends in these countries. The concept of absorptive capacity is used to analyze the optimum rates of petroleum production in Iran, Iraq, Saudi Arabia, and Kuwait. A control solution with an econometric model is developed which is then modified for alternative development strategies based on analysis of factors influencing production decisions. The study shows the consistencies and inconsistencies between the goals of economic growth, oil production, and exports, and the constraints on economic development. Simulation experiments incorporated a number of the constraints on absorptive capacity. Impact of other constraints such as income distribution and political stability is considered qualitatively. (DLC)
Liu, Jianguo; Yang, Bo; Chen, Changzhen
2013-02-01
The optimization of operating parameters for the isolation of peroxidase from horseradish (Armoracia rusticana) roots with ultrafiltration (UF) technology was systemically studied. The effects of UF operating conditions on the transmission of proteins were quantified using the parameter scanning UF. These conditions included solution pH, ionic strength, stirring speed and permeate flux. Under optimized conditions, the purity of horseradish peroxidase (HRP) obtained was greater than 84 % after a two-stage UF process and the recovery of HRP from the feedstock was close to 90 %. The resulting peroxidase product was then analysed by isoelectric focusing, SDS-PAGE and circular dichroism, to confirm its isoelectric point, molecular weight and molecular secondary structure. The effects of calcium ion on HRP specific activities were also experimentally determined.
Howlader, Harun Or Rashid; Matayoshi, Hidehito; Noorzad, Ahmad Samim; Muarapaz, Cirio Celestino; Senjyu, Tomonobu
2018-05-01
This paper presents a smart house-based power system for a thermal unit commitment programme. The proposed power system consists of smart houses, renewable energy plants and conventional thermal units. Transmission constraints are considered for the proposed system. The generated power of a large-capacity renewable energy plant can violate transmission constraints in the thermal unit commitment programme; therefore, the transmission constraints should be considered. This paper focuses on the optimal operation of the thermal units incorporating controllable loads, such as the Electric Vehicle and Heat Pump water heater of the smart houses. The proposed method is compared with the power flow in thermal unit operation without controllable loads and with the optimal operation without the transmission constraints. Simulation results validate the proposed method.
An algorithm for gradient-based dynamic optimization of UV flash processes
DEFF Research Database (Denmark)
Ritschel, Tobias Kasper Skovborg; Capolei, Andrea; Gaspar, Jozsef
2017-01-01
This paper presents a novel single-shooting algorithm for gradient-based solution of optimal control problems with vapor-liquid equilibrium constraints. Such optimal control problems are important in several engineering applications, for instance in control of distillation columns, in certain two… software as well as the performance of different compilers in a Linux operating system. These tests indicate that real-time nonlinear model predictive control of UV flash processes is computationally feasible....
Observational constraints on interstellar chemistry
International Nuclear Information System (INIS)
Winnewisser, G.
1984-01-01
The author points out presently existing observational constraints in the detection of interstellar molecular species and the limits they may cast on our knowledge of interstellar chemistry. The constraints which arise from the molecular side are summarised and some technical difficulties encountered in detecting new species are discussed. Some implications for our understanding of molecular formation processes are considered. (Auth.)
Directory of Open Access Journals (Sweden)
Prakash Kumar Sahu
2015-03-01
The purpose of this paper is to optimize the process parameters to obtain better mechanical properties of friction stir welded AM20 magnesium alloy using Taguchi Grey relational analysis (GRA). The considered process parameters are welding speed, tool rotation speed, shoulder diameter and plunging depth. The experiments were carried out using Taguchi's L18 factorial design of experiments. The process parameters were optimized and ranked based on the GRA. The percentage influence of each process parameter on the weld quality was also quantified. A validation experimental run was conducted using the optimal process condition obtained from the analysis, which showed the improvement in mechanical properties of the joint. This study also shows the feasibility of GRA with the Taguchi technique for improving the welding quality of magnesium alloy.
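The grey relational grade computation at the core of Taguchi-GRA can be sketched as below. The response values, the larger-the-better normalization, and the distinguishing coefficient ζ = 0.5 are illustrative assumptions, not the paper's data.

```python
# Grey relational grades for a set of experimental runs (illustrative sketch).
def grey_relational_grades(runs, zeta=0.5):
    """runs: list of response vectors, one per experimental run."""
    n_resp = len(runs[0])
    # Normalize each response column to [0, 1] (larger-the-better).
    cols = list(zip(*runs))
    norm = []
    for col in cols:
        lo, hi = min(col), max(col)
        norm.append([(v - lo) / (hi - lo) for v in col])
    norm_runs = list(zip(*norm))
    # Deviation from the ideal sequence (all ones). With delta_min = 0 and
    # delta_max = 1 after normalization, the grey relational coefficient
    # simplifies to zeta / (delta + zeta); the grade is the mean coefficient.
    grades = []
    for run in norm_runs:
        coeffs = [zeta / (abs(1.0 - x) + zeta) for x in run]
        grades.append(sum(coeffs) / n_resp)
    return grades

# Three runs, two responses (e.g. tensile strength, hardness; invented values).
grades = grey_relational_grades([[180, 60], [210, 72], [195, 65]])
print(max(range(3), key=lambda i: grades[i]))  # index of the best run
```

Ranking runs by grade is what lets GRA collapse several quality responses into a single optimization criterion for the Taguchi array.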
New Constraints on the running-mass inflation model
Covi, Laura; Lyth, David H.; Melchiorri, Alessandro
2002-01-01
We evaluate new observational constraints on the two-parameter scale-dependent spectral index predicted by the running-mass inflation model by combining the latest Cosmic Microwave Background (CMB) anisotropy measurements with the recent 2dFGRS data on the matter power spectrum, with Lyman $\\alpha $ forest data and finally with theoretical constraints on the reionization redshift. We find that present data still allow significant scale-dependence of $n$, which occurs in a physically reasonabl...
Domain general constraints on statistical learning.
Thiessen, Erik D
2011-01-01
All theories of language development suggest that learning is constrained. However, theories differ on whether these constraints arise from language-specific processes or have domain-general origins such as the characteristics of human perception and information processing. The current experiments explored constraints on statistical learning of patterns, such as the phonotactic patterns of an infant's native language. Infants in these experiments were presented with a visual analog of a phonotactic learning task used by J. R. Saffran and E. D. Thiessen (2003). Saffran and Thiessen found that infants' phonotactic learning was constrained such that some patterns were learned more easily than other patterns. The current results indicate that infants' learning of visual patterns shows the same constraints as infants' learning of phonotactic patterns. This is consistent with theories suggesting that constraints arise from domain-general sources and, as such, should operate over many kinds of stimuli in addition to linguistic stimuli. © 2011 The Author. Child Development © 2011 Society for Research in Child Development, Inc.
Evaluation of Injection Molding Process Parameters for Manufacturing Polyethylene Terephthalate
Directory of Open Access Journals (Sweden)
Marwah O.M.F.
2017-01-01
Quality control is an important aspect of any manufacturing process. Product quality in injection moulding is influenced by the process parameters. In this study, the effect of injection moulding parameters on the defect quantity of PET preforms was investigated. Optimizing the parameters of the injection moulding process is critical to enhancing productivity, since the parameters must operate at an optimum level for acceptable performance. A Design of Experiments (DOE) factorial design approach was used to find an optimum parameter setting and reduce defects. In this case study, Minitab 17 software was used to analyse the data. The selected input parameters were mould hot runner temperature, water cooling chiller temperature 1 and water cooling chiller temperature 2; the output of the process was the defect quantity of the preform. The relationship between input and output was analyzed using regression and Analysis of Variance (ANOVA). To interpolate the experimental data, mathematical models consisting of different types of regression equations were used. From the model, a 95% confidence level (p-value) was considered and the significant parameters were identified. This study involved a collaboration with a preform injection moulding company, Nilai Legasi Plastik Sdn Bhd, which enabled the researchers to collect the data and also helped the company improve the quality of its production. The results showed that the optimum parameter setting to reduce the defect quantity of the preform was MHR = 88°C, CT1 = 24°C and CT2 = 27°C. A comparison between the company's current setting and the optimum setting showed a 21% reduction in defect quantity at the optimum setting. Finally, from the optimization plot, the validation error between the predicted value and the experiment was 1.72%. The result proved that quality of products
Designing a Constraint Based Parser for Sanskrit
Kulkarni, Amba; Pokar, Sheetal; Shukl, Devanand
Verbal understanding (śābdabodha) of any utterance requires the knowledge of how words in that utterance are related to each other. Such knowledge is usually available in the form of cognition of grammatical relations. Generative grammars describe how a language codes these relations. Thus the knowledge of what information various grammatical relations convey is available from the generation point of view and not the analysis point of view. In order to develop a parser based on any grammar one should then know precisely the semantic content of the grammatical relations expressed in a language string, the clues for extracting these relations and finally whether these relations are expressed explicitly or implicitly. Based on the design principles that emerge from this knowledge, we model the parser as finding a directed tree, given a graph with nodes representing the words and edges representing the possible relations between them. Further, we also use the Mīmāṃsā constraint of ākāṅkṣā (expectancy) to rule out non-solutions and sannidhi (proximity) to prioritize the solutions. We have implemented a parser based on these principles and its performance was found to be satisfactory, giving us confidence to extend its functionality to handle complex sentences.
Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.
2010-01-01
Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decisionmaking. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in
Constraint-based component-modeling for knowledge-based design
Kolb, Mark A.
1992-01-01
The paper describes the application of various advanced programming techniques derived from artificial intelligence research to the development of flexible design tools for conceptual design. Special attention is given to two techniques which appear to be readily applicable to such design tools: constraint propagation and object-oriented programming. The implementation of these techniques in a prototype computer tool, Rubber Airplane, is described.
Machine Translation Using Constraint-Based Synchronous Grammar
Institute of Scientific and Technical Information of China (English)
WONG Fai; DONG Mingchui; HU Dongcheng
2006-01-01
A synchronous grammar based on the formalism of context-free grammar was developed by generalizing the first component of the production that models the source text. Unlike other synchronous grammars, this grammar allows multiple target productions to be associated with a single production rule, which can be used to guide a parser to infer different possible translational equivalences for a recognized input string according to the feature constraints of symbols in the pattern. An extended generalized LR algorithm was adapted to the parsing of the proposed formalism to analyze the syntactic structure of a language. The grammar was used as the basis for building a machine translation system for Portuguese-to-Chinese translation. The empirical results show that the grammar is more expressive when modeling the translational equivalences of parallel texts for machine translation and grammar rewriting applications.
Yu, Tao; Kang, Chao; Zhao, Pan
2018-01-01
The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two different objective values of winding products, a mechanical property (tensile strength) and a physical property (void content), were calculated. The paper then presents an integrated methodology combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding process. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. A verification test validated that the optimized intervals of the process parameters were reliable and stable for manufacturing winding products. PMID:29385048
Constraint Programming for Context Comprehension
DEFF Research Database (Denmark)
Christiansen, Henning
2014-01-01
A close similarity is demonstrated between context comprehension, such as discourse analysis, and constraint programming. The constraint store takes the role of a growing knowledge base learned throughout the discourse, and a suitable constraint solver does the job of incorporating new pieces
Genetic algorithm parameters tuning for resource-constrained project scheduling problem
Tian, Xingke; Yuan, Shengrui
2018-04-01
The Resource-Constrained Project Scheduling Problem (RCPSP) is an important class of scheduling problem. To achieve a given optimization goal, such as the shortest duration, the smallest cost or resource balance, the start and finish of all tasks must be arranged subject to the project's timing constraints and resource constraints. In theory, the problem is NP-hard, and many model variants exist. Many combinatorial optimization problems, such as job shop scheduling and flow shop scheduling, are special cases of RCPSP. The genetic algorithm (GA) has been used to deal with the classical RCPSP and has achieved remarkable results, and many scholars have studied improved genetic algorithms that solve the RCPSP more efficiently and accurately. However, these studies do not optimize the selection of the main parameters of the genetic algorithm itself; generally, the parameters are chosen empirically, which cannot ensure that they are optimal. In this paper, we address this blind selection of parameters in the process of solving the RCPSP: we perform sampling analysis, establish a surrogate (proxy) model, and ultimately solve for the optimal parameters.
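The tuning loop described above (sample candidate GA parameters, score each by the fitness a run achieves, keep the best setting) can be sketched as follows. This is an illustrative stand-in: a toy onemax objective replaces the RCPSP model, and the candidate mutation rates, population size and seed are invented for the example.

```python
import random

def run_ga(mutation_rate, n_bits=20, pop_size=20, generations=40, seed=1):
    """Short elitist GA on a toy onemax objective; returns best fitness found."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)  # maximize number of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if rng.random() < mutation_rate else g for g in child]
            children.append(child)
        pop = parents + children
    return max(map(fitness, pop))

# Sampled candidate settings; in practice a surrogate (proxy) model would
# interpolate between sampled points instead of testing only a fixed grid.
candidates = [0.0, 0.05, 0.5]
scores = {m: run_ga(m) for m in candidates}
best = max(candidates, key=lambda m: scores[m])
print(scores, best)
```

Replacing the empirical "pick a rate that worked before" step with such a scored sample is the essence of the paper's parameter-tuning proposal.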
Directory of Open Access Journals (Sweden)
Q. Zhou
2017-07-01
Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps for robotic motion estimation, and it largely influences precision and robustness. In this work, we choose Oriented FAST and Rotated BRIEF (ORB) features, considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find the correspondences between the two images for space intersection. Then the EDC and RANSAC algorithms are carried out to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when the next left image is matched against the current left image, the EDC and RANSAC are iteratively performed. After these steps, some mismatched points may remain in certain cases; a third RANSAC pass is then applied to eliminate the effect of those outliers on the estimation of the ego-motion parameters (interior and exterior orientation). The proposed approach has been tested on a real-world vehicle dataset and the results demonstrate its high robustness.
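The two-stage rejection described above can be sketched in miniature: an EDC pre-filter on putative matches, followed by RANSAC. This is a hedged illustration, not the paper's pipeline; a pure 2-D translation stands in for the full interior/exterior orientation model, and the point correspondences are synthetic.

```python
import random

def edc_filter(matches, max_dist):
    """Euclidean Distance Constraint: drop matches whose paired points are
    further apart than max_dist (gross mismatches in near-aligned frames)."""
    keep = []
    for (x1, y1), (x2, y2) in matches:
        if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 <= max_dist:
            keep.append(((x1, y1), (x2, y2)))
    return keep

def ransac_translation(matches, iters=200, tol=1.0, seed=0):
    """RANSAC with a 2-D translation model fitted from a single sampled match."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) < tol
                   and abs(m[1][1] - m[0][1] - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# Synthetic correspondences: true shift (5, 0) plus two gross mismatches.
good = [((x, y), (x + 5, y)) for x, y in [(0, 0), (1, 2), (3, 1), (4, 4)]]
bad = [((0, 0), (40, 40)), ((2, 2), (30, 5))]
matches = edc_filter(good + bad, max_dist=10)  # EDC removes the far-off pairs
inliers = ransac_translation(matches)
print(len(matches), len(inliers))
```

In the real system the same filtering is iterated across stereo and temporal matching stages, with the surviving inliers feeding the orientation estimation.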
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation
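The core idea above, building the likelihood directly from stochastic simulations and placing that parametric approximation inside a conventional MCMC sampler, can be sketched with a toy model. Everything concrete here is an assumption for illustration (a one-parameter Gaussian simulator standing in for FORMIND, a flat prior, invented sampler settings):

```python
import math
import random
import statistics

rng = random.Random(42)

def simulate(theta):
    """Toy stochastic model: one summary statistic ~ Normal(theta, 1)."""
    return rng.gauss(theta, 1.0)

def approx_log_likelihood(theta, observed, n_sims=200):
    """Parametric likelihood approximation: fit a normal distribution to
    repeated simulations and evaluate its log-density at the observation."""
    sims = [simulate(theta) for _ in range(n_sims)]
    mu, sd = statistics.fmean(sims), statistics.stdev(sims)
    return -0.5 * ((observed - mu) / sd) ** 2 - math.log(sd)

observed = 3.0  # the "field data" summary statistic
theta, ll = 0.0, approx_log_likelihood(0.0, observed)
chain = []
for _ in range(500):  # Metropolis sampler with a flat prior (assumed)
    prop = theta + rng.gauss(0.0, 0.5)
    ll_prop = approx_log_likelihood(prop, observed)
    if rng.random() < math.exp(min(0.0, ll_prop - ll)):
        theta, ll = prop, ll_prop
    chain.append(theta)
posterior_mean = statistics.fmean(chain[250:])  # discard burn-in
print(round(posterior_mean, 1))
```

The key contrast with plain ABC is visible in `approx_log_likelihood`: instead of accept/reject against a distance threshold, the simulations are summarized by a fitted parametric density that the MCMC sampler can evaluate directly.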
Directory of Open Access Journals (Sweden)
Gustav Jansson
2018-02-01
Full Text Available Research on platform-based production systems for house-building has focused on production and manufacturing issues. The aim of this research is to explore how the architectural design process contributes to the industrialised house-building industry from the perspective of creative design work. It also aims to describe how constraints affect architectural design work in the engineer-to-order context, when using platform-based production systems. Architects with experience in using platform-based building systems with different degrees of constraints were interviewed regarding creative aspects of the design work. The interviews, together with documents relating to platform constraints, were then analysed from the perspective of artistic and engineering design theories. The results show the benefits and issues of using platform constraints, both with prefabrication of volumetric modules, as well as prefabricated slab and wall elements. The study highlights a major research gap by describing how architectural work, from both the creative artistic and engineering design perspectives, is affected by constraints in the building platform: (1 the architectural design work goes through a series of divergent and convergent processes where the divergent processes are explorative and the convergent processes are solution-oriented; and (2, there is a trade-off between creativity and efficiency in the design work. Open parameters for layout design are key to architectural creativity, while predefinition supports efficiency. The results also provide an understanding of the potential for creativity in artistic and engineering work tasks through different phases in design, and how they are related to constraints in the platform. The main limitation of the research is the number of interviewed architects who had different background experiences of working with different types of platform constraints. More studies are needed to confirm the observations and to
Parameter identification in multinomial processing tree models
Schmittmann, V.D.; Dolan, C.V.; Raijmakers, M.E.J.; Batchelder, W.H.
2010-01-01
Multinomial processing tree models form a popular class of statistical models for categorical data that have applications in various areas of psychological research. As in all statistical models, establishing which parameters are identified is necessary for model inference and selection on the basis
Wang, Jin; Wang, Hui-Ping; Wang, Xiaojie; Cui, Haichao; Lu, Fenggui
2015-03-01
This paper investigates the hot cracking rate in Al fiber laser welding under various process conditions and performs the corresponding process optimization. First, the effects of welding process parameters, such as the distance between the welding center line and its closest trim edge, laser power and welding speed, on the hot cracking rate were investigated experimentally with response surface methodology (RSM). The hot cracking rate in this paper is defined as the ratio of hot cracking length to total weld seam length. Based on the experimental results following a Box-Behnken design, a prediction model for the hot cracking rate was developed using a second-order polynomial function considering only two-factor interactions. The initial prediction results indicated that the established model could predict the hot cracking rate adequately within the range of welding parameters used. The model was then used to optimize the welding parameters to achieve cracking-free welds.
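The RSM step above, fitting a second-order polynomial response model by least squares, can be sketched in one factor. The data are invented to follow an exact quadratic, and the single-factor form is a simplification of the paper's three-factor model with two-factor interactions:

```python
# Least-squares fit of y ≈ b0 + b1*x + b2*x^2 via the 3x3 normal equations.
def fit_quadratic(xs, ys):
    X = [[1.0, x, x * x] for x in xs]
    # Build A = X^T X and b = X^T y.
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(col + 1, 3):
            f = A[row][col] / A[col][col]
            for j in range(col, 3):
                A[row][j] -= f * A[col][j]
            b[row] -= f * b[col]
    # Back substitution.
    coeffs = [0.0, 0.0, 0.0]
    for row in (2, 1, 0):
        coeffs[row] = (b[row] - sum(A[row][j] * coeffs[j]
                                    for j in range(row + 1, 3))) / A[row][row]
    return coeffs

# Synthetic "cracking rate vs. welding speed" data generated from 1 - 2x + x^2.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [1.0, 0.25, 0.0, 0.25, 1.0]
b0, b1, b2 = fit_quadratic(xs, ys)
print(round(b0, 6), round(b1, 6), round(b2, 6))
```

With the fitted surface in hand, optimization reduces to searching the polynomial for parameter combinations where the predicted cracking rate is minimal, which is how the paper arrives at cracking-free welding conditions.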
First constraints on the running of non-Gaussianity.
Becker, Adam; Huterer, Dragan
2012-09-21
We use data from the Wilkinson Microwave Anisotropy Probe temperature maps to constrain a scale-dependent generalization of the popular "local" model for primordial non-Gaussianity. In the model where the parameter f_NL is allowed to run with scale k, f_NL(k) = f*_NL (k/k_piv)^(n_fNL), we constrain the running to be n_fNL = 0.30 (+1.9/-1.2) at 95% confidence, marginalized over the amplitude f*_NL. The constraints depend somewhat on the prior probabilities assigned to the two parameters. In the near future, constraints from a combination of Planck and large-scale structure surveys are expected to improve this limit by about an order of magnitude and usefully constrain classes of inflationary models.
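The running model, f_NL(k) = f*_NL (k/k_piv)^(n_fNL), is a simple power law in scale and is straightforward to evaluate; a small sketch, with the pivot scale k_piv chosen here purely for illustration:

```python
def f_nl(k, fstar_nl, n_fnl, k_piv=0.05):
    """Scale-dependent local non-Gaussianity amplitude:
    f_NL(k) = f*_NL * (k / k_piv)**n_fNL.
    At k = k_piv the amplitude equals f*_NL; n_fnl controls the running."""
    return fstar_nl * (k / k_piv) ** n_fnl
```

For positive running (n_fNL > 0) the amplitude grows toward smaller scales (larger k), which is what makes the parameter accessible to a combination of CMB and large-scale structure data.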
Finding Mass Constraints Through Third Neutrino Mass Eigenstate Decay
Gangolli, Nakul; de Gouvêa, André; Kelly, Kevin
2018-01-01
In this paper we aim to constrain the decay parameter for the third neutrino mass, utilizing already accepted constraints on the other mixing parameters from the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix. The main purpose of this project is to determine the parameters that will allow the Jiangmen Underground Neutrino Observatory (JUNO) to observe a decay parameter with some statistical significance. Another goal is to determine the parameters that JUNO could detect in the case that the third neutrino mass is lighter than the first two neutrino species. We also replicate the results that were found in the JUNO Conceptual Design Report (CDR). Using chi-squared analysis, constraints have been placed on the mixing angles, mass squared differences, and the third neutrino decay parameter. These statistical tests take into account background noise and normalization corrections, so the finalized bounds are a good approximation of the true bounds that JUNO can detect. If the decay parameter is not included in our models, the 99% confidence interval lies within the bounds 0 s to 2.80x10^-12 s. However, if we account for a decay parameter of 3x10^-5 eV^2, then the 99% confidence interval lies within 8.73x10^-12 s to 8.73x10^-11 s.
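The chi-squared comparison used in analyses like this is, at its core, a sum of squared, uncertainty-weighted residuals between observed and predicted event rates. A schematic version, not JUNO's actual analysis pipeline, which would also include the background and normalization nuisance terms mentioned above:

```python
def chi_squared(observed, expected, sigma):
    """Basic chi-squared statistic: sum over bins of
    ((observed - expected) / sigma)**2.
    Confidence intervals follow from scanning the model parameter
    and recording where chi-squared rises above a threshold."""
    return sum(((o - e) / s) ** 2 for o, e, s in zip(observed, expected, sigma))
```

Scanning the decay parameter, recomputing `expected` at each value, and thresholding the resulting chi-squared curve yields intervals of the kind quoted in the abstract.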
Exploring cosmic origins with CORE: Cosmological parameters
Di Valentino, E.; Brinckmann, T.; Gerbino, M.; Poulin, V.; Bouchet, F. R.; Lesgourgues, J.; Melchiorri, A.; Chluba, J.; Clesse, S.; Delabrouille, J.; Dvorkin, C.; Forastieri, F.; Galli, S.; Hooper, D. C.; Lattanzi, M.; Martins, C. J. A. P.; Salvati, L.; Cabass, G.; Caputo, A.; Giusarma, E.; Hivon, E.; Natoli, P.; Pagano, L.; Paradiso, S.; Rubiño-Martin, J. A.; Achúcarro, A.; Ade, P.; Allison, R.; Arroja, F.; Ashdown, M.; Ballardini, M.; Banday, A. J.; Banerji, R.; Bartolo, N.; Bartlett, J. G.; Basak, S.; Baumann, D.; de Bernardis, P.; Bersanelli, M.; Bonaldi, A.; Bonato, M.; Borrill, J.; Boulanger, F.; Bucher, M.; Burigana, C.; Buzzelli, A.; Cai, Z.-Y.; Calvo, M.; Carvalho, C. S.; Castellano, G.; Challinor, A.; Charles, I.; Colantoni, I.; Coppolecchia, A.; Crook, M.; D'Alessandro, G.; De Petris, M.; De Zotti, G.; Diego, J. M.; Errard, J.; Feeney, S.; Fernandez-Cobos, R.; Ferraro, S.; Finelli, F.; de Gasperis, G.; Génova-Santos, R. T.; González-Nuevo, J.; Grandis, S.; Greenslade, J.; Hagstotz, S.; Hanany, S.; Handley, W.; Hazra, D. K.; Hernández-Monteagudo, C.; Hervias-Caimapo, C.; Hills, M.; Kiiveri, K.; Kisner, T.; Kitching, T.; Kunz, M.; Kurki-Suonio, H.; Lamagna, L.; Lasenby, A.; Lewis, A.; Liguori, M.; Lindholm, V.; Lopez-Caniego, M.; Luzzi, G.; Maffei, B.; Martin, S.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; McCarthy, D.; Melin, J.-B.; Mohr, J. J.; Molinari, D.; Monfardini, A.; Negrello, M.; Notari, A.; Paiella, A.; Paoletti, D.; Patanchon, G.; Piacentini, F.; Piat, M.; Pisano, G.; Polastri, L.; Polenta, G.; Pollo, A.; Quartin, M.; Remazeilles, M.; Roman, M.; Ringeval, C.; Tartari, A.; Tomasi, M.; Tramonte, D.; Trappe, N.; Trombetti, T.; Tucker, C.; Väliviita, J.; van de Weygaert, R.; Van Tent, B.; Vennin, V.; Vermeulen, G.; Vielva, P.; Vittorio, N.; Young, K.; Zannoni, M.
2018-04-01
We forecast the main cosmological parameter constraints achievable with the CORE space mission which is dedicated to mapping the polarisation of the Cosmic Microwave Background (CMB). CORE was recently submitted in response to ESA's fifth call for medium-sized mission proposals (M5). Here we report the results from our pre-submission study of the impact of various instrumental options, in particular the telescope size and sensitivity level, and review the great, transformative potential of the mission as proposed. Specifically, we assess the impact on a broad range of fundamental parameters of our Universe as a function of the expected CMB characteristics, with other papers in the series focusing on controlling astrophysical and instrumental residual systematics. In this paper, we assume that only a few central CORE frequency channels are usable for our purpose, all others being devoted to the cleaning of astrophysical contaminants. On the theoretical side, we assume ΛCDM as our general framework and quantify the improvement provided by CORE over the current constraints from the Planck 2015 release. We also study the joint sensitivity of CORE and of future Baryon Acoustic Oscillation and Large Scale Structure experiments like DESI and Euclid. Specific constraints on the physics of inflation are presented in another paper of the series. In addition to the six parameters of the base ΛCDM, which describe the matter content of a spatially flat universe with adiabatic and scalar primordial fluctuations from inflation, we derive the precision achievable on parameters like those describing curvature, neutrino physics, extra light relics, primordial helium abundance, dark matter annihilation, recombination physics, variation of fundamental constants, dark energy, modified gravity, reionization and cosmic birefringence. In addition to assessing the improvement on the precision of individual parameters, we also forecast the post-CORE overall reduction of the allowed
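Mission forecasts of this kind are commonly derived from a Fisher-matrix analysis, in which the marginalised 1-sigma uncertainty on each parameter is the square root of the corresponding diagonal element of the inverse Fisher matrix. A minimal sketch of that final step (the abstract does not state the forecasting method, and building the Fisher matrix from CMB spectra and noise models is the substantial part omitted here):

```python
import numpy as np

def forecast_sigmas(fisher):
    """Marginalised 1-sigma parameter forecasts from a Fisher matrix F:
    sigma_i = sqrt((F^-1)_ii)."""
    cov = np.linalg.inv(np.asarray(fisher, dtype=float))
    return np.sqrt(np.diag(cov))
```

Joint forecasts with other experiments (e.g. BAO or LSS surveys) amount to summing the individual Fisher matrices before inverting, which is why combined constraints tighten.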
Model based process-product design and analysis
DEFF Research Database (Denmark)
Gani, Rafiqul
This paper gives a perspective on modelling and the important role it has within product-process design and analysis. Different modelling issues related to the development and application of systematic model-based solution approaches for product-process design are discussed, and the need for a hybrid model-based framework is highlighted. This framework should be able to manage knowledge-data, models, and associated methods and tools integrated with design work-flows and data-flows for specific product-process design problems. In particular, the framework needs to manage models of different types, forms and complexity, together with their associated parameters. An example of a model-based system for design of chemicals-based formulated products is also given.