International Nuclear Information System (INIS)
Flammini, Francesco; Marrone, Stefano; Mazzocca, Nicola; Vittorini, Valeria
2009-01-01
A large number of safety-critical control systems are based on N-modular redundant architectures, using majority voters on the outputs of independent computation units. In order to assess the compliance of these architectures with international safety standards, the frequency of hazardous failures must be analyzed by developing and solving proper formal models. Furthermore, the impact of maintenance faults has to be considered, since imperfect maintenance may degrade the safety integrity level of the system. In this paper, we present both a failure model for voting architectures based on Bayesian networks and a maintenance model based on continuous time Markov chains, and we propose to combine them according to a compositional multiformalism modeling approach in order to analyze the impact of imperfect maintenance on the system safety. We also show how the proposed approach promotes the reuse and the interchange of models as well as the interchange of solving tools.
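For intuition, the effect the abstract describes can be illustrated with a toy calculation (a simple independence-based sketch, not the paper's Bayesian-network/Markov model): the probability that a 2-out-of-3 majority voter delivers a correct output, and how imperfect maintenance, crudely modelled as degraded per-unit reliability, erodes the system-level figure.

```python
def majority_2oo3(r):
    """Probability that at least 2 of 3 independent units are working,
    i.e. that a 2-out-of-3 majority voter produces a correct output."""
    return 3 * r**2 * (1 - r) + r**3

# Imperfect maintenance modelled crudely as a drop in per-unit reliability.
print(majority_2oo3(0.99))   # well-maintained units
print(majority_2oo3(0.95))   # degraded units
```

Even a small per-unit degradation is amplified at the system level, which is why a combined maintenance/failure model of the kind the paper proposes matters.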
A Modular Approach to Redundant Robot Control
International Nuclear Information System (INIS)
Anderson, R.J.
1997-12-01
This paper describes a modular approach for computing redundant robot kinematics. First, some conventional redundant control methods are presented and shown to be 'passive control laws', i.e. they can be represented by a network consisting of passive elements. These networks are then put into modular form by applying scattering operator techniques. Additional subnetwork modules can then be added to further shape the motion. Modules for obstacle detection, joint limit avoidance, proximity sensing, and for imposing nonlinear velocity constraints are presented. The resulting redundant robot control system is modular, flexible and robust.
Cordier, Marie-Odile; Dague, Philippe; Lévy, François; Montmain, Jacky; Staroswiecki, Marcel; Travé-Massuyès, Louise
2004-10-01
Two distinct and parallel research communities have been working along the lines of the model-based diagnosis approach: the fault detection and isolation (FDI) community and the diagnostic (DX) community that have evolved in the fields of automatic control and artificial intelligence, respectively. This paper clarifies and links the concepts and assumptions that underlie the FDI analytical redundancy approach and the DX consistency-based logical approach. A formal framework is proposed in order to compare the two approaches and the theoretical proof of their equivalence together with the necessary and sufficient conditions is provided.
Fuzzy modeling of analytical redundancy for sensor failure detection
International Nuclear Information System (INIS)
Tsai, T.M.; Chou, H.P.
1991-01-01
Failure detection and isolation (FDI) in dynamic systems may be accomplished by testing the consistency of the system via analytically redundant relations. The redundant relation is basically a mathematical model relating system inputs and dissimilar sensor outputs from which information is extracted and subsequently examined for the presence of failure signatures. Performance of the approach is often jeopardized by inherent modeling error and noise interference. To mitigate such effects, techniques such as Kalman filtering and auto-regression-moving-average (ARMA) modeling in conjunction with probability tests are often employed. These conventional techniques treat the stochastic nature of uncertainties in a deterministic manner to generate best-estimate model and sensor outputs by minimizing uncertainties. In this paper, the authors present a different approach by treating the effect of uncertainties with fuzzy numbers. Coefficients in redundant relations derived from first-principle physical models are considered as fuzzy parameters and updated on-line according to system behavior. Failure detection is accomplished by examining the possibility that a sensor signal lies within an estimated fuzzy domain. To facilitate failure isolation, individual FDI monitors are designed for each sensor of interest.
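The core consistency check can be sketched minimally (my own illustrative functions and threshold, assuming a triangular fuzzy number for the estimated domain; the paper's on-line parameter updating is not shown):

```python
def triangular_membership(x, lo, mode, hi):
    """Membership (possibility) of crisp value x in a triangular
    fuzzy number with support [lo, hi] and peak at mode."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= mode:
        return (x - lo) / (mode - lo)
    return (hi - x) / (hi - mode)

def is_consistent(reading, estimate, threshold=0.3):
    """Flag a sensor as suspect when the possibility that its reading
    lies in the estimated fuzzy domain falls below a threshold."""
    return triangular_membership(reading, *estimate) >= threshold
```

A reading near the fuzzy peak has possibility close to 1; a reading in the tail of the estimated domain triggers the failure signature.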
International Nuclear Information System (INIS)
Gholinezhad, Hadi; Zeinal Hamadani, Ali
2017-01-01
This paper develops a new model for the redundancy allocation problem (RAP). As in many recent papers, the choice of redundancy strategy is considered a decision variable, but in our model each subsystem can exploit both active and cold-standby strategies simultaneously. Moreover, the model allows component mixing, so that components of different types may be used in each subsystem. The problem therefore reduces to determining the types of components, redundancy levels, and the number of active and cold-standby units of each type for each subsystem so as to maximize system reliability under constraints such as available budget, weight, and space. Since RAP belongs to the NP-hard class of optimization problems, a genetic algorithm (GA) is developed for solving the problem. Finally, the performance of the proposed algorithm is evaluated by applying it to a well-known test problem from the literature, with relatively satisfactory results.
- Highlights:
• A new model for the redundancy allocation problem in series–parallel systems is proposed.
• The redundancy strategy of each subsystem is considered as a decision variable and can be active, cold-standby or mixed.
• Component mixing is allowed; in other words, components of any subsystem can be non-identical.
• A genetic algorithm is developed for solving the problem.
• Computational experiments demonstrate that the new model leads to interesting results.
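The reliability difference between the two strategies the model mixes can be sketched for the textbook exponential-lifetime case (an illustrative sketch with perfect switching assumed, not the paper's mixed-strategy formulation):

```python
import math

def r_active(lam, t, n):
    """Reliability of n identical exponential(lam) units in active (hot)
    parallel redundancy: the subsystem fails only when all n have failed."""
    return 1.0 - (1.0 - math.exp(-lam * t)) ** n

def r_cold(lam, t, n):
    """Reliability of n units in cold standby with perfect switching:
    the subsystem lifetime is Erlang(n, lam)."""
    return math.exp(-lam * t) * sum((lam * t) ** k / math.factorial(k)
                                    for k in range(n))
```

For the same component count, cold standby dominates active redundancy because unused spares do not age, which is why letting each subsystem mix the two strategies enlarges the design space.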
Structural Equation Models in a Redundancy Analysis Framework With Covariates.
Lovaglio, Pietro Giorgio; Vittadini, Giorgio
2014-01-01
A recent method to specify and fit structural equation modeling in the Redundancy Analysis framework based on so-called Extended Redundancy Analysis (ERA) has been proposed in the literature. In this approach, the relationships between the observed exogenous variables and the observed endogenous variables are moderated by the presence of unobservable composites, estimated as linear combinations of exogenous variables. However, in the presence of direct effects linking exogenous and endogenous variables, or concomitant indicators, the composite scores are estimated by ignoring the presence of the specified direct effects. To fit structural equation models, we propose a new specification and estimation method, called Generalized Redundancy Analysis (GRA), allowing us to specify and fit a variety of relationships among composites, endogenous variables, and external covariates. The proposed methodology extends the ERA method, using a more suitable specification and estimation algorithm, by allowing for covariates that affect endogenous indicators indirectly through the composites and/or directly. To illustrate the advantages of GRA over ERA we propose a simulation study of small samples. Moreover, we propose an application aimed at estimating the impact of formal human capital on the initial earnings of graduates of an Italian university, utilizing a structural model consistent with well-established economic theory.
Directory of Open Access Journals (Sweden)
Anand Prakash
2014-03-01
Wireless Sensor Networks (WSNs) with their dynamic applications have gained tremendous attention from researchers. The need for constant monitoring of critical situations has led researchers to deploy WSNs on a wide range of platforms. The main focus in WSNs is to enhance network localization as much as possible, for efficient and optimal utilization of resources. Different redundancy-based approaches have been proposed for optimum functionality. Localization is closely tied to the redundancy of sensor nodes deployed in remote areas for constant and fault-tolerant monitoring. In this work, we compare classic flooding with the gossip protocol for homogeneous networks, which enhances stability and throughput quite significantly.
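The flooding-versus-gossip trade-off can be sketched with a toy push-gossip simulation (my own minimal setup on a complete graph, not the authors' protocol or topology):

```python
import random

def gossip_coverage(n, rounds, rng):
    """Push gossip on a complete graph of n nodes: in each round every
    informed node tells one uniformly chosen peer. Returns the fraction
    of nodes informed after the given number of rounds."""
    informed = {0}
    for _ in range(rounds):
        for node in list(informed):
            informed.add(rng.randrange(n))
    return len(informed) / n

# Classic flooding informs everyone immediately but costs n*(n-1) messages
# on a complete graph; gossip trades a few extra rounds for far fewer
# messages per round (one per informed node).
cov = gossip_coverage(100, 20, random.Random(1))
```

Coverage grows roughly exponentially in early rounds, so a logarithmic number of rounds suffices with high probability.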
The heuristic value of redundancy models of aging.
Boonekamp, Jelle J; Briga, Michael; Verhulst, Simon
2015-11-01
Molecular studies of aging aim to unravel the cause(s) of aging bottom-up, but linking these mechanisms to organismal level processes remains a challenge. We propose that complementary top-down data-directed modelling of organismal level empirical findings may contribute to developing these links. To this end, we explore the heuristic value of redundancy models of aging to develop a deeper insight into the mechanisms causing variation in senescence and lifespan. We start by showing (i) how different redundancy model parameters affect projected aging and mortality, and (ii) how variation in redundancy model parameters relates to variation in parameters of the Gompertz equation. Lifestyle changes or medical interventions during life can modify mortality rate, and we investigate (iii) how interventions that change specific redundancy parameters within the model affect subsequent mortality and actuarial senescence. Lastly, as an example of data-directed modelling and the insights that can be gained from this, (iv) we fit a redundancy model to mortality patterns observed by Mair et al. (2003; Science 301: 1731-1733) in Drosophila that were subjected to dietary restriction and temperature manipulations. Mair et al. found that dietary restriction instantaneously reduced mortality rate without affecting aging, while temperature manipulations had more transient effects on mortality rate and did affect aging. We show that after adjusting model parameters the redundancy model describes both effects well, and a comparison of the parameter values yields a deeper insight into the mechanisms causing these contrasting effects. We see replacement of the redundancy model parameters by more detailed sub-models of these parameters as a next step in linking demographic patterns to underlying molecular mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
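The qualitative behaviour of such redundancy models can be illustrated with the simplest case (a generic block of n parallel elements with a constant per-element failure rate; illustrative parameters of my choosing, not the authors' fitted model):

```python
import math

def hazard(t, k=0.01, n=5):
    """Hazard (mortality) rate of a block of n parallel elements, each
    failing at constant rate k. Redundancy makes early-life hazard tiny;
    hazard then rises steeply with age and plateaus near k once the
    redundancy reserve is exhausted (late-life mortality deceleration)."""
    q = 1.0 - math.exp(-k * t)                    # one element down by age t
    f = n * k * math.exp(-k * t) * q ** (n - 1)   # block failure density
    return f / (1.0 - q ** n)                     # density over survival
```

Varying n and k reproduces the kinds of parameter effects listed as point (i) in the abstract: more redundancy lowers initial mortality but steepens the subsequent rise.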
REDUNDANT ELECTRIC MOTOR DRIVE CONTROL UNIT DESIGN USING AUTOMATA-BASED APPROACH
Directory of Open Access Journals (Sweden)
Yuri Yu. Yankin
2014-11-01
Implementation of a redundant unit for motor drive control based on programmable logic devices is discussed. The continuous redundancy method is used. Compared to segregated standby redundancy and whole-system standby redundancy, this method preserves all unit functions under redundancy and allows continuous monitoring of both the main and the redundant elements. An example of such a unit is given. The electric motor drive control channel block diagram contains two control units, the main one and the redundant one, as well as four power supply units. The control units were programmed using an automata-based approach. A model of the electric motor drive control channel was developed; it provides combined simulation of the control state machine and the power converter. The visibility and hierarchy of the finite state machines shortened debugging time compared to traditional programming. A hardware description language representation of the control state machine is required for synthesis with the FPGA vendor's design software; this description was generated automatically by the MATLAB software package. To verify the results, two prototype control units, two prototype power supply units, and a device mock-up were developed and manufactured, and the units were installed in the mock-up. The prototype units were built in accordance with the requirements imposed on deliverable hardware. Simulation and test results for the control channel are presented for fault-free operation and for simulated faults of the main element. The automata-based approach made it possible to observe and debug control state machine transitions during simulation of the transient processes occurring under fault injection. The results of this work can be used in the development of fault-tolerant electric motor drive control channels.
Reliability model for common mode failures in redundant safety systems
International Nuclear Information System (INIS)
Fleming, K.N.
1974-12-01
A method is presented for computing the reliability of redundant safety systems, considering both independent and common mode failures. The model developed for the computation is a simple extension of classical reliability theory. The feasibility of the method is demonstrated with an example: the probability of failure of a typical diesel-generator emergency power system is computed based on data obtained from U.S. diesel-generator operating experience. The results are compared with reliability predictions based on the assumption that all failures are independent. The comparison shows a significant increase in the probability of redundant system failure when common failure modes are considered. (U.S.)
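The effect the abstract reports can be sketched with the kind of common-cause split that Fleming's work is commonly associated with, later known as the beta-factor method (illustrative numbers of my choosing, not the report's data):

```python
def q_1oo2(q, beta):
    """Unavailability of a 1-out-of-2 redundant pair when a fraction
    beta of each unit's failure probability q is common cause, i.e.
    fails both units at once."""
    q_ind = (1.0 - beta) * q       # independent part of the failure probability
    return q_ind ** 2 + beta * q   # both fail independently, or one shared cause

q_diesel = 0.01   # hypothetical per-demand failure probability of one generator
print(q_1oo2(q_diesel, 0.0))      # purely independent assumption
print(q_1oo2(q_diesel, 0.1))      # with a 10% common-cause fraction
```

Even a modest common-cause fraction dominates the squared independent term, reproducing the "significant increase" over the independence-only prediction.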
Tommasino, Paolo; Campolo, Domenico
2017-02-03
In this work, we address human-like motor planning in redundant manipulators. Specifically, we want to capture postural synergies such as Donders' law, experimentally observed in humans during kinematically redundant tasks, and infer a minimal set of parameters to implement similar postural synergies in a kinematic model. For the model itself, although the focus of this paper is to solve redundancy by implementing postural strategies derived from experimental data, we also want to ensure that such postural control strategies do not interfere with other possible forms of motion control (in the task-space), i.e. solving the posture/movement problem. The redundancy problem is framed as a constrained optimization problem, traditionally solved via the method of Lagrange multipliers. The posture/movement problem can be tackled via the separation principle which, derived from experimental evidence, posits that the brain processes static torques (i.e. posture-dependent, such as gravitational torques) separately from dynamic torques (i.e. velocity-dependent). The separation principle has traditionally been applied at a joint torque level. Our main contribution is to apply the separation principle to Lagrange multipliers, which act as task-space force fields, leading to a task-space separation principle. In this way, we can separate postural control (implementing Donders' law) from various types of task-space movement planners. As an example, the proposed framework is applied to the (redundant) task of pointing with the human wrist. Nonlinear inverse optimization (NIO) is used to fit the model parameters and to capture motor strategies displayed by six human subjects during pointing tasks. The novelty of our NIO approach is that (i) the fitted motor strategy, rather than raw data, is used to filter and down-sample human behaviours; (ii) our framework is used to efficiently simulate model behaviour iteratively, until it converges towards the experimental human strategies.
Fault Tolerance for Industrial Actuators in Absence of Accurate Models and Hardware Redundancy
DEFF Research Database (Denmark)
Papageorgiou, Dimitrios; Blanke, Mogens; Niemann, Hans Henrik
2015-01-01
This paper investigates Fault-Tolerant Control for closed-loop systems where only coarse models are available and there is lack of actuator and sensor redundancies. The problem is approached in the form of a typical servomotor in closed-loop. A linear model is extracted from input/output data to ...
Probability Model for Data Redundancy Detection in Sensor Networks
Directory of Open Access Journals (Sweden)
Suman Kumar
2009-01-01
Sensor networks are made of autonomous devices that are able to collect, store, process and share data with other devices. Large sensor networks are often redundant in the sense that the measurements of some nodes can be substituted by other nodes with a certain degree of confidence. This spatial correlation results in wastage of link bandwidth and energy. In this paper, a model for two associated Poisson processes, through which sensors are distributed in a plane, is derived. A probability condition is established for data redundancy among closely located sensor nodes. The model generates a spatial bivariate Poisson process whose parameters depend on the parameters of the two individual Poisson processes and on the distance between the associated points. The proposed model helps in building efficient algorithms for data dissemination in the sensor network. A numerical example is provided investigating the advantage of this model.
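The setting can be made concrete with a small Monte Carlo sketch (my own illustrative setup: two homogeneous Poisson processes on the unit square, with a distance threshold standing in for "closely located"; this is not the paper's analytical bivariate model):

```python
import math
import random

def poisson_draw(lam, rng):
    """Sample a Poisson(lam) count via Knuth's multiplicative method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < limit:
            return k
        k += 1

def redundancy_fraction(pts_a, pts_b, d):
    """Fraction of type-A sensors that have at least one type-B sensor
    within distance d; those A-readings are candidates for substitution."""
    hit = sum(1 for ax, ay in pts_a
              if any(math.hypot(ax - bx, ay - by) <= d for bx, by in pts_b))
    return hit / len(pts_a) if pts_a else 0.0

rng = random.Random(42)
pts_a = [(rng.random(), rng.random()) for _ in range(poisson_draw(50, rng))]
pts_b = [(rng.random(), rng.random()) for _ in range(poisson_draw(50, rng))]
p_small = redundancy_fraction(pts_a, pts_b, 0.05)
p_large = redundancy_fraction(pts_a, pts_b, 0.20)
```

As the distance threshold grows, the redundancy fraction grows monotonically, which is the quantity a dissemination algorithm would exploit.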
SIMULATION MODEL FOR DESIGN SUPPORT OF INFOCOMM REDUNDANT SYSTEMS
Directory of Open Access Journals (Sweden)
V. A. Bogatyrev
2016-09-01
Subject of Research. The paper deals with the effectiveness of multipath transfer of request copies through the network and their redundant service, without the use of laborious analytical modeling. A model and support tools for the design of highly reliable distributed systems based on simulation modeling have been created. Method. Many variants of organizing the service and delivery of requests through the network to the query servers are formed and analyzed, including options for redundant service and redundant delivery of request copies to the servers. The choice of variants for the distribution and service of requests takes into account how critical a query is with respect to its residence time in the system. A request is considered successful if at least one of its copies is delivered intact to a working server, ready to service requests received through the network, and is executed within the set time. Efficiency analysis of the redundant transmission and service of requests is based on a model built in the AnyLogic 7 simulation environment. Main Results. Simulation experiments based on the proposed models have shown the effectiveness of redundant multipath transmission of query copies (packets) to the servers in a cluster, combined with redundant service of the request copies by a group of servers in the cluster. This solution is shown to increase the probability of timely execution of at least one copy of a request. We have evaluated the efficiency of destroying outdated request copies in the queues of network nodes and the cluster, and have analyzed options for network implementation of multipath transfer of request copies to the servers in the cluster over disjoint paths, possibly differing in the number of their constituent nodes. Practical Relevance. The proposed simulation models can be used when selecting the optimal
Redundancy and blocking in the spatial domain: A connectionist model
Directory of Open Access Journals (Sweden)
I. P. L. Mc Laren
2002-01-01
How can the observations of spatial blocking (Rodrigo, Chamizo, McLaren & Mackintosh, 1997) and cue redundancy (O'Keefe and Conway, 1978) be reconciled within the framework provided by an error-correcting, connectionist account of spatial navigation? I show that an implementation of McLaren's (1995) better beta model can serve this purpose, and examine some of the implications for spatial learning and memory.
Study of redundant models in reliability prediction of HXMT's HES
International Nuclear Information System (INIS)
Wang Jinming; Liu Congzhan; Zhang Zhi; Ji Jianfeng
2010-01-01
Two redundant equipment structures for HXMT's HES are first proposed: block backup and dual-system cold redundancy. Reliability predictions are then made using the parts count method, and the two proposals are compared and analyzed. It is concluded that the block backup structure offers higher reliability and a longer service life. (authors)
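The parts count method mentioned above is simple to state (a generic MIL-HDBK-217-style sketch with made-up failure rates, not the HES's actual parts list):

```python
def parts_count_failure_rate(parts):
    """Parts-count reliability prediction: for a serial (non-redundant)
    block, the system failure rate is the sum over part types of
    quantity * generic part failure rate."""
    return sum(qty * lam for qty, lam in parts)

# Hypothetical parts list: (quantity, generic failure rate per hour)
board = [(2, 1e-6), (5, 2e-7), (10, 5e-8)]
lam_sys = parts_count_failure_rate(board)
mttf = 1 / lam_sys   # mean time to failure of one non-redundant block
```

The redundant structures in the abstract are then compared by combining such per-block rates, e.g. a cold-standby pair survives one block failure while a serial arrangement does not.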
Reliability-redundancy optimization by means of a chaotic differential evolution approach
International Nuclear Information System (INIS)
Coelho, Leandro dos Santos
2009-01-01
The reliability design is related to the performance analysis of many engineering systems. Reliability-redundancy optimization problems involve the selection of components with multiple choices and redundancy levels that produce maximum benefits, and can be subject to cost, weight, and volume constraints. Classical mathematical methods have failed in handling nonconvexities and nonsmoothness in optimization problems. As an alternative to the classical optimization approaches, meta-heuristics have been given much attention by many researchers due to their ability to find almost globally optimal solutions in reliability-redundancy optimization problems. Evolutionary algorithms (EAs), paradigms of the evolutionary computation field, are stochastic and robust meta-heuristics useful for solving reliability-redundancy optimization problems. EAs such as genetic algorithms, evolutionary programming, evolution strategies and differential evolution are being used to find globally or near-globally optimal solutions. A differential evolution approach based on chaotic sequences using Lozi's map for reliability-redundancy optimization problems is proposed in this paper. The proposed method not only has a fast convergence rate but also maintains the diversity of the population so as to escape from local optima. An application example in reliability-redundancy optimization based on the overspeed protection system of a gas turbine is given to show its usefulness and efficiency. Simulation results show that the application of deterministic chaotic sequences instead of random sequences is a possible strategy to improve the performance of differential evolution.
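The chaotic-sequence ingredient can be sketched directly: the Lozi map iterated at its classic chaotic parameters, rescaled to the unit interval so it can replace uniform draws (e.g. for mutation factors) in differential evolution. The initial condition and rescaling are my own illustrative choices.

```python
def lozi_sequence(n, a=1.7, b=0.5, x=0.1, y=0.1):
    """Generate n values of the Lozi map x' = 1 - a|x| + y, y' = b*x
    (classic chaotic parameters a=1.7, b=0.5), rescaled to [0, 1] for
    use in place of a uniform random number generator."""
    raw = []
    for _ in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x
        raw.append(x)
    lo, hi = min(raw), max(raw)
    return [(v - lo) / (hi - lo) for v in raw]

chaos = lozi_sequence(200)
```

The sequence is deterministic and reproducible yet non-repeating, which is exactly the property the paper exploits against premature convergence.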
International Nuclear Information System (INIS)
Santos Coelho, Leandro dos
2009-01-01
The reliability-redundancy optimization problems can involve the selection of components with multiple choices and redundancy levels that produce maximum benefits, and are subject to cost, weight, and volume constraints. Many classical mathematical methods have failed in handling nonconvexities and nonsmoothness in reliability-redundancy optimization problems. As an alternative to the classical optimization approaches, meta-heuristics have been given much attention by many researchers due to their ability to find almost globally optimal solutions. One of these meta-heuristics is particle swarm optimization (PSO), a population-based heuristic optimization technique inspired by the social behavior of bird flocking and fish schooling. This paper presents an efficient PSO algorithm based on Gaussian distribution and chaotic sequences (PSO-GC) to solve reliability-redundancy optimization problems. In this context, two examples of reliability-redundancy design problems are evaluated. Simulation results demonstrate that the proposed PSO-GC is a promising optimization technique. PSO-GC performs well for the two examples of mixed-integer programming in reliability-redundancy applications considered in this paper. The solutions obtained by PSO-GC are better than the previously best-known solutions available in the recent literature.
An Approach for Removing Redundant Data from RFID Data Streams
Mahdin, Hairulnizam; Abawajy, Jemal
2011-01-01
Radio frequency identification (RFID) systems are emerging as the primary object identification mechanism, especially in supply chain management. However, RFID naturally generates a large number of duplicate readings. Removing these duplicates from the RFID data stream is paramount, as they contribute no new information to the system and waste system resources. Existing approaches to this problem cannot fulfill the real-time demands of processing the massive RFID data stream. We propose a data filtering approach that efficiently detects and removes duplicate readings from RFID data streams. Experimental results show that the proposed approach offers a significant improvement compared to the existing approaches. PMID:22163730
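A baseline version of the filtering task can be sketched with a sliding time window (a minimal exact filter of my own; the paper's approach is more sophisticated and targets constant-space, real-time operation):

```python
def filter_duplicates(readings, window):
    """Sliding time-window filter: a reading (tag_id, t) is treated as a
    duplicate if the same tag was already accepted within the last
    `window` time units, and is dropped."""
    last_accepted = {}
    fresh = []
    for tag_id, t in readings:
        if tag_id not in last_accepted or t - last_accepted[tag_id] > window:
            fresh.append((tag_id, t))
            last_accepted[tag_id] = t
    return fresh

stream = [("A", 0), ("A", 1), ("B", 2), ("A", 6)]
out = filter_duplicates(stream, window=5)
```

The dictionary grows with the number of distinct tags, which is precisely the memory cost that approximate stream filters are designed to avoid.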
On modeling human reliability in space flights - Redundancy and recovery operations
Aarset, M.; Wright, J. F.
The reliability of humans is of paramount importance to the safety of space flight systems. This paper describes why 'back-up' operators might not be the best solution, and in some cases, might even degrade system reliability. The problem associated with human redundancy calls for special treatment in reliability analyses. The concept of Standby Redundancy is adopted, and psychological and mathematical models are introduced to improve the way such problems can be estimated and handled. In the past, human reliability has practically been neglected in most reliability analyses, and, when included, the humans have been modeled as a component and treated numerically the way technical components are. This approach is not wrong in itself, but it may lead to systematic errors if too simple analogies from the technical domain are used in the modeling of human behavior. In this paper redundancy in a man-machine system will be addressed. It will be shown how simplification from the technical domain, when applied to human components of a system, may give non-conservative estimates of system reliability.
Directory of Open Access Journals (Sweden)
Moath Kassim
2018-05-01
To maintain the safety and reliability of reactors, redundant sensors are usually used to measure critical variables and estimate their averaged time-dependency. Nonhealthy sensors can badly influence the estimation result of the process variable. Since online condition monitoring was introduced, the online cross-calibration method has been widely used to detect any anomaly in sensor readings within the redundant group. The cross-calibration method has four main averaging techniques: simple averaging, band averaging, weighted averaging, and parity space averaging (PSA). PSA is used to weigh redundant signals based on their error bounds and their band consistency. Using the consistency weighting factor (C), PSA assigns more weight to consistent signals that have shared bands, based on how many bands they share, and gives inconsistent signals very low weight. In this article, three approaches are introduced for improving the PSA technique: the first is to add another consistency factor, called trend consistency (TC), to account for the preservation of any characteristic edge that reflects the behavior of the equipment/component measured by the process parameter; the second proposes replacing the error bound/accuracy-based weighting factor (Wa) with a weighting factor based on the Euclidean distance (Wd); and the third proposes applying Wd, TC, and C all together. Cold neutron source data sets of four redundant hydrogen pressure transmitters from a research reactor were used to perform the validation and verification. Results showed that the second and third modified approaches lead to reasonable improvement of the PSA technique. All approaches implemented in this study share the capability to (1) identify and isolate a drifted sensor that should undergo calibration, (2) identify faulty sensors due to long, continuous ranges of missing data, and (3) identify a healthy sensor. Keywords: Nuclear Reactors
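The band-consistency idea can be sketched minimally (a simplified stand-in for the consistency factor C, counting overlapping error bands; these are not the article's exact PSA formulas):

```python
def band_consistent_average(readings, bounds):
    """Parity-space-style estimate: weight each redundant sensor by how
    many other sensors' error bands overlap its own band, so a drifted
    transmitter that shares no band with the others gets zero weight."""
    n = len(readings)
    weights = []
    for i in range(n):
        shared = sum(1 for j in range(n) if j != i
                     and abs(readings[i] - readings[j]) <= bounds[i] + bounds[j])
        weights.append(shared)
    total = sum(weights)
    if total == 0:            # no consistent pair at all: fall back to the mean
        return sum(readings) / n
    return sum(w * r for w, r in zip(weights, readings)) / total

vals = [10.0, 10.1, 9.9, 15.0]           # one transmitter has drifted
est = band_consistent_average(vals, [0.5, 0.5, 0.5, 0.5])
```

The drifted reading is excluded from the estimate, whereas a simple average would be pulled toward it.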
Directory of Open Access Journals (Sweden)
Shima MohammadZadeh Dogahe
2015-01-01
A novel integrated model is proposed to optimize the redundancy allocation problem (RAP) and reliability-centered maintenance (RCM) simultaneously. A system of both repairable and nonrepairable components has been considered. In this system, electronic components are nonrepairable while mechanical components are mostly repairable. For nonrepairable components, a redundancy allocation problem is solved to determine the optimal redundancy strategy and the number of redundant components to be implemented in each subsystem. In addition, a maintenance scheduling problem is considered for repairable components in order to identify the best maintenance policy and optimize system reliability. Both active and cold standby redundancy strategies have been taken into account for electronic components. Also, the net present value of the secondary cost, including operational and maintenance costs, has been calculated. The problem is formulated as a biobjective mathematical programming model aiming to reach a tradeoff between system reliability and cost. Three metaheuristic algorithms are employed to solve the proposed model: Nondominated Sorting Genetic Algorithm (NSGA-II), Multiobjective Particle Swarm Optimization (MOPSO), and Multiobjective Firefly Algorithm (MOFA). Several test problems are solved using these algorithms to test the efficiency and effectiveness of the solution approaches, and the obtained results are analyzed.
Directory of Open Access Journals (Sweden)
Liberles David A
2006-03-01
Background: The exchange of nucleotides at synonymous sites in a gene encoding a protein is believed to have little impact on the fitness of a host organism. This should be especially true for synonymous transitions, where a pyrimidine nucleotide is replaced by another pyrimidine, or a purine is replaced by another purine. This suggests that transition redundant exchange (TREx) processes at the third position of conserved two-fold codon systems might offer the best approximation for a neutral molecular clock, serving to examine, within coding regions, theories that require neutrality, determine whether transition rate constants differ within genes in a single lineage, and correlate dates of events recorded in genomes with dates in the geological and paleontological records. To date, TREx analysis of the yeast genome has recognized correlated duplications that established new metabolic strategies in fungi, and supported analyses of functional change in aromatases in pigs. TREx dating has limitations, however. Multiple transitions at synonymous sites may cause equilibration and loss of information. Further, to be useful for correlating events in the genomic record, different genes within a genome must suffer transitions at similar rates. Results: A formalism to analyze divergence at two-fold redundant codon systems is presented. This formalism exploits two-state approach-to-equilibrium kinetics from chemistry. It captures, in a single equation, the possibility of multiple substitutions at individual sites, avoiding any need to "correct" for these. The formalism also connects specific rate constants for transitions to specific approximations in an underlying evolutionary model, including assumptions that transition rate constants are invariant at different sites, in different genes, in different lineages, and at different times. Therefore, the formalism supports analyses that evaluate these approximations. Transitions at synonymous
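The two-state approach-to-equilibrium kinetics can be sketched in its simplest symmetric form (my own minimal version: equal transition rates in both directions, with t the total branch length separating two gene copies; the paper's formalism is more general):

```python
import math

def fraction_identical(t, k):
    """Expected fraction of two-fold redundant (TREx) sites still identical
    after total branch length t, for a symmetric two-state model in which
    each site flips pyrimidine<->pyrimidine (or purine<->purine) at rate k.
    Decays from 1 toward the equilibrium value 0.5."""
    return 0.5 + 0.5 * math.exp(-2.0 * k * t)

def divergence_time(f, k):
    """Invert the approach-to-equilibrium curve to date a duplication.
    Only meaningful while f > 0.5; at f = 0.5 the sites have equilibrated
    and the date information is lost (the limitation noted above)."""
    return -math.log(2.0 * f - 1.0) / (2.0 * k)
```

Because multiple substitutions are built into the exponential decay, no separate "correction" step is needed, which is the point the Results section makes.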
Resolving kinematic redundancy with constraints using the FSP (Full Space Parameterization) approach
International Nuclear Information System (INIS)
Pin, F.G.; Tulloch, F.A.
1996-01-01
A solution method is presented for the motion planning and control of kinematically redundant serial-link manipulators in the presence of motion constraints such as joint limits or obstacles. Given a trajectory for the end-effector, the approach utilizes the recently proposed Full Space Parameterization (FSP) method to generate a parameterized expression for the entire space of solutions of the unconstrained system. At each time step, a constrained optimization technique is then used to analytically find the specific joint motion solution that satisfies the desired task objective and all the constraints active during the time step. The method is applicable to systems operating in a priori known environments or in unknown environments with sensor-based obstacle detection. The derivation of the analytical solution is first presented for a general type of kinematic constraint and is then applied to the problem of motion planning for redundant manipulators with joint limits and obstacle avoidance. Sample results using planar and 3-D manipulators with various degrees of redundancy are presented to illustrate the efficiency and wide applicability of constrained motion planning using the FSP approach
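The FSP method parameterizes the entire solution space of the unconstrained system. A widely used special case of that space, the pseudoinverse-plus-null-space parameterization for a redundant planar arm, can be sketched as follows; this is a generic illustration, not the FSP algorithm itself, and the link lengths and velocities are made up.

```python
import numpy as np

def jacobian(q, l=(1.0, 1.0, 1.0)):
    """2x3 Jacobian of end-effector position for a planar 3-link arm
    (redundant with respect to a 2-D positioning task)."""
    c = np.cumsum(q)  # absolute link angles
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -sum(l[j] * np.sin(c[j]) for j in range(i, 3))
        J[1, i] =  sum(l[j] * np.cos(c[j]) for j in range(i, 3))
    return J

q = np.array([0.3, 0.4, 0.5])
J = jacobian(q)
x_dot = np.array([0.1, -0.05])        # desired end-effector velocity

J_pinv = np.linalg.pinv(J)
N = np.eye(3) - J_pinv @ J            # projector onto the null space
z = np.array([0.0, 0.2, 0.0])         # arbitrary internal (self-) motion
q_dot = J_pinv @ x_dot + N @ z        # every choice of z tracks x_dot

assert np.allclose(J @ q_dot, x_dot)  # task constraint satisfied
```

The free vector `z` plays the role of the extra parameters a full-space method optimizes over at each time step, e.g., to respect joint limits or avoid obstacles.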
Modified Redundancy based Technique—a New Approach to Combat Error Propagation Effect of AES
Sarkar, B.; Bhunia, C. T.; Maulik, U.
2012-06-01
Advanced encryption standard (AES) is a great research challenge. It has been developed to replace the data encryption standard (DES). AES suffers from a major limitation of error propagation effect. To tackle this limitation, two methods are available. One is the redundancy-based technique and the other is the bit-based parity technique. The first has the significant advantage over the second of correcting any error in a definite term, but at the cost of a higher level of overhead and hence a lower processing speed. In this paper, a new approach based on the redundancy-based technique is proposed that would certainly speed up the process of reliable encryption and hence the secured communication.
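The error propagation effect can be seen in a toy model: in CBC-style chaining, a single channel bit error in one ciphertext block corrupts two plaintext blocks on decryption. A 1-byte XOR "cipher" stands in for AES here purely to show the chaining structure; it has no cryptographic strength.

```python
# Toy illustration of channel-error propagation in CBC mode.
# The XOR "cipher" is a stand-in for AES: structure only, not security.
KEY = 0x5A

def cbc_encrypt(blocks, iv):
    out, prev = [], iv
    for b in blocks:
        c = (b ^ prev) ^ KEY    # chain: XOR with previous ciphertext
        out.append(c)
        prev = c
    return out

def cbc_decrypt(blocks, iv):
    out, prev = [], iv
    for c in blocks:
        out.append((c ^ KEY) ^ prev)
        prev = c
    return out

plain = [1, 2, 3, 4]
ct = cbc_encrypt(plain, iv=0)
ct[1] ^= 0x04                   # single channel bit error in block 1
rec = cbc_decrypt(ct, iv=0)

# The error corrupts blocks 1 and 2; blocks 0 and 3 survive intact.
assert rec[0] == 1 and rec[3] == 4
assert rec[1] != 2 and rec[2] != 3
```

Redundancy-based techniques attach check information so such corrupted blocks can be detected and corrected at the receiver instead of being silently propagated.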
A model for the coupling of failure rates in a redundant system
International Nuclear Information System (INIS)
Kleppmann, W.G.; Wutschig, R.
1986-01-01
A model is developed which takes into account the coupling between failure rates of identical components in different redundancies of a safety system, i.e., the fact that the failure rates of identical components subjected to the same operating conditions will scatter less than the failure rates of any two components of the same type. It is shown that with increasing coupling the expectation value and the variance of the distribution of the failure probability of the redundant system increase. A consistent way to incorporate operating experience in a Bayesian framework is developed and the results are presented. (orig.)
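The effect of coupling on the expected system failure probability can be reproduced with a small Monte Carlo sketch. The Beta uncertainty distribution and its parameters are illustrative assumptions, not taken from the paper: full coupling means both redundant trains share one draw of the uncertain failure probability, so the system mean becomes E[p²] ≥ E[p]².

```python
import random
random.seed(1)

N = 200_000

def sample_system(coupled):
    """Failure probability of a 2-redundant system (fails if both
    trains fail on demand), with per-train failure probability drawn
    from an assumed Beta(2, 50) uncertainty distribution."""
    if coupled:
        p = random.betavariate(2, 50)   # identical components: one draw
        return p * p
    p1 = random.betavariate(2, 50)      # independent draws per train
    p2 = random.betavariate(2, 50)
    return p1 * p2

mean_c = sum(sample_system(True) for _ in range(N)) / N
mean_i = sum(sample_system(False) for _ in range(N)) / N

# Coupling raises the expected system failure probability,
# since E[p^2] = Var(p) + E[p]^2 > E[p]^2.
assert mean_c > mean_i
```

This is the abstract's point in miniature: treating identical redundant components as statistically independent understates both the mean and the spread of the system failure probability.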
Static stiffness modeling of a novel hybrid redundant robot machine
International Nuclear Information System (INIS)
Li Ming; Wu Huapeng; Handroos, Heikki
2011-01-01
This paper presents a modeling method to study the stiffness of a hybrid serial-parallel robot IWR (Intersector Welding Robot) for the assembly of ITER vacuum vessel. The stiffness matrix of the basic element in the robot is evaluated using matrix structural analysis (MSA); the stiffness of the parallel mechanism is investigated by taking account of the deformations of both hydraulic limbs and joints; the stiffness of the whole integrated robot is evaluated by employing the virtual joint method and the principle of virtual work. The obtained stiffness model of the hybrid robot is analytical and the deformation results of the robot workspace under certain external load are presented.
Angeler, David G; Viedma, Olga; Moreno, José M
2009-11-01
Time lag analysis (TLA) is a distance-based approach used to study temporal dynamics of ecological communities by measuring community dissimilarity over increasing time lags. Despite its increased use in recent years, its performance in comparison with other more direct methods (i.e., canonical ordination) has not been evaluated. This study fills this gap using extensive simulations and real data sets from experimental temporary ponds (true zooplankton communities) and landscape studies (landscape categories as pseudo-communities) that differ in community structure and anthropogenic stress history. Modeling time with a principal coordinate of neighborhood matrices (PCNM) approach, the canonical ordination technique (redundancy analysis; RDA) consistently outperformed the other statistical tests (i.e., TLAs, Mantel test, and RDA based on linear time trends) using all real data. In addition, the RDA-PCNM revealed different patterns of temporal change, and the strength of each individual time pattern, in terms of adjusted variance explained, could be evaluated. It also identified species contributions to these patterns of temporal change. This additional information is not provided by distance-based methods. The simulation study revealed better Type I error properties of the canonical ordination techniques compared with the distance-based approaches when no deterministic component of change was imposed on the communities. The simulation also revealed that strong emphasis on uniform deterministic change and low variability at other temporal scales is needed to result in decreased statistical power of the RDA-PCNM approach relative to the other methods. Based on the statistical performance of and information content provided by RDA-PCNM models, this technique serves ecologists as a powerful tool for modeling temporal change of ecological (pseudo-) communities.
The heuristic value of redundancy models of aging
Boonekamp, Jelle J.; Briga, Michael; Verhulst, Simon
Molecular studies of aging aim to unravel the cause(s) of aging bottom-up, but linking these mechanisms to organismal level processes remains a challenge. We propose that complementary top-down data-directed modelling of organismal level empirical findings may contribute to developing these links.
The heuristic value of redundancy models of aging
Boonekamp, Jelle J.; Briga, Michael; Verhulst, Simon
2015-01-01
Molecular studies of aging aim to unravel the cause(s) of aging bottom-up, but linking these mechanisms to organismal level processes remains a challenge. We propose that complementary top-down data-directed modelling of organismal level empirical findings may contribute to developing these links.
International Nuclear Information System (INIS)
Attar, Ahmad; Raissi, Sadigh; Khalili-Damghani, Kaveh
2017-01-01
A simulation-based optimization (SBO) method is proposed to handle the multi-objective joint availability-redundancy allocation problem (JARAP). Here, there is no emphasis on probability distributions of time to failures and repair times for a multi-state multi-component series-parallel configuration under active, cold and hot standby strategies. Under such conditions, estimation of availability is not a trivial task. First, an efficient computer simulation model is proposed to estimate the availability of the aforementioned system. Then, the estimated availability values are used in a repetitive manner as a parameter of a two-objective joint availability-redundancy allocation optimization model through the SBO mechanism. The optimization model is then solved using two well-known multi-objective evolutionary computation algorithms, i.e., the non-dominated sorting genetic algorithm (NSGA-II) and the Strength Pareto Evolutionary Algorithm (SPEA2). The proposed SBO approach is tested using a non-exponential numerical example with multi-state repairable components. The results are presented and discussed through different demand scenarios under cold and hot standby strategies. Furthermore, the performances of NSGA-II and SPEA2 are statistically compared regarding multi-objective accuracy and diversity metrics. - Highlights: • A Simulation-Based Optimization (SBO) procedure is introduced for JARAP. • The proposed SBO works for any given failure and repair times. • An efficient simulation procedure is developed to estimate availability. • Customized NSGA-II and SPEA2 are proposed to solve the bi-objective JARAP. • Statistical analysis is employed to test the performance of optimization methods.
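The core simulation step — estimating availability when failure and repair times are non-exponential — can be sketched as an alternating-renewal simulation for a single repairable component. The Weibull and lognormal distributions and their parameters are illustrative assumptions, not the paper's model.

```python
import random
random.seed(0)

def simulate_availability(horizon=1e6):
    """Estimate long-run availability of one repairable component whose
    time to failure is Weibull and repair time lognormal (both
    non-exponential), by accumulating uptime over a long horizon."""
    t, up = 0.0, 0.0
    while t < horizon:
        ttf = random.weibullvariate(100.0, 1.5)  # scale, shape
        rep = random.lognormvariate(1.0, 0.5)    # mean/sigma of log
        up += min(ttf, horizon - t)              # truncate at horizon
        t += ttf + rep
    return up / horizon

a = simulate_availability()
# Long-run availability ~ MTTF / (MTTF + MTTR); well below 1, above 0.9
assert 0.9 < a < 1.0
```

An SBO loop would call such an estimator for every candidate redundancy allocation and feed the estimates to the evolutionary optimizer as objective values.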
Testing the race model inequality in redundant stimuli with variable onset asynchrony
DEFF Research Database (Denmark)
Gondan, Matthias
2009-01-01
In speeded response tasks with redundant signals, parallel processing of the signals is tested by the race model inequality. This inequality states that given a race of two signals, the cumulative distribution of response times for redundant stimuli never exceeds the sum of the cumulative distributions of response times for the single-modality stimuli. It has been derived for synchronous stimuli and for stimuli with stimulus onset asynchrony (SOA). In most experiments with asynchronous stimuli, discrete SOA values are chosen and the race model inequality is separately tested for each SOA. Due … to SOAs at which the violation of the race model prediction is expected to be large. In addition, the method enables data analysis for experiments in which stimuli are presented with SOA from a continuous distribution rather than in discrete steps.
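The race model inequality with SOA can be tested on empirical distribution functions in a few lines. The toy response times below are invented to make the violation visible; the inequality itself, F_AV(t) ≤ F_A(t) + F_V(t − SOA), is the standard form referenced in the abstract.

```python
def ecdf(sample, t):
    """Empirical cumulative distribution function at time t."""
    return sum(x <= t for x in sample) / len(sample)

def race_model_violated(rt_av, rt_a, rt_v, soa, ts):
    """Check Miller's race model inequality at the test times ts for a
    redundant condition whose second signal is delayed by soa:
    F_AV(t) <= F_A(t) + F_V(t - soa)."""
    return any(ecdf(rt_av, t) > ecdf(rt_a, t) + ecdf(rt_v, t - soa)
               for t in ts)

# Toy data (ms): redundant RTs faster than the race bound allows at t=200.
rt_a  = [250, 260, 270, 280, 300]
rt_v  = [240, 255, 265, 290, 310]
rt_av = [195, 200, 205, 210, 215]
assert race_model_violated(rt_av, rt_a, rt_v, soa=50, ts=[200])
```

Testing at each discrete SOA separately, as the abstract describes, amounts to calling such a check once per SOA value.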
Pauci ex tanto numero: reduce redundancy in multi-model ensembles
Solazzo, E.; Riccio, A.; Kioutsioukis, I.; Galmarini, S.
2013-08-01
We explicitly address the fundamental issue of member diversity in multi-model ensembles. To date, no attempts in this direction have been documented within the air quality (AQ) community despite the extensive use of ensembles in this field. Common biases and redundancy are the two issues directly deriving from lack of independence, undermining the significance of a multi-model ensemble, and are the subject of this study. Shared, dependent biases among models do not cancel out but will instead determine a biased ensemble. Redundancy derives from having too large a portion of common variance among the members of the ensemble, producing overconfidence in the predictions and underestimation of the uncertainty. The two issues of common biases and redundancy are analysed in detail using the AQMEII ensemble of AQ model results for four air pollutants in two European regions. We show that models share large portions of bias and variance, extending well beyond those induced by common inputs. We make use of several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble with the advantage of being poorly correlated. Selecting the members for generating skilful, non-redundant ensembles from such subsets proved, however, non-trivial. We propose and discuss various methods of member selection and rate the ensemble performance they produce. In most cases, the full ensemble is outscored by the reduced ones. We conclude that, although independence of outputs may not always guarantee enhancement of scores (but this depends upon the skill being investigated), we discourage selecting the members of the ensemble simply on the basis of scores; that is, independence and skills need to be considered disjointly.
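One simple diagnostic of the redundancy the abstract describes (not a method from the paper) is the mean pairwise correlation of member outputs and the resulting "effective number of independent members". The synthetic data below assume a common signal plus mostly shared noise, which is the situation the authors report.

```python
import math, random
random.seed(2)

def pearson(a, b):
    """Pearson correlation of two equal-length samples."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a) *
                           sum((y - mb) ** 2 for y in b))

# Four "models": a common signal plus mostly-shared noise -> redundancy.
T = 500
signal = [random.gauss(0, 1) for _ in range(T)]
shared = [random.gauss(0, 1) for _ in range(T)]
models = [[s + 0.8 * c + 0.3 * random.gauss(0, 1)
           for s, c in zip(signal, shared)] for _ in range(4)]

pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
r_bar = sum(pearson(models[i], models[j]) for i, j in pairs) / len(pairs)
n_eff = 4 / (1 + 3 * r_bar)   # effective number of independent members

# Four highly correlated members carry less than two members' worth
# of independent information.
assert r_bar > 0.8 and n_eff < 2
```

When `n_eff` is far below the ensemble size, a well-chosen subset can explain essentially the same variance as the full ensemble, which is the paper's motivation for member selection.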
Pauci ex tanto numero: reducing redundancy in multi-model ensembles
Solazzo, E.; Riccio, A.; Kioutsioukis, I.; Galmarini, S.
2013-02-01
We explicitly address the fundamental issue of member diversity in multi-model ensembles. To date, no attempts in this direction have been documented within the air quality (AQ) community, despite the extensive use of ensembles in this field. Common biases and redundancy are the two issues directly deriving from lack of independence, undermining the significance of a multi-model ensemble, and are the subject of this study. Shared biases among models will determine a biased ensemble, making it essential that the errors of the ensemble members be independent so that biases can cancel out. Redundancy derives from having too large a portion of common variance among the members of the ensemble, producing overconfidence in the predictions and underestimation of the uncertainty. The two issues of common biases and redundancy are analysed in detail using the AQMEII ensemble of AQ model results for four air pollutants in two European regions. We show that models share large portions of bias and variance, extending well beyond those induced by common inputs. We make use of several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble with the advantage of being poorly correlated. Selecting the members for generating skilful, non-redundant ensembles from such subsets proved, however, non-trivial. We propose and discuss various methods of member selection and rate the ensemble performance they produce. In most cases, the full ensemble is outscored by the reduced ones. We conclude that, although independence of outputs may not always guarantee enhancement of scores (but this depends upon the skill being investigated), we discourage selecting the members of the ensemble simply on the basis of scores; that is, independence and skills need to be considered disjointly.
Modeling and Control of the Redundant Parallel Adjustment Mechanism on a Deployable Antenna Panel
Directory of Open Access Journals (Sweden)
Lili Tian
2016-10-01
Full Text Available With the aim of developing multiple input and multiple output (MIMO) coupling systems with a redundant parallel adjustment mechanism on the deployable antenna panel, a structural control integrated design methodology is proposed in this paper. Firstly, the modal information from the finite element model of the structure of the antenna panel is extracted, and then the mathematical model is established with the Hamilton principle. Secondly, the discrete Linear Quadratic Regulator (LQR) controller is added to the model in order to control the actuators and adjust the shape of the panel. Finally, the engineering practicality of the modeling and control method based on finite element analysis simulation is verified.
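The discrete LQR step can be sketched in its simplest (scalar) form: iterate the discrete Riccati equation to a fixed point and read off the feedback gain. The system numbers are hypothetical; the paper's controller acts on a modal model of the panel, not on this toy plant.

```python
# Scalar discrete-time LQR: minimize sum(q*x^2 + r*u^2)
# for the plant x[k+1] = a*x[k] + b*u[k], with feedback u = -k_gain*x.
def dlqr_scalar(a, b, q, r, iters=1000):
    p = q
    for _ in range(iters):  # discrete Riccati fixed-point iteration
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)  # optimal gain

k_gain = dlqr_scalar(a=1.1, b=0.5, q=1.0, r=0.1)

# The open loop (a = 1.1) is unstable; the closed loop
# x[k+1] = (a - b*k_gain) x[k] must be stable.
assert abs(1.1 - 0.5 * k_gain) < 1.0
```

In the multivariable case the same recursion runs on matrices, with one gain row per actuator of the adjustment mechanism.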
A model of primate visual cortex based on category-specific redundancies in natural images
Malmir, Mohsen; Shiry Ghidary, S.
2010-12-01
Neurophysiological and computational studies have proposed that properties of natural images have a prominent role in shaping selectivity of neurons in the visual cortex. An important property of natural images that has been studied extensively is the inherent redundancy in these images. In this paper, the concept of category-specific redundancies is introduced to describe the complex pattern of dependencies between responses of linear filters to natural images. It is proposed that structural similarities between images of different object categories result in dependencies between responses of linear filters in different spatial scales. It is also proposed that the brain gradually removes these dependencies in different areas of the ventral visual hierarchy to provide a more efficient representation of its sensory input. The authors proposed a model to remove these redundancies and trained it with a set of natural images using general learning rules that are developed to remove dependencies between responses of neighbouring neurons. Results of experiments demonstrate the close resemblance of neuronal selectivity between different layers of the model and their corresponding visual areas.
Application of model-based and knowledge-based measuring methods as analytical redundancy
International Nuclear Information System (INIS)
Hampel, R.; Kaestner, W.; Chaker, N.; Vandreier, B.
1997-01-01
The safe operation of nuclear power plants requires the application of modern and intelligent methods of signal processing for normal operation as well as for the management of accident conditions. Such modern and intelligent methods are model-based and knowledge-based ones, being founded on analytical knowledge (mathematical models) as well as experience (fuzzy information). In addition to the existing hardware redundancies, analytical redundancies will be established with the help of these modern methods. These analytical redundancies support the operating staff during decision-making. The design of a hybrid model-based and knowledge-based measuring method will be demonstrated by the example of a fuzzy-supported observer. Within the fuzzy-supported observer a classical linear observer is connected with a fuzzy-supported adaptation of the model matrices of the observer model. This application is realized for the estimation of non-measurable variables such as steam content and mixture level within pressure vessels with water-steam mixture during accidental depressurizations. For this example the existing non-linearities will be classified and the verification of the model will be explained. The advantages of the hybrid method in comparison to the classical model-based measuring methods will be demonstrated by the results of estimation. The consideration of the parameters which have an important influence on the non-linearities requires the inclusion of high-dimensional structures of fuzzy logic within the model-based measuring methods. Therefore methods will be presented which allow the conversion of these high-dimensional structures to two-dimensional structures of fuzzy logic. As an efficient solution of this problem a method based on cascaded fuzzy controllers will be presented. (author). 2 refs, 12 figs, 5 tabs
Energy Technology Data Exchange (ETDEWEB)
Burak, K. [Invensys Process Systems, M/S C42-2B, 33 Commercial Street, Foxboro, MA 02035 (United States)
2006-07-01
We describe the Ethernet systems and their evolution: LAN segmentation, dual networks, network loops, network redundancy and redundant network access. Ethernet (IEEE 802.3) is an open standard with no licensing fees and its specifications are freely available. As a result, it is the most popular data link protocol in use. It is important that the network be redundant, and standard Ethernet protocols like RSTP (IEEE 802.1w) provide the fast network fault detection and recovery times that are required today. As Ethernet continues to evolve, network redundancy is and will be a mixture of technology standards. So it is very important that both end-stations and networking devices be Ethernet (IEEE 802.3) compliant. Then when new technologies, such as the IEEE 802.1aq Shortest Path Bridging protocol, come to market they can be easily deployed in the network without worry.
International Nuclear Information System (INIS)
Burak, K.
2006-01-01
We describe the Ethernet systems and their evolution: LAN segmentation, dual networks, network loops, network redundancy and redundant network access. Ethernet (IEEE 802.3) is an open standard with no licensing fees and its specifications are freely available. As a result, it is the most popular data link protocol in use. It is important that the network be redundant, and standard Ethernet protocols like RSTP (IEEE 802.1w) provide the fast network fault detection and recovery times that are required today. As Ethernet continues to evolve, network redundancy is and will be a mixture of technology standards. So it is very important that both end-stations and networking devices be Ethernet (IEEE 802.3) compliant. Then when new technologies, such as the IEEE 802.1aq Shortest Path Bridging protocol, come to market they can be easily deployed in the network without worry
Directory of Open Access Journals (Sweden)
Redko V. V.
2011-12-01
Full Text Available The paper discusses improvement of the accuracy of measurand measurements with the use of a measuring channel with a nonlinear calibration curve. A mathematical model is proposed which describes the process of redundant measurements for a measuring channel when its measurement function is a polynomial of the third power.
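Recovering the measurand from a third-degree calibration polynomial can be sketched with a bisection inversion. The channel coefficients below are hypothetical, and this is a generic inversion, not the paper's redundant-measurement model.

```python
def invert_calibration(y, coeffs, lo, hi, tol=1e-10):
    """Recover the measurand x from an indication y when the measuring
    channel's calibration curve is a third-degree polynomial
    y = c0 + c1*x + c2*x**2 + c3*x**3, monotone on [lo, hi]."""
    c0, c1, c2, c3 = coeffs

    def p(x):
        return c0 + c1 * x + c2 * x ** 2 + c3 * x ** 3

    while hi - lo > tol:  # bisection on the monotone interval
        mid = 0.5 * (lo + hi)
        if (p(mid) - y) * (p(lo) - y) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical channel: y = 0.1 + 2x + 0.05x^2 + 0.01x^3 on [0, 10]
x = invert_calibration(8.0, (0.1, 2.0, 0.05, 0.01), 0.0, 10.0)
assert abs((0.1 + 2 * x + 0.05 * x ** 2 + 0.01 * x ** 3) - 8.0) < 1e-6
```

Redundant measurements enter when several such indications are combined to cancel systematic components of the channel error.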
International Nuclear Information System (INIS)
Zhang, Enze; Wu, Yifei; Chen, Qingwei
2014-01-01
This paper proposes a practical approach, combining bare-bones particle swarm optimization and sensitivity-based clustering for solving multi-objective reliability redundancy allocation problems (RAPs). A two-stage process is performed to identify promising solutions. Specifically, a new bare-bones multi-objective particle swarm optimization algorithm (BBMOPSO) is developed and applied in the first stage to identify a Pareto-optimal set. This algorithm mainly differs from other multi-objective particle swarm optimization algorithms in the parameter-free particle updating strategy, which is especially suitable for handling the complexity and nonlinearity of RAPs. Moreover, by utilizing an approach based on the adaptive grid to update the global particle leaders, a mutation operator to improve the exploration ability and an effective constraint handling strategy, the integrated BBMOPSO algorithm can generate excellent approximation of the true Pareto-optimal front for RAPs. This is followed by a data clustering technique based on difference sensitivity in the second stage to prune the obtained Pareto-optimal set and obtain a small, workable sized set of promising solutions for system implementation. Two illustrative examples are presented to show the feasibility and effectiveness of the proposed approach
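The parameter-free "bare-bones" particle update the abstract highlights can be sketched on a single-objective toy problem; the swarm size, iteration count, and test function are illustrative assumptions, and the full BBMOPSO adds Pareto archiving, grid-based leader selection, mutation, and constraint handling on top of this kernel.

```python
import random
random.seed(3)

def bbpso(f, dim=2, particles=20, iters=200):
    """Bare-bones PSO: no velocities or inertia/acceleration tuning.
    Each coordinate is resampled from a Gaussian centred midway between
    the particle's personal best and the global best, with a standard
    deviation equal to their separation."""
    X = [[random.uniform(-5, 5) for _ in range(dim)]
         for _ in range(particles)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(particles):
            X[i] = [random.gauss(0.5 * (pbest[i][d] + gbest[d]),
                                 abs(pbest[i][d] - gbest[d]))
                    for d in range(dim)]
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]
        gbest = min(pbest, key=f)[:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = bbpso(sphere)
assert sphere(best) < 0.05  # converges near the optimum at the origin
```

The appeal for redundancy allocation is exactly the absence of tunable velocity parameters: the update adapts its step size automatically as the swarm contracts.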
Directory of Open Access Journals (Sweden)
Zhong Lunlong
2015-04-01
Full Text Available In safety-critical systems such as transportation aircraft, redundancy of actuators is introduced to improve fault tolerance. How to make the best use of the remaining actuators to allow the system to continue achieving a desired operation in the presence of some actuator failures is the main subject of this paper. Considering that many dynamical systems, including the flight dynamics of a transportation aircraft, can be expressed as an input affine nonlinear system, a new state representation is adopted here where the output dynamics are related with virtual inputs associated with the intended operation. This representation, as well as the distribution matrix associated with the effectiveness of the remaining operational actuators, allows us to define different levels of fault tolerant governability with respect to actuators’ failures. Then, a two-stage control approach is developed, leading first to the inversion of the output dynamics to get nominal values for the virtual inputs and then to the solution of a linear quadratic (LQ) problem to compute the solicitation of each operational actuator. The proposed approach is applied to the control of a transportation aircraft which performs a stabilized roll maneuver while a partial failure appears. Two fault scenarios are considered and the resulting performance of the proposed approach is displayed and discussed.
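The second stage — distributing a virtual input over the remaining actuators — can be sketched as a minimum-norm allocation. This pseudo-inverse allocation is a simple stand-in for the paper's LQ stage, and the effectiveness matrix `B` and virtual input `v` are made-up numbers.

```python
import numpy as np

# Virtual input v must be produced by redundant actuators: B @ u = v.
# Columns of B are actuator effectiveness vectors; a failure zeroes one.
B = np.array([[1.0, 0.8, 0.5, 0.2],
              [0.0, 0.3, 0.9, 1.0]])
v = np.array([0.4, -0.2])

def allocate(B, v, failed=()):
    """Minimum-norm allocation over the operational actuators."""
    Bf = B.copy()
    for j in failed:
        Bf[:, j] = 0.0          # failed actuator contributes nothing
    u, *_ = np.linalg.lstsq(Bf, v, rcond=None)
    return Bf, u

Bf, u = allocate(B, v, failed=(1,))
# The remaining three actuators still realize the virtual input exactly,
# which is the redundancy the paper exploits for fault tolerance.
assert np.allclose(Bf @ u, v, atol=1e-9)
```

As long as the reduced effectiveness matrix keeps full row rank, the intended operation remains achievable, which corresponds to the paper's notion of fault tolerant governability.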
Motion compensation via redundant-wavelet multihypothesis.
Fowler, James E; Cui, Suxia; Wang, Yonghui
2006-10-01
Multihypothesis motion compensation has been widely used in video coding with previous attention focused on techniques employing predictions that are diverse spatially or temporally. In this paper, the multihypothesis concept is extended into the transform domain by using a redundant wavelet transform to produce multiple predictions that are diverse in transform phase. The corresponding multiple-phase inverse transform implicitly combines the phase-diverse predictions into a single spatial-domain prediction for motion compensation. The performance advantage of this redundant-wavelet-multihypothesis approach is investigated analytically, invoking the fact that the multiple-phase inverse involves a projection that significantly reduces the power of a dense-motion residual modeled as additive noise. The analysis shows that redundant-wavelet multihypothesis is capable of up to a 7-dB reduction in prediction-residual variance over an equivalent single-phase, single-hypothesis approach. Experimental results substantiate the performance advantage for a block-based implementation.
International Nuclear Information System (INIS)
Guilani, Pedram Pourkarim; Azimi, Parham; Niaki, S.T.A.; Niaki, Seyed Armin Akhavan
2016-01-01
The redundancy allocation problem (RAP) is a useful method to enhance system reliability. In most works involving RAP, failure rates of the system components are assumed to follow either exponential or k-Erlang distributions. In real world problems however, many systems have components with increasing failure rates. This indicates that as time passes by, the failure rates of the system components increase in comparison to their initial failure rates. In this paper, the redundancy allocation problem of a series–parallel system with components having an increasing failure rate based on Weibull distribution is investigated. An optimization method via simulation is proposed for modeling and a genetic algorithm is developed to solve the problem. - Highlights: • The redundancy allocation problem of a series–parallel system is aimed. • Components possess an increasing failure rate based on Weibull distribution. • An optimization method via simulation is proposed for modeling. • A genetic algorithm is developed to solve the problem.
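The simulation side of this RAP can be sketched with stdlib Weibull sampling: components with shape parameter greater than 1 have increasing failure rates, and adding a redundant component to a subsystem raises mission reliability. The configuration and parameters below are illustrative assumptions, not the paper's test system.

```python
import random
random.seed(4)

# Series-parallel system: a subsystem fails when ALL of its redundant
# components fail; the system fails when ANY subsystem fails.
# Weibull shape > 1 gives the increasing failure rate of the abstract.
def system_lifetime(design):
    """design: list of (n_components, weibull_scale, weibull_shape)."""
    return min(max(random.weibullvariate(scale, shape)
                   for _ in range(n))
               for n, scale, shape in design)

base     = [(1, 100.0, 2.0), (2, 80.0, 2.0)]
upgraded = [(2, 100.0, 2.0), (2, 80.0, 2.0)]  # redundancy added to sub 1

t_mission = 60.0
N = 50_000
r_base = sum(system_lifetime(base) > t_mission for _ in range(N)) / N
r_up   = sum(system_lifetime(upgraded) > t_mission for _ in range(N)) / N

# Allocating a redundant component to the weaker subsystem improves
# the estimated mission reliability.
assert r_up > r_base
```

A genetic algorithm for the RAP would evaluate each candidate `design` with exactly this kind of simulated reliability estimate as its fitness.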
Introduction to the special issue: parsimony and redundancy in models of language.
Wiechmann, Daniel; Kerz, Elma; Snider, Neal; Jaeger, T Florian
2013-09-01
One of the most fundamental goals in linguistic theory is to understand the nature of linguistic knowledge, that is, the representations and mechanisms that figure in a cognitively plausible model of human language-processing. The past 50 years have witnessed the development and refinement of various theories about what kind of 'stuff' human knowledge of language consists of, and technological advances now permit the development of increasingly sophisticated computational models implementing key assumptions of different theories from both rationalist and empiricist perspectives. The present special issue does not aim to present or discuss the arguments for and against the two epistemological stances or discuss evidence that supports either of them (cf. Bod, Hay, & Jannedy, 2003; Christiansen & Chater, 2008; Hauser, Chomsky, & Fitch, 2002; Oaksford & Chater, 2007; O'Donnell, Hauser, & Fitch, 2005). Rather, the research presented in this issue, which we label usage-based here, conceives of linguistic knowledge as being induced from experience. According to the strongest of such accounts, the acquisition and processing of language can be explained with reference to general cognitive mechanisms alone (rather than with reference to innate language-specific mechanisms). Defined in these terms, usage-based approaches encompass approaches referred to as experience-based, performance-based and/or emergentist approaches (Arnon & Snider, 2010; Bannard, Lieven, & Tomasello, 2009; Bannard & Matthews, 2008; Chater & Manning, 2006; Clark & Lappin, 2010; Gerken, Wilson, & Lewis, 2005; Gomez, 2002;
The Development of Synchronization Function for Triple Redundancy System Based on SCADE
Directory of Open Access Journals (Sweden)
Moupeng
2015-07-01
Full Text Available Redundancy is an effective technique to improve the reliability and security of a flight control system, and the synchronization function of a redundant system is a key technology of redundancy management. The flight control computer synchronization model is developed by a graphical modeling method in the SCADE development environment; automatic code generation is used to produce highly reliable embedded real-time code for the synchronization function, omitting the code test process and shortening the development cycle. In practical application, the program accomplishes functional synchronization and lays a solid foundation for the redundancy system.
Neilson, Peter D; Neilson, Megan D
2005-09-01
Adaptive model theory (AMT) is a computational theory that addresses the difficult control problem posed by the musculoskeletal system in interaction with the environment. It proposes that the nervous system creates motor maps and task-dependent synergies to solve the problems of redundancy and limited central resources. These lead to the adaptive formation of task-dependent feedback/feedforward controllers able to generate stable, noninteractive control and render nonlinear interactions unobservable in sensory-motor relationships. AMT offers a unified account of how the nervous system might achieve these solutions by forming internal models. This is presented as the design of a simulator consisting of neural adaptive filters based on cerebellar circuitry. It incorporates a new network module that adaptively models (in real time) nonlinear relationships between inputs with changing and uncertain spectral and amplitude probability density functions as is the case for sensory and motor signals.
Quantum redundancies and local realism
International Nuclear Information System (INIS)
Horodecki, R.; Horodecki, P.
1994-01-01
The basic properties of quantum redundancies are presented. The previous definitions of the informationally coherent quantum (ICQ) system are generalized in terms of the redundancies. The ICQ systems are also considered in the context of local realism in terms of the information integrity factor η. The classical region η ≤ 1/2 for the two classes of mixed, nonfactorizable states admitting the local hidden variable model is found. ((orig.))
Directory of Open Access Journals (Sweden)
A. Ya. Krasinskii
2014-01-01
Full Text Available A method for investigating the stability and stabilization of equilibria of systems with geometric constraints is elaborated and applied to the equilibrium of the real mechatronic arrangement GBB1005 Ball & Beam. Shul'gin's equations with redundant coordinates are used to construct the mathematical model. For the stability analysis it is necessary to add the kinematic (holonomic) constraints obtained by differentiating the geometric constraints. Asymptotic stability of equilibrium for mechanical systems with redundant coordinates is possible, in spite of the formal reduction to Lyapunov's special case, if the number of zero roots is equal to the number of constraints. A more exact nonlinear mathematical model of the mechanical component of Ball & Beam is considered in this paper. The single nonlinear geometric constraint in this problem allows a new equilibrium position to be found. The choice of the linear control subsystem depends on the choice of the redundant coordinate.
Directory of Open Access Journals (Sweden)
Loet Leydesdorff
2010-01-01
Full Text Available Mutual information among three or more dimensions (μ* = –Q) has been considered as interaction information. However, Krippendorff [1,2] has shown that this measure cannot be interpreted as a unique property of the interactions and has proposed an alternative measure of interaction information based on iterative approximation of maximum entropies. Q can then be considered as a measure of the difference between interaction information and redundancy generated in a model entertained by an observer. I argue that this provides us with a measure of the imprint of a second-order observing system—a model entertained by the system itself—on the underlying information processing. The second-order system communicates meaning hyper-incursively; an observation instantiates this meaning-processing within the information processing. The net results may add to or reduce the prevailing uncertainty. The model is tested empirically for the case where textual organization can be expected to contain intellectual organization in terms of distributions of title words, author names, and cited references.
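The three-way quantity the abstract starts from can be computed directly from a joint distribution. Sign conventions for μ* and Q vary in the literature, so the sketch below simply computes the co-information I(X;Y) − I(X;Y|Z) via joint entropies; the XOR example is a standard illustration, not data from the paper.

```python
import math
from itertools import product

def co_information(p):
    """Co-information I(X;Y) - I(X;Y|Z) for a joint pmf p[(x, y, z)],
    expanded in joint entropies. Negative values indicate synergy:
    conditioning on Z reveals dependence between X and Y."""
    def marg(keep):
        m = {}
        for xyz, pr in p.items():
            k = tuple(xyz[i] for i in keep)
            m[k] = m.get(k, 0.0) + pr
        return m

    h = lambda m: -sum(v * math.log2(v) for v in m.values() if v > 0)
    pxy, pxz, pyz = marg((0, 1)), marg((0, 2)), marg((1, 2))
    px, py, pz = marg((0,)), marg((1,)), marg((2,))
    return (h(px) + h(py) + h(pz)
            - h(pxy) - h(pxz) - h(pyz) + h(p))

# XOR: Z = X xor Y with independent fair bits -> pure synergy, -1 bit.
p_xor = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}
assert abs(co_information(p_xor) + 1.0) < 1e-9
```

The fact that this quantity can be negative is precisely why, as Krippendorff argued, it cannot be read as a unique amount of interaction information; the abstract's reinterpretation treats the sign as the balance between interaction information and redundancy.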
Ferreira Júnior, Washington Soares; Siqueira, Clarissa Fernanda Queiroz; de Albuquerque, Ulysses Paulino
2012-01-01
We use the model of utilitarian redundancy as a basis for research. This model provides predictions that have not been tested by other research. In this sense, we sought to investigate stem bark extraction from preferred and less-preferred species by a rural community in a Caatinga environment. In addition, we sought to explain local preferences by observing whether preferred plants have a higher content of tannins than less-preferred species. For this, we selected seven preferred species and seven less-preferred species from information obtained from semistructured interviews applied to 49 informants. Three areas of vegetation around the community were also selected, in which individuals were tagged and the diameter at ground level (DGL), the diameter at breast height (DBH), and the available and extracted bark areas were measured. Samples of bark of the species were also collected for the evaluation of tannin content, obtained by the method of radial diffusion. From the results, the preferred species showed a greater area of bark removed. However, the tannin content showed no significant differences between preferred and less-preferred plants. These results show there is a relationship between preference and use, but this preference is not related to the total tannin content.
Directory of Open Access Journals (Sweden)
Salman IJAZ
2018-05-01
Full Text Available In this paper, a methodology has been developed to address the issue of force fighting and to achieve precise position tracking of a control surface driven by two dissimilar actuators. The nonlinear dynamics of both actuators are first approximated as fractional order models. Based on the identified models, three fractional order controllers are proposed for the whole system. Two Fractional Order PID (FOPID) controllers are dedicated to improving the transient response and are designed in a position feedback configuration. In order to synchronize the actuator dynamics, a third fractional order PI controller is designed, which feeds the force compensation signal into the position feedback loop of both actuators. The Nelder-Mead (N-M) optimization technique is employed to optimally tune the controller parameters based on the proposed performance criteria. To test the proposed controllers under realistic flight conditions, an external disturbance of high amplitude, acting as an airload, is applied directly on the control surface. In addition, a disturbance signal that is a function of the system states is applied to check the robustness of the proposed controller. Simulation results on the nonlinear system model validated the performance of the proposed scheme as compared to optimal PID and high gain PID controllers. Keywords: Aerospace, Fractional order control, Model identification, Nelder-Mead optimization, Robustness
Omisore, Olatunji Mumini; Han, Shipeng; Ren, Lingxue; Zhang, Nannan; Ivanov, Kamen; Elazab, Ahmed; Wang, Lei
2017-08-01
The snake-like robot is an emerging form of serial-link manipulator with the morphologic design of biological snakes. The redundant robot can be used to assist medical experts in accessing internal organs with minimal or no invasion. Several snake-like robotic designs have been proposed for minimally invasive surgery; however, the few that were developed are yet to be fully explored for clinical procedures. This is due to a lack of capability for full-fledged spatial navigation. In rare cases where such snake-like designs are spatially flexible, there exists no inverse kinematics (IK) solution with both precise control and fast response. In this study, we proposed a non-iterative geometric method for solving the IK of the lead-module of a snake-like robot designed for therapy or ablation of abdominal tumors. The proposed method is aimed at providing accurate and fast IK solutions for given target points in the robot's workspace. n-1 virtual points (VPs) were geometrically computed and set as coordinates of intermediary joints in an n-link module. Suitable joint angles that can place the end-effector at given target points were then computed by vectorizing coordinates of the VPs, in addition to coordinates of the base point, target point, and tip of the first link in its default pose. The proposed method is applied to solve the IK of two-link and redundant four-link modules. Both modules were simulated with Robotics Toolbox in Matlab 8.3 (R2014a). Implementation results show that the proposed method can solve the IK of the spatially flexible robot with minimal error values. Furthermore, analyses of results from both modules show that the geometric method can reach 99.21 and 88.61% of points in their workspaces, respectively, with an error threshold of 1 mm. The proposed method is non-iterative and has a maximum execution time of 0.009 s. This paper focuses on solving the IK problem of a spatially flexible robot which is part of a developmental project for abdominal
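The virtual-point construction itself is not reproduced here, but the spirit of a non-iterative, geometric IK solution is easy to see in the simplest case: a planar two-link arm solved in closed form by the law of cosines. A hedged sketch (Python; link lengths and target are arbitrary choices):

```python
from math import acos, atan2, cos, sin

def two_link_ik(x, y, l1, l2, elbow_up=True):
    """Closed-form (non-iterative) IK for a planar two-link arm:
    returns joint angles (q1, q2) placing the tip at (x, y)."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)  # law of cosines
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    q2 = acos(c2) * (1 if elbow_up else -1)
    q1 = atan2(y, x) - atan2(l2 * sin(q2), l1 + l2 * cos(q2))
    return q1, q2

def two_link_fk(q1, q2, l1, l2):
    """Forward kinematics used to check the IK solution."""
    x = l1 * cos(q1) + l2 * cos(q1 + q2)
    y = l1 * sin(q1) + l2 * sin(q1 + q2)
    return x, y

q1, q2 = two_link_ik(1.2, 0.5, 1.0, 1.0)
x, y = two_link_fk(q1, q2, 1.0, 1.0)
print(abs(x - 1.2) < 1e-9 and abs(y - 0.5) < 1e-9)  # True
```

Like the paper's method, the solution involves no iteration, which is what gives the constant, sub-millisecond execution time per target.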
International Nuclear Information System (INIS)
Kim, Heungseob; Kim, Pansoo
2017-01-01
To maximize the reliability of a system, the traditional reliability–redundancy allocation problem (RRAP) determines the component reliability and level of redundancy for each subsystem. This paper proposes an advanced RRAP that also considers the optimal redundancy strategy, either active or cold standby. In addition, new examples are presented for it. Furthermore, the exact reliability function for a cold standby redundant subsystem with an imperfect detector/switch is suggested, and is expected to replace the previous approximating model that has been used in most related studies. A parallel genetic algorithm for solving the RRAP as a mixed-integer nonlinear programming model is presented, and its performance is compared with those of previous studies by using numerical examples on three benchmark problems. - Highlights: • Optimal strategy is proposed to solve reliability redundancy allocation problem. • The redundancy strategy uses parallel genetic algorithm. • Improved reliability function for a cold standby subsystem is suggested. • Proposed redundancy strategy enhances the system reliability.
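For orientation, the textbook reliability functions for the two redundancy strategies compared above can be written down for n identical exponential components. Note that this sketch assumes a perfect detector/switch for the cold standby case, which is exactly the approximation the paper replaces with an exact function:

```python
from math import exp, factorial

def active_parallel(lam, t, n):
    """Reliability of n identical exponential components in active
    (hot) parallel redundancy: the system survives if any one does."""
    return 1.0 - (1.0 - exp(-lam * t)) ** n

def cold_standby(lam, t, n):
    """Reliability of n-unit cold standby with a perfect switch:
    failures form a Poisson process, so survival means fewer than
    n failures by time t."""
    return exp(-lam * t) * sum((lam * t) ** k / factorial(k) for k in range(n))

lam, t, n = 0.01, 100.0, 3
print(active_parallel(lam, t, n) < cold_standby(lam, t, n))  # True
```

With λt = 1 and n = 3, cold standby survives with probability about 0.92 versus about 0.75 for active parallel, which is why the choice of strategy belongs inside the optimization rather than being fixed in advance.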
International Nuclear Information System (INIS)
Kong, Xiangyong; Gao, Liqun; Ouyang, Haibin; Li, Steven
2015-01-01
In most research on the redundancy allocation problem (RAP), the redundancy strategy for each subsystem is assumed to be predetermined and fixed. This paper focuses on a specific RAP with multiple strategy choices (RAP-MSC), in which both active redundancy and cold standby redundancy can be selected as an additional decision variable for individual subsystems. To do so, the component type, redundancy strategy and redundancy level for each subsystem should be chosen subject to the system constraints appropriately such that the system reliability is maximized. Meanwhile, imperfect switching for cold standby redundancy is considered and a k-Erlang distribution is introduced to model component time-to-failure as well. Given the importance and complexity of RAP-MSC, we propose a new efficient simplified version of particle swarm optimization (SPSO) to solve such NP-hard problems. In this method, a new position updating scheme without velocity is presented with stochastic disturbance and a low probability. Moreover, it is compared with several well-known PSO variants and other state-of-the-art approaches in the literature to evaluate its performance. The experiment results demonstrate the superiority of SPSO as an alternative for solving the RAP-MSC. - Highlights: • A more realistic RAP form with multiple strategy choices is the focus. • Redundancy strategies are to be selected rather than fixed in general RAP. • A new simplified particle swarm optimization is proposed. • Higher reliabilities are achieved than the state-of-the-art approaches.
Distributed redundancy and robustness in complex systems
Randles, Martin
2011-03-01
The uptake and increasing prevalence of Web 2.0 applications, promoting new large-scale and complex systems such as Cloud computing and the emerging Internet of Services/Things, requires tools and techniques to analyse and model methods to ensure the robustness of these new systems. This paper reports on assessing and improving complex system resilience using distributed redundancy, termed degeneracy in biological systems, to endow large-scale complicated computer systems with the same robustness that emerges in complex biological and natural systems. However, in order to promote an evolutionary approach, through emergent self-organisation, it is necessary to specify the systems in an 'open-ended' manner where not all states of the system are prescribed at design-time. In particular an observer system is used to select robust topologies, within system components, based on a measurement of the first non-zero eigenvalue in the Laplacian spectrum of the components' network graphs; also known as the algebraic connectivity. It is shown, through experimentation on a simulation, that increasing the average algebraic connectivity across the components, in a network, leads to an increase in the variety of individual components termed distributed redundancy; the capacity for structurally distinct components to perform an identical function in a particular context. The results are applied to a specific application where active clustering of like services is used to aid load balancing in a highly distributed network. Using the described procedure is shown to improve performance and distribute redundancy. © 2010 Elsevier Inc.
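The topology measure used by the observer system, the algebraic connectivity, is simply the second-smallest eigenvalue of the graph Laplacian and is straightforward to compute. A minimal sketch (Python with NumPy; the example graphs are illustrative):

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A,
    the robustness measure the observer system selects on."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    eigvals = np.sort(np.linalg.eigvalsh(lap))
    return eigvals[1]

# Path graph 1-2-3 versus a triangle: adding the closing edge
# raises the algebraic connectivity (here from 1.0 to 3.0).
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
tri = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(algebraic_connectivity(path) < algebraic_connectivity(tri))  # True
```

A zero second eigenvalue would mean a disconnected graph; larger values indicate topologies that are harder to partition, matching the paper's use of the measure as a robustness proxy.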
Redundant correlation effect on personalized recommendation
Qiu, Tian; Han, Teng-Yue; Zhong, Li-Xin; Zhang, Zi-Ke; Chen, Guang
2014-02-01
The high-order redundant correlation effect is investigated for a hybrid algorithm of heat conduction and mass diffusion (HHM), through both heat conduction biased (HCB) and mass diffusion biased (MDB) correlation redundancy elimination processes. The HCB and MDB algorithms do not introduce any additional tunable parameters, but keep the simple character of the original HHM. Based on two empirical datasets, the Netflix and MovieLens, the HCB and MDB are found to show better recommendation accuracy for both the overall objects and the cold objects than the HHM algorithm. Our work suggests that properly eliminating the high-order redundant correlations can provide a simple and effective approach to accurate recommendation.
Palpebral redundancy from hypothyroidism.
Wortsman, J; Wavak, P
1980-01-01
A patient is described with disabling palpebral edema. Primary hypothyroidism had been previously diagnosed and treated. Testing of thyroid function revealed persistence of the hypothyroidism. Treatment with L-thyroxine produced normalization of the biochemical parameters and resolution of the palpebral edema. The search for hypothyroidism in patients with palpebral redundancy is emphasized.
International Nuclear Information System (INIS)
Unseren, M.A.
1993-04-01
The report discusses the orientation tracking control problem for a kinematically redundant, autonomous manipulator moving in a three dimensional workspace. The orientation error is derived using the normalized quaternion error method of Ickes, the Luh, Walker, and Paul error method, and a method suggested here utilizing the Rodrigues parameters, all of which are expressed in terms of normalized quaternions. The analytical time derivatives of the orientation errors are determined. The latter, along with the translational velocity error, form a closed loop kinematic velocity model of the manipulator using normalized quaternion and translational position feedback. An analysis of the singularities associated with expressing the models in a form suitable for solving the inverse kinematics problem is given. Two redundancy resolution algorithms originally developed using an open loop kinematic velocity model of the manipulator are extended to properly take into account the orientation tracking control problem. This report furnishes the necessary mathematical framework required prior to experimental implementation of the orientation tracking control schemes on the seven-axis CESARm research manipulator or on the seven-axis Robotics Research K1207i dexterous manipulator, the latter of which is to be delivered to the Oak Ridge National Laboratory in 1993.
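As a small illustration of the quaternion machinery involved, one common normalized-quaternion orientation-error convention (not necessarily the exact formulation of Ickes or of Luh, Walker, and Paul) multiplies the desired quaternion by the conjugate of the current one:

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def quat_conj(q):
    """Conjugate, which inverts a unit quaternion."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def orientation_error(q_des, q_cur):
    """Error quaternion q_e = q_des * conj(q_cur); its vector part is a
    common orientation feedback signal (zero iff orientations coincide)."""
    return quat_mul(q_des, quat_conj(q_cur))

q = np.array([np.cos(0.3), np.sin(0.3), 0.0, 0.0])  # rotation about x
e = orientation_error(q, q)
print(np.allclose(e, [1, 0, 0, 0]))  # True: identical orientations
```

The vector part of the error quaternion plays the role of the orientation error signal in the closed-loop kinematic velocity model described above.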
Input relegation control for gross motion of a kinematically redundant manipulator
Energy Technology Data Exchange (ETDEWEB)
Unseren, M.A.
1992-10-01
This report proposes a method for resolving the kinematic redundancy of a serial link manipulator moving in a three-dimensional workspace. The underspecified problem of solving for the joint velocities based on the classical kinematic velocity model is transformed into a well-specified problem. This is accomplished by augmenting the original model with additional equations which relate a new vector variable quantifying the redundant degrees of freedom (DOF) to the joint velocities. The resulting augmented system yields a well specified solution for the joint velocities. Methods for selecting the redundant DOF quantifying variable and the transformation matrix relating it to the joint velocities are presented so as to obtain a minimum Euclidean norm solution for the joint velocities. The approach is also applied to the problem of resolving the kinematic redundancy at the acceleration level. Upon resolving the kinematic redundancy, a rigid body dynamical model governing the gross motion of the manipulator is derived. A control architecture is suggested which according to the model, decouples the Cartesian space DOF and the redundant DOF.
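The augmentation idea can be sketched numerically: stacking the task Jacobian with rows spanning its null space yields a square, well-specified system, and setting the redundant-variable rates to zero recovers the minimum Euclidean norm solution. A hedged illustration (Python with NumPy; the arm dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((3, 5))   # hypothetical 3-D task, 5-joint arm
xdot = rng.standard_normal(3)     # commanded task-space velocity

# Null-space basis (rows) from the SVD: J has a 2-D redundant subspace.
_, _, Vt = np.linalg.svd(J)
N = Vt[3:]                        # 2 x 5, rows orthogonal to the rows of J

# Augmented square system: stack the task equations with equations
# assigning rates v to the redundant degrees of freedom.
J_aug = np.vstack([J, N])
v = np.zeros(2)                   # zero redundant rates -> minimum norm
qdot = np.linalg.solve(J_aug, np.concatenate([xdot, v]))

# The task is satisfied, and the solution matches the pseudoinverse one.
print(np.allclose(J @ qdot, xdot), np.allclose(qdot, np.linalg.pinv(J) @ xdot))
```

Choosing nonzero rates v instead would steer the redundant degrees of freedom (for posture or obstacle criteria) while still tracking the task, which is the point of making the system well specified.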
Reliability Analysis Multiple Redundancy Controller for Nuclear Safety Systems
International Nuclear Information System (INIS)
Son, Gwangseop; Kim, Donghoon; Son, Choulwoong
2013-01-01
The multiple redundancy controller (MRC) for nuclear safety systems is configured for multiple modular redundancy (MMR), composed of dual modular redundancy (DMR) and triple modular redundancy (TMR). In this paper, the architecture of the MRC is briefly described, a Markov model for the MRC architecture is developed, and the reliability and Mean Time To Failure (MTTF) are analyzed based on the model. From the reliability analyses, it is obtained that the failure rate of each module in the MRC should be less than 2 × 10⁻⁴/hour and that the average MTTF increase rate depending on the FCF increment, i.e. ΔMTTF/ΔFCF, is 4 months/0.1.
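As context for the Markov analysis, the classic memoryless TMR model (identical modules with failure rate λ, perfect voter, no repair) has a closed-form reliability and MTTF. This sketch is a simplification of the paper's model, shown only to fix ideas:

```python
from math import exp

def tmr_reliability(lam, t):
    """2-out-of-3 majority-voting (TMR) reliability with identical
    exponential modules of failure rate lam (perfect voter assumed)."""
    r = exp(-lam * t)
    return 3 * r**2 - 2 * r**3

def tmr_mttf(lam):
    """Integral of tmr_reliability over t: 3/(2*lam) - 2/(3*lam) = 5/(6*lam)."""
    return 5.0 / (6.0 * lam)

lam = 2e-4  # failure rate per hour, the bound cited in the abstract
print(round(tmr_mttf(lam), 1))  # 4166.7 hours
```

Note that the TMR MTTF of 5/(6λ) is actually below a single module's 1/λ even though mission reliability at short times is far higher; this standard trade-off is why repair rates and multi-level (DMR/TMR) structure enter the full Markov model.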
Learners misperceive the benefits of redundant text in multimedia learning.
Fenesi, Barbara; Kim, Joseph A
2014-01-01
Research on metacognition has consistently demonstrated that learners fail to endorse instructional designs that produce benefits to memory, and often prefer designs that actually impair comprehension. Unlike previous studies in which learners were only exposed to a single multimedia design, the current study used a within-subjects approach to examine whether exposure to both redundant text and non-redundant text multimedia presentations improved learners' metacognitive judgments about presentation styles that promote better understanding. A redundant text multimedia presentation containing narration paired with verbatim on-screen text (Redundant) was contrasted with two non-redundant text multimedia presentations: (1) narration paired with images and minimal text (Complementary) or (2) narration paired with minimal text (Sparse). Learners watched presentation pairs of either Redundant + Complementary, or Redundant + Sparse. Results demonstrate that Complementary and Sparse presentations produced highest overall performance on the final comprehension assessment, but the Redundant presentation produced highest perceived understanding and engagement ratings. These findings suggest that learners misperceive the benefits of redundant text, even after direct exposure to a non-redundant, effective presentation.
Directory of Open Access Journals (Sweden)
GILLES E. GIGNAC
2009-03-01
Full Text Available Self-report measures of emotional intelligence (EI) have been criticized for not being associated with unique validity, independently of comprehensive measures of personality such as the NEO PI-R. In this investigation, the issue of unique validity was re-directed at personality as measured by the facets of the NEO PI-R. Specifically, based on three samples, the personality facet of Depression within the NEO PI-R was found to be so substantially predicted by ten other NEO PI-R facets as to suggest construct redundancy within the NEO PI-R (i.e., R = .93, R = .99, R = .96). Because mixed-models of EI tend to be associated with clearer construct boundaries than personality, it is suggested that EI may be associated with some scientific utility (i.e., ‘incremental coherence’), even in the total absence of any empirically demonstrable unique validity.
International Nuclear Information System (INIS)
Shipler, D.B.; Napier, B.A.
1992-07-01
This report details the conceptual approaches to be used in calculating radiation doses to individuals throughout the various periods of operations at the Hanford Site. The report considers the major environmental transport pathways--atmospheric, surface water, and ground water--and projects an appropriate modeling technique for each. The modeling sequence chosen for each pathway depends on the available data on doses, the degree of confidence justified by such existing data, and the level of sophistication deemed appropriate for the particular pathway and time period being considered.
Structural redundance of NPPs and diagnostics
International Nuclear Information System (INIS)
Znyshev, V.V.; Sabaev, E.F.
1988-01-01
A new approach to functional diagnosis of the NPP state is suggested, based on structural redundancy: in the majority of the facilities there are elements that are identical in structure and operating conditions. A deviation from zero beyond a given value in one parameter, measured across such identical elements, is an indicator of a failed element and a signal for diagnostic analysis.
Interaction control of a redundant mobile manipulator
International Nuclear Information System (INIS)
Chung, J.H.; Velinsky, S.A.; Hess, R.A.
1998-01-01
This paper discusses the modeling and control of a spatial mobile manipulator that consists of a robotic manipulator mounted on a wheeled mobile platform. The Lagrange-d'Alembert formulation is used to obtain a concise description of the dynamics of the system, which is subject to nonholonomic constraints. The complexity of the model is increased by introducing kinematic redundancy, which is created when a multilinked manipulator is used. The kinematic redundancy is resolved by decomposing the mobile manipulator into two subsystems: the mobile platform and the manipulator. The redundancy resolution scheme employs a nonlinear interaction-control algorithm, which is developed and applied to coordinate the two subsystems' controllers. The subsystem controllers are independently designed, based on each subsystem's dynamic characteristics. Simulation results show the promise of the developed algorithm
Self-Healing Networks: Redundancy and Structure
Quattrociocchi, Walter; Caldarelli, Guido; Scala, Antonio
2014-01-01
We introduce the concept of self-healing in the field of complex networks modelling; in particular, self-healing capabilities are implemented through distributed communication protocols that exploit redundant links to recover the connectivity of the system. We then analyze the effect of the level of redundancy on the resilience to multiple failures; in particular, we measure the fraction of nodes still served for increasing levels of network damages. Finally, we study the effects of redundancy under different connectivity patterns—from planar grids, to small-world, up to scale-free networks—on healing performances. Small-world topologies show that introducing some long-range connections in planar grids greatly enhances the resilience to multiple failures with performances comparable to the case of the most resilient (and least realistic) scale-free structures. Obvious applications of self-healing are in the important field of infrastructural networks like gas, power, water, oil distribution systems. PMID:24533065
Detection of sensor failures in nuclear plants using analytic redundancy
International Nuclear Information System (INIS)
Kitamura, M.
1980-01-01
A method for on-line, nonperturbative detection and identification of sensor failures in nuclear power plants was studied to determine its feasibility. This method is called analytic redundancy, or functional redundancy. Sensor failure has traditionally been detected by comparing multiple signals from redundant sensors, such as in two-out-of-three logic. In analytic redundancy, with the help of an assumed model of the physical system, the signals from a set of sensors are processed to reproduce the signals from all system sensors
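A minimal residual check conveys the idea: one sensor's reading is reconstructed from other signals through an assumed plant model and flagged when the disagreement exceeds a threshold. The thermal model and numbers below are hypothetical, purely for illustration:

```python
import numpy as np

def residual(measured, model_prediction, threshold):
    """Analytic redundancy: flag a sensor when its reading disagrees
    with the value reconstructed from other sensors via a model."""
    return np.abs(measured - model_prediction) > threshold

# Hypothetical steady-state model: outlet temperature rises linearly
# with power above the inlet temperature (coefficients are invented).
k, inlet, power = 0.05, 280.0, 500.0
predicted_outlet = inlet + k * power           # 305.0

print(residual(305.2, predicted_outlet, 1.0))  # healthy sensor -> False
print(residual(312.0, predicted_outlet, 1.0))  # drifted sensor -> True
```

Unlike two-out-of-three hardware voting, no duplicate sensor is needed: the model plays the role of the redundant channel, which is the method's appeal for existing plants.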
Analysis of singularity in redundant manipulators
International Nuclear Information System (INIS)
Watanabe, Koichi
2000-03-01
In the analysis of arm positions and configurations of redundant manipulators, the singularity avoidance problems are important themes. This report presents singularity avoidance computations of a 7 DOF manipulator by using a computer code based on human-arm models. The behavior of the arm escaping from the singular point can be identified satisfactorily through the use of 3-D plotting tools. (author)
Motion control of musculoskeletal systems with redundancy.
Park, Hyunjoo; Durand, Dominique M
2008-12-01
Motion control of musculoskeletal systems for functional electrical stimulation (FES) is a challenging problem due to the inherent complexity of the systems. These include being highly nonlinear, strongly coupled, time-varying, time-delayed, and redundant. The redundancy in particular makes it difficult to find an inverse model of the system for control purposes. We have developed a control system for multiple input multiple output (MIMO) redundant musculoskeletal systems with little prior information. The proposed method separates the steady-state properties from the dynamic properties. The dynamic control uses a steady-state inverse model and is implemented with both a PID controller for disturbance rejection and an artificial neural network (ANN) feedforward controller for fast trajectory tracking. A mechanism to control the sum of the muscle excitation levels is also included. To test the performance of the proposed control system, a two degree of freedom ankle-subtalar joint model with eight muscles was used. The simulation results show that separation of steady-state and dynamic control allow small output tracking errors for different reference trajectories such as pseudo-step, sinusoidal and filtered random signals. The proposed control method also demonstrated robustness against system parameter and controller parameter variations. A possible application of this control algorithm is FES control using multiple contact cuff electrodes where mathematical modeling is not feasible and the redundancy makes the control of dynamic movement difficult.
Software engineering : redundancy is key
Brand, van den M.G.J.; Groote, J.F.
2015-01-01
Software engineers are humans and so they make lots of mistakes. Typically 1 out of 10 to 100 tasks go wrong. The only way to avoid these mistakes is to introduce redundancy in the software engineering process. This article is a plea to consciously introduce several levels of redundancy for each
Predicting genome-wide redundancy using machine learning
Directory of Open Access Journals (Sweden)
Shasha Dennis E
2010-11-01
Full Text Available Abstract Background Gene duplication can lead to genetic redundancy, which masks the function of mutated genes in genetic analyses. Methods to increase sensitivity in identifying genetic redundancy can improve the efficiency of reverse genetics and lend insights into the evolutionary outcomes of gene duplication. Machine learning techniques are well suited to classifying gene family members into redundant and non-redundant gene pairs in model species where sufficient genetic and genomic data is available, such as Arabidopsis thaliana, the test case used here. Results Machine learning techniques that combine multiple attributes led to a dramatic improvement in predicting genetic redundancy over single trait classifiers alone, such as BLAST E-values or expression correlation. In withholding analysis, one of the methods used here, Support Vector Machines, was two-fold more precise than single attribute classifiers, reaching a level where the majority of redundant calls were correctly labeled. Using this higher confidence in identifying redundancy, machine learning predicts that about half of all genes in Arabidopsis showed the signature of predicted redundancy with at least one but typically fewer than three other family members. Interestingly, a large proportion of predicted redundant gene pairs were relatively old duplications (e.g., Ks > 1), suggesting that redundancy is stable over long evolutionary periods. Conclusions Machine learning predicts that most genes will have a functionally redundant paralog but will exhibit redundancy with relatively few genes within a family. The predictions and gene pair attributes for Arabidopsis provide a new resource for research in genetics and genome evolution. These techniques can now be applied to other organisms.
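The study's point that combining attributes beats any single-trait classifier can be illustrated with a toy stand-in. The sketch below uses a nearest-centroid classifier rather than the Support Vector Machines of the paper, and the gene-pair attribute values are invented; neither attribute alone separates the classes, but the combination does:

```python
import numpy as np

def fit_centroids(X, y):
    """Nearest-centroid stand-in for the paper's SVM: one prototype per
    class in the combined attribute space."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    """Assign each sample to the class of the nearest prototype."""
    classes = list(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1)
                      for c in classes])
    return np.array([classes[i] for i in dists.argmin(axis=0)])

# Invented gene pairs: [sequence similarity, expression correlation];
# label 1 = redundant. Each single attribute overlaps across classes.
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.9, 0.1], [0.1, 0.8], [0.2, 0.1]])
y = np.array([1, 1, 0, 0, 0])
model = fit_centroids(X, y)
print((predict(model, X) == y).all())  # True: combined attributes separate
```

The same pattern, weak individual attributes becoming jointly informative, is what drives the two-fold precision gain the authors report for multi-attribute classifiers.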
Obstacle avoidance for kinematically redundant robots using an adaptive fuzzy logic algorithm
International Nuclear Information System (INIS)
Beheshti, M.T.H.; Tehrani, A.K.
1999-05-01
In this paper, the Adaptive Fuzzy Logic approach for solving the inverse kinematics of redundant robots in an environment with obstacles is presented. The obstacles are modeled as convex bodies. A fuzzy rule base that is updated via an adaptive law is used to solve the inverse kinematic problem. Additional rules have been introduced to take care of the obstacle avoidance problem. The proposed method has advantages such as high accuracy, simplicity of computations and generality for all redundant robots. Simulation results illustrate much better tracking performance than the dynamics-based solution for a given trajectory in Cartesian space, while guaranteeing a collision-free trajectory and observation of mechanical joint limits.
Lima, José; Pereira, Ana I.; Costa, Paulo; Pinto, Andry; Costa, Pedro
2017-07-01
This paper describes an optimization procedure for a robot with 12 degrees of freedom that avoids the inverse kinematics problem, which is a hard task for this type of robot manipulator. This robot can be used for pick and place tasks in complex designs. Combining an accurate and fast direct kinematics model with optimization strategies, it is possible to achieve the joint angles for a desired end-effector position and orientation. The stretched simulated annealing algorithm and the genetic algorithm were used as optimization methods. The solutions found were validated using data originated by a real and by a simulated robot formed by 12 servomotors with a gripper.
Redundant interferometric calibration as a complex optimization problem
Grobler, T. L.; Bernardi, G.; Kenyon, J. S.; Parsons, A. R.; Smirnov, O. M.
2018-05-01
Observations of the redshifted 21 cm line from the epoch of reionization have recently motivated the construction of low-frequency radio arrays with highly redundant configurations. These configurations provide an alternative calibration strategy - 'redundant calibration' - and boost sensitivity on specific spatial scales. In this paper, we formulate calibration of redundant interferometric arrays as a complex optimization problem. We solve this optimization problem via the Levenberg-Marquardt algorithm. This calibration approach is more robust to initial conditions than current algorithms and, by leveraging an approximate matrix inversion, allows for further optimization and an efficient implementation ('redundant STEFCAL'). We also investigated using the preconditioned conjugate gradient method as an alternative to the approximate matrix inverse, but found that its computational performance is not competitive with respect to 'redundant STEFCAL'. The efficient implementation of this new algorithm is made publicly available.
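A linearized, amplitude-only variant of redundant calibration ('logcal') is easy to sketch: taking logarithms turns the gain-visibility products into a linear system with one unknown per antenna gain and one per redundant baseline length. The paper solves the full complex problem with Levenberg-Marquardt; the toy below (NumPy, invented array geometry) only shows the redundancy bookkeeping:

```python
import numpy as np

# Toy redundant array: 4 antennas on a line, so baselines of length
# 1, 2, 3 are redundant groups sharing one true visibility each.
rng = np.random.default_rng(1)
ants = 4
lengths = {(i, j): j - i for i in range(ants) for j in range(i + 1, ants)}
true_g = rng.uniform(0.8, 1.2, ants)            # per-antenna gains
true_y = {1: 2.0, 2: 1.5, 3: 0.7}               # per-group visibilities
obs = {b: true_g[b[0]] * true_g[b[1]] * true_y[l] for b, l in lengths.items()}

# Unknowns: log g_0..g_3 and log y_1..y_3. Each baseline contributes
# one equation: log v_ij = log g_i + log g_j + log y_len.
A = np.zeros((len(obs), ants + len(true_y)))
rhs = np.zeros(len(obs))
for row, (b, l) in enumerate(lengths.items()):
    A[row, b[0]] = A[row, b[1]] = 1.0
    A[row, ants + l - 1] = 1.0
    rhs[row] = np.log(obs[b])

sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
pred = np.exp(A @ sol)
print(np.allclose(pred, np.exp(rhs)))  # True: observations reproduced
```

The system is rank-deficient (an overall gain scale is unconstrained), so the check compares reproduced observations rather than raw parameters; real redundant calibration must fix such degeneracies explicitly.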
Prioritising Redundant Network Component for HOWBAN Survivability Using FMEA
Directory of Open Access Journals (Sweden)
Cheong Loong Chan
2017-01-01
Full Text Available Deploying redundant components is the ubiquitous approach to improve the reliability and survivability of a hybrid optical wireless broadband access network (HOWBAN). Much work has been done to study the cost and impact of deploying redundant components in the network, but no formal tools have been used to enable the evaluation and the decision to prioritise the deployment of redundant facilities in the network. In this paper we show how the FMEA (Failure Mode and Effects Analysis) technique can be adapted to identify the critical segments in the network and prioritise the redundant components to be deployed to ensure network survivability. Our results showed that priority must be given to redundancy that mitigates grid power outages, particularly in less developed countries poised for rapid expansion in broadband services.
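FMEA prioritisation typically reduces to ranking failure modes by the Risk Priority Number, RPN = severity × occurrence × detection. The segment names and scores below are illustrative, not taken from the paper:

```python
# FMEA-style prioritisation of network segments by Risk Priority Number.
# Each entry: (segment, severity, occurrence, detection), scored 1-10;
# higher detection score means the failure is harder to detect.
segments = [
    ("grid power supply", 9, 7, 4),
    ("fibre feeder link", 8, 3, 3),
    ("wireless mesh node", 5, 6, 2),
]

# Rank by RPN, highest first: redundancy budget goes to the top entries.
ranked = sorted(segments, key=lambda s: s[1] * s[2] * s[3], reverse=True)
for name, sev, occ, det in ranked:
    print(f"{name}: RPN={sev * occ * det}")
```

With these illustrative scores the grid power segment ranks first (RPN 252), consistent with the paper's finding that power-outage mitigation deserves priority.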
Compliant behaviour of redundant robot arm - experiments with null-space
Directory of Open Access Journals (Sweden)
Petrović Petar B.
2015-01-01
This paper presents theoretical and experimental aspects of Jacobian null-space use in kinematically redundant robots for achieving kinetostatically consistent control of their compliant behavior. When the stiffness of the robot endpoint is dominantly influenced by the compliance of the robot joints, the generalized stiffness matrix can be mapped into joint space using an appropriate congruent transformation. The actuation stiffness matrix obtained by this transformation is generally non-diagonal. Off-diagonal elements of the actuation matrix can be generated only by redundant actuation (polyarticular actuators), but such actuation is very difficult to realize in practical technical systems. The approach to solving this problem proposed in this paper is based on the use of kinematic redundancy and the null space of the Jacobian matrix. The developed analytical model was evaluated numerically on a minimal redundant robot with one redundant d.o.f., and experimentally on a 7-d.o.f. Yaskawa SIA 10F robot arm. [Projekat Ministarstva nauke Republike Srbije, br. TR35007]
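The two ingredients the abstract mentions can be sketched with NumPy: a joint velocity projected through the null-space projector N = I − J⁺J produces no end-point motion (self-motion), and a task-space stiffness maps into joint space by the congruent transformation Kq = JᵀKxJ, which is generally non-diagonal. The 3-joint planar Jacobian and stiffness values below are illustrative assumptions, not the SIA 10F model.

```python
import numpy as np

# Hypothetical Jacobian of a planar arm: 2 task d.o.f., 3 joints (1 redundant d.o.f.)
J = np.array([[1.0, 0.8, 0.3],
              [0.0, 0.5, 0.9]])

N = np.eye(3) - np.linalg.pinv(J) @ J        # projector onto the Jacobian null space
qdot_null = N @ np.array([1.0, -1.0, 0.5])   # self-motion candidate
print(J @ qdot_null)                          # ~ [0, 0]: no end-point velocity

# Congruent mapping of a diagonal task-space stiffness into joint space:
Kx = np.diag([100.0, 50.0])
Kq = J.T @ Kx @ J                             # generally non-diagonal
```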
Trophic redundancy reduces vulnerability to extinction cascades.
Sanders, Dirk; Thébault, Elisa; Kehoe, Rachel; Frank van Veen, F J
2018-03-06
Current species extinction rates are at unprecedentedly high levels. While human activities can be the direct cause of some extinctions, it is becoming increasingly clear that species extinctions themselves can be the cause of further extinctions, since species affect each other through the network of ecological interactions among them. There is concern that the simplification of ecosystems, due to the loss of species and ecological interactions, increases their vulnerability to such secondary extinctions. It is predicted that more complex food webs will be less vulnerable to secondary extinctions due to greater trophic redundancy that can buffer against the effects of species loss. Here, we demonstrate in a field experiment with replicated plant-insect communities that the probability of secondary extinctions is indeed smaller in food webs that include trophic redundancy. Harvesting one species of parasitoid wasp led to secondary extinctions of other, indirectly linked, species at the same trophic level. This effect was markedly stronger in simple communities than for the same species within a more complex food web. We show that this is due to functional redundancy in the more complex food webs and confirm this mechanism with a food-web simulation model that highlights the importance of the presence and strength of trophic links providing redundancy to those links that were lost. Our results demonstrate that biodiversity loss, leading to a reduction in redundant interactions, can increase the vulnerability of ecosystems to secondary extinctions, which, when they occur, can then lead to further simplification and run-away extinction cascades. Copyright © 2018 the Author(s). Published by PNAS.
Directory of Open Access Journals (Sweden)
Paula eRubio-Fernández
2016-02-01
Color adjectives tend to be used redundantly in referential communication. I propose that redundant color adjectives are often intended to exploit a color contrast in the visual context and hence facilitate object identification, despite not being necessary to establish unique reference. Two language-production experiments investigated two types of factors that may affect the use of redundant color adjectives: factors related to the efficiency of color in the visual context and factors related to the semantic category of the noun. The results of Experiment 1 confirmed that people produce redundant color adjectives when color may facilitate object recognition; e.g., they do so more often in polychrome displays than in monochrome displays, and more often in English (pre-nominal position) than in Spanish (post-nominal position). Redundant color adjectives are also used when color is a central property of the object category; e.g., people referred to the color of clothes more often than to the color of geometrical figures (Experiment 1), and they overspecified atypical colors more often than variable and stereotypical colors (Experiment 2). These results are relevant for pragmatic models of referential communication based on Gricean pragmatics and informativeness. An alternative analysis is proposed, which focuses on the efficiency and pertinence of color in a given referential situation.
Parameter identifiability and redundancy: theoretical considerations.
Directory of Open Access Journals (Sweden)
Mark P Little
BACKGROUND: Models for complex biological systems may involve a large number of parameters. It may well be that some of these parameters cannot be derived from observed data via regression techniques. Such parameters are said to be unidentifiable, the remaining parameters being identifiable. Closely related to this idea is that of redundancy, i.e. that a set of parameters can be expressed in terms of some smaller set. Before data are analysed it is critical to determine which model parameters are identifiable or redundant to avoid ill-defined and poorly convergent regression. METHODOLOGY/PRINCIPAL FINDINGS: In this paper we outline general considerations on parameter identifiability, and introduce the notions of weak local identifiability and gradient weak local identifiability. These are based on local properties of the likelihood, in particular the rank of the Hessian matrix. We relate these to the notions of parameter identifiability and redundancy previously introduced by Rothenberg (Econometrica 39 (1971) 577-591) and Catchpole and Morgan (Biometrika 84 (1997) 187-196). Within the widely used exponential family, parameter irredundancy, local identifiability, gradient weak local identifiability and weak local identifiability are shown to be largely equivalent. We consider applications to a recently developed class of cancer models of Little and Wright (Math Biosciences 183 (2003) 111-134) and Little et al. (J Theoret Biol 254 (2008) 229-238) that generalize a large number of other recently used quasi-biological cancer models. CONCLUSIONS/SIGNIFICANCE: We have shown that the previously developed concepts of parameter local identifiability and redundancy are closely related to the apparently weaker properties of weak local identifiability and gradient weak local identifiability; within the widely used exponential family these concepts largely coincide.
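A numerical flavour of these rank conditions can be given with a deliberately over-parameterised toy model (an assumption for illustration, not one of the paper's cancer models): in f(t; a, b, c) = (a·b)t + c, the parameters a and b enter only through their product, and the rank deficiency of the sensitivity matrix exposes the redundant parameter direction.

```python
import numpy as np

# Toy model y = (a*b)*t + c: a and b enter only through their product,
# so one parameter direction is redundant (unidentifiable).
def sensitivity(params, t):
    a, b, c = params
    # columns: dy/da, dy/db, dy/dc at the sample points t
    return np.stack([b * t, a * t, np.ones_like(t)], axis=1)

t = np.linspace(0.0, 1.0, 10)
S = sensitivity(np.array([2.0, 3.0, 1.0]), t)
print(np.linalg.matrix_rank(S))   # 2 < 3 parameters: one redundant direction
```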
A comparison of Heuristic method and Llewellyn’s rules for identification of redundant constraints
Estiningsih, Y.; Farikhin; Tjahjana, R. H.
2018-03-01
An important technique in linear programming is the modelling and solving of practical optimization problems. Redundant constraints are considered for their effects on general linear programming problems. Identifying and removing redundant constraints avoids the unnecessary calculations associated with solving an associated linear programming problem. Many methods have been proposed for the identification of redundant constraints. This paper presents a comparison of the Heuristic method and Llewellyn's rules for the identification of redundant constraints.
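Neither method is spelled out in the abstract, but the underlying redundancy test can be sketched directly: a constraint aᵀx ≤ b is redundant when maximising aᵀx subject to the remaining constraints cannot exceed b. The brute-force 2-D vertex enumeration below is an illustrative sketch (it assumes a bounded feasible region), not either of the compared methods.

```python
from itertools import combinations

def solve2(c1, c2):
    """Intersection of two lines a.x = b; returns None if parallel."""
    (a1, b1), (a2, b2) = c1, c2
    det = a1[0] * a2[1] - a1[1] * a2[0]
    if abs(det) < 1e-12:
        return None
    return ((b1 * a2[1] - b2 * a1[1]) / det,
            (a1[0] * b2 - a2[0] * b1) / det)

def is_redundant(constraints, k, tol=1e-9):
    """Constraint k (a.x <= b) is redundant if dropping it cannot enlarge the
    feasible region: the max of a_k.x over the others stays <= b_k.
    Assumes a bounded 2-D region, so the max is attained at a vertex."""
    others = [c for i, c in enumerate(constraints) if i != k]
    ak, bk = constraints[k]
    best = None
    for c1, c2 in combinations(others, 2):
        v = solve2(c1, c2)
        if v is None:
            continue
        if all(a[0] * v[0] + a[1] * v[1] <= b + tol for a, b in others):
            val = ak[0] * v[0] + ak[1] * v[1]
            best = val if best is None else max(best, val)
    return best is not None and best <= bk + tol

# x <= 4 is redundant given x <= 2, y <= 2, x + y <= 3, x >= 0, y >= 0.
cons = [((1, 0), 2), ((0, 1), 2), ((1, 1), 3), ((-1, 0), 0), ((0, -1), 0), ((1, 0), 4)]
print(is_redundant(cons, 5))   # True
print(is_redundant(cons, 2))   # False: x + y <= 3 cuts the corner (2, 2)
```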
Utilizing Nested Normal Form to Design Redundancy Free JSON Schemas
Directory of Open Access Journals (Sweden)
Wai Yin Mok
2016-12-01
JSON (JavaScript Object Notation) is a lightweight data-interchange format for the Internet. JSON is built on two structures: (1) a collection of name/value pairs and (2) an ordered list of values (http://www.json.org/). Because of this simple approach, JSON is easy to use and has the potential to be the data-interchange format of choice for the Internet. Similar to XML, JSON schemas allow nested structures to model hierarchical data. As data interchange over the Internet increases exponentially due to cloud computing or otherwise, redundancy-free JSON data are an attractive form of communication because they improve the quality of data communication by eliminating update anomalies. Nested Normal Form, a normal form for hierarchical data, is a precise characterization of redundancy. A nested table, or a hierarchical schema, is in Nested Normal Form if and only if it is free of redundancy caused by multivalued and functional dependencies. Using Nested Normal Form as a guide, this paper introduces a JSON schema design methodology that begins with UML use case diagrams, communication diagrams and class diagrams that model a system under study. Based on the use cases' execution frequencies and the data passed between involved parties in the communication diagrams, the proposed methodology selects classes from the class diagrams to be the roots of JSON scheme trees and repeatedly adds classes from the class diagram to the scheme trees as long as the schemas satisfy Nested Normal Form. This process continues until all of the classes in the class diagram have been added to some JSON scheme tree.
Henten, van E.J.; Schenk, E.J.J.; Willigenburg, van L.G.; Meuleman, J.; Barreiro, P.
2010-01-01
The paper presents results of research on an inverse kinematics algorithm that has been used in a functional model of a cucumber-harvesting robot consisting of a redundant P6R manipulator. Within a first generic approach, the inverse kinematics problem was reformulated as a non-linear programming problem.
Material Modelling - Composite Approach
DEFF Research Database (Denmark)
Nielsen, Lauge Fuglsang
1997-01-01
is successfully justified comparing predicted results with experimental data obtained in the HETEK-project on creep, relaxation, and shrinkage of very young concretes cured at a temperature of T = 20^o C and a relative humidity of RH = 100%. The model is also justified comparing predicted creep, shrinkage......, and internal stresses caused by drying shrinkage with experimental results reported in the literature on the mechanical behavior of mature concretes. It is then concluded that the model presented applied in general with respect to age at loading.From a stress analysis point of view the most important finding...... in this report is that cement paste and concrete behave practically as linear-viscoelastic materials from an age of approximately 10 hours. This is a significant age extension relative to earlier studies in the literature where linear-viscoelastic behavior is only demonstrated from ages of a few days. Thus...
Signal validation in nuclear power plants using redundant measurements
International Nuclear Information System (INIS)
Glockler, O.; Upadhyaya, B.R.; Morgenstern, V.M.
1989-01-01
This paper discusses the basic principles of a multivariable signal validation software system utilizing redundant sensor readings of process variables in nuclear power plants (NPPs). The technique has been tested in numerical experiments and applied to actual data from a pressurized water reactor (PWR). The simultaneous checking within one redundant measurement set, and the cross-checking among redundant measurement sets of dissimilar process variables, result in an algorithm capable of detecting and isolating bias-type errors. Even the case in which a majority of the direct redundant measurements of more than one process variable fail simultaneously, through common-mode or correlated failures, can be detected by the developed approach. 5 refs
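A much simpler cousin of such consistency checking, shown here only to make the idea concrete (it is not the paper's multivariable algorithm), flags a biased channel in one redundant measurement set by its deviation from the set's median.

```python
def detect_bias(readings, threshold):
    """Flag sensors whose reading deviates from the median of the redundant
    set by more than `threshold` (a crude single-set consistency check)."""
    ordered = sorted(readings.values())
    n = len(ordered)
    median = (ordered[n // 2] + ordered[(n - 1) // 2]) / 2
    return [name for name, r in readings.items() if abs(r - median) > threshold]

# Four redundant pressure channels; hypothetical channel "PT-3" carries a +5 bias.
flagged = detect_bias({"PT-1": 101.2, "PT-2": 100.8, "PT-3": 106.0, "PT-4": 101.0},
                      threshold=2.0)
print(flagged)   # ['PT-3']
```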
Basic aspects of stochastic reliability analysis for redundancy systems
International Nuclear Information System (INIS)
Doerre, P.
1989-01-01
Much confusion has been created by trying to establish common cause failure (CCF) as an extra phenomenon which has to be treated with extra methods in reliability and data analysis. This paper takes another approach, which can be roughly described by the statement that dependent failure is the basic phenomenon, while 'independent failure' refers to a special limiting case, namely the perfectly homogeneous population. This approach is motivated by examples demonstrating that common causes do not lead to dependent failure, provided that physical dependencies such as shared components are excluded, and that stochastic dependencies are not related to common causes. The possibility of selecting more than one failure behaviour from an inhomogeneous population is identified as an additional random process which creates stochastic dependence. However, this source of randomness is usually treated in the deterministic limit, which destroys dependence and hence yields incorrect multiple failure frequencies for redundancy structures, thus creating the need for applying corrective CCF models. (author)
International Nuclear Information System (INIS)
Blume-Kohout, Robin; Zurek, Wojciech H.
2006-01-01
We lay a comprehensive foundation for the study of redundant information storage in decoherence processes. Redundancy has been proposed as a prerequisite for objectivity, the defining property of classical objects. We consider two ensembles of states for a model universe consisting of one system and many environments: the first consisting of arbitrary states, and the second consisting of 'singly branching' states consistent with a simple decoherence model. Typical states from the random ensemble do not store information about the system redundantly, but information stored in branching states has a redundancy proportional to the environment's size. We compute the specific redundancy for a wide range of model universes, and fit the results to a simple first-principles theory. Our results show that the presence of redundancy divides information about the system into three parts: classical (redundant); purely quantum; and the borderline, undifferentiated or 'nonredundant', information
Control Systems for Hyper-Redundant Robots Based on Artificial Potential Method
Directory of Open Access Journals (Sweden)
Mihaela Florescu
2015-06-01
This paper presents a control method for hyper-redundant robots based on the artificial potential approach. The principles of this method are shown and a suggestive example is offered. Then, the artificial potential method is applied to the case of a tentacle robot, starting from the dynamic model of the robot. In addition, a series of results obtained through simulation is presented.
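The artificial potential idea can be sketched for a point robot (a deliberate simplification; the paper applies it to tentacle-robot dynamics): an attractive term pulls toward the goal and a short-range repulsive term pushes away from obstacles, and the robot descends the combined gradient. All gains and geometry below are illustrative assumptions.

```python
import math

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=0.1, d0=1.0, step=0.05):
    """One gradient-descent step on U = 0.5*k_att*|pos-goal|^2
    plus 0.5*k_rep*(1/d - 1/d0)^2 for each obstacle closer than d0."""
    gx = k_att * (pos[0] - goal[0])              # attractive gradient
    gy = k_att * (pos[1] - goal[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < d0:                        # repulsion acts only at short range
            coef = -k_rep * (1.0 / d - 1.0 / d0) / d**3
            gx += coef * dx
            gy += coef * dy
    return (pos[0] - step * gx, pos[1] - step * gy)

pos, goal = (0.0, 0.0), (2.0, 0.0)
obstacles = [(1.0, 0.5)]                         # off the straight-line path
for _ in range(300):
    pos = potential_step(pos, goal, obstacles)
print(pos)                                        # ends close to the goal
```

A known limitation of the plain method, worth remembering when reading the paper, is that the summed field can trap the robot in local minima; the example above places the obstacle off the direct path to avoid that.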
Directory of Open Access Journals (Sweden)
Xing Jiang
2018-03-01
In recent years, the ultra-high voltage direct current (UHVDC) transmission system has been developed rapidly for its significant long-distance, high-capacity and low-loss properties. Equipment failures and overall outages of the UHVDC system have an increasingly vital influence on the power supply of the receiving-end grid. To improve the reliability level of UHVDC systems, a quantitative selection and configuration approach for redundant structures is proposed in this paper, based on multi-state reliability equivalence. Specifically, considering the symmetry of an UHVDC system, a state space model is established for a monopole rather than a bipole, which effectively reduces the state space dimensions to be considered by deducing the reliability merging operator of the two poles. Considering the standby effect of AC filters and the recovery effect of converter units, the number of available converter units and the corresponding probabilities are expressed in universal generating function (UGF) form. Then, a sensitivity analysis is performed to quantify the impact of component reliability parameters on system reliability and to determine the specific devices that should be configured in the redundant structure. Finally, a cost-benefit analysis is utilized to help determine the optimal scheme of redundant devices. Case studies are conducted to demonstrate the effectiveness and accuracy of the proposed method. Based on the numerical results, configuring a set of redundant transformers is indicated to be of the greatest significance for improving the reliability level of UHVDC transmission systems.
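The UGF bookkeeping amounts to multiplying per-unit state "polynomials" and collecting terms by total capacity. The sketch below composes the capacity distribution of four hypothetical converter units; the 0.95 availability figure is invented for illustration, not taken from the paper's case study.

```python
def ugf(units):
    """Compose the universal generating function of independent units.
    Each unit is a list of (probability, capacity) states; the result is
    the distribution {total capacity: probability} of the whole group."""
    dist = {0: 1.0}
    for states in units:
        new = {}
        for cap, p in dist.items():
            for prob, c in states:
                new[cap + c] = new.get(cap + c, 0.0) + p * prob
        dist = new
    return dist

# Four converter units, each available (capacity 1) with probability 0.95.
unit = [(0.95, 1), (0.05, 0)]
dist = ugf([unit] * 4)
p_at_least_3 = sum(p for cap, p in dist.items() if cap >= 3)
print(p_at_least_3)   # ≈ 0.98598125
```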
Redundancy in Nigerian Business Organizations: Alternatives (Pp ...
African Journals Online (AJOL)
FIRST LADY
Redundancy in Nigerian Business Organizations: Alternatives (Pp. ... When business downturns ... The galloping pace of information technologies is a harbinger of profound ... Redundant staff in public departments can also be retained as.
Redundant measurements for controlling errors
International Nuclear Information System (INIS)
Ehinger, M.H.; Crawford, J.M.; Madeen, M.L.
1979-07-01
Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program
A succession of theories: purging redundancy from disturbance theory.
Pulsford, Stephanie A; Lindenmayer, David B; Driscoll, Don A
2016-02-01
The topics of succession and post-disturbance ecosystem recovery have a long and convoluted history. There is extensive redundancy within this body of theory, which has resulted in confusion, and the links among theories have not been adequately drawn. This review aims to distil the unique ideas from the array of theory related to ecosystem change in response to disturbance. This will help to reduce redundancy, and improve communication and understanding between researchers. We first outline the broad range of concepts that have developed over the past century to describe community change in response to disturbance. The body of work spans overlapping succession concepts presented by Clements in 1916, Egler in 1954, and Connell and Slatyer in 1977. Other theories describing community change include state and transition models, biological legacy theory, and the application of functional traits to predict responses to disturbance. Second, we identify areas of overlap of these theories, in addition to highlighting the conceptual and taxonomic limitations of each. In aligning each of these theories with one another, the limited scope and relative inflexibility of some theories becomes apparent, and redundancy becomes explicit. We identify a set of unique concepts to describe the range of mechanisms driving ecosystem responses to disturbance. We present a schematic model of our proposed synthesis which brings together the range of unique mechanisms that were identified in our review. The model describes five main mechanisms of transition away from a post-disturbance community: (i) pulse events with rapid state shifts; (ii) stochastic community drift; (iii) facilitation; (iv) competition; and (v) the influence of the initial composition of a post-disturbance community. In addition, stabilising processes such as biological legacies, inhibition or continuing disturbance may prevent a transition between community types. Integrating these six mechanisms with the functional
Joint optimization of redundancy level and spare part inventories
International Nuclear Information System (INIS)
Sleptchenko, Andrei; Heijden, Matthieu van der
2016-01-01
We consider a “k-out-of-N” system with different standby modes. Each of the N components consists of multiple part types. Upon failure, a component can be repaired within a certain time by switching the failed part by a spare, if available. We develop both an exact and a fast approximate analysis to compute the system availability. Next, we jointly optimize the component redundancy level with the inventories of the various spare parts. We find that our approximations are very accurate and suitable for large systems. We apply our model to a case study at a public organization in Qatar, and find that we can improve the availability-to-cost ratio by reducing the redundancy level and increasing the spare part inventories. In general, high redundancy levels appear to be useful only when components are relatively cheap and part replacement times are high. - Highlights: • We analyze a redundant system (k-out-of-N) with multiple parts and spares. • We jointly optimize the redundancy level and the spare part inventories. • We develop an exact method and an approximation to evaluate the system availability. • Adding spare parts and reducing the redundancy level cuts cost by 50% in a case study. • The availability is not very sensitive to the shape of the failure time distribution.
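The availability side of the trade-off can be sketched with the standard k-out-of-N formula for identical, independent components; this is a simplification of the paper's model, which also tracks spare-part stocks, repair times and standby modes, and the 0.99 figure is illustrative.

```python
from math import comb

def k_out_of_n_availability(k, n, a):
    """Availability of a k-out-of-N system of identical, independent
    components, each available with probability a."""
    return sum(comb(n, i) * a**i * (1 - a)**(n - i) for i in range(k, n + 1))

# Diminishing returns of extra redundancy at component availability 0.99:
for n in (2, 3, 4):
    print(n, k_out_of_n_availability(2, n, 0.99))
# 2 -> 0.9801, 3 -> 0.999702, 4 -> ~0.999996: each extra unit buys less.
```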
Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach
Directory of Open Access Journals (Sweden)
W. Bastiaan Kleijn
2005-06-01
Auditory modeling is a well-established methodology that provides insight into human perception and facilitates the extraction of the signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
Flat H Redundant Frangible Joint Development
Brown, Chris
2016-01-01
Orion and Commercial Crew Program (CCP) partners have chosen to use frangible joints for certain separation events. The joints currently available are zero-failure-tolerant and will be used in mission safety applications. The goal is to further develop a NASA-designed redundant frangible joint that will lower flight risk and increase reliability. FY16 testing revealed a successful design in subscale straight test specimens that gained efficiency and supports Orion load requirements.
Approach / Innovation: A design constraint is that the redundant joint must fit within the current Orion architecture, without the need for additional vehicle modification. This limitation required a design that changed the orientation of the expanding tube assemblies (XTAs), rotating them 90 degrees from the standard joint configuration. The change is not trivial and affects the fracture mechanism and structural load paths. To address these changes, the design incorporates cantilevered arms on the break plate. The shock transmission and expansion of the XTA applies force to these arms and creates a prying motion that pushes the plate walls outward to the point of structural failure at the notched section. The 2014 test design revealed that parts could slip during functioning, wasting valuable energy needed to separate the structure with only a single XTA functioning. Dual XTA functioning fully separated the assembly, showing that a discrepancy can be backed up with redundancy. Work on other fully redundant systems outside NASA is limited to a few patents that have not been subjected to functionality testing. Design changes to prevent unwanted slippage (with ICA funding in 2015) showed success with a single XTA. The main goal for FY 2016 was to send the new Flat H RFJ to WSTF, where single-XTA test failures occurred back in 2014. The plan was to gain efficiency in this design by separating the Flat H RFJ with thicker ligaments, with dimensions baselined in 2014. Other modifications included geometry
Evaporator modeling - A hybrid approach
International Nuclear Information System (INIS)
Ding Xudong; Cai Wenjian; Jia Lei; Wen Changyun
2009-01-01
In this paper, a hybrid modeling approach is proposed to model two-phase flow evaporators. The main procedure for hybrid modeling includes: (1) formulate the fundamental governing equations of the process based on energy and material balances and thermodynamic principles; (2) select the input/output (I/O) variables responsible for the system performance which can be measured and controlled; (3) represent those variables existing in the original equations but not measurable as simple functions of the selected I/Os or constants; (4) obtain a single equation which correlates system inputs and outputs; and (5) identify the unknown parameters by linear or nonlinear least-squares methods. The method takes advantage of both physical and empirical modeling approaches and can accurately predict performance over a wide operating range and in real time, which significantly reduces the computational burden and increases the prediction accuracy. The model is verified with experimental data taken from a testing system. The testing results show that the proposed model can accurately predict the performance of the real-time operating evaporator with a maximum error of ±8%. The developed models will have wide applications in operational optimization, performance assessment, and fault detection and diagnosis.
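Step (5) of the procedure, identifying the unknown parameters once the model is reduced to a form linear in them, can be sketched with ordinary least squares on synthetic data. The two-parameter model below is invented for illustration and is not the evaporator equation.

```python
import numpy as np

# Hypothetical reduced model y = theta1*u1 + theta2*u2, linear in the
# unknown parameters, identified from noisy "plant" data by least squares.
rng = np.random.default_rng(0)
u = rng.uniform(1.0, 2.0, size=(50, 2))          # measured inputs
theta_true = np.array([3.2, -0.7])
y = u @ theta_true + rng.normal(0.0, 0.01, 50)   # measured output with noise

theta_hat, *_ = np.linalg.lstsq(u, y, rcond=None)
print(theta_hat)   # ≈ [3.2, -0.7]
```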
Assessment of redundant systems with imperfect coverage by means of binary decision diagrams
Energy Technology Data Exchange (ETDEWEB)
Myers, Albert F. [Northrop Grumman Corporation, 1840 Century Park East, Los Angeles, CA 90067-2199 (United States)], E-mail: Al.Myers@ngc.com; Rauzy, Antoine [IML/CNRS, 163, Avenue de Luminy, 13288 Marseille Cedex 09 (France)], E-mail: arauzy@iml.univ-mrs.fr
2008-07-15
In this article, we study the assessment of the reliability of redundant systems with imperfect fault coverage. We define fault coverage as the ability of a system to isolate and correctly accommodate failures of redundant elements. For highly reliable systems, such as avionic and space systems, fault coverage is in general imperfect and has a significant impact on system reliability. We review here the different models of imperfect fault coverage. We propose efficient algorithms to assess them separately (as k-out-of-n selectors). We show how to implement these algorithms in a binary decision diagram engine. Finally, we report experimental results on real-life test cases that show, on the one hand, the importance of imperfect coverage and, on the other hand, the efficiency of the proposed approach.
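A minimal sketch of the imperfect-coverage effect, using a direct recursion rather than the paper's BDD machinery: each element failure is safely isolated only with probability c, and a single uncovered failure defeats the whole redundant group, which is why coverage dominates the reliability of highly redundant designs.

```python
def reliability_with_coverage(n, k, r, c):
    """Probability that a k-out-of-n redundant group succeeds when each
    element works with probability r and each element failure is safely
    isolated (covered) only with probability c; an uncovered failure
    brings the whole system down."""
    def recurse(remaining, working):
        if remaining == 0:
            return 1.0 if working >= k else 0.0
        ok = r * recurse(remaining - 1, working + 1)
        covered_fail = (1 - r) * c * recurse(remaining - 1, working)
        return ok + covered_fail      # an uncovered failure contributes 0
    return recurse(n, 0)

# 1-out-of-3 group of 0.99-reliable elements (figures are illustrative):
print(reliability_with_coverage(3, 1, 0.99, 1.00))   # perfect coverage: 0.999999
print(reliability_with_coverage(3, 1, 0.99, 0.95))   # ≈ 0.99850: coverage dominates
```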
High precision redundant robotic manipulator
International Nuclear Information System (INIS)
Young, K.K.D.
1998-01-01
A high precision redundant robotic manipulator for overcoming constraints imposed by obstacles or by a highly congested work space is disclosed. One embodiment of the manipulator has four degrees of freedom and another embodiment has seven degrees of freedom. Each of the embodiments utilizes a first selective compliant assembly robot arm (SCARA) configuration to provide high stiffness in the vertical plane and a second SCARA configuration to provide high stiffness in the horizontal plane. The seven-degree-of-freedom embodiment also utilizes kinematic redundancy to provide the capability of avoiding obstacles that lie between the base of the manipulator and the end effector or link of the manipulator. These additional three degrees of freedom are added at the wrist link of the manipulator to provide pitch, yaw and roll. The seven-degree-of-freedom embodiment uses one revolute joint per degree of freedom. For each of the revolute joints, a harmonic gear coupled to an electric motor is introduced, which together with properly designed base servo controllers provides an end point repeatability of less than 10 microns. 3 figs
Timing control by redundant inhibitory neuronal circuits
Energy Technology Data Exchange (ETDEWEB)
Tristan, I., E-mail: itristan@ucsd.edu; Rulkov, N. F.; Huerta, R.; Rabinovich, M. [BioCircuits Institute, University of California, San Diego, La Jolla, California 92093-0402 (United States)
2014-03-15
Rhythms and timing control of sequential activity in the brain is fundamental to cognition and behavior. Although experimental and theoretical studies support the understanding that neuronal circuits are intrinsically capable of generating different time intervals, the dynamical origin of the phenomenon of functionally dependent timing control is still unclear. Here, we consider a new mechanism that is related to the multi-neuronal cooperative dynamics in inhibitory brain motifs consisting of a few clusters. It is shown that redundancy and diversity of neurons within each cluster enhances the sensitivity of the timing control with the level of neuronal excitation of the whole network. The generality of the mechanism is shown to work on two different neuronal models: a conductance-based model and a map-based model.
The restricted isometry property meets nonlinear approximation with redundant frames
DEFF Research Database (Denmark)
Gribonval, Rémi; Nielsen, Morten
2013-01-01
with a redundant frame. The main ingredients of our approach are: a) Jackson and Bernstein inequalities, associated to the characterization of certain approximation spaces with interpolation spaces; b) a proof that for overcomplete frames which satisfy a Bernstein inequality, these interpolation spaces are nothing...
Working memory capacity and redundant information processing efficiency.
Endres, Michael J; Houpt, Joseph W; Donkin, Chris; Finn, Peter R
2015-01-01
Working memory capacity (WMC) is typically measured by the amount of task-relevant information an individual can keep in mind while resisting distraction or interference from task-irrelevant information. The current research investigated the extent to which differences in WMC were associated with performance on a novel redundant memory probes (RMP) task that systematically varied the amount of to-be-remembered (targets) and to-be-ignored (distractor) information. The RMP task was designed to both facilitate and inhibit working memory search processes, as evidenced by differences in accuracy, response time, and Linear Ballistic Accumulator (LBA) model estimates of information processing efficiency. Participants (N = 170) completed standard intelligence tests and dual-span WMC tasks, along with the RMP task. As expected, accuracy, response-time, and LBA model results indicated memory search and retrieval processes were facilitated under redundant-target conditions, but also inhibited under mixed target/distractor and redundant-distractor conditions. Repeated measures analyses also indicated that, while individuals classified as high (n = 85) and low (n = 85) WMC did not differ in the magnitude of redundancy effects, groups did differ in the efficiency of memory search and retrieval processes overall. Results suggest that redundant information reliably facilitates and inhibits the efficiency or speed of working memory search, and these effects are independent of more general limits and individual differences in the capacity or space of working memory.
Program management aid for redundancy selection and operational guidelines
Hodge, P. W.; Davis, W. L.; Frumkin, B.
1972-01-01
Although this criterion was developed specifically for use on the shuttle program, it has application to many other multi-mission programs (e.g., aircraft or mechanisms). The methodology employed is directly applicable even if the tools (nomographs and equations) are for mission-peculiar cases. The redundancy selection criterion was developed to ensure that both the design and operational cost impacts (life cycle costs) are considered in selecting the quantity of operational redundancy. These tools were developed as aids for expediting the decision process and are not intended as an automatic decision maker. This approach to redundancy selection is unique in that it enables a pseudo systems analysis to be performed on an equipment basis without waiting for all designs to be hardened.
Coherent network detection of gravitational waves: the redundancy veto
International Nuclear Information System (INIS)
Wen Linqing; Schutz, Bernard F
2005-01-01
A network of gravitational wave detectors is called redundant if, given the direction to a source, the strain induced by a gravitational wave in one or more of the detectors can be fully expressed in terms of the strain induced in others in the network. Because gravitational waves have only two polarizations, any network of three or more differently oriented interferometers with similar observing bands is redundant. The three-armed LISA space interferometer has three outputs that are redundant at low frequencies. The two aligned LIGO interferometers at Hanford WA are redundant, and the LIGO detector at Livingston LA is nearly redundant with either of the Hanford detectors. Redundant networks have a powerful veto against spurious noise, a linear combination of the detector outputs that contains no gravitational wave signal. For LISA, this 'null' output is known as the Sagnac mode, and its use in discriminating between detector noise and a cosmological gravitational wave background is well understood. But the usefulness of the null veto for ground-based detector networks has been ignored until now. We show that it should make it possible to discriminate in a model-independent way between real gravitational waves and accidentally coincident non-Gaussian noise 'events' in redundant networks of two or more broadband detectors. It has been shown that with three detectors, the null output can even be used to locate the direction to the source, and then two other linear combinations of detector outputs give the optimal 'coherent' reconstruction of the two polarization components of the signal. We discuss briefly the implementation of such a detection strategy in realistic networks, where signals are weak, detector calibration is a significant uncertainty, and the various detectors may have different (but overlapping) observing bands
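The null-stream idea above can be illustrated with a toy sketch: for two co-aligned, equal-response detectors, the difference of the outputs cancels any common gravitational-wave signal and retains only instrumental noise. The signal and noise series below are invented for the sketch; real pipelines must also handle calibration uncertainty and differing antenna patterns.

```python
import random

def null_stream(d1, d2):
    # for two co-aligned, equal-response detectors the difference of the
    # outputs contains no gravitational-wave signal, only noise
    return [a - b for a, b in zip(d1, d2)]

random.seed(0)
h = [0.5 * (t % 7) for t in range(100)]           # invented common signal
n1 = [random.gauss(0, 0.1) for _ in range(100)]   # independent detector noise
n2 = [random.gauss(0, 0.1) for _ in range(100)]

d1 = [s + n for s, n in zip(h, n1)]
d2 = [s + n for s, n in zip(h, n2)]

null = null_stream(d1, d2)
# the null combination equals n1 - n2: the signal h cancels exactly
assert all(abs(x - (a - b)) < 1e-12 for x, a, b in zip(null, n1, n2))
```

A glitch present in only one detector would survive in the null stream, which is exactly what makes it usable as a veto against non-Gaussian noise events.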
Optimization of robustness of interdependent network controllability by redundant design.
Directory of Open Access Journals (Sweden)
Zenghu Zhang
Controllability of complex networks has been a hot topic in recent years. Real networks can be regarded as interdependent networks, in which multiple networks are coupled together. The cascading process in interdependent networks, including interdependent failure and overload failure, will destroy the robustness of controllability for the whole network. Therefore, optimizing the robustness of interdependent network controllability is of great importance in complex networks research. In this paper, based on a model of interdependent networks constructed first, we determine the cascading process under different proportions of node attacks. Then, the structural controllability of interdependent networks is measured by the minimum number of driver nodes. Furthermore, we propose a parameter which can be obtained from the structure and minimum driver set of interdependent networks under different proportions of node attacks, and we analyze the robustness of interdependent network controllability. Finally, we optimize the robustness of interdependent network controllability by redundant design, including node backup and redundant edge backup, and improve the redundant design by proposing different strategies according to their cost. Comparative strategies of redundant design are evaluated to find the best strategy. Results show that node backup and redundant edge backup can indeed reduce the number of nodes that fail and improve the robustness of controllability. Considering the cost of redundant design, we should choose BBS (betweenness-based strategy) or DBS (degree-based strategy) for node backup and HDF (high-degree-first) for redundant edge backup. Above all, our proposed strategies are feasible and effective at improving the robustness of interdependent network controllability.
Sexual selection, redundancy and survival of the most beautiful
Indian Academy of Sciences (India)
A model is described of a highly redundant complex organism that has overlapping banks of genes such that each vital function is specified by several different genetic systems. This generates a synergistic profile linking probability of survival to the number of deleterious mutations in the genome. Computer models show ...
Liu, Yiming; Shi, Yimin; Bai, Xuchao; Zhan, Pei
2018-01-01
In this paper, we study the estimation of the reliability of a multicomponent system, called the N-M cold-standby redundancy system, based on a progressive Type-II censoring sample. In the system, there are N subsystems consisting of M statistically independent distributed-strength components, and only one of these subsystems works under the impact of stresses at a time while the others remain as standbys. Whenever the working subsystem fails, one of the standbys takes its place. The system fails when all subsystems have failed. It is supposed that the underlying distributions of random strength and stress both belong to the generalized half-logistic distribution with different shape parameters. The reliability of the system is estimated using both classical and Bayesian statistical inference. The uniformly minimum variance unbiased estimator and the maximum likelihood estimator for the reliability of the system are derived. Under a squared error loss function, the exact expression of the Bayes estimator for the reliability of the system is developed using the Gauss hypergeometric function. The asymptotic confidence interval and corresponding coverage probabilities are derived based on both the Fisher and the observed information matrices. The approximate highest probability density credible interval is constructed using the Monte Carlo method. Monte Carlo simulations are performed to compare the performances of the proposed reliability estimators. A real data set is also analyzed for an illustration of the findings.
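The cold-standby mechanism described above can be sketched in a form simplified far beyond the paper's stress-strength setting: standbys do not age, so the system lifetime is the sum of the subsystem lifetimes. Exponential subsystem lifetimes and the parameter values are assumptions made for the sketch only.

```python
import random

def cold_standby_lifetime(n, lam, rng):
    # n cold-standby subsystems used one at a time: standbys do not age,
    # so the system lifetime is the sum of individual subsystem lifetimes
    return sum(rng.expovariate(lam) for _ in range(n))

rng = random.Random(42)
samples = [cold_standby_lifetime(4, 0.5, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
# for exponential lifetimes the theoretical system MTTF is n / lam = 8
assert abs(mean - 8.0) < 0.2
```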
Redundant information encoding in QED during decoherence
Tuziemski, J.; Witas, P.; Korbicz, J. K.
2018-01-01
Broadly understood decoherence processes in quantum electrodynamics, induced by neglecting either the radiation [L. Landau, Z. Phys. 45, 430 (1927), 10.1007/BF01343064] or the charged matter [N. Bohr and L. Rosenfeld, K. Danske Vidensk. Selsk, Math.-Fys. Medd. XII, 8 (1933)], have been studied from the dawn of the theory. However, what happens in between, when a part of the radiation may be observed, as is the case in many real-life situations, has not been analyzed yet. We present such an analysis for a nonrelativistic, pointlike charge and thermal radiation. In the dipole approximation, we solve the dynamics and show that there is a regime where, despite the noise, the observed field carries away almost perfect and hugely redundant information about the charge momentum. We analyze a partial charge-field state and show that it approaches a so-called spectrum broadcast structure.
International Nuclear Information System (INIS)
Zhang, Enze; Chen, Qingwei
2016-01-01
Most of the existing works addressing reliability redundancy allocation problems are based on the assumption of fixed reliabilities of components. In real-life situations, however, the reliabilities of individual components may be imprecise, most often given as intervals, under different operating or environmental conditions. This paper deals with reliability redundancy allocation problems modeled in an interval environment. An interval multi-objective optimization problem is formulated from the original crisp one, where system reliability and cost are simultaneously considered. To render the multi-objective particle swarm optimization (MOPSO) algorithm capable of dealing with interval multi-objective optimization problems, a dominance relation for interval-valued functions is defined with the help of our newly proposed order relations of interval-valued numbers. Then, the crowding distance is extended to the multi-objective interval-valued case. Finally, the effectiveness of the proposed approach has been demonstrated through two numerical examples and a case study of supervisory control and data acquisition (SCADA) system in water resource management. - Highlights: • We model the reliability redundancy allocation problem in an interval environment. • We apply the particle swarm optimization directly on the interval values. • A dominance relation for interval-valued multi-objective functions is defined. • The crowding distance metric is extended to handle imprecise objective functions.
International Nuclear Information System (INIS)
Hoepfer, V.M.; Saleh, J.H.; Marais, K.B.
2009-01-01
Common-cause failures (CCF) are one of the more critical and challenging issues for system reliability and risk analyses. Academic interest in modeling CCF, and more broadly in modeling dependent failures, has steadily grown over the years in the number of publications as well as in the sophistication of the analytical tools used. In the past few years, several influential articles have shed doubts on the relevance of redundancy arguing that 'redundancy backfires' through common-cause failures, and that the latter dominate unreliability, thus defeating the purpose of redundancy. In this work, we take issue with some of the results of these publications. In their stead, we provide a nuanced perspective on the (contingent) value of redundancy subject to common-cause failures. First, we review the incremental reliability and MTTF provided by redundancy subject to common-cause failures. Second, we introduce the concept and develop the analytics of the 'redundancy-relevance boundary': we propose this redundancy-relevance boundary as a design-aid tool that provides an answer to the following question: what level of redundancy is relevant or advantageous given a varying prevalence of common-cause failures? We investigate the conditions under which different levels of redundancy provide an incremental MTTF over that of the single component in the face of common-cause failures. Recognizing that redundancy comes at a cost, we also conduct a cost-benefit analysis of redundancy subject to common-cause failures, and demonstrate how this analysis modifies the redundancy-relevance boundary. We show how the value of redundancy is contingent on the prevalence of common-cause failures, the redundancy level considered, and the monadic cost-benefit ratio. Finally we argue that general unqualified criticism of redundancy is misguided, and efforts are better spent for example on understanding and mitigating the potential sources of common-cause failures rather than deriding the concept
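The contingent value of redundancy discussed above can be made concrete with the standard beta-factor common-cause model (a generic sketch, not the authors' exact formulation): a fraction beta of a component's failure rate is attributed to a common cause that takes out all channels at once.

```python
import math

def duplex_reliability(t, lam, beta):
    # 1-out-of-2 redundancy under the beta-factor model: each component
    # fails independently at rate (1 - beta) * lam, and a common cause
    # takes out both components at rate beta * lam
    r_ind = math.exp(-(1.0 - beta) * lam * t)
    r_ccf = math.exp(-beta * lam * t)
    return r_ccf * (1.0 - (1.0 - r_ind) ** 2)

def mttf(lam, beta, dt=0.01, t_max=200.0):
    # MTTF = integral of R(t); a simple Riemann sum is accurate enough here
    steps = int(t_max / dt)
    return sum(duplex_reliability(i * dt, lam, beta) for i in range(steps)) * dt

lam = 0.1                                   # component failure rate (arbitrary units)
assert mttf(lam, 0.0) > mttf(lam, 0.5) > mttf(lam, 1.0)
# with no common cause the duplex MTTF is 3/(2*lam) = 15; with beta = 1
# the redundancy is fully defeated and the MTTF falls back to 1/lam = 10
assert abs(mttf(lam, 0.0) - 15.0) < 0.1
assert abs(mttf(lam, 1.0) - 10.0) < 0.1
```

Sweeping beta and the redundancy level in this way is one simple numerical route to a redundancy-relevance boundary of the kind the abstract proposes.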
HEDR modeling approach: Revision 1
International Nuclear Information System (INIS)
Shipler, D.B.; Napier, B.A.
1994-05-01
This report is a revision of the previous Hanford Environmental Dose Reconstruction (HEDR) Project modeling approach report. This revised report describes the methods used in performing scoping studies and estimating final radiation doses to real and representative individuals who lived in the vicinity of the Hanford Site. The scoping studies and dose estimates pertain to various environmental pathways during various periods of time. The original report discussed the concepts under consideration in 1991. The methods for estimating dose have been refined as understanding of existing data, the scope of pathways, and the magnitudes of dose estimates were evaluated through scoping studies
Repetitive motion planning and control of redundant robot manipulators
Zhang, Yunong
2013-01-01
Repetitive Motion Planning and Control of Redundant Robot Manipulators presents four typical motion planning schemes based on optimization techniques, including the fundamental RMP scheme and its extensions. These schemes are unified as quadratic programs (QPs), which are solved by neural networks or numerical algorithms. The RMP schemes are demonstrated effectively by the simulation results based on various robotic models; the experiments applying the fundamental RMP scheme to a physical robot manipulator are also presented. As the schemes and the corresponding solvers presented in the book have solved the non-repetitive motion problems existing in redundant robot manipulators, it is of particular use in applying theoretical research based on the quadratic program for redundant robot manipulators in industrial situations. This book will be a valuable reference work for engineers, researchers, advanced undergraduate and graduate students in robotics fields. Yunong Zhang is a professor at The School of Informa...
On Planning of FTTH Access Networks with and without Redundancy
DEFF Research Database (Denmark)
Riaz, M. Tahir; Haraldsson, Gustav; Gutierrez Lopez, Jose Manuel
2010-01-01
This paper presents a planning analysis of FTTH access networks with and without redundancy. Traditionally, access networks are planned only without redundancy, mainly to lower the cost of deployment. As fiber optics provide a huge amount of capacity, more and more services are being offered on a single fiber connection. Since a single point of failure in a fiber connection can cause multiple service deprivations, redundancy is crucial. In this work, an automated planning model was used to test different scenarios of implementation. A cost estimation is presented in terms of digging and the amount of fiber used. Three topologies, including the traditional tree topology, were tested in combination with various passive optical technologies.
The error performance analysis over cyclic redundancy check codes
Yoon, Hee B.
1991-06-01
Burst errors are generated in digital communication networks by various unpredictable conditions; they occur at high error rates, for short durations, and can impact services. To completely describe a burst error one has to know the bit pattern, which is impossible in practice on working systems. Therefore, under memoryless binary symmetric channel (MBSC) assumptions, performance evaluation or estimation schemes for digital signal 1 (DS1) transmission systems carrying live traffic are an interesting and important problem. This study presents some analytical methods leading to efficient algorithms for detecting burst errors using cyclic redundancy check (CRC) codes. The definition of a burst error is introduced using three different models; among them, the mathematical model is used in this study. A probability density function f(b) for burst errors of length b is proposed. The performance of CRC-n codes is evaluated and analyzed using f(b) through a computer simulation model within CRC-block burst errors. The simulation results show that the mean block burst error tends to approach the pattern of burst error that random bit errors generate.
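A minimal bit-serial CRC-16 sketch illustrates the detection property the study relies on. The generator polynomial and message below are assumptions for illustration (the abstract does not fix a particular CRC-n): a CRC-16 detects every burst confined to at most 16 consecutive bits.

```python
def crc16(data: bytes, poly=0x8005):
    # bit-serial long division by x^16 + x^15 + x^2 + 1 (top bit implicit);
    # the register ends up holding the message polynomial modulo the generator
    reg = 0
    for byte in data:
        for i in range(8):
            bit = (byte >> (7 - i)) & 1
            msb = (reg >> 15) & 1
            reg = ((reg << 1) & 0xFFFF) | bit
            if msb:
                reg ^= poly
    return reg

msg = b"redundancy check demo"
check = crc16(msg)

# inject a burst error spanning exactly 16 consecutive bits (bytes 3-4);
# a CRC-16 is guaranteed to detect any burst no longer than 16 bits
corrupted = bytearray(msg)
corrupted[3] ^= 0xFF
corrupted[4] ^= 0x0F
assert crc16(bytes(corrupted)) != check
```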
Directory of Open Access Journals (Sweden)
KULANTHAISAMY, A.
2014-05-01
This paper presents a Multi-objective Optimal Placement of Phasor Measurement Units (MOPP) method for large electric transmission systems. It simultaneously minimizes the number of Phasor Measurement Units (PMUs) required for complete system observability and maximizes the measurement redundancy of the system. Measurement redundancy refers to the number of times a bus is monitored by the PMU set beyond the first observation. A higher level of measurement redundancy can maximize total system observability and is desirable for reliable power system state estimation. Therefore, simultaneous optimization of the two conflicting objectives is performed using a binary-coded Artificial Bee Colony (ABC) algorithm. The model for complete observability of the power system is prepared first, and then a single-line-loss contingency condition is added to the main model. The efficiency of the proposed method is validated on the IEEE 14, 30, 57 and 118 bus test systems. The value of the ABC algorithm is demonstrated by finding the optimal number of PMUs and their locations and comparing its performance with earlier works.
Redundancy in Nigerian Business Organizations: Alternatives ...
African Journals Online (AJOL)
This theoretical discourse examined the incidence of work redundancy in Nigerian organizations as to offer alternative options. Certainly, some redundancy exercises may be necessary for the survival of the organizations but certain variables may influence employees' reactions to the exercises and thus influence the ...
Increasing The Dexterity Of Redundant Robots
Seraji, Homayoun
1990-01-01
Redundant coordinates used to define additional tasks. Configuration control emerging as effective way to control motions of robot having more degrees of freedom than necessary to define trajectory of end effector and/or of object to be manipulated. Extra or redundant degrees of freedom used to give robot humanlike dexterity and versatility.
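Configuration control of the kind described above is commonly built on pseudoinverse redundancy resolution: the joint velocity is a particular solution tracking the end-effector task plus a null-space term that pursues a secondary objective without disturbing the end effector. The planar three-link arm, link lengths, and secondary-task gradient below are hypothetical.

```python
import math

# planar 3-link arm with hypothetical link lengths: the 2x3 Jacobian
# leaves one redundant degree of freedom for a secondary task
L = [1.0, 0.8, 0.5]

def jacobian(q):
    # J[r][c] = d(end-effector coordinate r) / d(joint c)
    J = [[0.0] * 3 for _ in range(2)]
    for c in range(3):
        a = 0.0
        for k in range(3):
            a += q[k]
            if k >= c:
                J[0][c] -= L[k] * math.sin(a)
                J[1][c] += L[k] * math.cos(a)
    return J

def pinv(J):
    # right pseudoinverse J^+ = J^T (J J^T)^{-1}; the 2x2 inverse is explicit
    A = [[sum(J[i][k] * J[j][k] for k in range(3)) for j in range(2)]
         for i in range(2)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    Ai = [[A[1][1] / det, -A[0][1] / det],
          [-A[1][0] / det, A[0][0] / det]]
    return [[sum(J[k][i] * Ai[k][j] for k in range(2)) for j in range(2)]
            for i in range(3)]

def qdot(q, xdot, grad):
    # particular solution J^+ xdot plus null-space motion (I - J^+ J) grad
    J = jacobian(q)
    Jp = pinv(J)
    JpJ = [[sum(Jp[i][k] * J[k][j] for k in range(2)) for j in range(3)]
           for i in range(3)]
    return [sum(Jp[i][k] * xdot[k] for k in range(2))
            + sum(((i == j) - JpJ[i][j]) * grad[j] for j in range(3))
            for i in range(3)]

q = [0.3, 0.4, 0.2]
dq = qdot(q, [0.0, 0.0], [0.1, -0.2, 0.05])   # pure null-space motion
J = jacobian(q)
# the projected secondary task produces no end-effector velocity
assert all(abs(sum(J[r][i] * dq[i] for i in range(3))) < 1e-9 for r in range(2))
```

The `grad` argument is where a joint-limit-avoidance or dexterity objective would enter, which is the sense in which the redundant degree of freedom buys versatility.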
Maximization of learning speed in the motor cortex due to neuronal redundancy.
Directory of Open Access Journals (Sweden)
Ken Takiyama
2012-01-01
Many redundancies play functional roles in motor control and motor learning. For example, kinematic and muscle redundancies contribute to stabilizing posture and impedance control, respectively. Another redundancy is the number of neurons themselves: there are overwhelmingly more neurons than muscles, and many combinations of neural activation can generate identical muscle activity. The functional roles of this neuronal redundancy remain unknown. Analysis of a redundant neural network model makes it possible to investigate these functional roles while varying the number of model neurons and holding constant the number of output units. Our analysis reveals that learning speed reaches its maximum value if and only if the model includes sufficient neuronal redundancy. This analytical result does not depend on whether the distribution of the preferred direction is uniform or skewed bimodal, both of which have been reported in neurophysiological studies. Neuronal redundancy maximizes learning speed even if the neural network model includes recurrent connections, a nonlinear activation function, or nonlinear muscle units. Furthermore, our results do not rely on the shape of the generalization function. The results of this study suggest that one of the functional roles of neuronal redundancy is to maximize learning speed.
Redundant arrays of IDE drives
Energy Technology Data Exchange (ETDEWEB)
D.A. Sanders et al.
2002-01-02
The authors report tests of redundant arrays of IDE disk drives for use in offline high energy physics data analysis. Parts costs of total systems using commodity EIDE disks are now at the $4000 per Terabyte level. Disk storage prices have now decreased to the point where they equal the cost per Terabyte of Storage Technology tape silos. The disks, however, offer far better granularity; even small institutions can afford to deploy systems. The tests include reports on software RAID-5 systems running under Linux 2.4 using Promise Ultra 100 disk controllers. RAID-5 protects data in case of a single disk failure by providing parity bits. Tape backup is not required. Journaling file systems are used to allow rapid recovery from crashes. The data analysis strategy is to encapsulate data and CPU processing power. Analysis for a particular part of a data set takes place on the PC where the data resides. The network is only used to put results together. They explore three methods of moving data between sites: internet transfers, hot-pluggable IDE disks in FireWire cases, and DVD-R disks.
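The single-disk-failure protection mentioned above reduces to XOR parity across the blocks of a stripe (RAID-5 additionally rotates the parity block across disks, which this sketch omits). The block contents are invented for illustration.

```python
def parity_block(blocks):
    # XOR parity across equal-sized data blocks
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

def reconstruct(surviving, parity):
    # XOR of the survivors and the parity rebuilds the one missing block
    return parity_block(surviving + [parity])

data = [b"disk0...", b"disk1...", b"disk2..."]   # invented stripe contents
p = parity_block(data)

# lose disk 1, rebuild it from the other disks plus parity
assert reconstruct([data[0], data[2]], p) == data[1]
```

Because any single block (including the parity itself) is recoverable this way, tape backup against single-drive failure becomes unnecessary, as the abstract notes.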
Redundant arrays of IDE drives
International Nuclear Information System (INIS)
Sanders, D.A.
2002-01-01
The authors report tests of redundant arrays of IDE disk drives for use in offline high energy physics data analysis. Parts costs of total systems using commodity EIDE disks are now at the $4000 per Terabyte level. Disk storage prices have now decreased to the point where they equal the cost per Terabyte of Storage Technology tape silos. The disks, however, offer far better granularity; even small institutions can afford to deploy systems. The tests include reports on software RAID-5 systems running under Linux 2.4 using Promise Ultra 100 disk controllers. RAID-5 protects data in case of a single disk failure by providing parity bits. Tape backup is not required. Journaling file systems are used to allow rapid recovery from crashes. The data analysis strategy is to encapsulate data and CPU processing power. Analysis for a particular part of a data set takes place on the PC where the data resides. The network is only used to put results together. They explore three methods of moving data between sites: internet transfers, hot-pluggable IDE disks in FireWire cases, and DVD-R disks.
Redundancy Optimization for Error Recovery in Digital Microfluidic Biochips
DEFF Research Database (Denmark)
Alistar, Mirela; Pop, Paul; Madsen, Jan
2015-01-01
Microfluidic-based biochips are replacing the conventional biochemical analyzers and are able to integrate all the necessary functions for biochemical analysis. The digital microfluidic biochips are based on the manipulation of liquids not as a continuous flow but as discrete droplets. Researchers have proposed approaches for the synthesis of digital microfluidic biochips which, starting from a biochemical application and a given biochip architecture, determine the allocation, resource binding, scheduling, placement and routing of the operations in the application. During the execution ... propose an online recovery strategy, which decides during the execution of the biochemical application the introduction of the redundancy required for fault tolerance. We consider both time redundancy, i.e., re-executing erroneous operations, and space redundancy, i.e., creating redundant droplets ...
Cohen, Raphael; Elhadad, Michael; Elhadad, Noémie
2013-01-16
The increasing availability of Electronic Health Record (EHR) data and specifically free-text patient notes presents opportunities for phenotype extraction. Text-mining methods in particular can help disease modeling by mapping named-entities mentions to terminologies and clustering semantically related terms. EHR corpora, however, exhibit specific statistical and linguistic characteristics when compared with corpora in the biomedical literature domain. We focus on copy-and-paste redundancy: clinicians typically copy and paste information from previous notes when documenting a current patient encounter. Thus, within a longitudinal patient record, one expects to observe heavy redundancy. In this paper, we ask three research questions: (i) How can redundancy be quantified in large-scale text corpora? (ii) Conventional wisdom is that larger corpora yield better results in text mining. But how does the observed EHR redundancy affect text mining? Does such redundancy introduce a bias that distorts learned models? Or does the redundancy introduce benefits by highlighting stable and important subsets of the corpus? (iii) How can one mitigate the impact of redundancy on text mining? We analyze a large-scale EHR corpus and quantify redundancy both in terms of word and semantic concept repetition. We observe redundancy levels of about 30% and non-standard distribution of both words and concepts. We measure the impact of redundancy on two standard text-mining applications: collocation identification and topic modeling. We compare the results of these methods on synthetic data with controlled levels of redundancy and observe significant performance variation. Finally, we compare two mitigation strategies to avoid redundancy-induced bias: (i) a baseline strategy, keeping only the last note for each patient in the corpus; (ii) removing redundant notes with an efficient fingerprinting-based algorithm. (a)For text mining, preprocessing the EHR corpus with fingerprinting yields
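The fingerprinting-based mitigation mentioned above can be approximated with a simple word-shingle overlap check. This is a hypothetical sketch, not the authors' algorithm, and the clinical notes are invented: the second note copies and extends the first, so their shingle sets overlap heavily.

```python
def shingles(text, n=3):
    # overlapping word n-grams act as a cheap document fingerprint
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    # overlap of two fingerprint sets: 1.0 means identical shingle sets
    return len(a & b) / len(a | b)

# invented clinical notes: the second copies and extends the first
note1 = "patient reports chest pain radiating to left arm since morning"
note2 = "patient reports chest pain radiating to left arm new shortness of breath"

sim = jaccard(shingles(note1), shingles(note2))
assert sim > 0.4   # heavy copy-paste overlap is flagged as redundant
```

Thresholding such a similarity score is one way to drop near-duplicate notes from a longitudinal record before text mining.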
Directory of Open Access Journals (Sweden)
Flávia Rosa Santoro
Resilience is related to the ability of a system to adjust to disturbances. The Utilitarian Redundancy Model has emerged as a tool for investigating the resilience of local medical systems. The model treats the richness of species used for the same therapeutic function as a facilitator of the maintenance of these systems. However, predictions generated from this model have not yet been tested, and a lack of variables exists for deeper analyses of resilience. This study aims to address gaps in the Utilitarian Redundancy Model and to investigate the resilience of two medical systems in the Brazilian semi-arid zone. Because a local illness is not always perceived in the same way that biomedicine recognizes it, the term "therapeutic targets" is used for perceived illnesses. Semi-structured interviews with local experts were conducted using the free-listing technique to collect data on known medicinal plants, usage preferences, use of redundant species, characteristics of therapeutic targets, and the perceived severity of each target. Additionally, participatory workshops were conducted to determine the frequency of targets. The medical systems showed high species richness but low levels of species redundancy. However, when redundancy was present, it was the primary factor responsible for the maintenance of system functions. Species richness was positively associated with therapeutic target frequencies and negatively related to target severity. Moreover, information about redundant species seems to be largely idiosyncratic; this finding raises questions about the importance of redundancy for resilience. We stress that the Utilitarian Redundancy Model is an interesting tool for studies of resilience, but we emphasize that it must consider the distribution of redundancy in terms of the treatment of important illnesses and the sharing of information. This study has identified aspects of the higher and lower vulnerabilities of medical systems, adding ...
Exploiting Redundancy in an OFDM SDR Receiver
Directory of Open Access Journals (Sweden)
Tomas Palenik
2009-01-01
A common OFDM system contains redundancy necessary to mitigate interblock interference and allows computationally effective single-tap frequency-domain equalization in the receiver. Assuming the system implements an outer error-correcting code and channel state information is available in the receiver, we show that it is possible to interpret the cyclic prefix insertion as a weak inner ECC encoding and exploit the introduced redundancy to slightly improve the error performance of such a system. In this paper, an easy-to-implement modification to an existing SDR OFDM receiver is presented. This modification enables the utilization of prefix redundancy while preserving full compatibility with existing OFDM-based communication standards.
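The prefix redundancy in question exists because a cyclic prefix at least as long as the channel memory turns the channel's linear convolution into a circular one, which a single-tap per-subcarrier equalizer can invert after a DFT. A sketch with an invented block and 3-tap channel:

```python
def add_cp(block, p):
    # cyclic prefix: copy the last p samples to the front of the block
    return block[-p:] + block

def channel(x, h):
    # linear convolution with an FIR channel, truncated to len(x) outputs
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

def circular_conv(block, h):
    N = len(block)
    return [sum(h[k] * block[(n - k) % N] for k in range(len(h)))
            for n in range(N)]

blk = [1, -2, 3, 0, 5, -1, 2, 4]   # one OFDM block (invented samples)
h = [1.0, 0.5, 0.25]               # invented 3-tap channel impulse response
p = 2                              # prefix length >= channel memory (len(h) - 1)

rx = channel(add_cp(blk, p), h)[p:]   # transmit with CP, discard CP at receiver
# the received block is exactly the circular convolution of block and channel
assert rx == circular_conv(blk, h)
```

The paper's observation is that the discarded prefix samples are not pure overhead: since they repeat part of the block, they carry exploitable (if weak) redundancy.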
A Bayesian approach to model uncertainty
International Nuclear Information System (INIS)
Buslik, A.
1994-01-01
A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given
Flexible Procurement of Services with Uncertain Durations using Redundancy
Stein, S; Gerding, E; Rogers, A; Larson, K; Jennings, NR
2009-01-01
Emerging service-oriented technologies allow software agents to automatically procure distributed services to complete complex tasks. However, in many application scenarios, service providers demand financial remuneration, execution times are uncertain and consumers have deadlines for their tasks. In this paper, we address these issues by developing a novel approach that dynamically procures multiple, redundant services over time, in order to ensure success by the deadline. Specifically, we f...
Toward an Integrated Design, Inspection and Redundancy Research Program.
1984-01-01
William Creelman (National Marine Service, St. Louis, Missouri); William H. Silcox (Standard Oil Company of California, San Francisco, California). ... develop physical models and generic tools for analyzing the effects of redundancy, reserve strength, and residual strength on the system behavior of marine ... probabilistic analyses to be applicable to real-world problems, this program needs to provide the deterministic physical models and generic tools upon ...
Image Registration Using Redundant Wavelet Transforms
National Research Council Canada - National Science Library
Brown, Richard
2001-01-01
... In our research, we present a fundamentally new wavelet-based registration algorithm utilizing redundant transforms and a masking process to suppress the adverse effects of noise and improve processing efficiency ...
Berg, Melanie D.; Kim, Hak S.; Phan, Anthony M.; Seidleck, Christina M.; Label, Kenneth A.; Pellish, Jonathan A.; Campola, Michael J.
2016-01-01
We present the challenges that arise when using redundant clock domains due to their time skew. Radiation data show that a single clock domain provides an improved triple modular redundant (TMR) scheme over redundant clocks.
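The voting core of any TMR scheme is a bitwise majority function; the clock-skew effects the abstract studies are not modeled in this sketch, and the data word is invented.

```python
def tmr_vote(a, b, c):
    # bitwise majority: each output bit is 1 when at least two copies agree on 1
    return (a & b) | (a & c) | (b & c)

word = 0b10110110
upset = word ^ 0b00001000     # single-event upset flips one bit in one copy
assert tmr_vote(word, word, upset) == word
assert tmr_vote(upset, word, word) == word
```

Majority voting masks any single corrupted copy; the abstract's point is that skew between redundant clock domains can undermine this guarantee in hardware.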
Redundancy for electric motors in spacecraft applications
Smith, Robert J.; Flew, Alastair R.
1986-01-01
The parts of electric motors which should be duplicated in order to provide maximum reliability in spacecraft application are identified. Various common types of redundancy are described. The advantages and disadvantages of each are noted. The principal types are illustrated by reference to specific examples. For each example, constructional details, basic performance data and failure modes are described, together with a discussion of the suitability of particular redundancy techniques to motor types.
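As a minimal illustration of why duplicating motor parts helps, the survival probability of m identical units in active parallel redundancy, assuming independent failures (illustrative numbers, not taken from the entry):

```python
# Reliability of m identical units in active parallel redundancy:
# the assembly fails only if every unit fails during the mission.
# r is each unit's survival probability; independence is assumed.
def parallel_reliability(r: float, m: int) -> float:
    return 1.0 - (1.0 - r) ** m

# Duplicating a 0.9-reliable winding raises assembly reliability to 0.99.
assert abs(parallel_reliability(0.9, 2) - 0.99) < 1e-12
```

The trade-offs the entry catalogs (mass, cost, failure modes introduced by the switching itself) are exactly what this idealized formula leaves out.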
Dynamic Control of Kinematically Redundant Robotic Manipulators
Directory of Open Access Journals (Sweden)
Erling Lunde
1987-07-01
Several methods for task space control of kinematically redundant manipulators have been proposed in the literature. Most of these methods are based on a kinematic analysis of the manipulator. In this paper we propose a control algorithm in which we are especially concerned with the manipulator dynamics. The algorithm is particularly well suited for the class of redundant manipulators consisting of a relatively small manipulator mounted on a larger positioning part.
System Behavior Models: A Survey of Approaches
2016-06-01
A spiral model was chosen for researching and structuring this thesis, shown in Figure 1. This approach allowed multiple iterations of source material ... applications and refining through iteration. Scope: the research is limited to a literature review, limited ...
Rubio-Fernández, Paula
2016-01-01
Color adjectives tend to be used redundantly in referential communication. I propose that redundant color adjectives are often intended to exploit a color contrast in the visual context and hence facilitate object identification, despite not being necessary to establish unique reference. Two language-production experiments investigated two types of factors that may affect the use of redundant color adjectives: factors related to the efficiency of color in the visual context and factors relate...
Learning Actions Models: Qualitative Approach
DEFF Research Database (Denmark)
Bolander, Thomas; Gierasimczuk, Nina
2015-01-01
In dynamic epistemic logic, actions are described using action models. In this paper we introduce a framework for studying learnability of action models from observations. We present first results concerning propositional action models. First we check two basic learnability criteria: finite ident...
Case studies in configuration control for redundant robots
Seraji, H.; Lee, T.; Colbaugh, R.; Glass, K.
1989-01-01
A simple approach to configuration control of redundant robots is presented. The redundancy is utilized to control the robot configuration directly in task space, where the task will be performed. A number of task-related kinematic functions are defined and combined with the end-effector coordinates to form a set of configuration variables. An adaptive control scheme is then utilized to ensure that the configuration variables track the desired reference trajectories as closely as possible. Simulation results are presented to illustrate the control scheme. The scheme has also been implemented for direct online control of a PUMA industrial robot, and experimental results are presented. The simulation and experimental results validate the configuration control scheme for performing various realistic tasks.
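A common textbook mechanism behind redundancy resolution of this kind is pseudoinverse resolved-rate control with a null-space term. The sketch below is a hypothetical 3-link planar arm with assumed unit link lengths, not Seraji's adaptive scheme or the PUMA setup: it tracks an end-effector target while nudging one configuration variable (the second joint angle) toward a preferred value through the null space of the task Jacobian:

```python
import numpy as np

# Resolved-rate redundancy resolution for a 3-link planar arm (one extra
# DOF for a 2-D position task). Joint velocity = pseudoinverse task
# solution + null-space term shaping the configuration.
L = np.array([1.0, 1.0, 1.0])          # assumed link lengths

def fk(q):                              # end-effector position
    s = np.cumsum(q)
    return np.array([np.sum(L * np.cos(s)), np.sum(L * np.sin(s))])

def jacobian(q):
    s = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(s[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(s[i:]))
    return J

q = np.array([0.3, 0.4, 0.2])
target = np.array([1.5, 1.2])
q2_pref = 0.8                           # preferred elbow configuration
for _ in range(200):
    J = jacobian(q)
    Jp = np.linalg.pinv(J)
    v = 0.5 * (target - fk(q))          # task-space velocity command
    grad = np.zeros(3)
    grad[1] = 0.5 * (q2_pref - q[1])    # configuration-shaping term
    q = q + 0.1 * (Jp @ v + (np.eye(3) - Jp @ J) @ grad)

assert np.linalg.norm(fk(q) - target) < 1e-2
```

The null-space projector (I - J⁺J) guarantees, to first order, that the configuration-shaping motion does not disturb the end-effector task, which is the same division of labor the configuration-control formulation exploits.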
Multisensory processing of redundant information in go/no-go and choice responses
DEFF Research Database (Denmark)
Blurton, Steven Paul; Greenlee, Mark W.; Gondan, Matthias
2014-01-01
In multisensory research, faster responses are commonly observed when multimodal stimuli are presented as compared to unimodal target presentations. This so-called redundant signals effect can be explained by several frameworks including separate activation and coactivation models. ... processes (Schwarz, 1994) within two absorbing barriers. The diffusion superposition model accurately describes mean and variance of response times as well as the proportion of correct responses observed in the two tasks. Linear superposition seems, thus, to be a general principle in integration ... of redundant information provided by different sensory channels and is not restricted to simple responses. The results connect existing theories on multisensory integration with theories on choice behavior.
Splenic trauma: Is splenectomy redundant?
Directory of Open Access Journals (Sweden)
S Tandon
2013-01-01
A 41-year-old male, a serving air warrior, sustained blunt abdominal trauma; CECT revealed a grade III splenic injury. He was managed conservatively with a good clinical outcome. Conservatism is the new approach to splenic trauma.
O'Boyle, Ernest H; Forsyth, Donelson R; Banks, George C; Story, Paul A; White, Charles D
2015-12-01
We examined the relationships between Machiavellianism, narcissism, and psychopathy - the three traits of the Dark Triad (DT) - and the Five-Factor Model (FFM) of personality. The review identified 310 independent samples drawn from 215 sources and yielded information pertaining to global trait relationships and facet-level relationships. We used meta-analysis to examine (a) the bivariate relations between the DT and the five global traits and 30 facets of the FFM, (b) the relative importance of each of the FFM global traits in predicting the DT, and (c) the relationship between the DT and the FFM facets identified in translational models of narcissism and psychopathy. These analyses identified consistent and theoretically meaningful associations between the DT traits and the facets of the FFM. In a relative importance analysis, the five traits of the FFM accounted for much of the variance in each of Machiavellianism, narcissism, and psychopathy, and facet-level analyses identified specific facets of each FFM trait that were consistently associated with narcissism (e.g., angry/hostility, modesty) and psychopathy (e.g., straightforwardness, deliberation). The FFM explained nearly all of the variance in psychopathy (R²c = .88) and a substantial portion of the variance in narcissism (R²c = .42). © 2014 Wiley Periodicals, Inc.
International Nuclear Information System (INIS)
Cao, Dingzhou; Murat, Alper; Chinnam, Ratna Babu
2013-01-01
This paper proposes a decomposition-based approach to exactly solve the multi-objective redundancy allocation problem for series-parallel systems. The redundancy allocation problem is a form of reliability optimization and has been the subject of many prior studies. The majority of these earlier studies treat it as a single-objective problem, maximizing the system reliability or minimizing the cost under certain constraints. The few studies that treated it as a multi-objective optimization problem relied on meta-heuristic solution approaches. However, meta-heuristic approaches have significant limitations: they do not guarantee that Pareto points are optimal and, more importantly, they may not identify all the Pareto-optimal points. In this paper, we treat the redundancy allocation problem as a multi-objective problem, as is typical in practice. We decompose the original problem into several multi-objective sub-problems, efficiently and exactly solve the sub-problems, and then systematically combine the solutions. The decomposition-based approach can efficiently generate all the Pareto-optimal solutions for redundancy allocation problems. Experimental results demonstrate the effectiveness and efficiency of the proposed method over meta-heuristic methods on a numerical example taken from the literature.
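A brute-force version of multi-objective RAP on a toy instance shows concretely what "all the Pareto-optimal points" means. The reliabilities and costs below are made up, and exhaustive enumeration stands in for the paper's decomposition, which is what makes larger instances tractable:

```python
from itertools import product

# Toy series-parallel redundancy allocation: 2 subsystems in series,
# each with 1-3 identical parallel components. Objectives: maximize
# system reliability, minimize cost. Illustrative numbers only.
r = [0.8, 0.9]      # component reliability per subsystem
c = [2.0, 3.0]      # component cost per subsystem

def evaluate(n):    # n = (units in subsystem 1, units in subsystem 2)
    rel = 1.0
    for ri, ni in zip(r, n):
        rel *= 1.0 - (1.0 - ri) ** ni   # parallel block reliability
    cost = sum(ci * ni for ci, ni in zip(c, n))
    return rel, cost

designs = {n: evaluate(n) for n in product(range(1, 4), repeat=2)}
# A design is Pareto-optimal if no other design is at least as reliable
# AND at least as cheap (and differs in at least one objective).
pareto = [n for n, (rel, cost) in designs.items()
          if not any(r2 >= rel and c2 <= cost and (r2, c2) != (rel, cost)
                     for r2, c2 in designs.values())]
```

On this instance the enumeration yields six Pareto points, e.g. (2, 1) survives while (1, 2) is dominated: two cheap components in the weaker subsystem buy more reliability per unit cost than one extra expensive component in the stronger one.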
Incident detection and isolation in drilling using analytical redundancy relations
DEFF Research Database (Denmark)
Willersrud, Anders; Blanke, Mogens; Imsland, Lars
2015-01-01
... must be avoided. This paper employs model-based diagnosis using analytical redundancy relations to obtain residuals which are affected differently by the different incidents. Residuals are found to be non-Gaussian - they follow a multivariate t-distribution - hence, a dedicated generalized likelihood ... measurements available. In the latter case, isolation capability is shown to be reduced to group-wise isolation, but the method would still detect all serious events with the prescribed false alarm probability ...
Global energy modeling - A biophysical approach
Energy Technology Data Exchange (ETDEWEB)
Dale, Michael
2010-09-15
This paper contrasts the standard economic approach to energy modelling with energy models using a biophysical approach. Neither of these approaches includes changing energy-returns-on-investment (EROI) due to declining resource quality or the capital intensive nature of renewable energy sources. Both of these factors will become increasingly important in the future. An extension to the biophysical approach is outlined which encompasses a dynamic EROI function that explicitly incorporates technological learning. The model is used to explore several scenarios of long-term future energy supply especially concerning the global transition to renewable energy sources in the quest for a sustainable energy system.
Intuitive theories of information: beliefs about the value of redundancy.
Soll, J B
1999-03-01
In many situations, quantity estimates from multiple experts or diagnostic instruments must be collected and combined. Normatively, and all else equal, one should value information sources that are nonredundant, in the sense that correlation in forecast errors should be minimized. Past research on the preference for redundancy has been inconclusive. While some studies have suggested that people correctly place higher value on uncorrelated inputs when collecting estimates, others have shown that people either ignore correlation or, in some cases, even prefer it. The present experiments show that the preference for redundancy depends on one's intuitive theory of information. The most common intuitive theory identified is the Error Tradeoff Model (ETM), which explicitly distinguishes between measurement error and bias. According to ETM, measurement error can only be averaged out by consulting the same source multiple times (normatively false), and bias can only be averaged out by consulting different sources (normatively true). As a result, ETM leads people to prefer redundant estimates when the ratio of measurement error to bias is relatively high. Other participants favored different theories. Some adopted the normative model, while others were reluctant to mathematically average estimates from different sources in any circumstance. In a post hoc analysis, science majors were more likely than others to subscribe to the normative model. While tentative, this result lends insight into how intuitive theories might develop and also has potential ramifications for how statistical concepts such as correlation might best be learned and internalized. Copyright 1999 Academic Press.
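The normative core of the entry, that correlated (redundant) estimates average out less error than uncorrelated ones, is easy to verify numerically. This Monte Carlo sketch assumes zero-mean, unit-variance forecaster errors with correlation rho (all numbers illustrative):

```python
import numpy as np

# Mean squared error of the average of two estimates whose errors have
# correlation rho. Var of the average is (1 + rho) / 2: 0.5 when the
# sources are independent, 1.0 when fully redundant (no benefit at all).
rng = np.random.default_rng(0)

def mse_of_average(rho, n=200_000):
    cov = [[1.0, rho], [rho, 1.0]]
    e = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return np.mean(e.mean(axis=1) ** 2)

# Averaging uncorrelated sources beats averaging redundant ones.
assert mse_of_average(0.0) < mse_of_average(0.9)
```

The Error Tradeoff Model described above amounts to believing this correlation structure applies to bias but not to measurement error, which is why its adherents sometimes prefer redundant sources.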
International Nuclear Information System (INIS)
Chambari, Amirhossain; Najafi, Amir Abbas; Rahmati, Seyed Habib A.; Karimi, Aida
2013-01-01
The redundancy allocation problem (RAP) is an important reliability optimization problem. This paper studies a specific RAP in which redundancy strategies are chosen. To do so, the choice of redundancy strategy between active and cold standby is treated as a decision variable. The goal is to select the redundancy strategy, component, and redundancy level for each subsystem such that the system reliability is maximized. Since RAP is an NP-hard problem, we propose an efficient simulated annealing algorithm (SA) to solve it. In addition, to evaluate the performance of the proposed algorithm, it is compared with well-known algorithms from the literature on different test problems. The results of the performance analysis show a relatively satisfactory efficiency of the proposed SA algorithm.
A Unified Approach to Modeling and Programming
DEFF Research Database (Denmark)
Madsen, Ole Lehrmann; Møller-Pedersen, Birger
2010-01-01
SIMULA was a language for modeling and programming and provided a unified approach to modeling and programming, in contrast to methodologies based on structured analysis and design. The current development seems to be going in the direction of separation of modeling and programming. The goal of this paper is to go back to the future and get inspiration from SIMULA and propose a unified approach. In addition to reintroducing the contributions of SIMULA and the Scandinavian approach to object-oriented programming, we do this by discussing a number of issues in modeling and programming and argue why we ...
International Nuclear Information System (INIS)
Nourelfath, Mustapha; Châtelet, Eric; Nahas, Nabil
2012-01-01
This paper formulates a joint redundancy and imperfect preventive maintenance planning optimization model for series–parallel multi-state degraded systems. Non identical multi-state components can be used in parallel to improve the system availability by providing redundancy in subsystems. Multiple component choices are available in the market for each subsystem. The status of each component is considered to degrade with use. The objective is to determine jointly the maximal-availability series–parallel system structure and the appropriate preventive maintenance actions, subject to a budget constraint. System availability is defined as the ability to satisfy consumer demand that is represented as a piecewise cumulative load curve. A procedure is used, based on Markov processes and universal moment generating function, to evaluate the multi-state system availability and the cost function. A heuristic approach is also proposed to solve the formulated problem. This heuristic is based on a combination of space partitioning, genetic algorithms (GA) and tabu search (TS). After dividing the search space into a set of disjoint subsets, this approach uses GA to select the subspaces, and applies TS to each selected sub-space.
Redundant actuator development study. [flight control systems for supersonic transport aircraft
Ryder, D. R.
1973-01-01
Current and past supersonic transport configurations are reviewed to assess redundancy requirements for future airplane control systems. Secondary actuators used in stability augmentation systems will probably be the most critical actuator application and require the highest level of redundancy. Two methods of actuator redundancy mechanization have been recommended for further study. Math models of the recommended systems have been developed for use in future computer simulations. A long range plan has been formulated for actuator hardware development and testing in conjunction with the NASA Flight Simulator for Advanced Aircraft.
Reliability optimization of series–parallel systems with mixed redundancy strategy in subsystems
International Nuclear Information System (INIS)
Abouei Ardakan, Mostafa; Zeinal Hamadani, Ali
2014-01-01
Traditionally in the redundancy allocation problem (RAP), it is assumed that the redundant components are used according to predefined active or standby strategies. Recently, some studies have considered situations in which both active and standby strategies can be used in a single system. However, these studies assume that the redundancy strategy for each subsystem can be either active or standby, and determine the best strategy for each subsystem using a suitable mathematical model. As an extension of this assumption, a novel strategy, a combination of the traditional active and standby strategies, is introduced. The new strategy is called the mixed strategy; it uses both active and cold-standby units in one subsystem simultaneously. The problem is therefore to determine the component type, redundancy level, and number of active and cold-standby units for each subsystem in order to maximize the system reliability. To make the model more practical, the problem is formulated with imperfect switching of cold-standby redundant components and a k-Erlang time-to-failure (TTF) distribution. As the optimization of RAP belongs to the NP-hard class of problems, a genetic algorithm (GA) is developed. The new strategy and the proposed GA are applied to a well-known test problem from the literature, which leads to interesting results. - Highlights: • In this paper the redundancy allocation problem (RAP) for a series-parallel system is considered. • Traditionally there are two main strategies for redundant components, namely active and standby. • In this paper a new redundancy strategy, called the "mixed" redundancy strategy, is introduced. • Computational experiments demonstrate that implementing the new strategy leads to interesting results.
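A quick Monte Carlo comparison illustrates why a cold-standby unit can beat an active spare, the asymmetry that makes the choice of strategy worth optimizing. Perfect switching and exponential lifetimes are assumed here for simplicity, whereas the paper models imperfect switching and k-Erlang lifetimes:

```python
import random

# Two identical components, failure rate 1.0, mission length 1.0.
# Active redundancy: both run from t=0, system survives until the LAST
# failure. Cold standby: the spare starts only when the first unit dies,
# so the system lifetime is the SUM of the two lifetimes.
random.seed(0)
rate, mission, n = 1.0, 1.0, 200_000

def lifetime():
    return random.expovariate(rate)

active = sum(max(lifetime(), lifetime()) > mission for _ in range(n)) / n
standby = sum(lifetime() + lifetime() > mission for _ in range(n)) / n
assert standby > active   # the idle spare does not age
```

Analytically, active reliability is 1 - (1 - e⁻¹)² ≈ 0.600 while cold standby gives 2e⁻¹ ≈ 0.736; the mixed strategy of the entry interpolates between these regimes within a single subsystem.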
A control method for manipulators with redundancy
International Nuclear Information System (INIS)
Furusho, Junji; Usui, Hiroyuki
1989-01-01
Redundant manipulators have more ability than nonredundant ones in many aspects such as avoiding obstacles, avoiding singular states, etc. In this paper, a control algorithm for redundant manipulators working under the circumstance in the presence of obstacles is presented. First, the measure of manipulability for robot manipulators under obstacle circumstances is defined. Then, the control algorithm for the obstacle avoidance is derived by using this measure of manipulability. The obstacle avoidance and the maintenance of good posture are simultaneously achieved by this algorithm. Lastly, an experiment and simulation results using an eight degree of freedom manipulator are shown. (author)
Redundancy Elimination in DTN via ACK Mechanism
Directory of Open Access Journals (Sweden)
Xiqing Zhang
2015-08-01
The traditional routing protocols for delay tolerant networks (DTN) usually take the strategy of spreading multiple copies of one message through the network. When one copy reaches the destination, the transmission of the other copies not only wastes bandwidth but also deprives other messages of opportunities for transmission. This paper proposes a mechanism to eliminate the redundant copies. By adding an acknowledgement field to the packet header to delete redundant copies, it can reduce the network overhead while improving the delivery ratio. Simulation results confirm that the proposed method can improve the performance of the epidemic and Spray and Wait routing protocols.
In-flight performance optimization for rotorcraft with redundant controls
Ozdemir, Gurbuz Taha
A conventional helicopter has limits on performance at high speeds because of the limitations of the main rotor, such as compressibility issues on the advancing side or stall issues on the retreating side. Auxiliary lift and thrust components have been suggested to improve the performance of the helicopter substantially by reducing the loading on the main rotor. Such a configuration is called a compound rotorcraft. Rotor speed can also be varied to improve helicopter performance. In addition to improved performance, compound rotorcraft and variable RPM can provide a much larger degree of control redundancy. This additional redundancy gives the opportunity to further enhance performance and handling qualities. A flight control system is designed to perform in-flight optimization of redundant control effectors on a compound rotorcraft in order to minimize power required and extend range. This "Fly to Optimal" (FTO) control law is tested in simulation using the GENHEL model. Models of the UH-60, of a compound version of the UH-60A with lifting wing and vectored thrust ducted propeller (VTDP), and of a generic compound version of the UH-60A with lifting wing and propeller were developed and tested in simulation. A model-following dynamic inversion controller is implemented for inner-loop control of roll, pitch, yaw, heave, and rotor RPM. An outer-loop controller regulates airspeed and flight path during optimization. A Golden Section search method was used to find the optimal rotor RPM on a conventional helicopter, where the single redundant control effector is rotor RPM. The FTO builds on the Adaptive Performance Optimization (APO) method of Gilyard, which performs low-frequency sweeps on a redundant control for a fixed-wing aircraft. A method based on the APO method was used to optimize trim on a compound rotorcraft with several redundant control effectors. The controller can be used to optimize rotor RPM and compound control effectors through flight tests or simulations in order to ...
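The Golden Section search mentioned above is a standard derivative-free method for minimizing a unimodal function of one variable, here the single redundant effector (rotor RPM). The power-required curve below is a made-up convex stand-in, not a rotorcraft model:

```python
import math

# Golden-section search: shrink the bracket [a, b] by the inverse golden
# ratio each step, reusing one interior evaluation per iteration.
def golden_section_min(f, a, b, tol=1e-6):
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2.0

# Notional power-required curve (kW) with a minimum at 220 RPM.
power = lambda rpm: 0.004 * (rpm - 220.0) ** 2 + 900.0
assert abs(golden_section_min(power, 180.0, 260.0) - 220.0) < 1e-3
```

In flight the "function evaluation" is a trim-and-measure step, which is why each probe is expensive and a bracketing method that needs only one new evaluation per iteration is attractive.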
Markov Chain Monte Carlo (MCMC) methods for parameter estimation of a novel hybrid redundant robot
International Nuclear Information System (INIS)
Wang Yongbo; Wu Huapeng; Handroos, Heikki
2011-01-01
This paper presents a statistical method for the calibration of a redundantly actuated hybrid serial-parallel robot, the IWR (Intersector Welding Robot). The robot under study will be used to carry out welding, machining, and remote handling for the assembly of the vacuum vessel of the International Thermonuclear Experimental Reactor (ITER). The robot has ten degrees of freedom (DOF), among which six DOF are contributed by the parallel mechanism and the rest by the serial mechanism. In this paper, a kinematic error model which involves 54 unknown geometrical error parameters is developed for the proposed robot. Based on this error model, the mean values of the unknown parameters are statistically analyzed and estimated by means of a Markov Chain Monte Carlo (MCMC) approach. The computer simulation is conducted by introducing random geometric errors and measurement poses which represent the corresponding real physical behaviors. The simulation results for the marginal posterior distributions of the estimated model parameters indicate that the method is reliable and robust.
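The MCMC estimation idea can be illustrated with a one-parameter Metropolis-Hastings sketch. The synthetic data, Gaussian likelihood, and flat prior are all assumptions made for the example; the paper's model has 54 geometric error parameters and real measurement poses:

```python
import math
import random

# Infer a single geometric-error offset from noisy "measurements" with a
# random-walk Metropolis sampler targeting the posterior.
random.seed(1)
true_offset, noise_sd = 0.35, 0.05
data = [true_offset + random.gauss(0.0, noise_sd) for _ in range(50)]

def log_post(theta):    # flat prior + Gaussian measurement likelihood
    return -sum((x - theta) ** 2 for x in data) / (2.0 * noise_sd ** 2)

theta, samples = 0.0, []
for i in range(10_000):
    prop = theta + random.gauss(0.0, 0.02)        # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop                               # accept
    if i >= 2_000:                                 # discard burn-in
        samples.append(theta)

posterior_mean = sum(samples) / len(samples)
assert abs(posterior_mean - true_offset) < 0.05
```

The marginal posterior (here just the histogram of `samples`) is what the paper inspects per parameter: its mean gives the calibrated value and its spread quantifies the remaining uncertainty.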
Multiple Model Approaches to Modelling and Control,
DEFF Research Database (Denmark)
... on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating ... , and allows direct incorporation of high-level and qualitative plant knowledge into the model. These advantages have proven to be very appealing for industrial applications, and the practical, intuitively appealing nature of the framework is demonstrated in chapters describing applications of local methods ... to problems in the process industries, biomedical applications and autonomous systems. The successful application of the ideas to demanding problems is already encouraging, but creative development of the basic framework is needed to better allow the integration of human knowledge with automated learning ...
Geometrical approach to fluid models
International Nuclear Information System (INIS)
Kuvshinov, B.N.; Schep, T.J.
1997-01-01
Differential geometry based upon the Cartan calculus of differential forms is applied to investigate invariant properties of equations that describe the motion of continuous media. The main feature of this approach is that physical quantities are treated as geometrical objects. The geometrical notion of invariance is introduced in terms of Lie derivatives and a general procedure for the construction of local and integral fluid invariants is presented. The solutions of the equations for invariant fields can be written in terms of Lagrange variables. A generalization of the Hamiltonian formalism for finite-dimensional systems to continuous media is proposed. Analogously to finite-dimensional systems, Hamiltonian fluids are introduced as systems that annihilate an exact two-form. It is shown that Euler and ideal, charged fluids satisfy this local definition of a Hamiltonian structure. A new class of scalar invariants of Hamiltonian fluids is constructed that generalizes the invariants that are related with gauge transformations and with symmetries (Noether). copyright 1997 American Institute of Physics
Porta, Alberto; Bari, Vlasta; De Maria, Beatrice; Takahashi, Anielle C M; Guzzetti, Stefano; Colombo, Riccardo; Catai, Aparecida M; Raimondi, Ferdinando; Faes, Luca
2017-11-01
Objective: Indexes assessing the balance between redundancy and synergy were hypothesized to be helpful in characterizing cardiovascular control from spontaneous beat-to-beat variations of heart period (HP), systolic arterial pressure (SAP), and respiration (R). Methods: Net redundancy/synergy indexes were derived according to predictability and transfer entropy decomposition strategies via a multivariate linear regression approach. Indexes were tested in two protocols inducing modifications of the cardiovascular regulation via baroreflex loading/unloading (i.e., head-down tilt at -25° and graded head-up tilt at 15°, 30°, 45°, 60°, 75°, and 90°, respectively). The net redundancy/synergy of SAP and R to HP and of HP and R to SAP were estimated over stationary sequences of 256 successive values. Results: We found that: 1) regardless of the target (i.e., HP or SAP) redundancy was prevalent over synergy and this prevalence was independent of type and magnitude of the baroreflex challenge; 2) the prevalence of redundancy disappeared when decoupling inputs from output via a surrogate approach; 3) net redundancy was under autonomic control given that it varied in proportion to the vagal withdrawal during graded head-up tilt; and 4) conclusions held regardless of the decomposition strategy. Conclusion: Net redundancy indexes can monitor changes of cardiovascular control from a perspective completely different from that provided by more traditional univariate and multivariate methods. Significance: Net redundancy measures might provide a practical tool to quantify the reservoir of effective cardiovascular regulatory mechanisms sharing causal influences over a target variable.
Current approaches to gene regulatory network modelling
Directory of Open Access Journals (Sweden)
Brazma Alvis
2007-09-01
Many different approaches have been developed to model and simulate gene regulatory networks. We proposed the following categories for gene regulatory network models: network parts lists, network topology models, network control logic models, and dynamic models. Here we will describe some examples for each of these categories. We will study the topology of gene regulatory networks in yeast in more detail, comparing a direct network derived from transcription factor binding data and an indirect network derived from genome-wide expression data in mutants. Regarding the network dynamics we briefly describe discrete and continuous approaches to network modelling, then describe a hybrid model called Finite State Linear Model and demonstrate that some simple network dynamics can be simulated in this model.
Coded aperture imaging with uniformly redundant arrays
International Nuclear Information System (INIS)
Fenimore, E.E.; Cannon, T.M.
1980-01-01
A system is described which uses uniformly redundant arrays to image non-focusable radiation. The array is used in conjunction with a balanced correlation technique to provide a system with no artifacts so that virtually limitless signal-to-noise ratio is obtained with high transmission characteristics. The array is mosaicked to reduce required detector size over conventional array detectors. 15 claims
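The artifact-free property of uniformly redundant arrays comes from the perfectly flat off-peak correlation of quadratic-residue (Legendre) sequences under balanced decoding. A one-dimensional sketch with p = 7 (the real imaging system uses a 2-D mosaicked array, so this is only the core correlation identity):

```python
# 1-D uniformly redundant array from quadratic residues mod p = 7,
# decoded by balanced correlation: peak at zero shift, flat -1 sidelobes.
p = 7
residues = {(i * i) % p for i in range(1, p)}
a = [1 if i in residues else 0 for i in range(p)]   # open/closed aperture
g = [2 * ai - 1 for ai in a]                        # balanced decoder

corr = [sum(a[i] * g[(i + k) % p] for i in range(p)) for k in range(p)]
assert corr[0] == 3 and all(c == -1 for c in corr[1:])
```

Because every off-peak shift correlates to the same constant, a point source reconstructs as a clean delta on a uniform background, which is why signal-to-noise is limited only by photon statistics rather than by decoding artifacts.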
On Redundancy in Describing Linguistic Systems
Directory of Open Access Journals (Sweden)
Vladimir Borissov Pericliev
2015-12-01
The notion of a system of linguistic elements figures prominently in most post-Saussurean linguistics up to the present. A "system" is the network of the contrastive (or distinctive) features each element in the system bears to the remaining elements. The meaning (valeur) of each element in the system is the set of features that are necessary and jointly sufficient to distinguish this element from all others. The paper addresses the problem of "redundancy", i.e. the occurrence of features that are not strictly necessary in describing an element in a system. Redundancy is shown to smuggle its way into descriptions of linguistic systems, an infelicitous practice illustrated with some examples from the literature (e.g. the classical phonemic analysis of Russian by Cherry, Halle, and Jakobson, 1953). The logic and psychology of the occurrence of redundancy are briefly sketched, and it is shown that, in addition to some other problems, redundancy leads to a huge and unresolvable ambiguity in descriptions of linguistic systems (the Buridan's ass problem).
Impedance Control of a Redundant Parallel Manipulator
DEFF Research Database (Denmark)
Méndez, Juan de Dios Flores; Schiøler, Henrik; Madsen, Ole
2017-01-01
This paper presents the design of Impedance Control for a redundantly actuated Parallel Kinematic Manipulator. The proposed control is based on treating each limb as a single system and their connection through the internal interaction forces. The controller introduces a stiffness and damping ...
REDUNDANT ARRAY CONFIGURATIONS FOR 21 cm COSMOLOGY
Energy Technology Data Exchange (ETDEWEB)
Dillon, Joshua S.; Parsons, Aaron R., E-mail: jsdillon@berkeley.edu [Department of Astronomy, UC Berkeley, Berkeley, CA (United States)
2016-08-01
Realizing the potential of 21 cm tomography to statistically probe the intergalactic medium before and during the Epoch of Reionization requires large telescopes and precise control of systematics. Next-generation telescopes are now being designed and built to meet these challenges, drawing lessons from first-generation experiments that showed the benefits of densely packed, highly redundant arrays—in which the same mode on the sky is sampled by many antenna pairs—for achieving high sensitivity, precise calibration, and robust foreground mitigation. In this work, we focus on the Hydrogen Epoch of Reionization Array (HERA) as an interferometer with a dense, redundant core designed following these lessons to be optimized for 21 cm cosmology. We show how modestly supplementing or modifying a compact design like HERA’s can still deliver high sensitivity while enhancing strategies for calibration and foreground mitigation. In particular, we compare the imaging capability of several array configurations, both instantaneously (to address instrumental and ionospheric effects) and with rotation synthesis (for foreground removal). We also examine the effects that configuration has on calibratability using instantaneous redundancy. We find that improved imaging with sub-aperture sampling via “off-grid” antennas and increased angular resolution via far-flung “outrigger” antennas is possible with a redundantly calibratable array configuration.
Bauer, Eric; Eustace, Dan
2012-01-01
"While geographic redundancy can obviously be a huge benefit for disaster recovery, it is far less obvious what benefit is feasible and likely for more typical non-catastrophic hardware, software, and human failures. Georedundancy and Service Availability provides both a theoretical and practical treatment of the feasible and likely benefits of geographic redundancy for both service availability and service reliability. The text provides network/system planners, IS/IT operations folks, system architects, system engineers, developers, testers, and other industry practitioners with a general discussion about the capital expense/operating expense tradeoff that frames system redundancy and georedundancy"--
Distributed simulation a model driven engineering approach
Topçu, Okan; Oğuztüzün, Halit; Yilmaz, Levent
2016-01-01
Backed by substantive case studies, the novel approach to software engineering for distributed simulation outlined in this text demonstrates the potent synergies between model-driven techniques, simulation, intelligent agents, and computer systems development.
Service creation: a model-based approach
Quartel, Dick; van Sinderen, Marten J.; Ferreira Pires, Luis
1999-01-01
This paper presents a model-based approach to support service creation. In this approach, services are assumed to be created from (available) software components. The creation process may involve multiple design steps in which the requested service is repeatedly decomposed into more detailed
International Nuclear Information System (INIS)
Safari, Jalal
2012-01-01
This paper proposes a variant of the Non-dominated Sorting Genetic Algorithm (NSGA-II) to solve a novel mathematical model for multi-objective redundancy allocation problems (MORAP). Most research on the redundancy allocation problem (RAP) has focused on single-objective optimization, with only limited work addressing multi-objective optimization. Moreover, all mathematical multi-objective models of the general RAP assume that the type of redundancy strategy for each subsystem is predetermined and known a priori. Active redundancy has traditionally received greater attention; in practice, however, both active and cold-standby redundancies may be used within a particular system design, and the choice of redundancy strategy then becomes an additional decision variable. Thus, the proposed model and solution method select the best redundancy strategy, type of components, and level of redundancy for each subsystem so as to maximize system reliability and minimize total system cost under system-level constraints. This problem belongs to the NP-hard class. The paper presents a second-generation Multi-Objective Evolutionary Algorithm (MOEA), NSGA-II, to find the best solutions for the given problem. The proposed algorithm demonstrates the ability to identify a set of optimal solutions (the Pareto front), which provides the Decision Maker (DM) with a complete picture of the optimal solution space. After the Pareto front is found, a procedure is used to select the best solution from it. Finally, the advantages of the presented multi-objective model and of the proposed algorithm are illustrated by solving test problems taken from the literature, and the robustness of the proposed NSGA-II is discussed.
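The two redundancy strategies the model chooses between have simple closed-form reliabilities for identical components, which is what a RAP solver evaluates inside its objectives. A minimal sketch (function names and the perfect-switching assumption for cold standby are illustrative, not taken from the paper):

```python
import math

def active_reliability(r, n):
    """Reliability of a subsystem with n identical components in active
    (hot) parallel redundancy: it fails only if all n components fail."""
    return 1.0 - (1.0 - r) ** n

def cold_standby_reliability(lam, t, n):
    """Reliability of one operating component plus n-1 cold spares, assuming
    perfect switching and exponential lifetimes with rate lam: the subsystem
    survives to time t if fewer than n failures occur (a Poisson sum)."""
    return sum((lam * t) ** k * math.exp(-lam * t) / math.factorial(k)
               for k in range(n))
```

Under these assumptions, cold standby outperforms active redundancy for the same component count (spares accrue no failure exposure while idle), which is why treating the strategy as a decision variable can change the optimal design.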
Models of galaxies - The modal approach
International Nuclear Information System (INIS)
Lin, C.C.; Lowe, S.A.
1990-01-01
The general viability of the modal approach to the spiral structure in normal spirals and the barlike structure in certain barred spirals is discussed. The usefulness of the modal approach in the construction of models of such galaxies is examined, emphasizing the adoption of a model appropriate to observational data for both the spiral structure of a galaxy and its basic mass distribution. 44 refs
Directory of Open Access Journals (Sweden)
Giulia Purpura
2017-07-01
Full Text Available. Multisensory processes permit the combination of several inputs coming from different sensory systems, allowing for a coherent representation of biological events and facilitating adaptation to the environment. For these reasons, their application in neurological and neuropsychological rehabilitation has expanded in recent decades. Recent studies on animal and human models have indicated that, on the one hand, multisensory integration matures gradually during post-natal life and its development is closely linked to environment and experience, and, on the other hand, that modality-specific information does not seem to benefit from redundancy across multiple sense modalities and is more readily perceived in unimodal than in multimodal stimulation. In this review, the development of multisensory processes is analyzed, highlighting the clinical effects of its manipulation for the rehabilitation of sensory disorders in animal and human models. In addition, new methods of early intervention based on a multisensory rehabilitation approach, and their application to different infant populations at risk of neurodevelopmental disabilities, are discussed.
Double Fault Detection of Cone-Shaped Redundant IMUs Using Wavelet Transformation and EPSA
Directory of Open Access Journals (Sweden)
Wonhee Lee
2014-02-01
A model-free hybrid fault diagnosis technique is proposed to improve the performance of single and double fault detection and isolation. The method combines the extended parity space approach (EPSA) with multi-resolution signal decomposition using the discrete wavelet transform (DWT). Conventional EPSA can detect and isolate single and double faults, but its detection and isolation performance is influenced by the relative size of the noise and the fault. In this paper, the DWT helps to cancel high-frequency sensor noise. The proposed technique improves the probability of detecting and isolating small faults by combining EPSA with the DWT. To verify the effectiveness of the proposed fault detection method, Monte Carlo numerical simulations are performed for a redundant inertial measurement unit (RIMU).
Double Fault Detection of Cone-Shaped Redundant IMUs Using Wavelet Transformation and EPSA
Lee, Wonhee; Park, Chan Gook
2014-01-01
A model-free hybrid fault diagnosis technique is proposed to improve the performance of single and double fault detection and isolation. The method combines the extended parity space approach (EPSA) with multi-resolution signal decomposition using the discrete wavelet transform (DWT). Conventional EPSA can detect and isolate single and double faults, but its detection and isolation performance is influenced by the relative size of the noise and the fault. In this paper, the DWT helps to cancel high-frequency sensor noise. The proposed technique improves the probability of detecting and isolating small faults by combining EPSA with the DWT. To verify the effectiveness of the proposed fault detection method, Monte Carlo numerical simulations are performed for a redundant inertial measurement unit (RIMU). PMID:24556675
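The parity-space core of EPSA (without the wavelet denoising stage) is compact enough to sketch: for m redundant sensors measuring an n-dimensional quantity through a geometry matrix H (m > n), the parity vector lives in the left null space of H, and a fault on sensor i pushes it along that sensor's signature direction. A hedged illustration, with a made-up 5-sensor geometry (real RIMUs use cone-shaped configurations and add the DWT denoising stage described above):

```python
import numpy as np

def parity_matrix(H):
    """Rows of V form an orthonormal basis of the left null space of H,
    so V @ H = 0 and the parity vector p = V @ y is insensitive to the
    true measured state, responding only to sensor faults and noise."""
    U, _, _ = np.linalg.svd(H)
    return U[:, H.shape[1]:].T

def isolate_fault(H, y):
    """Return (most likely faulty sensor index, parity residual norm)."""
    V = parity_matrix(H)
    p = V @ y
    # A fault of size f on sensor i gives p = f * V[:, i]; correlate p
    # with each sensor's signature direction to isolate the fault.
    scores = np.abs(V.T @ p) / np.linalg.norm(V, axis=0)
    return int(np.argmax(scores)), float(np.linalg.norm(p))
```

With noise-free data the residual norm is zero in the fault-free case, and a bias injected on one sensor is attributed to it exactly; noise blurs this picture, which is what motivates the wavelet pre-filtering.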
Multiscale approach to equilibrating model polymer melts
DEFF Research Database (Denmark)
Svaneborg, Carsten; Ali Karimi-Varzaneh, Hossein; Hojdis, Nils
2016-01-01
We present an effective and simple multiscale method for equilibrating Kremer-Grest model polymer melts of varying stiffness. In our approach, we progressively equilibrate the melt structure above the tube scale, inside the tube and finally at the monomeric scale. We make use of models designed…
Application of various FLD modelling approaches
Banabic, D.; Aretz, H.; Paraianu, L.; Jurco, P.
2005-07-01
This paper focuses on a comparison between different modelling approaches to predict the forming limit diagram (FLD) for sheet metal forming under a linear strain path using the recently introduced orthotropic yield criterion BBC2003 (Banabic D et al 2005 Int. J. Plasticity 21 493-512). The FLD models considered here are a finite element based approach, the well known Marciniak-Kuczynski model, the modified maximum force criterion according to Hora et al (1996 Proc. Numisheet'96 Conf. (Dearborn/Michigan) pp 252-6), Swift's diffuse (Swift H W 1952 J. Mech. Phys. Solids 1 1-18) and Hill's classical localized necking approach (Hill R 1952 J. Mech. Phys. Solids 1 19-30). The FLD of an AA5182-O aluminium sheet alloy has been determined experimentally in order to quantify the predictive capabilities of the models mentioned above.
Risk Modelling for Passages in Approach Channel
Directory of Open Access Journals (Sweden)
Leszek Smolarek
2013-01-01
Methods of multivariate statistics, stochastic processes, and simulation are used to identify and assess risk measures. This paper presents the use of generalized linear models and Markov models to study risks to ships along the approach channel. These models, combined with simulation testing, are used to determine the time required for continuous monitoring of endangered objects or the period at which the level of risk should be verified.
Does functional redundancy stabilize fish communities?
DEFF Research Database (Denmark)
Rice, Jake; Daan, Niels; Gislason, Henrik
2012-01-01
Functional redundancy is a community property thought to contribute to ecosystem resilience. It is argued that trophic (or other) functional groups with more species have more linkages and opportunities to buffer variation in abundance of individual species. We explored this concept with a 30-year … time-series of data on 83 species sampled in the International Bottom Trawl Survey. Our results were consistent with the hypothesis that functional redundancy leads to more stable (and by inference more resilient) communities. Over the time-series, trophic groups (assigned by diet, size (Lmax) group …, or both factors) with more species had lower coefficients of variation (CVs) in abundance and biomass than did trophic groups with fewer species. These findings are also consistent with Bernoulli's Law of Large Numbers, a rule that does not require complex ecological and evolutionary processes to produce…
Dynamically redundant particle components in mixtures
International Nuclear Information System (INIS)
Lukacs, B.; Martinas, K.
1984-10-01
Examples are shown of cases in which the number of different kinds of particles in a system is not necessarily equal to the number of particle degrees of freedom in the thermodynamical sense and, at the same time, the observed dynamics of the evolution of the system does not indicate a definite number of degrees of freedom. The possibility of introducing dynamically redundant particles is discussed. (author)
Redundant sensor validation by using fuzzy logic
International Nuclear Information System (INIS)
Holbert, K.E.; Heger, A.S.; Alang-Rashid, N.K.
1994-01-01
This research is motivated by the need to relax the strict boundary of numeric-based signal validation. To this end, the use of fuzzy logic for redundant sensor validation is introduced. Since signal validation employs both numbers and qualitative statements, fuzzy logic provides a pathway for transforming human abstractions into the numerical domain and thus coupling both sources of information. With this transformation, linguistically expressed analysis principles can be coded into a classification rule-base for signal failure detection and identification
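As a toy illustration of coupling numeric residuals with linguistic categories, one can map the disagreement between a sensor and its redundant estimate onto fuzzy membership degrees. The membership shapes and thresholds below are invented for illustration; the paper's actual rule-base is not reproduced here:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def validate_residual(r):
    """Fuzzify a redundant-sensor residual (deviation from the estimate
    implied by redundant channels) into linguistic health categories."""
    r = abs(r)
    return {
        "valid": tri(r, -0.6, 0.0, 0.6),                 # near-zero residual
        "suspect": tri(r, 0.3, 1.0, 1.7),                # moderate disagreement
        "failed": min(1.0, max(0.0, (r - 1.0) / 0.7)),   # large disagreement
    }
```

A downstream rule-base would then combine these degrees across channels (e.g. "if residual is suspect and drift is increasing, then flag"), defuzzifying only at the final decision.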
Redundancy Determination of HVDC MMC Modules
Directory of Open Access Journals (Sweden)
Chanki Kim
2015-08-01
An availability and reliability prediction has been made for a high-voltage direct-current (HVDC) module of a VSC (Voltage Source Converter) containing a DC/DC converter, gate driver, capacitor and insulated-gate bipolar transistors (IGBTs). The prediction was made using published failure rates for the electronic equipment. Its purpose is to determine the additional module redundancy of the VSC; the method used is the "binomial failure method".
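The "binomial failure method" amounts to k-out-of-n availability arithmetic: with per-module availability a, spare modules are added until the probability that enough modules survive meets a target. A sketch under the usual independence assumption (function names and the example figures are illustrative):

```python
from math import comb

def k_out_of_n_availability(n, k, a):
    """Probability that at least k of n identical, independent modules
    (each available with probability a) are working (binomial tail)."""
    return sum(comb(n, j) * a**j * (1 - a)**(n - j) for j in range(k, n + 1))

def required_redundancy(k, a, target):
    """Smallest number of extra modules r such that a k-out-of-(k+r)
    system meets the target availability."""
    r = 0
    while k_out_of_n_availability(k + r, k, a) < target:
        r += 1
    return r
```

For example, with two required modules at 90% availability, a single spare lifts the system from 0.81 to 0.972; tighter targets drive the redundancy count up quickly.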
Set-Theoretic Approach to Maturity Models
DEFF Research Database (Denmark)
Lasrado, Lester Allan
Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. This PhD thesis focuses on addressing … these criticisms by incorporating recent developments in configuration theory, in particular the application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and empirically demonstrating equifinal paths to maturity. Specifically … methodological guidelines consisting of detailed procedures to systematically apply set-theoretic approaches for maturity model research, and provides demonstrations of its application on three datasets. The thesis is a collection of six research papers that are written in a sequential manner. The first paper…
An approach to the drone fleet survivability assessment based on a stochastic continuous-time model
Kharchenko, Vyacheslav; Fesenko, Herman; Doukas, Nikos
2017-09-01
An approach and an algorithm for drone fleet survivability assessment based on a stochastic continuous-time model are proposed. The input data are the number of drones, the drone fleet redundancy coefficient, the drone stability and restoration rate, the limit deviation from the norms of drone fleet recovery, the drone fleet operational availability coefficient, the probability of drone failure-free operation, and the time needed for the drone fleet to perform the required tasks. Ways of improving the survivability of a recoverable drone fleet, taking into account adverse factors of system accidents, are suggested. Dependencies of the drone fleet survivability rate on both the drone stability and the number of drones are analysed.
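A two-state continuous-time Markov chain per repairable drone, combined binomially across the fleet, gives a minimal version of such a model. The names, rates, and independence assumption below are illustrative; the paper's model additionally covers recovery-norm deviations and accident factors:

```python
from math import comb, exp

def drone_availability(lam, mu, t):
    """Transient availability A(t) of a single repairable drone modelled
    as a 2-state CTMC (failure rate lam, restoration rate mu), starting
    in the 'up' state: A(t) = mu/(lam+mu) + (lam/(lam+mu)) * exp(-(lam+mu)t)."""
    s = lam + mu
    return mu / s + (lam / s) * exp(-s * t)

def fleet_survivability(n, k, lam, mu, t):
    """Probability that at least k of n independent drones are up at t,
    i.e. the fleet can still perform its required tasks."""
    a = drone_availability(lam, mu, t)
    return sum(comb(n, j) * a**j * (1 - a)**(n - j) for j in range(k, n + 1))
```

As t grows, per-drone availability settles at mu/(lam+mu), so the long-run fleet survivability depends only on the stability/restoration ratio and the redundancy margin n - k.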
Nonlinear Redundancy Analysis. Research Report 88-1.
van der Burg, Eeke; de Leeuw, Jan
A non-linear version of redundancy analysis is introduced. The technique is called REDUNDALS. It is implemented within the computer program for canonical correlation analysis called CANALS. The REDUNDALS algorithm is of the alternating least squares (ALS) type. The technique is defined as minimization of a squared distance between criterion…
John-Baptiste, A; Sowerby, L J; Chin, C J; Martin, J; Rotenberg, B W
2016-01-01
When prearranged standard surgical trays contain instruments that are repeatedly unused, the redundancy can result in unnecessary health care costs. Our objective was to estimate potential savings through an economic evaluation comparing the cost of surgical trays containing redundant instruments with that of trays with reduced instruments ("reduced trays"). We performed a cost analysis from the hospital perspective over a 1-year period. Using a mathematical model, we compared the direct costs of trays containing redundant instruments to reduced trays for 5 otolaryngology procedures. We incorporated data from several sources, including local hospital data on surgical volume, the number of instruments on redundant and reduced trays, wages of personnel and the time required to pack instruments. From the literature, we incorporated instrument depreciation costs and the time required to decontaminate an instrument. We performed 1-way sensitivity analyses on all variables, including surgical volume. Costs were estimated in 2013 Canadian dollars. The cost of redundant trays was $21 806 and the cost of reduced trays was $8803, for a 1-year cost saving of $13 003. In sensitivity analyses, cost savings ranged from $3262 to $21 395, based on the surgical volume at the institution. Variation in surgical volume resulted in a wider range of estimates, with a minimum of $3253 for low-volume and a maximum of $52 012 for high-volume institutions. Our study suggests moderate savings may be achieved by reducing surgical tray redundancy and, if applied to other surgical specialties, may result in savings to Canadian health care systems.
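The cost structure described above (per-case labour for packing and decontamination plus instrument depreciation, scaled by annual surgical volume) can be sketched as a toy model. All parameter names and example figures here are hypothetical, not the study's data:

```python
def annual_tray_cost(volume, n_instruments, pack_min_per_instr,
                     wage_per_min, decon_min_per_instr, depreciation_per_use):
    """Hypothetical direct annual cost of one tray type: reprocessing
    labour (packing + decontamination) plus instrument depreciation,
    per case, multiplied by yearly case volume."""
    per_case = n_instruments * (
        (pack_min_per_instr + decon_min_per_instr) * wage_per_min
        + depreciation_per_use)
    return volume * per_case
```

Subtracting the cost of a reduced tray from that of the redundant tray at the same volume reproduces the study's savings logic, and varying the volume input reproduces its 1-way sensitivity analysis.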
Mathematical Modeling Approaches in Plant Metabolomics.
Fürtauer, Lisa; Weiszmann, Jakob; Weckwerth, Wolfram; Nägele, Thomas
2018-01-01
The experimental analysis of a plant metabolome typically results in a comprehensive and multidimensional data set. To interpret metabolomics data in the context of biochemical regulation and environmental fluctuation, various approaches of mathematical modeling have been developed and have proven useful. In this chapter, a general introduction to mathematical modeling is presented and discussed in context of plant metabolism. A particular focus is laid on the suitability of mathematical approaches to functionally integrate plant metabolomics data in a metabolic network and combine it with other biochemical or physiological parameters.
Echavarria, E.; Tomiyama, T.; van Bussel, G. J. W.
2007-07-01
The objective of this on-going research is to develop a design methodology to increase the availability for offshore wind farms, by means of an intelligent maintenance system capable of responding to faults by reconfiguring the system or subsystems, without increasing service visits, complexity, or costs. The idea is to make use of the existing functional redundancies within the system and sub-systems to keep the wind turbine operational, even at a reduced capacity if necessary. Re-configuration is intended to be a built-in capability to be used as a repair strategy, based on these existing functionalities provided by the components. The possible solutions can range from using information from adjacent wind turbines, such as wind speed and direction, to setting up different operational modes, for instance re-wiring, re-connecting, changing parameters or control strategy. The methodology described in this paper is based on qualitative physics and consists of a fault diagnosis system based on a model-based reasoner (MBR), and on a functional redundancy designer (FRD). Both design tools make use of a function-behaviour-state (FBS) model. A design methodology based on the re-configuration concept to achieve self-maintained wind turbines is an interesting and promising approach to reduce stoppage rate, failure events, maintenance visits, and to maintain energy output possibly at reduced rate until the next scheduled maintenance.
International Nuclear Information System (INIS)
Echavarria, E; Tomiyama, T; Bussel, G J W van
2007-01-01
The objective of this on-going research is to develop a design methodology to increase the availability for offshore wind farms, by means of an intelligent maintenance system capable of responding to faults by reconfiguring the system or subsystems, without increasing service visits, complexity, or costs. The idea is to make use of the existing functional redundancies within the system and sub-systems to keep the wind turbine operational, even at a reduced capacity if necessary. Re-configuration is intended to be a built-in capability to be used as a repair strategy, based on these existing functionalities provided by the components. The possible solutions can range from using information from adjacent wind turbines, such as wind speed and direction, to setting up different operational modes, for instance re-wiring, re-connecting, changing parameters or control strategy. The methodology described in this paper is based on qualitative physics and consists of a fault diagnosis system based on a model-based reasoner (MBR), and on a functional redundancy designer (FRD). Both design tools make use of a function-behaviour-state (FBS) model. A design methodology based on the re-configuration concept to achieve self-maintained wind turbines is an interesting and promising approach to reduce stoppage rate, failure events, maintenance visits, and to maintain energy output possibly at reduced rate until the next scheduled maintenance
SLS Navigation Model-Based Design Approach
Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas
2018-01-01
The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and
International Nuclear Information System (INIS)
Mankamo, T.; Bjoere, S.; Olsson, Lena
1992-12-01
Dependent failure analysis and modeling were developed for high-redundancy systems. The study included a comprehensive data analysis of safety and relief valves at the Finnish and Swedish BWR plants, resulting in improved understanding of common cause failure mechanisms in these components. The reference application, on the Forsmark 1/2 reactor relief system, consisting of twelve safety/relief lines and two regulating relief lines, covered different safety criteria cases of the reactor depressurization and overpressure protection functions, and failure-to-reclose sequences. For the quantification of dependencies, the Alpha Factor Model, the Binomial Probability Model and the Common Load Model were compared for applicability in high-redundancy systems.
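Of the three parametric models compared, the alpha-factor model is the easiest to sketch: given estimated fractions α_k of failure events that involve exactly k of m redundant components, the common-cause basic-event probabilities follow from the total per-component failure probability. A minimal illustration for the non-staggered-testing formula (the numerical values are invented):

```python
from math import comb

def ccf_probabilities(alphas, q_total):
    """Alpha-factor model (non-staggered testing): Q_k is the probability
    of a basic event failing exactly k specific components out of m, with
    Q_k = (k / C(m-1, k-1)) * (alpha_k / alpha_t) * q_total,
    where alpha_t = sum_k k * alpha_k."""
    m = len(alphas)
    alpha_t = sum(k * a for k, a in enumerate(alphas, start=1))
    return [k / comb(m - 1, k - 1) * alphas[k - 1] / alpha_t * q_total
            for k in range(1, m + 1)]
```

A consistency check on the formula: summing C(m-1, k-1) * Q_k over k recovers the per-component total failure probability q_total exactly, since each component participates in C(m-1, k-1) distinct k-component groups.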
Stochastic approaches to inflation model building
International Nuclear Information System (INIS)
Ramirez, Erandy; Liddle, Andrew R.
2005-01-01
While inflation gives an appealing explanation of observed cosmological data, there are a wide range of different inflation models, providing differing predictions for the initial perturbations. Typically models are motivated either by fundamental physics considerations or by simplicity. An alternative is to generate large numbers of models via a random generation process, such as the flow equations approach. The flow equations approach is known to predict a definite structure to the observational predictions. In this paper, we first demonstrate a more efficient implementation of the flow equations exploiting an analytic solution found by Liddle (2003). We then consider alternative stochastic methods of generating large numbers of inflation models, with the aim of testing whether the structures generated by the flow equations are robust. We find that while typically there remains some concentration of points in the observable plane under the different methods, there is significant variation in the predictions amongst the methods considered
Model validation: a systemic and systematic approach
International Nuclear Information System (INIS)
Sheng, G.; Elzas, M.S.; Cronhjort, B.T.
1993-01-01
The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines, including the social, political, natural, and physical sciences, and engineering. There is, however, a wide range of definitions, which gives rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)
A Conceptual Modeling Approach for OLAP Personalization
Garrigós, Irene; Pardillo, Jesús; Mazón, Jose-Norberto; Trujillo, Juan
Data warehouses rely on multidimensional models in order to provide decision makers with appropriate structures to intuitively analyze data with OLAP technologies. However, data warehouses may be potentially large and multidimensional structures become increasingly complex to be understood at a glance. Even if a departmental data warehouse (also known as data mart) is used, these structures would be also too complex. As a consequence, acquiring the required information is more costly than expected and decision makers using OLAP tools may get frustrated. In this context, current approaches for data warehouse design are focused on deriving a unique OLAP schema for all analysts from their previously stated information requirements, which is not enough to lighten the complexity of the decision making process. To overcome this drawback, we argue for personalizing multidimensional models for OLAP technologies according to the continuously changing user characteristics, context, requirements and behaviour. In this paper, we present a novel approach to personalizing OLAP systems at the conceptual level based on the underlying multidimensional model of the data warehouse, a user model and a set of personalization rules. The great advantage of our approach is that a personalized OLAP schema is provided for each decision maker contributing to better satisfy their specific analysis needs. Finally, we show the applicability of our approach through a sample scenario based on our CASE tool for data warehouse development.
Variational approach to chiral quark models
Energy Technology Data Exchange (ETDEWEB)
Futami, Yasuhiko; Odajima, Yasuhiko; Suzuki, Akira
1987-03-01
A variational approach is applied to a chiral quark model to test the validity of the perturbative treatment of the pion-quark interaction based on the chiral symmetry principle. It is indispensably related to the chiral symmetry breaking radius if the pion-quark interaction can be regarded as a perturbation.
A variational approach to chiral quark models
International Nuclear Information System (INIS)
Futami, Yasuhiko; Odajima, Yasuhiko; Suzuki, Akira.
1987-01-01
A variational approach is applied to a chiral quark model to test the validity of the perturbative treatment of the pion-quark interaction based on the chiral symmetry principle. It is indispensably related to the chiral symmetry breaking radius if the pion-quark interaction can be regarded as a perturbation. (author)
Extracting conceptual models from user stories with Visual Narrator
Lucassen, Garm; Robeer, Marcel; Dalpiaz, Fabiano; van der Werf, Jan Martijn E. M.; Brinkkemper, Sjaak
2017-01-01
Extracting conceptual models from natural language requirements can help identify dependencies, redundancies, and conflicts between requirements via a holistic and easy-to-understand view that is generated from lengthy textual specifications. Unfortunately, existing approaches never gained traction
Redundancy of Redundancy in Justifications of Verdicts of the Polish Constitutional Tribunal
Directory of Open Access Journals (Sweden)
Jan Winczorek
2016-09-01
The results of an empirical study of 150 justifications of verdicts of the Polish Constitutional Tribunal (CT) are discussed. The CT justifies its decisions mostly by authoritative references to previous decisions and other doxa-type arguments. It thus does not convince the audience of a decision's validity, but rather documents it. Further, the methodology changes depending on features of the case. The results are analysed using the conceptual framework of sociological systems theory. It is shown that the CT's justification methodology ignores the redundancy (excess of references and dependencies) of the legal system, finding redundancy redundant. This is a risky strategy of decision-making, one that enables political influence.
Language as an information system: redundancy and optimization
Directory of Open Access Journals (Sweden)
Irina Mikhaylovna Nekipelova
2015-11-01
The paper is devoted to the study of the language system as an information system. The distinguishing feature of any natural living language system is the redundancy of the elements of its structure. Redundancy, which breaks the universality peculiar to artificial information systems, makes language mobile in time and in space. Two types of informational redundancy should be distinguished: language redundancy, in which informational overlap of language units occurs within the system, and speech redundancy, in which information is condensed at the syntagmatic level. Language redundancy is potential, while speech redundancy is actual. In general, language redundancy is necessary for language: by complicating the relationships between language units, it creates situations of choice within the language, leading to disorder in the language system, increasing entropy and, as a result, giving rise to information that may or may not be accepted by the language system. Language redundancy is one of the reasons for the growth of information in language. In addition, informational redundancy in language is one of the factors of language system development.
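Shannon's framework makes the notion of informational redundancy measurable: redundancy is how far the observed entropy of a symbol stream falls below the maximum possible for its alphabet. A small sketch using unigram entropy only (which ignores the syntagmatic, speech-level effects the paper distinguishes):

```python
from collections import Counter
from math import log2

def redundancy(text):
    """Shannon redundancy R = 1 - H/Hmax of a symbol stream, where H is
    the unigram entropy and Hmax = log2(alphabet size observed)."""
    counts = Counter(text)
    n = len(text)
    h = -sum(c / n * log2(c / n) for c in counts.values())
    hmax = log2(len(counts))
    return 1.0 - h / hmax if hmax > 0 else 0.0
```

A perfectly balanced stream has zero redundancy, while skewed symbol frequencies (as in any natural language) push R well above zero; n-gram models would raise it further by capturing sequential constraints.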
A Set Theoretical Approach to Maturity Models
DEFF Research Database (Denmark)
Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann
2016-01-01
Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models … characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set-theoretic methods to conceptualize, develop, and empirically derive maturity models and provide a demonstration…
A hybrid modeling approach for option pricing
Hajizadeh, Ehsan; Seifi, Abbas
2011-11-01
The complexity of option pricing has led many researchers to develop sophisticated models for this purpose. The commonly used Black-Scholes model suffers from a number of limitations, one of which is the controversial assumption that the underlying probability distribution is lognormal. We propose a pair of hybrid models to reduce these limitations and enhance option pricing ability. The key input to an option pricing model is volatility; in this paper, we use three popular GARCH-type models to estimate it. We then develop two non-parametric models, based on neural networks and neuro-fuzzy networks, to price call options on the S&P 500 index. We compare the results with those of the Black-Scholes model and show that both the neural network and neuro-fuzzy network models outperform it. Furthermore, comparing the neural network and neuro-fuzzy approaches, we observe that for at-the-money options the neural network model performs better, while for both in-the-money and out-of-the-money options the neuro-fuzzy model provides better results.
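The Black-Scholes baseline that such hybrid models are benchmarked against is a closed-form formula; a self-contained version (standard formula, with illustrative parameters in the usage check):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price for spot S, strike K, maturity T
    (years), risk-free rate r, and volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

The hybrid approach in the paper replaces the constant sigma with a GARCH-estimated volatility, or bypasses the formula entirely with a network trained on market prices.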
Heat transfer modeling an inductive approach
Sidebotham, George
2015-01-01
This innovative text emphasizes a "less-is-more" approach to modeling complicated systems such as heat transfer by treating them first as "1-node lumped models" that yield simple closed-form solutions. The author develops numerical techniques for students to obtain more detail, but also trains them to use the techniques only when simpler approaches fail. Covering all essential methods offered in traditional texts, but with a different order, Professor Sidebotham stresses inductive thinking and problem solving as well as a constructive understanding of modern, computer-based practice. Readers learn to develop their own code in the context of the material, rather than just how to use packaged software, offering a deeper, intrinsic grasp behind models of heat transfer. Developed from over twenty-five years of lecture notes to teach students of mechanical and chemical engineering at The Cooper Union for the Advancement of Science and Art, the book is ideal for students and practitioners across engineering discipl...
Directory of Open Access Journals (Sweden)
Ching-Hsue Cheng
2017-11-01
Full Text Available. Obtaining necessary information (and even extracting hidden messages) from existing big data, and then transforming it into knowledge, is an important skill. Data mining technology has received increased attention in various fields in recent years because it can be used to find historical patterns and employ machine learning to aid in decision-making. When we find unexpected rules or patterns in the data, they are likely to be of high value. This paper proposes a synthetic feature selection approach (SFSA), combined with a support vector machine (SVM), to extract patterns and find the key features that influence students' academic achievement. To verify the proposed model, two databases, namely "Student Profile" and "Tutorship Record", were collected from an elementary school in Taiwan and concatenated into an integrated research dataset based on students' names. The results indicate the following: (1) the accuracy of the proposed feature selection approach is better than that of the Minimum-Redundancy-Maximum-Relevance (mRMR) approach; (2) the proposed model is better than the listed methods when the six least influential features have been deleted; and (3) the proposed model can enhance the accuracy and facilitate the interpretation of patterns from a hybrid-type dataset of students' academic achievement.
Nonperturbative approach to the attractive Hubbard model
International Nuclear Information System (INIS)
Allen, S.; Tremblay, A.-M. S.
2001-01-01
A nonperturbative approach to the single-band attractive Hubbard model is presented in the general context of functional-derivative approaches to many-body theories. As in previous work on the repulsive model, the first step is based on a local-field-type ansatz, on enforcement of the Pauli principle, and on a number of crucial sum rules. The Mermin-Wagner theorem in two dimensions is automatically satisfied. At this level, two-particle self-consistency has been achieved. In the second step of the approximation, an improved expression for the self-energy is obtained by using the results of the first step in an exact expression for the self-energy, where the high- and low-frequency behaviors appear separately. The result is a cooperon-like formula. The required vertex corrections are included in this self-energy expression, as required by the absence of a Migdal theorem for this problem. Other approaches to the attractive Hubbard model are critically compared. Physical consequences of the present approach and agreement with Monte Carlo simulations are demonstrated in the accompanying paper (following this one).
Quasirelativistic quark model in quasipotential approach
Matveev, V A; Savrin, V I; Sissakian, A N
2002-01-01
The interaction of relativistic particles is described within the framework of the quasipotential approach. The presentation is based on the so-called covariant simultaneous formulation of quantum field theory, whereby the theory is considered on a space-like three-dimensional hypersurface in Minkowski space. Special attention is paid to methods of constructing various quasipotentials, as well as to applications of the quasipotential approach to describing characteristics of relativistic particle interactions in quark models, namely: elastic hadron scattering amplitudes, the mass spectra and widths of meson decays, and the cross sections of deep inelastic lepton scattering on hadrons.
A multiscale modeling approach for biomolecular systems
Energy Technology Data Exchange (ETDEWEB)
Bowling, Alan, E-mail: bowling@uta.edu; Haghshenas-Jaryani, Mahdi, E-mail: mahdi.haghshenasjaryani@mavs.uta.edu [The University of Texas at Arlington, Department of Mechanical and Aerospace Engineering (United States)
2015-04-15
This paper presents a new multiscale molecular dynamic model for investigating the effects of external interactions, such as contact and impact, during stepping and docking of motor proteins and other biomolecular systems. The model retains the mass properties, ensuring that the result satisfies Newton’s second law. This idea is presented using a simple particle model to facilitate discussion of the rigid body model; however, the particle model does provide insights into particle dynamics at the nanoscale. The resulting three-dimensional model predicts a significant decrease in the effect of the random forces associated with Brownian motion. This conclusion runs contrary to the widely accepted notion that the motor protein’s movements are primarily the result of thermal effects. This work focuses on the mechanical aspects of protein locomotion; the effect of ATP hydrolysis is estimated as internal forces acting on the mechanical model. In addition, the proposed model can be numerically integrated in a reasonable amount of time. Herein, the differences between the motion predicted by the old and new modeling approaches are compared using a simplified model of myosin V.
Tomographical properties of uniformly redundant arrays
International Nuclear Information System (INIS)
Cannon, T.M.; Fenimore, E.E.
1978-01-01
Recent work in coded aperture imaging has shown that the uniformly redundant array (URA) can image distant planar radioactive sources with no artifacts. The performance of two URA apertures when used in a close-up tomographic imaging system is investigated. It is shown that a URA based on m-sequences is superior to one based on quadratic residues. The m-sequence array not only produces fewer objectionable artifacts in tomographic imaging, but is also more resilient to some of the described detrimental effects of close-up imaging. It is shown that in spite of these close-up effects, tomographic depth resolution increases as the source is moved closer to the detector.
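The key property behind m-sequence-based URAs is their flat periodic autocorrelation: maximal at zero shift and constant everywhere else, which is what suppresses decoding artifacts. A pure-Python sketch for a one-dimensional case (the 3-bit register with taps x^3 + x + 1 and length-7 sequence are illustrative, not the array sizes used in the paper):

```python
def m_sequence(taps=(3, 1), nbits=3):
    """Generate one period of an m-sequence from a Fibonacci LFSR."""
    state = [1] * nbits
    seq = []
    for _ in range(2**nbits - 1):
        seq.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]  # XOR of tapped stages
        state = [fb] + state[:-1]
    return seq

seq = m_sequence()
bipolar = [2 * b - 1 for b in seq]        # map {0,1} -> {-1,+1}
N = len(bipolar)
# Periodic (circular) autocorrelation: N at zero lag, -1 at every other lag.
autocorr = [sum(bipolar[i] * bipolar[(i + k) % N] for i in range(N))
            for k in range(N)]
```

The two-valued autocorrelation is what makes the decoded image of a point source a delta function plus a flat pedestal, rather than a pattern of sidelobes.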
Sharing the cost of redundant items
DEFF Research Database (Denmark)
Hougaard, Jens Leth; Moulin, Hervé
2014-01-01
We ask how to share the cost of finitely many public goods (items) among users with different needs: some smaller subsets of items are enough to serve the needs of each user, yet the cost of all items must be covered, even if this entails inefficiently paying for redundant items. Typical examples are network connectivity problems where an existing (possibly inefficient) network must be maintained. We axiomatize a family of cost ratios based on simple liability indices, one for each agent and for each item, measuring the relative worth of this item across agents, and generating cost allocation rules additive in costs.
Designing Broadband Access Networks with Triple Redundancy
DEFF Research Database (Denmark)
Pedersen, Jens Myrup; Riaz, Muhammad Tahir; Knudsen, Thomas Phillip
2005-01-01
An architecture is proposed for designing broadband access networks which offer triple redundancy to the end users, resulting in networks providing connectivity even in case of any two independent node or line failures. Two physically independent connections are offered by fiber, and the last is provided by some wireless solution. Based on experience with planning Fiber To The Home, the architecture is designed to meet a number of demands, making it practicable and useful in real-world network planning. The proposed wired topology is planar and suitable for being fitted onto the road network.
A new approach for developing adjoint models
Farrell, P. E.; Funke, S. W.
2011-12-01
Many data assimilation algorithms rely on the availability of gradients of misfit functionals, which can be efficiently computed with adjoint models. However, the development of an adjoint model for a complex geophysical code is generally very difficult. Algorithmic differentiation (AD, also called automatic differentiation) offers one strategy for simplifying this task: it takes the abstraction that a model is a sequence of primitive instructions, each of which may be differentiated in turn. While extremely successful, this low-level abstraction runs into time-consuming difficulties when applied to the whole codebase of a model, such as differentiating through linear solves, model I/O, calls to external libraries, language features that are unsupported by the AD tool, and the use of multiple programming languages. While these difficulties can be overcome, doing so requires a large amount of technical expertise and an intimate familiarity with both the AD tool and the model. An alternative to applying the AD tool to the whole codebase is to assemble the discrete adjoint equations and use these to compute the necessary gradients. With this approach, the AD tool must be applied to the nonlinear assembly operators, which are typically small, self-contained units of the codebase. The disadvantage of this approach is that the assembly of the discrete adjoint equations is still very difficult to perform correctly, especially for complex multiphysics models that perform temporal integration; as it stands, this approach is as difficult and time-consuming as applying AD to the whole model. In this work, we have developed a library which greatly simplifies and automates the alternate approach of assembling the discrete adjoint equations. We propose a complementary, higher-level abstraction to that of AD: that a model is a sequence of linear solves. The developer annotates model source code with library calls that build a 'tape' of the operators involved and their dependencies, and
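The 'tape' idea underlying AD can be illustrated with a toy reverse-mode differentiation sketch (a generic illustration of taping primitive operations, not the authors' library; the class and function names are invented for the example):

```python
import math

class Var:
    """A scalar that records the operations producing it, for reverse-mode AD."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # pairs of (parent Var, local derivative)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

def sin(x):
    return Var(math.sin(x.value), ((x, math.cos(x.value)),))

def backward(out):
    """Reverse sweep: push each sensitivity contribution back along the tape."""
    stack = [(out, 1.0)]
    while stack:
        node, g = stack.pop()
        node.grad += g
        for parent, local in node.parents:
            stack.append((parent, g * local))

x, y = Var(2.0), Var(3.0)
f = x * y + sin(x)          # f = x*y + sin(x)
backward(f)
# Expected: df/dx = y + cos(x), df/dy = x
```

The paper's higher-level abstraction replaces the primitive-instruction tape above with a tape whose entries are whole linear solves, which is what makes the approach tractable for large models.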
Eutrophication Modeling Using Variable Chlorophyll Approach
International Nuclear Information System (INIS)
Abdolabadi, H.; Sarang, A.; Ardestani, M.; Mahjoobi, E.
2016-01-01
In this study, eutrophication was investigated in Lake Ontario to identify the interactions among effective drivers. The complexity of this phenomenon was modeled using a system dynamics approach based on a consideration of constant and variable stoichiometric ratios. The system dynamics approach is a powerful tool for developing object-oriented models to simulate complex phenomena that involve feedback effects. Utilizing stoichiometric ratios is a method for converting the concentrations of state variables. For the physical segmentation of the model, Lake Ontario was divided into two layers, i.e., the epilimnion and hypolimnion, and differential equations were developed for each layer. The model structure included 16 state variables related to phytoplankton, herbivorous zooplankton, carnivorous zooplankton, ammonium, nitrate, dissolved phosphorus, and particulate and dissolved carbon in the epilimnion and hypolimnion over a time horizon of one year. Several tests to verify the model, a Nash-Sutcliffe coefficient close to 1 (0.98), a data correlation coefficient of 0.98, and low standard errors (0.96), indicated that the model performs well. The results revealed that there were significant differences in the concentrations of the state variables between constant and variable stoichiometry simulations. Consequently, the consideration of variable stoichiometric ratios in algae and nutrient concentration simulations may be applied in future modeling studies to enhance the accuracy of the results and reduce the likelihood of inefficient control policies.
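The state-variable structure of such models can be illustrated in miniature: a single nutrient-phytoplankton pair integrated with forward Euler, where uptake and recycling exchange mass so the total is conserved (the equations and rate constants are illustrative placeholders, far simpler than the 16-variable, two-layer model described):

```python
# Minimal nutrient (N) - phytoplankton (P) box model, forward Euler.
# Uptake moves mass N -> P; mortality recycles P -> N, so N + P is conserved.
uptake, mortality, dt = 0.5, 0.1, 0.01
N, P = 4.0, 1.0
total0 = N + P
for _ in range(10_000):          # integrate 100 time units
    growth = uptake * N * P      # nutrient-limited phytoplankton growth
    loss = mortality * P         # mortality recycled back to the nutrient pool
    N += dt * (loss - growth)
    P += dt * (growth - loss)
# The system settles at the equilibrium N* = mortality / uptake.
```

Each flux appears once with a plus sign and once with a minus sign, which is the bookkeeping discipline that keeps multi-compartment system dynamics models mass-consistent.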
DEFF Research Database (Denmark)
Kaplan, Sigal; Bortei-Doku, Shaun; Prato, Carlo G.
2018-01-01
This study proposes the investigation of the relations between the perception of safety improvement, the provision of information with road signs, the amount of provided information, and observable and unobservable traits of road users. A web-based survey collected information about the estimation of conflicts and the perception of safety improvement in 12 traffic locations, grouped according to (i) a low amount of information that generated ambiguity and (ii) a high amount of information that generated redundancy. Moreover, the web-based survey gathered information about socioeconomic characteristics and experience with redundant information (for the purpose of having a sample familiar with one of the issues). A Structural Equation Modelling approach allowed estimating a system of relations that suggested the following: (i) the perception of safety improvement is not related only to road sign comprehension...
EFFICIENCY OF REDUNDANT QUERY EXECUTION IN MULTI-CHANNEL SERVICE SYSTEMS
Directory of Open Access Journals (Sweden)
V. A. Bogatyrev
2016-03-01
Subject of Research. The paper deals with analysis of the effectiveness of redundant queries in computer systems with unreliable computations, represented by multi-channel queuing systems with a common queue. The objective of the research is to assess the possibility of increasing the efficiency of request service by executing redundant copies of requests on different devices of a multi-channel system under conditions of computational unreliability. Redundant service of a request requires its fault-free execution on at least one of the devices. Method. To analyze the effectiveness of redundant request service, we estimate the average time spent in the system with and without redundant requests using a simple M/M/n queuing model. The presented estimate of the average waiting time with redundant queries is an upper bound, since it ignores the possibility of reducing the average waiting time as a result of the spread of queuing times across the different devices. The integrated efficiency of redundant request service is defined via a multiplicative index that takes into account the fault-free execution of computations and the average time margin with respect to the maximum tolerated service delay. The estimate of error-free computing with redundant queries requires fault-free execution of at least one copy of the request. Main Results. We have shown that request redundancy yields a gain in system efficiency at low request rates (load). We have defined the boundaries of expediency (efficiency) for redundant request service. We have shown the possibility of increasing effectiveness through adaptive changes in the multiplicity of request redundancy, depending on the intensity of the request flow. We have found that the choice of service discipline in information service systems is largely determined by
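The core trade-off, that duplicating a request halves its expected completion time while doubling the offered load, can be sketched with a small Monte Carlo comparison (illustrative only: exponential service times and idle servers are assumed, ignoring the queuing effects the paper analyzes):

```python
import random

random.seed(42)
mu = 1.0        # service rate of each device
n = 200_000     # number of simulated requests

# Single copy: service time ~ Exp(mu), mean 1/mu.
single = [random.expovariate(mu) for _ in range(n)]

# Redundant copy on two idle devices: completion is the minimum of two
# Exp(mu) draws, which is distributed Exp(2*mu) with mean 1/(2*mu).
redundant = [min(random.expovariate(mu), random.expovariate(mu))
             for _ in range(n)]

mean_single = sum(single) / n
mean_redundant = sum(redundant) / n
```

At low load the second device is usually idle, so the gain is close to this best case; at high load the doubled occupancy inflates queuing delay, which is where the paper locates the boundary of expediency.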
Hong, Sehee; Kim, Soyoung
2018-01-01
There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: multilevel modeling (the hierarchical linear model) and structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how the two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.
An improved method for calculating self-motion coordinates for redundant manipulators
International Nuclear Information System (INIS)
Reister, D.B.
1997-04-01
For a redundant manipulator, the objective of redundancy resolution is to follow a specified path in Cartesian space and simultaneously perform another task (for example, maximize an objective function or avoid obstacles) at every point along the path. The conventional methods have several drawbacks: a new function must be defined for each task, the extended Jacobian can be singular, closed cycles in Cartesian space may not yield closed cycles in joint space, and the objective is point-wise redundancy resolution (determining a single point in joint space for each point in Cartesian space). The author divides the redundancy resolution problem into two parts: (1) calculation of self-motion coordinates for all possible positions of a manipulator at each point along a Cartesian path and (2) determination of optimal self-motion coordinates that maximize an objective function along the path. This paper discusses the first part of the problem. The path-wise approach overcomes all of the drawbacks of conventional redundancy resolution methods: there is no need to define a new function for each task, the extended Jacobian cannot be singular, and closed cycles in extended Cartesian space yield closed cycles in joint space.
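For contrast, the conventional point-wise resolution is usually written as a pseudoinverse solution plus a null-space term that realizes the secondary task without disturbing the Cartesian path. A minimal numpy sketch for a planar 3-link arm (link lengths, configuration, and the secondary-task vector are all illustrative):

```python
import numpy as np

def jacobian(q, lengths=(1.0, 1.0, 1.0)):
    """2x3 position Jacobian of a planar 3R arm (end-effector x, y)."""
    cum = np.cumsum(q)                    # absolute link angles
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        # Joint i moves every link from i onward.
        J[0, i] = -sum(lengths[k] * np.sin(cum[k]) for k in range(i, len(q)))
        J[1, i] = sum(lengths[k] * np.cos(cum[k]) for k in range(i, len(q)))
    return J

q = np.array([0.3, 0.4, 0.5])             # joint angles (rad)
xdot = np.array([0.1, -0.2])              # desired end-effector velocity
z = np.array([1.0, 0.0, -1.0])            # secondary-task joint velocity

J = jacobian(q)
J_pinv = np.linalg.pinv(J)
# Task-space tracking plus self-motion in the null space of J:
qdot = J_pinv @ xdot + (np.eye(3) - J_pinv @ J) @ z
```

The null-space projector (I - J⁺J) is exactly the self-motion the paper parameterizes with coordinates along the whole path instead of resolving point by point.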
Evolutionary modeling-based approach for model errors correction
Directory of Open Access Journals (Sweden)
S. Q. Wan
2012-08-01
The inverse problem of using the information of historical data to estimate model errors is one of the frontier research topics in science. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."
On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. Thereby, a new approach to estimating model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can actualize the combination of statistics and dynamics to a certain extent.
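The idea of fitting a model-error term to historical data by evolutionary search can be sketched with a toy example: recovering the amplitude and frequency of a periodic error term by mutation and selection (the target function and the simple (1+1) hill-climbing scheme are illustrative stand-ins for the paper's EM machinery, not its actual algorithm):

```python
import math, random

random.seed(0)
t = [0.1 * k for k in range(200)]
# "Reality" differs from the model by a periodic error term a*sin(w*t);
# a and w are unknown and must be recovered from historical data.
a_true, w_true = 0.8, 2.0
observed_error = [a_true * math.sin(w_true * ti) for ti in t]

def misfit(params):
    a, w = params
    return sum((a * math.sin(w * ti) - e) ** 2
               for ti, e in zip(t, observed_error))

# Simple (1+1) evolution strategy: mutate the current estimate and keep
# the candidate whenever it fits the historical data better.
best = [0.0, 1.0]
init_fit = misfit(best)
best_fit = init_fit
for _ in range(2000):
    candidate = [p + random.gauss(0.0, 0.1) for p in best]
    fit = misfit(candidate)
    if fit < best_fit:
        best, best_fit = candidate, fit
```

The selection step guarantees the recovered error term never fits the data worse than the initial guess, which is the self-learning property the abstract appeals to.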
MODELS OF TECHNOLOGY ADOPTION: AN INTEGRATIVE APPROACH
Directory of Open Access Journals (Sweden)
Andrei OGREZEANU
2015-06-01
The interdisciplinary study of information technology adoption has developed rapidly over the last 30 years. Various theoretical models have been developed and applied, such as the Technology Acceptance Model (TAM), Innovation Diffusion Theory (IDT), the Theory of Planned Behavior (TPB), etc. The result of these many years of research is thousands of contributions to the field, which, however, remain highly fragmented. This paper develops a theoretical model of technology adoption by integrating major theories in the field: primarily IDT, TAM, and TPB. To do so coherently, an approach is proposed that goes back to basics in the development of independent variable types, emphasizing (1) the logic of classification and (2) the psychological mechanisms behind variable types. Once developed, these types are then populated with variables originating in empirical research. Conclusions are drawn on which types are underpopulated and present potential for future research. I end with a set of methodological recommendations for future application of the model.
Interfacial Fluid Mechanics A Mathematical Modeling Approach
Ajaev, Vladimir S
2012-01-01
Interfacial Fluid Mechanics: A Mathematical Modeling Approach provides an introduction to mathematical models of viscous flow used in the rapidly developing fields of microfluidics and microscale heat transfer. The basic physical effects are first introduced in the context of simple configurations, and their relative importance in typical microscale applications is discussed. Then, several configurations of importance to microfluidics, most notably thin films/droplets on substrates and confined bubbles, are discussed in detail. Topics from current research on electrokinetic phenomena, liquid flow near structured solid surfaces, evaporation/condensation, and surfactant phenomena are discussed in the later chapters. This book also: discusses mathematical models in the context of actual applications such as electrowetting; includes unique material on fluid flow near structured surfaces and phase change phenomena; and shows readers how to solve modeling problems related to microscale multiphase flows.
Diabetes classification using a redundancy reduction preprocessor
Directory of Open Access Journals (Sweden)
Áurea Celeste Ribeiro
Introduction: Diabetes patients can benefit significantly from early diagnosis. Thus, accurate automated screening is becoming increasingly important due to the wide spread of the disease. Previous studies in automated screening have found a maximum accuracy of 92.6%. Methods: This work proposes a classification methodology based on efficient coding of the input data, carried out by decreasing input data redundancy using well-known ICA algorithms, such as FastICA, JADE and INFOMAX. The classifier used to discriminate diabetics from non-diabetics is the one-class support vector machine. Classification tests were performed using noninvasive and invasive indicators. Results: The results suggest that redundancy reduction increases one-class support vector machine performance when discriminating between diabetics and non-diabetics, up to an accuracy of 98.47% when using all indicators. Using only noninvasive indicators, an accuracy of 98.28% was obtained. Conclusion: The ICA feature extraction improves the performance of the classifier on the data set because it reduces the statistical dependence of the collected data, which increases the ability of the classifier to find accurate class boundaries.
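The redundancy-reduction step can be illustrated in miniature with whitening, which removes second-order statistical dependence between features (full ICA, as used in the paper, additionally removes higher-order dependence; the synthetic data below is an invented stand-in for the clinical indicators):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "indicators": 3 correlated features, 500 samples (rows).
latent = rng.normal(size=(500, 3))
mixing = np.array([[1.0, 0.5, 0.2],
                   [0.0, 1.0, 0.7],
                   [0.0, 0.0, 1.0]])
X = latent @ mixing.T

# ZCA whitening: rotate and rescale so the covariance becomes the identity.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
W = eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T
Z = Xc @ W

white_cov = np.cov(Z, rowvar=False)   # should be (numerically) the identity
```

Feeding the decorrelated representation Z, rather than the raw correlated X, to a classifier is the mechanism the Conclusion describes: less statistical dependence between inputs makes class boundaries easier to find.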
Kinematically Optimal Robust Control of Redundant Manipulators
Galicki, M.
2017-12-01
This work deals with the problem of robust optimal task-space trajectory tracking subject to finite-time convergence. The kinematic and dynamic equations of a redundant manipulator are assumed to be uncertain. Moreover, globally unbounded disturbances are allowed to act on the manipulator when the end-effector tracks the trajectory. Furthermore, the movement is to be accomplished in such a way as to minimize both the manipulator torques and their oscillations, thus eliminating potential robot vibrations. Based on a suitably defined task-space non-singular terminal sliding vector variable and the Lyapunov stability theory, we derive a class of chattering-free, robust, kinematically optimal controllers, based on the estimation of the transpose Jacobian, which seem to be effective in counteracting uncertain kinematics and dynamics, unbounded disturbances, and (possible) kinematic and/or algorithmic singularities met on the robot trajectory. Numerical simulations carried out for a redundant manipulator of SCARA type, consisting of three revolute kinematic pairs and operating in a two-dimensional task space, illustrate the performance of the proposed controllers as well as comparisons with other well-known control schemes.
A new modelling approach for zooplankton behaviour
Keiyu, A. Y.; Yamazaki, H.; Strickler, J. R.
We have developed a new simulation technique to model zooplankton behaviour. The approach uses neither conventional artificial intelligence nor neural network methods. We have designed an adaptive behaviour network, similar to BEER [(1990) Intelligence as an adaptive behaviour: an experiment in computational neuroethology, Academic Press], based on observational studies of zooplankton behaviour. The proposed method is compared with non-"intelligent" models (random walk and correlated walk models) as well as with observed behaviour in a laboratory tank. Although the network is simple, the model exhibits rich behavioural patterns similar to live copepods.
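The two baseline models the network is compared against are simple to state: a random walk turns freely at each step, while a correlated walk persists in its heading. A sketch of both as limits of one turning-angle model (step length and turning-noise values are illustrative):

```python
import math, random

def walk(n_steps, turning_sd, seed=1):
    """2-D walk whose heading changes by a Gaussian increment each step.

    turning_sd large -> essentially uncorrelated random walk
    turning_sd = 0.0 -> perfectly correlated (straight-line) walk
    """
    rng = random.Random(seed)
    x = y = heading = 0.0
    for _ in range(n_steps):
        heading += rng.gauss(0.0, turning_sd)
        x += math.cos(heading)
        y += math.sin(heading)
    return math.hypot(x, y)   # net displacement from the origin

straight = walk(100, 0.0)          # correlated limit: displacement = n_steps
diffusive = walk(100, math.pi)     # heavy turning: nearly uncorrelated
```

Real copepod tracks fall between these two limits, which is why a fixed-persistence walk cannot reproduce the behavioural richness the adaptive network captures.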
Continuum modeling: an approach through practical examples
Muntean, Adrian
2015-01-01
This book develops continuum modeling skills and approaches the topic from three sides: (1) derivation of global integral laws together with the associated local differential equations, (2) design of constitutive laws and (3) modeling boundary processes. The focus of this presentation lies on many practical examples covering aspects such as coupled flow, diffusion and reaction in porous media or microwave heating of a pizza, as well as traffic issues in bacterial colonies and energy harvesting from geothermal wells. The target audience comprises primarily graduate students in pure and applied mathematics as well as working practitioners in engineering who are faced by nonstandard rheological topics like those typically arising in the food industry.
Global Environmental Change: An integrated modelling approach
International Nuclear Information System (INIS)
Den Elzen, M.
1993-01-01
Two major global environmental problems are dealt with: climate change and stratospheric ozone depletion (and their mutual interactions), briefly surveyed in Part 1. In Part 2 a brief description of the integrated modelling framework IMAGE 1.6 is given. Some specific parts of the model are described in more detail in other chapters, e.g. the carbon cycle model, the atmospheric chemistry model, the halocarbon model, and the UV-B impact model. In Part 3 an uncertainty analysis of climate change and stratospheric ozone depletion is presented (Chapter 4). Chapter 5 briefly reviews the social and economic uncertainties implied by future greenhouse gas emissions. Chapters 6 and 7 describe a model and sensitivity analysis pertaining to the scientific uncertainties and/or lacunae in the sources and sinks of methane and carbon dioxide, and their biogeochemical feedback processes. Chapter 8 presents an uncertainty and sensitivity analysis of the carbon cycle model, the halocarbon model, and the IMAGE model 1.6 as a whole. Part 4 presents the risk assessment methodology as applied to the problems of climate change and stratospheric ozone depletion more specifically. In Chapter 10, this methodology is used as a means with which to assess current ozone policy and a wide range of halocarbon policies. Chapter 11 presents and evaluates the simulated globally-averaged temperature and sea level rise (indicators) for the IPCC-1990 and 1992 scenarios, concluding with a Low Risk scenario, which would meet the climate targets. Chapter 12 discusses the impact of sea level rise on the frequency of the Dutch coastal defence system (indicator) for the IPCC-1990 scenarios. Chapter 13 presents projections of mortality rates due to stratospheric ozone depletion based on model simulations employing the UV-B chain model for a number of halocarbon policies. Chapter 14 presents an approach for allocating future emissions of CO2 among regions. (Abstract Truncated)
The impact of the operating environment on the design of redundant configurations
International Nuclear Information System (INIS)
Marseguerra, M.; Padovani, E.; Zio, E.
1999-01-01
Safety systems are often characterized by substantial redundancy and diversification in safety critical components. In principle, such redundancy and diversification can bring benefits when compared to single-component systems. However, it has also been recognized that the evaluation of these benefits should take into account that redundancies cannot be founded, in practice, on the assumption of complete independence, so that the resulting risk profile is strongly dominated by dependent failures. It is therefore mandatory that the effects of common cause failures be estimated in any probabilistic safety assessment (PSA). Recently, in the Hughes model for hardware failures and in the Eckhardt and Lee models for software failures, it was proposed that the stressfulness of the operating environment affects the probability that a particular type of component will fail. Thus, dependence of component failure behaviors can arise indirectly through the variability of the environment which can directly affect the success of a redundant configuration. In this paper we investigate the impact of indirect component dependence by means of the introduction of a probability distribution which describes the variability of the environment. We show that the variance of the distribution of the number, or times, of system failures can give an indication of the presence of the environment. Further, the impact of the environment is shown to affect the reliability and the design of redundant configurations
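The signature described, that environmental variability inflates the variance of the failure count beyond what independent failures would produce, can be checked analytically in a toy setting: n identical components whose failure probability depends on a random environment state (the two-state environment and its probabilities are invented for illustration):

```python
# n components fail independently GIVEN the environment; the environment
# itself is random: benign (p=0.01) or stressful (p=0.10), each with prob 0.5.
n = 100
envs = [(0.5, 0.01), (0.5, 0.10)]

# Law of total expectation/variance for the number of failures K:
mean_k = sum(w * n * p for w, p in envs)
var_within = sum(w * n * p * (1 - p) for w, p in envs)          # E[Var(K|env)]
var_between = sum(w * (n * p - mean_k) ** 2 for w, p in envs)   # Var(E[K|env])
var_k = var_within + var_between

# A single "averaged" failure probability hides the environment entirely:
p_avg = sum(w * p for w, p in envs)
var_independent = n * p_avg * (1 - p_avg)
```

The excess of `var_k` over `var_independent` is exactly the between-environment term, which is why the paper can use the variance of the failure count as an indicator of the environment's presence.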
Fault-Tolerant Region-Based Control of an Underwater Vehicle with Kinematically Redundant Thrusters
Directory of Open Access Journals (Sweden)
Zool H. Ismail
2014-01-01
This paper presents a new control approach for an underwater vehicle with a kinematically redundant thruster system. The control scheme is derived based on a fault-tolerant decomposition for thruster force allocation and a region control scheme for the tracking objective. Given a redundant thruster system, that is, one in which six or more pairs of thrusters are used, the proposed redundancy resolution and region control scheme determines the number of thruster faults and provides the reference thruster forces needed to keep the underwater vehicle within the desired region. The stability of the presented control law is proven in the sense of a Lyapunov function. Numerical simulations are performed with an omnidirectional underwater vehicle, and the results illustrate the effectiveness of the proposed scheme in terms of optimizing the thruster forces.
Risk-based replacement strategies for redundant deteriorating reinforced concrete pipe networks
International Nuclear Information System (INIS)
Adey, B.; Bernard, O.; Gerard, B.
2003-01-01
This paper gives an example of how predictive models of the deterioration of reinforced concrete pipes and the consequences of failure can be used to develop risk-based replacement strategies for redundant reinforced concrete pipe networks. It also shows how an accurate deterioration prediction can lead to a reduction of agency costs, and illustrates the limitation of the incremental intervention step algorithm. The main conclusion is that the use of predictive models, such as those developed by Oxand S.A., in the determination of replacement strategies for redundant reinforced concrete pipe networks can lead to a significant reduction in overall costs for the owner of the structure. (author)
The human brain maintains contradictory and redundant auditory sensory predictions.
Directory of Open Access Journals (Sweden)
Marika Pieszek
Computational and experimental research has revealed that auditory sensory predictions are derived from regularities of the current environment by using internal generative models. However, so far, what has not been addressed is how the auditory system handles situations giving rise to redundant or even contradictory predictions derived from different sources of information. To this end, we measured error signals in the event-related brain potentials (ERPs) in response to violations of auditory predictions. Sounds could be predicted on the basis of overall probability, i.e., one sound was presented frequently and another sound rarely. Furthermore, each sound was predicted by an informative visual cue. Participants' task was to use the cue and to discriminate the two sounds as fast as possible. Violations of the probability-based prediction (i.e., a rare sound) as well as violations of the visual-auditory prediction (i.e., an incongruent sound) elicited error signals in the ERPs (Mismatch Negativity [MMN] and Incongruency Response [IR]). Particular error signals were observed even when the overall probability and the visual symbol predicted different sounds. That is, the auditory system concurrently maintains and tests contradictory predictions. Moreover, if the same sound was predicted, we observed an additive error signal (scalp potential and primary current density) equaling the sum of the specific error signals. Thus, the auditory system maintains and tolerates functionally independently represented redundant and contradictory predictions. We argue that the auditory system exploits all currently active regularities in order to optimally prepare for future events.
Markov Chains For Testing Redundant Software
White, Allan L.; Sjogren, Jon A.
1990-01-01
A preliminary design is developed for a validation experiment that addresses problems unique to assuring extremely high quality of multiple-version programs in process-control software. The approach takes into account the inertia of the controlled system, in the sense that it takes more than one failure of the control program to cause the controlled system to fail. The verification procedure consists of two steps: experimentation (numerical simulation) and computation, with a Markov model for each step.
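The inertia idea, that the controlled system only fails after consecutive control-program failures, can be sketched as a small absorbing Markov chain (the states and the per-step failure probability are invented for illustration, not taken from the experiment design):

```python
import numpy as np

p = 0.1   # probability the control program fails on a given step
# States: 0 = system ok, 1 = one control failure absorbed by inertia,
#         2 = controlled system failed (absorbing).
# A success in state 1 returns the system to state 0; a second consecutive
# failure drives it into the absorbing failed state.
P = np.array([[1 - p, p,   0.0],
              [1 - p, 0.0, p  ],
              [0.0,   0.0, 1.0]])

def failure_prob(n_steps):
    """Probability the controlled system has failed within n_steps."""
    dist = np.array([1.0, 0.0, 0.0]) @ np.linalg.matrix_power(P, n_steps)
    return dist[2]
```

Because two consecutive failures are needed, the shortest failure path has probability p², and the chain makes the long-run failure probability a simple matrix-power computation, which is the role the Markov model plays in the verification procedure.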
About the complete loss of functions assumed by redundant systems
International Nuclear Information System (INIS)
Boaretto, Y.; Cayol, A.; Fourest, M.; Guimbail, H.
1980-04-01
Situations resulting from the loss of redundant safety systems have to be taken into account. Two approaches were probed: evaluation of the failure probability and analysis of the consequences of those situations. The first leads to improving the reliability of the systems concerned, the second to setting up mitigating means. Before TMI-2 occurred, safety advice had already been issued about three kinds of situations: anticipated transients without scram, loss of the ultimate heat sink, and simultaneous loss of outside and inside power supplies. That, in some cases, something had to be done to improve safety showed the rightness of the concern. The next step is the study of the loss of both normal and emergency feedwater; the regulatory request was issued in September 1979
Crime Modeling using Spatial Regression Approach
Saleh Ahmar, Ansari; Adiatma; Kasim Aidid, M.
2018-01-01
Acts of criminality in Indonesia increase in both variety and quantity every year: murder, rape, assault, vandalism, theft, fraud, fencing, and other cases that make people feel unsafe. The risk of society being exposed to crime is measured by the number of cases reported to the police; the more reports, the higher the crime in the region. In this research, criminality in South Sulawesi, Indonesia, is modeled with society's exposure to crime risk as the dependent variable. Modeling follows an areal approach using the Spatial Autoregressive (SAR) and Spatial Error Model (SEM) methods. The independent variables are population density, number of poor inhabitants, GDP per capita, unemployment and the Human Development Index (HDI). The spatial regression analysis shows no spatial dependency, in either lag or error form, in South Sulawesi.
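To make the SAR data model concrete (this is an illustration of the model form only, not the authors' estimation code; the ring contiguity, parameter values and seed are invented for the sketch), a spatial autoregressive process y = ρWy + Xβ + ε can be simulated through its reduced form y = (I − ρW)⁻¹(Xβ + ε):

```python
import numpy as np

def simulate_sar(n=30, rho=0.5, beta=(1.0, 2.0), seed=0):
    """Simulate a SAR process on a ring of n regions with a
    row-standardised contiguity matrix W."""
    rng = np.random.default_rng(seed)
    # Ring contiguity: each region touches its two neighbours, weights 1/2.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    eps = rng.normal(scale=0.1, size=n)
    # Reduced form: y = (I - rho*W)^{-1} (X beta + eps)
    y = np.linalg.solve(np.eye(n) - rho * W, X @ np.asarray(beta) + eps)
    return y, X, W
```

Fitting ρ and β back from such data (by maximum likelihood or two-stage least squares) is what SAR estimation routines do; the spatial-dependence test in the abstract amounts to testing ρ = 0 (and the analogous error coefficient for SEM).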
Optimal redundant systems for works with random processing time
International Nuclear Information System (INIS)
Chen, M.; Nakagawa, T.
2013-01-01
This paper studies the optimal redundant policies for a manufacturing system processing jobs with random working times. The redundant units of the parallel systems and standby systems are subject to stochastic failures during the continuous production process. First, a job consisting of only one work is considered for both redundant systems and the expected cost functions are obtained. Next, each redundant system with a random number of units is assumed for a single work. The expected cost functions and the optimal expected numbers of units are derived for redundant systems. Subsequently, the production processes of N tandem works are introduced for parallel and standby systems, and the expected cost functions are also summarized. Finally, the number of works is estimated by a Poisson distribution for the parallel and standby systems. Numerical examples are given to demonstrate the optimization problems of redundant systems
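A toy version of the first optimization problem makes the structure concrete. The modelling choices below are ours, not necessarily the paper's: units in a parallel system fail independently at exponential rate λ, the single work has an exponentially distributed length with rate θ, and the job is lost only if all n units fail before the work completes; the cost constants are illustrative.

```python
from math import comb

def p_all_fail(n, lam, theta):
    """P(all n parallel units fail before an Exp(theta) work finishes),
    by binomial expansion of the integral of theta*e^{-theta t}(1-e^{-lam t})^n."""
    return sum(comb(n, k) * (-1) ** k * theta / (theta + k * lam)
               for k in range(n + 1))

def optimal_units(lam, theta, c_unit, c_fail, n_max=20):
    """Number of units minimising expected cost = unit cost + failure penalty."""
    cost = lambda n: c_unit * n + c_fail * p_all_fail(n, lam, theta)
    return min(range(1, n_max + 1), key=cost)
```

For λ = θ the single-unit loss probability is the race of two equal exponentials, 1/2, and p_all_fail decreases toward zero as units are added, so the optimum balances unit cost against the failure penalty, in the spirit of the expected cost functions derived in the paper.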
Stramaglia, Sebastiano; Angelini, Leonardo; Wu, Guorong; Cortes, Jesus M; Faes, Luca; Marinazzo, Daniele
2016-12-01
We develop a framework for the analysis of synergy and redundancy in the pattern of information flow between subsystems of a complex network. The presence of redundancy and/or synergy in multivariate time series data makes it difficult to estimate the net flow of information from each driver variable to a given target. We show that, adopting an unnormalized definition of Granger causality, one can reveal redundant multiplets of variables influencing the target by maximizing the total Granger causality to a given target over all the possible partitions of the set of driving variables. Consequently, we introduce a pairwise index of synergy which is zero when two independent sources additively influence the future state of the system, unlike previous definitions of synergy. We report the application of the proposed approach to resting state functional magnetic resonance imaging data from the Human Connectome Project, showing that redundant pairs of regions arise mainly due to space contiguity and interhemispheric symmetry, while synergy occurs mainly between nonhomologous pairs of regions in opposite hemispheres. Redundancy and synergy, in healthy resting brains, display characteristic patterns, revealed by the proposed approach. The pairwise synergy index, here introduced, maps the informational character of the system at hand into a weighted complex network: the same approach can be applied to other complex systems whose normal state corresponds to a balance between redundant and synergetic circuits.
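A minimal numerical sketch of the idea, under an assumed order-1 linear model (the function names and the simulation below are ours): unnormalized Granger causality is the drop in residual variance when a driver's past is added to the target's autoregression, and the pairwise synergy index compares the joint drop with the sum of the individual drops.

```python
import numpy as np

def _resid_var(y_next, past_cols):
    """Residual variance of a least-squares regression of y_next on past_cols."""
    X = np.column_stack([np.ones(len(y_next))] + past_cols)
    beta, *_ = np.linalg.lstsq(X, y_next, rcond=None)
    return (y_next - X @ beta).var()

def unnormalized_gc(target, drivers):
    """Drop in residual variance when the drivers' past is added
    to the target's own (order-1) autoregression."""
    y_next, y_past = target[1:], target[:-1]
    base = _resid_var(y_next, [y_past])
    full = _resid_var(y_next, [y_past] + [d[:-1] for d in drivers])
    return base - full

def synergy_index(target, di, dj):
    """GC of the pair minus the sum of individual GCs."""
    return (unnormalized_gc(target, [di, dj])
            - unnormalized_gc(target, [di])
            - unnormalized_gc(target, [dj]))
```

For two independent sources that additively drive the target, the index comes out close to zero, matching the defining property stated in the abstract.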
Merging Digital Surface Models Implementing Bayesian Approaches
Sadeq, H.; Drummond, J.; Li, Z.
2016-06-01
In this research different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian Approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). It is deemed preferable to use a Bayesian Approach when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the problem of the lack of data can then be solved by introducing a priori estimations of data. To infer the prior data, it is assumed that the roofs of the buildings are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, containing different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model successfully improved the quality of the DSMs and some of their characteristics, such as the roof surfaces, leading to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
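The core Bayesian fusion step can be sketched as standard Gaussian inverse-variance weighting (a simplification: the paper's model also brings in entropy-based roof priors, omitted here, and the per-cell standard deviations are assumed inputs, e.g. derived from matching quality):

```python
import numpy as np

def merge_dsms(heights, sigmas):
    """heights, sigmas: lists of 2-D arrays, one pair per source DSM.
    Each cell height is treated as a Gaussian measurement; the posterior
    mean weights each source by the inverse of its variance.
    Returns (posterior mean, posterior standard deviation) per cell."""
    w = [1.0 / s**2 for s in sigmas]
    w_sum = sum(w)
    mean = sum(wi * h for wi, h in zip(w, heights)) / w_sum
    return mean, np.sqrt(1.0 / w_sum)
```

Cells where one source is much noisier are pulled toward the more precise source, and the posterior standard deviation is always smaller than either input, which is the sense in which merging improves the DSM.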
Redundancy and Reliability for an HPC Data Centre
Erhan Yılmaz
2012-01-01
Defining a level of redundancy is a strategic question when planning a new data centre, as it will directly impact the entire design of the building as well as the construction and operational costs. It will also affect how to integrate future extension plans into the design. Redundancy is also a key strategic issue when upgrading or retrofitting an existing facility. Redundancy is a central strategic question to any business that relies on data centres for its operation. In th...
Reliability of redundant structures of nuclear reactor protection systems
International Nuclear Information System (INIS)
Vojnovic, B.
1983-01-01
In this paper, the reliability of various redundant structures of PWR protection systems is analysed. Structures of reactor trip systems as well as systems for the activation of safety devices are presented. In all those systems redundancy is achieved by means of so-called majority voting logic ('r out of n' structures). Different redundant devices are compared concerning the probability of occurrence of safe as well as unsafe failures. (author)
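The reliability of an 'r out of n' majority-voting structure follows directly from the binomial distribution, assuming independent units of equal reliability p (a textbook formula, not specific to this paper's systems):

```python
from math import comb

def r_out_of_n(r, n, p):
    """Probability that at least r of n independent units
    (each working with probability p) are working."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r, n + 1))
```

For a 2-out-of-3 voter with unit reliability 0.9 this gives 0.729 + 3(0.81)(0.1) = 0.972, better than a single unit; the same formula evaluated for the 'safe failure' and 'unsafe failure' modes is what allows the comparison the abstract describes.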
Does functional redundancy stabilize fish communities?
DEFF Research Database (Denmark)
Rice, Jake; Daan, Niels; Gislason, Henrik
2013-01-01
Functional redundancy of species sharing a feeding strategy and/or maximum size has been hypothesized to contribute to increased resilience of marine fish communities (the "portfolio effect"). A consistent time-series of survey data of fish in the North Sea was used to examine if trophic functional groups or maximum length of species (Lmax) groups with larger numbers of species had lower coefficients of variation in abundance and biomass over time than did groupings with fewer species. Results supported this hypothesis. However, the stabilizing effect of the number of species in a group on variation in abundance or biomass could be accounted for by the Law of Large Numbers, providing no evidence that specific ecological processes or co-adaptations are necessary to produce this effect. This implies that successful conservation policies to maintain the resilience of a marine fish community could be based...
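The Law-of-Large-Numbers explanation can be illustrated with synthetic data (independent lognormal fluctuations, not the North Sea survey data): the coefficient of variation of a group's total abundance falls roughly as 1/√n with the number of species, without any ecological interaction being needed.

```python
import numpy as np

def group_cv(n_species, n_years=20000, seed=1):
    """Coefficient of variation of the summed abundance of n_species
    independently fluctuating (lognormal) species."""
    rng = np.random.default_rng(seed)
    abundance = rng.lognormal(mean=0.0, sigma=0.5,
                              size=(n_years, n_species))
    total = abundance.sum(axis=1)
    return total.std() / total.mean()
```

A group of 16 such species has roughly a quarter of the CV of a single species, which is the statistical "portfolio effect" the abstract says can masquerade as functional redundancy.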
Analysis and Design of Offset QPSK Using Redundant Filter Banks
International Nuclear Information System (INIS)
Fernandez-Vazquez, Alfonso; Jovanovic-Dolecek, Gordana
2013-01-01
This paper considers the analysis and design of OQPSK digital modulation. We first establish the discrete-time formulation, which allows us to find the equivalent redundant filter banks. It is well known that redundant filter banks are related to the redundant transformations of frame theory. According to frame theory, redundant transformations and the corresponding representations are not unique. In this way, we show that the solution to the pulse-shaping problem is not unique. Then we use this property to minimize the effect of channel noise on the reconstructed symbol stream. We evaluate the performance of the digital communication system using numerical examples.
A nationwide modelling approach to decommissioning - 16182
International Nuclear Information System (INIS)
Kelly, Bernard; Lowe, Andy; Mort, Paul
2009-01-01
In this paper we describe a proposed UK national approach to modelling decommissioning. For the first time, we shall have an insight into optimizing the safety and efficiency of a national decommissioning strategy. To do this we use the General Case Integrated Waste Algorithm (GIA), a universal model of decommissioning nuclear plant, power plant, waste arisings and the associated knowledge capture. The model scales from individual items of plant through cells, groups of cells, buildings, whole sites and then on up to a national scale. We describe the national vision for GIA, which can be broken down into three levels: 1) the capture of the chronological order of activities that an experienced decommissioner would use to decommission any nuclear facility anywhere in the world - this is Level 1 of GIA; 2) the construction of an Operational Research (OR) model based on Level 1 to allow 'what if...' scenarios to be tested quickly (Level 2); 3) the construction of a state-of-the-art knowledge capture capability that allows future generations to learn from our current decommissioning experience (Level 3). We show the progress to date in developing GIA at Levels 1 and 2. As part of Level 1, GIA has assisted in the development of an IMechE professional decommissioning qualification. Furthermore, we describe GIA as the basis of a UK-owned database of decommissioning norms for such things as costs, productivity, durations, etc. From Level 2, we report on a pilot study that has successfully tested the basic principles for the OR numerical simulation of the algorithm. We then highlight the advantages of applying the OR modelling approach nationally. In essence, a series of 'what if...' scenarios can be tested that will improve the safety and efficiency of decommissioning. (authors)
Modeling in transport phenomena a conceptual approach
Tosun, Ismail
2007-01-01
Modeling in Transport Phenomena, Second Edition presents and clearly explains, with example problems, the basic concepts and their applications to fluid flow, heat transfer, mass transfer, chemical reaction engineering and thermodynamics. A balanced approach between analysis and synthesis is presented; students will understand how to use the solution in engineering analysis. Systematic derivations of the equations and the physical significance of each term are given in detail, so that students can easily understand and follow the material. There is a strong incentive in science and engineering to
Nuclear physics for applications. A model approach
International Nuclear Information System (INIS)
Prussin, S.G.
2007-01-01
Written by a researcher and teacher with experience at top institutes in the US and Europe, this textbook provides advanced undergraduates minoring in physics with working knowledge of the principles of nuclear physics. Simplifying models and approaches reveal the essence of the principles involved, with the mathematical and quantum mechanical background integrated in the text where it is needed and not relegated to the appendices. The practicality of the book is enhanced by numerous end-of-chapter problems and solutions available on the Wiley homepage. (orig.)
Analytical Redundancy Design for Aeroengine Sensor Fault Diagnostics Based on SROS-ELM
Directory of Open Access Journals (Sweden)
Jun Zhou
2016-01-01
Full Text Available Analytical redundancy techniques are of great importance to guarantee the reliability and safety of aircraft engine systems. In this paper, a machine learning based aeroengine sensor analytical redundancy technique is developed and verified through hardware-in-the-loop (HIL) simulation. The modified online sequential extreme learning machine, selective updating regularized online sequential extreme learning machine (SROS-ELM), is employed to train the model online and estimate sensor measurements. It selectively updates the output weights of the neural network according to the prediction accuracy and the norm of the output weight vector, tackles the problems of singularity and ill-posedness by regularization, and adopts a dual activation function in the hidden nodes combining neural and wavelet theory to enhance prediction capability. The experimental results verify the good generalization performance of SROS-ELM and show that the developed analytical redundancy technique for aeroengine sensor fault diagnosis based on SROS-ELM is effective and feasible.
Yankouskaya, Alla; Booth, David A; Humphreys, Glyn
2012-11-01
Interactions between the processing of emotion expression and form-based information from faces (facial identity) were investigated using the redundant-target paradigm, in which we specifically tested whether identity and emotional expression are integrated in a superadditive manner (Miller, Cognitive Psychology 14:247-279, 1982). In Experiments 1 and 2, participants performed emotion and face identity judgments on faces with sad or angry emotional expressions. Responses to redundant targets were faster than responses to either single target when a universal emotion was conveyed, and performance violated the predictions from a model assuming independent processing of emotion and face identity. Experiment 4 showed that these effects were not modulated by varying interstimulus and nontarget contingencies, and Experiment 5 demonstrated that the redundancy gains were eliminated when faces were inverted. Taken together, these results suggest that the identification of emotion and facial identity interact in face processing.
Robust Redundant Input Reliable Tracking Control for Omnidirectional Rehabilitative Training Walker
Directory of Open Access Journals (Sweden)
Ping Sun
2014-01-01
Full Text Available The problem of robust reliable tracking control of the omnidirectional rehabilitative training walker is examined. A new nonlinear redundant-input method is proposed for the case in which one wheel actuator fault occurs. The aim of the study is to design an asymptotically stable controller that can guarantee the safety of the user and ensure tracking of a training path planned by a physical therapist. The redundant-degrees-of-freedom safety control and the asymptotically zero state detectable concept of the walker are presented, the model of the redundant degree is constructed, and the constant-shift property of the center of gravity is obtained. A controller that satisfies asymptotic stability is obtained using a common Lyapunov function for admissible uncertainties resulting from an actuator fault. Simulation results confirm the effectiveness of the proposed method and verify that the walker can provide safe sequential motion when one wheel actuator is at fault.
Synergy and redundancy in the Granger causal analysis of dynamical networks
International Nuclear Information System (INIS)
Stramaglia, Sebastiano; M Cortes, Jesus; Marinazzo, Daniele
2014-01-01
We analyze, by means of Granger causality (GC), the effect of synergy and redundancy in the inference (from time series data) of the information flow between subsystems of a complex network. While we show that fully conditioned GC (CGC) is not affected by synergy, the pairwise analysis fails to reveal synergetic effects. In cases when the number of samples is low, thus making the fully conditioned approach unfeasible, we show that partially conditioned GC (PCGC) is an effective approach if the set of conditioning variables is properly chosen. Here we consider two different strategies for PCGC (based either on the informational content of the candidate driver or on selecting the variables with the highest pairwise influences) and show that, depending on the data structure, either one or the other might be equally valid. On the other hand, we observe that fully conditioned approaches do not work well in the presence of redundancy, suggesting the strategy of separating the pairwise links into two subsets: those corresponding to indirect connections of the CGC (which should thus be excluded) and links that can be ascribed to redundancy effects and that, together with the results from the fully conditioned approach, provide a better description of the causality pattern in the presence of redundancy. Finally we apply these methods to two different real datasets. First, analyzing electrophysiological data from an epileptic brain, we show that synergetic effects are dominant just before seizure occurrences. Second, our analysis applied to gene expression time series from a HeLa culture shows that the underlying regulatory networks are characterized by both redundancy and synergy. (paper)
Pedagogic process modeling: Humanistic-integrative approach
Directory of Open Access Journals (Sweden)
Boritko Nikolaj M.
2007-01-01
Full Text Available The paper deals with some current problems of modeling the dynamics of the subject-features development of the individual. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects logicality and regularity of the development of the process; discreteness (stageability) indicates qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Offered are two steps for singling out a particular stage and the algorithm for developing an integrative model for it. The suggested conclusions might be of use for further theoretic research, analyses of educational practices and for realistic predicting of pedagogical phenomena.
A novel approach to pipeline tensioner modeling
Energy Technology Data Exchange (ETDEWEB)
O'Grady, Robert; Ilie, Daniel; Lane, Michael [MCS Software Division, Galway (Ireland)]
2009-07-01
As subsea pipeline developments continue to move into deep and ultra-deep water locations, there is an increasing need for the accurate prediction of expected pipeline fatigue life. A significant factor that must be considered as part of this process is the fatigue damage sustained by the pipeline during installation. The magnitude of this installation-related damage is governed by a number of different agents, one of which is the dynamic behavior of the tensioner systems during pipe-laying operations. There are a variety of traditional finite element methods for representing dynamic tensioner behavior. These existing methods, while basic in nature, have been proven to provide adequate forecasts in terms of the dynamic variation in typical installation parameters such as top tension and sagbend/overbend strain. However, due to the simplicity of these current approaches, some of them tend to over-estimate the frequency of tensioner pay out/in under dynamic loading. This excessive level of pay out/in motion results in the prediction of additional stress cycles at certain roller beds, which in turn leads to the prediction of unrealistic fatigue damage to the pipeline. This unwarranted fatigue damage then equates to an over-conservative value for the accumulated damage experienced by a pipeline weld during installation, and so leads to a reduction in the estimated fatigue life for the pipeline. This paper describes a novel approach to tensioner modeling which allows for greater control over the velocity of dynamic tensioner pay out/in and so provides a more accurate estimation of the fatigue damage experienced by the pipeline during installation. The paper reports on a case study, outlined in the following section, in which a comparison is made between results from this new tensioner model and from a more conventional approach. The comparison considers typical installation parameters as well as an in-depth look at the predicted fatigue damage for the two methods.
Learning contrast-invariant cancellation of redundant signals in neural systems.
Directory of Open Access Journals (Sweden)
Jorge F Mejias
Full Text Available Cancellation of redundant information is a highly desirable feature of sensory systems, since it would potentially lead to a more efficient detection of novel information. However, biologically plausible mechanisms responsible for such selective cancellation, and especially those robust to realistic variations in the intensity of the redundant signals, are mostly unknown. In this work, we study, via in vivo experimental recordings and computational models, the behavior of a cerebellar-like circuit in the weakly electric fish which is known to perform cancellation of redundant stimuli. We experimentally observe contrast invariance in the cancellation of spatially and temporally redundant stimuli in such a system. Our model, which incorporates heterogeneously-delayed feedback, bursting dynamics and burst-induced STDP, is in agreement with our in vivo observations. In addition, the model gives insight on the activity of granule cells and parallel fibers involved in the feedback pathway, and provides a strong prediction on the parallel fiber potentiation time scale. Finally, our model predicts the existence of an optimal learning contrast around 15% contrast levels, which are commonly experienced by interacting fish.
Approaches and models of intercultural education
Directory of Open Access Journals (Sweden)
Iván Manuel Sánchez Fontalvo
2013-10-01
Full Text Available To build an intercultural society, awareness must be assumed in all social spheres, and education plays a leading role: it must promote educational spaces that form people with the virtues and capacities to live together in multicultural contexts and social diversities (sometimes unequal) in an increasingly globalized and interconnected world, and it must foster feelings of shared civic belonging to neighborhood, city, region and country, together with concern and critical judgement towards marginalization, poverty, misery and the inequitable distribution of wealth, the causes of structural violence, and the will to work for the welfare and transformation of these scenarios. On these premises, it is important to know the approaches and models of intercultural education that have been developed so far, analysing their impact on the educational contexts where they are applied.
Transport modeling: An artificial immune system approach
Directory of Open Access Journals (Sweden)
Teodorović Dušan
2006-01-01
Full Text Available This paper describes an artificial immune system approach (AIS) to modeling time-dependent (dynamic, real-time) transportation phenomena characterized by uncertainty. The basic idea behind this research is to develop an artificial immune system that generates a set of antibodies (decisions, control actions) that altogether can successfully cover a wide range of potential situations. The proposed artificial immune system develops antibodies (the best control strategies) for different antigens (different traffic "scenarios"). This task is performed using optimization or heuristic techniques. A set of antibodies is then combined to create the artificial immune system. The developed artificial immune transportation systems are able to generalize, adapt, and learn based on new knowledge and new information. Applications of the systems are considered for airline yield management, stochastic vehicle routing, and real-time traffic control at an isolated intersection. The preliminary research results are very promising.
System approach to modeling of industrial technologies
Toropov, V. S.; Toropov, E. S.
2018-03-01
The authors present a system of methods for modeling and improving industrial technologies. The system consists of an information part and a software part. The information part is structured information about industrial technologies, organized according to a template with several essential categories used to improve the technological process and eliminate weaknesses in the process chain. The base category is the physical effect that takes place as the technical process proceeds. The software part of the system can apply various methods of creative search to the content stored in the information part, paying particular attention to energy transformations in the technological process. Applying the system allows a systematic approach to improving technologies and obtaining new technical solutions.
N + 1 redundancy on ATCA instrumentation for Nuclear Fusion
Energy Technology Data Exchange (ETDEWEB)
Correia, Miguel, E-mail: miguelfc@ipfn.ist.utl.pt [Associação EURATOM/IST, Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico – Universidade Técnica de Lisboa, Lisboa (Portugal); Sousa, Jorge; Rodrigues, António P.; Batista, António J.N.; Combo, Álvaro; Carvalho, Bernardo B.; Santos, Bruno; Carvalho, Paulo F.; Gonçalves, Bruno [Associação EURATOM/IST, Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico – Universidade Técnica de Lisboa, Lisboa (Portugal); Correia, Carlos M.B.A. [Centro de Instrumentação, Departamento de Física, Universidade de Coimbra, Coimbra (Portugal); Varandas, Carlos A.F. [Associação EURATOM/IST, Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico – Universidade Técnica de Lisboa, Lisboa (Portugal)
2013-10-15
Highlights: ► In Nuclear Fusion, demanding security and high-availability requirements call for redundancy to be available. ► ATCA standard features desirable redundancy features for Fusion instrumentation. ► The developed control and data acquisition hardware modules support additional redundancy schemes. ► Implementation of N + 1 redundancy of host processor and I/O data modules. -- Abstract: The role of redundancy on control and data acquisition systems has gained a significant importance in the case of Nuclear Fusion, as demanding security and high-availability requirements call for redundancy to be available. IPFN's control and data acquisition system hardware is based on an Advanced Telecommunications Computing Architecture (ATCA) set of I/O (DAC/ADC endpoints) and data/timing switch modules, which handle data and timing from all I/O endpoints. Modules communicate through Peripheral Component Interconnect Express (PCIe), established over the ATCA backplane and controlled by one or more external hosts. The developed hardware modules were designed to take advantage of ATCA specification's redundancy features, namely at the hardware management level, including support of: (i) multiple host operation with N + 1 redundancy – in which a designated failover host takes over data previously assigned to a suddenly malfunctioning host and (ii) N + 1 redundancy of I/O and data/timing switch modules. This paper briefly describes IPFN's control and data acquisition system, which is being developed for ITER fast plant system controller (FPSC), and analyses the hardware implementation of its supported redundancy features.
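The N + 1 host-failover policy described above can be sketched as a small bookkeeping class (the class and method names are ours, not IPFN's actual controller API): I/O endpoints are assigned to N active hosts, and the single designated spare takes over exactly the failed host's endpoints, once.

```python
class NPlusOneController:
    """Toy N + 1 redundancy bookkeeping: N active hosts plus one spare."""

    def __init__(self, hosts, failover_host):
        self.assignment = {h: [] for h in hosts}   # host -> list of endpoints
        self.failover_host = failover_host
        self.failover_busy = False

    def assign(self, host, endpoint):
        self.assignment[host].append(endpoint)

    def host_failed(self, host):
        if self.failover_busy:
            raise RuntimeError("N + 1 covers a single failure only")
        # The failover host inherits the failed host's endpoints.
        self.assignment[self.failover_host] = self.assignment.pop(host)
        self.failover_busy = True
```

The RuntimeError branch reflects the defining limitation of N + 1 schemes: a second concurrent failure is outside the design basis, which is why higher-availability designs move to N + M or 2N redundancy.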
Pedestrian detection based on redundant wavelet transform
Huang, Lin; Ji, Liping; Hu, Ping; Yang, Tiejun
2016-10-01
Intelligent video surveillance analyzes video or image sequences captured by a fixed or mobile surveillance camera, including moving-object detection, segmentation and recognition, so that we can be notified immediately of an abnormal situation. Pedestrian detection plays an important role in an intelligent video surveillance system, and it is also a key technology in the field of intelligent vehicles, so it has vital significance in traffic management optimization, security early warning and abnormal behavior detection. Generally, pedestrian detection can be summarized as: first, estimate moving areas; then, extract features of the region of interest; finally, classify using a classifier. The redundant wavelet transform (RWT) overcomes the shift variance of the discrete wavelet transform and performs better in motion estimation. Addressing the detection of multiple pedestrians moving at different speeds, we present a pedestrian detection algorithm based on motion estimation using the RWT, combining the histogram of oriented gradients (HOG) and a support vector machine (SVM). Firstly, three intensities of movement (IoM) are estimated using the RWT and the corresponding areas are segmented. According to the different IoM, a region proposal (RP) is generated. Then, the features of an RP are extracted using HOG. Finally, the features are fed into an SVM trained on pedestrian databases and the final detection results are obtained. Experiments show that the proposed algorithm can detect pedestrians accurately and efficiently.
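The shift-invariance property that motivates using the RWT for motion estimation can be demonstrated with a one-level undecimated ("à trous") Haar transform in 1-D. This is a minimal sketch of the transform alone, not the paper's full detection pipeline, and the circular boundary handling is our simplifying choice:

```python
import numpy as np

def rwt_haar_level1(x):
    """One level of an undecimated Haar transform with circular extension.
    Returns (approximation, detail), each the same length as x."""
    x = np.asarray(x, dtype=float)
    shifted = np.roll(x, 1)
    approx = (x + shifted) / 2.0
    detail = (x - shifted) / 2.0
    return approx, detail
```

Because there is no decimation, shifting the input simply shifts the detail coefficients by the same amount (and approximation plus detail reconstructs the signal exactly), so detail-energy maps computed on successive frames stay aligned with a moving object, unlike the decimated DWT.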
Optimizing ISOCAM data processing using spatial redundancy
Miville-Deschênes, M.-A.; Boulanger, F.; Abergel, A.; Bernard, J.-P.
2000-11-01
Several instrumental effects of the Long Wavelength channel of ISOCAM, the camera on board the Infrared Space Observatory, degrade the processed images. We present new data-processing techniques that correct these defects, taking advantage of the fact that a position in the sky has been observed by several pixels at different times. We use this redundant information (1) to correct the long-term variation of the detector response, (2) to correct memory effects after glitches and point sources, and (3) to refine the deglitching process. As an example, we have applied our processing to the gamma-ray burst observation GRB 970402. Our new data-processing techniques allow the detection of faint extended emission with contrast smaller than 1% of the zodiacal background. The data reduction corrects instrumental effects to the point where the noise in the final map is dominated by readout and photon noise. All raster ISOCAM observations can benefit from the data processing described here. This includes mapping of solar system extended objects (comet dust trails), nearby clouds and star forming regions, and images of diffuse emission in the Galactic plane and external galaxies. These techniques could also be applied to other raster-type observations (e.g. ISOPHOT). Based on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, The Netherlands and the UK) and with the participation of ISAS and NASA.
ECOMOD - An ecological approach to radioecological modelling
International Nuclear Information System (INIS)
Sazykina, Tatiana G.
2000-01-01
A unified methodology is proposed to simulate the dynamic processes of radionuclide migration in aquatic food chains in parallel with their stable analogue elements. The distinguishing feature of the unified radioecological/ecological approach is the description of radionuclide migration along with dynamic equations for the ecosystem. The ability of the methodology to predict the results of radioecological experiments is demonstrated by an example of radionuclide (iron group) accumulation by a laboratory culture of the algae Platymonas viridis. Based on the unified methodology, the 'ECOMOD' radioecological model was developed to simulate dynamic radioecological processes in aquatic ecosystems. It comprises three basic modules, which are operated as a set of inter-related programs. The 'ECOSYSTEM' module solves non-linear ecological equations, describing the biomass dynamics of essential ecosystem components. The 'RADIONUCLIDE DISTRIBUTION' module calculates the radionuclide distribution in abiotic and biotic components of the aquatic ecosystem. The 'DOSE ASSESSMENT' module calculates doses to aquatic biota and doses to man from aquatic food chains. The application of the ECOMOD model to reconstruct the radionuclide distribution in the Chernobyl Cooling Pond ecosystem in the early period after the accident shows good agreement with observations.
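The dynamic-equation idea behind such models can be illustrated with a minimal two-compartment uptake model: water activity drives uptake into biota, with excretion and radioactive decay. The rate constants below are invented for illustration; they are not ECOMOD parameters.

```python
# Illustrative two-compartment radionuclide model in the spirit of
# dynamic radioecological modelling:
#   dCb/dt = ku*Cw - (ke + lam)*Cb   (uptake - excretion - decay)
#   dCw/dt = -lam*Cw                 (decay of water activity)
# Rate constants are hypothetical, chosen only for demonstration.

def simulate(cw0=1.0, ku=0.5, ke=0.1, lam=0.01, dt=0.01, t_end=50.0):
    cw, cb = cw0, 0.0
    for _ in range(int(t_end / dt)):
        dcb = ku * cw - (ke + lam) * cb
        dcw = -lam * cw
        cb += dcb * dt   # forward Euler step for biota concentration
        cw += dcw * dt   # forward Euler step for water concentration
    return cw, cb

cw, cb = simulate()
# biota activity approaches the quasi-steady level ku*Cw/(ke + lam)
```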
Modelling Approach In Islamic Architectural Designs
Directory of Open Access Journals (Sweden)
Suhaimi Salleh
2014-06-01
Full Text Available Architectural designs contribute as one of the main factors that should be considered in minimizing negative impacts in planning and structural development of buildings such as mosques. In this paper, the ergonomics perspective is revisited, focusing on conditional factors involving organisational, psychological, social and population aspects as a whole. This paper highlights the functional and architectural integration with aesthetic elements in the form of decorative and ornamental outlay, as well as their incorporation into building structures such as walls, domes and gates. It further focuses on the mathematical aspects of the architectural designs, such as polar equations and the golden ratio. These designs are modelled into mathematical equations of various forms, while the golden ratio in mosques is verified using two techniques, namely geometric construction and the numerical method. The exemplary designs are taken from the Sabah Bandaraya Mosque in Likas, Kota Kinabalu and the Sarawak State Mosque in Kuching, while the Universiti Malaysia Sabah Mosque is used for the golden ratio. Results show that Islamic architectural buildings and designs have long had mathematical concepts and techniques underlying their foundations; hence, a modelling approach is needed to rejuvenate these Islamic designs.
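The numerical verification of the golden ratio mentioned above can be sketched with a standard fixed-point iteration (this is a generic method, not necessarily the one used in the paper): iterating x → 1 + 1/x converges to φ = (1 + √5)/2.

```python
# Numerical computation of the golden ratio via the fixed-point
# iteration x -> 1 + 1/x, which converges to phi = (1 + sqrt(5)) / 2.
def golden_ratio(tol=1e-12):
    x = 1.0
    while True:
        nxt = 1.0 + 1.0 / x
        if abs(nxt - x) < tol:
            return nxt
        x = nxt

phi = golden_ratio()
# phi is approximately 1.6180339887
```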
Quantifying the value of redundant measurements at GCOS Reference Upper-Air Network sites
Directory of Open Access Journals (Sweden)
F. Madonna
2014-11-01
Full Text Available The potential for measurement redundancy to reduce uncertainty in atmospheric variables has not been investigated comprehensively for climate observations. We evaluated the usefulness of entropy and mutual correlation concepts, as defined in information theory, for quantifying random uncertainty and redundancy in time series of the integrated water vapour (IWV) and water vapour mixing ratio profiles provided by five highly instrumented GRUAN (GCOS [Global Climate Observing System] Reference Upper-Air Network) stations in 2010–2012. Results show that the random uncertainties on the IWV measured with radiosondes, global positioning system, microwave and infrared radiometers, and Raman lidar measurements differed by less than 8%. Comparisons of time series of IWV content from ground-based remote sensing instruments with in situ soundings showed that microwave radiometers have the highest redundancy with the IWV time series measured by radiosondes and therefore the highest potential to reduce the random uncertainty of the radiosonde time series. Moreover, the random uncertainty of a time series from one instrument can be reduced by ~60% by constraining the measurements with those from another instrument. The largest reduction of random uncertainty is achieved by conditioning Raman lidar measurements with microwave radiometer measurements. Specific instruments are recommended for atmospheric water vapour measurements at GRUAN sites. This approach can be applied to the study of redundant measurements for other climate variables.
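The information-theoretic quantities used above can be estimated from histograms: entropy of one series, and mutual information between two series as a measure of their redundancy. The sketch below uses synthetic data standing in for two redundant instruments; it is not GRUAN data or the paper's estimator.

```python
import numpy as np

# Histogram-based estimates of Shannon entropy and mutual information,
# the quantities used to measure redundancy between measurement series.
def entropy(x, bins=20):
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(x, y, bins=20):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz]))

rng = np.random.default_rng(1)
a = rng.normal(size=5000)             # "instrument 1" series (synthetic)
b = a + 0.3 * rng.normal(size=5000)   # "instrument 2": redundant, noisy copy
c = rng.normal(size=5000)             # unrelated series, little redundancy
# mutual_information(a, b) is much larger than mutual_information(a, c)
```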
Reliability analysis of component-level redundant topologies for solid-state fault current limiter
Farhadi, Masoud; Abapour, Mehdi; Mohammadi-Ivatloo, Behnam
2018-04-01
Experience shows that semiconductor switches are the most vulnerable components in power electronics systems. One of the most common ways to address this reliability challenge is component-level redundant design, for which four configurations are possible. This article presents a comparative reliability analysis of the different component-level redundant designs for a solid-state fault current limiter, with the aim of determining the more reliable configuration. The mean time to failure (MTTF) is used as the reliability parameter. Considering both fault types (open circuit and short circuit), the MTTFs of the different configurations are calculated. It is demonstrated that the more reliable configuration depends on the steady-state junction temperature of the semiconductor switches, which is a function of (i) ambient temperature, (ii) power loss of the semiconductor switch and (iii) thermal resistance of the heat sink. The sensitivity of the results to each parameter is also investigated, showing that under different conditions, various configurations have higher reliability. Experimental results are presented to clarify the theory and feasibility of the proposed approaches. Finally, the levelised costs of the different configurations are analysed for a fair comparison.
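The dependence of the better configuration on the failure-mode mix can be illustrated with a simple Markov absorption-time calculation for a pair of switches with exponential open-circuit (lo) and short-circuit (ls) failure rates: a series pair tolerates one short, a parallel pair tolerates one open. This is a generic sketch of this style of MTTF analysis, not the paper's model or data.

```python
# Illustrative MTTF comparison for component-level redundancy of a
# switch with open-circuit rate lo and short-circuit rate ls (per unit
# time, exponential). Simple two-state Markov absorption times.

def mttf_simplex(lo, ls):
    return 1.0 / (lo + ls)

def mttf_series_pair(lo, ls):
    lt = lo + ls
    # fails immediately on any open (rate 2*lo); survives one short (2*ls),
    # after which the remaining switch fails at rate lt
    return 1.0 / (2 * lt) + (2 * ls / (2 * lt)) * (1.0 / lt)

def mttf_parallel_pair(lo, ls):
    lt = lo + ls
    # fails immediately on any short (2*ls); survives one open (2*lo)
    return 1.0 / (2 * lt) + (2 * lo / (2 * lt)) * (1.0 / lt)

# When shorts dominate, the series pair wins; when opens dominate, the
# parallel pair wins -- so the better configuration depends on the
# failure-mode mix (which shifts with junction temperature).
```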
Directory of Open Access Journals (Sweden)
Jarmo Nurmi
2017-05-01
Full Text Available This paper addresses the energy-inefficiency problem of four-degrees-of-freedom (4-DOF) hydraulic manipulators through redundancy resolution in robotic closed-loop controlled applications. Because conventional methods are typically local and perform poorly when resolving redundancy with respect to minimum hydraulic energy consumption, global energy-optimal redundancy resolution is proposed at the level of the interaction between the valve-controlled actuators and the hydraulic power system. The energy consumption of the widely popular valve-controlled load-sensing (LS) and constant-pressure (CP) systems is effectively minimised through cost functions formulated in a discrete-time dynamic programming (DP) approach with a minimum state representation. A prescribed end-effector path and important actuator constraints at the position, velocity and acceleration levels are also satisfied in the solution. Extensive field experiments performed on a forestry hydraulic manipulator demonstrate the performance of the proposed solution: approximately 15–30% greater hydraulic energy consumption was observed with the conventional methods in the LS and CP systems. These results encourage energy-optimal redundancy resolution in future robotic applications of hydraulic manipulators.
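The discrete-time DP formulation can be sketched on a toy problem: one redundant coordinate is chosen at each path step to minimize a quadratic "energy" cost subject to a step (velocity) limit, via backward value iteration over a grid. The grid, cost and limits below are invented; this is not the paper's hydraulic cost function.

```python
import numpy as np

# Toy discrete-time DP redundancy resolution: pick a redundant coordinate
# q at each step to minimize sum of stage costs c(k, q), subject to a
# step-size (velocity) limit |q[k+1] - q[k]| <= vmax.

def dp_resolve(cost, grid, vmax):
    n_steps, n_q = cost.shape
    J = cost[-1].copy()                      # cost-to-go at the final step
    policy = np.zeros((n_steps, n_q), dtype=int)
    for k in range(n_steps - 2, -1, -1):     # backward value iteration
        Jn = np.full(n_q, np.inf)
        for i, q in enumerate(grid):
            feas = np.abs(grid - q) <= vmax  # reachable next grid points
            j = int(np.argmin(np.where(feas, J, np.inf)))
            policy[k, i] = j
            Jn[i] = cost[k, i] + J[j]
        J = Jn
    i = int(np.argmin(J))                    # cheapest feasible start
    traj = [grid[i]]
    for k in range(n_steps - 1):             # roll the optimum forward
        i = policy[k, i]
        traj.append(grid[i])
    return np.array(traj), float(J.min())

grid = np.linspace(-1, 1, 21)
steps = 10
ref = np.linspace(-0.8, 0.8, steps)          # drifting "optimal" target
cost = (grid[None, :] - ref[:, None]) ** 2   # quadratic energy proxy
traj, total = dp_resolve(cost, grid, vmax=0.25)
```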
Global Patterns in Ecological Indicators of Marine Food Webs: A Modelling Approach
Heymans, Johanna Jacomina; Coll, Marta; Libralato, Simone; Morissette, Lyne; Christensen, Villy
2014-01-01
Background Ecological attributes estimated from food web models have the potential to be indicators of good environmental status given their capabilities to describe redundancy, food web changes, and sensitivity to fishing. They can be used as a baseline to show how they might be modified in the future with human impacts such as climate change, acidification, eutrophication, or overfishing. Methodology In this study ecological network analysis indicators of 105 marine food web models were tested for variation with traits such as ecosystem type, latitude, ocean basin, depth, size, time period, and exploitation state, whilst also considering structural properties of the models such as number of linkages, number of living functional groups or total number of functional groups as covariate factors. Principal findings Eight indicators were robust to model construction: relative ascendency; relative overhead; redundancy; total systems throughput (TST); primary production/TST; consumption/TST; export/TST; and total biomass of the community. Large-scale differences were seen in the ecosystems of the Atlantic and Pacific Oceans, with the Western Atlantic being more complex with an increased ability to mitigate impacts, while the Eastern Atlantic showed lower internal complexity. In addition, the Eastern Pacific was less organised than the Eastern Atlantic, although both of these systems had increased primary production as eastern boundary current systems. Differences by ecosystem type highlighted coral reefs as having the largest energy flow and total biomass per unit of surface, while lagoons, estuaries, and bays had lower transfer efficiencies and higher recycling. These differences prevailed over time, although some traits changed with fishing intensity. Keystone groups were mainly higher trophic level species with mostly top-down effects, while structural/dominant groups were mainly lower trophic level groups (benthic primary producers such as seagrass and macroalgae).
An integrated approach to permeability modeling using micro-models
Energy Technology Data Exchange (ETDEWEB)
Hosseini, A.H.; Leuangthong, O.; Deutsch, C.V. [Society of Petroleum Engineers, Canadian Section, Calgary, AB (Canada)]|[Alberta Univ., Edmonton, AB (Canada)
2008-10-15
An important factor in predicting the performance of steam assisted gravity drainage (SAGD) well pairs is the spatial distribution of permeability. Complications that make the inference of a reliable porosity-permeability relationship impossible include the presence of short-scale variability in sand/shale sequences; preferential sampling of core data; and uncertainty in upscaling parameters. Micro-modelling is a simple and effective method for overcoming these complications. This paper proposed a micro-modelling approach to account for sampling bias, small laminated features with high permeability contrast, and uncertainty in upscaling parameters. The paper described the steps and challenges of micro-modelling and discussed the construction of binary mixture geo-blocks; flow simulation and upscaling; extended power law formalism (EPLF); and the application of micro-modelling and EPLF. An extended power-law formalism to account for changes in clean sand permeability as a function of macroscopic shale content was also proposed and tested against flow simulation results. There was close agreement between the model and simulation results. The proposed methodology was also applied to build the porosity-permeability relationship for laminated and brecciated facies of McMurray oil sands. The model predictions were in good agreement with the experimental data. 8 refs., 17 figs.
Triple Modular Redundancy verification via heuristic netlist analysis
Directory of Open Access Journals (Sweden)
Giovanni Beltrame
2015-08-01
Full Text Available Triple Modular Redundancy (TMR) is a common technique to protect memory elements in digital processing systems subject to radiation effects (such as in space, at high altitude, or near nuclear sources). This paper presents an approach to verify the correct implementation of TMR for the memory elements of a given netlist (i.e., a digital circuit specification) using heuristic analysis. The purpose is to detect any issues that might arise during the use of automatic tools for TMR insertion, optimization, place and route, etc. Our analysis does not require a testbench and can perform full, exhaustive coverage within less than an hour even for large designs. This is achieved by applying a divide-and-conquer approach, splitting the circuit into smaller submodules without loss of generality, instead of applying formal verification to the whole netlist at once. The methodology has been applied to a production netlist of the LEON2-FT processor that had reported errors during radiation testing, successfully identifying a number of unprotected memory elements, namely 351 flip-flops.
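The property a TMR verification tool must establish for every protected memory element, that a single corrupted replica is always masked by the majority voter, can be checked exhaustively for the 1-bit case. This is a generic illustration of the TMR voting principle, not the paper's netlist-analysis heuristics.

```python
from itertools import product

# Bitwise majority voter used in TMR, plus an exhaustive check that any
# single corrupted replica is masked by the vote.
def vote(a, b, c):
    return (a & b) | (b & c) | (a & c)

def single_fault_masked():
    """Exhaustively verify: for every stored value and every single
    faulty replica, the voter still returns the stored value."""
    for value, flipped in product([0, 1], repeat=2):
        for i in range(3):
            replicas = [value, value, value]
            replicas[i] = flipped
            if vote(*replicas) != value:
                return False
    return True
```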
The issue of redundant places of worship
Directory of Open Access Journals (Sweden)
Paolo Cavana
2012-03-01
Abstract: Nowadays one of the major issues concerning ecclesiastical, or religious, property in Europe, as elsewhere, consists of deciding what to do with redundant churches and places of worship of traditional Christian denominations, all of which have lost their original use, either due to a formal decision of the ecclesiastical authorities or to simple closure to the public. This might have been caused by a series of events, like a significant decrease in church attendance, limited public resources, new urban planning projects and the fall in religious vocations. For these places there is either the prospect of a new use, or a slow process of decay which can ultimately end in sale or demolition. This problem is faced today with singular urgency in Europe, where it increases the risk of decay of much of the historical-artistic heritage, together with the abandonment of countryside and mountain locations and a progressive desertion of historical town centres. Consequently, the issue concerns not only the religious community but also the civil authorities and public opinion, which are more and more sensitive to the protection of cultural heritage and the historical memory of local communities. This paper examines many aspects of the issue in several countries (Italy, Germany, Switzerland, France, Québec, the United States), comparing the different legal frameworks and the documents of some national episcopal assemblies on the subject, especially regarding the change in the use of churches. Finally, it concentrates on the situation in Italy, where the legal framework on this subject is strictly connected with the system of church-state relations, making some concluding remarks about future prospects and possible solutions to some of the more serious aspects of the issue.
Processing bimodal stimulus information under alcohol: is there a risk to being redundant?
Fillmore, Mark T
2010-10-01
The impairing effects of alcohol are especially pronounced in environments that involve dividing attention across two or more stimuli. However, studies in cognitive psychology have identified circumstances in which the presentation of multiple stimuli can actually facilitate performance. The "redundant signal effect" (RSE) refers to the observation that individuals respond more quickly when information is presented as redundant, bimodal stimuli (e.g., aurally and visually), rather than as a single stimulus presented to either modality alone. The present study tested the hypothesis that the response facilitation attributed to the RSE could reduce the degree to which alcohol slows information processing. Two experiments are reported. Experiment 1 demonstrated the validity of a reaction time model of the RSE by showing that adults (N = 15) responded more quickly to redundant, bimodal stimuli (visual + aural) than to either stimulus presented individually. Experiment 2 used the RSE model to test the reaction time performance of 20 adults following three alcohol doses (0.0 g/kg, 0.45 g/kg, and 0.65 g/kg). Results showed that alcohol slowed reaction time in a general dose-dependent manner in all three stimulus conditions, with the reaction time (RT) speed advantage of the redundant signal being maintained even under the highest dose of alcohol. Evidence for an RT advantage for bimodal stimuli under alcohol challenges the general assumption that alcohol impairment is intensified in multistimulus environments. The current study provides a useful model to investigate how drug effects on behavior might be altered in contexts that involve redundant response signals.
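One common account of the RSE (an assumption here, not necessarily the study's model) is the "race model": on bimodal trials the response is triggered by whichever channel finishes first, so redundant-trial RT is the minimum of the two channel RTs, which is faster on average than either channel alone. A simulation with invented RT distributions:

```python
import numpy as np

# Race-model illustration of the redundant signal effect: on bimodal
# trials, RT = min(RT_visual, RT_aural). RT distributions are invented
# for illustration; this is not the study's data.
rng = np.random.default_rng(42)
n = 10_000
rt_visual = rng.normal(350, 40, n)    # ms, hypothetical unimodal RTs
rt_aural = rng.normal(340, 40, n)
rt_redundant = np.minimum(rt_visual, rt_aural)

mean_single = min(rt_visual.mean(), rt_aural.mean())
mean_redundant = rt_redundant.mean()
# statistical facilitation: the minimum of two noisy RTs is faster on
# average than the faster channel alone
```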
Risk communication: a mental models approach
National Research Council Canada - National Science Library
Morgan, M. Granger (Millett Granger)
2002-01-01
... information about risks. The procedure uses approaches from risk and decision analysis to identify the most relevant information; it also uses approaches from psychology and communication theory to ensure that its message is understood. This book is written in nontechnical terms, designed to make the approach feasible for anyone willing to try it. It is illustrat...
Skew redundant MEMS IMU calibration using a Kalman filter
International Nuclear Information System (INIS)
Jafari, M; Sahebjameyan, M; Moshiri, B; Najafabadi, T A
2015-01-01
In this paper, a novel calibration procedure for skew redundant inertial measurement units (SRIMUs) based on micro-electro-mechanical systems (MEMS) is proposed. A general model of the SRIMU measurements is derived which contains the effects of bias, scale factor error and misalignments. For more accuracy, the effect of the lever arms from the accelerometers to the center of the table is modeled and compensated in the calibration procedure. Two separate Kalman filters (KFs) are proposed to perform the estimation of error parameters for the gyroscopes and accelerometers. The predictive error minimization (PEM) stochastic modeling method is used to simultaneously model the effect of bias instability and random walk noise on the calibration Kalman filters to diminish biased estimations. The proposed procedure is evaluated both in numerical simulation and experimentally. The calibration maneuvers are applied using a two-axis angle turntable in a way that the persistency of excitation (PE) condition for parameter estimation is met. For this purpose, a trapezoidal calibration profile is utilized to excite different deterministic error parameters of the accelerometers and a pulse profile is used for the gyroscopes. Furthermore, to evaluate the performance of the proposed KF calibration method, a conventional least squares (LS) calibration procedure is derived for the SRIMUs, and the simulation and experimental results compare the functionality of the two proposed methods. (paper)
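The simplest instance of Kalman-filter error-parameter estimation is estimating a constant sensor bias from noisy measurements. The sketch below is generic (noise levels and the true bias are invented), far simpler than the paper's full SRIMU model with scale factors, misalignments and lever arms.

```python
import numpy as np

# Minimal scalar Kalman filter estimating a constant sensor bias from
# noisy measurements. Since the state is constant, there is no
# prediction step; only measurement updates.
rng = np.random.default_rng(7)
true_bias = 0.05          # rad/s, hypothetical gyro bias
r = 0.01                  # measurement noise variance
x, p = 0.0, 1.0           # state estimate and its variance (prior)
for z in true_bias + np.sqrt(r) * rng.standard_normal(500):
    k = p / (p + r)       # Kalman gain
    x = x + k * (z - x)   # measurement update of the estimate
    p = (1 - k) * p       # posterior variance shrinks with each update
```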
International Nuclear Information System (INIS)
Son, Kwang Seop; Kim, Dong Hoon; Park, Gee Yong; Kang, Hyun Gook
2018-01-01
Highlights: •The multiple redundant controller, SPLC, is configured as a combination of DMR and TMR architectures. •We construct a Markov model of the SPLC using the concept of the system unavailability rate. •To satisfy the availability requirement for a safety-grade controller, the fault coverage factor (FCF) should be ≥0.8, and the MTTR of each module should be ≤100 h when the FCF is 0.9. •The availability of the SPLC is better than that of a PLC with the iTMR architecture, but poorer than iTMR when the off-line test and inspection are considered, on the assumption that the MTTR of each module is ≤200 h. -- Abstract: We analyze the availability of the Safety Programmable Logic Controller (SPLC), which has multiple redundant architectures. In the SPLC, the input/output and processor modules are configured as triple modular redundancy (TMR), and the backplane bus, power and communication modules are configured as dual modular redundancy (DMR). The voting logics for the redundant architectures are based on forwarding error detection, meaning that the receivers perform the voting logics based on the status information of the transmitters. To analyze the availability of the SPLC, we construct a Markov model and simplify it by adopting the system unavailability rate. The results show that the fault coverage factor should be ≥0.8 and the Mean Time To Repair (MTTR) should be ≤100 h in order to satisfy the requirement that the availability of a safety-grade PLC be ≥0.995. We also evaluate the availability of the SPLC in comparison with other PLCs used in existing nuclear safety systems, such as simplex, processor-DMR (pDMR) and independent TMR (iTMR) PLCs. The availability of the SPLC is higher than those of the simplex and pDMR PLCs but lower than that of the iTMR PLC over the one-month periodic off-line test and inspection interval. This indicates that the number of redundant modules used in a PLC is more decisive in increasing availability than the number of fault-masking methods, such as voting logics, that are used.
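The basic building block of such a Markov availability model is a two-state (up/down) repairable module with failure rate λ and repair rate μ = 1/MTTR, whose steady-state availability is μ/(λ + μ). The sketch below solves the stationary distribution numerically and checks it against the closed form; the rates are illustrative, not SPLC parameters.

```python
import numpy as np

# Steady-state availability of a repairable module from a two-state
# continuous-time Markov chain (up/down). Rates are hypothetical:
# lam = failure rate, mu = repair rate (1/MTTR).
lam, mu = 1e-4, 1e-2           # per hour: MTTF = 10,000 h, MTTR = 100 h
Q = np.array([[-lam, lam],
              [mu, -mu]])       # generator matrix, states: [up, down]

# stationary distribution: pi @ Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
availability = pi[0]            # equals mu / (lam + mu) in closed form
```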
A Multi-Model Approach for System Diagnosis
DEFF Research Database (Denmark)
Niemann, Hans Henrik; Poulsen, Niels Kjølstad; Bækgaard, Mikkel Ask Buur
2007-01-01
A multi-model approach for system diagnosis is presented in this paper. The relation with fault diagnosis as well as performance validation is considered. The approach is based on testing a number of pre-described models and finding which one is the best. It is an active approach, i.e. an auxiliary input is applied to the system. The multi-model approach is applied to a wind turbine system.
The theory of diversity and redundancy in information system security : LDRD final report.
Energy Technology Data Exchange (ETDEWEB)
Mayo, Jackson R. (Sandia National Laboratories, Livermore, CA); Torgerson, Mark Dolan; Walker, Andrea Mae; Armstrong, Robert C. (Sandia National Laboratories, Livermore, CA); Allan, Benjamin A. (Sandia National Laboratories, Livermore, CA); Pierson, Lyndon George
2010-10-01
The goal of this research was to explore first principles associated with mixing of diverse implementations in a redundant fashion to increase the security and/or reliability of information systems. Inspired by basic results in computer science on the undecidable behavior of programs and by previous work on fault tolerance in hardware and software, we have investigated the problem and solution space for addressing potentially unknown and unknowable vulnerabilities via ensembles of implementations. We have obtained theoretical results on the degree of security and reliability benefits from particular diverse system designs, and mapped promising approaches for generating and measuring diversity. We have also empirically studied some vulnerabilities in common implementations of the Linux operating system and demonstrated the potential for diversity to mitigate these vulnerabilities. Our results provide foundational insights for further research on diversity and redundancy approaches for information systems.
International Nuclear Information System (INIS)
Sadjadi, Seyed Jafar; Soltani, R.
2009-01-01
We present a heuristic approach to solve a general framework of the serial-parallel redundancy problem, in which the reliability of the system is maximized subject to general linear constraints. The redundancy problem is generally considered to be NP-hard and the optimal solution is not normally available. Therefore, to evaluate the performance of the proposed method, a hybrid genetic algorithm is also implemented, whose parameters are calibrated via Taguchi's robust design method. Various test problems are solved, and the computational results indicate that the proposed heuristic approach provides promising reliabilities, fairly close to the optimal solutions, in a reasonable amount of time.
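For very small instances, the series-parallel redundancy allocation problem can be solved by brute force, which shows the structure the heuristic must navigate: choose how many parallel copies of each subsystem to deploy, maximizing system reliability under a cost budget. The reliabilities and costs below are invented; real instances are NP-hard, which is why heuristics like the one above are needed.

```python
from itertools import product
from math import prod

# Brute-force baseline for a tiny series-parallel redundancy allocation:
# maximize prod_i (1 - (1 - r_i)^n_i) subject to sum_i n_i * c_i <= budget.
def best_allocation(rel, cost, budget, max_n=4):
    best = (0.0, None)
    for ns in product(range(1, max_n + 1), repeat=len(rel)):
        if sum(n * c for n, c in zip(ns, cost)) > budget:
            continue
        r_sys = prod(1 - (1 - r) ** n for r, n in zip(rel, ns))
        best = max(best, (r_sys, ns))
    return best

# three subsystems with invented reliabilities and unit costs
r_sys, ns = best_allocation(rel=[0.9, 0.8, 0.95], cost=[2, 3, 1], budget=15)
# redundancy lifts system reliability well above the no-redundancy
# baseline of 0.9 * 0.8 * 0.95 = 0.684
```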
Sibling rivalry: related bacterial small RNAs and their redundant and non-redundant roles.
Caswell, Clayton C; Oglesby-Sherrouse, Amanda G; Murphy, Erin R
2014-01-01
Small RNA molecules (sRNAs) are now recognized as key regulators controlling bacterial gene expression, as sRNAs provide a quick and efficient means of positively or negatively altering the expression of specific genes. To date, numerous sRNAs have been identified and characterized in a myriad of bacterial species, but more recently, a theme in bacterial sRNAs has emerged: the presence of more than one highly related sRNA produced by a given bacterium, here termed sibling sRNAs. Sibling sRNAs are those that are highly similar at the nucleotide level, and while it might be expected that sibling sRNAs exert identical regulatory functions on the expression of target genes based on their high degree of relatedness, emerging evidence demonstrates that this is not always the case. Indeed, there are several examples of bacterial sibling sRNAs with non-redundant regulatory functions, but there are also instances of apparent regulatory redundancy between sibling sRNAs. This review provides a comprehensive overview of the current knowledge of bacterial sibling sRNAs, and also discusses important questions about the significance and evolutionary implications of this emerging class of regulators.
Redundancy Factors for the Seismic Design of Ductile Reinforced Concrete Chevron Braced Frames
Directory of Open Access Journals (Sweden)
Eber Alberto Godínez-Domínguez
Full Text Available Abstract In this paper the authors summarize the results of a study devoted to assessing, using nonlinear static analyses, the impact of increasing structural redundancy in ductile moment-resisting reinforced concrete concentric braced frame structures (RC-MRCBFs). Among the studied variables were the number of stories and the number of bays. The results obtained were compared with the values currently proposed in the Manual of Civil Structures (MOC-08), a model code of Mexico. The studied frames have 4, 8, 12 and 16 stories, with a story height h = 3.5 m and a fixed length L = 12 m accommodating 1, 2, 3 or 4 bays. The RC-MRCBFs were assumed to be located in soft soil conditions in Mexico City and were designed using a capacity design methodology adapted to the general requirements of the seismic, reinforced concrete and steel guidelines of the Mexican codes. From the results obtained in this study it is possible to conclude that increasing the number of bays affects overstrength redundancy factors differently from ductility redundancy factors. Also, the structural redundancy factors obtained for this particular structural system vary with respect to those currently proposed in MOC-08.
Generalized Friedmann-Robertson-Walker metric and redundancy in the generalized Einstein equations
International Nuclear Information System (INIS)
Kao, W.F.; Pen, U.
1991-01-01
A nontrivial redundancy relation, due to the differential structure of the gravitational Bianchi identity as well as the symmetry of the Friedmann-Robertson-Walker metric, in the gravitational field equation is clarified. A generalized Friedmann-Robertson-Walker metric is introduced in order to properly define a one-dimensional reduced problem which offers an alternative approach to obtain the gravitational field equations on Friedmann-Robertson-Walker spaces
Juhasz, Albert J.; Bloomfield, Harvey S.
1987-01-01
A combinatorial reliability approach was used to identify potential dynamic power conversion systems for space mission applications. A reliability and mass analysis was also performed, specifically for a 100-kWe nuclear Brayton power conversion system with parallel redundancy. Although this study was done for a reactor outlet temperature of 1100 K, preliminary system mass estimates are also included for reactor outlet temperatures ranging up to 1500 K.
Juhasz, A. J.; Bloomfield, H. S.
1985-01-01
A combinatorial reliability approach is used to identify potential dynamic power conversion systems for space mission applications. A reliability and mass analysis is also performed, specifically for a 100 kWe nuclear Brayton power conversion system with parallel redundancy. Although this study is done for a reactor outlet temperature of 1100 K, preliminary system mass estimates are also included for reactor outlet temperatures ranging up to 1500 K.
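For identical, independent units, the combinatorial reliability analysis mentioned in the two abstracts above reduces to a k-out-of-n binomial computation. A minimal sketch (not the authors' code; the unit reliability r below is a placeholder value):

```python
from math import comb

def k_out_of_n_reliability(k: int, n: int, r: float) -> float:
    """Probability that at least k of n identical, independent units
    (each with reliability r) are still working."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

# Example: a 2-out-of-3 redundant converter arrangement with r = 0.90
print(k_out_of_n_reliability(2, 3, 0.90))  # 0.972
```

The same routine, swept over candidate (k, n) pairs together with per-unit mass, reproduces the kind of reliability-versus-mass trade study the abstracts describe.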
Design of redundant array of independent DVD libraries based on iSCSI
Chen, Yupeng; Pan, Longfa
2003-04-01
This paper presents a new approach to realizing a redundant array of independent DVD libraries (RAID-LoIP) by using iSCSI technology and traditional RAID algorithms. Our design achieves high-performance optical storage with the following features: large capacity, high access rates, random access, geographically distributed DVD libraries, block-level I/O, and long storage life. Our RAID-LoIP system can be a good solution for broadcast media asset storage systems.
Real-time instrument-failure detection in the LOFT pressurizer using functional redundancy
International Nuclear Information System (INIS)
Tylee, J.L.
1982-07-01
The functional redundancy approach to detecting instrument failures in a pressurized water reactor (PWR) pressurizer is described and evaluated. This real-time method uses a bank of Kalman filters (one for each instrument) to generate optimal estimates of the pressurizer state. By performing consistency checks between the outputs of the filters, failed instruments can be identified. Simulation results and actual pressurizer data are used to demonstrate the capabilities of the technique.
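The filter-bank scheme described above can be caricatured in a few lines: one simple scalar Kalman filter per instrument, then a consistency check across the filtered estimates. This is a toy sketch under strong assumptions (random-walk state model, a majority of healthy sensors, made-up noise figures), not the LOFT implementation:

```python
import random

class ScalarKF:
    """Scalar Kalman filter for a random-walk state observed in noise."""
    def __init__(self, x0=0.0, p0=1.0, q=1e-3, r=0.04):
        self.x, self.p, self.q, self.r = x0, p0, q, r
    def update(self, z):
        self.p += self.q                    # predict (random-walk model)
        k = self.p / (self.p + self.r)      # Kalman gain
        self.x += k * (z - self.x)          # correct with measurement z
        self.p *= (1 - k)
        return self.x

def detect_failed_sensor(readings_per_sensor):
    """One filter per sensor; flag the sensor whose filtered estimate is
    least consistent with the others (here: farthest from the median)."""
    filters = [ScalarKF(x0=rs[0]) for rs in readings_per_sensor]
    for t in range(len(readings_per_sensor[0])):
        for f, rs in zip(filters, readings_per_sensor):
            f.update(rs[t])
    estimates = [f.x for f in filters]
    med = sorted(estimates)[len(estimates) // 2]
    return max(range(len(estimates)), key=lambda i: abs(estimates[i] - med))

random.seed(0)
truth = 10.0  # hypothetical steady pressurizer level
sensors = [[truth + random.gauss(0, 0.2) for _ in range(200)] for _ in range(3)]
sensors[1] = [z + 2.0 for z in sensors[1]]   # sensor 1 develops a bias fault
print(detect_failed_sensor(sensors))         # 1
```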
Computational and Game-Theoretic Approaches for Modeling Bounded Rationality
L. Waltman (Ludo)
2011-01-01
textabstractThis thesis studies various computational and game-theoretic approaches to economic modeling. Unlike traditional approaches to economic modeling, the approaches studied in this thesis do not rely on the assumption that economic agents behave in a fully rational way. Instead, economic
International Nuclear Information System (INIS)
Mikadze, I.; Namchevadze, T.; Gobiani, I.
2007-01-01
A generalized mathematical model is proposed for a queuing system with time redundancy, without preliminary checking of the queuing system at the transition from the free state into the engaged one. The model accounts for various failures of the queuing system detected by continuous instrument control, by periodic control, by control during recovery, and for failures revealed only after a certain number of failures has accumulated. The generating function of the queue length in both stationary and nonstationary modes is determined. (author)
Redundancy proves its worth in FR Germany [emergency power supplies
International Nuclear Information System (INIS)
Simon, M.
1987-01-01
An analysis of loss of power events at nuclear power stations in FR Germany has confirmed the data used in the German risk study and underlined the advantages of providing a high degree of redundancy in emergency power supplies. (author)
Retraction: Redundant Publication of the article Dental caries and ...
African Journals Online (AJOL)
Retraction: Redundant Publication of the article Dental caries and oral health practices among 12 year old children in Nairobi West and Mathira West Districts, Kenya. Gladwell Gathecha et al. The Pan African Medical Journal. 2012;12:42.
Selective Redundancy Removal: A Framework for Data Hiding
Directory of Open Access Journals (Sweden)
Ugo Fiore
2010-02-01
Full Text Available Data hiding techniques have so far concentrated on adding or modifying irrelevant information in order to hide a message. However, files in widespread use, such as HTML documents, usually exhibit high redundancy levels caused by code-generation programs. Such redundancy may be removed by means of optimization software. Redundancy removal, if applied selectively, enables information hiding. This work introduces Selective Redundancy Removal (SRR) as a framework for hiding data. An example application of the framework is given in terms of hiding information in HTML documents. Non-uniformity across documents may raise alarms. Nevertheless, the selective application of optimization techniques might be attributed to the legitimate use of optimization software that does not support all the optimization methods, or is configured not to use all of them.
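A toy illustration of the SRR idea, with inter-tag whitespace standing in for the removable redundancy (the paper's actual redundancy sites and encoding are likely different): a gap left at two spaces encodes a 0 bit, a gap "optimized" down to one space encodes a 1 bit.

```python
import re

# Redundant whitespace between adjacent tags is the (hypothetical) hiding site.
GAP = re.compile(r">( {1,2})<")

def embed(html: str, bits: str) -> str:
    """Encode one bit per inter-tag gap: '0' keeps the redundant double
    space, '1' removes the redundancy (single space)."""
    it = iter(bits)
    def repl(m):
        b = next(it, None)                 # leftover gaps default to '0'
        return ">  <" if b in (None, "0") else "> <"
    return GAP.sub(repl, html)

def extract(html: str, n: int) -> str:
    """Read the first n gaps back as bits."""
    return "".join("0" if g == "  " else "1" for g in GAP.findall(html)[:n])

doc = "<td>a</td>  <td>b</td>  <td>c</td>  <td>d</td>"
stego = embed(doc, "101")
print(extract(stego, 3))  # 101
```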
Triple3 Redundant Spacecraft Subsystems (T3RSS), Phase I
National Aeronautics and Space Administration — Redefine Technologies, along with researchers at the University of Colorado, will use three redundancy methods to decrease the susceptibility of a spacecraft, on a...
Resolving Actuator Redundancy - Control Allocation vs. Linear Quadratic Control
Härkegård, Ola
2004-01-01
When designing control laws for systems with more inputs than controlled variables, one issue to consider is how to deal with actuator redundancy. Two tools for distributing the control effort among a redundant set of actuators are control allocation and linear quadratic control design. In this paper, we investigate the relationship between these two design tools when a quadratic performance index is used for control allocation. We show that for a particular class of linear systems, they give...
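For the quadratic performance index discussed above, control allocation has a closed-form weighted pseudo-inverse solution. A sketch, assuming the effectiveness matrix B has full row rank (the matrix and weights below are made up):

```python
import numpy as np

def allocate(B, v, W=None):
    """Minimum-energy control allocation: minimize u' W u subject to
    B u = v, via the weighted pseudo-inverse (B must have full row rank)."""
    m, n = B.shape
    W = np.eye(n) if W is None else W
    Winv = np.linalg.inv(W)
    # u = W^-1 B' (B W^-1 B')^-1 v
    return Winv @ B.T @ np.linalg.solve(B @ Winv @ B.T, v)

B = np.array([[1.0, 1.0, 0.5]])   # three actuators, one controlled variable
v = np.array([2.0])               # commanded virtual control
u = allocate(B, v)
print(B @ u)                      # recovers the commanded virtual control
```

With W = I this coincides with the Moore-Penrose solution; a nontrivial W redistributes effort among the redundant actuators, which is one point of contact with LQ design that the paper examines.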
Kinematic control of redundant robots and the motion optimizability measure.
Li, L; Gruver, W A; Zhang, Q; Yang, Z
2001-01-01
This paper treats the kinematic control of manipulators with redundant degrees of freedom. We derive an analytical solution for the inverse kinematics that provides a means for accommodating joint velocity constraints in real time. We define the motion optimizability measure and use it to develop an efficient method for the optimization of joint trajectories subject to multiple criteria. An implementation of the method for a 7-dof experimental redundant robot is presented.
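Kinematic control of a redundant arm is commonly written as a pseudo-inverse solution plus a null-space term; whether this matches the paper's analytical solution exactly is not stated in the abstract, so the following is a generic sketch:

```python
import numpy as np

def redundant_ik_step(J, x_dot, z):
    """Resolved-rate step for a redundant arm: the pseudo-inverse gives the
    particular solution, and the null-space projector lets a secondary
    objective z (e.g., a joint-limit-avoidance gradient) act without
    disturbing the task velocity."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J   # projector onto the null space of J
    return J_pinv @ x_dot + N @ z

J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.3]])           # 2 task dims, 3 joints: 1 redundant dof
x_dot = np.array([0.1, -0.2])             # desired end-effector velocity
z = np.array([0.05, 0.0, 0.0])            # hypothetical secondary objective
q_dot = redundant_ik_step(J, x_dot, z)
print(J @ q_dot)                          # task velocity is unaffected by z
```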
A Discrete Monetary Economic Growth Model with the MIU Approach
Directory of Open Access Journals (Sweden)
Wei-Bin Zhang
2008-01-01
Full Text Available This paper proposes an alternative approach to economic growth with money. The production side is the same as in the Solow model, the Ramsey model, and the Tobin model, but we deal with the behavior of consumers differently from the traditional approaches. The model is influenced by the money-in-the-utility (MIU) approach in monetary economics. It provides a mechanism of endogenous saving which the Solow model lacks and avoids the assumption of adding up utility over a period of time upon which the Ramsey approach is based.
Video rate morphological processor based on a redundant number representation
Kuczborski, Wojciech; Attikiouzel, Yianni; Crebbin, Gregory A.
1992-03-01
This paper presents a video rate morphological processor for automated visual inspection of printed circuit boards, integrated circuit masks, and other complex objects. Inspection algorithms are based on gray-scale mathematical morphology. The hardware complexity of the known methods for real-time implementation of gray-scale morphology, the umbra transform and threshold decomposition, has prompted us to propose a novel technique which applies an arithmetic system without carry propagation. After considering several arithmetic systems, a redundant number representation has been selected for implementation. Two options are analyzed here. The first is a pure signed digit number representation (SDNR) with base 4. The second is a combination of the base-2 SDNR (to represent gray levels of images) and the conventional two's complement code (to represent gray levels of structuring elements). The operating principle of the morphological processor is based on the concept of the digit-level systolic array. Individual processing units and small memory elements create a pipeline; the memory elements store current image windows (kernels). All operation primitives of the processing units apply a unified direction of digit processing: most significant digit first (MSDF). The implementation technology is based on Xilinx field programmable gate arrays. This paper justifies the rationality of a new approach to logic design, namely the decomposition of Boolean functions instead of Boolean minimization.
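Carry-free addition is the property that motivates the signed-digit representation above. A base-2 SDNR sketch (digits in {-1, 0, 1}, little-endian; the transfer-selection rule below is one standard choice, not necessarily the processor's):

```python
def sd_to_int(digits):
    """Value of a little-endian radix-2 signed-digit (-1, 0, 1) vector."""
    return sum(d * (1 << i) for i, d in enumerate(digits))

def sd_add(a, b):
    """Carry-free radix-2 signed-digit addition: each transfer is absorbed
    by the next position only, so all result digits can be produced in
    parallel (here computed sequentially for clarity)."""
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    p = [a[i] + b[i] for i in range(n)]        # position sums, in [-2, 2]
    t, w = [0] * (n + 1), [0] * n              # transfer and interim digits
    for i in range(n):
        lower = p[i - 1] if i > 0 else 0       # peek at the lower position
        if p[i] == 2:    t[i + 1], w[i] = 1, 0
        elif p[i] == -2: t[i + 1], w[i] = -1, 0
        elif p[i] == 1:  t[i + 1], w[i] = (1, -1) if lower >= 0 else (0, 1)
        elif p[i] == -1: t[i + 1], w[i] = (0, -1) if lower >= 0 else (-1, 1)
    return [w[i] + t[i] for i in range(n)] + [t[n]]

print(sd_to_int(sd_add([1, 1, 1], [1, 1, 1])))  # 7 + 7 = 14
```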
State Estimation for Robots with Complementary Redundant Sensors
Directory of Open Access Journals (Sweden)
Daniele Carnevale
2015-10-01
Full Text Available In this paper, robots equipped with two complementary typologies of redundant sensors are considered: one typology provides sharp measures of some geometrical entity related to the robot pose (e.g., distance or angle) but is not univocally associated with this quantity; the other typology is univocal but is characterized by a low level of precision. A technique is proposed to properly combine these two kinds of measurement, both in a stochastic and in a deterministic context. This framework may occur in robotics, for example, when the distance from a known landmark is detected by two different sensors, one based on the signal strength or time of flight of the signal, while the other measures the phase shift of the signal, which has a sharp but periodic dependence on the robot-landmark distance. In the stochastic case, an effective solution is a two-stage extended Kalman filter (EKF) which exploits the precise periodic signal only when the estimate of the robot position is sufficiently precise. In the deterministic setting, an approach based on a switching hybrid observer is proposed, and results are analyzed via simulation examples.
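The core of the two-stage idea, using the univocal but coarse sensor only to resolve the integer ambiguity of the precise periodic one, can be sketched as follows (valid when the coarse error is below half a wavelength; the numbers are made up):

```python
def fuse_range(coarse: float, phase: float, wavelength: float) -> float:
    """Combine a coarse but univocal range with a precise but periodic
    phase-derived range: the coarse estimate only picks the integer
    number of whole wavelengths (the ambiguity)."""
    k = round((coarse - phase) / wavelength)
    return phase + k * wavelength

# True robot-landmark distance 7.38 m, wavelength 1.0 m:
# the phase sensor reports 0.38 m (mod 1 m), the coarse sensor 7.6 m.
print(fuse_range(coarse=7.6, phase=0.38, wavelength=1.0))  # 7.38
```

In the EKF version described in the abstract, this ambiguity resolution is gated on the covariance of the position estimate rather than applied unconditionally.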
Neural redundancy applied to the parity space for signal validation
International Nuclear Information System (INIS)
Mol, Antonio Carlos de Abreu; Pereira, Claudio Marcio Nascimento Abreu; Martinez, Aquilino Senra
2005-01-01
The objective of signal validation is to provide more reliable information from plant sensor data. The method presented in this work introduces the concept of neural redundancy and applies it to the parity space method [1] to overcome an inherent deficiency of that method: determining the best estimate of the redundant measures when they are inconsistent. The concept of neural redundancy consists in calculating a redundancy through neural networks based on the time series of the state variable itself. Neural networks, dynamically trained with the time series, estimate the current value of the measure, which is then used as a referee for the redundant measures in the parity space. For this purpose the neural network must be able to supply the neural redundancy in real time and with a maximum error corresponding to the group deviation. The historical series should be long enough to allow estimation of the next value, including during transients, and at the same time it should be optimized to facilitate retraining of the neural network after each acquisition. In order to reproduce the tendency of the time series even under accident conditions, the dynamic training of the neural network privileges the recent points of the time series. Tests with simulated data from a nuclear plant demonstrated that this method improves the signal validation process when applied to the parity space method. (author)
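Stripped of the neural network, the validation step reduces to a consistency test of the redundant measures against an independent reference estimate. A deliberately simplified sketch (the reference value stands in for the neural redundancy; threshold and data are made up):

```python
def parity_check(measures, reference, tol):
    """Validate redundant measures against an independent reference
    estimate (the stand-in for the neural redundancy): return the
    consensus value of the consistent measures and the rejected ones.
    Assumes at least one measure is consistent with the reference."""
    ok = [m for m in measures if abs(m - reference) <= tol]
    bad = [m for m in measures if abs(m - reference) > tol]
    return sum(ok) / len(ok), bad

value, rejected = parity_check([10.1, 9.9, 12.5], reference=10.0, tol=0.5)
print(value, rejected)   # consensus near 10.0, with 12.5 rejected
```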
Reliability optimization of a redundant system with failure dependencies
Energy Technology Data Exchange (ETDEWEB)
Yu Haiyang [Institute Charles Delaunay (ICD, FRE CNRS 2848), Troyes University of Technology, Rue Marie Curie, BP 2060, 10010 Troyes (France)]. E-mail: Haiyang.YU@utt.fr; Chu Chengbin [Institute Charles Delaunay (ICD, FRE CNRS 2848), Troyes University of Technology, Rue Marie Curie, BP 2060, 10010 Troyes (France); Management School, Hefei University of Technology, 193 Tunxi Road, Hefei (China); Chatelet, Eric [Institute Charles Delaunay (ICD, FRE CNRS 2848), Troyes University of Technology, Rue Marie Curie, BP 2060, 10010 Troyes (France); Yalaoui, Farouk [Institute Charles Delaunay (ICD, FRE CNRS 2848), Troyes University of Technology, Rue Marie Curie, BP 2060, 10010 Troyes (France)
2007-12-15
In a multi-component system, the failure of one component can reduce the system reliability in two aspects: loss of the reliability contribution of this failed component, and the reconfiguration of the system, e.g., the redistribution of the system loading. The system reconfiguration can be triggered by the component failures as well as by adding redundancies. Hence, dependency is essential for the design of a multi-component system. In this paper, we study the design of a redundant system with the consideration of a specific kind of failure dependency, i.e., the redundant dependency. The dependence function is introduced to quantify the redundant dependency. With the dependence function, the redundant dependencies are further classified as independence, weak, linear, and strong dependencies. In addition, this classification is useful in that it facilitates the optimization resolution of the system design. Finally, an example is presented to illustrate the concept of redundant dependency and its application in system design. This paper thus conveys the significance of failure dependencies in the reliability optimization of systems.
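A minimal Monte Carlo illustration of such a failure dependency: the surviving unit of a 1-out-of-2 redundant system works at a degraded (higher) failure rate after the first failure. The rates and the dependency form are made up and do not reproduce the paper's dependence function:

```python
import random

def parallel_reliability(t, lam, lam_degraded, n_runs=20000, seed=1):
    """Monte Carlo reliability at time t of a 1-out-of-2 system of
    exponential units with rate lam; after the first failure the survivor's
    rate jumps to lam_degraded (the redundant dependency). Memorylessness
    lets us sample the survivor's residual life afresh."""
    rng = random.Random(seed)
    alive = 0
    for _ in range(n_runs):
        first = min(rng.expovariate(lam), rng.expovariate(lam))
        if first >= t:
            alive += 1                                    # both units up at t
        elif first + rng.expovariate(lam_degraded) >= t:
            alive += 1                                    # survivor lasts to t
    return alive / n_runs

r_dep = parallel_reliability(1.0, lam=1.0, lam_degraded=3.0)
r_ind = parallel_reliability(1.0, lam=1.0, lam_degraded=1.0)
print(r_dep, r_ind)   # dependency erodes the benefit of redundancy
```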
Mathematical Modelling Approach in Mathematics Education
Arseven, Ayla
2015-01-01
The topic of models and modeling has come to be important for science and mathematics education in recent years. The topic of "modeling" is especially important for examinations such as PISA, which is conducted at an international level and measures a student's success in mathematics. Mathematical modeling can be defined as using…
A Multivariate Approach to Functional Neuro Modeling
DEFF Research Database (Denmark)
Mørch, Niels J.S.
1998-01-01
… by the application of linear and more flexible, nonlinear microscopic regression models to a real-world dataset. The dependency of model performance, as quantified by generalization error, on model flexibility and training set size is demonstrated, leading to the important realization that no uniformly optimal model exists. … provides the basis for a generalization theoretical framework relating model performance to model complexity and dataset size. Briefly summarized, the major topics discussed in the thesis include: an introduction of the representation of functional datasets by pairs of neuronal activity patterns …; model visualization and interpretation techniques, where the simplicity of this task for linear models contrasts with the difficulties involved when dealing with nonlinear models, and a visualization technique for nonlinear models is proposed. A single observation emerges from the thesis …
Rival approaches to mathematical modelling in immunology
Andrew, Sarah M.; Baker, Christopher T. H.; Bocharov, Gennady A.
2007-08-01
In order to formulate quantitatively correct mathematical models of the immune system, one requires an understanding of immune processes and familiarity with a range of mathematical techniques. Selection of an appropriate model requires a number of decisions to be made, including a choice of the modelling objectives, strategies and techniques and the types of model considered as candidate models. The authors adopt a multidisciplinary perspective.
Stabilization Methods for the Integration of DAE in the Presence of Redundant Constraints
International Nuclear Information System (INIS)
Neto, Maria Augusta; Ambrosio, Jorge
2003-01-01
The use of multibody formulations based on Cartesian or natural coordinates leads to sets of differential-algebraic equations that have to be solved. The difficulty in providing compatible initial positions and velocities for a general spatial multibody model, and the finite precision of such data, result in initial errors that must be corrected during the forward dynamic solution of the system equations of motion. As the position and velocity constraint equations are not explicitly involved in the solution procedure, any integration error leads to the violation of these equations in the long run. Another problem that is very often impossible to avoid is the presence of redundant constraints. Even with no initial redundancy, it is possible for some systems to achieve singular configurations in which kinematic constraints become temporarily redundant. In this work several procedures to stabilize the solution of the equations of motion and to handle redundant constraints are revisited. The Baumgarte stabilization, augmented Lagrangian and coordinate partitioning methods are discussed in terms of their efficiency and computational costs. The LU factorization with full pivoting of the Jacobian matrix directs the choice of the set of independent coordinates required by the coordinate partitioning method. Even when no particular stabilization method is used, a Newton-Raphson iterative procedure is still required in the initial time step to correct the initial positions and velocities, thus requiring the selection of the independent coordinates. However, this initial selection does not guarantee that during the motion of the system other constraints do not become redundant. Two procedures, based on singular value decomposition and Gram-Schmidt orthogonalization, are revisited for this purpose. The advantages and drawbacks of the different procedures, used separately or in conjunction with each other, and their computational costs are finally discussed.
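The Baumgarte method revisited above replaces the constraint acceleration equation phi_dd = 0 with phi_dd + 2*alpha*phi_d + beta^2*phi = 0, so integration drift is fed back and damped. A toy sketch on a free particle confined to the unit circle, comparing constraint drift with and without the feedback terms (gains, step size and integrator are arbitrary choices):

```python
def simulate(alpha, beta, dt=5e-3, steps=8000):
    """Explicit-Euler simulation of a free unit-mass particle on the unit
    circle, with the constraint phi = (x^2 + y^2 - 1)/2 = 0 enforced by a
    Lagrange multiplier including Baumgarte feedback. Returns the worst
    constraint violation |x^2 + y^2 - 1| seen over the run."""
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0   # on the circle, unit tangential speed
    worst = 0.0
    for _ in range(steps):
        phi = 0.5 * (x * x + y * y - 1.0)
        phi_d = x * vx + y * vy
        # Constraint acceleration target with Baumgarte terms:
        gamma = -(vx * vx + vy * vy) - 2.0 * alpha * phi_d - beta * beta * phi
        lam = gamma / (x * x + y * y)           # multiplier (Phi_q = [x, y])
        ax, ay = lam * x, lam * y
        x, y = x + dt * vx, y + dt * vy         # explicit Euler step
        vx, vy = vx + dt * ax, vy + dt * ay
        worst = max(worst, abs(x * x + y * y - 1.0))
    return worst

print(simulate(0.0, 0.0), simulate(5.0, 5.0))  # drift without vs. with feedback
```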
A hybrid agent-based approach for modeling microbiological systems.
Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing
2008-11-21
Models for systems biology commonly adopt Differential Equations or Agent-Based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on the Multi-Agent approach often use directly translated, and quantitatively less precise, if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10^3 cells and 1.2×10^6 molecules. The model produces cell migration patterns that are comparable to laboratory observations.
Directory of Open Access Journals (Sweden)
Daniel Segura
2008-02-01
Full Text Available Previous model-based analysis of the metabolic network of Geobacter sulfurreducens suggested the existence of several redundant pathways. Here, we identified eight sets of redundant pathways that included redundancy for the assimilation of acetate, and for the conversion of pyruvate into acetyl-CoA. These equivalent pathways and two other sub-optimal pathways were studied using 5 single-gene deletion mutants in those pathways for the evaluation of the predictive capacity of the model. The growth phenotypes of these mutants were studied under 12 different conditions of electron donor and acceptor availability. The comparison of the model predictions with the resulting experimental phenotypes indicated that pyruvate ferredoxin oxidoreductase is the only activity able to convert pyruvate into acetyl-CoA. However, the results and the modeling showed that the two acetate activation pathways present are not only active, but needed due to the additional role of the acetyl-CoA transferase in the TCA cycle, probably reflecting the adaptation of these bacteria to acetate utilization. In other cases, the data reconciliation suggested additional capacity constraints that were confirmed with biochemical assays. The results demonstrate the need to experimentally verify the activity of key enzymes when developing in silico models of microbial physiology based on sequence-based reconstruction of metabolic networks.
Numerical modelling approach for mine backfill
Indian Academy of Sciences (India)
Muhammad Zaka Emad
2017-07-24
Jul 24, 2017 ... conditions. This paper discusses a numerical modelling strategy for modelling mine backfill material. The .... placed in an ore pass that leads the ore to the ore bin and crusher, from ... 1 year, depending on the mine plan.
Uncertainty in biology a computational modeling approach
Gomez-Cabrero, David
2016-01-01
Computational modeling of biomedical processes is gaining more and more weight in the current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling allows to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However these biomedical problems are inherently complex with a myriad of influencing factors, which strongly complicates the model building and validation process. This book wants to address four main issues related to the building and validation of computational models of biomedical processes: Modeling establishment under uncertainty Model selection and parameter fitting Sensitivity analysis and model adaptation Model predictions under uncertainty In each of the abovementioned areas, the book discusses a number of key-techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...
OILMAP: A global approach to spill modeling
International Nuclear Information System (INIS)
Spaulding, M.L.; Howlett, E.; Anderson, E.; Jayko, K.
1992-01-01
OILMAP is an oil spill model system suitable for use in both rapid response mode and long-range contingency planning. It was developed for a personal computer and employs full-color graphics to enter data, set up spill scenarios, and view model predictions. The major components of OILMAP include environmental data entry and viewing capabilities, the oil spill models, and model prediction display capabilities. Graphic routines are provided for entering wind data, currents, and any type of geographically referenced data. Several modes of the spill model are available. The surface trajectory mode is intended for quick spill response. The weathering model includes the spreading, evaporation, entrainment, emulsification, and shoreline interaction of oil. The stochastic and receptor models simulate a large number of trajectories from a single site for generating probability statistics. Each model and the algorithms they use are described. Several additional capabilities are planned for OILMAP, including simulation of tactical spill response and subsurface oil transport. 8 refs
Relaxed memory models: an operational approach
Boudol , Gérard; Petri , Gustavo
2009-01-01
International audience; Memory models define an interface between programs written in some language and their implementation, determining which behaviour the memory (and thus a program) is allowed to have in a given model. A minimal guarantee memory models should provide to the programmer is that well-synchronized, that is, data-race free code has a standard semantics. Traditionally, memory models are defined axiomatically, setting constraints on the order in which memory operations are allow...
Modeling composting kinetics: A review of approaches
Hamelers, H.V.M.
2004-01-01
Composting kinetics modeling is necessary to design and operate composting facilities that comply with strict market demands and tight environmental legislation. Current composting kinetics modeling can be characterized as inductive, i.e. the data are the starting point of the modeling process and
Conformally invariant models: A new approach
International Nuclear Information System (INIS)
Fradkin, E.S.; Palchik, M.Ya.; Zaikin, V.N.
1996-02-01
A pair of mathematical models of quantum field theory in D dimensions is analyzed: a model of a charged scalar field defined by two generations of secondary fields in spaces of even dimension D ≥ 4, and a model of a neutral scalar field defined by two generations of secondary fields in two-dimensional space. 6 refs
Filtering Redundant Data from RFID Data Streams
Directory of Open Access Journals (Sweden)
Hazalila Kamaludin
2016-01-01
Full Text Available Radio Frequency Identification (RFID)-enabled systems are evolving in many applications that need to know the physical location of objects, such as supply chain management. Naturally, RFID systems create large volumes of duplicate data. As duplicate data wastes communication, processing, and storage resources and delays decision-making, filtering duplicate data from RFID data streams is an important and challenging problem. Existing Bloom Filter-based approaches for filtering duplicate RFID data streams are complex and slow, as they use multiple hash functions. In this paper, we propose an approach for filtering duplicate data from RFID data streams. The proposed approach is based on a modified Bloom Filter and uses only a single hash function. We performed an extensive empirical study of the proposed approach and compared it against the Bloom Filter, d-Left Time Bloom Filter, and Count Bloom Filter approaches. The results show that the proposed approach outperforms the baseline approaches in terms of false positive rate, execution time, and true positive rate.
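The single-hash idea can be sketched with a plain bit array (the paper's modified Bloom Filter is likely more elaborate, e.g., time-aware): exact repeats of a tag reading are always detected, while distinct tags may rarely collide, giving false positives.

```python
import hashlib

class SingleHashFilter:
    """Duplicate filter using one hash function over an m-bit array.
    No false negatives for exact repeats; false-positive rate grows
    with the number of distinct tags inserted."""
    def __init__(self, m=1 << 20):
        self.m = m
        self.bits = bytearray(m // 8)
    def _index(self, tag: str) -> int:
        h = hashlib.blake2b(tag.encode(), digest_size=8).digest()
        return int.from_bytes(h, "big") % self.m
    def seen(self, tag: str) -> bool:
        """Return True if tag was (probably) seen before; mark it seen."""
        i = self._index(tag)
        byte, mask = i // 8, 1 << (i % 8)
        was_set = bool(self.bits[byte] & mask)
        self.bits[byte] |= mask
        return was_set

f = SingleHashFilter()
stream = ["tag1", "tag2", "tag1", "tag3", "tag2"]
print([t for t in stream if f.seen(t)])   # the detected duplicates
```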
Institute of Scientific and Technical Information of China (English)
Guo Jiansheng; Wang Zutong; Zheng Mingfa; Wang Ying
2014-01-01
Based on uncertainty theory, this paper is devoted to the redundancy allocation problem in repairable parallel-series systems with uncertain factors, where the failure rate, repair rate and other relative coefficients involved are treated as uncertain variables. The availability of the system and the corresponding design cost are considered as two optimization objectives. A crisp multiobjective optimization formulation is presented on the basis of uncertainty theory to solve the resulting problem. To solve this problem efficiently, a new multiobjective artificial bee colony algorithm is proposed to search the Pareto efficient set; it introduces rank value and crowding distance in the greedy selection strategy, applies a fast non-dominated sorting procedure in the exploitation search, and inserts tournament selection in the onlooker bee phase. Simulation results show that the proposed algorithm greatly outperforms NSGA-II and can solve the multiobjective redundancy allocation problem efficiently. Finally, a numerical example is provided to illustrate the approach.
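The crowding distance used in the modified greedy selection is the standard NSGA-II quantity; a sketch on a small made-up Pareto front:

```python
def crowding_distance(front):
    """NSGA-II-style crowding distance for a list of objective vectors.
    Boundary points per objective get infinity so they are always kept;
    interior points accumulate normalized neighbor gaps."""
    n = len(front)
    if n == 0:
        return []
    m = len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue
        for j in range(1, n - 1):
            gap = front[order[j + 1]][k] - front[order[j - 1]][k]
            dist[order[j]] += gap / (hi - lo)
    return dist

# A 4-point front in two objectives (e.g., cost vs. unavailability)
print(crowding_distance([(0, 4), (1, 2), (2, 1), (4, 0)]))
# [inf, 1.25, 1.25, inf]
```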
Quantum Darwinism in an Everyday Environment: Huge Redundancy in Scattered Photons
Riedel, Charles; Zurek, Wojciech
2011-03-01
We study quantum Darwinism---the redundant recording of information about the preferred states of a decohering system by its environment---for an object illuminated by a blackbody. In the cases of point-source, small disk, and isotropic illumination, we calculate the quantum mutual information between the object and its photon environment. We demonstrate that this realistic model exhibits fast and extensive proliferation of information about the object into the environment and results in redundancies orders of magnitude larger than the exactly soluble models considered to date. We also demonstrate a reduced ability to create records as initial environmental mixedness increases, in agreement with previous studies. This research is supported by the U.S. Department of Energy through the LANL/LDRD program and, in part, by the Foundational Questions Institute (FQXi).
Human genetics of infectious diseases: Unique insights into immunological redundancy.
Casanova, Jean-Laurent; Abel, Laurent
2018-04-01
For almost any given human-tropic virus, bacterium, fungus, or parasite, the clinical outcome of primary infection is enormously variable, ranging from asymptomatic to lethal infection. This variability has long been thought to be largely determined by the germline genetics of the human host, and this is increasingly being demonstrated to be the case. The number and diversity of known inborn errors of immunity is continually increasing, and we focus here on autosomal and X-linked recessive traits underlying complete deficiencies of the encoded protein. Schematically, four types of infectious phenotype have been observed in individuals with such deficiencies, each providing information about the redundancy of the corresponding human gene, in terms of host defense in natural conditions. The lack of a protein can confer vulnerability to a broad range of microbes in most, if not all patients, through the disruption of a key immunological component. In such cases, the gene concerned is of low redundancy. However, the lack of a protein may also confer vulnerability to a narrow range of microbes, sometimes a single pathogen, and not necessarily in all patients. In such cases, the gene concerned is highly redundant. Conversely, the deficiency may be apparently neutral, conferring no detectable predisposition to infection in any individual. In such cases, the gene concerned is completely redundant. Finally, the lack of a protein may, paradoxically, be advantageous to the host, conferring resistance to one or more infections. In such cases, the gene is considered to display beneficial redundancy. These findings reflect the current state of evolution of humans and microbes, and should not be considered predictive of redundancy, or of a lack of redundancy, in the distant future. Nevertheless, these observations are of potential interest to present-day biologists testing immunological hypotheses experimentally and physicians managing patients with immunological or infectious
The cellular robustness by genetic redundancy in budding yeast.
Directory of Open Access Journals (Sweden)
Jingjing Li
2010-11-01
Full Text Available The frequent dispensability of duplicated genes in budding yeast is heralded as a hallmark of genetic robustness contributed by genetic redundancy. However, theoretical predictions suggest such backup by redundancy is evolutionarily unstable, and the extent of genetic robustness contributed by redundancy remains controversial. It is anticipated that, to achieve mutual buffering, the duplicated paralogs must at least share some functional overlap. However, counter-intuitively, several recent studies reported little functional redundancy between these buffering duplicates. The large-scale yeast genetic interaction data released recently allowed us to address these issues on a genome-wide scale. We herein characterized the synthetic genetic interactions for ∼500 pairs of yeast duplicated genes originating from either whole-genome duplication (WGD) or small-scale duplication (SSD) events. We established that functional redundancy between duplicates is a pre-requisite and thus is highly predictive of their backup capacity. This observation was particularly pronounced with the use of a newly introduced metric scoring functional overlap between paralogs on the basis of gene ontology annotations. Even though mutual buffering was observed to be prevalent among duplicated genes, we showed that the observed backup capacity is largely an evolutionarily transient state. The loss of backup capacity generally follows a neutral mode, with the buffering strength decreasing in proportion to divergence time, and the vast majority of the paralogs have already lost their backup capacity. These observations validated previous theoretical predictions about the instability of genetic redundancy. However, departing from the general neutral mode, our analysis intriguingly revealed the presence of natural selection in stabilizing functional overlap between SSD pairs. These selected pairs, both WGD and SSD, tend to have decelerated functional evolution, have higher propensities of co
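Scoring functional overlap between paralogs from gene ontology annotations can be illustrated with a minimal sketch. The Jaccard-style metric and the GO term sets below are illustrative assumptions, not the authors' actual scoring scheme:

```python
def go_overlap(terms_a, terms_b):
    """Jaccard overlap between the GO term sets of two paralogs."""
    a, b = set(terms_a), set(terms_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical GO annotations for a duplicate pair
p1 = {"GO:0006096", "GO:0005975", "GO:0016301"}
p2 = {"GO:0006096", "GO:0005975", "GO:0004672"}
print(go_overlap(p1, p2))  # 2 shared terms / 4 total -> 0.5
```

Under the hypothesis above, pairs with higher overlap scores would be predicted to buffer each other more strongly.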
Redundant nerve roots of the cauda equina: MR findings
International Nuclear Information System (INIS)
Oh, Kyu Hyen; Lee, Jung Man; Jung, Hak Young; Lee, Young Hwan; Sung, Nak Kwan; Chung, Duck Soo; Kim, Ok Dong; Lee, Sang Kwon; Suh, Kyung Jin
1997-01-01
To evaluate MR findings of redundant nerve roots (RNR) of the cauda equina, 17 patients with RNR were studied; eight were men and nine were women, and their ages ranged from 46 to 82 (mean, 63) years. Diagnoses were established on the basis of T2-weighted sagittal and coronal MRI, which showed a tortuous or coiled configuration of the nerve roots of the cauda equina. MR findings were reviewed for the location, magnitude, and signal intensity of redundant nerve roots, and the relationship between the magnitude of redundancy and the severity of lumbar spinal canal stenosis (LSCS) was evaluated. In all 17 patients, MR showed moderate or severe LSCS caused by herniation or bulging of an intervertebral disc, osteophytes from the vertebral body or facet joint, thickening of the ligamentum flavum, degenerative spondylolisthesis, or a combination of these. T2-weighted sagittal and coronal MR images clearly showed the location of RNR of the cauda equina; in 16 patients (94%), these were seen above the level of constriction of the spinal canal, and in one case they were observed below the level of constriction. T2-weighted axial images showed the thecal sac filled with numerous nerve roots. The magnitude of RNR was mild in six cases (35%), moderate in five cases (30%), and severe in six cases (35%). Compared with normal nerve roots, the RNR signal on T2-weighted images was iso-intense. All patients with severe redundancy showed severe LSCS, but not all cases with severe LSCS showed severe redundancy. Redundant nerve roots of the cauda equina were seen in relatively older patients with moderate or severe LSCS, and T2-weighted MR images were accurate in identifying redundancy of nerve roots and evaluating its magnitude and location.
Redundancy and divergence in the amyloid precursor protein family.
Shariati, S Ali M; De Strooper, Bart
2013-06-27
Gene duplication provides genetic material required for functional diversification. An interesting example is the amyloid precursor protein (APP) protein family. The APP gene family has experienced both expansion and contraction during evolution. The three mammalian members have been studied quite extensively in combined knock out models. The underlying assumption is that APP, amyloid precursor like protein 1 and 2 (APLP1, APLP2) are functionally redundant. This assumption is primarily supported by the similarities in biochemical processing of APP and APLPs and on the fact that the different APP genes appear to genetically interact at the level of the phenotype in combined knockout mice. However, unique features in each member of the APP family possibly contribute to specification of their function. In the current review, we discuss the evolution and the biology of the APP protein family with special attention to the distinct properties of each homologue. We propose that the functions of APP, APLP1 and APLP2 have diverged after duplication to contribute distinctly to different neuronal events. Our analysis reveals that APLP2 is significantly diverged from APP and APLP1. Copyright © 2013 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
Reliability analysis of a complex standby redundant systems
International Nuclear Information System (INIS)
Subramanian, R.; Anantharaman, V.
1995-01-01
In any redundant system, the state of the standby unit is usually taken to be hot, warm or cold. In this paper, we present a new model of a two-unit standby system wherein the standby unit is kept in the cold state for a certain amount of time before it is allowed to become warm. Upon failure of the online unit, the standby unit, if in the warm state, instantaneously starts operating online; if it is in the cold state, an emergency switching is made which takes it to the warm state (and hence online) either instantaneously or non-instantaneously, each with some probability; if it is under repair, the system breaks down. Assuming all the associated distributions to be general except that of the lifetime of the standby unit in the warm state, various reliability characteristics of interest to reliability engineers and system designers are derived. A comprehensive cost function is also constructed and then optimized numerically with respect to three different control parameters. In addition, numerical results are presented to illustrate the behaviour of the various reliability characteristics derived.
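The mean time to failure of such a standby system can be estimated by Monte Carlo simulation. The sketch below is a deliberately simplified version with exponential lifetimes and repairs and perfect instantaneous switching; the paper itself treats general distributions, cold-to-warm transitions, and imperfect switching:

```python
import random

def simulate_mttf(lam=1.0, mu=5.0, n=20000, seed=1):
    """Monte Carlo MTTF of a two-unit standby system with repair.
    The online unit fails at exponential rate lam; the standby takes
    over instantly and the failed unit enters repair (rate mu).  The
    system breaks down if the online unit fails while the other unit
    is still under repair."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        t = 0.0
        while True:
            t += rng.expovariate(lam)        # online unit fails, spare takes over
            life = rng.expovariate(lam)      # lifetime of the now-online unit
            repair = rng.expovariate(mu)     # repair time of the failed unit
            if life < repair:                # second failure while still repairing
                t += life
                break
            t += repair                      # repaired in time: both units up again
        total += t
    return total / n

# Analytic check for this Markovian special case: MTTF = (2*lam + mu)/lam**2 = 7.0
print(round(simulate_mttf(), 2))
```

The simulated estimate should be close to the analytic value of 7.0 for the chosen rates.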
International Nuclear Information System (INIS)
Lai, Chyh-Ming; Yeh, Wei-Chang
2016-01-01
The redundancy allocation problem involves configuring an optimal system structure with high reliability and low cost, either by replacing elements with more reliable ones and/or by arranging them redundantly. The multi-state bridge system is a special case of the redundancy allocation problem and is commonly used in various engineering systems for load balancing and control. Traditional methods for the redundancy allocation problem cannot solve multi-state bridge systems efficiently because it is impossible to transform and reduce a multi-state bridge system to series and parallel combinations. Hence, a swarm-based approach called two-stage simplified swarm optimization is proposed in this work to effectively and efficiently solve the redundancy allocation problem in a multi-state bridge system. To validate the proposed method, two experiments are implemented. The computational results indicate the advantages of the proposed method in terms of solution quality and computational efficiency. - Highlights: • Propose two-stage SSO (SSO_T_S) to deal with the RAP in a multi-state bridge system. • A dynamic upper bound enhances the efficiency of searching for near-optimal solutions. • Vector-update stages reduce the problem dimensions. • Statistical results indicate SSO_T_S is robust in both solution quality and runtime.
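The point that a bridge system cannot be reduced to series and parallel combinations can be seen in the classic five-component binary bridge, whose reliability must instead be computed by path analysis or state enumeration. The sketch below enumerates all 2^5 component states for the binary (two-state) case; the paper's problem is the harder multi-state variant:

```python
from itertools import product

def bridge_works(x):
    """x = (a, b, c, d, e): upper-left, lower-left, bridge, upper-right, lower-right."""
    a, b, c, d, e = x
    return (a and d) or (b and e) or (a and c and e) or (b and c and d)

def bridge_reliability(p):
    """Exact reliability of a binary bridge network with component reliabilities p[0..4]."""
    r = 0.0
    for state in product([0, 1], repeat=5):
        prob = 1.0
        for works, pi in zip(state, p):
            prob *= pi if works else 1 - pi
        if bridge_works(state):
            r += prob
    return r

# Matches the closed form 2p^2 + 2p^3 - 5p^4 + 2p^5 for identical components
print(round(bridge_reliability([0.9] * 5), 5))  # ~0.97848
```

Exhaustive enumeration is exponential in the number of components, which is one reason heuristics such as swarm optimization are attractive for larger multi-state systems.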
International Nuclear Information System (INIS)
Hukerikar, Saurabh; Teranishi, Keita; Diniz, Pedro C.; Lucas, Robert F.
2017-01-01
In the presence of accelerated fault rates, which are projected to be the norm on future exascale systems, it will become increasingly difficult for high-performance computing (HPC) applications to accomplish useful computation. Due to the fault-oblivious nature of current HPC programming paradigms and execution environments, HPC applications are insufficiently equipped to deal with errors. We believe that HPC applications should be enabled with capabilities to actively search for and correct errors in their computations. The redundant multithreading (RMT) approach offers lightweight replicated execution streams of program instructions within the context of a single application process. However, the use of complete redundancy incurs significant overhead to application performance.
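The core idea of redundant multithreading, replicating a computation across threads within one process and comparing the results to detect silent errors, can be sketched minimally as follows (an illustrative toy, not the paper's RMT runtime):

```python
import threading

def redundant_call(fn, *args, replicas=2):
    """Run fn in `replicas` parallel threads and compare their outputs;
    a mismatch signals a possible transient computation error.
    Outputs must be hashable for the comparison via set()."""
    results = [None] * replicas
    def worker(i):
        results[i] = fn(*args)
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(replicas)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    if len(set(results)) != 1:
        raise RuntimeError("replica mismatch: possible silent error")
    return results[0]

print(redundant_call(sum, [1, 2, 3]))  # 6
```

The overhead mentioned in the abstract is visible even here: every replicated call costs roughly `replicas` times the work of a single call, which motivates selective rather than complete redundancy.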
A systemic approach to modelling of radiobiological effects
International Nuclear Information System (INIS)
Obaturov, G.M.
1988-01-01
Basic principles of the systemic approach to modelling of the radiobiological effects at different levels of cell organization have been formulated. The methodology is proposed for theoretical modelling of the effects at these levels
Serpentinization reaction pathways: implications for modeling approach
Energy Technology Data Exchange (ETDEWEB)
Janecky, D.R.
1986-01-01
Experimental seawater-peridotite reaction pathways to form serpentinites at 300 °C, 500 bars, can be accurately modeled using the EQ3/6 codes in conjunction with thermodynamic and kinetic data from the literature and unpublished compilations. These models provide both confirmation of experimental interpretations and more detailed insight into hydrothermal reaction processes within the oceanic crust. The accuracy of these models depends on careful evaluation of the aqueous speciation model, use of mineral compositions that closely reproduce compositions in the experiments, and definition of realistic reactive components in terms of composition, thermodynamic data, and reaction rates.
Consumer preference models: fuzzy theory approach
Turksen, I. B.; Wilson, I. A.
1993-12-01
Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).
Novel iterative reconstruction method with optimal dose usage for partially redundant CT-acquisition
Bruder, H.; Raupach, R.; Sunnegardh, J.; Allmendinger, T.; Klotz, E.; Stierstorfer, K.; Flohr, T.
2015-11-01
In CT imaging, a variety of applications exist which are strongly SNR limited. However, in some cases redundant data of the same body region provide additional quanta. Examples: in dual energy CT, the spatial resolution has to be compromised to provide good SNR for material decomposition. However, the respective spectral dataset of the same body region provides additional quanta which might be utilized to improve SNR of each spectral component. Perfusion CT is a high dose application, and dose reduction is highly desirable. However, a meaningful evaluation of perfusion parameters might be impaired by noisy time frames. On the other hand, the SNR of the average of all time frames is extremely high. In redundant CT acquisitions, multiple image datasets can be reconstructed and averaged to composite image data. These composite image data, however, might be compromised with respect to contrast resolution and/or spatial resolution and/or temporal resolution. These observations bring us to the idea of transferring high SNR of composite image data to low SNR ‘source’ image data, while maintaining their resolution. It has been shown that the noise characteristics of CT image data can be improved by iterative reconstruction (Popescu et al 2012 Book of Abstracts, 2nd CT Meeting (Salt Lake City, UT) p 148). In case of data dependent Gaussian noise it can be modelled with image-based iterative reconstruction at least in an approximate manner (Bruder et al 2011 Proc. SPIE 7961 79610J). We present a generalized update equation in image space, consisting of a linear combination of the previous update, a correction term which is constrained by the source image data, and a regularization prior, which is initialized by the composite image data. This iterative reconstruction approach we call bimodal reconstruction (BMR). Based on simulation data it is shown that BMR can improve low contrast detectability, substantially reduces the noise power and has the potential to recover
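The generalized update equation described above can be illustrated with a toy per-pixel iteration. The weights, images, and exact update form below are illustrative assumptions, not the authors' actual BMR equation: each step keeps part of the current estimate, pulls it toward the noisy source data (data fidelity), and regularizes toward the high-SNR composite image (prior):

```python
def bmr_update(x, src, comp, alpha=0.7, beta=0.2, gamma=0.1):
    """One hypothetical bimodal-reconstruction step per pixel:
    blend the current estimate with the noisy source image and the
    low-noise composite prior.  Weights sum to 1 and are illustrative."""
    return [alpha * xi + beta * si + gamma * ci
            for xi, si, ci in zip(x, src, comp)]

src  = [10.0, 12.0, 9.0]     # noisy source slice (pixel values)
comp = [10.5, 10.5, 10.5]    # high-SNR composite of all slices
x = src[:]
for _ in range(50):
    x = bmr_update(x, src, comp)
# The iteration converges to (beta*src + gamma*comp)/(beta + gamma) per pixel,
# i.e. a noise-reducing compromise between source fidelity and the prior.
print([round(v, 3) for v in x])
```

The design intent mirrors the abstract: the source data constrain the correction term so resolution is preserved, while the composite image supplies the low-noise prior.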
A visual approach for modeling spatiotemporal relations
R.L. Guimarães (Rodrigo); C.S.S. Neto; L.F.G. Soares
2008-01-01
Textual programming languages have proven to be difficult to learn and to use effectively for many people. For this reason, visual tools can be useful to abstract the complexity of such textual languages, minimizing the specification efforts. In this paper we present a visual approach for
PRODUCT TRIAL PROCESSING (PTP): A MODEL APPROACH ...
African Journals Online (AJOL)
Admin
This study is a theoretical approach to consumers' processing of product trial, and equally explored ... consumer's first usage experience with a company's brand or product that is most important in determining ... product, what it is really marketing is the expected ... confidence, thus there is a positive relationship between ...
Distributed continuous media streaming - Using redundant hierarchy (RED-Hi) servers
Shah, Mohammad Ahmed
2014-01-01
ABSTRACT: The first part of this thesis provides a survey of continuous media servers, including discussions on streaming protocols, models and techniques. In the second part, a novel distributed media streaming system is introduced. In order to manage the traffic in a fault-tolerant and effective manner, a hierarchical topology, the so-called redundant hierarchy (RED-Hi), is used. The proposed system works in three steps, namely, object location, path reservation and object delivery. Simulations ar...
Differential coactivation in a redundant signals task with weak and strong go/no-go stimuli
DEFF Research Database (Denmark)
Minakata, Katsumi; Gondan, Matthias
2018-01-01
When participants respond to stimuli of two sources, response times (RT) are often faster when both stimuli are presented together relative to the RTs obtained when presented separately (redundant signals effect, RSE). Race models and coactivation models can explain the RSE. In race models, separate channels process the two stimulus components, and the faster processing time determines the overall RT. In audiovisual experiments, the RSE is often higher than predicted by race models, and coactivation models have been proposed that assume integrated processing of the two stimuli. Where does...
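The race-model account can be simulated directly: if two independent channels race, the redundant-condition RT is the minimum of the two single-channel RTs, which is faster on average than either channel alone. The RT distributions below are hypothetical, chosen only to illustrate the statistical facilitation:

```python
import random

def mean_rt(samples):
    return sum(samples) / len(samples)

rng = random.Random(0)
n = 50000
# Hypothetical single-channel RTs (ms): base latency plus exponential decision time
aud = [200 + rng.expovariate(1 / 80) for _ in range(n)]
vis = [220 + rng.expovariate(1 / 90) for _ in range(n)]
# Independent race: the redundant-condition RT is the faster of the two channels
red = [min(a, v) for a, v in zip(aud, vis)]
print(round(mean_rt(aud)), round(mean_rt(vis)), round(mean_rt(red)))
```

An observed RSE larger than this independent-race prediction is the usual evidence cited for coactivation models.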
Nonlinear Modeling of the PEMFC Based On NNARX Approach
Shan-Jen Cheng; Te-Jen Chang; Kuang-Hsiung Tan; Shou-Ling Kuo
2015-01-01
Polymer Electrolyte Membrane Fuel Cells (PEMFCs) are time-varying nonlinear dynamic systems. Traditional linear modeling approaches are hard-pressed to estimate the structure of a PEMFC system correctly. For this reason, this paper presents a nonlinear model of the PEMFC using the Neural Network Auto-regressive model with eXogenous inputs (NNARX) approach. The multilayer perceptron (MLP) network is applied to evaluate the structure of the NNARX model of the PEMFC. The validity and accurac...
Development of a Conservative Model Validation Approach for Reliable Analysis
2015-01-01
CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA. [DRAFT] DETC2015-46982: Development of a Conservative Model Validation Approach for Reliable ... obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the ... 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account
Comparison of two novel approaches to model fibre reinforced concrete
Radtke, F.K.F.; Simone, A.; Sluys, L.J.
2009-01-01
We present two approaches to model fibre reinforced concrete. In both approaches, discrete fibre distributions and the behaviour of the fibre-matrix interface are explicitly considered. One approach employs the reaction forces from fibre to matrix while the other is based on the partition of unity
Radiation-Tolerance Assessment of a Redundant Wireless Device
Huang, Q.; Jiang, J.
2018-01-01
This paper presents a method to evaluate radiation tolerance without physical tests for a commercial off-the-shelf (COTS)-based monitoring device for high-level radiation fields, such as those found in post-accident conditions in a nuclear power plant (NPP). The paper specifically describes the analysis of the radiation environment in a severe accident, radiation damage in electronics, and the redundant solution used to prolong the life of the system, as well as the evaluation method for radiation protection and the analysis method for system reliability. As a case study, a wireless monitoring device with redundant and diversified channels is evaluated using the developed method. The study results and system assessment data show that, under the given radiation conditions, the redundant device performs more reliably and robustly than non-redundant devices. The developed redundant wireless monitoring device is therefore suitable for deployment under such conditions (up to 10 MRad(Si)) during a severe accident in an NPP.
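The reliability gain from redundant, diversified channels follows from the standard parallel-redundancy formula: the system survives as long as at least one independent channel survives. A minimal sketch (channel reliabilities are hypothetical, and real channels may share failure modes that break the independence assumption):

```python
def redundant_reliability(r, n):
    """Reliability of n independent redundant channels:
    the system works if at least one channel survives."""
    return 1 - (1 - r) ** n

for n in (1, 2, 3):
    print(n, round(redundant_reliability(0.9, n), 4))  # 0.9, ~0.99, ~0.999
```

Diversifying the channels (different technologies or shielding) is what justifies treating their failures as approximately independent under radiation stress.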
Modeling thrombin generation: plasma composition based approach.
Brummel-Ziedins, Kathleen E; Everse, Stephen J; Mann, Kenneth G; Orfeo, Thomas
2014-01-01
Thrombin has multiple functions in blood coagulation and its regulation is central to maintaining the balance between hemorrhage and thrombosis. Empirical and computational methods that capture thrombin generation can provide advancements to current clinical screening of the hemostatic balance at the level of the individual. In any individual, procoagulant and anticoagulant factor levels together act to generate a unique coagulation phenotype (net balance) that is reflective of the sum of its developmental, environmental, genetic, nutritional and pharmacological influences. Defining such thrombin phenotypes may provide a means to track disease progression pre-crisis. In this review we briefly describe thrombin function, methods for assessing thrombin dynamics as a phenotypic marker, computationally derived thrombin phenotypes versus determined clinical phenotypes, the boundaries of normal range thrombin generation using plasma composition based approaches and the feasibility of these approaches for predicting risk.
A simple approach to modeling ductile failure.
Energy Technology Data Exchange (ETDEWEB)
Wellman, Gerald William
2012-06-01
Sandia National Laboratories has the need to predict the behavior of structures after the occurrence of an initial failure. In some cases determining the extent of failure, beyond initiation, is required, while in a few cases the initial failure is a design feature used to tailor the subsequent load paths. In either case, the ability to numerically simulate the initiation and propagation of failures is a highly desired capability. This document describes one approach to the simulation of failure initiation and propagation.
A new approach for modeling composite materials
Alcaraz de la Osa, R.; Moreno, F.; Saiz, J. M.
2013-03-01
The increasing use of composite materials is due to their ability to tailor materials for special purposes, with applications evolving day by day. This is why predicting the properties of these systems from their constituents, or phases, has become so important. However, assigning macroscopic optical properties to these materials from the bulk properties of their constituents is not a straightforward task. In this research, we present a spectral analysis of three-dimensional random composite typical nanostructures using an Extension of the Discrete Dipole Approximation (E-DDA code), comparing different approaches and emphasizing the influence of the optical properties of the constituents and their concentration. In particular, we propose a new approach that preserves the individual nature of the constituents while introducing a variation in the optical properties of each discrete element that is driven by the surrounding medium. The results obtained with this new approach compare more favorably with experiment than previous ones. We have also applied it to a non-conventional material composed of a metamaterial embedded in a dielectric matrix. Our version of the Discrete Dipole Approximation code, the E-DDA code, has been formulated specifically to tackle this kind of problem, including materials with either magnetic or tensor properties.
An Integrated Approach to Modeling Evacuation Behavior
2011-02-01
A spate of recent hurricanes and other natural disasters have drawn a lot of attention to the evacuation decision of individuals. Here we focus on evacuation models that incorporate two economic phenomena that seem to be increasingly important in exp...
Infectious disease modeling a hybrid system approach
Liu, Xinzhi
2017-01-01
This volume presents infectious diseases modeled mathematically, taking seasonality and changes in population behavior into account, using a switched and hybrid systems framework. The scope of coverage includes background on mathematical epidemiology, including classical formulations and results; a motivation for seasonal effects and changes in population behavior, an investigation into term-time forced epidemic models with switching parameters, and a detailed account of several different control strategies. The main goal is to study these models theoretically and to establish conditions under which eradication or persistence of the disease is guaranteed. In doing so, the long-term behavior of the models is determined through mathematical techniques from switched systems theory. Numerical simulations are also given to augment and illustrate the theoretical results and to help study the efficacy of the control schemes.
On Combining Language Models: Oracle Approach
National Research Council Canada - National Science Library
Hacioglu, Kadri; Ward, Wayne
2001-01-01
In this paper, we address the problem of combining several language models (LMs). We find that simple interpolation methods, like log-linear and linear interpolation, improve the performance but fall short of the performance of an oracle...
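Linear interpolation of language models simply mixes the component probabilities with weights that sum to one. A minimal sketch with two toy unigram "models" (the vocabularies and weights are illustrative, not from the paper):

```python
def interpolate(models, weights, word, context):
    """Linear interpolation of language models: P(w|h) = sum_i w_i * P_i(w|h),
    with the weights w_i summing to 1."""
    return sum(w * m(word, context) for m, w in zip(models, weights))

# Two toy unigram models (context ignored) over a 3-word vocabulary
lm1 = lambda w, h: {"a": 0.5, "b": 0.3, "c": 0.2}[w]
lm2 = lambda w, h: {"a": 0.2, "b": 0.2, "c": 0.6}[w]

p = interpolate([lm1, lm2], [0.7, 0.3], "c", ())
print(round(p, 2))  # 0.7*0.2 + 0.3*0.6 ~ 0.32
```

Log-linear interpolation instead combines log-probabilities (a weighted geometric mean followed by renormalization); an oracle, by contrast, would pick the best model per prediction, which is why it upper-bounds both schemes.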
Distributed redundancy and robustness in complex systems
Randles, Martin; Lamb, David J.; Odat, Enas M.; Taleb-Bendiab, Azzelarabe
2011-01-01
that emerges in complex biological and natural systems. However, in order to promote an evolutionary approach, through emergent self-organisation, it is necessary to specify the systems in an 'open-ended' manner where not all states of the system are prescribed
Advanced language modeling approaches, case study: Expert search
Hiemstra, Djoerd
2008-01-01
This tutorial gives a clear and detailed overview of advanced language modeling approaches and tools, including the use of document priors, translation models, relevance models, parsimonious models and expectation maximization training. Expert search will be used as a case study to explain the
Approaches to modelling hydrology and ecosystem interactions
Silberstein, Richard P.
2014-05-01
As the pressures of industry, agriculture and mining on groundwater resources increase, there is a burgeoning unmet need to be able to capture these multiple, direct and indirect stresses in a formal framework that will enable better assessment of impact scenarios. While there are many catchment hydrological models, and there are some models that represent ecological states and change (e.g. FLAMES, Liedloff and Cook, 2007), these have not been linked in any deterministic or substantive way. Without such coupled eco-hydrological models, quantitative assessments of the impacts of water use intensification on water-dependent ecosystems under a changing climate are difficult, if not impossible. The concept would include facility for direct and indirect water-related stresses that may develop around mining and well operations, climate stresses such as rainfall and temperature, biological stresses such as diseases and invasive species, and competition such as encroachment from other competing land uses. An indirect water impact could be, for example, a change in groundwater conditions that has an impact on the stream flow regime, and hence on aquatic ecosystems. This paper reviews previous work examining models combining ecology and hydrology with a view to developing a conceptual framework linking a biophysically defensible model that combines ecosystem function with hydrology. The objective is to develop a model capable of representing the cumulative impact of multiple stresses on water resources and associated ecosystem function.
Constructing a justice model based on Sen's capability approach
Yüksel, Sevgi
2008-01-01
The thesis provides a possible justice model based on Sen's capability approach. For this goal, we first analyze the general structure of a theory of justice, identifying the main variables and issues. Furthermore, based on Sen (2006) and Kolm (1998), we look at 'transcendental' and 'comparative' approaches to justice and concentrate on the sufficiency condition for the comparative approach. Then, taking Rawls' theory of justice as a starting point, we present how Sen's capability approach em...
Mobility and Position Error Analysis of a Complex Planar Mechanism with Redundant Constraints
Sun, Qipeng; Li, Gangyan
2018-03-01
Nowadays, mechanisms with redundant constraints are widely used and have attracted much attention for their merits. This paper analyzes the mechanism of redundant constraints in a mechanical system. An analysis method for planar linkages with repetitive structures is proposed to obtain the number and type of constraints. According to differences in application and constraint characteristics, redundant constraints are divided into theoretical planar redundant constraints and space-planar redundant constraints. A calculation formula for the number of redundant constraints and a method for judging their type are presented. A complex mechanism with redundant constraints is then analyzed to determine the influence of redundant constraints on mechanical performance. Combining theoretical derivation and simulation, an analysis method is put forward for the position error of complex mechanisms with redundant constraints, pointing out how to eliminate or reduce the influence of redundant constraints.
Challenges and opportunities for integrating lake ecosystem modelling approaches
Mooij, Wolf M.; Trolle, Dennis; Jeppesen, Erik; Arhonditsis, George; Belolipetsky, Pavel V.; Chitamwebwa, Deonatus B.R.; Degermendzhy, Andrey G.; DeAngelis, Donald L.; Domis, Lisette N. De Senerpont; Downing, Andrea S.; Elliott, J. Alex; Ruberto, Carlos Ruberto; Gaedke, Ursula; Genova, Svetlana N.; Gulati, Ramesh D.; Hakanson, Lars; Hamilton, David P.; Hipsey, Matthew R.; Hoen, Jochem 't; Hulsmann, Stephan; Los, F. Hans; Makler-Pick, Vardit; Petzoldt, Thomas; Prokopkin, Igor G.; Rinke, Karsten; Schep, Sebastiaan A.; Tominaga, Koji; Van Dam, Anne A.; Van Nes, Egbert H.; Wells, Scott A.; Janse, Jan H.
2010-01-01
A large number and wide variety of lake ecosystem models have been developed and published during the past four decades. We identify two challenges for making further progress in this field. One such challenge is to avoid developing more models largely following the concept of others ('reinventing the wheel'). The other challenge is to avoid focusing on only one type of model, while ignoring new and diverse approaches that have become available ('having tunnel vision'). In this paper, we aim at improving the awareness of existing models and knowledge of concurrent approaches in lake ecosystem modelling, without covering all possible model tools and avenues. First, we present a broad variety of modelling approaches. To illustrate these approaches, we give brief descriptions of rather arbitrarily selected sets of specific models. We deal with static models (steady state and regression models), complex dynamic models (CAEDYM, CE-QUAL-W2, Delft 3D-ECO, LakeMab, LakeWeb, MyLake, PCLake, PROTECH, SALMO), structurally dynamic models and minimal dynamic models. We also discuss a group of approaches that could all be classified as individual based: super-individual models (Piscator, Charisma), physiologically structured models, stage-structured models and trait-based models. We briefly mention genetic algorithms, neural networks, Kalman filters and fuzzy logic. Thereafter, we zoom in, as an in-depth example, on the multi-decadal development and application of the lake ecosystem model PCLake and related models (PCLake Metamodel, Lake Shira Model, IPH-TRIM3D-PCLake). In the discussion, we argue that while the historical development of each approach and model is understandable given its 'leading principle', there are many opportunities for combining approaches. We take the point of view that a single 'right' approach does not exist and should not be strived for. Instead, multiple modelling approaches, applied concurrently to a given problem, can help develop an integrative
An ontology-based approach for modelling architectural styles
Pahl, Claus; Giesecke, Simon; Hasselbring, Wilhelm
2007-01-01
The conceptual modelling of software architectures is of central importance for the quality of a software system. A rich modelling language is required to integrate the different aspects of architecture modelling, such as architectural styles, structural and behavioural modelling, into a coherent framework. We propose an ontological approach for architectural style modelling based on description logic as an abstract, meta-level modelling instrument. Architect...
Mathematical modelling a case studies approach
Illner, Reinhard; McCollum, Samantha; Roode, Thea van
2004-01-01
Mathematical modelling is a subject without boundaries. It is the means by which mathematics becomes useful to virtually any subject. Moreover, modelling has been and continues to be a driving force for the development of mathematics itself. This book explains the process of modelling real situations to obtain mathematical problems that can be analyzed, thus solving the original problem. The presentation is in the form of case studies, which are developed much as they would be in true applications. In many cases, an initial model is created, then modified along the way. Some cases are familiar, such as the evaluation of an annuity. Others are unique, such as the fascinating situation in which an engineer, armed only with a slide rule, had 24 hours to compute whether a valve would hold when a temporary rock plug was removed from a water tunnel. Each chapter ends with a set of exercises and some suggestions for class projects. Some projects are extensive, as with the explorations of the predator-prey model; oth...
Directory of Open Access Journals (Sweden)
R. Al-Haddad
2011-01-01
As reconfigurable devices' capacities and the complexity of applications that use them increase, the need for self-reliance of deployed systems becomes increasingly prominent. Organic computing paradigms have been proposed for fault-tolerant systems because they promote behaviors that allow complex digital systems to adapt and survive in demanding environments. In this paper, we develop a sustainable modular adaptive redundancy technique (SMART) composed of a two-layered organic system. The hardware layer is implemented on a Xilinx Virtex-4 Field Programmable Gate Array (FPGA) to provide self-repair using a novel approach called the reconfigurable adaptive redundancy system (RARS). The software layer supervises the organic activities on the FPGA and extends the self-healing capabilities through application-independent, intrinsic, and evolutionary repair techniques that leverage the benefits of dynamic partial reconfiguration (PR). SMART was evaluated using a Sobel edge-detection application and was shown to tolerate stressful sequences of injected transient and permanent faults while reducing dynamic power consumption by 30% compared to conventional triple modular redundancy (TMR) techniques, with nominal impact on the fault-tolerance capabilities. Moreover, PR is employed to keep the system online while under repair and also to reduce repair time. Experiments have shown a 27.48% decrease in repair time when PR is employed compared to the full-bitstream configuration case.
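SMART is benchmarked above against conventional triple modular redundancy (TMR). As a minimal sketch of that baseline technique only (not of SMART or RARS themselves), a majority voter over three redundant module outputs can be written as:

```python
def tmr_vote(a, b, c):
    """Majority vote over three redundant module outputs.

    Returns (value, fault): the value agreed on by at least two modules,
    with fault=False; if all three outputs disagree, no majority exists
    and the fault flag is raised.
    """
    if a == b or a == c:
        return a, False
    if b == c:
        return b, False
    return None, True  # no majority: uncorrectable fault

# A transient fault in one module is masked by the two healthy copies:
value, fault = tmr_vote(0b1011, 0b1011, 0b0011)
assert value == 0b1011 and not fault
```

A single-fault assumption is what makes TMR work: two simultaneous faults in different modules can outvote the healthy copy, which is one motivation for the adaptive schemes the paper proposes.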
The simplified models approach to constraining supersymmetry
Energy Technology Data Exchange (ETDEWEB)
Perez, Genessis [Institut fuer Theoretische Physik, Karlsruher Institut fuer Technologie (KIT), Wolfgang-Gaede-Str. 1, 76131 Karlsruhe (Germany); Kulkarni, Suchita [Laboratoire de Physique Subatomique et de Cosmologie, Universite Grenoble Alpes, CNRS IN2P3, 53 Avenue des Martyrs, 38026 Grenoble (France)
2015-07-01
The interpretation of the experimental results at the LHC is model dependent, which implies that the searches provide limited constraints on scenarios such as supersymmetry (SUSY). The Simplified Model Spectra (SMS) framework used by the ATLAS and CMS collaborations is useful to overcome this limitation. The SMS framework involves a small number of parameters (all the properties are reduced to the mass spectrum, the production cross section and the branching ratio) and hence is more generic than presenting results in terms of soft parameters. In our work, the SMS framework was used to test the Natural SUSY (NSUSY) scenario. To accomplish this task, two automated tools (SModelS and Fastlim) were used to decompose the NSUSY parameter space in terms of simplified models and confront the theoretical predictions against the experimental results. The achievements of both tools, as well as their strengths and limitations, are presented here for the NSUSY scenario.
Lightweight approach to model traceability in a CASE tool
Vileiniskis, Tomas; Skersys, Tomas; Pavalkis, Saulius; Butleris, Rimantas; Butkiene, Rita
2017-07-01
The term "model-driven" is by no means a new buzzword within the system development community. Nevertheless, the ever-increasing complexity of model-driven approaches keeps fueling discussions around this paradigm and pushes researchers to develop new and more effective approaches to system development. With this increasing complexity, model traceability, and model management as a whole, become indispensable activities of the model-driven system development process. The main goal of this paper is to present the conceptual design and implementation of a practical lightweight approach to model traceability in a CASE tool.
New approaches for modeling type Ia supernovae
International Nuclear Information System (INIS)
Zingale, Michael; Almgren, Ann S.; Bell, John B.; Day, Marcus S.; Rendleman, Charles A.; Woosley, Stan
2007-01-01
Type Ia supernovae (SNe Ia) are the largest thermonuclear explosions in the Universe. Their light output can be seen across great distances and has led to the discovery that the expansion rate of the Universe is accelerating. Despite the significance of SNe Ia, there are still a large number of uncertainties in current theoretical models. Computational modeling offers the promise to help answer the outstanding questions. However, even with today's supercomputers, such calculations are extremely challenging because of the wide range of length and timescales. In this paper, we discuss several new algorithms for simulations of SNe Ia and demonstrate some of their successes
Chancroid transmission dynamics: a mathematical modeling approach.
Bhunu, C P; Mushayabasa, S
2011-12-01
Mathematical models have long been used to better understand disease transmission dynamics and how to effectively control them. Here, a chancroid infection model is presented and analyzed. The disease-free equilibrium is shown to be globally asymptotically stable when the reproduction number is less than unity. High levels of treatment are shown to reduce the reproduction number suggesting that treatment has the potential to control chancroid infections in any given community. This result is also supported by numerical simulations which show a decline in chancroid cases whenever the reproduction number is less than unity.
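The abstract's central claim — that driving the reproduction number below unity (for example via treatment) makes the infection die out — can be illustrated with a generic discrete-time SIS-type sketch. This is not the paper's chancroid model; the parameter names `beta` (transmission) and `gamma` (recovery, increased by treatment) are illustrative assumptions:

```python
def simulate_sis(beta, gamma, i0=0.01, steps=200):
    """Discrete-time SIS update for the infected fraction i.

    The basic reproduction number is R0 = beta / gamma. Treatment raises
    the recovery rate gamma, lowering R0; when R0 < 1 the infected
    fraction decays toward zero, when R0 > 1 it settles at an endemic
    level 1 - 1/R0.
    """
    i = i0
    for _ in range(steps):
        i += beta * i * (1 - i) - gamma * i
    return i

low = simulate_sis(beta=0.2, gamma=0.4)   # R0 = 0.5: infection dies out
high = simulate_sis(beta=0.4, gamma=0.2)  # R0 = 2.0: endemic equilibrium
assert low < 1e-4 < high
```

The stability analysis in the paper (global asymptotic stability of the disease-free equilibrium for R0 < 1) is the rigorous counterpart of this numerical behaviour.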
A kinetic approach to magnetospheric modeling
International Nuclear Information System (INIS)
Whipple, E.C. Jr.
1979-01-01
The earth's magnetosphere is caused by the interaction between the flowing solar wind and the earth's magnetic dipole, with the distorted magnetic field in the outer parts of the magnetosphere due to the current systems resulting from this interaction. It is surprising that even the conceptually simple problem of the collisionless interaction of a flowing plasma with a dipole magnetic field has not been solved. A kinetic approach is essential if one is to take into account the dispersion of particles with different energies and pitch angles and the fact that particles on different trajectories have different histories and may come from different sources. Solving the interaction problem involves finding the various types of possible trajectories, populating them with particles appropriately, and then treating the electric and magnetic fields self-consistently with the resulting particle densities and currents. This approach is illustrated by formulating a procedure for solving the collisionless interaction problem on open field lines in the case of a slowly flowing magnetized plasma interacting with a magnetic dipole
Fractal approach to computer-analytical modelling of tree crown
International Nuclear Information System (INIS)
Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.
1993-09-01
In this paper we discuss three approaches to modeling tree crown development: experimental (i.e. regression), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. Common to all of them is the assumption that a tree can be regarded as a fractal object, i.e. a collection of self-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and of light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model on experimental data. Different stages of the above-mentioned approaches are described, together with experimental data for spruce, a description of the computer modeling system and a variant of the computer model. (author). 9 refs, 4 figs
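The remark that a crown "combines the properties of two- and three-dimensional bodies" can be made quantitative with the similarity dimension of a self-similar object. The specific numbers below (a crown built from 5 sub-crowns at half scale) are hypothetical, chosen only to land between 2 and 3:

```python
import math

def similarity_dimension(n_copies, scale_ratio):
    """Similarity dimension D = ln N / ln(1/r) of a self-similar object
    assembled from N copies of itself, each scaled down by factor r."""
    return math.log(n_copies) / math.log(1.0 / scale_ratio)

# A hypothetical crown of 5 half-scale sub-crowns is "more than a
# surface but less than a solid":
d = similarity_dimension(5, 0.5)  # ~2.32
assert 2.0 < d < 3.0
```

A plain surface (4 half-scale copies) gives D = 2 exactly, and a filled solid (8 half-scale copies) gives D = 3, bracketing the fractal crown.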
Feng, Jianyuan; Turksoy, Kamuran; Samadi, Sediqeh; Hajizadeh, Iman; Littlejohn, Elizabeth; Cinar, Ali
2017-12-01
Supervision and control systems rely on signals from sensors to receive information to monitor the operation of a system and adjust manipulated variables to achieve the control objective. However, sensor performance is often limited by their working conditions and sensors may also be subjected to interference by other devices. Many different types of sensor errors such as outliers, missing values, drifts and corruption with noise may occur during process operation. A hybrid online sensor error detection and functional redundancy system is developed to detect errors in online signals, and replace erroneous or missing values detected with model-based estimates. The proposed hybrid system relies on two techniques, an outlier-robust Kalman filter (ORKF) and a locally-weighted partial least squares (LW-PLS) regression model, which leverage the advantages of automatic measurement error elimination with ORKF and data-driven prediction with LW-PLS. The system includes a nominal angle analysis (NAA) method to distinguish between signal faults and large changes in sensor values caused by real dynamic changes in process operation. The performance of the system is illustrated with clinical data from continuous glucose monitoring (CGM) sensors used by people with type 1 diabetes. More than 50,000 CGM sensor errors were added to original CGM signals from 25 clinical experiments, then the performance of the error detection and functional redundancy algorithms was analyzed. The results indicate that the proposed system can successfully detect most of the erroneous signals and substitute them with reasonable estimated values computed by the functional redundancy system.
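The detect-and-substitute idea can be sketched with a scalar Kalman filter that gates on the innovation: measurements too far from the prediction are flagged and replaced by the model estimate. This is a simplified stand-in for the ORKF plus functional-redundancy pipeline, not the paper's algorithm; the noise variances `q`, `r` and the `gate` threshold are illustrative assumptions:

```python
def robust_filter(measurements, q=0.01, r=0.25, gate=3.0):
    """Scalar random-walk Kalman filter with innovation gating.

    Measurements whose innovation exceeds `gate` standard deviations
    of the innovation distribution are treated as sensor errors: they
    are flagged and replaced by the model prediction.
    """
    x, p = measurements[0], 1.0
    cleaned, flags = [x], [False]
    for z in measurements[1:]:
        p_pred = p + q                     # predict (random-walk model)
        s = p_pred + r                     # innovation variance
        innov = z - x
        if abs(innov) > gate * s ** 0.5:   # outlier: reject measurement
            cleaned.append(x)
            flags.append(True)
            p = p_pred
        else:                              # normal Kalman update
            k = p_pred / s
            x = x + k * innov
            p = (1 - k) * p_pred
            cleaned.append(x)
            flags.append(False)
    return cleaned, flags

# A +50 spike in an otherwise flat glucose-like trace is caught and
# replaced by the filtered estimate:
signal = [10.0, 10.1, 9.9, 60.0, 10.2, 10.0]
cleaned, flags = robust_filter(signal)
assert flags == [False, False, False, True, False, False]
assert abs(cleaned[3] - 10.0) < 1.0
```

The paper's NAA step addresses the hard case this toy ignores: distinguishing a genuine fast physiological change from a fault that a pure innovation gate would misclassify.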
A novel approach to modeling atmospheric convection
Goodman, A.
2016-12-01
The inadequate representation of clouds continues to be a large source of uncertainty in the projections from global climate models (GCMs). With continuous advances in computational power, however, the ability for GCMs to explicitly resolve cumulus convection will soon be realized. For this purpose, Jung and Arakawa (2008) proposed the Vector Vorticity Model (VVM), in which vorticity is the predicted variable instead of momentum. This has the advantage of eliminating the pressure gradient force within the framework of an anelastic system. However, the VVM was designed for use on a planar quadrilateral grid, making it unsuitable for implementation in global models discretized on the sphere. Here we have proposed a modification to the VVM where instead the curl of the horizontal vorticity is the primary predicted variable. This allows us to maintain the benefits of the original VVM while working within the constraints of a non-quadrilateral mesh. We found that our proposed model produced results from a warm bubble simulation that were consistent with the VVM. Further improvements that can be made to the VVM are also discussed.
INDIVIDUAL BASED MODELLING APPROACH TO THERMAL ...
Diadromous fish populations in the Pacific Northwest face challenges along their migratory routes from declining habitat quality, harvest, and barriers to longitudinal connectivity. Changes in river temperature regimes are producing an additional challenge for upstream migrating adult salmon and steelhead, species that are sensitive to absolute and cumulative thermal exposure. Adult salmon populations have been shown to utilize cold water patches along migration routes when mainstem river temperatures exceed thermal optima. We are employing an individual based model (IBM) to explore the costs and benefits of spatially-distributed cold water refugia for adult migrating salmon. Our model, developed in the HexSim platform, is built around a mechanistic behavioral decision tree that drives individual interactions with their spatially explicit simulated environment. Population-scale responses to dynamic thermal regimes, coupled with other stressors such as disease and harvest, become emergent properties of the spatial IBM. Other model outputs include arrival times, species-specific survival rates, body energetic content, and reproductive fitness levels. Here, we discuss the challenges associated with parameterizing an individual based model of salmon and steelhead in a section of the Columbia River. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec
A new approach to model mixed hydrates
Czech Academy of Sciences Publication Activity Database
Hielscher, S.; Vinš, Václav; Jäger, A.; Hrubý, Jan; Breitkopf, C.; Span, R.
2018-01-01
Roč. 459, March (2018), s. 170-185 ISSN 0378-3812 R&D Projects: GA ČR(CZ) GA17-08218S Institutional support: RVO:61388998 Keywords : gas hydrate * mixture * modeling Subject RIV: BJ - Thermodynamics Impact factor: 2.473, year: 2016 https://www.sciencedirect.com/science/article/pii/S0378381217304983
Energy and development : A modelling approach
van Ruijven, B.J.|info:eu-repo/dai/nl/304834521
2008-01-01
Rapid economic growth of developing countries like India and China implies that these countries become important actors in the global energy system. Examples of this impact are the present-day oil shortages and rapidly increasing emissions of greenhouse gases. Global energy models are used to explore
Modeling Approaches for Describing Microbial Population Heterogeneity
DEFF Research Database (Denmark)
Lencastre Fernandes, Rita
environmental conditions. Three cases are presented and discussed in this thesis. Common to all is the use of S. cerevisiae as model organism, and the use of cell size and cell cycle position as single-cell descriptors. The first case focuses on the experimental and mathematical description of a yeast...
Energy and Development. A Modelling Approach
International Nuclear Information System (INIS)
Van Ruijven, B.J.
2008-01-01
Rapid economic growth of developing countries like India and China implies that these countries become important actors in the global energy system. Examples of this impact are the present-day oil shortages and rapidly increasing emissions of greenhouse gases. Global energy models are used to explore possible future developments of the global energy system and identify policies to prevent potential problems. Such estimations of future energy use in developing countries are very uncertain. Crucial factors in the future energy use of these regions are electrification, urbanisation and income distribution, issues that are generally not included in present-day global energy models. Model simulations in this thesis show that current insights into developments in low-income regions lead to a wide range of expected energy use in 2030 of the residential and transport sectors. This is mainly caused by many different model calibration options that result from the limited data availability for model development and calibration. We developed a method to identify the impact of model calibration uncertainty on future projections. We developed a new model for residential energy use in India, in collaboration with the Indian Institute of Science. Experiments with this model show that the impact of electrification and income distribution is less univocal than often assumed. The use of fuelwood, with related health risks, can decrease rapidly if the income of poor groups increases. However, there is a trade-off in terms of CO2 emissions because these groups gain access to electricity and the ownership of appliances increases. Another issue is the potential role of new technologies in developing countries: will they use the opportunities of leapfrogging? We explored the potential role of hydrogen, an energy carrier that might play a central role in a sustainable energy system. We found that hydrogen only plays a role before 2050 under very optimistic assumptions. Regional energy
A REDUNDANT GNSS-INS LOW-COST UAV NAVIGATION SOLUTION FOR PROFESSIONAL APPLICATIONS
Directory of Open Access Journals (Sweden)
J. Navarro
2015-08-01
This paper presents the current results for the FP7 GINSEC project. Its goal is to build a pre-commercial prototype of a low-cost, accurate and reliable system for the professional UAV market. Low-cost, in this context, stands for the use of sensors in the most affordable segment of the market, especially MEMS IMUs and GNSS receivers. Reliability applies to the ability of the autopilot to cope with situations where unfavourable GNSS reception conditions or strong electromagnetic fields make the computation of the position and / or attitude of the UAV difficult. Professional and accurate mean that, at least using post-processing techniques as PPP, it will be possible to reach cm-level precisions that open the door to a range of applications demanding high levels of quality in positioning, as precision agriculture or mapping. To achieve this goal, a rigorous sensor error modelling approach, the use of redundant IMUs and a dual-GNSS receiver setup, together with close-coupling techniques and an extended Kalman filter with self-analysis capabilities have been used. Although the project is not yet complete, the results obtained up to now prove the feasibility of the aforementioned goal, especially in those aspects related to position determination. Research work is still ongoing to estimate the heading using a dual-GNSS receiver setup; preliminary results prove the validity of this approach for relatively long baselines, and positive results are expected when these are shorter than 1 m, which is a necessary requisite for small-sized UAVs.
Multisensory processing in the redundant-target effect
DEFF Research Database (Denmark)
Gondan, Matthias; Niederhaus, Birgit; Rösler, Frank
2005-01-01
Participants respond more quickly to two simultaneously presented target stimuli of two different modalities (redundant targets) than would be predicted from their reaction times to the unimodal targets. To examine the neural correlates of this redundant-target effect, event-related potentials (ERPs) were recorded to auditory, visual, and bimodal standard and target stimuli presented at two locations (left and right of central fixation). Bimodal stimuli were combinations of two standards, two targets, or a standard and a target, presented either from the same or from different locations...
Information filtering based on corrected redundancy-eliminating mass diffusion.
Zhu, Xuzhen; Yang, Yujie; Chen, Guilin; Medo, Matus; Tian, Hui; Cai, Shi-Min
2017-01-01
Methods used in information filtering and recommendation often rely on quantifying the similarity between objects or users. The similarity metrics used often suffer from similarity redundancies arising from correlations between objects' attributes. Based on an unweighted undirected object-user bipartite network, we propose a Corrected Redundancy-Eliminating similarity index (CRE) which is based on a spreading process on the network. Extensive experiments on three benchmark data sets (MovieLens, Netflix and Amazon) show that when used in recommendation, the CRE yields significant improvements in terms of recommendation accuracy and diversity. A detailed analysis is presented to unveil the origins of the observed differences between the CRE and mainstream similarity indices.
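The spreading process the CRE builds on is standard mass diffusion on the user-item bipartite network: unit resource starts on the target user's items, diffuses to the users holding them, then back to items. The sketch below shows only this baseline diffusion; the redundancy correction that defines the CRE itself is paper-specific and omitted here, and the tiny example network is invented for illustration:

```python
def mass_diffusion_scores(user_items, target_user):
    """Plain mass-diffusion recommendation scores on a bipartite net.

    user_items maps each user to the set of items they collected.
    Returns item scores from two spreading steps seeded on the target
    user's items.
    """
    # degree of each item = number of users holding it
    item_deg = {}
    for items in user_items.values():
        for it in items:
            item_deg[it] = item_deg.get(it, 0) + 1

    # step 1: items -> users (each seeded item splits its unit resource
    # equally among the users holding it)
    user_res = {}
    for user, items in user_items.items():
        r = sum(1.0 / item_deg[it]
                for it in items if it in user_items[target_user])
        if r:
            user_res[user] = r

    # step 2: users -> items (each user splits its resource equally
    # among its own items)
    scores = {}
    for user, r in user_res.items():
        share = r / len(user_items[user])
        for it in user_items[user]:
            scores[it] = scores.get(it, 0.0) + share
    return scores

users = {"u1": {"a", "b"}, "u2": {"b", "c"}, "u3": {"c", "d"}}
scores = mass_diffusion_scores(users, "u1")
# "c" is reachable from u1 via the shared item "b"; "d" is not reached
assert scores["c"] > scores.get("d", 0.0) >= 0.0
```

Items the target user already holds also receive scores and are simply excluded when ranking recommendations.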
Redundant information from thermal illumination: quantum Darwinism in scattered photons
Energy Technology Data Exchange (ETDEWEB)
Jess Riedel, C; Zurek, Wojciech H, E-mail: criedel@physics.ucsb.edu [Theory Division, LANL, Los Alamos, NM 87545 (United States)
2011-07-15
We study quantum Darwinism, the redundant recording of information about the preferred states of a decohering system by its environment, for an object illuminated by a blackbody. We calculate the quantum mutual information between the object and its photon environment for blackbodies that cover an arbitrary section of the sky. In particular, we demonstrate that more extended sources have a reduced ability to create redundant information about the system, in agreement with previous evidence that initial mixedness of an environment slows, but does not stop, the production of records. We also show that the qualitative results are robust for more general initial states of the system.
Integration models: multicultural and liberal approaches confronted
Janicki, Wojciech
2012-01-01
European societies have been shaped by their Christian past, the upsurge of international migration, democratic rule and a liberal tradition rooted in religious tolerance. Accelerating globalization processes impose new challenges on European societies striving to protect their diversity. This struggle is especially clearly visible in the case of minorities trying to resist melting into the mainstream culture. European countries' legal systems and cultural policies respond to these efforts in many ways. Respecting identity-politics-driven group rights seems to be the most common approach, resulting in the creation of a multicultural society. However, the outcome of respecting group rights may be remarkably contradictory both to individual rights growing out of the liberal tradition and to the reinforced concept of integration of immigrants into host societies. This paper discusses the upturn of identity politics in the context of both individual rights and the integration of European societies.
Modelling thermal plume impacts - Kalpakkam approach
International Nuclear Information System (INIS)
Rao, T.S.; Anup Kumar, B.; Narasimhan, S.V.
2002-01-01
A good understanding of temperature patterns in the receiving waters is essential to know the heat dissipation from thermal plumes originating from coastal power plants. The seasonal temperature profiles of the Kalpakkam coast near the Madras Atomic Power Station (MAPS) thermal outfall site are determined and analysed. It is observed that the seasonal current reversal in the near shore zone is one of the major mechanisms for the transport of effluents away from the point of mixing. To further refine our understanding of the mixing and dilution processes, it is necessary to numerically simulate the coastal ocean processes by parameterising the key factors concerned. In this paper, we outline the experimental approach to achieve this objective. (author)
Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach
Energy Technology Data Exchange (ETDEWEB)
Liao, James C. [Univ. of California, Los Angeles, CA (United States)
2016-10-01
Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.
Nuclear security assessment with Markov model approach
International Nuclear Information System (INIS)
Suzuki, Mitsutoshi; Terao, Norichika
2013-01-01
Nuclear security risk assessment with a Markov model based on random events is performed to explore an evaluation methodology for physical protection in nuclear facilities. Because security incidents are initiated by malicious and intentional acts, expert judgment and Bayes updating are used to estimate scenario and initiation likelihood, and it is assumed that a Markov model derived from a stochastic process can be applied to the incident sequence. Both an unauthorized intrusion as a Design Basis Threat (DBT) and a stand-off attack as beyond-DBT are applied to hypothetical facilities, and the performance of physical protection and the mitigation and minimization of consequences are investigated to develop the assessment methodology in a semi-quantitative manner. It is shown that cooperation between the facility operator and the security authority is important to respond to beyond-DBT incidents. (author)
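The flavour of a Markov incident-sequence model can be conveyed with a toy chain: an intruder moves through successive protection layers, and in each layer either advances, is detected and neutralised, or tries again. This is an illustrative construction, not the paper's model; `p_advance` and `p_detect` are hypothetical per-stage rates:

```python
def traversal_probability(p_advance, p_detect, stages):
    """Probability an intruder traverses all `stages` before detection.

    Each stage is a Markov state with three outgoing transitions:
    advance (p_advance), detected/absorbed (p_detect), or retry
    (1 - p_advance - p_detect, a self-loop). Conditional on eventually
    leaving a stage, the intruder advances with probability
    p_advance / (p_advance + p_detect); stages compound multiplicatively.
    """
    step = p_advance / (p_advance + p_detect)
    return step ** stages

# Each extra protection layer multiplies the intruder's success chance
# by the per-stage factor (0.75 here):
assert abs(traversal_probability(0.3, 0.1, 2) - 0.5625) < 1e-12
assert traversal_probability(0.3, 0.1, 3) < traversal_probability(0.3, 0.1, 2)
```

Expert judgment and Bayes updating, as the abstract notes, would enter by placing (and revising) distributions over the per-stage probabilities rather than fixing them as above.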
An Approach for Modeling Supplier Resilience
2016-04-30
interests include resilience modeling of supply chains, reliability engineering, and meta-heuristic optimization. [m.hosseini@ou.edu] Abstract ... be availability, or the extent to which the products produced by the supply chain are available for use (measured as a ratio of uptime to total time ... of the use of the product). Available systems are important in many industries, particularly in the Department of Defense, where weapons systems
Tumour resistance to cisplatin: a modelling approach
International Nuclear Information System (INIS)
Marcu, L; Bezak, E; Olver, I; Doorn, T van
2005-01-01
Although chemotherapy has revolutionized the treatment of haematological tumours, in many common solid tumours the success has been limited. Some of the reasons for the limitations are: the timing of drug delivery, resistance to the drug, repopulation between cycles of chemotherapy and the lack of complete understanding of the pharmacokinetics and pharmacodynamics of a specific agent. Cisplatin is among the most effective cytotoxic agents used in head and neck cancer treatments. When modelling cisplatin as a single agent, the properties of cisplatin only have to be taken into account, reducing the number of assumptions that are considered in the generalized chemotherapy models. The aim of the present paper is to model the biological effect of cisplatin and to simulate the consequence of cisplatin resistance on tumour control. The 'treated' tumour is a squamous cell carcinoma of the head and neck, previously grown by computer-based Monte Carlo techniques. The model maintained the biological constitution of a tumour through the generation of stem cells, proliferating cells and non-proliferating cells. Cell kinetic parameters (mean cell cycle time, cell loss factor, thymidine labelling index) were also consistent with the literature. A sensitivity study on the contribution of various mechanisms leading to drug resistance is undertaken. To quantify the extent of drug resistance, the cisplatin resistance factor (CRF) is defined as the ratio between the number of surviving cells of the resistant population and the number of surviving cells of the sensitive population, determined after the same treatment time. It is shown that there is a supra-linear dependence of CRF on the percentage of cisplatin-DNA adducts formed, and a sigmoid-like dependence between CRF and the percentage of cells killed in resistant tumours. Drug resistance is shown to be a cumulative process which eventually can overcome tumour regression leading to treatment failure
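The cisplatin resistance factor (CRF) defined in the abstract is a simple ratio, and its cumulative character can be shown directly. The per-cycle surviving fractions below are hypothetical, chosen only to illustrate how the ratio compounds over repeated treatment cycles:

```python
def crf(n0, surv_resistant, surv_sensitive, cycles):
    """Cisplatin resistance factor after repeated treatment cycles.

    Per the abstract's definition, CRF is the number of surviving cells
    of the resistant population divided by the number of surviving cells
    of the sensitive population after the same treatment time. With
    constant per-cycle surviving fractions the ratio compounds
    geometrically, illustrating why resistance is a cumulative process.
    """
    resistant = n0 * surv_resistant ** cycles
    sensitive = n0 * surv_sensitive ** cycles
    return resistant / sensitive

# One cycle: resistant cells survive twice as well (CRF = 2).
assert abs(crf(1e6, 0.5, 0.25, 1) - 2.0) < 1e-9
# Three cycles: the advantage compounds to 2**3 = 8.
assert abs(crf(1e6, 0.5, 0.25, 3) - 8.0) < 1e-9
```

The paper's reported supra-linear and sigmoid dependences arise because, unlike this sketch, the surviving fractions there depend on adduct formation and cell-kinetic state rather than being constants.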
Tumour resistance to cisplatin: a modelling approach
Energy Technology Data Exchange (ETDEWEB)
Marcu, L [School of Chemistry and Physics, University of Adelaide, North Terrace, SA 5000 (Australia); Bezak, E [School of Chemistry and Physics, University of Adelaide, North Terrace, SA 5000 (Australia); Olver, I [Faculty of Medicine, University of Adelaide, North Terrace, SA 5000 (Australia); Doorn, T van [School of Chemistry and Physics, University of Adelaide, North Terrace, SA 5000 (Australia)
2005-01-07
Although chemotherapy has revolutionized the treatment of haematological tumours, in many common solid tumours the success has been limited. Some of the reasons for the limitations are: the timing of drug delivery, resistance to the drug, repopulation between cycles of chemotherapy and the lack of complete understanding of the pharmacokinetics and pharmacodynamics of a specific agent. Cisplatin is among the most effective cytotoxic agents used in head and neck cancer treatments. When modelling cisplatin as a single agent, the properties of cisplatin only have to be taken into account, reducing the number of assumptions that are considered in the generalized chemotherapy models. The aim of the present paper is to model the biological effect of cisplatin and to simulate the consequence of cisplatin resistance on tumour control. The 'treated' tumour is a squamous cell carcinoma of the head and neck, previously grown by computer-based Monte Carlo techniques. The model maintained the biological constitution of a tumour through the generation of stem cells, proliferating cells and non-proliferating cells. Cell kinetic parameters (mean cell cycle time, cell loss factor, thymidine labelling index) were also consistent with the literature. A sensitivity study on the contribution of various mechanisms leading to drug resistance is undertaken. To quantify the extent of drug resistance, the cisplatin resistance factor (CRF) is defined as the ratio between the number of surviving cells of the resistant population and the number of surviving cells of the sensitive population, determined after the same treatment time. It is shown that there is a supra-linear dependence of CRF on the percentage of cisplatin-DNA adducts formed, and a sigmoid-like dependence between CRF and the percentage of cells killed in resistant tumours. Drug resistance is shown to be a cumulative process which eventually can overcome tumour regression leading to treatment failure.
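The cisplatin resistance factor defined above is a simple ratio of surviving cell counts; a minimal sketch in Python (the cell counts are illustrative, not the paper's data):

```python
def cisplatin_resistance_factor(surviving_resistant, surviving_sensitive):
    """CRF: ratio of surviving cells in the resistant population to those in
    the sensitive population, determined after the same treatment time
    (as defined in the abstract above)."""
    if surviving_sensitive == 0:
        raise ValueError("sensitive population must have survivors to form the ratio")
    return surviving_resistant / surviving_sensitive

# Illustrative (made-up) surviving cell counts after identical treatment times:
crf = cisplatin_resistance_factor(8.0e5, 2.0e5)
print(crf)  # 4.0
```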
ISM Approach to Model Offshore Outsourcing Risks
Directory of Open Access Journals (Sweden)
Sunand Kumar
2014-07-01
Full Text Available In an effort to achieve a competitive advantage via cost reductions and improved market responsiveness, organizations are increasingly employing offshore outsourcing as a major component of their supply chain strategies. But as is evident from the literature, a number of risks, such as political risk, risk due to cultural differences, compliance and regulatory risk, opportunistic risk and organizational structural risk, adversely affect the performance of offshore outsourcing in a supply chain network. This also leads to dissatisfaction among different stakeholders. The main objective of this paper is to identify and understand the mutual interaction among the various risks that affect the performance of offshore outsourcing. To this effect, the authors identified various risks through an extant review of the literature. From this information, an integrated model of the risks affecting offshore outsourcing is developed using interpretive structural modelling (ISM), and the structural relationships between these risks are modeled. Further, MICMAC analysis is done to analyze the driving power and dependence of the risks, which shall help managers identify and classify important criteria and reveal the direct and indirect effects of each criterion on offshore outsourcing. Results show that political risk and risk due to cultural differences act as strong drivers.
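MICMAC analysis of this kind derives each risk's driving power and dependence from the row and column sums of a binary reachability matrix; a minimal sketch, with a purely hypothetical matrix for four risks:

```python
def micmac(reachability):
    """MICMAC analysis: driving power = row sums, dependence = column sums
    of a binary reachability matrix (1 = row factor drives column factor)."""
    n = len(reachability)
    driving = [sum(row) for row in reachability]
    dependence = [sum(reachability[i][j] for i in range(n)) for j in range(n)]
    return driving, dependence

# Hypothetical reachability among 4 risks (values are illustrative only):
R = [
    [1, 1, 1, 1],   # e.g. political risk: drives everything, high driving power
    [0, 1, 1, 1],
    [0, 0, 1, 0],
    [0, 0, 1, 1],
]
driving, dependence = micmac(R)
print(driving)     # [4, 3, 1, 2]
print(dependence)  # [1, 2, 4, 3]
```

Risks with high driving power and low dependence (the first row above) would be classified as independent drivers in the MICMAC quadrant diagram.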
Remote sensing approach to structural modelling
International Nuclear Information System (INIS)
El Ghawaby, M.A.
1989-01-01
Remote sensing techniques are quite dependable tools in investigating geologic problems, especially those related to structural aspects. Landsat imagery provides discrimination between rock units, detection of large-scale structures such as folds and faults, as well as small-scale fabric elements such as foliation and banding. In order to fulfill the aim of geologic application of remote sensing, some essential surveying maps should be prepared from the images prior to the structural interpretation: land-use, land-form, drainage pattern, lithological unit and structural lineament maps. Afterwards, field verification should lead to the interpretation of a comprehensive structural model of the study area to apply to the target problem. To deduce such a model, there are two ways of analysis the interpreter may follow: the direct and the indirect methods. The direct one is needed in cases where the resources or targets are controlled by an obvious or exposed structural element or pattern. The indirect way is necessary for areas where the target is governed by a complicated structural pattern. Some case histories of structural modelling methods applied successfully to exploration for radioactive minerals, iron deposits and groundwater aquifers in Egypt are presented. The progress in imagery enhancement and the integration of remote sensing data with other geophysical and geochemical data allow a geologic interpretation to be carried out that is better than that achieved with either of the individual data sets. 9 refs
A moving approach for the Vector Hysteron Model
Energy Technology Data Exchange (ETDEWEB)
Cardelli, E. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Faba, A., E-mail: antonio.faba@unipg.it [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Laudani, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy); Quondam Antonio, S. [Department of Engineering, University of Perugia, Via G. Duranti 93, 06125 Perugia (Italy); Riganti Fulginei, F.; Salvini, A. [Department of Engineering, Roma Tre University, Via V. Volterra 62, 00146 Rome (Italy)
2016-04-01
A moving approach for the VHM (Vector Hysteron Model) is described here, to reconstruct both scalar and rotational magnetization of electrical steels with weak anisotropy, such as non-oriented grain silicon steel. The hysteron distribution is postulated to be a function of the magnetization state of the material, in order to overcome the practical limitation of the congruency property of the standard VHM approach. By using this formulation and a suitable accommodation procedure, the results obtained indicate that the model is accurate, in particular in reproducing the experimental behavior approaching the saturation region, providing a real improvement with respect to the previous approach.
SNP markers retrieval for a non-model species: a practical approach
Directory of Open Access Journals (Sweden)
Shahin Arwa
2012-01-01
Full Text Available Abstract Background SNP (Single Nucleotide Polymorphism) markers are rapidly becoming the markers of choice for applications in breeding because of developments in next generation sequencing (NGS) technology. For SNP development by NGS technologies, correct assembly of the huge amounts of sequence data generated is essential. Little is known about assemblers' performance, especially when dealing with highly heterogeneous species that show high genome complexity, or about the possible consequences of differences in assemblies for SNP retrieval. This study tested two assemblers (CAP3 and CLC) on 454 data from four lily genotypes and compared the results with respect to SNP retrieval. Results CAP3 assembly resulted in higher numbers of contigs, lower numbers of reads per contig, and shorter average read lengths compared to CLC. Blast comparisons showed that CAP3 contigs were highly redundant. Contrastingly, CLC in rare cases combined paralogs in one contig. Redundant and chimeric contigs may lead to erroneous SNPs. Filtering for redundancy can be done by blasting selected SNP markers against the contigs and discarding all SNP markers that show more than one blast hit. Results on chimeric contigs showed that only four out of 2,421 SNP markers were selected from chimeric contigs. Conclusion In practice, CLC performs better than CAP3 in assembling highly heterogeneous genome sequences, and consequently SNP retrieval is more efficient. Additionally, a simple flow scheme is suggested for SNP marker retrieval that can be valid for all non-model species.
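The redundancy filter described above (discard any SNP marker whose sequence blasts to more than one contig) amounts to a one-hit filter; a minimal sketch with hypothetical marker names and hit counts:

```python
def filter_redundant_markers(blast_hits):
    """Keep only SNP markers whose flanking sequence matches exactly one
    contig; markers with multiple BLAST hits likely derive from redundant
    or paralogous contigs and may yield erroneous SNPs."""
    return [marker for marker, hits in blast_hits.items() if hits == 1]

# Hypothetical mapping: marker name -> number of contigs hit by BLAST
hits = {"snp_001": 1, "snp_002": 3, "snp_003": 1, "snp_004": 2}
print(filter_redundant_markers(hits))  # ['snp_001', 'snp_003']
```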
Agribusiness model approach to territorial food development
Directory of Open Access Journals (Sweden)
Murcia Hector Horacio
2011-04-01
Full Text Available Several research efforts have coordinated the academic program of Agricultural Business Management of the University De La Salle (Bogotá D.C.) with the design and implementation of a sustainable agribusiness model applied to food development, with territorial projection. Rural development is considered as a process that aims to improve the current capacity and potential of the inhabitants of the sector, which refers not only to production levels and productivity of agricultural items. It takes into account the guidelines of the United Nations "Millennium Development Goals" and considers the concept of sustainable food and agriculture development, including food security and nutrition in an integrated interdisciplinary context, with a holistic and systemic dimension. The analysis is specified by a model with an emphasis on sustainable agribusiness production chains related to agricultural food items in a specific region. This model was correlated with farm (technical objectives), family (social purposes) and community (collective orientations) projects. Within this dimension, food development concepts and the methodologies of Participatory Action Research (PAR) are considered. Finally, it addresses the need to link the results to low-income communities, within the concepts of the "new rurality".
On the optimal scheduling of periodic tests and maintenance for reliable redundant components
International Nuclear Information System (INIS)
Courtois, Pierre-Jacques; Delsarte, Philippe
2006-01-01
Periodically, some m of the n redundant components of a dependable system may have to be taken out of service for inspection, testing or preventive maintenance. The system is then constrained to operate with lower (n-m) redundancy and thus with less reliability during these periods. However, more frequent periodic inspections decrease the probability that a component fails undetected in the time interval between successive inspections. An optimal time schedule of periodic preventive operations arises from these two conflicting factors, balancing the loss of redundancy during inspections against the reliability benefits of more frequent inspections. Considering no other factor than this decreased redundancy at inspection time, this paper demonstrates the existence of an optimal interval between inspections, which maximizes the mean time between system failures. By suitable transformations and variable identifications, an analytic closed-form expression of the optimum is obtained for the general (m, n) case. The optimum is shown to be unique within the ranges of parameter values valid in practice; its expression is easy to evaluate and shown to be useful for analyzing and understanding the influence of these parameters. Inspections are assumed to be perfect, i.e. they cause no component failure by themselves and leave no failure undetected. In this sense, the optimum determines a lowest bound for the system failure rate that can be achieved by a system of n redundant components, m of which require recurrent periods of unavailability of length t for inspection or maintenance. The model and its general closed-form solution are believed to be new. Previous work had computed optimal values for an estimation of a time average of system unavailability, but by numerical procedures only and with different numerical approximations, other objectives and model assumptions (one component only inspected at a time), and taking into account failures caused by testing itself, repair and
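The paper's closed-form optimum for the general (m, n) case is not reproduced here, but the underlying trade-off can be illustrated with the textbook single-component approximation, in which average unavailability u(T) ≈ λT/2 + τ/T is minimized at T* = √(2τ/λ); all parameter values below are assumed:

```python
import math

def optimal_inspection_interval(failure_rate, inspection_time):
    """Textbook single-component approximation (not the paper's general
    (m, n) closed form): undetected-failure unavailability grows as
    failure_rate*T/2 while inspection downtime per unit time is
    inspection_time/T; their sum is minimized at
    T* = sqrt(2 * inspection_time / failure_rate)."""
    return math.sqrt(2.0 * inspection_time / failure_rate)

lam = 1e-4   # assumed failure rate, failures per hour
tau = 2.0    # assumed hours out of service per inspection
T_star = optimal_inspection_interval(lam, tau)
print(round(T_star))  # 200 (hours)
```

Shorter inspection downtime or a higher failure rate both shrink T*, matching the intuition that inspections should be more frequent when they are cheap or failures are likely.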
Engineering approach to modeling of piled systems
International Nuclear Information System (INIS)
Coombs, R.F.; Silva, M.A.G. da
1980-01-01
Available methods of analysis of piled systems subjected to dynamic excitation invade areas of mathematics usually beyond the reach of a practising engineer. A simple technique that avoids that conflict is proposed, at least for preliminary studies, and its application, compared with other methods, is shown to be satisfactory. A corrective factor for parameters currently used to represent transmitting boundaries is derived for a finite strip that models an infinite layer. The influence of internal damping on the dynamic stiffness of the layer and on radiation damping is analysed. (Author) [pt
Jackiw-Pi model: A superfield approach
Gupta, Saurabh
2014-12-01
We derive the off-shell nilpotent and absolutely anticommuting Becchi-Rouet-Stora-Tyutin (BRST) as well as anti-BRST transformations s_{(a)b} corresponding to the Yang-Mills gauge transformations of the 3D Jackiw-Pi model by exploiting the "augmented" superfield formalism. We also show that the Curci-Ferrari restriction, which is a hallmark of any non-Abelian 1-form gauge theory, emerges naturally within this formalism and plays an instrumental role in providing the proof of absolute anticommutativity of s_{(a)b}.
Applied Regression Modeling A Business Approach
Pardoe, Iain
2012-01-01
An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculusRegression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a
Implicit moral evaluations: A multinomial modeling approach.
Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael
2017-01-01
Implicit moral evaluations-i.e., immediate, unintentional assessments of the wrongness of actions or persons-play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure-the Moral Categorization Task-and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
Modeling Saturn's Inner Plasmasphere: Cassini's Closest Approach
Moore, L.; Mendillo, M.
2005-05-01
Ion densities from the three-dimensional Saturn-Thermosphere-Ionosphere-Model (STIM, Moore et al., 2004) are extended above the plasma exobase using the formalism of Pierrard and Lemaire (1996, 1998), which evaluates the balance of gravitational, centrifugal and electric forces on the plasma. The parameter space of low-energy ionospheric contributions to Saturn's plasmasphere is explored by comparing results that span the observed extremes of plasma temperature, 650 K to 1700 K, and a range of velocity distributions, Lorentzian (or Kappa) to Maxwellian. Calculations are made for plasma densities along the path of the Cassini spacecraft's orbital insertion on 1 July 2004. These calculations neglect any ring or satellite sources of plasma, which are most likely minor contributors at 1.3 Saturn radii. Modeled densities will be compared with Cassini measurements as they become available. Moore, L.E., M. Mendillo, I.C.F. Mueller-Wodarg, and D.L. Murr, Icarus, 172, 503-520, 2004. Pierrard, V. and J. Lemaire, J. Geophys. Res., 101, 7923-7934, 1996. Pierrard, V. and J. Lemaire, J. Geophys. Res., 103, 4117, 1998.
Keyring models: An approach to steerability
Miller, Carl A.; Colbeck, Roger; Shi, Yaoyun
2018-02-01
If a measurement is made on one half of a bipartite system, then, conditioned on the outcome, the other half has a new reduced state. If these reduced states defy classical explanation—that is, if shared randomness cannot produce these reduced states for all possible measurements—the bipartite state is said to be steerable. Determining which states are steerable is a challenging problem even for low dimensions. In the case of two-qubit systems, a criterion is known for T-states (that is, those with maximally mixed marginals) under projective measurements. In the current work, we introduce the concept of keyring models—a special class of local hidden state models. When the measurements made correspond to real projectors, these allow us to study steerability beyond T-states. Using keyring models, we completely solve the steering problem for real projective measurements when the state arises from mixing a pure two-qubit state with uniform noise. We also give a partial solution in the case when the uniform noise is replaced by independent depolarizing channels.
Mathematical Modeling in Mathematics Education: Basic Concepts and Approaches
Erbas, Ayhan Kürsat; Kertil, Mahmut; Çetinkaya, Bülent; Çakiroglu, Erdinç; Alacaci, Cengiz; Bas, Sinem
2014-01-01
Mathematical modeling and its role in mathematics education have been receiving increasing attention in Turkey, as in many other countries. The growing body of literature on this topic reveals a variety of approaches to mathematical modeling and related concepts, along with differing perspectives on the use of mathematical modeling in teaching and…
A behavioral approach to linear exact modeling
Antoulas, A.C.; Willems, J.C.
1993-01-01
The behavioral approach to system theory provides a parameter-free framework for the study of the general problem of linear exact modeling and recursive modeling. The main contribution of this paper is the solution of the (continuous-time) polynomial-exponential time series modeling problem. Both
A modular approach to numerical human body modeling
Forbes, P.A.; Griotto, G.; Rooij, L. van
2007-01-01
The choice of a human body model for a simulated automotive impact scenario must take into account both accurate model response and computational efficiency as key factors. This study presents a "modular numerical human body modeling" approach which allows the creation of a customized human body
Integrating desalination to reservoir operation to increase redundancy for more secure water supply
Bhushan, Rashi; Ng, Tze Ling
2016-08-01
We investigate the potential of integrating desalination into existing reservoir systems to mitigate supply uncertainty. Desalinated seawater and wastewater are relatively reliable but expensive. Water from natural resources like reservoirs is generally cheaper but climate sensitive. We propose combining the operation of a reservoir and seawater and wastewater desalination plants for an overall system that is less vulnerable to scarcity and uncertainty, while constraining total cost. The joint system is modeled as a multiobjective optimization problem with the double objectives of minimizing risk and vulnerability, subject to a minimum limit on resilience. The joint model is applied to two cases, one based on the climate and demands of a location in India and the other of a location in California. The results for the Indian case indicate that it is possible for the joint system to reduce risk and vulnerability to zero given a budget increase of 20-120% under current climate conditions and 30-150% under projected future conditions. For the Californian case, this would require budget increases of 20-80% and 30-140% under current and future conditions, respectively. Further, our analysis shows a two-way interaction between the reservoir and desalination plants where the optimal operation of the former is just as much affected by the latter as the latter by the former. This highlights the importance of an integrated management approach. This study contributes to a greater quantitative understanding of desalination as a redundancy measure for adapting water supply infrastructures for a future of greater scarcity and uncertainty.
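Risk, vulnerability and resilience objectives of this kind are commonly computed from a simulated supply/demand series (Hashimoto-style metrics); a minimal sketch with an illustrative series, not the study's actual model:

```python
def supply_metrics(supply, demand):
    """Hashimoto-style performance metrics over a time series:
    risk          = fraction of periods with a supply shortfall,
    vulnerability = largest single-period deficit,
    resilience    = probability a shortfall period is followed by recovery."""
    deficits = [max(0.0, d - s) for s, d in zip(supply, demand)]
    failures = [d > 0 for d in deficits]
    risk = sum(failures) / len(failures)
    vulnerability = max(deficits)
    recoveries = sum(1 for a, b in zip(failures, failures[1:]) if a and not b)
    resilience = recoveries / sum(failures) if any(failures) else 1.0
    return risk, vulnerability, resilience

# Illustrative monthly supply and constant demand (units arbitrary):
supply = [10, 8, 6, 10, 9, 5, 10, 10]
demand = [9, 9, 9, 9, 9, 9, 9, 9]
risk, vuln, res = supply_metrics(supply, demand)
print(risk, vuln, round(res, 3))  # 0.375 4 0.667
```

Adding a reliable (if costly) desalination stream to `supply` would drive `risk` and `vuln` toward zero, which is the trade-off the joint optimization explores.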
A Bayesian approach for quantification of model uncertainty
International Nuclear Information System (INIS)
Park, Inseok; Amarchinta, Hemanth K.; Grandhi, Ramana V.
2010-01-01
In most engineering problems, more than one model can be created to represent an engineering system's behavior. Uncertainty is inevitably involved in selecting the best model from among the models that are possible. Uncertainty in model selection cannot be ignored, especially when the differences between the predictions of competing models are significant. In this research, a methodology is proposed to quantify model uncertainty using measured differences between experimental data and model outcomes under a Bayesian statistical framework. The adjustment factor approach is used to propagate model uncertainty into prediction of a system response. A nonlinear vibration system is used to demonstrate the processes for implementing the adjustment factor approach. Finally, the methodology is applied on the engineering benefits of a laser peening process, and a confidence band for residual stresses is established to indicate the reliability of model prediction.
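The Bayesian core of such a methodology is the posterior model probability; a minimal sketch of posterior weighting of competing model predictions (the likelihoods are illustrative, and a simple weighted average stands in here for the paper's full adjustment factor approach):

```python
def posterior_model_probs(priors, likelihoods):
    """Bayes' rule over candidate models: P(M_i | D) is proportional to
    P(D | M_i) * P(M_i), normalized over all candidate models."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

def combined_prediction(posteriors, predictions):
    """Posterior-weighted average of the competing models' predictions."""
    return sum(w * y for w, y in zip(posteriors, predictions))

# Three candidate models, equal priors, hypothetical data likelihoods:
priors = [1 / 3, 1 / 3, 1 / 3]
likelihoods = [0.6, 0.3, 0.1]
post = posterior_model_probs(priors, likelihoods)
print([round(p, 2) for p in post])  # [0.6, 0.3, 0.1]
print(round(combined_prediction(post, [100.0, 110.0, 130.0]), 2))  # 106.0
```

The spread of the individual predictions around the combined value is what the adjustment factor approach propagates into a confidence band on the system response.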
Common mode failures in redundancy systems
International Nuclear Information System (INIS)
Watson, I.A.; Edwards, G.T.
1978-01-01
Difficulties are experienced in assessing the impact of common mode failures on the reliability of safety systems. The paper first covers the investigation, definition and classification of CMF based on an extensive study of the nature of CMF. This is used as a basis for analysing data from nuclear reactor safety systems and aircraft systems. Design and maintenance errors are shown to be the predominant cause of CMF. The analysis has laid the grounds for work on relating CMF modelling and defences. (author)
A Networks Approach to Modeling Enzymatic Reactions.
Imhof, P
2016-01-01
Modeling enzymatic reactions is a demanding task due to the complexity of the system, the many degrees of freedom involved and the complex, chemical, and conformational transitions associated with the reaction. Consequently, enzymatic reactions are not determined by precisely one reaction pathway. Hence, it is beneficial to obtain a comprehensive picture of possible reaction paths and competing mechanisms. By combining individually generated intermediate states and chemical transition steps a network of such pathways can be constructed. Transition networks are a discretized representation of a potential energy landscape consisting of a multitude of reaction pathways connecting the end states of the reaction. The graph structure of the network allows an easy identification of the energetically most favorable pathways as well as a number of alternative routes. © 2016 Elsevier Inc. All rights reserved.
Carbonate rock depositional models: A microfacies approach
Energy Technology Data Exchange (ETDEWEB)
Carozzi, A.V.
1988-01-01
Carbonate rocks contain more than 50% by weight carbonate minerals such as calcite, dolomite, and siderite. Understanding how these rocks form can lead to more efficient methods of petroleum exploration. Microfacies analysis techniques can be used as a method of predicting models of sedimentation for carbonate rocks. Microfacies in carbonate rocks can be seen clearly only in thin sections under a microscope. Thin-section analysis of carbonate rocks is a tool that can be used to understand depositional environments, diagenetic evolution of carbonate rocks, and the formation of porosity and permeability in carbonate rocks. The use of microfacies analysis techniques is applied to understanding the origin and formation of carbonate ramps, carbonate platforms, and carbonate slopes and basins. This book will be of interest to students and professionals concerned with the disciplines of sedimentary petrology, sedimentology, petroleum geology, and paleontology.
A heuristic for solving the redundancy allocation problem for multi-state series-parallel systems
International Nuclear Information System (INIS)
Ramirez-Marquez, Jose E.; Coit, David W.
2004-01-01
The redundancy allocation problem is formulated with the objective of minimizing design cost, when the system exhibits a multi-state reliability behavior, given system-level performance constraints. When the multi-state nature of the system is considered, traditional solution methodologies are no longer valid. This study considers a multi-state series-parallel system (MSPS) with capacitated binary components that can provide different multi-state system performance levels. The different demand levels, which must be supplied during the system-operating period, result in the multi-state nature of the system. The new solution methodology offers several distinct benefits compared to traditional formulations of the MSPS redundancy allocation problem. For some systems, recognizing that different component versions yield different system performance is critical so that the overall system reliability estimation and associated design models the true system reliability behavior more realistically. The MSPS design problem, solved in this study, has been previously analyzed using genetic algorithms (GAs) and the universal generating function. The specific problem being addressed is one where there are multiple component choices, but once a component selection is made, only the same component type can be used to provide redundancy. This is the first time that the MSPS design problem has been addressed without using GAs. The heuristic offers more efficient and straightforward analyses. Solutions to three different problem types are obtained illustrating the simplicity and ease of application of the heuristic without compromising the intended optimization needs
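The paper's heuristic itself is not reproduced here, but a much simpler greedy sketch conveys the flavor of redundancy allocation under a single component type per subsystem; binary-state components and all numbers below are illustrative simplifications of the multi-state formulation:

```python
def system_reliability(subsys):
    """Series system of parallel subsystems; each subsystem is a list of
    independent component reliabilities (binary-state simplification)."""
    r = 1.0
    for comps in subsys:
        unavail = 1.0
        for p in comps:
            unavail *= (1.0 - p)
        r *= (1.0 - unavail)
    return r

def greedy_allocate(choices, target):
    """Greedy illustration (not the paper's heuristic): repeatedly add the
    component whose extra redundancy gives the best reliability gain per
    unit cost, until the target reliability is met.
    choices[i] = (reliability, cost) of the single component type allowed
    in subsystem i, matching the one-type-per-subsystem constraint above."""
    subsys = [[p] for p, _ in choices]      # start with one component each
    cost = sum(c for _, c in choices)
    while system_reliability(subsys) < target:
        base = system_reliability(subsys)
        best_i, best_gain = None, -1.0
        for i, (p, c) in enumerate(choices):
            subsys[i].append(p)             # tentatively add one more
            gain = (system_reliability(subsys) - base) / c
            subsys[i].pop()
            if gain > best_gain:
                best_i, best_gain = i, gain
        subsys[best_i].append(choices[best_i][0])
        cost += choices[best_i][1]
    return subsys, cost

# Two subsystems with hypothetical (reliability, cost) component types:
alloc, total_cost = greedy_allocate([(0.9, 5.0), (0.8, 3.0)], target=0.99)
print([len(s) for s in alloc], total_cost)
```

A real MSPS formulation would replace `system_reliability` with a multi-state evaluation (e.g. via the universal generating function) against the demand distribution.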
Chelli, Ali
2014-11-01
In this paper, we study the performance of hybrid automatic repeat request (HARQ) with incremental redundancy over double Rayleigh channels, a common model for the fading amplitude of vehicle-to-vehicle communication systems. We investigate the performance of HARQ from an information theoretic perspective. Analytical expressions are derived for the ε-outage capacity, the average number of transmissions, and the average transmission rate of HARQ with incremental redundancy assuming a maximum number of HARQ rounds. Moreover, we evaluate the delay experienced by Poisson arriving packets for HARQ with incremental redundancy. We provide analytical expressions for the expected waiting time, the packet's sojourn time in the queue, the average consumed power, and the energy efficiency. In our study, the communication rate per HARQ round is adjusted to the average signal-to-noise ratio (SNR) such that a target outage probability is not exceeded. This setting conforms with communication systems in which a quality of service is expected regardless of the channel conditions. Our analysis underscores the importance of HARQ in improving the spectral efficiency and reliability of communication systems. We demonstrate as well that the explored HARQ scheme achieves full diversity. Additionally, we investigate the tradeoff between energy efficiency and spectral efficiency.
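The average-transmission quantities above are derived analytically in the paper; as a rough numerical counterpart, a Monte Carlo sketch can estimate the average number of HARQ rounds with incremental redundancy over a double Rayleigh channel (the rate, SNR and round limit below are assumed values, not the paper's):

```python
import math
import random

def avg_harq_transmissions(rate, snr, max_rounds, trials=20000, seed=1):
    """Monte Carlo estimate of the average number of HARQ-IR rounds:
    each round accumulates mutual information log2(1 + snr * g), where the
    double Rayleigh power gain g = |h1|^2 * |h2|^2 is the product of two
    independent Exp(1) variables; decoding succeeds once the accumulated
    information reaches `rate` (bits/s/Hz)."""
    rng = random.Random(seed)
    total_rounds = 0
    for _ in range(trials):
        acc, rounds = 0.0, 0
        while rounds < max_rounds:
            rounds += 1
            g = rng.expovariate(1.0) * rng.expovariate(1.0)
            acc += math.log2(1.0 + snr * g)
            if acc >= rate:
                break
        total_rounds += rounds
    return total_rounds / trials

# Assumed parameters: target rate 4 bits/s/Hz, average SNR of 10 (linear),
# at most 4 HARQ rounds:
print(avg_harq_transmissions(rate=4.0, snr=10.0, max_rounds=4))
```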
Risk prediction model: Statistical and artificial neural network approach
Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim
2017-04-01
Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach to, and the development and validation process of, risk prediction models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, the artificial neural network approach to developing prediction models was more accurate than the statistical approach. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.
Biogeographical disparity in the functional diversity and redundancy of corals.
McWilliam, Mike; Hoogenboom, Mia O; Baird, Andrew H; Kuo, Chao-Yang; Madin, Joshua S; Hughes, Terry P
2018-03-20
Corals are major contributors to a range of key ecosystem functions on tropical reefs, including calcification, photosynthesis, nutrient cycling, and the provision of habitat structure. The abundance of corals is declining at multiple scales, and the species composition of assemblages is responding to escalating human pressures, including anthropogenic global warming. An urgent challenge is to understand the functional consequences of these shifts in abundance and composition in different biogeographical contexts. While global patterns of coral species richness are well known, the biogeography of coral functions in provinces and domains with high and low redundancy is poorly understood. Here, we quantify the functional traits of all currently recognized zooxanthellate coral species (n = 821) in both the Indo-Pacific and Atlantic domains to examine the relationships between species richness and the diversity and redundancy of functional trait space. We find that trait diversity is remarkably conserved (>75% of the global total) along latitudinal and longitudinal gradients in species richness, falling away only in species-poor provinces (n < 200), such as the Persian Gulf (52% of the global total), Hawaii (37%), the Caribbean (26%), and the East-Pacific (20%), where redundancy is also diminished. In the more species-poor provinces, large and ecologically important areas of trait space are empty, or occupied by just a few, highly distinctive species. These striking biogeographical differences in redundancy could affect the resilience of critical reef functions and highlight the vulnerability of relatively depauperate, peripheral locations, which are often a low priority for targeted conservation efforts.
Inhibition and Language Pragmatic View in Redundant Data Problem Solving
Setti, Annalisa; Caramelli, Nicoletta
2007-01-01
The present study concerns redundant data problems, defined as problems in which irrelevant data is provided. This type of problem provides a misleading context [Pascual-Leone, J. (1987). Organismic process for neo-Piagetian theories: A dialectical causal account of cognitive development. "International Journal of Psychology," 22, 531-570] similar…
Redundancy scheme for multi-layered accelerator control system
International Nuclear Information System (INIS)
Chauhan, Amit; Fatnani, Pravin
2009-01-01
The control system for the SRS Indus-2 has a three-layered architecture. There are VMEbus-based stations at the lower two layers, each controlled by its own CPU board. The 'Profibus' fieldbus standard is used for communication between these VME stations distributed in the field, with a Profibus controller board at each station implementing the communication protocol. The mode of communication is of the master-slave (command-response) type. This paper proposes a scheme to implement redundancy at the lower two layers, namely Layer-2 (Supervisory Layer / Profibus master) and Layer-3 (Equipment Unit Interface Layer / Profibus slave). The redundancy covers both the CPU and the communication board. The scheme uses two CPU boards and two Profibus controller boards at each L-3 station. This helps decrease any downtime resulting from faults in either the CPU or the communication boards placed in the field area. Redundancy of the Profibus boards provides two active communication channels between the stations that can be used in different ways, thereby increasing the availability of the communication link. Redundancy of the CPU boards provides a certain level of automatic fault recovery: one CPU remains active while the other remains in standby mode and takes over control of the VMEbus in case of any fault in the main CPU. (author)
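The active/standby CPU arrangement described above can be illustrated with a minimal heartbeat-based failover sketch. All names and timing values here are illustrative assumptions, not the Indus-2 implementation:

```python
import time

class RedundantController:
    """Sketch of an active/standby CPU pair: the standby takes over
    if the active CPU stops sending heartbeats within a timeout."""

    def __init__(self, heartbeat_timeout=0.5):
        self.active = "CPU-A"
        self.standby = "CPU-B"
        self.heartbeat_timeout = heartbeat_timeout
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        # Called periodically by the active CPU to signal it is healthy.
        self.last_heartbeat = time.monotonic()

    def check_failover(self):
        # Standby assumes control of the bus if the active CPU is silent
        # for longer than the timeout; returns True when a switch occurred.
        if time.monotonic() - self.last_heartbeat > self.heartbeat_timeout:
            self.active, self.standby = self.standby, self.active
            self.last_heartbeat = time.monotonic()
            return True
        return False
```

A real system would arbitrate VMEbus mastership in hardware; this sketch only captures the detection-and-switch logic.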
Mutual information and redundancy in spontaneous communication between cortical neurons.
Szczepanski, J; Arnold, M; Wajnryb, E; Amigó, J M; Sanchez-Vives, M V
2011-03-01
An important question in neural information processing is how neurons cooperate to transmit information. To study this question, we resort to the concept of redundancy in the information transmitted by a group of neurons and, at the same time, we introduce a novel concept for measuring cooperation between pairs of neurons called relative mutual information (RMI). Specifically, we studied these two parameters for spike trains generated by neighboring neurons from the primary visual cortex in the awake, freely moving rat. The spike trains studied here were spontaneously generated in the cortical network, in the absence of visual stimulation. Under these conditions, our analysis revealed that while the value of RMI oscillated slightly around an average value, the redundancy exhibited considerably higher variability. We conjecture that this combination of approximately constant RMI and more variable redundancy makes information transmission more resistant to noise disturbances. Furthermore, the redundancy values suggest that neurons can cooperate in a flexible way during information transmission. This mostly occurs via a leading neuron with a higher transmission rate or, less frequently, through the information rate of the whole group being higher than the sum of the individual information rates, in other words in a synergetic manner. The proposed method applies not only to stationary, but also to locally stationary neural signals.
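The mutual information underlying such analyses can be estimated with a textbook plug-in (histogram) estimator on binned spike trains. This is a generic sketch of the quantity, not the paper's RMI definition:

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits for two binary sequences
    (e.g. binned spike trains: 1 = spike in bin, 0 = no spike)."""
    joint = np.zeros((2, 2))
    for a, b in zip(x, y):
        joint[a, b] += 1
    joint /= joint.sum()          # empirical joint distribution
    px = joint.sum(axis=1)        # marginal of X
    py = joint.sum(axis=0)        # marginal of Y
    mi = 0.0
    for i in range(2):
        for j in range(2):
            if joint[i, j] > 0:
                mi += joint[i, j] * np.log2(joint[i, j] / (px[i] * py[j]))
    return mi
```

Identical trains yield I(X;Y) equal to the entropy of one train; independent trains yield zero (up to estimation bias for short sequences).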
The bliss (not the problem) of motor abundance (not redundancy).
Latash, Mark L
2012-03-01
Motor control is an area of natural science exploring how the nervous system interacts with other body parts and the environment to produce purposeful, coordinated actions. A central problem of motor control-the problem of motor redundancy-was formulated by Nikolai Bernstein as the problem of elimination of redundant degrees-of-freedom. Traditionally, this problem has been addressed using optimization methods based on a variety of cost functions. This review draws attention to a body of recent findings suggesting that the problem has been formulated incorrectly. An alternative view has been suggested as the principle of abundance, which considers the apparently redundant degrees-of-freedom as useful and even vital for many aspects of motor behavior. Over the past 10 years, dozens of publications have provided support for this view based on the ideas of synergic control, computational apparatus of the uncontrolled manifold hypothesis, and the equilibrium-point (referent configuration) hypothesis. In particular, large amounts of "good variance"-variance in the space of elements that has no effect on the overall performance-have been documented across a variety of natural actions. "Good variance" helps an abundant system to deal with secondary tasks and unexpected perturbations; its amount shows adaptive modulation across a variety of conditions. These data support the view that there is no problem of motor redundancy; there is bliss of motor abundance.
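The "good variance" idea from the uncontrolled manifold (UCM) hypothesis can be sketched for the simplest case: two finger forces whose sum is the performance variable. Variance along the direction that leaves the sum unchanged is "good"; variance that alters the sum is "bad". The two-element setup is an illustrative assumption:

```python
import numpy as np

def ucm_variance(forces):
    """Partition trial-to-trial variance of two finger forces into
    variance along the uncontrolled manifold (total force unchanged)
    and variance orthogonal to it (total force affected).
    forces: array of shape (n_trials, 2)."""
    dev = forces - forces.mean(axis=0)
    ucm_dir = np.array([1.0, -1.0]) / np.sqrt(2)  # sum-preserving direction
    ort_dir = np.array([1.0, 1.0]) / np.sqrt(2)   # sum-changing direction
    v_ucm = np.mean((dev @ ucm_dir) ** 2)
    v_ort = np.mean((dev @ ort_dir) ** 2)
    return v_ucm, v_ort
```

Trials with a perfectly stabilized total force show all their variance in the UCM component, the signature of an abundant, synergically controlled system.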
Virtual Modular Redundancy of Processor Module in the PLC
International Nuclear Information System (INIS)
Lee, Kwang-Il; Hwang, SungJae; Yoon, DongHwa
2016-01-01
Dual Modular Redundancy (DMR), in which components of a system are duplicated so that one component can take over should the other fault or fail, is mainly used to implement these safety control systems. This feature provides high availability and strong fault tolerance, and the zero downtime required for nuclear power plants; multiply redundant systems for nuclear plants have therefore been commercialized. In this paper, we propose Virtual Modular Redundancy (VMR), rather than physical triplication of the Programmable Logic Controller (PLC) processor module, to ensure the reliability of the nuclear power plant control system. The VMR implementation minimizes design changes so that commercially available redundant systems can continue to be used. The purpose of VMR is also to improve efficiency and reliability in many respects, such as fault tolerance, fail-safety and cost. VMR guarantees a wide range of reliable fault recovery and fault tolerance, and failures are prevented before the continuous failure of two modules can cause great damage. The reliable communication channel is slow and has a small bandwidth, which is a significant limitation in a safety control system; however, the aim of VMR is to avoid nuclear power plants being shut down by fail-safe trips, not to serve general-purpose use. In practice, considerable research and trial and error are expected before VMR can be adapted to nuclear regulations and standards
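The difference between dual and triple redundancy can be stated in a few lines: a duplicated pair can only detect disagreement, while a third module (physical or, as proposed here, virtual) allows majority voting to mask a single fault. A minimal sketch, not the paper's PLC implementation:

```python
def dmr_compare(a, b):
    """Dual modular redundancy: detects a mismatch between two module
    outputs but cannot tell which module is faulty."""
    return a == b

def tmr_vote(a, b, c):
    """With a third module, two-out-of-three majority voting both
    detects and masks a single faulty output."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("no majority: more than one module faulted")
```

This is why a virtual third replica can upgrade a DMR system from fail-detect to fail-operational for single faults, at the cost of the extra (here virtualized) computation.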
Testing the significance of canonical axes in redundancy analysis
Legendre, P.; Oksanen, J.; Braak, ter C.J.F.
2011-01-01
1. Tests of significance of the individual canonical axes in redundancy analysis allow researchers to determine which of the axes represent variation that can be distinguished from random. Variation along the significant axes can be mapped, used to draw biplots or interpreted through subsequent
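The kind of test discussed above can be sketched as a permutation test on the leading canonical eigenvalue of a redundancy analysis (Y regressed on X). This is a simplified whole-model version under stated assumptions (row permutation of Y, plug-in eigenvalue statistic), not the axis-by-axis procedures the paper compares:

```python
import numpy as np

rng = np.random.default_rng(0)

def first_axis_eigenvalue(Y, X):
    """Leading canonical eigenvalue of an RDA: regress centered Y on
    centered X, then take the largest eigenvalue of the fitted values'
    cross-product matrix."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    B, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)
    F = Xc @ B                       # fitted (constrained) values
    return np.linalg.eigvalsh(F.T @ F)[-1]

def permutation_pvalue(Y, X, n_perm=199):
    """P-value from randomly permuting the rows of Y, breaking any
    Y-X association while preserving the marginal structure."""
    obs = first_axis_eigenvalue(Y, X)
    hits = sum(first_axis_eigenvalue(Y[rng.permutation(len(Y))], X) >= obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)
```

With strongly related Y and X, the observed eigenvalue exceeds essentially all permuted values and the p-value approaches its minimum of 1/(n_perm + 1).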
Network Gateway Technology: The Issue of Redundancy towards ...
African Journals Online (AJOL)
The Internet has provided advancement in the areas of network and networking facilities. Everyone connected to the Internet is concerned about two basic things: the availability of network services and the speed of the network. Network gateway redundancy technology falls within these categories and happens to be one of ...
The Evolution of Functionally Redundant Species; Evidence from Beetles
Scheffer, M.; Vergnon, R.O.H.; Nes, van E.H.; Cuppen, J.G.M.; Peeters, E.T.H.M.; Leijs, R.; Nilsson, A.N.
2015-01-01
While species fulfill many different roles in ecosystems, it has been suggested that numerous species might actually share the same function in a near-neutral way. So far, however, it is unclear whether such functional redundancy really exists. We scrutinize this question using extensive data on the
Simulator of a fail detector system for redundant sensors
International Nuclear Information System (INIS)
Assumpcao Filho, E.O.; Nakata, H.
1990-01-01
A failure detection and isolation (FDI) system simulation program has been developed for IBM-PC microcomputers. The program, based on the sequential likelihood ratio testing method developed by A. Wald, was implemented with the Monte Carlo technique. The calculated failure detection rate compared favorably against wind-tunnel experiments with redundant temperature sensors. (author)
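Wald's sequential test, on which the simulator is based, accumulates a log-likelihood ratio sample by sample and stops as soon as it crosses one of two thresholds. A minimal sketch for a Gaussian mean-shift sensor-failure model; the means, variance, and error rates are illustrative assumptions:

```python
import math

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test on a sensor residual:
    H0 (healthy, mean mu0) vs H1 (failed, mean mu1), with Wald's
    threshold approximations from the error rates alpha and beta.
    Returns (decision, number of samples used)."""
    upper = math.log((1 - beta) / alpha)   # accept H1 above this
    lower = math.log(beta / (1 - alpha))   # accept H0 below this
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # Gaussian log-likelihood ratio increment for one sample
        llr += ((mu1 - mu0) / sigma**2) * (x - (mu0 + mu1) / 2)
        if llr >= upper:
            return "failed", n
        if llr <= lower:
            return "healthy", n
    return "undecided", len(samples)
```

The appeal for redundant-sensor FDI is that the test decides as early as the evidence allows rather than after a fixed window.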
75 FR 65238 - Loan Guaranty: Elimination of Redundant Regulations; Correction
2010-10-22
... DEPARTMENT OF VETERANS AFFAIRS 38 CFR Part 36 RIN 2900-AN71 Loan Guaranty: Elimination of... June 15, 2010 (75 FR 33704), amending its loan guaranty regulations to eliminate redundant regulations... INFORMATION CONTACT: William White, Acting Assistant Director for Loan Processing and Valuation (262...
The redundant target effect is affected by modality switch costs
DEFF Research Database (Denmark)
Gondan, Matthias; Lange, K.; Rösler, F.
2004-01-01
When participants have to respond to stimuli of two modalities, faster reaction times are observed for simultaneous, bimodal events than for unimodal events (the redundant target effect [RTE]). This finding has been interpreted as reflecting processing gains for bimodal relative to unimodal stimu...
Mediated Instruction and Redundancy Remediation in Sciences in ...
African Journals Online (AJOL)
The data were analyzed using t-test statistics. Data analysis revealed that the use of mediated instruction significantly removed redundancy for science students, and that it also influenced the academic achievement of science students in secondary schools. Some of the recommendations include that science ...
A redundancy-removing feature selection algorithm for nominal data
Directory of Open Access Journals (Sweden)
Zhihua Li
2015-10-01
No order correlation or similarity metric exists in nominal data, and there will always be more redundancy in a nominal dataset, which means that an efficient mutual information-based feature selection method for nominal data is relatively difficult to find. In this paper, a nominal-data feature selection method based on mutual information without data transformation, called the redundancy-removing more-relevance less-redundancy algorithm, is proposed. By forming several new information-related definitions and the corresponding computational methods, the proposed method can compute the information-related quantities of nominal data directly. Furthermore, by creating a new evaluation function that considers both relevance and redundancy globally, the new feature selection method can evaluate the importance of each nominal-data feature. Although the presented feature selection method takes commonly used MIFS-like forms, it is capable of handling high-dimensional datasets without expensive computations. We perform extensive experimental comparisons of the proposed algorithm and other methods using three benchmark nominal datasets with two different classifiers. The experimental results demonstrate the average advantage of the presented algorithm over the well-known NMIFS algorithm in terms of feature selection and classification accuracy, which indicates that the proposed method has promising performance.
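The "more relevance, less redundancy" idea behind MIFS-like methods can be sketched as a greedy loop: at each step pick the feature whose mutual information with the target, minus its mean mutual information with already-chosen features, is largest. This is a generic sketch of the family, not the paper's exact evaluation function:

```python
import numpy as np

def mi(a, b):
    """Plug-in mutual information (nats) between two nominal arrays."""
    a, b = np.asarray(a), np.asarray(b)
    total = 0.0
    for va in set(a.tolist()):
        for vb in set(b.tolist()):
            pab = np.mean((a == va) & (b == vb))
            if pab > 0:
                total += pab * np.log(pab / (np.mean(a == va) * np.mean(b == vb)))
    return total

def mrmr_select(features, target, k):
    """Greedy selection: maximize relevance to the target minus mean
    redundancy with the features already chosen."""
    chosen, remaining = [], list(range(len(features)))
    while len(chosen) < k and remaining:
        def score(i):
            red = (np.mean([mi(features[i], features[j]) for j in chosen])
                   if chosen else 0.0)
            return mi(features[i], target) - red
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

An exact duplicate of a chosen feature scores zero (its redundancy cancels its relevance), so the loop prefers an informative but non-redundant feature next.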
Network Gateway Technology: The Issue of Redundancy towards ...
African Journals Online (AJOL)
Everyone connected to the Internet is concerned about two basic things: the availability of network services and the speed of the network. Network gateway redundancy technology falls within these categories and happens to be one of the newest technologies which only few companies, such as mobile companies and ...
Redundancy Effect on Retention of Vocabulary Words Using Multimedia Presentation
Samur, Yavuz
2012-01-01
This study was designed to examine the effect of the redundancy principle in a multimedia presentation constructed for foreign language vocabulary learning on undergraduate students' retention. The underlying hypothesis of this study is that when the students are exposed to the material in multiple ways through animation, concurrent narration,…
The conservation of redundancy in genetic systems: effects of sexual ...
Indian Academy of Sciences (India)
Unknown
probability of a fatal error is reduced by a redundant sampling system, but the chance of error rises as the sys- ... Thus every function is covered three times and every component covers a ... ples reduce the variance of the distributions in figure 3a and thereby ... cases p = 0.1, q = 1000 and the value of b is adjusted to ...
A dual model approach to ground water recovery trench design
International Nuclear Information System (INIS)
Clodfelter, C.L.; Crouch, M.S.
1992-01-01
The design of trenches for contaminated ground water recovery must consider several variables. This paper presents a dual-model approach for effectively recovering contaminated ground water migrating toward a trench by advection. The approach involves an analytical model to determine the vertical influence of the trench and a numerical flow model to determine the capture zone within the trench and the surrounding aquifer. The analytical model is utilized by varying trench dimensions and head values to design a trench which meets the remediation criteria. The numerical flow model is utilized to select the type of backfill and location of sumps within the trench. The dual-model approach can be used to design a recovery trench which effectively captures advective migration of contaminants in the vertical and horizontal planes
Virtuous organization: A structural equation modeling approach
Directory of Open Access Journals (Sweden)
Majid Zamahani
2013-02-01
For years, the idea of virtue was unfavorable among researchers: virtues were traditionally considered culture-specific and relativistic, and they were supposed to be associated with social conservatism, religious or moral dogmatism, and scientific irrelevance. Virtue and virtuousness have only recently been considered seriously by organizational researchers. The study in this paper examines the relationships between leadership, organizational culture, human resources, structure and processes, care for community, and the virtuous organization. Structural equation modeling is employed to investigate the effects of each variable on the other components. The data used in this study consist of questionnaire responses from employees of Payam e Noor University in Yazd province. A total of 250 questionnaires were sent out and 211 valid responses were received. Our results reveal that all five variables have positive and significant impacts on the virtuous organization. Among the five variables, organizational culture has the largest direct impact (0.80) and human resources the largest total impact (0.844) on the virtuous organization.
A systemic approach for modeling soil functions
Vogel, Hans-Jörg; Bartke, Stephan; Daedlow, Katrin; Helming, Katharina; Kögel-Knabner, Ingrid; Lang, Birgit; Rabot, Eva; Russell, David; Stößel, Bastian; Weller, Ulrich; Wiesmeier, Martin; Wollschläger, Ute
2018-03-01
The central importance of soil for the functioning of terrestrial systems is increasingly recognized. Critically relevant for water quality, climate control, nutrient cycling and biodiversity, soil provides more functions than just the basis for agricultural production. Nowadays, soil is increasingly under pressure as a limited resource for the production of food, energy and raw materials. This has led to an increasing demand for concepts assessing soil functions so that they can be adequately considered in decision-making aimed at sustainable soil management. The various soil science disciplines have progressively developed highly sophisticated methods to explore the multitude of physical, chemical and biological processes in soil. It is not obvious, however, how the steadily improving insight into soil processes may contribute to the evaluation of soil functions. Here, we present a new systemic modeling framework that allows for a consistent coupling between reductionist yet observable indicators for soil functions and detailed process understanding. It is based on the mechanistic relationships between soil functional attributes, each explained by a network of interacting processes as derived from scientific evidence. The non-linear character of these interactions produces stability and resilience of soil with respect to functional characteristics. We anticipate that this new conceptual framework will integrate the various soil science disciplines and help identify important future research questions at the interface between disciplines. It allows the overwhelming complexity of soil systems to be adequately coped with and paves the way for steadily improving our capability to assess soil functions based on scientific understanding.
Modeling of phase equilibria with CPA using the homomorph approach
DEFF Research Database (Denmark)
Breil, Martin Peter; Tsivintzelis, Ioannis; Kontogeorgis, Georgios
2011-01-01
For association models, like CPA and SAFT, a classical approach is often used for estimating pure-compound and mixture parameters. According to this approach, the pure-compound parameters are estimated from vapor pressure and liquid density data. Then, the binary interaction parameters, kij, are ...
Modular Modelling and Simulation Approach - Applied to Refrigeration Systems
DEFF Research Database (Denmark)
Sørensen, Kresten Kjær; Stoustrup, Jakob
2008-01-01
This paper presents an approach to modelling and simulation of the thermal dynamics of a refrigeration system, specifically a reefer container. A modular approach is used and the objective is to increase the speed and flexibility of the developed simulation environment. The refrigeration system...
A Constructive Neural-Network Approach to Modeling Psychological Development
Shultz, Thomas R.
2012-01-01
This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…
The Intersystem Model of Psychotherapy: An Integrated Systems Treatment Approach
Weeks, Gerald R.; Cross, Chad L.
2004-01-01
This article introduces the intersystem model of psychotherapy and discusses its utility as a truly integrative and comprehensive approach. The foundation of this conceptually complex approach comes from dialectic metatheory; hence, its derivation requires an understanding of both foundational and integrational constructs. The article provides a…
Bystander Approaches: Empowering Students to Model Ethical Sexual Behavior
Lynch, Annette; Fleming, Wm. Michael
2005-01-01
Sexual violence on college campuses is well documented. Prevention education has emerged as an alternative to victim-- and perpetrator--oriented approaches used in the past. One sexual violence prevention education approach focuses on educating and empowering the bystander to become a point of ethical intervention. In this model, bystanders to…
Modelling road accidents: An approach using structural time series
Junus, Noor Wahida Md; Ismail, Mohd Tahir
2014-09-01
In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
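The building block of such structural time series models is the local level component, usually estimated with a Kalman filter. A minimal sketch of that component alone (the paper's best model adds a seasonal term, omitted here for brevity; the variance parameters are illustrative):

```python
import numpy as np

def local_level_filter(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e7):
    """Kalman filter for the local level model
        y_t = mu_t + eps_t,      eps_t ~ N(0, sigma_eps2)
        mu_{t+1} = mu_t + eta_t, eta_t ~ N(0, sigma_eta2)
    with a diffuse initial level (large p0). Returns filtered levels."""
    a, p = a0, p0
    filtered = []
    for yt in y:
        f = p + sigma_eps2            # prediction-error variance
        k = p / f                     # Kalman gain
        a = a + k * (yt - a)          # updated level estimate
        p = p * (1 - k) + sigma_eta2  # variance of next prediction
        filtered.append(a)
    return np.array(filtered)
```

Model selection as described above would then compare candidate specifications (level only, level + seasonal, ...) by AIC and prediction error variance.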
Numerical approaches to expansion process modeling
Directory of Open Access Journals (Sweden)
G. V. Alekseev
2017-01-01
Forage production is currently undergoing a period of intensive renovation and introduction of the most advanced technologies and equipment. Methods such as barley toasting; grain extrusion; steaming and flattening of grain; boiling bed explosion; infrared-ray treatment of cereals and legumes followed by flattening; and one-time or two-time granulation of purified whole grain without humidification in matrix presses, followed by grinding of the granules, are used more and more often. These methods require special apparatuses, machines and auxiliary equipment, created on the basis of mathematical models compiled by different methods. In roasting, simulating the heat fields arising in the working chamber provides conditions under which a portion of the starch decomposes to monosaccharides, which makes the grain sweetish, although protein denaturation somewhat decreases the digestibility of the protein and the availability of amino acids. Grain is roasted mainly for young animals in order to teach them to eat feed at an early age, stimulate the secretory activity of digestion, and better develop the masticatory muscles. In addition, the high temperature is detrimental to bacterial contamination and various types of fungi, which largely avoids possible diseases of the gastrointestinal tract. This method has found wide application directly on farms. Legumes such as peas, soy, lupine and lentils are also used in animal feeding. These feeds are preliminarily ground and then cooked for 1 hour or steamed for 30-40 minutes in the feed mill. Such processing allows the anti-nutrients that reduce the effectiveness of these feeds to be inactivated. After processing, legumes are used as protein supplements in an amount of 25-30% of the total nutritional value of the diet. However, it is recommended to cook and steam only grain of good quality. A poor-quality grain that has been stored for a long time and damaged by pathogenic micro flora is subject to
Modelling and Generating Ajax Applications : A Model-Driven Approach
Gharavi, V.; Mesbah, A.; Van Deursen, A.
2008-01-01
Preprint of paper published in: IWWOST 2008 - 7th International Workshop on Web-Oriented Software Technologies, 14-15 July 2008 AJAX is a promising and rapidly evolving approach for building highly interactive web applications. In AJAX, user interface components and the event-based interaction
Understanding Gulf War Illness: An Integrative Modeling Approach
2017-10-01
using a novel mathematical model. The computational biology approach will enable the consortium to quickly identify targets of dysfunction and find ... computer/mathematical paradigms for evaluation of treatment strategies ... develop pilot clinical trials on the basis of animal studies ... the goal of testing chemical treatments. The immune and autonomic biomarkers will be tested using a computational modeling approach allowing for a
A Structural Modeling Approach to a Multilevel Random Coefficients Model.
Rovine, Michael J.; Molenaar, Peter C. M.
2000-01-01
Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)
Data Analysis A Model Comparison Approach, Second Edition
Judd, Charles M; Ryan, Carey S
2008-01-01
This completely rewritten classic text features many new examples, insights and topics including mediational, categorical, and multilevel models. Substantially reorganized, this edition provides a briefer, more streamlined examination of data analysis. Noted for its model-comparison approach and unified framework based on the general linear model, the book provides readers with a greater understanding of a variety of statistical procedures. This consistent framework, including consistent vocabulary and notation, is used throughout to develop fewer but more powerful model building techniques. T
A novel approach to modeling and diagnosing the cardiovascular system
Energy Technology Data Exchange (ETDEWEB)
Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States); Allen, P.A. [Life Link, Richland, WA (United States)
1995-07-01
A novel approach to modeling and diagnosing the cardiovascular system is introduced. A model exhibits a subset of the dynamics of the cardiovascular behavior of an individual by using a recurrent artificial neural network. Potentially, a model will be incorporated into a cardiovascular diagnostic system. This approach is unique in that each cardiovascular model is developed from physiological measurements of an individual. Any differences between the modeled variables and the variables of an individual at a given time are used for diagnosis. This approach also exploits sensor fusion to optimize the utilization of biomedical sensors. The advantage of sensor fusion has been demonstrated in applications including control and diagnostics of mechanical and chemical processes.
Synthesis of industrial applications of local approach to fracture models
International Nuclear Information System (INIS)
Eripret, C.
1993-03-01
This report gathers different applications of local approach to fracture models in various industrial configurations, such as nuclear pressure vessel steel, cast duplex stainless steels, or primary circuit welds such as bimetallic welds. As soon as models are developed on the basis of microstructural observations, damage mechanism analyses, and the fracture process, the local approach to fracture proves able to solve problems where classical fracture mechanics concepts fail. Therefore, the local approach appears to be a powerful tool, which complements the standard fracture criteria used in the nuclear industry by exhibiting where and why those classical concepts become invalid. (author). 1 tab., 18 figs., 25 refs
An Adaptive Technique for a Redundant-Sensor Navigation System. Ph.D. Thesis
Chien, T. T.
1972-01-01
An on-line adaptive technique is developed to provide a self-contained redundant-sensor navigation system with a capability to utilize its full potential in reliability and performance. The gyro navigation system is modeled as a Gauss-Markov process, with degradation modes defined as changes in characteristics specified by parameters associated with the model. The adaptive system is formulated as a multistage stochastic process: (1) a detection system, (2) an identification system and (3) a compensation system. It is shown that the sufficient statistic for the partially observable process in the detection and identification systems is the posterior measure of the state of degradation, conditioned on the measurement history.
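A posterior measure of this kind is maintained by a recursive Bayes update: each new measurement reweights the probability of the degraded mode against the healthy mode. A minimal two-mode sketch with illustrative Gaussian likelihoods (the thesis's Gauss-Markov model is richer than this):

```python
import math

def update_posterior(prior, x, mu=(0.0, 1.0), sigma=1.0):
    """One Bayes step for the posterior probability of degradation:
    prior = P(degraded | past data), x = new measurement,
    mu = (healthy mean, degraded mean) of the residual."""
    def lik(m):
        # Gaussian likelihood up to a constant (cancels in the ratio)
        return math.exp(-((x - m) ** 2) / (2 * sigma**2))
    num = prior * lik(mu[1])
    den = num + (1 - prior) * lik(mu[0])
    return num / den
```

Fed a stream of measurements near the degraded mean, the posterior climbs toward 1, which is exactly the sufficient statistic the detection and identification stages act on.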
Mathematical models for therapeutic approaches to control HIV disease transmission
Roy, Priti Kumar
2015-01-01
The book discusses different therapeutic approaches, based on different mathematical models, to control HIV/AIDS disease transmission. It uses clinical data, collected from different cited sources, to formulate deterministic as well as stochastic mathematical models of HIV/AIDS. It provides complementary approaches, from deterministic and stochastic points of view, to optimal control strategy with perfect drug adherence, and also tries to view the same issue from different angles, ranging from various mathematical models to computer simulations. The book presents essential methods and techniques for students who are interested in designing epidemiological models of HIV/AIDS. It also guides research scientists working in the periphery of mathematical modeling, and helps them to explore a hypothetical method by examining its consequences in the form of mathematical modelling and making scientific predictions. The model equations, mathematical analysis and several numerical simulations that are...
A model-driven approach to information security compliance
Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena
2017-06-01
The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be approached holistically, combining assets that support corporate systems in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain-level model (computation independent model) based on the information security vocabulary present in the ISO/IEC 27001 standard. From this model, after embedding mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the basis for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.
A Model-Driven Approach for Telecommunications Network Services Definition
Chiprianov, Vanea; Kermarrec, Yvon; Alff, Patrick D.
The present-day telecommunications market imposes a short concept-to-market time on service providers. To reduce it, we propose a computer-aided, model-driven, service-specific tool with support for collaborative work and for checking properties on models. We started by defining a prototype of the Meta-model (MM) of the service domain. Using this prototype, we defined a simple graphical modeling language specific to service designers. We are currently enlarging the MM of the domain using model transformations from Network Abstraction Layers (NALs). In the future, we will investigate approaches to ensure support for collaborative work and for checking properties on models.
SEU-hardened silicon bipolar and GaAs MESFET SRAM cells using local redundancy techniques
International Nuclear Information System (INIS)
Hauser, J.R.
1992-01-01
Silicon bipolar and GaAs FET SRAMs have proven to be more difficult to harden with respect to single-event upset mechanisms than have silicon CMOS SRAMs. This is a fundamental property of bipolar and JFET or MESFET device technologies, which do not have a high-impedance, nonactive isolation between the control electrode and the current or voltage being controlled. All SEU circuit level hardening techniques applied at the local level must use some type of information storage redundancy so that information loss on one node due to an SEU event can be recovered from information stored elsewhere in the cell. In CMOS technologies, this can be achieved by the use of simple cross-coupling resistors, whereas in bipolar and FET technologies, no such simple approach is possible. Several approaches to the use of local redundancy in bipolar and FET technologies are discussed in this paper. At the expense of increased cell complexity and increased power consumption and write time, several approaches are capable of providing complete SEU hardness at the local cell level.
An approach for activity-based DEVS model specification
DEFF Research Database (Denmark)
Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram
2016-01-01
Creation of DEVS models has been advanced through Model Driven Architecture and its frameworks. The overarching role of the frameworks has been to help develop model specifications in a disciplined fashion. Frameworks can provide intermediary layers between the higher level mathematical models...... and their corresponding software specifications from both structural and behavioral aspects. Unlike structural modeling, developing models to specify behavior of systems is known to be harder and more complex, particularly when operations with non-trivial control schemes are required. In this paper, we propose specifying...... activity-based behavior modeling of parallel DEVS atomic models. We consider UML activities and actions as fundamental units of behavior modeling, especially in the presence of recent advances in the UML 2.5 specifications. We describe in detail how to approach activity modeling with a set of elemental...
Modelling diversity in building occupant behaviour: a novel statistical approach
DEFF Research Database (Denmark)
Haldi, Frédéric; Calì, Davide; Andersen, Rune Korsholm
2016-01-01
We propose an advanced modelling framework to predict the scope and effects of behavioural diversity regarding building occupant actions on window openings, shading devices and lighting. We develop a statistical approach based on generalised linear mixed models to account for the longitudinal nat...
Sensitivity analysis approaches applied to systems biology models.
Zi, Z
2011-11-01
With the rising application of systems biology, sensitivity analysis methods have been widely applied to study the biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust the biological responses are with respect to the changes of biological parameters and which model inputs are the key factors that affect the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis that are commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models and the caveats in the interpretation of sensitivity analysis results.
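The local/global distinction described above can be illustrated with a toy kinetic model. The sketch below is not from the review: the Michaelis-Menten example and all names are chosen for illustration only. It computes a normalised local sensitivity coefficient by a finite difference and a crude variance-based global measure by random sampling:

```python
import random

def model(vmax, km, s=2.0):
    """Toy Michaelis-Menten rate law: v = vmax * s / (km + s)."""
    return vmax * s / (km + s)

def local_sensitivity(f, params, name, rel_step=1e-6):
    """Normalised local sensitivity d ln f / d ln p via a finite difference."""
    base = f(**params)
    bumped = dict(params)
    bumped[name] = params[name] * (1 + rel_step)
    return (f(**bumped) - base) / (base * rel_step)

def global_sensitivity(f, name, low, high, fixed, n=2000, seed=1):
    """Crude variance-based global measure: output variance when `name`
    varies uniformly over [low, high] with the other parameters fixed."""
    rng = random.Random(seed)
    outs = [f(**{**fixed, name: rng.uniform(low, high)}) for _ in range(n)]
    mean = sum(outs) / n
    return sum((o - mean) ** 2 for o in outs) / n

params = {"vmax": 1.0, "km": 0.5}
s_vmax = local_sensitivity(model, params, "vmax")   # analytically exactly 1
s_km = local_sensitivity(model, params, "km")       # analytically -km/(km+s) = -0.2
```

Full global approaches would instead vary all parameters jointly (e.g. Latin hypercube or Sobol designs) rather than one at a time as in this sketch.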
A qualitative evaluation approach for energy system modelling frameworks
DEFF Research Database (Denmark)
Wiese, Frauke; Hilpert, Simon; Kaldemeyer, Cord
2018-01-01
properties define how useful it is in regard to the existing challenges. For energy system models, evaluation methods exist, but we argue that many decisions upon properties are rather made on the model generator or framework level. Thus, this paper presents a qualitative approach to evaluate frameworks...
Modeling Alaska boreal forests with a controlled trend surface approach
Mo Zhou; Jingjing Liang
2012-01-01
A Controlled Trend Surface approach was proposed to simultaneously take into consideration large-scale spatial trends and nonspatial effects. A geospatial model of the Alaska boreal forest was developed from 446 permanent sample plots, which addressed large-scale spatial trends in recruitment, diameter growth, and mortality. The model was tested on two sets of...
Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling
Kayastha, N.
2014-01-01
Due to the complexity of hydrological systems a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of
Towards modeling future energy infrastructures - the ELECTRA system engineering approach
DEFF Research Database (Denmark)
Uslar, Mathias; Heussen, Kai
2016-01-01
of the IEC 62559 use case template as well as needed changes to cope particularly with the aspects of controller conflicts and Greenfield technology modeling. From the original envisioned use of the standards, we show a possible transfer on how to properly deal with a Greenfield approach when modeling....
A Model-Driven Approach to e-Course Management
Savic, Goran; Segedinac, Milan; Milenkovic, Dušica; Hrin, Tamara; Segedinac, Mirjana
2018-01-01
This paper presents research on using a model-driven approach to the development and management of electronic courses. We propose a course management system which stores a course model represented as distinct machine-readable components containing domain knowledge of different course aspects. Based on this formally defined platform-independent…
A study of multidimensional modeling approaches for data warehouse
Yusof, Sharmila Mat; Sidi, Fatimah; Ibrahim, Hamidah; Affendey, Lilly Suriani
2016-08-01
A data warehouse system is used to support the process of organizational decision making. Hence, the system must extract and integrate information from heterogeneous data sources in order to uncover relevant knowledge suitable for the decision making process. However, the development of a data warehouse is a difficult and complex process, especially in its conceptual design (multidimensional modeling). Thus, various approaches have been proposed to overcome the difficulty. This study surveys and compares the approaches to multidimensional modeling and highlights the issues, trends and solutions proposed to date. The contribution is on the state of the art of multidimensional modeling design.
Gray-box modelling approach for description of storage tunnel
DEFF Research Database (Denmark)
Harremoës, Poul; Carstensen, Jacob
1999-01-01
The dynamics of a storage tunnel is examined using a model based on on-line measured data and a combination of simple deterministic and black-box stochastic elements. This approach, called gray-box modeling, is a promising new methodology for giving an on-line state description of sewer systems...... of the water in the overflow structures. The capacity of a pump draining the storage tunnel is estimated for two different rain events, revealing that the pump was malfunctioning during the first rain event. The proposed modeling approach can be used in automated online surveillance and control and implemented......
Meta-analysis a structural equation modeling approach
Cheung, Mike W-L
2015-01-01
Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences. They are often treated as two unrelated topics in the literature. This book presents a unified framework on analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the impo
Learning the Task Management Space of an Aircraft Approach Model
Krall, Joseph; Menzies, Tim; Davies, Misty
2014-01-01
Validating models of airspace operations is a particular challenge. These models are often aimed at finding and exploring safety violations, and aim to be accurate representations of real-world behavior. However, the rules governing the behavior are quite complex: nonlinear physics, operational modes, human behavior, and stochastic environmental concerns all determine the responses of the system. In this paper, we present a study on aircraft runway approaches as modeled in Georgia Tech's Work Models that Compute (WMC) simulation. We use a new learner, Genetic-Active Learning for Search-Based Software Engineering (GALE) to discover the Pareto frontiers defined by cognitive structures. These cognitive structures organize the prioritization and assignment of tasks of each pilot during approaches. We discuss the benefits of our approach, and also discuss future work necessary to enable uncertainty quantification.
A novel approach of modeling continuous dark hydrogen fermentation.
Alexandropoulou, Maria; Antonopoulou, Georgia; Lyberatos, Gerasimos
2018-02-01
In this study a novel modeling approach for describing fermentative hydrogen production in a continuous stirred tank reactor (CSTR) was developed, using the Aquasim modeling platform. This model accounts for the key metabolic reactions taking place in a fermentative hydrogen producing reactor, using fixed stoichiometry but different reaction rates. Biomass yields are determined based on bioenergetics. The model is capable of describing very well the variation in the distribution of metabolic products for a wide range of hydraulic retention times (HRT). The modeling approach is demonstrated using the experimental data obtained from a CSTR, fed with food industry waste (FIW), operating at different HRTs. The kinetic parameters were estimated through fitting to the experimental results. Hydrogen and total biogas production rates were predicted very well by the model, validating the basic assumptions regarding the implicated stoichiometric biochemical reactions and their kinetic rates. Copyright © 2017 Elsevier Ltd. All rights reserved.
An integrated modeling approach to age invariant face recognition
Alvi, Fahad Bashir; Pears, Russel
2015-03-01
This research study proposes a novel method for face recognition based on anthropometric features, making use of an integrated approach comprising global and personalized models. The system is aimed at situations where lighting, illumination, and pose variations cause problems in face recognition. A personalized model covers the individual aging patterns while a global model captures general aging patterns in the database. We introduced a de-aging factor that de-ages each individual in the database test and training sets. We used the k nearest neighbor approach for building the personalized and global models. Regression analysis was applied to build the models. During the test phase, we resort to voting on different features. We used the FG-NET database for checking the results of our technique and achieved a 65 percent Rank 1 identification rate.
Benchmarking novel approaches for modelling species range dynamics.
Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E
2016-08-01
Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reaffirm the clear merit of using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches
On a model-based approach to radiation protection
International Nuclear Information System (INIS)
Waligorski, M.P.R.
2002-01-01
There is a preoccupation with linearity and absorbed dose as the basic quantifiers of radiation hazard. An alternative is the fluence approach, whereby radiation hazard may be evaluated, at least in principle, via an appropriate action cross section. In order to compare these approaches, it may be useful to discuss them as quantitative descriptors of survival and transformation-like endpoints in cell cultures in vitro - a system thought to be relevant to modelling radiation hazard. If absorbed dose is used to quantify these biological endpoints, then non-linear dose-effect relations have to be described, and, e.g. after doses of densely ionising radiation, dose-correction factors as high as 20 are required. In the fluence approach only exponential effect-fluence relationships can be readily described. Neither approach alone exhausts the scope of experimentally observed dependencies of effect on dose or fluence. Two-component models, incorporating a suitable mixture of the two approaches, are required. An example of such a model is the cellular track structure theory developed by Katz over thirty years ago. The practical consequences of modelling radiation hazard using this mixed two-component approach are discussed. (author)
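As a hedged illustration of what a two-component dose-effect description looks like, the sketch below mixes an exponential "ion-kill" term with a multi-target "gamma-kill" term. This generic textbook form and its parameters are assumptions for illustration, not Katz's actual track structure formulation, which works from fluence and action cross sections:

```python
from math import exp

def survival(dose, p_ion, d0_ion, d0_gamma, m):
    """Surviving fraction as a mixture of an exponential 'ion-kill' mode
    and an m-target 'gamma-kill' mode (generic illustrative form only)."""
    ion_mode = exp(-dose / d0_ion)
    gamma_mode = 1.0 - (1.0 - exp(-dose / d0_gamma)) ** m
    return p_ion * ion_mode + (1.0 - p_ion) * gamma_mode

# illustrative parameters: half the damage in each mode, D0 = 1 and 2 Gy, 3 targets
s_low = survival(1.0, 0.5, 1.0, 2.0, 3)
s_high = survival(10.0, 0.5, 1.0, 2.0, 3)
```

The gamma-kill term reproduces the shouldered (non-linear) dose response, while the ion-kill term gives the purely exponential behaviour that a fluence description captures naturally.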
Susceptibility of Redundant Versus Singular Clock Domains Implemented in SRAM-Based FPGA TMR Designs
Berg, Melanie D.; LaBel, Kenneth A.; Pellish, Jonathan
2016-01-01
We present the challenges that arise when using redundant clock domains due to their clock-skew. Radiation data show that a singular clock domain (DTMR) provides an improved TMR methodology for SRAM-based FPGAs over redundant clocks.
The Birth and Death of Redundancy in Decoherence and Quantum Darwinism
Riedel, Charles; Zurek, Wojciech; Zwolak, Michael
2012-02-01
Understanding the quantum-classical transition and the identification of a preferred classical domain through quantum Darwinism is based on recognizing high-redundancy states as both ubiquitous and exceptional. They are produced ubiquitously during decoherence, as has been demonstrated by the recent identification of very general conditions under which high-redundancy states develop. They are exceptional in that high-redundancy states occupy a very narrow corner of the global Hilbert space; states selected at random are overwhelmingly likely to exhibit zero redundancy. In this Letter, we examine the conditions and time scales for the transition from high-redundancy states to zero-redundancy states in many-body dynamics. We identify sufficient conditions for the development of redundancy from product states and show that the destruction of redundancy can be accomplished even with highly constrained interactions.
A Cause-Consequence Chart of a Redundant Protection System
DEFF Research Database (Denmark)
Nielsen, Dan Sandvik; Platz, O.; Runge, B.
1975-01-01
A cause-consequence chart is applied in analysing failures of a redundant protection system (a core spray system in a nuclear power plant). It is shown how the diagram provides a basis for calculating two probability measures for malfunctioning of the protection system. The test policy of components is taken into account. The possibility of using parameter variation as a basis for the choice of test policy is indicated.
Information filtering based on corrected redundancy-eliminating mass diffusion.
Directory of Open Access Journals (Sweden)
Xuzhen Zhu
Full Text Available Methods used in information filtering and recommendation often rely on quantifying the similarity between objects or users. The similarity metrics used often suffer from similarity redundancies arising from correlations between objects' attributes. Based on an unweighted undirected object-user bipartite network, we propose a Corrected Redundancy-Eliminating similarity index (CRE), which is based on a spreading process on the network. Extensive experiments on three benchmark data sets (MovieLens, Netflix and Amazon) show that, when used in recommendation, the CRE yields significant improvements in terms of recommendation accuracy and diversity. A detailed analysis is presented to unveil the origins of the observed differences between the CRE and mainstream similarity indices.
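The spreading process the index builds on can be sketched as plain mass diffusion (ProbS) on a small user-item bipartite network. The redundancy correction that distinguishes the CRE is not reproduced here, and the data below are invented for illustration:

```python
def mass_diffusion_scores(user_items, target_user):
    """Two-step mass diffusion (ProbS) on a user-item bipartite network:
    resource on the target user's items spreads to users, then back to items."""
    # degree of each item = number of users connected to it
    item_deg = {}
    for items in user_items.values():
        for i in items:
            item_deg[i] = item_deg.get(i, 0) + 1
    # step 1: each item of the target user holds one unit of resource and
    # spreads it equally to its connected users
    user_res = {}
    for i in user_items[target_user]:
        for u, items in user_items.items():
            if i in items:
                user_res[u] = user_res.get(u, 0.0) + 1.0 / item_deg[i]
    # step 2: each user spreads its accumulated resource equally over its items
    scores = {}
    for u, res in user_res.items():
        k_u = len(user_items[u])
        for i in user_items[u]:
            scores[i] = scores.get(i, 0.0) + res / k_u
    return scores

# invented toy data: three users, four items
users = {"alice": {"a", "b"}, "bob": {"b", "c"}, "carol": {"c", "d"}}
scores = mass_diffusion_scores(users, "alice")   # {'a': 0.75, 'b': 1.0, 'c': 0.25}
```

Items the target user has not collected (here "c") are ranked by their final resource, which is the basis for the recommendation list.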
Low-redundancy linear arrays in mirrored interferometric aperture synthesis.
Zhu, Dong; Hu, Fei; Wu, Liang; Li, Jun; Lang, Liang
2016-01-15
Mirrored interferometric aperture synthesis (MIAS) is a novel interferometry that can improve spatial resolution compared with that of conventional IAS. In one-dimensional (1-D) MIAS, an antenna array with low redundancy has the potential to achieve a high spatial resolution. This Letter presents a technique for the direct construction of low-redundancy linear arrays (LRLAs) in MIAS and derives two regular analytical patterns that can yield various LRLAs in a short computation time. Moreover, for a better estimation of the observed scene, a bi-measurement method is proposed to handle the rank defect associated with the transfer matrix of those LRLAs. The results of imaging simulation demonstrate the effectiveness of the proposed method.
Information theory and artificial grammar learning: inferring grammaticality from redundancy.
Jamieson, Randall K; Nevzorova, Uliana; Lee, Graham; Mewhort, D J K
2016-03-01
In artificial grammar learning experiments, participants study strings of letters constructed using a grammar and then sort novel grammatical test exemplars from novel ungrammatical ones. The ability to distinguish grammatical from ungrammatical strings is often taken as evidence that the participants have induced the rules of the grammar. We show that judgements of grammaticality are predicted by the local redundancy of the test strings, not by grammaticality itself. The prediction holds in a transfer test in which test strings involve different letters than the training strings. Local redundancy is usually confounded with grammaticality in stimuli widely used in the literature. The confounding explains why the ability to distinguish grammatical from ungrammatical strings has popularized the idea that participants have induced the rules of the grammar, when they have not. We discuss the judgement of grammaticality task in terms of attribute substitution and pattern goodness. When asked to judge grammaticality (an inaccessible attribute), participants answer an easier question about pattern goodness (an accessible attribute).
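A minimal stand-in for such a local-redundancy measure is the mean training-set frequency of a test string's bigrams. The paper's actual measure is more elaborate, and the strings below are invented for illustration:

```python
from collections import Counter

def bigrams(s):
    """All overlapping two-letter substrings of s."""
    return [s[i:i + 2] for i in range(len(s) - 1)]

def redundancy_score(test_string, training_strings):
    """Mean training-set frequency of the test string's bigrams; a crude
    proxy for local redundancy (Counter returns 0 for unseen bigrams)."""
    counts = Counter(b for t in training_strings for b in bigrams(t))
    grams = bigrams(test_string)
    return sum(counts[g] for g in grams) / len(grams)

train = ["XXVT", "XXVX", "VTVX"]             # invented 'grammatical' strings
familiar = redundancy_score("XXVT", train)   # 2.0: built from frequent bigrams
novel = redundancy_score("QQQQ", train)      # 0.0: no bigram seen in training
```

On this view, a string scores as "grammatical" simply because its local chunks are frequent in the training set, with no rule induction involved.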
Ageing behaviour of [(n-1)/n] active redundancy systems
International Nuclear Information System (INIS)
Eid, M.Y.
1995-01-01
Ageing of systems becomes a real concern if intelligent maintenance is required. Determining the ageing behaviour of a system necessitates having a powerful calculating tool and knowing the ageing behaviour of the basic components of the system. Consequently, time dependent failure rates are required for basic components and need to be determined for systems. As this is the general problem in reliability analysis, only the (n-1)/n active redundancy system will be examined in the paper. Systems with (n-1)/n active redundancy are commonly used in a wide range of engineering fields. This should, a priori, improve the system reliability. Still, a deeper analysis of the ageing behaviour of such systems may reveal some particular aspects. (authors). 2 refs., 5 figs
Industrial plant electrical systems: Simplicity, reliability, cost savings, redundancies
International Nuclear Information System (INIS)
Silvestri, A.; Tommazzolli, F.; Pavia Univ.
1992-01-01
This article represents a compact but complete design and construction manual for industrial plant electrical systems. It is to be used by design engineers having prior knowledge of local power supply routes and voltages, and regards principally the optimum choice of internal distribution systems, which can be radial or single, double ringed or with various network configurations, with single or multiple supplies, and many or few redundancies. After giving guidelines on choosing among these options, the manual deals with problems relevant to suitable cable sizing. A cost-benefit analysis method is suggested for the choice of the number of redundancies. Recommendations are given for the choice of transformers, motorized equipment, switchboards and circuit breakers. Reference is made to Italian electrical safety and building codes.
Modeling gene expression measurement error: a quasi-likelihood approach
Directory of Open Access Journals (Sweden)
Strimmer Korbinian
2003-03-01
Full Text Available Abstract Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also
International Nuclear Information System (INIS)
Son, Kwang Seop; Kim, Dong Hoon; Kim, Chang Hwoi; Kang, Hyun Gook
2016-01-01
The Markov analysis is a technique for modeling system state transitions and calculating the probability of reaching various system states. While it is a proper tool for modeling complex system designs involving timing, sequencing, repair, redundancy, and fault tolerance, as the complexity or size of the system increases, so does the number of states of interest, leading to difficulty in constructing and solving the Markov model. This paper introduces a systematic approach of Markov modeling to analyze the dependability of a complex fault-tolerant system. This method is based on the decomposition of the system into independent subsystem sets, and the system-level failure rate and the unavailability rate for the decomposed subsystems. A Markov model for the target system is easily constructed using the system-level failure and unavailability rates for the subsystems, which can be treated separately. This approach can decrease the number of states to consider simultaneously in the target system by building Markov models of the independent subsystems stage by stage, and results in an exact solution for the Markov model of the whole target system. To apply this method we construct a Markov model for the reactor protection system found in nuclear power plants, a system configured with four identical channels and various fault-tolerant architectures. The results show that the proposed method in this study treats the complex architecture of the system in an efficient manner using the merits of the Markov model, such as a time dependent analysis and a sequential process analysis. - Highlights: • Systematic approach of Markov modeling for system dependability analysis is proposed based on the independent subsystem set, its failure rate and unavailability rate. • As an application example, we construct the Markov model for the digital reactor protection system configured with four identical and independent channels, and various fault-tolerant architectures. • The
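The decomposition idea can be sketched numerically: model each channel as an independent two-state Markov process, take its steady-state unavailability, and combine the channels combinatorially at the system level. The rates and the 2-out-of-4 voting rule below are assumptions for illustration, not figures from the paper:

```python
from math import comb

def channel_unavailability(fail_rate, repair_rate):
    """Steady-state unavailability of one repairable channel, modelled as a
    two-state continuous-time Markov chain (up -> down at fail_rate,
    down -> up at repair_rate)."""
    return fail_rate / (fail_rate + repair_rate)

def system_unavailability(q, n=4, needed=2):
    """Combine n independent, identical channels with k-out-of-n voting:
    the system is unavailable when fewer than `needed` channels are up."""
    return sum(comb(n, k) * (1 - q) ** k * q ** (n - k) for k in range(needed))

q = channel_unavailability(1e-4, 1e-1)   # assumed per-hour rates
Q = system_unavailability(q)             # dominated by the 4 * q**3 term
```

Treating each subsystem separately like this keeps the per-model state count small; the full method in the paper additionally preserves the time-dependent and sequential behaviour that a steady-state shortcut ignores.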
State-space Generalized Predicitve Control for redundant parallel robots
Czech Academy of Sciences Publication Activity Database
Belda, Květoslav; Böhm, Josef; Valášek, M.
2003-01-01
Roč. 31, č. 3 (2003), s. 413-432 ISSN 1539-7734 R&D Projects: GA ČR GA101/03/0620 Grant - others:CTU(CZ) 0204512 Institutional research plan: CEZ:AV0Z1075907 Keywords : parallel robot construction * generalized predictive control * drive redundancy Subject RIV: BC - Control Systems Theory http://library.utia.cas.cz/separaty/historie/belda-0411126.pdf
Estimation of component redundancy in optimal age maintenance
Siopa, Jorge; Garção, José; Silva, Júlio
2012-01-01
The classical Optimal Age-Replacement defines the maintenance strategy based on the equipment failure consequences. For severe consequences an early equipment replacement is recommended. For minor consequences the repair after failure is proposed. One way of reducing the failure consequences is the use of redundancies, especially if the equipment failure rate is decreasing over time, since in this case the preventive replacement does not reduce the risk of failure. The estimation of an ac...
Systematic Luby Transform codes as incremental redundancy scheme
CSIR Research Space (South Africa)
Grobler, TL
2011-09-01
Full Text Available Systematic Luby Transform Codes as Incremental Redundancy Scheme. T. L. Grobler, E. R. Ackermann, J. C. Olivier and A. J. van Zyl. Department of Electrical, Electronic and Computer Engineering, University of Pretoria, Pretoria 0002, South Africa. Email: trienkog...@gmail.com, etienne.ackermann@ieee.org. Defence, Peace, Safety and Security (DPSS), Council for Scientific and Industrial Research (CSIR), Pretoria 0001, South Africa. Department of Mathematics and Applied Mathematics, University of Pretoria, Pretoria 0002, South...
Unavailability modeling and analysis of redundant safety systems
International Nuclear Information System (INIS)
Vaurio, J.K.; Sciaudone, D.
1979-10-01
Analytical expressions have been developed to estimate the average unavailability of an m-out-of-n (m/n, 1 ≤ m ≤ n ≤ 4) standby safety system of a nuclear power plant. The expressions take into account contributions made by testing, repair, equipment failure, human error, and different testing schemes. A computer code, ICARUS, has been written to incorporate these analytical equations. The code is capable of calculating the average unavailability, optimum test interval, and relative contributions of testing, repair, and random failures for any of three testing schemes. After verification of the methodology and coding in ICARUS, a typical auxiliary feedwater system of a nuclear power plant was analyzed. The results show that the failure modes associated with testing and true demands contribute considerably to the unavailability and that diesel generators are the most critical components contributing to the overall unavailability of the system.
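The flavour of such expressions can be sketched with the standard textbook approximation for one periodically tested standby component. ICARUS implements more detailed equations; the form and rates below are a generic sketch with invented numbers:

```python
from math import sqrt

def avg_unavailability(lam, T, tau, mttr=0.0):
    """Textbook approximation for a periodically tested standby component:
    undetected-failure term lam*T/2, test-outage term tau/T, repair term
    lam*mttr (lam = failure rate, T = test interval, tau = test duration)."""
    return lam * T / 2 + tau / T + lam * mttr

def optimal_test_interval(lam, tau):
    """Minimise lam*T/2 + tau/T over T, giving T* = sqrt(2*tau/lam)."""
    return sqrt(2 * tau / lam)

lam, tau = 1e-5, 2.0                    # assumed: failures/h, test time in h
T_opt = optimal_test_interval(lam, tau)
u_opt = avg_unavailability(lam, T_opt, tau)
```

Testing too often drives up the test-outage term, testing too rarely the undetected-failure term; the optimum balances the two, which is the trade-off the code's "optimum test interval" output captures.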
Unavailability modeling and analysis of redundant safety systems
Energy Technology Data Exchange (ETDEWEB)
Vaurio, J.K.; Sciaudone, D.
1979-10-01
Analytical expressions have been developed to estimate the average unavailability of an m-out-of-n (m/n, 1 ≤ m ≤ n ≤ 4) standby safety system of a nuclear power plant. The expressions take into account contributions made by testing, repair, equipment failure, human error, and different testing schemes. A computer code, ICARUS, has been written to incorporate these analytical equations. The code is capable of calculating the average unavailability, optimum test interval, and relative contributions of testing, repair, and random failures for any of three testing schemes. After verification of the methodology and coding in ICARUS, a typical auxiliary feedwater system of a nuclear power plant was analyzed. The results show that the failure modes associated with testing and true demands contribute considerably to the unavailability and that diesel generators are the most critical components contributing to the overall unavailability of the system.
Stilwell, Daniel J; Bishop, Bradley E; Sylvester, Caleb A
2005-08-01
An approach to real-time trajectory generation for platoons of autonomous vehicles is developed from well-known control techniques for redundant robotic manipulators. The partially decentralized structure of this approach permits each vehicle to independently compute its trajectory in real-time using only locally generated information and low-bandwidth feedback generated by a system exogenous to the platoon. Our work is motivated by applications for which communications bandwidth is severely limited, such as for platoons of autonomous underwater vehicles. The communication requirements for our trajectory generation approach are independent of the number of vehicles in the platoon, enabling platoons composed of a large number of vehicles to be coordinated despite limited communication bandwidth.
Superlinearly scalable noise robustness of redundant coupled dynamical systems.
Kohar, Vivek; Kia, Behnam; Lindner, John F; Ditto, William L
2016-03-01
We illustrate through theory and numerical simulations that redundant coupled dynamical systems can be extremely robust against local noise in comparison to uncoupled dynamical systems evolving in the same noisy environment. Previous studies have shown that the noise robustness of redundant coupled dynamical systems is linearly scalable and deviations due to noise can be minimized by increasing the number of coupled units. Here, we demonstrate that the noise robustness can actually be scaled superlinearly if some conditions are met and very high noise robustness can be realized with very few coupled units. We discuss these conditions and show that this superlinear scalability depends on the nonlinearity of the individual dynamical units. The phenomenon is demonstrated in discrete as well as continuous dynamical systems. This superlinear scalability not only provides us with an opportunity to exploit the nonlinearity of physical systems without being bogged down by noise but may also help us in understanding the functional role of coupled redundancy found in many biological systems. Moreover, engineers can exploit superlinear noise suppression by starting a coupled system near (not necessarily at) the appropriate initial condition.
Analysis of informational redundancy in the protein-assembling machinery
Berkovich, Simon
2004-03-01
Entropy analysis of the DNA structure does not reveal a significant departure from randomness, indicating a lack of informational redundancy. This signifies the absence of a hidden meaning in the genome text and supports the 'barcode' interpretation of DNA given in [1]. Lack of informational redundancy is a characteristic property of an identification label rather than of a message of instructions. Yet the randomness of DNA has to induce non-random structures in the proteins. Protein synthesis is a two-step process: transcription into RNA with gene splicing and formation of a structure of amino acids. Entropy estimations, performed by A. Djebbari, show typical values of redundancy of the biomolecules along these pathways: DNA gene 4%, proteins 15-40%. In gene expression, the RNA copy carries the same information as the original DNA template. Randomness is essentially eliminated only at the step of protein creation by a degenerate code. According to [1], the significance of the substitution of U for T with a subsequent gene splicing is that these transformations result in a different pattern of RNA oscillations, so the vital DNA communications are protected against extraneous noise coming from the protein-making activities. 1. S. Berkovich, "On the 'barcode' functionality of DNA, or the Phenomenon of Life in the Physical Universe", Dorrance Publishing Co., Pittsburgh, 2003
Advantage of redundancy in the controllability of remote handling manipulator
International Nuclear Information System (INIS)
Muhammad, Ali; Mattila, Jouni; Vilenius, Matti; Siuko, Mikko; Semeraro, Luigi
2011-01-01
To carry out a variety of remote handling operations inside the ITER divertor a Water Hydraulic MANipulator (WHMAN) and its control system have been designed and developed at Tampere University of Technology. The manipulator is installed on top of Cassette Multifunctional Mover (CMM) to assist during the cassette removal and installation operations. While CMM is designed to carry heavy components such as cassettes through the service ducts relying on positioning accuracy and repeatability, WHMAN is designed to execute a mix of remote handling operations using position trajectories and master-slave telemanipulation. WHMAN is composed of eight joints: six rotational and two translational. Since a manipulator requires only six joints to acquire the desired position and orientation in operational-space, the two additional joints of WHMAN provide the redundant degrees of mobility. This paper presents how this redundancy of WHMAN can be an advantage to optimize the execution of remote handling tasks. The paper also discusses an effective way to practically exploit the redundancy. The results show that the additional degrees of freedom can be utilized to improve the dynamic behavior of the manipulator.
A review of function modeling: Approaches and applications
Erden, M.S.; Komoto, H.; Van Beek, T.J.; D'Amelio, V.; Echavarria, E.; Tomiyama, T.
2008-01-01
This work is aimed at establishing a common frame and understanding of function modeling (FM) for our ongoing research activities. A comparative review of the literature is performed to grasp the various FM approaches with their commonalities and differences. The relations of FM with the research fields of artificial intelligence, design theory, and maintenance are discussed. In this discussion the goals are to highlight the features of various classical approaches in relation to FM, to delin...
Top-down approach to unified supergravity models
International Nuclear Information System (INIS)
Hempfling, R.
1994-03-01
We introduce a new approach for studying unified supergravity models. In this approach all the parameters of the grand unified theory (GUT) are fixed by imposing the corresponding number of low energy observables. This determines the remaining particle spectrum whose dependence on the low energy observables can now be investigated. We also include some SUSY threshold corrections that have previously been neglected. In particular the SUSY threshold corrections to the fermion masses can have a significant impact on the Yukawa coupling unification. (orig.)
Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach
Naser, Arab
2012-01-01
Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricanes Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling; covers advanced methodologies in evacuation modeling and planning; discusses principles a...
An object-oriented approach to energy-economic modeling
Energy Technology Data Exchange (ETDEWEB)
Wise, M.A.; Fox, J.A.; Sands, R.D.
1993-12-01
In this paper, the authors discuss the experiences in creating an object-oriented economic model of the U.S. energy and agriculture markets. After a discussion of some central concepts, they provide an overview of the model, focusing on the methodology of designing an object-oriented class hierarchy specification based on standard microeconomic production functions. The evolution of the model from the class definition stage to programming it in C++, a standard object-oriented programming language, will be detailed. The authors then discuss the main differences between writing the object-oriented program versus a procedure-oriented program of the same model. Finally, they conclude with a discussion of the advantages and limitations of the object-oriented approach based on the experience in building energy-economic models with procedure-oriented approaches and languages.
Multi-model approach to characterize human handwriting motion.
Chihi, I; Abdelkrim, A; Benrejeb, M
2016-02-01
This paper deals with characterization and modelling of human handwriting motion from two forearm muscle activity signals, called electromyography (EMG) signals. In this work, an experimental approach was used to record the coordinates of a pen tip moving on the (x, y) plane and EMG signals during the handwriting act. The main purpose is to design a new mathematical model which characterizes this biological process. Based on a multi-model approach, this system was originally developed to generate letters and geometric forms written by different writers. A Recursive Least Squares algorithm is used to estimate the parameters of each sub-model of the multi-model basis. Simulations show good agreement between predicted results and the recorded data.
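The abstract does not give the structure of the sub-models, but the estimation step it names is the standard Recursive Least Squares algorithm. A generic sketch follows; the regressor layout, forgetting factor, and initialization are illustrative choices, not details from the paper.

```python
import numpy as np

def rls_fit(regressors, outputs, lam=1.0, delta=1e6):
    """Recursive Least Squares: update the parameter estimate theta
    one sample at a time. lam < 1 exponentially discounts old data."""
    n = len(regressors[0])
    theta = np.zeros(n)
    P = delta * np.eye(n)                      # large initial covariance
    for phi, y in zip(regressors, outputs):
        phi = np.asarray(phi, dtype=float)
        k = P @ phi / (lam + phi @ P @ phi)    # gain vector
        theta = theta + k * (y - phi @ theta)  # correct by prediction error
        P = (P - np.outer(k, phi @ P)) / lam   # covariance update
    return theta
```

On noise-free data generated by a fixed linear model, the estimate converges to the true parameters after a handful of samples, which is what makes the method attractive for fitting one sub-model per letter or writer.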
Wave Resource Characterization Using an Unstructured Grid Modeling Approach
Directory of Open Access Journals (Sweden)
Wei-Cheng Wu
2018-03-01
Full Text Available This paper presents a modeling study conducted on the central Oregon coast for wave resource characterization, using the unstructured grid Simulating WAve Nearshore (SWAN) model coupled with a nested grid WAVEWATCH III® (WWIII) model. The flexibility of models with various spatial resolutions and the effects of open boundary conditions simulated by a nested grid WWIII model with different physics packages were evaluated. The model results demonstrate the advantage of the unstructured grid-modeling approach for flexible model resolution and good model skills in simulating the six wave resource parameters recommended by the International Electrotechnical Commission in comparison to the observed data in Year 2009 at National Data Buoy Center Buoy 46050. Notably, spectral analysis indicates that the ST4 physics package improves upon the ST2 physics package's ability to predict wave power density for large waves, which is important for wave resource assessment, load calculation of devices, and risk management. In addition, bivariate distributions show that the simulated sea state of maximum occurrence with the ST4 physics package matched the observed data better than with the ST2 physics package. This study demonstrated that the unstructured grid wave modeling approach, driven by regional nested grid WWIII outputs along with the ST4 physics package, can efficiently provide accurate wave hindcasts to support wave resource characterization. Our study also suggests that wind effects need to be considered if the dimension of the model domain is greater than approximately 100 km, or O(10² km).
A comprehensive dynamic modeling approach for giant magnetostrictive material actuators
International Nuclear Information System (INIS)
Gu, Guo-Ying; Zhu, Li-Min; Li, Zhi; Su, Chun-Yi
2013-01-01
In this paper, a comprehensive modeling approach for a giant magnetostrictive material actuator (GMMA) is proposed based on the description of nonlinear electromagnetic behavior, the magnetostrictive effect and frequency response of the mechanical dynamics. It maps the relationships between current and magnetic flux at the electromagnetic part to force and displacement at the mechanical part in a lumped parameter form. Towards this modeling approach, the nonlinear hysteresis effect of the GMMA appearing only in the electrical part is separated from the linear dynamic plant in the mechanical part. Thus, a two-module dynamic model is developed to completely characterize the hysteresis nonlinearity and the dynamic behaviors of the GMMA. The first module is a static hysteresis model to describe the hysteresis nonlinearity, and the cascaded second module is a linear dynamic plant to represent the dynamic behavior. To validate the proposed dynamic model, an experimental platform is established. Then, the linear dynamic part and the nonlinear hysteresis part of the proposed model are identified in sequence. For the linear part, an approach based on axiomatic design theory is adopted. For the nonlinear part, a Prandtl–Ishlinskii model is introduced to describe the hysteresis nonlinearity and a constrained quadratic optimization method is utilized to identify its coefficients. Finally, experimental tests are conducted to demonstrate the effectiveness of the proposed dynamic model and the corresponding identification method. (paper)
Directory of Open Access Journals (Sweden)
Ali Moeini
2015-01-01
Full Text Available Given the growth of e-commerce, websites play an essential role in business success. Therefore, many authors have offered website evaluation models since 1995. However, the multiplicity and diversity of evaluation models make it difficult to integrate them into a single comprehensive model. In this paper a quantitative method has been used to integrate previous models into a comprehensive model that is compatible with them. In this approach the researcher's judgment plays no role in the integration of the models, and the new model takes its validity from 93 previous models and a systematic quantitative approach.
Smeared crack modelling approach for corrosion-induced concrete damage
DEFF Research Database (Denmark)
Thybo, Anna Emilie Anusha; Michel, Alexander; Stang, Henrik
2017-01-01
In this paper a smeared crack modelling approach is used to simulate corrosion-induced damage in reinforced concrete. The presented modelling approach utilizes a thermal analogy to mimic the expansive nature of solid corrosion products, while taking into account the penetration of corrosion products into the surrounding concrete, non-uniform precipitation of corrosion products, and creep. To demonstrate the applicability of the presented modelling approach, numerical predictions in terms of corrosion-induced deformations as well as formation and propagation of micro- and macrocracks were [...] corrosion-induced damage phenomena in reinforced concrete. Moreover, good agreements were also found between experimental and numerical data for corrosion-induced deformations along the circumference of the reinforcement.
A model-data based systems approach to process intensification
DEFF Research Database (Denmark)
Gani, Rafiqul
[...] Their developments, however, are largely due to experiment-based trial and error approaches and while they do not require validation, they can be time consuming and resource intensive. Also, one may ask, can a truly new intensified unit operation be obtained in this way? An alternative two-stage approach is to apply a model-based synthesis method to systematically generate and evaluate alternatives in the first stage and an experiment-model based validation in the second stage. In this way, the search for alternatives is done very quickly, reliably and systematically over a wide range, while resources are preserved for focused validation of only the promising candidates in the second stage. This approach, however, would be limited to intensification based on "known" unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model-based...
METHODOLOGICAL APPROACHES FOR MODELING THE RURAL SETTLEMENT DEVELOPMENT
Directory of Open Access Journals (Sweden)
Gorbenkova Elena Vladimirovna
2017-10-01
Full Text Available Subject: the paper describes the research results on validation of a rural settlement developmental model. The basic methods and approaches for solving the problem of assessment of the urban and rural settlement development efficiency are considered. Research objectives: determination of methodological approaches to modeling and creating a model for the development of rural settlements. Materials and methods: domestic and foreign experience in modeling the territorial development of urban and rural settlements and settlement structures was generalized. The motivation for using the Pentagon-model for solving similar problems was demonstrated. Based on a systematic analysis of existing development models of urban and rural settlements as well as the authors-developed method for assessing the level of agro-towns development, the systems/factors that are necessary for a rural settlement sustainable development are identified. Results: we created the rural development model which consists of five major systems that include critical factors essential for achieving a sustainable development of a settlement system: ecological system, economic system, administrative system, anthropogenic (physical system and social system (supra-structure. The methodological approaches for creating an evaluation model of rural settlements development were revealed; the basic motivating factors that provide interrelations of systems were determined; the critical factors for each subsystem were identified and substantiated. Such an approach was justified by the composition of tasks for territorial planning of the local and state administration levels. The feasibility of applying the basic Pentagon-model, which was successfully used for solving the analogous problems of sustainable development, was shown. Conclusions: the resulting model can be used for identifying and substantiating the critical factors for rural sustainable development and also become the basis of
An algebraic approach to modeling in software engineering
International Nuclear Information System (INIS)
Loegel, C.J.; Ravishankar, C.V.
1993-09-01
Our work couples the formalism of universal algebras with the engineering techniques of mathematical modeling to develop a new approach to the software engineering process. Our purpose in using this combination is twofold. First, abstract data types and their specification using universal algebras can be considered a common point between the practical requirements of software engineering and the formal specification of software systems. Second, mathematical modeling principles provide us with a means for effectively analyzing real-world systems. We first use modeling techniques to analyze a system and then represent the analysis using universal algebras. The rest of the software engineering process exploits properties of universal algebras that preserve the structure of our original model. This paper describes our software engineering process and our experience using it on both research and commercial systems. We need a new approach because current software engineering practices often deliver software that is difficult to develop and maintain. Formal software engineering approaches use universal algebras to describe "computer science" objects like abstract data types, but in practice software errors are often caused because "real-world" objects are improperly modeled. There is a large semantic gap between the customer's objects and abstract data types. In contrast, mathematical modeling uses engineering techniques to construct valid models for real-world systems, but these models are often implemented in an ad hoc manner. A combination of the best features of both approaches would enable software engineering to formally specify and develop software systems that better model real systems. Software engineering, like mathematical modeling, should concern itself first and foremost with understanding a real system and its behavior under given circumstances, and then with expressing this knowledge in an executable form.
An examination of cue redundancy theory in cross-cultural decoding of emotions in music.
Kwoun, Soo-Jin
2009-01-01
The present study investigated the effects of structural features of music (i.e., variations in tempo, loudness, or articulation, etc.) and cultural and learning factors in the assignments of emotional meaning in music. Four participant groups, young Koreans, young Americans, older Koreans, and older Americans, rated emotional expressions of Korean folksongs with three adjective scales: happiness, sadness and anger. The results of the study are in accordance with the Cue Redundancy model of emotional perception in music, indicating that expressive music embodies both universal auditory cues that communicate the emotional meanings of music across cultures and culture-specific cues that result from cultural convention.
Redundancy gains in simple responses and go/no-go tasks
DEFF Research Database (Denmark)
Gondan, Matthias; Götze, C.; Greenlee, M.W.
2010-01-01
In divided-attention tasks with two classes of target stimuli, participants typically respond more quickly if both targets are presented simultaneously, as compared with single-target presentation (redundant-signals effect). Different explanations exist for this effect, including serial, parallel [...] times in both the simple and go/no-go responses were well explained by a common coactivation model assuming linear superposition of modality-specific activation. In Experiment 2, the go/no-go task was made more difficult. Participants had to respond to high-frequency tones or right-tilted Gabor patches...
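A standard diagnostic in this literature for separating race accounts from coactivation is Miller's race-model inequality: under any race model, the redundant-target response-time CDF cannot exceed the sum of the single-target CDFs. The sketch below is that generic test, not the authors' specific linear-superposition model.

```python
import numpy as np

def race_bound_violation(rt_a, rt_b, rt_ab, t_grid):
    """Largest violation of Miller's bound F_AB(t) <= F_A(t) + F_B(t),
    evaluated on empirical CDFs over the time points in t_grid.
    A positive value is evidence against race models (favors coactivation)."""
    ecdf = lambda rts, t: np.mean(np.asarray(rts)[:, None] <= t, axis=0)
    bound = np.minimum(1.0, ecdf(rt_a, t_grid) + ecdf(rt_b, t_grid))
    return float(np.max(ecdf(rt_ab, t_grid) - bound))
```

Feeding in single-target and redundant-target response times from each condition gives a single summary number per participant that can be tested against zero.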
Towards a 3d Spatial Urban Energy Modelling Approach
Bahu, J.-M.; Koch, A.; Kremers, E.; Murshed, S. M.
2013-09-01
Today's needs to reduce the environmental impact of energy use impose dramatic changes for energy infrastructure and existing demand patterns (e.g. buildings) corresponding to their specific context. In addition, future energy systems are expected to integrate a considerable share of fluctuating power sources and equally a high share of distributed generation of electricity. Energy system models capable of describing such future systems and allowing the simulation of the impact of these developments thus require a spatial representation in order to reflect the local context and the boundary conditions. This paper describes two recent research approaches developed at EIFER in the fields of (a) geo-localised simulation of heat energy demand in cities based on 3D morphological data and (b) spatially explicit Agent-Based Models (ABM) for the simulation of smart grids. 3D city models were used to assess solar potential and heat energy demand of residential buildings which enable cities to target the building refurbishment potentials. Distributed energy systems require innovative modelling techniques where individual components are represented and can interact. With this approach, several smart grid demonstrators were simulated, where heterogeneous models are spatially represented. Coupling 3D geodata with energy system ABMs holds different advantages for both approaches. On one hand, energy system models can be enhanced with high resolution data from 3D city models and their semantic relations. Furthermore, they allow for spatial analysis and visualisation of the results, with emphasis on spatial and structural correlations among the different layers (e.g. infrastructure, buildings, administrative zones) to provide an integrated approach. On the other hand, 3D models can benefit from more detailed system description of energy infrastructure, representing dynamic phenomena and high resolution models for energy use at component level. The proposed modelling strategies...
Modelling of ductile and cleavage fracture by local approach
International Nuclear Information System (INIS)
Samal, M.K.; Dutta, B.K.; Kushwaha, H.S.
2000-08-01
This report describes the modelling of ductile and cleavage fracture processes by local approach. It is now well known that the conventional fracture mechanics method based on single parameter criteria is not adequate to model the fracture processes. This is because of the effects of size and geometry of flaw, and of loading type and rate, on the fracture resistance behaviour of any structure. Hence, it is questionable to use the same fracture resistance curves as determined from standard tests in the analysis of real life components because of the existence of all the above effects. So, there is a need to have a method in which the parameters used for the analysis will be true material properties, i.e. independent of geometry and size. One of the solutions to the above problem is the use of local approaches. These approaches have been extensively studied and applied to different materials (including SA333 Gr.6) in this report. Each method has been studied and reported in a separate section. This report has been divided into five sections. Section-I gives a brief review of the fundamentals of the fracture process. Section-II deals with modelling of ductile fracture by locally uncoupled type of models. In this section, the critical cavity growth parameters of the different models have been determined for the primary heat transport (PHT) piping material of the Indian pressurised heavy water reactor (PHWR). A comparative study has been done among different models. The dependency of the critical parameters on stress triaxiality factor has also been studied. It is observed that Rice and Tracey's model is the most suitable one. But, its parameters are not fully independent of triaxiality factor. For this purpose, a modification to Rice and Tracey's model is suggested in Section-III. Section-IV deals with modelling of ductile fracture process by locally coupled type of models. Section-V deals with the modelling of cleavage fracture process by Beremin's model, which is based on Weibull's statistics.
Atomistic approach for modeling metal-semiconductor interfaces
DEFF Research Database (Denmark)
Stradi, Daniele; Martinez, Umberto; Blom, Anders
2016-01-01
We present a general framework for simulating interfaces using an atomistic approach based on density functional theory and non-equilibrium Green's functions. The method includes all the relevant ingredients, such as doping and an accurate value of the semiconductor band gap, required to model realistic metal-semiconductor interfaces, and allows for a direct comparison between theory and experiments via the I–V curve. In particular, it will be demonstrated how doping and bias modify the Schottky barrier, and how finite size models (the slab approach) are unable to describe these interfaces.
Systems and context modeling approach to requirements analysis
Ahuja, Amrit; Muralikrishna, G.; Patwari, Puneet; Subhrojyoti, C.; Swaminathan, N.; Vin, Harrick
2014-08-01
Ensuring completeness and correctness of the requirements for a complex system such as the SKA is challenging. Current system engineering practice includes developing a stakeholder needs definition, a concept of operations, and defining system requirements in terms of use cases and requirements statements. We present a method that enhances this current practice into a collection of system models with mutual consistency relationships. These include stakeholder goals, needs definition and system-of-interest models, together with a context model that participates in the consistency relationships among these models. We illustrate this approach by using it to analyze the SKA system requirements.
An approach to multiscale modelling with graph grammars.
Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried
2014-09-01
Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.
A robust quantitative near infrared modeling approach for blend monitoring.
Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A
2018-01-30
This study demonstrates a material-sparing Near-Infrared modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by a traditional method development (blender-scale method) were compared by simultaneously monitoring a 1 kg batch size blend run. Both models demonstrated similar model performance. The small-scale method strategy significantly reduces the total resources expended to develop Near-Infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials.
Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling
Duong, Chi Nhan; Luu, Khoa; Quach, Kha Gia; Bui, Tien D.
2016-01-01
The "interpretation through synthesis" approach to analyze face images, particularly the Active Appearance Models (AAMs) method, has become one of the most successful face modeling approaches over the last two decades. AAM models have the ability to represent face images through synthesis using a controllable parameterized Principal Component Analysis (PCA) model. However, the accuracy and robustness of the synthesized faces of AAM are highly dependent on the training sets and inherently on the genera...
Peltola, Olli; Raivonen, Maarit; Li, Xuefei; Vesala, Timo
2018-02-01
Emission via bubbling, i.e. ebullition, is one of the main methane (CH4) emission pathways from wetlands to the atmosphere. Direct measurement of gas bubble formation, growth and release in the peat-water matrix is challenging, and as a consequence these processes are relatively unknown and are coarsely represented in current wetland CH4 emission models. In this study we aimed to evaluate three ebullition modelling approaches and their effect on model performance. This was achieved by implementing the three approaches in one process-based CH4 emission model. All the approaches were based on some kind of threshold: either on CH4 pore water concentration (ECT), pressure (EPT) or free-phase gas volume (EBG) threshold. The model was run using 4 years of data from a boreal sedge fen and the results were compared with eddy covariance measurements of CH4 fluxes. Modelled annual CH4 emissions were largely unaffected by the different ebullition modelling approaches; however, temporal variability in CH4 emissions varied an order of magnitude between the approaches. Hence the ebullition modelling approach drives the temporal variability in modelled CH4 emissions and therefore significantly impacts, for instance, high-frequency (daily scale) model comparison and calibration against measurements. The modelling approach based on the most recent knowledge of the ebullition process (volume threshold, EBG) agreed the best with the measured fluxes (R2 = 0.63) and hence produced the most reasonable results, although there was a scale mismatch between the measurements (ecosystem scale with heterogeneous ebullition locations) and model results (single horizontally homogeneous peat column). The approach should be favoured over the two other more widely used ebullition modelling approaches and researchers are encouraged to implement it into their CH4 emission models.
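The paper's model equations are not given in the abstract, but the simplest of the three schemes, the concentration threshold (ECT), can be sketched for a single well-mixed peat layer. The layer structure, units and threshold value below are placeholders, not the authors' parameterization.

```python
def ect_step(conc, production, threshold, dt=1.0):
    """One timestep of concentration-threshold (ECT) ebullition:
    CH4 produced during the timestep accumulates in pore water, and
    any excess above the threshold concentration leaves immediately
    as bubble flux. Returns (new_concentration, ebullition_flux)."""
    conc = conc + production * dt
    ebullition = max(0.0, conc - threshold)
    return conc - ebullition, ebullition
```

Because the excess is vented the moment it appears, ECT tends to produce many small, frequent bubble events; pressure (EPT) and free-gas-volume (EBG) variants let gas accumulate and release it in larger episodic bursts, which is consistent with the order-of-magnitude differences in modelled temporal variability reported above.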
Software sensors based on the grey-box modelling approach
DEFF Research Database (Denmark)
Carstensen, J.; Harremoës, P.; Strube, Rune
1996-01-01
In recent years the grey-box modelling approach has been applied to wastewater transportation and treatment. Grey-box models are characterized by the combination of deterministic and stochastic terms to form a model where all the parameters are statistically identifiable from the on-line measurements. … A grey-box model for the specific dynamics is identified. Similarly, an on-line software sensor for detecting the occurrence of backwater phenomena can be developed by comparing the dynamics of a flow measurement with a nearby level measurement. For treatment plants it is found that grey-box models applied to on-line measurements … With respect to the development of software sensors, the grey-box models possess two important features. Firstly, the on-line measurements can be filtered according to the grey-box model in order to remove noise deriving from the measuring equipment and controlling devices. Secondly, the grey…
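The filtering feature described above — smoothing on-line measurements against a model to remove sensor and actuator noise — can be illustrated with a scalar Kalman-style filter. This is only a hedged sketch: the persistence model and the noise variances `q` and `r` are assumptions, not the parameters identified in the study.

```python
# Minimal grey-box-style software sensor: a scalar Kalman filter that smooths
# a noisy on-line flow measurement. The deterministic part is a simple
# persistence model x_k = x_{k-1}; q and r are assumed noise variances.

def kalman_filter(measurements, q=0.01, r=0.5, x0=0.0, p0=1.0):
    """Filter a noisy measurement series; return the smoothed series."""
    x, p = x0, p0
    filtered = []
    for z in measurements:
        p = p + q                # predict step (persistence model)
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update with the measurement innovation
        p = (1.0 - k) * p
        filtered.append(x)
    return filtered

noisy = [1.0, 1.2, 0.9, 1.1, 5.0, 1.0]   # spike at index 4 mimics sensor noise
smooth = kalman_filter(noisy)
```

The filtered series tracks the measurements but damps the spike, which is the point of filtering on-line data through a model rather than using raw readings directly.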
Bianchi VI0 and III models: self-similar approach
International Nuclear Information System (INIS)
Belinchon, Jose Antonio
2009-01-01
We study several cosmological models with Bianchi VI0 and III symmetries under the self-similar approach. We find new solutions for the 'classical' perfect fluid model as well as for the vacuum model, although they are really restrictive with respect to the equation of state. We also study a perfect fluid model with time-varying constants, G and Λ. As in other models studied, we find that the behaviours of G and Λ are related: if G behaves as a growing function of time then Λ is a positive decreasing function of time, but if G is decreasing then Λ is negative. We end by studying a massive cosmic string model, putting special emphasis on calculating the numerical values of the equations of state. We show that there is no self-similar solution for a string model with time-varying constants.
Environmental Radiation Effects on Mammals A Dynamical Modeling Approach
Smirnova, Olga A
2010-01-01
This text is devoted to theoretical studies of radiation effects on mammals. It uses a framework of deterministic mathematical models to investigate the effects of both acute and chronic irradiation, over a wide range of doses and dose rates, on vital body systems including hematopoiesis, the small intestine and humoral immunity, as well as on the development of autoimmune diseases. Thus, these models can contribute to the development of systems-level and quantitative approaches in radiation biology and ecology. This text is also of practical use. Its modeling studies of the dynamics of granulocytopoiesis and thrombocytopoiesis in humans testify to the efficiency of the developed models in the investigation and prediction of radiation effects on these hematopoietic lines. These models, as well as properly identified models of other vital body systems, could provide a better understanding of the radiation risks to health. The modeling predictions will enable the implementation of more ef...
A new approach to Naturalness in SUSY models
Ghilencea, D M
2013-01-01
We review recent results that provide a new approach to the old problem of naturalness in supersymmetric models, without relying on subjective definitions for the fine-tuning associated with fixing the EW scale (to its measured value) in the presence of quantum corrections. The approach can address in a model-independent way many questions related to this problem. The results show that naturalness and its measure (fine-tuning) are an intrinsic part of the likelihood to fit the data that includes the EW scale. One important consequence is that the additional constraint of fixing the EW scale, usually not imposed in the data fits of the models, impacts on their overall likelihood to fit the data (or chi^2/ndf, where ndf is the number of degrees of freedom). This has negative implications for the viability of currently popular supersymmetric extensions of the Standard Model.
Model selection and inference a practical information-theoretic approach
Burnham, Kenneth P
1998-01-01
This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information-theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions; these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
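The AIC selection idea summarized above can be shown in a few lines. The sketch below assumes Gaussian least-squares fits, for which a standard form is AIC = n·ln(RSS/n) + 2k; the residual sums of squares and parameter counts are hypothetical numbers invented for illustration.

```python
# Illustrative AIC comparison for least-squares fits.
# AIC = n * ln(RSS / n) + 2k (Gaussian errors); lower AIC is preferred.
import math

def aic(rss, n, k):
    """Akaike's Information Criterion for a least-squares fit with k parameters."""
    return n * math.log(rss / n) + 2 * k

# Two hypothetical models fitted to the same n = 100 data points:
n = 100
aic_simple  = aic(rss=250.0, n=n, k=2)   # fewer parameters, slightly worse fit
aic_complex = aic(rss=240.0, n=n, k=6)   # more parameters, slightly better fit

best = "simple" if aic_simple < aic_complex else "complex"
```

Here the complex model's small improvement in fit does not pay for its four extra parameters, so AIC selects the simpler approximating model — the bias-variance trade-off the book formalizes.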
Merits of a Scenario Approach in Dredge Plume Modelling
DEFF Research Database (Denmark)
Pedersen, Claus; Chu, Amy Ling Chu; Hjelmager Jensen, Jacob
2011-01-01
Dredge plume modelling is a key tool for quantification of potential impacts to inform the EIA process. There are, however, significant uncertainties associated with the modelling at the EIA stage, when both dredging methodology and schedule are likely to be a guess at best, as the dredging contractor would rarely have been appointed. Simulation of a few variations of an assumed full dredge period programme will generally not provide a good representation of the overall environmental risks associated with the programme. An alternative dredge plume modelling strategy that attempts to encapsulate uncertainties associated with preliminary dredging programmes by using a scenario-based modelling approach is presented. The approach establishes a set of representative and conservative scenarios for key factors controlling the spill and plume dispersion and simulates all combinations of e.g. dredge, climatic…
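The "all combinations of key factors" idea is a plain Cartesian product. A minimal sketch, with entirely hypothetical factor names and levels (the abstract does not list the actual scenario factors beyond dredge and climatic conditions):

```python
# Sketch of scenario enumeration for dredge plume modelling: every combination
# of the controlling factors becomes one model run. Factor levels are assumed.
from itertools import product

dredgers   = ["backhoe", "cutter_suction", "trailing_hopper"]
seasons    = ["wet", "dry"]
spill_rate = ["low", "high"]

scenarios = list(product(dredgers, seasons, spill_rate))
# 3 dredgers x 2 seasons x 2 spill rates = 12 simulations to schedule
```

Running the full factorial set rather than a handful of assumed programmes is what lets the scenario approach bound the environmental risk before the contractor and schedule are known.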
Regularization of quantum gravity in the matrix model approach
International Nuclear Information System (INIS)
Ueda, Haruhiko
1991-02-01
We study the divergence problem of the partition function in the matrix model approach to two-dimensional quantum gravity. We propose a new model V(φ) = 1/2 Tr φ^2 + (g_4/N) Tr φ^4 + (g'/N^4)(Tr φ^4)^2 and show that in the sphere case it has no divergence problem and the critical exponent is that of pure gravity. (author)
PASSENGER TRAFFIC MOVEMENT MODELLING BY THE CELLULAR-AUTOMATON APPROACH
Directory of Open Access Journals (Sweden)
T. Mikhaylovskaya
2009-01-01
The mathematical model of passenger traffic movement developed on the basis of the cellular-automaton approach is considered. A program realization of the cellular-automaton model of pedestrian stream movement in pedestrian subways, in the presence of obstacles and at subway structure narrowings, is presented. The optimum distances between the obstacles and the angle of subway structure narrowing that provide safe pedestrian stream movement and prevent the occurrence of traffic congestion are determined.
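The cellular-automaton idea can be shown in one dimension: cells hold at most one pedestrian, and each pedestrian advances toward the exit if the next cell is free. This is a deliberately reduced sketch, not the paper's 2-D subway model with obstacles; the sequential sweep from the exit backwards is an assumed update rule.

```python
# Minimal 1-D cellular-automaton sketch of pedestrian flow in a corridor.
# 0 = empty cell, 1 = pedestrian; the exit is at the right end.

def step(corridor):
    """Advance pedestrians one cell rightward where the target cell is free.

    The sweep runs from the exit backwards (a sequential update), so a
    pedestrian may move into a cell vacated in the same step.
    """
    new = list(corridor)
    for i in range(len(corridor) - 2, -1, -1):
        if corridor[i] == 1 and new[i + 1] == 0:
            new[i], new[i + 1] = 0, 1
    return new

corridor = [1, 1, 0, 1, 0, 0]
corridor = step(corridor)
```

An obstacle or a narrowing is modelled by marking cells permanently blocked; congestion then appears as pedestrians queuing behind them, which is the quantity the paper optimizes obstacle spacing against.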
Functional redundancy and food web functioning in linuron-exposed ecosystems
International Nuclear Information System (INIS)
De Laender, F.; Van den Brink, P.J.; Janssen, C.R.
2011-01-01
An extensive data set describing effects of the herbicide linuron on macrophyte-dominated microcosms was analysed with a food web model to assess effects on ecosystem functioning. We showed that sensitive phytoplankton and periphyton groups in the diets of heterotrophs were gradually replaced by more tolerant phytoplankton species as linuron concentrations increased. This diet shift - showing redundancy among phytoplankton species - allowed heterotrophs to maintain their functions in the contaminated microcosms. On an ecosystem level, total gross primary production was up to a hundred times lower in the treated microcosms, but the uptake of dissolved organic carbon by bacteria and mixotrophs was less sensitive. Food web efficiency was not consistently lower in the treated microcosms. We conclude that linuron predominantly affected the macrophytes but did not alter the overall functioning of the surrounding planktonic food web. Therefore, a risk assessment that protects macrophyte growth also protects the functioning of macrophyte-dominated microcosms. - Highlights: → Food web modelling reveals the functional response of species and ecosystem to linuron. → Primary production was more sensitive to linuron than bacterial production. → Linuron replaced sensitive phytoplankton by tolerant phytoplankton in heterotrophs' diets. → Linuron did not change the functioning of heterotrophs. - Food web modelling reveals functional redundancy of the planktonic community in microcosms treated with linuron.
Functional redundancy and food web functioning in linuron-exposed ecosystems
Energy Technology Data Exchange (ETDEWEB)
De Laender, F., E-mail: frederik.delaender@ugent.be [Laboratory of Environmental Toxicity and Aquatic Ecology, Ghent University, Plateaustraat 22, 9000 Ghent (Belgium); Van den Brink, P.J., E-mail: Paul.vandenBrink@wur.nl [Department of Aquatic Ecology and Water Quality Management, Wageningen University, PO Box 47, 6700 AA Wageningen (Netherlands); Janssen, C.R., E-mail: colin.janssen@ugent.be [Laboratory of Environmental Toxicity and Aquatic Ecology, Ghent University, Plateaustraat 22, 9000 Ghent (Belgium)
2011-10-15
An extensive data set describing effects of the herbicide linuron on macrophyte-dominated microcosms was analysed with a food web model to assess effects on ecosystem functioning. We showed that sensitive phytoplankton and periphyton groups in the diets of heterotrophs were gradually replaced by more tolerant phytoplankton species as linuron concentrations increased. This diet shift - showing redundancy among phytoplankton species - allowed heterotrophs to maintain their functions in the contaminated microcosms. On an ecosystem level, total gross primary production was up to a hundred times lower in the treated microcosms, but the uptake of dissolved organic carbon by bacteria and mixotrophs was less sensitive. Food web efficiency was not consistently lower in the treated microcosms. We conclude that linuron predominantly affected the macrophytes but did not alter the overall functioning of the surrounding planktonic food web. Therefore, a risk assessment that protects macrophyte growth also protects the functioning of macrophyte-dominated microcosms. - Highlights: → Food web modelling reveals the functional response of species and ecosystem to linuron. → Primary production was more sensitive to linuron than bacterial production. → Linuron replaced sensitive phytoplankton by tolerant phytoplankton in heterotrophs' diets. → Linuron did not change the functioning of heterotrophs. - Food web modelling reveals functional redundancy of the planktonic community in microcosms treated with linuron.
The Generalised Ecosystem Modelling Approach in Radiological Assessment
International Nuclear Information System (INIS)
Klos, Richard
2008-03-01
An independent modelling capability is required by SSI in order to evaluate dose assessments carried out in Sweden by, amongst others, SKB. The main focus is the evaluation of the long-term radiological safety of radioactive waste repositories for both spent fuel and low-level radioactive waste. To meet the requirement for an independent modelling tool for use in biosphere dose assessments, SSI, through its modelling team CLIMB, commissioned the development of a new model in 2004: a project to produce an integrated model of radionuclides in the landscape. The generalised ecosystem modelling approach (GEMA) is the result. GEMA is a modular system of compartments representing the surface environment. It can be configured, through water and solid material fluxes, to represent local details in the range of ecosystem types found in the past, present and future Swedish landscapes. The approach is generic, but fine-tuning can be carried out using local details of the surface drainage system. The modular nature of the modelling approach means that GEMA modules can be linked to represent large-scale surface drainage features over an extended domain in the landscape. System change can also be managed in GEMA, allowing a flexible and comprehensive model of the evolving landscape to be constructed. Environmental concentrations of radionuclides can be calculated, and the GEMA dose pathway model provides a means of evaluating the radiological impact of radionuclide release to the surface environment. This document sets out the philosophy and details of GEMA and illustrates the functioning of the model with a range of examples featuring the recent CLIMB review of SKB's SR-Can assessment.
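The compartment-and-flux structure described for GEMA can be sketched as linked inventories exchanging radionuclide mass through first-order transfers. The compartment names, rate constants and explicit Euler stepping below are illustrative assumptions, not GEMA's actual configuration.

```python
# Hedged sketch of compartment modelling: surface-environment compartments
# exchange radionuclide inventory via first-order water/solid fluxes.
# Names and rates are assumed for illustration only.

def step_inventories(inv, transfers, dt=1.0):
    """One explicit Euler step; transfers maps (src, dst) -> rate [1/yr]."""
    new = dict(inv)
    for (src, dst), rate in transfers.items():
        flux = rate * inv[src] * dt   # first-order transfer from src to dst
        new[src] -= flux
        new[dst] += flux
    return new

inv = {"soil": 100.0, "stream": 0.0, "lake": 0.0}      # Bq per compartment
transfers = {("soil", "stream"): 0.05, ("stream", "lake"): 0.20}

for _ in range(10):                                    # ten yearly steps
    inv = step_inventories(inv, transfers)
```

Because modules are just compartments linked by fluxes, chaining more of them along a drainage path extends the domain without changing the solver, which is the modularity the abstract emphasizes.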
The Generalised Ecosystem Modelling Approach in Radiological Assessment
Energy Technology Data Exchange (ETDEWEB)
Klos, Richard
2008-03-15
An independent modelling capability is required by SSI in order to evaluate dose assessments carried out in Sweden by, amongst others, SKB. The main focus is the evaluation of the long-term radiological safety of radioactive waste repositories for both spent fuel and low-level radioactive waste. To meet the requirement for an independent modelling tool for use in biosphere dose assessments, SSI, through its modelling team CLIMB, commissioned the development of a new model in 2004: a project to produce an integrated model of radionuclides in the landscape. The generalised ecosystem modelling approach (GEMA) is the result. GEMA is a modular system of compartments representing the surface environment. It can be configured, through water and solid material fluxes, to represent local details in the range of ecosystem types found in the past, present and future Swedish landscapes. The approach is generic, but fine-tuning can be carried out using local details of the surface drainage system. The modular nature of the modelling approach means that GEMA modules can be linked to represent large-scale surface drainage features over an extended domain in the landscape. System change can also be managed in GEMA, allowing a flexible and comprehensive model of the evolving landscape to be constructed. Environmental concentrations of radionuclides can be calculated, and the GEMA dose pathway model provides a means of evaluating the radiological impact of radionuclide release to the surface environment. This document sets out the philosophy and details of GEMA and illustrates the functioning of the model with a range of examples featuring the recent CLIMB review of SKB's SR-Can assessment.