WorldWideScience

Sample records for hard timing constraints

  1. Scheduling of Fault-Tolerant Embedded Systems with Soft and Hard Timing Constraints

    DEFF Research Database (Denmark)

    Izosimov, Viacheslav; Pop, Paul; Eles, Petru

    2008-01-01

    In this paper we present an approach to the synthesis of fault-tolerant schedules for embedded applications with soft and hard real-time constraints. We aim to guarantee the deadlines of the hard processes even in the case of faults, while maximizing the overall utility. We use time/utility functions to capture the utility of soft processes. Process re-execution is employed to recover from multiple faults. A single static schedule computed off-line is not fault tolerant and is pessimistic in terms of utility, while a purely online approach, which computes a new schedule every time a process...
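
    Such time/utility functions can be made concrete with a minimal sketch (the linear-decay shape and parameter names below are illustrative assumptions, not the paper's definitions): a soft process keeps full utility up to a point and then decays linearly, while any missed hard deadline invalidates the schedule.

```python
def linear_utility(completion, full_until, zero_at, max_u):
    """Time/utility function: full utility up to full_until, linear decay to zero_at."""
    if completion <= full_until:
        return max_u
    if completion >= zero_at:
        return 0.0
    return max_u * (zero_at - completion) / (zero_at - full_until)

def schedule_value(finish, soft, hard):
    """Overall utility of a schedule; any missed hard deadline invalidates it."""
    if any(finish[p] > deadline for p, deadline in hard.items()):
        return None
    return sum(linear_utility(finish[p], *params) for p, params in soft.items())
```

    In the paper's setting, a scheduler would trade off which soft processes to delay or drop so that this value is maximized while the hard set always passes the deadline check.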

  2. Learning With Mixed Hard/Soft Pointwise Constraints.

    Science.gov (United States)

    Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello

    2015-09-01

    A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function), play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the door to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.
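
    The hard/soft distinction can be illustrated with a small finite-dimensional sketch (illustrative only; the paper works in a function-space setting via constrained variational calculus, not the gradient scheme below): supervised examples act as a soft squared-loss penalty, while a hard linear constraint a·w = b is enforced exactly by projection after every update.

```python
import numpy as np

def fit_with_hard_constraint(X, y, a, b, lr=0.1, steps=500):
    """Least-squares fit (soft pointwise constraints from examples) subject to a
    hard linear constraint a.w = b, enforced by exact projection each step."""
    w = np.zeros(X.shape[1])
    a = np.asarray(a, float)
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)   # soft: penalized, may be violated
        w -= ((a @ w - b) / (a @ a)) * a       # hard: projected back, never violated
    return w
```

    With X = I, y = (1, 1), and the hard constraint w₀ = 0, the soft constraints pull both coordinates toward 1, but the hard constraint wins on the first coordinate: the fit converges to roughly (0, 1).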

  3. Constraint satisfaction problems with isolated solutions are hard

    International Nuclear Information System (INIS)

    Zdeborová, Lenka; Mézard, Marc

    2008-01-01

    We study the phase diagram and the algorithmic hardness of random 'locked' constraint satisfaction problems, and compare them to the commonly studied 'non-locked' problems, such as satisfiability of Boolean formulae or graph coloring. The special property of the locked problems is that their clusters of solutions are isolated points. This significantly simplifies the determination of the phase diagram, which makes the locked problems particularly appealing from the mathematical point of view. On the other hand, we show empirically that the clustered phase of these problems is extremely hard from the algorithmic point of view: the best known algorithms all fail to find solutions. Our results suggest that the easy/hard transition (for currently known algorithms) in the locked problems coincides with the clustering transition. These should thus be regarded as new benchmarks of really hard constraint satisfaction problems.

  4. Nonnegative Matrix Factorization with Rank Regularization and Hard Constraint.

    Science.gov (United States)

    Shang, Ronghua; Liu, Chiyang; Meng, Yang; Jiao, Licheng; Stolkin, Rustam

    2017-09-01

    Nonnegative matrix factorization (NMF) is well known to be an effective tool for dimensionality reduction in problems involving big data. For this reason, it frequently appears in many areas of the scientific and engineering literature. This letter proposes a novel semisupervised NMF algorithm for overcoming a variety of problems associated with NMF algorithms, including poor use of prior information, the negative impact of the sparse constraint on manifold structure, and inaccurate graph construction. Our proposed algorithm, nonnegative matrix factorization with rank regularization and hard constraint (NMFRC), incorporates label information into the data representation as a hard constraint, which makes full use of prior information. NMFRC also measures pairwise similarity according to geodesic distance rather than Euclidean distance. This results in more accurate measurement of pairwise relationships, and thus more effective use of manifold information. Furthermore, NMFRC adopts a rank constraint instead of norm constraints for regularization, to balance the sparseness and smoothness of the data. In this way, the new data representation is more representative and has better interpretability. Experiments on real data sets suggest that NMFRC outperforms four other state-of-the-art algorithms in terms of clustering accuracy.
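
    For readers unfamiliar with the base method, a minimal NMF sketch follows (standard Lee-Seung multiplicative updates only; NMFRC's label hard constraint, geodesic similarity graph, and rank regularization are not reproduced here):

```python
import numpy as np

def nmf(X, r, iters=500, eps=1e-9, seed=0):
    """Basic NMF via multiplicative updates: X ~ W @ H with W, H >= 0.
    Nonnegativity is preserved because updates only multiply by nonnegative ratios."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

    Semisupervised variants such as NMFRC additionally constrain part of H to agree with known labels, so labeled samples cannot drift to the wrong cluster.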

  5. Time Extensions of Petri Nets for Modelling and Verification of Hard Real-Time Systems

    Directory of Open Access Journals (Sweden)

    Tomasz Szmuc

    2002-01-01

    The main aim of the paper is a presentation of time extensions of Petri nets appropriate for modelling and analysis of hard real-time systems. It is assumed that the extensions must provide a model of time flow, an ability to force a transition to fire within a stated timing constraint (the so-called strong firing rule), and timing constraints represented by intervals. The presented survey includes extensions of classical Place/Transition Petri nets, as well as ones applied to high-level Petri nets. The expressiveness of each time extension is illustrated using a simple hard real-time system. The paper also includes a brief description of analysis and verification methods related to the extensions, and a survey of software tools supporting modelling and analysis of the considered Petri nets.
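
    The strong firing rule can be stated compactly: if a transition becomes enabled at time e and carries a static interval [eft, lft], it may fire within [e+eft, e+lft] and must fire (or become disabled) no later than e+lft. A toy check (function names are illustrative):

```python
def firing_window(enabled_at, interval):
    """Absolute firing window of a transition in a time Petri net."""
    eft, lft = interval
    return enabled_at + eft, enabled_at + lft

def respects_strong_firing(fire_time, enabled_at, interval):
    """Strong firing rule: firing may not happen before the window opens and
    MUST happen no later than the window closes."""
    earliest, latest = firing_window(enabled_at, interval)
    return earliest <= fire_time <= latest
```

    It is this mandatory upper bound that lets time Petri nets express hard deadlines, which weak-semantics timed extensions cannot enforce.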

  6. Advanced Hard Real-Time Operating System, the Maruti Project. Part 2.

    Science.gov (United States)

    1997-01-01

    Abstract (from the report, contract DASG-60-92-C-0055, February 14, 1995): The Maruti Real-Time Operating System was developed for applications that must meet hard real-time constraints. In order

  7. Designing a fuzzy scheduler for hard real-time systems

    Science.gov (United States)

    Yen, John; Lee, Jonathan; Pfluger, Nathan; Natarajan, Swami

    1992-01-01

    In hard real-time systems, tasks have to be performed not only correctly but also in a timely fashion. If timing constraints are not met, there might be severe consequences. Task scheduling is the most important problem in designing a hard real-time system, because the scheduling algorithm ensures that tasks meet their deadlines. However, the inherent uncertainty in dynamic hard real-time systems compounds the difficulty of scheduling. In an effort to alleviate these problems, we have developed a fuzzy scheduler to facilitate searching for a feasible schedule. A set of fuzzy rules is proposed to guide the search. The situation we are trying to address is the performance of the system when no feasible solution can be found and, therefore, certain tasks will not be executed. We wish to limit the number of important tasks that are not scheduled.
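
    A single rule of the kind such a scheduler might use can be sketched as follows (the membership functions and the rule itself are illustrative assumptions, not the paper's rule base):

```python
def fuzzy_priority(laxity, importance, horizon=10.0):
    """Toy fuzzy scheduling rule:
    'IF laxity is low AND importance is high THEN priority is high'.
    laxity in time units, importance in [0, 1]."""
    low_laxity = max(0.0, 1.0 - laxity / horizon)       # triangular membership
    high_importance = min(1.0, max(0.0, importance))
    return min(low_laxity, high_importance)             # Mamdani AND = min
```

    Ranking ready tasks by such a score lets the search favor urgent, important tasks first, so that when the task set is infeasible, the tasks left unscheduled tend to be the less important ones.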

  8. Parallel-Machine Scheduling with Time-Dependent and Machine Availability Constraints

    Directory of Open Access Journals (Sweden)

    Cuixia Miao

    2015-01-01

    We consider the parallel-machine scheduling problem in which the machines have availability constraints and the processing time of each job is a simple linear increasing function of its starting time. For the makespan minimization problem, which is NP-hard in the strong sense, we discuss the Longest Deteriorating Rate algorithm and the List Scheduling algorithm; we also provide a lower bound for any optimal schedule. For the total completion time minimization problem, we analyze the strong NP-hardness, and we present a dynamic programming algorithm and a fully polynomial-time approximation scheme for the two-machine problem. Furthermore, we extend the dynamic programming algorithm to the total weighted completion time minimization problem.
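
    The two heuristics can be sketched for the deterioration model p_j = b_j * t (processing time proportional to start time; machine availability constraints are omitted from this sketch): a job with rate b started at time t completes at t(1+b), so each machine's finishing time is a product of (1+b) factors.

```python
def list_schedule(rates, machines, t0=1.0):
    """List Scheduling for deteriorating jobs p_j = b_j * t (t = start time).
    Each job in list order goes to the machine that is free earliest."""
    free = [t0] * machines
    for b in rates:
        i = min(range(machines), key=lambda k: free[k])
        free[i] *= (1.0 + b)          # completion = start * (1 + b)
    return max(free)                  # makespan

def ldr_schedule(rates, machines, t0=1.0):
    """Longest Deteriorating Rate first: sort rates descending, then list-schedule."""
    return list_schedule(sorted(rates, reverse=True), machines, t0)
```

    Placing large rates first is the deteriorating-job analogue of the classical Longest Processing Time rule for ordinary parallel-machine makespan.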

  9. Optimal dynamic voltage scaling for wireless sensor nodes with real-time constraints

    Science.gov (United States)

    Cassandras, Christos G.; Zhuang, Shixin

    2005-11-01

    Sensors are increasingly embedded in manufacturing systems and wirelessly networked to monitor and manage operations ranging from process and inventory control to tracking equipment and even post-manufacturing product monitoring. In building such sensor networks, a critical issue is the limited and hard-to-replenish energy of the devices involved. Dynamic voltage scaling is a technique that controls the operating voltage of a processor to provide desired performance while conserving energy and prolonging the overall network's lifetime. We consider such power-limited devices processing time-critical tasks that are non-preemptive, aperiodic, and have uncertain arrival times. We treat voltage scaling as a dynamic optimization problem whose objective is to minimize energy consumption subject to hard or soft real-time execution constraints. In the case of hard constraints, we build on prior work (which engages a voltage scaling controller at task completion times) by developing an intra-task controller that acts at all arrival times of incoming tasks. We show that this optimization problem can be decomposed into two simpler ones whose solution leads to an algorithm that does not actually require solving any nonlinear programming problems. In the case of soft constraints, this decomposition must be partly relaxed, but it still leads to a scalable (linear in the number of tasks) algorithm. Simulation results are provided to illustrate performance improvements in systems with intra-task controllers compared to uncontrolled systems or those using inter-task control.
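
    The core intuition behind voltage/speed scaling under a hard deadline can be shown in a few lines (a textbook-style sketch under a standard convex energy model, not the paper's controller): dynamic energy grows superlinearly with speed, so the energy-optimal policy for a single task runs at the lowest speed that still meets the deadline.

```python
def optimal_speed(cycles, arrival, deadline, s_min, s_max):
    """Lowest feasible constant speed for one task: energy per cycle grows with
    speed, so run as slowly as the hard deadline allows (clamped to the range)."""
    s = cycles / (deadline - arrival)
    if s > s_max:
        return None                     # infeasible even at full speed
    return max(s, s_min)

def energy(cycles, speed, k=1.0):
    """Convex energy model: power ~ k*s^3, duration = cycles/s  =>  E = k*cycles*s^2."""
    return k * cycles * speed ** 2
```

    For 100 cycles due in 50 time units, speed 2 suffices and costs energy 400, versus 1600 at full speed 4: the same work at half the speed uses a quarter of the energy under this model.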

  10. Statistical mechanics of fluids under internal constraints: Rigorous results for the one-dimensional hard rod fluid

    International Nuclear Information System (INIS)

    Corti, D.S.; Debenedetti, P.G.

    1998-01-01

    The rigorous statistical mechanics of metastability requires the imposition of internal constraints that prevent access to regions of phase space corresponding to inhomogeneous states. We derive exactly the Helmholtz energy and equation of state of the one-dimensional hard rod fluid under the influence of an internal constraint that places an upper bound on the distance between nearest-neighbor rods. This type of constraint is relevant to the suppression of boiling in a superheated liquid. We determine the effects of this constraint upon the thermophysical properties and internal structure of the hard rod fluid. By adding an infinitely weak and infinitely long-ranged attractive potential to the hard core, the fluid exhibits a first-order vapor-liquid transition. We determine exactly the equation of state of the one-dimensional superheated liquid and show that it exhibits metastable phase equilibrium. We also derive statistical mechanical relations for the equation of state of a fluid under the action of arbitrary constraints, and show the connection between the statistical mechanics of constrained and unconstrained ensembles.
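
    For reference, the unconstrained one-dimensional hard-rod (Tonks) fluid has the well-known exact free-volume equation of state, for N rods of length σ on a line of length L:

```latex
P = \frac{N k_B T}{L - N\sigma}
```

    The constrained ensemble studied in the paper adds an upper bound on nearest-neighbor separations to this picture, which is what permits a rigorous treatment of the metastable (superheated) branch.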

  11. Service-Oriented Architecture (SOA) Instantiation within a Hard Real-Time, Deterministic Combat System Environment

    Science.gov (United States)

    Moreland, James D., Jr

    2013-01-01

    This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…

  12. Towards harnessing theories through tool support for hard real-time Java programming

    DEFF Research Database (Denmark)

    Bøgholm, Thomas; Frost, Christian; Hansen, Rene Rydhof

    2013-01-01

    We present a rationale for a selection of tools that assist developers of hard real-time applications to verify that programs conform to a Java real-time profile and that platform-specific resource constraints are satisfied. These tools are specialised instances of more generic static analysis and model checking frameworks. The concepts are illustrated by two case studies, and the strengths and the limitations of the tools are discussed.

  13. Towards harnessing theories through tool support for hard real-time Java programming

    DEFF Research Database (Denmark)

    Søndergaard, Hans; Bøgholm, Thomas; Frost, Christian

    2012-01-01

    We present a rationale for a selection of tools that assist developers of hard real-time applications to verify that programs conform to a Java real-time profile and that platform-specific resource constraints are satisfied. These tools are specialised instances of more generic static analysis and model checking frameworks. The concepts are illustrated by two case studies, and the strengths and the limitations of the tools are discussed.

  14. Evaluating Distributed Timing Constraints

    DEFF Research Database (Denmark)

    Kristensen, C.H.; Drejer, N.

    1994-01-01

    In this paper we describe a solution to the problem of implementing time-optimal evaluation of timing constraints in distributed real-time systems.

  15. Reliability modeling of a hard real-time system using the path-space approach

    International Nuclear Information System (INIS)

    Kim, Hagbae

    2000-01-01

    A hard real-time system, such as a fly-by-wire system, fails catastrophically (e.g. losing stability) if its control inputs are not updated by its digital controller computer within a certain timing constraint called the hard deadline. To assess and validate such systems' reliabilities with a semi-Markov model that explicitly contains the deadline information, we propose a path-space approach for deriving the upper and lower bounds of the probability of system failure. These bounds are derived using only simple parameters, and they are especially suitable for highly reliable systems which should recover quickly. Analytical bounds are derived for the commonly encountered exponential and Weibull failure distributions, and they have proven effective in numerical examples considering three repair strategies: repair-as-good-as-new, repair-as-good-as-old, and repair-better-than-old.

  16. From Hard Times to Better Times: College Majors, Unemployment, and Earnings

    Science.gov (United States)

    Carnevale, Anthony P.; Cheah, Ban

    2015-01-01

    This third installment of "Hard Times" updates the previous analyses of college majors, unemployment, and earnings over the Great Recession. While there is wide variation by college majors, hard times have become better times for most college graduates, but the recovery is far from complete. Hard times are becoming better times for most…

  17. University Course Timetabling using Constraint Programming

    Directory of Open Access Journals (Sweden)

    Hadi Shahmoradi

    2017-03-01

    The university course timetabling problem is a challenging and time-consuming task affecting the overall structure of the timetable in every academic environment. The problem involves many factors, such as the number of lessons, classes, teachers, students, and working times, and these are governed by hard and soft constraints. The aim of solving this problem is to assign courses and classes to teachers and students so that the restrictions hold. In this paper, a constraint programming method is proposed to satisfy the maximum number of constraints and expectations, in order to address the university timetabling problem. For minimizing the penalty of soft constraints, a cost function is introduced, and the AHP method is used for calculating its coefficients. The proposed model is tested on the Department of Management dataset of the University of Isfahan, using OPL on the IBM ILOG CPLEX Optimization Studio platform. A statistical analysis shows the performance of the proposed approach in satisfying all hard constraints, while the satisfaction degree of the soft constraints is at the maximum desirable level. The running time of the model is less than 20 minutes, which is significantly better than that of non-automated approaches.
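
    The hard/soft split in such models can be illustrated with a tiny exhaustive search (a toy stand-in for a CP solver; the course, slot, and preference data are invented): assignments violating a hard constraint are discarded outright, while soft preferences only contribute a penalty to be minimized.

```python
from itertools import product

def timetable(courses, slots, teacher, prefer_penalty):
    """Tiny CP-style search: assign each course a slot so that no teacher is
    double-booked (hard constraint), minimizing soft-preference penalties."""
    best = None
    for assign in product(slots, repeat=len(courses)):
        used = set()
        feasible = True
        for c, s in zip(courses, assign):
            key = (teacher[c], s)       # a teacher can hold one course per slot
            if key in used:
                feasible = False
                break
            used.add(key)
        if not feasible:
            continue                    # hard constraint: prune, never penalize
        penalty = sum(prefer_penalty.get((c, s), 0) for c, s in zip(courses, assign))
        if best is None or penalty < best[0]:
            best = (penalty, dict(zip(courses, assign)))
    return best
```

    A real CP solver such as CPLEX CP Optimizer replaces the brute-force loop with propagation and search, but the modeling split is the same.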

  18. Minimizing Total Completion Time For Preemptive Scheduling With Release Dates And Deadline Constraints

    Directory of Open Access Journals (Sweden)

    He Cheng

    2014-02-01

    It is known that the single-machine preemptive scheduling problem of minimizing total completion time with release date and deadline constraints is NP-hard. Du and Leung solved some special cases by the generalized Baker's algorithm and the generalized Smith's algorithm in O(n²) time. In this paper we give an O(n²) algorithm for the special case where the processing times and deadlines are agreeable. Moreover, for the case where the processing times and deadlines are disagreeable, we present two properties which enable us to reduce the range of the enumeration algorithm.
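
    Without the deadline constraints, the preemptive problem is solved optimally by the classical Shortest Remaining Processing Time rule; the sketch below shows that baseline (deadlines, which make the paper's problem NP-hard, are not modeled):

```python
import heapq

def srpt_total_completion(jobs):
    """SRPT: preemptively run the released job with least remaining work.
    Optimal for 1 | pmtn, r_j | sum C_j; jobs are (release, processing) pairs."""
    jobs = sorted(jobs)
    n, i, done = len(jobs), 0, 0
    t, total, heap = 0.0, 0.0, []            # heap entries: [remaining, release]
    while done < n:
        while i < n and jobs[i][0] <= t:     # admit everything released by now
            heapq.heappush(heap, [float(jobs[i][1]), jobs[i][0]])
            i += 1
        if not heap:
            t = float(jobs[i][0])            # idle until the next release
            continue
        next_release = jobs[i][0] if i < n else float("inf")
        run = min(heap[0][0], next_release - t)
        t += run
        heap[0][0] -= run                    # decreasing the min keeps the heap valid
        if heap[0][0] == 0.0:
            heapq.heappop(heap)
            done += 1
            total += t                       # completion time of the finished job
    return total
```

    For jobs (release 0, length 3) and (release 1, length 1), SRPT preempts the long job, giving total completion time 2 + 4 = 6 versus 7 without preemption.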

  19. Time Optimal Run-time Evaluation of Distributed Timing Constraints in Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.; Kristensen, C.H.

    1993-01-01

    This paper considers run-time evaluation of an important class of constraints: timing constraints. These appear extensively in process control systems. Timing constraints are considered in distributed systems, i.e. systems consisting of multiple autonomous nodes.

  20. Implementing Run-Time Evaluation of Distributed Timing Constraints in a Real-Time Environment

    DEFF Research Database (Denmark)

    Kristensen, C. H.; Drejer, N.

    1994-01-01

    In this paper we describe a solution to the problem of implementing run-time evaluation of timing constraints in distributed real-time environments.

  1. Hard times; Schwere Zeiten

    Energy Technology Data Exchange (ETDEWEB)

    Grunwald, Markus

    2012-10-02

    The prices of silicon and solar wafers keep dropping. According to the market research firm IMS Research, this is the result of weak traditional solar markets and global overcapacity. While many manufacturers are facing hard times, big producers of silicon are continuing to expand.

  2. Implementing Run-time Evaluation of Distributed Timing Constraints in a Micro Kernel

    DEFF Research Database (Denmark)

    Kristensen, C.H.; Drejer, N.; Nielsen, Jens Frederik Dalsgaard

    In the present paper we describe a solution to the problem of implementing time-optimal evaluation of timing constraints in distributed real-time systems.

  3. Hard Times as Bodie: the allegorical functionality in E.L. Doctorow’s Welcome to Hard Times (1960)

    Directory of Open Access Journals (Sweden)

    P. van der Merwe

    2007-07-01

    “Welcome to Hard Times” (1960), E.L. Doctorow’s first novel, differs from the rest of his oeuvre because it is not set in a metropolitan context like New York. References to historical events that contain an apparent “mixture” of “factual” and fictional elements, typical of Doctorow’s oeuvre, are less prominent than in his other fiction, though definitely not absent. An analysis of the pioneer setting, the town Hard Times, reveals that other settings (including metropolitan ones like New York) are not merely representations of specific contexts, but portrayals with allegorical elements. Criticism of Doctorow’s fiction does not sufficiently point out the rationale of his fiction in relation to his first novel: it is not just the basic level that contains the true topicality but also the underlying causal and thematic relationships. This article sets out to explore “Welcome to Hard Times” as a case in point. The objective of this article is therefore also to show that an analysis of this novel provides a valuable basis for understanding the allegorical character of his fiction. Angus Fletcher’s theoretical analysis, “Allegory: the Theory of a Symbolic Mode” (1964), serves as a useful starting point for the analysis of the allegorical value of space and the town Hard Times as a microcosmic or symbolic society, as well as the “daemonic agents” in the town and the role of causality.

  4. Minimizing total weighted tardiness for the single machine scheduling problem with dependent setup time and precedence constraints

    Directory of Open Access Journals (Sweden)

    Hamidreza Haddad

    2012-04-01

    This paper tackles the single-machine scheduling problem with dependent setup times and precedence constraints. The primary objective is the minimization of total weighted tardiness. Since the resulting problem is NP-hard, we use a metaheuristic to solve the model. The proposed approach uses a genetic algorithm to solve the problem in a reasonable amount of time. Because of the high sensitivity of the GA to its initial parameter values, a Taguchi approach is presented to calibrate its parameters. Computational experiments validate the effectiveness and capability of the proposed method.

  5. Asymmetric continuous-time neural networks without local traps for solving constraint satisfaction problems.

    Directory of Open Access Journals (Sweden)

    Botond Molnár

    There has been a long history of using neural networks for combinatorial optimization and constraint satisfaction problems. Symmetric Hopfield networks and similar approaches use steepest-descent dynamics, and they always converge to the closest local minimum of the energy landscape. For finding global minima, additional parameter-sensitive techniques are used, such as classical simulated annealing or the so-called chaotic simulated annealing, which induces chaotic dynamics by adding extra terms to the energy landscape. Here we show that asymmetric continuous-time neural networks can solve constraint satisfaction problems without getting trapped in non-solution attractors. We concentrate on a model solving Boolean satisfiability (k-SAT), which is a quintessential NP-complete problem. There is a one-to-one correspondence between the stable fixed points of the neural network and the k-SAT solutions, and we present numerical evidence that limit cycles may also be avoided by appropriately choosing the parameters of the model. This optimal parameter region is fairly independent of the size and hardness of instances; in this way, parameters can be chosen independently of the properties of problems, and no tuning is required during the dynamical process. The model is similar to cellular neural networks already used in CNN computers. On an analog device, solving a SAT problem would take a single operation: the connection weights are determined by the k-SAT instance, and starting from any initial condition the system searches until it finds a solution. In this new approach, transient chaotic behavior appears as a natural consequence of optimization hardness and not as an externally induced effect.
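
    The flavor of such continuous-time dynamics can be illustrated with a small sketch in the spirit of the analog SAT system of Ercsey-Ravasz and Toroczkai (a related but distinct model; the paper's asymmetric network and its parameters are not reproduced here). Variables live in [-1, 1], each clause has a nonnegative "unsatisfaction" K that vanishes exactly when the clause is satisfied, and growing auxiliary weights keep the search from stalling:

```python
import numpy as np

def analog_sat(clauses, n, steps=2000, dt=0.05, seed=1):
    """Euler-integrated analog SAT search (illustrative). clauses: list of dicts
    {var_index: +1/-1}, +1 meaning the positive literal appears in the clause."""
    rng = np.random.default_rng(seed)
    s = rng.uniform(-0.1, 0.1, n)          # continuous variable states in [-1, 1]
    a = np.ones(len(clauses))              # auxiliary clause weights
    for _ in range(steps):
        ds = np.zeros(n)
        for m, cl in enumerate(clauses):
            K = np.prod([(1 - c * s[i]) / 2 for i, c in cl.items()])
            for i, c in cl.items():        # gradient of the weighted clause term
                K_mi = K / ((1 - c * s[i]) / 2 + 1e-12)
                ds[i] += a[m] * c * K * K_mi
            a[m] *= 1 + dt * K             # weight grows while clause unsatisfied
        s = np.clip(s + dt * ds, -1, 1)
    return s > 0                           # read out a Boolean assignment

def satisfies(clauses, x):
    return all(any((x[i] if c > 0 else not x[i]) for i, c in cl.items())
               for cl in clauses)
```

    At a satisfying assignment every K is zero, so the dynamics freeze: solutions are fixed points, mirroring the correspondence the abstract describes.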

  6. Sustaining Transformation: "Resiliency in Hard Times"

    Science.gov (United States)

    Guarasci, Richard; Lieberman, Devorah

    2009-01-01

    The strategic, systemic, and encompassing evolution of a college or university spans a number of years, and the vagaries of economic cycles inevitably catch transforming institutions in mid-voyage. "Sustaining Transformation: Resiliency in Hard Times" presents a study of Wagner College as it moves into its second decade of purposeful…

  7. Dynamic I/O Power Management for Hard Real-Time Systems

    Science.gov (United States)

    2005-01-01

    DPM has recently emerged as an attractive alternative to inflexible hardware solutions. DPM for hard real-time systems has received relatively little attention. In particular, energy-driven I/O device scheduling for real-time systems has not been considered before. We present the first online DPM algorithm, which we call Low Energy Device Scheduler (LEDES), for hard real-time systems. LEDES takes as inputs a predetermined task schedule and a device-usage

  8. A Novel Spatial-Temporal Voronoi Diagram-Based Heuristic Approach for Large-Scale Vehicle Routing Optimization with Time Constraints

    Directory of Open Access Journals (Sweden)

    Wei Tu

    2015-10-01

    Vehicle routing optimization (VRO) designs the best routes to reduce travel cost, energy consumption, and carbon emissions. Due to their non-deterministic polynomial-time hard (NP-hard) complexity, many VROs involved in real-world applications require too much computing effort. Shortening computing time for VRO is a great challenge for state-of-the-art spatial optimization algorithms. From a spatial-temporal perspective, this paper presents a spatial-temporal Voronoi diagram-based heuristic approach for large-scale vehicle routing problems with time windows (VRPTW). Considering time constraints, a spatial-temporal Voronoi distance is derived from the spatial-temporal Voronoi diagram to find near neighbors in the space-time searching context. A Voronoi distance decay strategy that integrates a time warp operation is proposed to accelerate local search procedures. A spatial-temporal feature-guided search is developed to improve unpromising micro route structures. Experiments on VRPTW benchmarks and real-world instances are conducted to verify performance. The results demonstrate that the proposed approach is competitive with state-of-the-art heuristics and achieves high-quality solutions for large-scale instances of VRPTWs in a short time. This novel approach will contribute to the spatial decision support community by providing an effective vehicle routing optimization method for large transportation applications in both the public and private sectors.
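
    The idea of a spatial-temporal proximity measure can be sketched simply (the weighting below is an illustrative assumption, not the paper's exact Voronoi distance): pure travel time is augmented with penalties for arriving before a customer's time window opens or after it closes, so "near" neighbors are near in both space and time.

```python
def st_distance(travel_time, departure, window, alpha=1.0, beta=1.0):
    """Toy spatial-temporal proximity: travel time plus waiting and lateness
    penalties against the customer's time window (ready, due)."""
    ready, due = window
    arrival = departure + travel_time
    wait = max(0.0, ready - arrival)     # early arrival forces idle waiting
    late = max(0.0, arrival - due)       # late arrival violates the window
    return travel_time + alpha * wait + beta * late
```

    Ranking candidate customers by such a measure prunes neighbors that are spatially close but temporally incompatible, which is what accelerates VRPTW local search.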

  9. Constraint Logic Programming for Resolution of Relative Time Expressions

    DEFF Research Database (Denmark)

    Christiansen, Henning

    2014-01-01

    Translating time expressions into absolute time points or durations is a challenge for natural language processing tasks such as text mining and text understanding in general. We present a constraint logic language, CLP(Time), tailored to text usages concerned with time and calendars. It provides a simple and flexible formalism to express relationships between different time expressions in a text, thereby giving a recipe for resolving them into absolute time. A constraint solver is developed which, as opposed to some earlier approaches, is independent of the order in which temporal information is introduced...

  10. The ticking time bomb: Using eye-tracking methodology to capture attentional processing during gradual time constraints.

    Science.gov (United States)

    Franco-Watkins, Ana M; Davis, Matthew E; Johnson, Joseph G

    2016-11-01

    Many decisions are made under suboptimal circumstances, such as time constraints. We examined how different experiences of time constraints affected decision strategies on a probabilistic inference task and whether individual differences in working memory accounted for complex strategy use across different levels of time. To examine information search and attentional processing, we used an interactive eye-tracking paradigm where task information was occluded and only revealed by an eye fixation to a given cell. Our results indicate that although participants change search strategies during the most restricted times, the occurrence of the shift in strategies depends both on how the constraints are applied as well as individual differences in working memory. This suggests that, in situations that require making decisions under time constraints, one can influence performance by being sensitive to working memory and, potentially, by acclimating people to the task time gradually.

  11. On quantization of time-dependent systems with constraints

    International Nuclear Information System (INIS)

    Gadjiev, S A; Jafarov, R G

    2007-01-01

    The Dirac method of canonical quantization of theories with second-class constraints has to be modified if the constraints depend on time explicitly. A solution of the problem was given by Gitman and Tyutin. In the present work we propose an independent way to derive the rules of quantization for these systems, starting from a physically equivalent theory with trivial non-stationarity.

  12. On quantization of time-dependent systems with constraints

    International Nuclear Information System (INIS)

    Hadjialieva, F.G.; Jafarov, R.G.

    1993-07-01

    The Dirac method of canonical quantization of theories with second-class constraints has to be modified if the constraints depend on time explicitly. A solution of the problem was given by Gitman and Tyutin. In the present work we propose an independent way to derive the rules of quantization for these systems, starting from a physically equivalent theory with trivial non-stationarity.

  13. On quantization of time-dependent systems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Gadjiev, S A; Jafarov, R G [Institute for Physical Problems, Baku State University, AZ11 48 Baku (Azerbaijan)

    2007-03-30

    The Dirac method of canonical quantization of theories with second-class constraints has to be modified if the constraints depend on time explicitly. A solution of the problem was given by Gitman and Tyutin. In the present work we propose an independent way to derive the rules of quantization for these systems, starting from a physically equivalent theory with trivial non-stationarity.

  14. Using soft constraints to guide users in flexible business process management systems

    DEFF Research Database (Denmark)

    Stefansen, Christian; Borch, Signe Ellegård

    2008-01-01

    Current Business Process Management Systems (BPMS) allow designers to specify processes in highly expressive languages supporting numerous control-flow constructs, exceptions, complex predicates, etc., but process specifications are expressed in terms of hard constraints, and this leads to an unfortunate trade-off: information about preferred practices must either be abandoned or promoted to hard constraints. If abandoned, the BPMS cannot guide its users; if promoted to hard constraints, it becomes a hindrance when unanticipated deviations occur. Soft constraints can make this trade-off less painful. Soft constraints specify what rules can be violated and by how much. With soft constraints, the BPMS knows what deviations it can permit, and it can guide the user through the process. The BPMS should allow designers to easily specify soft goals and allow its users to immediately see...

  15. Hard-real-time resource management for autonomous spacecraft

    Science.gov (United States)

    Gat, E.

    2000-01-01

    This paper describes tickets, a computational mechanism for hard-real-time autonomous resource management. Autonomous spacecraft control can be considered abstractly as a computational process whose outputs are spacecraft commands.

  16. Legal, ethical, and economic constraints

    International Nuclear Information System (INIS)

    Libassi, F.P.; Donaldson, L.F.

    1980-01-01

    This paper considers the legal, ethical, and economic constraints to developing a comprehensive knowledge of the biological effects of ionizing radiation. These constraints are not fixed and immutable; rather, they are determined by the political process. Political issues cannot be evaded. The basic objective of developing a comprehensive knowledge about the biological effects of ionizing radiation exists as an objective not only because we wish to add to the store of human knowledge but also because we have important uses for that knowledge. It will assist our decision-makers in making choices that affect us all. These choices require both hard factual information and the application of political judgment. Research supplies some of the hard factual information and should be as free as possible from political influence in its execution. At the same time, the political choices that must be made influence the direction and nature of the research program as a whole. Similarly, the legal, ethical, and economic factors that constrain our ability to expand knowledge through research reflect a judgment by political agents that values other than the expansion of knowledge should be recognized and given effect.

  17. Formal Constraints on Memory Management for Composite Overloaded Operations

    Directory of Open Access Journals (Sweden)

    Damian W.I. Rouson

    2006-01-01

    The memory management rules for abstract data type calculus presented by Rouson, Morris & Xu [15] are recast as formal statements in the Object Constraint Language (OCL) and applied to the design of a thermal energy equation solver. One set of constraints eliminates memory leaks observed in composite overloaded expressions with three current Fortran 95/2003 compilers. A second set of constraints ensures economical memory recycling. The constraints are preconditions, postconditions and invariants on overloaded operators and the objects they receive and return. It is demonstrated that systematic run-time assertion checking inspired by the formal constraints facilitated the pinpointing of an exceptionally hard-to-reproduce compiler bug. It is further demonstrated that the interplay between OCL's modeling capabilities and Fortran's programming capabilities led to a conceptual breakthrough that greatly improved the readability of our code by facilitating operator overloading. The advantages and disadvantages of our memory management rules are discussed in light of other published solutions [11,19]. Finally, it is demonstrated that the run-time assertion checking has a negligible impact on performance.

  18. Reconfiguration in FPGA-Based Multi-Core Platforms for Hard Real-Time Applications

    DEFF Research Database (Denmark)

    Pezzarossa, Luca; Schoeberl, Martin; Sparsø, Jens

    2016-01-01

    In general-purpose computing multi-core platforms, hardware accelerators and reconfiguration are means to improve performance; i.e., the average-case execution time of a software application. In hard real-time systems, such average-case speed-up is not in itself relevant - it is the worst-case execution time of the tasks of an application that determines the system's ability to respond in time. To support this focus, the platform must provide service guarantees for both communication and computation resources. In addition, many hard real-time applications have multiple modes of operation, and each mode has specific requirements. An interesting perspective on reconfigurable computing is to exploit run-time reconfiguration to support mode changes. In this paper we explore approaches to reconfiguration of communication and computation resources in the T-CREST hard real-time multi-core platform...

  19. Ant colony optimization and constraint programming

    CERN Document Server

    Solnon, Christine

    2013-01-01

    Ant colony optimization is a metaheuristic which has been successfully applied to a wide range of combinatorial optimization problems. The author describes this metaheuristic and studies its efficiency for solving some hard combinatorial problems, with a specific focus on constraint programming. The text is organized into three parts. The first part introduces constraint programming, which provides high level features to declaratively model problems by means of constraints. It describes the main existing approaches for solving constraint satisfaction problems, including complete tree search

  20. Correlation-based decimation in constraint satisfaction problems

    International Nuclear Information System (INIS)

    Higuchi, Saburo; Mezard, Marc

    2010-01-01

    We study hard constraint satisfaction problems using decimation algorithms based on mean-field approximations. The message-passing approach is used to estimate, besides the usual one-variable marginals, the pair correlation functions. The identification of strongly correlated pairs allows us to use a new decimation procedure, in which the relative orientation of a pair of variables is fixed. We apply this novel decimation to locked occupation problems, a class of hard constraint satisfaction problems where the usual belief-propagation-guided decimation performs poorly. The pair-decimation approach provides a significant improvement.
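The pair-fixing step can be illustrated in a few lines (an invented miniature; the paper estimates these quantities via message passing, which is not reproduced here):

```python
# Illustrative sketch: given estimated magnetizations m[i] = <s_i> and pair
# expectations p[(i, j)] = <s_i s_j> for +/-1 variables, pick the pair with
# the strongest connected correlation and fix its relative orientation.

def strongest_pair(m, p):
    """Return ((i, j), sign) for the pair maximizing
    |<s_i s_j> - <s_i><s_j>|; `sign` fixes s_i = sign * s_j."""
    def connected(ij):
        i, j = ij
        return p[ij] - m[i] * m[j]
    ij = max(p, key=lambda ij: abs(connected(ij)))
    sign = 1 if connected(ij) >= 0 else -1
    return ij, sign

m = {0: 0.1, 1: -0.2, 2: 0.05}
p = {(0, 1): 0.9, (1, 2): -0.01, (0, 2): 0.2}
print(strongest_pair(m, p))  # ((0, 1), 1)
```

After such a step, the two variables are merged into one effective variable and the marginals are re-estimated, which is what distinguishes pair decimation from fixing a single most-biased variable.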

  1. RTnet -- A Flexible Hard Real-Time Networking Framework

    NARCIS (Netherlands)

    Kiszka, Jan; Wagner, Bernardo; Zhang, Yuchen; Broenink, Johannes F.

    2005-01-01

    In this paper, the open source project RTnet is presented. RTnet provides a customisable and extensible framework for hard real-time communication over Ethernet and other transport media. The paper describes architecture, core components, and protocols of RTnet. FireWire is introduced as a powerful

  2. Departure time choice: Modelling individual preferences, intention and constraints

    DEFF Research Database (Denmark)

    Thorhauge, Mikkel

    ...by nearly all studies within departure time. More importantly, it shows that the underlying psychological processes are more complex than simply accounting for attitudes and perceptions, which are typically used in other areas. The work in this PhD thesis accounts for the full Theory of Planned Behaviour... but can also be perceived by the individuals as barriers towards participating in activities. Perceived constraints affect the departure time choice through the individual's intention of being on time. This PhD thesis also contributes to the departure time literature by discussing the problem of collecting... whether they are constrained. The thesis also provides empirical evidence of the policy implications of not accounting for other activities and their constraints. Thirdly, the thesis shows that the departure time choice can be partly explained by psychological factors, which have previously been neglected...

  3. The Roles of Ideological State Apparatus in Maintaining Hegemony in Charles Dickens' Hard Times

    OpenAIRE

    Prasetya, Farid Adi

    2013-01-01

    A novel, as a literary work, may reflect social phenomena. This correlation between literary works and social phenomena motivates an analysis of the novel Hard Times by Charles Dickens, which covers such a phenomenon. The overall image of Hard Times is the society of an industrial city, namely Coketown, which has unequal economic conditions. Through the characters that appear in the novel, it can be seen that Hard Times reflects social clashes that are triggered by economic conditio...

  4. Constraints on a parity-even/time-reversal-odd interaction

    International Nuclear Information System (INIS)

    Oers, Willem T.H. van

    2000-01-01

    Time-reversal-invariance non-conservation has for the first time been unequivocally demonstrated in a direct measurement, one of the results of the CPLEAR experiment. What is the situation then with regard to time-reversal-invariance non-conservation in systems other than the neutral kaon system? Two classes of tests of time-reversal invariance need to be distinguished: the first one deals with parity-violating (P-odd)/time-reversal-invariance non-conserving (T-odd) interactions, while the second one deals with P-even/T-odd interactions (assuming CPT conservation, this implies C-conjugation non-conservation). Limits on a P-odd/T-odd interaction follow from measurements of the electric dipole moment of the neutron. This in turn provides a limit on a P-odd/T-odd pion-nucleon coupling constant which is 10⁻⁴ times the weak interaction strength. Limits on a P-even/T-odd interaction are much less stringent. The best constraint stems also from the measurement of the electric dipole moment of the neutron. Of all the other tests, measurements of charge-symmetry breaking in neutron-proton elastic scattering provide the next best constraint. The latter experiments were performed at TRIUMF (at 477 and 347 MeV) and at IUCF (at 183 MeV). Weak decay experiments (the transverse polarization of the muon in K⁺ → π⁰μ⁺ν_μ and the transverse polarization of the positrons in polarized muon decay) have the potential to provide comparable or possibly better constraints.

  5. Using LDPC Code Constraints to Aid Recovery of Symbol Timing

    Science.gov (United States)

    Jones, Christopher; Villasenor, John; Lee, Dong-U; Valles, Esteban

    2008-01-01

    A method of utilizing information available in the constraints imposed by a low-density parity-check (LDPC) code has been proposed as a means of aiding the recovery of symbol timing in the reception of a binary-phase-shift-keying (BPSK) signal representing such a code in the presence of noise, timing error, and/or Doppler shift between the transmitter and the receiver. This method and the receiver architecture in which it would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. Acquisition and tracking of a signal of the type described above have traditionally been performed upstream of, and independently of, decoding and have typically involved utilization of a phase-locked loop (PLL). However, the LDPC decoding process, which is iterative, provides information that can be fed back to the timing-recovery receiver circuits to improve performance significantly over that attainable in the absence of such feedback. Prior methods of coupling LDPC decoding with timing recovery had focused on the use of output code words produced as the iterations progress. In contrast, in the present method, one exploits the information available from the metrics computed for the constraint nodes of an LDPC code during the decoding process. In addition, the method involves the use of a waveform model that captures, better than do the waveform models of the prior methods, distortions introduced by receiver timing errors and transmitter/receiver motions. An LDPC code is commonly represented by use of a bipartite graph containing two sets of nodes. In the graph corresponding to an (n,k) code, the n variable nodes correspond to the code word symbols and the n-k constraint nodes represent the constraints that the code places on the variable nodes in order for them to form a valid code word. The decoding procedure involves iterative computation
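The constraint-node computation that underlies such metrics can be sketched with the standard tanh rule of LDPC belief propagation (a generic illustration with invented function names; this is not the receiver architecture of the abstract):

```python
# Sketch of the standard constraint-node (check-node) update in LDPC belief
# propagation: the log-likelihood ratio (LLR) sent toward one variable is
# computed from the incoming LLRs of all the other variables on the check.
import math

def check_node_out(in_llrs, exclude):
    """LLR from a constraint node to variable `exclude`,
    via the tanh rule: L_out = 2 atanh( prod_k tanh(L_k / 2) )."""
    prod = 1.0
    for k, llr in enumerate(in_llrs):
        if k != exclude:
            prod *= math.tanh(llr / 2.0)
    return 2.0 * math.atanh(prod)

def check_reliability(in_llrs):
    """A simple per-constraint metric: the full tanh product. Values near
    +1 mean the parity constraint is confidently satisfied."""
    prod = 1.0
    for llr in in_llrs:
        prod *= math.tanh(llr / 2.0)
    return prod

llrs = [2.0, -1.5, 3.0]
print(check_node_out(llrs, exclude=0))   # approximately -1.31
print(check_reliability([2.0, 2.0, 2.0]))
```

In a coupled receiver, per-constraint reliabilities of this kind could serve as the decoder-side feedback signal that informs the timing-recovery loop.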

  6. Constraining designs for synthesis and timing analysis: a practical guide to synopsys design constraints (SDC)

    CERN Document Server

    Gangadharan, Sridhar

    2013-01-01

    This book serves as a hands-on guide to timing constraints in integrated circuit design. Readers will learn to maximize the performance of their IC designs by specifying timing requirements correctly. Coverage includes key aspects of the design flow impacted by timing constraints, including synthesis, static timing analysis, and placement and routing. Concepts needed for specifying timing requirements are explained in detail and then applied to specific stages in the design flow, all within the context of Synopsys Design Constraints (SDC), the industry-leading format for specifying constraints. The book provides a hands-on guide to synthesis and timing analysis using SDC; includes key topics of interest to a synthesis, static timing analysis, or place-and-route engineer; and explains which constraints command to use for ease of maintenance and reuse, given several options pos...

  7. Periodic capacity management under a lead-time performance constraint

    NARCIS (Netherlands)

    Büyükkaramikli, N.C.; Bertrand, J.W.M.; Ooijen, van H.P.G.

    2013-01-01

    In this paper, we study a production system that operates under a lead-time performance constraint which guarantees the completion of an order before a pre-determined lead-time with a certain probability. The demand arrival times and the service requirements for the orders are random. To reduce the

  8. Notes on Timed Concurrent Constraint Programming

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Valencia, Frank D.

    2004-01-01

    A constraint is a piece of (partial) information on the values of the variables of a system. Concurrent constraint programming (ccp) is a model of concurrency in which agents (also called processes) interact by telling and asking information (constraints) to and from a shared store (a constraint... and program reactive systems. This note provides a comprehensive introduction to the background for and central notions from the theory of tccp. Furthermore, it surveys recent results on a particular tccp calculus, ntcc, and it provides a classification of the expressive power of various tccp languages.

  9. Geometry and dynamics with time-dependent constraints

    CERN Document Server

    Evans, Jonathan M.; Tuckey, Philip A.

    1995-01-01

    We describe how geometrical methods can be applied to a system with explicitly time-dependent second-class constraints so as to cast it in Hamiltonian form on its physical phase space. Examples of particular interest are systems which require time-dependent gauge fixing conditions in order to reduce them to their physical degrees of freedom. To illustrate our results we discuss the gauge-fixing of relativistic particles and strings moving in arbitrary background electromagnetic and antisymmetric tensor fields.

  10. A Conceptual Level Design for a Static Scheduler for Hard Real-Time Systems

    Science.gov (United States)

    1988-03-01

    The design of hard real-time systems is gaining a great deal of attention in the software engineering field as more and more real-world processes are... for these hard real-time systems. PSDL, as an executable design language, is supported by an execution support system consisting of a static scheduler, dynamic scheduler, and translator.

  11. Pruning-Based, Energy-Optimal, Deterministic I/O Device Scheduling for Hard Real-Time Systems

    Science.gov (United States)

    2005-02-01

    However, DPM via I/O device scheduling for hard real-time systems has received relatively little attention. In this paper, we present an offline I/O... polynomial time. We present experimental results to show that EDS and MDO reduce the energy consumption of I/O devices significantly for hard real-time systems.

  12. Minimum Time Trajectory Optimization of CNC Machining with Tracking Error Constraints

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2014-01-01

    An off-line optimization approach for high-precision minimum-time feedrate in CNC machining is proposed. Besides the ordinarily considered velocity, acceleration, and jerk constraints, a dynamic performance constraint of each servo drive is also considered in this optimization problem to improve the tracking precision along the optimized feedrate trajectory. Tracking error is used to indicate the servo dynamic performance of each axis. By using variable substitution, the tracking-error-constrained minimum-time trajectory planning problem is formulated as a nonlinear path-constrained optimal control problem. The bang-bang structure of the constraints along the optimal trajectory is proved in this paper; then a novel constraint handling method is proposed to realize a convex-optimization-based solution of the nonlinear constrained optimal control problem. A simple ellipse feedrate planning test is presented to demonstrate the effectiveness of the approach. Then the practicability and robustness of the trajectory generated by the proposed approach are demonstrated by a butterfly contour machining example.
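The bang-bang structure can be illustrated on a deliberately simplified 1D special case (this is only the intuition; the paper solves a far more general path-constrained problem via convex optimization):

```python
# Simplified illustration of bang-bang time-optimal motion: minimum time to
# traverse distance L in 1D under |v| <= vmax and |a| <= amax, starting and
# ending at rest. The optimal profile saturates a constraint at all times:
# full acceleration, then (possibly) cruise at vmax, then full deceleration.
import math

def min_time_1d(L, vmax, amax):
    t_acc = vmax / amax              # time to reach vmax at full acceleration
    d_acc = 0.5 * amax * t_acc ** 2  # distance covered while accelerating
    if 2 * d_acc >= L:               # triangular profile: vmax never reached
        return 2 * math.sqrt(L / amax)
    d_cruise = L - 2 * d_acc         # trapezoidal: accelerate, cruise, brake
    return 2 * t_acc + d_cruise / vmax

print(min_time_1d(L=10.0, vmax=2.0, amax=1.0))  # 7.0
```

Adding jerk and servo-tracking constraints, as the paper does, changes which constraint is active where, but the optimal solution still rides the active constraint boundary.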

  13. Thinking aloud in the presence of interruptions and time constraints

    DEFF Research Database (Denmark)

    Hertzum, Morten; Holmegaard, Kristin Due

    2013-01-01

    Thinking aloud is widely used for usability evaluation and its reactivity is therefore important to the quality of evaluation results. This study investigates whether thinking aloud (i.e., verbalization at levels 1 and 2) affects the behaviour of users who perform tasks that involve interruptions and time constraints, two frequent elements of real-world activities. We find that the presence of auditory, visual, audiovisual, or no interruptions interacts with thinking aloud for task solution rate, task completion time, and participants' fixation rate. Thinking-aloud participants also spend longer responding to interruptions than control participants. Conversely, the absence or presence of time constraints does not interact with thinking aloud, suggesting that time pressure is less likely to make thinking aloud reactive than previously assumed. Our results inform practitioners faced with the decision...

  14. Constraints on Dbar uplifts

    International Nuclear Information System (INIS)

    Alwis, S.P. de

    2016-01-01

    We discuss constraints on KKLT/KKLMMT and LVS scenarios that use anti-branes to get an uplift to a de Sitter vacuum, coming from requiring the validity of an effective field theory description of the physics. We find these are not always satisfied, or are hard to satisfy.

  15. An Analysis of a Hard Real-Time Execution Environment Extension for FreeRTOS

    Directory of Open Access Journals (Sweden)

    STANGACIU, C.

    2015-08-01

    FreeRTOS is a popular real-time operating system which has received significant attention in recent years due to its main advantages: it is open source, portable, well documented and implemented on more than 30 architectures. The FreeRTOS execution environment is dynamic, preemptive and priority based, but it is not suitable for hard real-time tasks, because it provides task execution determinism only to a certain degree and cannot guarantee the absence of task execution jitter. As a solution to this problem, we propose a hard real-time execution extension to FreeRTOS in order to support a particular model of HRT tasks, called ModXs, which are executed with no jitter. This article presents a detailed analysis, in terms of scheduling, task execution and memory usage, of this hard real-time execution environment extension. The article concludes with the advantages this extension brings to the system, weighed against the small memory and timing overhead introduced.

  16. Randomized Caches Considered Harmful in Hard Real-Time Systems

    Directory of Open Access Journals (Sweden)

    Jan Reineke

    2014-06-01

    We investigate the suitability of caches with randomized placement and replacement in the context of hard real-time systems. Such caches have been claimed to drastically reduce the amount of information required by static worst-case execution time (WCET) analysis, and to be an enabler for measurement-based probabilistic timing analysis. We refute these claims and conclude that, with prevailing static and measurement-based analysis techniques, caches with deterministic placement and least-recently-used replacement are preferable over randomized ones.
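The core contrast can be shown with a toy cache simulation (invented, not from the paper): on a fixed access trace, deterministic LRU yields a single hit count that static analysis can bound, while randomized replacement yields a distribution of hit counts.

```python
# Toy illustration: LRU is deterministic per trace; random replacement is not.
import random
from collections import OrderedDict

def lru_hits(trace, ways):
    cache, hits = OrderedDict(), 0
    for addr in trace:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)          # mark as most recently used
        else:
            if len(cache) == ways:
                cache.popitem(last=False)    # evict least recently used
            cache[addr] = True
    return hits

def random_hits(trace, ways, rng):
    cache, hits = set(), 0
    for addr in trace:
        if addr in cache:
            hits += 1
        else:
            if len(cache) == ways:
                cache.remove(rng.choice(sorted(cache)))  # evict at random
            cache.add(addr)
    return hits

trace = [0, 1, 2, 0, 1, 2, 0, 1, 2]   # 3 blocks cycling through 2 ways
print(lru_hits(trace, ways=2))        # 0: LRU thrashes, deterministically
print(sorted({random_hits(trace, 2, random.Random(s)) for s in range(20)}))
```

On this thrashing trace LRU always misses, one fixed number; the randomized cache sometimes gets lucky, so its hit count varies run to run, which is exactly what probabilistic timing analysis must then model.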

  17. Adaptive Neural Network Control for Nonlinear Hydraulic Servo-System with Time-Varying State Constraints

    Directory of Open Access Journals (Sweden)

    Shu-Min Lu

    2017-01-01

    An adaptive neural network control problem is addressed for a class of nonlinear hydraulic servo-systems with time-varying state constraints. In view of the low-precision problem of the traditional hydraulic servo-system, caused by the tracking errors surpassing an appropriate bound, previous works have shown that constraining the system is a good way to solve the low-precision problem. Meanwhile, compared with constant constraints, time-varying state constraints are more general in actual systems. Therefore, when the states of the system are forced to obey bounded time-varying constraint conditions, high-precision tracking performance of the system can be easily realized. In order to achieve this goal, a time-varying barrier Lyapunov function (TVBLF) is used to prevent the states from violating the time-varying constraints. By the backstepping design, the adaptive controller is obtained. A radial basis function neural network (RBFNN) is used to estimate the uncertainties. Based on analyzing the stability of the hydraulic servo-system, we show that the error signals are bounded in compact sets, the time-varying state constraints are never violated, and all signals of the hydraulic servo-system are bounded. The simulation and experimental results show that the tracking accuracy of the system is improved and the controller has fast tracking ability and strong robustness.
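For reference, a commonly used symmetric form of a time-varying barrier Lyapunov function is (a textbook sketch in generic notation, not necessarily the exact function used in the paper):

```latex
% For a tracking error z_1 = x_1 - x_d constrained by |z_1(t)| < k_b(t):
V_1(t) = \frac{1}{2}\,\ln\!\frac{k_b^2(t)}{k_b^2(t) - z_1^2(t)}.
% V_1 is positive definite on |z_1| < k_b(t) and diverges as |z_1| \to k_b(t),
% so any controller that keeps V_1 bounded automatically keeps the error
% inside the time-varying constraint for all time.
```

The backstepping design then chooses virtual controls so that \(\dot V_1\) is negative outside a small residual set, which yields boundedness of \(V_1\) and hence constraint satisfaction.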

  18. Statistical quality analysis of schedulers under soft-real-time constraints

    NARCIS (Netherlands)

    Baarsma, H.E.; Hurink, Johann L.; Jansen, P.G.

    2007-01-01

    This paper describes an algorithm to determine the performance of real-time systems with tasks using stochastic processing times. Such an algorithm can be used for guaranteeing Quality of Service of periodic tasks with soft real-time constraints. We use a discrete distribution model of processing
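The idea of working with discrete processing-time distributions can be sketched as follows (an invented miniature, not the paper's algorithm): convolve per-task distributions to get the total-time distribution, then read off the probability of meeting a soft deadline.

```python
# Illustrative sketch: with discrete processing-time distributions (PMFs
# mapping time -> probability), the distribution of the total time of
# independent tasks is a convolution, and the soft-real-time quality is the
# probability mass at or below the deadline.
from collections import defaultdict

def convolve(pmf_a, pmf_b):
    """Distribution of the sum of two independent discrete times."""
    out = defaultdict(float)
    for ta, pa in pmf_a.items():
        for tb, pb in pmf_b.items():
            out[ta + tb] += pa * pb
    return dict(out)

def meets_deadline(pmf, deadline):
    """P(total processing time <= deadline)."""
    return sum(p for t, p in pmf.items() if t <= deadline)

task1 = {1: 0.5, 2: 0.5}   # runs 1 or 2 time units, equally likely
task2 = {2: 0.8, 4: 0.2}
total = convolve(task1, task2)
print(meets_deadline(total, deadline=4))  # approximately 0.8
```

A scheduler could use such a number directly as a Quality-of-Service guarantee: this two-task chain meets a deadline of 4 with probability about 0.8.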

  19. Finite-time stabilisation of a class of switched nonlinear systems with state constraints

    Science.gov (United States)

    Huang, Shipei; Xiang, Zhengrong

    2018-06-01

    This paper investigates the finite-time stabilisation for a class of switched nonlinear systems with state constraints. Some power orders of the system are allowed to be ratios of positive even integers over odd integers. A Barrier Lyapunov function is introduced to guarantee that the state constraint is not violated at any time. Using the convex combination method and a recursive design approach, a state-dependent switching law and state feedback controllers of individual subsystems are constructed such that the closed-loop system is finite-time stable without violation of the state constraint. Two examples are provided to show the effectiveness of the proposed method.

  20. Hard Real-Time Linux for Off-The-Shelf Multicore Architectures

    OpenAIRE

    Radder, Dirk

    2015-01-01

    This document describes the research results that were obtained from the development of a real-time extension for the Linux operating system. The paper describes a full extension of the kernel, which enables hard real-time performance on a 64-bit x86 architecture. In the first part of this study, real-time systems are categorized and concepts of real-time operating systems are introduced to the reader. In addition, numerous well-known real-time operating systems are considered. QNX Neutrino, ...

  1. Comparison of time-dependent changes in the surface hardness of different composite resins

    Science.gov (United States)

    Ozcan, Suat; Yikilgan, Ihsan; Uctasli, Mine Betul; Bala, Oya; Kurklu, Zeliha Gonca Bek

    2013-01-01

    Objective: The aim of this study was to evaluate the change in surface hardness of a silorane-based composite resin (Filtek Silorane) over time and compare the results with the surface hardness of two methacrylate-based resins (Filtek Supreme and Majesty Posterior). Materials and Methods: From each composite material, 18 wheel-shaped samples (5-mm diameter and 2-mm depth) were prepared. The top and bottom surface hardness of these samples was measured using a Vickers hardness tester. The samples were then stored at 37°C and 100% humidity. After 24 h and 7, 30 and 90 days, the top and bottom surface hardness of the samples was measured. In each measurement, the ratio between the hardness of the top and bottom surfaces was recorded as the hardness ratio. Statistical analysis was performed by one-way analysis of variance, multiple comparisons by Tukey's test and binary comparisons by t-test with a significance level of P = 0.05. Results: The highest hardness values were obtained from both surfaces of Majesty Posterior and the lowest from Filtek Silorane. Both the top and bottom surface hardness of the methacrylate-based composite resins was high, and there was a statistically significant difference between the top and bottom hardness values of only the silorane-based composite, Filtek Silorane (P ... Although the silorane-based composite resin Filtek Silorane showed an adequate hardness ratio, the use of an incremental technique during application is more important than for methacrylate-based composites. PMID:24966724

  2. HRT-UML: a design method for hard real-time systems based on the UML notation

    Science.gov (United States)

    D'Alessandro, Massimo; Mazzini, Silvia; di Natale, Marco; Lipari, Giuseppe

    2002-07-01

    The Hard Real-Time Unified Modelling Language (HRT-UML) method aims at providing a comprehensive solution to the modeling of hard real-time systems. Experience shows that the design of hard real-time systems needs methodologies suitable for the modeling and analysis of aspects related to time, schedulability and performance. In the context of the European aerospace community, a reference method for design is Hierarchical Object Oriented Design (HOOD) and, in particular, its extension for the modeling of hard real-time systems, Hard Real-Time Hierarchical Object Oriented Design (HRT-HOOD), recommended by the European Space Agency (ESA) for the development of on-board systems. On the other hand, in recent years the Unified Modelling Language (UML) has been gaining very wide acceptance across a broad range of domains, becoming a de facto international standard, and tool vendors are very active in this potentially big market. In the aerospace domain, the common opinion is that UML, as a general notation, is not suitable for hard real-time systems, even if its importance is recognized as a standard and as a technological trend for the near future. These considerations suggest the possibility of replacing the HRT-HOOD method with a customized version of UML that incorporates the advantages of both standards and complements the weak points. This approach has the clear advantage of making HRT-HOOD converge on a more powerful and expressive modeling notation. The paper identifies a mapping of the HRT-HOOD semantics into the UML one, and proposes a UML extension profile, which we call HRT-UML, based on the UML standard extension mechanisms, to fully represent HRT-HOOD design concepts. Finally, it discusses the relationships between our profile and the UML profile for schedulability, performance and time, adopted by OMG in November 2001.

  3. Effect of Anneal temperature and Time on Change of Texture and Hardness of Al-Cu-Mg

    International Nuclear Information System (INIS)

    Masrukan; Adolf Asih, S.

    2000-01-01

    The effect of annealing temperature and time on the texture and hardness of Al-Cu-Mg has been observed. In these experiments, aluminium alloy powder and 5 cube-shaped pieces of this alloy with a size of 8 × 8 × 8 mm³ were used. The powder was not annealed; 2 cubes were annealed for 20 hours at temperatures of 200 °C and 300 °C respectively; finally, 3 cubes were annealed at a temperature of 400 °C. Texture measurement was done using X-ray diffraction with a wavelength of 1.78892 Å, using the inverse pole figure method. The hardness testing results at a constant temperature of 400 °C and various times indicated that the hardness values decreased with increasing annealing time. Likewise, hardness testing for constant time and various annealing temperatures indicated that the hardness values decreased with increasing annealing temperature.

  4. Detection of Common Problems in Real-Time and Multicore Systems Using Model-Based Constraints

    Directory of Open Access Journals (Sweden)

    Raphaël Beamonte

    2016-01-01

    Multicore systems are complex in that multiple processes are running concurrently and can interfere with each other. Real-time systems add time constraints on top of that, making results invalid as soon as a deadline has been missed. Tracing is often the most reliable and accurate tool available to study and understand those systems. However, tracing requires that users understand the kernel events and their meaning, so it is not very accessible. Using modeling to generate source code or represent applications' workflows is handy for developers and has emerged as part of the model-driven development methodology. In this paper, we propose a new approach to system analysis using model-based constraints on top of userspace and kernel traces. We introduce the constraints representation and show how traces can be used to follow the application's workflow and check the constraints we set on the model. We then present a number of common problems that we encountered in real-time and multicore systems and describe how our model-based constraints could have helped to save time by automatically identifying the unwanted behavior.
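A miniature of the idea, with invented event names and a hypothetical constraint ("every request is answered within a deadline"), checked against a trace of timestamped events:

```python
# Hypothetical sketch: evaluate a model-based latency constraint against a
# trace of (timestamp, event, request_id) tuples. Event names are invented.

def check_latency(trace, deadline):
    """Return the request ids that violate the latency constraint."""
    start, violations = {}, []
    for ts, event, rid in sorted(trace):
        if event == "request":
            start[rid] = ts
        elif event == "response":
            if ts - start.pop(rid) > deadline:
                violations.append(rid)
    violations.extend(start)    # requests that never received a response
    return violations

trace = [
    (0.0, "request", "a"), (1.0, "response", "a"),
    (2.0, "request", "b"), (9.0, "response", "b"),
    (3.0, "request", "c"),
]
print(check_latency(trace, deadline=5.0))  # ['b', 'c']
```

Real kernel traces carry far richer events (scheduling, IRQs, migrations), but the principle is the same: the model supplies the constraint, and the trace supplies the evidence to check it against.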

  5. Automatic Verification of Timing Constraints for Safety Critical Space Systems

    Science.gov (United States)

    Fernandez, Javier; Parra, Pablo; Sanchez Prieto, Sebastian; Polo, Oscar; Bernat, Guillem

    2015-09-01

    This paper presents an automatic verification process, focused on the verification of scheduling-analysis parameters. The proposal is part of a process, based on Model-Driven Engineering, to automate the verification and validation of software on board satellites. The process is implemented in the software control unit of the energetic particle detector that is a payload of the Solar Orbiter mission. From the design model, a scheduling-analysis model and its verification model are generated. The verification is defined as constraints in the form of finite timed automata. When the system is deployed on target, the verification evidence is extracted at instrumented points. The constraints are fed with this evidence; if any of the constraints is not satisfied by the on-target evidence, the scheduling analysis is not valid.

  6. Time constraints and autonomy at work in the European Union

    NARCIS (Netherlands)

    Dhondt, S.

    1998-01-01

    Time constraints and job autonomy are seen as two major dimensions of work content. These two dimensions play a major role in controlling psychosocial stress at work. The European Foundation for the Improvement of Living and Working Conditions (EFILWC) has asked NIA TNO to prepare a report on time

  7. Constraint-based scheduling

    Science.gov (United States)

    Zweben, Monte

    1993-01-01

    The GERRY scheduling system, developed by NASA Ames with assistance from the Lockheed Space Operations Company and the Lockheed Artificial Intelligence Center, uses a method called constraint-based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle ground processing problem, which entails the scheduling of all the inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocation for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its application to the Space Shuttle problem. We also speculate as to how the system would be used for manufacturing, transportation, and military problems.
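The constraint-based iterative-repair loop described above has roughly this shape (a generic sketch with invented task and constraint names, not the GERRY implementation):

```python
# Sketch of constraint-based iterative repair: a schedule maps tasks to
# start times; each constraint can report whether it is violated and propose
# a local repair that moves one task.
import random

def iterative_repair(schedule, constraints, steps, rng):
    for _ in range(steps):
        violated = [c for c in constraints if c["violated"](schedule)]
        if not violated:
            break                       # all constraints satisfied
        c = rng.choice(violated)        # pick one violation to repair
        task, new_start = c["repair"](schedule)
        schedule = dict(schedule, **{task: new_start})
    return schedule

# Example constraint: task "b" must start after task "a" ends (duration 3).
after_a = {
    "violated": lambda s: s["b"] < s["a"] + 3,
    "repair": lambda s: ("b", s["a"] + 3),
}
result = iterative_repair({"a": 0, "b": 1}, [after_a], steps=10,
                          rng=random.Random(0))
print(result)  # {'a': 0, 'b': 3}
```

Preference criteria fit the same mold as weighted soft constraints: the loop then repairs hard violations first and uses the remaining steps to reduce the total penalty.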

  8. Effect of Heating Time on Hardness Properties of Laser Clad Gray Cast Iron Surface

    Science.gov (United States)

    Norhafzan, B.; Aqida, S. N.; Mifthal, F.; Zulhishamuddin, A. R.; Ismail, I.

    2018-03-01

    This paper presents the effect of heating time on laser-cladded gray cast iron. In this study, the effects of heating time on cladded and melted gray cast iron were analysed. Mixed Mo and Cr powder was pre-placed on the gray cast iron surface and cladded using a laser cladding technique. The modified layer was sectioned using a diamond blade cutter and polished using SiC abrasive paper before being heated. Samples were heated in a furnace for 15, 30 and 45 minutes at 650 °C and cooled to room temperature. A metallographic study was conducted using an inverted microscope, while surface hardness was tested using a Wilson hardness tester on the Vickers scale. The metallographic study showed graphite flakes within a matrix of pearlite. The surface hardness of the modified layer decreased with increasing heating time. These findings are significant to the structural stability of laser-cladded gray cast iron under different heating times.

  9. Quiet planting in the locked constraints satisfaction problems

    Energy Technology Data Exchange (ETDEWEB)

    Zdeborova, Lenka [Los Alamos National Laboratory; Krzakala, Florent [Los Alamos National Laboratory

    2009-01-01

    We study the planted ensemble of locked constraint satisfaction problems. We describe the connection between the random and planted ensembles. The use of the cavity method is combined with arguments from reconstruction on trees and first and second moment considerations; in particular the connection with the reconstruction on trees appears to be crucial. Our main result is the location of the hard region in the planted ensemble, thus providing hard satisfiable benchmarks. In a part of that hard region instances have with high probability a single satisfying assignment.

  10. Effects of Social Constraints on Career Maturity: The Mediating Effect of the Time Perspective

    Science.gov (United States)

    Kim, Kyung-Nyun; Oh, Se-Hee

    2013-01-01

    Previous studies have provided mixed results for the effects of social constraints on career maturity. However, there has been growing interest in these effects from the time perspective. Few studies have examined the effects of social constraints on the time perspective which in turn influences career maturity. This study examines the mediating…

  11. Uncertain travel times and activity schedules under conditions of space-time constraints and invariant choice heuristics

    NARCIS (Netherlands)

    Rasouli, S.; Timmermans, H.J.P.

    2014-01-01

    The aim of this paper is to assess the impact of uncertain travel times as reflected in travel time variability on the outcomes of individuals’ activity–travel scheduling decisions, assuming they are faced with fixed space–time constraints and apply the set of decision rules that they have developed

  12. Standard hardness conversion tables for metals relationship among brinell hardness, vickers hardness, rockwell hardness, superficial hardness, knoop hardness, and scleroscope hardness

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2007-01-01

    1.1 Conversion Table 1 presents data in the Rockwell C hardness range on the relationship among Brinell hardness, Vickers hardness, Rockwell hardness, Rockwell superficial hardness, Knoop hardness, and Scleroscope hardness of non-austenitic steels including carbon, alloy, and tool steels in the as-forged, annealed, normalized, and quenched and tempered conditions provided that they are homogeneous. 1.2 Conversion Table 2 presents data in the Rockwell B hardness range on the relationship among Brinell hardness, Vickers hardness, Rockwell hardness, Rockwell superficial hardness, Knoop hardness, and Scleroscope hardness of non-austenitic steels including carbon, alloy, and tool steels in the as-forged, annealed, normalized, and quenched and tempered conditions provided that they are homogeneous. 1.3 Conversion Table 3 presents data on the relationship among Brinell hardness, Vickers hardness, Rockwell hardness, Rockwell superficial hardness, and Knoop hardness of nickel and high-nickel alloys (nickel content o...

  13. Investigating the Effect of Voltage-Switching on Low-Energy Task Scheduling in Hard Real-Time Systems

    Science.gov (United States)

    2005-01-01

    We investigate the effect of voltage-switching on task execution times and energy consumption for dual-speed hard real-time systems, and present a...scheduling algorithm and apply it to two real-life task sets. Our results show that energy can be conserved in embedded real-time systems using energy-aware task scheduling. We also show that switching times have a significant effect on the energy consumed in hard real-time systems.
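    The trade-off being studied can be sketched with a back-of-the-envelope model (all numbers hypothetical, not from the report): dynamic power scales roughly with V²f, so running slower saves energy, but each voltage switch costs time and energy.

```python
# Toy dual-speed energy model: power is normalized so P(full speed) = 1 and
# assumed ~ speed**3 (V scaling ~linearly with f, P ~ V^2 f). Each speed
# switch adds a fixed time and energy overhead. Figures are illustrative.

def schedule_energy(segments, p_switch=0.2, t_switch=0.001):
    """segments: list of (speed_fraction, seconds of work at full speed).
    Returns (total_time, total_energy) including switching overheads."""
    time = energy = 0.0
    prev_speed = None
    for speed, work in segments:
        if prev_speed is not None and speed != prev_speed:
            time += t_switch
            energy += p_switch * t_switch     # overhead of the voltage switch
        duration = work / speed               # slower speed => longer runtime
        time += duration
        energy += (speed ** 3) * duration
        prev_speed = speed
    return time, energy

full = schedule_energy([(1.0, 1.0)])
dual = schedule_energy([(1.0, 0.5), (0.5, 0.5)])  # second half at half speed
print(full, dual)   # the dual-speed schedule takes longer but uses less energy
```

    Whether the slower schedule is admissible then depends on the hard deadlines, which is exactly the scheduling question the paper addresses.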

  14. Efficient Constraint Handling in Electromagnetism-Like Algorithm for Traveling Salesman Problem with Time Windows

    Science.gov (United States)

    Yurtkuran, Alkın

    2014-01-01

    The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which real-coded particles' boundary constraints are associated with the corresponding time windows of customers; this strategy is combined with the penalty approach to eliminate infeasibilities arising from time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing it to that of state-of-the-art metaheuristics using several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times compared to the test algorithms. PMID:24723834
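    The penalty idea for time-window handling can be illustrated generically (this is not the paper's exact EMA operators; the instance below is hypothetical): a tour's cost is its travel time plus a penalty proportional to lateness at each customer, while arriving early simply means waiting for the window to open.

```python
# Toy TSPTW penalty evaluation: infeasible tours remain evaluable but are
# heavily penalized, steering the search toward feasibility.

def tour_cost(tour, travel, windows, penalty=100.0):
    t, cost, violation = 0.0, 0.0, 0.0
    pos = 0                                   # node 0 is the depot
    for cust in tour:
        t += travel[pos][cust]
        cost += travel[pos][cust]
        early, late = windows[cust]
        t = max(t, early)                     # wait if we arrive early
        violation += max(0.0, t - late)       # lateness is penalized
        pos = cust
    return cost + penalty * violation

travel = [[0, 4, 6], [4, 0, 3], [6, 3, 0]]    # symmetric travel times
windows = {1: (0, 5), 2: (6, 10)}             # hypothetical time windows
print(tour_cost([1, 2], travel, windows))     # feasible tour: 7.0
print(tour_cost([2, 1], travel, windows))     # customer 1 visited late: 409.0
```

    With a large enough penalty weight, any feasible tour dominates every infeasible one, so the metaheuristic can search an unconstrained space.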

  15. Efficient Constraint Handling in Electromagnetism-Like Algorithm for Traveling Salesman Problem with Time Windows

    Directory of Open Access Journals (Sweden)

    Alkın Yurtkuran

    2014-01-01

    The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which real-coded particles' boundary constraints are associated with the corresponding time windows of customers; this strategy is combined with the penalty approach to eliminate infeasibilities arising from time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing it to that of state-of-the-art metaheuristics using several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times compared to the test algorithms.

  16. Evaluation of human muscle hardness after dynamic exercise with ultrasound real-time tissue elastography: A feasibility study

    Energy Technology Data Exchange (ETDEWEB)

    Yanagisawa, O., E-mail: o.yanagisawa@aoni.waseda.jp [Faculty of Sport Sciences, Waseda University, Tokorozawa, Saitama (Japan); Niitsu, M. [Department of Radiological Science, Faculty of Health Sciences, Tokyo Metropolitan University, Arakawa-ku, Tokyo (Japan); Kurihara, T. [Faculty of Sport and Health Science, Ritsumeikan University, Kusatsu, Shiga (Japan); Fukubayashi, T. [Faculty of Sport Sciences, Waseda University, Tokorozawa, Saitama (Japan)

    2011-09-15

    Aim: To assess the feasibility of ultrasound real-time tissue elastography (RTE) for measuring exercise-induced changes in muscle hardness and to compare the findings of RTE with those of a tissue hardness meter for semi-quantitative assessment of the hardness of exercised muscles. Materials and methods: Nine male participants performed an arm-curl exercise. RTE measurements were performed by manually applying repetitive compression with the transducer on the scan position before exercise, immediately after exercise, and at 30 min after exercise; strain ratios between muscle and a reference material (hydrogel) were calculated (muscle strain/material strain). A tissue hardness meter was also used to evaluate muscle hardness. The intraclass correlation coefficients (ICCs) for the three repeated measurements at each measurement time were calculated to evaluate the intra-observer reproducibility of each technique. Results: Immediately after exercise, the strain ratio and the value obtained using the tissue hardness meter significantly decreased (from 1.65 to 1.35) and increased (from 51.8 to 54.3), respectively. Both parameters returned to their pre-exercise value 30 min after exercise. The ICCs of the RTE (and the ICCs of the muscle hardness meter) were 0.971 (0.816) before exercise, 0.939 (0.776) immediately after exercise, and 0.959 (0.882) at 30 min after exercise. Conclusion: Similar to the muscle hardness meter, RTE revealed the exercise-induced changes of muscle hardness semi-quantitatively. The intra-observer reproducibility of RTE was very high at each measurement time. These findings suggest that RTE is a clinically useful technique for assessing hardness of specific exercised muscles.
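    The semi-quantitative measure described above reduces to a simple ratio (a minimal sketch; the strain values below are illustrative, chosen to mirror the reported means): the lower the strain ratio, the less the muscle deforms relative to the reference material, i.e. the harder the muscle.

```python
# Strain ratio as used in RTE: muscle strain divided by the strain of a
# reference material (hydrogel) under the same compression. Values are
# illustrative, picked to reproduce the reported 1.65 -> 1.35 change.

def strain_ratio(muscle_strain, material_strain):
    return muscle_strain / material_strain

before = strain_ratio(0.33, 0.20)      # ~1.65: baseline
after = strain_ratio(0.27, 0.20)       # ~1.35: harder muscle after exercise
assert after < before                  # exercise stiffens the muscle acutely
print(before, after)
```

    Note the opposite sign conventions of the two instruments: the strain ratio decreases as the muscle hardens, while the tissue hardness meter reading increases.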

  17. Evaluation of human muscle hardness after dynamic exercise with ultrasound real-time tissue elastography: A feasibility study

    International Nuclear Information System (INIS)

    Yanagisawa, O.; Niitsu, M.; Kurihara, T.; Fukubayashi, T.

    2011-01-01

    Aim: To assess the feasibility of ultrasound real-time tissue elastography (RTE) for measuring exercise-induced changes in muscle hardness and to compare the findings of RTE with those of a tissue hardness meter for semi-quantitative assessment of the hardness of exercised muscles. Materials and methods: Nine male participants performed an arm-curl exercise. RTE measurements were performed by manually applying repetitive compression with the transducer on the scan position before exercise, immediately after exercise, and at 30 min after exercise; strain ratios between muscle and a reference material (hydrogel) were calculated (muscle strain/material strain). A tissue hardness meter was also used to evaluate muscle hardness. The intraclass correlation coefficients (ICCs) for the three repeated measurements at each measurement time were calculated to evaluate the intra-observer reproducibility of each technique. Results: Immediately after exercise, the strain ratio and the value obtained using the tissue hardness meter significantly decreased (from 1.65 to 1.35) and increased (from 51.8 to 54.3), respectively. Both parameters returned to their pre-exercise value 30 min after exercise. The ICCs of the RTE (and the ICCs of the muscle hardness meter) were 0.971 (0.816) before exercise, 0.939 (0.776) immediately after exercise, and 0.959 (0.882) at 30 min after exercise. Conclusion: Similar to the muscle hardness meter, RTE revealed the exercise-induced changes of muscle hardness semi-quantitatively. The intra-observer reproducibility of RTE was very high at each measurement time. These findings suggest that RTE is a clinically useful technique for assessing hardness of specific exercised muscles.

  18. Constraints on the optical afterglow emission of the short/hard burst GRB 010119

    DEFF Research Database (Denmark)

    Gorosabel, J.; Andersen, M.I.; Hjorth, J.

    2002-01-01

    We report optical observations of the short/hard burst GRB 010119 error box, one of the smallest error boxes reported to date for short/hard GRBs. Limits of R >22.3 and I >21.2 are imposed by observations carried out 20.31 and 20.58 hours after the gamma-ray event, respectively. They represent th...

  19. Real-time aircraft continuous descent trajectory optimization with ATC time constraints using direct collocation methods.

    OpenAIRE

    Verhoeven, Ronald; Dalmau Codina, Ramon; Prats Menéndez, Xavier; de Gelder, Nico

    2014-01-01

    In this paper an initial implementation of a real-time aircraft trajectory optimization algorithm is presented. The aircraft trajectory for descent and approach is computed for minimum use of thrust and speed brake in support of a “green” continuous descent and approach flight operation, while complying with ATC time constraints for maintaining runway throughput and co...

  20. Quadratic third-order tensor optimization problem with quadratic constraints

    Directory of Open Access Journals (Sweden)

    Lixing Yang

    2014-05-01

    Quadratically constrained quadratic programs (QQP) play an important modeling role for many diverse problems. These problems are in general NP-hard and numerically intractable. Semidefinite programming (SDP) relaxations often provide good approximate solutions to these hard problems. For several special cases of QQP, e.g., convex programs and trust region subproblems, SDP relaxation provides the exact optimal value, i.e., there is a zero duality gap. However, this is not true for the general QQP, or even the QQP with two convex constraints but a nonconvex objective. In this paper, we consider a certain QQP where the variable is neither a vector nor a matrix but a third-order tensor. This problem can be viewed as a generalization of the ordinary QQP with a vector or matrix variable. Under some mild conditions, we first show that SDP relaxation provides exact optimal solutions for the original problem. Then we focus on two classes of homogeneous quadratic tensor programming problems which place no requirements on the number of constraints. For one, we provide an easily implementable polynomial-time algorithm to approximately solve the problem and discuss the approximation ratio. For the other, we show there is no gap between the SDP relaxation and itself.
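    For orientation, the standard vector-variable QQP and its SDP relaxation follow the pattern below (generic notation, not taken from the paper; the tensor case generalizes this construction):

```latex
% Generic QQP with vector variable x:
\min_{x \in \mathbb{R}^n} \; x^{\top} A_0 x + 2 b_0^{\top} x
\quad \text{s.t.} \quad x^{\top} A_i x + 2 b_i^{\top} x \le c_i, \quad i = 1, \dots, m.
% Lifting X = x x^{\top}, writing x^{\top} A x = \langle A, X \rangle, and
% relaxing the rank-one equality to the convex condition
\begin{pmatrix} 1 & x^{\top} \\ x & X \end{pmatrix} \succeq 0
% yields the SDP relaxation:
\min_{x, X} \; \langle A_0, X \rangle + 2 b_0^{\top} x
\quad \text{s.t.} \quad \langle A_i, X \rangle + 2 b_i^{\top} x \le c_i, \quad i = 1, \dots, m.
```

    A zero duality gap, as for trust-region subproblems, means the relaxation attains the original optimal value.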

  1. HVM-TP: A Time Predictable, Portable Java Virtual Machine for Hard Real-Time Embedded Systems

    DEFF Research Database (Denmark)

    Luckow, Kasper Søe; Thomsen, Bent; Korsholm, Stephan Erbs

    2014-01-01

    We present HVMTIME; a portable and time predictable JVM implementation with applications in resource-constrained hard real-time embedded systems. In addition, it implements the Safety Critical Java (SCJ) Level 1 specification. Time predictability is achieved by a combination of time predictable algorithms, exploiting the programming model of the SCJ specification, and harnessing static knowledge of the hosted SCJ system. This paper presents HVMTIME in terms of its design and capabilities, and demonstrates how a complete timing model of the JVM represented as a Network of Timed Automata can be obtained using the tool TetaSARTSJVM. Further, using the timing model, we derive Worst Case Execution Times (WCETs) and Best Case Execution Times (BCETs) of the Java Bytecodes.

  2. RealWorld evaluation: working under budget, time, data, and political constraints

    National Research Council Canada - National Science Library

    Bamberger, Michael; Rugh, Jim; Mabry, Linda

    2012-01-01

    This book addresses the challenges of conducting program evaluations in real-world contexts where evaluators and their clients face budget and time constraints and where critical data may be missing...

  3. Randomized Caches Can Be Pretty Useful to Hard Real-Time Systems

    Directory of Open Access Journals (Sweden)

    Enrico Mezzetti

    2015-03-01

    Cache randomization per se, and its viability for probabilistic timing analysis (PTA) of critical real-time systems, are receiving increasingly close attention from the scientific community and from industrial practitioners. In fact, the very notion of introducing randomness and probabilities into time-critical systems has caused strenuous debates, owing to the apparent clash between this idea and the strictly deterministic view traditionally held for those systems. A paper recently published in LITES (Reineke, J. (2014). Randomized Caches Considered Harmful in Hard Real-Time Systems. LITES, 1(1), 03:1-03:13) provides a critical analysis of the weaknesses and risks entailed in using randomized caches in hard real-time systems. In order to provide the interested reader with a fuller, balanced appreciation of the subject matter, a critical analysis of the benefits brought about by that innovation should be provided also. This short paper addresses that need by revisiting the array of issues addressed in the cited work, in the light of the latest advances to the relevant state of the art. Accordingly, we show that the potential benefits of randomized caches do offset their limitations, making them, when used in conjunction with PTA, a serious competitor to conventional designs.
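    The core intuition behind randomized caches and PTA can be seen in a toy simulation (an illustration only, far simpler than a real cache or a proper pWCET analysis; sizes and traces are hypothetical): with random replacement, the miss count of a fixed access trace becomes a random variable, so one can reason about the probability of exceeding a bound rather than a single deterministic worst case.

```python
import random

# Toy fully-associative cache with random replacement. A cyclic trace of 4
# addresses over a 3-line cache is a pathological case for LRU (every access
# misses); random replacement turns the miss count into a random variable.

def misses(trace, cache_size, rng):
    cache, count = set(), 0
    for addr in trace:
        if addr not in cache:
            count += 1
            if len(cache) >= cache_size:
                cache.remove(rng.choice(sorted(cache)))  # random eviction
            cache.add(addr)
    return count

rng = random.Random(42)
trace = [0, 1, 2, 3, 0, 1, 2, 3] * 4          # cyclic trace defeats LRU
samples = [misses(trace, cache_size=3, rng=rng) for _ in range(1000)]
# Empirical exceedance probability of a candidate bound of 20 misses:
print(sum(m > 20 for m in samples) / len(samples))
```

    An LRU cache would miss on all 32 accesses of this trace; the sampled distribution shows why a probabilistic bound can be much tighter than the deterministic one.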

  4. Advanced Hard Real-Time Operating System, The Maruti Project. Part 1.

    Science.gov (United States)

    1997-01-01

    Advanced Hard Real-Time Operating System, The Maruti Project (Part 1 of 2). Ashok K. Agrawala and Satish K. Tripathi, Department of Computer Science, University of Maryland; contract DASG-60-92-C-0055. ...(SdSA94), a real-time operating system developed at the University of Maryland, and conducted extensive experiments under various task

  5. A reliable information management for real-time systems

    International Nuclear Information System (INIS)

    Nishihara, Takuo; Tomita, Seiji

    1995-01-01

    In this paper, we propose a system configuration suitable for hard real-time systems in which the integrity and durability of information are important. In most hard real-time systems, where response time constraints are critical, the data which programs access are volatile and may be lost if the system goes down. But for some real-time systems, e.g., value-added intelligent network (IN) systems, the integrity and durability of the stored data are very important. We propose a distributed system configuration for such hard real-time systems, comprised of service control modules and data management modules. The service control modules process transactions and responses based on deadline control, and the data management modules handle the stored data based on information recovery schemes well established in fault-tolerant real-time systems. (author)
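    Deadline-based control of transactions can be sketched generically (this is an earliest-deadline-first toy, not the paper's actual module design; request names and numbers are hypothetical): requests are served in deadline order, and a request that can no longer meet its deadline is shed rather than served late.

```python
import heapq

# Earliest-deadline-first dispatch with load shedding.

def edf_dispatch(requests, now=0.0):
    """requests: list of (deadline, name, service_time). Returns the order
    served and the requests dropped because they would miss their deadline."""
    heap = list(requests)
    heapq.heapify(heap)                        # min-heap keyed on deadline
    served, dropped, t = [], [], now
    while heap:
        deadline, name, service = heapq.heappop(heap)
        if t + service > deadline:
            dropped.append(name)               # would miss: shed the request
        else:
            t += service
            served.append(name)
    return served, dropped

reqs = [(5.0, "query_a", 2.0), (3.0, "query_b", 1.0), (4.0, "query_c", 3.0)]
print(edf_dispatch(reqs))   # (['query_b', 'query_c'], ['query_a'])
```

    The data management modules would then apply their recovery schemes independently of this dispatch policy, which is the separation the proposed configuration argues for.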

  6. Exponential-Time Algorithms and Complexity of NP-Hard Graph Problems

    DEFF Research Database (Denmark)

    Taslaman, Nina Sofia

    NP-hard problems are deemed highly unlikely to be solvable in polynomial time. Still, one can often find algorithms that are substantially faster than brute force solutions. This thesis concerns such algorithms for problems from graph theory; techniques for constructing and improving this type of algorithms, as well as investigations into how far such improvements can get under reasonable assumptions. The first part is concerned with detection of cycles in graphs, especially parameterized generalizations of Hamiltonian cycles. A remarkably simple Monte Carlo algorithm is presented, and with high probability any found solution is shortest possible. Moreover, the algorithm can be used to find a cycle of given parity through the specified elements. The second part concerns the hardness of problems encoded as evaluations of the Tutte polynomial at some fixed point in the rational plane...

  7. Statistical physics of hard optimization problems

    International Nuclear Information System (INIS)

    Zdeborova, L.

    2009-01-01

    Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the non-deterministic polynomial (NP)-complete class are particularly difficult: it is believed that the number of operations required to minimize the cost function is, in the most difficult cases, exponential in the system size. However, even in an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this article is: how to recognize if an NP-complete constraint satisfaction problem is typically hard, and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named "locked" constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability.
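    The notion of a frozen variable can be made concrete with a brute-force toy (an enumeration sketch, nothing like the cavity method; the clauses below are hypothetical): a variable is frozen if it takes the same value in every satisfying assignment.

```python
from itertools import product

# Enumerate all satisfying assignments of a tiny CNF formula and report
# which variables are "frozen" (fixed to one value across all solutions).

def frozen_variables(n, clauses):
    """clauses: tuples of literals, +i / -i meaning variable i true / false.
    Returns the set of variables taking the same value in all solutions."""
    solutions = []
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            solutions.append(bits)
    if not solutions:
        return set()
    return {i + 1 for i in range(n)
            if len({s[i] for s in solutions}) == 1}

# (x1) and (not x1 or x2): x1 and x2 are forced true; x3 remains free.
print(frozen_variables(3, [(1,), (-1, 2)]))   # {1, 2}
```

    In the locked problems discussed above, essentially all variables are frozen within a solution cluster, which is what makes local search so ineffective on them.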

  8. Statistical physics of hard optimization problems

    International Nuclear Information System (INIS)

    Zdeborova, L.

    2009-01-01

    Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the non-deterministic polynomial-complete class are particularly difficult: it is believed that the number of operations required to minimize the cost function is, in the most difficult cases, exponential in the system size. However, even in a non-deterministic polynomial-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this article is: how to recognize if a non-deterministic polynomial-complete constraint satisfaction problem is typically hard, and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named 'locked' constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability (Authors)

  9. Statistical physics of hard optimization problems

    Science.gov (United States)

    Zdeborová, Lenka

    2009-06-01

    Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as to biology or social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the non-deterministic polynomial (NP)-complete class are particularly difficult: it is believed that the number of operations required to minimize the cost function is, in the most difficult cases, exponential in the system size. However, even in an NP-complete problem the practically arising instances might, in fact, be easy to solve. The principal question we address in this article is: How to recognize if an NP-complete constraint satisfaction problem is typically hard and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems - random satisfiability and random graph coloring. We suggest a relation between the existence of the so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems which we named "locked" constraint satisfaction, where the statistical description is easily solvable, but from the algorithmic point of view they are even more challenging than the canonical satisfiability.

  10. Effect of Nitridation Time on the Surface Hardness of Medium Carbon Steels (AISI 1045)

    International Nuclear Information System (INIS)

    Setyo Atmojo; Tjipto Sujitno; Sukidi

    2003-01-01

    The effect of nitridation time on the surface hardness of medium carbon steel (AISI 1045) has been investigated. The parameters determining the results were the nitrogen gas flow rate, temperature and time. In these experiments, samples 15 mm in diameter and 2 mm thick were placed in a glass tube of 35 mm diameter heated to 550 °C; the flow rate and temperature were kept constant at 100 cc/minute and 550 °C, respectively, while the time was varied over 5, 10, 20 and 30 hours. It was found that for nitridation times of 5, 10, 20 and 30 hours, the surface hardness increased from 145 VHN to 23.7, 296.8, 382.4 and 426.1 VHN, respectively. (author)

  11. Design with Nonlinear Constraints

    KAUST Repository

    Tang, Chengcheng

    2015-12-10

    Most modern industrial and architectural designs need to satisfy the requirements of their targeted performance and respect the limitations of available fabrication technologies. At the same time, they should reflect the artistic considerations and personal taste of the designers, which cannot be simply formulated as optimization goals with single best solutions. This thesis aims at a general, flexible yet efficient computational framework for interactive creation, exploration and discovery of serviceable, constructible, and stylish designs. By reformulating nonlinear engineering considerations as linear or quadratic expressions through the introduction of auxiliary variables, the constrained space can be efficiently accessed by the proposed algorithm, Guided Projection, with the guidance of aesthetic formulations. The approach is introduced through applications in different scenarios, and its effectiveness is demonstrated by examples that were difficult or even impossible to be computationally designed before. The first application is the design of meshes under both geometric and static constraints, including self-supporting polyhedral meshes that are not height fields. Then, with a formulation bridging mesh based and spline based representations, the application is extended to developable surfaces including origami with curved creases. Finally, general approaches to extend hard constraints and soft energies are discussed, followed by a concluding remark outlooking possible future studies.
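    The auxiliary-variable device can be illustrated with a generic example (not taken verbatim from the thesis): requiring the vertices of a mesh face to be coplanar is a nonlinear condition on the vertex positions alone, but introducing an auxiliary unit normal per face turns it into constraints that are at most quadratic.

```latex
% Planarity of a face with vertices p_i via an auxiliary unit normal n:
n \cdot (p_i - p_j) = 0 \quad \text{for each edge } (i, j) \text{ of the face},
\qquad n \cdot n = 1.
% Every equation is linear or quadratic in the unknowns (p, n), so the
% constraint set fits a Gauss-Newton-type projection step such as the
% Guided Projection algorithm described above.
```

    The same pattern (one auxiliary variable per awkward nonlinearity) applies to edge lengths, ratios, and other non-quadratic quantities.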

  12. A hybrid metaheuristic for the time-dependent vehicle routing problem with hard time windows

    Directory of Open Access Journals (Sweden)

    N. Rincon-Garcia

    2017-01-01

    This paper presents a hybrid metaheuristic algorithm to solve the time-dependent vehicle routing problem with hard time windows. Time-dependent travel times are influenced by the different congestion levels experienced throughout the day. Vehicle scheduling without consideration of congestion might lead to underestimation of travel times and consequently missed deliveries. The algorithm presented in this paper makes use of Large Neighbourhood Search approaches and Variable Neighbourhood Search techniques to guide the search. A first stage is specifically designed to reduce the number of vehicles required, by reducing the penalties generated by time-window violations with Large Neighbourhood Search procedures. A second stage minimises the travel distance and travel time in an ‘always feasible’ search space. Comparison of results with available test instances shows that the proposed algorithm is capable of obtaining a reduction in the number of vehicles (4.15%), travel distance (10.88%) and travel time (12.00%) compared to previous implementations, in reasonable time.
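    The Large Neighbourhood Search skeleton the article builds on can be sketched generically (a single-vehicle toy with none of the paper's routing-specific or time-dependent operators; the distance matrix is hypothetical): repeatedly destroy part of the incumbent solution, repair it greedily, and keep improvements.

```python
import random

# Generic LNS loop: destroy (remove random customers) then repair
# (greedy best insertion), accepting only improving solutions.

def lns_tour(dist, iters=200, destroy_k=2, seed=1):
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(1, n))                   # trivial initial tour
    def cost(t):
        legs = [0] + t + [0]                   # leave from and return to depot 0
        return sum(dist[a][b] for a, b in zip(legs, legs[1:]))
    best, best_cost = tour[:], cost(tour)
    for _ in range(iters):
        partial = best[:]
        removed = [partial.pop(rng.randrange(len(partial)))
                   for _ in range(destroy_k)]         # destroy
        for c in removed:                             # repair: best insertion
            pos = min(range(len(partial) + 1),
                      key=lambda i: cost(partial[:i] + [c] + partial[i:]))
            partial.insert(pos, c)
        if cost(partial) < best_cost:
            best, best_cost = partial, cost(partial)  # accept improvements only
    return best, best_cost

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
print(lns_tour(dist))
```

    The full algorithm layers time-dependent travel times, time-window penalties and Variable Neighbourhood Search moves on top of this destroy-and-repair core.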

  13. Towards provably correct code generation for a hard real-time programming language

    DEFF Research Database (Denmark)

    Fränzle, Martin; Müller-Olm, Markus

    1994-01-01

    This paper sketches a hard real-time programming language featuring operators for expressing timeliness requirements in an abstract, implementation-independent way and presents parts of the design and verification of a provably correct code generator for that language. The notion of implementation...

  14. Synthesis of Flexible Fault-Tolerant Schedules with Preemption for Mixed Soft and Hard Real-Time Systems

    DEFF Research Database (Denmark)

    Izosimov, Viacheslav; Pop, Paul; Eles, Petru

    2008-01-01

    In this paper we present an approach for scheduling with preemption for fault-tolerant embedded systems composed of soft and hard real-time processes. We are interested to maximize the overall utility for average, most likely to happen, scenarios and to guarantee the deadlines for the hard...

  15. Strict Constraint Feasibility in Analysis and Design of Uncertain Systems

    Science.gov (United States)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enable us to determine if the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
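    For constraints that vary monotonically in each parameter, checking a hard constraint over a hyper-rectangular uncertainty set reduces to testing the vertices (an illustrative sketch of the idea, not the paper's method, which avoids sampling for general nonlinear constraints; the constraint and bounds below are hypothetical).

```python
from itertools import product

# Worst-case check of an affine constraint g(p) <= 0 over a hyper-rectangle:
# for a function monotone in each coordinate, the maximum over the set is
# attained at a vertex, which is then the critical parameter combination.

def worst_case(g, bounds):
    """bounds: list of (low, high) intervals per uncertain parameter.
    Returns (max of g over the vertices, the critical vertex)."""
    worst = max(product(*bounds), key=g)
    return g(worst), worst

# g(p) = 2*p1 - 3*p2 + 1 <= 0 must hold for all p in [0,1] x [1,2].
g = lambda p: 2 * p[0] - 3 * p[1] + 1
val, crit = worst_case(g, [(0, 1), (1, 2)])
print(val, crit)   # 0 at (1, 1): the constraint is (marginally) feasible
```

    The vertex (1, 1) plays the role of the critical parameter combination discussed above; a positive worst-case value would certify that hard constraint satisfaction is infeasible.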

  16. A mixed integer programming model for a continuous move transportation problem with service constraints

    Directory of Open Access Journals (Sweden)

    J. Fabian Lopez

    2010-01-01

    Full Text Available We consider a Pickup and Delivery Vehicle Routing Problem (PDP) commonly encountered in real-world logistics operations. The problem involves a set of practical complications that have received little attention in the vehicle routing literature. In this problem, there are multiple vehicle types available to cover a set of pickup and delivery requests, each of which has pickup time windows and delivery time windows. Transportation orders and vehicle types must satisfy a set of compatibility constraints that specify which orders cannot be covered by which vehicle types. In addition, we include dock service capacity constraints, as required in common real-world operations. The problem must be solved on large-scale instances (orders ≥ 500, vehicles ≥ 150). As a generalization of the traveling salesman problem, the problem is clearly NP-hard, and exact algorithms are too slow for such instances. The PDP-TWDS is both a packing problem (assigning orders to vehicles) and a routing problem (finding the best route for each vehicle). We propose to solve the problem in three stages. The first stage constructs initial solutions at an aggregate level, relaxing some constraints of the original problem. The other two stages impose the time window and dock service constraints. Our results are favorable, finding good-quality solutions in relatively short computational times.

  17. Maximizing Entropy of Pickard Random Fields for 2x2 Binary Constraints

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren

    2014-01-01

    This paper considers the problem of maximizing the entropy of two-dimensional (2D) Pickard Random Fields (PRF) subject to constraints. We consider binary Pickard Random Fields, which provide a 2D causal finite context model, and use it to define stationary probabilities for 2x2 squares, thus...... allowing us to calculate the entropy of the field. All possible binary 2x2 constraints are considered and all constraints are categorized into groups according to their properties. For constraints which can be modeled by a PRF approach and with positive entropy, we characterize and provide statistics...... of the maximum PRF entropy. As examples, we consider the well-known hard square constraint along with a few other constraints....
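
    For the 2x2 case, a constraint can be checked exhaustively, since only 16 binary squares exist. A minimal sketch (our own, not the paper's PRF machinery) enumerating the squares admissible under the hard square constraint:

```python
from itertools import product

def satisfies_hard_square(a, b, c, d):
    """True if the 2x2 binary square [[a, b], [c, d]] contains no two
    horizontally or vertically adjacent 1s (the hard square constraint)."""
    return not ((a and b) or (c and d) or (a and c) or (b and d))

admissible = [(a, b, c, d) for a, b, c, d in product((0, 1), repeat=4)
              if satisfies_hard_square(a, b, c, d)]
print(len(admissible))  # 7 of the 16 possible 2x2 binary squares
```

    Counting admissible local configurations is only the starting point; the paper's contribution is assigning stationary PRF probabilities to them so that the field's entropy can be computed and maximized.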

  18. Hard-Boiled for Hard Times in Leonardo Padura Fuentes's Detective Fiction

    Science.gov (United States)

    Song, H. Rosi

    2009-01-01

    Focusing on Leonardo Padura Fuentes's hard-boiled fiction, this essay traces the origin and evolution of the genre in Cuba. Padura Fuentes has challenged the officially sanctioned socialist "literatura policial" that became popular in the 1970s and 1980s, creating a new model of criticism that is not afraid to confront the island's socio-economic…

  19. Resolving relative time expressions in Dutch text with Constraint Handling Rules

    DEFF Research Database (Denmark)

    van de Camp, Matje; Christiansen, Henning

    2012-01-01

    It is demonstrated how Constraint Handling Rules can be applied for resolution of indirect and relative time expressions in text as part of a shallow analysis, following a specialized tagging phase. A method is currently under development, optimized for a particular corpus of historical biographies...
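
    Constraint Handling Rules are typically embedded in Prolog; as a language-neutral illustration of the underlying idea, here is a hypothetical Python sketch in which a running anchor year is propagated through a sequence of absolute and relative expressions (the rule set and vocabulary are invented for the example, not taken from the paper):

```python
import re

WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}

def resolve_years(expressions):
    """Resolve absolute and relative year expressions left to right,
    propagating a running anchor year (a toy stand-in for the
    constraint propagation CHR performs over a whole text)."""
    anchor, resolved = None, []
    for expr in expressions:
        if m := re.fullmatch(r"in (\d{4})", expr):
            anchor = int(m.group(1))
        elif expr == "the following year" and anchor is not None:
            anchor += 1
        elif (m := re.fullmatch(r"(\w+) years? (later|earlier)", expr)) and anchor is not None:
            delta = WORDS[m.group(1)]
            anchor += delta if m.group(2) == "later" else -delta
        resolved.append(anchor)
    return resolved

print(resolve_years(["in 1865", "two years later", "the following year"]))
```

    A real CHR solution resolves such expressions as constraints over the whole document rather than in a single left-to-right pass, which is what makes it robust to forward references.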

  20. Hard and soft sub-time-optimal controllers for a mechanical system with uncertain mass

    DEFF Research Database (Denmark)

    Kulczycki, P.; Wisniewski, Rafal; Kowalski, P.

    2004-01-01

    An essential limitation of classical optimal control has been its limited robustness to modeling inadequacies and perturbations. This paper presents conceptions of two practical control structures based on the time-optimal approach: a hard one and a soft one. The hard structure is defined...... by parameters selected in accordance with the rules of statistical decision theory; the soft structure additionally allows the elimination of rapid changes in control values. The object is a basic mechanical system, with uncertain (also non-stationary) mass treated as a stochastic process....... The methodology proposed here is of a universal nature and may easily be applied to other sources of uncertainty in time-optimal controlled mechanical systems....

  1. Hard and soft Sub-Time-Optimal Controllers for a Mechanical System with Uncertain Mass

    DEFF Research Database (Denmark)

    Kulczycki, P.; Wisniewski, Rafal; Kowalski, P.

    2005-01-01

    An essential limitation of classical optimal control has been its limited robustness to modeling inadequacies and perturbations. This paper presents conceptions of two practical control structures based on the time-optimal approach: a hard one and a soft one. The hard structure is defined...... by parameters selected in accordance with the rules of statistical decision theory; the soft structure additionally allows the elimination of rapid changes in control values. The object is a basic mechanical system, with uncertain (also non-stationary) mass treated as a stochastic process....... The methodology proposed here is of a universal nature and may easily be applied to other sources of uncertainty in time-optimal controlled mechanical systems....

  2. Comment on "Scrutinizing the carbon cycle and CO2 residence time in the atmosphere" by H. Harde

    Science.gov (United States)

    Köhler, Peter; Hauck, Judith; Völker, Christoph; Wolf-Gladrow, Dieter A.; Butzin, Martin; Halpern, Joshua B.; Rice, Ken; Zeebe, Richard E.

    2018-05-01

    Harde (2017) proposes an alternative accounting scheme for the modern carbon cycle and concludes that only 4.3% of today's atmospheric CO2 is a result of anthropogenic emissions. As we will show, this alternative scheme is too simple, is based on invalid assumptions, and does not address many of the key processes involved in the global carbon cycle that are important on the timescale of interest. Harde (2017) therefore reaches an incorrect conclusion about the role of anthropogenic CO2 emissions. Harde (2017) tries to explain changes in atmospheric CO2 concentration with a single equation, while even the simplest model of the carbon cycle must contain equations for at least two reservoirs (the atmosphere and the surface ocean), which are solved simultaneously. A single equation is fundamentally at odds with basic theory and observations. In the following we will (i) clarify the difference between CO2 atmospheric residence time and adjustment time, (ii) present recently published information about anthropogenic carbon, (iii) present details about the processes that are missing in Harde (2017), (iv) briefly discuss shortcomings in Harde's generalization to paleo timescales, and (v) comment on deficiencies in some of the literature cited in Harde (2017).
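
    The two-reservoir point can be made concrete with a toy box model. The sketch below uses illustrative rate constants and reservoir sizes chosen by us (the real carbon cycle involves more reservoirs and far longer adjustment times); it demonstrates the distinction the authors draw between residence time and adjustment time:

```python
def two_box(years=200.0, dt=0.1, k_ao=0.25, k_oa=0.2, a0=600.0, o0=750.0, pulse=100.0):
    """Euler-integrate a minimal two-reservoir carbon model (GtC):

        dA/dt = -k_ao*A + k_oa*O      (atmosphere)
        dO/dt = +k_ao*A - k_oa*O      (surface ocean)

    A CO2 pulse added to the atmosphere decays with the adjustment
    time 1/(k_ao + k_oa), while the residence time of an individual
    molecule is 1/k_ao -- two different quantities."""
    atm, ocn = a0 + pulse, o0
    for _ in range(int(years / dt)):
        net = (k_ao * atm - k_oa * ocn) * dt   # net atmosphere -> ocean transfer
        atm -= net
        ocn += net
    return atm, ocn

atm, ocn = two_box()
print(atm + ocn)   # carbon is conserved: 600 + 750 + 100 GtC
print(atm)         # relaxes toward the equilibrium partitioning
```

    Even in this toy setting, a single-equation description of the atmosphere alone cannot reproduce the coupled relaxation, which is the comment's central objection to Harde (2017).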

  3. Temporal analysis and scheduling of hard real-time radios running on a multi-processor

    NARCIS (Netherlands)

    Moreira, O.

    2012-01-01

    On a multi-radio baseband system, multiple independent transceivers must share the resources of a multi-processor, while meeting each its own hard real-time requirements. Not all possible combinations of transceivers are known at compile time, so a solution must be found that either allows for

  4. Selection of the optimal hard facing (HF) technology of damaged forging dies based on cooling time t8/5

    Directory of Open Access Journals (Sweden)

    D. Arsić

    2016-01-01

    Full Text Available In service, forging dies are exposed to heating to very high temperatures and to variable compressive, impact and shear loads. In this paper, the reparatory hard facing of damaged forging dies is considered. The objective was to establish the optimal reparatory technology based on the cooling time t8/5. The adopted technology was verified by investigating the microstructure of the hard-faced layers and by hardness measurements within the welded layers' characteristic zones. The cooling time was determined theoretically, numerically and experimentally.
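
    The record does not reproduce its equations, but a commonly used estimate of t8/5 for three-dimensional (thick-plate) heat flow is the Rosenthal-type relation t8/5 = ηQ/(2πλ)·(1/(500−T0) − 1/(800−T0)). The sketch below uses this textbook relation with efficiency and conductivity values assumed by us, not taken from the paper:

```python
from math import pi

def t85_thick_plate(q_kj_mm, t0=20.0, eta=0.8, lam=0.025):
    """Rosenthal-type thick-plate (3D heat flow) estimate of the
    800 -> 500 C cooling time t8/5, in seconds.

    q_kj_mm : gross heat input per unit length [kJ/mm]
    t0      : preheat/interpass temperature [deg C]
    eta     : assumed process efficiency
    lam     : assumed thermal conductivity of steel [J/(mm*s*K)]
    """
    q = eta * q_kj_mm * 1000.0            # net heat input in J/mm
    return q / (2.0 * pi * lam) * (1.0 / (500.0 - t0) - 1.0 / (800.0 - t0))

# Higher preheat slows cooling, lengthening t8/5
print(t85_thick_plate(1.0, t0=20.0), t85_thick_plate(1.0, t0=150.0))
```

    In practice, standards such as EN 1011-2 provide calibrated variants of this relation (with shape factors and a 2D thin-plate counterpart), which is presumably the kind of calculation the paper performs theoretically.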

  5. Balancing healthy meals and busy lives: associations between work, school, and family responsibilities and perceived time constraints among young adults.

    Science.gov (United States)

    Pelletier, Jennifer E; Laska, Melissa N

    2012-01-01

    To characterize associations between perceived time constraints for healthy eating and work, school, and family responsibilities among young adults. Cross-sectional survey. A large, Midwestern metropolitan region. A diverse sample of community college (n = 598) and public university (n = 603) students. Time constraints in general, as well as those specific to meal preparation/structure, and perceptions of a healthy life balance. Chi-square tests and multivariate logistic regression (α = .005). Women, 4-year students, and students with lower socioeconomic status perceived more time constraints and were less likely to perceive a healthy life balance (P ≤ .003). Having a heavy course load and working longer hours were important predictors of time constraints among men, while many students nonetheless perceived a healthy life balance despite multiple time demands. Interventions focused on improved time-management strategies and nutrition-related messaging to achieve healthy diets on a low time budget may be more successful if tailored to the factors that contribute to time constraints separately among men and women. Copyright © 2012 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  6. Timing of Family Income, Borrowing Constraints and Child Achievement

    DEFF Research Database (Denmark)

    Humlum, Maria Knoth

    In this paper, I investigate the effects of the timing of family income on child achievement production. Detailed administrative data augmented with PISA test scores at age 15 are used to analyze the effects of the timing of family income on child achievement. Contrary to many earlier studies, tests for early borrowing constraints suggest that parents are not constrained in early investments in their children's achievement, and thus that the timing of income does not matter for long-term child outcomes. This is a reasonable result given the setting in a Scandinavian welfare state with generous child and education subsidies. In fact, later family income (age 12-15) is a more important determinant of child achievement than earlier income.

  7. Energy-Aware Synthesis of Fault-Tolerant Schedules for Real-Time Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Poulsen, Kåre Harbo; Pop, Paul; Izosimov, Viacheslav

    2007-01-01

    This paper presents a design optimisation tool for distributed embedded real-time systems that 1) decides mapping and fault-tolerance policy and generates a fault-tolerant schedule, 2) is targeted at hard real-time, 3) has a hard reliability goal, 4) generates static schedules for processes and messages, 5) provides fault tolerance for k transient/soft faults, 6) optimises for minimal energy consumption while considering the impact of lowering voltages on the probability of faults, and 7) uses a constraint logic programming (CLP) based implementation.

  8. Does a time constraint modify results from rating-based conjoint analysis? Case study with orange/pomegranate juice bottles.

    Science.gov (United States)

    Reis, Felipe; Machín, Leandro; Rosenthal, Amauri; Deliza, Rosires; Ares, Gastón

    2016-12-01

    People do not usually process all the available information on packages when making their food choices and rely on heuristics for making their decisions, particularly when time is limited. However, most consumer studies encourage participants to invest considerable time in making their choices. Therefore, imposing a time constraint in consumer studies may increase their ecological validity. In this context, the aim of the present work was to evaluate the influence of a time constraint on consumer evaluation of pomegranate/orange juice bottles using a rating-based conjoint task. A consumer study with 100 participants was carried out, in which they had to evaluate 16 pomegranate/orange fruit juice bottles, differing in bottle design, front-of-pack nutritional information, nutrition claim and processing claim, and to rate their intention to purchase. Half of the participants evaluated the bottle images without a time constraint and the other half had a time constraint of 3 s for evaluating each image. Eye movements were recorded during the evaluation. Results showed that a time constraint when evaluating intention to purchase did not largely modify the way in which consumers visually processed the bottle images. Regardless of the experimental condition (with or without time constraint), they tended to evaluate the same product characteristics and to give them the same relative importance. However, a trend towards a more superficial evaluation that skipped complex information was observed. Regarding the influence of product characteristics on consumer intention to purchase, bottle design was the variable with the largest relative importance in both conditions, overriding the influence of nutritional or processing characteristics, which stresses the importance of graphic design in shaping consumer perception. Copyright © 2016 Elsevier Ltd. All rights reserved.
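
    In rating-based conjoint analysis, the "relative importance" reported above is conventionally the range of an attribute's part-worth utilities divided by the sum of ranges across attributes. A sketch with hypothetical part-worths (the study's actual estimates are not reproduced here):

```python
def relative_importance(part_worths):
    """Relative importance of each conjoint attribute: the range of its
    part-worth utilities divided by the sum of ranges over all attributes."""
    ranges = {attr: max(v) - min(v) for attr, v in part_worths.items()}
    total = sum(ranges.values())
    return {attr: r / total for attr, r in ranges.items()}

# Hypothetical part-worths for the four manipulated factors
importance = relative_importance({
    "bottle design":      [-0.9, 0.2, 0.7],
    "front-of-pack info": [-0.2, 0.2],
    "nutrition claim":    [-0.1, 0.1],
    "processing claim":   [-0.15, 0.15],
})
print(importance["bottle design"])  # the dominant attribute in this toy data
```

    Comparing importance profiles computed this way for the timed and untimed conditions is one way to quantify the study's finding that the two conditions yielded similar attribute weights.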

  9. Hard Real-Time Performances in Multiprocessor-Embedded Systems Using ASMP-Linux

    Directory of Open Access Journals (Sweden)

    Daniel Pierre Bovet

    2008-01-01

    Full Text Available Multiprocessor systems, especially those based on multicore or multithreaded processors, and new operating system architectures can satisfy the ever-increasing computational requirements of embedded systems. ASMP-LINUX is a modified, high-responsiveness, open-source hard real-time operating system for multiprocessor systems, capable of providing high real-time performance while keeping the code simple and not impacting the performance of the rest of the system. Moreover, ASMP-LINUX does not require code changes or application recompilation/relinking. In order to assess the performance of ASMP-LINUX, benchmarks have been performed on several hardware platforms and configurations.

  10. Hard Real-Time Performances in Multiprocessor-Embedded Systems Using ASMP-Linux

    Directory of Open Access Journals (Sweden)

    Betti Emiliano

    2008-01-01

    Full Text Available Multiprocessor systems, especially those based on multicore or multithreaded processors, and new operating system architectures can satisfy the ever-increasing computational requirements of embedded systems. ASMP-LINUX is a modified, high-responsiveness, open-source hard real-time operating system for multiprocessor systems, capable of providing high real-time performance while keeping the code simple and not impacting the performance of the rest of the system. Moreover, ASMP-LINUX does not require code changes or application recompilation/relinking. In order to assess the performance of ASMP-LINUX, benchmarks have been performed on several hardware platforms and configurations.

  11. Investigating the Effect of Voltage-Switching on Low-Energy Task Scheduling in Hard Real-Time Systems

    National Research Council Canada - National Science Library

    Swaminathan, Vishnu; Chakrabarty, Krishnendu

    2005-01-01

    We investigate the effect of voltage-switching on task execution times and energy consumption for dual-speed hard real-time systems, and present a new approach for scheduling workloads containing periodic tasks...
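
    Although this abstract is truncated, the dual-speed setting admits a standard sketch: under EDF, a periodic task set is feasible at normalized speed s if its full-speed utilization does not exceed s. The following simplified model (ours, not the authors' algorithm, which also accounts for voltage-switching overheads) picks the lowest feasible speed:

```python
def choose_speed(tasks, speeds):
    """Lowest available normalized speed keeping an EDF-scheduled
    periodic task set feasible.

    tasks  : (wcet_at_full_speed, period) pairs
    speeds : available normalized speeds, e.g. (0.5, 1.0)

    At speed s execution times scale by 1/s, so the set stays
    feasible iff U/s <= 1, i.e. s >= U, where U is the full-speed
    utilization.  In a simplified model, per-cycle energy grows
    roughly like s**2, so the lowest feasible speed saves the most."""
    u = sum(c / p for c, p in tasks)
    for s in sorted(speeds):
        if u <= s:
            return s
    return None  # infeasible even at the highest available speed

print(choose_speed([(1, 4), (1, 5)], (0.5, 1.0)))  # U = 0.45 -> low speed suffices
print(choose_speed([(2, 4), (2, 5)], (0.5, 1.0)))  # U = 0.90 -> full speed needed
```

    The paper's contribution lies precisely where this sketch is naive: switching between the two voltages is not free, and the switching time itself perturbs task execution times.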

  12. Achieving few-femtosecond time-sorting at hard X-ray free-electron lasers

    Science.gov (United States)

    Harmand, M.; Coffee, R.; Bionta, M. R.; Chollet, M.; French, D.; Zhu, D.; Fritz, D. M.; Lemke, H. T.; Medvedev, N.; Ziaja, B.; Toleikis, S.; Cammarata, M.

    2013-03-01

    Recently, few-femtosecond pulses have become available at hard X-ray free-electron lasers. Coupled with the available sub-10 fs optical pulses, investigations into few-femtosecond dynamics are not far off. However, achieving sufficient synchronization between optical lasers and X-ray pulses continues to be challenging. We report a 'measure-and-sort' approach, which achieves sub-10 fs root-mean-squared (r.m.s.) error measurement at hard X-ray FELs, far beyond the 100-200 fs r.m.s. jitter limitations. This timing diagnostic, now routinely available at the Linac Coherent Light Source (LCLS), is based on ultrafast free-carrier generation in optically transparent materials. Correlation between two independent measurements enables unambiguous demonstration of ~6 fs r.m.s. error in reporting the optical/X-ray delay, with single shot error suggesting the possibility of reaching few-femtosecond resolution.

  13. Location-Dependent Query Processing Under Soft Real-Time Constraints

    Directory of Open Access Journals (Sweden)

    Zoubir Mammeri

    2009-01-01

    Full Text Available In recent years, mobile devices and applications have developed rapidly. In the database field, this development has required methods to handle new query types such as location-dependent queries (i.e., queries whose results depend on the query issuer's location). Although several studies have addressed problems related to location-dependent query processing, few works have considered the timing requirements that may be associated with queries (i.e., that the query results must be delivered to mobile clients on time). The main objective of this paper is to propose a solution for location-dependent query processing under soft real-time constraints. Hence, we propose methods to take into account client location-dependency and to maximize the percentage of queries respecting their deadlines. We validate our proposal by implementing a prototype based on the Oracle DBMS. Performance evaluation results show that the proposed solution optimizes the percentage of queries meeting their deadlines and the communication cost.

  14. SUPERLUMINOUS SUPERNOVAE POWERED BY MAGNETARS: LATE-TIME LIGHT CURVES AND HARD EMISSION LEAKAGE

    International Nuclear Information System (INIS)

    Wang, S. Q.; Wang, L. J.; Dai, Z. G.; Wu, X. F.

    2015-01-01

    Recently, research performed by two groups has revealed that the magnetar spin-down energy injection model with full energy trapping can explain the early-time light curves of SN 2010gx, SN 2013dg, LSQ12dlf, SSS120810, and CSS121015 but fails to fit the late-time light curves of these superluminous supernovae (SLSNe). These results imply that the original magnetar-powered model is challenged in explaining these SLSNe. Our paper aims to simultaneously explain both the early- and late-time data/upper limits by considering the leakage of hard emissions. We incorporate quantitatively the leakage effect into the original magnetar-powered model and derive a new semianalytical equation. Comparing the light curves reproduced by our revised magnetar-powered model with the observed data and/or upper limits of these five SLSNe, we found that the late-time light curves reproduced by our semianalytical equation are in good agreement with the late-time observed data and/or upper limits of SN 2010gx, CSS121015, SN 2013dg, and LSQ12dlf and the late-time excess of SSS120810, indicating that the magnetar-powered model might be responsible for these SLSNe and that gamma-ray and X-ray leakage is unavoidable when the hard photons are down-Comptonized to softer photons. To determine the details of the leakage effect and unveil the nature of SLSNe, more high-quality bolometric light curves and spectra of SLSNe are required.

  15. Dickens on the Industrial Revolution. Hard Times and Household Words

    OpenAIRE

    Herrero Migueláñez, Beatríz

    2015-01-01

    This work focuses on Dickens's ideas about the Industrial Revolution and their reflection in Hard Times and in the journalistic articles, published in Household Words, whose content is similar to that of the novel. It also undertakes a comparative analysis of the two existing versions of the novel and of the articles, and analyzes the genre of the novel considered as an industrial novel. From this study it can be concluded, first, that there is intertextuality between the ar...

  16. Quantum states and the Hadamard form. III. Constraints in cosmological space-times

    International Nuclear Information System (INIS)

    Najmi, A.; Ottewill, A.C.

    1985-01-01

    We examine the constraints on the construction of Fock spaces for scalar fields in spatially flat Robertson-Walker space-times imposed by requiring that the vacuum state of the theory have a two-point function possessing the Hadamard singularity structure required by standard renormalization theory. It is shown that any such vacuum state must be a second-order adiabatic vacuum. We discuss the global requirements on the two-point function for it to possess the Hadamard form at all times if it possesses it at one time.

  17. Reasoning about real-time systems with temporal interval logic constraints on multi-state automata

    Science.gov (United States)

    Gabrielian, Armen

    1991-01-01

    Models of real-time systems using a single paradigm often turn out to be inadequate, whether the paradigm is based on states, rules, event sequences, or logic. A model-based approach to reasoning about real-time systems is presented in which a temporal interval logic called TIL is employed to define constraints on a new type of high level automata. The combination, called hierarchical multi-state (HMS) machines, can be used to formally model a real-time system, a dynamic set of requirements, the environment, heuristic knowledge about planning-related problem solving, and the computational states of the reasoning mechanism. In this framework, mathematical techniques were developed for: (1) proving the correctness of a representation; (2) planning of concurrent tasks to achieve goals; and (3) scheduling of plans to satisfy complex temporal constraints. HMS machines allow reasoning about a real-time system from a model of how truth arises instead of merely depending on what is true in a system.

  18. The Effect of 3D Virtual Reality on Sequential Time Perception among Deaf and Hard-of-Hearing Children

    Science.gov (United States)

    Eden, Sigal

    2008-01-01

    Over the years deaf and hard-of-hearing children have been reported as having difficulty with time conception and, in particular, the proper arrangement of events in a logical, temporal order. The research examined whether deaf and hard-of-hearing children perceive a temporal sequence differently under different representational modes. We compared…

  19. Overcoming Learning Time And Space Constraints Through Technological Tool

    Directory of Open Access Journals (Sweden)

    Nafiseh Zarei

    2015-08-01

    Full Text Available Today, the use of technological tools has transformed language learning and language acquisition. Many instructors and lecturers believe that integrating Web-based learning tools into language courses allows pupils to become active learners during the learning process. This study investigates how the Learning Management Blog (LMB) overcomes the learning time and space constraints that affect students' language learning and language acquisition processes. The participants were 30 ESL students at the National University of Malaysia. A qualitative approach comprising an open-ended questionnaire and a semi-structured interview was used to collect data. The results of the study revealed that the students' language learning and acquisition processes were enhanced. The students did not face any learning time and space limitations while being engaged in the learning process via the LMB. They learned and acquired knowledge using the language learning materials and forum at any time and from anywhere. Keywords: learning time, learning space, learning management blog

  20. Work Hard / Play Hard

    OpenAIRE

    Burrows, J.; Johnson, V.; Henckel, D.

    2016-01-01

    Work Hard / Play Hard was a participatory performance/workshop or CPD experience hosted by interdisciplinary arts atelier WeAreCodeX, in association with AntiUniversity.org. As a socially/economically engaged arts practice, Work Hard / Play Hard challenged employees/players to get playful, or go to work. 'The game changes you, you never change the game'. Employee PLAYER A 'The faster the better.' Employer PLAYER B

  1. State control of discrete-time linear systems to be bound in state variables by equality constraints

    International Nuclear Information System (INIS)

    Filasová, Anna; Krokavec, Dušan; Serbák, Vladimír

    2014-01-01

    The paper is concerned with the problem of designing a discrete-time equivalent PI controller to control discrete-time linear systems in such a way that the closed-loop state variables satisfy prescribed equality constraints. Since the problem is generally singular, the design conditions are proposed in the form of an enhanced Lyapunov inequality, using a standard form of the Lyapunov function and a symmetric positive definite slack matrix. The results, offering conditions for the existence of the control and for optimal performance with respect to the prescribed equality constraints for square discrete-time linear systems, are illustrated with a numerical example to demonstrate the effectiveness and applicability of the considered approach.
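
    The structure of the control law can be illustrated with a scalar simulation; the plant and gains below are arbitrary illustrative values chosen by us, whereas the paper obtains the gains from an enhanced Lyapunov inequality so that the prescribed equality constraints hold:

```python
def simulate_pi(a=0.9, b=0.1, kp=2.0, ki=0.5, ref=1.0, steps=300):
    """Simulate the scalar discrete-time plant x[k+1] = a*x[k] + b*u[k]
    under the PI law u[k] = kp*e[k] + ki*sum(e), with e[k] = ref - x[k].
    The gains here are arbitrary stabilizing values; the paper instead
    derives them from an enhanced Lyapunov inequality so that the
    closed-loop state satisfies prescribed equality constraints."""
    x = acc = 0.0
    for _ in range(steps):
        e = ref - x
        acc += e                # integral (summed error) state
        x = a * x + b * (kp * e + ki * acc)
    return x

print(simulate_pi())  # integral action drives the state to the reference
```

    The integral (summed-error) state is what gives the discrete-time PI law zero steady-state error; the paper's contribution is choosing the gains so that, in addition, the state trajectory respects the equality constraints.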

  2. Time resolved, 2-D hard X-ray imaging of relativistic electron-beam target interactions on ETA-II

    International Nuclear Information System (INIS)

    Crist, C.E.; Sampayan, S.; Westenskow, G.; Caporaso, G.; Houck, T.; Weir, J.; Trimble, D.; Krogh, M.

    1998-01-01

    Advanced radiographic applications require a constant source size of less than 1 mm. To study the time history of a relativistic electron beam as it interacts with a bremsstrahlung converter, one of the diagnostics used is a multi-frame time-resolved hard x-ray camera. Experiments are being performed on the ETA-II accelerator at Lawrence Livermore National Laboratory to investigate details of the electron-beam/converter interactions. The camera captures 6 time-resolved images; each image is a 5 ns frame. By starting each successive frame 10 ns after the previous one, a 6-frame movie is created from the hard x-rays produced by the interaction of the 50-ns electron-beam pulse.

  3. Closing in on a Short-Hard Burst Progenitor: Constraints From Early-Time Optical Imaging and Spectroscopy of a Possible Host Galaxy of GRB 050509b

    Energy Technology Data Exchange (ETDEWEB)

    Bloom, Joshua S.; Prochaska, J.X.; Pooley, D.; Blake, C.W.; Foley, R.J.; Jha, S.; Ramirez-Ruiz, E.; Granot, J.; Filippenko, A.V.; Sigurdsson, S.; Barth, A.J.; Chen,; Cooper, M.C.; Falco, E.E.; Gal, R.R.; Gerke, B.F.; Gladders, M.D.; Greene, J.E.; Hennanwi, J.; Ho, L.C.; Hurley, K.; /UC, Berkeley, Astron. Dept. /Lick Observ.

    2005-06-07

    The localization of the short-duration, hard-spectrum gamma-ray burst GRB050509b by the Swift satellite was a watershed event. Never before had a member of this mysterious subclass of classic GRBs been rapidly and precisely positioned in a sky accessible to the bevy of ground-based follow-up facilities. Thanks to the nearly immediate relay of the GRB position by Swift, we began imaging the GRB field 8 minutes after the burst and have continued during the 8 days since. Though the Swift X-ray Telescope (XRT) discovered an X-ray afterglow of GRB050509b, the first ever of a short-hard burst, thus far no convincing optical/infrared candidate afterglow or supernova has been found for the object. We present a re-analysis of the XRT afterglow and find an absolute position of R.A. = 12h36m13.59s, Decl. = +28°59'04.9" (J2000), with a 1σ uncertainty of 3.68" in R.A., 3.52" in Decl.; this is about 4" to the west of the XRT position reported previously. Close to this position is a bright elliptical galaxy with redshift z = 0.2248 ± 0.0002, about 1' from the center of a rich cluster of galaxies. This cluster has detectable diffuse emission, with a temperature of kT = 5.25 (+3.36/−1.68) keV. We also find several (~11) much fainter galaxies consistent with the XRT position from deep Keck imaging and have obtained Gemini spectra of several of these sources. Nevertheless we argue, based on positional coincidences, that the GRB and the bright elliptical are likely to be physically related. We thus have discovered reasonable evidence that at least some short-duration, hard-spectrum GRBs are at cosmological distances. We also explore the connection of the properties of the burst and the afterglow, finding that GRB050509b was underluminous in both of these relative to long-duration GRBs. However, we also demonstrate that the ratio of the blast-wave energy to the γ-ray energy is consistent with that

  4. Constraints on silicates formation in the Si-Al-Fe system: Application to hard deposits in steam generators of PWR nuclear reactors

    Science.gov (United States)

    Berger, Gilles; Million-Picallion, Lisa; Lefevre, Grégory; Delaunay, Sophie

    2015-04-01

    Introduction: The hydrothermal crystallization of silicate phases in the Si-Al-Fe system may lead to industrial constraints encountered in the nuclear industry in at least two contexts: the geological repository for nuclear wastes and the formation of hard sludges in the steam generators of PWR nuclear plants. In the first situation, the chemical reactions between the Fe-canister and the surrounding clays have been extensively studied in laboratory [1-7] and pilot experiments [8]. These studies demonstrated that the high reactivity of metallic iron leads to the formation of Fe-silicates, berthierine-like, over a wide range of temperatures. By contrast, the formation of deposits in the steam generators of PWR plants, called hard sludges, is a newer and less studied issue which can affect reactor performance. Experiments: We present here a preliminary set of experiments reproducing the formation of hard sludges under conditions representative of the steam generator of a PWR power plant: 275°C, diluted solutions maintained at low potential by hydrazine addition and at alkaline pH by low concentrations of amines and ammonia. Magnetite, a corrosion by-product of the secondary circuit, is the source of iron, while aqueous Si and Al, the major impurities in this system, are supplied either as trace elements in the circulating solution or by addition of amorphous silica and alumina when considering confined zones. The fluid chemistry is monitored by sampling aliquots of the solution. Eh and pH are continuously measured by hydrothermal Cormet© electrodes implanted in a titanium hydrothermal reactor. Any transformation of the solid fraction was examined post mortem. These experiments evidenced the role of Al colloids as precursors of cements composed of kaolinite and boehmite, and the passivation of amorphous silica (which becomes unreactive), likely by sorption of aqueous iron. However, no Fe-bearing phase was formed, in contrast to many published studies on the Fe

  5. PAVENET OS: A Compact Hard Real-Time Operating System for Precise Sampling in Wireless Sensor Networks

    Science.gov (United States)

    Saruwatari, Shunsuke; Suzuki, Makoto; Morikawa, Hiroyuki

    The paper presents a compact hard real-time operating system for wireless sensor nodes called PAVENET OS. PAVENET OS provides hybrid multithreading: preemptive multithreading and cooperative multithreading. Both forms of multithreading are optimized for the two kinds of tasks found in wireless sensor networks: real-time tasks and best-effort ones. PAVENET OS can efficiently perform hard real-time tasks that cannot be performed by TinyOS. Through quantitative evaluation, the paper demonstrates that the hybrid multithreading achieves compactness and overheads comparable to those of TinyOS. The evaluation results show that PAVENET OS performs 100 Hz sensor sampling with 0.01% jitter while performing wireless communication tasks, whereas optimized TinyOS has 0.62% jitter. In addition, PAVENET OS has a small footprint and low overheads (minimum RAM size: 29 bytes, minimum ROM size: 490 bytes, minimum task switch time: 23 cycles).
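
    The 0.01% jitter figure can be made concrete by computing jitter from a trace of sampling timestamps. The metric below (standard deviation of successive intervals as a percentage of the nominal period) is our assumption about how such a figure might be derived, not a statement of the paper's measurement procedure:

```python
def jitter_percent(timestamps_us, period_us):
    """Jitter of a periodic sampling trace: standard deviation of the
    successive sample intervals, as a percentage of the nominal period."""
    gaps = [b - a for a, b in zip(timestamps_us, timestamps_us[1:])]
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return 100.0 * var ** 0.5 / period_us

# 100 Hz sampling (10,000 us period) with one sample arriving 1 us late
print(jitter_percent([0, 10000, 20001, 30000, 40000], 10000.0))
```

    At 100 Hz, a jitter of 0.01% of the 10 ms period corresponds to deviations on the order of a microsecond, which illustrates how tight the paper's sampling guarantee is.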

  6. Scheduling a maintenance activity under skills constraints to minimize total weighted tardiness and late tasks

    Directory of Open Access Journals (Sweden)

    Djalal Hedjazi

    2015-04-01

    Full Text Available Skill management is a key factor in improving the effectiveness of industrial companies, notably their maintenance services. The problem considered in this paper concerns the scheduling of maintenance tasks under resource (maintenance team) constraints. This problem is generally known as unrelated parallel machine scheduling. We consider the problem with the twin objectives of minimizing total weighted tardiness (TWT) and the number of tardy tasks. Our interest is focused particularly on solving this problem under skill constraints, where each resource has a skill level. We therefore propose a new efficient heuristic to obtain an approximate solution for this NP-hard problem and demonstrate its effectiveness through computational experiments. This heuristic is designed for implementation in a static maintenance scheduling problem (with unequal release dates, processing times and resource skills) while minimizing the objective functions mentioned above.
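    The record does not reproduce the authors' heuristic, but the problem shape can be illustrated with a simple skill-feasible dispatching rule: sort tasks by weighted due date and assign each to the earliest-free team whose skill level suffices. Everything below (field names, the sort key, the sample data) is a hypothetical sketch, not the paper's algorithm:

```python
# Greedy skill-constrained dispatching sketch (illustrative only).
# Assumes every task has at least one team meeting its skill requirement.
def schedule(tasks, teams):
    """tasks: dicts with id, release, proc, due, weight, skill.
    teams: name -> skill level. Returns (assignment, total weighted tardiness)."""
    free_at = {name: 0 for name in teams}
    assignment, twt = [], 0
    for task in sorted(tasks, key=lambda t: t["due"] / t["weight"]):
        eligible = [n for n, s in teams.items() if s >= task["skill"]]
        team = min(eligible, key=lambda n: free_at[n])  # earliest-free team
        start = max(free_at[team], task["release"])
        finish = start + task["proc"]
        free_at[team] = finish
        twt += task["weight"] * max(0, finish - task["due"])
        assignment.append((task["id"], team, start, finish))
    return assignment, twt

tasks = [
    {"id": "t1", "release": 0, "proc": 3, "due": 4, "weight": 2, "skill": 2},
    {"id": "t2", "release": 0, "proc": 2, "due": 3, "weight": 1, "skill": 1},
    {"id": "t3", "release": 1, "proc": 4, "due": 6, "weight": 3, "skill": 2},
]
plan, twt = schedule(tasks, {"A": 2, "B": 1})
print(twt)  # → 3 (only t3 finishes late, by 1 time unit, at weight 3)
```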

  7. Performance Comparison of EPICS IOC and MARTe in a Hard Real-Time Control Application

    Science.gov (United States)

    Barbalace, Antonio; Manduchi, Gabriele; Neto, A.; De Tommasi, G.; Sartori, F.; Valcarcel, D. F.

    2011-12-01

    EPICS is used worldwide mostly for controlling accelerators and large experimental physics facilities. Although EPICS is well suited to the design and development of automation systems, which are typically VME- or PLC-based, and of soft real-time systems, it may present several drawbacks when used to develop hard real-time systems and applications, especially when general purpose operating systems such as plain Linux are chosen. This is particularly true in fusion research devices, which typically employ several hard real-time systems, such as the magnetic control systems, that may require strict determinism and high performance in terms of jitter and latency. Serious deterioration of important plasma parameters may happen otherwise, possibly leading to an abrupt termination of the plasma discharge. The MARTe framework has been recently developed to fulfill the demanding requirements of such real-time systems, which are intended to run on general purpose operating systems, possibly integrated with the low-latency real-time preemption patches. MARTe has been adopted to develop a number of real-time systems in different tokamaks. In this paper, we first summarize the differences and similarities between an EPICS IOC and MARTe. Then we report on a set of performance measurements executed on an x86 64-bit multicore machine running Linux, with an IO control algorithm implemented both in an EPICS IOC and in MARTe.

  8. Elastic Spatial Query Processing in OpenStack Cloud Computing Environment for Time-Constraint Data Analysis

    Directory of Open Access Journals (Sweden)

    Wei Huang

    2017-03-01

    Full Text Available Geospatial big data analysis (GBDA) is extremely significant for time-constraint applications such as disaster response. However, time-constraint analysis is not yet a trivial task in the cloud computing environment. Spatial query processing (SQP) is typically computation-intensive and indispensable for GBDA, and the spatial range query, join query, and nearest neighbor query algorithms are not scalable without MapReduce-like frameworks. Parallel SQP algorithms (PSQPAs) are trapped in skew-processing, a known issue in geoscience. To satisfy time-constrained GBDA, we propose an elastic SQP approach in this paper. First, Spark is used to implement PSQPAs. Second, Kubernetes-managed CoreOS clusters provide self-healing Docker containers for running Spark clusters in the cloud. Spark-based PSQPAs are submitted to the Docker containers where Spark master instances reside. Finally, the horizontal pod autoscaler (HPA) scales Docker containers out and in to supply on-demand computing resources. Combined with an auto-scaling group of virtual instances, the HPA helps to find each of the five nearest neighbors for 46,139,532 query objects from 834,158 spatial data objects in less than 300 s. The experiments conducted on an OpenStack cloud demonstrate that auto-scaling containers can satisfy time-constraint GBDA in clouds.
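    The five-nearest-neighbour query that the record scales out with Spark can be stated in a few lines of serial Python; the distributed, auto-scaled version in the paper parallelizes this computation over partitioned data. The points and query below are made up for illustration:

```python
import heapq

# Brute-force k-nearest-neighbour spatial query (squared Euclidean distance).
def knn(query, points, k=5):
    def d2(p):
        return (p[0] - query[0]) ** 2 + (p[1] - query[1]) ** 2
    return heapq.nsmallest(k, points, key=d2)  # the k closest, nearest first

pts = [(0, 0), (1, 0), (0, 2), (3, 3), (1, 1), (5, 5)]
print(knn((0, 0), pts, k=3))  # → [(0, 0), (1, 0), (1, 1)]
```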

  9. Measures to reduce glyphosate runoff from hard surfaces, 2: effect of time interval between application and first precipitation event

    NARCIS (Netherlands)

    Luijendijk, C.D.; Beltman, W.H.J.; Smidt, R.A.; Pas, van der L.J.T.; Kempenaar, C.

    2005-01-01

    In this research, the effect of the moisture condition of hard surfaces on the emission of herbicides from those surfaces was quantified. In addition, the dissipation over time of glyphosate applied on brick pavement was determined. The outdoor experiment was carried out on 3 and 17 June 2003. In previous

  10. Multi-dimensional Bin Packing Problems with Guillotine Constraints

    DEFF Research Database (Denmark)

    Amossen, Rasmus Resen; Pisinger, David

    2010-01-01

    The problem addressed in this paper is the decision problem of determining if a set of multi-dimensional rectangular boxes can be orthogonally packed into a rectangular bin while satisfying the requirement that the packing should be guillotine cuttable. That is, there should exist a series of face parallel straight cuts that can recursively cut the bin into pieces so that each piece contains a box and no box has been intersected by a cut. The unrestricted problem is known to be NP-hard. In this paper we present a generalization of a constructive algorithm for the multi-dimensional bin packing problem, with and without the guillotine constraint, based on constraint programming.
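    The guillotine requirement in the abstract can be checked recursively in the two-dimensional case (the paper treats the general multi-dimensional problem): look for one full-length axis-parallel cut that intersects no box, then recurse on both sides. The sketch below is illustrative, not the authors' constraint-programming formulation:

```python
# 2-D guillotine-cuttability check. boxes: (x1, y1, x2, y2) rectangles.
def guillotine_cuttable(boxes):
    if len(boxes) <= 1:
        return True
    for axis in (0, 1):  # 0: vertical cuts, 1: horizontal cuts
        edges = {b[axis] for b in boxes} | {b[axis + 2] for b in boxes}
        for c in edges:  # candidate cut positions at box edges
            left = [b for b in boxes if b[axis + 2] <= c]
            right = [b for b in boxes if b[axis] >= c]
            # the cut is valid only if it splits the boxes into two
            # non-empty groups without slicing through any box
            if left and right and len(left) + len(right) == len(boxes):
                if guillotine_cuttable(left) and guillotine_cuttable(right):
                    return True
    return False

# Two boxes side by side: one vertical cut separates them.
print(guillotine_cuttable([(0, 0, 1, 2), (1, 0, 2, 2)]))  # → True
# The classic "pinwheel" packing admits no first guillotine cut.
pinwheel = [(0, 0, 2, 1), (2, 0, 3, 2), (1, 2, 3, 3), (0, 1, 1, 3), (1, 1, 2, 2)]
print(guillotine_cuttable(pinwheel))  # → False
```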

  11. Walking to the Beat of Their Own Drum: How Children and Adults Meet Timing Constraints

    Science.gov (United States)

    Gill, Simone V.

    2015-01-01

    Walking requires adapting to meet task constraints. Between 5 and 7 years of age, children’s walking approximates adult walking without constraints. To examine how children and adults adapt to meet timing constraints, 57 5- to 7-year-olds and 20 adults walked to slow and fast audio metronome paces. Both children and adults modified their walking. However, at the slow pace, children had more trouble matching the metronome compared to adults. The youngest children’s walking patterns deviated most from the slow metronome pace, and practice improved their performance. Five-year-olds were the only group that did not display carryover effects to the metronome paces. Findings are discussed in relation to what contributes to the development of adaptation in children. PMID:26011538

  12. Walking to the beat of their own drum: how children and adults meet timing constraints.

    Directory of Open Access Journals (Sweden)

    Simone V Gill

    Full Text Available Walking requires adapting to meet task constraints. Between 5 and 7 years of age, children's walking approximates adult walking without constraints. To examine how children and adults adapt to meet timing constraints, 57 5- to 7-year-olds and 20 adults walked to slow and fast audio metronome paces. Both children and adults modified their walking. However, at the slow pace, children had more trouble matching the metronome compared to adults. The youngest children's walking patterns deviated most from the slow metronome pace, and practice improved their performance. Five-year-olds were the only group that did not display carryover effects to the metronome paces. Findings are discussed in relation to what contributes to the development of adaptation in children.

  13. Hard electronics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    Hard material technologies were surveyed to establish a hard electronics technology offering superior characteristics under harsh operational or environmental conditions compared with conventional Si devices. The following technologies were separately surveyed: (1) device and integration technologies for wide-gap hard semiconductors such as SiC, diamond and nitrides; (2) hard semiconductor device technology for vacuum microelectronics; and (3) hard new-material device technology for oxides. The formation technology of oxide thin films made remarkable progress after the discovery of oxide superconductor materials, resulting in the development of an atomic layer growth method and a mist deposition method. This leading research is expected to solve issues difficult to realize with current Si technology, such as high-power, high-frequency and low-loss devices in power electronics, high-temperature-proof and radiation-proof devices in ultimate electronics, and high-speed, densely integrated devices in information electronics. 432 refs., 136 figs., 15 tabs.

  14. A Constraint Programming Model for Fast Optimal Stowage of Container Vessel Bays

    DEFF Research Database (Denmark)

    Delgado-Ortegon, Alberto; Jensen, Rune Møller; Janstrup, Kira

    2012-01-01

    Container vessel stowage planning is a hard combinatorial optimization problem with both high economic and environmental impact. We have developed an approach that often is able to generate near-optimal plans for large container vessels within a few minutes. It decomposes the problem into a master planning phase that distributes the containers to bay sections and a slot planning phase that assigns containers of each bay section to slots. In this paper, we focus on the slot planning phase of this approach and present a constraint programming and integer programming model for stowing a set of containers in a single bay section. This so-called slot planning problem is NP-hard and often involves stowing several hundred containers. Using state-of-the-art constraint solvers and modeling techniques, however, we were able to solve 90% of 236 real instances from our industrial collaborator to optimality.

  15. Hard X-ray bremsstrahlung production in solar flares by high-energy proton beams

    Science.gov (United States)

    Emslie, A. G.; Brown, J. C.

    1985-01-01

    The possibility that solar hard X-ray bremsstrahlung is produced by acceleration of stationary electrons by fast-moving protons, rather than vice versa, as commonly assumed, was investigated. It was found that a beam of protons which involves 1836 times fewer particles, each having an energy 1836 times greater than that of the electrons in the equivalent electron beam model, has exactly the same bremsstrahlung yield for a given target, i.e., the mechanism has an energetic efficiency equal to that of conventional bremsstrahlung models. Allowance for the different degrees of target ionization appropriate to the two models (for conventional flare geometries) makes the proton beam model more efficient than the electron beam model, by a factor of order three. The model places less stringent constraints than a conventional electron beam model on the flare energy release mechanism. It is also consistent with observed X-ray burst spectra, intensities, and directivities. The altitude distribution of hard X-rays predicted by the model agrees with observations only if nonvertical injection of the protons is assumed. The model is inconsistent with gamma-ray data in terms of conventional modeling.

  16. Terrestrial carbon turnover time constraints on future carbon cycle-climate feedback

    Science.gov (United States)

    Fan, N.; Carvalhais, N.; Reichstein, M.

    2017-12-01

    Understanding the terrestrial carbon cycle-climate feedback is essential to reduce the uncertainties resulting from the between-model spread in prognostic simulations (Friedlingstein et al., 2006). One perspective is to investigate which factors control the variability of the mean residence times of carbon in the land surface, and how these may change in the future, consequently affecting the response of terrestrial ecosystems to changes in climate as well as other environmental conditions. Carbon turnover time of the whole ecosystem is a dynamic parameter that represents how fast the carbon cycle circulates. Turnover time τ is an essential property for understanding the carbon exchange between the land and the atmosphere. Current Earth System Models (ESMs), supported by GVMs for the description of the land surface, show a strong convergence in GPP estimates but tend to show a wide range of simulated turnover times (Carvalhais, 2014). Thus, there is an emergent need for constraints on the projected response of the balance between terrestrial carbon fluxes and carbon stocks, which would give us more certainty in the response of the carbon cycle to climate change. However, the difficulty of obtaining such a constraint is partly due to the lack of observational data on the temporal change of terrestrial carbon stocks. Since new datasets of carbon stocks such as SoilGrids (Hengl et al., 2017) and fluxes such as GPP (Jung et al., 2017) are available, improvement in estimating turnover time can be achieved. In addition, previous studies ignored certain aspects such as the relationship between τ and nutrients, fires, etc. We would like to investigate τ and its role in the carbon cycle by combining observationally derived datasets and state-of-the-art model simulations.
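    The turnover time discussed above is commonly estimated, at steady state, as the ratio of total carbon stock to carbon influx (e.g., GPP), as in Carvalhais et al. (2014). The numbers below are round illustrative magnitudes, not results from this study:

```python
# Whole-ecosystem carbon turnover time: tau = stock / influx (steady state).
def turnover_time(stock_pg_c, influx_pg_c_per_yr):
    return stock_pg_c / influx_pg_c_per_yr

# Hypothetical global-scale values (PgC and PgC/yr), for illustration only.
print(turnover_time(2400.0, 120.0))  # → 20.0 years
```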

  17. Dynamic design method for deep hard rock tunnels and its application

    Directory of Open Access Journals (Sweden)

    Xia-Ting Feng

    2016-08-01

    Full Text Available Numerous deep underground projects have been designed and constructed in China, which are beyond the current specifications in terms of scale and construction difficulty. The severe failure problems induced by high in situ stress, such as rockburst, spalling, damage of deep surrounding rocks, and time-dependent damage, were observed during construction of these projects. To address these problems, the dynamic design method for deep hard rock tunnels is proposed based on the disintegration process of surrounding rocks using associated dynamic control theories and technologies. Seven steps are basically employed: (i) determination of design objective; (ii) characterization of site, rock mass and project, and identification of constraint conditions; (iii) selection or development of global design strategy; (iv) determination of modeling method and software; (v) preliminary design; (vi) comprehensive integrated method and dynamic feedback analysis; and (vii) final design. This dynamic method was applied to the construction of the headrace tunnels at the Jinping II hydropower station. The key technical issues encountered during the construction of deep hard rock tunnels, such as in situ stress distribution along the tunnels, mechanical properties and constitutive model of deep hard rocks, determination of mechanical parameters of surrounding rocks, stability evaluation of surrounding rocks, and optimization design of rock support and lining, have been adequately addressed. The proposed method and its application can provide guidance for deep underground projects characterized by similar geological conditions.

  18. Time-resolved scanning Kerr microscopy of flux beam formation in hard disk write heads

    International Nuclear Information System (INIS)

    Valkass, Robert A. J.; Spicer, Timothy M.; Burgos Parra, Erick; Hicken, Robert J.; Bashir, Muhammad A.; Gubbins, Mark A.; Czoschke, Peter J.; Lopusnik, Radek

    2016-01-01

    To meet growing data storage needs, the density of data stored on hard disk drives must increase. In pursuit of this aim, the magnetodynamics of the hard disk write head must be characterized and understood, particularly the process of “flux beaming.” In this study, seven different configurations of perpendicular magnetic recording (PMR) write heads were imaged using time-resolved scanning Kerr microscopy, revealing their detailed dynamic magnetic state during the write process. It was found that the precise position and number of driving coils can significantly alter the formation of flux beams during the write process. These results are applicable to the design and understanding of current PMR and next-generation heat-assisted magnetic recording devices, as well as being relevant to other magnetic devices.

  19. Influence of timing of hard palate repair in a two-stage procedure on early language development in Danish children with cleft palate

    DEFF Research Database (Denmark)

    Willadsen, Elisabeth

    2012-01-01

    Objective: to investigate the influence of timing of hard palate closure on early language development from 18 months to 3 years of age. Design: a prospective, randomized clinical trial. Participants: thirty-four children with UCLP with velum closure at 4 months of age, and hard palate closure at 12 months (EarlyHPR (early hard palate repair)) or 36 months (LateHPU (late hard palate unrepaired)) by random assignment. Thirty-five control children matched for gender and age. Methods: all children were video recorded during a play interaction with a parent at 18 months of age. These recordings … language development in cleft palate children. Key words: cleft palate, phonological development, lexical development, surgical timing.

  20. Timing Constraints Based High Performance Des Design And Implementation On 28nm FPGA

    DEFF Research Database (Denmark)

    Thind, Vandana; Pandey, Sujeet; Hussain, Dil muhammed Akbar

    2018-01-01

    In this work, we implement the DES algorithm on a 28nm Artix-7 FPGA. To achieve the high-performance design goal, we use the minimum period, maximum frequency, minimum low pulse and minimum high pulse for different cases of worst case slack, maximum delay, setup time, hold time and data skew path. The cases analysed, such as worst case slack, best case achievable, timing error and timing score, help in differentiating the amount of timing constraint at two different frequencies. We analysed that in timing analysis there is a maximum of 19.56% variation in worst case slack, 0

  1. Comparison of Time/Phase Lags in the Hard State and Plateau State of GRS 1915+105

    NARCIS (Netherlands)

    Pahari, M.; Neilsen, J.; Yadav, J.S.; Misra, R.; Uttley, P.

    2013-01-01

    We investigate the complex behavior of energy- and frequency-dependent time/phase lags in the plateau state and the radio-quiet hard (χ) state of GRS 1915+105. In our timing analysis, we find that when the source is faint in the radio, quasi-periodic oscillations (QPOs) are observed above 2 Hz and

  2. Time Delays Between Decimetric Type-Iii Bursts and Associated Hard X-Rays

    Science.gov (United States)

    Sawant, H. S.; Lattari, C. J. B.; Benz, A. O.; Dennis, B. R.

    1990-11-01

    In July 1985, radio observations were made at 1.6 GHz using the 13.7 m Itapetinga antenna with a time resolution of 3 ms. The hard X-ray observations were obtained from the HXRBS on SMM. Comparison of the 1.6 GHz observations with dynamic spectra in the frequency range 1000-100 MHz and with hard X-rays shows the following results: i) In 12 cases, we identify the continuation of type III-RD bursts up to 1.6 GHz, suggesting the presence of type III-RD bursts at 1.6 GHz. ii) For the first time, we have identified hard X-ray peaks delayed with respect to decimetric type III-RD bursts. These delays are longer than expected (~100 ms) and have been interpreted by assuming that the decimetric emission is at the second harmonic and caused by the leading edge of the exciter, whereas the X-ray peaks have been attributed to the complete entry of the exciter into the X-ray producing region.

  3. Conformal symmetry and pion form factor: Soft and hard contributions

    International Nuclear Information System (INIS)

    Choi, Ho-Meoyng; Ji, Chueng-Ryong

    2006-01-01

    We discuss a constraint of conformal symmetry in the analysis of the pion form factor. The usual power-law behavior of the form factor obtained in the perturbative QCD analysis can also be attained by taking negligible quark masses in the nonperturbative quark model analysis, confirming the recent AdS/CFT correspondence. We analyze the transition from soft to hard contributions in the pion form factor considering a momentum-dependent dynamical quark mass from an appreciable constituent quark mass at low momentum region to a negligible current quark mass at high momentum region. We find a correlation between the shape of nonperturbative quark distribution amplitude and the amount of soft and hard contributions to the pion form factor

  4. Selecting local constraint for alignment of batch process data with dynamic time warping

    DEFF Research Database (Denmark)

    Spooner, Max Peter; Kold, David; Kulahci, Murat

    2017-01-01

    … may be interpreted as a progress signature of the batch, which may be appended to the aligned data for further analysis. For the warping function to be a realistic reflection of the progress of a batch, it is necessary to impose some constraints on the dynamic time warping algorithm, to avoid
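    A common way to impose such a constraint, sketched here for illustration, is a Sakoe-Chiba band that keeps the warping path close to the diagonal; the paper compares several local constraints, and this band is only a stand-in, not the authors' specific choice:

```python
import numpy as np

# Dynamic time warping between 1-D sequences with a Sakoe-Chiba band:
# alignments are restricted to |i - j| <= band, so the warping cannot
# stretch or compress time unrealistically far.
def dtw_banded(x, y, band=3):
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - band), min(m, i + band) + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0])
b = np.array([0.0, 1.0, 1.0, 2.0, 3.0, 2.0, 1.0])  # a with one repeated sample
print(dtw_banded(a, b, band=2))  # → 0.0: the band still allows a perfect alignment
```

Tightening `band` trades alignment flexibility for a more realistic (and cheaper) warping, which is the selection problem the record studies.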

  5. Trajectory reshaping based guidance with impact time and angle constraints

    Directory of Open Access Journals (Sweden)

    Zhao Yao

    2016-08-01

    Full Text Available This study presents a novel impact time and angle constrained guidance law for homing missiles. The guidance law is first developed under the prior assumption of a stationary target, and is then extended to a practical maneuvering target scenario. To derive the closed-form guidance law, the trajectory reshaping technique is utilized; it results in defining a specific polynomial function with two unknown coefficients. These coefficients are determined to satisfy the impact time and angle constraints as well as zero miss distance. Furthermore, the proposed guidance law has three additional guidance gains as design parameters, which make it possible to adjust the guided trajectory according to the operational conditions and the missile's capability. Numerical simulations are presented to validate the effectiveness of the proposed guidance law.
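    The coefficient-solving step described above can be illustrated with a deliberately simplified model: write the lateral trajectory as a cubic in downrange distance and solve a small linear system for its coefficients from the launch state, zero miss distance, and impact-angle condition. The paper's actual polynomial, constraint set (including impact time) and gains differ; this only shows how unknown coefficients fall out of boundary conditions:

```python
import numpy as np

# Hypothetical trajectory-reshaping sketch: y(x) = c0 + c1*x + c2*x^2 + c3*x^3
# with boundary conditions at launch (x = 0) and impact (x = xf).
def shape_trajectory(y0, slope0, xf, impact_slope):
    A = np.array([
        [1.0, 0.0, 0.0,     0.0],        # y(0)   = y0
        [0.0, 1.0, 0.0,     0.0],        # y'(0)  = slope0
        [1.0, xf,  xf**2,   xf**3],      # y(xf)  = 0  (zero miss distance)
        [0.0, 1.0, 2 * xf,  3 * xf**2],  # y'(xf) = impact_slope
    ])
    b = np.array([y0, slope0, 0.0, impact_slope])
    return np.linalg.solve(A, b)

# Made-up launch offset, flat initial heading, and -0.5 impact slope.
c = shape_trajectory(y0=100.0, slope0=0.0, xf=1000.0, impact_slope=-0.5)
```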

  6. Physical constraints on models of gamma-ray bursters

    International Nuclear Information System (INIS)

    Epstein, R.I.

    1985-01-01

    This report deals with the constraints that can be placed on models of gamma-ray burst sources based only on well-established observational facts and physical principles. The premise is developed that the very hard X-ray and gamma-ray continuum spectra are well-established aspects of gamma-ray bursts. Recent theoretical work on gamma-ray bursts is summarized, with emphasis on the geometrical properties of the models. Constraints on the source models implied by the X-ray and gamma-ray spectra are described. The allowed ranges of luminosity and characteristic dimension for gamma-ray burst sources are shown. Some of the deductions and inferences about the nature of the gamma-ray burst sources are summarized. 67 refs., 3 figs

  7. Towards Reconfigurable, Separable and Hard Real-Time Hybrid Simulation and Test Systems

    Science.gov (United States)

    Quartier, F.; Delatte, B.; Joubert, M.

    2009-05-01

    Formation flight needs several new technologies, new disciplines, new approaches and, above all, more concurrent engineering by more players. One of the problems to be addressed is more complex simulation and test systems that are easy to reconfigure to include parts of the target hardware and that can provide sufficient power to handle simulation cores requiring one to two orders of magnitude more processing power than current technology provides. Critical technologies already addressed by CNES and Spacebel are study model reuse and simulator reconfigurability (Basiles), model portability (SMP2) and the federation of several simulators using HLA. Two more critical issues, addressed in ongoing R&D work by CNES and Spacebel and covered by this paper, concern time engineering and management. The first issue concerns separability (characterisation, identification and handling of separable subsystems) and its consequences for practical systems. Experiments on the Pleiades operational simulator have shown that precise simulation of instruments such as Doris and the Star Tracker can be added without significantly impacting overall performance. Improved time analysis leads to better system understanding and testability. The second issue concerns architectures for distributed hybrid simulator systems that provide hard real-time capabilities and can react with a relative time precision and jitter in the 10 to 50 µsecond range using mainstream PCs and mainstream operating systems. This opens a way to make smaller, economical hardware test systems that can be reconfigured into large hardware test systems without restarting development. Although such systems were considered next to impossible until now, distributed hard real-time systems are coming within reach when modern but mainstream electronics are used and processor cores can be isolated and reserved for real-time cores. This requires a complete rethinking of the

  8. Deducing Electron Properties from Hard X-Ray Observations

    Science.gov (United States)

    Kontar, E. P.; Brown, J. C.; Emslie, A. G.; Hajdas, W.; Holman, G. D.; Hurford, G. J.; Kasparova, J.; Mallik, P. C. V.; Massone, A. M.; McConnell, M. L.

    2011-01-01

    X-radiation from energetic electrons is the prime diagnostic of flare-accelerated electrons. The observed X-ray flux (and polarization state) is fundamentally a convolution of the cross-section for the hard X-ray emission process(es) in question with the electron distribution function, which is in turn a function of energy, direction, spatial location and time. To address the problems of particle propagation and acceleration one needs to infer as much information as possible on this electron distribution function, through a deconvolution of this fundamental relationship. This review presents recent progress toward this goal using spectroscopic, imaging and polarization measurements, primarily from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). Previous conclusions regarding the energy, angular (pitch angle) and spatial distributions of energetic electrons in solar flares are critically reviewed. We discuss the role and the observational evidence of several radiation processes: free-free electron-ion, free-free electron-electron, free-bound electron-ion, photoelectric absorption and Compton backscatter (albedo), using both spectroscopic and imaging techniques. This unprecedented quality of data allows for the first time inference of the angular distributions of the X-ray-emitting electrons and improved model-independent inference of electron energy spectra and emission measures of thermal plasma. Moreover, imaging spectroscopy has revealed hitherto unknown details of solar flare morphology and detailed spectroscopy of coronal, footpoint and extended sources in flaring regions. Additional attempts to measure hard X-ray polarization were not sufficient to put constraints on the degree of anisotropy of electrons, but point to the importance of obtaining good quality polarization data in the future.

  9. Constraint-Based Local Search for Constrained Optimum Paths Problems

    Science.gov (United States)

    Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal

    Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have been traditionally approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees to find high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.

  10. Social Justice in Hard Times: Celebrating the Vision of Dr. Martin Luther King, Jr.

    Science.gov (United States)

    Nieto, Sonia

    2005-01-01

    It is important to remember that one's presence "can" create a clamor, a person's action "does" make a difference. The author is reminded of this fact whenever she thinks about a poem by Angel Nieto. Similarly, individuals need to be reminded of this fact more than ever before, because these are hard times for social justice. As individuals…

  11. Zhang neural network for online solution of time-varying convex quadratic program subject to time-varying linear-equality constraints

    International Nuclear Information System (INIS)

    Zhang Yunong; Li Zhan

    2009-01-01

    In this Letter, by following Zhang et al.'s method, a recurrent neural network (termed the Zhang neural network, ZNN) is developed and analyzed for solving online the time-varying convex quadratic-programming problem subject to time-varying linear-equality constraints. Different from conventional gradient-based neural networks (GNNs), such a ZNN model makes full use of the time-derivative information of the time-varying coefficients. The resultant ZNN model is theoretically proved to have global exponential convergence to the time-varying theoretical optimal solution of the investigated time-varying convex quadratic program. Computer-simulation results further substantiate the effectiveness, efficiency and novelty of such a ZNN model and method.
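    The ZNN idea can be sketched numerically: stack the time-varying KKT conditions of the equality-constrained QP into M(t)y = v(t), then drive the error e = M(t)y − v(t) to zero with ẏ = M(t)⁻¹(−γe + v̇ − Ṁy). The particular Q, p, C, d, gain and step size below are made-up illustrative choices, not those of the Letter:

```python
import numpy as np

# ZNN-style tracking of the time-varying KKT system of
#   minimize 0.5*x'Q(t)x + p(t)'x   subject to   C x = d(t).
def Q(t): return np.array([[4 + np.sin(t), 0.0], [0.0, 4 + np.cos(t)]])
def p(t): return np.array([np.cos(t), np.sin(t)])
def d(t): return np.array([np.sin(t)])

C = np.array([[1.0, 1.0]])

def M(t):  # time-varying KKT matrix [[Q, C'], [C, 0]]
    return np.vstack([np.hstack([Q(t), C.T]), np.hstack([C, np.zeros((1, 1))])])

def v(t):  # time-varying right-hand side [-p; d]
    return np.concatenate([-p(t), d(t)])

def znn_track(gamma=100.0, dt=1e-3, T=2.0, h=1e-5):
    y, t = np.zeros(3), 0.0                      # y = [x1, x2, lambda]
    while t < T:
        Mdot = (M(t + h) - M(t - h)) / (2 * h)   # finite-difference dM/dt
        vdot = (v(t + h) - v(t - h)) / (2 * h)   # finite-difference dv/dt
        e = M(t) @ y - v(t)                      # error to be zeroed
        ydot = np.linalg.solve(M(t), -gamma * e + vdot - Mdot @ y)
        y, t = y + dt * ydot, t + dt             # forward-Euler step
    return y, t

y, t_end = znn_track()
residual = np.linalg.norm(M(t_end) @ y - v(t_end))
print(residual)  # small: the network tracks the moving optimum
```

A gradient-based network would omit the v̇ − Ṁy terms and therefore lag behind the moving solution; including them is exactly the ZNN advantage the abstract claims.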

  12. An investigation of leg and trunk strength and reaction times of hard-style martial arts practitioners.

    Science.gov (United States)

    Donovan, Oliver O; Cheung, Jeanette; Catley, Maria; McGregor, Alison H; Strutton, Paul H

    2006-01-01

    The purpose of this study was to investigate trunk and knee strength in practitioners of hard-style martial arts. An additional objective was to examine reaction times in these participants by measuring simple reaction times (SRT), choice reaction times (CRT) and movement times (MT). Thirteen high-level martial artists and twelve sedentary participants were tested under isokinetic and isometric conditions on an isokinetic dynamometer. Response and movement times were also measured in response to simple and choice auditory cues. Results indicated that the martial arts group generated greater body-weight-adjusted peak torque with both legs at all speeds during isokinetic extension and flexion, and in isometric extension but not flexion. In isokinetic and isometric trunk flexion and extension, martial artists tended to have higher peak torques than controls, but the differences were not significant (p > 0.05). During the SRT and CRT tasks the martial artists were no quicker in lifting their hand off a button in response to the stimulus [reaction time (RT)] but were significantly faster in moving to press another button [movement time (MT)]. In conclusion, the results reveal that training in a martial art increases the strength of both the flexors and extensors of the leg. Furthermore, practitioners have faster movement times to auditory stimuli. These results are consistent with the physical aspects of the martial arts. Key Points: Martial artists undertaking hard-style martial arts have greater strength in their knee flexor and extensor muscles as tested under isokinetic testing. Under isometric testing conditions they have stronger knee extensors only. The trunk musculature is generally stronger under both conditions of testing in the martial artists, although not significantly. The total reaction times of the martial artists to an auditory stimulus were significantly faster than those of the control participants.
When analysed further it was revealed that the decrease in reaction time

  13. HARD X-RAY ASYMMETRY LIMITS IN SOLAR FLARE CONJUGATE FOOTPOINTS

    Energy Technology Data Exchange (ETDEWEB)

    Daou, Antoun G.; Alexander, David, E-mail: agdaou@rice.edu, E-mail: dalex@rice.edu [Department of Physics and Astronomy, Rice University, 6100 Main Street, MS 108, Houston, TX, 77005 (United States)

    2016-11-20

    The transport of energetic electrons in a solar flare is modeled using a time-dependent one-dimensional Fokker–Planck code that incorporates asymmetric magnetic convergence. We derive the temporal and spectral evolution of the resulting hard X-ray (HXR) emission in the conjugate chromospheric footpoints, assuming thick target photon production, and characterize the time evolution of the numerically simulated footpoint asymmetry and its relationship to the photospheric magnetic configuration. The thick target HXR asymmetry in the conjugate footpoints is found to increase with magnetic field ratio as expected. However, we find that the footpoint HXR asymmetry saturates for conjugate footpoint magnetic field ratios ≥4. This result is borne out in a direct comparison with observations of 44 double-footpoint flares. The presence of such a limit has not been reported before, and may serve as both a theoretical and observational benchmark for testing a range of particle transport and flare morphology constraints, particularly as a means to differentiate between isotropic and anisotropic particle injection.

  14. Possibilities of implementation of synchronous Ethernet in popular Ethernet version using timing and interference constraints

    Directory of Open Access Journals (Sweden)

    Seetaiah KILARU

    2015-12-01

Popular network architectures follow packet-based designs instead of conventional time-division multiplexing. The existing Ethernet is asynchronous in nature and was not designed with timing-transfer constraints in mind. To meet the next-generation network challenge of efficient bandwidth use and faster data rates, we have to deploy networks with low latency. This can be achieved by Synchronous Ethernet (SyncE). In SyncE, a phase-locked loop (PLL) is used to remove the jitter from the signal recovered by the clock recovery circuit before feeding it to the transmission device. The network must be designed so that the normal functions of Ethernet remain unaffected even though a timing path is introduced at the physical layer. This paper gives a detailed outlook on how SyncE is achieved from the asynchronous format. A reference model of 100Base-TX/FX was analyzed with respect to timing and interference constraints. Finally, the data-rate improvement achieved with the proposed method was analyzed.

  15. Constraint Solver Techniques for Implementing Precise and Scalable Static Program Analysis

    DEFF Research Database (Denmark)

    Zhang, Ye

    solver using unification we could make a program analysis easier to design and implement, much more scalable, and still as precise as expected. We present an inclusion constraint language with the explicit equality constructs for specifying program analysis problems, and a parameterized framework...... developers to build reliable software systems more quickly and with fewer bugs or security defects. While designing and implementing a program analysis remains a hard work, making it both scalable and precise is even more challenging. In this dissertation, we show that with a general inclusion constraint...... data flow analyses for C language, we demonstrate a large amount of equivalences could be detected by off-line analyses, and they could then be used by a constraint solver to significantly improve the scalability of an analysis without sacrificing any precision....

  16. Bayesian Model Selection under Time Constraints

    Science.gov (United States)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, a time series of a quantity of interest can be simulated by an autoregressive process model that takes less than a second for one run, or by a model based on partial differential equations with runtimes of up to several hours or even days. Classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and represents a trade-off between the bias of a model and its complexity. In practice, however, the runtime of models is another factor relevant to the weighting in model selection. Hence, we believe it should be included, leading to an overall trade-off between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue that, under time constraints, more expensive models can be sampled much less often than faster models (in direct proportion to their runtime). Since sampling-based strategies are always subject to statistical sampling error, the computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model. We present a straightforward way to include this misbalance in the model weights that form the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of the model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
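The core idea of the abstract above, a bootstrap error estimate for a sampling-based evidence value, can be sketched in a few lines. This is not the authors' code: the likelihood values, sample sizes, and the simple mean-of-likelihoods evidence estimator are illustrative assumptions.

```python
import math
import random

random.seed(0)

def evidence_with_error(log_liks, n_boot=500):
    """Monte Carlo evidence estimate (mean likelihood over prior samples)
    plus a bootstrap standard error of that estimate."""
    liks = [math.exp(ll) for ll in log_liks]
    bme = sum(liks) / len(liks)
    boots = []
    for _ in range(n_boot):
        resample = [random.choice(liks) for _ in liks]
        boots.append(sum(resample) / len(resample))
    mean_b = sum(boots) / n_boot
    se = math.sqrt(sum((b - mean_b) ** 2 for b in boots) / (n_boot - 1))
    return bme, se

# Under a fixed time budget the fast model affords many prior samples,
# the expensive model only a few (in proportion to their runtimes).
fast_loglik = [random.gauss(-2.0, 1.0) for _ in range(2000)]
slow_loglik = [random.gauss(-1.8, 1.0) for _ in range(50)]

bme_fast, se_fast = evidence_with_error(fast_loglik)
bme_slow, se_slow = evidence_with_error(slow_loglik)
# The evidence of the under-sampled (expensive) model carries a larger
# sampling error; that misbalance can be folded into the model weights.
```

The under-sampled model ends up with a much larger standard error on its evidence, which is exactly the "insufficient significance" the abstract proposes to penalize.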

  17. Banking Competition and Soft Budget Constraints: How Market Power can Threaten Discipline in Lending

    NARCIS (Netherlands)

    Arping, S.

    2012-01-01

In imperfectly competitive credit markets, banks can face a tradeoff between exploiting their market power and enforcing hard budget constraints. As market power rises, banks eventually find it too costly to discipline underperforming borrowers by stopping their projects. Lending relationships become

  18. A Pilot Study Examining the Effects of Time Constraints on Student Performance in Accounting Classes

    Science.gov (United States)

    Morris, David E., Sr.; Scott, John

    2017-01-01

    The purpose of this study was to examine the effects, if any, of time constraints on the success of accounting students completing exams. This study examined how time allowed to take exams affected the grades on examinations in three different accounting classes. Two were sophomore classes and one was a senior accounting class. This limited pilot…

  19. A constructive heuristic for time-dependent multi-depot vehicle routing problem with time-windows and heterogeneous fleet

    Directory of Open Access Journals (Sweden)

    Behrouz Afshar-Nadjafi

    2017-01-01

In this paper, we consider the time-dependent multi-depot vehicle routing problem. The objective is to minimize the total heterogeneous fleet cost, assuming that the travel time between locations depends on the departure time. In addition, hard time-window constraints for the customers and a limit on the maximum number of vehicles per depot must be satisfied. The problem is formulated as a mixed integer programming model, and a constructive heuristic procedure is proposed for it. The efficiency of the proposed algorithm is evaluated on 180 test problems. The computational results indicate that the procedure is capable of obtaining satisfactory solutions.
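To make the interplay of time-dependent travel times and hard time windows concrete, here is a deliberately tiny greedy route builder. The rush-hour speed model, distances, and windows are invented for illustration; this is not the paper's heuristic or data.

```python
def travel_time(dist, depart):
    """Time-dependent travel time: a surcharge during the 7-9 rush hour."""
    return dist * (1.5 if 7 <= depart < 9 else 1.0)

def build_route(depot, customers, start=6.0):
    """Greedily extend one route with the nearest customer whose hard
    time window [open, close] can still be met at the arrival time."""
    route, t, pending = [depot], start, dict(customers)
    while pending:
        feasible = []
        for name, (dist, (lo, hi)) in pending.items():
            arrive = max(t + travel_time(dist, t), lo)  # wait if early
            if arrive <= hi:                            # hard window
                feasible.append((dist, arrive, name))
        if not feasible:
            break        # remaining customers need another vehicle/route
        dist, arrive, name = min(feasible)              # nearest feasible
        route.append(name)
        t = arrive
        del pending[name]
    return route, t

# Distances are simplified to one value per customer for brevity.
customers = {"A": (1.0, (6.0, 12.0)),
             "B": (2.0, (8.0, 9.0)),
             "C": (5.0, (6.0, 7.0))}
route, finish = build_route("depot", customers)
# Serving A first pushes the next departure into rush hour, which makes
# B's window unreachable: the route ends after A.
```

The example shows why departure-time-dependent travel times matter: customer B is feasible at 06:00 but not after the rush-hour surcharge kicks in.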

  20. Influence of ageing time on hardness, microstructure and wear behaviour of AISI2507 super duplex stainless steel

    Science.gov (United States)

    Davanageri, Mahesh; Narendranath, S.; Kadoli, Ravikiran

    2017-08-01

The effect of ageing time on the hardness, microstructure and wear behaviour of super duplex stainless steel AISI 2507 is examined. The material was solution treated at 1050 °C and water quenched; ageing was then carried out at 850 °C for 30 min, 60 min and 90 min. The chromium (Cr) and molybdenum (Mo) enriched intermetallic sigma phase (σ) was found to precipitate at the ferrite/austenite interface and within the ferrite region. The concentration of the sigma phase, quantified by a combination of scanning electron microscopy and image analysis, increases with increasing ageing time, leading to a significant increase in hardness. X-ray diffraction (XRD) and energy-dispersive x-ray (EDX) analysis were employed to investigate the element distribution and phase identification. Wear characteristics of the aged super duplex stainless steel were measured by varying normal loads, sliding speeds and sliding distances, and compared with solution-treated (as-cast) specimens. Scanning electron microscopy was used to assist in the analysis of the worn surfaces. The outcomes suggest that the increase in the percentage of sigma phase increases the hardness and wear resistance of the heat-treated specimens compared to the solution-treated (as-cast) specimens.

  1. Diagnostics of underwater electrical wire explosion through a time- and space-resolved hard x-ray source.

    Science.gov (United States)

    Sheftman, D; Shafer, D; Efimov, S; Gruzinsky, K; Gleizer, S; Krasik, Ya E

    2012-10-01

    A time- and space-resolved hard x-ray source was developed as a diagnostic tool for imaging underwater exploding wires. A ~4 ns width pulse of hard x-rays with energies of up to 100 keV was obtained from the discharge in a vacuum diode consisting of point-shaped tungsten electrodes. To improve contrast and image quality, an external pulsed magnetic field produced by Helmholtz coils was used. High resolution x-ray images of an underwater exploding wire were obtained using a sensitive x-ray CCD detector, and were compared to optical fast framing images. Future developments and application of this diagnostic technique are discussed.

  2. Photon technology. Hard photon technology; Photon technology. Hard photon gijutsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

For the application of photons to industrial technologies, a hard photon technology was surveyed, in particular one using photon beams of 0.1-200 nm in wavelength. Its features, such as selective atom reaction, dense inner-shell excitation and high spatial resolution by quantum energy, are expected to provide innovative techniques for various fields such as fine machining, material synthesis and advanced inspection technology. This wavelength region has hardly been utilized in industrial fields because of the poor development of suitable photon sources and optical devices. The developmental significance, time of practical use and issues of hard photon reduction lithography were surveyed for lithography in the ultra-fine region below 0.1 µm. For hard photon analysis/evaluation technology, the industrial use of analysis, measurement and evaluation technologies based on micro-beams was reviewed, and optimum photon sources and optical systems were surveyed. Prediction of surface and surface-layer modification by inner-shell excitation, the future trend of this process and the development of a vacuum ultraviolet light source were also surveyed. 383 refs., 153 figs., 17 tabs.

  3. A Hybrid Method for Modeling and Solving Supply Chain Optimization Problems with Soft and Logical Constraints

    Directory of Open Access Journals (Sweden)

    Paweł Sitek

    2016-01-01

This paper presents a hybrid method for modeling and solving supply chain optimization problems with soft, hard, and logical constraints. The ability to implement soft and logical constraints is a very important functionality for supply chain optimization models. Such constraints are particularly useful for modeling problems resulting from commercial agreements, contracts, competition, technology, safety, and environmental conditions. Two programming and solving environments, mathematical programming (MP) and constraint logic programming (CLP), were combined in the hybrid method. This integration and hybridization, together with an adequate multidimensional transformation of the problem (as a presolving method), helped to substantially reduce the search space of combinatorial models for supply chain optimization problems. The operations research MP and declarative CLP, in which constraints are modeled in different ways and different solving procedures are implemented, were linked together to exploit the strengths of both. This approach is particularly important for decision and combinatorial optimization models in which the objective function and constraints involve many summed decision variables (common in manufacturing, supply chain management, project management, and logistics problems). The ECLiPSe system with the Eplex library was proposed to implement the hybrid method. Additionally, the proposed hybrid transformed model is compared with a Mixed Integer Linear Programming (MILP) model on the same data instances. For the illustrative models, its use allowed optimal solutions to be found eight to one hundred times faster and reduced the size of the combinatorial problem to a significant extent.
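The distinction between hard, soft, and logical constraints can be made concrete with a toy model. The sketch below uses plain brute-force enumeration rather than the paper's MP/CLP hybrid, and the quantities, costs, and penalty weight are invented:

```python
import itertools

# Toy model: choose order quantities x1, x2 in {0..10} subject to
#   hard:    x1 + x2 >= 8            (must hold, infeasible otherwise)
#   soft:    x1 <= 4                 (violations penalized in the cost)
#   logical: x2 == 0 or x2 >= 3      (minimum batch size if ordered)
def objective(x1, x2):
    cost = 3 * x1 + 2 * x2
    penalty = 10 * max(0, x1 - 4)      # soft-constraint violation cost
    return cost + penalty

best = None
for x1, x2 in itertools.product(range(11), repeat=2):
    if x1 + x2 < 8:                    # hard constraint: prune
        continue
    if 0 < x2 < 3:                     # logical constraint: prune
        continue
    val = objective(x1, x2)
    if best is None or val < best[0]:
        best = (val, x1, x2)
```

Hard and logical constraints prune the search space outright, while the soft constraint only reshapes the objective; that is the modeling distinction the paper's hybrid method exploits at scale.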

  4. Proximate effects of temperature versus evolved intrinsic constraints for embryonic development times among temperate and tropical songbirds

    Science.gov (United States)

    Ton, Riccardo; Martin, Thomas E.

    2017-01-01

    The relative importance of intrinsic constraints imposed by evolved physiological trade-offs versus the proximate effects of temperature for interspecific variation in embryonic development time remains unclear. Understanding this distinction is important because slow development due to evolved trade-offs can yield phenotypic benefits, whereas slow development from low temperature can yield costs. We experimentally increased embryonic temperature in free-living tropical and north temperate songbird species to test these alternatives. Warmer temperatures consistently shortened development time without costs to embryo mass or metabolism. However, proximate effects of temperature played an increasingly stronger role than intrinsic constraints for development time among species with colder natural incubation temperatures. Long development times of tropical birds have been thought to primarily reflect evolved physiological trade-offs that facilitate their greater longevity. In contrast, our results indicate a much stronger role of temperature in embryonic development time than currently thought.

  5. Influence of flow constraints on the properties of the critical endpoint of symmetric nuclear matter

    Science.gov (United States)

    Ivanytskyi, A. I.; Bugaev, K. A.; Sagun, V. V.; Bravina, L. V.; Zabrodin, E. E.

    2018-06-01

We propose a novel family of equations of state for symmetric nuclear matter based on the induced surface tension concept for the hard-core repulsion. It is shown that, with only four adjustable parameters, the suggested equations of state can simultaneously reproduce not only the main properties of the nuclear matter ground state but also the proton flow constraint up to its maximal particle number densities. Varying the model parameters, we carefully examine the range of values of the incompressibility constant of normal nuclear matter and its critical temperature that are consistent with the proton flow constraint. This analysis allows us to show that the physically most justified value of the nuclear matter critical temperature is 15.5-18 MeV, the incompressibility constant is 270-315 MeV and the hard-core radius of nucleons is less than 0.4 fm.

  6. Lot Sizing Based on Stochastic Demand and Service Level Constraint

    Directory of Open Access Journals (Sweden)

Hajar Shirneshan

    2012-06-01

Considering its applications, stochastic lot sizing is a significant subject in production planning. Moreover, the concept of a service level is more applicable than shortage cost from a manager's viewpoint. In this paper, the stochastic multi-period multi-item capacitated lot sizing problem is investigated under a service-level constraint. First, the single-item model with no capacity constraint is developed considering the service level; it is solved using a dynamic programming algorithm and the optimal solution is derived. The model is then generalized to the multi-item problem with a capacity constraint. The stochastic multi-period multi-item capacitated lot sizing problem is NP-hard, hence the model cannot be solved by exact optimization approaches. Therefore, the simulated annealing method is applied to solve the problem. Finally, in order to evaluate the efficiency of the model, the low level criterion has been used.
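A generic simulated-annealing skeleton for a deterministic toy version of the lot-sizing problem is sketched below. The paper's model additionally handles stochastic demand and service-level constraints, so the data, move rule, and cooling schedule here are purely illustrative.

```python
import math
import random

random.seed(1)

# Toy single-item lot sizing: choose production quantities per period to
# cover deterministic demand at minimum setup + holding cost, under a
# per-period capacity (infeasible plans get infinite cost).
demand, cap, setup, hold = [40, 30, 50, 20], 60, 100.0, 1.0

def cost(plan):
    inv, total = 0, 0.0
    for q, d in zip(plan, demand):
        if q > cap:
            return float("inf")        # capacity violated
        inv += q - d
        if inv < 0:
            return float("inf")        # demand unmet (hard constraint)
        total += (setup if q > 0 else 0.0) + hold * inv
    return total

def anneal(plan, temp=50.0, cooling=0.95, steps=2000):
    best, best_c = list(plan), cost(plan)
    cur, cur_c = list(plan), best_c
    for _ in range(steps):
        cand = list(cur)
        i = random.randrange(len(cand))
        cand[i] = max(0, cand[i] + random.choice([-10, 10]))  # local move
        c = cost(cand)
        # Accept improvements always, worsenings with Boltzmann probability.
        if c < cur_c or random.random() < math.exp(-(c - cur_c) / temp):
            cur, cur_c = cand, c
        if cur_c < best_c:
            best, best_c = cur, cur_c
        temp *= cooling
    return best, best_c

plan0 = list(demand)                   # lot-for-lot start: feasible
best, best_c = anneal(plan0)
```

Starting from a feasible lot-for-lot plan guarantees the incumbent cost is finite, and the annealer can only improve on it.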

  7. Temporal Concurrent Constraint Programming

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Palamidessi, Catuscia; Valencia, Frank Dan

    2002-01-01

    The ntcc calculus is a model of non-deterministic temporal concurrent constraint programming. In this paper we study behavioral notions for this calculus. In the underlying computational model, concurrent constraint processes are executed in discrete time intervals. The behavioral notions studied...

  8. A Study on Evaluation Issues of Real-Time Operating System in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Kim, Y. M.; Jeong, C. H.; Koh, J. S.

    2006-01-01

Control applications such as aircraft, robotics and nuclear power plants have to maintain a very high level of safety, typically defined as the avoidance of unplanned events resulting in hazards. These applications usually operate with a hard real-time operating system (RTOS), and in this case the hard RTOS software should be reliable and safe. An RTOS used in a safety-critical I and C system is the base software for satisfying the real-time constraints, so careful evaluation of its safety and functionality is very important. In this paper, we present a case study of RTOSs used in actual nuclear power plants (NPPs) and suggest an evaluation approach for such RTOSs.

  9. A Study on Evaluation Issues of Real-Time Operating System in Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Y. M.; Jeong, C. H.; Koh, J. S. [Korea Institute of Nuclear Safety, Taejon (Korea, Republic of)

    2006-07-01

Control applications such as aircraft, robotics and nuclear power plants have to maintain a very high level of safety, typically defined as the avoidance of unplanned events resulting in hazards. These applications usually operate with a hard real-time operating system (RTOS), and in this case the hard RTOS software should be reliable and safe. An RTOS used in a safety-critical I and C system is the base software for satisfying the real-time constraints, so careful evaluation of its safety and functionality is very important. In this paper, we present a case study of RTOSs used in actual nuclear power plants (NPPs) and suggest an evaluation approach for such RTOSs.

  10. Overcoming the hard law/soft law dichotomy in times of (financial crises

    Directory of Open Access Journals (Sweden)

    Rolf H. Weber

    2012-03-01

Traditional legal doctrine calls for hard law to regulate markets. Nevertheless, in financial markets soft law has a long tradition, not least due to the lack of multilateral agreements in this field. On the one hand, the recent financial crisis has shown that soft law does not suffice to avoid detrimental developments; on the other hand, a straight call for hard law would not be able to address the recognized regulatory weaknesses. Therefore, emphasis should be put on the possibilities of combining hard law and soft law; specific areas in which such a "combination" can be realized are organizational issues, transparency requirements, and dispute settlement mechanisms.

  11. Hard coal; Steinkohle

    Energy Technology Data Exchange (ETDEWEB)

    Loo, Kai van de; Sitte, Andreas-Peter [Gesamtverband Steinkohle e.V., Herne (Germany)

    2013-04-01

The year 2012 saw growth in the consumption of hard coal at both the national and the international level. Worldwide, hard coal is still the number one energy source for power generation, which leads to an increasing demand for power-plant coal. The conversion of hard coal into electricity also increased in this year. In contrast, the demand for coking coal and for coke from the steel industry continued to decline in line with market conditions. The increased use of coal for domestic power generation is due to the reduction of nuclear power, a relatively bad year for wind power, as well as reduced import prices and low CO₂ prices; together these gave coal a significant price advantage over natural gas in power plants. This was mainly driven by the price erosion of inexpensive US coal, which was partly displaced on its domestic market by the expansion of shale gas and consequently sought an outlet for sales in Europe. Domestic hard coal continued its process of adaptation and phase-out as scheduled: two further hard coal mines were decommissioned in 2012. RAG Aktiengesellschaft (Herne, Federal Republic of Germany), which operates hard coal mining in Germany, has begun preparations for the activities that will follow the end of mining.

  12. Adaptive fuzzy dynamic surface control of nonlinear systems with input saturation and time-varying output constraints

    Science.gov (United States)

    Edalati, L.; Khaki Sedigh, A.; Aliyari Shooredeli, M.; Moarefianpour, A.

    2018-02-01

    This paper deals with the design of adaptive fuzzy dynamic surface control for uncertain strict-feedback nonlinear systems with asymmetric time-varying output constraints in the presence of input saturation. To approximate the unknown nonlinear functions and overcome the problem of explosion of complexity, a Fuzzy logic system is combined with the dynamic surface control in the backstepping design technique. To ensure the output constraints satisfaction, an asymmetric time-varying Barrier Lyapunov Function (BLF) is used. Moreover, by applying the minimal learning parameter technique, the number of the online parameters update for each subsystem is reduced to 2. Hence, the semi-globally uniformly ultimately boundedness (SGUUB) of all the closed-loop signals with appropriate tracking error convergence is guaranteed. The effectiveness of the proposed control is demonstrated by two simulation examples.

  13. Private sector involvement in times of armed conflict: What are the constraints for trading medical equipment?

    Science.gov (United States)

    Schmidt, Georg

    Today, healthcare facilities are highly dependent on the private sector to keep their medical equipment functioning. Moreover, private sector involvement becomes particularly important for the supply of spare parts and consumables. However, in times of armed conflict, the capacity of the corporate world appears to be seriously hindered. Subsequently, this study researches the influence of armed conflict on the private medical equipment sector. This study follows a qualitative approach by conducting 19 interviews with representatives of the corporate world in an active conflict zone. A semistructured interview guide, consisting of 10 questions, was used to examine the constraints of this sector. The results reveal that the lack of skilled personnel, complicated importation procedures, and a decrease in financial capacity are the major constraints faced by private companies dealing in medical equipment in conflict zones. Even when no official sanctions and embargoes for medical items exist, constraints for trading medical equipment are clearly recognizable. Countries at war would benefit from a centralized structure that deals with the importation procedures for medical items, to assist local companies in their purchasing procedures. A high degree of adaption is needed to continue operating, despite the emerging constraints of armed conflict. Future studies might research the constraints for manufacturers outside the conflict to export medical items to the country of war.

  14. First passage time for a diffusive process under a geometric constraint

    International Nuclear Information System (INIS)

    Tateishi, A A; Michels, F S; Dos Santos, M A F; Lenzi, E K; Ribeiro, H V

    2013-01-01

    We investigate the solutions, survival probability, and first passage time for a two-dimensional diffusive process subjected to the geometric constraints of a backbone structure. We consider this process governed by a fractional Fokker–Planck equation by taking into account the boundary conditions ρ(0,y;t) = ρ(∞,y;t) = 0, ρ(x, ± ∞;t) = 0, and an arbitrary initial condition. Our results show an anomalous spreading and, consequently, a nonusual behavior for the survival probability and for the first passage time distribution that may be characterized by different regimes. In addition, depending on the choice of the parameters present in the fractional Fokker–Planck equation, the survival probability indicates that part of the system may be trapped in the branches of the backbone structure. (paper)
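For the ordinary (non-fractional) counterpart of the survival probability discussed above, a direct Monte Carlo check against the classical result S(t) = erf(x0/√(4Dt)) for one-dimensional Brownian motion with an absorbing boundary is easy to sketch. The backbone geometry and fractional dynamics of the paper are beyond this illustration; parameters are arbitrary.

```python
import math
import random

random.seed(2)

def survival(x0, D, t, dt=1e-3, walkers=2000):
    """Fraction of Brownian walkers started at x0 > 0 that have not yet
    hit the absorbing boundary at x = 0 by time t."""
    alive = 0
    sigma = math.sqrt(2 * D * dt)      # step std for diffusivity D
    for _ in range(walkers):
        x, s = x0, 0.0
        while s < t:
            x += random.gauss(0.0, sigma)
            s += dt
            if x <= 0.0:               # absorbed: first passage occurred
                break
        else:
            alive += 1
    return alive / walkers

est = survival(x0=1.0, D=0.5, t=1.0)
exact = math.erf(1.0 / math.sqrt(4 * 0.5 * 1.0))   # classical S(t)
```

The estimate should land within Monte Carlo error of the analytic value; on a comb-like backbone the decay of S(t) would instead be anomalously slow, which is the effect the paper quantifies.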

  15. Dependence of hardness and impact energy on cooling time Δt8/5and temperature for S960QL

    OpenAIRE

    Samardžić, I.; Dunđer, M.; Vuherer, T.

    2015-01-01

The paper deals with research into the dependence of the hardness and impact energy of thermal-cycle-simulated specimens of fine-grained structural steel S960QL on the cooling time from 800 to 500 °C and on the test temperature. Results were obtained by measuring HV 10 hardness and by experimental testing of Charpy notched specimens on an instrumented Charpy hammer. The total impact energy, the crack initiation energy and the crack propagation energy needed for fracture to occur are also elaborated.

  16. Unique sodium phosphosilicate glasses designed through extended topological constraint theory.

    Science.gov (United States)

    Zeng, Huidan; Jiang, Qi; Liu, Zhao; Li, Xiang; Ren, Jing; Chen, Guorong; Liu, Fude; Peng, Shou

    2014-05-15

Sodium phosphosilicate glasses exhibit unique properties with mixed network formers and have various potential applications. However, a proper understanding of their network structures and a property-oriented methodology based on compositional changes are lacking. In this study, we have developed an extended topological constraint theory and applied it successfully to analyze the composition dependence of the glass transition temperature (Tg) and hardness of sodium phosphosilicate glasses. It was found that the hardness and Tg of the glasses do not always increase with SiO2 content; rather, there is a maximum in hardness and Tg at a certain SiO2 content. In particular, a unique glass (20Na2O-17SiO2-63P2O5) exhibits a low glass transition temperature (589 K) but still has relatively high hardness (4.42 GPa), mainly due to the high fraction of the highly coordinated network former Si(6). Because of their convenient forming and manufacturing, such phosphosilicate glasses have many valuable applications in optical fibers, optical amplifiers, biomaterials, and fuel cells. The methodology can also be applied to other types of phosphosilicate glasses with similar structures.

  17. Separation and extension of cover inequalities for second-order conic knapsack constraints with GUBs

    DEFF Research Database (Denmark)

    Atamtürk, Alper; Muller, Laurent Flindt; Pisinger, David

    We consider the second-order conic equivalent of the classic knapsack polytope where the variables are subject to generalized upper bound constraints. We describe and compare a number of separation and extension algorithms which make use of the extra structure implied by the generalized upper bound...... constraints in order to strengthen the second-order conic equivalent of the classic cover cuts. We show that determining whether a cover can be extended with a variable is NP-hard. Computational experiments are performed comparing the proposed separation and extension algorithms. These experiments show...

  18. A Hard Constraint Algorithm to Model Particle Interactions in DNA-laden Flows

    Energy Technology Data Exchange (ETDEWEB)

    Trebotich, D; Miller, G H; Bybee, M D

    2006-08-01

    We present a new method for particle interactions in polymer models of DNA. The DNA is represented by a bead-rod polymer model and is fully-coupled to the fluid. The main objective in this work is to implement short-range forces to properly model polymer-polymer and polymer-surface interactions, specifically, rod-rod and rod-surface uncrossing. Our new method is based on a rigid constraint algorithm whereby rods elastically bounce off one another to prevent crossing, similar to our previous algorithm used to model polymer-surface interactions. We compare this model to a classical (smooth) potential which acts as a repulsive force between rods, and rods and surfaces.

  19. Temporal Concurrent Constraint Programming

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Valencia Posso, Frank Dan

    2002-01-01

    The ntcc calculus is a model of non-deterministic temporal concurrent constraint programming. In this paper we study behavioral notions for this calculus. In the underlying computational model, concurrent constraint processes are executed in discrete time intervals. The behavioral notions studied...... reflect the reactive interactions between concurrent constraint processes and their environment, as well as internal interactions between individual processes. Relationships between the suggested notions are studied, and they are all proved to be decidable for a substantial fragment of the calculus...

  20. Influence of capacity- and time-constrained intermediate storage in two-stage food production systems

    DEFF Research Database (Denmark)

    Akkerman, Renzo; van Donk, Dirk Pieter; Gaalman, Gerard

    2007-01-01

    In food processing, two-stage production systems with a batch processor in the first stage and packaging lines in the second stage are common and mostly separated by capacity- and time-constrained intermediate storage. This combination of constraints is common in practice, but the literature hardly...... of systems like this. Contrary to the common sense in operations management, the LPT rule is able to maximize the total production volume per day. Furthermore, we show that adding one tank has considerable effects. Finally, we conclude that the optimal setup frequency for batches in the first stage...... pays any attention to this. In this paper, we show how various capacity and time constraints influence the performance of a specific two-stage system. We study the effects of several basic scheduling and sequencing rules in the presence of these constraints in order to learn the characteristics...

  1. High spectral efficiency optical CDMA system based on guard-time and optical hard-limiting (OHL)

    Energy Technology Data Exchange (ETDEWEB)

    Gagliardi, R M; Bennett, C V; Mendez, A J; Hernandez, V J; Lennon, W J

    2003-12-02

Optical code-division multiple access (OCDMA) is an interesting subject of research because of its potential to support asynchronous, bursty communications. OCDMA has been investigated for local area networks, access networks, and, more recently, as a packet label for emerging networks. Two-dimensional (2-D) OCDMA codes are preferred in current research because of the flexibility of designing the codes and their higher cardinality and spectral efficiency (SE) compared with direct-sequence codes based on on-off keying and intensity modulation/direct detection, and because they lend themselves to being implemented with devices developed for wavelength-division-multiplexed (WDM) transmission (the 2-D codes typically combine wavelength and time as the two dimensions of the codes). This paper shows rigorously that 2-D wavelength/time codes have better SE than one-dimensional (1-D) CDMA/WDM combinations of the same cardinality. The paper then describes a specific set of wavelength/time (W/T) codes and their implementation. These 2-D codes are high performance because they simultaneously have high cardinality (≫10), high per-user bandwidth (>1 Gb/s), and high SE (>0.10 b/s/Hz). The physical implementation of these W/T codes is described and their performance evaluated by system simulations and measurements on an OCDMA technology demonstrator. This research shows that OCDMA implementation complexity (e.g., incorporating double hard-limiting and interference estimation) can be avoided by using a guard time in the codes and an optical hard limiter in the receiver.

  2. Finite Time Merton Strategy under Drawdown Constraint: A Viscosity Solution Approach

    International Nuclear Information System (INIS)

    Elie, R.

    2008-01-01

    We consider the optimal consumption-investment problem under the drawdown constraint, i.e. the wealth process never falls below a fixed fraction of its running maximum. We assume that the risky asset is driven by the constant-coefficient Black-Scholes model and we consider a general class of utility functions. On an infinite time horizon, Elie and Touzi (Preprint, [2006]) provided the value function as well as the optimal consumption and investment strategy in explicit form. In a more realistic setting, we consider here an agent optimizing its consumption-investment strategy on a finite time horizon. The value function is characterized as the unique discontinuous viscosity solution of the corresponding Hamilton-Jacobi-Bellman equation. This leads to a numerical approximation of the value function and allows for a comparison with the explicit solution in infinite horizon.

  3. Separation and Extension of Cover Inequalities for Conic Quadratic Knapsack Constraints with Generalized Upper Bounds

    DEFF Research Database (Denmark)

    Atamtürk, Alper; Muller, Laurent Flindt; Pisinger, David

    2013-01-01

    Motivated by addressing probabilistic 0-1 programs we study the conic quadratic knapsack polytope with generalized upper bound (GUB) constraints. In particular, we investigate separating and extending GUB cover inequalities. We show that, unlike in the linear case, determining whether a cover can be extended with a single variable is NP-hard. We describe and compare a number of exact and heuristic separation and extension algorithms which make use of the structure of the constraints. Computational experiments are performed for comparing the proposed separation and extension algorithms......

  4. Discretionary Time of Chinese College Students: Activities and Impact of SARS-Induced Constraints on Choices

    Science.gov (United States)

    Yang, He; Hutchinson, Susan; Zinn, Harry; Watson, Alan

    2011-01-01

    How people make choices about activity engagement during discretionary time is a topic of increasing interest to those studying quality of life issues. Assuming choices are made to maximize individual welfare, several factors are believed to influence these choices. Constraints theory from the leisure research literature suggests these choices are…

  5. Colored thermal noise driven dynamical system in the presence and absence of non-equilibrium constraint: time dependence of information entropy flux and entropy production

    International Nuclear Information System (INIS)

    Goswami, Gurupada; Mukherjee, Biswajit; Bag, Bidhan Chandra

    2005-01-01

    We have studied the relaxation of non-Markovian and thermodynamically closed system both in the absence and presence of non-equilibrium constraint in terms of the information entropy flux and entropy production based on the Fokker-Planck and the entropy balance equations. Our calculation shows how the relaxation time depends on noise correlation time. It also considers how the non-equilibrium constraint is affected by system parameters such as noise correlation time, strength of dissipation and frequency of dynamical system. The interplay of non-equilibrium constraint, frictional memory kernel, noise correlation time and frequency of dynamical system reveals the extremum nature of the entropy production

  6. Colored thermal noise driven dynamical system in the presence and absence of non-equilibrium constraint: time dependence of information entropy flux and entropy production

    Science.gov (United States)

    Goswami, Gurupada; Mukherjee, Biswajit; Bag, Bidhan Chandra

    2005-06-01

    We have studied the relaxation of non-Markovian and thermodynamically closed system both in the absence and presence of non-equilibrium constraint in terms of the information entropy flux and entropy production based on the Fokker-Planck and the entropy balance equations. Our calculation shows how the relaxation time depends on noise correlation time. It also considers how the non-equilibrium constraint is affected by system parameters such as noise correlation time, strength of dissipation and frequency of dynamical system. The interplay of non-equilibrium constraint, frictional memory kernel, noise correlation time and frequency of dynamical system reveals the extremum nature of the entropy production.

  7. Constraints and Creativity in NPD - Testing the Impact of 'Late Constraints'

    DEFF Research Database (Denmark)

    Onarheim, Balder; Valgeirsdóttir, Dagný

    The aim of the presented work is to investigate how the timing of project constraints can influence the creativity of the output in New Product Development (NPD) projects. When seeking to produce a creative output, is it beneficial to know all constraints when initiating a project? An experiment was conducted, involving 12 teams of industrial designers from three different countries, each team working on two 30-minute design tasks. In one condition all constraints were given at the start, and in the other one new radical constraint was added after 12 minutes. The output from all 24 tasks was assessed for creativity using the Consensual Assessment Technique (CAT), and a comparative within-subjects analysis found no significant difference between the two conditions. Controlling for task and assessor, a small but non-significant effect was found in favor of the 'late constraint' condition. Thus......

  8. Time-resolved hard x-ray studies using third-generation synchrotron radiation sources (abstract)

    International Nuclear Information System (INIS)

    Mills, D.M.

    1992-01-01

    The third-generation, high-brilliance, synchrotron radiation sources currently under construction will usher in a new era of x-ray research in the physical, chemical, and biological sciences. One of the most exciting areas of experimentation will be the extension of static x-ray scattering and diffraction techniques to the study of transient or time-evolving systems. The high repetition rate, short-pulse duration, high-brilliance, variable spectral bandwidth, and large particle beam energies of these sources make them ideal for hard x-ray, time-resolved studies. The primary focus of this presentation will be on the novel instrumentation required for time-resolved studies such as optics which can increase the flux on the sample or disperse the x-ray beam, detectors and electronics for parallel data collection, and methods for altering the natural time structure of the radiation. This work is supported by the U.S. Department of Energy, BES-Materials Science, under Contract No. W-31-109-ENG-38

  9. Effect of light-curing units, post-cured time and shade of resin cement on knoop hardness.

    Science.gov (United States)

    Reges, Rogério Vieira; Costa, Ana Rosa; Correr, Américo Bortolazzo; Piva, Evandro; Puppin-Rontani, Regina Maria; Sinhoreti, Mário Alexandre Coelho; Correr-Sobrinho, Lourenço

    2009-01-01

    The aim of this study was to evaluate the Knoop hardness after 15 min and 24 h of different shades of a dual-cured resin-based cement after indirect photoactivation (ceramic restoration) with 2 light-curing units (LCUs). The resin cement Variolink II (Ivoclar Vivadent) shades XL, A2, A3 and opaque were mixed with the catalyst paste and inserted into a black Teflon mold (5 mm diameter x 1 mm high). A transparent strip was placed over the mold and a ceramic disc (Duceram Plus, shade A3) was positioned over the resin cement. Light-activation was performed through the ceramic for 40 s using quartz-tungsten-halogen (QTH) (XL 2500; 3M ESPE) or light-emitting diode (LED) (Ultrablue Is, DMC) LCUs with power densities of 615 and 610 mW/cm(2), respectively. The Knoop hardness was measured using a microhardness tester HMV 2 (Shimadzu) after 15 min or 24 h. Four indentations were made in each specimen. Data were subjected to ANOVA and Tukey's test (alpha=0.05). The QTH LCU provided significantly higher (p<0.05) Knoop hardness than the LED LCU. The opaque cement showed lower Knoop hardness than the other shades for both LCUs and post-cure times.
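    For context, the Knoop hardness number reported by such microhardness testers is computed from the applied load and the long diagonal of the indentation as KHN = 14.229 * F / d^2, with F in kgf and d in mm (the standard Knoop geometry constant). A minimal sketch, with hypothetical load and diagonal values:

```python
def knoop_hardness(load_kgf, diagonal_mm):
    """Knoop hardness number: KHN = 14.229 * F / d^2,
    with test load F in kgf and long indentation diagonal d in mm."""
    return 14.229 * load_kgf / diagonal_mm ** 2

# e.g. a 50 gf (0.05 kgf) load leaving a 0.040 mm long diagonal
print(round(knoop_hardness(0.05, 0.040), 1))  # -> 444.7
```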

  10. Hard X-ray spectral and timing properties of IGR J17454-2919 consistent with a black hole in the hard state

    DEFF Research Database (Denmark)

    Tendulkar, Shriharsh P.; Bachetti, Matteo; Tomsick, J.

    2014-01-01

    frequencies. The Lorentzian has a width of 2 Hz and a fractional rms of 25+/-3%. The hard power-law index, the high energy of the cutoff, and the level of variability are all consistent with properties expected for an accreting black hole in the hard state. While we cannot completely rule out the possibility of a low magnetic field neutron star, a black hole is more likely.

  11. The Optimization of Transportation Costs in Logistics Enterprises with Time-Window Constraints

    Directory of Open Access Journals (Sweden)

    Qingyou Yan

    2015-01-01

    This paper presents a model for solving a multiobjective vehicle routing problem with soft time-window constraints that specify the earliest and latest arrival times of customers. If a customer is serviced before the earliest specified arrival time, extra inventory costs are incurred. If the customer is serviced after the latest arrival time, penalty costs must be paid. Both the total transportation cost and the required fleet size are minimized in this model, which also accounts for the given capacity limitations of each vehicle. The total transportation cost consists of direct transportation costs, extra inventory costs, and penalty costs. This multiobjective optimization is solved by using a modified genetic algorithm approach. The output of the algorithm is a set of optimal solutions that represent the trade-off between total transportation cost and the fleet size required to service customers. The influential impact of these two factors is analyzed through the use of a case study.
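    The soft time-window cost structure described in the abstract (inventory cost for early arrival, penalty cost for late arrival, nothing inside the window) can be sketched for a single customer; the rate values below are hypothetical, not taken from the paper:

```python
def service_cost(arrival, earliest, latest, inv_rate, pen_rate):
    """Soft time-window cost for one customer: inventory cost accrues if
    the vehicle arrives before `earliest`, penalty cost if it arrives
    after `latest`; arrival inside the window costs nothing extra."""
    if arrival < earliest:
        return inv_rate * (earliest - arrival)   # early: inventory cost
    if arrival > latest:
        return pen_rate * (arrival - latest)     # late: penalty cost
    return 0.0

print(service_cost(8, 9, 17, 2.0, 5.0))   # 1 h early  -> 2.0
print(service_cost(18, 9, 17, 2.0, 5.0))  # 1 h late   -> 5.0
print(service_cost(12, 9, 17, 2.0, 5.0))  # in window  -> 0.0
```

    Summing this term over a route, plus direct travel cost, gives the total transportation cost that the genetic algorithm trades off against fleet size.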

  12. Discretionary time of Chinese college students: Activities and impact of SARS-induced constraints on choices

    Science.gov (United States)

    He Yang; Susan Hutchinson; Harry Zinn; Alan Watson

    2011-01-01

    How people make choices about activity engagement during discretionary time is a topic of increasing interest to those studying quality of life issues. Assuming choices are made to maximize individual welfare, several factors are believed to influence these choices. Constraints theory from the leisure research literature suggests these choices are heavily influenced by...

  13. Comparison of time/phase lags in the hard state and plateau state of GRS 1915+105

    Energy Technology Data Exchange (ETDEWEB)

    Pahari, Mayukh; Yadav, J. S. [Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai, India (MP) (India); Neilsen, Joseph [Boston University, Boston, MA 02215 (United States); Misra, Ranjeev [Inter University Center for Astronomy and Astrophysics, Pune (India); Uttley, Phil, E-mail: mp@tifr.res.in [Astronomical Institute "Anton Pannekoek," University of Amsterdam, Science Park 904, 1098-XH Amsterdam (Netherlands)

    2013-12-01

    We investigate the complex behavior of energy- and frequency-dependent time/phase lags in the plateau state and the radio-quiet hard (χ) state of GRS 1915+105. In our timing analysis, we find that when the source is faint in the radio, quasi-periodic oscillations (QPOs) are observed above 2 Hz and typically exhibit soft lags (soft photons lag hard photons), whereas QPOs in the radio-bright plateau state are found below 2.2 Hz and consistently show hard lags. The phase lag at the QPO frequency is strongly anti-correlated with that frequency, changing sign at 2.2 Hz. However, the phase lag at the frequency of the first harmonic is positive and nearly independent of that frequency at ∼0.172 rad, regardless of the radio emission. The lag energy dependence at the first harmonic is also independent of radio flux. However, the lags at the QPO frequency are negative at all energies during the radio-quiet state, but lags at the QPO frequency during the plateau state are positive at all energies and show a 'reflection-type' evolution of the lag energy spectra with respect to the radio-quiet state. The lag energy dependence is roughly logarithmic, but there is some evidence for a break around 4-6 keV. Finally, the Fourier-frequency-dependent phase lag spectra are fairly flat during the plateau state, but increase from negative to positive during the radio-quiet state. We discuss the implications of our results in light of some generic models.

  14. The effect of immersion time to low carbon steel hardness and microstructure with hot dip galvanizing coating method

    Science.gov (United States)

    Hakim, A. A.; Rajagukguk, T. O.; Sumardi, S.

    2018-01-01

    Along with the developing requirements for metal materials, demand is rising for quality improvement and material protection, especially regarding the mechanical properties of the material. This research used the hot dip galvanizing coating method. The objectives of this research were to determine the Rockwell hardness (HRb), layer thickness, microstructure and Scanning Electron Microscope (SEM) observations of coatings produced by hot dip galvanizing with immersion times of 3, 6, 9, and 12 minutes at 460°C. The results show that the highest Rockwell hardness (76.012 HRb) was obtained at the 3-minute immersion time, and the highest thickness (217.3 μm) at the 12-minute immersion. Microstructure tests showed that the coating formed eta, zeta, delta and gamma phases, while SEM showed Fe, Zn, Mn, Si and S elements in the specimens after coating.

  15. Robust stability in predictive control with soft constraints

    DEFF Research Database (Denmark)

    Thomsen, Sven Creutz; Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2010-01-01

    In this paper we take advantage of the primary and dual Youla parameterizations for setting up a soft constrained model predictive control (MPC) scheme for which stability is guaranteed in face of norm-bounded uncertainties. Under special conditions guarantees are also given for hard input constraints. In more detail, we parameterize the MPC predictions in terms of the primary Youla parameter and use this parameter as the online optimization variable. The uncertainty is parameterized in terms of the dual Youla parameter. Stability can then be guaranteed through small gain arguments on the loop......

  16. A message-passing approach to random constraint satisfaction problems with growing domains

    International Nuclear Information System (INIS)

    Zhao, Chunyan; Zheng, Zhiming; Zhou, Haijun; Xu, Ke

    2011-01-01

    Message-passing algorithms based on belief propagation (BP) are implemented on a random constraint satisfaction problem (CSP) referred to as model RB, which is a prototype of hard random CSPs with growing domain size. In model RB, the number of candidate discrete values (the domain size) of each variable increases polynomially with the variable number N of the problem formula. Although the satisfiability threshold of model RB is exactly known, finding solutions for a single problem formula is quite challenging and attempts have been limited to cases of N ∼ 10^2. In this paper, we propose two different kinds of message-passing algorithms guided by BP for this problem. Numerical simulations demonstrate that these algorithms allow us to find a solution for random formulas of model RB with constraint tightness slightly less than p_cr, the threshold value for the satisfiability phase transition. To evaluate the performance of these algorithms, we also provide a local search algorithm (random walk) as a comparison. Besides this, the dependence of the simulation time on the problem size N and the entropy of the variables for growing domain size are discussed
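    The random-walk local search used above as a comparison baseline can be illustrated on a toy binary CSP. This is a generic sketch, not model RB itself; the instance (a 3-coloring of a triangle) and the step budget are hypothetical:

```python
import random

def random_walk_csp(domains, constraints, max_steps=10000, seed=0):
    """Random-walk local search on a binary CSP: start from a random
    assignment; while some constraint is violated, pick one violated
    constraint and re-draw one of its two variables uniformly from its
    domain. `constraints` maps a variable pair (i, j) to the set of
    allowed value pairs."""
    rng = random.Random(seed)
    assign = {v: rng.choice(dom) for v, dom in domains.items()}
    for _ in range(max_steps):
        violated = [(i, j) for (i, j), ok in constraints.items()
                    if (assign[i], assign[j]) not in ok]
        if not violated:
            return assign                 # all constraints satisfied
        i, j = rng.choice(violated)
        v = rng.choice([i, j])
        assign[v] = rng.choice(domains[v])
    return None                           # no solution found in budget

# toy instance: three variables, pairwise "not equal" constraints
doms = {0: [0, 1, 2], 1: [0, 1, 2], 2: [0, 1, 2]}
cons = {(i, j): {(a, b) for a in doms[i] for b in doms[j] if a != b}
        for (i, j) in [(0, 1), (1, 2), (0, 2)]}
sol = random_walk_csp(doms, cons)
print(sol is not None)
```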

  17. Bin-packing problems with load balancing and stability constraints

    DEFF Research Database (Denmark)

    Trivella, Alessio; Pisinger, David

    appear in a wide range of disciplines, including transportation and logistics, computer science, engineering, economics and manufacturing. The problem is well-known to be NP-hard and difficult to solve in practice, especially when dealing with the multi-dimensional cases. Closely connected to the BPP...... realistic constraints related to e.g. load balancing, cargo stability and weight limits, in the multi-dimensional BPP. The BPP poses additional challenges compared to the CLP due to the supplementary objective of minimizing the number of bins. In particular, in section 2 we discuss how to integrate bin-packing and load balancing of items. The problem has only been considered in the literature in simplified versions, e.g. balancing a single bin or introducing a feasible region for the barycenter. In section 3 we generalize the problem to handle cargo stability and weight constraints.
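    As background for the BPP itself, a classical one-dimensional heuristic such as first-fit decreasing can be sketched in a few lines. This is textbook bin packing, not the paper's multi-dimensional model, and it ignores the load-balancing and stability constraints the paper adds; the item sizes are hypothetical:

```python
def first_fit_decreasing(items, capacity):
    """First-Fit Decreasing heuristic for 1-D bin packing: sort items by
    decreasing size, place each into the first bin where it fits, and
    open a new bin when none fits."""
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:                      # no existing bin fits: open a new one
            bins.append([item])
    return bins

bins = first_fit_decreasing([5, 4, 4, 3, 2, 2], 10)
print(len(bins))  # -> 3 (the optimum here is 2, showing it is a heuristic)
```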

  18. Time domain localization technique with sparsity constraint for imaging acoustic sources

    Science.gov (United States)

    Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain

    2017-09-01

    This paper addresses a source localization technique in the time domain for broadband acoustic sources. The objective is to accurately and quickly detect the position and amplitude of noise sources in workplaces in order to propose adequate noise control options and prevent workers' hearing loss or safety risk. First, the generalized cross correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, the linear inverse problem is solved with an l2-regularization. In this study, two sparsity constraints are used to solve the inverse problem, the orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performances of the technique. High resolution imaging is achieved for various acoustic sources configurations. Moreover, the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi real-time generation of noise source maps. Finally, the technique is tested with real data.
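    The cross-correlation step that produces the initial source map rests on a simple idea: the inter-microphone time delay is the lag that maximizes the correlation between the two signals. A minimal sketch using plain (unweighted) correlation, whereas the paper's generalized cross correlation adds a spectral weighting; the signals below are hypothetical:

```python
def estimate_delay(x, y, max_lag):
    """Estimate the delay of y relative to x by maximizing the
    cross-correlation over integer lags."""
    def corr(lag):
        return sum(x[n] * y[n + lag] for n in range(len(x))
                   if 0 <= n + lag < len(y))
    return max(range(-max_lag, max_lag + 1), key=corr)

x = [0, 0, 1, 2, 1, 0, 0, 0]
y = [0, 0, 0, 0, 1, 2, 1, 0]   # x delayed by 2 samples
print(estimate_delay(x, y, 3))  # -> 2
```

    Repeating this delay estimate over many microphone pairs, and mapping delays back to candidate source positions, yields the kind of initial map the inverse problem then sharpens.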

  19. Efficient constraint-based Sequential Pattern Mining (SPM) algorithm to understand customers’ buying behaviour from time stamp-based sequence dataset

    Directory of Open Access Journals (Sweden)

    Niti Ashish Kumar Desai

    2015-12-01

    Business strategies are formulated based on an understanding of customer needs. This requires development of a strategy to understand customer behaviour and buying patterns, both current and future. This involves understanding, first, how an organization currently understands customer needs and, second, predicting future trends to drive growth. This article focuses on the purchase trend of the customer, where the timing of purchase is more important than the association of items to be purchased, and which can be found with Sequential Pattern Mining (SPM) methods. Conventional SPM algorithms worked purely on frequency, identifying patterns that were more frequent but suffering from challenges like generation of a huge number of uninteresting patterns, lack of the user's interested patterns, the rare item problem, etc. The article attempts a solution through development of an SPM algorithm based on various constraints like Gap, Compactness, Item, Recency, Profitability and Length along with the Frequency constraint. The six additional constraints are incorporated to ensure that all patterns are recently active (Recency), active for a certain time span (Compactness), profitable and indicative of the next timeline for purchase (Length-Item-Gap). The article also attempts to throw light on how the proposed Constraint-based Prefix Span algorithm is helpful to understand the buying behaviour of customers, which is in a formative stage.
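    Two of the constraints named above, Gap (maximum time between consecutive matched items) and Compactness (maximum total span of a matched pattern), can be sketched as a check on a single timestamped sequence. This is an illustrative sketch, not the paper's Prefix Span variant; the event data are hypothetical and the Recency and Profitability constraints are omitted:

```python
def matches(events, pattern, max_gap, max_span):
    """Check whether a timestamped event sequence contains `pattern`
    (items in order) with at most `max_gap` time between consecutive
    matched items (Gap constraint) and total span at most `max_span`
    (Compactness constraint). `events` is a list of (timestamp, item)
    pairs sorted by time."""
    def search(start, k, first_t, last_t):
        if k == len(pattern):
            return True
        for j in range(start, len(events)):
            t, item = events[j]
            if item != pattern[k]:
                continue
            if last_t is not None and t - last_t > max_gap:
                continue                      # Gap constraint violated
            if first_t is not None and t - first_t > max_span:
                continue                      # Compactness constraint violated
            if search(j + 1, k + 1, first_t if first_t is not None else t, t):
                return True
        return False
    return search(0, 0, None, None)

basket = [(1, "bread"), (3, "milk"), (6, "butter")]
print(matches(basket, ["bread", "milk", "butter"], max_gap=4, max_span=10))  # -> True
print(matches(basket, ["bread", "butter"], max_gap=2, max_span=10))          # -> False
```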

  20. Clock gene evolution: seasonal timing, phylogenetic signal, or functional constraint?

    Science.gov (United States)

    Krabbenhoft, Trevor J; Turner, Thomas F

    2014-01-01

    Genetic determinants of seasonal reproduction are not fully understood but may be important predictors of organism responses to climate change. We used a comparative approach to study the evolution of seasonal timing within a fish community in a natural common garden setting. We tested the hypothesis that allelic length variation in the PolyQ domain of a circadian rhythm gene, Clock1a, corresponded to interspecific differences in seasonal reproductive timing across 5 native and 1 introduced cyprinid fishes (n = 425 individuals) that co-occur in the Rio Grande, NM, USA. Most common allele lengths were longer in native species that initiated reproduction earlier (Spearman's r = -0.70, P = 0.23). Clock1a allele length exhibited strong phylogenetic signal and earlier spawners were evolutionarily derived. Aside from length variation in Clock1a, all other amino acids were identical across native species, suggesting functional constraint over evolutionary time. Interestingly, the endangered Rio Grande silvery minnow (Hybognathus amarus) exhibited less allelic variation in Clock1a and observed heterozygosity was 2- to 6-fold lower than the 5 other (nonimperiled) species. Reduced genetic variation in this functionally important gene may impede this species' capacity to respond to ongoing environmental change.

  1. Revisiting the definition of local hardness and hardness kernel.

    Science.gov (United States)

    Polanco-Ramírez, Carlos A; Franco-Pérez, Marco; Carmona-Espíndola, Javier; Gázquez, José L; Ayers, Paul W

    2017-05-17

    An analysis of the hardness kernel and local hardness is performed to propose new definitions for these quantities that follow a similar pattern to the one that characterizes the quantities associated with softness, that is, we have derived new definitions for which the integral of the hardness kernel over the whole space of one of the variables leads to local hardness, and the integral of local hardness over the whole space leads to global hardness. A basic aspect of the present approach is that global hardness keeps its identity as the second derivative of energy with respect to the number of electrons. Local hardness thus obtained depends on the first and second derivatives of energy and electron density with respect to the number of electrons. When these derivatives are approximated by a smooth quadratic interpolation of energy, the expression for local hardness reduces to the one intuitively proposed by Meneses, Tiznado, Contreras and Fuentealba. However, when one combines the first directional derivatives with smooth second derivatives one finds additional terms that allow one to differentiate local hardness for electrophilic attack from the one for nucleophilic attack. Numerical results related to electrophilic attacks on substituted pyridines, substituted benzenes and substituted ethenes are presented to show the overall performance of the new definition.

  2. Constraints or Preferences? Identifying Answers from Part-time Workers’ Transitions in Denmark, France and the United-Kingdom

    OpenAIRE

    Gash, V.

    2008-01-01

    This article investigates whether women work part-time through preference or constraint and argues that different countries provide different opportunities for preference attainment. It argues that women with family responsibilities are unlikely to have their working preferences met without national policies supportive of maternal employment. Using event history analysis the article tracks part-time workers' transitions to both full-time employment and to labour market drop-out. The article co...

  3. Wave packet autocorrelation functions for quantum hard-disk and hard-sphere billiards in the high-energy, diffraction regime.

    Science.gov (United States)

    Goussev, Arseni; Dorfman, J R

    2006-07-01

    We consider the time evolution of a wave packet representing a quantum particle moving in a geometrically open billiard that consists of a number of fixed hard-disk or hard-sphere scatterers. Using the technique of multiple collision expansions we provide a first-principle analytical calculation of the time-dependent autocorrelation function for the wave packet in the high-energy diffraction regime, in which the particle's de Broglie wavelength, while being small compared to the size of the scatterers, is large enough to prevent the formation of geometric shadow over distances of the order of the particle's free flight path. The hard-disk or hard-sphere scattering system must be sufficiently dilute in order for this high-energy diffraction regime to be achievable. Apart from the overall exponential decay, the autocorrelation function exhibits a generally complicated sequence of relatively strong peaks corresponding to partial revivals of the wave packet. Both the exponential decay (or escape) rate and the revival peak structure are predominantly determined by the underlying classical dynamics. A relation between the escape rate, and the Lyapunov exponents and Kolmogorov-Sinai entropy of the counterpart classical system, previously known for hard-disk billiards, is strengthened by generalization to three spatial dimensions. The results of the quantum mechanical calculation of the time-dependent autocorrelation function agree with predictions of the semiclassical periodic orbit theory.

  4. Effect of plasma nitriding time on surface properties of hard chromium electroplated AISI 1010 steel

    Energy Technology Data Exchange (ETDEWEB)

    Kocabas, Mustafa [Yildiz Technical Univ., Istanbul (Turkey). Metallurgical and Materials Engineering Dept.; Danisman, Murat [Gedik Univ., Istanbul (Turkey). Electrical and Electronic Engineering Dept.; Cansever, Nurhan [Yildiz Technical Univ., Istanbul (Turkey); Uelker, Suekrue [Afyon Kocatepe Univ. (Turkey). Dept. of Mechanical Engineering

    2015-06-01

    Properties of steel can be enhanced by surface treatments such as coating. In some cases, further treatments such as nitriding can also be used in order to get even better results. In order to investigate the properties of the nitride layer on hard Cr coated AISI 1010 steel, substrates were electroplated to form hard Cr coatings. Then the hard Cr coatings were plasma nitrided at 700 °C for 3 h, 5 h and 7 h and the nitride phases on the coatings were investigated by X-ray diffraction analysis. The layer thickness and surface properties of the nitride films were investigated by scanning electron microscopy. The hardness and adhesion properties of the Cr-N phases were examined using nanoindentation and Daimler-Benz Rockwell C adhesion tests. The highest measured hardness was 24.1 GPa and all three samples exhibited poor adhesion.

  5. Effect of plasma nitriding time on surface properties of hard chromium electroplated AISI 1010 steel

    International Nuclear Information System (INIS)

    Kocabas, Mustafa; Uelker, Suekrue

    2015-01-01

    Properties of steel can be enhanced by surface treatments such as coating. In some cases, further treatments such as nitriding can also be used in order to get even better results. In order to investigate the properties of the nitride layer on hard Cr coated AISI 1010 steel, substrates were electroplated to form hard Cr coatings. Then the hard Cr coatings were plasma nitrided at 700 °C for 3 h, 5 h and 7 h and the nitride phases on the coatings were investigated by X-ray diffraction analysis. The layer thickness and surface properties of the nitride films were investigated by scanning electron microscopy. The hardness and adhesion properties of the Cr-N phases were examined using nanoindentation and Daimler-Benz Rockwell C adhesion tests. The highest measured hardness was 24.1 GPa and all three samples exhibited poor adhesion.

  6. Quantum information density scaling and qubit operation time constraints of CMOS silicon-based quantum computer architectures

    Science.gov (United States)

    Rotta, Davide; Sebastiano, Fabio; Charbon, Edoardo; Prati, Enrico

    2017-06-01

    Even the quantum simulation of an apparently simple molecule such as Fe2S2 requires a considerable number of qubits of the order of 10^6, while more complex molecules such as alanine (C3H7NO2) require about a hundred times more. In order to assess such a multimillion scale of identical qubits and control lines, the silicon platform seems to be one of the most indicated routes as it naturally provides, together with qubit functionalities, the capability of nanometric, serial, and industrial-quality fabrication. The scaling trend of microelectronic devices predicting that computing power would double every 2 years, known as Moore's law, according to the new slope set after the 32-nm node of 2009, suggests that the technology roadmap will achieve the 3-nm manufacturability limit proposed by Kelly around 2020. Today, circuital quantum information processing architectures are predicted to take advantage from the scalability ensured by silicon technology. However, the maximum amount of quantum information per unit surface that can be stored in silicon-based qubits and the consequent space constraints on qubit operations have never been addressed so far. This represents one of the key parameters toward the implementation of quantum error correction for fault-tolerant quantum information processing and its dependence on the features of the technology node. The maximum quantum information per unit surface virtually storable and controllable in the compact exchange-only silicon double quantum dot qubit architecture is expressed as a function of the complementary metal-oxide-semiconductor technology node, so the size scale optimizing both physical qubit operation time and quantum error correction requirements is assessed by reviewing the physical and technological constraints. According to the requirements imposed by the quantum error correction method and the constraints given by the typical strength of the exchange coupling, we determine the workable operation frequency

  7. The ESS and replicator equation in matrix games under time constraints.

    Science.gov (United States)

    Garay, József; Cressman, Ross; Móri, Tamás F; Varga, Tamás

    2018-06-01

    Recently, we introduced the class of matrix games under time constraints and characterized the concept of (monomorphic) evolutionarily stable strategy (ESS) in them. We are now interested in how the ESS is related to the existence and stability of equilibria for polymorphic populations. We point out that, although the ESS may no longer be a polymorphic equilibrium, there is a connection between them. Specifically, the polymorphic state at which the average strategy of the active individuals in the population is equal to the ESS is an equilibrium of the polymorphic model. Moreover, in the case when there are only two pure strategies, a polymorphic equilibrium is locally asymptotically stable under the replicator equation for the pure-strategy polymorphic model if and only if it corresponds to an ESS. Finally, we prove that a strict Nash equilibrium is a pure-strategy ESS that is a locally asymptotically stable equilibrium of the replicator equation in n-strategy time-constrained matrix games.
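    The replicator equation referred to above is, in its classical (unconstrained) form, x_i' = x_i((Ax)_i - x·Ax). A minimal Euler-stepped sketch for a 2-strategy matrix game follows; this is the baseline dynamics only, not the paper's time-constrained variant, and the payoff matrix is a hypothetical anti-coordination (Hawk-Dove-like) game with interior ESS at x = (0.5, 0.5):

```python
def replicator_step(x, A, dt=0.01):
    """One Euler step of the classical replicator dynamics
    x_i' = x_i * ((A x)_i - x . A x) for a 2-strategy matrix game A."""
    f = [A[i][0] * x[0] + A[i][1] * x[1] for i in range(2)]  # fitnesses (A x)_i
    avg = x[0] * f[0] + x[1] * f[1]                          # mean fitness x . A x
    return [x[i] + dt * x[i] * (f[i] - avg) for i in range(2)]

A = [[0, 3], [1, 2]]   # each strategy does better when rare -> interior ESS
x = [0.9, 0.1]
for _ in range(5000):
    x = replicator_step(x, A)
print(round(x[0], 2))  # -> 0.5, the interior equilibrium
```

    At the ESS both strategies earn equal payoff (3*x2 = x1 + 2*x2 gives x1 = x2 = 0.5), and the dynamics converge to it from either side, illustrating the local asymptotic stability discussed in the abstract for the two-strategy case.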

  8. Hard real-time multibody simulations using ARM-based embedded systems

    Energy Technology Data Exchange (ETDEWEB)

    Pastorino, Roland, E-mail: roland.pastorino@kuleuven.be, E-mail: rpastorino@udc.es; Cosco, Francesco, E-mail: francesco.cosco@kuleuven.be; Naets, Frank, E-mail: frank.naets@kuleuven.be; Desmet, Wim, E-mail: wim.desmet@kuleuven.be [KU Leuven, PMA division, Department of Mechanical Engineering (Belgium); Cuadrado, Javier, E-mail: javicuad@cdf.udc.es [Universidad de La Coruña, Laboratorio de Ingeniería Mecánica (Spain)

    2016-05-15

    The real-time simulation of multibody models on embedded systems is of particular interest for controllers and observers such as model predictive controllers and state observers, which rely on a dynamic model of the process and are customarily executed in electronic control units. This work first identifies the software techniques and tools required to easily write efficient code for multibody models to be simulated on ARM-based embedded systems. Automatic Programming and Source Code Translation are the two techniques that were chosen to generate source code for multibody models in different programming languages. Automatic Programming is used to generate procedural code in an intermediate representation from an object-oriented library and Source Code Translation is used to translate the intermediate representation automatically to an interpreted language or to a compiled language for efficiency purposes. An implementation of these techniques is proposed. It is based on a Python template engine and AST tree walkers for Source Code Generation and on a model-driven translator for the Source Code Translation. The code is translated from a metalanguage to any of the following four programming languages: Python-Numpy, Matlab, C++-Armadillo, C++-Eigen. Two examples of multibody models were simulated: a four-bar linkage with multiple loops and a 3D vehicle steering system. The code for these examples has been generated and executed on two ARM-based single-board computers. Using compiled languages, both models could be simulated faster than real-time despite the low resources and performance of these embedded systems. Finally, the real-time performance of both models was evaluated when executed in hard real-time on Xenomai for both embedded systems. This work shows through measurements that Automatic Programming and Source Code Translation are valuable techniques to develop real-time multibody models to be used in embedded observers and controllers.
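The Automatic Programming / Source Code Translation idea can be illustrated with a toy sketch (not the authors' toolchain): a small intermediate representation of assignments is rendered into either Python or C++ source by a template engine:

```python
from string import Template

# Toy intermediate representation: an ordered list of scalar assignments
ir = [("ke", "0.5 * m * v * v"),     # kinetic energy
      ("pe", "m * g * h")]           # potential energy

FUNC = {
    "python": Template("def energy(m, v, g, h):\n$body    return ke + pe\n"),
    "cpp": Template("double energy(double m, double v, double g, double h) {\n"
                    "$body    return ke + pe;\n}\n"),
}
LINE = {"python": "    {} = {}\n", "cpp": "    double {} = {};\n"}

def translate(ir, lang):
    """Walk the IR (the 'tree walker') and emit source for one target."""
    body = "".join(LINE[lang].format(name, expr) for name, expr in ir)
    return FUNC[lang].substitute(body=body)

py_src = translate(ir, "python")
exec(py_src)                          # compile the generated Python variant
print(translate(ir, "cpp"))           # the C++ variant, from the same IR
print(energy(2.0, 3.0, 9.81, 1.0))    # 0.5*2*3^2 + 2*9.81*1 = 28.62 analytically
```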

  9. Hard real-time multibody simulations using ARM-based embedded systems

    International Nuclear Information System (INIS)

    Pastorino, Roland; Cosco, Francesco; Naets, Frank; Desmet, Wim; Cuadrado, Javier

    2016-01-01

    The real-time simulation of multibody models on embedded systems is of particular interest for controllers and observers such as model predictive controllers and state observers, which rely on a dynamic model of the process and are customarily executed in electronic control units. This work first identifies the software techniques and tools required to easily write efficient code for multibody models to be simulated on ARM-based embedded systems. Automatic Programming and Source Code Translation are the two techniques that were chosen to generate source code for multibody models in different programming languages. Automatic Programming is used to generate procedural code in an intermediate representation from an object-oriented library and Source Code Translation is used to translate the intermediate representation automatically to an interpreted language or to a compiled language for efficiency purposes. An implementation of these techniques is proposed. It is based on a Python template engine and AST tree walkers for Source Code Generation and on a model-driven translator for the Source Code Translation. The code is translated from a metalanguage to any of the following four programming languages: Python-Numpy, Matlab, C++-Armadillo, C++-Eigen. Two examples of multibody models were simulated: a four-bar linkage with multiple loops and a 3D vehicle steering system. The code for these examples has been generated and executed on two ARM-based single-board computers. Using compiled languages, both models could be simulated faster than real-time despite the low resources and performance of these embedded systems. Finally, the real-time performance of both models was evaluated when executed in hard real-time on Xenomai for both embedded systems. This work shows through measurements that Automatic Programming and Source Code Translation are valuable techniques to develop real-time multibody models to be used in embedded observers and controllers.

  10. Automatic pickup of arrival time of channel wave based on multi-channel constraints

    Science.gov (United States)

    Wang, Bao-Li

    2018-03-01

    Accurately detecting the arrival time of a channel wave in a coal seam is very important for in-seam seismic data processing. The arrival time greatly affects the accuracy of the channel wave inversion and the computed tomography (CT) result. However, because the signal-to-noise ratio of in-seam seismic data is reduced by the long wavelength and strong frequency dispersion, accurately timing the arrival of channel waves is extremely difficult. For this purpose, we propose a method that automatically picks up the arrival time of channel waves based on multi-channel constraints. We first estimate the Jaccard similarity coefficient of two ray paths, then apply it as a weight coefficient for stacking the multichannel dispersion spectra. The reasonableness and effectiveness of the proposed method are verified in an actual data application. Most importantly, the method increases the degree of automation and the pickup precision of the channel-wave arrival time.
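A minimal sketch of the paper's weighting idea, under the assumption that ray paths are represented as sets of traversed grid cells (the representation here is illustrative, not the author's):

```python
import numpy as np

def jaccard(path_a, path_b):
    """Jaccard similarity of two ray paths, each given as a set of cells."""
    a, b = set(path_a), set(path_b)
    return len(a & b) / len(a | b)

def weighted_stack(ref_path, neighbours):
    """Stack the dispersion spectra of neighbouring channels, weighting each
    by the similarity of its ray path to the reference channel's path."""
    total, wsum = 0.0, 0.0
    for path, spectrum in neighbours:
        w = jaccard(ref_path, path)
        total = total + w * np.asarray(spectrum, float)
        wsum += w
    return total / wsum

ref = [(0, 0), (0, 1), (0, 2), (0, 3)]
neighbours = [([(0, 0), (0, 1), (0, 2), (1, 3)], [1.0, 2.0]),   # similar path
              ([(3, 0), (3, 1), (3, 2), (3, 3)], [10.0, 20.0])] # disjoint path
print(weighted_stack(ref, neighbours))  # the disjoint path gets zero weight
```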

  11. Unitarity corrections and high field strengths in high energy hard collisions

    International Nuclear Information System (INIS)

    Kovchegov, Y.V.; Mueller, A.H.

    1997-01-01

    Unitarity corrections to the BFKL description of high energy hard scattering are viewed in large-N_c QCD in light-cone quantization. In a center of mass frame, unitarity corrections to high energy hard scattering are manifestly perturbatively calculable and unrelated to questions of parton saturation. In a frame where one of the hadrons is initially at rest, unitarity corrections are related to parton saturation effects and involve potential strengths A_mu ∝ 1/g. In such a frame we describe the high energy scattering in terms of the expectation value of a Wilson loop. The large potentials A_mu ∝ 1/g are shown to be pure gauge terms, allowing perturbation theory to again describe unitarity corrections and parton saturation effects. Genuine nonperturbative effects only come in at energies well beyond those where unitarity constraints first become important. (orig.)

  12. Ant system for reliability optimization of a series system with multiple-choice and budget constraints

    International Nuclear Information System (INIS)

    Nahas, Nabil; Nourelfath, Mustapha

    2005-01-01

    Many researchers have shown that the behavior of insect colonies can be seen as a natural model of collective problem solving. The analogy between the way ants look for food and combinatorial optimization problems has given rise to a new computational paradigm called the ant system. This paper presents an application of the ant system to a reliability optimization problem for a series system with multiple-choice constraints incorporated at each subsystem, to maximize the system reliability subject to the system budget. The problem is formulated as a nonlinear binary integer programming problem and characterized as an NP-hard problem. It is solved by developing and demonstrating a problem-specific ant system algorithm. In this algorithm, solutions of the reliability optimization problem are repeatedly constructed by considering the trace factor and the desirability factor. A local search is used to improve the quality of the solutions obtained by each ant, and a penalty factor is introduced to deal with the budget constraint. Simulations have shown that the proposed ant system is efficient with respect to the quality of solutions and the computing time.
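A bare-bones sketch of an ant system for this multiple-choice, budget-constrained problem; the component data are invented, and infeasible solutions are rejected outright here rather than handled with the paper's penalty factor:

```python
import random
random.seed(1)

# Each subsystem offers alternative components: (reliability, cost)
subsystems = [[(0.90, 2), (0.95, 4)],
              [(0.85, 1), (0.99, 5)],
              [(0.92, 3), (0.96, 6)]]
BUDGET = 12

def evaluate(choice):
    """System reliability (series product); zero if over budget."""
    r, cost = 1.0, 0
    for options, k in zip(subsystems, choice):
        r *= options[k][0]
        cost += options[k][1]
    return r if cost <= BUDGET else 0.0

def ant_system(n_ants=20, n_iter=50, rho=0.1, alpha=1.0, beta=1.0):
    tau = [[1.0] * len(s) for s in subsystems]   # pheromone trace
    best, best_r = None, 0.0
    for _ in range(n_iter):
        for _ in range(n_ants):
            # pick one component per subsystem by trace x desirability
            choice = tuple(
                random.choices(
                    range(len(s)),
                    weights=[tau[i][k] ** alpha * (s[k][0] / s[k][1]) ** beta
                             for k in range(len(s))])[0]
                for i, s in enumerate(subsystems))
            r = evaluate(choice)
            if r > best_r:
                best, best_r = choice, r
        if best is not None:                     # evaporate, then reinforce
            for i, k in enumerate(best):
                tau[i] = [t * (1 - rho) for t in tau[i]]
                tau[i][k] += best_r
    return best, best_r

best, best_r = ant_system()
print(best, round(best_r, 5))
```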

  13. Evaluation issues on real-time operating system in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Y. M.; Jeong, C. H.; Koh, J. S. [Regulatory Research Div., Korea Inst. of Nuclear Safety (Korea, Republic of)

    2006-07-01

    In recent years, the use of hard real-time operating systems (RTOSs) in safety-critical applications has gained increased acceptance in nuclear safety systems. Failure of this software could cause catastrophic consequences for human life. The digital I and C systems of nuclear power plants have also used hard RTOSs, which must execute a required mission completely within its deadline. Because nuclear power plants have to maintain a very high level of safety, the hard RTOS software should be reliable and safe. The RTOS used in safety-critical I and C systems is the base software used to satisfy the real-time constraints, so careful evaluation of its safety and functionality is very important. So far, the nuclear power plants of Korea have adopted commercial off-the-shelf (COTS) RTOS software, but recently an RTOS embedded in a safety-grade PLC has been developed by the KNICS project under the Ministry of Commerce, Industry and Energy of Korea. Whether a COTS RTOS or a newly developed one, its safety and reliability must be evaluated. (authors)

  14. Evaluation issues on real-time operating system in nuclear power plants

    International Nuclear Information System (INIS)

    Kim, Y. M.; Jeong, C. H.; Koh, J. S.

    2006-01-01

    In recent years, the use of hard real-time operating systems (RTOSs) in safety-critical applications has gained increased acceptance in nuclear safety systems. Failure of this software could cause catastrophic consequences for human life. The digital I and C systems of nuclear power plants have also used hard RTOSs, which must execute a required mission completely within its deadline. Because nuclear power plants have to maintain a very high level of safety, the hard RTOS software should be reliable and safe. The RTOS used in safety-critical I and C systems is the base software used to satisfy the real-time constraints, so careful evaluation of its safety and functionality is very important. So far, the nuclear power plants of Korea have adopted commercial off-the-shelf (COTS) RTOS software, but recently an RTOS embedded in a safety-grade PLC has been developed by the KNICS project under the Ministry of Commerce, Industry and Energy of Korea. Whether a COTS RTOS or a newly developed one, its safety and reliability must be evaluated. (authors)

  15. Two-agent cooperative search using game models with endurance-time constraints

    Science.gov (United States)

    Sujit, P. B.; Ghose, Debasish

    2010-07-01

    In this article, the problem of two Unmanned Aerial Vehicles (UAVs) cooperatively searching an unknown region is addressed. The search region is discretized into hexagonal cells and each cell is assumed to possess an uncertainty value. The UAVs have to cooperatively search these cells taking limited endurance, sensor and communication range constraints into account. Due to limited endurance, the UAVs need to return to the base station for refuelling and also need to select a base station when multiple base stations are present. This article proposes a route planning algorithm that takes endurance time constraints into account and uses game theoretical strategies to reduce the uncertainty. The route planning algorithm selects only those cells that ensure the agent will return to any one of the available bases. A set of paths is formed using these cells, from which the game theoretical strategies select a path that yields maximum uncertainty reduction. We explore non-cooperative Nash, cooperative and security strategies from game theory to enhance the search effectiveness. Monte-Carlo simulations show the superiority of the game theoretical strategies over a greedy strategy for paths of different look-ahead step lengths. Within the game theoretical strategies, the non-cooperative Nash and cooperative strategies perform similarly in an ideal case, but the Nash strategy performs better than the cooperative strategy when the perceived information differs. We also propose a heuristic based on partitioning the search space into sectors to reduce computational overhead without performance degradation.
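The endurance-feasibility filtering of candidate paths can be sketched as follows; this toy uses a 1-D strip of cells rather than hexagons, and exhaustive look-ahead enumeration instead of the game-theoretic strategy selection:

```python
from itertools import product

# Toy 1-D strip of cells; the UAV starts at the base in cell 0
uncertainty = [0.0, 0.3, 0.9, 0.2, 0.8, 0.1]
ENDURANCE = 6                  # fuel, in unit moves

def admissible(path, fuel=ENDURANCE):
    """Keep only paths after which the agent can still reach the base."""
    used = sum(abs(b - a) for a, b in zip(path, path[1:]))
    return used + path[-1] <= fuel     # remaining fuel covers the trip home

def best_path(start=0, lookahead=3):
    best, best_gain = None, -1.0
    for steps in product((-1, 1), repeat=lookahead):
        path, pos = [start], start
        for s in steps:
            pos = max(0, min(len(uncertainty) - 1, pos + s))
            path.append(pos)
        if not admissible(path):
            continue
        gain = sum(uncertainty[c] for c in set(path))  # each cell pays once
        if gain > best_gain:
            best, best_gain = path, gain
    return best, best_gain

path, gain = best_path()
print(path, round(gain, 2))    # [0, 1, 2, 3]: the richest admissible path
```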

  16. Studying Hardness Meter Spring Strength to Understand Hardness Distribution on Body Surfaces.

    Science.gov (United States)

    Arima, Yoshitaka

    2017-10-01

    For developing a hardness multipoint measurement system for understanding hardness distribution on biological body surfaces, we investigated the spring strength of the contact portion main axis of a biological tissue hardness meter (product name: PEK). We measured the hardness of three-layered sheets of six types of gel sheets (90 mm × 60 mm × 6 mm) constituting the acupuncture practice pads, with main-axis spring strengths of 1.96 N, 2.94 N, 3.92 N, 4.90 N, 5.88 N, 6.86 N, 7.84 N, 8.82 N, and 9.81 N. We obtained measurements 10 times for each gel sheet and simultaneously measured the load using a digital scale. We measured the hardness distribution of induration-embedded and breast cancer palpation models with main-axis spring strengths of 1.96 N, 4.90 N, and 9.81 N to create a two-dimensional Contour Fill Chart. Using a 4.90 N spring strength, we could obtain measurement loads of ≤3.0 N, and the mean hardness was 5.14 mm. This was close to the median of the total measurement range of 0.0-10.0 mm, making the measurement range the largest for this spring strength. We could image the induration of the induration-embedded model regardless of the spring strength. Overall, the 4.90 N spring strength was best suited for imaging cancer in the breast cancer palpation model. Copyright © 2017. Published by Elsevier B.V.

  17. Examining the Effect of Time Constraint on the Online Mastery Learning Approach towards Improving Postgraduate Students' Achievement

    Science.gov (United States)

    Ee, Mong Shan; Yeoh, William; Boo, Yee Ling; Boulter, Terry

    2018-01-01

    Time control plays a critical role within the online mastery learning (OML) approach. This paper examines the two commonly implemented mastery learning strategies--personalised system of instructions and learning for mastery (LFM)--by focusing on what occurs when there is an instructional time constraint. Using a large data set from a postgraduate…

  18. Constraints of a parity-conserving/time-reversal-non-conserving interaction

    International Nuclear Information System (INIS)

    Oers, Willem T.H. van

    2002-01-01

    Time-reversal-invariance non-conservation has for the first time been unequivocally demonstrated in a direct measurement at CPLEAR. One can then ask: what about tests of time-reversal invariance in systems other than the kaon system? Tests of time-reversal invariance fall into two classes: the first deals with time-reversal-invariance-non-conserving (T-odd)/parity-violating (P-odd) interactions, while the second deals with T-odd/P-even interactions (assuming CPT conservation, this implies C-conjugation non-conservation). Limits on a T-odd/P-odd interaction follow from measurements of the electric dipole moment of the neutron (< 10^-26 e·cm [95% C.L.]). It provides a limit on a T-odd/P-odd pion-nucleon coupling constant which is less than 10^-4 times the weak interaction strength. Experimental limits on a T-odd/P-even interaction are much less stringent. Following the standard approach of describing the nucleon-nucleon interaction in terms of meson exchanges, it can be shown that only charged ρ-meson exchange and A_1-meson exchange can lead to a T-odd/P-even interaction. The better constraints stem from measurements of the electric dipole moment of the neutron and from measurements of charge-symmetry breaking in neutron-proton elastic scattering. The latter experiments were executed at TRIUMF (497 and 347 MeV) and at IUCF (183 MeV). All other experiments, like detailed balance experiments, polarization-analyzing power difference determinations, and five-fold correlation experiments with polarized incident nucleons and aligned nuclear targets, have been shown to be at least an order of magnitude less sensitive. Is there room for further experimentation?

  19. Dwell time adjustment for focused ion beam machining

    International Nuclear Information System (INIS)

    Taniguchi, Jun; Satake, Shin-ichi; Oosumi, Takaki; Fukushige, Akihisa; Kogo, Yasuo

    2013-01-01

    Focused ion beam (FIB) machining is potentially useful for micro/nano fabrication of hard brittle materials, because the removal method involves physical sputtering. Usually, micro/nano scale patterning of hard brittle materials is very difficult to achieve by mechanical polishing or dry etching. Furthermore, in most reported examples, FIB machining has been applied to silicon substrates in a limited range of shapes, so a versatile method for FIB machining is required. We previously established the dwell time adjustment for mechanical polishing. The dwell time adjustment is calculated by using a convolution model derived from Preston's hypothesis: the target removal shape is a convolution of the unit removal shape and the dwell time, and the dwell time is calculated by means of one of four algorithms. We investigated these algorithms for dwell time adjustment in FIB machining and found that a combination of a fast Fourier transform calculation technique and a constraint-type calculation is suitable. By applying this algorithm, we succeeded in machining a spherical lens shape with a diameter of 2.93 μm and a depth of 203 nm in a glassy carbon substrate by means of FIB with dwell time adjustment.
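A sketch of the convolution model and an FFT-based dwell-time computation: a Wiener-regularized 1-D deconvolution with a non-negativity clamp standing in for the constraint-type step (the paper's actual algorithm may differ):

```python
import numpy as np

def dwell_time(target, footprint, eps=1e-6):
    """Wiener-regularised FFT deconvolution with a non-negativity clamp
    (a stand-in for the constraint-type step): target ~ footprint (*) dwell."""
    n = len(target)
    F = np.fft.rfft(footprint, n)
    T = np.fft.rfft(target, n)
    t = np.fft.irfft(T * np.conj(F) / (np.abs(F) ** 2 + eps), n)
    return np.maximum(t, 0.0)            # a dwell time cannot be negative

n = 128
i = np.arange(n)
# Unit-area Gaussian beam footprint, centred on index 0 for circular conv.
beam = np.exp(-((((i + n // 2) % n) - n // 2) / 5.0) ** 2)
beam /= beam.sum()
# Ground-truth dwell map and the removal depth it would sputter away
t_true = np.exp(-((i - 64) / 8.0) ** 2)
target = np.fft.irfft(np.fft.rfft(t_true) * np.fft.rfft(beam), n)
t_est = dwell_time(target, beam)
print(float(np.abs(t_est - t_true).max()))   # small reconstruction error
```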

  20. An overview on polynomial approximation of NP-hard problems

    Directory of Open Access Journals (Sweden)

    Paschos Vangelis Th.

    2009-01-01

    Full Text Available The fact that a polynomial time algorithm is very unlikely to be devised for optimally solving NP-hard problems strongly motivates both researchers and practitioners to try to solve such problems heuristically, by making a trade-off between computational time and solution quality. In other words, heuristic computation consists of trying to find not the best solution but one solution which is 'close to' the optimal one in reasonable time. Among the classes of heuristic methods for NP-hard problems, polynomial approximation algorithms aim at solving a given NP-hard problem in polynomial time by computing feasible solutions that are, under some predefined criterion, as near to the optimal ones as possible. The polynomial approximation theory deals with the study of such algorithms. This survey first presents and analyzes polynomial approximation algorithms for some classical examples of NP-hard problems. Secondly, it shows how classical notions and tools of complexity theory, such as polynomial reductions, can be matched with polynomial approximation in order to devise structural results for NP-hard optimization problems. Finally, it presents a quick description of what is commonly called inapproximability results, which provide limits on the approximability of the problems tackled.
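A classical example of such a polynomial approximation algorithm is the maximal-matching 2-approximation for minimum vertex cover:

```python
def vertex_cover_2approx(edges):
    """Classic maximal-matching 2-approximation: greedily pick an uncovered
    edge and add both endpoints; the result is at most twice the optimum."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
c = vertex_cover_2approx(edges)
print(sorted(c))   # size 4; the optimum {1, 4} has size 2, within the bound
assert all(u in c or v in c for u, v in edges)   # feasibility check
```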

  1. Infinite Runs in Weighted Timed Automata with Energy Constraints

    DEFF Research Database (Denmark)

    Bouyer, Patricia; Fahrenberg, Uli; Larsen, Kim Guldstrand

    2008-01-01

    and locations, corresponding to the production and consumption of some resource (e.g. energy). We ask the question whether there exists an infinite path for which the accumulated weight for any finite prefix satisfies certain constraints (e.g. remains between 0 and some given upper-bound). We also consider...
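The lower-and-upper-bound question can be illustrated for the untimed, integer-weight case: does repeating a cycle of weights forever keep the accumulated energy within [0, upper]? A small state-exploration sketch (not the paper's algorithm):

```python
def feasible_cycle(weights, init, upper):
    """Can the integer-weight cycle be repeated forever while the
    accumulated energy stays in [0, upper] (capping at upper)?"""
    seen = set()
    e, idx = init, 0
    while (e, idx) not in seen:
        seen.add((e, idx))
        e = min(e + weights[idx], upper)
        if e < 0:
            return False            # energy exhausted: no infinite run
        idx = (idx + 1) % len(weights)
    return True                     # a state repeated: the run loops forever

print(feasible_cycle([-2, 3], init=2, upper=3),   # survives: True
      feasible_cycle([-3, 1], init=2, upper=5))   # first step kills it: False
```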

  2. Spatially resolving a starburst galaxy at hard X-ray energies: NuSTAR, CHANDRA, AND VLBA observations of NGC 253

    DEFF Research Database (Denmark)

    Wik, D. R.; Lehmer, B. D.; Hornschemeier, A. E.

    2014-01-01

    for the first time. As a follow up to our initial study of its nuclear region, we present the first results concerning the full galaxy from simultaneous NuSTAR, Chandra, and Very Long Baseline Array monitoring of the local starburst galaxy NGC 253. Above ~10 keV, nearly all the emission is concentrated within...... is detected at E > 40 keV. We report upper limits on diffuse inverse Compton emission for a range of spatial models. For the most extended morphologies considered, these hard X-ray constraints disfavor a dominant inverse Compton component to explain the γ-ray emission detected with Fermi and H.E.S.S. If NGC...

  3. Provocative questions in cancer epidemiology in a time of scientific innovation and budgetary constraints.

    Science.gov (United States)

    Lam, Tram Kim; Schully, Sheri D; Rogers, Scott D; Benkeser, Rachel; Reid, Britt; Khoury, Muin J

    2013-04-01

    In a time of scientific and technological developments and budgetary constraints, the National Cancer Institute's (NCI) Provocative Questions Project offers a novel funding mechanism for cancer epidemiologists. We review the purposes underlying the Provocative Questions Project, present information on the contributions of epidemiologic research to the current Provocative Questions portfolio, and outline opportunities that the cancer epidemiology community might capitalize on to advance a research agenda that spans a translational continuum from scientific discoveries to population health impact.

  4. Analog Approach to Constraint Satisfaction Enabled by Spin Orbit Torque Magnetic Tunnel Junctions.

    Science.gov (United States)

    Wijesinghe, Parami; Liyanagedera, Chamika; Roy, Kaushik

    2018-05-02

    Boolean satisfiability (k-SAT, k ≥ 3) is an NP-complete problem that constitutes one of the hardest classes of constraint satisfaction problems. In this work, we provide a proof-of-concept hardware-based analog k-SAT solver built using Magnetic Tunnel Junctions (MTJs). The inherent physics of MTJs, enhanced by device-level modifications, is harnessed here to emulate the intricate dynamics of an analog satisfiability (SAT) solver. In the presence of thermal noise, the MTJ-based system can successfully solve Boolean satisfiability problems. Most importantly, our results show that the proposed MTJ-based hardware SAT solver is capable of finding a solution to a significant fraction (at least 85%) of hard 3-SAT problems within a time that has a polynomial relationship with the number of variables (<50).
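A software stand-in for such a noise-driven solver is Schöning-style random-walk local search (restart schedule omitted), sketched below; it is only an analogy to the analog MTJ dynamics, with random flips playing the role of thermal noise:

```python
import random
random.seed(0)

def random_walk_sat(clauses, n_vars, max_flips=100000):
    """Random-walk local search for k-SAT: literals are signed ints,
    +i / -i meaning variable i is true / false."""
    assign = {i: random.choice([True, False]) for i in range(1, n_vars + 1)}
    satisfied = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(satisfied(l) for l in c)]
        if not unsat:
            return assign                      # all clauses satisfied
        # flip a random variable of a random violated clause
        assign[abs(random.choice(random.choice(unsat)))] ^= True
    return None

# (x1 v x2 v ~x3) ^ (~x1 v x3 v x2) ^ (~x2 v ~x3 v x1)
clauses = [(1, 2, -3), (-1, 3, 2), (-2, -3, 1)]
model = random_walk_sat(clauses, 3)
print(model is not None)   # a satisfying assignment was found
```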

  5. Hardness of Clustering

    Indian Academy of Sciences (India)

    Hardness of Clustering. Both k-means and k-medians are intractable (when n and d are both inputs, even for k = 2). The best known deterministic algorithms are based on Voronoi partitioning, which takes about time. Need for approximation – “close” to optimal.

  6. Alternative Forms of Resilience Confronting Hard Economic Times. A South European Perspective

    Directory of Open Access Journals (Sweden)

    Maria Kousis

    2017-05-01

    Full Text Available The aim of this special issue is to contribute to the study of alternative forms of resilience, visible in the economic and noneconomic activities of citizens confronting hard economic times and falling rights in Italy, Spain, Greece and Portugal, since the global financial crisis of 2008. It does so through a set of recent empirical studies that adopt current theoretical approaches, such as Social Innovation or Sustainable Community Movement Organizations, and offer new evidence on solidarity-oriented practices, including their links to social movement activism. The authors of this special issue contribute to existing debates by highlighting key features of alternative forms of resilience, their links to social movements, and theoretical orientations influenced by social movement and resilience studies in four Southern European countries and regions.

  7. Design of the data management system for hard X-ray modulation telescope based on real-time Linux

    International Nuclear Information System (INIS)

    Jia Tao; Zhang Zhi

    2004-01-01

    The data management system of the Hard X-ray Modulation Telescope is an electronic subsystem that captures the telescope's data, then manages and transfers them. The data management system also handles communication with the satellite. Because of these functions, it requires high stability and good real-time performance. This paper describes the design of the system. (authors)

  8. Stack Memory Implementation and Analysis of Timing Constraint, Power and Memory using FPGA

    DEFF Research Database (Denmark)

    Thind, Vandana; Pandey, Nisha; Pandey, Bishwajeet

    2017-01-01

    Abstract — In this work of analysis, a stack memory algorithm is implemented on a number of FPGA platforms like virtex4, virtex5, virtex6, virtex6 low power and virtex7 low voltage, and very detailed observations/investigations were made about timing constraint, memory and power dissipation. The main aim is real-time output, so that the resources used to realize the project are not wasted and an energy-efficient design is obtained. Stack memory is an approach in which information is entered and deleted from the stack memory segment in the pattern of a last-in, first-out mechanism. There are several ways of implementing the stack memory algorithm, but virtex4 and virtex7 low voltage were considered to be the most efficient platforms for its operation. The developed system is energy efficient as the algorithm ensures less memory utilization, less power consumption and a short time for signal travel.
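The last-in, first-out mechanism described above can be modeled in software as a bounded stack (a behavioral sketch, not the FPGA implementation):

```python
class BoundedStack:
    """Behavioral model of a LIFO stack-memory segment with fixed capacity."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = []

    def push(self, word):
        if len(self._data) >= self.capacity:
            raise OverflowError("stack segment full")
        self._data.append(word)

    def pop(self):
        if not self._data:
            raise IndexError("stack segment empty")
        return self._data.pop()

s = BoundedStack(4)
for word in (0xA, 0xB, 0xC):
    s.push(word)
popped = [s.pop(), s.pop(), s.pop()]
print(popped)   # [12, 11, 10] — reverse of insertion order (last in, first out)
```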

  9. Micro-computer cards for hard industrial environment

    Energy Technology Data Exchange (ETDEWEB)

    Breton, J M

    1984-03-15

    Approximately 60% of present or future distributed systems have, or will have, operational units installed in hard environments. In these applications, which include canalization and industrial motor control, robotics and process control, systems must be easily deployed in environments not made for electronic use. The development of card systems for this hard industrial environment, found in the petrochemical industry and mines, is described. The CMOS technology of the National Semiconductor CIM card system allows real-time microcomputer applications to be efficient and functional in hard industrial environments.

  10. Long distance communication in the human brain: timing constraints for inter-hemispheric synchrony and the origin of brain lateralization

    Directory of Open Access Journals (Sweden)

    FRANCISCO ABOITIZ

    2003-01-01

    Full Text Available Analysis of corpus callosum fiber composition reveals that inter-hemispheric transmission time may put constraints on the development of inter-hemispheric synchronic ensembles, especially in species with large brains like humans. In order to overcome this limitation, a subset of large-diameter callosal fibers are specialized for fast inter-hemispheric transmission, particularly in large-brained species. Nevertheless, the constraints on fast inter-hemispheric communication in large-brained species can somehow contribute to the development of ipsilateral, intrahemispheric networks, which might promote the development of brain lateralization.
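The timing constraint can be illustrated with the common linear diameter-velocity rule for myelinated fibers; the scaling factor and the 160 mm path length below are assumed round figures, not the paper's measurements:

```python
def conduction_delay_ms(length_mm, diameter_um, k=5.5):
    """Delay for a myelinated axon using the linear rule
    velocity [m/s] ~ k * diameter [um]; k ~ 5.5 and the 160 mm
    inter-hemispheric path length used below are assumptions."""
    velocity_m_per_s = k * diameter_um
    return length_mm / 1000.0 / velocity_m_per_s * 1000.0

# Thin (0.4 um) vs. large-calibre (3 um) callosal fibres
delays = {d: conduction_delay_ms(160, d) for d in (0.4, 3.0)}
for d, t in delays.items():
    print(f"{d} um fibre: {t:.1f} ms")   # tens of ms vs. under 10 ms
```

The two orders differ by roughly a factor of eight, which is why a subset of large-diameter fibers matters for synchrony.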

  11. A Temporal Concurrent Constraint Programming Calculus

    DEFF Research Database (Denmark)

    Palamidessi, Catuscia; Valencia Posso, Frank Darwin

    2001-01-01

    The tcc model is a formalism for reactive concurrent constraint programming. In this paper we propose a model of temporal concurrent constraint programming which adds to tcc the capability of modeling asynchronous and non-deterministic timed behavior. We call this tcc extension the ntcc calculus...

  12. Impact Angle and Time Control Guidance Under Field-of-View Constraints and Maneuver Limits

    Science.gov (United States)

    Shim, Sang-Wook; Hong, Seong-Min; Moon, Gun-Hee; Tahk, Min-Jea

    2018-04-01

    This paper proposes a guidance law which considers the constraints of seeker field-of-view (FOV) as well as the requirements on impact angle and time. The proposed guidance law is designed for a constant speed missile against a stationary target. The guidance law consists of two terms of acceleration commands. The first one is to achieve zero-miss distance and the desired impact angle, while the second is to meet the desired impact time. To consider the limits of FOV and lateral maneuver capability, a varying-gain approach is applied on the second term. Reduction of realizable impact times due to these limits is then analyzed by finding the longest course among the feasible ones. The performance of the proposed guidance law is demonstrated by numerical simulation for various engagement conditions.

  13. Firefly Algorithm for Cardinality Constrained Mean-Variance Portfolio Optimization Problem with Entropy Diversity Constraint

    Science.gov (United States)

    2014-01-01

    Portfolio optimization (selection) is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with an entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with an entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved results. PMID:24991645

  14. Firefly algorithm for cardinality constrained mean-variance portfolio optimization problem with entropy diversity constraint.

    Science.gov (United States)

    Bacanin, Nebojsa; Tuba, Milan

    2014-01-01

    Portfolio optimization (selection) is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with an entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with an entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved results.
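A bare-bones firefly algorithm sketch on a toy 2-asset minimum-variance problem; a simplex projection stands in for the portfolio constraints, and the cardinality and entropy constraints of the paper are omitted:

```python
import math
import random
random.seed(42)

def firefly_minimise(obj, dim, n=15, iters=200, beta0=1.0, gamma=1.0, alpha=0.1):
    """Bare-bones firefly algorithm: every firefly moves toward each brighter
    (lower-objective) one with attractiveness beta0*exp(-gamma*r^2), plus a
    small random walk that is annealed over the iterations."""
    pop = [[random.random() for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        ranked = sorted(pop, key=obj)
        nxt = []
        for x in pop:
            x = x[:]
            for y in ranked:
                if obj(y) < obj(x):
                    r2 = sum((a - b) ** 2 for a, b in zip(x, y))
                    beta = beta0 * math.exp(-gamma * r2)
                    x = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                         for a, b in zip(x, y)]
            nxt.append(x)
        pop = nxt
        alpha *= 0.97                       # anneal the random-walk term
    return min(pop, key=obj)

# Toy 2-asset minimum-variance portfolio; weights projected onto the simplex
cov = [[0.10, 0.01], [0.01, 0.04]]
def variance(w):
    a, b = abs(w[0]), abs(w[1])
    s = (a + b) or 1.0
    a, b = a / s, b / s
    return a * a * cov[0][0] + 2 * a * b * cov[0][1] + b * b * cov[1][1]

best = firefly_minimise(variance, 2)
print(round(variance(best), 4))   # analytic minimum is 0.0325 at w = (0.25, 0.75)
```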

  15. Hard Real-Time Task Scheduling in Cloud Computing Using an Adaptive Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Amjad Mahmood

    2017-04-01

    Full Text Available In the Infrastructure-as-a-Service cloud computing model, virtualized computing resources in the form of virtual machines are provided over the Internet. A user can rent an arbitrary number of computing resources to meet their requirements, making cloud computing an attractive choice for executing real-time tasks. Economical task allocation and scheduling on a set of leased virtual machines is an important problem in the cloud computing environment. This paper proposes a greedy algorithm and a genetic algorithm with adaptive selection of suitable crossover and mutation operations (named AGA) to allocate and schedule real-time tasks with precedence constraints on heterogeneous virtual machines. A comprehensive simulation study has been conducted to evaluate the performance of the proposed algorithms in terms of their solution quality and efficiency. The simulation results show that AGA outperforms the greedy algorithm and a non-adaptive genetic algorithm in terms of solution quality.
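The adaptive idea (adjusting genetic operators as the search progresses) can be sketched in a few lines. Everything below is illustrative, not the paper's AGA: a made-up task set, independent tasks with no precedence constraints, and a simple heuristic that raises the mutation probability as the population converges.

```python
import random

random.seed(1)

# Toy instance (illustrative): task lengths and heterogeneous VM speeds
tasks = [random.randint(5, 50) for _ in range(12)]
speeds = [1.0, 2.0, 4.0]

def makespan(assign):
    loads = [0.0] * len(speeds)
    for t, vm in zip(tasks, assign):
        loads[vm] += t / speeds[vm]
    return max(loads)

def diversity(pop):
    """Fraction of gene positions differing from the best individual."""
    best = min(pop, key=makespan)
    diffs = sum(g != b for ind in pop for g, b in zip(ind, best))
    return diffs / (len(pop) * len(best))

pop = [[random.randrange(3) for _ in tasks] for _ in range(30)]
for gen in range(100):
    # Adaptive mutation: mutate more when the population has converged
    pm = 0.05 + 0.3 * (1.0 - diversity(pop))
    pop.sort(key=makespan)
    nxt = pop[:5]                              # elitism
    while len(nxt) < len(pop):
        a, b = random.sample(pop[:15], 2)      # truncation selection
        cut = random.randrange(1, len(tasks))  # one-point crossover
        child = a[:cut] + b[cut:]
        child = [random.randrange(3) if random.random() < pm else g
                 for g in child]
        nxt.append(child)
    pop = nxt

best_ms = makespan(min(pop, key=makespan))
print("best makespan:", round(best_ms, 2))
```

The paper adapts both crossover and mutation operator choice; this sketch adapts only the mutation rate, which is the simplest instance of the same principle.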

  16. Constraint Differentiation

    DEFF Research Database (Denmark)

    Mödersheim, Sebastian Alexander; Basin, David; Viganò, Luca

    2010-01-01

    We introduce constraint differentiation, a powerful technique for reducing search when model-checking security protocols using constraint-based methods. Constraint differentiation works by eliminating certain kinds of redundancies that arise in the search space when using constraints to represent...... results show that constraint differentiation substantially reduces search and considerably improves the performance of OFMC, enabling its application to a wider class of problems....

  17. Real-time algorithms for JET hard X-ray and gamma-ray profile monitor

    International Nuclear Information System (INIS)

    Fernandes, A.; Pereira, R.C.; Valcárcel, D.F.; Alves, D.; Carvalho, B.B.; Sousa, J.; Kiptily, V.; Correia, C.M.B.A.; Gonçalves, B.

    2014-01-01

    Highlights: • Real-time tools and mechanisms are required for data handling and machine control. • A new ATCA-based DAQ system with embedded FPGAs was installed at JET. • Different real-time algorithms were developed for the FPGAs and the MARTe application. • MARTe provides the interface to CODAS and to the JET real-time network. • The new DAQ system is capable of processing and delivering data in real time. - Abstract: The steady-state operation with high energy content foreseen for future generations of fusion devices will necessarily demand dedicated real-time tools and mechanisms for data handling and machine control. Consequently, the real-time systems for those devices should be carefully selected and their capabilities established in advance. The Joint European Torus (JET) is undertaking an enhancement program, which includes tests of relevant real-time tools for the International Thermonuclear Experimental Reactor (ITER), a key experiment for future fusion devices. These enhancements include a new Data AcQuisition (DAQ) system, with real-time processing capabilities, for the JET hard X-ray and gamma-ray profile monitor. The DAQ system is composed of dedicated digitizer modules with embedded Field Programmable Gate Array (FPGA) devices. The interface between the DAQ system, the JET control and data acquisition system and the JET real-time data network is provided by the Multithreaded Application Real-Time executor (MARTe). This paper describes the real-time algorithms, developed for both the digitizers' FPGAs and the MARTe application, capable of meeting the DAQ real-time requirements. The new DAQ system, including the embedded real-time features, was commissioned during the 2012 experiments. Results achieved with these real-time algorithms during experiments are presented.

  18. Real-time algorithms for JET hard X-ray and gamma-ray profile monitor

    Energy Technology Data Exchange (ETDEWEB)

    Fernandes, A., E-mail: anaf@ipfn.ist.utl.pt [Associação EURATOM/IST, Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade Técnica de Lisboa, 1049-001 Lisboa (Portugal); Pereira, R.C.; Valcárcel, D.F.; Alves, D.; Carvalho, B.B.; Sousa, J. [Associação EURATOM/IST, Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade Técnica de Lisboa, 1049-001 Lisboa (Portugal); Kiptily, V. [EURATOM/CCFE Fusion Association, Culham Centre for Fusion Energy, Culham Science Centre, Abingdon OX14 3DB (United Kingdom); Correia, C.M.B.A. [Centro de Instrumentação, Dept. de Física, Universidade de Coimbra, 3004-516 Coimbra (Portugal); Gonçalves, B. [Associação EURATOM/IST, Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade Técnica de Lisboa, 1049-001 Lisboa (Portugal)

    2014-03-15

    Highlights: • Real-time tools and mechanisms are required for data handling and machine control. • A new ATCA-based DAQ system with embedded FPGAs was installed at JET. • Different real-time algorithms were developed for the FPGAs and the MARTe application. • MARTe provides the interface to CODAS and to the JET real-time network. • The new DAQ system is capable of processing and delivering data in real time. - Abstract: The steady-state operation with high energy content foreseen for future generations of fusion devices will necessarily demand dedicated real-time tools and mechanisms for data handling and machine control. Consequently, the real-time systems for those devices should be carefully selected and their capabilities established in advance. The Joint European Torus (JET) is undertaking an enhancement program, which includes tests of relevant real-time tools for the International Thermonuclear Experimental Reactor (ITER), a key experiment for future fusion devices. These enhancements include a new Data AcQuisition (DAQ) system, with real-time processing capabilities, for the JET hard X-ray and gamma-ray profile monitor. The DAQ system is composed of dedicated digitizer modules with embedded Field Programmable Gate Array (FPGA) devices. The interface between the DAQ system, the JET control and data acquisition system and the JET real-time data network is provided by the Multithreaded Application Real-Time executor (MARTe). This paper describes the real-time algorithms, developed for both the digitizers' FPGAs and the MARTe application, capable of meeting the DAQ real-time requirements. The new DAQ system, including the embedded real-time features, was commissioned during the 2012 experiments. Results achieved with these real-time algorithms during experiments are presented.

  19. Modeling of finite systems irradiated by intense ultrashort hard X-ray pulses

    Energy Technology Data Exchange (ETDEWEB)

    Jurek, Zoltan [Center for Free-Electron Laser Science, Deutsches Elektronen-Synchrotron, Notkestrasse 85, D-22607 Hamburg (Germany); Ziaja, Beata [Center for Free-Electron Laser Science, Deutsches Elektronen-Synchrotron, Notkestrasse 85, D-22607 Hamburg (Germany); Institute of Nuclear Physics, Polish Academy of Sciences, Radzikowskiego 152, 31-342 Krakow (Poland); Santra, Robin [Center for Free-Electron Laser Science, Deutsches Elektronen-Synchrotron, Notkestrasse 85, D-22607 Hamburg (Germany); Department of Physics, University of Hamburg, Jungiusstrasse 9, 20355 Hamburg (Germany)

    2013-07-01

    A large number of experiments have already been carried out at the existing hard X-ray free-electron laser facilities (LCLS, SACLA) in recent years. Their great success generates even higher anticipation for the forthcoming X-ray sources (European XFEL). Single-molecule imaging and nanoplasma formation are challenging XFEL projects that investigate the interaction of finite, small objects, e.g. single molecules and atomic clusters, with intense X-ray radiation. Accurate modelling of the time evolution of such irradiated systems is required in order to understand current experiments and to inspire new directions of experimental investigation. In this presentation we report on our theoretical molecular-dynamics tool, able to follow non-equilibrium dynamics within finite systems irradiated by intense X-ray pulses. We introduce the relevant physical processes, present the computational methods used, and discuss their limitations as well as the specific constraints on calculations imposed by experimental conditions. Finally, we conclude with a few simulation examples.

  20. Constraint-based scheduling applying constraint programming to scheduling problems

    CERN Document Server

    Baptiste, Philippe; Nuijten, Wim

    2001-01-01

    Constraint Programming is a problem-solving paradigm that establishes a clear distinction between two pivotal aspects of a problem: (1) a precise definition of the constraints that define the problem to be solved and (2) the algorithms and heuristics enabling the selection of decisions to solve the problem. It is because of these capabilities that Constraint Programming is increasingly being employed as a problem-solving tool to solve scheduling problems. Hence the development of Constraint-Based Scheduling as a field of study. The aim of this book is to provide an overview of the most widely used Constraint-Based Scheduling techniques. Following the principles of Constraint Programming, the book consists of three distinct parts: The first chapter introduces the basic principles of Constraint Programming and provides a model of the constraints that are the most often encountered in scheduling problems. Chapters 2, 3, 4, and 5 are focused on the propagation of resource constraints, which usually are responsibl...

  1. Rapid spectral and flux time variations in a solar burst observed at various dm-mm wavelengths and at hard x rays

    International Nuclear Information System (INIS)

    Zodivaz, A.M.; Kaufmann, P.; Correia, E.; Costa, J.E.R.; Takakura, T.; Cliver, E.W.; Tapping, K.F.; Air Force Geophysics Lab., Hanscom AFB, MA; National Research Council of Canada, Ottawa, Ontario)

    1986-01-01

    A solar burst was observed with high sensitivity and time resolution at cm-mm wavelengths by two different radio observatories (Itapetinga and Algonquin), with high spectral time resolution at dm-mm wavelengths by patrol instruments (Sagamore Hill), and at hard x rays (HXM Hinotori). At the onset of the major burst time structure there was a rapid rise in the spectral turnover frequency (from 5 to 15 GHz) in about 10 s, coincident with a reduction of the spectral index in the optically thin part of the spectrum. The burst maxima were not time coincident at the optically thin radio frequencies and at the different hard x ray energy ranges. The profiles at higher radio frequencies exhibited better time coincidence with the high energy x rays. The hardest x ray spectrum (-3) coincided with peak radio emission at the higher frequency (44 GHz). The event appeared to be built up by a first major injection of softer particles followed by other injections of harder particles. Ultrafast time structures superimposed on the burst emission were identified in the high-sensitivity cm-mm data and at x rays, with predominant repetition rates ranging from 2.0 to 3.5 Hz.

  2. Experimental research on the ultimate strength of hard aluminium alloy 2017 subjected to short-time radioactive heating

    International Nuclear Information System (INIS)

    Dafang, Wu; Yuewu, Wang; Bing, Pan; Meng, Mu; Lin, Zhu

    2012-01-01

    Highlights: ► Ultimate strength at transient heating is critical to security design of missiles. ► We measure the ultimate strength of alloy 2017 subjected to transient heating. ► Experimental results at transient heating are lacking in strength design handbook. ► Ultimate strength of alloy 2017 experimented is much higher than handbook value. ► The results provide a new method for optimal design of high-speed flight vehicles. -- Abstract: Alloy 2017 (Al–Cu–Mg) is a hard aluminium alloy strengthened by heat treatment. Because of its higher strength, finer weldability and ductility, hard aluminium alloy 2017 has been widely used in the field of aeronautics and astronautics. However, the ultimate strength and other characteristic mechanical parameters of aluminium alloy 2017 in a transient heating environment are still unclear, as these key mechanical parameters are lacking in the existing strength design handbook. The experimental characterisation of these critical parameters of aluminium alloy 2017 is undoubtedly meaningful for reliably estimating life span of and improving safety in designing high-speed flight vehicles. In this paper, the high-temperature ultimate strength, loading time and other mechanical properties of hard aluminium alloy 2017 under different transient heating temperatures and loading conditions are investigated by combining a transient aerodynamic heating simulation system and a material testing machine. The experimental results reveal that the ultimate strength and loading capability of aluminium alloy 2017 subjected to transient thermal heating are much higher than those tested in a long-time stable high-temperature environment. The research of this work not only provides a substantial basis for the loading capability improvement and optimal design of aerospace materials and structures subject to transient heating but also presents a new research direction with a practical application value.

  3. Optimum filters with time width constraints for liquid argon total-absorption detectors

    International Nuclear Information System (INIS)

    Gatti, E.; Radeka, V.

    1977-10-01

    Optimum filter responses are found for triangular current input pulses occurring in liquid argon ionization chambers used as total absorption detectors. The filters considered are subject to the following constraints: finite width of the output pulse having a prescribed ratio to the width of the triangular input current pulse and zero area of a bipolar antisymmetrical pulse or of a three lobe pulse, as required for high event rates. The feasibility of pulse shaping giving an output equal to, or shorter than, the input one is demonstrated. It is shown that the signal-to-noise ratio remains constant for the chamber interelectrode gap which gives an input pulse width (i.e., electron drift time) greater than one third of the required output pulse width

  4. Unified Research on Network-Based Hard/Soft Information Fusion

    Science.gov (United States)

    2016-02-02

    Data gathered during various counterinsurgency (COIN) operations is in different formats. For example, the ... characteristic, observation time, and related data. TML formats were developed and ... [Figure 45: Sample snapshot frame from hard sensor data. Figure 54: Penn State components of overall hard and soft fusion process.] Summary of Year 1 accomplishments: team formation; initial ...

  5. Academic Training: Real Time Process Control - Lecture series

    CERN Multimedia

    Françoise Benz

    2004-01-01

    ACADEMIC TRAINING LECTURE REGULAR PROGRAMME 7, 8 and 9 June, from 11:00 to 12:00 - Main Auditorium, bldg. 500. Real Time Process Control. T. Riesco / CERN-TS. What exactly is meant by real-time? There are several definitions of real-time, most of them contradictory. Unfortunately the topic is controversial, and there does not seem to be 100% agreement over the terminology. Real-time applications are becoming increasingly important in our daily lives and can be found in diverse environments such as the automatic braking system on an automobile, a lottery ticket system, or robotic environmental samplers on a space station. These lectures will introduce concepts and theory such as timing constraints, task scheduling, periodic server mechanisms, and hard and soft real-time.
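Among the lecture topics, scheduling of periodic tasks under hard timing constraints has a classic closed-form result: Liu and Layland's sufficient utilization bound for fixed-priority rate-monotonic scheduling. A small illustration (the task numbers are made up):

```python
# Liu & Layland (1973) sufficient schedulability test for rate-monotonic
# scheduling of n periodic tasks: sum(C_i / T_i) <= n * (2^(1/n) - 1).
def rm_schedulable(tasks):
    """tasks: list of (C, T) pairs, C = worst-case exec time, T = period."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return u, bound, u <= bound

# Illustrative task set
u, bound, ok = rm_schedulable([(1, 4), (2, 6), (1, 10)])
print(f"U = {u:.3f}, bound = {bound:.3f}, schedulable: {ok}")
# → U = 0.683, bound = 0.780, schedulable: True
```

The test is sufficient but not necessary: a task set exceeding the bound may still be schedulable and then needs an exact response-time analysis.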

  6. Analysis of several digital network technologies for hard real-time communications in nuclear plant

    International Nuclear Information System (INIS)

    Song, Ki Sang; No, Hee Chun

    1999-01-01

    Applying digital network technology to an advanced nuclear plant requires deterministic communication to satisfy tight safety requirements, and timely and reliable data delivery for the operation-critical and mission-critical characteristics of a nuclear plant. Communication protocols with deterministic communication capability, such as IEEE 802.4 Token Bus, IEEE 802.5 Token Ring, FDDI, and ARCnet, have been partially applied to several nuclear power plants. Although digital communication technologies have many advantages, it is necessary to consider noise immunity from electromagnetic interference (EMI), electrical interference, impulse noise, and thermal noise before selecting a specific digital network technology for a nuclear plant. In this paper, we consider the token frame loss and data frame loss rates due to link error events, frame size, and link data rate in different protocols, and evaluate the possibility of failure to meet the hard real-time requirements in a nuclear plant. (author). 11 refs., 3 figs., 4 tabs
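The frame-loss quantities studied in the abstract depend on frame size and link error rate. Under the simplest channel model of independent bit errors (an assumption for illustration, not necessarily the paper's model), a frame of L bits is lost whenever any bit is corrupted:

```python
# Independent bit-error model: P_loss = 1 - (1 - BER)^L for an L-bit frame.
# Frame sizes are illustrative (1500-byte data frame vs. a short token frame).
def frame_loss_prob(ber, frame_bits):
    return 1.0 - (1.0 - ber) ** frame_bits

for ber in (1e-9, 1e-7, 1e-5):
    print(f"BER={ber:.0e}: data={frame_loss_prob(ber, 12000):.2e}, "
          f"token={frame_loss_prob(ber, 24):.2e}")
```

This makes the abstract's point concrete: at the same bit error rate, long data frames are lost orders of magnitude more often than short token frames, which matters for token-recovery timing in hard real-time operation.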

  7. A Run-Time Verification Framework for Smart Grid Applications Implemented on Simulation Frameworks

    Energy Technology Data Exchange (ETDEWEB)

    Ciraci, Selim; Sozer, Hasan; Tekinerdogan, Bedir

    2013-05-18

    Smart grid applications are implemented and tested with simulation frameworks, as the developers usually do not have access to large sensor networks to be used as a test bed. The developers are forced to map the implementation onto these frameworks, which results in a deviation between the architecture and the code. In turn, this deviation makes it hard to verify behavioral constraints that are described at the architectural level. We have developed the ConArch toolset to support the automated verification of architecture-level behavioral constraints. A key feature of ConArch is the programmable mapping of the architecture to the implementation. Here, developers implement queries to identify the points in the target program that correspond to architectural interactions. ConArch generates run-time observers that monitor the flow of execution between these points and verifies whether this flow conforms to the behavioral constraints. We illustrate how the programmable mappings can be exploited for verifying behavioral constraints of a smart grid application that is implemented with two simulation frameworks.

  8. Long time scale hard X-ray variability in Seyfert 1 galaxies

    Science.gov (United States)

    Markowitz, Alex Gary

    This dissertation examines the relationship between long-term X-ray variability characteristics, black hole mass, and luminosity of Seyfert 1 Active Galactic Nuclei. High dynamic range power spectral density functions (PSDs) have been constructed for six Seyfert 1 galaxies. These PSDs show "breaks" or characteristic time scales, typically on the order of a few days. There is resemblance to PSDs of lower-mass Galactic X-ray binaries (XRBs), with the ratios of putative black hole masses and variability time scales approximately the same (10^6 to 10^7) between the two classes of objects. The data are consistent with a linear correlation between Seyfert PSD break time scale and black hole mass estimate; the relation extrapolates reasonably well over 6-7 orders of magnitude to XRBs. All of this strengthens the case for a physical similarity between Seyfert galaxies and XRBs. The first six years of RXTE monitoring of Seyfert 1s have been systematically analyzed to probe hard X-ray variability on multiple time scales in a total of 19 Seyfert 1s in an expansion of the survey of Markowitz & Edelson (2001). Correlations between variability amplitude, luminosity, and black hole mass are explored; the data support the model of PSD movement with black hole mass suggested by the PSD survey. All of the continuum variability results are consistent with relatively more massive black holes hosting larger X-ray emission regions, resulting in 'slower' observed variability. Nearly all sources in the sample exhibit stronger variability towards softer energies, consistent with softening as they brighten. Direct time-resolved spectral fitting has been performed on continuous RXTE monitoring of seven Seyfert 1s to study long-term spectral variability and Fe Kalpha variability characteristics. The Fe Kalpha line displays a wide range of behavior but varies less strongly than the broadband continuum. Overall, however, there is no strong evidence for correlated variability between the line and

  9. The active blind spot camera: hard real-time recognition of moving objects from a moving camera

    OpenAIRE

    Van Beeck, Kristof; Goedemé, Toon; Tuytelaars, Tinne

    2014-01-01

    This PhD research focuses on visual object recognition under specific demanding conditions. The object to be recognized as well as the camera move, and the time available for the recognition task is extremely short. This generic problem is applied here to a specific problem: the active blind spot camera. Statistics show that a large number of accidents involving trucks are related to the so-called blind spot, the area around the vehicle in which vulnerable road users are hard to perceive by the truck d...

  10. Constraint-based Attribute and Interval Planning

    Science.gov (United States)

    Jonsson, Ari; Frank, Jeremy

    2013-01-01

    In this paper we describe Constraint-based Attribute and Interval Planning (CAIP), a paradigm for representing and reasoning about plans. The paradigm enables the description of planning domains with time, resources, concurrent activities, mutual exclusions among sets of activities, disjunctive preconditions and conditional effects. We provide a theoretical foundation for the paradigm, based on temporal intervals and attributes. We then show how the plans are naturally expressed by networks of constraints, and show that the process of planning maps directly to dynamic constraint reasoning. In addition, we define compatibilities, a compact mechanism for describing planning domains. We describe how this framework can incorporate the use of constraint reasoning technology to improve planning. Finally, we describe EUROPA, an implementation of the CAIP framework.
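A concrete primitive behind interval-based constraint planning of this kind is the Simple Temporal Network (STN): binary difference constraints over time points, whose consistency reduces to detecting negative cycles in the distance graph. A self-contained sketch (the activity durations are invented; CAIP/EUROPA are far richer than this):

```python
import math

# A Simple Temporal Network: edge (i, j, ub) encodes t_j - t_i <= ub.
# Consistency = no negative cycle in the distance graph (Floyd-Warshall).
def stn_consistent(n, edges):
    d = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for i, j, ub in edges:
        d[i][j] = min(d[i][j], ub)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return all(d[i][i] >= 0 for i in range(n))

# Illustrative: activity A (t0 -> t1) takes 5..10, B (t1 -> t2) takes 3..4,
# and the whole plan (t0 -> t2) must fit in 12 time units.
edges = [(0, 1, 10), (1, 0, -5),   # 5 <= t1 - t0 <= 10
         (1, 2, 4),  (2, 1, -3),   # 3 <= t2 - t1 <= 4
         (0, 2, 12), (2, 0, 0)]    # 0 <= t2 - t0 <= 12
print(stn_consistent(3, edges))
# → True  (e.g. t0=0, t1=5, t2=8 satisfies everything)
```

Tightening the plan deadline to 6 units makes the network inconsistent, which a planner would detect as a negative cycle before committing to the ordering.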

  11. Evaluating scintillator performance in time-resolved hard X-ray studies at synchrotron light sources

    International Nuclear Information System (INIS)

    Rutherford, Michael E.; Chapman, David J.; White, Thomas G.; Drakopoulos, Michael; Rack, Alexander; Eakins, Daniel E.

    2016-01-01

    Scintillator performance in time-resolved, hard, indirect detection X-ray studies on the sub-microsecond timescale at synchrotron light sources is reviewed, modelled and examined experimentally. LYSO:Ce is found to be the only commercially available crystal suitable for these experiments. The short pulse duration, small effective source size and high flux of synchrotron radiation are ideally suited for probing a wide range of transient deformation processes in materials under extreme conditions. In this paper, the challenges of high-resolution time-resolved indirect X-ray detection are reviewed in the context of dynamic synchrotron experiments. In particular, the discussion is targeted at two-dimensional integrating detector methods, such as those focused on dynamic radiography and diffraction experiments. The response of a scintillator to periodic synchrotron X-ray excitation is modelled and validated against experimental data collected at the Diamond Light Source (DLS) and European Synchrotron Radiation Facility (ESRF). An upper bound on the dynamic range accessible in a time-resolved experiment for a given bunch separation is calculated for a range of scintillators. New bunch structures are suggested for DLS and ESRF using the highest-performing commercially available crystal LYSO:Ce, allowing time-resolved experiments with an interframe time of 189 ns and a maximum dynamic range of 98 (6.6 bits).
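The quoted dynamic range can be roughly reproduced with a single-exponential afterglow model (a simplification of the paper's modelling, assuming a nominal LYSO:Ce decay constant of about 41 ns): light from the previous bunch decays to exp(-Δt/τ) of its peak after the bunch separation Δt, so the usable dynamic range of the next frame is bounded by exp(Δt/τ).

```python
import math

# Single-exponential afterglow model (an assumption; the paper's scintillator
# response model is more detailed): residual light from the previous bunch
# floors the next frame, bounding its dynamic range by exp(dt / tau).
tau = 41e-9     # LYSO:Ce decay constant, ~41 ns (nominal value)
dt = 189e-9     # interframe time quoted in the abstract

dr = math.exp(dt / tau)
print(f"dynamic range ~ {dr:.0f} ({math.log2(dr):.1f} bits)")
```

This lands near the quoted 98 (6.6 bits); the exact figure depends on the detailed decay kinetics and the assumed decay constant.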

  12. Evaluating scintillator performance in time-resolved hard X-ray studies at synchrotron light sources

    Energy Technology Data Exchange (ETDEWEB)

    Rutherford, Michael E.; Chapman, David J.; White, Thomas G. [Imperial College London, London (United Kingdom); Drakopoulos, Michael [Diamond Light Source, I12 Joint Engineering, Environmental, Processing (JEEP) Beamline, Didcot, Oxfordshire (United Kingdom); Rack, Alexander [European Synchrotron Radiation Facility, Grenoble (France); Eakins, Daniel E., E-mail: d.eakins@imperial.ac.uk [Imperial College London, London (United Kingdom)

    2016-03-24

    Scintillator performance in time-resolved, hard, indirect detection X-ray studies on the sub-microsecond timescale at synchrotron light sources is reviewed, modelled and examined experimentally. LYSO:Ce is found to be the only commercially available crystal suitable for these experiments. The short pulse duration, small effective source size and high flux of synchrotron radiation are ideally suited for probing a wide range of transient deformation processes in materials under extreme conditions. In this paper, the challenges of high-resolution time-resolved indirect X-ray detection are reviewed in the context of dynamic synchrotron experiments. In particular, the discussion is targeted at two-dimensional integrating detector methods, such as those focused on dynamic radiography and diffraction experiments. The response of a scintillator to periodic synchrotron X-ray excitation is modelled and validated against experimental data collected at the Diamond Light Source (DLS) and European Synchrotron Radiation Facility (ESRF). An upper bound on the dynamic range accessible in a time-resolved experiment for a given bunch separation is calculated for a range of scintillators. New bunch structures are suggested for DLS and ESRF using the highest-performing commercially available crystal LYSO:Ce, allowing time-resolved experiments with an interframe time of 189 ns and a maximum dynamic range of 98 (6.6 bits).

  13. Memory State Feedback RMPC for Multiple Time-Delayed Uncertain Linear Systems with Input Constraints

    Directory of Open Access Journals (Sweden)

    Wei-Wei Qin

    2014-01-01

    Full Text Available This paper focuses on the problem of asymptotic stabilization for a class of discrete-time multiple time-delayed uncertain linear systems with input constraints. Based on the predictive control principle of receding horizon optimization, a delayed-state-dependent quadratic function is considered for the MPC problem formulation. By developing a memory state feedback controller, the information of the delayed plant states can be taken fully into consideration. The MPC problem is formulated to minimize the upper bound of the infinite horizon cost that satisfies the sufficient conditions. Then, based on a Lyapunov-Krasovskii function, a delay-dependent sufficient condition in terms of a linear matrix inequality (LMI) can be derived to design a robust MPC algorithm. Finally, digital simulation results demonstrate the effectiveness of the proposed method.

  14. New Results on Robust Model Predictive Control for Time-Delay Systems with Input Constraints

    Directory of Open Access Journals (Sweden)

    Qing Lu

    2014-01-01

    Full Text Available This paper investigates the problem of model predictive control for a class of nonlinear systems subject to state delays and input constraints. The time-varying delay is considered with both upper and lower bounds. A new model is proposed to approximate the delay, and the uncertainty is of polytopic type. For the state-feedback MPC design objective, we formulate an optimization problem. Under model transformation, a new model predictive controller is designed such that the robust asymptotic stability of the closed-loop system can be guaranteed. Finally, the applicability of the presented results is demonstrated by a practical example.

  15. Effect of thermal transients on the hardness of Zircaloy fuel cladding

    International Nuclear Information System (INIS)

    Hobson, D.O.

    1976-06-01

    This study is directed toward the determination of the effects of annealing cycles with rapid heating rates, short hold times at specific temperatures, and rapid cool-down rates on the hardness of Zircaloy fuel cladding. These rapid annealing cycles are designed to provide preliminary annealing behavior data on Loss-of-Fluid-Test Reactor cladding samples. Information has been obtained on (1) the time dependence of the hardness as a function of annealing temperature, and (2) a correlation of single- and multitransient annealing relationships. Both single- and triple-cycle transients were used; four hold times at each of five maximum temperatures comprised the data set (each portion of the triple-cycle experiments had isothermal hold times equal to one-third of their analogous single-cycle times). It was found that there was little difference in the hardness response between single- and triple-cycle transients for a given total hold time at a particular temperature. Test temperatures range from 1000 to 1400°F (538 to 760°C) and hold times from 5 to 135 sec. The 1100°F (593°C) level was found to be the transition level for hardness changes, with shorter times (5 and 15 sec) effecting little or no hardness decrease and the longer times (45 and 135 sec) producing partially and fully annealed material, respectively. Temperatures equal to or greater than 1300°F (704°C) resulted in fully annealed material for all hold times. The 1000°F (538°C) tests produced no measurable softening.

  16. A Time-Dependent Λ and G Cosmological Model Consistent with Cosmological Constraints

    Directory of Open Access Journals (Sweden)

    L. Kantha

    2016-01-01

    Full Text Available The prevailing constant Λ-G cosmological model agrees with observational evidence including the observed red shift, Big Bang Nucleosynthesis (BBN), and the current rate of acceleration. It assumes that matter contributes 27% to the current density of the universe, with the rest (73%) coming from dark energy represented by the Einstein cosmological parameter Λ in the governing Friedmann-Robertson-Walker equations, derived from Einstein's equations of general relativity. However, the principal problem is the extremely small value of the cosmological parameter (~10^-52 m^-2). Moreover, the dark energy density represented by Λ is presumed to have remained unchanged as the universe expanded by 26 orders of magnitude. Attempts to overcome this deficiency often invoke a variable Λ-G model. Cosmic constraints from action principles require that either both G and Λ remain time-invariant or both vary in time. Here, we propose a variable Λ-G cosmological model consistent with the latest red shift data, the current acceleration rate, and BBN, provided the split between matter and dark energy is 18% and 82%. Λ decreases (Λ ~ τ^-2, where τ is the normalized cosmic time) and G increases (G ~ τ^n) with cosmic time. The model results depend only on the chosen value of Λ at present and in the far future, and not directly on G.
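For reference, the flat Friedmann equation such variable Λ-G models start from is textbook; only the scaling laws attached below are the abstract's:

```latex
% Flat FRW Friedmann equation with time-dependent G and Lambda:
\left(\frac{\dot a}{a}\right)^{2}
  = \frac{8\pi G(t)}{3}\,\rho + \frac{\Lambda(t)\,c^{2}}{3},
\qquad
\Lambda(\tau) \propto \tau^{-2}, \qquad G(\tau) \propto \tau^{\,n},
```

where a is the scale factor, ρ the matter density, and τ the cosmic time normalized by the current age of the universe.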

  17. Ensemble Kalman filtering in presence of inequality constraints

    Science.gov (United States)

    van Leeuwen, P. J.

    2009-04-01

    Kalman filtering in the presence of constraints is an active area of research. Based on the Gaussian assumption for the probability-density functions, it appears hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is distributed equally over the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead, it is put into a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes' theorem, leading to posterior probability-density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. In the full Kalman filter the formalism is prohibitively expensive for large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
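In an ensemble Kalman filter, the truncated-Gaussian-plus-delta posterior has a particularly simple sample representation: clip each analysis member to the constraint, so that the truncated mass sits exactly on the bound and P(x = 0) can be nonzero. A toy scalar sketch (all numbers illustrative; a stochastic-EnKF update for a directly observed quantity constrained to [0, 1]):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy scalar EnKF analysis step for a variable constrained to [0, 1]
# (e.g. sea-ice concentration). The forecast ensemble may leave the bounds.
N = 1000
prior = rng.normal(0.1, 0.2, N)        # forecast ensemble
y, r = 0.05, 0.05**2                   # observation and its error variance

# Standard stochastic-EnKF update for a direct scalar observation
pb = prior.var(ddof=1)
k = pb / (pb + r)
analysis = prior + k * (y + rng.normal(0, 0.05, N) - prior)

# Clipping realizes the truncation: members landing outside [0, 1] are
# moved onto the bound, forming the delta component of the posterior.
constrained = np.clip(analysis, 0.0, 1.0)

print("members exactly on the lower bound:", int((constrained == 0.0).sum()))
```

The fraction of members sitting exactly at zero is the sample estimate of the delta mass, i.e. the finite probability that the concentration is zero, which equal-redistribution truncation cannot represent.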

  18. HOROPLAN: computer-assisted nurse scheduling using constraint-based programming.

    Science.gov (United States)

    Darmoni, S J; Fajner, A; Mahé, N; Leforestier, A; Vondracek, M; Stelian, O; Baldenweck, M

    1995-01-01

    Nurse scheduling is a difficult and time-consuming task. The schedule has to determine the day-to-day shift assignments of each nurse for a specified period of time in a way that satisfies the given requirements as much as possible, taking the wishes of the nurses into account as closely as possible. This paper presents a constraint-based, artificial intelligence approach by describing a prototype implementation developed with the Charme language and the first results of its use in the Rouen University Hospital. Horoplan implements non-cyclical constraint-based scheduling, using some heuristics. Four levels of constraints were defined to give a maximum of flexibility: French level (e.g. number of worked hours in a year), hospital level (e.g. specific day off), department level (e.g. specific shift) and care unit level (e.g. specific pattern for weekends). Some constraints must always be verified and cannot be overruled, while others can be overruled at a certain cost. Rescheduling is possible at any time, especially in case of an unscheduled absence.
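    A hypothetical miniature of the hard/soft split described above (the rule set, weights, and one-week 0/1 shift vectors are invented for illustration): hard constraints must hold outright, soft constraints add a cost when violated, and a schedule is scored by its total cost.

```python
def max_hours_ok(schedule, limit=5):
    """Hard (legal-level) rule: no nurse works more than `limit` shifts."""
    return all(sum(shifts) <= limit for shifts in schedule.values())

def weekend_cost(schedule, weight=10):
    """Soft (unit-level) rule: penalize working both weekend shifts
    (indices 5 and 6 in a one-week 0/1 shift vector)."""
    return weight * sum(1 for s in schedule.values() if s[5] and s[6])

def evaluate(schedule):
    if not max_hours_ok(schedule):
        return None               # hard violation: schedule rejected outright
    return weekend_cost(schedule)  # lower is better

week = {
    "Anna":  [1, 1, 1, 0, 0, 1, 1],   # works both weekend days
    "Boris": [1, 1, 0, 1, 1, 0, 0],
}
print(evaluate(week))  # 10: feasible, with one soft violation
```

    A real roster engine would search over many candidate schedules, but the scoring structure is the same: infeasible schedules are discarded, feasible ones are ranked by soft-constraint cost.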

  19. The perceived constraints, motivation, and physical activity levels of ...

    African Journals Online (AJOL)

    The purpose of this research was threefold: Are Korean youth physically active enough during leisure time to promote health? What constraints to physical activity do youth experience during leisure time? Are there relationships among constraints, motivation, and physical activity level? Of 1 280 youth randomly selected by a ...

  20. Interplay of hard and soft physics in small x deep inelastic processes

    International Nuclear Information System (INIS)

    Abramowicz, H.

    1995-01-01

    Coherence phenomena, the increase with energy of coherence length and the nonuniversality of parton structure of the effective pomeron are explained. New hard phenomena directly calculable in QCD such as diffractive electroproduction of states with M 2 2 and the color transparency phenomenon as well as new options to measure the light-cone wave functions of various hadrons are considered. An analogue of Bjorken scaling is predicted for the diffractive electroproduction of ρ mesons at large momentum transfers and for the production of large rapidity gap events, as observed at HERA. A phenomenological QCD evolution equation is suggested to calculate the basic characteristics of the large rapidity gap events. The increase of parton densities at small x as well as new means to disentangle experimentally soft and hard physics are considered. We discuss constraints on the increase of deep inelastic amplitudes with Q 2 derived from the inconsistency of QCD predictions for inclusive and exclusive processes and from unitarity of the S matrix for collisions of wave packets. New ways to probe QCD physics of hard processes at large longitudinal distances and to answer the long standing problems on the origin of the pomeron are suggested. Unresolved problems and perspectives of small x physics are also outlined. (orig.)

  1. Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem

    Science.gov (United States)

    Luo, Yabo; Waden, Yongo P.

    2017-06-01

    Ordinarily, the Job Shop Scheduling Problem (JSSP) is known to be an NP-hard problem whose uncertainty and complexity cannot be handled by a linear method. Current studies on the JSSP therefore concentrate mainly on improving heuristics for optimizing it. However, obstacles to efficient optimization remain, namely low efficiency and poor reliability, which can easily trap the optimization process in local optima. To address this, a study of an Ant Colony Optimization (ACO) algorithm combined with constraint-handling tactics is carried out in this paper. The problem is subdivided into three parts: (1) analysis of processing time tolerance-based constraint features in the JSSP, performed with a constraint-satisfaction model; (2) satisfying the constraints by means of consistency technology and a constraint-spreading algorithm, in order to improve the performance of the ACO algorithm; on this basis the JSSP model based on the improved ACO algorithm is constructed; (3) demonstrating the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments on benchmark problems. The results obtained by the proposed method are better, and the technique can be applied in optimizing the JSSP.
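    The pheromone-reinforcement principle behind ACO can be illustrated on a toy route-choice problem (this minimal sketch is not the paper's improved JSSP algorithm; the parameter names and values are invented): ants repeatedly pick one of two routes with probability proportional to pheromone, shorter routes receive more deposit per trip, and the colony converges on them.

```python
import random

def aco_two_routes(lengths, n_ants=20, n_iters=50, rho=0.1, seed=0):
    """Toy ACO over exactly two competing routes of given lengths."""
    rng = random.Random(seed)
    tau = [1.0, 1.0]                          # initial pheromone per route
    for _ in range(n_iters):
        deposits = [0.0, 0.0]
        for _ in range(n_ants):
            r = rng.uniform(0, tau[0] + tau[1])
            choice = 0 if r < tau[0] else 1   # roulette-wheel selection
            deposits[choice] += 1.0 / lengths[choice]  # shorter => bigger deposit
        # evaporation plus reinforcement
        tau = [(1 - rho) * t + d for t, d in zip(tau, deposits)]
    return tau

tau = aco_two_routes([1.0, 2.0])
print(tau[0] > tau[1])  # True: the shorter route accumulates more pheromone
```

    The JSSP variant replaces the two routes with sequences of machine operations and adds constraint propagation, but the evaporation/deposit loop is the same skeleton.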

  2. Effect of milling time and CNT concentration on hardness of CNT/Al2024 composites produced by mechanical alloying

    International Nuclear Information System (INIS)

    Pérez-Bustamante, R.; Pérez-Bustamante, F.; Estrada-Guel, I.; Licea-Jiménez, L.; Miki-Yoshida, M.; Martínez-Sánchez, R.

    2013-01-01

    Carbon nanotube/2024 aluminum alloy (CNT/Al2024) composites were fabricated by a combination of mechanical alloying (MA) and powder metallurgy routes. Composites were microstructurally and mechanically evaluated in the as-sintered condition. A homogeneous dispersion of CNTs in the Al matrix was observed by field emission scanning electron microscopy. High-resolution transmission electron microscopy confirmed not only the presence of well-dispersed CNTs but also needle-like aluminum carbide (Al4C3) crystals in the Al matrix. The formation of Al4C3 was attributed to the interaction between the outer shells of the CNTs and the Al matrix during the MA process, with crystallization taking place during the subsequent sintering. The mechanical behavior of the composites was evaluated by Vickers microhardness measurements, which indicated a significant improvement in hardness as a function of CNT content. This improvement was associated with the homogeneous dispersion of CNTs and the presence of Al4C3 in the aluminum alloy matrix. - Highlights: ► The 2024 aluminum alloy was reinforced with CNTs by a mechanical alloying process. ► Composites were microstructurally and mechanically evaluated in the as-sintered condition. ► The greater the CNT concentration, the greater the hardness of the composites. ► The highest hardness in the composites is achieved at 20 h of milling. ► The formation of Al4C3 does not show a direct relationship with the milling time.

  3. Prepulse dependence in hard x-ray generation from microdroplets

    International Nuclear Information System (INIS)

    Anand, M.; Kahaly, S.; Kumar, G. Ravindra; Sandhu, A. S.; Gibbon, P.; Krishnamurthy, M.

    2006-01-01

    We report on experiments which show that liquid microdroplets are very efficient in hard x-ray generation. We make a comparative study of hard x-ray emission from 15 μm methanol microdroplets and a plain slab target of similar atomic composition at similar laser intensities. The hard x-ray yield from droplet plasmas is about 35 times higher than that obtained from solid plasmas. A prepulse arriving about 10 ns before the main pulse, with at least 2% of its intensity, is essential for hard x-ray generation from the droplets at about 10¹⁵ W cm⁻². A hot electron temperature of 36 keV is measured from the droplets at 8 × 10¹⁴ W cm⁻²; three times higher intensity is needed to obtain a similar hot electron temperature from solid plasmas of similar atomic composition. We use 1D PIC simulation to obtain qualitative agreement with the experimental observations

  4. Modifier constraint in alkali borophosphate glasses using topological constraint theory

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiang [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China); Zeng, Huidan, E-mail: hdzeng@ecust.edu.cn [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China); Jiang, Qi [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China); Zhao, Donghui [Unifrax Corporation, Niagara Falls, NY 14305 (United States); Chen, Guorong [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China); Wang, Zhaofeng; Sun, Luyi [Department of Chemical & Biomolecular Engineering and Polymer Program, Institute of Materials Science, University of Connecticut, Storrs, CT 06269 (United States); Chen, Jianding [Key Laboratory for Ultrafine Materials of Ministry of Education, School of Materials Science and Engineering, East China University of Science and Technology, Shanghai 200237 (China)

    2016-12-01

    In recent years, composition-dependent properties of glasses have been successfully predicted using topological constraint theory. The constraints of the glass network derive from two main parts: network formers and network modifiers. The constraints of the network formers can be calculated on the basis of the topological structure of the glass. The latter, however, cannot be accurately calculated in this way, because of the existence of ionic bonds. In this paper, the constraints of the modifier ions in phosphate glasses were thoroughly investigated using topological constraint theory. The results show that the constraints of the modifier ions increase gradually with the addition of alkali oxides. Furthermore, an improved topological constraint theory for borophosphate glasses is proposed by taking the composition-dependent constraints of the network modifiers into consideration. The proposed theory is then evaluated by analyzing the composition dependence of the glass transition temperature in alkali borophosphate glasses. This method is expected to extend to other similar glass systems containing alkali ions.
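    The constraint counting that underlies the theory can be illustrated with the classical mean-field count for covalent networks (a textbook Phillips–Thorpe sketch, not the paper's modifier-ion extension): an atom of coordination r contributes r/2 bond-stretching and 2r − 3 bond-bending constraints, and the network turns rigid when the count reaches the 3 translational degrees of freedom per atom.

```python
def constraints_per_atom(r):
    """Phillips-Thorpe mean-field count for coordination number r:
    r/2 radial (bond-stretching) plus 2r - 3 angular (bond-bending)
    constraints per atom."""
    return r / 2 + (2 * r - 3)

# Rigidity threshold: constraints = 3 degrees of freedom per atom.
# Solving r/2 + 2r - 3 = 3 gives the classic mean coordination <r> = 2.4.
print(constraints_per_atom(2.4))  # 3.0: exactly isostatic
print(constraints_per_atom(4.0))  # over-constrained (stressed rigid)
```

    The paper's contribution is, in effect, to make the modifier-ion term of such a count composition-dependent instead of fixed.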

  5. Bond-orientational analysis of hard-disk and hard-sphere structures.

    Science.gov (United States)

    Senthil Kumar, V; Kumaran, V

    2006-05-28

    We report the bond-orientational analysis results for the thermodynamic, random, and homogeneously sheared inelastic structures of hard-disks and hard-spheres. The thermodynamic structures show a sharp rise in the order across the freezing transition. The random structures show the absence of crystallization. The homogeneously sheared structures get ordered at a packing fraction higher than the thermodynamic freezing packing fraction, due to the suppression of crystal nucleation. On shear ordering, strings of close-packed hard-disks in two dimensions and close-packed layers of hard-spheres in three dimensions, oriented along the velocity direction, slide past each other. Such a flow creates a considerable amount of fourfold order in two dimensions and body-centered-tetragonal (bct) structure in three dimensions. These transitions are the flow analogs of the martensitic transformations occurring in metals due to the stresses induced by a rapid quench. In hard-disk structures, using the bond-orientational analysis we show the presence of fourfold order. In sheared inelastic hard-sphere structures, even though the global bond-orientational analysis shows that the system is highly ordered, a third-order rotational invariant analysis shows that only about 40% of the spheres have face-centered-cubic (fcc) order, even in the dense and near-elastic limits, clearly indicating the coexistence of multiple crystalline orders. When layers of close-packed spheres slide past each other, in addition to the bct structure, the hexagonal-close-packed (hcp) structure is formed due to the random stacking faults. Using the Honeycutt-Andersen pair analysis and an analysis based on the 14-faceted polyhedra having six quadrilateral and eight hexagonal faces, we show the presence of bct and hcp signatures in shear ordered inelastic hard-spheres. Thus, our analysis shows that the dense sheared inelastic hard-spheres have a mixture of fcc, bct, and hcp structures.
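    The bond-orientational measures used in this kind of analysis have a compact form; below is a minimal sketch of the n-fold order parameter ψn on idealized neighbor cages (an illustration, not a full structure analysis). It also shows why fourfold order needs ψ4: ψ6 is blind to it.

```python
import cmath
import math

def psi_n(angles, n):
    """Local n-fold bond-orientational order parameter:
    |<exp(i*n*theta)>| over the bond angles from a particle to its
    neighbors. 1.0 for perfect n-fold order, ~0 for incompatible or
    random arrangements."""
    s = sum(cmath.exp(1j * n * a) for a in angles) / len(angles)
    return abs(s)

hex_cage = [k * math.pi / 3 for k in range(6)]     # neighbors every 60 deg
square_cage = [k * math.pi / 2 for k in range(4)]  # neighbors every 90 deg

print(round(psi_n(hex_cage, 6), 3))     # 1.0: perfect sixfold order
print(round(psi_n(square_cage, 6), 3))  # 0.0: psi6 misses square order
print(round(psi_n(square_cage, 4), 3))  # 1.0: psi4 detects it
```

    Third-order rotational invariants, as used in the abstract to separate fcc from other local orders, are built from the same spherical-harmonic averages in three dimensions.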

  6. The difference nanocomposite hardness level using LED photoactivation based on curing period variations

    Directory of Open Access Journals (Sweden)

    Hasiana Tatian

    2011-03-01

    Full Text Available Polymerization is the critical stage that determines the quality of composite resin: isolated monomer carbon double bonds are converted to an extended network of single bonds. The physical and mechanical properties of composites are influenced by the degree of conversion attained during polymerization, so adequate light intensity and curing time are important for an adequate degree of polymerization. The objective of this study is to evaluate the difference in hardness of nanocomposites activated by an LED LCU under different curing times. This study is a true experimental research. The samples were made from nanocomposite material in cylindrical form, 4 mm deep and 6 mm in diameter, and were divided into three curing-time groups. Group I was cured for 20 s as a control, following the manufacturer's recommendation; Group II was cured for 30 s; and Group III was cured for 40 s. Hardness was determined with a rebound hardness tester on the rebound scale (RS) and converted to the Mohs scale (MS). ANOVA showed a highly significant difference in hardness among the groups. The study concludes that nanocomposite hardness differs under curing times of 20, 30 and 40 s: the longer the curing time, the higher the hardness.

  7. Photon technology. Hard photon technology; Photon technology. Hard photon gijutsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-03-01

    Research results on hard photon technology are summarized as part of novel technology development that highly utilizes the quantum nature of the photon. Hard photon technology refers to photon-beam technologies using photons in the 0.1 to 200 nm wavelength region. Hard photons have not been used in industry owing to the lack of suitable photon sources and optical devices. However, hard photons in this wavelength region are expected to bring about innovations in areas such as ultrafine processing and material synthesis, owing to their atom-selective reactions, inner-shell excitation reactions, and high spatial resolution. Technological themes and possibilities have therefore been surveyed. Although there are proposals in principle, and verifications of them, for the individual technologies of hard photon generation, regulation and utilization, they are still far from practical application. For photon source technology, laser-diode-pumped driver laser technology, laser plasma photon source technology, synchrotron radiation photon source technology, and vacuum ultraviolet photon source technology are presented. For optical device technology, multi-layer film technology for beam mirrors and non-spherical lens processing technology are introduced. Also described are reduction lithography technology, hard photon excitation processes, and methods of analysis and measurement. 430 refs., 165 figs., 23 tabs.

  8. Alternative Constraint Handling Technique for Four-Bar Linkage Path Generation

    Science.gov (United States)

    Sleesongsom, S.; Bureerat, S.

    2018-03-01

    This paper extends the concept of path generation from our previous work by adding a new constraint handling technique. The proposed technique was initially designed for problems without prescribed timing, avoiding the timing constraint, while the remaining constraints are handled with a new constraint handling technique, a kind of penalty technique. In a comparative study, path generation optimisation problems are solved using self-adaptive population size teaching-learning-based optimization (SAP-TLBO) and the original TLBO. Two traditional path generation test problems are used to test the proposed technique. The results show that the new technique can be applied to path generation problems without prescribed timing and gives better results than the previous technique. Furthermore, SAP-TLBO outperforms the original TLBO.
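    Penalty-style constraint handling, of the general kind named above, can be sketched as follows (the objective, constraint, and weight are invented illustrations, not the paper's formulation): each violated constraint adds a weighted cost to the objective, so an unconstrained optimizer is pushed toward the feasible region.

```python
def penalized(objective, constraints, weight=1e3):
    """Turn a constrained objective into an unconstrained one.
    `constraints` is a list of functions g with the convention
    g(x) <= 0 when satisfied; violations are penalized quadratically."""
    def f(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return objective(x) + weight * violation
    return f

# Minimize (x-3)^2 subject to x <= 2: the optimum moves to the boundary.
f = penalized(lambda x: (x - 3) ** 2, [lambda x: x - 2])
xs = [i / 1000 for i in range(0, 4000)]   # crude grid search for clarity
best = min(xs, key=f)
print(best)  # close to 2.0
```

    In the paper's setting the decision variables are linkage parameters and the optimizer is TLBO rather than a grid search, but the penalty transformation plays the same role.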

  9. A Random Walk Approach to Query Informative Constraints for Clustering.

    Science.gov (United States)

    Abin, Ahmad Ali

    2017-08-09

    This paper presents a random walk approach to the problem of querying informative constraints for clustering. The proposed method is based on the properties of the commute time, that is, the expected time taken for a random walk to travel between two nodes and return, on the adjacency graph of the data. The commute time has the nice property that the more short paths connect two given nodes in a graph, the more similar those nodes are. Since computing the commute time takes the Laplacian eigenspectrum into account, we use this property in a recursive fashion to query informative constraints for clustering. At each recursion, the proposed method constructs the adjacency graph of the data and utilizes the spectral properties of the commute-time matrix to bipartition the adjacency graph. Thereafter, the method uses the commute-time distance on the graph to query informative constraints between partitions. This process iterates for each partition until the stop condition becomes true. Experiments on real-world data show the efficiency of the proposed method for constraint selection.
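    The commute time the abstract relies on can be computed directly from the graph Laplacian via its pseudoinverse, C(i, j) = vol(G) · (L⁺ᵢᵢ + L⁺ⱼⱼ − 2L⁺ᵢⱼ). A small numpy sketch on a toy graph (not the paper's query procedure):

```python
import numpy as np

def commute_times(adj):
    """Expected round-trip time of a random walk between every pair of
    nodes: C(i,j) = vol(G) * (L+_ii + L+_jj - 2 L+_ij), where L+ is the
    Moore-Penrose pseudoinverse of the graph Laplacian L = D - A and
    vol(G) is the sum of degrees."""
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1)
    L = np.diag(deg) - adj
    Lp = np.linalg.pinv(L)
    vol = deg.sum()
    d = np.diag(Lp)
    return vol * (d[:, None] + d[None, :] - 2 * Lp)

# Path graph 0 - 1 - 2: commute time grows with effective resistance.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
C = commute_times(A)
print(C[0, 1])  # 4.0  (vol = 4, effective resistance 1)
print(C[0, 2])  # 8.0  (effective resistance 2)
```

    This is the electrical-network identity commute time = vol(G) × effective resistance, which is why many short parallel paths (low resistance) yield a small commute time and hence high similarity.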

  10. Graphical constraints: a graphical user interface for constraint problems

    OpenAIRE

    Vieira, Nelson Manuel Marques

    2015-01-01

    A constraint satisfaction problem is a classical artificial intelligence paradigm characterized by a set of variables (each variable with an associated domain of possible values), and a set of constraints that specify relations among subsets of these variables. Solutions are assignments of values to all variables that satisfy all the constraints. Many real world problems may be modelled by means of constraints. The range of problems that can use this representation is very diverse and embrace...

  11. Magnetic hyperthermia with hard-magnetic nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Kashevsky, Bronislav E., E-mail: bekas@itmo.by [A.V Luikov Heat and Mass Transfer Institute, Belarus Academy of Sciences, P. Brovka str. 15, Minsk 220072 (Belarus); Kashevsky, Sergey B.; Korenkov, Victor S. [A.V Luikov Heat and Mass Transfer Institute, Belarus Academy of Sciences, P. Brovka str. 15, Minsk 220072 (Belarus); Istomin, Yuri P. [N. N. Alexandrov National Cancer Center of Belarus, Lesnoy-2, Minsk 223040 (Belarus); Terpinskaya, Tatyana I.; Ulashchik, Vladimir S. [Institute of Physiology, Belarus Academy of Sciences, Akademicheskaya str. 28, Minsk 220072 (Belarus)

    2015-04-15

    Recent clinical trials of magnetic hyperthermia have confirmed, and even tightened, the Atkinson-Brezovich restriction on the magnetic field conditions applicable to any site of the human body. Subject to this restriction, which is harshly violated in numerous laboratory and small-animal studies, magnetic hyperthermia can rely only on rather moderate heat sources, so that optimization of the whole hyperthermia system remains, after all, the basic problem predetermining its clinical perspectives. We present a short account of our combined (theoretical, laboratory and small-animal) studies to demonstrate that such perspectives lie with a hyperthermia system based on hard-magnetic (Stoner-Wohlfarth type) nanoparticles and strong low-frequency fields, rather than with superparamagnetic (Brownian or Néel) nanoparticles and weak high-frequency fields. This conclusion is backed by an analytical evaluation of the maximum absorption rates possible under the field restriction in the ideal hard-magnetic (Stoner-Wohlfarth) and the ideal superparamagnetic (single relaxation time) systems, by theoretical and experimental studies of the dynamic magnetic hysteresis in suspensions of movable hard-magnetic particles, by producing nanoparticles with adjusted coercivity and suspensions of such particles capable of effective energy absorption and intratumoral penetration, and finally, by successful treatment of a mouse model tumor under field conditions acceptable for the whole human body. - Highlights: • Hard-magnetic nanoparticles are shown to be superior to superparamagnetic ones for hyperthermia. • Optimal system parameters are found from a magnetic reversal model for a movable particle. • A penetrating suspension of hard-magnetic particles with aggregation-independent SAR is developed. • For the first time, mice with tumors are healed in an AC field acceptable for the human body.

  12. Comprehensive hard materials

    CERN Document Server

    2014-01-01

    Comprehensive Hard Materials deals with the production, uses and properties of the carbides, nitrides and borides of these metals and those of titanium, as well as tools of ceramics, the superhard boron nitrides and diamond and related compounds. Articles include the technologies of powder production (including their precursor materials), milling, granulation, cold and hot compaction, sintering, hot isostatic pressing, hot-pressing, injection moulding, as well as on the coating technologies for refractory metals, hard metals and hard materials. The characterization, testing, quality assurance and applications are also covered. Comprehensive Hard Materials provides meaningful insights on materials at the leading edge of technology. It aids continued research and development of these materials and as such it is a critical information resource to academics and industry professionals facing the technological challenges of the future. Hard materials operate at the leading edge of technology, and continued res...

  13. "Work smart, wear your hard hat"

    CERN Multimedia

    2003-01-01

    Falling objects and collisions are frequent occurrences in work sites and hazardous areas. Hard hats can help prevent many types of accident and can even save lives. Just imagine an 800 g spanner falling from a 13 m high scaffold onto the head of someone standing below - a nightmare scenario! The impact to the head is equivalent to that of a 5 kg weight falling from 2 metres. That is just what happened to Gerd Fetchenhauer when he was working on the UA1 experiment. Fortunately, he was wearing a hard hat at the time. "That hat saved my life," he explains. "It punched a hole right through the hat and I was a bit dazed for a couple of hours but otherwise I was OK." Since that day, Gerd Fetchenhauer, now working on CMS, is never seen on a work site without his hard hat on. Work sites have proliferated at CERN with the construction of the LHC and its detectors, and the wearing of hard hats is compulsory (not to mention life-saving). In the underground caverns and experiment halls, where gantry cranes and other h...

  14. Solving Constraint Satisfaction Problems with Networks of Spiking Neurons.

    Science.gov (United States)

    Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang

    2016-01-01

    Networks of neurons in the brain apply, unlike processors in our current generation of computer hardware, an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the network from simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, for the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.

  15. Solving University Scheduling Problem Using Hybrid Approach

    Directory of Open Access Journals (Sweden)

    Aftab Ahmed Shaikh

    2011-10-01

    Full Text Available In universities, scheduling curriculum activity is an essential job. Primarily, scheduling is a distribution of limited resources under interrelated constraints. The set of hard constraints demands the highest priority and is not to be violated at any cost, while maximum satisfaction of the soft constraints raises the quality of the solution. In this research paper, a novel two-phase approach is introduced, comprising a GA (Genetic Algorithm) as well as backtracking recursive search. The employed technique deals with hard and soft constraints successively. The first phase focuses on eliminating all violations of hard constraints and eventually produces a partial solution for the subsequent step. The second phase then searches for the best possible solution in the search space. Promising results are obtained by implementation on a real dataset. The key points of the approach are ensuring the removal of hard-constraint violations from the dataset and minimizing the GA's computational time by initializing it with a pre-processed set of chromosomes.
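    The two-phase idea can be miniaturized as follows (a toy timetable with invented courses and rules; exhaustive enumeration stands in for the backtracking search of phase 1, and a simple minimum over feasible schedules stands in for the GA of phase 2):

```python
from itertools import product

courses = ["math", "physics", "lab"]
teacher = {"math": "T1", "physics": "T1", "lab": "T2"}
slots = [0, 1, 2]  # 0 = morning (preferred)

def feasible(assign):
    """Hard constraint: one teacher cannot teach two courses at once."""
    seen = set()
    for c, s in assign.items():
        key = (teacher[c], s)
        if key in seen:
            return False
        seen.add(key)
    return True

def soft_cost(assign):
    """Soft constraint: later slots cost more."""
    return sum(assign.values())

def phase1():
    """Phase 1: yield only assignments with zero hard violations."""
    for combo in product(slots, repeat=len(courses)):
        assign = dict(zip(courses, combo))
        if feasible(assign):
            yield assign

# Phase 2: pick the feasible assignment with the lowest soft cost.
best = min(phase1(), key=soft_cost)
print(best)  # math and physics in different slots, lab in the morning
```

    Feeding only feasible assignments into the second phase mirrors the paper's point: the GA never wastes time on hard-constraint violations and its initial population is already pre-processed.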

  16. Using Accretionary Hard Parts to Study Changes in Seasonality over Geologic Time

    Science.gov (United States)

    Ivany, L. C.; Judd, E. J.

    2017-12-01

    Seasonality has been an enigma for deep-time research. Proxies for mean annual temperature (MAT) are the mainstay of paleoclimate studies, and while these are tremendously informative, seasonal extremes are the variables that matter most for many paleoclimatic, paleoceanographic, and physiologic processes. Seasonality has been difficult to constrain in the rock record, however, because of the need for subannual resolution - very few such archives exist. One of the most promising comes in the form of the mineralized hard parts of organisms that grow by accretion, e.g., mollusks, corals, fish otoliths. Such materials carry a chemical signature of temperature at the time of precipitation, allowing for assessment of the seasonal temperature extremes experienced by the organism. Interpretation of these records in the context of climate, however, is complicated by the overprint of biology - organisms don't necessarily grow all year long, resulting in a truncation of the seasonal cycle regardless of sampling resolution. Furthermore, unrecognized differences in depositional environment or taxon ecology among samples can make comparisons over time even more tenuous. Even with internally consistent datasets, assessment of pattern is rarely based on more than visual inspection. An iterative computational procedure predicated on the assumption of sinusoidal variation in temperature and growth rate can circumvent these concerns. Deviations in the shape of oxygen isotope profiles from the predicted sinusoid allow recovery of the mean and amplitude of temperature variation as well as the timing and duration of growth within years. Estimates of such parameters from multiple specimens allow for meaningful comparisons over time, both for seasonality and the growth response of organisms. We apply this approach to datasets of seasonal variation through the Paleogene of the US Gulf Coastal Plain and the Eocene of Antarctica derived largely from marine bivalve mollusks. In the
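    The truncation effect described above (growth stops for part of the year, so the recorded seasonal cycle is clipped) can be illustrated numerically; all values below are invented for illustration.

```python
import math

MAT, AMP, SHUTDOWN = 20.0, 8.0, 16.0   # degC; hypothetical values

def temperature(t):
    """Sinusoidal annual cycle, t in years."""
    return MAT + AMP * math.sin(2 * math.pi * t)

samples = [temperature(i / 1000) for i in range(1000)]   # full year
# The shell records only while the animal grows: here, only above the
# shutdown temperature, so the cold tail of the cycle is never archived.
recorded = [T for T in samples if T > SHUTDOWN]

true_mean = sum(samples) / len(samples)
naive_mean = sum(recorded) / len(recorded)
print(f"true MAT           {true_mean:5.2f}")   # ~20.0
print(f"naive shell 'MAT'  {naive_mean:5.2f}")  # biased warm
print(f"recorded range     {min(recorded):.2f}..{max(recorded):.2f}")
```

    Averaging the clipped record overestimates MAT and the apparent minimum never reaches the true winter extreme, which is why the fitting procedure in the abstract must recover the growth window jointly with the mean and amplitude rather than take the profile at face value.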

  17. Hard processes in hadronic interactions

    International Nuclear Information System (INIS)

    Satz, H.; Wang, X.N.

    1995-01-01

    Quantum chromodynamics is today accepted as the fundamental theory of strong interactions, even though most hadronic collisions lead to final states for which quantitative QCD predictions are still lacking. It therefore seems worthwhile to take stock of where we stand today and to what extent the presently available data on hard processes in hadronic collisions can be accounted for in terms of QCD. This is one reason for this work. The second reason, and in fact its original trigger, is the search for the quark-gluon plasma in high energy nuclear collisions. The hard processes to be considered here are the production of prompt photons, Drell-Yan dileptons, open charm, quarkonium states, and hard jets. For each of these, we discuss the present theoretical understanding, compare the resulting predictions to available data, and then show what behaviour is expected at RHIC and LHC energies. All of these processes have the structure mentioned above: they contain a hard partonic interaction, calculable perturbatively, but also the non-perturbative parton distribution within a hadron. These parton distributions, however, can be studied theoretically in terms of counting rule arguments, and they can be checked independently by measurements of the parton structure functions in deep inelastic lepton-hadron scattering. The present volume is the work of the Hard Probe Collaboration, a group of theorists who are interested in the problem and were willing to dedicate a considerable amount of their time and work to it. The necessary preparation, planning and coordination of the project were carried out in two workshops of two weeks' duration each, in February 1994 at CERN in Geneva and in July 1994 at LBL in Berkeley

  18. Applications of NTNU/SINTEF Drillability Indices in Hard Rock Tunneling

    Science.gov (United States)

    Zare, S.; Bruland, A.

    2013-01-01

    Drillability indices, i.e., the Drilling Rate Index™ (DRI), Bit Wear Index™ (BWI), Cutter Life Index™ (CLI), and Vickers Hardness Number Rock (VHNR), are indirect measures of rock drillability. These indices are recognized as providing practical characterization of rock properties used in the Norwegian University of Science and Technology (NTNU) time and cost prediction models available for hard rock tunneling and surface excavation. The tests form the foundation of various hard rock equipment capacity and performance prediction methods. In this paper, application of the tests for tunnel boring machine (TBM) and drill and blast (D&B) tunneling is investigated and the impact of the indices on excavation time and costs is presented.

  19. Rigid Body Time Integration by Convected Base Vectors with Implicit Constraints

    DEFF Research Database (Denmark)

    Krenk, Steen; Nielsen, Martin Bjerre

    2013-01-01

    of the kinetic energy used in the present formulation is deliberately chosen to correspond to a rigid body rotation, and the orthonormality constraints are introduced via the equivalent Green strain components of the base vectors. The particular form of the extended inertia tensor used here implies a set...

  20. Einstein constraints in the Yang-Mills form

    International Nuclear Information System (INIS)

    Ashtekar, A.

    1987-01-01

    It is pointed out that constraints of Einstein's theory play a powerful role in both classical and quantum theory because they generate motions in spacetime, rather than in an internal space. New variables are then introduced on the Einstein phase space in terms of which constraints simplify considerably. In particular, the use of these variables enables one to imbed the constraint surface of Einstein's theory into that of Yang-Mills. The imbedding suggests new lines of attack to a number of problems in classical and quantum gravity and provides new concepts and tools to investigate the microscopic structure of space-time geometry

  1. Influence of timing of delayed hard palate closure on articulation skills in 3-year-old Danish children with unilateral cleft lip and palate

    DEFF Research Database (Denmark)

    Willadsen, Elisabeth; Boers, Maria; Schöps, Antje

    2017-01-01

Background: Differing results regarding articulation skills in young children with cleft palate (CP) have been reported and often interpreted as a consequence of different surgical protocols. Aims: To assess the influence of different timing of hard palate closure in a two-stage procedure on articulation skills in 3-year-olds born with unilateral cleft lip and palate (UCLP). Secondary aims were to compare results with peers without CP, and to investigate whether there are gender differences in articulation skills. Furthermore, burden of treatment was to be estimated in terms of secondary surgery…

  2. Constraint theory multidimensional mathematical model management

    CERN Document Server

    Friedman, George J

    2017-01-01

    Packed with new material and research, this second edition of George Friedman’s bestselling Constraint Theory remains an invaluable reference for all engineers, mathematicians, and managers concerned with modeling. As in the first edition, this text analyzes the way Constraint Theory employs bipartite graphs and presents the process of locating the “kernel of constraint” trillions of times faster than brute-force approaches, determining model consistency and computational allowability. Unique in its abundance of topological pictures of the material, this book balances left- and right-brain perceptions to provide a thorough explanation of multidimensional mathematical models. Much of the extended material in this new edition also comes from Phan Phan’s PhD dissertation in 2011, titled “Expanding Constraint Theory to Determine Well-Posedness of Large Mathematical Models.” Praise for the first edition: "Dr. George Friedman is indisputably the father of the very powerful methods of constraint theory...

  3. The impact of weight classification on safety: timing steps to adapt to external constraints

    Science.gov (United States)

    Gill, S.V.

    2015-01-01

Objectives: The purpose of the current study was to evaluate how weight classification influences safety by examining adults' ability to meet a timing constraint: walking to the pace of an audio metronome. Methods: With a cross-sectional design, walking parameters were collected as 55 adults with normal (n=30) and overweight (n=25) body mass index scores walked to slow, normal, and fast audio metronome paces. Results: Between-group comparisons showed that at the fast pace, those with overweight body mass index (BMI) had longer double limb support and stance times and slower cadences than the normal weight group (all ps<0.05). Comparisons across metronome paces revealed that participants who were overweight had higher cadences at the slow and fast paces (all ps<0.05). Conclusions: Findings suggest that those with overweight BMI alter their gait to maintain biomechanical stability. Understanding how excess weight influences gait adaptation can inform interventions to improve safety for individuals with obesity. PMID:25730658

  4. Remember Hard but Think Softly: Metaphorical Effects of Hardness/Softness on Cognitive Functions

    Directory of Open Access Journals (Sweden)

    Jiushu Xie

    2016-09-01

Previous studies have found that bodily stimulation, such as hardness, biases social judgment and evaluation via metaphorical association; however, it remains unclear whether bodily stimulation also affects cognitive functions, such as memory and creativity. The current study used the metaphorical associations between hard and rigid and between soft and flexible in Chinese to investigate whether the experience of hardness affected cognitive functions requiring either rigidity (memory) or flexibility (creativity). In Experiment 1, we found that Chinese-speaking participants performed better at recalling previously memorized words while sitting on a hard-surface stool (the hard condition) than on a cushioned one (the soft condition). In Experiment 2, participants sitting on a cushioned stool outperformed those sitting on a hard-surface stool on a Chinese riddle task, which required creative/flexible thinking, but not on an analogical reasoning task, which required both rigid and flexible thinking. The results suggest that the hardness experience affects cognitive functions that are metaphorically associated with rigidity and flexibility. They support the embodiment proposition that cognitive functions and representations can be grounded via metaphorical association in bodily states.

  5. Signature Schemes Secure against Hard-to-Invert Leakage

    DEFF Research Database (Denmark)

    Faust, Sebastian; Hazay, Carmit; Nielsen, Jesper Buus

    2012-01-01

…-theoretically reveal the entire secret key. In this work, we propose the first constructions of digital signature schemes that are secure in the auxiliary input model. Our main contribution is a digital signature scheme that is secure against chosen-message attacks when given an exponentially hard-to-invert function of the secret key. As a second contribution, we construct a signature scheme that achieves security for random messages assuming that the adversary is given a polynomial-time hard-to-invert function. Here, polynomial hardness is required even when given the entire public key, so-called weak auxiliary input security. We show that such signature schemes readily give us auxiliary input secure identification schemes…

  6. Latin hypercube sampling with inequality constraints

    International Nuclear Information System (INIS)

    Iooss, B.; Petelet, M.; Asserin, O.; Loredo, A.

    2010-01-01

In some studies requiring predictive and CPU-time-consuming numerical models, the sampling design of the model input variables has to be chosen with caution. For this purpose, Latin hypercube sampling has a long history and has shown its robustness capabilities. In this paper we propose and discuss a new algorithm to build a Latin hypercube sample (LHS) taking into account inequality constraints between the sampled variables. This technique, called constrained Latin hypercube sampling (cLHS), consists of performing permutations on an initial LHS to honor the desired monotonic constraints. The relevance of this approach is shown on a real example concerning numerical welding simulation, where the inequality constraints are caused by the physical decrease of some material properties as a function of temperature. (authors)
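The permutation idea relies on a simple fact: shuffling the values within one column of an LHS preserves the one-point-per-stratum property, so inequality constraints between columns can be honored by reordering alone. The sketch below is purely illustrative (it builds a feasible column directly by greedy stratum matching rather than repairing an initial LHS, as cLHS does) and enforces the row-wise constraint x0 <= x1:

```python
import numpy as np

rng = np.random.default_rng(7)

def constrained_lhs(n):
    """2-D Latin hypercube sample honoring the row-wise constraint x0 <= x1.

    Column 0 is a standard LHS column; column 1 is filled by a greedy
    stratum matching that gives each row a high-enough stratum, so both
    columns keep the one-point-per-stratum Latin property.
    """
    # column 0: one uniform draw in each of the n strata, in random row order
    x0 = (rng.permutation(n) + rng.random(n)) / n

    x1 = np.empty(n)
    available = set(range(n))              # free column-1 strata
    for i in np.argsort(-x0):              # hardest rows (largest x0) first
        # smallest free stratum whose upper edge still exceeds x0[i]
        k = min(s for s in available if (s + 1) / n > x0[i])
        available.remove(k)
        lo = max(x0[i], k / n)             # stay inside stratum k, at or above x0
        x1[i] = rng.uniform(lo, (k + 1) / n)
    return np.column_stack([x0, x1])

sample = constrained_lhs(100)
```

Processing rows in decreasing order of x0 and always taking the smallest feasible stratum guarantees a free stratum exists at every step, since the x0 values themselves occupy distinct strata.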

  7. New formulation of Horava-Lifshitz quantum gravity as a master constraint theory

    Energy Technology Data Exchange (ETDEWEB)

    Soo, Chopin, E-mail: cpsoo@mail.ncku.edu.tw [Department of Physics, National Cheng Kung University, Tainan 70101, Taiwan (China); Yang Jinsong, E-mail: Yangksong@gmail.com [Department of Physics, National Cheng Kung University, Tainan 70101, Taiwan (China); Yu, Hoi-Lai, E-mail: hlyu@phys.sinica.edu.tw [Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan (China)

    2011-07-04

    Both projectable and non-projectable versions of Horava-Lifshitz gravity face serious challenges. In the non-projectable version, the constraint algebra is seemingly inconsistent. The projectable version lacks a local Hamiltonian constraint, thus allowing for an extra scalar mode which can be problematic. A new formulation of non-projectable Horava-Lifshitz gravity, naturally realized as a representation of the master constraint algebra studied by loop quantum gravity researchers, is presented. This yields a consistent canonical theory with first class constraints. It captures the essence of Horava-Lifshitz gravity in retaining only spatial diffeomorphisms (instead of full space-time covariance) as the physically relevant non-trivial gauge symmetry; at the same time the local Hamiltonian constraint needed to eliminate the extra mode is equivalently enforced by the master constraint.

  8. Variance-Constrained Robust Estimation for Discrete-Time Systems with Communication Constraints

    Directory of Open Access Journals (Sweden)

    Baofeng Wang

    2014-01-01

This paper is concerned with a new filtering problem in networked control systems (NCSs) subject to limited communication capacity, which includes measurement quantization, random transmission delay, and packet loss. The measurements are first quantized via a logarithmic quantizer and then transmitted through a digital communication network with random delay and packet loss. These three communication-constraint phenomena, which can be seen as a class of uncertainties, are formulated as a stochastic parameter uncertainty system. The purpose of the paper is to design a linear filter such that, for all the communication constraints, the error state of the filtering process is mean-square bounded and the steady-state variance of the estimation error for each state is no more than an individually prescribed upper bound. It is shown that the desired filtering problem can effectively be solved if there are positive definite solutions to a couple of algebraic Riccati-like inequalities or linear matrix inequalities. Finally, an illustrative numerical example is presented to demonstrate the effectiveness and flexibility of the proposed design approach.

  9. Reinforcement-Learning-Based Robust Controller Design for Continuous-Time Uncertain Nonlinear Systems Subject to Input Constraints.

    Science.gov (United States)

    Liu, Derong; Yang, Xiong; Wang, Ding; Wei, Qinglai

    2015-07-01

The design of stabilizing controllers for uncertain nonlinear systems with control constraints is a challenging problem. The input constraints, coupled with the inability to identify the uncertainties accurately, motivate the design of stabilizing controllers based on reinforcement-learning (RL) methods. In this paper, a novel RL-based robust adaptive control algorithm is developed for a class of continuous-time uncertain nonlinear systems subject to input constraints. The robust control problem is converted to a constrained optimal control problem by appropriately selecting value functions for the nominal system. Distinct from the typical actor-critic dual networks employed in RL, only one critic neural network (NN) is constructed to derive the approximate optimal control. Meanwhile, unlike the initial stabilizing control often indispensable in RL, no special requirement is imposed on the initial control. By utilizing Lyapunov's direct method, the closed-loop optimal control system and the estimated weights of the critic NN are proved to be uniformly ultimately bounded. In addition, the derived approximate optimal control is verified to guarantee that the uncertain nonlinear system is stable in the sense of uniform ultimate boundedness. Two simulation examples are provided to illustrate the effectiveness and applicability of the present approach.

  10. Development of hard X-ray spectrometer with high time resolution on the J-TEXT tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Ma, T.K.; Chen, Z.Y., E-mail: zychen@hust.edu.cn; Huang, D.W.; Tong, R.H.; Yan, W.; Wang, S.Y.; Dai, A.J.; Wang, X.L.

    2017-06-01

A hard X-ray (HXR) spectrometer has been developed to study runaway electrons during sawtooth activities and during the runaway current plateau phase on the J-TEXT tokamak. The spectrometer system contains four NaI scintillator detectors and a multi-channel analyzer (MCA) with 0.5 ms time resolution. The dedicated peak detection circuit embedded in the MCA provides pulse height analysis at count rates up to 1.2 million counts per second (Mcps), which is the key to reaching the high time resolution. The accuracy and reliability of the system have been verified by comparison with the hardware integrator of the HXR flux. The temporal evolution of the HXR flux in different energy ranges can be obtained with high time resolution by this dedicated HXR spectrometer. The transport response of runaway electrons of different energies during sawtooth activities can be studied, and the energy evolution of runaway electrons during the plateau phase of the runaway current can be obtained. - Highlights: • A HXR spectrometer with high time resolution has been developed on J-TEXT tokamak. • The response of REs transport during the sawtooth activities can be investigated. • The energy evolution of REs following the disruptions can be monitored.
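The spectrometer's end product is essentially a 2-D histogram: each detected pulse is converted to an energy via a calibration gain and dropped into an (energy range) x (0.5 ms time slice) bin. A hedged sketch of that binning follows; the gain, energy edges, and event list are invented for illustration, not values from the paper:

```python
import numpy as np

def time_resolved_spectrum(event_times, pulse_heights, gain,
                           duration, t_bin=0.5e-3,
                           energy_edges=(0.1, 0.5, 1.0, 5.0)):
    """Bin detector events into (energy range) x (0.5 ms time slice) counts.

    `gain` (MeV per pulse-height unit) and `energy_edges` (MeV) are
    illustrative calibration assumptions; times are in seconds.
    """
    energies = np.asarray(pulse_heights) * gain
    t_edges = np.arange(0.0, duration + t_bin / 2, t_bin)
    counts, _, _ = np.histogram2d(energies, np.asarray(event_times),
                                  bins=[np.asarray(energy_edges), t_edges])
    return counts  # shape: (n_energy_ranges, n_time_slices)

# three synthetic pulses over a 1 ms window
counts = time_resolved_spectrum([0.0001, 0.0002, 0.0007], [1.0, 2.0, 8.0],
                                gain=0.5, duration=1e-3)
```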

  11. Making a good group decision (low risk) in Singapore under an environment that has time and cost constraints

    OpenAIRE

    Loo, Sok Hiang Candy

    2014-01-01

Approved for public release; distribution is unlimited. Organizations in Singapore operate in a highly competitive and fast-paced work environment that presents decision-making challenges at the individual, group, and organization levels. A key problem is achieving good decision fitness within time and cost constraints. While many decision-making theories and processes address the fundamental decision-making process, there is limited research on improving the group decision-making framework…

  12. From physical dose constraints to equivalent uniform dose constraints in inverse radiotherapy planning

    International Nuclear Information System (INIS)

    Thieke, Christian; Bortfeld, Thomas; Niemierko, Andrzej; Nill, Simeon

    2003-01-01

Optimization algorithms in inverse radiotherapy planning need information about the desired dose distribution. Usually the planner defines physical dose constraints for each structure of the treatment plan, either in the form of minimum and maximum doses or as dose-volume constraints. The concept of equivalent uniform dose (EUD) was designed to describe dose distributions with a higher clinical relevance. In this paper, we present a method to consider the EUD as an optimization constraint by using the method of projections onto convex sets (POCS). In each iteration of the optimization loop, for the actual dose distribution of an organ that violates an EUD constraint, a new dose distribution is calculated that satisfies the EUD constraint, leading to voxel-based physical dose constraints. The new dose distribution is found by projecting the current one onto the convex set of all dose distributions fulfilling the EUD constraint. The algorithm is easy to integrate into existing inverse planning systems, and it allows the planner to choose between physical and EUD constraints separately for each structure. A clinical case of a head and neck tumor is optimized using three different sets of constraints: physical constraints for all structures, physical constraints for the target and EUD constraints for the organs at risk, and EUD constraints for all structures. The results show that the POCS method converges stably and that given EUD constraints are closely met.
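For intuition, the generalized EUD of a voxel dose vector is gEUD = (mean_i d_i^a)^(1/a), which is homogeneous of degree 1 in the dose. A minimal sketch of enforcing an upper EUD bound by uniform rescaling follows; the voxel doses, the parameter a, and the bound are invented for illustration, and the paper's POCS step is a true projection onto the constraint set rather than this simple scaling:

```python
import numpy as np

def geud(dose, a):
    """Generalized EUD: (mean(d_i^a))^(1/a) over voxel doses."""
    dose = np.asarray(dose, dtype=float)
    return float(np.mean(dose ** a) ** (1.0 / a))

def enforce_eud(dose, eud_max, a):
    """Map a dose distribution onto the set {d : gEUD(d) <= eud_max}.

    gEUD is homogeneous of degree 1 in dose, so uniform down-scaling is
    the simplest such map (the constraint set is convex for a >= 1).
    """
    dose = np.asarray(dose, dtype=float)
    current = geud(dose, a)
    return dose if current <= eud_max else dose * (eud_max / current)

# illustrative organ-at-risk voxel doses (Gy) and constraint, not from the paper
organ = np.array([10.0, 30.0, 55.0, 60.0])
capped = enforce_eud(organ, eud_max=40.0, a=8.0)
```

A large a makes the gEUD approach the maximum dose, which is why serial organs at risk are modeled with high a values.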

  13. Providing reliable energy in a time of constraints : a North American concern

    International Nuclear Information System (INIS)

    Egan, T.; Turk, E.

    2008-04-01

The reliability of the North American electricity grid was discussed. Government initiatives designed to control carbon dioxide (CO2) and other emissions in some regions of Canada may lead to electricity supply constraints in other regions. A lack of investment in transmission infrastructure has resulted in constraints within the North American transmission grid, and the growth of smaller projects is now raising concerns about transmission capacity. Labour supply shortages in the electricity industry are also creating concerns about the long-term security of the electricity market. Measures to address constraints must be considered in the current context of the North American electricity system. The extensive transmission interconnects and integration between the United States and Canada will provide a framework for greater trade and market opportunities between the 2 countries. Coordinated actions and increased integration will enable Canada and the United States to increase the reliability of electricity supply. However, both countries must work cooperatively to increase generation supply using both mature and emerging technologies. The cross-border transmission grid must be enhanced by increasing transmission capacity as well as by implementing new reliability rules, building new infrastructure, and ensuring infrastructure protection. Barriers to cross-border electricity trade must be identified and avoided. Demand-side and energy efficiency measures must also be implemented. It was concluded that both countries must focus on developing strategies for addressing the environmental concerns related to electricity production. 6 figs

  14. [Computer-assisted phacoemulsification for hard cataracts].

    Science.gov (United States)

    Zemba, M; Papadatu, Adriana-Camelia; Sîrbu, Laura-Nicoleta; Avram, Corina

    2012-01-01

Purpose: To evaluate the efficiency of new torsional phacoemulsification software (Ozil IP system) in hard nucleus cataract extraction. Methods: 45 eyes with hard senile cataract (grade III and IV) underwent phacoemulsification performed by the same surgeon, using the same technique (stop and chop). The Infiniti (Alcon) platform was used, with Ozil IP software and a mini-flared 45-degree Kelman phaco tip. The nucleus was split in two; the first half was then phacoemulsified with IP on (group 1) and the second half with IP off (group 2). For every group we measured: cumulative dissipated energy (CDE), the number of tip occlusions that needed manual desobstruction, and the amount of BSS used. Results: The mean CDE was the same in group 1 and in group 2 (between 6.2 and 14.9). The incidence of occlusions that needed manual desobstruction was lower in group 1 (5 times) than in group 2 (13 times). Group 2 used more BSS than group 1. Conclusions: The new torsional software (IP system) significantly decreased occlusion time and balanced salt solution use compared with standard torsional software, particularly with denser cataracts.

  15. The Impact of Resource Constraints on the Psychological Well-Being of Survivors of Intimate Partner Violence over Time

    Science.gov (United States)

    Beeble, Marisa L.; Bybee, Deborah; Sullivan, Cris M.

    2010-01-01

    This study examined the impact of resource constraints on the psychological well-being of survivors of intimate partner violence (IPV), testing whether resource constraints is one mechanism that partially mediates the relationship between IPV and women's well-being. Although within-woman changes in resource constraints did not mediate the…

  16. Solar system and equivalence principle constraints on f(R) gravity by the chameleon approach

    International Nuclear Information System (INIS)

    Capozziello, Salvatore; Tsujikawa, Shinji

    2008-01-01

We study constraints on f(R) dark energy models from solar system experiments combined with experiments on the violation of the equivalence principle. When the mass of an equivalent scalar field degree of freedom is heavy in a region with high density, a spherically symmetric body has a thin shell so that an effective coupling of the fifth force is suppressed through a chameleon mechanism. We place experimental bounds on the cosmologically viable models recently proposed in the literature that have an asymptotic form f(R) = R − λR_c[1 − (R_c/R)^(2n)] in the regime R ≫ R_c. From the solar system constraints on the post-Newtonian parameter γ, we derive the bound n > 0.5, whereas the constraints from the violations of the weak and strong equivalence principles give the bound n > 0.9. This allows a possibility to find the deviation from the Λ-cold dark matter (ΛCDM) cosmological model. For the model f(R) = R − λR_c(R/R_c)^p with 0 < p < 1, the constraints give p < 10^(-10), which shows that this model is hardly distinguishable from the ΛCDM cosmology.

  17. Impact of optimal load response to real-time electricity price on power system constraints in Denmark

    DEFF Research Database (Denmark)

    Hu, Weihao; Chen, Zhe; Bak-Jensen, Birgitte

    2010-01-01

Since the hourly spot market price is available one day ahead in Denmark, the price could be transferred to the consumers, and they may shift their loads from high-price periods to low-price periods in order to save on energy costs. The Danish power system, which has a high penetration of wind power and may represent the future of electricity markets in some ways, is chosen as the studied power system in this paper. A distribution system where wind power capacity is 126% of maximum load is chosen as the study case. This paper presents a nonlinear load optimization method for response to real-time power prices for demand-side management, in order to save energy costs as much as possible. Simulation results show that the optimal load response to a real-time electricity price has some positive impacts on power system constraints in a distribution system with high wind power penetration.
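The underlying load-shifting problem is linear in its simplest form: for a single flexible load with a fixed energy requirement and an hourly cap, filling the cheapest hours first is already cost-optimal. A hedged sketch follows; the prices, energy requirement, and per-hour cap are invented, and the paper's method is a nonlinear optimization over network constraints, not this greedy:

```python
import numpy as np

# hourly day-ahead prices (price/MWh) for one day -- illustrative numbers
prices = np.array([30, 28, 25, 24, 26, 35, 50, 70, 65, 55, 45, 40,
                   38, 36, 37, 42, 55, 75, 80, 68, 52, 44, 38, 33], float)

def schedule_flexible_load(prices, energy, per_hour_cap):
    """Cost-minimal schedule for a shiftable load: fill cheapest hours first.

    Greedy is optimal here because the objective and constraints are linear
    and each hour's cost is independent of the others.
    """
    load = np.zeros_like(prices)
    for h in np.argsort(prices):            # cheapest hour first
        take = min(per_hour_cap, energy)
        load[h] = take
        energy -= take
        if energy == 0:
            break
    return load

load = schedule_flexible_load(prices, energy=10.0, per_hour_cap=3.0)
cost = float(load @ prices)
```

With real-time prices, wind-heavy hours tend to be cheap, so this kind of response naturally moves consumption toward high-wind periods.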

  18. Constraint-based reachability

    Directory of Open Access Journals (Sweden)

    Arnaud Gotlieb

    2013-02-01

Iterative imperative programs can be considered as infinite-state systems computing over possibly unbounded domains. Studying reachability in these systems is challenging, as it requires dealing with an infinite number of states under standard backward or forward exploration strategies. An approach that we call constraint-based reachability is proposed to address reachability problems by exploring program states using a constraint model of the whole program. The key point of the approach is to interpret imperative constructions such as conditionals, loops, and array and memory manipulations with the fundamental notion of a constraint over a computational domain. By combining constraint filtering and abstraction techniques, constraint-based reachability is able to solve reachability problems which are usually outside the scope of backward or forward exploration strategies. This paper proposes an interpretation of classical filtering consistencies used in constraint programming as abstract domain computations, and shows how this approach can be used to produce a constraint solver that efficiently generates solutions for reachability problems that are unsolvable by other approaches.
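A toy example of the idea: read both the guard of a conditional and its assignment as constraints, then filter an interval domain for the input (a single arc-consistency step). Everything below is an invented miniature to convey the principle, not the paper's solver:

```python
def filter_reachability(y_target):
    """Toy constraint-based reachability check for the program

        if x > 10:
            y = x - 5

    over integer interval domains (bounds are illustrative). The guard and
    the assignment are both read as constraints and used to filter the
    domain of x; an empty domain means the target output is unreachable.
    """
    x_lo, x_hi = -1000, 1000            # initial domain of x
    x_lo = max(x_lo, 11)                # guard as constraint: x > 10
    x_lo = max(x_lo, y_target + 5)      # assignment as constraint: x = y + 5
    x_hi = min(x_hi, y_target + 5)
    return (x_lo, x_hi) if x_lo <= x_hi else None   # None: unreachable

witness = filter_reachability(100)      # which x makes y == 100?
```

Filtering collapses the domain of x to the single witness 105 for y == 100, and proves y == 0 unreachable because it would require x = 5, violating the guard.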

  19. 30 CFR 75.1720-1 - Distinctively colored hard hats, or hard caps; identification for newly employed, inexperienced...

    Science.gov (United States)

    2010-07-01

§ 75.1720-1 Distinctively colored hard hats, or hard caps; identification for newly employed, inexperienced miners (30 CFR, Mandatory Safety Standards - Underground Coal Mines, Miscellaneous). Hard hats or hard caps distinctively different in color from those worn by experienced miners…

  20. Faddeev-Jackiw quantization and constraints

    International Nuclear Information System (INIS)

    Barcelos-Neto, J.; Wotzasek, C.

    1992-01-01

In a recent Letter, Faddeev and Jackiw have shown that the reduction of constrained systems into their canonical, first-order form can bring new insight into research in this field. For symplectic manifolds, the geometrical structure, called the Dirac or generalized bracket, is obtained directly from the inverse of the nonsingular symplectic two-form matrix. In the case of nonsymplectic manifolds, this two-form is degenerate and cannot be inverted to provide the generalized brackets. This singular behavior of the symplectic matrix is indicative of the presence of constraints that have to be carefully considered to yield consistent results. One has two possible routes to treat this problem: Dirac has taught us how to implement the constraints into the potential part (Hamiltonian) of the canonical Lagrangian, leading to the well-known Dirac brackets, which are consistent with the constraints and can be mapped into quantum commutators (modulo ordering terms). The second route, suggested by Faddeev and Jackiw and followed in this paper, is to implement the constraints directly into the canonical part of the first-order Lagrangian, using the fact that the consistency condition for the stability of the constrained manifold is linear in the time derivative. This algorithm may lead to an invertible symplectic two-form matrix from which the Dirac brackets are readily obtained. This algorithm is used in this paper to investigate some aspects of the quantization of constrained systems with first- and second-class constraints in the symplectic approach.

  1. Constraint-Muse: A Soft-Constraint Based System for Music Therapy

    Science.gov (United States)

    Hölzl, Matthias; Denker, Grit; Meier, Max; Wirsing, Martin

    Monoidal soft constraints are a versatile formalism for specifying and solving multi-criteria optimization problems with dynamically changing user preferences. We have developed a prototype tool for interactive music creation, called Constraint Muse, that uses monoidal soft constraints to ensure that a dynamically generated melody harmonizes with input from other sources. Constraint Muse provides an easy to use interface based on Nintendo Wii controllers and is intended to be used in music therapy for people with Parkinson’s disease and for children with high-functioning autism or Asperger’s syndrome.

  2. Accuracy Constraint Determination in Fixed-Point System Design

    Directory of Open Access Journals (Sweden)

    Serizel R

    2008-01-01

Most digital signal processing applications are specified and designed with floating-point arithmetic but are finally implemented using fixed-point architectures. Thus, the design flow requires a floating-point to fixed-point conversion stage which optimizes the implementation cost under execution time and accuracy constraints. This accuracy constraint is linked to the application performance, and the determination of this constraint is one of the key issues of the conversion process. In this paper, a method is proposed to determine the accuracy constraint from the application performance. The fixed-point system is modeled with an infinite-precision version of the system and a single noise source located at the system output. Then, an iterative approach for optimizing the fixed-point specification under the application performance constraint is defined and detailed. Finally, the efficiency of our approach is demonstrated by experiments on an MP3 encoder.
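Under the classic uniform-quantization model, a rounding step of size q = 2^-f injects noise of power q^2/12 at the output, so an output SNR constraint translates directly into a minimum number of fractional bits. A minimal sketch of that translation follows; the single-output-noise-source view follows the paper, but the function and numbers are invented for illustration:

```python
def min_frac_bits(signal_power, snr_db_required):
    """Smallest number of fractional bits f whose rounding-noise power
    q^2 / 12 (with quantization step q = 2^-f) keeps the output SNR at
    or above the required constraint.

    Assumes the classic uniform-quantization noise model with a single
    equivalent noise source at the system output.
    """
    max_noise = signal_power / (10 ** (snr_db_required / 10))
    f = 0
    while (2.0 ** (-f)) ** 2 / 12 > max_noise:
        f += 1
    return f

bits = min_frac_bits(signal_power=1.0, snr_db_required=60.0)
```

Each extra fractional bit halves q and so buys roughly 6 dB of SNR, which is why the loop terminates quickly for any realistic constraint.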

  3. Surface Morphology and Hardness Analysis of TiCN Coated AA7075 Aluminium Alloy

    Science.gov (United States)

    Srinath, M. K.; Ganesha Prasad, M. S.

    2017-12-01

Successful titanium carbonitride (TiCN) coating of AA7075 plates using the PVD technique depends upon many variables, including temperature, pressure, and the incident angle and energy of the reactive ions. Coated specimens showed an increase in surface hardness of 2.566 GPa. In this work, an attempt to further augment the surface hardness, and to understand the effects on surface morphology, was made through heat treatments at 500°C for different durations. Specimens heat-treated at 500°C for 1 h exhibited a maximum surface hardness of 6.433 GPa, corresponding to an increase of 92.07%. The XRD results showed the presence of Al2Ti and AlTi3N and indicate the bond created between them. Unit cell lattice parameters in the XRD data were calculated using Bragg's law. The SEM images exhibit increasing crack sizes as the heat treatment time is increased. From the studies, the heat treatment duration can be optimized to 1 h, which exhibited the highest surface hardness, as further increases in duration caused a drop in surface hardness. The heat treatment effectively modified the surface hardness. Equations relating temperature and time to the reaction parameters are presented.
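The lattice-parameter step mentioned above follows from Bragg's law, n·λ = 2d·sin(θ), combined with the cubic relation a = d·sqrt(h² + k² + l²). A small sketch follows; the peak position and (hkl) indices are hypothetical, not values taken from the paper:

```python
import math

def lattice_parameter(two_theta_deg, hkl, wavelength=1.5406, n=1):
    """Cubic lattice parameter (same units as wavelength) from one
    diffraction peak via Bragg's law:

        n * wavelength = 2 * d * sin(theta);  a = d * sqrt(h^2 + k^2 + l^2)

    Default wavelength is Cu K-alpha (1.5406 angstrom); the peak position
    and (hkl) indices passed below are illustrative assumptions.
    """
    theta = math.radians(two_theta_deg / 2)      # 2-theta is what XRD reports
    d = n * wavelength / (2 * math.sin(theta))   # interplanar spacing
    h, k, l = hkl
    return d * math.sqrt(h * h + k * k + l * l)

a = lattice_parameter(38.5, (1, 1, 1))   # hypothetical (111) peak at 38.5 deg
```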

  4. 30 CFR 77.1710-1 - Distinctively colored hard hats or hard caps; identification for newly employed, inexperienced...

    Science.gov (United States)

    2010-07-01

§ 77.1710-1 Distinctively colored hard hats or hard caps; identification for newly employed, inexperienced miners (30 CFR). Hard hats or hard caps distinctively different in color from those worn by experienced miners shall be worn at…

  5. Influence of skin hardness on dehydration kinetics of wine grapes.

    Science.gov (United States)

    Rolle, Luca; Caudana, Alberto; Giacosa, Simone; Gerbi, Vincenzo; Río Segade, Susana

    2011-02-01

Knowledge of the influence of initial mechanical properties on the evolution of berry weight loss through the drying process is scarce. Therefore, the main purpose of this work was to investigate the effect of skin hardness, at two different physiological stages, on the off-vine drying kinetics of grapes. Skin hardness was evaluated as the berry skin break force parameter, measured by a texture analysis test. The decrease of berry weight as a function of drying time was linear, indicating that the drying rates were constant within each cultivar studied (Moscato bianco and Erbaluce), for each ripening stage and level of berry skin hardness. The drying rates decreased as berry skin hardness increased for the ripest grapes of the cultivars studied. The study allowed the assessment of the correlation between the skin hardness of fresh berries and the weight loss determined after different drying days. 2010 Society of Chemical Industry.

  6. Induction surface hardening of hard coated steels

    Energy Technology Data Exchange (ETDEWEB)

    Pantleon, K.; Kessler, O.; Hoffann, F.; Mayr, P. [Stiftung Inst. fuer Werkstofftechnik, Bremen (Germany)

    1999-11-01

    The properties of hard coatings deposited using CVD processes are usually excellent. However, high deposition temperatures negatively influence the substrate properties, especially in the case of low alloyed steels. Therefore, a subsequent heat treatment is necessary to restore the properties of steel substrates. Here, induction surface hardening is used as a method of heat treatment after the deposition of TiN hard coatings on AISI 4140 (DIN42CrMo4) substrates. The influences of the heat treatment on both the coating and the substrate properties are discussed in relation to the parameters of induction heating. Thereby, the heating time, heating atmosphere and the power input into the coating-substrate compounds are varied. As a result of induction surface hardening, the properties of the substrates are improved without losing good coating properties. High hardness values in the substrate near the interface allow the AISI 4140 substrates to support TiN hard coatings very well. Consequently, higher critical loads are measured in scratch tests after the heat treatment. Also, compressive residual stresses in the substrate are generated. In addition, only a very low distortion appears. (orig.)

  7. A call of duty in hard times: Duty to vote and the Spanish Economic Crisis

    Directory of Open Access Journals (Sweden)

    Carol Galais

    2014-06-01

Although scarce, the literature addressing the effects of the economy on voter turnout and political attitudes has yielded mixed results. By using individual, longitudinal data from Spain, a country devastated by the Great Recession, our study illuminates how the latest economic crisis has impacted citizens' perceptions of voting. We analyze how economic conditions and perceptions of the economy have transformed the belief that voting is a civic duty, which is one of the strongest attitudinal predictors of turnout. Our results suggest that hard times slightly weaken citizens' sense of civic duty, particularly among the youngest. However, the adverse effects of the economic crisis are compensated by the positive effects of the electoral context, and as a consequence there is no aggregate decline in civic duty during the period examined (2010–2012).

  8. Modeling hard clinical end-point data in economic analyses.

    Science.gov (United States)

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

    The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates exceeded 1% per year in models with fewer health states, whereas models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data are reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high-risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are more appropriate.
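The rate-handling choices surveyed in this abstract rest on a standard conversion between a constant event rate and a per-cycle transition probability under a constant-hazard assumption. A minimal sketch with illustrative numbers (the function name is ours, not from the paper):

```python
import math

def rate_to_probability(annual_rate, cycle_years=1.0):
    """Constant-hazard conversion p = 1 - exp(-r * t), the usual way a
    constant annual event rate enters a Markov cycle."""
    return 1.0 - math.exp(-annual_rate * cycle_years)

# Example: a 1%/year event rate over a 3-month cycle (illustrative numbers)
p = rate_to_probability(0.01, cycle_years=0.25)
```

Note that simply multiplying the rate by the cycle length overstates the probability; the exponential form stays valid for long cycles and high rates.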

  9. Fault-Tolerant Topology and Routing Synthesis for IEEE Time-Sensitive Networking

    DEFF Research Database (Denmark)

    Gavrilut, Voica Maria; Zarrin, Bahram; Pop, Paul

    2017-01-01

    Time-Sensitive Networking (TSN) is a set of IEEE standards that extend Ethernet for safety-critical and real-time applications. TSN is envisioned to be widely used in several application areas, from industrial automation to in-vehicle networking. A TSN network is composed of end systems interconnected by physical links and bridges (switches). The data in TSN is exchanged via streams. We address safety-critical real-time systems, and we consider that the streams use the Urgency-Based Scheduler (UBS) traffic-type, suitable for hard real-time traffic. We are interested in determining a fault-tolerant topology and routing such that the requirements of the applications are satisfied. We propose three approaches to solve this optimization problem: (1) a heuristic solution, (2) a Greedy Randomized Adaptive Search Procedure (GRASP) metaheuristic, and (3) a Constraint Programming-based model. The approaches are evaluated on several test cases, including a test case…

  10. Two-agent scheduling in open shops subject to machine availability and eligibility constraints

    Directory of Open Access Journals (Sweden)

    Ling-Huey Su

    2015-09-01

    Full Text Available Purpose: The aims of this article are to develop a new mathematical formulation and a new heuristic for the problem of preemptive two-agent scheduling in open shops subject to machine maintenance and eligibility constraints. Design/methodology: Using the ideas of minimum cost flow networks and constraint programming, a heuristic and a network-based linear programming model are proposed to solve the problem. Findings: Computational experiments show that the heuristic generates a good-quality schedule with a deviation of 0.25% on average from the optimum, and that the network-based linear programming model can solve problems with up to 110 jobs and 10 machines without considering the constraint that each operation can be processed on at most one machine at a time. In order to satisfy this constraint, a time-consuming Constraint Programming model is proposed. For n = 80 and m = 10, the average execution time for the combined models (linear programming combined with Constraint Programming) exceeds two hours. Therefore, the heuristic algorithm we developed is both efficient and needed. Practical implications: The problem arises in TFT-LCD and E-paper manufacturing, wherein units go through a series of diagnostic tests that do not have to be performed in any specified order. Originality/value: The main contribution of the article is to split the time horizon into many time intervals and use a dispatching rule for each time interval in the heuristic algorithm, and also to combine the minimum cost flow network with Constraint Programming to solve the problem optimally.

  11. Constraints on Short, Hard Gamma-Ray Burst Beaming Angles from Gravitational Wave Observations

    Science.gov (United States)

    Williams, D.; Clark, J. A.; Williamson, A. R.; Heng, I. S.

    2018-05-01

    The first detection of a binary neutron star merger, GW170817, and an associated short gamma-ray burst confirmed that neutron star mergers are responsible for at least some of these bursts. The prompt gamma-ray emission from these events is thought to be highly relativistically beamed. We present a method for inferring limits on the extent of this beaming by comparing the number of short gamma-ray bursts (SGRBs) observed electromagnetically with the number of neutron star binary mergers detected in gravitational waves. We demonstrate that an observing run comparable to the expected Advanced LIGO (aLIGO) 2016–2017 run would be capable of placing limits on the beaming angle of approximately θ ∈ (2.88°, 14.15°), given one binary neutron star detection, under the assumption that all mergers produce a gamma-ray burst, and that SGRBs occur at an illustrative rate of R_grb = 10 Gpc⁻³ yr⁻¹. We anticipate that after a year of observations with aLIGO at design sensitivity in 2020, these constraints will improve to θ ∈ (8.10°, 14.95°), under the same efficiency and SGRB rate assumptions.
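The comparison described above can be illustrated with the standard top-hat-jet relation between rates and beaming fraction, f_b = 1 − cos θ = R_grb / (ε · R_bns). The sketch below uses made-up rate values and assumes every merger produces a burst (ε = 1); it is a back-of-the-envelope version, not the authors' statistical method:

```python
import math

def beaming_angle_deg(rate_grb, rate_bns, efficiency=1.0):
    """Top-hat-jet relation: the fraction of mergers whose jet points at us
    is f_b = 1 - cos(theta), so theta follows from the ratio of the SGRB
    rate to the (efficiency-weighted) merger rate."""
    f_b = rate_grb / (efficiency * rate_bns)
    if not 0.0 < f_b <= 1.0:
        raise ValueError("beaming fraction must lie in (0, 1]")
    return math.degrees(math.acos(1.0 - f_b))

# Illustrative rates in Gpc^-3 yr^-1, not measured values
theta = beaming_angle_deg(10.0, 300.0)   # about 14.8 degrees
```

A smaller SGRB-to-merger rate ratio implies a narrower jet, which is the sense in which more gravitational-wave detections tighten the beaming constraint.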

  12. Spectro-Timing Study of GX 339-4 in a Hard Intermediate State

    Science.gov (United States)

    Furst, F.; Grinberg, V.; Tomsick, J. A.; Bachetti, M.; Boggs, S. E.; Brightman, M.; Christensen, F. E.; Craig, W. W.; Gandhi, P.; Zhang, William W.

    2016-01-01

    We present an analysis of Nuclear Spectroscopic Telescope Array observations of a hard intermediate state of the transient black hole GX 339-4 taken in 2015 January. With the source softening significantly over the course of the 1.3 day long observation we split the data into 21 sub-sets and find that the spectrum of all of them can be well described by a power-law continuum with an additional relativistically blurred reflection component. The photon index increases from approx. 1.69 to approx. 1.77 over the course of the observation. The accretion disk is truncated at around nine gravitational radii in all spectra. We also perform timing analysis on the same 21 individual data sets, and find a strong type-C quasi-periodic oscillation (QPO), which increases in frequency from approx. 0.68 to approx. 1.05 Hz with time. The frequency change is well correlated with the softening of the spectrum. We discuss possible scenarios for the production of the QPO and calculate predicted inner radii in the relativistic precession model as well as the global disk mode oscillations model. We find discrepancies with respect to the observed values in both models unless we allow for a black hole mass of approximately 100 solar masses, which is highly unlikely. We discuss possible systematic uncertainties, in particular with the measurement of the inner accretion disk radius in the relativistic reflection model. We conclude that the combination of observed QPO frequencies and inner accretion disk radii, as obtained from spectral fitting, is difficult to reconcile with current models.

  13. The Emerging Population of Pulsar Wind Nebulae in Hard X-rays

    Science.gov (United States)

    Mattana, F.; Götz, D.; Terrier, R.; Renaud, M.; Falanga, M.

    2009-05-01

    The hard X-ray synchrotron emission from Pulsar Wind Nebulae probes energetic particles, closely related to the pulsar injection power at the present time. INTEGRAL has revealed the hitherto poorly known population of hard X-ray pulsar/PWN systems. We summarize the properties of the class, with emphasis on the first hard X-ray bow-shock (CTB 80, powered by PSR B1951+32), and highlight some prospects for the study of Pulsar Wind Nebulae with the Simbol-X mission.

  14. Thermodynamic perturbation theory for fused hard-sphere and hard-disk chain fluids

    International Nuclear Information System (INIS)

    Zhou, Y.; Hall, C.K.; Stell, G.

    1995-01-01

    We find that first-order thermodynamic perturbation theory (TPT1) which incorporates the reference monomer fluid used in the generalized Flory-AB (GF-AB) theory yields an equation of state for fused hard-sphere (FHS) chain fluids that has accuracy comparable to the GF-AB and GF-dimer-AC theories. The new TPT1 equation of state is significantly more accurate than other extensions of the TPT1 theory to FHS chain fluids. The TPT1 is also extended to two-dimensional fused hard-disk chain fluids. For the fused hard-disk dimer fluid, the extended TPT1 equation of state is found to be more accurate than the Boublik hard-disk dimer equation of state. copyright 1995 American Institute of Physics

  15. Comparative study of carp otolith hardness: lapillus and asteriscus.

    Science.gov (United States)

    Ren, Dongni; Meyers, Marc André; Zhou, Bo; Feng, Qingling

    2013-05-01

    Otoliths are calcium carbonate biominerals in the inner ear of vertebrates; they play a role in balance, movement, and sound perception. Two types of otoliths in freshwater carp are investigated using nano- and micro-indentation: asteriscus and lapillus. The hardness, modulus, and creep of asteriscus (vaterite crystals) and lapillus (aragonite crystals) are compared. The hardness and modulus of lapillus are higher than those of asteriscus both in nano- and micro-testing, which is attributed to the different crystal polymorphs. Both materials exhibit a certain degree of creep, which indicates some time dependence of the mechanical behavior and is attributed to the organic components. The nano-indentation hardnesses are higher than micro-hardnesses for both otoliths, a direct result of the scale dependence of strength; fewer flaws are encountered by the nano than by the microindenter. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Tabu search approaches for the multi-level warehouse layout problem with adjacency constraints

    Science.gov (United States)

    Zhang, G. Q.; Lai, K. K.

    2010-08-01

    A new multi-level warehouse layout problem, the multi-level warehouse layout problem with adjacency constraints (MLWLPAC), is investigated. The same item type is required to be located in adjacent cells, and horizontal and vertical unit travel costs are product dependent. An integer programming model is proposed to formulate the problem, which is NP-hard. Along with a cube-per-order index policy based heuristic, the standard tabu search (TS), greedy TS, and dynamic neighbourhood based TS are presented to solve the problem. The computational results show that the proposed approaches can reduce the transportation cost significantly.
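The abstract's solution methods build on the standard tabu-search loop: move to the best non-tabu neighbour each iteration, forbid reversing recent moves, and keep the best solution seen. A minimal sketch with a toy layout cost and a swap neighbourhood (illustrative only; not the paper's greedy or dynamic-neighbourhood variants):

```python
import itertools

def tabu_search(cost, n, iters=200, tenure=7):
    """Minimal tabu search: minimise cost(assignment) over permutations of
    n cells, using pairwise swaps as the neighbourhood and a short-term
    tabu list of recently applied swaps."""
    current = list(range(n))              # deterministic starting layout
    best, best_cost = current[:], cost(current)
    tabu = {}                             # move -> iteration until which it is tabu
    for it in range(iters):
        candidates = []
        for i, j in itertools.combinations(range(n), 2):
            neighbour = current[:]
            neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
            c = cost(neighbour)
            # aspiration criterion: a tabu move is allowed if it beats the best
            if tabu.get((i, j), 0) <= it or c < best_cost:
                candidates.append((c, (i, j), neighbour))
        if not candidates:
            continue
        c, move, current = min(candidates, key=lambda t: t[0])
        tabu[move] = it + tenure          # forbid repeating this swap for a while
        if c < best_cost:
            best, best_cost = current[:], c
    return best, best_cost

# Toy cost: total distance of each item from its preferred cell (made-up data)
prefs = [3, 1, 0, 2, 4]
cost = lambda a: sum(abs(a[i] - prefs[i]) for i in range(len(prefs)))
sol, c = tabu_search(cost, 5)
```

Accepting the best neighbour even when it worsens the current solution is what lets tabu search escape local optima that trap plain descent.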

  17. Reduction of Constraints: Applicability of the Homogeneity Constraint for Macrobatch 3

    International Nuclear Information System (INIS)

    Peeler, D.K.

    2001-01-01

    The Product Composition Control System (PCCS) is used to determine the acceptability of each batch of Defense Waste Processing Facility (DWPF) melter feed in the Slurry Mix Evaporator (SME). This control system imposes several constraints on the composition of the contents of the SME to define acceptability. These constraints relate process or product properties to composition via prediction models. A SME batch is deemed acceptable if its sample composition measurements lead to acceptable property predictions after accounting for modeling, measurement and analytic uncertainties. The baseline document guiding the use of these data and models is "SME Acceptability Determination for DWPF Process Control (U)" by Brown and Postles [1996]. A minimum of three PCCS constraints support the prediction of the glass durability from a given SME batch. The Savannah River Technology Center (SRTC) is reviewing all of the PCCS constraints associated with durability. The purpose of this review is to revisit these constraints in light of the additional knowledge gained since the beginning of radioactive operations at DWPF and to identify any supplemental studies needed to amplify this knowledge so that redundant or overly conservative constraints can be eliminated or replaced by more appropriate constraints

  18. Distance Constraint Satisfaction Problems

    Science.gov (United States)

    Bodirsky, Manuel; Dalmau, Victor; Martin, Barnaby; Pinsker, Michael

    We study the complexity of constraint satisfaction problems for templates Γ that are first-order definable in (ℤ; succ), the integers with the successor relation. Assuming a widely believed conjecture from finite domain constraint satisfaction (we require the tractability conjecture by Bulatov, Jeavons and Krokhin in the special case of transitive finite templates), we provide a full classification for the case that Γ is locally finite (i.e., the Gaifman graph of Γ has finite degree). We show that one of the following is true: The structure Γ is homomorphically equivalent to a structure with a certain majority polymorphism (which we call modular median) and CSP(Γ) can be solved in polynomial time, or Γ is homomorphically equivalent to a finite transitive structure, or CSP(Γ) is NP-complete.

  19. Hardness enhancement and crosslinking mechanisms in polystyrene irradiated with high energy ion-beams

    International Nuclear Information System (INIS)

    Lee, E.H.; Rao, G.R.; Mansur, L.K.

    1996-01-01

    Surface hardness values several times larger than that of steel were produced using high-energy ion beams at several hundred keV to MeV. High LET is important for crosslinking. Crosslinking is studied by analyzing hardness variations in response to irradiation parameters such as ion species, energy, and fluence. Effective crosslinking radii at hardness saturation are derived based on experimental data for 350 keV H+ and 1 MeV Ar+ irradiation of polystyrene. The saturation value for surface hardness is about 20 GPa

  20. Technology for planning and scheduling under complex constraints

    Science.gov (United States)

    Alguire, Karen M.; Pedro Gomes, Carla O.

    1997-02-01

    Within the context of law enforcement, several problems fall into the category of planning and scheduling under constraints. Examples include resource and personnel scheduling, and court scheduling. In the case of court scheduling, a schedule must be generated considering available resources, e.g., court rooms and personnel. Additionally, there are constraints on individual court cases, e.g., temporal and spatial, and between different cases, e.g., precedence. Finally, there are overall objectives that the schedule should satisfy, such as timely processing of cases and optimal use of court facilities. Manually generating a schedule that satisfies all of the constraints is a very time-consuming task. As the number of court cases and constraints increases, this becomes increasingly hard to handle without the assistance of automatic scheduling techniques. This paper describes artificial intelligence (AI) technology that has been used to develop several high-performance scheduling applications, including a military transportation scheduler, a military in-theater airlift scheduler, and a nuclear power plant outage scheduler. We discuss possible law enforcement applications where we feel the same technology could provide long-term benefits to law enforcement agencies and their operations personnel.
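The court-scheduling problem described above can be framed as a small constraint-satisfaction search. The backtracking solver below is an illustrative sketch (case, room and slot names are made up), not the AI technology referenced in the paper:

```python
from itertools import product

def schedule(cases, rooms, slots, conflicts):
    """Backtracking sketch: assign each case a (room, slot) so that no room
    is double-booked and conflicting cases never share a time slot.
    `conflicts` is a set of frozenset case pairs; all names are made up."""
    assignment = {}

    def ok(case, room, slot):
        for other, (r, s) in assignment.items():
            if (r, s) == (room, slot):
                return False  # room already booked at this time
            if s == slot and frozenset((case, other)) in conflicts:
                return False  # conflicting cases overlap in time
        return True

    def backtrack(i):
        if i == len(cases):
            return True
        for room, slot in product(rooms, slots):
            if ok(cases[i], room, slot):
                assignment[cases[i]] = (room, slot)
                if backtrack(i + 1):
                    return True
                del assignment[cases[i]]
        return False

    return assignment if backtrack(0) else None

plan = schedule(["A", "B", "C"], ["room1", "room2"], [9, 10],
                conflicts={frozenset(("A", "B"))})
```

Real court-scheduling systems add optimization objectives (timeliness, facility utilization) on top of this feasibility core, typically via constraint programming rather than naive backtracking.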

  1. Changes in hardness of magnesium alloys due to precipitation hardening

    Directory of Open Access Journals (Sweden)

    Tatiana Oršulová

    2018-04-01

    Full Text Available This paper deals with the evaluation of changes in hardness during precipitation hardening of magnesium alloys, which are nowadays widely used in different fields of industry. It focuses on the AZ31, AZ61 and AZ91 alloys. Observing changes in material hardness serves as an effective tool for determining precipitation hardening parameters, such as temperature and time. Brinell hardness measurement was chosen based on experimental needs. It was also necessary to analyze the chemical composition and to observe the microstructures of the tested materials. The obtained results are presented and discussed in this paper.

  2. Hard-on-hard lubrication in the artificial hip under dynamic loading conditions.

    Directory of Open Access Journals (Sweden)

    Robert Sonntag

    Full Text Available The tribological performance of an artificial hip joint has a particularly strong influence on its success. The principal causes of failure are adverse short- and long-term reactions to wear debris and high frictional torque in the case of poor lubrication, which may cause loosening of the implant. Therefore, using experimental and theoretical approaches, models have been developed to evaluate lubrication under standardized conditions. A steady-state numerical model has been extended with dynamic experimental data for hard-on-hard bearings used in total hip replacements to verify the tribological relevance of the ISO 14242-1 gait cycle in comparison to experimental data from the Orthoload database and instrumented gait analysis for three additional loading conditions: normal walking, climbing stairs and descending stairs. Ceramic-on-ceramic bearing partners show superior lubrication potential compared to hard-on-hard bearings that work with at least one articulating metal component. Lubrication regimes during the investigated activities are shown to strongly depend on the kinematics and loading conditions. The outcome from the ISO gait is not fully confirmed by the normal walking data, and more challenging conditions show evidence of inferior lubrication. These findings may help to explain the differences between the in vitro predictions using the ISO gait cycle and the clinical outcome of some hard-on-hard bearings, e.g., using metal-on-metal.

  3. Diffusion Processes Satisfying a Conservation Law Constraint

    Directory of Open Access Journals (Sweden)

    J. Bakosi

    2014-01-01

    Full Text Available We investigate coupled stochastic differential equations governing N nonnegative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires a set of fluctuating variables to be nonnegative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model to be realizable must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the nonnegativity and the unit-sum conservation law constraints are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner’s generalized Dirichlet processes.
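The nonnegativity and unit-sum constraints discussed above can be illustrated with the Wright-Fisher diffusion, whose sqrt(X(1−X)) diffusion coefficient vanishes at the boundaries. A hedged Euler-Maruyama sketch (parameter values are made up; the clipping step is a fix for discretisation overshoot, not part of the exact process):

```python
import math, random

def wright_fisher_path(x0=0.3, a=1.0, m=0.5, b=0.2, dt=1e-3, steps=2000, seed=1):
    """Euler-Maruyama sketch of dX = a(m - X) dt + sqrt(b X (1 - X)) dW.
    The diffusion coefficient vanishing at 0 and 1 is what keeps the pair
    (X, 1 - X) nonnegative with unit sum in the exact process; the discrete
    scheme can still overshoot, so we clip (a simplification)."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x += a * (m - x) * dt + math.sqrt(max(b * x * (1.0 - x), 0.0)) * dw
        x = min(max(x, 0.0), 1.0)  # clip discretisation overshoot
        path.append(x)
    return path
```

The bivariate case (X, 1 − X) shows the abstract's point in miniature: the two components' diffusion terms are necessarily coupled and nonlinear, since any fluctuation in one must be exactly compensated by the other.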

  4. Remember Hard But Think Softly: Metaphorical Effects of Hardness/Softness on Cognitive Functions

    Science.gov (United States)

    Xie, Jiushu; Lu, Zhi; Wang, Ruiming; Cai, Zhenguang G.

    2016-01-01

    Previous studies have found that bodily stimulation, such as the experience of hardness, biases social judgment and evaluation via metaphorical association; however, it remains unclear whether bodily stimulation also affects cognitive functions, such as memory and creativity. The current study used the metaphorical associations between “hard” and “rigid” and between “soft” and “flexible” in Chinese to investigate whether the experience of hardness affects cognitive functions whose performance depends prospectively on rigidity (memory) and flexibility (creativity). In Experiment 1, we found that Chinese-speaking participants performed better at recalling previously memorized words while sitting on a hard-surface stool (the hard condition) than on a cushioned one (the soft condition). In Experiment 2, participants sitting on a cushioned stool outperformed those sitting on a hard-surface stool on a Chinese riddle task, which required creative/flexible thinking, but not on an analogical reasoning task, which required both rigid and flexible thinking. The results suggest the hardness experience affects cognitive functions that are metaphorically associated with rigidity or flexibility. They support the embodiment proposition that cognitive functions and representations can be grounded in bodily states via metaphorical associations. PMID:27672373

  5. Reduction Of Constraints For Coupled Operations

    International Nuclear Information System (INIS)

    Raszewski, F.; Edwards, T.

    2009-01-01

    The homogeneity constraint was implemented in the Defense Waste Processing Facility (DWPF) Product Composition Control System (PCCS) to help ensure that the current durability models would be applicable to the glass compositions being processed during DWPF operations. While the homogeneity constraint is typically an issue at lower waste loadings (WLs), it may impact the operating windows for DWPF operations, where the glass forming systems may be limited to lower waste loadings based on fissile or heat load limits. In the sludge batch 1b (SB1b) variability study, application of the homogeneity constraint at the measurement acceptability region (MAR) limit eliminated much of the potential operating window for DWPF. As a result, Edwards and Brown developed criteria that allowed DWPF to relax the homogeneity constraint from the MAR to the property acceptance region (PAR) criterion, which opened up the operating window for DWPF operations. These criteria are defined as: (1) use the alumina constraint as currently implemented in PCCS (Al2O3 ≥ 3 wt%) and add a sum of alkali constraint with an upper limit of 19.3 wt% (ΣM2O ≤ 19.3 wt%), or (2) increase the lower limit of the Al2O3 constraint to 4 wt% (Al2O3 ≥ 4 wt%). Herman et al. previously demonstrated that these criteria could be used to replace the homogeneity constraint for future sludge-only batches. The compositional region encompassing coupled operations flowsheets could not be bounded, as these flowsheets were unknown at the time. With the initiation of coupled operations at DWPF in 2008, the need to revisit the homogeneity constraint was realized. This constraint was specifically addressed through the variability study for SB5, where it was shown that the homogeneity constraint could be ignored if the alumina and alkali constraints were imposed. Additional benefit could be gained if the homogeneity constraint could be replaced by the Al2O3 and sum of alkali constraints for future coupled operations processing, based on projections from Revision 14 of…
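The relaxed acceptability criteria described in this abstract amount to a simple either/or composition check. The thresholds below follow the text, but the function itself is an illustrative sketch, not the actual PCCS implementation:

```python
def passes_relaxed_homogeneity(al2o3_wt, sum_alkali_wt):
    """Illustrative check of the relaxed criteria: either Al2O3 >= 3 wt%
    together with a sum-of-alkali limit of 19.3 wt%, or a tightened
    alumina-only limit of Al2O3 >= 4 wt%. Thresholds follow the abstract;
    the function form is ours, not the real PCCS logic."""
    option1 = al2o3_wt >= 3.0 and sum_alkali_wt <= 19.3
    option2 = al2o3_wt >= 4.0
    return option1 or option2
```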

  6. Momentum constraint relaxation

    International Nuclear Information System (INIS)

    Marronetti, Pedro

    2006-01-01

    Full relativistic simulations in three dimensions invariably develop runaway modes that grow exponentially and are accompanied by violations of the Hamiltonian and momentum constraints. Recently, we introduced a numerical method (Hamiltonian relaxation) that greatly reduces the Hamiltonian constraint violation and helps improve the quality of the numerical model. We present here a method that controls the violation of the momentum constraint. The method is based on the addition of a longitudinal component to the traceless extrinsic curvature Ã^ij, generated by a vector potential w^i, as outlined by York. The components of w^i are relaxed to solve approximately the momentum constraint equations, slowly pushing the evolution towards the space of solutions of the constraint equations. We test this method with simulations of binary neutron stars in circular orbits and show that it effectively controls the growth of the aforementioned violations. We also show that a full numerical enforcement of the constraints, as opposed to the gentle correction of the momentum relaxation scheme, results in the development of instabilities that stop the runs shortly thereafter

  7. 4-channel rad-hard delay generation ASIC with 1ns timing resolution for LHC

    International Nuclear Information System (INIS)

    Toifl, T.; Moreira, P.; Marchioro, A.; Vari, R.

    1999-01-01

    An ASIC was developed to precisely delay digital signals within the range of 0–24 ns in steps of 1 ns. To obtain well-defined delay values independent of variations in process, supply voltage and temperature, four independent delay channels are controlled by a common control voltage derived from a delay-locked loop (DLL), which is synchronized to an external 40 MHz clock signal. The delay values of the four signal channels and the clock channel can be individually programmed via an I²C interface. Due to an automatic reset logic the chip does not need an external reset signal. A first version of the chip was developed in a non-rad-hard 0.8 μm technology, and the successful prototype was then transferred to a radiation-hard process (DMILL). Measurement results for both chip variants will be presented

  8. Finding the optimal Bayesian network given a constraint graph

    Directory of Open Access Journals (Sweden)

    Jacob M. Schreiber

    2017-07-01

    Full Text Available Despite recent algorithmic improvements, learning the optimal structure of a Bayesian network from data is typically infeasible past a few dozen variables. Fortunately, domain knowledge can frequently be exploited to achieve dramatic computational savings, and in many cases domain knowledge can even make structure learning tractable. Several methods have previously been described for representing this type of structural prior knowledge, including global orderings, super-structures, and constraint rules. While super-structures and constraint rules are flexible in terms of what prior knowledge they can encode, they achieve savings in memory and computational time simply by avoiding considering invalid graphs. We introduce the concept of a “constraint graph” as an intuitive method for incorporating rich prior knowledge into the structure learning task. We describe how this graph can be used to reduce the memory cost and computational time required to find the optimal graph subject to the encoded constraints, beyond merely eliminating invalid graphs. In particular, we show that a constraint graph can break the structure learning task into independent subproblems even in the presence of cyclic prior knowledge. These subproblems are well suited to being solved in parallel on a single machine or distributed across many machines without excessive communication cost.
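The abstract's key observation, that a constraint graph can break structure learning into independent subproblems, can be sketched in miniature: when prior knowledge fixes which variables may parent which, a decomposable score lets each node's parent set be chosen independently. The score function below is a toy stand-in (e.g., for BIC), and the names are illustrative:

```python
from itertools import chain, combinations

def best_parents_per_node(nodes, allowed_parents, score):
    """When a constraint graph restricts the candidate parents of each node
    (`allowed_parents`, assumed to respect acyclicity), a decomposable
    score splits the global search into one independent parent-set choice
    per node. `score(node, parents)` is a toy stand-in for, e.g., BIC."""
    def subsets(xs):
        return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))
    return {node: max(subsets(allowed_parents[node]),
                      key=lambda ps: score(node, ps))
            for node in nodes}

# Toy score that prefers parent set {"A"} for node "B" (illustrative only)
score = lambda n, ps: 1.0 if (n == "B" and set(ps) == {"A"}) else 0.0
net = best_parents_per_node(["A", "B"], {"A": [], "B": ["A"]}, score)
```

The per-node subproblems share no state, which is why the paper can distribute them across machines without communication beyond collecting the results.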

  9. Interaction between sodium chloride and texture in semi-hard Danish cheese as affected by brining time, DL-starter culture, chymosin type and cheese ripening

    DEFF Research Database (Denmark)

    Akkerman, Marije; Søndergaard Kristensen, Lise; Jespersen, Lene

    2017-01-01

    Reduced NaCl in semi-hard cheeses greatly affects textural and sensory properties. The interaction between cheese NaCl concentration and texture was affected by brining time (0-28 h), DL-starter cultures (C1, C2, and C3), chymosin type (bovine or camel), and ripening time (1-12 weeks). Cheese Na… …is reducible without significant textural impact using well-defined starter cultures and camel chymosin.

  10. Software-Enabled Project Management Techniques and Their Relationship to the Triple Constraints

    Science.gov (United States)

    Elleh, Festus U.

    2013-01-01

    This study investigated the relationship between software-enabled project management techniques and the triple constraints (time, cost, and scope). There is a dearth of academic literature focused on the relationship between software-enabled project management techniques and the triple constraints (time, cost, and scope). Based on the gap…

  11. Microbiological quality of soft, semi-hard and hard cheeses during the shelf-life

    Directory of Open Access Journals (Sweden)

    Josip Vrdoljak

    2016-03-01

    Full Text Available Cheeses, as ready-to-eat food, should be considered a potential source of foodborne pathogens, primarily Listeria monocytogenes. The aim of the present study was to determine the microbiological quality of soft, semi-hard and hard cheeses during the shelf-life, with particular reference to L. monocytogenes. Five types of cheeses were sampled at different time points during cold storage and analyzed for the presence of Salmonella and L. monocytogenes, as well as for lactic acid bacteria, Escherichia coli, coagulase-positive staphylococci, yeasts, molds, sulfite-reducing clostridia and L. monocytogenes counts. Water activity, pH and NaCl content were monitored in order to evaluate the possibility of L. monocytogenes growth. A challenge test for L. monocytogenes was performed in soft whey cheese to determine the growth potential of the pathogen during the shelf-life of the product. All analyzed cheeses were compliant with microbiological criteria during the shelf-life. In soft cheeses, lactic acid bacteria increased over the shelf-life period (1.2-2.6 log increase), while in semi-hard and hard cheeses they decreased (1.6 and 5.2 log decrease, respectively). Soft cheeses support the growth of L. monocytogenes according to the determined pH values (5.8-6.5), water activity (0.99-0.94), and NaCl content (0.3-1.2%). The challenge test showed that the L. monocytogenes growth potential in the selected soft cheese was 0.43 log10 cfu/g during 8 days at 4°C. Water activity in semi-hard and hard cheeses was a limiting factor for Listeria growth during the shelf-life. Soft, semi-hard and hard cheeses were microbiologically stable during their defined shelf-life. Good manufacturing and hygienic practices must be strictly followed in the production of soft cheeses as Listeria-supporting food and be focused on preventing (re)contamination.
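The growth potential reported in the challenge test above is computed as a difference of median log10 counts between the end of the test and inoculation. A minimal sketch with made-up triplicate counts (the 0.5-log "supports growth" threshold follows the usual EURL Lm guidance convention):

```python
from statistics import median

def growth_potential(log_counts_t0, log_counts_tend):
    """Growth potential (delta): the difference between the median log10
    count at the end of the challenge test and at inoculation. The 0.5-log
    threshold below is the conventional 'supports growth' cut-off."""
    return median(log_counts_tend) - median(log_counts_t0)

# Made-up triplicate counts (log10 cfu/g)
delta = growth_potential([2.0, 2.1, 1.9], [2.4, 2.43, 2.6])
supports_growth = delta > 0.5
```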

  12. Current constraints on the cosmic growth history

    International Nuclear Information System (INIS)

    Bean, Rachel; Tangmatitham, Matipon

    2010-01-01

    We present constraints on the cosmic growth history with recent cosmological data, allowing for deviations from ΛCDM as might arise if cosmic acceleration is due to modifications to general relativity or inhomogeneous dark energy. We combine measures of the cosmic expansion history, from Type Ia supernovae, baryon acoustic oscillations, and the cosmic microwave background (CMB), with constraints on the growth of structure from recent galaxy, CMB, and weak lensing surveys along with integrated Sachs Wolfe-galaxy cross correlations. Deviations from ΛCDM are parameterized by phenomenological modifications to the Poisson equation and the relationship between the two Newtonian potentials. We find modifications that are present at the time the CMB is formed are tightly constrained through their impact on the well-measured CMB acoustic peaks. By contrast, constraints on late-time modifications to the growth history, as might arise if modifications are related to the onset of cosmic acceleration, are far weaker, but remain consistent with ΛCDM at the 95% confidence level. For these late-time modifications we find that differences in the evolution on large and small scales could provide an interesting signature by which to search for modified growth histories with future wide angular coverage, large scale structure surveys.

  13. Analytical Modeling of Hard-Coating Cantilever Composite Plate considering the Material Nonlinearity of Hard Coating

    Directory of Open Access Journals (Sweden)

    Wei Sun

    2015-01-01

    Full Text Available Due to the material nonlinearity of hard coating, the coated structure produces nonlinear dynamical behaviors of variable stiffness and damping, which make the modeling of hard-coating composite structures a challenging task. In this study, a polynomial was adopted to characterize this material nonlinearity and an analytical modeling method was developed for the hard-coating composite plate. First, to relate the hard-coating material parameters obtained by test to the analytical model, the expression for the equivalent strain of the composite plate was derived. Then, the analytical model of the hard-coating composite plate was created by the energy method, considering the material nonlinearity of the hard coating. Next, the Newton-Raphson method was used to solve for the vibration response and resonant frequencies of the composite plate, and a specific calculation procedure was proposed. Finally, a cantilever plate coated with MgO + Al2O3 hard coating was chosen as a study case; the vibration response and resonant frequencies of the composite plate were calculated using the proposed method. The calculation results were compared with experiment and a general linear calculation, and the correctness of the created model was verified. The study shows that the proposed method can still maintain acceptable precision when the material nonlinearity of the hard coating is strong.
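The Newton-Raphson solve mentioned in the abstract amounts to iterating on the residual of the nonlinear response equations. A generic scalar sketch of the iteration is below; the cubic residual is a hypothetical Duffing-like stiffness term for illustration, not the authors' coating model:

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson root finder: iterate x <- x - f(x)/f'(x)
    until the update step falls below the tolerance."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Toy residual: amplitude-dependent (cubic, Duffing-like) stiffness balance
f = lambda a: a**3 + 2.0 * a - 5.0
df = lambda a: 3.0 * a**2 + 2.0
root = newton_raphson(f, df, 1.0)  # amplitude solving f(a) = 0
```

In the plate problem the scalar residual becomes a vector of harmonic-balance equations and the derivative a Jacobian, but the iteration has the same shape.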

  14. Influence of Timing of Delayed Hard Palate Closure on Articulation Skills in 3-Year-Old Danish Children with Unilateral Cleft Lip and Palate

    Science.gov (United States)

    Willadsen, Elisabeth; Boers, Maria; Schöps, Antje; Kisling-Møller, Mia; Nielsen, Joan Bogh; Jørgensen, Line Dahl; Andersen, Mikael; Bolund, Stig; Andersen, Helene Søgaard

    2018-01-01

    Background: Differing results regarding articulation skills in young children with cleft palate (CP) have been reported and often interpreted as a consequence of different surgical protocols. Aims: To assess the influence of different timing of hard palate closure in a two-stage procedure on articulation skills in 3-year-olds born with unilateral…

  15. Giant Panda Maternal Care: A Test of the Experience Constraint Hypothesis

    Science.gov (United States)

    Snyder, Rebecca J.; Perdue, Bonnie M.; Zhang, Zhihe; Maple, Terry L.; Charlton, Benjamin D.

    2016-01-01

    The body condition constraint and the experience constraint hypotheses have both been proposed to account for differences in reproductive success between multiparous (experienced) and primiparous (first-time) mothers. However, because primiparous mothers are typically characterized by both inferior body condition and lack of experience when compared to multiparous mothers, interpreting experience related differences in maternal care as support for either the body condition constraint hypothesis or the experience constraint hypothesis is extremely difficult. Here, we examined maternal behaviour in captive giant pandas, allowing us to simultaneously control for body condition and provide a rigorous test of the experience constraint hypothesis in this endangered animal. We found that multiparous mothers spent more time engaged in key maternal behaviours (nursing, grooming, and holding cubs) and had significantly less vocal cubs than primiparous mothers. This study provides the first evidence supporting the experience constraint hypothesis in the order Carnivora, and may have utility for captive breeding programs in which it is important to monitor the welfare of this species’ highly altricial cubs, whose survival is almost entirely dependent on receiving adequate maternal care during the first few weeks of life. PMID:27272352

  16. Effective constraint algebras with structure functions

    International Nuclear Information System (INIS)

    Bojowald, Martin; Brahma, Suddhasattwa

    2016-01-01

    This article presents the result that fluctuations and higher moments of a state, by themselves, do not imply quantum corrections in structure functions of constrained systems. Moment corrections are isolated from other types of quantum effects, such as factor-ordering choices and regularization, by introducing a new condition with two parts: (i) having a direct (or faithful) quantization of the classical structure functions, (ii) free of factor-ordering ambiguities. In particular, it is assumed that the classical constraints can be quantized in an anomaly free way, so that properties of the resulting constraint algebras can be derived. If the two-part condition is not satisfied, effective constraints can still be evaluated, but quantum effects may be stronger. Consequences for canonical quantum gravity, whose structure functions encode space–time structure, are discussed. In particular, deformed algebras found in models of loop quantum gravity provide reliable information even in the Planck regime. (paper)

  17. Hardness survey of cold-worked and heat-treated JBK-75 stainless steel alloy

    International Nuclear Information System (INIS)

    Jackson, R.J.; Lucas, R.L.

    1977-01-01

    The alloy JBK-75, an age-hardenable austenitic stainless steel, is similar to commercial A-286, but has certain chemistry modifications to improve weldability and hydrogen compatibility. The principal changes are an increase in nickel and a decrease in manganese with lower limits on carbon, phosphorus, sulfur, silicon, and boron. In this study, the effects of solutionizing time and temperature, quench rate, and cold working, and the effects of cold working on precipitation kinetics were examined. Findings show that the solutionizing temperature has a moderate effect on the as-quenched hardness, while times greater than that required for solutionizing do not significantly affect hardness. Quench rate was found to have a small effect on as-quenched hardness; however, hardness gradients did not develop in small bars. It was found that JBK-75 can be significantly strengthened by cold working. Cold working alone produced hardness increases from Rockwell-A (R_A) 49 to R_A 68. A recovery-related hardness change was noted on heat treating at 300 and 400 °C for both as-quenched and as-worked JBK-75. Significant age-hardening was observed at temperatures as low as 500 °C for as-worked metal. Aging at 600 °C resulted in maximum hardness in the 75 percent worked sample at about 6 hours (R_A 73.5), while the 50 percent worked sample was near maximum hardness (R_A 72.5) after seven days. The 25 and 0 percent worked samples were considerably underaged after seven days. Similar kinetic data were obtained for worked and nonworked metal at 650, 700, 800, 850, 900, 1000, and 1100 °C for times from 10 minutes to 10,000 minutes (6.7 days). The overall purpose of the hardness survey was to better define the effects of cold work on the stress-relieving range, coherent precipitation range, incoherent precipitation range, recrystallization range, solutionizing range, and grain-growth range.

  18. Control of Petri nets subject to strict temporal constraints using Max-Plus algebra

    Science.gov (United States)

    Tebani, K.; Amari, S.; Kara, R.

    2018-04-01

    In this paper, we treat the control problem of timed discrete event systems under temporal constraints. This type of constraint is very frequent in production systems, transportation networks and networked automation systems. Precisely, we are interested in the validation of strict temporal constraints imposed on the paths in a timed event graph (TEG) by using Max-Plus algebra. Not all transitions of the considered TEG model are controllable; only the input transitions are. An analytical approach for computing state feedback controllers is developed, and a sufficient condition is given for the existence of causal control laws satisfying the temporal constraints. First, a TEG with observable transitions is considered; the proposed approach is then extended to partially observable TEGs. The synthesized feedback can be interpreted as control places connected to the TEG to guarantee that the time constraints are respected. The proposed method is illustrated on an assembly system example.
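In Max-Plus algebra, the earliest firing dates of a TEG propagate through a matrix "product" in which addition plays the role of multiplication and max the role of addition. A minimal sketch of that product on a hypothetical two-transition graph (the holding times below are invented for illustration):

```python
NEG_INF = float("-inf")  # the max-plus "zero" (no arc between transitions)

def maxplus_matvec(A, x):
    """Max-plus matrix-vector product: (A (x) x)_i = max_j (A[i][j] + x[j]).
    Propagates earliest firing dates through a timed event graph."""
    return [max(a_ij + x_j for a_ij, x_j in zip(row, x)) for row in A]

# Toy 2-transition TEG: entries are arc holding times, NEG_INF means no arc
A = [[2.0, NEG_INF],
     [3.0, 1.0]]
x0 = [0.0, 0.0]          # initial firing dates
x1 = maxplus_matvec(A, x0)  # next firing dates: [2.0, 3.0]
```

State feedback in this setting adds a control term to delay input firings just enough that every constrained path meets its strict temporal bound.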

  19. Hard Real-Time Networking on Firewire

    NARCIS (Netherlands)

    Zhang, Yuchen; Orlic, Bojan; Visser, Peter; Broenink, Jan

    2005-01-01

    This paper investigates the possibility of using standard, low-cost, widely used FireWire as a new generation fieldbus medium for real-time distributed control applications. A real-time software subsystem, RT-FireWire, was designed that can, in combination with Linux-based real-time operating

  20. Chemical hardness and density functional theory

    Indian Academy of Sciences (India)

    Unknown

    RALPH G PEARSON. Chemistry Department, University of California, Santa Barbara, CA 93106, USA. Abstract. The concept of chemical hardness is reviewed from a personal point of view. Keywords. Hardness; softness; hard and soft acids and bases (HSAB); principle of maximum hardness (PMH); density functional theory (DFT) ...

  1. Hardness variability in commercial technologies

    International Nuclear Information System (INIS)

    Shaneyfelt, M.R.; Winokur, P.S.; Meisenheimer, T.L.; Sexton, F.W.; Roeske, S.B.; Knoll, M.G.

    1994-01-01

    The radiation hardness of commercial Floating Gate 256K E²PROMs from a single diffusion lot was observed to vary between 5 and 25 krad(Si) when irradiated at a low dose rate of 64 mrad(Si)/s. Additional variations in E²PROM hardness were found to depend on bias condition and failure mode (i.e., inability to read or write the memory), as well as the foundry at which the part was manufactured. This variability is related to system requirements, and it is shown that hardness level and variability affect the allowable mode of operation for E²PROMs in space applications. The radiation hardness of commercial 1-Mbit CMOS SRAMs from Micron, Hitachi, and Sony irradiated at 147 rad(Si)/s was approximately 12, 13, and 19 krad(Si), respectively. These failure levels appear to be related to increases in leakage current during irradiation. Hardness of SRAMs from each manufacturer varied by less than 20%, but differences between manufacturers are significant. The Qualified Manufacturer's List approach to radiation hardness assurance is suggested as a way to reduce variability and to improve the hardness level of commercial technologies.

  2. Spatial Mapping and Quantification of Soft and Hard Protein Coronas at Silver Nanocubes

    DEFF Research Database (Denmark)

    Miclaus, Teodora; Bochenkov, Vladimir; Ogaki, Ryosuke

    2014-01-01

    Protein coronas around silver nanocubes were quantified in serum-containing media using localized surface plasmon resonances. Both soft and hard coronas showed exposure-time- and concentration-dependent changes in protein surface density, with time-dependent hardening. We observed spatially dependent kinetics of corona formation at cube edges/corners versus facets at short incubation times, where the polymer stabilization agent delayed corona hardening. The soft corona contained more protein than the hard corona at all time-points (8-fold difference with 10% serum conditions).

  3. Tactile sensor of hardness recognition based on magnetic anomaly detection

    Science.gov (United States)

    Xue, Lingyun; Zhang, Dongfang; Chen, Qingguang; Rao, Huanle; Xu, Ping

    2018-03-01

    Hardness sensing, as one kind of tactile sensing, plays an important role in intelligent robot applications such as gripping, agricultural harvesting, prosthetic hands and so on. Recently, with the rapid development of high-performance magnetic field sensing technology, a number of magnetic sensors have been developed for intelligent applications. The tunnel magnetoresistance (TMR) element, based on the magnetoresistance principle, works as the sensitive element to detect the magnetic field, and it has proven its excellent capability for weak magnetic field detection. In this paper, a new method based on magnetic anomaly detection is proposed to detect hardness in a tactile way. The sensor is composed of an elastic body, a ferrous probe, a TMR element and a permanent magnet. When the elastic body embedded with the ferrous probe touches an object under a given force, the elastic body deforms. Correspondingly, the ferrous probe is displaced and the background magnetic field is distorted. The distorted magnetic field is detected by the TMR element and the output signal at different times can be sampled. The slope of the magnetic signal over the sampling time differs for objects of different hardness. The results indicated that the magnetic anomaly sensor can recognize hardness rapidly, within 150 ms after the tactile moment. The hardness sensor based on the magnetic anomaly detection principle proposed in this paper has the advantages of simple structure, low cost and rapid response, and it shows great application potential in the field of intelligent robots.

  4. Transportation Energy Futures Series: Vehicle Technology Deployment Pathways: An Examination of Timing and Investment Constraints

    Energy Technology Data Exchange (ETDEWEB)

    Plotkin, S.; Stephens, T.; McManus, W.

    2013-03-01

    Scenarios of new vehicle technology deployment serve various purposes; some will seek to establish plausibility. This report proposes two reality checks for scenarios: (1) implications of manufacturing constraints on timing of vehicle deployment and (2) investment decisions required to bring new vehicle technologies to market. An estimated timeline of 12 to more than 22 years from initial market introduction to saturation is supported by historical examples and based on the product development process. Researchers also consider the series of investment decisions to develop and build the vehicles and their associated fueling infrastructure. A proposed decision tree analysis structure could be used to systematically examine investors' decisions and the potential outcomes, including consideration of cash flow and return on investment. This method requires data or assumptions about capital cost, variable cost, revenue, timing, and probability of success/failure, and would result in a detailed consideration of the value proposition of large investments and long lead times. This is one of a series of reports produced as a result of the Transportation Energy Futures (TEF) project, a Department of Energy-sponsored multi-agency effort to pinpoint underexplored strategies for abating GHGs and reducing petroleum dependence related to transportation.

  5. Transportation Energy Futures Series. Vehicle Technology Deployment Pathways. An Examination of Timing and Investment Constraints

    Energy Technology Data Exchange (ETDEWEB)

    Plotkin, Steve [Argonne National Lab. (ANL), Argonne, IL (United States); Stephens, Thomas [Argonne National Lab. (ANL), Argonne, IL (United States); McManus, Walter [Oakland Univ., Rochester, MI (United States)

    2013-03-01

    Scenarios of new vehicle technology deployment serve various purposes; some will seek to establish plausibility. This report proposes two reality checks for scenarios: (1) implications of manufacturing constraints on timing of vehicle deployment and (2) investment decisions required to bring new vehicle technologies to market. An estimated timeline of 12 to more than 22 years from initial market introduction to saturation is supported by historical examples and based on the product development process. Researchers also consider the series of investment decisions to develop and build the vehicles and their associated fueling infrastructure. A proposed decision tree analysis structure could be used to systematically examine investors' decisions and the potential outcomes, including consideration of cash flow and return on investment. This method requires data or assumptions about capital cost, variable cost, revenue, timing, and probability of success/failure, and would result in a detailed consideration of the value proposition of large investments and long lead times. This is one of a series of reports produced as a result of the Transportation Energy Futures (TEF) project, a Department of Energy-sponsored multi-agency effort to pinpoint underexplored strategies for abating GHGs and reducing petroleum dependence related to transportation.
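At each chance node, the decision tree analysis described above reduces to probability-weighted outcomes. A minimal sketch of that calculation, with entirely hypothetical probabilities and net-present-value figures (the report itself does not supply these numbers):

```python
def expected_npv(p_success, npv_success, npv_failure):
    """Expected value at a chance node of an investment decision tree:
    probability-weighted net present value over success/failure branches."""
    return p_success * npv_success + (1.0 - p_success) * npv_failure

# Hypothetical: 60% chance the technology reaches market (NPV +500, in $M),
# 40% chance the program is written off (NPV -200, in $M)
ev = expected_npv(0.6, 500.0, -200.0)  # 220.0 ($M)
```

Chaining such nodes along the tree, with discounting applied to the long lead times, yields the kind of value-proposition comparison the report proposes.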

  6. The Time-Course of Morphological Constraints: Evidence from Eye-Movements during Reading

    Science.gov (United States)

    Cunnings, Ian; Clahsen, Harald

    2007-01-01

    Lexical compounds in English are constrained in that the non-head noun can be an irregular but not a regular plural (e.g. mice eater vs. *rats eater), a contrast that has been argued to derive from a morphological constraint on modifiers inside compounds. In addition, bare nouns are preferred over plural forms inside compounds (e.g. mouse eater…

  7. The Manpower Allocation Problem with Time Windows and Job-Teaming Constraints: A Branch-and-Price Approach

    DEFF Research Database (Denmark)

    Hansen, Anders Dohn; Kolind, Esben; Clausen, Jens

    2009-01-01

    In this paper, we consider the Manpower Allocation Problem with Time Windows, Job-Teaming Constraints and a limited number of teams (m-MAPTWTC). Given a set of teams and a set of tasks, the problem is to assign to each team a sequential order of tasks to maximize the total number of assigned tasks. Both teams and tasks may be restricted by time windows outside which operation is not possible. Some tasks require cooperation between teams, and all teams cooperating must initiate execution simultaneously. We present an IP-model for the problem, which is decomposed using Dantzig-Wolfe decomposition. The problem is solved by column generation in a Branch-and-Price framework. Simultaneous execution of tasks is enforced by the branching scheme. To test the efficiency of the proposed algorithm, 12 realistic test instances are introduced. The algorithm is able to find the optimal solution in 11 of the test instances.

  8. Bimanual microincision phacoemulsification in treating hard cataracts using different power modes.

    Science.gov (United States)

    Liu, Yizhi; Jiang, Yuzhen; Wu, Mingxing; Liu, Yuhua; Zhang, Tieying

    2008-07-01

    To compare the performance of the Multiburst mode, the Shortpulse mode and the Whitestar technology of the Sovereign platform in treating hard cataracts with bimanual microincision phacoemulsification, 101 eyes with hard cataracts (nuclear density Grade 3 and Grade 4 or above) were randomized into three groups. Bimanual microincision phacoemulsification was performed using the Multiburst mode, the Shortpulse mode and the Whitestar technology of the Sovereign phacoemulsification machine, respectively. The average power, total duration of ultrasonic power release (US Time), effective phaco time, complications, best-corrected visual acuity and rate of corneal endothelial cell loss were measured and compared among the study groups. For hard cataracts of various nuclear densities, average ultrasonic power was highest in the Whitestar group, followed by the Shortpulse group. The Multiburst group had the highest US Time, effective phaco time and rate of corneal endothelial cell loss, whereas the Whitestar group had the lowest. The differences between the groups were found to be statistically significant by analysis of variance and Fisher's least significant difference procedure. However, there was no significant difference between the US Time values of the Shortpulse group and the Whitestar group (P = 0.051). In the Multiburst group, wound burn occurred in one eye, and three eyes had abnormal fluctuations in anterior chamber depth. The Whitestar technology showed the best performance in this study. The Multiburst mode proved to be a relatively unsuitable ultrasonic power mode for treating hard cataracts with bimanual microincision phacoemulsification.

  9. An Extensive Evaluation of Portfolio Approaches for Constraint Satisfaction Problems

    Directory of Open Access Journals (Sweden)

    Roberto Amadini

    2016-06-01

    Full Text Available In the context of Constraint Programming, a portfolio approach exploits the complementary strengths of a portfolio of different constraint solvers. The goal is to predict and run the best solver(s) of the portfolio for solving a new, unseen problem. In this work we reproduce, simulate, and evaluate the performance of different portfolio approaches on extensive benchmarks of Constraint Satisfaction Problems. Empirical results clearly show the benefits of portfolio solvers in terms of both solved instances and solving time.

  10. The Influence Of Nitridation Temperature And Time On The Surface Hardness Of AISI 1010 Low Carbon Steels Nitrided By Means Of Plasma Glow Discharge Technique

    International Nuclear Information System (INIS)

    Sujitno, Tjipto; Mujiman, Supardjono

    1996-01-01

    The results of the influence of nitridation temperature and time on the surface hardness of AISI 1010 low carbon steels nitrided by means of the plasma glow discharge technique are presented in this paper. The results cover the change in surface hardness, the change in surface microstructure and the penetration depth profile. The experiments were carried out at temperatures of 400 °C, 450 °C, 500 °C, 550 °C, 570 °C and 600 °C, with nitridation times of 5, 15, 40, 90 and 180 minutes. All experiments were carried out at the optimum plasma density condition, which is achieved at a pressure of p = 0.2 torr, with a nitrogen gas flow of 0.6 liter/minute and an electrode plate distance of 4.5 cm. It was found that the optimum surface hardness was achieved at a temperature of 570 °C and a nitridation time of 90 minutes, i.e. 190 KHN.

  11. Towards optimized suppression of dephasing in systems subject to pulse timing constraints

    International Nuclear Information System (INIS)

    Hodgson, Thomas E.; D'Amico, Irene; Viola, Lorenza

    2010-01-01

    We investigate the effectiveness of different dynamical decoupling protocols for storage of a single qubit in the presence of a purely dephasing bosonic bath, with emphasis on comparing quantum coherence preservation under uniform versus nonuniform delay times between pulses. In the limit of instantaneous bit-flip pulses, this is accomplished by establishing a different representation of the controlled qubit evolution, where the decoherence behavior after an arbitrary number of pulses is directly expressed in terms of the uncontrolled decoherence function. In particular, analytical expressions are obtained for approximation of the long- and short-term coherence behavior for both Ohmic and supra-Ohmic environments. By focusing on the realistic case of pure dephasing in an excitonic qubit, we quantitatively assess the impact of physical constraints on achievable pulse separations, and show that little advantage of high-level decoupling schemes based on concatenated or optimal design may be expected if pulses cannot be applied sufficiently fast. In such constrained scenarios, we demonstrate how simple modifications of repeated periodic-echo protocols can offer significantly improved coherence preservation in realistic parameter regimes. We expect similar conclusions to be relevant to other constrained qubit devices exposed to quantum or classical phase noise.
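The uniform versus nonuniform pulse spacings compared in the abstract correspond, in standard dynamical-decoupling terms, to equidistant (CPMG-style) pulses and Uhrig (UDD) pulse times. A sketch of both well-known timing formulas, for total storage time T and n instantaneous bit-flip pulses:

```python
import math

def cpmg_times(T, n):
    """Uniform (CPMG-style) pulse times: t_j = T * (j - 1/2) / n."""
    return [T * (j - 0.5) / n for j in range(1, n + 1)]

def udd_times(T, n):
    """Nonuniform Uhrig (UDD) pulse times: t_j = T * sin^2(j*pi / (2n + 2)).
    Pulses cluster near the start and end of the storage interval."""
    return [T * math.sin(j * math.pi / (2 * n + 2)) ** 2 for j in range(1, n + 1)]
```

The abstract's point can be read directly off these formulas: UDD's earliest pulse arrives much sooner than CPMG's, so when hardware constraints impose a minimum pulse separation, the nonuniform schedule may not be realizable at the intended T.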

  12. The effect of gamma radiation on hardness evolution in high density polyethylene at elevated temperatures

    International Nuclear Information System (INIS)

    Chen, Pei-Yun; Chen, C.C.; Harmon, Julie P.; Lee, Sanboh

    2014-01-01

    This research focuses on characterizing hardness evolution in irradiated high density polyethylene (HDPE) at elevated temperatures. Hardness increases with increasing gamma ray dose, annealing temperature and annealing time. The hardness change is attributed to the variation of defects in microstructure and molecular structure. The kinetics of defects that control the hardness are assumed to follow the first order structure relaxation. The experimental data are in good agreement with the predicted model. The rate constant follows the Arrhenius equation, and the corresponding activation energy decreases with increasing dose. The defects that control hardness in post-annealed HDPE increase with increasing dose and annealing temperature. The structure relaxation of HDPE has a lower energy of mixing in crystalline regions than in amorphous regions. Further, the energy of mixing for defects that influence hardness in HDPE is lower than those observed in polycarbonate (PC), poly(methyl methacrylate) (PMMA) and poly (hydroxyethyl methacrylate) (HEMA). This is due to the fact that polyethylene is a semi-crystalline material, while PC, PMMA and PHEMA are amorphous. - Highlights: • Hardness of HDPE increases with increasing gamma ray dose, annealing time and temperature. • The hardness change arises from defects in microstructure and molecular structure. • Defects affecting hardness follow a kinetics of structure relaxation. • The structure relaxation has a low energy of mixing in crystalline regime
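The first-order relaxation kinetics with an Arrhenius rate constant described in the abstract can be sketched as follows. The functional forms are the generic ones; the saturation-hardness form of the relaxation and all parameter values below are illustrative assumptions, not the paper's fitted values:

```python
import math

R_GAS = 8.314  # gas constant, J/(mol K)

def rate_constant(A, Ea, T):
    """Arrhenius rate constant: k = A * exp(-Ea / (R*T)),
    with pre-exponential factor A (1/s) and activation energy Ea (J/mol)."""
    return A * math.exp(-Ea / (R_GAS * T))

def hardness(t, H0, H_inf, k):
    """First-order structure relaxation of hardness toward saturation:
    H(t) = H_inf - (H_inf - H0) * exp(-k*t)."""
    return H_inf - (H_inf - H0) * math.exp(-k * t)

# Hypothetical parameters: hardness rises from H0 toward H_inf during annealing
k = rate_constant(A=1.0e6, Ea=8.0e4, T=500.0)  # annealing at 500 K
H_after_1h = hardness(3600.0, H0=10.0, H_inf=15.0, k=k)
```

Under this form, a dose-dependent decrease in Ea (as reported) raises k at fixed temperature, so irradiated samples approach the saturation hardness faster.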

  13. The effect of gamma radiation on hardness evolution in high density polyethylene at elevated temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Pei-Yun [Department of Materials Science and Engineering, National Tsing Hua University, Hsinchu 300, Taiwan (China); Chen, C.C. [Institute of Nuclear Energy Research, Longtan, Taoyuan 325, Taiwan (China); Harmon, Julie P. [Department of Chemistry, University of South Florida, Tampa, FL 33620 (United States); Lee, Sanboh, E-mail: sblee@mx.nthu.edu.tw [Department of Materials Science and Engineering, National Tsing Hua University, Hsinchu 300, Taiwan (China)

    2014-08-01

    This research focuses on characterizing hardness evolution in irradiated high density polyethylene (HDPE) at elevated temperatures. Hardness increases with increasing gamma ray dose, annealing temperature and annealing time. The hardness change is attributed to the variation of defects in microstructure and molecular structure. The kinetics of defects that control the hardness are assumed to follow the first order structure relaxation. The experimental data are in good agreement with the predicted model. The rate constant follows the Arrhenius equation, and the corresponding activation energy decreases with increasing dose. The defects that control hardness in post-annealed HDPE increase with increasing dose and annealing temperature. The structure relaxation of HDPE has a lower energy of mixing in crystalline regions than in amorphous regions. Further, the energy of mixing for defects that influence hardness in HDPE is lower than those observed in polycarbonate (PC), poly(methyl methacrylate) (PMMA) and poly (hydroxyethyl methacrylate) (HEMA). This is due to the fact that polyethylene is a semi-crystalline material, while PC, PMMA and PHEMA are amorphous. - Highlights: • Hardness of HDPE increases with increasing gamma ray dose, annealing time and temperature. • The hardness change arises from defects in microstructure and molecular structure. • Defects affecting hardness follow a kinetics of structure relaxation. • The structure relaxation has a low energy of mixing in crystalline regime.

  14. Real-Time Optimization of a maturing North Sea gas asset with production constraints

    NARCIS (Netherlands)

    Linden, R.J.P. van der; Busking, T.E.

    2013-01-01

    As gas and oil fields mature their operation becomes increasingly more complex, due to complex process dynamics, like slugging, gas coning, water breakthrough, salt or hydrate deposition. Moreover these phenomena also lead to production constraints in the upstream facilities. This complexity asks

  15. A Selfish Constraint Satisfaction Genetic Algorithm for Planning a Long-Distance Transportation Network

    Science.gov (United States)

    Onoyama, Takashi; Maekawa, Takuya; Kubota, Sen; Tsuruta, Setuso; Komoda, Norihisa

    To build a cooperative logistics network covering multiple enterprises, a planning method that can build a long-distance transportation network is required. Many strict constraints are imposed on this type of problem. To solve these strictly constrained problems, a selfish constraint satisfaction genetic algorithm (GA) is proposed. In this GA, each gene of an individual satisfies only its own constraint selfishly, disregarding the constraints of other genes in the same individual. Moreover, a constraint pre-checking method is also applied to improve the GA convergence speed. The experimental results show that the proposed method can obtain an accurate solution in a practical response time.

  16. An Efficient Energy Constraint Based UAV Path Planning for Search and Coverage

    Directory of Open Access Journals (Sweden)

    German Gramajo

    2017-01-01

    Full Text Available A path planning strategy for a search and coverage mission for a small UAV that maximizes the area covered based on stored energy and maneuverability constraints is presented. The proposed formulation has a high level of autonomy, without requiring an exact choice of optimization parameters, and is appropriate for real-time implementation. The computed trajectory maximizes spatial coverage while closely satisfying terminal constraints on the position of the vehicle and minimizing the time of flight. Comparisons of this formulation to a time-constraint-based path planning algorithm show equivalent coverage performance but improved prediction of overall mission duration and accuracy of the terminal position of the vehicle.

  17. Strong Bisimilarity and Regularity of Basic Parallel Processes is PSPACE-Hard

    DEFF Research Database (Denmark)

    Srba, Jirí

    2002-01-01

    We show that the problem of checking whether two processes definable in the syntax of Basic Parallel Processes (BPP) are strongly bisimilar is PSPACE-hard. We also demonstrate that there is a polynomial time reduction from the strong bisimilarity checking problem of regular BPP to the strong regularity (finiteness) checking of BPP. This implies that strong regularity of BPP is also PSPACE-hard.

  18. Distributed Task Rescheduling With Time Constraints for the Optimization of Total Task Allocations in a Multirobot System.

    Science.gov (United States)

    Turner, Joanna; Meng, Qinggang; Schaefer, Gerald; Whitbrook, Amanda; Soltoggio, Andrea

    2017-09-28

    This paper considers the problem of maximizing the number of task allocations in a distributed multirobot system under strict time constraints, where other optimization objectives need also be considered. It builds upon existing distributed task allocation algorithms, extending them with a novel method for maximizing the number of task assignments. The fundamental idea is that a task assignment to a robot has a high cost if its reassignment to another robot creates a feasible time slot for unallocated tasks. Multiple reassignments among networked robots may be required to create a feasible time slot and an upper limit to this number of reassignments can be adjusted according to performance requirements. A simulated rescue scenario with task deadlines and fuel limits is used to demonstrate the performance of the proposed method compared with existing methods, the consensus-based bundle algorithm and the performance impact (PI) algorithm. Starting from existing (PI-generated) solutions, results show up to a 20% increase in task allocations using the proposed method.

  19. Effect of gum hardness on chewing pattern.

    Science.gov (United States)

    Plesh, O; Bishop, B; McCall, W

    1986-06-01

    Chewing rhythms are set by a putative central pattern generator whose output is influenced by sensory feedback. In this study we assessed how an altered feedback imposed by changing the hardness of a gum bolus modifies the timing of chewing, the maximal gape, and the activity in the masseter muscle on the chewing side. Ten adult subjects with no orofacial dysfunction chewed a standard piece of soft or hard gum for at least 3 min in random order. Vertical jaw movements were recorded with a kinesiograph and activity of the masseter muscle was recorded and integrated from surface EMG electrodes. The subjects sat in a dental chair and viewed a video lecture to distract their attention from chewing; they were instructed to chew on the right molars. Cycle-by-cycle analysis showed that 9 of the 10 subjects chewed the hard gum more slowly than the soft with no significant change in gape. The increases in cycle duration were due to changes in the duration of the opening and occlusal phases. The duration of closing was not significantly changed even though the duration and level of masseter activity were both significantly increased. We conclude that gum hardness by altering proprioceptive feedback modifies the output of the masticatory central pattern generator in such a way that the temporal aspects of chewing and the output of the masseteric motor pool are affected.

  20. Computational search for rare-earth free hard-magnetic materials

    Science.gov (United States)

    Flores Livas, José A.; Sharma, Sangeeta; Dewhurst, John Kay; Gross, Eberhard; MagMat Team

    2015-03-01

    It is difficult to overstate the importance of hard magnets for modern life; they enter every part of it, from medical equipment (NMR) to transport (trains, planes, cars, etc.) to electronic appliances (from household use to computers). All the hard magnets in use today contain rare-earth elements, whose extraction is expensive and environmentally harmful. Rare-earths are also instrumental in tipping the balance of the world economy, as most of them are mined in a few specific parts of the world. Hence it would be ideal to have materials with the characteristics of a hard magnet but without, or at least with a reduced amount of, rare-earths. This is the main goal of our work: the search for rare-earth-free magnets. To do so we employ a combination of density functional theory and crystal prediction methods. The quantities that define a hard magnet are the magnetic anisotropy energy (MAE) and the saturation magnetization (Ms), and these are the quantities we maximize in the search for an ideal magnet. In my talk I will present details of the computational search algorithm together with some potential newly discovered rare-earth-free hard magnets. J.A.F.L. acknowledges financial support from the EU's 7th Framework Marie-Curie scholarship program within the ``ExMaMa'' Project (329386).

  1. A NICER Look at the Aql X-1 Hard State

    Science.gov (United States)

    Bult, Peter; Arzoumanian, Zaven; Cackett, Edward M.; Chakrabarty, Deepto; Gendreau, Keith C.; Guillot, Sebastien; Homan, Jeroen; Jaisawal, Gaurava K.; Keek, Laurens; Kenyon, Steve; Lamb, Frederick K.; Ludlam, Renee; Mahmoodifar, Simin; Markwardt, Craig; Miller, Jon M.; Prigozhin, Gregory; Soong, Yang; Strohmayer, Tod E.; Uttley, Phil

    2018-05-01

    We report on a spectral-timing analysis of the neutron star low-mass X-ray binary (LMXB) Aql X-1 with the Neutron Star Interior Composition Explorer (NICER) on the International Space Station (ISS). Aql X-1 was observed with NICER during a dim outburst in 2017 July, collecting approximately 50 ks of good exposure. The spectral and timing properties of the source correspond to those of a (hard) extreme island state in the atoll classification. We find that the fractional amplitude of the low-frequency noise depends on both the soft thermal emission and the power-law emission. Additionally, we measure hard time lags, indicating that the thermal emission at 0.5 keV leads the power-law emission at 10 keV on a timescale of ∼100 ms at 0.3 Hz to ∼10 ms at 3 Hz. Our results demonstrate that the thermal emission in the hard state is intrinsically variable, and is driving the modulation of the higher-energy power-law. Interpreting the thermal spectrum as disk emission, we find that our results are consistent with the disk propagation model proposed for accretion onto black holes.

  2. Data Driven Constraints for the SVM

    DEFF Research Database (Denmark)

    Darkner, Sune; Clemmensen, Line Katrine Harder

    2012-01-01

    We propose a generalized data driven constraint for support vector machines exemplified by classification of paired observations in general and specifically on the human ear canal. This is particularly interesting in dynamic cases such as tissue movement or pathologies developing over time. Assum...

  3. Solving constraint satisfaction problems with networks of spiking neurons

    Directory of Open Access Journals (Sweden)

    Zeno eJonke

    2016-03-01

    Full Text Available Networks of neurons in the brain apply – unlike processors in our current generation of computer hardware – an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turned out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes (rather than rates of spikes). We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a quite transparent manner by composing the network out of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes (rather than just spike rates) plays an essential role in their computations. Furthermore, for the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.
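As a toy illustration of shaping an energy function out of simple motifs (a minimal sketch, not the paper's spiking network model), the code below solves 3-colouring of a triangle with stochastic binary units: one unit per (node, colour) pair, a winner-take-all penalty per node, an inhibitory penalty per edge, and noisy Glauber-style single-unit updates.

```python
import math
import random

def energy(state, edges, n, k, A=2.0, B=2.0):
    """Energy composed from two motifs: A penalizes nodes without exactly
    one active colour unit (WTA motif), B penalizes matching colours on an edge."""
    e = 0.0
    for v in range(n):
        e += A * (sum(state[v * k + c] for c in range(k)) - 1) ** 2
    for u, v in edges:
        e += B * sum(state[u * k + c] * state[v * k + c] for c in range(k))
    return e

def solve(edges, n, k, temp=0.5, max_updates=200000, seed=1):
    """Stochastic search: at each update a single binary unit flips with a
    sigmoidal probability of the resulting energy change (Glauber dynamics)."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n * k)]
    for _ in range(max_updates):
        if energy(state, edges, n, k) == 0:
            break  # all constraints satisfied
        i = rng.randrange(n * k)
        flipped = state[:]
        flipped[i] ^= 1
        dE = energy(flipped, edges, n, k) - energy(state, edges, n, k)
        if rng.random() < 1.0 / (1.0 + math.exp(dE / temp)):
            state = flipped  # noise lets the search escape local minima
    return state
```

Zero energy certifies a proper colouring; the noise term plays the computational role the abstract describes, although a real spiking implementation would operate on spike timings rather than on explicit energy evaluations.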

  4. Logic-based methods for optimization combining optimization and constraint satisfaction

    CERN Document Server

    Hooker, John

    2011-01-01

    A pioneering look at the fundamental role of logic in optimization and constraint satisfaction While recent efforts to combine optimization and constraint satisfaction have received considerable attention, little has been said about using logic in optimization as the key to unifying the two fields. Logic-Based Methods for Optimization develops for the first time a comprehensive conceptual framework for integrating optimization and constraint satisfaction, then goes a step further and shows how extending logical inference to optimization allows for more powerful as well as flexible

  5. General constraints on the age and chemical evolution of the Galaxy

    International Nuclear Information System (INIS)

    Meyer, B.S.; Schramm, D.N.

    1986-05-01

    The formalism of Schramm and Wasserburg (1970) for determining the mean age of the elements is extended. Model-independent constraints (constraints that are independent of a specific form for the effective nucleosynthesis rate and Galactic chemical evolution over time) are derived on the first four terms in the expansion giving the mean age of the elements, and from these constraints limits are derived on the total duration of nucleosynthesis. These limits require only input of the Schramm-Wasserburg parameter Δ_max and of the ratio of the mean time for formation of the elements to the total duration of nucleosynthesis, t_ν/T. The former quantity is a function of nuclear input parameters. Limits on the latter are obtained from constraints on the relative rate of nucleosynthesis derived from the ²³²Th/²³⁸U, ²³⁵U/²³⁸U, and shorter-lived chronometric pairs. 65 refs

  6. Vibronic Rabi resonances in harmonic and hard-wall ion traps for arbitrary laser intensity and detuning

    International Nuclear Information System (INIS)

    Lizuain, I.; Muga, J. G.

    2007-01-01

    We investigate laser-driven vibronic transitions of a single two-level atomic ion in harmonic and hard-wall traps. In the Lamb-Dicke regime, for tuned or detuned lasers with respect to the internal frequency of the ion, and weak or strong laser intensities, the vibronic transitions occur at well-isolated Rabi resonances, where the detuning-adapted Rabi frequency coincides with the transition frequency between vibrational modes. These vibronic resonances are characterized as avoided crossings of the dressed levels (eigenvalues of the full Hamiltonian). Their peculiarities due to symmetry constraints and trapping potential are also examined

  7. Robust and Energy-Efficient Ultra-Low-Voltage Circuit Design under Timing Constraints in 65/45 nm CMOS

    Directory of Open Access Journals (Sweden)

    David Bol

    2011-01-01

    Full Text Available Ultra-low-voltage operation improves energy efficiency of logic circuits by a factor of 10×, at the expense of speed, which is acceptable for applications with low-to-medium performance requirements such as RFID, biomedical devices and wireless sensors. However, in 65/45 nm CMOS, variability and short-channel effects significantly harm robustness and timing closure of ultra-low-voltage circuits by reducing noise margins and jeopardizing gate delays. The consequent guardband on the supply voltage to meet a reasonable manufacturing yield potentially ruins energy efficiency. Moreover, high leakage currents in these technologies degrade energy efficiency in case of long stand-by periods. In this paper, we review recently published techniques to design robust and energy-efficient ultra-low-voltage circuits in 65/45 nm CMOS under relaxed yet strict timing constraints.

  8. Induced spherococcoid hard wheat

    International Nuclear Information System (INIS)

    Yanev, Sh.

    1981-01-01

    A mutant spherococcoid line has been obtained through irradiation of hard wheat seed with fast neutrons. It is distinguished by semispherical glumes and smaller grain; the plants have a low stem with erect leaves but with shorter spikes and a smaller number of spikelets than those of the initial cultivar. Good productive tillering and resistance to lodging contributed to a 23.5% higher yield. The line was superior to the standard and the initial cultivars by 14.2% as regards protein content, and by up to 22.8% as to flour gluten. It has been successfully used in hybridization, producing high-yielding hard wheat lines resistant to lodging, with good technological and other indicators. The possibility is thus demonstrated of obtaining a spherococcoid mutant in tetraploid (hard) wheat lacking the D-genome, as well as of its use in hard wheat breeding to enhance protein content, resistance to lodging, etc. (author)

  9. System Integration for Real-Time Mobile Manipulation

    Directory of Open Access Journals (Sweden)

    Reza Oftadeh

    2014-03-01

    Full Text Available Mobile manipulators are one of the most complicated types of mechatronics systems. The performance of these robots in performing complex manipulation tasks is highly correlated with the synchronization and integration of their low-level components. This paper discusses in detail the mechatronics design of a four wheel steered mobile manipulator. It presents the manipulator's mechanical structure and electrical interfaces, designs low-level software architecture based on embedded PC-based controls, and proposes a systematic solution based on code generation products of MATLAB and Simulink. The remote development environment described here is used to develop real-time controller software and modules for the mobile manipulator under a POSIX-compliant, real-time Linux operating system. Our approach enables developers to reliably design controller modules that meet the hard real-time constraints of the entire low-level system architecture. Moreover, it provides a systematic framework for the development and integration of hardware devices with various communication mediums and protocols, which facilitates the development and integration process of the software controller.

  10. The effects of perceived leisure constraints among Korean university students

    Science.gov (United States)

    Sae-Sook Oh; Sei-Yi Oh; Linda L. Caldwell

    2002-01-01

    This study is based on Crawford, Jackson, and Godbey's model of leisure constraints (1991), and examines the relationships between the influences of perceived constraints, frequency of participation, and health status in the context of leisure-time outdoor activities. The study was based on a sample of 234 Korean university students. This study provides further...

  11. Time-lapse electrical surveys to locate infiltration zones in weathered hard rock tropical areas

    Science.gov (United States)

    Wubda, M.; Descloitres, M.; Yalo, N.; Ribolzi, O.; Vouillamoz, J. M.; Boukari, M.; Hector, B.; Séguis, L.

    2017-07-01

    In West Africa, infiltration and groundwater recharge processes in hard rock areas depend on climatic, surface and subsurface conditions, and are poorly documented. Part of the reason is that identifying, locating and monitoring these processes is still a challenge. Here, we explore the potential of time-lapse electrical surveys to bring additional information on these processes in two different climate situations: a semi-arid Sahelian site (north of Burkina Faso) and a humid Sudanian site (north of Benin), focusing respectively on indirect (localized) and direct (diffuse) recharge processes. The methodology is based on dry-season and rainy-season surveys of a typical pond or gully using Electrical Resistivity Tomography (ERT) and frequency electromagnetic (FEM) apparent conductivity mapping. The results show that in the Sahelian zone an indirect recharge occurs as expected, but infiltration does not take place from the center of the pond to the aquifer; it occurs laterally through the banks. In the Sudanian zone, the ERT survey shows a direct recharge process as expected, but also a complicated pattern of groundwater dilution, as well as the role of hardpans in fast infiltration. These processes are corroborated by groundwater monitoring in adjacent observation wells. FEM time-lapse mapping, in contrast, is found to be difficult to interpret quantitatively due to the non-uniqueness of the model, clearly evidenced by comparing FEM results to auger-hole monitoring. Finally, we found that time-lapse ERT can be an efficient way to track infiltration processes across ponds and gullies in both climatic conditions, the Sahelian setting providing results that are easier to interpret owing to significant resistivity contrasts between the dry and rainy seasons. Both methods can be used for efficient placement of point sensors for complementary studies.
However, FEM time-lapse mapping remains difficult to practice without external information, which renders this method less attractive for

  12. What drives the efficiency of hard coal fuelled electricity generation? : an empirical assessment

    OpenAIRE

    Hoffmann, Tim; Voigt, Sebastian

    2009-01-01

    The efficiency of electricity generation in hard coal fired power plants varies considerably from country to country and over time. These differences occur both between developing and developed countries and between industrialised nations. The econometric analysis presented in this paper tests for the reasons of these discrepancies. In this examination abundance of hard coal and the price of hard coal are the two variables of our major interest. We assume that countries with an abundance of h...

  13. Open heavy flavor and other hard probes in ultra-relativistic heavy-ion collisions

    OpenAIRE

    Uphoff, Jan

    2014-01-01

    In this thesis hard probes are studied in the partonic transport model BAMPS (Boltzmann Approach to MultiParton Scatterings). Employing Monte Carlo techniques, this model describes the 3+1 dimensional evolution of the quark gluon plasma phase in ultra-relativistic heavy-ion collisions by propagating all particles in space and time and carrying out their collisions according to the Boltzmann equation. Since hard probes are produced in hard processes with a large momentum transfer, the value of...

  14. Synergy of modeling processes in the area of soft and hard modeling

    Directory of Open Access Journals (Sweden)

    Sika Robert

    2017-01-01

    Full Text Available The high complexity of production processes results in more frequent use of computer systems for their modeling and simulation. Process modeling helps to find an optimal solution, verify assumptions before implementation and eliminate errors. In practice, the modeling of production processes concerns two areas: hard modeling (based on differential equations of mathematical physics) and soft modeling (based on existing data). In the paper the possibility of a synergistic connection of these two approaches is indicated: hard modeling supported by the tools used in soft modeling. The aim is to significantly reduce the time needed to obtain final results with hard modeling. Tests were carried out in the Calibrate module of the NovaFlow&Solid (NF&S) simulation system in the frame of thermal analysis (ATAS-cup). The authors tested the forecasting of output values in the NF&S system (solidification time) on the basis of variable parameters of the thermal model (heat conduction, specific heat, density). The collected data were used as input to prepare a soft model based on an MLP (Multi-Layer Perceptron) neural network regression model. The approach described above makes it possible to reduce the time of production process modeling with hard modeling and should encourage production companies to use it.

  15. Knoop hardness of ten resin composites irradiated with high-power LED and quartz-tungsten-halogen lights.

    Science.gov (United States)

    Price, Richard B T; Felix, Corey A; Andreou, Pantelis

    2005-05-01

    This study compared a high-power light-emitting-diode (LED) curing light (FreeLight 2, 3M ESPE) with a quartz-tungsten-halogen (QTH) light (TriLight, 3M ESPE) to determine which was better at photo-polymerising 10 resin composites. Class I preparations were cut 4 mm deep into human teeth and filled with 10 different composites. The composites were irradiated for 50% or 100% of their recommended times using the LED light, and for 100% of their recommended times with the QTH light on either the high or medium power setting. Fifteen minutes later, the Knoop hardness of the composites was measured to a depth of 3.5 mm from the surface. When irradiated by the LED light for their recommended curing times, the Knoop hardness of all 10 composites stayed above 80% of the maximum hardness of the composite to a depth of at least 1.5 mm; three composites maintained a Knoop hardness that was more than 80% of their maximum hardness to a depth of 3.5 mm. Repeated-measures analysis of variance indicated that all the two-way and three-way interactions between the curing light, depth, and composite were significant for the hardness values. The LED light, used for the composite manufacturer's recommended time, was ranked the best at curing the composites to a depth of 3 mm.

  16. Combining experimental and cosmological constraints on heavy neutrinos

    Directory of Open Access Journals (Sweden)

    Marco Drewes

    2017-08-01

    Full Text Available We study experimental and cosmological constraints on the extension of the Standard Model by three right handed neutrinos with masses between those of the pion and W boson. We combine for the first time direct, indirect and cosmological constraints in this mass range. This includes experimental constraints from neutrino oscillation data, neutrinoless double β decay, electroweak precision data, lepton universality, searches for rare lepton decays, tests of CKM unitarity and past direct searches at colliders or fixed target experiments. On the cosmological side, big bang nucleosynthesis has the most pronounced impact. Our results can be used to evaluate the discovery potential of searches for heavy neutrinos at LHCb, BELLE II, SHiP, ATLAS, CMS or a future lepton collider.

  17. 2TB hard disk drive

    CERN Multimedia

    This particular object was used up until 2012 in the Data Centre. It slots into one of the Disk Server trays. Hard disks were invented in the 1950s. They started as large disks up to 20 inches in diameter holding just a few megabytes. They were originally called "fixed disks" or "Winchesters" (a code name used for a popular IBM product). They later became known as "hard disks" to distinguish them from "floppy disks." Hard disks have a hard platter that holds the magnetic medium, as opposed to the flexible plastic film found in tapes and floppies.

  18. Chance-Constrained Guidance With Non-Convex Constraints

    Science.gov (United States)

    Ono, Masahiro

    2011-01-01

    Missions to small bodies, such as comets or asteroids, require autonomous guidance for descent to these small bodies. Such guidance is made challenging by uncertainty in the position and velocity of the spacecraft, as well as the uncertainty in the gravitational field around the small body. In addition, the requirement to avoid collision with the asteroid represents a non-convex constraint that means finding the optimal guidance trajectory, in general, is intractable. In this innovation, a new approach is proposed for chance-constrained optimal guidance with non-convex constraints. Chance-constrained guidance takes into account uncertainty so that the probability of collision is below a specified threshold. In this approach, a new bounding method has been developed to obtain a set of decomposed chance constraints that is a sufficient condition of the original chance constraint. The decomposition of the chance constraint enables its efficient evaluation, as well as the application of the branch and bound method. Branch and bound enables non-convex problems to be solved efficiently to global optimality. Considering the problem of finite-horizon robust optimal control of dynamic systems under Gaussian-distributed stochastic uncertainty, with state and control constraints, a discrete-time, continuous-state linear dynamics model is assumed. Gaussian-distributed stochastic uncertainty is a more natural model for exogenous disturbances such as wind gusts and turbulence than the previously studied set-bounded models. However, with stochastic uncertainty, it is often impossible to guarantee that state constraints are satisfied, because there is typically a non-zero probability of having a disturbance that is large enough to push the state out of the feasible region. An effective framework to address robustness with stochastic uncertainty is optimization with chance constraints. 
These require that the probability of violating the state constraints (i.e., the probability of
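The decomposition idea can be illustrated with a small sketch (a hypothetical one-dimensional example, not the innovation's actual bounding method): a joint chance constraint is split into per-step constraints via Boole's inequality, and each per-step constraint on a Gaussian state is converted into a deterministic tightening of the original bound.

```python
from statistics import NormalDist

def tightened_bounds(limit, sigmas, total_risk):
    """Sufficient condition for P(any x_k > limit) <= total_risk when each
    x_k is Gaussian about its nominal value with std. dev. sigmas[k].

    Boole's inequality lets us allocate total_risk uniformly over the
    steps; each per-step chance constraint then becomes a deterministic
    bound on the nominal state, tightened by z standard deviations."""
    eps_k = total_risk / len(sigmas)        # uniform risk allocation
    z = NormalDist().inv_cdf(1.0 - eps_k)   # per-step safety margin
    return [limit - z * s for s in sigmas]
```

A nominal trajectory keeping every x_k below its returned bound satisfies the original joint chance constraint; the conservatism of the decomposition grows with the per-step uncertainty, which is what motivates the tighter bounding and branch-and-bound machinery described above.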

  19. Janka hardness using nonstandard specimens

    Science.gov (United States)

    David W. Green; Marshall Begel; William Nelson

    2006-01-01

    Janka hardness determined on 1.5- by 3.5-in. specimens (2×4s) was found to be equivalent to that determined using the 2- by 2-in. specimen specified in ASTM D 143. Data are presented on the relationship between Janka hardness and the strength of clear wood. Analysis of historical data determined using standard specimens indicated no difference between side hardness...

  20. On squaring the primary constraints in a generalized Hamiltonian dynamics

    International Nuclear Information System (INIS)

    Nesterenko, V.V.

    1993-01-01

    Consideration of the model of a relativistic particle with curvature and torsion in three-dimensional space-time shows that squaring the primary constraints leads to an incorrect result. The complete set of Hamiltonian constraints arising here corresponds to another model, with an action similar to, but not identical with, the initial action. 16 refs

  1. Hardness of high-pressure high-temperature treated single-walled carbon nanotubes

    International Nuclear Information System (INIS)

    Kawasaki, S.; Nojima, Y.; Yokomae, T.; Okino, F.; Touhara, H.

    2007-01-01

    We have performed high-pressure high-temperature (HPHT) treatments of high quality single-walled carbon nanotubes (SWCNTs) over a wide pressure-temperature range up to 13 GPa-873 K and have investigated the hardness of the HPHT-treated SWCNTs using a nanoindentation technique. It was found that the hardness of the SWCNTs treated at pressures greater than 11 GPa and at temperatures higher than 773 K is about 10 times greater than that of the SWCNTs treated at low temperature. It was also found that the hardness change of the SWCNTs is related to the structural change by the HPHT treatments which was based on synchrotron X-ray diffraction measurements

  2. Crack-tip constraint analyses and constraint-dependent LBB curves for circumferential through-wall cracked pipes

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y.L.; Wang, G.Z., E-mail: gzwang@ecust.edu.cn; Xuan, F.Z.; Tu, S.T.

    2015-04-15

    Highlights: • A solution of the constraint parameter τ* for through-wall cracked pipes has been obtained. • Constraint increases with increasing crack length and radius–thickness ratio of pipes. • A constraint-dependent LBB curve for through-wall cracked pipes has been constructed. • To increase the accuracy of LBB assessments, the constraint effect should be considered. - Abstract: The leak-before-break (LBB) concept has been widely applied in the structural integrity assessments of pressurized pipes in nuclear power plants. However, crack-tip constraint effects cannot be incorporated in conventional LBB analyses and designs. In this paper, by using three-dimensional finite element calculations, the modified load-independent T-stress constraint parameter τ* for circumferential through-wall cracked pipes with different geometries and crack sizes has been analyzed under different loading conditions, and solutions of the crack-tip constraint parameter τ* have been obtained. Based on the τ* solutions and constraint-dependent J–R curves of a steel, constraint-dependent LBB curves have been constructed. The results show that the constraint τ* increases with increasing crack length θ, mean radius R_m and radius–thickness ratio R_m/t of the pipes. In LBB analyses, the critical crack length calculated with the J–R curve of the standard high-constraint specimen is over-conservative for pipes with shorter cracks, and the degree of conservatism increases with decreasing crack length θ, R_m and R_m/t. Therefore, constraint-dependent LBB curves should be constructed to correct this over-conservatism and increase the accuracy of LBB assessments.

  3. Determining the Effect of Material Hardness During the Hard Turning of AISI4340 Steel

    Science.gov (United States)

    Kambagowni, Venkatasubbaiah; Chitla, Raju; Challa, Suresh

    2018-05-01

    In today's manufacturing industries, hardened steels are widely used in applications such as tool and mould design, which extends the application range of hard turning of hardened steels. This study discusses the impact of workpiece hardness, feed and depth of cut on arithmetic mean roughness (Ra), root mean square roughness (Rq), mean depth of roughness (Rz) and total roughness (Rt) during hard turning. Experiments were planned according to the Box-Behnken design and conducted on hardened AISI4340 steel at 45, 50 and 55 HRC with wiper ceramic cutting inserts. Cutting speed was kept constant throughout the study. Analysis of variance was used to determine the effects of the machining parameters, and 3-D response surface plots based on RSM were used to establish the input-output relationships. The results indicate that feed rate is the most significant parameter for Ra, Rq and Rz, while hardness is the most significant parameter for Rt. Furthermore, hardness shows an influence on all the surface roughness characteristics.

  4. Hardness and microstructure analysis of damaged gear caused by adhesive wear

    Science.gov (United States)

    Mahendra, Rizky Budi; Nugroho, Sri; Ismail, Rifky

    2018-03-01

    This study resulted from a project to repair a damaged elevator gearbox at a flourmill factory, and its objective is to analyze the failed part of that gearbox. The equipment was damaged after one year of installation and operation. Severe wear had occurred on the high-speed helical gear, one of the main parts of the elevator gearbox in flour mill manufacturing. Visually, no plastic deformation was observed on the failed helical gear shaft. Tests were performed to check the chemical composition, microstructure and hardness of the failed gear. The material of the failed helical gear shaft was a medium-carbon alloy steel. The microstructure showed a martensitic phase from the surface to the center of the gear shaft. However, the hardened surface layer was shallow, and this lack of case depth was the main trigger of the severe wear: it was not sufficient to resist the wear caused by rolling and sliding friction between the high-speed and low-speed gears. Increasing the surface hardness and the depth of the hardened layer will give the component a longer service life. Further research is needed to analyze the reliability of an enhanced hardened layer and case depth on the helical gear shaft.

  5. Task Mapping and Bandwidth Reservation for Mixed Hard/Soft Fault-Tolerant Embedded Systems

    DEFF Research Database (Denmark)

    Saraswat, Prabhat Kumar; Pop, Paul; Madsen, Jan

    2010-01-01

    reserved for the servers determines the quality of service (QoS) for soft tasks. CBS enforces temporal isolation, such that soft task overruns do not affect the timing guarantees of hard tasks. Transient faults in hard tasks are tolerated using checkpointing with rollback recovery. We have proposed a Tabu...

  6. Exact and Heuristic Algorithms for Routing AGV on Path with Precedence Constraints

    Directory of Open Access Journals (Sweden)

    Liang Xu

    2016-01-01

    Full Text Available A new problem arises when an automated guided vehicle (AGV) is dispatched to visit a set of customers, who are usually located along a fixed wire that transmits the signal used to navigate the AGV. An optimal visiting sequence is desired, with the objective of minimizing the total travelling distance (or time). When precedence constraints are imposed on the customers, the problem is referred to as the traveling salesman problem on a path with precedence constraints (TSPP-PC); whether or not it is NP-complete has no answer in the literature. In this paper, we design a dynamic program for the TSPP-PC, which is the first polynomial-time exact algorithm when the number of precedence constraints is a constant. For the problem in which the number of precedence constraints is part of the input and can be arbitrarily large, we provide an efficient heuristic based on the exact algorithm.
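A minimal sketch of such a dynamic program (a standard Held-Karp-style formulation over visited subsets, not necessarily the authors' exact construction): a customer may be appended to a partial path only once all of its predecessors are already in the visited set.

```python
def tspp_pc(dist, prec, start=0):
    """Shortest Hamiltonian path from `start` respecting precedence pairs
    (a, b), meaning a must be visited before b.

    `dist` is an n x n distance matrix.  O(2^n * n^2) dynamic program
    over (visited-set, last-node) states."""
    n = len(dist)
    FULL = 1 << n
    INF = float('inf')
    # before[b] = bitmask of nodes that must precede b
    before = [0] * n
    for a, b in prec:
        before[b] |= 1 << a
    dp = [[INF] * n for _ in range(FULL)]
    dp[1 << start][start] = 0
    for mask in range(FULL):
        for last in range(n):
            d = dp[mask][last]
            if d == INF:
                continue
            for nxt in range(n):
                if mask >> nxt & 1:
                    continue  # already visited
                if before[nxt] & ~mask:
                    continue  # some predecessor not visited yet
                nmask = mask | 1 << nxt
                nd = d + dist[last][nxt]
                if nd < dp[nmask][nxt]:
                    dp[nmask][nxt] = nd
    return min(dp[FULL - 1])
```

On a 4-customer instance where the unconstrained optimum is the path 0-1-2-3, adding the single precedence pair (3, 1) forces the detour 0-3-2-1 at a higher cost, illustrating how the precedence mask prunes the state space.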

  7. Misconceptions and constraints

    International Nuclear Information System (INIS)

    Whitten, M.; Mahon, R.

    2005-01-01

    In theory, the sterile insect technique (SIT) is applicable to a wide variety of invertebrate pests. However, in practice, the approach has been successfully applied to only a few major pests. Chapters in this volume address possible reasons for this discrepancy, e.g. Klassen, Lance and McInnis, and Robinson and Hendrichs. The shortfall between theory and practice is partly due to the persistence of some common misconceptions, but it is mainly due to one constraint, or a combination of constraints, that are biological, financial, social or political in nature. This chapter's goal is to dispel some major misconceptions, and view the constraints as challenges to overcome, seeing them as opportunities to exploit. Some of the common misconceptions include: (1) released insects retain residual radiation, (2) females must be monogamous, (3) released males must be fully sterile, (4) eradication is the only goal, (5) the SIT is too sophisticated for developing countries, and (6) the SIT is not a component of an area-wide integrated pest management (AW-IPM) strategy. The more obvious constraints are the perceived high costs of the SIT, and the low competitiveness of released sterile males. The perceived high up-front costs of the SIT, their visibility, and the lack of private investment (compared with alternative suppression measures) emerge as serious constraints. Failure to appreciate the true nature of genetic approaches, such as the SIT, may pose a significant constraint to the wider adoption of the SIT and other genetically-based tactics, e.g. transgenic genetically modified organisms (GMOs). Lack of support for the necessary underpinning strategic research also appears to be an important constraint. Hence the case for extensive strategic research in ecology, population dynamics, genetics, and insect behaviour and nutrition is a compelling one. Raising the competitiveness of released sterile males remains the major research objective of the SIT. (author)

  8. Linguistic embodiment and verbal constraints: human cognition and the scales of time

    DEFF Research Database (Denmark)

    Cowley, Stephen

    2014-01-01

    Using radical embodied cognitive science, the paper offers the hypothesis that language is symbiotic: its agent-environment dynamics arise as linguistic embodiment is managed under verbal constraints. As a result, co-action grants human agents the ability to use a unique form of phenomenal......, linguistic symbiosis grants access to diachronic resources. On this distributed-ecological view, language can thus be redefined as: “activity in which wordings play a part.”...

  9. Sampled-data-based vibration control for structural systems with finite-time state constraint and sensor outage.

    Science.gov (United States)

    Weng, Falu; Liu, Mingxin; Mao, Weijie; Ding, Yuanchun; Liu, Feifei

    2018-05-10

    The problem of sampled-data-based vibration control for structural systems with a finite-time state constraint and sensor outage is investigated in this paper. The objective of the controller design is to guarantee the stability and anti-disturbance performance of the closed-loop system even when some sensor outages happen. Firstly, based on matrix transformation, a state-space model of structural systems with sensor outages and with uncertainties in the mass, damping and stiffness matrices is established. Secondly, considering that most earthquakes or strong winds act over a very short time, and that it is often the peak responses that damage structures, finite-time stability analysis is introduced to constrain the state responses within a given time interval, and H-infinity stability is adopted in the controller design to ensure that the closed-loop system has a prescribed level of disturbance attenuation throughout the control process. Furthermore, all stabilization conditions are expressed as linear matrix inequalities (LMIs), whose feasibility can easily be checked using the LMI Toolbox. Finally, numerical examples are given to demonstrate the effectiveness of the proposed theorems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
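    The feasibility checks above are done in the paper with the MATLAB LMI Toolbox. As a hedged stand-in for the underlying idea (not the paper's sampled-data LMIs), a toy continuous-time Lyapunov stability test can be sketched with NumPy alone, solving A^T P + P A = -Q by vectorization:

    ```python
    import numpy as np

    def lyapunov_solve(A, Q):
        """Solve A^T P + P A = -Q for symmetric P via
        (I (x) A^T + A^T (x) I) vec(P) = -vec(Q).  Toy sketch only."""
        n = A.shape[0]
        K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
        P = np.linalg.solve(K, -Q.flatten(order="F")).reshape((n, n), order="F")
        return 0.5 * (P + P.T)  # symmetrize against round-off

    def is_stable(A):
        """A is Hurwitz iff the solution P (with Q = I) is positive definite."""
        P = lyapunov_solve(A, np.eye(A.shape[0]))
        return bool(np.all(np.linalg.eigvalsh(P) > 0))
    ```

    For A = diag(-1, -2) this yields P = diag(1/2, 1/4), which is positive definite, confirming stability; real LMI problems would instead be posed to a semidefinite-programming solver.
    
    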

  10. State of the Art of Hard and Soft Ionization Mass Spectrometry

    International Nuclear Information System (INIS)

    Helal, A.I.

    2008-01-01

    The principles of hard and soft ionization sources are discussed, providing some details on the practical aspects of their use as well as the ionization mechanisms. The conditions and uses of hard ionization methods, such as electron impact, thermal ionization and inductively coupled plasma techniques, are discussed. Moreover, a new generation of soft ionization methods, such as matrix-assisted laser desorption/ionization, electrospray ionization and direct analysis in real time, is illustrated.

  11. Analyzing the effect of gain time on soft task scheduling policies in real-time systems

    OpenAIRE

    Búrdalo Rapa, Luis Antonio; Terrasa Barrena, Andrés Martín; Espinosa Minguet, Agustín Rafael; García Fornes, Ana María

    2012-01-01

    In hard real-time systems, gain time is defined as the difference between the Worst Case Execution Time (WCET) of a hard task and its actual processor consumption at runtime. This paper presents the results of an empirical study about how the presence of a significant amount of gain time in a hard real-time system questions the advantages of using the most representative scheduling algorithms or policies for aperiodic or soft tasks in fixed-priority preemptive systems. The work presented here...
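    The definition in this record reduces to simple arithmetic; a minimal sketch, with hypothetical names, of how a scheduler might tally the gain time it can hand to aperiodic or soft tasks:

    ```python
    def gain_time(wcet, actual):
        """Gain time: the budget a hard task reserved (its WCET) but did not
        consume at runtime.  Illustrative definition only."""
        if actual > wcet:
            raise ValueError("actual execution exceeded the WCET bound")
        return wcet - actual

    def reclaimed_slack(executions):
        """Total spare capacity from completed hard jobs, given as
        (wcet, actual) pairs; a slack reclaimer could grant this to
        soft tasks in a fixed-priority preemptive system."""
        return sum(gain_time(w, a) for w, a in executions)
    ```
    
    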

  12. Imposing motion constraints to a force reflecting tele-robot through real-time simulation of a virtual mechanism

    Energy Technology Data Exchange (ETDEWEB)

    Joly, L.; Andriot, C.

    1995-12-31

    In a tele-operation system, assistance can be given to the operator by constraining the tele-robot position to remain within a restricted subspace of its workspace. A new approach to motion constraint is presented in this paper. The control law is established by simulating a virtual ideal mechanism acting as a jig, connected to the master and slave arms via springs and dampers. Using this approach, it is possible to impose any (sufficiently smooth) motion constraint on the system, including nonlinear constraints (complex surfaces) involving coupling between translations and rotations; physical equivalence ensures that the controller is passive. Experimental results obtained with a 6-DOF tele-operation system are given. Other applications of the virtual mechanism concept include hybrid position-force control and haptic interfaces. (authors). 11 refs., 7 figs.
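    The spring-damper coupling described above can be illustrated in one dimension. This is a hedged sketch, not the authors' 6-DOF controller; all names and gains are hypothetical:

    ```python
    def coupling_force(k, b, x_virtual, x_robot, v_virtual, v_robot):
        """Force pulling a robot arm toward the virtual mechanism through a
        spring-damper pair: F = k (x_v - x) + b (v_v - v).
        One-dimensional sketch; a 6-DOF version would apply this per axis
        (with rotational analogues for torques)."""
        return k * (x_virtual - x_robot) + b * (v_virtual - v_robot)
    ```

    With a stiff spring the force drives the arm toward the jig; the damping term dissipates energy, which is what keeps the coupled controller passive.
    
    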

  13. Imposing motion constraints to a force reflecting tele-robot through real-time simulation of a virtual mechanism

    International Nuclear Information System (INIS)

    Joly, L.; Andriot, C.

    1995-01-01

    In a tele-operation system, assistance can be given to the operator by constraining the tele-robot position to remain within a restricted subspace of its workspace. A new approach to motion constraint is presented in this paper. The control law is established by simulating a virtual ideal mechanism acting as a jig, connected to the master and slave arms via springs and dampers. Using this approach, it is possible to impose any (sufficiently smooth) motion constraint on the system, including nonlinear constraints (complex surfaces) involving coupling between translations and rotations; physical equivalence ensures that the controller is passive. Experimental results obtained with a 6-DOF tele-operation system are given. Other applications of the virtual mechanism concept include hybrid position-force control and haptic interfaces. (authors). 11 refs., 7 figs

  14. On the Use of Time-Limited Information for Maintenance Decision Support: A Predictive Approach under Maintenance Constraints

    Directory of Open Access Journals (Sweden)

    E. Khoury

    2013-01-01

    Full Text Available This paper deals with a gradually deteriorating system operating under an uncertain environment whose state is only known on a finite rolling horizon. As such, the system is subject to constraints. Maintenance actions can only be planned at imposed times called maintenance opportunities that are available on a limited visibility horizon. This system can, for example, be a commercial vehicle with a monitored critical component that can be maintained only in some specific workshops. Based on the considered system, we aim to use the monitoring data and the time-limited information for maintenance decision support in order to reduce its costs. We propose two predictive maintenance policies based, respectively, on cost and reliability criteria. Classical age-based and condition-based policies are considered as benchmarks. The performance assessment shows the value of the different types of information and the best way to use them in maintenance decision making.

  15. Mechanism by Which Magnesium Oxide Suppresses Tablet Hardness Reduction during Storage.

    Science.gov (United States)

    Sakamoto, Takatoshi; Kachi, Shigeto; Nakamura, Shohei; Miki, Shinsuke; Kitajima, Hideaki; Yuasa, Hiroshi

    2016-01-01

    This study investigated how the inclusion of magnesium oxide (MgO) maintained tablet hardness during storage in an unpackaged state. Tablets were prepared with a range of MgO levels and stored at 40°C with 75% relative humidity for up to 14 d. The hardness of tablets prepared without MgO decreased over time. The amount of added MgO was positively associated with tablet hardness and mass from an early stage during storage. Investigation of the water sorption properties of the tablet components showed that carmellose water sorption correlated positively with the relative humidity, while MgO absorbed and retained moisture, even when the relative humidity was reduced. In tablets prepared using only MgO, a petal- or plate-like material was observed during storage. Fourier transform infrared spectrophotometry showed that this material was hydromagnesite, produced when MgO reacts with water and CO2. The estimated level of hydromagnesite at each time-point showed a significant negative correlation with tablet porosity. These results suggested that MgO suppressed storage-associated softening by absorbing moisture from the environment. The conversion of MgO to hydromagnesite results in solid bridge formation between the powder particles comprising the tablets, suppressing the storage-related increase in volume and increasing tablet hardness.
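    The MgO-to-hydromagnesite conversion described above can be written as a balanced overall reaction; a plausible equation, assuming hydromagnesite's usual formula Mg5(CO3)4(OH)2·4H2O, is:

    ```latex
    \mathrm{5\,MgO + 4\,CO_2 + 5\,H_2O \longrightarrow Mg_5(CO_3)_4(OH)_2\cdot 4\,H_2O}
    ```

    The water and carbon dioxide on the left are taken up from the storage atmosphere, consistent with the moisture-absorption mechanism proposed in the record.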

  16. Nordic congestion's arrangement as a model for Europe? Physical constraints vs. economic incentives

    International Nuclear Information System (INIS)

    Glachant, J.-M.; Pignon, V.

    2005-01-01

    Congestion on power grids seems a physical reality, a 'hard' fact easy to check. Our paper models a different idea: the congestion signal may be distorted by transmission system operators (TSOs). Indeed, congestion signals are not physical data but 'home-made' conventions set directly by the TSOs in charge of the security of the system. These security norms are not stable and invariable, because line capacity limits are not constant. TSOs therefore define the congestion signal on a variable, complex and non-transparent constraint, and may manipulate it for monetary purposes or other agendas. In the Nordic countries, the coexistence of two congestion management methods in a 'Light-Handed Regulation' framework makes this opportunistic behaviour even more likely. (author)

  17. Quasivariational Solutions for First Order Quasilinear Equations with Gradient Constraint

    Science.gov (United States)

    Rodrigues, José Francisco; Santos, Lisa

    2012-08-01

    We prove the existence of solutions for a quasi-variational inequality of evolution with a first order quasilinear operator and a variable convex set which is characterized by a constraint on the absolute value of the gradient that depends on the solution itself. The only required assumption on the nonlinearity of this constraint is its continuity and positivity. The method relies on an appropriate parabolic regularization and suitable a priori estimates. We also obtain the existence of stationary solutions by studying the asymptotic behaviour in time. In the variational case, corresponding to a constraint independent of the solution, we also give uniqueness results.
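    In symbols, the variable convex set described in this record can be written as follows. This is a hedged reconstruction from the abstract (the precise function space is omitted there); g is the continuous, positive nonlinearity of the constraint:

    ```latex
    \mathbb{K}(u) \;=\; \bigl\{\, v : \; |\nabla v| \le g(u) \ \text{a.e.} \,\bigr\},
    \qquad u \in \mathbb{K}(u),
    ```

    so the admissible set depends on the solution itself, which is what makes the inequality quasi-variational rather than variational.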

  18. Engineering design constraints of the lunar surface environment

    Science.gov (United States)

    Morrison, D. A.

    1992-01-01

    Living and working on the lunar surface will be difficult. Design of habitats, machines, tools, and operational scenarios in order to allow maximum flexibility in human activity will require paying attention to certain constraints imposed by conditions at the surface and the characteristics of lunar material. Primary design drivers for habitat, crew health and safety, and crew equipment are: ionizing radiation, the meteoroid flux, and the thermal environment. Secondary constraints for engineering derive from: the physical and chemical properties of lunar surface materials, rock distributions and regolith thicknesses, topography, electromagnetic properties, and seismicity. Protection from ionizing radiation is essential for crew health and safety. The total dose acquired by a crew member will be the sum of the dose acquired during EVA time (when shielding will be least) plus the dose acquired during time spent in the habitat (when shielding will be maximum). Minimizing the dose acquired in the habitat extends the time allowable for EVA's before a dose limit is reached. Habitat shielding is enabling, and higher precision in predicting secondary fluxes produced in shielding material would be desirable. Means for minimizing dose during a solar flare event while on extended EVA will be essential. Early warning of the onset of flare activity (at least a half-hour is feasible) will dictate the time available to take mitigating steps. Warning capability affects design of rovers (or rover tools) and site layout. Uncertainty in solar flare timing is a design constraint that points to the need for quickly accessible or constructible safe havens.

  19. Constraints on communication in classrooms for the deaf.

    Science.gov (United States)

    Matthews, T J; Reich, C F

    1993-03-01

    One explanation for the relatively low scholastic achievement of deaf students is the character of communication in the classroom. Unlike aural communication methods, line-of-sight methods share the limitation that the receiver of the message must look at the sender. To assess the magnitude of this constraint, we measured the amount of time signers were looked at by potential receivers in typical secondary school classes for the deaf. Videotaped segments indicated that on average the messages sent by teachers and students were seen less than half the time. Students frequently engaged in collateral conversations. The constraints of line-of-sight communication are profound and should be addressed by teaching techniques, classroom layout, and possibly, the use of computer-communication technology.

  20. Monocular Visual Odometry Based on Trifocal Tensor Constraint

    Science.gov (United States)

    Chen, Y. J.; Yang, G. L.; Jiang, Y. X.; Liu, X. Y.

    2018-02-01

    For the problem of real-time precise localization in urban streets, a monocular visual odometry based on Extended Kalman fusion of optical-flow tracking and a trifocal tensor constraint is proposed. To diminish the influence of moving objects, such as pedestrians, we estimate the motion of the camera by extracting features on the ground, which improves the robustness of the system. The observation equation based on the trifocal tensor constraint is derived, which forms the Kalman filter along with the state transition equation. An Extended Kalman filter is employed to cope with the nonlinear system. Experimental results demonstrate that, compared with Yu's two-step EKF method, the algorithm is more accurate, meeting the needs of real-time accurate localization in cities.
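    The Kalman machinery behind the record can be illustrated generically. The sketch below is a scalar linear predict/update cycle, not the paper's trifocal-tensor observation model; all names are hypothetical:

    ```python
    def kf_predict(x, p, q):
        """Random-walk predict step: state estimate unchanged,
        uncertainty p grows by process-noise variance q."""
        return x, p + q

    def kf_update(x, p, z, r):
        """Measurement update with observation z and noise variance r."""
        k = p / (p + r)            # Kalman gain
        x_new = x + k * (z - x)    # innovation-weighted correction
        p_new = (1.0 - k) * p      # posterior variance shrinks
        return x_new, p_new
    ```

    An EKF follows the same cycle but linearizes a nonlinear observation equation (such as one derived from the trifocal tensor) about the current estimate at each step.
    
    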

  1. Soft- and hard-agglomerate aerosols made at high temperatures.

    Science.gov (United States)

    Tsantilis, Stavros; Pratsinis, Sotiris E

    2004-07-06

    Criteria for aerosol synthesis of soft-agglomerate, hard-agglomerate, or even nonagglomerate particles are developed on the basis of particle sintering and coalescence. Agglomerate (or aggregate) particles are held together by weak, physical van der Waals forces (soft agglomerates) or by stronger chemical or sintering bonds (hard agglomerates). Accounting for simultaneous gas phase chemical reaction, coagulation, and sintering during the formation and growth of silica (SiO2) nanoparticles by silicon tetrachloride (SiCl4) oxidation and neglecting the spread of particle size distribution, the onset of hard-agglomerate formation is identified at the end of full coalescence, while the onset of soft-agglomerate formation is identified at the end of sintering. Process conditions such as the precursor initial volume fraction, maximum temperature, residence time, and cooling rate are explored, identifying regions for the synthesis of particles with a controlled degree of agglomeration (ratio of collision to primary particle diameters).

  2. Solving the Mystery of the Short-Hard Gamma-Ray Bursts

    Science.gov (United States)

    Fox, Derek

    2005-07-01

    Eight years after the afterglow detections that revolutionized studies of the long-soft gamma-ray bursts, not even one afterglow of a short-hard GRB has been seen, and the nature of these events has become one of the most important problems in GRB research. The Swift satellite, expected to be in full operation throughout Cycle 14, will report few-arcsecond localizations for short-hard bursts in minutes, enabling prompt, deep optical afterglow searches for the first time. Discovery and observation of the first short-hard optical afterglows will answer most of the critical questions about these events: What are their distances and energies? Do they occur in distant galaxies, and if so, in which regions of those galaxies? Are they the result of collimated or quasi-spherical explosions? In combination with an extensive rapid-response ground-based campaign, we propose to make the critical high-sensitivity HST TOO observations that will allow us to answer these questions. If theorists are correct in attributing the short-hard bursts to binary neutron star coalescence events, then they will serve as signposts to the primary targeted source population for ground-based gravitational-wave detectors, and short-hard burst studies will have a vital role to play in guiding those observations.

  3. Exact sampling hardness of Ising spin models

    Science.gov (United States)

    Fefferman, B.; Foss-Feig, M.; Gorshkov, A. V.

    2017-09-01

    We study the complexity of classically sampling from the output distribution of an Ising spin model, which can be implemented naturally in a variety of atomic, molecular, and optical systems. In particular, we construct a specific example of an Ising Hamiltonian that, after time evolution starting from a trivial initial state, produces a particular output configuration with probability very nearly proportional to the square of the permanent of a matrix with arbitrary integer entries. In a similar spirit to boson sampling, the ability to sample classically from the probability distribution induced by time evolution under this Hamiltonian would imply unlikely complexity theoretic consequences, suggesting that the dynamics of such a spin model cannot be efficiently simulated with a classical computer. Physical Ising spin systems capable of achieving problem-size instances (i.e., qubit numbers) large enough so that classical sampling of the output distribution is classically difficult in practice may be achievable in the near future. Unlike boson sampling, our current results only imply hardness of exact classical sampling, leaving open the important question of whether a much stronger approximate-sampling hardness result holds in this context. The latter is most likely necessary to enable a convincing experimental demonstration of quantum supremacy. As referenced in a recent paper [A. Bouland, L. Mancinska, and X. Zhang, in Proceedings of the 31st Conference on Computational Complexity (CCC 2016), Leibniz International Proceedings in Informatics (Schloss Dagstuhl-Leibniz-Zentrum für Informatik, Dagstuhl, 2016)], our result completes the sampling hardness classification of two-qubit commuting Hamiltonians.
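    The hardness result above hinges on output probabilities proportional to the square of a matrix permanent, a #P-hard quantity. As a hedged illustration (standard Ryser formula, not code from the paper), the permanent can be computed exactly, though only in exponential time:

    ```python
    def permanent(a):
        """Permanent of a square matrix via Ryser's formula:
        perm(A) = (-1)^n * sum over column subsets S of
                  (-1)^{|S|} * prod_i sum_{j in S} a[i][j].
        O(2^n * n^2) time, as expected for a #P-hard quantity."""
        n = len(a)
        total = 0
        for s in range(1, 1 << n):               # nonempty column subsets
            cols = [j for j in range(n) if s & (1 << j)]
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** len(cols) * prod
        return (-1) ** n * total
    ```

    The absence of any known polynomial-time algorithm for this computation is precisely what makes classical sampling from such spin models implausible.
    
    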

  4. Multi-Objective Trajectory Optimization of a Hypersonic Reconnaissance Vehicle with Temperature Constraints

    Science.gov (United States)

    Masternak, Tadeusz J.

    This research determines temperature-constrained optimal trajectories for a scramjet-based hypersonic reconnaissance vehicle by developing an optimal control formulation and solving it using a variable order Gauss-Radau quadrature collocation method with a Non-Linear Programming (NLP) solver. The vehicle is assumed to be an air-breathing reconnaissance aircraft that has specified takeoff/landing locations, airborne refueling constraints, specified no-fly zones, and specified targets for sensor data collections. A three degree of freedom scramjet aircraft model is adapted from previous work and includes flight dynamics, aerodynamics, and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and coverage of high-value targets. To solve the optimal control formulation, a MATLAB-based package called General Pseudospectral Optimal Control Software (GPOPS-II) is used, which transcribes continuous time optimal control problems into an NLP problem. In addition, since a mission profile can have varying vehicle dynamics and en-route imposed constraints, the optimal control problem formulation can be broken up into several "phases" with differing dynamics and/or varying initial/final constraints. Optimal trajectories are developed using several different performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum range. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for larger-scale operational and campaign planning and execution.

  5. Multi-objective problem of the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints

    Science.gov (United States)

    Amallynda, I.; Santosa, B.

    2017-11-01

    This paper proposes a new generalization of the distributed parallel machine and assembly scheduling problem (DPMASP) with eligibility constraints referred to as the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints. Within this generalization, we assume that there are a set non-identical factories or production lines, each one with a set unrelated parallel machine with different speeds in processing them disposed to a single assembly machine in series. A set of different products that are manufactured through an assembly program of a set of components (jobs) according to the requested demand. Each product requires several kinds of jobs with different sizes. Beside that we also consider to the multi-objective problem (MOP) of minimizing mean flow time and the number of tardy products simultaneously. This is known to be NP-Hard problem, is important to practice, as the former criterions to reflect the customer's demand and manufacturer's perspective. This is a realistic and complex problem with wide range of possible solutions, we propose four simple heuristics and two metaheuristics to solve it. Various parameters of the proposed metaheuristic algorithms are discussed and calibrated by means of Taguchi technique. All proposed algorithms are tested by Matlab software. Our computational experiments indicate that the proposed problem and fourth proposed algorithms are able to be implemented and can be used to solve moderately-sized instances, and giving efficient solutions, which are close to optimum in most cases.

  6. Correcting for the free energy costs of bond or angle constraints in molecular dynamics simulations.

    Science.gov (United States)

    König, Gerhard; Brooks, Bernard R

    2015-05-01

    Free energy simulations are an important tool in the arsenal of computational biophysics, allowing the calculation of thermodynamic properties of binding or enzymatic reactions. This paper introduces methods to increase the accuracy and precision of free energy calculations by calculating the free energy costs of constraints during post-processing. The primary purpose of employing constraints for these free energy methods is to increase the phase space overlap between ensembles, which is required for accuracy and convergence. The free energy costs of applying or removing constraints are calculated as additional explicit steps in the free energy cycle. The new techniques focus on hard degrees of freedom and use both gradients and Hessian estimation. Enthalpy, vibrational entropy, and Jacobian free energy terms are considered. We demonstrate the utility of this method with simple classical systems involving harmonic and anharmonic oscillators, four-atomic benchmark systems, an alchemical mutation of ethane to methanol, and free energy simulations between alanine and serine. The errors for the analytical test cases are all below 0.0007 kcal/mol, and the accuracy of the free energy results of ethane to methanol is improved from 0.15 to 0.04 kcal/mol. For the alanine to serine case, the phase space overlaps of the unconstrained simulations range between 0.15 and 0.9%. The introduction of constraints increases the overlap up to 2.05%. On average, the overlap increases by 94% relative to the unconstrained value and precision is doubled. The approach reduces errors arising from constraints by about an order of magnitude. Free energy simulations benefit from the use of constraints through enhanced convergence and higher precision. The primary utility of this approach is to calculate free energies for systems with disparate energy surfaces and bonded terms, especially in multi-scale molecular mechanics/quantum mechanics simulations. This article is part of a Special Issue

  7. The Enskog Equation for Confined Elastic Hard Spheres

    Science.gov (United States)

    Maynar, P.; García de Soria, M. I.; Brey, J. Javier

    2018-03-01

    A kinetic equation for a system of elastic hard spheres or disks confined by a hard wall of arbitrary shape is derived. It is a generalization of the modified Enskog equation in which the effects of the confinement are taken into account and it is supposed to be valid up to moderate densities. From the equation, balance equations for the hydrodynamic fields are derived, identifying the collisional transfer contributions to the pressure tensor and heat flux. A Lyapunov functional, H[f], is identified. For any solution of the kinetic equation, H decays monotonically in time until the system reaches the inhomogeneous equilibrium distribution, that is a Maxwellian distribution with a density field consistent with equilibrium statistical mechanics.

  8. Post-irradiation hardness of resin-modified glass ionomer cements and a polyacid-modified composite resin

    International Nuclear Information System (INIS)

    Yap, A.U.J.

    1997-01-01

    This study examined the post-irradiation hardness of resin-modified glass ionomer cements and a polyacid-modified composite resin using a digital microhardness tester. Change in hardness of these materials over a period of 6 months was compared to that of conventional glass ionomer cements and a composite resin. With the exception of the composite resin, all materials showed a significant increase in hardness over 24 h after their initial set. Dual-cure resin-modified glass ionomer cements showed decreased hardness with increased storage time in saline at 37 °C. Results suggest that the addition of resins to glass ionomer cements does not improve initial hardness and does not negate the acid-base reaction of conventional cements. Resin addition may, however, lead to increased water sorption and decreased hardness. (author)

  9. Modelling hard and soft states of Cygnus X-1 with propagating mass accretion rate fluctuations

    Science.gov (United States)

    Rapisarda, S.; Ingram, A.; van der Klis, M.

    2017-12-01

    We present a timing analysis of three Rossi X-ray Timing Explorer observations of the black hole binary Cygnus X-1 with the propagating mass accretion rate fluctuations model PROPFLUC. The model simultaneously predicts power spectra, time lags and coherence of the variability as a function of energy. The observations cover the soft and hard states of the source, and the transition between the two. We find good agreement between model predictions and data in the hard and soft states. Our analysis suggests that in the soft state the fluctuations propagate in an optically thin hot flow extending up to large radii above and below a stable optically thick disc. In the hard state, our results are consistent with a truncated disc geometry, where the hot flow extends radially inside the inner radius of the disc. In the transition from soft to hard state, the characteristics of the rapid variability are too complex to be successfully described with PROPFLUC. The surface density profile of the hot flow predicted by our model and the lack of quasi-periodic oscillations in the soft and hard states suggest that the spin of the black hole is aligned with the inner accretion disc and therefore probably with the rotational axis of the binary system.

  10. On scale dependence of hardness

    International Nuclear Information System (INIS)

    Shorshorov, M.Kh.; Alekhin, V.P.; Bulychev, S.I.

    1977-01-01

    The concept of hardness as a structure-sensitive characteristic of a material is considered. It is shown that, in conditions of a decreasing stress field under the indenter, the hardness function is determined by the average distance, Lsub(a), between the stops (fixed and sessile dislocations, segregation particles, etc.). In the general case, Lsub(a) depends on the size of the impression and explains the great diversity of hardness functions. The concept of average true deformation rate during indentation is introduced.

  11. Reasoning about Strategies under Partial Observability and Fairness Constraints

    Directory of Open Access Journals (Sweden)

    Simon Busard

    2013-03-01

    Full Text Available A number of extensions exist for Alternating-time Temporal Logic; some of these mix strategies and partial observability but, to the best of our knowledge, no work provides a unified framework for strategies, partial observability and fairness constraints. In this paper we propose ATLK^F_po, a logic mixing strategies under partial observability and epistemic properties of agents in a system with fairness constraints on states, and we provide a model checking algorithm for it.

  12. Benefits and constraints in the use of solar cooker

    International Nuclear Information System (INIS)

    Ilyas, S.Z.

    2008-01-01

    Women in Pakistan have been overlooked during and after the planning and implementation of household energy projects for decades. The immediate impact of domestic household energy projects falls on women first, since women are the ones who deal most with energy at the domestic level. A sample of 100 women users of solar cookers was selected randomly. The majority of the respondents were in the age group 30-55 years (80%) and had possessed a solar cooker for more than one year (74%). Nutritional aspects (preserving nutritive value and food flavours), environmental aspects (keeping the environment clean) and economic aspects (saving fuel and money) were perceived as most beneficial. Personal benefits (saving of time and convenience) ranked low among the benefits. Situational constraints, such as no cooking after evening and seasonal use of the cooker, were perceived as the most severe, followed by technical constraints (the device not being durable) and personal constraints (shifting of the device). The paper also highlights the modifications desired in the design of the solar cooker. (author)

  13. Influence of timing of delayed hard palate closure on articulation skills in 3-year-old Danish children with unilateral cleft lip and palate.

    Science.gov (United States)

    Willadsen, Elisabeth; Boers, Maria; Schöps, Antje; Kisling-Møller, Mia; Nielsen, Joan Bogh; Jørgensen, Line Dahl; Andersen, Mikael; Bolund, Stig; Andersen, Helene Søgaard

    2018-01-01

    Differing results regarding articulation skills in young children with cleft palate (CP) have been reported and often interpreted as a consequence of different surgical protocols. To assess the influence of different timing of hard palate closure in a two-stage procedure on articulation skills in 3-year-olds born with unilateral cleft lip and palate (UCLP). Secondary aims were to compare results with peers without CP, and to investigate if there are gender differences in articulation skills. Furthermore, burden of treatment was to be estimated in terms of secondary surgery, hearing and speech therapy. A randomized controlled trial (RCT). Early hard palate closure (EHPC) at 12 months versus late hard palate closure (LHPC) at 36 months in a two-stage procedure was tested in a cohort of 126 Danish-speaking children born with non-syndromic UCLP. All participants had the lip and soft palate closed around 4 months of age. Audio and video recordings of a naming test were available from 113 children (32 girls and 81 boys) and were transcribed phonetically. Recordings were obtained prior to hard palate closure in the LHPC group. The main outcome measures were percentage consonants correct adjusted (PCC-A) and consonant errors from blinded assessments. Results from 36 Danish-speaking children without CP obtained previously by Willadsen in 2012 were used for comparison. Children with EHPC produced significantly more target consonants correctly (83%) than children with LHPC (48%; p < .001). In addition, children with LHPC produced significantly more active cleft speech characteristics than children with EHPC (p < .001). Boys achieved significantly lower PCC-A scores than girls (p = .04) and produced significantly more consonant errors than girls (p = .02). No significant differences were found between groups regarding burden of treatment. The control group performed significantly better than the EHPC and LHPC groups on all compared variables. © 2017 Royal College of Speech

  14. Micro-hardness of non-irradiated uranium dioxide

    International Nuclear Information System (INIS)

    Kim, Sung-Sik; Takagi, Osamu; Obata, Naomi; Kirihara, Tomoo.

    1983-01-01

    In order to obtain the optimum conditions for micro-hardness measurements of sintered UO2, two kinds of hardness tests (Vickers and Knoop) were examined with non-irradiated UO2 of 2.5 and 5 μm in grain size. The hardness values were obtained as a function of the applied load in the load range of 25 -- 1,000 g. In the Vickers test, cracks were generated around the periphery of an indentation even at a load as low as 50 g, which means that the Vickers test is not suitable for UO2 specimens. In the Knoop test, three stages of load dependence were observed for sintered pellets as well as for a single crystal, as reported by Bates. The load dependence of Knoop hardness and crack formation are discussed. In the range of applied load around 70 -- 100 g there was a plateau region where the hardness values were nearly unchanged and the indentations did not contain any cracks. The plateau region represents the hardness of a specimen. From a comparison between the hardness values of 2.5 μm and 5 μm UO2, it was confirmed that the degree of sintering controls the hardness in the plateau region. (author)
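The Vickers and Knoop numbers discussed above are computed from the applied load and the measured indent size using standard geometric factors (1.8544 for the mean Vickers diagonal, 14.229 for the Knoop long diagonal). A minimal sketch, with illustrative inputs:

```python
def vickers_hardness(load_gf: float, diagonal_um: float) -> float:
    """Vickers hardness (kgf/mm^2) from load in grams-force and
    mean indent diagonal in micrometres: HV = 1.8544 * F / d^2."""
    load_kgf = load_gf / 1000.0
    d_mm = diagonal_um / 1000.0
    return 1.8544 * load_kgf / d_mm**2

def knoop_hardness(load_gf: float, long_diagonal_um: float) -> float:
    """Knoop hardness (kgf/mm^2) from load and the long diagonal
    of the rhombic indent: HK = 14.229 * F / L^2."""
    load_kgf = load_gf / 1000.0
    l_mm = long_diagonal_um / 1000.0
    return 14.229 * load_kgf / l_mm**2
```

The load dependence reported in the abstract shows up here directly: at fixed indent size the number scales linearly with load, so a genuine material hardness requires the indent size to grow with load in the plateau region.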

  15. Healthy Living in Hard Times

    OpenAIRE

    Christopher J. Ruhm

    2003-01-01

    Using microdata for adults from the 1987-2000 years of the Behavioral Risk Factor Surveillance System, I show that smoking and height-adjusted weight decline during temporary economic downturns while leisure-time physical activity rises. The drop in tobacco use occurs disproportionately among heavy smokers, the fall in body weight among the severely obese, and the increase in exercise among those who were completely inactive. Declining work hours may provide one reason why behaviors become he...

  16. An Efficient Energy Constraint Based UAV Path Planning for Search and Coverage

    OpenAIRE

    Gramajo, German; Shankar, Praveen

    2017-01-01

    A path planning strategy for a search and coverage mission for a small UAV that maximizes the area covered based on stored energy and maneuverability constraints is presented. The proposed formulation has a high level of autonomy, without requiring an exact choice of optimization parameters, and is appropriate for real-time implementation. The computed trajectory maximizes spatial coverage while closely satisfying terminal constraints on the position of the vehicle and minimizing the time of ...

  17. Financing Constraints and Entrepreneurship

    OpenAIRE

    William R. Kerr; Ramana Nanda

    2009-01-01

    Financing constraints are one of the biggest concerns impacting potential entrepreneurs around the world. Given the important role that entrepreneurship is believed to play in the process of economic growth, alleviating financing constraints for would-be entrepreneurs is also an important goal for policymakers worldwide. We review two major streams of research examining the relevance of financing constraints for entrepreneurship. We then introduce a framework that provides a unified perspecti...

  18. SU-F-T-342: Dosimetric Constraint Prediction Guided Automatic Multi-Objective Optimization for Intensity Modulated Radiotherapy

    International Nuclear Information System (INIS)

    Song, T; Zhou, L; Li, Y

    2016-01-01

    Purpose: For intensity modulated radiotherapy, plan optimization is time consuming, with difficulties in selecting objectives and constraints and their relative weights. A fast and automatic multi-objective optimization algorithm able to predict optimal constraints and manage their trade-offs can help to solve this problem. Our purpose is to develop such a framework and algorithm for general inverse planning. Methods: The proposed multi-objective optimization framework contains three main components: prediction of initial dosimetric constraints, further adjustment of constraints, and plan optimization. We first use our previously developed in-house geometry-dosimetry correlation model to predict the optimal patient-specific dosimetric endpoints and treat them as initial dosimetric constraints. Second, we build an endpoint (organ) priority list and a constraint adjustment rule to repeatedly tune these constraints from their initial values until no single endpoint has room for further improvement. Lastly, we implement a voxel-independent FMO algorithm for optimization. During the optimization, a model for tuning the voxel weighting factors with respect to the constraints is created. For framework and algorithm evaluation, we randomly selected 20 IMRT prostate cases from the clinic and compared them with our automatically generated plans, in both efficiency and plan quality. Results: For each evaluated plan, the proposed multi-objective framework ran fluently and automatically. The number of voxel weighting factor iterations varied from 10 to 30 under an updated constraint, and the number of constraint tuning steps varied from 20 to 30 for every case until no stricter constraint was allowed. The average total computation time for the whole optimization procedure is ∼30 min. Comparing the DVHs, better OAR dose sparing could be observed in the automatically generated plans for 13 out of the 20 cases, while the others are competitive

  19. SU-F-T-342: Dosimetric Constraint Prediction Guided Automatic Multi-Objective Optimization for Intensity Modulated Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Song, T; Zhou, L [Southern Medical University, Guangzhou, Guangdong (China); Li, Y [Beihang University, Beijing, Beijing (China)

    2016-06-15

    Purpose: For intensity modulated radiotherapy, plan optimization is time consuming, with difficulties in selecting objectives and constraints and their relative weights. A fast and automatic multi-objective optimization algorithm able to predict optimal constraints and manage their trade-offs can help to solve this problem. Our purpose is to develop such a framework and algorithm for general inverse planning. Methods: The proposed multi-objective optimization framework contains three main components: prediction of initial dosimetric constraints, further adjustment of constraints, and plan optimization. We first use our previously developed in-house geometry-dosimetry correlation model to predict the optimal patient-specific dosimetric endpoints and treat them as initial dosimetric constraints. Second, we build an endpoint (organ) priority list and a constraint adjustment rule to repeatedly tune these constraints from their initial values until no single endpoint has room for further improvement. Lastly, we implement a voxel-independent FMO algorithm for optimization. During the optimization, a model for tuning the voxel weighting factors with respect to the constraints is created. For framework and algorithm evaluation, we randomly selected 20 IMRT prostate cases from the clinic and compared them with our automatically generated plans, in both efficiency and plan quality. Results: For each evaluated plan, the proposed multi-objective framework ran fluently and automatically. The number of voxel weighting factor iterations varied from 10 to 30 under an updated constraint, and the number of constraint tuning steps varied from 20 to 30 for every case until no stricter constraint was allowed. The average total computation time for the whole optimization procedure is ∼30 min. Comparing the DVHs, better OAR dose sparing could be observed in the automatically generated plans for 13 out of the 20 cases, while the others are competitive
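The priority-driven constraint adjustment described in the Methods (tighten each endpoint's constraint in priority order until no endpoint admits a stricter value) can be sketched abstractly. Everything below is illustrative, not the paper's implementation: the organ names, step sizes, and the stand-in feasibility test replace a full fluence-map optimization.

```python
def tighten_constraints(initial, priority, step, feasible):
    """Repeatedly tighten each endpoint's dose cap, in priority order,
    keeping a trial value only if the plan remains feasible; stop when
    no endpoint can be tightened further."""
    constraints = dict(initial)
    improved = True
    while improved:
        improved = False
        for organ in priority:
            trial = dict(constraints)
            trial[organ] = constraints[organ] - step[organ]  # stricter cap
            if trial[organ] > 0 and feasible(trial):
                constraints = trial
                improved = True
    return constraints

# Toy feasibility model standing in for the optimizer: the plan is deemed
# achievable while a summed "difficulty" stays under a budget (illustrative).
def toy_feasible(c):
    return (50.0 / c["rectum"]) + (40.0 / c["bladder"]) <= 2.0

result = tighten_constraints(
    initial={"rectum": 70.0, "bladder": 65.0},  # hypothetical dose caps (Gy)
    priority=["rectum", "bladder"],
    step={"rectum": 1.0, "bladder": 1.0},
    feasible=toy_feasible,
)
```

The loop terminates at a fixed point where every single-organ tightening is infeasible, mirroring the paper's stopping rule of "no room for further improvement".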

  20. Hard-hat day

    CERN Multimedia

    2003-01-01

    CERN will be organizing a special information day on Friday, 27th June, designed to promote the wearing of hard hats and ensure that they are worn correctly. A new prevention campaign will also be launched.The event will take place in the hall of the Main Building from 11.30 a.m. to 2.00 p.m., when you will be able to come and try on various models of hard hat, including some of the very latest innovative designs, ask questions and pass on any comments and suggestions.

  1. Creativity from Constraints in Engineering Design

    DEFF Research Database (Denmark)

    Onarheim, Balder

    2012-01-01

    This paper investigates the role of constraints in limiting and enhancing creativity in engineering design. Based on a review of literature relating constraints to creativity, the paper presents a longitudinal participatory study from Coloplast A/S, a major international producer of disposable...... and ownership of formal constraints played a crucial role in defining their influence on creativity – along with the tacit constraints held by the designers. The designers were found to be highly constraint focused, and four main creative strategies for constraint manipulation were observed: blackboxing...

  2. Spectro-Timing Study of GX 339-4 in a Hard Intermediate State

    DEFF Research Database (Denmark)

    Fürst, F.; Grinberg, V.; Tomsick, J. A.

    2016-01-01

    We present an analysis of Nuclear Spectroscopic Telescope Array observations of a hard intermediate state of the transient black hole GX 339-4 taken in 2015 January. With the source softening significantly over the course of the 1.3 day long observation we split the data into 21 sub-sets and find...... that the spectrum of all of them can be well described by a power-law continuum with an additional relativistically blurred reflection component. The photon index increases from ∼1.69 to ∼1.77 over the course of the observation. The accretion disk is truncated at around nine gravitational radii in all spectra. We...

  3. Neutron star cooling constraints for color superconductivity in hybrid stars

    International Nuclear Information System (INIS)

    Popov, S.; Grigoryan, Kh.; Blaschke, D.

    2005-01-01

    We apply the recently developed LogN-LogS test of compact star cooling theories for the first time to hybrid stars with a color superconducting quark matter core. While there is not yet a microscopically founded superconducting quark matter phase which would fulfill constraints from cooling phenomenology, we explore the hypothetical 2SC+X phase and show that the magnitude and density-dependence of the X-gap can be chosen to satisfy a set of tests: temperature-age (T-t), the brightness constraint, LogN-LogS, and the mass spectrum constraint. The latter test appears as a new conjecture from the present investigation

  4. Removal of mineral oil and wastewater pollutants using hard coal

    Directory of Open Access Journals (Sweden)

    BRANISLAV R. SIMONOVIĆ

    2009-05-01

    Full Text Available This study investigates the use of hard coal as an adsorbent for the removal of mineral oil from wastewater. In order to determine the efficiency of hard coal as an adsorbent of mineral oil, process parameters such as sorption capacity (in static and dynamic conditions), temperature, pH, contact time, flow rate, and chemical pretreatment were evaluated in a series of batch and continuous flow experiments. There were significant differences in mineral oil removal among the various pH values examined. The adsorption of mineral oil increased as the pH diverged from 7 (neutral). At lower temperatures, the adsorption was notably higher. The wastewater flow rate was adjusted to achieve optimal water purification. Equilibrium was reached after 10 h in static conditions. At that time, more than 99% of the mineral oil had been removed. At the beginning of the filtering process, the adsorption rate increased rapidly, only to show a minor decrease afterwards. Equilibrium data were fitted to Freundlich models to determine the water-hard coal partitioning coefficient. Physical adsorption caused by the properties of the compounds was the predominant mechanism in the removal process.
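The Freundlich fit mentioned at the end of the abstract is commonly done by linearizing q = Kf · C^(1/n) as log q = log Kf + (1/n) log C and regressing on the logs. A small self-contained sketch with synthetic data (the numbers are illustrative, not from the study):

```python
import math

def fit_freundlich(c_eq, q_ads):
    """Fit q = Kf * C**(1/n) by least squares on log10(q) vs log10(C).
    Returns (Kf, n)."""
    xs = [math.log10(c) for c in c_eq]
    ys = [math.log10(q) for q in q_ads]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx             # = 1/n
    intercept = my - slope * mx   # = log10(Kf)
    return 10 ** intercept, 1.0 / slope

# Synthetic equilibrium data generated from Kf = 2, n = 2 (i.e. q = 2 * C**0.5)
C = [1.0, 10.0, 100.0]
q = [2.0 * c ** 0.5 for c in C]
Kf, n = fit_freundlich(C, q)
```

The recovered (Kf, n) pair then characterizes the water-coal partitioning over the measured concentration range.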

  5. Effect of intracrystalline water on micro-Vickers hardness in tetragonal hen egg-white lysozyme single crystals

    International Nuclear Information System (INIS)

    Koizumi, H; Kawamoto, H; Tachibana, M; Kojima, K

    2008-01-01

    Mechanical properties of high-quality tetragonal hen egg-white lysozyme single crystals, which are one type of protein crystal, were investigated by the indentation method. The indentation marks were clearly observed on the crystal surface and no elastic recovery of the marks occurred. The value of the micro-Vickers hardness in the wet condition was estimated to be about 20 MPa at room temperature. The hardness greatly depended on the amount of intracrystalline water (mobile water) contained in the crystals: it increased with increasing time of evaporation to air at room temperature, reaching a maximum of about 260 MPa, 13 times the value in the wet condition. The origin of this change in hardness is explained in terms of dislocation mechanisms in lysozyme single crystals

  6. Hard paths, soft paths or no paths? Cross-cultural perceptions of water solutions

    Science.gov (United States)

    Wutich, A.; White, A. C.; White, D. D.; Larson, K. L.; Brewis, A.; Roberts, C.

    2014-01-01

    In this study, we examine how development status and water scarcity shape people's perceptions of "hard path" and "soft path" water solutions. Based on ethnographic research conducted in four semi-rural/peri-urban sites (in Bolivia, Fiji, New Zealand, and the US), we use content analysis to conduct statistical and thematic comparisons of interview data. Our results indicate clear differences associated with development status and, to a lesser extent, water scarcity. People in the two less developed sites were more likely to suggest hard path solutions, less likely to suggest soft path solutions, and more likely to see no path to solutions than people in the more developed sites. Thematically, people in the two less developed sites envisioned solutions that involve small-scale water infrastructure and decentralized, community-based solutions, while people in the more developed sites envisioned solutions that involve large-scale infrastructure and centralized, regulatory water solutions. People in the two water-scarce sites were less likely to suggest soft path solutions and more likely to see no path to solutions (but no more likely to suggest hard path solutions) than people in the water-rich sites. Thematically, people in the two water-rich sites seemed to perceive a wider array of unrealized potential soft path solutions than those in the water-scarce sites. On balance, our findings are encouraging in that they indicate that people are receptive to soft path solutions in a range of sites, even those with limited financial or water resources. Our research points to the need for more studies that investigate the social feasibility of soft path water solutions, particularly in sites with significant financial and natural resource constraints.

  7. On the canonical treatment of Lagrangian constraints

    International Nuclear Information System (INIS)

    Barbashov, B.M.

    2001-01-01

    The canonical treatment of dynamic systems with manifest Lagrangian constraints proposed by Berezin is applied to concrete examples: a special Lagrangian linear in velocities, relativistic particles in proper time gauge, a relativistic string in orthonormal gauge, and the Maxwell field in the Lorentz gauge

  8. On the canonical treatment of Lagrangian constraints

    International Nuclear Information System (INIS)

    Barbashov, B.M.

    2001-01-01

    The canonical treatment of dynamic systems with manifest Lagrangian constraints proposed by Berezin is applied to concrete examples: a specific Lagrangian linear in velocities, relativistic particles in proper time gauge, a relativistic string in orthonormal gauge, and the Maxwell field in the Lorentz gauge

  9. A 6 device SOI new technology for mixed analog-digital and rad-hard applications

    International Nuclear Information System (INIS)

    Blanc, J.P.; Bonaime, J.; Delevoye, E.; Pontcharra, J. de; Gautier, J.; Truche, R.

    1993-01-01

    DMILL technology is being developed for very rad-hard analog-digital applications, such as space and military circuits or electronics for the future generation of high-energy colliders (LHC, CERN, Geneva). Both CMOS and junction (JFET and bipolar) transistors are needed. A new process has been integrated, based on a 1.2 μm thick silicon film on insulator (SIMOX plus epitaxy), complete dielectric isolation and a low-temperature process. The main feature is that six different components are fabricated on the same wafer, taking into account the 12 volt supply-voltage constraint of some analog applications. The first electrical characteristics are presented in this paper. The optimization capabilities of such a hardened CBi-CJ-CMOS technology are discussed

  10. Integrable Hamiltonian systems and interactions through quadratic constraints

    International Nuclear Information System (INIS)

    Pohlmeyer, K.

    1975-08-01

    Osub(n)-invariant classical relativistic field theories in one time and one space dimension with interactions that are entirely due to quadratic constraints are shown to be closely related to integrable Hamiltonian systems. (orig.) [de

  11. Research and Application of Remote Sensing Monitoring Method for Desertification Land Under Time and Space Constraints

    Science.gov (United States)

    Zhang, Nannnan; Wang, Rongbao; Zhang, Feng

    2018-04-01

    Serious land desertification and sandification threaten urban ecological security and sustainable economic and social development. In recent years, a large number of mobile sand dunes in the Horqin sandy land have moved into the northwest of Liaoning Province under the monsoon, causing serious harm to local agriculture. According to the characteristics of desertified land in northwestern Liaoning, and based on the First National Geographical Survey data, the Second National Land Survey data, the 1984-2014 Landsat long time-series data and other multi-source data, we constructed a remote sensing monitoring index system for desertified land in northwestern Liaoning. Through the analysis of the spatio-temporal and spectral characteristics of desertified land, a method for multi-spectral remote sensing image recognition of desertified land under time-space constraints is proposed. This method was used to identify and extract the distribution and classification of desertified land in Chaoyang City (a typical desertification city in northwestern Liaoning) in 2008 and 2014, and to monitor the changes and transfers of desertified land from 2008 to 2014. Sandification information was added to the analysis of traditional landscape changes, the analysis model of the desertified land landscape index was improved, and the characteristics and laws of landscape dynamics and landscape pattern change of desertified land from 2008 to 2014 were analyzed and revealed.

  12. Flexible Job-Shop Scheduling with Dual-Resource Constraints to Minimize Tardiness Using Genetic Algorithm

    Science.gov (United States)

    Paksi, A. B. N.; Ma'ruf, A.

    2016-02-01

    In general, both machines and human resources are needed for processing a job on the production floor. However, most classical scheduling problems have ignored the possible constraint caused by the availability of workers and have considered only machines as a limited resource. In addition, along with the development of production technology, routing flexibility appears as a consequence of high product variety and medium demand for each product. Routing flexibility arises from the capability of machines to offer more than one machining process. This paper presents a method to address the scheduling problem constrained by both machines and workers, considering routing flexibility. Scheduling in a Dual-Resource Constrained shop is categorized as an NP-hard problem that needs long computational time. A meta-heuristic approach based on a Genetic Algorithm is used due to its practical implementability in industry. The developed Genetic Algorithm uses an indirect chromosome representation and a procedure to transform the chromosome into a Gantt chart. Genetic operators, namely selection, elitism, crossover, and mutation, are developed to search for the best fitness value until a steady-state condition is achieved. A case study in a manufacturing SME is used, with tardiness minimization as the objective function. The algorithm has shown a 25.6% reduction in tardiness, equal to 43.5 hours.
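A toy version of the GA loop described above (tournament selection, elitism, order crossover, and swap mutation on a permutation chromosome). To stay self-contained it schedules a single machine and minimizes total tardiness; the paper's dual-resource, flexible-routing problem adds machine and worker assignment on top of this, and the job data below are made up for illustration.

```python
import random

JOBS = [(4, 6), (2, 4), (6, 11), (3, 7), (5, 16)]  # (processing time, due date)

def tardiness(order):
    """Total tardiness of a job sequence on a single machine."""
    t, total = 0, 0
    for j in order:
        p, d = JOBS[j]
        t += p
        total += max(0, t - d)
    return total

def evolve(pop_size=30, generations=80, seed=1):
    rng = random.Random(seed)
    n = len(JOBS)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=tardiness)
        nxt = pop[:2]                    # elitism: carry the two best over
        while len(nxt) < pop_size:
            a = min(rng.sample(pop, 3), key=tardiness)  # tournament selection
            b = min(rng.sample(pop, 3), key=tardiness)
            i, j = sorted(rng.sample(range(n), 2))
            middle = a[i:j]              # order crossover (OX)
            rest = [g for g in b if g not in middle]
            child = rest[:i] + middle + rest[i:]
            if rng.random() < 0.2:       # swap mutation
                x, y = rng.sample(range(n), 2)
                child[x], child[y] = child[y], child[x]
            nxt.append(child)
        pop = nxt
    return min(pop, key=tardiness)

best = evolve()
```

The OX operator keeps every child a valid permutation, so no repair step is needed; on this tiny instance the loop converges to an earliest-due-date-like sequence.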

  13. Morphology, topography, and hardness of diffusion bonded sialon to AISI 420 at different bonding time

    Science.gov (United States)

    Ibrahim, Nor Nurulhuda Md.; Hussain, Patthi; Awang, Mokhtar

    2015-07-01

    Sialon and AISI 420 martensitic stainless steel were diffusion bonded in order to study the effect of bonding time on the growth of the reaction layer. Joining of these materials was conducted at 1200°C under a uniaxial pressure of 17 MPa in a vacuum ranging from 5.0 to 8.0×10-6 Torr, with the bonding time varied over 0.5, 2, and 3 h. A thicker reaction layer formed in the samples bonded for longer times, since the elements from the sialon could diffuse further into the steel. The sialon retained its microstructure, but it was affected at the initial contact with the steel, where the new interface layer formed. The diffusion layer grew toward the steel and was segregated from the parent steel as a result of the difference in properties between these regions. The segregation formed a stream-like structure whose depth decreased when the bonding time was increased. The microstructure of the steel transformed into large grains with precipitates. Prolonging the bonding time produced more precipitates in the steel and reduced the steel thickness as well. Interdiffusion of elements occurred between the joined materials, with concentrations decreasing toward the steel and vice versa. Silicon diffused easily into the steel because it possesses a lower ionization potential than nitrogen. The formation of silicides and other compounds such as carbides was detected in the interface layer and the steel grain boundaries, respectively. These compounds are harmful, since silicides are brittle and the precipitation of carbides in the grain boundaries might cause intergranular corrosion cracking. The sialon retained its hardness, but the hardness dropped to a very low value at the interface layer. The absence of cracks at the joint in all samples could be attributed to the ductility of the reaction layer, which compensated the residual stress formed upon cooling.

  14. Deepening Contractions and Collateral Constraints

    DEFF Research Database (Denmark)

    Jensen, Henrik; Ravn, Søren Hove; Santoro, Emiliano

    and occasionally non-binding credit constraints. Easier credit access increases the likelihood that constraints become slack in the face of expansionary shocks, while contractionary shocks are further amplified due to tighter constraints. As a result, busts gradually become deeper than booms. Based...

  15. Temporal Concurrent Constraint Programming

    DEFF Research Database (Denmark)

    Valencia, Frank Dan

    Concurrent constraint programming (ccp) is a formalism for concurrency in which agents interact with one another by telling (adding) and asking (reading) information in a shared medium. Temporal ccp extends ccp by allowing agents to be constrained by time conditions. This dissertation studies...... temporal ccp by developing a process calculus called ntcc. The ntcc calculus generalizes the tcc model, the latter being a temporal ccp model for deterministic and synchronouss timed reactive systems. The calculus is built upon few basic ideas but it captures several aspects of timed systems. As tcc, ntcc...... structures, robotic devises, multi-agent systems and music applications. The calculus is provided with a denotational semantics that captures the reactive computations of processes in the presence of arbitrary environments. The denotation is proven to be fully-abstract for a substantial fragment...

  16. New constraints on time-dependent variations of fundamental constants using Planck data

    Science.gov (United States)

    Hart, Luke; Chluba, Jens

    2018-02-01

    Observations of the cosmic microwave background (CMB) today allow us to answer detailed questions about the properties of our Universe, targeting both standard and non-standard physics. In this paper, we study the effects of varying fundamental constants (i.e. the fine-structure constant, αEM, and the electron rest mass, me) around last scattering using the recombination codes COSMOREC and RECFAST++. We approach the problem in a pedagogical manner, illustrating the importance of various effects on the free electron fraction, Thomson visibility function and CMB power spectra, highlighting various degeneracies. We demonstrate that the simpler RECFAST++ treatment (based on a three-level atom approach) can be used to accurately represent the full computation of COSMOREC. We also include explicit time-dependent variations using a phenomenological power-law description. We reproduce previous Planck 2013 results in our analysis. Assuming constant variations relative to the standard values, we find the improved constraints αEM/αEM,0 = 0.9993 ± 0.0025 (CMB only) and me/me,0 = 1.0039 ± 0.0074 (including BAO) using Planck 2015 data. For a redshift-dependent variation, αEM(z) = αEM(z0) [(1 + z)/1100]^p with αEM(z0) ≡ αEM,0 at z0 = 1100, we obtain p = 0.0008 ± 0.0025. Allowing simultaneous variations of αEM(z0) and p yields αEM(z0)/αEM,0 = 0.9998 ± 0.0036 and p = 0.0006 ± 0.0036. We also discuss combined limits on αEM and me. Our analysis shows that existing data are sensitive not only to the value of the fundamental constants around recombination but also to their first time derivative. This suggests that a wider class of varying fundamental constant models can be probed using the CMB.
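The power-law model quoted above is straightforward to evaluate. The sketch below just encodes αEM(z) = αEM(z0) · [(1 + z)/1100]^p with z0 = 1100, and shows how weakly a best-fit p of order 10^-3 perturbs αEM at lower redshift (the sample redshift is illustrative):

```python
def alpha_em(z, p, alpha_z0=1.0, z0_scale=1100.0):
    """Phenomenological power-law redshift dependence
    alpha_EM(z) = alpha_EM(z0) * ((1 + z)/1100)**p, with z0 = 1100."""
    return alpha_z0 * ((1.0 + z) / z0_scale) ** p

# Fractional shift relative to the pivot value for the best-fit p ~ 0.0008:
shift_at_300 = alpha_em(300.0, 0.0008) - 1.0  # small and negative for z < z0
```

At the pivot redshift the ratio is unity by construction, so the constraint on p is effectively a constraint on the first time derivative of αEM around recombination, as the abstract notes.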

  17. The material co-construction of hard science fiction and physics

    Science.gov (United States)

    Hasse, Cathrine

    2015-12-01

    This article explores the relationship between hard science fiction, physics and a gendered culture of science. Empirical studies indicate that science fiction references might spur some students' interest in physics and help develop this interest throughout school, into a university education, and even later inspire the practice of doing science. There are many kinds of fiction within the science fiction genre. In the empirical exploration presented here, physics students seem particularly fond of what is called `hard science fiction': a particular type of science fiction dealing with technological developments (Hartwell and Cramer in The hard SF renaissance, Orb/TOR, New York, 2002). However, hard science fiction as a motivating fantasy may also come with a gender bias. The locally materialized techno-fantasies spurring dreams of the terraforming of planets like Mars and travels in time and space may not be shared by all physics students. Female students especially express a need for other concerns in science. The entanglement of physics with hard science fiction may thus help develop some students' interest in learning school physics and create an interest in studying physics at university level. But research indicates that female students in particular are not captured by the hard techno-fantasies to the same extent as some of their male colleagues. Other visions (e.g. inspired by soft science fiction) are not materialized as a resource in the local educational culture. This calls for an argument that teaching science is also teaching cultural values, ethics and concerns, which may be gendered. Teaching materials, like the use of hard science fiction in education, may not just be (yet another) gender bias in science education but also a carrier of particular visions for scientific endeavours.

  18. First hard X-ray detection of the non-thermal emission around the Arches cluster: morphology and spectral studies with NuSTAR

    DEFF Research Database (Denmark)

    Krivonos, Roman A.; Tomsick, John A.; Bauer, Franz E.

    2014-01-01

    The Arches cluster is a young, densely packed massive star cluster in our Galaxy that shows a high level of star formation activity. The nature of the extended non-thermal X-ray emission around the cluster remains unclear. The observed bright Fe Kα line emission at 6.4 keV from material that is n...... and spectrum. The spatial distribution of the hard X-ray emission is found to be consistent with the broad region around the cluster where the 6.4 keV line is observed. The interpretation of the hard X-ray emission within the context of the X-ray reflection model puts a strong constraint on the luminosity...... of the possible illuminating hard X-ray source. The properties of the observed emission are also in broad agreement with the low-energy cosmic-ray proton excitation scenario....

  19. Standard test methods for rockwell hardness of metallic materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 These test methods cover the determination of the Rockwell hardness and the Rockwell superficial hardness of metallic materials by the Rockwell indentation hardness principle. This standard provides the requirements for Rockwell hardness machines and the procedures for performing Rockwell hardness tests. 1.2 This standard includes additional requirements in annexes: Verification of Rockwell Hardness Testing Machines Annex A1 Rockwell Hardness Standardizing Machines Annex A2 Standardization of Rockwell Indenters Annex A3 Standardization of Rockwell Hardness Test Blocks Annex A4 Guidelines for Determining the Minimum Thickness of a Test Piece Annex A5 Hardness Value Corrections When Testing on Convex Cylindrical Surfaces Annex A6 1.3 This standard includes nonmandatory information in appendixes which relates to the Rockwell hardness test. List of ASTM Standards Giving Hardness Values Corresponding to Tensile Strength Appendix X1 Examples of Procedures for Determining Rockwell Hardness Uncertainty Appendix X...

  20. Standard test methods for rockwell hardness of metallic materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2011-01-01

    1.1 These test methods cover the determination of the Rockwell hardness and the Rockwell superficial hardness of metallic materials by the Rockwell indentation hardness principle. This standard provides the requirements for Rockwell hardness machines and the procedures for performing Rockwell hardness tests. 1.2 This standard includes additional requirements in annexes: Verification of Rockwell Hardness Testing Machines Annex A1 Rockwell Hardness Standardizing Machines Annex A2 Standardization of Rockwell Indenters Annex A3 Standardization of Rockwell Hardness Test Blocks Annex A4 Guidelines for Determining the Minimum Thickness of a Test Piece Annex A5 Hardness Value Corrections When Testing on Convex Cylindrical Surfaces Annex A6 1.3 This standard includes nonmandatory information in appendixes which relates to the Rockwell hardness test. List of ASTM Standards Giving Hardness Values Corresponding to Tensile Strength Appendix X1 Examples of Procedures for Determining Rockwell Hardness Uncertainty Appendix X...

  1. DETECTION OF VERY HARD γ-RAY SPECTRUM FROM THE TEV BLAZAR MRK 501

    Energy Technology Data Exchange (ETDEWEB)

    Shukla, A.; Chitnis, V. R.; Acharya, B. S. [Department of High Energy Physics, Tata Institute of Fundamental Research, Mumbai 400005 (India); Mannheim, K.; Dorner, D. [Institute for Theoretical Physics and Astrophysics, Universität Würzburg, D-97074 Würzburg (Germany); Roy, J. [UM-DAE Center for Excellence in Basic Sciences, Mumbai 400098 (India); Hughes, G.; Biland, A. [ETH Zurich, Institute for Particle Physics, Otto-Stern-Weg 5, 8093 Zurich (Switzerland)

    2016-12-01

    The occasional hardening of the GeV-to-TeV spectrum observed from the blazar Mrk 501 has reopened the debate on the physical origin of radiation and particle acceleration processes in TeV blazars. We have used ∼7 years of Fermi-LAT data to search for time intervals with unusually hard spectra from the nearby TeV blazar Mrk 501. We detected hard spectral components above 10 GeV with photon index <1.5 at a significance level of more than 5 sigma on 17 occasions, each with a 30 day integration time. The photon index of the hardest component reached a value of 0.89 ± 0.29. We interpret these hard spectra as signatures of intermittent injection of sharply peaked and localized particle distributions at the base of the jet.

  2. Occupational dose constraint

    International Nuclear Information System (INIS)

    Heilbron Filho, Paulo Fernando Lavalle; Xavier, Ana Maria

    2005-01-01

    The revision process of the international radiological protection regulations has resulted in the adoption of new concepts, such as practice, intervention, avoidable dose and restriction of dose (dose constraint). The latter deserves special mention, since it may involve an a priori reduction of the dose limits established both for the public and for occupationally exposed individuals, values that can be further reduced depending on the application of the principle of optimization. This article aims to present clearly, starting from the criteria adopted to define dose constraint values for the public, a methodology to establish dose constraint values for occupationally exposed individuals, as well as an example of the application of this methodology to the practice of industrial radiography.

  3. Integration of non-periodic events in real-time systems: application to event management in the Real-Time Specification for Java

    OpenAIRE

    Masson , Damien

    2008-01-01

    In computer science, real-time systems are composed of tasks. Each task is associated with a timing constraint called a deadline. We distinguish two kinds of tasks: hard and soft. Hard tasks have hard deadlines, which must be respected to ensure the correctness of the system; hard tasks are therefore in essence periodic or sporadic, and their behavior has been extensively studied. Soft tasks have soft deadlines that the system has to try to respect. When a task arrival model is unknown...

  4. Sub-second pulsations simultaneously observed at microwaves and hard X-rays in a solar burst

    International Nuclear Information System (INIS)

    Takakura, T.; Degaonkar, S.S.; Nitta, N.; Ohki, N.

    1982-11-01

    Sub-second time structures have been found in the emissions of solar bursts in mm-waves and, independently, in hard X-rays. However, simultaneous observations of such fast time structures in the mm radio and X-ray ranges have not been available so far. Accordingly, coordinated observations of solar bursts in November 1981 with a high time resolution of a few milliseconds were planned. The hard X-rays (30-40 keV) were observed with the hard X-ray monitor (HXM) aboard the Hinotori satellite with a time resolution of 7.81 ms, and the radio emissions were observed on the ground with the 45 ft dish at Itapetinga Radio Observatory with high time resolution (1 ms) and high sensitivity at 22 GHz and 44 GHz, supplemented by a patrol observation at 7 GHz with a time resolution of 100 ms. The pulsations repeated with a period of about 300 ms. The physical implication of the good correlation is not clear at this stage, but it may give a clue to the understanding of the high-energy phenomena occurring during solar flares. (Author) [pt

  5. Optimal consumption-portfolio problem with CVaR constraints

    International Nuclear Information System (INIS)

    Zhang, Qingye; Gao, Yan

    2016-01-01

    Optimal portfolio selection is a fundamental issue in finance, and its two most important ingredients are risk and return. Merton's pioneering work in dynamic portfolio selection emphasized only the expected utility of consumption and of the terminal wealth. To make the optimal portfolio strategy achievable, risk control over bankruptcy during the investment horizon is an indispensable ingredient. So, in this paper, we consider the consumption-portfolio problem coupled with dynamic risk control. More specifically, differing from the existing literature, we impose a dynamic relative CVaR constraint on it. By stochastic dynamic programming techniques, we derive the corresponding Hamilton–Jacobi–Bellman (HJB) equation. Moreover, by the Lagrange multiplier method, a closed-form solution is provided when the utility function is logarithmic. Finally, an illustrative empirical study is given. The results show the distinct difference between the portfolio strategies with and without the CVaR constraint: the proportion invested in the risky assets is reduced over time with the CVaR constraint instead of being constant without it. This can provide a good decision-making reference for investors.
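The risk measure this record constrains can be illustrated with a static sample estimate: CVaR at level α is the mean of the worst (1 − α) fraction of losses. This is a minimal sketch only; the paper imposes a *dynamic relative* CVaR constraint, which this snippet does not reproduce, and the loss numbers below are hypothetical:

```python
def cvar(losses, alpha=0.95):
    """Conditional Value-at-Risk: mean of the worst (1 - alpha)
    fraction of a loss sample.  Static estimate for illustration;
    the paper's constraint is dynamic and relative."""
    xs = sorted(losses, reverse=True)            # largest losses first
    k = max(1, int(round(len(xs) * (1 - alpha))))
    tail = xs[:k]
    return sum(tail) / len(tail)

# hypothetical portfolio losses (positive = loss)
losses = [-0.02, 0.01, 0.03, -0.01, 0.10, 0.00, 0.02, -0.03, 0.05, 0.04]
print(cvar(losses, alpha=0.90))  # mean of the single worst loss: 0.1
```

A CVaR constraint in an optimization then simply requires `cvar(losses, alpha) <= limit` for each candidate strategy.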

  6. Exploring the impact of constraints in quantum optimal control through a kinematic formulation

    International Nuclear Information System (INIS)

    Donovan, Ashley; Beltrani, Vincent; Rabitz, Herschel

    2013-01-01

    Highlights: • This work lays a foundation for studying constraints in quantum control simulations. • The underlying quantum control landscape in the presence of constraints is explored. • Constrained controls can encounter suboptimal traps in the landscape. • The controls are kinematic stand-ins for dynamic time-dependent controls. • A method is developed to transfer between constrained kinematic and dynamic controls. - Abstract: The control of quantum dynamics with tailored laser fields is finding growing experimental success. In practice, experiments will be subject to constraints on the controls that may prevent full optimization of the objective. A framework is presented for systematically investigating the impact of constraints in quantum optimal control simulations using a two-stage process starting with simple time-independent kinematic controls, which act as stand-ins for the traditional dynamic controls. The objective is a state-to-state transition probability, and constraints are introduced by restricting the kinematic control variables during optimization. As a second stage, the means to map from kinematic to dynamic controls is presented, thus enabling a simplified overall procedure for exploring how limited resources affect the ability to optimize the objective. A demonstration of the impact of imposing several types of kinematic constraints is investigated, thereby offering insight into constrained quantum controls

  7. RAPID SPECTRAL CHANGES OF CYGNUS X-1 IN THE LOW/HARD STATE WITH SUZAKU

    Energy Technology Data Exchange (ETDEWEB)

    Yamada, S.; Makishima, K. [Cosmic Radiation Laboratory, Institute of Physical and Chemical Research (RIKEN), Wako, Saitama 351-0198 (Japan); Negoro, H. [Department of Physics, College of Science and Technology, Nihon University, 1-8 Kanda-Surugadai, Chiyoda-ku, Tokyo 101-8308 (Japan); Torii, S.; Noda, H. [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Mineshige, S. [Department of Astronomy, Kyoto University, Kitashirakawa Oiwake-cho, Sakyo-ku, Kyoto 606-8502 (Japan)

    2013-04-20

    Rapid spectral changes in the hard X-rays on a timescale down to ∼0.1 s are studied by applying a "shot analysis" technique to the Suzaku observations of the black hole binary Cygnus X-1, performed on 2008 April 18 during the low/hard state. We successfully obtained the shot profiles, covering 10-200 keV with the Suzaku HXD-PIN and HXD-GSO detectors. It is notable that the 100-200 keV shot profile is acquired for the first time owing to the HXD-GSO detector. The intensity changes in a time-symmetric way, though the hardness changes in a time-asymmetric way. When the shot-phase-resolved spectra are quantified with the Compton model, the Compton y-parameter and the electron temperature are found to decrease gradually through the rising phase of the shot, while the optical depth appears to increase. All the parameters return to their time-averaged values immediately, within 0.1 s past the shot peak. We have not only confirmed this feature previously found at energies below ∼60 keV, but also found that the spectral change is more prominent at energies above ∼100 keV, implying the existence of some instant mechanism for direct entropy production. We discuss possible interpretations of the rapid spectral changes in the hard X-ray band.

  8. Impacts of Base-Case and Post-Contingency Constraint Relaxations on Static and Dynamic Operational Security

    Science.gov (United States)

    Salloum, Ahmed

    Constraint relaxation by definition means that certain security, operational, or financial constraints are allowed to be violated in the energy market model for a predetermined penalty price. System operators utilize this mechanism in an effort to impose a price cap on shadow prices throughout the market. In addition, constraint relaxations can serve as corrective approximations that help in reducing the occurrence of infeasible or extreme solutions in the day-ahead markets. This work aims to capture the impact constraint relaxations have on system operational security. Moreover, this analysis also provides a better understanding of the correlation between DC market models and AC real-time systems and analyzes how relaxations in market models propagate to real-time systems. This information can be used not only to assess the criticality of constraint relaxations, but also as a basis for determining penalty prices more accurately. The constraint relaxation practice was replicated in this work using a test case and a real-life large-scale system, while capturing both energy market aspects and AC real-time system performance. The system performance investigation included static and dynamic security analysis for base-case and post-contingency operating conditions. PJM peak hour loads were dynamically modeled in order to capture delayed voltage recovery and sustained depressed voltage profiles as a result of reactive power deficiency caused by constraint relaxations. Moreover, impacts of constraint relaxations on operational system security were investigated when risk-based penalty prices are used. Transmission lines in the PJM system were categorized according to their risk index, and each category was assigned a different penalty price accordingly in order to avoid real-time overloads on high-risk lines. This work also extends the investigation of constraint relaxations to post-contingency relaxations, where emergency limits are allowed to be relaxed in energy market models.

  9. Thermal spray coatings replace hard chrome

    International Nuclear Information System (INIS)

    Schroeder, M.; Unger, R.

    1997-01-01

    Hard chrome plating provides good wear and erosion resistance, as well as good corrosion protection and fine surface finishes. Until a few years ago, it could also be applied at a reasonable cost. However, because of the many environmental and financial sanctions that have been imposed on the process over the past several years, cost has been on a consistent upward trend, and is projected to continue to escalate. Therefore, it is very important to find a coating or a process that offers the same characteristics as hard chrome plating, but without the consequent risks. This article lists the benefits and limitations of hard chrome plating, and describes the performance of two thermal spray coatings (tungsten carbide and chromium carbide) that compared favorably with hard chrome plating in a series of tests. It also lists three criteria to determine whether plasma spray or hard chrome plating should be selected

  10. CORRELATION OF HARD X-RAY AND WHITE LIGHT EMISSION IN SOLAR FLARES

    Energy Technology Data Exchange (ETDEWEB)

    Kuhar, Matej; Krucker, Säm; Battaglia, Marina; Kleint, Lucia; Casadei, Diego [University of Applied Sciences and Arts Northwestern Switzerland, Bahnhofstrasse 6, 5210 Windisch (Switzerland); Oliveros, Juan Carlos Martinez; Hudson, Hugh S. [Space Sciences Laboratory, University of California, Berkeley, CA 94720-7450 (United States)

    2016-01-01

    A statistical study of the correlation between hard X-ray and white light emission in solar flares is performed in order to search for a link between flare-accelerated electrons and white light formation. We analyze 43 flares spanning GOES classes M and X using observations from the Reuven Ramaty High Energy Solar Spectroscopic Imager and the Helioseismic and Magnetic Imager. We calculate X-ray fluxes at 30 keV and white light fluxes at 6173 Å summed over the hard X-ray flare ribbons, with an integration time of 45 s around the peak hard X-ray time. We find a good correlation between hard X-ray fluxes and excess white light fluxes, with the highest correlation coefficient of 0.68 for photons with an energy of 30 keV. Assuming the thick target model, a similar correlation is found between the power deposited by flare-accelerated electrons and the white light fluxes. The correlation coefficient is found to be largest for energy deposition by electrons above ∼50 keV. At higher electron energies the correlation decreases gradually, while a rapid decrease is seen if the energy provided by low-energy electrons is added. This suggests that flare-accelerated electrons of energy ∼50 keV are the main source for white light production.
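The coefficient of 0.68 reported above is a standard correlation over paired flux measurements. As a minimal sketch of that computation (the flux arrays below are synthetic, hypothetical numbers, not the study's data), a plain Pearson r can be written as:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical paired 30 keV X-ray and excess white-light fluxes
hxr = [1.0, 2.0, 3.0, 4.0, 5.0]
wl  = [1.1, 1.9, 3.2, 3.8, 5.1]
print(round(pearson_r(hxr, wl), 3))
```

Applied to the study's 43 flares, the same statistic yields the energy-dependent coefficients the abstract discusses.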

  11. Wave functions constructed from an invariant sum over histories satisfy constraints

    International Nuclear Information System (INIS)

    Halliwell, J.J.; Hartle, J.B.

    1991-01-01

    Invariance of classical equations of motion under a group parametrized by functions of time implies constraints between canonical coordinates and momenta. In the Dirac formulation of quantum mechanics, invariance is normally imposed by demanding that physical wave functions are annihilated by the operator versions of these constraints. In the sum-over-histories quantum mechanics, however, wave functions are specified, directly, by appropriate functional integrals. It therefore becomes an interesting question whether the wave functions so specified obey the operator constraints of the Dirac theory. In this paper, we show for a wide class of theories, including gauge theories, general relativity, and first-quantized string theories, that wave functions constructed from a sum over histories are, in fact, annihilated by the constraints provided that the sum over histories is constructed in a manner which respects the invariance generated by the constraints. By this we mean a sum over histories defined with an invariant action, invariant measure, and an invariant class of paths summed over

  12. Structure, production and properties of high-melting compounds and systems (hard materials and hard metals)

    International Nuclear Information System (INIS)

    Holleck, H.; Thuemmler, F.

    1979-07-01

    The report contains contributions by various authors to the research project on the production, structure, and physical properties of high-melting compounds and systems (hard metals and hard materials), in particular WC-, TaC-, and MoC-base materials. (GSCH) [de

  13. Deaf and hard of hearing students' perspectives on bullying and school climate.

    Science.gov (United States)

    Weiner, Mary T; Day, Stefanie J; Galvan, Dennis

    2013-01-01

    Student perspectives reflect school climate. The study examined perspectives among deaf and hard of hearing students in residential and large day schools regarding bullying, and compared these perspectives with those of a national database of hearing students. The participants were 812 deaf and hard of hearing students in 11 U.S. schools. Data were derived from the Olweus Bullying Questionnaire (Olweus, 2007b), a standardized self-reported survey with multiple-choice questions focusing on different aspects of bullying problems. Significant bullying problems were found in deaf school programs. It appears that deaf and hard of hearing students experience bullying at rates 2-3 times higher than those reported by hearing students. Deaf and hard of hearing students reported that school personnel intervened less often when bullying occurred than was reported in the hearing sample. Results indicate the need for school climate improvement for all students, regardless of hearing status.

  14. Navigating contextual constraints in discourse: Design explications in institutional talk

    Science.gov (United States)

    Herijgers, MLC (Marloes); Maat, HLW (Henk) Pander

    2017-01-01

    Although institutional discourse is subject to a vast ensemble of constraints, its design is not fixed beforehand. On the contrary, optimizing the satisfaction of these constraints requires considerable discourse design skills from institutional agents. In this article, we analyze how Dutch banks’ mortgage advisors navigate their way through the consultation context. We focus on what we call discourse design explications, that is, stretches of talk in which participants refer to conflicting constraints in the discourse context, at the same time proposing particular discourse designs for dealing with these conflicts. We start by discussing three forms of design explication. We then examine the various resolutions they propose for constraint conflicts and show how advisors seek customer consent or cooperation for the proposed designs. Thus our analysis reveals how institutional agents, while providing services, work on demonstrating how the design of these services is optimized and tailored to customers. PMID:28781580

  15. The Manpower Allocation Problem with Time Windows and Job-Teaming Constraints: A Branch-and-Price Approach - Technical Report

    DEFF Research Database (Denmark)

    Hansen, Anders Dohn; Kolind, Esben; Clausen, Jens

    In this paper, we consider the Manpower Allocation Problem with Time Windows, Job-Teaming Constraints and a limited number of teams (m-MAPTWTC). Given a set of teams and a set of tasks, the problem is to assign to each team a sequential order of tasks to maximize the total number of assigned tasks. Both teams and tasks may be restricted by time windows outside which operation is not possible. Some tasks require cooperation between teams, and all teams cooperating must initiate execution simultaneously. We present an IP-model for the problem, which is decomposed using Dantzig-Wolfe decomposition. The problem is solved by column generation in a Branch-and-Price framework. Simultaneous execution of tasks is enforced by the branching scheme. To test the efficiency of the proposed algorithm, 12 realistic test instances are introduced. The algorithm is able to find the optimal solution in 11 of the test...

  16. Solar constraints

    International Nuclear Information System (INIS)

    Provost, J.

    1984-01-01

    Accurate tests of the theory of stellar structure and evolution are available from observations of the Sun. The solar constraints are reviewed, with special attention to recent progress in observing global solar oscillations. Each constraint is sensitive to a given region of the Sun. The present solar models (standard, low Z, mixed) are discussed with respect to the neutrino flux, low- and high-degree five-minute oscillations, and low-degree internal gravity modes. At present, no solar model is able to fully account for all the observed quantities. (Auth.)

  17. THE EFFICIENCY OF ELECTROCOAGULATION PROCESS USING ALUMINUM ELECTRODES IN REMOVAL OF HARDNESS FROM WATER

    Directory of Open Access Journals (Sweden)

    M. Malakootian, N. Yousefi

    2009-04-01

    There are various techniques for removal of water hardness, each with its own advantages and disadvantages. The electrochemical, or electrocoagulation, method has gained great attention due to its simplicity and is used for removal of various ions and organic matter. The aim of the present study was to investigate the efficiency of this technique in removing water hardness under different conditions. This experimental study was performed using a pilot plant comprised of a reservoir containing aluminum sheet electrodes. The electrodes were connected as monopolar, and a power supply was used to provide direct electrical current. Drinking water of Kerman (southeast of Iran) was used in the experiments. The efficiency of the system at three different pH values, voltages, and time intervals was determined. Results showed an efficiency of 95.6% for the electrocoagulation technique in hardness removal. pH and electrical potential had a direct effect on hardness removal, such that the highest efficiency was obtained at pH 10.1, a potential difference of 20 V, and a detention time of 60 minutes. Considering the efficiency obtained in the present study, electrocoagulation may be suggested as an effective alternative technique for hardness removal.
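The 95.6% figure above is a removal efficiency, conventionally computed as (C_in − C_out)/C_in × 100 from influent and effluent concentrations. The hardness values below are hypothetical, chosen only to reproduce the reported percentage:

```python
def removal_efficiency(c_in, c_out):
    """Percent removal: (C_in - C_out) / C_in * 100.
    c_in, c_out: influent and effluent hardness (e.g. mg/L as CaCO3)."""
    return (c_in - c_out) / c_in * 100.0

# hypothetical hardness values; the study reports up to 95.6% removal
# at pH 10.1, 20 V, and 60 min detention time
print(round(removal_efficiency(500.0, 22.0), 1))  # 95.6
```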

  18. A compendium of chameleon constraints

    Energy Technology Data Exchange (ETDEWEB)

    Burrage, Clare [School of Physics and Astronomy, University of Nottingham, Nottingham, NG7 2RD (United Kingdom); Sakstein, Jeremy, E-mail: clare.burrage@nottingham.ac.uk, E-mail: jeremy.sakstein@port.ac.uk [Center for Particle Cosmology, Department of Physics and Astronomy, University of Pennsylvania, 209 S. 33rd St., Philadelphia, PA 19104 (United States)

    2016-11-01

    The chameleon model is a scalar field theory with a screening mechanism that explains how a cosmologically relevant light scalar can avoid the constraints of intra-solar-system searches for fifth forces. The chameleon is a popular dark energy candidate and also arises in f(R) theories of gravity. Whilst the chameleon is designed to avoid historical searches for fifth forces, it is not unobservable, and much effort has gone into identifying the best observables and experiments to detect it. These results are not always presented for the same models or in the same language, which is a particular problem when comparing astrophysical and laboratory searches and makes it difficult to understand what regions of parameter space remain. Here we present combined constraints on the chameleon model from astrophysical and laboratory searches for the first time and identify the remaining windows of parameter space. We discuss the implications for cosmological chameleon searches and future small-scale probes.

  19. A compendium of chameleon constraints

    International Nuclear Information System (INIS)

    Burrage, Clare; Sakstein, Jeremy

    2016-01-01

    The chameleon model is a scalar field theory with a screening mechanism that explains how a cosmologically relevant light scalar can avoid the constraints of intra-solar-system searches for fifth forces. The chameleon is a popular dark energy candidate and also arises in f(R) theories of gravity. Whilst the chameleon is designed to avoid historical searches for fifth forces, it is not unobservable, and much effort has gone into identifying the best observables and experiments to detect it. These results are not always presented for the same models or in the same language, which is a particular problem when comparing astrophysical and laboratory searches and makes it difficult to understand what regions of parameter space remain. Here we present combined constraints on the chameleon model from astrophysical and laboratory searches for the first time and identify the remaining windows of parameter space. We discuss the implications for cosmological chameleon searches and future small-scale probes.

  20. A NICER Look at the Aql X-1 Hard State

    DEFF Research Database (Denmark)

    Bult, Peter; Arzoumanian, Zaven; Cackett, Edward M.

    2018-01-01

    of good exposure. The spectral and timing properties of the source correspond to those of a (hard) extreme island state in the atoll classification. We find that the fractional amplitude of the low-frequency band-limited noise shows a dramatic turnover as a function of energy: it peaks at 0.5 keV with nearly 25% rms, drops to 12% rms at 2 keV, and rises to 15% rms at 10 keV. Through the analysis of covariance spectra, we demonstrate that band-limited noise exists in both the soft thermal emission and the power-law emission. Additionally, we measure hard time lags, indicating the thermal emission at 0...

  1. Hard-hard coupling assisted anomalous magnetoresistance effect in amine-ended single-molecule magnetic junction

    Science.gov (United States)

    Tang, Y.-H.; Lin, C.-J.; Chiang, K.-R.

    2017-06-01

    We propose a single-molecule magnetic junction (SMMJ), composed of a dissociated amine-ended benzene sandwiched between two Co tip-like nanowires. To better simulate the break-junction technique for real SMMJs, a first-principles calculation accounting for the hard-hard coupling between an amine linker and the Co tip atom is carried out for SMMJs under mechanical strain and an external bias. We predict an anomalous magnetoresistance (MR) effect, including strain-induced sign reversal and bias-induced enhancement of the MR value, which is in sharp contrast to the normal MR effect in conventional magnetic tunnel junctions. The underlying mechanism is the interplay between four spin-polarized currents in parallel and anti-parallel magnetic configurations, originating from the pronounced spin-up transmission feature in the parallel case and spiky transmission peaks in the other three spin-polarized channels. These intriguing findings may open a new arena in which magnetotransport and hard-hard coupling are closely coupled in SMMJs and can be dually controlled either via mechanical strain or by an external bias.

  2. Melting of polydisperse hard disks

    NARCIS (Netherlands)

    Pronk, S.; Frenkel, D.

    2004-01-01

    The melting of a polydisperse hard-disk system is investigated by Monte Carlo simulations in the semigrand canonical ensemble. This is done in the context of possible continuous melting by a dislocation-unbinding mechanism, as an extension of the two-dimensional hard-disk melting problem. We find...

  3. Theory of Constraints (TOC)

    DEFF Research Database (Denmark)

    Michelsen, Aage U.

    2004-01-01

    The reasoning behind the Theory of Constraints, together with the planning principle Drum-Buffer-Rope. Also includes a sketch of The Thinking Process.

  4. An FPGA Based Multiprocessing CPU for Beam Synchronous Timing in CERN's SPS and LHC

    CERN Document Server

    Ballester, F J; Gras, J J; Lewis, J; Savioz, J J; Serrano, J

    2003-01-01

    The Beam Synchronous Timing system (BST) will be used around the LHC and its injector, the SPS, to broadcast timing messages and synchronize actions with the beam in different receivers. To achieve beam synchronization, the BST Master card encodes messages using the bunch clock, with a nominal value of 40.079 MHz for the LHC. These messages are produced by a set of tasks every revolution period, which is every 89 μs for the LHC and every 23 μs for the SPS, therefore imposing a hard real-time constraint on the system. To achieve determinism, the BST Master uses a dedicated CPU inside its main Field Programmable Gate Array (FPGA) featuring zero-delay hardware task switching and a reduced instruction set. This paper describes the BST Master card, stressing the main FPGA design, as well as the associated software, including the LynxOS driver and the tailor-made assembler.
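The hard real-time budget described above can be made concrete from the numbers the abstract quotes: the tasks must finish within one revolution period, measured in bunch-clock ticks. The periods are rounded in the abstract, so the tick counts below are approximate:

```python
# Nominal figures quoted in the abstract; the revolution periods are
# rounded there, so the tick counts are approximate.
BUNCH_CLOCK_HZ = 40.079e6  # nominal LHC bunch clock

def ticks_per_revolution(revolution_period_s):
    """Bunch-clock cycles available per turn, i.e. the hard deadline
    (in clock ticks) within which the BST tasks must produce the next
    timing message."""
    return revolution_period_s * BUNCH_CLOCK_HZ

print(round(ticks_per_revolution(89e-6)))  # 3567 ticks per LHC turn
print(round(ticks_per_revolution(23e-6)))  # 922 ticks per SPS turn
```

The shorter SPS period is why determinism matters: every task-switch and instruction cycle must fit inside a fixed, small tick budget.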

  5. Constraint Embedding for Multibody System Dynamics

    Science.gov (United States)

    Jain, Abhinandan

    2009-01-01

    This paper describes a constraint embedding approach for the handling of local closure constraints in multibody system dynamics. The approach uses spatial operator techniques to eliminate local-loop constraints from the system and effectively convert the system into tree-topology systems. This approach allows the direct derivation of recursive O(N) techniques for solving the system dynamics and avoids the expensive steps that would otherwise be required for handling the closed-chain dynamics. The approach is very effective for systems where the constraints are confined to small subgraphs within the system topology. The paper provides background on the spatial operator O(N) algorithms and the extensions for handling embedded constraints, and concludes with some examples of such constraints.

  6. How Do Severe Constraints Affect the Search Ability of Multiobjective Evolutionary Algorithms in Water Resources?

    Science.gov (United States)

    Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.

    2015-12-01

    This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels will be compared with a problem with several severe constraints, and with a problem whose constraint thresholds are less severe. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single-year severe drought. To date, however, it is unclear whether the constraints are negatively affecting the MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first category uses control maps of algorithm performance to determine whether the algorithm's performance is sensitive to user-defined parameters. The second category uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity in the solution sets. Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or …
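    The constraint handling described in this record can be illustrated with a small sketch. The study itself uses restricted tournament selection; the fragment below instead shows the closely related feasibility-based binary tournament (Deb's rules), a widely used MOEA constraint-handling scheme, with illustrative data structures (objective tuples and constraint-value tuples) that are not taken from the paper.

```python
import random

def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (all objectives minimized)."""
    return all(a <= b for a, b in zip(f1, f2)) and any(a < b for a, b in zip(f1, f2))

def violation(g):
    """Total constraint violation: g holds constraint values, feasible when g_i <= 0."""
    return sum(max(0.0, gi) for gi in g)

def tournament(sol_a, sol_b):
    """Feasibility-based binary tournament: each solution is a pair
    (objective tuple, constraint-value tuple)."""
    cv_a, cv_b = violation(sol_a[1]), violation(sol_b[1])
    if cv_a == 0 and cv_b > 0:
        return sol_a                            # feasible beats infeasible
    if cv_b == 0 and cv_a > 0:
        return sol_b
    if cv_a > 0 and cv_b > 0:
        return sol_a if cv_a < cv_b else sol_b  # lesser violation wins
    if dominates(sol_a[0], sol_b[0]):
        return sol_a                            # both feasible: Pareto dominance
    if dominates(sol_b[0], sol_a[0]):
        return sol_b
    return random.choice([sol_a, sol_b])        # non-dominated tie
```

    Severe constraints shrink the feasible region, so early generations are dominated by the violation comparison rather than by Pareto dominance, which is one way constraint severity can affect search behavior.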

  7. Characteristic Model-Based Robust Model Predictive Control for Hypersonic Vehicles with Constraints

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2017-06-01

    Designing robust control for hypersonic vehicles in reentry is difficult, due to features of these vehicles such as strong coupling, nonlinearity, and multiple constraints. This paper proposes a characteristic model-based robust model predictive control (MPC) for hypersonic vehicles with reentry constraints. First, the hypersonic vehicle is modeled by a characteristic model composed of a linear time-varying system and a lumped disturbance. Then, the identification data are regenerated by the accumulative-sum idea from grey theory, which weakens the effect of random noise and strengthens the regularity of the identification data. Based on the regenerated data, the time-varying parameters and the disturbance are estimated online by grey identification. Finally, the mixed H2/H∞ robust predictive control law is derived based on linear matrix inequalities (LMIs) and receding horizon optimization techniques. Because MPC actively handles system constraints, the input and state constraints are satisfied in the closed-loop control system. The validity of the proposed control is verified theoretically using Lyapunov theory and illustrated by simulation results.

  8. Selection of new constraints

    International Nuclear Information System (INIS)

    Sugier, A.

    2003-01-01

    The selected new constraints should be consistent with the scale of concern, i.e. be expressed roughly as fractions or multiples of the average annual background. They should take into account risk considerations and include the values of the current limits, constraints and other action levels. The recommendation is to select four leading values for the new constraints: 500 mSv (single event or in a decade) as a maximum value, 0.01 mSv/year as a minimum value, and two intermediate values: 20 mSv/year and 0.3 mSv/year. This new set of dose constraints, representing basic minimum standards of protection for individuals and taking into account the specificity of the exposure situations, is thus coherent with the current values which can be found in ICRP Publications. A few warnings, however, need to be noted: there is no longer a multi-source limit set by ICRP. The coherence between the proposed value of the dose constraint (20 mSv/year) and the current occupational dose limit of 20 mSv/year is valid only if the workers are exposed to one single source. When there is more than one source, it will be necessary to apportion. The lifetime value of 1000 mSv used for relocation can be expressed as an annual dose, which gives approximately 10 mSv/year and is coherent with the proposed dose constraint. (N.C.)

  9. Nutrient loading and metabolism in hard-bottom littoral mesocosms

    NARCIS (Netherlands)

    Kersting, K.; Lindblad, C.

    2001-01-01

    In eight hard-bottom tidal littoral mesocosms oxygen concentrations and temperature were measured every 30 s and registered as 15 min-averages. The mesocosms were fed with water from the Oslofjord (residence time about 2 h) and the measurements were also performed in the inflow. In addition,

  10. Initial permeability and vickers hardness of thermally aged FeCu alloy

    International Nuclear Information System (INIS)

    Kikuchi, H.; Onuki, T.; Kamada, Y.; Ara, K.; Kobayashi, S.; Takahashi, S.

    2007-01-01

    The initial permeability obtained from small AC field excitation is a more useful parameter for nondestructive evaluation (NDE) of ferromagnetic materials than one obtained from a major hysteresis loop, from the viewpoints of power consumption and real-time measurement. In this paper, in order to study the possibility of applying magnetic methods to pressure vessel surveillance at nuclear power plants, the permeability of thermally aged Fe-Cu specimens was evaluated using impedance measurements, and the hardness of those specimens was also evaluated. The Vickers hardness increases as aging time increases. The permeability of the cold-rolled specimen decreases with thermal aging. On the other hand, the permeability of the as-received specimens increases at first and then decreases as thermal aging proceeds.

  11. Advances in hard nucleus cataract surgery

    Directory of Open Access Journals (Sweden)

    Wei Cui

    2013-11-01

    Safety, good visual outcomes, and fewer complications are the goals of cataract surgery, and hard-nucleus cataract surgery remains a difficult case. Many new studies indicate that micro-incision phacoemulsification is clearly effective in treating hard-nucleus cataract. This article reviews the evolution of hard-nucleus cataract surgery and recent progress in the research on artificial intraocular lenses for micro-incision surgery, and analyses the advantages and disadvantages of the various surgical methods.

  12. Simplification of integrity constraints with aggregates and arithmetic built-ins

    DEFF Research Database (Denmark)

    Martinenghi, Davide

    2004-01-01

    Both aggregates and arithmetic built-ins are widely used in current database query languages: aggregates are second-order constructs such as CNT and SUM of SQL; arithmetic built-ins include relational and other mathematical operators that apply to numbers, such as < and +. These features are also of interest in the context of database integrity constraints: correct and efficient integrity checking is crucial, as, without any guarantee of data consistency, the answers to queries cannot be trusted. In this paper we propose a method of practical relevance that can be used to derive, at database design time, simplified versions of such integrity constraints that can be tested before the execution of any update. In this way, virtually no time is spent for optimization or rollbacks at run time. Both set and bag semantics are considered.

  13. Management practices and production constraints of central ...

    African Journals Online (AJOL)

    management practices of central highland goats and their major constraints. ... tance to improve the goat production potential and livelihood of the farmers in the study ... ing the productivity and income from keeping goats, there is a study gap in ..... and day time, possibly increasing the chance of getting contagious diseases.

  14. Resolving the hard X-ray emission of GX 5-1 with INTEGRAL

    DEFF Research Database (Denmark)

    Paizis, A.; Ebisawa, K.; Tikkanen, T.

    2005-01-01

    We present the study of one year of INTEGRAL data on the neutron star low mass X-ray binary GX 5-1. Thanks to the excellent angular resolution and sensitivity of INTEGRAL, we are able to obtain a high quality spectrum of GX 5-1 from ~5 keV to ~100 keV, for the first time without contamination from the nearby black hole candidate GRS 1758-258 above 20 keV. During our observations, GX 5-1 was mostly found in the horizontal and normal branches of its hardness-intensity diagram. A clear hard X-ray emission is observed above ~30 keV which exceeds the exponential cut-off spectrum expected from lower energies. This spectral flattening may have the same origin as the hard components observed in other Z sources, as it shares the property of being characteristic of the horizontal branch. The hard excess is explained by introducing Compton up-scattering of soft photons from the neutron...

  15. Beyond mechanistic interaction: Value-based constraints on meaning in language

    Directory of Open Access Journals (Sweden)

    Joanna Rączaszek-Leonardi

    2015-10-01

    According to situated, embodied, distributed approaches to cognition, language is a crucial means for structuring social interactions. Recent approaches that emphasize the coordinative function of language treat language as a system of replicable constraints that work both on individuals and on interactions. In this paper we argue that integrating the replicable-constraints approach to language with the ecological view of values allows for a deeper insight into processes of meaning creation in interaction. Such a synthesis of these frameworks draws attention to important sources of structure in interactions beyond the sheer efficiency of a collective system in its current task situation. Most importantly, the workings of linguistic constraints will be shown as embedded in more general fields of values, which are realized on multiple time-scales. Since the ontogenetic timescale offers a convenient window into the emergence of linguistic constraints, we present illustrations of concrete mechanisms through which values may become embodied in language use in development.

  16. Orthology and paralogy constraints: satisfiability and consistency.

    Science.gov (United States)

    Lafond, Manuel; El-Mabrouk, Nadia

    2014-01-01

    A variety of methods based on sequence similarity, reconciliation, synteny or functional characteristics can be used to infer orthology and paralogy relations between genes of a given gene family G. But is a given set C of orthology/paralogy constraints possible, i.e., can they simultaneously co-exist in an evolutionary history for G? While previous studies have focused on full sets of constraints, here we consider the general case where C does not necessarily involve a constraint for each pair of genes. The problem is subdivided into two parts: (1) Is C satisfiable, i.e. can we find an event-labeled gene tree G inducing C? (2) Is there such a G which is consistent, i.e., such that all displayed triplet phylogenies are included in a species tree? Previous results on the Graph sandwich problem can be used to answer (1), and we provide polynomial-time algorithms for satisfiability and consistency with a given species tree. We also describe a new polynomial-time algorithm for the case of consistency with an unknown species tree and full knowledge of pairwise orthology/paralogy relationships, as well as a branch-and-bound algorithm in the case when unknown relations are present. We show that our algorithms can be used in combination with ProteinOrtho, a sequence similarity-based orthology detection tool, to extract a set of robust orthology/paralogy relationships.
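    For the special case of a full set of relations, satisfiability has a graph characterization known from related work on symbolic ultrametrics: the orthology relations are representable by some event-labeled gene tree if and only if the orthology graph is a cograph, i.e. contains no induced path on four vertices (P4). The sketch below is a brute-force O(n^4) version of that test, practical only for small gene families; the paper's algorithms are polynomial-time and also handle partial constraint sets, which this sketch does not.

```python
from itertools import combinations

def is_cograph(vertices, orth_edges):
    """Return True iff the orthology graph is P4-free (a cograph).
    Brute-force check over all 4-vertex subsets; illustrative only."""
    adj = {v: set() for v in vertices}
    for u, v in orth_edges:
        adj[u].add(v)
        adj[v].add(u)
    for quad in combinations(vertices, 4):
        sub = [(u, v) for u, v in combinations(quad, 2) if v in adj[u]]
        # an induced P4 has exactly 3 edges and degree sequence 1,1,2,2
        if len(sub) == 3:
            deg = {v: 0 for v in quad}
            for u, v in sub:
                deg[u] += 1
                deg[v] += 1
            if sorted(deg.values()) == [1, 1, 2, 2]:
                return False  # induced P4 found: not satisfiable
    return True
```
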

  17. Constraint-preserving boundary treatment for a harmonic formulation of the Einstein equations

    Energy Technology Data Exchange (ETDEWEB)

    Seiler, Jennifer; Szilagyi, Bela; Pollney, Denis; Rezzolla, Luciano [Max-Planck-Institut fuer Gravitationsphysik, Albert-Einstein-Institut, Golm (Germany)

    2008-09-07

    We present a set of well-posed constraint-preserving boundary conditions for a first-order in time, second-order in space, harmonic formulation of the Einstein equations. The boundary conditions are tested using robust stability, linear and nonlinear waves, and are found to be both less reflective and constraint preserving than standard Sommerfeld-type boundary conditions.

  18. Constraint-preserving boundary treatment for a harmonic formulation of the Einstein equations

    International Nuclear Information System (INIS)

    Seiler, Jennifer; Szilagyi, Bela; Pollney, Denis; Rezzolla, Luciano

    2008-01-01

    We present a set of well-posed constraint-preserving boundary conditions for a first-order in time, second-order in space, harmonic formulation of the Einstein equations. The boundary conditions are tested using robust stability, linear and nonlinear waves, and are found to be both less reflective and constraint preserving than standard Sommerfeld-type boundary conditions

  19. Minimal Flavor Constraints for Technicolor

    DEFF Research Database (Denmark)

    Sakuma, Hidenori; Sannino, Francesco

    2010-01-01

    We analyze the constraints on the vacuum polarization of the standard model gauge bosons from a minimal set of flavor observables valid for a general class of models of dynamical electroweak symmetry breaking. We will show that the constraints have a strong impact on the self-coupling and masses.

  20. Exploring the time course of face matching: temporal constraints impair unfamiliar face identification under temporally unconstrained viewing.

    Science.gov (United States)

    Ozbek, Müge; Bindemann, Markus

    2011-10-01

    The identification of unfamiliar faces has been studied extensively with matching tasks, in which observers decide if pairs of photographs depict the same person (identity matches) or different people (mismatches). In experimental studies in this field, performance is usually self-paced under the assumption that this will encourage best-possible accuracy. Here, we examined the temporal characteristics of this task by limiting display times and tracking observers' eye movements. Observers were required to make match/mismatch decisions to pairs of faces shown for 200, 500, 1000, or 2000 ms, or for an unlimited duration. Peak accuracy was reached within 2000 ms and two fixations to each face. However, intermixing exposure conditions produced a context effect that generally reduced accuracy on identity mismatch trials, even when unlimited viewing of faces was possible. These findings indicate that less than 2 s are required for face matching when exposure times are variable, but temporal constraints should be avoided altogether if accuracy is truly paramount. The implications of these findings are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Hard coal; Steinkohle

    Energy Technology Data Exchange (ETDEWEB)

    Loo, Kai van de; Sitte, Andreas-Peter [Gesamtverband Steinkohle e.V. (GVSt), Herne (Germany)

    2015-07-01

    Internationally, the market for hard coal in 2014 was, for the first time in many years, characterized by stagnation. In Germany, hard coal consumption actually declined significantly, mainly owing to the decrease in power generation. Here the national energy transition has now had a noticeable negative effect on hard coal use. The political course that has been set suggests a further significant downward movement in the future. In the ongoing phase-out of the German hard coal industry, with three mines still active, there was no closure in 2014; but the next one is due at the end of 2015, and planning for the time after mining has continued.

  2. Convergence Guaranteed Nonlinear Constraint Model Predictive Control via I/O Linearization

    Directory of Open Access Journals (Sweden)

    Xiaobing Kong

    2013-01-01

    Constructing a reliable optimal solution is a key issue for nonlinear constrained model predictive control. Input-output feedback linearization is a popular method in nonlinear control. When an input-output feedback linearizing controller is used, the original linear input constraints become nonlinear constraints, and sometimes the constraints are state-dependent. This paper presents an iterative quadratic programming (IQP) routine for the continuous-time system. To guarantee its convergence, another iterative approach is incorporated. The proposed algorithm can reach a feasible solution over the entire prediction horizon. Simulation results on both a numerical example and a continuous stirred tank reactor (CSTR) demonstrate the effectiveness of the proposed method.

  3. Disposal of waste computer hard disk drive: data destruction and resources recycling.

    Science.gov (United States)

    Yan, Guoqing; Xue, Mianqiang; Xu, Zhenming

    2013-06-01

    An increasing quantity of discarded computers is accompanied by a sharp increase in the number of hard disk drives to be eliminated. A waste hard disk drive is a special form of waste electrical and electronic equipment because it holds large amounts of information that is closely connected with its user. Therefore, the treatment of waste hard disk drives is an urgent issue in terms of data security, environmental protection and sustainable development. In the present study the degaussing method was adopted to destroy the residual data on the waste hard disk drives, and the housing of the disks was used as an example to explore the coating removal process, which is the most important pretreatment for aluminium alloy recycling. The key operating points determined for degaussing were: (1) keep the platter parallel with the magnetic field direction; and (2) increasing the magnetic field intensity B and the action time t significantly improves the degaussing effect. The coating removal experiment indicated that heating the waste hard disk drive housing at a temperature of 400 °C for 24 min was the optimum condition. A novel integrated technique for the treatment of waste hard disk drives is proposed herein. This technique offers the possibility of destroying residual data, recycling the recovered resources and disposing of the disks in an environmentally friendly manner.

  4. On Maximizing the Throughput of Packet Transmission under Energy Constraints.

    Science.gov (United States)

    Wu, Weiwei; Dai, Guangli; Li, Yan; Shan, Feng

    2018-06-23

    More and more Internet of Things (IoT) wireless devices have been providing ubiquitous services over recent years. Since most of these devices are powered by batteries, a fundamental trade-off to be addressed is between the energy depleted and the data throughput achieved in wireless data transmission. By exploiting the rate-adaptive capacities of wireless devices, most existing works on energy-efficient data transmission try to design rate-adaptive transmission policies that maximize the number of transmitted data bits under the energy constraints of devices. Such solutions, however, cannot apply to scenarios where data packets have individual deadlines and only integrally transmitted data packets contribute. Thus, this paper introduces a notion of weighted throughput, which measures the total value of the data packets that are successfully and integrally transmitted before their own deadlines. By designing efficient rate-adaptive transmission policies, this paper aims to make the best use of the energy and maximize the weighted throughput. More challenging, but of practical significance, we consider the fading effect of wireless channels in both offline and online scenarios. In the offline scenario, we develop an algorithm that computes the optimal solution in pseudo-polynomial time, which is the best possible since the problem is NP-hard. In the online scenario, we propose an efficient heuristic algorithm based on optimality properties derived for the offline solution. Simulation results validate the efficiency of the proposed algorithm.
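    The knapsack-like core of the offline problem explains the pseudo-polynomial bound: packets must be transmitted integrally, so choosing which packets to send under an energy budget is a 0/1 selection problem. The sketch below ignores deadlines and channel fading, which the paper's algorithm additionally handles, and uses invented names; it only shows why a dynamic program over integer energy levels yields pseudo-polynomial time.

```python
def max_weighted_throughput(packets, energy_budget):
    """Pseudo-polynomial DP sketch: packets are (energy_cost, value) pairs
    with integer costs.  dp[e] = best total value achievable with at most
    e units of energy; each packet is either sent integrally or dropped."""
    dp = [0] * (energy_budget + 1)
    for cost, value in packets:
        for e in range(energy_budget, cost - 1, -1):  # 0/1 choice per packet
            dp[e] = max(dp[e], dp[e - cost] + value)
    return dp[energy_budget]
```

    The running time is O(n * energy_budget), polynomial in the numeric value of the budget but not in its bit length, which is the best one can expect for an NP-hard problem.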

  5. Efficient Searching with Linear Constraints

    DEFF Research Database (Denmark)

    Agarwal, Pankaj K.; Arge, Lars Allan; Erickson, Jeff

    2000-01-01

    We show how to preprocess a set S of points in R^d into an external memory data structure that efficiently supports linear-constraint queries. Each query is in the form of a linear constraint x_d ≤ a_0 + ∑_{i=1}^{d−1} a_i x_i; the data structure must report all the points of S that satisfy the constraint.
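    The query semantics can be stated as a brute-force reference implementation: report the points lying on or below the query hyperplane. This linear scan is exactly what the paper's external-memory structure is designed to avoid; the sketch only pins down what a correct answer is.

```python
def linear_constraint_query(points, a):
    """Report all points p satisfying p_d <= a_0 + sum(a_i * p_i, i=1..d-1).
    points: list of d-tuples; a: tuple (a_0, a_1, ..., a_{d-1}).
    Brute-force O(n) reference for the query semantics."""
    a0, coeffs = a[0], a[1:]
    return [p for p in points
            if p[-1] <= a0 + sum(c * x for c, x in zip(coeffs, p[:-1]))]
```
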

  6. Solar flare hard and soft x ray relationship determined from SMM HXRBS and BCS data

    Science.gov (United States)

    Toot, G. David

    1989-01-01

    The exact nature of the solar flare process is still somewhat of a mystery. A key element to understanding flares is the relationship between the hard x rays emitted by the most energetic portions of the flare and the soft x rays from other areas and times. This relationship was studied by comparing hard x ray light curves from the Hard X-Ray Burst Spectrometer (HXRBS) with the soft x ray light curve, and its derivative, from the Bent Crystal Spectrometer (BCS), part of the X-Ray Polychromator (XRP); these instruments are on the Solar Maximum Mission spacecraft (SMM). The data sample was taken from flares observed with the above instruments during 1980, the peak of the previous maximum of solar activity. Flares were chosen based on complete coverage of the event by several instruments. The HXRBS data cover the x ray spectrum from about 25 keV to about 440 keV in 15 spectral channels, while the BCS data used cover a region of the spectrum around 3 angstroms including emission from the Ca XIX ion. Both sets of data were summed over their spectral ranges and plotted against time at a maximum time resolution of around 3 seconds. The most popular theory of flares holds that a beam of electrons produces the hard x rays by bremsstrahlung while the soft x rays are the thermal response to this energy deposition. The question is whether the rate of change of soft x ray emission might reflect the variability of the electron beam and hence the variability of the hard x rays. To address this, we took the time derivative of the soft x ray light curve and compared it to the hard x ray light curve. Of the flares studied, 12 showed very close agreement between the soft x ray derivative and the hard x ray light curve. The other five did not show this behavior but were similar to each other in general soft x ray behavior. Efforts to determine basic differences between the two kinds of flares continue. In addition, the behavior of the soft x ray temperature of flares was examined.

  7. Constraint qualifications and optimality conditions for optimization problems with cardinality constraints

    Czech Academy of Sciences Publication Activity Database

    Červinka, Michal; Kanzow, Ch.; Schwartz, A.

    2016-01-01

    Roč. 160, č. 1 (2016), s. 353-377 ISSN 0025-5610 R&D Projects: GA ČR GAP402/12/1309; GA ČR GA15-00735S Institutional support: RVO:67985556 Keywords : Cardinality constraints * Constraint qualifications * Optimality conditions * KKT conditions * Strongly stationary points Subject RIV: BA - General Mathematics Impact factor: 2.446, year: 2016 http://library.utia.cas.cz/separaty/2016/MTR/cervinka-0461165.pdf

  8. Constraint Specialisation in Horn Clause Verification

    DEFF Research Database (Denmark)

    Kafle, Bishoksan; Gallagher, John Patrick

    2015-01-01

    We present a method for specialising the constraints in constrained Horn clauses with respect to a goal. We use abstract interpretation to compute a model of a query-answer transformation of a given set of clauses and a goal. The effect is to propagate the constraints from the goal top-down and propagate answer constraints bottom-up. Our approach does not unfold the clauses at all; we use the constraints from the model to compute a specialised version of each clause in the program. The approach is independent of the abstract domain and the constraints theory underlying the clauses. Experimental...

  9. Constraint specialisation in Horn clause verification

    DEFF Research Database (Denmark)

    Kafle, Bishoksan; Gallagher, John Patrick

    2017-01-01

    We present a method for specialising the constraints in constrained Horn clauses with respect to a goal. We use abstract interpretation to compute a model of a query-answer transformed version of a given set of clauses and a goal. The constraints from the model are then used to compute a specialised version of each clause. The effect is to propagate the constraints from the goal top-down and propagate answer constraints bottom-up. The specialisation procedure can be repeated to yield further specialisation. The approach is independent of the abstract domain and the constraint theory...

  10. Building memristive and radiation hardness TiO₂-based junctions

    Energy Technology Data Exchange (ETDEWEB)

    Ghenzi, N., E-mail: n.ghenzi@gmail.com [Gerencia de Investigación y Aplicaciones, Comisión Nacional de Energía Atómica (Argentina); Rubi, D. [Gerencia de Investigación y Aplicaciones, Comisión Nacional de Energía Atómica (Argentina); ECyT, UNSAM, Martín de Irigoyen 3100, 1650 San Martín, Bs As (Argentina); Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) (Argentina); Mangano, E.; Gimenez, G. [Instituto Nacional de Tecnología Industrial (INTI) (Argentina); Lell, J. [Gerencia de Investigación y Aplicaciones, Comisión Nacional de Energía Atómica (Argentina); Zelcer, A. [Gerencia Química, Comisión Nacional de Energía Atómica (Argentina); ECyT, UNSAM, Martín de Irigoyen 3100, 1650 San Martín, Bs As (Argentina); Stoliar, P. [ECyT, UNSAM, Martín de Irigoyen 3100, 1650 San Martín, Bs As (Argentina); IMN, Université de Nantes, CNRS, 2 rue de la Houssinière, BP 32229, 44322 Nantes (France); and others

    2014-01-01

    We study micro-scale TiO₂ junctions that are suitable for use as resistive random-access memory nonvolatile devices with radiation-hard memristive properties. The fabrication and the structural and electrical characterization of the junctions are presented. We obtained a retention of 10⁵ s, an endurance of 10⁴ cycles and reliable switching with short electrical pulses (time-width below 10 ns). Additionally, the devices were exposed to 25 MeV oxygen ions. We then performed electrical measurements comparing pristine and irradiated devices in order to check the feasibility of using these junctions as memory elements with memristive and radiation-hard properties. - Highlights: • We fabricated radiation-hard memristive metal-insulator-metal junctions. • We characterized the structural properties of the devices. • We showed the feasibility of the junctions as a non-volatile memory.

  11. MEDICAL STAFF SCHEDULING USING SIMULATED ANNEALING

    Directory of Open Access Journals (Sweden)

    Ladislav Rosocha

    2015-07-01

    Purpose: The efficiency of medical staff is a fundamental feature of healthcare facility quality. Therefore, better implementation of their preferences into the scheduling problem might not only raise the work-life balance of doctors and nurses, but may also result in better patient care. This paper focuses on the optimization of medical staff preferences in the scheduling problem. Methodology/Approach: We propose a medical staff scheduling algorithm based on simulated annealing, a well-known method from statistical thermodynamics. We define hard constraints, which are linked to legal and working regulations, and minimize the violations of soft constraints, which are related to the quality of work, psychological well-being, and work-life balance of staff. Findings: On a sample of 60 physicians and nurses from a gynecology department we generated monthly schedules and optimized their preferences in terms of soft constraints. Our results indicate that the final value of the objective function optimized by the proposed algorithm shows more than 18 times fewer soft-constraint violations than the initially generated random schedule that satisfied the hard constraints. Research Limitation/Implication: Even though the global optimality of the final outcome is not guaranteed, a desirable solution was obtained in reasonable time. Originality/Value of paper: We show that the designed algorithm is able to successfully generate schedules with regard to hard and soft constraints. Moreover, the presented method is significantly faster than standard schedule generation and is able to reschedule effectively thanks to the local neighborhood search characteristics of simulated annealing.
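    The hard/soft split described in this record maps naturally onto a simulated-annealing loop. Below is a minimal sketch with illustrative function names (not the paper's implementation): the neighbor generator is assumed to return only schedules that keep all hard constraints satisfied, so the annealer minimizes soft-constraint violations alone.

```python
import math
import random

def anneal(schedule, soft_cost, neighbor, t0=10.0, cooling=0.995, steps=5000):
    """Minimal simulated-annealing sketch.  `neighbor` must return a modified
    schedule that still satisfies all hard constraints; `soft_cost` counts
    soft-constraint violations to be minimized."""
    current = best = schedule
    t = t0
    for _ in range(steps):
        cand = neighbor(current)
        delta = soft_cost(cand) - soft_cost(current)
        # always accept improvements; accept worse moves with Boltzmann probability
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = cand
            if soft_cost(current) < soft_cost(best):
                best = current
        t *= cooling  # geometric cooling schedule
    return best
```

    Accepting occasional worsening moves at high temperature is what lets the search escape local optima; as the temperature decays, the loop degenerates into plain local neighborhood search, which is also what makes fast rescheduling from an existing roster possible.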

  12. Rapid sampling of stochastic displacements in Brownian dynamics simulations with stresslet constraints

    Science.gov (United States)

    Fiore, Andrew M.; Swan, James W.

    2018-01-01

    equations of motion leads to a stochastic differential algebraic equation (SDAE) of index 1, which is integrated forward in time using a mid-point integration scheme that implicitly produces stochastic displacements consistent with the fluctuation-dissipation theorem for the constrained system. Calculations for hard sphere dispersions are illustrated and used to explore the performance of the algorithm. An open source, high-performance implementation on graphics processing units capable of dynamic simulations of millions of particles and integrated with the software package HOOMD-blue is used for benchmarking and made freely available in the supplementary material (ftp://ftp.aip.org/epaps/journ_chem_phys/E-JCPSA6-148-012805)

  13. The Cherenkov correlated timing detector: materials, geometry and timing constraints

    International Nuclear Information System (INIS)

    Aronstein, D.; Bergfeld, T.; Horton, D.; Palmer, M.; Selen, M.; Thayer, G.; Boyer, V.; Honscheid, K.; Kichimi, H.; Sugaya, Y.; Yamaguchi, H.; Yoshimura, Y.; Kanda, S.; Olsen, S.; Ueno, K.; Tamura, N.; Yoshimura, K.; Lu, C.; Marlow, D.; Mindas, C.; Prebys, E.; Pomianowski, P.

    1996-01-01

    The key parameters of Cherenkov correlated timing (CCT) detectors are discussed. Measurements of radiator geometry, optical properties of radiator and coupling materials, and photon detector timing performance are presented. (orig.)

  14. Initiative hard coal; Initiative Steinkohle

    Energy Technology Data Exchange (ETDEWEB)

    Leonhardt, J.

    2007-08-02

    In order to decrease the European Union's import dependence for hard coal, the author submitted suggestions to the director for conventional sources of energy (Directorate-General for Energy and Transport) of the European Community, which met with a positive response. These suggestions are summarized in a paper entitled 'Initiative Hard Coal'. After clarifying the starting situation and defining the target, the preconditions for better use of hard coal deposits as a raw material in the European Union are pointed out. On that basis, concrete measures are suggested. Besides the condition of the deposits, these concern new mining techniques and mining-economic developments, together with associated tasks for the mining-machinery industry. (orig.)

  15. Hard diffraction and rapidity gaps

    International Nuclear Information System (INIS)

    Brandt, A.

    1995-09-01

    The field of hard diffraction, which studies events with a rapidity gap and a hard scattering, has expanded dramatically recently. A review of new results from CDF, D0, H1 and ZEUS will be given. These results include diffractive jet production, deep-inelastic scattering in large rapidity gap events, rapidity gaps between high transverse energy jets, and a search for diffractive W-boson production. The combination of these results gives new insight into the exchanged object, believed to be the pomeron. The results are consistent with factorization and with a hard pomeron that contains both quarks and gluons. There is also evidence for the exchange of a strongly interacting color singlet in high momentum transfer events.

  16. HardCem : an innovative product and partnership

    Energy Technology Data Exchange (ETDEWEB)

    Joudrie, C. [Teck Cominco, Vancouver, BC (Canada)

    2007-07-01

    This paper described the multiple uses of Hard-Cem{sup TM}, a concrete hardener developed for ready-mix and pre-cast concrete applications. The product is engineered to improve the durability of concrete for air- and non-air-entrained construction projects including buildings, roads, bridges, dams and recreational facilities such as skate parks. The development history of Hard-Cem was reviewed along with its market introduction by Teck Cominco Limited. Technical and operating partnerships were also outlined along with future marketing opportunities. The concrete additive is engineered to increase abrasion resistance. It is added to the concrete during batching and mixing, where it is evenly dispersed through the concrete matrix with other proprietary ingredients. The recommended dosages were described along with performance data. The product was shown to save time and money while offering greater resistance to mechanical and waterborne abrasion forces in both interior and exterior concrete applications. tabs., figs.

  17. Quality Assurance of LHC Cryogenic Instrumentation during Installation and Commissioning

    CERN Document Server

    Lopez Lorente, A; Casas-Cubillos, J; Fortescue, E; Gomes, P; Jeanmonod, N; Peñacoba, G; Vauthier, N

    2009-01-01

    The operation and monitoring of the LHC requires a cryogenic instrumentation system of an unprecedented size (800 instrumentation crates, holding 15000 sensors and actuators), with strict constraints on temperature measurement uncertainty and radiation hardness for all sensors and actuators. This paper presents the quality assurance procedures applied, and the specific hardware and software tools used, to meet and track these requirements throughout the system's lifetime (fabrication, installation, commissioning, operation and maintenance), within the given constraints of time schedule, accessibility and coordination with other teams.

  18. Effect of Mo concentration and aging time on the magnetic and mechanical hardness of Fe-xMo-5Ni-0.05C alloys (x = 5, 8, 11 and 15 wt. %)

    Directory of Open Access Journals (Sweden)

    Mauro Carlos Lopes Souza

    2009-01-01

    Changes to the microstructure during thermal aging treatment at 610 ºC in Fe-xMo-5Ni-0.05C alloys were studied for different aging times and different Mo concentrations. The heat treatment at 610 ºC induces carbide precipitation into the metallic matrix near the Fe2Mo phase. The X-ray diffraction studies revealed a more intense precipitation of the α-FeMo, Fe3Mo and R(Fe63Mo37) phases and of the MoC and Fe2MoC carbides for the alloys containing 15 and 11% Mo, respectively. This work shows that the changes in hardness and coercive force are functions of the molybdenum content and the aging time. Vickers hardness and coercive force both increase with increasing molybdenum content and reach maximum values at 4 h and 1 h of aging, respectively.

  19. Genetic analysis of kernel texture (grain hardness) in a hard red spring wheat (Triticum aestivum L.) bi-parental population

    Science.gov (United States)

    Grain hardness is a very important trait in determining wheat market class and also influences milling and baking traits. At the grain Hardness (Ha) locus on chromosome 5DS, there are two primary mutations responsible for conveying a harder kernel texture among U.S. hard red spring wheats: (1) the P...

  20. Transmission and capacity pricing and constraints

    International Nuclear Information System (INIS)

    Fusco, M.

    1999-01-01

    A series of overhead viewgraphs accompanied this presentation which discussed the following issues regarding the North American electric power industry: (1) capacity pricing transmission constraints, (2) nature of transmission constraints, (3) consequences of transmission constraints, and (4) prices as market evidence. Some solutions suggested for pricing constraints included the development of contingent contracts, back-up power in supply regions, and new line capacity construction. 8 tabs., 20 figs