WorldWideScience

Sample records for automated output-only dynamic

  1. Semi-Automated Processing of Trajectory Simulator Output Files for Model Evaluation

    Science.gov (United States)

    2018-01-01

    ARL-TR-8284 ● JAN 2018 ● US Army Research Laboratory. Semi-Automated Processing of Trajectory Simulator Output Files for Model Evaluation. ...although some minor changes may be needed. The program processes a GTRAJ output text file that contains results from 2 or more simulations, where each

  2. SU-G-BRB-04: Automated Output Factor Measurements Using Continuous Data Logging for Linac Commissioning

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, X; Li, S; Zheng, D; Wang, S; Lei, Y; Zhang, M; Ma, R; Fan, Q; Wang, X; Li, X; Verma, V; Enke, C; Zhou, S [University of Nebraska Medical Center, Omaha, NE (United States)

    2016-06-15

    Purpose: Linac commissioning is a time-consuming and labor-intensive process, and streamlining it is highly desirable. In particular, manual measurement of output factors for a variety of field sizes and energies greatly hinders commissioning efficiency. In this study, automated measurement of output factors was demonstrated as ‘one-click’ using the data logging of an electrometer. Methods: Beams to be measured were created in the recording and verifying (R&V) system and configured for continuous delivery. An electrometer with an automatic data logging feature enabled continuous data collection for all fields without human intervention. The electrometer saved data into a spreadsheet every 0.5 seconds. A Matlab program was developed to analyze the spreadsheet data and to monitor and check data quality. Results: For each photon energy, output factors were measured for five configurations, including an open field and four wedges. Each configuration includes 72 field sizes, ranging from 4×4 to 20×30 cm². Using automation, it took 50 minutes to complete the measurement of 72 field sizes, in contrast to 80 minutes with the manual approach. The automation avoided the redundant Linac status checks between fields required by the manual approach. In fact, the only limiting factor in such automation is Linac overheating. The data collection beams in the R&V system are reusable, and the simplified process is less error-prone. In addition, our Matlab program extracted the output factors faithfully from the data logging, and the discrepancy between the automatic and manual measurements is within ±0.3%. For two separate automated measurements 30 days apart, a consistency check shows a discrepancy within ±1% for 6 MV photons with a 60 degree wedge. Conclusion: Automated output factor measurements can save about 40% of the time compared with the conventional manual approach. This work lays the groundwork for further automation of Linac commissioning.
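
    The record above describes a Matlab program that pulls output factors out of a continuous electrometer log. A minimal Python sketch of the same idea is given below, assuming the log is a single column of charge increments recorded every 0.5 s and that a simple threshold separates beam-on from beam-off samples; the threshold, file format, and function names are assumptions, not the authors' implementation.

      # Hypothetical sketch (not the authors' Matlab program): split a 0.5-s electrometer
      # log into beam-on segments and normalize the integrated charge per field to a
      # reference field.
      import numpy as np

      def output_factors(readings, threshold, ref_field=0):
          """readings: 1-D array of charge increments logged every 0.5 s."""
          on = readings > threshold                          # beam-on samples
          rising = np.diff(np.r_[0, on.astype(int)]) == 1    # start of each field
          seg_id = np.cumsum(rising) * on                    # 0 = beam off, 1..N = field index
          charge = np.array([readings[seg_id == k].sum()
                             for k in range(1, seg_id.max() + 1)])
          return charge / charge[ref_field]                  # output factors

      # factors = output_factors(np.loadtxt("electrometer_log.txt"), threshold=0.01)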

  3. SU-G-BRB-04: Automated Output Factor Measurements Using Continuous Data Logging for Linac Commissioning

    International Nuclear Information System (INIS)

    Zhu, X; Li, S; Zheng, D; Wang, S; Lei, Y; Zhang, M; Ma, R; Fan, Q; Wang, X; Li, X; Verma, V; Enke, C; Zhou, S

    2016-01-01

    Purpose: Linac commissioning is a time-consuming and labor-intensive process, and streamlining it is highly desirable. In particular, manual measurement of output factors for a variety of field sizes and energies greatly hinders commissioning efficiency. In this study, automated measurement of output factors was demonstrated as ‘one-click’ using the data logging of an electrometer. Methods: Beams to be measured were created in the recording and verifying (R&V) system and configured for continuous delivery. An electrometer with an automatic data logging feature enabled continuous data collection for all fields without human intervention. The electrometer saved data into a spreadsheet every 0.5 seconds. A Matlab program was developed to analyze the spreadsheet data and to monitor and check data quality. Results: For each photon energy, output factors were measured for five configurations, including an open field and four wedges. Each configuration includes 72 field sizes, ranging from 4×4 to 20×30 cm². Using automation, it took 50 minutes to complete the measurement of 72 field sizes, in contrast to 80 minutes with the manual approach. The automation avoided the redundant Linac status checks between fields required by the manual approach. In fact, the only limiting factor in such automation is Linac overheating. The data collection beams in the R&V system are reusable, and the simplified process is less error-prone. In addition, our Matlab program extracted the output factors faithfully from the data logging, and the discrepancy between the automatic and manual measurements is within ±0.3%. For two separate automated measurements 30 days apart, a consistency check shows a discrepancy within ±1% for 6 MV photons with a 60 degree wedge. Conclusion: Automated output factor measurements can save about 40% of the time compared with the conventional manual approach. This work lays the groundwork for further automation of Linac commissioning.

  4. Automated adaptive inference of phenomenological dynamical models

    Science.gov (United States)

    Daniels, Bryan

    Understanding the dynamics of biochemical systems can seem impossibly complicated at the microscopic level: detailed properties of every molecular species, including those that have not yet been discovered, could be important for producing macroscopic behavior. The profusion of data in this area has raised the hope that microscopic dynamics might be recovered in an automated search over possible models, yet the combinatorial growth of this space has limited these techniques to systems that contain only a few interacting species. We take a different approach inspired by coarse-grained, phenomenological models in physics. Akin to a Taylor series producing Hooke's Law, forgoing microscopic accuracy allows us to constrain the search over dynamical models to a single dimension. This makes it feasible to infer dynamics with very limited data, including cases in which important dynamical variables are unobserved. We name our method Sir Isaac after its ability to infer the dynamical structure of the law of gravitation given simulated planetary motion data. Applying the method to output from a microscopically complicated but macroscopically simple biological signaling model, it is able to adapt the level of detail to the amount of available data. Finally, using nematode behavioral time series data, the method discovers an effective switch between behavioral attractors after the application of a painful stimulus.

  5. A review of output-only structural mode identification literature employing blind source separation methods

    Science.gov (United States)

    Sadhu, A.; Narasimhan, S.; Antoni, J.

    2017-09-01

    Output-only modal identification has seen significant activity in recent years, especially for large-scale structures where controlled input force generation is often difficult to achieve. This has led to the development of new system identification methods which do not require controlled input. They often work satisfactorily provided the input satisfies some general, not overly restrictive, assumptions regarding its stochasticity. Hundreds of papers covering a wide range of applications appear every year related to the extraction of modal properties from output measurement data, in more than two dozen mechanical, aerospace and civil engineering journals. In little more than a decade, several researchers have adopted concepts of blind source separation (BSS) from the field of acoustic signal processing and shown that they can be attractive tools for output-only modal identification. Originally intended to separate distinct audio sources from a mixture of recordings, their mathematical equivalence to problems in linear structural dynamics has since been firmly established. This has enabled many of the developments in the field of BSS to be modified and applied to output-only modal identification problems. This paper reviews over one hundred articles related to the application of BSS and its variants to output-only modal identification. The main contribution of the paper is to present a literature review of the papers which have appeared on the subject. While a brief treatment of the basic ideas is presented where relevant, a comprehensive and critical explanation of their contents is not attempted. Specific issues related to output-only modal identification and the relative advantages and limitations of BSS methods, from both theoretical and application standpoints, are discussed. Gap areas requiring additional work are also summarized, and the paper concludes with possible future trends in this area.
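
    As a concrete illustration of the BSS idea surveyed above (one of many variants covered by the review, not a specific method from it), the sketch below applies ICA to multi-channel output-only responses; for lightly damped, well-separated modes the mixing-matrix columns approximate unscaled mode shapes and the recovered sources approximate modal coordinates.

      # Illustrative only: ICA-based separation of output-only responses.
      import numpy as np
      from sklearn.decomposition import FastICA

      def bss_modal_id(Y, fs, n_modes):
          """Y: (n_samples, n_channels) response matrix; fs: sampling rate in Hz."""
          ica = FastICA(n_components=n_modes, random_state=0)
          sources = ica.fit_transform(Y)       # estimated modal coordinates (columns)
          shapes = ica.mixing_                 # (n_channels, n_modes) mode shape estimates
          freqs = []
          for q in sources.T:                  # natural frequency from each spectral peak
              spec = np.abs(np.fft.rfft(q * np.hanning(q.size)))
              freqs.append(np.fft.rfftfreq(q.size, 1.0 / fs)[np.argmax(spec)])
          return shapes, np.array(freqs)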

  6. Automated processing of data generated by molecular dynamics

    International Nuclear Information System (INIS)

    Lobato Hoyos, Ivan; Rojas Tapia, Justo; Instituto Peruano de Energia Nuclear, Lima

    2008-01-01

    A new integrated tool for the automated processing of data generated by molecular dynamics packages and programs has been developed. The program calculates important quantities such as the pair correlation function, performs common-neighbor analysis, counts nanoparticles and determines their size distribution, and converts output files between different formats. The work explains in detail the modules of the tool and the interfaces between them. The use of the program is illustrated with application examples involving the calculation of various properties of silver nanoparticles. (author)
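
    As an illustration of one quantity mentioned above, the sketch below computes a pair correlation function g(r) for a periodic cubic simulation box; the box geometry, bin width, and input format are assumptions, and this is not the tool described in the record.

      # Minimal g(r) for a cubic box with periodic boundaries (minimum-image convention).
      import numpy as np

      def pair_correlation(positions, box, r_max, dr):
          """positions: (N, 3) array; box: cubic box length; returns bin centers and g(r)."""
          n = len(positions)
          bins = np.arange(0.0, r_max + dr, dr)
          hist = np.zeros(len(bins) - 1)
          for i in range(n - 1):
              d = positions[i + 1:] - positions[i]
              d -= box * np.round(d / box)               # minimum-image convention
              r = np.linalg.norm(d, axis=1)
              hist += np.histogram(r[r < r_max], bins=bins)[0]
          rho = n / box**3
          shell = 4.0 / 3.0 * np.pi * (bins[1:]**3 - bins[:-1]**3)
          g = 2.0 * hist / (n * rho * shell)             # factor 2: each pair counted once
          return 0.5 * (bins[1:] + bins[:-1]), g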

  7. Regimes of data output from an automated scanning system into a computer

    International Nuclear Information System (INIS)

    Ovsov, Yu.V.; Shaislamov, P.T.

    1984-01-01

    A method is described for implementing a rather complex algorithm for transmitting various coordinate and service data from the different devices of an automated scanning system to the monitoring computer of the automated system for processing bubble chamber images. The adopted data output algorithm and the equipment developed for it enable data transmission both as separate words and as word arrays.

  8. INFLUENCE OF FISCAL POLICY DYNAMICS ON OUTPUT MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Predescu Antoniu

    2013-04-01

    Dynamics of fiscal policy, more specifically a rise in fiscal pressure, obtained either by introducing one or more new taxes or by increasing at least one existing tax, has a powerful impact on output management, visible first of all in output size. But output size is not the only thing that varies after an increase in fiscal pressure, because output management deals with more than producing a certain quantity of products, material or not, goods and/or services. Products are made to be sold, and selling happens only through and with a price; price is an essential economic variable in both the microeconomic and the macroeconomic sphere. Thus, on one side, a rise in fiscal pressure determines, at least in the short term, and provided producers pay or bear a tax, whether newly introduced or newly increased, a rise in the prices of the products sold; on the other side, this results in a variation in output size, e.g. a reduced output volume, though not along a linear trend. The dynamics of this economic mechanism, whose result is a reduced volume of goods and/or services, are not linear, because the characteristics of the products also matter: the price elasticities of demand and of supply significantly influence output management in this framework.

  9. Driver compliance to take-over requests with different auditory outputs in conditional automation.

    Science.gov (United States)

    Forster, Yannick; Naujoks, Frederik; Neukum, Alexandra; Huestegge, Lynn

    2017-12-01

    Conditionally automated driving (CAD) systems are expected to improve traffic safety. Whenever the CAD system exceeds its limit of operation, designers of the system need to ensure a safe and timely enough transition from automated to manual mode. An existing visual Human-Machine Interface (HMI) was supplemented by different auditory outputs. The present work compares the effects of different auditory outputs in the form of (1) a generic warning tone and (2) additional semantic speech output on driver behavior for the announcement of an upcoming take-over request (TOR). We expect the information carried by means of speech output to lead to faster reactions and better subjective evaluations by the drivers compared to generic auditory output. To test this assumption, N=17 drivers completed two simulator drives, once with a generic warning tone ('Generic') and once with additional speech output ('Speech+generic'), while they were working on a non-driving related task (NDRT; i.e., reading a magazine). Each drive incorporated one transition from automated to manual mode when yellow secondary lanes emerged. Different reaction time measures, relevant for the take-over process, were assessed. Furthermore, drivers evaluated the complete HMI regarding usefulness, ease of use and perceived visual workload just after experiencing the take-over. They gave comparative ratings on usability and acceptance at the end of the experiment. Results revealed that reaction times reflecting information processing time (i.e., hands on the steering wheel, termination of NDRT) were shorter for 'Speech+generic' than for 'Generic', while the reaction time reflecting allocation of attention (i.e., first glance ahead) did not show this difference. Subjective ratings were in favor of the system with additional speech output. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Reinforcement learning for partially observable dynamic processes: adaptive dynamic programming using measured output data.

    Science.gov (United States)

    Lewis, F L; Vamvoudakis, Kyriakos G

    2011-02-01

    Approximate dynamic programming (ADP) is a class of reinforcement learning methods that have shown their importance in a variety of applications, including feedback control of dynamical systems. ADP generally requires full information about the system internal states, which is usually not available in practical situations. In this paper, we show how to implement ADP methods using only measured input/output data from the system. Linear dynamical systems with deterministic behavior are considered herein, which are systems of great interest in the control system community. In control system theory, these types of methods are referred to as output feedback (OPFB). The stochastic equivalent of the systems dealt with in this paper is a class of partially observable Markov decision processes. We develop both policy iteration and value iteration algorithms that converge to an optimal controller that requires only OPFB. It is shown that, similar to Q-learning, the new methods have the important advantage that knowledge of the system dynamics is not needed for the implementation of these learning algorithms or for the OPFB control. Only the order of the system, as well as an upper bound on its "observability index," must be known. The learned OPFB controller is in the form of a polynomial autoregressive moving-average controller that has equivalent performance with the optimal state variable feedback gain.

  11. Dynamic Output Feedback Robust MPC with Input Saturation Based on Zonotopic Set-Membership Estimation

    Directory of Open Access Journals (Sweden)

    Xubin Ping

    2016-01-01

    For quasi-linear parameter varying (quasi-LPV) systems with bounded disturbance, a synthesis approach of dynamic output feedback robust model predictive control (OFRMPC) with the consideration of input saturation is investigated. The saturated dynamic output feedback controller is represented by a convex hull involving the actual dynamic output controller and an introduced auxiliary controller. By taking both the actual output feedback controller and the auxiliary controller in a parameter-dependent form, the main optimization problem can be formulated as a convex optimization. The consideration of input saturation in the main optimization problem reduces the conservatism of the dynamic output feedback controller design. The estimation error set and bounded disturbance are represented by zonotopes and refreshed by zonotopic set-membership estimation. Compared with the previous results, the proposed algorithm can not only guarantee the recursive feasibility of the optimization problem, but also improve the control performance at the cost of a higher computational burden. A nonlinear continuous stirred tank reactor (CSTR) example is given to illustrate the effectiveness of the approach.

  12. Energy and output dynamics in Bangladesh

    International Nuclear Information System (INIS)

    Paul, Biru Paksha; Uddin, Gazi Salah

    2011-01-01

    The relationship between energy consumption and output is still ambiguous in the existing literature. The economy of Bangladesh, having spectacular output growth and rising energy demand as well as energy efficiency in recent decades, can be an ideal case for examining energy-output dynamics. We find that while fluctuations in energy consumption do not affect output fluctuations, movements in output inversely affect movements in energy use. The results of Granger causality tests in this respect are consistent with those of innovative accounting that includes variance decompositions and impulse responses. Autoregressive distributed lag models also suggest a role of output in Bangladesh's energy use. Hence, the findings of this study have policy implications for other developing nations where measures for energy conservation and efficiency can be relevant in policymaking.
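
    For readers who want to reproduce this kind of analysis, the sketch below runs a bivariate Granger-causality test on energy and output growth with statsmodels; the file name, variable names, and lag order are placeholders, not the authors' data or specification.

      # Granger-causality tests in both directions on log-differenced series.
      import numpy as np
      import pandas as pd
      from statsmodels.tsa.stattools import grangercausalitytests

      df = pd.read_csv("bangladesh_energy_output.csv")          # hypothetical file
      growth = df[["energy_use", "gdp"]].apply(np.log).diff().dropna()

      # Does output growth Granger-cause energy growth? (second column causes the first)
      grangercausalitytests(growth[["energy_use", "gdp"]], maxlag=2)
      # And the reverse direction:
      grangercausalitytests(growth[["gdp", "energy_use"]], maxlag=2)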

  13. INPUT-OUTPUT STRUCTURE OF LINEAR-DIFFERENTIAL ALGEBRAIC SYSTEMS

    NARCIS (Netherlands)

    KUIJPER, M; SCHUMACHER, JM

    Systems of linear differential and algebraic equations occur in various ways, for instance, as a result of automated modeling procedures and in problems involving algebraic constraints, such as zero dynamics and exact model matching. Differential/algebraic systems may represent an input-output

  14. A Study on Damage Detection Using Output-Only Modal Data

    DEFF Research Database (Denmark)

    Kharrazi, M. K. H.; Ventura, Carlos E.; Brincker, Rune

    2002-01-01

    detection using output-only data from the vibration study. A one-third scale model of a four story steel frame at the University of British Columbia was used as the test specimen. A series of forced and ambient vibration tests on this frame for various levels of damage were conducted. Damage was simulated...

  15. Quantized Passive Dynamic Output Feedback Control with Actuator Failure

    Directory of Open Access Journals (Sweden)

    Zu-Xin Li

    2016-01-01

    This paper investigates the problem of passive dynamic output feedback control for fuzzy discrete nonlinear systems with quantization and actuator failures, where the measurement output of the system is quantized by a logarithmic quantizer before being transferred to the fuzzy controller. By employing a fuzzy-basis-dependent Lyapunov function, a sufficient condition is established to guarantee that the closed-loop system is mean-square stable and achieves the prescribed passive performance. Based on this sufficient condition, a fuzzy dynamic output feedback controller is proposed for maintaining acceptable performance levels in the presence of actuator failures and quantization effects. Finally, a numerical example is given to show the usefulness of the proposed method.
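
    A minimal sketch of the kind of logarithmic quantizer assumed in such schemes is given below; the density parameter rho and reference level u0 are illustrative values only, and the rounding rule simply picks the nearest level on a log scale.

      import numpy as np

      def log_quantize(y, rho=0.8, u0=1.0):
          """Map y to the nearest quantizer level +/- u0 * rho**i (0 maps to 0)."""
          if y == 0.0:
              return 0.0
          i = np.round(np.log(abs(y) / u0) / np.log(rho))    # nearest exponent on a log scale
          return np.sign(y) * u0 * rho**i

      # in this literature such quantizers are characterized by the sector bound
      # delta = (1 - rho) / (1 + rho), which ties quantization density to performance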

  16. Interaction Dynamics Determine Signaling and Output Pathway Responses

    Directory of Open Access Journals (Sweden)

    Klement Stojanovski

    2017-04-01

    The understanding of interaction dynamics in signaling pathways can shed light on pathway architecture and provide insights into targets for intervention. Here, we explored the relevance of kinetic rate constants of a key upstream osmosensor in the yeast high-osmolarity glycerol-mitogen-activated protein kinase (HOG-MAPK) pathway to signaling output responses. We created mutant pairs of the Sln1-Ypd1 complex interface that caused major compensating changes in the association (kon) and dissociation (koff) rate constants (kinetic perturbations) but only moderate changes in the overall complex affinity (Kd). Yeast cells carrying a Sln1-Ypd1 mutant pair with moderate increases in kon and koff displayed a lower threshold of HOG pathway activation than wild-type cells. Mutants with higher kon and koff rates gave rise to higher basal signaling and gene expression but impaired osmoadaptation. Thus, the kon and koff rates of the components in the Sln1 osmosensor determine proper signaling dynamics and osmoadaptation.

  17. Drivability Improvement Control for Vehicle Start-Up Applied to an Automated Manual Transmission

    Directory of Open Access Journals (Sweden)

    Danna Jiang

    2017-01-01

    Drivability is the key factor for the automated manual transmission. It includes fast response to the driver's demand and driving comfort. This paper deals with a control methodology applied to an automated manual transmission vehicle for drivability enhancement during the vehicle start-up phase. Based on a piecewise model of the powertrain, a multiple-model predictive controller (mMPC) is designed with the engine speed, clutch disc speed, and wheel speed as the measurable input variables and the engine torque reference and clutch friction torque reference as the controller's output variables. The model includes not only the clutch dynamics and the flexible shaft dynamics but also the delay characteristics of the actuators. Taking the driver's intention into account, a slipping-speed trajectory is generated dynamically from the accelerator pedal. The designed control strategy is verified on a complete powertrain and longitudinal vehicle dynamics model with different driver torque demands.

  18. Output Only Modal Testing of a Car Body Subject to Engine Excitation

    DEFF Research Database (Denmark)

    Brincker, Rune; Andersen, Palle; Møller, Nis

    2000-01-01

    In this paper an output only modal testing and identification of a car body subject to engine excitation is presented. The response data were analyzed using two different techniques: a non-parametric technique based on Frequency Domain Decomposition (FDD), and a parametric technique working...

  19. Muscular outputs during dynamic bench press under stable versus unstable conditions.

    Science.gov (United States)

    Koshida, Sentaro; Urabe, Yukio; Miyashita, Koji; Iwai, Kanzunori; Kagimori, Aya

    2008-09-01

    Previous studies have suggested that resistance training exercise under unstable conditions decreases isometric force output, yet little is known about its influence on muscular outputs during dynamic movement. The objective of this study was to investigate the effect of an unstable condition on power, force, and velocity outputs during the bench press. Twenty male collegiate athletes (mean age, 21.3 +/- 1.5 years; mean height, 167.7 +/- 7.7 cm; mean weight, 75.9 +/- 17.5 kg) participated in this study. Each subject attempted 3 sets of single bench presses with 50% of 1 repetition maximum (1RM) under a stable condition with a flat bench and an unstable condition with a Swiss ball. Acceleration data were obtained with an accelerometer attached to the center of the barbell shaft, and peak outputs of power, force, and velocity were computed. Although significant loss of the peak outputs was found under the unstable condition, the reduction rates of the power, force, and velocity outputs were small compared with previous findings. Such small reduction rates of muscular outputs may not compromise the training effect. Prospective studies are necessary to confirm whether resistance training under an unstable condition permits the improvement of dynamic performance and trunk stability.
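
    The peak outputs described above can be derived from the barbell acceleration trace by numerical integration; a rough sketch follows, in which the sampling rate, the load, and the assumption that gravity has already been removed from the signal are placeholders rather than details reported in the record.

      # Peak power, force and velocity from a vertical barbell acceleration trace.
      import numpy as np

      def peak_outputs(acc, fs, load_kg):
          """acc: vertical barbell acceleration in m/s^2 (gravity removed); fs: Hz."""
          g = 9.81
          velocity = np.cumsum(acc) / fs                 # simple numerical integration
          force = load_kg * (acc + g)                    # force applied against the load
          power = force * velocity
          return power.max(), force.max(), velocity.max()

      # example: peak_outputs(acc_trace, fs=200.0, load_kg=0.5 * one_rm)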

  20. Automated and dynamic scheduling for geodetic VLBI - A simulation study for AuScope and global networks

    Science.gov (United States)

    Iles, E. J.; McCallum, L.; Lovell, J. E. J.; McCallum, J. N.

    2018-02-01

    As we move into the next era of geodetic VLBI, the scheduling process is one focus for improvement in terms of increased flexibility and the ability to react to changing conditions. A range of simulations were conducted to ascertain the impact of scheduling on geodetic results such as Earth Orientation Parameters (EOPs) and station coordinates. The potential capabilities of new automated scheduling modes were also simulated, using the so-called 'dynamic scheduling' technique. The primary aim was to improve efficiency in both cost and time without losing geodetic precision, particularly to maximise the use of the Australian AuScope VLBI array. We show that short breaks in observation will not significantly degrade the results of a typical 24 h experiment, whereas simply shortening observing time degrades precision exponentially. We also confirm that the new automated, dynamic scheduling mode is capable of producing the same standard of result as a traditional schedule, with close to real-time flexibility. Further, it is possible to use the dynamic scheduler to augment the 3-station Australian AuScope array and thereby attain EOPs of the current global precision with only intermittent contributions from 2 additional stations. We thus confirm that automated, dynamic scheduling bears great potential for flexibility and automation in line with aims for future continuous VLBI operations.

  1. Automation Framework for Flight Dynamics Products Generation

    Science.gov (United States)

    Wiegand, Robert E.; Esposito, Timothy C.; Watson, John S.; Jun, Linda; Shoan, Wendy; Matusow, Carla

    2010-01-01

    XFDS provides an easily adaptable automation platform. To date it has been used to support flight dynamics operations. It coordinates the execution of other applications such as Satellite Tool Kit, FreeFlyer, MATLAB, and Perl code. It provides a mechanism for passing messages among a collection of XFDS processes, and allows sending and receiving of GMSEC messages. A unified and consistent graphical user interface (GUI) is used for the various tools. Its automation configuration is stored in text files, and can be edited either directly or using the GUI.

  2. JACoW Configuring and automating an LHC experiment for faster and better physics output

    CERN Document Server

    Gaspar, Clara; Alessio, Federico; Barbosa, Joao; Cardoso, Luis; Frank, Markus; Jost, Beat; Neufeld, Niko; Schwemmer, Rainer

    2018-01-01

    LHCb has introduced a novel online detector alignment and calibration for LHC Run II. This strategy allows for better trigger efficiency, better data quality and direct physics analysis at the trigger output. This implies: running a first High Level Trigger (HLT) pass synchronously with data taking and buffering its output locally; using the data collected at the beginning of the fill, or on a run-by-run basis, to determine the new alignment and calibration constants; and running a second HLT pass on the buffered data using the new constants. Operationally, it represented a challenge: it required running different activities concurrently in the farm, starting at different times and load balanced depending on the LHC state. However, these activities are now an integral part of LHCb's dataflow, seamlessly integrated in the Experiment Control System and completely automated under the supervision of LHCb's 'Big Brother'. In total, for all activities, there are usually around 60000 tasks running in the ~1600 nodes of the fa...

  3. Co-Design of Event Generator and Dynamic Output Feedback Controller for LTI Systems

    Directory of Open Access Journals (Sweden)

    Dan Ma

    2015-01-01

    This paper presents a co-design method of the event generator and the dynamic output feedback controller for a linear time-invariant (LTI) system. The event-triggered conditions on the sensor-to-controller and controller-to-actuator sides depend on the plant output and the controller output, respectively. A sufficient condition on the existence of the event generator and the dynamic output feedback controller is proposed, and the co-design problem can be converted into the feasibility of linear matrix inequalities (LMIs). The LTI system is asymptotically stable under the proposed event-triggered controller, which also reduces the computing resources required with respect to a time-triggered one. In the end, a numerical example is given to illustrate the effectiveness of the proposed approach.

  4. Early-Transition Output Decline Revisited

    Directory of Open Access Journals (Sweden)

    Crt Kostevc

    2016-05-01

    In this paper we revisit the issue of the aggregate output decline that took place in the early transition period. We propose an alternative explanation of the output decline that is applicable to Central and Eastern European countries. In the first part of the paper we develop a simple dynamic general equilibrium model that builds on work by Gomulka and Lane (2001). In particular, we consider price liberalization, interpreted as the elimination of distortionary taxation, as a trigger of the output decline. We show that price liberalization, in interaction with heterogeneous adjustment costs and non-employment benefits, leads to an aggregate output decline and a surge in wage inequality. While these patterns are consistent with the actual dynamics in CEE countries, this model cannot generate output decline in all sectors. Instead, sectors that were initially taxed even exhibit output growth. Thus, in the second part we consider an alternative general equilibrium model with only one production sector, two types of labor, and a distortion in the form of wage compression during the socialist era. The trigger for labor mobility and consequently output decline is wage liberalization. Assuming heterogeneity of workers in terms of adjustment costs and non-employment benefits can explain output decline in all industries.

  5. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements

    Science.gov (United States)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Nagarajaiah, Satish; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-03-01

    Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors only providing sparse, low spatial sensing resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in the mass-loading effect and modification of the structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide simultaneous, high-spatial-resolution measurements. Combined with vision-based algorithms (e.g., image correlation or template matching, optical flow, etc.), video camera based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30-60 Hz, while high-speed cameras for higher frequency vibration measurements are extremely costly. This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than

  6. Topology Detection for Output-Coupling Weighted Complex Dynamical Networks with Coupling and Transmission Delays

    Directory of Open Access Journals (Sweden)

    Xinwei Wang

    2017-01-01

    Topology detection for output-coupling weighted complex dynamical networks with two types of time delays is investigated in this paper. Different from the existing literature, coupling delay and transmission delay are simultaneously taken into account in the output-coupling network. Based on the idea of the state observer, we build the drive-response system and apply LaSalle's invariance principle to the error dynamical system of the drive-response system. Several convergence criteria are deduced in the form of algebraic inequalities. Some numerical simulations for the complex dynamical network, with chaotic node dynamics, are given to verify the effectiveness of the proposed scheme.

  7. Decentralized H∞ Control for Uncertain Interconnected Systems of Neutral Type via Dynamic Output Feedback

    Directory of Open Access Journals (Sweden)

    Heli Hu

    2014-01-01

    The design of dynamic output feedback H∞ control for uncertain interconnected systems of neutral type is investigated. In the framework of Lyapunov stability theory, a mathematical technique dealing with the nonlinearity in certain matrix variables is developed to obtain the solvability conditions for the anticipated controller. Based on the corresponding LMIs, the anticipated gains for dynamic output feedback can be obtained by solving some algebraic equations. Also, the norm of the transfer function from the disturbance input to the controlled output is less than the given index. A numerical example and the simulation results are given to show the effectiveness of the proposed method.

  8. Public Infrastructure Investment, Output Dynamics, and Balanced Budget Fiscal Rules

    NARCIS (Netherlands)

    Duarte Bom, P.R.; Ligthart, J.E.

    2011-01-01

    We study the dynamic output and welfare effects of public infrastructure investment under a balanced budget fiscal rule, using an overlapping generations model of a small open economy. The government finances public investment by employing distortionary labor taxes. We find a negative short-run

  9. Scaling Mode Shapes in Output-Only Structure by a Mass-Change-Based Method

    Directory of Open Access Journals (Sweden)

    Liangliang Yu

    2017-01-01

    A mass-change-based method using output-only data for the rescaling of mode shapes in operational modal analysis (OMA) is introduced. The mass distribution matrix, which is defined as a diagonal matrix whose diagonal elements represent the ratios among the diagonal elements of the mass matrix, is calculated using the unscaled mode shapes. Based on the theory of null space, the mass distribution vector or mass distribution matrix is obtained. A small mass with calibrated weight is added to a certain location of the structure, and then the mass distribution vector of the modified structure is estimated. The mass matrix is identified according to the difference between the mass distribution vectors of the original and modified structures. Additionally, the universal set of modes is unnecessary when calculating the mass distribution matrix, indicating that modal truncation is allowed in the proposed method. The mass-scaled mode shapes estimated in OMA according to the proposed method are compared with those obtained by experimental modal analysis. A simulation is employed to validate the feasibility of the method. Finally, the method is tested on output-only data from an experiment on a five-storey structure, and the results confirm the effectiveness of the method.
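
    For context, the classical mass-change scaling relation that this line of work builds on (not the null-space procedure proposed in the record itself) can be sketched as follows; the symbols and the added-mass matrix dM are generic placeholders.

      import numpy as np

      def scaling_factor(w0, w1, phi, dM):
          """w0, w1: natural frequency (rad/s) before/after the mass change;
          phi: unscaled mode shape; dM: added-mass matrix (e.g. np.diag of point masses)."""
          return np.sqrt((w0**2 - w1**2) / (w1**2 * (phi @ dM @ phi)))

      # the mass-scaled shape is alpha * phi, with alpha = scaling_factor(w0, w1, phi, dM)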

  10. Magnetospheric storm dynamics in terms of energy output rate

    International Nuclear Information System (INIS)

    Prigancova, A.; Feldstein, Ya.I.

    1992-01-01

    Using hourly values of both the global magnetospheric disturbance characteristic DR and the AE index of auroral ionospheric currents during magnetic storm intervals, the dynamics of the energy output rate are evaluated for the main and recovery phases and for whole storm intervals. The magnetospheric response to the solar wind energy input rate under varying interplanetary and magnetospheric conditions is considered from the point of view of temporal variability. The peculiarities of the response are traced separately. As far as the quantitative characteristics of the energy output rate are concerned, the time dependence of the ring current decay parameter is emphasized as particularly important. It is pointed out that more insight into the plasma processes, especially at L = 3 - 5, is needed to establish this dependence adequately. (Author)

  11. Dynamic Estimation on Output Elasticity of Highway Capital Stock in China

    Science.gov (United States)

    Li, W. J.; Zuo, Q. L.; Bai, Y. F.

    2017-12-01

    Using the perpetual inventory method to calculate the capital stock of highways in China from 1988 to 2016, the paper builds a state space model based on a translog production function; with ridge regression and Kalman filtering, dynamic estimates of the output elasticity are obtained continuously and analyzed. The conclusions are as follows. Firstly, the growth of China's highway capital stock falls into three stages, 1988 to 2000, 2001 to 2009, and 2010 to 2016, with steady growth within the stages and rapid growth between them. Secondly, the output elasticity of the highway capital stock, between 0.154 and 0.248, is slightly larger than that of the labor input factor and lower than that of the technical level; it has a positive effect on the transport economy and rises steadily, but output efficiency is low on the whole. Thirdly, around 2010 the highway industry begins to exhibit increasing returns to scale.
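
    For reference, the output elasticity in a translog production function is the partial log-derivative of output with respect to the input; a generic two-input form is sketched below, where the symbols K and L for highway capital and labor are generic and not the paper's exact specification.

      \ln Y = \beta_0 + \beta_K \ln K + \beta_L \ln L
              + \tfrac{1}{2}\beta_{KK}(\ln K)^2 + \tfrac{1}{2}\beta_{LL}(\ln L)^2
              + \beta_{KL}\,\ln K\,\ln L ,
      \qquad
      \varepsilon_K = \frac{\partial \ln Y}{\partial \ln K}
                    = \beta_K + \beta_{KK}\ln K + \beta_{KL}\ln L .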

  12. Output-Only Modal Parameter Recursive Estimation of Time-Varying Structures via a Kernel Ridge Regression FS-TARMA Approach

    Directory of Open Access Journals (Sweden)

    Zhi-Sai Ma

    2017-01-01

    Modal parameter estimation plays an important role in vibration-based damage detection and is worth more attention and investigation, as changes in modal parameters are usually used as damage indicators. This paper focuses on the problem of output-only modal parameter recursive estimation of time-varying structures based upon parameterized representations of the time-dependent autoregressive moving average (TARMA) model. A kernel ridge regression functional series TARMA (FS-TARMA) recursive identification scheme is proposed and subsequently employed for the modal parameter estimation of a numerical three-degree-of-freedom time-varying structural system and a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudolinear regression FS-TARMA approach via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics in a recursive manner.

  13. Automation of analytical systems in power cycles

    International Nuclear Information System (INIS)

    Staub Lukas

    2008-01-01

    'Automation' is a widely used term in instrumentation and is often applied to signal exchange, PLC and SCADA systems. Common use, however, does not necessarily describe autonomous operation of analytical devices. We define an automated analytical system as a black box with an input (sample) and an output (measured value). In addition we need dedicated status lines for assessing the validity of the input to our black box and of the output for subsequent systems. We will discuss input parameters, automated analytical processes and output parameters. Further consideration will be given to signal exchange and integration into the operating routine of a power plant. Local control loops (chemical dosing) and the automation of sampling systems are not discussed here. (author)

  14. Pandemic recovery analysis using the dynamic inoperability input-output model.

    Science.gov (United States)

    Santos, Joost R; Orsi, Mark J; Bond, Erik J

    2009-12-01

    Economists have long conceptualized and modeled the inherent interdependent relationships among different sectors of the economy. This concept paved the way for input-output modeling, a methodology that accounts for sector interdependencies governing the magnitude and extent of ripple effects due to changes in the economic structure of a region or nation. Recent extensions to input-output modeling have enhanced the model's capabilities to account for the impact of an economic perturbation; two such examples are the inoperability input-output model (1,2) and the dynamic inoperability input-output model (DIIM) (3). These models introduced sector inoperability, or the inability to satisfy as-planned production levels, into input-output modeling. While these models provide insights for understanding the impacts of inoperability, there are several aspects of the current formulation that do not account for complexities associated with certain disasters, such as a pandemic. This article proposes further enhancements to the DIIM to account for economic productivity losses resulting primarily from workforce disruptions. A pandemic is a unique disaster because the majority of its direct impacts are workforce related. The article develops a modeling framework to account for workforce inoperability and recovery factors. The proposed workforce-explicit enhancements to the DIIM are demonstrated in a case study to simulate a pandemic scenario in the Commonwealth of Virginia.
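
    The DIIM referenced above is commonly written as a discrete-time recursion on a vector of sector inoperabilities; a minimal sketch follows, where the interdependency matrix, resilience coefficients, and the decaying workforce perturbation are illustrative placeholders rather than values from the case study.

      # q(t+1) = q(t) + K [A* q(t) + c*(t) - q(t)]  (dynamic inoperability recursion)
      import numpy as np

      def diim_recovery(A_star, K, c_star, q0, steps):
          """Returns the inoperability trajectory for a demand perturbation c_star(t)."""
          q = np.array(q0, dtype=float)
          traj = [q.copy()]
          for t in range(steps):
              q = q + K @ (A_star @ q + c_star(t) - q)
              traj.append(q.copy())
          return np.array(traj)

      # example with two sectors and a workforce shock decaying over time (values assumed):
      A = np.array([[0.1, 0.2], [0.3, 0.1]])
      K = np.diag([0.4, 0.6])                                   # sector resilience coefficients
      q_t = diim_recovery(A, K, lambda t: np.array([0.05, 0.02]) * 0.9**t, [0.2, 0.1], 50)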

  15. Programmable, automated transistor test system

    Science.gov (United States)

    Truong, L. V.; Sundburg, G. R.

    1986-01-01

    A programmable, automated transistor test system was built to supply experimental data on new and advanced power semiconductors. The data will be used for analytical models and by engineers in designing space and aircraft electric power systems. A pulsed power technique was used at low duty cycles in a nondestructive test to examine the dynamic switching characteristic curves of power transistors in the 500 to 1000 V, 10 to 100 A range. Data collection, manipulation, storage, and output are operator interactive but are guided and controlled by the system software.

  16. Improvement of Frequency Domain Output Only Modal Identification from the Application of the Random Decrement Technique

    DEFF Research Database (Denmark)

    Rodrigues, J.; Brincker, Rune; Andersen, P.

    2004-01-01

    This paper explores the idea of estimating the spectral densities as the Fourier transform of the random decrement functions for the application of frequency domain output-only modal identification methods. The gains in relation to the usual procedure of computing the spectral densities directly...
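
    A minimal sketch of the random decrement idea explored in this record is given below: segments following a level-crossing trigger are averaged, and a spectrum is then taken from the averaged signature. This is a single-channel auto-version only, and the trigger level and segment length are assumptions.

      import numpy as np

      def random_decrement(x, trigger, seg_len):
          """x: response record; trigger: level-crossing value; seg_len: samples per segment."""
          idx = np.flatnonzero((x[:-1] < trigger) & (x[1:] >= trigger))   # up-crossings
          idx = idx[idx + seg_len < x.size]
          segments = np.stack([x[i:i + seg_len] for i in idx])
          return segments.mean(axis=0)                                    # RD signature

      def rd_spectrum(x, trigger, seg_len, fs):
          rd = random_decrement(x, trigger, seg_len)
          return np.fft.rfftfreq(seg_len, 1.0 / fs), np.abs(np.fft.rfft(rd * np.hanning(seg_len)))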

  17. Parametric output-only identification of time-varying structures using a kernel recursive extended least squares TARMA approach

    Science.gov (United States)

    Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim

    2018-01-01

    The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.

  18. Operational proof of automation

    International Nuclear Information System (INIS)

    Jaerschky, R.; Reifenhaeuser, R.; Schlicht, K.

    1976-01-01

    Automation of the power plant process may imply quite a number of problems. The automation of dynamic operations requires complicated programmes often interfering in several branched areas. This reduces clarity for the operating and maintenance staff, whilst increasing the possibilities of errors. The synthesis and the organization of standardized equipment have proved very successful. The possibilities offered by this kind of automation for improving the operation of power plants will only sufficiently and correctly be turned to profit, however, if the application of these equipment techniques is further improved and if its volume is tallied with a definite etc. (orig.) [de

  19. Operational proof of automation

    International Nuclear Information System (INIS)

    Jaerschky, R.; Schlicht, K.

    1977-01-01

    Automation of the power plant process may imply quite a number of problems. The automation of dynamic operations requires complicated programmes often interfering in several branched areas. This reduces clarity for the operating and maintenance staff, whilst increasing the possibilities of errors. The synthesis and the organization of standardized equipment have proved very successful. The possibilities offered by this kind of automation for improving the operation of power plants will only sufficiently and correctly be turned to profit, however, if the application of these equipment techniques is further improved and if it stands in a certain ratio with a definite efficiency. (orig.) [de

  20. Dynamic Modeling and Very Short-term Prediction of Wind Power Output Using Box-Cox Transformation

    Science.gov (United States)

    Urata, Kengo; Inoue, Masaki; Murayama, Dai; Adachi, Shuichi

    2016-09-01

    We propose a statistical modeling method for wind power output for very short-term prediction. The nonlinear model has a cascade structure composed of two parts. One is a linear dynamic part that is driven by Gaussian white noise and described by an autoregressive model. The other is a nonlinear static part that is driven by the output of the linear part. This nonlinear part is designed for output distribution matching: we shape the distribution of the model output to match that of the wind power output. The constructed model is utilized for one-step-ahead prediction of the wind power output. Furthermore, we study the relation between the prediction accuracy and the prediction horizon.
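
    A rough sketch of the cascade idea is given below, using a Box-Cox transform (as suggested by the title) as the static nonlinearity and a low-order AR model as the dynamic part; the file name, lag order, and the use of scipy/statsmodels are assumptions, not the authors' specification.

      import numpy as np
      from scipy.stats import boxcox
      from scipy.special import inv_boxcox
      from statsmodels.tsa.ar_model import AutoReg

      power = np.loadtxt("wind_power.csv") + 1e-6          # Box-Cox requires positive data
      z, lam = boxcox(power)                               # static nonlinear part (fitted lambda)
      ar = AutoReg(z, lags=3).fit()                        # linear dynamic part
      z_next = ar.predict(start=len(z), end=len(z))        # one-step-ahead prediction
      p_next = inv_boxcox(z_next, lam)                     # map back to the power domain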

  1. Input-output relation and energy efficiency in the neuron with different spike threshold dynamics.

    Science.gov (United States)

    Yi, Guo-Sheng; Wang, Jiang; Tsang, Kai-Ming; Wei, Xi-Le; Deng, Bin

    2015-01-01

    A neuron encodes and transmits information by generating sequences of output spikes, which is a highly energy-consuming process. The spike is initiated when membrane depolarization reaches a threshold voltage. In many neurons, the threshold is dynamic and depends on the rate of membrane depolarization (dV/dt) preceding a spike. Identifying the metabolic energy involved in neural coding and its relationship to threshold dynamics is critical to understanding neuronal function and evolution. Here, we use a modified Morris-Lecar model to investigate neuronal input-output properties and energy efficiency associated with different spike threshold dynamics. We find that neurons with a dynamic threshold sensitive to dV/dt generate a discontinuous frequency-current curve and a type II phase response curve (PRC) through a Hopf bifurcation, and weak noise could prohibit spiking when the bifurcation just occurs. A threshold that is insensitive to dV/dt, instead, results in a continuous frequency-current curve, a type I PRC and a saddle-node on invariant circle bifurcation, and in that case weak noise cannot inhibit spiking. It is also shown that the bifurcation, frequency-current curve and PRC type associated with different threshold dynamics arise from the distinct subthreshold interactions of membrane currents. Further, we observe that the energy consumption of the neuron is related to its firing characteristics. The depolarization of the spike threshold improves neuronal energy efficiency by reducing the overlap of Na+ and K+ currents during an action potential. The highest energy efficiency is achieved at a more depolarized spike threshold and high stimulus current. These results provide a fundamental biophysical connection that links spike threshold dynamics, input-output relation, energetics and spike initiation, which could contribute to uncovering the neural encoding mechanism.

  2. Programmable automated transistor test system

    International Nuclear Information System (INIS)

    Truong, L.V.; Sundberg, G.R.

    1986-01-01

    The paper describes a programmable automated transistor test system (PATTS) and its utilization to evaluate bipolar transistors and Darlingtons, as well as such MOSFET and special types as can be accommodated by the PATTS base drive. A pulsed power technique at low duty cycles is used in a non-destructive test to examine the dynamic switching characteristic curves of power transistors. Data collection, manipulation, storage, and output are operator interactive but are guided and controlled by the system software. In addition, a library of test data is established on disks, tapes, and hard copies for future reference.

  3. Basic study on dynamic reactive-power control method with PV output prediction for solar inverter

    Directory of Open Access Journals (Sweden)

    Ryunosuke Miyoshi

    2016-01-01

    To effectively utilize a photovoltaic (PV) system, reactive-power control methods for solar inverters have been considered. Among the various methods, constant-voltage control outputs less reactive power than the other methods. We have developed a constant-voltage control to reduce the reactive-power output. However, the developed constant-voltage control still outputs unnecessary reactive power because the control parameter is constant for every waveform of the PV output. To reduce the reactive-power output, we propose a dynamic reactive-power control method with PV output prediction. In the proposed method, the control parameter is varied according to the properties of the predicted PV waveform. In this study, we performed numerical simulations using a distribution system model, and we confirmed that the proposed method reduces the reactive-power output while satisfying the voltage constraint.

  4. Incremental learning for automated knowledge capture

    Energy Technology Data Exchange (ETDEWEB)

    Benz, Zachary O. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Basilico, Justin Derrick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Davis, Warren Leon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dixon, Kevin R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jones, Brian S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Nathaniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wendt, Jeremy Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-12-01

    People responding to high-consequence national-security situations need tools to help them make the right decision quickly. The dynamic, time-critical, and ever-changing nature of these situations, especially those involving an adversary, requires models of decision support that can dynamically react as a situation unfolds and changes. Automated knowledge capture is a key part of creating individualized models of decision making in many situations because it has been demonstrated as a very robust way to populate computational models of cognition. However, existing automated knowledge capture techniques only populate a knowledge model with data prior to its use, after which the knowledge model is static and unchanging. In contrast, humans, including our national-security adversaries, continually learn, adapt, and create new knowledge as they make decisions and witness their effect. This artificial dichotomy between creation and use exists because the majority of automated knowledge capture techniques are based on traditional batch machine-learning and statistical algorithms. These algorithms are primarily designed to optimize the accuracy of their predictions and are only secondarily, if at all, concerned with issues such as speed, memory use, or the ability to be incrementally updated. Thus, when new data arrive, batch algorithms used for automated knowledge capture currently require significant recomputation, frequently from scratch, which makes them ill-suited for use in dynamic, time-critical, high-consequence decision making environments. In this work we seek to explore and expand upon the capabilities of dynamic, incremental models that can adapt to an ever-changing feature space.

  5. A New Device to Automate the Monitoring of Critical Patients’ Urine Output

    Directory of Open Access Journals (Sweden)

    Abraham Otero

    2014-01-01

    Urine output (UO) is usually measured manually each hour in acutely ill patients. This task consumes a substantial amount of time. Furthermore, there is evidence in the literature that more frequent (minute-by-minute) UO measurement could impact clinical decision making and improve patient outcomes. However, it is not feasible to take minute-by-minute UO measurements manually. A device capable of automatically monitoring UO could save precious time for the healthcare staff and improve patient outcomes through more precise and continuous monitoring of this parameter. This paper presents a device capable of automatically monitoring UO. It provides minute-by-minute measurements and can generate alarms that warn of deviations from therapeutic goals. It uses a capacitive sensor for the measurement of the UO collected within a rigid container. When the container is full, it automatically empties without requiring any internal or external power supply or any intervention by the nursing staff. In vitro tests have been conducted to verify the proper operation and the accuracy of the device's measurements. These tests confirm the viability of the device for automating the monitoring of UO.

  6. Output gap uncertainty and real-time monetary policy

    Directory of Open Access Journals (Sweden)

    Francesco Grigoli

    2015-12-01

    Output gap estimates are subject to a wide range of uncertainty owing principally to the difficulty of distinguishing between cycle and trend in real time. We show that country desks tend to overestimate economic slack, especially during recessions, and that uncertainty in initial output gap estimates persists for several years. Only a small share of output gap revisions is predictable from output dynamics, data quality, and policy frameworks. We also show that for a group of Latin American inflation targeters the prescriptions from monetary policy rules are subject to large changes due to revised output gap estimates. These explain a sizable proportion of the deviation of inflation from target, suggesting this information is not accounted for in real-time policy decisions.

  7. The relationship between cardiac output and dynamic cerebral autoregulation in humans.

    Science.gov (United States)

    Deegan, B M; Devine, E R; Geraghty, M C; Jones, E; Ólaighin, G; Serrador, J M

    2010-11-01

    Cerebral autoregulation adjusts cerebrovascular resistance in the face of changing perfusion pressures to maintain relatively constant flow. Results from several studies suggest that cardiac output may also play a role. We tested the hypothesis that cerebral blood flow would autoregulate independently of changes in cardiac output. Transient systemic hypotension was induced by thigh-cuff deflation in 19 healthy volunteers (7 women) in both supine and seated positions. Mean arterial pressure (Finapres), cerebral blood flow (transcranial Doppler) in the anterior (ACA) and middle cerebral artery (MCA), beat-by-beat cardiac output (echocardiography), and end-tidal PCO2 were measured. Autoregulation was assessed using the autoregulatory index (ARI) defined by Tiecks et al. (Tiecks FP, Lam AM, Aaslid R, Newell DW. Stroke 26: 1014-1019, 1995). Cerebral autoregulation was better in the supine position in both the ACA [supine ARI: 5.0 ± 0.21 (mean ± SE), seated ARI: 3.9 ± 0.4, P = 0.01] and MCA (supine ARI: 5.0 ± 0.2, seated ARI: 3.8 ± 0.3, P = 0.004). In contrast, cardiac output responses were not different between positions and did not correlate with cerebral blood flow ARIs. In addition, women had better autoregulation in the ACA (P = 0.046), but not the MCA, despite having the same cardiac output response. These data demonstrate that cardiac output does not appear to affect the dynamic cerebral autoregulatory response to sudden hypotension in healthy controls, regardless of posture. These results also highlight the importance of considering sex when studying cerebral autoregulation.

  8. Dynamic Output Feedback Control for Nonlinear Networked Control Systems with Random Packet Dropout and Random Delay

    Directory of Open Access Journals (Sweden)

    Shuiqing Yu

    2013-01-01

    Full Text Available This paper investigates the dynamic output feedback control for nonlinear networked control systems with both random packet dropout and random delay. Random packet dropout and random delay are modeled as two independent random variables. An observer-based dynamic output feedback controller is designed based on Lyapunov theory. The quantitative relationship between the dropout rate, the transition probability matrix, and the nonlinear level is derived by solving a set of linear matrix inequalities. Finally, an example is presented to illustrate the effectiveness of the proposed method.

  9. New Dilated LMI Characterization for the Multiobjective Full-Order Dynamic Output Feedback Synthesis Problem

    Directory of Open Access Journals (Sweden)

    Zrida Jalel

    2010-01-01

    Full Text Available This paper introduces new dilated LMI conditions for continuous-time linear systems which characterize not only stability but also performance specifications. These new conditions offer, in addition to new analysis tools, synthesis procedures that have the advantages of keeping the controller parameters independent of the Lyapunov matrix and offering supplementary degrees of freedom. The impact of such advantages is great on the multiobjective full-order dynamic output feedback control problem, as the obtained dilated LMI conditions always encompass the standard ones. It follows that much less conservatism is possible in comparison to the currently used standard LMI-based synthesis procedures. A numerical simulation, based on an empirically abridged search procedure, is presented and shows the advantage of the proposed synthesis methods.
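
    For orientation, the sketch below solves the standard (non-dilated) LMI stability condition that such dilated conditions extend; the example matrix is arbitrary and CVXPY with a bundled SDP solver is assumed to be available.

    ```python
    # Minimal sketch of the standard LMI characterization of stability:
    # dx/dt = A x is stable iff there exists P > 0 with A'P + PA < 0.
    # Solved here as a feasibility problem with CVXPY; A is an illustrative example.
    import numpy as np
    import cvxpy as cp

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])
    n = A.shape[0]

    P = cp.Variable((n, n), symmetric=True)
    eps = 1e-6
    constraints = [P >> eps * np.eye(n),
                   A.T @ P + P @ A << -eps * np.eye(n)]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    print(prob.status)   # "optimal" -> a certifying Lyapunov matrix P exists
    print(P.value)
    ```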

  10. Validation of programmable industrial automation systems for safety critical applications in NPP's; dynamic testing

    International Nuclear Information System (INIS)

    Haapanen, P.; Korhonen, J.

    1995-01-01

    The safety assessment of programmable automation systems cannot be based solely on conventional probabilistic methods because of the difficulties in quantifying the reliability of the software as well as the hardware. Additional means shall therefore be used to gain more confidence in the system dependability. One central confidence-building measure is the independent dynamic testing of the completed system. An automated test harness is needed to run the required large amount of test cases in a restricted time span. This paper describes a prototype dynamic testing harness for programmable digital systems developed at VTT. (author). 12 refs, 2 figs, 2 tabs

  11. Improving Usefulness of Automated Driving by Lowering Primary Task Interference through HMI Design

    Directory of Open Access Journals (Sweden)

    Frederik Naujoks

    2017-01-01

    Full Text Available During conditionally automated driving (CAD), driving time can be used for non-driving-related tasks (NDRTs). To increase safety and comfort of an automated ride, upcoming automated manoeuvres such as lane changes or speed adaptations may be communicated to the driver. However, as the driver’s primary task consists of performing NDRTs, they might prefer to be informed in a nondistracting way. In this paper, the potential of using speech output to improve human-automation interaction is explored. A sample of 17 participants completed different situations which involved communication between the automation and the driver in a motion-based driving simulator. The Human-Machine Interface (HMI) of the automated driving system consisted of a visual-auditory HMI with either generic auditory feedback (i.e., standard information tones) or additional speech output. The drivers were asked to perform a common NDRT during the drive. Compared to generic auditory output, communicating upcoming automated manoeuvres additionally by speech led to a decrease in self-reported visual workload and decreased monitoring of the visual HMI. However, interruptions of the NDRT were not affected by additional speech output. Participants clearly favoured the HMI with additional speech-based output, demonstrating the potential of speech to enhance usefulness and acceptance of automated vehicles.

  12. Input-output interactions and optimal monetary policy

    DEFF Research Database (Denmark)

    Petrella, Ivan; Santoro, Emiliano

    2011-01-01

    This paper deals with the implications of factor demand linkages for monetary policy design in a two-sector dynamic general equilibrium model. Part of the output of each sector serves as a production input in both sectors, in accordance with a realistic input–output structure. Strategic complementarities induced by factor demand linkages significantly alter the transmission of shocks and amplify the loss of social welfare under optimal monetary policy, compared to what is observed in standard two-sector models. The distinction between value added and gross output that naturally arises in this context is of key importance to explore the welfare properties of the model economy. A flexible inflation targeting regime is close to optimal only if the central bank balances inflation and value added variability. Otherwise, targeting gross output variability entails a substantial increase in the loss...

  13. Corticomuscular synchronization with small and large dynamic force output

    Science.gov (United States)

    Andrykiewicz, Agnieszka; Patino, Luis; Naranjo, Jose Raul; Witte, Matthias; Hepp-Reymond, Marie-Claude; Kristeva, Rumyana

    2007-01-01

    Background Over the last few years much research has been devoted to investigating the synchronization between cortical motor and muscular activity as measured by EEG/MEG-EMG coherence. The main focus so far has been on corticomuscular coherence (CMC) during static force condition, for which coherence in beta-range has been described. In contrast, we showed in a recent study [1] that dynamic force condition is accompanied by gamma-range CMC. The modulation of the CMC by various dynamic force amplitudes, however, remained uninvestigated. The present study addresses this question. We examined eight healthy human subjects. EEG and surface EMG were recorded simultaneously. The visuomotor task consisted in isometric compensation for 3 forces (static, small and large dynamic) generated by a manipulandum. The CMC, the cortical EEG spectral power (SP), the EMG SP and the errors in motor performance (as the difference between target and exerted force) were analyzed. Results For the static force condition we found the well-documented, significant beta-range CMC (15–30 Hz) over the contralateral sensorimotor cortex. Gamma-band CMC (30–45 Hz) occurred in both small and large dynamic force conditions without any significant difference between both conditions. Although in some subjects beta-range CMC was observed during both dynamic force conditions no significant difference between conditions could be detected. With respect to the motor performance, the lowest errors were obtained in the static force condition and the highest ones in the dynamic condition with large amplitude. However, when we normalized the magnitude of the errors to the amplitude of the applied force (relative errors) no significant difference between both dynamic conditions was observed. Conclusion These findings confirm that during dynamic force output the corticomuscular network oscillates at gamma frequencies. Moreover, we show that amplitude modulation of dynamic force has no effect on the gamma CMC

  14. Corticomuscular synchronization with small and large dynamic force output

    Directory of Open Access Journals (Sweden)

    Witte Matthias

    2007-11-01

    Full Text Available Abstract Background Over the last few years much research has been devoted to investigating the synchronization between cortical motor and muscular activity as measured by EEG/MEG-EMG coherence. The main focus so far has been on corticomuscular coherence (CMC) during static force condition, for which coherence in beta-range has been described. In contrast, we showed in a recent study [1] that dynamic force condition is accompanied by gamma-range CMC. The modulation of the CMC by various dynamic force amplitudes, however, remained uninvestigated. The present study addresses this question. We examined eight healthy human subjects. EEG and surface EMG were recorded simultaneously. The visuomotor task consisted in isometric compensation for 3 forces (static, small and large dynamic) generated by a manipulandum. The CMC, the cortical EEG spectral power (SP), the EMG SP and the errors in motor performance (as the difference between target and exerted force) were analyzed. Results For the static force condition we found the well-documented, significant beta-range CMC (15–30 Hz) over the contralateral sensorimotor cortex. Gamma-band CMC (30–45 Hz) occurred in both small and large dynamic force conditions without any significant difference between both conditions. Although in some subjects beta-range CMC was observed during both dynamic force conditions no significant difference between conditions could be detected. With respect to the motor performance, the lowest errors were obtained in the static force condition and the highest ones in the dynamic condition with large amplitude. However, when we normalized the magnitude of the errors to the amplitude of the applied force (relative errors) no significant difference between both dynamic conditions was observed. Conclusion These findings confirm that during dynamic force output the corticomuscular network oscillates at gamma frequencies. Moreover, we show that amplitude modulation of dynamic force has no effect on the gamma CMC.
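
    As a rough illustration of the EEG-EMG coherence measure on which these records rely, the sketch below computes band-averaged coherence with SciPy on synthetic signals; the sampling rate, signals, and band edges are assumptions for illustration, not data from the study.

    ```python
    # Band-averaged EEG-EMG coherence in the beta (15-30 Hz) and gamma (30-45 Hz)
    # ranges; the synthetic signals below stand in for real recordings.
    import numpy as np
    from scipy.signal import coherence

    fs = 1000                      # sampling rate in Hz (assumed)
    t = np.arange(0, 60, 1 / fs)   # 60 s of recording
    rng = np.random.default_rng(0)

    shared = np.sin(2 * np.pi * 35 * t)              # common 35 Hz (gamma) drive
    eeg = shared + rng.normal(scale=2.0, size=t.size)
    emg = shared + rng.normal(scale=2.0, size=t.size)

    f, coh = coherence(eeg, emg, fs=fs, nperseg=1024)
    beta = coh[(f >= 15) & (f <= 30)].mean()
    gamma = coh[(f >= 30) & (f <= 45)].mean()
    print(f"mean beta-band CMC: {beta:.3f}, mean gamma-band CMC: {gamma:.3f}")
    ```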

  15. Validation of programmable industrial automation systems for safety critical applications in NPP's dynamic testing

    International Nuclear Information System (INIS)

    Haapanen, P.; Korhonen, J.

    1995-01-01

    The safety assessment of programmable automation systems cannot be based solely on conventional probabilistic methods because of the difficulties in quantifying the reliability of the software as well as the hardware. Additional means shall therefore be used to gain more confidence in the system dependability. One central confidence-building measure is the independent dynamic testing of the completed system. An automated test harness is needed to run the required large amount of test cases in a restricted time span. The prototype dynamic testing harness for programmable digital systems developed at the Technical Research Centre of Finland (VTT) is described in the presentation. (12 refs., 2 figs., 2 tabs.)

  16. Automated X-ray television complex for testing large dynamic objects

    International Nuclear Information System (INIS)

    Gusev, E.A.; Luk'yanenko, Eh.A.; Chelnokov, V.B.; Kuleshov, V.K.; Alkhimov, Yu.V.

    1992-01-01

    An automated X-ray television complex based on a matrix gas-discharge large-area (2.1x1.0 m) converter for testing large cargoes and containers, as well as for industrial article diagnostics, is described. The pulsed operation of the complex with the 512-kbyte digital television memory unit provides for testing dynamic objects under minimal doses (20-100 μR)

  17. Dynamic Output Feedback Based Active Decentralized Fault-Tolerant Control for Reconfigurable Manipulator with Concurrent Failures

    Directory of Open Access Journals (Sweden)

    Yuanchun Li

    2015-01-01

    Full Text Available The goal of this paper is to describe an active decentralized fault-tolerant control (ADFTC) strategy based on dynamic output feedback for reconfigurable manipulators with concurrent actuator and sensor failures. Each joint module of the reconfigurable manipulator is considered as a subsystem, and the fault is treated as the unknown input of the subsystem. Firstly, by virtue of the linear matrix inequality (LMI) technique, a decentralized proportional-integral observer (DPIO) is designed to estimate and compensate the sensor fault online; the compensated system model can then be derived. Then, the actuator fault is estimated similarly by another DPIO using LMI as well, and a sufficient condition for the existence of an H∞ dynamic output feedback fault-tolerant controller is presented for the compensated system model. Furthermore, the dynamic output feedback controller is presented based on the estimation of the actuator fault to realize active fault-tolerant control. Finally, two 3-DOF reconfigurable manipulators with different configurations are employed to verify the effectiveness of the proposed scheme in simulation. The main advantages of the proposed scheme are that it can handle concurrent faults acting on the actuator and sensor of the same joint module, that it requires no fault detection and isolation process, and that it is better suited to the modularity of the reconfigurable manipulator.

  18. Automated analysis of information processing, kinetic independence and modular architecture in biochemical networks using MIDIA.

    Science.gov (United States)

    Bowsher, Clive G

    2011-02-15

    Understanding the encoding and propagation of information by biochemical reaction networks and the relationship of such information processing properties to modular network structure is of fundamental importance in the study of cell signalling and regulation. However, a rigorous, automated approach for general biochemical networks has not been available, and high-throughput analysis has therefore been out of reach. Modularization Identification by Dynamic Independence Algorithms (MIDIA) is a user-friendly, extensible R package that performs automated analysis of how information is processed by biochemical networks. An important component is the algorithm's ability to identify exact network decompositions based on both the mass action kinetics and informational properties of the network. These modularizations are visualized using a tree structure from which important dynamic conditional independence properties can be directly read. Only partial stoichiometric information needs to be used as input to MIDIA, and neither simulations nor knowledge of rate parameters are required. When applied to a signalling network, for example, the method identifies the routes and species involved in the sequential propagation of information between its multiple inputs and outputs. These routes correspond to the relevant paths in the tree structure and may be further visualized using the Input-Output Path Matrix tool. MIDIA remains computationally feasible for the largest network reconstructions currently available and is straightforward to use with models written in Systems Biology Markup Language (SBML). The package is distributed under the GNU General Public License and is available, together with a link to browsable Supplementary Material, at http://code.google.com/p/midia. Further information is at www.maths.bris.ac.uk/~macgb/Software.html.

  19. SU-E-J-168: Automated Pancreas Segmentation Based On Dynamic MRI

    International Nuclear Information System (INIS)

    Gou, S; Rapacchi, S; Hu, P; Sheng, K

    2014-01-01

    Purpose: MRI guided radiotherapy is particularly attractive for abdominal targets with low CT contrast. To fully utilize this modality for pancreas tracking, automated segmentation tools are needed. A hybrid gradient, region growth and shape constraint (hGReS) method to segment 2D upper abdominal dynamic MRI is developed for this purpose. Methods: 2D coronal dynamic MR images of 2 healthy volunteers were acquired with a frame rate of 5 frames/second. The regions of interest (ROIs) included the liver, pancreas and stomach. The first frame was used as the source, where the centers of the ROIs were annotated. These center locations were propagated to the next dynamic MRI frame. 4-neighborhood region growth was then performed from these transferred seeds for rough segmentation. To improve the results, gradient, edge and shape constraints were applied to the ROIs before final refinement using morphological operations. Results from hGReS and 3 other automated segmentation methods using edge detection, region growth and level set were compared to manual contouring. Results: For the first patient, hGReS resulted in an organ segmentation accuracy, as measured by the Dice index, of 0.77 for the pancreas. The accuracy was slightly superior to the level set method (0.72), and both are significantly more accurate than the edge detection (0.53) and region growth methods (0.42). For the second healthy volunteer, hGReS reliably segmented the pancreatic region, achieving a Dice index of 0.82, 0.92 and 0.93 for the pancreas, stomach and liver, respectively, compared to manual segmentation. Motion trajectories derived from the hGReS, level set and manual segmentation methods showed high correlation to respiratory motion calculated using a lung blood vessel as the reference, while the other two methods showed substantial motion tracking errors. hGReS was 10 times faster than level set. Conclusion: We have shown the feasibility of automated segmentation of the pancreas anatomy based on dynamic MRI.
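
    A minimal sketch of the 4-neighborhood region-growing step described in the abstract follows; the intensity-tolerance criterion, the toy image, and the omission of the gradient and shape constraints are simplifying assumptions, not the hGReS implementation.

    ```python
    # 4-neighbourhood region growing from an annotated seed on a 2D image.
    import numpy as np
    from collections import deque

    def region_grow_4n(image, seed, tol=0.1):
        """Return a boolean mask of pixels connected to `seed` whose intensity
        stays within `tol` of the seed intensity (4-connectivity)."""
        h, w = image.shape
        mask = np.zeros((h, w), dtype=bool)
        seed_val = image[seed]
        queue = deque([seed])
        mask[seed] = True
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-neighbourhood
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc]:
                    if abs(image[rr, cc] - seed_val) <= tol:
                        mask[rr, cc] = True
                        queue.append((rr, cc))
        return mask

    # Toy example: a bright square on a dark background.
    img = np.zeros((64, 64))
    img[20:40, 20:40] = 1.0
    mask = region_grow_4n(img, seed=(30, 30))
    print(mask.sum())   # 400 pixels, the 20x20 square
    ```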

  20. Automated model-based testing of hybrid systems

    NARCIS (Netherlands)

    Osch, van M.P.W.J.

    2009-01-01

    In automated model-based input-output conformance testing, tests are automatically generated from a specification and automatically executed on an implementation. Input is applied to the implementation and output is observed from the implementation. If the observed output is allowed according to the specification, the test passes; otherwise it fails.

  1. Development of algorithm for depreciation costs allocation in dynamic input-output industrial enterprise model

    Directory of Open Access Journals (Sweden)

    Keller Alevtina

    2017-01-01

    Full Text Available The article considers the issue of allocation of depreciation costs in the dynamic input-output model of an industrial enterprise. Accounting for depreciation costs in such a model improves the policy of fixed assets management. It is particularly relevant to develop the algorithm for the allocation of depreciation costs in the construction of a dynamic input-output model of an industrial enterprise, since such enterprises have a significant amount of fixed assets. Provided the algorithm satisfies the adequacy conditions, it allows evaluating the appropriateness of investments in fixed assets and studying the final financial results of an industrial enterprise depending on management decisions in the depreciation policy. It is necessary to note that the model in question for the enterprise is always degenerate. This is caused by the presence of zero rows in the matrix of capital expenditures corresponding to structural elements unable to generate fixed assets (part of the service units, households, corporate consumers). The paper presents the algorithm for the allocation of depreciation costs for the model. This algorithm was developed by the authors and served as the basis for further development of the flowchart for subsequent implementation in software. The construction of such an algorithm and its use for dynamic input-output models of industrial enterprises is motivated by the internationally recognized effectiveness of input-output models for national and regional economic systems. This is why the solutions discussed in the article are of interest to economists of various industrial enterprises.

  2. L1 Adaptive Control Augmentation System with Application to the X-29 Lateral/Directional Dynamics: A Multi-Input Multi-Output Approach

    Science.gov (United States)

    Griffin, Brian Joseph; Burken, John J.; Xargay, Enric

    2010-01-01

    This paper presents an L1 adaptive control augmentation system design for multi-input multi-output nonlinear systems in the presence of unmatched uncertainties which may exhibit significant cross-coupling effects. A piecewise continuous adaptive law is adopted and extended for applicability to multi-input multi-output systems that explicitly compensates for dynamic cross-coupling. In addition, explicit use of high-fidelity actuator models is added to the L1 architecture to reduce uncertainties in the system. The L1 multi-input multi-output adaptive control architecture is applied to the X-29 lateral/directional dynamics and results are evaluated against a similar single-input single-output design approach.

  3. Completely automated modal analysis procedure based on the combination of different OMA methods

    Science.gov (United States)

    Ripamonti, Francesco; Bussini, Alberto; Resta, Ferruccio

    2018-03-01

    In this work a completely automated output-only Modal Analysis procedure is presented and all its benefits are listed. Based on the merging of different Operational Modal Analysis methods and a statistical approach, the identification process has been made more robust, yielding only the physical natural frequencies, damping ratios and mode shapes of the system. The effect of temperature can be taken into account as well, leading to the creation of a better tool for automated Structural Health Monitoring. The algorithm has been developed and tested on a numerical model of a scaled three-story steel building present in the laboratories of Politecnico di Milano.
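
    One possible reading of the "merging plus statistics" idea is sketched below: frequencies identified by several OMA methods are pooled, clustered, and only clusters supported by every method are kept; the tolerance, method names, and numbers are illustrative assumptions, not the actual algorithm of the paper.

    ```python
    # Pool natural-frequency estimates from several OMA runs and keep only the
    # modes confirmed by all methods; spurious poles are dropped.
    import numpy as np

    def merge_modes(estimates_per_method, tol_hz=0.05, min_methods=3):
        """estimates_per_method: list of 1D arrays of identified frequencies (Hz).
        Returns the mean frequency of each cluster supported by >= min_methods."""
        all_freqs = np.sort(np.concatenate(estimates_per_method))
        clusters, current = [], [all_freqs[0]]
        for f in all_freqs[1:]:
            if f - current[-1] <= tol_hz:
                current.append(f)
            else:
                clusters.append(current)
                current = [f]
        clusters.append(current)
        return [float(np.mean(c)) for c in clusters if len(c) >= min_methods]

    # Three OMA methods agree on ~1.20, ~3.55 and ~7.80 Hz.
    ssi = np.array([1.21, 3.56, 5.02, 7.81])
    fdd = np.array([1.19, 3.54, 7.79])
    plscf = np.array([1.20, 3.55, 6.40, 7.80])
    print(merge_modes([ssi, fdd, plscf]))   # -> [~1.20, ~3.55, ~7.80]
    ```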

  4. Hierarchical-control-based output synchronization of coexisting attractor networks

    International Nuclear Information System (INIS)

    Yun-Zhong, Song; Yi-Fa, Tang

    2010-01-01

    This paper introduces the concept of hierarchical-control-based output synchronization of coexisting attractor networks. Within the new framework, each dynamic node is first made passive utilizing intra-control around its own arena. Then each dynamic node is viewed as one agent, and on account of that, the solution of output synchronization of coexisting attractor networks is transformed into a multi-agent consensus problem, which is made possible by virtue of local interaction between individual neighbours; this distributed way of coordination is coined inter-control, which is specified only by the topological structure of the network. Provided that the network is connected and balanced, output synchronization is achieved naturally via synergy between the intra- and inter-control actions, whose correctness is proved theoretically via convex composite Lyapunov functions. For completeness, several illustrative examples are presented to further elucidate the novelty and efficacy of the proposed scheme. (general)

  5. Output Feedback Distributed Containment Control for High-Order Nonlinear Multiagent Systems.

    Science.gov (United States)

    Li, Yafeng; Hua, Changchun; Wu, Shuangshuang; Guan, Xinping

    2017-01-31

    In this paper, we study the problem of output feedback distributed containment control for a class of high-order nonlinear multiagent systems under a fixed undirected graph and a fixed directed graph, respectively. Only the output signals of the systems can be measured. A novel reduced-order dynamic gain observer is constructed to estimate the unmeasured state variables of the system under a less conservative condition on the nonlinear terms than the traditional Lipschitz one. Via the backstepping method, output feedback distributed nonlinear controllers for the followers are designed. By means of the novel first virtual controllers, the estimated state variables of different agents are separated from each other. Consequently, the designed controllers depend only on the neighbors' output information rather than their estimated state variables, and the dynamics of each agent can be greatly different, which gives the design method a wider class of applications. Finally, a numerical simulation is presented to illustrate the effectiveness of the proposed method.

  6. Output feedback control of a quadrotor UAV using neural networks.

    Science.gov (United States)

    Dierks, Travis; Jagannathan, Sarangapani

    2010-01-01

    In this paper, a new nonlinear controller for a quadrotor unmanned aerial vehicle (UAV) is proposed using neural networks (NNs) and output feedback. The assumption on the availability of UAV dynamics is not always practical, especially in an outdoor environment. Therefore, in this work, an NN is introduced to learn the complete dynamics of the UAV online, including uncertain nonlinear terms like aerodynamic friction and blade flapping. Although a quadrotor UAV is underactuated, a novel NN virtual control input scheme is proposed which allows all six degrees of freedom (DOF) of the UAV to be controlled using only four control inputs. Furthermore, an NN observer is introduced to estimate the translational and angular velocities of the UAV, and an output feedback control law is developed in which only the position and the attitude of the UAV are considered measurable. It is shown using Lyapunov theory that the position, orientation, and velocity tracking errors, the virtual control and observer estimation errors, and the NN weight estimation errors for each NN are all semiglobally uniformly ultimately bounded (SGUUB) in the presence of bounded disturbances and NN functional reconstruction errors while simultaneously relaxing the separation principle. The effectiveness of the proposed output feedback control scheme is then demonstrated in the presence of unknown nonlinear dynamics and disturbances, and simulation results are included to demonstrate the theoretical conjecture.

  7. Automated planning through abstractions in dynamic and stochastic environments

    OpenAIRE

    Martínez Muñoz, Moisés

    2016-01-01

    International Mention in the doctoral degree. Generating sequences of actions - plans - for an automatic system, like a robot, using Automated Planning is particularly difficult in stochastic and/or dynamic environments. These plans are composed of actions whose execution, in certain scenarios, might fail, which in turn prevents the execution of the rest of the actions in the plan. Also, in some environments, plans must be generated fast, both at the start of the execution and after every ex...

  8. Modeling uncertainties in workforce disruptions from influenza pandemics using dynamic input-output analysis.

    Science.gov (United States)

    El Haimar, Amine; Santos, Joost R

    2014-03-01

    Influenza pandemic is a serious disaster that can pose significant disruptions to the workforce and associated economic sectors. This article examines the impact of influenza pandemic on workforce availability within an interdependent set of economic sectors. We introduce a simulation model based on the dynamic input-output model to capture the propagation of pandemic consequences through the National Capital Region (NCR). The analysis conducted in this article is based on the 2009 H1N1 pandemic data. Two metrics were used to assess the impacts of the influenza pandemic on the economic sectors: (i) inoperability, which measures the percentage gap between the as-planned output and the actual output of a sector, and (ii) economic loss, which quantifies the associated monetary value of the degraded output. The inoperability and economic loss metrics generate two different rankings of the critical economic sectors. Results show that most of the critical sectors in terms of inoperability are sectors that are related to hospitals and health-care providers. On the other hand, most of the sectors that are critically ranked in terms of economic loss are sectors with significant total production outputs in the NCR such as federal government agencies. Therefore, policy recommendations relating to potential mitigation and recovery strategies should take into account the balance between the inoperability and economic loss metrics. © 2013 Society for Risk Analysis.
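
    For context, a minimal static version of the inoperability input-output calculation behind the two metrics is sketched below (the paper uses a dynamic extension); the three-sector interdependency matrix, perturbation, and planned outputs are made-up illustrative values.

    ```python
    # Static inoperability input-output sketch: q = (I - A*)^{-1} c*, and
    # economic loss = planned output * inoperability.  Values are illustrative.
    import numpy as np

    A_star = np.array([[0.10, 0.05, 0.02],      # normalized interdependency matrix
                       [0.08, 0.15, 0.04],
                       [0.03, 0.06, 0.12]])
    c_star = np.array([0.05, 0.15, 0.02])       # direct workforce-loss perturbation
    x_planned = np.array([800.0, 300.0, 500.0])  # as-planned output, $ millions

    # Equilibrium inoperability
    q = np.linalg.solve(np.eye(3) - A_star, c_star)
    economic_loss = x_planned * q               # monetary value of degraded output

    for name, qi, li in zip(["sector A", "sector B", "sector C"], q, economic_loss):
        print(f"{name}: inoperability {qi:.3f}, economic loss ${li:.1f}M")
    ```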

  9. CONSTRUCTION OF A DYNAMIC INPUT-OUTPUT MODEL WITH A HUMAN CAPITAL BLOCK

    Directory of Open Access Journals (Sweden)

    Baranov A. O.

    2017-03-01

    Full Text Available The accumulation of human capital is an important factor of economic growth. It seems to be useful to include «human capital» as a factor of a macroeconomic model, as it helps to take into account the quality differentiation of the workforce. Most models usually distinguish the labor force by levels of education, while some of the factors remain unaccounted for. Among them are health status and the level of cultural development, which influence the productivity level as well as gross product reproduction. Inclusion of the human capital block into the interindustry model can help to make it more reliable for economic development forecasting. The article presents a mathematical description of the extended dynamic input-output model (DIOM) with a human capital block. The extended DIOM is based on the Input-Output Model from The KAMIN system (the System of Integrated Analyses of Interindustrial Information) developed at the Institute of Economics and Industrial Engineering of the Siberian Branch of the Academy of Sciences of the Russian Federation and at the Novosibirsk State University. The extended input-output model can be used to analyze and forecast the development of the Russian economy.

  10. Using Automated Essay Scores as an Anchor When Equating Constructed Response Writing Tests

    Science.gov (United States)

    Almond, Russell G.

    2014-01-01

    Assessments consisting of only a few extended constructed response items (essays) are not typically equated using anchor test designs as there are typically too few essay prompts in each form to allow for meaningful equating. This article explores the idea that output from an automated scoring program designed to measure writing fluency (a common…

  11. Adaptive fuzzy dynamic surface control of nonlinear systems with input saturation and time-varying output constraints

    Science.gov (United States)

    Edalati, L.; Khaki Sedigh, A.; Aliyari Shooredeli, M.; Moarefianpour, A.

    2018-02-01

    This paper deals with the design of adaptive fuzzy dynamic surface control for uncertain strict-feedback nonlinear systems with asymmetric time-varying output constraints in the presence of input saturation. To approximate the unknown nonlinear functions and overcome the problem of explosion of complexity, a fuzzy logic system is combined with the dynamic surface control in the backstepping design technique. To ensure satisfaction of the output constraints, an asymmetric time-varying barrier Lyapunov function (BLF) is used. Moreover, by applying the minimal learning parameter technique, the number of online parameter updates for each subsystem is reduced to two. Hence, semi-global uniform ultimate boundedness (SGUUB) of all the closed-loop signals with appropriate tracking error convergence is guaranteed. The effectiveness of the proposed control is demonstrated by two simulation examples.

  12. Automated quantification of neuronal networks and single-cell calcium dynamics using calcium imaging.

    Science.gov (United States)

    Patel, Tapan P; Man, Karen; Firestein, Bonnie L; Meaney, David F

    2015-03-30

    Recent advances in genetically engineered calcium and membrane potential indicators provide the potential to estimate the activation dynamics of individual neurons within larger, mesoscale networks (100s-1000+neurons). However, a fully integrated automated workflow for the analysis and visualization of neural microcircuits from high speed fluorescence imaging data is lacking. Here we introduce FluoroSNNAP, Fluorescence Single Neuron and Network Analysis Package. FluoroSNNAP is an open-source, interactive software developed in MATLAB for automated quantification of numerous biologically relevant features of both the calcium dynamics of single-cells and network activity patterns. FluoroSNNAP integrates and improves upon existing tools for spike detection, synchronization analysis, and inference of functional connectivity, making it most useful to experimentalists with little or no programming knowledge. We apply FluoroSNNAP to characterize the activity patterns of neuronal microcircuits undergoing developmental maturation in vitro. Separately, we highlight the utility of single-cell analysis for phenotyping a mixed population of neurons expressing a human mutant variant of the microtubule associated protein tau and wild-type tau. We show the performance of semi-automated cell segmentation using spatiotemporal independent component analysis and significant improvement in detecting calcium transients using a template-based algorithm in comparison to peak-based or wavelet-based detection methods. Our software further enables automated analysis of microcircuits, which is an improvement over existing methods. We expect the dissemination of this software will facilitate a comprehensive analysis of neuronal networks, promoting the rapid interrogation of circuits in health and disease. Copyright © 2015. Published by Elsevier B.V.

  13. Software complex for developing dynamically packed program system for experiment automation

    International Nuclear Information System (INIS)

    Baluka, G.; Salamatin, I.M.

    1985-01-01

    Software complex for developing dynamically packed program system for experiment automation is considered. The complex includes general-purpose programming systems represented as the RT-11 standard operating system and specially developed problem-oriented modules providing execution of certain jobs. The described complex is realized in the PASKAL' and MAKRO-2 languages and it is rather flexible with respect to variations in the experimental technique.

  14. Production and quality assurance automation in the Goddard Space Flight Center Flight Dynamics Facility

    Science.gov (United States)

    Chapman, K. B.; Cox, C. M.; Thomas, C. W.; Cuevas, O. O.; Beckman, R. M.

    1994-01-01

    The Flight Dynamics Facility (FDF) at the NASA Goddard Space Flight Center (GSFC) generates numerous products for NASA-supported spacecraft, including the Tracking and Data Relay Satellites (TDRS's), the Hubble Space Telescope (HST), the Extreme Ultraviolet Explorer (EUVE), and the space shuttle. These products include orbit determination data, acquisition data, event scheduling data, and attitude data. In most cases, product generation involves repetitive execution of many programs. The increasing number of missions supported by the FDF has necessitated the use of automated systems to schedule, execute, and quality assure these products. This automation allows the delivery of accurate products in a timely and cost-efficient manner. To be effective, these systems must automate as many repetitive operations as possible and must be flexible enough to meet changing support requirements. The FDF Orbit Determination Task (ODT) has implemented several systems that automate product generation and quality assurance (QA). These systems include the Orbit Production Automation System (OPAS), the New Enhanced Operations Log (NEOLOG), and the Quality Assurance Automation Software (QA Tool). Implementation of these systems has resulted in a significant reduction in required manpower, elimination of shift work and most weekend support, and improved support quality, while incurring minimal development cost. This paper will present an overview of the concepts used and experiences gained from the implementation of these automation systems.

  15. Response estimation for a floating bridge using acceleration output only

    NARCIS (Netherlands)

    Petersen, Øyvind Wiig; Øiseth, Ole; Nord, Torodd Skjerve; Lourens, E.; Sas, P.; Moens, D.; van de Walle, A.

    2016-01-01

    The Norwegian Public Roads Administration is reviewing the possibility of using floating bridges as fjord crossings. The dynamic behaviour of very long floating bridges with novel designs are prone to uncertainties. Studying the dynamic behaviour of existing bridges is valuable for understanding

  16. Input/Output linearizing control of a nuclear reactor

    International Nuclear Information System (INIS)

    Perez C, V.

    1994-01-01

    The feedback linearization technique is an approach to nonlinear control design. The basic idea is to transform, by means of algebraic methods, the dynamics of a nonlinear control system into a fully or partially linear system. As a result of this linearization process, well-known basic linear control techniques can be used to obtain some desired dynamic characteristics. When full linearization is achieved, the method is referred to as input-state linearization, whereas when partial linearization is achieved, the method is referred to as input-output linearization. We will deal with the latter. By means of input-output linearization, the dynamics of a nonlinear system can be decomposed into an external part (input-output) and an internal part (unobservable). Since the external part consists of a linear relationship between the output of the plant and an auxiliary control input, it is easy to design this auxiliary control input so that the output behaves in a predetermined way. Since the internal dynamics of the system is known, its dynamic behavior can be checked in order to ensure that the internal states are bounded. The linearization method described here can be applied to systems with one input/one output, as well as to systems with multiple inputs/multiple outputs. Typical control problems such as stabilization and reference path tracking can be solved using this technique. In this work, the input/output linearization theory is presented, as well as the problem of getting the output variable to track some desired trajectories. Further, the design of an input/output control system applied to the nonlinear model of a research nuclear reactor is included, along with the results obtained by computer simulation. (Author)
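
    A small symbolic sketch of the relative-degree-one case is given below, assuming SymPy is available; the example system is illustrative and is not the reactor model discussed in the record.

    ```python
    # Input-output linearization for dx/dt = f(x) + g(x) u, y = h(x) with
    # relative degree one: u = (v - Lf_h) / Lg_h renders the map dy/dt = v linear.
    import sympy as sp

    x1, x2, u, v = sp.symbols("x1 x2 u v")
    x = sp.Matrix([x1, x2])

    f = sp.Matrix([-x1 + x2**2, -x2])   # drift vector field f(x)
    g = sp.Matrix([0, 1])               # input vector field g(x)
    h = x2                              # output y = h(x)

    grad_h = sp.Matrix([h]).jacobian(x)
    Lf_h = (grad_h * f)[0]              # Lie derivative of h along f
    Lg_h = (grad_h * g)[0]              # Lie derivative of h along g

    u_lin = sp.simplify((v - Lf_h) / Lg_h)   # linearizing feedback
    print(u_lin)                              # -> v + x2, giving dy/dt = v
    ```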

  17. Applications of Dynamic Deployment of Services in Industrial Automation

    Science.gov (United States)

    Candido, Gonçalo; Barata, José; Jammes, François; Colombo, Armando W.

    Service-oriented Architecture (SOA) is becoming a de facto paradigm for business and enterprise integration. SOA is expanding into several domains of application, envisioning a unified solution suitable across all layers of an enterprise infrastructure. The application of SOA based on open web standards can significantly enhance the interoperability and openness of those devices. By embedding a dynamic deployment service even into small field devices, it would be possible both to allow machine builders to place built-in services and to allow the integrator to deploy at run time the services that best fit his current application. This approach allows the developer to keep his own preferred development language, but still deliver a SOA-compliant application. A dynamic deployment service is envisaged as a fundamental framework to support more complex applications, reducing deployment delays while increasing overall system agility. As a use-case scenario, a dynamic deployment service was implemented over the DPWS and WS-Management specifications, allowing an automation application to be designed and programmed using IEC 61131 languages and these components to be deployed as web services into devices.

  18. Adaptive neural network output feedback control for stochastic nonlinear systems with unknown dead-zone and unmodeled dynamics.

    Science.gov (United States)

    Tong, Shaocheng; Wang, Tong; Li, Yongming; Zhang, Huaguang

    2014-06-01

    This paper discusses the problem of adaptive neural network output feedback control for a class of stochastic nonlinear strict-feedback systems. The concerned systems have certain characteristics, such as unknown nonlinear uncertainties, unknown dead-zones, unmodeled dynamics, and the absence of direct measurements of the state variables. In this paper, neural networks (NNs) are employed to approximate the unknown nonlinear uncertainties, and the dead-zone is represented as a time-varying system with a bounded disturbance. An NN state observer is designed to estimate the unmeasured states. Based on both the backstepping design technique and a stochastic small-gain theorem, a robust adaptive NN output feedback control scheme is developed. It is proved that all the variables involved in the closed-loop system are input-to-state practically stable in probability, and also have robustness to the unmodeled dynamics. Meanwhile, the observer errors and the output of the system can be regulated to a small neighborhood of the origin by selecting appropriate design parameters. Simulation examples are also provided to illustrate the effectiveness of the proposed approach.

  19. Automated x-ray television complex for inspecting standard-size dynamic objects

    International Nuclear Information System (INIS)

    Gusev, E.A.; Luk'yanenko, E.A.; Chelnokov, V.B.; Kuleshov, V.K.; Alkhimov, Yu.V.

    1993-01-01

    An automated x-ray television complex based on a matrix gas-discharge converter having a large area (2.1 x 1.0 m) for inspecting standard-size freight and containers and for diagnosing industrial articles is presented. The pulsed operating mode of the complex with a 512K digital television storage makes it possible to inspect dynamic objects with a minimum dose load (20–100 μR). 6 refs., 5 figs

  20. Automated analysis of invadopodia dynamics in live cells

    Directory of Open Access Journals (Sweden)

    Matthew E. Berginski

    2014-07-01

    Full Text Available Multiple cell types form specialized protein complexes that are used by the cell to actively degrade the surrounding extracellular matrix. These structures are called podosomes or invadopodia and collectively referred to as invadosomes. Due to their potential importance in both healthy physiology as well as in pathological conditions such as cancer, the characterization of these structures has been of increasing interest. Following early descriptions of invadopodia, assays were developed which labelled the matrix underneath metastatic cancer cells allowing for the assessment of invadopodia activity in motile cells. However, characterization of invadopodia using these methods has traditionally been done manually with time-consuming and potentially biased quantification methods, limiting the number of experiments and the quantity of data that can be analysed. We have developed a system to automate the segmentation, tracking and quantification of invadopodia in time-lapse fluorescence image sets at both the single invadopodia level and whole cell level. We rigorously tested the ability of the method to detect changes in invadopodia formation and dynamics through the use of well-characterized small molecule inhibitors, with known effects on invadopodia. Our results demonstrate the ability of this analysis method to quantify changes in invadopodia formation from live cell imaging data in a high throughput, automated manner.

  1. Determining intervention thresholds that change output behavior patterns

    NARCIS (Netherlands)

    Walrave, B.

    2016-01-01

    This paper details a semi-automated method that can calculate intervention thresholds—that is, the minimum required intervention sizes, over a given time frame, that result in a desired change in a system’s output behavior pattern. The method exploits key differences in atomic behavior profiles that

  2. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 4: HARP Output (HARPO) graphics display user's guide

    Science.gov (United States)

    Sproles, Darrell W.; Bavuso, Salvatore J.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical postprocessor program HARPO (HARP Output). HARPO reads ASCII files generated by HARP. It provides an interactive plotting capability that can be used to display alternate model data for trade-off analyses. File data can also be imported to other commercial software programs.

  3. Dynamic integrated assessment of bioenergy technologies for energy production utilizing agricultural residues: An input–output approach

    International Nuclear Information System (INIS)

    Song, Junnian; Yang, Wei; Higano, Yoshiro; Wang, Xian’en

    2015-01-01

    Highlights: • A dynamic input–output model is developed with bioenergy technologies complemented. • Availability of agricultural residues for bioenergy technologies is evaluated. • Trends in electricity and biofuel production are simulated dynamically. • Net profit and GHG mitigation contribution of bioenergy technologies are assessed. • Combustion power generation and briquette fuel are more advantageous. - Abstract: In order to facilitate regional agricultural residue utilization for energy production through bioenergy technologies, a dynamic input–output model is developed to estimate and assess the energy, economic and environmental performances of industrialization of five bioenergy technologies within a 15-year time horizon. Electricity and solid, gaseous and liquid biofuels are the energy products of the bioenergy technologies. The bioenergy technologies are complemented into the regional input–output framework and combined with socioeconomic activities, aided by their bottom-up economic and energy parameters. The simulation results for the target area indicate that the agricultural residues available for bioenergy technologies could amount to 55.16 million t, supporting 8.38 million t of coal-equivalent bioenergy production by 2025. A 3.1% net reduction in accumulative greenhouse gas emission compared with the "business as usual" case could be achieved owing to the substitution of fossil energy with electricity and biofuels produced by bioenergy technologies. Considering energy production, economic benefits and greenhouse gas mitigation in an integrated manner, direct-combustion power generation and briquette fuel are more advantageous in the target area. The quantified energy, economic and environmental performances of bioenergy technologies are expected to give recommendations for their industrial development.

  4. Output Tracking for Systems with Non-Hyperbolic and Near Non-Hyperbolic Internal Dynamics: Helicopter Hover Control

    Science.gov (United States)

    Devasia, Santosh

    1996-01-01

    A technique to achieve output tracking for nonminimum phase linear systems with non-hyperbolic and near non-hyperbolic internal dynamics is presented. This approach integrates stable inversion techniques, which achieve exact tracking, with approximation techniques, which modify the internal dynamics to achieve desirable performance. Such modification of the internal dynamics is used (1) to remove non-hyperbolicity, which is an obstruction to applying stable inversion techniques, and (2) to reduce the large pre-actuation time needed to apply stable inversion for near non-hyperbolic cases. The method is applied to an example helicopter hover control problem with near non-hyperbolic internal dynamics to illustrate the trade-off between exact tracking and reduction of pre-actuation time.

  5. Will Automated Vehicles Negatively Impact Traffic Flow?

    Directory of Open Access Journals (Sweden)

    S. C. Calvert

    2017-01-01

    Full Text Available With low-level vehicle automation already available, there is a necessity to estimate its effects on traffic flow, especially if these could be negative. A long gradual transition will occur from manual driving to automated driving, in which many yet unknown traffic flow dynamics will be present. These effects have the potential to increasingly aid or cripple current road networks. In this contribution, we investigate these effects using an empirically calibrated and validated simulation experiment, backed up with findings from literature. We found that low-level automated vehicles in mixed traffic will initially have a small negative effect on traffic flow and road capacities. The experiment further showed that any improvement in traffic flow will only be seen at penetration rates above 70%. Also, the capacity drop appeared to be slightly higher with the presence of low-level automated vehicles. The experiment further investigated the effect of bottleneck severity and truck shares on traffic flow. Improvements to current traffic models are recommended and should include a greater detail and understanding of driver-vehicle interaction, both in conventional and in mixed traffic flow. Further research into behavioural shifts in driving is also recommended due to limited data and knowledge of these dynamics.

  6. A methodology based on dynamic artificial neural network for short-term forecasting of the power output of a PV generator

    International Nuclear Information System (INIS)

    Almonacid, F.; Pérez-Higueras, P.J.; Fernández, Eduardo F.; Hontoria, L.

    2014-01-01

    Highlights: • The output of the majority of renewable energies depends on the variability of the weather conditions. • The short-term forecast is going to be essential for effectively integrating solar energy sources. • A new method based on an artificial neural network to predict the power output of a PV generator one hour ahead is proposed. • This new method is based on a dynamic artificial neural network that predicts global solar irradiance and air temperature. • The methodology developed can be used to estimate the power output of a PV generator with a satisfactory margin of error. - Abstract: One of the problems of some renewable energies is that the output of these kinds of systems is non-dispatchable, depending on the variability of weather conditions, which cannot be predicted and controlled. From this point of view, the short-term forecast is going to be essential for effectively integrating solar energy sources, being a very useful tool for the reliability and stability of the grid and ensuring that an adequate supply is present. In this paper a new methodology for forecasting the output of a PV generator one hour ahead based on a dynamic artificial neural network is presented. The results of this study show that the proposed methodology could be used to forecast the power output of PV systems one hour ahead with an acceptable degree of accuracy.
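
    As a hedged illustration of one-hour-ahead forecasting from lagged inputs, the sketch below trains a small feed-forward network on synthetic irradiance, temperature, and power series; it is not the dynamic ANN of the paper, and all data and parameters are assumptions.

    ```python
    # One-hour-ahead PV power forecasting from lagged weather/power features.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    hours = np.arange(24 * 60)                       # 60 days of hourly samples
    irradiance = np.clip(np.sin(2 * np.pi * hours / 24), 0, None) * 1000 \
                 + rng.normal(scale=30, size=hours.size)
    temperature = 20 + 8 * np.sin(2 * np.pi * (hours - 3) / 24) \
                  + rng.normal(scale=1, size=hours.size)
    power = 0.15 * irradiance * (1 - 0.004 * (temperature - 25))  # toy PV model, kW

    # Features at hour t: current and previous-hour irradiance, temperature, power.
    lag = 1
    X = np.column_stack([irradiance[lag:-1], irradiance[:-1 - lag],
                         temperature[lag:-1], power[lag:-1]])
    y = power[lag + 1:]                              # target: power one hour ahead

    split = int(0.8 * len(X))
    model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    model.fit(X[:split], y[:split])
    print("test R^2:", model.score(X[split:], y[split:]))
    ```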

  7. Automation of a thermal expansion instrument

    Energy Technology Data Exchange (ETDEWEB)

    Holland, L.L.

    1979-03-01

    Automation of a thermal expansion instrument using a minicomputer system and with analog-to-digital converter inputs and flip-flop relay outputs is described. The necessary hardware link and the software were developed to allow equipment control, data acquisition, data reduction, and report generation by the minicomputer. The design of the automation allows non-programmers to run the experiment, reduce the data, and generate the report.

  8. Long-term WWER-440 dynamics in cyclic power output changes

    International Nuclear Information System (INIS)

    Petruzela, I.

    1989-01-01

    Xenon poisoning is one of the main factors limiting the operation of a nuclear power plant with a WWER-440 reactor in the variable load mode, where long-term dynamics applies to cyclic power output changes. An analysis of the linearized transfer function of xenon poisoning shows that, for a sine-wave power change with a period of 24 hours, a phase shift of 180deg takes place between the summed-up reactivity change due to the power change and the reactivity change due to xenon poisoning. Thus, the requirements on the reactivity change of the control elements are minimized, and the maximum value of released reactivity that can be utilized before the end of the campaign can be achieved. (B.S.). 6 figs., 4 tabs., 9 refs
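
    For reference, the standard iodine-xenon balance equations that drive this long-term dynamics can be integrated under a 24-hour sinusoidal power cycle as sketched below; all constants are typical textbook values used purely for illustration, not WWER-440 data.

    ```python
    # I-135/Xe-135 balance equations driven by a sinusoidal flux (24-hour period).
    import numpy as np
    from scipy.integrate import solve_ivp

    LAM_I, LAM_XE = 2.93e-5, 2.11e-5      # decay constants, 1/s (typical values)
    GAM_I, GAM_XE = 0.064, 0.0025         # fission yields (illustrative)
    SIG_XE = 2.65e-18                     # Xe-135 absorption cross-section, cm^2
    SIGMA_F = 0.05                        # macroscopic fission cross-section, 1/cm
    PHI0 = 3.0e13                         # nominal thermal flux, n/cm^2/s

    def flux(t):
        """Sinusoidal power cycling, 100% +/- 50%, 24-hour period."""
        return PHI0 * (1.0 + 0.5 * np.sin(2 * np.pi * t / 86400.0))

    def rhs(t, y):
        iodine, xenon = y
        phi = flux(t)
        dI = GAM_I * SIGMA_F * phi - LAM_I * iodine
        dXe = GAM_XE * SIGMA_F * phi + LAM_I * iodine \
              - LAM_XE * xenon - SIG_XE * phi * xenon
        return [dI, dXe]

    # Start from the equilibrium corresponding to nominal flux and run 5 days.
    I_eq = GAM_I * SIGMA_F * PHI0 / LAM_I
    Xe_eq = (GAM_XE + GAM_I) * SIGMA_F * PHI0 / (LAM_XE + SIG_XE * PHI0)
    sol = solve_ivp(rhs, (0, 5 * 86400), [I_eq, Xe_eq], max_step=600)
    print("relative Xe swing:", (sol.y[1].max() - sol.y[1].min()) / Xe_eq)
    ```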

  9. The feasibility of automated online flow cytometry for in-situ monitoring of microbial dynamics in aquatic ecosystems

    Science.gov (United States)

    Besmer, Michael D.; Weissbrodt, David G.; Kratochvil, Bradley E.; Sigrist, Jürg A.; Weyland, Mathias S.; Hammes, Frederik

    2014-01-01

    Fluorescent staining coupled with flow cytometry (FCM) is often used for the monitoring, quantification and characterization of bacteria in engineered and environmental aquatic ecosystems including seawater, freshwater, drinking water, wastewater, and industrial bioreactors. However, infrequent grab sampling hampers accurate characterization and subsequent understanding of microbial dynamics in all of these ecosystems. A logical technological progression is high-throughput operation and full automation of the sampling, staining, measurement, and data analysis steps. Here we assess the feasibility and applicability of automated FCM by means of actual data sets produced with prototype instrumentation. As proof-of-concept we demonstrate examples of microbial dynamics in (i) flowing tap water from a municipal drinking water supply network and (ii) river water from a small creek subject to two rainfall events. In both cases, automated measurements were done at 15-min intervals during 12–14 consecutive days, yielding more than 1000 individual data points for each ecosystem. The extensive data sets derived from the automated measurements allowed for the establishment of baseline data for each ecosystem, as well as for the recognition of daily variations and specific events that would most likely be missed (or mischaracterized) by infrequent sampling. In addition, the online FCM data from the river water were combined and correlated with online measurements of abiotic parameters, showing considerable potential for a better understanding of cause-and-effect relationships in aquatic ecosystems. Although several challenges remain, the successful operation of an automated online FCM system and the basic interpretation of the resulting data sets represent a breakthrough toward the eventual establishment of fully automated online microbiological monitoring technologies. PMID:24917858

  10. Automated Driftmeter Fused with Inertial Navigation

    Science.gov (United States)

    2014-03-27

    dynamics matrix, Γc is an input matrix, δu is the unknown accelerometer and gyro biases vector, and w is a Brownian motion vector (which quantifies... velocity over a flat and non-rotating Earth. Therefore, the only true dynamics of the simulated aircraft is the movement in the x direction. This... movement is due to the aircraft’s constant velocity, which directly relates to the true aircraft position. Therefore, the output of the Free INS is

  11. Automated smoother for the numerical decoupling of dynamics models.

    Science.gov (United States)

    Vilela, Marco; Borges, Carlos C H; Vinga, Susana; Vasconcelos, Ana Tereza R; Santos, Helena; Voit, Eberhard O; Almeida, Jonas S

    2007-08-21

    Structure identification of dynamic models for complex biological systems is the cornerstone of their reverse engineering. Biochemical Systems Theory (BST) offers a particularly convenient solution because its parameters are kinetic-order coefficients which directly identify the topology of the underlying network of processes. We have previously proposed a numerical decoupling procedure that allows the identification of multivariate dynamic models of complex biological processes. While described here within the context of BST, this procedure has a general applicability to signal extraction. Our original implementation relied on artificial neural networks (ANN), which caused slight, undesirable bias during the smoothing of the time courses. As an alternative, we propose here an adaptation of the Whittaker smoother and demonstrate its role within a robust, fully automated structure identification procedure. In this report we propose a robust, fully automated solution for signal extraction from time series, which is the prerequisite for the efficient reverse engineering of biological systems models. The Whittaker smoother is reformulated within the context of information theory and extended by the development of adaptive signal segmentation to account for heterogeneous noise structures. The resulting procedure can be used on arbitrary time series with a nonstationary noise process; it is illustrated here with metabolic profiles obtained from in-vivo NMR experiments. The smoothed solution that is free of parametric bias permits differentiation, which is crucial for the numerical decoupling of systems of differential equations. The method is applicable in signal extraction from time series with nonstationary noise structure and can be applied in the numerical decoupling of systems of differential equations into algebraic equations, and thus constitutes a rather general tool for the reverse engineering of mechanistic model descriptions from multivariate experimental data.
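
    As a rough illustration of the core smoothing step described above, the sketch below implements the standard penalized least-squares (Whittaker-Eilers) smoother and differentiates the smoothed signal, as needed for numerical decoupling. The penalty weight lam is chosen by hand here; the paper's information-theoretic reformulation and adaptive segmentation for heterogeneous noise are not reproduced.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def whittaker_smooth(y, lam=100.0, d=2):
    """Whittaker smoother: minimize ||y - z||^2 + lam * ||D^d z||^2.

    y   : 1-D array of equally spaced, noisy observations
    lam : penalty weight (larger -> smoother); chosen by hand here,
          not by the information-theoretic criterion of the paper
    d   : order of the difference penalty (2 penalizes curvature)
    """
    n = len(y)
    D = sparse.eye(n, format="csc")
    for _ in range(d):
        D = D[1:] - D[:-1]              # d-th order difference matrix
    A = sparse.eye(n, format="csc") + lam * (D.T @ D)
    return spsolve(A, y)

# Example: smooth a noisy metabolite-like time course, then differentiate
t = np.linspace(0, 10, 200)
y = np.exp(-0.3 * t) * np.sin(2 * t) + 0.05 * np.random.randn(t.size)
z = whittaker_smooth(y, lam=50.0)
dzdt = np.gradient(z, t)                # slopes used for numerical decoupling
```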

  12. Automated material interface (AMI) for mini-environment technology

    International Nuclear Information System (INIS)

    Wolf, M.; Fleming, J.; Mueller, R.; Pulaski, L.

    1993-01-01

    The Automated Material Interface or AMI presented here is a new method of automating the input/output of materials or products into mini-environments. The AMI concept and hardware, and preliminary data, are presented to show that neither the automation hardware nor the product isolation carrier contributes any significant contamination to the product during the I/O process. Budgetary estimates are presented which support the AMI concept for cost effective manufacturing of advanced semiconductors and disk media

  13. Effects of self-coupling and asymmetric output on metastable dynamical transient firing patterns in arrays of neurons with bidirectional inhibitory coupling.

    Science.gov (United States)

    Horikawa, Yo

    2016-04-01

    Metastable dynamical transient patterns in arrays of bidirectionally coupled neurons with self-coupling and asymmetric output were studied. First, an array of asymmetric sigmoidal neurons with symmetric inhibitory bidirectional coupling and self-coupling was considered and the bifurcations of its steady solutions were shown. Metastable dynamical transient spatially nonuniform states existed in the presence of a pair of spatially symmetric stable solutions as well as unstable spatially nonuniform solutions in a restricted range of the output gain of a neuron. The duration of the transients increased exponentially with the number of neurons up to the maximum number at which the spatially nonuniform steady solutions were stabilized. The range of the output gain for which they existed reduced as asymmetry in a sigmoidal output function of a neuron increased, while the existence range expanded as the strength of inhibitory self-coupling increased. Next, arrays of spiking neuron models with slow synaptic inhibitory bidirectional coupling and self-coupling were considered with computer simulation. In an array of Class 1 Hindmarsh-Rose type models, in which each neuron showed a graded firing rate, metastable dynamical transient firing patterns were observed in the presence of inhibitory self-coupling. This agreed with the condition for the existence of metastable dynamical transients in an array of sigmoidal neurons. In an array of Class 2 Bonhoeffer-van der Pol models, in which each neuron had a clear threshold between firing and resting, long-lasting transient firing patterns with bursting and irregular motion were observed. Copyright © 2016 Elsevier Ltd. All rights reserved.
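
    The sketch below is a minimal, illustrative simulation of the type of array studied in the first part of the abstract: sigmoidal neurons with inhibitory nearest-neighbour (bidirectional) coupling and inhibitory self-coupling, integrated with an explicit Euler scheme. The output function, gain, asymmetry, and coupling strengths are assumed placeholder values, not the parameters used in the paper.

```python
import numpy as np

def asym_sigmoid(x, gain=5.0, asym=0.2):
    """Asymmetric sigmoidal output function (illustrative form)."""
    s = np.tanh(gain * x)
    return np.where(s > 0, s, (1.0 - asym) * s)   # weaker negative branch

def simulate(n=20, w_c=-1.0, w_s=-0.3, T=2000.0, dt=0.01, seed=0):
    """Euler integration of dx_i/dt = -x_i + w_c*(f(x_{i-1}) + f(x_{i+1})) + w_s*f(x_i).

    w_c < 0 : inhibitory bidirectional coupling to nearest neighbours
    w_s < 0 : inhibitory self-coupling
    Open boundary conditions and a small random initial state.
    """
    rng = np.random.default_rng(seed)
    x = 0.01 * rng.standard_normal(n)
    traj = []
    for _ in range(int(T / dt)):
        f = asym_sigmoid(x)
        left = np.concatenate(([0.0], f[:-1]))
        right = np.concatenate((f[1:], [0.0]))
        x = x + dt * (-x + w_c * (left + right) + w_s * f)
        traj.append(x.copy())
    return np.array(traj)

# The duration of the transient (time before the pattern settles to a
# spatially uniform steady state) can be read off the trajectory, e.g. by
# thresholding the spatial variance of f(x) over time.
traj = simulate()
```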

  14. Static stretching vs. dynamic warm-ups: a comparison of their effects on torque and electromyography output of the quadriceps and hamstring muscles.

    Science.gov (United States)

    Williams, N; Coburn, J; Gillum, T

    2015-11-01

    The aim of this paper was to determine whether two different warm-up protocols differently affect torque of the quadriceps and hamstrings, and electromyography (EMG) output of the rectus femoris (RF) and vastus lateralis (VL), when completing 30 maximal leg extensions and curls. Twenty-one healthy male (N.=8) and female (N.=13) subjects volunteered to participate in a familiarization session and three testing sessions. The three testing sessions (control, dynamic, and static) were completed in a counterbalanced order on non-consecutive days. First, subjects warmed up on a treadmill for five minutes before completing six dynamic movements, six static stretches, or no stretches. They then rested for five minutes before completing 30 maximal leg extensions and curls at a speed of 60°·s-1. A significant decrease in quadriceps torque output over time was found for the dynamic protocol when compared to the control (P<0.05), while no significant differences were observed in hamstring torque or EMG output of the RF and VL. Short-duration static stretching has the ability to increase peak and average torque of the leg extensors, while some types of anaerobic exercise involving maximal contractions to fatigue may be hindered by performing dynamic movements as part of the warm-up.

  15. Adaptive Neural Output-Feedback Control for a Class of Nonlower Triangular Nonlinear Systems With Unmodeled Dynamics.

    Science.gov (United States)

    Wang, Huanqing; Liu, Peter Xiaoping; Li, Shuai; Wang, Ding

    2017-08-29

    This paper presents the development of an adaptive neural controller for a class of nonlinear systems with unmodeled dynamics and immeasurable states. An observer is designed to estimate system states. The structure consistency of virtual control signals and the variable partition technique are combined to overcome the difficulties appearing in a nonlower triangular form. An adaptive neural output-feedback controller is developed based on the backstepping technique and the universal approximation property of the radial basis function (RBF) neural networks. By using the Lyapunov stability analysis, the semiglobal and uniform ultimate boundedness of all signals within the closed-loop system is guaranteed. The simulation results show that the controlled system converges quickly, and all the signals are bounded. This paper is novel in at least two aspects: 1) an output-feedback control strategy is developed for a class of nonlower triangular nonlinear systems with unmodeled dynamics and 2) the nonlinear disturbances and their bounds are the functions of all states, which is in a more general form than existing results.

  16. Evaluation of automated residential demand response with flat and dynamic pricing

    International Nuclear Information System (INIS)

    Swisher, Joel; Wang, Kitty; Stewart, Stewart

    2005-01-01

    This paper reviews the performance of two recent automated load management programs for residential customers of electric utilities in two American states. Both pilot programs have been run with about 200 participant houses each, and both programs have control populations of similar customers without the technology or program treatment. In both cases, the technology used in the pilot is GoodWatts, an advanced, two-way, real-time, comprehensive home energy management system. The purpose of each pilot is to determine the household kW reduction in coincident peak electric load from the energy management technology. Nevada Power has conducted a pilot program for Air-Conditioning Load Management (ACLM), in which customers are sent an electronic curtailment signal for three-hour intervals during times of maximum peak demand. The participating customers receive an annual incentive payment, but otherwise they are on a conventional utility tariff. In California, three major utilities are jointly conducting a pilot demonstration of an Automated Demand Response System (ADRS). Customers are on a time-of-use (ToU) tariff, which includes a critical peak pricing (CPP) element. During times of maximum peak demand, customers are sent an electronic price signal that is three times higher than the normal on-peak price. Houses with the automated GoodWatts technology reduced their demand in both the ACLM and the ADRS programs by about 50% consistently across the summer curtailment or super peak events, relative to homes without the technology or any load management program or tariff in place. The absolute savings were greater in the ACLM program, due to the higher baseline air conditioning loads in the hotter Las Vegas climate. The results suggest that either automated technology or dynamic pricing can deliver significant demand response in low-consumption houses. However, for high-consumption houses, automated technology can reduce load by a greater absolute kWh difference. Targeting

  17. A study on an automated computerized differential diagnosis of diffuse liver diseases, based only on hepatic scintigrams using sup(99m)Tc-Sn-colloid

    International Nuclear Information System (INIS)

    Matsuo, Michimasa; Fujii, Susumu; Kaneda, Yukio

    1980-01-01

    Hepatic scintigrams using sup(99m)Tc compounds are now routinely performed. In this study, automated computerized pattern characterization of right lateral hepatic scintigrams using sup(99m)Tc-Sn colloid was studied to extract characteristic indicators that are effective for automated computerized differential diagnosis. The program we developed for automated computerized pattern characterization and differential diagnosis can be run without relying on the pattern-recognition skills of specialist physicians. The right lateral hepatic scintigrams of fifty-one cases, accurately diagnosed by biopsy, were used as the training group. The results of the automated computerized differential diagnosis were as follows: three cases were accurately diagnosed among 3 normal cases; three among 3 acute hepatitis; seven among 7 chronic inactive hepatitis; twenty among 22 chronic active hepatitis; sixteen among 16 liver cirrhosis. Only two cases of chronic active hepatitis were falsely diagnosed, as chronic inactive hepatitis and as liver cirrhosis, respectively. The overall accuracy rate was 96% in the training group. These results suggest that automated computerized differential diagnosis of diffuse liver diseases based on hepatic scintigrams is feasible. (author)

  18. Comparison between manual and automated techniques for assessment of data from dynamic antral scintigraphy

    International Nuclear Information System (INIS)

    Misiara, Gustavo P.; Troncon, Luiz E.A.; Secaf, Marie; Moraes, Eder R.

    2008-01-01

    This work aimed at determining whether data from dynamic antral scintigraphy (DAS) yielded by a simple, manual technique are as accurate as those generated by a conventional automated technique (fast Fourier transform) for assessing gastric contractility. Seventy-one stretches (4 min) of 'activity versus time' curves obtained by DAS from 10 healthy volunteers and 11 functional dyspepsia patients, after ingesting a liquid meal (320 ml, 437 kcal) labeled with technetium-99m (99mTc)-phytate, were independently analyzed by manual and automated techniques. Data obtained by both techniques for the frequency of antral contractions were similar. Contraction amplitude determined by the manual technique was significantly higher than that estimated by the automated method, in both patients and controls. The contraction frequency 30 min post-meal was significantly lower in patients than in controls, which was correctly shown by both techniques. A manual technique using ordinary resources of the gamma camera workstation, despite yielding higher figures for the amplitude of gastric contractions, is as accurate as the conventional automated technique of DAS analysis. These findings may favor a more intensive use of DAS coupled to gastric emptying studies, which would provide a more comprehensive assessment of gastric motor function in disease. (author)

  19. Improvement of Computer Software Quality through Software Automated Tools.

    Science.gov (United States)

    1986-08-30

    information that are returned from the tools to the human user, and the forms in which these outputs are presented. ... AUTOMATED SOFTWARE TOOL MONITORING SYSTEM ... This document and Automated Software Tool Monitoring Program (Appendix 1) are ... Output features provide links from the tool to both the human user and the target machine (where applicable). They describe the types

  20. Automated nuclear materials accounting

    International Nuclear Information System (INIS)

    Pacak, P.; Moravec, J.

    1982-01-01

    An automated state system of accounting for nuclear materials data was established in Czechoslovakia in 1979. A file was compiled of 12 programs in the PL/1 language. The file is divided into four groups according to logical associations, namely programs for data input and checking, programs for handling the basic data file, programs for report outputs in the form of worksheets and magnetic tape records, and programs for book inventory listing, document inventory handling and materials balance listing. A similar automated system of nuclear fuel inventory for a light water reactor was introduced for internal purposes in the Institute of Nuclear Research (UJV). (H.S.)

  1. Automated validation of a computer operating system

    Science.gov (United States)

    Dervage, M. M.; Milberg, B. A.

    1970-01-01

    Programs apply selected input/output loads to complex computer operating system and measure performance of that system under such loads. Technique lends itself to checkout of computer software designed to monitor automated complex industrial systems.

  2. Reconstruction of dynamic structures of experimental setups based on measurable experimental data only

    Science.gov (United States)

    Chen, Tian-Yu; Chen, Yang; Yang, Hu-Jiang; Xiao, Jing-Hua; Hu, Gang

    2018-03-01

    Massive amounts of data have been accumulated in a wide range of fields, and analyzing these data to extract as much useful information as possible has become one of the central issues in interdisciplinary research. It is often the case that the output data of a system are measurable while the dynamic structures producing these data are hidden; revealing system structures by analyzing the available data, i.e., system reconstruction, is therefore one of the most important tasks of information extraction. In the past, most of the work in this area was based on theoretical analyses and numerical verifications, and direct analyses of experimental data are very rare. In physical science, most analyses of experimental setups have been based on the first principles of physics, i.e., so-called top-down analyses. In this paper, we conducted an experiment with the “Boer resonant instrument for forced vibration” (BRIFV) and inferred the dynamic structure of the experimental setup purely from the analysis of the measurable experimental data, i.e., by applying the bottom-up strategy. The dynamics of the experimental setup are strongly nonlinear and chaotic, and subject to inevitable noise. We proposed using high-order correlation computations to treat the nonlinear dynamics and two-time correlations to treat noise effects. By applying these approaches, we have successfully reconstructed the structure of the experimental setup, and the dynamic system reconstructed from the measured data reproduces the experimental results well over a wide range of parameters.
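
    The authors' bottom-up method relies on high-order and two-time correlation computations; as a much simpler stand-in for the general idea of inferring dynamic structure from measured output alone, the sketch below fits a polynomial vector field to a sampled trajectory by least squares on finite-difference derivatives. It is illustrative only and is not robust to the noise levels treated in the paper.

```python
import numpy as np

def polynomial_library(X):
    """Candidate terms [1, x, y, x^2, x*y, y^2] for a 2-D state (x, y)."""
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

def reconstruct(X, dt):
    """Least-squares fit of dX/dt ≈ Theta(X) @ C from a sampled trajectory X."""
    dXdt = np.gradient(X, dt, axis=0)        # finite-difference derivatives
    Theta = polynomial_library(X)
    C, *_ = np.linalg.lstsq(Theta, dXdt, rcond=None)
    return C                                  # nonzero rows reveal the structure

# Example: recover a damped oscillator dx/dt = y, dy/dt = -x - 0.1*y
dt, n = 0.01, 5000
X = np.zeros((n, 2)); X[0] = [1.0, 0.0]
for k in range(n - 1):                        # generate "measured" data
    x, y = X[k]
    X[k + 1] = [x + dt * y, y + dt * (-x - 0.1 * y)]
C = reconstruct(X + 1e-3 * np.random.randn(n, 2), dt)
```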

  3. Robust output synchronization of heterogeneous nonlinear agents in uncertain networks.

    Science.gov (United States)

    Yang, Xi; Wan, Fuhua; Tu, Mengchuan; Shen, Guojiang

    2017-11-01

    This paper investigates the global robust output synchronization problem for a class of nonlinear multi-agent systems. In the considered setup, the controlled agents are heterogeneous and with both dynamic and parametric uncertainties, the controllers are incapable of exchanging their internal states with the neighbors, and the communication network among agents is defined by an uncertain simple digraph. The problem is pursued via nonlinear output regulation theory and internal model based design. For each agent, the input-driven filter and the internal model compose the controller, and the decentralized dynamic output feedback control law is derived by using backstepping method and the modified dynamic high-gain technique. The theoretical result is applied to output synchronization problem for uncertain network of Lorenz-type agents. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Initial development of an automated task analysis profiling system

    International Nuclear Information System (INIS)

    Jorgensen, C.C.

    1984-01-01

    A program for automated task analysis is described. Called TAPS (task analysis profiling system), the program accepts normal English prose and outputs skills, knowledges, attitudes, and abilities (SKAAs) along with specific guidance and recommended ability measurement tests for nuclear power plant operators. A new method for defining SKAAs is presented along with a sample program output

  5. Automated Test Case Generation

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I would like to present the concept of automated test case generation. I work on it as part of my PhD and I think it would be interesting also for other people. It is also the topic of a workshop paper that I am introducing in Paris. (abstract below) Please note that the talk itself would be more general and not about the specifics of my PhD, but about the broad field of Automated Test Case Generation. I would introduce the main approaches (combinatorial testing, symbolic execution, adaptive random testing) and their advantages and problems. (oracle problem, combinatorial explosion, ...) Abstract of the paper: Over the last decade code-based test case generation techniques such as combinatorial testing or dynamic symbolic execution have seen growing research popularity. Most algorithms and tool implementations are based on finding assignments for input parameter values in order to maximise the execution branch coverage. Only a few of them consider dependencies from outside the Code Under Test’s scope such...

  6. BOA: Framework for Automated Builds

    CERN Document Server

    Ratnikova, N

    2003-01-01

    Managing large-scale software products is a complex software engineering task. The automation of the software development, release and distribution process is most beneficial in large collaborations, where a large number of developers, multiple platforms and a distributed environment are typical factors. This paper describes the Build and Output Analyzer framework and its components, which have been developed in CMS to facilitate software maintenance and improve software quality. The system allows users to generate, control and analyze various types of automated software builds and tests, such as regular rebuilds of the development code, software integration for releases and installation of the existing versions.

  7. BOA: Framework for automated builds

    International Nuclear Information System (INIS)

    Ratnikova, N.

    2003-01-01

    Managing large-scale software products is a complex software engineering task. The automation of the software development, release and distribution process is most beneficial in large collaborations, where a large number of developers, multiple platforms and a distributed environment are typical factors. This paper describes the Build and Output Analyzer framework and its components, which have been developed in CMS to facilitate software maintenance and improve software quality. The system allows users to generate, control and analyze various types of automated software builds and tests, such as regular rebuilds of the development code, software integration for releases and installation of the existing versions

  8. Methodology to Support Dynamic Function Allocation Policies Between Humans and Flight Deck Automation

    Science.gov (United States)

    Johnson, Eric N.

    2012-01-01

    Function allocation assigns work functions to all agents in a team, both human and automation. Efforts to guide function allocation systematically have been studied in many fields such as engineering, human factors, team and organization design, management science, and cognitive systems engineering. Each field focuses on certain aspects of function allocation, but not all; thus, an independent discussion of each does not address all necessary aspects of function allocation. Four distinctive perspectives have emerged from this comprehensive review of the literature in those fields: the technology-centered, human-centered, team-oriented, and work-oriented perspectives. Each perspective focuses on different aspects of function allocation: capabilities and characteristics of agents (automation or human), structure and strategy of a team, and work structure and environment. This report offers eight issues with function allocation that can be used to assess the extent to which each issue exists in a given function allocation. A modeling framework using formal models and simulation was developed to model work as described by the environment, agents, their inherent dynamics, and the relationships among them. Finally, to validate the framework and metrics, a case study modeled four different function allocations between a pilot and flight deck automation during the arrival and approach phases of flight.

  9. Dynamic CT myocardial perfusion imaging: performance of 3D semi-automated evaluation software

    Energy Technology Data Exchange (ETDEWEB)

    Ebersberger, Ullrich [Medical University of South Carolina, Heart and Vascular Center, Charleston, SC (United States); Heart Center Munich-Bogenhausen, Department of Cardiology and Intensive Care Medicine, Munich (Germany); Marcus, Roy P.; Nikolaou, Konstantin; Bamberg, Fabian [University of Munich, Institute of Clinical Radiology, Munich (Germany); Schoepf, U.J.; Gray, J.C.; McQuiston, Andrew D. [Medical University of South Carolina, Heart and Vascular Center, Charleston, SC (United States); Lo, Gladys G. [Hong Kong Sanatorium and Hospital, Department of Diagnostic and Interventional Radiology, Hong Kong (China); Wang, Yining [Medical University of South Carolina, Heart and Vascular Center, Charleston, SC (United States); Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Department of Radiology, Beijing (China); Blanke, Philipp [Medical University of South Carolina, Heart and Vascular Center, Charleston, SC (United States); University Hospital Freiburg, Department of Diagnostic Radiology, Freiburg (Germany); Geyer, Lucas L. [Medical University of South Carolina, Heart and Vascular Center, Charleston, SC (United States); University of Munich, Institute of Clinical Radiology, Munich (Germany); Cho, Young Jun [Medical University of South Carolina, Heart and Vascular Center, Charleston, SC (United States); Konyang University College of Medicine, Department of Radiology, Daejeon (Korea, Republic of); Scheuering, Michael; Canstein, Christian [Siemens Healthcare, CT Division, Forchheim (Germany); Hoffmann, Ellen [Heart Center Munich-Bogenhausen, Department of Cardiology and Intensive Care Medicine, Munich (Germany)

    2014-01-15

    To evaluate the performance of three-dimensional semi-automated evaluation software for the assessment of myocardial blood flow (MBF) and blood volume (MBV) at dynamic myocardial perfusion computed tomography (CT). Volume-based software relying on marginal space learning and probabilistic boosting tree-based contour fitting was applied to CT myocardial perfusion imaging data of 37 subjects. In addition, all image data were analysed manually and both approaches were compared with SPECT findings. Study endpoints included time of analysis and conventional measures of diagnostic accuracy. Of 592 analysable segments, 42 showed perfusion defects on SPECT. Average analysis times for the manual and software-based approaches were 49.1 ± 11.2 and 16.5 ± 3.7 min respectively (P < 0.01). There was strong agreement between the two measures of interest (MBF, ICC = 0.91, and MBV, ICC = 0.88, both P < 0.01), and no significant difference in diagnostic accuracy between the manual and software-based approaches for either MBF or MBV (all comparisons P > 0.05). Three-dimensional semi-automated evaluation of dynamic myocardial perfusion CT data provides similar measures and diagnostic accuracy to manual evaluation, albeit with substantially reduced analysis times. This capability may aid the integration of this test into clinical workflows. (orig.)

  10. Fungible dynamics: There are only two types of entangling multiple-qubit interactions

    International Nuclear Information System (INIS)

    Bremner, Michael J.; Dodd, Jennifer L.; Nielsen, Michael A.; Bacon, Dave

    2004-01-01

    What interactions are sufficient to simulate arbitrary quantum dynamics in a composite quantum system? It has been shown that all two-body Hamiltonian evolutions can be simulated using any fixed two-body entangling n-qubit Hamiltonian and fast local unitaries. By entangling we mean that every qubit is coupled to every other qubit, if not directly, then indirectly via intermediate qubits. We extend this study to the case where interactions may involve more than two qubits at a time. We find necessary and sufficient conditions for an arbitrary n-qubit Hamiltonian to be dynamically universal, that is, able to simulate any other Hamiltonian acting on n qubits, possibly in an inefficient manner. We prove that an entangling Hamiltonian is dynamically universal if and only if it contains at least one coupling term involving an even number of interacting qubits. For odd entangling Hamiltonians, i.e., Hamiltonians with couplings that involve only an odd number of qubits, we prove that dynamic universality is possible on an encoded set of n-1 logical qubits. We further prove that an odd entangling Hamiltonian can simulate any other odd Hamiltonian and classify the algebras that such Hamiltonians generate. Thus, our results show that up to local unitary operations, there are only two fundamentally different types of entangling Hamiltonian on n qubits. We also demonstrate that, provided the number of qubits directly coupled by the Hamiltonian is bounded above by a constant, our techniques can be made efficient

  11. Sensors and Automated Analyzers for Radionuclides

    International Nuclear Information System (INIS)

    Grate, Jay W.; Egorov, Oleg B.

    2003-01-01

    The production of nuclear weapons materials has generated large quantities of nuclear waste and significant environmental contamination. We have developed new, rapid, automated methods for determination of radionuclides using sequential injection methodologies to automate extraction chromatographic separations, with on-line flow-through scintillation counting for real time detection. This work has progressed in two main areas: radionuclide sensors for water monitoring and automated radiochemical analyzers for monitoring nuclear waste processing operations. Radionuclide sensors have been developed that collect and concentrate radionuclides in preconcentrating minicolumns with dual functionality: chemical selectivity for radionuclide capture and scintillation for signal output. These sensors can detect pertechnetate to below regulatory levels and have been engineered into a prototype for field testing. A fully automated process monitor has been developed for total technetium in nuclear waste streams. This instrument performs sample acidification, speciation adjustment, separation and detection in fifteen minutes or less

  12. System for Automated Calibration of Vector Modulators

    Science.gov (United States)

    Lux, James; Boas, Amy; Li, Samuel

    2009-01-01

    Vector modulators are used to impose baseband modulation on RF signals, but non-ideal behavior limits the overall performance. The non-ideal behavior of the vector modulator is compensated using data collected with an automated test system driven by a LabVIEW program that systematically applies thousands of control-signal values to the device under test and collects RF measurement data. The technology innovation automates several steps in the process. First, an automated test system, using computer-controlled digital-to-analog converters (DACs) and a computer-controlled vector network analyzer (VNA), can systematically apply different I and Q signals (which represent the complex number by which the RF signal is multiplied) to the vector modulator under test (VMUT), while measuring the RF performance, specifically gain and phase. The automated test system uses LabVIEW software to control the test equipment, collect the data, and write it to a file. The input to the LabVIEW program is either user input for systematic variation, or a file containing specific test values that should be fed to the VMUT. The output file contains both the control signals and the measured data. The second step is to post-process the file to determine the correction functions as needed. The result of the entire process is a tabular representation, which allows translation of a desired I/Q value to the required analog control signals to produce a particular RF behavior. In some applications, corrected performance is needed only for a limited range. If the vector modulator is being used as a phase shifter, there is only a need to correct I and Q values that represent points on a circle, not the entire plane. This innovation has been used to calibrate 2-GHz MMIC (monolithic microwave integrated circuit) vector modulators in the High EIRP Cluster Array project (EIRP is high effective isotropic radiated power). These calibrations were then used to create
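
    A hedged sketch of the post-processing step described above: given a (here synthetic) table of measured complex gain versus I/Q DAC codes, build an inverse lookup that returns the control pair whose measured response is closest to a requested complex gain. The grid, the device model, and the nearest-neighbour search are illustrative assumptions standing in for the LabVIEW/VNA tooling.

```python
import numpy as np

# Synthetic stand-in for the measured calibration table: complex RF gain
# observed for each (I, Q) DAC code pair (in a real run this comes from the VNA).
i_codes = np.linspace(-1.0, 1.0, 101)
q_codes = np.linspace(-1.0, 1.0, 101)
I, Q = np.meshgrid(i_codes, q_codes, indexing="ij")
measured_gain = (0.9 * I + 0.05 * Q**2) + 1j * (0.95 * Q - 0.03 * I**2)  # non-ideal device

def control_for(target_gain):
    """Return the (I, Q) DAC codes whose measured response is closest to target_gain."""
    err = np.abs(measured_gain - target_gain)
    ii, qq = np.unravel_index(np.argmin(err), err.shape)
    return i_codes[ii], q_codes[qq]

# Phase-shifter use case: only points on a constant-amplitude circle are needed
table = {phi: control_for(0.5 * np.exp(1j * phi))
         for phi in np.linspace(0, 2 * np.pi, 64, endpoint=False)}
```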

  13. Automated Dynamic Demand Response Implementation on a Micro-grid

    Energy Technology Data Exchange (ETDEWEB)

    Kuppannagari, Sanmukh R.; Kannan, Rajgopal; Chelmis, Charalampos; Prasanna, Viktor K.

    2016-11-16

    In this paper, we describe a system for real-time automated Dynamic and Sustainable Demand Response with sparse data consumption prediction implemented on the University of Southern California campus microgrid. Supply side approaches to resolving energy supply-load imbalance do not work at high levels of renewable energy penetration. Dynamic Demand Response (D2R) is a widely used demand-side technique to dynamically adjust electricity consumption during peak load periods. Our D2R system consists of accurate machine learning based energy consumption forecasting models that work with sparse data coupled with fast and sustainable load curtailment optimization algorithms that provide the ability to dynamically adapt to changing supply-load imbalances in near real-time. Our Sustainable DR (SDR) algorithms attempt to distribute customer curtailment evenly across sub-intervals during a DR event and avoid expensive demand peaks during a few sub-intervals. It also ensures that each customer is penalized fairly in order to achieve the targeted curtailment. We develop near linear-time constant-factor approximation algorithms along with Polynomial Time Approximation Schemes (PTAS) for SDR curtailment that minimizes the curtailment error defined as the difference between the target and achieved curtailment values. Our SDR curtailment problem is formulated as an Integer Linear Program that optimally matches customers to curtailment strategies during a DR event while also explicitly accounting for customer strategy switching overhead as a constraint. We demonstrate the results of our D2R system using real data from experiments performed on the USC smartgrid and show that 1) our prediction algorithms can very accurately predict energy consumption even with noisy or missing data and 2) our curtailment algorithms deliver DR with extremely low curtailment errors in the 0.01-0.05 kWh range.
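
    As a toy illustration of the curtailment matching described above, the sketch below formulates a single sub-interval of the customer-to-strategy assignment as an integer linear program with the PuLP modelling library, minimizing the absolute curtailment error. The customer data, the target, and the omission of the strategy-switching-overhead constraint are simplifications, not the authors' formulation.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Illustrative data: curtailment (kWh) each customer achieves under each strategy
curtail = {
    ("c1", "s1"): 0.8, ("c1", "s2"): 1.5,
    ("c2", "s1"): 0.6, ("c2", "s2"): 1.1,
    ("c3", "s1"): 0.9, ("c3", "s2"): 1.7,
}
customers = ["c1", "c2", "c3"]
strategies = ["s1", "s2"]
target = 3.0                      # kWh to curtail in this sub-interval

prob = LpProblem("sustainable_demand_response", LpMinimize)
x = {(c, s): LpVariable(f"x_{c}_{s}", cat=LpBinary) for c in customers for s in strategies}
over = LpVariable("over", lowBound=0)    # achieved - target, if positive
under = LpVariable("under", lowBound=0)  # target - achieved, if positive

# Each customer is assigned at most one curtailment strategy
for c in customers:
    prob += lpSum(x[c, s] for s in strategies) <= 1

# Curtailment error |achieved - target| expressed with two slack variables
achieved = lpSum(curtail[c, s] * x[c, s] for c in customers for s in strategies)
prob += achieved - target <= over
prob += target - achieved <= under
prob += over + under                     # objective: minimize curtailment error

prob.solve()
chosen = [(c, s) for (c, s), v in x.items() if v.value() == 1]
```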

  14. Transiently chaotic neural networks with piecewise linear output functions

    Energy Technology Data Exchange (ETDEWEB)

    Chen, S.-S. [Department of Mathematics, National Taiwan Normal University, Taipei, Taiwan (China); Shih, C.-W. [Department of Applied Mathematics, National Chiao Tung University, 1001 Ta-Hsueh Road, Hsinchu, Taiwan (China)], E-mail: cwshih@math.nctu.edu.tw

    2009-01-30

    Admitting both a transient chaotic phase and a convergent phase, the transiently chaotic neural network (TCNN) provides superior performance to classical networks in solving combinatorial optimization problems. We derive concrete parameter conditions for these two essential dynamic phases of the TCNN with a piecewise linear output function. The confirmation of chaotic dynamics of the system results from a successful application of the Marotto theorem, which was recently clarified. Numerical simulation of the TCNN with a piecewise linear output function is carried out to find the optimal solution of a travelling salesman problem. It is demonstrated that the performance is even better than that of the previous TCNN model with a logistic output function.

  15. A programmable Gaussian random pulse generator for automated performance measurements

    International Nuclear Information System (INIS)

    Abdel-Aal, R.E.

    1989-01-01

    This paper describes a versatile random signal generator which produces logic pulses with a Gaussian distribution for the pulse spacing. The average rate at the pulse generator output can be software-programmed, which makes it useful in performing automated measurements of dead time and CPU time performance of data acquisition systems and modules over a wide range of data rates. Hardware and software components are described and data on the input-output characteristics and the statistical properties of the pulse generator are given. Typical applications are discussed together with advantages over using radioactive test sources. Results obtained from an automated performance run on a VAX 11/785 data acquisition system are presented. (orig.)
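
    A small simulation of the pulse train such a generator produces, assuming the pulse spacings are drawn from a Gaussian truncated at zero and that the mean rate is the software-programmable parameter; the hardware described in the paper is not modelled.

```python
import numpy as np

def gaussian_spaced_pulses(mean_rate_hz, rel_sigma=0.3, n_pulses=10000, seed=0):
    """Return pulse arrival times whose spacings are Gaussian-distributed.

    mean_rate_hz : programmed average output rate
    rel_sigma    : standard deviation of the spacing as a fraction of the mean
    Negative spacings (possible for large rel_sigma) are clipped to zero.
    """
    rng = np.random.default_rng(seed)
    mean_dt = 1.0 / mean_rate_hz
    dts = rng.normal(mean_dt, rel_sigma * mean_dt, n_pulses)
    dts = np.clip(dts, 0.0, None)
    return np.cumsum(dts)

# Example: drive a simulated DAQ system at 10 kHz and check the realized rate
times = gaussian_spaced_pulses(mean_rate_hz=10_000)
realized_rate = len(times) / times[-1]
```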

  16. Output controllability of nonlinear systems with bounded control

    International Nuclear Information System (INIS)

    Garcia, Rafael; D'Attellis, Carlos

    1990-01-01

    The control problem treated in this paper is the output controllability of a nonlinear system of the form ẋ = f(x) + g(x)u(t); y = h(x), using bounded controls. The approach to the problem consists of a modification of the system using dynamic feedback in such a way that the input/output behaviour of the closed loop matches the input/output behaviour of a completely output-controllable system with bounded controls. Sufficient conditions are also put forward on the system so that a compact set in the output space may be reached in finite time using uniformly bounded controls, and a result on output regulation in finite time with asymptotic state stabilization is obtained. (Author)

  17. Online updating and uncertainty quantification using nonstationary output-only measurement

    Science.gov (United States)

    Yuen, Ka-Veng; Kuok, Sin-Chi

    2016-01-01

    Extended Kalman filter (EKF) is widely adopted for state estimation and parametric identification of dynamical systems. In this algorithm, it is required to specify the covariance matrices of the process noise and measurement noise based on prior knowledge. However, improper assignment of these noise covariance matrices leads to unreliable estimation and misleading uncertainty estimation on the system state and model parameters. Furthermore, it may induce diverging estimation. To resolve these problems, we propose a Bayesian probabilistic algorithm for online estimation of the noise parameters which are used to characterize the noise covariance matrices. There are three major appealing features of the proposed approach. First, it resolves the divergence problem in the conventional usage of EKF due to improper choice of the noise covariance matrices. Second, the proposed approach ensures the reliability of the uncertainty quantification. Finally, since the noise parameters are allowed to be time-varying, nonstationary process noise and/or measurement noise are explicitly taken into account. Examples using stationary/nonstationary response of linear/nonlinear time-varying dynamical systems are presented to demonstrate the efficacy of the proposed approach. Furthermore, comparison with the conventional usage of EKF will be provided to reveal the necessity of the proposed approach for reliable model updating and uncertainty quantification.
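
    For context, the sketch below shows a bare-bones EKF predict/update cycle with fixed noise covariances Q and R. The paper's contribution, online Bayesian estimation of the parameters that characterize Q and R, would wrap around a recursion like this and is not reproduced here.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One extended Kalman filter cycle.

    x, P : prior state estimate and covariance
    z    : new measurement
    f, h : state-transition and measurement functions
    F, H : their Jacobians evaluated at the current estimate
    Q, R : process- and measurement-noise covariances (fixed here; the
           cited work updates the parameters of Q and R online)
    """
    # Predict
    x_pred = f(x)
    F_k = F(x)
    P_pred = F_k @ P @ F_k.T + Q
    # Update
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R
    K = P_pred @ H_k.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new

# Example: scalar random walk observed directly
f = lambda x: x
h = lambda x: x
F = lambda x: np.eye(1)
H = lambda x: np.eye(1)
x, P = np.zeros(1), np.eye(1)
for z in [0.1, 0.3, 0.2]:
    x, P = ekf_step(x, P, np.atleast_1d(z), f, F, h, H,
                    Q=0.01 * np.eye(1), R=0.1 * np.eye(1))
```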

  18. Automated processing of first-pass radionuclide angiocardiography by factor analysis of dynamic structures

    Energy Technology Data Exchange (ETDEWEB)

    Cavailloles, F.; Valette, H.; Hebert, J.-L.; Bazin, J.-P.; Di Paola, R.; Capderou, A.

    1987-05-01

    A method for the automatic processing of cardiac first-pass radionuclide studies is presented. This technique, factor analysis of dynamic structures (FADS), provides an automatic separation of anatomical structures according to their different temporal behaviour, even if they are superimposed. FADS has been applied to 76 studies. A description of the factor patterns obtained in various pathological categories is presented. FADS provides easy diagnosis of shunts and tricuspid insufficiency. Quantitative information derived from the factors (cardiac output and mean transit time) was compared to that obtained by the region-of-interest method. Using FADS, a higher correlation with cardiac catheterization was found for cardiac output calculation. Thus, compared to the ROI method, FADS presents obvious advantages: a good separation of overlapping cardiac chambers is obtained; this operator-independent method provides more objective and reproducible results.

  19. Automated processing of first-pass radionuclide angiocardiography by factor analysis of dynamic structures

    International Nuclear Information System (INIS)

    Cavailloles, F.; Valette, H.; Hebert, J.-L.; Bazin, J.-P.; Di Paola, R.; Capderou, A.

    1987-01-01

    A method for the automatic processing of cardiac first-pass radionuclide studies is presented. This technique, factor analysis of dynamic structures (FADS), provides an automatic separation of anatomical structures according to their different temporal behaviour, even if they are superimposed. FADS has been applied to 76 studies. A description of the factor patterns obtained in various pathological categories is presented. FADS provides easy diagnosis of shunts and tricuspid insufficiency. Quantitative information derived from the factors (cardiac output and mean transit time) was compared to that obtained by the region-of-interest method. Using FADS, a higher correlation with cardiac catheterization was found for cardiac output calculation. Thus, compared to the ROI method, FADS presents obvious advantages: a good separation of overlapping cardiac chambers is obtained; this operator-independent method provides more objective and reproducible results. (author)

  20. Utilization of INIS output in Czechoslovakia

    International Nuclear Information System (INIS)

    Stanik, Z.; Blazek, J.

    1978-01-01

    Information is given on INIS output materials: the INIS magnetic tape, INIS Atomindex, and full texts of non-conventional literature on microfiches. A comprehensive INIS-SDI service is provided for the CSSR by the Nuclear Information Centre. The Unified Software System (USS) of the UVTEI-UTZ (the Central Technical Base of the Central Office for Scientific, Technical and Economic Information) is used for the automated processing of INIS magnetic tapes. A survey of INIS-SDI services in the years 1974 to 1978 is given. The further development of the system consists in the use of a terminal network with direct access to the IAEA computer in Vienna. (author)

  1. Component-based modeling of systems for automated fault tree generation

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2009-01-01

    One of the challenges in the field of automated fault tree construction is to find an efficient modeling approach that can support modeling of different types of systems without ignoring any necessary details. In this paper, we present a new system modeling approach for computer-aided fault tree generation. In this method, every system model is composed of components and different types of flows propagating through them. Each component has a function table that describes its input-output relations. For components having different operational states, there is also a state transition table. Each component can communicate with other components in the system only through its inputs and outputs. A trace-back algorithm is proposed that can be applied to the system model to generate the required fault trees. The system modeling approach and the fault tree construction algorithm are applied to a fire sprinkler system and the results are presented
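
    The sketch below illustrates (with assumed, simplified structures, not the authors' implementation) how a component model with input-output relations can support a recursive trace-back: the top event "no output from a component" is expanded into that component's own failure modes OR the loss of any of its inputs, traced upstream through the flow network.

```python
# Illustrative component model for fault tree generation (assumed structure):
# each component states how loss of its output can arise either from its own
# failure modes or from loss of its inputs. Assumes an acyclic flow network.

class Component:
    def __init__(self, name, inputs, failure_modes):
        self.name = name                    # e.g. "pump"
        self.inputs = inputs                # upstream component names
        self.failure_modes = failure_modes  # internal faults that kill the output

def trace_back(comp_name, components):
    """Return a nested OR structure for the event 'no output from comp_name'."""
    comp = components[comp_name]
    causes = [("BASIC", f"{comp.name}: {fm}") for fm in comp.failure_modes]
    for upstream in comp.inputs:
        causes.append(("OR", trace_back(upstream, components)))
    return {"event": f"no output from {comp.name}", "gate": "OR", "causes": causes}

# Toy sprinkler-like example: tank -> pump -> nozzle
components = {
    "tank":   Component("tank", [], ["empty", "outlet valve stuck closed"]),
    "pump":   Component("pump", ["tank"], ["motor failure", "impeller jammed"]),
    "nozzle": Component("nozzle", ["pump"], ["blocked"]),
}
fault_tree = trace_back("nozzle", components)
```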

  2. Automation bias: a systematic review of frequency, effect mediators, and mitigators.

    Science.gov (United States)

    Goddard, Kate; Roudsari, Abdul; Wyatt, Jeremy C

    2012-01-01

    Automation bias (AB)--the tendency to over-rely on automation--has been studied in various academic fields. Clinical decision support systems (CDSS) aim to benefit the clinical decision-making process. Although most research shows overall improved performance with use, there is often a failure to recognize the new errors that CDSS can introduce. With a focus on healthcare, a systematic review of the literature from a variety of research fields has been carried out, assessing the frequency and severity of AB, the effect mediators, and interventions potentially mitigating this effect. This is discussed alongside automation-induced complacency, or insufficient monitoring of automation output. A mix of subject-specific and free-text terms around the themes of automation, human-automation interaction, and task performance and error were used to search article databases. Of 13 821 retrieved papers, 74 met the inclusion criteria. User factors such as cognitive style, decision support systems (DSS), and task-specific experience mediated AB, as did attitudinal driving factors such as trust and confidence. Environmental mediators included workload, task complexity, and time constraint, which put pressure on cognitive resources. Mitigators of AB included implementation factors such as training and emphasizing user accountability, and DSS design factors such as the position of advice on the screen, updated confidence levels attached to DSS output, and the provision of information versus recommendation. By uncovering the mechanisms by which AB operates, this review aims to help optimize the clinical decision-making process for CDSS developers and healthcare practitioners.

  3. Investigation, development, and application of optimal output feedback theory. Volume 3: The relationship between dynamic compensators and observers and Kalman filters

    Science.gov (United States)

    Broussard, John R.

    1987-01-01

    Relationships between observers, Kalman filters and dynamic compensators using feedforward control theory are investigated. In particular, the relationship, if any, between the dynamic compensator state and linear functions of a discrete plant state is investigated. It is shown that, in steady state, a dynamic compensator driven by the plant output can be expressed as the sum of two terms. The first term is a linear combination of the plant state. The second term depends on plant and measurement noise, and the plant control. Thus, the state of the dynamic compensator can be expressed as an estimator of the first term with additive error given by the second term. Conditions under which a dynamic compensator is a Kalman filter are presented, and reduced-order optimal estimators are investigated.

  4. Improved protein hydrogen/deuterium exchange mass spectrometry platform with fully automated data processing.

    Science.gov (United States)

    Zhang, Zhongqi; Zhang, Aming; Xiao, Gang

    2012-06-05

    Protein hydrogen/deuterium exchange (HDX) followed by protease digestion and mass spectrometric (MS) analysis is accepted as a standard method for studying protein conformation and conformational dynamics. In this article, an improved HDX MS platform with fully automated data processing is described. The platform significantly reduces systematic and random errors in the measurement by introducing two types of corrections in HDX data analysis. First, a mixture of short peptides with fast HDX rates is introduced as internal standards to adjust the variations in the extent of back exchange from run to run. Second, a designed unique peptide (PPPI) with slow intrinsic HDX rate is employed as another internal standard to reflect the possible differences in protein intrinsic HDX rates when protein conformations at different solution conditions are compared. HDX data processing is achieved with a comprehensive HDX model to simulate the deuterium labeling and back exchange process. The HDX model is implemented into the in-house developed software MassAnalyzer and enables fully unattended analysis of the entire protein HDX MS data set starting from ion detection and peptide identification to final processed HDX output, typically within 1 day. The final output of the automated data processing is a set (or the average) of the most possible protection factors for each backbone amide hydrogen. The utility of the HDX MS platform is demonstrated by exploring the conformational transition of a monoclonal antibody by increasing concentrations of guanidine.

  5. Input-Output Economics : Theory and Applications - Featuring Asian Economies

    NARCIS (Netherlands)

    Ten Raa, T.

    2009-01-01

    Thijs ten Raa, author of the acclaimed text The Economics of Input–Output Analysis, now takes the reader to the forefront of the field. This volume collects and unifies his and his co-authors' research papers on national accounting, Input–Output coefficients, economic theory, dynamic models,

  6. Nonlinear control of marine vehicles using only position and attitude measurements

    Energy Technology Data Exchange (ETDEWEB)

    Paulsen, Marit Johanne

    1996-12-31

    This thesis presents new results on the design and analysis of nonlinear output feedback controllers for auto pilots and dynamic positioning systems for ships and underwater vehicles. Only position and attitude measurements of the vehicle are used in the control design. The underlying idea of the work is to use certain structural properties of the equations of motion in the controller design and analysis. New controllers for regulation and tracking have been developed and the stability of the resulting closed-loop systems has been rigorously established. The results are supported by simulations. The following problems have been investigated covering design of passive controller for regulation, comparison of two auto pilots, nonlinear damping compensation for tracking, tracking control for nonlinear ships, and output tracking control with wave filtering for multivariable models of possibly unstable vehicles. 97 refs., 32 figs.

  7. Automated 3D-Printed Unibody Immunoarray for Chemiluminescence Detection of Cancer Biomarker Proteins

    Science.gov (United States)

    Tang, C. K.; Vaze, A.; Rusling, J. F.

    2017-01-01

    A low cost three-dimensional (3D) printed clear plastic microfluidic device was fabricated for fast, low cost automated protein detection. The unibody device features three reagent reservoirs, an efficient 3D network for passive mixing, and an optically transparent detection chamber housing a glass capture antibody array for measuring chemiluminescence output with a CCD camera. Sandwich type assays were built onto the glass arrays using a multi-labeled detection antibody-polyHRP (HRP = horseradish peroxidase). Total assay time was ~30 min in a complete automated assay employing a programmable syringe pump so that the protocol required minimal operator intervention. The device was used for multiplexed detection of prostate cancer biomarker proteins prostate specific antigen (PSA) and platelet factor 4 (PF-4). Detection limits of 0.5 pg mL−1 were achieved for these proteins in diluted serum with log dynamic ranges of four orders of magnitude. Good accuracy vs ELISA was validated by analyzing human serum samples. This prototype device holds good promise for further development as a point-of-care cancer diagnostics tool. PMID:28067370

  8. Determination of moving load characteristics by output-only identification over the Pescara beams

    Science.gov (United States)

    Bellino, A.; Garibaldi, L.; Marchesiello, S.

    2011-07-01

    The determination of the characteristics of moving loads over bridges and beams is a topic that only recently has gained the interest of the researchers. In real applications, in fact, as for the case of bridges, it is not always possible to know the load and speed of the trains which are travelling over the bridge. Moreover, in real applications the systems analyzed cannot always be considered linear. Because of these difficulties, the present paper proposes firstly a technique for the identification of the nonlinearity, secondly a procedure to subtract its effect on the modal parameters and finally a method based on them to extract the information on the mass and the speed of the moving load crossing a beam. For this study, some reinforced concrete beams have been tested in the framework of a vast project titled "Monitoring and diagnostics of railway bridges by means of the analysis of the dynamic response due to train crossing", financed by the Italian Ministry of Research. These beams show a clear softening nonlinear behaviour during the crossing of a moving carriage. The method is able to detect the load characteristics after having eliminated the nonlinear influence.

  9. Determination of moving load characteristics by output-only identification over the Pescara beams

    International Nuclear Information System (INIS)

    Bellino, A; Garibaldi, L; Marchesiello, S

    2011-01-01

    The determination of the characteristics of moving loads over bridges and beams is a topic that only recently has gained the interest of the researchers. In real applications, in fact, as for the case of bridges, it is not always possible to know the load and speed of the trains which are travelling over the bridge. Moreover, in real applications the systems analyzed cannot always be considered linear. Because of these difficulties, the present paper proposes firstly a technique for the identification of the nonlinearity, secondly a procedure to subtract its effect on the modal parameters and finally a method based on them to extract the information on the mass and the speed of the moving load crossing a beam. For this study, some reinforced concrete beams have been tested in the framework of a vast project titled "Monitoring and diagnostics of railway bridges by means of the analysis of the dynamic response due to train crossing", financed by the Italian Ministry of Research. These beams show a clear softening nonlinear behaviour during the crossing of a moving carriage. The method is able to detect the load characteristics after having eliminated the nonlinear influence.

  10. Automated irrigation systems for wheat and tomato crops in arid ...

    African Journals Online (AJOL)

    2017-04-02

    Many methods have been described and sensors developed to manage irrigation ... time, and automated irrigation systems based on crop water needs can ... output components, and a software program for decision support.

  11. An automated procedure for covariation-based detection of RNA structure

    International Nuclear Information System (INIS)

    Winker, S.; Overbeek, R.; Woese, C.R.; Olsen, G.J.; Pfluger, N.

    1989-12-01

    This paper summarizes our investigations into the computational detection of secondary and tertiary structure of ribosomal RNA. We have developed a new automated procedure that not only identifies potential bondings of secondary and tertiary structure, but also provides the covariation evidence that supports the proposed bondings, and any counter-evidence that can be detected in the known sequences. A small number of previously unknown bondings have been detected in individual RNA molecules (16S rRNA and 7S RNA) through the use of our automated procedure. Currently, we are systematically studying mitochondrial rRNA. Our goal is to detect tertiary structure within 16S rRNA and quaternary structure between 16S and 23S rRNA. Our ultimate hope is that automated covariation analysis will contribute significantly to a refined picture of ribosome structure. Our colleagues in biology have begun experiments to test certain hypotheses suggested by an examination of our program's output. These experiments involve sequencing key portions of the 23S ribosomal RNA for species in which the known 16S ribosomal RNA exhibits variation (from the dominant pattern) at the site of a proposed bonding. The hope is that the 23S ribosomal RNA of these species will exhibit corresponding complementary variation or generalized covariation. 24 refs

  12. An automated procedure for covariation-based detection of RNA structure

    Energy Technology Data Exchange (ETDEWEB)

    Winker, S.; Overbeek, R.; Woese, C.R.; Olsen, G.J.; Pfluger, N.

    1989-12-01

    This paper summarizes our investigations into the computational detection of secondary and tertiary structure of ribosomal RNA. We have developed a new automated procedure that not only identifies potential bondings of secondary and tertiary structure, but also provides the covariation evidence that supports the proposed bondings, and any counter-evidence that can be detected in the known sequences. A small number of previously unknown bondings have been detected in individual RNA molecules (16S rRNA and 7S RNA) through the use of our automated procedure. Currently, we are systematically studying mitochondrial rRNA. Our goal is to detect tertiary structure within 16S rRNA and quaternary structure between 16S and 23S rRNA. Our ultimate hope is that automated covariation analysis will contribute significantly to a refined picture of ribosome structure. Our colleagues in biology have begun experiments to test certain hypotheses suggested by an examination of our program's output. These experiments involve sequencing key portions of the 23S ribosomal RNA for species in which the known 16S ribosomal RNA exhibits variation (from the dominant pattern) at the site of a proposed bonding. The hope is that the 23S ribosomal RNA of these species will exhibit corresponding complementary variation or generalized covariation. 24 refs.

  13. Power output of field-based downhill mountain biking.

    Science.gov (United States)

    Hurst, Howard Thomas; Atkins, Stephen

    2006-10-01

    The purpose of this study was to assess the power output of field-based downhill mountain biking. Seventeen trained male downhill cyclists (age 27.1 +/- 5.1 years) competing nationally performed two timed runs of a measured downhill course. An SRM powermeter was used to simultaneously record power, cadence, and speed. Values were sampled at 1-s intervals. Heart rates were recorded at 5-s intervals using a Polar S710 heart rate monitor. Peak and mean power output were 834 +/- 129 W and 75 +/- 26 W respectively. Mean power accounted for only 9% of peak values. Paradoxically, mean heart rate was 168 +/- 9 beats x min(-1) (89% of age-predicted maximum heart rate). Mean cadence (27 +/- 5 rev x min(-1)) was significantly related to speed (r = 0.51; P biking. The poor relationships between power and run time and between cadence and run time suggest they are not essential pre-requisites to downhill mountain biking performance and indicate the importance of riding dynamics to overall performance.

  14. Automated lung nodule classification following automated nodule detection on CT: A serial approach

    International Nuclear Information System (INIS)

    Armato, Samuel G. III; Altman, Michael B.; Wilkie, Joel; Sone, Shusuke; Li, Feng; Doi, Kunio; Roy, Arunabha S.

    2003-01-01

    We have evaluated the performance of an automated classifier applied to the task of differentiating malignant and benign lung nodules in low-dose helical computed tomography (CT) scans acquired as part of a lung cancer screening program. The nodules classified in this manner were initially identified by our automated lung nodule detection method, so that the output of automated lung nodule detection was used as input to automated lung nodule classification. This study begins to narrow the distinction between the 'detection task' and the 'classification task'. Automated lung nodule detection is based on two- and three-dimensional analyses of the CT image data. Gray-level-thresholding techniques are used to identify initial lung nodule candidates, for which morphological and gray-level features are computed. A rule-based approach is applied to reduce the number of nodule candidates that correspond to non-nodules, and the features of remaining candidates are merged through linear discriminant analysis to obtain final detection results. Automated lung nodule classification merges the features of the lung nodule candidates identified by the detection algorithm that correspond to actual nodules through another linear discriminant classifier to distinguish between malignant and benign nodules. The automated classification method was applied to the computerized detection results obtained from a database of 393 low-dose thoracic CT scans containing 470 confirmed lung nodules (69 malignant and 401 benign nodules). Receiver operating characteristic (ROC) analysis was used to evaluate the ability of the classifier to differentiate between nodule candidates that correspond to malignant nodules and nodule candidates that correspond to benign lesions. The area under the ROC curve for this classification task attained a value of 0.79 during a leave-one-out evaluation
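
    As a generic illustration of the final classification step described above (a linear discriminant applied to candidate features, evaluated leave-one-out), the following sketch uses synthetic features and scikit-learn; it is not the authors' classifier, feature set, or data:

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        # Synthetic morphological / gray-level features for detected nodule candidates
        X_benign = rng.normal(loc=0.0, scale=1.0, size=(400, 5))
        X_malign = rng.normal(loc=0.8, scale=1.0, size=(70, 5))
        X = np.vstack([X_benign, X_malign])
        y = np.array([0] * 400 + [1] * 70)   # 0 = benign, 1 = malignant

        # Leave-one-out evaluation, mirroring the study design
        scores = np.empty(len(y), dtype=float)
        for i in range(len(y)):
            mask = np.ones(len(y), dtype=bool)
            mask[i] = False
            lda = LinearDiscriminantAnalysis().fit(X[mask], y[mask])
            scores[i] = lda.decision_function(X[i:i + 1])[0]

        print("Area under the ROC curve:", roc_auc_score(y, scores))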

  15. Driver Psychology during Automated Platooning

    NARCIS (Netherlands)

    Heikoop, D.D.

    2017-01-01

    With the rapid increase in vehicle automation technology, the call for understanding how humans behave while driving in an automated vehicle becomes more urgent. Vehicles that have automated systems such as Lane Keeping Assist (LKA) or Adaptive Cruise Control (ACC) not only support drivers in their

  16. Omega-3 fatty acid supplementation enhances stroke volume and cardiac output during dynamic exercise.

    Science.gov (United States)

    Walser, Buddy; Stebbins, Charles L

    2008-10-01

    Docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA) have beneficial effects on cardiovascular function. We tested the hypotheses that dietary supplementation with DHA (2 g/day) + EPA (3 g/day) enhances increases in stroke volume (SV) and cardiac output (CO) and decreases in systemic vascular resistance (SVR) during dynamic exercise. Healthy subjects received DHA + EPA (eight men, four women) or safflower oil (six men, three women) for 6 weeks. Both groups performed 20 min of bicycle exercise (10 min each at a low and moderate work intensity) before and after DHA + EPA or safflower oil treatment. Mean arterial pressure (MAP), heart rate (HR), SV, CO, and SVR were assessed before exercise and during both workloads. HR was unaffected by DHA + EPA and MAP was reduced, but only at rest (88 +/- 5 vs. 83 +/- 4 mm Hg). DHA + EPA augmented increases in SV (14.1 +/- 6.3 vs. 32.3 +/- 8.7 ml) and CO (8.5 +/- 1.0 vs. 10.3 +/- 1.2 L/min) and tended to attenuate decreases in SVR (-7.0 +/- 0.6 vs. -10.1 +/- 1.6 mm Hg L(-1) min(-1)) during the moderate workload. Safflower oil treatment had no effects on MAP, HR, SV, CO or SVR at rest or during exercise. DHA + EPA-induced increases in SV and CO imply that dietary supplementation with these fatty acids can increase oxygen delivery during exercise, which may have beneficial clinical implications for individuals with cardiovascular disease and reduced exercise tolerance.

  17. Distributed control design for nonlinear output agreement in convergent systems

    NARCIS (Netherlands)

    Weitenberg, Erik; De Persis, Claudio

    2015-01-01

    This work studies the problem of output agreement in homogeneous networks of nonlinear dynamical systems under time-varying disturbances using controllers placed at the nodes of the networks. For the class of contractive systems, necessary and sufficient conditions for output agreement are derived,

  18. Automation of multi-agent control for complex dynamic systems in heterogeneous computational network

    Science.gov (United States)

    Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan

    2017-01-01

    The rapid progress of high-performance computing entails new challenges related to solving large scientific problems for various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay particular attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue, and control systems that operate in networks are especially affected by it. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automation of the problem solving. Advantages of the proposed approach are demonstrated on the example of parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automation of the multi-agent control of the systems in a parallel mode with various degrees of detail in its elaboration.

  19. TECHNICAL TRAINING: Programmation automate Schneider TSX Premium 2ème niveau - French version only

    CERN Multimedia

    2003-01-01

    If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval tel. 74924 technical.training@cern.ch CERN Technical Training 2003: Learning for the LHC! Announcement of a new training course - advanced level. As part of the 2003 Technical Training programme, and in collaboration with GUAPI (Groupe des Utilisateurs d'Automates Programmables Industriels du CERN, the CERN group of users of industrial programmable logic controllers), a new course has been organised to deepen knowledge of the PL7 tool for programming Schneider PREMIUM PLCs. This new course (level 2) is aimed at ...

  20. ENSEIGNEMENT TECHNIQUE : Programmation automate Schneider TSX Premium 2ème niveau - French version only

    CERN Multimedia

    2003-01-01

    If you wish to participate in one of the following courses, please discuss with your supervisor and apply electronically directly from the course description pages that can be found on the Web at: http://www.cern.ch/Training/ or fill in an "application for training" form available from your Divisional Secretariat or from your DTO (Divisional Training Officer). Applications will be accepted in the order of their receipt. TECHNICAL TRAINING Monique Duval Tel. 74924 technical.training@cern.ch CERN Technical Training 2003: Learning for the LHC! Announcement of a new training course - advanced level. As part of the 2003 Technical Training programme, and in collaboration with GUAPI (Groupe des Utilisateurs d'Automates Programmables Industriels du CERN, the CERN group of users of industrial programmable logic controllers), a new course has been organised to deepen knowledge of the PL7 tool for programming Schneider PREMIUM PLCs. This new course (level 2) is ...

  1. Simulation and Automation of Microwave Frequency Control in Dynamic Nuclear Polarization for Solid Polarized Targets

    Science.gov (United States)

    Perera, Gonaduwage; Johnson, Ian; Keller, Dustin

    2017-09-01

    Dynamic Nuclear Polarization (DNP) is used in most of the solid polarized target scattering experiments. Those target materials must be irradiated using microwaves at a frequency determined by the difference in the nuclear Larmor and electron paramagnetic resonance (EPR) frequencies. But the resonance frequency changes with time as a result of radiation damage. Hence the microwave frequency should be adjusted accordingly. Manually adjusting the frequency can be difficult, and improper adjustments negatively impact the polarization. In order to overcome these difficulties, two controllers were developed which automate the process of seeking and maintaining the optimal frequency: one being a standalone controller for a traditional DC motor and the other a LabVIEW VI for a stepper motor configuration. Further a Monte-Carlo simulation was developed which can accurately model the polarization over time as a function of microwave frequency. In this talk, analysis of the simulated data and recent improvements to the automated system will be presented. DOE.

  2. OptoDyCE: Automated system for high-throughput all-optical dynamic cardiac electrophysiology

    Science.gov (United States)

    Klimas, Aleksandra; Yu, Jinzhu; Ambrosi, Christina M.; Williams, John C.; Bien, Harold; Entcheva, Emilia

    2016-02-01

    In the last two decades, many drug withdrawals from the market were due to cardiac toxicity, where unintended interactions with ion channels disrupt the heart's normal electrical function. Consequently, all new drugs must undergo preclinical testing for cardiac liability, adding to an already expensive and lengthy process. Recognition that proarrhythmic effects often result from drug action on multiple ion channels demonstrates a need for integrative and comprehensive measurements. Additionally, patient-specific therapies relying on emerging technologies employing stem-cell derived cardiomyocytes (e.g. induced pluripotent stem-cell-derived cardiomyocytes, iPSC-CMs) require better screening methods to become practical. However, a high-throughput, cost-effective approach for cellular cardiac electrophysiology has not been feasible. Optical techniques for manipulation and recording provide a contactless means of dynamic, high-throughput testing of cells and tissues. Here, we consider the requirements for all-optical electrophysiology for drug testing, and we implement and validate OptoDyCE, a fully automated system for all-optical cardiac electrophysiology. We demonstrate the high-throughput capabilities using multicellular samples in 96-well format by combining optogenetic actuation with simultaneous fast high-resolution optical sensing of voltage or intracellular calcium. The system can also be implemented using iPSC-CMs and other cell-types by delivery of optogenetic drivers, or through the modular use of dedicated light-sensitive somatic cells in conjunction with non-modified cells. OptoDyCE provides a truly modular and dynamic screening system, capable of fully-automated acquisition of high-content information integral for improved discovery and development of new drugs and biologics, as well as providing a means of better understanding of electrical disturbances in the heart.

  3. Mechatronic Design Automation

    DEFF Research Database (Denmark)

    Fan, Zhun

    successfully design analogue filters, vibration absorbers, micro-electro-mechanical systems, and vehicle suspension systems, all in an automatic or semi-automatic way. It also investigates the very important issue of co-designing plant-structures and dynamic controllers in automated design of Mechatronic...

  4. Automated migration analysis based on cell texture: method & reliability

    Directory of Open Access Journals (Sweden)

    Chittenden Thomas W

    2005-03-01

    Abstract Background In this paper, we present and validate a way to measure automatically the extent of cell migration based on automated examination of a series of digital photographs. It was designed specifically to identify the impact of Second Hand Smoke (SHS) on endothelial cell migration but has broader applications. The analysis has two stages: (1) preprocessing of image texture, and (2) migration analysis. Results The output is a graphic overlay that indicates the front lines of cell migration superimposed on each original image, with automated reporting of the distance traversed vs. time. Expert comparison to manual placement of the leading edge shows complete equivalence of automated vs. manual leading edge definition for cell migration measurement. Conclusion Our method is indistinguishable from careful manual determinations of cell front lines, with the advantages of full automation, objectivity, and speed.

  5. Automated chemical kinetic modeling via hybrid reactive molecular dynamics and quantum chemistry simulations.

    Science.gov (United States)

    Döntgen, Malte; Schmalz, Felix; Kopp, Wassja A; Kröger, Leif C; Leonhard, Kai

    2018-06-13

    An automated scheme for obtaining chemical kinetic models from scratch using reactive molecular dynamics and quantum chemistry simulations is presented. This methodology combines the phase space sampling of reactive molecular dynamics with the thermochemistry and kinetics prediction capabilities of quantum mechanics. The scheme provides the NASA polynomial and modified Arrhenius equation parameters for all species and reactions that are observed during the simulation and supplies them in the ChemKin format. The ab initio level of theory for predictions is easily exchangeable, and the presently used G3MP2 level of theory is found to reliably reproduce hydrogen and methane oxidation thermochemistry and kinetics data. Chemical kinetic models obtained with this approach are ready-to-use for, e.g., ignition delay time simulations, as shown for hydrogen combustion. The presented extension of the ChemTraYzer approach can be used as a basis for methodologically advancing chemical kinetic modeling schemes and as a black-box approach to generate chemical kinetic models.
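
    For reference, the two parameterizations named in the abstract have the following standard forms (generic definitions, not values from the paper); in the 7-coefficient NASA polynomial, two further coefficients fix the enthalpy and entropy integration constants:

        k(T) = A \, T^{n} \exp\!\left(-\frac{E_a}{R T}\right),
        \qquad
        \frac{c_p(T)}{R} = a_1 + a_2 T + a_3 T^2 + a_4 T^3 + a_5 T^4 .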

  6. Low Power Continuous-Time Delta-Sigma ADC with Current Output DAC

    DEFF Research Database (Denmark)

    Marker-Villumsen, Niels; Jørgensen, Ivan Harald Holger; Bruun, Erik

    2015-01-01

    The paper presents a continuous-time (CT) Delta-Sigma (∆Σ) analog-to-digital converter (ADC) using a current output digital-to-analog converter (DAC) for the feedback. From circuit analysis it is shown that using a current output DAC makes it possible to relax the noise requirements of the 1st integrator of the loop filter, and thereby reduce the current consumption. Furthermore, the noise of the current output DAC depends on the ADC input signal level, enabling a dynamic range that is larger than the peak signal-to-noise ratio (SNR). The current output DAC is used in a 3rd order multibit CT ∆Σ ADC for audio applications, designed in a 0.18 µm CMOS process, with active-RC integrators, a 7-level Flash ADC quantizer and a current output DAC for the feedback. From simulations the ADC achieves a dynamic range of 95.0 dB in the audio band, with a current consumption of 284 µA from a 1.7 V supply.

  7. Automated metabolic gas analysis systems: a review.

    Science.gov (United States)

    Macfarlane, D J

    2001-01-01

    The use of automated metabolic gas analysis systems or metabolic measurement carts (MMC) in exercise studies is common throughout the industrialised world. They have become essential tools for diagnosing many hospital patients, especially those with cardiorespiratory disease. Moreover, the measurement of maximal oxygen uptake (VO2max) is routine for many athletes in fitness laboratories and has become a defacto standard in spite of its limitations. The development of metabolic carts has also facilitated the noninvasive determination of the lactate threshold and cardiac output, respiratory gas exchange kinetics, as well as studies of outdoor activities via small portable systems that often use telemetry. Although the fundamental principles behind the measurement of oxygen uptake (VO2) and carbon dioxide production (VCO2) have not changed, the techniques used have, and indeed, some have almost turned through a full circle. Early scientists often employed a manual Douglas bag method together with separate chemical analyses, but the need for faster and more efficient techniques fuelled the development of semi- and full-automated systems by private and commercial institutions. Yet, recently some scientists are returning back to the traditional Douglas bag or Tissot-spirometer methods, or are using less complex automated systems to not only save capital costs, but also to have greater control over the measurement process. Over the last 40 years, a considerable number of automated systems have been developed, with over a dozen commercial manufacturers producing in excess of 20 different automated systems. The validity and reliability of all these different systems is not well known, with relatively few independent studies having been published in this area. For comparative studies to be possible and to facilitate greater consistency of measurements in test-retest or longitudinal studies of individuals, further knowledge about the performance characteristics of these
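
    The calculation at the heart of such systems has not changed since the Douglas bag era; one common formulation, using the Haldane transformation to recover the inspired ventilation (standard textbook symbols, not any particular cart's firmware), is:

        \dot{V}_I = \dot{V}_E \, \frac{1 - F_E O_2 - F_E CO_2}{1 - F_I O_2 - F_I CO_2},
        \qquad
        \dot{V}O_2 = \dot{V}_I \, F_I O_2 - \dot{V}_E \, F_E O_2,
        \qquad
        \dot{V}CO_2 = \dot{V}_E \, F_E CO_2 - \dot{V}_I \, F_I CO_2 .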

  8. Automation of radioimmunoassay

    International Nuclear Information System (INIS)

    Yamaguchi, Chisato; Yamada, Hideo; Iio, Masahiro

    1974-01-01

    Automation systems for measuring Australian antigen by radioimmunoassay under development were discussed. Samples were processed as follows: blood serum being dispensed by automated sampler to the test tube, and then incubated under controlled time and temperature; first counting being omitted; labelled antibody being dispensed to the serum after washing; samples being incubated and then centrifuged; radioactivities in the precipitate being counted by auto-well counter; measurements being tabulated by automated typewriter. Not only well-type counter but also position counter was studied. (Kanao, N.)

  9. Experimentation of Eigenvector Dynamics in a Multiple Input Multiple Output Channel in the 5GHz Band

    DEFF Research Database (Denmark)

    Brown, Tim; Eggers, Patrick Claus F.; Katz, Marcos

    2005-01-01

    Much research has been carried out on the production of both physical and non-physical Multiple Input Multiple Output channel models with regard to increased channel capacity as well as analysis of eigenvalues through the use of singular value decomposition. Little attention has been paid to the analysis of vector dynamics in terms of how the state of eigenvectors will change as a mobile is moving through a changing physical environment. This is important in terms of being able to track the orthogonal eigenmodes at system level, while also relieving the burden of tracking the full channel...
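
    The eigenmodes referred to above come from the singular value decomposition of the channel matrix; the following toy numpy sketch (random synthetic channels, not the measured 5 GHz data) shows how the dominant transmit eigenvector can be tracked between two channel snapshots:

        import numpy as np

        rng = np.random.default_rng(1)
        n_rx, n_tx = 4, 4

        # Two snapshots of a Rayleigh-like MIMO channel (i.i.d. complex Gaussian entries)
        H1 = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
        H2 = H1 + 0.1 * (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx)))

        # SVD: rows of Vh (conjugated) are the transmit eigenvectors, s**2 the eigenmode gains
        U1, s1, Vh1 = np.linalg.svd(H1)
        U2, s2, Vh2 = np.linalg.svd(H2)

        # How much has the dominant transmit eigenvector rotated between snapshots?
        v1, v2 = Vh1[0].conj(), Vh2[0].conj()
        alignment = np.abs(np.vdot(v1, v2))      # 1.0 = unchanged, 0.0 = orthogonal
        print("eigenmode gains at t1:", np.round(s1**2, 2))
        print("dominant eigenvector alignment:", round(float(alignment), 3))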

  10. Future of Automated Insulin Delivery Systems.

    Science.gov (United States)

    Castle, Jessica R; DeVries, J Hans; Kovatchev, Boris

    2017-06-01

    Advances in continuous glucose monitoring (CGM) have brought on a paradigm shift in the management of type 1 diabetes. These advances have enabled the automation of insulin delivery, where an algorithm determines the insulin delivery rate in response to the CGM values. There are multiple automated insulin delivery (AID) systems in development. A system that automates basal insulin delivery has already received Food and Drug Administration approval, and more systems are likely to follow. As the field of AID matures, future systems may incorporate additional hormones and/or multiple inputs, such as activity level. All AID systems are impacted by CGM accuracy and future CGM devices must be shown to be sufficiently accurate to be safely incorporated into AID. In this article, we summarize recent achievements in AID development, with a special emphasis on CGM sensor performance, and discuss the future of AID systems from the point of view of their input-output characteristics, form factor, and adaptability.

  11. A Kepler Workflow Tool for Reproducible AMBER GPU Molecular Dynamics.

    Science.gov (United States)

    Purawat, Shweta; Ieong, Pek U; Malmstrom, Robert D; Chan, Garrett J; Yeung, Alan K; Walker, Ross C; Altintas, Ilkay; Amaro, Rommie E

    2017-06-20

    With the drive toward high throughput molecular dynamics (MD) simulations involving ever-greater numbers of simulation replicates run for longer, biologically relevant timescales (microseconds), the need for improved computational methods that facilitate fully automated MD workflows gains more importance. Here we report the development of an automated workflow tool to perform AMBER GPU MD simulations. Our workflow tool capitalizes on the capabilities of the Kepler platform to deliver a flexible, intuitive, and user-friendly environment and the AMBER GPU code for a robust and high-performance simulation engine. Additionally, the workflow tool reduces user input time by automating repetitive processes and facilitates access to GPU clusters, whose high-performance processing power makes simulations of large numerical scale possible. The presented workflow tool facilitates the management and deployment of large sets of MD simulations on heterogeneous computing resources. The workflow tool also performs systematic analysis on the simulation outputs and enhances simulation reproducibility, execution scalability, and MD method development including benchmarking and validation. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  12. Output-Feedback Control of Unknown Linear Discrete-Time Systems With Stochastic Measurement and Process Noise via Approximate Dynamic Programming.

    Science.gov (United States)

    Wang, Jun-Sheng; Yang, Guang-Hong

    2017-07-25

    This paper studies the optimal output-feedback control problem for unknown linear discrete-time systems with stochastic measurement and process noise. A dithered Bellman equation with the innovation covariance matrix is constructed via the expectation operator given in the form of a finite summation. On this basis, an output-feedback-based approximate dynamic programming method is developed, where the terms depending on the innovation covariance matrix are available with the aid of the innovation covariance matrix identified beforehand. Therefore, by iterating the Bellman equation, the resulting value function can converge to the optimal one in the presence of the aforementioned noise, and the nearly optimal control laws are delivered. To show the effectiveness and the advantages of the proposed approach, a simulation example and a velocity control experiment on a dc machine are employed.
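
    For orientation, the noise-free, state-feedback LQ Bellman equation that such approximate dynamic programming iterations build on is shown below; the paper's dithered, output-feedback variant replaces the state with output and innovation data and adds the innovation covariance terms:

        V(x_k) = \min_{u_k} \left[ \, x_k^{\top} Q x_k + u_k^{\top} R u_k + V(A x_k + B u_k) \, \right].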

  13. Automated nuclear material recovery and decontamination of large steel dynamic experiment containers

    International Nuclear Information System (INIS)

    Dennison, D.K.; Gallant, D.A.; Nelson, D.C.; Stovall, L.A.; Wedman, D.E.

    1999-01-01

    A key mission of the Los Alamos National Laboratory (LANL) is to reduce the global nuclear danger through stockpile stewardship efforts that ensure the safety and reliability of nuclear weapons. In support of this mission LANL performs dynamic experiments on special nuclear materials (SNM) within large steel containers. Once these experiments are complete, these containers must be processed to recover residual SNM and to decontaminate the containers to below low level waste (LLW) disposal limits which are much less restrictive for disposal purposes than transuranic (TRU) waste limits. The purpose of this paper is to describe automation efforts being developed by LANL for improving the efficiency, increasing worker safety, and reducing worker exposure during the material cleanout and recovery activities performed on these containers

  14. Why Are There Still So Many Jobs? The History and Future of Workplace Automation

    OpenAIRE

    David H. Autor

    2015-01-01

    In this essay, I begin by identifying the reasons that automation has not wiped out a majority of jobs over the decades and centuries. Automation does indeed substitute for labor—as it is typically intended to do. However, automation also complements labor, raises output in ways that leads to higher demand for labor, and interacts with adjustments in labor supply. Journalists and even expert commentators tend to overstate the extent of machine substitution for human labor and ignore the stron...

  15. A statistical-dynamical modeling approach for the simulation of local paleo proxy records using GCM output

    Energy Technology Data Exchange (ETDEWEB)

    Reichert, B.K.; Bengtsson, L. [Max-Planck-Institut fuer Meteorologie, Hamburg (Germany); Aakesson, O. [Sveriges Meteorologiska och Hydrologiska Inst., Norrkoeping (Sweden)

    1998-08-01

    Recent proxy data obtained from ice core measurements, dendrochronology and valley glaciers provide important information on the evolution of the regional or local climate. General circulation models integrated over a long period of time could help to understand the (external and internal) forcing mechanisms of natural climate variability. For a systematic interpretation of in situ paleo proxy records, a combined method of dynamical and statistical modeling is proposed. Local 'paleo records' can be simulated from GCM output by first undertaking a model-consistent statistical downscaling and then using a process-based forward modeling approach to obtain the behavior of valley glaciers and the growth of trees under specific conditions. The simulated records can be compared to actual proxy records in order to investigate whether e.g. the response of glaciers to climatic change can be reproduced by models and to what extent climate variability obtained from proxy records (with the main focus on the last millennium) can be represented. For statistical downscaling to local weather conditions, a multiple linear forward regression model is used. Daily sets of observed weather station data and various large-scale predictors at 7 pressure levels obtained from ECMWF reanalyses are used for development of the model. Daily data give the closest and most robust relationships due to the strong dependence on individual synoptic-scale patterns. For some local variables, the performance of the model can be further increased by developing seasonal specific statistical relationships. The model is validated using both independent and restricted predictor data sets. The model is applied to a long integration of a mixed layer GCM experiment simulating pre-industrial climate variability. The dynamical-statistical local GCM output within a region around Nigardsbreen glacier, Norway is compared to nearby observed station data for the period 1868-1993. Patterns of observed
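
    The statistical downscaling step, a multiple linear forward regression from large-scale predictors to a local station variable, can be sketched as follows (synthetic data and generic scikit-learn calls, purely illustrative of the method class, not the authors' model or predictors):

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(42)
        n_days, n_predictors = 3650, 12   # ~10 years of daily large-scale predictors

        # Large-scale predictors (e.g. fields at several pressure levels), standardized
        X = rng.standard_normal((n_days, n_predictors))
        true_coef = rng.normal(size=n_predictors)
        # Local station variable = linear response + unexplained local variability
        y_station = X @ true_coef + 0.5 * rng.standard_normal(n_days)

        # Fit on the observational period, then apply to (here: synthetic) GCM predictors
        model = LinearRegression().fit(X[:3000], y_station[:3000])
        print("validation r^2:", round(model.score(X[3000:], y_station[3000:]), 3))

        X_gcm = rng.standard_normal((365, n_predictors))   # one simulated year of GCM output
        local_series = model.predict(X_gcm)                # downscaled local series for forward modeling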

  16. Vibration-based Energy Harvesting Systems Characterization Using Automated Electronic Equipment

    Directory of Open Access Journals (Sweden)

    Ioannis KOSMADAKIS

    2015-04-01

    Full Text Available A measurement bench has been developed to fully automate the procedure for the characterization of a vibration-based energy scavenging system. The measurement system is capable of monitoring all important characteristics of a vibration harvesting system (input and output voltage, current, and other parameters, frequency and acceleration values, etc.. It is composed of a PC, typical digital measuring instruments (oscilloscope, waveform generator, etc., certain sensors and actuators, along with a microcontroller based automation module. The automation of the procedure and the manipulation of the acquired data are performed by LabVIEW software. Typical measurements of a system consisting of a vibrating source, a vibration transducer and an active rectifier are presented.

  17. A method for the automated long-term monitoring of three-spined stickleback Gasterosteus aculeatus shoal dynamics.

    Science.gov (United States)

    Kleinhappel, T K; Al-Zoubi, A; Al-Diri, B; Burman, O; Dickinson, P; John, L; Wilkinson, A; Pike, T W

    2014-04-01

    This paper describes and evaluates a flexible, non-invasive tagging system for the automated identification and long-term monitoring of individual three-spined sticklebacks Gasterosteus aculeatus. The system is based on barcoded tags, which can be reliably and robustly detected and decoded to provide information on an individual's identity and location. Because large numbers of fish can be individually tagged, it can be used to monitor individual- and group-level dynamics within fish shoals. © 2014 The Fisheries Society of the British Isles.

  18. SU-E-T-362: Enhanced Dynamic Wedge Output Factors for Varian 2300CD and the Case for a Reference Database

    International Nuclear Information System (INIS)

    Njeh, C

    2015-01-01

    Purpose: Dose inhomogeneity in treatment planning can be compensated for using physical wedges. Enhanced dynamic wedges (EDW) were introduced by Varian to overcome some of the shortcomings of physical wedges. The objectives of this study were, firstly, to measure EDW output factors for 6 MV and 20 MV photon energies for a Varian 2300CD and, secondly, to review the literature in terms of published enhanced dynamic wedge output factors (EDWOF) for different Varian models, thereby adding credence to the case for the validity of reference databases. Methods: The enhanced dynamic wedge output factors were measured for the Varian 2300CD for both 6 MV and 20 MV photon energies. Twelve papers with published EDWOF for different Varian Linac models were found in the literature. Results: The EDWOF for 6 MV varied from 0.980 for a 5×5 cm 10 degree wedge to 0.424 for a 20×20 cm 60 degree wedge. Similarly for 20 MV, the EDWOF varied from 0.986 for a 5×5 cm 10 degree wedge to 0.529 for a 20×20 cm 60 degree wedge. EDWOF are highly dependent on field size. Comparing our results with the published mean, we found excellent agreement for the 6 MV EDWOF, with percentage differences ranging from 0.01% to 0.57% and a mean of 0.03%. The coefficient of variation of published EDWOF ranged from 0.17% to 0.85% and 0.1% to 0.9% for the 6 MV and 18 MV photon energies, respectively. This paper provides the first published EDWOF for 20 MV photon energy. In addition, we have provided the first compendium of EDWOFs for different Varian linac models. Conclusion: The consistency of EDWOF across models and institutions provides further support that a standard data set of basic photon and electron dosimetry could be established as a guide for future commissioning, beam modeling and quality assurance purposes.
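
    The two comparison statistics quoted above are straightforward to reproduce; a minimal sketch with made-up values (not the measured data) is:

        import numpy as np

        # Hypothetical EDW output factors for one field size / wedge angle,
        # as published by several institutions (illustrative values only)
        published = np.array([0.980, 0.978, 0.982, 0.979])
        measured = 0.981

        percent_difference = 100 * (measured - published.mean()) / published.mean()
        coefficient_of_variation = 100 * published.std(ddof=1) / published.mean()

        print(f"percent difference from published mean: {percent_difference:.2f}%")
        print(f"coefficient of variation of published values: {coefficient_of_variation:.2f}%")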

  19. Development of design principles for automated systems in transport control.

    Science.gov (United States)

    Balfe, Nora; Wilson, John R; Sharples, Sarah; Clarke, Theresa

    2012-01-01

    This article reports the results of a qualitative study investigating attitudes towards and opinions of an advanced automation system currently used in UK rail signalling. In-depth interviews were held with 10 users, key issues associated with automation were identified and the automation's impact on the signalling task investigated. The interview data highlighted the importance of the signallers' understanding of the automation and their (in)ability to predict its outputs. The interviews also covered the methods used by signallers to interact with and control the automation, and the perceived effects on their workload. The results indicate that despite a generally low level of understanding and ability to predict the actions of the automation system, signallers have developed largely successful coping mechanisms that enable them to use the technology effectively. These findings, along with parallel work identifying desirable attributes of automation from the literature in the area, were used to develop 12 principles of automation which can be used to help design new systems which better facilitate cooperative working. The work reported in this article was completed with the active involvement of operational rail staff who regularly use automated systems in rail signalling. The outcomes are currently being used to inform decisions on the extent and type of automation and user interfaces in future generations of rail control systems.

  20. Current status of automated ultrasonic pipe inspection systems - ISI of stainless steel piping systems in BWR power plants

    International Nuclear Information System (INIS)

    Jeong, P.

    1985-01-01

    The field of ultrasonic nondestructive testing is constantly expanding its data acquisition capability and speed by incorporating a computer into the testing system. The computer has made it possible to store massive amounts of test data on a compact magnetic hard disk for permanent records. The data outputs are displayed on a color CRT screen, and a variety of image display methods, such as A-scan, B-scan, C-scan, P-scan, or many other three-dimensional isometric views and modified display techniques, are available to an operator. Various hardcopy machines are now a part of the testing system so that the displayed data outputs can be easily copied and filed for permanent documentation. Faster and more accurate mechanized scanners are gradually being substituted for the conventional manual scanning method, which has been a major time-consuming part of the testing operation. When all such improvements are combined into an integral unit, a reliable, fully automated ultrasonic testing system can be made. The fully automated ultrasonic testing system is needed not only for fast data acquisition, processing, and reliable data display, but also, even more importantly, for considerable reduction of human intervention, which could be a critical factor under severely limited field environments. Obviously, in the past several years, tremendous accomplishments have been made in automating the test system, and many such systems are being used in the field. However, most of the existing automated systems are still bulky in size and the displayed data is often difficult for field operators to interpret. Major effort should, therefore, be directed to size reduction of the system as well as improvement of the system reliability.

  1. Automated borehole gravity meter system

    International Nuclear Information System (INIS)

    Lautzenhiser, Th.V.; Wirtz, J.D.

    1984-01-01

    An automated borehole gravity meter system for measuring gravity within a wellbore. The gravity meter includes leveling devices for leveling the borehole gravity meter, displacement devices for applying forces to a gravity sensing device within the gravity meter to bring the gravity sensing device to a predetermined or null position. Electronic sensing and control devices are provided for (i) activating the displacement devices, (ii) sensing the forces applied to the gravity sensing device, (iii) electronically converting the values of the forces into a representation of the gravity at the location in the wellbore, and (iv) outputting such representation. The system further includes electronic control devices with the capability of correcting the representation of gravity for tidal effects, as well as, calculating and outputting the formation bulk density and/or porosity

  2. Functional System Dynamics

    OpenAIRE

    Ligterink, N.E.

    2007-01-01

    Functional system dynamics is the analysis, modelling, and simulation of continuous systems usually described by partial differential equations. From the infinite degrees of freedom of such systems only a finite number of relevant variables have to be chosen for a practical model description. The proper input and output of the system are an important part of the relevant variables.

  3. Towards Automated Binding Affinity Prediction Using an Iterative Linear Interaction Energy Approach

    Directory of Open Access Journals (Sweden)

    C. Ruben Vosmeer

    2014-01-01

    Binding affinity prediction of potential drugs to target and off-target proteins is an essential asset in drug development. These predictions require the calculation of binding free energies. In such calculations, it is a major challenge to properly account for both the dynamic nature of the protein and the possible variety of ligand-binding orientations, while keeping computational costs tractable. Recently, an iterative Linear Interaction Energy (LIE) approach was introduced, in which results from multiple simulations of a protein-ligand complex are combined into a single binding free energy using a Boltzmann weighting-based scheme. This method was shown to reach experimental accuracy for flexible proteins while retaining the computational efficiency of the general LIE approach. Here, we show that the iterative LIE approach can be used to predict binding affinities in an automated way. A workflow was designed using preselected protein conformations, automated ligand docking and clustering, and a (semi-)automated molecular dynamics simulation setup. We show that using this workflow, binding affinities of aryloxypropanolamines to the malleable Cytochrome P450 2D6 enzyme can be predicted without a priori knowledge of dominant protein-ligand conformations. In addition, we provide an outlook for an approach to assess the quality of the LIE predictions, based on simulation outcomes only.
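
    Written schematically (generic symbols, not necessarily the authors' exact notation), a single-simulation LIE estimate and a Boltzmann-weighted combination over multiple binding poses or conformations take the form:

        \Delta G^{(i)} = \alpha \, \langle \Delta V_{\mathrm{vdW}} \rangle_i + \beta \, \langle \Delta V_{\mathrm{el}} \rangle_i + \gamma,
        \qquad
        \Delta G = -k_B T \, \ln \sum_i w_i \, e^{-\Delta G^{(i)} / k_B T}, \quad \sum_i w_i = 1 .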

  4. Enabling Automated Dynamic Demand Response: From Theory to Practice

    Energy Technology Data Exchange (ETDEWEB)

    Frincu, Marc; Chelmis, Charalampos; Aman, Saima; Saeed, Rizwan; Zois, Vasileios; Prasanna, Viktor

    2015-07-14

    Demand response (DR) is a technique used in smart grids to shape customer load during peak hours. Automated DR offers utilities a fine grained control and a high degree of confidence in the outcome. However the impact on the customer's comfort means this technique is more suited for industrial and commercial settings than for residential homes. In this paper we propose a system for achieving automated controlled DR in a heterogeneous environment. We present some of the main issues arising in building such a system, including privacy, customer satisfiability, reliability, and fast decision turnaround, with emphasis on the solutions we proposed. Based on the lessons we learned from empirical results we describe an integrated automated system for controlled DR on the USC microgrid. Results show that while on a per building per event basis the accuracy of our prediction and customer selection techniques varies, it performs well on average when considering several events and buildings.

  5. Automated torso organ segmentation from 3D CT images using structured perceptron and dual decomposition

    Science.gov (United States)

    Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Mori, Kensaku

    2015-03-01

    This paper presents a method for torso organ segmentation from abdominal CT images using structured perceptron and dual decomposition. Many methods have been proposed to enable automated extraction of organ regions from volumetric medical images. However, it is necessary to adjust their empirical parameters to obtain precise organ regions. This paper proposes an organ segmentation method using structured output learning. Our method utilizes a graphical model and binary features which represent the relationship between voxel intensities and organ labels. We also optimize the weights of the graphical model by structured perceptron and estimate the best organ label for a given image by dynamic programming and dual decomposition. The experimental result revealed that the proposed method can extract organ regions automatically using structured output learning. The error of organ label estimation was 4.4%. The DICE coefficients of left lung, right lung, heart, liver, spleen, pancreas, left kidney, right kidney, and gallbladder were 0.91, 0.95, 0.77, 0.81, 0.74, 0.08, 0.83, 0.84, and 0.03, respectively.
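
    The overlap measure quoted above is the Dice coefficient; for an automatically extracted region A and a reference region B it is defined as:

        \mathrm{DICE}(A, B) = \frac{2 \, |A \cap B|}{|A| + |B|} .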

  6. Double-pass tapered amplifier diode laser with an output power of 1 W for an injection power of only 200 μW.

    Science.gov (United States)

    Bolpasi, V; von Klitzing, W

    2010-11-01

    A 1 W tapered amplifier requiring only 200 μW of injection power at 780 nm is presented in this paper. This is achieved by injecting the seeding light into the amplifier from its tapered side and feeding the amplified light back into the small side. The amplified spontaneous emission of the tapered amplifier is suppressed by 75 dB. The double-passed tapered laser, presented here, is extremely stable and reliable. The output beam remains well coupled to the optical fiber for a timescale of months, whereas the injection of the seed light did not require realignment for over a year of daily operation.

  7. Dynamic Output Feedback Robust Model Predictive Control via Zonotopic Set-Membership Estimation for Constrained Quasi-LPV Systems

    Directory of Open Access Journals (Sweden)

    Xubin Ping

    2015-01-01

    For the quasi-linear parameter varying (quasi-LPV) system with bounded disturbance, a synthesis approach of dynamic output feedback robust model predictive control (OFRMPC) is investigated. The estimation error set is represented by a zonotope and refreshed by the zonotopic set-membership estimation method. By properly refreshing the estimation error set online, the bounds of the true state at the next sampling time can be obtained. Furthermore, the feasibility of the main optimization problem at the next sampling time can be determined at the current time. A numerical example is given to illustrate the effectiveness of the approach.

  8. Automated Cryocooler Monitor and Control System Software

    Science.gov (United States)

    Britchcliffe, Michael J.; Conroy, Bruce L.; Anderson, Paul E.; Wilson, Ahmad

    2011-01-01

    This software is used in an automated cryogenic control system developed to monitor and control the operation of small-scale cryocoolers. The system was designed to automate the cryogenically cooled low-noise amplifier system described in "Automated Cryocooler Monitor and Control System" (NPO-47246), NASA Tech Briefs, Vol. 35, No. 5 (May 2011), page 7a. The software contains algorithms necessary to convert non-linear output voltages from the cryogenic diode-type thermometers and vacuum pressure and helium pressure sensors, to temperature and pressure units. The control function algorithms use the monitor data to control the cooler power, vacuum solenoid, vacuum pump, and electrical warm-up heaters. The control algorithms are based on a rule-based system that activates the required device based on the operating mode. The external interface is Web-based. It acts as a Web server, providing pages for monitor, control, and configuration. No client software from the external user is required.

  9. The simulation of automated leading edge assembly

    OpenAIRE

    Song, Qingming

    2015-01-01

    Aircraft manufacturers are experiencing fierce competition worldwide. Improving productivity, increasing throughput and reducing costs are influencing aircraft manufacturers' future development. In order to improve competitiveness and provide sufficient, high-quality products, manufacturers should reduce aircraft assembly operations, the majority of which are still manual processes that limit production output. In contrast, these processes can be automated to replace manual opera...

  10. The Science of Home Automation

    Science.gov (United States)

    Thomas, Brian Louis

    Smart home technologies and the concept of home automation have become more popular in recent years. This popularity has been accompanied by social acceptance of passive sensors installed throughout the home. The subsequent increase in smart homes facilitates the creation of home automation strategies. We believe that home automation strategies can be generated intelligently by utilizing smart home sensors and activity learning. In this dissertation, we hypothesize that home automation can benefit from activity awareness. To test this, we develop our activity-aware smart automation system, CARL (CASAS Activity-aware Resource Learning). CARL learns the associations between activities and device usage from historical data and utilizes the activity-aware capabilities to control the devices. To help validate CARL we deploy and test three different versions of the automation system in a real-world smart environment. To provide a foundation of activity learning, we integrate existing activity recognition and activity forecasting into CARL home automation. We also explore two alternatives to using human-labeled data to train the activity learning models. The first unsupervised method is Activity Detection, and the second is a modified DBSCAN algorithm that utilizes Dynamic Time Warping (DTW) as a distance metric. We compare the performance of activity learning with human-defined labels and with automatically-discovered activity categories. To provide evidence in support of our hypothesis, we evaluate CARL automation in a smart home testbed. Our results indicate that home automation can be boosted through activity awareness. We also find that the resulting automation has a high degree of usability and comfort for the smart home resident.

  11. Fully automated segmentation of left ventricle using dual dynamic programming in cardiac cine MR images

    Science.gov (United States)

    Jiang, Luan; Ling, Shan; Li, Qiang

    2016-03-01

    Cardiovascular diseases are becoming a leading cause of death all over the world. The cardiac function could be evaluated by global and regional parameters of left ventricle (LV) of the heart. The purpose of this study is to develop and evaluate a fully automated scheme for segmentation of LV in short axis cardiac cine MR images. Our fully automated method consists of three major steps, i.e., LV localization, LV segmentation at end-diastolic phase, and LV segmentation propagation to the other phases. First, the maximum intensity projection image along the time phases of the midventricular slice, located at the center of the image, was calculated to locate the region of interest of LV. Based on the mean intensity of the roughly segmented blood pool in the midventricular slice at each phase, end-diastolic (ED) and end-systolic (ES) phases were determined. Second, the endocardial and epicardial boundaries of LV of each slice at ED phase were synchronously delineated by use of a dual dynamic programming technique. The external costs of the endocardial and epicardial boundaries were defined with the gradient values obtained from the original and enhanced images, respectively. Finally, with the advantages of the continuity of the boundaries of LV across adjacent phases, we propagated the LV segmentation from the ED phase to the other phases by use of dual dynamic programming technique. The preliminary results on 9 clinical cardiac cine MR cases show that the proposed method can obtain accurate segmentation of LV based on subjective evaluation.
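
    As a rough, generic sketch of the dynamic-programming idea (a single boundary on a toy radius-by-angle cost map with smoothness-limited transitions; the paper's dual formulation couples the endocardial and epicardial boundaries and derives its costs from image gradients):

        import numpy as np

        def dp_boundary(cost):
            """Minimum-cost path through a (radius x angle) cost map,
            moving one column (angle) at a time with |delta radius| <= 1."""
            n_r, n_a = cost.shape
            acc = np.full((n_r, n_a), np.inf)
            back = np.zeros((n_r, n_a), dtype=int)
            acc[:, 0] = cost[:, 0]
            for j in range(1, n_a):
                for r in range(n_r):
                    lo, hi = max(0, r - 1), min(n_r, r + 2)
                    prev = acc[lo:hi, j - 1]
                    k = int(np.argmin(prev))
                    acc[r, j] = cost[r, j] + prev[k]
                    back[r, j] = lo + k
            # Trace the optimal radius for each angle back from the last column
            path = np.zeros(n_a, dtype=int)
            path[-1] = int(np.argmin(acc[:, -1]))
            for j in range(n_a - 1, 0, -1):
                path[j - 1] = back[path[j], j]
            return path

        rng = np.random.default_rng(0)
        toy_cost = rng.random((20, 36))      # 20 candidate radii x 36 angular samples
        toy_cost[8, :] *= 0.1                # a cheap 'edge' near radius index 8
        print(dp_boundary(toy_cost))         # path should hug radius index 8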

  12. Order Division Automated System.

    Science.gov (United States)

    Kniemeyer, Justin M.; And Others

    This publication was prepared by the Order Division Automation Project staff to fulfill the Library of Congress' requirement to document all automation efforts. The report was originally intended for internal use only and not for distribution outside the Library. It is now felt that the library community at-large may have an interest in the…

  13. Output feedback control of heat transport mechanisms in parabolic distributed solar collectors

    KAUST Repository

    Elmetennani, Shahrazed

    2016-08-05

    This paper presents an output feedback control for distributed parabolic solar collectors. The controller aims at forcing the outlet temperature to track a desired reference in order to manage the produced heat despite the external disturbances. The proposed control strategy is derived using the distributed physical model of the system to avoid the loss of information due to model approximation schemes. The system dynamics are driven to follow reference dynamics defined by a transport equation with a constant velocity, which allows to control the transient behavior and the response time of the closed loop. The designed controller depends only on the accessible measured variables which makes it easy for real time implementation and useful for industrial plants. Simulation results show the efficiency of the reference tracking closed loop under different working conditions.
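
    In one generic form, reference dynamics of the advection type mentioned above can be written as a transport equation with constant velocity v, which the outlet temperature is driven to follow:

        \frac{\partial T_{\mathrm{ref}}(x, t)}{\partial t} + v \, \frac{\partial T_{\mathrm{ref}}(x, t)}{\partial x} = 0 .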

  14. Redesign lifts prep output 288%

    Energy Technology Data Exchange (ETDEWEB)

    Hamric, J

    1987-02-01

    This paper outlines the application of engineering creativity and how it brought output at an Ohio coal preparation plant up from 12,500 tpd to nearly four times that figure, 48,610 tpd. By streamlining the conveyor systems, removing surplus belt length and repositioning subplants the whole operation was able to run far more efficiently with a greater output. Various other alterations including the raw material supply and management and operating practices were also undertaken to provide a test for the achievements possible with such reorganization. The new developments have been in the following fields: fine coal cleaning, heavy media cyclones, feeders, bins, filter presses, dewatering equipment and settling tanks. Output is now limited only by the reduced demand by the Gavin power station nearby.
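
    The headline figure is consistent with the numbers quoted: (48,610 − 12,500) / 12,500 ≈ 2.89, i.e. an increase of roughly 289% (the rounded 288% of the title), or about 3.9 times the original daily output.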

  15. World Input-Output Network.

    Directory of Open Access Journals (Sweden)

    Federica Cerina

    Production systems, traditionally analyzed as almost independent national systems, are increasingly connected on a global scale. Only recently becoming available, the World Input-Output Database (WIOD) is one of the first efforts to construct the global multi-regional input-output (GMRIO) tables. By viewing the world input-output system as an interdependent network where the nodes are the individual industries in different economies and the edges are the monetary goods flows between industries, we analyze respectively the global, regional, and local network properties of the so-called world input-output network (WION) and document its evolution over time. At the global level, we find that the industries are highly but asymmetrically connected, which implies that micro shocks can lead to macro fluctuations. At the regional level, we find that world production is still operated nationally or at most regionally, as the communities detected are either individual economies or geographically well defined regions. Finally, at the local level, for each industry we compare the network-based measures with the traditional methods of backward linkages. We find that network-based measures such as PageRank centrality and the community coreness measure can give valuable insights into identifying the key industries.
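
    The network-based centrality mentioned above can be computed on any weighted directed graph of inter-industry flows; a toy sketch with networkx (fictitious industries and flow values, not the WIOD data) follows:

        import networkx as nx

        # Toy inter-industry monetary flows (edge weight = goods flow, arbitrary units)
        flows = [
            ("mining.A", "steel.A", 30.0),
            ("steel.A", "autos.B", 25.0),
            ("electronics.C", "autos.B", 20.0),
            ("autos.B", "retail.C", 40.0),
            ("steel.A", "construction.C", 15.0),
        ]

        G = nx.DiGraph()
        G.add_weighted_edges_from(flows)

        # PageRank centrality on the weighted flow network
        centrality = nx.pagerank(G, weight="weight")
        for industry, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
            print(f"{industry:16s} {score:.3f}")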

  16. Aggregate Supply and Potential Output

    OpenAIRE

    Razin, Assaf

    2004-01-01

    The New-Keynesian aggregate supply derives from micro-foundations an inflation-dynamics model very much like the tradition in the monetary literature. Inflation is primarily affected by: (i) economic slack; (ii) expectations; (iii) supply shocks; and (iv) inflation persistence. This paper extends the New Keynesian aggregate supply relationship to include also fluctuations in potential output, as an additional determinant of the relationship. Implications for monetary rules and to the estimati...

  17. Automated movement correction for dynamic PET/CT images: evaluation with phantom and patient data.

    Science.gov (United States)

    Ye, Hu; Wong, Koon-Pong; Wardak, Mirwais; Dahlbom, Magnus; Kepe, Vladimir; Barrio, Jorge R; Nelson, Linda D; Small, Gary W; Huang, Sung-Cheng

    2014-01-01

    Head movement during dynamic brain PET/CT imaging results in mismatch between CT and dynamic PET images. It can cause artifacts in CT-based attenuation corrected PET images, thus affecting both the qualitative and quantitative aspects of the dynamic PET images and the derived parametric images. In this study, we developed an automated retrospective image-based movement correction (MC) procedure. The MC method first registered the CT image to each dynamic PET frame, then re-reconstructed the PET frames with CT-based attenuation correction, and finally re-aligned all the PET frames to the same position. We evaluated the MC method's performance on the Hoffman phantom and dynamic FDDNP and FDG PET/CT images of patients with neurodegenerative disease or with poor compliance. Dynamic FDDNP PET/CT images (65 min) were obtained from 12 patients and dynamic FDG PET/CT images (60 min) were obtained from 6 patients. Logan analysis with cerebellum as the reference region was used to generate regional distribution volume ratios (DVR) for the FDDNP scans before and after MC. For FDG studies, the image-derived input function was used to generate parametric images of the FDG uptake constant (Ki) before and after MC. The phantom study showed high accuracy of registration between PET and CT and improved PET images after MC. In the patient studies, head movement was observed in all subjects, especially in late PET frames, with an average displacement of 6.92 mm. The z-direction translation (average maximum = 5.32 mm) and x-axis rotation (average maximum = 5.19 degrees) occurred most frequently. Image artifacts were significantly diminished after MC. There were significant differences (P < 0.05) before and after MC, and the automated MC procedure applied to dynamic brain FDDNP and FDG PET/CT scans could improve the qualitative and quantitative aspects of images of both tracers.

  18. Computer automation and artificial intelligence

    International Nuclear Information System (INIS)

    Hasnain, S.B.

    1992-01-01

    Rapid advances in computing, resulting from the microchip revolution, have increased its applications manifold, particularly for computer automation. Yet the level of automation available has limited its application to more complex and dynamic systems, which require intelligent computer control. In this paper a review of artificial intelligence techniques used to augment automation is presented. The sequential processing approach usually adopted in artificial intelligence has succeeded in emulating the symbolic processing part of intelligence, but the processing power required to capture the more elusive aspects of intelligence leads towards parallel processing. An overview of parallel processing with emphasis on the transputer is also provided. A fuzzy knowledge-based controller for amination drug delivery in muscle relaxant anesthesia, implemented on a transputer, is described. 4 figs. (author)

  19. Asleep at the automated wheel-Sleepiness and fatigue during highly automated driving.

    Science.gov (United States)

    Vogelpohl, Tobias; Kühn, Matthias; Hummel, Thomas; Vollrath, Mark

    2018-03-20

    Due to the lack of active involvement in the driving situation and due to monotonous driving environments, drivers with automation may be prone to become fatigued faster than manual drivers (e.g. Schömig et al., 2015). However, little is known about the progression of fatigue during automated driving and its effects on the ability to take back manual control after a take-over request. In this driving simulator study with N = 60 drivers we used a three-factorial 2 × 2 × 12 mixed design to analyze the progression (12 × 5 min; within subjects) of driver fatigue in drivers with automation compared to manual drivers (between subjects). Driver fatigue was induced as either mainly sleep-related or mainly task-related fatigue (between subjects). Additionally, we investigated the drivers' reactions to a take-over request in a critical driving scenario to gain insights into the ability of fatigued drivers to regain manual control and situation awareness after automated driving. Drivers in the automated driving condition exhibited facial indicators of fatigue after 15 to 35 min of driving. Manual drivers only showed similar indicators of fatigue if they suffered from a lack of sleep, and then only after a longer period of driving (approx. 40 min). Several drivers in the automated condition closed their eyes for extended periods of time. In the driving-with-automation condition, mean automation deactivation times after a take-over request were slower for a certain percentage (about 30%) of the drivers with a lack of sleep (M = 3.2 s; SD = 2.1 s) compared to the reaction times after a long drive (M = 2.4 s; SD = 0.9 s). Drivers with automation also took longer than manual drivers to first glance at the speed display after a take-over request and were more likely to stay behind a braking lead vehicle instead of overtaking it. Drivers are unable to stay alert during extended periods of automated driving without non-driving related tasks. Fatigued drivers could

  20. Automated on-line liquid–liquid extraction system for temporal mass spectrometric analysis of dynamic samples

    Energy Technology Data Exchange (ETDEWEB)

    Hsieh, Kai-Ta; Liu, Pei-Han [Department of Applied Chemistry, National Chiao Tung University, 1001 University Rd, Hsinchu, 300, Taiwan (China); Urban, Pawel L. [Department of Applied Chemistry, National Chiao Tung University, 1001 University Rd, Hsinchu, 300, Taiwan (China); Institute of Molecular Science, National Chiao Tung University, 1001 University Rd, Hsinchu, 300, Taiwan (China)

    2015-09-24

    Most real samples cannot directly be infused to mass spectrometers because they could contaminate delicate parts of ion source and guides, or cause ion suppression. Conventional sample preparation procedures limit temporal resolution of analysis. We have developed an automated liquid–liquid extraction system that enables unsupervised repetitive treatment of dynamic samples and instantaneous analysis by mass spectrometry (MS). It incorporates inexpensive open-source microcontroller boards (Arduino and Netduino) to guide the extraction and analysis process. Duration of every extraction cycle is 17 min. The system enables monitoring of dynamic processes over many hours. The extracts are automatically transferred to the ion source incorporating a Venturi pump. Operation of the device has been characterized (repeatability, RSD = 15%, n = 20; concentration range for ibuprofen, 0.053–2.000 mM; LOD for ibuprofen, ∼0.005 mM; including extraction and detection). To exemplify its usefulness in real-world applications, we implemented this device in chemical profiling of pharmaceutical formulation dissolution process. Temporal dissolution profiles of commercial ibuprofen and acetaminophen tablets were recorded during 10 h. The extraction-MS datasets were fitted with exponential functions to characterize the rates of release of the main and auxiliary ingredients (e.g. ibuprofen, k = 0.43 ± 0.01 h{sup −1}). The electronic control unit of this system interacts with the operator via touch screen, internet, voice, and short text messages sent to the mobile phone, which is helpful when launching long-term (e.g. overnight) measurements. Due to these interactive features, the platform brings the concept of the Internet-of-Things (IoT) to the chemistry laboratory environment. - Highlights: • Mass spectrometric analysis normally requires sample preparation. • Liquid–liquid extraction can isolate analytes from complex matrices. • The proposed system automates

  1. Automated on-line liquid–liquid extraction system for temporal mass spectrometric analysis of dynamic samples

    International Nuclear Information System (INIS)

    Hsieh, Kai-Ta; Liu, Pei-Han; Urban, Pawel L.

    2015-01-01

    Most real samples cannot directly be infused to mass spectrometers because they could contaminate delicate parts of ion source and guides, or cause ion suppression. Conventional sample preparation procedures limit temporal resolution of analysis. We have developed an automated liquid–liquid extraction system that enables unsupervised repetitive treatment of dynamic samples and instantaneous analysis by mass spectrometry (MS). It incorporates inexpensive open-source microcontroller boards (Arduino and Netduino) to guide the extraction and analysis process. Duration of every extraction cycle is 17 min. The system enables monitoring of dynamic processes over many hours. The extracts are automatically transferred to the ion source incorporating a Venturi pump. Operation of the device has been characterized (repeatability, RSD = 15%, n = 20; concentration range for ibuprofen, 0.053–2.000 mM; LOD for ibuprofen, ∼0.005 mM; including extraction and detection). To exemplify its usefulness in real-world applications, we implemented this device in chemical profiling of pharmaceutical formulation dissolution process. Temporal dissolution profiles of commercial ibuprofen and acetaminophen tablets were recorded during 10 h. The extraction-MS datasets were fitted with exponential functions to characterize the rates of release of the main and auxiliary ingredients (e.g. ibuprofen, k = 0.43 ± 0.01 h"−"1). The electronic control unit of this system interacts with the operator via touch screen, internet, voice, and short text messages sent to the mobile phone, which is helpful when launching long-term (e.g. overnight) measurements. Due to these interactive features, the platform brings the concept of the Internet-of-Things (IoT) to the chemistry laboratory environment. - Highlights: • Mass spectrometric analysis normally requires sample preparation. • Liquid–liquid extraction can isolate analytes from complex matrices. • The proposed system automates the
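
    The rate constants quoted above (e.g. k = 0.43 1/h for ibuprofen) come from fitting exponential functions to the temporal dissolution profiles. The sketch below shows what such a fit could look like with scipy; the sampling times and signal values are invented, and the first-order release form C(t) = C_inf*(1 - exp(-k*t)) is an assumption rather than necessarily the authors' exact model.

        import numpy as np
        from scipy.optimize import curve_fit

        def release(t, c_inf, k):
            """First-order release: signal approaching c_inf at rate k (1/h)."""
            return c_inf * (1.0 - np.exp(-k * t))

        # hypothetical extraction-MS readings over a 10-h dissolution run
        t = np.arange(0.0, 10.5, 0.5)                                  # hours
        signal = release(t, 1.8, 0.43) + np.random.normal(0, 0.05, t.size)

        (c_inf, k), cov = curve_fit(release, t, signal, p0=[1.0, 0.1])
        k_err = np.sqrt(np.diag(cov))[1]
        print(f"k = {k:.2f} +/- {k_err:.2f} 1/h")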

  2. CO2 emissions, energy usage, and output in Central America

    International Nuclear Information System (INIS)

    Apergis, Nicholas; Payne, James E.

    2009-01-01

    This study extends the recent work of Ang (2007) [Ang, J.B., 2007. CO 2 emissions, energy consumption, and output in France. Energy Policy 35, 4772-4778] in examining the causal relationship between carbon dioxide emissions, energy consumption, and output within a panel vector error correction model for six Central American countries over the period 1971-2004. In long-run equilibrium energy consumption has a positive and statistically significant impact on emissions while real output exhibits the inverted U-shape pattern associated with the Environmental Kuznets Curve (EKC) hypothesis. The short-run dynamics indicate unidirectional causality from energy consumption and real output, respectively, to emissions along with bidirectional causality between energy consumption and real output. In the long-run there appears to be bidirectional causality between energy consumption and emissions.

  3. Virtual automation.

    Science.gov (United States)

    Casis, E; Garrido, A; Uranga, B; Vives, A; Zufiaurre, C

    2001-01-01

    Total laboratory automation (TLA) can be substituted in mid-size laboratories by computer-based sample workflow control (virtual automation). Such a solution has been implemented in our laboratory using PSM, software developed for this purpose in cooperation with Roche Diagnostics (Barcelona, Spain). This software is connected to the online analyzers and to the laboratory information system and is able to control and direct the samples, working as an intermediate station. The only difference with TLA is the replacement of transport belts by laboratory personnel. The implementation of this virtual automation system has allowed us to achieve the main advantages of TLA: a workload increase (64%) with a reduction in the cost per test (43%), a significant reduction in the number of biochemistry primary tubes (from 8 to 2), less aliquoting (from 600 to 100 samples/day), automation of functional testing, a drastic reduction of preanalytical errors (from 11.7 to 0.4% of the tubes) and better total response times for both inpatients (from up to 48 hours to up to 4 hours) and outpatients (from up to 10 days to up to 48 hours). As an additional advantage, virtual automation could be implemented without hardware investment and significant headcount reduction (15% in our lab).

  4. Nonlinear Power-Level Control of the MHTGR Only with the Feedback Loop of Helium Temperature

    Directory of Open Access Journals (Sweden)

    Zhe Dong

    2013-02-01

    Full Text Available Power-level control is a crucial technique for the safe, stable and efficient operation of modular high temperature gas-cooled nuclear reactors (MHTGRs), which have strong inherent safety features and high outlet temperatures. The current power-level controllers of the MHTGRs need measurements of both the nuclear power and the helium temperature, which cannot provide satisfactory control performance and can even induce large oscillations when the neutron sensors are in error. In order to improve the fault tolerance of the control system, it is important to develop a power-level control strategy that only requires the helium temperature. The basis for developing this kind of control law is a state observer for the MHTGR that needs only the helium temperature measurement. With this in mind, such a novel nonlinear state observer is proposed. This observer is globally convergent if there is no disturbance, and has the L2 disturbance attenuation performance if the disturbance is nonzero. The separation principle of this observer is also proven, which means that this observer can recover the performance of both globally asymptotic stabilizers and L2 disturbance attenuators. Then, a new dynamic output feedback power-level control strategy is established, which is composed of this observer and the well-established static state-feedback power-level control based upon iterative dissipation assignment (IDA-PLC). Finally, numerical simulation results show the high performance and feasibility of this newly-built dynamic output feedback power-level controller.
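
    The snippet below is only a minimal linear analogue of the observer-plus-controller structure described above: a Luenberger observer reconstructs the state from a single measured output and a state-feedback law acts on the estimate. The plant matrices and gains are hand-picked and assumed stabilizing; this is neither the MHTGR model nor the proposed nonlinear observer with L2 disturbance attenuation.

        import numpy as np

        A = np.array([[1.0, 0.1], [0.0, 0.95]])   # hypothetical discrete-time plant
        B = np.array([[0.0], [0.1]])
        C = np.array([[1.0, 0.0]])                # only one output is measured

        K = np.array([[2.0, 3.0]])                # state-feedback gain (assumed stabilizing)
        L = np.array([[0.8], [1.5]])              # observer gain (assumed stabilizing)

        x = np.array([[1.0], [0.5]])              # true state
        xh = np.zeros((2, 1))                     # observer estimate
        for _ in range(50):
            y = C @ x                             # measurement
            u = -K @ xh                           # control uses the estimate only
            xh = A @ xh + B @ u + L @ (y - C @ xh)   # plant copy + output-error correction
            x = A @ x + B @ u
        print(np.round(x.ravel(), 4), np.round(xh.ravel(), 4))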

  5. galaxieEST: addressing EST identity through automated phylogenetic analysis.

    Science.gov (United States)

    Nilsson, R Henrik; Rajashekar, Balaji; Larsson, Karl-Henrik; Ursing, Björn M

    2004-07-05

    Research involving expressed sequence tags (ESTs) is intricately coupled to the existence of large, well-annotated sequence repositories. Comparatively complete and satisfactory annotated public sequence libraries are, however, available only for a limited range of organisms, rendering the absence of sequences and gene structure information a tangible problem for those working with taxa lacking an EST or genome sequencing project. Paralogous genes belonging to the same gene family but distinguished by derived characteristics are particularly prone to misidentification and erroneous annotation; high but incomplete levels of sequence similarity are typically difficult to interpret and have formed the basis of many unsubstantiated assumptions of orthology. In these cases, a phylogenetic study of the query sequence together with the most similar sequences in the database may be of great value to the identification process. In order to facilitate this laborious procedure, a project to employ automated phylogenetic analysis in the identification of ESTs was initiated. galaxieEST is an open source Perl-CGI script package designed to complement traditional similarity-based identification of EST sequences through employment of automated phylogenetic analysis. It uses a series of BLAST runs as a sieve to retrieve nucleotide and protein sequences for inclusion in neighbour joining and parsimony analyses; the output includes the BLAST output, the results of the phylogenetic analyses, and the corresponding multiple alignments. galaxieEST is available as an on-line web service for identification of fungal ESTs and for download / local installation for use with any organism group at http://galaxie.cgb.ki.se/galaxieEST.html. By addressing sequence relatedness in addition to similarity, galaxieEST provides an integrative view on EST origin and identity, which may prove particularly useful in cases where similarity searches return one or more pertinent, but not full, matches and

  6. Automation of the National Water Quality Laboratories, U. S. Geological Survey. I. Description of laboratory functions and definition of the automation project

    Energy Technology Data Exchange (ETDEWEB)

    Morris, W.F.; Ames, H.S.

    1977-07-01

    In January 1976, the Water Resources Division of the U.S. Geological Survey asked Lawrence Livermore Laboratory to conduct a feasibility study for automation of the National Water Quality (NWQ) Laboratory in Denver, Colorado (formerly Denver Central Laboratory). Results of the study were published in the Feasibility Study for Automation of the Central Laboratories, Lawrence Livermore Laboratory, Rept. UCRL-52001 (1976). Because the present system for processing water samples was found inadequate to meet the demands of a steadily increasing workload, new automation was recommended. In this document we present details necessary for future implementation of the new system, as well as descriptions of current laboratory automatic data processing and analytical facilities to better define the scope of the project and illustrate what the new system will accomplish. All pertinent inputs, outputs, and other operations that define the project are shown in functional designs.

  7. Automation of the National Water Quality Laboratories, U.S. Geological Survey. I. Description of laboratory functions and definition of the automation project

    International Nuclear Information System (INIS)

    Morris, W.F.; Ames, H.S.

    1977-01-01

    In January 1976, the Water Resources Division of the U.S. Geological Survey asked Lawrence Livermore Laboratory to conduct a feasibility study for automation of the National Water Quality (NWQ) Laboratory in Denver, Colorado (formerly Denver Central Laboratory). Results of the study were published in the Feasibility Study for Automation of the Central Laboratories, Lawrence Livermore Laboratory, Rept. UCRL-52001 (1976). Because the present system for processing water samples was found inadequate to meet the demands of a steadily increasing workload, new automation was recommended. In this document we present details necessary for future implementation of the new system, as well as descriptions of current laboratory automatic data processing and analytical facilities to better define the scope of the project and illustrate what the new system will accomplish. All pertinent inputs, outputs, and other operations that define the project are shown in functional designs

  8. Solvation Structure and Thermodynamic Mapping (SSTMap): An Open-Source, Flexible Package for the Analysis of Water in Molecular Dynamics Trajectories.

    Science.gov (United States)

    Haider, Kamran; Cruz, Anthony; Ramsey, Steven; Gilson, Michael K; Kurtzman, Tom

    2018-01-09

    We have developed SSTMap, a software package for mapping structural and thermodynamic water properties in molecular dynamics trajectories. The package introduces automated analysis and mapping of local measures of frustration and enhancement of water structure. The thermodynamic calculations are based on Inhomogeneous Fluid Solvation Theory (IST), which is implemented using both site-based and grid-based approaches. The package also extends the applicability of solvation analysis calculations to multiple molecular dynamics (MD) simulation programs by using existing cross-platform tools for parsing MD parameter and trajectory files. SSTMap is implemented in Python and contains both command-line tools and a Python module to facilitate flexibility in setting up calculations and for automated generation of large data sets involving analysis of multiple solutes. Output is generated in formats compatible with popular Python data science packages. This tool will be used by the molecular modeling community for computational analysis of water in problems of biophysical interest such as ligand binding and protein function.
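
    To make the grid-based idea concrete, the sketch below bins water-oxygen coordinates from a trajectory into voxels and normalizes by the bulk number density, which is the first step of a grid-based solvation analysis. It deliberately avoids the SSTMap API; the random coordinates, box size and voxel size are arbitrary stand-ins for a parsed MD trajectory.

        import numpy as np

        rng = np.random.default_rng(0)
        n_frames, n_waters, box = 100, 500, 30.0             # frames, waters, box edge (A)
        coords = rng.uniform(0.0, box, size=(n_frames, n_waters, 3))  # stand-in trajectory

        edges = np.arange(0.0, box + 0.5, 0.5)                # 0.5 A voxels
        counts = np.zeros((len(edges) - 1,) * 3)
        for frame in coords:
            h, _ = np.histogramdd(frame, bins=(edges, edges, edges))
            counts += h

        voxel_vol = 0.5 ** 3
        bulk_density = n_waters / box ** 3                    # bulk number density
        g = counts / n_frames / voxel_vol / bulk_density      # occupancy relative to bulk
        print(round(g.mean(), 3), round(g.max(), 3))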

  9. Translation: Aids, Robots, and Automation.

    Science.gov (United States)

    Andreyewsky, Alexander

    1981-01-01

    Examines electronic aids to translation both as ways to automate it and as an approach to solve problems resulting from shortage of qualified translators. Describes the limitations of robotic MT (Machine Translation) systems, viewing MAT (Machine-Aided Translation) as the only practical solution and the best vehicle for further automation. (MES)

  10. Delay signatures in the chaotic intensity output of a quantum dot ...

    Indian Academy of Sciences (India)

    Journal of Physics, May 2016, pp. 1021–1030. Delay signatures in the chaotic intensity output ... Research in complex systems requires quantitative predictions of their dynamics, even ... used methods for estimating delay in complex dynamics are the autocorrelation function ... The authors thank BRNS for its financial support.

  11. Public Investment and Output Performance: Evidence from Nigeria

    Directory of Open Access Journals (Sweden)

    Aregbeyen Omo

    2016-05-01

    Full Text Available This study examined the direct and indirect long-run relationships and dynamic interactions between public investment (PI) and output performance in Nigeria using annual data spanning 1970-2010. A macro-econometric model derived from Keynes' income-expenditure framework was employed. The model was disaggregated into demand and supply sides to trace the direct and indirect effects of PI on aggregate output. The direct supply-side effect was assessed using the magnitude of the PI multiplier coefficient, while the indirect effect of PI on the demand side was evaluated with the marginal propensity to consume, the accelerator coefficient and the import multiplier. The results showed a relatively weak direct effect of PI on aggregate output, while the indirect effects were stronger, with the import multiplier being the most pronounced. This is attributed to declining capital expenditure, poor implementation and the low quality of PI projects due to widespread corruption. By and large, we concluded that PI exerted considerable influence on aggregate output.

  12. Manual versus automated blood sampling

    DEFF Research Database (Denmark)

    Teilmann, A C; Kalliokoski, Otto; Sørensen, Dorte B

    2014-01-01

    Facial vein (cheek blood) and caudal vein (tail blood) phlebotomy are two commonly used techniques for obtaining blood samples from laboratory mice, while automated blood sampling through a permanent catheter is a relatively new technique in mice. The present study compared physiological parameters......, glucocorticoid dynamics as well as the behavior of mice sampled repeatedly for 24 h by cheek blood, tail blood or automated blood sampling from the carotid artery. Mice subjected to cheek blood sampling lost significantly more body weight, had elevated levels of plasma corticosterone, excreted more fecal...... corticosterone metabolites, and expressed more anxious behavior than did the mice of the other groups. Plasma corticosterone levels of mice subjected to tail blood sampling were also elevated, although less significantly. Mice subjected to automated blood sampling were less affected with regard to the parameters...

  13. Office automation: a look beyond word processing

    OpenAIRE

    DuBois, Milan Ephriam, Jr.

    1983-01-01

    Approved for public release; distribution is unlimited. Word processing was the first of various forms of office automation technologies to gain widespread acceptance and usability in the business world. For many, it remains the only form of office automation technology. Office automation, however, is not just word processing, although it does include the function of facilitating and manipulating text. In reality, office automation is not one innovation, or one office system, or one tech...

  14. Comparison of Size Modulation Standard Automated Perimetry and Conventional Standard Automated Perimetry with a 10-2 Test Program in Glaucoma Patients.

    Science.gov (United States)

    Hirasawa, Kazunori; Takahashi, Natsumi; Satou, Tsukasa; Kasahara, Masayuki; Matsumura, Kazuhiro; Shoji, Nobuyuki

    2017-08-01

    This prospective observational study compared the performance of size modulation standard automated perimetry with the Octopus 600 10-2 test program, with stimulus size modulation during testing, based on stimulus intensity and conventional standard automated perimetry, with that of the Humphrey 10-2 test program in glaucoma patients. Eighty-seven eyes of 87 glaucoma patients underwent size modulation standard automated perimetry with Dynamic strategy and conventional standard automated perimetry using the SITA standard strategy. The main outcome measures were global indices, point-wise threshold, visual defect size and depth, reliability indices, and test duration; these were compared between size modulation standard automated perimetry and conventional standard automated perimetry. Global indices and point-wise threshold values between size modulation standard automated perimetry and conventional standard automated perimetry were moderately to strongly correlated (p 33.40, p modulation standard automated perimetry than with conventional standard automated perimetry, but the visual-field defect size was smaller (p modulation-standard automated perimetry than on conventional standard automated perimetry. The reliability indices, particularly the false-negative response, of size modulation standard automated perimetry were worse than those of conventional standard automated perimetry (p modulation standard automated perimetry than with conventional standard automated perimetry (p = 0.02). Global indices and the point-wise threshold value of the two testing modalities correlated well. However, the potential of a large stimulus presented at an area with a decreased sensitivity with size modulation standard automated perimetry could underestimate the actual threshold in the 10-2 test protocol, as compared with conventional standard automated perimetry.

  15. Automated Flight Routing Using Stochastic Dynamic Programming

    Science.gov (United States)

    Ng, Hok K.; Morando, Alex; Grabbe, Shon

    2010-01-01

    Airspace capacity reduction due to convective weather impedes air traffic flows and causes traffic congestion. This study presents an algorithm that reroutes flights in the presence of winds, enroute convective weather, and congested airspace based on stochastic dynamic programming. A stochastic disturbance model incorporates into the reroute design process the capacity uncertainty. A trajectory-based airspace demand model is employed for calculating current and future airspace demand. The optimal routes minimize the total expected traveling time, weather incursion, and induced congestion costs. They are compared to weather-avoidance routes calculated using deterministic dynamic programming. The stochastic reroutes have smaller deviation probability than the deterministic counterpart when both reroutes have similar total flight distance. The stochastic rerouting algorithm takes into account all convective weather fields with all severity levels while the deterministic algorithm only accounts for convective weather systems exceeding a specified level of severity. When the stochastic reroutes are compared to the actual flight routes, they have similar total flight time, and both have about 1% of travel time crossing congested enroute sectors on average. The actual flight routes induce slightly less traffic congestion than the stochastic reroutes but intercept more severe convective weather.
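
    A toy version of the stochastic routing idea is sketched below: the aircraft advances one grid column per stage, each cell carries a probability of convective weather, and backward dynamic programming minimizes the expected cost of travel plus weather incursion. Grid size, weather probabilities and cost weights are invented, and the formulation is far simpler than the trajectory-based demand and capacity models in the study.

        import numpy as np

        rng = np.random.default_rng(1)
        rows, cols = 7, 12
        p_weather = rng.uniform(0.0, 0.6, size=(rows, cols))   # hypothetical forecast
        step_cost, weather_penalty = 1.0, 10.0

        value = np.zeros((rows, cols))
        policy = np.zeros((rows, cols), dtype=int)
        value[:, -1] = p_weather[:, -1] * weather_penalty      # terminal column
        for c in range(cols - 2, -1, -1):                      # backward recursion
            for r in range(rows):
                best, best_move = np.inf, 0
                for dr in (-1, 0, 1):                          # stay or shift one row
                    nr = r + dr
                    if 0 <= nr < rows:
                        cost = step_cost + 0.2 * abs(dr) + value[nr, c + 1]
                        if cost < best:
                            best, best_move = cost, dr
                value[r, c] = p_weather[r, c] * weather_penalty + best
                policy[r, c] = best_move

        start = rows // 2
        print("expected cost:", round(value[start, 0], 2), "first move:", policy[start, 0])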

  16. Automated Attitude Sensor Calibration: Progress and Plans

    Science.gov (United States)

    Sedlak, Joseph; Hashmall, Joseph

    2004-01-01

    This paper describes ongoing work at NASA/Goddard Space Flight Center to improve the quality of spacecraft attitude sensor calibration and reduce costs by automating parts of the calibration process. The new calibration software can autonomously preview data quality over a given time span, select a subset of the data for processing, perform the requested calibration, and output a report. This level of automation is currently being implemented for two specific applications: inertial reference unit (IRU) calibration and sensor alignment calibration. The IRU calibration utility makes use of a sequential version of the Davenport algorithm. This utility has been successfully tested with simulated and actual flight data. The alignment calibration is still in the early testing stage. Both utilities will be incorporated into the institutional attitude ground support system.
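
    The IRU calibration utility is described as using a sequential version of the Davenport algorithm. The batch form of Davenport's q-method is compact enough to sketch: assemble the K matrix from weighted vector observations and take the eigenvector of its largest eigenvalue as the optimal attitude quaternion. The sensor vectors below are hypothetical, and the sequential/recursive variant used in the utility is not shown.

        import numpy as np

        def davenport_q(body_vecs, ref_vecs, weights):
            """Davenport's q-method for Wahba's problem: returns the quaternion
            (vector part first, scalar last) of the attitude matrix that best maps
            reference-frame unit vectors onto body-frame unit vectors."""
            B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
            z = sum(w * np.cross(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
            sigma = np.trace(B)
            K = np.zeros((4, 4))
            K[:3, :3] = B + B.T - sigma * np.eye(3)
            K[:3, 3] = z
            K[3, :3] = z
            K[3, 3] = sigma
            vals, vecs = np.linalg.eigh(K)
            return vecs[:, np.argmax(vals)]          # eigenvector of the largest eigenvalue

        # hypothetical unit vectors from two attitude sensors (a 90-degree yaw case)
        r1, r2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
        b1, b2 = np.array([0.0, 1.0, 0.0]), np.array([-1.0, 0.0, 0.0])
        print(np.round(davenport_q([b1, b2], [r1, r2], [0.5, 0.5]), 4))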

  17. Automated audiometry using apple iOS-based application technology.

    Science.gov (United States)

    Foulad, Allen; Bui, Peggy; Djalilian, Hamid

    2013-11-01

    The aim of this study is to determine the feasibility of an Apple iOS-based automated hearing testing application and to compare its accuracy with conventional audiometry. Prospective diagnostic study. Setting Academic medical center. An iOS-based software application was developed to perform automated pure-tone hearing testing on the iPhone, iPod touch, and iPad. To assess for device variations and compatibility, preliminary work was performed to compare the standardized sound output (dB) of various Apple device and headset combinations. Forty-two subjects underwent automated iOS-based hearing testing in a sound booth, automated iOS-based hearing testing in a quiet room, and conventional manual audiometry. The maximum difference in sound intensity between various Apple device and headset combinations was 4 dB. On average, 96% (95% confidence interval [CI], 91%-100%) of the threshold values obtained using the automated test in a sound booth were within 10 dB of the corresponding threshold values obtained using conventional audiometry. When the automated test was performed in a quiet room, 94% (95% CI, 87%-100%) of the threshold values were within 10 dB of the threshold values obtained using conventional audiometry. Under standardized testing conditions, 90% of the subjects preferred iOS-based audiometry as opposed to conventional audiometry. Apple iOS-based devices provide a platform for automated air conduction audiometry without requiring extra equipment and yield hearing test results that approach those of conventional audiometry.

  18. Direct output feedback control of discrete-time systems

    International Nuclear Information System (INIS)

    Lin, C.C.; Chung, L.L.; Lu, K.H.

    1993-01-01

    An optimal direct output feedback control algorithm is developed for discrete-time systems with the consideration of time delay in control force action. Optimal constant output feedback gains are obtained through variational process such that certain prescribed quadratic performance index is minimized. Discrete-time control forces are then calculated from the multiplication of output measurements by these pre-calculated feedback gains. According to the proposed algorithm, structural system is assured to remain stable even in the presence of time delay. The number of sensors and controllers may be very small as compared with the dimension of states. Numerical results show that direct velocity feedback control is more sensitive to time delay than state feedback but, is still quite effective in reducing the dynamic responses under earthquake excitation. (author)
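
    A minimal numerical illustration of direct output feedback with delayed control action is given below: a lightly damped oscillator is controlled by feeding back only its measured velocity, with the command applied a few samples late. The gain is picked by hand rather than obtained from the variational optimization described in the abstract, and the structural model is a single assumed mode.

        import numpy as np

        dt, wn, zeta = 0.02, 2 * np.pi, 0.02               # 1 Hz mode, 2% damping
        A = np.eye(2) + dt * np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])
        B = dt * np.array([[0.0], [1.0]])
        C = np.array([[0.0, 1.0]])                          # only velocity is measured

        G = np.array([[4.0]])                               # output feedback gain
        d = 3                                               # control delay in samples

        x = np.array([[0.05], [0.0]])
        u_buffer = [np.zeros((1, 1))] * d                   # queue of delayed commands
        disp = []
        for _ in range(1000):
            y = C @ x
            u_buffer.append(-G @ y)                         # command computed now ...
            u = u_buffer.pop(0)                             # ... applied d samples later
            x = A @ x + B @ u
            disp.append(abs(x[0, 0]))
        print("max |displacement| over the last second:", round(max(disp[-50:]), 5))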

  19. Automation of Publications in Official Statistics using R

    Directory of Open Access Journals (Sweden)

    Guido Schulz

    2018-03-01

    Full Text Available A key task of official statistical authorities is to collect and disseminate indicators periodically. Automation using a wide range of R packages bears massive potential to cut down the resources necessary for the creation of publications. Furthermore, automation in R has the potential to improve transparency, punctuality and coherence of statistical products. The dynamic reporting engine knitr in particular allows for an efficient combination of R’s functionalities of data retrieval, data manipulation and customizable plotting on the one hand, and the layout and typesetting flexibility of LaTeX or other markup languages on the other. This allows official statistical authorities to produce either ready-to-print PDFs or interactive websites while adhering to their corporate design requirements. Furthermore, dynamic reporting makes it possible to update periodic publications automatically. A work-in-progress example of automated statistical country profiles – a product the German Federal Statistical Office regularly publishes based on a wide range of official international sources – will be presented to illustrate both advantages and challenges in the practical use of dynamic reporting using R and knitr in particular.

  20. CO{sub 2} emissions, energy usage, and output in Central America

    Energy Technology Data Exchange (ETDEWEB)

    Apergis, Nicholas [Department of Banking and Financial Management, University of Piraeus, Karaoli and Dimitriou 80, Piraeus, ATTIKI 18534 (Greece); Payne, James E. [College of Arts and Sciences, Illinois State University, Campus Box 4100, Normal, IL 61790-4100 (United States)

    2009-08-15

    This study extends the recent work of Ang (2007) [Ang, J.B., 2007. CO{sub 2} emissions, energy consumption, and output in France. Energy Policy 35, 4772-4778] in examining the causal relationship between carbon dioxide emissions, energy consumption, and output within a panel vector error correction model for six Central American countries over the period 1971-2004. In long-run equilibrium energy consumption has a positive and statistically significant impact on emissions while real output exhibits the inverted U-shape pattern associated with the Environmental Kuznets Curve (EKC) hypothesis. The short-run dynamics indicate unidirectional causality from energy consumption and real output, respectively, to emissions along with bidirectional causality between energy consumption and real output. In the long-run there appears to be bidirectional causality between energy consumption and emissions. (author)

  1. Exploration on Automated Software Requirement Document Readability Approaches

    OpenAIRE

    Chen, Mingda; He, Yao

    2017-01-01

    Context. The requirements analysis phase, as the very beginning of software development process, has been identified as a quite important phase in the software development lifecycle. Software Requirement Specification (SRS) is the output of requirements analysis phase, whose quality factors play an important role in the evaluation work. Readability is a quite important SRS quality factor, but there are few available automated approaches for readability measurement, because of the tight depend...

  2. Applications of the Automated SMAC Modal Parameter Extraction Package

    International Nuclear Information System (INIS)

    MAYES, RANDALL L.; DORRELL, LARRY R.; KLENKE, SCOTT E.

    1999-01-01

    An algorithm known as SMAC (Synthesize Modes And Correlate), based on principles of modal filtering, has been in development for a few years. The new capabilities of the automated version are demonstrated on test data from a complex shell/payload system. Examples of extractions from impact and shaker data are shown. The automated algorithm extracts 30 to 50 modes in the bandwidth from each column of the frequency response function matrix. Examples of the synthesized Mode Indicator Functions (MIFs) compared with the actual MIFs show the accuracy of the technique. A data set for one input and 170 accelerometer outputs can typically be reduced in an hour. Application to a test with some complex modes is also demonstrated
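
    The mode indicator functions mentioned above can be illustrated independently of SMAC. A common choice is the complex mode indicator function (CMIF): the singular values of the FRF matrix at each spectral line, whose peaks flag modes. The sketch below synthesizes a small FRF matrix from two assumed modes and computes the CMIF; it is not the SMAC extraction algorithm itself.

        import numpy as np

        freqs = np.linspace(1.0, 50.0, 1000)               # Hz
        w = 2 * np.pi * freqs
        # two hypothetical proportionally damped modes: (natural freq, damping, shape)
        modes = [(2 * np.pi * 12.0, 0.02, np.array([1.0, 0.6, -0.4])),
                 (2 * np.pi * 31.0, 0.03, np.array([0.2, -0.9, 0.7]))]
        n_out, n_in = 3, 2
        H = np.zeros((len(freqs), n_out, n_in), dtype=complex)
        for wn, zeta, shape in modes:
            for j in range(n_in):                          # inputs act at DOFs 0 and 1
                H[:, :, j] += np.outer(
                    1.0 / (wn**2 - w**2 + 2j * zeta * wn * w), shape * shape[j])

        # CMIF: singular values of the FRF matrix at every frequency line
        cmif = np.array([np.linalg.svd(H[k], compute_uv=False) for k in range(len(freqs))])
        print("dominant CMIF peak near (Hz):", round(freqs[np.argmax(cmif[:, 0])], 1))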

  3. Potentials and challenges associated with automated closed dynamic chamber measurements of soil CO2 fluxes

    Science.gov (United States)

    Görres, Carolyn-Monika; Kammann, Claudia; Ceulemans, Reinhart

    2015-04-01

    Soil respiration fluxes are influenced by natural factors such as climate and soil type, but also by anthropogenic activities in managed ecosystems. As a result, soil CO2 fluxes show a large intra- and interannual as well as intra- and intersite variability. Most of the available soil CO2 flux data giving insights into this variability have been measured with manually closed static chambers, but technological advances in the past 15 years have also led to an increased use of automated closed chamber systems. The great advantage of automated chambers in comparison to manually operated chambers is the higher temporal resolution of the flux data. This is especially important if we want to better understand the effects of short-term events, e.g. fertilization or heavy rainfall, on soil CO2 flux variability. However, the chamber method is an invasive measurement method which can potentially alter soil CO2 fluxes and lead to biased measurement results. In the peer-reviewed literature, many papers compare the field performance and results of different closed static chamber designs, or compare manual chambers with automated chamber systems, to identify potential biases in CO2 flux measurements, and thus help to reduce uncertainties in the flux data. However, inter-comparisons of different automated closed dynamic chamber systems are still lacking. Here we are going to present a field comparison of the most-cited automated chamber system, the LI-8100A Automated Soil Flux System, with the also commercially available Greenhouse Gas Monitoring System AGPS. Both measurement systems were installed side by side at a recently harvested poplar bioenergy plantation (POPFULL, http://uahost.uantwerpen.be/popfull/) from April 2014 until August 2014. The plantation provided optimal comparison conditions with a bare field situation after the harvest and a regrowing canopy resulting in a broad variety of microclimates. Furthermore, the plantation was planted in a double-row system with
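
    Whatever the chamber system, closed-chamber measurements reduce to the same flux computation: regress the CO2 concentration rise during chamber closure and convert the slope to a molar flux with the ideal gas law. The sketch below shows that conversion; the chamber volume, footprint area and concentration trace are hypothetical and not taken from either instrument described above.

        import numpy as np

        t = np.arange(0.0, 120.0, 2.0)                                  # s since closure
        co2 = 400.0 + 0.08 * t + np.random.normal(0, 0.3, t.size)       # ppm (synthetic)

        slope_ppm_s = np.polyfit(t, co2, 1)[0]                          # dC/dt in ppm/s
        volume, area = 0.004, 0.03          # chamber volume (m^3) and soil area (m^2)
        pressure, temp_k, R = 101325.0, 293.15, 8.314                   # Pa, K, J/(mol K)

        # ppm -> mole fraction, ideal gas law -> moles, then per unit area and time
        flux = slope_ppm_s * 1e-6 * pressure * volume / (R * temp_k * area)
        print(f"soil CO2 efflux ~ {flux * 1e6:.2f} umol m-2 s-1")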

  4. Automated Cryocooler Monitor and Control System

    Science.gov (United States)

    Britcliffe, Michael J.; Hanscon, Theodore R.; Fowler, Larry E.

    2011-01-01

    A system was designed to automate cryogenically cooled low-noise amplifier systems used in the NASA Deep Space Network. It automates the entire operation of the system including cool-down, warm-up, and performance monitoring. The system is based on a single-board computer with custom software and hardware to monitor and control the cryogenic operation of the system. The system provides local display and control, and can be operated remotely via a Web interface. The system controller is based on a commercial single-board computer with onboard data acquisition capability. The commercial hardware includes a microprocessor, an LCD (liquid crystal display), seven LED (light emitting diode) displays, a seven-key keypad, an Ethernet interface, 40 digital I/O (input/output) ports, 11 A/D (analog to digital) inputs, four D/A (digital to analog) outputs, and an external relay board to control the high-current devices. The temperature sensors used are commercial silicon diode devices that provide a non-linear voltage output proportional to temperature. The devices are excited with a 10-microamp bias current. The system is capable of monitoring and displaying three temperatures. The vacuum sensors are commercial thermistor devices. The output of the sensors is a non-linear voltage proportional to vacuum pressure in the 1-Torr to 1-millitorr range. Two sensors are used. One measures the vacuum pressure in the cryocooler and the other the pressure at the input to the vacuum pump. The helium pressure sensor is a commercial device that provides a linear voltage output from 1 to 5 volts, corresponding to a gas pressure from 0 to 3.5 MPa (approx. = 500 psig). Control of the vacuum process is accomplished with a commercial electrically operated solenoid valve. A commercial motor starter is used to control the input power of the compressor. The warm-up heaters are commercial power resistors sized to provide the appropriate power for the thermal mass of the particular system, and

  5. Synthesis Study on Transitions in Signal Infrastructure and Control Algorithms for Connected and Automated Transportation

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, H. M. Abdul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wang, Hong [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Young, Stan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sperling, Joshua [National Renewable Energy Lab. (NREL), Golden, CO (United States); Beck, John [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-06-01

    Documenting existing state of practice is an initial step in developing future control infrastructure to be co-deployed for heterogeneous mix of connected and automated vehicles with human drivers while leveraging benefits to safety, congestion, and energy. With advances in information technology and extensive deployment of connected and automated vehicle technology anticipated over the coming decades, cities globally are making efforts to plan and prepare for these transitions. CAVs not only offer opportunities to improve transportation systems through enhanced safety and efficient operations of vehicles. There are also significant needs in terms of exploring how best to leverage vehicle-to-vehicle (V2V) technology, vehicle-to-infrastructure (V2I) technology and vehicle-to-everything (V2X) technology. Both Connected Vehicle (CV) and Connected and Automated Vehicle (CAV) paradigms feature bi-directional connectivity and share similar applications in terms of signal control algorithm and infrastructure implementation. The discussion in our synthesis study assumes the CAV/CV context where connectivity exists with or without automated vehicles. Our synthesis study explores the current state of signal control algorithms and infrastructure, reports the completed and newly proposed CV/CAV deployment studies regarding signal control schemes, reviews the deployment costs for CAV/AV signal infrastructure, and concludes with a discussion on the opportunities such as detector free signal control schemes and dynamic performance management for intersections, and challenges such as dependency on market adaptation and the need to build a fault-tolerant signal system deployment in a CAV/CV environment. The study will serve as an initial critical assessment of existing signal control infrastructure (devices, control instruments, and firmware) and control schemes (actuated, adaptive, and coordinated-green wave). Also, the report will help to identify the future needs for the signal

  6. Automation trust and attention allocation in multitasking workspace.

    Science.gov (United States)

    Karpinsky, Nicole D; Chancey, Eric T; Palmer, Dakota B; Yamani, Yusuke

    2018-07-01

    Previous research suggests that operators with high workload can distrust and then poorly monitor automation, which has been generally inferred from automation dependence behaviors. To test automation monitoring more directly, the current study measured operators' visual attention allocation, workload, and trust toward imperfect automation in a dynamic multitasking environment. Participants concurrently performed a manual tracking task with two levels of difficulty and a system monitoring task assisted by an unreliable signaling system. Eye movement data indicate that operators allocate less visual attention to monitor automation when the tracking task is more difficult. Participants reported reduced levels of trust toward the signaling system when the tracking task demanded more focused visual attention. Analyses revealed that trust mediated the relationship between the load of the tracking task and attention allocation in Experiment 1, an effect that was not replicated in Experiment 2. Results imply a complex process underlying task load, visual attention allocation, and automation trust during multitasking. Automation designers should consider operators' task load in multitasking workspaces to avoid reduced automation monitoring and distrust toward imperfect signaling systems. Copyright © 2018. Published by Elsevier Ltd.

  7. Output Feedback Tracking Control of an Underactuated Quad-Rotor UAV

    National Research Council Canada - National Science Library

    Lee, DongBin; Burg, Timothy; Xian, Bin; Dawson, Darren

    2006-01-01

    ...) using output feedback (OFB). Specifically, an observer is designed to estimate the velocities and an output feedback controller is designed for a nonlinear UAV system in which only position and angles are measurable...

  8. Continuous dynamic monitoring of a lively footbridge for serviceability assessment and damage detection

    Science.gov (United States)

    Hu, Wei-Hua; Moutinho, Carlos; Caetano, Elsa; Magalhães, Filipe; Cunha, Álvaro

    2012-11-01

    This paper aims at analyzing the feasibility of applying a vibration based damage detection approach, based on Principal Components Analysis (PCA), to eliminate environmental effects using the large amount of high quality data continuously collected by the dynamic monitoring system of Pedro e Inês footbridge since 2007. Few works describe real data, regularly collected along several years by reliable continuous dynamic monitoring systems in bridge structures. One main contribution is to show a large difference between making academic research based on numerical simulations or limited experimental samples, and making validity tests of innovative procedures using large high quality databases collected in real structures. The monitoring system, installed with the only initial objective of checking the efficiency of vibration control devices used to mitigate lateral and vertical vibrations, was therefore further developed for research purposes by implementing LabVIEW based automated signal processing and output-only modal identification routines, that enabled the analysis of the correlation of modal estimates with the temperature and the vibration level, as well as the automatic tracking of modal parameters along several years. With the final purpose of detecting potential structural damage at an early stage, the Principal Components Analysis (PCA) was employed to effectively eliminate temperature effects, whereas Novelty Analysis on the residual errors of the PCA model was used to provide a statistical indication of damage. The efficiency of this vibration based damage detection approach was verified using 3 years of measurements at Pedro e Inês footbridge under operational conditions and simulating several realistic damage scenarios affecting the boundary conditions. It is demonstrated that such a dynamic monitoring system, apart from providing relevant instantaneous dynamic information, working as an alert system associated to the verification of vibration
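
    The PCA/novelty scheme described above can be sketched in a few lines: fit principal components to natural frequencies identified during a healthy reference period so that the dominant, temperature-driven variation is captured, project new observations onto the residual subspace, and flag observations whose residual norm exceeds a reference threshold. The frequency and temperature series below are synthetic, a single environmental component is assumed, and a small frequency drop is injected to stand in for damage.

        import numpy as np

        rng = np.random.default_rng(2)
        n_obs = 2000
        temperature = 10.0 + 15.0 * np.sin(np.linspace(0, 20 * np.pi, n_obs))
        baseline = np.array([1.5, 2.0, 3.4, 5.1])                 # Hz, hypothetical modes
        freqs = baseline - 0.001 * temperature[:, None]           # temperature effect
        freqs += rng.normal(0, 0.002, freqs.shape)                # identification scatter
        freqs[1500:, 0] -= 0.01                                   # simulated damage

        train = freqs[:1000]                                      # healthy reference data
        mean, std = train.mean(0), train.std(0)
        z = (train - mean) / std
        _, _, vt = np.linalg.svd(z, full_matrices=False)
        keep = vt[:1]                                             # environmental component

        def novelty(batch):
            zb = (batch - mean) / std
            residual = zb - zb @ keep.T @ keep                    # part PCA cannot explain
            return np.linalg.norm(residual, axis=1)

        threshold = np.percentile(novelty(train), 99)
        print("exceedances after observation 1000:",
              int((novelty(freqs[1000:]) > threshold).sum()))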

  9. Nonlinear observer output-feedback MPC treatment scheduling for HIV

    Directory of Open Access Journals (Sweden)

    Zurakowski Ryan

    2011-05-01

    Full Text Available Background: Mathematical models of the immune response to the Human Immunodeficiency Virus demonstrate the potential for dynamic schedules of Highly Active Anti-Retroviral Therapy to enhance Cytotoxic Lymphocyte-mediated control of HIV infection. Methods: In previous work we have developed a model predictive control (MPC)-based method for determining optimal treatment interruption schedules for this purpose. In this paper, we introduce a nonlinear observer for the HIV-immune response system and an integrated output-feedback MPC approach for implementing the treatment interruption scheduling algorithm using the easily available viral load measurements. We use Monte-Carlo approaches to test the robustness of the algorithm. Results: The nonlinear observer shows robust state tracking while preserving state positivity, both for continuous and discrete measurements. The integrated output-feedback MPC algorithm stabilizes the desired steady state. Monte-Carlo testing shows significant robustness to modeling error, with 90% success rates in stabilizing the desired steady state with 15% variance from nominal on all model parameters. Conclusions: The possibility of enhancing immune responsiveness to HIV through dynamic scheduling of treatment is exciting. Output-feedback Model Predictive Control is uniquely well-suited to solutions of these types of problems. The unique constraints of state positivity and very slow sampling are addressable by using a special-purpose nonlinear state estimator, as described in this paper. This shows the possibility of using output-feedback MPC-based algorithms for this purpose.

  10. Adaptative synchronization in multi-output fractional-order complex dynamical networks and secure communications

    Science.gov (United States)

    Mata-Machuca, Juan L.; Aguilar-López, Ricardo

    2018-01-01

    This work deals with the adaptative synchronization of complex dynamical networks with fractional-order nodes and its application in secure communications employing chaotic parameter modulation. The complex network is composed of multiple fractional-order systems with mismatch parameters and the coupling functions are given to realize the network synchronization. We introduce a fractional algebraic synchronizability condition (FASC) and a fractional algebraic identifiability condition (FAIC) which are used to know if the synchronization and parameters estimation problems can be solved. To overcome these problems, an adaptative synchronization methodology is designed; the strategy consists in proposing multiple receiver systems which tend to follow asymptotically the uncertain transmitters systems. The coupling functions and parameters of the receiver systems are adjusted continually according to a convenient sigmoid-like adaptative controller (SLAC), until the measurable output errors converge to zero, hence, synchronization between transmitter and receivers is achieved and message signals are recovered. Indeed, the stability analysis of the synchronization error is based on the fractional Lyapunov direct method. Finally, numerical results corroborate the satisfactory performance of the proposed scheme by means of the synchronization of a complex network consisting of several fractional-order unified chaotic systems.

  11. Technical study for the automation and control of processes of the chemical processing plant for liquid radioactive waste at Racso Nuclear Center

    International Nuclear Information System (INIS)

    Quevedo D, M.; Ayala S, A.

    1997-01-01

    The purpose of this study is to introduce the development of an automation and control system in a chemical processing plant for liquid radioactive waste of low and medium activity. The control system established for the chemical processing plant at RACSO Nuclear Center is described. It is an on-off sequential system with feedback. This type of control has been chosen according to the volumes to be treated at the plant, as processing is carried out in batches. The system will be governed by a modular programmable logic controller (PLC) with a minimum of 24 digital inputs, 01 analog input, 16 digital outputs and 01 analog output. Digital inputs and outputs are found mainly at the level sensors of the tanks and at the solenoid-type electro-valve controls. Analog inputs and outputs have been considered for the pH control. The overall system has been divided into three control loops. The loops considered for the operation of the plant are described; the plant has storing, fitting, processing and clarifying tanks. National Instruments' Lookout software has been used for simulation, constituting an important tool not only for the design phase but also for the operational one, since this software will be used as the SCADA system. Finally, the advantages and benefits of this automation system are analyzed: radiation doses received by occupationally exposed workers are reduced and the reliability of the operation of the system is increased. (authors)

  12. Finite-time output feedback stabilization of high-order uncertain nonlinear systems

    Science.gov (United States)

    Jiang, Meng-Meng; Xie, Xue-Jun; Zhang, Kemei

    2018-06-01

    This paper studies the problem of finite-time output feedback stabilization for a class of high-order nonlinear systems with the unknown output function and control coefficients. Under the weaker assumption that output function is only continuous, by using homogeneous domination method together with adding a power integrator method, introducing a new analysis method, the maximal open sector Ω of output function is given. As long as output function belongs to any closed sector included in Ω, an output feedback controller can be developed to guarantee global finite-time stability of the closed-loop system.

  13. Detection of Damage in a Lattice Mast Excited by Wind by Dynamic Measurements

    DEFF Research Database (Denmark)

    Pedersen, Lars; Brincker, Rune

    2007-01-01

    The paper illustrates the effectiveness of monitoring the dynamic response of a system for detection of damage, herein using an output-only assessment scheme. The system is a 20 m high steel lattice mast excited by wind, and the mast is instrumented with accelerometers picking up dynamic response...

  14. WARACS: Wrappers to Automate the Reconstruction of Ancestral Character States.

    Science.gov (United States)

    Gruenstaeudl, Michael

    2016-02-01

    Reconstructions of ancestral character states are among the most widely used analyses for evaluating the morphological, cytological, or ecological evolution of an organismic lineage. The software application Mesquite remains the most popular application for such reconstructions among plant scientists, even though its support for automating complex analyses is limited. A software tool is needed that automates the reconstruction and visualization of ancestral character states with Mesquite and similar applications. A set of command line-based Python scripts was developed that (a) communicates standardized input to and output from the software applications Mesquite, BayesTraits, and TreeGraph2; (b) automates the process of ancestral character state reconstruction; and (c) facilitates the visualization of reconstruction results. WARACS provides a simple tool that streamlines the reconstruction and visualization of ancestral character states over a wide array of parameters, including tree distribution, character state, and optimality criterion.

  15. Robust Design of SAW Gas Sensors by Taguchi Dynamic Method

    Directory of Open Access Journals (Sweden)

    Hsun-Heng Tsai

    2009-02-01

    Full Text Available This paper adopts Taguchi’s signal-to-noise ratio analysis to optimize the dynamic characteristics of a SAW gas sensor system whose output response is linearly related to the input signal. The goal of the present dynamic characteristics study is to increase the sensitivity of the measurement system while simultaneously reducing its variability. A time- and cost-efficient finite element analysis method is utilized to investigate the effects of the deposited mass upon the resonant frequency output of the SAW biosensor. The results show that the proposed methodology not only reduces the design cost but also promotes the performance of the sensors.
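
    For a dynamic characteristic with a linear input-output relation, the Taguchi signal-to-noise ratio is commonly computed from the zero-intercept slope beta and the mean squared deviation about the fitted line, SN = 10*log10(beta^2 / MSE). The sketch below uses this simplified form with made-up frequency-shift readings at two noise conditions; it is not the finite element analysis performed in the paper.

        import numpy as np

        M = np.array([1.0, 2.0, 3.0, 4.0])            # signal levels (e.g. deposited mass)
        y = np.array([[2.1, 4.0, 6.2, 7.9],           # responses at noise condition 1
                      [1.9, 4.1, 5.8, 8.2]])          # responses at noise condition 2

        beta = (y * M).sum() / (y.shape[0] * (M ** 2).sum())   # least-squares slope through 0
        mse = ((y - beta * M) ** 2).mean()                     # scatter about the ideal line
        sn_db = 10 * np.log10(beta ** 2 / mse)
        print(f"beta = {beta:.3f}, SN = {sn_db:.1f} dB")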

  16. Flightdeck Automation Problems (FLAP) Model for Safety Technology Portfolio Assessment

    Science.gov (United States)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    NASA's Aviation Safety Program (AvSP) develops and advances methodologies and technologies to improve air transportation safety. The Safety Analysis and Integration Team (SAIT) conducts a safety technology portfolio assessment (PA) to analyze the program content, to examine the benefits and risks of products with respect to program goals, and to support programmatic decision making. The PA process includes systematic identification of current and future safety risks as well as tracking several quantitative and qualitative metrics to ensure the program goals are addressing prominent safety risks accurately and effectively. One of the metrics within the PA process involves using quantitative aviation safety models to gauge the impact of the safety products. This paper demonstrates the role of aviation safety modeling by providing model outputs and evaluating a sample of portfolio elements using the Flightdeck Automation Problems (FLAP) model. The model enables not only ranking of the quantitative relative risk reduction impact of all portfolio elements, but also highlighting the areas with high potential impact via sensitivity and gap analyses in support of the program office. Although the model outputs are preliminary and products are notional, the process shown in this paper is essential to a comprehensive PA of NASA's safety products in the current program and future programs/projects.

  17. A Review of Automated Decision Support System

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... Intelligence AI that enable decision automation based on existing facts, knowledge ... The growing reliance on data impacts dynamic data extraction and retrieval of the ... entertainment, medical, and the web. III. DECISION ...

  18. Mission simulation as an approach to develop requirements for automation in Advanced Life Support Systems

    Science.gov (United States)

    Erickson, J. D.; Eckelkamp, R. E.; Barta, D. J.; Dragg, J.; Henninger, D. L. (Principal Investigator)

    1996-01-01

    This paper examines mission simulation as an approach to develop requirements for automation and robotics for Advanced Life Support Systems (ALSS). The focus is on requirements and applications for command and control, control and monitoring, situation assessment and response, diagnosis and recovery, adaptive planning and scheduling, and other automation applications in addition to mechanized equipment and robotics applications to reduce the excessive human labor requirements to operate and maintain an ALSS. Based on principles of systems engineering, an approach is proposed to assess requirements for automation and robotics using mission simulation tools. First, the story of a simulated mission is defined in terms of processes with attendant types of resources needed, including options for use of automation and robotic systems. Next, systems dynamics models are used in simulation to reveal the implications for selected resource allocation schemes in terms of resources required to complete operational tasks. The simulations not only help establish ALSS design criteria, but also may offer guidance to ALSS research efforts by identifying gaps in knowledge about procedures and/or biophysical processes. Simulations of a planned one-year mission with 4 crewmembers in a Human Rated Test Facility are presented as an approach to evaluation of mission feasibility and definition of automation and robotics requirements.

  19. Toward a universal, automated facial measurement tool in facial reanimation.

    Science.gov (United States)

    Hadlock, Tessa A; Urban, Luke S

    2012-01-01

    To describe a highly quantitative facial function-measuring tool that yields accurate, objective measures of facial position in significantly less time than existing methods. Facial Assessment by Computer Evaluation (FACE) software was designed for facial analysis. Outputs report the static facial landmark positions and dynamic facial movements relevant in facial reanimation. Fifty individuals underwent facial movement analysis using Photoshop-based measurements and the new software; comparisons of agreement and efficiency were made. Comparisons were made between individuals with normal facial animation and patients with paralysis to gauge sensitivity to abnormal movements. Facial measurements were matched using FACE software and Photoshop-based measures at rest and during expressions. The automated assessments required significantly less time than Photoshop-based assessments. FACE measurements easily revealed differences between individuals with normal facial animation and patients with facial paralysis. FACE software produces accurate measurements of facial landmarks and facial movements and is sensitive to paralysis. Given its efficiency, it serves as a useful tool in the clinical setting for zonal facial movement analysis in comprehensive facial nerve rehabilitation programs.

  20. Burst firing enhances neural output correlation

    Directory of Open Access Journals (Sweden)

    Ho Ka Chan

    2016-05-01

    Full Text Available Neurons communicate and transmit information predominantly through spikes. Given that experimentally observed neural spike trains in a variety of brain areas can be highly correlated, it is important to investigate how neurons process correlated inputs. Most previous work in this area studied the problem of correlation transfer analytically by making significant simplifications on neural dynamics. Temporal correlation between inputs that arises from synaptic filtering, for instance, is often ignored when assuming that an input spike can at most generate one output spike. Through numerical simulations of a pair of leaky integrate-and-fire (LIF) neurons receiving correlated inputs, we demonstrate that neurons in the presence of synaptic filtering by slow synapses exhibit strong output correlations. We then show that burst firing plays a central role in enhancing output correlations, which can explain the above-mentioned observation because synaptic filtering induces bursting. The observed changes of correlations are mostly on a long time scale. Our results suggest that other features affecting the prevalence of neural burst firing in biological neurons, e.g., adaptive spiking mechanisms, may play an important role in modulating the overall level of correlations in neural networks.
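
    As a rough illustration of the mechanism described above, the following minimal sketch simulates two leaky integrate-and-fire neurons driven by partially shared noise that is filtered by a slow synaptic time constant, and then measures the correlation of their output spike counts in long windows. It is not the authors' simulation code; all parameter values (time constants, noise strength, window length) are illustrative assumptions.

      # Minimal sketch of two leaky integrate-and-fire (LIF) neurons driven by
      # partially shared, synaptically filtered noise; parameter values are
      # illustrative assumptions, not those of the cited study.
      import numpy as np

      rng = np.random.default_rng(0)

      dt, T = 5e-4, 100.0                      # time step [s], total duration [s]
      n = int(T / dt)
      tau_m, v_th, v_reset = 20e-3, 1.0, 0.0   # membrane time constant, threshold, reset
      tau_s = 50e-3                            # slow synaptic filtering time constant
      c = 0.5                                  # input correlation coefficient
      mu, sigma = 0.8, 1.5                     # input mean and noise strength

      v = np.zeros(2)
      s = np.zeros(2)                          # filtered synaptic inputs
      spikes = np.zeros((2, n), dtype=bool)

      for k in range(n):
          shared = rng.normal()
          priv = rng.normal(size=2)
          xi = np.sqrt(c) * shared + np.sqrt(1 - c) * priv           # correlated white noise
          s += dt / tau_s * (-s) + np.sqrt(dt) / tau_s * sigma * xi   # slow synaptic filtering
          v += dt / tau_m * (-v + mu + s)                             # membrane integration
          fired = v >= v_th
          spikes[:, k] = fired
          v[fired] = v_reset

      # Output correlation: Pearson correlation of spike counts in long windows,
      # where bursting mainly shapes correlations on slow time scales.
      win = int(0.5 / dt)
      counts = spikes[:, : (n // win) * win].reshape(2, -1, win).sum(axis=2)
      rho_out = np.corrcoef(counts)[0, 1]
      print(f"input correlation {c:.2f} -> output count correlation {rho_out:.2f}")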

  1. Output Error Method for Tiltrotor Unstable in Hover

    Directory of Open Access Journals (Sweden)

    Lichota Piotr

    2017-03-01

    Full Text Available This article investigates system identification of an unstable tiltrotor in hover from flight test data. The aircraft dynamics was described by a linear model defined in a body-fixed coordinate system. The Output Error Method was selected in order to obtain stability and control derivatives in lateral motion. For estimating model parameters, both time and frequency domain formulations were applied. To improve the system identification performed in the time domain, a stabilization matrix was included for evaluating the states. In the end, estimates obtained from the various Output Error Method formulations were compared in terms of parameter accuracy and time histories. Evaluations were performed in the MATLAB R2009b environment.
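
    The sketch below shows the core of a time-domain output-error fit: simulate a parametrized linear model and minimize the sum of squared differences between measured and model outputs. It is a generic illustration under assumed values, not the cited tiltrotor model, and it omits the stabilization matrix that the paper adds to handle the unstable plant.

      # Minimal time-domain Output Error Method sketch: fit parameters of a linear
      # model x' = A(theta) x + B(theta) u, y = x, by minimising the sum of squared
      # output errors.  The 2-state model and "true" parameters below are
      # illustrative assumptions, not the tiltrotor model of the cited study.
      import numpy as np
      from scipy.optimize import minimize

      dt, n = 0.02, 500
      t = np.arange(n) * dt
      u = np.sin(1.5 * t)                          # control input (e.g. lateral stick)

      def simulate(theta):
          a11, a12, a21, a22, b1, b2 = theta
          A = np.array([[a11, a12], [a21, a22]])
          B = np.array([b1, b2])
          x = np.zeros(2)
          y = np.zeros((n, 2))
          for k in range(n):
              y[k] = x
              x = x + dt * (A @ x + B * u[k])      # forward-Euler state propagation
          return y

      theta_true = np.array([-0.8, 2.0, -1.5, -0.6, 0.5, -0.2])
      rng = np.random.default_rng(1)
      z = simulate(theta_true) + 0.01 * rng.normal(size=(n, 2))   # noisy measurements

      def cost(theta):                             # output-error cost function
          e = z - simulate(theta)
          return np.sum(e ** 2)

      theta0 = np.array([-1.0, 1.0, -1.0, -1.0, 0.0, 0.0])
      res = minimize(cost, theta0, method="Nelder-Mead",
                     options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-9})
      print("estimated derivatives:", np.round(res.x, 3))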

  2. An Optimized Grey Dynamic Model for Forecasting the Output of High-Tech Industry in China

    Directory of Open Access Journals (Sweden)

    Zheng-Xin Wang

    2014-01-01

    Full Text Available The grey dynamic model by convolution integral with the first-order derivative of the 1-AGO data and n series related, abbreviated as GDMC(1,n), performs well in modelling and forecasting of a grey system. To improve the modelling accuracy of GDMC(1,n), n interpolation coefficients (taken as unknown parameters) are introduced into the background values of the n variables. The parameters optimization is formulated as a combinatorial optimization problem and is solved collectively using the particle swarm optimization algorithm. The optimized result has been verified by a case study of the economic output of high-tech industry in China. Comparisons of the obtained modelling results from the optimized GDMC(1,n) model with the traditional one demonstrate that the optimal algorithm is a good alternative for parameters optimization of the GDMC(1,n) model. The modelling results can assist the government in developing future policies regarding high-tech industry management.
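
    As a hedged illustration of the background-value interpolation mentioned above (the exact GDMC(1,n) construction in the paper may differ), a commonly used form introduces one coefficient per series:

      z_i^{(1)}(k) = \lambda_i \, x_i^{(1)}(k) + (1 - \lambda_i) \, x_i^{(1)}(k-1), \qquad 0 \le \lambda_i \le 1, \quad i = 1, \dots, n,

    where the traditional model fixes \lambda_i = 1/2, and the particle swarm searches over (\lambda_1, \dots, \lambda_n) to minimize a fitting-error criterion such as the mean absolute percentage error.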

  3. Design of output feedback UPFC controller for damping of electromechanical oscillations using PSO

    Energy Technology Data Exchange (ETDEWEB)

    Shayeghi, H. [Technical Engineering Dept., Univ. of Mohaghegh Ardabili, Ardabil (Iran); Shayanfar, H.A. [Center of Excellence for Power Automation and Operation, Electrical Engineering Dept., Iran Univ. of Science and Technology, Tehran (Iran); Jalilzadeh, S.; Safari, A. [Technical Engineering Dept., Zanjan Univ., Zanjan (Iran)

    2009-10-15

    In this paper, a novel method for the design of an output feedback controller for a unified power flow controller (UPFC) is developed. The selection of the output feedback gains for the UPFC controllers is converted to an optimization problem with a time domain-based objective function, which is solved by a particle swarm optimization technique (PSO) that has a strong ability to find the most optimistic results. Only local and available state variables are adopted as the input signals of each controller for the decentralized design. Thus, the structure of the designed UPFC controller is simple and easy to implement. To ensure the robustness of the proposed stabilizers, the design process takes into account a wide range of operating conditions and system configurations. The effectiveness of the proposed controller for damping low frequency oscillations is tested and demonstrated through nonlinear time-domain simulation and some performance indices studies. The results analysis reveals that the designed PSO-based output feedback UPFC damping controller has an excellent capability in damping power system low frequency oscillations and greatly enhances the dynamic stability of the power systems. Moreover, the system performance analysis under different operating conditions shows that the δE-based controller is superior to both the mB-based controller and a conventional power system stabilizer. (author)
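
    The sketch below illustrates the general idea of tuning a feedback gain with particle swarm optimization against a time-domain objective. The second-order test plant, the ITAE criterion, and all PSO settings are illustrative assumptions, not the UPFC/power-system model or objective of the cited paper.

      # Generic particle swarm optimisation (PSO) sketch for tuning a static output
      # feedback gain against a time-domain objective (integral of time-weighted
      # absolute error).  Plant and settings are placeholders, not the cited design.
      import numpy as np

      rng = np.random.default_rng(2)
      dt, n = 0.01, 1000
      t = np.arange(n) * dt

      def objective(k_gain):
          # Lightly damped plant x1' = x2, x2' = -4 x1 - 0.2 x2 + u, measured output
          # y = x2, damping control u = -k_gain*y; ITAE of x1 after a disturbance.
          x = np.array([1.0, 0.0])
          itae = 0.0
          for i in range(n):
              u = -k_gain * x[1]
              dx = np.array([x[1], -4.0 * x[0] - 0.2 * x[1] + u])
              x = x + dt * dx
              itae += t[i] * abs(x[0]) * dt
          return itae

      # Standard PSO with inertia weight and cognitive/social terms.
      n_particles, n_iter = 20, 40
      w, c1, c2 = 0.7, 1.5, 1.5
      pos = rng.uniform(0.0, 10.0, n_particles)
      vel = np.zeros(n_particles)
      pbest = pos.copy()
      pbest_val = np.array([objective(p) for p in pos])
      gbest = pbest[np.argmin(pbest_val)]

      for _ in range(n_iter):
          r1, r2 = rng.random(n_particles), rng.random(n_particles)
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, 0.0, 10.0)
          vals = np.array([objective(p) for p in pos])
          improved = vals < pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
          gbest = pbest[np.argmin(pbest_val)]

      print(f"PSO-tuned damping gain: {gbest:.3f}, ITAE = {objective(gbest):.4f}")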

  4. Automated Mobility Transitions: Governing Processes in the UK

    Directory of Open Access Journals (Sweden)

    Debbie Hopkins

    2018-03-01

    Full Text Available Contemporary systems of mobility are undergoing a transition towards automation. In the UK, this transition is being led by (often new) partnerships between incumbent manufacturers and new entrants, in collaboration with national governments, local/regional councils, and research institutions. This paper first offers a framework for analyzing the governance of the transition, adapting ideas from the Transition Management (TM) perspective, and then applies the framework to ongoing automated vehicle transition dynamics in the UK. The empirical analysis suggests that the UK has adopted a reasonably comprehensive approach to the governing of automated vehicle innovation but that this approach cannot be characterized as sufficiently inclusive, democratic, diverse and open. The lack of inclusivity, democracy, diversity and openness is symptomatic of the post-political character of how the UK’s automated mobility transition is being governed. The paper ends with a call for a reconfiguration of the automated vehicle transition in the UK and beyond, so that much more space is created for dissent and for reflexive and comprehensive big picture thinking on (automated) mobility futures.

  5. Maintenance Maneuver Automation for an Adapted Cylindrical Shape TEC

    Directory of Open Access Journals (Sweden)

    Rafael Morales

    2016-09-01

    Full Text Available Several manufacturers have developed devices with which to harness tidal/current power in areas where the depth does not exceed 40 m. These are the so-called first generation Tidal Energy Converters (TEC), and they are usually fixed to the seabed by gravity. When carrying out maintenance tasks on these devices it is, therefore, necessary to remove the nacelles from their bases and raise them to the surface of the sea. They must subsequently be placed back on their bases. These tasks require special high performance ships, signifying high maintenance costs. The automation of emersion and immersion maneuvers will undoubtedly lead to lower costs, given that ships with less demanding requirements will be required for the aforementioned maintenance tasks. This research presents a simple two degrees of freedom dynamic model that can be used to control a first generation TEC that has been conceived of to harness energy from marine currents. The control of the system is carried out by means of a water ballast system located inside the nacelle of the main power unit and is used as an actuator to produce buoying vertical forces. A nonlinear control law based on a decoupling term for the closed loop depth and/or orientation control is also proposed in order to ensure adequate behavior when the TEC performs emersion and immersion maneuvers with only hydrostatic buoyancy forces. The control scheme is composed of an inner loop consisting of a linear and decoupled input/output relationship and the vector of friction and compressibility terms and an outer loop that operates with the tracking error vector in order to ensure the asymptotically exponential stability of the TEC posture. Finally, the effectiveness of the dynamic model and the controller approach is demonstrated by means of numerical simulations when the TEC is carrying out an emersion maneuver for the development of general maintenance tasks and an emersion maneuver for blade-cleaning maintenance

  6. Parameter identifiability of linear dynamical systems

    Science.gov (United States)

    Glover, K.; Willems, J. C.

    1974-01-01

    It is assumed that the system matrices of a stationary linear dynamical system were parametrized by a set of unknown parameters. The question considered here is, when can such a set of unknown parameters be identified from the observed data? Conditions for the local identifiability of a parametrization are derived in three situations: (1) when input/output observations are made, (2) when there exists an unknown feedback matrix in the system and (3) when the system is assumed to be driven by white noise and only output observations are made. Also a sufficient condition for global identifiability is derived.
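
    As a hedged sketch of the input/output case in standard notation (not a quotation of the paper's theorems), write the parametrized system as

      \dot{x} = A(\theta)\,x + B(\theta)\,u, \qquad y = C(\theta)\,x, \qquad \theta \in \mathbb{R}^{p},

    whose input/output behaviour is summarized by the transfer function G(s;\theta) = C(\theta)\,(sI - A(\theta))^{-1} B(\theta), or equivalently by the Markov parameters M_k(\theta) = C(\theta) A(\theta)^{k} B(\theta). A sufficient condition for local identifiability of the parametrization at \theta_0 is then a full-rank Jacobian of the first 2n Markov parameters:

      \operatorname{rank}\; \frac{\partial}{\partial \theta}\,\big[\, M_0(\theta),\; M_1(\theta),\; \dots,\; M_{2n-1}(\theta) \,\big] \Big|_{\theta = \theta_0} = p.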

  7. Verification of experimental modal modeling using HDR (Heissdampfreaktor) dynamic test data

    International Nuclear Information System (INIS)

    Srinivasan, M.G.; Kot, C.A.; Hsieh, B.J.

    1983-01-01

    Experimental modal modeling involves the determination of the modal parameters of the model of a structure from recorded input-output data from dynamic tests. Though commercial modal analysis algorithms are being widely used in many industries their ability to identify a set of reliable modal parameters of an as-built nuclear power plant structure has not been systematically verified. This paper describes the effort to verify MODAL-PLUS, a widely used modal analysis code, using recorded data from the dynamic tests performed on the reactor building of the Heissdampfreaktor, situated near Frankfurt, Federal Republic of Germany. In the series of dynamic tests on HDR in 1979, the reactor building was subjected to forced vibrations from different types and levels of dynamic excitations. Two sets of HDR containment building input-output data were chosen for MODAL-PLUS analyses. To reduce the influence of nonlinear behavior on the results, these sets were chosen so that the levels of excitation are relatively low and about the same in the two sets. The attempted verification was only partially successful in that only one modal model, with a limited range of validity, could be synthesized and in that the goodness of fit could be verified only in this limited range

  8. Shielded cells transfer automation

    International Nuclear Information System (INIS)

    Fisher, J.J.

    1984-01-01

    Nuclear waste from shielded cells is removed, packaged, and transferred manually in many nuclear facilities. Radiation exposure is absorbed by operators during these operations and limited only through procedural controls. Technological advances in automation using robotics have allowed a production waste removal operation to be automated to reduce radiation exposure. The robotic system bags waste containers out of a glove box and transfers them to a shielded container. Operators control the system from outside the work area via television cameras. 9 figures

  9. The value of risk: measuring the service output of U.S. commercial banks

    OpenAIRE

    Basu, Susanto; Inklaar, Robert; Wang, J. Christina

    2011-01-01

    Rather than charging direct fees, banks often charge implicitly for their services via interest spreads. As a result, much of bank output has to be estimated indirectly. In contrast to current statistical practice, dynamic optimizing models of banks argue that compensation for bearing systematic risk is not part of bank output. We apply these models and find that between 1997 and 2007, in the U.S. National Accounts, on average, bank output is overestimated by 21 percent and GDP is overestimat...

  10. Evaluation of automated vehicle technology for transit : [summary].

    Science.gov (United States)

    2014-01-01

    Automated transportation has been portrayed in futuristic literature since the 19th century, but making vehicles truly autonomous has only been possible in recent decades with advanced control and computer technologies. Automating cars is a ...

  11. Home Automation and Security System Using Android ADK

    OpenAIRE

    Deepali Javale; Mohd. Mohsin; Shreerang Nandanwar; Mayur Shingate

    2013-01-01

    Today we are living in the 21st century, where automation is playing an important role in human life. Home automation allows us to control household appliances such as lights, doors, fans, AC, etc. It also provides a home security and emergency system to be activated. Home automation not only reduces human effort but also improves energy efficiency and saves time. The main objective of home automation and security is to help handicapped and old aged people, which will enable them to control home appliances a...

  12. Management of Industrial Processes with Programmable Logic Controller

    Directory of Open Access Journals (Sweden)

    Marius Tufoi

    2009-10-01

    Full Text Available In a modern economy, automation (the control) is primarily intended to raise the competitiveness of a product, either directly through price or quality, or indirectly through the improvement of the working conditions of productive staff. The control of industrial processes involves the management of dynamic systems that have continuous states. These systems are described by differential equations and have, in general, analog inputs and outputs. Management of these systems is achieved, in general, with classical automation or with analog computers, which contain modules with analog input/output capability. If the states, inputs and outputs of a system can be modeled using binary variables, then these systems can be driven with a Programmable Logic Controller.

  13. General Output Feedback Stabilization for Fractional Order Systems: An LMI Approach

    Directory of Open Access Journals (Sweden)

    Yiheng Wei

    2014-01-01

    Full Text Available This paper is concerned with the problem of general output feedback stabilization for fractional order linear time-invariant (FO-LTI) systems with the fractional commensurate order 0<α<2. The objective is to design suitable output feedback controllers that guarantee the stability of the resulting closed-loop systems. Based on the slack variable method and our previous stability criteria, some new results in the form of linear matrix inequalities (LMIs) are developed for the synthesis of static and dynamic output feedback controllers for the FO-LTI system with 0<α<1. Furthermore, the results are extended to stabilize the FO-LTI systems with 1≤α<2. Finally, robust output feedback control is discussed. Numerical examples are given to illustrate the effectiveness of the proposed design methods.

  14. Semi-automated ontology generation and evolution

    Science.gov (United States)

    Stirtzinger, Anthony P.; Anken, Craig S.

    2009-05-01

    Extending the notion of data models or object models, ontology can provide rich semantic definition not only to the meta-data but also to the instance data of domain knowledge, making these semantic definitions available in machine readable form. However, the generation of an effective ontology is a difficult task involving considerable labor and skill. This paper discusses an Ontology Generation and Evolution Processor (OGEP) aimed at automating this process, only requesting user input when un-resolvable ambiguous situations occur. OGEP directly attacks the main barrier which prevents automated (or self learning) ontology generation: the ability to understand the meaning of artifacts and the relationships the artifacts have to the domain space. OGEP leverages existing lexical to ontological mappings in the form of WordNet, and Suggested Upper Merged Ontology (SUMO) integrated with a semantic pattern-based structure referred to as the Semantic Grounding Mechanism (SGM) and implemented as a Corpus Reasoner. The OGEP processing is initiated by a Corpus Parser performing a lexical analysis of the corpus, reading in a document (or corpus) and preparing it for processing by annotating words and phrases. After the Corpus Parser is done, the Corpus Reasoner uses the parts of speech output to determine the semantic meaning of a word or phrase. The Corpus Reasoner is the crux of the OGEP system, analyzing, extrapolating, and evolving data from free text into cohesive semantic relationships. The Semantic Grounding Mechanism provides a basis for identifying and mapping semantic relationships. By blending together the WordNet lexicon and SUMO ontological layout, the SGM is given breadth and depth in its ability to extrapolate semantic relationships between domain entities. The combination of all these components results in an innovative approach to user assisted semantic-based ontology generation. This paper will describe the OGEP technology in the context of the architectural

  15. Alternative to Ritt's pseudodivision for finding the input-output equations of multi-output models.

    Science.gov (United States)

    Meshkat, Nicolette; Anderson, Chris; DiStefano, Joseph J

    2012-09-01

    Differential algebra approaches to structural identifiability analysis of a dynamic system model in many instances heavily depend upon Ritt's pseudodivision at an early step in analysis. The pseudodivision algorithm is used to find the characteristic set, of which a subset, the input-output equations, is used for identifiability analysis. A simpler algorithm is proposed for this step, using Gröbner Bases, along with a proof of the method that includes a reduced upper bound on derivative requirements. Efficacy of the new algorithm is illustrated with several biosystem model examples. Copyright © 2012 Elsevier Inc. All rights reserved.
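
    To make the elimination idea concrete, the sketch below uses a lexicographic Gröbner basis to derive the input-output equation of a simple two-compartment model. The model is a textbook-style example constructed for illustration, not one taken from the cited paper, and sympy's generic groebner routine stands in for the paper's tailored algorithm.

      # Sketch of obtaining an input-output equation by Groebner-basis elimination
      # for the two-compartment model
      #   x1' = -(k01 + k21) x1 + k12 x2 + u,   x2' = k21 x1 - k12 x2,   y = x1.
      import sympy as sp

      y, dy, d2y, u, du = sp.symbols("y dy d2y u du")      # y, y', y'', u, u'
      x2, dx2 = sp.symbols("x2 dx2")                        # state to eliminate
      k01, k21, k12 = sp.symbols("k01 k21 k12")

      # With x1 = y and x1' = dy, the model and its first prolongation give:
      p1 = dy + (k01 + k21) * y - k12 * x2 - u              # x1' equation
      p2 = dx2 - k21 * y + k12 * x2                         # x2' equation
      p3 = d2y + (k01 + k21) * dy - k12 * dx2 - du          # derivative of p1

      # Lexicographic Groebner basis with x2, dx2 ranked highest eliminates them.
      G = sp.groebner([p1, p2, p3], x2, dx2, d2y, dy, y, du, u, k01, k21, k12,
                      order="lex")
      io_eqs = [g for g in G.exprs if not g.has(x2) and not g.has(dx2)]
      print(sp.factor(io_eqs[0]))   # second-order input-output ODE in y, u and the k's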

  16. Robust back-stepping output feedback trajectory tracking for quadrotors via extended state observer and sigmoid tracking differentiator

    Science.gov (United States)

    Shao, Xingling; Liu, Jun; Wang, Honglun

    2018-05-01

    In this paper, a robust back-stepping output feedback trajectory tracking controller is proposed for quadrotors subject to parametric uncertainties and external disturbances. Based on the hierarchical control principle, the quadrotor dynamics is decomposed into translational and rotational subsystems to facilitate the back-stepping control design. With given model information incorporated into observer design, a high-order extended state observer (ESO) that relies only on position measurements is developed to estimate the remaining unmeasurable states and the lumped disturbances in rotational subsystem simultaneously. To overcome the problem of "explosion of complexity" in the back-stepping design, the sigmoid tracking differentiator (STD) is introduced to compute the derivative of virtual control laws. The advantage is that the proposed controller via output-feedback scheme not only can ensure good tracking performance using very limited information of quadrotors, but also has the ability of handling the undesired uncertainties. The stability analysis is established using the Lyapunov theory. Simulation results demonstrate the effectiveness of the proposed control scheme in achieving a guaranteed tracking performance with respect to an 8-shaped reference trajectory.
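
    For intuition on the observer component, the sketch below runs a generic third-order linear extended state observer that reconstructs position, velocity, and a lumped disturbance from position measurements only. It is not the high-order ESO or quadrotor model of the cited paper; the plant, gains, and disturbance are assumptions, with the observer poles placed at -100 rad/s.

      # Minimal linear extended state observer (ESO) sketch for a double integrator.
      import numpy as np

      dt, n = 0.002, 5000
      b0 = 1.0                                   # assumed control gain
      beta1, beta2, beta3 = 300.0, 3e4, 1e6      # gains from (s + 100)^3 pole placement

      x = np.array([0.0, 0.0])                   # true position and velocity
      z = np.zeros(3)                            # ESO states: [pos, vel, lumped disturbance]

      for k in range(n):
          t = k * dt
          u = np.sin(t)                          # arbitrary control input
          d = 0.5 * np.sin(2 * t) + 0.3          # unknown lumped disturbance
          # True plant: pos'' = b0*u + d
          x = x + dt * np.array([x[1], b0 * u + d])
          y = x[0]                               # only position is measured
          # Linear ESO update
          e = z[0] - y
          z = z + dt * np.array([z[1] - beta1 * e,
                                 z[2] - beta2 * e + b0 * u,
                                 -beta3 * e])

      print("true disturbance %.3f, ESO estimate %.3f" % (d, z[2]))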

  17. A controls engineering approach for analyzing airplane input-output characteristics

    Science.gov (United States)

    Arbuckle, P. Douglas

    1991-01-01

    An engineering approach for analyzing airplane control and output characteristics is presented. State-space matrix equations describing the linear perturbation dynamics are transformed from physical coordinates into scaled coordinates. The scaling is accomplished by applying various transformations to the system to employ prior engineering knowledge of the airplane physics. Two different analysis techniques are then explained. Modal analysis techniques calculate the influence of each system input on each fundamental mode of motion and the distribution of each mode among the system outputs. The optimal steady state response technique computes the blending of steady state control inputs that optimize the steady state response of selected system outputs. Analysis of an example airplane model is presented to demonstrate the described engineering approach.

  18. FPGA-Based Real-Time Motion Detection for Automated Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2016-03-01

    Full Text Available Design of automated video surveillance systems is one of the exigent missions in computer vision community because of their ability to automatically select frames of interest in incoming video streams based on motion detection. This research paper focuses on the real-time hardware implementation of a motion detection algorithm for such vision based automated surveillance systems. A dedicated VLSI architecture has been proposed and designed for clustering-based motion detection scheme. The working prototype of a complete standalone automated video surveillance system, including input camera interface, designed motion detection VLSI architecture, and output display interface, with real-time relevant motion detection capabilities, has been implemented on Xilinx ML510 (Virtex-5 FX130T) FPGA platform. The prototyped system robustly detects the relevant motion in real-time in live PAL (720 × 576) resolution video streams directly coming from the camera.

  19. Automated Assessment of Postural Stability (AAPS)

    Science.gov (United States)

    2016-10-01

    … performed a battery of standard clinical tests of dynamic posture, whereas the fourth subject performed the stereotyped postures (e.g. movements restricted …).
    [2] Napoli A, Ward C, Glass S, Tucker C, Obeid I (2016) “Automated Assessment of Postural Stability System,” IEEE Engineering in …

  20. Participation through Automation: Fully Automated Critical PeakPricing in Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Piette, Mary Ann; Watson, David S.; Motegi, Naoya; Kiliccote,Sila; Linkugel, Eric

    2006-06-20

    California electric utilities have been exploring the use of dynamic critical peak prices (CPP) and other demand response programs to help reduce peaks in customer electric loads. CPP is a tariff design to promote demand response. Levels of automation in DR can be defined as follows: Manual Demand Response involves a potentially labor-intensive approach such as manually turning off or changing comfort set points at each equipment switch or controller. Semi-Automated Demand Response involves a pre-programmed demand response strategy initiated by a person via a centralized control system. Fully Automated Demand Response does not involve human intervention, but is initiated at a home, building, or facility through receipt of an external communications signal. The receipt of the external signal initiates pre-programmed demand response strategies. They refer to this as Auto-DR. This paper describes the development, testing, and results from automated CPP (Auto-CPP) as part of a utility project in California. The paper presents the project description and test methodology. This is followed by a discussion of Auto-DR strategies used in the field test buildings. They present a sample Auto-CPP load shape case study, and a selection of the Auto-CPP response data from September 29, 2005. If all twelve sites reached their maximum savings simultaneously, a total of approximately 2 MW of DR is available from these twelve sites that represent about two million ft². The average DR was about half that value, at about 1 MW. These savings translate to about 0.5 to 1.0 W/ft² of demand reduction. They are continuing field demonstrations and economic evaluations to pursue increasing penetrations of automated DR, which has demonstrated the ability to provide a valuable DR resource for California.
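
    A quick back-of-the-envelope check of the demand-reduction intensities quoted above (the figures are rounded approximations taken from the abstract, not new data):

      # Sanity check of the quoted demand-response intensities.
      floor_area_ft2 = 2_000_000        # ~two million square feet across twelve sites
      max_dr_w = 2_000_000              # ~2 MW if all sites shed load simultaneously
      avg_dr_w = 1_000_000              # ~1 MW average observed shed
      print(max_dr_w / floor_area_ft2)  # 1.0 W/ft2 (upper end of the 0.5-1.0 W/ft2 range)
      print(avg_dr_w / floor_area_ft2)  # 0.5 W/ft2 (lower end)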

  1. 36 CFR 1193.43 - Output, display, and control functions.

    Science.gov (United States)

    2010-07-01

    ... for the use of the product, through at least one mode in enhanced auditory fashion (i.e., increased... and use the product, including but not limited to, text, static or dynamic images, icons, labels... of audio cutoff. Where a product delivers audio output through an external speaker, provide an...

  2. What makes an automated teller machine usable by blind users?

    Science.gov (United States)

    Manzke, J M; Egan, D H; Felix, D; Krueger, H

    1998-07-01

    Fifteen blind and sighted subjects, who featured as a control group for acceptance, were asked for their requirements for automated teller machines (ATMs). Both groups also tested the usability of a partially operational ATM mock-up. This machine was based on an existing cash dispenser, providing natural speech output, different function menus and different key arrangements. Performance and subjective evaluation data of blind and sighted subjects were collected. All blind subjects were able to operate the ATM successfully. The implemented speech output was the main usability factor for them. The different interface designs did not significantly affect performance and subjective evaluation. Nevertheless, design recommendations can be derived from the requirement assessment. The sighted subjects were rather open for design modifications, especially the implementation of speech output. However, there was also a mismatch of the requirements of the two subject groups, mainly concerning the key arrangement.

  3. Rapid automated nuclear chemistry

    International Nuclear Information System (INIS)

    Meyer, R.A.

    1979-01-01

    Rapid Automated Nuclear Chemistry (RANC) can be thought of as the Z-separation of Neutron-rich Isotopes by Automated Methods. The range of RANC studies of fission and its products is large. In a sense, the studies can be categorized into various energy ranges from the highest where the fission process and particle emission are considered, to low energies where nuclear dynamics are being explored. This paper presents a table which gives examples of current research using RANC on fission and fission products. The remainder of this text is divided into three parts. The first contains a discussion of the chemical methods available for the fission product elements, the second describes the major techniques, and in the last section, examples of recent results are discussed as illustrations of the use of RANC

  4. (YIP 2011) Unsteady Output-based Adaptive Simulation of Separated and Transitional Flows

    Science.gov (United States)

    2015-03-19

    Personnel (fragment): … Investigator, Aerospace Eng., U. Michigan; Marco Ceze, Ph.D. student/postdoctoral associate, Aerospace Eng., U. Michigan; Steven Kast, Ph.D. student, Aerospace …
    [13] S. M. Kast, M. A. Ceze, and K. J. Fidkowski. Output-adaptive solution strategies for unsteady aerodynamics on deformable domains. Seventh International Conference on Computational Fluid Dynamics, ICCFD7-3802, 2012.
    [14] S. M. Kast and K. J. Fidkowski. Output-based mesh adaptation for high order …

  5. Automated Tracking of Cell Migration with Rapid Data Analysis.

    Science.gov (United States)

    DuChez, Brian J

    2017-09-01

    Cell migration is essential for many biological processes including development, wound healing, and metastasis. However, studying cell migration often requires the time-consuming and labor-intensive task of manually tracking cells. To accelerate the task of obtaining coordinate positions of migrating cells, we have developed a graphical user interface (GUI) capable of automating the tracking of fluorescently labeled nuclei. This GUI provides an intuitive user interface that makes automated tracking accessible to researchers with no image-processing experience or familiarity with particle-tracking approaches. Using this GUI, users can interactively determine a minimum of four parameters to identify fluorescently labeled cells and automate acquisition of cell trajectories. Additional features allow for batch processing of numerous time-lapse images, curation of unwanted tracks, and subsequent statistical analysis of tracked cells. Statistical outputs allow users to evaluate migratory phenotypes, including cell speed, distance, displacement, and persistence, as well as measures of directional movement, such as forward migration index (FMI) and angular displacement. Copyright © 2017 John Wiley & Sons, Inc.
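
    The sketch below computes the trajectory statistics listed above from a single tracked nucleus. The metric definitions follow common usage (e.g. persistence as displacement over path length, FMI as net displacement along one axis over path length) and are assumptions that may differ in detail from the cited GUI; the coordinates and frame interval are made-up placeholders.

      # Common cell-migration metrics from one tracked trajectory (x, y per frame).
      import numpy as np

      track = np.array([[0.0, 0.0], [1.0, 0.5], [2.2, 0.4], [3.1, 1.2], [4.0, 1.0]])
      dt_frame = 10.0                                    # minutes between frames (assumed)

      steps = np.diff(track, axis=0)
      step_len = np.linalg.norm(steps, axis=1)
      path_length = step_len.sum()                       # total distance travelled
      net_vec = track[-1] - track[0]
      displacement = np.linalg.norm(net_vec)             # start-to-end displacement
      speed = path_length / (dt_frame * len(steps))      # mean speed
      persistence = displacement / path_length           # directionality ratio (0..1)
      fmi_x = net_vec[0] / path_length                   # forward migration index along x
      angle = np.degrees(np.arctan2(net_vec[1], net_vec[0]))  # angular displacement

      print(f"speed={speed:.3f} um/min, persistence={persistence:.2f}, "
            f"FMI_x={fmi_x:.2f}, angle={angle:.1f} deg")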

  6. A test matrix sequencer for research test facility automation

    Science.gov (United States)

    Mccartney, Timothy P.; Emery, Edward F.

    1990-01-01

    The hardware and software configuration of a Test Matrix Sequencer, a general purpose test matrix profiler that was developed for research test facility automation at the NASA Lewis Research Center, is described. The system provides set points to controllers and contact closures to data systems during the course of a test. The Test Matrix Sequencer consists of a microprocessor controlled system which is operated from a personal computer. The software program, which is the main element of the overall system, is interactive and menu driven with pop-up windows and help screens. Analog and digital input/output channels can be controlled from a personal computer using the software program. The Test Matrix Sequencer provides more efficient use of aeronautics test facilities by automating repetitive tasks that were once done manually.

  7. Scientific Output of Croatian Universities: Comparison with Neighbouring Countries

    Directory of Open Access Journals (Sweden)

    Boris Podobnik

    2008-06-01

    Full Text Available We compared the Croatian research output with the neighboring countries and the Croatian universities with the largest Slovenian, Hungarian, and Serbian universities. As far as papers listed by the Social Science Citation Index are concerned, since 2000 the University of Zagreb has exhibited the best results in social sciences compared to the competing universities, which is not the case in “hard” sciences. For the last 12 years, only the University of Ljubljana has shown better results in total research output than the University of Zagreb. The difference in research output between the University of Zagreb and the rest of the Croatian universities has been constantly decreasing. As a case study we compare research output at the Faculty of Civil Engineering at different Croatian universities. By analyzing European countries, we show a functional dependence between the gross domestic product (GDP) and the research output. From this fit we conclude that Croatian science exhibits a research output as expected for the given level of GDP.

  8. Software complex AS (automation of spectrometry). User interface of experiment automation system implementation

    International Nuclear Information System (INIS)

    Astakhova, N.V.; Beskrovnyj, A.I.; Bogdzel', A.A.; Butorin, P.E.; Vasilovskij, S.G.; Gundorin, N.A.; Zlokazov, V.B.; Kutuzov, S.A.; Salamatin, I.M.; Shvetsov, V.N.

    2003-01-01

    An instrumental software complex for automation of spectrometry (AS) that enables prompt realization of experiment automation systems for spectrometers, which use data buffering, has been developed. In the development, new methods of programming and building automation systems, together with novel network technologies, were employed. It is suggested that programs to schedule and conduct experiments should be based on the parametric model of the spectrometer, an approach that will make it possible to write programs suitable for any FLNP (Frank Laboratory of Neutron Physics) spectrometer and experimental technique applied and to use different hardware interfaces for introducing the spectrometric data into the data acquisition system. The article describes the possibilities provided to the user in the field of scheduling and control of the experiment, data viewing, and control of the spectrometer parameters. The possibility of presenting the current spectrometer state, programs and the experimental data on the Internet in the form of dynamically formed protocols and graphs, as well as of controlling the experiment via the Internet, is realized. To use the means of the Internet on the client side, no application programs are needed. It suffices to know how to use the two programs to carry out experiments in the automated mode. The package is designed for experiments in condensed matter and nuclear physics and is ready for use. (author)

  9. Decomposing the misery index: A dynamic approach

    Directory of Open Access Journals (Sweden)

    Ivan K. Cohen

    2014-12-01

    Full Text Available The misery index (the unweighted sum of unemployment and inflation rates) was probably the first attempt to develop a single statistic to measure the level of a population’s economic malaise. In this letter, we develop a dynamic approach to decompose the misery index using two basic relations of modern macroeconomics: the expectations-augmented Phillips curve and Okun’s law. Our reformulation of the misery index is closer in spirit to Okun’s idea. However, we are able to offer an improved version of the index, mainly based on output and unemployment. Specifically, this new Okun’s index measures the level of economic discomfort as a function of three key factors: (1) the misery index in the previous period; (2) the output gap in growth rate terms; and (3) cyclical unemployment. This dynamic approach differs substantially from the standard one utilised to develop the misery index, and allows us to obtain an index with five main interesting features: (1) it focuses on output, unemployment and inflation; (2) it considers only objective variables; (3) it allows a distinction between short-run and long-run phenomena; (4) it places more importance on output and unemployment rather than inflation; and (5) it weights recessions more than expansions.
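
    A stylized sketch of the decomposition (illustrative notation, not the authors' exact equations): combining the definition of the index with an expectations-augmented Phillips curve and a growth-rate form of Okun's law,

      M_t = u_t + \pi_t, \qquad \pi_t = \pi_{t-1} - \beta\,(u_t - u_t^{n}), \qquad u_t - u_{t-1} = -\gamma\,(g_t - g^{*}),

    gives

      M_t = M_{t-1} - \gamma\,(g_t - g^{*}) - \beta\,(u_t - u_t^{n}),

    so current discomfort is driven by last period's misery index, the output gap in growth-rate terms, and cyclical unemployment, matching the three factors listed above.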

  10. Dynamic Coordinated Shifting Control of Automated Mechanical Transmissions without a Clutch in a Plug-In Hybrid Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Xinlei Liu

    2012-08-01

    Full Text Available On the basis of the shifting process of automated mechanical transmissions (AMTs) for traditional hybrid electric vehicles (HEVs), and by combining the features of electric machines with fast response speed, the dynamic model of the hybrid electric AMT vehicle powertrain is built up, the dynamic characteristics of each phase of the shifting process are analyzed, and a control strategy in which torque and speed of the engine and electric machine are coordinatively controlled to achieve AMT shifting control for a plug-in hybrid electric vehicle (PHEV) without clutch is proposed. In the shifting process, the engine and electric machine are well controlled, and the shift jerk and power interruption and restoration time are reduced. Simulation and real car test results show that the proposed control strategy can more efficiently improve the shift quality for PHEVs equipped with AMTs.

  11. Automation of Taxiing

    Directory of Open Access Journals (Sweden)

    Jaroslav Bursík

    2017-01-01

    Full Text Available The article focuses on the possibility of automating taxiing, which is the part of a flight that, under adverse weather conditions, greatly reduces the operational usability of an airport, and the only part of a flight that has not yet been affected by automation. Taxiing is currently handled manually by the pilot, who controls the airplane based on information from visual perception. The article primarily deals with possible ways of obtaining navigational information and its automatic transfer to the controls. Analyzed and assessed were currently available technologies such as computer vision, Light Detection and Ranging, and the Global Navigation Satellite System, which are useful for navigation, and their general implementation into an airplane was designed. Obstacles to the implementation were identified, too. The result is a proposed combination of systems along with their installation into the airplane’s systems so that automated taxiing becomes possible.

  12. Automating the radiographic NDT process

    International Nuclear Information System (INIS)

    Aman, J.K.

    1986-01-01

    Automation, the removal of the human element in inspection, has not been generally applied to film radiographic NDT. The justification for automating is not only productivity but also reliability of results. Film remains in the automated system of the future because of its extremely high image content, approximately 8 × 10⁹ bits per 14 x 17 (inch) film, the equivalent of 2200 computer floppy discs. Parts handling systems and robotics, applied for manufacturing and some NDT modalities, should now be applied to film radiographic NDT systems. Automatic film handling can be achieved with the daylight NDT film handling system. Automatic film processing is becoming the standard in industry and can be coupled to the daylight system. Robots offer the opportunity to automate fully the exposure step. Finally, computer aided interpretation appears on the horizon. A unit which laser scans a 14 x 17 (inch) film in 6 - 8 seconds can digitize film information for further manipulation and possible automatic interrogations (computer aided interpretation). The system called FDRS (for Film Digital Radiography System) is moving toward 50 micron (approximately 16 lines/mm) resolution. This is believed to meet the majority of image content needs. We expect the automated system to appear first in parts (modules) as certain operations are automated. The future will see it all come together in an automated film radiographic NDT system. (author)

  13. Automated vessel segmentation using cross-correlation and pooled covariance matrix analysis.

    Science.gov (United States)

    Du, Jiang; Karimi, Afshin; Wu, Yijing; Korosec, Frank R; Grist, Thomas M; Mistretta, Charles A

    2011-04-01

    Time-resolved contrast-enhanced magnetic resonance angiography (CE-MRA) provides contrast dynamics in the vasculature and allows vessel segmentation based on temporal correlation analysis. Here we present an automated vessel segmentation algorithm including automated generation of regions of interest (ROIs), cross-correlation and pooled sample covariance matrix analysis. The dynamic images are divided into multiple equal-sized regions. In each region, ROIs for artery, vein and background are generated using an iterative thresholding algorithm based on the contrast arrival time map and contrast enhancement map. Region-specific multi-feature cross-correlation analysis and pooled covariance matrix analysis are performed to calculate the Mahalanobis distances (MDs), which are used to automatically separate arteries from veins. This segmentation algorithm is applied to a dual-phase dynamic imaging acquisition scheme where low-resolution time-resolved images are acquired during the dynamic phase followed by high-frequency data acquisition at the steady-state phase. The segmented low-resolution arterial and venous images are then combined with the high-frequency data in k-space and inverse Fourier transformed to form the final segmented arterial and venous images. Results from volunteer and patient studies demonstrate the advantages of this automated vessel segmentation and dual phase data acquisition technique. Copyright © 2011 Elsevier Inc. All rights reserved.
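
    The sketch below illustrates the pooled-covariance / Mahalanobis-distance step described above: each voxel carries a small feature vector (e.g. correlations with arterial and venous reference curves and a contrast arrival time), a pooled sample covariance is formed from the automatically generated artery and vein ROIs, and each voxel is assigned to the class with the smaller Mahalanobis distance. The feature values and ROIs here are synthetic placeholders, not CE-MRA data.

      # Pooled-covariance Mahalanobis-distance classification of one vessel voxel.
      import numpy as np

      rng = np.random.default_rng(3)

      # Features sampled from automatically generated artery / vein ROIs.
      artery_roi = rng.normal([0.9, 0.2, 5.0], [0.05, 0.1, 1.0], size=(200, 3))
      vein_roi = rng.normal([0.3, 0.8, 12.0], [0.1, 0.05, 1.5], size=(200, 3))

      def centred(a):
          return a - a.mean(axis=0)

      # Pooled sample covariance matrix of the two training ROIs.
      n_a, n_v = len(artery_roi), len(vein_roi)
      pooled_cov = (centred(artery_roi).T @ centred(artery_roi) +
                    centred(vein_roi).T @ centred(vein_roi)) / (n_a + n_v - 2)
      cov_inv = np.linalg.inv(pooled_cov)

      def mahalanobis(x, mean):
          d = x - mean
          return np.sqrt(d @ cov_inv @ d)

      mu_a, mu_v = artery_roi.mean(axis=0), vein_roi.mean(axis=0)

      voxel = np.array([0.8, 0.35, 6.5])                 # one candidate vessel voxel
      d_a, d_v = mahalanobis(voxel, mu_a), mahalanobis(voxel, mu_v)
      label = "artery" if d_a < d_v else "vein"
      print(f"MD to artery {d_a:.2f}, MD to vein {d_v:.2f} -> {label}")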

  14. Multi-output Laplacian Dynamic Ordinal Regression for Facial Expression Recognition and Intensity Estimation

    NARCIS (Netherlands)

    Rudovic, Ognjen; Pavlovic, Vladimir; Pantic, Maja

    2012-01-01

    Automated facial expression recognition has received increased attention over the past two decades. Existing works in the field usually do not encode either the temporal evolution or the intensity of the observed facial displays. They also fail to jointly model multidimensional (multi-class)

  15. Output-only modal parameter estimator of linear time-varying structural systems based on vector TAR model and least squares support vector machine

    Science.gov (United States)

    Zhou, Si-Da; Ma, Yuan-Chen; Liu, Li; Kang, Jie; Ma, Zhi-Sai; Yu, Lei

    2018-01-01

    Identification of time-varying modal parameters contributes to the structural health monitoring, fault detection, vibration control, etc. of operational time-varying structural systems. However, it is a challenging task because no more information is available for identifying time-varying systems than for time-invariant systems. This paper presents a vector time-dependent autoregressive model and least squares support vector machine based modal parameter estimator for linear time-varying structural systems in the case of output-only measurements. To reduce the computational cost, a Wendland compactly supported radial basis function is used to achieve the sparsity of the Gram matrix. A Gamma-test-based non-parametric approach for selecting the regularization factor is adapted for the proposed estimator to replace time-consuming n-fold cross validation. A series of numerical examples have illustrated the advantages of the proposed modal parameter estimator in suppressing overestimation and in handling short data records. A laboratory experiment has further validated the proposed estimator.
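
    As an illustration of how a compactly supported kernel keeps the Gram matrix sparse, the sketch below builds a least squares support vector machine regression with one common member of the Wendland family. It is a generic LS-SVM sketch on toy data, not the time-varying modal estimator of the cited paper; the kernel width and regularisation factor are assumptions.

      # LS-SVM regression with a Wendland compactly supported radial basis function.
      import numpy as np

      def wendland_c2(r):
          # phi(r) = (1 - r)^4 (4 r + 1) for r < 1, zero otherwise (compact support).
          return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

      rng = np.random.default_rng(4)
      x = np.linspace(0.0, 1.0, 80)
      y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=x.size)   # toy target

      support, gamma = 0.25, 100.0                        # kernel radius, regularisation
      r = np.abs(x[:, None] - x[None, :]) / support
      K = wendland_c2(r)                                  # sparse-by-construction Gram matrix

      # LS-SVM dual system:  [ 0     1^T   ] [b]   [0]
      #                      [ 1   K + I/g ] [a] = [y]
      n = x.size
      A = np.zeros((n + 1, n + 1))
      A[0, 1:] = 1.0
      A[1:, 0] = 1.0
      A[1:, 1:] = K + np.eye(n) / gamma
      rhs = np.concatenate(([0.0], y))
      sol = np.linalg.solve(A, rhs)
      b, alpha = sol[0], sol[1:]

      y_hat = K @ alpha + b                               # fitted values at training points
      print("sparsity of Gram matrix: %.1f%% zeros" % (100.0 * (K == 0).mean()))
      print("training RMS error: %.4f" % np.sqrt(np.mean((y_hat - y) ** 2)))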

  16. Automated Selection of Hotspots (ASH): enhanced automated segmentation and adaptive step finding for Ki67 hotspot detection in adrenal cortical cancer.

    Science.gov (United States)

    Lu, Hao; Papathomas, Thomas G; van Zessen, David; Palli, Ivo; de Krijger, Ronald R; van der Spek, Peter J; Dinjens, Winand N M; Stubbs, Andrew P

    2014-11-25

    In prognosis and therapeutics of adrenal cortical carcinoma (ACC), the selection of the most active areas in proliferative rate (hotspots) within a slide and objective quantification of the immunohistochemical Ki67 Labelling Index (LI) are of critical importance. In addition to intratumoral heterogeneity in proliferative rate, i.e. levels of Ki67 expression within a given ACC, lack of uniformity and reproducibility in the method of quantification of Ki67 LI may confound an accurate assessment of Ki67 LI. We have implemented an open source toolset, Automated Selection of Hotspots (ASH), for automated hotspot detection and quantification of Ki67 LI. ASH utilizes the NanoZoomer Digital Pathology Image (NDPI) splitter to convert the specific NDPI format digital slide scanned from the Hamamatsu instrument into a conventional tiff or jpeg format image for automated segmentation and an adaptive step-finding hotspot detection algorithm. Quantitative hotspot ranking is provided by the functionality from the open source application ImmunoRatio as part of the ASH protocol. The output is a ranked set of hotspots with concomitant quantitative values based on whole slide ranking. We have implemented open source automated detection and quantitative ranking of hotspots to support histopathologists in selecting the 'hottest' hotspot areas in adrenocortical carcinoma. To provide the wider community easy access to ASH, we implemented a Galaxy virtual machine (VM) of ASH which is available from http://bioinformatics.erasmusmc.nl/wiki/Automated_Selection_of_Hotspots . The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/13000_2014_216.

  17. Automated Root Tracking with "Root System Analyzer"

    Science.gov (United States)

    Schnepf, Andrea; Jin, Meina; Ockert, Charlotte; Bol, Roland; Leitner, Daniel

    2015-04-01

    Crucial factors for plant development are water and nutrient availability in soils. Thus, root architecture is a main aspect of plant productivity and needs to be accurately considered when describing root processes. Images of root architecture contain a huge amount of information, and image analysis helps to recover parameters describing certain root architectural and morphological traits. The majority of imaging systems for root systems are designed for two-dimensional images, such as RootReader2, GiA Roots, SmartRoot, EZ-Rhizo, and Growscreen, but most of them are semi-automated and involve mouse-clicks in each root by the user. "Root System Analyzer" is a new, fully automated approach for recovering root architectural parameters from two-dimensional images of root systems. Individual roots can still be corrected manually in a user interface if required. The algorithm starts with a sequence of segmented two-dimensional images showing the dynamic development of a root system. For each image, morphological operators are used for skeletonization. Based on this, a graph representation of the root system is created. A dynamic root architecture model helps to determine which edges of the graph belong to an individual root. The algorithm elongates each root at the root tip and simulates growth confined within the already existing graph representation. The increment of root elongation is calculated assuming constant growth. For each root, the algorithm finds all possible paths and elongates the root in the direction of the optimal path. In this way, each edge of the graph is assigned to one or more coherent roots. Image sequences of root systems are handled in such a way that the previous image is used as a starting point for the current image. The algorithm is implemented in a set of Matlab m-files. Output of Root System Analyzer is a data structure that includes for each root an identification number, the branching order, the time of emergence, the parent

  18. Robust ℋ∞ Dynamic Output Feedback Control Synthesis with Pole Placement Constraints for Offshore Wind Turbine Systems

    Directory of Open Access Journals (Sweden)

    Tore Bakka

    2012-01-01

    Full Text Available The problem of robust ℋ∞ dynamic output feedback control design with pole placement constraints is studied for a linear parameter-varying model of a floating wind turbine. A nonlinear model is obtained and linearized using the FAST software developed for wind turbines. The main contributions of this paper are threefold. Firstly, a family of linear models are represented based on an affine parameter-varying model structure for a wind turbine system. Secondly, the bounded parameter-varying parameters are removed using upper bounded inequalities in the control design process. Thirdly, the control problem is formulated in terms of linear matrix inequalities (LMIs). The simulation results show a comparison between controller design based on a constant linear model and a controller design for the linear parameter-varying model. The results show the effectiveness of our proposed design technique.

  19. SOS based robust H(∞) fuzzy dynamic output feedback control of nonlinear networked control systems.

    Science.gov (United States)

    Chae, Seunghwan; Nguang, Sing Kiong

    2014-07-01

    In this paper, a methodology for designing a fuzzy dynamic output feedback controller for discrete-time nonlinear networked control systems is presented where the nonlinear plant is modelled by a Takagi-Sugeno fuzzy model and the network-induced delays by a finite state Markov process. The transition probability matrix for the Markov process is allowed to be partially known, providing a more practical consideration of the real world. Furthermore, the fuzzy controller's membership functions and premise variables are not assumed to be the same as the plant's membership functions and premise variables, that is, the proposed approach can handle the case when the premise variables of the plant are not measurable or delayed. The membership functions of the plant and the controller are approximated as polynomial functions, then incorporated into the controller design. Sufficient conditions for the existence of the controller are derived in terms of sum of square inequalities, which are then solved by YALMIP. Finally, a numerical example is used to demonstrate the validity of the proposed methodology.

  20. Building automation: Photovoltaic assisted thermal comfort management system for energy saving

    International Nuclear Information System (INIS)

    Khan, M Reyasudin Basir; Jidin, Razali; Shaaya, Sharifah Azwa; Pasupuleti, Jagadeesh

    2013-01-01

    Building automation plays a key role in reducing building energy consumption and providing comfort for building occupants. Air conditioning system operating features are often ignored in building automation, which can result in thermal discomfort among building occupants. Most building automation systems are expensive and incur high maintenance costs. Such systems also do not support electricity demand side management, such as load shifting. This paper discusses a centralized monitoring system for room temperature and photovoltaic (PV) output for a feasibility study of a PV assisted air conditioning system in small office buildings. The architecture of the system consists of PV modules and sensor nodes located in each room. Wireless sensor network (WSN) technology is used for data transmission. The data from temperature sensors and PV modules are transmitted wirelessly to the host personal computer (PC) using Zigbee modules. A microcontroller based USB data acquisition device is used to receive data from the sensor nodes and displays the data on the PC.

  1. Building automation: Photovoltaic assisted thermal comfort management system for energy saving

    Science.gov (United States)

    Reyasudin Basir Khan, M.; Jidin, Razali; Pasupuleti, Jagadeesh; Azwa Shaaya, Sharifah

    2013-06-01

    Building automation plays a key role in reducing building energy consumption and providing comfort for building occupants. Air conditioning system operating features are often ignored in building automation, which can result in thermal discomfort among building occupants. Most building automation systems are expensive and incur high maintenance costs. Such systems also do not support electricity demand side management, such as load shifting. This paper discusses a centralized monitoring system for room temperature and photovoltaic (PV) output for a feasibility study of a PV assisted air conditioning system in small office buildings. The architecture of the system consists of PV modules and sensor nodes located in each room. Wireless sensor network (WSN) technology is used for data transmission. The data from temperature sensors and PV modules are transmitted wirelessly to the host personal computer (PC) using Zigbee modules. A microcontroller based USB data acquisition device is used to receive data from the sensor nodes and displays the data on the PC.

  2. WARACS: Wrappers to Automate the Reconstruction of Ancestral Character States

    Science.gov (United States)

    Gruenstaeudl, Michael

    2016-01-01

    Premise of the study: Reconstructions of ancestral character states are among the most widely used analyses for evaluating the morphological, cytological, or ecological evolution of an organismic lineage. The software application Mesquite remains the most popular application for such reconstructions among plant scientists, even though its support for automating complex analyses is limited. A software tool is needed that automates the reconstruction and visualization of ancestral character states with Mesquite and similar applications. Methods and Results: A set of command line–based Python scripts was developed that (a) communicates standardized input to and output from the software applications Mesquite, BayesTraits, and TreeGraph2; (b) automates the process of ancestral character state reconstruction; and (c) facilitates the visualization of reconstruction results. Conclusions: WARACS provides a simple tool that streamlines the reconstruction and visualization of ancestral character states over a wide array of parameters, including tree distribution, character state, and optimality criterion. PMID:26949580

  3. JPLEX: Java Simplex Implementation with Branch-and-Bound Search for Automated Test Assembly

    Science.gov (United States)

    Park, Ryoungsun; Kim, Jiseon; Dodd, Barbara G.; Chung, Hyewon

    2011-01-01

    JPLEX, short for Java simPLEX, is an automated test assembly (ATA) program. It is a mixed integer linear programming (MILP) solver written in Java. It reads in a configuration file, solves the minimization problem, and produces an output file for postprocessing. It implements the simplex algorithm to create a fully relaxed solution and…

  4. An automated procedure for the quality assurance of electron beam output and energy

    International Nuclear Information System (INIS)

    Woo, M.K.; Videla, N.G.

    2004-01-01

    In this article, we report on the development of a simple and accurate method for quality assurance of electron beam output and energy. Aluminum disks of thickness d_max or d_50 for the particular electron energy are positioned sequentially over a parallel-plate ion chamber and the ratio of the two signals is compared to the standard. The positioning of the aluminum disks is carried out remotely and automatically to eliminate the necessity of multiple setups. One method utilizes the remote control feature of the treatment couch and another employs a motor-driven carousel. The superior sensitivity over a commercially available energy monitor is illustrated.
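
    The sketch below shows the constancy check implied above: the ratio of the ionisation signal behind the d_max-thick disk to the signal behind the d_50-thick disk is compared against the baseline recorded as the standard. The readings and the tolerance are made-up placeholders, not values from the article.

      # Electron beam output/energy constancy check from two disk readings.
      baseline_ratio = 1.52          # M(d_max) / M(d_50) recorded as the standard
      tolerance = 0.02               # assumed 2% action level on the ratio

      m_dmax, m_d50 = 20.31, 13.29   # today's electrometer readings (nC)
      ratio = m_dmax / m_d50
      deviation = (ratio - baseline_ratio) / baseline_ratio
      print(f"ratio {ratio:.3f}, deviation {100 * deviation:+.1f}% "
            f"-> {'OK' if abs(deviation) <= tolerance else 'investigate energy/output'}")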

  5. An Optimal Augmented Monotonic Tracking Controller for Aircraft Engines with Output Constraints

    Directory of Open Access Journals (Sweden)

    Jiakun Qin

    2017-01-01

    Full Text Available This paper proposes a novel min-max control scheme for aircraft engines, with the aim of transferring a set of regulated outputs between two set-points, while ensuring a set of auxiliary outputs remain within prescribed constraints. In view of this, an optimal augmented monotonic tracking controller (OAMTC) is proposed, by considering a linear plant with input integration, to enhance the ability of the control system to reject uncertainty in system parameters and to ensure that limits are not crossed. The key idea is to use the eigenvalue and eigenvector placement method and genetic algorithms to shape the output responses. The approach is validated by numerical simulation. The results show that the designed OAMTC controller can achieve a satisfactory dynamic and steady-state performance and keep the auxiliary outputs within constraints in the transient regime.

  6. Studies of works management and automation of nuclear power installations

    International Nuclear Information System (INIS)

    Besch, P.; Grossmann, J.; Hollasky, R.

    1989-01-01

    The erection and operation of nuclear power installations require investigations of their safety and availability. The work performed on the management of nuclear power plants and nuclear heating stations in the Working Group on Automation Engineering of the Dresden University of Technology is presented. The emphasis of the work is on the simulation of the dynamic performance of the plants and on studies of the utilization of novel techniques for plant automation and process management. (author)

  7. An optimized method for automated analysis of algal pigments by HPLC

    NARCIS (Netherlands)

    van Leeuwe, M. A.; Villerius, L. A.; Roggeveld, J.; Visser, R. J. W.; Stefels, J.

    2006-01-01

    A recent development in algal pigment analysis by high-performance liquid chromatography (HPLC) is the application of automation. An optimization of a complete sampling and analysis protocol applied specifically in automation has not yet been performed. In this paper we show that automation can only

  8. Expressing Intervals in Automated Service Negotiation

    Science.gov (United States)

    Clark, Kassidy P.; Warnier, Martijn; van Splunter, Sander; Brazier, Frances M. T.

    During automated negotiation of services between autonomous agents, utility functions are used to evaluate the terms of negotiation. These terms often include intervals of values which are prone to misinterpretation. It is often unclear if an interval embodies a continuum of real numbers or a subset of natural numbers. Furthermore, it is often unclear if an agent is expected to choose only one value, multiple values, a sub-interval or even multiple sub-intervals. Additional semantics are needed to clarify these issues. Normally, these semantics are stored in a domain ontology. However, ontologies are typically domain specific and static in nature. For dynamic environments, in which autonomous agents negotiate resources whose attributes and relationships change rapidly, semantics should be made explicit in the service negotiation. This paper identifies issues that are prone to misinterpretation and proposes a notation for expressing intervals. This notation is illustrated using an example in WS-Agreement.

  9. Event-triggered output feedback control for distributed networked systems.

    Science.gov (United States)

    Mahmoud, Magdi S; Sabih, Muhammad; Elshafei, Moustafa

    2016-01-01

    This paper addresses the problem of output-feedback communication and control within an event-triggered framework in the context of distributed networked control systems. The design problem of the event-triggered output-feedback control is proposed as a linear matrix inequality (LMI) feasibility problem. The scheme is developed for the distributed system where only partial states are available. In this scheme, a subsystem uses local observers and shares its information with its neighbors only when the subsystem's local error exceeds a specified threshold. The developed method is illustrated by using a coupled cart example from the literature. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
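
    The event-triggering idea described above can be sketched as follows. This is a minimal stand-in assuming a norm-based threshold on the deviation of the local observer state from the last transmitted value; it is not the LMI-based design of the paper, and the threshold and signals below are illustrative.

```python
import numpy as np


def should_transmit(x_hat_local, x_last_sent, threshold):
    """Event trigger: broadcast the local observer state only when the
    deviation from the last transmitted value exceeds the threshold."""
    return np.linalg.norm(x_hat_local - x_last_sent) > threshold


# Toy usage for one subsystem
x_last_sent = np.zeros(2)
threshold = 0.1
for k in range(5):
    x_hat = np.array([0.03 * k, 0.02 * k])   # stand-in for the local observer output
    if should_transmit(x_hat, x_last_sent, threshold):
        x_last_sent = x_hat                   # neighbors receive an update
        print(f"step {k}: transmit {x_hat}")
```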

  10. Automated processing of first-pass radionuclide angiocardiography by factor analysis of dynamic structures.

    Science.gov (United States)

    Cavailloles, F; Bazin, J P; Capderou, A; Valette, H; Herbert, J L; Di Paola, R

    1987-05-01

    A method for automatic processing of cardiac first-pass radionuclide studies is presented. This technique, factor analysis of dynamic structures (FADS), provides an automatic separation of anatomical structures according to their different temporal behaviour, even if they are superimposed. FADS has been applied to 76 studies. A description of factor patterns obtained in various pathological categories is presented. FADS provides easy diagnosis of shunts and tricuspid insufficiency. Quantitative information derived from the factors (cardiac output and mean transit time) was compared to that obtained by the region of interest method. Using FADS, a higher correlation with cardiac catheterization was found for cardiac output calculation. Thus compared to the ROI method, FADS presents obvious advantages: a good separation of overlapping cardiac chambers is obtained, and this operator-independent method provides more objective and reproducible results. A number of parameters of the cardio-pulmonary function can be assessed by first-pass radionuclide angiocardiography (RNA) [1,2]. Usually, they are calculated using time-activity curves (TAC) from regions of interest (ROI) drawn on the cardiac chambers and the lungs. This method has two main drawbacks: (1) the lack of inter- and intra-observer reproducibility; (2) the problem of crosstalk which affects the evaluation of the cardio-pulmonary performance. The crosstalk on planar imaging is due to anatomical superimposition of the cardiac chambers and lungs. The activity measured in any ROI is the sum of the activity in several organs and 'decontamination' of the TAC cannot easily be performed using the ROI method [3]. Factor analysis of dynamic structures (FADS) [4,5] can solve the two problems mentioned above. It provides an automatic separation of anatomical structures according to their different temporal behaviour, even if they are superimposed. The resulting factors are estimates of the time evolution of the activity in each
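
    The core idea of FADS, separating superimposed structures by their temporal behaviour, can be illustrated with a generic non-negative factorization of the dynamic image series. The sketch below uses scikit-learn's NMF purely as an analogy on synthetic data; it is not the authors' factor model or constraints.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic first-pass study: 3 structures with different time-activity curves,
# mixed into 100 pixels observed over 60 frames.
rng = np.random.default_rng(0)
t = np.linspace(0, 30, 60)
curves = np.stack([np.exp(-((t - c) ** 2) / 8.0) for c in (5.0, 12.0, 20.0)])  # temporal factors
mixing = rng.random((100, 3))                                                   # factor images
frames = mixing @ curves + 0.01 * rng.random((100, 60))                         # pixels x time

# Factor the dynamic series into non-negative factor images and time curves.
model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
factor_images = model.fit_transform(frames)   # (pixels, 3): spatial extent of each structure
factor_curves = model.components_             # (3, frames): temporal behaviour of each structure
print(factor_images.shape, factor_curves.shape)
```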

  11. Programmable automation systems in PSA

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1997-06-01

    The Finnish safety authority (STUK) requires plant-specific PSAs, and quantitative safety goals are set on different levels. The reliability analysis is more problematic when critical safety functions are realized by applying programmable automation systems. Conventional modeling techniques do not necessarily apply to the analysis of these systems, and the quantification seems to be impossible. However, it is important to analyze the contribution of programmable automation systems to plant safety, and PSA is the only method with a system-analytical view of safety. This report discusses the applicability of PSA methodology (fault tree analyses, failure modes and effects analyses) in the analysis of programmable automation systems. The problem of how to decompose programmable automation systems for reliability modeling purposes is discussed. In addition to the qualitative analysis and structural reliability modeling issues, the possibility to evaluate failure probabilities of programmable automation systems is considered. One solution to the quantification issue is the use of expert judgements, and the principles for applying expert judgements are discussed in the paper. A framework to apply expert judgements is outlined. Further, the impacts of subjective estimates on the interpretation of PSA results are discussed. (orig.) (13 refs.)

  12. A conceptual model of the automated credibility assessment of the volunteered geographic information

    International Nuclear Information System (INIS)

    Idris, N H; Jackson, M J; Ishak, M H I

    2014-01-01

    The use of Volunteered Geographic Information (VGI) in collecting, sharing and disseminating geospatially referenced information on the Web is increasingly common. The potential of this localized and collective information has been seen to complement the maintenance process of authoritative mapping data sources and to support the realization of Digital Earth. The main barrier to the use of these data in supporting this bottom-up approach is the credibility (trust), completeness, accuracy, and quality of both the data input and the outputs generated. The only feasible approach to assess these data is by relying on an automated process. This paper describes a conceptual model of indicators (parameters) and practical approaches to automatically assess the credibility of information contributed through VGI, including map mashups, Geo Web and crowd-sourced applications. There are two main components proposed to be assessed in the conceptual model: metadata and data. The metadata component comprises the indicators of the hosting (websites) and the sources of data/information. The data component comprises the indicators to assess absolute and relative data positioning, attribute, thematic, temporal and geometric correctness and consistency. This paper suggests approaches to assess these components. To assess the metadata component, automated text categorization using supervised machine learning is proposed. To assess the correctness and consistency in the data component, we suggest a matching validation approach using the current emerging technologies from Linked Data infrastructures and using third-party review validation. This study contributes to the research domain that focuses on the credibility, trust and quality issues of data contributed by web citizen providers.

  13. Multi-model MPC with output feedback

    Directory of Open Access Journals (Sweden)

    J. M. Perez

    2014-03-01

    Full Text Available In this work, a new formulation is presented for the model predictive control (MPC of a process system that is represented by a finite set of models, each one corresponding to a different operating point. The general case is considered of systems with stable and integrating outputs in closed-loop with output feedback. For this purpose, the controller is based on a non-minimal order model where the state is built with the measured outputs and the manipulated inputs of the control system. Therefore, the state can be considered as perfectly known and, consequently, there is no need to include a state observer in the control loop. This property of the proposed modeling approach is convenient to extend previous stability results of the closed loop system with robust MPC controllers based on state feedback. The controller proposed here is based on the solution of two optimization problems that are solved sequentially at the same time step. The method is illustrated with a simulated example of the process industry. The rigorous simulation of the control of an adiabatic flash of a multi-component hydrocarbon mixture illustrates the application of the robust controller. The dynamic simulation of this process is performed using EMSO - Environment Model Simulation and Optimization. Finally, a comparison with a linear MPC using a single model is presented.
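
    A minimal sketch of the non-minimal state construction described above, assuming the controller state is stacked from the most recent measured outputs and manipulated inputs so that no observer is needed. The function name and window lengths are illustrative only, not the paper's formulation.

```python
import numpy as np


def non_minimal_state(y_hist, u_hist, ny=2, nu=2):
    """Build the controller state from measured outputs and past inputs only.

    y_hist, u_hist: lists of measurement / input vectors, most recent last.
    ny, nu: number of past outputs and inputs kept in the state.
    """
    ys = [np.atleast_1d(y) for y in y_hist[-ny:]]
    us = [np.atleast_1d(u) for u in u_hist[-nu:]]
    return np.concatenate(ys + us)


# Toy usage with scalar output and input
y_hist = [1.0, 1.1, 1.3]
u_hist = [0.2, 0.25, 0.3]
print(non_minimal_state(y_hist, u_hist))  # [1.1  1.3  0.25 0.3 ]
```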

  14. Automated gray level index measurements reveal only minor cytoarchitectonic changes of Brodmann area 9 in schizophrenia.

    Science.gov (United States)

    Tepest, Ralf; Vogeley, Kai; Viebahn, Bettina; Schneider-Axmann, Thomas; Honer, William G; Falkai, Peter

    2008-07-15

    Using an automatized gray level index (GLI) method, we recently found cytoarchitectonic abnormalities in schizophrenia in Brodmann area 10 (BA10) [Vogeley, K., Tepest, R., Schneider-Axmann, T., Hutte, H., Zilles, K., Honer, W.G., Falkai, P., 2003. Automated image analysis of disturbed cytoarchitecture in Brodmann area 10 in schizophrenia, Schizophrenia Research 62, 133-140]. As another potential key region involved in the pathophysiology of schizophrenia, we have now investigated BA9 in the same sample consisting of 20 schizophrenic cases and 20 controls. The GLI value represents the area-percentage covered by perikarya in measuring fields of microscopic images. BA9 was analyzed with respect to the factors diagnosis and gender for six different compartments approximately corresponding to the neocortical layers. The main result in BA9 was a significant interaction of diagnosis and gender for GLI in layers IV and V on the left side. Subsequent analyses separately performed concerning gender revealed a significant GLI increase in layer V on the left side in male patients compared with controls. However, after an adjustment of error probabilities for multiple testing, differences did not reach significance. No GLI difference was observed in the sample between diagnostic groups for females and between the diagnostic groups in general. Comparisons with our BA10 results suggest that cytoarchitectural changes relevant to schizophrenia appear different in various Brodmann areas. Since increases in GLI were found only in selected layers (V and VI) of BA9, these findings do not support a generalized neuropil reduction across all cortical layers.

  15. New Sufficient LMI Conditions for Static Output Stabilization

    DEFF Research Database (Denmark)

    Adegas, Fabiano Daher

    2014-01-01

    This paper presents new linear matrix inequality conditions to the static output feedback stabilization problem. Although the conditions are only sufficient, numerical experiments show excellent success rates in finding a stabilizing controller.

  16. Dynamic Travel Time Prediction Models for Buses Using Only GPS Data

    Directory of Open Access Journals (Sweden)

    Wei Fan

    2015-01-01

    Full Text Available Providing real-time and accurate travel time information of transit vehicles can be very helpful as it assists passengers in planning their trips to minimize waiting times. The purpose of this research is to develop and compare dynamic travel time prediction models which can provide accurate prediction of bus travel time in order to give real-time information at a given downstream bus stop using only global positioning system (GPS data. Historical Average (HA, Kalman Filtering (KF and Artificial Neural Network (ANN models are considered and developed in this paper. A case has been studied by making use of the three models. Promising results are obtained from the case study, indicating that the models can be used to implement an Advanced Public Transport System. The implementation of this system could assist transit operators in improving the reliability of bus services, thus attracting more travelers to transit vehicles and helping relieve congestion. The performances of the three models were assessed and compared with each other under two criteria: overall prediction accuracy and robustness. It was shown that the ANN outperformed the other two models in both aspects. In conclusion, it is shown that bus travel time information can be reasonably provided using only arrival and departure time information at stops even in the absence of traffic-stream data.
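
    As a hedged illustration of the Kalman filtering approach for a single link, a scalar filter can track the travel time between two stops from successive GPS-derived observations. The noise variances and data below are arbitrary tuning values for the sketch, not those of the study.

```python
import numpy as np


def kf_travel_time(observations, q=4.0, r=25.0, x0=300.0, p0=100.0):
    """Scalar Kalman filter tracking the travel time (seconds) on one link.

    observations: travel times measured from successive bus GPS traces.
    q, r: process and measurement noise variances (tuning parameters).
    Returns the filtered estimate after each observation.
    """
    x, p = x0, p0
    estimates = []
    for z in observations:
        # Predict: travel time is assumed to follow a random walk between buses.
        p = p + q
        # Update with the newly observed travel time.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)


print(kf_travel_time([310, 295, 330, 360, 355]))
```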

  17. Science policy through stimulating scholarly output. Reanalyzing the Australian case

    Energy Technology Data Exchange (ETDEWEB)

    Van den Besselaar, P.; Heyman, U.; Sandström, U.

    2016-07-01

    There is a long-standing debate about perverse effects of performance indicators. A main target is science policy that uses the stimulation of output as an instrument. The criticism is to a large extent based on a study of the Australian science policy in the early 1990s. Linda Butler studied the effects and argued that the effect was not only a growth in output but also a decrease in the average quality of that output. These results have been cited many times. In this paper we reanalyze this case and show that the analysis of Butler was wrong: the new Australian science policy not only increased the output of the system but also raised its quality. We discuss the implications. (Author)

  18. Innovative automation solutions applied to nuclear fuel production and inspection processes

    International Nuclear Information System (INIS)

    Vas, Ananth

    2012-01-01

    The nuclear industry in India is slated for fast-paced growth in the coming years, with a great focus on increasing the capacity for producing, inspecting and ultimately reprocessing nuclear fuel. Modern techniques of industrial automation such as robotics, machine vision and laser-based systems have been deployed extensively to improve the productivity and output of existing and future installations, particularly for the fuel handling stages mentioned.

  19. Multiobjective Output Feedback Control of a Class of Stochastic Hybrid Systems with State-Dependent Noise

    Directory of Open Access Journals (Sweden)

    S. Aberkane

    2007-01-01

    Full Text Available This paper deals with dynamic output feedback control of continuous-time active fault tolerant control systems with Markovian parameters (AFTCSMP) and state-dependent noise. The main contribution is to formulate conditions for multiperformance design, related to this class of stochastic hybrid systems, that take into account the problems resulting from the fact that the controller only depends on the fault detection and isolation (FDI) process. The specifications and objectives under consideration include stochastic stability, ℋ2 and ℋ∞ (or more generally, stochastic integral quadratic constraints) performances. Results are formulated as matrix inequalities. The theoretical results are illustrated using a classical example from the literature.

  20. An EEG Data Investigation Using Only Artifacts

    Science.gov (United States)

    2017-02-22

    In some conditions, an automation feature was implemented to help the participants find the HVT. When the HVT was within the sensor footprint, a tone...

  1. Distributional dynamics following a technological revolution

    OpenAIRE

    David Andolfatto; Eric Smith

    2001-01-01

    In this paper we explore the link between technological change and the dynamics of employment, production, and the distribution of earnings. Technological change not only advances society's collective capability but also changes the relative productivities of its members. The latter effect establishes the likely winners and losers from advances in productive capabilities, provides a mechanism that can generate cyclical fluctuations in output as well as employment, and determines the evolution...

  2. Improvement of a mesoscale atmospheric dynamic model PHYSIC. Utilization of output from synoptic numerical prediction model for initial and boundary condition

    International Nuclear Information System (INIS)

    Nagai, Haruyasu; Yamazawa, Hiromi

    1995-03-01

    This report describes the improvement of the mesoscale atmospheric dynamic model which is a part of the atmospheric dispersion calculation model PHYSIC. To introduce large-scale meteorological changes into the mesoscale atmospheric dynamic model, it is necessary to make the initial and boundary conditions of the model by using GPV (Grid Point Value) data, which are the output of the numerical weather prediction model of JMA (Japan Meteorological Agency). Therefore, the program which preprocesses the GPV data to make an input file for PHYSIC was developed, and the input process and the methods of spatial and temporal interpolation were improved to correspond to the file. Moreover, the methods of calculating the cloud amount and ground surface moisture from GPV data were developed and added to the model code. As examples of calculation by the improved model, the wind field simulations of a north-west monsoon in winter and a sea breeze in summer in the Tokai area were also presented. (author)

  3. Function allocation for humans and automation in the context of team dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Jeffrey C. Joe; John O' Hara; Jacques Hugo; Johanna Oxstrand

    2015-07-01

    Within Human Factors Engineering, a decision-making process called function allocation (FA) is used during the design life cycle of complex systems to distribute the system functions, often identified through a functional requirements analysis, to all human and automated machine agents (or teammates) involved in controlling the system. Most FA methods make allocation decisions primarily by comparing the capabilities of humans and automation, but then also by considering secondary factors such as cost, regulations, and the health and safety of workers. The primary analysis of the strengths and weaknesses of humans and machines, however, is almost always considered in terms of individual human or machine capabilities. Yet FA is fundamentally about teamwork, in that the goal of the FA decision-making process is to determine the optimal allocation of functions among agents. Given this framing of FA, and the increasing use and sophistication of automation, there are two related social psychological issues that current FA methods need to address more thoroughly. First, many principles for effective human teamwork are not considered as central decision points or in the iterative hypothesis and testing phase of most FA methods, even though it is clear that social factors have numerous positive and negative effects on individual and team capabilities. Second, social psychological factors affecting team performance can be difficult to translate to automated agents, and most FA methods currently do not account for this effect. The implications of these issues are discussed.

  4. Individual differences in the calibration of trust in automation.

    Science.gov (United States)

    Pop, Vlad L; Shrewsbury, Alex; Durso, Francis T

    2015-06-01

    The objective was to determine whether operators with an expectancy that automation is trustworthy are better at calibrating their trust to changes in the capabilities of automation, and if so, why. Studies suggest that individual differences in automation expectancy may be able to account for why changes in the capabilities of automation lead to a substantial change in trust for some, yet only a small change for others. In a baggage screening task, 225 participants searched for weapons in 200 X-ray images of luggage. Participants were assisted by an automated decision aid exhibiting different levels of reliability. Measures of expectancy that automation is trustworthy were used in conjunction with subjective measures of trust and perceived reliability to identify individual differences in trust calibration. Operators with high expectancy that automation is trustworthy were more sensitive to changes (both increases and decreases) in automation reliability. This difference was eliminated by manipulating the causal attribution of automation errors. Attributing the cause of automation errors to factors external to the automation fosters an understanding of tasks and situations in which automation differs in reliability and may lead to more appropriate trust. The development of interventions can lead to calibrated trust in automation. © 2014, Human Factors and Ergonomics Society.

  5. Automated electronic filter design

    CERN Document Server

    Banerjee, Amal

    2017-01-01

    This book describes a novel, efficient and powerful scheme for designing and evaluating the performance characteristics of any electronic filter designed with predefined specifications. The author explains techniques that enable readers to eliminate complicated manual, and thus error-prone and time-consuming, steps of traditional design techniques. The presentation includes demonstration of efficient automation, using an ANSI C language program, which accepts any filter design specification (e.g. Chebyschev low-pass filter, cut-off frequency, pass-band ripple etc.) as input and generates as output a SPICE (Simulation Program with Integrated Circuit Emphasis) format netlist. Readers then can use this netlist to run simulations with any version of the popular SPICE simulator, increasing accuracy of the final results, without violating any of the key principles of the traditional design scheme.
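
    The book's tool is an ANSI C program; the following Python sketch only illustrates the general idea of turning a filter specification into a SPICE-format netlist, here for a first-order RC low-pass. The component values, analysis card and function name are illustrative assumptions, not the book's implementation.

```python
import math


def rc_lowpass_netlist(cutoff_hz, r_ohms=1_000.0):
    """Generate a SPICE netlist for a first-order RC low-pass filter.

    Only the cutoff frequency and a chosen resistor value are needed;
    the capacitor follows from C = 1 / (2 * pi * R * fc).
    """
    c_farads = 1.0 / (2.0 * math.pi * r_ohms * cutoff_hz)
    return "\n".join([
        f"* First-order RC low-pass, fc = {cutoff_hz:g} Hz",
        "VIN in 0 AC 1",
        f"R1 in out {r_ohms:g}",
        f"C1 out 0 {c_farads:.3e}",
        ".AC DEC 20 1 1MEG",
        ".PRINT AC V(out)",
        ".END",
    ])


print(rc_lowpass_netlist(1_000.0))
```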

  6. Stabilization of nonlinear systems using sampled-data output-feedback fuzzy controller based on polynomial-fuzzy-model-based control approach.

    Science.gov (United States)

    Lam, H K

    2012-02-01

    This paper investigates the stability of sampled-data output-feedback (SDOF) polynomial-fuzzy-model-based control systems. Representing the nonlinear plant using a polynomial fuzzy model, an SDOF fuzzy controller is proposed to perform the control process using the system output information. As only the system output is available for feedback compensation, it is more challenging for the controller design and system analysis compared to the full-state-feedback case. Furthermore, because of the sampling activity, the control signal is kept constant by the zero-order hold during the sampling period, which complicates the system dynamics and makes the stability analysis more difficult. In this paper, two cases of SDOF fuzzy controllers, which either share the same number of fuzzy rules or not, are considered. The system stability is investigated based on the Lyapunov stability theory using the sum-of-squares (SOS) approach. SOS-based stability conditions are obtained to guarantee the system stability and synthesize the SDOF fuzzy controller. Simulation examples are given to demonstrate the merits of the proposed SDOF fuzzy control approach.

  7. Altering users' acceptance of automation through prior automation exposure.

    Science.gov (United States)

    Bekier, Marek; Molesworth, Brett R C

    2017-06-01

    Air navigation service providers worldwide see increased use of automation as one solution to overcome the capacity constraints embedded in the present air traffic management (ATM) system. However, increased use of automation within any system is dependent on user acceptance. The present research sought to determine if the point at which an individual is no longer willing to accept or cooperate with automation can be manipulated. Forty participants underwent training on a computer-based air traffic control programme, followed by two ATM exercises (order counterbalanced), one with and one without the aid of automation. Results revealed that after exposure to a task with automation assistance, user acceptance of high(er) levels of automation ('tipping point') decreased, suggesting it is indeed possible to alter automation acceptance. Practitioner Summary: This paper investigates whether the point at which a user of automation rejects automation (i.e. 'tipping point') is constant or can be manipulated. The results revealed that after exposure to a task with automation assistance, user acceptance of high(er) levels of automation decreased, suggesting it is possible to alter automation acceptance.

  8. Automated SEM and TEM sample preparation applied to copper/low k materials

    Science.gov (United States)

    Reyes, R.; Shaapur, F.; Griffiths, D.; Diebold, A. C.; Foran, B.; Raz, E.

    2001-01-01

    We describe the use of automated microcleaving for preparation of both SEM and TEM samples as done by SELA's new MC500 and TEMstation tools. The MC500 is an automated microcleaving tool that is capable of producing cleaves with 0.25 μm accuracy resulting in SEM-ready samples. The TEMstation is capable of taking a sample output from the MC500 (or from SELA's earlier MC200 tool) and producing a FIB ready slice of 25±5 μm, mounted on a TEM-washer and ready for FIB thinning to electron transparency for TEM analysis. The materials selected for the tool set evaluation mainly included the Cu/TaN/HOSP low-k system. The paper is divided into three sections, experimental approach, SEM preparation and analysis of HOSP low-k, and TEM preparation and analysis of Cu/TaN/HOSP low-k samples. For the samples discussed, data is presented to show the quality of preparation provided by these new automated tools.

  9. Comparing between predicted output temperature of flat-plate solar collector and experimental results: computational fluid dynamics and artificial neural network

    Directory of Open Access Journals (Sweden)

    F Nadi

    2017-05-01

    Full Text Available Introduction: The significance of solar energy as a clean, renewable energy source that causes no damage to the environment is great for the production of electricity and heat. Furthermore, due to the oil crisis as well as the possibility of reducing home heating costs by 70%, solar energy has been a favorite of many researchers in the past two decades. Solar collectors are devices for collecting solar radiant energy, through which this energy is converted into heat and the heat is then transferred to a fluid (usually air or water). Therefore, a key component in performance improvement of a solar heating system is optimization of the solar collector under different testing conditions. However, estimation of output parameters under different testing conditions is costly, time consuming and often impossible. As a result, smart use of neural networks as well as CFD (computational fluid dynamics) to predict the properties with which the desired output would be obtained is valuable. To the best of our knowledge, there are no studies that compare experimental results with both CFD and ANN. Materials and Methods: A corrugated galvanized iron sheet of 2 m length, 1 m width and 0.5 mm thickness was used as an absorber plate for absorbing the incident solar radiation (Fig. 1 and 2). Corrugations in the absorber caused turbulent air flow and improved the heat transfer coefficient. The K-ε turbulence model was used for the computational fluid dynamics simulation. The following assumptions are made in the analysis: (1) air is a continuous medium and incompressible; (2) the flow is steady and possesses turbulent flow characteristics, due to the high velocity of flow; (3) the thermo-physical properties of the absorber sheet and the absorber tube are constant with respect to the operating temperature; (4) the bottom side of the absorber tube and the absorber plate are assumed to be adiabatic. Artificial neural network: In this research a one-hidden-layer feed-forward network based on the

  10. PleurAlert: an augmented chest drainage system with electronic sensing, automated alerts and internet connectivity.

    Science.gov (United States)

    Leeson, Cory E; Weaver, Robert A; Bissell, Taylor; Hoyer, Rachel; McClain, Corinne; Nelson, Douglas A; Samosky, Joseph T

    2012-01-01

    We have enhanced a common medical device, the chest tube drainage container, with electronic sensing of fluid volume, automated detection of critical alarm conditions and the ability to automatically send alert text messages to a nurse's cell phone. The PleurAlert system provides a simple touch-screen interface and can graphically display chest tube output over time. Our design augments a device whose basic function dates back 50 years by adding technology to automate and optimize a monitoring process that can be time consuming and inconvenient for nurses. The system may also enhance detection of emergency conditions and speed response time.

  11. Seasonally-Dynamic Presence-Only Species Distribution Models for a Cryptic Migratory Bat Impacted by Wind Energy Development.

    Directory of Open Access Journals (Sweden)

    Mark A Hayes

    Full Text Available Understanding seasonal distribution and movement patterns of animals that migrate long distances is an essential part of monitoring and conserving their populations. Compared to migratory birds and other more conspicuous migrants, we know very little about the movement patterns of many migratory bats. Hoary bats (Lasiurus cinereus, a cryptic, wide-ranging, long-distance migrant, comprise a substantial proportion of the tens to hundreds of thousands of bat fatalities estimated to occur each year at wind turbines in North America. We created seasonally-dynamic species distribution models (SDMs from 2,753 museum occurrence records collected over five decades in North America to better understand the seasonal geographic distributions of hoary bats. We used 5 SDM approaches: logistic regression, multivariate adaptive regression splines, boosted regression trees, random forest, and maximum entropy and consolidated outputs to generate ensemble maps. These maps represent the first formal hypotheses for sex- and season-specific hoary bat distributions. Our results suggest that North American hoary bats winter in regions with relatively long growing seasons where temperatures are moderated by proximity to oceans, and then move to the continental interior for the summer. SDMs suggested that hoary bats are most broadly distributed in autumn-the season when they are most susceptible to mortality from wind turbines; this season contains the greatest overlap between potentially suitable habitat and wind energy facilities. Comparing wind-turbine fatality data to model outputs could test many predictions, such as 'risk from turbines is highest in habitats between hoary bat summering and wintering grounds'. Although future field studies are needed to validate the SDMs, this study generated well-justified and testable hypotheses of hoary bat migration patterns and seasonal distribution.

  12. Seasonally-Dynamic Presence-Only Species Distribution Models for a Cryptic Migratory Bat Impacted by Wind Energy Development.

    Science.gov (United States)

    Hayes, Mark A; Cryan, Paul M; Wunder, Michael B

    2015-01-01

    Understanding seasonal distribution and movement patterns of animals that migrate long distances is an essential part of monitoring and conserving their populations. Compared to migratory birds and other more conspicuous migrants, we know very little about the movement patterns of many migratory bats. Hoary bats (Lasiurus cinereus), a cryptic, wide-ranging, long-distance migrant, comprise a substantial proportion of the tens to hundreds of thousands of bat fatalities estimated to occur each year at wind turbines in North America. We created seasonally-dynamic species distribution models (SDMs) from 2,753 museum occurrence records collected over five decades in North America to better understand the seasonal geographic distributions of hoary bats. We used 5 SDM approaches: logistic regression, multivariate adaptive regression splines, boosted regression trees, random forest, and maximum entropy and consolidated outputs to generate ensemble maps. These maps represent the first formal hypotheses for sex- and season-specific hoary bat distributions. Our results suggest that North American hoary bats winter in regions with relatively long growing seasons where temperatures are moderated by proximity to oceans, and then move to the continental interior for the summer. SDMs suggested that hoary bats are most broadly distributed in autumn-the season when they are most susceptible to mortality from wind turbines; this season contains the greatest overlap between potentially suitable habitat and wind energy facilities. Comparing wind-turbine fatality data to model outputs could test many predictions, such as 'risk from turbines is highest in habitats between hoary bat summering and wintering grounds'. Although future field studies are needed to validate the SDMs, this study generated well-justified and testable hypotheses of hoary bat migration patterns and seasonal distribution.

  13. Seasonally-dynamic presence-only species distribution models for a cryptic migratory bat impacted by wind energy development

    Science.gov (United States)

    Hayes, Mark A.; Cryan, Paul M.; Wunder, Michael B.

    2015-01-01

    Understanding seasonal distribution and movement patterns of animals that migrate long distances is an essential part of monitoring and conserving their populations. Compared to migratory birds and other more conspicuous migrants, we know very little about the movement patterns of many migratory bats. Hoary bats (Lasiurus cinereus), a cryptic, wide-ranging, long-distance migrant, comprise a substantial proportion of the tens to hundreds of thousands of bat fatalities estimated to occur each year at wind turbines in North America. We created seasonally-dynamic species distribution models (SDMs) from 2,753 museum occurrence records collected over five decades in North America to better understand the seasonal geographic distributions of hoary bats. We used 5 SDM approaches: logistic regression, multivariate adaptive regression splines, boosted regression trees, random forest, and maximum entropy and consolidated outputs to generate ensemble maps. These maps represent the first formal hypotheses for sex- and season-specific hoary bat distributions. Our results suggest that North American hoary bats winter in regions with relatively long growing seasons where temperatures are moderated by proximity to oceans, and then move to the continental interior for the summer. SDMs suggested that hoary bats are most broadly distributed in autumn—the season when they are most susceptible to mortality from wind turbines; this season contains the greatest overlap between potentially suitable habitat and wind energy facilities. Comparing wind-turbine fatality data to model outputs could test many predictions, such as ‘risk from turbines is highest in habitats between hoary bat summering and wintering grounds’. Although future field studies are needed to validate the SDMs, this study generated well-justified and testable hypotheses of hoary bat migration patterns and seasonal distribution.
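
    The ensemble step described above can be illustrated by averaging the predicted suitabilities of several presence/background classifiers. The sketch below uses three off-the-shelf scikit-learn models on synthetic covariates as an analogy; it is not the five-model workflow (including MARS and MaxEnt) used in the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Stand-in data: rows are locations, columns are climate covariates,
# labels are presence (1) versus background (0) points.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=200, random_state=0),
    GradientBoostingClassifier(random_state=0),   # boosted-regression-trees analogue
]

# Fit each model and average the predicted suitabilities into an ensemble surface.
suitability = np.mean(
    [m.fit(X, y).predict_proba(X)[:, 1] for m in models], axis=0
)
print(suitability[:5])
```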

  14. Automated computer analysis of plasma-streak traces from SCYLLAC

    International Nuclear Information System (INIS)

    Whitman, R.L.; Jahoda, F.C.; Kruger, R.P.

    1977-01-01

    An automated computer analysis technique that locates and references the approximate centroid of single- or dual-streak traces from the Los Alamos Scientific Laboratory SCYLLAC facility is described. The technique also determines the plasma-trace width over a limited self-adjusting region. The plasma traces are recorded with streak cameras on Polaroid film, then scanned and digitized for processing. The analysis technique uses scene segmentation to separate the plasma trace from a reference fiducial trace. The technique employs two methods of peak detection; one for the plasma trace and one for the fiducial trace. The width is obtained using an edge-detection, or slope, method. Timing data are derived from the intensity modulation of the fiducial trace. To smooth (despike) the output graphs showing the plasma-trace centroid and width, a technique of "twicing" developed by Tukey was employed. In addition, an interactive sorting algorithm allows retrieval of the centroid, width, and fiducial data from any test shot plasma for post-analysis. As yet, only a limited set of sixteen plasma traces has been processed using this technique.
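
    A minimal sketch of the centroid and width estimation described above, assuming a single 1-D intensity profile across the streak trace. The baseline subtraction and threshold fraction are illustrative choices, not the SCYLLAC algorithm's parameters.

```python
import numpy as np


def trace_centroid_and_width(profile, threshold_frac=0.5):
    """Estimate the intensity-weighted centroid of a 1-D streak-trace profile
    and its width from where the background-subtracted signal crosses a
    fraction of the peak (a simple edge criterion)."""
    p = np.asarray(profile, dtype=float)
    p = p - p.min()                        # crude background removal
    x = np.arange(p.size)
    centroid = np.sum(x * p) / np.sum(p)
    above = np.flatnonzero(p >= threshold_frac * p.max())
    width = int(above[-1] - above[0]) if above.size else 0
    return centroid, width


# Toy profile: a Gaussian trace on a flat background
x = np.arange(200)
profile = 5.0 + 100.0 * np.exp(-((x - 120.0) ** 2) / (2 * 8.0**2))
print(trace_centroid_and_width(profile))
```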

  15. Automated computer analysis of plasma-streak traces from SCYLLAC

    International Nuclear Information System (INIS)

    Whiteman, R.L.; Jahoda, F.C.; Kruger, R.P.

    1977-11-01

    An automated computer analysis technique that locates and references the approximate centroid of single- or dual-streak traces from the Los Alamos Scientific Laboratory SCYLLAC facility is described. The technique also determines the plasma-trace width over a limited self-adjusting region. The plasma traces are recorded with streak cameras on Polaroid film, then scanned and digitized for processing. The analysis technique uses scene segmentation to separate the plasma trace from a reference fiducial trace. The technique employs two methods of peak detection; one for the plasma trace and one for the fiducial trace. The width is obtained using an edge-detection, or slope, method. Timing data are derived from the intensity modulation of the fiducial trace. To smooth (despike) the output graphs showing the plasma-trace centroid and width, a technique of "twicing" developed by Tukey was employed. In addition, an interactive sorting algorithm allows retrieval of the centroid, width, and fiducial data from any test shot plasma for post-analysis. As yet, only a limited set of the plasma traces has been processed with this technique.

  16. Estimation of Individual Cylinder Air-Fuel Ratio in Gasoline Engine with Output Delay

    Directory of Open Access Journals (Sweden)

    Changhui Wang

    2016-01-01

    Full Text Available The estimation of the individual cylinder air-fuel ratio (AFR) with a single universal exhaust gas oxygen (UEGO) sensor installed in the exhaust pipe is an important issue for cylinder-to-cylinder AFR balancing control, which can provide high-quality torque generation and reduce emissions in a multicylinder engine. In this paper, the system dynamics for the gas in the exhaust pipe, including the gas mixing, gas transport, and sensor dynamics, are described as an output delay system, and a new method using the output delay system observer is developed to estimate the individual cylinder AFR. With the AFR at the confluence point augmented as a system state, an observer for the augmented discrete system with output delay is designed to estimate the AFR at the confluence point. Using the gas mixing model, a method with the designed observer to estimate the individual cylinder AFR is presented. The validity of the proposed method is verified by simulation results from a spark-ignition gasoline engine in the engine simulation software enDYNA by Tesis.
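
    The following scalar sketch only illustrates the idea of correcting an observer with a delayed measurement by keeping a short buffer of past estimates (a simple stand-in for augmenting the state with delayed outputs). The paper's method augments the AFR at the confluence point as a state of a multivariable discrete system; the plant, delay and gain below are illustrative assumptions.

```python
def delayed_output_observer(a, c, L, y_meas, d, x0=0.0):
    """Observer for a scalar system x[k+1] = a*x[k] whose sensor reports
    y[k] = c*x[k-d] with a delay of d samples.

    The innovation is formed from the delayed measurement and the estimate
    that was current d steps ago, stored in a short buffer."""
    x_hat = x0
    buffer = [x0] * (d + 1)                 # x_hat[k-d], ..., x_hat[k]
    estimates = []
    for y in y_meas:
        innovation = y - c * buffer[0]      # compare against the estimate d steps ago
        x_hat = a * x_hat + L * innovation  # predict and correct
        buffer = buffer[1:] + [x_hat]
        estimates.append(x_hat)
    return estimates


# Toy usage: stable plant, 3-sample sensor delay
a, c, d, L = 0.9, 1.0, 3, 0.3
true_x = [2.0 * a**k for k in range(20)]
y_meas = [c * true_x[max(k - d, 0)] for k in range(20)]
print(delayed_output_observer(a, c, L, y_meas, d)[-1])
```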

  17. A Computational Architecture for Programmable Automation Research

    Science.gov (United States)

    Taylor, Russell H.; Korein, James U.; Maier, Georg E.; Durfee, Lawrence F.

    1987-03-01

    This short paper describes recent work at the IBM T. J. Watson Research Center directed at developing a highly flexible computational architecture for research on sensor-based programmable automation. The system described here has been designed with a focus on dynamic configurability, layered user interfaces and incorporation of sensor-based real-time operations into new commands. It is these features which distinguish it from earlier work. The system is currently being implemented at IBM for research purposes and internal use and is an outgrowth of programmable automation research which has been ongoing since 1972 [e.g., 1, 2, 3, 4, 5, 6].

  18. Microcontroller for automation application

    Science.gov (United States)

    Cooper, H. W.

    1975-01-01

    The description of a microcontroller currently being developed for automation application was given. It is basically an 8-bit microcomputer with a 40K byte random access memory/read only memory, and can control a maximum of 12 devices through standard 15-line interface ports.

  19. Complacency and Automation Bias in the Use of Imperfect Automation.

    Science.gov (United States)

    Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L

    2015-08-01

    We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias. © 2015, Human Factors and Ergonomics Society.

  20. Automated selection of areas of interest in dynamic studies and camera-cinematograpy of the heart

    International Nuclear Information System (INIS)

    Bitter, F.; Adam, W.E.; Kampmann, H.; Meyer, G.; Weller, R.

    1975-01-01

    Progress is reported in heart investigations using the first transit principle and the steady-state procedure for radionuclide scanning. Progress in the first transit principle relies on automated selection of areas of interest. A procedure has been developed which automatically performs the evaluation of the areas corresponding to the right heart, the lungs, and the left heart. A different procedure has been developed for dynamic lung studies with Xe-133 (radiospirometry), which in principle can be applied to any other organ investigation. R-wave time-averaged procedures of the heart in steady state can be performed in a direct or indirect manner. A direct procedure is described that leads eventually to a cinematographic presentation of the heart kinetics on the computer display. The analysis yields an exact outline of heart ventricles and auricles as a prerequisite for determination of ejection fractions and clinically relevant data of the heart function.

  1. Automation of the driving using the dynamic programming; Automatisierung des Treibens mittels diskreter dynamischer Programmierung

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Zongru; Buerger, Sebastian; Lohmann, Boris [Technische Univ. Muenchen, Garching (Germany). Lehrstuhl fuer Regelungstechnik

    2009-07-01

    Driving is a metal forming process carried out through hammering in the cold state. It can create almost arbitrary 2D and 3D sheet-metal shapes using universal tools. During driving, many parameters of the tools and the sheets affect the forming process, which inhibits complete automation. In this paper, a model-based control for a 2D driving process is proposed. The process of stretching L-shaped metal sheets is modelled analytically. Three phases, namely hybrid deformations, material flow, and springback with inverse bending, describe the deformation process at one stroke. This results in a nonlinear (non-affine), time-discrete state-space model. A model predictive controller (MPC) is then designed to determine the optimal control inputs at every time step. Thereby, an objective function that describes the costs from a start angle to an end condition is minimized by means of discrete dynamic programming (DDP). (orig.)
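
    A minimal sketch of the backward DDP recursion over a gridded scalar state (e.g., a bend angle) and a finite input set. The toy dynamics, costs and springback factor below are illustrative assumptions, not the paper's driving model.

```python
import numpy as np


def discrete_dp(states, n_steps, inputs, step, stage_cost, terminal_cost):
    """Backward recursion of discrete dynamic programming over a gridded
    scalar state and a finite set of control inputs."""
    V = np.array([terminal_cost(s) for s in states])        # cost-to-go at the final step
    policy = np.zeros((n_steps, len(states)), dtype=int)
    for k in reversed(range(n_steps)):
        V_new = np.empty_like(V)
        for i, s in enumerate(states):
            costs = []
            for u in inputs:
                s_next = step(s, u)
                j = np.abs(states - s_next).argmin()         # nearest grid point
                costs.append(stage_cost(s, u) + V[j])
            best = int(np.argmin(costs))
            V_new[i], policy[k, i] = costs[best], best
        V = V_new
    return V, policy


# Toy usage: drive an angle from 0 toward 30 degrees in 10 strokes
states = np.linspace(0.0, 40.0, 81)
inputs = [0.0, 2.0, 4.0]                                     # degrees gained per stroke
V, policy = discrete_dp(
    states, 10, inputs,
    step=lambda s, u: s + 0.9 * u,                           # simple springback factor
    stage_cost=lambda s, u: 0.1 * u,                         # effort per stroke
    terminal_cost=lambda s: (s - 30.0) ** 2,                 # reach the target angle
)
print(V[0])  # optimal cost starting from 0 degrees
```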

  2. Unit 16 - Output

    OpenAIRE

    Unit 16, CC in GIS; Star, Jeffrey L.

    1990-01-01

    This unit discusses issues related to GIS output, including the different types of output possible and the hardware for producing each. It describes text, graphic and digital data that can be generated by a GIS as well as line printers, dot matrix printers/plotters, pen plotters, optical scanners and cathode ray tubes (CRTs) as technologies for generating the output.

  3. Consciousness can reduce the voltage of the output signal of solar cell

    Science.gov (United States)

    Cao, Dayong

    2010-10-01

    When the sun's light radiates on a solar cell, the solar cell produces an output signal as the photocurrent. We use data acquisition modules to record the voltage of the output signals. Here v1 is the voltage of the output signal of solar cell 1 and v2 is that of solar cell 2, and the two solar cells are placed side by side. When we record the voltage of the output signals from morning to noon, the voltages rise, and v1 is larger than v2 during this time. But then the experimenter uses consciousness to reduce the voltage of the output signals; that is to say, not only does natural light radiate on the two solar cells, but consciousness also acts on them. Not only can I use consciousness to reduce the growth of the voltage of the output signals, I can also make v1 smaller than v2. The experiment was conducted in September 2010. There is the physical system of mass, energy, space and time (MEST); there is the spirited system of mind, consciousness, emotion and desire (MECD); and the information system is the code system. We can use them to develop photoelectric principles, life technology and semiconductor nanotechnology for the consciousness effect.

  4. Comparison of cardiac output optimization with an automated closed-loop goal-directed fluid therapy versus non standardized manual fluid administration during elective abdominal surgery: first prospective randomized controlled trial.

    Science.gov (United States)

    Lilot, Marc; Bellon, Amandine; Gueugnon, Marine; Laplace, Marie-Christine; Baffeleuf, Bruno; Hacquard, Pauline; Barthomeuf, Felicie; Parent, Camille; Tran, Thomas; Soubirou, Jean-Luc; Robinson, Philip; Bouvet, Lionel; Vassal, Olivia; Lehot, Jean-Jacques; Piriou, Vincent

    2018-01-27

    An intraoperative automated closed-loop system for goal-directed fluid therapy has been successfully tested in silico, in vivo and in a clinical case-control matching. This trial compared intraoperative cardiac output (CO) in patients managed with this closed-loop system versus usual practice in an academic medical center. The closed-loop system was connected to a CO monitoring system and delivered automated colloid fluid boluses. Moderate to high-risk abdominal surgical patients were randomized either to the closed-loop or the manual group. Intraoperative final CO was the primary endpoint. Secondary endpoints were intraoperative overall mean cardiac index (CI), increase from initial to final CI, intraoperative fluid volume and postoperative outcomes. From January 2014 to November 2015, 46 patients were randomized. There was a lower initial CI (2.06 vs. 2.51 L min⁻¹ m⁻², p = 0.042) in the closed-loop compared to the control group. No difference in final CO and in overall mean intraoperative CI was observed between groups. A significant relative increase from initial to final CI values was observed in the closed-loop but not the control group (+28.6%, p = 0.006 vs. +1.2%, p = 0.843). No difference was found for intraoperative fluid management and postoperative outcomes between groups. There was no significant impact on the primary study endpoint, but this was found in the context of an unexpectedly lower initial CI in the closed-loop group. Trial registry number ID-RCB/EudraCT: 2013-A00770-45. ClinicalTrials.gov Identifier NCT01950845, date of registration: 17 September 2013.

  5. Multiple output timing and trigger generator

    Energy Technology Data Exchange (ETDEWEB)

    Wheat, Robert M. [Los Alamos National Laboratory; Dale, Gregory E [Los Alamos National Laboratory

    2009-01-01

    In support of the development of a multiple stage pulse modulator at the Los Alamos National Laboratory, we have developed a first generation, multiple output timing and trigger generator. Exploiting Commercial Off The Shelf (COTS) Micro Controller Units (MCU's), the timing and trigger generator provides 32 independent outputs with a timing resolution of about 500 ns. The timing and trigger generator system is comprised of two MCU boards and a single PC. One of the MCU boards performs the functions of the timing and signal generation (the timing controller) while the second MCU board accepts commands from the PC and provides the timing instructions to the timing controller. The PC provides the user interface for adjusting the on and off timing for each of the output signals. This system provides 32 output or timing signals which can be pre-programmed to be in an on or off state for each of 64 time steps. The width or duration of each of the 64 time steps is programmable from 2 {micro}s to 2.5 ms with a minimum time resolution of 500 ns. The repetition rate of the programmed pulse train is only limited by the time duration of the programmed event. This paper describes the design and function of the timing and trigger generator system and software including test results and measurements.

  6. Statistical Downscaling and Bias Correction of Climate Model Outputs for Climate Change Impact Assessment in the U.S. Northeast

    Science.gov (United States)

    Ahmed, Kazi Farzan; Wang, Guiling; Silander, John; Wilson, Adam M.; Allen, Jenica M.; Horton, Radley; Anyah, Richard

    2013-01-01

    Statistical downscaling can be used to efficiently downscale a large number of General Circulation Model (GCM) outputs to a fine temporal and spatial scale. To facilitate regional impact assessments, this study statistically downscales (to 1/8° spatial resolution) and corrects the bias of daily maximum and minimum temperature and daily precipitation data from six GCMs and four Regional Climate Models (RCMs) for the northeast United States (US) using the Statistical Downscaling and Bias Correction (SDBC) approach. Based on these downscaled data from multiple models, five extreme indices were analyzed for the future climate to quantify future changes of climate extremes. For a subset of models and indices, results based on raw and bias-corrected model outputs for the present-day climate were compared with observations, which demonstrated that bias correction is important not only for GCM outputs, but also for RCM outputs. For future climate, bias correction led to a higher level of agreement among the models in predicting the magnitude and capturing the spatial pattern of the extreme climate indices. We found that the incorporation of dynamical downscaling as an intermediate step does not lead to considerable differences in the results of statistical downscaling for the study domain.
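
    Bias correction of this kind is often implemented as empirical quantile mapping. The sketch below shows that generic idea on synthetic temperature data; it is not necessarily the exact SDBC procedure used in the study, and all values are illustrative.

```python
import numpy as np


def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: replace each model value by the observed
    value at the same empirical quantile of the historical distributions."""
    model_hist = np.sort(np.asarray(model_hist, dtype=float))
    obs_hist = np.sort(np.asarray(obs_hist, dtype=float))
    # Quantile of each future model value within the historical model distribution
    q = np.searchsorted(model_hist, model_future) / len(model_hist)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(obs_hist, q)


# Toy usage: a model that runs about 2 degrees too warm
rng = np.random.default_rng(1)
obs = rng.normal(15.0, 5.0, 1000)           # observed daily Tmax over a historical period
mod = obs + 2.0 + rng.normal(0, 1, 1000)    # biased model output over the same period
fut = rng.normal(19.0, 5.0, 5)              # future model output
print(quantile_map(mod, obs, fut))          # bias-corrected future values
```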

  7. User manual for two simple postscript output FORTRAN plotting routines

    Science.gov (United States)

    Nguyen, T. X.

    1991-01-01

    Graphics is one of the important tools in engineering analysis and design. However, plotting routines that generate output on high-quality laser printers normally come in graphics packages, which tend to be expensive and system-dependent. These factors become important for small computer systems or desktop computers, especially when only some form of a simple plotting routine is sufficient. With the Postscript language becoming popular, there are more and more Postscript laser printers now available. Simple, versatile, low-cost plotting routines that can generate output on high-quality laser printers are needed, and standard FORTRAN plotting routines that produce output in the Postscript language seem a logical choice. The purpose here is to explain two simple FORTRAN plotting routines that generate output in the Postscript language.
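
    The routines described are written in FORTRAN; as a language-neutral illustration of how little is needed to emit Postscript, the Python sketch below writes a minimal file that strokes a polyline. Page setup, axes and scaling are deliberately omitted, and the coordinates are illustrative.

```python
def polyline_to_postscript(points, filename="plot.ps"):
    """Write a minimal PostScript file that draws a polyline through the
    given (x, y) points, expressed in points (1/72 inch)."""
    x0, y0 = points[0]
    lines = [
        "%!PS-Adobe-3.0",
        "1 setlinewidth",
        "newpath",
        f"{x0:.1f} {y0:.1f} moveto",
    ]
    lines += [f"{x:.1f} {y:.1f} lineto" for x, y in points[1:]]
    lines += ["stroke", "showpage"]
    with open(filename, "w") as f:
        f.write("\n".join(lines) + "\n")


# Toy usage: a simple ramp
polyline_to_postscript([(72, 72), (200, 150), (350, 120), (500, 400)])
```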

  8. Vehicle Dynamics and Control

    CERN Document Server

    Rajamani, Rajesh

    2012-01-01

    Vehicle Dynamics and Control provides a comprehensive coverage of vehicle control systems and the dynamic models used in the development of these control systems. The control system applications covered in the book include cruise control, adaptive cruise control, ABS, automated lane keeping, automated highway systems, yaw stability control, engine control, passive, active and semi-active suspensions, tire-road friction coefficient estimation, rollover prevention, and hybrid electric vehicle. In developing the dynamic model for each application, an effort is made to both keep the model simple enough for control system design but at the same time rich enough to capture the essential features of the dynamics. A special effort has been made to explain the several different tire models commonly used in literature and to interpret them physically. In the second edition of the book, chapters on roll dynamics, rollover prevention and hybrid electric vehicles have been added, and the chapter on electronic stability co...

  9. Output upstreamness and input downstreamness of industries/countries in world production

    NARCIS (Netherlands)

    Miller, Ronald E.; Temurshoev, Umed

    2013-01-01

    Using the world input-output tables available from the WIOD project (www.wiod.org), we quantify production line positions of 35 industries for 40 countries and the rest of the world region over 1996-2009. In contrast to the previous related literature we do not focus only on the output supply chain,

  10. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT ...

    Science.gov (United States)

    The Automated Geospatial Watershed Assessment tool (AGWA) is a GIS interface jointly developed by the USDA Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parameterization and execution of the Soil Water Assessment Tool (SWAT) and KINEmatic Runoff and EROSion (KINEROS2) hydrologic models. The application of these two models allows AGWA to conduct hydrologic modeling and watershed assessments at multiple temporal and spatial scales. AGWA’s current outputs are runoff (volumes and peaks) and sediment yield, plus nitrogen and phosphorus with the SWAT model. AGWA uses commonly available GIS data layers to fully parameterize, execute, and visualize results from both models. Through an intuitive interface the user selects an outlet from which AGWA delineates and discretizes the watershed using a Digital Elevation Model (DEM) based on the individual model requirements. The watershed model elements are then intersected with soils and land cover data layers to derive the requisite model input parameters. The chosen model is then executed, and the results are imported back into AGWA for visualization. This allows managers to identify potential problem areas where additional monitoring can be undertaken or mitigation activities can be focused. AGWA also has tools to apply an array of best management practices. There are currently two versions of AGWA available; AGWA 1.5 for

  11. INVESTIGATION OF NEURAL NETWORK ALGORITHM FOR DETECTION OF NETWORK HOST ANOMALIES IN THE AUTOMATED SEARCH FOR XSS VULNERABILITIES AND SQL INJECTIONS

    Directory of Open Access Journals (Sweden)

    Y. D. Shabalin

    2016-03-01

    Full Text Available A problem of aberrant behavior detection for a network-communicating computer is discussed. A novel approach based on the dynamic response of the computer is introduced. The computer is treated as a multiple-input multiple-output (MIMO) plant. To characterize the dynamic response of the computer to incoming requests, a correlation between the input data rate and the observed output response (outgoing data rate and performance metrics) is used. To distinguish normal and aberrant behavior of the computer, a one-class neural network classifier is used. The general idea of the algorithm is briefly described. The configuration of the network testbed for experiments with real attacks and their detection (the automated search for XSS and SQL injections) is presented. Real XSS and SQL injection attack software was used to model the intrusion scenario. It is to be expected that aberrant behavior of the server will reveal itself through an instantaneous correlation response that is significantly different from any of the normal ones. It is evident that the correlation picture of attacks from different malware, of the overriding of the site homepage on the server (so-called defacing), and of hardware and software failures will differ from the correlation picture of normal functioning. The intrusion detection algorithm is investigated to estimate false positive and false negative rates in relation to the algorithm parameters. The importance of the correlation width and threshold value selection is emphasized. The false positive rate was estimated along the time series of experimental data. Some ideas for enhancement of the algorithm quality and robustness are mentioned.
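
    A minimal stand-in for the approach, assuming the cross-correlation of incoming and outgoing data rates as the feature and a nearest-neighbour distance threshold instead of the one-class neural network used by the author. All rates, window sizes and thresholds below are synthetic and illustrative.

```python
import numpy as np


def correlation_feature(in_rate, out_rate, width=16):
    """Cross-correlation of incoming and outgoing data rates around zero lag,
    used as a fingerprint of the host's dynamic response to requests."""
    zi = (in_rate - in_rate.mean()) / (in_rate.std() + 1e-9)
    zo = (out_rate - out_rate.mean()) / (out_rate.std() + 1e-9)
    full = np.correlate(zi, zo, mode="full")
    mid = len(zi) - 1
    return full[mid - width: mid + width + 1]


def is_anomalous(feature, normal_features, threshold):
    """Flag a window whose correlation pattern is far from every pattern seen
    during normal operation (a crude stand-in for a one-class classifier)."""
    return min(np.linalg.norm(feature - f) for f in normal_features) > threshold


# Toy usage: normally the outgoing rate follows the incoming rate
rng = np.random.default_rng(0)

def window(coupled=True):
    inr = rng.poisson(50, 128).astype(float)
    out = 0.9 * inr + rng.normal(0, 2, 128) if coupled else rng.poisson(50, 128).astype(float)
    return correlation_feature(inr, out)

normal_features = [window() for _ in range(20)]
print(is_anomalous(window(coupled=False), normal_features, threshold=50.0))  # likely True
```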

  12. Radiographic examination takes on an automated image

    International Nuclear Information System (INIS)

    Aman, J.

    1988-01-01

    Automation can be effectively applied to nondestructive testing (NDT). Until recently, film radiography used in NDT was largely a manual process, involving the shooting of a series of x-rays, manually positioned and manually processed. In other words, much radiographic work is being done the way it was over 50 years ago. Significant advances in automation have changed the face of manufacturing, and industry has shared in the benefits brought by such progress. The handling of parts, which was once responsible for a large measure of labor costs, is now assigned to robotic equipment. In nondestructive testing processes, some progress has been achieved in automation - for example, in real-time imaging systems. However, only recently have truly automated NDT systems begun to emerge. There are two major reasons to introduce automation into NDT - reliability and productivity. Any process or technique that can improve the reliability of parts testing could easily justify the capital investments required

  13. Toward fully automated processing of dynamic susceptibility contrast perfusion MRI for acute ischemic cerebral stroke.

    Science.gov (United States)

    Kim, Jinsuh; Leira, Enrique C; Callison, Richard C; Ludwig, Bryan; Moritani, Toshio; Magnotta, Vincent A; Madsen, Mark T

    2010-05-01

    We developed fully automated software for dynamic susceptibility contrast (DSC) MR perfusion-weighted imaging (PWI) to efficiently and reliably derive critical hemodynamic information for acute stroke treatment decisions. Brain MR PWI was performed in 80 consecutive patients with acute nonlacunar ischemic stroke within 24 h after symptom onset from January 2008 to August 2009. These studies were automatically processed to generate hemodynamic parameters that included cerebral blood flow and cerebral blood volume, and the mean transit time (MTT). To develop reliable software for PWI analysis, we used computationally robust algorithms including the piecewise continuous regression method to determine bolus arrival time (BAT), log-linear curve fitting, an arrival-time-independent deconvolution method and sophisticated motion correction methods. An optimal arterial input function (AIF) search algorithm using a new artery-likelihood metric was also developed. Anatomical locations of the automatically determined AIF were reviewed and validated. The automatically computed BAT values were statistically compared with the BAT estimated by a single observer. In addition, gamma-variate curve-fitting errors of the AIF and inter-subject variability of AIFs were analyzed. Lastly, two observers independently assessed the quality and area of hypoperfusion mismatched with the restricted diffusion area from motion-corrected MTT maps and compared that with time-to-peak (TTP) maps using the standard approach. The AIF was identified within an arterial branch and enhanced areas of perfusion deficit were visualized in all evaluated cases. Total processing time was 10.9 ± 2.5 s (mean ± s.d.) without motion correction and 267 ± 80 s (mean ± s.d.) with motion correction on a standard personal computer. The MTT map produced with our software adequately estimated brain areas with perfusion deficit and was significantly less affected by random noise of the PWI when compared with the TTP map. Results of image
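
    A minimal sketch of one step named above, gamma-variate fitting of a first-pass bolus concentration curve, using scipy.optimize.curve_fit. The sampling interval, initial guesses and synthetic data are hypothetical; the paper's full pipeline (BAT estimation, deconvolution, motion correction) is not reproduced here.

    # Sketch: fit C(t) = K * (t - t0)^alpha * exp(-(t - t0) / beta) to a bolus curve.
    import numpy as np
    from scipy.optimize import curve_fit

    def gamma_variate(t, K, t0, alpha, beta):
        dt = np.clip(t - t0, 0.0, None)        # zero before the bolus arrival time t0
        return K * dt**alpha * np.exp(-dt / beta)

    t = np.arange(0, 60, 1.5)                  # hypothetical 1.5 s sampling
    rng = np.random.default_rng(1)
    meas = gamma_variate(t, 8.0, 10.0, 2.5, 4.0) + rng.normal(0, 0.3, t.size)

    # Rough initial guess: peak height, peak time minus an assumed rise time, shape terms.
    p0 = [meas.max(), t[np.argmax(meas)] - 10.0, 2.0, 3.0]
    popt, _ = curve_fit(gamma_variate, t, meas, p0=p0, maxfev=10000)
    print("fitted K, t0, alpha, beta:", np.round(popt, 2))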

  14. Automated segmentation of reference tissue for prostate cancer localization in dynamic contrast enhanced MRI

    Science.gov (United States)

    Vos, Pieter C.; Hambrock, Thomas; Barentsz, Jelle O.; Huisman, Henkjan J.

    2010-03-01

    For pharmacokinetic (PK) analysis of Dynamic Contrast Enhanced (DCE) MRI, the arterial input function needs to be estimated. Previously, we demonstrated that PK parameters have significantly better discriminative performance when per-patient reference tissue was used, but this required manual annotation of reference tissue. In this study we propose a fully automated reference tissue segmentation method that tackles this limitation. The method was tested with our Computer Aided Diagnosis (CADx) system to study the effect on the discriminating performance for differentiating prostate cancer from benign areas in the peripheral zone (PZ). The proposed method automatically segments normal PZ tissue from DCE-derived data. First, the bladder is segmented in the start-to-enhance map using the Otsu histogram threshold selection method. Second, the prostate is detected by applying a multi-scale Hessian filter to the relative enhancement map. Third, normal PZ tissue is segmented by thresholding and morphological operators. The resulting segmentation was used as reference tissue to estimate the PK parameters. In 39 consecutive patients, carcinoma, benign and normal tissue were annotated on MR images by a radiologist and a researcher using whole mount step-section histopathology as reference. PK parameters were computed for each ROI. Features were extracted from the set of ROIs using percentiles to train a support vector machine that was used as the classifier. Prospective performance was estimated by means of leave-one-patient-out cross validation. A bootstrap resampling approach with 10,000 iterations was used for estimating the bootstrap mean AUCs and 95% confidence intervals. In total 42 malignant, 29 benign and 37 normal regions were annotated. For all patients, normal PZ was successfully segmented. The diagnostic accuracy obtained for differentiating malignant from benign lesions using a conventional general patient plasma profile was 0.64 (0.53-0.74). Using the
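
    A minimal 2-D sketch of the three segmentation ingredients mentioned above (Otsu thresholding, a multi-scale Hessian-based filter, morphological clean-up) using scikit-image. The synthetic image and all parameter values are hypothetical; this is not the authors' validated pipeline.

    # Sketch: Otsu threshold + multi-scale Hessian (Frangi) filter + morphology.
    import numpy as np
    from skimage.filters import threshold_otsu, frangi
    from skimage.morphology import binary_opening, disk, remove_small_objects

    rng = np.random.default_rng(2)
    enhancement = rng.normal(0.2, 0.05, (128, 128))
    enhancement[40:90, 40:90] += 0.6                     # hypothetical strongly enhancing region

    mask = enhancement > threshold_otsu(enhancement)             # 1) histogram threshold
    hessian_response = frangi(enhancement, sigmas=range(1, 6))   # 2) multi-scale Hessian filter
    mask = binary_opening(mask, disk(2))                         # 3) morphological clean-up
    mask = remove_small_objects(mask, min_size=50)

    print("segmented pixels:", int(mask.sum()),
          "max Hessian response:", float(hessian_response.max()))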

  15. On/Off Power Supply of the Electron Beam Machine HV Automation by Using PCL-718 and PCLD-786

    International Nuclear Information System (INIS)

    Sudiyanto; Suyono, Djoko; Salam, Aminus; Ngatinu; Sudaryanto; Wiyana, Badi

    1996-01-01

    Automation of the on/off switching of the electron beam machine HV power supply using the PCL-718 and PCLD-786 has been carried out. In the simulation experiments, a PCL-718 12-bit ADC card, a PCLD-786 relay driver board and Turbo-C software were used to multiplex 16 differential digital output channels, which controlled the on/off AC/DC relays of the electron beam machine power supply. Two PCLD-786 boards can be cascaded to expand the digital output channels to control 32 AC/DC relays.

  16. Cassini Tour Atlas Automated Generation

    Science.gov (United States)

    Grazier, Kevin R.; Roumeliotis, Chris; Lange, Robert D.

    2011-01-01

    During the Cassini spacecraft's cruise phase and nominal mission, the Cassini Science Planning Team developed and maintained an online database of geometric and timing information called the Cassini Tour Atlas. The Tour Atlas consisted of several hundreds of megabytes of EVENTS mission planning software outputs, tables, plots, and images used by mission scientists for observation planning. Each time the nominal mission trajectory was altered or tweaked, a new Tour Atlas had to be regenerated manually. In the early phases of Cassini's Equinox Mission planning, an a priori estimate suggested that mission tour designers would develop approximately 30 candidate tours within a short period of time. So that Cassini scientists could analyze the science opportunities in each candidate tour quickly and thoroughly, and so that the optimal series of orbits for science return could be selected, a separate Tour Atlas was required for each trajectory. The task of manually generating that number of trajectory analyses in the allotted time would have been impossible, so the entire task was automated using code written in five different programming languages. This software automates the generation of the Cassini Tour Atlas database. It performs with one UNIX command what previously took a day or two of human labor.

  17. Automated systems to identify relevant documents in product risk management

    Science.gov (United States)

    2012-01-01

    Background Product risk management involves critical assessment of the risks and benefits of health products circulating in the market. One of the important sources of safety information is the primary literature, especially for newer products with which regulatory authorities have relatively little experience. Although the primary literature provides vast and diverse information, only a small proportion of it is useful for product risk assessment work. Hence, the aim of this study is to explore the possibility of using text mining to automate the identification of useful articles, which will reduce the time taken for literature searches and hence improve work efficiency. In this study, term-frequency inverse document-frequency values were computed for predictors extracted from the titles and abstracts of articles related to three tumour necrosis factor-alpha blockers. A general automated system was developed using only general predictors and was tested for its generalizability using articles related to four other drug classes. Several specific automated systems were developed using both general and specific predictors and training sets of different sizes in order to determine the minimum number of articles required for developing such systems. Results The general automated system had an area under the curve value of 0.731 and was able to rank 34.6% and 46.2% of the total number of 'useful' articles among the first 10% and 20% of the articles presented to the evaluators when tested on the generalizability set. However, its use may be limited by the subjective definition of useful articles. For the specific automated system, it was found that only 20 articles were required to develop a specific automated system with a prediction performance (AUC 0.748) that was better than that of the general automated system. Conclusions Specific automated systems can be developed rapidly and avoid problems caused by the subjective definition of useful articles. Thus the efficiency of
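
    A minimal sketch of the general approach, ranking articles by a classifier trained on TF-IDF features of titles and abstracts. The toy texts, labels and the logistic-regression model are illustrative assumptions rather than the study's actual system.

    # Sketch: TF-IDF features + a classifier used to rank articles by predicted usefulness.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    texts = [
        "serious infection risk reported in patients receiving tnf-alpha blocker",
        "case report of adverse hepatic reaction during biologic therapy",
        "in vitro binding assay of a monoclonal antibody",
        "conference announcement and society news",
    ]
    useful = [1, 1, 0, 0]                        # hypothetical expert labels

    X = TfidfVectorizer(ngram_range=(1, 2), min_df=1).fit_transform(texts)
    model = LogisticRegression(max_iter=1000).fit(X, useful)

    scores = model.predict_proba(X)[:, 1]        # articles are ranked by this probability
    print("AUC on the (tiny) training set:", roc_auc_score(useful, scores))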

  18. Adaptive neural control for dual-arm coordination of humanoid robot with unknown nonlinearities in output mechanism.

    Science.gov (United States)

    Liu, Zhi; Chen, Ci; Zhang, Yun; Chen, C L P

    2015-03-01

    To achieve excellent dual-arm coordination of a humanoid robot, it is essential to deal with the nonlinearities existing in the system dynamics. The literature on humanoid robot control so far commonly assumes that the problem of output hysteresis can be ignored. However, in practical applications output hysteresis is widespread, and its presence limits the motion/force performance of the robotic system. In this paper, an adaptive neural control scheme, which takes the unknown output hysteresis and computational efficiency into account, is presented and investigated. In the controller design, the prior knowledge of the system dynamics is assumed to be unknown. The motion error is guaranteed to converge to a small neighborhood of the origin by Lyapunov's stability theory. Simultaneously, the internal force is kept bounded and its error can be made arbitrarily small.

  19. A central pattern generator producing alternative outputs: pattern, strength, and dynamics of premotor synaptic input to leech heart motor neurons.

    Science.gov (United States)

    Norris, Brian J; Weaver, Adam L; Wenning, Angela; García, Paul S; Calabrese, Ronald L

    2007-11-01

    The central pattern generator (CPG) for heartbeat in medicinal leeches consists of seven identified pairs of segmental heart interneurons and one unidentified pair. Four of the identified pairs and the unidentified pair of interneurons make inhibitory synaptic connections with segmental heart motor neurons. The CPG produces a side-to-side asymmetric pattern of intersegmental coordination among ipsilateral premotor interneurons corresponding to a similarly asymmetric fictive motor pattern in heart motor neurons, and asymmetric constriction pattern of the two tubular hearts, synchronous and peristaltic. Using extracellular recordings from premotor interneurons and voltage-clamp recordings of ipsilateral segmental motor neurons in 69 isolated nerve cords, we assessed the strength and dynamics of premotor inhibitory synaptic output onto the entire ensemble of heart motor neurons and the associated conduction delays in both coordination modes. We conclude that premotor interneurons establish a stereotypical pattern of intersegmental synaptic connectivity, strengths, and dynamics that is invariant across coordination modes, despite wide variations among preparations. These data coupled with a previous description of the temporal pattern of premotor interneuron activity and relative phasing of motor neuron activity in the two coordination modes enable a direct assessment of how premotor interneurons through their temporal pattern of activity and their spatial pattern of synaptic connectivity, strengths, and dynamics coordinate segmental motor neurons into a functional pattern of activity.

  20. BREED: a CDC-7600 computer program for the automation of breeder reactor design analysis (LWBR Development Program)

    International Nuclear Information System (INIS)

    Candelore, N.R.; Maher, C.M.

    1985-03-01

    BREED is an executive CDC-7600 program which was developed to facilitate the sequence of calculations and movement of data through a prescribed series of breeder reactor design computer programs in an uninterrupted single-job mode. It provides the capability to interface different application programs into a single computer run to provide a complete design function. The automation that can be achieved as a result of using BREED significantly reduces not only the time required for data preparation and hand transfer of data, but also the time required to complete an iteration of the total design effort. Data processing within a technical discipline and data transfer between technical disciplines can be accommodated. The input/output data processing is achieved with BREED by using a set of simple, easily understood user commands, usually short descriptive words, which the user inserts in his input deck. The input deck completely identifies and controls the calculational sequence needed to produce a desired end product. This report has been prepared to provide instructional material on the use of BREED and its user-oriented procedures to facilitate computer automation of design calculations

  1. A Pseudo Fractional-N Clock Generator with 50% Duty Cycle Output

    Science.gov (United States)

    Yang, Wei-Bin; Lo, Yu-Lung; Chao, Ting-Sheng

    A proposed pseudo fractional-N clock generator with 50% duty cycle output is presented, using the pseudo fractional-N controller for SoC chips and dynamic frequency scaling applications. The different clock frequencies can be generated with particular phase combinations of a four-stage voltage-controlled oscillator (VCO). It has been fabricated in a 0.13 µm CMOS technology and works with a supply voltage of 1.2 V. According to the measured results, the frequency range of the proposed pseudo fractional-N clock generator is from 71.4 MHz to 1 GHz and the peak-to-peak jitter is less than 5% of the output period. Duty cycle error rates of the output clock frequencies are from 0.8% to 2% and the measured power dissipation of the pseudo fractional-N controller is 146 µW at 304 MHz.

  2. Integrated plant automation using programmable logic controllers

    International Nuclear Information System (INIS)

    Qureshi, S.A.

    2002-01-01

    In the world of automation the Programmable Logic Controller (PLC) has become the standard device for control. It now not only replaces the earlier relay logic controls but has also taken over many additional control functions. Initially the PLC was used to replace relay logic, but its ever-increasing range of functions means that it is found in more and more complex applications. As the structure of the PLC is based on the same principles as those employed in computer architecture, it is capable of performing not only relay switching tasks but also other functions such as counting, calculating, comparing and the processing of analogue signals. Due to the simplicity of entering and modifying the programmed instructions to suit the requirements of the process under control, the PLC is a truly versatile and flexible device that can be employed easily and efficiently for repeated control tasks that vary in nature and complexity. (Figure: a photograph of the Siemens S-5 95U.) To illustrate the advantage of using a PLC over a traditional relay logic system, consider a control system with 20 input/output points. Such an assembly could comprise 60-80 relays, some counters/timers and a great deal of wiring. It would be cumbersome, with a power consumption of 30-40 VA. A considerable time would be required to design, test and commission the assembly, and once it is in full working order any desired modification, even of a minor nature, could require major hardware changes. (author)

  3. Input-output supervisor

    International Nuclear Information System (INIS)

    Dupuy, R.

    1970-01-01

    The input-output supervisor is the program which monitors the flow of information between the core storage and the peripheral equipment of a computer. This work is composed of three parts: 1 - Study of a generalized input-output supervisor; with simple modifications it resembles most of the input-output supervisors now running on computers. 2 - Application of this theory to a magnetic drum. 3 - Hardware requirements for time-sharing. (author) [fr

  4. Moving toward the automation of the systematic review process: a summary of discussions at the second meeting of International Collaboration for the Automation of Systematic Reviews (ICASR).

    Science.gov (United States)

    O'Connor, Annette M; Tsafnat, Guy; Gilbert, Stephen B; Thayer, Kristina A; Wolfe, Mary S

    2018-01-09

    The second meeting of the International Collaboration for Automation of Systematic Reviews (ICASR) was held 3-4 October 2016 in Philadelphia, Pennsylvania, USA. ICASR is an interdisciplinary group whose aim is to maximize the use of technology for conducting rapid, accurate, and efficient systematic reviews of scientific evidence. Having automated tools for systematic review should enable more transparent and timely review, maximizing the potential for identifying and translating research findings to practical application. The meeting brought together multiple stakeholder groups including users of summarized research, methodologists who explore production processes and systematic review quality, and technologists such as software developers, statisticians, and vendors. This diversity of participants was intended to ensure effective communication with numerous stakeholders about progress toward automation of systematic reviews and stimulate discussion about potential solutions to identified challenges. The meeting highlighted challenges, both simple and complex, and raised awareness among participants about ongoing efforts by various stakeholders. An outcome of this forum was to identify several short-term projects that participants felt would advance the automation of tasks in the systematic review workflow including (1) fostering better understanding about available tools, (2) developing validated datasets for testing new tools, (3) determining a standard method to facilitate interoperability of tools such as through an application programming interface or API, and (4) establishing criteria to evaluate the quality of tools' output. ICASR 2016 provided a beneficial forum to foster focused discussion about tool development and resources and reconfirm ICASR members' commitment toward systematic reviews' automation.

  5. Improving the driver-automation interaction: an approach using automation uncertainty.

    Science.gov (United States)

    Beller, Johannes; Heesen, Matthias; Vollrath, Mark

    2013-12-01

    The aim of this study was to evaluate whether communicating automation uncertainty improves the driver-automation interaction. A false system understanding of infallibility may provoke automation misuse and can lead to severe consequences in case of automation failure. The presentation of automation uncertainty may prevent this false system understanding and, as was shown by previous studies, may have numerous benefits. Few studies, however, have clearly shown the potential of communicating uncertainty information in driving. The current study fills this gap. We conducted a driving simulator experiment, varying the presented uncertainty information between participants (no uncertainty information vs. uncertainty information) and the automation reliability (high vs. low) within participants. Participants interacted with a highly automated driving system while engaging in secondary tasks and were required to cooperate with the automation to drive safely. Quantile regressions and multilevel modeling showed that the presentation of uncertainty information increases the time to collision in the case of automation failure. Furthermore, the data indicated improved situation awareness and better knowledge of fallibility for the experimental group. Consequently, the automation with the uncertainty symbol received higher trust ratings and increased acceptance. The presentation of automation uncertainty through a symbol improves overall driver-automation cooperation. Most automated systems in driving could benefit from displaying reliability information. This display might improve the acceptance of fallible systems and further enhance driver-automation cooperation.

  6. Estimation of international output-energy relation. Effects of alternative output measures

    International Nuclear Information System (INIS)

    Shrestha, R.M.

    2000-01-01

    This paper analyzes the output-energy relationship with alternative measures of output and energy. Our analysis rejects the hypothesis of non-diminishing returns to energy consumption when GDP at purchasing power parities is used as the output measure unlike the case with GNP at market exchange rates. This finding also holds when energy input includes the usage of both commercial and traditional fuels. 13 refs

  7. DIMITRI 1.0: Description and application of a dynamic input-output model

    NARCIS (Netherlands)

    Wilting HC; Blom WF; Thomas R; Idenburg AM; LAE

    2001-01-01

    DIMITRI, the Dynamic Input-Output Model to study the Impacts of Technology Related Innovations, was developed in the framework of the RIVM Environment and Economy project to answer questions about interrelationships between economy, technology and the environment. DIMITRI, a meso-economic model,

  8. Automation: is it really different this time?

    Science.gov (United States)

    Wajcman, Judy

    2017-03-01

    This review examines several recent books that deal with the impact of automation and robotics on the future of jobs. Most books in this genre predict that the current phase of digital technology will create massive job loss in an unprecedented way, that is, that this wave of automation is different from previous waves. Uniquely digital technology is said to automate professional occupations for the first time. This review critically examines these claims, puncturing some of the hyperbole about automation, robotics and Artificial Intelligence. The review argues for a more nuanced analysis of the politics of technology and provides some critical distance on Silicon Valley's futurist discourse. Only by insisting that futures are always social can public bodies, rather than autonomous markets and endogenous technologies, become central to disentangling, debating and delivering those futures. © London School of Economics and Political Science 2017.

  9. The SSM/PMAD automated test bed project

    Science.gov (United States)

    Lollar, Louis F.

    1991-01-01

    The Space Station Module/Power Management and Distribution (SSM/PMAD) autonomous subsystem project was initiated in 1984. The project's goal has been to design and develop an autonomous, user-supportive PMAD test bed simulating the SSF Hab/Lab module(s). An eighteen kilowatt SSM/PMAD test bed model with a high degree of automated operation has been developed. This advanced automation test bed contains three expert/knowledge based systems that interact with one another and with other more conventional software residing in up to eight distributed 386-based microcomputers to perform the necessary tasks of real-time and near real-time load scheduling, dynamic load prioritizing, and fault detection, isolation, and recovery (FDIR).

  10. Dynamics of nonlinear feedback control.

    Science.gov (United States)

    Snippe, H P; van Hateren, J H

    2007-05-01

    Feedback control in neural systems is ubiquitous. Here we study the mathematics of nonlinear feedback control. We compare models in which the input is multiplied by a dynamic gain (multiplicative control) with models in which the input is divided by a dynamic attenuation (divisive control). The gain signal (resp. the attenuation signal) is obtained through a concatenation of an instantaneous nonlinearity and a linear low-pass filter operating on the output of the feedback loop. For input steps, the dynamics of gain and attenuation can be very different, depending on the mathematical form of the nonlinearity and the ordering of the nonlinearity and the filtering in the feedback loop. Further, the dynamics of feedback control can be strongly asymmetrical for increment versus decrement steps of the input. Nevertheless, for each of the models studied, the nonlinearity in the feedback loop can be chosen such that immediately after an input step, the dynamics of feedback control is symmetric with respect to increments versus decrements. Finally, we study the dynamics of the output of the control loops and find conditions under which overshoots and undershoots of the output relative to the steady-state output occur when the models are stimulated with low-pass filtered steps. For small steps at the input, overshoots and undershoots of the output do not occur when the filtering in the control path is faster than the low-pass filtering at the input. For large steps at the input, however, results depend on the model, and for some of the models, multiple overshoots and undershoots can occur even with a fast control path.
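
    A minimal sketch of the divisive-control variant described above: the output is the input divided by an attenuation signal obtained by low-pass filtering an instantaneous nonlinearity of the output. The square-root nonlinearity, time constant and step input are hypothetical choices used only to illustrate the loop structure and the overshoot after an increment step.

    # Sketch: divisive feedback control y = x / a, with a = low-pass filtered g(y).
    import numpy as np

    def simulate_divisive_loop(x, dt=0.001, tau=0.05, eps=1e-6):
        a = 1.0                                  # attenuation state of the feedback loop
        y = np.empty_like(x)
        for i, xi in enumerate(x):
            y[i] = xi / max(a, eps)              # divisive control of the input
            g = np.sqrt(max(y[i], 0.0))          # instantaneous nonlinearity on the output
            a += dt / tau * (g - a)              # first-order low-pass filter in the loop
        return y

    t = np.arange(0, 0.5, 0.001)
    x = np.where(t < 0.25, 1.0, 4.0)             # increment step at t = 0.25 s
    y = simulate_divisive_loop(x)
    print("output just after the step:", np.round(y[t > 0.25][:3], 3))
    print("steady-state output after the step:", round(float(y[-1]), 3))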

  11. Orchestrating the management of patients with high-output stomas.

    Science.gov (United States)

    McDonald, Alison

    For nurses working in isolation, managing high-output stomas can be stressful and difficult, with patient outcomes varying significantly. For the stoma care clinical nurse specialist, managing the choice of stoma appliance is only a small part of the care provided. To standardise and improve outcomes for patients with high-output stomas, team working is required. After contacting other stoma care services and using guidance from the High Impact Actions for Stoma Care document (Coloplast, 2010), it was evident that the team should put together an algorithm/flow chart to guide both specialists and ward nursing staff in the evidence-based and standardised management of patients with high-output stomas. This article presents the flowchart that was produced and uses case studies to demonstrate improvements.

  12. Acquisition and production of skilled behavior in dynamic decision-making tasks: Modeling strategic behavior in human-automation interaction: Why an aid can (and should) go unused

    Science.gov (United States)

    Kirlik, Alex

    1991-01-01

    Advances in computer and control technology offer the opportunity for task-offload aiding in human-machine systems. A task-offload aid (e.g., an autopilot, an intelligent assistant) can be selectively engaged by the human operator to dynamically delegate tasks to an automated system. Successful design and performance prediction in such systems requires knowledge of the factors influencing the strategy the operator develops and uses for managing interaction with the task-offload aid. A model is presented that shows how such strategies can be predicted as a function of three task context properties (frequency and duration of secondary tasks and costs of delaying secondary tasks) and three aid design properties (aid engagement and disengagement times, aid performance relative to human performance). Sensitivity analysis indicates how each of these contextual and design factors affects the optimal aid usage strategy and attainable system performance. The model is applied to understanding human-automation interaction in laboratory experiments on human supervisory control behavior. The laboratory task allowed subjects freedom to determine strategies for using an autopilot in a dynamic, multi-task environment. Modeling results suggested that many subjects may indeed have been acting appropriately by not using the autopilot in the way its designers intended. Although autopilot function was technically sound, this aid was not designed with due regard to the overall task context in which it was placed. These results demonstrate the need for additional research on how people may strategically manage their own resources, as well as those provided by automation, in an effort to keep workload and performance at acceptable levels.

  13. Economic and workflow analysis of a blood bank automated system.

    Science.gov (United States)

    Shin, Kyung-Hwa; Kim, Hyung Hoi; Chang, Chulhun L; Lee, Eun Yup

    2013-07-01

    This study compared the estimated costs and times required for ABO/Rh(D) typing and unexpected antibody screening using an automated system and manual methods. The total cost included direct and labor costs. Labor costs were calculated on the basis of the average operator salaries and unit values (minutes), which was the hands-on time required to test one sample. To estimate unit values, workflows were recorded on video, and the time required for each process was analyzed separately. The unit values of ABO/Rh(D) typing using the manual method were 5.65 and 8.1 min during regular and unsocial working hours, respectively. The unit value was less than 3.5 min when several samples were tested simultaneously. The unit value for unexpected antibody screening was 2.6 min. The unit values using the automated method for ABO/Rh(D) typing, unexpected antibody screening, and both simultaneously were all 1.5 min. The total cost of ABO/Rh(D) typing of only one sample using the automated analyzer was lower than that of testing only one sample using the manual technique but higher than that of testing several samples simultaneously. The total cost of unexpected antibody screening using an automated analyzer was less than that using the manual method. ABO/Rh(D) typing using an automated analyzer incurs a lower unit value and cost than that using the manual technique when only one sample is tested at a time. Unexpected antibody screening using an automated analyzer always incurs a lower unit value and cost than that using the manual technique.
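
    A small worked sketch of the costing logic described above (total cost = direct cost + salary rate x hands-on unit value). The unit values of 5.65 min (manual, regular hours) and 1.5 min (automated) are quoted in the abstract; the salary rate and direct costs below are hypothetical placeholders.

    # Sketch: per-sample cost = direct (reagent/consumable) cost + labor cost,
    # where labor cost = salary per minute * hands-on "unit value" in minutes.
    def total_cost(direct_cost, salary_per_minute, unit_value_min):
        return direct_cost + salary_per_minute * unit_value_min

    SALARY_PER_MIN = 0.50      # hypothetical operator salary, currency units per minute
    manual = total_cost(direct_cost=2.00, salary_per_minute=SALARY_PER_MIN, unit_value_min=5.65)
    automated = total_cost(direct_cost=3.50, salary_per_minute=SALARY_PER_MIN, unit_value_min=1.5)

    print(f"manual ABO/Rh(D) typing, one sample:    {manual:.2f}")
    print(f"automated ABO/Rh(D) typing, one sample: {automated:.2f}")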

  14. Comparison of stochastic resonance in static and dynamical nonlinearities

    International Nuclear Information System (INIS)

    Ma, Yumei; Duan, Fabing

    2014-01-01

    We compare the stochastic resonance (SR) effects in parallel arrays of static and dynamical nonlinearities via the measure of output signal-to-noise ratio (SNR). For a received noisy periodic signal, parallel arrays of both static and dynamical nonlinearities can enhance the output SNR by optimizing the internal noise level. The static nonlinearity is easily implementable, while the dynamical nonlinearity has more parameters to be tuned, at the risk of not exploiting the beneficial role of internal noise components. It is of interest to note that, for an input signal buried in the external Laplacian noise, we show that the dynamical nonlinearity is superior to the static nonlinearity in obtaining a better output SNR. This characteristic is assumed to be closely associated with the kurtosis of noise distribution. - Highlights: • Comparison of SR effects in arrays of both static and dynamical nonlinearities. • Static nonlinearity is easily implementable for the SNR enhancement. • Dynamical nonlinearity yields a better output SNR for external Laplacian noise

  15. Automated testing of electro-optical systems; Proceedings of the Meeting, Orlando, FL, Apr. 7, 8, 1988

    International Nuclear Information System (INIS)

    Nestler, J.; Richardson, P.I.

    1988-01-01

    Various papers on the automated testing of electrooptical systems are presented. Individual topics addressed include: simultaneous automated testing of Thematic Mapper dynamic spatial performance characteristics, results of objective automatic minimum resolvable temperature testing of thermal imagers using a proposed new figure of merit, test and manufacture of three-mirror laboratory telescope, calculation of apparent delta-T errors for band-limited detectors, and automated laser seeker performance evaluation system

  16. Unmet needs in automated cytogenetics

    International Nuclear Information System (INIS)

    Bender, M.A.

    1976-01-01

    Though some, at least, of the goals of automation systems for analysis of clinical cytogenetic material seem either at hand, like automatic metaphase finding, or at least likely to be met in the near future, like operator-assisted semi-automatic analysis of banded metaphase spreads, important areas of cytogenetic analysis, most importantly the determination of chromosomal aberration frequencies in populations of cells or in samples of cells from people exposed to environmental mutagens, await practical methods of automation. Important as are the clinical diagnostic applications, it is apparent that increasing concern over the clastogenic effects of the multitude of potentially clastogenic chemical and physical agents to which human populations are being increasingly exposed, and the resulting emergence of extensive cytogenetic testing protocols, makes the development of automation not only economically feasible but almost mandatory. The nature of the problems involved, and actual or possible approaches to their solution, are discussed

  17. Analyzing the impacts of final demand changes on total output using input-output approach: The case of Japanese ICT sectors

    Science.gov (United States)

    Zuhdi, Ubaidillah

    2014-03-01

    The purpose of this study is to analyze the impacts of final demand changes on the total output of Japanese Information and Communication Technologies (ICT) sectors in the future. This study employs one of the analysis tools of Input-Output (IO) analysis, the demand-pull IO quantity model, to achieve this purpose. There are three final demand changes used in this study, namely (1) export, (2) import, and (3) outside households consumption changes. This study focuses on the "pure change" condition, the condition in which final demand changes appear only in the analyzed sectors. The results show that the export and outside households consumption changes give a positive impact, while the opposite impact is seen for the import change.
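
    A minimal sketch of the demand-pull IO quantity model, in which the change in total output is the Leontief inverse applied to the change in final demand, delta_x = (I - A)^(-1) delta_f. The 3-sector coefficient matrix and demand changes below are hypothetical numbers, not the Japanese ICT data.

    # Sketch: demand-pull input-output quantity model, delta_x = (I - A)^-1 @ delta_f.
    import numpy as np

    A = np.array([                       # hypothetical 3-sector technical coefficient matrix
        [0.10, 0.05, 0.02],
        [0.20, 0.15, 0.10],
        [0.05, 0.10, 0.08],
    ])
    delta_f = np.array([50.0, 0.0, -10.0])    # hypothetical final-demand changes by sector

    leontief_inverse = np.linalg.inv(np.eye(3) - A)
    delta_x = leontief_inverse @ delta_f      # resulting change in total output by sector

    print("Leontief inverse:\n", np.round(leontief_inverse, 3))
    print("total output change by sector:", np.round(delta_x, 2))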

  18. Automation in airport security X-ray screening of cabin baggage: Examining benefits and possible implementations of automated explosives detection.

    Science.gov (United States)

    Hättenschwiler, Nicole; Sterchi, Yanik; Mendes, Marcia; Schwaninger, Adrian

    2018-10-01

    Bomb attacks on civil aviation make detecting improvised explosive devices and explosive material in passenger baggage a major concern. In the last few years, explosive detection systems for cabin baggage screening (EDSCB) have become available. Although used by a number of airports, most countries have not yet implemented these systems on a wide scale. We investigated the benefits of EDSCB with two different levels of automation currently being discussed by regulators and airport operators: automation as a diagnostic aid with an on-screen alarm resolution by the airport security officer (screener) or EDSCB with an automated decision by the machine. The two experiments reported here tested and compared both scenarios and a condition without automation as baseline. Participants were screeners at two international airports who differed in both years of work experience and familiarity with automation aids. Results showed that experienced screeners were good at detecting improvised explosive devices even without EDSCB. EDSCB increased only their detection of bare explosives. In contrast, screeners with less experience (tenure automated decision provided better human-machine detection performance than on-screen alarm resolution and no automation. This came at the cost of slightly higher false alarm rates on the human-machine system level, which would still be acceptable from an operational point of view. Results indicate that a wide-scale implementation of EDSCB would increase the detection of explosives in passenger bags, and automated decision, instead of automation as a diagnostic aid with on-screen alarm resolution, should be considered. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Automating radiochemistry: Considerations for commercial suppliers of devices

    International Nuclear Information System (INIS)

    Schmidt, D.G.

    1993-01-01

    The fundamental decision to automate a particular radiochemical synthesis for in-house use depends primarily on the demand for the compound and the total number of studies to be carried out with that compound. For a commercial supplier of automated chemistry systems, much more goes into the decision to design, develop and produce a particular automated chemistry system. There is a dramatic difference in design effort between an industrial environment and an academic environment. An in-house system must be built only once and needs only to incrementally simplify the synthesis process. A commercial product must: have reasonable manufacturing costs; be easy to use; be aesthetically pleasing; be easy to install and service; be functionally integral with other equipment sold by the manufacturer; be marketable within the regulatory environment; address radiation safety issues. This paper discusses issues that guide commercial suppliers in the formation of their product lines

  20. Combined process automation for large-scale EEG analysis.

    Science.gov (United States)

    Sfondouris, John L; Quebedeaux, Tabitha M; Holdgraf, Chris; Musto, Alberto E

    2012-01-01

    Epileptogenesis is a dynamic process producing increased seizure susceptibility. Electroencephalography (EEG) data provides information critical in understanding the evolution of epileptiform changes throughout epileptic foci. We designed an algorithm to facilitate efficient large-scale EEG analysis via linked automation of multiple data processing steps. Using EEG recordings obtained from electrical stimulation studies, the following steps of EEG analysis were automated: (1) alignment and isolation of pre- and post-stimulation intervals, (2) generation of user-defined band frequency waveforms, (3) spike-sorting, (4) quantification of spike and burst data and (5) power spectral density analysis. This algorithm allows for quicker, more efficient EEG analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.
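
    A minimal sketch of two of the automated steps listed above, user-defined band waveform generation and power spectral density analysis, using SciPy. The sampling rate, band edges and synthetic trace are hypothetical; spike-sorting and burst quantification are not shown.

    # Sketch: band-pass filtering and power spectral density for an EEG segment.
    import numpy as np
    from scipy.signal import butter, filtfilt, welch

    fs = 1000.0                                    # hypothetical sampling rate, Hz
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(3)
    eeg = np.sin(2 * np.pi * 7 * t) + 0.5 * rng.normal(size=t.size)   # synthetic trace

    def bandpass(x, low, high, fs, order=4):
        b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    theta = bandpass(eeg, 4, 8, fs)                # user-defined band waveform (theta)
    freqs, psd = welch(eeg, fs=fs, nperseg=2048)   # power spectral density analysis

    print("dominant frequency (Hz):", freqs[np.argmax(psd)])
    print("theta-band RMS:", round(float(np.sqrt(np.mean(theta**2))), 3))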

  1. Development of the RSAC Automation System for Reload Core of WH NPP

    International Nuclear Information System (INIS)

    Choi, Yu Sun; Bae, Sung Man; Koh, Byung Marn; Hong, Sun Kwan

    2006-01-01

    The Nuclear Design for a Reload Core of a Westinghouse Nuclear Power Plant consists of the 'Reload Core Model Search', 'Safety Analysis (RSAC)', and 'NDR (Nuclear Design Report) and OCAP (Operational Core Analysis Package Generation)' phases. Since scores of calculations for various accidents are required to confirm that the safety analysis assumptions are valid, the Safety Analysis (RSAC) is the most important and most time- and effort-consuming phase of the reload core design sequence. The Safety Analysis Automation System supports the core designer by automating the safety analysis calculations in the 'Safety Analysis' phase (about 20 calculations). More than 10 kinds of codes, APA (ALPHA/PHOENIX/ANC), APOLLO, VENUS, PHIRE XEFIT, INCORE, etc., are being used for the Safety Analysis calculations. The Westinghouse code system needs numerous inputs and outputs, so the possibility of human error cannot be ignored during Safety Analysis calculations. To remove these inefficiencies, all input files for the Safety Analysis calculations are automatically generated and executed by this Safety Analysis Automation System. All calculation notes are generated and the calculation results are summarized in the RSAC (Reload Safety Analysis Checklist) by this system. Therefore, the Safety Analysis Automation System helps the reload core designer to perform the safety analysis of the reload core model instantly and correctly

  2. Optimizing transformations for automated, high throughput analysis of flow cytometry data.

    Science.gov (United States)

    Finak, Greg; Perez, Juan-Manuel; Weng, Andrew; Gottardo, Raphael

    2010-11-04

    In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily-skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. We compare the performance of parameter-optimized and default-parameter (in flowCore) data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. We find that parameter-optimized transformations improve visualization, reduce
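
    A minimal sketch of optimizing one transformation parameter by maximum likelihood, in the spirit of the criteria described above: the arcsinh cofactor b in y = arcsinh(x/b) is chosen to maximize a Gaussian log-likelihood of the transformed data including the change-of-variables (Jacobian) term. The single-parameter transform, the Gaussian assumption and the synthetic data are illustrative simplifications of the flowCore setting.

    # Sketch: choose the arcsinh cofactor b by maximum likelihood; the Jacobian of
    # y = arcsinh(x / b) is dy/dx = 1 / sqrt(x^2 + b^2).
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(4)
    background = rng.normal(0.0, 30.0, 4000)                   # cells near zero (can be negative)
    positive = rng.lognormal(mean=6.0, sigma=0.7, size=1000)   # a bright positive population
    x = np.concatenate([background, positive])                 # synthetic fluorescence values

    def negative_log_likelihood(log_b, x):
        b = np.exp(log_b)
        y = np.arcsinh(x / b)
        mu, sigma = y.mean(), y.std()
        gauss = -0.5 * np.log(2 * np.pi * sigma**2) - (y - mu) ** 2 / (2 * sigma**2)
        jacobian = -0.5 * np.log(x**2 + b**2)                  # log |dy/dx|
        return -(gauss + jacobian).sum()

    result = minimize_scalar(negative_log_likelihood, bounds=(-5, 10), args=(x,), method="bounded")
    print("optimized cofactor b:", round(float(np.exp(result.x)), 2))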

  3. Optimizing transformations for automated, high throughput analysis of flow cytometry data

    Directory of Open Access Journals (Sweden)

    Weng Andrew

    2010-11-01

    Full Text Available Abstract Background In a high throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude, cell populations can have variances that depend on their mean fluorescence intensities, and may exhibit heavily-skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in data visualization and gating of cell populations across the range of data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations. Results We compare the performance of parameter-optimized and default-parameter (in flowCore data transformations on real and simulated data by measuring the variation in the locations of cell populations across samples, discovered via automated gating in both the scatter and fluorescence channels. We find that parameter

  4. Robust output observer-based control of neutral uncertain systems with discrete and distributed time delays: LMI optimization approach

    International Nuclear Information System (INIS)

    Chen, J.-D.

    2007-01-01

    In this paper, the robust control problem of output dynamic observer-based control for a class of uncertain neutral systems with discrete and distributed time delays is considered. A linear matrix inequality (LMI) optimization approach is used to design the new output dynamic observer-based controls. Three classes of observer-based controls are proposed and the maximal perturbation bound is given. Based on the results of this paper, the constraint of matrix equality is not necessary for designing the observer-based controls. Finally, a numerical example is given to illustrate the usefulness of the proposed method
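
    A minimal sketch of an LMI feasibility problem of the kind underlying such designs, posed in CVXPY: find P > 0 with A'P + PA < 0, the simplest Lyapunov-type certificate. The system matrix is hypothetical, and the paper's actual observer-based synthesis LMIs (with delays and uncertainty) are considerably more involved.

    # Sketch: basic Lyapunov LMI feasibility with CVXPY (solved by an SDP solver such as SCS).
    import numpy as np
    import cvxpy as cp

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])          # hypothetical stable system matrix
    n = A.shape[0]
    eps = 1e-6

    P = cp.Variable((n, n), symmetric=True)
    constraints = [
        P >> eps * np.eye(n),                     # P positive definite
        A.T @ P + P @ A << -eps * np.eye(n),      # Lyapunov LMI
    ]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve()

    print("status:", problem.status)
    print("P =\n", np.round(P.value, 3))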

  5. Automated design of complex dynamic systems.

    Directory of Open Access Journals (Sweden)

    Michiel Hermans

    Full Text Available Several fields of study are concerned with uniting the concept of computation with that of the design of physical systems. For example, a recent trend in robotics is to design robots in such a way that they require a minimal control effort. Another example is found in the domain of photonics, where recent efforts try to benefit directly from the complex nonlinear dynamics to achieve more efficient signal processing. The underlying goal of these and similar research efforts is to internalize a large part of the necessary computations within the physical system itself by exploiting its inherent non-linear dynamics. This, however, often requires the optimization of large numbers of system parameters, related to both the system's structure as well as its material properties. In addition, many of these parameters are subject to fabrication variability or to variations through time. In this paper we apply a machine learning algorithm to optimize physical dynamic systems. We show that such algorithms, which are normally applied to abstract computational entities, can be extended to the field of differential equations and used to optimize an associated set of parameters which determine their behavior. We show that machine learning training methodologies are highly useful in designing robust systems, and we provide a set of both simple and complex examples using models of physical dynamical systems. Interestingly, the derived optimization method is intimately related to direct collocation, a method known in the field of optimal control. Our work suggests that the application domains of both machine learning and optimal control have a largely unexplored overlapping area which encompasses a novel design methodology of smart and highly complex physical systems.

  6. Automatic Grader of MT Outputs in Colloquial Style by Using Multiple Edit Distances

    Science.gov (United States)

    Akiba, Yasuhiro; Imamura, Kenji; Sumita, Eiichiro; Nakaiwa, Hiromi; Yamamoto, Seiichi; Okuno, Hiroshi G.

    This paper addresses the challenging problem of automating the intelligent human ability to evaluate output from machine translation (MT) systems, which are subsystems of Speech-to-Speech MT (SSMT) systems. Conventional automatic MT evaluation methods include BLEU, which MT researchers have frequently used. BLEU is unsuitable for SSMT evaluation for two reasons. First, BLEU assesses errors lightly at the beginning or ending of translations and heavily in the middle, although the assessments should be independent of position. Second, BLEU lacks tolerance in accepting colloquial sentences with small errors, although such errors do not prevent us from continuing a conversation. In this paper, the authors report a new evaluation method called RED that automatically grades each MT output by using a decision tree (DT). The DT is learned from training examples that are encoded by using multiple edit distances and their grades. The multiple edit distances are the normal edit distance (ED), defined by insertion, deletion, and replacement, as well as extensions of ED. The use of multiple edit distances allows more tolerance than either ED or BLEU. Each evaluated MT output is assigned a grade by using the DT. RED and BLEU were compared for the task of evaluating SSMT systems, which have various performances, on a spoken language corpus, ATR's Basic Travel Expression Corpus (BTEC). Experimental results showed that RED significantly outperformed BLEU.
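
    A minimal sketch of the two ingredients named above: a word-level edit distance between an MT output and a reference, and a decision tree trained on distance-based features to predict a grade. The toy sentence pairs, the single distance variant and scikit-learn's DecisionTreeClassifier are illustrative assumptions, not the RED implementation.

    # Sketch: word-level edit distance (insertion/deletion/replacement) as a feature,
    # plus a decision tree that maps distance-based features to a grade.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def edit_distance(a, b):
        """Classic dynamic-programming Levenshtein distance over word tokens."""
        a, b = a.split(), b.split()
        d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
        d[:, 0] = np.arange(len(a) + 1)
        d[0, :] = np.arange(len(b) + 1)
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
        return int(d[-1, -1])

    references = ["where is the nearest station", "i would like a window seat"]
    outputs = ["where is nearest station", "i want like window seat please"]
    grades = ["A", "B"]                            # hypothetical human grades

    features = [[edit_distance(o, r), len(o.split())] for o, r in zip(outputs, references)]
    tree = DecisionTreeClassifier(max_depth=2).fit(features, grades)
    print(tree.predict([[1, 5]]))                  # grade a new output: distance 1, length 5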

  7. Semi-automated Robust Quantification of Lesions (SRQL) Toolbox

    Directory of Open Access Journals (Sweden)

    Kaori Ito

    2017-02-01

    Full Text Available Quantifying lesions in a robust manner is fundamental for studying the effects of neuroanatomical changes in the post-stroke brain on recovery. However, the wide variability in lesion characteristics across individuals makes manual lesion segmentation a challenging and often subjective process. This makes it difficult to combine stroke lesion data across multiple research sites, due to subjective differences in how lesions may be defined. We developed the Semi-automated Robust Quantification of Lesions (SRQL; https://github.com/npnl/SRQL; DOI: 10.5281/zenodo.267213) Toolbox that performs several analysis steps: (1) a white matter intensity correction that removes healthy white matter voxels from the lesion mask, thereby making lesions slightly more robust to subjective errors; (2) an automated report of descriptive statistics on lesions for simplified comparison between or across groups; and (3) an option to perform analyses in both native and standard space to facilitate analyses in either space, or comparisons between spaces. Here, we describe the methods implemented in the toolbox and demonstrate the outputs of the SRQL toolbox.

  8. Compressive power spectrum sensing for vibration-based output-only system identification of structural systems in the presence of noise

    Science.gov (United States)

    Tau Siesakul, Bamrung; Gkoktsi, Kyriaki; Giaralis, Agathoklis

    2015-05-01

    Motivated by the need to reduce the monetary and energy consumption costs of wireless sensor networks in undertaking output-only/operational modal analysis of engineering structures, this paper considers a multi-coset analog-to-information converter for structural system identification from acceleration response signals of white-noise-excited linear damped structures sampled at sub-Nyquist rates. The underlying natural frequencies, peak gains in the frequency domain, and critical damping ratios of the vibrating structures are estimated directly from the sub-Nyquist measurements and, therefore, the computationally demanding signal reconstruction step is by-passed. This is accomplished by first employing a power spectrum blind sampling (PSBS) technique for multi-band wide-sense stationary stochastic processes in conjunction with deterministic non-uniform multi-coset sampling patterns derived from solving a weighted least squares optimization problem. Next, modal properties are derived by the standard frequency domain peak picking algorithm. Special attention is focused on assessing the potential of the adopted PSBS technique, which poses no sparsity requirements on the sensed signals, to derive accurate estimates of modal structural system properties from noisy sub-Nyquist measurements. To this aim, sub-Nyquist sampled acceleration response signals corrupted by various levels of additive white noise pertaining to a benchmark space truss structure with closely spaced natural frequencies are obtained within an efficient Monte Carlo simulation-based framework. Accurate estimates of natural frequencies and reasonable estimates of local peak spectral ordinates and critical damping ratios are derived from measurements sampled at about 70% below the Nyquist rate and for SNR as low as 0 dB, demonstrating that the adopted approach enjoys noise immunity.
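
    A minimal sketch of the final identification step mentioned above, frequency-domain peak picking of natural frequencies from a response power spectrum. The synthetic two-mode response, Welch estimation at the full Nyquist rate and SciPy's find_peaks are illustrative stand-ins for the paper's sub-Nyquist power spectrum blind sampling pipeline.

    # Sketch: peak picking of natural frequencies from an output-only response spectrum.
    import numpy as np
    from scipy.signal import welch, find_peaks

    fs = 200.0                                     # hypothetical sampling rate, Hz
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(5)
    # Synthetic response with two closely spaced modes buried in broadband noise.
    resp = (np.sin(2 * np.pi * 3.1 * t) + 0.8 * np.sin(2 * np.pi * 3.6 * t)
            + 0.5 * rng.normal(size=t.size))

    freqs, psd = welch(resp, fs=fs, nperseg=4096)
    peaks, _ = find_peaks(psd, height=0.1 * psd.max(), distance=5)
    print("picked natural frequencies (Hz):", np.round(freqs[peaks], 2))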

  9. COA based robust output feedback UPFC controller design

    Energy Technology Data Exchange (ETDEWEB)

    Shayeghi, H., E-mail: hshayeghi@gmail.co [Technical Engineering Department, University of Mohaghegh Ardabili, Ardabil (Iran, Islamic Republic of); Shayanfar, H.A. [Center of Excellence for Power System Automation and Operation, Electrical Engineering Department, Iran University of Science and Technology, Tehran (Iran, Islamic Republic of); Jalilzadeh, S.; Safari, A. [Technical Engineering Department, Zanjan University, Zanjan (Iran, Islamic Republic of)

    2010-12-15

    In this paper, a novel method for the design of an output feedback controller for the unified power flow controller (UPFC) using a chaotic optimization algorithm (COA) is developed. Chaotic optimization algorithms, which have the features of easy implementation, short execution time and robust mechanisms of escaping from local optima, are a promising tool for engineering applications. The selection of the output feedback gains for the UPFC controllers is converted to an optimization problem with a time-domain-based objective function, which is solved by a COA based on the Lozi map. Since chaotic mapping enjoys certainty, ergodicity and the stochastic property, the proposed chaotic optimization introduces chaos mapping using Lozi map chaotic sequences, which increases the convergence rate and the resulting precision. To ensure the robustness of the proposed stabilizers, the design process takes into account a wide range of operating conditions and system configurations. The effectiveness of the proposed controller for damping low frequency oscillations is tested and demonstrated through non-linear time-domain simulation and some performance indices studies. The analysis of the results reveals that the designed COA-based output feedback UPFC damping controller has an excellent capability in damping power system low frequency oscillations and greatly enhances the dynamic stability of power systems.
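
    A minimal sketch of a chaotic search driven by Lozi-map sequences, the core ingredient of the COA named above. The test objective, parameter bounds and the simple best-so-far search loop are hypothetical simplifications of the paper's output-feedback gain tuning.

    # Sketch: chaotic optimization using Lozi-map sequences
    # x_{n+1} = 1 - a*|x_n| + b*y_n,  y_{n+1} = x_n.
    import numpy as np

    def lozi_sequence(n, a=1.7, b=0.5, x0=0.1, y0=0.1):
        xs = np.empty(n)
        x, y = x0, y0
        for i in range(n):
            x, y = 1.0 - a * abs(x) + b * y, x
            xs[i] = x
        return xs

    def chaotic_search(objective, lower, upper, iterations=2000):
        seq = lozi_sequence(iterations * len(lower)).reshape(iterations, len(lower))
        # Normalize the chaotic sequence into [0, 1] and map it onto the search box.
        u = (seq - seq.min(axis=0)) / (seq.max(axis=0) - seq.min(axis=0))
        candidates = lower + u * (upper - lower)
        best = min(candidates, key=objective)
        return best, objective(best)

    # Hypothetical objective standing in for the time-domain damping index of the paper.
    objective = lambda k: (k[0] - 0.3) ** 2 + (k[1] + 1.2) ** 2 + 0.5
    lower, upper = np.array([-2.0, -2.0]), np.array([2.0, 2.0])
    best_gains, best_cost = chaotic_search(objective, lower, upper)
    print("best gains:", np.round(best_gains, 3), "cost:", round(best_cost, 4))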

  10. Output hardcopy devices

    CERN Document Server

    Durbeck, Robert

    1988-01-01

    Output Hardcopy Devices provides a technical summary of computer output hardcopy devices such as plotters, computer output printers, and CRT generated hardcopy. Important related technical areas such as papers, ribbons and inks, color techniques, controllers, and character fonts are also covered. Emphasis is on techniques primarily associated with printing, as well as the plotting capabilities of printing devices that can be effectively used for computer graphics in addition to their various printing functions. Comprised of 19 chapters, this volume begins with an introduction to vector and ras

  11. Neural network-based optimal adaptive output feedback control of a helicopter UAV.

    Science.gov (United States)

    Nodland, David; Zargarzadeh, Hassan; Jagannathan, Sarangapani

    2013-07-01

    Helicopter unmanned aerial vehicles (UAVs) are widely used for both military and civilian operations. Because the helicopter UAVs are underactuated nonlinear mechanical systems, high-performance controller design for them presents a challenge. This paper introduces an optimal controller design via an output feedback for trajectory tracking of a helicopter UAV, using a neural network (NN). The output-feedback control system utilizes the backstepping methodology, employing kinematic and dynamic controllers and an NN observer. The online approximator-based dynamic controller learns the infinite-horizon Hamilton-Jacobi-Bellman equation in continuous time and calculates the corresponding optimal control input by minimizing a cost function, forward-in-time, without using the value and policy iterations. Optimal tracking is accomplished by using a single NN utilized for the cost function approximation. The overall closed-loop system stability is demonstrated using Lyapunov analysis. Finally, simulation results are provided to demonstrate the effectiveness of the proposed control design for trajectory tracking.

  12. Tri-State Current Source Inverter With Improved Dynamic Performance

    DEFF Research Database (Denmark)

    Blaabjerg, Frede; Loh, Poh Chiang; Wong, Chow Pang

    2008-01-01

    Traditional dc-ac current source inverter (CSI) has a right-half-plane (RHP) zero in its control-to-output transfer function. This RHP zero causes the inverter output to fall before rising when a step increase in command reference is required (commonly known as non-minimum-phase effect). To achieve...... a better dynamic response, this paper proposes the design of a tri-state CSI using only an additional semiconductor switch for introducing unique freewheeling states to the traditional six active and three null states of a CSI. With the freewheeling states inserted appropriately within the inverter state...... sequence, the inductive boosting and discharging intervals can be decoupled, allowing the RHP zero to be eliminated with only minor circuit modifications (high level control schemes like predictive and multiloop voltage/current control remain unchanged). The designed inverter can be controlled using...

  13. Modeling and control of the output current of a Reformed Methanol Fuel Cell system

    DEFF Research Database (Denmark)

    Justesen, Kristian Kjær; Andreasen, Søren Juhl; Pasupathi, Sivakumar

    2015-01-01

    In this work, a dynamic Matlab SIMULINK model of the relationship between the fuel cell current set point of a Reformed Methanol Fuel Cell system and the output current of the system is developed. The model contains an estimated fuel cell model, based on a polarization curve and assumed first order...... dynamics, as well as a battery model based on an equivalent circuit model and a balance of plant power consumption model. The models are tuned with experimental data and verified using a verification data set. The model is used to develop an output current controller which can control the charge current...... of the battery. The controller is a PI controller with feedforward and anti-windup. The performance of the controller is tested and verified on the physical system....
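
    A minimal sketch of the controller structure described here, a discrete-time PI loop with feedforward and simple anti-windup clamping; the gains, limits, and signal names are illustrative assumptions, not the tuned values from the paper:

        class PIWithFeedforwardAntiWindup:
            """Discrete PI controller with a feedforward term and clamping anti-windup."""
            def __init__(self, kp, ki, dt, u_min, u_max):
                self.kp, self.ki, self.dt = kp, ki, dt
                self.u_min, self.u_max = u_min, u_max
                self.integral = 0.0

            def step(self, setpoint, measurement, feedforward=0.0):
                error = setpoint - measurement
                u_unsat = feedforward + self.kp * error + self.ki * (self.integral + error * self.dt)
                u = min(max(u_unsat, self.u_min), self.u_max)
                # Anti-windup: only accumulate the integral while the output is not saturated
                if u == u_unsat:
                    self.integral += error * self.dt
                return u

        # Example: track a battery charge-current set point (all numbers are placeholders).
        ctrl = PIWithFeedforwardAntiWindup(kp=0.8, ki=2.0, dt=0.1, u_min=0.0, u_max=10.0)
        measured_current = 0.0
        for _ in range(50):
            fuel_cell_setpoint = ctrl.step(setpoint=4.0, measurement=measured_current, feedforward=1.0)
            measured_current += 0.3 * (fuel_cell_setpoint - measured_current)  # crude first-order plant stand-in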

  14. Age-dependent terminal declines in reproductive output in a wild bird.

    Directory of Open Access Journals (Sweden)

    Martijn Hammers

    Full Text Available In many iteroparous species, individual fitness components, such as reproductive output, first increase with age and then decline during late-life. However, individuals differ greatly in reproductive lifespan, and reproductive declines may only occur in the period just before their death as a result of an age-independent decline in physiological condition. To fully understand reproductive senescence it is important to investigate to what extent declines in late-life reproduction can be explained by age, time until death, or both. However, the study of late-life fitness performance in natural populations is challenging as the exact birth and death dates of individuals are often not known, and most individuals succumb to extrinsic mortality before reaching old age. Here, we used an exceptional long-term longitudinal dataset of individuals from a natural, closed, and predator-free population of the Seychelles warbler (Acrocephalus sechellensis) to investigate reproductive output, both in relation to age and to the time until the death of an individual (reverse-age approach). We observed an initial age-dependent increase in reproductive output that was followed by a decline in old age. However, we found no significant decline in reproductive output in the years directly preceding death. Although post-peak reproductive output declined with age, this pattern differed between terminal and non-terminal reproductive attempts, and the age-dependence of the terminal breeding attempt explained much of the variation in age-specific reproductive output. In fact, terminal declines in reproductive output were steeper in very old individuals. These results indicate that not only age-dependent, but also age-independent factors, such as physiological condition, need to be considered to understand reproductive senescence in wild-living animals.

  15. Finding the Root Causes of Statistical Inconsistency in Community Earth System Model Output

    Science.gov (United States)

    Milroy, D.; Hammerling, D.; Baker, A. H.

    2017-12-01

    Baker et al (2015) developed the Community Earth System Model Ensemble Consistency Test (CESM-ECT) to provide a metric for software quality assurance by determining statistical consistency between an ensemble of CESM outputs and new test runs. The test has proved useful for detecting statistical difference caused by compiler bugs and errors in physical modules. However, detection is only the necessary first step in finding the causes of statistical difference. The CESM is a vastly complex model comprised of millions of lines of code which is developed and maintained by a large community of software engineers and scientists. Any root cause analysis is correspondingly challenging. We propose a new capability for CESM-ECT: identifying the sections of code that cause statistical distinguishability. The first step is to discover CESM variables that cause CESM-ECT to classify new runs as statistically distinct, which we achieve via Randomized Logistic Regression. Next we use a tool developed to identify CESM components that define or compute the variables found in the first step. Finally, we employ the application Kernel GENerator (KGEN) created in Kim et al (2016) to detect fine-grained floating point differences. We demonstrate an example of the procedure and advance a plan to automate this process in our future work.
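
    A minimal sketch of the variable-discovery step, assuming the consistency labels and per-run variable summaries are already available as arrays; a stability-selection-style loop around scikit-learn's L1-penalized LogisticRegression is used here to stand in for randomized logistic regression, and all data and names are placeholders:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def variable_selection_frequencies(X, y, n_rounds=100, subsample=0.75, C=0.1, rng=None):
            """Fit an L1 logistic model on random subsamples and count how often
            each variable receives a nonzero coefficient."""
            rng = np.random.default_rng(rng)
            n, p = X.shape
            counts = np.zeros(p)
            for _ in range(n_rounds):
                idx = rng.choice(n, size=int(subsample * n), replace=False)
                model = LogisticRegression(penalty="l1", solver="liblinear", C=C)
                model.fit(X[idx], y[idx])
                counts += (np.abs(model.coef_[0]) > 1e-8)
            return counts / n_rounds

        # X: rows are ensemble members / test runs, columns are CESM output variables (placeholder data)
        X = np.random.default_rng(0).normal(size=(200, 30))
        y = (X[:, 3] + 0.5 * X[:, 17] + 0.1 * np.random.default_rng(1).normal(size=200)) > 0
        freqs = variable_selection_frequencies(X, y.astype(int))
        suspect_vars = np.argsort(freqs)[::-1][:5]   # variables most often implicated in distinguishability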

  16. External Suction and Fluid Output in Chest Drains After Lobectomy

    DEFF Research Database (Denmark)

    Lijkendijk, Marike; Neckelmann, Kirsten; Licht, Peter B

    2018-01-01

    influences the amount of fluid. METHODS: We randomly assigned (1:1) 106 patients who underwent lobectomy to either low (-5 cm H2O) or high (-20 cm H2O) external suction using an electronic chest drainage system. Only one chest drain was allowed, and we used strict algorithms for chest drain removal, which...... was delegated to staff nurses: air leakage less than 20 mL/min for 6 hours regardless of fluid output, provided it was serous. The primary end point was fluid output after 24 and 48 hours. RESULTS: Mean fluid output was significantly higher with high suction after both 24 (338 ± 265 mL versus 523 ± 215 m...

  17. Active chatter suppression with displacement-only measurement in turning process

    Science.gov (United States)

    Ma, Haifeng; Wu, Jianhua; Yang, Liuqing; Xiong, Zhenhua

    2017-08-01

    Regenerative chatter is a major hindrance to achieving high quality and high production rates in machining processes. Various active controllers have been proposed to mitigate chatter. However, most existing controllers were developed on the basis of multi-state feedback of the system, and state observers were usually needed. Moreover, model parameters of the machining process (mass, damping and stiffness) were required by existing active controllers. In this study, an active sliding mode controller, which employs a dynamic output feedback sliding surface for the unmatched condition and an adaptive law for disturbance estimation, is designed, analyzed, and validated for chatter suppression in the turning process. Only displacement measurement is required by this approach; other sensors and state observers are not needed. Moreover, it facilitates rapid implementation since the designed controller is established without using model parameters of the turning process. Theoretical analysis, numerical simulations and experiments on a computer numerical control (CNC) lathe are presented. It is shown that chatter can be substantially attenuated and the chatter-free region can be significantly expanded with the presented method.

  18. Automated uncertainty analysis methods in the FRAP computer codes

    International Nuclear Information System (INIS)

    Peck, S.O.

    1980-01-01

    A user-oriented, automated uncertainty analysis capability has been incorporated in the Fuel Rod Analysis Program (FRAP) computer codes. The FRAP codes have been developed for the analysis of Light Water Reactor fuel rod behavior during steady state (FRAPCON) and transient (FRAP-T) conditions as part of the United States Nuclear Regulatory Commission's Water Reactor Safety Research Program. The objective of uncertainty analysis of these codes is to obtain estimates of the uncertainty in computed outputs of the codes as a function of known uncertainties in input variables. This paper presents the methods used to generate an uncertainty analysis of a large computer code, discusses the assumptions that are made, and shows techniques for testing them. An uncertainty analysis of FRAP-T calculated fuel rod behavior during a hypothetical loss-of-coolant transient is presented as an example and carried through the discussion to illustrate the various concepts

  19. Low cost automation

    International Nuclear Information System (INIS)

    1987-03-01

    This book describes how to build an automation plan and design automation facilities. It covers automation of chip-producing processes, including the basics of cutting, NC machining and chip handling; automation units such as drilling, tapping, boring, milling and slide units; applications of hydraulics, including basic hydraulic circuits and their characteristics; applications of pneumatics; and the kinds and applications of automation in processes such as assembly, transportation, automatic machines and factory automation.

  20. Validation of Fourier decomposition MRI with dynamic contrast-enhanced MRI using visual and automated scoring of pulmonary perfusion in young cystic fibrosis patients

    International Nuclear Information System (INIS)

    Bauman, Grzegorz; Puderbach, Michael; Heimann, Tobias; Kopp-Schneider, Annette; Fritzsching, Eva; Mall, Marcus A.; Eichinger, Monika

    2013-01-01

    Purpose: To validate Fourier decomposition (FD) magnetic resonance (MR) imaging in cystic fibrosis (CF) patients with dynamic contrast-enhanced (DCE) MR imaging. Materials and methods: Thirty-four CF patients (median age 4.08 years; range 0.16–30) were examined on a 1.5-T MR imager. For FD MR imaging, sets of lung images were acquired using an untriggered two-dimensional balanced steady-state free precession sequence. Perfusion-weighted images were obtained after correction of the breathing displacement and Fourier analysis of the cardiac frequency from the time-resolved data sets. DCE data sets were acquired with a three-dimensional gradient echo sequence. The FD and DCE images were visually assessed for perfusion defects by two readers independently (R1, R2) using a field based scoring system (0–12). Software was used for perfusion impairment evaluation (R3) of segmented lung images using an automated threshold. Both imaging and evaluation methods were compared for agreement and tested for concordance between FD and DCE imaging. Results: Good or acceptable intra-reader agreement was found between FD and DCE for visual and automated scoring: R1 upper and lower limits of agreement (ULA, LLA): 2.72, −2.5; R2: ULA, LLA: ±2.5; R3: ULA: 1.5, LLA: −2. A high concordance was found between visual and automated scoring (FD: 70–80%, DCE: 73–84%). Conclusions: FD MR imaging provides equivalent diagnostic information to DCE MR imaging in CF patients. Automated assessment of regional perfusion defects using FD and DCE MR imaging is comparable to visual scoring but allows for percentage-based analysis

  1. Validation of Fourier decomposition MRI with dynamic contrast-enhanced MRI using visual and automated scoring of pulmonary perfusion in young cystic fibrosis patients

    Energy Technology Data Exchange (ETDEWEB)

    Bauman, Grzegorz, E-mail: g.bauman@dkfz.de [German Cancer Research Center, Division of Medical Physics in Radiology, Im Neuenheimer Feld 223, 69120 Heidelberg (Germany); Puderbach, Michael, E-mail: m.puderbach@dkfz.de [Chest Clinics at the University of Heidelberg, Clinics for Interventional and Diagnostic Radiology, Amalienstr. 5, 69126 Heidelberg (Germany); Translational Lung Research Center Heidelberg (TLRC), Member of the German Center for Lung Research (Germany); Heimann, Tobias, E-mail: t.heimann@dkfz.de [German Cancer Research Center, Division of Medical and Biological Informatics, Im Neuenheimer Feld 223, 69120 Heidelberg (Germany); Kopp-Schneider, Annette, E-mail: kopp@dkfz.de [German Cancer Research Center, Division of Biostatistics, Im Neuenheimer Feld 223, 69120 Heidelberg (Germany); Fritzsching, Eva, E-mail: eva.fritzsching@med.uni-heidelberg.de [University Hospital Heidelberg, Department of Translational Pulmonology and Division of Pediatric Pulmonology and Allergy and Cystic Fibrosis Center, Im Neuenheimer Feld 430, Heidelberg (Germany); Mall, Marcus A., E-mail: marcus.mall@med.uni-heidelberg.de [Translational Lung Research Center Heidelberg (TLRC), Member of the German Center for Lung Research (Germany); University Hospital Heidelberg, Department of Translational Pulmonology and Division of Pediatric Pulmonology and Allergy and Cystic Fibrosis Center, Im Neuenheimer Feld 430, Heidelberg (Germany); Eichinger, Monika, E-mail: m.eichinger@dkfz.de [Translational Lung Research Center Heidelberg (TLRC), Member of the German Center for Lung Research (Germany); German Cancer Research Center, Division of Radiology, Im Neuenheimer Feld 223, 69120 Heidelberg (Germany)

    2013-12-01

    Purpose: To validate Fourier decomposition (FD) magnetic resonance (MR) imaging in cystic fibrosis (CF) patients with dynamic contrast-enhanced (DCE) MR imaging. Materials and methods: Thirty-four CF patients (median age 4.08 years; range 0.16–30) were examined on a 1.5-T MR imager. For FD MR imaging, sets of lung images were acquired using an untriggered two-dimensional balanced steady-state free precession sequence. Perfusion-weighted images were obtained after correction of the breathing displacement and Fourier analysis of the cardiac frequency from the time-resolved data sets. DCE data sets were acquired with a three-dimensional gradient echo sequence. The FD and DCE images were visually assessed for perfusion defects by two readers independently (R1, R2) using a field based scoring system (0–12). Software was used for perfusion impairment evaluation (R3) of segmented lung images using an automated threshold. Both imaging and evaluation methods were compared for agreement and tested for concordance between FD and DCE imaging. Results: Good or acceptable intra-reader agreement was found between FD and DCE for visual and automated scoring: R1 upper and lower limits of agreement (ULA, LLA): 2.72, −2.5; R2: ULA, LLA: ±2.5; R3: ULA: 1.5, LLA: −2. A high concordance was found between visual and automated scoring (FD: 70–80%, DCE: 73–84%). Conclusions: FD MR imaging provides equivalent diagnostic information to DCE MR imaging in CF patients. Automated assessment of regional perfusion defects using FD and DCE MR imaging is comparable to visual scoring but allows for percentage-based analysis.

  2. Data Structure Analysis to Represent Basic Models of Finite State Automation

    Directory of Open Access Journals (Sweden)

    V. V. Gurenko

    2015-01-01

    Full Text Available Complex system engineering based on automaton models requires a reasoned selection of the data structures used to implement them. The problem of automaton representation and of selecting the data structures for it has been understudied. Arbitrary data structure selection for the software implementation of an automaton model leads to unnecessary computational burden and reduces the efficiency of the developed system. This article proposes an approach to the reasoned selection of data structures to represent the basic models of finite algorithmic automata and gives practical considerations based on it. Static and dynamic data structures are proposed for the three main ways to specify Mealy and Moore automata: a transition table, a coupling matrix and a transition graph. A three-dimensional array, a rectangular matrix and a matrix of lists are the static structures. The dynamic structures are list-oriented structures: two-level and three-level Iliffe vectors and a multi-linked list. These structures allow us to store all required information about the components of a finite state automaton model - characteristic set cardinalities and the data of the transition and output functions. A criterion system is proposed for comparative evaluation of data structures with respect to the algorithmic features of automata theory problems. The criteria focus on the space and time computational complexity of operations performed in tasks such as equivalent automaton conversions, proofs of automaton equivalence and isomorphism, and automaton minimization. A comparative analysis of the data structures based on the criterion system has been carried out for both the static and the dynamic types. The analysis showed advantages of the three-dimensional array, the matrix and the two-level Iliffe vector. These are the structures that specify an automaton by its transition table. For these structures an experiment was done to measure the execution time of the automaton operations included in the criterion system. The analysis of the experiment results showed that a dynamic structure - two
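
    A minimal sketch of the transition-table representation discussed above, using plain Python dictionaries; the states, input alphabet, and transition/output entries are invented for illustration:

        # Mealy automaton specified by a transition table: for each (state, input symbol)
        # the table stores the next state and the output symbol.
        transition_table = {
            ("s0", "a"): ("s1", "0"),
            ("s0", "b"): ("s0", "1"),
            ("s1", "a"): ("s0", "1"),
            ("s1", "b"): ("s1", "0"),
        }

        def run_mealy(table, start_state, inputs):
            """Run the Mealy machine over an input sequence and collect its outputs."""
            state, outputs = start_state, []
            for symbol in inputs:
                state, out = table[(state, symbol)]
                outputs.append(out)
            return state, outputs

        final_state, outputs = run_mealy(transition_table, "s0", "abba")
        # final_state == "s0", outputs == ["0", "0", "0", "1"]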

  3. Formal analysis of design process dynamics

    NARCIS (Netherlands)

    Bosse, T.; Jonker, C.M.; Treur, J.

    2010-01-01

    This paper presents a formal analysis of design process dynamics. Such a formal analysis is a prerequisite to come to a formal theory of design and for the development of automated support for the dynamics of design processes. The analysis was geared toward the identification of dynamic design

  4. Formal Analysis of Design Process Dynamics

    NARCIS (Netherlands)

    Bosse, T.; Jonker, C.M.; Treur, J.

    2010-01-01

    This paper presents a formal analysis of design process dynamics. Such a formal analysis is a prerequisite to come to a formal theory of design and for the development of automated support for the dynamics of design processes. The analysis was geared toward the identification of dynamic design

  5. From Static Output Feedback to Structured Robust Static Output Feedback: A Survey

    OpenAIRE

    Sadabadi , Mahdieh ,; Peaucelle , Dimitri

    2016-01-01

    This paper reviews the vast literature on static output feedback design for linear time-invariant systems including classical results and recent developments. In particular, we focus on static output feedback synthesis with performance specifications, structured static output feedback, and robustness. The paper provides a comprehensive review on existing design approaches including iterative linear matrix inequalities heuristics, linear matrix inequalities with rank constraints, methods with ...

  6. Preliminary Retrospective Analysis of Daily Tomotherapy Output Constancy Checks Using Statistical Process Control.

    Science.gov (United States)

    Mezzenga, Emilio; D'Errico, Vincenzo; Sarnelli, Anna; Strigari, Lidia; Menghi, Enrico; Marcocci, Francesco; Bianchini, David; Benassi, Marcello

    2016-01-01

    The purpose of this study was to retrospectively evaluate the results from a Helical TomoTherapy Hi-Art treatment system relating to quality controls based on daily static and dynamic output checks using statistical process control methods. Individual value X-charts, exponentially weighted moving average charts, and process capability and acceptability indices were used to monitor the treatment system performance. Daily output values measured from January 2014 to January 2015 were considered. The results obtained showed that, although the process was in control, there was an out-of-control situation in the principal maintenance intervention for the treatment system. In particular, process capability indices showed a decreasing percentage of points in control which was, however, acceptable according to AAPM TG148 guidelines. Our findings underline the importance of restricting the acceptable range of daily output checks and suggest a future line of investigation for a detailed process control of daily output checks for the Helical TomoTherapy Hi-Art treatment system.
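
    A minimal sketch of the two control charts mentioned above, computed from a series of daily output measurements; the synthetic data, EWMA weight, and 3-sigma limits are illustrative defaults, not the clinic's actual values:

        import numpy as np

        def individuals_chart_limits(x):
            """Individual-value X-chart: center line and 3-sigma limits estimated
            from the average moving range (d2 = 1.128 for subgroups of size 2)."""
            x = np.asarray(x, float)
            mr_bar = np.mean(np.abs(np.diff(x)))
            sigma = mr_bar / 1.128
            center = x.mean()
            return center, center - 3 * sigma, center + 3 * sigma

        def ewma(x, lam=0.2, target=None):
            """Exponentially weighted moving average of the daily output values."""
            x = np.asarray(x, float)
            z = np.empty_like(x)
            z_prev = x.mean() if target is None else target
            for i, xi in enumerate(x):
                z_prev = lam * xi + (1 - lam) * z_prev
                z[i] = z_prev
            return z

        daily_output = 100 + np.random.default_rng(0).normal(0, 0.8, size=250)  # placeholder % readings
        cl, lcl, ucl = individuals_chart_limits(daily_output)
        out_of_control = (daily_output < lcl) | (daily_output > ucl)
        smoothed = ewma(daily_output)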

  7. Analyzing the impacts of final demand changes on total output using input-output approach: The case of Japanese ICT sectors

    International Nuclear Information System (INIS)

    Zuhdi, Ubaidillah

    2014-01-01

    The purpose of this study is to analyze the impacts of final demand changes on the total output of Japanese Information and Communication Technologies (ICT) sectors in future time. This study employs one of the analysis tools of Input-Output (IO) analysis, the demand-pull IO quantity model, to achieve this purpose. Three final demand changes are used in this study, namely (1) export, (2) import, and (3) outside households consumption changes. This study focuses on the ''pure change'' condition, the condition in which final demand changes appear only in the analyzed sectors. The results show that export and outside households consumption changes give a positive impact, while the opposite impact is seen for the import change
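
    A minimal sketch of the demand-pull quantity model, in which a change in final demand is propagated through the Leontief inverse to a change in total output; the 3-sector coefficient matrix and demand vectors are invented for illustration:

        import numpy as np

        # A: technical coefficient matrix, A[i, j] = input of sector i per unit output of sector j
        A = np.array([
            [0.10, 0.05, 0.02],
            [0.20, 0.15, 0.10],
            [0.05, 0.10, 0.08],
        ])
        leontief_inverse = np.linalg.inv(np.eye(3) - A)

        # Demand-pull model: delta_x = (I - A)^-1 @ delta_f
        delta_f_export = np.array([50.0, 0.0, 0.0])    # export increase hitting only the first (ICT-like) sector
        delta_f_import = np.array([-20.0, 0.0, 0.0])   # import change entering final demand with a negative sign

        delta_x_export = leontief_inverse @ delta_f_export
        delta_x_import = leontief_inverse @ delta_f_import
        # delta_x_export is positive for all sectors; delta_x_import pulls total output down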

  8. Correlation between renewable energy source's energy output and load

    International Nuclear Information System (INIS)

    Ali, G.H.M.; El-Zeftawy, A.A.

    1996-01-01

    The problem common to all renewable energy sources (RESs) is the mismatch between their energy output and the load demand. In remote areas, this problem is generally solved by employing a small diesel generator or a storage battery. However, the storage battery is a major cost element of RESs, and a small diesel generator is unreliable and costly. Therefore, a technique has been introduced in this work to determine the correlation between the energy output of wind energy systems (WES) and isolated loads. A solar photovoltaic power system (PVS) and two energy storage facilities are used here for this correlation. The proposed technique also includes two models for optimizing the generation and costs of a WES accompanied by a PVS, a storage battery and water storage (reservoir) to accommodate an isolated load. The proposed technique is applied with dynamic programming to coordinate the energy output of a WES with a residential and pumping load in a remote area of Egypt. The results of this application reveal that minimization of both the capacity of the storage battery and the whole power system cost is obtained. 4 figs

  9. Current Comparative Table (CCT) automates customized searches of dynamic biological databases.

    Science.gov (United States)

    Landsteiner, Benjamin R; Olson, Michael R; Rutherford, Robert

    2005-07-01

    The Current Comparative Table (CCT) software program enables working biologists to automate customized bioinformatics searches, typically of remote sequence or HMM (hidden Markov model) databases. CCT currently supports BLAST, hmmpfam and other programs useful for gene and ortholog identification. The software is web based, has a BioPerl core and can be used remotely via a browser or locally on Mac OS X or Linux machines. CCT is particularly useful to scientists who study large sets of molecules in today's evolving information landscape because it color-codes all result files by age and highlights even tiny changes in sequence or annotation. By empowering non-bioinformaticians to automate custom searches and examine current results in context at a glance, CCT allows a remote database submission in the evening to influence the next morning's bench experiment. A demonstration of CCT is available at http://orb.public.stolaf.edu/CCTdemo and the open source software is freely available from http://sourceforge.net/projects/orb-cct.

  10. Automated driving and autonomous functions on road vehicles

    Science.gov (United States)

    Gordon, T. J.; Lidberg, M.

    2015-07-01

    In recent years, road vehicle automation has become an important and popular topic for research and development in both academic and industrial spheres. New developments have received extensive coverage in the popular press, and it may be said that the topic has captured the public imagination. Indeed, the topic has generated interest across a wide range of academic, industry and governmental communities, well beyond vehicle engineering; these include computer science, transportation, urban planning, legal, social science and psychology. While this follows a similar surge of interest - and subsequent hiatus - of Automated Highway Systems in the 1990s, the current level of interest is substantially greater, and current expectations are high. It is common to frame the new technologies under the banner of 'self-driving cars' - robotic systems potentially taking over the entire role of the human driver, a capability that does not fully exist at present. However, this single vision leads one to ignore the existing range of automated systems that are both feasible and useful. Recent developments are underpinned by substantial and long-term trends in 'computerisation' of the automobile, with developments in sensors, actuators and control technologies to spur the new developments in both industry and academia. In this paper, we review the evolution of the intelligent vehicle and the supporting technologies with a focus on the progress and key challenges for vehicle system dynamics. A number of relevant themes around driving automation are explored in this article, with special focus on those most relevant to the underlying vehicle system dynamics. One conclusion is that increased precision is needed in sensing and controlling vehicle motions, a trend that can mimic that of the aerospace industry, and similarly benefit from increased use of redundant by-wire actuators.

  11. Wireless Home Automation System using IoT

    Directory of Open Access Journals (Sweden)

    Alexandra MIHALACHE

    2017-01-01

    Full Text Available Nowadays, the chance of having an automated home is no longer a fancy luxury, but a reality accessible to a wide range of consumers, because smart home systems have replaced those that only automated the home in the past. More and more solutions based on IoT are being developed to transform homes into smart ones, but the problem is that the benefits of home automation are still not clear to everyone as they are not promoted enough, so we cannot talk about a broad mass of consumers already using integrated or DIY solutions to improve their lives. In this paper, I will present a home automation system using Arduino Uno integrated with relevant modules which are used to allow remote control of lights or fans, changes being made on the basis of different sensor data. The system is designed to be low cost and expandable, bringing accessibility, convenience and energy efficiency.

  12. Static and Dynamic Path Planning Using Incremental Heuristic Search

    OpenAIRE

    Khattab, Asem

    2018-01-01

    Path planning is an important component in any highly automated vehicle system. In this report, the general problem of path planning is considered first in partially known static environments where only static obstacles are present but the layout of the environment is changing as the agent acquires new information. Attention is then given to the problem of path planning in dynamic environments where there are moving obstacles in addition to the static ones. Specifically, a 2D car-like agent t...

  13. ROBUST CONTROL ALGORITHM FOR MULTIVARIABLE PLANTS WITH QUANTIZED OUTPUT

    Directory of Open Access Journals (Sweden)

    A. A. Margun

    2017-01-01

    Full Text Available The paper deals with a robust output control algorithm for multivariable plants under disturbances. The plant is described by a system of linear differential equations with known relative degrees. Plant parameters are unknown but belong to a known closed bounded set. The plant state vector is unmeasured, and the plant output is measured only via a static quantizer. The control algorithm is based on the high-gain feedback method. The developed controller provides exponential convergence of the tracking error to a bounded region. The region bounds depend on the quantizer parameters and the value of the external disturbances. Experimental validation of the proposed control algorithm is performed with the use of the Twin Rotor MIMO System laboratory bench. This bench is a helicopter-like model with two degrees of freedom (pitch and yaw). DC motors are used as actuators. The output signals are measured via optical encoders. A mathematical model of the laboratory bench is obtained. The proposed algorithm was compared with a proportional-integral-derivative controller under output quantization. The obtained results have confirmed the efficiency of the proposed controller.

  14. On the dynamics of aggregate output, electricity consumption and exports in Malaysia: Evidence from multivariate Granger causality tests

    International Nuclear Information System (INIS)

    Lean, Hooi Hooi; Smyth, Russell

    2010-01-01

    This paper employs annual data from 1971 to 2006 to examine the causal relationship between aggregate output, electricity consumption, exports, labor and capital in a multivariate model for Malaysia. We find that there is bidirectional Granger causality running between aggregate output and electricity consumption. The policy implication of this result is that Malaysia should adopt the dual strategy of increasing investment in electricity infrastructure and stepping up electricity conservation policies to reduce unnecessary wastage of electricity, in order to avoid the negative effect of reducing electricity consumption on aggregate output. We also find support for the export-led hypothesis which states Granger causality runs from exports to aggregate output. This result is consistent with Malaysia pursuing a successful export-orientated strategy. (author)
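
    A minimal sketch of a bivariate Granger causality test of the kind used in such studies, here between placeholder aggregate output and electricity consumption series using statsmodels; the lag order and the synthetic data are illustrative assumptions:

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(42)
        n = 36  # roughly the length of an annual 1971-2006 sample
        electricity = np.cumsum(rng.normal(0.03, 0.02, n))               # placeholder log electricity use
        output = 0.8 * np.roll(electricity, 1) + rng.normal(0, 0.02, n)  # placeholder log GDP, lagging electricity
        data = pd.DataFrame({"output": output, "electricity": electricity}).iloc[1:]

        # Null hypothesis: the second column does NOT Granger-cause the first column.
        results = grangercausalitytests(data[["output", "electricity"]], maxlag=2)
        # Swapping the columns tests causality in the other direction (bidirectionality check).
        results_reverse = grangercausalitytests(data[["electricity", "output"]], maxlag=2)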

  15. Nonlinear Dynamic Models in Advanced Life Support

    Science.gov (United States)

    Jones, Harry

    2002-01-01

    To facilitate analysis, ALS systems are often assumed to be linear and time invariant, but they usually have important nonlinear and dynamic aspects. Nonlinear dynamic behavior can be caused by time varying inputs, changes in system parameters, nonlinear system functions, closed loop feedback delays, and limits on buffer storage or processing rates. Dynamic models are usually cataloged according to the number of state variables. The simplest dynamic models are linear, using only integration, multiplication, addition, and subtraction of the state variables. A general linear model with only two state variables can produce all the possible dynamic behavior of linear systems with many state variables, including stability, oscillation, or exponential growth and decay. Linear systems can be described using mathematical analysis. Nonlinear dynamics can be fully explored only by computer simulations of models. Unexpected behavior is produced by simple models having only two or three state variables with simple mathematical relations between them. Closed loop feedback delays are a major source of system instability. Exceeding limits on buffer storage or processing rates forces systems to change operating mode. Different equilibrium points may be reached from different initial conditions. Instead of one stable equilibrium point, the system may have several equilibrium points, oscillate at different frequencies, or even behave chaotically, depending on the system inputs and initial conditions. The frequency spectrum of an output oscillation may contain harmonics and the sums and differences of input frequencies, but it may also contain a stable limit cycle oscillation not related to input frequencies. We must investigate the nonlinear dynamic aspects of advanced life support systems to understand and counter undesirable behavior.
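
    A minimal sketch of the kind of two-state nonlinear simulation the passage refers to, here a Van der Pol-style oscillator integrated with a simple Euler loop to show a stable limit cycle; the parameter values and step size are illustrative and not tied to any specific life support model:

        import numpy as np

        def simulate_two_state(mu=1.0, x0=0.1, v0=0.0, dt=0.01, steps=20000):
            """Forward-Euler integration of x'' - mu*(1 - x^2)*x' + x = 0 written as
            two first-order state equations; the trajectory settles onto a limit cycle."""
            x, v = x0, v0
            trajectory = np.empty((steps, 2))
            for k in range(steps):
                dx = v
                dv = mu * (1.0 - x * x) * v - x
                x, v = x + dt * dx, v + dt * dv
                trajectory[k] = (x, v)
            return trajectory

        traj = simulate_two_state()
        # After the transient, the amplitude stays near 2 regardless of the small initial condition,
        # i.e. the oscillation is a property of the system, not of any input frequency.
        late_amplitude = np.abs(traj[-5000:, 0]).max()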

  16. Acute bouts of wheel running decrease cocaine self-administration: Influence of exercise output.

    Science.gov (United States)

    Smith, Mark A; Fronk, Gaylen E; Zhang, Huailin; Magee, Charlotte P; Robinson, Andrea M

    Exercise is associated with lower rates of drug use in human populations and decreases drug self-administration in laboratory animals. Most of the existing literature examining the link between exercise and drug use has focused on chronic, long-term exercise, and very few studies have examined the link between exercise output (i.e., amount of exercise) and drug self-administration. The purpose of this study was to examine the effects of acute bouts of exercise on cocaine self-administration, and to determine whether these effects were dependent on exercise output and the time interval between exercise and drug self-administration. Female rats were trained to run in automated running wheels, implanted with intravenous catheters, and allowed to self-administer cocaine on a fixed ratio (FR1) schedule of reinforcement. Immediately prior to each test session, subjects engaged in acute bouts of exercise in which they ran for 0, 30, or 60min at 12m/min. Acute bouts of exercise before test sessions decreased cocaine self-administration in an output-dependent manner, with the greatest reduction in cocaine intake observed in the 60-min exercise condition. Exercise did not reduce cocaine self-administration when wheel running and test sessions were separated by 12h, and exercise did not reduce responding maintained by food or responding during a saline substitution test. These data indicate that acute bouts of exercise decrease cocaine self-administration in a time- and output-dependent manner. These results also add to a growing body of literature suggesting that physical activity may be an effective component of drug abuse treatment programs. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Automated pH Control of Nutrient Solution in a Hydroponic Plant Growth System

    Science.gov (United States)

    Smith, B.; Dogan, N.; Aglan, H.; Mortley, D.; Loretan, P.

    1998-01-01

    Over the years, NASA has played an important role in the development of automated nutrient delivery and monitoring systems for growing crops hydroponically for long-term space missions. One example is the set of systems used in the Biomass Production Chamber (BPC) at Kennedy Space Center (KSC). The current KSC monitoring system is based on an engineering workstation using standard analog/digital input/output hardware and custom written software. The monitoring system uses completely separate sensors to provide a check of control sensor accuracy and has the ability to graphically display and store data from past experiments so that they are available for data analysis [Fortson, 1992]. In many cases, growing systems have not been fitted with the kind of automated control systems used at KSC. The Center for Food and Environmental Systems for Human Exploration of Space (CFESH), located on the campus of Tuskegee University, has effectively grown sweetpotatoes and peanuts hydroponically for the past five years. However, it has adjusted the pH, electrical conductivity, and volume of the hydroponic nutrient solution only manually, at the times when the solution was to be replenished or changed out according to its protocol (e.g. one-week, two-week, or two-day cycle). The pH of the nutrient solution flowing through the channel is neither known nor controlled between update, change-out, or replenishment periods. Thus, the pH of the nutrient solution is not held at an optimum level over the span of the plant's growth cycle. To solve this dilemma, an automated system for the control and data logging of pH relative to sweetpotato production using the nutrient film technique (NFT) has been developed. This paper discusses a microprocessor-based system, which was designed to monitor, control, and record the pH of a nutrient solution used for growing sweetpotatoes using NFT.
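
    A minimal sketch of the control logic for such a system, written as a simple threshold loop; read_ph(), dose_acid(), dose_base(), and log_reading() are hypothetical placeholders for the hardware interface, and the set point and dead band are invented values:

        import time

        PH_SETPOINT = 5.8        # assumed target for the NFT nutrient solution
        DEADBAND = 0.15          # do nothing while the reading stays within setpoint +/- deadband
        DOSE_SECONDS = 2.0       # short pump pulse, followed by a mixing delay

        def control_ph(read_ph, dose_acid, dose_base, log_reading, period_s=60.0):
            """Simple bang-bang pH controller with a dead band and data logging."""
            while True:
                ph = read_ph()
                if ph > PH_SETPOINT + DEADBAND:
                    dose_acid(DOSE_SECONDS)      # pH too high: add acid
                elif ph < PH_SETPOINT - DEADBAND:
                    dose_base(DOSE_SECONDS)      # pH too low: add base
                log_reading(time.time(), ph)
                time.sleep(period_s)             # wait for the solution to mix before re-reading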

  18. Data for automated, high-throughput microscopy analysis of intracellular bacterial colonies using spot detection.

    Science.gov (United States)

    Ernstsen, Christina L; Login, Frédéric H; Jensen, Helene H; Nørregaard, Rikke; Møller-Jensen, Jakob; Nejsum, Lene N

    2017-10-01

    Quantification of intracellular bacterial colonies is useful in strategies directed against bacterial attachment, subsequent cellular invasion and intracellular proliferation. An automated, high-throughput microscopy-method was established to quantify the number and size of intracellular bacterial colonies in infected host cells (Detection and quantification of intracellular bacterial colonies by automated, high-throughput microscopy, Ernstsen et al., 2017 [1]). The infected cells were imaged with a 10× objective and number of intracellular bacterial colonies, their size distribution and the number of cell nuclei were automatically quantified using a spot detection-tool. The spot detection-output was exported to Excel, where data analysis was performed. In this article, micrographs and spot detection data are made available to facilitate implementation of the method.

  19. Automating the radiographic NDT process

    International Nuclear Information System (INIS)

    Aman, J.K.

    1988-01-01

    Automation, the removal of the human element from inspection, has not been generally applied to film radiographic NDT. The justification for automation is not only productivity but also reliability of results. Film remains in the automated system of the future because of its extremely high image content, approximately 3x10^9 bits per 14x17 (inch) film, equivalent to about 2200 computer floppy disks. Parts handling systems and robotics, already applied in manufacturing and some NDT modalities, should now be applied to film radiographic NDT systems. Automatic film handling can be achieved with the daylight NDT film handling system. Automatic film processing is becoming the standard in industry and can be coupled to the daylight system. Robots offer the opportunity to fully automate the exposure step. Finally, computer aided interpretation appears on the horizon. A unit which laser-scans a 14x27 (inch) film in 6-8 seconds can digitize the film information for further manipulation and possible automatic interrogation (computer aided interpretation). The system, called FDRS (for Film Digital Radiography System), is moving toward 50 micron (16 lines/mm) resolution. This is believed to meet the majority of image content needs. (Author). 4 refs.; 21 figs

  20. Automated analysis in generic groups

    Science.gov (United States)

    Fagerholm, Edvard

    This thesis studies automated methods for analyzing hardness assumptions in generic group models, following ideas of symbolic cryptography. We define a broad class of generic and symbolic group models for different settings---symmetric or asymmetric (leveled) k-linear groups --- and prove ''computational soundness'' theorems for the symbolic models. Based on this result, we formulate a master theorem that relates the hardness of an assumption to solving problems in polynomial algebra. We systematically analyze these problems identifying different classes of assumptions and obtain decidability and undecidability results. Then, we develop automated procedures for verifying the conditions of our master theorems, and thus the validity of hardness assumptions in generic group models. The concrete outcome is an automated tool, the Generic Group Analyzer, which takes as input the statement of an assumption, and outputs either a proof of its generic hardness or shows an algebraic attack against the assumption. Structure-preserving signatures are signature schemes defined over bilinear groups in which messages, public keys and signatures are group elements, and the verification algorithm consists of evaluating ''pairing-product equations''. Recent work on structure-preserving signatures studies optimality of these schemes in terms of the number of group elements needed in the verification key and the signature, and the number of pairing-product equations in the verification algorithm. While the size of keys and signatures is crucial for many applications, another aspect of performance is the time it takes to verify a signature. The most expensive operation during verification is the computation of pairings. However, the concrete number of pairings is not captured by the number of pairing-product equations considered in earlier work. We consider the question of what is the minimal number of pairing computations needed to verify structure-preserving signatures. We build an

  1. Turbulent Output-Based Anisotropic Adaptation

    Science.gov (United States)

    Park, Michael A.; Carlson, Jan-Renee

    2010-01-01

    Controlling discretization error is a remaining challenge for computational fluid dynamics simulation. Grid adaptation is applied to reduce estimated discretization error in drag or pressure integral output functions. To enable application to high Reynolds number, O(10^7), turbulent flows, a hybrid approach is utilized that freezes the near-wall boundary layer grids and adapts the grid away from the no-slip boundaries. The hybrid approach is not applicable to problems with under-resolved initial boundary layer grids, but is a powerful technique for problems with important off-body anisotropic features. Supersonic nozzle plume, turbulent flat plate, and shock-boundary layer interaction examples are presented with comparisons to experimental measurements of pressure and velocity. Adapted grids are produced that resolve off-body features in locations that are not known a priori.

  2. Nuclide identifier and grat data reader application for ORIGEN output file

    International Nuclear Information System (INIS)

    Arif Isnaeni

    2011-01-01

    ORIGEN is a one-group depletion and radioactive decay computer code developed at the Oak Ridge National Laboratory (ORNL). ORIGEN performs one-group neutronics calculations providing various nuclear material characteristics (the buildup, decay and processing of radioactive materials). ORIGEN output is a text-based file that contains only numbers, grouped per nuclide as nuclide identifier and grat data. This application was created to facilitate the collection of nuclide identifier and grat data; it also has a function to acquire mass number data and calculate the mass (in grams) of each nuclide. Output from this application can be used as input data for computer codes for neutronic calculations such as MCNP. (author)
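
    A minimal sketch of the kind of extraction step such a reader performs; the regular expression and the assumed layout (a nuclide identifier followed by a numeric value on each data line) are illustrative assumptions, not the actual ORIGEN output format:

        import re

        # Assumed line shape, e.g. "  PU239   1.2345E+01" (identifier, then a value) - illustrative only.
        LINE_PATTERN = re.compile(r"^\s*([A-Z]{1,2}\d{1,3}M?)\s+([0-9.]+E[+-]\d+)\s*$", re.IGNORECASE)

        def read_nuclide_table(path):
            """Collect (nuclide identifier, value) pairs from a text output file."""
            table = {}
            with open(path) as fh:
                for line in fh:
                    match = LINE_PATTERN.match(line)
                    if match:
                        nuclide, value = match.group(1).upper(), float(match.group(2))
                        table[nuclide] = table.get(nuclide, 0.0) + value
            return table

        # inventory = read_nuclide_table("origen_output.txt")  # hypothetical file name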

  3. Autonomy and Automation

    Science.gov (United States)

    Shively, Jay

    2017-01-01

    A significant level of debate and confusion has surrounded the meaning of the terms autonomy and automation. Automation is a multi-dimensional concept, and we propose that Remotely Piloted Aircraft Systems (RPAS) automation should be described with reference to the specific system and task that has been automated, the context in which the automation functions, and other relevant dimensions. In this paper, we present definitions of automation, pilot in the loop, pilot on the loop and pilot out of the loop. We further propose that in future, the International Civil Aviation Organization (ICAO) RPAS Panel avoids the use of the terms autonomy and autonomous when referring to automated systems on board RPA. Work Group 7 proposes to develop, in consultation with other workgroups, a taxonomy of Levels of Automation for RPAS.

  4. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    Science.gov (United States)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the components of fuzzy output entropy, a decomposition method for fuzzy output entropy is first presented. After the decomposition, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropies contributed by the fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of the uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only rank the importance of the fuzzy inputs but also reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.
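
    A sketch of the decomposition structure described above in LaTeX form; the notation (H for fuzzy entropy, Y for the output, X_i for the fuzzy inputs, S_i for a normalized index) is assumed here for illustration and is not taken verbatim from the paper:

        % Total fuzzy output entropy split into per-input contributions,
        % with a normalized sensitivity index S_i for each fuzzy input X_i.
        H(Y) = \sum_{i=1}^{n} H_{X_i}(Y), \qquad
        S_i = \frac{H_{X_i}(Y)}{H(Y)}, \qquad
        \sum_{i=1}^{n} S_i = 1 .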

  5. The Value of Risk : Measuring the Service Output of U.S. Commercial Banks

    NARCIS (Netherlands)

    Basu, Susanto; Inklaar, Robert; Wang, J. Christina

    2008-01-01

    Rather than charging direct fees, banks often charge implicitly for their services via interest spreads. As a result, much of bank output has to be estimated indirectly. In contrast to current statistical practice, dynamic optimizing models of banks argue that compensation for bearing systematic

  6. Subjective and objective assessment of manual, supported, and automated vehicle control

    NARCIS (Netherlands)

    Vos, A.P. de; Godthelp, J.; Käppler, W.D.

    1998-01-01

    In this paper subjective and objective assessments of vehicle control are illustrated by means of experiments concerning manipulation of vehicle dynamics, driver support, and automated driving. Subjective ratings are discussed in relation to objective performance measures.

  7. The output, incomes and assets-capital relations in individual farms

    Directory of Open Access Journals (Sweden)

    Roma Ryś-Jurek

    2009-01-01

    Full Text Available In this article an attempt was made to analyse the output, incomes and other components of assets, as well as the sources that financed them, in Polish individual farms in comparison with farms from other EU countries. Special emphasis was put on examining the interrelations between income, output and stocks observed within individual farms. The research was based on the FADN database, which included basic information about average individual farms in the years 2004-2006. The research showed, among other things, that the average output and family farm income were three times lower in Poland than the EU average, and that the increase in income was possible only thanks to subsidies from the Union. According to the regression models, in Poland stocks, crop output and livestock output had a positive influence on the growth of family farm income, while in the EU crop and livestock production had a positive influence and stocks had a negative influence on income growth.

  8. Towards life-long mobility: Accessible transportation with automation

    OpenAIRE

    Jeon, M; Politis, Ioannis; Shladover, SE; Sutter, C; Terken, JMB; Poppinga, B

    2016-01-01

    Despite the prevalent discussions on automated vehicles, little research has been conducted with a focus on inclusiveness of traditionally excluded populations from driving. Even though we may envision a future where everyone can drive with perfect automation, the problem will not be that simple. As with any other problem domains, we need to scrutinize all the design considerations - not only each population's characteristics (capabilities and limitations), but also the entire system, technol...

  9. Concentration dynamics in lakes and reservoirs. Studies using radioactive tracers

    International Nuclear Information System (INIS)

    Gilath, Ch.

    1983-01-01

    The use of radioactive tracers for the investigation of concentration dynamics of inert soluble matter in lakes and reservoirs is reviewed. Shallow and deep stratified lakes are considered. The mechanism of mixing in lakes, flow pattern and input-output response are discussed. The methodology of the use of radioactive tracers for concentration dynamic studies is described. Examples of various investigations are reviewed. The dynamics of shallow lakes can be found and expressed in terms of transfer functions, axial dispersion models, residence time distributions and sometimes only semiquantitative information about the flow pattern. The dynamics of deep, stratified lakes is more complex and difficult to investigate with tracers. Flow pattern, horizontal and vertical eddy diffusivities, and mass transfer between the hypolimnion and epilimnion are tools used for describing these dynamics. (author)

  10. GDP Growth, Potential Output, and Output Gaps in Mexico

    OpenAIRE

    Ebrima A Faal

    2005-01-01

    This paper analyzes the sources of Mexico's economic growth since the 1960s and compares various decompositions of historical growth into its trend and cyclical components. The role of the implied output gaps in the inflationary process is then assessed. Looking ahead, the paper presents medium-term paths for GDP based on alternative assumptions for productivity growth rates. The results indicate that the most important factor underlying the slowdown in output growth was a decline in trend to...

  11. Automated Analysis of Accountability

    DEFF Research Database (Denmark)

    Bruni, Alessandro; Giustolisi, Rosario; Schürmann, Carsten

    2017-01-01

    that the system can detect the misbehaving parties who caused that failure. Accountability is an intuitively stronger property than verifiability as the latter only rests on the possibility of detecting the failure of a goal. A plethora of accountability and verifiability definitions have been proposed...... in the literature. Those definitions are either very specific to the protocols in question, hence not applicable in other scenarios, or too general and widely applicable but requiring complicated and hard to follow manual proofs. In this paper, we advance formal definitions of verifiability and accountability...... that are amenable to automated verification. Our definitions are general enough to be applied to different classes of protocols and different automated security verification tools. Furthermore, we point out formally the relation between verifiability and accountability. We validate our definitions...

  12. The standard laboratory module approach to automation of the chemical laboratory

    International Nuclear Information System (INIS)

    Hollen, R.M.; Erkkila, T.H.

    1993-01-01

    Automation of the technology and practice of the environmental laboratory has not been as rapid or complete as one might expect. Confined to autosamplers and limited robotic systems, our ability to apply production concepts to environmental analysis is not great. With the impending remediation of hazardous waste sites in the US, only the application of production chemistry techniques will even begin to provide those responsible with the necessary knowledge to accomplish the cleanup expeditiously and safely. Tightening regulatory requirements have already mandated staggering increases in sampling and characterization needs, with the future only guaranteeing greater demands. The Contaminant Analysis Automation Program has been initiated by the government to address these current and future characterization needs through application of a new robotic paradigm for analytical chemistry. By using standardized modular instruments, named Standard Laboratory Modules, flexible automation systems can rapidly be configured to apply production techniques to the nation's environmental problems at the site

  13. Quality of radiomic features in glioblastoma multiforme: Impact of semi-automated tumor segmentation software

    International Nuclear Information System (INIS)

    Lee, Myung Eun; Kim, Jong Hyo; Woo, Bo Yeong; Ko, Micheal D.; Jamshidi, Neema

    2017-01-01

    The purpose of this study was to evaluate the reliability and quality of radiomic features in glioblastoma multiforme (GBM) derived from tumor volumes obtained with semi-automated tumor segmentation software. MR images of 45 GBM patients (29 males, 16 females) were downloaded from The Cancer Imaging Archive, in which post-contrast T1-weighted imaging and fluid-attenuated inversion recovery MR sequences were used. Two raters independently segmented the tumors using two semi-automated segmentation tools (TumorPrism3D and 3D Slicer). Regions of interest corresponding to contrast-enhancing lesion, necrotic portions, and non-enhancing T2 high signal intensity component were segmented for each tumor. A total of 180 imaging features were extracted, and their quality was evaluated in terms of stability, normalized dynamic range (NDR), and redundancy, using intra-class correlation coefficients, cluster consensus, and Rand Statistic. Our study results showed that most of the radiomic features in GBM were highly stable. Over 90% of 180 features showed good stability (intra-class correlation coefficient [ICC] ≥ 0.8), whereas only 7 features were of poor stability. Most features showed good NDR (≥ 1), while above 35% of the texture features showed poor NDR (< 1). Features were shown to cluster into only 5 groups, indicating that they were highly redundant. The use of semi-automated software tools provided sufficiently reliable tumor segmentation and feature stability, thus helping to overcome the inherent inter-rater and intra-rater variability of user intervention. However, certain aspects of feature quality, including NDR and redundancy, need to be assessed for determination of representative signature features before further development of radiomics

  14. Smartnotebook: A semi-automated approach to protein sequential NMR resonance assignments

    International Nuclear Information System (INIS)

    Slupsky, Carolyn M.; Boyko, Robert F.; Booth, Valerie K.; Sykes, Brian D.

    2003-01-01

    Complete and accurate NMR spectral assignment is a prerequisite for high-throughput automated structure determination of biological macromolecules. However, completely automated assignment procedures generally encounter difficulties for all but the most ideal data sets. Sources of these problems include difficulty in resolving correlations in crowded spectral regions, as well as complications arising from dynamics, such as weak or missing peaks, or atoms exhibiting more than one peak due to exchange phenomena. Smartnotebook is a semi-automated assignment software package designed to combine the best features of the automated and manual approaches. The software finds and displays potential connections between residues, while the spectroscopist makes decisions on which connection is correct, allowing rapid and robust assignment. In addition, smartnotebook helps the user fit chains of connected residues to the primary sequence of the protein by comparing the experimentally determined chemical shifts with expected shifts derived from a chemical shift database, while providing bookkeeping throughout the assignment procedure

  15. Parameters Investigation of Mathematical Model of Productivity for Automated Line with Availability by DMAIC Methodology

    Directory of Open Access Journals (Sweden)

    Tan Chan Sin

    2014-01-01

    Full Text Available Automated lines are widely applied in industry, especially for mass production of products with little variety. Productivity is one of the important criteria for an automated line, as well as for industry in general, because it directly reflects output and profit. Productivity must be forecast accurately in order to meet customer demand, and the forecast is calculated using a mathematical model. A mathematical model of productivity with availability for automated lines has been introduced to express productivity in terms of a single level of reliability for stations and mechanisms. Since this mathematical model of productivity with availability cannot match the actual productivity closely enough, owing to parameters that it does not consider, the model needs to be enhanced to include the loss parameters that are missing from the current model. This paper presents the productivity loss parameters investigated by using the DMAIC (Define, Measure, Analyze, Improve, and Control) concept and the PACE Prioritization Matrix (Priority, Action, Consider, and Eliminate). The investigated parameters are important for further improvement of the mathematical model of productivity with availability, in order to develop a robust mathematical model of productivity in automated lines.

  16. The value of risk : measuring the service output of U.S. commercial banks

    NARCIS (Netherlands)

    Basu, Susanto; Inklaar, Robert; Wang, J. Christina

    Banks often charge implicitly for their services via interest spreads, instead of explicit fees. Much of bank output thus has to be estimated indirectly. In contrast to current statistical practice, dynamic optimizing models of banks argue that compensation for bearing systematic risk is not part of

  17. The contaminant analysis automation robot implementation for the automated laboratory

    International Nuclear Information System (INIS)

    Younkin, J.R.; Igou, R.E.; Urenda, T.D.

    1995-01-01

    The Contaminant Analysis Automation (CAA) project defines the automated laboratory as a series of standard laboratory modules (SLM) serviced by a robotic standard support module (SSM). These SLMs are designed to allow plug-and-play integration into automated systems that perform standard analysis methods (SAM). While the SLMs are autonomous in the execution of their particular chemical processing task, the SAM concept relies on a high-level task sequence controller (TSC) to coordinate the robotic delivery of materials requisite for SLM operations, initiate an SLM operation with the chemical-method-dependent operating parameters, and coordinate the robotic removal of materials from the SLM when its operation is complete. A standard set of commands and events has been established to ready SLMs for transport operations as well as for performing their processing tasks; the Supervisor and Subsystems (GENISAS) software governs events from the SLMs and robot. The Intelligent System Operating Environment (ISOE) enables the inter-process communications used by GENISAS. CAA selected the Hewlett-Packard Optimized Robot for Chemical Analysis (ORCA) and its associated Windows-based Methods Development Software (MDS) as the robot SSM. The MDS software is used to teach the robot each SLM position and required material port motions. To allow the TSC to command these SLM motions, a hardware and software implementation was required that allowed message passing between different operating systems. This implementation involved the use of a VMEbus (VME) rack with a Force CPU-30 computer running VxWorks, a real-time multitasking operating system, and a RadiSys PC-compatible VME computer running MDS. A GENISAS server on the Force computer accepts a transport command from the TSC, a GENISAS supervisor, over Ethernet and notifies software on the RadiSys PC of the pending command through VMEbus shared memory. The command is then delivered to the MDS robot control software using a Windows Dynamic Data Exchange conversation

  18. Output variability caused by random seeds in a multi-agent transport simulation model

    DEFF Research Database (Denmark)

    Paulsen, Mads; Rasmussen, Thomas Kjær; Nielsen, Otto Anker

    2018-01-01

    Dynamic transport simulators are intended to support decision makers in transport-related issues, and as such it is valuable that the random variability of their outputs is as small as possible. In this study we analyse the output variability caused by random seeds of a multi-agent transport simulator (MATSim) when applied to a case study of Santiago de Chile. Results based on 100 different random seeds show that the relative accuracies of estimated link loads tend to increase with link load, but that relative errors of up to 10 % do occur even for links with large volumes. Although...

  19. Development of an automated core model for nuclear reactors

    International Nuclear Information System (INIS)

    Mosteller, R.D.

    1998-01-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of this project was to develop an automated package of computer codes that can model the steady-state behavior of nuclear-reactor cores of various designs. As an added benefit, data produced for steady-state analysis also can be used as input to the TRAC transient-analysis code for subsequent safety analysis of the reactor at any point in its operating lifetime. The basic capability to perform steady-state reactor-core analysis already existed in the combination of the HELIOS lattice-physics code and the NESTLE advanced nodal code. In this project, the automated package was completed by (1) obtaining cross-section libraries for HELIOS, (2) validating HELIOS by comparing its predictions to results from critical experiments and from the MCNP Monte Carlo code, (3) validating NESTLE by comparing its predictions to results from numerical benchmarks and to measured data from operating reactors, and (4) developing a linkage code to transform HELIOS output into NESTLE input

  20. Light output from six battery operated dental curing lights

    Energy Technology Data Exchange (ETDEWEB)

    Shimokawa, Carlos Alberto Kenji, E-mail: carlos.shimokawa@usp.br [University of São Paulo, School of Dentistry, Restorative Dentistry, Avenida Professor Lineu Prestes, 2227, 05508-000, São Paulo, São Paulo (Brazil); Dalhousie University, Faculty of Dentistry, Dental Clinical Sciences, 5981 University Avenue, B3H 4R2, Halifax, Nova Scotia (Canada); Turbino, Míriam Lacalle, E-mail: miturbin@usp.br [University of São Paulo, School of Dentistry, Restorative Dentistry, Avenida Professor Lineu Prestes, 2227, 05508-000, São Paulo, São Paulo (Brazil); Harlow, Jessie Eudora, E-mail: jessie.harlow@dal.ca [Dalhousie University, Faculty of Dentistry, Dental Clinical Sciences, 5981 University Avenue, B3H 4R2, Halifax, Nova Scotia (Canada); Price, Hannah Louise, E-mail: hannlprice@gmail.com [Dalhousie University, Faculty of Dentistry, Dental Clinical Sciences, 5981 University Avenue, B3H 4R2, Halifax, Nova Scotia (Canada); Price, Richard Bengt, E-mail: richard.price@dal.ca [Dalhousie University, School of Biomedical Engineering and Faculty of Dentistry, 5981 University Avenue, B3H 4R2, Halifax, Nova Scotia (Canada)

    2016-12-01

    Light Curing Units (LCUs) are used daily in almost every dental office to photocure resins, but because the light is so bright, the user is unable to tell visually if there are any differences between different LCUs. This study evaluated the light output from six dental LCUs: Elipar Deep Cure-S (3M ESPE), Bluephase G2 (Ivoclar Vivadent), Translux 2Wave (Heraeus Kulzer), Optilight Prime (Gnatus), Slim Blast (First Medica) and Led.B (Guilin Woodpecker) with a fully charged battery, after 50, and again after 100, 20 second light exposures. For each situation, the radiant power was measured 10 times with a laboratory-grade power meter. Then, the emission spectrum was measured using a fiber-optic spectrometer followed by an analysis of the light beam profile. It was found there were significant differences in the LCU power and the irradiance values between the LCUs (p < 0.01). The Optilight Prime and Slim Blast LCUs showed a significant reduction in light output after 50 and 100 exposures, while Bluephase G2 exhibited a significant reduction only after 100 exposures (p < 0.01). The Bluephase G2 and Translux 2Wave delivered an emission spectrum that had two distinct wavelength emission peaks. Only the Elipar Deep Cure-S and Bluephase G2 LCUs displayed homogeneous light beam profiles; the other LCUs exhibited highly non-homogeneous light beam profiles. It was concluded that contemporary LCUs could have very different light output characteristics. Both manufacturers and researchers should provide more information about the light output from LCUs. - Highlights: • The six LCUs delivered significantly different light output characteristics. • The use of a single irradiance value does not adequately describe the light output from a curing light. • Small differences in the tip area, or how it is defined, will have a large effect on the calculated irradiance. • In some cases there were large portions of the light tip that emitted less than 400 mW/cm². • The radiant

  1. Light output from six battery operated dental curing lights

    International Nuclear Information System (INIS)

    Shimokawa, Carlos Alberto Kenji; Turbino, Míriam Lacalle; Harlow, Jessie Eudora; Price, Hannah Louise; Price, Richard Bengt

    2016-01-01

    Light Curing Units (LCUs) are used daily in almost every dental office to photocure resins, but because the light is so bright, the user is unable to tell visually if there are any differences between different LCUs. This study evaluated the light output from six dental LCUs: Elipar Deep Cure-S (3M ESPE), Bluephase G2 (Ivoclar Vivadent), Translux 2Wave (Heraeus Kulzer), Optilight Prime (Gnatus), Slim Blast (First Medica) and Led.B (Guilin Woodpecker) with a fully charged battery, after 50, and again after 100, 20 second light exposures. For each situation, the radiant power was measured 10 times with a laboratory-grade power meter. Then, the emission spectrum was measured using a fiber-optic spectrometer followed by an analysis of the light beam profile. It was found there were significant differences in the LCU power and the irradiance values between the LCUs (p < 0.01). The Optilight Prime and Slim Blast LCUs showed a significant reduction in light output after 50 and 100 exposures, while Bluephase G2 exhibited a significant reduction only after 100 exposures (p < 0.01). The Bluephase G2 and Translux 2Wave delivered an emission spectrum that had two distinct wavelength emission peaks. Only the Elipar Deep Cure-S and Bluephase G2 LCUs displayed homogeneous light beam profiles; the other LCUs exhibited highly non-homogeneous light beam profiles. It was concluded that contemporary LCUs could have very different light output characteristics. Both manufacturers and researchers should provide more information about the light output from LCUs. - Highlights: • The six LCUs delivered significantly different light output characteristics. • The use of a single irradiance value does not adequately describe the light output from a curing light. • Small differences in the tip area, or how it is defined, will have a large effect on the calculated irradiance. • In some cases there were large portions of the light tip that emitted less than 400 mW/cm². • The radiant

  2. Engineering Mathematical Analysis Method for Productivity Rate in Linear Arrangement Serial Structure Automated Flow Assembly Line

    Directory of Open Access Journals (Sweden)

    Tan Chan Sin

    2015-01-01

    Full Text Available Productivity rate (Q), or production rate, is one of the important indicator criteria for industrial engineers to improve the system and the finished-goods output of a production or assembly line. Mathematical and statistical analysis methods need to be applied to the productivity rate to give a clear overview of the failure factors and of possible improvements within the production line, especially for an automated flow line, since it is complicated. A mathematical model of productivity rate for a linear-arrangement, serial-structure automated flow line with different failure rates and bottleneck machining time parameters becomes the basic model for this productivity analysis. This paper presents an engineering mathematical analysis method applied in an automotive company in Malaysia that operates an automated flow assembly line within its final assembly line to produce motorcycles. The DCAS engineering and mathematical analysis method, which consists of four stages known as data collection, calculation and comparison, analysis, and sustainable improvement, is used to analyze productivity in the automated flow assembly line based on the particular mathematical model. The variety of failure rates that cause loss of productivity and the bottleneck machining time are shown specifically in mathematical figures, and a sustainable solution for productivity improvement of this final-assembly automated flow line is presented.

  3. Plant automation-application to SBWR project

    International Nuclear Information System (INIS)

    Rodriguez Rodriguez, C.

    1995-01-01

    In accordance with the requirements set out in the URD (Utility Requirements Document) issued by the EPRI (Electrical Power Research Institute), the design of new reactors, whether evolutionary or passive, shall take into account the systematic automation of functions relating to normal plant operation. The objectives established are to: simplify operator-performed tasks; reduce the risk of operator error by considering human factors in the allocation of tasks; improve man-machine reliability; and increase the availability of the plant. In previous designs, automation has only been considered from the point of view of compliance with regulatory requirements for safety-related systems, or in isolated cases, as a method of protecting the investment where there is a risk of damage to main equipment. The use of digital technology has prevented the systematic pursuit of such objectives in the design of automated systems for processes associated with normal plant operation (startup, load follow, normal shutdown, etc.) from being excessively complex and therefore costly to undertake. This paper describes how the automation of the aforementioned normal plant operation activities has been approached in General Electric's SBWR (Simplified Boiling Water Reactor) design. (Author)

  4. Large scale oil lease automation and electronic custody transfer

    International Nuclear Information System (INIS)

    Price, C.R.; Elmer, D.C.

    1995-01-01

    Typically, oil field production operations have only been automated at fields with long term production profiles and enhanced recovery. The automation generally consists of monitoring and control at the wellhead and centralized facilities. However, Union Pacific Resources Co. (UPRC) has successfully implemented a large scale automation program for rapid-decline primary recovery Austin Chalk wells where purchasers buy and transport oil from each individual wellsite. This project has resulted in two significant benefits. First, operators are using the system to re-engineer their work processes. Second, an inter-company team created a new electronic custody transfer method. This paper will describe: the progression of the company's automation objectives in the area; the field operator's interaction with the system, and the related benefits; the research and development of the new electronic custody transfer method

  5. Carbon emissions, energy consumption and output: A threshold analysis on the causal dynamics in emerging African economies

    International Nuclear Information System (INIS)

    Mensah, Justice Tei

    2014-01-01

    Following the recent global economic downturn, attention has gradually shifted towards emerging economies, which have experienced robust growth amidst sluggish growth of the world economy. A significant number of these emerging economies are in Africa. Rising growth in these economies is associated with surging demand for energy to propel the engines of growth, with direct implications for emissions into the atmosphere. Further, these economies are constantly being shaped by a series of structural reforms with direct and indirect effects on growth, demand for energy, etc. To this end, this paper examines the causal dynamics among energy use, real GDP and CO2 emissions in the presence of regime shifts in six emerging African economies using the Gregory and Hansen (1996a, J. Econ. 70, 99–126) threshold cointegration and the Toda and Yamamoto (1995, J. Econometrics 66, 225–250) Granger causality techniques. Results confirm the presence of regime shift effects in the long-run inter-linkages among energy use, real GDP and CO2 emissions in the countries considered, thus indicating that structural changes have both economic and environmental effects. Hence, integration of energy and environmental policies into development plans is imperative towards attaining sustainable growth and development. - Highlights: • The paper examines the causal dynamics among output, energy demand and carbon emissions in the presence of regime shifts. • Regime shifts have significant effects on the nexus among energy use, real GDP and CO2 emissions. • Results suggest that structural changes in selected countries have both economic and environmental effects. • Integration of energy and environmental policies into development plans is desirable

  6. Rules-based analysis with JBoss Drools: adding intelligence to automation

    International Nuclear Information System (INIS)

    Ley, E. de; Jacobs, D.

    2012-01-01

    Rule engines are specialized software systems for applying conditional actions (if/then rules) on data. They are also known as 'production rule systems'. Rule engines are less well known as a software technology than the traditional procedural, object-oriented, scripting or dynamic development languages. This is a pity, as their usage may offer an important enrichment to a development toolbox. JBoss Drools is an open-source rules engine that can easily be embedded in any Java application. Through an integration in our Passerelle process automation suite, we have been able to provide advanced solutions for intelligent process automation, complex event processing, system monitoring and alarming, automated repair, etc. This platform has been proven for many years as an automated diagnosis and repair engine for Belgium's largest telecom provider, and it is being piloted at Synchrotron Soleil for device monitoring and alarming. After an introduction to rules engines in general and JBoss Drools in particular, we will present its integration in a solution platform, some important principles and a practical use case. (authors)
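
    To make the if/then idea concrete, here is a minimal production-rule sketch in Python; it is not Drools and does not reflect the Passerelle integration described above, just an illustration of conditional actions firing against facts (the fact fields and thresholds are invented for the example).

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # "if" part, evaluated against a fact
    action: Callable[[dict], None]      # "then" part, fired when the condition holds

def run_rules(rules: List[Rule], facts: List[dict]) -> None:
    """Very small forward-chaining pass: apply every matching rule to every fact."""
    for fact in facts:
        for rule in rules:
            if rule.condition(fact):
                rule.action(fact)

# Hypothetical monitoring facts and rules (names and thresholds are illustrative)
rules = [
    Rule("overheating",
         lambda f: f.get("temperature_C", 0) > 80,
         lambda f: print(f"ALARM: {f['device']} temperature {f['temperature_C']} C")),
    Rule("link-down",
         lambda f: f.get("status") == "down",
         lambda f: print(f"REPAIR: restart interface on {f['device']}")),
]

facts = [
    {"device": "pump-3", "temperature_C": 92, "status": "up"},
    {"device": "router-1", "temperature_C": 40, "status": "down"},
]

run_rules(rules, facts)
```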

  7. NIF ICCS Test Controller for Automated and Manual Testing

    International Nuclear Information System (INIS)

    Zielinski, J S

    2007-01-01

    The National Ignition Facility (NIF) Integrated Computer Control System (ICCS) is a large (1.5 MSLOC), hierarchical, distributed system that controls all aspects of the NIF laser [1]. The ICCS team delivers software updates to the NIF facility throughout the year to support shot operations and commissioning activities. In 2006, there were 48 releases of ICCS: 29 full releases, 19 patches. To ensure the quality of each delivery, thousands of manual and automated tests are performed using the ICCS Test Controller test infrastructure. The TestController system provides test inventory management, test planning, automated test execution and manual test logging, release testing summaries and test results search, all through a web browser interface. Automated tests include command-line-based framework server tests and Graphical User Interface (GUI)-based Java tests. Manual tests are presented as a checklist-style web form to be completed by the tester. The results of all tests, automated and manual, are kept in a common repository that provides data to dynamic status reports. As part of the 3-stage ICCS release testing strategy, the TestController system helps plan, evaluate and track the readiness of each release to the NIF facility

  8. Semiconducting double-dot exchange-only qubit dynamics in the presence of magnetic and charge noises

    Science.gov (United States)

    Ferraro, E.; Fanciulli, M.; De Michielis, M.

    2018-06-01

    The effects of magnetic and charge noises on the dynamical evolution of the double-dot exchange-only qubit (DEOQ) are theoretically investigated. The DEOQ, consisting of three electrons arranged in an electrostatically defined double quantum dot, deserves special interest in quantum computation applications. Its advantages are in terms of fabrication, control and manipulation in view of implementation of fast single- and two-qubit operations through only electrical tuning. The presence of the environmental noise due to nuclear spins and charge traps, in addition to fluctuations in the applied magnetic field and charge fluctuations on the electrostatic gates adopted to confine the electrons, is taken into account by including random magnetic field and random coupling terms in the Hamiltonian. The behavior of the return probability as a function of time for initial conditions of interest is presented. Moreover, through an envelope-fitting procedure on the return probabilities, coherence times are extracted when model parameters take values achievable experimentally in semiconducting devices.

  9. Home Automation

    OpenAIRE

    Ahmed, Zeeshan

    2010-01-01

    In this paper I briefly discuss the importance of home automation systems. Going into the details, I briefly present a designed and implemented real-time, software- and hardware-oriented house automation research project, capable of automating a house's electricity and providing a security system to detect the presence of unexpected behavior.

  10. Automated quantification of proliferation with automated hot-spot selection in phosphohistone H3/MART1 dual-stained stage I/II melanoma.

    Science.gov (United States)

    Nielsen, Patricia Switten; Riber-Hansen, Rikke; Schmidt, Henrik; Steiniche, Torben

    2016-04-09

    Staging of melanoma includes quantification of a proliferation index, i.e., presumed melanocytic mitoses of H&E stains are counted manually in hot spots. Yet, its reproducibility and prognostic impact increase with immunohistochemical dual staining for phosphohistone H3 (PHH3) and MART1, which also may enable fully automated quantification by image analysis. To ensure manageable workloads and repeatable measurements in modern pathology, the study aimed to present an automated quantification of proliferation with automated hot-spot selection in PHH3/MART1-stained melanomas. Formalin-fixed, paraffin-embedded tissue from 153 consecutive stage I/II melanoma patients was immunohistochemically dual-stained for PHH3 and MART1. Whole slide images were captured, and the number of PHH3/MART1-positive cells was manually and automatically counted in the global tumor area and in a manually and automatically selected hot spot, i.e., a fixed 1-mm² square. Bland-Altman plots and hypothesis tests compared manual and automated procedures, and the Cox proportional hazards model established their prognostic impact. The mean difference between manual and automated global counts was 2.9 cells/mm² (P = 0.0071) and 0.23 cells per hot spot (P = 0.96) for automated counts in manually and automatically selected hot spots. In 77 % of cases, manual and automated hot spots overlapped. Fully manual hot-spot counts yielded the highest prognostic performance with an adjusted hazard ratio of 5.5 (95 % CI, 1.3-24, P = 0.024) as opposed to 1.3 (95 % CI, 0.61-2.9, P = 0.47) for automated counts with automated hot spots. The automated index and automated hot-spot selection were highly correlated to their manual counterparts, but altogether their prognostic impact was noticeably reduced. Because correct recognition of only one PHH3/MART1-positive cell seems important, extremely high sensitivity and specificity of the algorithm is required for prognostic purposes. Thus, automated
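
    As a rough sketch of how an automated hot spot could be chosen, the code below slides a fixed-size window over a 2-D map of positive-cell coordinates and picks the placement with the highest count; the window size in pixels, the coordinate format, and the use of a summed-area table are illustrative assumptions rather than the algorithm used in the study.

```python
import numpy as np

def select_hot_spot(cell_xy: np.ndarray, image_shape: tuple, window: int):
    """Return (row, col, count) of the square window containing the most positive cells.

    cell_xy: (N, 2) integer array of (row, col) positions of PHH3/MART1-positive cells.
    image_shape: (height, width) of the analysed region in pixels.
    window: side length of the hot-spot square in pixels (e.g. the pixel size of 1 mm).
    """
    density = np.zeros(image_shape, dtype=np.int64)
    np.add.at(density, (cell_xy[:, 0], cell_xy[:, 1]), 1)

    # Summed-area table gives every window sum without an explicit double loop
    sat = np.pad(density.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    counts = (sat[window:, window:] - sat[:-window, window:]
              - sat[window:, :-window] + sat[:-window, :-window])
    r, c = np.unravel_index(np.argmax(counts), counts.shape)
    return int(r), int(c), int(counts[r, c])

# Hypothetical example: 200 positive cells scattered over a 2000 x 2000 pixel region
rng = np.random.default_rng(1)
cells = rng.integers(0, 2000, size=(200, 2))
print(select_hot_spot(cells, (2000, 2000), window=500))
```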

  11. Nomenclature and basic concepts in automation in the clinical laboratory setting: a practical glossary.

    Science.gov (United States)

    Evangelopoulos, Angelos A; Dalamaga, Maria; Panoutsopoulos, Konstantinos; Dima, Kleanthi

    2013-01-01

    In the early 80s, the word automation was used in the clinical laboratory setting referring only to analyzers. But in the late 80s and afterwards, automation found its way into all aspects of the diagnostic process, embracing not only the analytical but also the pre- and post-analytical phases. While laboratories in the eastern world, mainly Japan, paved the way for laboratory automation, US and European laboratories soon realized the benefits and were quick to follow. Clearly, automation and robotics will be a key survival tool in a very competitive and cost-conscious healthcare market. What sets automation technology apart from so many other efficiency solutions is the dramatic savings that it brings to the clinical laboratory. Further standardization will assure the success of this revolutionary new technology. One of the main difficulties laboratory managers and personnel must deal with when studying solutions to reengineer a laboratory is familiarizing themselves with the multidisciplinary and technical terminology of this new and exciting field. The present review/glossary aims to give an overview of the most frequently used terms within the scope of laboratory automation and to put laboratory automation on a sounder linguistic basis.

  12. Estimation of continuous multi-DOF finger joint kinematics from surface EMG using a multi-output Gaussian Process.

    Science.gov (United States)

    Ngeo, Jimson; Tamei, Tomoya; Shibata, Tomohiro

    2014-01-01

    Surface electromyographic (EMG) signals have often been used in estimating upper and lower limb dynamics and kinematics for the purpose of controlling robotic devices such as robot prosthesis and finger exoskeletons. However, in estimating multiple and a high number of degrees-of-freedom (DOF) kinematics from EMG, output DOFs are usually estimated independently. In this study, we estimate finger joint kinematics from EMG signals using a multi-output convolved Gaussian Process (Multi-output Full GP) that considers dependencies between outputs. We show that estimation of finger joints from muscle activation inputs can be improved by using a regression model that considers inherent coupling or correlation within the hand and finger joints. We also provide a comparison of estimation performance between different regression methods, such as Artificial Neural Networks (ANN) which is used by many of the related studies. We show that using a multi-output GP gives improved estimation compared to multi-output ANN and even dedicated or independent regression models.
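
    A common way to couple outputs in GP regression is the intrinsic coregionalization model, where the joint kernel is a Kronecker product of an output-correlation matrix and an input kernel. The sketch below is a bare-bones NumPy illustration of that idea for two coupled finger-joint outputs; it is not the convolved-GP model of the study, and the kernel parameters, noise level, and toy data are assumptions made for the example.

```python
import numpy as np

def rbf(a: np.ndarray, b: np.ndarray, length: float = 1.0) -> np.ndarray:
    """Squared-exponential kernel between two sets of input points."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

# Toy training data: 1-D "muscle activation" input, two correlated joint angles
rng = np.random.default_rng(0)
X = np.linspace(0, 1, 25)[:, None]
y1 = np.sin(2 * np.pi * X[:, 0])
y2 = 0.8 * y1 + 0.1 * rng.normal(size=25)           # second joint tracks the first
Y = np.concatenate([y1, y2])                        # stacked outputs, length 2*N

# Intrinsic coregionalization: K = B (output coupling) kron k(X, X) (input kernel)
B = np.array([[1.0, 0.8],
              [0.8, 1.0]])
noise = 1e-3
K = np.kron(B, rbf(X, X)) + noise * np.eye(2 * len(X))

# Predict both outputs at new inputs with the standard GP posterior mean
Xs = np.array([[0.33], [0.66]])
Ks = np.kron(B, rbf(Xs, X))                         # cross-covariance, shape (2*M, 2*N)
mean = Ks @ np.linalg.solve(K, Y)
print("joint-1 predictions:", mean[:len(Xs)])
print("joint-2 predictions:", mean[len(Xs):])
```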

  13. Mass Spectra-Based Framework for Automated Structural Elucidation of Metabolome Data to Explore Phytochemical Diversity

    Science.gov (United States)

    Matsuda, Fumio; Nakabayashi, Ryo; Sawada, Yuji; Suzuki, Makoto; Hirai, Masami Y.; Kanaya, Shigehiko; Saito, Kazuki

    2011-01-01

    A novel framework for automated elucidation of metabolite structures in liquid chromatography–mass spectrometer metabolome data was constructed by integrating databases. High-resolution tandem mass spectra data automatically acquired from each metabolite signal were used for database searches. Three distinct databases, KNApSAcK, ReSpect, and the PRIMe standard compound database, were employed for the structural elucidation. The outputs were retrieved using the CAS metabolite identifier for identification and putative annotation. A simple metabolite ontology system was also introduced to attain putative characterization of the metabolite signals. The automated method was applied for the metabolome data sets obtained from the rosette leaves of 20 Arabidopsis accessions. Phenotypic variations in novel Arabidopsis metabolites among these accessions could be investigated using this method. PMID:22645535

  14. Mass spectra-based framework for automated structural elucidation of metabolome data to explore phytochemical diversity

    Directory of Open Access Journals (Sweden)

    Fumio eMatsuda

    2011-08-01

    Full Text Available A novel framework for automated elucidation of metabolite structures in liquid chromatography-mass spectrometer (LC-MS metabolome data was constructed by integrating databases. High-resolution tandem mass spectra data automatically acquired from each metabolite signal were used for database searches. Three distinct databases, KNApSAcK, ReSpect, and the PRIMe standard compound database, were employed for the structural elucidation. The outputs were retrieved using the CAS metabolite identifier for identification and putative annotation. A simple metabolite ontology system was also introduced to attain putative characterization of the metabolite signals. The automated method was applied for the metabolome data sets obtained from the rosette leaves of 20 Arabidopsis accessions. Phenotypic variations in novel Arabidopsis metabolites among these accessions could be investigated using this method.

  15. Automated Verification of Spatial Resolution in Remotely Sensed Imagery

    Science.gov (United States)

    Davis, Bruce; Ryan, Robert; Holekamp, Kara; Vaughn, Ronald

    2011-01-01

    Image spatial resolution characteristics can vary widely among sources. In the case of aerial-based imaging systems, the image spatial resolution characteristics can even vary between acquisitions. In these systems, aircraft altitude, speed, and sensor look angle all affect image spatial resolution. Image spatial resolution needs to be verified with estimators that include the ground sample distance (GSD), the modulation transfer function (MTF), and the relative edge response (RER), all of which are key components of image quality, along with signal-to-noise ratio (SNR) and dynamic range. Knowledge of spatial resolution parameters is important to determine if features of interest are distinguishable in imagery or associated products, and to develop image restoration algorithms. An automated Spatial Resolution Verification Tool (SRVT) was developed to rapidly determine the spatial resolution characteristics of remotely sensed aerial and satellite imagery. Most current methods for assessing spatial resolution characteristics of imagery rely on pre-deployed engineered targets and are performed only at selected times within preselected scenes. The SRVT addresses these insufficiencies by finding uniform, high-contrast edges from urban scenes and then using these edges to determine standard estimators of spatial resolution, such as the MTF and the RER. The SRVT was developed using the MATLAB programming language and environment. This automated software algorithm assesses every image in an acquired data set, using edges found within each image, and in many cases eliminating the need for dedicated edge targets. The SRVT automatically identifies high-contrast, uniform edges and calculates the MTF and RER of each image, and when possible, within sections of an image, so that the variation of spatial resolution characteristics across the image can be analyzed. The automated algorithm is capable of quickly verifying the spatial resolution quality of all images within a data
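
    One way to estimate the MTF and RER from an edge, in the spirit described above, is to difference the edge spread function into a line spread function and take its Fourier magnitude. The simplified sketch below assumes an already-extracted, noise-free vertical edge profile, which is a large simplification of the slanted-edge procedures used in practice; the edge model and sampling are invented for illustration.

```python
import numpy as np

def mtf_from_edge(esf: np.ndarray):
    """Return (frequencies, MTF) from a 1-D edge spread function sampled once per pixel."""
    lsf = np.gradient(esf)                      # line spread function
    lsf = lsf / lsf.sum()                       # normalize area to 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=1.0)    # cycles per pixel
    return freqs, mtf / mtf[0]

def relative_edge_response(esf: np.ndarray) -> float:
    """RER approximated as the rise of the normalized edge over +/- 0.5 pixel at the edge."""
    norm = (esf - esf.min()) / (esf.max() - esf.min())
    center = int(np.argmin(np.abs(norm - 0.5)))
    lo = max(center - 1, 0)
    hi = min(center + 1, len(norm) - 1)
    return float(norm[hi] - norm[lo]) / 2.0

# Hypothetical blurred edge profile (smooth step shape)
x = np.arange(-16, 16)
esf = 0.5 * (1 + np.tanh(x / 2.0))
freqs, mtf = mtf_from_edge(esf)
print("MTF at Nyquist (0.5 cyc/px):", mtf[np.argmin(np.abs(freqs - 0.5))])
print("RER:", relative_edge_response(esf))
```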

  16. Automation and Mankind

    Science.gov (United States)

    1960-08-07

    limited by the capabilities of the human organism in the matter of control of its processes. In our time, the speeds of technological processes are...in many cases limited by conditions of control. The speed of human reaction is limited and therefore, at present, only processes of a relatively...forward. It can be foreseen that automation will completely free Man from work under conditions of high temperatures, pressures, and pollution, or

  17. A5: Automated Analysis of Adversarial Android Applications

    Science.gov (United States)

    2014-06-03

    A5: Automated Analysis of Adversarial Android Applications Timothy Vidas, Jiaqi Tan, Jay Nahata, Chaur Lih Tan, Nicolas Christin...detecting, on the device itself, that an application is malicious is much more complex without elevated privileges. In other words, given the...interface via website. Blasing et al. [7] describe another dynamic analysis system for Android. Their system focuses on classifying input applications as

  18. Changes in motivational and higher level cognitive processes when interacting with in-vehicle automation

    OpenAIRE

    Beggiato, Matthias

    2014-01-01

    Many functions that at one time could only be performed by humans can nowadays be carried out by machines. Automation impacts many areas of life including work, home, communication and mobility. In the driving context, in-vehicle automation is considered to provide solutions for environmental, economic, safety and societal challenges. However, automation changes the driving task and the human-machine interaction. Thus, the expected benefit of in-vehicle automation can be undermined by changes...

  19. Changes in motivational and higher level cognitive processes when interacting with in-vehicle automation

    OpenAIRE

    Beggiato, Matthias

    2015-01-01

    Many functions that at one time could only be performed by humans can nowadays be carried out by machines. Automation impacts many areas of life including work, home, communication and mobility. In the driving context, in-vehicle automation is considered to provide solutions for environmental, economic, safety and societal challenges. However, automation changes the driving task and the human-machine interaction. Thus, the expected benefit of in-vehicle automation can be undermined by changes...

  20. Optimization of automation: I. Estimation method of cognitive automation rates reflecting the effects of automation on human operators in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Seong, Poong Hyun

    2014-01-01

    Highlights: • We propose an estimation method of the automation rate by taking the advantages of automation as the estimation measures. • We conduct the experiments to examine the validity of the suggested method. • The higher the cognitive automation rate is, the greater the decreased rate of the working time will be. • The usefulness of the suggested estimation method is proved by statistical analyses. - Abstract: Since automation was introduced in various industrial fields, the concept of the automation rate has been used to indicate the inclusion proportion of automation among all work processes or facilities. Expressions of the inclusion proportion of automation are predictable, as is the ability to express the degree of the enhancement of human performance. However, many researchers have found that a high automation rate does not guarantee high performance. Therefore, to reflect the effects of automation on human performance, this paper proposes a new estimation method of the automation rate that considers the effects of automation on human operators in nuclear power plants (NPPs). Automation in NPPs can be divided into two types: system automation and cognitive automation. Some general descriptions and characteristics of each type of automation are provided, and the advantages of automation are investigated. The advantages of each type of automation are used as measures of the estimation method of the automation rate. One advantage was found to be a reduction in the number of tasks, and another was a reduction in human cognitive task loads. The system and the cognitive automation rate were proposed as quantitative measures by taking advantage of the aforementioned benefits. To quantify the required human cognitive task loads and thus suggest the cognitive automation rate, Conant’s information-theory-based model was applied. The validity of the suggested method, especially as regards the cognitive automation rate, was proven by conducting

  1. Arduino-based automation of a DNA extraction system.

    Science.gov (United States)

    Kim, Kyung-Won; Lee, Mi-So; Ryu, Mun-Ho; Kim, Jong-Won

    2015-01-01

    There have been many studies to detect infectious diseases with the molecular genetic method. This study presents an automation process for a DNA extraction system based on microfluidics and magnetic bead, which is part of a portable molecular genetic test system. This DNA extraction system consists of a cartridge with chambers, syringes, four linear stepper actuators, and a rotary stepper actuator. The actuators provide a sequence of steps in the DNA extraction process, such as transporting, mixing, and washing for the gene specimen, magnetic bead, and reagent solutions. The proposed automation system consists of a PC-based host application and an Arduino-based controller. The host application compiles a G code sequence file and interfaces with the controller to execute the compiled sequence. The controller executes stepper motor axis motion, time delay, and input-output manipulation. It drives the stepper motor with an open library, which provides a smooth linear acceleration profile. The controller also provides a homing sequence to establish the motor's reference position, and hard limit checking to prevent any over-travelling. The proposed system was implemented and its functionality was investigated, especially regarding positioning accuracy and velocity profile.
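
    As a loose illustration of the host-to-controller split described here, the sketch below compiles a tiny G-code-like sequence into line-oriented commands and streams them over a serial port with pyserial, waiting for an acknowledgement after each line. The command names, the 'ok' handshake, and the port settings are assumptions for the example, not the actual protocol of the cited system.

```python
import serial  # pyserial; assumed to be installed

# Hypothetical extraction sequence: move an actuator, dwell, toggle an output
SEQUENCE = """
G1 A1 P1200   ; advance syringe actuator 1 to position 1200 steps
G4 T5000      ; dwell 5000 ms while beads bind DNA
M3 O2         ; switch output 2 on (e.g. start mixing)
G4 T3000
M5 O2         ; switch output 2 off
"""

def compile_sequence(text: str) -> list:
    """Strip comments and blank lines so only executable commands remain."""
    lines = []
    for raw in text.splitlines():
        code = raw.split(";", 1)[0].strip()
        if code:
            lines.append(code)
    return lines

def run(port: str = "/dev/ttyACM0") -> None:
    with serial.Serial(port, 115200, timeout=10) as link:
        for cmd in compile_sequence(SEQUENCE):
            link.write((cmd + "\n").encode())
            reply = link.readline().decode().strip()   # controller answers 'ok' or 'error'
            if reply != "ok":
                raise RuntimeError(f"controller rejected '{cmd}': {reply}")

if __name__ == "__main__":
    run()
```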

  2. Automated Controller Synthesis for non-Deterministic Piecewise-Affine Hybrid Systems

    DEFF Research Database (Denmark)

    Grunnet, Jacob Deleuran

    formations. This thesis uses a hybrid systems model of a satellite formation with possible actuator faults as a motivating example for developing an automated control synthesis method for non-deterministic piecewise-affine hybrid systems (PAHS). The method does not only open an avenue for further research...... in fault tolerant satellite formation control, but can be used to synthesise controllers for a wide range of systems where external events can alter the system dynamics. The synthesis method relies on abstracting the hybrid system into a discrete game, finding a winning strategy for the game meeting...... game and linear optimisation solvers for controller refinement. To illustrate the efficacy of the method a reoccurring satellite formation example including actuator faults has been used. The end result is the application of PAHSCTRL on the example showing synthesis and simulation of a fault tolerant...

  3. Carnot efficiency at divergent power output

    Science.gov (United States)

    Polettini, Matteo; Esposito, Massimiliano

    2017-05-01

    The widely debated feasibility of thermodynamic machines achieving Carnot efficiency at finite power has been convincingly dismissed. Yet, the common wisdom that efficiency can only be optimal in the limit of infinitely slow processes overlooks the dual scenario of infinitely fast processes. We corroborate that efficient engines at divergent power output are not theoretically impossible, framing our claims within the theory of Stochastic Thermodynamics. We inspect the case of an electronic quantum dot coupled to three particle reservoirs to illustrate the physical rationale.

  4. Documentation of the dynamic parameter, water-use, stream and lake flow routing, and two summary output modules and updates to surface-depression storage simulation and initial conditions specification options with the Precipitation-Runoff Modeling System (PRMS)

    Science.gov (United States)

    Regan, R. Steve; LaFontaine, Jacob H.

    2017-10-05

    This report documents seven enhancements to the U.S. Geological Survey (USGS) Precipitation-Runoff Modeling System (PRMS) hydrologic simulation code: two time-series input options, two new output options, and three updates of existing capabilities. The enhancements are (1) new dynamic parameter module, (2) new water-use module, (3) new Hydrologic Response Unit (HRU) summary output module, (4) new basin variables summary output module, (5) new stream and lake flow routing module, (6) update to surface-depression storage and flow simulation, and (7) update to the initial-conditions specification. This report relies heavily upon U.S. Geological Survey Techniques and Methods, book 6, chapter B7, which documents PRMS version 4 (PRMS-IV). A brief description of PRMS is included in this report.

  5. Automation in the clinical microbiology laboratory.

    Science.gov (United States)

    Novak, Susan M; Marlowe, Elizabeth M

    2013-09-01

    Imagine a clinical microbiology laboratory where a patient's specimens are placed on a conveyor belt and sent on an automation line for processing and plating. Technologists need only log onto a computer to visualize the images of a culture and send to a mass spectrometer for identification. Once a pathogen is identified, the system knows to send the colony for susceptibility testing. This is the future of the clinical microbiology laboratory. This article outlines the operational and staffing challenges facing clinical microbiology laboratories and the evolution of automation that is shaping the way laboratory medicine will be practiced in the future. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Dynamic adaptive policymaking for the sustainable city: The case of automated taxis

    Directory of Open Access Journals (Sweden)

    Warren E. Walker

    2017-06-01

    Full Text Available By 2050, about two-thirds of the world’s people are expected to live in urban areas. But the economic viability and sustainability of city centers are threatened by problems related to transport, such as pollution, congestion, and parking. Much has been written about automated vehicles and demand responsive transport. The combination of these potentially disruptive developments could reduce these problems. However, implementation is held back by uncertainties, including public acceptance, liability, and privacy. So, their potential to reduce urban transport problems may not be fully realized. We propose an adaptive approach to implementation that takes some actions right away and creates a framework for future actions that allows for adaptations over time as knowledge about performance and acceptance of the new system (called ‘automated taxis’) accumulates and critical events for implementation take place. The adaptive approach is illustrated in the context of a hypothetical large city.

  7. Clever Toolbox - the Art of Automated Genre Classification

    DEFF Research Database (Denmark)

    2005-01-01

    Automatic musical genre classification can be defined as the science of finding computer algorithms that take a digitized sound clip as input and yield a musical genre as output. The goal of automated genre classification is, of course, that the musical genre should agree with the human classification..... This demo illustrates an approach to the problem that first extracts frequency-based sound features, followed by a "linear regression" classifier. The basic features are the so-called mel-frequency cepstral coefficients (MFCCs), which are extracted on a time-scale of 30 msec. From these MFCC features, auto...... is subsequently used for classification. This classifier is rather simple; current research investigates more advanced methods of classification....
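
    For a sense of what the MFCC front end looks like in code, the sketch below extracts MFCCs on short frames with librosa and fits a simple linear classifier over clip-level summary statistics; the file names, labels, frame settings, and the use of scikit-learn's LogisticRegression are assumptions for illustration, not the toolbox's actual pipeline.

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Summarize a clip as the mean and standard deviation of its frame-wise MFCCs."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    # ~30 ms frames with a ~15 ms hop
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=int(0.030 * sr), hop_length=int(0.015 * sr))
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical labelled training clips (paths and genres are placeholders)
train_files = ["rock_01.wav", "rock_02.wav", "jazz_01.wav", "jazz_02.wav"]
train_genres = ["rock", "rock", "jazz", "jazz"]

X = np.vstack([clip_features(f) for f in train_files])
clf = LogisticRegression(max_iter=1000).fit(X, train_genres)
print(clf.predict(clip_features("unknown_clip.wav")[None, :]))
```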

  8. Automated Multivariate Optimization Tool for Energy Analysis: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, P. G.; Griffith, B. T.; Long, N.; Torcellini, P. A.; Crawley, D.

    2006-07-01

    Building energy simulations are often used for trial-and-error evaluation of "what-if" options in building design: a limited search for an optimal solution, or "optimization". Computerized searching has the potential to automate the input and output, evaluate many options, and perform enough simulations to account for the complex interactions among combinations of options. This paper describes ongoing efforts to develop such a tool. The optimization tool employs multiple modules, including a graphical user interface, a database, a preprocessor, the EnergyPlus simulation engine, an optimization engine, and a simulation run manager. Each module is described and the overall application architecture is summarized.

  9. Advanced automation concepts applied to Experimental Breeder Reactor-II startup

    International Nuclear Information System (INIS)

    Berkan, R.C.; Upadhyaya, B.R.; Bywater, R.L.

    1991-08-01

    The major objective of this work is to demonstrate through simulations that advanced liquid-metal reactor plants can be operated from low power by computer control. Development of an automatic control system with this objective will help resolve specific issues and provide proof through demonstration that automatic control for plant startup is feasible. This paper presents an advanced control system design for startup of the Experimental Breeder Reactor-2 (EBR-2) located at Idaho Falls, Idaho. The design incorporates recent methods in nonlinear control with advanced diagnostics techniques such as neural networks to form an integrated architecture. The preliminary evaluations are obtained in a simulated environment by a low-order, valid nonlinear model. Within the framework of phase 1 research, the design includes an inverse dynamics controller, a fuzzy controller, and an artificial neural network controller. These three nonlinear control modules are designed to follow the EBR-2 startup trajectories in a multi-input/output regime. They are coordinated by a supervisory routine to yield a fault-tolerant, parallel operation. The control system operates in three modes: manual, semiautomatic, and fully automatic control. The simulation results of the EBR-2 startup transients proved the effectiveness of the advanced concepts. The work presented in this paper is a preliminary feasibility analysis and does not constitute a final design of an automated startup control system for EBR-2. 14 refs., 43 figs

  10. Automated delay estimation at signalized intersections : phase I concept and algorithm development.

    Science.gov (United States)

    2011-07-01

    Currently there are several methods to measure the performance of surface streets, but their capabilities in dynamically estimating vehicle delay are limited. The objective of this research is to develop a method to automate traffic delay estimation ...

  11. Robust output-feedback control to eliminate stick-slip oscillations in drill-string systems

    NARCIS (Netherlands)

    Vromen, T.G.M.; Dai, C.H.; van de Wouw, N.; Oomen, T.A.E.; Astrid, P.; Nijmeijer, H.

    2015-01-01

    The aim of this paper is to design a robust output-feedback controller to eliminate torsional stick-slip vibrations. A multi-modal model of the torsional dynamics with a nonlinear bit-rock interaction model is used. The controller design is based on skewed-μ DK-iteration and the stability of the

  12. Greenhouse gas footprinting for small businesses - The use of input-output data

    International Nuclear Information System (INIS)

    Berners-Lee, M.; Howard, D.C.; Moss, J.; Kaivanto, K.; Scott, W.A.

    2011-01-01

    To mitigate anthropogenic climate change greenhouse gas emissions (GHG) must be reduced; their major source is man's use of energy. A key way to manage emissions is for the energy consumer to understand their impact and the consequences of changing their activities. This paper addresses the challenge of delivering relevant, practical and reliable greenhouse gas 'footprint' information for small and medium sized businesses. The tool we describe is capable of ascribing parts of the total footprint to specific actions to which the business can relate and is sensitive enough to reflect the consequences of change. It provides a comprehensive description of all emissions for each business and sets them in the context of local, national and global statistics. It includes the GHG costs of all goods and services irrespective of their origin and without double accounting. We describe the development and use of the tool, which draws upon both national input-output data and process-based life cycle analysis techniques; a hybrid model. The use of national data sets the output in context and makes the results consistent with national and global targets, while the life cycle techniques provide a means of reflecting the dynamics of actions. The model is described in some detail along with a rationale and a short discussion of validity. As the tool is designed for small commercial users, we have taken care to combine rigour with practicality; parameterising from readily available client data whilst being clear about uncertainties. As an additional incentive, we also report on the potential costs or savings of switching activities. For users to benefit from the tool, they need to understand the output and know how much confidence they should place in the results. We not only describe an application of non-parametric statistics to generate confidence intervals, but also offer users the option of and guidance on adjusting figures to examine the sensitivity of the model to its
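
    The hybrid approach couples process data with an environmentally extended input-output calculation, whose core is the Leontief inverse: total (direct plus upstream) emission intensities are e(I - A)^-1, applied to the business's purchases by sector. The sketch below shows that calculation on a made-up three-sector economy; the sector names, technical-coefficient matrix, and emission intensities are invented for illustration and are unrelated to the tool described above.

```python
import numpy as np

# Hypothetical 3-sector economy (agriculture, manufacturing, services)
sectors = ["agriculture", "manufacturing", "services"]

# Technical coefficients A[i, j]: input from sector i needed per unit output of sector j
A = np.array([[0.10, 0.05, 0.01],
              [0.20, 0.30, 0.10],
              [0.05, 0.15, 0.20]])

# Direct emission intensities e: kg CO2e per pound sterling of output in each sector
e = np.array([1.2, 0.8, 0.1])

# Total (direct + upstream) intensities via the Leontief inverse (I - A)^-1
total_intensity = e @ np.linalg.inv(np.eye(3) - A)

# A small business's annual purchases from each sector, in pounds sterling
purchases = np.array([2_000.0, 15_000.0, 30_000.0])

footprint_by_sector = total_intensity * purchases
for name, kg in zip(sectors, footprint_by_sector):
    print(f"{name:13s} {kg:10.0f} kg CO2e")
print(f"{'total':13s} {footprint_by_sector.sum():10.0f} kg CO2e")
```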

  13. Process automation

    International Nuclear Information System (INIS)

    Moser, D.R.

    1986-01-01

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs

  14. Enhanced performance CCD output amplifier

    Science.gov (United States)

    Dunham, Mark E.; Morley, David W.

    1996-01-01

    A low-noise FET amplifier is connected to amplify output charge from a charge-coupled device (CCD). The FET has its gate connected to the CCD in common source configuration for receiving the output charge signal from the CCD and outputting an intermediate signal at a drain of the FET. An intermediate amplifier is connected to the drain of the FET for receiving the intermediate signal and outputting a low-noise signal functionally related to the output charge signal from the CCD. The amplifier is preferably connected as a virtual ground to the FET drain. The inherent shunt capacitance of the FET is selected to be at least equal to the sum of the remaining capacitances.

  15. Automation of Flexible Migration Workflows

    Directory of Open Access Journals (Sweden)

    Dirk von Suchodoletz

    2011-03-01

    Full Text Available Many digital preservation scenarios are based on the migration strategy, which itself is heavily tool-dependent. For popular, well-defined and often open file formats – e.g., digital images, such as PNG, GIF, JPEG – a wide range of tools exist. Migration workflows become more difficult with proprietary formats, as used by the several text processing applications becoming available in the last two decades. If a certain file format can not be rendered with actual software, emulation of the original environment remains a valid option. For instance, with the original Lotus AmiPro or Word Perfect, it is not a problem to save an object of this type in ASCII text or Rich Text Format. In specific environments, it is even possible to send the file to a virtual printer, thereby producing a PDF as a migration output. Such manual migration tasks typically involve human interaction, which may be feasible for a small number of objects, but not for larger batches of files.We propose a novel approach using a software-operated VNC abstraction layer in order to replace humans with machine interaction. Emulators or virtualization tools equipped with a VNC interface are very well suited for this approach. But screen, keyboard and mouse interaction is just part of the setup. Furthermore, digital objects need to be transferred into the original environment in order to be extracted after processing. Nevertheless, the complexity of the new generation of migration services is quickly rising; a preservation workflow is now comprised not only of the migration tool itself, but of a complete software and virtual hardware stack with recorded workflows linked to every supported migration scenario. Thus the requirements of OAIS management must include proper software archiving, emulator selection, system image and recording handling. The concept of view-paths could help either to automatically determine the proper pre-configured virtual environment or to set up system

  16. Automation Of An Analogue Temperature Control System For Chlorination Process Of Zircon Sand In A Bricket Form

    International Nuclear Information System (INIS)

    Triyono; Wasito, Bangun; Aryadi

    2000-01-01

    Automation of an analogue temperature control system for the chlorination process of zircon sand in briquette form has been carried out. In principle, the automated analogue temperature control is a simple, closed-loop controller. The controller operates in ON-OFF mode and uses a thermocouple probe as the sensor. The output of the system is an ON-OFF relay connected to a contactor relay, so that it is able to serve the chlorination furnace. The prepared automatic temperature control system for the chlorination process of zircon sand has been continuously tested at temperatures between 800 and 1050 °C, requiring heating times between 8 and 17 minutes
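
    An ON-OFF (bang-bang) temperature loop of the kind described can be summarized in a few lines: compare the measured temperature with the setpoint and switch a relay accordingly, usually with a small hysteresis band to avoid chattering. The sketch below is a generic simulation of that logic; the setpoint, hysteresis, and first-order furnace model are assumptions for illustration, not parameters of the cited system.

```python
def on_off_controller(temp_c: float, setpoint_c: float, relay_on: bool,
                      hysteresis_c: float = 5.0) -> bool:
    """Return the new relay state for a bang-bang loop with a hysteresis band."""
    if temp_c < setpoint_c - hysteresis_c:
        return True          # too cold: energize the heater contactor
    if temp_c > setpoint_c + hysteresis_c:
        return False         # too hot: de-energize
    return relay_on          # inside the band: keep the previous state

# Crude first-order furnace model to exercise the controller
temp, relay, setpoint = 25.0, False, 900.0   # degrees Celsius
for minute in range(0, 40):
    relay = on_off_controller(temp, setpoint, relay)
    heating = 60.0 if relay else 0.0                  # heater effect per minute when on
    temp += heating - 0.03 * (temp - 25.0)            # heat input minus ambient losses
    if minute % 5 == 0:
        print(f"t={minute:2d} min  T={temp:6.1f} C  relay={'ON' if relay else 'OFF'}")
```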

  17. Automating Visualization Service Generation with the WATT Compiler

    Science.gov (United States)

    Bollig, E. F.; Lyness, M. D.; Erlebacher, G.; Yuen, D. A.

    2007-12-01

    As tasks and workflows become increasingly complex, software developers are devoting increasing attention to automation tools. Among many examples, the Automator tool from Apple collects components of a workflow into a single script, with very little effort on the part of the user. Tasks are most often described as a series of instructions. The granularity of the tasks dictates the tools to use. Compilers translate fine-grained instructions to assembler code, while scripting languages (ruby, perl) are used to describe a series of tasks at a higher level. Compilers can also be viewed as transformational tools: a cross-compiler can translate executable code written on one computer to assembler code understood on another, while transformational tools can translate from one high-level language to another. We are interested in creating visualization web services automatically, starting from stand-alone VTK (Visualization Toolkit) code written in Tcl. To this end, using the OCaml programming language, we have developed a compiler that translates Tcl into C++, including all the stubs, classes and methods to interface with gSOAP, a C++ implementation of the Soap 1.1/1.2 protocols. This compiler, referred to as the Web Automation and Translation Toolkit (WATT), is the first step towards automated creation of specialized visualization web services without input from the user. The WATT compiler seeks to automate all aspects of web service generation, including the transport layer, the division of labor and the details related to interface generation. The WATT compiler is part of ongoing efforts within the NSF funded VLab consortium [1] to facilitate and automate time-consuming tasks for the science related to understanding planetary materials. Through examples of services produced by WATT for the VLab portal, we will illustrate features, limitations and the improvements necessary to achieve the ultimate goal of complete and transparent automation in the generation of web

  18. Donor age is a predictor of early low output after heart transplantation.

    Science.gov (United States)

    Fujino, Takeo; Kinugawa, Koichiro; Nitta, Daisuke; Imamura, Teruhiko; Maki, Hisataka; Amiya, Eisuke; Hatano, Masaru; Kimura, Mitsutoshi; Kinoshita, Osamu; Nawata, Kan; Komuro, Issei; Ono, Minoru

    2016-05-01

    Using hearts from marginal donors could be related to increased risk of primary graft dysfunction and poor long-term survival. However, factors associated with delayed myocardial recovery after heart transplantation (HTx) remain unknown. We sought to clarify risk factors that predict early low output after HTx, and investigated whether early low output leads to mid-term graft dysfunction. We retrospectively analyzed patients who had undergone HTx at The University of Tokyo Hospital. We defined early low output patients as those with a low cardiac index (CI) early after HTx and assigned them to the early low output group, and the others to the early preserved output group. We performed univariable logistic analysis and found that donor age was the only significant factor that predicted early low output (odds ratio 1.107, 95% confidence interval 1.034-1.210, p=0.002). The CI of early low output patients gradually increased and caught up with that of early preserved output patients at 2 weeks after HTx (2.4±0.6 L/min/m² in the early low output group vs 2.5±0.5 L/min/m² in the early preserved output group, p=0.684). Plasma B-type natriuretic peptide concentration of early low output patients was higher (1118.5±1250.2 pg/ml vs 526.4±399.5 pg/ml; p=0.033) at 1 week, 703.6±518.4 pg/ml vs 464.6±509.0 pg/ml (p=0.033) at 2 weeks, and 387.7±231.9 pg/ml vs 249.4±209.5 pg/ml (p=0.010) at 4 weeks after HTx, and it came down to that of early preserved output patients at 12 weeks after HTx. Donor age was a predictor of early low output after HTx. We should be careful after HTx from older donors. However, hemodynamic parameters of early low output patients gradually caught up with those of early preserved output patients. Copyright © 2015 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.
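
    The univariable logistic analysis reported above amounts to regressing the binary early-low-output indicator on donor age and exponentiating the coefficient to obtain an odds ratio per year. The sketch below shows that calculation with statsmodels on made-up data; the cohort, variable names, and resulting numbers are illustrative and unrelated to the study's actual data.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical cohort: donor age (years) and an early low output indicator (1 = yes)
rng = np.random.default_rng(0)
donor_age = rng.uniform(18, 65, size=60)
p = 1 / (1 + np.exp(-(-5.0 + 0.09 * donor_age)))     # invented underlying relationship
early_low_output = rng.binomial(1, p)

X = sm.add_constant(donor_age)                        # intercept + donor age
model = sm.Logit(early_low_output, X).fit(disp=False)

odds_ratio = np.exp(model.params[1])                  # OR per 1-year increase in donor age
ci_low, ci_high = np.exp(model.conf_int()[1])         # 95% CI for that coefficient
print(f"OR per year of donor age: {odds_ratio:.3f} "
      f"(95% CI {ci_low:.3f}-{ci_high:.3f}), p={model.pvalues[1]:.4f}")
```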

  19. E-health, phase two: the imperative to integrate process automation with communication automation for large clinical reference laboratories.

    Science.gov (United States)

    White, L; Terner, C

    2001-01-01

    The initial efforts of e-health have fallen far short of expectations. They were buoyed by the hype and excitement of the Internet craze but limited by their lack of understanding of important market and environmental factors. E-health now recognizes that legacy systems and processes are important, that there is a technology adoption process that needs to be followed, and that demonstrable value drives adoption. Initial e-health transaction solutions have targeted mostly low-cost problems. These solutions invariably are difficult to integrate into existing systems, typically requiring manual interfacing to supported processes. This limitation in particular makes them unworkable for large volume providers. To meet the needs of these providers, e-health companies must rethink their approaches, appropriately applying technology to seamlessly integrate all steps into existing business functions. E-automation is a transaction technology that automates steps, integration of steps, and information communication demands, resulting in comprehensive automation of entire business functions. We applied e-automation to create a billing management solution for clinical reference laboratories. Large volume, onerous regulations, small margins, and only indirect access to patients challenge large laboratories' billing departments. Couple these problems with outmoded, largely manual systems and it becomes apparent why most laboratory billing departments are in crisis. Our approach has been to focus on the most significant and costly problems in billing: errors, compliance, and system maintenance and management. The core of the design relies on conditional processing, a "universal" communications interface, and ASP technologies. The result is comprehensive automation of all routine processes, driving out errors and costs. Additionally, compliance management and billing system support and management costs are dramatically reduced. The implications of e-automated processes can extend

  20. SmartUnit: Empirical Evaluations for Automated Unit Testing of Embedded Software in Industry

    OpenAIRE

    Zhang, Chengyu; Yan, Yichen; Zhou, Hanru; Yao, Yinbo; Wu, Ke; Su, Ting; Miao, Weikai; Pu, Geguang

    2018-01-01

    In this paper, we aim at the automated unit coverage-based testing for embedded software. To achieve the goal, by analyzing the industrial requirements and our previous work on automated unit testing tool CAUT, we rebuild a new tool, SmartUnit, to solve the engineering requirements that take place in our partner companies. SmartUnit is a dynamic symbolic execution implementation, which supports statement, branch, boundary value and MC/DC coverage. SmartUnit has been used to test more than one...

  1. System of automated processing of radionuclide investigations (SAPRI-01) in clinical practice

    International Nuclear Information System (INIS)

    Sivachenko, T.P.; Mechev, D.S.; Krupka, I.N.

    1988-01-01

    The author described the results of clinical testing of the SAPRI-01 system, designed for automated collection, storage and processing of data from radionuclide investigations. He gave examples of automated processing of RCG and the results of positive scintigraphy of tumors of different sites using 67 Ga-citrate and 99m Tc-pertechnetate in static and dynamic investigations. Shortcomings and ways of updating the system during its serial production were pointed out. The introduction of the system into clinical practice on a wide scale was shown to hold promise

  2. Automated granularity to integrate digital information: the "Antarctic Treaty Searchable Database" case study

    Directory of Open Access Journals (Sweden)

    Paul Arthur Berkman

    2006-06-01

    Access to information is necessary, but not sufficient in our digital era. The challenge is to objectively integrate digital resources based on user-defined objectives for the purpose of discovering information relationships that facilitate interpretations and decision making. The Antarctic Treaty Searchable Database (http://aspire.nvi.net), which is in its sixth edition, provides an example of digital integration based on the automated generation of information granules that can be dynamically combined to reveal objective relationships within and between digital information resources. This case study further demonstrates that automated granularity and dynamic integration can be accomplished simply by utilizing the inherent structure of the digital information resources. Such information integration is relevant to library and archival programs that require long-term preservation of authentic digital resources.
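
    As a toy illustration (not drawn from the paper) of deriving granules from a document's inherent structure, the Python sketch below splits a treaty-like text into article-level granules that could then be indexed and recombined; the sample text and the granule format are assumptions.

        import re

        treaty_text = """
        Article I
        Antarctica shall be used for peaceful purposes only.
        Article II
        Freedom of scientific investigation shall continue.
        """

        def make_granules(text):
            """Split a structured text into (heading, body) granules using its inherent structure."""
            parts = re.split(r"(?m)^\s*(Article\s+[IVXLC]+)\s*$", text)
            # re.split keeps the captured headings at odd indices
            return [{"heading": h, "body": b.strip()} for h, b in zip(parts[1::2], parts[2::2])]

        for granule in make_granules(treaty_text):
            print(granule["heading"], "->", granule["body"])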

  3. Microfluidic perfusion system for automated delivery of temporal gradients to islets of Langerhans.

    Science.gov (United States)

    Zhang, Xinyu; Roper, Michael G

    2009-02-01

    A microfluidic perfusion system was developed for automated delivery of stimulant waveforms to cells within the device. The 3-layer glass/polymer device contained two pneumatic pumps, a 12 cm mixing channel, and a 0.2 microL cell chamber. By altering the flow rate ratio of the pumps, a series of output concentrations could be produced while a constant 1.43 +/- 0.07 microL/min flow rate was maintained. The output concentrations could be changed in time, producing step gradients and other waveforms, such as sine and triangle waves, at different amplitudes and frequencies. Waveforms were analyzed by comparing the amplitude of output waveforms to the amplitude of theoretical waveforms. Below a frequency of 0.0098 Hz, the output waveforms differed from the input waveforms by less than 20%. To reduce backflow of solutions into the pumps, the operational sequence of the valving program was modified and the valve seat depths were differentially etched. These modifications reduced backflow to the point that it was not detected. Gradients in glucose levels were applied in this work to stimulate single islets of Langerhans. Glucose gradients between 3 and 20 mM produced clear and intense oscillations of intracellular [Ca(2+)], indicating the system will be useful in future studies of cellular physiology.
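
    The underlying mixing arithmetic (two pumps whose flow-rate ratio sets the mixed concentration while the total flow stays constant) is simple to sketch. The following Python example, with assumed reservoir concentrations and an assumed waveform period, computes the pump rates needed to deliver a sine-wave glucose stimulus; it illustrates the principle, not the device's control software.

        import math

        Q_TOTAL = 1.43             # constant total flow rate, uL/min (value from the abstract)
        C_LOW, C_HIGH = 3.0, 20.0  # reservoir glucose concentrations, mM (assumed)

        def pump_rates(c_out):
            """Flow rates of the low- and high-concentration pumps giving a mixed output c_out."""
            frac_high = (c_out - C_LOW) / (C_HIGH - C_LOW)
            q_high = frac_high * Q_TOTAL
            return Q_TOTAL - q_high, q_high

        def sine_waveform(t_min, period_min=120.0, c_mid=11.5, amplitude=8.5):
            """Target glucose concentration for a sine-wave stimulus (period assumed)."""
            return c_mid + amplitude * math.sin(2 * math.pi * t_min / period_min)

        for t in range(0, 150, 30):
            c = sine_waveform(t)
            q_low, q_high = pump_rates(c)
            print(f"t={t:3d} min  C={c:5.1f} mM  Q_low={q_low:.3f}  Q_high={q_high:.3f} uL/min")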

  4. Multi-output decision trees for lesion segmentation in multiple sclerosis

    Science.gov (United States)

    Jog, Amod; Carass, Aaron; Pham, Dzung L.; Prince, Jerry L.

    2015-03-01

    Multiple Sclerosis (MS) is a disease of the central nervous system in which the protective myelin sheath of the neurons is damaged. MS leads to the formation of lesions, predominantly in the white matter of the brain and the spinal cord. The number and volume of lesions visible in magnetic resonance (MR) imaging (MRI) are important criteria for diagnosing and tracking the progression of MS. Locating and delineating lesions manually requires the tedious and expensive efforts of highly trained raters. In this paper, we propose an automated algorithm to segment lesions in MR images using multi-output decision trees. We evaluated our algorithm on the publicly available MICCAI 2008 MS Lesion Segmentation Challenge training dataset of 20 subjects, and showed improved results in comparison to state-of-the-art methods. We also evaluated our algorithm on an in-house dataset of 49 subjects with a true positive rate of 0.41 and a positive predictive value of 0.36.
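
    The paper's feature set and training details are not given in the abstract. As a generic sketch of the multi-output idea (one tree jointly predicting several per-voxel targets), scikit-learn's decision trees accept a multi-column target directly; the toy features and targets below are assumptions.

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)

        # Toy stand-in for per-voxel features (e.g., intensities from several MR contrasts)
        X = rng.normal(size=(1000, 4))
        # Two outputs per sample (e.g., a lesion label and a continuous membership value);
        # in the paper the joint output is richer, e.g. a patch of the membership map.
        y = np.column_stack([
            (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(float),
            1.0 / (1.0 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]))),
        ])

        tree = DecisionTreeRegressor(max_depth=6).fit(X, y)
        pred = tree.predict(X[:5])   # shape (5, 2): one row per voxel, one column per output
        print(pred)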

  5. Quality of radiomic features in glioblastoma multiforme: Impact of semi-automated tumor segmentation software

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Myung Eun; Kim, Jong Hyo [Center for Medical-IT Convergence Technology Research, Advanced Institutes of Convergence Technology, Seoul National University, Suwon (Korea, Republic of); Woo, Bo Yeong [Dept. of Transdisciplinary Studies, Graduate School of Convergence Science and Technology, Seoul National University, Suwon (Korea, Republic of); Ko, Micheal D.; Jamshidi, Neema [Dept. of Radiological Sciences, University of California, Los Angeles, Los Angeles (United States)

    2017-06-15

    The purpose of this study was to evaluate the reliability and quality of radiomic features in glioblastoma multiforme (GBM) derived from tumor volumes obtained with semi-automated tumor segmentation software. MR images of 45 GBM patients (29 males, 16 females) were downloaded from The Cancer Imaging Archive, in which post-contrast T1-weighted imaging and fluid-attenuated inversion recovery MR sequences were used. Two raters independently segmented the tumors using two semi-automated segmentation tools (TumorPrism3D and 3D Slicer). Regions of interest corresponding to contrast-enhancing lesion, necrotic portions, and non-enhancing T2 high signal intensity component were segmented for each tumor. A total of 180 imaging features were extracted, and their quality was evaluated in terms of stability, normalized dynamic range (NDR), and redundancy, using intra-class correlation coefficients, cluster consensus, and Rand Statistic. Our study results showed that most of the radiomic features in GBM were highly stable. Over 90% of 180 features showed good stability (intra-class correlation coefficient [ICC] ≥ 0.8), whereas only 7 features were of poor stability (ICC < 0.5). Most first order statistics and morphometric features showed moderate-to-high NDR (4 > NDR ≥1), while above 35% of the texture features showed poor NDR (< 1). Features were shown to cluster into only 5 groups, indicating that they were highly redundant. The use of semi-automated software tools provided sufficiently reliable tumor segmentation and feature stability; thus helping to overcome the inherent inter-rater and intra-rater variability of user intervention. However, certain aspects of feature quality, including NDR and redundancy, need to be assessed for determination of representative signature features before further development of radiomics.
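
    The abstract does not state which ICC form was used for the stability criterion. As one common choice, the sketch below computes a one-way random-effects ICC(1,1) for a single radiomic feature measured by two raters; the feature values are synthetic and the ICC variant is an assumption.

        import numpy as np

        def icc_oneway(ratings):
            """One-way random-effects ICC(1,1); ratings has shape (n_subjects, k_raters)."""
            ratings = np.asarray(ratings, dtype=float)
            n, k = ratings.shape
            grand_mean = ratings.mean()
            subject_means = ratings.mean(axis=1)
            ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
            ms_within = np.sum((ratings - subject_means[:, None]) ** 2) / (n * (k - 1))
            return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

        # Toy example: one feature extracted from 45 tumors by two raters
        rng = np.random.default_rng(1)
        truth = rng.normal(100.0, 20.0, size=45)
        ratings = np.column_stack([truth + rng.normal(0, 3, 45), truth + rng.normal(0, 3, 45)])
        print(f"ICC(1,1) = {icc_oneway(ratings):.3f}")   # a stable feature gives ICC close to 1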

  6. Quantized Synchronization of Chaotic Neural Networks With Scheduled Output Feedback Control.

    Science.gov (United States)

    Wan, Ying; Cao, Jinde; Wen, Guanghui

    In this paper, the synchronization problem of master-slave chaotic neural networks with remote sensors, quantization process, and communication time delays is investigated. The information communication channel between the master chaotic neural network and slave chaotic neural network consists of several remote sensors, with each sensor able to access only partial knowledge of output information of the master neural network. At each sampling instant, each sensor updates its own measurement and only one sensor is scheduled to transmit its latest information to the controller's side in order to update the control inputs for the slave neural network. Thus, such a communication process and control strategy are much more energy-saving compared with the traditional point-to-point scheme. Sufficient conditions for the output feedback control gain matrix, allowable length of sampling intervals, and upper bound of network-induced delays are derived to ensure the quantized synchronization of master-slave chaotic neural networks. Lastly, Chua's circuit system and a 4-D Hopfield neural network are simulated to validate the effectiveness of the main results.

  7. Foldover effect and energy output from a nonlinear pseudo-maglev harvester

    Science.gov (United States)

    Kecik, Krzysztof; Mitura, Andrzej; Warminski, Jerzy; Lenci, Stefano

    2018-01-01

    Dynamics analysis and energy harvesting of a nonlinear magnetic pseudo-levitation (pseudo-maglev) harvester under harmonic excitation is presented in this paper. The system, for selected parameters, has two stable possible solutions with different corresponding energy outputs. The main goal is to analyse the influence of resistance load on the multi-stability zones and energy recovery which can help to tune the system to improve the energy harvesting efficiency.

  8. A new method for automated dynamic calibration of tipping-bucket rain gauges

    Science.gov (United States)

    Humphrey, M.D.; Istok, J.D.; Lee, J.Y.; Hevesi, J.A.; Flint, A.L.

    1997-01-01

    Existing methods for dynamic calibration of tipping-bucket rain gauges (TBRs) can be time consuming and labor intensive. A new automated dynamic calibration system has been developed to calibrate TBRs with minimal effort. The system consists of a programmable pump, datalogger, digital balance, and computer. Calibration is performed in two steps: 1) pump calibration and 2) rain gauge calibration. Pump calibration ensures precise control of water flow rates delivered to the rain gauge funnel; rain gauge calibration ensures precise conversion of bucket tip times to actual rainfall rates. Calibration of the pump and one rain gauge for 10 selected pump rates typically requires about 8 h. Data files generated during rain gauge calibration are used to compute rainfall intensities and amounts from a record of bucket tip times collected in the field. The system was tested using 5 types of commercial TBRs (15.2-, 20.3-, and 30.5-cm diameters; 0.1-, 0.2-, and 1.0-mm resolutions) and using 14 TBRs of a single type (20.3-cm diameter; 0.1-mm resolution). Ten pump rates ranging from 3 to 154 mL min-1 were used to calibrate the TBRs and represented rainfall rates between 6 and 254 mm h-1 depending on the rain gauge diameter. All pump calibration results were very linear with R2 values greater than 0.99. All rain gauges exhibited large nonlinear underestimation errors (between 5% and 29%) that decreased with increasing rain gauge resolution and increased with increasing rainfall rate, especially for rates greater than 50 mm h-1. Calibration curves of bucket tip time against the reciprocal of the true pump rate for all rain gauges also were linear with R2 values of 0.99. Calibration data for the 14 rain gauges of the same type were very similar, as indicated by slope values that were within 14% of each other and ranged from about 367 to 417 s mm h-1. The developed system can calibrate TBRs efficiently, accurately, and virtually unattended and could be modified for use with other
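
    The reported calibration relation (bucket tip time linear in the reciprocal of the true rate) lends itself to a simple fit-and-invert step. The Python sketch below, using made-up calibration numbers for a 0.1 mm resolution gauge, fits that line and then converts field tip intervals into rainfall intensity; the data values are assumptions, not the paper's measurements.

        import numpy as np

        # Assumed calibration data: true rainfall rate (mm/h) and measured bucket tip interval (s)
        true_rate_mm_h = np.array([6, 12, 25, 50, 100, 150, 200, 254], dtype=float)
        tip_time_s = np.array([62.0, 30.8, 14.9, 7.6, 3.9, 2.7, 2.1, 1.7])

        # Bucket tip time is modeled as linear in the reciprocal of the true rate: t = a*(1/R) + b
        slope, intercept = np.polyfit(1.0 / true_rate_mm_h, tip_time_s, 1)
        print(f"slope = {slope:.1f} s mm/h, intercept = {intercept:.2f} s")

        def intensity_from_tip_time(t_s):
            """Invert the calibration line to get the true rainfall rate (mm/h) from a tip interval."""
            return slope / (t_s - intercept)

        # Field record of bucket tip intervals (s) converted to calibrated intensities
        for t in (60.0, 10.0, 3.0):
            print(f"tip interval {t:5.1f} s  ->  {intensity_from_tip_time(t):6.1f} mm/h")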

  9. Governmentally amplified output volatility

    Science.gov (United States)

    Funashima, Yoshito

    2016-11-01

    Predominant government behavior is decomposed by frequency into several periodic components: updating cycles of infrastructure, Kuznets cycles, fiscal policy over business cycles, and election cycles. Little is known, however, about the theoretical impact of such cyclical behavior in public finance on output fluctuations. Based on a standard neoclassical growth model, this study intends to examine the frequency at which public investment cycles are relevant to output fluctuations. We find an inverted U-shaped relationship between output volatility and length of cycle in public investment. This implies that periodic behavior in public investment at a certain frequency range can cause aggravated output resonance. Moreover, we present an empirical analysis to test the theoretical implication, using the U.S. data in the period from 1968 to 2015. The empirical results suggest that such resonance phenomena change from low to high frequency.

  10. Synchronization of two chaotic systems: Dynamic compensator approach

    International Nuclear Information System (INIS)

    Chen, C.-K.; Lai, T.-W.; Yan, J.-J.; Liao, T.-L.

    2009-01-01

    This study is concerned with the identical synchronization problem for a class of chaotic systems. A dynamic compensator is proposed to achieve the synchronization between master and slave chaotic systems using only the accessible output variables. A sufficient condition is also proposed to ensure the global synchronization. Furthermore, the strictly positive real (SPR) restriction, which is normally required in most of the observer-based synchronization schemes, is released in our approach. Two numerical examples are included to illustrate the proposed scheme.

  11. Theoretical study of the mode of the mass-selective nonstable axial output ions from the nonlinear trap

    International Nuclear Information System (INIS)

    Sudakov, M.Yu.

    2000-01-01

    The mode of mass-selective unstable ejection of ions from a three-dimensional quadrupole ion trap was studied theoretically. A method was developed to describe the ion coordinates over one period of the applied HF voltage while taking into account nonlinear distortions of the quadrupole potential. An equation for the envelope of the ion oscillations was derived in the form of the equation of motion of a point mass in an effective force field. The delay in ion ejection in the presence of negative even harmonics of the field was explained. It was shown that positive even distortions of the quadrupole potential favour realization of this mode, and the dynamics of the ions during ejection was studied [ru]

  12. Use of Computer vision for Automation of a Roadheader in Selective Cutting Operation

    OpenAIRE

    Fuentes-Cantillana , J.L.; Catalina , J.C.; Rodriguez , A.; Orteu , Jean-José; Dumahu , Didier

    1991-01-01

    State of the art of automation in roadheaders: most of the experimental work on roadheader automation has been centered on operations that imply cutting a complete section with a constant profile, or one showing only slight changes, and with an arrangement of the cutting sequence subject basically only to the restrictions arising from the geometrical or geotechnical conditions. Nowadays, the market offers systems able to automatically control the cutting of a fixe...

  13. Sliding-mode control of single input multiple output DC-DC converter

    Science.gov (United States)

    Zhang, Libo; Sun, Yihan; Luo, Tiejian; Wan, Qiyang

    2016-10-01

    Various voltage levels are required in a vehicle-mounted power system. A conventional solution is to utilize an independent multiple-output DC-DC converter, whose cost is high and whose control scheme is complicated. In this paper, we design a novel SIMO DC-DC converter with a sliding mode controller. The proposed converter can boost the voltage of a low-voltage input power source to a controllable high-voltage DC bus and middle-voltage output terminals, endowing the converter with a simple structure, low cost, and convenient control. In addition, the sliding mode control (SMC) technique applied in our converter can enhance the performance of a given SIMO DC-DC converter topology. The high-voltage DC bus can be regarded as the main power source for the high-voltage facilities of the vehicle-mounted power system, and the middle-voltage output terminals can supply power to the low-voltage equipment of an automobile. With respect to the control algorithm, an SMC-PID (proportional-integral-derivative) control algorithm is proposed for the first time, in which PID control is added to the conventional SMC algorithm. The PID control improves the dynamic ability of the SMC algorithm by establishing the corresponding sliding surface and introducing an attached integral of the voltage error, endowing the sliding-control system with excellent dynamic performance. Finally, we established a MATLAB/SIMULINK simulation model, tested the performance of the system, and built a hardware prototype based on a Digital Signal Processor (DSP). Results show that the sliding mode control is able to track a required trajectory and is robust against uncertainties and disturbances.
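
    The converter model and the controller gains are not given in the abstract. Under an assumed first-order averaged model of the output voltage and assumed gains, the sketch below shows the general shape of a sliding mode law whose surface is built from PID terms of the voltage error (with a smoothed switching term to limit chattering); it is not the paper's controller.

        import numpy as np

        # Assumed first-order averaged model of the output voltage: dv/dt = (-v + K*u) / TAU
        K, TAU = 48.0, 0.02              # gain (V per unit duty cycle) and time constant (s), assumed
        V_REF = 24.0                     # desired bus voltage (V), assumed
        KP, KI, KD = 1.0, 20.0, 0.002    # PID terms of the sliding surface, assumed
        K_SW = 0.4                       # switching gain of the sliding-mode term, assumed

        dt, v, integ, e_prev = 1e-4, 0.0, 0.0, V_REF
        for step in range(4000):
            e = V_REF - v
            integ += e * dt
            de = (e - e_prev) / dt
            e_prev = e
            s = KP * e + KI * integ + KD * de                     # PID-type sliding surface
            u = np.clip(V_REF / K + K_SW * np.tanh(s), 0.0, 1.0)  # equivalent control + smoothed switching
            v += dt * (-v + K * u) / TAU                          # plant update (Euler)
            if step % 800 == 0:
                print(f"t = {step * dt:5.2f} s   v = {v:6.2f} V   s = {s:8.2f}")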

  14. Improving Vibration Energy Harvesting Using Dynamic Magnifier

    Directory of Open Access Journals (Sweden)

    Almuatasim Alomari

    2016-01-01

    This paper reports on the design and evaluation of vibration-based piezoelectric energy-harvesting devices based on a polyvinylidene fluoride unimorph cantilever beam attached to the front of a dynamic magnifier. Experimental studies of the electromechanical frequency response functions are presented for the first three resonance frequencies. An analytical analysis is undertaken by applying the chain matrix in order to predict output voltage and output power with respect to the vibration frequency. The proposed harvester was modeled using MATLAB software and COMSOL Multiphysics to study the mode shapes and electrical output parameters. The voltage and power output of the energy harvester with a dynamic magnifier were 2.62 V and 13.68 mW, respectively, at the resonance frequency of the second mode. The modeling approach provides a basis to design energy harvesters exploiting dynamic magnification for improved performance and bandwidth. Potential applications of such energy-harvesting devices in the transport sector include autonomous structural health monitoring systems that often include embedded sensors, data acquisition, wireless communication, and energy harvesting systems.

  15. An Immune-inspired Adaptive Automated Intrusion Response System Model

    Directory of Open Access Journals (Sweden)

    Ling-xi Peng

    2012-09-01

    An immune-inspired adaptive automated intrusion response system model is proposed. Descriptions of self, non-self, immunocyte, memory detector, mature detector and immature detector of the network transactions, and the real-time network danger evaluation equations, are given. The automated response policies are then adaptively performed or adjusted according to the real-time network danger. Thus, the model not only accurately evaluates network attacks, but also greatly reduces response times and response costs.

  16. Automated builder and database of protein/membrane complexes for molecular dynamics simulations.

    Directory of Open Access Journals (Sweden)

    Sunhwan Jo

    2007-09-01

    Molecular dynamics simulations of membrane proteins have provided deeper insights into their functions and interactions with surrounding environments at the atomic level. However, compared to solvation of globular proteins, building a realistic protein/membrane complex is still challenging and requires considerable experience with simulation software. Membrane Builder in the CHARMM-GUI website (http://www.charmm-gui.org) helps users to build such a complex system using a web browser with a graphical user interface. Through a generalized and automated building process including system size determination as well as generation of the lipid bilayer, pore water, bulk water, and ions, a realistic membrane system with virtually any kind and shape of membrane protein can be generated in 5 minutes to 2 hours depending on the system size. Default values that were elaborated and tested extensively are given in each step to provide reasonable options and starting points for both non-expert and expert users. The efficacy of Membrane Builder is illustrated by its applications to 12 transmembrane and 3 interfacial membrane proteins, whose fully equilibrated systems with three different types of lipid molecules (DMPC, DPPC, and POPC) and two types of system shapes (rectangular and hexagonal) are freely available on the CHARMM-GUI website. One of the most significant advantages of using the web environment is that, if a problem is found, users can go back and re-generate the whole system before quitting the browser. Therefore, Membrane Builder provides an intuitive and easy way to build and simulate biologically important membrane systems.

  17. Decentralised output feedback control of Markovian jump interconnected systems with unknown interconnections

    Science.gov (United States)

    Li, Li-Wei; Yang, Guang-Hong

    2017-07-01

    The problem of decentralised output feedback control is addressed for Markovian jump interconnected systems with unknown interconnections and general transition rates (TRs) allowed to be unknown or known with uncertainties. A class of decentralised dynamic output feedback controllers are constructed, and a cyclic-small-gain condition is exploited to dispose the unknown interconnections so that the resultant closed-loop system is stochastically stable and satisfies an H∞ performance. With slack matrices to cope with the nonlinearities incurred by unknown and uncertain TRs in control synthesis, a novel controller design condition is developed in linear matrix inequality formalism. Compared with the existing works, the proposed approach leads to less conservatism. Finally, two examples are used to illustrate the effectiveness of the new results.

  18. Output Feedback Fractional-Order Nonsingular Terminal Sliding Mode Control of Underwater Remotely Operated Vehicles

    Directory of Open Access Journals (Sweden)

    Yaoyao Wang

    2014-01-01

    For the 4-DOF (degrees of freedom) trajectory tracking control problem of underwater remotely operated vehicles (ROVs) in the presence of model uncertainties and external disturbances, a novel output feedback fractional-order nonsingular terminal sliding mode control (FO-NTSMC) technique is introduced in light of the equivalent output injection sliding mode observer (SMO) and TSMC principle and fractional calculus technology. The equivalent output injection SMO is applied to reconstruct the full states in finite time. Meanwhile, the FO-NTSMC algorithm, based on a new proposed fractional-order switching manifold, is designed to stabilize the tracking error to equilibrium points in finite time. The corresponding stability analysis of the closed-loop system is presented using the fractional-order version of the Lyapunov stability theory. Comparative numerical simulation results are presented and analyzed to demonstrate the effectiveness of the proposed method. Finally, it is noteworthy that the proposed output feedback FO-NTSMC technique can be used to control a broad range of nonlinear second-order dynamical systems in finite time.

  19. Human-centred automation programme: review of experiment related studies

    International Nuclear Information System (INIS)

    Grimstad, Tone; Andresen, Gisle; Skjerve, Ann Britt Miberg

    2000-04-01

    Twenty-three empirical studies concerning automation and performance have been reviewed. The purposes of the review are to support experimental studies in the Human-Centred Automation (HCA) programme and to develop a general theory on HCA. Each study was reviewed with regard to twelve study characteristics: domain, type of study, purpose, definition of automation, variables, theoretical basis, models of operator performance, methods applied, experimental design, outcome, stated scope of results, strengths and limitations. Seven of the studies involved domain experts, the rest used students as participants. The majority of the articles originated from the aviation domain: only the study conducted in HAMMLAB considered process control in power plants. In the experimental studies, the independent variable was level of automation, or reliability of automation, while the most common dependent variables were workload, situation awareness, complacency, trust, and criteria of performance, e.g., number of correct responses or response time. Although the studies highlight important aspects of human-automation interaction, it is still unclear how system performance is affected. Nevertheless, the fact that many factors seem to be involved is taken as support for the system-oriented approach of the HCA programme. In conclusion, the review provides valuable input both to the design of experiments and to the development of a general theory. (Author). refs

  20. Usefulness of automated biopsy guns in image-guided biopsy

    International Nuclear Information System (INIS)

    Lee, Jung Hyung; Rhee, Chang Soo; Lee, Sung Moon; Kim, Hong; Woo, Sung Ku; Suh, Soo Jhi

    1994-01-01

    To evaluate the usefulness of automated biopsy guns in image-guided biopsy of the lung, liver, pancreas and other organs. Using automated biopsy devices, 160 biopsies of various anatomic sites were performed, 95 under ultrasonographic (US) guidance and 65 under computed tomographic (CT) guidance. We retrospectively analyzed histologic results and complications. Specimens were adequate for histopathologic diagnosis in 143 of the 160 patients (89.4%): diagnostic tissue was obtained in 130 (81.3%), suggestive tissue in 13 (8.1%), and non-diagnostic tissue in 14 (8.7%). Inadequate tissue was obtained in only 3 (1.9%). There was no statistically significant difference between US-guided and CT-guided percutaneous biopsy, and no significant complication occurred. We experienced mild complications in only 5 patients: 2 cases of hematuria and 2 of hematochezia in transrectal prostatic biopsy, and 1 minimal pneumothorax in CT-guided percutaneous lung biopsy, all of which resolved spontaneously. Image-guided biopsy using the automated biopsy gun was a simple, safe and accurate method of obtaining adequate specimens for histopathologic diagnosis.

  1. Usefulness of automated biopsy guns in image-guided biopsy

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jung Hyung; Rhee, Chang Soo; Lee, Sung Moon; Kim, Hong; Woo, Sung Ku; Suh, Soo Jhi [School of Medicine, Keimyung University, Daegu (Korea, Republic of)

    1994-12-15

    To evaluate the usefulness of automated biopsy guns in image-guided biopsy of the lung, liver, pancreas and other organs. Using automated biopsy devices, 160 biopsies of various anatomic sites were performed, 95 under ultrasonographic (US) guidance and 65 under computed tomographic (CT) guidance. We retrospectively analyzed histologic results and complications. Specimens were adequate for histopathologic diagnosis in 143 of the 160 patients (89.4%): diagnostic tissue was obtained in 130 (81.3%), suggestive tissue in 13 (8.1%), and non-diagnostic tissue in 14 (8.7%). Inadequate tissue was obtained in only 3 (1.9%). There was no statistically significant difference between US-guided and CT-guided percutaneous biopsy, and no significant complication occurred. We experienced mild complications in only 5 patients: 2 cases of hematuria and 2 of hematochezia in transrectal prostatic biopsy, and 1 minimal pneumothorax in CT-guided percutaneous lung biopsy, all of which resolved spontaneously. Image-guided biopsy using the automated biopsy gun was a simple, safe and accurate method of obtaining adequate specimens for histopathologic diagnosis.

  2. Energy management through building automation. Fundamentals - Technologies - Applications

    International Nuclear Information System (INIS)

    Aschendorf, Bernd

    2014-01-01

    The books available on the market consider the use of individual building bus systems only in isolation, without comparing them with one another with respect to cost-benefit and applicability. In this book, a total of 40 different systems, such as radio bus systems, PEHA-PHC, EIB, LCN, LON and PLC systems, are investigated for their possible use in the various categories of buildings. The comparison covers all levels of the automation pyramid, from the fieldbus level through the automation level to the control level, and considers in particular the usability for SmartMetering-based energy management. [de]

  3. Cross-covariance based global dynamic sensitivity analysis

    Science.gov (United States)

    Shi, Yan; Lu, Zhenzhou; Li, Zhao; Wu, Mengmeng

    2018-02-01

    For identifying the cross-covariance source of the dynamic output at each time instant for structural systems involving both input random variables and stochastic processes, a global dynamic sensitivity (GDS) technique is proposed. The GDS considers the effect of time-history inputs on the dynamic output. In the GDS, the cross-covariance decomposition is first developed to measure the contribution of the inputs to the output at different time instants, and an integration of the cross-covariance change over a specific time interval is employed to measure the whole contribution of the input to the cross-covariance of the output. Then, the GDS main effect indices and the GDS total effect indices can be easily defined after the integration, and they are effective in identifying the important inputs and the non-influential inputs on the cross-covariance of the output at each time instant, respectively. The established GDS analysis model has the same form as the classical ANOVA when it degenerates to the static case. After degeneration, the first-order partial effect reflects the individual effects of the inputs on the output variance, and the second-order partial effect reflects the interaction effects on the output variance, which illustrates the consistency of the proposed GDS indices and the classical variance-based sensitivity indices. An MCS procedure and the Kriging surrogate method are developed to solve the proposed GDS indices. Several examples are introduced to illustrate the significance of the proposed GDS analysis technique and the effectiveness of the proposed solution.
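
    The paper's estimators are not reproduced here. As a rough Monte Carlo illustration of attributing the variability of a dynamic output to individual inputs at each time instant, the sketch below applies the classical pick-freeze first-order estimator to a toy model y(t) = X1*sin(3t) + X2*t; the model, sample size and input distributions are assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        N = 20000

        def model(x1, x2, t):
            """Toy dynamic output driven by two independent random inputs."""
            return x1 * np.sin(3.0 * t) + x2 * t

        # Two independent sample matrices for the pick-freeze (Sobol') estimator
        A = rng.normal(size=(N, 2))
        B = rng.normal(size=(N, 2))

        for t in np.linspace(0.5, 2.0, 4):
            yA = model(A[:, 0], A[:, 1], t)
            var_y = yA.var()
            for i, name in enumerate(("X1", "X2")):
                AB = B.copy()
                AB[:, i] = A[:, i]               # freeze input i, resample the others
                yAB = model(AB[:, 0], AB[:, 1], t)
                S_i = np.cov(yA, yAB)[0, 1] / var_y if var_y > 0 else 0.0
                print(f"t = {t:4.2f}  first-order index of {name}: {S_i:5.2f}")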

  4. Automated Asteroseismic Analysis of Solar-type Stars

    DEFF Research Database (Denmark)

    Karoff, Christoffer; Campante, T.L.; Chaplin, W.J.

    2010-01-01

    The rapidly increasing volume of asteroseismic observations on solar-type stars has revealed a need for automated analysis tools. The reason for this is not only that individual analyses of single stars are rather time consuming, but more importantly that these large volumes of observations open... are calculated in a consistent way. Here we present a set of automated asteroseismic analysis tools. The main engine of this set of tools is an algorithm for modelling the autocovariance spectra of the stellar acoustic spectra, allowing us to measure not only the frequency of maximum power and the large..., radius, luminosity, effective temperature, surface gravity and age based on grid modeling. All the tools take into account the window function of the observations, which means that they work equally well for space-based photometry observations from e.g. the NASA Kepler satellite and ground-based velocity...

  5. Output synchronization control of ship replenishment operations: theory and experiments

    NARCIS (Netherlands)

    Kyrkjebø, E.; Pettersen, K.Y.; Wondergem, M.; Nijmeijer, H.

    2007-01-01

    A leader–follower synchronization output feedback control scheme is presented for the ship replenishment problem where only positions are measured. No mathematical model of the leader ship is required, and the control scheme relies on nonlinear observers to estimate velocity and acceleration of all

  6. Dynamic Communication Resource Negotiations

    Science.gov (United States)

    Chow, Edward; Vatan, Farrokh; Paloulian, George; Frisbie, Steve; Srostlik, Zuzana; Kalomiris, Vasilios; Apgar, Daniel

    2012-01-01

    Today's advanced network management systems can automate many aspects of tactical networking operations within a military domain. However, automation of joint and coalition tactical networking across multiple domains remains challenging. Due to potentially conflicting goals and priorities, human agreement is often required before implementation into the network operations. This is further complicated by incompatible network management systems and security policies, rendering it difficult to implement automatic network management and thus requiring manual human intervention in the communication protocols used at various network routers and endpoints. This process of manual human intervention is tedious, error-prone, and slow. In order to facilitate a better solution, we are pursuing a technology which makes network management automated, reliable, and fast. Automating the negotiation of common network communication parameters between different parties is the subject of this paper. We present the technology that enables inter-force dynamic communication resource negotiations to enable ad-hoc inter-operation in the field between force domains, without pre-planning. It will also enable a dynamic response to changing conditions within the area of operations. Our solution enables the rapid blending of intra-domain policies so that the forces involved are able to inter-operate effectively without overwhelming each other's networks with inappropriate or unwarranted traffic. It will evaluate the policy rules and configuration data for each of the domains, then generate a compatible inter-domain policy and configuration that will update the gateway systems between the two domains.

  7. Studies of criticality Monte Carlo method convergence: use of a deterministic calculation and automated detection of the transient

    International Nuclear Information System (INIS)

    Jinaphanh, A.

    2012-01-01

    Monte Carlo criticality calculation allows estimation of the effective multiplication factor as well as local quantities such as local reaction rates. Some configurations presenting weak neutronic coupling (high burn-up profile, complete reactor core, ...) may induce biased estimations for k_eff or reaction rates. In order to improve the robustness of the iterative Monte Carlo methods, a coupling with a deterministic code was studied. An adjoint flux is obtained by a deterministic calculation and then used in the Monte Carlo. The initial guess is then automated, the sampling of fission sites is modified and the random walk of neutrons is modified using splitting and Russian roulette strategies. An automated convergence detection method has been developed. It locates and suppresses the transient due to the initialization in an output series, applied here to k_eff and Shannon entropy. It relies on modeling stationary series by an order-1 autoregressive process and applying statistical tests based on a Student bridge statistic. This method can easily be extended to every output of an iterative Monte Carlo. Methods developed in this thesis are tested on different test cases. (author)
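
    The Student-bridge test itself is not reproduced here. As a simplified stand-in for the underlying idea (treat the converged part of an iterative Monte Carlo output as an order-1 autoregressive process and discard the initial transient), the sketch below scans candidate truncation points of a synthetic k_eff-like series and keeps the first one whose remaining halves agree within an AR(1)-corrected tolerance; the series, threshold and decision rule are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic iterative Monte Carlo output: exponential transient + AR(1) stationary noise
        n = 2000
        transient = 0.03 * np.exp(-np.arange(n) / 150.0)
        noise = np.zeros(n)
        for k in range(1, n):
            noise[k] = 0.6 * noise[k - 1] + rng.normal(0.0, 0.002)
        series = 1.000 + transient + noise        # e.g. cycle-wise k_eff estimates

        def first_stationary_cycle(x, z_crit=3.0):
            """Return the first index from which the series looks stationary (simplified test)."""
            for n0 in range(0, len(x) - 200, 50):
                y = x[n0:]
                phi = np.corrcoef(y[:-1], y[1:])[0, 1]     # AR(1) coefficient estimate
                half = len(y) // 2
                diff = y[:half].mean() - y[half:].mean()
                # variance of the mean of AR(1) data ~ sigma^2/m * (1 + phi)/(1 - phi)
                var_mean = y.var() / half * (1 + phi) / (1 - phi)
                if abs(diff) < z_crit * np.sqrt(2.0 * var_mean):
                    return n0
            return len(x)

        n0 = first_stationary_cycle(series)
        print(f"discard the first {n0} cycles; stationary mean = {series[n0:].mean():.5f}")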

  8. Automation: the competitive edge for HMOs and other alternative delivery systems.

    Science.gov (United States)

    Prussin, J A

    1987-12-01

    Until recently, many, if not most, Health Maintenance Organizations (HMO) were not automated. Moreover, HMOs that were automated tended to be automated only on a limited basis. Recently, however, the highly competitive marketplace within which HMOs and other Alternative Delivery Systems (ADS) exist has required that they operate at a maximum effectiveness and efficiency. Given the complex nature of ADSs, the volume of transactions in ADSs, the large number of members served by ADSs, and the numerous providers who are paid at different rates and on different bases by ADSs, it is impossible for an ADS to operate effectively or efficiently, let alone show optimal performance, without a sophisticated, comprehensive automated system. Reliable automated systems designed specifically to address ADS functions such as enrollment and premium billing, finance and accounting, medical information and patient management, and marketing have recently become available at a reasonable cost.

  9. Lean automation development : applying lean principles to the automation development process

    OpenAIRE

    Granlund, Anna; Wiktorsson, Magnus; Grahn, Sten; Friedler, Niklas

    2014-01-01

    By a broad empirical study it is indicated that automation development show potential of improvement. In the paper, 13 lean product development principles are contrasted to the automation development process and it is suggested why and how these principles can facilitate, support and improve the automation development process. The paper summarises a description of what characterises a lean automation development process and what consequences it entails. Main differences compared to current pr...

  10. Output-only cyclo-stationary linear-parameter time-varying stochastic subspace identification method for rotating machinery and spinning structures

    Science.gov (United States)

    Velazquez, Antonio; Swartz, R. Andrew

    2015-02-01

    Economical maintenance and operation are critical issues for rotating machinery and spinning structures containing blade elements, especially large slender dynamic beams (e.g., wind turbines). Structural health monitoring systems represent promising instruments to assure reliability and good performance from the dynamics of the mechanical systems. However, such devices have not been completely perfected for spinning structures. These sensing technologies are typically informed by both mechanistic models coupled with data-driven identification techniques in the time and/or frequency domain. Frequency response functions are popular but are difficult to realize autonomously for structures of higher order, especially when overlapping frequency content is present. Instead, time-domain techniques have shown to possess powerful advantages from a practical point of view (i.e. low-order computational effort suitable for real-time or embedded algorithms) and also are more suitable to differentiate closely-related modes. Customarily, time-varying effects are often neglected or dismissed to simplify this analysis, but such cannot be the case for sinusoidally loaded structures containing spinning multi-bodies. A more complex scenario is constituted when dealing with both periodic mechanisms responsible for the vibration shaft of the rotor-blade system and the interaction of the supporting substructure. Transformations of the cyclic effects on the vibrational data can be applied to isolate inertial quantities that are different from rotation-generated forces that are typically non-stationary in nature. After applying these transformations, structural identification can be carried out by stationary techniques via data-correlated eigensystem realizations. In this paper, an exploration of a periodic stationary or cyclo-stationary subspace identification technique is presented here for spinning multi-blade systems by means of a modified Eigensystem Realization Algorithm (ERA) via
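
    The paper's modified, cyclo-stationary ERA is not reproduced here. As a compact reminder of the standard ERA step it builds on (Hankel matrix of output data, SVD, reduced-order state matrix, eigenvalues giving modal frequency and damping), the sketch below identifies a synthetic single-output free-decay signal; the signal, model order and block sizes are assumptions.

        import numpy as np

        # Synthetic free-decay response of a 1-DOF system (2 Hz, 2% damping), sampled at 50 Hz
        dt, fn, zeta = 0.02, 2.0, 0.02
        wn = 2.0 * np.pi * fn
        t = np.arange(0.0, 20.0, dt)
        y = np.exp(-zeta * wn * t) * np.cos(wn * np.sqrt(1.0 - zeta**2) * t)

        def era(markov, order, rows=60, cols=200):
            """Basic Eigensystem Realization Algorithm on a sequence of Markov parameters."""
            H0 = np.array([markov[i:i + cols] for i in range(rows)])           # Hankel matrix
            H1 = np.array([markov[i + 1:i + 1 + cols] for i in range(rows)])   # shifted Hankel matrix
            U, s, Vt = np.linalg.svd(H0, full_matrices=False)
            Ur, Vr = U[:, :order], Vt[:order, :]
            s_inv_sqrt = np.diag(s[:order] ** -0.5)
            return s_inv_sqrt @ Ur.T @ H1 @ Vr.T @ s_inv_sqrt                  # reduced state matrix

        A = era(y, order=2)
        lam = np.log(np.linalg.eigvals(A)) / dt          # continuous-time poles
        print("identified frequencies (Hz):", np.round(np.abs(lam) / (2.0 * np.pi), 3))
        print("identified damping ratios  :", np.round(-lam.real / np.abs(lam), 4))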

  11. Automated pure-tone audiometry: an analysis of capacity, need, and benefit.

    Science.gov (United States)

    Margolis, Robert H; Morgan, Donald E

    2008-12-01

    The rationale for automating pure-tone audiometry based on the need for hearing tests and the capacity of audiologists to provide testing is presented. The personnel time savings from automated testing are analyzed. Some possible effects of automated testing on the profession are explored. Need for testing was based on prevalence of hearing impairment, number of normal hearing patients seen for testing, and an assumption of the frequency of testing. Capacity is based on the number of audiologists and the number of audiograms performed in a typical workday. Time savings were estimated from the average duration of an audiogram and an assumption that 80% can be automated. A large gap exists between the need and the capacity of audiologists to provide testing. Automating 80% of audiograms would only partially close the gap. A significant time savings could accrue, permitting reallocation of time for doctoral level services. Although certain jobs could be affected, the gap between capacity and need is so great that automated audiometry will not significantly affect employment. Automation could increase the number of hearing impaired patients that could be served. The reallocation of personnel time would be a positive change for our patients and our profession.

  12. An accurate system for onsite calibration of electronic transformers with digital output

    International Nuclear Information System (INIS)

    Zhi Zhang; Li Hongbin

    2012-01-01

    Calibration systems with digital output are used to replace conventional calibration systems because of the different operating principle and the digital output of electronic transformers. However, limited precision and unpredictable stability restrict their onsite application and even their development. Therefore, fully considering the factors that influence the accuracy of the calibration system and employing a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is kept at the nanosecond level by a novel synchronization method. In addition, an error correction algorithm based on the differential method with a second-order Hanning convolution window provides good suppression of frequency fluctuation and inter-harmonic interference. To verify the effectiveness, error calibration was carried out at the State Grid Electric Power Research Institute of China, and results show that the proposed system can reach accuracy class 0.05. Actual onsite calibration shows that the system has high accuracy and is easy to operate with satisfactory stability.
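
    The paper's differential method with a second-order Hanning convolution window is specific to that work and is not reproduced here. As a generic illustration of the underlying measurement, the sketch below estimates the fundamental phasor of a reference channel and of the transformer's digital output with a Hann-windowed DFT and reports ratio and phase errors; the signal parameters and injected errors are assumptions.

        import numpy as np

        FS, F0, N = 10000.0, 50.0, 2000    # sampling rate (Hz), nominal frequency (Hz), samples (assumed)
        t = np.arange(N) / FS

        # Reference channel and device under test (assumed: +0.04% ratio error, +1' phase error)
        ref = 100.0 * np.cos(2.0 * np.pi * F0 * t)
        dut = 100.04 * np.cos(2.0 * np.pi * F0 * t + np.deg2rad(1.0 / 60.0))

        def fundamental_phasor(x, f0=F0, fs=FS):
            """Estimate the complex phasor of the fundamental with a Hann-windowed DFT bin."""
            n = np.arange(len(x))
            return np.sum(np.hanning(len(x)) * x * np.exp(-2j * np.pi * f0 * n / fs))

        p_ref, p_dut = fundamental_phasor(ref), fundamental_phasor(dut)
        ratio_error_pct = (abs(p_dut) / abs(p_ref) - 1.0) * 100.0
        phase_error_arcmin = np.rad2deg(np.angle(p_dut / p_ref)) * 60.0
        print(f"ratio error = {ratio_error_pct:+.4f} %   phase error = {phase_error_arcmin:+.3f}'")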

  13. An accurate system for onsite calibration of electronic transformers with digital output.

    Science.gov (United States)

    Zhi, Zhang; Li, Hong-Bin

    2012-06-01

    Calibration systems with digital output are used to replace conventional calibration systems because of the different operating principle and the digital output of electronic transformers. However, limited precision and unpredictable stability restrict their onsite application and even their development. Therefore, fully considering the factors that influence the accuracy of the calibration system and employing a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is kept at the nanosecond level by a novel synchronization method. In addition, an error correction algorithm based on the differential method with a second-order Hanning convolution window provides good suppression of frequency fluctuation and inter-harmonic interference. To verify the effectiveness, error calibration was carried out at the State Grid Electric Power Research Institute of China, and results show that the proposed system can reach accuracy class 0.05. Actual onsite calibration shows that the system has high accuracy and is easy to operate with satisfactory stability.

  14. An accurate system for onsite calibration of electronic transformers with digital output

    Energy Technology Data Exchange (ETDEWEB)

    Zhi Zhang; Li Hongbin [CEEE of HuaZhong University of Science and Technology, Wuhan 430074 (China); State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Wuhan 430074 (China)

    2012-06-15

    Calibration systems with digital output are used to replace conventional calibration systems because of the different operating principle and the digital output of electronic transformers. However, limited precision and unpredictable stability restrict their onsite application and even their development. Therefore, fully considering the factors that influence the accuracy of the calibration system and employing a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is kept at the nanosecond level by a novel synchronization method. In addition, an error correction algorithm based on the differential method with a second-order Hanning convolution window provides good suppression of frequency fluctuation and inter-harmonic interference. To verify the effectiveness, error calibration was carried out at the State Grid Electric Power Research Institute of China, and results show that the proposed system can reach accuracy class 0.05. Actual onsite calibration shows that the system has high accuracy and is easy to operate with satisfactory stability.

  15. An accurate system for onsite calibration of electronic transformers with digital output

    Science.gov (United States)

    Zhi, Zhang; Li, Hong-Bin

    2012-06-01

    Calibration systems with digital output are used to replace conventional calibration systems because of the different operating principle and the digital output of electronic transformers. However, limited precision and unpredictable stability restrict their onsite application and even their development. Therefore, fully considering the factors that influence the accuracy of the calibration system and employing a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is kept at the nanosecond level by a novel synchronization method. In addition, an error correction algorithm based on the differential method with a second-order Hanning convolution window provides good suppression of frequency fluctuation and inter-harmonic interference. To verify the effectiveness, error calibration was carried out at the State Grid Electric Power Research Institute of China, and results show that the proposed system can reach accuracy class 0.05. Actual onsite calibration shows that the system has high accuracy and is easy to operate with satisfactory stability.

  16. LOAD THAT MAXIMIZES POWER OUTPUT IN COUNTERMOVEMENT JUMP

    Directory of Open Access Journals (Sweden)

    Pedro Jimenez-Reyes

    2016-02-01

    Introduction: One of the main problems faced by strength and conditioning coaches is how to objectively quantify and monitor the actual training load undertaken by athletes in order to maximize performance. It is well known that performance of explosive sports activities is largely determined by mechanical power. Objective: This study analysed the height at which maximal power output is generated and the corresponding load with which it is achieved in a group of trained male track and field athletes in the countermovement jump (CMJ) test with extra loads (CMJEL). Methods: Fifty national-level male athletes in sprinting and jumping performed a CMJ test with increasing loads up to a height of 16 cm. The relative load that maximized the mechanical power output (Pmax) was determined using a force platform synchronized with a linear encoder, estimating the power by peak power, average power and flight time in CMJ. Results: The load beyond which power output was no longer maximal corresponded to a jump height of 19.9 ± 2.35 cm, representing 99.1 ± 1% of the maximum power output. The load that maximizes power output was, in all cases, the load with which an athlete jumps a height of approximately 20 cm. Conclusion: These results highlight the importance of considering the height achieved in the CMJ with extra load rather than the power itself, because maximum power is always attained at approximately the same height. We advise preferential use of the height achieved in the CMJEL test, since it appears to be a valid indicator of an individual's actual neuromuscular potential, providing useful information for coaches and trainers when assessing the performance status of athletes and when quantifying and monitoring training loads, by measuring only the jump height in the CMJEL exercise.
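
    The study's own power estimation combined a force platform with a linear encoder; as a generic reminder of the force-plate-only calculation (vertical velocity from the impulse-momentum relation, instantaneous power as force times velocity), the sketch below uses a synthetic push-off force trace. Body mass, sampling rate and the force profile are assumptions.

        import numpy as np

        FS = 1000.0     # force-plate sampling rate, Hz (assumed)
        MASS = 75.0     # athlete body mass, kg (assumed)
        G = 9.81

        # Synthetic vertical ground-reaction force during the push-off phase only (assumed shape)
        t = np.arange(0.0, 0.6, 1.0 / FS)
        force = MASS * G + 600.0 * np.sin(np.pi * t / 0.6) ** 2     # N

        # Velocity from the impulse-momentum theorem, then instantaneous power
        accel = (force - MASS * G) / MASS
        velocity = np.cumsum(accel) / FS
        power = force * velocity

        print(f"peak power        = {power.max():7.1f} W")
        print(f"average power     = {power.mean():7.1f} W")
        print(f"take-off velocity = {velocity[-1]:.2f} m/s -> jump height = {velocity[-1]**2 / (2*G):.2f} m")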

  17. Automating Trend Analysis for Spacecraft Constellations

    Science.gov (United States)

    Davis, George; Cooter, Miranda; Updike, Clark; Carey, Everett; Mackey, Jennifer; Rykowski, Timothy; Powers, Edward I. (Technical Monitor)

    2001-01-01

    Spacecraft trend analysis is a vital mission operations function performed by satellite controllers and engineers, who perform detailed analyses of engineering telemetry data to diagnose subsystem faults and to detect trends that may potentially lead to degraded subsystem performance or failure in the future. It is this latter function that is of greatest importance, for careful trending can often predict or detect events that may lead to a spacecraft's entry into safe-hold. Early prediction and detection of such events could result in the avoidance of, or rapid return to service from, spacecraft safing, which not only results in reduced recovery costs but also in a higher overall level of service for the satellite system. Contemporary spacecraft trending activities are manually intensive and are primarily performed diagnostically after a fault occurs, rather than proactively to predict its occurrence. They also tend to rely on information systems and software that are outdated when compared to current technologies. When coupled with the fact that flight operations teams often have limited resources, proactive trending opportunities are limited, and detailed trend analysis is often reserved for critical responses to safe holds or other on-orbit events such as maneuvers. While the contemporary trend analysis approach has sufficed for current single-spacecraft operations, it will be unfeasible for NASA's planned and proposed space science constellations. Missions such as the Dynamics, Reconnection and Configuration Observatory (DRACO), for example, are planning to launch as many as 100 'nanospacecraft' to form a homogenous constellation. A simple extrapolation of resources and manpower based on single-spacecraft operations suggests that trending for such a large spacecraft fleet will be unmanageable, unwieldy, and cost-prohibitive. It is therefore imperative that an approach to automating the spacecraft trend analysis function be studied, developed, and applied to

  18. Drilling Automation Demonstrations in Subsurface Exploration for Astrobiology

    Science.gov (United States)

    Glass, Brian; Cannon, H.; Lee, P.; Hanagud, S.; Davis, K.

    2006-01-01

    This project proposes to study subsurface permafrost microbial habitats at a relevant Arctic Mars-analog site (Haughton Crater, Devon Island, Canada) while developing and maturing the subsurface drilling and drilling automation technologies that will be required by post-2010 missions. It builds on earlier drilling technology projects to add permafrost and ice-drilling capabilities to 5 m with a lightweight drill that will be automatically monitored and controlled in-situ. Frozen cores obtained with this drill under sterilized protocols will be used in testing three hypotheses pertaining to near-surface physical geology and ground H2O ice distribution, viewed as a habitat for microbial life in subsurface ice and ice-consolidated sediments. Automation technologies employed will demonstrate hands-off diagnostics and drill control, using novel vibrational dynamical analysis methods and model-based reasoning to monitor and identify drilling fault states before and during faults. Three field deployments, to a Mars-analog site with frozen impact crater fallback breccia, will support science goals, provide a rigorous test of drilling automation and lightweight permafrost drilling, and leverage past experience with the field site's particular logistics.

  19. Inverse relations in the patterns of muscle and center of pressure dynamics during standing still and movement postures.

    Science.gov (United States)

    Morrison, S; Hong, S L; Newell, K M

    2007-08-01

    The aim of this study was to investigate the postural center of pressure (COP) and surface muscle (EMG) dynamics of young adult participants under conditions where they were required to voluntarily produce random and regular sway motions in contrast to that of standing still. Frequency, amplitude and regularity measures of the COP excursion and EMG activity were assessed, as were measures of the coupling relations between the COP and EMG outputs. The results demonstrated that, even when standing still, there was a high degree of regularity in the COP output, with little difference in the modal frequency dynamics between standing still and preferred motion. Only during random conditions was a significantly greater degree of irregularity observed in the COP measures. The random-like movements were also characterized by a decrease in the level of synchrony between COP motion on the anterior-posterior (AP) and medio-lateral (ML) axes. In contrast, at muscle level, the random task resulted in the highest level of regularity (decreased ApEn) for the EMG output for soleus and tibialis anterior. The ability of individuals to produce a random motion was achieved through the decoupling of the COP motion in each dimension. This decoupling strategy was reflected by increased regularity of the EMG output as opposed to any significant change in the synchrony in the firing patterns of the muscles examined. Increased regularity across the individual muscles was accompanied by increased irregularity in COP dynamics, which can be characterized as a complexity tradeoff. Collectively, these findings support the view that the dynamics of muscle firing patterns does not necessarily map directly to the dynamics at the movement task level and vice versa.
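
    The regularity measure used (approximate entropy, ApEn) has a standard definition. The sketch below is a plain, unoptimized implementation with the usual parameter choices (m = 2, r = 0.2 times the signal standard deviation), applied to a synthetic regular and a random signal; the signals and parameters are illustrative assumptions.

        import numpy as np

        def approximate_entropy(x, m=2, r_factor=0.2):
            """Approximate entropy ApEn(m, r) of a 1-D signal (plain, unoptimized implementation)."""
            x = np.asarray(x, dtype=float)
            r = r_factor * x.std()
            n = len(x)

            def phi(mm):
                templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
                # Chebyshev distance between every pair of templates
                dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
                c = (dist <= r).mean(axis=1)      # self-matches included, as in the original definition
                return np.mean(np.log(c))

            return phi(m) - phi(m + 1)

        rng = np.random.default_rng(4)
        t = np.arange(0.0, 20.0, 0.02)
        regular = np.sin(2.0 * np.pi * 0.5 * t)            # highly regular signal -> low ApEn
        irregular = rng.normal(size=t.size)                # random signal -> high ApEn
        print(f"ApEn(regular)   = {approximate_entropy(regular):.3f}")
        print(f"ApEn(irregular) = {approximate_entropy(irregular):.3f}")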

  20. Both Automation and Paper.

    Science.gov (United States)

    Purcell, Royal

    1988-01-01

    Discusses the concept of a paperless society and the current situation in library automation. Various applications of automation and telecommunications are addressed, and future library automation is considered. Automation at the Monroe County Public Library in Bloomington, Indiana, is described as an example. (MES)

  1. Automated multiple failure FMEA

    International Nuclear Information System (INIS)

    Price, C.J.; Taylor, N.S.

    2002-01-01

    Failure mode and effects analysis (FMEA) is typically performed by a team of engineers working together. In general, they will only consider single point failures in a system. Consideration of all possible combinations of failures is impractical for all but the simplest example systems. Even if the task of producing the FMEA report for the full multiple failure scenario were automated, it would still be impractical for the engineers to read, understand and act on all of the results. This paper shows how approximate failure rates for components can be used to select the most likely combinations of failures for automated investigation using simulation. The important information can be automatically identified from the resulting report, making it practical for engineers to study and act on the results. The strategy described in the paper has been applied to a range of electrical subsystems, and the results have confirmed that the strategy described here works well for realistically complex systems
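
    The selection strategy described above, ranking failure combinations by approximate joint probability so that only the most likely ones are simulated, can be illustrated with a short sketch. This is a minimal example assuming independent failures and hypothetical rate values; the actual tool, component data, and cut-offs in the paper differ.

```python
from itertools import combinations
from math import prod

# Hypothetical per-component failure probabilities (illustrative only)
failure_prob = {"relay": 1e-3, "sensor": 5e-4, "motor": 2e-4, "fuse": 1e-5}

def likely_combinations(probs, max_order=2, threshold=1e-7):
    """Rank multi-point failure combinations by joint probability,
    keeping only those above a cut-off worth simulating."""
    ranked = []
    for k in range(1, max_order + 1):
        for combo in combinations(probs, k):
            p = prod(probs[c] for c in combo)   # independence assumed
            if p >= threshold:
                ranked.append((combo, p))
    return sorted(ranked, key=lambda item: item[1], reverse=True)

for combo, p in likely_combinations(failure_prob):
    print(combo, f"{p:.2e}")
```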

  2. Middle cerebral artery blood velocity depends on cardiac output during exercise with a large muscle mass

    NARCIS (Netherlands)

    Ide, K.; Pott, F.; van Lieshout, J. J.; Secher, N. H.

    1998-01-01

    We tested the hypothesis that pharmacological reduction of the increase in cardiac output during dynamic exercise with a large muscle mass would influence the cerebral blood velocity/perfusion. We studied the relationship between changes in cerebral blood velocity (transcranial Doppler), rectus

  3. Distributed dynamic simulations of networked control and building performance applications.

    Science.gov (United States)

    Yahiaoui, Azzedine

    2018-02-01

    The use of computer-based automation and control systems for smart sustainable buildings, often so-called Automated Buildings (ABs), has become an effective way to automatically control, optimize, and supervise a wide range of building performance applications over a network while minimizing energy consumption; this arrangement is generally referred to as the Building Automation and Control Systems (BACS) architecture. Instead of costly and time-consuming experiments, this paper focuses on using distributed dynamic simulations to analyze the real-time performance of network-based building control systems in ABs and improve the functions of the BACS technology. The paper also presents the development and design of a distributed dynamic simulation environment with the capability of representing the BACS architecture in simulation by run-time coupling two or more different software tools over a network. The application and capability of this new dynamic simulation environment are demonstrated by an experimental design in this paper.
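
    Run-time coupling of two tools over a network, as described above, is typically realized as a lock-step exchange of values at every time step. The toy client below sketches that idea under assumed host, port, and JSON message layout; the environment in the paper couples real simulation and control software over building-automation protocols such as BACnet, not this simplified protocol.

```python
import json
import socket

# Toy co-simulation client: at every time step, send the zone temperature to a
# controller process over TCP and read the actuator command back (lock-step
# coupling). Host, port, and message layout are illustrative assumptions only.
HOST, PORT, DT, STEPS = "127.0.0.1", 5050, 60.0, 10

def run_coupled_simulation():
    temp = 20.0                                    # initial zone temperature [degC]
    with socket.create_connection((HOST, PORT)) as sock:
        stream = sock.makefile("rw")
        for step in range(STEPS):
            stream.write(json.dumps({"t": step * DT, "temp": temp}) + "\n")
            stream.flush()
            command = json.loads(stream.readline())   # e.g. {"heating_kw": 2.5}
            # Crude zone model: temperature responds to the heating command
            temp += DT * (0.01 * command["heating_kw"] - 0.001 * (temp - 10.0))
    return temp
```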

  4. Automated Groundwater Screening

    International Nuclear Information System (INIS)

    Taylor, Glenn A.; Collard, Leonard B.

    2005-01-01

    The Automated Intruder Analysis has been extended to include an Automated Ground Water Screening option. This option screens 825 radionuclides while rigorously applying the National Council on Radiation Protection (NCRP) methodology. An extension to that methodology is presented to give a more realistic screening factor for those radionuclides which have significant daughters. The extension has the promise of reducing the number of radionuclides which must be tracked by the customer. By combining the Automated Intruder Analysis with the Automated Groundwater Screening a consistent set of assumptions and databases is used. A method is proposed to eliminate trigger values by performing rigorous calculation of the screening factor thereby reducing the number of radionuclides sent to further analysis. Using the same problem definitions as in previous groundwater screenings, the automated groundwater screening found one additional nuclide, Ge-68, which failed the screening. It also found that 18 of the 57 radionuclides contained in NCRP Table 3.1 failed the screening. This report describes the automated groundwater screening computer application

  5. Automated spectrophotometer for plutonium and uranium determination

    International Nuclear Information System (INIS)

    Jackson, D.D.; Hodgkins, D.J.; Hollen, R.M.; Rein, J.E.

    1975-09-01

    The automated spectrophotometer described is the first in a planned series of automated instruments for determining plutonium and uranium in nuclear fuel cycle materials. It has a throughput rate of 5 min per sample and uses a highly specific method of analysis for these elements. The range of plutonium and uranium measured is 0.5 to 14 mg and 1 to 14 mg, respectively, in 0.5 ml or less of solution with an option to pre-evaporate larger volumes. The precision of the measurements is about 0.02 mg standard deviation over the range corresponding to about 2 rel percent at the 1-mg level and 0.2 rel percent at the 10-mg level. The method of analysis involves the extraction of tetrapropylammonium plutonyl and uranyl trinitrate complexes into 2-nitropropane and the measurement of the optical absorbances in the organic phase at unique peak wavelengths. Various aspects of the chemistry associated with the method are presented. The automated spectrophotometer features a turntable that rotates as many as 24 samples in tubes to a series of stations for the sequential chemical operations of reagent addition and phase mixing to effect extraction, and then to a station for the absorbance measurement. With this system, the complications of sample transfers and flow-through cells are avoided. The absorbance measurement system features highly stable interference filters and a microcomputer that controls the timing sequence and operation of the system components. Output is a paper tape printout of three numbers: a four-digit number proportional to the quantity of plutonium or uranium, a two-digit number that designates the position of the tube in the turntable, and a one-digit number that designates whether plutonium or uranium was determined. Details of the mechanical and electrical components of the instrument and of the hardware and software aspects of the computerized control system are provided

  6. Yonjung High-Speed Railway Bridge Assessment Using Output-Only Structural Health Monitoring Measurements under Train Speed Changing

    Directory of Open Access Journals (Sweden)

    Mosbeh R. Kaloop

    2016-01-01

    Yonjung Bridge is a hybrid multispan bridge that is designed to transport high-speed trains (HEMU-430X) with a maximum operating speed of 430 km/h. The bridge consists of simply supported prestressed concrete (PSC) and composite steel girders to carry double railway tracks. The structural health monitoring system (SHM) is designed and installed to investigate and assess the performance of the bridge in terms of acceleration and deformation measurements under different speeds of the passing train. The SHM measurements are investigated in both time and frequency domains; in addition, several identification models are examined to assess the performance of the bridge. The conclusions show that the maximum deflection and acceleration of the bridge are within the design limits specified by the Korean and European codes. The evaluation of the model-identification parameters shows that the quasistatic and dynamic deformations of the PSC and steel girders differ and become less correlated at higher train speeds. Finally, the variation of the frequency content of the dynamic deformations of the girders is negligible when high speeds are considered.

  7. Robust non-fragile finite-frequency H∞ static output-feedback control for active suspension systems

    Science.gov (United States)

    Wang, Gang; Chen, Changzheng; Yu, Shenbo

    2017-07-01

    This paper deals with the problem of non-fragile H∞ static output-feedback control of vehicle active suspension systems with a finite-frequency constraint. The control objective is to improve ride comfort within the given frequency range and ensure the hard constraints in the time domain. Moreover, in order to enhance the robustness of the controller, the control gain perturbation is also considered in controller synthesis. Firstly, a new non-fragile H∞ finite-frequency control condition is established by using the generalized Kalman-Yakubovich-Popov (GKYP) lemma. Secondly, the static output-feedback control gain is directly derived by using a non-iteration algorithm. Different from the existing iteration LMI results, the static output-feedback design is simple and less conservative. Finally, the proposed control algorithm is applied to a quarter-car active suspension model with actuator dynamics; numerical results show the effectiveness and merits of the proposed method.
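
    For readers unfamiliar with the static output-feedback structure referred to above, the closed loop can be written compactly. This is a generic sketch of the structure, not the paper's specific suspension model or its GKYP-based synthesis condition.

```latex
% Plant with disturbance w, control input u, and measured output y
\dot{x}(t) = A\,x(t) + B_u\,u(t) + B_w\,w(t), \qquad y(t) = C_y\,x(t)

% Static output feedback: the gain K acts directly on the measurement
u(t) = K\,y(t) \;\Rightarrow\; \dot{x}(t) = (A + B_u K C_y)\,x(t) + B_w\,w(t)

% Finite-frequency H-infinity objective on a performance output z = C_z x:
% bound the gain from w to z only inside the frequency band of interest
\sup_{\omega_1 \le \omega \le \omega_2} \bar{\sigma}\!\left(G_{zw}(j\omega)\right) < \gamma
```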

  8. Land Cover and Land Use Classification with TWOPAC: towards Automated Processing for Pixel- and Object-Based Image Classification

    Directory of Open Access Journals (Sweden)

    Stefan Dech

    2012-09-01

    We present a novel and innovative automated processing environment for the derivation of land cover (LC) and land use (LU) information. This processing framework, named TWOPAC (TWinned Object and Pixel based Automated classification Chain), enables the standardized, independent, user-friendly, and comparable derivation of LC and LU information, with minimized manual classification labor. TWOPAC allows classification of multi-spectral and multi-temporal remote sensing imagery from different sensor types. TWOPAC enables not only pixel-based classification, but also allows classification based on object-based characteristics. Classification is based on a Decision Tree (DT) approach for which the well-known C5.0 code has been implemented, which builds decision trees based on the concept of information entropy. TWOPAC enables automatic generation of the decision tree classifier based on a C5.0-retrieved ascii-file, as well as fully automatic validation of the classification output via sample-based accuracy assessment. Envisaging the automated generation of standardized land cover products, as well as area-wide classification of large amounts of data in preferably a short processing time, standardized interfaces for process control, Web Processing Services (WPS), as introduced by the Open Geospatial Consortium (OGC), are utilized. TWOPAC’s functionality to process geospatial raster or vector data via web resources (server, network) enables TWOPAC’s usability independent of any commercial client or desktop software and allows for large-scale data processing on servers. Furthermore, the components of TWOPAC were built from open-source code components and are implemented as a plug-in for Quantum GIS software for easy handling of the classification process from the user’s perspective.
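
    TWOPAC's classifier builds decision trees by information entropy (the C5.0 approach mentioned above). The snippet below is not TWOPAC or C5.0 itself; it only illustrates the entropy-based decision-tree idea with scikit-learn (assumed available) on made-up spectral features.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Made-up training samples: rows are pixels/objects, columns are spectral bands.
rng = np.random.default_rng(0)
features = rng.random((200, 4))
labels = (features[:, 0] + 0.5 * features[:, 2] > 0.8).astype(int)  # e.g. 1 = "forest"

# criterion="entropy" mirrors the information-entropy splitting used by C5.0-style trees
tree = DecisionTreeClassifier(criterion="entropy", max_depth=4)
tree.fit(features, labels)

new_pixels = rng.random((5, 4))
print(tree.predict(new_pixels))          # predicted land-cover class per pixel
```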

  9. Predictive Sea State Estimation for Automated Ride Control and Handling - PSSEARCH

    Science.gov (United States)

    Huntsberger, Terrance L.; Howard, Andrew B.; Aghazarian, Hrand; Rankin, Arturo L.

    2012-01-01

    PSSEARCH provides predictive sea state estimation, coupled with closed-loop feedback control for automated ride control. It enables a manned or unmanned watercraft to determine the 3D map and sea state conditions in its vicinity in real time. Adaptive path-planning/replanning software and a control surface management system will then use this information to choose the best settings and heading relative to the seas for the watercraft. PSSEARCH looks ahead and anticipates potential impact of waves on the boat and is used in a tight control loop to adjust trim tabs, course, and throttle settings. The software uses sensory inputs including IMU (Inertial Measurement Unit), stereo, radar, etc. to determine the sea state and wave conditions (wave height, frequency, wave direction) in the vicinity of a rapidly moving boat. This information can then be used to plot a safe path through the oncoming waves. The main issues in determining a safe path for sea surface navigation are: (1) deriving a 3D map of the surrounding environment, (2) extracting hazards and the sea surface state from the imaging sensors/map, and (3) planning a path and control surface settings that avoid the hazards, accomplish the mission navigation goals, and mitigate crew injuries from excessive heave, pitch, and roll accelerations while taking into account the dynamics of the sea surface state. The first part is solved using a wide baseline stereo system, where 3D structure is determined from two calibrated pairs of visual imagers. Once the 3D map is derived, anything above the sea surface is classified as a potential hazard and a surface analysis gives a static snapshot of the waves. Dynamics of the wave features are obtained from a frequency analysis of motion vectors derived from the orientation of the waves during a sequence of inputs. Fusion of the dynamic wave patterns with the 3D maps and the IMU outputs is used for efficient safe path planning.
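
    The frequency analysis of wave motion described above can be sketched in its simplest form as a spectral peak estimate on a single heave signal. This is a hypothetical, simplified illustration; the flight software fuses stereo maps, radar, and IMU data and is considerably more involved.

```python
import numpy as np

def dominant_wave_frequency(heave, sample_rate_hz):
    """Estimate the dominant wave encounter frequency from a heave time series.

    Simplified illustration: real sea-state estimation fuses stereo maps,
    radar, and IMU data rather than a single 1-D signal.
    """
    heave = np.asarray(heave) - np.mean(heave)           # remove static offset
    spectrum = np.abs(np.fft.rfft(heave)) ** 2            # power spectrum
    freqs = np.fft.rfftfreq(len(heave), d=1.0 / sample_rate_hz)
    peak = np.argmax(spectrum[1:]) + 1                    # skip the DC bin
    return freqs[peak]

# Example: 0.4 Hz swell sampled at 20 Hz with measurement noise
t = np.arange(0, 60, 1 / 20)
heave = 0.8 * np.sin(2 * np.pi * 0.4 * t) + 0.05 * np.random.randn(t.size)
print(dominant_wave_frequency(heave, 20))                 # close to 0.4
```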

  10. Policy challenges of increasing automation in driving

    Directory of Open Access Journals (Sweden)

    Ata M. Khan

    2012-03-01

    The convergence of information and communication technologies (ICT) with automotive technologies has already resulted in automation features in road vehicles, and this trend is expected to continue owing to consumer demand, dropping costs of components, and improved reliability. While the automation features introduced so far are mainly in the form of information and driver warning technologies (classified as level I, pre-2010), developments in the medium term (level II, 2010–2025) are expected to exhibit connected cognitive vehicle features and encompass an increasing degree of automation in the form of advanced driver assistance systems. Although autonomous vehicles have been developed for research purposes and are being tested in controlled driving missions, the autonomous driving case is only a long-term (level III, 2025+) scenario. This paper contributes knowledge on technological forecasts regarding automation, policy challenges for each level of technology development and application context, and the essential instrument of cost-effectiveness for policy analysis, which enables policy decisions on the automation systems to be assessed in a consistent and balanced manner. The cost of a system per vehicle is viewed against its effectiveness in meeting policy objectives of improving safety, efficiency, mobility, and convenience and reducing environmental effects. Example applications are provided that illustrate the contribution of the methodology in providing information for supporting policy decisions. Given the uncertainties in system costs as well as effectiveness, the tool for assessing policies on future-generation features provides probabilistic and utility-theoretic analysis capability. The policy issues defined and the assessment framework enable the resolution of policy challenges while allowing worthy innovative automation in driving to enhance future road transportation.

  11. Video Enhancement and Dynamic Range Control of HDR Sequences for Automotive Applications

    Directory of Open Access Journals (Sweden)

    Giovanni Ramponi

    2007-01-01

    CMOS video cameras with high dynamic range (HDR) output are particularly suitable for driving assistance applications, where lighting conditions can vary strongly, going from direct sunlight to dark areas in tunnels. However, common visualization devices can only handle a low dynamic range, and thus a dynamic range reduction is needed. Many algorithms have been proposed in the literature to reduce the dynamic range of still pictures. Nevertheless, extending the available methods to video is not straightforward, due to the peculiar nature of video data. We propose an algorithm for both reducing the dynamic range of video sequences and enhancing their appearance, thus improving visual quality and reducing temporal artifacts. We also provide an optimized version of our algorithm for a viable hardware implementation on an FPGA. The feasibility of this implementation is demonstrated by means of a case study.
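
    A common starting point for the dynamic range reduction discussed above is a global tone-mapping curve driven by the log-average luminance of the scene, with the adaptation level smoothed over time to suppress flicker between frames. The sketch below is a minimal, generic example of that idea; it is not the authors' algorithm, whose details and FPGA optimizations are given in the paper.

```python
import numpy as np

def tonemap_frame(hdr_frame, prev_log_mean=None, alpha=0.9):
    """Map a linear HDR frame (float, arbitrary range) to an 8-bit LDR frame.

    Generic global tone curve with a temporally smoothed adaptation level;
    illustrative only, not the algorithm proposed in the paper.
    """
    eps = 1e-6
    log_mean = np.mean(np.log(hdr_frame + eps))           # scene adaptation level
    if prev_log_mean is not None:                         # smooth across frames
        log_mean = alpha * prev_log_mean + (1 - alpha) * log_mean
    key = np.exp(log_mean)
    scaled = hdr_frame / (hdr_frame + key)                # compress highlights
    ldr = np.clip(scaled * 255.0, 0, 255).astype(np.uint8)
    return ldr, log_mean

# Usage across a sequence: carry log_mean forward to reduce temporal artifacts
state = None
for frame in (np.random.rand(480, 640) * 1e4 for _ in range(3)):
    ldr, state = tonemap_frame(frame, state)
```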

  12. Integrating wind output with bulk power operations and wholesale electricity markets

    International Nuclear Information System (INIS)

    Hirst, E.

    2002-01-01

    Wind farms have three characteristics that complicate their widespread application as an electricity resource: limited control, unpredictability and variability. Therefore the integration of wind output into bulk power electric systems is qualitatively different from that of other types of generators. The electric system operator must move other generators up or down to offset the time-varying wind fluctuations. Such movements raise the costs of fuel and maintenance for these other generators. Not only is wind power different, it is new. The operators of bulk power systems have limited experience in integrating wind output into the larger system. As a consequence, market rules that treat wind fairly - neither subsidizing nor penalizing its operation - have not yet been developed. The lack of data and analytical methods encourages wind advocates and sceptics to rely primarily on their biases and beliefs in suggesting how wind should be integrated into bulk power systems. This project helps fill this data and analysis gap. Specifically, it develops and applies a quantitative method for the integration of a wind resource into a large electric system. The method permits wind to bid its output into a short-term forward market (specifically, an hour-ahead energy market) or to appear in real time and accept only intrahour and hourly imbalance payments for the unscheduled energy it delivers to the system. Finally, the method analyses the short-term (minute-to-minute) variation in wind output to determine the regulation requirement the wind resource imposes on the electrical system. (author)
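
    The regulation requirement mentioned above is driven by the minute-to-minute variability of wind output. One simple way to quantify it is the standard deviation of the one-minute changes scaled to a coverage level, as in the hedged sketch below; the paper's method and data treatment are more detailed.

```python
import numpy as np

def regulation_requirement(wind_mw, coverage_sigma=3.0):
    """Rough regulation estimate from minute-resolution wind output (MW).

    Takes the standard deviation of minute-to-minute changes and scales it
    to a coverage level; a simplification of the analysis in the paper.
    """
    deltas = np.diff(np.asarray(wind_mw, dtype=float))    # MW change per minute
    return coverage_sigma * np.std(deltas)

# Example with synthetic minute data for a 100 MW wind farm
minutes = np.arange(0, 24 * 60)
output = 50 + 20 * np.sin(minutes / 180.0) + 2.0 * np.random.randn(minutes.size)
print(round(regulation_requirement(output), 2), "MW of regulation")
```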

  13. Thermal imagers: from ancient analog video output to state-of-the-art video streaming

    Science.gov (United States)

    Haan, Hubertus; Feuchter, Timo; Münzberg, Mario; Fritze, Jörg; Schlemmer, Harry

    2013-06-01

    The video output of thermal imagers stayed essentially unchanged for almost two decades. When the famous Common Modules were employed, the thermal image was at first presented to the observer only in the eyepiece. In the early 1990s TV cameras were attached and the standard output was CCIR. In the civil camera market, output standards changed to digital formats a decade ago, with digital video streaming now being state-of-the-art. The reasons why the output technique in the thermal world remained unchanged for so long are the very conservative view of the military community, the long planning and turn-around times of programs, and the slower growth in pixel count of TIs compared to consumer cameras. With megapixel detectors, the CCIR output format is no longer sufficient. The paper discusses state-of-the-art compression and streaming solutions for TIs.

  14. Sparse dynamical Boltzmann machine for reconstructing complex networks with binary dynamics

    Science.gov (United States)

    Chen, Yu-Zhong; Lai, Ying-Cheng

    2018-03-01

    Revealing the structure and dynamics of complex networked systems from observed data is a problem of current interest. Is it possible to develop a completely data-driven framework to decipher the network structure and different types of dynamical processes on complex networks? We develop a model named sparse dynamical Boltzmann machine (SDBM) as a structural estimator for complex networks that host binary dynamical processes. The SDBM attains its topology according to that of the original system and is capable of simulating the original binary dynamical process. We develop a fully automated method based on compressive sensing and a clustering algorithm to construct the SDBM. We demonstrate, for a variety of representative dynamical processes on model and real world complex networks, that the equivalent SDBM can recover the network structure of the original system and simulate its dynamical behavior with high precision.
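
    The compressive-sensing step that recovers each node's sparse set of incoming links can be illustrated with an L1-regularized regression. The toy sketch below assumes the binary states of all nodes are observed over time and uses scikit-learn's Lasso as a stand-in; the actual SDBM construction in the paper also involves clustering and the Boltzmann-machine form.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy data: binary states of N nodes over T time steps (rows = time).
rng = np.random.default_rng(1)
N, T = 20, 400
states = rng.integers(0, 2, size=(T, N)).astype(float)

def estimate_incoming_links(states, node, alpha=0.05):
    """Sparse estimate of which nodes influence `node` at the next time step.

    L1-regularized regression of the node's next state on all current states;
    a stand-in for the compressive-sensing step, not the full SDBM method.
    """
    X, y = states[:-1], states[1:, node]
    model = Lasso(alpha=alpha, max_iter=5000).fit(X, y)
    return np.flatnonzero(np.abs(model.coef_) > 1e-3)    # indices of likely links

print(estimate_incoming_links(states, node=3))
```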

  15. Gro2mat: a package to efficiently read gromacs output in MATLAB.

    Science.gov (United States)

    Dien, Hung; Deane, Charlotte M; Knapp, Bernhard

    2014-07-30

    Molecular dynamics (MD) simulations are a state-of-the-art computational method used to investigate molecular interactions at the atomic scale. Interaction processes out of experimental reach can be monitored using MD software, such as Gromacs. Here, we present the gro2mat package that allows fast and easy access to Gromacs output files from Matlab. Gro2mat enables direct parsing of the most common Gromacs output formats, including the binary xtc-format. No openly available Matlab parser currently exists for this format. The xtc reader is orders of magnitude faster than other available pdb/ascii workarounds. Gro2mat is especially useful for scientists with an interest in quick prototyping of new mathematical and statistical approaches for Gromacs trajectory analyses. © 2014 Wiley Periodicals, Inc.

  16. Automated detection of slum area change in Hyderabad, India using multitemporal satellite imagery

    Science.gov (United States)

    Kit, Oleksandr; Lüdeke, Matthias

    2013-09-01

    This paper presents an approach to automated identification of slum area change patterns in Hyderabad, India, using multi-year and multi-sensor very high resolution satellite imagery. It relies upon a lacunarity-based slum detection algorithm, combined with Canny- and LSD-based imagery pre-processing routines. This method outputs plausible and spatially explicit slum locations for the whole urban agglomeration of Hyderabad in years 2003 and 2010. The results indicate a considerable growth of area occupied by slums between these years and allow identification of trends in slum development in this urban agglomeration.
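
    The lacunarity measure underlying the slum-detection step can be computed with a gliding-box algorithm over a binary edge image (produced, for example, by the Canny pre-processing mentioned above). The NumPy sketch below is a simplified illustration; the box size and thresholds are assumptions, not the values used in the study.

```python
import numpy as np

def lacunarity(binary_img, box_size=16):
    """Gliding-box lacunarity of a binary image patch.

    Lacunarity = second moment / squared first moment of the box "mass"
    (number of set pixels per box). Higher values indicate more
    heterogeneous, gappy texture, which helps separate slum morphology.
    """
    img = np.asarray(binary_img, dtype=float)
    rows = img.shape[0] - box_size + 1
    cols = img.shape[1] - box_size + 1
    masses = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            masses[i, j] = img[i:i + box_size, j:j + box_size].sum()
    return masses.var() / (masses.mean() ** 2) + 1.0

# Example on a random binary "edge map"
edges = (np.random.rand(64, 64) > 0.8).astype(np.uint8)
print(lacunarity(edges))
```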

  17. Open-Source Assisted Laboratory Automation through Graphical User Interfaces and 3D Printers: Application to Equipment Hyphenation for Higher-Order Data Generation.

    Science.gov (United States)

    Siano, Gabriel G; Montemurro, Milagros; Alcaráz, Mirta R; Goicoechea, Héctor C

    2017-10-17

    Higher-order data generation implies some automation challenges, which are mainly related to the hidden programming languages and electronic details of the equipment. When techniques and/or equipment hyphenation are the key to obtaining higher-order data, the required simultaneous control of them demands funds for new hardware, software, and licenses, in addition to very skilled operators. In this work, we present Design of Inputs-Outputs with Sikuli (DIOS), a free and open-source code program that provides a general framework for the design of automated experimental procedures without prior knowledge of programming or electronics. Basically, instruments and devices are considered as nodes in a network, and every node is associated both with physical and virtual inputs and outputs. Virtual components, such as graphical user interfaces (GUIs) of equipment, are handled by means of image recognition tools provided by Sikuli scripting language, while handling of their physical counterparts is achieved using an adapted open-source three-dimensional (3D) printer. Two previously reported experiments of our research group, related to fluorescence matrices derived from kinetics and high-performance liquid chromatography, were adapted to be carried out in a more automated fashion. Satisfactory results, in terms of analytical performance, were obtained. Similarly, advantages derived from open-source tools assistance could be appreciated, mainly in terms of lesser intervention of operators and cost savings.
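
    The image-recognition-driven control of instrument GUIs described above follows the SikuliX scripting pattern, where screenshots of buttons effectively serve as the "API". Below is a minimal, hypothetical script outline: the PNG file names and the instrument GUI are assumptions, and DIOS layers its own input/output node model on top of this kind of script.

```python
# SikuliX-style script (runs inside the SikuliX environment, not plain CPython).
# The PNG files are screenshots of the instrument GUI taken beforehand;
# their names here are purely illustrative.

wait("spectrometer_ready.png", 120)      # block until the GUI shows the ready state
click("start_acquisition.png")           # press the virtual "start" button
type("sample_42")                        # fill in the sample identifier field
click("save_results.png")                # trigger export of the measurement

if exists("error_dialog.png"):           # rudimentary fault handling
    click("dismiss_button.png")
```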

  18. Flexible End2End Workflow Automation of Hit-Discovery Research.

    Science.gov (United States)

    Holzmüller-Laue, Silke; Göde, Bernd; Thurow, Kerstin

    2014-08-01

    The article considers a new approach to more complex laboratory automation at the workflow layer. The authors propose the automation of end2end workflows. Combining all relevant subprocesses, whether automated or manually performed, and regardless of the organizational unit in which they run, results in end2end processes that include all result dependencies. The end2end approach focuses not only on the classical experiments in synthesis or screening, but also on auxiliary processes such as the production and storage of chemicals, cell culturing, and maintenance, as well as preparatory activities and analyses of experiments. Furthermore, connecting control flow and data flow in the same process model reduces the effort of data transfer between the involved systems, including the necessary data transformations. This end2end laboratory automation can be realized effectively with modern methods of business process management (BPM). The approach is based on the new standardization of the process-modeling notation, Business Process Model and Notation 2.0. In drug discovery, several scientific disciplines act together with manifold modern methods, technologies, and a wide range of automated instruments for the discovery and design of target-based drugs. The article discusses the novel BPM-based automation concept with an implemented example of high-throughput screening of previously synthesized compound libraries. © 2014 Society for Laboratory Automation and Screening.

  19. A Fully Automated Approach to Spike Sorting.

    Science.gov (United States)

    Chung, Jason E; Magland, Jeremy F; Barnett, Alex H; Tolosa, Vanessa M; Tooker, Angela C; Lee, Kye Y; Shah, Kedar G; Felix, Sarah H; Frank, Loren M; Greengard, Leslie F

    2017-09-13

    Understanding the detailed dynamics of neuronal networks will require the simultaneous measurement of spike trains from hundreds of neurons (or more). Currently, approaches to extracting spike times and labels from raw data are time consuming, lack standardization, and involve manual intervention, making it difficult to maintain data provenance and assess the quality of scientific results. Here, we describe an automated clustering approach and associated software package that addresses these problems and provides novel cluster quality metrics. We show that our approach has accuracy comparable to or exceeding that achieved using manual or semi-manual techniques with desktop central processing unit (CPU) runtimes faster than acquisition time for up to hundreds of electrodes. Moreover, a single choice of parameters in the algorithm is effective for a variety of electrode geometries and across multiple brain regions. This algorithm has the potential to enable reproducible and automated spike sorting of larger scale recordings than is currently possible. Copyright © 2017 Elsevier Inc. All rights reserved.
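
    As a generic illustration of the kind of pipeline that automated spike sorters replace manual steps in, spikes are typically detected as threshold crossings and then clustered on waveform features. The sketch below is illustrative only and is not the authors' algorithm, which uses its own detection, clustering, and quality-metric machinery.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def sort_spikes(trace, sample_rate, n_units=3, thresh_sd=4.0, win=30):
    """Toy spike sorting: threshold detection, waveform extraction,
    PCA feature reduction, k-means clustering. Illustrative only."""
    trace = np.asarray(trace, dtype=float)
    thresh = thresh_sd * np.median(np.abs(trace)) / 0.6745   # robust noise estimate
    crossings = np.flatnonzero((trace[1:] < -thresh) & (trace[:-1] >= -thresh))
    crossings = crossings[(crossings > win) & (crossings < len(trace) - win)]
    if len(crossings) < n_units:
        raise ValueError("too few detected spikes for the requested number of units")
    waveforms = np.stack([trace[c - win:c + win] for c in crossings])
    features = PCA(n_components=3).fit_transform(waveforms)
    labels = KMeans(n_clusters=n_units, n_init=10).fit_predict(features)
    return crossings / sample_rate, labels        # spike times (s) and unit labels
```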

  20. Tools for a simulation-supported commissioning of the automation of HVAC plants. Hardware-in-the-loop in building automation; Werkzeuge fuer eine simulationsgestuetzte Inbetriebnahme der Automation von RLT-Anlagen. Hardware-in-the-Loop in der Gebaeudeautomation

    Energy Technology Data Exchange (ETDEWEB)

    Richter, Andreas; Sokollik, Frank [Hochschule Merseburg (Germany). Fachbereich Informatik und Kommunikationssysteme

    2012-07-01

    Hardware-in-the-loop (HiL) is a method for testing and validating technical automation solutions against virtual processes in a simulation environment. Applied to the automation of air-handling systems, commissioning tests of the controller can be performed in advance against a simulated plant. These tests can be used, for example, to find logic errors during program development or to tune controller parameters. Parameter tuning can be performed independently of the season by modifying the ambient climatic conditions, and plant parameters can be tested under dynamic conditions. The control behavior can be visualized by applying load conditions to dynamic HVAC components and optimized if necessary. Within BMBF-funded projects, a HiL solution was developed in a .NET environment. The coupling of simulation and control takes place via the bus systems CAN and BACnet. The elements of the air-conditioning simulation are implemented in an object-oriented manner in the programming language C and are based on the solution of dynamic mass and energy balances. The HiL features are implemented in a multi-client architecture, covering primarily simulation and communication. Further features are implemented: import of virtual plants from a CAE system, adjustment of simulation parameters using structured parameter sets, distributed simulation of complex systems over the network, a tool for controller dimensioning, and charting and visualization features.
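
    The plant models behind such a HiL setup rest on dynamic mass and energy balances, whose core is a first-order differential equation per component. Below is a minimal, generic sketch of a single-zone energy balance with illustrative placeholder parameters; the project's own implementation (in C, within the .NET environment) is of course far richer.

```python
import numpy as np

def simulate_zone(hours=8.0, dt=10.0, t_supply=35.0, t_outside=0.0):
    """Explicit-Euler integration of a single-zone energy balance:

        C * dT/dt = m_dot * cp * (T_supply - T) + UA * (T_outside - T)

    All parameter values are illustrative placeholders.
    """
    C = 5.0e6                  # zone thermal capacity [J/K]
    UA = 300.0                 # envelope loss coefficient [W/K]
    m_dot, cp = 0.5, 1005.0    # supply air mass flow [kg/s], cp of air [J/(kg K)]

    steps = int(hours * 3600 / dt)
    temps = np.empty(steps)
    t_zone = 15.0
    for k in range(steps):
        q_supply = m_dot * cp * (t_supply - t_zone)
        q_loss = UA * (t_outside - t_zone)
        t_zone += dt * (q_supply + q_loss) / C
        temps[k] = t_zone
    return temps

print(simulate_zone()[-1])     # zone temperature after 8 simulated hours
```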