WorldWideScience

Sample records for model human processor

  1. Mathematically modelling the effects of pacing, finger strategies and urgency on numerical typing performance with queuing network model human processor.

    Science.gov (United States)

    Lin, Cheng-Jhe; Wu, Changxu

    2012-01-01

    Numerical typing is an important perceptual-motor task whose performance may vary with pacing, finger strategies and the urgency of the situation. The queuing network-model human processor (QN-MHP), a computational architecture, allows the performance of perceptual-motor tasks to be modelled mathematically. The current study enhanced QN-MHP with a top-down control mechanism, a closed-loop movement control and a finger-related motor control mechanism to account for task interference, endpoint reduction, and force deficit, respectively. The model also incorporated neuromotor noise theory to quantify endpoint variability in typing. The model predictions of typing speed and accuracy were validated against Lin and Wu's (2011) experimental results. The resultant root-mean-squared errors were 3.68% with a correlation of 95.55% for response time, and 35.10% with a correlation of 96.52% for typing accuracy. The model can be applied to provide optimal speech rates for voice synthesis and keyboard designs in different numerical typing situations. An enhanced QN-MHP model was proposed in the study to mathematically account for the effects of pacing, finger strategies and internalised urgency on numerical typing performance. The model can be used to provide optimal pacing for voice-synthesis systems and to suggest optimal numerical keyboard designs under urgency.
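
    A minimal sketch of the validation arithmetic reported above (percentage root-mean-squared error and Pearson correlation between model predictions and observations); the response-time values are hypothetical placeholders, not Lin and Wu's data:

      import numpy as np

      def validate(predicted, observed):
          # RMSE as a percentage of the observed mean, plus the Pearson
          # correlation between prediction and observation.
          predicted = np.asarray(predicted, dtype=float)
          observed = np.asarray(observed, dtype=float)
          rmse = np.sqrt(np.mean((predicted - observed) ** 2))
          return 100.0 * rmse / observed.mean(), np.corrcoef(predicted, observed)[0, 1]

      # Hypothetical response times (ms) under four pacing conditions.
      rmse_pct, r = validate([520, 610, 705, 830], [500, 630, 690, 850])
      print("RMSE = %.2f%% of mean, r = %.4f" % (rmse_pct, r))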

  2. The Model Human Processor and the Older Adult: Parameter Estimation and Validation Within a Mobile Phone Task

    Science.gov (United States)

    Jastrzembski, Tiffany S.; Charness, Neil

    2009-01-01

    The authors estimate weighted mean values for nine information-processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20, mean age = 20) and older (N = 20, mean age = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced fits equivalent to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that the estimated older adult information-processing parameters are valid for modeling purposes, can help designers understand age-related performance on existing interfaces, and may support the development of age-sensitive technologies. PMID:18194048
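
    The keystroke-level idea above is simple enough to sketch: charge each keypress one perceptual, one cognitive and one motor cycle. The younger-adult cycle times below are the classic Card, Moran and Newell "middle-man" values; the older-adult values are illustrative placeholders, not the parameters estimated in this paper:

      # Model Human Processor, keystroke-level sketch (times in ms).
      PARAMS = {
          "younger": {"perceptual": 100, "cognitive": 70, "motor": 70},   # CMN 1983
          "older":   {"perceptual": 130, "cognitive": 90, "motor": 100},  # hypothetical
      }

      def dial_time_ms(n_keys, group):
          # One perceptual + one cognitive + one motor cycle per keypress.
          p = PARAMS[group]
          return n_keys * (p["perceptual"] + p["cognitive"] + p["motor"])

      for group in ("younger", "older"):
          print(group, dial_time_ms(10, group), "ms to key a 10-digit number")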

  3. An optimal speech processor for efficient human speech ...

    Indian Academy of Sciences (India)

    Our experimental findings suggest that the auditory filterbank in the human ear functions as a near-optimal speech processor for achieving efficient speech communication between humans. Keywords: human speech communication; articulatory gestures; auditory filterbank; mutual information.

  4. Models of Communication for Multicore Processors

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Sørensen, Rasmus Bo; Sparsø, Jens

    2015-01-01

    To efficiently use multicore processors we need to ensure that almost all data communication stays on chip, i.e., the bits moved between tasks executing on different processor cores do not leave the chip. Different forms of on-chip communication are supported by different hardware mechanisms, e...

  5. Optical linear algebra processors - Noise and error-source modeling

    Science.gov (United States)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  7. An updated program-controlled analog processor, model AP-006, for semiconductor detector spectrometers

    International Nuclear Information System (INIS)

    Shkola, N.F.; Shevchenko, Yu.A.

    1989-01-01

    An analog processor, model AP-006, is reported. The processor is a development of a series of spectrometric units based on a shaper of the type 'DL-dif + TVS + gated ideal integrator'. Structural and circuit design features are described. The results of testing the processor in a setup with a Si(Li) detecting unit over an input count-rate range of up to 5×10^5 cps are presented. Processor applications are illustrated. (orig.)

  8. Pulses processor modeling of the AR-PET tomograph

    International Nuclear Information System (INIS)

    Martinez Garbino, Lucio J.; Venialgo, E.; Estryk, Daniel S.; Verrastro, Claudio A.

    2009-01-01

    The detection of two gamma photons in time coincidence is the main process in positron emission tomography. The front-end processor estimates the energy and the time stamp of each incident gamma photon, and the accuracy of this estimation determines the contrast and resolution of the final images. In this work a modeling tool for the full detection chain is described. Starting from the stochastic generation of light photons, the photoelectron transit-time spread inside the photomultiplier, the preamplifier response and the digitisation process were modelled, and finally several algorithms for energy and time-stamp estimation were evaluated and compared. (author)
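
    A toy sketch of the two front-end estimates named above, applied to one digitised photodetector pulse: energy from the baseline-subtracted integral, and a time stamp from a leading-edge threshold crossing refined by linear interpolation. The sampling rate, threshold fraction and pulse shape are illustrative assumptions, not the AR-PET design:

      import numpy as np

      def analyze_pulse(samples, fs_hz, threshold_frac=0.2, n_baseline=20):
          # Assumes the first n_baseline samples precede the pulse.
          s = np.asarray(samples, dtype=float)
          s = s - s[:n_baseline].mean()              # baseline subtraction
          energy = s.sum() / fs_hz                   # integral of the pulse
          thr = threshold_frac * s.max()             # leading-edge threshold
          i = int(np.argmax(s > thr))                # first sample above it
          frac = (thr - s[i - 1]) / (s[i] - s[i - 1])
          return energy, (i - 1 + frac) / fs_hz      # (energy, time stamp in s)

      fs = 250e6                                     # 250 MS/s, illustrative
      t = np.arange(200) / fs
      pulse = np.where(t > 80 / fs, np.exp(-(t - 80 / fs) * fs / 30), 0.0)
      print(analyze_pulse(pulse, fs))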

  9. Automatic Radiographic Film Processing Using the Automatic X-Ray Film Processor Model JP-33

    Directory of Open Access Journals (Sweden)

    Zoucella Andre Afani

    2017-09-01

    Research on image formation on radiographic film and on automatic processing techniques has been carried out. The study was conducted using a Toshiba E 7239 X-ray unit, AGFA HealthCare film (Septestraat 27, B-2640 Mortsel) and an automatic film processor, the 'Automatic X-Ray Film Processor Model JP-33'. The experimental results showed that the principle of automatic film processing is the same as that of manual film processing, except that automatic processing has no rinsing stage. Automatic film processing can save time and costs; it can also reduce the possibility of errors due to human factors.

  10. Feasibility analysis of real-time physical modeling using WaveCore processor technology on FPGA

    NARCIS (Netherlands)

    Verstraelen, Martinus Johannes Wilhelmina; Pfeifle, Florian; Bader, Rolf

    2015-01-01

    WaveCore is a scalable many-core processor technology. This technology is specifically developed and optimized for real-time acoustical modeling applications. The programmable WaveCore soft-core processor is silicon-technology independent and hence can be targeted to ASIC or FPGA technologies. The

  11. Choosing processor array configuration by performance modeling for a highly parallel linear algebra algorithm

    International Nuclear Information System (INIS)

    Littlefield, R.J.; Maschhoff, K.J.

    1991-04-01

    Many linear algebra algorithms utilize an array of processors across which matrices are distributed. Given a particular matrix size and a maximum number of processors, what configuration of processors, i.e., what size and shape array, will execute the fastest? The answer to this question depends on tradeoffs between load balancing, communication startup and transfer costs, and computational overhead. In this paper we analyze in detail one algorithm: the blocked factored Jacobi method for solving dense eigensystems. A performance model is developed to predict execution time as a function of the processor array and matrix sizes, plus the basic computation and communication speeds of the underlying computer system. In experiments on a large hypercube (up to 512 processors), this model has been found to be highly accurate (mean error ∼ 2%) over a wide range of matrix sizes (10 x 10 through 200 x 200) and processor counts (1 to 512). The model reveals, and direct experiment confirms, that the tradeoffs mentioned above can be surprisingly complex and counterintuitive. We propose decision procedures based directly on the performance model to choose configurations for fastest execution. The model-based decision procedures are compared to a heuristic strategy and shown to be significantly better. 7 refs., 8 figs., 1 tab
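
    The decision procedure described above can be sketched: enumerate feasible array shapes, predict execution time with a simple compute-plus-communication model, and keep the minimum. The cost constants and communication pattern below are generic placeholders, not the paper's fitted model:

      # Choose the fastest r x c processor array for an n x n problem.
      FLOP_RATE = 1e6      # flops/s per processor (hypothetical)
      ALPHA     = 1e-3     # message startup cost, s (hypothetical)
      BETA      = 1e-6     # per-word transfer cost, s (hypothetical)

      def predicted_time(n, rows, cols):
          compute = (4.0 * n ** 3 / 3.0) / (rows * cols * FLOP_RATE)
          # Toy pattern: one broadcast along each array dimension per sweep.
          comm = n * ((rows - 1) + (cols - 1)) * (ALPHA + BETA * n)
          return compute + comm

      def best_configuration(n, max_procs):
          configs = [(r, c) for r in range(1, max_procs + 1)
                     for c in range(1, max_procs // r + 1)]
          return min(configs, key=lambda rc: predicted_time(n, rc[0], rc[1]))

      print(best_configuration(n=200, max_procs=512))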

  12. Reducing the computational requirements for simulating tunnel fires by combining multiscale modelling and multiple processor calculation

    DEFF Research Database (Denmark)

    Vermesi, Izabella; Rein, Guillermo; Colella, Francesco

    2017-01-01

    …in FDS version 6.0, a widely used fire-specific, open-source CFD software. Furthermore, it compares the reduction in simulation time given by multiscale modelling with that given by multiple-processor calculation. This was done using a 1200 m long tunnel with a rectangular cross-section … processor calculation (97% faster when using a single mesh and multiscale modelling; only 46% faster when using the full tunnel and multiple meshes). In summary, it was found that multiscale modelling with FDS v.6.0 is feasible, and the combination of multiple meshes and multiscale modelling was established…

  13. Formulation of consumables management models: Mission planning processor payload interface definition

    Science.gov (United States)

    Torian, J. G.

    1977-01-01

    Consumables models required for the mission planning and scheduling function are formulated. The relation of the models to prelaunch, onboard, ground-support, and postmission functions for the space transportation systems is established. An analytical model consisting of an orbiter planning processor with a consumables data base is developed. Also presented are a method of recognizing potential constraint violations in both the planning and flight operations functions, and a flight data file that stores and retrieves information over an extended period and interfaces with a flight operations processor for monitoring of the actual flights.

  14. The impact of accelerator processors for high-throughput molecular modeling and simulation.

    Science.gov (United States)

    Giupponi, G; Harvey, M J; De Fabritiis, G

    2008-12-01

    The recent introduction of cost-effective accelerator processors (APs), such as the IBM Cell processor and Nvidia's graphics processing units (GPUs), represents an important technological innovation which promises to unleash the full potential of atomistic molecular modeling and simulation for the biotechnology industry. Present APs can deliver over an order of magnitude more floating-point operations per second (flops) than standard processors, broadly equivalent to a decade of Moore's law growth, and significantly reduce the cost of current atom-based molecular simulations. In conjunction with distributed and grid-computing solutions, accelerated molecular simulations may finally be used to extend current in silico protocols by the use of accurate thermodynamic calculations instead of approximate methods and simulate hundreds of protein-ligand complexes with full molecular specificity, a crucial requirement of in silico drug discovery workflows.

  15. Scaling-up spatially-explicit ecological models using graphics processors

    NARCIS (Netherlands)

    Koppel, Johan van de; Gupta, Rohit; Vuik, Cornelis

    2011-01-01

    How the properties of ecosystems relate to spatial scale is a prominent topic in current ecosystem research. Despite this, spatially explicit models typically include only a limited range of spatial scales, mostly because of computing limitations. Here, we describe the use of graphics processors to

  16. A seasonal model of contracts between a monopsonistic processor and smallholder pepper producers in Costa Rica

    NARCIS (Netherlands)

    Sáenz Segura, F.; Haese, D' M.F.C.; Schipper, R.A.

    2010-01-01

    We model the contractual arrangements between smallholder pepper (Piper nigrum L.) producers and a single processor in Costa Rica. Producers in the El Roble settlement sell their pepper to only one processing firm, which exerts its monopsonistic bargaining power by setting the purchase price of

  17. Scaling-up spatially-explicit ecological models using graphics processors

    OpenAIRE

    Koppel, Johan van de; Gupta, Rohit; Vuik, Cornelis

    2011-01-01

    How the properties of ecosystems relate to spatial scale is a prominent topic in current ecosystem research. Despite this, spatially explicit models typically include only a limited range of spatial scales, mostly because of computing limitations. Here, we describe the use of graphics processors to efficiently solve spatially explicit ecological models at large spatial scale using the CUDA language extension. We explain this technique by implementing three classical models of spatial self-org...

  18. A processor sharing model for wireless data communication

    DEFF Research Database (Denmark)

    Hansen, Martin Bøgsted

    and unevenly distributed number of allocated resources. The model is illustrated on a typical HSCSD setup. Performance characteristics, such as blocking probabilities, utilization, average allocated bandwidth, and sojourn and response times, are studied. The maximum likelihood principle is suggested...

  19. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar.

    Science.gov (United States)

    Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun; Huang, Yuan-Hao

    2018-04-05

    Complementary metal-oxide-semiconductor (CMOS) radar has recently gained much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in a home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. A compressive sensing-based detection algorithm can relax the computational costs by avoiding the use of matched filters and reducing the analog-to-digital converter bandwidth requirement. Orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, its complexity is still very high because the high resolution of human respiration leads to high-dimensional signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only has 75% lower complexity than the OMP algorithm but also achieves better positioning performance, especially in noisy environments. This study also designed and implemented the algorithm on a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support a 256 × 13 real-time radar image display with a throughput of 28.2 frames per second.
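
    For reference, a compact sketch of the baseline OMP algorithm that the two-stage processor improves on: greedily pick the dictionary column most correlated with the residual, then re-fit the selected support by least squares. The sensing matrix and sparse signal here are synthetic:

      import numpy as np

      def omp(A, y, k, tol=1e-9):
          # Orthogonal matching pursuit for y ~ A @ x with k-sparse x.
          support, x = [], np.zeros(A.shape[1])
          residual, coef = y.copy(), np.zeros(0)
          for _ in range(k):
              j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
              if j not in support:
                  support.append(j)
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coef          # re-fit, update residual
              if np.linalg.norm(residual) < tol:
                  break
          x[support] = coef
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((40, 128)) / np.sqrt(40)     # random sensing matrix
      x_true = np.zeros(128); x_true[[5, 60, 99]] = [1.0, -0.7, 0.4]
      print(np.flatnonzero(np.round(omp(A, A @ x_true, k=3), 2)))  # -> [5 60 99]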

  20. Support for the Logical Execution Time Model on a Time-predictable Multicore Processor

    DEFF Research Database (Denmark)

    Kluge, Florian; Schoeberl, Martin; Ungerer, Theo

    2016-01-01

    The logical execution time (LET) model increases the compositionality of real-time task sets: removal or addition of tasks does not influence the communication behavior of other tasks. In this work, we extend a multicore operating system running on a time-predictable multicore processor to support the LET model. For communication between tasks we use message passing on a time-predictable network-on-chip to avoid the bottleneck of shared memory. We report our experiences and present results on the costs in terms of memory and execution time.

  1. Processor Instructions Execution Models in Computer Systems Supporting Hardware Virtualization When an Intruder Takes Detection Countermeasures

    OpenAIRE

    A. E. Zhukov; I. Y. Korkin; B. M. Sukhinin

    2012-01-01

    We discuss processor mode-switching schemes and analyze processor instruction execution in the cases when a hypervisor is present in the computer and when it is not. We determine instruction-execution latency statistics that are applicable to detecting such hypervisors when an intruder is modifying the time stamp counter.
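
    Practical detectors of this kind time instructions natively (e.g., with the time stamp counter around a trapping instruction) and compare the latency statistics against a bare-metal baseline. The Python sketch below only illustrates the statistical decision step; the factor-of-ten threshold is an arbitrary assumption, and real measurements would use hardware counters rather than perf_counter_ns:

      import statistics, time

      def latency_samples(op, n=10000):
          # Collect n latency samples (ns) of a callable.
          out = []
          for _ in range(n):
              t0 = time.perf_counter_ns()
              op()
              out.append(time.perf_counter_ns() - t0)
          return out

      def looks_virtualized(samples, baseline_mean_ns, factor=10.0):
          # Trap-and-emulate overhead inflates mean latency far beyond baseline.
          return statistics.fmean(samples) > factor * baseline_mean_ns

      s = latency_samples(lambda: None)
      print("mean %.1f ns, stdev %.1f ns" % (statistics.fmean(s), statistics.pstdev(s)))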

  2. Simulation-based Modeling Frameworks for Networked Multi-processor System-on-Chip

    DEFF Research Database (Denmark)

    Mahadevan, Shankar

    2006-01-01

    This thesis deals with modeling aspects of multi-processor system-on-chip (MpSoC) design affected by the on-chip interconnect, also called the Network-on-Chip (NoC), at various levels of abstraction. To begin with, we undertook a comprehensive survey of research and design practices of networked MpSoC … namely ARTS and RIPE, which allow modeling of hardware (computation time, power consumption, network latency, caching effects, etc.) and software (application partition and mapping, operating system scheduling, interrupt handling, etc.) aspects from system level to cycle-true abstraction. Thereby, we can realistically model the application executing on the architecture. This includes, e.g., accurate modeling of synchronization, cache refills, context-switching effects, and so on, which are critically dependent on the architecture and the performance of the NoC. The foundation of the ARTS model is abstract tasks...

  3. The Mission Assessment Post Processor (MAPP): A New Tool for Performance Evaluation of Human Lunar Missions

    Science.gov (United States)

    Williams, Jacob; Stewart, Shaun M.; Lee, David E.; Davis, Elizabeth C.; Condon, Gerald L.; Senent, Juan

    2010-01-01

    The National Aeronautics and Space Administration's (NASA) Constellation Program paves the way for a series of lunar missions leading to a sustained human presence on the Moon. The proposed mission design includes an Earth Departure Stage (EDS), a Crew Exploration Vehicle (Orion) and a lunar lander (Altair) which support the transfer to and from the lunar surface. This report addresses the design, development and implementation of a new mission scan tool called the Mission Assessment Post Processor (MAPP) and its use to provide insight into the integrated (i.e., EDS, Orion, and Altair based) mission cost as a function of various mission parameters and constraints. The Constellation architecture calls for semiannual launches to the Moon and will support a number of missions, beginning with 7-day sortie missions, culminating in a lunar outpost at a specified location. The operational lifetime of the Constellation Program can cover a period of decades over which the Earth-Moon geometry (particularly, the lunar inclination) will go through a complete cycle (i.e., the lunar nodal cycle lasting 18.6 years). This geometry variation, along with other parameters such as flight time, landing site location, and mission-related constraints, affects the outbound (Earth to Moon) and inbound (Moon to Earth) translational performance cost. The mission designer must determine the ability of the vehicles to perform lunar missions as a function of this complex set of interdependent parameters. Trade-offs among these parameters provide essential insights for properly assessing the ability of a mission architecture to meet desired goals and objectives. These trades also aid in determining the overall usable propellant required for supporting nominal and off-nominal missions over the entire operational lifetime of the program; thus they support vehicle sizing.

  4. Multithreaded Processors

    Indian Academy of Sciences (India)

    IAS Admin

    In this article, we describe the constraints faced by modern computer designers due to the operating speed mismatch between processors and memory units ...

  5. Animated computer graphics models of space and earth sciences data generated via the massively parallel processor

    Science.gov (United States)

    Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David

    1987-01-01

    The capability to rapidly produce visual representations of large, complex, multi-dimensional space and earth sciences data sets was developed by implementing computer graphics modeling techniques on the Massively Parallel Processor (MPP), employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS) data-independent environment for computer-graphics data display to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.

  6. A Sound Processor for Cochlear Implant Using a Simple Dual Path Nonlinear Model of Basilar Membrane

    Directory of Open Access Journals (Sweden)

    Kyung Hwan Kim

    2013-01-01

    We propose a new active nonlinear model of the frequency response of the basilar membrane in the biological cochlea, called the simple dual path nonlinear (SDPN) model, and a novel sound processing strategy for cochlear implants (CIs) based upon this model. The SDPN model was developed to utilize the advantages of the level-dependent frequency response characteristics of the basilar membrane for robust formant representation under noisy conditions. In comparison to the dual resonance nonlinear (DRNL) model, which was previously proposed as an active nonlinear model of the basilar membrane, the SDPN model can reproduce similar level-dependent frequency responses with a much simpler structure and is thus better suited for incorporation into CI sound processors. Analysis of the dominant frequency components confirmed that the formants of speech are more robustly represented after frequency decomposition by the nonlinear filterbank using SDPN, compared to a linear bandpass filter array as used in conventional strategies. Acoustic simulation and hearing experiments in subjects with normal hearing showed that the proposed strategy yields better syllable recognition under speech-shaped noise than the conventional strategy based on fixed linear bandpass filters.
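
    A toy dual-path channel inspired by the description above (not the authors' SDPN implementation): a linear band-pass path summed with a band-passed compressive path, so that weak inputs receive relatively more gain than strong ones, giving the level-dependent response the abstract describes. All filter orders and constants are illustrative assumptions:

      import numpy as np
      from scipy.signal import butter, lfilter

      def dual_path_channel(x, fs, f_lo, f_hi, gain=30.0, exponent=0.3):
          b, a = butter(2, [f_lo, f_hi], btype="band", fs=fs)
          linear = lfilter(b, a, x)                       # linear path
          compressed = np.sign(linear) * np.abs(gain * linear) ** exponent
          return linear + lfilter(b, a, compressed)       # sum of both paths

      fs = 16000
      t = np.arange(fs) / fs
      for amp in (0.001, 0.1):                            # weak vs strong input
          y = dual_path_channel(amp * np.sin(2 * np.pi * 1000 * t), fs, 900, 1100)
          print("input %.3f -> peak output %.3f" % (amp, np.max(np.abs(y))))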

  7. Shortcut model for water-balanced operation in fuel processor fuel cell systems

    NARCIS (Netherlands)

    Biesheuvel, P.M.; Kramer, G.J.

    2004-01-01

    In a fuel processor, a hydrocarbon or oxygenate fuel is catalytically converted into a mixture rich in hydrogen which can be fed to a fuel cell to generate electricity. In these fuel processor fuel cell systems (FPFCs), water is recovered from the exhaust gases and recycled back into the system. We

  8. Integrated fuel processor development

    International Nuclear Information System (INIS)

    Ahmed, S.; Pereira, C.; Lee, S. H. D.; Krumpelt, M.

    2001-01-01

    The Department of Energy's Office of Advanced Automotive Technologies has been supporting the development of fuel-flexible fuel processors at Argonne National Laboratory. These fuel processors will enable fuel cell vehicles to operate on fuels available through the existing infrastructure. The constraints of on-board space and weight require that these fuel processors be designed to be compact and lightweight, while meeting the performance targets for efficiency and gas quality needed for the fuel cell. This paper discusses the performance of a prototype fuel processor that has been designed and fabricated to operate with liquid fuels, such as gasoline, ethanol, methanol, etc. Rated for a capacity of 10 kWe (one-fifth of that needed for a car), the prototype fuel processor integrates the unit operations (vaporization, heat exchange, etc.) and processes (reforming, water-gas shift, preferential oxidation reactions, etc.) necessary to produce the hydrogen-rich gas (reformate) that will fuel the polymer electrolyte fuel cell stacks. The fuel processor work is being complemented by analytical and fundamental research. With the ultimate objective of meeting on-board fuel processor goals, these studies include: modeling fuel cell systems to identify design and operating features; evaluating alternative fuel processing options; and developing appropriate catalysts and materials. Issues and outstanding challenges that need to be overcome in order to develop practical, on-board devices are discussed

  9. Decomposing the queue length distribution of processor-sharing models into queue lengths of permanent customer queues

    NARCIS (Netherlands)

    Cheung, S.-K.; Berg, H. van den; Boucherie, R.J.

    2005-01-01

    We obtain a decomposition result for the steady state queue length distribution in egalitarian processor-sharing (PS) models. In particular, for multi-class egalitarian PS queues, we show that the marginal queue length distribution for each class equals the queue length distribution of an equivalent
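
    A classical special case makes this concrete: in an M/M/1 processor-sharing queue with load rho (and, by insensitivity, in M/G/1-PS), the stationary queue length is geometric. A small sketch:

      # P(N = n) = (1 - rho) * rho**n for an egalitarian PS queue, 0 < rho < 1.
      def ps_queue_length_pmf(rho, n_max=10):
          assert 0.0 < rho < 1.0, "queue must be stable"
          return [(1.0 - rho) * rho ** n for n in range(n_max + 1)]

      rho = 0.7
      pmf = ps_queue_length_pmf(rho)
      print([round(p, 4) for p in pmf[:4]], "mean =", rho / (1.0 - rho))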

  10. The EPIC Architecture for Modeling Human Information-Processing and Performance: A Brief Introduction.

    Science.gov (United States)

    Kieras, David E.; Meyer, David E.

    EPIC (Executive Process-Interactive Control) is a human information-processing architecture that is especially suited for modeling multiple-task performance. The EPIC architecture includes peripheral sensory-motor processors surrounding a production-rule cognitive processor, and is being used to construct precise computational models for basic…

  11. Analysis of impact of general-purpose graphics processor units in supersonic flow modeling

    Science.gov (United States)

    Emelyanov, V. N.; Karpenko, A. G.; Kozelkov, A. S.; Teterina, I. V.; Volkov, K. N.; Yalozo, A. V.

    2017-06-01

    Computational methods are widely used in the prediction of complex flowfields associated with off-normal situations in aerospace engineering. Modern graphics processing units (GPUs) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations at both high performance and low cost. Possibilities of the use of GPUs for the simulation of external and internal flows on unstructured meshes are discussed. The finite volume method is applied to solve three-dimensional unsteady compressible Euler and Navier-Stokes equations on unstructured meshes with high-resolution numerical schemes. CUDA technology is used for the programming implementation of the parallel computational algorithms. Solutions of some benchmark test cases on GPUs are reported, and the computed results are compared with experimental and computational data. Approaches to optimization of the CFD code related to the use of different types of memory are considered. The speedup of the solution on GPUs with respect to the solution on a central processing unit (CPU) is compared. Performance measurements show that the numerical schemes developed achieve a 20-50× speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.

  12. Multithreaded Processors

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 20, Issue 9, September 2015, pp. 844-855. Permanent link: http://www.ias.ac.in/article/fulltext/reso/020/09/0844-0855

  13. GNAQPMS v1.1: accelerating the Global Nested Air Quality Prediction Modeling System (GNAQPMS on Intel Xeon Phi processors

    Directory of Open Access Journals (Sweden)

    H. Wang

    2017-08-01

    The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), which is a multi-scale chemical transport model used for air quality forecast and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing (KNL). Compared with the first-generation Xeon Phi coprocessor (codenamed Knights Corner, KNC), KNL has many new hardware features such as a bootable processor, high-performance in-package memory and ISA compatibility with Intel Xeon processors. In particular, we describe the five optimisations we applied to the key modules of GNAQPMS, including the CBM-Z gas-phase chemistry, advection, convection and wet deposition modules. These optimisations work well on both the KNL 7250 processor and the Intel Xeon E5-2697 V4 processor. They include (1) updating the pure Message Passing Interface (MPI) parallel mode to the hybrid parallel mode with MPI and OpenMP in the emission, advection, convection and gas-phase chemistry modules; (2) fully employing the 512 bit wide vector processing units (VPUs) on the KNL platform; (3) reducing unnecessary memory access to improve cache efficiency; (4) reducing the thread local storage (TLS) in the CBM-Z gas-phase chemistry module to improve its OpenMP performance; and (5) changing the global communication from writing/reading interface files to MPI functions to improve the performance and the parallel scalability. These optimisations greatly improved the GNAQPMS performance. The same optimisations also work well for the Intel Xeon Broadwell processor, specifically E5-2697 v4. Compared with the baseline version of GNAQPMS, the optimised version was 3.51 × faster on KNL and 2.77 × faster on the CPU. Moreover, the optimised version ran at 26 % lower average power on KNL than on the CPU. With the combined

  14. GNAQPMS v1.1: accelerating the Global Nested Air Quality Prediction Modeling System (GNAQPMS) on Intel Xeon Phi processors

    Science.gov (United States)

    Wang, Hui; Chen, Huansheng; Wu, Qizhong; Lin, Junmin; Chen, Xueshun; Xie, Xinwei; Wang, Rongrong; Tang, Xiao; Wang, Zifa

    2017-08-01

    The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), which is a multi-scale chemical transport model used for air quality forecast and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing (KNL). Compared with the first-generation Xeon Phi coprocessor (codenamed Knights Corner, KNC), KNL has many new hardware features such as a bootable processor, high-performance in-package memory and ISA compatibility with Intel Xeon processors. In particular, we describe the five optimisations we applied to the key modules of GNAQPMS, including the CBM-Z gas-phase chemistry, advection, convection and wet deposition modules. These optimisations work well on both the KNL 7250 processor and the Intel Xeon E5-2697 V4 processor. They include (1) updating the pure Message Passing Interface (MPI) parallel mode to the hybrid parallel mode with MPI and OpenMP in the emission, advection, convection and gas-phase chemistry modules; (2) fully employing the 512 bit wide vector processing units (VPUs) on the KNL platform; (3) reducing unnecessary memory access to improve cache efficiency; (4) reducing the thread local storage (TLS) in the CBM-Z gas-phase chemistry module to improve its OpenMP performance; and (5) changing the global communication from writing/reading interface files to MPI functions to improve the performance and the parallel scalability. These optimisations greatly improved the GNAQPMS performance. The same optimisations also work well for the Intel Xeon Broadwell processor, specifically E5-2697 v4. Compared with the baseline version of GNAQPMS, the optimised version was 3.51 × faster on KNL and 2.77 × faster on the CPU. Moreover, the optimised version ran at 26 % lower average power on KNL than on the CPU. With the combined performance and energy

  15. Meteorological Processors and Accessory Programs

    Science.gov (United States)

    Surface and upper air data, provided by NWS, are important inputs for air quality models. Before these data are used in some of the EPA dispersion models, meteorological processors are used to manipulate the data.

  16. Approaches in highly parameterized inversion: TSPROC, a general time-series processor to assist in model calibration and result summarization

    Science.gov (United States)

    Westenbroek, Stephen M.; Doherty, John; Walker, John F.; Kelson, Victor A.; Hunt, Randall J.; Cera, Timothy B.

    2012-01-01

    The TSPROC (Time Series PROCessor) computer software uses a simple scripting language to process and analyze time series. It was developed primarily to assist in the calibration of environmental models. The software is designed to perform calculations on time-series data commonly associated with surface-water models, including calculation of flow volumes, transformation by means of basic arithmetic operations, and generation of seasonal and annual statistics and hydrologic indices. TSPROC can also be used to generate some of the key input files required to perform parameter optimization by means of the PEST (Parameter ESTimation) computer software. Through the use of TSPROC, the objective function for use in the model-calibration process can be focused on specific components of a hydrograph.
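
    An illustrative pandas analogue of the operations listed above (this is not TSPROC's scripting syntax): seasonal statistics, annual flow volumes and a transform on a synthetic daily streamflow series:

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(1)
      idx = pd.date_range("2001-01-01", "2003-12-31", freq="D")
      flow = pd.Series(np.exp(rng.normal(2.0, 0.5, len(idx))), index=idx,
                       name="flow_cms")                          # synthetic flows, m3/s

      seasonal_mean = flow.groupby(flow.index.quarter).mean()      # per quarter
      annual_volume = flow.groupby(flow.index.year).sum() * 86400  # m3 per year
      log_flow = np.log10(flow)                                    # transform
      print(seasonal_mean.round(2))
      print(annual_volume.round(0))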

  17. PEM Fuel Cells with Bio-Ethanol Processor Systems A Multidisciplinary Study of Modelling, Simulation, Fault Diagnosis and Advanced Control

    CERN Document Server

    Feroldi, Diego; Outbib, Rachid

    2012-01-01

    An apparently appropriate control scheme for PEM fuel cells may actually lead to an inoperable plant when it is connected to other unit operations in a process with recycle streams and energy integration. PEM Fuel Cells with Bio-Ethanol Processor Systems presents a control system design that provides basic regulation of the hydrogen production process with PEM fuel cells. It then goes on to construct a fault diagnosis system to improve plant safety above this control structure. PEM Fuel Cells with Bio-Ethanol Processor Systems is divided into two parts: the first covers fuel cells and the second discusses plants for hydrogen production from bio-ethanol to feed PEM fuel cells. Both parts give detailed analyses of modeling, simulation, advanced control, and fault diagnosis. They give an extensive, in-depth discussion of the problems that can occur in fuel cell systems and propose a way to control these systems through advanced control algorithms. A significant part of the book is also given over to computer-aid...

  18. The LASS hardware processor

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1976-01-01

    The problems of data analysis with hardware processors are reviewed and a description is given of a programmable processor. This processor, the 168/E, has been designed for use in the LASS multi-processor system; it has an execution speed comparable to the IBM 370/168 and uses the subset of IBM 370 instructions appropriate to the LASS analysis task. (Auth.)

  19. Scalability of human models

    NARCIS (Netherlands)

    Rodarius, C.; Rooij, L. van; Lange, R. de

    2007-01-01

    The objective of this work was to create a scalable human occupant model that allows adaptation of human models with respect to size, weight and several mechanical parameters. Therefore, for the first time two scalable facet human models were developed in MADYMO. First, a scalable human male was

  20. Optical Finite Element Processor

    Science.gov (United States)

    Casasent, David; Taylor, Bradley K.

    1986-01-01

    A new high-accuracy optical linear algebra processor (OLAP) with many advantageous features is described. It achieves floating point accuracy, handles bipolar data by sign-magnitude representation, performs LU decomposition using only one channel, easily partitions and considers data flow. A new application (finite element (FE) structural analysis) for OLAPs is introduced and the results of a case study presented. Error sources in encoded OLAPs are addressed for the first time. Their modeling and simulation are discussed and quantitative data are presented. Dominant error sources and the effects of composite error sources are analyzed.

  1. Towards a Process Algebra for Shared Processors

    DEFF Research Database (Denmark)

    Buchholtz, Mikael; Andersen, Jacob; Løvengreen, Hans Henrik

    2002-01-01

    We present initial work on a timed process algebra that models sharing of processor resources allowing preemption at arbitrary points in time. This enables us to model both the functional and the timely behaviour of concurrent processes executed on a single processor. We give a refinement relation...

  2. A Sound Processor for Cochlear Implant Using a Simple Dual Path Nonlinear Model of Basilar Membrane

    OpenAIRE

    Kim, Kyung Hwan; Choi, Sung Jin; Kim, Jin Ho

    2013-01-01

    We propose a new active nonlinear model of the frequency response of the basilar membrane in biological cochlea called the simple dual path nonlinear (SDPN) model and a novel sound processing strategy for cochlear implants (CIs) based upon this model. The SDPN model was developed to utilize the advantages of the level-dependent frequency response characteristics of the basilar membrane for robust formant representation under noisy conditions. In comparison to the dual resonance nonlinear mode...

  3. RPC Stereo Processor (rsp) - a Software Package for Digital Surface Model and Orthophoto Generation from Satellite Stereo Imagery

    Science.gov (United States)

    Qin, R.

    2016-06-01

    Large-scale Digital Surface Models (DSMs) are very useful for many geoscience and urban applications. Recently developed dense image matching methods have popularized the use of image-based very high resolution DSMs. Many commercial/public tools that implement matching methods are available for perspective images, but handy tools for satellite stereo images are rare. In this paper, a software package, the RPC (rational polynomial coefficient) stereo processor (RSP), is introduced for this purpose. RSP implements a full pipeline of DSM and orthophoto generation based on RPC-modelled satellite imagery (level 1+), including level 2 rectification, geo-referencing, point cloud generation, pan-sharpening, DSM resampling and ortho-rectification. A modified hierarchical semi-global matching method is used as the current matching strategy. Due to its high memory efficiency and optimized implementation, RSP can be used on a normal PC to produce large-format DSMs and orthophotos. This tool was developed for internal use, and may be acquired by researchers for academic and non-commercial purposes to promote 3D remote sensing applications.

  4. WCET Analysis of ARM Processors using Real-Time Model Checking

    DEFF Research Database (Denmark)

    Toft, Martin; Olesen, Mads Christian; Dalsgaard, Andreas

    2009-01-01

    This paper presents a flexible method that utilises real-time model checking to determine safe and sharp WCETs for processes running on hardware platforms featuring pipelining and caching.

  5. 21 CFR 892.1900 - Automatic radiographic film processor.

    Science.gov (United States)

    2010-04-01

    21 CFR 892.1900 (2010): Automatic radiographic film processor. Food and Drugs, FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... (a) Identification. An automatic radiographic film processor is a device intended to be used to...

  6. 21 CFR 864.3875 - Automated tissue processor.

    Science.gov (United States)

    2010-04-01

    21 CFR 864.3875 (2010): Automated tissue processor. Food and Drugs, FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED)... (a) Identification. An automated tissue processor is an automated system used to...

  7. Java Processor Optimized for RTSJ

    Directory of Open Access Journals (Sweden)

    Tu Shiliang

    2007-01-01

    Due to the preeminent work on the real-time specification for Java (RTSJ), Java is increasingly expected to become the leading programming language in real-time systems. To provide a Java platform suitable for real-time applications, a Java processor which can execute Java bytecode directly is proposed in this paper. It provides efficient hardware support for some mechanisms specified in the RTSJ and offers a simpler programming model by ameliorating the scoped memory of the RTSJ. The worst-case execution time (WCET) of the bytecodes implemented in this processor is predictable by employing the optimization method proposed in our previous work, in which all processing that interferes with predictability is handled before bytecode execution. A further advantage of this method is that it makes the implementation of the processor simpler and suited to a low-cost FPGA chip.

  8. Experimentally modeling stochastic processes with less memory by the use of a quantum processor.

    Science.gov (United States)

    Palsson, Matthew S; Gu, Mile; Ho, Joseph; Wiseman, Howard M; Pryde, Geoff J

    2017-02-01

    Computer simulation of observable phenomena is an indispensable tool for engineering new technology, understanding the natural world, and studying human society. However, the most interesting systems are often so complex that simulating their future behavior demands storing immense amounts of information regarding how they have behaved in the past. For increasingly complex systems, simulation becomes increasingly difficult and is ultimately constrained by resources such as computer memory. Recent theoretical work shows that quantum theory can reduce this memory requirement beyond ultimate classical limits, as measured by a process' statistical complexity, C. We experimentally demonstrate this quantum advantage in simulating stochastic processes. Our quantum implementation observes a memory requirement of C_q = 0.05 ± 0.01, far below the ultimate classical limit of C = 1. Scaling up this technique would substantially reduce the memory required in simulations of more complex systems.

  9. Modeling Enterprise Architecture Using Timed Colored PETRI Net: Single Processor Scheduling

    OpenAIRE

    Pashazadeh, Saied; Niyari, Elham Abdolrahimi

    2014-01-01

    The purpose of modeling and analyzing enterprise architecture is to ease decision making about the architecture of information systems. Planning is one of the most important tasks in an organization and has a major role in increasing its productivity. The scope of this paper is scheduling processes in the enterprise architecture. Scheduling is decision making on the execution start times of processes, used in manufacturing and service systems. Different methods and tools have been propo...

  10. Optimizing ion channel models using a parallel genetic algorithm on graphical processors.

    Science.gov (United States)

    Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon

    2012-01-01

    We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm for work on a graphical processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphic computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80 node Linux cluster, considerably reducing simulation times. This application allows users to optimize models for ion channel kinetics on a single, inexpensive, desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (Ordinary Differential Equations) so as to massively reduce memory transfers to and from the GPU. This approach may be applied to speed up other data intensive applications requiring iterative solutions of ODEs. Copyright © 2012 Elsevier B.V. All rights reserved.
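
    The structure that makes this optimization GPU-friendly is that the expensive step, evaluating the whole population's fitness, is one data-parallel array operation. A minimal vectorized genetic-algorithm sketch in that style; the quadratic "model error" is a placeholder objective, not the authors' voltage-clamp fitting code:

      import numpy as np

      rng = np.random.default_rng(42)
      TARGET = np.array([0.5, -1.2, 3.0])             # hypothetical channel params

      def fitness(pop):                               # lower is better; one vector op
          return ((pop - TARGET) ** 2).sum(axis=1)

      pop = rng.uniform(-5, 5, size=(256, 3))         # population of parameter sets
      for _ in range(200):
          ranked = pop[np.argsort(fitness(pop))]
          parents = ranked[:64]                       # truncation selection
          mates = parents[rng.integers(0, 64, size=(256, 2))]
          mask = rng.random((256, 3)) < 0.5           # uniform crossover
          pop = np.where(mask, mates[:, 0], mates[:, 1])
          pop += rng.normal(0.0, 0.05, pop.shape)     # Gaussian mutation
      print(pop[np.argmin(fitness(pop))])             # should approach TARGET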

  11. Quality-Driven Model-Based Design of MultiProcessor Embedded Systems for Highlydemanding Applications

    DEFF Research Database (Denmark)

    Jozwiak, Lech; Madsen, Jan

    2013-01-01

    …opportunities have been created. The traditional applications can be served much better and numerous new sorts of embedded systems became technologically feasible and economically justified. Various monitoring, control, communication or multi-media systems that can be put on or embedded in (mobile, poorly …) … optimization, adequate resolution of numerous complex design tradeoffs, reduction of the design productivity gap for the increasingly complex and sophisticated systems, reduction of the time-to-market and development costs without compromising the system quality, etc. These challenges cannot be well addressed … of contemporary and future embedded systems and introduction of the quality-driven model-based design methodology based on the paradigms of life-inspired systems and quality-driven design earlier proposed by the first presenter of this tutorial. Subsequently, the actual industrial Intel's ASIP-based MPSo…

  12. Image processor of model-based vision system for assembly robots

    International Nuclear Information System (INIS)

    Moribe, H.; Nakano, M.; Kuno, T.; Hasegawa, J.

    1987-01-01

    A special-purpose image preprocessor for the visual system of assembly robots has been developed. The main functional unit is composed of lookup tables, exploiting the advantages of semiconductor memory: large-scale integration, high speed and low price. More than one unit may be operated in parallel, since the design is based on the standard IEEE 796 bus. The operation time of the preprocessor in line-segment extraction is typically 200 ms per 500 segments, though it varies with the complexity of the scene image. The gray-scale visual system, supported by a model-based analysis program using the extracted line segments, recognizes partially visible or overlapping industrial workpieces and detects their locations and orientations.

  13. Models of human operators

    International Nuclear Information System (INIS)

    Knee, H.E.; Schryver, J.C.

    1991-01-01

    Models of human behavior and cognition (HB and C) are necessary for understanding the total response of complex systems. Many such models have become available over the past thirty years for various applications. Unfortunately, many potential model users remain skeptical about their practicality, acceptability, and usefulness. Such hesitancy stems in part from disbelief in the ability to model complex cognitive processes, and a belief that relevant human behavior can be adequately accounted for through the use of commonsense heuristics. This paper will highlight several models of HB and C and identify existing and potential applications in an attempt to dispel such notions. (author)

  14. Embedded Processor Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Embedded Processor Laboratory provides the means to design, develop, fabricate, and test embedded computers for missile guidance electronics systems in support...

  15. Multithreading in vector processors

    Energy Technology Data Exchange (ETDEWEB)

    Evangelinos, Constantinos; Kim, Changhoan; Nair, Ravi

    2018-01-16

    In one embodiment, a system includes a processor having a vector processing mode and a multithreading mode. The processor is configured to operate on one thread per cycle in the multithreading mode. The processor includes a program counter register having a plurality of program counters, and the program counter register is vectorized. Each program counter in the program counter register represents a distinct corresponding thread of a plurality of threads. The processor is configured to execute the plurality of threads by activating the plurality of program counters in a round robin cycle.

  16. A Methodolgy, Based on Analytical Modeling, for the Design of Parallel and Distributed Architectures for Relational Database Query Processors.

    Science.gov (United States)

    1987-12-01

    [Figure-list residue: AFIT MPOA Architecture (p. 13); DIRECT Architecture (p. 14); Teradata Ynet Architecture (Fig. 10).] …commercially available database machines [28,59]: the Britton-Lee and Teradata machines. There have been other companies announcing database machines, such … The Britton-Lee IDM-500 series database machine is the most well known and widely used database machine

  17. Development of Innovative Design Processor

    International Nuclear Information System (INIS)

    Park, Y.S.; Park, C.O.

    2004-01-01

    Nuclear design analysis requires time-consuming and error-prone model-input preparation, code runs, output analysis and a quality assurance process. To reduce human effort and improve design quality and productivity, the Innovative Design Processor (IDP) is being developed. Two basic principles of IDP are document-oriented design and web-based design. Document-oriented design means that, if the designer writes a design document called an active document and feeds it to a special program, the final document with the complete analysis, tables and plots is produced automatically. The active documents can be written with ordinary HTML editors or created automatically on the web, which is the other framework of IDP. Using a proper mix of server-side and client-side programming under the LAMP (Linux/Apache/MySQL/PHP) environment, the design process on the web is modeled in a design-wizard style so that even a novice designer can produce the design document easily. This automation using the IDP is now being implemented for all reload designs of Korea Standard Nuclear Power Plant (KSNP) type PWRs. The introduction of this process will allow a large reduction in all KSNP reload design efforts and provide a platform for design and R and D tasks of KNFC. (authors)

  18. Computational human body models

    NARCIS (Netherlands)

    Wismans, J.S.H.M.; Happee, R.; Dommelen, J.A.W. van

    2005-01-01

    Computational human body models are widely used for automotive crash-safety research and design and as such have significantly contributed to a reduction of traffic injuries and fatalities. Currently, crash simulations are mainly performed using models based on crash dummies. However, crash dummies

  19. Human migraine models

    DEFF Research Database (Denmark)

    Iversen, Helle Klingenberg

    2001-01-01

    The need for experimental models is obvious. In animal models it is possible to study vascular responses, neurogenic inflammation, c-fos expression, etc. However, the pathophysiology of migraine remains unsolved, which is why results from animal studies cannot be directly related to the migraine attack, which is a human experience. A set-up for investigations of experimental headache and migraine in humans has been evaluated, and headache mechanisms have been explored by using nitroglycerin and other headache-inducing agents. Nitric oxide (NO) or other parts of the NO-activated cascade seem to be responsible for the induced headache and migraine. Perspectives are discussed.

  20. Graphics Processor Units (GPUs)

    Science.gov (United States)

    Wyrwas, Edward J.

    2017-01-01

    This presentation will include information about Graphics Processor Unit (GPU) technology, NASA Electronic Parts and Packaging (NEPP) tasks, the test setup, test parameter considerations, lessons learned, collaborations, a roadmap, NEPP partners, results to date, and future plans.

  1. Logistic Fuel Processor Development

    National Research Council Canada - National Science Library

    Salavani, Reza

    2004-01-01

    The Air Base Technologies Division of the Air Force Research Laboratory has developed a logistic fuel processor that removes the sulfur content of the fuel and in the process converts logistic fuel...

  2. Adaptive signal processor

    Energy Technology Data Exchange (ETDEWEB)

    Walz, H.V.

    1980-07-01

    An experimental, general purpose adaptive signal processor system has been developed, utilizing a quantized (clipped) version of the Widrow-Hoff least-mean-square adaptive algorithm developed by Moschner. The system accommodates 64 adaptive weight channels with 8-bit resolution for each weight. Internal weight update arithmetic is performed with 16-bit resolution, and the system error signal is measured with 12-bit resolution. An adapt cycle of adjusting all 64 weight channels is accomplished in 8 μs. Hardware of the signal processor utilizes primarily Schottky-TTL type integrated circuits. A prototype system with 24 weight channels has been constructed and tested. This report presents details of the system design and describes basic experiments performed with the prototype signal processor. Finally some system configurations and applications for this adaptive signal processor are discussed.
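
    The quantized update is easy to state in code. A sketch of a sign-sign ("clipped") LMS filter identifying an unknown system; it illustrates the algorithm family named above, not the report's 64-channel hardware:

      import numpy as np

      def sign_sign_lms(x, d, n_taps=8, mu=2 ** -6):
          # Weight update uses only signs: w += mu * sign(e) * sign(u),
          # which needs no multipliers in hardware.
          w, y = np.zeros(n_taps), np.zeros(len(x))
          for k in range(n_taps - 1, len(x)):
              u = x[k - n_taps + 1:k + 1][::-1]      # tap-delay-line contents
              y[k] = w @ u
              e = d[k] - y[k]
              w += mu * np.sign(e) * np.sign(u)
          return w, y

      rng = np.random.default_rng(3)
      x = rng.standard_normal(5000)
      h = np.array([1.0, 0.5, -0.25, 0.125, 0.0, 0.0, 0.0, 0.0])   # unknown system
      d = np.convolve(x, h)[:len(x)]                 # desired signal
      w, _ = sign_sign_lms(x, d)
      print(np.round(w, 2))                          # should approach h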

  3. Adaptive signal processor

    International Nuclear Information System (INIS)

    Walz, H.V.

    1980-07-01

    An experimental, general purpose adaptive signal processor system has been developed, utilizing a quantized (clipped) version of the Widrow-Hoff least-mean-square adaptive algorithm developed by Moschner. The system accommodates 64 adaptive weight channels with 8-bit resolution for each weight. Internal weight update arithmetic is performed with 16-bit resolution, and the system error signal is measured with 12-bit resolution. An adapt cycle of adjusting all 64 weight channels is accomplished in 8 μsec. Hardware of the signal processor utilizes primarily Schottky-TTL type integrated circuits. A prototype system with 24 weight channels has been constructed and tested. This report presents details of the system design and describes basic experiments performed with the prototype signal processor. Finally some system configurations and applications for this adaptive signal processor are discussed

  4. 3081/E processor

    International Nuclear Information System (INIS)

    Kunz, P.F.; Gravina, M.; Oxoby, G.

    1984-04-01

    The 3081/E project was formed to prepare a much improved IBM mainframe emulator for the future. Its design is based on a large amount of experience in using the 168/E processor to increase available CPU power in both online and offline environments. The processor will be at least equal to the execution speed of a 370/168 and up to 1.5 times faster for heavy floating point code. A single processor will thus be at least four times more powerful than the VAX 11/780, and five processors on a system would equal at least the performance of the IBM 3081K. With its large memory space and simple but flexible high speed interface, the 3081/E is well suited for the online and offline needs of high energy physics in the future

  5. Array processor architecture

    Science.gov (United States)

    Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)

    1983-01-01

    A high speed parallel array data processing architecture, fashioned under a computational envelope approach, includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules, and from there the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors operating normally quite independently of each other in a multiprocessing fashion. For data dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel processing fashion on the next instruction. Even when functioning in the parallel processing mode, however, the processors are not lock-stepped but execute their own copy of the program individually unless or until another overall processor array synchronization instruction is issued.

  6. Functional unit for a processor

    NARCIS (Netherlands)

    Rohani, A.; Kerkhoff, Hans G.

    2013-01-01

    The invention relates to a functional unit for a processor, such as a Very Long Instruction Word (VLIW) processor. The invention further relates to a processor comprising at least one such functional unit. The invention further relates to a functional unit and processor capable of mitigating the effect of

  7. LISA package user guide. Part III: SPOP (Statistical POst Processor). Uncertainty and sensitivity analysis for model output. Program description and user guide

    International Nuclear Information System (INIS)

    Saltelli, A.; Homma, T.

    1992-01-01

    This manual is subdivided into three parts. In the third part, the SPOP (Statistical POst Processor) code is described as a tool to perform Uncertainty and Sensitivity Analyses on the output of a User-implemented model. It has been developed at the Joint Research Centre at Ispra as part of the LISA package. SPOP performs Sensitivity Analysis (SA) and Uncertainty Analysis (UA) on a sample output from a Monte Carlo simulation. The sample is generated by the User and contains values of the output variable (in the form of a time series) and values of the input variables for a set of different simulations (runs), which are realised by varying the model input parameters. The User may generate the Monte Carlo sample with the PREP pre-processor, another module of the LISA package. The SPOP code is completely written in FORTRAN 77 using structured programming. Among the tasks performed by the code are the computation of Tchebycheff and Kolmogorov confidence bounds on the output variable (UA), and the use of effective non-parametric statistics to rank the influence of model input parameters (SA). The statistics employed are described in the present manual. 19 refs., 16 figs., 2 tabs. Note: This Part III is a revised version of the previous EUR report N.12700EN (1990).
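
    The two kinds of statistics named above can be illustrated compactly. A sketch, assuming Spearman rank correlation for the non-parametric ranking (SA) and a DKW-style Kolmogorov band on the empirical CDF (UA); these are plausible stand-ins, not necessarily the exact statistics SPOP implements:

```python
import numpy as np
from scipy import stats

def rank_inputs(X, y):
    """SA: rank input parameters by |Spearman rho| with the output."""
    rhos = [stats.spearmanr(X[:, j], y)[0] for j in range(X.shape[1])]
    order = np.argsort(-np.abs(rhos))
    return [(j, rhos[j]) for j in order]

def kolmogorov_band(y, alpha=0.05):
    """UA: a distribution-free Kolmogorov band around the empirical CDF,
    built from the Dvoretzky-Kiefer-Wolfowitz inequality."""
    n = len(y)
    d = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))   # band half-width
    ys = np.sort(y)
    ecdf = np.arange(1, n + 1) / n
    return ys, np.clip(ecdf - d, 0, 1), np.clip(ecdf + d, 0, 1)

# Toy Monte Carlo sample standing in for PREP output: 3 inputs, 1 output.
rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 3))
y = 3 * X[:, 0] + np.sin(6 * X[:, 1]) + 0.1 * rng.standard_normal(200)
print(rank_inputs(X, y))          # input 0 should dominate the ranking
ys, lo, hi = kolmogorov_band(y)   # 95% confidence band on the output CDF
```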

  8. Limit characteristics of digital optoelectronic processor

    Science.gov (United States)

    Kolobrodov, V. G.; Tymchik, G. S.; Kolobrodov, M. S.

    2018-01-01

    In this article, the limiting characteristics of a digital optoelectronic processor are explored. The limits are set by diffraction effects and by the matrix structure of the devices for the input and output of optical signals. The purpose of the present research is to optimize the parameters of the processor's components. The physical and mathematical model developed for the DOEP made it possible to establish the limiting characteristics of the processor, restricted by diffraction effects and the array structure of the input and output equipment, and to optimize the parameters of the processor's components. The diameter of the entrance pupil of the Fourier lens is determined by the size of the SLM and the pixel size of the modulator. To determine the spectral resolution, the concept of an optimum phase is proposed, in which the resolved diffraction maxima coincide with the pixel centers of the radiation detector.
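
    The stated dependence of the entrance pupil on the SLM size and pixel pitch can be made concrete under one plausible reading (not necessarily the authors' exact formulation): a pixelated SLM diffracts into orders with first-order half-angle arcsin(λ/2p), and the Fourier lens pupil must accept that cone from the edge of the SLM. All numbers below are hypothetical:

```python
import numpy as np

# Hypothetical values: 15 mm SLM aperture, 8 µm pixel pitch,
# 100 mm Fourier lens focal length, 633 nm illumination.
lam = 633e-9
pitch = 8e-6
slm_size = 15e-3
f = 100e-3

# First-order diffraction half-angle set by the pixel pitch.
theta = np.arcsin(lam / (2 * pitch))

# Pupil wide enough to pass that cone from every point of the SLM.
pupil = slm_size + 2 * f * np.tan(theta)
print(f"first-order half-angle: {np.degrees(theta):.2f} deg")
print(f"minimum entrance pupil: {pupil * 1e3:.1f} mm")
```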

  9. Accuracy Limitations in Optical Linear Algebra Processors

    Science.gov (United States)

    Batsell, Stephen Gordon

    1990-01-01

    One of the limiting factors in applying optical linear algebra processors (OLAPs) to real-world problems has been the poor achievable accuracy of these processors. Little previous research has been done on determining noise sources from a systems perspective which would include noise generated in the multiplication and addition operations, noise from spatial variations across arrays, and from crosstalk. In this dissertation, we propose a second-order statistical model for an OLAP which incorporates all these system noise sources. We now apply this knowledge to determining upper and lower bounds on the achievable accuracy. This is accomplished by first translating the standard definition of accuracy used in electronic digital processors to analog optical processors. We then employ our second-order statistical model. Having determined a general accuracy equation, we consider limiting cases such as for ideal and noisy components. From the ideal case, we find the fundamental limitations on improving analog processor accuracy. From the noisy case, we determine the practical limitations based on both device and system noise sources. These bounds allow system trade-offs to be made both in the choice of architecture and in individual components in such a way as to maximize the accuracy of the processor. Finally, by determining the fundamental limitations, we show the system engineer when the accuracy desired can be achieved from hardware or architecture improvements and when it must come from signal pre-processing and/or post-processing techniques.

  10. 3081/E processor

    International Nuclear Information System (INIS)

    Kunz, P.F.; Gravina, M.; Oxoby, G.; Trang, Q.; Fucci, A.; Jacobs, D.; Martin, B.; Storr, K.

    1983-03-01

    Since the introduction of the 168/E, emulating processors have been successful over an amazingly wide range of applications. This paper will describe a second generation processor, the 3081/E. This new processor, which is being developed as a collaboration between SLAC and CERN, goes beyond just fixing the obvious faults of the 168/E. Not only will the 3081/E have much more memory space, incorporate many more IBM instructions, and have full double precision floating point arithmetic, but it will also have faster execution times and be much simpler to build, debug, and maintain. The simple interface and reasonable cost of the 168/E will be maintained for the 3081/E.

  11. Universal hybrid quantum processors

    International Nuclear Information System (INIS)

    Vlasov, A.Yu.

    2003-01-01

    A quantum processor (the programmable gate array) is a quantum network with a fixed structure. A space of states is represented as the tensor product of data and program registers. Different unitary operations on the data register correspond to 'loaded' programs without any changing or 'tuning' of the network itself. Due to this property, and to the undesirability of entanglement between program and data registers, the universality of quantum processors is subject to rather strong restrictions. Universal 'stochastic' quantum gate arrays were developed by different authors. It was also proved that 'deterministic' quantum processors with a finite-dimensional space of states may be universal only in an approximate sense. In the present paper it is shown that, using a hybrid system with continuous and discrete quantum variables, it is possible to suggest a design of strictly universal quantum processors. It is also shown that the 'deterministic' limit of specific programmable 'stochastic' U(1) gates (the probability of success approaches unity as the program register becomes infinite), discussed by other authors, may be essentially the same kind of hybrid quantum system used here.

  12. Beyond processor sharing

    NARCIS (Netherlands)

    S. Aalto; U. Ayesta (Urtzi); S.C. Borst (Sem); V. Misra; R. Núñez Queija (Rudesindo)

    2007-01-01

    While the (Egalitarian) Processor-Sharing (PS) discipline offers crucial insights in the performance of fair resource allocation mechanisms, it is inherently limited in analyzing and designing differentiated scheduling algorithms such as Weighted Fair Queueing and Weighted Round-Robin.

  13. Automobile Crash Sensor Signal Processor

    Science.gov (United States)

    1973-11-01

    The crash sensor signal processor described interfaces between an automobile-installed doppler radar and an air bag activating solenoid or equivalent electromechanical device. The processor utilizes both digital and analog techniques to produce an ou...

  14. The Central Trigger Processor (CTP)

    CERN Multimedia

    Franchini, Matteo

    2016-01-01

    The Central Trigger Processor (CTP) receives trigger information from the calorimeter and muon trigger processors, as well as from other trigger sources. It makes the Level-1 decision (L1A) based on a trigger menu.

  15. Processor register error correction management

    Science.gov (United States)

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
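
    The mechanism in this abstract - a compiler-produced table of sensitive registers that the processor duplicates - can be sketched as follows. This is a speculative illustration, not the patented design: all names are invented, and a real scheme would need parity or ECC to know which of the two copies is good (here the duplicate is simply assumed correct):

```python
class DuplicatingRegisterFile:
    """Toy model: registers listed in the error-correction table get a
    shadow copy that is checked, and used for recovery, on every read."""

    def __init__(self, error_correction_table):
        self.sensitive = set(error_correction_table)  # e.g. {"r3", "r7"}
        self.regs = {}
        self.shadow = {}

    def write(self, reg, value):
        self.regs[reg] = value
        if reg in self.sensitive:
            self.shadow[reg] = value          # duplicate register

    def read(self, reg):
        value = self.regs[reg]
        if reg in self.sensitive and self.shadow[reg] != value:
            # Mismatch: one copy was corrupted (e.g. by a particle strike).
            # Simplification: trust the shadow copy and repair in place.
            value = self.shadow[reg]
            self.regs[reg] = value
        return value

rf = DuplicatingRegisterFile(error_correction_table=["r3"])
rf.write("r3", 42)
rf.regs["r3"] = 41          # inject a single-register upset
assert rf.read("r3") == 42  # recovered from the duplicate
```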

  16. The Secondary Organic Aerosol Processor (SOAP v1.0) model: a unified model with different ranges of complexity based on the molecular surrogate approach

    Science.gov (United States)

    Couvidat, F.; Sartelet, K.

    2015-04-01

    In this paper the Secondary Organic Aerosol Processor (SOAP v1.0) model is presented. This model determines the partitioning of organic compounds between the gas and particle phases. It is designed to be modular with different user options depending on the computation time and the complexity required by the user. This model is based on the molecular surrogate approach, in which each surrogate compound is associated with a molecular structure to estimate some properties and parameters (hygroscopicity, absorption into the aqueous phase of particles, activity coefficients and phase separation). Each surrogate can be hydrophilic (condenses only into the aqueous phase of particles), hydrophobic (condenses only into the organic phases of particles) or both (condenses into both the aqueous and the organic phases of particles). Activity coefficients are computed with the UNIFAC (UNIversal Functional group Activity Coefficient; Fredenslund et al., 1975) thermodynamic model for short-range interactions and with the Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients (AIOMFAC) parameterization for medium- and long-range interactions between electrolytes and organic compounds. Phase separation is determined by Gibbs energy minimization. The user can choose between an equilibrium representation and a dynamic representation of organic aerosols (OAs). In the equilibrium representation, compounds in the particle phase are assumed to be at equilibrium with the gas phase. However, recent studies show that the organic aerosol is not at equilibrium with the gas phase because the organic phases could be semi-solid (very viscous liquid phase). The condensation-evaporation of organic compounds could then be limited by the diffusion in the organic phases due to the high viscosity. An implicit dynamic representation of secondary organic aerosols (SOAs) is available in SOAP with OAs divided into layers, the first layer being at the center of the particle (slowly
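
    The equilibrium representation described above amounts, in its simplest form, to standard absorptive-partitioning arithmetic. A generic Pankow-type sketch follows as a stand-in for SOAP's equilibrium option, not its actual code; function names, units and values are illustrative:

```python
import numpy as np

def equilibrium_partitioning(c_total, c_star, c_oa_seed=0.0, iters=100):
    """Equilibrium gas/particle split for organic surrogates.

    c_total and c_star share the same units (e.g. ug m-3); c_star is each
    surrogate's effective saturation concentration (activity effects are
    assumed folded in). Solved by simple fixed-point iteration.
    """
    c_oa = c_oa_seed + 0.1                    # small starting guess
    for _ in range(iters):
        xi = 1.0 / (1.0 + c_star / c_oa)      # particle-phase fraction
        c_oa = c_oa_seed + np.sum(c_total * xi)
    return c_total * xi, c_total * (1 - xi)   # particle, gas

part, gas = equilibrium_partitioning(
    c_total=np.array([1.0, 2.0, 5.0]),        # three surrogates
    c_star=np.array([0.1, 1.0, 10.0]),
    c_oa_seed=2.0)                            # pre-existing absorbing OA
```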

  17. Energy Use in California Wholesale Water Operations: Development and Application of a General Energy Post-Processor for California Water Management Models

    Science.gov (United States)

    Bates, Matthew Earl

    This thesis explores the effects of future water and social conditions on energy consumption in the major pumping and generation facilities of California's interconnected water-delivery system, with particular emphasis on the federally owned Central Valley Project, California-owned State Water Project, and the large locally owned systems in Southern California. Anticipated population growth, technological advancement, climatic changes, urban water conservation, and restrictions of through-Delta pumping will together affect the energy used for water operations and alter statewide water deliveries in complex ways that are often opposing and difficult to predict. Flow modeling with detailed statewide water models is necessary, and the CALVIN economic-engineering optimization model of California's interconnected water-delivery system is used to model eight future water-supply scenarios. Model results detail potential water-delivery patterns for the year 2050, but do not explicitly show the energy impacts of the modeled water operations. Energy analysis of flow results is accomplished with the UC Davis General Energy Post-Processor, a new tool for California water models that generalizes previous efforts at energy modeling and extends embedded-energy analysis to additional models and scenarios. Energy-intensity data come from existing energy post-processors for CalSim II and a recent embedded-energy-in-water study prepared by GEI Consultants and Navigant Consulting for the California Public Utilities Commission. Differences in energy consumption are assessed between modeled scenarios, and comparisons are made between data sources, with implications for future water and energy planning strategies and future modeling efforts. Results suggest that the effects of climate warming on water-delivery energy use could be relatively minimal, that the effects of a 50% reduction in Delta exports can be largely offset by 30% urban water conservation, and that a 30% conservation in

  18. The Molen Polymorphic Media Processor

    NARCIS (Netherlands)

    Kuzmanov, G.K.

    2004-01-01

    In this dissertation, we address high performance media processing based on a tightly coupled co-processor architectural paradigm. More specifically, we introduce a reconfigurable media augmentation of a general purpose processor and implement it into a fully operational processor prototype. The

  19. Dual-core Itanium Processor

    CERN Multimedia

    2006-01-01

    Intel’s first dual-core Itanium processor, code-named "Montecito" is a major release of Intel's Itanium 2 Processor Family, which implements the Intel Itanium architecture on a dual-core processor with two cores per die (integrated circuit). Itanium 2 is much more powerful than its predecessor. It has lower power consumption and thermal dissipation.

  20. Software-defined reconfigurable microwave photonics processor.

    Science.gov (United States)

    Pérez, Daniel; Gasulla, Ivana; Capmany, José

    2015-06-01

    We propose, for the first time to our knowledge, a software-defined reconfigurable microwave photonics signal processor architecture that can be integrated on a chip and is capable of performing all the main functionalities by suitable programming of its control signals. The basic configuration is presented and a thorough end-to-end design model is derived that accounts for the performance of the overall processor, taking into consideration the impact and interdependencies of both its photonic and RF parts. We demonstrate the model's versatility by applying it to several relevant application examples.

  1. Coding for parallel execution of hardware-in-the-loop millimeter-wave scene generation models on multicore SIMD processor architectures

    Science.gov (United States)

    Olson, Richard F.

    2013-05-01

    Rendering of point scatterer based radar scenes for millimeter wave (mmW) seeker tests in real-time hardware-in-the-loop (HWIL) scene generation requires efficient algorithms and vector-friendly computer architectures for complex signal synthesis. New processor technology from Intel implements an extended 256-bit vector SIMD instruction set (AVX, AVX2) in a multi-core CPU design providing peak execution rates of hundreds of GigaFLOPS (GFLOPS) on one chip. Real-world mmW scene generation code can approach peak SIMD execution rates only after careful algorithm and source code design. An effective software design will maintain high computing intensity emphasizing register-to-register SIMD arithmetic operations over data movement between CPU caches or off-chip memories. Engineers at the U.S. Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) applied two basic parallel coding methods to assess new 256-bit SIMD multi-core architectures for mmW scene generation in HWIL. These include use of POSIX threads built on vector library functions and more portable, high-level parallel code based on compiler technology (e.g. OpenMP pragmas and SIMD autovectorization). Since CPU technology is rapidly advancing toward high processor core counts and TeraFLOPS peak SIMD execution rates, it is imperative that coding methods be identified which produce efficient and maintainable parallel code. This paper describes the algorithms used in point scatterer target model rendering, the parallelization of those algorithms, and the execution performance achieved on an AVX multi-core machine using the two basic parallel coding methods. The paper concludes with estimates for scale-up performance on upcoming multi-core technology.
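
    The kernel being vectorized is, at heart, a large coherent complex sum over scatterers. A NumPy rendition of that sum stands in here for the AVX register-to-register arithmetic the paper targets; all parameters are invented:

```python
import numpy as np

# Each scatterer contributes amplitude * exp(-j * 2*pi * 2R / lambda);
# one array expression keeps the arithmetic vectorized instead of
# looping scatterer by scatterer.
rng = np.random.default_rng(0)
n_scat = 100_000
wavelength = 3e-3                    # ~94 GHz mmW seeker (illustrative)
amps = rng.uniform(0.1, 1.0, n_scat)
ranges_m = rng.uniform(900.0, 1100.0, n_scat)

def scene_return(amps, ranges_m, wavelength):
    phase = -2j * np.pi * (2.0 * ranges_m) / wavelength   # two-way path
    return np.sum(amps * np.exp(phase))                   # coherent sum

print(scene_return(amps, ranges_m, wavelength))
```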

  2. Multimode power processor

    Science.gov (United States)

    O'Sullivan, George A.; O'Sullivan, Joseph A.

    1999-01-01

    In one embodiment, a power processor which operates in three modes: an inverter mode wherein power is delivered from a battery to an AC power grid or load; a battery charger mode wherein the battery is charged by a generator; and a parallel mode wherein the generator supplies power to the AC power grid or load in parallel with the battery. In the parallel mode, the system adapts to arbitrary non-linear loads. The power processor may operate on a per-phase basis wherein the load may be synthetically transferred from one phase to another by way of a bumpless transfer which causes no interruption of power to the load when transferring energy sources. Voltage transients and frequency transients delivered to the load when switching between the generator and battery sources are minimized, thereby providing an uninterruptible power supply. The power processor may be used as part of a hybrid electrical power source system which may contain, in one embodiment, a photovoltaic array, diesel engine, and battery power sources.

  3. Mathematical models of human retina.

    Science.gov (United States)

    Tălu, Stefan

    2011-01-01

    To describe the human retina, mathematical models are required due to the absence of complete topographical data. A mathematical formula permits a relatively simple representation for exploring the physical and optical characteristics of the retina with particular parameters. Advanced mathematical models are applied in human vision studies, solid modelling and the biomechanical behaviour of the retina. Accurate modelling of the retina is important in the development of visual prostheses. The objective of this paper is to present an overview of research on modelling the human retina using mathematical models.

  4. Time Manager Software for a Flight Processor

    Science.gov (United States)

    Zoerne, Roger

    2012-01-01

    Data analysis is a process of inspecting, cleaning, transforming, and modeling data to highlight useful information and suggest conclusions. Accurate timestamps and a timeline of vehicle events are needed to analyze flight data. By moving the timekeeping to the flight processor, there is no longer a need for a redundant time source. If each flight processor is initially synchronized to GPS, it can freewheel and maintain fairly accurate time throughout the flight with no additional GPS time messages received. However, additional GPS time messages will ensure an even greater accuracy. When a timestamp is required, a gettime function is called that immediately reads the time-base register.
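
    The freewheeling scheme described above - sync once to GPS, then extrapolate from the local time-base register - can be sketched as follows. The time-base register is emulated with a monotonic counter, and all names are invented:

```python
import time

class FlightClock:
    """Freewheeling mission clock: sync to GPS, then extrapolate from a
    local time-base counter; later GPS messages just trim the drift."""

    def __init__(self, tick_hz=1_000_000_000):
        self.tick_hz = tick_hz          # "time-base register" tick rate
        self.gps_epoch = None
        self.base_at_sync = None

    def sync_to_gps(self, gps_seconds):
        self.gps_epoch = gps_seconds
        self.base_at_sync = time.perf_counter_ns()

    def gettime(self):
        # Read the emulated time-base register and extrapolate from the
        # last sync; calling sync_to_gps again re-anchors the clock.
        elapsed = (time.perf_counter_ns() - self.base_at_sync) / self.tick_hz
        return self.gps_epoch + elapsed

clk = FlightClock()
clk.sync_to_gps(1_400_000_000.0)   # hypothetical GPS time at liftoff
stamp = clk.gettime()              # timestamp for a vehicle event
```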

  5. Distributed processor allocation for launching applications in a massively connected processors complex

    Science.gov (United States)

    Pedretti, Kevin

    2008-11-18

    A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.
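
    A toy rendition of the scheme in this abstract: several allocator instances, one per node in the subset, share a common database of free processors. All names are invented, and the concurrency and fault-tolerance a real system needs are omitted:

```python
class SharedAllocationDB:
    """The common database of allocation state shared by all allocators."""
    def __init__(self, n_processors):
        self.free = set(range(n_processors))

    def claim(self, n):
        if len(self.free) < n:
            return None
        return {self.free.pop() for _ in range(n)}

    def release(self, procs):
        self.free |= procs

class ComputeProcessorAllocator:
    """One allocator instance; a subset of nodes each run one of these."""
    def __init__(self, db):
        self.db = db

    def launch(self, app, n):
        procs = self.db.claim(n)
        if procs is None:
            raise RuntimeError(f"not enough free processors for {app}")
        return procs

db = SharedAllocationDB(n_processors=256)
alloc_a = ComputeProcessorAllocator(db)   # allocator on node A
alloc_b = ComputeProcessorAllocator(db)   # allocator on node B
job1 = alloc_a.launch("app1", 64)
job2 = alloc_b.launch("app2", 128)        # drawn from the same pool
db.release(job1)
```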

  6. Modeling human color categorization

    NARCIS (Netherlands)

    van den Broek, Egon; Schouten, Th.E.; Kisters, P.M.F.

    A unique color space segmentation method is introduced. It is founded on features of human cognition, where 11 color categories are used in processing color. In two experiments, human subjects were asked to categorize color stimuli into these 11 color categories, which resulted in markers for a

  7. Stochastic Models of Human Errors

    Science.gov (United States)

    Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)

    2002-01-01

    Humans play an important role in the overall reliability of engineering systems. More often than not, accidents and system failures are traced to human errors. Therefore, in order to have a meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process by mathematical models is a key to analyzing its contributing factors. The objective of this research effort is therefore to establish stochastic models, substantiated by a sound theoretical foundation, to address the occurrence of human errors in the processing of the Space Shuttle.

  8. Noise limitations in optical linear algebra processors.

    Science.gov (United States)

    Batsell, S G; Jong, T L; Walkup, J F; Krile, T F

    1990-05-10

    A general statistical noise model is presented for optical linear algebra processors. A statistical analysis which includes device noise, the multiplication process, and the addition operation is undertaken. We focus on those processes which are architecturally independent. Finally, experimental results which verify the analytical predictions are also presented.

  9. Trigger and decision processors

    International Nuclear Information System (INIS)

    Franke, G.

    1980-11-01

    In recent years there have been many attempts in high energy physics to make trigger and decision processes faster and more sophisticated. This became necessary due to the continual increase in the number of sensitive detector elements in wire chambers and calorimeters, and it became possible because of the fast developments in integrated circuit technology. In this paper the present situation will be reviewed. The discussion will be mainly focussed upon event filtering by pure software methods and - rather more hardware related - microprogrammable processors as well as random access memory triggers. (orig.)

  10. Video frame processor

    International Nuclear Information System (INIS)

    Joshi, V.M.; Agashe, Alok; Bairi, B.R.

    1993-01-01

    This report provides a technical description of the Video Frame Processor (VFP) developed at Bhabha Atomic Research Centre. The instrument captures video images available in CCIR format. Two memory planes, each with a capacity of 512 x 512 x 8 bits, enable storage of two video image frames. The stored image can be processed on-line, and on-line image subtraction can also be carried out for image comparison. The VFP is a PC add-on board and is I/O mapped within the host IBM PC/AT compatible computer. (author). 9 refs., 4 figs., 19 photographs

  11. Integrated Environmental Modelling: Human decisions, human challenges

    Science.gov (United States)

    Glynn, Pierre D.

    2015-01-01

    Integrated Environmental Modelling (IEM) is an invaluable tool for understanding the complex, dynamic ecosystems that house our natural resources and control our environments. Human behaviour affects the ways in which the science of IEM is assembled and used for meaningful societal applications. In particular, human biases and heuristics reflect adaptation and experiential learning to issues with frequent, sharply distinguished, feedbacks. Unfortunately, human behaviour is not adapted to the more diffusely experienced problems that IEM typically seeks to address. Twelve biases are identified that affect IEM (and science in general). These biases are supported by personal observations and by the findings of behavioural scientists. A process for critical analysis is proposed that addresses some human challenges of IEM and solicits explicit description of (1) represented processes and information, (2) unrepresented processes and information, and (3) accounting for, and cognizance of, potential human biases. Several other suggestions are also made that generally complement maintaining attitudes of watchful humility, open-mindedness, honesty and transparent accountability. These suggestions include (1) creating a new area of study in the behavioural biogeosciences, (2) using structured processes for engaging the modelling and stakeholder communities in IEM, and (3) using ‘red teams’ to increase resilience of IEM constructs and use.

  12. Command and Data Handling Processor

    OpenAIRE

    Perschy, James

    1996-01-01

    This command and data handling processor is designed to perform mission critical functions for the NEAR and ACE spacecraft. For both missions the processor formats telemetry and executes real-time, delayed and autonomy-rule commands. For the ACE mission the processor also performs spin stabilized attitude control. The design is based on the Harris RTX2010 microprocessor and the UTMC Summit MIL-STD-1553 bus controller. Fault tolerant features added include error detection, correction and write...

  13. AMD's 64-bit Opteron processor

    CERN Multimedia

    CERN. Geneva

    2003-01-01

    This talk concentrates on issues that relate to obtaining peak performance from the Opteron processor. Compiler options, memory layout, MPI issues in multi-processor configurations and the use of a NUMA kernel will be covered. A discussion of recent benchmarking projects and results will also be included.BiographiesDavid RichDavid directs AMD's efforts in high performance computing and also in the use of Opteron processors...

  14. The Meteorology-Chemistry Interface Processor (MCIP) for the CMAQ Modeling System: Updates through MCIPv3.4.1

    Science.gov (United States)

    The Community Multiscale Air Quality (CMAQ) modeling system is a state-of-the-science regional air quality modeling system. The CMAQ modeling system has been primarily developed by the U.S. Environmental Protection Agency, and it has been publicly and freely available for more...

  15. The Another Assimilation System for WRF-Chem (AAS4WRF): a new mass-conserving emissions pre-processor for WRF-Chem regional modelling

    Science.gov (United States)

    Vara Vela, A. L.; Muñoz, A.; Lomas, A., Sr.; González, C. M.; Calderon, M. G.; Andrade, M. D. F.

    2017-12-01

    The Weather Research and Forecasting with Chemistry (WRF-Chem) community model has been widely used for the study of pollutant transport and the formation of secondary pollutants, as well as for assessing the implementation of air quality policies. A key factor in improving WRF-Chem air quality simulations over urban areas is the representation of anthropogenic emission sources. Several tools are available to assist users in creating their own emissions based on global emissions information (e.g. anthro_emiss, prep_chem_src); however, there is at this time no single tool that will construct local emissions input datasets for any particular domain. Because the official emissions pre-processor (emiss_v03) is designed to work with domains located over North America, this work presents the Another Assimilation System for WRF-Chem (AAS4WRF), an NCL-based mass-conserving emissions pre-processor designed to create WRF-Chem-ready emissions files from local inventories on a lat/lon projection. AAS4WRF is appropriate for scaling emission rates from both surface and elevated sources, providing users an alternative way to assimilate their emissions into WRF-Chem. Since it was first successfully tested for the city of Lima, Peru in 2014 (managed by SENAMHI, the National Weather Service of the country), several air quality modelling studies have applied this utility to convert their emissions to those required by WRF-Chem. Two case studies performed in the metropolitan areas of Sao Paulo and Manizales, in Brazil and Colombia respectively, are presented here in order to analyse the influence of using local or global emission inventories on the representation of regulated air pollutants such as O3 and PM2.5. Although AAS4WRF works with local emissions information at the moment, further work is being conducted to make it compatible with global/regional emissions data file formats. The tool is freely available upon request to the corresponding author.

  16. Analog processor for electroluminescent detector

    International Nuclear Information System (INIS)

    Belkin, V.S.

    1988-01-01

    An analog processor for the spectrometric channel of a soft X-ray electroluminescent detector is described. A time-interval spectrometric measurer (TIM) with 1 ns/channel speed serves as the signal analyzer. The analog processor restores the signal's DC component, integrates the detector signals and generates control pulses at the TIM input, provides signal discrimination by amplitude and duration, and counts the number of input pulses per measuring cycle. A flowsheet of the analog processor and its main characteristics are presented. The analog processor dead time is 0.5-5 ms. The signal-to-noise ratio is ≥ 500. The scale integral nonlinearity is < 2%.

  17. Spaceborne Processor Array

    Science.gov (United States)

    Chow, Edward T.; Schatzel, Donald V.; Whitaker, William D.; Sterling, Thomas

    2008-01-01

    A Spaceborne Processor Array in Multifunctional Structure (SPAMS) can lower the total mass of the electronic and structural overhead of spacecraft, resulting in reduced launch costs, while increasing the science return through dynamic onboard computing. SPAMS integrates the multifunctional structure (MFS) and the Gilgamesh Memory, Intelligence, and Network Device (MIND) multi-core in-memory computer architecture into a single-system super-architecture. This transforms every inch of a spacecraft into a sharable, interconnected, smart computing element to increase computing performance while simultaneously reducing mass. The MIND in-memory architecture provides a foundation for high-performance, low-power, and fault-tolerant computing. The MIND chip has an internal structure that includes memory, processing, and communication functionality. The Gilgamesh is a scalable system comprising multiple MIND chips interconnected to operate as a single, tightly coupled, parallel computer. The array of MIND components shares a global, virtual name space for program variables and tasks that are allocated at run time to the distributed physical memory and processing resources. Individual processor-memory nodes can be activated or powered down at run time to provide active power management and to configure around faults. A SPAMS system is comprised of a distributed Gilgamesh array built into MFS, interfaces into instrument and communication subsystems, a mass storage interface, and a radiation-hardened flight computer.

  18. Conceptual model of a logical system processor of selection to electrical filters for correction of harmonics in low voltage lines

    Science.gov (United States)

    Lastre, Arlys; Torriente, Ives; Méndez, Erik F.; Cordovés, Alexis

    2017-06-01

    In the present investigation, the authors propose a conceptual model for the analysis and decision making involved in selecting corrective models for the mitigation of harmonic distortion. The authors considered the configuration of conventional models as well as adaptive models, such as the incorporation of filters based on artificial neural networks (ANNs; RNAs in the Spanish acronym), for the mitigating effect. The work also presents an experimental model that learns, described by means of a flowchart, which highlights the need for artificial intelligence techniques in the formulation of the proposed model. The other aspects considered and analyzed are the adaptability and usage of the model, with local reference to the power quality laws and guidelines demanded by the Ministry of Electricity and Renewable Energy (MEER) of Ecuador.

  19. Benchmark Comparison of Dual- and Quad-Core Processor Linux Clusters with Two Global Climate Modeling Workloads

    Science.gov (United States)

    McGalliard, James

    2008-01-01

    This viewgraph presentation details the science and systems environments that the NASA High-End Computing program serves. Included is a discussion of the workload involved in the processing for Global Climate Modeling. The Goddard Earth Observing System Model, Version 5 (GEOS-5) is a system of models integrated using the Earth System Modeling Framework (ESMF). The GEOS-5 system was used for the benchmark tests, and the results of the tests are shown and discussed. Tests were also run for the Cubed Sphere system; results for these tests are also shown.

  1. Never Trust Your Word Processor

    Science.gov (United States)

    Linke, Dirk

    2009-01-01

    In this article, the author talks about the autocorrection mode of word processors that leads to a number of problems and describes an example in biochemistry exams that shows how word processors can lead to mistakes in databases and in papers. The author contends that, where this system is applied, spell checking should not be left to a word…

  2. Parallelization and improvements of the generalized born model with a simple sWitching function for modern graphics processors.

    Science.gov (United States)

    Arthur, Evan J; Brooks, Charles L

    2016-04-15

    Two fundamental challenges of simulating biologically relevant systems are the rapid calculation of the energy of solvation and the trajectory length of a given simulation. The Generalized Born model with a Simple sWitching function (GBSW) addresses these issues by using an efficient approximation of Poisson-Boltzmann (PB) theory to calculate each solute atom's free energy of solvation, the gradient of this potential, and the subsequent forces of solvation without the need for explicit solvent molecules. This study presents a parallel refactoring of the original GBSW algorithm and its implementation on newly available, low cost graphics chips with thousands of processing cores. Depending on the system size and nonbonded force cutoffs, the new GBSW algorithm offers speed increases of between one and two orders of magnitude over previous implementations while maintaining similar levels of accuracy. We find that much of the algorithm scales linearly with an increase of system size, which makes this water model cost effective for solvating large systems. Additionally, we utilize our GPU-accelerated GBSW model to fold the model system chignolin, and in doing so we demonstrate that these speed enhancements now make accessible folding studies of peptides and potentially small proteins. © 2016 Wiley Periodicals, Inc.

  3. Embedded Processor Oriented Compiler Infrastructure

    Directory of Open Access Journals (Sweden)

    DJUKIC, M.

    2014-08-01

    In recent years, research into special compiler techniques and algorithms for embedded processors has broadened the knowledge of how to achieve better compiler performance for irregular processor architectures. However, industrial-strength compilers, besides the ability to generate efficient code, must also be robust, understandable, maintainable, and extensible. This raises the need for a compiler infrastructure that provides means for the convenient implementation of embedded-processor-oriented compiler techniques. The Cirrus Logic Coyote 32 DSP is an example that shows how traditional compiler infrastructure is not able to cope with the problem. That is why a new compiler infrastructure was developed for this processor, based on research in the field of embedded system software tools and experience in the development of industrial-strength compilers. The new infrastructure is described in this paper. Compiler-generated code quality is compared with code generated by the previous compiler for the same processor architecture.

  4. Human mobility: Models and applications

    Science.gov (United States)

    Barbosa, Hugo; Barthelemy, Marc; Ghoshal, Gourab; James, Charlotte R.; Lenormand, Maxime; Louail, Thomas; Menezes, Ronaldo; Ramasco, José J.; Simini, Filippo; Tomasini, Marcello

    2018-03-01

    Recent years have witnessed an explosion of extensive geolocated datasets related to human movement, enabling scientists to quantitatively study individual and collective mobility patterns, and to generate models that can capture and reproduce the spatiotemporal structures and regularities in human trajectories. The study of human mobility is especially important for applications such as estimating migratory flows, traffic forecasting, urban planning, and epidemic modeling. In this survey, we review the approaches developed to reproduce various mobility patterns, with the main focus on recent developments. This review can be used both as an introduction to the fundamental modeling principles of human mobility, and as a collection of technical methods applicable to specific mobility-related problems. The review organizes the subject by differentiating between individual and population mobility and also between short-range and long-range mobility. Throughout the text the description of the theory is intertwined with real-world applications.

  5. A natural human hand model

    NARCIS (Netherlands)

    Van Nierop, O.A.; Van der Helm, A.; Overbeeke, K.J.; Djajadiningrat, T.J.P.

    2007-01-01

    We present a skeletal linked model of the human hand that has natural motion. We show how this can be achieved by introducing a new biology-based joint axis that simulates natural joint motion and a set of constraints that reduce an estimated 150 possible motions to twelve. The model is based on

  6. Making CSB + -Trees Processor Conscious

    DEFF Research Database (Denmark)

    Samuel, Michael; Pedersen, Anders Uhl; Bonnet, Philippe

    2005-01-01

    Cache-conscious indexes, such as the CSB+-tree, are sensitive to the underlying processor architecture. In this paper, we focus on how to adapt the CSB+-tree so that it performs well on a range of different processor architectures. Previous work has focused on the impact of node size on the performance of the CSB+-tree. We argue that it is necessary to consider a larger group of parameters in order to adapt CSB+-tree to processor architectures as different as Pentium and Itanium. We identify this group of parameters and study how it impacts the performance of CSB+-tree on Itanium 2. Finally, we propose...

  7. Mathematical models of human behavior

    DEFF Research Database (Denmark)

    Møllgaard, Anders Edsberg

    During the last 15 years there has been an explosion in human behavioral data caused by the emergence of cheap electronics and online platforms. This has spawned a whole new research field called computational social science, which takes a quantitative approach to the study of human behavior. Most studies have considered data sets with just one behavioral variable, such as email communication. The Social Fabric interdisciplinary research project is an attempt to collect a more complete data set on human behavior by providing 1000 smartphones with pre-installed data collection software to students... data set, along with work on other behavioral data. The overall goal is to contribute to a quantitative understanding of human behavior using big data and mathematical models. Central to the thesis is the determination of the predictability of different human activities. Upper limits are derived...

  8. Distributed processor systems

    International Nuclear Information System (INIS)

    Zacharov, B.

    1976-01-01

    In recent years, there has been a growing tendency in high-energy physics and in other fields to solve computational problems by distributing tasks among the resources of inter-coupled processing devices and associated system elements. This trend has gained further momentum more recently with the increased availability of low-cost processors and with the development of the means of data distribution. In two lectures, the broad question of distributed computing systems is examined and the historical development of such systems reviewed. An attempt is made to examine the reasons for the existence of these systems and to discern the main trends for the future. The components of distributed systems are discussed in some detail and particular emphasis is placed on the importance of standards and conventions in certain key system components. The ideas and principles of distributed systems are discussed in general terms, but these are illustrated by a number of concrete examples drawn from the context of the high-energy physics environment. (Auth.)

  9. A* Algorithm for Graphics Processors

    OpenAIRE

    Inam, Rafia; Cederman, Daniel; Tsigas, Philippas

    2010-01-01

    Today's computer games have thousands of agents moving at the same time in areas inhabited by a large number of obstacles. In such an environment it is important to be able to calculate multiple shortest paths concurrently in an efficient manner. The highly parallel nature of the graphics processor suits this scenario perfectly. We have implemented a graphics processor based version of the A* path finding algorithm together with three algorithmic improvements that allow it to work faster and ...
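
    For reference, the serial algorithm being ported is the textbook A*; the GPU version in the paper additionally runs many such searches concurrently, one per agent. A plain sketch on a small grid (the grid and heuristic below are invented):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Plain serial A* with an admissible heuristic h."""
    open_heap = [(h(start), 0, start)]
    g = {start: 0}
    parent = {start: None}
    while open_heap:
        _, gc, node = heapq.heappop(open_heap)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        if gc > g[node]:
            continue                      # stale heap entry
        for nxt, cost in neighbors(node):
            ng = gc + cost
            if ng < g.get(nxt, float("inf")):
                g[nxt] = ng
                parent[nxt] = node
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None

# 4-connected 6x6 grid with one obstacle column; Manhattan heuristic.
blocked = {(2, y) for y in range(4)}
def neighbors(p):
    x, y = p
    for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= q[0] < 6 and 0 <= q[1] < 6 and q not in blocked:
            yield q, 1

path = a_star((0, 0), (5, 0), neighbors, lambda p: abs(p[0] - 5) + abs(p[1]))
```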

  10. Human Modeling For Ground Processing Human Factors Engineering Analysis

    Science.gov (United States)

    Tran, Donald; Stambolian, Damon; Henderson, Gena; Barth, Tim

    2011-01-01

    There have been many advancements and accomplishments over the last few years using human modeling for human factors engineering analysis in the design of spacecraft and launch vehicles. The key methods used for this are motion capture and computer-generated human models. The focus of this paper is to explain the different types of human modeling used currently and in the past at Kennedy Space Center (KSC), and to explain the future plans for human modeling for future spacecraft designs.

  11. Particle simulation on a distributed memory highly parallel processor

    International Nuclear Information System (INIS)

    Sato, Hiroyuki; Ikesaka, Morio

    1990-01-01

    This paper describes parallel molecular dynamics simulation of atoms governed by local force interaction. The space in the model is divided into cubic subspaces and mapped to the processor array of the CAP-256, a distributed memory, highly parallel processor developed at Fujitsu Labs. We developed a new technique to avoid redundant calculation of forces between atoms in different processors. Experiments showed the communication overhead was less than 5%, and the idle time due to load imbalance was less than 11% for two model problems which contain 11,532 and 46,128 argon atoms. From the software simulation, the CAP-II which is under development is estimated to be about 45 times faster than CAP-256 and will be able to run the same problem about 40 times faster than Fujitsu's M-380 mainframe when 256 processors are used. (author)
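
    The redundancy the authors eliminate exists because each pair force is needed by two atoms. The serial analogue of their fix is to evaluate each pair once and reuse it with opposite sign (Newton's third law), as in this sketch with an illustrative Lennard-Jones force (constants invented):

```python
import numpy as np
from itertools import combinations

def pairwise_forces(pos, eps=1.0, sig=1.0):
    """Compute each interacting pair exactly once and apply Newton's
    third law, so no force evaluation is repeated."""
    forces = np.zeros_like(pos)
    for i, j in combinations(range(len(pos)), 2):
        r = pos[i] - pos[j]
        d2 = float(np.dot(r, r))
        # Lennard-Jones force vector on atom i due to atom j.
        f = 24 * eps * (2 * (sig**2 / d2)**6 - (sig**2 / d2)**3) / d2 * r
        forces[i] += f     # force on i from j ...
        forces[j] -= f     # ... reused, with opposite sign, for j
    return forces

pos = np.random.default_rng(0).uniform(0, 5, size=(32, 3))
f = pairwise_forces(pos)
assert np.allclose(f.sum(axis=0), 0.0)   # third law: total force cancels
```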

  12. 7 CFR 926.13 - Processor.

    Science.gov (United States)

    2010-01-01

    Processor means any person who receives or acquires fresh or frozen cranberries or cranberries in the form of concentrate from handlers, producer-handlers, importers, brokers or other processors and...

  13. 40 CFR 791.45 - Processors.

    Science.gov (United States)

    2010-07-01

    § 791.45 Processors. (a) Generally, processors will be... processors will have a responsibility to provide reimbursement directly to those paying for the testing: (1...

  14. Seismometer array station processors

    International Nuclear Information System (INIS)

    Key, F.A.; Lea, T.G.; Douglas, A.

    1977-01-01

    A description is given of the design, construction and initial testing of two types of Seismometer Array Station Processor (SASP), one to work with data stored on magnetic tape in analogue form, the other with data in digital form. The purpose of a SASP is to detect the short period P waves recorded by a UK-type array of 20 seismometers and to edit these onto a digital library tape or disc. The edited data are then processed to obtain a rough location for the source and to produce seismograms (after optimum processing) for analysis by a seismologist. SASPs are an important component in the scheme for monitoring underground explosions advocated by the UK in the Conference of the Committee on Disarmament. With digital input a SASP can operate at 30 times real time using a linear detection process and at 20 times real time using the log detector of Weichert. Although the log detector is slower, it has the advantage over the linear detector that signals with lower signal-to-noise ratio can be detected and spurious large amplitudes are less likely to produce a detection. It is recommended, therefore, that where possible array data should be recorded in digital form for input to a SASP and that the log detector of Weichert be used. Trial runs show that a SASP is capable of detecting signals down to signal-to-noise ratios of about two with very few false detections, and at mid-continental array sites it should be capable of detecting most, if not all, the signals with magnitude above m_b 4.5; the UK argues that, given a suitable network, it is realistic to hope that sources of this magnitude and above can be detected and identified by seismological means alone. (author)
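
    The detection step can be illustrated with a generic short-term/long-term average power detector. This is a stand-in for the linear and log detectors the report compares; Weichert's algorithm itself is not reproduced, and all constants below are invented:

```python
import numpy as np

def sta_lta_detect(x, fs, sta_s=1.0, lta_s=30.0, threshold=2.0):
    """Flag samples where short-term average power exceeds the
    long-term average by a threshold factor."""
    p = x * x
    sta_n, lta_n = int(sta_s * fs), int(lta_s * fs)
    sta = np.convolve(p, np.ones(sta_n) / sta_n, mode="same")
    lta = np.convolve(p, np.ones(lta_n) / lta_n, mode="same")
    ratio = sta / np.maximum(lta, 1e-12)
    return np.flatnonzero(ratio > threshold)   # indices of triggers

fs = 20.0                                      # 20 samples/s short-period data
rng = np.random.default_rng(0)
x = rng.standard_normal(int(120 * fs))         # two minutes of noise
x[1500:1540] += 4.0 * np.sin(np.linspace(0, 12 * np.pi, 40))  # small P wave
hits = sta_lta_detect(x, fs)                   # clusters around sample 1500
```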

  15. Flexible Bayesian Human Fecundity Models.

    Science.gov (United States)

    Kim, Sungduk; Sundaram, Rajeshwari; Buck Louis, Germaine M; Pyper, Cecilia

    2012-12-01

    Human fecundity is an issue of considerable interest for both epidemiological and clinical audiences, and is dependent upon a couple's biologic capacity for reproduction coupled with behaviors that place a couple at risk for pregnancy. Bayesian hierarchical models have been proposed to better model the conception probabilities by accounting for the acts of intercourse around the day of ovulation, i.e., during the fertile window. These models can be viewed in the framework of a generalized nonlinear model with an exponential link. However, a fixed choice of link function may not always provide the best fit, leading to potentially biased estimates for probability of conception. Motivated by this, we propose a general class of models for fecundity by relaxing the choice of the link function under the generalized nonlinear model framework. We use a sample from the Oxford Conception Study (OCS) to illustrate the utility and fit of this general class of models for estimating human conception. Our findings reinforce the need for attention to be paid to the choice of link function in modeling conception, as it may bias the estimation of conception probabilities. Various properties of the proposed models are examined and a Markov chain Monte Carlo sampling algorithm was developed for implementing the Bayesian computations. The deviance information criterion measure and logarithm of pseudo marginal likelihood are used for guiding the choice of links. The supplemental material section contains technical details of the proof of the theorem stated in the paper, and contains further simulation results and analysis.

  16. Modelling biased human trust dynamics

    NARCIS (Netherlands)

    Hoogendoorn, M.; Jaffry, S.W.; Maanen, P.P. van; Treur, J.

    2013-01-01

    Within human trust-related behaviour, non-rational behaviour can often be observed, according to the literature from the domains of Psychology and the Social Sciences. Current trust models that have been developed typically do not incorporate non-rational elements in the trust formation

  17. Human Resource Models: An Overview.

    Science.gov (United States)

    1982-11-01

    [Report text not cleanly extracted. Recoverable fragments concern simulation models in which human performance plays an important part; see A.I. Siegel and J.J. Wolf, "Digital Behavioral Simulation--State-of-the-Art".]

  18. Animal models for human diseases.

    Science.gov (United States)

    Rust, J H

    1982-01-01

    The use of animal models for the study of human disease is, for the most part, a recent development. This discussion of the use of animal models for human diseases directs attention to the sterile period, early advances, some personal experiences, the human as the model, biological oddities among common laboratory animals, malignancies in laboratory animals, problems created by federal regulations, cancer tests with animals, and what the future holds in terms of the use of animal models as an aid to understanding human disease. In terms of early use of animal models, there was a school of rabbis, some of whom were also physicians, in Babylon who studied and wrote extensively on ritual slaughter and the suitability of birds and beasts for food. Considerable detailed information on animal pathology, physiology, anatomy, and medicine in general can be found in the Soncino Babylonian Talmudic Translations. The 1906 edition of the "Jewish Encyclopedia" has been a rich resource. Although it has not been possible to establish what diseases of animals were studied and their relationship to the diseases of humans, there are fascinating clues to pursue, despite the fact that these were sterile years for research in medicine. The quotation from the Talmud is of interest: "The medical knowledge of the Talmudist was based upon tradition, the dissection of human bodies, observation of disease and experiments upon animals." A bright light in the lackluster years of medical research was provided by Galen, considered the originator of research in physiology and anatomy. His dissection of animals and work on apes and other lower animals were models for human anatomy and physiology and the bases for many treatises. Yet, Galen never seemed to suggest that animals could serve as models for human diseases. Most early physicians who can be considered to have been students of disease developed their medical knowledge by observing the sick under their care. One early medical investigator

  19. Ssip-a processor interconnection simulator

    Energy Technology Data Exchange (ETDEWEB)

    Navaux, P.; Weber, R.; Prezzi, J.; Tazza, M.

    1982-01-01

    Recent growing interest in multiple processor architectures has given rise to the study of processor-memory interconnections for the determination of better architectures. This paper concerns the development of the SSIP - sistema simulador de interconexao de processadores (processor interconnection simulating system) - which allows the evaluation of different interconnection structures, comparing their performance in order to provide parameters which help the designer to define an architecture. A wide spectrum of systems may be evaluated, and their behaviour observed, due to the features incorporated into the simulator program. The system modelling and the simulator program implementation are described. Some results that can be obtained are shown, along with a discussion of their usefulness. 12 references.

  20. Use of data assimilation procedures in the meteorological pre-processors of decision support systems to improve the meteorological input of atmospheric dispersion models

    International Nuclear Information System (INIS)

    Kovalets, I.; Andronopoulos, S.; Bartzis, J.G.

    2003-01-01

    The Atmospheric Dispersion Models (ADMs) play a key role in decision support systems for nuclear emergency management, as they are used to determine the current, and predict the future, spatial distribution of radionuclides after an accidental release of radioactivity to the atmosphere. Meteorological pre-processors (MPPs) usually act as the interface between the ADMs and the incoming meteorological data. Therefore the quality of the results of the ADMs crucially depends on the input that they receive from the MPPs. The meteorological data are measurements from one or more stations in the vicinity of the nuclear power plant and/or prognostic data from Numerical Weather Prediction (NWP) models of National Weather Services. The measurements are representative of the past and current local conditions, while the NWP data cover a wider range in space and future time, where no measurements exist. In this respect, the simultaneous use of both by an MPP immediately poses the questions of consistency and of the appropriate methodology for reconciliation of the two kinds of meteorological data. The main objective of the work presented in this paper is the introduction of data assimilation (DA) techniques in the MPP of the RODOS (Real-time On-line Decision Support) system for nuclear emergency management in Europe, developed under the European Project 'RODOS-Migration', to reconcile the NWP data with the local observations coming from the meteorological stations. More specifically, in this paper: the methodological approach for the simultaneous use of both meteorological measurements and NWP data in the MPP is presented; the method is validated by comparing results of calculations with experimental data; future ways of improving the meteorological input for the calculations of atmospheric dispersion in the RODOS system are discussed. The methodological approach for solving the DA problem developed in this work is based on the method of optimal interpolation (OI
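
    The optimal interpolation named at the end of the abstract has a standard analysis equation, x_a = x_b + K(y - H x_b) with gain K = B H^T (H B H^T + R)^-1. A minimal sketch with invented covariances and a toy wind profile; this shows the textbook OI step, not the RODOS implementation:

```python
import numpy as np

def optimal_interpolation(xb, B, H, R, y):
    """Textbook OI analysis step: blend background xb with observations y
    according to background (B) and observation (R) error covariances."""
    S = H @ B @ H.T + R                                 # innovation covariance
    K = B @ H.T @ np.linalg.solve(S, np.eye(len(y)))    # gain
    return xb + K @ (y - H @ xb)

# Toy setting: a 5-node NWP wind profile corrected by 2 mast observations.
xb = np.array([2.0, 2.5, 3.0, 3.5, 4.0])            # background (NWP) values
H = np.zeros((2, 5)); H[0, 1] = 1; H[1, 3] = 1      # obs sample nodes 1 and 3
B = 0.5 * np.exp(-np.abs(np.subtract.outer(np.arange(5), np.arange(5))) / 2.0)
R = 0.1 * np.eye(2)                                 # observation error cov.
y = np.array([3.1, 3.2])                            # measured winds
xa = optimal_interpolation(xb, B, H, R, y)          # corrected profile
```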

  1. Holistic Modeling for Human-Autonomous System Interaction

    Science.gov (United States)

    2015-01-01

    Reference fragments from this record: ... human processor (QN-MHP): a computational architecture for multitask performance in human-machine systems ... Cancer Screening for Older US Women. North Carolina State University, Raleigh. Venkateswaran ...

  2. Zebrafish models for human cancer.

    Science.gov (United States)

    Shive, H R

    2013-05-01

    For decades, the advancement of cancer research has relied on in vivo models for examining key processes in cancer pathogenesis, including neoplastic transformation, progression, and response to therapy. These studies, which have traditionally relied on rodent models, have engendered a vast body of scientific literature. Recently, experimental cancer researchers have embraced many new and alternative model systems, including the zebrafish (Danio rerio). The general benefits of the zebrafish model for laboratory investigation, such as cost, size, fecundity, and generation time, were quickly superseded by the discovery that zebrafish are amenable to a wide range of investigative techniques, many of which are difficult or impossible to perform in mammalian models. These advantages, coupled with the finding that many aspects of carcinogenesis are conserved in zebrafish as compared with humans, have firmly established a unique niche for the zebrafish model in comparative cancer research. This article introduces methods for generating cancer models in zebrafish and reviews a range of models that have been developed for specific cancer types.

  3. Libera Electron Beam Position Processor

    CERN Document Server

    Ursic, Rok

    2005-01-01

    Libera is a product family delivering unprecedented possibilities for either building powerful single station solutions or architecting complex feedback systems in the field of accelerator instrumentation and controls. This paper presents the functionality and field performance of its first member, the electron beam position processor. It offers superior performance, with multiple measurement channels simultaneously delivering position measurements in digital format with MHz, kHz and Hz bandwidths. This all-in-one product, facilitating pulsed and CW measurements, is much more than simply a high performance beam position measuring device delivering micrometer level reproducibility with sub-micrometer resolution. Rich connectivity options and innate processing power make it a powerful feedback building block. By interconnecting multiple Libera electron beam position processors one can build a low-latency, high throughput orbit feedback system without adding additional hardware. Libera electron beam position processor ...

  4. XL-100S microprogrammable processor

    International Nuclear Information System (INIS)

    Gorbunov, N.V.; Guzik, Z.; Sutulin, V.A.; Forytski, A.

    1983-01-01

    The XL-100S microprogrammable processor, providing the multiprocessor operation mode in the XL system crate, is described. The processor meets the EUR 6500 CAMAC standards, addresses up to 4 Mbytes of memory, and interacts with 7 CAMAC branches. Eight external requests initiate operations preset by a sequence of microcommands held in a memory with a capacity of up to 64 kwords of 32 bits. The microprocessor architecture allows one to emulate the commands of the majority of mini- or micro-computers, including floating point operations. The XL-100S processor may be used in various branches of experimental physics: for physical experiment apparatus control, fast selection of useful physical events, organization of input/output operations, organization of direct memory access, etc. The Am2900 microprocessor set is used as the elementary base. The device is made in the form of a single-width CAMAC module.

  5. Fast processor for dilepton triggers

    International Nuclear Information System (INIS)

    Katsanevas, S.; Kostarakis, P.; Baltrusaitis, R.

    1983-01-01

    We describe a fast trigger processor, developed for and used in Fermilab experiment E-537, for selecting high-mass dimuon events produced by negative pions and anti-protons. The processor finds candidate tracks by matching hit information received from drift chambers and scintillation counters, and determines their momenta. Invariant masses are calculated for all possible pairs of tracks and an event is accepted if any invariant mass is greater than some preselectable minimum mass. The whole process, accomplished within 5 to 10 microseconds, achieves up to a ten-fold reduction in trigger rate
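
    The quantity the processor cuts on is the usual two-track invariant mass (standard relativistic kinematics with c = 1, assuming the muon mass m_mu for each track):

        M^2 = (E_1 + E_2)^2 - \lVert \vec{p}_1 + \vec{p}_2 \rVert^2,
        \qquad E_i = \sqrt{\lVert \vec{p}_i \rVert^2 + m_\mu^2}

    An event passes whenever M for any track pair exceeds the preselected minimum.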

  6. Optical Array Processor: Laboratory Results

    Science.gov (United States)

    Casasent, David; Jackson, James; Vaerewyck, Gerard

    1987-01-01

    A Space Integrating (SI) Optical Linear Algebra Processor (OLAP) is described and laboratory results on its performance in several practical engineering problems are presented. The applications include its use in the solution of a nonlinear matrix equation for optimal control and a parabolic Partial Differential Equation (PDE), the transient diffusion equation with two spatial variables. Frequency-multiplexed, analog and high accuracy non-base-two data encoding are used and discussed. A multi-processor OLAP architecture is described and partitioning and data flow issues are addressed.

  7. Making CSB + -Trees Processor Conscious

    DEFF Research Database (Denmark)

    Samuel, Michael; Pedersen, Anders Uhl; Bonnet, Philippe

    2005-01-01

    Cache-conscious indexes, such as the CSB+-tree, are sensitive to the underlying processor architecture. In this paper, we focus on how to adapt the CSB+-tree so that it performs well on a range of different processor architectures. Previous work has focused on the impact of node size on performance ... a systematic method for adapting the CSB+-tree to new platforms. This work is a first step towards integrating the CSB+-tree in MySQL's heap storage manager.

  8. Counseling and Human Sexuality: A Training Model.

    Science.gov (United States)

    Fyfe, Bill

    1980-01-01

    Presents a counseling and human sexuality course model that provides counselors with an information base in human sexuality and assists them in exploring the emotional aspects of sexuality. Human sexuality is a vital aspect of personal development. (Author)

  9. The L0(muon) processor

    CERN Document Server

    Aslanides, Elie; Le Gac, R; Menouni, M; Potheau, R; Tsaregorodtsev, A Yu; Tsaregorodtsev, Andrei

    1999-01-01

    99-008. In this note we review the Marseille implementation of the L0(muon) processor. We describe the data flow, the hardware implementation, and synchronization issues, as well as our first ideas on debugging and monitoring procedures. We also present the performance of the proposed architecture with an estimate of its cost.

  10. Very Long Instruction Word Processors

    Indian Academy of Sciences (India)

    Explicitly Parallel Instruction Computing (EPIC) is an instruction processing paradigm that has been in the spotlight due to its adoption by the next generation of Intel processors, starting with the IA-64. The EPIC processing paradigm is an evolution of the Very Long Instruction Word (VLIW) paradigm. This article gives an ...

  11. A Course on Reconfigurable Processors

    Science.gov (United States)

    Shoufan, Abdulhadi; Huss, Sorin A.

    2010-01-01

    Reconfigurable computing is an established field in computer science. Teaching this field to computer science students demands special attention due to limited student experience in electronics and digital system design. This article presents a compact course on reconfigurable processors, which was offered at the Technische Universitat Darmstadt,…

  12. GENERALIZED PROCESSOR SHARING (GPS) TECHNIQUES

    African Journals Online (AJOL)

    Olumide

    popular technique, Generalized Processor Sharing (GPS), provided an effective and efficient utilization of the available resources in the face of stringent and varied QoS requirements. This paper, therefore, presents a comparison of two GPS techniques, PGPS and CDGPS, based on performance with limited resources ...

  13. Very Long Instruction Word Processors

    Indian Academy of Sciences (India)

    memory stage. The fetch stage fetches instructions from the cache. In this stage, current day processors (like the IA-64) also incorporate a branch prediction unit. The branch prediction unit predicts the direction of branch instructions and speculatively fetches instructions from the predicted path. This is necessary to keep the ...

  14. Very Long Instruction Word Processors

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 6, Issue 12, December 2001, pp. 61-68. Permanent link: http://www.ias.ac.in/article/fulltext/reso/006/12/0061-0068

  15. Modeling Forces on the Human Body.

    Science.gov (United States)

    Pagonis, Vasilis; Drake, Russel; Morgan, Michael; Peters, Todd; Riddle, Chris; Rollins, Karen

    1999-01-01

    Presents five models of the human body as a mechanical system which can be used in introductory physics courses: human arms as levers, humans falling from small heights, a model of the human back, collisions during football, and the rotating gymnast. Gives ideas for discussions and activities, including Interactive Physics (TM) simulations. (WRM)

  16. Development and test of model apparatus of non-contact spin processor for photo mask production applying radial-type superconducting magnetic bearing

    International Nuclear Information System (INIS)

    Saito, Kimiyo; Fukui, Satoshi; Maezawa, Masaru; Ogawa, Jun; Oka, Tetsuo; Sato, Takao

    2013-01-01

    Highlights: We develop a test spinner for a non-contact spinning process in photo mask production. This test spinner shows improved spinning ability compared with our previous one. Large vertical movement of the turntable still occurs during acceleration. A method to control the vertical movement of the turntable should be developed in the next step. -- Abstract: In semiconductor devices, miniaturization of circuit patterning on wafers is required for higher integration of circuit elements. Therefore, very high tolerance and quality are also required for the patterning of the microstructures of photo masks. The deposition of particulate dusts, generated from the mechanical bearings of the spin processor, in the patterns of the photo mask is one of the main causes of the deterioration of pattern precision. In our R and D, the application of magnetic bearings utilizing bulk high temperature superconductors to spin processors has been proposed. In this study, we develop a test spinner for the non-contact spinning process in the photo mask production system. The rotation test using this test spinner shows that it accomplishes an improvement in spinning ability compared with the test spinner developed in our previous study. This paper describes the rotation test results of the new test spinner applying the magnetic bearing with bulk high temperature superconductors.

  17. Cassava processors' awareness of occupational and environmental ...

    African Journals Online (AJOL)

    ) is not without hazards both to the environment, the processors, and even the consumers. This study, therefore, investigated cassava processors' awareness of occupational and environmental hazards associated with and factors affecting ...

  18. Vicarious Learning from Human Models in Monkeys

    OpenAIRE

    Falcone, Rossella; Brunamonti, Emiliano; Genovesio, Aldo

    2012-01-01

    We examined whether monkeys can learn by observing a human model, through vicarious learning. Two monkeys observed a human model demonstrating an object-reward association and consuming food found underneath an object. The monkeys observed human models as they solved more than 30 learning problems. For each problem, the human models made a choice between two objects, one of which concealed a piece of apple. In the test phase afterwards, the monkeys made a choice of their own. Learning was app...

  19. Deterministic chaos in the processor load

    International Nuclear Information System (INIS)

    Halbiniak, Zbigniew; Jozwiak, Ireneusz J.

    2007-01-01

    In this article we present the results of research whose purpose was to identify the phenomenon of deterministic chaos in the processor load. We analysed the time series of the processor load during efficiency tests of database software. Our research was done on a Sparc Alpha processor working on the UNIX Sun Solaris 5.7 operating system. The conducted analyses proved the presence of the deterministic chaos phenomenon in the processor load in this particular case

  20. 7 CFR 1215.14 - Processor.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Processor. 1215.14 Section 1215.14 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... Processor. Processor means a person engaged in the preparation of unpopped popcorn for the market who owns...

  1. 7 CFR 989.13 - Processor.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Processor. 989.13 Section 989.13 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... CALIFORNIA Order Regulating Handling Definitions § 989.13 Processor. Processor means any person who receives...

  2. 7 CFR 927.14 - Processor.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Processor. 927.14 Section 927.14 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Order Regulating Handling Definitions § 927.14 Processor. Processor means any person who as owner, agent...

  3. Deformable human body model development

    Energy Technology Data Exchange (ETDEWEB)

    Wray, W.O.; Aida, T.

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). A Deformable Human Body Model (DHBM) capable of simulating a wide variety of deformation interactions between man and his environment has been developed. The model was intended to have applications in automobile safety analysis, soldier survivability studies and assistive technology development for the disabled. To date, we have demonstrated the utility of the DHBM in automobile safety analysis and are currently engaged in discussions with the U.S. military involving two additional applications. More specifically, the DHBM has been incorporated into a Virtual Safety Lab (VSL) for automobile design under contract to General Motors Corporation. Furthermore, we have won $1.8M in funding from the U.S. Army Medical Research and Materiel Command for development of a noninvasive intracranial pressure measurement system. The proposed research makes use of the detailed head model that is a component of the DHBM; the project duration is three years. In addition, we have been contacted by the Air Force Armstrong Aerospace Medical Research Laboratory concerning possible use of the DHBM in analyzing the loads and injury potential to pilots upon ejection from military aircraft. Current discussions with Armstrong involve possible LANL participation in a comparison between the DHBM and the Air Force Articulated Total Body (ATB) model that is the current military standard.

  4. Vicarious learning from human models in monkeys.

    Science.gov (United States)

    Falcone, Rossella; Brunamonti, Emiliano; Genovesio, Aldo

    2012-01-01

    We examined whether monkeys can learn by observing a human model, through vicarious learning. Two monkeys observed a human model demonstrating an object-reward association and consuming food found underneath an object. The monkeys observed human models as they solved more than 30 learning problems. For each problem, the human models made a choice between two objects, one of which concealed a piece of apple. In the test phase afterwards, the monkeys made a choice of their own. Learning was apparent from the first trial of the test phase, confirming the ability of monkeys to learn by vicarious observation of human models.

  5. Vicarious learning from human models in monkeys.

    Directory of Open Access Journals (Sweden)

    Rossella Falcone

    Full Text Available We examined whether monkeys can learn by observing a human model, through vicarious learning. Two monkeys observed a human model demonstrating an object-reward association and consuming food found underneath an object. The monkeys observed human models as they solved more than 30 learning problems. For each problem, the human models made a choice between two objects, one of which concealed a piece of apple. In the test phase afterwards, the monkeys made a choice of their own. Learning was apparent from the first trial of the test phase, confirming the ability of monkeys to learn by vicarious observation of human models.

  6. Communications systems and methods for subsea processors

    Science.gov (United States)

    Gutierrez, Jose; Pereira, Luis

    2016-04-26

    A subsea processor may be located near the seabed of a drilling site and used to coordinate operations of underwater drilling components. The subsea processor may be enclosed in a single interchangeable unit that fits a receptor on an underwater drilling component, such as a blow-out preventer (BOP). The subsea processor may issue commands to control the BOP and receive measurements from sensors located throughout the BOP. A shared communications bus may interconnect the subsea processor and underwater components and the subsea processor and a surface or onshore network. The shared communications bus may be operated according to a time division multiple access (TDMA) scheme.
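
    As a concrete illustration of a TDMA shared bus, the sketch below assigns fixed transmit slots to devices in a round-robin frame; the device names, slot length and layout are hypothetical, not taken from the patent.

        SLOT_MS = 10  # hypothetical slot length in milliseconds
        DEVICES = ["subsea_processor", "bop_sensors", "surface_gateway"]  # hypothetical bus members

        def slot_owner(t_ms: int) -> str:
            """Return the device allowed to transmit on the shared bus at time t_ms."""
            return DEVICES[(t_ms // SLOT_MS) % len(DEVICES)]

        # One full TDMA frame: each device gets exactly one slot per frame.
        for t in range(0, SLOT_MS * len(DEVICES), SLOT_MS):
            print(f"t={t:3d} ms: {slot_owner(t)} transmits")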

  7. Token-Aware Completion Functions for Elastic Processor Verification

    Directory of Open Access Journals (Sweden)

    Sudarshan K. Srinivasan

    2009-01-01

    Full Text Available We develop a formal verification procedure to check that elastic pipelined processor designs correctly implement their instruction set architecture (ISA specifications. The notion of correctness we use is based on refinement. Refinement proofs are based on refinement maps, which—in the context of this problem—are functions that map elastic processor states to states of the ISA specification model. Data flow in elastic architectures is complicated by the insertion of any number of buffers in any place in the design, making it hard to construct refinement maps for elastic systems in a systematic manner. We introduce token-aware completion functions, which incorporate a mechanism to track the flow of data in elastic pipelines, as a highly automated and systematic approach to construct refinement maps. We demonstrate the efficiency of the overall verification procedure based on token-aware completion functions using six elastic pipelined processor models based on the DLX architecture.

  8. Invasive tightly coupled processor arrays

    CERN Document Server

    LARI, VAHID

    2016-01-01

    This book introduces new massively parallel computer (MPSoC) architectures called invasive tightly coupled processor arrays. It proposes strategies, architecture designs, and programming interfaces for invasive TCPAs that allow invading and subsequently executing loop programs with strict requirements or guarantees of non-functional execution qualities such as performance, power consumption, and reliability. For the first time, such a configurable processor array architecture consisting of locally interconnected VLIW processing elements can be claimed by programs, either in full or in part, using the principle of invasive computing. Invasive TCPAs provide unprecedented energy efficiency for the parallel execution of nested loop programs by avoiding the global memory accesses typical of GPUs, and may even support loops with complex dependencies, such as loop-carried dependencies, that are not amenable to parallel execution on GPUs. For this purpose, the book proposes different invasion strategies for claiming a desire...

  9. Implementation of kernels on the Maestro processor

    Science.gov (United States)

    Suh, Jinwoo; Kang, D. I. D.; Crago, S. P.

    Currently, most microprocessors use multiple cores to increase performance while limiting power usage. Some processors use not just a few cores, but tens of cores or even 100 cores. One such many-core microprocessor is the Maestro processor, which is based on Tilera's TILE64 processor. The Maestro chip is a 49-core, general-purpose, radiation-hardened processor designed for space applications. The Maestro processor, unlike the TILE64, has a floating point unit (FPU) in each core for improved floating point performance. The Maestro processor runs at 342 MHz clock frequency. On the Maestro processor, we implemented several widely used kernels: matrix multiplication, vector add, FIR filter, and FFT. We measured and analyzed the performance of these kernels. The achieved performance was up to 5.7 GFLOPS, and the speedup compared to single tile was up to 49 using 49 tiles.
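
    Those headline numbers can be sanity-checked with simple arithmetic (our own back-of-the-envelope calculation, not a figure from the paper):

        gflops = 5.7        # best achieved throughput, from the abstract
        tiles = 49
        clock_hz = 342e6    # per-tile clock, from the abstract

        per_tile_flops = gflops * 1e9 / tiles    # ~1.16e8 FLOP/s per tile
        print(per_tile_flops / clock_hz)         # ~0.34 floating-point ops per cycle per tile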

  10. Human Performance Modeling for Dynamic Human Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Laboratory; Joe, Jeffrey Clark [Idaho National Laboratory; Mandelli, Diego [Idaho National Laboratory

    2015-08-01

    Part of the U.S. Department of Energy's (DOE's) Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk framework. In this paper, we review simulation-based and non-simulation-based human reliability analysis (HRA) methods. This paper summarizes the foundational information needed to develop a feasible approach to modeling human interactions in RISMC simulations.

  11. Benchmarking NWP Kernels on Multi- and Many-core Processors

    Science.gov (United States)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.
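
    Goal (1) is commonly approached with a roofline-style bound relating computational intensity to attainable throughput; the sketch below is generic, and the hardware numbers are placeholders rather than measurements from this work.

        def attainable_gflops(peak_gflops: float, mem_bw_gbs: float, intensity_flop_per_byte: float) -> float:
            """Roofline bound: a kernel is limited either by compute or by memory bandwidth."""
            return min(peak_gflops, mem_bw_gbs * intensity_flop_per_byte)

        # Placeholder machine: 80 GFLOP/s peak, 25 GB/s memory bandwidth.
        for intensity in (0.25, 1.0, 4.0):   # FLOP per byte moved, typical range for NWP kernels
            print(intensity, attainable_gflops(80.0, 25.0, intensity))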

  12. Crowd Human Behavior for Modeling and Simulation

    Science.gov (United States)

    2009-08-06

    Crowd Human Behavior for Modeling and Simulation. Elizabeth Mezzacappa, Ph.D., and Gordon Cooke, MEME, Target Behavioral Response Laboratory, ARDEC. Conference presentation; dates covered: 2008 to 2009. The presentation addresses "understanding human behavior" and "model validation and verification", and focuses on the modeling and simulation of crowds from a social scientist's ...

  13. Biomechanical Modeling of the Human Head

    Science.gov (United States)

    2017-10-03

    Biomechanical modeling, of both humans and animals, has gained momentum for the investigation of traumatic brain injury. These models require both accurate geometric ... between model predictions and experimental data. This report details model calibration for all materials identified in models of a human head and ... Reference fragments in this record include: "Experimental Animal Models for Studies on the Mechanisms of Blast-Induced Neurotrauma," Frontiers in Neurology 3, 30 (2012); R. A. Bauman, G. Ling ...

  14. Processor farming in two-level analysis of historical bridge

    Science.gov (United States)

    Krejčí, T.; Kruis, J.; Koudelka, T.; Šejnoha, M.

    2017-11-01

    This contribution presents a processor farming method in connection with a multi-scale analysis. In this method, each macroscopic integration point or each finite element is connected with a certain meso-scale problem represented by an appropriate representative volume element (RVE). The solution of a meso-scale problem then provides the effective parameters needed on the macro-scale. Such an analysis is suitable for parallel computing because the meso-scale problems can be distributed among many processors. The application of the processor farming method to a real-world masonry structure is illustrated by an analysis of Charles Bridge in Prague. The three-dimensional numerical model simulates the coupled heat and moisture transfer of one half of arch No. 3, and it is part of a complex hygro-thermo-mechanical analysis which has been developed to determine the influence of climatic loading on the current state of the bridge.
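
    The farming pattern itself is compact: one meso-scale solve per macroscopic integration point, distributed over a pool of worker processes. In the sketch below, solve_rve is a stand-in for a real RVE solver, not code from the analysis described here.

        from multiprocessing import Pool

        def solve_rve(point_id: int) -> dict:
            """Stand-in meso-scale solve: return effective macro-scale parameters
            (e.g. an effective conductivity) for one integration point's RVE."""
            return {"point": point_id, "k_eff": 1.0}   # placeholder result

        if __name__ == "__main__":
            points = range(1000)                 # one RVE problem per integration point
            with Pool(processes=8) as farm:      # the processor farm
                effective = farm.map(solve_rve, points)
            print(len(effective), "RVE solves completed")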

  15. Face feature processor on mobile service robot

    Science.gov (United States)

    Ahn, Ho Seok; Park, Myoung Soo; Na, Jin Hee; Choi, Jin Young

    2005-12-01

    In recent years, many mobile service robots have been developed. These robots are different from industrial robots: service robots are confronted with unexpected changes in the human environment, so a service mobile robot needs many capabilities, for example, the capability to recognize people's faces and voices, the capability to understand people's conversation, and the capability to express the robot's thinking, etc. This research considered face detection, face tracking and face recognition from a continuous camera image stream. The face detection module used the CBCH algorithm from Intel Corporation's openCV library. The face tracking module used a fuzzy controller to smoothly control the pan-tilt camera movement based on the face detection result. PCA-FX, which adds class information to PCA, was used for the face recognition module. These three procedures, together called the face feature processor, were implemented on the mobile service robot OMR for verification.
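
    The CBCH detector referred to above is the cascade of boosted classifiers with Haar-like features that OpenCV still ships. A minimal sketch of just the detection stage (tracking and PCA-FX recognition omitted) might look like the following, assuming a default camera at index 0:

        import cv2  # OpenCV, the library the detection module is based on

        # Haar-cascade face detector bundled with OpenCV.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        cap = cv2.VideoCapture(0)                    # continuous camera image stream
        ok, frame = cap.read()
        if ok:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            for (x, y, w, h) in faces:               # face rectangles to hand to a tracker
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cap.release()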

  16. Volterra dendritic stimulus processors and biophysical spike generators with intrinsic noise sources

    OpenAIRE

    Lazar, Aurel A.; Zhou, Yiyin

    2014-01-01

    We consider a class of neural circuit models with internal noise sources arising in sensory systems. The basic neuron model in these circuits consists of a nonlinear dendritic stimulus processor (DSP) cascaded with a biophysical spike generator (BSG). The nonlinear dendritic processor is modeled as a set of nonlinear operators that are assumed to have a Volterra series representation. Biophysical point neuron models, such as the Hodgkin-Huxley neuron, are used to model the spike generator. We...
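
    For reference, a Volterra series representation of the dendritic processor's input-output map u(t) -> v(t) has the standard form (the textbook definition; the paper's kernels k_n are specific instances):

        v(t) = k_0 + \sum_{n=1}^{N} \int_0^{\infty} \cdots \int_0^{\infty}
               k_n(\tau_1, \ldots, \tau_n)\, u(t-\tau_1) \cdots u(t-\tau_n)\,
               d\tau_1 \cdots d\tau_n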

  17. Human mammary microenvironment better regulates the biology of human breast cancer in humanized mouse model.

    Science.gov (United States)

    Zheng, Ming-Jie; Wang, Jue; Xu, Lu; Zha, Xiao-Ming; Zhao, Yi; Ling, Li-Jun; Wang, Shui

    2015-02-01

    During the past decades, many efforts have been made in mimicking the clinical progress of human cancer in mouse models. Previously, we developed a human breast tissue-derived (HB) mouse model. Theoretically, it may mimic the interactions between a "species-specific" mammary microenvironment of human origin and human breast cancer cells. However, detailed evidence is absent. The present study (in vivo, cellular, and molecular experiments) was designed to explore the regulatory role of the human mammary microenvironment in the progress of human breast cancer cells. Subcutaneous (SUB), mammary fat pad (MFP), and HB mouse models were developed for in vivo comparisons. Then, the orthotopic tumor masses from the three different mouse models were collected for primary culture. Finally, the biology of the primary cultured human breast cancer cells was compared by cellular and molecular experiments. Results of the in vivo mouse models indicated that human breast cancer cells grew better in the human mammary microenvironment. Cellular and molecular experiments confirmed that primary cultured human breast cancer cells from the HB mouse model showed a better proliferative and anti-apoptotic biology than those from the SUB and MFP mouse models. Meanwhile, primary cultured human breast cancer cells from the HB mouse model also obtained the migratory and invasive biology for "species-specific" tissue metastasis to human tissues. Comprehensive analyses suggest that a "species-specific" mammary microenvironment of human origin better regulates the biology of human breast cancer cells in our humanized mouse model of breast cancer, which is more consistent with the clinical progress of human breast cancer.

  18. Online track processor for the CDF upgrade

    International Nuclear Information System (INIS)

    Ciobanu, C.; Gertenslager, J.; Hoftiezer, J.

    1999-01-01

    A trigger track processor is being designed for the CDF upgrade. This processor identifies high momentum (P T > 1.5 GeV/c) charged tracks in the new central outer tracking chamber for CDF II. The track processor is called the Extremely Fast Tracker (XFT). The XFT design is highly parallel to handle the input rate of 183 Gbits/sec and output rate of 44 Gbits/sec. The processor is pipelined and reports the results for a new event every 132 ns. The processor uses three stages, hit classification, segment finding, and segment linking. The pattern recognition algorithms for the three stages are implemented in programmable logic devices (PLDs) which allow for in-situ modification of the algorithm at any time. The PLDs reside on three different types of modules. Prototypes of each of these modules have been designed and built, and are presently undergoing testing. An overview of the track processor and results of testing are presented

  19. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  20. Configurable Multi-Purpose Processor

    Science.gov (United States)

    Valencia, J. Emilio; Forney, Chirstopher; Morrison, Robert; Birr, Richard

    2010-01-01

    Advancements in technology have allowed the miniaturization of systems used in aerospace vehicles. This technology is driven by the need for next-generation systems that provide reliable, responsive, and cost-effective range operations while providing increased capabilities such as simultaneous mission support, increased launch trajectories, improved launch and landing opportunities, etc. Leveraging the newest technologies, the command and telemetry processor (CTP) concept provides a compact, flexible, and integrated solution for flight command and telemetry systems and range systems. The CTP is a relatively small circuit board that serves as a processing platform for high dynamic, high vibration environments. The CTP can be reconfigured and reprogrammed, allowing it to be adapted for many different applications. The design is centered around a configurable field-programmable gate array (FPGA) device that contains numerous logic cells that can be used to implement traditional integrated circuits. The FPGA contains two PowerPC processors running the VxWorks real-time operating system, which are used to execute software programs specific to each application. The CTP was designed and developed specifically to provide telemetry functions; namely, the command processing, telemetry processing, and GPS metric tracking of a flight vehicle. However, it can be used as a general-purpose processor board to perform numerous functions implemented in either hardware or software using the FPGA's processors and/or logic cells. Functionally, the CTP was designed for range safety applications where it would ultimately become part of a vehicle's flight termination system. Consequently, the major functions of the CTP are to perform the forward link command processing, GPS metric tracking, return link telemetry data processing, error detection and correction, data encryption/decryption, and initiation of flight termination action commands. Also, the CTP had to be designed to survive and ...

  1. Design Principles for Synthesizable Processor Cores

    DEFF Research Database (Denmark)

    Schleuniger, Pascal; McKee, Sally A.; Karlsson, Sven

    2012-01-01

    As FPGAs get more competitive, synthesizable processor cores become an attractive choice for embedded computing. Currently popular commercial processor cores do not fully exploit current FPGA architectures. In this paper, we propose general design principles to increase instruction throughput ... Using micro-benchmarks, we show that our principles guide the design of a processor core that improves performance by an average of 38% over a similar Xilinx MicroBlaze configuration.

  2. Accuracies Of Optical Processors For Adaptive Optics

    Science.gov (United States)

    Downie, John D.; Goodman, Joseph W.

    1992-01-01

    Paper presents analysis of accuracies and requirements concerning accuracies of optical linear-algebra processors (OLAP's) in adaptive-optics imaging systems. Much faster than digital electronic processor and eliminate some residual distortion. Question whether errors introduced by analog processing of OLAP overcome advantage of greater speed. Paper addresses issue by presenting estimate of accuracy required in general OLAP that yields smaller average residual aberration of wave front than digital electronic processor computing at given speed.

  3. Digital Signal Processor For GPS Receivers

    Science.gov (United States)

    Thomas, J. B.; Meehan, T. K.; Srinivasan, J. M.

    1989-01-01

    Three innovative components combined to produce all-digital signal processor with superior characteristics: outstanding accuracy, high-dynamics tracking, versatile integration times, lower loss-of-lock signal strengths, and infrequent cycle slips. Three components are digital chip advancer, digital carrier downconverter and code correlator, and digital tracking processor. All-digital signal processor intended for use in receivers of Global Positioning System (GPS) for geodesy, geodynamics, high-dynamics tracking, and ionospheric calibration.

  4. On scaling of human body models

    Directory of Open Access Journals (Sweden)

    Hynčík L.

    2007-10-01

    Full Text Available The human body is not unique: individuals differ in anthropometry and in mechanical characteristics, which means that dividing the human population into categories such as the 5th, 50th and 95th percentile is not sufficient from the application point of view. On the other hand, the development of a particular human body model for each of us is not possible. That is why scaling and morphing algorithms have started to be developed. The current work describes the development of a tool for scaling human models. The idea is to have one standard model (or a couple of them) as a base and to create other models from these basic models. One has to choose adequate anthropometric and biomechanical parameters that describe the given group of humans to be scaled and morphed.
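
    A deliberately crude one-parameter version of such scaling is sketched below, stretching a base model's segment lengths by the stature ratio; real morphing tools use many anthropometric dimensions, and these segment names and lengths are illustrative only.

        def scale_segments(base_segments: dict, base_stature_m: float, target_stature_m: float) -> dict:
            """Geometrically scale a base model's segment lengths by the stature ratio."""
            r = target_stature_m / base_stature_m
            return {name: round(length * r, 3) for name, length in base_segments.items()}

        base = {"thigh": 0.45, "shank": 0.43, "trunk": 0.60}   # illustrative lengths in metres
        print(scale_segments(base, base_stature_m=1.75, target_stature_m=1.62))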

  5. Computer modeling of human decision making

    Science.gov (United States)

    Gevarter, William B.

    1991-01-01

    Models of human decision making are reviewed. Models which treat just the cognitive aspects of human behavior are included as well as models which include motivation. Both models which have associated computer programs, and those that do not, are considered. Since flow diagrams, that assist in constructing computer simulation of such models, were not generally available, such diagrams were constructed and are presented. The result provides a rich source of information, which can aid in construction of more realistic future simulations of human decision making.

  6. Data register and processor for multiwire chambers

    International Nuclear Information System (INIS)

    Karpukhin, V.V.

    1985-01-01

    A data register and a processor for receiving and processing data from the drift chambers of a facility for investigating relativistic positroniums are described. The data are delivered to the register input in the form of an 8-bit Gray code, stored, and transformed into a position code. The register information is delivered to the CAMAC trunk and to the front panel plug. The processor selects particle tracks in the horizontal plane of the facility. The maximum coordinate divergence ΔY and the minimum number of points on a track are set from the processor front panel. The processor solution time is 16 μs; the maximum number of simultaneously analyzed coordinates is 16.
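
    The Gray-to-position (binary) conversion such a register performs is the classic XOR fold; the sketch below is the generic 8-bit algorithm, not the original hardware logic.

        def gray_to_binary(g: int) -> int:
            """Convert an 8-bit reflected Gray code to its binary position value."""
            b = g
            shift = 1
            while shift < 8:
                b ^= b >> shift
                shift <<= 1
            return b

        assert gray_to_binary(0b0001) == 0b0001   # Gray 0001 -> position 1
        assert gray_to_binary(0b0011) == 0b0010   # Gray 0011 -> position 2
        assert gray_to_binary(0b0010) == 0b0011   # Gray 0010 -> position 3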

  7. Rapid prototyping and evaluation of programmable SIMD SDR processors in LISA

    Science.gov (United States)

    Chen, Ting; Liu, Hengzhu; Zhang, Botao; Liu, Dongpei

    2013-03-01

    With the development of international wireless communication standards, there is an increasing computational requirement for baseband signal processors. Time-to-market pressure makes it impossible to completely redesign new processors for the evolving standards. Due to their high flexibility and low power, software defined radio (SDR) digital signal processors have been proposed as a promising technology to replace traditional ASIC and FPGA approaches. In addition, large numbers of parallel data are processed in computation-intensive functions, which fosters the development of single instruction multiple data (SIMD) architectures in SDR platforms. So a new way must be found to prototype SDR processors efficiently. In this paper we present a bit- and cycle-accurate model of programmable SIMD SDR processors in the machine description language LISA. LISA is a language for instruction set architectures which enables rapid modelling at the architectural level. In order to evaluate the usability of our proposed processor, three common baseband functions, FFT, FIR digital filter and matrix multiplication, have been mapped onto the SDR platform. Analytical results showed that the SDR processor achieved a performance boost of up to 47.1% relative to the reference processor.
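
    Of the three mapped kernels, the FIR filter is the most compact to state. The scalar reference below (plain NumPy) defines the function a SIMD implementation would vectorize across taps or samples; the tap values are arbitrary.

        import numpy as np

        def fir(x: np.ndarray, h: np.ndarray) -> np.ndarray:
            """Direct-form FIR filter: y[n] = sum_k h[k] * x[n-k]."""
            return np.convolve(x, h, mode="full")[: len(x)]

        x = np.arange(8, dtype=float)       # input samples
        h = np.array([0.25, 0.5, 0.25])     # arbitrary 3-tap low-pass filter
        print(fir(x, h))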

  8. Human Performance Models of Pilot Behavior

    Science.gov (United States)

    Foyle, David C.; Hooey, Becky L.; Byrne, Michael D.; Deutsch, Stephen; Lebiere, Christian; Leiden, Ken; Wickens, Christopher D.; Corker, Kevin M.

    2005-01-01

    Five modeling teams from industry and academia were chosen by the NASA Aviation Safety and Security Program to develop human performance models (HPM) of pilots performing taxi operations and runway instrument approaches with and without advanced displays. One representative from each team will serve as a panelist to discuss their team s model architecture, augmentations and advancements to HPMs, and aviation-safety related lessons learned. Panelists will discuss how modeling results are influenced by a model s architecture and structure, the role of the external environment, specific modeling advances and future directions and challenges for human performance modeling in aviation.

  9. Alternative Water Processor Test Development

    Science.gov (United States)

    Pickering, Karen D.; Mitchell, Julie L.; Adam, Niklas M.; Barta, Daniel; Meyer, Caitlin E.; Pensinger, Stuart; Vega, Leticia M.; Callahan, Michael R.; Flynn, Michael; Wheeler, Ray

    2013-01-01

    The Next Generation Life Support Project is developing an Alternative Water Processor (AWP) as a candidate water recovery system for long duration exploration missions. The AWP consists of a biological water processor (BWP) integrated with a forward osmosis secondary treatment system (FOST). The basis of the BWP is a membrane-aerated biological reactor (MABR), developed in concert with Texas Tech University. Bacteria located within the MABR metabolize organic material in wastewater, converting approximately 90% of the total organic carbon to carbon dioxide. In addition, bacteria convert a portion of the ammonia-nitrogen present in the wastewater to nitrogen gas, through a combination of nitrification and denitrification. The effluent from the BWP system is low in organic contaminants, but high in total dissolved solids. The FOST system, integrated downstream of the BWP, removes dissolved solids through a combination of concentration-driven forward osmosis and pressure-driven reverse osmosis. The integrated system is expected to produce water with a total organic carbon less than 50 mg/l and dissolved solids that meet potable water requirements for spaceflight. This paper describes the test definition, the design of the BWP and FOST subsystems, and plans for integrated testing.

  10. Alternative Water Processor Test Development

    Science.gov (United States)

    Pickering, Karen D.; Mitchell, Julie; Vega, Leticia; Adam, Niklas; Flynn, Michael; Wheeler, Ray; Lunn, Griffin; Jackson, Andrew

    2012-01-01

    The Next Generation Life Support Project is developing an Alternative Water Processor (AWP) as a candidate water recovery system for long duration exploration missions. The AWP consists of a biological water processor (BWP) integrated with a forward osmosis secondary treatment system (FOST). The basis of the BWP is a membrane-aerated biological reactor (MABR), developed in concert with Texas Tech University. Bacteria located within the MABR metabolize organic material in wastewater, converting approximately 90% of the total organic carbon to carbon dioxide. In addition, bacteria convert a portion of the ammonia-nitrogen present in the wastewater to nitrogen gas, through a combination of nitrification and denitrification. The effluent from the BWP system is low in organic contaminants, but high in total dissolved solids. The FOST system, integrated downstream of the BWP, removes dissolved solids through a combination of concentration-driven forward osmosis and pressure-driven reverse osmosis. The integrated system is expected to produce water with a total organic carbon less than 50 mg/l and dissolved solids that meet potable water requirements for spaceflight. This paper describes the test definition, the design of the BWP and FOST subsystems, and plans for integrated testing.

  11. Animal models for human genetic diseases

    African Journals Online (AJOL)

    Sharif Sons

    Record excerpt: ... to be the prime model of inherited human disease and share 99% of their ... disturbances (including anxiety and depression) ... Reference fragments: Leibovici M, Safieddine S, Petit C (2008). Mouse models for human hereditary deafness. Curr. Top. Dev. Biol. 84:385-429; Levi YF, Meiner Z, Canello T, Frid K, Kovacs GG, Budka H, Avrahami ...

  12. MEDINA: MECCA Development in Accelerators – KPP Fortran to CUDA source-to-source Pre-processor

    Directory of Open Access Journals (Sweden)

    Michail Alvanos

    2017-04-01

    Full Text Available The global climate model ECHAM/MESSy Atmospheric Chemistry (EMAC) is a modular global model that simulates climate change and air quality scenarios. The application includes different sub-models for the calculation of chemical species concentrations, their interaction with land and sea, and human interaction. The paper presents a source-to-source parser that enables support for Graphics Processing Units (GPUs) by the Kinetic Pre-Processor (KPP), a general purpose open-source software tool. The requirements of the host system are also described. The source code of the source-to-source parser is available under the MIT License.
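
    At its core, a source-to-source pre-processor of this kind emits accelerator code from a higher-level kernel description. The toy emitter below is our own illustration of that idea, unrelated to MEDINA's actual generator; it wraps a per-cell chemistry update into CUDA kernel text.

        def emit_cuda_kernel(name: str, body: str) -> str:
            """Toy source-to-source step: wrap a per-cell update in a CUDA kernel."""
            return (
                f"__global__ void {name}(double* conc, int ncells) {{\n"
                f"    int i = blockIdx.x * blockDim.x + threadIdx.x;\n"
                f"    if (i < ncells) {{ {body} }}\n"
                f"}}\n"
            )

        print(emit_cuda_kernel("kpp_update", "conc[i] *= 0.99;"))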

  13. Hidden Markov Models for Human Genes

    DEFF Research Database (Denmark)

    Baldi, Pierre; Brunak, Søren; Chauvin, Yves

    1997-01-01

    We analyse the sequential structure of human genomic DNA by hidden Markov models. We apply models of widely different design: conventional left-right constructs and models with a built-in periodic architecture. The models are trained on segments of DNA sequences extracted such that they cover com...
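
    The likelihood computation underlying such an analysis is the forward algorithm; the sketch below scores a short DNA index sequence under a toy two-state HMM (the parameters are placeholders, not the trained models of the paper).

        import numpy as np

        def forward_loglik(obs, pi, A, B) -> float:
            """Scaled forward algorithm: log P(obs) under an HMM.
            pi: initial state probs, A: state transitions, B: emission probs."""
            alpha = pi * B[:, obs[0]]
            loglik = np.log(alpha.sum())
            alpha /= alpha.sum()
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
                s = alpha.sum()
                loglik += np.log(s)
                alpha /= s
            return loglik

        # Toy model over the DNA alphabet A,C,G,T (indices 0..3).
        pi = np.array([0.5, 0.5])
        A = np.array([[0.9, 0.1], [0.1, 0.9]])
        B = np.array([[0.4, 0.1, 0.1, 0.4],    # AT-rich state
                      [0.1, 0.4, 0.4, 0.1]])   # GC-rich state
        print(forward_loglik([0, 3, 2, 1, 2], pi, A, B))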

  14. Producing chopped firewood with firewood processors

    International Nuclear Information System (INIS)

    Kaerhae, K.; Jouhiaho, A.

    2009-01-01

    The TTS Institute's research and development project studied both the productivity of new, chopped firewood processors (cross-cutting and splitting machines) suitable for professional and independent small-scale production, and the costs of the chopped firewood produced. Seven chopped firewood processors were tested in the research, six of which were sawing processors and one a shearing processor. The chopping work was carried out using wood feeding racks and a wood lifter. The work was also carried out without any feeding appliances. Altogether 132.5 solid m³ of wood were chopped in the time studies. The firewood processor used had the most significant impact on chopping work productivity. In addition to the firewood processor, the stem mid-diameter, the length of the raw material, and of the firewood were also found to affect productivity. The wood feeding systems also affected productivity. If there is a feeding rack and hydraulic grapple loader available for use in chopping firewood, then it is worth using the wood feeding rack. A wood lifter is only worth using with the largest stems (over 20 cm mid-diameter) if a feeding rack cannot be used. When producing chopped firewood from small-diameter wood, i.e. with a mid-diameter less than 10 cm, the costs of chopping work were over 10 EUR per solid m³ with sawing firewood processors. The shearing firewood processor with a guillotine blade achieved a cost level of 5 EUR per solid m³ when the mid-diameter of the chopped stem was 10 cm. In addition to the raw material, cost-efficient chopping work also requires several hundred annual operating hours with a firewood processor, which is difficult for individual firewood entrepreneurs to achieve. The operating hours of firewood processors can be increased to the required level by the joint use of the processors by a number of firewood entrepreneurs. (author)

  15. Efficient quantum walk on a quantum processor.

    Science.gov (United States)

    Qiang, Xiaogang; Loke, Thomas; Montanaro, Ashley; Aungskunsiri, Kanin; Zhou, Xiaoqi; O'Brien, Jeremy L; Wang, Jingbo B; Matthews, Jonathan C F

    2016-05-05

    The random walk formalism is used across a wide range of applications, from modelling share prices to predicting population genetics. Likewise, quantum walks have shown much potential as a framework for developing new quantum algorithms. Here we present explicit efficient quantum circuits for implementing continuous-time quantum walks on the circulant class of graphs. These circuits allow us to sample from the output probability distributions of quantum walks on circulant graphs efficiently. We also show that solving the same sampling problem for arbitrary circulant quantum circuits is intractable for a classical computer, assuming conjectures from computational complexity theory. This is a new link between continuous-time quantum walks and computational complexity theory and it indicates a family of tasks that could ultimately demonstrate quantum supremacy over classical computers. As a proof of principle, we experimentally implement the proposed quantum circuit on an example circulant graph using a two-qubit photonics quantum processor.
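
    The object being sampled is the standard continuous-time quantum walk: with adjacency matrix A, the state evolves as psi(t) = exp(-iAt) psi(0). The sketch below computes the output distribution for a walk on the circulant cycle graph C8 by direct matrix exponentiation (a classical reference computation; the paper's point is that the photonic circuit samples this efficiently).

        import numpy as np
        from scipy.linalg import circulant, expm

        c = [0, 1, 0, 0, 0, 0, 0, 1]       # first column: each vertex of C8 touches its two neighbours
        A = circulant(c)                    # circulant adjacency matrix of the cycle graph C8

        psi0 = np.zeros(8, dtype=complex)
        psi0[0] = 1.0                       # walker starts at vertex 0
        psi_t = expm(-1j * A * 1.0) @ psi0  # evolve for time t = 1

        print(np.abs(psi_t) ** 2)           # output probability distribution over vertices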

  16. Efficient quantum walk on a quantum processor

    Science.gov (United States)

    Qiang, Xiaogang; Loke, Thomas; Montanaro, Ashley; Aungskunsiri, Kanin; Zhou, Xiaoqi; O'Brien, Jeremy L.; Wang, Jingbo B.; Matthews, Jonathan C. F.

    2016-01-01

    The random walk formalism is used across a wide range of applications, from modelling share prices to predicting population genetics. Likewise, quantum walks have shown much potential as a framework for developing new quantum algorithms. Here we present explicit efficient quantum circuits for implementing continuous-time quantum walks on the circulant class of graphs. These circuits allow us to sample from the output probability distributions of quantum walks on circulant graphs efficiently. We also show that solving the same sampling problem for arbitrary circulant quantum circuits is intractable for a classical computer, assuming conjectures from computational complexity theory. This is a new link between continuous-time quantum walks and computational complexity theory and it indicates a family of tasks that could ultimately demonstrate quantum supremacy over classical computers. As a proof of principle, we experimentally implement the proposed quantum circuit on an example circulant graph using a two-qubit photonics quantum processor. PMID:27146471

  17. The communication processor of TUMULT-64

    NARCIS (Netherlands)

    Smit, Gerardus Johannes Maria; Jansen, P.G.

    1988-01-01

    Tumult (Twente University MULTi-processor system) is a modular extendible multi-processor system designed and implemented at the Twente University of Technology in co-operation with Oce Nederland B.V. and the Dr. Neher Laboratories (Dutch PTT). Characteristics of the hardware are: MIMD type,

  18. An interactive parallel processor for data analysis

    International Nuclear Information System (INIS)

    Mong, J.; Logan, D.; Maples, C.; Rathbun, W.; Weaver, D.

    1984-01-01

    A parallel array of eight minicomputers has been assembled in an attempt to deal with kiloparameter data events. By exporting computer system functions to a separate processor, the authors have been able to achieve computer amplification linearly proportional to the number of executing processors

  19. Vector and parallel processors in computational science

    International Nuclear Information System (INIS)

    Duff, I.S.; Reid, J.K.

    1985-01-01

    These proceedings contain the articles presented at the named conference. These concern hardware and software for vector and parallel processors, numerical methods and algorithms for the computation on such processors, as well as applications of such methods to different fields of physics and related sciences. See hints under the relevant topics. (HSI)

  20. A Study of Communication Processor Systems

    Science.gov (United States)

    1979-12-01

    ... The processor and manually controlled switches (S_mp, S_kp) enable a link between each processor and controllers (K_io), which in turn allow access to ... The base level scans all lines and initiates all non-interrupt-driven processes. The voice switching function is performed by one ...

  1. The TM3270 Media-processor

    NARCIS (Netherlands)

    van de Waerdt, J.W.

    2006-01-01

    In this thesis, we present the TM3270 VLIW media-processor, the latest of the TriMedia processors, and describe the innovations with respect to its predecessor, the TM3260. We describe enhancements to the load/store unit design, such as a new data prefetching technique, and architectural ...

  2. Human Centered Hardware Modeling and Collaboration

    Science.gov (United States)

    Stambolian Damon; Lawrence, Brad; Stelges, Katrine; Henderson, Gena

    2013-01-01

    In order to collaborate on engineering designs among NASA Centers and customers, including hardware and human activities from multiple remote locations, live human-centered modeling and collaboration across several sites has been successfully facilitated by Kennedy Space Center. The focus of this paper includes innovative approaches to engineering design analyses and training, along with research being conducted to apply new technologies for tracking, immersing, and evaluating humans as well as rocket, vehicle, component, or facility hardware, utilizing high resolution cameras, motion tracking, ergonomic analysis, biomedical monitoring, work instruction integration, head-mounted displays, and other innovative human-system integration modeling, simulation, and collaboration applications.

  3. SCAN secure processor and its biometric capabilities

    Science.gov (United States)

    Kannavara, Raghudeep; Mertoguno, Sukarno; Bourbakis, Nikolaos

    2011-04-01

    This paper presents the design of the SCAN secure processor and its extended instruction set to enable secure biometric authentication. The SCAN secure processor is a modified SparcV8 processor architecture with a new instruction set to handle voice, iris, and fingerprint-based biometric authentication. The algorithms for processing biometric data are based on the local global graph methodology. The biometric modules are synthesized in reconfigurable logic and the results of the field-programmable gate array (FPGA) synthesis are presented. We propose to implement the above-mentioned modules in an off-chip FPGA co-processor. Further, the SCAN-secure processor will offer a SCAN-based encryption and decryption of 32 bit instructions and data.

  4. A fully reconfigurable photonic integrated signal processor

    Science.gov (United States)

    Liu, Weilin; Li, Ming; Guzzon, Robert S.; Norberg, Erik J.; Parker, John S.; Lu, Mingzhi; Coldren, Larry A.; Yao, Jianping

    2016-03-01

    Photonic signal processing has been considered a solution to overcome the inherent electronic speed limitations. Over the past few years, an impressive range of photonic integrated signal processors have been proposed, but they usually offer limited reconfigurability, a feature highly needed for the implementation of large-scale general-purpose photonic signal processors. Here, we report and experimentally demonstrate a fully reconfigurable photonic integrated signal processor based on an InP-InGaAsP material system. The proposed photonic signal processor is capable of performing reconfigurable signal processing functions including temporal integration, temporal differentiation and Hilbert transformation. The reconfigurability is achieved by controlling the injection currents to the active components of the signal processor. Our demonstration suggests great potential for chip-scale fully programmable all-optical signal processing.
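
    The three functions named above correspond to standard frequency-domain transfer functions (textbook definitions, independent of this particular chip):

        H_{\mathrm{int}}(\omega) = \frac{1}{j\omega}, \qquad
        H_{\mathrm{diff}}(\omega) = j\omega, \qquad
        H_{\mathrm{Hilbert}}(\omega) = -j\,\operatorname{sgn}(\omega)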

  5. Human Adaptive Mechatronics and Human-System Modelling

    Directory of Open Access Journals (Sweden)

    Satoshi Suzuki

    2013-03-01

Full Text Available Several topics in projects for mechatronics studies, which are 'Human Adaptive Mechatronics (HAM)' and 'Human-System Modelling (HSM)', are presented in this paper. The main research theme of the HAM project is a design strategy for a new intelligent mechatronics system, which enhances operators' skills during machine operation. Skill analyses and control system design have been addressed. In the HSM project, human modelling based on hierarchical classification of skills was studied, including the following five types of skills: social, planning, cognitive, motion and sensory-motor skills. This paper includes digests of these research topics and the outcomes concerning each type of skill. Relationships with other research activities, knowledge and information that will be helpful for readers who are trying to study assistive human-mechatronics systems are also mentioned.

  6. Animal Models of Human Placentation - A Review

    DEFF Research Database (Denmark)

    Carter, Anthony Michael

    2007-01-01

    This review examines the strengths and weaknesses of animal models of human placentation and pays particular attention to the mouse and non-human primates. Analogies can be drawn between mouse and human in placental cell types and genes controlling placental development. There are, however...... and delivers poorly developed young. Guinea pig is a good alternative rodent model and among the few species known to develop pregnancy toxaemia. The sheep is well established as a model in fetal physiology but is of limited value for placental research. The ovine placenta is epitheliochorial...

  7. HTGR core seismic analysis using an array processor

    International Nuclear Information System (INIS)

    Shatoff, H.; Charman, C.M.

    1983-01-01

    A Floating Point Systems array processor performs nonlinear dynamic analysis of the high-temperature gas-cooled reactor (HTGR) core with significant time and cost savings. The graphite HTGR core consists of approximately 8000 blocks of various shapes which are subject to motion and impact during a seismic event. Two-dimensional computer programs (CRUNCH2D, MCOCO) can perform explicit step-by-step dynamic analyses of up to 600 blocks for time-history motions. However, use of two-dimensional codes was limited by the large cost and run times required. Three-dimensional analysis of the entire core, or even a large part of it, had been considered totally impractical. Because of the needs of the HTGR core seismic program, a Floating Point Systems array processor was used to enhance computer performance of the two-dimensional core seismic computer programs, MCOCO and CRUNCH2D. This effort began by converting the computational algorithms used in the codes to a form which takes maximum advantage of the parallel and pipeline processors offered by the architecture of the Floating Point Systems array processor. The subsequent conversion of the vectorized FORTRAN coding to the array processor required a significant programming effort to make the system work on the General Atomic (GA) UNIVAC 1100/82 host. These efforts were quite rewarding, however, since the cost of running the codes has been reduced approximately 50-fold and the time threefold. The core seismic analysis with large two-dimensional models has now become routine and extension to three-dimensional analysis is feasible. These codes simulate the one-fifth-scale full-array HTGR core model. This paper compares the analysis with the test results for sine-sweep motion
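    The explicit step-by-step dynamics mentioned above can be illustrated with a minimal sketch. The snippet below integrates a single block-like oscillator with the central-difference scheme typical of explicit codes such as CRUNCH2D; the mass, stiffness, damping and forcing values are hypothetical placeholders, not values from the HTGR model.

```python
import numpy as np

# Minimal sketch of explicit central-difference time stepping, the kind of
# scheme used in explicit seismic codes; all parameters are illustrative only.
m, k, c = 100.0, 4.0e4, 50.0      # mass [kg], stiffness [N/m], damping [Ns/m]
dt = 1.0e-4                        # time step, well below the stability limit
n_steps = 5000

u = np.zeros(n_steps)              # displacement history, starts at rest
for n in range(1, n_steps - 1):
    t = n * dt
    f_ext = 100.0 * np.sin(2 * np.pi * 5.0 * t)   # sine-sweep-like forcing
    v = (u[n] - u[n - 1]) / dt                     # backward-difference velocity
    acc = (f_ext - c * v - k * u[n]) / m           # Newton's second law
    u[n + 1] = 2 * u[n] - u[n - 1] + acc * dt**2   # central-difference update

print("peak displacement:", abs(u).max())
```

    In the actual codes the same update runs for thousands of blocks per step, with impact checks between neighbours, which is the work the array processor's parallel and pipelined units accelerate.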

  8. DEMAND FOR WILD BLUEBERRIES AT FARM AND PROCESSOR LEVELS

    OpenAIRE

    Cheng, Hsiang-Tai; Peavey, Stephanie R.; Kezis, Alan S.

    2000-01-01

    The wild blueberry crop harvested in Maine and eastern Canada has increased considerably in recent years. The purpose of this study is to understand the recent trends in demand for wild blueberries with particular attention to the effects of production and the marketing of wild and cultivated blueberries. A price response model was developed to analyze farm-gate price and the processor price, using annual data from 1978 through 1997. Key explanatory variables in the model include quantity of ...

  9. The Human-Artifact Model

    DEFF Research Database (Denmark)

    Bødker, Susanne; Klokmose, Clemens Nylandsted

    2011-01-01

    needs to support such development through concepts and methods. This leads to a methodological approach that focuses on new artifacts to supplement and substitute existing artifacts. Through a design case, we develop the methodological approach and illustrate how the human–artifact model can be applied...... to analyze present artifacts and to design future ones. The model is used to structure such analysis and to reason about findings while providing leverage from activity theoretical insights on mediation, dialectics, and levels of activity....

  10. Human BDCM Multi-Route PBPK Model

    Data.gov (United States)

    U.S. Environmental Protection Agency — This data set contains the code for the BDCM human multi-route model written in the programming language acsl. The final published manuscript is provided since it...

  11. Modeling Human Cancers in Drosophila.

    Science.gov (United States)

    Sonoshita, M; Cagan, R L

    2017-01-01

    Cancer is a complex disease that affects multiple organs. Whole-body animal models provide important insights into oncology that can lead to clinical impact. Here, we review novel concepts that Drosophila studies have established for cancer biology, drug discovery, and patient therapy. Genetic studies using Drosophila have explored the roles of oncogenes and tumor-suppressor genes that when dysregulated promote cancer formation, making Drosophila a useful model to study multiple aspects of transformation. Not limited to mechanism analyses, Drosophila has recently been showing its value in facilitating drug development. Flies offer rapid, efficient platforms by which novel classes of drugs can be identified as candidate anticancer leads. Further, we discuss the use of Drosophila as a platform to develop therapies for individual patients by modeling the tumor's genetic complexity. Drosophila provides both a classical and a novel tool to identify new therapeutics, complementing other more traditional cancer tools. © 2017 Elsevier Inc. All rights reserved.

  12. Modeling human learning involved in car driving

    OpenAIRE

    Wewerinke, P.H.

    1994-01-01

    In this paper, car driving is considered at the level of human tracking and maneuvering in the context of other traffic. A model analysis revealed the most salient features determining driving performance and safety. Learning car driving is modelled based on a system theoretical approach and based on a neural network approach. The aim of this research is to assess the relative merit of both approaches to describe human learning behavior in car driving specifically and in operating dynamic sys...

  13. Effect of processor temperature on film dosimetry.

    Science.gov (United States)

    Srivastava, Shiv P; Das, Indra J

    2012-01-01

Optical density (OD) of a radiographic film plays an important role in radiation dosimetry, which depends on various parameters, including beam energy, depth, field size, film batch, dose, dose rate, air film interface, postexposure processing time, and temperature of the processor. Most of these parameters have been studied for Kodak XV and extended dose range (EDR) films used in radiation oncology. There is very limited information on processor temperature, which is investigated in this study. Multiple XV and EDR films were exposed in the reference condition (d_max, 10 × 10 cm², 100 cm) to a given dose. An automatic film processor (X-Omat 5000) was used for processing films. The temperature of the processor was adjusted manually with increasing temperature. At each temperature, a set of films was processed to evaluate OD at a given dose. For both films, OD is a linear function of processor temperature in the range of 29.4-40.6°C (85-105°F) for various dose ranges. The changes in processor temperature are directly related to the dose by a quadratic function. A simple linear equation is provided for the changes in OD vs. processor temperature, which could be used for correcting dose in radiation dosimetry when film is used. Copyright © 2012 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.

  14. Effect of processor temperature on film dosimetry

    International Nuclear Information System (INIS)

    Srivastava, Shiv P.; Das, Indra J.

    2012-01-01

Optical density (OD) of a radiographic film plays an important role in radiation dosimetry, which depends on various parameters, including beam energy, depth, field size, film batch, dose, dose rate, air film interface, postexposure processing time, and temperature of the processor. Most of these parameters have been studied for Kodak XV and extended dose range (EDR) films used in radiation oncology. There is very limited information on processor temperature, which is investigated in this study. Multiple XV and EDR films were exposed in the reference condition (d_max, 10 × 10 cm², 100 cm) to a given dose. An automatic film processor (X-Omat 5000) was used for processing films. The temperature of the processor was adjusted manually with increasing temperature. At each temperature, a set of films was processed to evaluate OD at a given dose. For both films, OD is a linear function of processor temperature in the range of 29.4–40.6°C (85–105°F) for various dose ranges. The changes in processor temperature are directly related to the dose by a quadratic function. A simple linear equation is provided for the changes in OD vs. processor temperature, which could be used for correcting dose in radiation dosimetry when film is used.
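    Since both records above report OD as a linear function of processor temperature, the published correction can be sketched as a one-line adjustment. The slope and reference temperature below are hypothetical placeholders, not the paper's fitted coefficients.

```python
# Hedged sketch: correct a measured optical density for processor temperature,
# assuming the linear OD-temperature relation reported in the abstract.
# The slope and reference values below are hypothetical, not the paper's fit.

OD_SLOPE_PER_C = 0.015      # assumed change in OD per degree Celsius
T_REF_C = 35.0              # assumed reference processor temperature [C]

def corrected_od(od_measured: float, t_processor_c: float) -> float:
    """Map an OD read at t_processor_c back to the reference temperature."""
    return od_measured - OD_SLOPE_PER_C * (t_processor_c - T_REF_C)

print(corrected_od(1.80, 38.0))   # OD corrected from 38 C back to 35 C
```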

  15. Enabling Future Robotic Missions with Multicore Processors

    Science.gov (United States)

    Powell, Wesley A.; Johnson, Michael A.; Wilmot, Jonathan; Some, Raphael; Gostelow, Kim P.; Reeves, Glenn; Doyle, Richard J.

    2011-01-01

Recent commercial developments in multicore processors (e.g. Tilera, Clearspeed, HyperX) have provided an option for high performance embedded computing that rivals the performance attainable with FPGA-based reconfigurable computing architectures. Furthermore, these processors offer more straightforward and streamlined application development by allowing the use of conventional programming languages and software tools in lieu of hardware design languages such as VHDL and Verilog. With these advantages, multicore processors can significantly enhance the capabilities of future robotic space missions. This paper will discuss these benefits, along with onboard processing applications where multicore processing can offer advantages over existing or competing approaches. This paper will also discuss the key architectural features of current commercial multicore processors. In comparison to the current art, the features and advancements necessary for spaceflight multicore processors will be identified. These include power reduction, radiation hardening, inherent fault tolerance, and support for common spacecraft bus interfaces. Lastly, this paper will explore how multicore processors might evolve with advances in electronics technology and how avionics architectures might evolve once multicore processors are inserted into NASA robotic spacecraft.

  16. Scientific Computing Kernels on the Cell Processor

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
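    The kind of performance model introduced for Cell can be caricatured with a roofline-style bound: predicted kernel time is the larger of its compute-bound and memory-bound times. The peak rates and the SpMV byte/flop counts below are illustrative assumptions, not the paper's measured Cell figures.

```python
# Roofline-style sketch of a kernel performance model, in the spirit of the
# Cell study above; peak_flops and peak_bw are illustrative, not Cell's specs.

def predicted_time(flops: float, bytes_moved: float,
                   peak_flops: float = 200e9, peak_bw: float = 25e9) -> float:
    """Kernel time is bounded by compute or memory traffic, whichever is worse."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

# Sparse matrix-vector multiply: ~2 flops per nonzero, ~12 bytes per nonzero
nnz = 10_000_000
t = predicted_time(flops=2 * nnz, bytes_moved=12 * nnz)
print(f"SpMV predicted time: {t * 1e3:.2f} ms")  # memory-bound in this sketch
```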

  17. Computational Intelligence in a Human Brain Model

    Directory of Open Access Journals (Sweden)

    Viorel Gaftea

    2016-06-01

Full Text Available This paper focuses on the current trends in the brain research domain and the current stage of development of research for software and hardware solutions, communication capabilities between human beings and machines, new technologies, nano-science and Internet of Things (IoT) devices. The proposed model for the human brain assumes a close similarity between human intelligence and the thinking process of a chess game. Tactical and strategic reasoning and the need to follow the rules of the chess game are all very similar to the activities of the human brain. The main objectives of a living being and of a chess player are the same: securing a position, surviving and eliminating the adversaries. The brain resolves these goals, and moreover, a being's movement, actions and speech are sustained by the five vital senses and equilibrium. The chess game strategy helps us understand the human brain better and replicate it more easily in the proposed 'Software and Hardware' (SAH) Model.

  18. The GF-3 SAR Data Processor.

    Science.gov (United States)

    Han, Bing; Ding, Chibiao; Zhong, Lihua; Liu, Jiayin; Qiu, Xiaolan; Hu, Yuxin; Lei, Bin

    2018-03-10

The Gaofen-3 (GF-3) data processor was developed as a workstation-based GF-3 synthetic aperture radar (SAR) data processing system. The processor consists of two vital subsystems of the GF-3 ground segment, referred to as the data ingesting subsystem (DIS) and the product generation subsystem (PGS). The primary purpose of DIS is to record and catalogue GF-3 raw data in a transfer format, while PGS produces slant range or geocoded imagery from the signal data. This paper presents a brief introduction to the GF-3 data processor, including descriptions of the system architecture, the processing algorithms and its output format.

  19. Making CSB+-Tree Processor Conscious

    DEFF Research Database (Denmark)

    Samuel, Michael; Pedersen, Anders Uhl; Bonnet, Philippe

    2005-01-01

    Cache-conscious indexes, such as CSB+-tree, are sensitive to the underlying processor architecture. In this paper, we focus on how to adapt the CSB+-tree so that it performs well on a range of different processor architectures. Previous work has focused on the impact of node size on the performance...... of the CSB+-tree. We argue that it is necessary to consider a larger group of parameters in order to adapt CSB+-tree to processor architectures as different as Pentium and Itanium. We identify this group of parameters and study how it impacts the performance of CSB+-tree on Itanium 2. Finally, we propose...

  20. Hardware trigger processor for the MDT system

    CERN Document Server

    AUTHOR|(SzGeCERN)757787; The ATLAS collaboration; Hazen, Eric; Butler, John; Black, Kevin; Gastler, Daniel Edward; Ntekas, Konstantinos; Taffard, Anyes; Martinez Outschoorn, Verena; Ishino, Masaya; Okumura, Yasuyuki

    2017-01-01

We are developing a low-latency hardware trigger processor for the Monitored Drift Tube system in the Muon spectrometer. The processor will fit candidate muon tracks in the drift tubes in real time, significantly improving the momentum resolution provided by the dedicated trigger chambers. We present a novel pure-FPGA implementation of a Legendre transform segment finder, an associative-memory alternative implementation, an ARM (Zynq) processor-based track fitter, and a compact ATCA carrier board architecture. The ATCA architecture is designed to allow a modular, staged approach to deployment of the system and exploration of alternative technologies.

  1. A humanized mouse model of tuberculosis.

    Directory of Open Access Journals (Sweden)

    Veronica E Calderon

Full Text Available Mycobacterium tuberculosis (M.tb) is the second leading infectious cause of death worldwide and the primary cause of death in people living with HIV/AIDS. There are several excellent animal models employed to study tuberculosis (TB), but many have limitations for reproducing human pathology and none are amenable to the direct study of HIV/M.tb co-infection. The humanized mouse has been increasingly employed to explore HIV infection and other pathogens where animal models are limiting. Our goal was to develop a small animal model of M.tb infection using the bone marrow, liver, thymus (BLT) humanized mouse. NOD-SCID/γc(null) mice were engrafted with human fetal liver and thymus tissue, and supplemented with CD34(+) fetal liver cells. Excellent reconstitution, as measured by expression of the human CD45 pan leukocyte marker by peripheral blood populations, was observed at 12 weeks after engraftment. Human T cells (CD3, CD4, CD8), as well as natural killer cells and monocyte/macrophages, were all observed within the human leukocyte (CD45(+)) population. Importantly, human T cells were functionally competent as determined by proliferative capacity and effector molecule (e.g. IFN-γ, granulysin, perforin) expression in response to positive stimuli. Animals infected intranasally with M.tb had progressive bacterial infection in the lung and dissemination to spleen and liver from 2-8 weeks post infection. Sites of infection in the lung were characterized by the formation of organized granulomatous lesions, caseous necrosis, bronchial obstruction, and crystallization of cholesterol deposits. Human T cells were distributed throughout the lung, liver, and spleen at sites of inflammation and bacterial growth and were organized at the periphery of granulomas. These preliminary results demonstrate the potential to use the humanized mouse as a model of experimental TB.

  2. A statistical model of future human actions

    International Nuclear Information System (INIS)

    Woo, G.

    1992-02-01

    A critical review has been carried out of models of future human actions during the long term post-closure period of a radioactive waste repository. Various Markov models have been considered as alternatives to the standard Poisson model, and the problems of parameterisation have been addressed. Where the simplistic Poisson model unduly exaggerates the intrusion risk, some form of Markov model may have to be introduced. This situation may well arise for shallow repositories, but it is less likely for deep repositories. Recommendations are made for a practical implementation of a computer based model and its associated database. (Author)

  3. Material Models for the Human Torso Finite Element Model

    Science.gov (United States)

    2018-04-04

ARL-TR-8338, April 2018, US Army Research Laboratory: Material Models for the Human Torso Finite Element Model, by Carolyn E. …; Weapons and Materials Research Directorate, ARL. Approved for public release; distribution is unlimited.

  4. Modeling human learning involved in car driving

    NARCIS (Netherlands)

    Wewerinke, P.H.

    1994-01-01

    In this paper, car driving is considered at the level of human tracking and maneuvering in the context of other traffic. A model analysis revealed the most salient features determining driving performance and safety. Learning car driving is modelled based on a system theoretical approach and based

  5. A Model of the Human Eye

    Science.gov (United States)

    Colicchia, G.; Wiesner, H.; Waltner, C.; Zollman, D.

    2008-01-01

    We describe a model of the human eye that incorporates a variable converging lens. The model can be easily constructed by students with low-cost materials. It shows in a comprehensible way the functionality of the eye's optical system. Images of near and far objects can be focused. Also, the defects of near and farsighted eyes can be demonstrated.

  6. Mathematical human body modelling for impact loading

    NARCIS (Netherlands)

    Happee, R.; Morsink, P.L.J.; Wismans, J.S.H.M.

    1999-01-01

    Mathematical modelling of the human body is widely used for automotive crash safety research and design. Simulations have contributed to a reduction of injury numbers by optimisation of vehicle structures and restraint systems. Currently such simulations are largely performed using occupant models

  7. Mathematical models of human african trypanosomiasis epidemiology.

    Science.gov (United States)

    Rock, Kat S; Stone, Chris M; Hastings, Ian M; Keeling, Matt J; Torr, Steve J; Chitnis, Nakul

    2015-03-01

    Human African trypanosomiasis (HAT), commonly called sleeping sickness, is caused by Trypanosoma spp. and transmitted by tsetse flies (Glossina spp.). HAT is usually fatal if untreated and transmission occurs in foci across sub-Saharan Africa. Mathematical modelling of HAT began in the 1980s with extensions of the Ross-Macdonald malaria model and has since consisted, with a few exceptions, of similar deterministic compartmental models. These models have captured the main features of HAT epidemiology and provided insight on the effectiveness of the two main control interventions (treatment of humans and tsetse fly control) in eliminating transmission. However, most existing models have overestimated prevalence of infection and ignored transient dynamics. There is a need for properly validated models, evolving with improved data collection, that can provide quantitative predictions to help guide control and elimination strategies for HAT. Copyright © 2015 Elsevier Ltd. All rights reserved.
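    The Ross-Macdonald-style compartmental structure referred to above can be sketched as two coupled equations for the infected proportions of humans and tsetse flies. All parameter values in this sketch are illustrative placeholders, not calibrated HAT estimates.

```python
# Minimal Ross-Macdonald-style host-vector sketch for HAT-like transmission.
# Every rate below is an illustrative placeholder, not a calibrated parameter.
a, b, c = 0.3, 0.4, 0.05   # bite rate, fly-to-human and human-to-fly transmission
m = 5.0                    # flies per human
r, mu = 0.01, 0.03         # human recovery rate, fly mortality rate

def step(ih, iv, dt=0.1):
    """One forward-Euler step for infected human (ih) and fly (iv) proportions."""
    dih = m * a * b * iv * (1 - ih) - r * ih
    div = a * c * ih * (1 - iv) - mu * iv
    return ih + dt * dih, iv + dt * div

ih, iv = 0.01, 0.0
for _ in range(20000):                 # integrate long enough to equilibrate
    ih, iv = step(ih, iv)
print(f"endemic equilibrium: humans {ih:.3f}, flies {iv:.3f}")
```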

  8. Data register and processor for multiwire chambers

    International Nuclear Information System (INIS)

    Karpukhin, V.V.

    1985-01-01

A data register and processor for the acquisition and processing of data from drift chambers of an apparatus for studying relativistic positrons are described. Data are input to the register in eight-bit Gray code, stored, and converted to position code. Data are output from the register to a CAMAC highway and to a front-panel connector. The processor selects the tracks of particles that lie in the horizontal plane of the apparatus. The maximum coordinate spread delta Y and the minimum number of points on a track are set from the front panel of the processor. The resolving time of the processor is 16 microseconds and the maximum number of simultaneously analyzable coordinates is 16
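    The Gray-to-position-code conversion performed by the register is a standard decoding; a software sketch of the usual XOR-fold for an eight-bit code follows (illustrative only, not the original hardware logic).

```python
# Standard Gray-code decoding, as a software sketch of the register's
# eight-bit Gray-to-position conversion (not the original hardware logic).

def gray_to_binary(g: int) -> int:
    """Decode a Gray-coded value by XOR-folding shifted copies into it."""
    b = g
    shift = 1
    while (g >> shift) != 0:
        b ^= g >> shift
        shift += 1
    return b

# Check a full 8-bit round trip: binary -> Gray -> binary.
for n in range(256):
    gray = n ^ (n >> 1)
    assert gray_to_binary(gray) == n
print("8-bit Gray decode verified")
```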

  9. Probabilistic implementation of universal quantum processors

    International Nuclear Information System (INIS)

    Hillery, Mark; Buzek, Vladimir; Ziman, Mario

    2002-01-01

We present a probabilistic quantum processor acting on a single qudit of dimension N. The processor itself is represented by a fixed array of gates. The input of the processor consists of two registers. In the program register the set of instructions (the program) is encoded. This program is applied to the data register. The processor can perform any operation on a single qudit of dimension N with a certain probability. For a general unitary operation, the probability is 1/N², but for more restricted sets of operators the probability can be higher. In fact, this probability can be independent of the dimension N of the qudit Hilbert space under some conditions
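    As a worked illustration of the scaling quoted above: for a qubit (N = 2) a general unitary succeeds with probability 1/4, and the success probability falls quadratically with the dimension. The dimensions chosen below are arbitrary examples.

```python
# Success probability of the probabilistic programmable processor for a
# general unitary on one qudit of dimension N, per the abstract: p = 1/N**2.
for N in (2, 3, 4, 8):
    print(f"N = {N}: success probability = 1/{N**2} = {1 / N**2:.4f}")
```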

  10. Heterogeneous Multicore Processor Technologies for Embedded Systems

    CERN Document Server

    Uchiyama, Kunio; Kasahara, Hironori; Nojiri, Tohru; Noda, Hideyuki; Tawara, Yasuhiro; Idehara, Akio; Iwata, Kenichi; Shikano, Hiroaki

    2012-01-01

    To satisfy the higher requirements of digitally converged embedded systems, this book describes heterogeneous multicore technology that uses various kinds of low-power embedded processor cores on a single chip. With this technology, heterogeneous parallelism can be implemented on an SoC, and greater flexibility and superior performance per watt can then be achieved. This book defines the heterogeneous multicore architecture and explains in detail several embedded processor cores including CPU cores and special-purpose processor cores that achieve highly arithmetic-level parallelism. The authors developed three multicore chips (called RP-1, RP-2, and RP-X) according to the defined architecture with the introduced processor cores. The chip implementations, software environments, and applications running on the chips are also explained in the book. Provides readers an overview and practical discussion of heterogeneous multicore technologies from both a hardware and software point of view; Discusses a new, high-p...

  11. Photonics and Fiber Optics Processor Lab

    Data.gov (United States)

    Federal Laboratory Consortium — The Photonics and Fiber Optics Processor Lab develops, tests and evaluates high speed fiber optic network components as well as network protocols. In addition, this...

  12. Radiation Tolerant Software Defined Video Processor Project

    Data.gov (United States)

National Aeronautics and Space Administration — MaXentric is proposing a radiation-tolerant Software Defined Video Processor, codenamed SDVP, for the problem of advanced motion imaging in the space environment....

  13. Human models of acute lung injury

    Directory of Open Access Journals (Sweden)

    Alastair G. Proudfoot

    2011-03-01

Full Text Available Acute lung injury (ALI) is a syndrome that is characterised by acute inflammation and tissue injury that affects normal gas exchange in the lungs. Hallmarks of ALI include dysfunction of the alveolar-capillary membrane resulting in increased vascular permeability, an influx of inflammatory cells into the lung and a local pro-coagulant state. Patients with ALI present with severe hypoxaemia and radiological evidence of bilateral pulmonary oedema. The syndrome has a mortality rate of approximately 35% and usually requires invasive mechanical ventilation. ALI can follow direct pulmonary insults, such as pneumonia, or occur indirectly as a result of blood-borne insults, commonly severe bacterial sepsis. Although animal models of ALI have been developed, none of them fully recapitulate the human disease. The differences between the human syndrome and the phenotype observed in animal models might, in part, explain why interventions that are successful in models have failed to translate into novel therapies. Improved animal models and the development of human in vivo and ex vivo models are therefore required. In this article, we consider the clinical features of ALI, discuss the limitations of current animal models and highlight how emerging human models of ALI might help to answer outstanding questions about this syndrome.

  14. Real time monitoring of electron processors

    International Nuclear Information System (INIS)

    Nablo, S.V.; Kneeland, D.R.; McLaughlin, W.L.

    1995-01-01

    A real time radiation monitor (RTRM) has been developed for monitoring the dose rate (current density) of electron beam processors. The system provides continuous monitoring of processor output, electron beam uniformity, and an independent measure of operating voltage or electron energy. In view of the device's ability to replace labor-intensive dosimetry in verification of machine performance on a real-time basis, its application to providing archival performance data for in-line processing is discussed. (author)

  15. Matrix Manipulation Algorithms for Hasse Processor Implementation

    OpenAIRE

    Hahanov, Vladimir; Dahiri, Farid

    2014-01-01

The processor is implemented in software-hardware modules, which are based on the programming languages C++, Verilog and Python 2.7 and the platforms Microsoft Windows, X Window (in Unix and Linux) and Macintosh OS X. The HDL-code generator makes it possible to automatically synthesize HDL code of the processor structure from 1 to 16 bits for parallel processing of the corresponding number of input vectors or words.

  16. SutraPrep, a pre-processor for SUTRA, a model for ground-water flow with solute or energy transport

    Science.gov (United States)

    Provost, Alden M.

    2002-01-01

SutraPrep facilitates the creation of three-dimensional (3D) input datasets for the USGS ground-water flow and transport model SUTRA Version 2D3D.1. It is most useful for applications in which the geometry of the 3D model domain and the spatial distribution of physical properties and boundary conditions are relatively simple. SutraPrep can be used to create a SUTRA main input (".inp") file, an initial conditions (".ics") file, and a 3D plot of the finite-element mesh in Virtual Reality Modeling Language (VRML) format. Input and output are text-based. The code can be run on any platform that has a standard FORTRAN-90 compiler. Executable code is available for Microsoft Windows.

  17. Optimal processor assignment for pipeline computations

    Science.gov (United States)

    Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath

    1991-01-01

The availability of large scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem in which several tasks share a processor; instead, it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p-processor system and a series-parallel precedence graph with n constituent tasks, an O(np²) algorithm is provided that finds the optimal assignment for the response time optimization problem; the assignment optimizing the constrained throughput is found in O(np² log p) time. Special cases of linear, independent, and tree graphs are also considered.
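    For the special case of a linear pipeline of independent tasks, the throughput objective reduces to distributing p processors so that the slowest stage is as fast as possible, which a small dynamic program solves. The sketch below uses made-up timing tables and is a simplified stand-in, not the paper's O(np²) algorithm for general series-parallel graphs.

```python
import math

# Hedged sketch: assign p processors to a linear pipeline of n independent
# tasks to minimize the bottleneck stage time (i.e., maximize throughput).
# time[i][k] = response time of task i on k processors (made-up data).
time = {
    0: {1: 8.0, 2: 4.5, 3: 3.2},
    1: {1: 6.0, 2: 3.5, 3: 2.6},
    2: {1: 9.0, 2: 5.0, 3: 3.6},
}
P = 6  # total processors available

INF = math.inf
# dp[j] = best achievable bottleneck time using exactly j processors so far
dp = [INF] * (P + 1)
dp[0] = 0.0
for i in range(len(time)):
    new = [INF] * (P + 1)
    for j in range(P + 1):
        for k, t in time[i].items():          # try giving task i exactly k CPUs
            if k <= j and dp[j - k] < INF:
                new[j] = min(new[j], max(dp[j - k], t))
    dp = new
print("best bottleneck stage time:", min(x for x in dp if x < INF))
```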

  18. 7 CFR 1160.108 - Fluid milk processor.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 9 2010-01-01 2009-01-01 true Fluid milk processor. 1160.108 Section 1160.108... Order Definitions § 1160.108 Fluid milk processor. (a) Fluid milk processor means any person who... term fluid milk processor shall not include in each of the respective fiscal periods those persons who...

  19. 7 CFR 1435.310 - Sharing processors' allocations with producers.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false Sharing processors' allocations with producers. 1435... Flexible Marketing Allotments For Sugar § 1435.310 Sharing processors' allocations with producers. (a) Every sugar beet and sugarcane processor must provide CCC a certification that: (1) The processor...

  20. 21 CFR 120.25 - Process verification for certain processors.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 2 2010-04-01 2010-04-01 false Process verification for certain processors. 120... Pathogen Reduction § 120.25 Process verification for certain processors. Each juice processor that relies... covered by this section, processors shall take subsamples according to paragraph (a) of this section for...

  1. FPGA wavelet processor design using language for instruction-set architectures (LISA)

    Science.gov (United States)

    Meyer-Bäse, Uwe; Vera, Alonzo; Rao, Suhasini; Lenk, Karl; Pattichis, Marios

    2007-04-01

The design of a microprocessor is a long, tedious, and error-prone task, typically consisting of several phases: architecture exploration, software design (assembler, linker, loader, profiler), architecture implementation (RTL generation for FPGA or cell-based ASIC) and verification. The Language for Instruction-Set Architectures (LISA) makes it possible to model a microprocessor not only from an instruction-set description but also from an architecture description, including pipelining behavior, which allows design and development tool consistency over all levels of the design. To explore the capability of the LISA processor design platform, a.k.a. CoWare Processor Designer, we present in this paper three microprocessor designs that implement an 8/8 wavelet transform processor of the kind used in today's FBI fingerprint compression scheme. We have designed a 3-stage pipelined 16-bit RISC processor (NanoBlaze). Although RISC μPs are usually considered "fast" processors due to design concepts like a constant instruction word size, deep pipelines and many general-purpose registers, it turns out that DSP operations consume substantial processing time in a RISC processor. In a second step we used design principles from programmable digital signal processors (PDSPs) to improve the throughput of the DWT processor. A multiply-accumulate operation along with indirect addressing operations was the key to achieving higher throughput. A further improvement is possible with today's FPGA technology. Today's FPGAs offer a large number of embedded array multipliers, and it is now feasible to design a "true" vector processor (TVP). A multiplication of two vectors can be done in just one clock cycle with our TVP, and a complete scalar product in two clock cycles. Code profiling and Xilinx FPGA ISE synthesis results are provided that demonstrate the essential improvement that a TVP offers compared with traditional RISC or PDSP designs.
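    The multiply-accumulate pattern that separates the PDSP and TVP designs from the plain RISC shows up in the inner loop of any filter-bank stage. The sketch below shows that kernel with placeholder taps, not the paper's 8/8 wavelet coefficients.

```python
# Sketch of the multiply-accumulate (MAC) kernel at the heart of a wavelet
# filter-bank stage; the taps below are placeholders, not the 8/8 filter.

def fir_mac(signal, taps):
    """One filtering pass as repeated MACs: one MAC per tap per output."""
    n, m = len(signal), len(taps)
    out = []
    for i in range(n - m + 1):
        acc = 0.0
        for j, h in enumerate(taps):          # the PDSP/TVP-accelerated loop
            acc += h * signal[i + j]          # multiply-accumulate
        out.append(acc)
    return out

lowpass = [0.25, 0.5, 0.25]                   # placeholder low-pass taps
print(fir_mac([1.0, 2.0, 3.0, 4.0, 5.0], lowpass))
```

    On a plain RISC each MAC expands into separate multiply, add and address-update instructions; a PDSP folds them into one instruction, and the TVP described above evaluates the whole tap loop in one or two cycles.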

  2. Identification of walking human model using agent-based modelling

    Science.gov (United States)

    Shahabpoor, Erfan; Pavic, Aleksandar; Racic, Vitomir

    2018-03-01

    The interaction of walking people with large vibrating structures, such as footbridges and floors, in the vertical direction is an important yet challenging phenomenon to describe mathematically. Several different models have been proposed in the literature to simulate interaction of stationary people with vibrating structures. However, the research on moving (walking) human models, explicitly identified for vibration serviceability assessment of civil structures, is still sparse. In this study, the results of a comprehensive set of FRF-based modal tests were used, in which, over a hundred test subjects walked in different group sizes and walking patterns on a test structure. An agent-based model was used to simulate discrete traffic-structure interactions. The occupied structure modal parameters found in tests were used to identify the parameters of the walking individual's single-degree-of-freedom (SDOF) mass-spring-damper model using 'reverse engineering' methodology. The analysis of the results suggested that the normal distribution with the average of μ = 2.85Hz and standard deviation of σ = 0.34Hz can describe human SDOF model natural frequency. Similarly, the normal distribution with μ = 0.295 and σ = 0.047 can describe the human model damping ratio. Compared to the previous studies, the agent-based modelling methodology proposed in this paper offers significant flexibility in simulating multi-pedestrian walking traffics, external forces and simulating different mechanisms of human-structure and human-environment interaction at the same time.
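    The reported distributions make it straightforward to generate a synthetic crowd of SDOF walking-human models for serviceability simulations. The sketch below samples the natural frequency and damping ratio from the normal distributions quoted above; the modal mass is an assumed placeholder, since the abstract does not state one.

```python
import numpy as np

# Draw SDOF walking-human parameters from the distributions identified above.
# Frequency and damping statistics come from the abstract; the body mass is an
# assumed placeholder (the abstract does not quote one).
rng = np.random.default_rng(0)
n_people = 100

f_n = rng.normal(2.85, 0.34, n_people)       # natural frequency [Hz]
zeta = rng.normal(0.295, 0.047, n_people)    # damping ratio [-]
m = 75.0                                     # assumed modal mass [kg]

k = m * (2 * np.pi * f_n) ** 2               # spring stiffness [N/m]
c = 2 * zeta * np.sqrt(k * m)                # damping coefficient [Ns/m]
print(f"mean stiffness {k.mean():.0f} N/m, mean damping {c.mean():.0f} Ns/m")
```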

  3. A lock circuit for a multi-core processor

    DEFF Research Database (Denmark)

    2015-01-01

An integrated circuit comprising multiple processor cores and a lock circuit that comprises a queue register with respective bits set or reset via respective connections dedicated to respective processor cores, whereby the queue register identifies those among the multiple processor cores that are enqueued in the queue register. Furthermore, the integrated circuit comprises a current register and a selector circuit configured to select a processor core and identify that processor core by a value in the current register. A selected processor core is a prioritized processor core among the cores that have a bit that is set in the queue register. The processor cores are connected to receive a signal from the current register. Correspondingly: a method of synchronizing access to software and/or hardware resources by a core of a multi-core processor by means of a lock circuit; a multi-core processor

  4. Interpreter composition issues in the formal verification of a processor-memory module

    Science.gov (United States)

    Fura, David A.; Cohen, Gerald C.

    1994-01-01

    This report describes interpreter composition techniques suitable for the formal specification and verification of a processor-memory module using the HOL theorem proving system. The processor-memory module is a multichip subsystem within a fault-tolerant embedded system under development within the Boeing Defense and Space Group. Modeling and verification methods were developed that permit provably secure composition at the transaction-level of specification, significantly reducing the complexity of the hierarchical verification of the system.

  5. Ingredients of Adaptability: A Survey of Reconfigurable Processors

    Directory of Open Access Journals (Sweden)

    Anupam Chattopadhyay

    2013-01-01

Full Text Available For a design to survive unforeseen physical effects like aging, temperature variation, and/or the emergence of new application standards, adaptability needs to be supported. Adaptability, in its complete strength, is present in reconfigurable processors, which makes them an important IP in modern Systems-on-Chip (SoCs). Reconfigurable processors have risen to prominence as a dominant computing platform across embedded, general-purpose, and high-performance application domains during the last decade. Significant advances have been made in many areas, such as identifying the advantages of reconfigurable platforms, their modeling, their implementation flow and, finally, early commercial acceptance. This paper reviews this progress from various perspectives, with particular emphasis on fundamental challenges and their solutions. Building on the analysis of the past, a future research roadmap is proposed.

  6. Performance of Distributed CFAR Processors in Pearson Distributed Clutter

    Directory of Open Access Journals (Sweden)

    Faouzi Soltani

    2007-01-01

Full Text Available This paper deals with the distributed constant false alarm rate (CFAR) radar detection of targets embedded in heavy-tailed Pearson distributed clutter. In particular, we extend the results obtained for the cell averaging (CA), order statistics (OS), and censored mean level detector (CMLD) CFAR processors operating in positive alpha-stable (P&S) random variables to more general situations, specifically to the presence of interfering targets and distributed CFAR detectors. The receiver operating characteristics of the greatest of (GO) and the smallest of (SO) CFAR processors are also determined. The performance characteristics of distributed systems are presented and compared, both in homogeneous conditions and in the presence of interfering targets. We demonstrate, via simulation results, that the distributed systems, when the clutter is modelled as a positive alpha-stable distribution, offer robustness properties against multiple target situations, especially when using the "OR" fusion rule.

  7. Performance of Distributed CFAR Processors in Pearson Distributed Clutter

    Directory of Open Access Journals (Sweden)

    Messali Zoubeida

    2007-01-01

Full Text Available This paper deals with the distributed constant false alarm rate (CFAR) radar detection of targets embedded in heavy-tailed Pearson distributed clutter. In particular, we extend the results obtained for the cell averaging (CA), order statistics (OS), and censored mean level detector (CMLD) CFAR processors operating in positive alpha-stable (P&S) random variables to more general situations, specifically to the presence of interfering targets and distributed CFAR detectors. The receiver operating characteristics of the greatest of (GO) and the smallest of (SO) CFAR processors are also determined. The performance characteristics of distributed systems are presented and compared, both in homogeneous conditions and in the presence of interfering targets. We demonstrate, via simulation results, that the distributed systems, when the clutter is modelled as a positive alpha-stable distribution, offer robustness properties against multiple target situations, especially when using the "OR" fusion rule.
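    A minimal cell-averaging detector, the CA processor referenced in both records above, can be sketched as follows. The window sizes and threshold factor are illustrative, and exponential noise stands in for the heavy-tailed Pearson clutter studied in the papers.

```python
import numpy as np

# Minimal cell-averaging (CA) CFAR sketch; window sizes and the threshold
# factor are illustrative, and exponential noise stands in for the
# heavy-tailed Pearson clutter studied in the paper.
rng = np.random.default_rng(1)
x = rng.exponential(1.0, 200)     # square-law-detected clutter samples
x[100] += 20.0                    # inject a strong target

n_ref, n_guard, alpha = 16, 2, 8.0
detections = []
for i in range(n_ref + n_guard, len(x) - n_ref - n_guard):
    lead = x[i - n_guard - n_ref : i - n_guard]     # leading reference cells
    lag = x[i + n_guard + 1 : i + n_guard + n_ref + 1]  # lagging reference cells
    noise_est = np.concatenate([lead, lag]).mean()  # CA noise-level estimate
    if x[i] > alpha * noise_est:                    # adaptive threshold test
        detections.append(i)
print("detections at cells:", detections)
```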

  8. Architectural design and analysis of a programmable image processor

    International Nuclear Information System (INIS)

    Siyal, M.Y.; Chowdhry, B.S.; Rajput, A.Q.K.

    2003-01-01

In this paper we present an architectural design and analysis of a programmable image processor, nicknamed Snake. The processor was designed with a high degree of parallelism to speed up a range of image processing operations. The data parallelism found in array processors has been incorporated into the architecture of the proposed processor. The implementation of commonly used image processing algorithms and their performance evaluation are also discussed. The performance of Snake is also compared with other types of processor architectures. (author)

  9. Analysis of Human Capital Measurement Models in Organizations

    Directory of Open Access Journals (Sweden)

    Cecep Hidayat

    2013-11-01

Full Text Available Measurement of human capital is not easy to do because it is dynamic and always changing in accordance with changing circumstances. The determination of dimensions and indicators of measurement needs to consider various factors, such as the situation and the scope of the research. This article reviews the concepts, dimensions and measurement models of human capital. The research method used was a literature study, drawing mainly on current journal articles that discuss the measurement of human capital. The results of the study showed that each definition sets forth its dimensions either explicitly or implicitly. In addition, the results indicated three main points of agreement among researchers regarding the definition of human capital, which emphasize: economic value/productivity, education, and abilities/competencies. The results also showed that the choice of definitions, dimensions, and indicators for the measurement of human capital depends on the situation, the scope of the research, and the size of the organization. The conclusion of the study is that the measurement model and the choice of dimensions and indicators of human capital measurement will determine the effectiveness of the measurement and will have an impact on organizational performance.

  10. Modeling the exergy behavior of human body

    International Nuclear Information System (INIS)

    Keutenedjian Mady, Carlos Eduardo; Silva Ferreira, Maurício; Itizo Yanagihara, Jurandir; Hilário Nascimento Saldiva, Paulo; Oliveira Junior, Silvio de

    2012-01-01

Exergy analysis is applied to assess the energy conversion processes that take place in the human body, aiming at developing indicators of health and performance based on the concepts of exergy destroyed rate and exergy efficiency. The thermal behavior of the human body is simulated by a model composed of 15 cylinders with elliptical cross section representing: head, neck, trunk, arms, forearms, hands, thighs, legs, and feet. For each, a combination of tissues is considered. The energy equation is solved for each cylinder, making it possible to obtain the transitory response of the body to a variation in environmental conditions. With this model, it is possible to obtain heat and mass flow rates to the environment due to radiation, convection, evaporation and respiration. The exergy balances provide the exergy variation due to heat and mass exchange over the body, and the exergy variation over time for each compartment's tissue and blood, the sum of which leads to the total variation of the body. Results indicate that the exergy destroyed and the exergy efficiency decrease over the lifespan and that the human body is more efficient and destroys less exergy at lower relative humidities and higher temperatures. -- Highlights: ► An overview of the human thermal model is given. ► Energy and exergy analyses of the human body are performed. ► Exergy destruction and exergy efficiency decrease with lifespan. ► Exergy destruction and exergy efficiency are functions of environmental conditions.
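    The two indicators named above can be written compactly. In standard notation (our formulation, consistent with textbook exergy analysis rather than transcribed from the paper), the destroyed exergy rate follows the Gouy-Stodola relation and the efficiency is its complement relative to the exergy input:

```latex
% Destroyed exergy rate (Gouy-Stodola theorem) and a rational exergy
% efficiency; standard definitions, not transcribed from the paper.
\dot{B}_{\mathrm{dest}} = T_0 \, \dot{S}_{\mathrm{gen}} \ge 0,
\qquad
\eta_B = 1 - \frac{\dot{B}_{\mathrm{dest}}}{\dot{B}_{\mathrm{in}}}
```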

  11. Models of the Human in Tantric Hinduism

    DEFF Research Database (Denmark)

    Olesen, Bjarne Wernicke; Flood, Gavin

    2019-01-01

    This research project explores the origins, developments and transformations of yogic models of the human (e.g. kuṇḍalinī yoga, the cakra system and ritual sex) in the tantric goddess traditions or what might be called Śāktism of medieval India. These Śākta models of esoteric anatomy originating...... in medieval ascetic traditions became highly influential in South Asia and were popularized in the West....

  12. Optical symbolic processor for expert system execution

    Science.gov (United States)

    Guha, Aloke

    1987-11-01

The goal of this program is to develop a concept for an optical computer architecture for symbolic computing by defining a computation model of a high level language, examining the possible devices for the ultimate construction of a processor, and defining the required optical operations. This quarter we investigated the implementation alternatives for an optical shuffle exchange network (SEN). Work in the previous quarter had led to the conclusion that the SEN was the most appropriate optical interconnection network topology for the symbolic processing architecture (SPARO). A more detailed analysis was therefore conducted to examine implementation possibilities. It was determined that while the shuffle connection of the SEN is very feasible in optics using passive devices, a full-scale exchange switch, which handles conflict resolution among competing messages, is much more difficult. More emphasis was therefore given to the exchange switch design. The functionalities required for the exchange switch and its controls were analyzed. These functionalities were then assessed for optical implementation. It is clear that even the basic exchange switch, that is, an exchange without the controls for conflict resolution, delivery, and so on, is quite a difficult problem in optics. We have proposed a number of optical techniques that appear to be good candidates for realizing the basic exchange switch. A reasonable approach appears to be to evaluate these techniques.

  13. A CNN-Specific Integrated Processor

    Science.gov (United States)

    Malki, Suleyman; Spaanenburg, Lambert

    2009-12-01

Integrated Processors (IP) are algorithm-specific cores that either by programming or by configuration can be re-used within many microelectronic systems. This paper looks at Cellular Neural Networks (CNN) to become realized as IP. First, current digital implementations are reviewed, and the memory-processor bandwidth issues are analyzed. Then a generic view is taken on the structure of the network, and a new intra-communication protocol based on rotating wheels is proposed. It is shown that this provides for guaranteed high performance with a minimal network interface. The resulting node is small and supports multi-level CNN designs, giving the system a 30-fold increase in capacity compared to classical designs. As it facilitates multiple operations on a single image, and single operations on multiple images, with minimal access to the external image memory, balancing the internal and external data transfer requirements optimizes the system operation. In conventional digital CNN designs, the treatment of boundary nodes requires additional logic to handle the CNN value propagation scheme. In the new architecture, only a slight modification of the existing cells is necessary to model the boundary effect. A typical prototype for visual pattern recognition will house 4096 CNN cells with a 2% overhead for making it an IP.

  14. A CNN-Specific Integrated Processor

    Directory of Open Access Journals (Sweden)

    Suleyman Malki

    2009-01-01

Full Text Available Integrated Processors (IP) are algorithm-specific cores that either by programming or by configuration can be re-used within many microelectronic systems. This paper looks at Cellular Neural Networks (CNN) to become realized as IP. First, current digital implementations are reviewed, and the memory-processor bandwidth issues are analyzed. Then a generic view is taken on the structure of the network, and a new intra-communication protocol based on rotating wheels is proposed. It is shown that this provides for guaranteed high performance with a minimal network interface. The resulting node is small and supports multi-level CNN designs, giving the system a 30-fold increase in capacity compared to classical designs. As it facilitates multiple operations on a single image, and single operations on multiple images, with minimal access to the external image memory, balancing the internal and external data transfer requirements optimizes the system operation. In conventional digital CNN designs, the treatment of boundary nodes requires additional logic to handle the CNN value propagation scheme. In the new architecture, only a slight modification of the existing cells is necessary to model the boundary effect. A typical prototype for visual pattern recognition will house 4096 CNN cells with a 2% overhead for making it an IP.
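    The per-node computation such a digital CNN core implements is the standard discrete-time cellular network update, in which each cell combines a 3 × 3 neighbourhood of outputs and inputs through feedback and control templates. The sketch below shows that generic update with placeholder templates; it models the zero boundary condition that the boundary cells approximate, and is not the rotating-wheels protocol itself.

```python
import numpy as np

# Generic discrete-time CNN cell update over a 3x3 neighbourhood - the
# per-node computation a digital CNN core implements. Templates A, B and the
# bias z are placeholders, not taken from the paper.
def cnn_step(state, u, A, B, z):
    n_rows, n_cols = state.shape
    y = np.clip(state, -1.0, 1.0)          # standard piecewise-linear output
    new = np.empty_like(state)
    yp = np.pad(y, 1)                      # zero padding = fixed boundary cells
    up = np.pad(u, 1)
    for r in range(n_rows):
        for c in range(n_cols):
            new[r, c] = (np.sum(A * yp[r:r + 3, c:c + 3])    # feedback template
                         + np.sum(B * up[r:r + 3, c:c + 3])  # control template
                         + z)
    return new

A = np.array([[0, 0, 0], [0, 2.0, 0], [0, 0, 0]])   # placeholder feedback template
B = np.full((3, 3), 1.0 / 9)                        # placeholder control template
img = np.random.default_rng(2).uniform(-1, 1, (8, 8))
state = cnn_step(np.zeros_like(img), img, A, B, z=-0.5)
print(state.shape)
```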

  15. Are animal models predictive for humans?

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2009-01-01

Full Text Available It is one of the central aims of the philosophy of science to elucidate the meanings of scientific terms and also to think critically about their application. The focus of this essay is the scientific term predict and whether there is credible evidence that animal models, especially in toxicology and pathophysiology, can be used to predict human outcomes. Whether animals can be used to predict human response to drugs and other chemicals is apparently a contentious issue. However, when one empirically analyzes animal models using scientific tools, they fall far short of being able to predict human responses. This is not surprising considering what we have learned from fields such as evolutionary and developmental biology, gene regulation and expression, epigenetics, complexity theory, and comparative genomics.

  16. [Primate models of human viral hepatitis].

    Science.gov (United States)

    Poleshchuk, V F; Mikhaĭlov, M I; Zamiatina, N A

    2006-01-01

The paper summarizes the updates available in the literature and the authors' own data on the etiology of hepatitis, its models, and experimental studies on susceptible simian species. A comparative analysis of the etiological agents, the causative agents of simian and human hepatitis, will give a better insight into the evolution of its viruses.

  17. Human driven transitions in complex model ecosystems

    Science.gov (United States)

    Harfoot, Mike; Newbold, Tim; Tittinsor, Derek; Purves, Drew

    2015-04-01

    Human activities have been observed to be impacting ecosystems across the globe, leading to reduced ecosystem functioning, altered trophic and biomass structure and ultimately ecosystem collapse. Previous attempts to understand global human impacts on ecosystems have usually relied on statistical models, which do not explicitly model the processes underlying the functioning of ecosystems, represent only a small proportion of organisms and do not adequately capture complex non-linear and dynamic responses of ecosystems to perturbations. We use a mechanistic ecosystem model (1), which simulates the underlying processes structuring ecosystems and can thus capture complex and dynamic interactions, to investigate boundaries of complex ecosystems to human perturbation. We explore several drivers including human appropriation of net primary production and harvesting of animal biomass. We also present an analysis of the key interactions between biotic, societal and abiotic earth system components, considering why and how we might think about these couplings. References: M. B. J. Harfoot et al., Emergent global patterns of ecosystem structure and function from a mechanistic general ecosystem model., PLoS Biol. 12, e1001841 (2014).

  18. Quality assessment of human behavior models

    NARCIS (Netherlands)

    Doesburg, W.A. van

    2007-01-01

Accurate and efficient models of human behavior offer great potential in military and crisis management applications. However, little attention has been given to the manner in which it can be determined if this potential is actually realized. In this study a quality assessment approach that

  19. Future of human models for crash analysis

    NARCIS (Netherlands)

    Wismans, J.S.H.M.; Happee, R.; Hoof, J.F.A.M. van; Lange, R. de

    2001-01-01

    In the crash safety field mathematical models can be applied in practically all area's of research and development including: reconstruction of actual accidents, design (CAD) of the crash response of vehicles, safety devices and roadside facilities and in support of human impact biomechanical

  20. Optical models of the human eye.

    Science.gov (United States)

    Atchison, David A; Thibos, Larry N

    2016-03-01

    Optical models of the human eye have been used in visual science for purposes such as providing a framework for explaining optical phenomena in vision, for predicting how refraction and aberrations are affected by change in ocular biometry and as computational tools for exploring the limitations imposed on vision by the optical system of the eye. We address the issue of what is understood by optical model eyes, discussing the 'encyclopaedia' and 'toy train' approaches to modelling. An extensive list of purposes of models is provided. We discuss many of the theoretical types of optical models (also schematic eyes) of varying anatomical accuracy, including single, three and four refracting surface variants. We cover the models with lens structure in the form of nested shells and gradient index. Many optical eye models give accurate predictions only for small angles and small fields of view. If aberrations and image quality are important to consider, such 'paraxial' model eyes must be replaced by 'finite model' eyes incorporating features such as aspheric surfaces, tilts and decentrations, wavelength-dependent media and curved retinas. Many optical model eyes are population averages and must become adaptable to account for age, gender, ethnicity, refractive error and accommodation. They can also be customised for the individual when extensive ocular biometry and optical performance data are available. We consider which optical model should be used for a particular purpose, adhering to the principle that the best model is the simplest fit for the task. We provide a glimpse into the future of optical models of the human eye. This review is interwoven with historical developments, highlighting the important people who have contributed so richly to our understanding of visual optics. © 2016 The Authors. Clinical and Experimental Optometry © 2016 Optometry Australia.

  1. The UA1 upgrade calorimeter trigger processor

    International Nuclear Information System (INIS)

    Bains, M.; Charleton, D.; Ellis, N.; Garvey, J.; Gregory, J.; Jimack, M.P.; Jovanovic, P.; Kenyon, I.R.; Baird, S.A.; Campbell, D.; Cawthraw, M.; Coughlan, J.; Flynn, P.; Galagedera, S.; Grayer, G.; Halsall, R.; Shah, T.P.; Stephens, R.; Biddulph, P.; Eisenhandler, E.; Fensome, I.F.; Landon, M.; Robinson, D.; Oliver, J.; Sumorok, K.

    1990-01-01

    The increased luminosity of the improved CERN Collider and the more subtle signals of second-generation collider physics demand increasingly sophisticated triggering. We have built a new first-level trigger processor designed to use the excellent granularity of the UA1 upgrade calorimeter. This device is entirely digital and handles events in 1.5 μs, thus introducing no dead time. Its most novel feature is fast two-dimensional electromagnetic cluster-finding with the possibility of demanding an isolated shower of limited penetration. The processor allows multiple combinations of triggers on electromagnetic showers, hadronic jets and energy sums, including a total-energy veto of multiple interactions and a full vector sum of missing transverse energy. This hard-wired processor is about five times more powerful than its predecessor, and makes extensive use of pipelining techniques. It was used extensively in the 1988 and 1989 runs of the CERN Collider. (orig.)

  2. Programmable DNA-Mediated Multitasking Processor.

    Science.gov (United States)

    Shu, Jian-Jun; Wang, Qi-Wen; Yong, Kian-Yan; Shao, Fangwei; Lee, Kee Jin

    2015-04-30

    Because of DNA's appealing features as a material, including its minuscule size, defined structural repeat and rigidity, programmable DNA-mediated processing is a promising computing paradigm that employs DNA as an information storage and processing substrate to tackle computational problems. The massive parallelism of DNA hybridization exhibits transcendent potential to improve multitasking capabilities and yield a tremendous speed-up over conventional electronic processors with their stepwise signal cascades. As an example of this multitasking capability, we present an in vitro programmable DNA-mediated optimal route planning processor as a functional unit embeddable in contemporary navigation systems. The novel programmable DNA-mediated processor has several advantages over existing silicon-mediated methods, such as massive data storage and simultaneous processing using far less material than conventional silicon devices.

  3. Intrusion Detection Architecture Utilizing Graphics Processors

    Directory of Open Access Journals (Sweden)

    Branislav Madoš

    2012-12-01

    With thriving technology and the great increase in the usage of computer networks, the risk of these networks coming under attack has increased. A number of techniques have been created and designed to help detect and/or prevent such attacks. One common technique is the use of Intrusion Detection Systems (IDS). Today, a number of open-source and commercial IDS are available to match enterprise requirements. However, the performance of these systems is still the main concern. This paper examines perceptions of intrusion detection architecture implementation resulting from the use of graphics processors. It discusses recent research activities, developments and problems of operating system security. Some exploratory evidence is presented that shows the capabilities of using graphics processors with intrusion detection systems. The focus is on how knowledge gained through graphics processor inclusion has played out in the design of an intrusion detection architecture, seen as an opportunity to strengthen research expertise.

  4. Embedded processor extensions for image processing

    Science.gov (United States)

    Thevenin, Mathieu; Paindavoine, Michel; Letellier, Laurent; Heyrman, Barthélémy

    2008-04-01

    The advent of camera phones marks a new phase in embedded camera sales. By late 2009, the total number of camera phones will exceed that of both conventional and digital cameras shipped since the invention of photography. Use in mobile phones of applications like visiophony, matrix code readers and biometrics requires a high degree of component flexibility that image processors (IPs) have not, to date, been able to provide. For all these reasons, programmable processor solutions have become essential. This paper presents several techniques geared to speeding up image processors. It demonstrates that a twofold gain is possible for the complete image acquisition chain and the enhancement pipeline downstream of the video sensor. Such results confirm the potential of these computing systems for supporting future applications.

  5. Modeling Operations Costs for Human Exploration Architectures

    Science.gov (United States)

    Shishko, Robert

    2013-01-01

    Operations and support (O&S) costs for human spaceflight have not received the same attention in the cost estimating community as have development costs. This is unfortunate as O&S costs typically comprise a majority of life-cycle costs (LCC) in such programs as the International Space Station (ISS) and the now-cancelled Constellation Program. Recognizing this, the Constellation Program and NASA HQs supported the development of an O&S cost model specifically for human spaceflight. This model, known as the Exploration Architectures Operations Cost Model (ExAOCM), provided the operations cost estimates for a variety of alternative human missions to the moon, Mars, and Near-Earth Objects (NEOs) in architectural studies. ExAOCM is philosophically based on the DoD Architecture Framework (DoDAF) concepts of operational nodes, systems, operational functions, and milestones. This paper presents some of the historical background surrounding the development of the model, and discusses the underlying structure, its unusual user interface, and lastly, previous examples of its use in the aforementioned architectural studies.

  6. Computer Modeling of Human Delta Opioid Receptor

    Directory of Open Access Journals (Sweden)

    Tatyana Dzimbova

    2013-04-01

    The development of selective agonists of the δ-opioid receptor, as well as models of the interaction of ligands with this receptor, are subjects of increased interest. In the absence of crystal structures of opioid receptors, 3D homology models with different templates have been reported in the literature. The problem is that these models are not available for widespread use. The aims of our study are: (1) to choose, from recently published crystallographic structures, templates for homology modeling of the human δ-opioid receptor (DOR); (2) to evaluate the models with different computational tools; and (3) to identify the most reliable model based on the correlation between docking data and in vitro bioassay results. The enkephalin analogues used as ligands in this study were previously synthesized by our group and their biological activity was evaluated. Several models of DOR were generated using different templates. All these models were evaluated by PROCHECK and MolProbity, and the relationship between docking data and in vitro results was determined. The best correlations for the tested models of DOR were found between the efficacy (erel) of the compounds, calculated from in vitro experiments, and the Fitness scoring function from the docking studies. A new model of DOR was generated and evaluated by different approaches. This model has a good GA341 value (0.99) from MODELLER and good values from PROCHECK (92.6% of residues in most favored regions) and MolProbity (99.5% in favored regions). The scoring function correlates (Pearson r = -0.7368, p-value = 0.0097) with erel of a series of enkephalin analogues calculated from in vitro experiments. This investigation thus suggests a reliable model of DOR. The newly generated model of the DOR receptor can be used for further in silico experiments and will enable faster and more accurate design of selective and effective ligands for the δ-opioid receptor.

  7. Human physiologically based pharmacokinetic model for propofol

    Directory of Open Access Journals (Sweden)

    Schnider Thomas W

    2005-04-01

    Background: Propofol is widely used for both short-term anesthesia and long-term sedation. It has unusual pharmacokinetics because of its high lipid solubility. The standard approach to describing the pharmacokinetics is a multi-compartmental model. This paper presents the first detailed human physiologically based pharmacokinetic (PBPK) model for propofol. Methods: PKQuest, a freely distributed software routine (http://www.pkquest.com), was used for all the calculations. The "standard human" PBPK parameters developed in previous applications are used. It is assumed that blood and tissue binding is determined by simple partition into the tissue lipid, which is characterized by two previously determined sets of parameters: (1) the value of the propofol oil/water partition coefficient; (2) the lipid fraction in the blood and tissues. The model was fit to the individual experimental data of Schnider et al. (Anesthesiology, 1998; 88:1170), in which an initial bolus dose was followed 60 minutes later by a one-hour constant infusion. Results: The PBPK model provides a good description of the experimental data over a large range of input dosage, subject age and fat fraction. Only one adjustable parameter (the liver clearance) is required to describe the constant infusion phase for each individual subject. In order to fit the bolus injection phase, for 10 of the 24 subjects it was necessary to assume that a fraction of the bolus dose was sequestered and then slowly released from the lungs (characterized by two additional parameters). The average weighted residual error (WRE) of the PBPK model fit to both the bolus and infusion phases was 15%, similar to the WRE for just the constant infusion phase obtained by Schnider et al. using a 6-parameter NONMEM compartmental model. Conclusion: A PBPK model using standard human parameters and a simple description of tissue binding provides a good description of human propofol kinetics. The major advantage of a
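
    The flow-limited, partition-into-lipid structure described above can be sketched compactly. The following is a minimal illustration only: the flows, volumes, partition coefficients and dosing are invented placeholders, not PKQuest's "standard human" parameters.

```python
# Minimal flow-limited PBPK sketch. Each tissue t obeys
#   V_t dC_t/dt = Q_t (C_blood - C_t / P_t),
# with partition coefficients P_t standing in for the lipid-partition
# assumption above. All numbers are illustrative, not PKQuest's.
from scipy.integrate import solve_ivp

Q = {"liver": 1.5, "fat": 0.3, "muscle": 0.9}    # blood flows, L/min (assumed)
V = {"liver": 1.8, "fat": 12.0, "muscle": 29.0}  # volumes, L (assumed)
P = {"liver": 5.0, "fat": 40.0, "muscle": 3.0}   # tissue/blood partition (assumed)
V_blood, CL = 5.0, 1.6                           # blood volume L; clearance L/min

def rhs(t, y, infusion):
    c_b, c_t = y[0], dict(zip(Q, y[1:]))
    flux = {k: Q[k] * (c_b - c_t[k] / P[k]) for k in Q}
    dc_b = (infusion(t) - sum(flux.values()) - CL * c_b) / V_blood
    return [dc_b] + [flux[k] / V[k] for k in Q]

# bolus approximated by a brief high-rate infusion, then a 1 h infusion at 60 min
rate = lambda t: 200.0 if t < 0.5 else (10.0 if 60.0 <= t <= 120.0 else 0.0)
sol = solve_ivp(rhs, (0.0, 180.0), [0.0] * 4, args=(rate,), max_step=0.1)
print("blood concentration at 180 min: %.3f mg/L" % sol.y[0, -1])
```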

  8. Parallel processor for fast event analysis

    International Nuclear Information System (INIS)

    Hensley, D.C.

    1983-01-01

    Current maximum data rates from the Spin Spectrometer of approx. 5000 events/s (up to 1.3 MBytes/s) and a minimum analysis of at least 3000 operations/event require a CPU cycle time near 70 ns. In order to achieve an effective cycle time of 70 ns, a parallel processing device is proposed in which up to 4 independent processors are implemented in parallel. The individual processors are designed around the Am2910 microsequencer, the Am29116 μP, and the Am29517 multiplier. Satellite histogramming in a mass memory system will be managed by a commercial 16-bit μP system
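
    The design point quoted above follows directly from the stated figures; one reading is that four parallel processors, each with a cycle near 270 ns, present an effective 70 ns cycle. A quick check of the arithmetic:

```python
# Back-of-envelope arithmetic from the figures in the abstract above.
events_per_s = 5000
ops_per_event = 3000
required_ops_per_s = events_per_s * ops_per_event      # 1.5e7 ops/s
effective_cycle_ns = 1e9 / required_ops_per_s          # ~66.7 ns, "near 70 ns"
n_processors = 4
per_processor_cycle_ns = effective_cycle_ns * n_processors  # ~267 ns per unit
print(f"effective: {effective_cycle_ns:.1f} ns, "
      f"per processor: {per_processor_cycle_ns:.0f} ns")
```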

  9. Study on Korean Pine Nut Processors

    OpenAIRE

    Kang, Hag Mo; Choi, Soo Im; Sato, Noriko; Kim, Hyun; 佐藤, 宣子

    2012-01-01

    A survey of the operating state of pine nut processors located in Gapyeong–gun, Gyeonggi–do and Hongcheon–gun, Gangwon–do, representative pine nut producing areas, found that the total purchase of in-cone pine nuts in Gapyeong–gun, Gyeonggi–do was 500~4,000 bags (1 bag is 80 kg), with an average of 2,000 bags per processor. The price range per bag of pine nuts was 470~620 thousand won and the average price was 550 thousand won. Total purchase price of pine nuts with a con...

  10. Comparison of Processor Performance of SPECint2006 Benchmarks of some Intel Xeon Processors

    Directory of Open Access Journals (Sweden)

    Abdul Kareem PARCHUR

    2012-08-01

    High performance is a critical requirement for all microprocessor manufacturers. This paper compares the performance of two main Intel Xeon processor series (Type A: Intel Xeon X5260, X5460, E5450 and L5320; Type B: Intel Xeon X5140, 5130, 5120 and E5310). The microarchitecture of these processors derives from the family of Intel processors that began with the Pentium 4. These processors can provide a performance boost for many key application areas. The scaling of performance across the two series has been analyzed using the performance numbers of 12 CPU2006 integer benchmarks, which exhibit significant differences in performance. The results and analysis can be used by performance engineers, scientists and developers to better understand performance scaling in modern processors.
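
    SPEC-style comparisons of this kind conventionally summarize per-benchmark ratios with a geometric mean. A minimal sketch; the benchmark names are real CPU2006 integer tests, but the ratios below are invented for illustration:

```python
# SPEC-style summary: per-benchmark ratios combined with a geometric mean.
from math import prod

ratios_a = {"400.perlbench": 28.1, "401.bzip2": 19.4, "403.gcc": 23.7}  # hypothetical
ratios_b = {"400.perlbench": 21.5, "401.bzip2": 17.9, "403.gcc": 18.2}  # hypothetical

def geomean(values):
    return prod(values) ** (1.0 / len(values))

speedup = geomean(list(ratios_a.values())) / geomean(list(ratios_b.values()))
print(f"Type A over Type B, geometric-mean speedup: {speedup:.2f}x")
```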

  11. Model Dinamik Penularan Human Immunodeficiency Virus (HIV) [A Dynamic Model of Human Immunodeficiency Virus (HIV) Transmission]

    OpenAIRE

    Sutimin, Sutimin; Imamudin, Imamudin

    2009-01-01

    Human Immunodeficiency Virus (HIV) is a virus that can damage the human immune system. HIV can attack susceptible individuals when they come into contact with HIV carriers; once infected with HIV, they may ultimately develop AIDS or become non-AIDS seropositive. From assumptions about HIV transmission, a mathematical model can be formulated describing the transitions of susceptible individuals to HIV infection, AIDS and non-AIDS seropositive states.

  12. Modeling human influenza infection in the laboratory

    Directory of Open Access Journals (Sweden)

    Radigan KA

    2015-08-01

    Kathryn A Radigan,1 Alexander V Misharin,2 Monica Chi,1 GR Scott Budinger1 (1Division of Pulmonary and Critical Care Medicine, 2Division of Rheumatology, Northwestern University Feinberg School of Medicine, Chicago, IL, USA). Abstract: Influenza is the leading cause of death from an infectious cause. Because of its clinical importance, many investigators use animal models to understand the biologic mechanisms of influenza A virus replication, the immune response to the virus, and the efficacy of novel therapies. This review will focus on the biosafety, biosecurity, and ethical concerns that must be considered in pursuing influenza research, as well as on the two animal models – mice and ferrets – most frequently used by researchers as models of human influenza infection. Keywords: mice, ferret, influenza, animal model, biosafety

  13. Optimization of experimental human leukemia models (review)

    Directory of Open Access Journals (Sweden)

    D. D. Pankov

    2012-01-01

    This review addresses the problem of assessing the prospects of immunotherapy, including antigen-specific cell therapy, using animal models. The various groups of currently existing animal models and the methods of creating them are described, from different immunodeficient mice to several variants of tumor cell engraftment in them. The review also addresses the possibility of studying tumor stem cells using mouse models for leukemia treatment with adoptive cell therapy, including WT1-specific therapy. Issues of the migration and proliferation of human leukemia cells in mice with different degrees of immunodeficiency are discussed as well. To assess potential immunotherapy efficacy, a comparison of the immunodeficient mouse model with the clinical situation in oncology patients after chemotherapy is proposed.

  14. Human responses to augmented virtual scaffolding models.

    Science.gov (United States)

    Hsiao, Hongwei; Simeonov, Peter; Dotson, Brian; Ammons, Douglas; Kau, Tsui-Ying; Chiou, Sharon

    2005-08-15

    This study investigated the effect of adding real planks, in virtual scaffolding models of elevation, on human performance in a surround-screen virtual reality (SSVR) system. Twenty-four construction workers and 24 inexperienced controls performed walking tasks on real and virtual planks at three virtual heights (0, 6 m, 12 m) and two scaffolding-platform-width conditions (30, 60 cm). Gait patterns, walking instability measurements and cardiovascular reactivity were assessed. The results showed differences in human responses to real vs. virtual planks in walking patterns, instability score and heart-rate inter-beat intervals; it appeared that adding real planks to the SSVR virtual scaffolding model enhanced the quality of SSVR as a human–environment interface research tool. In addition, there were significant differences in performance between construction workers and the control group. The inexperienced participants were more unstable than the construction workers. Both groups increased their stride length with repetitions of the task, indicating a possibly confidence- or habit-related learning effect. The practical implications of this study are in the adoption of augmented virtual models of elevated construction environments for injury prevention research, and the development of programmes for balance-control training to reduce the risk of falls at elevation before workers enter a construction job.

  15. Modeling Oxygen Transport in the Human Placenta

    Science.gov (United States)

    Serov, Alexander; Filoche, Marcel; Salafia, Carolyn; Grebenkov, Denis

    Efficient functioning of the human placenta is crucial for a favorable pregnancy outcome. We construct a 3D model of oxygen transport in the placenta based on its histological cross-sections. The model accounts for both diffusion and convection of oxygen in the intervillous space and allows one to estimate the oxygen uptake of a placentone. We demonstrate the existence of an optimal villi density maximizing the uptake and explain it as a trade-off between the incoming oxygen flow and the absorbing villous surface. Calculations performed for arbitrary shapes of fetal villi show that only two geometrical characteristics - villi density and the effective villi radius - are required to predict fetal oxygen uptake. Two combinations of physiological parameters that determine oxygen uptake are also identified: the maximal oxygen inflow of a placentone and the Damköhler number. An automatic image analysis method is developed and applied to 22 healthy placental cross-sections, demonstrating that the villi density of a healthy human placenta lies within 10% of the optimal value, while the overall geometric efficiency is rather low (around 30-40%). In perspective, the model can form the basis of a reliable tool for assessing oxygen exchange efficiency in the human placenta post partum.

  16. A dynamic model of human physiology

    Science.gov (United States)

    Green, Melissa; Kaplan, Carolyn; Oran, Elaine; Boris, Jay

    2010-11-01

    To study the systems-level transport in the human body, we develop the Computational Man (CMAN): a set of one-dimensional unsteady elastic flow simulations created to model a variety of coupled physiological systems including the circulatory, respiratory, excretory, and lymphatic systems. The model systems are collapsed from three spatial dimensions and time to one spatial dimension and time by assuming axisymmetric vessel geometry and a parabolic velocity profile across the cylindrical vessels. To model the actions of a beating heart or expanding lungs, the flow is driven by user-defined changes to the equilibrium areas of the elastic vessels. The equations are then iteratively solved for pressure, area, and average velocity. The model is augmented with valves and contractions to resemble the biological structure of the different systems. CMAN will be used to track material transport throughout the human body for diagnostic and predictive purposes. Parameters will be adjustable to match those of individual patients. Validation of CMAN has used both higher-dimensional simulations of similar geometries and benchmark measurement from medical literature.
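
    The collapsed 1D formulation described above can be illustrated with a crude explicit update: an elastic tube law links pressure to area, and flow is driven by prescribing changes to the equilibrium area, as the abstract describes. All coefficients, the driving frequency (exaggerated for a short demo), and the naive centered-difference numerics below are assumptions for illustration, not the CMAN implementation.

```python
# Naive explicit update for a 1-D axisymmetric elastic vessel: linear tube
# law p = k*(A - A_eq)/A_eq, continuity and (inviscid) momentum, with flow
# driven by prescribed changes to the equilibrium area.
import numpy as np

n, dx, dt = 100, 1e-2, 1e-5            # cells over 1 m; time step, s (assumed)
rho, k = 1050.0, 1e5                   # density kg/m^3; wall stiffness, Pa
A_eq = np.full(n, 3e-5)                # equilibrium cross-section, m^2
A, u = A_eq.copy(), np.zeros(n)        # area and mean axial velocity

for i in range(20_000):                # 0.2 s of simulated time
    t = i * dt
    # drive: prescribed "beating" of the equilibrium area at the inlet
    A_eq[:5] = 3e-5 * (1.0 + 0.1 * np.sin(2 * np.pi * 100.0 * t))
    p = k * (A - A_eq) / A_eq                                      # tube law
    u -= dt * (u * np.gradient(u, dx) + np.gradient(p, dx) / rho)  # momentum
    A -= dt * np.gradient(A * u, dx)                               # continuity
print("peak |u| after 0.2 s:", round(float(np.abs(u).max()), 4), "m/s")
```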

  17. A post-processor for Gurmukhi OCR

    Indian Academy of Sciences (India)

    Abstract. A post-processing system for OCR of Gurmukhi script has been developed. Statistical information of Punjabi language syllable combinations, corpora look-up and certain heuristics based on Punjabi grammar rules have been combined to design the post-processor. An improvement of 3% in recognition rate, ...

  18. Monotonicity in the limited processor sharing queue

    NARCIS (Netherlands)

    M. Nuyens; W. van der Weij (Wemke)

    2008-01-01

    We study a processor sharing queue with a limited number of service positions and an infinite buffer. The occupied service positions share an underlying resource. We prove that for service times with a decreasing failure rate, the queue length is stochastically decreasing in the number
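
    The monotonicity claim can be probed numerically. The sketch below time-steps a limited processor sharing queue with hyperexponential (decreasing-failure-rate) service; it illustrates the queue's mechanics only, not the paper's proof, and every parameter is invented.

```python
# Time-stepped sketch of a limited processor sharing (LPS) queue.
import random

def sample_service():
    # mixture of fast and slow jobs -> decreasing failure rate
    return (random.expovariate(5.0) if random.random() < 0.8
            else random.expovariate(0.25))

def mean_queue(n_positions, lam=0.8, dt=0.01, horizon=2000.0, seed=7):
    random.seed(seed)
    queue, t, area = [], 0.0, 0.0        # remaining work of jobs, FIFO order
    while t < horizon:
        if random.random() < lam * dt:   # Poisson arrivals (approximate)
            queue.append(sample_service())
        k = min(len(queue), n_positions) # jobs holding service positions
        for i in range(k):
            queue[i] -= dt / k           # positions share one unit-rate resource
        queue = [r for r in queue if r > 0]
        area += len(queue) * dt
        t += dt
    return area / t

for n in (1, 2, 4, 8):
    print(n, "positions -> mean queue ~", round(mean_queue(n), 2))
```

    With these made-up parameters the estimated mean queue length should decrease as the number of service positions grows, consistent with the stated result for decreasing-failure-rate service.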

  19. Report of the trigger processor subgroup

    International Nuclear Information System (INIS)

    Johnson, M.

    1993-01-01

    This is a summary report of a small group of people who met one afternoon to discuss trigger processors. The trigger processor group spent much of its time discussing new architectures for high rate experiments. There was an attempt to differentiate between data driven architectures and the more conventional systems where triggers are divided into a series of levels. This was not too successful because most people felt that there were elements of the data driven architecture in almost all trigger systems -- particularly at the front end. There are, however, broad divisions that are present in almost every trigger system. The typical trigger levels are defined as: level 1 - the section of the trigger that is truly dead timeless; the data is pipelined with enough buffers so that no crossing (event in fixed target) is lost, and a trigger decision is generated at every crossing (but delayed by the length of the pipeline); level 3 - a processor farm with one complete event per processor; level 2 - everything in between

  20. A high-speed analog neural processor

    NARCIS (Netherlands)

    Masa, P.; Masa, Peter; Hoen, Klaas; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    Targeted at high-energy physics research applications, our special-purpose analog neural processor can classify up to 70 dimensional vectors within 50 nanoseconds. The decision-making process of the implemented feedforward neural network enables this type of computation to tolerate weight

  1. A post-processor for Gurmukhi OCR

    Indian Academy of Sciences (India)

    Statistical information of Punjabi language syllable combinations, corpora look-up and certain heuristics based on Punjabi grammar rules have been combined to design the post-processor. An improvement of 3% in ...

  2. Globe hosts launch of new processor

    CERN Multimedia

    2006-01-01

    Launch of the quadcore processor chip at the Globe. On 14 November, in a series of major media events around the world, the chip-maker Intel launched its new 'quadcore' processor. For the regions of Europe, the Middle East and Africa, the day-long launch event took place in CERN's Globe of Science and Innovation, with over 30 journalists in attendance, coming from as far away as Johannesburg and Dubai. CERN was a significant choice for the event: the first tests of this new generation of processor in Europe had been made at CERN over the preceding months, as part of CERN openlab, a research partnership with leading IT companies such as Intel, HP and Oracle. The event also provided the opportunity for the journalists to visit ATLAS and the CERN Computer Centre. The strategy of putting multiple processor cores on the same chip, which has been pursued by Intel and other chip-makers in the last few years, represents an important departure from the more traditional improvements in the sheer speed of such chips. ...

  3. Direct video acquisition by digital signal processors

    Science.gov (United States)

    de Sa, Luis A. S. V.; Silva, Vitor M.; Silvestre, Joao C.

    1992-08-01

    Almost any frame grabber system has a special controller circuit to transfer data from the video analog-to-digital converter (ADC) to the system memory. This controller, which normally includes a phase-locked loop (PLL) and several counters, has to fulfill three main functions: the generation of a pixel clock synchronized with the incoming video signal, the command of the ADC, and memory addressing for the storage of the digitized video. This paper shows how a digital signal processor (DSP) can simplify the design of a video acquisition system by reading the video ADC and writing to its memory at video rates. An example is given with the TMS320C30 processor, which supports simultaneous read and write operations on its two external buses. In the case of the CCIR 601 video format the processor runs at 27 MHz. Modern versions of the TMS320C30 running at as fast as 40 MHz can acquire up to 1066 samples per line. Also, the 32-bit wide buses of the processor allow colour acquisition using this technique. In order to build such a simple circuit the DSP needs to be synchronized to the incoming video signal, which can be neatly done by using the TMS320C30 internal timer as part of the PLL. By changing the programming of the internal timer any video format can be grabbed. In addition, the DSP can be used as a powerful image

  4. Simplifying cochlear implant speech processor fitting

    NARCIS (Netherlands)

    Willeboer, C.

    2008-01-01

    Conventional fittings of the speech processor of a cochlear implant (CI) rely to a large extent on the implant recipient's subjective responses. For each of the 22 intracochlear electrodes the recipient has to indicate the threshold level (T-level) and comfortable loudness level (C-level) while

  5. Vector and parallel processors in computational science

    International Nuclear Information System (INIS)

    Duff, I.S.; Reid, J.K.

    1985-01-01

    This book presents the papers given at a conference which reviewed the new developments in parallel and vector processing. Topics considered at the conference included hardware (array processors, supercomputers), programming languages, software aids, numerical methods (e.g., Monte Carlo algorithms, iterative methods, finite elements, optimization), and applications (e.g., neutron transport theory, meteorology, image processing)

  6. ARTS III/Parallel Processor Design Study

    Science.gov (United States)

    1975-04-01

    It was the purpose of this design study to investigate the feasibility, suitability, and cost-effectiveness of augmenting the ARTS III failsafe/failsoft multiprocessor system with a form of parallel processor to accommodate a large growth in air traff...

  7. A post-processor for Gurmukhi OCR

    Indian Academy of Sciences (India)

    The word dictionary is partitioned in order to reduce the search space, besides preventing forced matches to incorrect words. Word size and the envelope information of words are taken as the main partitioning features. In this paper we describe a post-processor for improving the recognition rate of an OCR of Gurmukhi script.
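
    The partitioning idea generalizes readily. A minimal sketch, using Latin ascender/descender classes as a stand-in for Gurmukhi envelope features; the word list and candidate are invented:

```python
# Sketch of dictionary partitioning by word size and a coarse shape "envelope":
# a candidate OCR word is matched only against its own partition.
from collections import defaultdict

ASCENDERS, DESCENDERS = set("bdfhklt"), set("gjpqy")

def envelope(word):
    return (len(word),
            any(ch in ASCENDERS for ch in word),
            any(ch in DESCENDERS for ch in word))

def build_partitions(dictionary):
    parts = defaultdict(set)
    for w in dictionary:
        parts[envelope(w)].add(w)
    return parts

parts = build_partitions(["plan", "glad", "then", "nine", "mine"])
candidate = "mine"                       # hypothetical OCR output
print(candidate in parts[envelope(candidate)])  # only one bucket is searched
```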

  8. The Five Key Questions of Human Performance Modeling.

    Science.gov (United States)

    Wu, Changxu

    2018-01-01

    By building computational (typically mathematical and computer simulation) models, human performance modeling (HPM) quantifies, predicts, and maximizes human performance and human-machine system productivity and safety. This paper describes and summarizes the five key questions of human performance modeling: 1) why we build models of human performance; 2) what the expectations of a good human performance model are; 3) what the procedures and requirements for building and verifying a human performance model are; 4) how we integrate a human performance model with system design; and 5) what the possible future directions of human performance modeling research are. Recent and classic HPM findings are addressed within the five questions to provide new thinking on HPM's motivations, expectations, procedures, system integration and future directions.

  9. Human reliability data collection and modelling

    International Nuclear Information System (INIS)

    1991-09-01

    The main purpose of this document is to review and outline the current state of the art of Human Reliability Assessment (HRA) as used for the quantitative assessment of the safe and economical operation of nuclear power plants. Another objective is to consider Human Performance Indicators (HPI), which can alert plant managers and regulators to departures from states of normal and acceptable operation. These two objectives are met in the three sections of this report. The first objective is divided into two areas, based on the location of the human actions being considered: the modelling and data collection associated with control room actions are addressed in chapter 1, while actions outside the control room (including maintenance) are addressed in chapter 2. Both chapters 1 and 2 present a brief outline of the current status of HRA for these areas and the major outstanding issues. Chapter 3 discusses HPI. Such performance indicators can signal, at various levels, changes in factors which influence human performance. The final section of this report consists of papers presented by the participants of the Technical Committee Meeting. A separate abstract was prepared for each of these papers. Refs, figs and tabs

  10. Zebrafish Models for Human Acute Organophosphorus Poisoning.

    Science.gov (United States)

    Faria, Melissa; Garcia-Reyero, Natàlia; Padrós, Francesc; Babin, Patrick J; Sebastián, David; Cachot, Jérôme; Prats, Eva; Arick Ii, Mark; Rial, Eduardo; Knoll-Gellida, Anja; Mathieu, Guilaine; Le Bihanic, Florane; Escalon, B Lynn; Zorzano, Antonio; Soares, Amadeu M V M; Raldúa, Demetrio

    2015-10-22

    Terrorist use of organophosphorus-based nerve agents and toxic industrial chemicals against civilian populations constitutes a real threat, as demonstrated by the terrorist attacks in Japan in the 1990s or, even more recently, in the Syrian civil war. Thus, development of more effective countermeasures against acute organophosphorus poisoning is urgently needed. Here, we have generated and validated zebrafish models for mild, moderate and severe acute organophosphorus poisoning by exposing zebrafish larvae to different concentrations of the prototypic organophosphorus compound chlorpyrifos-oxon. Our results show that zebrafish models mimic most of the pathophysiological mechanisms behind this toxidrome in humans, including acetylcholinesterase inhibition, N-methyl-D-aspartate receptor activation, and calcium dysregulation as well as inflammatory and immune responses. The suitability of the zebrafish larvae to in vivo high-throughput screenings of small molecule libraries makes these models a valuable tool for identifying new drugs for multifunctional drug therapy against acute organophosphorus poisoning.

  11. Performance evaluation of throughput computing workloads using multi-core processors and graphics processors

    Science.gov (United States)

    Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.

    2017-11-01

    The current trend in processor manufacturing focuses on multi-core architectures rather than increasing clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social networking web applications and big data have created huge demand for data processing activities, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-architecture-based GPUs. This paper reviews the architectural aspects of multi/many-core processors and graphics processors. Different case studies are taken to compare the performance of throughput computing applications using shared-memory programming in OpenMP and CUDA-API-based programming.

  12. Array processors based on Gaussian fraction-free method

    Energy Technology Data Exchange (ETDEWEB)

    Peng, S.; Sedukhin, S. [Aizu Univ., Aizuwakamatsu, Fukushima (Japan); Sedukhin, I.

    1998-03-01

    The design of algorithmic array processors for solving linear systems of equations using fraction-free Gaussian elimination method is presented. The design is based on a formal approach which constructs a family of planar array processors systematically. These array processors are synthesized and analyzed. It is shown that some array processors are optimal in the framework of linear allocation of computations and in terms of number of processing elements and computing time. (author)
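
    The scalar algorithm these arrays parallelize is Bareiss's fraction-free elimination, in which every division is exact so all intermediate values stay integral. A reference sketch of the scalar method (the systolic array mapping itself is not shown):

```python
# Bareiss fraction-free elimination. Every division below is exact, so
# intermediates stay integers. Assumes nonzero leading pivots; pivoting
# is omitted for brevity.
def bareiss(M):
    """In-place fraction-free elimination; M[n-1][n-1] ends as det(M)."""
    n = len(M)
    prev = 1
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    return M

A = [[2, 1, 1], [1, 3, 2], [1, 0, 0]]
print(bareiss([row[:] for row in A])[-1][-1])   # exact determinant: -1
```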

  13. Fabrication Security and Trust of Domain-Specific ASIC Processors

    Science.gov (United States)

    2016-10-30

    Michael Vai, Karen Gettings, and Theodore Lyszczarz, MIT Lincoln Laboratory. ...specific ASIC processor architecture, which we showed to be effective in protecting IP and mitigating the expense and inflexibility associated with using... practicality in ensuring the trust and security of the processor when it is fabricated. The result is a processor architecture that incorporates...

  14. Data collection from FASTBUS to a DEC UNIBUS processor through the UNIBUS-Processor Interface

    International Nuclear Information System (INIS)

    Larwill, M.; Barsotti, E.; Lesny, D.; Pordes, R.

    1983-01-01

    This paper describes the use of the UNIBUS Processor Interface, an interface between FASTBUS and the Digital Equipment Corporation UNIBUS. The UPI was developed by Fermilab and the University of Illinois. Details of the use of this interface in a high energy physics experiment at Fermilab are given. The paper includes a discussion of the operation of the UPI on the UNIBUS of a VAX-11, and plans for using the UPI to perform data acquisition from FASTBUS to a VAX-11 Processor

  15. The quantitative modelling of human spatial habitability

    Science.gov (United States)

    Wise, J. A.

    1985-01-01

    A model for the quantitative assessment of human spatial habitability is presented in the space station context. The visual aspect assesses how interior spaces appear to the inhabitants. This aspect concerns criteria such as sensed spaciousness and the affective (emotional) connotations of settings' appearances. The kinesthetic aspect evaluates the available space in terms of its suitability to accommodate human movement patterns, as well as the postural and anthropometric changes due to microgravity. Finally, social logic concerns how the volume and geometry of available space either affirms or contravenes established social and organizational expectations for spatial arrangements. Here, the criteria include privacy, status, social power, and proxemics (the use of space as a medium of social communication).

  16. Modelling human eye under blast loading.

    Science.gov (United States)

    Esposito, L; Clemente, C; Bonora, N; Rossi, T

    2015-01-01

    Primary blast injury (PBI) is the general term that refers to injuries resulting from the mere interaction of a blast wave with the body. Although few instances of primary ocular blast injury, without a concomitant secondary blast injury from debris, are documented, some experimental studies demonstrate its occurrence. In order to investigate PBI to the eye, a finite element model of the human eye using simple constitutive models was developed. The material parameters were calibrated by a multi-objective optimisation performed on available eye impact test data. The behaviour of the human eye and the dynamics of mechanisms occurring under PBI loading conditions were modelled. For the generation of the blast waves, different combinations of explosive (trinitrotoluene) mass charge and distance from the eye were analysed. An interpretation of the resulting pressure, based on the propagation and reflection of the waves inside the eye bulb and orbit, is proposed. The peculiar geometry of the bony orbit (similar to a frustum cone) can induce a resonance cavity effect and generate a pressure standing wave potentially hurtful for eye tissues.

  17. Advanced Avionics and Processor Systems for a Flexible Space Exploration Architecture

    Science.gov (United States)

    Keys, Andrew S.; Adams, James H.; Smith, Leigh M.; Johnson, Michael A.; Cressler, John D.

    2010-01-01

    The Advanced Avionics and Processor Systems (AAPS) project, formerly known as the Radiation Hardened Electronics for Space Environments (RHESE) project, endeavors to develop advanced avionic and processor technologies anticipated to be used by NASA's currently evolving space exploration architectures. The AAPS project is a part of the Exploration Technology Development Program, which funds an entire suite of technologies aimed at enabling NASA's ability to explore beyond low earth orbit. NASA's Marshall Space Flight Center (MSFC) manages the AAPS project. AAPS uses a broad-scoped approach to developing avionic and processor systems. Investment areas include advanced electronic designs and technologies capable of providing environmental hardness, reconfigurable computing techniques, software tools for radiation effects assessment, and radiation environment modeling tools. Near-term emphasis within the multiple AAPS tasks focuses on developing prototype components using semiconductor processes and materials (such as Silicon-Germanium (SiGe)) to enhance a device's tolerance to radiation events and low temperature environments. As the SiGe technology will culminate in a delivered prototype this fiscal year, the project shifts its focus to developing low-power, high-efficiency total processor hardening techniques. In addition to processor development, the project endeavors to demonstrate techniques applicable to reconfigurable computing and partially reconfigurable Field Programmable Gate Arrays (FPGAs). This capability gives avionic architectures the ability to develop FPGA-based, radiation-tolerant processor boards that can serve in multiple physical locations throughout the spacecraft and perform multiple functions during the course of the mission. The individual tasks that comprise AAPS are diverse, yet united in the common endeavor to develop electronics capable of operating within the harsh environment of space. Specifically, the AAPS tasks for

  18. An Alternative Water Processor for Long Duration Space Missions

    Science.gov (United States)

    Barta, Daniel J.; Pickering, Karen D.; Meyer, Caitlin; Pennsinger, Stuart; Vega, Leticia; Flynn, Michael; Jackson, Andrew; Wheeler, Raymond

    2014-01-01

    A new wastewater recovery system has been developed that combines novel biological and physicochemical components for recycling wastewater on long duration human space missions. Functionally, this Alternative Water Processor (AWP) would replace the Urine Processing Assembly on the International Space Station and reduce or eliminate the need for the multi-filtration beds of the Water Processing Assembly (WPA). At its center are two unique game-changing technologies: 1) a biological water processor (BWP) to mineralize organic forms of carbon and nitrogen and 2) an advanced membrane processor (Forward Osmosis Secondary Treatment) for removal of solids and inorganic ions. The AWP is designed for recycling larger quantities of wastewater from multiple sources expected during future exploration missions, including urine, hygiene (hand wash, shower, oral and shave) and laundry. The BWP utilizes a single-stage membrane-aerated biological reactor for simultaneous nitrification and denitrification. The Forward Osmosis Secondary Treatment (FOST) system uses a combination of forward osmosis (FO) and reverse osmosis (RO), is resistant to biofouling and can easily tolerate wastewaters high in non-volatile organics and solids associated with shower and/or hand washing. The BWP has been operated continuously for over 300 days. After startup, the mature biological system averaged 85% organic carbon removal and 44% nitrogen removal, close to the stoichiometric maximum based on available carbon. To date, the FOST has averaged 93% water recovery, with a maximum of 98%. If the wastewater is slightly acidified, ammonia rejection is optimal. This paper will provide a description of the technology and summarize results from ground-based testing using real wastewater.

  19. Modeling human comprehension of data visualizations

    Energy Technology Data Exchange (ETDEWEB)

    Matzen, Laura E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Haass, Michael Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Divis, Kristin Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wilson, Andrew T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    This project was inspired by two needs. The first is a need for tools to help scientists and engineers to design effective data visualizations for communicating information, whether to the user of a system, an analyst who must make decisions based on complex data, or in the context of a technical report or publication. Most scientists and engineers are not trained in visualization design, and they could benefit from simple metrics to assess how well their visualization's design conveys the intended message. In other words, will the most important information draw the viewer's attention? The second is the need for cognition-based metrics for evaluating new types of visualizations created by researchers in the information visualization and visual analytics communities. Evaluating visualizations is difficult even for experts. However, all visualization methods and techniques are intended to exploit the properties of the human visual system to convey information efficiently to a viewer. Thus, developing evaluation methods that are rooted in the scientific knowledge of the human visual system could be a useful approach. In this project, we conducted fundamental research on how humans make sense of abstract data visualizations, and how this process is influenced by their goals and prior experience. We then used that research to develop a new model, the Data Visualization Saliency Model, that can make accurate predictions about which features in an abstract visualization will draw a viewer's attention. The model is an evaluation tool that can address both of the needs described above, supporting both visualization research and Sandia mission needs.

  20. Virtual pharmacokinetic model of human eye.

    Science.gov (United States)

    Kotha, Sreevani; Murtomäki, Lasse

    2014-07-01

    A virtual pharmacokinetic 3D model of the human eye is built using Comsol Multiphysics® software, which is based on the Finite Element Method (FEM). The model considers drug release from a polymer patch placed on the sclera. The model concentrates on the posterior part of the eye, retina being the target tissue, and comprises the choroidal blood flow, partitioning of the drug between different tissues and active transport at the retinal pigment epithelium (RPE)-choroid boundary. To keep the mass balance straightforward to check, no protein binding or metabolism is yet included. It appeared that the most important issue in obtaining reliable simulation results is the finite element mesh, while time stepping has hardly any significance. Simulations were extended to 100,000 s. The concentration of the drug is shown as a function of time at various points of the retina, as well as its average value, while varying several parameters in the model. This work demonstrates how anybody with basic knowledge of calculus is able to build physically meaningful models of quite complex biological systems. Copyright © 2014 Elsevier Inc. All rights reserved.
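
    A much-reduced 1D stand-in conveys the transport structure of such a model: drug enters at the scleral boundary from the patch, diffuses across the posterior tissues, and is removed by a clearance sink standing in for the choroidal blood flow. The geometry, coefficients and boundary conditions below are invented, not those of the Comsol model.

```python
# Crude 1-D explicit diffusion-clearance sketch of patch-to-retina transport.
import numpy as np

n, L = 200, 5e-3                       # grid points over a 5 mm path (assumed)
dx = L / (n - 1)
D, k_clear = 1e-10, 1e-4               # diffusivity m^2/s; clearance 1/s (assumed)
dt = 0.4 * dx * dx / D                 # stable explicit time step
c = np.zeros(n)                        # relative concentration

for _ in range(200_000):               # on the order of 10^5 s, as in the paper
    c[0] = 1.0                         # patch pins the scleral boundary
    lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / (dx * dx)
    c[1:-1] += dt * (D * lap[1:-1] - k_clear * c[1:-1])
    c[-1] = c[-2]                      # zero-flux condition at the retinal end
print("relative concentration at the retina:", round(float(c[-1]), 4))
```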

  1. Dengue human infection models supporting drug development.

    Science.gov (United States)

    Whitehorn, James; Van, Vinh Chau Nguyen; Simmons, Cameron P

    2014-06-15

    Dengue is an arboviral infection that represents a major global health burden. There is an unmet need for effective dengue therapeutics to reduce symptoms, duration of illness and incidence of severe complications. Here, we consider the merits of a dengue human infection model (DHIM) for drug development. A DHIM could allow experimentally controlled studies of candidate therapeutics in preselected susceptible volunteers, potentially using smaller sample sizes than trials that recruited patients with dengue in an endemic country. In addition, the DHIM would assist the conduct of intensive pharmacokinetic and basic research investigations and aid in determining optimal drug dosage. Furthermore, a DHIM could help establish proof of concept that chemoprophylaxis against dengue is feasible. The key challenge in developing the DHIM for drug development is to ensure the model reliably replicates the typical clinical and laboratory features of naturally acquired, symptomatic dengue. © The Author 2014. Published by Oxford University Press on behalf of the Infectious Diseases Society of America.

  2. Human factors engineering program review model

    International Nuclear Information System (INIS)

    1994-07-01

    The staff of the Nuclear Regulatory Commission is performing nuclear power plant design certification reviews based on a design process plan that describes the human factors engineering (HFE) program elements that are necessary and sufficient to develop an acceptable detailed design specification and an acceptable implemented design. There are two principal reasons for this approach. First, the initial design certification applications submitted for staff review did not include detailed design information. Second, since human performance literature and industry experiences have shown that many significant human factors issues arise early in the design process, review of the design process activities and results is important to the evaluation of an overall design. However, current regulations and guidance documents do not address the criteria for design process review. Therefore, the HFE Program Review Model (HFE PRM) was developed as a basis for performing design certification reviews that include design process evaluations as well as review of the final design. A central tenet of the HFE PRM is that the HFE aspects of the plant should be developed, designed, and evaluated on the basis of a structured top-down system analysis using accepted HFE principles. The HFE PRM consists of ten component elements. Each element is divided into four sections: Background, Objective, Applicant Submittals, and Review Criteria. This report describes the development of the HFE PRM and gives a detailed description of each HFE review element

  3. A human neurodevelopmental model for Williams syndrome.

    Science.gov (United States)

    Chailangkarn, Thanathom; Trujillo, Cleber A; Freitas, Beatriz C; Hrvoj-Mihic, Branka; Herai, Roberto H; Yu, Diana X; Brown, Timothy T; Marchetto, Maria C; Bardy, Cedric; McHenry, Lauren; Stefanacci, Lisa; Järvinen, Anna; Searcy, Yvonne M; DeWitt, Michelle; Wong, Wenny; Lai, Philip; Ard, M Colin; Hanson, Kari L; Romero, Sarah; Jacobs, Bob; Dale, Anders M; Dai, Li; Korenberg, Julie R; Gage, Fred H; Bellugi, Ursula; Halgren, Eric; Semendeferi, Katerina; Muotri, Alysson R

    2016-08-18

    Williams syndrome is a genetic neurodevelopmental disorder characterized by an uncommon hypersociability and a mosaic of retained and compromised linguistic and cognitive abilities. Nearly all clinically diagnosed individuals with Williams syndrome lack precisely the same set of genes, with breakpoints in chromosome band 7q11.23 (refs 1-5). The contribution of specific genes to the neuroanatomical and functional alterations, leading to behavioural pathologies in humans, remains largely unexplored. Here we investigate neural progenitor cells and cortical neurons derived from Williams syndrome and typically developing induced pluripotent stem cells. Neural progenitor cells in Williams syndrome have an increased doubling time and apoptosis compared with typically developing neural progenitor cells. Using an individual with atypical Williams syndrome, we narrowed this cellular phenotype to a single gene candidate, frizzled 9 (FZD9). At the neuronal stage, layer V/VI cortical neurons derived from Williams syndrome were characterized by longer total dendrites, increased numbers of spines and synapses, aberrant calcium oscillation and altered network connectivity. Morphometric alterations observed in neurons from Williams syndrome were validated after Golgi staining of post-mortem layer V/VI cortical neurons. This model of human induced pluripotent stem cells fills the current knowledge gap in the cellular biology of Williams syndrome and could lead to further insights into the molecular mechanism underlying the disorder and the human social brain.

  4. Human factors engineering program review model

    Energy Technology Data Exchange (ETDEWEB)

    1994-07-01

    The staff of the Nuclear Regulatory Commission is performing nuclear power plant design certification reviews based on a design process plan that describes the human factors engineering (HFE) program elements that are necessary and sufficient to develop an acceptable detailed design specification and an acceptable implemented design. There are two principal reasons for this approach. First, the initial design certification applications submitted for staff review did not include detailed design information. Second, since human performance literature and industry experiences have shown that many significant human factors issues arise early in the design process, review of the design process activities and results is important to the evaluation of an overall design. However, current regulations and guidance documents do not address the criteria for design process review. Therefore, the HFE Program Review Model (HFE PRM) was developed as a basis for performing design certification reviews that include design process evaluations as well as review of the final design. A central tenet of the HFE PRM is that the HFE aspects of the plant should be developed, designed, and evaluated on the basis of a structured top-down system analysis using accepted HFE principles. The HFE PRM consists of ten component elements. Each element is divided into four sections: Background, Objective, Applicant Submittals, and Review Criteria. This report describes the development of the HFE PRM and gives a detailed description of each HFE review element.

  5. Cache Energy Optimization Techniques For Modern Processors

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL

    2013-01-01

    Modern multicore processors employ large last-level caches; for example, Intel's E7-8800 processor uses a 24 MB L3 cache. Further, with each CMOS technology generation leakage energy has been increasing dramatically, and it is expected to become a major source of energy dissipation, especially in last-level caches (LLCs). Conventional cache energy saving schemes either aim at saving dynamic energy or are based on properties specific to first-level caches, and thus have limited utility for last-level caches. Further, several other techniques require offline profiling or per-application tuning and hence are not suitable for production systems. In this book, we present novel cache leakage energy saving schemes for single-core and multicore systems; desktop, QoS, real-time and server systems. We also present cache energy saving techniques for caches designed with both conventional SRAM devices and emerging non-volatile devices such as STT-RAM (spin-torque transfer RAM). We present software-controlled, hardware-assisted techniques which use dynamic cache reconfiguration to configure the cache to the most energy-efficient configuration while keeping the performance loss bounded. To profile and test a large number of potential configurations, we utilize low-overhead micro-architecture components, which can be easily integrated into modern processor chips. We adopt a system-wide approach to saving energy to ensure that cache reconfiguration does not increase the energy consumption of other components of the processor. We have compared our techniques with state-of-the-art techniques and have found that ours outperform them in terms of energy efficiency and other relevant metrics. The techniques presented in this book have important applications in improving the energy efficiency of higher-end embedded, desktop, QoS, real-time and server processors and multitasking systems. This book is intended to be a valuable guide for both
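
    The software-controlled reconfiguration loop described above reduces, at its core, to a constrained selection: pick the lowest-energy profiled configuration whose performance loss stays within a bound. A schematic sketch; the profile numbers are made up.

```python
# Select the lowest-energy cache configuration within a performance bound.
def pick_config(profiles, baseline_ipc, max_perf_loss=0.03):
    feasible = [p for p in profiles
                if (baseline_ipc - p["ipc"]) / baseline_ipc <= max_perf_loss]
    return min(feasible, key=lambda p: p["energy_mj"])

profiles = [
    {"ways": 16, "ipc": 1.00, "energy_mj": 9.0},   # full cache, most leakage
    {"ways": 8,  "ipc": 0.99, "energy_mj": 6.1},   # half the ways powered
    {"ways": 4,  "ipc": 0.95, "energy_mj": 4.8},   # too slow for the bound
]
print(pick_config(profiles, baseline_ipc=1.00))    # selects the 8-way point
```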

  6. A modular approach to numerical human body modeling

    NARCIS (Netherlands)

    Forbes, P.A.; Griotto, G.; Rooij, L. van

    2007-01-01

    The choice of a human body model for a simulated automotive impact scenario must take into account both accurate model response and computational efficiency as key factors. This study presents a "modular numerical human body modeling" approach which allows the creation of a customized human body

  7. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors.

    Science.gov (United States)

    Cheung, Kit; Schultz, Simon R; Luk, Wayne

    2015-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
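
    For reference, the Izhikevich model named in the abstract is compact enough to state in full; NeuroFlow evaluates updates of this form in FPGA hardware. Plain Python with the standard regular-spiking parameters:

```python
# Izhikevich neuron: v' = 0.04v^2 + 5v + 140 - u + I, u' = a(bv - u),
# with reset v <- c, u <- u + d on spiking. dt in ms; I a constant current.
def izhikevich(I, T=1000.0, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    v, u, spikes = -65.0, -65.0 * b, []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike threshold: record and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

print(len(izhikevich(I=10.0)), "spikes in 1 s of simulated time")
```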

  8. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    Science.gov (United States)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
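
    A synthetic kernel of the kind used in such studies can be approximated even from Python: time one memory-streaming worker alone, then several contending for the shared memory subsystem. Array sizes and process counts below are arbitrary, and Python overhead blunts the effect compared with a C kernel.

```python
# Synthetic-kernel sketch of memory-subsystem contention between co-runners.
import array
import multiprocessing as mp
import time

def stream(n_words=5_000_000, reps=3):
    a = array.array("d", bytes(8 * n_words))   # ~40 MB of doubles (zeros)
    total = 0.0
    for _ in range(reps):
        total += sum(a)                        # forced walk over the array
    return total

def stream_worker(_):
    stream()

def timed_run(n_procs):
    t0 = time.perf_counter()
    with mp.Pool(n_procs) as pool:
        pool.map(stream_worker, range(n_procs))
    return time.perf_counter() - t0

if __name__ == "__main__":
    for n in (1, 2, 4):
        print(f"{n} co-running stream(s): {timed_run(n):.2f} s")
```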

  9. Multiple core computer processor with globally-accessible local memories

    Science.gov (United States)

    Shalf, John; Donofrio, David; Oliker, Leonid

    2016-09-20

    A multi-core computer processor including a plurality of processor cores interconnected in a Network-on-Chip (NoC) architecture, a plurality of caches, each of the plurality of caches being associated with one and only one of the plurality of processor cores, and a plurality of memories, each of the plurality of memories being associated with a different set of at least one of the plurality of processor cores and each of the plurality of memories being configured to be visible in a global memory address space such that the plurality of memories are visible to two or more of the plurality of processor cores.
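
    The claimed global visibility of per-core memories amounts to a partitioned global address space. A toy sketch of the address arithmetic; the memory size is an assumption:

```python
# Toy partitioned-global-address-space arithmetic: each core's local memory
# appears at a distinct base in one global address space.
LOCAL_MEM_BYTES = 64 * 1024                  # per-core memory (assumption)

def to_global(core_id: int, local_offset: int) -> int:
    assert 0 <= local_offset < LOCAL_MEM_BYTES
    return core_id * LOCAL_MEM_BYTES + local_offset

def to_local(global_addr: int) -> tuple:
    # -> (owning core, offset); the NoC would route a remote access there
    return divmod(global_addr, LOCAL_MEM_BYTES)

g = to_global(core_id=5, local_offset=0x100)
print(hex(g), to_local(g))
```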

  10. Design of Processors with Reconfigurable Microarchitecture

    Directory of Open Access Journals (Sweden)

    Andrey Mokhov

    2014-01-01

    Energy is becoming a dominating factor for a wide spectrum of computations: from intensive data processing in "big data" companies, resulting in large electricity bills, to infrastructure monitoring with wireless sensors relying on energy harvesting. In this context it is essential for a computation system to be adaptable to the power supply and the service demand, which often vary dramatically during runtime. In this paper we present an approach to building processors with reconfigurable microarchitecture capable of changing the way they fetch and execute instructions depending on energy availability and application requirements. We show how to use Conditional Partial Order Graphs to formally specify the microarchitecture of such a processor, explore the design possibilities for its instruction set, and synthesise the instruction decoder using correct-by-construction techniques. The paper is focused on the design methodology, which is evaluated by implementing a power-proportional version of the Intel 8051 microprocessor.

  11. Parallel processor programs in the Federal Government

    Science.gov (United States)

    Schneck, P. B.; Austin, D.; Squires, S. L.; Lehmann, J.; Mizell, D.; Wallgren, K.

    1985-01-01

    In 1982, a report dealing with the nation's research needs in high-speed computing called for increased access to supercomputing resources for the research community, research in computational mathematics, and increased research in the technology base needed for the next generation of supercomputers. Since that time a number of programs addressing future generations of computers, particularly parallel processors, have been started by U.S. government agencies. The present paper provides a description of the largest government programs in parallel processing. Established in fiscal year 1985 by the Institute for Defense Analyses for the National Security Agency, the Supercomputing Research Center will pursue research to advance the state of the art in supercomputing. Attention is also given to the DOE applied mathematical sciences research program, the NYU Ultracomputer project, the DARPA multiprocessor system architectures program, NSF research on multiprocessor systems, ONR activities in parallel computing, and NASA parallel processor projects.

  12. Network to transmit prioritized subtask packets to dedicated processors

    Energy Technology Data Exchange (ETDEWEB)

    Neches, P.M.

    1989-03-21

    A multiprocessor system is described that distributes a workload among individual processors and operates with low usage of executive software and inter-processor communication, providing an overall workload processing function divisible into parallel processing subtasks. It comprises: at least one processor system providing tasks for processing in the form of task messages; interface processor means coupled to receive the task messages from the processor system, including means to transform the task messages into subtask request packets carrying information as to one or more appropriate recipients; processor modules, each having assigned responsibilities with respect to the workload and each including means to determine whether a subtask is appropriate for it, means for executing an appropriate subtask, and means for providing a responsive task result packet after executing the subtask, the task result packet competing for priority with task result packets from at least one other processor module and with the subtask request packets from the interface processor means; and means coupling the interface processor means to the processor modules and the processor modules to each other, including means for concurrently receiving the packets, determining priority between contending packets, and distributing each packet having priority concurrently to all processor modules.
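
    In the spirit of the claim above, contention can be modeled as all offered packets being compared concurrently, with the single highest-priority packet distributed to every processor module. The toy sketch below illustrates this; the field layout and priority encoding are invented for illustration and are not taken from the patent.

```python
# Toy model of priority contention among packets: all contending packets
# are compared concurrently and the winner is distributed to every
# processor module. Field names and the priority encoding are invented.
from dataclasses import dataclass, field

@dataclass(order=True)
class Packet:
    priority: int                      # lower value wins contention
    payload: str = field(compare=False)

def arbitrate(contenders):
    """Return the winning packet among all concurrently offered packets."""
    return min(contenders)

def broadcast(packet, modules):
    """Deliver the winning packet to all processor modules concurrently."""
    for m in modules:
        m.append(packet)

contenders = [Packet(3, "task result"), Packet(1, "subtask request"),
              Packet(2, "task result")]
modules = [[] for _ in range(4)]       # four processor modules' input queues
broadcast(arbitrate(contenders), modules)
print(modules[0][0].payload)           # -> 'subtask request'
```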

  13. Testing and operating a multiprocessor chip with processor redundancy

    Science.gov (United States)

    Bellofatto, Ralph E; Douskey, Steven M; Haring, Rudolf A; McManus, Moyra K; Ohmacht, Martin; Schmunkamp, Dietmar; Sugavanam, Krishnan; Weatherford, Bryan J

    2014-10-21

    A system and method for improving the yield rate of a multiprocessor semiconductor chip that includes primary processor cores and one or more redundant processor cores. A first tester conducts a first test on one or more processor cores, and encodes results of the first test in an on-chip non-volatile memory. A second tester conducts a second test on the processor cores, and encodes results of the second test in an external non-volatile storage device. An override bit of a multiplexer is set if a processor core fails the second test. In response to the override bit, the multiplexer selects a physical-to-logical mapping of processor IDs according to one of: the encoded results in the memory device or the encoded results in the external storage device. On-chip logic configures the processor cores according to the selected physical-to-logical mapping.
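
    A minimal software sketch of the selection logic described above follows. The data layout, helper names, and core counts are invented for illustration; the patent itself describes hardware multiplexing, not code.

```python
# Sketch of the mapping selection described above: an override bit chooses
# whether the physical-to-logical processor-ID mapping comes from the
# on-chip memory (first test) or the external storage (second test).
# Data layout, helper names, and core counts are invented for illustration.

def build_mapping(good_cores, n_logical):
    """Map logical processor IDs onto physical cores that passed the test."""
    usable = [phys for phys, ok in enumerate(good_cores) if ok]
    if len(usable) < n_logical:
        raise RuntimeError("not enough working cores, even with redundancy")
    return {logical: usable[logical] for logical in range(n_logical)}

# Test results for 18 physical cores (17 needed + 1 redundant spare).
onchip_results   = [True] * 18                 # first tester: all passed
external_results = [True] * 18
external_results[5] = False                    # second tester: core 5 failed

override_bit = not all(external_results)       # set when second test fails
results = external_results if override_bit else onchip_results
mapping = build_mapping(results, n_logical=17)
print(mapping[5])  # logical core 5 now lives on physical core 6
```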

  14. Multi-processor network implementations in Multibus II and VME

    International Nuclear Information System (INIS)

    Briegel, C.

    1992-01-01

    ACNET (Fermilab Accelerator Controls Network), a proprietary network protocol, is implemented in a multi-processor configuration for both Multibus II and VME. The implementations are contrasted by the bus protocol and software design goals. The Multibus II implementation provides for multiple processors running a duplicate set of tasks on each processor. For a network connected task, messages are distributed by a network round-robin scheduler. Further, messages can be stopped, continued, or re-routed for each task by user-callable commands. The VME implementation provides for multiple processors running one task across all processors. The process can either be fixed to a particular processor or dynamically allocated to an available processor depending on the scheduling algorithm of the multi-processing operating system. (author)

  15. Debugging in a multi-processor environment

    International Nuclear Information System (INIS)

    Spann, J.M.

    1981-01-01

    The Supervisory Control and Diagnostic System (SCDS) for the Mirror Fusion Test Facility (MFTF) consists of nine 32-bit minicomputers arranged in a tightly coupled distributed computer system utilizing a shared memory as the data exchange medium. Debugging more than one program in this multi-processor environment is a difficult process. This paper describes the new tools that were developed and how software testing is performed in the SCDS for the MFTF project

  16. CoNNeCT Baseband Processor Module

    Science.gov (United States)

    Yamamoto, Clifford K; Jedrey, Thomas C.; Gutrich, Daniel G.; Goodpasture, Richard L.

    2011-01-01

    A document describes the CoNNeCT Baseband Processor Module (BPM) based on an updated processor, memory technology, and field-programmable gate arrays (FPGAs). The BPM was developed from a requirement to provide sufficient computing power and memory storage to conduct experiments for a Software Defined Radio (SDR) to be implemented. The flight SDR uses the AT697 SPARC processor with on-chip data and instruction cache. The non-volatile memory has been increased from a 20-Mbit EEPROM (electrically erasable programmable read-only memory) to a 4-Gbit Flash, managed by the RTAX2000 Housekeeper, allowing more programs and FPGA bit-files to be stored. The volatile memory has been increased from a 20-Mbit SRAM (static random access memory) to a 1.25-Gbit SDRAM (synchronous dynamic random access memory), providing additional memory space for more complex operating systems and programs to be executed on the SPARC. All memory is EDAC (error detection and correction) protected, while the SPARC processor implements fault protection via a TMR (triple modular redundancy) architecture. Further capability over prior BPM designs includes the addition of a second FPGA to implement features beyond the resources of a single FPGA. Both FPGAs are implemented with Xilinx Virtex-II parts and are interconnected by a 96-bit bus to facilitate data exchange. Dedicated 1.25-Gbit SDRAMs are wired to each Xilinx FPGA to accommodate high-rate data buffering for SDR applications as well as independent SpaceWire interfaces. The RTAX2000 manages scrub and configuration of each Xilinx FPGA.

  17. Simplifying cochlear implant speech processor fitting

    OpenAIRE

    Willeboer, C.

    2008-01-01

    Conventional fittings of the speech processor of a cochlear implant (CI) rely to a large extent on the implant recipient's subjective responses. For each of the 22 intracochlear electrodes the recipient has to indicate the threshold level (T-level) and comfortable loudness level (C-level) while stimulated with pulse trains. Obtaining these behavioral measurements is a time-consuming task. It requires cooperation and considerable effort of the CI recipient. Especially in adults that have been ...

  18. Monitoring and modeling human interactions with ecosystems

    Science.gov (United States)

    Milesi, Cristina

    With rapidly increasing consumption rates and global population, there is a growing interest in understanding how to balance human activities with the other components of the Earth system. Humans alter ecosystem functioning with land cover changes, greenhouse gas emissions and overexploitation of natural resources. On the other hand, climate and its inherent interannual variability drive global Net Primary Productivity (NPP), the base of energy for all trophic levels, shaping humans' distribution on the land surface and their sensitivity to natural and accelerated patterns of variation in ecosystem processes. In this thesis, I analyzed anthropogenic influences on ecosystems and ecosystem impacts on humans through a multi-scale approach. Anthropogenic influences were analyzed with a special focus on urban ecosystems, the living environment of nearly half of the global population and almost 90% of the population in the industrialized countries. A poorly quantified aspect of urban ecosystems is the biogeochemistry of urban vegetation, intensively managed through fertilization and irrigation. In chapter 1, adapting the ecosystem model Biome-BGC, I simulated the growth of turf grasses across the United States, and estimated their potential impact on the continental water and carbon budget. Using a remote sensing-based approach, I also developed a methodology to estimate the impact of land cover changes due to urbanization on the regional photosynthetic capacity (chapter 2), finding that low-density urbanization can retain high levels of net primary productivity, although at the expense of inefficient sprawl. One of the feedbacks of urbanization is the urban heat island effect, which I analyzed in conjunction with a remote sensing-based estimate of fractional impervious surface area, showing how this is related to increases in land surface temperatures, independently of geographic location and population density (chapter 3). Finally, in chapter 4, I described the

  19. Intelligent trigger processor for the crystal box

    CERN Document Server

    Sanders, G H; Cooper, M D; Hart, G W; Hoffman, C M; Hogan, G E; Hughes, E B; Matis, H S; Rolfe, J; Sandberg, V D; Williams, R A; Wilson, S; Zeman, H

    1981-01-01

    A large solid-angle modular NaI(Tl) detector with 432 phototubes and 88 trigger scintillators is being used to search simultaneously for three lepton flavor-changing decays of the muon. A beam of up to 10^6 muons stopping per second with a 6% duty factor would yield up to 1000 triggers per second from random triple coincidences. A reduction of the trigger rate to 10 Hz is required from a hardwired primary trigger processor. Further reduction to <1 Hz is achieved by a microprocessor-based secondary trigger processor. The primary trigger hardware imposes voter coincidence logic, stringent timing requirements, and a non-adjacency requirement in the trigger scintillators defined by hardwired circuits. Sophisticated geometric requirements are imposed by PROM-based matrix logic, and energy and vector-momentum cuts are imposed by a hardwired processor using LSI flash ADCs and digital arithmetic logic. The secondary trigger employs four satellite microprocessors to do a sparse data scan, multiplex ...

  20. Modeling and remodeling of human extraction sockets.

    Science.gov (United States)

    Trombelli, Leonardo; Farina, Roberto; Marzola, Andrea; Bozzi, Leopoldo; Liljenberg, Birgitta; Lindhe, Jan

    2008-07-01

    The available studies on extraction wound repair in humans are affected by significant limitations and have failed to evaluate tissue alterations occurring in all compartments of the hard tissue defect. The aim was to monitor, over a 6-month period, the healing of human extraction sockets, including a semi-quantitative analysis of the tissues and cell populations involved in the various stages of modeling/remodeling. Twenty-seven biopsies, representative of the early (2-4 weeks, n=10), intermediate (6-8 weeks, n=6), and late (12-24 weeks, n=11) phases of healing, were collected and analysed. Granulation tissue, present in comparatively large amounts in the early phase of socket healing, was replaced with provisional matrix and woven bone in the interval between the early and intermediate observation phases. The density of vascular structures and macrophages slowly decreased over time from 2-4 weeks onward. The presence of osteoblasts peaked at 6-8 weeks and remained almost stable thereafter; a small number of osteoclasts were present in a few specimens at each observation interval. The present findings demonstrate that great variability exists in man with respect to hard tissue formation within extraction sockets. Thus, whereas a provisional connective tissue consistently forms within the first weeks of healing, the interval during which mineralized bone is laid down is much less predictable.

  1. Molecular Modeling of Prion Transmission to Humans

    Directory of Open Access Journals (Sweden)

    Etienne Levavasseur

    2014-10-01

    Full Text Available Using different prion strains, such as the variant Creutzfeldt-Jakob disease agent and the atypical bovine spongiform encephalopathy agents, and using transgenic mice expressing human or bovine prion protein, we assessed the reliability of protein misfolding cyclic amplification (PMCA to model interspecies and genetic barriers to prion transmission. We compared our PMCA results with in vivo transmission data characterized by attack rates, i.e., the percentage of inoculated mice that developed the disease. Using 19 seed/substrate combinations, we observed that a significant PMCA amplification was only obtained when the mouse line used as substrate is susceptible to the corresponding strain. Our results suggest that PMCA provides a useful tool to study genetic barriers to transmission and to study the zoonotic potential of emerging prion strains.

  2. Power estimation on functional level for programmable processors

    Directory of Open Access Journals (Sweden)

    M. Schneider

    2004-01-01

    Full Text Available In this contribution, different approaches to power estimation for programmable processors are presented and evaluated with respect to their applicability to modern processor architectures such as Very Long Instruction Word (VLIW) architectures. Special emphasis is placed on the concept of so-called Functional-Level Power Analysis (FLPA). This approach is based on partitioning the processor architecture into functional blocks such as the processing unit, the clock network, internal memory, and others. The power consumption of these blocks is described by parameter-dependent arithmetic model functions. The input parameters, such as the achieved degree of parallelism or the type of memory access, are obtained by automated, parser-based analysis of the assembly code of the system to be estimated. The approach is evaluated on two modern digital signal processors using a large set of basic digital signal processing algorithms; the estimated values for the individual algorithms are compared with physically measured values, yielding a very small maximum estimation error of 3%.
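
    In the FLPA style described above, total power is a sum of per-block model functions of parameters extracted from the assembly code. The sketch below shows the shape of such a model; the functional blocks follow the abstract, but all coefficients are invented stand-ins for values that would be fitted to physical measurements.

```python
# Minimal sketch of a Functional-Level Power Analysis (FLPA) model: total
# power is a sum of per-block model functions of parameters extracted from
# the assembly code (degree of parallelism, memory-access mix, clock rate).
# All coefficients are invented; real ones are fitted to measurements.

def p_processing(alpha, f_mhz):
    """Processing-unit power grows with the achieved parallelism alpha."""
    return (0.30 + 0.90 * alpha) * f_mhz          # mW

def p_memory(ext_rate, f_mhz):
    """Memory power, driven by the share of external (off-chip) accesses."""
    return (0.20 + 1.20 * ext_rate) * f_mhz       # mW

def p_clock(f_mhz):
    """Clock-network power, roughly linear in clock frequency."""
    return 0.50 * f_mhz                           # mW

def flpa_power(alpha, ext_rate, f_mhz):
    """Total power as the sum of the per-block model functions."""
    return (p_processing(alpha, f_mhz)
            + p_memory(ext_rate, f_mhz)
            + p_clock(f_mhz))

# An FIR-filter kernel with 0.75 average parallelism, 10% external accesses:
print(f"estimated power: {flpa_power(0.75, 0.10, 200.0):.0f} mW")
```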

  3. Merged ozone profiles from four MIPAS processors

    Science.gov (United States)

    Laeng, Alexandra; von Clarmann, Thomas; Stiller, Gabriele; Dinelli, Bianca Maria; Dudhia, Anu; Raspollini, Piera; Glatthor, Norbert; Grabowski, Udo; Sofieva, Viktoria; Froidevaux, Lucien; Walker, Kaley A.; Zehner, Claus

    2017-04-01

    The Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) was an infrared (IR) limb emission spectrometer on the Envisat platform. Currently, there are four MIPAS ozone data products, including the operational Level-2 ozone product processed at ESA, with the scientific prototype processor being operated at IFAC Florence, and three independent research products developed by the Istituto di Fisica Applicata Nello Carrara (ISAC-CNR)/University of Bologna, Oxford University, and the Karlsruhe Institute of Technology-Institute of Meteorology and Climate Research/Instituto de Astrofísica de Andalucía (KIT-IMK/IAA). Here we present a dataset of ozone vertical profiles obtained by merging ozone retrievals from four independent Level-2 MIPAS processors. We also discuss the advantages and the shortcomings of this merged product. As the four processors retrieve ozone in different parts of the spectra (microwindows), the source measurements can be considered as nearly independent with respect to measurement noise. Hence, the information content of the merged product is greater and the precision is better than those of any parent (source) dataset. The merging is performed on a profile per profile basis. Parent ozone profiles are weighted based on the corresponding error covariance matrices; the error correlations between different profile levels are taken into account. The intercorrelations between the processors' errors are evaluated statistically and are used in the merging. The height range of the merged product is 20-55 km, and error covariance matrices are provided as diagnostics. Validation of the merged dataset is performed by comparison with ozone profiles from ACE-FTS (Atmospheric Chemistry Experiment-Fourier Transform Spectrometer) and MLS (Microwave Limb Sounder). Even though the merging is not supposed to remove the biases of the parent datasets, around the ozone volume mixing ratio peak the merged product is found to have a smaller (up to 0.1 ppmv
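
    In its simplest noise-only form, profile-by-profile merging with error-covariance weighting is a generalized least-squares average. The sketch below shows that computation on toy data; for clarity it ignores the inter-processor error correlations that the actual merged product takes into account.

```python
# Sketch of covariance-weighted merging of ozone profiles, profile by
# profile. For clarity this ignores the inter-processor error correlations
# that the merged MIPAS product does take into account.
import numpy as np

def merge_profiles(profiles, covariances):
    """Generalized least-squares average of K profiles of length n.

    profiles:    list of K arrays, shape (n,)
    covariances: list of K error covariance matrices, shape (n, n)
    Returns the merged profile and its error covariance.
    """
    precisions = [np.linalg.inv(S) for S in covariances]
    S_merged = np.linalg.inv(sum(precisions))
    x_merged = S_merged @ sum(P @ x for P, x in zip(precisions, profiles))
    return x_merged, S_merged

rng = np.random.default_rng(0)
truth = np.linspace(2.0, 8.0, 5)                  # toy ozone profile, ppmv
profs, covs = [], []
for _ in range(4):                                # four "processors"
    S = np.diag(rng.uniform(0.05, 0.2, 5) ** 2)
    profs.append(rng.multivariate_normal(truth, S))
    covs.append(S)
merged, S_m = merge_profiles(profs, covs)
print(np.round(merged, 2), np.round(np.sqrt(np.diag(S_m)), 3))
```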

  4. Hardware processor for tracking particles in an alternating-gradient synchrotron

    International Nuclear Information System (INIS)

    Johnson, M.; Avilez, C.

    1987-01-01

    We discuss the design and performance of special-purpose processors for tracking particles through an alternating-gradient synchrotron. We present block diagram designs for two hardware processors. Both processors use algorithms based on the 'kick' approximation, i.e., transport matrices are used for dipoles and quadrupoles, and the thin-lens approximation is used for all higher multipoles. The faster processor makes extensive use of memory look-up tables for evaluating functions. For the case of magnets with multipoles up to pole 30 and using one kick per magnet, this processor can track 19 particles through an accelerator at a rate that is only 220 times slower than the time it takes real particles to travel around the machine. For a model consisting of only thin lenses, it is only 150 times slower than real particles. An additional factor of 2 can be obtained with chips now becoming available. The number of magnets in the accelerator is limited only by the amount of memory available for storing magnet parameters. (author) 20 refs., 7 figs., 2 tabs
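
    The kick approximation described above is straightforward to sketch in one transverse plane: linear transport matrices carry the particle between elements, and each higher multipole is applied as a thin-lens kick. The lattice and strengths below are illustrative, not taken from the paper.

```python
# Sketch of the 'kick' approximation in one transverse plane: linear
# transport matrices between elements, plus a thin-lens kick for a
# nonlinear multipole (here a sextupole). Lattice values are illustrative.
import numpy as np

def drift(L):
    """2x2 transport matrix for a field-free drift of length L."""
    return np.array([[1.0, L], [0.0, 1.0]])

def quad(k1, L):
    """Thin-lens quadrupole of integrated strength k1*L (focusing k1>0)."""
    return np.array([[1.0, 0.0], [-k1 * L, 1.0]])

def sextupole_kick(x, xp, k2L):
    """Thin-lens sextupole: kick angle proportional to x**2."""
    return x, xp - 0.5 * k2L * x ** 2

# One toy cell: drift, thin quad, drift, sextupole kick.
x, xp = 1e-3, 0.0                       # initial offset 1 mm
for _ in range(100):                    # 100 turns around the toy machine
    x, xp = drift(2.0) @ (x, xp)
    x, xp = quad(0.5, 0.3) @ (x, xp)
    x, xp = drift(2.0) @ (x, xp)
    x, xp = sextupole_kick(x, xp, k2L=10.0)
print(f"after 100 turns: x = {x:.3e} m, x' = {xp:.3e}")
```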

  5. Spiking neural circuits with dendritic stimulus processors : encoding, decoding, and identification in reproducing kernel Hilbert spaces.

    Science.gov (United States)

    Lazar, Aurel A; Slutskiy, Yevgeniy B

    2015-02-01

    We present a multi-input multi-output neural circuit architecture for nonlinear processing and encoding of stimuli in the spike domain. In this architecture a bank of dendritic stimulus processors implements nonlinear transformations of multiple temporal or spatio-temporal signals such as spike trains or auditory and visual stimuli in the analog domain. Dendritic stimulus processors may act on both individual stimuli and on groups of stimuli, thereby executing complex computations that arise as a result of interactions between concurrently received signals. The results of the analog-domain computations are then encoded into a multi-dimensional spike train by a population of spiking neurons modeled as nonlinear dynamical systems. We investigate general conditions under which such circuits faithfully represent stimuli and demonstrate algorithms for (i) stimulus recovery, or decoding, and (ii) identification of dendritic stimulus processors from the observed spikes. Taken together, our results demonstrate a fundamental duality between the identification of the dendritic stimulus processor of a single neuron and the decoding of stimuli encoded by a population of neurons with a bank of dendritic stimulus processors. This duality result enabled us to derive lower bounds on the number of experiments to be performed and the total number of spikes that need to be recorded for identifying a neural circuit.

  6. Heterogeneous Community-based mobility model for human opportunistic network

    DEFF Research Database (Denmark)

    Hu, Liang; Dittmann, Lars

    2009-01-01

    Human opportunistic networks can facilitate wireless content dissemination while humans are on the move. In such a network, content is disseminated via node relaying and node mobility (human mobility). Thus it is essential to understand and model real human mobility. We present a heterogeneous community-based random way-point (HC-RWP) mobility model that captures four important properties of real human mobility. These properties are based on both intuitive observations of daily human mobility and analysis of empirical mobility traces. By discrete event simulation, we show that HC-RWP captures the essential statistical features of a wide range of real human mobility traces reported in previous studies.
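
    The abstract does not spell out the HC-RWP details, but the random way-point core it builds on is simple to sketch. The example below simulates a plain RWP walker inside one community area; the heterogeneous node classes and multiple communities of HC-RWP are omitted, and all parameter ranges are illustrative.

```python
# Sketch of the random way-point (RWP) core underlying models like HC-RWP:
# a node repeatedly picks a waypoint inside its community area, moves to it
# at a random speed, then pauses. Community structure and heterogeneous
# node classes (the 'HC' part) are omitted for brevity.
import random

def rwp_trace(area=(1000.0, 1000.0), v=(0.5, 1.5), pause=(0.0, 60.0),
              steps=5, seed=1):
    rng = random.Random(seed)
    x, y, t = rng.uniform(0, area[0]), rng.uniform(0, area[1]), 0.0
    trace = [(t, x, y)]
    for _ in range(steps):
        nx, ny = rng.uniform(0, area[0]), rng.uniform(0, area[1])
        speed = rng.uniform(*v)
        t += ((nx - x) ** 2 + (ny - y) ** 2) ** 0.5 / speed  # travel time
        x, y = nx, ny
        trace.append((t, x, y))
        t += rng.uniform(*pause)                              # pause time
    return trace

for t, x, y in rwp_trace():
    print(f"t={t:7.1f}s  pos=({x:6.1f}, {y:6.1f})")
```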

  7. High Fidelity, Numerical Investigation of Cross Talk in a Multi-Qubit Xmon Processor

    Science.gov (United States)

    Najafi-Yazdi, Alireza; Kelly, Julian; Martinis, John

    Unwanted electromagnetic interference between qubits, transmission lines, flux lines and other elements of a superconducting quantum processor poses a challenge in engineering such devices. This problem is exacerbated with scaling up the number of qubits. High fidelity, massively parallel computational toolkits, which can simulate the 3D electromagnetic environment and all features of the device, are instrumental in addressing this challenge. In this work, we numerically investigated the crosstalk between various elements of a multi-qubit quantum processor designed and tested by the Google team. The processor consists of 6 superconducting Xmon qubits with flux lines and gatelines. The device also consists of a Purcell filter for readout. The simulations are carried out with a high fidelity, massively parallel EM solver. We will present our findings regarding the sources of crosstalk in the device, as well as numerical model setup, and a comparison with available experimental data.

  8. Dengue human infection model performance parameters.

    Science.gov (United States)

    Endy, Timothy P

    2014-06-15

    Dengue is a global health problem and of concern to travelers and deploying military personnel, with development and licensure of an effective tetravalent dengue vaccine a public health priority. The dengue viruses (DENVs) are mosquito-borne flaviviruses transmitted by infected Aedes mosquitoes. Illness manifests across a clinical spectrum with severe disease characterized by intravascular volume depletion and hemorrhage. DENV illness results from a complex interaction of viral properties and host immune responses. Dengue vaccine development efforts are challenged by immunologic complexity, lack of an adequate animal model of disease, absence of an immune correlate of protection, and only partially informative immunogenicity assays. A dengue human infection model (DHIM) will be an essential tool in developing potential dengue vaccines or antivirals. The potential performance parameters needed for a DHIM to support vaccine or antiviral candidates are discussed.

  9. Evaluating Models of Human Performance: Safety-Critical Systems Applications

    Science.gov (United States)

    Feary, Michael S.

    2012-01-01

    This presentation is part of a panel discussion on Evaluating Models of Human Performance. The purpose of the panel is to discuss the increasing use of models in the world today and specifically to focus on how to describe and evaluate models of human performance. My presentation focuses on generating distributions of performance and on evaluating different strategies for humans performing tasks with mixed-initiative (human-automation) systems. I will also discuss how to provide human performance modeling data to support decisions on acceptability and tradeoffs in the design of safety-critical systems. I will conclude with challenges for the future.

  10. Integrated Human Futures Modeling in Egypt

    Energy Technology Data Exchange (ETDEWEB)

    Passell, Howard D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Aamir, Munaf Syed [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bernard, Michael Lewis [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Beyeler, Walter E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Fellner, Karen Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hayden, Nancy Kay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jeffers, Robert Fredric [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Keller, Elizabeth James Kistin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Malczynski, Leonard A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mitchell, Michael David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silver, Emily [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tidwell, Vincent C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Villa, Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vugrin, Eric D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Engelke, Peter [Atlantic Council, Washington, D.C. (United States); Burrow, Mat [Atlantic Council, Washington, D.C. (United States); Keith, Bruce [United States Military Academy, West Point, NY (United States)

    2016-01-01

    The Integrated Human Futures Project provides a set of analytical and quantitative modeling and simulation tools that help explore the links among human social, economic, and ecological conditions, human resilience, conflict, and peace, and allows users to simulate tradeoffs and consequences associated with different future development and mitigation scenarios. In the current study, we integrate five distinct modeling platforms to simulate the potential risk of social unrest in Egypt resulting from the Grand Ethiopian Renaissance Dam (GERD) on the Blue Nile in Ethiopia. The five platforms simulate hydrology, agriculture, economy, human ecology, and human psychology/behavior, and show how impacts derived from development initiatives in one sector (e.g., hydrology) might ripple through to affect other sectors and how development and security concerns may be triggered across the region. This approach evaluates potential consequences, intended and unintended, associated with strategic policy actions that span the development-security nexus at the national, regional, and international levels. Model results are not intended to provide explicit predictions, but rather to provide system-level insight for policy makers into the dynamics among these interacting sectors, and to demonstrate an approach to evaluating short- and long-term policy trade-offs across different policy domains and stakeholders. The GERD project is critical to government-planned development efforts in Ethiopia but is expected to reduce downstream freshwater availability in the Nile Basin, fueling fears of negative social and economic impacts that could threaten stability and security in Egypt. We tested these hypotheses and came to the following preliminary conclusions. First, the GERD will have an important short-term impact on water availability, food production, and hydropower production in Egypt, depending on the short- term reservoir fill rate. Second, the GERD will have a very small impact on

  11. Emulating Many-Body Localization with a Superconducting Quantum Processor.

    Science.gov (United States)

    Xu, Kai; Chen, Jin-Jun; Zeng, Yu; Zhang, Yu-Ran; Song, Chao; Liu, Wuxin; Guo, Qiujiang; Zhang, Pengfei; Xu, Da; Deng, Hui; Huang, Keqiang; Wang, H; Zhu, Xiaobo; Zheng, Dongning; Fan, Heng

    2018-02-02

    The laws of statistical physics dictate that generic closed quantum many-body systems initialized in nonequilibrium will thermalize under their own dynamics. However, the emergence of many-body localization (MBL), owing to the interplay between interaction and disorder, greatly challenges this concept, because it prevents the systems from evolving to the ergodic thermalized state; this stands in stark contrast to Anderson localization, which addresses only noninteracting particles in the presence of disorder. One critical piece of evidence for MBL is the long-time logarithmic growth of entanglement entropy, and a direct observation of it has remained elusive due to the experimental challenges in multiqubit single-shot measurement and quantum state tomography. Here we present an experiment fully emulating the MBL dynamics with a 10-qubit superconducting quantum processor, which represents a spin-1/2 XY model featuring programmable disorder and long-range spin-spin interactions. We provide essential signatures of MBL, such as the imbalance due to the initial nonequilibrium, the violation of the eigenstate thermalization hypothesis, and, more importantly, direct evidence of the long-time logarithmic growth of entanglement entropy. Our results lay solid foundations for precisely simulating the intriguing physics of quantum many-body systems on the platform of large-scale multiqubit superconducting quantum processors.

  12. Silicon quantum processor with robust long-distance qubit couplings.

    Science.gov (United States)

    Tosi, Guilherme; Mohiyaddin, Fahd A; Schmitt, Vivien; Tenberg, Stefanie; Rahman, Rajib; Klimeck, Gerhard; Morello, Andrea

    2017-09-06

    Practical quantum computers require a large network of highly coherent qubits, interconnected in a design robust against errors. Donor spins in silicon provide state-of-the-art coherence and quantum gate fidelities, in a platform adapted from industrial semiconductor processing. Here we present a scalable design for a silicon quantum processor that does not require precise donor placement and leaves ample space for the routing of interconnects and readout devices. We introduce the flip-flop qubit, a combination of the electron-nuclear spin states of a phosphorus donor that can be controlled by microwave electric fields. Two-qubit gates exploit a second-order electric dipole-dipole interaction, allowing selective coupling beyond the nearest neighbor, at separations of hundreds of nanometers, while microwave resonators can extend the entanglement to macroscopic distances. We predict gate fidelities within fault-tolerance thresholds using realistic noise models. This design provides a realizable blueprint for scalable spin-based quantum computers in silicon. Quantum computers will require a large network of coherent qubits, connected in a noise-resilient way. Tosi et al. present a design for a quantum processor based on electron-nuclear spins in silicon, with electrical control and coupling schemes that simplify qubit fabrication and operation.

  13. Pellet culture model for human primary osteoblasts.

    Science.gov (United States)

    Jähn, K; Richards, R G; Archer, C W; Stoddart, M J

    2010-09-06

    In vitro monolayer culture of human primary osteoblasts (hOBs) often shows unsatisfactory results for extracellular matrix deposition, maturation and calcification. Nevertheless, monolayer culture is still the method of choice for in vitro differentiation of primary osteoblasts. We believe that the delay in mature ECM production by monolayer-cultured osteoblasts is determined by their state of cell maturation. A functional relationship between the inhibition of osteoblast proliferation and the induction of genes associated with matrix maturation was suggested within a monolayer culture model for rat calvarial osteoblasts. We hypothesize that a pellet culture model could be utilized to decrease initial proliferation and increase the transformation of osteoblasts into a more mature phenotype. We performed pellet cultures using hOBs and compared their differentiation potential to 2D monolayer cultures. Using the pellet culture model, we were able to generate a population of cuboidal-shaped central osteoblastic cells. Increased proliferation, as seen during low-density monolayer culture, was absent in pellet cultures and monolayers seeded at 40,000 cells/cm2. Moreover, the expression pattern of the phenotypic markers Runx2, osterix, osteocalcin, col I and E11 mRNA was significantly different depending on whether the cells were cultured in low-density monolayer, high-density monolayer or pellet culture. We conclude that the transformation of the osteoblast phenotype in vitro to a more mature stage can be achieved more rapidly in 3D culture. Moreover, dense monolayer culture leads to the formation of more mature osteoblasts than low-density seeded monolayer culture, while hOB cells in pellets seem to have transformed even further along the osteoblast phenotype.

  14. Pellet culture model for human primary osteoblasts

    Directory of Open Access Journals (Sweden)

    K Jähn

    2010-09-01

    Full Text Available In vitro monolayer culture of human primary osteoblasts (hOBs) often shows unsatisfactory results for extracellular matrix deposition, maturation and calcification. Nevertheless, monolayer culture is still the method of choice for in vitro differentiation of primary osteoblasts. We believe that the delay in mature ECM production by monolayer-cultured osteoblasts is determined by their state of cell maturation. A functional relationship between the inhibition of osteoblast proliferation and the induction of genes associated with matrix maturation was suggested within a monolayer culture model for rat calvarial osteoblasts. We hypothesize that a pellet culture model could be utilized to decrease initial proliferation and increase the transformation of osteoblasts into a more mature phenotype. We performed pellet cultures using hOBs and compared their differentiation potential to 2D monolayer cultures. Using the pellet culture model, we were able to generate a population of cuboidal-shaped central osteoblastic cells. Increased proliferation, as seen during low-density monolayer culture, was absent in pellet cultures and monolayers seeded at 40,000 cells/cm2. Moreover, the expression pattern of the phenotypic markers Runx2, osterix, osteocalcin, col I and E11 mRNA was significantly different depending on whether the cells were cultured in low-density monolayer, high-density monolayer or pellet culture. We conclude that the transformation of the osteoblast phenotype in vitro to a more mature stage can be achieved more rapidly in 3D culture. Moreover, dense monolayer culture leads to the formation of more mature osteoblasts than low-density seeded monolayer culture, while hOB cells in pellets seem to have transformed even further along the osteoblast phenotype.

  15. Modcomp MAX IV System Processors reference guide

    Energy Technology Data Exchange (ETDEWEB)

    Cummings, J.

    1990-10-01

    A user almost always faces a big problem when having to learn to use a new computer system. The information necessary to use the system is often scattered throughout many different manuals, and the user also faces the problem of extracting the information really needed from each manual. Very few computer vendors supply a single users guide or even a manual to help the new user locate the necessary manuals. Modcomp is no exception to this; Modcomp MAX IV requires that the user be familiar with the system file usage, which adds to the problem. At General Atomics there is an ever increasing need for new users to learn how to use the Modcomp computers. This paper was written to provide a condensed "Users Reference Guide" for Modcomp computer users. This manual should be of value not only to new users but to any users that are not Modcomp computer systems experts. This "Users Reference Guide" is intended to provide the basic information for the use of the various Modcomp System Processors necessary to create, compile, link-edit, and catalog a program. Only the information necessary to provide the user with a basic understanding of the System Processors is included. This document provides enough information for the majority of programmers to use the Modcomp computers without having to refer to any other manuals. A lot of emphasis has been placed on the file description and usage for each of the System Processors. This allows the user to understand how Modcomp MAX IV does things rather than just learning the system commands.

  16. Optical linear algebra processors - Architectures and algorithms

    Science.gov (United States)

    Casasent, David

    1986-01-01

    Attention is given to the component design and optical configuration features of a generic optical linear algebra processor (OLAP) architecture, as well as the large number of OLAP architectures, number representations, algorithms and applications encountered in current literature. Number-representation issues associated with bipolar and complex-valued data representations, high-accuracy (including floating point) performance, and the base or radix to be employed, are discussed, together with case studies on a space-integrating frequency-multiplexed architecture and a hybrid space-integrating and time-integrating multichannel architecture.

  17. Guidance for Industry: Food Producers, Processors, and ...

    Science.gov (United States)

    ... The Food and Drug Administration has published two documents to accompany its food safety guidance entitled "Food Producers, Processors, and Transporters: Food ...

  18. Lattice gauge theory using parallel processors

    International Nuclear Information System (INIS)

    Lee, T.D.; Chou, K.C.; Zichichi, A.

    1987-01-01

    The book's contents include: Lattice Gauge Theory Lectures: Introduction and Current Fermion Simulations; Monte Carlo Algorithms for Lattice Gauge Theory; Specialized Computers for Lattice Gauge Theory; Lattice Gauge Theory at Finite Temperature: A Monte Carlo Study; Computational Method - An Elementary Introduction to the Langevin Equation, Present Status of Numerical Quantum Chromodynamics; Random Lattice Field Theory; The GF11 Processor and Compiler; and The APE Computer and First Physics Results; Columbia Supercomputer Project: Parallel Supercomputer for Lattice QCD; Statistical and Systematic Errors in Numerical Simulations; Monte Carlo Simulation for LGT and Programming Techniques on the Columbia Supercomputer; Food for Thought: Five Lectures on Lattice Gauge Theory

  19. The design of a graphics processor

    International Nuclear Information System (INIS)

    Holmes, M.; Thorne, A.R.

    1975-12-01

    The design of a graphics processor is described which takes into account known and anticipated user requirements, the availability of cheap minicomputers, the state of integrated circuit technology, and the overall need to minimise cost for a given performance. The main user needs are the ability to display large high resolution pictures, and to dynamically change the user's view in real time by means of fast coordinate processing hardware. The transformations that can be applied to 2D or 3D coordinates either singly or in combination are: translation, scaling, mirror imaging, rotation, and the ability to map the transformation origin on to any point on the screen. (author)
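
    The listed coordinate operations compose naturally as 3x3 homogeneous matrices, which is the usual way to express such a pipeline in software; the hardware described would implement the equivalent arithmetic. A sketch:

```python
# Sketch of the coordinate pipeline described above: the 2D transformations
# (translation, scaling, mirroring, rotation, origin mapping) expressed as
# 3x3 homogeneous matrices, applied singly or in combination by composition.
import numpy as np

def translate(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def scale(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

def mirror_x():
    return scale(1.0, -1.0)            # mirror image about the x axis

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Rotate about an arbitrary screen point (cx, cy): map the transformation
# origin onto that point, rotate, then map back.
def rotate_about(theta, cx, cy):
    return translate(cx, cy) @ rotate(theta) @ translate(-cx, -cy)

p = np.array([2.0, 0.0, 1.0])          # homogeneous 2D point
M = rotate_about(np.pi / 2, 1.0, 0.0) @ scale(2.0, 2.0)
print(M @ p)                           # scaled to (4,0), then rotated -> (1,3)
```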

  20. Integral Fast Reactor fuel pin processor

    International Nuclear Information System (INIS)

    Levinskas, D.

    1993-01-01

    This report discusses the pin processor which receives metal alloy pins cast from recycled Integral Fast Reactor (IFR) fuel and prepares them for assembly into new IFR fuel elements. Either full length as-cast or precut pins are fed to the machine from a magazine, cut if necessary, and measured for length, weight, diameter and deviation from straightness. Accepted pins are loaded into cladding jackets located in a magazine, while rejects and cutting scraps are separated into trays. The magazines, trays, and the individual modules that perform the different machine functions are assembled and removed using remote manipulators and master-slaves

  1. Humanized In Vivo Model for Streptococcal Impetigo

    Science.gov (United States)

    Scaramuzzino, Dominick A.; McNiff, Jennifer M.; Bessen, Debra E.

    2000-01-01

    An in vivo model for group A streptococcal (GAS) impetigo was developed, whereby human neonatal foreskin engrafted onto SCID mice was superficially damaged and bacteria were topically applied. Severe infection, indicated by a purulent exudate, could be induced with as few as 1,000 CFU of a virulent strain. Early findings (48 h) showed a loss of stratum corneum and adherence of short chains of gram-positive cocci to the external surface of granular keratinocytes. This was followed by an increasing infiltration of polymorphonuclear leukocytes (neutrophils) of mouse origin, until a thick layer of pus covered an intact epidermis, with massive clumps of cocci accumulated at the outer rim of the pus layer. By 7 days postinoculation, the epidermis was heavily eroded; in some instances, the dermis contained pockets (ulcers) filled with cocci, similar to that observed for ecthyma. Importantly, virulent GAS underwent reproduction, resulting in a net increase in CFU of 20- to 14,000-fold. The majority of emm pattern D strains had a higher gross pathology score than emm pattern A, B, or C (A–C) strains, consistent with epidemiological findings that pattern D strains have a strong tendency to cause impetigo, whereas pattern A–C strains are more likely to cause pharyngitis. PMID:10768985

  2. Modeling human response errors in synthetic flight simulator domain

    Science.gov (United States)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling for integrating the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. Experimental verification of the models will be tested in a flight quality handling simulation.

  3. A long term model of circulation. [human body

    Science.gov (United States)

    White, R. J.

    1974-01-01

    A quantitative approach to modeling human physiological function, with a view toward ultimate application to long duration space flight experiments, was undertaken. Data was obtained on the effect of weightlessness on certain aspects of human physiological function during 1-3 month periods. Modifications in the Guyton model are reviewed. Design considerations for bilateral interface models are discussed. Construction of a functioning whole body model was studied, as well as the testing of the model versus available data.

  4. Case-Based Reasoning for Human Behavior Modeling

    Science.gov (United States)

    2006-02-16

    Case-Based Reasoning for Human Behavior Modeling, CDRL A002 for Contract N00014-03-C-0178, February 16, 2006. The demands of maintaining a useful case repository require that reuse be supported for human behavior modeling even if other model construction aids are also available. See also: Results of the Common Human Behavior Representation And Interchange System (CHRIS) Workshop, Fall 2001 Simulation Interoperability Workshop.

  5. Competency Modeling in Extension Education: Integrating an Academic Extension Education Model with an Extension Human Resource Management Model

    Science.gov (United States)

    Scheer, Scott D.; Cochran, Graham R.; Harder, Amy; Place, Nick T.

    2011-01-01

    The purpose of this study was to compare and contrast an academic extension education model with an Extension human resource management model. The academic model of 19 competencies was similar across the 22 competencies of the Extension human resource management model. There were seven unique competencies for the human resource management model.…

  6. A Real Time Digital Coincidence Processor for positron emission tomography

    International Nuclear Information System (INIS)

    Dent, H.M.; Jones, W.F.; Casey, M.E.

    1986-01-01

    A Real Time Digital Coincidence Processor has been developed for use in the Positron Emission Tomograph (PET) ECAT scanners manufactured by Computer Technology and Imaging, Inc. (CTI). The primary functions of the Coincidence Processor include: receive from the BGO detector modules serial data, which includes timing information and detector identification; process the received data to form coincidence detector pairs; and present the coincidence pair data to a Real Time Sorter. The primary design emphasis was placed on the Coincidence Processor being able to process the detector data into coincidence pairs at real time rates. This paper briefly describes the Coincidence Processor and some of the considerations that went into its design
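
    The pairing function at the heart of such a device can be sketched in software as a timestamp-window search over time-ordered single events. The sketch below is a stand-in for the real-time hardware, not a description of the CTI implementation; the window width and event data are illustrative.

```python
# Software sketch of coincidence pairing: given time-ordered single events
# (detector id, timestamp), emit detector pairs whose timestamps fall
# within the coincidence window. Window width and events are illustrative.

COINCIDENCE_WINDOW_NS = 12.0

def find_coincidences(events, window=COINCIDENCE_WINDOW_NS):
    """events: list of (detector_id, t_ns), sorted by t_ns."""
    pairs = []
    for i, (det_a, t_a) in enumerate(events):
        for det_b, t_b in events[i + 1:]:
            if t_b - t_a > window:
                break                   # sorted input: no later match
            if det_b != det_a:
                pairs.append((det_a, det_b))
    return pairs

singles = [(17, 100.0), (203, 104.5), (55, 400.0), (90, 430.0), (91, 433.0)]
print(find_coincidences(singles))       # -> [(17, 203), (90, 91)]
```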

  7. Implementation of quantum maps by programmable quantum processors

    International Nuclear Information System (INIS)

    Hillery, Mark; Ziman, Mario; Buzek, Vladimir

    2002-01-01

    A quantum processor is a device with a data register and a program register. The input to the program register determines the operation, which is a completely positive linear map, that will be performed on the state in the data register. We develop a mathematical description for these devices, and apply it to several different examples of processors. The problem of finding a processor that will be able to implement a given set of mappings is also examined, and it is shown that, while it is possible to design a finite processor to realize the phase-damping channel, it is not possible to do so for the amplitude-damping channel
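
    The phase-damping channel mentioned above has a simple Kraus representation, which makes the realizable case concrete. The sketch below applies it to a single-qubit density matrix; the damping parameter is illustrative.

```python
# Sketch of the phase-damping channel the abstract says a finite programmable
# processor can realize: rho -> K0 rho K0^dag + K1 rho K1^dag. Off-diagonal
# coherences decay while the populations are untouched.
import numpy as np

def phase_damping(rho, lam):
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - lam)]])
    K1 = np.array([[0.0, 0.0], [0.0, np.sqrt(lam)]])
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

plus = np.array([1.0, 1.0]) / np.sqrt(2)      # |+> state
rho = np.outer(plus, plus.conj())             # maximal coherence
print(np.round(phase_damping(rho, lam=0.5), 3))
# populations stay 0.5; coherences shrink from 0.5 to 0.5*sqrt(0.5) ~ 0.354
```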

  8. Special processor for in-core control systems

    International Nuclear Information System (INIS)

    Golovanov, M.N.; Duma, V.R.; Levin, G.L.; Mel'nikov, A.V.; Polikanin, A.V.; Filatov, V.P.

    1978-01-01

    The BUTs-20 special processor is discussed, designed to control the units of the in-core control equipment which are incorporated into the VECTOR communication channel, and to provide preliminary data processing prior to computer calculations. A set of instructions and flowsheet of the processor, organization of its communication with memories and other units of the system are given. The processor components: a control unit and an arithmetic logical unit are discussed. It is noted that the special processor permits more effective utilization of the computer time

  9. Bounds on achievable accuracy in analog optical linear-algebra processors

    Science.gov (United States)

    Batsell, Stephen G.; Walkup, John F.; Krile, Thomas F.

    1990-07-01

    Upper and lower bounds on the number of bits of accuracy achievable are determined by applying a second-order statistical model to the linear algebra processor. The use of bounds was found necessary due to the strong signal-dependence of the noise at the output of the optical linear algebra processor (OLAP). One of the limiting factors in applying OLAPs to real-world problems has been the poor achievable accuracy of these processors. Little previous research has been done on determining noise sources from a systems perspective, which would include noise generated in the multiplication and addition operations, spatial variations across arrays, and crosstalk. We have previously examined these noise sources and determined a general model for the output noise mean and variance. The model demonstrates a strong signal dependency in the noise at the output of the processor, which has been confirmed by our experiments. We define accuracy similarly to its definition for an analog signal input to an analog-to-digital (A/D) converter: the number of bits of accuracy achievable is related to the log (base 2) of the number of separable levels at the A/D converter output. The number of separable levels is found by dividing the dynamic range by m times the standard deviation of the signal, σ, where m determines the error rate in the A/D conversion. The dynamic range can be expressed as the
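
    Written out, the accuracy measure just defined is, with DR the dynamic range, σ the output noise standard deviation, and m the error-rate factor:

```latex
% Achievable bits of accuracy: log2 of the number of separable levels,
% where the number of separable levels is DR / (m * sigma).
b = \log_2\!\left(\frac{\mathrm{DR}}{m\,\sigma}\right)
```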

  10. A sample application of nuclear power human resources model

    International Nuclear Information System (INIS)

    Gurgen, A.; Ergun, S.

    2016-01-01

    One of the most important issues for a newcomer country initiating nuclear power plant projects is to have both quantitative and qualitative models for human resources development. For the quantitative model of human resources development for Turkey, the "Nuclear Power Human Resources (NPHR) Model" developed by the Los Alamos National Laboratory was used to determine the number of people that will be required in different professional or occupational fields in the planning of human resources for the Akkuyu, Sinop and third nuclear power plant projects. The numbers of people required in different professions were calculated for the Nuclear Energy Project Implementation Department, the regulatory authority, project companies, construction, the nuclear power plants and the academy. In this study, a sample application of the human resources model is presented, giving the results of the first attempts to calculate Turkey's human resource needs. Keywords: Human Resources Development, Newcomer Country, NPHR Model

  11. Reconfigurable Very Long Instruction Word (VLIW) Processor

    Science.gov (United States)

    Velev, Miroslav N.

    2015-01-01

    Future NASA missions will depend on radiation-hardened, power-efficient processing systems-on-a-chip (SOCs) that consist of a range of processor cores custom tailored for space applications. Aries Design Automation, LLC, has developed a processing SOC that is optimized for software-defined radio (SDR) uses. The innovation implements the Institute of Electrical and Electronics Engineers (IEEE) RazorII voltage management technique, a microarchitectural mechanism that allows processor cores to self-monitor, self-analyze, and self-heal after timing errors, regardless of their cause (e.g., radiation; chip aging; variations in the voltage, frequency, temperature, or manufacturing process). This highly automated SOC can also execute legacy binary code for the PowerPC 750 instruction set architecture (ISA), which was used in the flight-control computers of many previous NASA space missions. In developing this innovation, Aries Design Automation has made significant contributions to the fields of formal verification of complex pipelined microprocessors and Boolean satisfiability (SAT) solving, and has developed highly efficient electronic design automation tools that hold promise for future developments.

  12. Human tumor infiltrating lymphocytes cooperatively regulate prostate tumor growth in a humanized mouse model

    OpenAIRE

    Roth, Michael D; Harui, Airi

    2015-01-01

    BACKGROUND: The complex interactions that occur between human tumors, tumor infiltrating lymphocytes (TIL) and the systemic immune system are likely to define critical factors in the host response to cancer. While conventional animal models have identified an array of potential anti-tumor therapies, mouse models often fail to translate into effective human treatments. Our goal is to establish a humanized tumor model as a more effective pre-clinical platform for understanding and manipulating ...

  13. Human Systems Integration (HSI) Tradeoff Model

    Science.gov (United States)

    2014-03-01

    Approved for public release; distribution is unlimited. Release # 88ABW-2014-1475, dated 7 Apr 2014. //SIGNATURE// MATTHEW T. TARANTO, Major, USAF, Chief, Human Systems Analysis Division, Human Systems Integration Directorate. ... enhancing the understanding of HSI tradeoffs. At the direction of 711 HPW/HP, the Survivability/Vulnerability Information Analysis Center (SURVIAC

  14. A model of the human retina

    DEFF Research Database (Denmark)

    Jørgensen, John Leif

    1998-01-01

    Traditionally, the human eye is perceived as being "just" a camera, that renders an accurate, although limited, image for processing in the brain. This interpretation probably stems from the apparent similarity between a video- or photo-camera and a human eye with respect to the lens, the iris...

  15. Modelling Human Exposure to Chemicals in Food

    NARCIS (Netherlands)

    Slob W

    1993-01-01

    Exposure to foodborne chemicals is often estimated using the average consumption pattern in the human population. To protect the human population instead of the average individual, however, interindividual variability in consumption behaviour must be taken into account. This report shows how food

  16. Human-water interface in hydrological modelling

    NARCIS (Netherlands)

    Wada, Yoshihide; Bierkens, Marc F.P.; Roo, de Ad; Dirmeyer, Paul A.; Famiglietti, James S.; Hanasaki, Naota; Konar, Megan; Liu, Junguo; Schmied, Hannes Möller; Oki, Taikan; Pokhrel, Yadu; Sivapalan, Murugesu; Troy, Tara J.; Dijk, Van Albert I.J.M.; Emmerik, Van Tim; Huijgevoort, Van Marjolein H.J.; Lanen, van Henny A.J.; Vörösmarty, Charles J.; Wanders, Niko; Wheater, Howard

    2017-01-01

    Over recent decades, the global population has been rapidly increasing and human activities have altered terrestrial water fluxes to an unprecedented extent. The phenomenal growth of the human footprint has significantly modified hydrological processes in various ways (e.g. irrigation, artificial

  17. Case Study of Using High Performance Commercial Processors in Space

    Science.gov (United States)

    Ferguson, Roscoe C.; Olivas, Zulema

    2009-01-01

    The purpose of the Space Shuttle Cockpit Avionics Upgrade project (1999-2004) was to reduce crew workload and improve situational awareness. The upgrade was to augment the Shuttle avionics system with new hardware and software. A major success of this project was the validation of the hardware architecture and software design. This was significant because the project incorporated new technology and approaches for the development of human-rated space software. An early version of this system was tested at the Johnson Space Center for one month by teams of astronauts. The results were positive, but NASA eventually cancelled the project towards the end of the development cycle. The goal of reducing crew workload and improving situational awareness resulted in the need for high-performance Central Processing Units (CPUs). The CPU selected was from the PowerPC family, a reduced instruction set computer (RISC) line known for its high performance. However, the requirement for radiation tolerance resulted in the re-evaluation of the selected family member of the PowerPC line. Radiation testing revealed that the originally selected processor (PowerPC 7400) was too soft to meet mission objectives, and an effort was established to perform trade studies and performance testing to determine a feasible candidate. At that time, the PowerPC RAD750s were radiation tolerant but did not meet the required performance needs of the project. Thus, the final solution was to select the PowerPC 7455. This processor did not have a radiation-tolerant version but had some ability to detect failures. However, its cache tags did not provide parity, and thus the project incorporated a software strategy to detect radiation failures. The strategy was to incorporate dual paths for software generating commands to the legacy Space Shuttle avionics, to prevent failures due to the softness of the upgraded avionics.

  18. Reconstruction of human mammary tissues in a mouse model.

    Science.gov (United States)

    Proia, David A; Kuperwasser, Charlotte

    2006-01-01

    Establishing a model system that more accurately recapitulates both normal and neoplastic breast epithelial development in rodents is central to studying human breast carcinogenesis. However, the inability of human breast epithelial cells to colonize mouse mammary fat pads is problematic. Considering that the human breast is a more fibrous tissue than is the adipose-rich stroma of the murine mammary gland, our group sought to bypass the effects of the rodent microenvironment through incorporation of human stromal fibroblasts. We have been successful in reproducibly recreating functionally normal breast tissues from reduction mammoplasty tissues, in what we term the human-in-mouse (HIM) model. Here we describe our relatively simple and inexpensive techniques for generating this orthotopic xenograft model. Whether the model is to be applied for understanding normal human breast development or tumorigenesis, investigators with minimal animal surgery skills, basic cell culture techniques and access to human breast tissue will be able to generate humanized mouse glands within 3 months. Clearing the mouse of its endogenous epithelium with subsequent stromal humanization takes 1 month. The subsequent implantation of co-mixed human epithelial cells and stromal cells occurs 2 weeks after humanization, so investigators should expect to observe the desired outgrowths 2 months afterward. As a whole, this model system has the potential to improve the understanding of crosstalk between the tissue stroma and the epithelium, as well as of factors involved in breast stem cell biology, tumor initiation and progression.

  19. Pharmacological migraine provocation: a human model of migraine

    DEFF Research Database (Denmark)

    Ashina, Messoud; Hansen, Jakob Møller

    2010-01-01

    If a naturally occurring substance can provoke migraine in human patients, then it is likely, although not certain, that blocking its effect will be effective in the treatment of acute migraine attacks. To this end, a human in vivo model of experimental headache and migraine in humans has been developed...

  20. Parallel processors and nonlinear structural dynamics algorithms and software

    Science.gov (United States)

    Belytschko, Ted

    1989-01-01

    A nonlinear structural dynamics finite element program was developed to run on a shared memory multiprocessor with pipeline processors. The program, WHAMS, was used as a framework for this work. The program employs explicit time integration and has the capability to handle both the nonlinear material behavior and large displacement response of 3-D structures. The elasto-plastic material model uses an isotropic strain hardening law which is input as a piecewise linear function. Geometric nonlinearities are handled by a corotational formulation in which a coordinate system is embedded at the integration point of each element. Currently, the program has an element library consisting of a beam element based on Euler-Bernoulli theory and triangular and quadrilateral plate elements based on Mindlin theory.
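    A piecewise linear hardening law of this kind amounts to a table lookup with linear interpolation over user-supplied breakpoints. A minimal sketch of how such a curve might be evaluated (illustrative only, not WHAMS code; the extrapolation choice is an assumption):

```python
import bisect

def piecewise_linear_yield_stress(plastic_strain, strain_pts, stress_pts):
    """Interpolate the yield stress from a piecewise linear hardening curve.

    strain_pts/stress_pts are the user-supplied breakpoints, as in a tabular
    material-card input; beyond the last point the curve is extrapolated with
    the final slope held constant (one common convention, assumed here).
    """
    i = bisect.bisect_right(strain_pts, plastic_strain) - 1
    i = max(0, min(i, len(strain_pts) - 2))          # clamp to a valid segment
    t = (plastic_strain - strain_pts[i]) / (strain_pts[i + 1] - strain_pts[i])
    return stress_pts[i] + t * (stress_pts[i + 1] - stress_pts[i])

# Example: yield stress 250 MPa at zero plastic strain, hardening to 400 MPa.
print(piecewise_linear_yield_stress(0.05, [0.0, 0.1, 0.2], [250.0, 350.0, 400.0]))
# -> 300.0
```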

  1. Clock generators for SOC processors circuits and architectures

    CERN Document Server

    Fahim, Amr

    2004-01-01

    This book explores the design of fully-integrated frequency synthesizers suitable for system-on-a-chip (SOC) processors. The text takes a more global design perspective in jointly examining the design space at the circuit level as well as at the architectural level. The comprehensive coverage includes summary chapters on circuit theory as well as feedback control theory relevant to the operation of phase locked loops (PLLs). On the circuit level, the discussion includes low-voltage analog design in deep submicron digital CMOS processes, and the effects of supply noise, substrate noise, as well as device noise. On the architectural level, the discussion includes PLL analysis using continuous-time as well as discrete-time models, linear and nonlinear effects on PLL performance, and detailed analysis of locking behavior. The book provides numerous real world applications, as well as practical rules-of-thumb for modern designers to use at the system, architectural, and circuit levels.

  2. Directions in parallel processor architecture, and GPUs too

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    Modern computing is power-limited in every domain of computing. Performance increments extracted from instruction-level parallelism (ILP) are no longer power-efficient; they haven't been for some time. Thread-level parallelism (TLP) is a more easily exploited form of parallelism, at the expense of programmer effort to expose it in the program. In this talk, I will introduce you to disparate topics in parallel processor architecture that will impact programming models (and you) in both the near and far future. About the speaker: Olivier is a senior GPU (SM) architect at NVIDIA and an active participant in the concurrency working group of the ISO C++ committee. He has also worked on very large diesel engines as a mechanical engineer, and taught at McGill University (Canada) as a faculty instructor.

  3. Reasoning about Human Participation in Self-Adaptive Systems

    Science.gov (United States)

    2015-01-16

    these adaptation models as stochastic multiplayer games (SMGs) that can be used to analyze human-system-environment interactions. We illustrate our... To explore these issues, we propose... select a set of devices of size MAX DEVS PN (a constant that represents the maximum number of devices that can be attached to a processor node) among

  4. Onboard Data Processors for Planetary Ice-Penetrating Sounding Radars

    Science.gov (United States)

    Tan, I. L.; Friesenhahn, R.; Gim, Y.; Wu, X.; Jordan, R.; Wang, C.; Clark, D.; Le, M.; Hand, K. P.; Plaut, J. J.

    2011-12-01

    Among the many concerns faced by outer planetary missions, science data storage and transmission hold special significance. Such missions must contend with limited onboard storage, brief data downlink windows, and low downlink bandwidths. A potential solution to these issues lies in employing onboard data processors (OBPs) to convert raw data into products that are smaller and closely capture relevant scientific phenomena. In this paper, we present the implementation of two OBP architectures for ice-penetrating sounding radars tasked with exploring Europa and Ganymede. Our first architecture utilizes an unfocused processing algorithm extended from the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS, Jordan et al. 2009). Compared to downlinking raw data, we are able to reduce data volume by approximately 100 times through OBP usage. To ensure the viability of our approach, we have implemented, simulated, and synthesized this architecture using both VHDL and Matlab models (with fixed-point and floating-point arithmetic) in conjunction with Modelsim. Creation of a VHDL model of our processor is the principal step in transitioning to actual digital hardware, whether in an FPGA (field-programmable gate array) or an ASIC (application-specific integrated circuit), and successful simulation and synthesis strongly indicate feasibility. In addition, we examined the tradeoffs faced in the OBP between fixed-point accuracy, resource consumption, and data product fidelity. Our second architecture is based upon a focused fast back projection (FBP) algorithm that requires a modest amount of computing power and on-board memory while yielding high along-track resolution and improved slope detection capability. We present an overview of the algorithm and details of our implementation, also in VHDL. With the appropriate tradeoffs, the use of OBPs can significantly reduce data downlink requirements without sacrificing data product fidelity. Through the development
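    For intuition, much of the data reduction in unfocused processing comes from coherent presumming: averaging consecutive complex echoes raises the SNR of slowly varying subsurface returns while shrinking the along-track data volume by the averaging factor. A toy sketch of that one ingredient (not the flight algorithm; shapes and numbers are illustrative):

```python
import numpy as np

def presum(echoes: np.ndarray, factor: int) -> np.ndarray:
    """Coherently average consecutive radar echoes (presumming).

    echoes: complex array of shape (n_pulses, n_range_bins).
    Averaging `factor` consecutive pulses boosts SNR for coherent returns
    and cuts the along-track data volume by `factor`.
    """
    n = (echoes.shape[0] // factor) * factor          # drop any ragged tail
    blocks = echoes[:n].reshape(-1, factor, echoes.shape[1])
    return blocks.mean(axis=1)

rng = np.random.default_rng(0)
raw = rng.standard_normal((1000, 256)) + 1j * rng.standard_normal((1000, 256))
print(presum(raw, 10).shape)   # (100, 256): 10x smaller along track
```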

  5. Pharmacological migraine provocation: a human model of migraine

    DEFF Research Database (Denmark)

    Ashina, Messoud; Hansen, Jakob Møller

    2010-01-01

    for migraine mechanisms. So far, however, animal models cannot predict the efficacy of new therapies for migraine. Because migraine attacks are fully reversible and can be aborted by therapy, the headache- or migraine-provoking property of naturally occurring signaling molecules can be tested in a human model. ... If a naturally occurring substance can provoke migraine in human patients, then it is likely, although not certain, that blocking its effect will be effective in the treatment of acute migraine attacks. To this end, a human in vivo model of experimental headache and migraine in humans has been developed...

  6. Numerical Modeling of Electromagnetic Field Effects on the Human Body

    Directory of Open Access Journals (Sweden)

    Zuzana Psenakova

    2006-01-01

    Full Text Available Interactions of the electromagnetic field (EMF) with the environment and with human tissue are still under discussion, and many research teams are investigating them. Human simulation models are used for biomedical research in many areas where it is advantageous to replace the real human body (tissue) by a numerical model. Biological effects of EMF are one of the areas where numerical models are used with many advantages. On the other hand, this research is very specific, and it is always quite hard to simulate realistic human tissue. This paper deals with different possibilities for numerical modelling of electromagnetic field effects on the human body, especially the calculation of the specific absorption rate (SAR) distribution in the human body and the thermal effect.

  7. Minimizing Human Risk: Human Performance Models in the Space Human Factors and Habitability and Behavioral Health and Performance Elements

    Science.gov (United States)

    Gore, Brian F.

    2016-01-01

    Human space exploration has never been more exciting than it is today. A human presence on other worlds is becoming a reality as we leverage much of our prior knowledge for the new mission of going to Mars. Exploring the solar system at greater distances from Earth than ever before will pose some unique challenges, which can be overcome thanks to advances in modeling and simulation technologies. The National Aeronautics and Space Administration (NASA) is at the forefront of exploring our solar system. NASA's Human Research Program (HRP) focuses on discovering the best methods and technologies that support safe and productive human space travel in the extreme and harsh space environment. HRP uses various methods and approaches to answer questions about the impact of long-duration missions on the human in space, including gravity's impact on the human body, isolation and confinement, hostile environments, space radiation, and how the distance is likely to impact the human. Predictive models are included in the HRP research portfolio as these models provide valuable insights into human-system operations. This paper will provide an overview of NASA's HRP and will present a number of projects that have used modeling and simulation to provide insights into human-system issues (e.g. automation, habitat design, schedules) in anticipation of space exploration.

  8. Animal models for human genetic diseases | Sharif | African Journal ...

    African Journals Online (AJOL)

    The study of human genetic diseases can be greatly aided by animal models because of their similarity to humans in terms of genetics. In addition to being used to understand diverse aspects of basic biology, model organisms are extensively used in applied research in agriculture, industry, and also in medicine, where they are used to ...

  9. General Computational Model for Human Musculoskeletal System of Spine

    Directory of Open Access Journals (Sweden)

    Kyungsoo Kim

    2012-01-01

    Full Text Available A general computational model of the human lumbar spine and trunk muscles, including optimization formulations, is provided. For a given condition, the trunk muscle forces can be predicted considering human physiology, including the follower load concept. The feasibility of the solution can be indirectly validated by comparing the compressive force, the shear force, and the joint moment. The presented general computational model and optimization technology can be fundamental tools for understanding the control principles of human trunk muscles.

  10. Specific and General Human Capital in an Endogenous Growth Model

    OpenAIRE

    Evangelia Vourvachaki; Vahagn Jerbashian; Sergey Slobodyan

    2014-01-01

    In this article, we define specific (general) human capital in terms of the occupations whose use is spread in a limited (wide) set of industries. We analyze the growth impact of an economy's composition of specific and general human capital, in a model where education and research and development are costly and complementary activities. The model suggests that a declining share of specific human capital, as observed in the Czech Republic, can be associated with a lower rate of long-term grow...

  11. Human-water interface in hydrological modelling

    OpenAIRE

    Wada, Yoshihide; Bierkens, Marc F.P.; Roo, de Ad; Dirmeyer, Paul A.; Famiglietti, James S.; Hanasaki, Naota; Konar, Megan; Liu, Junguo; Schmied, Hannes Möller; Oki, Taikan; Pokhrel, Yadu; Sivapalan, Murugesu; Troy, Tara J.; Dijk, Van Albert I.J.M.; Emmerik, Van Tim

    2017-01-01

    Over recent decades, the global population has been rapidly increasing and human activities have altered terrestrial water fluxes to an unprecedented extent. The phenomenal growth of the human footprint has significantly modified hydrological processes in various ways (e.g. irrigation, artificial dams, and water diversion) and at various scales (from a watershed to the globe). During the early 1990s, awareness of the potential for increased water scarcity led to the first detailed global wate...

  12. Lipsi: Probably the Smallest Processor in the World

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2018-01-01

    , in dedicated hardware, usually as a state machine or a combination of communicating state machines, these functionalities may also be implemented by a small processor. In this paper, we present Lipsi, a very tiny processor to make it possible to implement classic finite state machine logic in software...

  13. Assessment of Processors and Marketers of Sheabutter ( Vitellaria ...

    African Journals Online (AJOL)

    The study examined the processing and marketing of Shea butter in Zuru Local Government Area of Kebbi State, Nigeria to identify the socioeconomic characteristics of Shea butter processors and marketers, the average cost and return of Shea butter processors and marketers and the determinant variables of profitability ...

  14. ACP/R3000 processors in data acquisition systems

    International Nuclear Information System (INIS)

    Deppe, J.; Areti, H.; Atac, R.

    1989-02-01

    We describe ACP/R3000 processor based data acquisition systems for high energy physics. This VME bus compatible processor board, with a computational power equivalent to 15 VAX 11/780s or better, contains 8 Mb of memory for event buffering and has a high speed secondary bus that allows data gathering from front end electronics. 2 refs., 3 figs

  15. Improving the performance of probabilistic programmable quantum processors

    International Nuclear Information System (INIS)

    Hillery, Mark; Ziman, Mario; Buzek, Vladimir

    2004-01-01

    We present a systematic analysis of how one can improve the performance of probabilistic programmable quantum processors. We generalize a simple Vidal-Masanes-Cirac processor that realizes U(1) rotations on a qubit with the phase of the rotation encoded in a state of the program register. We show how the probability of success of the probabilistic processor can be enhanced by using the processor in loops. We also show that the same strategy can be utilized for a probabilistic implementation of nonunitary transformations on qubits. In addition, we show that an arbitrary SU(2) transformation of qubits can be encoded in the program state of a universal programmable probabilistic quantum processor. The probability of success of this processor can be enhanced by a systematic correction of errors via conditional loops. Finally, we show that all our results can be generalized to qudits. In particular, we show how to implement SU(N) rotations of qudits via a programmable quantum processor, and how the performance of the processor can be enhanced when it is used in loops.
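    The benefit of looping is easy to quantify under a simplifying assumption: if each pass succeeds with probability p and a failed pass can be corrected and retried independently, then N passes succeed with probability 1 - (1 - p)^N. This is a simplification of the conditional error-correction scheme in the paper, shown only for intuition:

```python
def success_after_loops(p_single: float, n_loops: int) -> float:
    """Overall success probability when each failed run can be corrected and
    the processor re-applied, assuming independent attempts (a simplification)."""
    return 1.0 - (1.0 - p_single) ** n_loops

print(success_after_loops(0.5, 4))  # 0.9375: four loops of a p = 1/2 processor
```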

  16. Message Passing on a Time-predictable Multicore Processor

    DEFF Research Database (Denmark)

    Sørensen, Rasmus Bo; Puffitsch, Wolfgang; Schoeberl, Martin

    2015-01-01

    Real-time systems need time-predictable computing platforms. For a multicore processor to be time-predictable, communication between processor cores needs to be time-predictable as well. This paper presents a time-predictable message-passing library for such a platform. We show how to build up...

  17. Excavator-based processor operator productivity and cost analysis ...

    African Journals Online (AJOL)

    Operator impact on productivity and cost using similar processor machines was addressed in this case study. The study had two objectives: (1) determine the extent of operator productivity variation between six processor operators in a harvesting operation; and (2) determine potential cost implications associated with ...

  18. Biomass is beginning to threaten the wood-processors

    International Nuclear Information System (INIS)

    Beer, G.; Sobinkovic, B.

    2004-01-01

    In this issue, the exploitation of biomass in the Slovak Republic is analysed. Some new projects for the construction of biomass-fired boiler plants are presented. The grants for biomass are driving up the prices of raw wood, which is thus becoming less accessible to the wood-processors. Excessive wood exports threaten the domestic processors.

  19. Digital image processing software system using an array processor

    International Nuclear Information System (INIS)

    Sherwood, R.J.; Portnoff, M.R.; Journeay, C.H.; Twogood, R.E.

    1981-01-01

    A versatile array processor-based system for general-purpose image processing was developed. At the heart of this system is an extensive, flexible software package that incorporates the array processor for effective interactive image processing. The software system is described in detail, and its use in a diverse set of applications at LLNL is briefly discussed. 4 figures, 1 table

  20. Bank switched memory interface for an image processor

    International Nuclear Information System (INIS)

    Barron, M.; Downward, J.

    1980-09-01

    A commercially available image processor is interfaced to a PDP-11/45 through an 8K window of memory addresses. When the image processor was not in use, it was desired to be able to use the 8K address space as real memory. The standard method of accomplishing this would have been to use UNIBUS switches to switch in either the physical 8K bank of memory or the image processor memory. This method has the disadvantage of being rather expensive. As a simple alternative, a device was built to selectively enable or disable either an 8K bank of memory or the image processor memory. To enable the image processor under program control, GEN is contracted in size, the memory is disabled, a device partition for the image processor is created above GEN, and the image processor memory is enabled. The process is reversed to restore the memory to GEN. The hardware to enable/disable the image and computer memories is controlled using spare bits from a DR-11K output register. The image processor and physical memory can be switched in or out on line with no adverse effects on the system's operation

  1. Temporal Partitioning and Multi-Processor Scheduling for Reconfigurable Architectures

    DEFF Research Database (Denmark)

    Popp, Andreas; Le Moullec, Yannick; Koch, Peter

    This poster presentation outlines a proposed framework for handling the mapping of signal processing applications to heterogeneous reconfigurable architectures. The methodology consists of an extension to traditional multi-processor scheduling by creating a separate HW track for the generation of groups of tasks that are handled similarly to SW processes in a traditional multi-processor scheduling context.

  2. Evaluation of the Intel Sandy Bridge-EP server processor

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A; CERN. Geneva. IT Department

    2012-01-01

    In this paper we report on a set of benchmark results recently obtained by CERN openlab when comparing an 8-core “Sandy Bridge-EP” processor with Intel’s previous microarchitecture, the “Westmere-EP”. The Intel marketing names for these processors are “Xeon E5-2600 processor series” and “Xeon 5600 processor series”, respectively. Both processors are produced in a 32nm process, and both platforms are dual-socket servers. Multiple benchmarks were used to get a good understanding of the performance of the new processor. We used both industry-standard benchmarks, such as SPEC2006, and specific High Energy Physics benchmarks, representing both simulation of physics detectors and data analysis of physics events. Before summarizing the results we must stress the fact that benchmarking of modern processors is a very complex affair. One has to control (at least) the following features: processor frequency, overclocking via Turbo mode, the number of physical cores in use, the use of logical cores ...

  3. Recursive Matrix Inverse Update On An Optical Processor

    Science.gov (United States)

    Casasent, David P.; Baranoski, Edward J.

    1988-02-01

    A high accuracy optical linear algebraic processor (OLAP) using the digital multiplication by analog convolution (DMAC) algorithm is described for use in an efficient matrix inverse update algorithm with speed and accuracy advantages. The solution for the parameters in the algorithm is addressed, and the advantages of optical over digital linear algebraic processors are advanced.

  4. Digital Signal Processor System for AC Power Drivers

    Directory of Open Access Journals (Sweden)

    Ovidiu Neamtu

    2009-10-01

    Full Text Available DSP (Digital Signal Processor) is the best solution for motor control systems, making possible the development of advanced motor drive systems. The motor control processor calculates the required motor winding voltage magnitude and frequency to operate the motor at the desired speed. A PWM (Pulse Width Modulation) circuit controls the on and off duty cycle of the power inverter switches to vary the magnitude of the motor voltages.
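    As a rough illustration of the calculation described (constant volts-per-hertz control combined with sinusoidal PWM), the sketch below computes a winding voltage command from a frequency command, then the duty cycle of one inverter leg. All constants and names are illustrative, not from the paper:

```python
import math

V_RATED = 230.0   # rated winding voltage (V), illustrative
F_RATED = 50.0    # rated frequency (Hz), illustrative

def volts_per_hertz(f_cmd: float) -> float:
    """Constant V/f: winding voltage proportional to commanded frequency."""
    return min(V_RATED, V_RATED * f_cmd / F_RATED)

def pwm_duty(v_cmd: float, v_dc_bus: float, phase: float) -> float:
    """Sinusoidal PWM: duty cycle of one inverter leg at a given phase angle."""
    m = v_cmd / (v_dc_bus / 2.0)              # modulation index (should stay <= 1)
    return 0.5 * (1.0 + m * math.sin(phase))  # duty cycle in 0..1

duty = pwm_duty(volts_per_hertz(25.0), v_dc_bus=400.0, phase=math.radians(90))
print(round(duty, 3))   # 0.788 at the sine peak for a half-speed command
```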

  5. Molecular processors: from qubits to fuzzy logic.

    Science.gov (United States)

    Gentili, Pier Luigi

    2011-03-14

    Single molecules or their assemblies are information processing devices. Herein it is demonstrated how it is possible to process different types of logic through molecules. As long as decoherent effects are kept away from a pure quantum mechanical system, quantum logic can be processed. If the collapse of superimposed or entangled wavefunctions is unavoidable, molecules can still be used to process either crisp (binary or multi-valued) or fuzzy logic. The way to implement fuzzy inference engines is described and supported by examples of the molecular fuzzy logic systems devised so far. Fuzzy logic is drawing attention in the field of artificial intelligence, because it models human reasoning quite well. This ability may be due to some structural analogies between a fuzzy logic system and the human nervous system. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
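    For readers unfamiliar with the fuzzy-inference pattern, a minimal sketch with triangular membership functions, rule strengths, and weighted-average defuzzification; all numbers are illustrative and unrelated to any specific molecular system:

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Crisp input, e.g. a normalized signal read out from a molecular switch.
x = 0.62
mu_low  = triangular(x, 0.0, 0.0, 0.5)
mu_mid  = triangular(x, 0.2, 0.5, 0.8)
mu_high = triangular(x, 0.5, 1.0, 1.0)

# Each rule maps a fuzzy set to an output level; the crisp output is the
# membership-weighted average of those levels (height defuzzification).
outputs = {0.1: mu_low, 0.5: mu_mid, 0.9: mu_high}
crisp = sum(level * w for level, w in outputs.items()) / sum(outputs.values())
print(round(crisp, 3))   # 0.614 for this input
```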

  6. Humanized mouse model for assessing the human immune response to xenogeneic and allogeneic decellularized biomaterials.

    Science.gov (United States)

    Wang, Raymond M; Johnson, Todd D; He, Jingjin; Rong, Zhili; Wong, Michelle; Nigam, Vishal; Behfar, Atta; Xu, Yang; Christman, Karen L

    2017-06-01

    Current assessment of biomaterial biocompatibility is typically implemented in wild type rodent models. Unfortunately, different characteristics of the immune systems in rodents versus humans limit the capability of these models to mimic the human immune response to naturally derived biomaterials. Here we investigated the utility of humanized mice as an improved model for testing naturally derived biomaterials. Two injectable hydrogels derived from decellularized porcine or human cadaveric myocardium were compared. Three days and one week after subcutaneous injection, the hydrogels were analyzed for early and mid-phase immune responses, respectively. Immune cells in the humanized mouse model, particularly T-helper cells, responded distinctly between the xenogeneic and allogeneic biomaterials. The allogeneic extracellular matrix derived hydrogels elicited significantly reduced total, human specific, and CD4+ T-helper cell infiltration in humanized mice compared to xenogeneic extracellular matrix hydrogels, which was not recapitulated in wild type mice. T-helper cells, in response to the allogeneic hydrogel material, were also less polarized towards a pro-remodeling Th2 phenotype compared to xenogeneic extracellular matrix hydrogels in humanized mice. In both models, both biomaterials induced the infiltration of macrophages polarized towards an M2 phenotype and T-helper cells polarized towards a Th2 phenotype. In conclusion, these studies showed the importance of testing naturally derived biomaterials in immune competent animals and the potential of utilizing this humanized mouse model for further studying human immune cell responses to biomaterials in an in vivo environment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Automated Sequence Processor: Something Old, Something New

    Science.gov (United States)

    Streiffert, Barbara; Schrock, Mitchell; Fisher, Forest; Himes, Terry

    2012-01-01

    High productivity is required for operations teams to meet schedules, and risk must be minimized. Scripting is used to automate processes, and scripts perform essential operations functions. The Automated Sequence Processor (ASP) was a grass-roots task built to automate the command uplink process; a system engineering task for ASP revitalization was organized. ASP is a set of approximately 200 scripts written in Perl, C Shell, AWK and other scripting languages. ASP processes, checks, and packages non-interactive commands automatically. Non-interactive commands are guaranteed to be safe and have been checked by hardware or software simulators, and ASP checks that commands are non-interactive. ASP processes the commands through a command simulator and then packages them if there are no errors. ASP must be active 24 hours/day, 7 days/week.

  8. Processor-in-memory-and-storage architecture

    Science.gov (United States)

    DeBenedictis, Erik

    2018-01-02

    A method and apparatus for performing reliable general-purpose computing. Each sub-core of a plurality of sub-cores of a processor core processes a same instruction at a same time. A code analyzer receives a plurality of residues that represents a code word corresponding to the same instruction and an indication of whether the code word is a memory address code or a data code from the plurality of sub-cores. The code analyzer determines whether the plurality of residues are consistent or inconsistent. The code analyzer and the plurality of sub-cores perform a set of operations based on whether the code word is a memory address code or a data code and a determination of whether the plurality of residues are consistent or inconsistent.
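    The residue checking described here can be illustrated with a toy redundant residue number system: each sub-core reports the same value modulo a distinct modulus, one modulus is redundant, and a reconstruction that lands outside the legal range flags an inconsistency. This is only a sketch of the general idea under assumed moduli, not the patented mechanism:

```python
MODULI = (251, 253, 255, 256)    # pairwise coprime moduli (assumed for the sketch)
LEGAL_RANGE = 251 * 253 * 255    # values fit in the first three moduli;
                                 # the fourth residue is redundant

def residues(value: int) -> tuple:
    """What each sub-core would report for the same code word."""
    return tuple(value % m for m in MODULI)

def consistent(reported) -> bool:
    """Detect a corrupted residue by CRT reconstruction over all moduli."""
    x, m_prod = 0, 1
    for r, m in zip(reported, MODULI):
        # Garner-style incremental Chinese Remainder reconstruction
        t = ((r - x) * pow(m_prod, -1, m)) % m
        x += m_prod * t
        m_prod *= m
    # With one redundant residue, any single corrupted residue pushes the
    # reconstructed value outside the legal range, so it is detected.
    return x < LEGAL_RANGE

good = residues(12345678)
bad = (good[0] ^ 1,) + good[1:]           # simulate a single-event upset
print(consistent(good), consistent(bad))  # True False
```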

  9. Multiple Embedded Processors for Fault-Tolerant Computing

    Science.gov (United States)

    Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy

    2005-01-01

    A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype (see figure) consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.

  10. Acoustooptic linear algebra processors - Architectures, algorithms, and applications

    Science.gov (United States)

    Casasent, D.

    1984-01-01

    Architectures, algorithms, and applications for systolic processors are described, with attention to the realization of parallel algorithms on various optical systolic array processors. Systolic processors for matrices with special structure and matrices of general structure are described, along with the realization of matrix-vector, matrix-matrix, and triple-matrix products on such architectures. Parallel algorithms for direct and indirect solutions to systems of linear algebraic equations and their implementation on optical systolic processors are detailed, with attention to the pipelining and flow of data and operations. Parallel algorithms and their optical realization for LU and QR matrix decomposition are specifically detailed. These represent the fundamental operations necessary in the implementation of least squares, eigenvalue, and SVD solutions. Specific applications (e.g., the solution of partial differential equations, adaptive noise cancellation, and optimal control) are described to typify the use of matrix processors in modern advanced signal processing.

  11. Simulation of a processor switching circuit with APLSV

    International Nuclear Information System (INIS)

    Dilcher, H.

    1979-01-01

    The report describes the simulation of a processor switching circuit with APL. Furthermore, an APL function is presented that simulates a processor in an assembly-like language. Both together serve as a tool for studying processor properties. By means of the programming function it is also possible to program other simulated processors. The processor is to be used in the real-time processing of data that occur in high energy physics experiments. The data are already offered to the computer in digitized form. A typical data rate is 10 KB/sec. The data are structured in blocks. The particular blocks are 1 KB wide and are independent of each other. A processor has to decide whether the block data belong to an event that is part of the background noise and can therefore be forgotten, or whether the data should be saved for a later evaluation. (orig./WB) [de

  12. Experimental testing of the noise-canceling processor.

    Science.gov (United States)

    Collins, Michael D; Baer, Ralph N; Simpson, Harry J

    2011-09-01

    Signal-processing techniques for localizing an acoustic source buried in noise are tested in a tank experiment. Noise is generated using a discrete source, a bubble generator, and a sprinkler. The experiment has essential elements of a realistic scenario in matched-field processing, including complex source and noise time series in a waveguide with water, sediment, and multipath propagation. The noise-canceling processor is found to outperform the Bartlett processor and provide the correct source range for signal-to-noise ratios below -10 dB. The multivalued Bartlett processor is found to outperform the Bartlett processor but not the noise-canceling processor. © 2011 Acoustical Society of America
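    For context, the conventional Bartlett processor referenced here forms an ambiguity surface B = w^H R w from the array cross-spectral matrix R and a unit-norm replica vector w for each candidate source location; the source estimate is the peak. A compact sketch with synthetic data (illustrative only, not the paper's processor):

```python
import numpy as np

def bartlett_surface(data_csm: np.ndarray, replicas: np.ndarray) -> np.ndarray:
    """Bartlett output B = w^H R w for each replica w.

    data_csm: (n_sensors, n_sensors) cross-spectral density matrix R.
    replicas: (n_candidates, n_sensors) unit-norm replica vectors.
    """
    return np.real(np.einsum("ci,ij,cj->c", replicas.conj(), data_csm, replicas))

rng = np.random.default_rng(1)
w_true = rng.standard_normal(8) + 1j * rng.standard_normal(8)
w_true /= np.linalg.norm(w_true)
R = np.outer(w_true, w_true.conj())            # noise-free single source
cands = rng.standard_normal((5, 8)) + 1j * rng.standard_normal((5, 8))
cands /= np.linalg.norm(cands, axis=1, keepdims=True)
cands[2] = w_true                              # candidate 2 is the true location
print(np.argmax(bartlett_surface(R, cands)))   # -> 2
```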

  13. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    Directory of Open Access Journals (Sweden)

    David R. W. Barr

    2009-01-01

    Full Text Available We present a software environment for the efficient simulation of cellular processor arrays (CPAs. This software (APRON is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  14. A numerical model for blast injury of human thorax based on digitized visible human.

    Science.gov (United States)

    Li, Xiao-Fang; Kuang, Jiang-Ming; Nie, Si-Bing; Xu, Jing; Zhu, Jin; Liu, Yi-He

    2017-12-04

    Knowledge of the pressure distribution around the human thorax in a blast helps in understanding the injury mechanisms and their assessment. To investigate the transmission mechanism of the pressure on the human thorax in a blast, a three-dimensional surface model of the human thorax was constructed in this work. To increase the precision of this model, a tetrahedron element division method was applied to convert the rough 3D surface model into a hexahedral element model. Using this model, the high-pressure duration was computed by numerical simulation of the hexahedral elements. Simulation results showed that the apex of the lungs was subjected to the largest stress in a blast. In order to verify this result, an animal experiment was performed on a dog. The animal experimental results were shown to have the same variation tendency as the calculation results based on our numerical model of the human thorax, which made this model reliable for blast injury research.

  15. Applications and Limitations of Mouse Models for Understanding Human Atherosclerosis

    Science.gov (United States)

    von Scheidt, Moritz; Zhao, Yuqi; Kurt, Zeyneb; Pan, Calvin; Zeng, Lingyao; Yang, Xia; Schunkert, Heribert; Lusis, Aldons J.

    2017-01-01

    Most of the biological understanding of mechanisms underlying coronary artery disease (CAD) derives from studies of mouse models. The identification of multiple CAD loci and strong candidate genes in large human genome-wide association studies (GWAS) presented an opportunity to examine the relevance of mouse models for the human disease. We comprehensively reviewed the mouse literature, including 827 literature-derived genes, and compared it to human data. First, we observed striking concordance of risk factors for atherosclerosis in mice and humans. Second, there was highly significant overlap of mouse genes with human genes identified by GWAS. In particular, of the 46 genes with strong association signals in CAD-GWAS that were studied in mouse models all but one exhibited consistent effects on atherosclerosis-related phenotypes. Third, we compared 178 CAD-associated pathways derived from human GWAS with 263 from mouse studies and observed that over 50% were consistent between both species. PMID:27916529

  16. A Fully Automatic Burnt Area Mapping Processor Based on AVHRR Imagery—A TIMELINE Thematic Processor

    Directory of Open Access Journals (Sweden)

    Simon Plank

    2018-02-01

    Full Text Available The German Aerospace Center’s (DLR) TIMELINE project (“Time Series Processing of Medium Resolution Earth Observation Data Assessing Long-Term Dynamics in our Natural Environment”) aims to develop an operational processing and data management environment to process 30 years of National Oceanic and Atmospheric Administration (NOAA) Advanced Very High-Resolution Radiometer (AVHRR) raw data into Level (L) 1b, L2, and L3 products. This article presents the current status of the fully automated L3 burnt area mapping processor, which is based on multi-temporal datasets. The advantages of the proposed approach are (I) the combined use of different indices to improve the classification result, (II) the provision of a fully automated processor, (III) the generation and usage of an up-to-date cloud-free pre-fire dataset, (IV) classification with adaptive thresholding, and (V) the assignment of five different probability levels to the burnt areas detected. The results of the AVHRR data-based burn scar mapping processor were validated with the Moderate Resolution Imaging Spectroradiometer (MODIS) burnt area product MCD64 at four different European study sites. In addition, the accuracy of the AVHRR-based classification and that of the MCD64 itself were assessed by means of Landsat imagery.

  17. Zebrafish heart as a model for human cardiac electrophysiology.

    Science.gov (United States)

    Vornanen, Matti; Hassinen, Minna

    2016-01-01

    The zebrafish (Danio rerio) has become a popular model for human cardiac diseases and pharmacology including cardiac arrhythmias and its electrophysiological basis. Notably, the phenotype of zebrafish cardiac action potential is similar to the human cardiac action potential in that both have a long plateau phase. Also the major inward and outward current systems are qualitatively similar in zebrafish and human hearts. However, there are also significant differences in ionic current composition between human and zebrafish hearts, and the molecular basis and pharmacological properties of human and zebrafish cardiac ionic currents differ in several ways. Cardiac ionic currents may be produced by non-orthologous genes in zebrafish and humans, and paralogous gene products of some ion channels are expressed in the zebrafish heart. More research on molecular basis of cardiac ion channels, and regulation and drug sensitivity of the cardiac ionic currents are needed to enable rational use of the zebrafish heart as an electrophysiological model for the human heart.

  18. Launching applications on compute and service processors running under different operating systems in scalable network of processor boards with routers

    Science.gov (United States)

    Tomkins, James L [Albuquerque, NM; Camp, William J [Albuquerque, NM

    2009-03-17

    A multiple processor computing apparatus includes a physical interconnect structure that is flexibly configurable to support selective segregation of classified and unclassified users. The physical interconnect structure also permits easy physical scalability of the computing apparatus. The computing apparatus can include an emulator which permits applications from the same job to be launched on processors that use different operating systems.

  19. Mouse models for understanding human developmental anomalies

    International Nuclear Information System (INIS)

    Generoso, W.M.

    1989-01-01

    The mouse experimental system presents an opportunity for studying the nature of the underlying mutagenic damage and the molecular pathogenesis of this class of anomalies by virtue of the accessibility of the zygote and its descendant blastomeres. Such studies could contribute to the understanding of the etiology of certain sporadic but common human malformations. The vulnerability of the zygotes to mutagens as demonstrated in the studies described in this report should be a major consideration in chemical safety evaluation. It raises questions regarding the danger to human zygotes when the mother is exposed to drugs and environmental chemicals

  20. Mouse models for understanding human developmental anomalies

    Energy Technology Data Exchange (ETDEWEB)

    Generoso, W.M.

    1989-01-01

    The mouse experimental system presents an opportunity for studying the nature of the underlying mutagenic damage and the molecular pathogenesis of this class of anomalies by virtue of the accessibility of the zygote and its descendant blastomeres. Such studies could contribute to the understanding of the etiology of certain sporadic but common human malformations. The vulnerability of the zygotes to mutagens as demonstrated in the studies described in this report should be a major consideration in chemical safety evaluation. It raises questions regarding the danger to human zygotes when the mother is exposed to drugs and environmental chemicals.

  1. [Attempt at computer modeling of evolution of human society].

    Science.gov (United States)

    Levchenko, V F; Menshutkin, V V

    2009-01-01

    A model of the evolution of human society and the biosphere, based on the concepts of V. I. Vernadskii about the noosphere and of L. N. Gumilev about ethnogenesis, is developed and studied. The mathematical apparatus of the model is a composition of finite stochastic automata. Using this model, the possibility of a global ecological crisis is demonstrated in the case that the current tendencies in the interaction of the biosphere and human civilization persist.

  2. Human Cancer Models Initiative | Office of Cancer Genomics

    Science.gov (United States)

    The Human Cancer Models Initiative (HCMI) is an international consortium that is generating novel human tumor-derived culture models, which are annotated with genomic and clinical data. In an effort to advance cancer research and more fully understand how in vitro findings are related to clinical biology, HCMI-developed models and related data will be available as a community resource for cancer and other research.

  3. Synthetic vision and memory model for virtual human - biomed 2010.

    Science.gov (United States)

    Zhao, Yue; Kang, Jinsheng; Wright, David

    2010-01-01

    This paper describes the methods and case studies of a novel synthetic vision and memory model for virtual humans. The synthetic vision module simulates the biological/optical abilities and limitations of human vision. The module is based on a series of collision detections between the boundary of the virtual human's field of vision (FOV) volume and the surface of objects in a recreated 3D environment. The memory module simulates a short-term memory capability by employing a simplified memory structure (first-in-first-out stack). The synthetic vision and memory model has been integrated into a virtual human modelling project, Intelligent Virtual Modelling. The project aimed to improve the realism and autonomy of virtual humans.

  4. Behavior genetic modeling of human fertility

    DEFF Research Database (Denmark)

    Rodgers, J L; Kohler, H P; Kyvik, K O

    2001-01-01

    Behavior genetic designs and analysis can be used to address issues of central importance to demography. We use this methodology to document genetic influence on human fertility. Our data come from Danish twin pairs born from 1953 to 1959, measured on age at first attempt to get pregnant (First...

  5. Pig models for the human heart failure syndrome

    DEFF Research Database (Denmark)

    Hunter, Ingrid; Terzic, Dijana; Zois, Nora Elisabeth

    2014-01-01

    Human heart failure remains a challenging illness despite advances in the diagnosis and treatment of heart failure patients. There is a need for further improvement of our understanding of the failing myocardium and its molecular deterioration. Porcine models provide an important research tool in this respect, as molecular changes can be examined in detail, which is simply not feasible in human patients. However, the human heart failure syndrome is based on symptoms and signs, where pig models mostly mimic the myocardial damage, but without decisive data on clinical presentation and, therefore, a heart failure diagnosis. ... to elucidate the human heart failure syndrome.

  6. Highway traffic simulation on multi-processor computers

    Energy Technology Data Exchange (ETDEWEB)

    Hanebutte, U.R.; Doss, E.; Tentner, A.M.

    1997-04-01

    A computer model has been developed to simulate highway traffic for various degrees of automation with a high level of fidelity in regard to driver control and vehicle characteristics. The model simulates vehicle maneuvering in a multi-lane highway traffic system and allows for the use of Intelligent Transportation System (ITS) technologies such as an Automated Intelligent Cruise Control (AICC). The structure of the computer model facilitates the use of parallel computers for the highway traffic simulation, since domain decomposition techniques can be applied in a straightforward fashion. In this model, the highway system (i.e. a network of road links) is divided into multiple regions; each region is controlled by a separate link manager residing on an individual processor. A graphical user interface augments the computer model by allowing for real-time interactive simulation control and interaction with each individual vehicle and road side infrastructure element on each link. Average speed and traffic volume data are collected at user-specified loop detector locations. Further, as a measure of safety, the so-called Time To Collision (TTC) parameter is recorded.
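    In its simplest form, the TTC safety measure is the gap to the lead vehicle divided by the closing speed; a minimal sketch (illustrative, not the model's code):

```python
def time_to_collision(gap_m: float, v_follower: float, v_leader: float) -> float:
    """TTC in seconds; only defined while the follower is closing the gap."""
    closing_speed = v_follower - v_leader     # m/s
    if closing_speed <= 0.0:
        return float("inf")   # not closing: no collision on the current course
    return gap_m / closing_speed

print(time_to_collision(30.0, 30.0, 25.0))    # 6.0 s to impact if nothing changes
```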

  7. Human tissue models in cancer research: looking beyond the mouse.

    Science.gov (United States)

    Jackson, Samuel J; Thomas, Gareth J

    2017-08-01

    Mouse models, including patient-derived xenograft mice, are widely used to address questions in cancer research. However, there are documented flaws in these models that can result in the misrepresentation of human tumour biology and limit the suitability of the model for translational research. A coordinated effort to promote the more widespread development and use of 'non-animal human tissue' models could provide a clinically relevant platform for many cancer studies, maximising the opportunities presented by human tissue resources such as biobanks. A number of key factors limit the wide adoption of non-animal human tissue models in cancer research, including deficiencies in the infrastructure and the technical tools required to collect, transport, store and maintain human tissue for lab use. Another obstacle is the long-standing cultural reliance on animal models, which can make researchers resistant to change, often because of concerns about historical data compatibility and losing ground in a competitive environment while new approaches are embedded in lab practice. There are a wide range of initiatives that aim to address these issues by facilitating data sharing and promoting collaborations between organisations and researchers who work with human tissue. The importance of coordinating biobanks and introducing quality standards is gaining momentum. There is an exciting opportunity to transform cancer drug discovery by optimising the use of human tissue and reducing the reliance on potentially less predictive animal models. © 2017. Published by The Company of Biologists Ltd.

  8. Drosophila Melanogaster as an Emerging Translational Model of Human Nephrolithiasis

    Science.gov (United States)

    Miller, Joe; Chi, Thomas; Kapahi, Pankaj; Kahn, Arnold J.; Kim, Man Su; Hirata, Taku; Romero, Michael F.; Dow, Julian A.T.; Stoller, Marshall L.

    2013-01-01

    Purpose The limitations imposed by human clinical studies and mammalian models of nephrolithiasis have hampered the development of effective medical treatments and preventative measures for decades. The simple but elegant Drosophila melanogaster is emerging as a powerful translational model of human disease, including nephrolithiasis and may provide important information essential to our understanding of stone formation. We present the current state of research using D. melanogaster as a model of human nephrolithiasis. Materials and Methods A comprehensive review of the English language literature was performed using PUBMED. When necessary, authoritative texts on relevant subtopics were consulted. Results The genetic composition, anatomic structure and physiologic function of Drosophila Malpighian tubules are remarkably similar to those of the human nephron. The direct effects of dietary manipulation, environmental alteration, and genetic variation on stone formation can be observed and quantified in a matter of days. Several Drosophila models of human nephrolithiasis, including genetically linked and environmentally induced stones, have been developed. A model of calcium oxalate stone formation is among the most recent fly models of human nephrolithiasis. Conclusions The ability to readily manipulate and quantify stone formation in D. melanogaster models of human nephrolithiasis presents the urologic community with a unique opportunity to increase our understanding of this enigmatic disease. PMID:23500641

  9. Human tissue models in cancer research: looking beyond the mouse

    Directory of Open Access Journals (Sweden)

    Samuel J. Jackson

    2017-08-01

    Full Text Available Mouse models, including patient-derived xenograft mice, are widely used to address questions in cancer research. However, there are documented flaws in these models that can result in the misrepresentation of human tumour biology and limit the suitability of the model for translational research. A coordinated effort to promote the more widespread development and use of ‘non-animal human tissue’ models could provide a clinically relevant platform for many cancer studies, maximising the opportunities presented by human tissue resources such as biobanks. A number of key factors limit the wide adoption of non-animal human tissue models in cancer research, including deficiencies in the infrastructure and the technical tools required to collect, transport, store and maintain human tissue for lab use. Another obstacle is the long-standing cultural reliance on animal models, which can make researchers resistant to change, often because of concerns about historical data compatibility and losing ground in a competitive environment while new approaches are embedded in lab practice. There are a wide range of initiatives that aim to address these issues by facilitating data sharing and promoting collaborations between organisations and researchers who work with human tissue. The importance of coordinating biobanks and introducing quality standards is gaining momentum. There is an exciting opportunity to transform cancer drug discovery by optimising the use of human tissue and reducing the reliance on potentially less predictive animal models.

  10. Modeling cognition and disease using human glial chimeric mice

    DEFF Research Database (Denmark)

    Goldman, Steven A.; Nedergaard, Maiken; Windrem, Martha S.

    2015-01-01

    As new methods for producing and isolating human glial progenitor cells (hGPCs) have been developed, the disorders of myelin have become especially compelling targets for cell-based therapy. Yet as animal modeling of glial progenitor cell-based therapies has progressed, it has become clear ..., oligodendrocytes as well. As a result, the recipient brains may become inexorably humanized with regard to their resident glial populations, yielding human glial chimeric mouse brains. These brains provide us a fundamentally new tool by which to assess the species-specific attributes of glia in modulating human ... for studying the human-specific contributions of glia to psychopathology, as well as to higher cognition. As such, the assessment of human glial chimeric mice may provide us new insight into the species-specific contributions of glia to human cognitive evolution, as well as to the pathogenesis of human...

  11. Human Digital Modeling & Hand Scanning Lab

    Data.gov (United States)

    Federal Laboratory Consortium — This laboratory incorporates specialized scanning equipment, computer workstations and software applications for the acquisition and analysis of digitized models of...

  12. Modelling Cerebral Blood Flow Autoregulation in Humans

    National Research Council Canada - National Science Library

    Panerai, R

    2001-01-01

    ...% of CBF regulatory mechanisms and their interaction with other haemodynamic variables such as intracranial pressure and blood gases. Mathematical models have been able to reproduce many known...

  13. Human performance models for computer-aided engineering

    Science.gov (United States)

    Elkind, Jerome I. (Editor); Card, Stuart K. (Editor); Hochberg, Julian (Editor); Huey, Beverly Messick (Editor)

    1989-01-01

    This report discusses a topic important to the field of computational human factors: models of human performance and their use in computer-based engineering facilities for the design of complex systems. It focuses on a particular human factors design problem -- the design of cockpit systems for advanced helicopters -- and on a particular aspect of human performance -- vision and related cognitive functions. By focusing in this way, the authors were able to address the selected topics in some depth and develop findings and recommendations that they believe have application to many other aspects of human performance and to other design domains.

  14. Acceleration of spiking neural network based pattern recognition on NVIDIA graphics processors.

    Science.gov (United States)

    Han, Bing; Taha, Tarek M

    2010-04-01

    There is currently a strong push in the research community to develop biological scale implementations of neuron based vision models. Systems at this scale are computationally demanding and generally utilize more accurate neuron models, such as the Izhikevich and the Hodgkin-Huxley models, in favor of the more popular integrate and fire model. We examine the feasibility of using graphics processing units (GPUs) to accelerate a spiking neural network based character recognition network to enable such large scale systems. Two versions of the network utilizing the Izhikevich and Hodgkin-Huxley models are implemented. Three NVIDIA general-purpose (GP) GPU platforms are examined, including the GeForce 9800 GX2, the Tesla C1060, and the Tesla S1070. Our results show that the GPGPUs can provide significant speedup over conventional processors. In particular, the fastest GPGPU utilized, the Tesla S1070, provided speedups of 5.6 and 84.4 over highly optimized implementations on the fastest central processing unit (CPU) tested, a quadcore 2.67 GHz Xeon processor, for the Izhikevich and the Hodgkin-Huxley models, respectively. The CPU implementation utilized all four cores and the vector data parallelism offered by the processor. The results indicate that GPUs are well suited for this application domain.
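    The Izhikevich model reduces to two coupled equations plus a reset rule, a data-parallel update that maps naturally onto GPUs. A NumPy sketch of one Euler step over a population, using the standard regular-spiking parameters (illustrative, not the paper's code):

```python
import numpy as np

def izhikevich_step(v, u, I, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """One Euler step of dv/dt = 0.04v^2 + 5v + 140 - u + I and
    du/dt = a(bv - u), applied to whole arrays at once
    (regular-spiking parameters; v in mV, dt in ms)."""
    v_new = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u_new = u + dt * a * (b * v - u)
    fired = v_new >= 30.0            # spike threshold (mV)
    v_new[fired] = c                 # reset membrane potential
    u_new[fired] += d                # reset recovery variable
    return v_new, u_new, fired

v = np.full(1024, -65.0)             # a population of 1024 neurons at rest
u = 0.2 * v
v, u, spikes = izhikevich_step(v, u, I=np.full(1024, 10.0))
```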

  15. Mechanical Impedance Modeling of Human Arm: A survey

    Science.gov (United States)

    Puzi, A. Ahmad; Sidek, S. N.; Sado, F.

    2017-03-01

    Human arm mechanical impedance plays a vital role in describing the motion ability of the upper limb. One of the impedance parameters is stiffness, which is defined as the ratio of an applied force to the measured deformation of the muscle. Modeling the arm's mechanical impedance is useful for developing better controllers for systems that interact with humans, such as automated robot-assisted platforms for rehabilitation training. The aim of the survey is to summarize the existing mechanical impedance models of the human upper limb, so as to justify the need for an improved arm model that can facilitate the development of better controllers for such systems of ever-increasing complexity. In particular, the paper will address the following issues: human motor control and motor learning, constant and variable impedance models, methods for measuring mechanical impedance, and mechanical impedance modeling techniques.
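    A common starting point for such models is the second-order relation F = M x'' + B x' + K x between endpoint displacement and restoring force; a tiny sketch with illustrative parameter values (not taken from any of the surveyed studies):

```python
def arm_impedance_force(m, b, k, x, x_dot, x_ddot):
    """Second-order impedance model: F = M*x'' + B*x' + K*x.
    m: inertia (kg), b: damping (N*s/m), k: stiffness (N/m)."""
    return m * x_ddot + b * x_dot + k * x

# Illustrative endpoint parameters for a relaxed posture:
# 1 cm displacement, slow drift, no acceleration -> about 2.4 N restoring force.
print(arm_impedance_force(m=1.5, b=8.0, k=200.0, x=0.01, x_dot=0.05, x_ddot=0.0))
```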

  16. Human Spaceflight Architecture Model (HSFAM) Data Dictionary

    Science.gov (United States)

    Shishko, Robert

    2016-01-01

    HSFAM is a data model based on the DoDAF 2.02 data model with some fit-for-purpose extensions. These extensions are designed to permit quantitative analyses regarding stakeholder concerns about technical feasibility, configuration and interface issues, and budgetary and/or economic viability.

  17. Pig models for the human heart failure syndrome

    DEFF Research Database (Denmark)

    Hunter, Ingrid; Terzic, Dijana; Zois, Nora Elisabeth

    2014-01-01

    Human heart failure remains a challenging illness despite advances in the diagnosis and treatment of heart failure patients. There is a need for further improvement of our understanding of the failing myocardium and its molecular deterioration. Porcine models provide an important research tool ... heart failure diagnosis. In perspective, pig models are in need of some verification in terms of the clinical definition of the experimental condition. After all, humans are not pigs, pigs are not humans, and the difference between the species needs to be better understood before pig models can fully be used ...

  18. A Computational Model of Human Table Tennis for Robot Application

    Science.gov (United States)

    Mülling, Katharina; Peters, Jan

    Table tennis is a difficult motor skill which requires all basic components of a general motor skill learning system. In order to get a step closer to such a generic approach to the automatic acquisition and refinement of table tennis, we study table tennis from a human motor control point of view. We make use of the basic models of discrete human movement phases, virtual hitting points, and the operational timing hypothesis. Using these components, we create a computational model which is aimed at reproducing human-like behavior. We verify the functionality of this model in a physically realistic simulation of a Barrett WAM.
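
    To make the virtual-hitting-point idea concrete, here is an illustrative sketch (the assumptions are mine, not the authors' model): the incoming ball state is extrapolated to a fixed hitting plane under gravity alone, giving the point and time to which the striking movement must be timed.

        import numpy as np

        G = np.array([0.0, 0.0, -9.81])  # gravity (m/s^2); drag ignored

        def virtual_hitting_point(p0, v0, x_plane):
            # Predict where and when a ball at position p0 with velocity
            # v0 crosses the vertical plane x = x_plane.
            t = (x_plane - p0[0]) / v0[0]        # time of flight along x
            return p0 + v0 * t + 0.5 * G * t**2, t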

  19. Human performance modeling for system of systems analytics.

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Kevin R.; Lawton, Craig R.; Basilico, Justin Derrick; Longsine, Dennis E. (INTERA, Inc., Austin, TX); Forsythe, James Chris; Gauthier, John Henry; Le, Hai D.

    2008-10-01

    A Laboratory-Directed Research and Development project was initiated in 2005 to investigate Human Performance Modeling in a System of Systems analytic environment. SAND2006-6569 and SAND2006-7911 document interim results from this effort; this report documents the final results. The problem is difficult because of the number of humans involved in a System of Systems environment and the generally poorly defined nature of the tasks that each human must perform. A two-pronged strategy was followed: one prong was to develop human models using a probability-based method similar to that first developed for relatively well-understood probability-based performance modeling; the other prong was to investigate more state-of-the-art human cognition models. The probability-based modeling resulted in a comprehensive addition of human-modeling capability to the existing SoSAT computer program. The cognitive modeling resulted in an increased understanding of what is necessary to incorporate cognition-based models into a System of Systems analytic environment.

  20. A multisensory integration model of human stance control.

    Science.gov (United States)

    van der Kooij, H; Jacobs, R; Koopman, B; Grootenboer, H

    1999-05-01

    A model is presented to study and quantify the contribution of all available sensory information to human standing, based on optimal estimation theory. In the model, delayed sensory information is integrated in such a way that a best estimate of body orientation is obtained. The model approach agrees with the present theory of the goal of human balance control. The model is not based on purely inverted-pendulum body dynamics, but rather on a three-link segment model of a standing human on a movable support base. In addition, the model is non-linear and explicitly addresses the problem of multisensory integration and neural time delays. A predictive element is included in the controller to compensate for time delays, which is necessary to maintain erect body orientation. Model results of sensory perturbations on total body sway closely resemble experimental results. Despite internal and external perturbations, the controller is able to stabilise the model of an inherently unstable standing human with neural time delays of 100 ms. It is concluded that the model is capable of studying and quantifying multisensory integration in human stance control. We aim to apply the model in (1) the design and development of prostheses and orthoses and (2) the diagnosis of neurological balance disorders.
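
    The optimal-estimation core of such a model can be pictured with a deliberately reduced sketch (mine, not the paper's three-segment, Kalman-type estimator with delays): independent noisy channels -- visual, vestibular, proprioceptive -- are fused by inverse-variance weighting into a minimum-variance estimate of body orientation.

        import numpy as np

        def fuse(estimates, variances):
            # Minimum-variance fusion of independent Gaussian estimates:
            # weight each sensory channel by its precision (1/variance).
            w = 1.0 / np.asarray(variances)
            est = np.sum(w * np.asarray(estimates)) / np.sum(w)
            return est, 1.0 / np.sum(w)  # fused estimate and its variance

        # e.g. lean angle (deg) from vision, vestibular organs and
        # proprioception, with assumed channel variances
        theta, var = fuse([1.0, 2.5, 1.4], [2.0, 4.0, 1.0])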

  1. Pellet culture model for human primary osteoblasts

    OpenAIRE

    K Jähn; RG Richards; CW Archer; MJ Stoddart

    2010-01-01

    In vitro monolayer culture of human primary osteoblasts (hOBs) often shows unsatisfactory results for extracellular matrix (ECM) deposition, maturation and calcification. Nevertheless, monolayer culture is still the method of choice for in vitro differentiation of primary osteoblasts. We believe that the delay in mature ECM production by monolayer-cultured osteoblasts is determined by their state of cell maturation. A functional relationship between the inhibition of osteoblast proliferation an...

  2. Human reconstructed skin xenografts on mice to model skin physiology.

    Science.gov (United States)

    Salgado, Giorgiana; Ng, Yi Zhen; Koh, Li Fang; Goh, Christabelle S M; Common, John E

    Xenograft models to study skin physiology have been popular for scientific use since the 1970s, with various developments and improvements to the techniques over the decades. Xenograft models are particularly useful and sought after due to the lack of clinically relevant animal models for predicting drug effectiveness in humans. Such predictions could in turn boost the process of drug discovery, since novel drug compounds have an estimated 8% chance of FDA approval despite years of rigorous preclinical testing and evaluation, albeit mostly in non-human models. In the case of skin research, the mouse persists as the most popular animal model of choice, despite its well-known anatomical differences with human skin. Differences in skin biology are especially evident when trying to dissect more complex skin conditions, such as psoriasis and eczema, where interactions between the immune system, epidermis and the environment likely occur. While the use of animal models is still considered the gold standard for systemic toxicity studies under controlled environments, there are now alternative models that have been approved for certain applications. To overcome the biological limitations of the mouse model, research efforts have also focused on "humanizing" mice to better recapitulate human skin physiology. In this review, we outline the different approaches undertaken thus far to study skin biology using human tissue xenografts in mice and the technical challenges involved. We also describe more recent developments to generate humanized multi-tissue compartment mice that carry both a functioning human immune system and skin xenografts. Such composite animal models provide promising opportunities to study drugs, disease and differentiation with greater clinical relevance. Copyright © 2017 International Society of Differentiation. Published by Elsevier B.V. All rights reserved.

  3. A Human View Model for Socio-Technical Interactions

    Science.gov (United States)

    Handley, Holly A.; Tolk, Andreas

    2012-01-01

    The Human View was developed as an additional architectural viewpoint to focus on the human part of a system. The Human View can be used to collect and organize data in order to understand how human operators interact with and impact the other elements of a system. This framework can also be used to develop a model to describe how humans interact with each other in network-enabled systems. These socio-technical interactions form the foundation of the emerging area of Human Interoperability. Human Interoperability strives to understand the relationships required between human operators that impact collaboration across networked environments, including the effect of belonging to different organizations. By applying organizational relationship concepts from network theory to the Human View elements, and aligning these relationships with a model developed to identify layers of coalition interoperability, the conditions for different levels of Human Interoperability for network-enabled systems can be identified. These requirements can then be captured in the Human View products to improve the overall network-enabled system.

  4. First-level trigger processor for the ZEUS calorimeter

    International Nuclear Information System (INIS)

    Dawson, J.W.; Talaga, R.L.; Burr, G.W.; Laird, R.J.; Smith, W.; Lackey, J.

    1990-01-01

    The design of the first-level trigger processor for the Zeus calorimeter is discussed. This processor accepts data from the 13,000 photomultipliers of the calorimeter, which is topologically divided into 16 regions, and after regional preprocessing performs logical and numerical operations that cross regional boundaries. Because the crossing period at the HERA collider is 96 ns, it is necessary that first-level trigger decisions be made in pipelined hardware. One microsecond is allowed for the processor to perform the required logical and numerical operations, during which time the data from ten crossings would be resident in the processor while being clocked through the pipelined hardware. The circuitry is implemented in 100K emitter-coupled logic (ECL), advanced CMOS discrete devices and programmable gate arrays, and operates in a VME environment. All tables and registers are written/read from VME, and all diagnostic codes are executed from VME. Preprocessed data flows into the processor at a rate of 5.2 Gbyte/s, and processed data flows from the processor to the global first-level trigger at a rate of 70 Mbyte/s. The system allows for subsets of the logic to be configured by software and for various important variables to be histogrammed as they flow through the processor
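
    A toy sketch of the pipelining constraint described above (illustrative only, not the ECL design): with a 96 ns crossing period and roughly one microsecond of processing, about ten crossings are resident at once, so each decision emerges a fixed number of ticks after its data enters while new data keeps streaming in.

        from collections import deque

        PIPELINE_DEPTH = 10   # ~1 us latency / 96 ns crossing period

        def run_trigger(crossings, decide):
            # 'decide' stands in for the whole logical/numerical chain;
            # the deque models only the fixed-latency, one-crossing-per-
            # tick flow through the pipeline.
            pipe = deque([None] * PIPELINE_DEPTH, maxlen=PIPELINE_DEPTH)
            decisions = []
            for data in crossings:
                out = pipe[-1]                 # oldest crossing exits
                pipe.appendleft(decide(data))  # newest crossing enters
                if out is not None:
                    decisions.append(out)
            return decisions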

  5. A dedicated line-processor as used at the SHF

    International Nuclear Information System (INIS)

    Bevan, A.V.; Hatley, R.W.; Price, D.R.; Rankin, P.

    1985-01-01

    A hardwired trigger processor was used at the SLAC Hybrid Facility to find evidence for charged tracks originating from the fiducial volume of a 40'' rapid-cycling bubble chamber. Straight-line projections of these tracks in the plane perpendicular to the applied magnetic field were searched for using data from three sets of proportional wire chambers (PWC). This information was made directly available to the processor by means of a special digitizing card. The results memory of the processor simulated read-only memory in a 168/E processor and was accessible by it. The 168/E controlled the issuing of a trigger command to the bubble chamber flash tubes. The same design of digitizer card used by the line processor was incorporated into the 168/E, again as read-only memory, which allowed it access to the raw data for continual monitoring of trigger integrity. The design logic of the trigger processor was verified by running real PWC data through a FORTRAN simulation of the hardware. This enabled the debugging to become highly automated, since a step-by-step, computer-controlled comparison of processor registers to simulation predictions could be made.
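
    The search itself can be pictured with a small sketch (my illustration, not the original hardwired logic): a hit in each of the three PWC planes counts as a track candidate when the middle hit lies on the straight line through the outer two, within tolerance.

        def find_line_candidates(hits1, hits2, hits3, z1, z2, z3, tol=1.0):
            # hits*: transverse hit positions (mm) in three PWC planes at
            # z1 < z2 < z3 along the beam; returns hit triples consistent
            # with a straight line within tol.
            frac = (z2 - z1) / (z3 - z1)
            tracks = []
            for h1 in hits1:
                for h3 in hits3:
                    expected = h1 + frac * (h3 - h1)  # linear interpolation
                    tracks.extend((h1, h2, h3) for h2 in hits2
                                  if abs(h2 - expected) < tol)
            return tracks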

  6. Formal modelling techniques in human-computer interaction

    NARCIS (Netherlands)

    de Haan, G.; de Haan, G.; van der Veer, Gerrit C.; van Vliet, J.C.

    1991-01-01

    This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focuses on different related concepts of formal modelling techniques in human-computer interaction.

  7. A computationally efficient electrophysiological model of human ventricular cells

    NARCIS (Netherlands)

    Bernus, O.; Wilders, R.; Zemlin, C. W.; Verschelde, H.; Panfilov, A. V.

    2002-01-01

    Recent experimental and theoretical results have stressed the importance of modeling studies of reentrant arrhythmias in cardiac tissue and at the whole heart level. We introduce a six-variable model obtained by a reformulation of the Priebe-Beuckelmann model of a single human ventricular cell.

  8. Some aspects of statistical modeling of human-error probability

    International Nuclear Information System (INIS)

    Prairie, R.R.

    1982-01-01

    Human reliability analyses (HRA) are often performed as part of risk assessment and reliability projects. Recent events in nuclear power have shown the potential importance of the human element. There are several ongoing efforts in the US and elsewhere with the purpose of modeling human error such that the human contribution can be incorporated into an overall risk assessment associated with one or more aspects of nuclear power. The effort described here uses the HRA event tree to quantify and model the human contribution to risk. As an example, risk analyses are being prepared on several nuclear power plants as part of the Interim Reliability Assessment Program (IREP). In this process the risk analyst selects the elements of his fault tree that human error could contribute to. He then asks the human-factors analyst to perform an HRA on this element.

  9. A Review on Human Respiratory Modeling.

    Science.gov (United States)

    Ghafarian, Pardis; Jamaati, Hamidreza; Hashemian, Seyed Mohammadreza

    2016-01-01

    The input impedance of the respiratory system is measured by the forced oscillation technique (FOT). Multiple prior studies have attempted to match electromechanical models of the respiratory system to impedance data. Since the mechanical behavior of the airways, and of the respiratory system as a whole, resembles an electrical circuit with elements combined in series and in parallel, several theories have been introduced on this basis. It should be noted that the number of elements used in these models may be smaller than the number actually required, owing to the complexity of the pulmonary-chest wall anatomy. Various respiratory models have been proposed based on this idea in order to represent and assess the different parts of the respiratory system for both children's and adults' data. To the best of our knowledge, some well-known respiratory models related to obstructive and restrictive diseases, as well as Acute Respiratory Distress Syndrome (ARDS), are reviewed in this article.
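
    The simplest instance of that electrical analogy is the single-compartment R-I-C model; the sketch below evaluates its input impedance at the frequencies an FOT device probes (the parameter values are order-of-magnitude adult figures I have assumed, not the review's).

        import numpy as np

        def input_impedance(freqs_hz, R=2.0, I=0.01, C=0.1):
            # R-I-C model: airway resistance R (cmH2O.s/L), inertance I
            # (cmH2O.s^2/L) and compliance C (L/cmH2O) in series, so
            # Z(w) = R + j*w*I + 1/(j*w*C) with w = 2*pi*f.
            w = 2 * np.pi * np.asarray(freqs_hz, dtype=float)
            return R + 1j * w * I + 1.0 / (1j * w * C)

        # resistance is Z.real and reactance Z.imag over a typical band
        Z = input_impedance([4, 8, 16, 32])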

  10. Human Communication--A New Model.

    Science.gov (United States)

    McLeish, John

    1978-01-01

    Pavlov's organism-in-the-environment model was adapted to a functional analysis of communication, especially abstract and symbolic activities. A classification of discrimination response and reinforcement patterns was given. (CP)

  11. Human Endothelial Cell Models in Biomaterial Research.

    Science.gov (United States)

    Hauser, Sandra; Jung, Friedrich; Pietzsch, Jens

    2017-03-01

    Endothelial cell (EC) models have evolved as important tools in biomaterial research due to ubiquitously occurring interactions between implanted materials and the endothelium. However, screening the available literature has revealed a gap between material scientists and physiologists in terms of their understanding of these biomaterial-endothelium interactions and their relative importance. Consequently, EC models are often applied in nonphysiological experimental setups, or too extensive conclusions are drawn from their results. The question arises whether this might be one reason why, among the many potential biomaterials, only a few have found their way into the clinic. In this review, we provide an overview of established EC models and possible selection criteria to enable researchers to determine the most reliable and relevant EC model to use. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Model-Based approaches to Human-Automation Systems Design

    DEFF Research Database (Denmark)

    Jamieson, Greg A.; Andersson, Jonas; Bisantz, Ann

    2012-01-01

    Human-automation interaction in complex systems is common, yet design for this interaction is often conducted without explicit consideration of the role of the human operator. Fortunately, there are a number of modeling frameworks proposed for supporting this design activity. However...... (and reportedly one or two critics) can engage one another on several agreed questions about such frameworks. The goal is to aid non-aligned practitioners in choosing between alternative frameworks for their human-automation interaction design challenges....

  13. Ethernet-Enabled Power and Communication Module for Embedded Processors

    Science.gov (United States)

    Perotti, Jose; Oostdyk, Rebecca

    2010-01-01

    The power and communications module is a printed circuit board (PCB) that has the capability of providing power to an embedded processor and converting Ethernet packets into serial data to transfer to the processor. The purpose of the new design is to address the shortcomings of previous designs, including limited bandwidth and program memory, lack of control over packet processing, and lack of support for timing synchronization. The new design of the module creates a robust serial-to-Ethernet conversion that is powered using the existing Ethernet cable. This innovation has a small form factor that allows it to power processors and transducers with minimal space requirements.

  14. Interfacing a processor core in FPGA to an audio system

    OpenAIRE

    Mateos, José Ignacio

    2006-01-01

    The thesis project consists of developing an interface for a Nios II processor integrated on an Altera board (UP3-2C35F672C6, Cyclone II). The main goal is to show how the Nios II processor can interact with the other components of the board. The Quartus II software has been used to create the VHDL code of the interfaces, compile it and download it onto the board. The Nios II IDE tool is used to build the C/C++ files and download them onto the processor. An application has been prepared for t...

  15. Relevance of animal models to human tardive dyskinesia

    Directory of Open Access Journals (Sweden)

    Blanchet Pierre J

    2012-03-01

    Tardive dyskinesia remains an elusive and significant clinical entity that can possibly be understood via experimentation with animal models. We conducted a literature review on tardive dyskinesia modeling. Subchronic antipsychotic drug exposure is a standard approach to model tardive dyskinesia in rodents. Vacuous chewing movements constitute the most common pattern of expression of purposeless oral movements and represent an impermanent response, with individual and strain susceptibility differences. Transgenic mice are also used to address the contribution of adaptive and maladaptive signals induced during antipsychotic drug exposure. An emphasis on non-human primate modeling is proposed, and past experimental observations in various monkey species are reviewed. Rodent and primate models are complementary, but the non-human primate model appears more convincingly similar to the human condition and better suited to address therapeutic issues against tardive dyskinesia.

  16. Computational models of human vision with applications

    Science.gov (United States)

    Wandell, Brian A.

    1987-01-01

    The research program supported by this grant was initiated in 1977 by the Joint Institute for Aeronautics and Acoustics of the Department of Aeronautics and Astronautics at Stanford University. The purpose of the research was to study human performance with the goal of improving the design of flight instrumentation. By mutual agreement between the scientists at NASA-Ames and Stanford, all research activities in this area were consolidated into a single funding mechanism, NCC 2-307 (Center of Excellence Grant, 7/1/84 - present). This is the final report on this research grant.

  17. Median and Morphological Specialized Processors for a Real-Time Image Data Processing

    Directory of Open Access Journals (Sweden)

    Kazimierz Wiatr

    2002-01-01

    This paper presents considerations on selecting a multiprocessor MISD architecture for fast implementation of vision image processing. Drawing on the author's earlier experience with real-time systems, the implementation of specialized hardware processors based on programmable FPGA devices is proposed in a pipeline architecture. In particular, the following processors are presented: a median filter and a morphological processor. The structure of a universal reconfigurable processor that was developed is proposed as well. Experimental results are presented as LCA-level implementation delays for the median filter, morphological processor, convolution processor, look-up-table processor, logic processor and histogram processor. These times are compared with the delays of a general-purpose processor and a DSP processor.

  18. Quantitative modeling of human performance in complex, dynamic systems

    National Research Council Canada - National Science Library

    Baron, Sheldon; Kruser, Dana S; Huey, Beverly Messick

    1990-01-01

    ... Sheldon Baron, Dana S. Kruser, and Beverly Messick Huey, editors. Panel on Human Performance Modeling, Committee on Human Factors, Commission on Behavioral and Social Sciences and Education, National Research Council. National Academy Press, Washington, D.C., 1990.

  19. How do humans inspect BPMN models: an exploratory study

    DEFF Research Database (Denmark)

    Haisjackl, Cornelia; Soffer, Pnina; Lim, Shao Yi

    2016-01-01

    ... by humans, what strategies are taken, what challenges arise, and what cognitive processes are involved. This paper contributes toward such an understanding and reports an exploratory study investigating how humans identify and classify quality issues in BPMN process models, providing preliminary answers ...

  20. Pharmacological migraine provocation: a human model of migraine

    DEFF Research Database (Denmark)

    Ashina, Messoud; Hansen, Jakob Møller

    2010-01-01

    In vitro studies have contributed to the characterization of receptors in cranial blood vessels and the identification of possible new antimigraine agents. Animal models enable the study of vascular responses, neurogenic inflammation, and peptide release, and thus have provided leads in the search for migraine mechanisms. So far, however, animal models cannot predict the efficacy of new therapies for migraine. Because migraine attacks are fully reversible and can be aborted by therapy, the headache- or migraine-provoking property of naturally occurring signaling molecules can be tested in a human model. If a naturally occurring substance can provoke migraine in human patients, then it is likely, although not certain, that blocking its effect will be effective in the treatment of acute migraine attacks. To this end, a human in vivo model of experimental headache and migraine in humans has been developed ...

  1. Can inter-human communications be modeled as "autopoietic"?

    NARCIS (Netherlands)

    Leydesdorff, L.

    2014-01-01

    Open peer commentary on the article "Social Autopoiesis?" by Hugo Urrestarazu. Upshot: The dynamics of expectations in inter-human communications can be modelled as "autopoiesis." Consciousness and communications couple not only structurally (Maturana), but also penetrate each other reflexively.

  2. Dynamic Human Body Modeling Using a Single RGB Camera.

    Science.gov (United States)

    Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan

    2016-03-18

    In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones.

  3. Human casualties in earthquakes: Modelling and mitigation

    Science.gov (United States)

    Spence, R.J.S.; So, E.K.M.

    2011-01-01

    Earthquake risk modelling is needed for the planning of post-event emergency operations, for the development of insurance schemes, for the planning of mitigation measures in the existing building stock, and for the development of appropriate building regulations; in all of these applications estimates of casualty numbers are essential. But there are many questions about casualty estimation which are still poorly understood. These questions relate to the causes and nature of the injuries and deaths, and the extent to which they can be quantified. This paper looks at the evidence on these questions from recent studies. It then reviews casualty estimation models available, and finally compares the performance of some casualty models in making rapid post-event casualty estimates in recent earthquakes.

  4. Generating Phenotypical Erroneous Human Behavior to Evaluate Human-automation Interaction Using Model Checking.

    Science.gov (United States)

    Bolton, Matthew L; Bass, Ellen J; Siminiceanu, Radu I

    2012-11-01

    Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human-automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools and techniques capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel's zero-order phenotypes of erroneous action for omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. We present benchmarks related to the size of the statespace and verification time of models to show how the erroneous human behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered. A design intervention is presented which prevents this problem from occurring. We discuss how our method could be used to evaluate larger applications and recommend future paths of development.
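
    A simplified sketch of the zero-order phenotypes named above (a plain-list illustration I wrote; the paper generates them inside task-analytic models consumed by a model checker): each variant drops, repeats, reorders, or injects an action, and composing such edits yields the higher-order phenotypes.

        def zero_order_phenotypes(seq, intrusions=()):
            # seq: normative action sequence; returns erroneous variants.
            variants = []
            for i in range(len(seq)):
                variants.append(seq[:i] + seq[i+1:])            # omission
                variants.append(seq[:i+1] + seq[i:])            # repetition
                for act in intrusions:
                    variants.append(seq[:i] + [act] + seq[i:])  # intrusion
            for i in range(len(seq) - 1):
                # jump: a later step performed too early
                variants.append(seq[:i] + [seq[i+1], seq[i]] + seq[i+2:])
            return variants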

  6. Multipurpose silicon photonics signal processor core.

    Science.gov (United States)

    Pérez, Daniel; Gasulla, Ivana; Crudgington, Lee; Thomson, David J; Khokhar, Ali Z; Li, Ke; Cao, Wei; Mashanovich, Goran Z; Capmany, José

    2017-09-21

    Integrated photonics changes the scaling laws of information and communication systems offering architectural choices that combine photonics with electronics to optimize performance, power, footprint, and cost. Application-specific photonic integrated circuits, where particular circuits/chips are designed to optimally perform particular functionalities, require a considerable number of design and fabrication iterations leading to long development times. A different approach inspired by electronic Field Programmable Gate Arrays is the programmable photonic processor, where a common hardware implemented by a two-dimensional photonic waveguide mesh realizes different functionalities through programming. Here, we report the demonstration of such reconfigurable waveguide mesh in silicon. We demonstrate over 20 different functionalities with a simple seven hexagonal cell structure, which can be applied to different fields including communications, chemical and biomedical sensing, signal processing, multiprocessor networks, and quantum information systems. Our work is an important step toward this paradigm. Integrated optical circuits today are typically designed for a few special functionalities and require complex design and development procedures. Here, the authors demonstrate a reconfigurable but simple silicon waveguide mesh with different functionalities.

  7. Preventing Precipitation in the ISS Urine Processor

    Science.gov (United States)

    Muirhead, Dean; Carter, Layne; Williamson, Jill; Chambers, Antja

    2017-01-01

    The ISS Urine Processor Assembly (UPA) was initially designed to achieve 85% recovery of water from pretreated urine on ISS. Pretreated urine comprises crew urine treated with flush water, an oxidant (chromium trioxide), and an inorganic acid (sulfuric acid) to control microbial growth and inhibit precipitation. Unfortunately, initial operation of the UPA on ISS resulted in the precipitation of calcium sulfate at 85% recovery. This occurred because the calcium concentration in the crew urine was elevated in microgravity due to bone loss. The higher calcium concentration precipitated with sulfate from the pretreatment acid, resulting in a failure of the UPA due to the accumulation of solids in the Distillation Assembly. Since this failure, the UPA has been limited to a reduced recovery of water from urine to keep calcium sulfate below the solubility limit. NASA personnel have worked to identify a solution that would allow the UPA to return to a nominal recovery rate of 85%. This effort has culminated in the development of a pretreatment based on phosphoric acid instead of sulfuric acid. By eliminating the sulfate associated with the pretreatment, the brine can be concentrated much further before calcium sulfate reaches the solubility limit. This paper summarizes the development of this pretreatment and the testing performed to verify its implementation on ISS.

  8. Element Load Data Processor (ELDAP) Users Manual

    Science.gov (United States)

    Ramsey, John K., Jr.; Ramsey, John K., Sr.

    2015-01-01

    Often, the shear and tensile forces and moments are extracted from finite element analyses to be used in off-line calculations for evaluating the integrity of structural connections involving bolts, rivets, and welds. Usually the maximum forces and moments are desired for use in the calculations. In situations where there are numerous structural connections of interest and numerous load cases, finding the true maximum force and/or moment combinations among all fasteners, welds, and load cases becomes difficult. The Element Load Data Processor (ELDAP) software described herein makes this effort manageable. This software eliminates the possibility of overlooking the worst-case forces and moments, which could result in erroneous positive margins of safety, and of selecting inconsistent combinations of forces and moments, which would result in false negative margins of safety. In addition to forces and moments, any scalar quantity output in a PATRAN report file may be evaluated with this software. This software was originally written to fill an urgent need during the structural analysis of the Ares I-X Interstage segment. As such, it was coded in a straightforward manner with no effort made to optimize or minimize code or to develop a graphical user interface.
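
    A hypothetical sketch of the bookkeeping such a tool automates (the file layout and column names are my assumptions, not ELDAP's): scan every element/load-case row and, per connection, keep the case with the largest shear resultant together with its consistent companion loads, so mixed-and-matched force/moment combinations never enter the margin calculation.

        import csv, math

        def worst_cases(report_csv):
            # assumed columns: elem, case, fx, fy, fz, mx, my, mz
            worst = {}   # element id -> (shear resultant, full row)
            with open(report_csv, newline="") as f:
                for row in csv.DictReader(f):
                    shear = math.hypot(float(row["fx"]), float(row["fy"]))
                    eid = row["elem"]
                    if eid not in worst or shear > worst[eid][0]:
                        # keep the whole row so companion loads stay consistent
                        worst[eid] = (shear, row)
            return worst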

  9. Project Report: Automatic Sequence Processor Software Analysis

    Science.gov (United States)

    Benjamin, Brandon

    2011-01-01

    The Mission Planning and Sequencing (MPS) element of Multi-Mission Ground System and Services (MGSS) provides space missions with multi-purpose software to plan spacecraft activities, sequence spacecraft commands, and then integrate these products and execute them on spacecraft. The Jet Propulsion Laboratory (JPL) is currently flying many missions. The processes for building, integrating, and testing the multi-mission uplink software need to be improved to meet the needs of the missions and the operations teams that command the spacecraft. The Multi-Mission Sequencing Team is responsible for collecting and processing the observations, experiments and engineering activities that are to be performed on a selected spacecraft. The collection of these activities is called a sequence, and ultimately a sequence becomes a sequence of spacecraft commands. The operations teams check the sequence to make sure that no constraints are violated. The workflow process involves sending a program start command, which activates the Automatic Sequence Processor (ASP). The ASP is currently a file-based system comprising scripts written in Perl, C shell and awk. Once this start process is complete, the system checks for errors and aborts if there are any; otherwise the system converts the commands to binary and then sends the resultant information to be radiated to the spacecraft.

  10. The ATLAS fast tracker processor design

    CERN Document Server

    Volpi, Guido; Albicocco, Pietro; Alison, John; Ancu, Lucian Stefan; Anderson, James; Andari, Nansi; Andreani, Alessandro; Andreazza, Attilio; Annovi, Alberto; Antonelli, Mario; Asbah, Needa; Atkinson, Markus; Baines, J; Barberio, Elisabetta; Beccherle, Roberto; Beretta, Matteo; Biesuz, Nicolo Vladi; Blair, R E; Bogdan, Mircea; Boveia, Antonio; Britzger, Daniel; Bryant, Partick; Burghgrave, Blake; Calderini, Giovanni; Camplani, Alessandra; Cavaliere, Viviana; Cavasinni, Vincenzo; Chakraborty, Dhiman; Chang, Philip; Cheng, Yangyang; Citraro, Saverio; Citterio, Mauro; Crescioli, Francesco; Dawe, Noel; Dell'Orso, Mauro; Donati, Simone; Dondero, Paolo; Drake, G; Gadomski, Szymon; Gatta, Mauro; Gentsos, Christos; Giannetti, Paola; Gkaitatzis, Stamatios; Gramling, Johanna; Howarth, James William; Iizawa, Tomoya; Ilic, Nikolina; Jiang, Zihao; Kaji, Toshiaki; Kasten, Michael; Kawaguchi, Yoshimasa; Kim, Young Kee; Kimura, Naoki; Klimkovich, Tatsiana; Kolb, Mathis; Kordas, K; Krizka, Karol; Kubota, T; Lanza, Agostino; Li, Ho Ling; Liberali, Valentino; Lisovyi, Mykhailo; Liu, Lulu; Love, Jeremy; Luciano, Pierluigi; Luongo, Carmela; Magalotti, Daniel; Maznas, Ioannis; Meroni, Chiara; Mitani, Takashi; Nasimi, Hikmat; Negri, Andrea; Neroutsos, Panos; Neubauer, Mark; Nikolaidis, Spiridon; Okumura, Y; Pandini, Carlo; Petridou, Chariclia; Piendibene, Marco; Proudfoot, James; Rados, Petar Kevin; Roda, Chiara; Rossi, Enrico; Sakurai, Yuki; Sampsonidis, Dimitrios; Saxon, James; Schmitt, Stefan; Schoening, Andre; Shochet, Mel; Shoijaii, Jafar; Soltveit, Hans Kristian; Sotiropoulou, Calliope-Louisa; Stabile, Alberto; Swiatlowski, Maximilian J; Tang, Fukun; Taylor, Pierre Thor Elliot; Testa, Marianna; Tompkins, Lauren; Vercesi, V; Wang, Rui; Watari, Ryutaro; Zhang, Jianhong; Zeng, Jian Cong; Zou, Rui; Bertolucci, Federico

    2015-01-01

    The extended use of tracking information at the trigger level in the LHC is crucial for the trigger and data acquisition (TDAQ) system to fulfill its task. Precise and fast tracking is important to identify specific decay products of the Higgs boson or new phenomena, as well as to distinguish the contributions coming from the many collisions that occur at every bunch crossing. However, track reconstruction is among the most demanding tasks performed by the TDAQ computing farm; in fact, complete reconstruction at full Level-1 trigger accept rate (100 kHz) is not possible. In order to overcome this limitation, the ATLAS experiment is planning the installation of a dedicated processor, the Fast Tracker (FTK), which is aimed at achieving this goal. The FTK is a pipeline of high performance electronics, based on custom and commercial devices, which is expected to reconstruct, with high resolution, the trajectories of charged-particle tracks with a transverse momentum above 1 GeV, using the ATLAS inner tracker info...

  11. KIDNEY DISEASE VISUALIZED ON DIGITAL PROCESSOR

    Directory of Open Access Journals (Sweden)

    Rade R. Babić

    2013-09-01

    Radiological methods of examination in the diagnosis of pathological conditions and diseases of the urinary system are numerous and varied, reliable and dominant. With the use of digital techniques they have become indispensable and without competition among other diagnostic methods. The aim of this paper was to present the radiological image of pathological conditions and diseases of the urinary system diagnosed by intravenous urography using digital techniques, and to show the diagnostic possibilities and importance of digital techniques in diagnostic radiology. The paper analyzes pathological conditions and diseases of the kidney in a series of 3100 intravenous urographies (IVU) performed at the Radiology Center, Clinical Center Niš, during the period 2009-2012. Radiographic examination was performed on an X-ray device with a Schimadzu TV chain. IVU was performed according to the standard protocol. Contrast medium: Ultravist 370®. X-ray images were digitally processed in an Agfa CR-30 digital processor. The results are shown illustratively, by urographic images -- anomalies, calculosis, hydronephrosis, tumors and other pathological conditions and diseases of the urinary system. This paper presents numerous and varied pathological conditions and diseases of the urinary system. Among the valuable radiological examination methods, IVU has maintained a leading position. The use of digital techniques has made IVU a faster, easier and more efficient method of examination, while the obtained urograms are of satisfactory quality with adequate contrast visualization of the urinary system.

  12. Slime mould processors, logic gates and sensors.

    Science.gov (United States)

    Adamatzky, A

    2015-07-28

    A heterotic, or hybrid, computation implies that two or more substrates of different physical nature are merged into a single device with indistinguishable parts. These hybrid devices then undertake coherent acts on programmable and sensible processing of information. We study the potential of heterotic computers using slime mould acting under the guidance of chemical, mechanical and optical stimuli. Plasmodium of acellular slime mould Physarum polycephalum is a gigantic single cell visible to the unaided eye. The cell shows a rich spectrum of behavioural morphological patterns in response to changing environmental conditions. Given data represented by chemical or physical stimuli, we can employ and modify the behaviour of the slime mould to make it solve a range of computing and sensing tasks. We overview results of laboratory experimental studies on prototyping of the slime mould morphological processors for approximation of Voronoi diagrams, planar shapes and solving mazes, and discuss logic gates implemented via collision of active growing zones and tactile responses of P. polycephalum. We also overview a range of electronic components--memristor, chemical, tactile and colour sensors--made of the slime mould. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  13. Comparison of the Gene Expression Profiles of Human Hematopoietic Stem Cells between Humans and a Humanized Xenograft Model.

    Science.gov (United States)

    Matsuzawa, Hideyuki; Matsushita, Hiromichi; Yahata, Takashi; Tanaka, Masayuki; Ando, Kiyoshi

    2017-04-20

    The aim of this study is to evaluate the feasibility of NOD/Shi-scid-IL2Rγnull (NOG) mice transplanted with human CD34+/CD38-/Lin-/low hematopoietic cells from cord blood (CB) as an experimental model of the gene expression in human hematopoiesis. We compared the gene expression of human CD34+/CD38-/Lin-/low cells from human bone marrow (BM) and from xenograft models. The microarray data revealed that 25 KEGG pathways were extracted from the comparison of human CD34+/CD38-/Lin-/low HSCs between CB and BM, and that 17 of them--which were mostly related to cellular survival, RNA metabolism and lymphoid development--were shared with the xenograft model. When the probes that were commonly altered in CD34+/CD38-/Lin-/low cells from both human and xenograft BM were analyzed, most of them, including the genes related to hypoxia, hematopoietic differentiation, epigenetic modification, translation initiation, and RNA degradation, were downregulated. These alterations of gene expression suggest a reduced differentiation capacity and likely include key alterations of gene expression for settlement of CB CD34+/CD38-/Lin-/low cells in BM. Our findings demonstrate that the xenograft model of human CB CD34+/CD38-/Lin-/low cells using NOG mice was useful, at least in part, for the evaluation of the gene expression profile of human hematopoietic stem cells.

  14. Modeling human disease using organotypic cultures

    DEFF Research Database (Denmark)

    Schweiger, Pawel J; Jensen, Kim B

    2016-01-01

    Reliable disease models are needed in order to improve quality of healthcare. This includes gaining better understanding of disease mechanisms, developing new therapeutic interventions and personalizing treatment. Up-to-date, the majority of our knowledge about disease states comes from in vivo...

  15. A computational model of human auditory signal processing and perception

    OpenAIRE

    Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten

    2008-01-01

    A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell t...

  16. Building a Shared Definitional Model of Long Duration Human Spaceflight

    Science.gov (United States)

    Arias, Diana; Orr, Martin; Whitmire, Alexandra; Leveton, Lauren; Sandoval, Luis

    2012-01-01

    Objective: To establish the need for a shared definitional model of long duration human spaceflight that would provide a framework and vision to facilitate communication, research and practice. In 1956, on the eve of human space travel, Hubertus Strughold first proposed a "simple classification of the present and future stages of manned flight" that identified key factors, risks and developmental stages for the evolutionary journey ahead. As we look to new destinations, we need a current shared working definitional model of long duration human space flight to help guide our path. Here we describe our preliminary findings and outline potential approaches for the future development of a definition and broader classification system.

  17. Humanized in vivo Model for Autoimmune Diabetes

    Science.gov (United States)

    2010-05-07

    guinea-pig polyclonal anti-insulin (1:100 dilution, Abcam Ab7842-500, Cambridge, MA) and a secondary goat anti-guinea-pig Alexa-fluor 568 (1:100 dilu... which is reported to accelerate experimental autoimmune encephalomyelitis (a mouse model of multiple sclerosis). Our reasoning was that, as T cells... HL, Sobel RA, Kuchroo VK. IL-10 is critical in the regulation of autoimmune encephalomyelitis as demonstrated by studies of IL-10- and IL-4

  18. Monte Carlo modeling of human tooth optical coherence tomography imaging

    International Nuclear Information System (INIS)

    Shi, Boya; Meng, Zhuo; Wang, Longzhi; Liu, Tiegen

    2013-01-01

    We present a Monte Carlo model for optical coherence tomography (OCT) imaging of human tooth. The model is implemented by combining the simulation of a Gaussian beam with simulation for photon propagation in a two-layer human tooth model with non-parallel surfaces through a Monte Carlo method. The geometry and the optical parameters of the human tooth model are chosen on the basis of the experimental OCT images. The results show that the simulated OCT images are qualitatively consistent with the experimental ones. Using the model, we demonstrate the following: firstly, two types of photons contribute to the information of morphological features and noise in the OCT image of a human tooth, respectively. Secondly, the critical imaging depth of the tooth model is obtained, and it is found to decrease significantly with increasing mineral loss, simulated as different enamel scattering coefficients. Finally, the best focus position is located below and close to the dental surface by analysis of the effect of focus positions on the OCT signal and critical imaging depth. We anticipate that this modeling will become a powerful and accurate tool for a preliminary numerical study of the OCT technique on diseases of dental hard tissue in human teeth. (paper)

  20. High-Performance Linear Algebra Processor using FPGA

    National Research Council Canada - National Science Library

    Johnson, J

    2004-01-01

    With recent advances in FPGA (Field Programmable Gate Array) technology it is now feasible to use these devices to build special purpose processors for floating point intensive applications that arise in scientific computing...

  1. Complexity of scheduling multiprocessor tasks with prespecified processor allocations

    NARCIS (Netherlands)

    Hoogeveen, J.A.; van de Velde, S.L.; van de Velde, S.L.; Veltman, Bart

    1995-01-01

    We investigate the computational complexity of scheduling multiprocessor tasks with prespecified processor allocations. We consider two criteria: minimizing schedule length and minimizing the sum of the task completion times. In addition, we investigate the complexity of these problems when precedence constraints are imposed.

  2. A Shared Memory Module for Asynchronous Arrays of Processors

    Directory of Open Access Journals (Sweden)

    Meeuwsen MichaelJ

    2007-01-01

    A shared memory module connecting multiple independently clocked processors is presented. The memory module itself is independently clocked, and supports hardware address generation, mutual exclusion, and multiple addressing modes. The architecture supports independent address generation and data generation/consumption by different processors, which increases efficiency and simplifies programming for many embedded and DSP tasks. Simultaneous access by different processors is arbitrated using a least-recently-serviced priority scheme. Simulations show high throughputs over a variety of memory loads. A standard cell implementation shares an 8 K-word SRAM among four processors, and can support a 64 K-word SRAM with no additional changes. It cycles at 555 MHz and occupies 1.2 mm² in 0.18 μm CMOS.
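
    The least-recently-serviced policy is easy to pin down with a small sketch (a software model written for illustration, not the standard-cell design): among the processors requesting in a given cycle, grant the one whose last grant is oldest.

        def make_lrs_arbiter(n_ports):
            order = list(range(n_ports))  # front = least recently serviced
            def grant(requests):
                # requests: sequence of bools, one per processor port
                for port in order:
                    if requests[port]:
                        order.remove(port)
                        order.append(port)  # now most recently serviced
                        return port
                return None                 # no requester this cycle
            return grant

        arbiter = make_lrs_arbiter(4)
        winner = arbiter([True, False, True, False])  # grants port 0 first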

  3. Fault Mitigation Schemes for Future Spaceflight Multicore Processors

    Science.gov (United States)

    Some, Rafi; Gostelow, Kim P.; Lai, John; Reder, Leonard; Alexander, James; Clement, Brad

    2012-01-01

    The goal of this work is to achieve fail-operational and graceful-degradation behavior of multicore processors in realistic flight mission scenarios, such as Mars Entry-Descent-Landing (EDL) and Primitive Body proximity operations.

  4. Nanofilm processors controlled by electrolyte flows of femtoliter volume.

    Science.gov (United States)

    Nolte, Marius; Knoll, Meinhard

    2013-06-25

    Nanofilm processors are a new kind of smart system based on the lateral self-oxidation of nanoscale aluminum films. The time dependency of these devices is controlled by electrolyte flows of femtoliter volume which can be modulated by different mechanisms. In this paper, we provide a deeper investigation of the electrolyte transport in the nanofilm processor and the different possibilities to control the aluminum oxidation velocity. A method for the in situ investigation of the acidic characteristic of the channel electrolyte is demonstrated. The obtained results form a set of instruments for constructing more complex electrolyte circuits and should allow the creation of nanofilm processors of arbitrary time dependence. Because the nanofilm processor combines different functional blocks and can operate in a self-sustained manner, without requiring batteries, this smart system may serve as a basis for many potential applications.

  5. Reconfigurable VLIW Processor for Software Defined Radio, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We will design and formally verify a VLIW processor that is radiation-hardened, and where the VLIW instructions consist of predicated RISC instructions from the...

  6. Assembly processor program converts symbolic programming language to machine language

    Science.gov (United States)

    Pelto, E. V.

    1967-01-01

    Assembly processor program converts symbolic programming language to machine language. This program translates symbolic codes into computer-understandable instructions, assigns locations in storage for successive instructions, and computes locations from symbolic addresses.

  7. Reconfigurable VLIW Processor for Software Defined Radio Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We will design and formally verify a VLIW processor that is radiation-hardened, and where the VLIW instructions consist of predicated RISC instructions from the...

  8. 2009 Survey of Gulf of Mexico Dockside Seafood Processors

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This survey gathered and analyze economic data from seafood processors throughout the states in the Gulf region. The survey sought to collect financial variables...

  9. Radiation Tolerant Software Defined Video Processor, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — MaXentric's is proposing a radiation tolerant Software Define Video Processor, codenamed SDVP, for the problem of advanced motion imaging in the space environment....

  10. A model for assessing human cognitive reliability in PRA studies

    International Nuclear Information System (INIS)

    Hannaman, G.W.; Spurgin, A.J.; Lukic, Y.

    1985-01-01

    This paper summarizes the status of a research project sponsored by EPRI as part of the Probabilistic Risk Assessment (PRA) technology improvement program and conducted by NUS Corporation to develop a model of Human Cognitive Reliability (HCR). The model was synthesized from features identified in a review of existing models. The model development was based on the hypothesis that the key factors affecting crew response times are separable. The inputs to the model consist of key parameters whose values can be determined by PRA analysts for each accident situation being assessed. The output is a set of curves which represent the probability of control room crew non-response as a function of time for different conditions affecting their performance. The non-response probability is then a contributor to the overall non-success of operating crews in achieving a functional objective identified in the PRA study. Because the data were sparse, simulator data and some small-scale tests were used to illustrate the calibration of interim HCR model coefficients for different types of cognitive processing. The model can potentially help PRA analysts make human reliability assessments more explicit. The model incorporates concepts from psychological models of human cognitive behavior, information from current collections of human reliability data sources, and crew response time data from simulator training exercises.
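
    The non-response curves referred to above are commonly given a three-coefficient Weibull form; the sketch below uses placeholder coefficients for illustration, not the calibrated EPRI values.

        import math

        def p_non_response(t, t_half, c_gamma=0.7, c_eta=0.4, c_beta=1.2):
            # HCR-style curve: probability the crew has NOT responded by
            # time t, normalized by the median response time t_half.
            # c_gamma, c_eta, c_beta are correlation coefficients that
            # depend on the type of cognitive processing (placeholders).
            x = t / t_half
            if x <= c_gamma:
                return 1.0
            return math.exp(-(((x - c_gamma) / c_eta) ** c_beta))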

  11. Lumped parametric model of the human ear for sound transmission.

    Science.gov (United States)

    Feng, Bin; Gan, Rong Z

    2004-09-01

    A lumped parametric model of the human auditory periphery consisting of six masses suspended with six springs and ten dashpots was proposed. This model will provide the quantitative basis for the construction of a physical model of the human middle ear. The lumped model parameters were first identified using published anatomical data, and then determined through a parameter optimization process. The transfer function of the middle ear obtained from human temporal bone experiments with laser Doppler interferometers was used for creating the target function during the optimization process. It was found that, among the 14 spring and dashpot parameters, there were five which had pronounced effects on the dynamic behavior of the model. A detailed discussion of the sensitivity of those parameters is provided, with appropriate applications for sound transmission in the ear. We expect that the methods for characterizing the lumped model of the human ear and the model parameters will be useful for theoretical modeling of ear function and construction of the ear physical model.
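
    As a toy illustration of how such lumped parameters shape the optimization target (a single-stage reduction written for this purpose, not the paper's six-mass model), one mass-spring-dashpot stage already yields a middle-ear-like velocity transfer function.

        import numpy as np

        def velocity_transfer(freqs_hz, m=25e-6, k=800.0, c=0.05):
            # |V/F| for m*x'' + c*x' + k*x = F: mass m (kg), spring k
            # (N/m), dashpot c (N.s/m); values illustrative, not fitted.
            w = 2 * np.pi * np.asarray(freqs_hz, dtype=float)
            return np.abs(1j * w / (k - m * w**2 + 1j * w * c))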

  12. Modelling the basic error tendencies of human operators

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

    The paper outlines the primary structural features of human cognition: a limited, serial workspace interacting with a parallel distributed knowledge base. It is argued that the essential computational features of human cognition - to be captured by an adequate operator model - reside in the mechanisms by which stored knowledge structures are selected and brought into play. Two such computational 'primitives' are identified: similarity-matching and frequency-gambling. These two retrieval heuristics, it is argued, shape both the overall character of human performance (i.e. its heavy reliance on pattern-matching) and its basic error tendencies ('strong-but-wrong' responses, confirmation, similarity and frequency biases, and cognitive 'lock-up'). The various features of human cognition are integrated with a dynamic operator model capable of being represented in software form. This computer model, when run repeatedly with a variety of problem configurations, should produce a distribution of behaviours which, in toto, simulate the general character of operator performance. (author)
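    As a toy illustration of the two retrieval primitives (my own sketch, not the author's software), similarity-matching can be read as a cue-overlap score over stored knowledge units, with frequency-gambling breaking near-ties in favour of the more familiar unit, which is what yields 'strong-but-wrong' responses:

```python
# Knowledge units: (name, characteristic cue set, relative encounter frequency).
# All names and numbers are invented for illustration.
knowledge_base = [
    ("routine_pump_trip",  {"alarm", "pump", "low_flow"},                0.80),
    ("rare_valve_failure", {"alarm", "pump", "low_flow", "valve_stuck"}, 0.05),
]

def retrieve(cues, kb, tie_band=0.15):
    def similarity(unit_cues):
        return len(cues & unit_cues) / len(cues | unit_cues)  # Jaccard overlap
    scored = sorted(kb, key=lambda u: similarity(u[1]), reverse=True)
    best, runner_up = scored[0], scored[1]
    if similarity(best[1]) - similarity(runner_up[1]) < tie_band:
        # Frequency-gambling: within the tie band the familiar unit wins.
        return max(best, runner_up, key=lambda u: u[2])[0]
    return best[0]

cues = {"alarm", "pump", "low_flow", "valve_stuck"}
print(retrieve(cues, knowledge_base))                # -> rare_valve_failure
print(retrieve(cues, knowledge_base, tie_band=0.3))  # -> routine_pump_trip ('strong-but-wrong')
```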

  13. Modelling the basic error tendencies of human operators

    International Nuclear Information System (INIS)

    Reason, James

    1988-01-01

    The paper outlines the primary structural features of human cognition: a limited, serial workspace interacting with a parallel distributed knowledge base. It is argued that the essential computational features of human cognition - to be captured by an adequate operator model - reside in the mechanisms by which stored knowledge structures are selected and brought into play. Two such computational 'primitives' are identified: similarity-matching and frequency-gambling. These two retrieval heuristics, it is argued, shape both the overall character of human performance (i.e. its heavy reliance on pattern-matching) and its basic error tendencies ('strong-but-wrong' responses, confirmation, similarity and frequency biases, and cognitive 'lock-up'). The various features of human cognition are integrated with a dynamic operator model capable of being represented in software form. This computer model, when run repeatedly with a variety of problem configurations, should produce a distribution of behaviours which, in toto, simulate the general character of operator performance. (author)

  14. Immunology studies in non-human primate models of tuberculosis.

    Science.gov (United States)

    Flynn, JoAnne L; Gideon, Hannah P; Mattila, Joshua T; Lin, Philana Ling

    2015-03-01

    Non-human primates, primarily macaques, have been used to study tuberculosis for decades. However, in the last 15 years, this model has been refined substantially to allow careful investigations of the immune response and host-pathogen interactions in Mycobacterium tuberculosis infection. Low-dose challenge with fully virulent strains in cynomolgus macaques results in the full clinical spectrum seen in humans, including latent and active infection. Reagents from humans are usually cross-reactive with macaques, further facilitating the use of this model system to study tuberculosis. Finally, macaques develop the spectrum of granuloma types seen in humans, providing a unique opportunity to investigate bacterial and host factors at the local (lung and lymph node) level. Here, we review the past decade of immunology and pathology studies in macaque models of tuberculosis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. Human pluripotent stem cells: an emerging model in developmental biology.

    Science.gov (United States)

    Zhu, Zengrong; Huangfu, Danwei

    2013-02-01

    Developmental biology has long benefited from studies of classic model organisms. Recently, human pluripotent stem cells (hPSCs), including human embryonic stem cells and human induced pluripotent stem cells, have emerged as a new model system that offers unique advantages for developmental studies. Here, we discuss how studies of hPSCs can complement classic approaches using model organisms, and how hPSCs can be used to recapitulate aspects of human embryonic development 'in a dish'. We also summarize some of the recently developed genetic tools that greatly facilitate the interrogation of gene function during hPSC differentiation. With the development of high-throughput screening technologies, hPSCs have the potential to revolutionize gene discovery in mammalian development.

  16. A novel polar-based human face recognition computational model

    Directory of Open Access Journals (Sweden)

    Y. Zana

    2009-07-01

    Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance of FB filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, there was a higher human contrast sensitivity to radially than to angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model presented similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response patterns of two alternative models, based on local FB analysis and on raw luminance, strongly diverged from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.
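    A one-dimensional sketch of the band-pass step, using the order-0 radial Fourier-Bessel series (the study's stimuli are 2-D images; this reduced example only illustrates the expand-filter-resynthesize idea, with an arbitrary profile and an arbitrary coefficient band):

```python
import numpy as np
from scipy.special import jv, jn_zeros

n_terms = 64
lam = jn_zeros(0, n_terms)              # positive zeros of J0 (Dirichlet series)
r = np.linspace(0, 1, 512)

f = np.exp(-((r - 0.4) / 0.1) ** 2)     # arbitrary radial test profile

# FB coefficients: c_k = 2 / J1(lam_k)^2 * integral_0^1 f(r) J0(lam_k r) r dr
basis = jv(0, np.outer(lam, r))         # shape (n_terms, len(r))
c = 2.0 / jv(1, lam) ** 2 * np.trapz(f * basis * r, r, axis=1)

band = (np.arange(n_terms) >= 8) & (np.arange(n_terms) < 24)
f_filtered = (c * band) @ basis         # resynthesis from the kept band only

print("fraction of squared coefficient mass retained:",
      round(np.sum((c * band) ** 2) / np.sum(c ** 2), 3))
```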

  17. Joint Experiment on Scalable Parallel Processors (JESPP) Parallel Data Management

    Science.gov (United States)

    2006-05-01

    ...thousand entities to a WAN including multiple Beowulf clusters and hundreds of processors simulating hundreds of thousands of entities... To support larger simulations on Beowulf clusters, ISI implemented a distributed logger; data is logged locally on each processor running a simulator... development and execution effort (Lucas, 2003). Common SPPs include the IBM SP, SGI Origin, Cray T3E, and "Beowulf" Linux clusters. Traditionally...

  18. GA103: A microprogrammable processor for online filtering

    International Nuclear Information System (INIS)

    Calzas, A.; Danon, G.; Bouquet, B.

    1981-01-01

    GA103 is a 16-bit microprogrammable processor which emulates the PDP-11 instruction set. It is based on the Am2900 bit slices. It allows user-implemented microinstructions and the addition of hardwired processors. It will perform on-line filtering tasks in the NA14 experiment at CERN, based on the reconstruction of the transverse momentum of photons detected in a lead glass calorimeter. (orig.)

  19. On the Distribution of Control in Asynchronous Processor Architectures

    OpenAIRE

    Rebello, Vinod

    1997-01-01

    The effective performance of computer systems is to a large measure determined by the synergy between the processor architecture, the instruction set and the compiler. In the past, the sequencing of information within processor architectures has normally been synchronous: controlled centrally by a clock. However, this global signal could possibly limit the future gains in performance that can potentially be achieved through improvements in implementation technology. T...

  20. Fast parallel computation of polynomials using few processors

    DEFF Research Database (Denmark)

    Valiant, Leslie; Skyum, Sven

    1981-01-01

    It is shown that any multivariate polynomial that can be computed sequentially in C steps and has degree d can be computed in parallel in O((log d)(log C + log d)) steps using only (Cd)^O(1) processors.

  1. Fast Parallel Computation of Polynomials Using Few Processors

    DEFF Research Database (Denmark)

    Valiant, Leslie G.; Skyum, Sven; Berkowitz, S.

    1983-01-01

    It is shown that any multivariate polynomial of degree $d$ that can be computed sequentially in $C$ steps can be computed in parallel in $O((\log d)(\log C + \log d))$ steps using only $(Cd)^{O(1)}$ processors.
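    To make the bound concrete, here is a small worked instance (illustrative numbers, base-2 logarithms):

```latex
% Worked instance of the parallel-time bound (illustrative numbers)
d = 16,\quad C = 10^{6}:\qquad
O\bigl((\log d)(\log C + \log d)\bigr)
  = O\bigl(4 \cdot (20 + 4)\bigr) = O(96)\ \text{parallel steps,}
\qquad \text{on } (Cd)^{O(1)} = \bigl(1.6\times 10^{7}\bigr)^{O(1)}
\text{ processors, i.e., polynomially many.}
```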

  2. UA1 upgrade first-level calorimeter trigger processor

    International Nuclear Information System (INIS)

    Bains, N.; Charlton, D.; Ellis, N.; Garvey, J.; Gregory, J.; Jimack, M.P.; Jovanovic, P.; Kenyon, I.R.; Baird, S.A.; Campbell, D.; Cawthraw, M.; Coughlan, J.; Flynn, P.; Galagedera, S.; Grayer, G.; Halsall, R.; Shah, T.P.; Stephens, R.; Eisenhandler, E.; Fensome, I.; Landon, M.

    1989-01-01

    A new first-level trigger processor has been built for the UA1 experiment at the CERN SppS Collider. The processor exploits the fine granularity of the new UA1 uranium-TMP calorimeter to improve the selectivity of the trigger. The new electron trigger has improved hadron jet rejection, achieved by requiring low energy deposition around the electromagnetic cluster. A missing transverse energy trigger and a total energy trigger have also been implemented. (orig.)

  3. Modeling aspects of human memory for scientific study.

    Energy Technology Data Exchange (ETDEWEB)

    Caudell, Thomas P. (University of New Mexico); Watson, Patrick (University of Illinois - Champaign-Urbana Beckman Institute); McDaniel, Mark A. (Washington University); Eichenbaum, Howard B. (Boston University); Cohen, Neal J. (University of Illinois - Champaign-Urbana Beckman Institute); Vineyard, Craig Michael; Taylor, Shawn Ellis; Bernard, Michael Lewis; Morrow, James Dan; Verzi, Stephen J.

    2009-10-01

    Working with leading experts in the field of cognitive neuroscience and computational intelligence, SNL has developed a computational architecture that represents neurocognitive mechanisms associated with how humans remember experiences in their past. The architecture represents how knowledge is organized and updated through information from individual experiences (episodes) via the cortical-hippocampal declarative memory system. We compared the simulated behavioral characteristics with those of humans measured under well-established experimental standards, controlling for unmodeled aspects of human processing, such as perception. We used this knowledge to create robust simulations of human memory behaviors that should help move the scientific community closer to understanding how humans remember information. These behaviors were experimentally validated against actual human subjects, and the results were published. An important outcome of the validation process will be the joining of specific experimental testing procedures from the field of neuroscience with computational representations from the field of cognitive modeling and simulation.

  4. 3D virtual human rapid modeling method based on top-down modeling mechanism

    Directory of Open Access Journals (Sweden)

    LI Taotao

    2017-01-01

    Aiming to satisfy the vast demand for custom-made 3D virtual human characters and for rapid modeling in the field of 3D virtual reality, a new top-down rapid modeling method for virtual humans is put forward in this paper, based on a systematic analysis of the current state and shortcomings of virtual human modeling technology. After the top-level design of the virtual human hierarchical structure frame, modular expression of the virtual human and parameter design for each module are achieved level by level downwards. While the relationships of connectors and mapping constraints among the different modules are established, the definition of the size and texture parameters is also completed. A standardized process is meanwhile produced to support the practical operation of top-down rapid virtual human modeling. Finally, a modeling application, which takes a Chinese captain character as an example, is carried out to validate the virtual human rapid modeling method based on the top-down modeling mechanism. The result demonstrates high modeling efficiency and provides a new concept for 3D virtual human geometric modeling and texture modeling.

  5. Discrete time modelization of human pilot behavior

    Science.gov (United States)

    Cavalli, D.; Soulatges, D.

    1975-01-01

    This modelization starts from the following hypotheses: the pilot's behavior is a time-discrete process, he can perform only one task at a time, and his operating mode depends on the considered flight subphase. Pilot behavior was observed using an electro-oculometer and a simulator cockpit. A FORTRAN program has been elaborated using two strategies. The first one is a Markovian process in which the successive instrument readings are governed by a matrix of conditional probabilities. In the second one, the strategy is a heuristic process and the concepts of mental load and performance are described. The results of the two aspects have been compared with simulation data.
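    A minimal sketch of the first (Markovian) strategy, with an invented instrument set and transition matrix standing in for probabilities that would be estimated from electro-oculometer recordings per flight subphase:

```python
import random

# Row-stochastic transition matrix P: probability of the next instrument
# fixation given the current one. All names and numbers are illustrative.
instruments = ["horizon", "airspeed", "altimeter", "heading"]
P = [
    [0.40, 0.30, 0.20, 0.10],   # from "horizon"
    [0.50, 0.20, 0.20, 0.10],   # from "airspeed"
    [0.45, 0.15, 0.25, 0.15],   # from "altimeter"
    [0.60, 0.10, 0.10, 0.20],   # from "heading"
]

def scan_path(start, steps, seed=0):
    """Simulate a sequence of instrument fixations as a Markov chain."""
    rng = random.Random(seed)
    i = instruments.index(start)
    path = [start]
    for _ in range(steps):
        i = rng.choices(range(len(instruments)), weights=P[i])[0]
        path.append(instruments[i])
    return path

print(scan_path("horizon", 10))
```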

  6. High-Speed General Purpose Genetic Algorithm Processor.

    Science.gov (United States)

    Hoseini Alinodehi, Seyed Pourya; Moshfe, Sajjad; Saber Zaeimian, Masoumeh; Khoei, Abdollah; Hadidi, Khairollah

    2016-07-01

    In this paper, an ultrafast steady-state genetic algorithm processor (GAP) is presented. Due to the heavy computational load of genetic algorithms (GAs), they usually take a long time to find optimum solutions. Hardware implementation is a significant approach to overcome the problem by speeding up the GAs procedure. Hence, we designed a digital CMOS implementation of GA in [Formula: see text] process. The proposed processor is not bounded to a specific application. Indeed, it is a general-purpose processor, which is capable of performing optimization in any possible application. Utilizing speed-boosting techniques, such as pipeline scheme, parallel coarse-grained processing, parallel fitness computation, parallel selection of parents, dual-population scheme, and support for pipelined fitness computation, the proposed processor significantly reduces the processing time. Furthermore, by relying on a built-in discard operator the proposed hardware may be used in constrained problems that are very common in control applications. In the proposed design, a large search space is achievable through the bit string length extension of individuals in the genetic population by connecting the 32-bit GAPs. In addition, the proposed processor supports parallel processing, in which the GAs procedure can be run on several connected processors simultaneously.
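    A software sketch of the steady-state scheme that such a processor hard-wires (toy OneMax objective, invented parameters; the real GAP pipelines selection, crossover, mutation, and fitness evaluation in parallel hardware):

```python
import random

rng = random.Random(42)

def fitness(bits):
    """Toy OneMax objective: maximize the number of 1 bits."""
    return sum(bits)

def steady_state_step(pop, n_bits, p_mut=0.02):
    # Tournament selection of two parents (selection stage)
    parents = [max(rng.sample(pop, 3), key=fitness) for _ in range(2)]
    # One-point crossover (crossover stage)
    cut = rng.randrange(1, n_bits)
    child = parents[0][:cut] + parents[1][cut:]
    # Bit-flip mutation (mutation stage)
    child = [b ^ 1 if rng.random() < p_mut else b for b in child]
    # Steady-state replacement: the child displaces the worst individual
    worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
    pop[worst] = child

n_bits = 32
pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(24)]
for _ in range(2000):
    steady_state_step(pop, n_bits)
print("best fitness:", max(map(fitness, pop)), "of", n_bits)
```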

  7. PixonVision real-time video processor

    Science.gov (United States)

    Puetter, R. C.; Hier, R. G.

    2007-09-01

    PixonImaging LLC and DigiVision, Inc. have developed a real-time video processor, the PixonVision PV-200, based on the patented Pixon method for image deblurring and denoising, and DigiVision's spatially adaptive contrast enhancement processor, the DV1000. The PV-200 can process NTSC and PAL video in real time with a latency of one field (1/60th of a second), remove the effects of aerosol scattering from haze, mist, smoke, and dust, improve spatial resolution by up to 2x, decrease noise by up to 6x, and increase local contrast by up to 8x. A newer version of the processor, the PV-300, is now in prototype form and can handle high-definition video. Both the PV-200 and PV-300 are FPGA-based processors, which could be spun into ASICs if desired. Obvious applications of these processors include applications in the DOD (tanks, aircraft, and ships), homeland security, intelligence, surveillance, and law enforcement. If developed into an ASIC, these processors will be suitable for a variety of portable applications, including gun sights, night vision goggles, binoculars, and guided munitions. This paper presents a variety of examples of PV-200 processing, including examples appropriate to border security, battlefield applications, port security, and surveillance from unmanned aerial vehicles.

  8. A UNIX-based prototype biomedical virtual image processor

    International Nuclear Information System (INIS)

    Fahy, J.B.; Kim, Y.

    1987-01-01

    The authors have developed a multiprocess virtual image processor for the IBM PC/AT, in order to maximize image processing software portability for biomedical applications. An interprocess communication scheme, based on two-way metacode exchange, has been developed and verified for this purpose. Application programs call a device-independent image processing library, which transfers commands over a shared data bridge to one or more Autonomous Virtual Image Processors (AVIPs). Each AVIP runs as a separate process in the UNIX operating system, and implements the device-independent functions on the image processor to which it corresponds. Application programs can control multiple image processors at a time, can change the image processor configuration in use at any time, and are completely portable among image processors for which an AVIP has been implemented. Run-time speeds have been found to be acceptable for higher-level functions, although rather slow for lower-level functions, owing to the overhead associated with sending commands and data over the shared data bridge.
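    A toy sketch of the metacode idea (the opcode names and the queue transport here are invented; the original system used a shared data bridge on the PC/AT): the application issues device-independent commands, and an AVIP-like worker process decodes and executes them on "its" device:

```python
from multiprocessing import Process, Queue

def avip(name, commands, replies):
    """Worker standing in for one Autonomous Virtual Image Processor."""
    handlers = {
        # Device-specific implementations of device-independent opcodes
        "THRESHOLD": lambda img, t: [[1 if p > t else 0 for p in row] for row in img],
        "INVERT":    lambda img: [[255 - p for p in row] for row in img],
    }
    while True:
        opcode, args = commands.get()       # decode incoming metacode
        if opcode == "QUIT":
            break
        replies.put((name, handlers[opcode](*args)))

if __name__ == "__main__":
    commands, replies = Queue(), Queue()
    worker = Process(target=avip, args=("AVIP-0", commands, replies))
    worker.start()
    image = [[10, 200], [90, 160]]
    commands.put(("THRESHOLD", (image, 128)))  # device-independent library call
    print(replies.get())                       # -> ('AVIP-0', [[0, 1], [0, 1]])
    commands.put(("QUIT", ()))
    worker.join()
```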

  9. Review of trigger and on-line processors at SLAC

    International Nuclear Information System (INIS)

    Lankford, A.J.

    1984-07-01

    The role of trigger and on-line processors in reducing data rates to manageable proportions in e+e- physics experiments is defined not by high physics or background rates, but by the large event sizes of the general-purpose detectors employed. The rate of e+e- annihilation is low, and backgrounds are not high; yet the number of physics processes which can be studied is vast and varied. This paper begins by briefly describing the role of trigger processors in the e+e- context. The usual flow of the trigger decision process is illustrated with selected examples of SLAC trigger processing. The features of triggering at the SLC are mentioned, along with the trigger processing plans of the two SLC detectors: the Mark II and the SLD. The most common on-line processors at SLAC, the BADC, the SLAC Scanner Processor, the SLAC FASTBUS Controller, and the VAX CAMAC Channel, are discussed. Uses of the 168/E, 3081/E, and FASTBUS VAX processors are mentioned. The manner in which these processors are interfaced and the function they serve on-line is described. Finally, the accelerator control system for the SLC is outlined. This paper is a survey in nature, and hence relies heavily upon references to previous publications for detailed descriptions of the work mentioned here. 27 references, 9 figures, 1 table

  10. Reconfigurable signal processor designs for advanced digital array radar systems

    Science.gov (United States)

    Suarez, Hernan; Zhang, Yan (Rockee); Yu, Xining

    2017-05-01

    The new challenges originating from Digital Array Radar (DAR) demand a new generation of reconfigurable backend processors. The new FPGA devices can support much higher speed, more bandwidth, and more processing capability to meet the needs of the digital Line Replaceable Unit (LRU). This study focuses on using the latest Altera and Xilinx devices in an adaptive beamforming processor. Field-reprogrammable RF devices from Analog Devices are used as analog front-end transceivers. Different from other existing Software-Defined Radio transceivers on the market, this processor is designed for distributed adaptive beamforming in a networked environment. The following aspects of the novel radar processor will be presented: (1) a new system-on-chip architecture based on Altera's devices and an adaptive processing module, especially for adaptive beamforming and pulse compression; (2) successful implementation of generation-2 serial RapidIO data links on FPGA, which supports the VITA-49 radio packet format for large distributed DAR processing; (3) demonstration of the feasibility and capabilities of the processor in a MicroTCA-based, SRIO-switching backplane to support multichannel beamforming in real time; (4) application of this processor in ongoing radar system development projects, including OU's dual-polarized digital array radar, the planned new cylindrical array radars, and future airborne radars.
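    For orientation, here is a minimal (non-adaptive) narrowband beamformer for a uniform linear array, a simplified stand-in for the adaptive stage described above; the geometry and carrier frequency are invented for illustration:

```python
import numpy as np

c, f0 = 3e8, 3e9                    # propagation speed (m/s), carrier (Hz)
lam = c / f0
n_el, d = 8, lam / 2                # 8 elements at half-wavelength spacing
k = np.arange(n_el)

def steer(theta):
    """Array steering vector for arrival angle theta (radians)."""
    return np.exp(-1j * 2 * np.pi * d / lam * k * np.sin(theta))

w = steer(np.deg2rad(20)) / n_el    # delay-and-sum weights aimed at +20 deg

# Sweep the beam pattern and confirm the main lobe lands on the look angle
angles = np.deg2rad(np.linspace(-90, 90, 361))
pattern = np.abs(np.array([w.conj() @ steer(t) for t in angles]))
peak = np.rad2deg(angles[pattern.argmax()])
print(f"beam peak at {peak:.1f} deg")   # -> 20.0
```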

  11. Parallelization of applications for networks with homogeneous and heterogeneous processors

    International Nuclear Information System (INIS)

    Colombet, L.

    1994-01-01

    The aim of this thesis is to study and develop efficient methods for the parallelization of scientific applications on parallel computers with distributed memory. The first part presents the PVM (Parallel Virtual Machine) and MPI (Message Passing Interface) communication libraries. They allow implementation of programs on most parallel machines, but also on heterogeneous computer networks. This chapter illustrates the problems faced when trying to evaluate the performance of networks with heterogeneous processors. To evaluate such performance, the concepts of speed-up and efficiency have been modified and adapted to account for heterogeneity. The second part deals with a study of parallel application libraries such as ScaLAPACK and with the development of communication-masking techniques. The general concept is based on communication anticipation, in particular by pipelining message-sending operations. Experimental results on Cray T3D and IBM SP1 machines validate the theoretical studies performed on the basic algorithms of the libraries discussed above. Two examples of scientific applications are given: the first is a model of young stars for astrophysics and the other is a model of photon trajectories in the Compton effect. (J.S.). 83 refs., 65 figs., 24 tabs
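    One common way to adapt the two metrics to heterogeneity (a sketch of the idea; the thesis's exact definitions may differ) is to normalize efficiency by the total relative processor power rather than by the processor count:

```python
# Each processor i has relative power p_i, measured e.g. by a sequential
# benchmark in units of the fastest processor; efficiency is speed-up
# divided by the total power of the machine (1.0 = perfect use).
def heterogeneous_metrics(t_seq_fastest, t_parallel, powers):
    total_power = sum(powers)
    speedup = t_seq_fastest / t_parallel
    efficiency = speedup / total_power
    return speedup, efficiency

# Example (invented numbers): the fastest node solves the problem alone in
# 100 s; one full-speed plus three half-speed nodes solve it in 45 s.
s, e = heterogeneous_metrics(100.0, 45.0, [1.0, 0.5, 0.5, 0.5])
print(f"speed-up = {s:.2f}, efficiency = {e:.2f}")  # -> 2.22, 0.89
```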

  12. 7 CFR 160.50 - Reports to be made by accredited processors.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 false Reports to be made by accredited processors. § 160.50 Reports to be made by accredited processors. Each accredited processor shall furnish the Administrator such reports... processor to keep such records as may be necessary for him to submit correct reports, or failure by the...

  13. Observations and modeling of deterministic properties of human ...

    Indian Academy of Sciences (India)

    We show that the properties of both models are different from those obtained for Type-I intermittency in the presence of additive noise. The two models help to explain some of the features seen in the intermittency in human heart rate variability. Keywords. Heart rate variability; intermittency; non-stationary dynamical systems.

  14. Comparative homology modeling of human rhodopsin with several ...

    African Journals Online (AJOL)

    The molecular structure of rhodopsin has been studied by cryo-electron microscopic, Nuclear Magnetic Resonance (NMR) and X-ray crystallographic techniques in bovine. A humble effort has been ... Key words: Homology modeling, human rhodopsin, bovine templates, sequence alignment, model building, energy profiles.

  15. Human surrogate models of histaminergic and non-histaminergic itch

    DEFF Research Database (Denmark)

    Andersen, Hjalte Holm; Elberling, Jesper; Arendt-Nielsen, Lars

    2015-01-01

    Within the last decade understanding of the mechanistic basis of itch has improved significantly, resulting in the development of several human surrogate models of itch and related dysesthetic states. Well-characterized somatosensory models are useful in basic studies in healthy volunteers...

  16. Computational 3-D Model of the Human Respiratory System

    Science.gov (United States)

    We are developing a comprehensive, morphologically-realistic computational model of the human respiratory system that can be used to study the inhalation, deposition, and clearance of contaminants, while being adaptable for age, race, gender, and health/disease status. The model ...

  17. Growth Modeling of Human Mandibles using Non-Euclidean Metrics

    DEFF Research Database (Denmark)

    Hilger, Klaus Baggesen; Larsen, Rasmus; Wrobel, Mark

    2003-01-01

    From a set of 31 three-dimensional CT scans we model the temporal shape and size of the human mandible. Each anatomical structure is represented using 14851 semi-landmarks, and mapped into Procrustes tangent space. Exploratory subspace analyses are performed leading to linear models of mandible s...

  18. Child human model development: a hybrid validation approach

    NARCIS (Netherlands)

    Forbes, P.A.; Rooij, L. van; Rodarius, C.; Crandall, J.

    2008-01-01

    The current study presents a development and validation approach of a child human body model that will help understand child impact injuries and improve the biofidelity of child anthropometric test devices. Due to the lack of fundamental child biomechanical data needed to fully develop such models a

  19. Gene therapy in nonhuman primate models of human autoimmune disease

    NARCIS (Netherlands)

    t'Hart, B. A.; Vervoordeldonk, M.; Heeney, J. L.; Tak, P. P.

    2003-01-01

    Before autoimmune diseases in humans can be treated with gene therapy, the safety and efficacy of the used vectors must be tested in valid experimental models. Monkeys, such as the rhesus macaque or the common marmoset, provide such models. This publication reviews the state of the art in monkey

  20. Impact Analysis of a Biomechanical Model of the Human Thorax

    National Research Council Canada - National Science Library

    Jolly, Johannes

    2000-01-01

    .... The objective of the study was to create a viable finite element model of the human thorax. This objective was accomplished through the construction of a three-dimensional finite element model in DYNA3D, a finite element analysis program...

  1. Mathematical Analysis of a Model for Human Immunodeficiency ...

    African Journals Online (AJOL)

    ADOWIE PERE

    ABSTRACT: The objective of this paper is to present a mathematical model formulated to investigate the dynamics of human immunodeficiency virus (HIV). The disease-free equilibrium of the model was found to be locally and globally asymptotically stable. The endemic equilibrium point exists and it was discovered that the ...

  2. Numerical human model for impact and seating comfort

    NARCIS (Netherlands)

    Hoof, J.F.A.M. van; Lange, R. de; Verver, M.M.

    2003-01-01

    This paper presents a detailed numerical model of the human body that can be used to evaluate both safety and comfort aspects of vehicle interiors. The model is based on a combination of rigid body and finite element techniques to provide an optimal combination of computational efficiency and

  3. Establishment and characterization of a reconstructed Chinese human epidermis model.

    Science.gov (United States)

    Qiu, J; Zhong, L; Zhou, M; Chen, D; Huang, X; Chen, J; Chen, M; Ni, H; Cai, Z

    2016-02-01

    In vitro reconstructed human epidermis is a powerful tool for both basic research and industrial applications in dermatology, pharmacology and the cosmetic field. By growing keratinocytes of Chinese origin on a collagen matrix after a submerged culture followed by an air-liquid interface culture, an in vitro reconstructed Chinese human epidermis model was obtained. This Chinese epidermis model was further characterized. The reconstructed human epidermis model (China EpiSkin model) exhibits morphological features similar to native skin and shows a similar expression profile of proliferation (Ki67) and differentiation (K14 and K10 cytokeratins, filaggrin) markers. Corneodesmosomes, lamellar lipids, desmosomes, keratohyalin granules, keratin filaments and membrane-coating granules are also observed at the ultrastructural level. Moreover, the China EpiSkin model contains most of the major lipid classes normally found in native skin and could potentially present the properties of the skin barrier. More importantly, the model production achieves high reproducibility and low intra- and inter-batch variation. This is the first reconstructed Chinese human epidermis model reported to meet high quality standards under industrialized production criteria. The China EpiSkin model can be used for both skin research and safety assessment in vitro. © 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  4. A Systemic Cause Analysis Model for Human Performance Technicians

    Science.gov (United States)

    Sostrin, Jesse

    2011-01-01

    This article presents a systemic, research-based cause analysis model for use in the field of human performance technology (HPT). The model organizes the most prominent barriers to workplace learning and performance into a conceptual framework that explains and illuminates the architecture of these barriers that exist within the fabric of everyday…

  5. Human strategic reasoning in dynamic games: Experiments, logics, cognitive models

    NARCIS (Netherlands)

    Ghosh, Sujata; Halder, Tamoghna; Sharma, Khyati; Verbrugge, Rineke

    2015-01-01

    © Springer-Verlag Berlin Heidelberg 2015. This article provides a three-way interaction between experiments, logic and cognitive modelling so as to bring out a shared perspective among these diverse areas, aiming towards better understanding and better modelling of human strategic reasoning in

  6. Pattern-Recognition Processor Using Holographic Photopolymer

    Science.gov (United States)

    Chao, Tien-Hsin; Cammack, Kevin

    2006-01-01

    The proposed joint-transform optical correlator (JTOC) would be capable of operating as a real-time pattern-recognition processor. The key correlation-filter reading/writing medium of this JTOC would be an updateable holographic photopolymer. The high-resolution, high-speed characteristics of this photopolymer would enable pattern-recognition processing to occur at a speed three orders of magnitude greater than that of state-of-the-art digital pattern-recognition processors. There are many potential applications in biometric personal identification (e.g., using images of fingerprints and faces) and nondestructive industrial inspection. In order to appreciate the advantages of the proposed JTOC, it is necessary to understand the principle of operation of a conventional JTOC. In a conventional JTOC (shown in the upper part of the figure), a collimated laser beam passes through two side-by-side spatial light modulators (SLMs). One SLM displays a real-time input image to be recognized. The other SLM displays a reference image from a digital memory. A Fourier-transform lens is placed at its focal distance from the SLM plane, and a charge-coupled device (CCD) image detector is placed at the back focal plane of the lens for use as a square-law recorder. Processing takes place in two stages. In the first stage, the CCD records the interference pattern between the Fourier transforms of the input and reference images, and the pattern is then digitized and saved in a buffer memory. In the second stage, the reference SLM is turned off and the interference pattern is fed back to the input SLM. The interference pattern thus becomes Fourier-transformed, yielding at the CCD an image representing the joint-transform correlation between the input and reference images. This image contains a sharp correlation peak when the input and reference images are matched. The drawbacks of a conventional JTOC are the following: the CCD has low spatial resolution and is not an ideal square

  7. Meeting Human Reliability Requirements through Human Factors Design, Testing, and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    R. L. Boring

    2007-06-01

    In the design of novel systems, it is important for the human factors engineer to work in parallel with the human reliability analyst to arrive at the safest achievable design that meets design team safety goals and certification or regulatory requirements. This paper introduces the System Development Safety Triptych, a checklist of considerations for the interplay of human factors and human reliability through design, testing, and modeling in product development. This paper also explores three phases of safe system development, corresponding to the conception, design, and implementation of a system.

  8. The Serial Link Processor for the Fast TracKer (FTK) processor at ATLAS

    CERN Document Server

    Biesuz, Nicolo Vladi; The ATLAS collaboration; Luciano, Pierluigi; Magalotti, Daniel; Rossi, Enrico

    2015-01-01

    The Associative Memory (AM) system of the Fast Tracker (FTK) processor has been designed to perform pattern matching using the hit information of the ATLAS experiment silicon tracker. The AM is the heart of FTK and is mainly based on the use of ASICs (AM chips) purpose-designed to execute pattern matching with a high degree of parallelism. It finds track candidates at low resolution that are seeds for full-resolution track fitting. To solve the very challenging data traffic problems inside FTK, multiple board and chip designs have been performed. The currently proposed solution is named the “Serial Link Processor” and is based on an extremely powerful network of 2 Gb/s serial links. This paper reports on the design of the Serial Link Processor consisting of two types of boards, the Local Associative Memory Board (LAMB), a mezzanine where the AM chips are mounted, and the Associative Memory Board (AMB), a 9U VME board which holds and exercises four LAMBs. We report on the performance of the intermedia...

  9. The Serial Link Processor for the Fast TracKer (FTK) processor at ATLAS

    CERN Document Server

    Biesuz, Nicolo Vladi; The ATLAS collaboration; Luciano, Pierluigi; Magalotti, Daniel; Rossi, Enrico

    2015-01-01

    The Associative Memory (AM) system of the Fast Tracker (FTK) processor has been designed to perform pattern matching using the hit information of the ATLAS experiment silicon tracker. The AM is the heart of FTK and is mainly based on the use of ASICs (AM chips) designed to execute pattern matching with a high degree of parallelism. The AM system finds track candidates at low resolution that are seeds for a full resolution track fitting. To solve the very challenging data traffic problems inside FTK, multiple board and chip designs have been performed. The currently proposed solution is named the “Serial Link Processor” and is based on an extremely powerful network of 828 2 Gbit/s serial links for a total in/out bandwidth of 56 Gb/s. This paper reports on the design of the Serial Link Processor consisting of two types of boards, the Local Associative Memory Board (LAMB), a mezzanine where the AM chips are mounted, and the Associative Memory Board (AMB), a 9U VME board which holds and exercises four LAMBs. ...

  10. The Serial Link Processor for the Fast TracKer (FTK) processor at ATLAS

    CERN Document Server

    Andreani, A; The ATLAS collaboration; Beccherle, R; Beretta, M; Cipriani, R; Citraro, S; Citterio, M; Colombo, A; Crescioli, F; Dimas, D; Donati, S; Giannetti, P; Kordas, K; Lanza, A; Liberali, V; Luciano, P; Magalotti, D; Neroutsos, P; Nikolaidis, S; Piendibene, M; Sakellariou, A; Shojaii, S; Sotiropoulou, C-L; Stabile, A

    2014-01-01

    The Associative Memory (AM) system of the FTK processor has been designed to perform pattern matching using the hit information of the ATLAS silicon tracker. The AM is the heart of the FTK and it finds track candidates at low resolution that are seeds for full-resolution track fitting. To solve the very challenging data traffic problems inside the FTK, multiple designs and tests have been performed. The currently proposed solution is named the “Serial Link Processor” and is based on an extremely powerful network of 2 Gb/s serial links. This paper reports on the design of the Serial Link Processor consisting of the AM chip, an ASIC designed and optimized to perform pattern matching, and two types of boards, the Local Associative Memory Board (LAMB), a mezzanine where the AM chips are mounted, and the Associative Memory Board (AMB), a 9U VME board which holds and exercises four LAMBs. Special relevance will be given to the AM chip design, which includes two custom cells optimized for low power consumption. We repo...

  11. Quantitative Modeling of Human-Environment Interactions in Preindustrial Time

    Science.gov (United States)

    Sommer, Philipp S.; Kaplan, Jed O.

    2017-04-01

    Quantifying human-environment interactions and anthropogenic influences on the environment prior to the Industrial Revolution is essential for understanding the current state of the earth system. This is particularly true for the terrestrial biosphere, but marine ecosystems and even climate were likely modified by human activities centuries to millennia ago. Direct observations are, however, very sparse in space and time, especially as one considers prehistory. Numerical models are therefore essential to produce a continuous picture of human-environment interactions in the past. Agent-based approaches, while widely applied to quantifying human influence on the environment in localized studies, are unsuitable for global spatial domains and Holocene timescales because of computational demands and large parameter uncertainty. Here we outline a new paradigm for the quantitative modeling of human-environment interactions in preindustrial time that is adapted to the global Holocene. Rather than attempting to simulate agency directly, the model is informed by a suite of characteristics describing those things about society that cannot be predicted on the basis of environment, e.g., diet, presence of agriculture, or range of animals exploited. These categorical data are combined with the properties of the physical environment in a coupled human-environment model. The model is, at its core, a dynamic global vegetation model with a module for simulating crop growth that is adapted for preindustrial agriculture. This allows us to simulate yield and calories for feeding both humans and their domesticated animals. We couple this basic caloric availability with a simple demographic model to calculate potential population, and, constrained by labor requirements and land limitations, we create scenarios of land use and land cover on a moderate-resolution grid. We further implement a feedback loop where anthropogenic activities lead to changes in the properties of the physical
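    A toy version of the caloric-constraint step described above (all numbers invented): grid-cell crop yield is converted to edible calories, which bound the population a cell could feed before labor and land constraints are applied:

```python
KCAL_PER_KG_GRAIN = 3000.0      # rough energy density of cereal grain
KCAL_PER_PERSON_DAY = 2300.0    # subsistence requirement per person

def potential_population(yield_kg_per_ha, cultivated_ha,
                         seed_and_feed_fraction=0.35):
    """People a grid cell could feed from its simulated crop yield."""
    edible_kcal = (yield_kg_per_ha * cultivated_ha *
                   KCAL_PER_KG_GRAIN * (1 - seed_and_feed_fraction))
    return edible_kcal / (KCAL_PER_PERSON_DAY * 365)

# Example: 500 ha under cultivation at 1000 kg/ha -> roughly 1160 people
print(round(potential_population(1000.0, 500.0)))
```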

  12. Geometry Modeling Program Implementation of Human Hip Tissue

    Directory of Open Access Journals (Sweden)

    WANG Mo-nan

    2017-10-01

    Aiming to design simulation software for human tissue modeling and analysis, Visual Studio 2010 was selected as the development tool to build 3D human tissue reconstruction software in C++. It can be used standalone or as a module of a virtual surgery system. The system includes medical image segmentation and 3D reconstruction modules, and realizes model visualization. This software has been used to reconstruct the hip muscles, femur and hip bone accurately. The results show these geometry models can simulate the structure of hip tissues.

  13. Geometry Modeling Program Implementation of Human Hip Tissue

    Directory of Open Access Journals (Sweden)

    WANG Monan

    2017-04-01

    Aiming to design simulation software for human tissue modeling and analysis, Visual Studio 2010 was selected as the development tool to build 3D human tissue reconstruction software in C++. It can be used standalone or as a module of a virtual surgery system. The system includes medical image segmentation and 3D reconstruction modules, and realizes model visualization. This software has been used to reconstruct the hip muscles, femur and hip bone accurately. The results show these geometry models can simulate the structure of hip tissues.

  14. Modeling Human Behavior to Anticipate Insider Attacks

    Directory of Open Access Journals (Sweden)

    Ryan E Hohimer

    2011-01-01

    The insider threat ranks among the most pressing cyber-security challenges that threaten government and industry information infrastructures. To date, no systematic methods have been developed that provide a complete and effective approach to prevent data leakage, espionage, and sabotage. Current practice is forensic in nature, relegating to the analyst the bulk of the responsibility to monitor, analyze, and correlate an overwhelming amount of data. We describe a predictive modeling framework that integrates a diverse set of data sources from the cyber domain, as well as inferred psychological/motivational factors that may underlie malicious insider exploits. This comprehensive threat assessment approach provides automated support for the detection of high-risk behavioral "triggers" to help focus the analyst's attention and inform the analysis. Designed to be domain-independent, the system may be applied to many different threat and warning analysis/sense-making problems.

  15. MODELING ENERGY EXPENDITURE AND OXYGEN CONSUMPTION IN HUMAN EXPOSURE MODELS: ACCOUNTING FOR FATIGUE AND EPOC

    Science.gov (United States)

    Human exposure and dose models often require a quantification of oxygen consumption for a simulated individual. Oxygen consumption is dependent on the modeled individual's physical activity level as described in an activity diary. Activity level is quantified via standardized val...

  16. Integrating modelling and smart sensors for environmental and human health.

    Science.gov (United States)

    Reis, Stefan; Seto, Edmund; Northcross, Amanda; Quinn, Nigel W T; Convertino, Matteo; Jones, Rod L; Maier, Holger R; Schlink, Uwe; Steinle, Susanne; Vieno, Massimo; Wimberly, Michael C

    2015-12-01

    Sensors are becoming ubiquitous in everyday life, generating data at an unprecedented rate and scale. However, models that assess impacts of human activities on environmental and human health have typically been developed in contexts where data scarcity is the norm. Models are essential tools to understand processes, identify relationships, associations and causality, formalize stakeholder mental models, and to quantify the effects of prevention and interventions. They can help to explain data, as well as inform the deployment and location of sensors by identifying hotspots and areas of interest where data collection may achieve the best results. We identify a paradigm shift in how the integration of models and sensors can contribute to harnessing 'Big Data' and, more importantly, make the vital step from 'Big Data' to 'Big Information'. In this paper, we illustrate current developments and identify key research needs using human and environmental health challenges as an example.

  17. Use of electromyography measurement in human body modeling

    Directory of Open Access Journals (Sweden)

    Valdmanová L.

    2011-06-01

    The aim of this study is to test the use of a human body model for muscle activity computation. This paper compares measured and simulated muscle activities. Active states of the biceps brachii muscle are monitored by electromyography (EMG) in a given position and under successively increasing loads. The same conditions are used for simulation with a human body model (Hynčík, L., Rigid Body Based Human Model for Crash Test Purposes, Engineering Mechanics, 5(8) (2001) 1–6). This model consists of rigid body segments connected by kinematic joints and involves all major muscle bundles. Biceps brachii active states are evaluated by a special muscle balance solver. The simulation results show acceptable correlation with the experimental results, indicating that the validation procedure for muscle activity determination is usable.

  18. A Systems Model for Teaching Human Resource Management

    Directory of Open Access Journals (Sweden)

    George R. Greene

    2013-07-01

    Efficient and effective human resource management is a complex, involved, and interactive process. This article presents and discusses a unique systems approach model for teaching human resource (people) management processes, and the important inter-relationships within that process. The model contains two unique components related to key sub-processes: incentives management and performance evaluation. We have not observed a model applying a systems thinking paradigm presented in any textbook, journal article, business publication, or other literature addressing the topic. For nearly three decades, the model has been used in teaching a comprehensive, meaningful understanding of the human resource management process that can be effectively implemented in both corporate and academic learning venues.

  19. A comparative study of seven human cochlear filter models.

    Science.gov (United States)

    Saremi, Amin; Beutelmann, Rainer; Dietz, Mathias; Ashida, Go; Kretzberg, Jutta; Verhulst, Sarah

    2016-09-01

    Auditory models have been developed for decades to simulate characteristics of the human auditory system, but it is often unknown how well auditory models compare to each other or perform in tasks they were not primarily designed for. This study systematically analyzes predictions of seven publicly-available cochlear filter models in response to a fixed set of stimuli to assess their capabilities of reproducing key aspects of human cochlear mechanics. The following features were assessed at frequencies of 0.5, 1, 2, 4, and 8 kHz: cochlear excitation patterns, nonlinear response growth, frequency selectivity, group delays, signal-in-noise processing, and amplitude modulation representation. For each task, the simulations were compared to available physiological data recorded in guinea pigs and gerbils as well as to human psychoacoustics data. The presented results provide application-oriented users with comprehensive information on the advantages, limitations and computation costs of these seven mainstream cochlear filter models.
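    For a concrete reference point, one widely used cochlear filter approximation is the fourth-order gammatone; the sketch below (a generic illustration, not one of the seven models compared in the paper) evaluates its bandwidth at the study's probe frequencies:

```python
import numpy as np

def erb(fc):
    """Equivalent rectangular bandwidth (Glasberg & Moore, 1990), in Hz."""
    return 24.7 * (4.37 * fc / 1000.0 + 1.0)

def gammatone_ir(fc, fs=44100, dur=0.025, order=4):
    """Impulse response g(t) = t^(n-1) exp(-2 pi b t) cos(2 pi fc t)."""
    t = np.arange(int(dur * fs)) / fs
    b = 1.019 * erb(fc)                 # standard bandwidth scaling
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))

for fc in (500, 1000, 2000, 4000, 8000):
    print(f"fc = {fc:5d} Hz, ERB = {erb(fc):7.1f} Hz, "
          f"IR length = {len(gammatone_ir(fc))} samples")
```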

  20. Modelling human myoblasts survival upon xenotransplantation into immunodeficient mouse muscle.

    Science.gov (United States)

    Praud, Christophe; Vauchez, Karine; Zongo, Pascal; Vilquin, Jean-Thomas

    2018-03-15

    Cell transplantation has been attempted in several clinical indications of genetic or acquired muscular diseases, but therapeutic successes were mixed. To understand and improve the yields of tissue regeneration, we aimed at modelling the fate of CD56-positive human myoblasts after transplantation. Using immunodeficient severe combined immunodeficiency (SCID) mice as recipients, we assessed the survival, integration and satellite cell niche occupancy of human myoblasts by a triple immunohistochemical labelling of laminin, dystrophin and human lamin A/C. The counts were integrated into a classical mathematical decline equation. After injection, human cells were essentially located in the endomysium, then disappeared progressively from D0 to D28. The final number of integrated human nuclei was grossly determined by D2 after injection, suggesting that no further efficient fusion between donor myoblasts and host fibers occurs after the resolution of the local damage created by needle insertion. Almost 1% of implanted human cells occupied a satellite-like cell niche. Our mathematical model, validated by histological counting, provided a reliable quantitative estimate of human myoblast survival and/or incorporation into SCID muscle fibers. The information brought by histological labelling and this mathematical model is complementary. Copyright © 2018 Elsevier Inc. All rights reserved.
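    A minimal sketch of fitting such a decline equation, assuming a single-exponential form N(t) = N0·exp(−kt) and using invented placeholder counts rather than the paper's data:

```python
import numpy as np

days   = np.array([0, 2, 7, 14, 28])                   # sampling days
counts = np.array([100000, 42000, 20000, 9000, 2500])  # surviving human nuclei

# Linear regression on log counts: log N = log N0 - k t
k, log_n0 = np.polyfit(days, np.log(counts), 1) * np.array([-1, 1])
print(f"N0 ~ {np.exp(log_n0):.0f}, decay rate k ~ {k:.3f} per day, "
      f"half-life ~ {np.log(2) / k:.1f} days")
```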