WorldWideScience

Sample records for model human processor

  1. Mathematically modelling the effects of pacing, finger strategies and urgency on numerical typing performance with queuing network model human processor.

    Science.gov (United States)

    Lin, Cheng-Jhe; Wu, Changxu

    2012-01-01

    Numerical typing is an important perceptual-motor task whose performance may vary with pacing, finger strategies and the urgency of the situation. The queuing network-model human processor (QN-MHP), a computational architecture, allows the performance of perceptual-motor tasks to be modelled mathematically. The current study enhanced QN-MHP with a top-down control mechanism, a closed-loop movement control and a finger-related motor control mechanism to account for task interference, endpoint reduction, and force deficit, respectively. The model also incorporated neuromotor noise theory to quantify endpoint variability in typing. The model predictions of typing speed and accuracy were validated against Lin and Wu's (2011) experimental results. The resultant root-mean-squared errors were 3.68% with a correlation of 95.55% for response time, and 35.10% with a correlation of 96.52% for typing accuracy. The enhanced QN-MHP thus mathematically accounts for the effects of pacing, finger strategies and internalised urgency on numerical typing performance, and can be applied to provide optimal speech rates for voice-synthesis systems and to suggest optimal numerical keyboard designs for different numerical typing situations, including under urgency.
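
    A minimal sketch of the two validation statistics quoted above (percentage root-mean-squared error and correlation); the arrays are hypothetical placeholders, not the data of Lin and Wu (2011):

    ```python
    import numpy as np

    def rmse_percent(predicted, observed):
        """RMSE normalized by the mean observation, in percent."""
        predicted, observed = np.asarray(predicted, float), np.asarray(observed, float)
        return 100.0 * np.sqrt(np.mean((predicted - observed) ** 2)) / np.mean(observed)

    def correlation_percent(predicted, observed):
        """Pearson correlation between predictions and observations, in percent."""
        return 100.0 * np.corrcoef(predicted, observed)[0, 1]

    # Hypothetical per-condition mean response times (ms) -- illustrative only.
    observed = [520.0, 610.0, 700.0, 805.0, 930.0]
    predicted = [505.0, 630.0, 690.0, 820.0, 915.0]
    print(f"RMSE: {rmse_percent(predicted, observed):.2f}%")
    print(f"r:    {correlation_percent(predicted, observed):.2f}%")
    ```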

  2. Models of Communication for Multicore Processors

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Sørensen, Rasmus Bo; Sparsø, Jens

    2015-01-01

    To efficiently use multicore processors we need to ensure that almost all data communication stays on chip, i.e., the bits moved between tasks executing on different processor cores do not leave the chip. Different forms of on-chip communication are supported by different hardware mechanisms, e.g., shared caches with cache coherency protocols, core-to-core networks-on-chip, and shared scratchpad memories. In this paper we explore the different hardware mechanisms for on-chip communication and how they support or favor different models of communication. Furthermore, we discuss the usability of the different models of communication for real-time systems.

  4. Model of computation for Fourier optical processors

    Science.gov (United States)

    Naughton, Thomas J.

    2000-05-01

    We present a novel and simple theoretical model of computation that captures what we believe are the most important characteristics of an optical Fourier transform processor. We use this abstract model to reason about the computational properties of the physical systems it describes. We define a grammar for our model's instruction language, and use it to write algorithms for well-known filtering and correlation techniques. We also suggest suitable computational complexity measures that could be used to analyze any coherent optical information processing technique, described with the language, for efficiency. Our choice of instruction language allows us to argue that algorithms describable with this model should have optical implementations that do not require a digital electronic computer to act as a master unit. By simulating a well-known model of computation from computer theory, we investigate the general-purpose capabilities of analog optical processors.

  5. Keystone Business Models for Network Security Processors

    Directory of Open Access Journals (Sweden)

    Arthur Low

    2013-07-01

    Network security processors are critical components of high-performance systems built for cybersecurity. Development of a network security processor requires multi-domain experience in semiconductors and complex software security applications, and multiple iterations of both software and hardware implementations. Limited by the business models in use today, such an arduous task can be undertaken only by large incumbent companies and government organizations. Neither the “fabless semiconductor” model nor the silicon intellectual-property licensing (“IP-licensing”) model allows small technology companies to successfully compete. This article describes an alternative approach that produces an ongoing stream of novel network security processors for niche markets through continuous innovation by both large and small companies. This approach, referred to here as the “business ecosystem model for network security processors”, includes a flexible and reconfigurable technology platform, a “keystone” business model for the company that maintains the platform architecture, and an extended ecosystem of companies that both contribute to and share in the value created by innovation. New opportunities for business model innovation by participating companies are made possible by the ecosystem model. This ecosystem model builds on: (i) the lessons learned from the experience of the first author as a senior integrated circuit architect for providers of public-key cryptography solutions and as the owner of a semiconductor startup, and (ii) the latest scholarly research on technology entrepreneurship, business models, platforms, and business ecosystems. This article will be of interest to all technology entrepreneurs, but it will be of particular interest to owners of small companies that provide security solutions and to specialized security professionals seeking to launch their own companies.

  6. A processor sharing model for wireless data communication

    DEFF Research Database (Denmark)

    Hansen, Martin Bøgsted

    ... occupies these servers for an exponentially distributed holding time with mean $1/\mu$. However, when the requested resources are lacking, some Time Division Multiple Access (TDMA) implementations for mobile data communication, like High Speed Circuit Switched Data (HSCSD) and General Packet Radio Service (GPRS), allow already established resources for data connections to be downgraded so that a new connection can be established. As noted by Litjens and Boucherie (2002), this resembles classical processor sharing models, and in this spirit we formulate a variant of the processor sharing model with a limited...
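
    As an illustration of the processor-sharing idea invoked here, below is a toy discrete-event simulation of an egalitarian processor-sharing queue; a sketch under assumed Poisson arrivals and exponential service requirements with mean $1/\mu$ (the parameter names are ours, not the paper's):

    ```python
    import random

    def simulate_ps(lam=0.8, mu=1.0, horizon=50_000.0, seed=1):
        """Toy egalitarian processor-sharing queue: unit capacity is split
        equally among active jobs. Returns the mean sojourn time."""
        rng = random.Random(seed)
        t, next_arrival = 0.0, rng.expovariate(lam)
        jobs = {}                      # job id -> [remaining work, arrival time]
        next_id, completed, total_sojourn = 0, 0, 0.0
        while t < horizon:
            n = len(jobs)
            if n:
                jid = min(jobs, key=lambda j: jobs[j][0])
                t_dep = t + max(jobs[jid][0], 0.0) * n   # least work finishes first
            else:
                jid, t_dep = None, float("inf")
            if next_arrival <= t_dep:                    # next event is an arrival
                dt, t = next_arrival - t, next_arrival
                for v in jobs.values():
                    v[0] -= dt / n                       # each job served at rate 1/n
                jobs[next_id] = [rng.expovariate(mu), t]
                next_id += 1
                next_arrival = t + rng.expovariate(lam)
            else:                                        # next event is a departure
                dt, t = t_dep - t, t_dep
                for v in jobs.values():
                    v[0] -= dt / n
                completed += 1
                total_sojourn += t - jobs.pop(jid)[1]
        return total_sojourn / completed

    # For M/M/1-PS the mean sojourn time is 1/(mu - lam), here 5.0:
    print(simulate_ps())
    ```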

  7. Optical linear algebra processors: noise and error-source modeling.

    Science.gov (United States)

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAP's) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.
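
    The abstract does not give the error expressions, but the flavor of such a digital simulation can be sketched as follows; the error types and magnitudes below are illustrative assumptions, not the paper's model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def optical_matvec(M, x, gain_error=0.02, detector_sigma=0.01):
        """Simulate an optical matrix-vector multiply y = Mx in which each
        modulator cell has a multiplicative gain error and each detector adds
        Gaussian read noise. Returns (ideal output, noisy output)."""
        gains = 1.0 + gain_error * rng.standard_normal(M.shape)
        y_ideal = M @ x
        y_noisy = (M * gains) @ x + detector_sigma * rng.standard_normal(len(M))
        return y_ideal, y_noisy

    M, x = rng.random((4, 4)), rng.random(4)
    ideal, noisy = optical_matvec(M, x)
    print("relative output error:", np.linalg.norm(noisy - ideal) / np.linalg.norm(ideal))
    ```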

  8. Optical linear algebra processors - Noise and error-source modeling

    Science.gov (United States)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  9. Accelerate Climate Models with the IBM Cell Processor

    Science.gov (United States)

    Zhou, S.; Duffy, D.; Clune, T.; Williams, S.; Suarez, M.; Halem, M.

    2008-12-01

    Ever increasing model resolutions and physical processes in climate models demand continual computing power increases. The IBM Cell processor's order-of-magnitude peak performance increase over conventional processors makes it very attractive for fulfilling this requirement. However, the Cell's characteristics (256 KB of local memory per SPE and a new low-level communication mechanism) make it very challenging to port an application. We selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics components (~50% of total computation time), (2) has a high computational load relative to data traffic to/from main memory, and (3) performs independent calculations across multiple columns. We converted the baseline code (single-precision Fortran) to C and ported it to an IBM BladeCenter QS20, manually SIMDizing 4 independent columns, and found that a Cell with 8 SPEs can process more than 3000 columns per second. Compared with the baseline results, the Cell is ~6.76x, ~8.91x, and ~9.85x faster than a core on Intel's Xeon Woodcrest, Dempsey, and Itanium2, respectively. Our analysis shows that the Cell could also speed up the dynamics component (~25% of total computation time). We believe this dramatic performance improvement makes the Cell processor very competitive, at least as an accelerator. We will report our experience in porting both the C and Fortran codes and will discuss our work in porting other climate model components.

  10. Processor core model for quantum computing.

    Science.gov (United States)

    Yung, Man-Hong; Benjamin, Simon C; Bose, Sougato

    2006-06-09

    We describe an architecture based on a processing "core," where multiple qubits interact perpetually, and a separate "store," where qubits exist in isolation. Computation consists of single qubit operations, swaps between the store and the core, and free evolution of the core. This enables computation using physical systems where the entangling interactions are "always on." Alternatively, for switchable systems, our model constitutes a prescription for optimizing many-qubit gates. We discuss implementations of the quantum Fourier transform, Hamiltonian simulation, and quantum error correction.

  11. Image processing algorithm acceleration using reconfigurable macro processor model

    Institute of Scientific and Technical Information of China (English)

    孙广富; 陈华明; 卢焕章

    2004-01-01

    The concept and advantages of reconfigurable technology are introduced. A reconfigurable macro processor (RMP) architecture based on an FPGA array and a DSP is put forward and has been implemented. Two image algorithms are developed: template-based automatic target recognition and zone labeling. The first estimates the motion direction in an infrared image background; the second is a line-extraction algorithm based on image zone labeling and a phase-grouping technique. Each is a kind of "hardware" function that can be called by the DSP from a high-level algorithm; in effect, a hardware algorithm of the DSP. Experimental results show that reconfigurable computing technology based on the RMP is an ideal means of accelerating high-speed image processing tasks. High real-time performance is obtained in our two applications on the RMP.

  12. Feasibility analysis of real-time physical modeling using WaveCore processor technology on FPGA

    NARCIS (Netherlands)

    Verstraelen, Math; Pfeifle, Florian; Bader, Rolf

    2015-01-01

    WaveCore is a scalable many-core processor technology. This technology is specifically developed and optimized for real-time acoustical modeling applications. The programmable WaveCore soft-core processor is silicon-technology independent and hence can be targeted to ASIC or FPGA technologies. The W

  13. Impacts of the IBM Cell Processor to Support Climate Models

    Science.gov (United States)

    Zhou, Shujia; Duffy, Daniel; Clune, Tom; Suarez, Max; Williams, Samuel; Halem, Milt

    2008-01-01

    NASA is interested in the performance and cost benefits of adapting its applications to the IBM Cell processor. However, its 256 KB of local memory per SPE and its new communication mechanism make it very challenging to port an application. We selected the solar radiation component of the NASA GEOS-5 climate model, which: (1) is representative of column physics (approximately 50% of computational time), (2) has a high computational load relative to transferring data to and from main memory, and (3) performs independent calculations across multiple columns. We converted the baseline code (single-precision Fortran) to C and ported it, manually SIMDizing 4 independent columns, and found that a Cell with 8 SPEs can process 2274 columns per second. Compared with the baseline results, the Cell is approximately 5.2X, 8.2X, and 15.1X faster than a core on Intel Woodcrest, Dempsey, and Itanium2, respectively. We believe this dramatic performance improvement makes a hybrid cluster with Cell and traditional nodes competitive.

  14. Modeling and control of fuel cell systems and fuel processors

    Science.gov (United States)

    Pukrushpan, Jay Tawee

    Fuel cell systems offer clean and efficient energy production and are currently under intensive development by several manufacturers for both stationary and mobile applications. The viability, efficiency, and robustness of this technology depend on understanding, predicting, and controlling the unique transient behavior of the fuel cell system. In this thesis, we employ phenomenological modeling and multivariable control techniques to provide fast and consistent system dynamic behavior. Moreover, a framework for analyzing and evaluating different control architectures and sensor sets is provided. Two fuel cell related control problems are investigated in this study, namely, the control of the cathode oxygen supply for a high-pressure direct hydrogen Fuel Cell System (FCS) and the control of the anode hydrogen supply from a natural gas Fuel Processor System (FPS). System dynamic analysis and control design are carried out using model-based linear control approaches. A system-level dynamic model suitable for each control problem is developed from physics-based component models. The transient behavior captured in the model includes flow characteristics, inertia dynamics, lumped-volume manifold filling dynamics, time-evolving spatially-homogeneous reactant pressure or mole fraction, membrane humidity, and the Catalytic Partial Oxidation (CPOX) reactor temperature. The goal of the FCS control problem is to effectively regulate the oxygen concentration in the cathode by quickly and accurately replenishing oxygen depleted during power generation. The features and limitations of different control configurations and the effect of various measurements on the control performance are examined. For example, an observability analysis suggests using the stack voltage measurement as feedback to the observer-based controller to improve the closed-loop performance. The objective of the FPS control system is to regulate both the CPOX temperature and the anode hydrogen concentration. Linear

  15. Functional Level Power Analysis: An Efficient Approach for Modeling the Power Consumption of Complex Processors

    OpenAIRE

    2004-01-01

    A high-level consumption estimation methodology and its associated tool, SoftExplorer, are presented. The estimation methodology uses a functional modeling of the processor combined with a parametric model to allow the designer to estimate the power consumption when the embedded software is executed on the target. SoftExplorer uses as input the assembly code generated by the compiler; its efficiency is compared to SimplePower's approach. Results for different processors (TI C62, C67, C55 and ...

  16. Choosing processor array configuration by performance modeling for a highly parallel linear algebra algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Littlefield, R.J.; Maschhoff, K.J.

    1991-04-01

    Many linear algebra algorithms utilize an array of processors across which matrices are distributed. Given a particular matrix size and a maximum number of processors, what configuration of processors, i.e., what size and shape of array, will execute the fastest? The answer to this question depends on tradeoffs between load balancing, communication startup and transfer costs, and computational overhead. In this paper we analyze in detail one algorithm: the blocked factored Jacobi method for solving dense eigensystems. A performance model is developed to predict execution time as a function of the processor array and matrix sizes, plus the basic computation and communication speeds of the underlying computer system. In experiments on a large hypercube (up to 512 processors), this model has been found to be highly accurate (mean error ~2%) over a wide range of matrix sizes (10 × 10 through 200 × 200) and processor counts (1 to 512). The model reveals, and direct experiment confirms, that the tradeoffs mentioned above can be surprisingly complex and counterintuitive. We propose decision procedures based directly on the performance model to choose configurations for fastest execution. The model-based decision procedures are compared to a heuristic strategy and shown to be significantly better. 7 refs., 8 figs., 1 tab.
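
    A toy version of such a model-based decision procedure, with an invented cost model (the paper's model is fitted to the measured computation and communication speeds of the target machine, which we do not have here):

    ```python
    import math

    def predicted_time(n, rows, cols, flop_rate=1e8, alpha=1e-4, beta=1e-7):
        """Invented execution-time model for an n x n problem on a rows x cols
        processor grid: largest-local-block computation (load imbalance) plus
        per-step message startup (alpha) and per-byte transfer (beta) costs."""
        block = math.ceil(n / rows) * math.ceil(n / cols)
        t_comp = 2.0 * n * block / flop_rate
        t_comm = n * (alpha * (rows + cols) + beta * 8.0 * (n / rows + n / cols))
        return t_comp + t_comm

    def best_grid(n, p):
        """Enumerate all rows x cols factorizations of p, pick the fastest."""
        shapes = [(r, p // r) for r in range(1, p + 1) if p % r == 0]
        return min(shapes, key=lambda rc: predicted_time(n, *rc))

    print(best_grid(200, 64))   # choose a grid shape for a 200 x 200 matrix
    ```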

  17. The impact of accelerator processors for high-throughput molecular modeling and simulation.

    Science.gov (United States)

    Giupponi, G; Harvey, M J; De Fabritiis, G

    2008-12-01

    The recent introduction of cost-effective accelerator processors (APs), such as the IBM Cell processor and Nvidia's graphics processing units (GPUs), represents an important technological innovation which promises to unleash the full potential of atomistic molecular modeling and simulation for the biotechnology industry. Present APs can deliver over an order of magnitude more floating-point operations per second (flops) than standard processors, broadly equivalent to a decade of Moore's law growth, and significantly reduce the cost of current atom-based molecular simulations. In conjunction with distributed and grid-computing solutions, accelerated molecular simulations may finally be used to extend current in silico protocols by the use of accurate thermodynamic calculations instead of approximate methods and simulate hundreds of protein-ligand complexes with full molecular specificity, a crucial requirement of in silico drug discovery workflows.

  18. Scaling-up spatially-explicit ecological models using graphics processors

    NARCIS (Netherlands)

    Koppel, Johan van de; Gupta, Rohit; Vuik, Cornelis

    2011-01-01

    How the properties of ecosystems relate to spatial scale is a prominent topic in current ecosystem research. Despite this, spatially explicit models typically include only a limited range of spatial scales, mostly because of computing limitations. Here, we describe the use of graphics processors to

  19. A seasonal model of contracts between a monopsonistic processor and smallholder pepper producers in Costa Rica

    NARCIS (Netherlands)

    Sáenz Segura, F.; Haese, D' M.F.C.; Schipper, R.A.

    2010-01-01

    We model the contractual arrangements between smallholder pepper (Piper nigrum L.) producers and a single processor in Costa Rica. Producers in the El Roble settlement sell their pepper to only one processing firm, which exerts its monopsonistic bargaining power by setting the purchase price of

  2. Toward Performance Portability of the FV3 Weather Model on CPU, GPU and MIC Processors

    Science.gov (United States)

    Govett, Mark; Rosinski, James; Middlecoff, Jacques; Schramm, Julie; Stringer, Lynd; Yu, Yonggang; Harrop, Chris

    2017-04-01

    The U.S. National Weather Service has selected the FV3 (Finite Volume cubed) dynamical core to become part of its next global operational weather prediction model. While the NWS is preparing to run FV3 operationally in late 2017, NOAA's Earth System Research Laboratory is adapting the model to be capable of running on next-generation GPU and MIC processors. The FV3 model was designed in the 1990s, and while it has been extensively optimized for traditional CPU chips, some code refactoring has been required to expose sufficient parallelism to run on fine-grain GPU processors. The code transformations must demonstrate bit-wise reproducible results with the original CPU code, and between CPU, GPU and MIC processors. We will describe the parallelization and performance while attempting to maintain performance portability between CPU, GPU and MIC with the Fortran source code. Performance results will be shown using NOAA's new Pascal-based fine-grain GPU system (800 GPUs), and for the Knights Landing processor on the National Science Foundation (NSF) Stampede-2 system.

  3. The Meteorology-Chemistry Interface Processor (MCIP) for the CMAQ modeling system

    Directory of Open Access Journals (Sweden)

    T. L. Otte

    2009-12-01

    The Community Multiscale Air Quality (CMAQ) modeling system, a state-of-the-science regional air quality modeling system developed by the US Environmental Protection Agency, is being used for a variety of environmental modeling problems including regulatory applications, air quality forecasting, evaluation of emissions control strategies, process-level research, and interactions of global climate change and regional air quality. The Meteorology-Chemistry Interface Processor (MCIP) is a vital piece of software within the CMAQ modeling system that serves to, as best as possible, maintain dynamic consistency between the meteorological model and the chemical transport model. MCIP acts as both a post-processor to the meteorological model and a pre-processor to the CMAQ modeling system. MCIP's functions are to ingest the meteorological model output fields in their native formats, perform horizontal and vertical coordinate transformations, diagnose additional atmospheric fields, define gridding parameters, and prepare the meteorological fields in a form required by the CMAQ modeling system. This paper provides an updated overview of MCIP, documenting the scientific changes that have been made since it was first released as part of the CMAQ modeling system in 1998.

  4. An FFT Performance Model for Optimizing General-Purpose Processor Architecture

    Institute of Scientific and Technical Information of China (English)

    Ling Li; Yun-Ji Chen; Dao-Fu Liu; Cheng Qian; Wei-Wu Hu

    2011-01-01

    The general-purpose processor (GPP) is an important platform for the fast Fourier transform (FFT), due to its flexibility, reliability and practicality. FFT is a representative application intensive in both computation and memory access; optimizing the FFT performance of a GPP also benefits the performance of many other applications. To facilitate the analysis of FFT, this paper proposes a theoretical model of the FFT processing. The model gives a tight lower bound on the runtime of FFT on a GPP, and guides the architecture optimization for GPP as well. Based on the model, two theorems on the optimization of architecture parameters are deduced, which refer to the lower bounds of register number and memory bandwidth. Experimental results on different processor architectures (including Intel Core i7 and Godson-3B) validate the performance model. The above investigations were adopted in the development of Godson-3B, which is an industrial GPP. The optimization techniques deduced from our performance model improve the FFT performance by about 40%, while incurring only 0.8% additional area cost. Consequently, Godson-3B solves the 1024-point single-precision complex FFT in 0.368 μs with about 40 W power consumption, and has the highest performance-per-watt in complex FFT among processors as far as we know. This work could benefit the optimization of other GPPs as well.
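
    The idea of a runtime lower bound can be illustrated with a roofline-style calculation. This uses textbook flop and traffic counts under assumed machine parameters; it is not the paper's actual model, which also bounds register count and bandwidth requirements:

    ```python
    import math

    def fft_lower_bound(n, peak_flops, mem_bandwidth, bytes_per_value=8):
        """A run can be no faster than its compute or its memory traffic allows.
        Assumes ~5*n*log2(n) flops for a complex FFT and one read plus one
        write of the single-precision complex data set."""
        flops = 5.0 * n * math.log2(n)
        bytes_moved = 2.0 * n * bytes_per_value
        return max(flops / peak_flops, bytes_moved / mem_bandwidth)

    # Illustrative machine: 128 Gflop/s peak and 40 GB/s memory bandwidth.
    t = fft_lower_bound(1024, peak_flops=128e9, mem_bandwidth=40e9)
    print(f"1024-point FFT lower bound: {t * 1e6:.3f} us")
    ```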

  5. Systematic evaluation of autoregressive error models as post-processors for a probabilistic streamflow forecast system

    Science.gov (United States)

    Morawietz, Martin; Xu, Chong-Yu; Gottschalk, Lars; Tallaksen, Lena

    2010-05-01

    A post-processor is necessary in a probabilistic streamflow forecast system to account for the hydrologic uncertainty introduced by the hydrological model. In this study, different variants of an autoregressive error model that can be used as a post-processor for short- to medium-range streamflow forecasts are evaluated. The deterministic HBV model is used to form the basis for the streamflow forecast. The general structure of the error models then used as post-processor is a first-order autoregressive model of the form $d_t = \alpha d_{t-1} + \sigma \varepsilon_t$, where $d_t$ is the model error (observed minus simulated streamflow) at time $t$, $\alpha$ and $\sigma$ are the parameters of the error model, and $\varepsilon_t$ is the residual error described through a probability distribution. The following aspects are investigated: (1) use of constant parameters $\alpha$ and $\sigma$ versus the use of state-dependent parameters, which vary depending on the states of temperature, precipitation, snow water equivalent and simulated streamflow; (2) use of a standard Normal distribution for $\varepsilon_t$ versus use of an empirical distribution function constituted through the normalized residuals of the error model in the calibration period; (3) comparison of two different transformations, i.e. logarithmic versus square root, that are applied to the streamflow data before the error model is applied. The reason for applying a transformation is to make the residuals of the error model homoscedastic over the range of streamflow values of different magnitudes. Through combination of these three characteristics, eight variants of the autoregressive post-processor are generated. These are calibrated and validated in 55 catchments throughout Norway. The discrete ranked probability score with 99 flow percentiles as standardized thresholds is used for evaluation. In addition, a non-parametric bootstrap is used to construct confidence intervals and evaluate the significance of the results. The main
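
    A compact sketch of the stated error model using the logarithmic transformation, constant parameters and standard Normal residuals, i.e. one of the eight variants; the calibration series below is invented:

    ```python
    import numpy as np
    from scipy.stats import norm

    def fit_ar1(observed, simulated):
        """Fit d_t = alpha * d_{t-1} + sigma * eps_t on log-transformed flows,
        where d_t = log(observed_t) - log(simulated_t)."""
        d = np.log(observed) - np.log(simulated)
        alpha = np.dot(d[1:], d[:-1]) / np.dot(d[:-1], d[:-1])
        sigma = (d[1:] - alpha * d[:-1]).std(ddof=1)
        return alpha, sigma

    def forecast_quantiles(sim_forecast, last_error, alpha, sigma,
                           qs=(0.05, 0.5, 0.95)):
        """One-step-ahead flow quantiles from the propagated AR(1) error."""
        mean_d = alpha * last_error
        return [float(np.exp(np.log(sim_forecast) + mean_d + sigma * norm.ppf(q)))
                for q in qs]

    obs = np.array([12.0, 14.5, 18.2, 16.9, 15.1, 13.8])   # invented flows, m3/s
    sim = np.array([11.2, 15.3, 17.0, 17.8, 14.6, 13.1])
    alpha, sigma = fit_ar1(obs, sim)
    last_d = np.log(obs[-1]) - np.log(sim[-1])
    print(forecast_quantiles(12.5, last_d, alpha, sigma))
    ```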

  6. Support for the Logical Execution Time Model on a Time-predictable Multicore Processor

    DEFF Research Database (Denmark)

    Kluge, Florian; Schoeberl, Martin; Ungerer, Theo

    2016-01-01

    The logical execution time (LET) model increases the compositionality of real-time task sets. Removal or addition of tasks does not influence the communication behavior of other tasks. In this work, we extend a multicore operating system running on a time-predictable multicore processor to support the LET model. For communication between tasks we use message passing on a time-predictable network-on-chip to avoid the bottleneck of shared memory. We report our experiences and present results on the costs in terms of memory and execution time.
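
    A toy single-machine illustration of LET semantics, showing why communication timing is independent of how long a task actually computes (the names and event-loop structure are ours, not the paper's implementation):

    ```python
    import heapq

    def run_let(tasks, horizon):
        """Logical Execution Time, sketched: a task reads its inputs at its
        release instant and its outputs become visible exactly one period
        later, regardless of actual execution time in between."""
        channels, queue, seq = {}, [], 0
        for name, period, fn in tasks:
            queue.append((0, seq, "release", name, period, fn, None))
            seq += 1
        heapq.heapify(queue)
        while queue and queue[0][0] <= horizon:
            t, _, kind, name, period, fn, out = heapq.heappop(queue)
            if kind == "release":
                result = fn(dict(channels))       # inputs frozen at release
                heapq.heappush(queue, (t + period, seq, "publish", name,
                                       period, fn, result))
            else:                                 # outputs visible only now
                channels.update(out)
                print(f"t={t:3}: {name} publishes {out}")
                heapq.heappush(queue, (t, seq, "release", name, period, fn, None))
            seq += 1

    run_let([("sense", 10, lambda ch: {"x": ch.get("x", 0) + 1}),
             ("ctrl", 20, lambda ch: {"u": 2 * ch.get("x", 0)})], horizon=40)
    ```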

  7. A pre-processor of trace gases and aerosols emission fields for regional and global atmospheric chemistry models

    Directory of Open Access Journals (Sweden)

    S. R. Freitas

    2010-06-01

    The pre-processor PREP-CHEM-SRC presented in the paper is a comprehensive tool for preparing emission fields of trace gases and aerosols for use in regional or global transport models. The emissions considered are urban/industrial, biogenic, biomass burning, volcanic, biofuel use and burning from agricultural waste sources, taken from the most recent databases or from satellite fire detections for biomass burning. A plume-rise model is used to derive the height of smoke emissions from satellite fire products. The pre-processor provides emission fields interpolated onto the transport model grid. Several map projections can be chosen. The way to include these emissions in transport models is also detailed. The pre-processor is coded in Fortran 90 and C and is driven by a namelist allowing the user to choose the type of emissions and the database.

  8. The Mission Assessment Post Processor (MAPP): A New Tool for Performance Evaluation of Human Lunar Missions

    Science.gov (United States)

    Williams, Jacob; Stewart, Shaun M.; Lee, David E.; Davis, Elizabeth C.; Condon, Gerald L.; Senent, Juan

    2010-01-01

    The National Aeronautics and Space Administration's (NASA) Constellation Program paves the way for a series of lunar missions leading to a sustained human presence on the Moon. The proposed mission design includes an Earth Departure Stage (EDS), a Crew Exploration Vehicle (Orion) and a lunar lander (Altair) which support the transfer to and from the lunar surface. This report addresses the design, development and implementation of a new mission scan tool called the Mission Assessment Post Processor (MAPP) and its use to provide insight into the integrated (i.e., EDS-, Orion-, and Altair-based) mission cost as a function of various mission parameters and constraints. The Constellation architecture calls for semiannual launches to the Moon and will support a number of missions, beginning with 7-day sortie missions and culminating in a lunar outpost at a specified location. The operational lifetime of the Constellation Program can cover a period of decades, over which the Earth-Moon geometry (particularly, the lunar inclination) will go through a complete cycle (i.e., the lunar nodal cycle lasting 18.6 years). This geometry variation, along with other parameters such as flight time, landing site location, and mission-related constraints, affects the outbound (Earth to Moon) and inbound (Moon to Earth) translational performance cost. The mission designer must determine the ability of the vehicles to perform lunar missions as a function of this complex set of interdependent parameters. Trade-offs among these parameters provide essential insights for properly assessing the ability of a mission architecture to meet desired goals and objectives. These trades also aid in determining the overall usable propellant required for supporting nominal and off-nominal missions over the entire operational lifetime of the program; thus they support vehicle sizing.

  9. Simulation-based Modeling Frameworks for Networked Multi-processor System-on-Chip

    DEFF Research Database (Denmark)

    Mahadevan, Shankar

    2006-01-01

    This thesis deals with modeling aspects of multi-processor system-on-chip (MpSoC) design affected by the on-chip interconnect, also called the Network-on-Chip (NoC), at various levels of abstraction. To begin with, we undertook a comprehensive survey of research and design practices of networked MpSoC. The survey presents the challenges of modeling and performance analysis of the hardware and the software components used in such devices. These challenges are further exacerbated in a mixed-abstraction workspace, which is typical of complex MpSoC design environments. We provide two simulation-based frameworks... and the RIPE frameworks allow easy incorporation of IP cores from either framework into a new instance of the design. This could pave the way for seamless design evaluation from system-level to cycle-true abstraction in future component-based MpSoC design practice.

  10. Quantum chemical calculations using the Floating Point Systems, Inc. Model 164 attached processor

    Energy Technology Data Exchange (ETDEWEB)

    Shepard, R.; Bair, R.A.; Eades, R.A.; Wagner, A.F.; Davis, M.J.; Harding, L.B.; Dunning, T.H. Jr.

    1983-01-01

    The Theoretical Chemistry Group at Argonne National Laboratory has had a Floating Point Systems, Inc., Model 164 Attached Processor (FPS-164) for several months. Actual production calculations, as well as benchmark calculations, indicate that the FPS-164 is capable of performance comparable to large mainframe computers. The group's experience with the FPS-164 includes the conversion of a complete system of electronic structure codes, including integral evaluation programs, generalized valence bond programs, integral transformation programs, and unitary group configuration interaction programs, as well as two classical trajectory codes. Timings of these programs at various levels of optimization, along with estimates of the amount of effort required to make the necessary program modifications, are discussed. 10 references, 2 figures, 2 tables.

  11. Shortcut model for water-balanced operation in fuel processor fuel cell systems

    NARCIS (Netherlands)

    Biesheuvel, P.M.; Kramer, G.J.

    2004-01-01

    In a fuel processor, a hydrocarbon or oxygenate fuel is catalytically converted into a mixture rich in hydrogen which can be fed to a fuel cell to generate electricity. In these fuel processor fuel cell systems (FPFCs), water is recovered from the exhaust gases and recycled back into the system. We

  12. Recent developments in predictive uncertainty assessment based on the model conditional processor approach

    Directory of Open Access Journals (Sweden)

    G. Coccia

    2011-10-01

    The work aims to discuss the role of predictive uncertainty in flood forecasting and flood emergency management, its relevance to improving the decision-making process, and the techniques to be used for its assessment.

    Real-time flood forecasting requires taking into account predictive uncertainty for a number of reasons. Deterministic hydrological/hydraulic forecasts give useful information about real future events, but their predictions, as usually done in practice, cannot be taken and used as real future occurrences; rather, they are used as pseudo-measurements of future occurrences in order to reduce the uncertainty of decision makers. Predictive Uncertainty (PU) is in fact defined as the probability of occurrence of a future value of a predictand (such as water level, discharge or water volume), conditional upon prior observations and knowledge as well as on all the information we can obtain on that specific future value from model forecasts. When dealing with commensurable quantities, as in the case of floods, PU must be quantified in terms of a probability distribution function which will be used by the emergency managers in their decision process in order to improve the quality and reliability of their decisions.

    After introducing the concept of PU, the presently available processors are introduced and discussed in terms of their benefits and limitations. In this work the Model Conditional Processor (MCP) has been extended to the possibility of using two joint Truncated Normal Distributions (TNDs), in order to improve adaptation to low and high flows.

    The paper concludes by showing the results of the application of the MCP on two case studies, the Po river in Italy and the Baron Fork river, OK, USA. In the Po river case the data provided by the Civil Protection of the Emilia Romagna region have been used to implement an operational example, where the predicted variable is the observed water level. In the Baron Fork River

  13. Temperature modeling and emulation of an ASIC temperature monitor system for Tightly-Coupled Processor Arrays (TCPAs)

    OpenAIRE

    E. Glocker; S. Boppu; Chen, Q; Schlichtmann, U.; Teich, J.; D. Schmitt-Landsiedel

    2014-01-01

    This contribution provides an approach for emulating, at run-time, the behaviour of an ASIC temperature monitoring system (TMon) for a tightly-coupled processor array (TCPA) of a heterogeneous invasive multi-tile architecture to be used for FPGA prototyping. It is based on a thermal RC modeling approach. In addition, different usage scenarios of the TCPA are analyzed and compared.

  14. Decomposing the queue length distribution of processor-sharing models into queue lengths of permanent customer queues

    NARCIS (Netherlands)

    Cheung, S.-K.; Berg, H. van den; Boucherie, R.J.

    2005-01-01

    We obtain a decomposition result for the steady state queue length distribution in egalitarian processor-sharing (PS) models. In particular, for multi-class egalitarian PS queues, we show that the marginal queue length distribution for each class equals the queue length distribution of an equivalent

  15. Emerging Trends in Embedded Processors

    Directory of Open Access Journals (Sweden)

    Gurvinder Singh

    2014-05-01

    An embedded processor is simply a microprocessor that has been "embedded" into a device. Embedded systems are an important part of human life. For illustration, one cannot visualize life without mobile phones for personal communication. Embedded systems are used in many places, such as healthcare, automotive, daily life, and different offices and industries. Embedded processors open up new research areas in the field of hardware design.

  16. Analysis of impact of general-purpose graphics processor units in supersonic flow modeling

    Science.gov (United States)

    Emelyanov, V. N.; Karpenko, A. G.; Kozelkov, A. S.; Teterina, I. V.; Volkov, K. N.; Yalozo, A. V.

    2017-06-01

    Computational methods are widely used in the prediction of complex flowfields associated with off-normal situations in aerospace engineering. Modern graphics processing units (GPU) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations at both high performance and low cost. The possibilities of using GPUs for the simulation of external and internal flows on unstructured meshes are discussed. The finite volume method is applied to solve three-dimensional unsteady compressible Euler and Navier-Stokes equations on unstructured meshes with high-resolution numerical schemes. CUDA technology is used for the programming implementation of parallel computational algorithms. Solutions of some benchmark test cases on GPUs are reported, and the computed results are compared with experimental and computational data. Approaches to optimization of the CFD code related to the use of different types of memory are considered. The speedup of the solution on GPUs with respect to the solution on a central processing unit (CPU) is compared. Performance measurements show that the numerical schemes developed achieve a 20-50x speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.

  17. GNAQPMS v1.1: accelerating the Global Nested Air Quality Prediction Modeling System (GNAQPMS) on Intel Xeon Phi processors

    Science.gov (United States)

    Wang, Hui; Chen, Huansheng; Wu, Qizhong; Lin, Junmin; Chen, Xueshun; Xie, Xinwei; Wang, Rongrong; Tang, Xiao; Wang, Zifa

    2017-08-01

    The Global Nested Air Quality Prediction Modeling System (GNAQPMS) is the global version of the Nested Air Quality Prediction Modeling System (NAQPMS), which is a multi-scale chemical transport model used for air quality forecasting and atmospheric environmental research. In this study, we present the porting and optimisation of GNAQPMS on a second-generation Intel Xeon Phi processor, codenamed Knights Landing (KNL). Compared with the first-generation Xeon Phi coprocessor (codenamed Knights Corner, KNC), KNL has many new hardware features such as a bootable processor, high-performance in-package memory and ISA compatibility with Intel Xeon processors. In particular, we describe the five optimisations we applied to the key modules of GNAQPMS, including the CBM-Z gas-phase chemistry, advection, convection and wet deposition modules. These optimisations work well on both the KNL 7250 processor and the Intel Xeon E5-2697 V4 processor. They include (1) updating the pure Message Passing Interface (MPI) parallel mode to the hybrid parallel mode with MPI and OpenMP in the emission, advection, convection and gas-phase chemistry modules; (2) fully employing the 512-bit wide vector processing units (VPUs) on the KNL platform; (3) reducing unnecessary memory access to improve cache efficiency; (4) reducing the thread local storage (TLS) in the CBM-Z gas-phase chemistry module to improve its OpenMP performance; and (5) changing the global communication from writing/reading interface files to MPI functions to improve the performance and the parallel scalability. These optimisations greatly improved the GNAQPMS performance. The same optimisations also work well for the Intel Xeon Broadwell processor, specifically the E5-2697 v4. Compared with the baseline version of GNAQPMS, the optimised version was 3.51x faster on KNL and 2.77x faster on the CPU. Moreover, the optimised version ran at 26% lower average power on KNL than on the CPU. With the combined performance and energy

  18. Smart composite material system with sensor, actuator, and processor functions: a model of holding and releasing a ball

    Science.gov (United States)

    Oishi, Ryutaro; Yoshida, Hitoshi; Nagai, Hideki; Xu, Ya; Jang, Byung-Koog

    2002-07-01

    A smart composite material system which has the three smart functions of sensor, actuator and processor has been developed, intended for application to house structures (for controlling ambient temperature and humidity), robot hands (for holding and feeling an object), and so on. A carbon fiber reinforced plastic (CFRP) is used as the matrix in the smart composite. The size of the matrix is 120 mm x 24 mm x 0.45 mm. The CFRP plate is combined with two Ni-Ti shape memory alloy (SMA) wires and an elastic rubber to construct a composite material. The composite material has a characteristic of reversible response with respect to temperature. A photo-sensor and a temperature sensor are embedded in the composite material. The composite material gains a processor function by being combined with a simple CPU (processor) unit. To demonstrate the capability of the composite material system, a model is built for controlling certain behaviors such as gripping and releasing a spherical object. The amplitude of the gripping force is (3.0 ± 0.3) N in the measurement, which is consistent with our calculation of 2.7 N. Out of a variety of functions to be executed by the CPU, it is shown to exert calculation and decision making in regard to object selection, object holding, and ON-OFF control of action by external commands.

  19. Experiences modeling ocean circulation problems on a 30 node commodity cluster with 3840 GPU processor cores.

    Science.gov (United States)

    Hill, C.

    2008-12-01

    Low-cost graphics cards today use many relatively simple compute cores to deliver memory bandwidth of more than 100 GB/s and theoretical floating point performance of more than 500 GFlop/s. Right now this performance is, however, only accessible to highly parallel algorithm implementations that (i) can use a hundred or more 32-bit floating point, concurrently executing cores, (ii) can work with graphics memory that resides on the graphics card side of the graphics bus and (iii) can be partially expressed in a language that can be compiled by a graphics programming tool. In this talk we describe our experiences implementing a complete, but relatively simple, time-dependent shallow-water equations simulation targeting a cluster of 30 computers each hosting one graphics card. The implementation takes into account the considerations (i), (ii) and (iii) listed previously. We code our algorithm as a series of numerical kernels. Each kernel is designed to be executed by multiple threads of a single process. Kernels are passed memory blocks to compute over, which can be persistent blocks of memory on a graphics card. Each kernel is individually implemented using the NVidia CUDA language but driven from a higher-level supervisory code that is almost identical to a standard model driver. The supervisory code controls the overall simulation timestepping, but is written to minimize data transfer between main memory and graphics memory (a massive performance bottleneck on current systems). Using the recipe outlined we can boost the performance of our cluster by nearly an order of magnitude, relative to the same algorithm executing only on the cluster CPUs. Achieving this performance boost requires that many threads are available to each graphics processor for execution within each numerical kernel and that the simulation's working set of data can fit into the graphics card memory. As we describe, this puts interesting upper and lower bounds on the problem sizes

  20. Auditory-like filterbank: An optimal speech processor for efficient human speech communication

    Indian Academy of Sciences (India)

    Prasanta Kumar Ghosh; Louis M Goldstein; Shrikanth S Narayanan

    2011-10-01

    The transmitter and the receiver in a communication system have to be designed optimally with respect to one another to ensure reliable and efficient communication. Following this principle, we derive an optimal filterbank for processing speech signal in the listener’s auditory system (receiver), so that maximum information about the talker’s (transmitter) message can be obtained from the filterbank output, leading to efficient communication between the talker and the listener. We consider speech data of 45 talkers from three different languages for designing optimal filterbanks separately for each of them. We find that the computationally derived optimal filterbanks are similar to the empirically established auditory (cochlear) filterbank in the human ear. We also find that the output of the empirically established auditory filterbank provides more than 90% of the maximum information about the talker’s message provided by the output of the optimal filterbank. Our experimental findings suggest that the auditory filterbank in human ear functions as a near-optimal speech processor for achieving efficient speech communication between humans.

  1. Implications of the Turing machine model of computation for processor and programming language design

    Science.gov (United States)

    Hunter, Geoffrey

    2004-01-01

    A computational process is classified according to the theoretical model that is capable of executing it; computational processes that require a non-predeterminable amount of intermediate storage for their execution are Turing-machine (TM) processes, while those whose storage is predeterminable are Finite Automaton (FA) processes. Simple processes (such as a traffic light controller) are executable by a Finite Automaton, whereas the most general kind of computation requires a Turing Machine for its execution. This implies that a TM process must have a non-predeterminable amount of memory allocated to it at intermediate instants of its execution; i.e. dynamic memory allocation. Many processes encountered in practice are TM processes. The implication for computational practice is that the hardware (CPU) architecture and its operating system must facilitate dynamic memory allocation, and that the programming language used to specify TM processes must have statements with the semantic attribute of dynamic memory allocation, for in Alan Turing's thesis on computation (1936) the "standard description" of a process is invariant over the most general data that the process is designed to process; i.e. the program describing the process should never have to be modified to allow for differences in the data that is to be processed in different instantiations; i.e. data-invariant programming. Any non-trivial program is partitioned into sub-programs (procedures, subroutines, functions, modules, etc). Examination of the calls/returns between the subprograms reveals that they are nodes in a tree-structure; this tree-structure is independent of the programming language used to encode (define) the process. Each sub-program typically needs some memory for its own use (to store values intermediate between its received data and its computed results); this locally required memory is not needed before the subprogram commences execution, and it is not needed after its execution terminates.
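
    A small illustration of the distinction (our examples): the first process needs a fixed, predeterminable amount of storage however long it runs; the second needs intermediate storage that grows with the input and must therefore be allocated dynamically:

    ```python
    def traffic_light(n_steps):
        """FA process: storage is fixed in advance (one state variable),
        no matter how long the input runs."""
        states = ("green", "yellow", "red")
        s = 0
        for _ in range(n_steps):
            s = (s + 1) % 3
        return states[s]

    def balanced(text):
        """TM-like process: the stack of open brackets can grow without a
        bound knowable before execution, so storage must be allocated
        dynamically as the input is consumed."""
        stack = []
        pairs = {")": "(", "]": "["}
        for ch in text:
            if ch in "([":
                stack.append(ch)          # dynamic, input-dependent allocation
            elif ch in pairs:
                if not stack or stack.pop() != pairs[ch]:
                    return False
        return not stack

    print(traffic_light(7), balanced("([()[]])"))
    ```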

  2. Quality-Driven Model-Based Design of MultiProcessor Embedded Systems for Highlydemanding Applications

    DEFF Research Database (Denmark)

    Jozwiak, Lech; Madsen, Jan

    2013-01-01

    The recent spectacular progress in modern nano-dimension semiconductor technology enabled implementation of a complete complex multi-processor system on a single chip (MPSoC), global networking and mobile wireless communication, and facilitated fast progress in these areas. New important opportunities have been created. The traditional applications can be served much better, and numerous new sorts of embedded systems became technologically feasible and economically justified. Various monitoring, control, communication or multi-media systems that can be put on or embedded in (mobile, poorly...) ... unusual silicon and system complexity. The combination of the huge complexity with the stringent application requirements results in numerous serious design and development challenges, such as: accounting in design for more aspects and changed relationships among aspects, complex multi-objective MPSo...

  3. PEM Fuel Cells with Bio-Ethanol Processor Systems A Multidisciplinary Study of Modelling, Simulation, Fault Diagnosis and Advanced Control

    CERN Document Server

    Feroldi, Diego; Outbib, Rachid

    2012-01-01

    An apparently appropriate control scheme for PEM fuel cells may actually lead to an inoperable plant when it is connected to other unit operations in a process with recycle streams and energy integration. PEM Fuel Cells with Bio-Ethanol Processor Systems presents a control system design that provides basic regulation of the hydrogen production process with PEM fuel cells. It then goes on to construct a fault diagnosis system to improve plant safety above this control structure. PEM Fuel Cells with Bio-Ethanol Processor Systems is divided into two parts: the first covers fuel cells and the second discusses plants for hydrogen production from bio-ethanol to feed PEM fuel cells. Both parts give detailed analyses of modeling, simulation, advanced control, and fault diagnosis. They give an extensive, in-depth discussion of the problems that can occur in fuel cell systems and propose a way to control these systems through advanced control algorithms. A significant part of the book is also given over to computer-aid...

  4. Fast Forwarding with Network Processors

    OpenAIRE

    Lefèvre, Laurent; Lemoine, E.; Pham, C; Tourancheau, B.

    2003-01-01

    Forwarding is a mechanism found in many network operations. Although a regular workstation is able to perform forwarding operations, it still suffers from poor performance when compared to dedicated hardware machines. In this paper we study the possibility of using Network Processors (NPs) to improve the capability of regular workstations to forward data. We present a simple model and an experimental study demonstrating that even though NPs are less powerful than Host Processors (HPs) they ca...

  5. MATHEMATICAL MODELING OF MUTUALLY BENEFICIAL RELATIONS BETWEEN RAW MATERIAL PRODUCERS AND PROCESSORS BASED ON NONLINEAR DEMAND FUNCTION

    Directory of Open Access Journals (Sweden)

    Loyko V. I.

    2015-06-01

    Agricultural producers are interested in the marketing of raw materials, whereas processing companies are interested in the establishment of raw-material zones providing capacity utilization; therefore, the establishment of sustainable linkages between producers and processors of raw materials is an objective necessity. In the article, with the help of mathematical methods, we examine the conditions for mutually beneficial economic relations between agricultural producers and processing enterprises. A mathematical model for estimating the company's profit is built on the following conditions: producers sell raw materials to the processing plants, in a share determined by the coefficient of interest in the partnership, at an agreed purchase price; the remaining raw materials they process themselves, so that they can sell their products independently. The profit of the processing plant is determined by the mathematical model. To describe the nonlinear market-based dependence of sales of goods on the retail price, we use a hyperbolic demand function.
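
    A toy numerical rendering of such a model (all symbols are hypothetical stand-ins, not the paper's notation: k is the share of raw material sold to the processor, and demand is hyperbolic in the retail price):

    ```python
    def processor_profit(k, raw_supply, purchase_price, conv=0.8,
                         unit_cost=0.3, a=500.0):
        """Toy profit of the processing plant: it buys a share k of the raw
        material at purchase_price, converts it with yield conv and unit
        processing cost unit_cost, and sells under a hyperbolic demand
        curve p(q) = a / q (so revenue is the constant a)."""
        raw = k * raw_supply
        q = conv * raw                       # quantity of product for sale
        price = a / q if q > 0 else 0.0      # hyperbolic demand
        return price * q - purchase_price * raw - unit_cost * raw

    for k in (0.2, 0.5, 0.8):
        print(f"k={k}: profit={processor_profit(k, 100.0, 1.0):.1f}")
    ```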

  6. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 6. NDDL Processor Product Specification. Sections 1.0 through 3.10.8 CPFNXT.

    Science.gov (United States)

    1985-11-01

    Conceptual Schema to External Schema Translation or Mapping program. Conceptual Schema (CS); Common Data Model Processor (CIDP); Common Data Model (CDM) ... statement was not written into the include file itself. The most common reason for this is that the include file comes from system libraries that were not

  7. A Directory of Human Performance Models for System Design (Defence Research Group Panel 8 on the Defence Applications of Human and Bio-Medical Sciences)

    Science.gov (United States)

    1992-12-27

    Government Printing Office. Henneman, R. L. (1988): Human problem solving in dynamic environments. In Rouse, W. B. (Ed.), Advances in Man-Machine Systems. ... to a light) the human must behave as a serial processor. For other tasks (typing, reading, simultaneous translation) integrated, parallel operation ... A perceptual model that translates displayed variables into noisy, delayed, perceived variables. (2) An information processor consisting of an

  8. Cluster Algorithm Special Purpose Processor

    Science.gov (United States)

    Talapov, A. L.; Shchur, L. N.; Andreichenko, V. B.; Dotsenko, Vl. S.

    We describe a Special Purpose Processor, realizing the Wolff algorithm in hardware, which is fast enough to study the critical behaviour of 2D Ising-like systems containing more than one million spins. The processor has been checked to produce correct results for a pure Ising model and for Ising model with random bonds. Its data also agree with the Nishimori exact results for spin glass. Only minor changes of the SPP design are necessary to increase the dimensionality and to take into account more complex systems such as Potts models.
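
    For orientation, the cluster update that the processor realizes in hardware is the standard Wolff algorithm; a compact software rendering for the pure 2D Ising model (our implementation, periodic boundaries) is:

    ```python
    import math
    import random

    def wolff_step(spin, L, beta, rng=random):
        """One Wolff update for the 2D Ising model on an L x L torus: grow a
        cluster of equal spins with bond probability 1 - exp(-2*beta), then
        flip the whole cluster."""
        p_add = 1.0 - math.exp(-2.0 * beta)
        seed = (rng.randrange(L), rng.randrange(L))
        s0, cluster, stack = spin[seed], {seed}, [seed]
        while stack:
            x, y = stack.pop()
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                site = (nx % L, ny % L)
                if site not in cluster and spin[site] == s0 and rng.random() < p_add:
                    cluster.add(site)
                    stack.append(site)
        for site in cluster:
            spin[site] = -s0           # flip the cluster as a whole
        return len(cluster)

    L, beta = 32, 0.44               # close to the 2D Ising critical coupling
    spin = {(x, y): random.choice((-1, 1)) for x in range(L) for y in range(L)}
    for _ in range(200):
        wolff_step(spin, L, beta)
    print("magnetization per spin:", abs(sum(spin.values())) / L**2)
    ```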

  9. Cluster algorithm special purpose processor

    Energy Technology Data Exchange (ETDEWEB)

    Talapov, A.L.; Shchur, L.N.; Andreichenko, V.B.; Dotsenko, V.S. (Landau Inst. for Theoretical Physics, GSP-1 117940 Moscow V-334 (USSR))

    1992-08-10

    In this paper, the authors describe a Special Purpose Processor, realizing the Wolff algorithm in hardware, which is fast enough to study the critical behaviour of 2D Ising-like systems containing more than one million spins. The processor has been checked to produce correct results for a pure Ising model and for Ising model with random bonds. Its data also agree with the Nishimori exact results for spin glass. Only minor changes of the SPP design are necessary to increase the dimensionality and to take into account more complex systems such as Potts models.

  10. Scalability of human models

    NARCIS (Netherlands)

    Rodarius, C.; Rooij, L. van; Lange, R. de

    2007-01-01

    The objective of this work was to create a scalable human occupant model that allows adaptation of human models with respect to size, weight and several mechanical parameters. Therefore, for the first time two scalable facet human models were developed in MADYMO. First, a scalable human male was

  11. Towards a Process Algebra for Shared Processors

    DEFF Research Database (Denmark)

    Buchholtz, Mikael; Andersen, Jacob; Løvengreen, Hans Henrik

    2002-01-01

    We present initial work on a timed process algebra that models sharing of processor resources, allowing preemption at arbitrary points in time. This enables us to model both the functional and the timing behaviour of concurrent processes executed on a single processor. We give a refinement relation...

  12. Model-Based Design of Energy Efficient Palladium Membrane Water Gas Shift Fuel Processors for PEM Fuel Cell Power Plants

    Science.gov (United States)

    Gummalla, Mallika; Vanderspurt, Thomas Henry; Emerson, Sean; She, Ying; Dardas, Zissis; Olsommer, Benoît

    An integrated, palladium alloy membrane Water-Gas Shift (WGS) reactor can significantly reduce the size, cost and complexity of a fuel processor for a Polymer Electrolyte Membrane fuel cell power system.

  13. Investigation of Large Scale Cortical Models on Clustered Multi-Core Processors

    Science.gov (United States)

    2013-02-01

    [Table: acceleration platforms examined (x86, Cell, GPGPU) against the HTM [22], Dean [25], Izhikevich [26], Hodgkin-Huxley [27] and Morris-Lecar [28] models.] ...examined. These are the Hodgkin-Huxley [27], Izhikevich [26], Wilson [29], and Morris-Lecar [28] models. The Hodgkin-Huxley model is considered to be...and inactivation of Na currents). Table 2 compares the computational properties of the four models. The Hodgkin-Huxley model utilizes exponential
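
Of the four neuron models named in this record, the Izhikevich model [26] is the simplest to reproduce. The sketch below integrates its standard equations (v' = 0.04v^2 + 5v + 140 - u + I, u' = a(bv - u), with a reset when v reaches 30 mV) using Euler steps; the parameter values are the common regular-spiking defaults, not values taken from the report.

```python
import numpy as np

def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, dt=0.25, steps=4000):
    """Euler integration of the Izhikevich neuron (regular-spiking defaults)."""
    v, u = c, b * c
    trace, spikes = [], []
    for step in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)  # membrane potential
        u += dt * a * (b * v - u)                           # recovery variable
        if v >= 30.0:                 # spike detected: record time and reset
            spikes.append(step * dt)
            v, u = c, u + d
        trace.append(v)
    return np.array(trace), spikes

trace, spikes = izhikevich()
print(f"{len(spikes)} spikes in {len(trace) * 0.25:.0f} ms")
```

The contrast with Hodgkin-Huxley is the point of the report's Table 2: two polynomial state equations here versus four coupled ODEs with exponential rate functions there, which is what makes platform choice matter at cortical scale.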

  14. RPC Stereo Processor (rsp) - a Software Package for Digital Surface Model and Orthophoto Generation from Satellite Stereo Imagery

    Science.gov (United States)

    Qin, R.

    2016-06-01

    Large-scale Digital Surface Models (DSMs) are very useful for many geoscience and urban applications. Recently developed dense image matching methods have popularized the use of image-based very-high-resolution DSMs. Many commercial/public tools implementing such matching methods are available for perspective images, but handy tools for satellite stereo images are rare. In this paper, a software package, the RPC (rational polynomial coefficient) stereo processor (RSP), is introduced for this purpose. RSP implements a full pipeline of DSM and orthophoto generation based on RPC-modelled satellite imagery (level 1+), including level 2 rectification, geo-referencing, point cloud generation, pan-sharpening, DSM resampling and ortho-rectification. A modified hierarchical semi-global matching method is used as the current matching strategy. Due to its high memory efficiency and optimized implementation, RSP can be used on a normal PC to produce large-format DSMs and orthophotos. This tool was developed for internal use, and may be acquired by researchers for academic and non-commercial purposes to promote 3D remote sensing applications.

  15. Java Processor Optimized for RTSJ

    Directory of Open Access Journals (Sweden)

    Tu Shiliang

    2007-01-01

    Full Text Available Due to the preeminent work on the Real-Time Specification for Java (RTSJ), Java is increasingly expected to become the leading programming language for real-time systems. To provide a Java platform suitable for real-time applications, this paper proposes a Java processor which can execute Java bytecode directly. It provides efficient hardware support for some of the mechanisms specified in the RTSJ and offers a simpler programming model by ameliorating the RTSJ's scoped memory. The worst-case execution time (WCET) of the bytecodes implemented in this processor is predictable by employing the optimization method proposed in our previous work, in which all processing that interferes with predictability is handled before bytecode execution. A further advantage of this method is that it makes the implementation of the processor simpler and suited to a low-cost FPGA chip.

  16. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    Science.gov (United States)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

    The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high-definition video in real time. This paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920 × 1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for background segmentation that is, however, computationally intensive and impossible to implement on a general-purpose CPU under the constraint of real-time processing. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low-power implementation of the algorithm is obtained. The reported performance is also the result of using state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that surpass previously proposed implementations. The second circuit is oriented to an ASIC (UMC 90 nm) standard cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
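
A software reference point for what the two cores implement in hardware: OpenCV exposes the same GMM background model through its MOG2 interface. The sketch below runs it over a video file; the file path is a placeholder and the parameters are just the OpenCV defaults made explicit.

```python
import cv2

# "video.mp4" is a placeholder path, not a file from the paper.
cap = cv2.VideoCapture("video.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)  # per-pixel GMM update and classification
    print("foreground pixels:", cv2.countNonZero(fg_mask))
cap.release()
```

Each call to `apply` performs exactly the per-pixel mixture update whose equations the paper rewrites for fixed-point hardware, which is why the software version cannot sustain 1080p at 60 fps on a general-purpose CPU.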

  17. Advanced Multiple Processor Configuration Study. Final Report.

    Science.gov (United States)

    Clymer, S. J.

    This summary of a study on multiple processor configurations includes the objectives, background, approach, and results of research undertaken to provide the Air Force with a generalized model of computer processor combinations for use in the evaluation of proposed flight training simulator computational designs. An analysis of a real-time flight…

  18. Modeling and simulation of heat sinks for computer processors in COMSOL Multiphysics

    OpenAIRE

    2012-01-01

    In this study, the heat transfer of three desktop-computer heat sinks was analyzed. The objective of using these heat sinks is to avoid overheating of the computer’s processing unit and in turn reduce the corresponding loss in the unit’s service time. The heat sinks were modeled using COMSOL Multiphysics with the actual dimensions of the devices, and heat generation was modeled with a point source. In the next step, the heat sink designs were modified to achieve a lower temperature in the hi...

  19. Experimentally modeling stochastic processes with less memory by the use of a quantum processor.

    Science.gov (United States)

    Palsson, Matthew S; Gu, Mile; Ho, Joseph; Wiseman, Howard M; Pryde, Geoff J

    2017-02-01

    Computer simulation of observable phenomena is an indispensable tool for engineering new technology, understanding the natural world, and studying human society. However, the most interesting systems are often so complex that simulating their future behavior demands storing immense amounts of information regarding how they have behaved in the past. For increasingly complex systems, simulation becomes increasingly difficult and is ultimately constrained by resources such as computer memory. Recent theoretical work shows that quantum theory can reduce this memory requirement beyond ultimate classical limits, as measured by a process' statistical complexity, C. We experimentally demonstrate this quantum advantage in simulating stochastic processes. Our quantum implementation observes a memory requirement of Cq = 0.05 ± 0.01, far below the ultimate classical limit of C = 1. Scaling up this technique would substantially reduce the memory required in simulations of more complex systems.

  20. Experimentally modeling stochastic processes with less memory by the use of a quantum processor

    Science.gov (United States)

    Palsson, Matthew S.; Gu, Mile; Ho, Joseph; Wiseman, Howard M.; Pryde, Geoff J.

    2017-01-01

    Computer simulation of observable phenomena is an indispensable tool for engineering new technology, understanding the natural world, and studying human society. However, the most interesting systems are often so complex that simulating their future behavior demands storing immense amounts of information regarding how they have behaved in the past. For increasingly complex systems, simulation becomes increasingly difficult and is ultimately constrained by resources such as computer memory. Recent theoretical work shows that quantum theory can reduce this memory requirement beyond ultimate classical limits, as measured by a process’ statistical complexity, C. We experimentally demonstrate this quantum advantage in simulating stochastic processes. Our quantum implementation observes a memory requirement of Cq = 0.05 ± 0.01, far below the ultimate classical limit of C = 1. Scaling up this technique would substantially reduce the memory required in simulations of more complex systems. PMID:28168218

  1. Reducing the computational requirements for simulating tunnel fires by combining multiscale modelling and multiple processor calculation

    DEFF Research Database (Denmark)

    Vermesi, Izabella; Rein, Guillermo; Colella, Francesco

    2017-01-01

    directly. The feasibility analysis showed a difference of only 2% in temperature results from the published reference work that was performed with Ansys Fluent (Colella et al., 2010). The reduction in simulation time was significantly larger when using multiscale modelling than when performing multiple...

  2. Optimizing ion channel models using a parallel genetic algorithm on graphical processors.

    Science.gov (United States)

    Ben-Shalom, Roy; Aviv, Amit; Razon, Benjamin; Korngreen, Alon

    2012-01-01

    We have recently shown that we can semi-automatically constrain models of voltage-gated ion channels by combining a stochastic search algorithm with ionic currents measured using multiple voltage-clamp protocols. Although numerically successful, this approach is highly demanding computationally, with optimization on a high-performance Linux cluster typically lasting several days. To solve this computational bottleneck we converted our optimization algorithm to work on a graphics processing unit (GPU) using NVIDIA's CUDA. Parallelizing the process on a Fermi graphics computing engine from NVIDIA increased the speed ∼180 times over an application running on an 80-node Linux cluster, considerably reducing simulation times. This application allows users to optimize models of ion channel kinetics on a single, inexpensive desktop "super computer," greatly reducing the time and cost of building models relevant to neuronal physiology. We also demonstrate that the point of algorithm parallelization is crucial to its performance. We substantially reduced computing time by solving the ODEs (Ordinary Differential Equations) in a way that massively reduces memory transfers to and from the GPU. This approach may be applied to speed up other data-intensive applications requiring iterative solutions of ODEs.
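
The optimization loop itself is straightforward; what follows is a toy genetic algorithm fitting a deliberately simplified current model, to show where the population-wide cost evaluation (the step the authors moved to the GPU) sits. The two-parameter model and all constants are illustrative stand-ins, not the voltage-gated channel kinetics used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.1, 50.0, 200)          # time points, ms (arbitrary)

def model_current(params, t):
    """Toy stand-in for a channel current: I(t) = g * (1 - exp(-t / tau))."""
    g, tau = params
    return g * (1.0 - np.exp(-t / tau))

target = model_current((4.2, 7.5), t) + rng.normal(0.0, 0.05, t.size)

def population_cost(pop):
    # Whole-population cost evaluation: the data-parallel step that the
    # paper offloads to the GPU; here it is plain vectorized NumPy.
    sims = np.stack([model_current(p, t) for p in pop])
    return ((sims - target) ** 2).sum(axis=1)

pop = rng.uniform([0.1, 0.1], [10.0, 20.0], size=(64, 2))
for generation in range(200):
    cost = population_cost(pop)
    elite = pop[np.argsort(cost)[:16]]                        # selection
    children = elite[rng.integers(0, 16, 48)]                 # reproduction
    children = children + rng.normal(0.0, 0.1, children.shape)  # mutation
    pop = np.vstack([elite, children])
print("best parameters found:", pop[np.argmin(population_cost(pop))])
```

The paper's key observation maps onto this sketch directly: parallelize at the level of `population_cost` (one ODE solve per candidate, all data resident on the device) rather than inside a single simulation, so memory transfers stay rare.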

  3. Embedded Processor Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — The Embedded Processor Laboratory provides the means to design, develop, fabricate, and test embedded computers for missile guidance electronics systems in support...

  4. A Performance-Prediction Model for PIC Applications on Clusters of Symmetric MultiProcessors: Validation with Hierarchical HPF+OpenMP Implementation

    Directory of Open Access Journals (Sweden)

    Sergio Briguglio

    2003-01-01

    Full Text Available A performance-prediction model is presented which describes different hierarchical workload decomposition strategies for particle-in-cell (PIC) codes on Clusters of Symmetric MultiProcessors. The devised workload decomposition is hierarchically structured: a higher-level decomposition among the computational nodes, and a lower-level one among the processors of each computational node. Several decomposition strategies are evaluated by means of the prediction model with respect to memory occupancy, parallelization efficiency and the required programming effort. These strategies have been implemented by integrating the high-level languages High Performance Fortran (at the inter-node stage) and OpenMP (at the intra-node stage). The details of these implementations are presented, and the experimental values of parallelization efficiency are compared with the predicted results.
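
The record does not reproduce the model's equations, but a generic two-level prediction of this kind can be sketched as follows, with a communication cost at each decomposition level growing with the number of participants at that level. The overhead coefficients are invented for illustration and are not the paper's fitted values.

```python
def predicted_efficiency(n_nodes, cores_per_node, t_compute=1.0,
                         inter_node_overhead=0.05, intra_node_overhead=0.01):
    """Two-level estimate: each decomposition level (inter-node, intra-node)
    adds a communication cost proportional to its number of participants."""
    t_comm = inter_node_overhead * n_nodes + intra_node_overhead * cores_per_node
    n_procs = n_nodes * cores_per_node
    speedup = n_procs * t_compute / (t_compute + t_comm)
    return speedup / n_procs    # parallel efficiency in [0, 1]

for nodes in (1, 4, 16):
    for cores in (2, 8):
        print(f"{nodes:>2} nodes x {cores} cores -> "
              f"efficiency {predicted_efficiency(nodes, cores):.3f}")
```

Even this crude form captures the qualitative trade-off the paper evaluates: inter-node (HPF) costs dominate scaling across nodes, while intra-node (OpenMP) costs stay comparatively cheap.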

  5. 3-D model of a radial flow sub-watt methanol fuel processor

    Energy Technology Data Exchange (ETDEWEB)

    Holladay, J. D.; Wang, Y.

    2015-10-01

    A 3-D model is presented for a novel sub-watt packed bed reactor. The reactor uses an annular inlet flow combined with a radial-flow packed bed. The baseline reactor is compared to a reactor with multiple outlets and a reactor with 3 internal fins. Increasing the number of outlets from 1 to 4 improved the flow distribution but did not increase performance in the simulation. Inserting fins, however, allowed a temperature decrease of approximately 35 K at the same inlet flow; alternatively, the inlet flow rate could be increased by a factor of 2.8 while maintaining >99% conversion.

  6. Array processors in chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Ostlund, N.S.

    1980-01-01

    The field of attached scientific processors (''array processors'') is surveyed, and an attempt is made to indicate their present and possible future use in computational chemistry. The current commercial products from Floating Point Systems, Inc., Datawest Corporation, and CSP, Inc. are discussed.

  7. Verilog Implementation of 32-Bit CISC Processor

    Directory of Open Access Journals (Sweden)

    P.Kanaka Sirisha

    2016-04-01

    Full Text Available The project deals with the design of a 32-bit CISC processor and the modeling of its components in the Verilog language. The entire processor uses a 32-bit bus to deal with all the registers and memories. The processor implements various arithmetic, logical and data-transfer operations using variable-length instructions, which is the core property of the CISC architecture. The processor also supports various addressing modes to perform a 32-bit instruction. Our processor uses the Harvard architecture (i.e., separate program and data memories) and hence has different buses to negotiate with the program memory and data memory individually. This feature enhances the speed of the processor; it therefore has two different program counters to point to memory locations in the program memory and data memory. Our processor provides ‘instruction queuing’, which saves the time needed to fetch instructions and hence increases the speed of operation. An ‘interrupt service routine’ is provided to let the processor handle interrupts.

  8. A Parallel and Concurrent Implementation of Lin-Kernighan Heuristic (LKH-2 for Solving Traveling Salesman Problem for Multi-Core Processors using SPC3 Programming Model

    Directory of Open Access Journals (Sweden)

    Muhammad Ali Ismail

    2011-08-01

    Full Text Available With the arrival of multi-core processors, every processor now has built-in parallel computational power, which can be fully utilized only if the program in execution is written accordingly. This study is part of on-going research into the design of a new parallel programming model for multi-core processors. In this paper we present a combined parallel and concurrent implementation of the Lin-Kernighan heuristic (LKH-2) for solving the Travelling Salesman Problem (TSP) using a newly developed parallel programming model, SPC3 PM, for general-purpose multi-core processors. This implementation is found to be very simple, highly efficient, scalable and less time-consuming compared to existing serial LKH-2 implementations in a multi-core processing environment. We have tested our parallel implementation of LKH-2 with medium and large TSP instances from TSPLIB, and for all these tests our proposed approach has shown much improved performance and scalability.
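
SPC3 PM and LKH-2 itself cannot be reconstructed from this abstract, but the coarse-grained pattern (independent local searches running in parallel on separate cores) can be illustrated with ordinary 2-opt restarts and Python's multiprocessing. This is a stand-in for the approach, not the authors' implementation.

```python
import random
from multiprocessing import Pool

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def tour_length(tour, pts):
    return sum(dist(pts[tour[i - 1]], pts[tour[i]]) for i in range(len(tour)))

def two_opt_restart(args):
    """One independent 2-opt local search from a random starting tour."""
    seed, pts = args
    rnd = random.Random(seed)
    tour = list(range(len(pts)))
    rnd.shuffle(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]   # reverse a segment
                if tour_length(cand, pts) < tour_length(tour, pts):
                    tour, improved = cand, True
    return tour_length(tour, pts), tour

if __name__ == "__main__":
    rnd = random.Random(42)
    pts = [(rnd.random(), rnd.random()) for _ in range(60)]
    with Pool() as pool:            # one restart per core: embarrassingly parallel
        results = pool.map(two_opt_restart, [(s, pts) for s in range(8)])
    print("best tour length:", min(results)[0])
```

Real LKH-2 uses far stronger k-opt moves and candidate lists; the point here is only the shape of the parallelism, where each core improves its own tour with no synchronization until the final reduction.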

  9. Real-time swept source optical coherence tomography imaging of the human airway using a microelectromechanical system endoscope and digital signal processor.

    Science.gov (United States)

    Su, Jianping; Zhang, Jun; Yu, Lingfeng; G Colt, Henri; Brenner, Matthew; Chen, Zhongping

    2008-01-01

    A fast-scan-rate swept laser for optical coherence tomography (OCT) is suitable for recording and analyzing a 3-D image volume. However, overall OCT system speed is limited by data streaming, processing, and storage, so post-processing is a common technique. Endoscopic clinical applications call for on-site diagnosis, which requires a real-time technique. Parallel digital signal processors were applied to stream and process data directly from a data digitizer. A real-time system with a 20-kHz axial line rate, limited only by our swept laser scan rate, was implemented. To match the system speed, an endoscope based on an improved 3-D microelectromechanical motor (diameter 1.5 mm, length 9.4 mm) was developed. In vivo 3-D imaging of the human airway was demonstrated.

  10. Benchmarking a DSP processor

    OpenAIRE

    Lennartsson, Per; Nordlander, Lars

    2002-01-01

    This Master thesis describes the benchmarking of a DSP processor. Benchmarking means measuring the performance in some way. In this report, we have focused on the number of instruction cycles needed to execute certain algorithms. The algorithms we have used in the benchmark are all very common in signal processing today. The results we have reached in this thesis have been compared to benchmarks for other processors, performed by Berkeley Design Technology, Inc. The algorithms were programm...

  11. First Cluster Algorithm Special Purpose Processor

    Science.gov (United States)

    Talapov, A. L.; Andreichenko, V. B.; Dotsenko, Vl. S.; Shchur, L. N.

    We describe the architecture of a special purpose processor built to realize the cluster Wolff algorithm in hardware, which is not hampered by critical slowing down. The processor simulates two-dimensional Ising-like spin systems. With minor changes the same very effective architecture, which can be defined as a Memory Machine, can be used to study phase transitions in a wide range of models in two or three dimensions.

  12. Hardware multiplier processor

    Science.gov (United States)

    Pierce, Paul E.

    1986-01-01

    A hardware processor is disclosed which, in the described embodiment, is a memory-mapped multiplier processor that can operate in parallel with a 16-bit microcomputer. The multiplier processor decodes the address bus to receive specific instructions, so that in one access it can write and automatically perform single- or double-precision multiplication involving a number written to it, with or without addition or subtraction with a previously stored number. It can also, on a single read command, automatically round and scale a previously stored number. The multiplier processor includes two concatenated 16-bit multiplier registers, two concatenated 16-bit multipliers, and four 16-bit product registers connected to an internal 16-bit data bus. A high-level address decoder determines when the multiplier processor is being addressed, and first and second low-level address decoders generate control signals. In addition, certain low-order address lines are used to carry uncoded control signals. First and second control circuits coupled to the decoders generate further control signals and a plurality of clocking pulse trains in response to the decoded and address control signals.

  13. Signal processor packaging design

    Science.gov (United States)

    McCarley, Paul L.; Phipps, Mickie A.

    1993-10-01

    The Signal Processor Packaging Design (SPPD) program was a technology development effort to demonstrate that a miniaturized, high-throughput programmable processor could be fabricated to meet the stringent environment imposed by high-speed kinetic energy guided interceptor and missile applications. This successful program culminated in the delivery of two very small processors, each about the size of a large pin grid array package. Rockwell International's Tactical Systems Division in Anaheim, California developed one of the processors, and the other was developed by Texas Instruments' (TI) Defense Systems and Electronics Group (DSEG) of Dallas, Texas. The SPPD program was sponsored by the Guided Interceptor Technology Branch of the Air Force Wright Laboratory's Armament Directorate (WL/MNSI) at Eglin AFB, Florida and funded by SDIO's Interceptor Technology Directorate (SDIO/TNC). These prototype processors were subjected to rigorous tests of their image processing capabilities, and both successfully demonstrated the ability to process 128 × 128 infrared images at a frame rate of over 100 Hz.

  14. Computational human body models

    NARCIS (Netherlands)

    Wismans, J.S.H.M.; Happee, R.; Dommelen, J.A.W. van

    2005-01-01

    Computational human body models are widely used for automotive crash-safety research and design, and as such have significantly contributed to a reduction of traffic injuries and fatalities. Currently crash simulations are mainly performed using models based on crash dummies. However crash dummies dif

  15. Computational human body models

    NARCIS (Netherlands)

    Wismans, J.S.H.M.; Happee, R.; Dommelen, J.A.W. van

    2005-01-01

    Computational human body models are widely used for automotive crash-safety research and design, and as such have significantly contributed to a reduction of traffic injuries and fatalities. Currently crash simulations are mainly performed using models based on crash dummies. However crash dummies

  16. The Milstar Advanced Processor

    Science.gov (United States)

    Tjia, Khiem-Hian; Heely, Stephen D.; Morphet, John P.; Wirick, Kevin S.

    The Milstar Advanced Processor (MAP) is a 'drop-in' replacement for its predecessor which preserves existing interfaces with other Milstar satellite processors and minimizes the impact of such upgrading on already-developed application software. In addition to flight software development, and hardware development that involves the application of VHSIC technology to the electrical design, the MAP project is developing two sophisticated and similar test environments. High-density RAM and ROM are employed by the MAP memory array. Attention is given to the fine-pitch VHSIC design techniques and lead designs used, as well as the role of TQM and concurrent engineering in the development of the MAP manufacturing process.

  17. Assimilation of satellite altimetry data in hydrological models for improved inland surface water information: Case studies from the "Sentinel-3 Hydrologic Altimetry Processor prototypE" project (SHAPE)

    Science.gov (United States)

    Gustafsson, David; Pimentel, Rafael; Fabry, Pierre; Bercher, Nicolas; Roca, Mónica; Garcia-Mondejar, Albert; Fernandes, Joana; Lázaro, Clara; Ambrózio, Américo; Restano, Marco; Benveniste, Jérôme

    2017-04-01

    This communication is about the Sentinel-3 Hydrologic Altimetry Processor prototypE (SHAPE) project, with a focus on the components dealing with assimilation of satellite altimetry data into hydrological models. The SHAPE research and development project started in September 2015, within the Scientific Exploitation of Operational Missions (SEOM) programme of the European Space Agency. The objectives of the project are to further develop and assess recent improvements in altimetry data, processing algorithms and methods for assimilation into hydrological models, with the overarching goal of supporting improved scientific use of altimetry data and improved inland water information. The objective is also to take scientific steps towards a future inland-water-dedicated processor on the Sentinel-3 ground segment. The study focuses on three main variables of interest in hydrology: river stage, river discharge and lake level. The improved altimetry data from the project are used to estimate river stage, river discharge and lake level information in a data assimilation framework using the hydrological dynamic and semi-distributed model HYPE (Hydrological Predictions for the Environment). This model has been developed by SMHI and includes a data assimilation module based on the Ensemble Kalman filter method. The method will be developed and assessed for a number of case studies with available in situ reference data and satellite altimetry data, based mainly on the CryoSat-2 mission, on which the new processor will be run. Results will be presented from case studies on the Amazon and Danube rivers and Lake Vänern (Sweden). The production of alti-hydro products (water level time series) is improved thanks to the use of water masks, which ease the geo-selection of the CryoSat-2 altimetric measurements, since these are acquired from a geodetic orbit and are thus spread along the river course in space and time. The specific processing of data from this geodetic orbit

  18. Human migraine models

    DEFF Research Database (Denmark)

    Iversen, Helle Klingenberg

    2001-01-01

    The need for experimental models is obvious. In animal models it is possible to study vascular responses, neurogenic inflammation, c-fos expression, etc. However, the pathophysiology of migraine remains unsolved, which is why results from animal studies cannot be directly related to the migraine attack, which is a human experience. A set-up for investigating experimental headache and migraine in humans has been evaluated, and headache mechanisms have been explored using nitroglycerin and other headache-inducing agents. Nitric oxide (NO) or other parts of the NO-activated cascade seems to be responsible...

  19. Interactive Digital Signal Processor

    Science.gov (United States)

    Mish, W. H.

    1985-01-01

    The Interactive Digital Signal Processor, IDSP, consists of a set of time series analysis "operators" based on various algorithms commonly used for digital signal analysis. Processing of a digital signal time series to extract information is usually achieved by applying a number of fairly standard operations. IDSP is an excellent teaching tool for demonstrating the application of time series operators to artificially generated signals.

  20. Beyond processor sharing

    NARCIS (Netherlands)

    Aalto, S.; Ayesta, U.; Borst, S.C.; Misra, V.; Núñez Queija, R.

    2007-01-01

    While the (Egalitarian) Processor-Sharing (PS) discipline offers crucial insights into the performance of fair resource allocation mechanisms, it is inherently limited in analyzing and designing differentiated scheduling algorithms such as Weighted Fair Queueing and Weighted Round-Robin. The Discrimin

  1. The Central Trigger Processor (CTP)

    CERN Multimedia

    Franchini, Matteo

    2016-01-01

    The Central Trigger Processor (CTP) receives trigger information from the calorimeter and muon trigger processors, as well as from other trigger sources. It makes the Level-1 decision (L1A) based on a trigger menu.

  2. A Domain Specific DSP Processor

    OpenAIRE

    Tell, Eric

    2001-01-01

    This thesis describes the design of a domain specific DSP processor. The thesis is divided into two parts. The first part gives some theoretical background, describes the different steps of the design process (both for DSP processors in general and for this project) and motivates the design decisions made for this processor. The second part is a nearly complete design specification. The intended use of the processor is as a platform for hardware acceleration units. Support for this has howe...

  3. Processor register error correction management

    Science.gov (United States)

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.

  4. Dual-core Itanium Processor

    CERN Multimedia

    2006-01-01

    Intel’s first dual-core Itanium processor, code-named "Montecito", is a major release of Intel's Itanium 2 Processor Family, which implements the Intel Itanium architecture on a dual-core processor with two cores per die (integrated circuit). Itanium 2 is much more powerful than its predecessor, with lower power consumption and thermal dissipation.

  5. Human Factors Model

    Science.gov (United States)

    1993-01-01

    Jack is an advanced human factors software package that provides a three-dimensional model for predicting how a human will interact with a given system or environment. It can be used for a broad range of computer-aided design applications. Jack was developed by the Computer Graphics Research Laboratory of the University of Pennsylvania with assistance from NASA's Johnson Space Center, Ames Research Center and the Army. It is the University's first commercial product. Jack is still used for academic purposes at the University of Pennsylvania. Commercial rights were given to Transom Technologies, Inc.

  6. Dynamic Load Balancing using Graphics Processors

    Directory of Open Access Journals (Sweden)

    R Mohan

    2014-04-01

    Full Text Available To get maximum performance on many-core graphics processors, it is important to have an even balance of the workload so that all processing units contribute equally to the task at hand. This can be hard to achieve when the cost of a task is not known beforehand and when new sub-tasks are created dynamically during execution. Two load balancing methods, static task assignment and work stealing using deques, are compared to see which is better suited to the highly parallel world of graphics processors. They have been evaluated on the task of simulating a computer move against a human move in the famous four-in-a-row game. The experiments showed that synchronization can be very expensive, and that new methods which use graphics processor features wisely may be required.
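
The work-stealing variant compared in this record follows a standard shape: each worker pops from the tail of its own deque and, when it runs dry, steals from the head of another. Below is a minimal CPU-side Python sketch of that shape; the paper's implementation targets GPU hardware primitives, which this does not attempt to model.

```python
import threading
from collections import deque

class WorkStealingPool:
    """Minimal work-stealing sketch: own tail is LIFO, steals are FIFO."""

    def __init__(self, n_workers=4):
        self.queues = [deque() for _ in range(n_workers)]
        self.locks = [threading.Lock() for _ in range(n_workers)]
        self.results = []
        self.result_lock = threading.Lock()

    def worker(self, wid):
        while True:
            task = None
            with self.locks[wid]:
                if self.queues[wid]:
                    task = self.queues[wid].pop()          # own tail (LIFO)
            if task is None:
                for victim in range(len(self.queues)):     # look for work to steal
                    with self.locks[victim]:
                        if self.queues[victim]:
                            task = self.queues[victim].popleft()  # victim head
                            break
            if task is None:
                return        # no tasks anywhere; none spawn here, so we are done
            result = task()
            with self.result_lock:
                self.results.append(result)

if __name__ == "__main__":
    pool = WorkStealingPool()
    # Deliberately skewed load: all 100 tasks start on worker 0's deque.
    pool.queues[0].extend(lambda i=i: i * i for i in range(100))
    threads = [threading.Thread(target=pool.worker, args=(w,)) for w in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(len(pool.results), "tasks completed")
```

The LIFO-owner/FIFO-thief split is the classic design choice: owners keep cache-warm recent work while thieves take the oldest, largest-grained tasks, which is what keeps stealing cheap relative to static assignment when loads are skewed.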

  7. New Generation Processor Architecture Research

    Institute of Scientific and Technical Information of China (English)

    Chen Hongsong(陈红松); Hu Mingzeng; Ji Zhenzhou

    2003-01-01

    With the rapid development of microelectronics and hardware, ever faster microprocessors and new architectures must continue to be adopted to meet tomorrow's computing needs. New processor microarchitectures are needed to push performance further and to use higher transistor counts effectively. At the same time, aiming at different usages, processors have been optimized in different respects, such as high performance, low power consumption, small chip area and high security. SOC (System on Chip) and SCMP (Single Chip Multi Processor) constitute the main processor system architectures.

  8. Analysis of Reconfigurable Processors Using Petri Net

    Directory of Open Access Journals (Sweden)

    Hadis Heidari

    2013-07-01

    Full Text Available In this paper, we propose Petri net models for processing elements. The processing elements include a general-purpose processor (GPP), a reconfigurable element (RE), and a hybrid element (combining a GPP with an RE). The models consist of many transitions and places. The model and associated analysis methods provide a promising tool for modeling and performance evaluation of reconfigurable processors. The model is demonstrated by considering a simple example. This paper describes the development of a reconfigurable processor; the developed system is based on the Petri net concept. Petri nets are becoming suitable as a formal model for hardware system design. Designers can use Petri nets as a modeling language to perform high-level analysis of complex processor designs. The simulation is done with the PIPE v4.1 simulator. The simulation results show that the Petri net state spaces are bounded and safe and free of deadlock, and that the average number of tokens in the first place is 0.9901. In these models there are only 5% errors; the analysis time for these models is 0.016 seconds.

  9. The Secondary Organic Aerosol Processor (SOAP v1.0) model: a unified model with different ranges of complexity based on the molecular surrogate approach

    Science.gov (United States)

    Couvidat, F.; Sartelet, K.

    2015-04-01

    In this paper the Secondary Organic Aerosol Processor (SOAP v1.0) model is presented. This model determines the partitioning of organic compounds between the gas and particle phases. It is designed to be modular with different user options depending on the computation time and the complexity required by the user. This model is based on the molecular surrogate approach, in which each surrogate compound is associated with a molecular structure to estimate some properties and parameters (hygroscopicity, absorption into the aqueous phase of particles, activity coefficients and phase separation). Each surrogate can be hydrophilic (condenses only into the aqueous phase of particles), hydrophobic (condenses only into the organic phases of particles) or both (condenses into both the aqueous and the organic phases of particles). Activity coefficients are computed with the UNIFAC (UNIversal Functional group Activity Coefficient; Fredenslund et al., 1975) thermodynamic model for short-range interactions and with the Aerosol Inorganic-Organic Mixtures Functional groups Activity Coefficients (AIOMFAC) parameterization for medium- and long-range interactions between electrolytes and organic compounds. Phase separation is determined by Gibbs energy minimization. The user can choose between an equilibrium representation and a dynamic representation of organic aerosols (OAs). In the equilibrium representation, compounds in the particle phase are assumed to be at equilibrium with the gas phase. However, recent studies show that the organic aerosol is not at equilibrium with the gas phase because the organic phases could be semi-solid (very viscous liquid phase). The condensation-evaporation of organic compounds could then be limited by the diffusion in the organic phases due to the high viscosity. An implicit dynamic representation of secondary organic aerosols (SOAs) is available in SOAP with OAs divided into layers, the first layer being at the center of the particle (slowly
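
The equilibrium option described above reduces, in its simplest form, to absorptive partitioning: each surrogate's particle-phase fraction depends on the total absorbing organic mass, which itself depends on what has condensed, so a fixed-point iteration is natural. The sketch below uses that simplified form with ideal mixing (none of SOAP's UNIFAC/AIOMFAC activity corrections or aqueous phase) and invented concentrations.

```python
def equilibrium_partition(c_total, c_star, seed_oa=0.1, tol=1e-9, max_iter=10000):
    """Simplified absorptive gas/particle equilibrium. Species i has particle
    fraction xi_i = 1 / (1 + C*_i / C_OA); the absorbing organic mass C_OA
    depends on what has condensed, hence the fixed-point iteration."""
    c_oa = seed_oa
    for _ in range(max_iter):
        xi = [1.0 / (1.0 + cs / c_oa) for cs in c_star]
        new_c_oa = seed_oa + sum(f * ct for f, ct in zip(xi, c_total))
        if abs(new_c_oa - c_oa) < tol:
            break
        c_oa = new_c_oa
    return xi, c_oa

# Invented totals and saturation concentrations C*, both in ug/m3.
xi, c_oa = equilibrium_partition(c_total=[2.0, 5.0, 10.0],
                                 c_star=[0.1, 1.0, 10.0])
print("particle-phase fractions:", [round(x, 3) for x in xi])
print("absorbing organic mass:", round(c_oa, 3))
```

The coupling this sketch exposes (more condensed mass lowers every surrogate's effective volatility) is exactly what SOAP resolves with full thermodynamics, and what its dynamic layered representation relaxes when the phases are too viscous for equilibrium to hold.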

  10. Stereoscopic Optical Signal Processor

    Science.gov (United States)

    Graig, Glenn D.

    1988-01-01

    This optical signal processor produces a two-dimensional cross correlation of images from a stereoscopic video camera in real time. The cross correlation is used to identify an object, determine its distance, or measure its movement. The left and right cameras modulate beams from a light source for correlation in a video detector. A switch in position 1 produces information about the range of the object viewed by the cameras; position 2 gives information about movement; position 3 helps to identify the object.

  11. Tiled Multicore Processors

    Science.gov (United States)

    Taylor, Michael B.; Lee, Walter; Miller, Jason E.; Wentzlaff, David; Bratt, Ian; Greenwald, Ben; Hoffmann, Henry; Johnson, Paul R.; Kim, Jason S.; Psota, James; Saraf, Arvind; Shnidman, Nathan; Strumpen, Volker; Frank, Matthew I.; Amarasinghe, Saman; Agarwal, Anant

    For the last few decades Moore’s Law has continually provided exponential growth in the number of transistors on a single chip. This chapter describes a class of architectures, called tiled multicore architectures, that are designed to exploit massive quantities of on-chip resources in an efficient, scalable manner. Tiled multicore architectures combine each processor core with a switch to create a modular element called a tile. Tiles are replicated on a chip as needed to create multicores with any number of tiles. The Raw processor, a pioneering example of a tiled multicore processor, is examined in detail to explain the philosophy, design, and strengths of such architectures. Raw addresses the challenge of building a general-purpose architecture that performs well on a larger class of stream and embedded computing applications than existing microprocessors, while still running existing ILP-based sequential programs with reasonable performance. Central to achieving this goal is Raw’s ability to exploit all forms of parallelism, including ILP, DLP, TLP, and Stream parallelism. Raw approaches this challenge by implementing plenty of on-chip resources - including logic, wires, and pins - in a tiled arrangement, and exposing them through a new ISA, so that the software can take advantage of these resources for parallel applications. Compared to a traditional superscalar processor, Raw performs within a factor of 2x for sequential applications with a very low degree of ILP, about 2x-9x better for higher levels of ILP, and 10x-100x better when highly parallel applications are coded in a stream language or optimized by hand.

  12. Distributed processor allocation for launching applications in a massively connected processors complex

    Science.gov (United States)

    Pedretti, Kevin

    2008-11-18

    A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.

  13. Continuous history variable for programmable quantum processors

    CERN Document Server

    Vlasov, Alexander Yu

    2010-01-01

    This brief note discusses the application of a continuous quantum history ("trash") variable for simplifying the scheme of a programmable quantum processor. A similar scheme may also be tested in other models in the theory of quantum algorithms and complexity, because it provides a modification of a standard operation: quantum function evaluation.

  14. Noise limitations in optical linear algebra processors.

    Science.gov (United States)

    Batsell, S G; Jong, T L; Walkup, J F; Krile, T F

    1990-05-10

    A general statistical noise model is presented for optical linear algebra processors. A statistical analysis which includes device noise, the multiplication process, and the addition operation is undertaken. We focus on those processes which are architecturally independent. Finally, experimental results which verify the analytical predictions are also presented.

  15. A Systolic Array RLS Processor

    OpenAIRE

    Asai, T.; Matsumoto, T.

    2000-01-01

    This paper presents the outline of the systolic array recursive least-squares (RLS) processor prototyped primarily with the aim of broadband mobile communication applications. To execute the RLS algorithm effectively, this processor uses an orthogonal triangularization technique known in matrix algebra as QR decomposition for parallel pipelined processing. The processor board comprises 19 application-specific integrated circuit chips, each with approximately one million gates. Thirty-two bit ...

  16. AMD's 64-bit Opteron processor

    CERN Document Server

    CERN. Geneva

    2003-01-01

    This talk concentrates on issues that relate to obtaining peak performance from the Opteron processor. Compiler options, memory layout, MPI issues in multi-processor configurations and the use of a NUMA kernel will be covered. A discussion of recent benchmarking projects and results will also be included. Biographies: David Rich directs AMD's efforts in high performance computing and also in the use of Opteron processors...

  17. Fast and Accurate Processor Hybrid Model Based on ESL

    Institute of Scientific and Technical Information of China (English)

    鲁超; 魏继增; 常轶松

    2012-01-01

    Because RTL design cannot meet the simulation speed requirements of System-on-Chip (SoC) development, this paper presents a fast and accurate processor hybrid model based on the Electronic System Level (ESL). It applies an ESL design methodology to the proprietary 32-bit embedded microprocessor C*CORE340, constructing the Instruction Set Simulator (ISS) and the cache at different abstraction layers. Experimental results show that the simulation speed of the hybrid model is at least 10 times that of the RTL model, while the simulation accuracy error rate stays below 10%.

  18. Spaceborne Processor Array

    Science.gov (United States)

    Chow, Edward T.; Schatzel, Donald V.; Whitaker, William D.; Sterling, Thomas

    2008-01-01

    A Spaceborne Processor Array in Multifunctional Structure (SPAMS) can lower the total mass of the electronic and structural overhead of spacecraft, resulting in reduced launch costs, while increasing the science return through dynamic onboard computing. SPAMS integrates the multifunctional structure (MFS) and the Gilgamesh Memory, Intelligence, and Network Device (MIND) multi-core in-memory computer architecture into a single-system super-architecture. This transforms every inch of a spacecraft into a sharable, interconnected, smart computing element to increase computing performance while simultaneously reducing mass. The MIND in-memory architecture provides a foundation for high-performance, low-power, and fault-tolerant computing. The MIND chip has an internal structure that includes memory, processing, and communication functionality. The Gilgamesh is a scalable system comprising multiple MIND chips interconnected to operate as a single, tightly coupled, parallel computer. The array of MIND components shares a global, virtual name space for program variables and tasks that are allocated at run time to the distributed physical memory and processing resources. Individual processor- memory nodes can be activated or powered down at run time to provide active power management and to configure around faults. A SPAMS system is comprised of a distributed Gilgamesh array built into MFS, interfaces into instrument and communication subsystems, a mass storage interface, and a radiation-hardened flight computer.

  19. Conceptual model of a logical system processor of selection to electrical filters for correction of harmonics in low voltage lines

    Science.gov (United States)

    Lastre, Arlys; Torriente, Ives; Méndez, Erik F.; Cordovés, Alexis

    2017-06-01

    In the present investigation, the authors propose a conceptual model for the analysis and decision making involved in selecting corrective models to mitigate harmonic distortion. The authors considered the configuration of conventional models as well as adaptive models, such as filters incorporating artificial neural networks (ANNs), for the mitigating effect. The work also presents the experimental model by means of a flowchart, denoting the need for artificial intelligence techniques in the formulation of the proposed model. The other aspects considered and analyzed are the adaptability and usage of the model, with local reference to the power-quality laws and guidelines required by Ecuador's Ministry of Electricity and Renewable Energy (MEER).

  20. Benchmark Comparison of Dual- and Quad-Core Processor Linux Clusters with Two Global Climate Modeling Workloads

    Science.gov (United States)

    McGalliard, James

    2008-01-01

    This viewgraph presentation details the science and systems environments that NASA's High End Computing program serves, including a discussion of the workload involved in the processing for global climate modeling. The Goddard Earth Observing System Model, Version 5 (GEOS-5) is a system of models integrated using the Earth System Modeling Framework (ESMF). The GEOS-5 system was used for the benchmark tests, and the results of the tests are shown and discussed. Tests were also run for the cubed-sphere system; results for these tests are also shown.

  1. Embedded Processor Oriented Compiler Infrastructure

    Directory of Open Access Journals (Sweden)

    DJUKIC, M.

    2014-08-01

    Full Text Available In recent years, research on special compiler techniques and algorithms for embedded processors has broadened the knowledge of how to achieve better compiler performance for irregular processor architectures. However, industrial-strength compilers, besides the ability to generate efficient code, must also be robust, understandable, maintainable, and extensible. This raises the need for a compiler infrastructure that provides the means for convenient implementation of embedded-processor-oriented compiler techniques. The Cirrus Logic Coyote 32 DSP is an example that shows how traditional compiler infrastructure is not able to cope with the problem. That is why a new compiler infrastructure was developed for this processor, based on research in the field of embedded system software tools and on experience in the development of industrial-strength compilers. The new infrastructure is described in this paper. The quality of compiler-generated code is compared with that of code generated by the previous compiler for the same processor architecture.

  2. Modeling human color categorization

    NARCIS (Netherlands)

    van den Broek, Egon; Schouten, Th.E.; Kisters, P.M.F.

    2008-01-01

    A unique color space segmentation method is introduced. It is founded on features of human cognition, where 11 color categories are used in processing color. In two experiments, human subjects were asked to categorize color stimuli into these 11 color categories, which resulted in markers for a Colo

  3. Modeling human color categorization

    NARCIS (Netherlands)

    van den Broek, Egon; Schouten, Th.E.; Kisters, P.M.F.

    A unique color space segmentation method is introduced. It is founded on features of human cognition, where 11 color categories are used in processing color. In two experiments, human subjects were asked to categorize color stimuli into these 11 color categories, which resulted in markers for a

  4. Integrated Environmental Modelling: human decisions, human challenges

    Science.gov (United States)

    Glynn, Pierre D.

    2015-01-01

    Integrated Environmental Modelling (IEM) is an invaluable tool for understanding the complex, dynamic ecosystems that house our natural resources and control our environments. Human behaviour affects the ways in which the science of IEM is assembled and used for meaningful societal applications. In particular, human biases and heuristics reflect adaptation and experiential learning to issues with frequent, sharply distinguished, feedbacks. Unfortunately, human behaviour is not adapted to the more diffusely experienced problems that IEM typically seeks to address. Twelve biases are identified that affect IEM (and science in general). These biases are supported by personal observations and by the findings of behavioural scientists. A process for critical analysis is proposed that addresses some human challenges of IEM and solicits explicit description of (1) represented processes and information, (2) unrepresented processes and information, and (3) accounting for, and cognizance of, potential human biases. Several other suggestions are also made that generally complement maintaining attitudes of watchful humility, open-mindedness, honesty and transparent accountability. These suggestions include (1) creating a new area of study in the behavioural biogeosciences, (2) using structured processes for engaging the modelling and stakeholder communities in IEM, and (3) using ‘red teams’ to increase resilience of IEM constructs and use.

  5. Baseband processor development for the Advanced Communications Satellite Program

    Science.gov (United States)

    Moat, D.; Sabourin, D.; Stilwell, J.; Mccallister, R.; Borota, M.

    1982-01-01

    An onboard baseband-processor concept for a satellite-switched time-division multiple-access (SS-TDMA) communication system was developed for NASA Lewis Research Center. The baseband processor routes and controls traffic on an individual message basis while providing significant advantages in improved link margins and system flexibility. Key technology developments required to prove the flight readiness of the baseband-processor design are being verified in a baseband-processor proof-of-concept model. These technology developments include serial MSK modems, a Clos-type baseband routing switch, a single-chip CMOS maximum-likelihood convolutional decoder, and custom LSI implementation of high-speed, low-power ECL building blocks.

  6. Mathematical models of human behavior

    DEFF Research Database (Denmark)

    Møllgaard, Anders Edsberg

    data set, along with work on other behavioral data. The overall goal is to contribute to a quantitative understanding of human behavior using big data and mathematical models. Central to the thesis is the determination of the predictability of different human activities. Upper limits are derived......, thereby implying that interactions between spreading processes are driving forces of attention dynamics. Overall, the thesis contributes to a quantitative understanding of a wide range of different human behaviors by applying mathematical modeling to behavioral data. There can be no doubt......During the last 15 years there has been an explosion in human behavioral data caused by the emergence of cheap electronics and online platforms. This has spawned a whole new research field called computational social science, which has a quantitative approach to the study of human behavior. Most...

  7. Multi Microkernel Operating Systems for Multi-Core Processors

    Directory of Open Access Journals (Sweden)

    Rami Matarneh

    2009-01-01

    Full Text Available Problem statement: Amid the huge development in the processor industry in response to the increasing demand for high-speed processors, manufacturers achieved the goal of producing the required processors, but the industry faced problems not amenable to solution, such as complexity, hard management and large energy consumption. These problems forced manufacturers to stop focusing on increasing the speed of processors and to move toward parallel processing to increase performance. This eventually produced multi-core processors with high performance, if used properly. Unfortunately, these processors have so far not been used as they should be, because of a lack of support from operating systems and software applications. Approach: The approach was based on the assumption that a single-kernel operating system is not enough to manage multi-core processors, and on rethinking the construction of operating systems as multi-kernel systems. One of these kernels serves as the master kernel and the others serve as slave kernels. Results: Theoretically, the proposed model showed that it can do much better than existing models, because it supports single-threaded and multi-threaded processing at the same time; in addition, it can make better use of multi-core processors because it divides the load almost equally between the cores and the kernels, which leads to a significant improvement in the performance of the operating system. Conclusion: The software industry needs to step out of its classical framework to keep pace with hardware development. This objective was achieved by rethinking how operating systems and software are built, with new, innovative methodologies and methods, since current theories of operating systems are no longer capable of achieving future aspirations.

  8. Libera Electron Beam Position Processor

    CERN Document Server

    Ursic, Rok

    2005-01-01

    Libera is a product family delivering unprecedented possibilities for either building powerful single-station solutions or architecting complex feedback systems in the field of accelerator instrumentation and controls. This paper presents the functionality and field performance of its first member, the electron beam position processor. It offers superior performance, with multiple measurement channels simultaneously delivering position measurements in digital format with MHz, kHz and Hz bandwidths. This all-in-one product, facilitating pulsed and CW measurements, is much more than simply a high-performance beam position measuring device delivering micrometer-level reproducibility with sub-micrometer resolution. Rich connectivity options and innate processing power make it a powerful feedback building block. By interconnecting multiple Libera electron beam position processors one can build a low-latency, high-throughput orbit feedback system without adding additional hardware. Libera electron beam position processor ...

  9. Making CSB + -Trees Processor Conscious

    DEFF Research Database (Denmark)

    Samuel, Michael; Pedersen, Anders Uhl; Bonnet, Philippe

    2005-01-01

    Cache-conscious indexes, such as CSB+-tree, are sensitive to the underlying processor architecture. In this paper, we focus on how to adapt the CSB+-tree so that it performs well on a range of different processor architectures. Previous work has focused on the impact of node size on the performance...... of the CSB+-tree. We argue that it is necessary to consider a larger group of parameters in order to adapt CSB+-tree to processor architectures as different as Pentium and Itanium. We identify this group of parameters and study how it impacts the performance of CSB+-tree on Itanium 2. Finally, we propose...... a systematic method for adapting CSB+-tree to new platforms. This work is a first step towards integrating CSB+-tree in MySQL’s heap storage manager....

  10. A natural human hand model

    NARCIS (Netherlands)

    Van Nierop, O.A.; Van der Helm, A.; Overbeeke, K.J.; Djajadiningrat, T.J.P.

    2007-01-01

    We present a skeletal linked model of the human hand that has natural motion. We show how this can be achieved by introducing a new biology-based joint axis that simulates natural joint motion and a set of constraints that reduce an estimated 150 possible motions to twelve. The model is based on obs

  11. Reconfigurable Communication Processor:A New Approach for Network Processor

    Institute of Scientific and Technical Information of China (English)

    孙华; 陈青山; 张文渊

    2003-01-01

    As the traditional RISC+ASIC/ASSP approach to network processor design cannot meet today's requirements, this paper describes an alternative approach, a Reconfigurable Processing Architecture, to boost performance to ASIC level while preserving the programmability of a traditional RISC-based system. The paper covers both the hardware architecture and the software development environment architecture.

  12. Mathematical models of human behavior

    DEFF Research Database (Denmark)

    Møllgaard, Anders Edsberg

    During the last 15 years there has been an explosion in human behavioral data caused by the emergence of cheap electronics and online platforms. This has spawned a whole new research field called computational social science, which has a quantitative approach to the study of human behavior. Most...... studies have considered data sets with just one behavioral variable such as email communication. The Social Fabric interdisciplinary research project is an attempt to collect a more complete data set on human behavior by providing 1000 smartphones with pre-installed data collection software to students...... data set, along with work on other behavioral data. The overall goal is to contribute to a quantitative understanding of human behavior using big data and mathematical models. Central to the thesis is the determination of the predictability of different human activities. Upper limits are derived...

  13. Fast, Massively Parallel Data Processors

    Science.gov (United States)

    Heaton, Robert A.; Blevins, Donald W.; Davis, ED

    1994-01-01

    The proposed fast, massively parallel data processor contains an 8x16 array of processing elements with an efficient interconnection scheme and options for flexible local control. Processing elements communicate with each other on an "X" interconnection grid and with external memory via a high-capacity input/output bus. This approach to conditional operation nearly doubles the speed of various arithmetic operations.

  14. ASSP Advanced Sensor Signal Processor.

    Science.gov (United States)

    1984-06-01

    transfer data and commands. When a processor receives the required data (image) and/or command, that data will be operated on autonomously. The ... RAM is provided by two separately controlled DMA address generator chips (Am2940). Each of these DMA chips creates an 8-bit address. One DMA chip gene

  15. Cassava processors' awareness of occupational and environmental ...

    African Journals Online (AJOL)

    Cassava processors' awareness of occupational and environmental hazards ... Majority of the respondents also complained of lack of water (78.4%), lack of ... so as to reduce the problems faced by cassava processors during processing.

  16. A Bayesian sequential processor approach to spectroscopic portal system decisions

    Energy Technology Data Exchange (ETDEWEB)

    Sale, K; Candy, J; Breitfeller, E; Guidry, B; Manatt, D; Gosnell, T; Chambers, D

    2007-07-31

    The development of faster, more reliable techniques to detect radioactive contraband in a portal-type scenario is an extremely important problem, especially in this era of constant terrorist threats. Towards this goal, the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor, each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data, rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, the physics and signal processing models, and the decision functions are discussed, along with the first results of our research.
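
    A minimal sketch of the sequential idea, in Python: after each count arrival the log posterior odds of "source present" versus "background only" are updated from the exponential inter-arrival statistics, and a decision is declared as soon as a threshold is crossed rather than at the end of a fixed counting interval. All rates and thresholds below are hypothetical, not the authors' physics model.

      import numpy as np

      rng = np.random.default_rng(1)

      lam_bg = 5.0    # background count rate (counts/s) -- illustrative
      lam_src = 4.0   # additional source count rate -- illustrative
      lam1 = lam_bg + lam_src

      # simulate inter-arrival times from a source-present stream
      dts = rng.exponential(1.0 / lam1, size=1000)

      log_odds = 0.0                            # flat prior over H0/H1
      upper, lower = np.log(99), -np.log(99)    # decide at 99% posterior
      for n, dt in enumerate(dts, 1):
          # log likelihood ratio of one inter-arrival time, H1 vs H0
          log_odds += np.log(lam1 / lam_bg) - (lam1 - lam_bg) * dt
          if log_odds > upper:
              print(f"source declared after {n} counts")
              break
          if log_odds < lower:
              print(f"background-only declared after {n} counts")
              break

    Because the odds are updated per event, the decision time adapts to the strength of the evidence, which is precisely the advantage claimed over fixed counting intervals.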

  17. Design Principles for Synthesizable Processor Cores

    DEFF Research Database (Denmark)

    Schleuniger, Pascal; McKee, Sally A.; Karlsson, Sven

    2012-01-01

    As FPGAs get more competitive, synthesizable processor cores become an attractive choice for embedded computing. Currently popular commercial processor cores do not fully exploit current FPGA architectures. In this paper, we propose general design principles to increase instruction throughput...... on FPGA-based processor cores: first, superpipelining enables higher-frequency system clocks, and second, predicated instructions circumvent costly pipeline stalls due to branches. To evaluate their effects, we develop Tinuso, a processor architecture optimized for FPGA implementation. We demonstrate...

  18. Flexible Bayesian Human Fecundity Models.

    Science.gov (United States)

    Kim, Sungduk; Sundaram, Rajeshwari; Buck Louis, Germaine M; Pyper, Cecilia

    2012-12-01

    Human fecundity is an issue of considerable interest for both epidemiological and clinical audiences, and is dependent upon a couple's biologic capacity for reproduction coupled with behaviors that place a couple at risk for pregnancy. Bayesian hierarchical models have been proposed to better model the conception probabilities by accounting for the acts of intercourse around the day of ovulation, i.e., during the fertile window. These models can be viewed in the framework of a generalized nonlinear model with an exponential link. However, a fixed choice of link function may not always provide the best fit, leading to potentially biased estimates for the probability of conception. Motivated by this, we propose a general class of models for fecundity by relaxing the choice of the link function under the generalized nonlinear model framework. We use a sample from the Oxford Conception Study (OCS) to illustrate the utility and fit of this general class of models for estimating human conception. Our findings reinforce the need for attention to be paid to the choice of link function in modeling conception, as it may bias the estimation of conception probabilities. Various properties of the proposed models are examined, and a Markov chain Monte Carlo sampling algorithm is developed for implementing the Bayesian computations. The deviance information criterion measure and the logarithm of the pseudo marginal likelihood are used to guide the choice of links. The supplemental material section contains technical details of the proof of the theorem stated in the paper, along with further simulation results and analysis.
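
    To make the role of the link function concrete, the sketch below builds a cycle-level conception probability from intercourse indicators X_j over a fertile window and contrasts an exponential-type link with a logit-type alternative. The day-specific parameters and the particular logit form are purely illustrative, not OCS estimates or the paper's exact specification.

      import numpy as np

      # hypothetical day-specific effects for a 6-day fertile window
      lam = np.array([0.05, 0.10, 0.20, 0.35, 0.25, 0.08])

      def p_conception(X, link="exponential"):
          """Cycle-level conception probability from intercourse pattern X."""
          eta = float(np.dot(lam, X))                 # linear predictor
          if link == "exponential":
              return 1.0 - np.exp(-eta)               # exponential-type link
          return 1.0 / (1.0 + np.exp(-(eta - 1.0)))   # shifted logit, illustrative

      X = np.array([0, 1, 1, 0, 1, 0])   # acts of intercourse in the window
      print(p_conception(X), p_conception(X, link="logit"))

    Even on this toy example the two links assign visibly different probabilities to the same intercourse pattern, which is the source of the bias the paper warns about.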

  19. The Shigella human challenge model.

    Science.gov (United States)

    Porter, C K; Thura, N; Ranallo, R T; Riddle, M S

    2013-02-01

    Shigella is an important bacterial cause of infectious diarrhoea globally. The Shigella human challenge model has been used since 1946 for a variety of objectives including understanding disease pathogenesis, human immune responses and allowing for an early assessment of vaccine efficacy. A systematic review of the literature regarding experimental shigellosis in human subjects was conducted. Summative estimates were calculated by strain and dose. While a total of 19 studies evaluating nine strains at doses ranging from 10 to 1 × 10^10 colony-forming units were identified, most studies utilized the S. sonnei strain 53G and the S. flexneri strain 2457T. Inoculum solution and pre-inoculation buffering have varied over time, although diarrhoea attack rates do not appear to increase above 75-80%, and dysentery rates remain fairly constant, highlighting the need for additional dose-ranging studies. Expansion of the model to include additional strains from different serotypes will elucidate serotype- and strain-specific outcome variability.

  20. 40 CFR 791.45 - Processors.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 31 2010-07-01 Processors. 791.45 Section 791.45 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL ACT (CONTINUED) DATA REIMBURSEMENT Basis for Proposed Order § 791.45 Processors. (a) Generally, processors will be...

  1. Bilinear Interpolation Image Scaling Processor for VLSI

    Directory of Open Access Journals (Sweden)

    Ms. Pawar Ashwini Dilip

    2014-05-01

    We introduce an image scaling processor using VLSI techniques. It consists of bilinear interpolation, a clamp filter and a sharpening spatial filter. The bilinear interpolation algorithm is popular due to its computational efficiency and image quality, but the resulting image contains blurred edges and aliasing artifacts after scaling. To reduce the blurring and aliasing artifacts, the sharpening spatial filter and clamp filter are used as pre-filters. These filters are realized using T-model and inversed T-model convolution kernels. To reduce the memory buffer and computing resources of the proposed image processor design, two T-model or inversed T-model filters are combined into a combined filter which requires only one line-buffer memory. Also, to reduce hardware cost, a reconfigurable calculation unit (RCU) is invented. The VLSI architecture in this work can achieve 280 MHz with a 6.08K gate count, and its core area is 30,378 μm2 synthesized by a 0.13-μm CMOS process.
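
    For reference, the bilinear step at the heart of such a scaler takes only a few lines of NumPy. This is a plain software sketch with border clamping, not the paper's T-model/inversed T-model filter hardware.

      import numpy as np

      def scale_bilinear(img, out_h, out_w):
          """Scale a 2-D grayscale image with bilinear interpolation."""
          in_h, in_w = img.shape
          ys = np.linspace(0, in_h - 1, out_h)   # sample positions in input
          xs = np.linspace(0, in_w - 1, out_w)
          y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
          y1 = np.minimum(y0 + 1, in_h - 1); x1 = np.minimum(x0 + 1, in_w - 1)
          wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
          top = (1 - wx) * img[y0][:, x0] + wx * img[y0][:, x1]
          bot = (1 - wx) * img[y1][:, x0] + wx * img[y1][:, x1]
          return (1 - wy) * top + wy * bot

      out = scale_bilinear(np.arange(16.0).reshape(4, 4), 8, 8)
      print(out.shape)   # (8, 8)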

  2. Communications systems and methods for subsea processors

    Science.gov (United States)

    Gutierrez, Jose; Pereira, Luis

    2016-04-26

    A subsea processor may be located near the seabed of a drilling site and used to coordinate operations of underwater drilling components. The subsea processor may be enclosed in a single interchangeable unit that fits a receptor on an underwater drilling component, such as a blow-out preventer (BOP). The subsea processor may issue commands to control the BOP and receive measurements from sensors located throughout the BOP. A shared communications bus may interconnect the subsea processor and underwater components and the subsea processor and a surface or onshore network. The shared communications bus may be operated according to a time division multiple access (TDMA) scheme.

  3. Invasive tightly coupled processor arrays

    CERN Document Server

    LARI, VAHID

    2016-01-01

    This book introduces new massively parallel computer (MPSoC) architectures called invasive tightly coupled processor arrays (TCPAs). It proposes strategies, architecture designs, and programming interfaces for invasive TCPAs that allow invading and subsequently executing loop programs with strict requirements or guarantees of non-functional execution qualities such as performance, power consumption, and reliability. For the first time, such a configurable processor array architecture, consisting of locally interconnected VLIW processing elements, can be claimed by programs, either in full or in part, using the principle of invasive computing. Invasive TCPAs provide unprecedented energy efficiency for the parallel execution of nested loop programs by avoiding the global memory accesses on which GPUs rely, and may even support loops with complex dependencies, such as loop-carried dependencies, that are not amenable to parallel execution on GPUs. For this purpose, the book proposes different invasion strategies for claiming a desire...

  4. An Experimental Digital Image Processor

    Science.gov (United States)

    Cok, Ronald S.

    1986-12-01

    A prototype digital image processor for enhancing photographic images has been built in the Research Laboratories at Kodak. This image processor implements a particular version of each of the following algorithms: photographic grain and noise removal, edge sharpening, multidimensional image segmentation, image-tone reproduction adjustment, and image-color saturation adjustment. All processing, except for segmentation and analysis, is performed by massively parallel and pipelined special-purpose hardware. This hardware runs at 10 MHz and can be adjusted to handle any size digital image. The segmentation circuits run at 30 MHz. The segmentation data are used by three single-board computers for calculating the tonescale adjustment curves. The system, as a whole, has the capability of completely processing 10 million three-color pixels per second. The grain removal and edge enhancement algorithms represent the largest part of the pipelined hardware, operating at over 8 billion integer operations per second. The edge enhancement is performed by unsharp masking, and the grain removal is done using a collapsed Walsh-Hadamard transform filtering technique (U.S. Patent No. 4549212). These two algorithms can be realized using four basic processing elements, some of which have been implemented as VLSI semicustom integrated circuits. These circuits implement the algorithms with a high degree of efficiency, modularity, and testability. The digital processor is controlled by a Digital Equipment Corporation (DEC) PDP-11 minicomputer and can be interfaced to electronic printing and/or electronic scanning devices. The processor has been used to process over a thousand diagnostic images.

  5. The UA1 trigger processor

    CERN Document Server

    Grayer, G H

    1981-01-01

    Experiment UA1 is a large multipurpose spectrometer at the CERN proton-antiproton collider. The principal trigger is formed on the basis of the energy deposition in calorimeters. A trigger decision taken in under 2.4 microseconds can avoid dead-time losses due to the bunched nature of the beam. To achieve this, fast 8-bit charge-to-digital converters have been built, followed by two identical digital processors tailored to the experiment. The outputs of groups of the 2440 photomultipliers in the calorimeters are summed to form a total of 288 input channels to the ADCs. A look-up table in RAM is used to convert the digitised photomultiplier signals to energy in one processor, and to transverse energy in the other. Each processor forms four sums from a chosen combination of input channels, and also counts the number of clusters with electromagnetic or hadronic energy above pre-determined levels. Up to twelve combinations of these conditions, together with external information, may be combined in coincidence or in...

  6. Taxonomy of Data Prefetching for Multicore Processors

    Institute of Scientific and Technical Information of China (English)

    Surendra Byna; Yong Chen; Xian-He Sun

    2009-01-01

    Data prefetching is an effective data access latency hiding technique to mask the CPU stall caused by cache misses and to bridge the performance gap between processor and memory. With hardware and/or software support, data prefetching brings data closer to a processor before it is actually needed. Many prefetching techniques have been developed for single-core processors. Recent developments in processor technology have brought multicore processors into the mainstream. While some of the single-core prefetching techniques are directly applicable to multicore processors, numerous novel strategies have been proposed in the past few years to take advantage of multiple cores. This paper aims to provide a comprehensive review of the state-of-the-art prefetching techniques, and proposes a taxonomy that classifies various design concerns in developing a prefetching strategy, especially for multicore processors. We also compare various existing methods through analysis.
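
    As a toy illustration of the simplest entry in such a taxonomy, the sketch below simulates sequential one-block-lookahead prefetching against an idealized (unbounded) cache and counts misses; the block size and trace are illustrative.

      def run_trace(addrs, block=64, prefetch=True):
          """Count misses of an idealized cache, optionally with
          one-block-lookahead sequential prefetching."""
          cache, misses = set(), 0
          for a in addrs:
              b = a // block
              if b not in cache:
                  misses += 1
                  cache.add(b)
              if prefetch:
                  cache.add(b + 1)   # bring the next block in early
          return misses

      trace = list(range(0, 4096, 8))           # sequential 8-byte accesses
      print(run_trace(trace, prefetch=False))   # 64 misses
      print(run_trace(trace, prefetch=True))    # 1 miss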

  7. Token-Aware Completion Functions for Elastic Processor Verification

    Directory of Open Access Journals (Sweden)

    Sudarshan K. Srinivasan

    2009-01-01

    We develop a formal verification procedure to check that elastic pipelined processor designs correctly implement their instruction set architecture (ISA) specifications. The notion of correctness we use is based on refinement. Refinement proofs are based on refinement maps, which, in the context of this problem, are functions that map elastic processor states to states of the ISA specification model. Data flow in elastic architectures is complicated by the insertion of any number of buffers in any place in the design, making it hard to construct refinement maps for elastic systems in a systematic manner. We introduce token-aware completion functions, which incorporate a mechanism to track the flow of data in elastic pipelines, as a highly automated and systematic approach to constructing refinement maps. We demonstrate the efficiency of the overall verification procedure based on token-aware completion functions using six elastic pipelined processor models based on the DLX architecture.

  8. Processor-sharing queues and resource sharing in wireless LANs

    NARCIS (Netherlands)

    Cheung, Sing Kwong

    2007-01-01

    In the past few decades, the processor-sharing (PS) model has received considerable attention in the queueing theory community and in the field of performance evaluation of computer and communication systems. The scarce resource is simultaneously shared among all users in these systems. PS models ar

  9. Modeling of Human Joint Structures.

    Science.gov (United States)

    1982-09-01

    [Figure 3: lateral aspect of the right elbow joint, labelling the lateral epicondyle, olecranon, radius, ulna and annular ligament.] Keywords: elbow joint, knee joint, human joints, shoulder joint, ankle joint, joint models, hip joint, ligaments. A rather extended discussion of the articulations and anatomical descriptions of the elbow, shoulder, hip, knee and ankle joints are

  10. Benchmarking NWP Kernels on Multi- and Many-core Processors

    Science.gov (United States)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.

  11. PERFORMANCE OF PRIVATE CACHE REPLACEMENT POLICIES FOR MULTICORE PROCESSORS

    Directory of Open Access Journals (Sweden)

    Matthew Lentz

    2014-07-01

    Multicore processors have become ubiquitous, in both general-purpose and special-purpose applications. With the number of transistors in a chip continuing to increase, the number of cores in a processor is also expected to increase. Cache replacement policy is an important design parameter of a cache hierarchy. As most processor designs have become multicore, there is a need to study cache replacement policies for multicore systems. Previous studies have focused on the shared levels of the multicore cache hierarchy. In this study, we focus on the top level of the hierarchy, which bears the brunt of the memory requests emanating from each processor core. We measure the miss rates of various cache replacement policies as the number of cores is steadily increased from 1 to 16. The study was done by modifying the publicly available SESC simulator, which models in detail a multicore processor with a multilevel cache hierarchy. Our experimental results show that for the private L1 caches, the LRU (Least Recently Used) replacement policy outperforms all of the other replacement policies. This is in contrast to what was observed in previous studies for the shared L2 cache. The results presented in this paper are useful for hardware designers seeking to optimize their cache designs or program code.
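
    The experiments used the SESC simulator; as a far smaller stand-in, the miss rate of one fully-associative LRU cache can be simulated in a few lines. The cyclic trace below also shows the classic pattern in which LRU degrades once the working set exceeds capacity.

      from collections import OrderedDict

      def lru_miss_rate(trace, capacity):
          """Miss rate of a fully-associative LRU cache of `capacity` blocks."""
          cache, misses = OrderedDict(), 0
          for block in trace:
              if block in cache:
                  cache.move_to_end(block)       # mark most recently used
              else:
                  misses += 1
                  cache[block] = None
                  if len(cache) > capacity:
                      cache.popitem(last=False)  # evict least recently used
          return misses / len(trace)

      trace = list(range(9)) * 100               # cyclic reuse of 9 blocks
      print(lru_miss_rate(trace, capacity=8))    # 1.0: thrashing
      print(lru_miss_rate(trace, capacity=16))   # 0.01: compulsory misses only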

  12. Efficiency of Cache Mechanism for Network Processors

    Institute of Scientific and Technical Information of China (English)

    XU Bo; CHANG Jian; HUANG Shimeng; XUE Yibo; LI Jun

    2009-01-01

    With the explosion of network bandwidth and the ever-changing requirements for diverse network-based applications, the traditional processing architectures, i.e., general purpose processor (GPP) and application specific integrated circuits (ASIC), cannot provide sufficient flexibility and high performance at the same time. Thus, the network processor (NP) has emerged as an alternative to meet these dual demands for today's network processing. The NP combines embedded multi-threaded cores with a rich memory hierarchy that can adapt to different networking circumstances when customized by the application developers. In today's NP architectures, multithreading prevails over the cache mechanism, which has achieved great success in GPPs in hiding memory access latencies. This paper focuses on the efficiency of the cache mechanism in an NP. Theoretical timing models of packet processing are established for evaluating cache efficiency, and experiments are performed based on real-life network backbone traces. Testing results show that an improvement of nearly 70% can be gained in throughput with assistance from the cache mechanism. Accordingly, the cache mechanism is still efficient and irreplaceable in network processing, despite the existence of multithreading.

  13. Hardwired Logic and Multithread Design in Network Processors

    Institute of Scientific and Technical Information of China (English)

    李旭东; 徐扬; 刘斌; 王小军

    2004-01-01

    High-performance network processors are expected to play an important role in future high-speed routers. This paper focuses on two representative techniques needed for high-performance network processors: hardwired logic design and multithread design. Using hardwired logic, this paper compares a single-thread design with a multithread design, and proposes general models and principles to analyze the clock frequency and the resource cost in these environments. Then, two IP header processing schemes, one in single-thread mode and the other in double-thread mode, are developed using these principles, and the implementation results verify the theoretical calculations.

  14. Human Thermal Model Evaluation Using the JSC Human Thermal Database

    Science.gov (United States)

    Bue, Grant; Makinen, Janice; Cognata, Thomas

    2012-01-01

    Human thermal modeling has considerable long-term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested space environments. It benefits any such model not only to collect relevant experimental data to correlate against, but also to maintain an experimental standard or benchmark for future development in a readily and rapidly searchable, software-accessible format. The human thermal database project is intended to do just that: to collect relevant data from the literature and from experimentation, and to store the data in a database structure for immediate and future use as a benchmark against which human thermal models can be judged, identifying model strengths and weaknesses, supporting model development, improving correlation, and statistically quantifying a model's predictive quality. The human thermal database developed at the Johnson Space Center (JSC) is intended to evaluate a set of widely used human thermal models. This set includes the Wissler human thermal model, a model that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments on human subjects, primarily in air, from a literature survey ranging between 1953 and 2004 and from a suited experiment recently performed by the authors, for a quantitative study of the relative strength and predictive quality of the models.

  15. Face feature processor on mobile service robot

    Science.gov (United States)

    Ahn, Ho Seok; Park, Myoung Soo; Na, Jin Hee; Choi, Jin Young

    2005-12-01

    In recent years, many mobile service robots have been developed. These robots are different from industrial robots: service robots are confronted with unexpected changes in the human environment, so a mobile service robot needs many capabilities, for example, the capability to recognize people's faces and voices, the capability to understand people's conversation, and the capability to express the robot's thinking. This research considered face detection, face tracking and face recognition from continuous camera images. The face detection module used the CBCH algorithm from Intel's OpenCV library. The face tracking module used a fuzzy controller to smoothly control the pan-tilt camera movement based on the face detection result. PCA-FX, which adds class information to PCA, was used for the face recognition module. These three procedures, together called the face feature processor, were implemented on the mobile service robot OMR for verification.
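
    PCA-FX itself is not detailed here, but the plain-PCA core of such a recognition module reduces to projecting image vectors onto the leading principal axes and nearest-neighbour matching in the reduced space. A sketch on synthetic stand-in data (real use would flatten face crops into the rows of X):

      import numpy as np

      rng = np.random.default_rng(0)
      # stand-in gallery: 40 vectors in 4 classes with class-specific offsets
      X = rng.normal(size=(40, 256)) + np.repeat(np.eye(4), 10, 0) @ rng.normal(size=(4, 256)) * 3
      labels = np.repeat(np.arange(4), 10)

      mean = X.mean(0)
      U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
      W = Vt[:10]                        # keep 10 principal components
      proj = (X - mean) @ W.T            # gallery in "face space"

      def classify(x):
          z = (x - mean) @ W.T
          return labels[np.argmin(np.linalg.norm(proj - z, axis=1))]

      print(classify(X[0]), classify(X[35]))   # 0 3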

  16. Modeling Forces on the Human Body.

    Science.gov (United States)

    Pagonis, Vasilis; Drake, Russel; Morgan, Michael; Peters, Todd; Riddle, Chris; Rollins, Karen

    1999-01-01

    Presents five models of the human body as a mechanical system which can be used in introductory physics courses: human arms as levers, humans falling from small heights, a model of the human back, collisions during football, and the rotating gymnast. Gives ideas for discussions and activities, including Interactive Physics (TM) simulations. (WRM)

  17. Functional Verification of Enhanced RISC Processor

    OpenAIRE

    SHANKER NILANGI; SOWMYA L

    2013-01-01

    This paper presents the design and verification of a 32-bit enhanced RISC processor core with floating-point computation integrated within the core, designed to reduce cost and complexity. The 3-stage pipelined 32-bit RISC processor is based on the ARM7 processor architecture, with a single-precision floating-point multiplier, a floating-point adder/subtractor for floating-point operations, and a 32 x 32 Booth multiplier added to the integer core of the ARM7. The binary representati...

  18. Digital Signal Processor For GPS Receivers

    Science.gov (United States)

    Thomas, J. B.; Meehan, T. K.; Srinivasan, J. M.

    1989-01-01

    Three innovative components combine to produce an all-digital signal processor with superior characteristics: outstanding accuracy, high-dynamics tracking, versatile integration times, lower loss-of-lock signal strengths, and infrequent cycle slips. The three components are a digital chip advancer, a digital carrier downconverter and code correlator, and a digital tracking processor. The all-digital signal processor is intended for use in receivers of the Global Positioning System (GPS) for geodesy, geodynamics, high-dynamics tracking, and ionospheric calibration.

  19. Design Principles for Synthesizable Processor Cores

    DEFF Research Database (Denmark)

    Schleuniger, Pascal; McKee, Sally A.; Karlsson, Sven

    2012-01-01

    As FPGAs get more competitive, synthesizable processor cores become an attractive choice for embedded computing. Currently popular commercial processor cores do not fully exploit current FPGA architectures. In this paper, we propose general design principles to increase instruction throughput...... through the use of micro-benchmarks that our principles guide the design of a processor core that improves performance by an average of 38% over a similar Xilinx MicroBlaze configuration....

  20. The case for a generic implant processor.

    Science.gov (United States)

    Strydis, Christos; Gaydadjiev, Georgi N

    2008-01-01

    A more structured and streamlined design of implants is nowadays possible. In this paper we focus on implant processors located in the heart of implantable systems. We present a real and representative biomedical-application scenario where such a new processor can be employed. Based on a suitably selected processor simulator, various operational aspects of the application are being monitored. Findings on performance, cache behavior, branch prediction, power consumption, energy expenditure and instruction mixes are presented and analyzed. The suitability of such an implant processor and directions for future work are given.

  1. Alternative Water Processor Test Development

    Science.gov (United States)

    Pickering, Karen D.; Mitchell, Julie; Vega, Leticia; Adam, Niklas; Flynn, Michael; Wheeler, Ray; Lunn, Griffin; Jackson, Andrew

    2012-01-01

    The Next Generation Life Support Project is developing an Alternative Water Processor (AWP) as a candidate water recovery system for long duration exploration missions. The AWP consists of a biological water processor (BWP) integrated with a forward osmosis secondary treatment system (FOST). The basis of the BWP is a membrane aerated biological reactor (MABR), developed in concert with Texas Tech University. Bacteria located within the MABR metabolize organic material in wastewater, converting approximately 90% of the total organic carbon to carbon dioxide. In addition, bacteria convert a portion of the ammonia-nitrogen present in the wastewater to nitrogen gas, through a combination of nitrification and denitrification. The effluent from the BWP system is low in organic contaminants, but high in total dissolved solids. The FOST system, integrated downstream of the BWP, removes dissolved solids through a combination of concentration-driven forward osmosis and pressure-driven reverse osmosis. The integrated system is expected to produce water with a total organic carbon less than 50 mg/l and dissolved solids that meet potable water requirements for spaceflight. This paper describes the test definition, the design of the BWP and FOST subsystems, and plans for integrated testing.

  2. Alternative Water Processor Test Development

    Science.gov (United States)

    Pickering, Karen D.; Mitchell, Julie L.; Adam, Niklas M.; Barta, Daniel; Meyer, Caitlin E.; Pensinger, Stuart; Vega, Leticia M.; Callahan, Michael R.; Flynn, Michael; Wheeler, Ray

    2013-01-01

    The Next Generation Life Support Project is developing an Alternative Water Processor (AWP) as a candidate water recovery system for long duration exploration missions. The AWP consists of a biological water processor (BWP) integrated with a forward osmosis secondary treatment system (FOST). The basis of the BWP is a membrane aerated biological reactor (MABR), developed in concert with Texas Tech University. Bacteria located within the MABR metabolize organic material in wastewater, converting approximately 90% of the total organic carbon to carbon dioxide. In addition, bacteria convert a portion of the ammonia-nitrogen present in the wastewater to nitrogen gas, through a combination of nitrification and denitrification. The effluent from the BWP system is low in organic contaminants, but high in total dissolved solids. The FOST system, integrated downstream of the BWP, removes dissolved solids through a combination of concentration-driven forward osmosis and pressure-driven reverse osmosis. The integrated system is expected to produce water with a total organic carbon less than 50 mg/l and dissolved solids that meet potable water requirements for spaceflight. This paper describes the test definition, the design of the BWP and FOST subsystems, and plans for integrated testing.

  3. Thermal Analysis and Simulation for RISC Processor Based on High-level LISA Power Model

    Institute of Scientific and Technical Information of China (English)

    岳丹; 徐抒岩; 聂海涛; 王刚

    2015-01-01

    In order to optimize the layout and packaging of IC chips, improve chip performance and reliability, and evaluate processor-level techniques for run-time regulation of on-chip temperature, a simulation method for computing unit-level power consumption and temperature in real time is presented. Using a high-level LISA power model, the run-time power consumption of generic applications on a RISC processor is obtained. Floorplan information for the RISC processor is derived from layout planning with the back-end design software Cadence Encounter. Taking the real-time power consumption, the floorplan information and the chip specifications as inputs, the HotSpot thermal analysis tool performs fast, low-cost thermal analysis and simulation of the RISC processor. Experimental results show that the method can accurately analyze the heat distribution of the chip and obtain data reflecting the heat distribution during actual operation, providing direct temperature information for optimizing the layout and packaging of IC chips and for analyzing chip performance and reliability.

  4. Modelling the scaling properties of human mobility

    Science.gov (United States)

    Song, Chaoming; Koren, Tal; Wang, Pu; Barabási, Albert-László

    2010-10-01

    Individual human trajectories are characterized by fat-tailed distributions of jump sizes and waiting times, suggesting the relevance of continuous-time random-walk (CTRW) models for human mobility. However, human traces are barely random. Given the importance of human mobility, from epidemic modelling to traffic prediction and urban planning, we need quantitative models that can account for the statistical characteristics of individual human trajectories. Here we use empirical data on human mobility, captured by mobile-phone traces, to show that the predictions of the CTRW models are in systematic conflict with the empirical results. We introduce two principles that govern human trajectories, allowing us to build a statistically self-consistent microscopic model for individual human mobility. The model accounts for the empirically observed scaling laws, but also allows us to analytically predict most of the pertinent scaling exponents.
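
    A minimal sketch of the CTRW baseline that the paper tests against: jump sizes and waiting times drawn from fat-tailed (Pareto) distributions. The exponents are illustrative; real trajectories deviate from this model precisely because people keep returning to a few preferred locations.

      import numpy as np

      rng = np.random.default_rng(0)

      def ctrw(n_jumps, beta=1.75, alpha=0.8):
          """2-D continuous-time random walk with power-law jump sizes
          and waiting times (illustrative exponents)."""
          dr = rng.pareto(beta, n_jumps) + 1.0    # fat-tailed jump lengths
          dt = rng.pareto(alpha, n_jumps) + 1.0   # fat-tailed waiting times
          theta = rng.uniform(0, 2 * np.pi, n_jumps)
          xy = np.cumsum(dr[:, None] * np.c_[np.cos(theta), np.sin(theta)], axis=0)
          return np.cumsum(dt), xy

      t, xy = ctrw(10_000)
      print(np.linalg.norm(xy[-1]))   # displacement grows far faster than for humans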

  5. Rapid prototyping and evaluation of programmable SIMD SDR processors in LISA

    Science.gov (United States)

    Chen, Ting; Liu, Hengzhu; Zhang, Botao; Liu, Dongpei

    2013-03-01

    With the development of international wireless communication standards, the computational requirements for baseband signal processors are increasing. Time-to-market pressure makes it impossible to completely redesign new processors for the evolving standards. Due to their high flexibility and low power, software defined radio (SDR) digital signal processors have been proposed as a promising technology to replace traditional ASIC and FPGA approaches. In addition, large amounts of data are processed in parallel in computation-intensive functions, which fosters the development of single instruction multiple data (SIMD) architectures in SDR platforms. So a new way must be found to prototype SDR processors efficiently. In this paper we present a bit- and cycle-accurate model of a programmable SIMD SDR processor in the machine description language LISA. LISA is a language for instruction set architectures that enables rapid modelling at the architectural level. To evaluate the proposed processor, three common baseband functions, FFT, FIR digital filtering and matrix multiplication, were mapped onto the SDR platform. Analytical results showed that the SDR processor achieved a maximum performance boost of 47.1% relative to the reference processor.

  6. Efficient quantum walk on a quantum processor

    Science.gov (United States)

    Qiang, Xiaogang; Loke, Thomas; Montanaro, Ashley; Aungskunsiri, Kanin; Zhou, Xiaoqi; O'Brien, Jeremy L.; Wang, Jingbo B.; Matthews, Jonathan C. F.

    2016-05-01

    The random walk formalism is used across a wide range of applications, from modelling share prices to predicting population genetics. Likewise, quantum walks have shown much potential as a framework for developing new quantum algorithms. Here we present explicit efficient quantum circuits for implementing continuous-time quantum walks on the circulant class of graphs. These circuits allow us to sample from the output probability distributions of quantum walks on circulant graphs efficiently. We also show that solving the same sampling problem for arbitrary circulant quantum circuits is intractable for a classical computer, assuming conjectures from computational complexity theory. This is a new link between continuous-time quantum walks and computational complexity theory and it indicates a family of tasks that could ultimately demonstrate quantum supremacy over classical computers. As a proof of principle, we experimentally implement the proposed quantum circuit on an example circulant graph using a two-qubit photonics quantum processor.

  7. Efficient quantum walk on a quantum processor.

    Science.gov (United States)

    Qiang, Xiaogang; Loke, Thomas; Montanaro, Ashley; Aungskunsiri, Kanin; Zhou, Xiaoqi; O'Brien, Jeremy L; Wang, Jingbo B; Matthews, Jonathan C F

    2016-05-05

    The random walk formalism is used across a wide range of applications, from modelling share prices to predicting population genetics. Likewise, quantum walks have shown much potential as a framework for developing new quantum algorithms. Here we present explicit efficient quantum circuits for implementing continuous-time quantum walks on the circulant class of graphs. These circuits allow us to sample from the output probability distributions of quantum walks on circulant graphs efficiently. We also show that solving the same sampling problem for arbitrary circulant quantum circuits is intractable for a classical computer, assuming conjectures from computational complexity theory. This is a new link between continuous-time quantum walks and computational complexity theory and it indicates a family of tasks that could ultimately demonstrate quantum supremacy over classical computers. As a proof of principle, we experimentally implement the proposed quantum circuit on an example circulant graph using a two-qubit photonics quantum processor.
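
    For intuition, a continuous-time quantum walk on a small circulant graph can be simulated classically by brute force, evolving |psi(t)> = exp(-iAt)|psi(0)> and reading off the output distribution that such circuits sample from. This matrix-exponential sketch is of course nothing like the efficient circuits proposed in the paper.

      import numpy as np
      from scipy.linalg import expm

      n = 8
      # adjacency matrix of the cycle graph C8, a circulant graph
      A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)

      psi0 = np.zeros(n, dtype=complex)
      psi0[0] = 1.0                       # walker starts at vertex 0

      t = 1.3
      psi_t = expm(-1j * A * t) @ psi0    # continuous-time quantum walk
      probs = np.abs(psi_t) ** 2          # distribution over vertices
      print(np.round(probs, 3), probs.sum())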

  8. Human Performance Modeling for Dynamic Human Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Laboratory; Joe, Jeffrey Clark [Idaho National Laboratory; Mandelli, Diego [Idaho National Laboratory

    2015-08-01

    Part of the U.S. Department of Energy's (DOE's) Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk framework. In this paper, we review simulation-based and non-simulation-based human reliability analysis (HRA) methods. This paper summarizes the foundational information needed to develop a feasible approach to modeling human interactions in RISMC simulations.

  9. Ultrafast Fourier-transform parallel processor

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, W.L.

    1980-04-01

    A new, flexible, parallel-processing architecture is developed for a high-speed, high-precision Fourier transform processor. The processor is intended for use in 2-D signal processing including spatial filtering, matched filtering and image reconstruction from projections.

  10. Adapting implicit methods to parallel processors

    Energy Technology Data Exchange (ETDEWEB)

    Reeves, L.; McMillin, B.; Okunbor, D.; Riggins, D. [Univ. of Missouri, Rolla, MO (United States)

    1994-12-31

    When numerically solving many types of partial differential equations, it is advantageous to use implicit methods because of their better stability and more flexible parameter choice (e.g. larger time steps). However, since implicit methods usually require simultaneous knowledge of the entire computational domain, these methods are difficult to implement directly on distributed memory parallel processors. This leads to infrequent use of implicit methods on parallel/distributed systems. The usual implementation of implicit methods is inefficient due to the nature of parallel systems, where it is common to distribute the grid points of the computational domain over the processors so as to maintain a relatively even workload per processor. This creates a problem at the locations in the domain where adjacent points are not on the same processor. In order for the values at these points to be calculated, messages have to be exchanged between the corresponding processors. Without special adaptation, this results in idle processors during part of the computation, and as the number of idle processors increases, the effective speedup from using a parallel processor decreases.
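
    To see why implicit methods couple the whole domain, consider backward Euler for the 1-D heat equation u_t = nu*u_xx: every step solves a tridiagonal system involving all grid points at once, which is exactly what forces message exchange at partition boundaries once the grid is distributed. A serial sketch with illustrative parameters (note the time step far above the explicit stability limit):

      import numpy as np
      from scipy.linalg import solve_banded

      nx, dx, dt, nu = 101, 0.01, 0.01, 1.0
      r = nu * dt / dx**2                  # = 100, explicit limit is 0.5

      x = np.linspace(0, 1, nx)
      u = np.exp(-200 * (x - 0.5) ** 2)    # initial temperature bump

      # banded storage of (I - r*Laplacian) with fixed boundary rows
      ab = np.zeros((3, nx))
      ab[0, 1:] = -r                       # superdiagonal
      ab[1, :] = 1 + 2 * r                 # diagonal
      ab[2, :-1] = -r                      # subdiagonal
      ab[1, 0] = ab[1, -1] = 1.0           # identity rows at boundaries
      ab[0, 1] = ab[2, -2] = 0.0

      for _ in range(100):
          u = solve_banded((1, 1), ab, u)  # every point depends on all others
      print(u.max())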

  11. The TM3270 Media-processor

    NARCIS (Netherlands)

    van de Waerdt, J.W.

    2006-01-01

    In this thesis, we present the TM3270 VLIW media-processor, the latest of the TriMedia processors, and describe the innovations with respect to its predecessor, the TM3260. We describe enhancements to the load/store unit design, such as a new data prefetching technique, and architectural

  12. Multi-output programmable quantum processor

    OpenAIRE

    Yu, Yafei; Feng, Jian; Zhan, Mingsheng

    2002-01-01

    By combining telecloning and the programmable quantum gate array presented by Nielsen and Chuang [Phys. Rev. Lett. 79:321 (1997)], we propose a programmable quantum processor which can be programmed to implement a restricted set of operations with several identical data outputs. The outputs are approximately-transformed versions of the input data. The processor succeeds with a certain probability.

  13. 7 CFR 1215.14 - Processor.

    Science.gov (United States)

    2010-01-01

    ... AND ORDERS; MISCELLANEOUS COMMODITIES), DEPARTMENT OF AGRICULTURE POPCORN PROMOTION, RESEARCH, AND CONSUMER INFORMATION Popcorn Promotion, Research, and Consumer Information Order Definitions § 1215.14 Processor. Processor means a person engaged in the preparation of unpopped popcorn for the market who...

  14. The TM3270 Media-processor

    NARCIS (Netherlands)

    van de Waerdt, J.W.

    2006-01-01

    In this thesis, we present the TM3270 VLIW media-processor, the latest of the TriMedia processors, and describe the innovations with respect to its predecessor, the TM3260. We describe enhancements to the load/store unit design, such as a new data prefetching technique, and architectural enhancements

  15. The Case for a Generic Implant Processor

    NARCIS (Netherlands)

    Strydis, C.; Gaydadjiev, G.N.

    2008-01-01

    A more structured and streamlined design of implants is nowadays possible. In this paper we focus on implant processors located in the heart of implantable systems. We present a real and representative biomedical-application scenario where such a new processor can be employed. Based on a suitably se

  16. An Empirical Evaluation of XQuery Processors

    NARCIS (Netherlands)

    Manegold, S.

    2008-01-01

    This paper presents an extensive and detailed experimental evaluation of XQuery processors. The study consists of running five publicly available XQuery benchmarks --- the Michigan benchmark (MBench), XBench, XMach-1, XMark and X007 --- on six XQuery processors, three stand-alone (file-based) XQuery

  17. The Case for a Generic Implant Processor

    NARCIS (Netherlands)

    Strydis, C.; Gaydadjiev, G.N.

    2008-01-01

    A more structured and streamlined design of implants is nowadays possible. In this paper we focus on implant processors located in the heart of implantable systems. We present a real and representative biomedical-application scenario where such a new processor can be employed. Based on a suitably

  18. Neurovision processor for designing intelligent sensors

    Science.gov (United States)

    Gupta, Madan M.; Knopf, George K.

    1992-03-01

    A programmable multi-task neuro-vision processor, called the Positive-Negative (PN) neural processor, is proposed as a plausible hardware mechanism for constructing robust multi-task vision sensors. The computational operations performed by the PN neural processor are loosely based on the neural activity fields exhibited by certain nervous tissue layers situated in the brain. The neuro-vision processor can be programmed to generate diverse dynamic behavior that may be used for spatio-temporal stabilization (STS), short-term visual memory (STVM), spatio-temporal filtering (STF) and pulse frequency modulation (PFM). A multi- functional vision sensor that performs a variety of information processing operations on time- varying two-dimensional sensory images can be constructed from a parallel and hierarchical structure of numerous individually programmed PN neural processors.

  19. Humanized in vivo Model for Autoimmune Diabetes

    Science.gov (United States)

    2009-02-01

    Award number: W81XWH-07-1-0121. Title: Humanized in vivo Model for Autoimmune Diabetes. Principal investigator: Gerald T. Nepom, M.D., Ph.D. ... therapies. This research study entails using humanized mice manifesting type 1 diabetes (T1D)-associated human HLA molecules to address the fate and

  20. Human mammary microenvironment better regulates the biology of human breast cancer in humanized mouse model.

    Science.gov (United States)

    Zheng, Ming-Jie; Wang, Jue; Xu, Lu; Zha, Xiao-Ming; Zhao, Yi; Ling, Li-Jun; Wang, Shui

    2015-02-01

    During the past decades, many efforts have been made to mimic the clinical progression of human cancer in mouse models. Previously, we developed a human breast tissue-derived (HB) mouse model. Theoretically, it may mimic the interactions between a "species-specific" mammary microenvironment of human origin and human breast cancer cells. However, detailed evidence is absent. The present study (in vivo, cellular, and molecular experiments) was designed to explore the regulatory role of the human mammary microenvironment in the progression of human breast cancer cells. Subcutaneous (SUB), mammary fat pad (MFP), and HB mouse models were developed for in vivo comparisons. Then, the orthotopic tumor masses from the three different mouse models were collected for primary culture. Finally, the biology of the primary cultured human breast cancer cells was compared by cellular and molecular experiments. Results of the in vivo mouse models indicated that human breast cancer cells grew better in the human mammary microenvironment. Cellular and molecular experiments confirmed that primary cultured human breast cancer cells from the HB mouse model showed a more proliferative and anti-apoptotic biology than those from the SUB and MFP mouse models. Meanwhile, primary cultured human breast cancer cells from the HB mouse model also acquired the migratory and invasive biology for "species-specific" tissue metastasis to human tissues. Comprehensive analyses suggest that a "species-specific" mammary microenvironment of human origin better regulates the biology of human breast cancer cells in our humanized mouse model of breast cancer, which is more consistent with the clinical progression of human breast cancer.

  1. Enabling Future Robotic Missions with Multicore Processors

    Science.gov (United States)

    Powell, Wesley A.; Johnson, Michael A.; Wilmot, Jonathan; Some, Raphael; Gostelow, Kim P.; Reeves, Glenn; Doyle, Richard J.

    2011-01-01

    Recent commercial developments in multicore processors (e.g. Tilera, Clearspeed, HyperX) have provided an option for high performance embedded computing that rivals the performance attainable with FPGA-based reconfigurable computing architectures. Furthermore, these processors offer more straightforward and streamlined application development by allowing the use of conventional programming languages and software tools in lieu of hardware design languages such as VHDL and Verilog. With these advantages, multicore processors can significantly enhance the capabilities of future robotic space missions. This paper will discuss these benefits, along with onboard processing applications where multicore processing can offer advantages over existing or competing approaches. This paper will also discuss the key architectural features of current commercial multicore processors. In comparison to the current art, the features and advancements necessary for spaceflight multicore processors will be identified. These include power reduction, radiation hardening, inherent fault tolerance, and support for common spacecraft bus interfaces. Lastly, this paper will explore how multicore processors might evolve with advances in electronics technology and how avionics architectures might evolve once multicore processors are inserted into NASA robotic spacecraft.

  2. Scientific Computing Kernels on the Cell Processor

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
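
    The flavour of such a performance model can be captured by a roofline-style bound: attainable throughput is the lesser of the peak flop rate and the memory bandwidth times a kernel's arithmetic intensity. The machine numbers below are illustrative, not measured Cell figures.

      def roofline(peak_gflops, bw_gbs, flops_per_byte):
          """Attainable GFLOP/s for a kernel of given arithmetic intensity."""
          return min(peak_gflops, bw_gbs * flops_per_byte)

      # illustrative machine: 200 GFLOP/s peak, 25 GB/s memory bandwidth
      for name, ai in [("SpMV", 0.25), ("stencil", 0.5), ("dense GEMM", 16.0)]:
          print(f"{name:10s} bound: {roofline(200, 25, ai):7.1f} GFLOP/s")

    Low-intensity kernels such as sparse matrix-vector multiply sit on the bandwidth slope of this bound, which is why memory traffic, not raw flops, usually dictates how they are mapped.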

  3. Making CSB+-Tree Processor Conscious

    DEFF Research Database (Denmark)

    Samuel, Michael; Pedersen, Anders Uhl; Bonnet, Philippe

    2005-01-01

    Cache-conscious indexes, such as CSB+-tree, are sensitive to the underlying processor architecture. In this paper, we focus on how to adapt the CSB+-tree so that it performs well on a range of different processor architectures. Previous work has focused on the impact of node size on the performance...... of the CSB+-tree. We argue that it is necessary to consider a larger group of parameters in order to adapt CSB+-tree to processor architectures as different as Pentium and Itanium. We identify this group of parameters and study how it impacts the performance of CSB+-tree on Itanium 2. Finally, we propose...

  4. Processor arrays with asynchronous TDM optical buses

    Science.gov (United States)

    Li, Y.; Zheng, S. Q.

    1997-04-01

    We propose a pipelined asynchronous time division multiplexing (TDM) optical bus. Such a bus can use one of two hardwired priority schemes, the linear priority scheme and the round-robin priority scheme. Our simulation results show that the performance of our proposed buses is significantly better than the performance of known pipelined synchronous TDM optical buses. We also propose a class of processor arrays connected by pipelined asynchronous TDM optical buses. We claim that our proposed processor arrays not only have better performance, but also better scalability, than existing processor arrays connected by pipelined synchronous TDM optical buses.

  5. Genetically Modified Pig Models for Human Diseases

    Institute of Scientific and Technical Information of China (English)

    Nana Fan; Liangxue Lai

    2013-01-01

    Genetically modified animal models are important for understanding the pathogenesis of human disease and developing therapeutic strategies. Although genetically modified mice have been widely used to model human diseases, some of these mouse models do not replicate important disease symptoms or pathology. Pigs are more similar to humans than mice in anatomy, physiology, and genome. Thus, pigs are considered to be better animal models to mimic some human diseases. This review describes genetically modified pigs that have been used to model various diseases including neurological, cardiovascular, and diabetic disorders. We also discuss the development in gene modification technology that can facilitate the generation of transgenic pig models for human diseases.

  6. 3-D Human Modeling and Animation

    CERN Document Server

    Ratner, Peter

    2012-01-01

    3-D Human Modeling and Animation Third Edition All the tools and techniques you need to bring human figures to 3-D life Thanks to today's remarkable technology, artists can create and animate realistic, three-dimensional human figures that were not possible just a few years ago. This easy-to-follow book guides you through all the necessary steps to adapt your own artistic skill in figure drawing, painting, and sculpture to this exciting digital canvas. 3-D Human Modeling and Animation, Third Edition starts you off with simple modeling, then prepares you for more advanced techniques for crea

  7. Human hand modelling: kinematics, dynamics, applications

    NARCIS (Netherlands)

    Gustus, A.; Stillfried, G.; Visser, J.; Jörntell, H.; Van der Smagt, P.

    2012-01-01

    An overview of mathematical modelling of the human hand is given. We consider hand models from a specific background: rather than studying hands for surgical or similar goals, we target at providing a set of tools with which human grasping and manipulation capabilities can be studied, and hand funct

  8. Concept of a Supervector Processor: A Vector Approach to Superscalar Processor, Design and Performance Analysis

    Directory of Open Access Journals (Sweden)

    Deepak Kumar, Ranjan Kumar Behera, K. S. Pandey

    2013-07-01

    To maximize the available performance is always a goal in microprocessor design. In this paper a new technique has been implemented which exploits the advantages of both the superscalar and vector processing techniques in a proposed processor called the supervector processor. A vector processor operates on arrays of data called vectors and can greatly improve certain tasks, such as numerical simulation and tasks which require heavy number crunching. On the other hand, a superscalar processor issues multiple instructions per cycle, which can enhance throughput. To exploit parallelism, multiple vector instructions are issued and executed per cycle in superscalar fashion. Case studies have been done on various benchmarks to compare the performance of the proposed supervector processor architecture with superscalar and vector processor architectures. The Trimaran framework has been used to evaluate the performance of the proposed supervector processor scheme.

  9. Photonics and Fiber Optics Processor Lab

    Data.gov (United States)

    Federal Laboratory Consortium — The Photonics and Fiber Optics Processor Lab develops, tests and evaluates high speed fiber optic network components as well as network protocols. In addition, this...

  10. Radiation Tolerant Software Defined Video Processor Project

    Data.gov (United States)

    National Aeronautics and Space Administration — MaXentric is proposing a radiation tolerant Software Defined Video Processor, codenamed SDVP, for the problem of advanced motion imaging in the space environment....

  11. Processor-Dependent Malware... and codes

    CERN Document Server

    Desnos, Anthony; Filiol, Eric

    2010-01-01

    Malware usually targets computers according to their operating system. Thus we have Windows malware, Linux malware and so on ... In this paper, we consider a different approach and show on a technical basis how easily malware can recognize and target systems selectively, according to the onboard processor chip. This technology is very easy to build since it does not rely on deep analysis of chip logical gate architecture. Floating Point Arithmetic (FPA) looks promising for defining a set of tests to identify the processor or, more precisely, a subset of possible processors. We give results for different families of processors: AMD, Intel (Dual Core, Atom), Sparc, Digital Alpha, Cell, Atom ... As a conclusion, we propose two open problems that are, to the authors' knowledge, new.

  12. Hidden Markov Models for Human Genes

    DEFF Research Database (Denmark)

    Baldi, Pierre; Brunak, Søren; Chauvin, Yves

    1997-01-01

    We analyse the sequential structure of human genomic DNA by hidden Markov models. We apply models of widely different design: conventional left-right constructs and models with a built-in periodic architecture. The models are trained on segments of DNA sequences extracted such that they cover...

  13. Critical review of programmable media processor architectures

    Science.gov (United States)

    Berg, Stefan G.; Sun, Weiyun; Kim, Donglok; Kim, Yongmin

    1998-12-01

    In the past several years, there has been a surge of new programmable media processors introduced to provide an alternative solution to ASICs and dedicated hardware circuitry in the multimedia PC and embedded consumer electronics markets. These processors attempt to combine the programmability of multimedia-enhanced general-purpose processors with the performance and low cost of dedicated hardware. We have reviewed five current multimedia architectures and evaluated their strengths and weaknesses.

  14. A New Echeloned Poisson Series Processor (EPSP)

    Science.gov (United States)

    Ivanova, Tamara

    2001-07-01

    A specialized Echeloned Poisson Series Processor (EPSP) is proposed. It is typical software for the implementation of analytical algorithms of celestial mechanics. EPSP is designed for manipulating long polynomial-trigonometric series with literal divisors. The coefficients of these echeloned series are rational or floating-point numbers. A Keplerian processor and an analytical generator of special celestial mechanics functions based on the EPSP have also been developed.

  15. Evaluating current processors performance and machines stability

    CERN Document Server

    Esposito, R; Tortone, G; Taurino, F M

    2003-01-01

    Accurately estimating the performance of currently available processors is becoming a key activity, particularly in the HENP environment, where high computing power is crucial. This document describes the methods and programs, open-source or freeware, used to benchmark processors, memory and disk subsystems and network connection architectures. These tools are also useful for stress-testing new machines, before their acquisition or before their introduction into a production environment, where high uptimes are required.

  16. SMART AS A CRYPTOGRAPHIC PROCESSOR

    Directory of Open Access Journals (Sweden)

    Saroja Kanchi

    2016-05-01

    Full Text Available SMaRT is a 16-bit 2.5-address RISC-type single-cycle processor, which was recently designed and successfully mapped into an FPGA chip in our ECE department. In this paper, we use SMaRT to run the well-known encryption algorithm, the Data Encryption Standard. For information security purposes, encryption is a must in today's sophisticated and ever-increasing computer communications such as ATM machines and SIM cards. For comparison and evaluation purposes, we also map the same algorithm onto the HC12, a same-size but CISC-type off-the-shelf microcontroller. Our results show that, compared to the HC12, SMaRT code is only 14% longer in terms of the static number of instructions but about 10 times faster in terms of the number of clock cycles, and 7% smaller in terms of code size. Our results also show that 2.5-address instructions, a SMaRT selling point, account for 45% of all R-type instructions, resulting in a significant improvement in the static number of instructions, and hence in code size as well as performance. Additionally, we see that the SMaRT short-branch range is sufficiently wide in 90% of cases in the SMaRT code. Our results also reveal that SMaRT's novel concept of locality of reference in using the MSBs of the registers in non-subroutine branch instructions stays valid, with a remarkable hit rate of 95%!

  17. A Unified Process Model of Syntactic and Semantic Error Recovery in Sentence Understanding

    CERN Document Server

    Holbrook, J K; Mahesh, K; Holbrook, Jennifer K.; Eiselt, Kurt P.; Mahesh, Kavi

    1994-01-01

    The development of models of human sentence processing has traditionally followed one of two paths. Either the model posited a sequence of processing modules, each with its own task-specific knowledge (e.g., syntax and semantics), or it posited a single processor utilizing different types of knowledge inextricably integrated into a monolithic knowledge base. Our previous work in modeling the sentence processor resulted in a model in which different processing modules used separate knowledge sources but operated in parallel to arrive at the interpretation of a sentence. One highlight of this model is that it offered an explanation of how the sentence processor might recover from an error in choosing the meaning of an ambiguous word. Recent experimental work by Laurie Stowe strongly suggests that the human sentence processor deals with syntactic error recovery using a mechanism very much like that proposed by our model of semantic error recovery. Another way to interpret Stowe's finding is this: the human sente...

  18. Human Centered Hardware Modeling and Collaboration

    Science.gov (United States)

    Stambolian, Damon; Lawrence, Brad; Stelges, Katrine; Henderson, Gena

    2013-01-01

    In order to collaborate on engineering designs among NASA Centers and customers, including hardware and human activities from multiple remote locations, live human-centered modeling and collaboration across several sites has been successfully facilitated by Kennedy Space Center. The focus of this paper includes innovative approaches to engineering design analyses and training, along with research being conducted to apply new technologies for tracking, immersing, and evaluating humans as well as rocket, vehicle, component, or facility hardware, utilizing high resolution cameras, motion tracking, ergonomic analysis, biomedical monitoring, work instruction integration, head-mounted displays, and other innovative human-system integration modeling, simulation, and collaboration applications.

  19. Human Exposure Modeling - Databases to Support Exposure Modeling

    Science.gov (United States)

    Human exposure modeling relates pollutant concentrations in the larger environmental media to pollutant concentrations in the immediate exposure media. The models described here are available on other EPA websites.

  20. FPGA wavelet processor design using language for instruction-set architectures (LISA)

    Science.gov (United States)

    Meyer-Bäse, Uwe; Vera, Alonzo; Rao, Suhasini; Lenk, Karl; Pattichis, Marios

    2007-04-01

    The design of a microprocessor is a long, tedious, and error-prone task consisting of typically four design phases: architecture exploration, software design (assembler, linker, loader, profiler), architecture implementation (RTL generation for FPGA or cell-based ASIC) and verification. The Language for Instruction-Set Architectures (LISA) allows a microprocessor to be modelled not only at the instruction-set level but also at the architecture-description level, including pipelining behaviour, which provides design- and development-tool consistency over all levels of the design. To explore the capability of the LISA processor design platform, a.k.a. CoWare Processor Designer, we present in this paper three microprocessor designs that implement an 8/8 wavelet transform processor of the kind typically used in today's FBI fingerprint compression scheme. We have designed a 3-stage pipelined 16-bit RISC processor (NanoBlaze). Although RISC μPs are usually considered "fast" processors due to design concepts like a constant instruction word size, deep pipelines and many general-purpose registers, it turns out that DSP operations consume substantial processing time in a RISC processor. In a second step we used design principles from programmable digital signal processors (PDSPs) to improve the throughput of the DWT processor. A multiply-accumulate operation along with an indirect addressing operation were the key to achieving higher throughput. A further improvement is possible with today's FPGA technology. Today's FPGAs offer a large number of embedded array multipliers, and it is now feasible to design a "true" vector processor (TVP). A multiplication of two vectors can be done in just one clock cycle with our TVP, and a complete scalar product in two clock cycles. Code profiling and Xilinx FPGA ISE synthesis results are provided that demonstrate the essential improvement that a TVP offers compared with traditional RISC or PDSP designs.
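
    As a rough illustration of why a multiply-accumulate primitive matters for DWT-style filtering, consider the inner loop of a FIR convolution; the taps and data below are placeholders, not the FBI 8/8 filter coefficients.

    #include <stdio.h>

    /* FIR inner loop: one output sample = sum of tap*sample products.
     * On a PDSP each iteration maps to a single-cycle MAC with
     * post-incremented (indirect) addressing; on a plain RISC it costs
     * separate load, multiply, add and pointer-update instructions. */
    #define NTAPS 8

    static int fir_sample(const int *x, const int *h) {
        int acc = 0;
        for (int k = 0; k < NTAPS; k++)
            acc += h[k] * x[k];    /* the multiply-accumulate step */
        return acc;
    }

    int main(void) {
        int h[NTAPS] = {1, 2, 3, 4, 4, 3, 2, 1};   /* placeholder taps    */
        int x[NTAPS] = {5, 5, 5, 5, 5, 5, 5, 5};   /* placeholder samples */
        printf("y = %d\n", fir_sample(x, h));
        return 0;
    }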

  1. Human Operator Control Strategy Model.

    Science.gov (United States)

    1980-04-01

    fashion. HOPE reflects the two-store theory of memory current in the psychological literature (Atkinson & Shiffrin, 1968; Broadbent, 1971). Two... uncertainty. In P.M.A. Rabbitt & S. Dornic (Eds.), Attention and performance V. New York: Academic Press, 1975. Atkinson, R. C., & Shiffrin, R. M. Human... 2. The Perception Process... 3. The Command Memory and Command Selection Process

  2. Human Adaptive Mechatronics and Human-System Modelling

    Directory of Open Access Journals (Sweden)

    Satoshi Suzuki

    2013-03-01

    Full Text Available Several topics from projects for mechatronics studies, namely 'Human Adaptive Mechatronics (HAM)' and 'Human-System Modelling (HSM)', are presented in this paper. The main research theme of the HAM project is a design strategy for a new intelligent mechatronics system which enhances operators' skills during machine operation. Skill analyses and control system design have been addressed. In the HSM project, human modelling based on a hierarchical classification of skills was studied, including the following five types of skills: social, planning, cognitive, motion and sensory-motor skills. This paper includes digests of these research topics and the outcomes concerning each type of skill. Relationships with other research activities, and knowledge and information that will be helpful for readers who are trying to study assistive human-mechatronics systems, are also mentioned.

  3. Ingredients of Adaptability: A Survey of Reconfigurable Processors

    Directory of Open Access Journals (Sweden)

    Anupam Chattopadhyay

    2013-01-01

    Full Text Available For a design to survive unforeseen physical effects like aging, temperature variation, and/or the emergence of new application standards, adaptability needs to be supported. Adaptability, in its complete strength, is present in reconfigurable processors, which makes it an important IP in modern System-on-Chips (SoCs). Reconfigurable processors have risen to prominence as a dominant computing platform across embedded, general-purpose, and high-performance application domains during the last decade. Significant advances have been made in many areas, such as identifying the advantages of reconfigurable platforms, their modeling and implementation flow, and finally early commercial acceptance. This paper reviews these progresses from various perspectives with particular emphasis on fundamental challenges and their solutions. Building on this analysis of the past, a future research roadmap is proposed.

  4. Distributing and Scheduling Divisible Task on Parallel Communicating Processors

    Institute of Scientific and Technical Information of China (English)

    李国东; 张德富

    2002-01-01

    In this paper we propose a novel scheme for scheduling a divisible task on parallel processors connected by a system interconnection network with arbitrary topology. A divisible task is a computation that can be divided into arbitrarily many independent subtasks solved in parallel. Our model takes into consideration communication initial time and communication delays between processors. Moreover, by constructing the corresponding Network Spanning Tree (NST) for a network, our scheme can be applied to all kinds of network topologies. We present the concept of a Balanced Task Distribution Tree and use it to design the Equation Set Creation Algorithm, in which the set of linear equations is created by traversing the NST in post-order. After solving the created equations, we get the optimal task assignment scheme. Experiments confirm the applicability of our scheme in real-life situations.
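
    For intuition, the equal-finish-time equations behind such schemes can be written down by hand for the smallest case: a root processor keeps a fraction a of the load and ships the rest to one neighbour over a link with startup cost and per-unit delay. The variable names are illustrative; the paper's algorithm generalizes this to whole spanning trees.

    #include <stdio.h>

    /* Two-processor divisible-load split. The root computes fraction a of
     * the load in a*T1 seconds; the neighbour first receives (1-a) of the
     * load (startup c plus per-unit delay z), then computes it in (1-a)*T2.
     * Optimality: both finish simultaneously,
     *     a*T1 = c + (1-a)*z + (1-a)*T2
     * which solves to a = (c + z + T2) / (T1 + z + T2). */
    int main(void) {
        double T1 = 10.0;  /* root: seconds to compute the whole load      */
        double T2 = 8.0;   /* neighbour: seconds to compute the whole load */
        double z  = 1.5;   /* per-unit-load communication delay            */
        double c  = 0.2;   /* communication startup (initial) time         */

        double a = (c + z + T2) / (T1 + z + T2);
        printf("root keeps %.3f of the load, finish time %.3f s\n", a, a * T1);
        return 0;
    }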

  5. IDSP- INTERACTIVE DIGITAL SIGNAL PROCESSOR

    Science.gov (United States)

    Mish, W. H.

    1994-01-01

    The Interactive Digital Signal Processor, IDSP, consists of a set of time series analysis "operators" based on the various algorithms commonly used for digital signal analysis work. The processing of a digital time series to extract information is usually achieved by the application of a number of fairly standard operations. However, it is often desirable to "experiment" with various operations and combinations of operations to explore their effect on the results. IDSP is designed to provide an interactive and easy-to-use system for this type of digital time series analysis. The IDSP operators can be applied in any sensible order (even recursively), and can be applied to single time series or to simultaneous time series. IDSP is being used extensively to process data obtained from scientific instruments onboard spacecraft. It is also an excellent teaching tool for demonstrating the application of time series operators to artificially-generated signals. IDSP currently includes over 43 standard operators. Processing operators provide for Fourier transformation operations, design and application of digital filters, and Eigenvalue analysis. Additional support operators provide for data editing, display of information, graphical output, and batch operation. User-developed operators can be easily interfaced with the system to provide for expansion and experimentation. Each operator application generates one or more output files from an input file. The processing of a file can involve many operators in a complex application. IDSP maintains historical information as an integral part of each file so that the user can display the operator history of the file at any time during an interactive analysis. IDSP is written in VAX FORTRAN 77 for interactive or batch execution and has been implemented on a DEC VAX-11/780 operating under VMS. The IDSP system generates graphics output for a variety of graphics systems. The program requires the use of Versaplot and Template plotting

  6. Improved Real-time Implementation of Adaptive Gassian Mixture Model-based Object Detection Algorithm for Fixed-point DSP Processors

    Institute of Scientific and Technical Information of China (English)

    Byung-eun LEE; Thanh-binh NGUYEN; Sun-tae CHUNG

    2010-01-01

    Foreground moving object detection is an important process in various computer vision applications such as intelligent visual surveillance, HCI, object-based video compression, etc. One of the most successful moving object detection algorithms is based on the Adaptive Gaussian Mixture Model (AGMM). Although AGMM-based object detection shows very good performance with respect to detection accuracy, AGMM is a very complex model requiring lots of floating-point arithmetic, and so it carries an expensive computational cost. Thus, direct implementation of AGMM-based object detection on embedded DSPs without floating-point arithmetic hardware support cannot satisfy real-time processing requirements. This paper presents a novel real-time implementation of the adaptive Gaussian mixture model-based moving object detection algorithm for fixed-point DSPs. In the proposed implementation, in addition to changing data types into fixed-point ones, a magnification of the Gaussian distribution technique is introduced so that integer and fixed-point arithmetic can be easily and consistently utilized instead of real-number and floating-point arithmetic in processing the AGMM algorithm. Experimental results show that the proposed implementation has high potential in real-time applications.
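
    A minimal sketch of the kind of conversion involved, assuming Q16.16 fixed point and a single running Gaussian per pixel; the paper's magnification constant and full mixture update are not reproduced here.

    #include <stdio.h>
    #include <stdint.h>

    /* Q16.16 fixed point: value v is stored as round(v * 2^16). */
    #define Q 16
    #define TO_FIX(x) ((int32_t)((x) * (1 << Q)))

    /* One running-Gaussian background update in fixed point:
     *   mean += alpha * (pixel - mean)
     *   var  += alpha * ((pixel - mean)^2 - var)     (pre-update mean)
     * pixel is an 8-bit intensity scaled into Q16; alpha is a Q16 learning
     * rate. Products are computed in 64 bits, then shifted back. */
    static void update(int32_t *mean, int32_t *var, uint8_t pixel, int32_t alpha) {
        int32_t x  = (int32_t)pixel << Q;
        int32_t d  = x - *mean;
        *mean += (int32_t)(((int64_t)alpha * d) >> Q);
        int32_t d2 = (int32_t)(((int64_t)d * d) >> Q);
        *var  += (int32_t)(((int64_t)alpha * (d2 - *var)) >> Q);
    }

    int main(void) {
        int32_t mean = TO_FIX(100.0), var = TO_FIX(15.0);
        int32_t alpha = TO_FIX(0.05);
        update(&mean, &var, 120, alpha);
        printf("mean=%.2f var=%.2f\n", mean / 65536.0, var / 65536.0);
        return 0;
    }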

  7. Human body modeling in injury biomechanics

    NARCIS (Netherlands)

    Happee, R.; Morsink, P.L.J.; Horst, M.J. van der; Wismans, J.S.H.M.

    1999-01-01

    Mathematical modelling is widely used for crash-safety research and design. However, most occupant models used in crash simulations are based on crash dummies and thereby inherit their apparent limitations. This paper describes a mathematical model of the real human body for impact loading. A combin

  8. Modeling human operator involvement in robotic systems

    NARCIS (Netherlands)

    Wewerinke, P.H.

    1991-01-01

    A modeling approach is presented to describe complex manned robotic systems. The robotic system is modeled as a (highly) nonlinear, possibly time-varying dynamic system including any time delays in terms of optimal estimation, control and decision theory. The role of the human operator(s) is modeled

  9. Simultaneous multithreaded processor enhanced for multimedia applications

    Science.gov (United States)

    Mombers, Friederich; Thomas, Michel

    1999-12-01

    The paper proposes a new media processor architecture specifically designed to handle state-of-the-art multimedia encoding and decoding tasks. To achieve this, the architecture efficiently exploits data-, instruction- and thread-level parallelism while continuously adapting its computational resources to reach the most appropriate parallelism level among all the concurrent encoding/decoding processes. Regarding implementation constraints, several critical choices were adopted that solve the interconnection delay problem, lower the effects of cache misses and pipeline stalls, and reduce register file and memory size by adopting a clustered simultaneous multithreaded architecture. We enhanced the classic model to exploit both instruction- and data-level parallelism through vector instructions. The vector extension is well justified for multimedia workloads and improves code density, crossbar complexity, register file ports and decoding logic area, while still providing an efficient way to fully exploit a large set of functional units. An MPEG-2 encoding algorithm based on hybrid genetic search has been implemented that shows the efficiency of the architecture in adapting its resource allocation to better fulfill the application requirements.

  10. A CNN-Specific Integrated Processor

    Directory of Open Access Journals (Sweden)

    Suleyman Malki

    2009-01-01

    Full Text Available Integrated Processors (IP) are algorithm-specific cores that either by programming or by configuration can be re-used within many microelectronic systems. This paper looks at Cellular Neural Networks (CNN) to be realized as IP. First, current digital implementations are reviewed, and the memory-processor bandwidth issues are analyzed. Then a generic view is taken on the structure of the network, and a new intra-communication protocol based on rotating wheels is proposed. It is shown that this provides for guaranteed high performance with a minimal network interface. The resulting node is small and supports multi-level CNN designs, giving the system a 30-fold increase in capacity compared to classical designs. As it facilitates multiple operations on a single image, and single operations on multiple images, with minimal access to the external image memory, balancing the internal and external data transfer requirements optimizes the system operation. In conventional digital CNN designs, the treatment of boundary nodes requires additional logic to handle the CNN value propagation scheme. In the new architecture, only a slight modification of the existing cells is necessary to model the boundary effect. A typical prototype for visual pattern recognition will house 4096 CNN cells with a 2% overhead for making it an IP.

  11. A CNN-Specific Integrated Processor

    Science.gov (United States)

    Malki, Suleyman; Spaanenburg, Lambert

    2009-12-01

    Integrated Processors (IP) are algorithm-specific cores that either by programming or by configuration can be re-used within many microelectronic systems. This paper looks at Cellular Neural Networks (CNN) to be realized as IP. First, current digital implementations are reviewed, and the memory-processor bandwidth issues are analyzed. Then a generic view is taken on the structure of the network, and a new intra-communication protocol based on rotating wheels is proposed. It is shown that this provides for guaranteed high performance with a minimal network interface. The resulting node is small and supports multi-level CNN designs, giving the system a 30-fold increase in capacity compared to classical designs. As it facilitates multiple operations on a single image, and single operations on multiple images, with minimal access to the external image memory, balancing the internal and external data transfer requirements optimizes the system operation. In conventional digital CNN designs, the treatment of boundary nodes requires additional logic to handle the CNN value propagation scheme. In the new architecture, only a slight modification of the existing cells is necessary to model the boundary effect. A typical prototype for visual pattern recognition will house 4096 CNN cells with a 2% overhead for making it an IP.

  12. Cognitive modelling of human temporal reasoning

    NARCIS (Netherlands)

    ter Meulen, AGB

    2003-01-01

    Modelling human reasoning characterizes the fundamental human cognitive capacity to describe our past experience and use it to form expectations as well as plan and direct our future actions. Natural language semantics analyzes dynamic forms of reasoning in which the real-time order determines the

  13. Murine models of human wound healing.

    Science.gov (United States)

    Chen, Jerry S; Longaker, Michael T; Gurtner, Geoffrey C

    2013-01-01

    In vivo wound healing experiments remain the most predictive models for studying human wound healing, allowing an accurate representation of the complete wound healing environment including various cell types, environmental cues, and paracrine interactions. Small animals are economical, easy to maintain, and allow researchers to take advantage of the numerous transgenic strains that have been developed to investigate the specific mechanisms involved in wound healing and regeneration. Here we describe three reproducible murine wound healing models that recapitulate the human wound healing process.

  14. Reduced power processor requirements for the 30-cm diameter HG ion thruster

    Science.gov (United States)

    Rawlin, V. K.

    1979-01-01

    The characteristics of power processors strongly impact the overall performance and cost of electric propulsion systems. A program was initiated to evaluate simplifications of the thruster-power processor interface requirements. The power processor requirements are mission dependent with major differences arising for those missions which require a nearly constant thruster operating point (typical of geocentric and some inbound planetary missions) and those requiring operation over a large range of input power (such as outbound planetary missions). This paper describes the results of tests which have indicated that as many as seven of the twelve power supplies may be eliminated from the present Functional Model Power Processor used with 30-cm diameter Hg ion thrusters.

  15. Design and Implementation of 64-Bit Execute Stage for VLIW Processor Architecture on FPGA

    Directory of Open Access Journals (Sweden)

    Manju Rani

    2012-07-01

    Full Text Available An FPGA implementation of a 64-bit execute unit for a VLIW processor, with improved power characteristics, is presented in this paper. VHDL is used to model the architecture. VLIW stands for Very Long Instruction Word. This processor architecture is based on parallel processing, in which more than one instruction is executed in parallel to increase instruction throughput; in this respect it is the basis of modern superscalar processors. Basically, a VLIW machine is a RISC processor; the difference is that it carries a long instruction word compared to RISC. The execute stage of the pipeline is where instructions are carried out and where the ALU (arithmetic logic unit) is located. The execute stage is synthesized and targeted to a Xilinx Virtex-4 FPGA, and the results calculated for the 64-bit execute stage show improved power compared to previous work.
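
    To illustrate the execution model (not the paper's VHDL), here is a toy C model of a 2-slot VLIW execute stage: one long instruction word carries two independent operations that execute in the same cycle. Opcode names and register layout are invented for the sketch.

    #include <stdio.h>
    #include <stdint.h>

    typedef enum { OP_ADD, OP_SUB, OP_AND, OP_OR } Op;
    typedef struct { Op op; int rd, rs, rt; } Slot;
    typedef struct { Slot slot[2]; } VliwWord;   /* the "very long" word */

    static int64_t alu(Op op, int64_t a, int64_t b) {
        switch (op) {
            case OP_ADD: return a + b;
            case OP_SUB: return a - b;
            case OP_AND: return a & b;
            default:     return a | b;
        }
    }

    int main(void) {
        int64_t reg[8] = {0, 7, 3, 0, 12, 5, 0, 0};
        VliwWord w = {{ {OP_ADD, 3, 1, 2}, {OP_SUB, 6, 4, 5} }};
        /* Both slots read registers, then both write back: no intra-word
         * dependencies are allowed, which is what the compiler guarantees. */
        int64_t r0 = alu(w.slot[0].op, reg[w.slot[0].rs], reg[w.slot[0].rt]);
        int64_t r1 = alu(w.slot[1].op, reg[w.slot[1].rs], reg[w.slot[1].rt]);
        reg[w.slot[0].rd] = r0;
        reg[w.slot[1].rd] = r1;
        printf("r3=%lld r6=%lld\n", (long long)reg[3], (long long)reg[6]);
        return 0;
    }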

  16. Animal Models of Human Placentation - A Review

    DEFF Research Database (Denmark)

    Carter, Anthony Michael

    2007-01-01

    This review examines the strengths and weaknesses of animal models of human placentation and pays particular attention to the mouse and non-human primates. Analogies can be drawn between mouse and human in placental cell types and genes controlling placental development. There are, however... and endometrium is similar in macaques and baboons, as is the subsequent lacunar stage. The absence of interstitial trophoblast cells in the monkey is an important difference from human placentation. However, there is a strong resemblance in the way spiral arteries are invaded and transformed in the macaque...

  17. The Human-Artifact Model

    DEFF Research Database (Denmark)

    Bødker, Susanne; Klokmose, Clemens Nylandsted

    2011-01-01

    Although devices of all shapes and sizes currently dominate the technological landscape, human–computer interaction (HCI) as a field is not yet theoretically equipped to match this reality. In this article we develop the human–artifact model, which has its roots in activity theoretical HCI. By reinterpreting the activity theoretical foundation, we present a framework that helps address the analysis of individual interactive artifacts while embracing that they are part of a larger ecology of artifacts. We show how the human–artifact model helps structure the understanding of an artifact's action-possibilities in relation to the artifact ecology surrounding it. Essential to the model is that it provides four interconnected levels of analysis and addresses the possibilities and problems at these four levels. Artifacts and their use are constantly developing, and we address development in, and of, use. The framework...

  18. Pipelining and bypassing in a RISC/DSP processor

    Science.gov (United States)

    Yu, Guojun; Yao, Qingdong; Liu, Peng; Jiang, Zhidi; Li, Fuping

    2005-03-01

    This paper proposes the pipelining and bypassing unit (BPU) design method in our 32-bit RISC/DSP processor MediaDsp3201 (briefly, MD32). MD32 is realized in 0.18μm technology at 1.8 V with a 200 MHz working clock and can achieve 200 million multiply-accumulate (MAC) operations per second. It merges a RISC architecture and DSP computation capability thoroughly, achieving a fundamental RISC, extended DSP and single-instruction-multiple-data (SIMD) instruction set with various addressing modes in a unified and customized DSP pipeline stage architecture. We first describe the pipeline structure of MD32, comparing it to a typical RISC-style pipeline structure. We then study the validity of two bypassing schemes in terms of their effectiveness in resolving pipeline data hazards: centralized and distributed BPU design strategies (CBPU and DBPU). A bypassing circuit chain model is given for the DBPU, in which register read is placed only at the ID pipe stage. Considering the processor's working clock, which is determined by the pipeline time delay, the optimization of the priority serial-selection circuit is also analyzed in detail, since the BPU consists of a long serial path of combinational logic. Finally, the performance improvement is analyzed.

  19. DESIGN OF INSTRUCTION LIST (IL PROCESSOR FOR PROCESS CONTROL

    Directory of Open Access Journals (Sweden)

    Mrs. Shilpa Rudrawar

    2012-06-01

    Full Text Available A Programmable Logic Controller (PLC) is a device that allows an electro-mechanical engineer to automate a mechanical process in an efficient manner. Safety-critical high-speed applications require quick responses. In order to improve the speed of executing PLC instructions, an IL processor is researched. A hierarchical approach has been used so that basic units can be modelled using behavioural programming; these basic units are then combined using structural programming. A hardwired control approach is used to design the control unit. The proposed IL (Instruction List) processor works upon our developed IL instructions, which are compatible with the programming language IL according to the norm IEC 61131-3. This can accelerate instruction execution and ultimately improve real-time performance compared to the traditional sequential execution of a PLC program, giving quick responses in safety-critical high-speed applications. The design is implemented on an FPGA for verification purposes. To validate the advantage of the proposed design, two ladder programs are compiled to the instruction set of the proposed IL processor as well as to the IL programming language.
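
    For readers unfamiliar with IEC 61131-3 Instruction List, a software interpreter for a boolean subset shows the execution model such hardware accelerates: every instruction combines an operand with a single current-result register. This sketch is an illustrative assumption, not the paper's processor.

    #include <stdio.h>
    #include <stdbool.h>

    /* Boolean subset of IEC 61131-3 IL: LD loads the current result (CR),
     * AND/OR combine an operand into CR, ST stores CR to an output. */
    typedef enum { LD, AND, OR, ST } Opcode;
    typedef struct { Opcode op; int addr; } Instr;

    int main(void) {
        bool inputs[4]  = {true, false, true, false};
        bool outputs[4] = {false};
        /* Ladder rung "OUT0 := (IN0 AND IN1) OR IN2" compiled to IL: */
        Instr prog[] = { {LD, 0}, {AND, 1}, {OR, 2}, {ST, 0} };
        bool cr = false;                     /* current result register */
        for (size_t pc = 0; pc < sizeof prog / sizeof prog[0]; pc++) {
            switch (prog[pc].op) {
                case LD:  cr = inputs[prog[pc].addr];        break;
                case AND: cr = cr && inputs[prog[pc].addr];  break;
                case OR:  cr = cr || inputs[prog[pc].addr];  break;
                case ST:  outputs[prog[pc].addr] = cr;       break;
            }
        }
        printf("OUT0 = %d\n", outputs[0]);   /* (1 AND 0) OR 1 = 1 */
        return 0;
    }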

  20. Finite element modeling of the human pelvis

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, B.

    1995-11-01

    A finite element model of the human pelvis was created using a commercial wire frame image as a template. To test the final mesh, the model's mechanical behavior was analyzed through finite element analysis and the results were displayed graphically as stress concentrations. In the future, this grid of the pelvis will be integrated with a full leg model and used in side-impact car collision simulations.

  1. Optimal processor allocation for sort-last compositing under BSP-tree ordering

    Science.gov (United States)

    Ramakrishnan, C. R.; Silva, Claudio T.

    1999-03-01

    In this paper, we consider a parallel rendering model that exploits the fundamental distinction between rendering and compositing operations, by assigning processors from specialized pools for each of these operations. Our motivation is to support the parallelization of general scan-line rendering algorithms with minimal effort, basically by supporting a compositing back-end (i.e., a sort-last architecture) that is able to perform user-controlled image composition. Our computational model is based on organizing rendering as well as compositing processors on a BSP-tree, whose internal nodes we call the compositing tree. Many known rendering algorithms, such as volumetric ray casting and polygon rendering can be easily parallelized based on the structure of the BSP-tree. In such a framework, it is paramount to minimize the processing power devoted to compositing, by minimizing the number of processors allocated for composition as well as optimizing the individual compositing operations. In this paper, we address the problems related to the static allocation of processor resources to the compositing tree. In particular, we present an optimal algorithm to allocate compositing operations to compositing processors. We also present techniques to evaluate the compositing operations within each processor using minimum memory while promoting concurrency between computation and communication. We describe the implementation details and provide experimental evidence of the validity of our techniques in practice.
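
    The BSP-tree ordering that the compositing tree relies on can be sketched independently of the allocation algorithm (which the paper gives and is not reproduced here): a view-dependent traversal visits the half-space far from the eye first, yielding a correct back-to-front compositing order. Names and the toy tree below are assumptions for illustration.

    #include <stdio.h>

    /* A BSP node splits space by a plane (n . x = d); leaves hold the
     * partial images produced by renderers. Visiting the far side first
     * gives back-to-front order for "over" compositing. */
    typedef struct Node {
        double n[3], d;              /* splitting plane             */
        struct Node *front, *back;   /* children; NULL for a leaf   */
        int leaf_id;                 /* renderer id when leaf       */
    } Node;

    static void back_to_front(const Node *t, const double eye[3]) {
        if (!t->front && !t->back) {             /* leaf */
            printf("composite image from renderer %d\n", t->leaf_id);
            return;
        }
        double side = t->n[0]*eye[0] + t->n[1]*eye[1] + t->n[2]*eye[2] - t->d;
        const Node *far_c  = side >= 0 ? t->back  : t->front;
        const Node *near_c = side >= 0 ? t->front : t->back;
        if (far_c)  back_to_front(far_c, eye);
        if (near_c) back_to_front(near_c, eye);
    }

    int main(void) {
        Node a = {{0}, 0, NULL, NULL, 0}, b = {{0}, 0, NULL, NULL, 1};
        Node root = {{1, 0, 0}, 0.0, &a, &b, -1};  /* split at plane x = 0 */
        double eye[3] = {5.0, 0.0, 0.0};           /* eye on the +x side   */
        back_to_front(&root, eye);                 /* prints 1 then 0      */
        return 0;
    }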

  2. Computational Intelligence in a Human Brain Model

    Directory of Open Access Journals (Sweden)

    Viorel Gaftea

    2016-06-01

    Full Text Available This paper focuses on current trends in the brain research domain and the current stage of development of research for software and hardware solutions, communication capabilities between human beings and machines, new technologies, nano-science and Internet of Things (IoT) devices. The proposed model for the human brain assumes a main similarity between human intelligence and the chess game thinking process. Tactical and strategic reasoning and the need to follow the rules of the chess game are all very similar to the activities of the human brain. The main objectives for a living being and the chess player are the same: securing a position, surviving and eliminating the adversaries. The brain resolves these goals, and moreover, the being's movement, actions and speech are sustained by the five vital senses and equilibrium. The chess game strategy helps us understand the human brain better and replicate it more easily in the proposed 'Software and Hardware' (SAH) Model.

  4. Real time processor for array speckle interferometry

    Science.gov (United States)

    Chin, Gordon; Florez, Jose; Borelli, Renan; Fong, Wai; Miko, Joseph; Trujillo, Carlos

    1989-01-01

    The authors are constructing a real-time processor to acquire image frames, perform array flat-fielding, execute a 64 x 64 element two-dimensional complex FFT (fast Fourier transform) and average the power spectrum, all within the 25 ms coherence time for speckles at near-IR (infrared) wavelength. The processor will be a compact unit controlled by a PC with real-time display and data storage capability. This will provide the ability to optimize observations and obtain results on the telescope rather than waiting several weeks before the data can be analyzed and viewed with offline methods. The image acquisition and processing, design criteria, and processor architecture are described.
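
    A host-side sketch of the same pipeline (flat-fielding, 2-D FFT, power-spectrum accumulation) is easy to express with the FFTW library; the 64 x 64 size matches the paper, while the array names and gain calibration are assumed for illustration. In practice the frame loop would run once per 25 ms coherence-time exposure.

    #include <stdio.h>
    #include <fftw3.h>               /* link with -lfftw3 -lm */

    #define N 64

    int main(void) {
        static double frame[N][N], gain[N][N], power[N][N];
        fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * N * N);
        fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * N * N);
        fftw_plan plan = fftw_plan_dft_2d(N, N, in, out,
                                          FFTW_FORWARD, FFTW_ESTIMATE);
        for (int i = 0; i < N; i++)          /* placeholder frame and gains */
            for (int j = 0; j < N; j++) { gain[i][j] = 1.0; frame[i][j] = (i + j) % 7; }

        /* one speckle frame: flat-field, transform, accumulate |F|^2 */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                in[i * N + j][0] = frame[i][j] * gain[i][j];  /* flat-field */
                in[i * N + j][1] = 0.0;
            }
        fftw_execute(plan);
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                double re = out[i * N + j][0], im = out[i * N + j][1];
                power[i][j] += re * re + im * im;   /* average over frames */
            }
        printf("accumulated DC power: %g\n", power[0][0]);
        fftw_destroy_plan(plan);
        fftw_free(in); fftw_free(out);
        return 0;
    }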

  5. Making CSB+-Tree Processor Conscious

    DEFF Research Database (Denmark)

    Samuel, Michael; Pedersen, Anders Uhl; Bonnet, Philippe

    2005-01-01

    Cache-conscious indexes, such as the CSB+-tree, are sensitive to the underlying processor architecture. In this paper, we focus on how to adapt the CSB+-tree so that it performs well on a range of different processor architectures. Previous work has focused on the impact of node size on the performance of the CSB+-tree. We argue that it is necessary to consider a larger group of parameters in order to adapt the CSB+-tree to processor architectures as different as Pentium and Itanium. We identify this group of parameters and study how it impacts the performance of the CSB+-tree on Itanium 2. Finally, we propose a systematic method for adapting the CSB+-tree to new platforms. This work is a first step towards integrating the CSB+-tree in MySQL's heap storage manager.
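
    The parameter previous work focused on, node size, is visible directly in the data layout. A minimal sketch of a cache-sensitive B+-tree node, assuming a 64-byte cache line (the key CSB+ idea: all children of a node are contiguous, so a single first-child pointer suffices and the freed space holds more keys per line):

    #include <stdint.h>
    #include <stdio.h>

    #define LINE  64
    #define NKEYS ((LINE - sizeof(void *) - sizeof(uint16_t)) / sizeof(uint32_t))

    /* CSB+-tree internal node sized to one cache line. */
    typedef struct CsbNode {
        struct CsbNode *first_child;   /* children stored contiguously */
        uint16_t nkeys;
        uint32_t keys[NKEYS];          /* fills the rest of the line   */
    } CsbNode;

    /* child i of n is simply n->first_child + i: no per-key pointers */
    static CsbNode *child(CsbNode *n, int i) { return n->first_child + i; }

    int main(void) {
        printf("node size: %zu bytes, keys per node: %zu\n",
               sizeof(CsbNode), (size_t)NKEYS);
        (void)child;
        return 0;
    }

    Adapting to an Itanium-class processor would then amount to changing LINE (e.g. to a multiple of the line size) and the per-node search loop, which is exactly the kind of parameter-group tuning the paper studies.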

  6. Intrusion Detection Architecture Utilizing Graphics Processors

    Directory of Open Access Journals (Sweden)

    Branislav Madoš

    2012-12-01

    Full Text Available With thriving technology and the great increase in the usage of computer networks, the risk of having these networks come under attack has increased. A number of techniques have been created and designed to help in detecting and/or preventing such attacks. One common technique is the use of Intrusion Detection Systems (IDS). Today, a number of open-source and commercial IDS are available to match enterprises' requirements. However, the performance of these systems is still the main concern. This paper examines perceptions of intrusion detection architecture implementation resulting from the use of graphics processors. It discusses recent research activities, developments and problems of operating systems security. Some exploratory evidence is presented that shows the capabilities of using graphics processors within intrusion detection systems. The focus is on how knowledge gained through graphics processor inclusion has played out in the design of an intrusion detection architecture that is seen as an opportunity to strengthen research expertise.

  7. ETHERNET PACKET PROCESSOR FOR SOC APPLICATION

    Directory of Open Access Journals (Sweden)

    Raja Jitendra Nayaka

    2012-07-01

    Full Text Available As the demand for the Internet expands significantly in numbers of users, servers, IP addresses, switches and routers, the IP-based network architecture must evolve and change. The design of domain-specific processors that require high performance, low power and a high degree of programmability is the bottleneck in many processor-based applications. This paper describes the design of an ethernet packet processor for a system-on-chip (SoC) which performs all core packet processing functions, including segmentation and reassembly, packetization, classification, and route and queue management, which will speed up switching/routing performance. Our design has been configured for use with multiple projects targeted to a commercial configurable logic device; the system is designed to support 10/100/1000 links with a speed advantage. VHDL has been used to implement and simulate the required functions in an FPGA.

  8. Programmable DNA-mediated multitasking processor

    CERN Document Server

    Shu, Jian-Jun; Yong, Kian-Yan; Shao, Fangwei; Lee, Kee Jin

    2015-01-01

    Because of DNA's appealing features as a perfect material, including minuscule size, defined structural repeat and rigidity, programmable DNA-mediated processing is a promising computing paradigm which employs DNA as an information-storing and information-processing substrate to tackle computational problems. The massive parallelism of DNA hybridization exhibits transcendent potential to improve multitasking capabilities and yield a tremendous speed-up over conventional electronic processors with their stepwise signal cascades. As an example of this multitasking capability, we present an in vitro programmable DNA-mediated optimal route planning processor as a functional unit embedded in contemporary navigation systems. The novel programmable DNA-mediated processor has several advantages over existing silicon-mediated methods, such as conducting massive data storage and simultaneous processing via much less material than conventional silicon devices.

  9. SWIFT Privacy: Data Processor Becomes Data Controller

    Directory of Open Access Journals (Sweden)

    Edwin Jacobs

    2007-04-01

    Full Text Available Last month, SWIFT emphasised the urgent need for a solution to compliance with US Treasury subpoenas that provides legal certainty for the financial industry as well as for SWIFT. SWIFT will continue its activities to adhere to the Safe Harbor framework of the European data privacy legislation. Safe Harbor is a framework negotiated by the EU and US in 2000 to provide a way for companies in Europe, with operations in the US, to conform to EU data privacy regulations. This seems to conclude a complex privacy case, widely covered by the US and European media. A fundamental question in this case was who is a data controller and who is a mere data processor. Both the Belgian and the European privacy authorities considered SWIFT, jointly with the banks, as a data controller whereas SWIFT had considered itself as a mere data processor that processed financial data for banks. The difference between controller and processor has far reaching consequences.

  10. Efficient SIMD optimization for media processors

    Institute of Scientific and Technical Information of China (English)

    Jian-peng ZHOU; Ce SHI

    2008-01-01

    Single instruction multiple data (SIMD) instructions are often implemented in modern media processors. Although SIMD instructions are useful in multimedia applications, most compilers do not have good support for SIMD instructions. This paper focuses on SIMD instruction generation for media processors. We present an efficient code optimization approach that is integrated into a retargetable C compiler. SIMD instructions are generated by finding and combining the same operations in programs. Experimental results for the UltraSPARC VIS instruction set show that a speedup factor of up to 2.639 is obtained.
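
    The core idea, combining identical operations on adjacent data into one instruction, looks like this when written by hand; the paper targets the UltraSPARC VIS instruction set, so the x86 SSE2 intrinsics here are purely a widely available stand-in.

    #include <stdio.h>
    #include <emmintrin.h>   /* SSE2 */

    #define N 16

    int main(void) {
        short a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = (short)i; b[i] = (short)(2 * i); }

        /* Scalar form: sixteen separate 16-bit additions.
         * SIMD form: the same operation combined, eight lanes at a time. */
        for (int i = 0; i < N; i += 8) {
            __m128i va = _mm_loadu_si128((__m128i *)&a[i]);
            __m128i vb = _mm_loadu_si128((__m128i *)&b[i]);
            __m128i vc = _mm_add_epi16(va, vb);   /* 8 adds in one instr */
            _mm_storeu_si128((__m128i *)&c[i], vc);
        }
        printf("c[5] = %d\n", c[5]);   /* 5 + 10 = 15 */
        return 0;
    }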

  11. 8 Bit RISC Processor Using Verilog HDL

    Directory of Open Access Journals (Sweden)

    Ramandeep Kaur

    2014-03-01

    Full Text Available RISC is a design philosophy to reduce the complexity of the instruction set, which in turn reduces the amount of space, cycle time, cost and other parameters taken into account during the implementation of a design. The advent of FPGAs has enabled complex logical systems to be implemented directly in reconfigurable hardware. The intent of this paper is to design and implement an 8-bit RISC processor using the FPGA Spartan-3E tools. The processor design depends upon design specification, analysis and simulation, and takes into consideration a very simple instruction set. The significant components include the control unit, ALU, shift registers and accumulator register.

  12. SPROC: A multiple-processor DSP IC

    Science.gov (United States)

    Davis, R.

    1991-01-01

    A large, single-chip, multiple-processor, digital signal processing (DSP) integrated circuit (IC) fabricated in HP-Cmos34 is presented. The innovative architecture is best suited for analog and real-time systems characterized by both parallel signal data flows and concurrent logic processing. The IC is supported by a powerful development system that transforms graphical signal flow graphs into production-ready systems in minutes. Automatic compiler partitioning of tasks among four on-chip processors gives the IC the signal processing power of several conventional DSP chips.

  13. Multi-core processors - An overview

    CERN Document Server

    Venu, Balaji

    2011-01-01

    Microprocessors have revolutionized the world we live in, and continuous efforts are being made to manufacture not only faster chips but also smarter ones. A number of techniques such as data-level parallelism, instruction-level parallelism and hyper-threading (Intel's HT) already exist which have dramatically improved the performance of microprocessor cores. This paper briefly reviews the evolution of multi-core processors, then introduces the technology and its advantages in today's world. The paper concludes by detailing the challenges currently faced by multi-core processors and how the industry is trying to address these issues.

  14. Mathematical human modelling for impact loading

    NARCIS (Netherlands)

    Happee, R.; Hoof, J.F.A.M. van; Lange, R. de

    2001-01-01

    Mathematical modeling of the human body is widely used for automotive crash-safety research and design. Simulations have contributed to a reduction of injury numbers by optimization of vehicle structures and restraint systems. Currently, such simulations are largely performed using occupant models b

  16. Mathematical human body modelling for impact loading

    NARCIS (Netherlands)

    Happee, R.; Morsink, P.L.J.; Wismans, J.S.H.M.

    1999-01-01

    Mathematical modelling of the human body is widely used for automotive crash safety research and design. Simulations have contributed to a reduction of injury numbers by optimisation of vehicle structures and restraint systems. Currently such simulations are largely performed using occupant models

  17. Complex Systems and Human Performance Modeling

    Science.gov (United States)

    2013-12-01

    constitute a cognitive architecture or decomposing the work flows and resource constraints that characterize human-system interactions, the modeler... also explored the generation of so-called “fractal” series from simple task network models where task times are calculated by way of a moving

  18. Models of the Human in Tantric Hinduism

    DEFF Research Database (Denmark)

    Olesen, Bjarne Wernicke; Flood, Gavin

    2018-01-01

    This research project explores the origins, developments and transformations of yogic models of the human (e.g. kundalini yoga, the cakra system and ritual sex) in the tantric goddess traditions or what might be called Śāktism of medieval India. These Śākta models of esoteric anatomy originating...

  20. Interior Design Research: A Human Ecosystem Model.

    Science.gov (United States)

    Guerin, Denise A.

    1992-01-01

    The interior ecosystems model illustrates effects on the human organism of the interaction of the natural, behavioral, and built environment. Examples of interior lighting and household energy consumption show the model's flexibility for organizing study variables in interior design research. (SK)

  1. A Model of the Human Eye

    Science.gov (United States)

    Colicchia, G.; Wiesner, H.; Waltner, C.; Zollman, D.

    2008-01-01

    We describe a model of the human eye that incorporates a variable converging lens. The model can be easily constructed by students with low-cost materials. It shows in a comprehensible way the functionality of the eye's optical system. Images of near and far objects can be focused. Also, the defects of near and farsighted eyes can be demonstrated.

  3. Comparison of Processor Performance of SPECint2006 Benchmarks of some Intel Xeon Processors

    Directory of Open Access Journals (Sweden)

    Abdul Kareem PARCHUR

    2012-08-01

    Full Text Available High performance is a critical requirement for all microprocessor manufacturers. The present paper describes a comparison of performance between two main Intel Xeon processor series (Type A: Intel Xeon X5260, X5460, E5450 and L5320; Type B: Intel Xeon X5140, 5130, 5120 and E5310). The microarchitecture of these processors is implemented on the basis of a new family of processors from Intel starting with the Pentium 4 processor. These processors can provide a performance boost for many key application areas in the modern generation. The scaling of performance across the two series has been analyzed using the performance numbers of 12 CPU2006 integer benchmarks that exhibit significant differences in performance. The results and analysis can be used by performance engineers, scientists and developers to better understand performance scaling in modern-generation processors.

  4. Inter Processor Communication for Fault Diagnosis in Multiprocessor Systems

    Directory of Open Access Journals (Sweden)

    C. D. Malleswar

    1994-04-01

    Full Text Available In the present paper a simple technique is proposed for fault diagnosis in multiprocessor and multiple-system environments, wherein all microprocessors in the system are used in part to check the health of their neighbouring processors. It involves building simple fail-safe serial communication links between processors. Processors communicate with each other over these links, and each processor is made to go through certain sequences of actions intended for diagnosis, under the observation of another processor. With limited overheads, fault detection can be done by this method. Also outlined are some of the popular techniques used for health checks of processor-based systems.
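
    A sketch of the neighbour-observation idea, assuming a byte-oriented serial link; send_byte/recv_byte are placeholders for whatever the link hardware provides, stubbed here with a simulated healthy neighbour so the sketch runs standalone.

    #include <stdio.h>
    #include <stdint.h>

    /* Simulated serial link: the "neighbour" answers challenge ^ 0xA5.
     * On real hardware send_byte/recv_byte would wrap the link drivers. */
    static uint8_t last_sent;
    static void send_byte(int link, uint8_t b) { (void)link; last_sent = b; }
    static int recv_byte(int link, uint8_t *b, int timeout_ms) {
        (void)link; (void)timeout_ms;
        *b = (uint8_t)(last_sent ^ 0xA5);   /* healthy neighbour replies */
        return 1;                           /* 0 would mean a timeout    */
    }

    /* Challenge-response health check: the neighbour must compute an
     * agreed function of a fresh challenge before a deadline. A wrong
     * answer or a timeout marks the neighbour as suspect. */
    static int neighbour_healthy(int link, uint8_t challenge) {
        uint8_t reply;
        send_byte(link, challenge);
        if (!recv_byte(link, &reply, 100))  /* no answer in time: suspect */
            return 0;
        return reply == (uint8_t)(challenge ^ 0xA5);
    }

    int main(void) {
        printf("neighbour on link 0: %s\n",
               neighbour_healthy(0, 0x3C) ? "healthy" : "suspect");
        return 0;
    }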

  5. Human Muscle Fatigue Model in Dynamic Motions

    CERN Document Server

    Ma, Ruina; Bennis, Fouad; Ma, Liang

    2012-01-01

    Human muscle fatigue is considered to be one of the main reasons for Musculoskeletal Disorder (MSD). Recent models have been introduced to define muscle fatigue for static postures. However, the main drawbacks of these models are that the dynamic effect of the human and the external load are not taken into account. In this paper, each human joint is assumed to be controlled by two muscle groups to generate motions such as push/pull. The joint torques are computed using Lagrange's formulation to evaluate the dynamic factors of the muscle fatigue model. An experiment is defined to validate this assumption and the result for one person confirms its feasibility. The evaluation of this model can predict the fatigue and MSD risk in industry production quickly.

  6. Mathematical models of human african trypanosomiasis epidemiology.

    Science.gov (United States)

    Rock, Kat S; Stone, Chris M; Hastings, Ian M; Keeling, Matt J; Torr, Steve J; Chitnis, Nakul

    2015-03-01

    Human African trypanosomiasis (HAT), commonly called sleeping sickness, is caused by Trypanosoma spp. and transmitted by tsetse flies (Glossina spp.). HAT is usually fatal if untreated and transmission occurs in foci across sub-Saharan Africa. Mathematical modelling of HAT began in the 1980s with extensions of the Ross-Macdonald malaria model and has since consisted, with a few exceptions, of similar deterministic compartmental models. These models have captured the main features of HAT epidemiology and provided insight on the effectiveness of the two main control interventions (treatment of humans and tsetse fly control) in eliminating transmission. However, most existing models have overestimated prevalence of infection and ignored transient dynamics. There is a need for properly validated models, evolving with improved data collection, that can provide quantitative predictions to help guide control and elimination strategies for HAT.
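
    As a flavour of the Ross-Macdonald-style deterministic compartmental models the review surveys, here is a minimal host-vector model integrated with Euler steps; all parameter values are invented for illustration and carry no epidemiological authority.

    #include <stdio.h>

    /* Minimal host-vector SIS-style model (Ross-Macdonald flavour):
     *   dIh/dt = a*b*m*Iv*(1-Ih) - r*Ih    (infected fraction of humans)
     *   dIv/dt = a*c*Ih*(1-Iv)   - u*Iv    (infected fraction of tsetse)
     * a: bite rate, b/c: transmission probabilities, m: vectors per host,
     * r: human recovery rate, u: vector mortality. Values are invented. */
    int main(void) {
        double Ih = 0.01, Iv = 0.0;
        double a = 0.25, b = 0.1, c = 0.05, m = 5.0, r = 0.005, u = 0.03;
        double dt = 0.1;                          /* days */
        for (int step = 0; step < 36500; step++) {
            double dIh = a * b * m * Iv * (1.0 - Ih) - r * Ih;
            double dIv = a * c * Ih * (1.0 - Iv) - u * Iv;
            Ih += dt * dIh;
            Iv += dt * dIv;
        }
        printf("endemic levels after ~10 years: Ih=%.4f Iv=%.4f\n", Ih, Iv);
        return 0;
    }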

  7. Human models of acute lung injury

    Directory of Open Access Journals (Sweden)

    Alastair G. Proudfoot

    2011-03-01

    Full Text Available Acute lung injury (ALI is a syndrome that is characterised by acute inflammation and tissue injury that affects normal gas exchange in the lungs. Hallmarks of ALI include dysfunction of the alveolar-capillary membrane resulting in increased vascular permeability, an influx of inflammatory cells into the lung and a local pro-coagulant state. Patients with ALI present with severe hypoxaemia and radiological evidence of bilateral pulmonary oedema. The syndrome has a mortality rate of approximately 35% and usually requires invasive mechanical ventilation. ALI can follow direct pulmonary insults, such as pneumonia, or occur indirectly as a result of blood-borne insults, commonly severe bacterial sepsis. Although animal models of ALI have been developed, none of them fully recapitulate the human disease. The differences between the human syndrome and the phenotype observed in animal models might, in part, explain why interventions that are successful in models have failed to translate into novel therapies. Improved animal models and the development of human in vivo and ex vivo models are therefore required. In this article, we consider the clinical features of ALI, discuss the limitations of current animal models and highlight how emerging human models of ALI might help to answer outstanding questions about this syndrome.

  8. Mathematical modeling of the human knee joint

    Energy Technology Data Exchange (ETDEWEB)

    Ricafort, Juliet [Univ. of Southern California, Los Angeles, CA (United States). Dept. of Biomedical Engineering

    1996-05-01

    A model was developed to determine the forces exerted by several flexor and extensor muscles of the human knee under static conditions. The following muscles were studied: the gastrocnemius, biceps femoris, semitendinosus, semimembranosus, and the set of quadricep muscles. The tibia and fibula were each modeled as rigid bodies; muscles were modeled by their functional lines of action in space. Assumptions based on previous data were used to resolve the indeterminacy.

  9. Conceptual Data Modelling of Modern Human Migration

    Directory of Open Access Journals (Sweden)

    Kosta Sotiroski

    2012-12-01

    Full Text Available The processes of human migration have been present for ages, since the very beginnings of human history on the planet Earth. Nowadays, they are amplified to a large scale by the modern means of communication, transportation, information and knowledge exchange, as well as by the complex processes of globalization. Knowing the social, demographic, ethnic and educational structure of migrants, as well as their geographical trajectory and the temporal dynamics of their movement across territories, countries and continents, is of crucial importance for both national governments and international policy. There is an emphasized need for identifying, acquiring, organizing, storing, retrieving and analyzing data related to human migration processes. Relational databases provide an ultimate solution, whilst the E-R diagram represents a common graphical tool for conceptual data modelling and relational database design. Within the paper we develop and propose a logical data model of modern human migration.

  10. Preclinical and human surrogate models of itch

    DEFF Research Database (Denmark)

    Hoeck, Emil August; Marker, Jens Broch; Gazerani, Parisa;

    2016-01-01

    Pruritus, or simply itch, is a debilitating symptom that significantly decreases the quality of life in a wide range of clinical conditions. While histamine remains the most studied mediator of itch in humans, treatment options for chronic itch, in particular antihistamine-resistant itch, are limited. Relevant preclinical and human surrogate models of non-histaminergic itch are needed to accelerate the development of novel antipruritics and diagnostic tools. Advances in basic itch research have facilitated the development of diverse models of itch and associated dysesthesiae. While... currently applied in animals and humans.

  11. Biomolecular simulation on thousands of processors

    Science.gov (United States)

    Phillips, James Christopher

    Classical molecular dynamics simulation is a generally applicable method for the study of biomolecular aggregates of proteins, lipids, and nucleic acids. As experimental techniques have revealed the structures of larger and more complex biomolecular machines, the time required to complete even a single meaningful simulation of such systems has become prohibitive. We have developed the program NAMD to simulate systems of 50,000--500,000 atoms efficiently with full electrostatics on parallel computers with 1000 and more processors. NAMD's scalability is achieved through latency tolerant adaptive message-driven execution and measurement-based load balancing. NAMD is implemented in C++ and uses object-oriented design and threads to shield the basic algorithms from the necessary complexity of high-performance parallel execution. Apolipoprotein A-I is the primary protein constituent of high density lipoprotein particles, which transport cholesterol in the bloodstream. In collaboration with A. Jonas, we have constructed and simulated models of the nascent discoidal form of these particles, providing theoretical insight to the debate regarding the lipid-bound structure of the protein. Recently, S. Sligar and coworkers have created 10 nm phospholipid bilayer nanoparticles comprising a small lipid bilayer disk solubilized by synthetic membrane scaffold proteins derived from apolipoprotein A-I. Membrane proteins may be embedded in the water-soluble disks, with various medical and technological applications. We are working to develop variant scaffold proteins that produce disks of greater size, stability, and homogeneity. Our simulations have demonstrated a significant deviation from idealized cylindrical structure, and are being used in the interpretation of small angle x-ray scattering data.

  12. Space Station Water Processor Process Pump

    Science.gov (United States)

    Parker, David

    1995-01-01

    This report presents the results of the development program conducted under contract NAS8-38250-12 related to the International Space Station (ISS) Water Processor (WP) Process Pump. The results of the Process Pump evaluation conducted on this program indicate that further development is required in order to achieve the performance and life requirements of the ISS WP.

  13. Fuel processors for fuel cell APU applications

    Science.gov (United States)

    Aicher, T.; Lenz, B.; Gschnell, F.; Groos, U.; Federici, F.; Caprile, L.; Parodi, L.

    The conversion of liquid hydrocarbons to a hydrogen-rich product gas is a central process step in fuel processors for auxiliary power units (APUs) for vehicles of all kinds. The selection of the reforming process depends on the fuel and the type of the fuel cell. For vehicle power trains, liquid hydrocarbons like gasoline, kerosene, and diesel are utilized and, therefore, they will also be the fuel for the respective APU systems. The fuel cells commonly envisioned for mobile APU applications are molten carbonate fuel cells (MCFC), solid oxide fuel cells (SOFC), and proton exchange membrane fuel cells (PEMFC). Since high-temperature fuel cells, e.g. MCFCs or SOFCs, can be supplied with a feed gas that contains carbon monoxide (CO), their fuel processors do not require reactors for CO reduction and removal. For PEMFCs, on the other hand, CO concentrations in the feed gas must not exceed 50 ppm, better 20 ppm, which requires additional reactors downstream of the reforming reactor. This paper gives an overview of the current state of fuel processor development for APU applications and of APU system developments. Furthermore, it presents the latest developments at Fraunhofer ISE regarding fuel processors for high-temperature fuel cell APU systems on board ships and aircraft.

  14. A Demo Processor as an Educational Tool

    NARCIS (Netherlands)

    van Moergestel, L.; van Nieuwland, K.; Vermond, L.; Meyer, John-Jules Charles

    2014-01-01

    Explaining the workings of a processor can be done in several ways: just a written explanation, some pictures, a simulator program or a real hardware demo. The research presented here is based on the idea that a slowly working hardware demo could be a nice tool to explain to IT students the inner workings of a processor.

  15. Quantum Algorithm Processor For Finding Exact Divisors

    OpenAIRE

    Burger, John Robert

    2005-01-01

    Wiring diagrams are given for a quantum algorithm processor in CMOS to compute, in parallel, all divisors of an n-bit integer. Lines required in a wiring diagram are proportional to n. Execution time is proportional to the square of n.

  16. Focal-plane sensor-processor chips

    CERN Document Server

    Zarándy, Ákos

    2011-01-01

    Focal-Plane Sensor-Processor Chips explores both the implementation and application of state-of-the-art vision chips. Presenting an overview of focal plane chip technology, the text discusses smart imagers and cellular wave computers, along with numerous examples of current vision chips.

  17. Microarchitecture of the Godson-2 Processor

    Institute of Scientific and Technical Information of China (English)

    Wei-Wu Hu; Fu-Xin Zhang; Zu-Song Li

    2005-01-01

    The Godson project is the first attempt to design high performance general-purpose microprocessors in China.This paper introduces the microarchitecture of the Godson-2 processor which is a 64-bit, 4-issue, out-of-order execution RISC processor that implements the 64-bit MIPS-like instruction set. The adoption of the aggressive out-of-order execution techniques (such as register mapping, branch prediction, and dynamic scheduling) and cache techniques (such as non-blocking cache, load speculation, dynamic memory disambiguation) helps the Godson-2 processor to achieve high performance even at not so high frequency. The Godson-2 processor has been physically implemented on a 6-metal 0.18μm CMOS technology based on the automatic placing and routing flow with the help of some crafted library cells and macros. The area of the chip is 6,700 micrometers by 6,200 micrometers and the clock cycle at typical corner is 2.3ns.

  18. Practical guide to energy management for processors

    CERN Document Server

    Consortium, Energywise

    2012-01-01

    Do you know how best to manage and reduce your energy consumption? This book gives comprehensive guidance on effective energy management for organisations in the polymer processing industry. This book is one of three which support the ENERGYWISE Plastics Project eLearning platform for European plastics processors to increase their knowledge and understanding of energy management. Topics covered include: Understanding Energy,

  19. Globe hosts launch of new processor

    CERN Multimedia

    2006-01-01

    Launch of the quadcore processor chip at the Globe. On 14 November, in a series of major media events around the world, the chip-maker Intel launched its new 'quadcore' processor. For the regions of Europe, the Middle East and Africa, the day-long launch event took place in CERN's Globe of Science and Innovation, with over 30 journalists in attendance, coming from as far away as Johannesburg and Dubai. CERN was a significant choice for the event: the first tests of this new generation of processor in Europe had been made at CERN over the preceding months, as part of CERN openlab, a research partnership with leading IT companies such as Intel, HP and Oracle. The event also provided the opportunity for the journalists to visit ATLAS and the CERN Computer Centre. The strategy of putting multiple processor cores on the same chip, which has been pursued by Intel and other chip-makers in the last few years, represents an important departure from the more traditional improvements in the sheer speed of such chips. ...

  20. CGRP in human models of primary headaches

    DEFF Research Database (Denmark)

    Ashina, Håkan; Schytz, Henrik Winther; Ashina, Messoud

    2017-01-01

    OBJECTIVE: To review the role of CGRP in human models of primary headaches and to discuss methodological aspects and future directions. DISCUSSION: Provocation experiments demonstrated a heterogeneous CGRP migraine response in migraine patients. Conflicting CGRP plasma results in the provocation experiments are likely due to assay variation; therefore, proper validation and standardization of an assay is needed. To what extent CGRP is involved in tension-type headache and cluster headache is unknown. CONCLUSION: Human models of primary headaches have elucidated the role of CGRP in headache pathophysiology and sparked great interest in developing new treatment strategies using CGRP antagonists and antibodies. Future studies applying more refined human experimental models should identify biomarkers of CGRP-induced primary headache and reveal whether CGRP provocation experiments could be used...

  1. Human Adaptive Mechatronics and Human-System Modelling

    Directory of Open Access Journals (Sweden)

    Satoshi Suzuki

    2013-03-01

    Full Text Available Several topics in projects for mechatronics studies, namely 'Human Adaptive Mechatronics (HAM)' and 'Human-System Modelling (HSM)', are presented in this paper. The main research theme of the HAM project is a design strategy for a new intelligent mechatronics system, which enhances operators' skills during machine operation. Skill analyses and control system design have been addressed. In the HSM project, human modelling based on a hierarchical classification of skills was studied, covering the following five types of skills: social, planning, cognitive, motion and sensory-motor skills. This paper includes digests of these research topics and the outcomes concerning each type of skill. Relationships with other research activities, and knowledge and information that will be helpful for readers who are trying to study assistive human-mechatronics systems, are also mentioned.

  2. Animal and human models to understand ageing.

    Science.gov (United States)

    Lees, Hayley; Walters, Hannah; Cox, Lynne S

    2016-11-01

    Human ageing is the gradual decline in organ and tissue function with increasing chronological time, leading eventually to loss of function and death. To study the processes involved over research-relevant timescales requires the use of accessible model systems that share significant similarities with humans. In this review, we assess the usefulness of various models, including unicellular yeasts, invertebrate worms and flies, mice and primates including humans, and highlight the benefits and possible drawbacks of each model system in its ability to illuminate human ageing mechanisms. We describe the strong evolutionary conservation of molecular pathways that govern cell responses to extracellular and intracellular signals and which are strongly implicated in ageing. Such pathways centre around insulin-like growth factor signalling and integration of stress and nutritional signals through mTOR kinase. The process of cellular senescence is evaluated as a possible underlying cause for many of the frailties and diseases of human ageing. Also considered is ageing arising from systemic changes that cannot be modelled in lower organisms and instead require studies either in small mammals or in primates. We also touch briefly on novel therapeutic options arising from a better understanding of the biology of ageing. Copyright © 2016. Published by Elsevier Ireland Ltd.

  3. Cyclic Redundancy Checking (CRC) Accelerator for Embedded Processor Datapaths

    National Research Council Canada - National Science Library

    Abdul Rehman Buzdar; Liguo Sun; Rao Kashif; Muhammad Waqar Azhar; Muhammad Imran Khan

    2017-01-01

    We present the integration of a multimode Cyclic Redundancy Checking (CRC) accelerator unit with an embedded processor datapath to enhance the processor performance in terms of execution time and energy efficiency...
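
    The truncated abstract names the operation being offloaded but not its form. For orientation only, the bit-serial loop that such an accelerator typically replaces with parallel hardware can be sketched in Python; the polynomial shown (reflected CRC-32) is an assumed example, not necessarily one of the accelerator's modes.

        # Reference software CRC-32 (reflected, polynomial 0xEDB88320): the
        # per-bit conditional XOR loop is what a datapath accelerator parallelizes.
        def crc32(data: bytes) -> int:
            crc = 0xFFFFFFFF
            for byte in data:
                crc ^= byte
                for _ in range(8):
                    crc = (crc >> 1) ^ 0xEDB88320 if crc & 1 else crc >> 1
            return crc ^ 0xFFFFFFFF

        # Sanity check: crc32(b"123456789") == 0xCBF43926, the standard check value.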

  4. Area and Energy Efficient Viterbi Accelerator for Embedded Processor Datapaths

    National Research Council Canada - National Science Library

    Abdul Rehman Buzdar; Liguo Sun; Muhammad Waqar Azhar; Muhammad Imran Khan; Rao Kashif

    2017-01-01

    .... We present the integration of a mixed hardware/software Viterbi accelerator unit with an embedded processor datapath to enhance the processor performance in terms of execution time and energy efficiency...

  5. A Bayesian joint probability post-processor for reducing errors and quantifying uncertainty in monthly streamflow predictions

    OpenAIRE

    P. Pokhrel; Robertson, D E; Q. J. Wang

    2013-01-01

    Hydrologic model predictions are often biased and subject to heteroscedastic errors originating from various sources including data, model structure and parameter calibration. Statistical post-processors are applied to reduce such errors and quantify uncertainty in the predictions. In this study, we investigate the use of a statistical post-processor based on the Bayesian joint probability (BJP) modelling approach to reduce errors and quantify uncertainty in streamflow predictions.

  6. Array processors based on Gaussian fraction-free method

    Energy Technology Data Exchange (ETDEWEB)

    Peng, S.; Sedukhin, S. [Aizu Univ., Aizuwakamatsu, Fukushima (Japan); Sedukhin, I.

    1998-03-01

    The design of algorithmic array processors for solving linear systems of equations using the fraction-free Gaussian elimination method is presented. The design is based on a formal approach which constructs a family of planar array processors systematically. These array processors are synthesized and analyzed. It is shown that some array processors are optimal in the framework of linear allocation of computations and in terms of the number of processing elements and computing time. (author)

  7. A stochastic model of human gait dynamics

    Science.gov (United States)

    Ashkenazy, Yosef; M. Hausdorff, Jeffrey; Ch. Ivanov, Plamen; Eugene Stanley, H.

    2002-12-01

    We present a stochastic model of gait rhythm dynamics, based on transitions between different “neural centers”, that reproduces distinctive statistical properties of normal human walking. By tuning one model parameter, the transition (hopping) range, the model can describe alterations in gait dynamics from childhood to adulthood-including a decrease in the correlation and volatility exponents with maturation. The model also generates time series with multifractal spectra whose broadness depends only on this parameter. Moreover, we find that the volatility exponent increases monotonically as a function of the width of the multifractal spectrum, suggesting the possibility of a change in multifractality with maturation.
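
    The abstract specifies the model's ingredients precisely enough for a toy reconstruction: a walker hops among "neural centers", each with its own intrinsic stride interval, and the hopping range is the single tuned parameter. A minimal sketch under our own assumed center spacing and noise level (an illustration, not the authors' code):

        import random

        def gait_series(n_steps=1000, hop_range=3, n_centers=50, noise=0.01, seed=0):
            """Stride intervals from a random walk over 'neural centers'."""
            rng = random.Random(seed)
            centers = [0.9 + 0.2 * i / n_centers for i in range(n_centers)]  # seconds
            state = n_centers // 2
            series = []
            for _ in range(n_steps):
                # hop_range controls how far the active center can jump per stride,
                # which in turn shapes the correlation and volatility exponents.
                state = min(n_centers - 1, max(0, state + rng.randint(-hop_range, hop_range)))
                series.append(centers[state] + rng.gauss(0, noise))
            return series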

  8. Analisis Model Pengukuran Human Capital dalam Organisasi

    Directory of Open Access Journals (Sweden)

    Cecep Hidayat

    2013-11-01

    Full Text Available Measurement of human capital is not easy to do because it is dynamic and always changing in accordance with changing circumstances. The choice of dimensions and indicators of measurement needs to consider various factors, such as the situation and the scope of the research. This article has the objective of reviewing the concepts, dimensions and measurement models of human capital. The research method used was a literature study, with major reference sources drawn from current journal articles that discuss the measurement of human capital. Results of the study showed that basically every definition contains dimensions, either explicitly or implicitly. In addition, the results indicated that there are three main categories of agreement among researchers regarding the definition of human capital, which emphasize: economic value/productivity, education, and abilities/competencies. The results also showed that the choice of definitions, dimensions, and indicators for the measurement of human capital depends on the situation, the scope of the research, and the size of the organization. The conclusion of the study is that the measurement model and the choice of dimensions and indicators of human capital measurement will determine the effectiveness of the measurement, and will have an impact on organizational performance.

  9. Engineering large animal models of human disease.

    Science.gov (United States)

    Whitelaw, C Bruce A; Sheets, Timothy P; Lillico, Simon G; Telugu, Bhanu P

    2016-01-01

    The recent development of gene editing tools and methodology for use in livestock enables the production of new animal disease models. These tools facilitate site-specific mutation of the genome, allowing animals carrying known human disease mutations to be produced. In this review, we describe the various gene editing tools and how they can be used for a range of large animal models of diseases. This genomic technology is in its infancy but the expectation is that through the use of gene editing tools we will see a dramatic increase in animal model resources available for both the study of human disease and the translation of this knowledge into the clinic. Comparative pathology will be central to the productive use of these animal models and the successful translation of new therapeutic strategies.

  10. An Alternative Water Processor for Long Duration Space Missions

    Science.gov (United States)

    Barta, Daniel J.; Pickering, Karen D.; Meyer, Caitlin; Pennsinger, Stuart; Vega, Leticia; Flynn, Michael; Jackson, Andrew; Wheeler, Raymond

    2014-01-01

    A new wastewater recovery system has been developed that combines novel biological and physicochemical components for recycling wastewater on long duration human space missions. Functionally, this Alternative Water Processor (AWP) would replace the Urine Processing Assembly on the International Space Station and reduce or eliminate the need for the multi-filtration beds of the Water Processing Assembly (WPA). At its center are two unique game-changing technologies: 1) a biological water processor (BWP) to mineralize organic forms of carbon and nitrogen and 2) an advanced membrane processor (Forward Osmosis Secondary Treatment) for removal of solids and inorganic ions. The AWP is designed for recycling larger quantities of wastewater from multiple sources expected during future exploration missions, including urine, hygiene (hand wash, shower, oral and shave) and laundry. The BWP utilizes a single-stage membrane-aerated biological reactor for simultaneous nitrification and denitrification. The Forward Osmosis Secondary Treatment (FOST) system uses a combination of forward osmosis (FO) and reverse osmosis (RO), is resistant to biofouling and can easily tolerate wastewaters high in non-volatile organics and solids associated with shower and/or hand washing. The BWP has been operated continuously for over 300 days. After startup, the mature biological system averaged 85% organic carbon removal and 44% nitrogen removal, close to the stoichiometric maximum based on available carbon. To date, the FOST has averaged 93% water recovery, with a maximum of 98%. If the wastewater is slightly acidified, ammonia rejection is optimal. This paper provides a description of the technology and summarizes results from ground-based testing using real wastewater.

  11. 49 CFR 234.275 - Processor-based systems.

    Science.gov (United States)

    2010-10-01

    Title 49 (Transportation), Department of Transportation — Grade Crossing Signal System Safety and State Action Plans; Maintenance, Inspection, and Testing Requirements for Processor-Based Systems. § 234.275 Processor-based systems (2010-10-01 edition). (a) ...

  12. A lock circuit for a multi-core processor

    DEFF Research Database (Denmark)

    2015-01-01

    An integrated circuit comprising multiple processor cores and a lock circuit that comprises a queue register with respective bits set or reset via respective connections dedicated to respective processor cores, whereby the queue register identifies those among the multiple processor cores that ...
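
    Behaviorally, the queue register described here amounts to one request bit per core plus an arbitration rule over the set bits. The Python mock-up below illustrates that behavior only; the lowest-core-ID grant policy is our assumption, since the patent text above does not specify the arbitration.

        class LockQueue:
            """Software model of a per-core lock-request queue register."""
            def __init__(self, n_cores: int):
                self.bits = [False] * n_cores   # one request bit per core

            def request(self, core: int):
                self.bits[core] = True          # set via the core's dedicated connection

            def release(self, core: int):
                self.bits[core] = False

            def holder(self):
                # Assumed arbitration: grant to the lowest-numbered requesting core.
                for core, wants in enumerate(self.bits):
                    if wants:
                        return core
                return None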

  13. Bayesian Modeling of a Human MMORPG Player

    CERN Document Server

    Synnaeve, Gabriel

    2010-01-01

    This paper describes an application of Bayesian programming to the control of an autonomous avatar in a multiplayer role-playing game (the example is based on World of Warcraft). We model a particular task, which consists of choosing what to do and to select which target in a situation where allies and foes are present. We explain the model in Bayesian programming and show how we could learn the conditional probabilities from data gathered during human-played sessions.

  14. Bayesian Modeling of a Human MMORPG Player

    Science.gov (United States)

    Synnaeve, Gabriel; Bessière, Pierre

    2011-03-01

    This paper describes an application of Bayesian programming to the control of an autonomous avatar in a multiplayer role-playing game (the example is based on World of Warcraft). We model a particular task, which consists of choosing what to do and to select which target in a situation where allies and foes are present. We explain the model in Bayesian programming and show how we could learn the conditional probabilities from data gathered during human-played sessions.
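
    The decision task described — pick an action and a target given the presence of allies and foes — fits a simple Bayesian classifier trained on logged sessions. The sketch below is a deliberately reduced stand-in (naive Bayes with counted conditional probabilities and Laplace smoothing, all our own assumptions), not the authors' Bayesian program:

        from collections import defaultdict

        # P(action | features) is proportional to P(action) * product of P(f_i | action).
        class NaiveBayesPolicy:
            def __init__(self):
                self.action_counts = defaultdict(int)
                self.feature_counts = defaultdict(int)   # (action, feature, value) -> count

            def observe(self, features: dict, action: str):
                """Learn from one human-played decision."""
                self.action_counts[action] += 1
                for name, value in features.items():
                    self.feature_counts[(action, name, value)] += 1

            def choose(self, features: dict) -> str:
                def score(action):
                    total = self.action_counts[action]
                    p = float(total)
                    for name, value in features.items():
                        # Laplace smoothing avoids zeroing out unseen combinations.
                        p *= (self.feature_counts[(action, name, value)] + 1) / (total + 2)
                    return p
                return max(self.action_counts, key=score)

    For example, after many calls like policy.observe({"foes_near": True, "ally_low_hp": False}, "attack") over logged sessions, policy.choose(...) returns the most probable action for a new situation.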

  15. Human ex vivo wound healing model.

    Science.gov (United States)

    Stojadinovic, Olivera; Tomic-Canic, Marjana

    2013-01-01

    Wound healing is a spatially and temporally regulated process that progresses through sequential, yet overlapping phases and aims to restore barrier breach. To study this complex process scientists use various in vivo and in vitro models. Here we provide step-by-step instructions on how to perform and employ an ex vivo wound healing model to assess epithelization during wound healing in human skin.

  16. Modelling dengue epidemic spreading with human mobility

    Science.gov (United States)

    Barmak, D. H.; Dorso, C. O.; Otero, M.

    2016-04-01

    We explored the effect of human mobility on the spatio-temporal dynamics of Dengue with a stochastic model that takes into account the epidemiological dynamics of the infected mosquitoes and humans, with different mobility patterns of the human population. We observed that human mobility strongly affects the spread of infection by increasing the final size and by changing the morphology of the epidemic outbreaks. When the spreading of the disease is driven only by mosquito dispersal (flight), a main central focus expands diffusively. On the contrary, when human mobility is taken into account, multiple foci appear throughout the evolution of the outbreaks. These secondary foci generated throughout the outbreaks could be of little importance according to their mass or size compared with the largest main focus. However, the coalescence of these foci with the main one generates an effect, through which the latter develops a size greater than the one obtained in the case driven only by mosquito dispersal. This increase in growth rate due to human mobility and the coalescence of the foci are particularly relevant in temperate cities such as the city of Buenos Aires, since they give more possibilities to the outbreak to grow before the arrival of the low-temperature season. The findings of this work indicate that human mobility could be the main driving force in the dynamics of vector epidemics.

  17. AN EFFECTIVE HUMAN LEG MODELING METHOD

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Digital medicine is a new concept in the medical field, and the need for digital human body models has been increasing in recent years. This paper used Free Form Deformation (FFD) to model the motion of the human leg. It presented the motion equations of the knee joint on the basis of its anatomic structure and motion characteristics, then transmitted the deformation to the mesh of the leg through a simplified FFD that used only a second-order B-spline basis function. The experiments prove that this method can simulate the bending of the leg and the deformation of the muscles fairly well. Compared with the method of curved patches, this method is more convenient and effective. Furthermore, those equations can easily be applied to other joint models of the human body.

  18. Modeling human craniofacial disorders in Xenopus.

    Science.gov (United States)

    Dubey, Aditi; Saint-Jeannet, Jean-Pierre

    2017-03-01

    Craniofacial disorders are among the most common human birth defects and present an enormous health care and social burden. The development of animal models has been instrumental to investigate fundamental questions in craniofacial biology and this knowledge is critical to understand the etiology and pathogenesis of these disorders. The vast majority of craniofacial disorders arise from abnormal development of the neural crest, a multipotent and migratory cell population. Therefore, defining the pathogenesis of these conditions starts with a deep understanding of the mechanisms that preside over neural crest formation and its role in craniofacial development. This review discusses several studies using Xenopus embryos to model human craniofacial conditions, and emphasizes the strength of this system to inform important biological processes as they relate to human craniofacial development and disease.

  19. Advanced Avionics and Processor Systems for a Flexible Space Exploration Architecture

    Science.gov (United States)

    Keys, Andrew S.; Adams, James H.; Smith, Leigh M.; Johnson, Michael A.; Cressler, John D.

    2010-01-01

    The Advanced Avionics and Processor Systems (AAPS) project, formerly known as the Radiation Hardened Electronics for Space Environments (RHESE) project, endeavors to develop advanced avionic and processor technologies anticipated to be used by NASA's currently evolving space exploration architectures. The AAPS project is a part of the Exploration Technology Development Program, which funds an entire suite of technologies that are aimed at enabling NASA's ability to explore beyond low earth orbit. NASA's Marshall Space Flight Center (MSFC) manages the AAPS project. AAPS uses a broad-scoped approach to developing avionic and processor systems. Investment areas include advanced electronic designs and technologies capable of providing environmental hardness, reconfigurable computing techniques, software tools for radiation effects assessment, and radiation environment modeling tools. Near-term emphasis within the multiple AAPS tasks focuses on developing prototype components using semiconductor processes and materials (such as Silicon-Germanium (SiGe)) to enhance a device's tolerance to radiation events and low temperature environments. As the SiGe technology will culminate in a delivered prototype this fiscal year, the project emphasis shifts its focus to developing low-power, high efficiency total processor hardening techniques. In addition to processor development, the project endeavors to demonstrate techniques applicable to reconfigurable computing and partially reconfigurable Field Programmable Gate Arrays (FPGAs). This capability enables avionic architectures the ability to develop FPGA-based, radiation tolerant processor boards that can serve in multiple physical locations throughout the spacecraft and perform multiple functions during the course of the mission. The individual tasks that comprise AAPS are diverse, yet united in the common endeavor to develop electronics capable of operating within the harsh environment of space. Specifically, the AAPS tasks for ...

  20. An Outline Course on Human Performance Modeling

    Science.gov (United States)

    2006-01-01

    Garbled OCR fragment of a seminar outline; legible portions mention modeling complementary or competing tasks (Dario Salvucci), Bonnie John, David Kieras, ecological interface design, and a mailing-list sign-up note for further seminars.

  1. Modeling human muscle disease in zebrafish

    OpenAIRE

    Guyon, Jeffrey R.; Steffen, Leta S; Howell, Melanie H.; Pusack, Timothy J; Lawrence, Chris; Kunkel, Louis M

    2007-01-01

    Correspondence: Louis M. Kunkel, Program in Genomics and Howard Hughes Medical Institute, Children's Hospital Boston, Enders Bldg, Rm 570, 300 Longwood Ave, Boston, MA 02115, United States. Tel.: +1 617 355 7576. Co-author affiliation: Jeffrey R. Guyon, Program in Genomics, Children's Hospital Boston, Boston, MA 02115, United States.

  2. Modeling and Simulating Virtual Anatomical Humans

    NARCIS (Netherlands)

    Madehkhaksar, Forough; Luo, Zhiping; Pronost, Nicolas; Egges, Arjan

    2014-01-01

    This chapter presents human musculoskeletal modeling and simulation as a challenging field that lies between biomechanics and computer animation. One of the main goals of computer animation research is to develop algorithms and systems that produce plausible motion. On the other hand, the main chall

  3. Future of human models for crash analysis

    NARCIS (Netherlands)

    Wismans, J.S.H.M.; Happee, R.; Hoof, J.F.A.M. van; Lange, R. de

    2001-01-01

    In the crash safety field mathematical models can be applied in practically all areas of research and development, including: reconstruction of actual accidents, design (CAD) of the crash response of vehicles, safety devices and roadside facilities, and in support of human impact biomechanical studies.

  4. Future of human models for crash analysis

    NARCIS (Netherlands)

    Wismans, J.S.H.M.; Happee, R.; Hoof, J.F.A.M. van; Lange, R. de

    2001-01-01

    In the crash safety field mathematical models can be applied in practically all areas of research and development, including: reconstruction of actual accidents, design (CAD) of the crash response of vehicles, safety devices and roadside facilities, and in support of human impact biomechanical studies.

  5. Scheduling Algorithm: Tasks Scheduling Algorithm for Multiple Processors with Dynamic Reassignment

    Directory of Open Access Journals (Sweden)

    Pradeep Kumar Yadav

    2008-01-01

    Full Text Available Distributed computing systems [DCSs] offer the potential for improved performance and resource sharing. To make the best use of the computational power available, it is essential to assign tasks dynamically to the processor whose characteristics are most appropriate for their execution in a distributed processing system. We have developed a mathematical model for allocating M tasks of a distributed program to N processors (M > N) that minimizes the total cost of the program. Relocating tasks from one processor to another at certain points during the course of execution, which contributes to the total cost of the running program, has been taken into account. Phase-wise execution cost [EC], inter-task communication cost [ITCT], residence cost [RC] of each task on different processors, and relocation cost [REC] for each task have been considered while preparing the dynamic task allocation model. The present model is suitable for an arbitrary number of phases and processors with a random program structure.
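
    The cost structure spelled out above (per-phase execution cost, inter-task communication cost, residence cost, and relocation cost) can be made concrete. The function below is our illustration of how the four components combine into the total cost being minimized; it merely evaluates a given assignment and is not the authors' optimization procedure:

        def total_cost(assign, EC, ITCT, RC, REC):
            """assign[phase][task] -> processor; returns the summed program cost.
            EC[phase][task][proc]: execution cost, RC[task][proc]: residence cost,
            REC[task]: relocation cost, ITCT[phase][(t1, t2)]: communication cost."""
            cost = 0
            for phase, alloc in enumerate(assign):
                for task, proc in alloc.items():
                    cost += EC[phase][task][proc] + RC[task][proc]
                    if phase > 0 and assign[phase - 1][task] != proc:
                        cost += REC[task]              # task moved between phases
                for (t1, t2), c in ITCT[phase].items():
                    if alloc[t1] != alloc[t2]:
                        cost += c                      # tasks on different processors
            return cost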

  6. Finite difference programs and array processors. [High-speed floating point processing by coupling host computer to programable array processor

    Energy Technology Data Exchange (ETDEWEB)

    Rudy, T.E.

    1977-08-01

    An alternative to maxi computers for high-speed floating-point processing capabilities is the coupling of a host computer to a programmable array processor. This paper compares the performance of two finite difference programs on various computers and their expected performance on commercially available array processors. The significance of balancing array processor computation, host-array processor control traffic, and data transfer operations is emphasized. 3 figures, 1 table.

  7. Cache Energy Optimization Techniques For Modern Processors

    Energy Technology Data Exchange (ETDEWEB)

    Mittal, Sparsh [ORNL

    2013-01-01

    Modern multicore processors are employing large last-level caches, for example Intel's E7-8800 processor uses 24MB L3 cache. Further, with each CMOS technology generation, leakage energy has been dramatically increasing and hence, leakage energy is expected to become a major source of energy dissipation, especially in last-level caches (LLCs). The conventional schemes of cache energy saving either aim at saving dynamic energy or are based on properties specific to first-level caches, and thus these schemes have limited utility for last-level caches. Further, several other techniques require offline profiling or per-application tuning and hence are not suitable for product systems. In this book, we present novel cache leakage energy saving schemes for single-core and multicore systems; desktop, QoS, real-time and server systems. Also, we present cache energy saving techniques for caches designed with both conventional SRAM devices and emerging non-volatile devices such as STT-RAM (spin-torque transfer RAM). We present software-controlled, hardware-assisted techniques which use dynamic cache reconfiguration to configure the cache to the most energy efficient configuration while keeping the performance loss bounded. To profile and test a large number of potential configurations, we utilize low-overhead, micro-architecture components, which can be easily integrated into modern processor chips. We adopt a system-wide approach to save energy to ensure that cache reconfiguration does not increase energy consumption of other components of the processor. We have compared our techniques with state-of-the-art techniques and have found that our techniques outperform them in terms of energy efficiency and other relevant metrics. The techniques presented in this book have important applications in improving energy-efficiency of higher-end embedded, desktop, QoS, real-time, server processors and multitasking systems. This book is intended to be a valuable guide for both
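
    The selection step described — reconfigure to the most energy-efficient cache configuration while keeping the performance loss bounded — is, at its core, a constrained minimization over profiled candidates. A schematic of that decision logic (the field names and the 5% bound are placeholders of ours, not values from the book):

        def pick_config(configs, baseline_time, max_slowdown=0.05):
            """configs: dicts with 'dyn_energy', 'leak_energy', 'exec_time' estimates
            gathered by profiling hardware; returns the total-energy minimizer
            among configurations within the performance-loss bound."""
            feasible = [c for c in configs
                        if c["exec_time"] <= baseline_time * (1 + max_slowdown)]
            if not feasible:                 # fall back to the fastest configuration
                feasible = [min(configs, key=lambda c: c["exec_time"])]
            return min(feasible, key=lambda c: c["dyn_energy"] + c["leak_energy"])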

  8. Constructing predictive models of human running.

    Science.gov (United States)

    Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre

    2015-02-06

    Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
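
    For readers unfamiliar with the baseline model, the SLIP stance phase is a point mass riding a massless spring leg pinned at the foot. A minimal planar integration sketch (the parameter values are generic illustrations, not the fitted values from the paper):

        import math

        def slip_stance(x, y, vx, vy, foot_x, k=20000.0, L0=1.0, m=80.0, dt=1e-4):
            """Integrate one SLIP stance phase; returns the take-off state.
            Call with a touchdown state whose leg length is just below L0."""
            g = 9.81
            while True:
                dx, dy = x - foot_x, y
                L = math.hypot(dx, dy)
                f = k * max(L0 - L, 0.0)            # spring force along the leg
                ax = f * dx / (L * m)
                ay = f * dy / (L * m) - g
                vx += ax * dt; vy += ay * dt
                x += vx * dt;  y += vy * dt
                if math.hypot(x - foot_x, y) >= L0:  # leg unloaded: take-off
                    return x, y, vx, vy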

  9. Model of human visual-motion sensing

    Science.gov (United States)

    Watson, A. B.; Ahumada, A. J., Jr.

    1985-01-01

    A model of how humans sense the velocity of moving images is proposed. The model exploits constraints provided by human psychophysics, notably that motion-sensing elements appear tuned for two-dimensional spatial frequency, and by the frequency spectrum of a moving image, namely, that its support lies in the plane in which the temporal frequency equals the dot product of the spatial frequency and the image velocity. The first stage of the model is a set of spatial-frequency-tuned, direction-selective linear sensors. The temporal frequency of the response of each sensor is shown to encode the component of the image velocity in the sensor direction. At the second stage, these components are resolved in order to measure the velocity of image motion at each of a number of spatial locations and spatial frequencies. The model has been applied to several illustrative examples, including apparent motion, coherent gratings, and natural image sequences. The model agrees qualitatively with human perception.
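
    The plane constraint quoted above — temporal frequency equals the dot product of spatial frequency and velocity — makes the second stage a small linear solve: two non-parallel sensors give two equations k_i · v = w_i in the two unknown velocity components. A numerical illustration of that resolution step (ours, not the authors' implementation):

        def resolve_velocity(k1, w1, k2, w2):
            """Solve k1 . v = w1 and k2 . v = w2 for the 2-D image velocity v.
            k1, k2: sensor spatial-frequency tuning vectors (cycles/deg), non-parallel;
            w1, w2: temporal frequencies (Hz) measured by the two sensors."""
            det = k1[0] * k2[1] - k1[1] * k2[0]
            vx = (w1 * k2[1] - w2 * k1[1]) / det
            vy = (k1[0] * w2 - k2[0] * w1) / det
            return vx, vy

        # Example: motion v = (2, 1) deg/s seen by sensors tuned to k1 = (1, 0)
        # and k2 = (0, 1) produces w1 = 2 Hz and w2 = 1 Hz, and is recovered exactly.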

  10. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    Science.gov (United States)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.

  11. Multiple core computer processor with globally-accessible local memories

    Energy Technology Data Exchange (ETDEWEB)

    Shalf, John; Donofrio, David; Oliker, Leonid

    2016-09-20

    A multi-core computer processor including a plurality of processor cores interconnected in a Network-on-Chip (NoC) architecture, a plurality of caches, each of the plurality of caches being associated with one and only one of the plurality of processor cores, and a plurality of memories, each of the plurality of memories being associated with a different set of at least one of the plurality of processor cores and each of the plurality of memories being configured to be visible in a global memory address space such that the plurality of memories are visible to two or more of the plurality of processor cores.

  12. Space Station crew safety - Human factors model

    Science.gov (United States)

    Cohen, M. M.; Junge, M. K.

    1984-01-01

    A model of the various human factors issues and interactions that might affect crew safety is developed. The first step addressed systematically the central question: How is this Space Station different from all other spacecraft? A wide range of possible issues was identified and researched. Five major topics of human factors issues that interact with crew safety resulted: Protocols, Critical Habitability, Work Related Issues, Crew Incapacitation and Personal Choice. Second, an interaction model was developed that would show some degree of cause and effect between objective environmental or operational conditions and the creation of potential safety hazards. The intermediary steps between these two extremes of causality were the effects on human performance and the results of degraded performance. The model contains three milestones: stressor, human performance (degraded) and safety hazard threshold. Between these milestones are two countermeasure intervention points. The first opportunity for intervention is the countermeasure against stress. If this countermeasure fails, performance degrades. The second opportunity for intervention is the countermeasure against error. If this second countermeasure fails, the threshold of a potential safety hazard may be crossed.

  13. Design of Processors with Reconfigurable Microarchitecture

    Directory of Open Access Journals (Sweden)

    Andrey Mokhov

    2014-01-01

    Full Text Available Energy becomes a dominating factor for a wide spectrum of computations: from intensive data processing in “big data” companies resulting in large electricity bills, to infrastructure monitoring with wireless sensors relying on energy harvesting. In this context it is essential for a computation system to be adaptable to the power supply and the service demand, which often vary dramatically during runtime. In this paper we present an approach to building processors with reconfigurable microarchitecture capable of changing the way they fetch and execute instructions depending on energy availability and application requirements. We show how to use Conditional Partial Order Graphs to formally specify the microarchitecture of such a processor, explore the design possibilities for its instruction set, and synthesise the instruction decoder using correct-by-construction techniques. The paper is focused on the design methodology, which is evaluated by implementing a power-proportional version of Intel 8051 microprocessor.

  14. Multifunction nonlinear signal processor - Deconvolution and correlation

    Science.gov (United States)

    Javidi, Bahram; Horner, Joseph L.

    1989-08-01

    A multifunctional nonlinear optical signal processor is described that allows different types of operations, such as image deconvolution and nonlinear correlation. In this technique, the joint power spectrum of the input signal is thresholded with varying nonlinearity to produce different specific operations. In image deconvolution, the joint power spectrum is modified and hard-clip thresholded to remove the amplitude distortion effects and to restore the correct phase of the original image. In optical correlation, the Fourier transform interference intensity is thresholded to provide higher correlation peak intensity and a better-defined correlation spot. Various types of correlation signals can be produced simply by varying the severity of the nonlinearity, without the need to synthesize a specific matched filter. An analysis of the nonlinear processor for image deconvolution is presented.

  15. Development of a realistic human airway model.

    Science.gov (United States)

    Lizal, Frantisek; Elcner, Jakub; Hopke, Philip K; Jedelsky, Jan; Jicha, Miroslav

    2012-03-01

    Numerous models of human lungs with various levels of idealization have been reported in the literature; consequently, results acquired using these models are difficult to compare to in vivo measurements. We have developed a set of model components based on realistic geometries, which permits the analysis of the effects of subsequent model simplification. A realistic digital upper airway geometry, except for the lack of an oral cavity, has been created which proved suitable both for computational fluid dynamics (CFD) simulations and for the fabrication of physical models. Subsequently, an oral cavity was added to the tracheobronchial geometry. The airway geometry including the oral cavity was adjusted to enable fabrication of a semi-realistic model. Five physical models were created based on these three digital geometries. Two optically transparent models, one with and one without the oral cavity, were constructed for flow velocity measurements; two realistic segmented models, one with and one without the oral cavity, were constructed for particle deposition measurements; and a semi-realistic model with glass cylindrical airways was developed for optical measurements of flow velocity and in situ particle size measurements. One-dimensional phase Doppler anemometry measurements were made and compared to the CFD calculations for this model, and good agreement was obtained.

  16. Design of a fluid energy single vessel powder processor for pharmaceutical use.

    Science.gov (United States)

    Kay, G R; Staniforth, J N; Tobyn, M J; Horrill, M D; Newnes, L B; MacGregor, S A; Li, M; Atherton, G; Lamming, R C; Hajee, D W

    1999-04-30

    This study introduces a novel motionless single-vessel powder processor designed to carry out all of the unit operations in the preparation of powders for tableting. The processor used controllable fluid dynamics to provide the energy for each unit operation. The vessel design was evaluated using a computational fluid dynamics model, which indicated the flow necessary for the intended processing operations to take place. The processor performance was evaluated experimentally for two unit processes: particle size reduction and dry powder mixing. The processor was found capable of reducing the size of lactose granules from a median particle diameter of 459 microm to a median particle diameter of 182 microm within 5 min under optimal process conditions. It was found that a formulation containing lactose granules (373 microm median particle diameter) and a model drug, sodium chloride (30 microm), could be mixed to an improved degree of homogeneity in comparison with equivalent powders blended using a conventional turbulent tumbling technique. It was concluded that a processor having controllable fluid dynamics offers the potential to perform multi-task processing of powders.

  17. Communication Efficient Multi-processor FFT

    Science.gov (United States)

    Lennart Johnsson, S.; Jacquemin, Michel; Krawitz, Robert L.

    1992-10-01

    Computing the fast Fourier transform on a distributed memory architecture by a direct pipelined radix-2, a bi-section, or a multi-section algorithm all yield the same communications requirement, if communication for all FFT stages can be performed concurrently, the input data is in normal order, and the data allocation is consecutive. With a cyclic data allocation, or bit-reversed input data and a consecutive allocation, multi-sectioning offers a reduced communications requirement by approximately a factor of two. For a consecutive data allocation and normal input order, a decimation-in-time FFT requires that P/N + d - 2 twiddle factors be stored for P elements distributed evenly over N processors, with the axis subject to transformation distributed over 2^d processors. No communication of twiddle factors is required. The same storage requirements hold for a decimation-in-frequency FFT, bit-reversed input order, and consecutive data allocation. The opposite combination of FFT type and data ordering requires a factor of log2 N more storage for N processors. The peak performance for a Connection Machine system CM-200 implementation is 12.9 Gflops/s in 32-bit precision, and 10.7 Gflops/s in 64-bit precision for unordered transforms local to each processor. The corresponding execution rates for ordered transforms are 11.1 Gflops/s and 8.5 Gflops/s, respectively. For distributed one- and two-dimensional transforms the peak performance for unordered transforms exceeds 5 Gflops/s in 32-bit precision and 3 Gflops/s in 64-bit precision. Three-dimensional transforms execute at a slightly lower rate. Distributed ordered transforms execute at a rate of about 1/2 to 2/3 of that of the unordered transforms.
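
    The storage rule in this abstract is easy to check numerically: P elements spread evenly over N processors, with the transformed axis distributed over 2^d processors, leave each processor holding P/N + d - 2 twiddle factors. A worked example with illustrative sizes of our choosing:

        def twiddle_storage(P, N, d):
            """Twiddle factors stored per processor (consecutive allocation,
            normal input order, decimation-in-time FFT)."""
            assert P % N == 0 and 2 ** d <= N
            return P // N + d - 2

        # Example: P = 2**20 points on N = 256 processors with d = 8
        # gives 4096 + 8 - 2 = 4102 twiddle factors per processor.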

  18. Breadboard Signal Processor for Arraying DSN Antennas

    Science.gov (United States)

    Jongeling, Andre; Sigman, Elliott; Chandra, Kumar; Trinh, Joseph; Soriano, Melissa; Navarro, Robert; Rogstad, Stephen; Goodhart, Charles; Proctor, Robert; Jourdan, Michael

    2008-01-01

    A recently developed breadboard version of an advanced signal processor for arraying many antennas in NASA s Deep Space Network (DSN) can accept inputs in a 500-MHz-wide frequency band from six antennas. The next breadboard version is expected to accept inputs from 16 antennas, and a following developed version is expected to be designed according to an architecture that will be scalable to accept inputs from as many as 400 antennas. These and similar signal processors could also be used for combining multiple wide-band signals in non-DSN applications, including very-long-baseline interferometry and telecommunications. This signal processor performs functions of a wide-band FX correlator and a beam-forming signal combiner. [The term "FX" signifies that the digital samples of two given signals are fast Fourier transformed (F), then the fast Fourier transforms of the two signals are multiplied (X) prior to accumulation.] In this processor, the signals from the various antennas are broken up into channels in the frequency domain (see figure). In each frequency channel, the data from each antenna are correlated against the data from each other antenna; this is done for all antenna baselines (that is, for all antenna pairs). The results of the correlations are used to obtain calibration data to align the antenna signals in both phase and delay. Data from the various antenna frequency channels are also combined and calibration corrections are applied. The frequency-domain data thus combined are then synthesized back to the time domain for passing on to a telemetry receiver

  19. A post-processor for Gurmukhi OCR

    Indian Academy of Sciences (India)

    G S Lehal; Chandan Singh

    2002-02-01

    A post-processing system for OCR of Gurmukhi script has been developed. Statistical information of Punjabi language syllable combinations, corpora look-up and certain heuristics based on Punjabi grammar rules have been combined to design the post-processor. An improvement of 3% in recognition rate, from 94.35% to 97.34%, has been reported on clean images using the post-processing techniques.

  20. Modules for Pipelined Mixed Radix FFT Processors

    Directory of Open Access Journals (Sweden)

    Anatolij Sergiyenko

    2016-01-01

    Full Text Available A set of soft IP cores for the Winograd r-point fast Fourier transform (FFT is considered. The cores are designed by the method of spatial SDF mapping into the hardware, which provides the minimized hardware volume at the cost of slowdown of the algorithm by r times. Their clock frequency is equal to the data sampling frequency. The cores are intended for the high-speed pipelined FFT processors, which are implemented in FPGA.

  1. High-pressure coal fuel processor development

    Energy Technology Data Exchange (ETDEWEB)

    Greenhalgh, M.L.

    1992-11-01

    The objective of Subtask 1.1 Engine Feasibility was to conduct research needed to establish the technical feasibility of ignition and stable combustion of directly injected, 3,000 psi, low-Btu gas with glow plug ignition assist at diesel engine compression ratios. This objective was accomplished by designing, fabricating, testing and analyzing the combustion performance of synthesized low-Btu coal gas in a single-cylinder test engine combustion rig located at the Caterpillar Technical Center engine lab in Mossville, Illinois. The objective of Subtask 1.2 Fuel Processor Feasibility was to conduct research needed to establish the technical feasibility of air-blown, fixed-bed, high-pressure coal fuel processing at up to 3,000 psi operating pressure, incorporating in-bed sulfur and particulate capture. This objective was accomplished by designing, fabricating, testing and analyzing the performance of bench-scale processors located at Coal Technology Corporation (subcontractor) facilities in Bristol, Virginia. These two subtasks were carried out at widely separated locations and will be discussed in separate sections of this report. They were, however, independent in that the composition of the synthetic coal gas used to fuel the combustion rig was adjusted to reflect the range of exit gas compositions being produced on the fuel processor rig. Two major conclusions resulted from this task. First, direct injected, ignition assisted Diesel cycle engine combustion systems can be suitably modified to efficiently utilize these low-Btu gas fuels. Second, high pressure gasification of selected run-of-the-mine coals in batch-loaded fuel processors is feasible. These two findings, taken together, significantly reduce the perceived technical risks associated with the further development of the proposed coal gas fueled Diesel cycle power plant concept.

  2. Intelligent trigger processor for the crystal box

    CERN Document Server

    Sanders, G H; Cooper, M D; Hart, G W; Hoffman, C M; Hogan, G E; Hughes, E B; Matis, H S; Rolfe, J; Sandberg, V D; Williams, R A; Wilson, S; Zeman, H

    1981-01-01

    A large solid-angle modular NaI(Tl) detector with 432 phototubes and 88 trigger scintillators is being used to search simultaneously for three lepton-flavor-changing decays of the muon. A beam of up to 10^6 muons stopping per second with a 6% duty factor would yield up to 1000 triggers per second from random triple coincidences. A reduction of the trigger rate to 10 Hz is required from a hardwired primary trigger processor. Further reduction to <1 Hz is achieved by a microprocessor-based secondary trigger processor. The primary trigger hardware imposes voter coincidence logic, stringent timing requirements, and a non-adjacency requirement in the trigger scintillators defined by hardwired circuits. Sophisticated geometric requirements are imposed by PROM-based matrix logic, and energy and vector-momentum cuts are imposed by a hardwired processor using LSI flash ADCs and digital arithmetic logic. The secondary trigger employs four satellite microprocessors to do a sparse data scan, multiplex ...

  3. Software-Reconfigurable Processors for Spacecraft

    Science.gov (United States)

    Farrington, Allen; Gray, Andrew; Bell, Bryan; Stanton, Valerie; Chong, Yong; Peters, Kenneth; Lee, Clement; Srinivasan, Jeffrey

    2005-01-01

    A report presents an overview of an architecture for a software-reconfigurable network data processor for a spacecraft engaged in scientific exploration. When executed on suitable electronic hardware, the software performs the functions of a physical layer (in effect, acts as a software radio in that it performs modulation, demodulation, pulse-shaping, error correction, coding, and decoding), a data-link layer, a network layer, a transport layer, and application-layer processing of scientific data. The software-reconfigurable network processor is undergoing development to enable rapid prototyping and rapid implementation of communication, navigation, and scientific signal-processing functions; to provide a long-lived communication infrastructure; and to provide greatly improved scientific-instrumentation and scientific-data-processing functions by enabling science-driven in-flight reconfiguration of computing resources devoted to these functions. This development is an extension of terrestrial radio and network developments (e.g., in the cellular-telephone industry) implemented in software running on such hardware as field-programmable gate arrays, digital signal processors, traditional digital circuits, and mixed-signal application-specific integrated circuits (ASICs).

  4. Issue Mechanism for Embedded Simultaneous Multithreading Processor

    Science.gov (United States)

    Zang, Chengjie; Imai, Shigeki; Frank, Steven; Kimura, Shinji

    Simultaneous Multithreading (SMT) technology enhances instruction throughput by issuing multiple instructions from multiple threads within one clock cycle. With an in-order pipeline for each thread, SMT processors can sustain issued-instruction counts close to, or even surpassing, those achieved with an out-of-order pipeline. In this work, we show an efficient issue logic for predicated instruction sequences with a parallel flag in each instruction, where predicate-register-based issue control is adopted and consecutive instructions with a parallel flag of '0' are executed in parallel. The flag is pre-defined by a compiler. Instructions from different threads are issued in round-robin order. We also introduce an instruction-queue skip mechanism for a thread whose queue is empty. Using this issue logic, we designed a six-thread, 7-stage, in-order pipeline processor. Based on this processor, we compare the round-robin issue policy (RR(T1-Tn)) with other policies: thread one always has the highest priority (PR(T1)), and thread one or thread n has the highest priority in turn (PR(T1-Tn)). The results show that the RR(T1-Tn) policy outperforms the others and that PR(T1-Tn) is almost identical to RR(T1-Tn) in terms of issued instructions per cycle.
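
    The issue rule just described — gather consecutive instructions whose parallel flag is '0', pick threads round-robin, and skip a thread whose queue is empty — can be captured behaviorally in a few lines. The sketch reflects our reading of the description, not the authors' hardware:

        from collections import deque

        def issue_cycle(queues, start):
            """queues: per-thread deques of (opcode, parallel_flag) pairs.
            Returns (instructions issued this cycle, next round-robin start)."""
            n = len(queues)
            for i in range(n):
                t = (start + i) % n
                if not queues[t]:
                    continue                      # instruction-queue skip mechanism
                group = [queues[t].popleft()]
                # Extend the group while the compiler-set flag marks parallelism.
                while queues[t] and group[-1][1] == 0:
                    group.append(queues[t].popleft())
                return group, (t + 1) % n
            return [], start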

  5. Efficient searching and sorting applications using an associative array processor

    Science.gov (United States)

    Pace, W.; Quinn, M. J.

    1978-01-01

    The purpose of this paper is to describe a method of searching and sorting data by using some of the unique capabilities of an associative array processor. To understand the application, the associative array processor is described in detail. In particular, the content addressable memory and flip network are discussed because these two unique elements give the associative array processor the power to rapidly sort and search. A simple alphanumeric sorting example is explained in hardware and software terms. The hardware used to explain the application is the STARAN (Goodyear Aerospace Corporation) associative array processor. The software used is the APPLE (Array Processor Programming Language) programming language. Some applications of the array processor are discussed. This summary tries to differentiate between the techniques of the sequential machine and the associative array processor.

  6. Testing and operating a multiprocessor chip with processor redundancy

    Energy Technology Data Exchange (ETDEWEB)

    Bellofatto, Ralph E; Douskey, Steven M; Haring, Rudolf A; McManus, Moyra K; Ohmacht, Martin; Schmunkamp, Dietmar; Sugavanam, Krishnan; Weatherford, Bryan J

    2014-10-21

    A system and method for improving the yield rate of a multiprocessor semiconductor chip that includes primary processor cores and one or more redundant processor cores. A first tester conducts a first test on one or more processor cores, and encodes results of the first test in an on-chip non-volatile memory. A second tester conducts a second test on the processor cores, and encodes results of the second test in an external non-volatile storage device. An override bit of a multiplexer is set if a processor core fails the second test. In response to the override bit, the multiplexer selects a physical-to-logical mapping of processor IDs according to one of: the encoded results in the memory device or the encoded results in the external storage device. On-chip logic configures the processor cores according to the selected physical-to-logical mapping.
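
    The configuration step described above amounts to a two-way multiplexer over candidate physical-to-logical core maps, steered by the override bit. A behavioral sketch in Python (all names are ours, not the patent's):

        def select_mapping(onchip_map, external_map, override_bit):
            """The override bit, set when a core fails the second test, redirects
            the choice from the on-chip encoded results to the external store."""
            return external_map if override_bit else onchip_map

        def build_mapping(pass_flags, n_logical):
            """Assign logical core IDs 0..n_logical-1 to passing physical cores,
            letting redundant cores absorb any failures."""
            good = [p for p, ok in enumerate(pass_flags) if ok]
            assert len(good) >= n_logical, "not enough working cores to configure"
            return {logical: good[logical] for logical in range(n_logical)}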

  7. Model of the Human Sleep Wake System

    CERN Document Server

    Rogers, Lisa

    2012-01-01

    A model and analysis of the human sleep/wake system is presented. The model is derived using the known neuronal groups, and their various projections, involved with sleep and wake. Inherent in the derivation is the existence of a slow time scale associated with homeostatic regulation, and a faster time scale associated with the dynamics within the sleep phase. A significant feature of the model is that it does not contain a periodic forcing term, common in other models, reflecting the fact that sleep/wake is not dependent upon a diurnal stimulus. Once derived, the model is analyzed using a linearized stability analysis. We then use experimental data from normal sleep-wake systems and orexin knockout systems to verify the physiological validity of the equations.

  8. Towards a Systematic Exploration of the Optimization Space for Many-Core Processors

    NARCIS (Netherlands)

    Fang, J.

    2014-01-01

    The architecture diversity of many-core processors - with their different types of cores, and memory hierarchies - makes the old model of reprogramming every application for every platform infeasible. Therefore, inter-platform portability has become a desirable feature of programming models. While

  10. Extending and implementing the Self-adaptive Virtual Processor for distributed memory architectures

    NARCIS (Netherlands)

    van Tol, M.W.; Koivisto, J.

    2011-01-01

    Many-core architectures of the future are likely to have distributed memory organizations and need fine grained concurrency management to be used effectively. The Self-adaptive Virtual Processor (SVP) is an abstract concurrent programming model which can provide this, but the model and its current i

  12. Computer Modeling of Human Delta Opioid Receptor

    Directory of Open Access Journals (Sweden)

    Tatyana Dzimbova

    2013-04-01

    The development of selective agonists of the δ-opioid receptor, as well as models of ligand interaction with this receptor, is a subject of increasing interest. In the absence of crystal structures of opioid receptors, 3D homology models built on different templates have been reported in the literature. The problem is that these models are not available for widespread use. The aims of our study are: (1) to choose, among recently published crystallographic structures, templates for homology modeling of the human δ-opioid receptor (DOR); (2) to evaluate the models with different computational tools; and (3) to identify the most reliable model based on the correlation between docking data and in vitro bioassay results. The enkephalin analogues used as ligands in this study were previously synthesized by our group and their biological activity was evaluated. Several models of DOR were generated using different templates. All of these models were evaluated with PROCHECK and MolProbity, and the relationship between docking data and in vitro results was determined. For the tested models of DOR, the best correlations were found between the efficacy (erel) of the compounds, calculated from in vitro experiments, and the Fitness scoring function from the docking studies. A new model of DOR was generated and evaluated by different approaches. This model has a good GA341 value (0.99) from MODELLER, and good values from PROCHECK (92.6% of residues in most favored regions) and MolProbity (99.5% in favored regions). The scoring function correlates (Pearson r = -0.7368, p-value = 0.0097) with erel of a series of enkephalin analogues calculated from in vitro experiments. This investigation thus allows us to suggest a reliable model of DOR. The newly generated model of DOR can be used for further in silico experiments, enabling faster and more accurate design of selective and effective ligands for the δ-opioid receptor.
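
    The model-selection criterion used here, correlating a docking score with measured efficacy, can be reproduced in a few lines; the numbers below are invented placeholders, not the study's data (the record reports r = -0.7368).

        # Correlating docking Fitness scores with in vitro relative efficacy
        # (e_rel); values are placeholders for illustration only.
        from scipy.stats import pearsonr

        fitness = [52.1, 48.3, 60.7, 44.9, 57.2]   # docking Fitness scores
        e_rel   = [0.35, 0.50, 0.12, 0.61, 0.20]   # measured relative efficacy
        r, p = pearsonr(fitness, e_rel)
        print(f"Pearson r = {r:.3f}, p = {p:.4f}")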

  13. Human physiologically based pharmacokinetic model for propofol

    Directory of Open Access Journals (Sweden)

    Schnider Thomas W

    2005-04-01

    Background: Propofol is widely used for both short-term anesthesia and long-term sedation. It has unusual pharmacokinetics because of its high lipid solubility. The standard approach to describing the pharmacokinetics is by a multi-compartmental model. This paper presents the first detailed human physiologically based pharmacokinetic (PBPK) model for propofol. Methods: PKQuest, a freely distributed software routine (http://www.pkquest.com), was used for all the calculations. The "standard human" PBPK parameters developed in previous applications are used. It is assumed that blood and tissue binding is determined by simple partition into the tissue lipid, which is characterized by two previously determined sets of parameters: (1) the value of the propofol oil/water partition coefficient; (2) the lipid fraction in the blood and tissues. The model was fit to the individual experimental data of Schnider et al. (Anesthesiology 1998; 88:1170), in which an initial bolus dose was followed 60 minutes later by a one-hour constant infusion. Results: The PBPK model provides a good description of the experimental data over a large range of input dosage, subject age and fat fraction. Only one adjustable parameter (the liver clearance) is required to describe the constant infusion phase for each individual subject. In order to fit the bolus injection phase, for 10 of the 24 subjects it was necessary to assume that a fraction of the bolus dose was sequestered and then slowly released from the lungs (characterized by two additional parameters). The average weighted residual error (WRE) of the PBPK model fit to both the bolus and infusion phases was 15%, similar to the WRE for just the constant infusion phase obtained by Schnider et al. using a 6-parameter NONMEM compartmental model. Conclusion: A PBPK model using standard human parameters and a simple description of tissue binding provides a good description of human propofol kinetics. The major advantage of a
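
    The lipid-partition assumption in the Methods can be sketched as follows; the functional form and the numbers are illustrative assumptions, not PKQuest's actual implementation.

        # Equilibrium concentration relative to water: aqueous fraction plus
        # lipid fraction scaled by the oil/water partition coefficient.
        def partition_vs_water(f_lipid, p_oil_water):
            f_water = 1.0 - f_lipid
            return f_water + f_lipid * p_oil_water

        p_ow = 4300.0                                 # illustrative propofol value
        kp_tissue = partition_vs_water(0.10, p_ow)    # tissue with 10% lipid
        kp_blood  = partition_vs_water(0.0023, p_ow)  # blood lipid fraction
        print(kp_tissue / kp_blood)                   # tissue/blood partition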

  14. Merged ozone profiles from four MIPAS processors

    Science.gov (United States)

    Laeng, Alexandra; von Clarmann, Thomas; Stiller, Gabriele; Dinelli, Bianca Maria; Dudhia, Anu; Raspollini, Piera; Glatthor, Norbert; Grabowski, Udo; Sofieva, Viktoria; Froidevaux, Lucien; Walker, Kaley A.; Zehner, Claus

    2017-04-01

    The Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) was an infrared (IR) limb emission spectrometer on the Envisat platform. Currently, there are four MIPAS ozone data products, including the operational Level-2 ozone product processed at ESA, with the scientific prototype processor operated at IFAC Florence, and three independent research products developed by the Istituto di Fisica Applicata Nello Carrara (ISAC-CNR)/University of Bologna, Oxford University, and the Karlsruhe Institute of Technology-Institute of Meteorology and Climate Research/Instituto de Astrofísica de Andalucía (KIT-IMK/IAA). Here we present a dataset of ozone vertical profiles obtained by merging ozone retrievals from the four independent Level-2 MIPAS processors, and we discuss the advantages and the shortcomings of this merged product. As the four processors retrieve ozone in different parts of the spectrum (microwindows), the source measurements can be considered nearly independent with respect to measurement noise. Hence, the information content of the merged product is greater, and its precision better, than those of any parent (source) dataset. The merging is performed on a profile-by-profile basis. Parent ozone profiles are weighted based on the corresponding error covariance matrices; the error correlations between different profile levels are taken into account. The intercorrelations between the processors' errors are evaluated statistically and are used in the merging. The height range of the merged product is 20-55 km, and error covariance matrices are provided as diagnostics. Validation of the merged dataset is performed by comparison with ozone profiles from ACE-FTS (Atmospheric Chemistry Experiment-Fourier Transform Spectrometer) and MLS (Microwave Limb Sounder). Even though the merging is not supposed to remove the biases of the parent datasets, around the ozone volume mixing ratio peak the merged product is found to have a smaller (up to 0.1 ppmv
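
    The profile-by-profile combination described above is, at its core, a precision-weighted average; the toy version below omits the inter-processor error correlations that the real merging accounts for.

        # Inverse-covariance (precision-weighted) merge of parent profiles.
        import numpy as np

        def merge_profiles(profiles, covariances):
            """profiles: list of (n,) arrays; covariances: list of (n, n) arrays."""
            precisions = [np.linalg.inv(c) for c in covariances]
            cov_merged = np.linalg.inv(sum(precisions))
            x_merged = cov_merged @ sum(p @ x for p, x in zip(precisions, profiles))
            return x_merged, cov_merged   # merged profile and its covariance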

  15. Comparative Analysis of LEON 3 and NIOS II Processor Development Environment: Case Study Rijndael’s Encryption Algorithm

    Directory of Open Access Journals (Sweden)

    Meghana Hasamnis

    2012-06-01

    Embedded system design is becoming more complex day by day, combined with ever-shorter time-to-market deadlines. Due to the constraints and complexity in the design of embedded systems, a hardware/software co-design methodology is adopted. An embedded system is a combination of hardware and software parts integrated together on a common platform. A soft-core processor, which is a hardware description language (HDL) model of a specific processor (CPU), can be customized for any application and synthesized for an FPGA target. This paper gives a comparative analysis of the development environments for embedded systems using the LEON3 and NIOS II processors, both soft-core processors. LEON3 is an open-source processor and NIOS II is a commercial processor. The case study under consideration is Rijndael's encryption algorithm (AES), a standard algorithm used to encrypt bulk data for security. Using the co-design methodology, the algorithm is implemented on two different platforms, one using the open-source and the other the commercial processor, and the comparative results of the two platforms are stated in terms of performance parameters. The algorithm is partitioned into hardware and software parts and integrated on a common platform.

  16. Human Factors Engineering Program Review Model

    Science.gov (United States)

    2004-02-01

    NUREG-0711, Rev. 2, Human Factors Engineering Program Review Model. U.S. Nuclear Regulatory Commission, Office of... As of November 1999, you may electronically access NUREG-series publications and other NRC records at NRC's Public Electronic Reading Room at http://www.nrc.gov/reading-rm.html. Publicly released records include, to name a few, NUREG-series publications; Federal Register notices; applicant

  17. Human Plague Risk: Spatial-Temporal Models

    Science.gov (United States)

    Pinzon, Jorge E.

    2010-01-01

    This chapter reviews the use of spatial-temporal models in identifying potential risks of plague outbreaks into the human population. Using earth observations by satellite remote sensing, there has been a systematic analysis and mapping of the close coupling between the vectors of the disease and climate variability. The overall result is that the incidence of plague is correlated with the positive phase of the El Niño/Southern Oscillation (ENSO).

  18. Power estimation on functional level for programmable processors

    Directory of Open Access Journals (Sweden)

    M. Schneider

    2004-01-01

    In this contribution, different approaches to power estimation for programmable processors are presented and evaluated with respect to their applicability to modern digital signal processor architectures such as Very Long Instruction Word (VLIW) architectures. Special emphasis is placed on the concept of so-called Functional-Level Power Analysis (FLPA). This approach is based on partitioning the processor architecture into functional blocks such as the processing unit, clock network, internal memory and others. The power consumption of these blocks is described by parameter-dependent arithmetic model functions. The input parameters, such as the achieved degree of parallelism or the type of memory access, are obtained by an automated, parser-based analysis of the assembler code of the system to be estimated. The approach is evaluated on two modern digital signal processors using a large set of basic algorithms of digital signal processing, and the resulting estimates for the individual algorithms are compared with physically measured values. A very small maximum estimation error of 3% is obtained.
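
    The FLPA idea reduces to summing per-block model functions evaluated with parameters extracted from the assembler code; the block functions and coefficients below are invented for illustration, not taken from the paper.

        # Illustrative FLPA-style estimate: per-block arithmetic model
        # functions of code-derived parameters (alpha: achieved degree of
        # parallelism, beta: memory-access mix), summed to total power.
        def power_flpa(alpha, beta, f_clk_ghz):
            p_processing = (2.1 * alpha + 0.4) * f_clk_ghz   # processing unit
            p_clock      = 0.9 * f_clk_ghz                   # clock network
            p_memory     = (1.3 * beta + 0.2) * f_clk_ghz    # internal memory
            return p_processing + p_clock + p_memory         # W, invented units

        print(power_flpa(alpha=0.75, beta=0.40, f_clk_ghz=0.6))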

  19. MODELING HUMAN RELIABILITY ANALYSIS USING MIDAS

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; Donald D. Dudenhoeffer; Bruce P. Hallbert; Brian F. Gore

    2006-05-01

    This paper summarizes an emerging collaboration between Idaho National Laboratory and NASA Ames Research Center regarding the utilization of high-fidelity MIDAS simulations for modeling control room crew performance at nuclear power plants. The key envisioned uses for MIDAS-based control room simulations are: (i) the estimation of human error with novel control room equipment and configurations, (ii) the investigative determination of risk significance in recreating past event scenarios involving control room operating crews, and (iii) the certification of novel staffing levels in control rooms. It is proposed that MIDAS serves as a key component for the effective modeling of risk in next generation control rooms.

  20. Behavior genetic modeling of human fertility

    DEFF Research Database (Denmark)

    Rodgers, J L; Kohler, H P; Kyvik, K O

    2001-01-01

    Behavior genetic designs and analysis can be used to address issues of central importance to demography. We use this methodology to document genetic influence on human fertility. Our data come from Danish twin pairs born from 1953 to 1959, measured on age at first attempt to get pregnant (FirstTry) and number of children (NumCh). Behavior genetic models were fitted using structural equation modeling and DF analysis. A consistent medium-level additive genetic influence was found for NumCh, equal across genders; a stronger genetic influence was identified for FirstTry, greater for females than for males.

  1. Genetically engineered mouse models and human osteosarcoma

    Directory of Open Access Journals (Sweden)

    Ng Alvin JM

    2012-10-01

    Osteosarcoma is the most common form of bone cancer. Pivotal insight into the genes involved in human osteosarcoma has been provided by the study of rare familial cancer predisposition syndromes. Three kindreds stand out as predisposing to the development of osteosarcoma: Li-Fraumeni syndrome, familial retinoblastoma and the RecQ helicase disorders, which include Rothmund-Thomson syndrome in particular. These disorders have highlighted the important roles of P53 and RB, respectively, in the development of osteosarcoma. The association of OS with RECQL4 mutations is apparent, but its relevance to OS is uncertain, as mutations in RECQL4 are not found in sporadic OS. Application of the knowledge of mutations of P53 and RB in familial and sporadic OS has enabled the development of tractable, highly penetrant murine models of OS. These models share many of the cardinal features associated with human osteosarcoma including, importantly, a high incidence of spontaneous metastasis. The recent development of these models has been a significant advance for efforts to improve our understanding of the genetics of human OS and, more critically, to provide a high-throughput, genetically modifiable platform for preclinical evaluation of new therapeutics.

  2. Modeling Oxygen Transport in the Human Placenta

    Science.gov (United States)

    Serov, Alexander; Filoche, Marcel; Salafia, Carolyn; Grebenkov, Denis

    Efficient functioning of the human placenta is crucial for a favorable pregnancy outcome. We construct a 3D model of oxygen transport in the placenta based on its histological cross-sections. The model accounts for both diffusion and convection of oxygen in the intervillous space and allows one to estimate the oxygen uptake of a placentone. We demonstrate the existence of an optimal villi density maximizing the uptake and explain it as a trade-off between the incoming oxygen flow and the absorbing villous surface. Calculations performed for arbitrary shapes of fetal villi show that only two geometrical characteristics - villi density and the effective villi radius - are required to predict fetal oxygen uptake. Two combinations of physiological parameters that determine oxygen uptake are also identified: the maximal oxygen inflow of a placentone and the Damköhler number. An automatic image analysis method is developed and applied to 22 healthy placental cross-sections, demonstrating that the villi density of a healthy human placenta lies within 10% of the optimal value, while the overall geometric efficiency is rather low (around 30-40%). In perspective, the model can form the basis of a reliable tool for post-partum assessment of oxygen exchange efficiency in the human placenta.
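
    The reported optimum can be pictured with a one-line trade-off toy (not the record's 3D transport model): absorbing surface grows with villi density while incoming flow falls as villi obstruct the intervillous space.

        # Toy trade-off: uptake ~ inflow(d) * surface(d) for villi density d.
        import numpy as np

        d = np.linspace(0.01, 0.99, 99)   # villi density (volume fraction)
        uptake = (1.0 - d) * d            # obstructed inflow times surface
        print(d[np.argmax(uptake)])       # optimum at 0.5 in this toy model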

  3. Biomedical Simulation Models of Human Auditory Processes

    Science.gov (United States)

    Bicak, Mehmet M. A.

    2012-01-01

    Detailed acoustic engineering models explore the noise propagation mechanisms associated with noise attenuation and the transmission paths created when using hearing protectors, such as earplugs and headsets, in high-noise environments. Biomedical finite element (FE) models are developed from volume Computed Tomography scan data, which provide explicit external ear, ear canal, middle ear ossicular bone and cochlea geometry. Results from these studies have enabled a greater understanding of hearing-protector-to-flesh dynamics, as well as the prioritization of noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for the exploration of new design principles and methods in both earplug and earcup applications. These models are currently being used in the development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast-related impulse noise on human auditory mechanisms and brain tissue.

  4. Modelling the evolution of human trail systems

    Science.gov (United States)

    Helbing, Dirk; Keltsch, Joachim; Molnár, Péter

    1997-07-01

    Many human social phenomena, such as cooperation, the growth of settlements, traffic dynamics and pedestrian movement, appear to be accessible to mathematical descriptions that invoke self-organization. Here we develop a model of pedestrian motion to explore the evolution of trails in urban green spaces such as parks. Our aim is to address such questions as what the topological structures of these trail systems are, and whether optimal path systems can be predicted for urban planning. We use an `active walker' model that takes into account pedestrian motion and orientation and the concomitant feedbacks with the surrounding environment. Such models have previously been applied to the study of complex structure formation in physical, chemical and biological systems. We find that our model is able to reproduce many of the observed large-scale spatial features of trail systems.
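
    A drastically simplified active-walker sketch of the feedback loop described above (the parameters and grid scheme are assumptions, not the published model): footsteps reinforce a ground "comfort" field, the field decays, and walkers prefer well-worn ground en route to their destinations.

        # Minimal active-walker loop on a grid; parameters are invented.
        import numpy as np

        G = np.zeros((50, 50))                      # ground comfort field

        def step(pos, dest, G, decay=0.01):
            G *= (1.0 - decay)                      # unused trails fade
            G[pos] += 1.0                           # footsteps reinforce the trail
            moves = [(int(np.sign(dest[0] - pos[0])), 0),
                     (0, int(np.sign(dest[1] - pos[1])))]
            # move one cell toward the destination, preferring worn ground
            best = max(moves, key=lambda m: G[pos[0] + m[0], pos[1] + m[1]])
            return (pos[0] + best[0], pos[1] + best[1])

        pos, dest = (5, 5), (40, 30)
        for _ in range(60):                         # walker approaches dest,
            pos = step(pos, dest, G)                # leaving a trail in G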

  5. Optimization of experimental human leukemia models (review

    Directory of Open Access Journals (Sweden)

    D. D. Pankov

    2012-01-01

    The problem of assessing the prospects of immunotherapy, including antigen-specific cell therapy, using animal models is covered in this review. We describe the various groups of currently existing animal models and the methods of creating them, from different immunodeficient mice to several variants of tumor-cell engraftment in them. The review addresses the possibility of studying tumor stem cells using mouse models for leukemia treatment with adoptive cell therapy, including WT1. Issues of human leukemia cell migration and proliferation in mice with different degrees of immunodeficiency are also discussed. To assess potential immunotherapy efficacy, a comparison of the immunodeficient mouse model with the clinical situation in oncology patients after chemotherapy is proposed.

  7. Modeling human influenza infection in the laboratory

    Directory of Open Access Journals (Sweden)

    Radigan KA

    2015-08-01

    Influenza is the leading cause of death from an infectious cause. Because of its clinical importance, many investigators use animal models to understand the biologic mechanisms of influenza A virus replication, the immune response to the virus, and the efficacy of novel therapies. This review will focus on the biosafety, biosecurity, and ethical concerns that must be considered in pursuing influenza research, in addition to focusing on the two animal models – mice and ferrets – most frequently used by researchers as models of human influenza infection. Keywords: mice, ferret, influenza, animal model, biosafety

  8. Towards the formal specification of the requirements and design of a processor interface unit

    Science.gov (United States)

    Fura, David A.; Windley, Phillip J.; Cohen, Gerald C.

    1993-01-01

    Work to formally specify the requirements and design of a Processor Interface Unit (PIU), a single-chip subsystem providing memory interface, bus interface, and additional support services for a commercial microprocessor within a fault-tolerant computer system, is described. This system, the Fault-Tolerant Embedded Processor (FTEP), is targeted towards applications in avionics and space requiring extremely high levels of mission reliability, extended maintenance-free operation, or both. The approaches that were developed for modeling the PIU requirements and for composition of the PIU subcomponents at high levels of abstraction are described. These approaches were used to specify and verify a nontrivial subset of the PIU behavior. The PIU specification in Higher Order Logic (HOL) is documented in a companion NASA contractor report entitled 'Towards the Formal Specification of the Requirements and Design of a Processor Interface Unit - HOL Listings.' The subsequent verification approach and HOL listings are documented in the NASA contractor reports entitled 'Towards the Formal Verification of the Requirements and Design of a Processor Interface Unit' and 'Towards the Formal Verification of the Requirements and Design of a Processor Interface Unit - HOL Listings.'

  9. PERFORMANCE EVALUATION OF OR1200 PROCESSOR WITH EVOLUTIONARY PARALLEL HPRC USING GEP

    Directory of Open Access Journals (Sweden)

    R. Maheswari

    2012-04-01

    In this fast computing era, most embedded systems require more computing power to complete complex functions and tasks in less time. One way to achieve this is by boosting processor performance, allowing the processor core to run faster. This paper presents a novel technique for increasing performance through parallel HPRC (High Performance Reconfigurable Computing) in the CPU/DSP (Digital Signal Processor) unit of the OR1200 (Open RISC 1200) processor, using Gene Expression Programming (GEP), an evolutionary programming model. OR1200 is a soft-core RISC processor of the Intellectual Property cores that can efficiently run any modern operating system. In the manufacturing process of OR1200, a parallel HPRC is placed internally in the integer execution pipeline unit of the CPU/DSP core to increase performance. The GEP parallel HPRC is activated and deactivated by triggering the signals (i) HPRC_Gene_Start and (ii) HPRC_Gene_End. In the first part of the work, Verilog HDL (Hardware Description Language) functional code for the Gene Expression Programming parallel HPRC is developed and synthesised using XILINX ISE; in the second part, the CoreMark processor benchmark is used to test the performance of the OR1200 soft core. The results show that GEP-based parallel HPRC in the execution unit of OR1200 increases the overall speed-up to 20.59%.

  10. Cooperative Computing Techniques for a Deeply Fused and Heterogeneous Many-Core Processor Architecture

    Institute of Scientific and Technical Information of China (English)

    郑方; 李宏亮; 吕晖; 过锋; 许晓红; 谢向辉

    2015-01-01

    Due to advances in semiconductor techniques, many-core processors have been widely used in high performance computing. However, many applications still cannot be carried out efficiently due to the memory wall, which has become a bottleneck in many-core processors. In this paper, we present a novel heterogeneous many-core processor architecture named deeply fused many-core (DFMC) for high performance computing systems. DFMC integrates management processing ele-ments (MPEs) and computing processing elements (CPEs), which are heterogeneous processor cores for different application features with a unified ISA (instruction set architecture), a unified execution model, and share-memory that supports cache coherence. The DFMC processor can alleviate the memory wall problem by combining a series of cooperative computing techniques of CPEs, such as multi-pattern data stream transfer, efficient register-level communication mechanism, and fast hardware synchronization technique. These techniques are able to improve on-chip data reuse and optimize memory access performance. This paper illustrates an implementation of a full system prototype based on FPGA with four MPEs and 256 CPEs. Our experimental results show that the effect of the cooperative computing techniques of CPEs is significant, with DGEMM (double-precision matrix multiplication) achieving an efficiency of 94%, FFT (fast Fourier transform) obtaining a performance of 207 GFLOPS and FDTD (finite-difference time-domain) obtaining a performance of 27 GFLOPS.

  11. Fast space-filling molecular graphics using dynamic partitioning among parallel processors.

    Science.gov (United States)

    Gertner, B J; Whitnell, R M; Wilson, K R

    1991-09-01

    We present a novel algorithm for the efficient generation of high-quality space-filling molecular graphics that is particularly appropriate for the creation of the large number of images needed in the animation of molecular dynamics. Each atom of the molecule is represented by a sphere of an appropriate radius, and the image of the sphere is constructed pixel-by-pixel using a generalization of the lighting model proposed by Porter (Comp. Graphics 1978, 12, 282). The edges of the spheres are antialiased, and intersections between spheres are handled through a simple blending algorithm that provides very smooth edges. We have implemented this algorithm on a multiprocessor computer using a procedure that dynamically repartitions the effort among the processors based on the CPU time used by each processor to create the previous image. This dynamic reallocation among processors automatically maximizes efficiency in the face of both the changing nature of the image from frame to frame and the shifting demands of the other programs running simultaneously on the same processors. We present data showing the efficiency of this multiprocessing algorithm as the number of processors is increased. The combination of the graphics and multiprocessor algorithms allows the fast generation of many high-quality images.
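
    The dynamic repartitioning can be sketched as rebalancing each processor's share of the image in proportion to its measured speed on the previous frame; the scheme below is an assumed simplification of the paper's procedure.

        # Rebalance work fractions from last frame's per-processor CPU times.
        def rebalance(shares, cpu_times):
            speeds = [s / t for s, t in zip(shares, cpu_times)]  # work per second
            total = sum(speeds)
            return [sp / total for sp in speeds]                 # new fractions

        # A processor that took longer gets a smaller slice of the next frame.
        print(rebalance([0.25, 0.25, 0.25, 0.25], [1.2, 0.8, 1.0, 2.0]))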

  12. Modeling learned categorical perception in human vision.

    Science.gov (United States)

    Casey, Matthew C; Sowden, Paul T

    2012-09-01

    A long standing debate in cognitive neuroscience has been the extent to which perceptual processing is influenced by prior knowledge and experience with a task. A converging body of evidence now supports the view that a task does influence perceptual processing, leaving us with the challenge of understanding the locus of, and mechanisms underpinning, these influences. An exemplar of this influence is learned categorical perception (CP), in which there is superior perceptual discrimination of stimuli that are placed in different categories. Psychophysical experiments on humans have attempted to determine whether early cortical stages of visual analysis change as a result of learning a categorization task. However, while some results indicate that changes in visual analysis occur, the extent to which earlier stages of processing are changed is still unclear. To explore this issue, we develop a biologically motivated neural model of hierarchical vision processes consisting of a number of interconnected modules representing key stages of visual analysis, with each module learning to exhibit desired local properties through competition. With this system level model, we evaluate whether a CP effect can be generated with task influence to only the later stages of visual analysis. Our model demonstrates that task learning in just the later stages is sufficient for the model to exhibit the CP effect, demonstrating the existence of a mechanism that requires only a high-level of task influence. However, the effect generalizes more widely than is found with human participants, suggesting that changes to earlier stages of analysis may also be involved in the human CP effect, even if these are not fundamental to the development of CP. The model prompts a hybrid account of task-based influences on perception that involves both modifications to the use of the outputs from early perceptual analysis along with the possibility of changes to the nature of that early analysis itself

  13. Human embryonic stem cell lines model experimental human cytomegalovirus latency.

    Science.gov (United States)

    Penkert, Rhiannon R; Kalejta, Robert F

    2013-05-28

    Herpesviruses are highly successful pathogens that persist for the lifetime of their hosts primarily because of their ability to establish and maintain latent infections from which the virus is capable of productively reactivating. Human cytomegalovirus (HCMV), a betaherpesvirus, establishes latency in CD34(+) hematopoietic progenitor cells during natural infections in the body. Experimental infection of CD34(+) cells ex vivo has demonstrated that expression of the viral gene products that drive productive infection is silenced by an intrinsic immune defense mediated by Daxx and histone deacetylases through heterochromatinization of the viral genome during the establishment of latency. Additional mechanistic details about the establishment, let alone maintenance and reactivation, of HCMV latency remain scarce. This is partly due to the technical challenges of CD34(+) cell culture, most notably, the difficulty in preventing spontaneous differentiation that drives reactivation and renders them permissive for productive infection. Here we demonstrate that HCMV can establish, maintain, and reactivate in vitro from experimental latency in cultures of human embryonic stem cells (ESCs), for which spurious differentiation can be prevented or controlled. Furthermore, we show that known molecular aspects of HCMV latency are faithfully recapitulated in these cells. In total, we present ESCs as a novel, tractable model for studies of HCMV latency.

  14. Computational Models to Synthesize Human Walking

    Institute of Scientific and Technical Information of China (English)

    Lei Ren; David Howard; Laurence Kenney

    2006-01-01

    The synthesis of human walking is of great interest in biomechanics and biomimetic engineering due to its predictive capabilities and potential applications in clinical biomechanics, rehabilitation engineering and biomimetic robotics. In this paper, the various methods that have been used to synthesize human walking are reviewed from an engineering viewpoint. This involves a wide spectrum of approaches, from simple passive walking theories to large-scale computational models integrating the nervous, muscular and skeletal systems. These methods are roughly categorized under four headings: models inspired by the concept of a CPG (Central Pattern Generator), methods based on the principles of control engineering, predictive gait simulation using optimisation, and models inspired by passive walking theory. The shortcomings and advantages of these methods are examined, and future directions are discussed in the context of providing insights into the neural control objectives driving gait and improving the stability of the predicted gaits. Future advancements are likely to be motivated by improved understanding of neural control strategies and the subtle complexities of the musculoskeletal system during human locomotion. It is only a matter of time before predictive gait models become a practical and valuable tool in clinical diagnosis, rehabilitation engineering and robotics.

  15. A dynamic model of human physiology

    Science.gov (United States)

    Green, Melissa; Kaplan, Carolyn; Oran, Elaine; Boris, Jay

    2010-11-01

    To study the systems-level transport in the human body, we develop the Computational Man (CMAN): a set of one-dimensional unsteady elastic flow simulations created to model a variety of coupled physiological systems including the circulatory, respiratory, excretory, and lymphatic systems. The model systems are collapsed from three spatial dimensions and time to one spatial dimension and time by assuming axisymmetric vessel geometry and a parabolic velocity profile across the cylindrical vessels. To model the actions of a beating heart or expanding lungs, the flow is driven by user-defined changes to the equilibrium areas of the elastic vessels. The equations are then iteratively solved for pressure, area, and average velocity. The model is augmented with valves and contractions to resemble the biological structure of the different systems. CMAN will be used to track material transport throughout the human body for diagnostic and predictive purposes. Parameters will be adjustable to match those of individual patients. Validation of CMAN has used both higher-dimensional simulations of similar geometries and benchmark measurements from the medical literature.

  16. Modeling and simulation of the human eye

    Science.gov (United States)

    Duran, R.; Ventura, L.; Nonato, L.; Bruno, O.

    2007-02-01

    The computational modeling of the human eye has been widely studied by different sectors of the scientific and technological community. One of the main reasons for this increasing interest is the possibility of reproducing the optical properties of the eye by means of computational simulations, making possible the development of efficient devices to treat and correct vision problems. This work explores a still little-investigated aspect of the modeling of the visual system, presenting a computational scheme that makes possible the use of real data in the modeling and simulation of the human visual system. This new approach makes possible the individual investigation of the optical system, assisting in the construction of new techniques used to infer vital data in medical investigations. Using corneal topography to collect real data from patients, a computational model of the cornea is constructed, and a set of simulations was built to ensure the correctness of the system and to investigate the effect of corneal abnormalities on retinal image formation, such as Placido discs, the Point Spread Function, the wavefront, and the projection of a real image and its visualization on the retina.

  17. A high-speed digital signal processor for atmospheric radar, part 7.3A

    Science.gov (United States)

    Brosnahan, J. W.; Woodard, D. M.

    1984-01-01

    The Model SP-320 device is a monolithic realization of a complex general purpose signal processor, incorporating such features as a 32-bit ALU, a 16-bit x 16-bit combinatorial multiplier, and a 16-bit barrel shifter. The SP-320 is designed to operate as a slave processor to a host general purpose computer in applications such as coherent integration of a radar return signal in multiple ranges, or dedicated FFT processing. Presently available is an I/O module conforming to the Intel Multichannel interface standard; other I/O modules will be designed to meet specific user requirements. The main processor board includes input and output FIFO (First In First Out) memories, both with depths of 4096 W, to permit asynchronous operation between the source of data and the host computer. This design permits burst data rates in excess of 5 MW/s.

  18. Zebrafish Models for Human Acute Organophosphorus Poisoning.

    Science.gov (United States)

    Faria, Melissa; Garcia-Reyero, Natàlia; Padrós, Francesc; Babin, Patrick J; Sebastián, David; Cachot, Jérôme; Prats, Eva; Arick Ii, Mark; Rial, Eduardo; Knoll-Gellida, Anja; Mathieu, Guilaine; Le Bihanic, Florane; Escalon, B Lynn; Zorzano, Antonio; Soares, Amadeu M V M; Raldúa, Demetrio

    2015-10-22

    Terrorist use of organophosphorus-based nerve agents and toxic industrial chemicals against civilian populations constitutes a real threat, as demonstrated by the terrorist attacks in Japan in the 1990s or, even more recently, in the Syrian civil war. Thus, the development of more effective countermeasures against acute organophosphorus poisoning is urgently needed. Here, we have generated and validated zebrafish models for mild, moderate and severe acute organophosphorus poisoning by exposing zebrafish larvae to different concentrations of the prototypic organophosphorus compound chlorpyrifos-oxon. Our results show that the zebrafish models mimic most of the pathophysiological mechanisms behind this toxidrome in humans, including acetylcholinesterase inhibition, N-methyl-D-aspartate receptor activation, and calcium dysregulation, as well as inflammatory and immune responses. The suitability of zebrafish larvae for in vivo high-throughput screening of small-molecule libraries makes these models a valuable tool for identifying new drugs for multifunctional drug therapy against acute organophosphorus poisoning.

  19. HLA-Modeler: Automated Homology Modeling of Human Leukocyte Antigens

    Directory of Open Access Journals (Sweden)

    Shinji Amari

    2013-01-01

    The three-dimensional (3D) structures of human leukocyte antigen (HLA) molecules are indispensable for studies of their functions at the molecular level. We have developed a homology modeling system named HLA-modeler specialized for HLA molecules. A segment matching algorithm is employed for modeling, and the optimization of the model is carried out using the PFROSST force field with an implicit solvent model. In order to efficiently construct the homology models, HLA-modeler uses a local database of the 3D structures of HLA molecules. The structure of the antigenic peptide-binding site is important for the function, and its 3D structure is highly conserved between various alleles. HLA-modeler optimizes the use of this structural motif. Leave-one-out cross-validation using the crystal structures of class I and class II HLA molecules has demonstrated that the rmsds of non-hydrogen atoms of the binding sites between homology models and crystal structures are less than 1.0 Å in most cases. The results indicate that the 3D structures of the antigenic peptide-binding sites can be reproduced by HLA-modeler at a level nearly corresponding to the crystal structures.
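
    The validation metric quoted above, the RMSD over matched non-hydrogen atoms, is straightforward to compute once the structures are superposed; the coordinates below are placeholders.

        # RMSD between matched coordinate sets (assumes prior superposition).
        import numpy as np

        def rmsd(a, b):
            a, b = np.asarray(a, float), np.asarray(b, float)
            return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

        model   = [[0.0, 0.0, 0.0], [1.5, 0.2, 0.0]]   # placeholder coordinates
        crystal = [[0.1, 0.0, 0.0], [1.4, 0.0, 0.1]]
        print(rmsd(model, crystal))                    # in angstroms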

  1. Model human heart or brain signals

    CERN Document Server

    Tuncay, Caglar

    2008-01-01

    A new model is suggested and used to mimic various spatial or temporal designs in biological or non-biological formations, where the focus is on the normal or irregular electrical signals coming from the human heart (ECG) or brain (EEG). The electrical activities in several muscles (EMG), neurons, and other organs of humans or various animals, such as the lobster pyloric neuron, guinea pig inferior olivary neuron, sepia giant axon and mouse neocortical pyramidal neuron, and some spatial formations are also considered (in the Appendix). In the biological applications, several elements (cells or tissues) in an organ are taken as entries in a representative lattice (mesh), where the entries are connected to each other in terms of molecular diffusion or electrical potential differences. The biological elements evolve in time (with the given tissue or organ) in terms of the mentioned connections (interactions), besides some individual feedings. The anatomical diversity of the species (or organs) is handled in terms o...

  2. The quantitative modelling of human spatial habitability

    Science.gov (United States)

    Wise, J. A.

    1985-01-01

    A model for the quantitative assessment of human spatial habitability is presented in the space station context. The visual aspect assesses how interior spaces appear to the inhabitants. This aspect concerns criteria such as sensed spaciousness and the affective (emotional) connotations of settings' appearances. The kinesthetic aspect evaluates the available space in terms of its suitability to accommodate human movement patterns, as well as the postural and anthropometric changes due to microgravity. Finally, social logic concerns how the volume and geometry of available space either affirms or contravenes established social and organizational expectations for spatial arrangements. Here, the criteria include privacy, status, social power, and proxemics (the uses of space as a medium of social communication).

  3. Optical linear algebra processors - Architectures and algorithms

    Science.gov (United States)

    Casasent, David

    1986-01-01

    Attention is given to the component design and optical configuration features of a generic optical linear algebra processor (OLAP) architecture, as well as the large number of OLAP architectures, number representations, algorithms and applications encountered in current literature. Number-representation issues associated with bipolar and complex-valued data representations, high-accuracy (including floating point) performance, and the base or radix to be employed, are discussed, together with case studies on a space-integrating frequency-multiplexed architecture and a hybrid space-integrating and time-integrating multichannel architecture.

  4. Message-Driven Processor Architecture Version 11

    Science.gov (United States)

    1988-08-18

    ...fields instead of 2. This reflects the change in machine topology from 2D to 3D. Also, the NNR is no longer set to zero on a reset; it is left to... an X field, a Y field and a Z field indicating the position of the node in the 3D network grid. Its value identifies the processor on the network and

  6. The quantitative modelling of human spatial habitability

    Science.gov (United States)

    Wise, James A.

    1988-01-01

    A theoretical model for evaluating human spatial habitability (HuSH) in the proposed U.S. Space Station is developed. Optimizing the fitness of the space station environment for human occupancy will help reduce environmental stress due to long-term isolation and confinement in its small habitable volume. The development of tools that operationalize the behavioral bases of spatial volume for visual, kinesthetic, and social logic considerations is suggested. This report further calls for systematic scientific investigations of how much real and how much perceived volume people need in order to function normally and with minimal stress in space-based settings. The theoretical model presented in this report can be applied to any size or shape of interior, at any scale of consideration, from the Space Station as a whole to an individual enclosure or work station. Using as a point of departure the Isovist model developed by Dr. Michael Benedikt of the University of Texas, the report suggests that spatial habitability can become as amenable to careful assessment as engineering and life support concerns.

  7. MODELING HUMAN COMPREHENSION OF DATA VISUALIZATIONS.

    Energy Technology Data Exchange (ETDEWEB)

    Matzen, Laura E.; Haass, Michael Joseph; Divis, Kristin Marie; Wilson, Andrew T.

    2017-09-01

    This project was inspired by two needs. The first is a need for tools to help scientists and engineers to design effective data visualizations for communicating information, whether to the user of a system, an analyst who must make decisions based on complex data, or in the context of a technical report or publication. Most scientists and engineers are not trained in visualization design, and they could benefit from simple metrics to assess how well their visualization's design conveys the intended message. In other words, will the most important information draw the viewer's attention? The second is the need for cognition-based metrics for evaluating new types of visualizations created by researchers in the information visualization and visual analytics communities. Evaluating visualizations is difficult even for experts. However, all visualization methods and techniques are intended to exploit the properties of the human visual system to convey information efficiently to a viewer. Thus, developing evaluation methods that are rooted in the scientific knowledge of the human visual system could be a useful approach. In this project, we conducted fundamental research on how humans make sense of abstract data visualizations, and how this process is influenced by their goals and prior experience. We then used that research to develop a new model, the Data Visualization Saliency Model, that can make accurate predictions about which features in an abstract visualization will draw a viewer's attention. The model is an evaluation tool that can address both of the needs described above, supporting both visualization research and Sandia mission needs.

  8. The ISS Water Processor Catalytic Reactor as a Post Processor for Advanced Water Reclamation Systems

    Science.gov (United States)

    Nalette, Tim; Snowdon, Doug; Pickering, Karen D.; Callahan, Michael

    2007-01-01

    Advanced water processors being developed for NASA's Exploration Initiative rely on phase-change technologies and/or biological processes as the primary means of water reclamation. As a result of the phase change, volatile compounds will also be transported into the distillate product stream. The catalytic reactor assembly used in the International Space Station (ISS) water processor assembly, referred to as the Volatile Removal Assembly (VRA), has demonstrated high-efficiency oxidation of many of these volatile contaminants, such as low-molecular-weight alcohols and acetic acid, and is considered a viable post-treatment system for all advanced water processors. To support this investigation, two ersatz solutions were defined for further evaluation of the VRA. The first solution was developed as part of an internal research and development project at Hamilton Sundstrand (HS) and is based primarily on ISS experience related to the development of the VRA. The second ersatz solution was defined by NASA in support of a study contract to Hamilton Sundstrand to evaluate the VRA as a potential post-processor for the Cascade Distillation system being developed by Honeywell. This second ersatz solution contains several low-molecular-weight alcohols, organic acids, and several inorganic species. A range of residence times, oxygen concentrations and operating temperatures has been studied with both ersatz solutions to provide additional data on the performance capability of the VRA catalyst.

  9. Retargetable Code Generation based on Structural Processor Descriptions

    OpenAIRE

    Leupers, Rainer; Marwedel, Peter

    1998-01-01

    Design automation for embedded systems comprising both hardware and software components demands for code generators integrated into electronic CAD systems. These code generators provide the necessary link between software synthesis tools in HW/SW codesign systems and embedded processors. General-purpose compilers for standard processors are often insufficient, because they do not provide flexibility with respect to different target processors and also suffer from inferior code quality....

  10. User microprogrammable processors for high data rate telemetry preprocessing

    Science.gov (United States)

    Pugsley, J. H.; Ogrady, E. P.

    1973-01-01

    The use of microprogrammable processors for the preprocessing of high data rate satellite telemetry is investigated. The following topics are discussed along with supporting studies: (1) evaluation of commercial microprogrammable minicomputers for telemetry preprocessing tasks; (2) microinstruction sets for telemetry preprocessing; and (3) the use of multiple minicomputers to achieve high data processing. The simulation of small microprogrammed processors is discussed along with examples of microprogrammed processors.

  11. Mouse Model of Human Hereditary Pancreatitis

    Science.gov (United States)

    2016-09-01

    ...models that recapitulate the human disease. Therefore, we introduced mutations in the endogenous mouse T7 cationic trypsinogen gene and obtained several... ACCOMPLISHMENTS: What were the major goals of the project? Our original proposal had three specific aims. Aim 1: Identify and biochemically characterize... pancreatitis in mutant mice which do not develop spontaneous disease (strains T7-D23del-Cre, T7-D23del-Neo, T7-K24R-Cre and T7-K24R-Neo) will be

  12. Liver immune-pathogenesis and therapy of human liver tropic virus infection in humanized mouse models

    OpenAIRE

    Bility, Moses T.; Li, Feng; Cheng, Liang; Su, Lishan

    2013-01-01

    Hepatitis B virus (HBV) and hepatitis C virus (HCV) infect and replicate primarily in human hepatocytes. Few reliable and easily accessible animal models are available for studying the immune system's contribution to liver disease progression during hepatitis virus infection. Humanized mouse models reconstituted with human hematopoietic stem cells (HSCs) have been developed to study human immunology, human immunodeficiency virus 1 infection, and immunopathogenesis. However, a humanized mous...

  13. MPC Related Computational Capabilities of ARMv7A Processors

    DEFF Research Database (Denmark)

    Frison, Gianluca; Jørgensen, John Bagterp

    2015-01-01

    In recent years, the mass market of mobile devices has pushed the demand for increasingly fast but cheap processors. ARM, the world leader in this sector, has developed the Cortex-A series of processors with focus on computationally intensive applications. If properly programmed, these processors...... are powerful enough to solve the complex optimization problems arising in MPC in real-time, while keeping the traditional low-cost and low-power consumption. This makes these processors ideal candidates for use in embedded MPC. In this paper, we investigate the floating-point capabilities of Cortex A7, A9...

  14. A modular approach to numerical human body modeling

    NARCIS (Netherlands)

    Forbes, P.A.; Griotto, G.; Rooij, L. van

    2007-01-01

    The choice of a human body model for a simulated automotive impact scenario must take into account both accurate model response and computational efficiency as key factors. This study presents a "modular numerical human body modeling" approach which allows the creation of a customized human body mod

  16. Generative models: Human embryonic stem cells and multiple modeling relations.

    Science.gov (United States)

    Fagan, Melinda Bonnie

    2016-04-01

    Model organisms are at once scientific models and concrete living things. It is widely assumed by philosophers of science that (1) model organisms function much like other kinds of models, and (2) that insofar as their scientific role is distinctive, it is in virtue of representing a wide range of biological species and providing a basis for generalizations about those targets. This paper uses the case of human embryonic stem cells (hESC) to challenge both assumptions. I first argue that hESC can be considered model organisms, analogous to classic examples such as Escherichia coli and Drosophila melanogaster. I then discuss four contrasts between the epistemic role of hESC in practice, and the assumptions about model organisms noted above. These contrasts motivate an alternative view of model organisms as a network of systems related constructively and developmentally to one another. I conclude by relating this result to other accounts of model organisms in recent philosophy of science. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Human factors engineering program review model

    Energy Technology Data Exchange (ETDEWEB)

    1994-07-01

    The staff of the Nuclear Regulatory Commission is performing nuclear power plant design certification reviews based on a design process plan that describes the human factors engineering (HFE) program elements that are necessary and sufficient to develop an acceptable detailed design specification and an acceptable implemented design. There are two principal reasons for this approach. First, the initial design certification applications submitted for staff review did not include detailed design information. Second, since human performance literature and industry experiences have shown that many significant human factors issues arise early in the design process, review of the design process activities and results is important to the evaluation of an overall design. However, current regulations and guidance documents do not address the criteria for design process review. Therefore, the HFE Program Review Model (HFE PRM) was developed as a basis for performing design certification reviews that include design process evaluations as well as review of the final design. A central tenet of the HFE PRM is that the HFE aspects of the plant should be developed, designed, and evaluated on the basis of a structured top-down system analysis using accepted HFE principles. The HFE PRM consists of ten component elements. Each element is divided into four sections: Background, Objective, Applicant Submittals, and Review Criteria. This report describes the development of the HFE PRM and gives a detailed description of each HFE review element.

  18. Conditional Lineage Ablation to Model Human Diseases

    Science.gov (United States)

    Lee, Paul; Morley, Gregory; Huang, Qian; Fischer, Avi; Seiler, Stephanie; Horner, James W.; Factor, Stephen; Vaidya, Dhananjay; Jalife, Jose; Fishman, Glenn I.

    1998-09-01

    Cell loss contributes to the pathogenesis of many inherited and acquired human diseases. We have developed a system to conditionally ablate cells of any lineage and developmental stage in the mouse by regulated expression of the diphtheria toxin A (DTA) gene by using tetracycline-responsive promoters. As an example of this approach, we targeted expression of DTA to the hearts of adult mice to model structural abnormalities commonly observed in human cardiomyopathies. Induction of DTA expression resulted in cell loss, fibrosis, and chamber dilatation. As in many human cardiomyopathies, transgenic mice developed spontaneous arrhythmias in vivo, and programmed electrical stimulation of isolated-perfused transgenic hearts demonstrated a strikingly high incidence of spontaneous and inducible ventricular tachycardia. Affected mice showed marked perturbations of cardiac gap junction channel expression and localization, including a subset with disorganized epicardial activation patterns as revealed by optical action potential mapping. These studies provide important insights into mechanisms of arrhythmogenesis and suggest that conditional lineage ablation may have wide applicability for studies of disease pathogenesis.

  19. A human neurodevelopmental model for Williams syndrome.

    Science.gov (United States)

    Chailangkarn, Thanathom; Trujillo, Cleber A; Freitas, Beatriz C; Hrvoj-Mihic, Branka; Herai, Roberto H; Yu, Diana X; Brown, Timothy T; Marchetto, Maria C; Bardy, Cedric; McHenry, Lauren; Stefanacci, Lisa; Järvinen, Anna; Searcy, Yvonne M; DeWitt, Michelle; Wong, Wenny; Lai, Philip; Ard, M Colin; Hanson, Kari L; Romero, Sarah; Jacobs, Bob; Dale, Anders M; Dai, Li; Korenberg, Julie R; Gage, Fred H; Bellugi, Ursula; Halgren, Eric; Semendeferi, Katerina; Muotri, Alysson R

    2016-08-18

    Williams syndrome is a genetic neurodevelopmental disorder characterized by an uncommon hypersociability and a mosaic of retained and compromised linguistic and cognitive abilities. Nearly all clinically diagnosed individuals with Williams syndrome lack precisely the same set of genes, with breakpoints in chromosome band 7q11.23 (refs 1-5). The contribution of specific genes to the neuroanatomical and functional alterations, leading to behavioural pathologies in humans, remains largely unexplored. Here we investigate neural progenitor cells and cortical neurons derived from Williams syndrome and typically developing induced pluripotent stem cells. Neural progenitor cells in Williams syndrome have an increased doubling time and apoptosis compared with typically developing neural progenitor cells. Using an individual with atypical Williams syndrome, we narrowed this cellular phenotype to a single gene candidate, frizzled 9 (FZD9). At the neuronal stage, layer V/VI cortical neurons derived from Williams syndrome were characterized by longer total dendrites, increased numbers of spines and synapses, aberrant calcium oscillation and altered network connectivity. Morphometric alterations observed in neurons from Williams syndrome were validated after Golgi staining of post-mortem layer V/VI cortical neurons. This model of human induced pluripotent stem cells fills the current knowledge gap in the cellular biology of Williams syndrome and could lead to further insights into the molecular mechanism underlying the disorder and the human social brain.

  20. Dynamically Reconfigurable Processor for Floating Point Arithmetic

    Directory of Open Access Journals (Sweden)

    S. Anbumani,

    2014-01-01

    Full Text Available Recently, the development of embedded processors has been toward miniaturization and energy saving for ecology. On the other hand, high performance arithmetic circuits are required in many applications in science and technology. Dynamically reconfigurable processors have been developed to meet these requests. They can change circuit configuration according to instructions in the program instantly during operation. This paper describes a proposed dynamically reconfigurable circuit for floating-point arithmetic. The arithmetic circuit consists of two single-precision floating-point arithmetic circuits. It performs double-precision floating-point arithmetic by reconfiguration. Dynamic reconfiguration changes the circuit construction within one clock cycle during operation without stopping the circuit. It enables reconfiguration of circuits in a few nanoseconds. The proposed circuit is reconfigured in two modes. In the first mode it performs one double-precision floating-point operation; otherwise the circuit performs two parallel single-precision floating-point operations. The new system design reduces implementation area by reconfiguring common parts of each operation. It also increases processing speed with very few clock cycles.

  1. Speed Scaling on Parallel Processors with Migration

    CERN Document Server

    Angel, Eric; Kacem, Fadi; Letsios, Dimitrios

    2011-01-01

    We study the problem of scheduling a set of jobs with release dates, deadlines and processing requirements (or works), on parallel speed-scaled processors so as to minimize the total energy consumption. We consider that both preemption and migration of jobs are allowed. An exact polynomial-time algorithm has been proposed for this problem, which is based on the Ellipsoid algorithm. Here, we formulate the problem as a convex program and we propose a simpler polynomial-time combinatorial algorithm which is based on a reduction to the maximum flow problem. Our algorithm runs in $O(n f(n) \log P)$ time, where $n$ is the number of jobs, $P$ is the range of all possible values of processors' speeds divided by the desired accuracy and $f(n)$ is the complexity of computing a maximum flow in a layered graph with $O(n)$ vertices. Independently, Albers et al. \cite{AAG11} proposed an $O(n^2 f(n))$-time algorithm exploiting the same relation with the maximum flow problem. We extend our algorithm to the multiprocessor speed scal...
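
    For illustration, the core feasibility question behind such combinatorial algorithms can be phrased as a maximum-flow test: at a candidate common speed s, each job's work must be routable through the time intervals in which the job is alive. The Python sketch below is a standard textbook formulation of that test; the uniform-speed assumption and the graph layout are illustrative, not necessarily the paper's exact layered graph:

        import networkx as nx

        def feasible(jobs, m, s):
            """jobs: list of (release, deadline, work) triples.
            True if a preemptive, migratory schedule on m processors
            running at uniform speed s can finish every job."""
            times = sorted({t for r, d, _ in jobs for t in (r, d)})
            intervals = list(zip(times, times[1:]))
            G = nx.DiGraph()
            for j, (r, d, w) in enumerate(jobs):
                G.add_edge("src", ("job", j), capacity=w)
                for a, b in intervals:
                    if r <= a and b <= d:
                        # one job occupies at most one processor at a time
                        G.add_edge(("job", j), ("iv", a), capacity=s * (b - a))
            for a, b in intervals:
                # all m processors together during this interval
                G.add_edge(("iv", a), "sink", capacity=m * s * (b - a))
            flow, _ = nx.maximum_flow(G, "src", "sink")
            return flow >= sum(w for _, _, w in jobs) - 1e-9

        print(feasible([(0, 4, 6), (1, 3, 3)], m=2, s=2.0))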

  2. Coordinated Energy Management in Heterogeneous Processors

    Directory of Open Access Journals (Sweden)

    Indrani Paul

    2014-01-01

    Full Text Available This paper examines energy management in a heterogeneous processor consisting of an integrated CPU–GPU for high-performance computing (HPC) applications. Energy management for HPC applications is challenged by their uncompromising performance requirements and complicated by the need for coordinating energy management across distinct core types – a new and less understood problem. We examine the intra-node CPU–GPU frequency sensitivity of HPC applications on tightly coupled CPU–GPU architectures as the first step in understanding power and performance optimization for a heterogeneous multi-node HPC system. The insights from this analysis form the basis of a coordinated energy management scheme, called DynaCo, for integrated CPU–GPU architectures. We implement DynaCo on a modern heterogeneous processor and compare its performance to a state-of-the-art power- and performance-management algorithm. DynaCo improves measured average energy-delay squared (ED2) product by up to 30% with less than 2% average performance loss across several exascale and other HPC workloads.
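
    As a side note on the figure of merit: the energy-delay-squared (ED2) product weights delay quadratically against energy, so a small slowdown is acceptable only if it buys a disproportionate energy saving. A toy Python illustration with made-up numbers:

        def ed2(energy_joules, runtime_seconds):
            # lower is better; delay is squared, so performance dominates
            return energy_joules * runtime_seconds ** 2

        baseline = ed2(100.0, 10.0)   # hypothetical uncoordinated run
        managed  = ed2(75.0, 10.2)    # hypothetical coordinated run
        print("ED2 change: %.1f%%" % (100.0 * (managed / baseline - 1.0)))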

  3. Broadband monitoring simulation with massively parallel processors

    Science.gov (United States)

    Trubetskov, Mikhail; Amotchkina, Tatiana; Tikhonravov, Alexander

    2011-09-01

    Modern efficient optimization techniques, namely needle optimization and gradual evolution, enable one to design optical coatings of any type. Even more, these techniques allow obtaining multiple solutions with close spectral characteristics. It is important, therefore, to develop software tools that allow one to choose a practically optimal solution from a wide variety of possible theoretical designs. A practically optimal solution provides the highest production yield when the optical coating is manufactured. Computational manufacturing is a low-cost tool for choosing a practically optimal solution. The theory of probability predicts that reliable production yield estimations require many hundreds or even thousands of computational manufacturing experiments. As a result, reliable estimation of the production yield may require too much computational time. The most time-consuming operation is the calculation of the discrepancy function used by a broadband monitoring algorithm. This function is formed as a sum of terms over a wavelength grid. These terms can be computed simultaneously in different computation threads, which opens great opportunities for parallelization. Multi-core and multi-processor systems can provide accelerations of up to several times. Additional potential for further acceleration of computations is connected with using Graphics Processing Units (GPU). A modern GPU consists of hundreds of massively parallel processors and is capable of performing floating-point operations efficiently.
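
    The parallelization opportunity is easy to see in code: the discrepancy function is a sum of independent per-wavelength terms, so it can be evaluated across the wavelength grid in vectorized form (or one GPU thread per wavelength). A minimal Python sketch with placeholder spectra; the actual thin-film computation of the modeled spectrum is not shown:

        import numpy as np

        def discrepancy(measured, modeled):
            # one squared residual per wavelength grid point, then a sum;
            # every term is independent, hence trivially parallel
            return np.sum((measured - modeled) ** 2)

        wavelengths = np.linspace(400.0, 800.0, 1024)   # nm grid
        measured = np.random.rand(wavelengths.size)     # placeholder spectrum
        modeled = np.random.rand(wavelengths.size)      # placeholder spectrum
        print(discrepancy(measured, modeled))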

  4. The ATLAS Level-1 Central Trigger Processor

    CERN Document Server

    Pauly, T; Ellis, Nick; Farthouat, P; Gällnö, P; Haller, J; Krasznahorkay, A; Maeno, T; Pessoa-Lima, H; Resurreccion-Arcas, I; Schuler, G; De Seixas, J M; Spiwoks, R; Torga-Teixeira, R; Wengler, T; 14th IEEE-NPSS Real Time Conference 2005

    2005-01-01

    ATLAS is a multi-purpose particle physics detector at CERN’s Large Hadron Collider where two pulsed beams of protons are brought to collision at very high energy. There are collisions every 25 ns, corresponding to a rate of 40 MHz. A three-level trigger system reduces this rate to about 200 Hz while keeping bunch crossings which potentially contain interesting processes. The Level-1 trigger, implemented in electronics and firmware, makes an initial selection in under 2.5 µs with an output rate of less than 100 kHz. A key element of this is the Central Trigger Processor (CTP) which combines trigger information from the calorimeter and muon trigger processors to make the final Level-1 accept decision in under 100 ns on the basis of lists of selection criteria, implemented as a trigger menu. Timing and trigger signals are fanned out to all sub-detectors, while busy signals from all sub-detector read-out systems are collected and fed into the CTP in order to throttle the generation of Level-1 triggers.

  5. Integrated Human Futures Modeling in Egypt

    Energy Technology Data Exchange (ETDEWEB)

    Passell, Howard D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Aamir, Munaf Syed [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bernard, Michael Lewis [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Beyeler, Walter E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Fellner, Karen Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hayden, Nancy Kay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jeffers, Robert Fredric [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Keller, Elizabeth James Kistin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Malczynski, Leonard A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mitchell, Michael David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silver, Emily [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tidwell, Vincent C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Villa, Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vugrin, Eric D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Engelke, Peter [Atlantic Council, Washington, D.C. (United States); Burrow, Mat [Atlantic Council, Washington, D.C. (United States); Keith, Bruce [United States Military Academy, West Point, NY (United States)

    2016-01-01

    The Integrated Human Futures Project provides a set of analytical and quantitative modeling and simulation tools that help explore the links among human social, economic, and ecological conditions, human resilience, conflict, and peace, and allows users to simulate tradeoffs and consequences associated with different future development and mitigation scenarios. In the current study, we integrate five distinct modeling platforms to simulate the potential risk of social unrest in Egypt resulting from the Grand Ethiopian Renaissance Dam (GERD) on the Blue Nile in Ethiopia. The five platforms simulate hydrology, agriculture, economy, human ecology, and human psychology/behavior, and show how impacts derived from development initiatives in one sector (e.g., hydrology) might ripple through to affect other sectors and how development and security concerns may be triggered across the region. This approach evaluates potential consequences, intended and unintended, associated with strategic policy actions that span the development-security nexus at the national, regional, and international levels. Model results are not intended to provide explicit predictions, but rather to provide system-level insight for policy makers into the dynamics among these interacting sectors, and to demonstrate an approach to evaluating short- and long-term policy trade-offs across different policy domains and stakeholders. The GERD project is critical to government-planned development efforts in Ethiopia but is expected to reduce downstream freshwater availability in the Nile Basin, fueling fears of negative social and economic impacts that could threaten stability and security in Egypt. We tested these hypotheses and came to the following preliminary conclusions. First, the GERD will have an important short-term impact on water availability, food production, and hydropower production in Egypt, depending on the short-term reservoir fill rate. Second, the GERD will have a very small impact on

  6. Algorithms, hardware, and software for a digital signal processor microcomputer-based speech processor in a multielectrode cochlear implant system.

    Science.gov (United States)

    Morris, L R; Barszczewski, P

    1989-06-01

    Software and hardware have been developed to create a powerful, inexpensive, compact digital signal processing system which in real-time extracts a low-bit rate linear predictive coding (LPC) speech system model. The model parameters derived include accurate spectral envelope, formant, pitch, and amplitude information. The system is based on the Texas Instruments TMS320 family, and the most compact realization requires only three chips (TMS320E17, A/D-D/A, op-amp), consuming a total of less than 0.5 W. The processor is part of a programmable cochlear implant system under development by a multiuniversity Canadian team, but also has other applications in aids for the hearing handicapped.
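
    A standard route to such LPC model parameters is the Levinson-Durbin recursion over a frame's autocorrelation; the spectral envelope and formants then follow from the resulting all-pole filter. The sketch below is the generic floating-point textbook algorithm in Python, not the fixed-point TMS320 implementation:

        import numpy as np

        def lpc(frame, order):
            """Return prediction-error filter coefficients a[0..order]
            (a[0] == 1) and the residual energy for one speech frame."""
            r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
            a = np.zeros(order + 1)
            a[0], err = 1.0, r[0]
            for i in range(1, order + 1):
                acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
                k = -acc / err                     # reflection coefficient
                a[1:i] = a[1:i] + k * a[i - 1:0:-1]
                a[i] = k
                err *= 1.0 - k * k                 # shrinking residual energy
            return a, err

        frame = np.sin(0.3 * np.arange(160)) * np.hamming(160)
        print(lpc(frame, order=10)[0])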

  7. SVM Model for Identification of human GPCRs

    CERN Document Server

    Shrivastava, Sonal; Malik, M M

    2010-01-01

    G-protein coupled receptors (GPCRs) constitute a broad class of cell-surface receptors in eukaryotes and they possess seven transmembrane α-helical domains. GPCRs are usually classified into several functionally distinct families that play a key role in cellular signalling and regulation of basic physiological processes. We can develop statistical models based on these common features that can be used to classify proteins, to predict new members, and to study the sequence-function relationship of this protein function group. In this study, an SVM-based classification model has been developed for the identification of human GPCR sequences. Sequences of Level 1 subfamilies of Class A rhodopsin are considered as a case study. In the present study, an attempt has been made to classify GPCRs on the basis of species. The present study classifies human GPCR sequences against those of the rest of the species available in GPCRDB. Classification is based on specific information derived from the N-terminal and extracellular loops of the sequ...

  8. Simple models of human brain functional networks.

    Science.gov (United States)

    Vértes, Petra E; Alexander-Bloch, Aaron F; Gogtay, Nitin; Giedd, Jay N; Rapoport, Judith L; Bullmore, Edward T

    2012-04-10

    Human brain functional networks are embedded in anatomical space and have topological properties--small-worldness, modularity, fat-tailed degree distributions--that are comparable to many other complex networks. Although a sophisticated set of measures is available to describe the topology of brain networks, the selection pressures that drive their formation remain largely unknown. Here we consider generative models for the probability of a functional connection (an edge) between two cortical regions (nodes) separated by some Euclidean distance in anatomical space. In particular, we propose a model in which the embedded topology of brain networks emerges from two competing factors: a distance penalty based on the cost of maintaining long-range connections; and a topological term that favors links between regions sharing similar input. We show that, together, these two biologically plausible factors are sufficient to capture an impressive range of topological properties of functional brain networks. Model parameters estimated in one set of functional MRI (fMRI) data on normal volunteers provided a good fit to networks estimated in a second independent sample of fMRI data. Furthermore, slightly detuned model parameters also generated a reasonable simulation of the abnormal properties of brain functional networks in people with schizophrenia. We therefore anticipate that many aspects of brain network organization, in health and disease, may be parsimoniously explained by an economical clustering rule for the probability of functional connectivity between different brain areas.
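
    The two-factor rule described above is compact enough to sketch directly: at each growth step, the probability of adding an edge combines a power-law distance penalty with a term rewarding shared neighbours ("similar input"). The Python below follows that general form; the exponents, the shared-neighbour term and the growth loop are simplified assumptions rather than the paper's fitted model:

        import numpy as np

        def grow_network(coords, n_edges, eta=2.0, gamma=1.5, seed=0):
            rng = np.random.default_rng(seed)
            n = len(coords)
            A = np.zeros((n, n))
            d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
            np.fill_diagonal(d, np.inf)            # forbid self-loops
            for _ in range(n_edges):
                shared = A @ A                     # common-neighbour counts
                score = d ** -eta * (shared + 1.0) ** gamma
                score[A > 0] = 0.0                 # no duplicate edges
                score = np.triu(score, 1)          # undirected graph
                p = (score / score.sum()).ravel()
                i, j = np.unravel_index(rng.choice(n * n, p=p), (n, n))
                A[i, j] = A[j, i] = 1.0
            return A

        coords = np.random.default_rng(1).random((30, 3))
        A = grow_network(coords, n_edges=60)
        print(A.sum() / 2)                         # 60 edges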

  9. Delayed Random Walks: Modeling Human Posture Control

    Science.gov (United States)

    Ohira, Toru

    1998-03-01

    We consider a phenomenological description of a noisy trajectory which appears on a stabilogram platform during human postural sway. We hypothesize that this trajectory arises due to a mixture of uncontrollable noise and a corrective delayed feedback to an upright position. Based on this hypothesis, we model the process with a biased random walk whose transition probability depends on its position at a fixed time delay in the past, which we call a delayed random walk. We first introduce a very simple model (T. Ohira and J. G. Milton, Phys. Rev. E 52, 3277 (1995)), which can nevertheless capture the rough qualitative features of the two-point mean square displacement of experimental data with reasonable estimation of the delay time. Then, we discuss two approaches toward better capturing and understanding the experimental data. The first approach is an extension of the model to include a spatial displacement threshold from the upright position below which no or only weak corrective feedback motion takes place. This can be incorporated into an extended delayed random walk model. Numerical simulations show that this extended model can better capture the three scaling regions which appear in the two-point mean square displacement. The other approach studies the autocorrelation function of the experimental data, which shows oscillatory behavior. We recently investigated a delayed random walk model whose autocorrelation function has analytically tractable oscillatory behavior (T. Ohira, Phys. Rev. E 55, R1255 (1997)). We discuss how this analytical understanding and its application to delay estimation (T. Ohira and R. Sawatari, Phys. Rev. E 55, R2077 (1997)) could possibly be used to further understand the postural sway data.
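
    The basic model is only a few lines of Python: the walker's step bias at time t depends on where it was tau steps earlier, pulling it back toward the upright position. Parameter values below are illustrative only:

        import random

        def delayed_random_walk(steps=10_000, tau=20, beta=0.4, seed=1):
            rng = random.Random(seed)
            x = [0] * (tau + 1)                   # flat history at the origin
            for _ in range(steps):
                delayed = x[-1 - tau]             # position tau steps ago
                sign = (delayed > 0) - (delayed < 0)
                p_left = 0.5 + 0.5 * beta * sign  # corrective bias toward 0
                x.append(x[-1] + (-1 if rng.random() < p_left else 1))
            return x[tau + 1:]

        w = delayed_random_walk()
        msd = lambda lag: sum((w[t + lag] - w[t]) ** 2
                              for t in range(len(w) - lag)) / (len(w) - lag)
        print(msd(1), msd(100))   # two-point mean square displacement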

  10. Automatic Modeling of Virtual Humans and Body Clothing

    Institute of Scientific and Technical Information of China (English)

    Nadia Magnenat-Thalmann; Hyewon Seo; Frederic Cordier

    2004-01-01

    Highly realistic virtual human models are rapidly becoming commonplace in computer graphics. These models, often represented by complex shapes and requiring a labor-intensive creation process, pose a challenge for automatic modeling. The problem of, and solutions to, automatic modeling of animatable virtual humans are studied. Methods for capturing the shape of real people, and parameterization techniques for modeling the static shape (the variety of human body shapes) and dynamic shape (how the body shape changes as it moves) of virtual humans, are classified, summarized and compared. Finally, methods for clothed virtual humans are reviewed.

  11. Evaluation of laser diode based optical switches for optical processors

    Science.gov (United States)

    Swanson, Paul D.; Parker, Michael A.; Libby, Stuart I.

    1993-07-01

    Three optical switching elements have been designed, fabricated, and tested for use in an integrated, optical signal processor. The first, an optical NOR logic gate, uses gain quenching as a means of allowing one (or more) light beam(s) to control the output light. This technique, along with the use of a two-pad bistable output laser, is used in demonstrating the feasibility of the second device, an all-optical RS flip-flop. The third device consists of a broad area orthogonal mode switching laser, whose corollary outputs correspond to the sign of the voltage difference between its two high impedance electrical inputs. This device also has possible memory applications if bistable mode switching within the broad area laser can be achieved.

  12. Damage 90: A post processor for crack initiation

    Science.gov (United States)

    Lemaitre, Jean; Doghri, Issam

    1994-05-01

    A post processor is fully described which allows the calculation of the crack initiation conditions from the history of strain components taken as the output of a finite element calculation. It is based upon damage mechanics using coupled strain-damage constitutive equations for linear isotropic elasticity, perfect plasticity and a unified kinetic law of damage evolution. The localization of damage allows this coupling to be considered only for the damaging point, for which the input strain history is taken from a classical structure calculation in elasticity or elastoplasticity. The listing of the code, a `friendly' code with fewer than 600 FORTRAN instructions, is given and some examples show its ability to model ductile failure in one or multiple dimensions, brittle failure, low- and high-cycle fatigue with non-linear accumulation, and multi-axial fatigue.

  13. Directions in parallel processor architecture, and GPUs too

    CERN Document Server

    CERN. Geneva

    2014-01-01

    Modern computing is power-limited in every domain of computing. Performance increments extracted from instruction-level parallelism (ILP) are no longer power-efficient; they haven't been for some time. Thread-level parallelism (TLP) is a more easily exploited form of parallelism, at the expense of programmer effort to expose it in the program. In this talk, I will introduce you to disparate topics in parallel processor architecture that will impact programming models (and you) in both the near and far future. About the speaker Olivier is a senior GPU (SM) architect at NVIDIA and an active participant in the concurrency working group of the ISO C++ committee. He has also worked on very large diesel engines as a mechanical engineer, and taught at McGill University (Canada) as a faculty instructor.

  14. The Application of Humanized Mouse Models for the Study of Human Exclusive Viruses.

    Science.gov (United States)

    Vahedi, Fatemeh; Giles, Elizabeth C; Ashkar, Ali A

    2017-01-01

    The symbiosis between humans and viruses has allowed human tropic pathogens to evolve intricate means of modulating the human immune response to ensure its survival among the human population. In doing so, these viruses have developed profound mechanisms that mesh closely with our human biology. The establishment of this intimate relationship has created a species-specific barrier to infection, restricting the virus-associated pathologies to humans. This specificity diminishes the utility of traditional animal models. Humanized mice offer a model unique to all other means of study, providing an in vivo platform for the careful examination of human tropic viruses and their interaction with human cells and tissues. These types of animal models have provided a reliable medium for the study of human-virus interactions, a relationship that could otherwise not be investigated without questionable relevance to humans.

  15. A Fast Scheme to Investigate Thermal-Aware Scheduling Policy for Multicore Processors

    Science.gov (United States)

    He, Liqiang; Narisu, Cha

    With more cores integrated into a single chip, the overall power consumption from the multiple concurrently running programs increases dramatically in a CMP processor, which makes the thermal problem much more severe than in a traditional superscalar processor. To mitigate the thermal problem of a multicore processor, two kinds of orthogonal techniques can be exploited. One is the commonly used Dynamic Thermal Management technique. The other is a thermal-aware thread scheduling policy. For the latter, some general ideas have been proposed by academic and industry researchers. The difficulty in investigating the effectiveness of a thread scheduling policy is the huge search space arising from the different possible mapping combinations for a given multi-program workload. In this paper, we extend a simple thermal model originally used in a single core processor to a multicore environment and propose a fast scheme to search or compare the thermal effectiveness of different scheduling policies using the new model. The experimental results show that the proposed scheme can predict the thermal characteristics of the different scheduling policies with reasonable accuracy and help researchers quickly investigate the performance of the policies without detailed, time-consuming simulations.
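
    To make the flavour of such a fast scheme concrete, the toy Python sketch below scores a thread-to-core mapping by the peak steady-state temperature of a simple linear thermal model (self-heating plus lateral coupling between adjacent cores). The coefficients and the model form are invented placeholders, not the paper's extended model:

        def peak_temperature(power, r_self=1.2, r_couple=0.3, t_ambient=45.0):
            """power: watts assigned to each core, in physical order."""
            n = len(power)
            temps = [t_ambient + r_self * power[i]
                     + r_couple * sum(power[j] for j in (i - 1, i + 1)
                                      if 0 <= j < n)
                     for i in range(n)]
            return max(temps)

        # same four threads, two candidate mappings (watts per core)
        print(peak_temperature([25, 25, 5, 5]))  # hot threads adjacent
        print(peak_temperature([25, 5, 25, 5]))  # hot threads spread out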

  16. Modeling the human prothrombinase complex components

    Science.gov (United States)

    Orban, Tivadar

    Thrombin generation is the culminating stage of the blood coagulation process. Thrombin is obtained from prothrombin (the substrate) in a reaction catalyzed by the prothrombinase complex (the enzyme). The prothrombinase complex is composed of factor Xa (the enzyme) and factor Va (the cofactor), associated in the presence of calcium ions on a negatively charged cell membrane. Factor Xa alone can activate prothrombin to thrombin; however, the rate of conversion is not physiologically relevant for survival. Incorporation of factor Va into prothrombinase accelerates the rate of prothrombinase activity 300,000-fold and provides the physiological pathway of thrombin generation. The long-term goal of the current proposal is to provide the necessary support for advancing studies to design potential drug candidates that may be used to avoid development of deep venous thrombosis in high-risk patients. The short-term goals of the present proposal are to (1) propose a model of a mixed asymmetric phospholipid bilayer, (2) expand the incomplete model of human coagulation factor Va and study its interaction with the phospholipid bilayer, (3) create a homology model of prothrombin, and (4) study the dynamics of interaction between prothrombin and the phospholipid bilayer.

  17. Animal models for human craniofacial malformations.

    Science.gov (United States)

    Johnston, M C; Bronsky, P T

    1991-01-01

    Holoprosencephaly malformations, of which the fetal alcohol syndrome appears to be a mild form, can result from medial anterior neural plate deficiencies as demonstrated in an ethanol-treated animal model. These malformations are associated with more medial positioning of the nasal placodes and resulting underdevelopment or absence of the medial nasal prominences (MNPs) and their derivatives. Malformations seen in the human retinoic acid syndrome (RAS) can be produced by administration of the drug 13-cis-retinoic acid in animals. Primary effects on neural crest cells account for most of these RAS malformations. Many of the malformations seen in the RAS are similar to those of hemifacial microsomia, suggesting similar neural crest involvement. Excessive cell death, apparently limited to trigeminal ganglion neuroblasts of placodal origin, follows 13-cis-retinoic acid administration at the time of ganglion formation and leads to malformations virtually identical to those of the Treacher Collins syndrome (TCS). Secondary effects on neural crest cells in the area of the ganglion appear to be responsible for the TCS malformations. Malformations of the DiGeorge syndrome are similar to those of the RAS and can be produced in mice by ethanol administration or by "knocking out" a homeobox gene (Hox 1.5). Human and animal studies indicate that cleft lips of multifactorial etiology may be genetically susceptible because of small MNPs or other MNP developmental alterations, such as those found in A/J mice, that make prominence contact more difficult. Experimental maternal hypoxia in mice indicates that cigarette smoking may increase the incidence of cleft lip by interfering with morphogenetic movements. Other human cleft lips may result from the action of a single major gene coding for TGF-alpha variants. A study with mouse palatal shelves in culture and other information suggest that a fusion problem may be involved.

  18. Molecular processors: from qubits to fuzzy logic.

    Science.gov (United States)

    Gentili, Pier Luigi

    2011-03-14

    Single molecules or their assemblies are information processing devices. Herein it is demonstrated how it is possible to process different types of logic through molecules. As long as decoherent effects are maintained far away from a pure quantum mechanical system, quantum logic can be processed. If the collapse of superimposed or entangled wavefunctions is unavoidable, molecules can still be used to process either crisp (binary or multi-valued) or fuzzy logic. The way to implement fuzzy inference engines is described and supported by examples of the molecular fuzzy logic systems devised so far. Fuzzy logic is drawing attention in the field of artificial intelligence, because it models human reasoning quite well. This ability may be due to some structural analogies between a fuzzy logic system and the human nervous system.
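
    To make "fuzzy inference engine" concrete, here is a minimal Mamdani-style sketch in Python: triangular membership functions fuzzify a crisp input, each rule fires to a degree, and a weighted average defuzzifies the output. The two-rule base is an invented toy, not a model of any molecular system:

        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def infer(x):
            low = tri(x, 0.0, 0.25, 0.5)       # degree "input is low"
            high = tri(x, 0.5, 0.75, 1.0)      # degree "input is high"
            rules = [(low, 0.2), (high, 0.8)]  # IF low THEN ~0.2, IF high THEN ~0.8
            den = sum(w for w, _ in rules)
            return sum(w * out for w, out in rules) / den if den else 0.5

        print(infer(0.3))   # mostly "low", so the output sits near 0.2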

  19. Liver immune-pathogenesis and therapy of human liver tropic virus infection in humanized mouse models.

    Science.gov (United States)

    Bility, Moses T; Li, Feng; Cheng, Liang; Su, Lishan

    2013-08-01

    Hepatitis B virus (HBV) and hepatitis C virus (HCV) infect and replicate primarily in human hepatocytes. Few reliable and easily accessible animal models are available for studying the immune system's contribution to liver disease progression during hepatitis virus infection. Humanized mouse models reconstituted with human hematopoietic stem cells (HSCs) have been developed to study human immunology, human immunodeficiency virus 1 infection, and immunopathogenesis. However, a humanized mouse model engrafted with both human immune and human liver cells is needed to study infection and immunopathogenesis of HBV/HCV infection in vivo. We have recently developed the humanized mouse model with both human immune and human liver cells (AFC8-hu HSC/Hep) to study immunopathogenesis and therapy of HCV infection in vivo. In this review, we summarize the current models of HBV/HCV infection and their limitations in immunopathogenesis. We will then present our recent findings of HCV infection and immunopathogenesis in the AFC8-hu HSC/Hep mouse, which supports HCV infection, human T-cell response and associated liver pathogenesis. Inoculation of humanized mice with primary HCV isolates resulted in long-term HCV infection. HCV infection induced elevated infiltration of human immune cells in the livers of HCV-infected humanized mice. HCV infection also induced an HCV-specific T-cell immune response in lymphoid tissues of humanized mice. Additionally, HCV infection induced liver fibrosis in humanized mice. Anti-human alpha smooth muscle actin (αSMA) staining showed elevated human hepatic stellate cell activation in HCV-infected humanized mice. We discuss the limitations and future improvements of the AFC8-hu HSC/Hep mouse model and its application in evaluating novel therapeutics, as well as studying both HCV and HBV infection, human immune responses, and associated human liver fibrosis and cancer.

  20. Modeling and remodeling of human extraction sockets.

    Science.gov (United States)

    Trombelli, Leonardo; Farina, Roberto; Marzola, Andrea; Bozzi, Leopoldo; Liljenberg, Birgitta; Lindhe, Jan

    2008-07-01

    The available studies on extraction wound repair in humans are affected by significant limitations and have failed to evaluate tissue alterations occurring in all compartments of the hard tissue defect. The aim was to monitor over a 6-month period the healing of human extraction sockets, including a semi-quantitative analysis of the tissues and cell populations involved in the various stages of the modeling/remodeling processes. Twenty-seven biopsies, representative of the early (2-4 weeks, n=10), intermediate (6-8 weeks, n=6), and late phases (12-24 weeks, n=11) of healing, were collected and analysed. Granulation tissue, present in comparatively large amounts in the early phase of socket healing, was replaced with provisional matrix and woven bone in the interval between the early and intermediate observation phases. The density of vascular structures and macrophages slowly decreased over time from 2-4 weeks onward. The presence of osteoblasts peaked at 6-8 weeks and remained almost stable thereafter; a small number of osteoclasts were present in a few specimens at each observation interval. The present findings demonstrated that great variability exists in man with respect to hard tissue formation within extraction sockets. Thus, whereas a provisional connective tissue consistently forms within the first weeks of healing, the interval during which mineralized bone is laid down is much less predictable.

  1. In Vitro Model of Human Choroidal Neovascularization

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Choroidal capillary endothelial cells (CECs) play a critical role in the development of choroidal neovascularization, which is one of the major causes of blindness. An effective method for CEC cultivation was proposed. Human choroidal CECs were isolated using microdissection followed by the use of superparamagnetic beads (Dynabeads) coated with CD31 antibody, which selectively binds to the endothelial cell surface. Cells bound to beads were isolated using a magnetic particle concentrator. The CECs were seeded into type IV collagen-coated 24-well plates. The results show that primary cultured CECs are induced to form tubes in a collagen IV-coated environment, which can serve as an in vitro model of choroidal neovascularization.

  2. Molecular Modeling of Prion Transmission to Humans

    Directory of Open Access Journals (Sweden)

    Etienne Levavasseur

    2014-10-01

    Full Text Available Using different prion strains, such as the variant Creutzfeldt-Jakob disease agent and the atypical bovine spongiform encephalopathy agents, and using transgenic mice expressing human or bovine prion protein, we assessed the reliability of protein misfolding cyclic amplification (PMCA) to model interspecies and genetic barriers to prion transmission. We compared our PMCA results with in vivo transmission data characterized by attack rates, i.e., the percentage of inoculated mice that developed the disease. Using 19 seed/substrate combinations, we observed that a significant PMCA amplification was only obtained when the mouse line used as substrate is susceptible to the corresponding strain. Our results suggest that PMCA provides a useful tool to study genetic barriers to transmission and to study the zoonotic potential of emerging prion strains.

  3. GA103 a microprogrammable processor for online filtering

    CERN Document Server

    Calzas, A; Danon, G

    1981-01-01

    GA103 is a 16 bit microprogrammable processor, which emulates the PDP 11 instruction set. It is based on the Am2900 slices. It allows user-implemented microinstructions and addition of hardwired processors. It will perform online filtering tasks in the NA14 experiment at CERN, based on the reconstruction of transverse momentum of photons detected in a lead glass calorimeter. (3 refs).

  4. Expert System Constant False Alarm Rate (CFAR) Processor

    Science.gov (United States)

    2006-09-01

    The processor has been developed on a Sun Sparc Station 4/470 using a commercial off-the-shelf software development package called G2 by Gensym Corporation. ...size of the training data set. A prototype expert system CFAR processor has been presented which applies artificial intelligence to CFAR detection.

  5. Digital Signal Processor System for AC Power Drivers

    OpenAIRE

    Ovidiu Neamtu

    2009-01-01

    DSP (Digital Signal Processor) is the best solution for motor control systems to make possible the development of advanced motor drive systems. The motor control processor calculates the required motor winding voltage magnitude and frequency to operate the motor at the desired speed. A PWM (Pulse Width Modulation) circuit controls the on and off duty cycle of the power inverter switches to vary the magnitude of the motor voltages.

  6. Digital Signal Processor System for AC Power Drivers

    Directory of Open Access Journals (Sweden)

    Ovidiu Neamtu

    2009-10-01

    Full Text Available DSP (Digital Signal Processor) is the best solution for motor control systems to make possible the development of advanced motor drive systems. The motor control processor calculates the required motor winding voltage magnitude and frequency to operate the motor at the desired speed. A PWM (Pulse Width Modulation) circuit controls the on and off duty cycle of the power inverter switches to vary the magnitude of the motor voltages.

  7. High speed matrix processors using floating point representation

    Energy Technology Data Exchange (ETDEWEB)

    Birkner, D.A.

    1980-01-01

    The author describes the architecture of a high-speed matrix processor which uses a floating-point format for data representation. It is shown how multipliers and other LSI devices are used in the design to obtain the high speed of the processor.

  8. Temporal Partitioning and Multi-Processor Scheduling for Reconfigurable Architectures

    DEFF Research Database (Denmark)

    Popp, Andreas; Le Moullec, Yannick; Koch, Peter

    This poster presentation outlines a proposed framework for handling mapping of signal processing applications to heterogeneous reconfigurable architectures. The methodology consists of an extension to traditional multi-processor scheduling by creating a separate HW track for generation of groups...... of tasks that are handled similarly to SW processes in a traditional multi-processor scheduling context....

  9. Message Passing on a Time-predictable Multicore Processor

    DEFF Research Database (Denmark)

    Sørensen, Rasmus Bo; Puffitsch, Wolfgang; Schoeberl, Martin

    2015-01-01

    Real-time systems need time-predictable computing platforms. For a multicore processor to be time-predictable, communication between processor cores needs to be time-predictable as well. This paper presents a time-predictable message-passing library for such a platform. We show how to build up...

  10. A Simple and Affordable TTL Processor for the Classroom

    Science.gov (United States)

    Feinberg, Dave

    2007-01-01

    This paper presents a simple 4 bit computer processor design that may be built using TTL chips for less than $65. In addition to describing the processor itself in detail, we discuss our experience using the laboratory kit and its associated machine instruction set to teach computer architecture to high school students. (Contains 3 figures and 5…
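
    Part of the pedagogical appeal of such a machine is how little is needed for a complete fetch-execute story. The Python sketch below simulates a hypothetical three-instruction accumulator machine with a 4-bit data path; the kit's real instruction set is not given in this record, so this ISA is invented purely for illustration:

        def run(program, memory):
            acc, pc = 0, 0
            while pc < len(program):
                op, arg = program[pc]
                if op == "LOAD":
                    acc = memory[arg] & 0xF           # 4-bit accumulator
                elif op == "ADD":
                    acc = (acc + memory[arg]) & 0xF   # wraps modulo 16
                elif op == "STORE":
                    memory[arg] = acc
                pc += 1
            return memory

        mem = [7, 12, 0, 0]
        print(run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], mem))
        # mem[2] becomes 3, since 7 + 12 = 19 wraps to 3 in four bits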

  11. Bibliographic Pattern Matching Using the ICL Distributed Array Processor.

    Science.gov (United States)

    Carroll, David M.; And Others

    1988-01-01

    Describes the use of a highly parallel array processor for pattern matching operations in a bibliographic retrieval system. The discussion covers the hardware and software features of the processor, the pattern matching algorithm used, and the results of experimental tests of the system. (37 references) (Author/CLB)

  12. Designing a dataflow processor using CλaSH

    NARCIS (Netherlands)

    Niedermeier, Anja; Wester, Rinse; Rovers, Kenneth; Baaij, Christiaan; Kuper, Jan; Smit, Gerard

    2010-01-01

    In this paper we show how a simple dataflow processor can be fully implemented using CλaSH, a high-level HDL based on the functional programming language Haskell. The processor was described using Haskell, and the CλaSH compiler was then used to translate the design into fully synthesisable VHDL code.

  13. EARLY EXPERIENCE WITH A HYBRID PROCESSOR: K-MEANS CLUSTERING

    Energy Technology Data Exchange (ETDEWEB)

    M. GOKHALE; ET AL

    2001-02-01

    We discuss hardware/software coprocessing on a hybrid processor for a compute- and data-intensive hyper-spectral imaging algorithm, K-Means Clustering. The experiments are performed on the Altera Excalibur board using the soft IP core 32-bit NIOS RISC processor. In our experiments, we compare performance of the sequential algorithm with two different accelerated versions. We consider granularity and synchronization issues when mapping an algorithm to a hybrid processor. Our results show that on the Excalibur NIOS, a 15% speedup can be achieved over the sequential algorithm on images with 8 spectral bands where the pixels are divided into 8 categories. Speedup is limited by the communication cost of transferring data from external memory through the NIOS processor to the customized circuits. Our results indicate that future hybrid processors must either (1) have a clock rate 10X the speed of the configurable logic circuits or (2) include dual port memories that both the processor and configurable logic can access. If either of these conditions is met, the hybrid processor will show a factor of 10 speedup over the sequential algorithm. Such systems will combine the convenience of conventional processors with the speed of configurable logic.
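
    For reference, a plain sequential K-Means in Python, set up like the experiments above (8 spectral bands, 8 categories). The nearest-center distance kernel in the inner loop is the compute-intensive part that a hybrid processor would offload to configurable logic:

        import numpy as np

        def kmeans(pixels, k=8, iters=20, seed=0):
            rng = np.random.default_rng(seed)
            centers = pixels[rng.choice(len(pixels), k, replace=False)]
            for _ in range(iters):
                # distance kernel: each pixel vs. every cluster center
                d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
                labels = d.argmin(axis=1)
                for c in range(k):                # recompute the centers
                    if np.any(labels == c):
                        centers[c] = pixels[labels == c].mean(axis=0)
            return labels, centers

        pixels = np.random.rand(4096, 8)          # 8 spectral bands per pixel
        labels, centers = kmeans(pixels)
        print(np.bincount(labels, minlength=8))   # pixels per category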

  14. Evaluation of the Intel Sandy Bridge-EP server processor

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A; CERN. Geneva. IT Department

    2012-01-01

    In this paper we report on a set of benchmark results recently obtained by CERN openlab when comparing an 8-core “Sandy Bridge-EP” processor with Intel’s previous microarchitecture, the “Westmere-EP”. The Intel marketing names for these processors are “Xeon E5-2600 processor series” and “Xeon 5600 processor series”, respectively. Both processors are produced in a 32nm process, and both platforms are dual-socket servers. Multiple benchmarks were used to get a good understanding of the performance of the new processor. We used both industry-standard benchmarks, such as SPEC2006, and specific High Energy Physics benchmarks, representing both simulation of physics detectors and data analysis of physics events. Before summarizing the results we must stress the fact that benchmarking of modern processors is a very complex affair. One has to control (at least) the following features: processor frequency, overclocking via Turbo mode, the number of physical cores in use, the use of logical cores ...

  15. Signal Processor for Spring8 Linac BPM

    CERN Document Server

    Yanagida, K; Dewa, H; Hanaki, H; Hori, T; Kobayashi, T; Mizuno, A; Sasaki, S; Suzuki, S; Takashima, T; Taniushi, T; Tomizawa, H

    2001-01-01

    A signal processor of the single shot BPM system consists of a narrow-band BPF unit, a detector unit, a P/H circuit, an S/H IC and a 16-bit ADC. The BPF unit extracts a pure 2856MHz RF signal component from a BPM and makes the pulse width longer than 100ns. The detector unit that includes a demodulating logarithmic amplifier is used to detect an S-band RF amplitude. A wide dynamic range of beam current has been achieved; 0.01 ~ 3.5nC for below 100ns input pulse width, or 0.06 ~ 20mA for above 100ns input pulse width. The maximum acquisition rate with a VME system has been achieved up to 1kHz.

  16. Scaling the ion trap quantum processor.

    Science.gov (United States)

    Monroe, C; Kim, J

    2013-03-08

    Trapped atomic ions are standards for quantum information processing, serving as quantum memories, hosts of quantum gates in quantum computers and simulators, and nodes of quantum communication networks. Quantum bits based on trapped ions enjoy a rare combination of attributes: They have exquisite coherence properties, they can be prepared and measured with nearly 100% efficiency, and they are readily entangled with each other through the Coulomb interaction or remote photonic interconnects. The outstanding challenge is the scaling of trapped ions to hundreds or thousands of qubits and beyond, at which scale quantum processors can outperform their classical counterparts in certain applications. We review the latest progress and prospects in that effort, with the promise of advanced architectures and new technologies, such as microfabricated ion traps and integrated photonics.

  17. Water Processor and Oxygen Generation Assembly

    Science.gov (United States)

    Bedard, John

    1997-01-01

    This report documents the results of the tasks which initiated efforts on design issues relating to the Water Processor (WP) and the Oxygen Generation Assembly (OGA) Flight Hardware for the International Space Station. This report fulfills the Statement of Work deliverables requirement for contract H-29387D. The following lists the tasks required by contract H-29387D: (1) HSSSI shall coordinate a detailed review of WP/OGA Flight Hardware program requirements with personnel from MSFC to identify requirements that can be eliminated without affecting the technical integrity of the WP/OGA Hardware; (2) HSSSI shall conduct the technical interchanges with personnel from MSFC to resolve design issues related to WP/OGA Flight Hardware; (3) HSSSI will initiate discussions with Zellwegger Analytics, Inc. to address design issues related to WP and PCWQM interfaces.

  18. Building custom processors with Handel-C

    CERN Document Server

    Lokier, J

    1999-01-01

    Triggering and data acquisition for the ATLAS LHC experiment requires state of the art computer hardware. Amongst other things, specialised processors may be required. To build these economically we are looking at reconfigurable computing, and a high-level hardware description language: Handel-C. We had previously implemented a specialised network hardware application in AHDL-a hardware description at the level of gates, flip-flops and state machines. As a feasibility study, we have rewritten the application in Handel-C -a language similar to C, except that it can be translated into hardware. There were problems to solve: high data throughput with complex pipelines; timing constraints; I/O interfaces to external devices; difficulties with the Altera devices. We gained valuable experience, wrote useful support tools, and discovered clean new ways to make the most of the language in the high-speed domain. (0 refs).

  19. Conversion via software of a simd processor into a mimd processor

    Energy Technology Data Exchange (ETDEWEB)

    Guzman, A.; Gerzso, M.; Norkin, K.B.; Vilenkin, S.Y.

    1983-01-01

    A method is described which takes a pure LISP program and automatically decomposes it via automatic parallelization into several parts, one for each processor of an SIMD architecture. Each of these parts is a different execution flow, i.e., a different program. The execution of these different programs by an SIMD architecture is examined. The method has been developed in some detail for the PS-2000, an SIMD Soviet multiprocessor, making it behave like AHR, a Mexican MIMD multi-microprocessor. Both the PS-2000 and AHR execute a pure LISP program in parallel; its decomposition into n pieces, their synchronization, scheduling, etc., are performed by the system (hardware and software). In order to achieve simultaneous execution of different programs in an SIMD processor, the method uses a scheme of node scheduling and node exportation. 14 references.

  20. Design of a Human Reliability Assessment model for structural engineering

    NARCIS (Netherlands)

    De Haan, J.; Terwel, K.C.; Al-Jibouri, S.H.S.

    2013-01-01

    It is generally accepted that humans are the “weakest link” in structural design and construction processes. Despite this, few models are available to quantify human error within engineering processes. This paper demonstrates the use of a quantitative Human Reliability Assessment model within struct

  1. Analysis of rear end impact using mathematical human modelling

    NARCIS (Netherlands)

    Happee, R.; Meijer, R.; Horst, M.J. van der; Ono, K.; Yamazaki, K.

    2000-01-01

    At TNO an omni-directional mathematical human body model has been developed. Until now this human model has been validated for frontal and lateral loading using response data of volunteer and post mortem human subject (PMHS) sled tests. For rearward loading it has been validated for high speed impac

  2. Floating-point processor for INTEL 8080A microprocessor systems

    Energy Technology Data Exchange (ETDEWEB)

    Bairstow, R.; Barlow, J.; Jires, M.; Waters, M.

    1982-03-01

    An A.M.D. 9511 Floating Point Processor has been interfaced to the Rutherford Laboratory Bubble Chamber Group's microcomputers. These computers are based on the INTEL 8080A microprocessor. The interface uses a memory mapped I/O technique to ensure rapid transfer of arguments between processors. The A.M.D. 9511 acts as a slave processor to the INTEL 8080A system. The 8080 processor is held in WAIT status until completion of the A.M.D. operation. A software Macro Processor has been written to effectively extend the basic INTEL 8080A instruction set to include the full range of A.M.D. 9511 instructions.

  3. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    Directory of Open Access Journals (Sweden)

    David R. W. Barr

    2009-01-01

    Full Text Available We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  4. An Efficient Graph-Coloring Algorithm for Processor Allocation

    Directory of Open Access Journals (Sweden)

    Mohammed Hasan Mahafzah

    2013-06-01

    Full Text Available This paper develops an efficient exact graph-coloring algorithm based on Maximum Independent Set (MIS) for allocating processors in distributed systems. This technique represents the processors allocated at a specific time in a fully connected graph and prevents each processor in a multiprocessor system from being assigned to more than one process at a time. This research uses a sequential technique to distribute processes among processors. Moreover, the proposed method has been constructed by modifying the FMIS algorithm. The proposed algorithm has been programmed in Visual C++ and implemented on an Intel Core i7. The experiments show that the proposed algorithm achieves better performance in terms of CPU utilization and minimum time for graph coloring, compared with the latest FMIS algorithm. The proposed algorithm can be extended to detect defective processors in the system.
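
    The underlying idea is easy to demonstrate with the standard greedy coloring heuristic (not the paper's exact MIS-based algorithm): processes are vertices, an edge joins two processes that overlap in time, and each color is a processor, so no processor ever receives two overlapping processes:

        def allocate(conflicts, n):
            """conflicts: pairs (i, j) of processes that overlap in time.
            Returns a process -> processor-id mapping."""
            adj = {v: set() for v in range(n)}
            for i, j in conflicts:
                adj[i].add(j)
                adj[j].add(i)
            color = {}
            for v in range(n):
                used = {color[u] for u in adj[v] if u in color}
                color[v] = next(c for c in range(n) if c not in used)
            return color

        print(allocate({(0, 1), (1, 2), (0, 2), (2, 3)}, 4))
        # {0: 0, 1: 1, 2: 2, 3: 0}: four processes fit on three processors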

  5. Acoustooptic linear algebra processors - Architectures, algorithms, and applications

    Science.gov (United States)

    Casasent, D.

    1984-01-01

    Architectures, algorithms, and applications for systolic processors are described with attention to the realization of parallel algorithms on various optical systolic array processors. Systolic processors for matrices with special structure and matrices of general structure, and the realization of matrix-vector, matrix-matrix, and triple-matrix products on such architectures are described. Parallel algorithms for direct and indirect solutions to systems of linear algebraic equations and their implementation on optical systolic processors are detailed with attention to the pipelining and flow of data and operations. Parallel algorithms and their optical realization for LU and QR matrix decomposition are specifically detailed. These represent the fundamental operations necessary in the implementation of least squares, eigenvalue, and SVD solutions. Specific applications (e.g., the solution of partial differential equations, adaptive noise cancellation, and optimal control) are described to typify the use of matrix processors in modern advanced signal processing.
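
    As a reminder of what the LU kernel computes, here is a Doolittle LU decomposition without pivoting in plain Python/NumPy; the row-update inner loop is the multiply-accumulate data flow that such optical architectures aim to pipeline:

        import numpy as np

        def lu(A):
            n = A.shape[0]
            L, U = np.eye(n), A.astype(float).copy()
            for k in range(n - 1):
                for i in range(k + 1, n):
                    L[i, k] = U[i, k] / U[k, k]       # elimination multiplier
                    U[i, k:] -= L[i, k] * U[k, k:]    # zero out below the pivot
            return L, U

        A = np.array([[4.0, 3.0], [6.0, 3.0]])
        L, U = lu(A)
        assert np.allclose(L @ U, A)
        print(L, U, sep="\n")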

  6. PERFORMANCE EVALUATION OF DIRECT PROCESSOR ACCESS FOR NON DEDICATED SERVER

    Directory of Open Access Journals (Sweden)

    P. S. BALAMURUGAN

    2010-10-01

    Full Text Available The objective of the paper is to design a coprocessor for a desktop machine which enables the machine to act as a non-dedicated server, such that the coprocessor acts as the server processor and the multi-core processor acts as the desktop processor. By implementing this methodology a client machine can be made to act as both a non-dedicated server and a client machine. This type of machine can be used in autonomous networks. This design will lead to a cost-effective server: a machine which can act in parallel as a non-dedicated server and a client machine, or which can be switched to act as either client or server.

  7. Explore the Performance of the ARM Processor Using JPEG

    Directory of Open Access Journals (Sweden)

    A.D. Jadhav

    2010-01-01

    Full Text Available Recently, the evolution of embedded systems has shown a strong trend towards application-specific, single-chip solutions. The ARM processor core is a leading RISC processor architecture in the embedded domain, and the ARM family of processors supports a unique code-size reduction feature. This paper illustrates the design of an image encoder on an embedded platform, specifically a JPEG encoder on the ARM7TDMI processor. A grayscale image is encoded using the Keil software tools, and the same procedure is repeated in MATLAB to compare the results with the standard, successfully porting a new JPEG application to the ARM7 processor.

  9. Integrated Human Futures Modeling in Egypt

    Energy Technology Data Exchange (ETDEWEB)

    Passell, Howard D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Aamir, Munaf Syed [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bernard, Michael Lewis [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Beyeler, Walter E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Fellner, Karen Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hayden, Nancy Kay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Jeffers, Robert Fredric [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Keller, Elizabeth James Kistin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Malczynski, Leonard A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mitchell, Michael David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silver, Emily [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tidwell, Vincent C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Villa, Daniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vugrin, Eric D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Engelke, Peter [Atlantic Council, Washington, D.C. (United States); Burrow, Mat [Atlantic Council, Washington, D.C. (United States); Keith, Bruce [United States Military Academy, West Point, NY (United States)

    2016-01-01

    The Integrated Human Futures Project provides a set of analytical and quantitative modeling and simulation tools that help explore the links among human social, economic, and ecological conditions, human resilience, conflict, and peace, and allows users to simulate tradeoffs and consequences associated with different future development and mitigation scenarios. In the current study, we integrate five distinct modeling platforms to simulate the potential risk of social unrest in Egypt resulting from the Grand Ethiopian Renaissance Dam (GERD) on the Blue Nile in Ethiopia. The five platforms simulate hydrology, agriculture, economy, human ecology, and human psychology/behavior, and show how impacts derived from development initiatives in one sector (e.g., hydrology) might ripple through to affect other sectors and how development and security concerns may be triggered across the region. This approach evaluates potential consequences, intended and unintended, associated with strategic policy actions that span the development-security nexus at the national, regional, and international levels. Model results are not intended to provide explicit predictions, but rather to provide system-level insight for policy makers into the dynamics among these interacting sectors, and to demonstrate an approach to evaluating short- and long-term policy trade-offs across different policy domains and stakeholders. The GERD project is critical to government-planned development efforts in Ethiopia but is expected to reduce downstream freshwater availability in the Nile Basin, fueling fears of negative social and economic impacts that could threaten stability and security in Egypt. We tested these hypotheses and came to the following preliminary conclusions. First, the GERD will have an important short-term impact on water availability, food production, and hydropower production in Egypt, depending on the short- term reservoir fill rate. Second, the GERD will have a very small impact on

  10. Launching applications on compute and service processors running under different operating systems in scalable network of processor boards with routers

    Science.gov (United States)

    Tomkins, James L.; Camp, William J.

    2009-03-17

    A multiple processor computing apparatus includes a physical interconnect structure that is flexibly configurable to support selective segregation of classified and unclassified users. The physical interconnect structure also permits easy physical scalability of the computing apparatus. The computing apparatus can include an emulator which permits applications from the same job to be launched on processors that use different operating systems.

  11. Bayesian Processor of Ensemble for Precipitation Forecasting: A Development Plan

    Science.gov (United States)

    Toth, Z.; Krzysztofowicz, R.

    2006-05-01

    The Bayesian Processor of Ensemble (BPE) is a new, theoretically-based technique for probabilistic forecasting of weather variates. It is a generalization of the Bayesian Processor of Output (BPO) developed by Krzysztofowicz and Maranzano for processing single values of multiple predictors into a posterior distribution function of a predictand. The BPE processes an ensemble of a predictand generated by multiple integrations of a numerical weather prediction (NWP) model, and optimally fuses the ensemble with climatic data in order to quantify uncertainty about the predictand. As is well known, Bayes theorem provides the optimal theoretical framework for fusing information from different sources and for obtaining the posterior distribution function of a predictand. Using a family of such distribution functions, a given raw ensemble can be mapped into a posterior ensemble, which is well calibrated, has maximum informativeness, and preserves the spatio-temporal and cross-variate dependence structure of the NWP output fields. The challenge is to develop and test the BPE suitable for operational forecasting. This talk will present the basic design components of the BPE, along with a discussion of the climatic and training data to be used in its potential application at the National Centers for Environmental Prediction (NCEP). The technique will be tested first on quasi-normally distributed variates and next on precipitation variates. For reasons of economy, the BPE will be applied on the relatively coarse resolution grid corresponding to the ensemble output, and then the posterior ensemble will be downscaled to finer grids such as that of the National Digital Forecast Database (NDFD).
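    As a compact reference for the fusion step described above, Bayes' theorem for a predictand $w$ given ensemble output $x$ takes the form (notation ours, not necessarily the authors'):

        $\phi(w \mid x) = \dfrac{f(x \mid w)\, g(w)}{\int f(x \mid u)\, g(u)\, du}$

    where $g$ is the prior (climatic) density of the predictand and $f$ is the likelihood of the ensemble given the predictand; the posterior distribution function obtained from $\phi$ is what maps the raw ensemble into the calibrated posterior ensemble.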

  12. Genomic responses in mouse models poorly mimic human inflammatory diseases.

    Science.gov (United States)

    Seok, Junhee; Warren, H Shaw; Cuenca, Alex G; Mindrinos, Michael N; Baker, Henry V; Xu, Weihong; Richards, Daniel R; McDonald-Smith, Grace P; Gao, Hong; Hennessy, Laura; Finnerty, Celeste C; López, Cecilia M; Honari, Shari; Moore, Ernest E; Minei, Joseph P; Cuschieri, Joseph; Bankey, Paul E; Johnson, Jeffrey L; Sperry, Jason; Nathens, Avery B; Billiar, Timothy R; West, Michael A; Jeschke, Marc G; Klein, Matthew B; Gamelli, Richard L; Gibran, Nicole S; Brownstein, Bernard H; Miller-Graziano, Carol; Calvano, Steve E; Mason, Philip H; Cobb, J Perren; Rahme, Laurence G; Lowry, Stephen F; Maier, Ronald V; Moldawer, Lyle L; Herndon, David N; Davis, Ronald W; Xiao, Wenzhong; Tompkins, Ronald G

    2013-02-26

    A cornerstone of modern biomedical research is the use of mouse models to explore basic pathophysiological mechanisms, evaluate new therapeutic approaches, and make go or no-go decisions to carry new drug candidates forward into clinical trials. Systematic studies evaluating how well murine models mimic human inflammatory diseases are nonexistent. Here, we show that, although acute inflammatory stresses from different etiologies result in highly similar genomic responses in humans, the responses in corresponding mouse models correlate poorly with the human conditions and also, one another. Among genes changed significantly in humans, the murine orthologs are close to random in matching their human counterparts (e.g., R2 between 0.0 and 0.1). In addition to improvements in the current animal model systems, our study supports higher priority for translational medical research to focus on the more complex human conditions rather than relying on mouse models to study human inflammatory diseases.

  13. Genomic responses in mouse models poorly mimic human inflammatory diseases

    Science.gov (United States)

    Seok, Junhee; Warren, H. Shaw; Cuenca, Alex G.; Mindrinos, Michael N.; Baker, Henry V.; Xu, Weihong; Richards, Daniel R.; McDonald-Smith, Grace P.; Gao, Hong; Hennessy, Laura; Finnerty, Celeste C.; López, Cecilia M.; Honari, Shari; Moore, Ernest E.; Minei, Joseph P.; Cuschieri, Joseph; Bankey, Paul E.; Johnson, Jeffrey L.; Sperry, Jason; Nathens, Avery B.; Billiar, Timothy R.; West, Michael A.; Jeschke, Marc G.; Klein, Matthew B.; Gamelli, Richard L.; Gibran, Nicole S.; Brownstein, Bernard H.; Miller-Graziano, Carol; Calvano, Steve E.; Mason, Philip H.; Cobb, J. Perren; Rahme, Laurence G.; Lowry, Stephen F.; Maier, Ronald V.; Moldawer, Lyle L.; Herndon, David N.; Davis, Ronald W.; Xiao, Wenzhong; Tompkins, Ronald G.; Abouhamze, Amer; Balis, Ulysses G. J.; Camp, David G.; De, Asit K.; Harbrecht, Brian G.; Hayden, Douglas L.; Kaushal, Amit; O’Keefe, Grant E.; Kotz, Kenneth T.; Qian, Weijun; Schoenfeld, David A.; Shapiro, Michael B.; Silver, Geoffrey M.; Smith, Richard D.; Storey, John D.; Tibshirani, Robert; Toner, Mehmet; Wilhelmy, Julie; Wispelwey, Bram; Wong, Wing H

    2013-01-01

    A cornerstone of modern biomedical research is the use of mouse models to explore basic pathophysiological mechanisms, evaluate new therapeutic approaches, and make go or no-go decisions to carry new drug candidates forward into clinical trials. Systematic studies evaluating how well murine models mimic human inflammatory diseases are nonexistent. Here, we show that, although acute inflammatory stresses from different etiologies result in highly similar genomic responses in humans, the responses in corresponding mouse models correlate poorly with the human conditions and also, one another. Among genes changed significantly in humans, the murine orthologs are close to random in matching their human counterparts (e.g., R2 between 0.0 and 0.1). In addition to improvements in the current animal model systems, our study supports higher priority for translational medical research to focus on the more complex human conditions rather than relying on mouse models to study human inflammatory diseases. PMID:23401516

  14. Modeling bursts and heavy tails in human dynamics

    OpenAIRE

    Vazquez, A.; Oliveira, J. Gama; Dezso, Z.; Goh, K. -I.; Kondor, I.; Barabasi, A. -L.

    2005-01-01

    Current models of human dynamics, used from risk assessment to communications, assume that human actions are randomly distributed in time and thus well approximated by Poisson processes. We provide direct evidence that for five human activity patterns the timing of individual human actions follows non-Poisson statistics, characterized by bursts of rapidly occurring events separated by long periods of inactivity. We show that the bursty nature of human behavior is a consequence of a decision-based queuing process.
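    The decision-based queuing mechanism referred to above is simple enough to simulate directly. The sketch below follows the spirit of such priority-queue models (the list length and probabilities are illustrative, not taken from the paper): with selection probability close to one, a few tasks linger for a very long time and the waiting-time distribution develops a heavy tail.

        import random

        # Minimal sketch of a decision-based (priority) queue: keep a short
        # task list; with probability p execute the highest-priority task,
        # otherwise a random one, and refill with a fresh random task.
        def waiting_times(steps=100000, length=2, p=0.99999):
            tasks = [(random.random(), 0) for _ in range(length)]  # (priority, age)
            waits = []
            for _ in range(steps):
                if random.random() < p:
                    i = max(range(length), key=lambda k: tasks[k][0])
                else:
                    i = random.randrange(length)
                waits.append(tasks[i][1])             # waiting time of executed task
                tasks = [(pr, age + 1) for pr, age in tasks]
                tasks[i] = (random.random(), 0)       # replace executed task
            return waits

        w = waiting_times()
        print(max(w), sum(w) / len(w))  # heavy tail: max wait far exceeds the mean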

  15. Modeling human response errors in synthetic flight simulator domain

    Science.gov (United States)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control-theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling to integrate the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. The models will be verified experimentally in a flight handling-qualities simulation.

  16. A long term model of circulation. [human body

    Science.gov (United States)

    White, R. J.

    1974-01-01

    A quantitative approach to modeling human physiological function, with a view toward ultimate application to long duration space flight experiments, was undertaken. Data was obtained on the effect of weightlessness on certain aspects of human physiological function during 1-3 month periods. Modifications in the Guyton model are reviewed. Design considerations for bilateral interface models are discussed. Construction of a functioning whole body model was studied, as well as the testing of the model versus available data.

  17. Competency Modeling in Extension Education: Integrating an Academic Extension Education Model with an Extension Human Resource Management Model

    Science.gov (United States)

    Scheer, Scott D.; Cochran, Graham R.; Harder, Amy; Place, Nick T.

    2011-01-01

    The purpose of this study was to compare and contrast an academic extension education model with an Extension human resource management model. The academic model of 19 competencies was similar across the 22 competencies of the Extension human resource management model. There were seven unique competencies for the human resource management model.…

  18. Impact of device level faults in a digital avionic processor

    Science.gov (United States)

    Suk, Ho Kim

    1989-01-01

    This study describes an experimental analysis of the impact of gate- and device-level faults in the processor of a Bendix BDX-930 flight control system. Via mixed-mode simulation, faults were injected at the gate (stuck-at) and at the transistor levels, and their propagation through the chip to the output pins was measured. The results show that there is little correspondence between a stuck-at and a device-level fault model as far as error activity or detection within a functional unit is concerned. Insofar as error activity outside the injected unit and at the output pins is concerned, the stuck-at and device models track each other. The stuck-at model, however, overestimates, by over 100 percent, the probability of fault propagation to the output pins. An evaluation of the Mean Error Durations and the Mean Time Between Errors at the output pins shows that the stuck-at model significantly underestimates (by 62 percent) the impact of an internal chip fault on the output pins. Finally, the study also quantifies the impact of device faults by location, both internally and at the output pins.

  19. First Results of an "Artificial Retina" Processor Prototype

    Science.gov (United States)

    Cenci, Riccardo; Bedeschi, Franco; Marino, Pietro; Morello, Michael J.; Ninci, Daniele; Piucci, Alessio; Punzi, Giovanni; Ristori, Luciano; Spinella, Franco; Stracka, Simone; Tonelli, Diego; Walsh, John

    2016-11-01

    We report on the performance of a specialized processor capable of reconstructing charged particle tracks in a realistic LHC silicon tracker detector, at the same speed as the readout and with sub-microsecond latency. The processor is based on an innovative pattern-recognition algorithm, called the "artificial retina algorithm", inspired by the vision system of mammals. A prototype of the processor has been designed, simulated, and implemented on Tel62 boards equipped with high-bandwidth Altera Stratix III FPGA devices. The prototype is the first step towards a real-time track reconstruction device aimed at processing complex events of high-luminosity LHC experiments at a 40 MHz crossing rate.

  20. Digital optical cellular image processor (DOCIP) - Experimental implementation

    Science.gov (United States)

    Huang, K.-S.; Sawchuk, A. A.; Jenkins, B. K.; Chavel, P.; Wang, J.-M.; Weber, A. G.; Wang, C.-H.; Glaser, I.

    1993-01-01

    We demonstrate experimentally the concept of the digital optical cellular image processor architecture by implementing one processing element of a prototype optical computer that includes a 54-gate processor, an instruction decoder, and electronic input-output interfaces. The processor consists of a two-dimensional (2-D) array of 54 optical logic gates implemented by use of a liquid-crystal light valve and a 2-D array of 53 subholograms to provide interconnections between gates. The interconnection hologram is fabricated by a computer-controlled optical system.

  1. Ethernet-Enabled Power and Communication Module for Embedded Processors

    Science.gov (United States)

    Perotti, Jose; Oostdyk, Rebecca

    2010-01-01

    The power and communications module is a printed circuit board (PCB) that has the capability of providing power to an embedded processor and converting Ethernet packets into serial data to transfer to the processor. The purpose of the new design is to address the shortcomings of previous designs, including limited bandwidth and program memory, lack of control over packet processing, and lack of support for timing synchronization. The new design of the module creates a robust serial-to-Ethernet conversion that is powered using the existing Ethernet cable. This innovation has a small form factor that allows it to power processors and transducers with minimal space requirements.

  2. On our best behavior: optimality models in human behavioral ecology.

    Science.gov (United States)

    Driscoll, Catherine

    2009-06-01

    This paper discusses problems associated with the use of optimality models in human behavioral ecology. Optimality models are used in both human and non-human animal behavioral ecology to test hypotheses about the conditions generating and maintaining behavioral strategies in populations via natural selection. The way optimality models are currently used in behavioral ecology faces significant problems, which are exacerbated by employing the so-called 'phenotypic gambit': that is, the bet that the psychological and inheritance mechanisms responsible for behavioral strategies will be straightforward. I argue that each of several different possible ways we might interpret how optimality models are being used for humans face similar and additional problems. I suggest some ways in which human behavioral ecologists might adjust how they employ optimality models; in particular, I urge the abandonment of the phenotypic gambit in the human case.

  3. Diffusion Based Modeling of Human Brain Response to External Stimuli

    CERN Document Server

    Namazi, Hamidreza

    2012-01-01

    Human brain response is the overall ability of the brain to analyze internal and external stimuli, in the form of energy transferred to the mind/brain phase-space, and thus to make proper decisions. During the last decade scientists have investigated this phenomenon and proposed models based on computational, biological, or neuropsychological methods. Despite some advances in this area of brain research, comparatively little effort has been devoted to the mathematical modeling of the human brain response to external stimuli. This research is devoted to modeling the human EEG signal, as an alert-state monitor of overall human brain activity, in response to external stimuli, based on a fractional diffusion equation. The results of this modeling show very good agreement with the real human EEG signal, and thus the model can be used as a strong representative of human brain activity.
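    For reference, a time-fractional diffusion equation of the kind invoked above has the generic form (this is the standard textbook form, not necessarily the exact equation used in the paper)

        $\dfrac{\partial^{\alpha} u(x,t)}{\partial t^{\alpha}} = D\, \dfrac{\partial^{2} u(x,t)}{\partial x^{2}}, \qquad 0 < \alpha \le 1,$

    where $u$ is the modeled signal amplitude, $D$ is a generalized diffusion coefficient, and the fractional order $\alpha$ encodes the memory of the process; $\alpha = 1$ recovers ordinary diffusion.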

  4. Multipurpose silicon photonics signal processor core.

    Science.gov (United States)

    Pérez, Daniel; Gasulla, Ivana; Crudgington, Lee; Thomson, David J; Khokhar, Ali Z; Li, Ke; Cao, Wei; Mashanovich, Goran Z; Capmany, José

    2017-09-21

    Integrated photonics changes the scaling laws of information and communication systems offering architectural choices that combine photonics with electronics to optimize performance, power, footprint, and cost. Application-specific photonic integrated circuits, where particular circuits/chips are designed to optimally perform particular functionalities, require a considerable number of design and fabrication iterations leading to long development times. A different approach inspired by electronic Field Programmable Gate Arrays is the programmable photonic processor, where a common hardware implemented by a two-dimensional photonic waveguide mesh realizes different functionalities through programming. Here, we report the demonstration of such reconfigurable waveguide mesh in silicon. We demonstrate over 20 different functionalities with a simple seven hexagonal cell structure, which can be applied to different fields including communications, chemical and biomedical sensing, signal processing, multiprocessor networks, and quantum information systems. Our work is an important step toward this paradigm. Integrated optical circuits today are typically designed for a few special functionalities and require complex design and development procedures. Here, the authors demonstrate a reconfigurable but simple silicon waveguide mesh with different functionalities.

  5. Element Load Data Processor (ELDAP) Users Manual

    Science.gov (United States)

    Ramsey, John K., Jr.; Ramsey, John K., Sr.

    2015-01-01

    Often, the shear and tensile forces and moments are extracted from finite element analyses to be used in off-line calculations for evaluating the integrity of structural connections involving bolts, rivets, and welds. Usually the maximum forces and moments are desired for use in the calculations. In situations where there are numerous structural connections of interest for numerous load cases, the effort in finding the true maximum force and/or moment combinations among all fasteners and welds and load cases becomes difficult. The Element Load Data Processor (ELDAP) software described herein makes this effort manageable. This software eliminates the possibility of overlooking the worst-case forces and moments that could result in erroneous positive margins of safety and/or selecting inconsistent combinations of forces and moments resulting in false negative margins of safety. In addition to forces and moments, any scalar quantity output in a PATRAN report file may be evaluated with this software. This software was originally written to fill an urgent need during the structural analysis of the Ares I-X Interstage segment. As such, this software was coded in a straightforward manner with no effort made to optimize or minimize code or to develop a graphical user interface.
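    The bookkeeping that ELDAP automates (finding, for every connection, the governing load case while keeping forces and moments consistent) amounts to a worst-case search over tabulated rows. The sketch below shows the idea on a generic table; the field names and the interaction measure are hypothetical, since ELDAP itself reads PATRAN report files and real checks use the joint's margin-of-safety equations.

        # Minimal sketch: per connection, keep the single row (load case)
        # that maximizes a chosen interaction measure, so force and moment
        # stay consistent instead of mixing maxima from different cases.
        def worst_case(rows, measure):
            worst = {}
            for row in rows:
                key = row["connection"]
                if key not in worst or measure(row) > measure(worst[key]):
                    worst[key] = row
            return worst

        rows = [
            {"connection": "bolt-1", "case": "LC1", "shear": 4.0, "moment": 1.0},
            {"connection": "bolt-1", "case": "LC2", "shear": 3.0, "moment": 5.0},
        ]
        print(worst_case(rows, lambda r: abs(r["shear"]) + abs(r["moment"])))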

  6. The ATLAS fast tracker processor design

    CERN Document Server

    Volpi, Guido; Albicocco, Pietro; Alison, John; Ancu, Lucian Stefan; Anderson, James; Andari, Nansi; Andreani, Alessandro; Andreazza, Attilio; Annovi, Alberto; Antonelli, Mario; Asbah, Needa; Atkinson, Markus; Baines, J; Barberio, Elisabetta; Beccherle, Roberto; Beretta, Matteo; Biesuz, Nicolo Vladi; Blair, R E; Bogdan, Mircea; Boveia, Antonio; Britzger, Daniel; Bryant, Partick; Burghgrave, Blake; Calderini, Giovanni; Camplani, Alessandra; Cavaliere, Viviana; Cavasinni, Vincenzo; Chakraborty, Dhiman; Chang, Philip; Cheng, Yangyang; Citraro, Saverio; Citterio, Mauro; Crescioli, Francesco; Dawe, Noel; Dell'Orso, Mauro; Donati, Simone; Dondero, Paolo; Drake, G; Gadomski, Szymon; Gatta, Mauro; Gentsos, Christos; Giannetti, Paola; Gkaitatzis, Stamatios; Gramling, Johanna; Howarth, James William; Iizawa, Tomoya; Ilic, Nikolina; Jiang, Zihao; Kaji, Toshiaki; Kasten, Michael; Kawaguchi, Yoshimasa; Kim, Young Kee; Kimura, Naoki; Klimkovich, Tatsiana; Kolb, Mathis; Kordas, K; Krizka, Karol; Kubota, T; Lanza, Agostino; Li, Ho Ling; Liberali, Valentino; Lisovyi, Mykhailo; Liu, Lulu; Love, Jeremy; Luciano, Pierluigi; Luongo, Carmela; Magalotti, Daniel; Maznas, Ioannis; Meroni, Chiara; Mitani, Takashi; Nasimi, Hikmat; Negri, Andrea; Neroutsos, Panos; Neubauer, Mark; Nikolaidis, Spiridon; Okumura, Y; Pandini, Carlo; Petridou, Chariclia; Piendibene, Marco; Proudfoot, James; Rados, Petar Kevin; Roda, Chiara; Rossi, Enrico; Sakurai, Yuki; Sampsonidis, Dimitrios; Saxon, James; Schmitt, Stefan; Schoening, Andre; Shochet, Mel; Shoijaii, Jafar; Soltveit, Hans Kristian; Sotiropoulou, Calliope-Louisa; Stabile, Alberto; Swiatlowski, Maximilian J; Tang, Fukun; Taylor, Pierre Thor Elliot; Testa, Marianna; Tompkins, Lauren; Vercesi, V; Wang, Rui; Watari, Ryutaro; Zhang, Jianhong; Zeng, Jian Cong; Zou, Rui; Bertolucci, Federico

    2015-01-01

    The extended use of tracking information at the trigger level in the LHC is crucial for the trigger and data acquisition (TDAQ) system to fulfill its task. Precise and fast tracking is important to identify specific decay products of the Higgs boson or new phenomena, as well as to distinguish the contributions coming from the many collisions that occur at every bunch crossing. However, track reconstruction is among the most demanding tasks performed by the TDAQ computing farm; in fact, complete reconstruction at the full Level-1 trigger accept rate (100 kHz) is not possible. In order to overcome this limitation, the ATLAS experiment is planning the installation of a dedicated processor, the Fast Tracker (FTK), which is aimed at achieving this goal. The FTK is a pipeline of high performance electronics, based on custom and commercial devices, which is expected to reconstruct, with high resolution, the trajectories of charged-particle tracks with a transverse momentum above 1 GeV, using the ATLAS inner tracker information.

  7. Food processors requirements met by radiation processing

    Science.gov (United States)

    Durante, Raymond W.

    2002-03-01

    Processing food using irradiation provides significant advantages to food producers by destroying harmful pathogens and extending shelf life without any detectable physical or chemical changes. It is expected that through increased public education, food irradiation will emerge as a viable commercial industry. Food production in most countries involves state of the art manufacturing, packaging, labeling, and shipping techniques that provides maximum efficiency and profit. In the United States, food sales are extremely competitive and profit margins small. Most food producers have heavily invested in equipment and are hesitant to modify their equipment. Meat and poultry producers in particular utilize sophisticated production machinery that processes enormous volumes of product on a continuous basis. It is incumbent on the food irradiation equipment suppliers to develop equipment that can easily merge with existing processes without requiring major changes to either the final food product or the process utilized to produce that product. Before a food producer can include irradiation as part of their food production process, they must be certain the available equipment meets their needs. This paper will examine several major requirements of food processors that will most likely have to be provided by the supplier of the irradiation equipment.

  8. Median and Morphological Specialized Processors for a Real-Time Image Data Processing

    Directory of Open Access Journals (Sweden)

    Kazimierz Wiatr

    2002-01-01

    Full Text Available This paper presents considerations on selecting a multiprocessor MISD architecture for fast implementation of vision image processing. Drawing on the author's earlier experience with real-time systems, specialized hardware processors based on programmable FPGA devices are proposed in a pipeline architecture. In particular, the following processors are presented: a median filter and a morphological processor. The structure of a universal reconfigurable processor is proposed as well. Experimental results are presented as LCA-level implementation delays for the median filter, morphological processor, convolution processor, look-up-table processor, logic processor and histogram processor. These times are compared with the delays of a general-purpose processor and a DSP processor.
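    As a reference-level description of the median-filter building block, the sketch below implements a 3x3 median filter in plain software; a hardware pipeline of the kind described above would realize the same operation with a sorting network rather than a software sort.

        # Minimal sketch: 3x3 median filter over a grayscale image given as
        # a list of rows; border pixels are left unchanged for simplicity.
        def median3x3(img):
            h, w = len(img), len(img[0])
            out = [row[:] for row in img]
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    window = sorted(img[y + dy][x + dx]
                                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
                    out[y][x] = window[4]       # middle of the 9 sorted values
            return out

        img = [[0, 0, 0, 0],
               [0, 255, 10, 0],                 # isolated impulse-noise pixel
               [0, 12, 11, 0],
               [0, 0, 0, 0]]
        print(median3x3(img)[1][1])             # 0 -- the impulse is suppressed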

  9. Modelling Human Emotions for Tactical Decision-Making Games

    Science.gov (United States)

    Visschedijk, Gillian C.; Lazonder, Ard W.; van der Hulst, Anja; Vink, Nathalie; Leemkuil, Henny

    2013-01-01

    The training of tactical decision making increasingly occurs through serious computer games. A challenging aspect of designing such games is the modelling of human emotions. Two studies were performed to investigate the relation between fidelity and human emotion recognition in virtual human characters. Study 1 compared five versions of a virtual…

  10. Modelling human emotions for tactical decision-making games

    NARCIS (Netherlands)

    Visschedijk, G.C.; Lazonder, A.W.; Hulst, A.H. van der; Vink, N.; Leemkuil, H.

    2013-01-01

    The training of tactical decision making increasingly occurs through serious computer games. A challenging aspect of designing such games is the modelling of human emotions. Two studies were performed to investigate the relation between fidelity and human emotion recognition in virtual human characters.

  12. The Berkeley Out-of-Order Machine (BOOM): An Industry-Competitive, Synthesizable, Parameterized RISC-V Processor

    Science.gov (United States)

    2015-06-13

    [Only reference fragments are recoverable from this record. They cite the Illinois Verilog Model (IVM), a 4-issue, out-of-order core designed to study transient faults; the Santa Cruz Out-of-Order RISC Engine (SCOORE); FPGA modeling of diverse superscalar processors (ISPASS 2012); and work on a 3D heterogeneous multi-core processor (ICCD 2013).]

  13. Research on Dynamic Model of the Human Body

    Institute of Scientific and Technical Information of China (English)

    ZHANG Chun-lin; WANG Guang-quan; LU Dun-yong

    2005-01-01

    After summarizing the current situation of research on human body modeling, a new dynamic model containing five equivalent masses is proposed and the corresponding dynamic equations are deduced. Using this new model, more detailed information about the behavior of the human body under impact and vibration can be obtained. The new model solves the problem that transmission functions of forces inside the human body cannot be deduced with the 3-equivalent-mass model. It will find use in many applications.

  14. Modelling Human Exposure to Chemicals in Food

    NARCIS (Netherlands)

    Slob W

    1993-01-01

    Exposure to foodborne chemicals is often estimated using the average consumption pattern in the human population. To protect the human population instead of the average individual, however, interindividual variability in consumption behaviour must be taken into account. This report shows how food

  15. Dynamics of the two process model of human sleep regulation

    Science.gov (United States)

    Kenngott, Max; McKay, Cavendish

    2011-04-01

    We examine the dynamics of the two process model of human sleep regulation. In this model, sleep propensity is governed by the interaction between a periodic threshold (process C) and a saturating growth/decay (process S). We find that the parameter space of this model admits sleep cycles with a wide variety of characteristics, many of which are not observed in normal human sleepers. We also examine the effects of phase dependent feedback on this model.
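    For reference, one common parameterization of the two processes (ours for illustration; the paper's exact functional forms and parameter values may differ) is

        $S(t) = 1 - (1 - S_0)\, e^{-t/\tau_r}$ during wake, $\qquad S(t) = S_0\, e^{-t/\tau_d}$ during sleep,

        $C(t) = C_0 + a \sin(2\pi t / 24\,\mathrm{h}),$

    with sleep onset when $S$ reaches the upper circadian threshold and waking when it decays to the lower one; varying $\tau_r$, $\tau_d$, and the threshold amplitude $a$ generates the variety of sleep cycles described in the abstract.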

  16. Minimizing Human Risk: Human Performance Models in the Space Human Factors and Habitability and Behavioral Health and Performance Elements

    Science.gov (United States)

    Gore, Brian F.

    2016-01-01

    Human space exploration has never been more exciting than it is today. Human presence on other worlds is becoming a reality as humans leverage much of our prior knowledge for the new mission of going to Mars. Exploring the solar system at greater distances from Earth than ever before will pose some unique challenges, which can be overcome thanks to advances in modeling and simulation technologies. The National Aeronautics and Space Administration (NASA) is at the forefront of exploring our solar system. NASA's Human Research Program (HRP) focuses on discovering the best methods and technologies that support safe and productive human space travel in the extreme and harsh space environment. HRP uses various methods and approaches to answer questions about the impact of long-duration missions on the human in space, including the effects of gravity on the human body, isolation and confinement, hostile environments, space radiation, and distance from Earth. Predictive models are included in the HRP research portfolio because these models provide valuable insights into human-system operations. This paper provides an overview of NASA's HRP and presents a number of projects that have used modeling and simulation to provide insights into human-system issues (e.g., automation, habitat design, schedules) in anticipation of space exploration.

  17. Architecture and Design of Medical Processor Units for Medical Networks

    CERN Document Server

    Ahamed, Syed V; 10.5121/ijcnc.2010.2602

    2011-01-01

    This paper introduces analogical and deductive methodologies for the design of medical processor units (MPUs). From the study of the evolution of numerous earlier processors, we derive the basis for the architecture of MPUs. These specialized processors perform unique medical functions encoded as medical operational codes (mopcs). From a pragmatic perspective, MPUs function very much like CPUs. Both processors have unique operation codes that command the hardware to perform a distinct chain of subprocesses upon operands and generate a specific result unique to the opcode and the operand(s). In medical environments, the MPU decodes the mopcs, executes a series of medical sub-processes and sends out secondary commands to the medical machine. Whereas operands in a typical computer system are numerical and logical entities, the operands in a medical machine are objects such as patients, blood samples, tissues, operating rooms, medical staff, medical bills, patient payments, etc. We follow the functional overlap betw...

  18. APEmille a parallel processor in the teraflop range

    CERN Document Server

    Panizzi, E

    1996-01-01

    APEmille is a SIMD parallel processor under development at the Italian National Institute for Nuclear Physics (INFN). APEmille is very well suited for Lattice QCD applications, both for its hardware characteristics and for its software and language features. APEmille is an array of custom arithmetic processors arranged on a three-dimensional torus. The replicated processor is a pipelined VLIW device performing integer and single/double precision IEEE floating point operations. The processor is optimized for complex computations and has a peak performance of 528 Mflops at 66 MHz and of 800 Mflops at 100 MHz. In principle an array of 2048 nodes is able to break the Tflops barrier. A powerful programming language named TAO is provided and is highly optimized for QCD. A C++ compiler is foreseen. Specific data structures, operators and even statements can be defined by the user for each different application. Effort has been made to define the language constructs for QCD.

  19. Compiler for Fast, Accurate Mathematical Computing on Integer Processors Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposers will develop a computer language compiler to enable inexpensive, low-power, integer-only processors to carry out mathematically intensive computations...

  20. A Shared Memory Module for Asynchronous Arrays of Processors

    Directory of Open Access Journals (Sweden)

    Zhiyi Yu

    2007-05-01

    Full Text Available A shared memory module connecting multiple independently clocked processors is presented. The memory module itself is independently clocked, supports hardware address generation, mutual exclusion, and multiple addressing modes. The architecture supports independent address generation and data generation/consumption by different processors which increases efficiency and simplifies programming for many embedded and DSP tasks. Simultaneous access by different processors is arbitrated using a least-recently-serviced priority scheme. Simulations show high throughputs over a variety of memory loads. A standard cell implementation shares an 8 K-word SRAM among four processors, and can support a 64 K-word SRAM with no additional changes. It cycles at 555 MHz and occupies 1.2 mm2 in 0.18 μm CMOS.
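    The least-recently-serviced policy mentioned above is straightforward to model behaviorally. The sketch below is our illustration of such an arbiter, not the module's actual RTL: among the processors requesting in a given cycle, it grants the one whose last grant lies furthest in the past.

        # Minimal behavioral sketch of a least-recently-serviced arbiter
        # for processors sharing one memory port.
        class LRSArbiter:
            def __init__(self, n_ports=4):
                self.last_served = [0] * n_ports    # 0 = never serviced
                self.time = 0

            def grant(self, requests):
                """requests: list of port ids asserting a request this cycle."""
                self.time += 1
                if not requests:
                    return None
                winner = min(requests, key=lambda p: self.last_served[p])
                self.last_served[winner] = self.time
                return winner

        arb = LRSArbiter()
        print(arb.grant([0, 2]))  # 0 (tie broken by order in the request list)
        print(arb.grant([0, 2]))  # 2 -- now the least recently serviced port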

  1. 2009 Survey of Gulf of Mexico Dockside Seafood Processors

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This survey gathered and analyzed economic data from seafood processors throughout the states in the Gulf region. The survey sought to collect financial variables...

  3. Processors' training needs on modern shea butter processing ...

    African Journals Online (AJOL)

    Processors' training needs on modern shea butter processing technologies in North Central ... South African Journal of Agricultural Extension ... The need for continual production of high quality shea butter in Nigeria through the use of modern ...

  4. Reconfigurable VLIW Processor for Software Defined Radio Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We will design and formally verify a VLIW processor that is radiation-hardened, and where the VLIW instructions consist of predicated RISC instructions from the...

  5. Numerical Modeling of Electromagnetic Field Effects on the Human Body

    Directory of Open Access Journals (Sweden)

    Zuzana Psenakova

    2006-01-01

    Full Text Available Interactions of electromagnetic fields (EMF) with the environment and with human tissue are still under discussion, and many research teams are investigating them. Human simulation models are used in biomedical research in many areas where it is advantageous to replace the real human body (tissue) with a numerical model. Biological effects of EMF are one of the areas where numerical models are used to great advantage. On the other hand, this research is very specific, and it is always quite hard to simulate realistic human tissue. This paper deals with different possibilities for the numerical modelling of electromagnetic field effects on the human body, especially the calculation of the specific absorption rate (SAR) distribution in the human body and the thermal effect.
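    For context, the SAR quantity computed in such models has the standard pointwise definition (independent of this particular paper)

        $\mathrm{SAR} = \dfrac{\sigma\, |E|^{2}}{\rho},$

    where $\sigma$ is the tissue conductivity (S/m), $|E|$ is the induced electric field strength in the tissue (V/m), and $\rho$ is the tissue mass density (kg/m^3); the resulting power deposition then enters the thermal model as a heat source term.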

  6. Pharmacological migraine provocation: a human model of migraine

    DEFF Research Database (Denmark)

    Ashina, Messoud; Hansen, Jakob Møller

    2010-01-01

    for migraine mechanisms. So far, however, animal models cannot predict the efficacy of new therapies for migraine. Because migraine attacks are fully reversible and can be aborted by therapy, the headache- or migraine-provoking property of naturally occurring signaling molecules can be tested in a human model....... If a naturally occurring substance can provoke migraine in human patients, then it is likely, although not certain, that blocking its effect will be effective in the treatment of acute migraine attacks. To this end, a human in vivo model of experimental headache and migraine in humans has been developed...

  7. Predicting human walking gaits with a simple planar model.

    Science.gov (United States)

    Martin, Anne E; Schmiedeler, James P

    2014-04-11

    Models of human walking with moderate complexity have the potential to accurately capture both joint kinematics and whole body energetics, thereby offering more simultaneous information than very simple models and less computational cost than very complex models. This work examines four- and six-link planar biped models with knees and rigid circular feet. The two differ in that the six-link model includes ankle joints. Stable periodic walking gaits are generated for both models using a hybrid zero dynamics-based control approach. To establish a baseline of how well the models can approximate normal human walking, gaits were optimized to match experimental human walking data, ranging in speed from very slow to very fast. The six-link model well matched the experimental step length, speed, and mean absolute power, while the four-link model did not, indicating that ankle work is a critical element in human walking models of this type. Beyond simply matching human data, the six-link model can be used in an optimization framework to predict normal human walking using a torque-squared objective function. The model well predicted experimental step length, joint motions, and mean absolute power over the full range of speeds.
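    The torque-squared objective mentioned above is a standard effort proxy; written out in our notation (not necessarily the authors'), for joint torques $\tau_i(t)$ over a step of period $T$ the optimizer minimizes

        $J = \int_0^T \sum_i \tau_i^2(t)\, dt,$

    subject to the hybrid dynamics and periodicity constraints of the biped model.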

  8. Multi-agent Model of Trust in a Human Game

    NARCIS (Netherlands)

    Jonker, C.M.; Meijer, S.A.; Tykhonov, D.; Verwaart, D.

    2006-01-01

    Individual-level trust is formalized within the context of a multi-agent system that models human behaviour with respect to trust in the Trust and Tracing Game. This is a trade game on commodity supply chains and networks, designed as a research tool and to be played by human players. The model of trust…

  9. Fuzzy Control Strategies in Human Operator and Sport Modeling

    CERN Document Server

    Ivancevic, Tijana T; Markovic, Sasa

    2009-01-01

    The motivation behind mathematically modeling the human operator is to help explain the response characteristics of the complex dynamical system including the human manual controller. In this paper, we present two different fuzzy logic strategies for human operator and sport modeling: fixed fuzzy-logic inference control and adaptive fuzzy-logic control, including neuro-fuzzy-fractal control. As an application of the presented fuzzy strategies, we present a fuzzy-control based tennis simulator.

  10. An integrated model of human-wildlife interdependence

    Science.gov (United States)

    John, Kun H.; Walsh, Richard G.; Johnson, R. L.

    1994-01-01

    This paper attempts to integrate wildlife-related ecologic and economic variables into an econometric model. The model reveals empirical evidence of the presumed human-wildlife interdependence and the holistic nature of humanity's relationship to the ecosystem. Human use of biologic resources varies not only with income, education, and population, but also with the sustainability of humankind's actions relative to the quality and quantity of the supporting ecological base.

  11. Floating-point multiple data stream digital signal processor

    Energy Technology Data Exchange (ETDEWEB)

    Fortier, M.; Corinthios, M.J.

    1982-01-01

    A microprogrammed multiple data stream digital signal processor is introduced. This floating-point processor is capable of implementing optimum Wiener filtering of signals, in general, and images in particular. Generalised spectral analysis transforms such as Fourier, Walsh, Hadamard, and generalised Walsh are efficiently implemented in a bit-slice microprocessor-based architecture. In this architecture, a microprogrammed sequencing section directly controls a central floating-point signal processing unit. Throughout, computations are performed on pipelined multiple complex data streams. 12 references.

  12. Real time simulator with Ti floating point digital signal processor

    Energy Technology Data Exchange (ETDEWEB)

    Razazian, K.; Bobis, J.P.; Dieckman, S.L.; Raptis, A.C.

    1994-08-01

    This paper describes the design and operation of a real-time simulator using the Texas Instruments TMS320C30 digital signal processor. The system operates with two banks of memory which provide the input data to the digital signal processor chip. This feature enables the TMS320C30 to be utilized in a variety of applications for which external connections to acquire input data are not needed. In addition, some practical applications of this real-time simulator are discussed.

  13. Nanosensor Data Processor in Quantum-Dot Cellular Automata

    OpenAIRE

    Fenghui Yao; Mohamed Saleh Zein-Sabatto; Guifeng Shao; Mohammad Bodruzzaman; Mohan Malkani

    2014-01-01

    Quantum-dot cellular automata (QCA) is an attractive nanotechnology and a potential alternative to CMOS technology. QCA provides an interesting paradigm for faster speed, smaller size, and lower power consumption in comparison to transistor-based technology, in both communication and computation. This paper describes the design of a 4-bit multifunction nanosensor data processor (NSDP). The functions of the NSDP include (i) sending the preprocessed raw data to a high-level processor, (ii) counting...

  14. An Imaging Infrared (IIR) seeker using a microprogrammed processor

    Science.gov (United States)

    Richmond, K. V.

    1980-01-01

    A recently developed Imaging Infrared Seeker uses a microprogrammed processor to perform gimbal servo control and system interface while performing the seeker functions of automatic target detection, acquisition, and tracking. The automatic detection mode requires up to 80% of the available capability of a high performance microprogrammed processor. Although system complexity was increased significantly, this approach can be cost effective when the basic computation capacity is already available.

  15. Fast Parallel Computation of Polynomials Using Few Processors

    DEFF Research Database (Denmark)

    Valiant, Leslie G.; Skyum, Sven; Berkowitz, S.;

    1983-01-01

    It is shown that any multivariate polynomial of degree $d$ that can be computed sequentially in $C$ steps can be computed in parallel in $O((\log d)(\log C + \log d))$ steps using only $(Cd)^{O(1)}$ processors.

  16. Fast parallel computation of polynomials using few processors

    DEFF Research Database (Denmark)

    Valiant, Leslie; Skyum, Sven

    1981-01-01

    It is shown that any multivariate polynomial that can be computed sequentially in $C$ steps and has degree $d$ can be computed in parallel in $O((\log d)(\log C + \log d))$ steps using only $(Cd)^{O(1)}$ processors.

  17. Optimal Generic Advertising under Bilateral Imperfect Competition between Processors and Retailers

    OpenAIRE

    Chung, Chanjin; Eom, Young Sook; Yang, Byung Woo; Han, Sungill

    2013-01-01

    The purpose of this paper is to examine the impact of bilateral imperfect competition between processors and retailers and of import supply on optimal advertising intensity, advertising expenditures, and checkoff assessment rates. First, comparative static analyses were conducted on the newly developed optimal advertising intensity formula. Second, to consider the endogenous nature of optimal advertising, a linear market equilibrium model was developed and applied to the U.S. beef industry. R...

  18. Fast Track Pattern Recognition in High Energy Physics Experiments with the Automata Processor

    CERN Document Server

    Wang, Michael H L S; Green, Christopher; Guo, Deyuan; Wang, Ke; Zmuda, Ted

    2016-01-01

    We explore the Micron Automata Processor (AP) as a suitable commodity technology that can address the growing computational needs of track pattern recognition in High Energy Physics experiments. A toy detector model is developed for which a track trigger based on the Micron AP is used to demonstrate a proof-of-principle. Although primarily meant for high speed text-based searches, we demonstrate that the Micron AP is ideally suited to track finding applications.

  19. Humanized mouse model for assessing the human immune response to xenogeneic and allogeneic decellularized biomaterials.

    Science.gov (United States)

    Wang, Raymond M; Johnson, Todd D; He, Jingjin; Rong, Zhili; Wong, Michelle; Nigam, Vishal; Behfar, Atta; Xu, Yang; Christman, Karen L

    2017-06-01

    Current assessment of biomaterial biocompatibility is typically implemented in wild type rodent models. Unfortunately, different characteristics of the immune systems in rodents versus humans limit the capability of these models to mimic the human immune response to naturally derived biomaterials. Here we investigated the utility of humanized mice as an improved model for testing naturally derived biomaterials. Two injectable hydrogels derived from decellularized porcine or human cadaveric myocardium were compared. Three days and one week after subcutaneous injection, the hydrogels were analyzed for early and mid-phase immune responses, respectively. Immune cells in the humanized mouse model, particularly T-helper cells, responded distinctly between the xenogeneic and allogeneic biomaterials. The allogeneic extracellular matrix derived hydrogels elicited significantly reduced total, human specific, and CD4(+) T-helper cell infiltration in humanized mice compared to xenogeneic extracellular matrix hydrogels, which was not recapitulated in wild type mice. T-helper cells, in response to the allogeneic hydrogel material, were also less polarized towards a pro-remodeling Th2 phenotype compared to xenogeneic extracellular matrix hydrogels in humanized mice. In both models, both biomaterials induced the infiltration of macrophages polarized towards a M2 phenotype and T-helper cells polarized towards a Th2 phenotype. In conclusion, these studies showed the importance of testing naturally derived biomaterials in immune competent animals and the potential of utilizing this humanized mouse model for further studying human immune cell responses to biomaterials in an in vivo environment.

  20. Having Your Cake and Eating It Too: Autonomy and Interaction in a Model of Sentence Processing

    CERN Document Server

    Eiselt, K P; Holbrook, J K; Eiselt, Kurt P.; Mahesh, Kavi; Holbrook, Jennifer K.

    1994-01-01

    Is the human language understander a collection of modular processes operating with relative autonomy, or is it a single integrated process? This ongoing debate has polarized the language processing community, with two fundamentally different types of model posited, and with each camp concluding that the other is wrong. One camp puts forth a model with separate processors and distinct knowledge sources to explain one body of data, and the other proposes a model with a single processor and a homogeneous, monolithic knowledge source to explain the other body of data. In this paper we argue that a hybrid approach which combines a unified processor with separate knowledge sources provides an explanation of both bodies of data, and we demonstrate the feasibility of this approach with the computational model called COMPERE. We believe that this approach brings the language processing community significantly closer to offering human-like language processing systems.

  1. Pearls and pitfalls in human pharmacological models of migraine

    DEFF Research Database (Denmark)

    Ashina, Messoud; Hansen, Jakob Møller; Olesen, Jes

    2013-01-01

    In vitro studies have contributed to the characterization of receptors in cranial blood vessels and the identification of new possible anti-migraine agents. In vivo animal models enable the study of vascular responses, neurogenic inflammation, peptide release and genetic predisposition and thus...... have provided leads in the search for migraine mechanisms. All animal-based results must, however, be validated in human studies because so far no animal models can predict the efficacy of new therapies for migraine. Given the nature of migraine attacks, fully reversible and treatable, the headache....... To this end, a human in vivo model of experimental headache and migraine in humans has been developed. Human models of migraine offer unique possibilities to study mechanisms responsible for migraine and to explore the mechanisms of action of existing and future anti-migraine drugs. The human model has played...

  2. THOR Field and Wave Processor - FWP

    Science.gov (United States)

    Soucek, Jan; Rothkaehl, Hanna; Balikhin, Michael; Zaslavsky, Arnaud; Nakamura, Rumi; Khotyaintsev, Yuri; Uhlir, Ludek; Lan, Radek; Yearby, Keith; Morawski, Marek; Winkler, Marek

    2016-04-01

    If selected, the Turbulence Heating ObserveR (THOR) will become the first space mission ever dedicated to plasma turbulence. The Fields and Waves Processor (FWP) is an integrated electronics unit for all electromagnetic field measurements performed by THOR. FWP will interface with all fields sensors: the electric field antennas of the EFI instrument, the MAG fluxgate magnetometer and the search-coil magnetometer (SCM), and will perform data digitization and on-board processing. The FWP box will house multiple data acquisition sub-units and signal analyzers, all sharing a common power supply and data processing unit and thus a single data and power interface to the spacecraft. Integrating all the electromagnetic field measurements in a single unit will improve the consistency of field measurements and the accuracy of time synchronization. The feasibility of making highly sensitive electric and magnetic field measurements in space has been demonstrated by Cluster (among other spacecraft), and THOR instrumentation, complemented by a thorough electromagnetic cleanliness program, will further improve on this heritage. Taking advantage of the capabilities of modern electronics and of the large telemetry bandwidth of THOR, FWP will provide simultaneous synchronized waveform and spectral data products at high time resolution from the numerous THOR sensors. FWP will also implement a plasma resonance sounder and a digital plasma quasi-thermal noise analyzer designed to provide high-cadence measurements of plasma density and temperature complementary to data from the particle instruments. FWP will be interfaced with the particle instrument data processing unit (PPU) via a dedicated digital link, enabling on-board correlation between waves and particles and quantifying the transfer of energy between waves and particles. The FWP instrument shall be designed and built by an international consortium of scientific institutes from the Czech Republic, Poland, France, the UK, and Sweden.

  3. High Throughput Bent-Pipe Processor Demonstrator

    Science.gov (United States)

    Tabacco, P.; Vernucci, A.; Russo, L.; Cangini, P.; Botticchio, T.; Angeletti, P.

    2008-08-01

    The work described in this article is a study initiative sponsored by ESA/ESTEC that responds to the crucial need to develop new satellite payloads capable of rapid progress in handling large amounts of data at a price competitive with terrestrial alternatives in the telecommunications field. Considering the quite limited band allocated to space communications at Ka band, reusing the same band in a large number of beams is mandatory; beam-forming is therefore the right technological answer. Technological progress, mainly in the digital domain, also helps greatly in increasing satellite capacity. Next-generation satellite payload targets are set in the throughput range of 50 Gbps. Although the implementation of a wideband transparent processor for a high-capacity communication payload is a very challenging task, the Space Engineering team, in the frame of this ESA study, proposed an intermediate development step: a scalable unit able to demonstrate both the capacity and flexibility objectives for different types of wideband beamforming antenna designs. To this aim, the article describes the features of the wideband hardware (analog and digital) platform purposely developed by Space Engineering in the frame of this ESA/ESTEC contract (the "WDBFN" contract), with some preliminary system test results. The same platform and part of the associated software will be used in the development and demonstration of the real payload digital front-end mux and demux algorithms, as well as the beam forming and on-board channel switching in the frequency domain. At the time of writing, although new FPGAs and new ADC and DAC converters have become available as choices for wideband system implementation, the two hardware platforms developed by Space Engineering, namely the WDBFN ADC and DAC boards, still represent the best-performing units in terms of analog bandwidth, processing capability (in terms of FPGA module density), SERDES (SERializer-DESerializer) external link density, and integration form

  4. A novel VLSI processor architecture for supercomputing arrays

    Science.gov (United States)

    Venkateswaran, N.; Pattabiraman, S.; Devanathan, R.; Ahmed, Ashaf; Venkataraman, S.; Ganesh, N.

    1993-01-01

    Design of the processor element for general-purpose massively parallel supercomputing arrays is highly complex and cost-ineffective. To overcome this, the architecture and organization of the functional units of the processor element should be such as to suit the diverse computational structures and simplify mapping of the complex communication structures of different classes of algorithms. This demands that the computation and communication structures of different classes of algorithms be unified. While unifying the different communication structures is a difficult process, analysis of a wide class of algorithms reveals that their computation structures can be expressed in terms of basic IP, IP, OP, CM, R, SM, and MAA operations. The execution of these operations is unified on the PAcube macro-cell array. Based on this PAcube macro-cell array, we present a novel processor element called the GIPOP processor, which has dedicated functional units to perform the above operations. The architecture and organization of these functional units are such as to satisfy the two important criteria mentioned above. The structure of the macro-cell and the unification process have led to a very regular and simpler design of the GIPOP processor. The production cost of the GIPOP processor is drastically reduced as it is designed on high-performance mask-programmable PAcube arrays.

  5. Review of trigger and on-line processors at SLAC

    Energy Technology Data Exchange (ETDEWEB)

    Lankford, A.J.

    1984-07-01

    The role of trigger and on-line processors in reducing data rates to manageable proportions in e+e- physics experiments is defined not by high physics or background rates, but by the large event sizes of the general-purpose detectors employed. The rate of e+e- annihilation is low, and backgrounds are not high; yet the number of physics processes which can be studied is vast and varied. This paper begins by briefly describing the role of trigger processors in the e+e- context. The usual flow of the trigger decision process is illustrated with selected examples of SLAC trigger processing. The features of triggering at the SLC and the trigger processing plans of the two SLC detectors, the Mark II and the SLD, are mentioned. The most common on-line processors at SLAC, the BADC, the SLAC Scanner Processor, the SLAC FASTBUS Controller, and the VAX CAMAC Channel, are discussed. Uses of the 168/E, 3081/E, and FASTBUS VAX processors are mentioned. The manner in which these processors are interfaced and the function they serve on line is described. Finally, the accelerator control system for the SLC is outlined. This paper is a survey in nature and hence relies heavily upon references to previous publications for detailed descriptions of the work mentioned here. 27 references, 9 figures, 1 table.

  6. High-Speed General Purpose Genetic Algorithm Processor.

    Science.gov (United States)

    Hoseini Alinodehi, Seyed Pourya; Moshfe, Sajjad; Saber Zaeimian, Masoumeh; Khoei, Abdollah; Hadidi, Khairollah

    2016-07-01

    In this paper, an ultrafast steady-state genetic algorithm processor (GAP) is presented. Due to the heavy computational load of genetic algorithms (GAs), they usually take a long time to find optimum solutions. Hardware implementation is a significant approach to overcome the problem by speeding up the GA procedure. Hence, we designed a digital CMOS implementation of a GA in a [Formula: see text] process. The proposed processor is not bound to a specific application. Indeed, it is a general-purpose processor, capable of performing optimization in any possible application. Utilizing speed-boosting techniques, such as a pipeline scheme, parallel coarse-grained processing, parallel fitness computation, parallel selection of parents, a dual-population scheme, and support for pipelined fitness computation, the proposed processor significantly reduces the processing time. Furthermore, by relying on a built-in discard operator, the proposed hardware may be used in constrained problems that are very common in control applications. In the proposed design, a large search space is achievable by extending the bit-string length of individuals in the genetic population through connection of the 32-bit GAPs. In addition, the proposed processor supports parallel processing, in which the GA procedure can be run on several connected processors simultaneously.
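
    The steady-state GA loop that such hardware pipelines admits a compact software statement: tournament selection of parents, crossover, mutation, and replacement of the worst individual. The sketch below is a plain software analogue with an invented one-max fitness function and invented sizes, not the processor's actual microarchitecture.

      import random

      # Steady-state GA sketch: tournament selection, one-point crossover,
      # bit mutation, replace-worst. Fitness function and sizes are invented.
      BITS, POP, STEPS = 32, 64, 2000

      def fitness(x):
          return bin(x).count("1")          # toy objective: count of 1-bits

      def tournament(pop):
          a, b = random.sample(pop, 2)      # the ASIC selects parents in parallel
          return a if fitness(a) >= fitness(b) else b

      def crossover(a, b):
          point = random.randint(1, BITS - 1)
          mask = (1 << point) - 1
          return (a & mask) | (b & ~mask)   # child: low bits of a, high bits of b

      def mutate(x, rate=1.0 / BITS):
          for i in range(BITS):
              if random.random() < rate:
                  x ^= 1 << i               # flip bit i
          return x

      pop = [random.getrandbits(BITS) for _ in range(POP)]
      for _ in range(STEPS):
          child = mutate(crossover(tournament(pop), tournament(pop)))
          worst = min(range(POP), key=lambda i: fitness(pop[i]))
          if fitness(child) > fitness(pop[worst]):
              pop[worst] = child            # steady state: child displaces the worst

      print(max(fitness(x) for x in pop))   # approaches BITS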

  7. Reconfigurable signal processor designs for advanced digital array radar systems

    Science.gov (United States)

    Suarez, Hernan; Zhang, Yan (Rockee); Yu, Xining

    2017-05-01

    The new challenges originating from Digital Array Radar (DAR) demand a new generation of reconfigurable backend processors in the system. The new FPGA devices can support much higher speed, more bandwidth and greater processing capability for the needs of a digital Line Replaceable Unit (LRU). This study focuses on using the latest Altera and Xilinx devices in an adaptive beamforming processor. Field-reprogrammable RF devices from Analog Devices are used as analog front-end transceivers. Different from other existing Software-Defined Radio transceivers on the market, this processor is designed for distributed adaptive beamforming in a networked environment. The following aspects of the novel radar processor will be presented: (1) a new system-on-chip architecture based on Altera's devices and an adaptive processing module, especially for adaptive beamforming and pulse compression, will be introduced; (2) successful implementation of generation 2 serial RapidIO data links on FPGA, which support the VITA-49 radio packet format for large distributed DAR processing; (3) demonstration of the feasibility and capabilities of the processor in a Micro-TCA based, SRIO switching backplane to support multichannel beamforming in real-time; (4) application of this processor in ongoing radar system development projects, including OU's dual-polarized digital array radar, the planned new cylindrical array radars, and future airborne radars.
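
    As an illustration of the computation such a backend accelerates, the sketch below forms adaptive (minimum-variance distortionless response) beamforming weights from a sample covariance matrix. The array geometry, signal scenario and sizes are assumptions made for the example, not parameters of the processor described above.

      import numpy as np

      # MVDR / sample-matrix-inversion beamforming sketch for an 8-element
      # uniform linear array with half-wavelength spacing (assumed geometry).
      M, N = 8, 512                                   # elements, snapshots
      d = np.arange(M)

      def steering(theta_deg):
          return np.exp(1j * np.pi * d * np.sin(np.radians(theta_deg)))

      rng = np.random.default_rng(0)
      signal = steering(10.0)[:, None] * (rng.standard_normal(N) * 0.5)
      jammer = steering(-40.0)[:, None] * (rng.standard_normal(N) * 3.0)
      noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
      x = signal + jammer + noise                     # array snapshots

      R = x @ x.conj().T / N                          # sample covariance
      a = steering(10.0)                              # look direction
      w = np.linalg.solve(R, a)
      w /= a.conj() @ w                               # distortionless: w^H a = 1

      pattern = lambda th: abs(w.conj() @ steering(th))
      print(f"gain toward 10 deg: {pattern(10.0):.2f}, toward jammer: {pattern(-40.0):.4f}")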

  8. Design and Implementation of Quintuple Processor Architecture Using FPGA

    Directory of Open Access Journals (Sweden)

    P.Annapurna

    2014-09-01

    The advanced quintuple processor core reflects a design philosophy that has become mainstream in scientific and engineering applications. The increasing performance and gate capacity of recent FPGA devices permit complex logic systems to be implemented on a single programmable device. Embedded multiprocessors face a new problem with thread synchronization, caused by the distributed memory: when thread synchronization is violated, the processors can access the same value at the same time. Processor performance can essentially be increased by adopting clock-scaling techniques and microarchitectural enhancements. Therefore, a new architecture called Advanced Concurrent Computing was designed and implemented on an FPGA chip using VHDL. The Advanced Concurrent Computing architecture makes simultaneous use of both parallel and distributed computing. The full architecture of the quintuple processor core is designed to perform arithmetic, logical, shifting and bit-manipulation operations. The proposed advanced quintuple processor core contains homogeneous RISC processors, augmented with pipelined processing units, a multi-bus organization and I/O ports, along with the other functional elements required to implement embedded SoC solutions. Performance issues of the designed quintuple core, such as area, speed, power dissipation and propagation delay, are analyzed at 90 nm process technology using the Xilinx tool.

  9. The Serial Link Processor for the Fast TracKer (FTK) processor at ATLAS

    CERN Document Server

    Andreani, A; The ATLAS collaboration; Beccherle, R; Beretta, M; Cipriani, R; Citraro, S; Citterio, M; Colombo, A; Crescioli, F; Dimas, D; Donati, S; Giannetti, P; Kordas, K; Lanza, A; Liberali, V; Luciano, P; Magalotti, D; Neroutsos, P; Nikolaidis, S; Piendibene, M; Sakellariou, A; Shojaii, S; Sotiropoulou, C-L; Stabile, A

    2014-01-01

    The Associative Memory (AM) system of the FTK processor has been designed to perform pattern matching using the hit information of the ATLAS silicon tracker. The AM is the heart of the FTK and it finds track candidates at low resolution that are seeds for a full resolution track fitting. To solve the very challenging data traffic problems inside the FTK, multiple designs and tests have been performed. The currently proposed solution is named the “Serial Link Processor” and is based on an extremely powerful network of 2 Gb/s serial links. This paper reports on the design of the Serial Link Processor consisting of the AM chip, an ASIC designed and optimized to perform pattern matching, and two types of boards, the Local Associative Memory Board (LAMB), a mezzanine where the AM chips are mounted, and the Associative Memory Board (AMB), a 9U VME board which holds and exercises four LAMBs. Special relevance will be given to the AMchip design that includes two custom cells optimized for low consumption. We repo...
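
    The pattern-matching principle behind the AM chips can be stated in a few lines of software: hits are coarsened into "superstrips", and a pattern fires when every detector layer matches. The bank and event below are invented toy values; a real AM bank holds millions of patterns compared in parallel in silicon.

      # Toy sketch of associative-memory pattern matching for track finding.
      # Each pattern lists one coarse superstrip id per layer; a pattern that
      # matches in all layers becomes a "road", the seed for a full-resolution fit.
      LAYERS = 4
      PATTERN_BANK = [
          (3, 5, 7, 9),
          (3, 5, 8, 9),
          (1, 2, 2, 3),
      ]

      def find_roads(hits_per_layer):
          """hits_per_layer: one set of hit superstrip ids per layer.
          The AM chip compares all patterns in parallel; here we loop."""
          roads = []
          for pattern in PATTERN_BANK:
              if all(pattern[l] in hits_per_layer[l] for l in range(LAYERS)):
                  roads.append(pattern)
          return roads

      event = [{3, 1}, {5}, {7, 8}, {9}]
      print(find_roads(event))        # -> [(3, 5, 7, 9), (3, 5, 8, 9)]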

  10. The Serial Link Processor for the Fast TracKer (FTK) processor at ATLAS

    CERN Document Server

    Biesuz, Nicolo Vladi; The ATLAS collaboration; Luciano, Pierluigi; Magalotti, Daniel; Rossi, Enrico

    2015-01-01

    The Associative Memory (AM) system of the Fast Tracker (FTK) processor has been designed to perform pattern matching using the hit information of the ATLAS experiment silicon tracker. The AM is the heart of FTK and is mainly based on the use of ASICs (AM chips) designed to execute pattern matching with a high degree of parallelism. The AM system finds track candidates at low resolution that are seeds for a full resolution track fitting. To solve the very challenging data traffic problems inside FTK, multiple board and chip designs have been performed. The currently proposed solution is named the “Serial Link Processor” and is based on an extremely powerful network of 828 2 Gbit/s serial links for a total in/out bandwidth of 56 Gb/s. This paper reports on the design of the Serial Link Processor consisting of two types of boards, the Local Associative Memory Board (LAMB), a mezzanine where the AM chips are mounted, and the Associative Memory Board (AMB), a 9U VME board which holds and exercises four LAMBs. ...

  11. The Serial Link Processor for the Fast TracKer (FTK) processor at ATLAS

    CERN Document Server

    Biesuz, Nicolo Vladi; The ATLAS collaboration; Luciano, Pierluigi; Magalotti, Daniel; Rossi, Enrico

    2015-01-01

    The Associative Memory (AM) system of the Fast Tracker (FTK) processor has been designed to perform pattern matching using the hit information of the ATLAS experiment silicon tracker. The AM is the heart of FTK and is mainly based on the use of ASICs (AM chips) designed on purpose to execute pattern matching with a high degree of parallelism. It finds track candidates at low resolution that are seeds for a full resolution track fitting. To solve the very challenging data traffic problems inside FTK, multiple board and chip designs have been performed. The currently proposed solution is named the “Serial Link Processor” and is based on an extremely powerful network of 2 Gb/s serial links. This paper reports on the design of the Serial Link Processor consisting of two types of boards, the Local Associative Memory Board (LAMB), a mezzanine where the AM chips are mounted, and the Associative Memory Board (AMB), a 9U VME board which holds and exercises four LAMBs. We report on the performance of the intermedia...

  12. Distributed digital signal processors for multi-body flexible structures

    Science.gov (United States)

    Lee, Gordon K. F.

    1992-01-01

    Multi-body flexible structures, such as those currently under investigation in spacecraft design, are large scale (high-order) dimensional systems. Controlling and filtering such structures is a computationally complex problem. This is particularly important when many sensors and actuators are located along the structure and need to be processed in real time. This report summarizes research activity focused on solving the signal processing (that is, information processing) issues of multi-body structures. A distributed architecture is developed in which single loop processors are employed for local filtering and control. By implementing such a philosophy with an embedded controller configuration, a supervising controller may be used to process global data and make global decisions as the local devices are processing local information. A hardware testbed, a position controller system for a servo motor, is employed to illustrate the capabilities of the embedded controller structure. Several filtering and control structures which can be modeled as rational functions can be implemented on the system developed in this research effort. Thus the results of the study provide a support tool for many Control/Structure Interaction (CSI) NASA testbeds such as the Evolutionary model and the nine-bay truss structure.

  13. Mice with human immune system components as in vivo models for infections with human pathogens

    OpenAIRE

    Rämer, P C; Chijioke, O; Meixlsperger, S; Leung, C S; Münz, C.

    2011-01-01

    Many pathogens relevant to human disease do not infect other animal species. Therefore, animal models that reconstitute or harbour human tissues are explored as hosts for these. In this review, we will summarize recent advances to utilize mice with human immune system components, reconstituted from hematopoietic progenitor cells in vivo. Such mice can be used to study human pathogens that replicate in leucocytes. In addition to studying the replication of these pathogens, the reconstituted hu...

  14. Variation in calculated human exposure. Comparison of calculations with seven European human exposure models

    NARCIS (Netherlands)

    Swartjes F; ECO

    2003-01-01

    Twenty scenarios, differing with respect to land use, soil type and contaminant, formed the basis for calculating human exposure from soil contaminants with the use of models contributed by seven European countries (one model per country). Here, the human exposures to children and children

  15. Modeling Human Behaviour with Higher Order Logic: Insider Threats

    DEFF Research Database (Denmark)

    Boender, Jaap; Ivanova, Marieta Georgieva; Kammuller, Florian

    2014-01-01

    In this paper, we approach the problem of modeling the human component in technical systems with a view on the difference between the use of model and theory in sociology and computer science. One aim of this essay is to show that building of theories and models for sociology can be compared … it to the sociological process of logical explanation. As a case study on modeling human behaviour, we present the modeling and analysis of insider threats as a Higher Order Logic theory in Isabelle/HOL. We show how each of the three-step process of sociological explanation can be seen in our modeling of insider's state…

  16. Application of Acoustical Processor Reactors for Degradation of Diazinon from Surface Water

    Directory of Open Access Journals (Sweden)

    M Shayeghi

    2010-12-01

    Full Text Available "nAbstract"nBackground: Since organophosphorus pesticides are widely used for industry and insect control in agricultural crops, their fate in the environment is very important. Pesticide contamination of surface water has been recog­nized as a major contaminant in world because of their potential toxicity towards human and animals. The objec­tive of this research was to investigate the influence of various parameters including the influence of time, power, and initial concentration on degradation of diazinon pesticide."nMethods: The sonochemical degradation of diazinon was investigated using acoustical processor reactor. Acous­tical processor reactor with 130 kHz was used to study the degradation of pesticide solution. Samples were ana­lyzed using HPLC at different time intervals. Effectiveness of APR at different times (20, 40, 60, 80, 100, and 120 min, concentrations (2, 4 and 8 mg/L and powers (300W, 400W, 500W were compared."nResults: The degradation of the diazinon at lower concentrations was greater in comparison to higher concentra­tions. There was also direct correlation between power and diazinon degradation. In addition, when the power increased, the ability to degraded diazinon increased."nConclusion: The sonodegradation of diazinon pesticide at different concentrations and powers was successfully provided. It has been shown that APR can be used to reduce the concentration of dissolved pesticide using high frequency.  Keywords: Diazinon, acoustical processor reactor, initial concentration, power, time

  17. Human Cancer Models Initiative | Office of Cancer Genomics

    Science.gov (United States)

    The Human Cancer Models Initiative (HCMI) is an international consortium that is generating novel human tumor-derived culture models, which are annotated with genomic and clinical data. In an effort to advance cancer research and more fully understand how in vitro findings are related to clinical biology, HCMI-developed models and related data will be available as a community resource for cancer research.

  18. Human surrogate models of neuropathic pain: validity and limitations.

    Science.gov (United States)

    Binder, Andreas

    2016-02-01

    Human surrogate models of neuropathic pain in healthy subjects are used to study symptoms, signs, and the hypothesized underlying mechanisms. Although different models are available and different spontaneous and evoked symptoms and signs are inducible, two key questions need to be answered: are human surrogate models conceptually valid, i.e., do they share the sensory phenotype of neuropathic pain states, and are they sufficiently reliable to allow consistent translational research?

  19. [Attempt at computer modeling of evolution of human society].

    Science.gov (United States)

    Levchenko, V F; Menshutkin, V V

    2009-01-01

    A model of the evolution of human society and the biosphere, based on the concepts of V. I. Vernadskii on the noosphere and of L. N. Gumilev on ethnogenesis, is developed and studied. The mathematical apparatus of the model is a composition of finite stochastic automata. Using this model, the possibility of a global ecological crisis is demonstrated in the case of preservation of the current tendencies in the interaction of the biosphere and human civilization.

  20. A novel SCID mouse model for studying spontaneous metastasis of human lung cancer to human tissue.

    Science.gov (United States)

    Teraoka, S; Kyoizumi, S; Seyama, T; Yamakido, M; Akiyama, M

    1995-05-01

    We established a novel severe combined immunodeficient (SCID) mouse model for the study of human lung cancer metastasis to human lung. Implantation of both human fetal and adult lung tissue into mammary fat pads of SCID mice showed a 100% rate of engraftment, but only fetal lung implants revealed normal morphology of human lung tissue. Using these chimeric mice, we analyzed human lung cancer metastasis to both mouse and human lungs by subcutaneous inoculation of human squamous cell carcinoma and adenocarcinoma cell lines into the mice. In 60 to 70% of SCID mice injected with human-lung squamous-cell carcinoma, RERF-LC-AI, cancer cells were found to have metastasized to both mouse lungs and human fetal lung implants but not to human adult lung implants 80 days after cancer inoculation. Furthermore, human-lung adenocarcinoma cells, RERF-LC-KJ, metastasized to the human lung implants within 90 days in about 40% of SCID mice, whereas there were no metastases to the lungs of the mice. These results demonstrate the potential of this model for the in vivo study of human lung cancer metastasis.

  1. Detection of Unusual Human Activities Based on Behavior Modeling

    OpenAIRE

    Hiraishi, Kunihiko; Kobayashi, Koichi

    2014-01-01

    A type of service that requires human physical actions and intelligent decision making exists in various real fields, such as nursing in hospitals and caregiving in nursing homes. In this paper, we propose a new formalism for modeling human behavior in such services. Behavior models are estimated from event logs and can be used for the analysis of human activities. We show two analysis methods: one is to detect unusual human activities that appear in event logs, and the other is to find staffs tha...

  2. Data flow analysis of a highly parallel processor for a level 1 pixel trigger

    Energy Technology Data Exchange (ETDEWEB)

    Cancelo, G. [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Gottschalk, Erik Edward [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Pavlicek, V. [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Wang, M. [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Wu, J. [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States)

    2003-01-01

    The present work describes the architecture and data flow analysis of a highly parallel processor for the Level 1 Pixel Trigger for the BTeV experiment at Fermilab. First the Level 1 Trigger system is described. Then the major components are analyzed by resorting to mathematical modeling. Also, behavioral simulations are used to confirm the models. Results from modeling and simulations are fed back into the system in order to improve the architecture, eliminate bottlenecks, allocate sufficient buffering between processes and obtain other important design parameters. An interesting feature of the current analysis is that the models can be extended to a large class of architectures and parallel systems.
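
    A flavor of the mathematical modeling involved: a single trigger stage can be approximated as an M/M/1 queue, giving quick estimates of utilization, buffer occupancy and latency before committing to behavioral simulation. The rates below are invented placeholders, not BTeV parameters.

      # Back-of-envelope queueing model of one trigger processing stage.
      def mm1_stats(arrival_rate, service_rate):
          """Mean occupancy and latency of an M/M/1 queue (Poisson arrivals)."""
          rho = arrival_rate / service_rate             # utilization, must be < 1
          assert rho < 1, "stage is saturated; add processors or buffering"
          mean_in_system = rho / (1 - rho)              # events queued + in service
          mean_latency = mean_in_system / arrival_rate  # Little's law: L = lambda W
          return rho, mean_in_system, mean_latency

      lam, mu = 2.2e6, 2.5e6                            # events/s in, events/s served
      rho, occupancy, latency = mm1_stats(lam, mu)
      print(f"utilization {rho:.0%}, mean buffer occupancy {occupancy:.1f} events, "
            f"mean latency {latency * 1e6:.2f} us")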

  3. Mouse models for understanding human developmental anomalies

    Energy Technology Data Exchange (ETDEWEB)

    Generoso, W.M.

    1989-01-01

    The mouse experimental system presents an opportunity for studying the nature of the underlying mutagenic damage and the molecular pathogenesis of this class of anomalies by virtue of the accessibility of the zygote and its descendant blastomeres. Such studies could contribute to the understanding of the etiology of certain sporadic but common human malformations. The vulnerability of the zygotes to mutagens as demonstrated in the studies described in this report should be a major consideration in chemical safety evaluation. It raises questions regarding the danger to human zygotes when the mother is exposed to drugs and environmental chemicals.

  4. Optogenetics in Silicon: A Neural Processor for Predicting Optically Active Neural Networks.

    Science.gov (United States)

    Luo, Junwen; Nikolic, Konstantin; Evans, Benjamin D; Dong, Na; Sun, Xiaohan; Andras, Peter; Yakovlev, Alex; Degenaar, Patrick

    2016-08-17

    We present a reconfigurable neural processor for real-time simulation and prediction of opto-neural behaviour. We combined a detailed Hodgkin-Huxley CA3 neuron model with a four-state Channelrhodopsin-2 (ChR2) model in reconfigurable silicon hardware. Our architecture consists of a Field Programmable Gate Array (FPGA) with a custom-built computing data-path, a separate data management system and a memory-approach-based router. Advancements over previous work include the incorporation of short- and long-term calcium and light-dependent ion channels in reconfigurable hardware. Also, the developed processor is computationally efficient, requiring only 0.03 ms processing time per sub-frame for a single neuron and 9.7 ms for a fully connected network of 500 neurons with a given FPGA frequency of 56.7 MHz. It can therefore be utilized for exploration of closed loop processing and tuning of biologically realistic optogenetic circuitry.
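
    The opto-neural model combines conductance-based membrane dynamics with a four-state ChR2 photocycle. The sketch below integrates a simplified four-state photocycle (closed states C1, C2; open states O1, O2) with forward Euler and reads out a photocurrent; all rate constants and the light-intensity scaling are rough placeholders, not the parameters used in the processor.

      import numpy as np

      # Simplified four-state ChR2 photocycle, forward-Euler integration.
      # All constants below are illustrative placeholders.
      dt = 1e-5                        # s
      Gd1, Gd2 = 110.0, 25.0           # open -> closed decay rates (1/s)
      e12, e21 = 10.0, 15.0            # O1 <-> O2 interconversion (1/s)
      Gr = 0.4                         # C2 -> C1 dark recovery (1/s)
      g, gamma, E, V = 1.0, 0.1, 0.0, -70.0   # conductance, O2 ratio, reversal, Vm

      def simulate(light, t_end=1.0):
          C1, O1, O2, C2 = 1.0, 0.0, 0.0, 0.0
          current = []
          for step in range(int(t_end / dt)):
              on = light(step * dt)            # light intensity (arbitrary units)
              k1, k2 = 5.0 * on, 1.0 * on      # placeholder photon-absorption rates
              dC1 = Gr * C2 + Gd1 * O1 - k1 * C1
              dO1 = k1 * C1 + e21 * O2 - (Gd1 + e12) * O1
              dO2 = k2 * C2 + e12 * O1 - (Gd2 + e21) * O2
              dC2 = Gd2 * O2 - (Gr + k2) * C2          # populations sum to 1
              C1 += dt * dC1; O1 += dt * dO1; O2 += dt * dO2; C2 += dt * dC2
              current.append(g * (O1 + gamma * O2) * (V - E))   # photocurrent
          return np.array(current)

      I = simulate(lambda t: 100.0 if 0.1 <= t < 0.6 else 0.0)  # 500 ms light pulse
      print(f"peak |I| = {abs(I).max():.2f}, steady |I| = {abs(I[int(0.55 / dt)]):.2f}")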

  5. Diesel fuel processor for hydrogen production for 5 kW fuel cell application

    Energy Technology Data Exchange (ETDEWEB)

    Sopena, D.; Melgar, A.; Briceno, Y. [Fundacion CIDAUT. Parque Tecnologico de Boecillo, P. 209, 47151 Boecillo (Valladolid) (Spain); Navarro, R.M.; Alvarez-Galvan, M.C. [Instituto de Catalisis y Petroquimica (CSIC), C/ Marie Curie 2, Cantoblanco (Madrid) (Spain); Rosa, F. [Instituto Nacional de Tecnica Aeroespacial, Carretera San Juan del Puerto-Matalascanas, km 33, 21130 Mazagon-Moguer (Huelva) (Spain)

    2007-07-15

    The present paper describes a diesel fuel processor designed to produce hydrogen to feed a 5 kW PEM fuel cell. The fuel processor includes three reactors in series: (1) an oxidative steam reforming reactor; (2) a one-step water gas shift reactor; and (3) a preferential oxidation reactor. The design of the system was accomplished by means of a one-dimensional model. A specific study of the fuel-air mixing chamber was carried out with Fluent, taking into account fuel evaporation and cool flame processes. The assembly of the installation allowed the characterisation of each component and the control of each working parameter. The first experimental results obtained in the reformer system using decalin and diesel fuels demonstrate the feasibility of the design to produce hydrogen suitable to feed a PEM fuel cell. (author)

  6. Optogenetics in Silicon: A Neural Processor for Predicting Optically Active Neural Networks.

    Science.gov (United States)

    Junwen Luo; Nikolic, Konstantin; Evans, Benjamin D; Na Dong; Xiaohan Sun; Andras, Peter; Yakovlev, Alex; Degenaar, Patrick

    2017-02-01

    We present a reconfigurable neural processor for real-time simulation and prediction of opto-neural behaviour. We combined a detailed Hodgkin-Huxley CA3 neuron model with a four-state Channelrhodopsin-2 (ChR2) model in reconfigurable silicon hardware. Our architecture consists of a Field Programmable Gate Array (FPGA) with a custom-built computing data-path, a separate data management system and a memory-approach-based router. Advancements over previous work include the incorporation of short- and long-term calcium and light-dependent ion channels in reconfigurable hardware. Also, the developed processor is computationally efficient, requiring only 0.03 ms processing time per sub-frame for a single neuron and 9.7 ms for a fully connected network of 500 neurons with a given FPGA frequency of 56.7 MHz. It can therefore be utilized for exploration of closed loop processing and tuning of biologically realistic optogenetic circuitry.

  7. VLSI based FFT Processor with Improvement in Computation Speed and Area Reduction

    Directory of Open Access Journals (Sweden)

    M.Sheik Mohamed

    2013-06-01

    In this paper, a modular approach is presented to develop parallel pipelined architectures for a fast Fourier transform (FFT) processor. The new pipelined FFT architecture exploits otherwise underutilized hardware, based on the complex conjugate of the final-stage results, without increasing the hardware complexity. The operating frequency of the new architecture can be decreased, which in turn reduces the power consumption. A comparison of area and computing time is drawn between the new design and previous architectures. The new structure is synthesized using Xilinx ISE and simulated using ModelSim Starter Edition. The designed FFT algorithm is realized in our processor to reduce the number of complex computations.
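
    The redundancy exploited by such designs comes from the conjugate symmetry of the DFT of a real input, X[N-k] = conj(X[k]): only about half the spectrum needs to be computed, and the remainder is a mirrored conjugate copy. A short demonstration, assuming NumPy:

      import numpy as np

      # For real input of length N, X[N-k] = conj(X[k]), so half the bins
      # determine the whole spectrum; pipelined FFT designs reuse otherwise
      # idle hardware on the strength of this redundancy.
      N = 16
      x = np.random.default_rng(1).standard_normal(N)

      half = np.fft.rfft(x)                      # bins 0 .. N/2 only
      full = np.empty(N, dtype=complex)
      full[: N // 2 + 1] = half
      full[N // 2 + 1 :] = np.conj(half[1 : N // 2][::-1])   # mirror by symmetry

      print(np.allclose(full, np.fft.fft(x)))    # True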

  8. A natural-gas fuel processor for a residential fuel cell system

    Science.gov (United States)

    Adachi, H.; Ahmed, S.; Lee, S. H. D.; Papadias, D.; Ahluwalia, R. K.; Bendert, J. C.; Kanner, S. A.; Yamazaki, Y.

    A system model was used to develop an autothermal reforming fuel processor to meet the targets of 80% efficiency (higher heating value) and start-up energy consumption of less than 500 kJ when operated as part of a 1-kWe natural-gas-fueled fuel cell system for cogeneration of heat and power. The key catalytic reactors of the fuel processor, namely the autothermal reformer, a two-stage water gas shift reactor and a preferential oxidation reactor, were configured and tested in a breadboard apparatus. Experimental results demonstrated a reformate containing ∼48% hydrogen (on a dry basis and with pure methane as fuel) and less than 5 ppm CO. The effects of steam-to-carbon ratio and part-load operation were explored.
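
    The 80% higher-heating-value efficiency target has a simple definition: the HHV carried by the delivered hydrogen divided by the HHV of the fuel fed. A toy calculation follows; the flow rates are invented, not measurements from this system.

      # Fuel-processor efficiency on a higher-heating-value (HHV) basis:
      #   eta = (H2 molar flow * HHV_H2) / (fuel molar flow * HHV_fuel)
      HHV_H2 = 286.0        # kJ/mol
      HHV_CH4 = 890.0       # kJ/mol, methane as the natural-gas surrogate

      h2_out = 2.5e-3       # mol/s of hydrogen delivered (hypothetical)
      ch4_in = 1.0e-3       # mol/s of methane fed (hypothetical)

      eta = (h2_out * HHV_H2) / (ch4_in * HHV_CH4)
      print(f"HHV efficiency: {eta:.1%}")    # the target for this system was 80%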

  9. Evaluation of MERIS Case-II Water Processors in the Baltic Sea

    OpenAIRE

    Arroyo Pedrero, Jaume

    2009-01-01

    Project carried out in collaboration with Helsinki University of Technology. Four MERIS Case-II Water Processors are studied, compared and evaluated: the Coastal Case 2 Regional Processor, the Boreal Lakes Processor, the Eutrophic Lakes Processor and the FUB/WeW Water Processor. In situ data from the Baltic Sea have been used to evaluate the water-constituent estimations. In addition, the effect of the adjacency-effect correction (ICOL) on the estimation has been analyzed. For this purpose, a set of tools has been d...

  10. Drosophila Melanogaster as an Emerging Translational Model of Human Nephrolithiasis

    Science.gov (United States)

    Miller, Joe; Chi, Thomas; Kapahi, Pankaj; Kahn, Arnold J.; Kim, Man Su; Hirata, Taku; Romero, Michael F.; Dow, Julian A.T.; Stoller, Marshall L.

    2013-01-01

    Purpose The limitations imposed by human clinical studies and mammalian models of nephrolithiasis have hampered the development of effective medical treatments and preventative measures for decades. The simple but elegant Drosophila melanogaster is emerging as a powerful translational model of human disease, including nephrolithiasis and may provide important information essential to our understanding of stone formation. We present the current state of research using D. melanogaster as a model of human nephrolithiasis. Materials and Methods A comprehensive review of the English language literature was performed using PUBMED. When necessary, authoritative texts on relevant subtopics were consulted. Results The genetic composition, anatomic structure and physiologic function of Drosophila Malpighian tubules are remarkably similar to those of the human nephron. The direct effects of dietary manipulation, environmental alteration, and genetic variation on stone formation can be observed and quantified in a matter of days. Several Drosophila models of human nephrolithiasis, including genetically linked and environmentally induced stones, have been developed. A model of calcium oxalate stone formation is among the most recent fly models of human nephrolithiasis. Conclusions The ability to readily manipulate and quantify stone formation in D. melanogaster models of human nephrolithiasis presents the urologic community with a unique opportunity to increase our understanding of this enigmatic disease. PMID:23500641

  11. An Ideal Design for an Idea Processor.

    Science.gov (United States)

    1986-09-01

    Braid, and in The Mind's Eye (with Daniel Dennett), ignited my interest in conceptual aspects of the human mind. On the practical side, it was while... factors studies traditionally cover more specific processes of the human mind, such as memory and perception. Research on communication is typically more... understanding of perception, memory, motivation, and other aspects of human thought. However, the specific activities involved in composing have not been

  12. Current humanized mouse models for studying human immunology and HIV-1 immuno-pathogenesis

    Institute of Scientific and Technical Information of China (English)

    MEISSNER; Eric

    2010-01-01

    A robust animal model for "hypothesis-testing/mechanistic" research in human immunology and immuno-pathology should meet the following criteria. First, it has well-studied hemato-lymphoid organs and target cells similar to those of humans. Second, the human pathogens establish infection and lead to relevant diseases. Third, it is genetically inbred and can be manipulated via genetic, immunological and pharmacological means. Many human-tropic pathogens such as HIV-1 fail to infect murine cells due to blocks at multiple steps of their life cycle. The mouse with a reconstituted human immune system and other human target organs is a good candidate. A number of human-mouse chimeric models with human immune cells have been developed in the past 20 years, but most with only limited success due to the selective engraftment of xeno-reactive human T cells in hu-PBL-SCID mice or the lack of significant human immune responses in the SCID-hu Thy/Liv mouse. This review summarizes the current understanding of HIV-1 immuno-pathogenesis in human patients and in SIV-infected primate models. It also reviews the recent progress in the development of humanized mouse models with a functional human immune system, especially the recent progress in immunodeficient mice that carry a defective gammaC gene. NOD/SCID/gammaC-/- (NOG or NSG) or Rag2-/-/gammaC-/- double knockout (DKO) mice, which lack NK as well as T and B cells (NTB-null mice), have been used to reconstitute a functional human immune system in central and peripheral lymphoid organs with human CD34+ HSC. These NTB-hu HSC humanized models have been used to investigate HIV-1 infection, immuno-pathogenesis and therapeutic interventions. Such models, with further improvements, will contribute to the study of human immunology, human-tropic pathogens as well as human stem cell biology in tissue development and function in vivo.

  13. THOR Fields and Wave Processor - FWP

    Science.gov (United States)

    Soucek, Jan; Rothkaehl, Hanna; Ahlen, Lennart; Balikhin, Michael; Carr, Christopher; Dekkali, Moustapha; Khotyaintsev, Yuri; Lan, Radek; Magnes, Werner; Morawski, Marek; Nakamura, Rumi; Uhlir, Ludek; Yearby, Keith; Winkler, Marek; Zaslavsky, Arnaud

    2017-04-01

    If selected, the Turbulence Heating ObserveR (THOR) will become the first spacecraft mission dedicated to the study of plasma turbulence. The Fields and Waves Processor (FWP) is an integrated electronics unit for all electromagnetic field measurements performed by THOR. FWP will interface with all THOR fields sensors: the electric field antennas of the EFI instrument, the MAG fluxgate magnetometer, and the search-coil magnetometer (SCM), and will perform signal digitization and on-board data processing. The FWP box will house multiple data acquisition sub-units and signal analyzers, all sharing a common power supply and data processing unit and thus a single data and power interface to the spacecraft. Integrating all the electromagnetic field measurements in a single unit will improve the consistency of field measurements and the accuracy of time synchronization. The scientific value of highly sensitive electric and magnetic field measurements in space has been demonstrated by Cluster (among other spacecraft), and THOR instrumentation will further improve on this heritage. The large dynamic range of the instruments will be complemented by a thorough electromagnetic cleanliness program, which will prevent perturbation of the field measurements by interference from payload and platform subsystems. Taking advantage of the capabilities of modern electronics and the large telemetry bandwidth of THOR, FWP will provide multi-component electromagnetic field waveforms and spectral data products at a high time resolution. Fully synchronized sampling of many signals will make it possible to resolve wave phase information and to estimate wavelengths via interferometric correlations between EFI probes. FWP will also implement a plasma resonance sounder and a digital plasma quasi-thermal noise analyzer designed to provide high-cadence measurements of plasma density and temperature complementary to data from particle instruments. FWP will rapidly transmit information about the magnetic field vector and spacecraft potential to the

  14. Mice with human immune system components as in vivo models for infections with human pathogens.

    Science.gov (United States)

    Rämer, Patrick C; Chijioke, Obinna; Meixlsperger, Sonja; Leung, Carol S; Münz, Christian

    2011-03-01

    Many pathogens relevant to human disease do not infect other animal species. Therefore, animal models that reconstitute or harbor human tissues are explored as hosts for these. In this review, we will summarize recent advances to utilize mice with human immune system components, reconstituted from hematopoietic progenitor cells in vivo. Such mice can be used to study human pathogens that replicate in leukocytes. In addition to studying the replication of these pathogens, the reconstituted human immune system components can also be analyzed for initiating immune responses and control against these infections. Moreover, these new animal models of human infectious disease should replicate the reactivity of the human immune system to vaccine candidates and, especially, the adjuvants contained in them, more faithfully.

  15. A novel mouse model for stable engraftment of a human immune system and human hepatocytes.

    Directory of Open Access Journals (Sweden)

    Helene Strick-Marchand

    Hepatic infections by hepatitis B virus (HBV), hepatitis C virus (HCV) and Plasmodium parasites leading to acute or chronic diseases constitute a global health challenge. The species tropism of these hepatotropic pathogens is restricted to chimpanzees and humans, thus model systems to study their pathological mechanisms are severely limited. Although these pathogens infect hepatocytes, disease pathology is intimately related to the degree and quality of the immune response. As a first step to decipher the immune response to infected hepatocytes, we developed an animal model harboring both a human immune system (HIS) and human hepatocytes (HUHEP) in BALB/c Rag2-/- IL-2Rγc-/- NOD.sirpa uPAtg/tg mice. The extent and kinetics of human hepatocyte engraftment were similar between HUHEP and HIS-HUHEP mice. Transplanted human hepatocytes were polarized and mature in vivo, resulting in 20-50% liver chimerism in these models. Human myeloid and lymphoid cell lineages developed at similar frequencies in HIS and HIS-HUHEP mice, and splenic and hepatic compartments were humanized with mature B cells, NK cells and naïve T cells, as well as monocytes and dendritic cells. Taken together, these results demonstrate that HIS-HUHEP mice can be stably (> 5 months) and robustly engrafted with a humanized immune system and chimeric human liver. This novel HIS-HUHEP model provides a platform to investigate human immune responses against hepatotropic pathogens and to test novel drug strategies or vaccine candidates.

  16. A novel mouse model for stable engraftment of a human immune system and human hepatocytes.

    Science.gov (United States)

    Strick-Marchand, Helene; Dusséaux, Mathilde; Darche, Sylvie; Huntington, Nicholas D; Legrand, Nicolas; Masse-Ranson, Guillemette; Corcuff, Erwan; Ahodantin, James; Weijer, Kees; Spits, Hergen; Kremsdorf, Dina; Di Santo, James P

    2015-01-01

    Hepatic infections by hepatitis B virus (HBV), hepatitis C virus (HCV) and Plasmodium parasites leading to acute or chronic diseases constitute a global health challenge. The species tropism of these hepatotropic pathogens is restricted to chimpanzees and humans, thus model systems to study their pathological mechanisms are severely limited. Although these pathogens infect hepatocytes, disease pathology is intimately related to the degree and quality of the immune response. As a first step to decipher the immune response to infected hepatocytes, we developed an animal model harboring both a human immune system (HIS) and human hepatocytes (HUHEP) in BALB/c Rag2-/- IL-2Rγc-/- NOD.sirpa uPAtg/tg mice. The extent and kinetics of human hepatocyte engraftment were similar between HUHEP and HIS-HUHEP mice. Transplanted human hepatocytes were polarized and mature in vivo, resulting in 20-50% liver chimerism in these models. Human myeloid and lymphoid cell lineages developed at similar frequencies in HIS and HIS-HUHEP mice, and splenic and hepatic compartments were humanized with mature B cells, NK cells and naïve T cells, as well as monocytes and dendritic cells. Taken together, these results demonstrate that HIS-HUHEP mice can be stably (> 5 months) and robustly engrafted with a humanized immune system and chimeric human liver. This novel HIS-HUHEP model provides a platform to investigate human immune responses against hepatotropic pathogens and to test novel drug strategies or vaccine candidates.

  17. New Metacognitive Model for Human Performance Technology

    Science.gov (United States)

    Turner, John R.

    2011-01-01

    Addressing metacognitive functions has been shown to improve performance at the individual, team, group, and organizational levels. Metacognition is beginning to surface as an added cognate discipline for the field of human performance technology (HPT). Advances from research in the fields of cognition and metacognition offer a place for HPT to…

  18. Silicon quantum processor with robust long-distance qubit couplings

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, Rajib [Purdue University; Tosi, Guilherme [Centre for Quantum Computation and Communication Technology; Schmitt, Vivien [Centre for Quantum Computation and Communication Technology; Klimeck, Gerhard [Purdue University; Tenberg, Stefanie B. [Centre for Quantum Computation and Communication Technology; Morello, Andrea [Centre for Quantum Computation and Communication Technology; Mohiyaddin, Fahd A. [ORNL

    2017-09-01

    Practical quantum computers require a large network of highly coherent qubits, interconnected in a design robust against errors. Donor spins in silicon provide state-of-the-art coherence and quantum gate fidelities, in a platform adapted from industrial semiconductor processing. Here we present a scalable design for a silicon quantum processor that does not require precise donor placement and leaves ample space for the routing of interconnects and readout devices. We introduce the flip-flop qubit, a combination of the electron-nuclear spin states of a phosphorus donor that can be controlled by microwave electric fields. Two-qubit gates exploit a second-order electric dipole-dipole interaction, allowing selective coupling beyond the nearest-neighbor, at separations of hundreds of nanometers, while microwave resonators can extend the entanglement to macroscopic distances. We predict gate fidelities within fault-tolerance thresholds using realistic noise models. This design provides a realizable blueprint for scalable spin-based quantum computers in silicon.

  19. Detector defect correction of medical images on graphics processors

    Science.gov (United States)

    Membarth, Richard; Hannig, Frank; Teich, Jürgen; Litz, Gerhard; Hornegger, Heinz

    2011-03-01

    The ever-increasing complexity and power dissipation of computer architectures in the last decade blazed the trail for more power-efficient parallel architectures. Hence, architectures like field-programmable gate arrays (FPGAs) and in particular graphics cards attained great interest and are consequently adopted for parallel execution of many number-crunching loop programs from fields like image processing or linear algebra. However, there has been little effort to deploy barely computational but memory-intensive applications to graphics hardware. This paper considers a memory-intensive detector defect correction pipeline for medical imaging with strict latency requirements. The image pipeline compensates for different effects caused by the detector during exposure of X-ray images and calculates parameters to control the subsequent dosage. So far, dedicated hardware setups with special processors like DSPs were used for such critical processing. We show that this is today feasible with commodity graphics hardware. Using CUDA as the programming model, it is demonstrated that the detector defect correction pipeline consisting of more than ten algorithms is significantly accelerated and that a speedup of 20x can be achieved on NVIDIA's Quadro FX 5800 compared to our reference implementation. For deployment in a streaming application with steadily new incoming data, it is shown that the memory transfer overhead of successive images to the graphics card memory is reduced by 83% using double buffering.
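
    The heart of a defect-correction step is simple even though the pipeline around it is memory-bound: pixels flagged in a defect map are replaced by an interpolation of their valid neighbors. The sketch below shows a neighbor-mean variant on synthetic data; it is illustrative only, not the vendor pipeline or its GPU kernel.

      import numpy as np

      # Replace pixels flagged in a defect map by the mean of their valid
      # 8-neighbors. Image and defect map are synthetic; assumes each defect
      # has at least one valid neighbor.
      rng = np.random.default_rng(0)
      img = rng.integers(0, 4096, size=(256, 256)).astype(np.float32)  # 12-bit frame
      defects = rng.random(img.shape) < 0.001                          # bad-pixel map

      def correct(img, defects):
          out = img.copy()
          for y, x in zip(*np.nonzero(defects)):
              y0, y1 = max(y - 1, 0), min(y + 2, img.shape[0])
              x0, x1 = max(x - 1, 0), min(x + 2, img.shape[1])
              good = ~defects[y0:y1, x0:x1]
              good[y - y0, x - x0] = False        # exclude the defect itself
              out[y, x] = img[y0:y1, x0:x1][good].mean()
          return out

      corrected = correct(img, defects)
      print(f"corrected {defects.sum()} defective pixels")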

  1. On program restructuring, scheduling, and communication for parallel processor systems

    Energy Technology Data Exchange (ETDEWEB)

    Polychronopoulos, Constantine D.

    1986-08-01

    This dissertation discusses several software and hardware aspects of program execution on large-scale, high-performance parallel processor systems. The issues covered are program restructuring, partitioning, scheduling and interprocessor communication, synchronization, and hardware design issues of specialized units. All this work was performed with a single goal in mind: to maximize program speedup, or equivalently, to minimize parallel execution time. Parafrase, a Fortran restructuring compiler, was used to transform programs into parallel form and to conduct experiments. Two new program restructuring techniques are presented: loop coalescing and subscript blocking. Compile-time and run-time scheduling schemes are covered extensively. Depending on the program construct, these algorithms generate optimal or near-optimal schedules. For the case of arbitrarily nested hybrid loops, two optimal scheduling algorithms for dynamic and static scheduling are presented. Simulation results are given for a new dynamic scheduling algorithm, and its performance is compared to that of self-scheduling. Techniques for program partitioning and minimization of interprocessor communication for idealized program models and for real Fortran programs are also discussed. The close relationship between scheduling, interprocessor communication, and synchronization becomes apparent at several points in this work. Finally, the impact of various types of overhead on program speedup and experimental results are presented. 69 refs., 74 figs., 14 tabs.

  2. The Potential of the Cell Processor for Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel; Shalf, John; Oliker, Leonid; Husbands, Parry; Kamil, Shoaib; Yelick, Katherine

    2005-10-14

    The slowing pace of commodity microprocessor performance improvements, combined with ever-increasing chip power demands, has become of utmost concern to computational scientists. As a result, the high-performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the forthcoming STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. We are the first to present quantitative Cell performance data on scientific kernels and show direct comparisons against leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1) architectures. Since neither Cell hardware nor cycle-accurate simulators are currently publicly available, we develop both analytical models and simulators to predict kernel performance. Our work also explores the complexity of mapping several important scientific algorithms onto the Cell's unique architecture. Additionally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall, the results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
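
    Analytical models of the kind used here often reduce to a bound-and-bottleneck estimate: kernel time is bounded below by both compute time and memory-traffic time. A sketch follows, using commonly quoted Cell peak figures purely for illustration; the kernel numbers are invented.

      # "Bound and bottleneck" performance model: execution time is the max of
      # compute time and memory time, assuming perfect overlap.
      PEAK_FLOPS = 204.8e9      # single-precision flop/s (commonly quoted, 8 SPEs)
      PEAK_BW = 25.6e9          # bytes/s of main-memory bandwidth

      def predict(flops, bytes_moved):
          t_compute = flops / PEAK_FLOPS
          t_memory = bytes_moved / PEAK_BW
          t = max(t_compute, t_memory)
          bound = "memory" if t_memory > t_compute else "compute"
          return t, bound

      # Example: SpMV-like kernel, ~2 flops and ~12 bytes per nonzero (invented).
      nnz = 10_000_000
      t, bound = predict(2 * nnz, 12 * nnz)
      print(f"predicted {2 * nnz / t / 1e9:.1f} Gflop/s, {bound}-bound")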

  3. Reward-based learning under hardware constraints - Using a RISC processor embedded in a neuromorphic substrate

    Directory of Open Access Journals (Sweden)

    Simon eFriedmann

    2013-09-01

    In this study, we propose and analyze in simulations a new, highly flexible method of implementing synaptic plasticity in a wafer-scale, accelerated neuromorphic hardware system. The study focuses on globally modulated STDP, as a special use-case of this method. Flexibility is achieved by embedding a general-purpose processor dedicated to plasticity into the wafer. To evaluate the suitability of the proposed system, we use a reward-modulated STDP rule in a spike train learning task. A single layer of neurons is trained to fire at specific points in time with only the reward as feedback. This model is simulated to measure its performance, i.e. the increase in received reward after learning. Using this performance as baseline, we then simulate the model with various constraints imposed by the proposed implementation and compare the performance. The simulated constraints include discretized synaptic weights, a restricted interface between analog synapses and embedded processor, and mismatch of analog circuits. We find that probabilistic updates can increase the performance of low-resolution weights, a simple interface between analog synapses and processor is sufficient for learning, and performance is insensitive to mismatch. Further, we consider communication latency between the wafer and the conventional control computer system that is simulating the environment. This latency increases the delay with which the reward is sent to the embedded processor. Because of the time-continuous operation of the analog synapses, delay can cause a deviation of the updates as compared to the non-delayed situation. We find that for highly accelerated systems latency has to be kept to a minimum. This study demonstrates the suitability of the proposed implementation to emulate the selected reward-modulated STDP learning rule. It is therefore an ideal candidate for implementation in an upgraded version of the wafer-scale system developed within the BrainScaleS project.
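
    Reward-modulated STDP separates correlation detection from weight change: pair-based STDP contributions accumulate in a per-synapse eligibility trace, and weights move only when a (possibly delayed) scalar reward arrives. The sketch below shows this structure; all constants and spike statistics are placeholders, not BrainScaleS parameters.

      import numpy as np

      # Reward-modulated STDP with an eligibility trace (illustrative constants).
      rng = np.random.default_rng(0)
      n_syn, T, dt = 50, 1000, 1.0                  # synapses, steps, ms per step
      tau_plus = tau_minus = 20.0                   # STDP time constants (ms)
      tau_e = 500.0                                 # eligibility-trace decay (ms)
      a_plus, a_minus, lr = 1.0, 1.05, 0.01

      w = np.full(n_syn, 0.5)
      x_pre = np.zeros(n_syn)                       # presynaptic trace
      x_post = 0.0                                  # postsynaptic trace
      elig = np.zeros(n_syn)

      for t in range(T):
          pre = rng.random(n_syn) < 0.02            # Poisson-ish presynaptic spikes
          post = rng.random() < 0.05                # postsynaptic spike
          x_pre = x_pre * np.exp(-dt / tau_plus) + pre
          x_post = x_post * np.exp(-dt / tau_minus) + post
          # pre-before-post potentiates, post-before-pre depresses:
          stdp = a_plus * x_pre * post - a_minus * x_post * pre
          elig = elig * np.exp(-dt / tau_e) + stdp  # store correlations, not weights
          reward = 1.0 if t % 200 == 199 else 0.0   # sparse, delayed reward signal
          w = np.clip(w + lr * reward * elig, 0.0, 1.0)

      print(f"mean weight after learning: {w.mean():.3f}")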

  4. Human performance modeling for system of systems analytics.

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Kevin R.; Lawton, Craig R.; Basilico, Justin Derrick; Longsine, Dennis E. (INTERA, Inc., Austin, TX); Forsythe, James Chris; Gauthier, John Henry; Le, Hai D.

    2008-10-01

    A Laboratory-Directed Research and Development project was initiated in 2005 to investigate Human Performance Modeling in a System of Systems analytic environment. SAND2006-6569 and SAND2006-7911 document interim results from this effort; this report documents the final results. The problem is difficult because of the number of humans involved in a System of Systems environment and the generally poorly defined nature of the tasks that each human must perform. A two-pronged strategy was followed: one prong was to develop human models using a probability-based method similar to that first developed for relatively well-understood probability-based performance modeling; another prong was to investigate more state-of-the-art human cognition models. The probability-based modeling resulted in a comprehensive addition of human-modeling capability to the existing SoSAT computer program. The cognitive modeling resulted in an increased understanding of what is necessary to incorporate cognition-based models into a System of Systems analytic environment.

  5. Understanding What We Do: Emerging Models for Human Rights Education

    Science.gov (United States)

    Tibbitts, Felisa

    2002-07-01

    The author presents three approaches to contemporary human rights education practice: the Values and Awareness Model, the Accountability Model and the Transformational Model. Each model is associated with particular target groups, contents and strategies. The author suggests that these models can lend themselves to theory development and research in what might be considered an emerging educational field. Human rights education can be further strengthened through the appropriate use of learning theory, as well as through the setting of standards for trainer preparation and program content, and through evaluating the impact of programs in terms of reaching learner goals (knowledge, values and skills) and contributing to social change.

  6. Biostereometric Data Processing In ERGODATA: Choice Of Human Body Models

    Science.gov (United States)

    Pineau, J. C.; Mollard, R.; Sauvignon, M.; Amphoux, M.

    1983-07-01

    The definition of human body models was elaborated with anthropometric data from ERGODATA. The first model reduces the human body to a series of points and lines. The second model is well adapted to represent the volumes of each segmentary element. The third is an original model built from the conventional anatomical points. Each segment is defined in space by a triangular plane located with its 3-D coordinates. This new model can support all the processing requirements in the field of computer-aided design (CAD) in ergonomics, but also in biomechanics and orthopaedics.
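
    Representing a segment by a triangular plane makes its spatial attitude a one-line computation: the unit normal of the plane spanned by the three anatomical points. A sketch with invented landmark coordinates:

      import numpy as np

      # A body segment as a triangular plane through three anatomical landmarks;
      # its attitude is the plane's unit normal. Coordinates are invented.
      p1 = np.array([0.12, 0.40, 1.02])   # e.g. acromion (m)
      p2 = np.array([0.10, 0.38, 0.74])   # e.g. lateral epicondyle
      p3 = np.array([0.15, 0.42, 0.73])   # e.g. medial epicondyle

      normal = np.cross(p2 - p1, p3 - p1)
      normal /= np.linalg.norm(normal)            # unit normal = segment attitude
      centroid = (p1 + p2 + p3) / 3.0             # segment location

      print("unit normal:", np.round(normal, 3), "centroid:", np.round(centroid, 3))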

  7. Human Digital Modeling & Hand Scanning Lab

    Data.gov (United States)

    Federal Laboratory Consortium — This laboratory incorporates specialized scanning equipment, computer workstations and software applications for the acquisition and analysis of digitized models of...

  8. Digital signal processor for silicon audio playback devices; Silicon audio saisei kikiyo digital signal processor

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    The TC9446F series of digital audio signal processors (DSPs) has been developed for silicon audio playback devices that use a memory medium such as flash memory, for DVD players, and for AV devices such as TV sets. It supports AAC (advanced audio coding, 2ch) and MP3 (MPEG1 Layer3), the audio compression techniques used for transmitting music over the Internet. It also supports compression formats such as Dolby Digital, DTS (digital theater system) and MPEG2 audio, which are adopted for DVDs and similar media. It can carry built-in audio signal processing programs, e.g., Dolby ProLogic, equalizer, sound field control, and 3D sound. The TC9446XB has been newly added to the lineup; it adopts an FBGA (fine pitch ball grid array) package for portable audio devices. (translated by NEDO)

  9. Emissions variability processor (EMVAP): design, evaluation, and application.

    Science.gov (United States)

    Paine, Robert; Szembek, Carlos; Heinold, David; Knipping, Eladio; Kumar, Naresh

    2014-12-01

    Emissions of pollutants such as SO2 and NOx from external combustion sources can vary widely depending on fuel sulfur content, load, and transient conditions such as startup, shutdown, and maintenance/malfunction. While monitoring automatically reflects variability from both emissions and meteorological influences, dispersion modeling has typically been conducted with a single constant peak emission rate. To respond to the need to account for emissions variability in addressing probabilistic 1-hr ambient air quality standards for SO2 and NO2, we have developed a statistical technique, the Emissions Variability Processor (EMVAP), which accounts for emissions variability in dispersion modeling through Monte Carlo sampling from a specified frequency distribution of emission rates. Based upon initial AERMOD modeling of 1 to 5 years of actual meteorological conditions, EMVAP is used as a postprocessor to AERMOD to simulate hundreds or even thousands of years of concentration predictions. This procedure varies emissions hourly with a Monte Carlo sampling process based upon the user-specified emissions distribution, from which a probabilistic estimate of the controlling concentration can be obtained. EMVAP can also accommodate an advanced Tier 2 NO2 modeling technique that uses a varying ambient ratio method approach to determine the fraction of total oxides of nitrogen that are in the form of nitrogen dioxide. For the case of the 1-hr National Ambient Air Quality Standards (NAAQS, established for SO2 and NO2), a "critical value" can be defined as the highest hourly emission rate that would be simulated to satisfy the standard using air dispersion models assuming constant emissions throughout the simulation. The critical value can be used as the starting point for a procedure like EMVAP that evaluates the impact of emissions variability and uses this information to determine an appropriate value to use for a longer-term (e.g., 30-day) average
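
    Abstracting from the record above, the core EMVAP idea (scaling hourly unit-emission concentration predictions by emission rates drawn from a user-specified distribution, repeated for many synthetic years) can be sketched as follows. The input distributions, the 99th-percentile statistic and all parameter values are illustrative assumptions, not EMVAP defaults:

        import numpy as np

        def simulate_annual_peaks(unit_conc, emis_rates, n_years=1000, seed=0):
            """Monte Carlo postprocessing in the spirit of EMVAP.

            unit_conc  : hourly concentrations modelled at a unit emission
                         rate (e.g. one year of AERMOD output).
            emis_rates : empirical sample of hourly emission rates.
            Returns one peak statistic per simulated year (here the 99th
            percentile of hourly concentrations, chosen for illustration).
            """
            rng = np.random.default_rng(seed)
            n_hours = unit_conc.shape[0]
            peaks = np.empty(n_years)
            for year in range(n_years):
                sampled = rng.choice(emis_rates, size=n_hours, replace=True)
                peaks[year] = np.percentile(unit_conc * sampled, 99)
            return peaks

        # Illustrative inputs: one modelled year, a skewed emissions sample.
        rng = np.random.default_rng(1)
        unit_conc = rng.lognormal(mean=0.0, sigma=1.0, size=8760)
        emis_rates = rng.gamma(shape=4.0, scale=0.25, size=8760)
        print(np.percentile(simulate_annual_peaks(unit_conc, emis_rates), 50))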

  10. Synthetic vision and emotion calculation in intelligent virtual human modeling.

    Science.gov (United States)

    Zhao, Y; Kang, J; Wright, D K

    2007-01-01

    The virtual human technique can already provide vivid and believable human behaviour in more and more scenarios. Virtual humans are expected to replace real humans in hazardous situations to undertake tests and feed back valuable information. This paper introduces a virtual human with novel collision-based synthetic vision, a short-term memory model, and the capability to perform emotion calculation and decision making. The virtual character based on this model can 'see' what is in its field of view (FOV) and remember those objects. A group of affective computing equations is then introduced; these equations have been implemented in a proposed emotion calculation process to elicit emotions for intelligent virtual humans.
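
    The record does not reproduce the paper's equations, but the basic visibility test such a virtual human performs can be sketched geometrically: an object is 'seen' when the angle between the gaze direction and the vector to the object is within half the field of view. This is a simplification of the paper's collision-based approach, and the object names, retention policy and FOV width are assumptions:

        import math

        def in_fov(eye, gaze, target, fov_deg=120.0):
            """Return True if target lies within the field of view of a
            viewer at `eye` looking along the `gaze` direction."""
            to_target = [t - e for t, e in zip(target, eye)]
            dot = sum(g * v for g, v in zip(gaze, to_target))
            norms = (math.sqrt(sum(g * g for g in gaze))
                     * math.sqrt(sum(v * v for v in to_target)))
            if norms == 0.0:
                return False
            angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))
            return angle <= fov_deg / 2.0

        # A minimal short-term memory: remember when each object was last
        # seen; forget it after `retention` steps without a sighting.
        memory = {}

        def observe(objects, time_step, eye, gaze, retention=50):
            for name, pos in objects.items():
                if in_fov(eye, gaze, pos):
                    memory[name] = time_step
            for name in [n for n, t in memory.items()
                         if time_step - t > retention]:
                del memory[name]

        observe({"door": (5.0, 0.0, 0.0), "crate": (-4.0, 1.0, 0.0)},
                time_step=0, eye=(0.0, 0.0, 0.0), gaze=(1.0, 0.0, 0.0))
        print(memory)  # only objects inside the 120-degree FOV are kept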

  11. Modelling mutational landscapes of human cancers in vitro

    Science.gov (United States)

    Olivier, Magali; Weninger, Annette; Ardin, Maude; Huskova, Hana; Castells, Xavier; Vallée, Maxime P.; McKay, James; Nedelko, Tatiana; Muehlbauer, Karl-Rudolf; Marusawa, Hiroyuki; Alexander, John; Hazelwood, Lee; Byrnes, Graham; Hollstein, Monica; Zavadil, Jiri

    2014-03-01

    Experimental models that recapitulate mutational landscapes of human cancers are needed to decipher the rapidly expanding data on human somatic mutations. We demonstrate that mutation patterns in immortalised cell lines derived from primary murine embryonic fibroblasts (MEFs) exposed in vitro to carcinogens recapitulate key features of mutational signatures observed in human cancers. In experiments with several cancer-causing agents we obtained high genome-wide concordance between human tumour mutation data and in vitro data with respect to predominant substitution types, strand bias and sequence context. Moreover, we found signature mutations in well-studied human cancer driver genes. To explore endogenous mutagenesis, we used MEFs ectopically expressing activation-induced cytidine deaminase (AID) and observed an excess of AID signature mutations in immortalised cell lines compared to their non-transgenic counterparts. MEF immortalisation is thus a simple and powerful strategy for modelling cancer mutation landscapes that facilitates the interpretation of human tumour genome-wide sequencing data.
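
    The signature comparisons mentioned here rest on a standard convention: every substitution is collapsed to one of six pyrimidine-centred classes (C>A, C>G, C>T, T>A, T>C, T>G) in its trinucleotide context. A minimal sketch of that bookkeeping, with hypothetical mutation calls, might look like this:

        from collections import Counter

        COMPLEMENT = str.maketrans("ACGT", "TGCA")

        def substitution_class(ref, alt, context):
            """Collapse a substitution to its pyrimidine-centred class
            with trinucleotide context (e.g. 'A[C>T]G')."""
            if ref in "AG":  # reverse-complement so the mutated base is C or T
                ref, alt = ref.translate(COMPLEMENT), alt.translate(COMPLEMENT)
                context = context.translate(COMPLEMENT)[::-1]
            return "{}[{}>{}]{}".format(context[0], ref, alt, context[2])

        # Hypothetical calls: (reference base, alternate base, trinucleotide).
        calls = [("C", "T", "ACG"), ("G", "A", "CGT"), ("T", "G", "ATT")]
        print(Counter(substitution_class(*c) for c in calls))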

  12. Cyberpsychology: a human-interaction perspective based on cognitive modeling.

    Science.gov (United States)

    Emond, Bruno; West, Robert L

    2003-10-01

    This paper argues for the relevance of cognitive modeling and cognitive architectures to cyberpsychology. From a human-computer interaction point of view, cognitive modeling can have benefits both for theory and model building, and for the design and evaluation of sociotechnical systems usability. Cognitive modeling research applied to human-computer interaction has two complementary objectives: (1) to develop theories and computational models of human interactive behavior with information and collaborative technologies, and (2) to use the computational models as building blocks for the design, implementation, and evaluation of interactive technologies. From the perspective of building theories and models, cognitive modeling offers the possibility to anchor cyberpsychology theories and models in cognitive architectures. From the perspective of the design and evaluation of socio-technical systems, cognitive models can provide the basis for simulated users, which can play an important role in usability testing. As an example of the application of cognitive modeling to technology design, the paper presents a simulation of interactive behavior with five different adaptive menu algorithms: random, fixed, stacked, frequency based, and activation based. Results of the simulation indicate that fixed menu positions seem to offer the best support for classification-like tasks such as filing e-mails. This research is part of the Human-Computer Interaction and Broadband Visual Communication research programs at the National Research Council of Canada, in collaboration with the Carleton Cognitive Modeling Lab at Carleton University.
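
    Of the five menu-adaptation policies simulated, the frequency-based one is the easiest to make concrete: items are reordered by how often they have been selected. The sketch below is a generic illustration of that policy, not the authors' code:

        from collections import Counter

        class FrequencyMenu:
            """Menu that reorders items by selection frequency."""

            def __init__(self, items):
                self.items = list(items)
                self.counts = Counter()

            def order(self):
                # Frequently chosen items float to the top; ties keep the
                # original ordering because Python's sort is stable.
                return sorted(self.items, key=lambda i: -self.counts[i])

            def select(self, item):
                self.counts[item] += 1

        menu = FrequencyMenu(["inbox", "archive", "work", "personal"])
        for choice in ["work", "work", "archive", "work"]:
            menu.select(choice)
        print(menu.order())  # ['work', 'archive', 'inbox', 'personal']

    The study's finding that fixed positions best support filing-like tasks is consistent with this design's weakness: any reordering disrupts the positional memory users build up.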

  13. Modeling Human Behaviour with Higher Order Logic: Insider Threats

    NARCIS (Netherlands)

    Boender, Jaap; Ivanova, Marieta Georgieva; Kammüller, Florian; Primierio, Giuseppe

    2014-01-01

    In this paper, we approach the problem of modeling the human component in technical systems, with a view to the difference between the use of model and theory in sociology and computer science. One aim of this essay is to show that the building of theories and models for sociology can be compared and imp

  14. Formal modelling techniques in human-computer interaction

    NARCIS (Netherlands)

    Haan, de G.; Veer, van der G.C.; Vliet, van J.C.

    1991-01-01

    This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focusses on different related concepts of formal modelling techniques in hum

  15. Human Spaceflight Architecture Model (HSFAM) Data Dictionary

    Science.gov (United States)

    Shishko, Robert

    2016-01-01

    HSFAM is a data model based on the DoDAF 2.02 data model with some purpose-built extensions. These extensions are designed to permit quantitative analyses regarding stakeholder concerns about technical feasibility, configuration and interface issues, and budgetary and/or economic viability.

  16. Behavior genetic modeling of human fertility

    DEFF Research Database (Denmark)

    Rodgers, J L; Kohler, H P; Kyvik, K O;

    2001-01-01

    Try) and number of children (NumCh). Behavior genetic models were fitted using structural equation modeling and DF analysis. A consistent medium-level additive genetic influence was found for NumCh, equal across genders; a stronger genetic influence was identified for FirstTry, greater for females than for males...
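
    For readers unfamiliar with the structural equation models mentioned here, the classical twin design decomposes phenotypic variance into additive genetic (A), shared environment (C) and unique environment (E) components; a generic (not study-specific) formulation in LaTeX is:

        \operatorname{Var}(P) = a^{2} + c^{2} + e^{2}, \qquad
        r_{MZ} = a^{2} + c^{2}, \qquad
        r_{DZ} = \tfrac{1}{2}\,a^{2} + c^{2}

    so that the additive genetic share can be estimated from the monozygotic and dizygotic twin correlations as \(a^{2} = 2\,(r_{MZ} - r_{DZ})\).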

  17. Phase transitions in models of human cooperation

    Science.gov (United States)

    Perc, Matjaž

    2016-08-01

    If only the fittest survive, why should one cooperate? Why should one sacrifice personal benefits for the common good? Recent research indicates that a comprehensive answer to such questions requires that we look beyond the individual and focus on the collective behavior that emerges as a result of the interactions among individuals, groups, and societies. Although undoubtedly driven also by culture and cognition, human cooperation is just as well an emergent, collective phenomenon in a complex system. Nonequilibrium statistical physics, in particular the collective behavior of interacting particles near phase transitions, has already been recognized as very valuable for understanding counterintuitive evolutionary outcomes. However, unlike pairwise interactions among particles that typically govern solid-state physics systems, interactions among humans often involve group interactions, and they also involve a larger number of possible states even for the most simplified description of reality. Here we briefly review research done in the realm of the public goods game, and we outline future research directions with an emphasis on merging the most recent advances in the social sciences with methods of nonequilibrium statistical physics. By having a firm theoretical grip on human cooperation, we can hope to engineer better social systems and develop more efficient policies for a sustainable and better future.
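
    The public goods game reviewed here has a simple payoff structure: each cooperator contributes an amount c to a common pool, the pool is multiplied by a synergy factor r, and the result is shared equally among all group members regardless of contribution. A minimal sketch (the parameter values are illustrative):

        def public_goods_payoffs(strategies, c=1.0, r=3.0):
            """Payoffs for one round of a public goods game.

            strategies : list of booleans, True = cooperate (contribute c).
            Pooled contributions are multiplied by r and shared equally.
            """
            n = len(strategies)
            pool = sum(c for s in strategies if s) * r
            share = pool / n
            return [share - (c if s else 0.0) for s in strategies]

        # Three cooperators and two defectors in a group of five.
        print(public_goods_payoffs([True, True, True, False, False]))

    Because defectors receive the share without paying the cost, defection dominates whenever r is smaller than the group size n, which is exactly the social dilemma that the statistical-physics methods surveyed in the review are used to study.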

  18. Human Engineering Modeling and Performance Lab Study Project

    Science.gov (United States)

    Oliva-Buisson, Yvette J.

    2014-01-01

    The HEMAP (Human Engineering Modeling and Performance) Lab is a joint effort between the Industrial and Human Engineering group and the KAVE (Kennedy Advanced Visualizations Environment) group. The lab consists of a sixteen-camera system used to capture human motions and operational tasks through the use of a Velcro suit equipped with sensors; these tasks are then simulated in an ergonomic software package known as Jack. The Jack software is able to identify potential risk hazards.

  19. [Improving speech comprehension using a new cochlear implant speech processor].

    Science.gov (United States)

    Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A

    2009-06-01

    The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 experienced, postlingually profoundly deaf CI users who scored at least 80% correct on the Oldenburg sentence test (OLSA) in quiet with their current speech processor and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise. In contrast, use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise. Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significant improvement in signal-to-noise ratio for speech comprehension thresholds (i.e., signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg

  20. Human vascular model with defined stimulation medium - a characterization study.

    Science.gov (United States)

    Huttala, Outi; Vuorenpää, Hanna; Toimela, Tarja; Uotila, Jukka; Kuokkanen, Hannu; Ylikomi, Timo; Sarkanen, Jertta-Riina; Heinonen, Tuula

    2015-01-01

    The formation of blood vessels is a vital process in embryonic development and in normal physiology. Current vascular modelling is mainly based on animal biology, leading to species-to-species variation when extrapolating the results to humans. Although a few human cell-based vascular models are available, these assays are insufficiently characterized in terms of culture conditions and the developmental stage of vascular structures. Therefore, well-characterized vascular models with human relevance are needed for basic research, embryotoxicity testing, development of therapeutic strategies and tissue engineering. We have previously shown that the in vitro vascular model based on co-culture of human adipose stromal cells (hASC) and human umbilical vein endothelial cells (HUVEC) is able to induce an extensive vascular-like network with high reproducibility. In this work we developed a defined serum-free vascular stimulation medium (VSM) and performed further characterization in terms of cell identity, maturation and structure to obtain a thoroughly characterized in vitro vascular model to replace or reduce corresponding animal experiments. The results showed that the novel vascular stimulation medium induced an intact and evenly distributed vascular-like network with the morphology of mature vessels. Electron microscopic analysis confirmed the three-dimensional microstructure of the network, including lumen formation. Additionally, elevated expression of the main human angiogenesis-related genes was detected. In conclusion, with the new defined medium the vascular model can be utilized as a characterized test system for chemical testing as well as for creating vascularized tissue models.