WorldWideScience

Sample records for unit cpu keyboard

  1. Combustion Power Unit--400: CPU-400.

    Science.gov (United States)

    Combustion Power Co., Palo Alto, CA.

    Aerospace technology may have led to a unique basic unit for processing solid wastes and controlling pollution. The Combustion Power Unit--400 (CPU-400) is designed as a turboelectric generator plant that will use municipal solid wastes as fuel. The baseline configuration is a modular unit that is designed to utilize 400 tons of refuse per day…

  2. Reconfigurable work station for a video display unit and keyboard

    Science.gov (United States)

    Shields, Nicholas L. (Inventor); Roe, Fred D., Jr. (Inventor); Fagg, Mary F. (Inventor); Henderson, David E. (Inventor)

    1988-01-01

    A reconfigurable workstation is described having video, keyboard, and hand operated motion controller capabilities. The workstation includes main side panels between which a primary work panel is pivotally carried in a manner in which the primary work panel may be adjusted and set in a negatively declined or positively inclined position for proper forearm support when operating hand controllers. A keyboard table supports a keyboard in such a manner that the keyboard is set in a positively inclined position with respect to the negatively declined work panel. Various adjustable devices are provided for adjusting the relative declinations and inclinations of the work panels, tables, and visual display panels.

  3. Evaluating the effectiveness of ultraviolet-C lamps for reducing keyboard contamination in the intensive care unit: A longitudinal analysis.

    Science.gov (United States)

    Gostine, Andrew; Gostine, David; Donohue, Cristina; Carlstrom, Luke

    2016-10-01

    Ultraviolet (UV) spectrum light for decontamination of patient care areas is an effective way to reduce transmission of infectious pathogens. Our purpose was to investigate the efficacy of an automated UV-C device to eliminate bioburden on hospital computer keyboards. The study took place at an academic hospital in Chicago, Illinois. Baseline cultures were obtained from keyboards in intensive care units. Automated UV-C lamps were installed over keyboards and mice of those computers. The lamps were tested at varying cycle lengths to determine shortest effective cycles. Delay after use and prior to cycle initiation was varied to minimize cycle interruptions. Finally, 218 postinstallation samples were analyzed. Of 203 baseline samples, 193 (95.1%) were positive for bacteria, with a median of 120 colony forming units (CFU) per keyboard. There were numerous bacteria linked to health care-associated infections (HAIs), including Staphylococcus, Streptococcus, Enterococcus, Pseudomonas, Pasteurella, Klebsiella, Acinetobacter, and Enterobacter. Of the 193 keyboards, 25 (12.3%) had gram-negative species. Of 218 postinstallation samples, 205 (94%) were sterile. Of the 13 that showed bacterial growth, 6 produced a single CFU. Comparison of pre- and post-UV decontamination median CFU values (120 and 0, respectively) revealed a >99% reduction in bacteria. The UV lamp effectively decontaminates keyboards with minimal interruption and low UV exposure. Further studies are required to determine reduction of HAI transmission with use of these devices. Copyright © 2016 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  4. Self-referral to chest pain units: results of the German CPU-registry.

    Science.gov (United States)

    Nowak, Bernd; Giannitsis, Evangelos; Riemer, Thomas; Münzel, Thomas; Haude, Michael; Maier, Lars S; Schmitt, Claus; Schumacher, Burghard; Mudra, Harald; Hamm, Christian; Senges, Jochen; Voigtländer, Thomas

    2012-12-01

    Chest pain units (CPUs) are increasingly established in emergency cardiology services. With improved visibility of CPUs in the population, patients may refer themselves directly to these units, bypassing emergency medical services (EMS). Little is known about the characteristics and outcomes of self-referred patients compared with those referred by EMS. We therefore described self-referral patients enrolled in the CPU-registry of the German Cardiac Society and compared them with those referred by EMS. From 2008 until 2010, the prospective CPU-registry enrolled 11,581 consecutive patients. Of those, 3789 (32.7%) were self-referrals (SRs), while 7792 (67.3%) were referred by EMS. SR-patients were significantly younger (63.6 vs. 70.1 years) and had less prior myocardial infarction or coronary artery bypass surgery, but more previous percutaneous coronary interventions (PCIs). Acute coronary syndromes were diagnosed less frequently in the SR-patients (30.3 vs. 46.9%). Patients presenting to the CPU as self-referrals are younger, less severely ill, and have more non-coronary problems than those calling an emergency medical service. Nevertheless, 30% of self-referral patients had an acute coronary syndrome.

  5. A Keyboard

    DEFF Research Database (Denmark)

    2008-01-01

    The present invention relates to a keyboard with a plurality of keys with key switches that are covered by a flexible display for displaying individual labels of the keys whereby the label can be changed during operation of the keyboard by appropriate control of the keyboard. The flexible display...

  6. Optical keyboard

    Science.gov (United States)

    Veligdan, James T.; Feichtner, John D.; Phillips, Thomas E.

    2001-01-01

    An optical keyboard includes an optical panel having optical waveguides stacked together. First ends of the waveguides define an inlet face, and opposite ends thereof define a screen. A projector transmits a light beam outbound through the waveguides for display on the screen as a keyboard image. A light sensor is optically aligned with the inlet face for sensing an inbound light beam channeled through the waveguides from the screen upon covering one key of the keyboard image.

  7. The German CPU Registry: Dyspnea independently predicts negative short-term outcome in patients admitted to German Chest Pain Units.

    Science.gov (United States)

    Hellenkamp, Kristian; Darius, Harald; Giannitsis, Evangelos; Erbel, Raimund; Haude, Michael; Hamm, Christian; Hasenfuss, Gerd; Heusch, Gerd; Mudra, Harald; Münzel, Thomas; Schmitt, Claus; Schumacher, Burghard; Senges, Jochen; Voigtländer, Thomas; Maier, Lars S

    2015-02-15

    While dyspnea is a common symptom in patients admitted to Chest Pain Units (CPUs), little is known about the impact of dyspnea on their outcome. The purpose of this study was to evaluate the impact of dyspnea on the short-term outcome of CPU patients. We analyzed data from a total of 9169 patients admitted to one of the 38 participating CPUs in this registry between December 2008 and January 2013. Only patients who underwent coronary angiography for suspected ACS were included. 2601 patients (28.4%) presented with dyspnea. Patients with dyspnea at admission were older and more frequently had a wide range of comorbidities compared to patients without dyspnea. Heart failure symptoms in particular were more common in patients with dyspnea (21.0% vs. 5.3%). Dyspnea independently predicted a negative short-term outcome in CPU patients. Our data show that dyspnea is associated with a fourfold higher 3-month mortality, which is underestimated by the established ACS risk scores. To improve their predictive value, we therefore propose adding dyspnea as an item to common risk scores. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  8. Advanced Keyboard

    Science.gov (United States)

    1993-01-01

    Using chordic technology, a data entry operator can finger key combinations for text or graphics input. Because only one hand is needed, a disabled person may use it. Strain and fatigue are less than when using a conventional keyboard; input is faster, and the system can be learned in about an hour. Infogrip, Inc. developed chordic input technology with Stennis Space Center (SSC). (NASA is interested in potentially faster human/computer interaction on spacecraft as well as a low cost tactile/visual training system for the handicapped.) The company is now marketing the BAT as an improved system for both disabled and non-disabled computer operators.

  9. Identification of a site critical for kinase regulation on the central processing unit (CPU) helix of the aspartate receptor.

    Science.gov (United States)

    Trammell, M A; Falke, J J

    1999-01-05

    Ligand binding to the homodimeric aspartate receptor of Escherichia coli and Salmonella typhimurium generates a transmembrane signal that regulates the activity of a cytoplasmic histidine kinase, thereby controlling cellular chemotaxis. This receptor also senses intracellular pH and ambient temperature and is covalently modified by an adaptation system. A specific helix in the cytoplasmic domain of the receptor, helix alpha6, has been previously implicated in the processing of these multiple input signals. While the solvent-exposed face of helix alpha6 possesses adaptive methylation sites known to play a role in kinase regulation, the functional significance of its buried face is less clear. This buried region lies at the subunit interface where helix alpha6 packs against its symmetric partner, helix alpha6'. To test the role of the helix alpha6-helix alpha6' interface in kinase regulation, the present study introduces a series of 13 side-chain substitutions at the Gly 278 position on the buried face of helix alpha6. The substitutions are observed to dramatically alter receptor function in vivo and in vitro, yielding effects ranging from kinase superactivation (11 examples) to complete kinase inhibition (one example). Moreover, four hydrophobic, branched side chains (Val, Ile, Phe, and Trp) lock the kinase in the superactivated state regardless of whether the receptor is occupied by ligand. The observation that most side-chain substitutions at position 278 yield kinase superactivation, combined with evidence that such facile superactivation is rare at other receptor positions, identifies the buried Gly 278 residue as a regulatory hotspot where helix packing is tightly coupled to kinase regulation. Together, helix alpha6 and its packing interactions function as a simple central processing unit (CPU) that senses multiple input signals, integrates these signals, and transmits the output to the signaling subdomain where the histidine kinase is bound. Analogous CPU

  10. Who gets admitted to the Chest Pain Unit (CPU) and how do we manage them? Improving the use of the CPU in Waikato DHB, New Zealand.

    Science.gov (United States)

    Jade, Judith; Huggan, Paul; Stephenson, Douglas

    2015-01-01

    Chest pain is a commonly encountered presentation in the emergency department (ED). The chest pain unit at Waikato DHB is designed for patients with likely stable angina who are at low risk of acute coronary syndrome (ACS), with a normal ECG and Troponin T, and who have a history highly suggestive of coronary artery disease (CAD). Two issues were identified with patient care on the unit: (1) the number of inappropriate admissions and (2) the number of inappropriate exercise tolerance tests. A baseline study showed that 73% of admissions did not fulfil the criteria and that the majority of patients (72%) had an exercise tolerance test (ETT) irrespective of the clinical picture. We delivered educational presentations to key stakeholders and implemented a new fast-track chest pain pathway for discharging patients directly from the ED. The proportion of patients inappropriately admitted improved, falling to 61%. However, the number of inappropriate ETTs did not decrease; ETTs were still performed on 76.9% of patients.

  11. A dynamic display keyboard and a key for use in a dynamic display keyboard

    DEFF Research Database (Denmark)

    2011-01-01

    The invention relates to a dynamic display keyboard comprising a plurality of key elements (101), each key element (101) comprises a transmitting part (102) capable of transmitting at least a part of light incident on the transmitting part; a mat (105) comprising a plurality of elevated elements......). In this way, the dynamic display keyboard is able to provide a tactile feedback in response to a user action directed towards a key of the keyboard. Further, the only power requiring element in the keyboard is the display unit....

  12. Chest pain unit (CPU) in the management of low to intermediate risk acute coronary syndrome: a tertiary hospital experience from New Zealand.

    Science.gov (United States)

    Mazhar, J; Killion, B; Liang, M; Lee, M; Devlin, G

    2013-02-01

    A chest pain unit (CPU) for the management of patients with chest pain at low to intermediate risk for acute coronary syndrome (ACS) appears safe and cost-effective. We report our experience with a CPU from March 2005 to July 2009. This was a prospective audit of patients presenting with chest pain suggestive of ACS but without high-risk features, managed using a CPU protocol that included serial cardiac troponins and electrocardiography, and an exercise tolerance test (ETT) if indicated. Outcomes assessed included the three-month readmission rate and one-year mortality. 2358 patients were managed according to the CPU protocol. Mean age was 56 years (range 17-96 years), 59% were men, and the median stay was 22 h (IQR 17-26 h). 1933 (82%) were diagnosed with non-cardiac chest pain. 1741 (74%) patients had an ETT. Median time from triage to ETT was 21 h (IQR 16-24 h). 64 (2.7%) were readmitted within three months. The majority of readmissions, 39 (61%), were for a non-cardiac cause. Twenty patients (1%) were readmitted with ACS. There were no cardiac deaths within one year of discharge with non-cardiac chest pain. This study confirms that a CPU with high usage of predischarge ETT is a safe and effective way of excluding ACS in patients without high-risk features in a New Zealand setting. Copyright © 2012 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). Published by Elsevier B.V. All rights reserved.

  13. DESIGN OF A DUAL KEYBOARD

    OpenAIRE

    V. Ragavi; G. Geetha

    2013-01-01

    The design of a computer keyboard with dual function is proposed. This computer keyboard called Dual Keyboard can function both as a normal keyboard and as a pressure sensitive keyboard. The proposed device has a switch that decides the function. The keyboard makes use of sensors placed beneath the keys to measure the pressure applied on the key by the user. This device has many applications. In this study, it is applied to mitigate Denial of Service (DoS) attack.

  14. Disease distribution and outcome in troponin-positive patients with or without revascularization in a chest pain unit: results of the German CPU-Registry.

    Science.gov (United States)

    Illmann, Alexander; Riemer, Thomas; Erbel, Raimund; Giannitsis, Evangelos; Hamm, Christian; Haude, Michael; Heusch, Gerd; Maier, Lars S; Münzel, Thomas; Schmitt, Claus; Schumacher, Burghard; Senges, Jochen; Voigtländer, Thomas; Mudra, Harald

    2014-01-01

    The aim of this analysis was to compare troponin-positive patients presenting to a chest pain unit (CPU) and undergoing coronary angiography with or without subsequent revascularization. Leading diagnosis, disease distribution, and short-term outcomes were evaluated. Chest pain units are increasingly implemented to promptly clarify acute chest pain of uncertain origin, including in patients with suspected acute coronary syndrome (ACS). A total of 11,753 patients were prospectively enrolled into the German CPU-Registry of the German Cardiac Society between December 2008 and April 2011. All patients with elevated troponin undergoing coronary angiography were selected. Three months after discharge a follow-up was performed. A total of 2,218 patients were included. 1,613 troponin-positive patients (72.7 %) underwent coronary angiography with subsequent PCI or CABG and had an ACS in 96.0 % of cases. In contrast, 605 patients (27.3 %) underwent coronary angiography without revascularization and had an ACS in 79.8 % of cases. The most frequent non-coronary diagnoses in non-revascularized patients were acute arrhythmias (13.4 %), pericarditis/myocarditis (4.5 %), decompensated congestive heart failure (3.7 %), Takotsubo cardiomyopathy (2.7 %), hypertensive crisis (2.4 %), and pulmonary embolism (0.3 %). During the 3-month follow-up, patients without revascularization had a higher mortality (12.1 vs. 4.5 %). Patients presenting to a CPU with elevated troponin levels mostly suffer from ACS; in a smaller proportion, a variety of other diseases are responsible. The short-term outcome in troponin-positive patients with or without an ACS not undergoing revascularization was worse, indicating that these patients were more seriously ill than patients with revascularization of the culprit lesion. Therefore, adequate diagnostic evaluation and improved treatment strategies are warranted.

  15. A dynamic display keyboard and a key for use in a dynamic display keyboard

    DEFF Research Database (Denmark)

    2011-01-01

    The invention relates to a dynamic display keyboard comprising a plurality of key elements, each key element comprises a transmitting part capable of transmitting at least a part of light incident on the transmitting part; a mat comprising a plurality of elevated elements capable of providing...... part; at least one display unit capable of providing light to the plurality of transmitting parts via the optical element; and wherein the light provided to a transmitting part defines a visual value of the corresponding key element.; In this way, the keyboard is dynamic and further is able to provide...... a tactile feedback in response to a user action directed towards a key of the keyboard....

  16. Improved Optical Keyboard

    Science.gov (United States)

    Jamieson, R. S.

    1985-01-01

    Optical keyboard surfaces used in typewriters, computer terminals, and telephones are inexpensively fabricated using a stack of printed-circuit cards set in a laminate. Internal laminations carry all illuminating and sensing light conductors to the keys.

  17. Keyboard With Voice Output

    Science.gov (United States)

    Huber, W. C.

    1986-01-01

    Voice synthesizer tells what key is about to be depressed. Verbal feedback useful for blind operators or where dim light prevents sighted operator from seeing keyboard. Also used where operator is busy observing other things while keying data into control system. Used as training aid for touch typing, and to train blind operators to use both standard and braille keyboards. Concept adapted to such equipment as typewriters, computers, calculators, telephones, cash registers, and on/off controls.

  18. DESIGN OF A DUAL KEYBOARD

    Directory of Open Access Journals (Sweden)

    V. Ragavi

    2013-01-01

    The design of a computer keyboard with dual function is proposed. This computer keyboard, called the Dual Keyboard, can function both as a normal keyboard and as a pressure-sensitive keyboard. The proposed device has a switch that decides the function. The keyboard makes use of sensors placed beneath the keys to measure the pressure applied on each key by the user. This device has many applications. In this study, it is applied to mitigate Denial of Service (DoS) attacks.

  19. IBM model M keyboard

    CERN Multimedia

    1985-01-01

    In 1985, the IBM Model M keyboard was created. This timeless classic was a hit. IBM came out with several variants of the Model M. They had the space-saver 104-key model, which is the one most seen today, along with many international versions of it. The second type, and the rarest, is the 122-key Model M, which has 24 extra keys at the very top, dubbed the “programmers keyboard”. IBM manufactured these keyboards until 1991. The Model M features “caps” over the actual keys that can be taken off one at a time for cleaning, or to replace them with colored keys or keys for another language; this was a very cost-effective way of shipping the keyboards internationally.

  20. Optical controlled keyboard system

    Science.gov (United States)

    Budzyński, Łukasz; Długosz, Dariusz; Niewiarowski, Bartosz; Zajkowski, Maciej

    2011-06-01

    The control systems of our computers are common devices, based on the manipulation of keys or a moving ball. Completely healthy people have no problems operating such devices, but disability turns everyday activities into a challenge. When a person cannot move his hands, this work becomes difficult or often impossible. The optically controlled keyboard is a modern device that makes it possible to bypass the limitations of limb disability. The use of wireless optical transmission allows a computer to be controlled with a laser beam that cooperates with photodetectors. The article presents the construction and operation of a non-contact optical keyboard for people with disabilities.

  1. SAFARI digital processing unit: performance analysis of the SpaceWire links in case of a LEON3-FT based CPU

    Science.gov (United States)

    Giusi, Giovanni; Liu, Scige J.; Di Giorgio, Anna M.; Galli, Emanuele; Pezzuto, Stefano; Farina, Maria; Spinoglio, Luigi

    2014-08-01

    SAFARI (SpicA FAR infrared Instrument) is a far-infrared imaging Fourier Transform Spectrometer for the SPICA mission. The Digital Processing Unit (DPU) of the instrument implements the functions of controlling the overall instrument and of performing the science data compression and packing. The DPU design is based on the use of a LEON-family processor. In SAFARI, all instrument components are connected to the central DPU via SpaceWire links. On these links, science data, housekeeping, and command flows are in some cases multiplexed; the interface control must therefore be able to cope with variable throughput needs. The effective data transfer workload can be an issue for overall system performance and becomes a critical parameter for the on-board software design, both at the application layer and at lower, more hardware-related, levels. To analyze the system behavior in the presence of the expected, demanding SAFARI science data flow, we carried out a series of performance tests using the standard GR-CPCI-UT699 LEON3-FT Development Board, provided by Aeroflex/Gaisler, connected to an emulator of the SAFARI science data links in a point-to-point topology. Two different communication protocols were used in the tests: the ECSS-E-ST-50-52C RMAP protocol and an internally defined one, the SAFARI internal data handling protocol. An incremental approach was adopted to measure the system performance at different levels of communication protocol complexity. In all cases the performance was evaluated by measuring the CPU workload and the bus latencies. The tests were executed initially in a custom low-level execution environment and finally using the Real-Time Executive for Multiprocessor Systems (RTEMS), which has been selected as the operating system to be used onboard SAFARI.
The preliminary results of the carried out performance analysis confirmed the possibility of using a LEON3 CPU processor in the SAFARI DPU, but pointed out, in agreement

  2. Keyboard Emulation For Computerized Instrumentation

    Science.gov (United States)

    Wiegand, P. M.; Crouch, S. R.

    1989-01-01

    Keyboard emulator has interface at same level as manual keyboard entry. Since communication and control take place at high intelligence level in instrument, all instrument circuitry fully utilized. Little knowledge of instrument circuitry necessary, since only task interface performs is key closure. All existing logic and error checking still performed by instrument, minimizing workload of laboratory microcomputer. Timing constraints for interface operation minimal at keyboard entry level.

  3. Keyboarding--A Must in Tomorrow's World.

    Science.gov (United States)

    Kisner, Evelyn

    1984-01-01

    Describes keyboarding, i.e., entering information on electronic equipment through use of a typewriter-like keyboard, and briefly discusses when it should be taught, who should teach it, and what level of keyboarding efficiency is needed. (MBR)

  5. 75 FR 22840 - In the Matter of Certain Adjustable Keyboard Support Systems and Components Thereof; Notice of...

    Science.gov (United States)

    2010-04-30

    ... COMMISSION In the Matter of Certain Adjustable Keyboard Support Systems and Components Thereof; Notice of... importation, and the sale within the United States after importation of certain adjustable keyboard support... infringing adjustable keyboard support systems and components thereof. The ALJ further recommended...

  6. Measuring keyboard response delays by comparing keyboard and joystick inputs.

    Science.gov (United States)

    Shimizu, Hidemi

    2002-05-01

    The response characteristics of PC keyboards have to be identified when they are used as response devices in psychological experiments. In the past, the proposed method has been to check the characteristics independently by means of external measurement equipment. However, with the availability of different PC models and the rapid pace of model change, there is an urgent need for the development of convenient and accurate methods of checking. The method proposed here consists of raising the precision of the PC's clock to the microsecond level and using a joystick connected to the MIDI terminal of a sound board to give the PC an independent timing function. Statistical processing of the data provided by this method makes it possible to estimate accurately the keyboard scanning interval time and the average keyboard delay time. The results showed that measured keyboard delay times varied from 11 to 73 msec, depending on the keyboard model, with most values being less than 30 msec.
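    The estimation step described above can be sketched in a few lines (a hypothetical illustration, not the author's code): because key events are only reported on keyboard scan boundaries, the differences between successive event timestamps cluster around multiples of the scanning interval, so the modal spacing estimates that interval.

    ```python
    from collections import Counter

    def estimate_scan_interval(timestamps_us, resolution_us=100):
        """Estimate the keyboard scanning interval from key-event timestamps.

        Differences between successive timestamps are binned to the given
        resolution; the most common bin is taken as the scan interval.
        """
        diffs = [b - a for a, b in zip(timestamps_us, timestamps_us[1:])]
        binned = [round(d / resolution_us) * resolution_us for d in diffs]
        return Counter(binned).most_common(1)[0][0]

    # Synthetic events spaced by a 10 ms (10000 us) scan interval plus jitter.
    events = [0, 10050, 19980, 30020, 40010, 49990, 60030]
    print(estimate_scan_interval(events))  # -> 10000
    ```

    With real data, averaging over many such estimates (as the statistical processing in the paper does) also yields the average keyboard delay.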

  7. Hand-Held Keyboard

    Science.gov (United States)

    1994-01-01

    The Data Egg, a prototype chord key-based data entry device, can be used autonomously or as an auxiliary keyboard with a personal computer. Data is entered by pressing combinations of seven buttons positioned where the fingers naturally fall when clasping the device. An experienced user can enter text at 30 to 35 words per minute. No transcription is required. The input is downloaded into a computer and printed. The Data Egg can be used by an astronaut in space, a journalist, a bedridden person, etc. It was developed by a Jet Propulsion Laboratory engineer. Product is not currently manufactured.
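    Chord keying as described above can be sketched roughly as follows (the chord-to-character table is invented for illustration; it is not the Data Egg's actual mapping): seven buttons give 2^7 - 1 = 127 non-empty chords, more than enough for a full character set.

    ```python
    from itertools import combinations

    BUTTONS = range(7)  # seven buttons under the clasping fingers

    # Enumerate all non-empty chords: 2**7 - 1 = 127 of them.
    all_chords = [frozenset(c)
                  for r in range(1, 8)
                  for c in combinations(BUTTONS, r)]
    print(len(all_chords))  # -> 127

    # A tiny, hypothetical chord-to-character table.
    chord_map = {
        frozenset({0}): 'e',
        frozenset({1}): 't',
        frozenset({0, 1}): 'a',
    }

    def decode(pressed):
        """Look up the character for a set of simultaneously pressed buttons."""
        return chord_map.get(frozenset(pressed), '?')

    print(decode({1, 0}))  # -> 'a'; press order does not matter
    ```

    Using a frozenset per chord captures the key property of chordic input: only which buttons are down matters, not the order they were pressed in.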

  8. Ergonomic evaluation of the Apple Adjustable Keyboard

    Energy Technology Data Exchange (ETDEWEB)

    Tittiranonda, P.; Burastero, S.; Shih, M. [Lawrence Livermore National Lab., CA (United States); Rempel, D. [University of California Berkeley/San Francisco (United States). Ergonomics Laboratory

    1994-05-01

    This study presents an evaluation of the Apple Adjustable Keyboard based on subjective preference and observed joint angles during typing. Thirty-five keyboard users were asked to use the Apple Adjustable Keyboard for 7-14 days and rate the various characteristics of the keyboard. Our findings suggest that the most preferred opening angles range from 11-20°. The mean ulnar deviation on the Apple Adjustable Keyboard was 11°, compared to 16° on the standard keyboard. The mean extension decreased from 24° to 16° when using the adjustable keyboard. When asked to subjectively rate the adjustable keyboard in comparison to the standard one, the average subject felt that the Apple Adjustable Keyboard was more comfortable and easier to use than the standard flat keyboard.

  9. The keyboard instruments.

    Science.gov (United States)

    Manchester, Ralph A

    2014-06-01

    Now that the field of performing arts medicine has been in existence for over three decades, we are approaching a key point: we should start to see more articles that bring together the data that have been collected from several studies in order to draw more robust conclusions. Review articles and their more structured relative, the meta-analysis, can help to improve our understanding of a particular topic, comparing and synthesizing the results of previous research that has been done on that subject area. One way this could be done would be to review the research that has been carried out on the performance-related problems associated with playing a particular instrument or group of instruments. While I am not going to do that myself, I hope that others will. In this editorial, I will do a very selective review of the playing-related musculoskeletal disorders (PRMDs) associated with one instrument group (the keyboard instruments), focusing on the most played instrument in that group (the piano).

  10. SECURITY MEASURES OF RANDVUL KEYBOARD

    Directory of Open Access Journals (Sweden)

    RADHA DAMODARAM

    2010-05-01

    Phishing is a “con trick” by which consumers are sent email purporting to originate from legitimate services like banks or other financial institutions. Phishing can be thought of as the marriage of social engineering and technology. The goal of a phisher is typically to learn information that allows him to access resources belonging to his victims. The most common type of phishing attack aims to obtain account numbers and passwords used for online banking, in order to either steal money from these accounts or use them as “stepping stones” in money-laundering schemes. In the latter type of situation, the phisher, who may belong to a criminal or terrorist organization, will transfer money between accounts that he controls (without stealing money from either of them) in order to obscure the actual flow of funds from some payer to some payee. Phishing is therefore of concern not only to potential victims and their financial institutions, but also to society at large [1]. In the hacker's world, there is something called a 'key logger'. The purpose of a key logger is to log every key that you type on your keyboard; this includes every piece of personal information you type while you surf the Net, such as logging in to your online banking. Once your password has been logged, the hacker can use your information for their benefit [2]. Using a virtual keyboard that contains randomly generated keys adds another security layer for authenticating yourself to the system. A virtual keyboard works just like a regular keyboard, except that you don't type on your physical keyboard; rather, you use your mouse to type the password on the virtual keyboard [3].
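    The randomly generated layout idea can be sketched as follows (a minimal illustration under assumed requirements, not the system from the paper): if the on-screen key positions are shuffled on every display, a logger that records keystrokes or click coordinates cannot map them back to characters.

    ```python
    import random

    def random_layout(keys="abcdefghijklmnopqrstuvwxyz0123456789", seed=None):
        """Return a randomly ordered list of keys for an on-screen keyboard.

        Because the arrangement changes each time the keyboard is shown,
        logged key positions reveal nothing about the characters entered.
        """
        rng = random.Random(seed)  # seed only for reproducible demos
        layout = list(keys)
        rng.shuffle(layout)
        return layout

    layout = random_layout()
    print(layout[:6])  # a fragment of the first row; different on every call
    ```

    A real virtual keyboard would render these keys as buttons and accept mouse clicks; the shuffle is the part that defeats the key logger.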

  11. CPU Efficiency Enhancement through Offload

    National Research Council Canada - National Science Library

    Naeem Akhter; Iqra Sattar

    2017-01-01

    There are several causes of slowness in personal computers. When a PC is used to regularly execute jobs of a similar nature, it is essential to be aware of the reasons for slowness in order to achieve optimal CPU speed...

  12. Video Player Keyboard Shortcuts: MedlinePlus

    Science.gov (United States)

    MedlinePlus offers a set of accessible keyboard shortcuts for our latest Health videos (https://medlineplus.gov/hotkeys.html).

  13. Comparison of programmable legend keyboard and dedicated keyboard for control of the flight management computer

    Science.gov (United States)

    Crane, Jean M.; Boucek, George P., Jr.; Smith, Wayne D.

    1986-01-01

    A study is described which compares two types of input devices used to operate a flight management computer: a programmable legend (multifunction) keyboard and a conventional (dedicated) keyboard. Pilot performance measures, subjective responses, and a timeline analysis were used in evaluating the two keyboard concepts. A discussion of the factors to be considered in the implementation of a multifunction keyboard is included.

  14. Effect of Keyboard Ownership on Keyboard Performance in a Music Fundamentals Course

    Science.gov (United States)

    Price, Harry E.

    2007-01-01

    This study examined whether requiring undergraduate college nonmusic majors, enrolled in a music fundamentals course, to own a keyboard would enhance their keyboard skills. The course included instruction in reading notation, singing and keyboard skills. There were two groups. One group (experimental) owned keyboards and had them accessible at…

  15. Importance of Explicit Vectorization for CPU and GPU Software Performance

    CERN Document Server

    Dickson, Neil G; Hamze, Firas

    2010-01-01

    Much of the current focus in high-performance computing is on multi-threading, multi-computing, and graphics processing unit (GPU) computing. However, vectorization and non-parallel optimization techniques, which can often be employed additionally, are less frequently discussed. In this paper, we present an analysis of several optimizations done on both central processing unit (CPU) and GPU implementations of a particular computationally intensive Metropolis Monte Carlo algorithm. Explicit vectorization on the CPU and the equivalent, explicit memory coalescing, on the GPU are found to be critical to achieving good performance of this algorithm in both environments. The fully-optimized CPU version achieves a 9x to 12x speedup over the original CPU version, in addition to speedup from multi-threading. This is 2x faster than the fully-optimized GPU version.
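
The benefit of whole-array (vectorized) expressions over element-by-element loops can be illustrated with NumPy; this is a generic SAXPY example for exposition, not the paper's Monte Carlo kernel:

```python
import numpy as np

def saxpy_scalar(a, x, y):
    # One element at a time: the pattern a compiler may fail to auto-vectorize.
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = a * x[i] + y[i]
    return out

def saxpy_vectorized(a, x, y):
    # Whole-array expression: NumPy dispatches to compiled, SIMD-capable loops.
    return a * x + y
```

Both return identical results; for large arrays the vectorized form is typically orders of magnitude faster in Python, for the same reason explicit SIMD helps in compiled code.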

  16. Keyboards: from Typewriters to Tablet Computers

    Directory of Open Access Journals (Sweden)

    Gintautas Grigas

    2014-06-01

    The evolution of Lithuanian keyboards is reviewed. Keyboards are divided into three categories according to how flexibly they can be adapted for typing Lithuanian texts: 1) mechanical typewriter keyboards (hardly adaptable), 2) electromechanical desktop or laptop computer keyboards, and 3) programmable touch-screen tablet computer keyboards (easily adaptable). It is discussed how they were adapted for the Lithuanian language, and solutions in other languages are compared. Both successful and unsuccessful solutions are discussed. The reasons for failures, as well as their negative impact on writing culture and the formation of bad habits in work with computers, are analyzed. Recommendations on how to improve the current situation are presented.

  17. GeantV: from CPU to accelerators

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Arora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Sehgal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The GeantV project aims to research and develop the next-generation simulation software describing the passage of particles through matter. While modern CPU architectures are being targeted first, resources such as GPGPUs, Intel© Xeon Phi, Atom or ARM can no longer be ignored by HEP CPU-bound applications. The proof-of-concept GeantV prototype has been engineered mainly for CPUs having vector units, but we have foreseen from the early stages a bridge to arbitrary accelerators. A software layer consisting of architecture/technology-specific backends currently supports this concept. This approach allows us to abstract out basic types such as scalar/vector, but also to formalize generic computation kernels that transparently use library- or device-specific constructs based on Vc, CUDA, Cilk+ or Intel intrinsics. While the main goal of this approach is portable performance, as a bonus it comes with the insulation of the core application and algorithms from the technology layer. This allows our application to be maintainable in the long term and versatile to changes on the backend side. The paper presents the first results of basket-based GeantV geometry navigation on the Intel© Xeon Phi KNC architecture. We present the scalability and vectorization study, conducted using Intel performance tools, as well as our preliminary conclusions on the use of accelerators for GeantV transport. We also describe the current work and preliminary results for using the GeantV transport kernel on GPUs.

  18. Keyboard cleanliness: a controlled study of the residual effect of chlorhexidine gluconate.

    Science.gov (United States)

    Jones, Rhiannon; Hutton, Anna; Mariyaselvam, Maryanne; Hodges, Emily; Wong, Katherine; Blunt, Mark; Young, Peter

    2015-03-01

    A controlled trial of once daily cleaning of computer keyboards in an intensive care unit was performed comparing 2% chlorhexidine gluconate-70% isopropyl alcohol (CHG) and a chlorine dioxide-based product used as a standard in our hospital. A study before and after the introduction of once daily keyboard cleaning with CHG in the wider hospital was also completed. Cleaning with CHG showed a sustained and significant reduction in bacterial colony forming units compared with the chlorine dioxide-based product, demonstrating its unique advantage of maintaining continuous keyboard cleanliness over time.

  19. Inventions on Soft Keyboards -- A TRIZ Based Analysis

    OpenAIRE

    Mishra, Umakant

    2013-01-01

    Soft keyboards are on-screen representations of a physical keyboard, with alphanumeric characters and other controls. The user operates a soft keyboard with the mouse, a stylus or another pointing device. The soft keys don't have any mechanical components. Soft keyboards are used in many public places for informational purposes, educational systems and financial transaction systems. A soft keyboard is convenient in cases where a hard keyboard is difficult to manage. The soft keyboard...

  20. Inventions on reducing keyboard size: A TRIZ based analysis

    OpenAIRE

    Mishra, Umakant

    2013-01-01

    A conventional computer keyboard consists of as many as 101 keys. The keyboard has several sections, such as a text entry section, a navigation section, and a numeric keypad, each having several keys. The size of the keyboard is a major inconvenience for portable computers, as they cannot be carried easily. Thus there are circumstances which compel reducing the size of a keyboard. Reducing the size of a keyboard, however, leads to several problems. A reduced size keyboard ma...

  1. 10 Inventions on keyboard attachments-A TRIZ based analysis

    OpenAIRE

    Mishra, Umakant

    2013-01-01

    Although the primary objective of the keyboard is to input data into the computer, advanced keyboards keep various other things in mind, such as how to use the same keyboard for various other purposes, or how to use the same keyboard more efficiently by means of various attachments. This objective has led to various inventions on keyboard attachments, some of which are illustrated in this article. This article illustrates 10 inventions on various keyboard attachments from US...

  2. CPU Scheduling Algorithms: A Survey

    Directory of Open Access Journals (Sweden)

    Imran Qureshi

    2014-01-01

    Scheduling is a fundamental function of the operating system. For scheduling, the resources of the system are shared among the processes waiting to be executed. CPU scheduling is a technique by which processes are allocated to the CPU for a specific time quantum. In this paper, different scheduling algorithms are reviewed with respect to parameters such as running time, burst time and waiting time. The reviewed algorithms are First Come First Serve, Shortest Job First, Round Robin, and Priority scheduling.
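
The Round Robin policy reviewed in the survey can be sketched as a short simulation that returns each process's waiting time (all processes are assumed to arrive at time zero; the function name is illustrative):

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate Round Robin CPU scheduling; return per-process waiting times.

    Waiting time = completion time - burst time (arrival assumed at t = 0).
    """
    n = len(burst_times)
    remaining = list(burst_times)
    ready = deque(range(n))          # FIFO ready queue of process indices
    clock = 0
    completion = [0] * n
    while ready:
        p = ready.popleft()
        run = min(quantum, remaining[p])  # run for one quantum or until done
        clock += run
        remaining[p] -= run
        if remaining[p] > 0:
            ready.append(p)               # preempted: back of the queue
        else:
            completion[p] = clock
    return [completion[p] - burst_times[p] for p in range(n)]
```

For example, bursts of 10, 5 and 8 time units with a quantum of 2 give waiting times of 13, 10 and 13.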

  3. 75 FR 41238 - In the Matter of Certain Adjustable Keyboard Support Systems and Components Thereof; Notice of...

    Science.gov (United States)

    2010-07-15

    ... COMMISSION In the Matter of Certain Adjustable Keyboard Support Systems and Components Thereof; Notice of... importation of certain adjustable keyboard support systems and components thereof that infringe certain claims... a limited exclusion order barring entry into the United States of infringing adjustable...

  4. Backlit Keyboard Inspection Using Machine Vision

    Institute of Scientific and Technical Information of China (English)

    Der-Baau Perng; Hsiao-Wei Liu; Po-An Chen

    2015-01-01

    A robust system for backlit keyboard inspection is revealed. The backlit keyboard not only has diverse changeable colors but also has laser-marked keys. The keys on the keyboard can be divided into regions of function keys, normal keys, and number keys. However, there might be several types of defects: incorrect illuminating area, non-uniform illumination of the specified inspection region (IR), and incorrect luminance and intensity of individual keys. Since the illumination features of a backlit keyboard are too complex for a human inspector to check on the production line, an automated inspection system for the backlit keyboard is proposed in this paper. The system was designed into an operation module and an inspection module. A set of image processing methods was developed for inspecting these defects. Experimental results demonstrate the robustness and effectiveness of the proposed system.

  5. Keyboard reaction force and finger flexor electromyograms during computer keyboard work.

    Science.gov (United States)

    Martin, B J; Armstrong, T J; Foulke, J A; Natarajan, S; Klinenberg, E; Serina, E; Rempel, D

    1996-12-01

    This study examines the relationship between forearm EMGs and keyboard reaction forces in 10 people during keyboard tasks performed at a comfortable speed. A linear fit of EMG force data for each person and finger was calculated during static fingertip loading. An average r2 of .71 was observed for forces below 50% of the maximal voluntary contraction (MVC). These regressions were used to characterize EMG data in force units during the typing task. Averaged peak reaction forces measured during typing ranged from 3.33 N (thumb) to 1.84 N (little finger), with an overall average of 2.54 N, which represents about 10% MVC and 5.4 times the key switch make force (0.47 N). Individual peak or mean finger forces obtained from EMG were greater (1.2 to 3.2 times) than force measurements; hence the range of r2 for EMG force was .10 to .46. A closer correspondence between EMG and peak force was obtained using EMG averaged across all fingers. For 5 of the participants the force computed from EMG was within +/-20% of the reaction force. For the other 5 participants forces were overestimated. For 9 participants the difference between EMG estimated force and the reaction force was less than 13% MVC. It is suggested that the difference between EMG and finger force partly results from the amount of muscle load not captured by the measured applied force.
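
The per-person, per-finger calibration described above amounts to fitting an ordinary least-squares line mapping EMG to force, then applying that line to express EMG traces in force units. A minimal sketch (function names are illustrative, not the study's software):

```python
def linear_fit(emg, force):
    """Ordinary least-squares fit force = a * emg + b for one finger/person."""
    n = len(emg)
    mx = sum(emg) / n
    my = sum(force) / n
    sxx = sum((x - mx) ** 2 for x in emg)
    sxy = sum((x - mx) * (y - my) for x, y in zip(emg, force))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def emg_to_force(emg_trace, a, b):
    """Express an EMG trace in force units using the calibration line."""
    return [a * x + b for x in emg_trace]
```

The r-squared of such a fit on the static-loading data is what the study reports as the quality of the calibration (average .71 below 50% MVC).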

  6. Idea units in notes and summaries for read texts by keyboard and pencil in middle childhood students with specific learning disabilities: Cognitive and brain findings.

    Science.gov (United States)

    Richards, Todd; Peverly, Stephen; Wolf, Amie; Abbott, Robert; Tanimoto, Steven; Thompson, Rob; Nagy, William; Berninger, Virginia

    2016-09-01

    Seven children with dyslexia and/or dysgraphia (2 girls, 5 boys, M=11 years) completed fMRI connectivity scans before and after twelve weekly computerized lessons in strategies for reading source material, taking notes, and writing summaries by touch typing or groovy pencils. During brain scanning they completed two reading comprehension tasks-one involving single sentences and one involving multiple sentences. From before to after intervention, fMRI connectivity magnitude changed significantly during sentence level reading comprehension (from right angular gyrus→right Broca's) and during text level reading comprehension (from right angular gyrus→cingulate). Proportions of ideas units in children's writing compared to idea units in source texts did not differ across combinations of reading-writing tasks and modes. Yet, for handwriting/notes, correlations insignificant before the lessons became significant after the strategy instruction between proportion of idea units and brain connectivity at all levels of language in reading comprehension (word-, sentence-, and text) during scanning; but for handwriting/summaries, touch typing/notes, and touch typing/summaries changes in those correlations from insignificant to significant after strategy instruction occurred only at text level reading comprehension during scanning. Thus, handwriting during note-taking may benefit all levels of language during reading comprehension, whereas all other combinations of modes and writing tasks in this exploratory study appear to benefit only the text level of reading comprehension. Neurological and educational significance of the interdisciplinary research findings for integrating reading and writing and future research directions are discussed.

  7. Design and evaluation of a curved computer keyboard.

    Science.gov (United States)

    McLoone, Hugh E; Jacobson, Melissa; Clark, Peter; Opina, Ryan; Hegg, Chau; Johnson, Peter

    2009-12-01

    Conventional, straight keyboards remain the most popular design among keyboards sold and used with personal computers, despite the biomechanical benefits offered by alternative keyboard designs. Some typists indicate that the daunting, medical device-like appearance of these alternative 'ergonomic' keyboards is the reason for not purchasing an alternative keyboard design. The purpose of this research was to create a new computer keyboard that promoted more neutral postures in the wrist while maintaining the approachability and typing performance of a straight keyboard. The design process created a curved alphanumeric keyboard, designed to reduce ulnar deviation, and a built-in, padded wrist-rest to reduce wrist extension. Typing performance, wrist postures and perceptions of fatigue when using the new curved keyboard were compared to those when using a straight keyboard design. The curved keyboard significantly reduced ulnar deviation by 2.2 degrees +/- 0.7. Compared with a straight keyboard without a built-in wrist-rest, the prototype curved keyboard with the built-in padded wrist-rest significantly reduced wrist extension by 6.3 degrees +/- 1.2. Perceived fatigue ratings were significantly lower in the hands, forearms and shoulders with the curved keyboard. The new curved keyboard achieved its design goal of reducing discomfort and promoting more neutral wrist postures while not compromising users' preferences and typing performance.

  8. A keyboard for dynamic display and a system comprising the keyboard

    DEFF Research Database (Denmark)

    2011-01-01

    The invention relates to a dynamic display keyboard comprising a plurality of key elements, each key element comprises a transmitting part capable of transmitting at least a part of light incident on the transmitting part; an elastic mat comprising a plurality of elevated elements capable ... the keyboard is able to provide a tactile feedback in response to a user action directed towards a key of the keyboard.

  9. 10 Inventions on improving keyboard efficiency: A TRIZ based analysis

    OpenAIRE

    Mishra, Umakant

    2013-01-01

    A keyboard is the most important input device for a computer. With the development of technology, a basic keyboard need not remain confined to the basic functionalities of a keyboard; it can go beyond them. There are several inventions which attempt to improve the efficiency of a conventional keyboard. This article illustrates 10 inventions from the US patent database, all of which propose very interesting methods for improving the efficiency of a computer keyboard. Some in...

  10. DESIGN OF KEYBOARD LAYOUT USING CADWORK

    Directory of Open Access Journals (Sweden)

    Udosen, U. J.

    2007-06-01

    CADWORK has been employed for the design of a keyboard layout, which was compared with the QWERTY keyboard layout as evaluated by CADWORK heuristics. The time simulated by CADWORK to type a document supplied as data using the QWERTY layout was 647.79 TMU, while that obtained when using the layout designed by CADWORK was 604.69 TMU. From the standpoint of ergonomic considerations, a keyboard layout which permits the operator to type more efficiently is to be preferred, in order to reduce the medical problem of cumulative trauma disorders. A test set up to determine the acceptability of the CADWORK layout, using participants drawn from experienced keyboard users, indicated a bias towards the QWERTY layout. After a third trial, some participants were found to type faster with the CADWORK layout than others using the QWERTY layout.

  11. Electromyographic activity during typewriter and keyboard use.

    Science.gov (United States)

    Fernström, E; Ericson, M O; Malker, H

    1994-03-01

    This study investigated how ergonomic design influences neck-and-shoulder muscle strain, through keyboard assessment. Muscular activity was measured electromyographically (EMG) from six muscles in the forearms and shoulders of eight experienced typists using each of five different types of keyboard: one mechanical, one electromechanical, and one electronic typewriter; one personal computer/word processor (PC-XT) keyboard; and one angled at 20 degrees in the horizontal plane. The impact on muscular activity of using a palmrest was also studied. The mechanical typewriter induced a higher strain in the forearm and finger muscles than did the modern typewriters and keyboards. These induced no differences in strain on the neck-and-shoulder muscles, except for the right shoulder muscle, which was more active with the electronic typewriter than with the other machines. Using a palmrest did not decrease the strain on the muscles investigated. Use of the 'angled' PC-XT keyboard did not influence the measured muscular load on the forearm and finger muscles compared to typing on an ordinary PC-XT keyboard, but decreased the extensor muscular strain compared to the electronic typewriter.

  12. 78 FR 6835 - Certain Mobile Handset Devices and Related Touch Keyboard Software; Institution of Investigation

    Science.gov (United States)

    2013-01-31

    ... COMMISSION Certain Mobile Handset Devices and Related Touch Keyboard Software; Institution of Investigation... importation, and the sale within the United States after importation of certain mobile handset devices and... the sale within the United States after importation of certain mobile handset devices and...

  13. GPU/CPU Algorithm for Generalized Born/Solvent-Accessible Surface Area Implicit Solvent Calculations.

    Science.gov (United States)

    Tanner, David E; Phillips, James C; Schulten, Klaus

    2012-07-10

    Molecular dynamics methodologies comprise a vital research tool for structural biology. Molecular dynamics has benefited from technological advances in computing, such as multi-core CPUs and graphics processing units (GPUs), but harnessing the full power of hybrid GPU/CPU computers remains difficult. The generalized Born/solvent-accessible surface area implicit solvent model (GB/SA) stands to benefit from hybrid GPU/CPU computers, employing the GPU for the GB calculation and the CPU for the SA calculation. Here, we explore the computational challenges facing GB/SA calculations on hybrid GPU/CPU computers and demonstrate how NAMD, a parallel molecular dynamics program, is able to efficiently utilize GPUs and CPUs simultaneously for fast GB/SA simulations. The hybrid computation principles demonstrated here are generally applicable to parallel applications employing hybrid GPU/CPU calculations.
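
The division of labor described above can be sketched schematically: launch the two energy terms concurrently so the CPU works while the accelerator is busy, then combine the results. The two placeholder energy functions below stand in for the GB (GPU) and SA (CPU) terms and are not NAMD's actual kernels:

```python
from concurrent.futures import ThreadPoolExecutor

def gb_energy_on_gpu(coords):
    # Placeholder for the generalized Born term, the part offloaded to the GPU.
    return sum(x * x for x in coords)

def sa_energy_on_cpu(coords):
    # Placeholder for the solvent-accessible surface area term, kept on the CPU.
    return sum(abs(x) for x in coords)

def gbsa_energy(coords):
    # Submit both terms at once; each future runs on its own worker thread,
    # mimicking the overlap of GPU and CPU work, then combine the results.
    with ThreadPoolExecutor(max_workers=2) as pool:
        gb = pool.submit(gb_energy_on_gpu, coords)
        sa = pool.submit(sa_energy_on_cpu, coords)
        return gb.result() + sa.result()
```

The point of the pattern is that neither device idles while the other computes; the join happens only when both partial results are needed.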

  14. CPU-GPU hybrid accelerating the Zuker algorithm for RNA secondary structure prediction applications.

    Science.gov (United States)

    Lei, Guoqing; Dou, Yong; Wan, Wen; Xia, Fei; Li, Rongchun; Ma, Meng; Zou, Dan

    2012-01-01

    Prediction of ribonucleic acid (RNA) secondary structure remains one of the most important research areas in bioinformatics. The Zuker algorithm is one of the most popular methods of free energy minimization for RNA secondary structure prediction. Thus far, few studies have been reported on the acceleration of the Zuker algorithm on general-purpose processors or on extra accelerators such as Field Programmable Gate-Array (FPGA) and Graphics Processing Units (GPU). To the best of our knowledge, no implementation combines both CPU and extra accelerators, such as GPUs, to accelerate the Zuker algorithm applications. In this paper, a CPU-GPU hybrid computing system that accelerates Zuker algorithm applications for RNA secondary structure prediction is proposed. The computing tasks are allocated between CPU and GPU for parallel cooperate execution. Performance differences between the CPU and the GPU in the task-allocation scheme are considered to obtain workload balance. To improve the hybrid system performance, the Zuker algorithm is optimally implemented with special methods for CPU and GPU architecture. Speedup of 15.93× over optimized multi-core SIMD CPU implementation and performance advantage of 16% over optimized GPU implementation are shown in the experimental results. More than 14% of the sequences are executed on CPU in the hybrid system. The system combining CPU and GPU to accelerate the Zuker algorithm is proven to be promising and can be applied to other bioinformatics applications.
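
The task-allocation idea can be sketched as follows. The cubic cost model and the function name are assumptions made for illustration, not the paper's exact scheme; the shortest sequences go to the CPU until it holds roughly its share of the total estimated work:

```python
def split_workload(sequence_lengths, cpu_fraction):
    """Partition sequences (by index) between CPU and GPU.

    Zuker-style folding cost grows quickly with sequence length, so each
    sequence's cost is approximated here as length**3 (an assumed model).
    The shortest sequences are assigned to the CPU until it carries about
    `cpu_fraction` of the total estimated work; the rest go to the GPU.
    """
    order = sorted(range(len(sequence_lengths)), key=lambda i: sequence_lengths[i])
    total = sum(n ** 3 for n in sequence_lengths)
    cpu, gpu, cpu_work = [], [], 0
    for i in order:
        cost = sequence_lengths[i] ** 3
        if cpu_work + cost <= cpu_fraction * total:
            cpu.append(i)
            cpu_work += cost
        else:
            gpu.append(i)
    return cpu, gpu
```

Tuning `cpu_fraction` to the measured CPU/GPU performance ratio is what produces the workload balance the paper reports (over 14% of sequences on the CPU).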

  15. Warning: This keyboard will deconstruct--the role of the keyboard in skilled typewriting.

    Science.gov (United States)

    Crump, Matthew J C; Logan, Gordon D

    2010-06-01

    Skilled actions are commonly assumed to be controlled by precise internal schemas or cognitive maps. We challenge these ideas in the context of skilled typing, where prominent theories assume that typing is controlled by a well-learned cognitive map that plans finger movements without feedback. In two experiments, we demonstrate that online physical interaction with the keyboard critically mediates typing skill. Typists performed single-word and paragraph typing tasks on a regular keyboard, a laser-projection keyboard, and two deconstructed keyboards, made by removing successive layers of a regular keyboard. Averaged over the laser and deconstructed keyboards, response times for the first keystroke increased by 37%, the interval between keystrokes increased by 120%, and error rate increased by 177%, relative to those of the regular keyboard. A schema view predicts no influence of external motor feedback, because actions could be planned internally with high precision. We argue that the expert knowledge mediating action control emerges during online interaction with the physical environment.

  16. Portable Android Dari and Pashto Soft Keyboard

    Science.gov (United States)

    2013-08-01

    generating an Android application package (APK) file and deploying it to an Android device. Note that this APK file is not an independent runnable... AndroidManifest.xml The following is the AndroidManifest.xml file. <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" ... Dari keyboard file. <?xml version="1.0" encoding="utf-8"?> <Keyboard xmlns:android="http://schemas.android.com/apk/res/android" xmlns:ask="http

  17. Stretch not flex: programmable rubber keyboard

    Science.gov (United States)

    Xu, Daniel; Tairych, Andreas; Anderson, Iain A.

    2016-01-01

    Stretchability is a property that brings versatility and design freedom to human interface devices. We present a soft, flexible and stretchable keyboard made from a dielectric elastomer sensor sheet. Using a multi-frequency capacitance sensing technique based on a transmission line model, we demonstrate how this keyboard can detect touch in two dimensions and can be reprogrammed to increase the number of keys and to adopt different layouts, all without adding any new wires or connections or modifying the hardware. The method is efficient and scalable for large sensing systems with multiple degrees of freedom.

  18. Design of Double Keyboard Acquisition Module with Brightness Adjustment Function

    Institute of Scientific and Technical Information of China (English)

    姚毅; 雷凌毅

    2014-01-01

    We designed a double keyboard acquisition module with brightness adjustment and brightness memory functions for a certain type of display and control terminal. The module combines PWM output and EEPROM memory functions to integrate double keyboard acquisition and display brightness control into one module. Through a virtual PS/2 keyboard and the CPU keyboard controller, three goals are achieved: first, responding to key input from two types of keyboard through one PS/2 keyboard interface at the same time; second, plug-and-play of the PS/2 keyboard under the WINDOWS system; third, independent screen brightness control and a brightness memory function.

  19. A low-cost MRI compatible keyboard

    DEFF Research Database (Denmark)

    Jensen, Martin Snejbjerg; Heggli, Ole Adrian; Alves da Mota, Patricia

    2017-01-01

    , presenting a challenging environment for playing an instrument. Here, we present an MRI-compatible polyphonic keyboard with a materials cost of 850 $, designed and tested for safe use in 3T (three Tesla) MRI-scanners. We describe design considerations, and prior work in the field. In addition, we provide...

  20. Efficient simulation of diffusion-based choice RT models on CPU and GPU.

    Science.gov (United States)

    Verdonck, Stijn; Meers, Kristof; Tuerlinckx, Francis

    2016-03-01

    In this paper, we present software for the efficient simulation of a broad class of linear and nonlinear diffusion models for choice RT, using either CPU or graphical processing unit (GPU) technology. The software is readily accessible from the popular scripting languages MATLAB and R (both 64-bit). The speed obtained on a single high-end GPU is comparable to that of a small CPU cluster, bringing standard statistical inference of complex diffusion models to the desktop platform.
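
The class of models this software simulates can be illustrated with a minimal, CPU-only Euler-Maruyama sketch of a single drift-diffusion trial; the function name and parameterization are illustrative, not the package's API:

```python
import random

def simulate_ddm(drift, boundary, dt=0.001, noise=1.0, seed=0, max_t=10.0):
    """Simulate one trial of a simple drift-diffusion model.

    Evidence x starts at 0 and accumulates with mean rate `drift` plus
    Gaussian noise until it crosses +boundary (choice 1) or -boundary
    (choice 0); the crossing time is the simulated response time.
    """
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    step_sd = noise * dt ** 0.5  # diffusion noise scales with sqrt(dt)
    while abs(x) < boundary and t < max_t:
        x += drift * dt + rng.gauss(0.0, step_sd)
        t += dt
    choice = 1 if x >= boundary else 0
    return choice, t
```

Statistical inference for such models requires simulating many thousands of these trials per parameter proposal, which is why moving the inner loop to GPU threads pays off.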

  1. The effects of split keyboard geometry on upper body postures.

    Science.gov (United States)

    Rempel, David; Nathan-Roberts, Dan; Chen, Bing Yune; Odell, Dan

    2009-01-01

    Split, gabled keyboard designs can prevent or improve upper extremity pain among computer users; the mechanism appears to involve the reduction of awkward wrist and forearm postures. This study evaluated the effects of changes in opening angle, slope and height (independent variables) of a gabled (14 degrees) keyboard on typing performance and upper extremity postures. Twenty-four experienced touch typists typed on seven keyboard conditions while typing speed and right and left wrist extension, ulnar deviation, forearm pronation and elbow position were measured using a motion tracking system. The lower keyboard height led to a lower elbow height (i.e. less shoulder elevation) and less wrist ulnar deviation and forearm pronation. Keyboard slope and opening angle had mixed effects on wrist extension and ulnar deviation, forearm pronation and elbow height and separation. The findings suggest that in order to optimise wrist, forearm and upper arm postures on a split, gabled keyboard, the keyboard should be set to the lowest height of the two heights tested. Keyboard slopes in the mid-range of those tested, 0 degrees to -4 degrees, provided the least wrist extension, forearm pronation and the lowest elbow height. A keyboard opening angle in the mid-range of those tested, 15 degrees, may provide the best balance between reducing ulnar deviation while not increasing forearm pronation or elbow separation. These findings may be useful in the design of computer workstations and split keyboards. The geometry of a split keyboard can influence wrist and forearm postures. The findings of this study are relevant to the positioning and adjustment of split keyboards. The findings will also be useful for engineers who design split keyboards.

  2. Piano crossing - walking on a keyboard

    OpenAIRE

    Bojan Kverh; Matevz Lipanje; Borut Batagelj; Franc Solina

    2015-01-01

    Piano Crossing is an interactive art installation which turns a pedestrian crossing marked with white stripes into a piano keyboard so that pedestrians can generate music by walking over it. Matching tones are generated when a pedestrian is over a particular stripe or key. A digital camera is directed at the crossing from above. A special computer vision application was developed that maps the stripes of the pedestrian crossing to piano keys and which detects over which key is the center of g...

  3. CPU and GPU (Cuda Template Matching Comparison

    Directory of Open Access Journals (Sweden)

    Evaldas Borcovas

    2014-05-01

    Image processing, computer vision and other complicated optical information processing algorithms require large resources, and it is often desired to execute them in real time. It is hard to fulfill such requirements with a single CPU. NVidia's CUDA technology enables the programmer to use the GPU resources in the computer. The current research was made with an Intel Pentium Dual-Core T4500 2.3 GHz processor with 4 GB DDR3 RAM (CPU I) and an NVidia GeForce GT320M CUDA-compatible graphics card (GPU I), and with an Intel Core i5-2500K 3.3 GHz processor with 4 GB DDR3 RAM (CPU II) and an NVidia GeForce GTX 560 CUDA-compatible graphics card (GPU II). Additional libraries, OpenCV 2.1 and the CUDA-compatible OpenCV 2.4.0, were used for the testing. The main tests were made with the standard function MatchTemplate from the OpenCV libraries. The algorithm uses a main image and a template, and the influence of these factors was tested: the main image and template were resized, and the algorithm's computing time and performance in Gtpix/s were measured. According to the information obtained from the research, GPU computing on the hardware mentioned earlier is up to 24 times faster when processing a big amount of information. When the images are small, the performance of the CPU and the GPU is not significantly different. The choice of template size influences computation on the CPU. The difference in computing time between the GPUs can be explained by the number of cores they have.
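
What MatchTemplate computes in its squared-difference mode can be sketched in pure NumPy as a naive sliding-window search; this illustrates the benchmarked operation, not OpenCV's optimized implementation:

```python
import numpy as np

def match_template_ssd(image, template):
    """Slide the template over the image; return the best-match corner and score.

    Uses the sum of squared differences (SSD), the criterion behind OpenCV's
    TM_SQDIFF method. A smaller SSD means a better match; 0 is a perfect match.
    """
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = None, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            ssd = float(((patch - template) ** 2).sum())
            if best is None or ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos, best
```

The doubly nested search over every candidate position is exactly the kind of embarrassingly parallel workload that maps well onto GPU threads, which is what the measured speedups reflect.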

  4. The internal structure of university students’ keyboard skills

    Directory of Open Access Journals (Sweden)

    Grabowski, Joachim

    2008-01-01

    Nowadays, university students do not necessarily acquire their typing skills through systematic touch-typing training, like professional typists. But then, how are the resulting typing skills structured? To reveal the composition of today's typical typing skills, 32 university students performed three writing tasks: copying from memory, copying from text, and generating from memory. Variables of keyboard operation that presumably reflect typing abilities and strategies were recorded with ScriptLog, a keystroke-logging software; these include typing speed, keyboard efficiency, and keyboard activity beyond the keypresses that become visible in the final text. Factor analyses reveal three components of typing behavior per task. Their clearest interpretations relate to keyboard activity/efficiency and typing speed. Across tasks, typing speed is the strongest individually stable facet of keyboard operation. In summary, university students' keyboard behavior is a multi-faceted skill rather than the mere mastery of a touch-typing method.

  5. SATA Controller Into a Space CPU

    Science.gov (United States)

    De Nino, M.; Titomanlio, D.; Calvanese, R.; Capuano, G.; Rovatti, M.

    2014-08-01

    This paper presents a project, funded by ESA and named "SATA Controller into a Space CPU", aimed at starting a development activity to spin in the SATA technology to the space market. Space applications could benefit from the adoption of the SATA protocol as the interface layer between the host controller and the mass memory module. Currently, no space-proven implementation of the SATA specification exists.

  6. 10 Inventions on special type of keyboards -A study based on US patents

    OpenAIRE

    Mishra, Umakant

    2013-01-01

    A keyboard is the most important input device for a computer. It is used with various types and sizes of computers, but the same standard keyboard will not work efficiently with different types of computers in different environments; there is a need to develop special keyboards to meet special requirements. This article illustrates 10 inventions on special types of keyboards. These special keyboards are used in special computers or in computers used for special purposes. A special keyboard is to be ...

  7. Optimization strategies for parallel CPU and GPU implementations of a meshfree particle method

    CERN Document Server

    Domínguez, Jose M; Gómez-Gesteira, Moncho

    2011-01-01

    Much of the current focus in high performance computing (HPC) for computational fluid dynamics (CFD) deals with grid based methods. However, parallel implementations for new meshfree particle methods such as Smoothed Particle Hydrodynamics (SPH) are less studied. In this work, we present optimizations for both central processing unit (CPU) and graphics processing unit (GPU) of a SPH method. These optimization strategies can be further applied to many other meshfree methods. The obtained performance for each architecture and a comparison between the most efficient implementations for CPU and GPU are shown.

  8. MIDI Keyboards: Memory Skills and Building Values toward School.

    Science.gov (United States)

    Marcinkiewicz, Henryk R.; And Others

    This document summarizes the results of a study which evaluated whether school instruction with Musical Instrument Digital Interface (MIDI) keyboards improves memory skill and whether school instruction with MIDI keyboards improves sentiments toward school and instructional media. Pupils in early elementary grades at five schools were evaluated…

  9. Older Amateur Keyboard Players Learning for Self-Fulfilment

    Science.gov (United States)

    Taylor, Angela

    2011-01-01

    This article investigates self-reported music learning experiences of 21 older amateur pianists and electronic keyboard players. Significant changes in their lives and the encouragement of friends were catalysts for returning to or taking up a keyboard instrument as an adult, although not all returners had positive memories of learning a keyboard…

  11. Secure Authentication using Anti-Screenshot Virtual Keyboard

    Directory of Open Access Journals (Sweden)

    Ankit Parekh

    2011-09-01

    Full Text Available With the development of electronic commerce, many companies have established their own online trading platforms, such as e-tickets, online booking, and online shopping. A virtual keyboard is used for authentication on such web-based platforms. However, virtual keyboards still suffer from numerous weaknesses that an attacker can take advantage of, including click-based screenshot capturing and over-the-shoulder spoofing. To overcome these drawbacks, we have designed a virtual keyboard that is generated dynamically each time the user accesses the website. After each click event, the arrangement of the keys of the virtual keyboard is shuffled. The position of the keys is hidden, so that a user standing behind cannot see the pressed key. Our proposed approach makes the usage of virtual keyboards even more secure for users and makes it tougher for malware programs to capture authentication details.
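The per-click shuffling described above can be sketched in a few lines. This is a hedged illustration of the scheme, not the authors' implementation; all names here are invented:

```python
import random

# Sketch of a dynamically shuffled virtual keyboard: the server
# generates a fresh random key arrangement for each session, and
# reshuffles after every click so no stable layout can be captured.
KEYS = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")

def new_layout(rng=random):
    layout = KEYS[:]
    rng.shuffle(layout)
    return layout

def click(layout, position):
    """Resolve a click at a grid position to the real character,
    then return a freshly shuffled layout for the next keystroke."""
    char = layout[position]
    return char, new_layout()
```

Because the mapping from screen position to character changes after every keystroke, a screenshot taken at click time reveals only a single key, and a recorded click position is meaningless under the next layout.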

  12. Piano Crossing – Walking on a Keyboard

    Directory of Open Access Journals (Sweden)

    Bojan Kverh

    2010-11-01

    Full Text Available Piano Crossing is an interactive art installation which turns a pedestrian crossing marked with white stripes into a piano keyboard, so that pedestrians can generate music by walking over it. Matching tones are created when a pedestrian steps on a particular stripe or key. A digital camera is directed at the crossing from above. A special computer vision application was developed, which maps the stripes of the pedestrian crossing to piano keys and detects, by means of an image, over which key the center of gravity of each pedestrian is placed at any given moment. Black stripes represent the black piano keys. The application consists of two parts: (1) initialization, where the model of the abstract piano keyboard is mapped to the image of the pedestrian crossing, and (2) the detection of pedestrians at the crossing, so that musical tones can be generated according to their locations. The art installation Piano Crossing was presented to the public for the first time during the 51st Jazz Festival in Ljubljana in July 2010.

  13. Piano Crossing – Walking on a Keyboard

    Directory of Open Access Journals (Sweden)

    Franc Solina

    2010-04-01

    Full Text Available Piano Crossing is an interactive art installation which turns a pedestrian crossing marked with white stripes into a piano keyboard, so that pedestrians can generate music by walking over it. Matching tones are created when a pedestrian steps on a particular stripe or key. A digital camera is directed at the crossing from above. A special computer vision application was developed, which maps the stripes of the pedestrian crossing to piano keys and detects, by means of an image, over which key the center of gravity of each pedestrian is placed at any given moment. Black stripes represent the black piano keys. The application consists of two parts: (1) initialization, where the model of the abstract piano keyboard is mapped to the image of the pedestrian crossing, and (2) the detection of pedestrians at the crossing, so that musical tones can be generated according to their locations. The art installation Piano Crossing was presented to the public for the first time during the 51st Jazz Festival in Ljubljana in July 2010.

  14. RSVP Keyboard: An EEG Based Typing Interface.

    Science.gov (United States)

    Orhan, Umut; Hild, Kenneth E; Erdogmus, Deniz; Roark, Brian; Oken, Barry; Fried-Oken, Melanie

    2012-01-01

    Humans need communication. The desire to communicate remains one of the primary issues for people with locked-in syndrome (LIS). While many assistive and augmentative communication systems that use various physiological signals are available commercially, the need is not satisfactorily met. Brain interfaces, in particular those that utilize event-related potentials (ERPs) in electroencephalography (EEG) to detect the intent of a person noninvasively, are emerging as a promising communication interface to meet this need where existing options are insufficient. Existing brain interfaces for typing use many repetitions of the visual stimuli in order to increase accuracy at the cost of speed. However, speed is also crucial and is an integral part of peer-to-peer communication; a message that is not delivered in a timely manner often loses its importance. Consequently, we utilize rapid serial visual presentation (RSVP) in conjunction with language models in order to assist letter selection during the brain-typing process, with the final goal of developing a system that achieves high accuracy and speed simultaneously. This paper presents initial results from the RSVP Keyboard system that is under development. These initial results on healthy and locked-in subjects show that single-trial or few-trial accurate letter selection may be possible with the RSVP Keyboard paradigm.
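The pairing of RSVP evidence with a language model can be illustrated as a simple Bayesian fusion step: multiply the per-letter EEG likelihood by the language-model prior and normalize. The probabilities below are invented for illustration; the real system's classifier is considerably more elaborate:

```python
def fuse(eeg_likelihood, lm_prior):
    """Combine EEG evidence P(signal|letter) with a language-model
    prior P(letter|context) and normalize to a posterior."""
    post = {c: eeg_likelihood.get(c, 1e-9) * lm_prior.get(c, 1e-9)
            for c in set(eeg_likelihood) | set(lm_prior)}
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

# After typing "TH", the language model strongly favours 'E';
# weak single-trial EEG evidence is then enough to select it.
eeg = {'E': 0.4, 'A': 0.35, 'O': 0.25}    # noisy single-trial scores
prior = {'E': 0.7, 'A': 0.2, 'O': 0.1}    # illustrative P(letter | "TH")
posterior = fuse(eeg, prior)
chosen = max(posterior, key=posterior.get)
```

This is why fewer stimulus repetitions can suffice: the prior supplies the confidence that extra EEG trials would otherwise have to provide.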

  15. Virtual Keyboard for Hands-Free Operations

    Science.gov (United States)

    Abou-Ali, Abdel-Latief; Porter, William A.

    1996-01-01

    The measurement of direction of gaze (d.o.g.) has been used for clinical purposes to detect illnesses such as nystagmus, unusual fixation movements, and many others. It is also used to determine points of interest in objects. In this study we employ the measurement of d.o.g. as a computer interface. The interface provides a full keyboard as well as a mouse function. Such an interface is important to computer users with paralysis, or in environments where a hands-free machine interface is required. The study utilizes the commercially available ISCAN Model RK426TC headset, which consists of an infrared (IR) source and an IR camera to sense deflection of the illuminating beam. It also incorporates an image processing package that provides the position of the pupil as well as the pupil size. The study shows the feasibility of implementing a full keyboard, together with some control functions, imaged on a head-mounted monitor screen. This document is composed of four sections: (1) The Nature of the Equipment; (2) The Calibration Process; (3) Running Process; and (4) Conclusions.

  16. Using all of your CPU's in HIPE

    Science.gov (United States)

    Jacobson, J. D.; Fadda, D.

    2012-09-01

    Modern computer architectures increasingly feature multi-core CPUs. For example, the MacBook Pro features Intel quad-core i7 processors. Through hyper-threading, where each core can execute two threads simultaneously, the quad-core i7 can support eight simultaneous processing threads. All this on your laptop! This CPU power can now be put into service by scientists to perform data reduction tasks, but only if the software has been designed to take advantage of multiple-processor architectures. Up to now, the software written for Herschel data reduction (HIPE), written in Jython and Java, has been single-threaded and can utilize only a single processor, so users of HIPE get no advantage from the additional processors. Why not put all of the CPU resources to work reducing your data? We present a multi-threaded software application that corrects long-term transients in the signal from the PACS unchopped spectroscopy line-scan mode. In this poster, we present a multi-threaded software framework that achieves performance improvements through parallel execution. We show how a task to correct transients in the PACS spectroscopy pipeline for the unchopped line-scan mode has been threaded. This computation-intensive task uses either a one-parameter or a three-parameter exponential function to characterize the transient, and a Java implementation of MINPACK, translated from the C (Moshier) and IDL (Markwardt) versions by the authors, to optimize the correction parameters. We also explain how to determine whether a task can benefit from threading (Amdahl's law), and whether it is safe to thread. The design and implementation, using the completion service of the Java concurrency package, are described. Pitfalls, timing bugs, thread safety, resource control, testing, and performance improvements are described and plotted.
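The Java completion-service pattern described above (submit independent jobs, collect results as they finish) has a direct analogue in Python's concurrent.futures. The sketch below uses a trivial placeholder "fit" in place of the real exponential transient correction; the function names are invented:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Analogue of Java's ExecutorCompletionService: submit one independent
# job per detector pixel and consume results in completion order,
# keeping all cores busy. correct_transient is a stand-in for the real
# one-/three-parameter exponential fit.
def correct_transient(pixel_id, signal):
    baseline = sum(signal) / len(signal)   # trivial placeholder "fit"
    return pixel_id, [s - baseline for s in signal]

def correct_all(signals, workers=8):
    corrected = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(correct_transient, pid, sig)
                   for pid, sig in signals.items()]
        for fut in as_completed(futures):  # results in completion order
            pid, fixed = fut.result()
            corrected[pid] = fixed
    return corrected
```

Per Amdahl's law, the speedup is bounded by the serial fraction of the pipeline, so only the per-pixel fitting stage is worth threading this way.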

  17. Keyboard Proficiency: An Essential Skill in a Technological Age. Number 2.

    Science.gov (United States)

    Gillmon, Eve

    A structured keyboard skills training scheme for students in England should be included within school curricula. Negative attitudes toward keyboard training prevail in schools although employers value keyboard application skills. There are several reasons why keyboard proficiency, which facilitates the efficient input and retrieval of text and…

  18. The relationship between keyboarding skills and self-regulated ...

    African Journals Online (AJOL)

    Erna Kinsey

    The results of the empirical study indicated that self-regulated learners ... Table 1: Difference between experimental and control groups in keyboarding and writing skills ... Applying educational psychology in the classroom, 4th edn.

  19. The Most Advantageous Bangla Keyboard Layout Using Data Mining Technique

    CERN Document Server

    Masum, Abdul Kadar Muhammad; Kamruzzaman, S M

    2010-01-01

    The Bangla alphabet has a large number of letters, which makes it complicated to type quickly on a Bangla keyboard. The proposed keyboard maximizes the operator's speed, as they can type with both hands in parallel. The association rule of data mining is used here to distribute the Bangla characters on the keyboard. The frequencies of monographs, digraphs, and trigraphs, derived from a data warehouse, are analyzed, and the association rule of data mining is then used to distribute the Bangla characters in the layout. Experimental results on several data sets show the effectiveness of the proposed approach. This paper presents an optimal Bangla keyboard layout, which distributes the load equally on both hands, maximizing ease and minimizing effort.

  20. Optimal Bangla Keyboard Layout using Data Mining Technique

    CERN Document Server

    Kamruzzaman, S M; Masum, Abdul Kadar Muhammad; Hassan, Md Mahadi

    2010-01-01

    This paper presents an optimal Bangla keyboard layout, which distributes the load equally on both hands, maximizing ease and minimizing effort. The Bangla alphabet has a large number of letters, which makes it difficult to type quickly on a Bangla keyboard. Our proposed keyboard maximizes the operator's speed, as they can type with both hands in parallel. Here we use the association rule of data mining to distribute the Bangla characters on the keyboard. First, we analyze the frequencies of monographs, digraphs, and trigraphs derived from a data warehouse, and then use the association rule of data mining to distribute the Bangla characters in the layout. Experimental results on several data sets show the effectiveness of the proposed approach.
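The frequency-analysis step common to both papers can be sketched without the association-rule machinery: count character and digraph frequencies in a corpus, then alternate the most frequent letters between the two hands so that common sequences tend to alternate hands. This is a simplification of the papers' approach, using Latin sample text in place of a Bangla corpus:

```python
from collections import Counter

def digraph_counts(corpus):
    # Frequencies of two-letter sequences (digraphs) in the corpus.
    return Counter(corpus[i:i + 2] for i in range(len(corpus) - 1)
                   if corpus[i:i + 2].isalpha())

def assign_hands(corpus):
    """Place the most frequent letters alternately on the left and
    right hand, so high-frequency text tends to alternate hands."""
    freq = Counter(c for c in corpus if c.isalpha())
    hands = {'left': [], 'right': []}
    for i, (letter, _) in enumerate(freq.most_common()):
        hands['left' if i % 2 == 0 else 'right'].append(letter)
    return hands

sample = "the quick brown fox jumps over the lazy dog the end"
layout = assign_hands(sample)
```

The papers go further, using digraph and trigraph association rules so that specific frequent letter pairs, not just frequent letters, end up on opposite hands.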

  1. High Fidelity Control and Display Unit Simulation Method

    Institute of Scientific and Technical Information of China (English)

    廖峰; 郑书朋; 侯伟钦; 姜洪洲

    2011-01-01

    In order to improve the pilot's visual experience and the simulation fidelity of the Control and Display Unit (CDU) in flight simulator research, this paper designs a membrane keyboard hardware circuit and a keyboard scan-code program based on an embedded PC104 CPU and an 8255 programmable interface card, realizing a high-fidelity hardware simulation of the CDU keyboard. Meanwhile, the CDU simulation pages are built using object-oriented simulation techniques, which effectively solves the tiresome generation and management problem of the page system. Simulation experiments show that the method realistically implements the input and display of routing data and performance parameters in flight simulation.

  2. Portable Computer Keyboard For Use With One Hand

    Science.gov (United States)

    Friedman, Gary L.

    1992-01-01

    Data-entry device held in one hand and operated with five fingers. Contains seven keys. Letters, numbers, punctuation, and cursor commands keyed into computer by pressing keys in various combinations. Device called "data egg" used where standard typewriter keyboard unusable or unavailable. Contains micro-processor and 32-Kbyte memory. Captures text and transmits it to computer. Concept extended to computer mouse. Especially useful to handicapped or bedridden people who find it difficult or impossible to operate standard keyboards.

  3. Evaluation Model for Initiative Degree of Domestic CPU Production

    Institute of Scientific and Technical Information of China (English)

    朱帅; 吴玲达; 郭静

    2016-01-01

    With the continuous development of China-made central processing unit (CPU) design and manufacturing technology, domestic CPUs have entered mass production and initial deployment in military and governmental authorities. To identify whether a China-made CPU is produced independently, and to eliminate the security risks caused by implanted backdoors, this paper analyzes the general process of CPU design and production and establishes an indicator system for the degree of independent CPU production. Based on the AHP (Analytic Hierarchy Process) and Delphi methods, it designs an evaluation model for the degree of independent production of China-made CPUs, specifies the strategy for determining indicator weights and the scoring basis, and thus obtains the degree of independent production of the CPU under evaluation.
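The AHP step named above derives indicator weights from a pairwise-comparison matrix via its principal eigenvector. A small self-contained sketch of that prioritization step follows; the 3x3 judgments are illustrative, not the paper's actual indicator values:

```python
def ahp_weights(matrix, iters=100):
    """Principal-eigenvector weights of a pairwise comparison matrix
    (the standard AHP prioritization step), via power iteration."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w_new = [sum(matrix[i][j] * w[j] for j in range(n))
                 for i in range(n)]
        s = sum(w_new)
        w = [x / s for x in w_new]          # renormalize each step
    return w

# Illustrative pairwise judgments for three hypothetical indicators,
# e.g. design autonomy vs. fabrication autonomy vs. supply-chain control.
pairwise = [
    [1,     3,     5],
    [1 / 3, 1,     3],
    [1 / 5, 1 / 3, 1],
]
weights = ahp_weights(pairwise)
```

In a full AHP application, a consistency-ratio check on the judgment matrix would follow before the weights are accepted.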

  4. The Interwoven Evolution of the Early Keyboard and Baroque Culture

    Directory of Open Access Journals (Sweden)

    Rachel Stevenson

    2016-04-01

    Full Text Available The purpose of this paper is to analyze the impact that Baroque society had on the development of the early keyboard. While the main timeframe is the Baroque, a few references are made to the late Medieval period in determining why the keyboard emerged more prominently on the musical scene. As Baroque society developed and new genres formed, different keyboard instruments served vital roles unique to their construction. These new roles also affected the way music was written for the keyboard. Advantages and disadvantages of each instrument are discussed, providing an analysis of what would have been either accepted or rejected by Baroque culture. While music is the main focus, other fine arts are mentioned, including architecture, poetry, politics, and others. My research includes primary and secondary resources retrieved from databases provided by Cedarville University. By demonstrating the relationship between Baroque society and early keyboard development, roles, and music, this will be a helpful source in furthering the pianist's understanding of the instrument he or she plays. It also serves pedagogical purposes: its analysis of context can help a student interpret a piece written during this time period for these early keyboard instruments.

  5. Length-Bounded Hybrid CPU/GPU Pattern Matching Algorithm for Deep Packet Inspection

    Directory of Open Access Journals (Sweden)

    Yi-Shan Lin

    2017-01-01

    Full Text Available Since frequent communication between applications takes place in high-speed networks, deep packet inspection (DPI) plays an important role in network application awareness. The signature-based network intrusion detection system (NIDS) contains a DPI technique that examines incoming packet payloads by employing a pattern matching algorithm that dominates the overall inspection performance. Existing studies focused on implementing efficient pattern matching algorithms by parallel programming on software platforms because of the advantages of lower cost and higher scalability; either the central processing unit (CPU) or the graphics processing unit (GPU) was involved. Our studies focused on designing a pattern matching algorithm based on the cooperation between both CPU and GPU. In this paper, we present an enhanced design for our previous work, a length-bounded hybrid CPU/GPU pattern matching algorithm (LHPMA). In the preliminary experiment, the performance and a comparison with the previous work are presented, and the experimental results show that the LHPMA can achieve not only effective CPU/GPU cooperation but also higher throughput than the previous method.

  6. A Hybrid CPU/GPU Pattern-Matching Algorithm for Deep Packet Inspection.

    Directory of Open Access Journals (Sweden)

    Chun-Liang Lee

    Full Text Available The large quantities of data now being transferred via high-speed networks have made deep packet inspection indispensable for security purposes. Scalable and low-cost signature-based network intrusion detection systems have been developed for deep packet inspection for various software platforms. Traditional approaches that only involve central processing units (CPUs) are now considered inadequate in terms of inspection speed. Graphic processing units (GPUs) have superior parallel processing power, but transmission bottlenecks can reduce optimal GPU efficiency. In this paper we describe our proposal for a hybrid CPU/GPU pattern-matching algorithm (HPMA) that divides and distributes the packet-inspecting workload between a CPU and GPU. All packets are initially inspected by the CPU and filtered using a simple pre-filtering algorithm, and packets that might contain malicious content are sent to the GPU for further inspection. Test results indicate that in terms of random payload traffic, the matching speed of our proposed algorithm was 3.4 times and 2.7 times faster than those of the AC-CPU and AC-GPU algorithms, respectively. Further, HPMA achieved higher energy efficiency than the other tested algorithms.
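The CPU pre-filter idea is easy to sketch: a cheap test routes obviously clean payloads past the expensive matcher. In the sketch below (invented names, Python string search standing in for both stages; in HPMA the full match runs on the GPU), the filter simply checks whether any pattern's first byte occurs in the payload:

```python
# Sketch of the CPU pre-filter + full-match split described above.
# The cheap filter checks whether any pattern's first character occurs
# in the payload; only "suspicious" payloads reach the full matcher,
# which in HPMA would be the GPU-side Aho-Corasick stage.
def build_prefilter(patterns):
    return {p[0] for p in patterns}

def inspect(payloads, patterns):
    first_bytes = build_prefilter(patterns)
    hits = []
    for payload in payloads:
        if not any(b in first_bytes for b in set(payload)):
            continue                                 # fast path: clean
        if any(p in payload for p in patterns):      # expensive match
            hits.append(payload)
    return hits
```

The payoff is that most benign traffic never touches the expensive stage, which is what hides the CPU-to-GPU transfer bottleneck the abstract mentions.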

  7. A Hybrid CPU/GPU Pattern-Matching Algorithm for Deep Packet Inspection.

    Science.gov (United States)

    Lee, Chun-Liang; Lin, Yi-Shan; Chen, Yaw-Chung

    2015-01-01

    The large quantities of data now being transferred via high-speed networks have made deep packet inspection indispensable for security purposes. Scalable and low-cost signature-based network intrusion detection systems have been developed for deep packet inspection for various software platforms. Traditional approaches that only involve central processing units (CPUs) are now considered inadequate in terms of inspection speed. Graphic processing units (GPUs) have superior parallel processing power, but transmission bottlenecks can reduce optimal GPU efficiency. In this paper we describe our proposal for a hybrid CPU/GPU pattern-matching algorithm (HPMA) that divides and distributes the packet-inspecting workload between a CPU and GPU. All packets are initially inspected by the CPU and filtered using a simple pre-filtering algorithm, and packets that might contain malicious content are sent to the GPU for further inspection. Test results indicate that in terms of random payload traffic, the matching speed of our proposed algorithm was 3.4 times and 2.7 times faster than those of the AC-CPU and AC-GPU algorithms, respectively. Further, HPMA achieved higher energy efficiency than the other tested algorithms.

  8. Keyboard with Universal Communication Protocol Applied to CNC Machine

    Directory of Open Access Journals (Sweden)

    Mejía-Ugalde Mario

    2014-04-01

    Full Text Available This article describes the use of a universal communication protocol for a microcontroller-based industrial keyboard applied to a computer numerically controlled (CNC) machine. The main difference among keyboard manufacturers is that each has its own source-code programming, producing a different communication protocol and generating improper interpretations of the established functions. This results in commercial industrial keyboards that are expensive and incompatible when connected to different machines. In the present work, the protocol allows the designed universal keyboard and the standard PC keyboard to be connected at the same time; it is compatible with all computers through USB, AT, or PS/2 communications, for use in CNC machines, with extension to other machines such as robots, blowing machines, injection molding machines, and others. The advantages of this design include easy reprogramming, decreased costs, manipulation of various machine functions, and easy expansion of input and output signals. The results of performance tests were satisfactory, because each key can be programmed and reprogrammed in different ways, generating codes for different functions depending on the application where it is to be used.

  9. Heterogeneous Gpu&Cpu Cluster For High Performance Computing In Cryptography

    Directory of Open Access Journals (Sweden)

    Michał Marks

    2012-01-01

    Full Text Available This paper addresses issues associated with distributed computing systems and the application of mixed GPU&CPU technology to data encryption and decryption algorithms. We describe a heterogeneous cluster HGCC formed by two types of nodes: an Intel processor with an NVIDIA graphics processing unit and an AMD processor with an AMD graphics processing unit (formerly ATI), and a novel software framework that hides the heterogeneity of our cluster and provides tools for solving complex scientific and engineering problems. Finally, we present the results of numerical experiments. The considered case study is concerned with parallel implementations of selected cryptanalysis algorithms. The main goal of the paper is to show the wide applicability of the GPU&CPU technology to large-scale computation and data processing.

  10. Hybrid CPU/GPU Integral Engine for Strong-Scaling Ab Initio Methods.

    Science.gov (United States)

    Kussmann, Jörg; Ochsenfeld, Christian

    2017-07-11

    We present a parallel integral algorithm for the two-electron contributions occurring in Hartree-Fock and hybrid density functional theory that allows for strong-scaling parallelization on inhomogeneous compute clusters. With a particular focus on graphics processing units (GPUs), we show that our approach allows efficient simultaneous use of CPUs and GPUs, although the different architectures demand conflicting strategies to ensure efficient program execution. Furthermore, we present a general strategy for using large basis sets like quadruple-ζ split valence on GPUs and investigate the balance between CPUs and GPUs depending on the l-quantum numbers of the corresponding basis functions. Finally, we present first illustrative calculations using a hybrid CPU/GPU environment and demonstrate the strong-scaling performance of our parallelization strategy also for pure CPU-based calculations.

  11. A Study of Exploring the Potentiality of Keyboards into Preschool Music Education

    OpenAIRE

    深見, 友紀子; FUKAMI, Yukiko; 冨田, 芳正; TOMITA, Yoshimasa; 横山, 七佳; YOKOYAMA, Nanaka

    2006-01-01

    Nowadays, electronic keyboards are popularly used, as substitutes for the piano or as "toys with keyboards". The aim of this study is to explore the potential of adopting these keyboards into music education in kindergartens and nursery schools, and of utilizing them in rhythmic activities and music play, since they can be handled by ordinary child-care workers. In the study, we focused on four subjects: "rhythm", "tone", "optical navigation", and "electronic keyboards+α". "Rhyt...

  13. A Fusion Model for CPU Load Prediction in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Dayu Xu

    2013-11-01

    Full Text Available Load prediction plays a key role in cost-optimal resource allocation and datacenter energy saving. In this paper, we use real-world traces from a cloud platform and propose a fusion model to forecast future CPU loads. First, long CPU-load time series are divided into short sequences of equal length, based on the cloud control cycle. We then use the kernel fuzzy c-means clustering algorithm to put the subsequences into different clusters. For each cluster, given the current load sequence, a wavelet Elman neural network prediction model optimized by a genetic algorithm is used to predict the CPU load in the next time interval. Finally, we obtain the optimal CPU load prediction from the cluster and corresponding predictor with the minimum forecasting error. Experimental results show that our algorithm performs better than other models reported in previous works.
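A miniature of that pipeline can be sketched as follows, with a nearest-window lookup standing in for both the kernel fuzzy c-means clustering and the GA-optimized wavelet Elman predictor (purely illustrative names and data):

```python
def windows(series, length):
    """Slice a long load series into overlapping fixed-length windows,
    each paired with the value that followed it."""
    return [(series[i:i + length], series[i + length])
            for i in range(len(series) - length)]

def predict_next(history, current, length=3):
    """Predict the next CPU load: rank historical windows by squared
    distance to the current window and average what followed the three
    nearest. A stand-in for the cluster + Elman-network stages."""
    pairs = windows(history, length)
    pairs.sort(key=lambda wv: sum((a - b) ** 2
                                  for a, b in zip(wv[0], current)))
    nearest = pairs[:3]
    return sum(v for _, v in nearest) / len(nearest)
```

The paper's version replaces the distance ranking with fuzzy cluster membership and the averaging with a trained neural predictor per cluster, but the shape of the pipeline (segment, group, predict within group) is the same.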

  14. A novel method for piezoelectric energy harvesting from keyboard

    Science.gov (United States)

    Beker, Levent; Muhtaroglu, Ali; Külah, Haluk

    2012-04-01

    This paper presents a novel method and apparatus for converting keystrokes to electrical energy using a resonant energy harvester, which can be coupled with keyboards. The state-of-the-art dome switch design is modified to excite the tip of the energy harvester beam. Piezoelectric transduction converts vibrations to electrical power. The energy harvester design is optimized to give highest voltage output under use conditions, and is fabricated. A close match is observed for the first natural frequency. When the piezoelectric energy harvester is excited at 7.62 Hz with tip excitation to emulate keyboard use, 16.95 μW of power is generated.

  15. Classroom Keyboard Instruction Improves Kindergarten Children's Spatial-Temporal Performance: A Field Experiment.

    Science.gov (United States)

    Rauscher, Frances H.; Zupan, Mary Anne

    2000-01-01

    Determined the effects of classroom music instruction featuring the keyboard on the spatial-temporal reasoning of 62 kindergartners assigned to keyboard or no music conditions. Found that the keyboard group scored significantly higher than the no music group on both spatial-temporal tasks after 4 months of lessons, a difference that was greater in…

  16. The 7/8 Piano Keyboard: An Attractive Alternative for Small-Handed Players

    Science.gov (United States)

    Wristen, Brenda; Hallbeck, M. Susan

    2009-01-01

    This study examines whether the use of a 7/8 keyboard contributes to the physical ease of small-handed pianists in comparison to the conventional piano keyboard. A secondary research question focuses on the progression of physical ease in making the transition from one keyboard to the other. For the purposes of this study, the authors stipulated…

  17. Semiempirical Quantum Chemical Calculations Accelerated on a Hybrid Multicore CPU-GPU Computing Platform.

    Science.gov (United States)

    Wu, Xin; Koslowski, Axel; Thiel, Walter

    2012-07-10

    In this work, we demonstrate that semiempirical quantum chemical calculations can be accelerated significantly by leveraging the graphics processing unit (GPU) as a coprocessor on a hybrid multicore CPU-GPU computing platform. Semiempirical calculations using the MNDO, AM1, PM3, OM1, OM2, and OM3 model Hamiltonians were systematically profiled for three types of test systems (fullerenes, water clusters, and solvated crambin) to identify the most time-consuming sections of the code. The corresponding routines were ported to the GPU and optimized employing both existing library functions and a GPU kernel that carries out a sequence of noniterative Jacobi transformations during pseudodiagonalization. The overall computation times for single-point energy calculations and geometry optimizations of large molecules were reduced by one order of magnitude for all methods, as compared to runs on a single CPU core.

  18. Numerical Study of Geometric Multigrid Methods on CPU-GPU Heterogeneous Computers

    CERN Document Server

    Feng, Chunsheng; Xu, Jinchao; Zhang, Chen-Song

    2012-01-01

    The geometric multigrid method (GMG) is one of the most efficient solving techniques for discrete algebraic systems arising from many types of partial differential equations. GMG utilizes a hierarchy of grids or discretizations and reduces the error at a number of frequencies simultaneously. Graphics processing units (GPUs) have recently burst onto the scientific computing scene as a technology that has yielded substantial performance and energy-efficiency improvements. A central challenge in implementing GMG on GPUs, though, is that computational work on coarse levels cannot fully utilize the capacity of a GPU. In this work, we perform numerical studies of GMG on CPU-GPU heterogeneous computers. Furthermore, we compare our implementation with an efficient CPU implementation of GMG and with the most popular fast Poisson solver, Fast Fourier Transform, in the cuFFT library developed by NVIDIA.
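The abstract's central challenge, that coarse levels cannot keep a GPU busy, is easy to see in code: each coarsening halves (in 1D) or quarters (in 2D) the number of unknowns. Below is a minimal 1D Poisson V-cycle sketch with weighted-Jacobi smoothing, full-weighting restriction, and linear-interpolation prolongation. It illustrates GMG in general, not the authors' CPU-GPU implementation.

```python
import numpy as np

def smooth(u, f, h, iters=3, w=2.0 / 3.0):
    # Weighted-Jacobi relaxation for the 1D Poisson problem -u'' = f.
    for _ in range(iters):
        u[1:-1] += w * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    # Full-weighting restriction onto the next coarser grid.
    nc = (len(r) - 1) // 2 + 1
    rc = np.zeros(nc)
    for j in range(1, nc - 1):
        rc[j] = 0.25 * (r[2 * j - 1] + 2.0 * r[2 * j] + r[2 * j + 1])
    return rc

def prolong(ec, nf):
    # Linear-interpolation prolongation back to the finer grid.
    ef = np.zeros(nf)
    ef[::2] = ec
    ef[1::2] = 0.5 * (ef[0:-1:2] + ef[2::2])
    return ef

def v_cycle(u, f, h):
    if len(u) <= 3:                      # coarsest grid: one interior unknown
        u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])
        return u
    smooth(u, f, h)                      # pre-smoothing
    r = residual(u, f, h)
    # The recursive call works on roughly half as many unknowns -- this
    # shrinking workload is why coarse levels underutilize a GPU.
    ec = v_cycle(np.zeros((len(u) - 1) // 2 + 1), restrict(r), 2.0 * h)
    u += prolong(ec, len(u))             # coarse-grid correction
    return smooth(u, f, h)               # post-smoothing
```

A few V-cycles starting from a zero guess reduce the algebraic error below the discretization error of the second-order stencil.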

  19. Utilizing Spyware to Monitor Keyboard Activity in Microsoft Windows Networks

    Directory of Open Access Journals (Sweden)

    Mulki Indana Zulfa

    2015-03-01

    Full Text Available Oversight of the use of information technology is increasingly necessary, especially as knowledge about creating viruses, worms, and spyware continues to grow. Installing antivirus software can be a solution to prevent viruses from entering a network or computer system, but antivirus software cannot monitor user activity, such as keyboard activity. A keylogger is software capable of recording all keyboard activity. The keylogger must first be installed on the target (client) computer whose keyboard activity is to be recorded. To retrieve the recorded log file, one must then have physical access to that computer, which becomes a problem when many target computers need to be monitored. The control keylogger-spy agent method, which makes use of spyware technology, is a solution to this problem. The spy agent actively records a user's keyboard activity. The resulting log file is stored in its cache, so it arouses no suspicion from the user, and no physical access is needed to retrieve the log file. The control keylogger can contact any spy agent and retrieve its log file, which is then stored safely on the server computer. In tests, all five computers targeted with the spy agent delivered their log files to the control keylogger.

  20. BrailleEasy: One-handed Braille Keyboard for Smartphones.

    Science.gov (United States)

    Šepić, Barbara; Ghanem, Abdurrahman; Vogel, Stephan

    2015-01-01

    The evolution of mobile technology is moving at a very fast pace. Smartphones are currently considered a primary communication platform where people exchange voice calls, text messages and emails. The human-smartphone interaction, however, is generally optimized for sighted people through the use of visual cues on the touchscreen, e.g., typing text by tapping on a visual keyboard. Unfortunately, this interaction scheme renders smartphone technology largely inaccessible to visually impaired people, as it results in slow typing and high error rates. Apple and some third-party applications provide solutions specific to blind people that enable them to use Braille on smartphones. These applications usually require both hands for typing. However, Brailling with both hands while holding the phone is not very comfortable. Furthermore, two-handed Brailling is not possible on smartwatches, which will be used more pervasively in the future. Therefore, we develop a platform for one-handed Brailling consisting of a custom keyboard called BrailleEasy to input Arabic or English Braille codes within any application, and a BrailleTutor application for practicing. Our platform currently supports Braille grade 1, and will be extended to support contractions, spelling correction, and more languages. Preliminary analysis of user studies with blind participants showed that after less than two hours of practice, participants were able to type significantly faster with the BrailleEasy keyboard than with the standard QWERTY keyboard.

  1. A computer control system using a virtual keyboard

    Science.gov (United States)

    Ejbali, Ridha; Zaied, Mourad; Ben Amar, Chokri

    2015-02-01

    This work is in the field of human-computer communication, namely in the field of gestural communication. The objective was to develop a system for gesture recognition. This system will be used to control a computer without a keyboard. The idea consists in using a visual panel printed on an ordinary paper to communicate with a computer.

  2. Procedural Memory Consolidation in the Performance of Brief Keyboard Sequences

    Science.gov (United States)

    Duke, Robert A.; Davis, Carla M.

    2006-01-01

    Using two sequential key press sequences, we tested the extent to which subjects' performance on a digital piano keyboard changed between the end of training and retest on subsequent days. We found consistent, significant improvements attributable to sleep-based consolidation effects, indicating that learning continued after the cessation of…

  3. A New Keyboard for the Bohlen-Pierce Scale

    CERN Document Server

    Nassar, Antonio

    2011-01-01

    The study of harmonic scales of musical instruments is discussed in all introductory physics texts devoted to the science of sound. In this paper, we present a new piano keyboard to make the so-called Bohlen-Pierce scale more functional and pleasing for composition and performance.

  4. Wearable Keyboard Using Conducting Polymer Electrodes on Textiles.

    Science.gov (United States)

    Takamatsu, Seiichi; Lonjaret, Thomas; Ismailova, Esma; Masuda, Atsuji; Itoh, Toshihiro; Malliaras, George G

    2016-06-01

    A wearable keyboard is demonstrated in which conducting polymer electrodes on a knitted textile sense tactile input as changes in capacitance. The use of a knitted textile as a substrate endows stretchability and compatibility to large-area formats, paving the way for a new type of wearable human-machine interface.

  5. Creating a Single South African Keyboard Layout to Promote Language

    Directory of Open Access Journals (Sweden)

    Dwayne Bailey

    2011-10-01

    Full Text Available

    Abstract: In this case study, a description is given of a keyboard layout designed to address the input needs of South African languages, specifically Venda, a language which would otherwise be impossible to type on a computer. In creating this keyboard, the designer, Translate.org.za, uses a practical intervention that transforms technology from a means of harming a language into one ensuring the creation and preservation of good language resources for minority languages. The study first looks at the implications and consequences of this missing keyboard, and then follows the process from conception, strategy, research and design to the final user response. Not only are problems such as researching the orthographies, key placement and keyboard input options examined, but strategic objectives such as ensuring its wide adoption and creating a multilingual keyboard for all South African languages are also discussed. The result is a keyboard that furthers multilingualism and ensures the capturing of good data for future research. Finally, it is a tool helping to boost and bolster the vitality of a language.

    Keywords: KEYBOARD, MULTILINGUALISM, VENDA, AFRIKAANS, TSWANA, NORTH-ERN SOTHO, ZULU, SOURCE, FREE SOFTWARE, LAYOUT

    Summary: Creating a single South African keyboard layout to promote language. In this case study, a description is given of the design of a keyboard layout to handle the input needs of South African languages, especially Venda, a language which would otherwise be impossible to type on a computer. By creating this keyboard, the designer, Translate.org.za, uses a practical intervention that changes technology from a means that harms a language into one that ensures the creation and preservation of useful language resources for minority languages. The study first looks at the implications and consequences of this missing keyboard, and then follows the process of conception, strategy, research and

  6. Novel hybrid GPU-CPU implementation of parallelized Monte Carlo parametric expectation maximization estimation method for population pharmacokinetic data analysis.

    Science.gov (United States)

    Ng, C M

    2013-10-01

    The development of a population PK/PD model, an essential component for model-based drug development, is both time- and labor-intensive. Graphics processing unit (GPU) computing technology has been proposed and used to accelerate many scientific computations. The objective of this study was to develop a hybrid GPU-CPU implementation of a parallelized Monte Carlo parametric expectation maximization (MCPEM) estimation algorithm for population PK data analysis. A hybrid GPU-CPU implementation of the MCPEM algorithm (MCPEMGPU) and an identical algorithm designed for a single CPU (MCPEMCPU) were developed using MATLAB on a single computer equipped with dual Xeon 6-Core E5690 CPUs and an NVIDIA Tesla C2070 GPU parallel computing card containing 448 stream processors. Two different PK models with rich/sparse sampling design schemes were used to simulate population data in assessing the performance of MCPEMCPU and MCPEMGPU. Results were analyzed by comparing the parameter estimates and model computation times. The speedup factor was used to assess the relative benefit of the parallelized MCPEMGPU over MCPEMCPU in shortening model computation time. MCPEMGPU consistently achieved shorter computation times than MCPEMCPU and offered more than a 48-fold speedup using a single GPU card. The novel hybrid GPU-CPU implementation of the parallelized MCPEM algorithm developed in this study holds great promise as the core of the next generation of modeling software for population PK/PD analysis.

  7. Interactive physically-based X-ray simulation: CPU or GPU?

    Science.gov (United States)

    Vidal, Franck P; John, Nigel W; Guillemot, Romain M

    2007-01-01

    Interventional Radiology (IR) procedures are minimally invasive, targeted treatments performed using imaging for guidance. Needle puncture using ultrasound, x-ray, or computed tomography (CT) images is a core task in the radiology curriculum, and we are currently developing a training simulator for this. One requirement is to include support for physically-based simulation of x-ray images from CT data sets. In this paper, we demonstrate how to exploit the capability of today's graphics cards to efficiently achieve this on the Graphics Processing Unit (GPU) and compare performance with an efficient software only implementation using the Central Processing Unit (CPU).

  8. A multi-core CPU pipeline architecture for virtual environments.

    Science.gov (United States)

    Acosta, Eric; Liu, Alan; Sieck, Jennifer; Muniz, Gilbert; Bowyer, Mark; Armonda, Rocco

    2009-01-01

    Physically-based virtual environments (VEs) provide realistic interactions and behaviors for computer-based medical simulations. Limited CPU resources have traditionally forced VEs to be simplified for real-time performance. Multi-core processors greatly increase the computational capacity of computers and are quickly becoming standard. However, developing non-application specific methods to fully utilize all available CPU cores for processing VEs is difficult. The paper describes a pipeline VE architecture designed for multi-core CPU systems. The architecture enables development of VEs that leverage the computational resources of all CPU cores for VE simulation. A VE's workload is dynamically distributed across the available CPU cores. A VE can be developed once and scale efficiently with the number of cores. The described pipeline architecture makes it possible to develop complex physically-based VEs for medical simulations. Initial results for a craniotomy simulator being developed have shown super-linear and near-linear speedups when tested with up to four cores.

  9. On Modeling CPU Utilization of MapReduce Applications

    CERN Document Server

    Rizvandi, Nikzad Babaii; Zomaya, Albert Y

    2012-01-01

    In this paper, we present an approach to predict the total CPU utilization, in terms of CPU clock ticks, of applications running on the MapReduce framework. Our approach has two key phases: profiling and modeling. In the profiling phase, an application is run several times with different sets of MapReduce configuration parameters to profile the total CPU clock ticks of the application on a given platform. In the modeling phase, multiple linear regression is used to map the sets of MapReduce configuration parameters (number of Mappers, number of Reducers, size of the file system (HDFS), and the size of the input file) to the total CPU clock ticks of the application. This derived model can be used to predict the total CPU requirements of the same application when using the MapReduce framework on the same platform. Our approach aims to eliminate error-prone manual processes and presents a fully automated solution. Three standard applications (WordCount, Exim Mainlog parsing and Terasort) are used to evaluate our modeling technique on pseu...
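The modeling phase described above, mapping four MapReduce configuration parameters to total CPU clock ticks with a linear model, can be sketched with ordinary least squares. The parameter ranges and coefficients below are invented for illustration; the paper fits real profiling data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical profiling runs: columns are [mappers, reducers,
# HDFS block size (MB), input file size (GB)] -- made-up ranges.
X = rng.uniform([2, 1, 64, 1], [32, 16, 256, 50], size=(40, 4))
true_w = np.array([3e8, 1e8, -2e6, 5e9])          # made-up "ground truth" coefficients
ticks = X @ true_w + 2e10 + rng.normal(0.0, 1e8, size=40)

# Modeling phase: multiple linear regression with an intercept term.
A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, ticks, rcond=None)

def predict_ticks(mappers, reducers, block_mb, input_gb):
    """Predict total CPU clock ticks for a new configuration."""
    return np.array([mappers, reducers, block_mb, input_gb, 1.0]) @ w
```

Once fitted, the model predicts CPU requirements for unseen configurations of the same application on the same platform, which is the paper's stated use case.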

  10. An Improved Round Robin Scheduling Algorithm for CPU scheduling

    Directory of Open Access Journals (Sweden)

    Rakesh Kumar yadav

    2010-07-01

    Full Text Available An operating system provides many functions, including process management, memory management, file management, input/output management, networking, protection, and a command interpreter. Among these, process management is the most important, because at runtime processes interact directly with the hardware. Improving the efficiency of a CPU therefore requires managing all processes, which is done with scheduling algorithms. Many CPU scheduling algorithms are available, but each has its own deficiencies and limitations. In this paper, I propose a new approach to the round robin scheduling algorithm that helps improve the efficiency of the CPU.
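For reference, the classic round robin scheduler that such proposals build on can be simulated in a few lines. This sketch assumes all processes arrive at time 0; the abstract does not specify the paper's improved variant.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round robin CPU scheduling for processes that all arrive
    at time 0; returns each process's completion (turnaround) time."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    ready = deque(range(len(bursts)))
    t = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])  # run for at most one time quantum
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)               # preempted: back of the ready queue
        else:
            finish[i] = t
    return finish
```

For example, `round_robin([5, 3, 1], quantum=2)` completes the three processes at times 9, 8, and 5; waiting time for each is its finish time minus its burst length.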

  11. Multi-core CPU or GPU-accelerated Multiscale Modeling for Biomolecular Complexes.

    Science.gov (United States)

    Liao, Tao; Zhang, Yongjie; Kekenes-Huskey, Peter M; Cheng, Yuhui; Michailova, Anushka; McCulloch, Andrew D; Holst, Michael; McCammon, J Andrew

    2013-07-01

    Multi-scale modeling plays an important role in understanding the structure and biological functionalities of large biomolecular complexes. In this paper, we present an efficient computational framework to construct multi-scale models from atomic resolution data in the Protein Data Bank (PDB), which is accelerated by multi-core CPUs and programmable Graphics Processing Units (GPUs). A multi-level summation of Gaussian kernel functions is employed to generate implicit models for biomolecules. The coefficients in the summation are designed as functions of the structure indices, which specify the structures at a certain level and enable a local resolution control on the biomolecular surface. A method called neighboring search is adopted to locate the grid points close to the expected biomolecular surface and reduce the number of grid points to be analyzed. For a specific grid point, a KD-tree or bounding volume hierarchy is applied to search for the atoms contributing to its density computation, and faraway atoms are ignored due to the decay of Gaussian kernel functions. In addition to density map construction, three modes are employed and compared during mesh generation and quality improvement to generate high-quality tetrahedral meshes: CPU sequential, multi-core CPU parallel, and GPU parallel. We have applied our algorithm to several large proteins and obtained good results.
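The core density-map idea, summing Gaussian kernels over grid points while skipping faraway atoms because the kernel has decayed, can be sketched as below. This brute-force cutoff check stands in for the paper's KD-tree/bounding-volume-hierarchy neighbor search and structure-index-dependent coefficients.

```python
import numpy as np

def density_map(atoms, grid, sigma=1.0, cutoff=3.0):
    """Sum Gaussian kernels centred on each atom over the grid points,
    ignoring atoms farther than `cutoff` sigmas, where the kernel has
    effectively decayed to zero."""
    rho = np.zeros(len(grid))
    for a in atoms:
        d2 = np.sum((grid - a) ** 2, axis=1)   # squared distance to this atom
        near = d2 < (cutoff * sigma) ** 2      # keep only nearby grid points
        rho[near] += np.exp(-d2[near] / (2.0 * sigma ** 2))
    return rho
```

A grid point on top of an atom receives the full kernel value, while a point beyond the cutoff receives no contribution at all, which is what makes neighbor-search acceleration profitable.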

  12. Design and Implementation of an Interface Circuit Between CPU and GPU

    Institute of Scientific and Technical Information of China (English)

    石茉莉; 蒋林; 刘有耀

    2013-01-01

    When building collaborative computing between a Central Processing Unit (CPU) and a Graphics Processing Unit (GPU) or other devices, the GPU and other devices are connected to the CPU over the Peripheral Component Interconnect (PCI) bus and take on the parallel computing workload. The PCI interface chip and the GPU chip operate in different clock domains, which raises problems of asynchronous transmission and timing matching. Based on the PCI bus specification and the timing specification of the GPU chip, and using methods for handling signals that cross clock domains, this paper designs a timing-matched interface circuit for the cross-clock-domain connection between the CPU and GPU. Simulation verifies the correctness of the circuit. The results show that the circuit can operate at 252 MHz, satisfying the speed and bandwidth requirements of the GPU-CPU interface and enabling high-speed data transmission between the GPU and CPU.

  13. STEM image simulation with hybrid CPU/GPU programming.

    Science.gov (United States)

    Yao, Y; Ge, B H; Shen, X; Wang, Y G; Yu, R C

    2016-07-01

    STEM image simulation is achieved via hybrid CPU/GPU programming under parallel algorithm architecture to speed up calculation on a personal computer (PC). To utilize the calculation power of a PC fully, the simulation is performed using the GPU core and multi-CPU cores at the same time to significantly improve efficiency. GaSb and an artificial GaSb/InAs interface with atom diffusion have been used to verify the computation. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. The effect of six keyboard designs on wrist and forearm postures.

    Science.gov (United States)

    Rempel, David; Barr, Alan; Brafman, David; Young, Ed

    2007-05-01

    There is increasing evidence that alternative geometry keyboards may prevent or reduce arm pain or disorders, presumably by reducing awkward arm postures. However, the effects of alternative keyboards, especially the new designs, on wrist and arm postures are not well known. In this laboratory study, the wrist and forearm postures of 100 subjects were measured with a motion analysis system while they typed on 6 different keyboard configurations. There were significant differences in wrist extension, ulnar deviation, and forearm pronation between keyboards. When considering all 6 wrist and forearm postures together, the keyboard with an opening angle of 12 degrees, a gable angle of 14 degrees, and a slope of 0 degrees appears to provide the most neutral posture among the keyboards tested. Subjects most preferred this keyboard or a similar keyboard with a gable angle of 8 degrees, and they least preferred the keyboard on a conventional laptop computer. These findings may assist in recommendations regarding the selection of keyboards for computer usage.

  15. A hammer in perpetual motion / The keyboard is the world

    OpenAIRE

    Chokroun, David

    2011-01-01

    Documentation of a hybrid performance-lecture featuring the music and ideas of "Larry Brown," a fictional composer and Marxist, presented at Simon Fraser University on Jan. 15-16, 2011. The score of "Brown's" piano work A Hammer in Perpetual Motion is reproduced in its entirety. The musical work incorporates musical and biographical references to African-American singer and activist Paul Robeson and the Chilean communist singer-composer Victor Jara. The accompanying text, The Keyboard is the ...

  16. Pipelined CPU Design with FPGA in Teaching Computer Architecture

    Science.gov (United States)

    Lee, Jong Hyuk; Lee, Seung Eun; Yu, Heon Chang; Suh, Taeweon

    2012-01-01

    This paper presents a pipelined CPU design project with a field programmable gate array (FPGA) system in a computer architecture course. The class project is a five-stage pipelined 32-bit MIPS design with experiments on the Altera DE2 board. For proper scheduling, milestones were set every one or two weeks to help students complete the project on…

  17. The Effect of NUMA Tunings on CPU Performance

    Science.gov (United States)

    Hollowell, Christopher; Caramarcu, Costin; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr

    2015-12-01

    Non-Uniform Memory Access (NUMA) is a memory architecture for symmetric multiprocessing (SMP) systems where each processor is directly connected to separate memory. Indirect access to other CPUs' (remote) RAM is still possible, but such requests are slower as they must also pass through that memory's controlling CPU. In concert with a NUMA-aware operating system, the NUMA hardware architecture can help eliminate the memory performance reductions generally seen in SMP systems when multiple processors simultaneously attempt to access memory. The x86 CPU architecture has supported NUMA for a number of years. Modern operating systems such as Linux support NUMA-aware scheduling, where the OS attempts to schedule a process to the CPU directly attached to the majority of its RAM. In Linux, it is possible to further manually tune the NUMA subsystem using the numactl utility. With the release of Red Hat Enterprise Linux (RHEL) 6.3, the numad daemon became available in this distribution. This daemon monitors a system's NUMA topology and utilization, and automatically makes adjustments to optimize locality. As the number of cores in x86 servers continues to grow, efficient NUMA mappings of processes to CPUs/memory will become increasingly important. This paper gives a brief overview of NUMA, and discusses the effects of manual tunings and numad on the performance of the HEPSPEC06 benchmark, and ATLAS software.

  18. CPU and Cache Efficient Management of Memory-Resident Databases

    NARCIS (Netherlands)

    H. Pirk (Holger); F. Funke; M. Grund; T. Neumann (Thomas); U. Leser; S. Manegold (Stefan); A. Kemper; M.L. Kersten (Martin)

    2013-01-01

    htmlabstractMemory-Resident Database Management Systems (MRDBMS) have to be optimized for two resources: CPU cycles and memory bandwidth. To optimize for bandwidth in mixed OLTP/OLAP scenarios, the hybrid or Partially Decomposed Storage Model (PDSM) has been proposed. However, in current implementat

  19. Pipelined CPU Design with FPGA in Teaching Computer Architecture

    Science.gov (United States)

    Lee, Jong Hyuk; Lee, Seung Eun; Yu, Heon Chang; Suh, Taeweon

    2012-01-01

    This paper presents a pipelined CPU design project with a field programmable gate array (FPGA) system in a computer architecture course. The class project is a five-stage pipelined 32-bit MIPS design with experiments on the Altera DE2 board. For proper scheduling, milestones were set every one or two weeks to help students complete the project on…

  20. CPU and cache efficient management of memory-resident databases

    NARCIS (Netherlands)

    Pirk, H.; Funke, F.; Grund, M.; Neumann, T.; Leser, U.; Manegold, S.; Kemper, A.; Kersten, M.L.

    2013-01-01

    Memory-Resident Database Management Systems (MRDBMS) have to be optimized for two resources: CPU cycles and memory bandwidth. To optimize for bandwidth in mixed OLTP/OLAP scenarios, the hybrid or Partially Decomposed Storage Model (PDSM) has been proposed. However, in current implementations, bandwi

  1. Automatic Adjustment of Keyboard Settings Can Enhance Typing.

    Science.gov (United States)

    Koester, Heidi Horstmann; Mankowski, Jennifer

    2015-01-01

    We developed and evaluated a software tool for the automatic configuration of Windows keyboard settings. The software is intended to accommodate the needs of people with physical impairments, with a goal of improved productivity and comfort during typing. The prototype software, called AutoIDA, monitors user activity during performance of regular computer tasks and recommends the Sticky Keys and key repeat settings to meet the user's specific needs. The evaluation study included fourteen individuals with upper extremity impairments. AutoIDA recommended changes to the default keyboard settings for 10 of the 14 participants. For these individuals, average typing speed was essentially the same whether users typed with the default keyboard settings (5.5 wpm) or the AutoIDA-recommended settings (5.3 wpm). Average typing errors decreased with use of the recommended settings, from 17.6% to 13.3%, but this was not quite statistically significant (p = .10). On an individual basis, four participants appeared to improve their overall typing performance with AutoIDA-recommended settings. For more specific metrics, AutoIDA prevented about 90% of inadvertent key repeats (with a revised algorithm) and increased the efficiency and accuracy of entering modified (shifted) characters. Participants agreed that software like AutoIDA would be useful to them (average rating 4.1, where 5 = strongly agree).

  2. Differences in typing forces, muscle activity, comfort, and typing performance among virtual, notebook, and desktop keyboards.

    Science.gov (United States)

    Kim, Jeong Ho; Aulck, Lovenoor; Bartha, Michael C; Harper, Christy A; Johnson, Peter W

    2014-11-01

    The present study investigated whether there were physical exposure and typing productivity differences between a virtual keyboard with no tactile feedback and two conventional keyboards where key travel and tactile feedback are provided by mechanical switches under the keys. The key size and layout were the same across all the keyboards. Typing forces, finger and shoulder muscle activity, self-reported comfort, and typing productivity were measured from 19 subjects while typing on a virtual (0 mm key travel), notebook (1.8 mm key travel), and desktop keyboard (4 mm key travel). When typing on the virtual keyboard, subjects typed with less force and less finger muscle activity; however, these reductions came at the expense of a 60% reduction in typing productivity. Therefore, for prolonged typing sessions or when typing productivity is at a premium, conventional keyboards with tactile feedback may be the more suitable interface.

  3. A New Layout for English Letters on the Keyboard Using Evolutionary Strategy

    Directory of Open Access Journals (Sweden)

    Ali Asghar Poorhajikazam

    2015-10-01

    Full Text Available Since the keyboard is the primary device for entering text into a computer, a high-performance letter layout is essential. Finding a suitable arrangement of letters on the keyboard is an optimization problem; various methods have been proposed to solve it, and its solution is the most appropriate permutation of the letters, of which there are 26 on an English keyboard. In this paper, a new English keyboard layout is proposed using an evolutionary strategy, with the aim of increasing typing speed and rectifying some problems of the current layout. To this end, a fitness function is used that includes parameters such as key distances, finger alternation, the frequency of use of each hand, and so on. Experiments conducted to evaluate the proposed approach indicate that the resulting layout performs better than the current layout and other layouts proposed in the literature.
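A minimal version of such an optimization loop, a (1+1) evolutionary strategy over letter permutations with swap mutation, can be sketched as follows. The toy key grid and travel-distance fitness below are stand-ins for the paper's fitness function (key distances, finger switching, hand-use frequency, etc.).

```python
import random

# Toy 3-row key grid; the coordinates stand in for real key positions.
KEYS = [(x, y) for y in range(3) for x in range(9)][:26]
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(layout, text):
    # Lower is better: total Euclidean distance between consecutive letters.
    pos = {ch: KEYS[i] for i, ch in enumerate(layout)}
    return sum(((pos[a][0] - pos[b][0]) ** 2 + (pos[a][1] - pos[b][1]) ** 2) ** 0.5
               for a, b in zip(text, text[1:]))

def evolve(text, generations=2000, seed=1):
    rng = random.Random(seed)
    layout = list(ALPHABET)
    best = fitness(layout, text)
    for _ in range(generations):
        child = layout[:]
        i, j = rng.sample(range(26), 2)       # swap mutation
        child[i], child[j] = child[j], child[i]
        f = fitness(child, text)
        if f <= best:                         # (1+1) selection: keep the better layout
            layout, best = child, f
    return "".join(layout), best
```

Because selection only ever accepts layouts at least as fit as the incumbent, the evolved layout can never score worse than the starting alphabetical one on the training text.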

  4. Effect of VDT keyboard height and inclination on musculoskeletal discomfort for wheelchair users.

    Science.gov (United States)

    Wu, Swei-Pi; Yang, Chien-Hsin

    2005-04-01

    This study investigated the effect of keyboard height and inclination on musculoskeletal discomfort for wheelchair users. Eight Taiwanese male wheelchair users (28.75 +/- 8.75 years) were recruited as participants to perform nine experimental combinations of data entry tasks. Three keyboard heights and three inclinations were evaluated. Musculoskeletal discomfort was estimated by Rating of Perceived Exertion and Subjective Preference Ranking. Each subject performed a data entry task for all nine experimental combinations in a random order. The seated posture of all participants during the data entry operation was the upright posture. The height of the screen's center was adjusted according to the eye level of each subject. Analysis showed the keyboard height and keyboard inclination significantly affected rating of musculoskeletal discomfort. It is suggested that the optimum keyboard height choice is elbow-level height or 5 cm below elbow level with the keyboard inclination horizontal to the seat of the wheelchair.

  5. Multimodal user input to supervisory control systems - Voice-augmented keyboard

    Science.gov (United States)

    Mitchell, Christine M.; Forren, Michelle G.

    1987-01-01

    The use of a voice-augmented keyboard input modality is evaluated in a supervisory control application. An implementation of voice recognition technology in supervisory control is proposed: voice is used to request display pages, while the keyboard is used to input system reconfiguration commands. Twenty participants controlled GT-MSOCC, a high-fidelity simulation of the operator interface to a NASA ground control system, via a workstation equipped with either a single keyboard or a voice-augmented keyboard. Experimental results showed that in all cases where significant performance differences occurred, performance with the voice-augmented keyboard modality was inferior to and had greater variance than the keyboard-only modality. These results suggest that current moderately priced voice recognition systems are an inappropriate human-computer interaction technology in supervisory control systems.

  6. ABC versus QWERTZ: interference from mismatching sequences of letters in the alphabet and on the keyboard.

    Science.gov (United States)

    Kozlik, Julia; Neumann, Roland; Kunde, Wilfried

    2013-08-01

    Letters have a position in the alphabet and they have a position on standard personal computer keyboards. The present study explored the consequences of compatibility between spatial codes representing letter position in the alphabet and on the keyboard. In Experiment 1, participants responded faster to letter dyads in an alphabetic order judgment task, when the letters' alphabetical order matched their left to right order on the keyboard. In Experiment 2, compatible dyads were typed more quickly than incompatible dyads. Finally, in Experiments 3 and 4, letter dyads with compatible alphabetical and keyboard sequences of letters were more preferred than dyads with incompatible orders. Together, these results suggest that the perception of letters concurrently activates 2 representations of ordinal sequences. Compatibility between these representations enhances performance as well as affective evaluations. Limitations of this alphabet-keyboard compatibility effect as well as implications for the development of formal typing courses and computer keyboard design are discussed.

  7. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures.

    Science.gov (United States)

    Souris, Kevin; Lee, John Aldo; Sterpin, Edmond

    2016-04-01

    Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the latest generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the gate/geant4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with gate/geant4 for various geometries show deviations within 2%/1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. The optimized code enables accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.

  8. Quantification of speed-up and accuracy of multi-CPU computational flow dynamics simulations of hemodynamics in a posterior communicating artery aneurysm of complex geometry.

    Science.gov (United States)

    Karmonik, Christof; Yen, Christopher; Gabriel, Edgar; Partovi, Sasan; Horner, Marc; Zhang, Yi J; Klucznik, Richard P; Diaz, Orlando; Grossman, Robert G

    2013-11-01

    Towards the translation of computational fluid dynamics (CFD) techniques into the clinical workflow, performance increases achieved with parallel multi-central processing unit (CPU) pulsatile CFD simulations in a patient-derived model of a bilobed posterior communicating artery aneurysm were evaluated while simultaneously monitoring changes in the accuracy of the solution. Simulations were performed using 2, 4, 6, 8, 10 and 12 processors. In addition, a baseline simulation was obtained with a dual-core dual CPU computer of similar computational power to clinical imaging workstations. Parallel performance indices including computation speed-up, efficiency (speed-up divided by number of processors), computational cost (computation time × number of processors) and accuracy (velocity at four distinct locations: proximal and distal to the aneurysm, in the aneurysm ostium and aneurysm dome) were determined from the simulations and compared. Total computation time decreased from 9 h 10 min (baseline) to 2 h 34 min (10 CPU). Speed-up relative to baseline increased from 1.35 (2 CPU) to 3.57 (maximum at 10 CPU) while efficiency decreased from 0.65 to 0.35 with increasing cost (33.013 to 92.535). Relative velocity component deviations were less than 0.0073% and larger for 12 CPU than for 2 CPU (0.004 ± 0.002%, not statistically significant, p=0.07). Without compromising accuracy, parallel multi-CPU simulation reduces computing time for the simulation of hemodynamics in a model of a cerebral aneurysm by up to a factor of 3.57 (10 CPUs) to 2 h 34 min compared with a workstation with computational power similar to clinical imaging workstations.
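The parallel performance indices used in this study (speed-up, efficiency, cost) follow directly from their definitions in the abstract; a minimal Python sketch, plugging in the paper's baseline of 9 h 10 min and the 10-CPU time of 2 h 34 min:

```python
def parallel_metrics(t_base, t_parallel, n_procs):
    """Standard parallel-performance indices as defined in the abstract."""
    speedup = t_base / t_parallel      # speed-up relative to baseline
    efficiency = speedup / n_procs     # speed-up divided by number of processors
    cost = t_parallel * n_procs        # computation time x number of processors
    return speedup, efficiency, cost

# Times in minutes: baseline 9 h 10 min, 10-CPU run 2 h 34 min.
s, e, c = parallel_metrics(9 * 60 + 10, 2 * 60 + 34, 10)
print(f"speed-up {s:.2f}, efficiency {e:.2f}, cost {c:.0f} CPU-minutes")
```

This reproduces the reported maximum speed-up of 3.57 at 10 CPUs, and illustrates why efficiency falls as processors are added while cost rises.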

  9. CPU and memory allocation optimization using fuzzy logic

    Science.gov (United States)

    Zalevsky, Zeev; Gur, Eran; Mendlovic, David

    2002-12-01

The allocation of CPU time and memory resources is a well-known problem in organizations with a large number of users and a single mainframe. Usually the amount of resources given to a single user is based on that user's own statistics, not on the statistics of the entire organization; therefore patterns are not well identified and the allocation system is prodigal. In this work the authors suggest a fuzzy-logic-based algorithm to optimize the CPU and memory distribution among the users based on the users' history. The algorithm works separately on heavy users and light users, since they have different patterns to be observed. The result is a set of rules, generated by the fuzzy-logic inference engine, that allows the system to use its computing ability in an optimized manner. Test results on data taken from the Faculty of Engineering in Tel Aviv University demonstrate the abilities of the new algorithm.
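As an illustration of the kind of fuzzy inference involved, the sketch below classifies a user's historical CPU usage with triangular membership functions and defuzzifies to a resource share. The membership breakpoints and output shares are invented for illustration and are not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def cpu_share(avg_usage):
    """Toy fuzzy rule base: heavier historical usage -> larger share.
    Breakpoints (0.5, 0.3) and output singletons (0.2, 0.8) are illustrative."""
    light = tri(avg_usage, -0.1, 0.0, 0.5)   # membership in "light user"
    heavy = tri(avg_usage, 0.3, 1.0, 1.1)    # membership in "heavy user"
    # Weighted (centroid-style) defuzzification over two singleton outputs.
    num = light * 0.2 + heavy * 0.8
    den = light + heavy
    return num / den if den else 0.2
```

A purely light user (`avg_usage = 0.0`) gets the minimum share and a fully heavy user (`avg_usage = 1.0`) the maximum, with intermediate usage blending the two rules.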

  10. Fuzzy-logic optical optimization of mainframe CPU and memory

    Science.gov (United States)

    Zalevsky, Zeev; Gur, Eran; Mendlovic, David

    2006-07-01

    The allocation of CPU time and memory resources is a familiar problem in organizations with a large number of users and a single mainframe. Usually the amount of resources allocated to a single user is based on the user's own statistics not on the statistics of the entire organization, therefore patterns are not well identified and the allocation system is prodigal. A fuzzy-logic-based algorithm to optimize the CPU and memory distribution among users based on their history is suggested. The algorithm works on heavy and light users separately since they present different patterns to be observed. The result is a set of rules generated by the fuzzy-logic inference engine that will allow the system to use its computing ability in an optimized manner. Test results on data taken from the Faculty of Engineering of Tel Aviv University demonstrate the capabilities of the new algorithm.

  11. An Optimized Round Robin Scheduling Algorithm for CPU Scheduling

    Directory of Open Access Journals (Sweden)

    Ajit Singh

    2010-10-01

Full Text Available The main objective of this paper is to develop a new approach to round-robin scheduling that helps to improve CPU efficiency in real-time and time-sharing operating systems. There are many algorithms available for CPU scheduling, but many cannot be implemented in a real-time operating system because of high context-switch rates, large waiting time, large response time, large turnaround time and low throughput. The proposed algorithm addresses these drawbacks of the simple round-robin architecture. The author also gives a comparative analysis of the proposed algorithm against the simple round-robin scheduling algorithm, and argues that the proposed architecture solves the problems encountered in the simple round-robin architecture by decreasing the performance parameters to a desirable extent, thereby increasing system throughput.
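For reference, the simple round-robin baseline that such proposals improve on can be simulated in a few lines; the burst times and quantum below are arbitrary examples, and all processes are assumed to arrive at time zero:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate round-robin CPU scheduling; return per-process waiting
    and turnaround times (all processes assumed to arrive at t = 0)."""
    n = len(burst_times)
    remaining = list(burst_times)
    finish = [0] * n
    queue = deque(range(n))
    t = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])   # run for one quantum or until done
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                # preempt: back of the ready queue
        else:
            finish[i] = t
    turnaround = finish                    # arrival at 0, so turnaround = finish
    waiting = [turnaround[i] - burst_times[i] for i in range(n)]
    return waiting, turnaround

waiting, turnaround = round_robin([5, 3, 1], quantum=2)
```

Averaging `waiting` and `turnaround` over different quantum values shows directly how quantum choice drives the waiting-time and context-switch trade-offs the paper targets.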

  12. OSPRay - A CPU Ray Tracing Framework for Scientific Visualization.

    Science.gov (United States)

    Wald, I; Johnson, G P; Amstutz, J; Brownlee, C; Knoll, A; Jeffers, J; Gunther, J; Navratil, P

    2017-01-01

    Scientific data is continually increasing in complexity, variety and size, making efficient visualization and specifically rendering an ongoing challenge. Traditional rasterization-based visualization approaches encounter performance and quality limitations, particularly in HPC environments without dedicated rendering hardware. In this paper, we present OSPRay, a turn-key CPU ray tracing framework oriented towards production-use scientific visualization which can utilize varying SIMD widths and multiple device backends found across diverse HPC resources. This framework provides a high-quality, efficient CPU-based solution for typical visualization workloads, which has already been integrated into several prevalent visualization packages. We show that this system delivers the performance, high-level API simplicity, and modular device support needed to provide a compelling new rendering framework for implementing efficient scientific visualization workflows.

  14. Performance Analysis of CPU Scheduling Algorithms with Novel OMDRRS Algorithm

    Directory of Open Access Journals (Sweden)

    Neetu Goel

    2016-01-01

Full Text Available CPU scheduling is one of the most fundamental and essential parts of any operating system. It prioritizes processes to efficiently execute user requests and helps in choosing the appropriate process for execution. Round Robin (RR) and Priority Scheduling (PS) are among the most widely used and accepted CPU scheduling algorithms, but their performance degrades with respect to turnaround time, waiting time and context switching with each recurrence. A new scheduling algorithm, OMDRRS, is developed to improve the performance of the RR and priority scheduling algorithms. The new algorithm performs better than the popular existing algorithms: drastic improvement is seen in waiting time, turnaround time, response time and context switching. A comparative analysis of turnaround time (TAT), waiting time (WT) and response time (RT) is shown with the help of ANOVA and t-tests.

  15. The Creation of a CPU Timer for High Fidelity Programs

    Science.gov (United States)

    Dick, Aidan A.

    2011-01-01

    Using C and C++ programming languages, a tool was developed that measures the efficiency of a program by recording the amount of CPU time that various functions consume. By inserting the tool between lines of code in the program, one can receive a detailed report of the absolute and relative time consumption associated with each section. After adapting the generic tool for a high-fidelity launch vehicle simulation program called MAVERIC, the components of a frequently used function called "derivatives ( )" were measured. Out of the 34 sub-functions in "derivatives ( )", it was found that the top 8 sub-functions made up 83.1% of the total time spent. In order to decrease the overall run time of MAVERIC, a launch vehicle simulation program, a change was implemented in the sub-function "Event_Controller ( )". Reformatting "Event_Controller ( )" led to a 36.9% decrease in the total CPU time spent by that sub-function, and a 3.2% decrease in the total CPU time spent by the overarching function "derivatives ( )".
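The same idea — accumulating per-function CPU time and reporting absolute and relative consumption — can be sketched in Python with `time.process_time`; the decorated function name below is a placeholder, not part of MAVERIC:

```python
import time
from collections import defaultdict
from functools import wraps

cpu_totals = defaultdict(float)

def cpu_timed(fn):
    """Accumulate the CPU time consumed by each decorated function."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.process_time()
        try:
            return fn(*args, **kwargs)
        finally:
            cpu_totals[fn.__name__] += time.process_time() - start
    return wrapper

@cpu_timed
def derivatives_stub(n=100_000):
    # Placeholder workload standing in for an instrumented sub-function.
    return sum(i * i for i in range(n))

derivatives_stub()
total = sum(cpu_totals.values())
for name, t in sorted(cpu_totals.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {t:.4f} s CPU ({100 * t / total:.1f}% of total)")
```

`time.process_time` counts CPU time rather than wall-clock time, which matches the kind of per-function consumption report described in the abstract.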

  16. Information security governance simplified from the boardroom to the keyboard

    CERN Document Server

    Fitzgerald, Todd

    2011-01-01

    Security practitioners must be able to build cost-effective security programs while also complying with government regulations. Information Security Governance Simplified: From the Boardroom to the Keyboard lays out these regulations in simple terms and explains how to use control frameworks to build an air-tight information security (IS) program and governance structure. Defining the leadership skills required by IS officers, the book examines the pros and cons of different reporting structures and highlights the various control frameworks available. It details the functions of the security d

  17. 47 CFR 15.32 - Test procedures for CPU boards and computer power supplies.

    Science.gov (United States)

    2010-10-01

47 Telecommunication (2010-10-01). RADIO FREQUENCY DEVICES, General, § 15.32 Test procedures for CPU boards and computer power supplies: Power supplies and CPU boards used with personal computers and for which separate authorizations are required to be...

  18. Revisiting Molecular Dynamics on a CPU/GPU system: Water Kernel and SHAKE Parallelization.

    Science.gov (United States)

    Ruymgaart, A Peter; Elber, Ron

    2012-11-13

We report Graphics Processing Unit (GPU) and OpenMP parallel implementations of water-specific force calculations and of bond constraints for use in Molecular Dynamics simulations. We focus on a typical laboratory computing environment in which a CPU with a few cores is attached to a GPU. We discuss the design of the code in detail and illustrate performance comparable to highly optimized codes such as GROMACS. Besides speed, our code shows excellent energy conservation. Utilization of water-specific lists allows the efficient calculation of non-bonded interactions that include water molecules and results in a speed-up factor of more than 40 on the GPU compared to code optimized on a single CPU core for systems larger than 20,000 atoms. This is up four-fold from the factor of 10 reported in our initial GPU implementation, which did not include a water-specific code. Another optimization is the implementation of constrained dynamics entirely on the GPU. The routine, which enforces constraints of all bonds, runs in parallel on multiple OpenMP cores or entirely on the GPU. It is based on a Conjugate Gradient solution of the Lagrange multipliers (CG SHAKE). The GPU implementation is partially in double precision and requires no communication with the CPU during the execution of the SHAKE algorithm. The (parallel) implementation of SHAKE allows an increase of the time step to 2.0 fs while maintaining excellent energy conservation. Interestingly, CG SHAKE is faster than the usual bond relaxation algorithm even on a single core if high accuracy is expected. The significant speedup of the optimized components transfers the computational bottleneck of the MD calculation to the reciprocal part of Particle Mesh Ewald (PME).

19. Survey of CPU/GPU Synergetic Parallel Computing

    Institute of Scientific and Technical Information of China (English)

    卢风顺; 宋君强; 银福康; 张理论

    2011-01-01

With the features of tremendous capability, high performance/price ratio and low power, heterogeneous hybrid CPU/GPU parallel systems have become the new high-performance computing platforms. However, the architectural complexity of such hybrid systems poses many challenges for parallel algorithm design on this infrastructure. According to the scale of the computational resources involved in the synergetic parallel computing, we classify recent research into three categories, detail the motivations, methodologies and applications of several projects, and discuss some ongoing research issues in this direction. We hope that domain experts can gain useful information about synergetic parallel computing from this work.

  20. Is Writing Performance Related to Keyboard Type? An Investigation from Examinees' Perspectives on the TOEFL IBT

    Science.gov (United States)

    Ling, Guangming

    2017-01-01

    To investigate whether the type of keyboard used in exams introduces any construct-irrelevant variance to the TOEFL iBT Writing scores, we surveyed 17,040 TOEFL iBT examinees from 24 countries on their keyboard-related perceptions and preferences and analyzed the survey responses together with their test scores. Results suggest that controlling…

  2. The Use of Multiple Monitor and KVM (Keyboard, Video, and Mouse) Technologies in an Educational Setting

    Science.gov (United States)

    Snyder, Robin

    2004-01-01

    Having more than one screen of usable space can enhance productivity, both inside and outside of the classroom. So can using one keyboard, screen, and mouse with multiple computers. This paper (and session) will cover the author's use of multiple monitor and KVM (keyboard, video, and mouse) technologies both inside and outside the classroom, with…

  3. [Anesthetic machine leakage from vaporizer by external force derived from keyboard of electronic medical records].

    Science.gov (United States)

    Ikegami, Hiromi; Goto, Ryokichi; Sakamoto, Syotarou; Kohama, Hanako

    2012-11-01

    We experienced the leakage from the vaporizer of the anesthetic machine despite the normalities on performing the initial leak test. The vaporizer of the anesthetic machine was compressed by computer keyboard of EMR which caused a leak from vaporizer. After computer keyboard and the vaporizer were set at normal position, the leak stopped.

  4. Satisficing and the Use of Keyboard Shortcuts: Being Good Enough Is Enough?

    NARCIS (Netherlands)

    Tak, S.W.; Westendorp, P.; Rooij, I.J.E.I. van

    2013-01-01

    Keyboard shortcuts are generally accepted as the most efficient method for issuing commands, but previous research has suggested that many people do not use them. In this study we investigate the use of keyboard shortcuts further and explore reasons why they are underutilized by users. In Experiment

  5. Examining the Impact of L2 Proficiency and Keyboarding Skills on Scores on TOEFL-iBT Writing Tasks

    Science.gov (United States)

    Barkaoui, Khaled

    2014-01-01

    A major concern with computer-based (CB) tests of second-language (L2) writing is that performance on such tests may be influenced by test-taker keyboarding skills. Poor keyboarding skills may force test-takers to focus their attention and cognitive resources on motor activities (i.e., keyboarding) and, consequently, other processes and aspects of…

  7. An OpenCL implementation for the solution of TDSE on GPU and CPU architectures

    CERN Document Server

    O'Broin, Cathal

    2012-01-01

Open Computing Language (OpenCL) is a parallel processing language that is ideally suited for running parallel algorithms on Graphical Processing Units (GPUs). In the present work we report the development of a generic parallel single-GPU code for the numerical solution of a system of first-order ordinary differential equations (ODEs) based on the OpenCL model. We have applied the code to the case of the time-dependent Schrödinger equation of atomic hydrogen in a strong laser field and studied its performance on the two basic kinds of compute units (GPUs and CPUs). We found excellent scalability and a significant speed-up of the GPU over the CPU device, tending to a value of about 40.
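The abstract does not specify the integrator, but a generic serial sketch of stepping a system of first-order ODEs (here the classical fourth-order Runge-Kutta scheme, with plain Python lists standing in for device buffers) looks like:

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Integrate y' = -y from y(0) = 1 to t = 1; the exact answer is e^-1.
y, t, h = [1.0], 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: [-yi for yi in y], t, y, h)
    t += h
```

In a GPU/OpenCL implementation each element of the state vector would be updated by one work-item per stage; the arithmetic per step is identical.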

  8. Self-Reconfiguration of CPU- Enhancement in the Performance

    Directory of Open Access Journals (Sweden)

    Prashant Singh Yadav

    2012-03-01

Full Text Available This article presents the initial steps toward a distributed system that can optimize its performance by learning to reconfigure CPU and memory resources in reaction to the current workload. We present a learning framework that uses standard system-monitoring tools to identify preferable configurations and their quantitative performance effects. The framework requires no instrumentation of the middleware or of the operating system. Using results from an implementation of the TPC Benchmark™ W (TPC-W) online transaction-processing benchmark, we demonstrate a significant performance benefit of reconfiguration in response to workload changes.

9. Effect of keyboard covers on reduction of microbial contamination

    Institute of Scientific and Technical Information of China (English)

    高晓东; 胡必杰; 鲍容; 崔扬文; 孙伟; 沈燕

    2014-01-01

    OBJECTIVE To evaluate the effect of keyboard covers on reduction of microbial contamination in inten-sive care unit (ICU) so as to prevent nosocomial infections .METHODS The keyboard covers were equipped for 109 keyboards in two ICUs from Feb 2014 to Apr 2014 ,which then were divided into two groups according to the cleaning and disinfection method for the keyboard covers ,the 50 keyboard covers from surgical intensive care unit (SICU) were wiped with effective chlorine disinfectant ,the 59 keyboards from the surgical intensive care unit (SICU) A were rinsed with running water ,then the keyboards and the keyboard covers were sampled before and after the cleaning ,and the bacterial colony counts were observed and calculated .RESULTS As for the samples col-lected from the keyboards ,the bacterial colony counts were 349 .3 CFU/10 keys before the cleaning ,226 .2 CFU/10 keys after the cleaning ,the difference was statistically significant (P< 0 .05) .The median bacterial colony counts in the SICU-A were 173 .5 CFU/10 keys after the cleaning ,significantly less than 458 .6 CFU/10 keys be-fore the cleaning(P=0 .01) ,and there was no significant difference in the bacterial colony counts in the SICU be-tween before and after the cleaning .CONCLUSION The keyboard covers can reduce the microbial contamination of the keyboards ,and the water washing is more effective than the wiping in the cleaning and disinfection of keyboard covers .%目的:评价键盘保护膜对降低IC U键盘微生物污染的效果,以预防医院感染。方法2014年2-4月某医院两个IC U的109个键盘配备键盘保护膜为研究对象,根据对键盘保护膜的清洗消毒方式分为两组,外科重症监护病房(SICU )50个键盘保护膜采用有效氯消毒液进行擦拭,外科重症监护病房A (SICU-A )59个键盘保护膜用流动水冲洗,对清洗前后的键盘及键盘保护膜进行采样,监测和计算污染菌落数。结果两组IC U 采集键盘样

  10. Joint Optimized CPU and Networking Control Scheme for Improved Energy Efficiency in Video Streaming on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Sung-Woong Jo

    2017-01-01

Full Text Available Video streaming is one of the most popular applications for mobile users. However, mobile video streaming consumes a lot of energy, resulting in reduced battery life; this is a critical problem that degrades the user's quality of experience (QoE). Therefore, in this paper, a joint optimization scheme that controls both the central processing unit (CPU) and the wireless networking of the video streaming process is proposed to improve energy efficiency on mobile devices. For this purpose, the energy consumption of the network interface and CPU is analyzed and, based on the energy consumption profile, a joint optimization problem is formulated to maximize the energy efficiency of the mobile device. The proposed algorithm adaptively adjusts the number of chunks to be downloaded and decoded in each packet. Simulation results show that the proposed algorithm can effectively improve energy efficiency compared with existing algorithms.

  11. The effect of keyboard keyswitch make force on applied force and finger flexor muscle activity.

    Science.gov (United States)

    Rempel, D; Serina, E; Klinenberg, E; Martin, B J; Armstrong, T J; Foulke, J A; Natarajan, S

    1997-08-01

    The design of the force-displacement characteristics or 'feel' of keyboard keyswitches has been guided by preference and performance data; there has been very little information on how switch 'feel' alters muscle activity or applied force. This is a laboratory-based repeated measures design experiment to evaluate the effect of computer keyboard keyswitch design on applied finger force and muscle activity during a typing task. Ten experienced typists typed on three keyboards which differed in keyswitch make force (0.34, 0.47 and 1.02 N) while applied fingertip force and finger flexor electromyograms were recorded. The keyboard testing order was randomized and subjects typed on each keyboard for three trials, while data was collected for a minimum of 80 keystrokes per trial. No differences in applied fingertip force or finger flexor EMG were observed during typing on keyboards with switch make force of 0.34 or 0.47 N. However, applied fingertip force increased by approximately 40% (p < 0.05) and EMG activity increased by approximately 20% (p < 0.05) when the keyswitch make force was increased from 0.47 to 1.02 N. These results suggest that, in order to minimize the biomechanical loads to forearm tendons and muscles of keyboard users, keyswitches with a make force of 0.47 N or less should be considered over switches with a make force of 1.02 N.

  12. Liquid Cooling System for CPU by Electroconjugate Fluid

    Directory of Open Access Journals (Sweden)

    Yasuo Sakurai

    2014-06-01

Full Text Available The dissipated power of CPUs for personal computers has increased as the performance of personal computers has become higher. Therefore, liquid cooling systems have been employed in some personal computers in order to improve their cooling performance. Electroconjugate fluid (ECF) is a functional fluid with a remarkable property: a strong jet flow is generated between electrodes when a high voltage is applied to the ECF through the electrodes. By using this strong jet flow, an ECF pump with a simple structure, no sliding parts, no noise, and no vibration can be developed, and with such a pump a new ECF-based liquid cooling system can be realized. In this study, to realize this system, an ECF pump is proposed and fabricated, and its basic characteristics are investigated experimentally. Next, utilizing the ECF pump, a model of an ECF-based liquid cooling system is manufactured and experiments are carried out to investigate its performance. As a result, using this system, the temperature of a 50 W heat source is kept at 60°C or less, which is at or below the temperature at which a CPU is usually operated.

  13. Occupational overuse syndrome among keyboard users in Mauritius

    Directory of Open Access Journals (Sweden)

    Subratty A

    2005-01-01

Full Text Available Ergonomics is a very important factor that cannot be overlooked in the information technology working environment. This study was undertaken to assess reporting of occupational overuse syndrome (OOS) among keyboard users in Mauritius. A questionnaire-based survey was carried out among 362 computer users; two hundred completed questionnaires were returned and the data analyzed. The main findings showed that symptoms such as eye problems and lower back, neck and shoulder pain were common among computer users. Severity of pain increased with the number of hours of computer use at work, and reporting of OOS was higher among females. In conclusion, it is proposed that computer users need to be provided with an ergonomically conducive environment as well as to be educated and trained with respect to OOS. Implementation of such programs will go a long way towards preventing the appearance of OOS symptoms among the young population currently engaged in the IT sector in Mauritius.

  14. Biometric Authentication Through a Virtual Keyboard for Smartphones

    Directory of Open Access Journals (Sweden)

    Matthias Trojahn

    2012-11-01

Full Text Available Security through biometric keystroke authentication on mobile phones with a capacitive display and a QWERTZ layout is a new approach. Keystroke dynamics on mobile phones with a 12-key layout has already shown the possibility for authentication on these devices, but with hardware changes, new general requirements have arisen. In this paper, we focus on authentication with keystroke dynamics. We present newly implemented keyboard layouts to show differences between a 12-key layout and a QWERTZ layout. In addition, we compare numerical (PIN) and alphabetic (password) input for mobile phones. For this, we added new features for keystroke authentication with a capacitive display. With the knowledge of the fault rates, we discuss the improvement of security for keystroke dynamics with different virtual keyboard layouts. Our results show, even with new hardware factors, that authentication via keystroke dynamics is possible.

  15. Accelerating Smith-Waterman Alignment for Protein Database Search Using Frequency Distance Filtration Scheme Based on CPU-GPU Collaborative System.

    Science.gov (United States)

    Liu, Yu; Hong, Yang; Lin, Chun-Yuan; Hung, Che-Lun

    2015-01-01

The Smith-Waterman (SW) algorithm has been widely utilized for searching biological sequence databases in bioinformatics. Recently, several works have adopted graphics cards with Graphic Processing Units (GPUs) and the associated CUDA model to enhance the performance of SW computations. However, these works mainly focused on protein database search using the intertask parallelization technique, using the GPU only to perform the SW computations one by one. Hence, in this paper, we propose an efficient SW alignment method, called CUDA-SWfr, for protein database search using the intratask parallelization technique based on a CPU-GPU collaborative system. Before the SW computations are done on the GPU, a procedure is applied on the CPU using the frequency distance filtration scheme (FDFS) to eliminate unnecessary alignments. The experimental results indicate that CUDA-SWfr runs 9.6 times and 96 times faster than the CPU-based SW method without and with FDFS, respectively.
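The core SW recurrence that both the CPU and GPU sides compute is compact; a minimal scoring-only version (linear gap penalty, no traceback, with arbitrarily chosen scoring parameters rather than a protein substitution matrix) is:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Minimal Smith-Waterman local alignment score (no traceback).

    H[i][j] is the best local alignment score ending at a[i-1], b[j-1];
    local alignment clamps every cell at zero."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

Identical sequences score `match * length` along the diagonal, and sequences over disjoint alphabets score 0; GPU implementations parallelize the anti-diagonals of `H`, since cells on the same anti-diagonal are independent.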

  16. Online transcranial Doppler ultrasonographic control of an onscreen keyboard.

    Science.gov (United States)

    Lu, Jie; Mamun, Khondaker A; Chau, Tom

    2014-01-01

    Brain-computer interface (BCI) systems exploit brain activity for generating a control command and may be used by individuals with severe motor disabilities as an alternative means of communication. An emerging brain monitoring modality for BCI development is transcranial Doppler ultrasonography (TCD), which facilitates the tracking of cerebral blood flow velocities associated with mental tasks. However, TCD-BCI studies to date have exclusively been offline. The feasibility of a TCD-based BCI system hinges on its online performance. In this paper, an online TCD-BCI system was implemented, bilaterally tracking blood flow velocities in the middle cerebral arteries for system-paced control of a scanning keyboard. Target letters or words were selected by repetitively rehearsing the spelling while imagining the writing of the intended word, a left-lateralized task. Undesired letters or words were bypassed by performing visual tracking, a non-lateralized task. The keyboard scanning period was 15 s. With 10 able-bodied right-handed young adults, the two mental tasks were differentiated online using a Naïve Bayes classification algorithm and a set of time-domain, user-dependent features. The system achieved an average specificity and sensitivity of 81.44 ± 8.35 and 82.30 ± 7.39%, respectively. The level of agreement between the intended and machine-predicted selections was moderate (κ = 0.60). The average information transfer rate was 0.87 bits/min with an average throughput of 0.31 ± 0.12 character/min. These findings suggest that an online TCD-BCI can achieve reasonable accuracies with an intuitive language task, but with modest throughput. Future interface and signal classification enhancements are required to improve communication rate.
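A Gaussian Naive Bayes classifier of the kind used to separate the two mental tasks can be sketched in a few lines; the toy one-dimensional features below are illustrative stand-ins, not TCD blood-flow data:

```python
import math

def gaussian_nb_train(X, y):
    """Fit per-class priors and per-feature Gaussian mean/variance."""
    model = {}
    for c in sorted(set(y)):
        rows = [x for x, yi in zip(X, y) if yi == c]
        cols = list(zip(*rows))
        means = [sum(col) / len(col) for col in cols]
        variances = [max(sum((v - m) ** 2 for v in col) / len(col), 1e-9)
                     for col, m in zip(cols, means)]
        model[c] = (len(rows) / len(X), means, variances)
    return model

def gaussian_nb_predict(model, x):
    """Return the class with the highest log-posterior for feature vector x."""
    def score(prior, means, variances):
        s = math.log(prior)
        for v, m, var in zip(x, means, variances):
            s += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        return s
    return max(model, key=lambda c: score(*model[c]))

# Two toy 1-D classes clustered around 0 and 1 (stand-ins for time-domain features).
model = gaussian_nb_train([[0.0], [0.1], [1.0], [1.1]], [0, 0, 1, 1])
```

The "naive" assumption is that features are conditionally independent given the class, which keeps training to a single pass of per-feature statistics.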

  17. Design and Development of Card-Sized Virtual Keyboard Using Permanent Magnets and Hall Sensors

    Science.gov (United States)

    Demachi, Kazuyuki; Ohyama, Makoto; Kanemoto, Yoshiki; Masaie, Issei

This paper proposes a method to distinguish the keystrokes of human fingers fitted with small permanent magnets. Hall sensors arrayed in a credit-card-sized area sense the distribution of the magnetic field produced by the key-typing movement of the fingers, as if a keyboard existed, and the signal is analyzed using a genetic algorithm or a neural network algorithm to distinguish the typed keys. By this method, the keyboard can be miniaturized to credit card size (54 mm × 85 mm). We call this system "the virtual keyboard system".

  18. Pointright: a system to redirect mouse and keyboard control among multiple machines

    Science.gov (United States)

    Johanson, Bradley E.; Winograd, Terry A.; Hutchins, Gregory M.

    2008-09-30

    The present invention provides a software system, PointRight, that allows for smooth and effortless control of pointing and input devices among multiple displays. With PointRight, a single free-floating mouse and keyboard can be used to control multiple screens. When the cursor reaches the edge of a screen it seamlessly moves to the adjacent screen and keyboard control is simultaneously redirected to the appropriate machine. Laptops may also redirect their keyboard and pointing device, and multiple pointers are supported simultaneously. The system automatically reconfigures itself as displays go on, go off, or change the machine they display.

  19. Providing Source Code Level Portability Between CPU and GPU with MapCG

    Institute of Scientific and Technical Information of China (English)

    Chun-Tao Hong; De-Hao Chen; Yu-Bei Chen; Wen-Guang Chen; Wei-Min Zheng; Hai-Bo Lin

    2012-01-01

Graphics processing units (GPU) have taken an important role in the general purpose computing market in recent years. At present, the common approach to programming GPU units is to write GPU-specific code with low level GPU APIs such as CUDA. Although this approach can achieve good performance, it creates serious portability issues as programmers are required to write a specific version of the code for each potential target architecture. This results in high development and maintenance costs. We believe it is desirable to have a programming model which provides source code portability between CPUs and GPUs, as well as different GPUs. This would allow programmers to write one version of the code, which can be compiled and executed on either CPUs or GPUs efficiently without modification. In this paper, we propose MapCG, a MapReduce framework to provide source code level portability between CPUs and GPUs. In contrast to other approaches such as OpenCL, our framework, based on MapReduce, provides a high level programming model and makes programming much easier. We describe the design of MapCG, including the MapReduce-style high-level programming framework and the runtime system on the CPU and GPU. A prototype of the MapCG runtime, supporting multi-core CPUs and NVIDIA GPUs, was implemented. Our experimental results show that this implementation can execute the same source code efficiently on multi-core CPU platforms and GPUs, achieving an average speedup of 1.6~2.5x over previous implementations of MapReduce on eight commonly used applications.
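The portability argument above rests on the MapReduce model: the same user-supplied map and reduce functions can be retargeted to different runtimes. A minimal driver sketch of that model (a generic word count, not MapCG's actual API):

```python
from collections import defaultdict

def map_reduce(data, map_fn, reduce_fn):
    """Minimal sequential MapReduce driver: apply map_fn to each item, group
    emitted (key, value) pairs by key, then reduce each group. The same user
    code could, in principle, be retargeted to CPU threads or GPU kernels."""
    groups = defaultdict(list)
    for item in data:
        for key, value in map_fn(item):
            groups[key].append(value)
    return {k: reduce_fn(k, vs) for k, vs in groups.items()}

counts = map_reduce(
    ["gpu cpu gpu", "cpu"],
    map_fn=lambda line: [(w, 1) for w in line.split()],
    reduce_fn=lambda k, vs: sum(vs),
)
print(counts)  # {'gpu': 2, 'cpu': 2}
```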

  20. Real-world comparison of CPU and GPU implementations of SNPrank: a network analysis tool for GWAS.

    Science.gov (United States)

    Davis, Nicholas A; Pandey, Ahwan; McKinney, B A

    2011-01-15

Bioinformatics researchers have a variety of programming languages and architectures at their disposal, and recent advances in graphics processing unit (GPU) computing have added a promising new option. However, many performance comparisons inflate the actual advantages of GPU technology. In this study, we carry out a realistic performance evaluation of SNPrank, a network centrality algorithm that ranks single nucleotide polymorphisms (SNPs) based on their importance in the context of a phenotype-specific interaction network. Our goal is to identify the best computational engine for the SNPrank web application and to provide a variety of well-tested implementations of SNPrank for bioinformaticians to integrate into their research. Using SNP data from the Wellcome Trust Case Control Consortium genome-wide association study of Bipolar Disorder, we compare multiple SNPrank implementations, including Python, Matlab and Java, as well as CPU versus GPU implementations. When compared with naïve, single-threaded CPU implementations, the GPU yields a large improvement in execution time. However, with comparable effort, multi-threaded CPU implementations negate the apparent advantage of GPU implementations. The SNPrank code is open source and available at http://insilico.utulsa.edu/snprank.
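SNPrank is a network-centrality scheme related to eigenvector/PageRank-style ranking. As a generic illustration of that family (a damped power iteration on a toy network, not SNPrank's exact formula):

```python
import numpy as np

def rank_nodes(A, damping=0.85, iters=100):
    """PageRank-style power iteration on adjacency matrix A.
    Returns a score per node; highly connected nodes score higher."""
    n = A.shape[0]
    col_sums = A.sum(axis=0)
    col_sums[col_sums == 0] = 1.0
    M = A / col_sums                       # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * M @ r
    return r

# Star network: node 0 is connected to all others, so it should rank highest.
A = np.zeros((4, 4))
A[0, 1:] = A[1:, 0] = 1.0
scores = rank_nodes(A)
print(int(np.argmax(scores)))  # 0
```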

  1. CUDASW++ 3.0: accelerating Smith-Waterman protein database search by coupling CPU and GPU SIMD instructions.

    Science.gov (United States)

    Liu, Yongchao; Wirawan, Adrianto; Schmidt, Bertil

    2013-04-04

    The maximal sensitivity for local alignments makes the Smith-Waterman algorithm a popular choice for protein sequence database search based on pairwise alignment. However, the algorithm is compute-intensive due to a quadratic time complexity. Corresponding runtimes are further compounded by the rapid growth of sequence databases. We present CUDASW++ 3.0, a fast Smith-Waterman protein database search algorithm, which couples CPU and GPU SIMD instructions and carries out concurrent CPU and GPU computations. For the CPU computation, this algorithm employs SSE-based vector execution units as accelerators. For the GPU computation, we have investigated for the first time a GPU SIMD parallelization, which employs CUDA PTX SIMD video instructions to gain more data parallelism beyond the SIMT execution model. Moreover, sequence alignment workloads are automatically distributed over CPUs and GPUs based on their respective compute capabilities. Evaluation on the Swiss-Prot database shows that CUDASW++ 3.0 gains a performance improvement over CUDASW++ 2.0 up to 2.9 and 3.2, with a maximum performance of 119.0 and 185.6 GCUPS, on a single-GPU GeForce GTX 680 and a dual-GPU GeForce GTX 690 graphics card, respectively. In addition, our algorithm has demonstrated significant speedups over other top-performing tools: SWIPE and BLAST+. CUDASW++ 3.0 is written in CUDA C++ and PTX assembly languages, targeting GPUs based on the Kepler architecture. This algorithm obtains significant speedups over its predecessor: CUDASW++ 2.0, by benefiting from the use of CPU and GPU SIMD instructions as well as the concurrent execution on CPUs and GPUs. The source code and the simulated data are available at http://cudasw.sourceforge.net.
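The quadratic-time recurrence that CUDASW++ vectorizes is the standard Smith-Waterman dynamic program. A scalar reference sketch of that recurrence (linear gap penalty; scoring parameters are illustrative, not CUDASW++'s defaults):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Scalar reference Smith-Waterman: returns the best local alignment score.
    (CUDASW++ accelerates exactly this recurrence with SSE and CUDA PTX SIMD.)"""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + s, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACGT", "ACGT"))  # 8 (four matches at +2 each)
```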

  2. The Effect of Keyboard Learning Experiences on Middle School General Music Students' Music Achievement and Attitudes.

    Science.gov (United States)

    Wig, Jacob A., Jr.; Boyle, J. David

    1982-01-01

    Describes a study which compared the effects of a keyboard learning approach and a traditional general music approach on sixth-grade general music students' music achievement, attitudes toward music, and self-concept regarding music ability. (Author/RM)

  3. An investigation of the performance of novel chorded keyboards in combination with pointing input devices.

    Science.gov (United States)

    Shi, Wen-Zhou; Wu, Fong-Gong

    2015-01-01

    Rapid advances in computing power have driven the development of smaller and lighter technology products, with novel input devices constantly being produced in response to new user behaviors and usage contexts. The aim of this research was to investigate the feasibility of operating chorded keyboard control modules in concert with pointing devices such as styluses and mice. We compared combinations of two novel chorded keyboards with different pointing devices in hopes of finding a better combination for future electronic products. Twelve participants were recruited for simulation testing, and paired sample t testing was conducted to determine whether input and error rates for the novel keyboards were improved significantly over those of traditional input methods. The most efficient input device combination tested was the combination of a novel cross-shaped key keyboard and a stylus, suggesting the high potential for use of this combination with future mobile IT products.

  4. 78 FR 70320 - Certain Mobile Handset Devices and Related Touch Keyboard Software; Commission Determination Not...

    Science.gov (United States)

    2013-11-25

    ... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Mobile Handset Devices and Related Touch Keyboard Software; Commission Determination Not... and Personal Communications Devices, LLC (``PCD'') of Hauppauge, New York as respondents. PCD has...

  5. The immediate effects of therapeutic keyboard music playing for finger training in adults undergoing hand rehabilitation

    Science.gov (United States)

    Zhang, Xiaoying; Liu, Songhuai; Yang, Degang; Du, Liangjie; Wang, Ziyuan

    2016-01-01

    [Purpose] The purpose of this study was to examine the immediate effects of therapeutic keyboard music playing on the finger function of subjects’ hands through measurements of the joint position error test, surface electromyography, probe reaction time, and writing time. [Subjects and Methods] Ten subjects were divided randomly into experimental and control groups. The experimental group used therapeutic keyboard music playing and the control group used grip training. All subjects were assessed and evaluated by the joint position error test, surface electromyography, probe reaction time, and writing time. [Results] After accomplishing therapeutic keyboard music playing and grip training, surface electromyography of the two groups showed no significant change, but joint position error test, probe reaction time, and writing time obviously improved. [Conclusion] These results suggest that therapeutic keyboard music playing is an effective and novel treatment for improving joint position error test scores, probe reaction time, and writing time, and it should be promoted widely in clinics. PMID:27630419

  6. Integrating Piano Keyboarding into the Elementary Classroom: Effects on Memory Skills and Sentiment Toward School.

    Science.gov (United States)

    Marcinkiewicz, Henryk R.; And Others

    1995-01-01

    Discovered that the introduction of piano keyboarding into elementary school music instruction produced a positive effect regarding children's sentiment towards school. No discernible effect was revealed concerning memory skills. Includes statistical data and description of survey questionnaires. (MJP)

  7. The immediate effects of therapeutic keyboard music playing for finger training in adults undergoing hand rehabilitation.

    Science.gov (United States)

    Zhang, Xiaoying; Liu, Songhuai; Yang, Degang; Du, Liangjie; Wang, Ziyuan

    2016-08-01

    [Purpose] The purpose of this study was to examine the immediate effects of therapeutic keyboard music playing on the finger function of subjects' hands through measurements of the joint position error test, surface electromyography, probe reaction time, and writing time. [Subjects and Methods] Ten subjects were divided randomly into experimental and control groups. The experimental group used therapeutic keyboard music playing and the control group used grip training. All subjects were assessed and evaluated by the joint position error test, surface electromyography, probe reaction time, and writing time. [Results] After accomplishing therapeutic keyboard music playing and grip training, surface electromyography of the two groups showed no significant change, but joint position error test, probe reaction time, and writing time obviously improved. [Conclusion] These results suggest that therapeutic keyboard music playing is an effective and novel treatment for improving joint position error test scores, probe reaction time, and writing time, and it should be promoted widely in clinics.

  8. Teaching and Learning Undergraduate Music Theory at the Keyboard: Challenges, Solutions, and Impacts

    National Research Council Canada - National Science Library

    Michael R Callahan

    2015-01-01

      Music making at the keyboard can be of significant value to students learning music theory and aural skills, but an instructor must clear several logistical hurdles in order to integrate it fully...

  9. Effect of dynamic keyboard and word-prediction systems on text input speed in persons with functional tetraplegia.

    Science.gov (United States)

    Pouplin, Samuel; Robertson, Johanna; Antoine, Jean-Yves; Blanchet, Antoine; Kahloun, Jean Loup; Volle, Philippe; Bouteille, Justine; Lofaso, Frédéric; Bensmail, Djamel

    2014-01-01

    Information technology plays a very important role in society. People with disabilities are often limited by slow text input speed despite the use of assistive devices. This study aimed to evaluate the effect of a dynamic on-screen keyboard (Custom Virtual Keyboard) and a word-prediction system (Sibylle) on text input speed in participants with functional tetraplegia. Ten participants tested four modes at home (static on-screen keyboard with and without word prediction and dynamic on-screen keyboard with and without word prediction) for 1 mo before choosing one mode and then using it for another month. Initial mean text input speed was around 23 characters per minute with the static keyboard and 12 characters per minute with the dynamic keyboard. The results showed that the dynamic keyboard reduced text input speed by 37% compared with the standard keyboard and that the addition of word prediction had no effect on text input speed. We suggest that current forms of dynamic keyboards and word prediction may not be suitable for increasing text input speed, particularly for subjects who use pointing devices. Future studies should evaluate the optimal ergonomic design of dynamic keyboards and the number and position of words that should be predicted.

  10. Are Computer Keyboards Making You--or Your Students--Sick?

    Science.gov (United States)

    Goldsborough, Reid

    2008-01-01

    According to new research by the London-based consumer group Which? (www.which.co.uk), a computer's keyboard may be harboring the kinds of bugs that can cause a nasty case of food poisoning. "The main cause of a bug-infested keyboard is eating at one's desk," according to a report released by the group. Crumbs and spills can wind up on and between…

  11. A data-driven design evaluation tool for handheld device soft keyboards.

    Directory of Open Access Journals (Sweden)

    Matthieu B Trudeau

Full Text Available Thumb interaction is a primary technique used to operate small handheld devices such as smartphones. Despite the different techniques involved in operating a handheld device compared to a personal computer, the keyboard layouts for both devices are similar. A handheld device keyboard that considers the physical capabilities of the thumb may improve user experience. We developed and applied a design evaluation tool for different geometries of the QWERTY keyboard using a performance evaluation model. The model utilizes previously collected data on thumb motor performance and posture for different tap locations and thumb movement directions. We calculated a performance index (PITOT; 0 is worst and 2 is best) for 663 designs consisting of different combinations of three variables: the keyboard's radius of curvature R (mm), orientation O (°), and vertical location on the screen (L). The current standard keyboard performed poorly (PITOT = 0.28) compared to other designs considered. Keyboard location (L) contributed the greatest variability in performance of the three design variables, suggesting that designers should modify this variable first. Performance was greatest for designs in the middle keyboard location. In addition, a slightly upward curve (R = -20 mm) oriented perpendicular to the thumb's long axis (O = -20°) improved performance to PITOT = 1.97. The poorest performances were associated with placement of the keyboard's spacebar in the bottom right corner of the screen (e.g., the worst was R = 20 mm, O = 40°, L = Bottom; PITOT = 0.09). While this evaluation tool can be used in the design process as an ergonomic reference to promote user motor performance, other design variables such as visual access and usability still remain unexplored.

  12. Ultra-fast hybrid CPU-GPU multiple scatter simulation for 3-D PET.

    Science.gov (United States)

    Kim, Kyung Sang; Son, Young Don; Cho, Zang Hee; Ra, Jong Beom; Ye, Jong Chul

    2014-01-01

Scatter correction is very important in 3-D PET reconstruction due to a large scatter contribution in measurements. Currently, one of the most popular methods is the so-called single scatter simulation (SSS), which considers single Compton scattering contributions from many randomly distributed scatter points. The SSS enables a fast calculation of scattering with relatively high accuracy; however, the accuracy of SSS depends on the accuracy of tail fitting to find a correct scaling factor, which is often difficult in low photon count measurements. To overcome this drawback, as well as to improve the accuracy of scatter estimation by incorporating the multiple scattering contribution, we propose a multiple scatter simulation (MSS) based on a simplified Monte Carlo (MC) simulation that considers photon migration and interactions due to photoelectric absorption and Compton scattering. Unlike the SSS, the MSS calculates a scaling factor by comparing simulated prompt data with the measured data in the whole volume, which enables a more robust estimation of the scaling factor. Even though the proposed MSS is based on MC, a significant acceleration of the computation is possible by using a virtual detector array with a larger pitch, exploiting the fact that the scatter distribution varies slowly in the spatial domain. Furthermore, our MSS lends itself well to parallel implementation on a graphics processing unit (GPU). In particular, we exploit a hybrid CPU-GPU technique using open multiprocessing and the compute unified device architecture, which runs 128.3 times faster than a single CPU. Overall, the computational time of MSS is 9.4 s for a high-resolution research tomograph (HRRT) system. The performance of the proposed MSS is validated through actual experiments using an HRRT.
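The key robustness idea above is fitting the scaling factor against the whole volume rather than only the tails. In its simplest form this is a one-parameter least-squares fit; a toy sketch (synthetic arrays standing in for simulated and measured prompt data):

```python
import numpy as np

def global_scale_factor(simulated, measured):
    """Least-squares scale s minimizing ||measured - s*simulated|| over the
    whole volume, the MSS-style alternative to tail-only fitting."""
    simulated = np.asarray(simulated, float)
    measured = np.asarray(measured, float)
    return float(simulated @ measured / (simulated @ simulated))

sim = np.array([1.0, 2.0, 3.0, 4.0])
meas = 2.5 * sim                       # synthetic "measured" data
print(global_scale_factor(sim, meas))  # 2.5
```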

  13. Differential effects of type of keyboard playing task and tempo on surface EMG amplitudes of forearm muscles

    Directory of Open Access Journals (Sweden)

    Hyun Ju eChong

    2015-09-01

Full Text Available Despite increasing interest in keyboard playing as a strategy for repetitive finger exercises in fine motor skill development and hand rehabilitation, comparative analysis of task-specific finger movements relevant to keyboard playing has been less extensive. This study examined whether there were differences in surface EMG activity levels of forearm muscles associated with different keyboard playing tasks. Results demonstrated higher muscle activity with sequential keyboard playing in a random pattern compared to individuated playing or sequential playing in a successive pattern. Also, the speed of finger movements was found to be a factor that affects muscle activity levels: faster tempi elicited significantly greater muscle activity than a self-paced tempo. The results inform our understanding of the types of finger movements involved in different types of keyboard playing at different tempi, so that the efficacy and fatigue level of keyboard playing can be weighed when it is used as an intervention for amateur pianists or individuals with impaired fine motor skills.

  14. Differential effects of type of keyboard playing task and tempo on surface EMG amplitudes of forearm muscles.

    Science.gov (United States)

    Chong, Hyun Ju; Kim, Soo Ji; Yoo, Ga Eul

    2015-01-01

Despite increasing interest in keyboard playing as a strategy for repetitive finger exercises in fine motor skill development and hand rehabilitation, comparative analysis of task-specific finger movements relevant to keyboard playing has been less extensive. This study examined whether there were differences in surface EMG activity levels of forearm muscles associated with different keyboard playing tasks. Results demonstrated higher muscle activity with sequential keyboard playing in a random pattern compared to individuated playing or sequential playing in a successive pattern. Also, the speed of finger movements was found to be a factor that affects muscle activity levels: faster tempi elicited significantly greater muscle activity than a self-paced tempo. The results inform our understanding of the types of finger movements involved in different types of keyboard playing at different tempi. This helps in weighing the efficacy and fatigue level of keyboard playing tasks when they are used as an intervention for amateur pianists or individuals with impaired fine motor skills.

  15. Plasma carboxypeptidase U (CPU, CPB2, TAFIa) generation during in vitro clot lysis and its interplay between coagulation and fibrinolysis.

    Science.gov (United States)

    Leenaerts, Dorien; Aernouts, Jef; Van Der Veken, Pieter; Sim, Yani; Lambeir, Anne-Marie; Hendriks, Dirk

    2017-07-26

Carboxypeptidase U (CPU, CPB2, TAFIa) is a basic carboxypeptidase that is able to attenuate fibrinolysis. The inactive precursor procarboxypeptidase U is converted to its active form by thrombin, the thrombin-thrombomodulin complex or plasmin. The aim of this study was to investigate and characterise the time course of CPU generation in healthy individuals. In plasma of 29 healthy volunteers, CPU generation was monitored during in vitro clot lysis. CPU activity was measured by means of an enzymatic assay that uses the specific substrate Bz-o-cyano-Phe-Arg. An algorithm was written to plot the CPU generation curve and calculate the parameters that define it. In all individuals, CPU generation was biphasic. Marked inter-individual differences were present, and a reference range was determined. The endogenous CPU generation potential is the composite effect of multiple factors. With respect to the characteristics of the first CPU activity peak, we found correlations with baseline proCPU concentration, the proCPU Thr325Ile polymorphism, the time to clot initiation and the clot lysis time. The second CPU peak correlated with baseline proCPU levels and with the maximum turbidity of the clot lysis profile. In conclusion, our method offers a technique to determine the endogenous CPU generation potential of an individual. The parameters obtained by the method quantitatively describe the different mechanisms that influence CPU generation during the complex interplay between coagulation and fibrinolysis, in line with the threshold hypothesis.
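Extracting the parameters of a biphasic generation curve amounts to locating its two activity peaks over time. A simple stand-in for the paper's (unpublished here) curve algorithm, applied to a synthetic biphasic profile:

```python
import numpy as np

def curve_peaks(t, activity):
    """Locate local maxima in an activity time course and return (time, value)
    pairs, largest first -- a toy stand-in for a curve-parameter algorithm."""
    a = np.asarray(activity, float)
    idx = [i for i in range(1, len(a) - 1) if a[i] > a[i - 1] and a[i] >= a[i + 1]]
    idx.sort(key=lambda i: a[i], reverse=True)
    return [(t[i], a[i]) for i in idx]

# Synthetic biphasic profile: early sharp peak, later broader peak.
t = np.linspace(0, 60, 121)
activity = 10 * np.exp(-((t - 10) ** 2) / 8) + 4 * np.exp(-((t - 40) ** 2) / 50)
peaks = curve_peaks(t, activity)
print(len(peaks), round(peaks[0][0]))  # two peaks; the largest lies near t = 10
```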

  16. Tablet Keyboard Configuration Affects Performance, Discomfort and Task Difficulty for Thumb Typing in a Two-Handed Grip.

    Directory of Open Access Journals (Sweden)

    Matthieu B Trudeau

Full Text Available When holding a tablet computer with two hands, the touch keyboard configuration imposes postural constraints on the user because of the need to simultaneously hold the device and type with the thumbs. Designers have provided users with several possible keyboard configurations (device orientation, keyboard layout and location). However, potential differences in performance, usability and postures among these configurations have not been explored. We hypothesize that (1) the narrower standard keyboard layout in the portrait orientation leads to lower self-reported discomfort and less reach than the landscape orientation; (2) a split keyboard layout results in better overall outcomes compared to the standard layout; and (3) the conventional bottom keyboard location leads to the best outcomes overall compared to other locations. A repeated measures laboratory experiment of 12 tablet owners measured typing speed, discomfort, task difficulty, and thumb/wrist joint postures using an active marker system during typing tasks for different combinations of device orientation (portrait and landscape), keyboard layout (standard and split), and keyboard location (bottom, middle, top). The narrower standard keyboard with the device in the portrait orientation was associated with less discomfort (least squares mean (and S.E.) 2.9±0.6) than the landscape orientation (4.5±0.7). Additionally, the split keyboard decreased the amount of reaching required by the thumb in the landscape orientation as defined by a reduced range of motion and less MCP extension, which may have led to reduced discomfort (2.7±0.6) compared to the standard layout (4.5±0.7). However, typing speed was greater for the standard layout (127±5 char./min.) compared to the split layout (113±4 char./min.) regardless of device orientation and keyboard location. Usage guidelines and designers can incorporate these findings to optimize keyboard design parameters and form factors that promote user performance.

  17. Tablet Keyboard Configuration Affects Performance, Discomfort and Task Difficulty for Thumb Typing in a Two-Handed Grip.

    Science.gov (United States)

    Trudeau, Matthieu B; Catalano, Paul J; Jindrich, Devin L; Dennerlein, Jack T

    2013-01-01

When holding a tablet computer with two hands, the touch keyboard configuration imposes postural constraints on the user because of the need to simultaneously hold the device and type with the thumbs. Designers have provided users with several possible keyboard configurations (device orientation, keyboard layout and location). However, potential differences in performance, usability and postures among these configurations have not been explored. We hypothesize that (1) the narrower standard keyboard layout in the portrait orientation leads to lower self-reported discomfort and less reach than the landscape orientation; (2) a split keyboard layout results in better overall outcomes compared to the standard layout; and (3) the conventional bottom keyboard location leads to the best outcomes overall compared to other locations. A repeated measures laboratory experiment of 12 tablet owners measured typing speed, discomfort, task difficulty, and thumb/wrist joint postures using an active marker system during typing tasks for different combinations of device orientation (portrait and landscape), keyboard layout (standard and split), and keyboard location (bottom, middle, top). The narrower standard keyboard with the device in the portrait orientation was associated with less discomfort (least squares mean (and S.E.) 2.9±0.6) than the landscape orientation (4.5±0.7). Additionally, the split keyboard decreased the amount of reaching required by the thumb in the landscape orientation as defined by a reduced range of motion and less MCP extension, which may have led to reduced discomfort (2.7±0.6) compared to the standard layout (4.5±0.7). However, typing speed was greater for the standard layout (127±5 char./min.) compared to the split layout (113±4 char./min.) regardless of device orientation and keyboard location. Usage guidelines and designers can incorporate these findings to optimize keyboard design parameters and form factors that promote user performance.

  18. 47 CFR 15.102 - CPU boards and power supplies used in personal computers.

    Science.gov (United States)

    2010-10-01

    ... computers. 15.102 Section 15.102 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Unintentional Radiators § 15.102 CPU boards and power supplies used in personal computers. (a... modifications that must be made to a personal computer, peripheral device, CPU board or power supply during...

  19. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.

    Science.gov (United States)

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-04-07

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially the GPU based methods. In the classical GPU based imaging algorithm, GPU is employed to accelerate image processing by massive parallel computing, and CPU is only used to perform the auxiliary work such as data input/output (IO). However, the computing capability of CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPU/GPU is proposed to achieve real-time SAR imaging. Through the proposed tasks partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. In the part of CPU parallel imaging, the advanced vector extension (AVX) method is firstly introduced into the multi-core CPU parallel method for higher efficiency. As for the GPU parallel imaging, not only the bottlenecks of memory limitation and frequent data transferring are broken, but also kinds of optimized strategies are applied, such as streaming, parallel pipeline and so on. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging on single-core CPU by 270 times and realizes the real-time imaging in that the imaging rate outperforms the raw data generation rate.
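The collaborative scheme above hinges on partitioning imaging tasks across CPUs and GPUs according to their respective compute capabilities. A toy sketch of proportional task partitioning (the capability weights are hypothetical, not measured SAR workloads):

```python
def partition_tasks(n_tasks, capabilities):
    """Split n_tasks among devices proportionally to their relative compute
    capability; any remainder goes to the fastest device."""
    total = sum(capabilities.values())
    shares = {d: int(n_tasks * c / total) for d, c in capabilities.items()}
    shares[max(capabilities, key=capabilities.get)] += n_tasks - sum(shares.values())
    return shares

# Hypothetical weights: the GPU is rated 9x the single CPU's throughput.
print(partition_tasks(100, {"cpu": 1.0, "gpu": 9.0}))  # {'cpu': 10, 'gpu': 90}
```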

  20. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing

    Directory of Open Access Journals (Sweden)

    Fan Zhang

    2016-04-01

Full Text Available With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially the GPU based methods. In the classical GPU based imaging algorithm, GPU is employed to accelerate image processing by massive parallel computing, and CPU is only used to perform the auxiliary work such as data input/output (IO). However, the computing capability of CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPU/GPU is proposed to achieve real-time SAR imaging. Through the proposed tasks partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. In the part of CPU parallel imaging, the advanced vector extension (AVX) method is firstly introduced into the multi-core CPU parallel method for higher efficiency. As for the GPU parallel imaging, not only the bottlenecks of memory limitation and frequent data transferring are broken, but also kinds of optimized strategies are applied, such as streaming, parallel pipeline and so on. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging on single-core CPU by 270 times and realizes the real-time imaging in that the imaging rate outperforms the raw data generation rate.

  1. Workload-Aware and CPU Frequency Scaling for Optimal Energy Consumption in VM Allocation

    Directory of Open Access Journals (Sweden)

    Zhen Liu

    2014-01-01

Full Text Available In the problem of VM consolidation for cloud energy saving, different workloads demand different resources, so a VM placement solution that considers workload characteristics will be more reasonable. In the real world, a workload runs at varying CPU utilization over its lifetime according to its task characteristics. This means that energy consumption is related to both CPU utilization and CPU frequency, so using a model of CPU frequency alone to evaluate energy consumption is insufficient. This paper theoretically verifies that there is a CPU frequency best suited to a given CPU utilization that yields the minimum energy consumption. Based on this deduction, we put forward a heuristic CPU frequency scaling algorithm, VP-FS (virtual machine placement with frequency scaling). To carry out the experiments, we implemented three typical greedy algorithms for VM placement and simulated three groups of VM tasks. Our results show that different workloads affect VM allocation results, and each group of workloads has a most suitable algorithm when considering the minimum number of physical machines used. Moreover, owing to CPU frequency scaling, VP-FS achieves the lowest total energy consumption of the four algorithms under any of the three groups of workloads.
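The claim that some interior CPU frequency minimizes energy can be illustrated with a toy model: static power wastes energy when the job runs slowly, while dynamic power grows superlinearly with frequency. The model below (static-plus-cubic power, runtime = work/frequency) is an illustrative assumption, not the paper's derived model:

```python
import numpy as np

def energy(f, work, p_static=1.0, c=0.1):
    """Toy energy model: E(f) = (P_static + c*f^3) * (work / f).
    Low f stretches out static-power cost; high f inflates dynamic power."""
    runtime = work / f
    return (p_static + c * f ** 3) * runtime

# Sweep the frequency range and find the energy-minimizing point.
freqs = np.linspace(0.2, 3.0, 281)
E = [energy(f, work=10.0) for f in freqs]
f_best = freqs[int(np.argmin(E))]
print(round(f_best, 2))  # an interior optimum, neither the min nor the max frequency
```

Under this model the analytic optimum is f* = (P_static / 2c)^(1/3), which the sweep recovers numerically.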

  2. Dosimetric comparison of helical tomotherapy treatment plans for total marrow irradiation created using GPU and CPU dose calculation engines.

    Science.gov (United States)

    Nalichowski, Adrian; Burmeister, Jay

    2013-07-01

To compare optimization characteristics, plan quality, and treatment delivery efficiency between total marrow irradiation (TMI) plans using the new TomoTherapy graphics processing unit (GPU) based dose engine and the CPU/cluster based dose engine. Five TMI plans created on an anthropomorphic phantom were optimized and calculated with both dose engines. The planning treatment volume (PTV) included all the bones from head to mid femur except for upper extremities. Evaluated organs at risk (OAR) consisted of lung, liver, heart, kidneys, and brain. The following treatment parameters were used to generate the TMI plans: field widths of 2.5 and 5 cm, modulation factors of 2 and 2.5, and pitch of either 0.287 or 0.43. The optimization parameters were chosen based on the PTV and OAR priorities, and the plans were optimized with a fixed number of iterations. The PTV constraint was selected to ensure that at least 95% of the PTV received the prescription dose. The plans were evaluated based on D80 and D50 (dose to 80% and 50% of the OAR volume, respectively) and hotspot volumes within the PTVs. Gamma indices (Γ) were also used to compare planar dose distributions between the two modalities. The optimization and dose calculation times were compared between the two systems, as were the treatment delivery times. The results showed very good dosimetric agreement between the GPU and CPU calculated plans for all evaluated planning parameters, indicating that both systems converge on nearly identical plans. All D80 and D50 parameters varied by less than 3% of the prescription dose, with an average difference of 0.8%. A gamma analysis Γ(3%, 3 mm) was used to compare each GPU plan with the corresponding CPU plan, and the average number of voxels meeting the Γ criterion indicated close agreement. The total optimization and dose calculation time was 579 min for the CPU/cluster based system vs 26.8 min for the GPU based system. There was no difference in the calculated treatment delivery time per fraction. Beam-on time varied based on field width and pitch and ranged between 15 and 28 min. The TomoTherapy GPU based dose engine thus produces TMI plans dosimetrically equivalent to those of the CPU/cluster based engine in a small fraction of the calculation time.
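The Γ(3%, 3 mm) comparison above combines a dose-difference tolerance with a distance-to-agreement tolerance. A one-dimensional toy sketch of the gamma index (synthetic profiles; clinical tools use the 2-D/3-D analogue):

```python
import numpy as np

def gamma_1d(ref, eval_, spacing, dose_tol=0.03, dist_tol=3.0):
    """1-D gamma index per reference point: dose_tol is a fraction of the max
    reference dose, dist_tol is in mm. A point passes when gamma <= 1."""
    x = np.arange(len(ref)) * spacing
    dmax = ref.max()
    gammas = []
    for i, d_ref in enumerate(ref):
        dd = (eval_ - d_ref) / (dose_tol * dmax)   # dose-difference term
        dx = (x - x[i]) / dist_tol                 # distance-to-agreement term
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return np.array(gammas)

ref = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
shifted = np.roll(ref, 1)                # same profile shifted by one 1 mm sample
g = gamma_1d(ref, shifted, spacing=1.0)
print((g <= 1.0).mean())  # a 1 mm shift stays within the 3 mm tolerance
```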

  3. Proposed Fuzzy CPU Scheduling Algorithm (PFCS) for Real Time Operating Systems

    Directory of Open Access Journals (Sweden)

    Prerna Ajmani

    2013-12-01

    In the era of supercomputers, multiprogramming operating systems have emerged. A multiprogramming operating system allows more than one ready-to-execute process to be loaded into memory. CPU scheduling is the process of selecting, from among the processes in memory that are ready to execute, the one to which processor (CPU) time is allocated. Many conventional algorithms have been proposed for CPU scheduling, such as first-come-first-served (FCFS), shortest job first (SJF), and priority scheduling. But no algorithm is absolutely ideal in terms of increased throughput, decreased waiting time, and decreased turnaround time. In this paper, a new fuzzy-logic-based CPU scheduling algorithm is proposed to overcome the drawbacks of conventional algorithms and utilize the CPU efficiently.
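    The record does not give the paper's membership functions or rule base; the sketch below is only a minimal illustration of the idea, blending a process's normalized burst time and static priority into a single fuzzy rank (all names and thresholds are assumptions):

```python
# Minimal sketch of a fuzzy CPU-scheduling rank (illustrative only; the
# paper's actual membership functions and rule base are not in the record).

def fuzzy_rank(burst_time, priority, max_burst=100.0, max_priority=10.0):
    """Blend normalized burst time and static priority into one rank.

    Shorter bursts and higher static priority (lower number) raise the rank.
    """
    short = max(0.0, 1.0 - burst_time / max_burst)    # membership in "short job"
    urgent = max(0.0, 1.0 - priority / max_priority)  # membership in "urgent"
    # Simple rule aggregation: favor a job if it is short OR urgent.
    return max(short, urgent)

ready_queue = [("P1", 40, 2), ("P2", 8, 7), ("P3", 90, 1)]  # (name, burst, priority)
# Dispatch the process with the highest fuzzy rank first.
order = sorted(ready_queue, key=lambda p: fuzzy_rank(p[1], p[2]), reverse=True)
```

    A dispatcher would pick the head of `order` whenever the CPU becomes free; SJF and priority scheduling fall out as special cases when one membership dominates.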

  4. Porting AMG2013 to Heterogeneous CPU+GPU Nodes

    Energy Technology Data Exchange (ETDEWEB)

    Samfass, Philipp [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-01-26

    LLNL's future advanced technology system SIERRA will feature heterogeneous compute nodes that consist of IBM POWER9 CPUs and NVIDIA Volta GPUs. Conceptually, the motivation for such an architecture is quite straightforward: while GPUs are optimized for throughput on massively parallel workloads, CPUs strive to minimize latency for rather sequential operations. Yet, making optimal use of heterogeneous architectures raises new challenges for the development of scalable parallel software, e.g., with respect to work distribution. Porting LLNL's parallel numerical libraries to upcoming heterogeneous CPU+GPU architectures is therefore a critical factor for ensuring LLNL's future success in fulfilling its national mission. One of these libraries, called HYPRE, provides parallel solvers and preconditioners for large, sparse linear systems of equations. In the context of this internship project, I consider AMG2013, which is a proxy application for major parts of HYPRE that implements a benchmark for setting up and solving different systems of linear equations. In the following, I describe in detail how I ported multiple parts of AMG2013 to the GPU (Section 2) and present results for different experiments that demonstrate a successful parallel implementation on the heterogeneous machines surface and ray (Section 3). In Section 4, I give guidelines on how my code should be used. Finally, I conclude and give an outlook for future work (Section 5).

  5. A Novel CPU/GPU Simulation Environment for Large-Scale Biologically-Realistic Neural Modeling

    Directory of Open Access Journals (Sweden)

    Roger V Hoang

    2013-10-01

    Computational Neuroscience is an emerging field that provides unique opportunities to study complex brain structures through realistic neural simulations. However, as biological details are added to models, the execution time for the simulation becomes longer. Graphics Processing Units (GPUs) are now being utilized to accelerate simulations due to their ability to perform computations in parallel. As such, they have shown significant improvement in execution time compared to Central Processing Units (CPUs). Most neural simulators utilize either multiple CPUs or a single GPU for better performance, but still show limitations in execution time when biological details are not sacrificed. Therefore, we present a novel CPU/GPU simulation environment for large-scale biological networks, the NeoCortical Simulator version 6 (NCS6). NCS6 is a free, open-source, parallelizable, and scalable simulator, designed to run on clusters of multiple machines, potentially with high performance computing devices in each of them. It has built-in leaky-integrate-and-fire (LIF) and Izhikevich (IZH) neuron models, but users also have the capability to design their own plug-in interface for different neuron types as desired. NCS6 is currently able to simulate one million cells and 100 million synapses in quasi real time by distributing data across these heterogeneous clusters of CPUs and GPUs.

  6. Tempest: GPU-CPU computing for high-throughput database spectral matching.

    Science.gov (United States)

    Milloy, Jeffrey A; Faherty, Brendan K; Gerber, Scott A

    2012-07-06

    Modern mass spectrometers are now capable of producing hundreds of thousands of tandem (MS/MS) spectra per experiment, making the translation of these fragmentation spectra into peptide matches a common bottleneck in proteomics research. When coupled with experimental designs that enrich for post-translational modifications such as phosphorylation and/or include isotopically labeled amino acids for quantification, additional burdens are placed on this computational infrastructure by shotgun sequencing. To address this issue, we have developed a new database searching program that utilizes the massively parallel compute capabilities of a graphical processing unit (GPU) to produce peptide spectral matches in a very high throughput fashion. Our program, named Tempest, combines efficient database digestion and MS/MS spectral indexing on a CPU with fast similarity scoring on a GPU. In our implementation, the entire similarity score, including the generation of full theoretical peptide candidate fragmentation spectra and its comparison to experimental spectra, is conducted on the GPU. Although Tempest uses the classical SEQUEST XCorr score as a primary metric for evaluating similarity for spectra collected at unit resolution, we have developed a new "Accelerated Score" for MS/MS spectra collected at high resolution that is based on a computationally inexpensive dot product but exhibits scoring accuracy similar to that of the classical XCorr. In our experience, Tempest provides compute-cluster level performance in an affordable desktop computer.
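    Tempest's "Accelerated Score" is described only as a computationally inexpensive dot product; the sketch below shows the general shape of such a binned dot-product comparison (the bin width, normalization, and function names are our assumptions, not Tempest's implementation):

```python
import numpy as np

# Illustrative binned dot-product spectral scoring (not Tempest's actual
# "Accelerated Score"; bin width and normalization here are assumptions).

def bin_spectrum(peaks, bin_width=0.02, max_mz=2000.0):
    """Place (m/z, intensity) peaks into fixed-width bins, then unit-normalize."""
    spec = np.zeros(int(max_mz / bin_width))
    for mz, inten in peaks:
        idx = int(mz / bin_width)
        if idx < spec.size:
            spec[idx] = max(spec[idx], inten)
    norm = np.linalg.norm(spec)
    return spec / norm if norm > 0 else spec

def dot_score(experimental, theoretical):
    """Similarity of two unit-normalized binned spectra (1.0 = identical)."""
    return float(np.dot(experimental, theoretical))

obs = bin_spectrum([(147.11, 1.0), (263.14, 0.5)])
theo = bin_spectrum([(147.11, 1.0), (263.14, 0.5)])
```

    On a GPU each thread block would typically score one candidate peptide, but the arithmetic per candidate reduces to exactly this kind of dot product.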

  7. A novel CPU/GPU simulation environment for large-scale biologically realistic neural modeling.

    Science.gov (United States)

    Hoang, Roger V; Tanna, Devyani; Jayet Bray, Laurence C; Dascalu, Sergiu M; Harris, Frederick C

    2013-01-01

    Computational Neuroscience is an emerging field that provides unique opportunities to study complex brain structures through realistic neural simulations. However, as biological details are added to models, the execution time for the simulation becomes longer. Graphics Processing Units (GPUs) are now being utilized to accelerate simulations due to their ability to perform computations in parallel. As such, they have shown significant improvement in execution time compared to Central Processing Units (CPUs). Most neural simulators utilize either multiple CPUs or a single GPU for better performance, but still show limitations in execution time when biological details are not sacrificed. Therefore, we present a novel CPU/GPU simulation environment for large-scale biological networks, the NeoCortical Simulator version 6 (NCS6). NCS6 is a free, open-source, parallelizable, and scalable simulator, designed to run on clusters of multiple machines, potentially with high performance computing devices in each of them. It has built-in leaky-integrate-and-fire (LIF) and Izhikevich (IZH) neuron models, but users also have the capability to design their own plug-in interface for different neuron types as desired. NCS6 is currently able to simulate one million cells and 100 million synapses in quasi real time by distributing data across eight machines with each having two video cards.
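    The built-in LIF model mentioned above can be written in a few lines; here is a minimal forward-Euler version (parameter values are illustrative choices of ours, not NCS6 defaults):

```python
import numpy as np

# Minimal leaky-integrate-and-fire (LIF) update, the kind of model NCS6
# ships built in.  Parameter values are illustrative, not NCS6's.

def lif_step(v, i_syn, dt=1.0, tau=20.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """Advance membrane potentials v (mV) by one dt (ms); return (v, spike mask)."""
    v = v + (-(v - v_rest) + i_syn) * (dt / tau)   # leaky integration
    spikes = v >= v_thresh                         # threshold crossing
    v = np.where(spikes, v_reset, v)               # reset the cells that fired
    return v, spikes

v = np.full(5, -65.0)          # five cells starting at rest
fired = np.zeros(5, dtype=bool)
for _ in range(100):           # 100 ms of constant suprathreshold drive
    v, spikes = lif_step(v, i_syn=20.0)
    fired |= spikes
```

    A simulator like NCS6 runs this same update for millions of cells per step, which is why the dense, regular arithmetic maps so well onto GPUs.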

  8. CPU 86017, p-chlorobenzyltetrahydroberberine chloride, attenuates monocrotaline-induced pulmonary hypertension by suppressing endothelin pathway.

    Science.gov (United States)

    Zhang, Tian-tai; Cui, Bing; Dai, De-zai; Su, Wei

    2005-11-01

    To elucidate the involvement of the endothelin (ET) pathway in the pathogenesis of monocrotaline (MCT)-induced pulmonary arterial hypertension (PAH) and the therapeutic effect of CPU 86017 (p-chlorobenzyltetrahydroberberine chloride) in rats. Rats were injected with a single dose (60 mg/kg, sc) of MCT and given CPU 86017 (20, 40, or 80 mg·kg⁻¹·d⁻¹, po) or saline for 28 d. The hemodynamics, mRNA expression, and vascular activity were evaluated. Right ventricular systolic pressure and central venous pressure were elevated markedly in the PAH model and decreased by CPU 86017. In the PAH group, endothelin-1 (ET-1) in serum and lungs was dramatically increased, by 54% (79.9 pg/mL). CPU 86017 decreased the content of ET-1 to the normal level in lung tissue, but was less effective in serum. The level of NO was significantly increased in the CPU 86017 80 and 40 mg·kg⁻¹·d⁻¹ groups in tissue, whereas the difference in serum was not significant. A significant reduction in MDA production and an increase in SOD activity in the serum and lungs were observed in all three CPU 86017 groups. CPU 86017 80 mg·kg⁻¹·d⁻¹ po increased the activity of cNOS by 33%, and preproET-1 mRNA abundance was also reduced notably in the CPU 86017 80 mg·kg⁻¹·d⁻¹ group vs the PAH group. The KCl-induced vasoconstrictions in the calcium-free medium decreased markedly in the PAH group but recovered partially after CPU 86017 intervention; the constrictions in the presence of Ca(2+) were not improved by CPU 86017. The phenylephrine-induced vasoconstrictions in the calcium-free medium decreased markedly in the PAH group and did not recover after CPU 86017 intervention, whereas the constrictions in the presence of Ca(2+) returned completely to normal after CPU 86017 intervention. CPU 86017 suppressed MCT-induced PAH mainly through an indirect suppression of the ET-1 system, which was involved in the pathogenesis of the disease.

  9. Born to Conquer: The Fortepiano’s Revolution of Keyboard Technique and Style

    Directory of Open Access Journals (Sweden)

    Rachel A. Lowrance

    2014-06-01

    The fortepiano had a rough beginning. In 1709 it entered a world that was not quite ready for it, a world that was very comfortable with the earlier keyboard instruments, especially the harpsichord. Pianists and composers were used to harpsichord technique and style, which differ drastically from those of the piano, because the harpsichord was in fact a very different instrument from the piano, as this paper explains. This paper traces the history of the piano's rise to dominance over the harpsichord and how its unique hammer action began creating an idiomatic piano style. The piano also revolutionized keyboard repertoire, taking some genres from the harpsichord and creating completely new genres of composition. Despite its slow start in the early eighteenth century, the piano completely revolutionized the musical world into which it was born. The rise of the fortepiano throughout the late eighteenth and nineteenth centuries transformed traditional keyboard technique, style, and compositions.

  10. On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information

    Science.gov (United States)

    Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.

    Towards realizing a multimodal affect recognition system, we are considering the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy in affective user models. Specifically, we present and discuss the development and evaluation process of two corresponding affect recognition subsystems, with emphasis on the recognition of 6 basic emotional states, namely happiness, sadness, surprise, anger and disgust as well as the emotion-less state which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.

  11. Performed or observed keyboard actions affect pianists' judgements of relative pitch.

    Science.gov (United States)

    Repp, Bruno H; Knoblich, Gunther

    2009-11-01

    Action can affect visual perception if the action's expected sensory effects resemble a concurrent unstable or deviant event. To determine whether action can also change auditory perception, participants were required to play pairs of octave-ambiguous tones by pressing successive keys on a piano or computer keyboard and to judge whether each pitch interval was rising or falling. Both pianists and nonpianist musicians gave significantly more "rising" responses when the order of key presses was left-to-right than when it was right-to-left, in accord with the pitch mapping of the piano. However, the effect was much larger in pianists. Pianists showed a similarly large effect when they passively observed the experimenter pressing keys on a piano keyboard, as long as the keyboard faced the participant. The results suggest that acquired action-effect associations can affect auditory perceptual judgement.

  12. A Survey of Techniques of CPU-GPGPU Heterogeneous Architecture

    Institute of Scientific and Technical Information of China (English)

    徐新海; 林宇斐; 易伟

    2009-01-01

    With the fast development of GPU technology, GPUs now offer better performance than CPUs in both computing ability and memory bandwidth. General-purpose computing on GPUs has therefore become increasingly popular, bringing forth an emerging CPU-GPGPU heterogeneous architecture. Although the new architecture demonstrates high performance and is currently a highlight in academia and industry, how to write and execute programs on it efficiently still remains a big challenge. This paper summarizes the techniques of programmability, reliability, and low power for GPUs, and discusses the development trend of the CPU-GPGPU heterogeneous architecture.

  13. Piano Keyboard Training and the Spatial-Temporal Development of Young Children Attending Kindergarten Classes in Greece

    Science.gov (United States)

    Zafranas, Nikolaos

    2004-01-01

    This research had three main goals: to control whether children would show significant improvement in cognitive test scores following piano/keyboard instruction; to compare whether the spatial tasks would show greater improvement than other tasks; and to examine whether the effects of piano/keyboard training on spatial tasks are gender…

  15. A Spiking Neural Simulator Integrating Event-Driven and Time-Driven Computation Schemes Using Parallel CPU-GPU Co-Processing: A Case Study.

    Science.gov (United States)

    Naveros, Francisco; Luque, Niceto R; Garrido, Jesús A; Carrillo, Richard R; Anguita, Mancia; Ros, Eduardo

    2015-07-01

    Time-driven simulation methods in traditional CPU architectures perform well and precisely when simulating small-scale spiking neural networks. Nevertheless, they still have drawbacks when simulating large-scale systems. Conversely, event-driven simulation methods in CPUs and time-driven simulation methods in graphic processing units (GPUs) can outperform CPU time-driven methods under certain conditions. With this performance improvement in mind, we have developed an event-and-time-driven spiking neural network simulator suitable for a hybrid CPU-GPU platform. Our neural simulator is able to efficiently simulate bio-inspired spiking neural networks consisting of different neural models, which can be distributed heterogeneously in both small layers and large layers or subsystems. For the sake of efficiency, the low-activity parts of the neural network can be simulated in CPU using event-driven methods while the high-activity subsystems can be simulated in either CPU (a few neurons) or GPU (thousands or millions of neurons) using time-driven methods. In this brief, we have undertaken a comparative study of these different simulation methods. For benchmarking the different simulation methods and platforms, we have used a cerebellar-inspired neural-network model consisting of a very dense granular layer and a Purkinje layer with a smaller number of cells (according to biological ratios). Thus, this cerebellar-like network includes a dense diverging neural layer (increasing the dimensionality of its internal representation and sparse coding) and a converging neural layer (integration) similar to many other biologically inspired and also artificial neural networks.
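    The event-driven half of such a hybrid scheme advances a neuron only when an event (a spike delivery) reaches it, which a time-ordered queue makes natural. A toy sketch of that scheduling idea, not the simulator's actual kernel (the re-scheduling rule and cutoff are invented):

```python
import heapq

# Toy event-driven kernel: low-activity parts of a network are advanced
# only when a spike event arrives, instead of at every time step.

def run_events(events, propagation_delay=1.0):
    """events: list of (time, neuron_id) spikes; returns the delivery log."""
    queue = list(events)
    heapq.heapify(queue)                    # time-ordered event queue
    delivered = []
    while queue:
        t, nid = heapq.heappop(queue)
        delivered.append((t, nid))          # update only the affected neuron at t
        if t + propagation_delay < 5.0:     # schedule a follow-up event (toy rule)
            heapq.heappush(queue, (t + propagation_delay, nid))
    return delivered

log = run_events([(0.0, 0), (2.5, 1)])
```

    In the hybrid simulator, a queue like this drives the low-activity CPU partition, while dense high-activity layers are stepped time-driven on the GPU.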

  16. The immediate effects of keyboard-based music therapy on probe reaction time

    Science.gov (United States)

    Zhang, Xiaoying; Zhou, Yue; Liu, Songhuai

    2016-01-01

    [Purpose] This study examined the immediate effects of keyboard-based music therapy on Probe Reaction Time. [Subjects and Methods] Probe Reaction Time was determined in 10 subjects by self-evaluation before and after music therapy intervention. The Probe Reaction Time was separately measured 4 times. [Results] After completion of music therapy intervention, the Probe Reaction Time in the 10 subjects was significantly decreased. [Conclusion] The results suggest that keyboard-based music therapy is an effective and novel treatment, and should be applied in clinical practice. PMID:27512274

  17. The immediate effects of keyboard-based music therapy on probe reaction time.

    Science.gov (United States)

    Zhang, Xiaoying; Zhou, Yue; Liu, Songhuai

    2016-07-01

    [Purpose] This study examined the immediate effects of keyboard-based music therapy on Probe Reaction Time. [Subjects and Methods] Probe Reaction Time was determined in 10 subjects by self-evaluation before and after music therapy intervention. The Probe Reaction Time was separately measured 4 times. [Results] After completion of music therapy intervention, the Probe Reaction Time in the 10 subjects was significantly decreased. [Conclusion] The results suggest that keyboard-based music therapy is an effective and novel treatment, and should be applied in clinical practice.

  18. Design of a diffractive optical element for pattern formation in a bilingual virtual keyboard

    Science.gov (United States)

    Manouchehri, Sohrab; Rahimi, Mojtaba; Oboudiat, Mohammad

    2016-03-01

    Pattern formation is one of the many display applications of diffractive optical elements (DOEs). Since DOEs are lightweight and slim compared to other optical devices, their use as the image projection device in virtual keyboards is suggested. In this paper, we present an approach to designing elements that produce distinct far-field intensity patterns for two wavelengths; these two patterns are the images of a bilingual virtual keyboard. Achieving this with DOEs is not simple, as they are inherently wavelength specific. Our technique is based on the phase-periodic characteristic of the wavefront and uses an iterative algorithm to design the phase profiles.

  19. Reconstruction of the neutron spectrum using an artificial neural network in CPU and GPU

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez D, V. M.; Moreno M, A.; Ortiz L, M. A. [Universidad de Cordoba, 14002 Cordoba (Spain); Vega C, H. R.; Alonso M, O. E., E-mail: vic.mc68010@gmail.com [Universidad Autonoma de Zacatecas, 98000 Zacatecas, Zac. (Mexico)

    2016-10-15

    The computing power of personal computers has kept increasing; computers now have several processors in the CPU and, in addition, multiple CUDA cores in the graphics processing unit (GPU). Both systems can be used individually or combined to perform scientific computation without resorting to processor or supercomputing arrangements. The Bonner sphere spectrometer is the most commonly used multi-element system for neutron detection and spectrometry. Each sphere-detector combination gives a particular response that depends on the energy of the neutrons, and the total set of these responses is known as the response matrix R(E). Thus, the counting rates obtained with each sphere and the neutron spectrum are related through the Fredholm equation in its discrete version. Reconstructing the spectrum therefore means solving a poorly conditioned system of equations with an infinite number of solutions, and to find the appropriate solution the use of artificial intelligence through neural networks, on both CPU and GPU platforms, has been proposed. (Author)
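    In the discrete form described above, the measured counting rates C and the spectrum φ are linked by the response matrix, C = Rφ. The shape of the inverse problem can be seen in a tiny ridge-regularized least-squares sketch (the 3×3 response values are invented for illustration; a real Bonner-sphere matrix is larger and far worse conditioned):

```python
import numpy as np

# Discrete Fredholm relation from the record: counts C = R @ phi, with R the
# Bonner-sphere response matrix.  The paper unfolds with an ANN; this is a
# ridge-regularized least-squares sketch of the same inverse problem
# (response values are invented for illustration).

R = np.array([[0.9, 0.4, 0.1],
              [0.5, 0.8, 0.3],
              [0.2, 0.5, 0.9]])          # sphere x energy-bin responses
phi_true = np.array([1.0, 2.0, 0.5])     # "true" spectrum (arbitrary units)
counts = R @ phi_true                    # ideal counting rates

lam = 1e-6                               # small ridge term: the system is ill-conditioned
phi_est = np.linalg.solve(R.T @ R + lam * np.eye(3), R.T @ counts)
```

    The ANN approach in the paper replaces this explicit inversion with a network trained on (counts, spectrum) pairs, which copes better with noise and ill conditioning.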

  20. Multi-threaded acceleration of ORBIT code on CPU and GPU with minimal modifications

    Science.gov (United States)

    Qu, Ante; Ethier, Stephane; Feibush, Eliot; White, Roscoe

    2013-10-01

    The guiding center code ORBIT was originally developed 30 years ago to study the drift orbit effects of charged particles in the strong equilibrium magnetic fields of tokamaks. Today, ORBIT remains a very active tool in magnetic confinement fusion research and continues to adapt to the latest toroidal devices, such as the NSTX-Upgrade, for which it plays a very important role in the study of energetic particle effects. Although the capabilities of ORBIT have improved throughout the years, the code still remains a serial application, which has now become an impediment to the lengthy simulations required for the NSTX-U project. In this work, multi-threaded parallelism is introduced in the core of the code with the goal of achieving the largest performance improvement while minimizing changes made to the source code. To that end, we introduce preprocessor directives in the most compute-intensive parts of the code, which constitute the stable core that seldom changes. Standard OpenMP directives are used for shared-memory CPU multi-threading while newly developed OpenACC (www.openacc.org) directives are used for GPU (Graphical Processing Unit) multi-threading. Implementation details and performance results are presented.

  1. Discrepancy Between Clinician and Research Assistant in TIMI Score Calculation (TRIAGED CPU)

    Directory of Open Access Journals (Sweden)

    Taylor, Brian T.

    2014-11-01

    Introduction: Several studies have attempted to demonstrate that the Thrombolysis in Myocardial Infarction (TIMI) risk score has the ability to risk stratify emergency department (ED) patients with potential acute coronary syndromes (ACS). Most of the studies we reviewed relied on trained research investigators to determine TIMI risk scores rather than ED providers functioning in their normal work capacity. We assessed whether TIMI risk scores obtained by ED providers in the setting of a busy ED differed from those obtained by trained research investigators. Methods: This was an ED-based prospective observational cohort study comparing TIMI scores obtained by 49 ED providers admitting patients to an ED chest pain unit (CPU) to scores generated by a team of trained research investigators. We examined provider type, patient gender, and TIMI elements for their effects on TIMI risk score discrepancy. Results: Of the 501 adult patients enrolled in the study, 29.3% of TIMI risk scores determined by ED providers and trained research investigators were generated using identical TIMI risk score variables. In our low-risk population the majority of TIMI risk score differences were small; however, 12% of TIMI risk scores differed by two or more points. Conclusion: TIMI risk scores determined by ED providers in the setting of a busy ED frequently differ from scores generated by trained research investigators who complete them while not under the same pressure as an ED provider. [West J Emerg Med. 2015;16(1):24–33.]

  2. Discrepancy between clinician and research assistant in TIMI score calculation (TRIAGED CPU).

    Science.gov (United States)

    Taylor, Brian T; Mancini, Michelino

    2015-01-01

    Several studies have attempted to demonstrate that the Thrombolysis in Myocardial Infarction (TIMI) risk score has the ability to risk stratify emergency department (ED) patients with potential acute coronary syndromes (ACS). Most of the studies we reviewed relied on trained research investigators to determine TIMI risk scores rather than ED providers functioning in their normal work capacity. We assessed whether TIMI risk scores obtained by ED providers in the setting of a busy ED differed from those obtained by trained research investigators. This was an ED-based prospective observational cohort study comparing TIMI scores obtained by 49 ED providers admitting patients to an ED chest pain unit (CPU) to scores generated by a team of trained research investigators. We examined provider type, patient gender, and TIMI elements for their effects on TIMI risk score discrepancy. Of the 501 adult patients enrolled in the study, 29.3% of TIMI risk scores determined by ED providers and trained research investigators were generated using identical TIMI risk score variables. In our low-risk population the majority of TIMI risk score differences were small; however, 12% of TIMI risk scores differed by two or more points. TIMI risk scores determined by ED providers in the setting of a busy ED frequently differ from scores generated by trained research investigators who complete them while not under the same pressure of an ED provider.
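    The TIMI risk score for unstable angina/NSTEMI that both groups were computing is a sum of seven equally weighted yes/no elements (0-7), which is why disagreeing on just two history elements shifts the score by two points. A direct sketch (variable names are ours):

```python
# The TIMI risk score for UA/NSTEMI sums seven equally weighted yes/no
# elements, giving 0-7.  Variable names below are ours, not the paper's.

TIMI_ELEMENTS = (
    "age_65_or_older",
    "three_or_more_cad_risk_factors",
    "known_coronary_stenosis_50pct",
    "aspirin_use_past_7_days",
    "severe_angina_2plus_episodes_24h",
    "st_deviation_05mm",
    "elevated_cardiac_markers",
)

def timi_score(findings):
    """findings: dict mapping each element to True/False; returns 0-7."""
    return sum(bool(findings.get(e, False)) for e in TIMI_ELEMENTS)

patient = {"age_65_or_older": True, "aspirin_use_past_7_days": True,
           "elevated_cardiac_markers": True}
score = timi_score(patient)   # 3
```

    Elements that rely on history taking (risk factors, aspirin use, anginal episodes) are exactly where a busy provider and a research assistant can plausibly diverge.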

  3. cuBLASTP: Fine-Grained Parallelization of Protein Sequence Search on CPU+GPU.

    Science.gov (United States)

    Zhang, Jing; Wang, Hao; Feng, Wu-Chun

    2017-01-01

    BLAST, short for Basic Local Alignment Search Tool, is a ubiquitous tool used in the life sciences for pairwise sequence search. However, with the advent of next-generation sequencing (NGS), whether at the outset or downstream from NGS, the exponential growth of sequence databases is outstripping our ability to analyze the data. While recent studies have utilized the graphics processing unit (GPU) to speed up the BLAST algorithm for searching protein sequences (i.e., BLASTP), these studies use coarse-grained parallelism, where one sequence alignment is mapped to only one thread. Such an approach does not efficiently utilize the capabilities of a GPU, particularly due to the irregularity of BLASTP in both execution paths and memory-access patterns. To address the above shortcomings, we present a fine-grained approach to parallelize BLASTP, where each individual phase of sequence search is mapped to many threads on a GPU. This approach, which we refer to as cuBLASTP, reorders data-access patterns and reduces divergent branches of the most time-consuming phases (i.e., hit detection and ungapped extension). In addition, cuBLASTP optimizes the remaining phases (i.e., gapped extension and alignment with trace back) on a multicore CPU and overlaps their execution with the phases running on the GPU.

  4. A hybrid CPU-GPGPU approach for real-time elastography.

    Science.gov (United States)

    Yang, Xu; Deka, Sthiti; Righetti, Raffaella

    2011-12-01

    Ultrasound elastography is becoming a widely available clinical imaging tool. In recent years, several real-time elastography algorithms have been proposed; however, most of these algorithms achieve real-time frame rates through compromises in elastographic image quality. Cross-correlation-based elastographic techniques are known to provide high-quality elastographic estimates, but they are computationally intense and usually not suitable for real-time clinical applications. Recently, the use of massively parallel general purpose graphics processing units (GPGPUs) for accelerating computationally intense operations in biomedical applications has received great interest. In this study, we investigate the use of the GPGPU to speed up generation of cross-correlation-based elastograms and achieve real-time frame rates while preserving elastographic image quality. We propose and statistically analyze performance of a new hybrid model of computation suitable for elastography applications in which sequential code is executed on the CPU and parallel code is executed on the GPGPU. Our results indicate that the proposed hybrid approach yields optimal results and adequately addresses the trade-off between speed and quality.
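    The per-window kernel of cross-correlation elastography is the displacement estimate between pre- and post-compression RF windows; strain then follows as the axial gradient of displacement. A minimal serial sketch of that kernel (windowing and the sub-sample interpolation of a real implementation are omitted):

```python
import numpy as np

# Core of cross-correlation elastography: estimate the axial shift of a
# post-compression RF window relative to the pre-compression one.  This is
# a serial sketch of the operation the paper offloads to the GPGPU.

def estimate_shift(pre, post):
    """Return the lag (in samples) that best aligns post to pre."""
    pre = pre - pre.mean()
    post = post - post.mean()
    corr = np.correlate(post, pre, mode="full")
    return int(np.argmax(corr)) - (len(pre) - 1)

rng = np.random.default_rng(0)
signal = rng.standard_normal(256)        # stand-in for an RF A-line window
shifted = np.roll(signal, 5)             # simulate a 5-sample displacement
lag = estimate_shift(signal, shifted)
```

    In the paper's hybrid model, thousands of such window correlations run in parallel on the GPGPU while the remaining sequential bookkeeping stays on the CPU.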

  5. Pharmacokinetics of CPU0213, a novel endothelin receptor antagonist, after intravenous administration in mice

    Institute of Scientific and Technical Information of China (English)

    Li GUAN; Yu FENG; Min JI; De-zai DAI

    2006-01-01

    Aim: To determine the pharmacokinetics associated with acute toxic doses of CPU0213, a novel endothelin receptor antagonist, in mice after a single intravenous administration. Methods: Serum concentrations and the pharmacokinetic parameters of CPU0213 were assayed by high pressure liquid chromatography (HPLC) following a single intravenous bolus of CPU0213 at doses of 25, 50, and 100 mg/kg in mice. The intravenous acute toxicity of CPU0213 was also assessed in mice. Results: A simple, sensitive, and selective HPLC method was developed for the quantitative determination of CPU0213 in mouse serum. The concentration-time data conform to a two-compartment model after iv administration of CPU0213 at doses of 25, 50, and 100 mg/kg. The corresponding distribution half-lives (T1/2α) were 3.6, 4.2, and 1.1 min, and the elimination half-lives (T1/2β) were 39.4, 70.3, and 61.9 min. There was a linear increase in C0 proportional to dose, and the same held for AUC0-t and AUC0-∞, which were 4.511, 13.070, 23.666 g·min·L⁻¹ and 4.596, 13.679, 24.115 g·min·L⁻¹, respectively. The intravenous LD50 was 315.5 mg/kg. Conclusion: First-order pharmacokinetics were observed for CPU0213 within the range of doses used, and the acute toxicity of CPU0213 is mild.
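    The two-compartment disposition reported above means the serum concentration decays biexponentially, C(t) = A·e^(−αt) + B·e^(−βt), with α = ln 2 / T1/2α and β = ln 2 / T1/2β. A sketch using the half-lives reported for the 25 mg/kg dose; the intercepts A and B are invented, since the record does not give them, and the trapezoidal sum in the sketch approximates the truncated AUC the way AUC0-t is computed from sampled data:

```python
import math

# Two-compartment (biexponential) disposition: C(t) = A*exp(-a*t) + B*exp(-b*t).
# Rate constants come from the half-lives reported for the 25 mg/kg dose
# (T1/2_alpha = 3.6 min, T1/2_beta = 39.4 min); A and B are invented.

t_half_alpha, t_half_beta = 3.6, 39.4            # min, from the record
alpha = math.log(2) / t_half_alpha               # fast distribution rate (1/min)
beta = math.log(2) / t_half_beta                 # slow elimination rate (1/min)
A, B = 80.0, 20.0                                # illustrative intercepts

def conc(t):
    """Serum concentration t minutes after an IV bolus (arbitrary units)."""
    return A * math.exp(-alpha * t) + B * math.exp(-beta * t)

c0 = conc(0.0)                                   # back-extrapolated C0 = A + B
# Trapezoidal AUC from 0 to 240 min; analytic AUC0-inf is A/alpha + B/beta.
ts = [i * 0.5 for i in range(481)]
auc = sum((conc(ts[i]) + conc(ts[i + 1])) * 0.25 for i in range(480))
```

    C0 back-extrapolates to A + B, mirroring the record's observation that C0 and AUC scale linearly with dose under first-order kinetics.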

  6. CPU--Constructing Physics Understanding[TM]. [CD-ROM].

    Science.gov (United States)

    2000

    This CD-ROM consists of simulation software that allows students to conduct countless experiments using 20 Java simulators and curriculum units that explore light and color, forces and motion, sound and waves, static electricity and magnetism, current electricity, the nature of matter, and a unit on underpinnings. Setups can be designed by the…

  7. Comparing the Consumption of CPU Hours with Scientific Output for the Extreme Science and Engineering Discovery Environment (XSEDE).

    Science.gov (United States)

    Knepper, Richard; Börner, Katy

    2016-01-01

    This paper presents the results of a study that compares resource usage with publication output using data about the consumption of CPU cycles from the Extreme Science and Engineering Discovery Environment (XSEDE) and resulting scientific publications for 2,691 institutions/teams. Specifically, the datasets comprise a total of 5,374,032,696 central processing unit (CPU) hours run in XSEDE during July 1, 2011 to August 18, 2015 and 2,882 publications that cite the XSEDE resource. Three types of studies were conducted: a geospatial analysis of XSEDE providers and consumers, co-authorship network analysis of XSEDE publications, and bi-modal network analysis of how XSEDE resources are used by different research fields. Resulting visualizations show that a diverse set of consumers make use of XSEDE resources, that users of XSEDE publish together frequently, and that the users of XSEDE with the highest resource usage tend to be "traditional" high-performance computing (HPC) community members from astronomy, atmospheric science, physics, chemistry, and biology.

  8. Comparing the Consumption of CPU Hours with Scientific Output for the Extreme Science and Engineering Discovery Environment (XSEDE).

    Directory of Open Access Journals (Sweden)

    Richard Knepper

    This paper presents the results of a study that compares resource usage with publication output using data about the consumption of CPU cycles from the Extreme Science and Engineering Discovery Environment (XSEDE) and resulting scientific publications for 2,691 institutions/teams. Specifically, the datasets comprise a total of 5,374,032,696 central processing unit (CPU) hours run in XSEDE during July 1, 2011 to August 18, 2015 and 2,882 publications that cite the XSEDE resource. Three types of studies were conducted: a geospatial analysis of XSEDE providers and consumers, co-authorship network analysis of XSEDE publications, and bi-modal network analysis of how XSEDE resources are used by different research fields. Resulting visualizations show that a diverse set of consumers make use of XSEDE resources, that users of XSEDE publish together frequently, and that the users of XSEDE with the highest resource usage tend to be "traditional" high-performance computing (HPC) community members from astronomy, atmospheric science, physics, chemistry, and biology.

  9. Comparison of GPU- and CPU-implementations of mean-firing rate neural networks on parallel hardware.

    Science.gov (United States)

    Dinkelbach, Helge Ülo; Vitay, Julien; Beuth, Frederik; Hamker, Fred H

    2012-01-01

    Modern parallel hardware such as multi-core processors (CPUs) and graphics processing units (GPUs) has a high computational power which can be greatly beneficial to the simulation of large-scale neural networks. Over the past years, a number of efforts have focused on developing parallel algorithms and simulators best suited for the simulation of spiking neural models. In this article, we aim at investigating the advantages and drawbacks of the CPU and GPU parallelization of mean-firing rate neurons, widely used in systems-level computational neuroscience. By comparing OpenMP, CUDA, and OpenCL implementations against a serial CPU implementation, we show that GPUs are better suited than CPUs for the simulation of very large networks, but that smaller networks benefit more from an OpenMP implementation. As this performance strongly depends on data organization, we analyze the impact of various factors such as data structure, memory alignment, and floating-point precision. We then discuss the suitability of the different hardware depending on the networks' size and connectivity, as random or sparse connectivities in mean-firing rate networks tend to break parallel performance on GPUs due to the violation of coalescence.
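The kind of computation being parallelized can be illustrated with a serial, vectorized mean-firing rate update. This is a generic sketch (the network size, time constant, weight scale, and rectified-linear transfer function are illustrative assumptions, not the simulator's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512                        # network size (illustrative)
tau, dt = 10.0, 1.0            # time constant and Euler step (ms)
W = rng.normal(0.0, 0.5 / np.sqrt(N), size=(N, N))  # dense random weights
I_ext = rng.uniform(0.0, 1.0, size=N)               # constant external drive

def step(r):
    # One Euler step of tau * dr/dt = -r + f(W r + I), with rectified-linear f
    drive = W @ r + I_ext
    return r + (dt / tau) * (-r + np.maximum(drive, 0.0))

r = np.zeros(N)
for _ in range(200):
    r = step(r)
```

An OpenMP or CUDA version would parallelize the matrix-vector product `W @ r`, which dominates the cost and whose memory layout drives the coalescence effects discussed above.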

  10. Research on Programming Based on CPU/GPU Heterogeneous Computing Clusters

    Institute of Scientific and Technical Information of China (English)

    刘钢锋

    2013-01-01

    With the rapid development of microprocessor technology, hybrid CPU/GPU computing has become a mainstream trend in scientific computing. From a programming perspective, this paper describes how existing parallel programming languages can be used to schedule GPU computation. GPU cluster programs are tested mainly by combining MPI (Message Passing Interface, a message-passing programming model) with the GPU-based CUDA (Compute Unified Device Architecture) programming model, and the runtime characteristics of parallel programs in a CPU/GPU cluster environment are analyzed. From these characteristics, an optimization strategy for GPU clusters is derived, providing a scientific basis for improving the performance of parallel CPU/GPU programs.
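The MPI-plus-CUDA pattern described here typically begins by statically splitting the workload across MPI ranks, each of which then offloads its share to a local GPU. A minimal sketch of such a block decomposition (a hypothetical helper, not code from the paper):

```python
def block_partition(n_items, n_ranks, rank):
    """Return the [start, stop) range of work items owned by `rank`
    under a contiguous block decomposition with near-equal shares."""
    base, extra = divmod(n_items, n_ranks)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return start, stop

# Each rank computes its own range; together the ranges tile 0..n_items exactly.
ranges = [block_partition(10, 3, r) for r in range(3)]
```

In an mpi4py program, each rank would call this with its own `rank` and then launch CUDA kernels over its slice.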

  11. Differences in Middle School TCAP Writing Assessment Scores Based on Keyboarding Skill

    Science.gov (United States)

    Parker, Carol A.

    2016-01-01

    The purpose of this study was to determine if there was a difference in the writing assessment scores for each of the four traits--development, focus and organization, language, and conventions--as measured by the Tennessee Comprehensive Assessment Program (TCAP) of students who had a formal keyboarding course compared to those who did not. A…

  12. Effects of Microcomputer versus Electric Element Typewriter Instruction on Straight Copy and Production Keyboarding Performance.

    Science.gov (United States)

    Davison, Leslie J.

    1990-01-01

    One group of secondary keyboarding students was taught on typewriters and switched to microcomputers after six weeks; the other used microcomputers first, then typewriters. Using computers, students showed faster completion times and fewer typographical errors. Transfer from computers to typewriters slowed times and increased errors. Overall,…

  13. The Meaning of Learning Piano Keyboard in the Lives of Older Chinese People

    Science.gov (United States)

    Li, Sicong; Southcott, Jane

    2015-01-01

    Across the globe populations are ageing and living longer. Older people seek meaningful ways of occupying and enjoying their later years. Frequently, this takes the form of learning a new skill, in this case playing the piano keyboard. From the initial act of commitment to learning comes a raft of related aspects that influence the learner, their…

  14. CLSI On-Line Public Catalog Keyboard Terminal Manual: Training Manual.

    Science.gov (United States)

    California State Univ., Chico.

    This training manual developed by the Public Access Subcommittee of the Reference Department of Meriam Library (California State University, Chico) provides instructions for using the library's online public catalog by means of a keyboard terminal. An introduction describes the Boolean searching capability of the online catalog and gives examples…

  15. Metronome LKM: An open source virtual keyboard driver to measure experiment software latencies.

    Science.gov (United States)

    Garaizar, Pablo; Vadillo, Miguel A

    2017-09-15

    Experiment software is often used to measure reaction times gathered with keyboards or other input devices. In previous studies, the accuracy and precision of time stamps has been assessed through several means: (a) generating accurate square wave signals from an external device connected to the parallel port of the computer running the experiment software, (b) triggering the typematic repeat feature of some keyboards to get an evenly separated series of keypress events, or (c) using a solenoid handled by a microcontroller to press the input device (keyboard, mouse button, touch screen) that will be used in the experimental setup. Despite the advantages of these approaches in some contexts, none of them can isolate the measurement error caused by the experiment software itself. Metronome LKM provides a virtual keyboard to assess an experiment's software. Using this open source driver, researchers can generate keypress events using high-resolution timers and compare the time stamps collected by the experiment software with those gathered by Metronome LKM (with nanosecond resolution). Our software is highly configurable (in terms of keys pressed, intervals, SysRq activation) and runs on 2.6-4.8 Linux kernels.
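The comparison Metronome LKM enables can be sketched as follows: given the instants at which the driver generated keypress events and the timestamps the experiment software logged, the software-induced latency and jitter fall out directly. The numbers below are made-up illustrations, not measurements:

```python
# Hypothetical timestamps in seconds: when the virtual driver generated
# each keypress vs. when the experiment software recorded it.
generated = [0.000, 0.100, 0.200, 0.300, 0.400]
recorded = [0.0031, 0.1029, 0.2034, 0.3027, 0.4030]

# Per-event software latency in milliseconds, plus summary statistics.
latencies_ms = [(r - g) * 1000.0 for g, r in zip(generated, recorded)]
mean_ms = sum(latencies_ms) / len(latencies_ms)
jitter_ms = max(latencies_ms) - min(latencies_ms)
```

Because the driver's event times are known with nanosecond resolution, any systematic offset or spread in `latencies_ms` is attributable to the experiment software itself.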

  18. Rater reliability and concurrent validity of the Keyboard Personal Computer Style instrument (K-PeCS).

    Science.gov (United States)

    Baker, Nancy A; Cook, James R; Redfern, Mark S

    2009-01-01

    This paper describes the inter-rater and intra-rater reliability, and the concurrent validity of an observational instrument, the Keyboard Personal Computer Style instrument (K-PeCS), which assesses stereotypical postures and movements associated with computer keyboard use. Three trained raters independently rated the video clips of 45 computer keyboard users to ascertain inter-rater reliability, and then re-rated a sub-sample of 15 video clips to ascertain intra-rater reliability. Concurrent validity was assessed by comparing the ratings obtained using the K-PeCS to scores developed from a 3D motion analysis system. The overall K-PeCS had excellent reliability [inter-rater: intra-class correlation coefficients (ICC)=.90; intra-rater: ICC=.92]. Most individual items on the K-PeCS had from good to excellent reliability, although six items fell below ICC=.75. Those K-PeCS items that were assessed for concurrent validity compared favorably to the motion analysis data for all but two items. These results suggest that most items on the K-PeCS can be used to reliably document computer keyboarding style.

  19. Comparison of Pen and Keyboard Transcription Modes in Children with and without Learning Disabilities

    Science.gov (United States)

    Berninger, Virginia W.; Abbott, Robert D.; Augsburger, Amy; Garcia, Noelia

    2009-01-01

    Fourth graders with learning disabilities in transcription (handwriting and spelling; LD-TD) and without them (non-LD) were compared on three writing tasks (letters, sentences, and essays), which differed by level of language, when writing by pen and by keyboard. The two groups did not differ significantly in Verbal IQ but did in handwriting,…

  20. What skilled typists don't know about the QWERTY keyboard.

    Science.gov (United States)

    Snyder, Kristy M; Ashitaka, Yuki; Shimada, Hiroyuki; Ulrich, Jana E; Logan, Gordon D

    2014-01-01

    We conducted four experiments to investigate skilled typists' explicit knowledge of the locations of keys on the QWERTY keyboard, with three procedures: free recall (Exp. 1), cued recall (Exp. 2), and recognition (Exp. 3). We found that skilled typists' explicit knowledge of key locations is incomplete and inaccurate. The findings are consistent with theories of skilled performance and automaticity that associate implicit knowledge with skilled performance and explicit knowledge with novice performance. In Experiment 4, we investigated whether novice typists acquire more complete explicit knowledge of key locations when learning to touch-type. We had skilled QWERTY typists complete a Dvorak touch-typing tutorial. We then tested their explicit knowledge of the Dvorak and QWERTY key locations with the free recall task. We found no difference in explicit knowledge of the two keyboards, suggesting that typists know little about key locations on the keyboard, whether they are exposed to the keyboard for 2 h or 12 years.

  1. Size effects on the touchpad, touchscreen, and keyboard tasks of netbooks.

    Science.gov (United States)

    Lai, Chih-Chun; Wu, Chih-Fu

    2012-10-01

    The size of a netbook plays an important role in its success. Somehow, the viewing area on screen and ability to type fast were traded off for portability. To further investigate, this study compared the performances of different-sized touchpads, touchscreens, and keyboards of four-sized netbooks for five application tasks. Consequently, the 7" netbook was significantly slower than larger netbooks in all the tasks except the 8.9" netbook touchpad (successive selecting and clicking) or keyboard tasks. Differences were non-significant for the operating times among the 8.9", 10.1", and 11.6" netbooks in all the tasks except between the 8.9" and 11.6" netbooks in keyboards tasks. For error rates, device-type effects rather than size effects were significant. Gender effects were not significant for operating times in all the tasks but for error rates in touchscreen (multi-direction touching) and keyboard tasks. Considering size effects, the 10.1" netbooks seemed to optimally balance between portability and productivity.

  2. Looking at the Keyboard or the Monitor: Relationship with Text Production Processes

    Science.gov (United States)

    Johansson, Roger; Wengelin, Asa; Johansson, Victoria; Holmqvist, Kenneth

    2010-01-01

    In this paper we explored text production differences in an expository text production task between writers who looked mainly at the keyboard and writers who looked mainly at the monitor. Eye-tracking technology and keystroke-logging were combined to systematically describe and define these two groups in respect of the complex interplay between…

  3. Understanding What It Means for Older Students to Learn Basic Musical Skills on a Keyboard Instrument

    Science.gov (United States)

    Taylor, Angela; Hallam, Susan

    2008-01-01

    Although many adults take up or return to instrumental and vocal tuition every year, we know very little about how they experience it. As part of ongoing case study research, eight older learners with modest keyboard skills explored what their musical skills meant to them during conversation-based repertory grid interviews. The data were…

  4. Children Composing and Their Visual-Spatial Approach to the Keyboard

    Science.gov (United States)

    Roels, Johanna Maria; Van Petegem, Peter

    2015-01-01

    This study aims to contribute to the already existing findings of children's compositional strategies and products. Despite the abundance of research provided regarding the manner in which children approach composing, little has been found about how children deal, specifically, with the structure of the keyboard. Therefore, from a context in which…

  5. Inhibition of CPU0213, a Dual Endothelin Receptor Antagonist, on Apoptosis via Nox4-Dependent ROS in HK-2 Cells

    Directory of Open Access Journals (Sweden)

    Qing Li

    2016-06-01

    Background/Aims: Our previous studies have indicated that the novel endothelin receptor antagonist CPU0213 effectively normalized renal function in diabetic nephropathy. However, the molecular mechanisms mediating the nephroprotective role of CPU0213 remain unknown. Methods and Results: In the present study, we first examined the effect of CPU0213 on apoptosis in human renal tubular epithelial cells (HK-2). High glucose significantly increased Bax protein expression and decreased Bcl-2 protein in HK-2 cells, effects that were reversed by CPU0213. The percentage of HK-2 cells showing Annexin V-FITC binding was markedly suppressed by CPU0213, confirming its inhibitory role in apoptosis. Given that the endothelin (ET) system regulates oxidative stress, we determined the role of redox signaling in the effect of CPU0213 on apoptosis. The production of superoxide (O2·−) was substantially attenuated by CPU0213 treatment in HK-2 cells. We further found that CPU0213 dramatically inhibited expression of Nox4 protein, and that Nox4 gene silencing mimicked the effect of CPU0213 on apoptosis under high glucose stimulation. Finally, we examined the effect of CPU0213 on ET-1 receptors and found that high glucose-induced protein expression of endothelin A and B receptors was dramatically inhibited by CPU0213. Conclusion: Taken together, these results suggest that Nox4-dependent O2·− production is critical for the apoptosis of HK-2 cells in high glucose. The endothelin receptor antagonist CPU0213 exerts an anti-apoptotic effect through Nox4-dependent O2·− production, which underlies its nephroprotective role in diabetic nephropathy.

  6. A hybrid stepping motor system with dual CPU

    Institute of Scientific and Technical Information of China (English)

    高晗璎; 赵克; 孙力

    2004-01-01

    An indirect method of measuring the rotor position based on the magnetic reluctance variation is presented in this paper. A single-chip microprocessor 80C196KC is utilized to compensate the phase shift produced by the processing of the position signals. At the same time, a DSP (digital signal processor) unit is used to realize the speed and current closed loops of the hybrid stepping motor system. Finally, experimental results show that the control system has excellent static and dynamic characteristics.

  7. GENIE: a software package for gene-gene interaction analysis in genetic association studies using multiple GPU or CPU cores.

    Science.gov (United States)

    Chikkagoudar, Satish; Wang, Kai; Li, Mingyao

    2011-05-26

    Gene-gene interaction in genetic association studies is computationally intensive when a large number of SNPs are involved. Most of the latest Central Processing Units (CPUs) have multiple cores, whereas Graphics Processing Units (GPUs) also have hundreds of cores and have been recently used to implement faster scientific software. However, currently there are no genetic analysis software packages that allow users to fully utilize the computing power of these multi-core devices for genetic interaction analysis for binary traits. Here we present a novel software package GENIE, which utilizes the power of multiple GPU or CPU processor cores to parallelize the interaction analysis. GENIE reads an entire genetic association study dataset into memory and partitions the dataset into fragments with non-overlapping sets of SNPs. For each fragment, GENIE analyzes: 1) the interaction of SNPs within it in parallel, and 2) the interaction between the SNPs of the current fragment and other fragments in parallel. We tested GENIE on a large-scale candidate gene study on high-density lipoprotein cholesterol. Using an NVIDIA Tesla C1060 graphics card, the GPU mode of GENIE achieves a speedup of 27 times over its single-core CPU mode run. GENIE is open-source, economical, user-friendly, and scalable. Since the computing power and memory capacity of graphics cards are increasing rapidly while their cost is going down, we anticipate that GENIE will achieve greater speedups with faster GPU cards. Documentation, source code, and precompiled binaries can be downloaded from http://www.cceb.upenn.edu/~mli/software/GENIE/.
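GENIE's partitioning strategy, splitting the SNPs into non-overlapping fragments and then analyzing within-fragment and between-fragment pairs, can be sketched in a few lines. This is a structural illustration only (the function names and fragment size are assumptions, not GENIE's API):

```python
from itertools import combinations

def partition(snps, fragment_size):
    """Split the SNP list into non-overlapping fragments."""
    return [snps[i:i + fragment_size] for i in range(0, len(snps), fragment_size)]

def interaction_pairs(fragments):
    """Enumerate every SNP pair exactly once: within each fragment,
    then between each pair of fragments."""
    pairs = []
    for frag in fragments:
        pairs.extend(combinations(frag, 2))        # within-fragment pairs
    for fa, fb in combinations(fragments, 2):      # between-fragment pairs
        pairs.extend((a, b) for a in fa for b in fb)
    return pairs

snps = [f"rs{i}" for i in range(10)]
pairs = interaction_pairs(partition(snps, 4))
```

Both loops are embarrassingly parallel, which is what lets GENIE fan the pairwise tests out over many GPU or CPU cores.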

  8. SwiftLink: parallel MCMC linkage analysis using multicore CPU and GPU.

    Science.gov (United States)

    Medlar, Alan; Głowacka, Dorota; Stanescu, Horia; Bryson, Kevin; Kleta, Robert

    2013-02-15

    Linkage analysis remains an important tool in elucidating the genetic component of disease and has become even more important with the advent of whole exome sequencing, enabling the user to focus on only those genomic regions co-segregating with Mendelian traits. Unfortunately, methods to perform multipoint linkage analysis scale poorly with either the number of markers or with the size of the pedigree. Large pedigrees with many markers can only be evaluated with Markov chain Monte Carlo (MCMC) methods that are slow to converge and, as no attempts have been made to exploit parallelism, massively underuse available processing power. Here, we describe SWIFTLINK, a novel application that performs MCMC linkage analysis by spreading the computational burden between multiple processor cores and a graphics processing unit (GPU) simultaneously. SWIFTLINK was designed around the concept of explicitly matching the characteristics of an algorithm with the underlying computer architecture to maximize performance. We implement our approach using existing Gibbs samplers redesigned for parallel hardware. We applied SWIFTLINK to a real-world dataset, performing parametric multipoint linkage analysis on a highly consanguineous pedigree with EAST syndrome, containing 28 members, where a subset of individuals were genotyped with single nucleotide polymorphisms (SNPs). In our experiments with a four core CPU and GPU, SWIFTLINK achieves an 8.5× speed-up over the single-threaded version and a 109× speed-up over the popular linkage analysis program SIMWALK. SWIFTLINK is available at https://github.com/ajm/swiftlink. All source code is licensed under GPLv3.

  9. MOIL-opt: Energy-Conserving Molecular Dynamics on a GPU/CPU system.

    Science.gov (United States)

    Ruymgaart, A Peter; Cardenas, Alfredo E; Elber, Ron

    2011-08-26

    We report an optimized version of the molecular dynamics program MOIL that runs on a shared memory system with OpenMP and exploits the power of a graphics processing unit (GPU). The model is a heterogeneous computing system on a single node, with several cores sharing the same memory and a GPU. This is a typical laboratory tool, which provides excellent performance at minimal cost. Besides performance, emphasis is placed on the accuracy and stability of the algorithm, probed by energy conservation for explicit-solvent, atomically detailed models. Energy conservation is critical especially for long simulations, due to the phenomenon known as "energy drift", in which energy errors accumulate linearly as a function of simulation time. To achieve long-time dynamics with acceptable accuracy the drift must be particularly small. We identify several means of controlling long-time numerical accuracy while maintaining excellent speedup. To maintain a high level of energy conservation, SHAKE and the Ewald reciprocal summation are run in double precision. Double-precision summation of real-space non-bonded interactions further improves energy conservation. In our best option, the energy drift using a 1 fs time step while constraining the distances of all bonds is undetectable in a 10 ns simulation of solvated DHFR (dihydrofolate reductase). Faster options, shaking only bonds with hydrogen atoms, are also very well behaved and have drifts of less than 1 kcal/mol per nanosecond for the same system. CPU/GPU implementations require changes in programming models. We consider the use of a list of neighbors and quadratic versus linear interpolation in lookup tables of different sizes. Quadratic interpolation with a smaller number of grid points is faster than linear lookup tables (with finer representation) without loss of accuracy. Atomic neighbor lists were found most efficient. Typical speedups are about a factor of 10 compared to a single-core single-precision code.
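Energy drift, as used above, is commonly quantified as the slope of a least-squares line fitted to total energy versus simulation time. A small sketch with fabricated numbers (the data are illustrative, not from the MOIL runs):

```python
def drift_slope(times_ns, energies):
    """Least-squares slope of total energy vs. time; with energies in
    kcal/mol and times in ns, the slope is the drift in kcal/mol per ns."""
    n = len(times_ns)
    mt = sum(times_ns) / n
    me = sum(energies) / n
    num = sum((t - mt) * (e - me) for t, e in zip(times_ns, energies))
    den = sum((t - mt) ** 2 for t in times_ns)
    return num / den

# Fabricated trajectory: energy rises 0.1 kcal/mol every 2.5 ns.
times = [0.0, 2.5, 5.0, 7.5, 10.0]
energies = [-1000.0, -999.9, -999.8, -999.7, -999.6]
```

Fitting a line rather than taking endpoint differences averages out the short-time energy fluctuations that would otherwise mask a small drift.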

  10. An Investigation of the Performance of the Colored Gauss-Seidel Solver on CPU and GPU

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jong Seon; Choi, Hyoung Gwon [Seoul Nat’l Univ. of Science and Technology, Seoul (Korea, Republic of); Jeon, Byoung Jin [Yonsei Univ., Seoul (Korea, Republic of)

    2017-02-15

    The performance of the colored Gauss–Seidel solver on CPU and GPU was investigated for two- and three-dimensional heat conduction problems using different mesh sizes. The heat conduction equation was discretized by the finite difference method and the finite element method. The CPU yielded good performance for small problems but deteriorated when the total memory required for computing exceeded the cache memory for large problems. In contrast, the GPU performed better as the mesh size increased because of its latency-hiding capability. Further, GPU computation with the colored Gauss–Seidel solver was approximately 7 times faster than that with a single CPU. Furthermore, on the GPU the colored Gauss–Seidel solver was found to be approximately twice as fast as the Jacobi solver when parallel computing was conducted.
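The point of coloring is that a red-black (two-color) ordering makes all same-color updates independent, which is what exposes the parallelism the GPU exploits. A serial sketch of red-black Gauss-Seidel for the 2D Laplace (steady heat conduction) problem, with illustrative grid size and boundary values:

```python
import numpy as np

def colored_gauss_seidel(u, iters):
    """Red-black Gauss-Seidel sweeps for the 2D Laplace equation with
    fixed (Dirichlet) boundaries; interior nodes are updated in place."""
    for _ in range(iters):
        for color in (0, 1):  # all nodes of one color are independent
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    if (i + j) % 2 == color:
                        u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                          + u[i, j - 1] + u[i, j + 1])
    return u

n = 16
u = np.zeros((n, n))
u[0, :] = 1.0          # hot top boundary, cold elsewhere
u = colored_gauss_seidel(u, 500)
```

On a GPU, each color sweep becomes one fully parallel kernel launch, since no two nodes of the same color read each other's values.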

  11. Experimental and Transient Thermal Analysis of Heat Sink Fin for CPU processor for better performance

    Science.gov (United States)

    Ravikumar, S.; Subash Chandra, Parisaboina; Harish, Remella; Sivaji, Tallapaneni

    2017-05-01

    Digital computers are advancing rapidly and their utilization is increasing day by day, but the reliability of electronic components is critically affected by the temperature at which the junction operates. Designers are forced to shorten overall system dimensions while extracting heat and controlling temperature, which motivates studies of electronic cooling. In this project, thermal analysis is carried out with the commercial package ANSYS, and the geometric variables and heat sink design for improving thermal performance are examined experimentally. The project uses thermal analysis to identify a cooling solution for a desktop computer with a 5 W CPU. The design shows that a heat sink attached to the CPU is adequate to cool the whole system. This work considers circular cylindrical pin fin and rectangular plate heat sink fin designs with an aluminium base plate, and the control of CPU heat sink processes.
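The steady-state part of such an analysis reduces to a thermal-resistance estimate: junction temperature equals ambient temperature plus dissipated power times the junction-to-ambient thermal resistance. The sketch below uses this textbook relation with illustrative numbers (the 4 °C/W resistance is an assumption, not a value from the study):

```python
def junction_temp(ambient_c, power_w, r_th_c_per_w):
    """Steady-state junction temperature from the thermal-resistance model:
    T_j = T_ambient + P * R_th(junction-to-ambient)."""
    return ambient_c + power_w * r_th_c_per_w

# A 5 W CPU with a heat sink giving 4 C/W total resistance, at 25 C ambient:
t_j = junction_temp(25.0, 5.0, 4.0)
```

A transient analysis, as in the study, additionally tracks how quickly the junction approaches this steady-state value.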

  12. Accelerating Large Scale Image Analyses on Parallel, CPU-GPU Equipped Systems.

    Science.gov (United States)

    Teodoro, George; Kurc, Tahsin M; Pan, Tony; Cooper, Lee A D; Kong, Jun; Widener, Patrick; Saltz, Joel H

    2012-05-01

    The past decade has witnessed a major paradigm shift in high performance computing with the introduction of accelerators as general purpose processors. These computing devices make available very high parallel computing power at low cost and power consumption, transforming current high performance platforms into heterogeneous CPU-GPU equipped systems. Although the theoretical performance achieved by these hybrid systems is impressive, taking practical advantage of this computing power remains a very challenging problem. Most applications are still deployed to either GPU or CPU, leaving the other resource under- or un-utilized. In this paper, we propose, implement, and evaluate a performance aware scheduling technique along with optimizations to make efficient collaborative use of CPUs and GPUs on a parallel system. In the context of feature computations in large scale image analysis applications, our evaluations show that intelligently co-scheduling CPUs and GPUs can significantly improve performance over GPU-only or multi-core CPU-only approaches.
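A performance-aware scheduler of the sort described can be sketched with an earliest-completion-time heuristic: each task goes to whichever device, given its relative speed, would finish the task soonest. The device speeds and task sizes below are illustrative; a real system would profile the actual operations:

```python
def co_schedule(tasks, speeds):
    """Greedy earliest-completion-time assignment of tasks to devices.
    `speeds` maps device name -> relative throughput; returns the makespan
    and the per-device task assignment."""
    free = {dev: 0.0 for dev in speeds}        # when each device is next free
    assignment = {dev: [] for dev in speeds}
    for work in sorted(tasks, reverse=True):   # place the largest tasks first
        dev = min(speeds, key=lambda d: free[d] + work / speeds[d])
        free[dev] += work / speeds[dev]
        assignment[dev].append(work)
    return max(free.values()), assignment

# A GPU 4x faster than the CPU, sharing four tasks of unequal size:
makespan, assignment = co_schedule([8.0, 4.0, 2.0, 1.0],
                                   {"cpu": 1.0, "gpu": 4.0})
```

The large tasks land on the GPU while the CPU absorbs the small ones, so both devices finish together, beating either single-device schedule.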

  13. A Mechanism That Bounds Execution Performance for Process Group for Mitigating CPU Abuse

    Science.gov (United States)

    Yamauchi, Toshihiro; Hara, Takayuki; Taniguchi, Hideo

    Secure OS has been the focus of several studies. However, CPU resources, which are important resources for executing a program, are not the object of access control. For preventing the abuse of CPU resources, we had earlier proposed a new type of execution resource that controls the maximum CPU usage [5,6]. The previously proposed mechanism can control only one process at a time. Because most services involve multiple processes, the mechanism should control all the processes in each service. In this paper, we propose an improved mechanism that helps to achieve a bound on the execution performance of a process group, in order to limit unnecessary processor usage. We report the results of an evaluation of our proposed mechanism.
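The bound itself can be thought of as a running budget: over any observation window, the group's accumulated CPU time may not exceed a fixed fraction of elapsed wall-clock time. A toy accounting sketch, not the paper's kernel mechanism:

```python
def remaining_budget(used_cpu_s, wall_s, cap_fraction):
    """CPU seconds the process group may still consume at this instant
    without exceeding `cap_fraction` of elapsed wall-clock time."""
    return max(0.0, wall_s * cap_fraction - used_cpu_s)

# A group capped at 20% CPU: after 10 s of wall time and 1 s of CPU time,
# it may still run for 1 more CPU second before hitting the bound.
budget = remaining_budget(1.0, 10.0, 0.2)
```

A scheduler enforcing such a bound would stop dispatching the group's processes whenever the remaining budget reaches zero and resume them as wall-clock time accrues.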

  14. Accelerated event-by-event Monte Carlo microdosimetric calculations of electrons and protons tracks on a multi-core CPU and a CUDA-enabled GPU.

    Science.gov (United States)

    Kalantzis, Georgios; Tachibana, Hidenobu

    2014-01-01

    For microdosimetric calculations, event-by-event Monte Carlo (MC) methods are considered the most accurate. The main shortcoming of those methods is the extensive requirement for computational time. In this work we present an event-by-event MC code of low projectile-energy electron and proton tracks for accelerated microdosimetric MC simulations on a graphics processing unit (GPU). Additionally, a hybrid implementation scheme was realized by employing OpenMP and CUDA in such a way that both the GPU and the multi-core CPU were utilized simultaneously. The two implementation schemes have been tested and compared with the sequential single-threaded MC code on the CPU. Performance comparison was established on the speed-up for a set of benchmarking cases of electron and proton tracks. A maximum speedup of 67.2 was achieved for the GPU-based MC code, and the hybrid approach improved the speedup by a further 20%. The results indicate the capability of our CPU-GPU implementation for accelerated MC microdosimetric calculations of both electron and proton tracks without loss of accuracy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  15. Validity of questionnaire self-reports on computer, mouse and keyboard usage during a four-week period

    DEFF Research Database (Denmark)

    Mikkelsen, S.; Vilstrup, Imogen; Lassen, C. F.

    2007-01-01

    OBJECTIVE: To examine the validity and potential biases in self-reports of computer, mouse and keyboard usage times, compared with objective recordings. METHODS: A study population of 1211 people was asked in a questionnaire to estimate the average time they had worked with computer, mouse and keyboard during the past four working weeks. During the same period, a software program recorded these activities objectively. The study was part of a one-year follow-up study from 2000-1 of musculoskeletal outcomes among Danish computer workers. RESULTS: Self-reports on computer, mouse and keyboard usage times were positively associated with objectively measured activity, but the validity was low. Self-reports explained only between a quarter and a third of the variance of objectively measured activity, and even less for one measure (keyboard time). Self-reports overestimated usage times…

  16. Do you know where your fingers have been? Explicit knowledge of the spatial layout of the keyboard in skilled typists.

    Science.gov (United States)

    Liu, Xianyun; Crump, Matthew J C; Logan, Gordon D

    2010-06-01

    Two experiments evaluated skilled typists' ability to report knowledge about the layout of keys on a standard keyboard. In Experiment 1, subjects judged the relative direction of letters on the computer keyboard. One group of subjects was asked to imagine the keyboard, one group was allowed to look at the keyboard, and one group was asked to type the letter pair before judging relative direction. The imagine group had larger angular error and longer response time than both the look and touch groups. In Experiment 2, subjects placed one key relative to another. Again, the imagine group had larger angular error, larger distance error, and longer response time than the other groups. The two experiments suggest that skilled typists have poor explicit knowledge of key locations. The results are interpreted in terms of a model with two hierarchical parts in the system controlling typewriting.
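The relative-direction judgments used in these experiments have a simple geometric ground truth: each key pair defines a bearing on the keyboard plane. The sketch below computes it from approximate row/column coordinates (the coordinates are rough illustrations that ignore the physical row stagger):

```python
import math

# Approximate (row, column) grid positions for a few QWERTY keys;
# row 0 is the top letter row, and stagger is ignored for simplicity.
QWERTY = {"Q": (0, 0), "W": (0, 1), "A": (1, 0), "Z": (2, 0), "P": (0, 9)}

def direction_deg(a, b):
    """Bearing of key b as seen from key a, in degrees
    (0 = to the right, 90 = up, -90 = down)."""
    (ra, ca), (rb, cb) = QWERTY[a], QWERTY[b]
    return math.degrees(math.atan2(ra - rb, cb - ca))
```

Subjects' judged directions can then be scored as angular error against this ground truth, the dependent measure reported in the experiments.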

  17. High Speed 3D Tomography on CPU, GPU, and FPGA

    Directory of Open Access Journals (Sweden)

    GAC Nicolas

    2008-01-01

    Back-projection (BP) is a costly computational step in tomography image reconstruction such as positron emission tomography (PET). To reduce the computation time, this paper presents a pipelined, prefetch, and parallelized architecture for PET BP (3PA-PET). The key feature of this architecture is its original memory access strategy, which masks the high latency of the external memory. Indeed, the pattern of memory references to the acquired data hinders the processing unit. The memory access bottleneck is overcome by an efficient use of the intrinsic temporal and spatial locality of the BP algorithm. A loop reordering allows an efficient use of general-purpose processors' caches for software implementation, as well as the 3D predictive and adaptive cache (3D-AP cache) when considering hardware implementations. Parallel hardware pipelines are also efficient thanks to a hierarchical 3D-AP cache: each pipeline performs a memory reference in about one clock cycle to reach a computational throughput close to 100%. The 3PA-PET architecture is prototyped on a system on programmable chip (SoPC) to validate the system and to measure its expected performance. Time performances are compared with a desktop PC, a workstation, and a graphics processing unit (GPU).

  18. High Speed 3D Tomography on CPU, GPU, and FPGA

    Directory of Open Access Journals (Sweden)

    Dominique Houzet

    2009-02-01

    Full Text Available Back-projection (BP) is a costly computational step in tomography image reconstruction such as positron emission tomography (PET). To reduce the computation time, this paper presents a pipelined, prefetched, and parallelized architecture for PET BP (3PA-PET). The key feature of this architecture is its original memory access strategy, which masks the high latency of the external memory. Indeed, the pattern of memory references to the acquired data hinders the processing unit. The memory access bottleneck is overcome by efficient use of the intrinsic temporal and spatial locality of the BP algorithm. A loop reordering allows an efficient use of general-purpose processors' caches for the software implementation, as well as of the 3D predictive and adaptive cache (3D-AP cache) for hardware implementations. Parallel hardware pipelines are also efficient thanks to a hierarchical 3D-AP cache: each pipeline performs a memory reference in about one clock cycle, reaching a computational throughput close to 100%. The 3PA-PET architecture is prototyped on a system on programmable chip (SoPC) to validate the system and measure its expected performance. Time performance is compared with a desktop PC, a workstation, and a graphics processing unit (GPU).

  19. New Multithreaded Hybrid CPU/GPU Approach to Hartree-Fock.

    Science.gov (United States)

    Asadchev, Andrey; Gordon, Mark S

    2012-11-13

    In this article, a new multithreaded Hartree-Fock CPU/GPU method is presented which utilizes automatically generated code and modern C++ techniques to achieve a significant improvement in memory usage and computer time. In particular, the newly implemented Rys Quadrature and Fock Matrix algorithms, implemented as a stand-alone C++ library with C and Fortran bindings, provide up to a 40% improvement over the traditional Fortran Rys Quadrature. The C++ GPU HF code is approximately a factor of 17.5 faster than the corresponding C++ CPU code.

  20. Effect of horizontal position of the computer keyboard on upper extremity posture and muscular load during computer work.

    Science.gov (United States)

    Kotani, K; Barrero, L H; Lee, D L; Dennerlein, J T

    2007-09-01

    The distance of the keyboard from the edge of a work surface has been associated with hand and arm pain; however, the variation in postural and muscular effects with horizontal position has not been explicitly explored in previous studies. It was hypothesized that the wrist approaches a more neutral posture as the keyboard distance from the edge of the table increases. In a laboratory setting, 20 adults completed computer tasks using four workstation configurations: with the keyboard at the edge of the work surface (NEAR), 8 cm from the edge, and 15 cm from the edge, the latter condition also with a pad that raised the work surface proximal to the keyboard (FWP). Electrogoniometers and an electromagnetic motion analysis system measured wrist and upper arm postures, and surface electromyography measured muscle activity of two forearm and two shoulder muscles. Wrist ulnar deviation decreased by 50% (4 degrees) as the keyboard position moved away from the user. Without a pad, wrist extension increased by 20% (4 degrees) as the keyboard moved away, but when the pad was added, wrist extension did not differ from that in the NEAR configuration. Median values of wrist extensor muscle activity decreased by 4% of maximum voluntary contraction for the farthest position with a pad (FWP). The upper arm followed suit: flexion increased while abduction and internal rotation decreased as the keyboard was positioned further from the edge of the table. To achieve neutral postures of the upper extremity, the horizontal position of the keyboard plays an important role and needs to be considered within the context of workstation designs and interventions.

  1. Design of virtual keyboard using blink control method for the severely disabled.

    Science.gov (United States)

    Yang, Shih-Wei; Lin, Chern-Sheng; Lin, Shir-Kuan; Lee, Chi-Hung

    2013-08-01

    In this paper, a human-machine interface based on the concept of "blink control" is proposed. The interface is applied to an assistive device, a "blink scanning keyboard", designed specifically for the severely physically disabled and for people suffering from motor neuron diseases or severe cerebral palsy. A pseudo-electromyography (EMG) signal from blinking is acquired by wearing a Bluetooth headset with one sensor on the forehead and the other three on the user's left ear. A conscious blink produces a clear and immediate variation in the pseudo-EMG signal from the user's forehead. This variation is detected and filtered by the algorithms proposed in this paper, acting as a trigger to activate the functions integrated in the scanning keyboard. The severely physically and visually disabled can then operate the proposed design by simply blinking their eyes, thus communicating with the outside world.
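
The trigger idea, a conscious blink producing a sharp variation in the pseudo-EMG trace, can be sketched as a threshold detector with a refractory period so one blink is not counted twice. This is a hypothetical stand-in for the paper's actual filtering and detection algorithms; the threshold and refractory length are assumed tuning parameters.

```python
def detect_blinks(signal, threshold, refractory=5):
    """Flag conscious blinks as threshold crossings in a pseudo-EMG trace.

    A sample above `threshold` fires a trigger once, after which the
    detector stays quiet for `refractory` samples so a single blink
    does not fire multiple times. Returns the sample indices of the
    triggers.
    """
    triggers, quiet = [], 0
    for i, v in enumerate(signal):
        if quiet:
            quiet -= 1          # still inside the refractory window
        elif v > threshold:
            triggers.append(i)  # blink detected at sample i
            quiet = refractory
    return triggers
```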

  2. Postural dynamism during computer mouse and keyboard use: A pilot study.

    Science.gov (United States)

    Van Niekerk, S M; Fourie, S M; Louw, Q A

    2015-09-01

    Prolonged sedentary computer use is a risk factor for musculoskeletal pain. The aim of this study was to explore postural dynamism during two common computer tasks, namely mouse use and keyboard typing. Postural dynamism was described as the total number of postural changes that occurred during the data capture period. Twelve participants were recruited to perform a mouse and a typing task; the data of only eight participants could be analysed. A 3D motion analysis system measured the number of cervical and thoracic postural changes as well as the range in which the postural changes occurred. The study findings illustrate that there is less postural dynamism of the cervical and thoracic spinal regions during computer mouse use than during keyboard typing.

  3. Psychomotor Impairment Detection via Finger Interactions with a Computer Keyboard During Natural Typing

    Science.gov (United States)

    Giancardo, L.; Sánchez-Ferro, A.; Butterworth, I.; Mendoza, C. S.; Hooker, J. M.

    2015-04-01

    Modern digital devices and appliances are capable of monitoring the timing of button presses, or finger interactions in general, with sub-millisecond accuracy. However, the massive amount of high-resolution temporal information that these devices could collect is currently being discarded. Multiple studies have shown that the act of pressing a button triggers well-defined brain areas which are known to be affected by motor-compromised conditions. In this study, we demonstrate that daily interaction with a computer keyboard can be employed as a means to observe and potentially quantify psychomotor impairment. We induced psychomotor impairment via a sleep inertia paradigm in 14 healthy subjects, which is detected by our classifier with an Area Under the ROC Curve (AUC) of 0.93/0.91. The detection relies on novel features derived from key-hold times acquired on standard computer keyboards during an uncontrolled typing task. These features correlate with the progression of psychomotor impairment, do not depend on the language of the text typed, and perform consistently with different keyboards. The ability to acquire longitudinal measurements of subtle motor changes from a digital device without altering its functionality may allow for early screening and follow-up of motor-compromised neurodegenerative conditions, psychological disorders or intoxication at a negligible cost in the general population.
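
The key-hold times the detection relies on are simply release-minus-press intervals, which any OS keyboard hook can log without recording what is typed. The sketch below computes a few summary statistics from such a log; the tuple format and the choice of statistics are illustrative assumptions, and the study's actual feature set is richer than this.

```python
import statistics

def key_hold_features(events):
    """Summarize key-hold times from a typing log.

    `events` is a list of (key, press_time, release_time) tuples, a
    hypothetical log format. The hold time is release - press; its
    distribution shifts with psychomotor state, which is the signal the
    paper's classifier exploits.
    """
    holds = [release - press for _, press, release in events]
    return {
        "mean_hold": statistics.mean(holds),
        "stdev_hold": statistics.pstdev(holds),  # variability of hold times
        "max_hold": max(holds),
    }
```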

  4. Keyboard before Head Tracking Depresses User Success in Remote Camera Control

    Science.gov (United States)

    Zhu, Dingyun; Gedeon, Tom; Taylor, Ken

    In remote mining, operators of complex machinery have more tasks or devices to control than they have hands. For example, operating a rock breaker requires two-handed joystick control to position and fire the jackhammer, leaving camera control either to automation or requiring the operator to switch between controls. We modelled such a teleoperated setting by performing experiments using a simple physical game analogue: a half-size table soccer game with two handles. The complex camera angles of the mining application were modelled by obscuring the direct view of the play area and using a Pan-Tilt-Zoom (PTZ) camera. The camera was controlled either via a keyboard or via head tracking, using two different sets of head gestures called "head motion" and "head flicking" for turning camera motion on/off. Our results show that head motion control provided performance comparable to the keyboard, while head flicking was significantly worse. In addition, the sequence in which the three control methods were used is highly significant. It appears that using the keyboard first depresses successful use of the head tracking methods, with significantly better results when one of the head tracking methods was used first. Analysis of the qualitative survey data collected confirms that the worst-performing method was also disliked by participants. Surprisingly, using that worst method first significantly enhanced performance with the other two control methods.

  5. All-Elastomer-Based Triboelectric Nanogenerator as a Keyboard Cover To Harvest Typing Energy.

    Science.gov (United States)

    Li, Shengming; Peng, Wenbo; Wang, Jie; Lin, Long; Zi, Yunlong; Zhang, Gong; Wang, Zhong Lin

    2016-08-23

    The drastic expansion of consumer electronics (personal computers, touch pads, smart phones, etc.) creates many human-machine interfaces and multiple types of interaction between humans and electronics. Considering the high frequency of such operations in daily life, an extraordinary amount of biomechanical energy from typing or pressing buttons is available. In this study, we have demonstrated a highly flexible triboelectric nanogenerator (TENG), made solely from elastomeric materials, as a cover on a conventional keyboard to harvest biomechanical energy from typing. A dual-mode working mechanism is established with a high transferred charge density of ∼140 μC/m² due to both structural and material innovations. We have also carried out fundamental investigations of the dependence of its performance on various structural factors in order to optimize the electric output in practice. The fully packaged keyboard-shaped TENG is further integrated with a horn-like polypyrrole-based supercapacitor as a self-powered system. Typing at normal speed for 1 h stores ∼8 × 10⁻⁴ J of electricity, enough to drive an electronic thermometer/hygrometer. Our keyboard cover also exhibits outstanding long-term stability, water resistance, and insensitivity to surface conditions; the last feature makes it useful for studying the typing behaviour of different people.

  6. Psychomotor Impairment Detection via Finger Interactions with a Computer Keyboard During Natural Typing

    Science.gov (United States)

    Giancardo, L.; Sánchez-Ferro, A.; Butterworth, I.; Mendoza, C. S.; Hooker, J. M.

    2015-01-01

    Modern digital devices and appliances are capable of monitoring the timing of button presses, or finger interactions in general, with sub-millisecond accuracy. However, the massive amount of high-resolution temporal information that these devices could collect is currently being discarded. Multiple studies have shown that the act of pressing a button triggers well-defined brain areas which are known to be affected by motor-compromised conditions. In this study, we demonstrate that daily interaction with a computer keyboard can be employed as a means to observe and potentially quantify psychomotor impairment. We induced psychomotor impairment via a sleep inertia paradigm in 14 healthy subjects, which is detected by our classifier with an Area Under the ROC Curve (AUC) of 0.93/0.91. The detection relies on novel features derived from key-hold times acquired on standard computer keyboards during an uncontrolled typing task. These features correlate with the progression of psychomotor impairment, do not depend on the language of the text typed, and perform consistently with different keyboards. The ability to acquire longitudinal measurements of subtle motor changes from a digital device without altering its functionality may allow for early screening and follow-up of motor-compromised neurodegenerative conditions, psychological disorders or intoxication at a negligible cost in the general population. PMID:25882641

  7. USABILITY TESTING OF VIRTUAL KEYBOARD IN TOUCH SCREEN WITH QUESTIONNAIRE METHOD

    Directory of Open Access Journals (Sweden)

    Harijanto Pangetsu

    2014-05-01

    Full Text Available A system consists of input, process, and output. The user interface is where users interact with the system, one of its functions being to accept input. One input device is the keyboard; with the development of touch screens, a virtual keyboard is required. A good interface must be easy to use, especially for novice users, so usability testing is required to determine the usability level of the interface. Good usability is characterized by being effective to use (effectiveness), efficient to use (efficiency), safe to use (safety), having good utility (utility), being easy to learn (learnability) and easy to remember how to use (memorability). Usability testing is an evaluation approach involving users. With usability testing, it is expected that positive inputs can be provided for designing an interface, especially a virtual keyboard interface, mainly with the characteristics of being easy to learn (learnability) and easy to remember how to use (memorability), so that the resulting design has a good usability level.

  8. Can digital signals from the keyboard capture force exposures during typing?

    Science.gov (United States)

    Kim, Jeong Ho; Johnson, Peter W

    2012-01-01

    An exposure-response relationship has been shown between muscle fatigue and its effects on keystroke durations. Since keystroke durations can readily be measured by software, the method has potential as a non-invasive exposure assessment tool. However, software-based keystroke durations may be affected by keyswitch force-displacement characteristics. This study therefore used a force platform to measure keystroke durations and compared them to software-measured keystroke durations in order to determine whether the software-based durations can be used as a surrogate force exposure measure. A total of 13 subjects (6 males and 7 females) typed for 15 minutes each on three keyboards with different force-displacement characteristics. The results showed that the software-based keystroke durations were more sensitive to the keyboard force-displacement differences than the force-based measures. Although the digital-signal-based keystroke durations depend on the force-displacement characteristics, the high correlation between the two measures indicated that keystroke durations derived from the digital signal approximated the true force-derived keystroke durations, regardless of the keyboard force-displacement characteristics. Therefore, software-based keystroke durations could be used as a non-invasive, surrogate force exposure measure in lieu of more invasive actual force measurements.

  9. Inertia artefacts and their effect on the parameterisation of keyboard reaction forces.

    Science.gov (United States)

    Asundi, Krishna; Johnson, Peter W; Dennerlein, Jack T

    2009-10-01

    Reaction force measurements collected during typing on keyboard trays contain inertia artefacts due to dynamic movements of the supporting work surface. To evaluate the effect of these artefacts, vertical forces and accelerations were measured while nine volunteers touch-typed on a rigid desk and a compliant keyboard tray. Two signal processing methods were evaluated: 1) low pass filtering with 20 Hz cut-off; 2) inertial force cancellation by subtracting the accelerometer signal. High frequency artefacts in the force signal, present on both surfaces, were eliminated by low pass filtering. Low frequency artefacts, present only when subjects typed on the keyboard tray, were attenuated by subtracting the accelerometer signal. Attenuation of these artefacts altered the descriptive statistics of the force signal by as much as 7%. For field measurements of typing force, reduction of low frequency artefacts should be considered for making more accurate comparisons across groups using work surfaces with different compliances. Direct measures of physical risk factors in the workplace can improve understanding of the aetiology of musculoskeletal disorders. Findings from this study characterise inertia artefacts in typing force measures and provide a method for eliminating them. These artefacts can add variability to measures, masking possible differences between subject groups.
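
The two clean-up steps the abstract evaluates, low-pass filtering and subtraction of the inertial force estimated from a work-surface accelerometer, can be sketched as follows. The moving-average filter stands in for the 20 Hz low-pass, and the effective moving mass is an assumed calibration constant; neither detail is taken from the paper.

```python
import numpy as np

def clean_typing_force(force, accel, fs, mass, cutoff=20.0):
    """Attenuate inertia artefacts in a typing-force recording.

    Step 1: low-pass the force signal (a simple moving average with a
    window of roughly fs/cutoff samples, standing in for a proper
    20 Hz filter) to remove high-frequency ringing. Step 2: subtract
    the inertial force m*a computed from the accelerometer channel to
    remove low-frequency artefacts from work-surface motion.
    `mass` is an assumed effective moving mass in kg.
    """
    win = max(1, int(round(fs / cutoff)))
    kernel = np.ones(win) / win
    low_passed = np.convolve(force, kernel, mode="same")
    return low_passed - mass * np.asarray(accel)
```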

  10. Conserved-peptide upstream open reading frames (CPuORFs) are associated with regulatory genes in angiosperms

    Directory of Open Access Journals (Sweden)

    Richard A Jorgensen

    2012-08-01

    Full Text Available Upstream open reading frames (uORFs) are common in eukaryotic transcripts, but those that encode conserved peptides (CPuORFs) occur in less than 1% of transcripts. The peptides encoded by three plant CPuORF families are known to control translation of the downstream ORF in response to a small signal molecule (sucrose, polyamines and phosphocholine). In flowering plants, transcription factors are statistically over-represented among genes that possess CPuORFs, and in general it appeared that many CPuORF genes also had other regulatory functions, though the significance of this suggestion was uncertain (Hayden and Jorgensen, 2007). Five years later the literature provides much more information on the functions of many CPuORF genes. Here we reassess the functions of 27 known CPuORF gene families and find that 22 of these families play a variety of different regulatory roles, from transcriptional control to protein turnover, and from small signal molecules to signal transduction kinases. Clearly then, there is indeed a strong association of CPuORFs with regulatory genes. In addition, 16 of these families play key roles in a variety of different biological processes. Most strikingly, the core sucrose response network includes three different CPuORFs, creating the potential for sophisticated balancing of the network in response to three different molecular inputs. We propose that the function of most CPuORFs is to modulate translation of a downstream major ORF (mORF) in response to a signal molecule recognized by the conserved peptide, and that, because the mORFs of CPuORF genes generally encode regulatory proteins, many of them centrally important in the biology of plants, CPuORFs play key roles in balancing such regulatory networks.

  11. Toward Performance Portability of the FV3 Weather Model on CPU, GPU and MIC Processors

    Science.gov (United States)

    Govett, Mark; Rosinski, James; Middlecoff, Jacques; Schramm, Julie; Stringer, Lynd; Yu, Yonggang; Harrop, Chris

    2017-04-01

    The U.S. National Weather Service has selected the FV3 (Finite Volume cubed) dynamical core to become part of its next global operational weather prediction model. While the NWS is preparing to run FV3 operationally in late 2017, NOAA's Earth System Research Laboratory is adapting the model to be capable of running on next-generation GPU and MIC processors. The FV3 model was designed in the 1990s, and while it has been extensively optimized for traditional CPU chips, some code refactoring has been required to expose sufficient parallelism to run on fine-grain GPU processors. The code transformations must demonstrate bit-wise reproducible results with the original CPU code, and between CPU, GPU and MIC processors. We will describe the parallelization and performance achieved while attempting to maintain performance portability between CPU, GPU and MIC with the Fortran source code. Performance results will be shown using NOAA's new Pascal-based fine-grain GPU system (800 GPUs), and for the Knights Landing processor on the National Science Foundation (NSF) Stampede-2 system.

  12. CPU SIM: A Computer Simulator for Use in an Introductory Computer Organization-Architecture Class.

    Science.gov (United States)

    Skrein, Dale

    1994-01-01

    CPU SIM, an interactive low-level computer simulation package that runs on the Macintosh computer, is described. The program is designed for instructional use in the first or second year of undergraduate computer science, to teach various features of typical computer organization through hands-on exercises. (MSE)

  13. DSM vs. NSM: CPU Performance Tradeoffs in Block-Oriented Query Processing

    NARCIS (Netherlands)

    M. Zukowski (Marcin); N.J. Nes (Niels); P.A. Boncz (Peter)

    2008-01-01

    Comparisons between the merits of row-wise storage (NSM) and columnar storage (DSM) are typically made with respect to the persistent storage layer of database systems. In this paper, however, we focus on the CPU efficiency tradeoffs of tuple representations inside the query execution engine.
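
The two tuple representations being compared can be sketched with plain Python containers: under NSM each entry is a whole record, so scanning one attribute strides through full records, while under DSM each attribute is a dense array and the scan is sequential. The tiny relation below is invented for illustration.

```python
# NSM (row-wise): one record per entry.
rows_nsm = [
    {"id": 1, "price": 9.5, "qty": 3},
    {"id": 2, "price": 4.0, "qty": 7},
    {"id": 3, "price": 2.5, "qty": 1},
]

# DSM (columnar): one array per attribute.
cols_dsm = {
    "id": [1, 2, 3],
    "price": [9.5, 4.0, 2.5],
    "qty": [3, 7, 1],
}

def total_price_nsm(rows):
    # touches every field of every record to read one attribute
    return sum(r["price"] for r in rows)

def total_price_dsm(cols):
    # dense sequential scan over a single attribute array
    return sum(cols["price"])
```

Both scans compute the same answer; the difference the paper measures is in cache behaviour and CPU cost, not in semantics.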

  14. Plasma levels of carboxypeptidase U (CPU, CPB2 or TAFIa) are elevated in patients with acute myocardial infarction.

    Science.gov (United States)

    Leenaerts, D; Bosmans, J M; van der Veken, P; Sim, Y; Lambeir, A M; Hendriks, D

    2015-12-01

    Two decades after its discovery, carboxypeptidase U (CPU, CPB2 or TAFIa) has become a compelling drug target in thrombosis research. However, given the difficulty of measuring CPU in the blood circulation and the demanding sample collection requirements, previous clinical studies focused mainly on measuring its inactive precursor, proCPU (proCPB2 or TAFI). Using a sensitive and specific enzymatic assay, we investigated plasma CPU levels in patients presenting with acute myocardial infarction (AMI) and in controls. In this case-control study, peripheral arterial blood samples were collected from 45 patients with AMI (25 with ST segment elevation myocardial infarction [STEMI], 20 with non-ST segment elevation myocardial infarction [NSTEMI]) and 42 controls. Additionally, intracoronary blood samples were collected from 11 STEMI patients during thrombus aspiration. Subsequently, proCPU and CPU plasma concentrations in all samples were measured by means of an activity-based assay, using Bz-o-cyano-Phe-Arg as a selective substrate. CPU activity levels were higher in patients with AMI (median between LOD and LOQ, range 0-1277 mU L(-1)) than in controls, and did not differ by AMI type (NSTEMI [median between LOD and LOQ, range 0-465 mU L(-1)] vs. STEMI [median between LOD and LOQ, range 0-1277 mU L(-1)]). Intracoronary samples (median 109 mU L(-1), range 0-759 mU L(-1)) contained higher CPU levels than peripheral samples (median between LOD and LOQ, range 0-107 mU L(-1)), indicating increased local CPU generation. With regard to proCPU, we found lower levels in AMI patients (median 910 U L(-1), range 706-1224 U L(-1)) than in controls (median 1010 U L(-1), range 753-1396 U L(-1)). AMI patients have higher plasma CPU levels and lower proCPU levels than controls. This finding indicates in vivo generation of functionally active CPU in patients with AMI. © 2015 International Society on Thrombosis and Haemostasis.

  15. Parallel Implementation of AVS Video Encoder Based on CPU+GPU

    Institute of Scientific and Technical Information of China (English)

    邹彬彬; 梁凡

    2013-01-01

    The video standard of the audio video coding standard (AVS) offers high compression performance and good network adaptability, making it suitable for widespread digital video applications. Accelerating AVS encoding to achieve real-time operation is therefore an important issue. A parallel implementation of an AVS video encoder based on the CPU and GPU is proposed, in which motion estimation, integer transform and quantization are computed on the GPU. Experimental results show that the proposed method achieves real-time encoding of 1920×1080 video sequences.

  16. Multi-GPU and multi-CPU accelerated FDTD scheme for vibroacoustic applications

    Science.gov (United States)

    Francés, J.; Otero, B.; Bleda, S.; Gallego, S.; Neipp, C.; Márquez, A.; Beléndez, A.

    2015-06-01

    The Finite-Difference Time-Domain (FDTD) method is applied to the analysis of vibroacoustic problems and to the study of the propagation of longitudinal and transversal waves in stratified media. The potential of the scheme and the relevance of each acceleration strategy for massive FDTD computations are demonstrated in this work. In this paper, we propose two new specific implementations of the two-dimensional FDTD scheme using multi-CPU and multi-GPU, respectively. In the first implementation, an open source message passing interface (Open MPI) has been included in order to massively exploit the resources of a biprocessor station with two Intel Xeon processors. Moreover, in the CPU code version, the streaming SIMD extensions (SSE) and the advanced vector extensions (AVX) have been included, together with shared memory approaches that take advantage of multi-core platforms. The second implementation, the multi-GPU code version, is based on peer-to-peer communications available in CUDA on two GPUs (NVIDIA GTX 670). Subsequently, this paper presents an accurate analysis of the influence of the different code versions, including shared memory approaches, vector instructions and multi-processors (both CPU and GPU), and compares them in order to delimit the degree of improvement of distributed solutions based on multi-CPU and multi-GPU. The performance of both approaches was analysed, and it was demonstrated that adding shared memory schemes to CPU computing substantially improves the performance of vector instructions, enlarging the simulation sizes that use the cache memory of CPUs efficiently. In this case GPU computing is roughly twice as fast as the fine-tuned CPU version with both one and two nodes. However, for massive computations explicit vector instructions are not worthwhile, since memory bandwidth is the limiting factor and performance tends to be the same as that of the sequential version.
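
The inner stencil being accelerated can be pictured with a one-dimensional toy: a leapfrog pressure/velocity update on a staggered grid. This is a generic FDTD sketch with assumed unit parameters (chosen so c·dt/dx satisfies the CFL stability condition), not the authors' vibroacoustic scheme.

```python
import numpy as np

def fdtd_1d_step(p, v, c=1.0, dt=0.5, dx=1.0):
    """One leapfrog update of a 1D wave-equation FDTD grid.

    `p` and `v` are same-length arrays for the two coupled fields,
    updated in place: v from the spatial differences of p, then p from
    the spatial differences of v. This pointwise stencil is the work
    that multi-CPU/multi-GPU schemes distribute across compute units.
    """
    v[:-1] += (dt / dx) * (p[1:] - p[:-1])               # half-step for v
    p[1:-1] += c * c * (dt / dx) * (v[1:-1] - v[:-2])    # update interior p
    return p, v
```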

  17. COMPARING AND ANALYZING THE SIMILARITIES AND DIFFERENCES BETWEEN CPU HYPER-THREADING AND DUAL-CORE TECHNOLOGIES%比较分析CPU超线程技术与双核技术的异同

    Institute of Scientific and Technical Information of China (English)

    林杰; 余建坤

    2011-01-01

    Hyper-threading and dual-core are two important technologies in the evolution of the CPU. Hyper-threading technology presents one physical processor as two "virtual" processors to reduce the idle time of execution units and other resources, thus increasing CPU utilization. Dual-core technology encapsulates two physical processing cores in one CPU package to improve program performance. This paper describes the basic model of the CPU, analyzes the principles of hyper-threading and dual-core technology, and compares their similarities and differences from the three perspectives of system architecture, degree of parallelism and efficiency gained.

  18. CPU Server

    CERN Multimedia

    The CERN computer centre has hundreds of racks like these. They are over a million times more powerful than our first computer in the 1960s. This tray is a 'dual-core' server, meaning it effectively has two CPUs in it (e.g. two of your home computers shrunk to fit into a single box). Also note the copper cooling fins, which help dissipate the heat.

  19. Unit 03 - Introduction to Computers

    OpenAIRE

    Unit 74, CC in GIS; National Center for Geographic Information and Analysis

    1990-01-01

    This unit provides a brief introduction to computer hardware and software. It discusses binary notation, the ASCII coding system and hardware components including the central processing unit (CPU), memory, peripherals and storage media. Software, including operating systems, word processors, database packages, spreadsheets and statistical packages, is briefly described.
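
The two coding concepts the unit introduces, ASCII and binary notation, fit in two lines of Python: ASCII assigns each character a numeric code, and binary notation writes that number in base 2.

```python
# ASCII assigns 'A' the code 65; binary notation writes 65 as 1000001.
code = ord("A")             # the ASCII code point of 'A'
bits = format(code, "07b")  # the same number written in 7-bit binary
```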

  20. High performance technique for database applicationsusing a hybrid GPU/CPU platform

    KAUST Repository

    Zidan, Mohammed A.

    2012-07-28

    Many database applications, such as sequence comparing, sequence searching, and sequence matching, process large database sequences. We introduce a novel and efficient technique to improve the performance of database applications by using a hybrid GPU/CPU platform. In particular, our technique solves the problem of the low efficiency resulting from running short-length sequences in a database on a GPU. To verify our technique, we applied it to the widely used Smith-Waterman algorithm. The experimental results show that our hybrid GPU/CPU technique improves the average performance by a factor of 2.2, and improves the peak performance by a factor of 2.8 when compared to earlier implementations. Copyright © 2011 by ASME.
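
The core idea, routing short sequences (which use a GPU inefficiently) to the CPU while keeping long ones on the GPU, reduces to a partitioning step. The sketch below is an assumed interpretation of that dispatch; the length threshold is a made-up tuning parameter, not a value from the paper.

```python
def partition_by_length(sequences, threshold=200):
    """Split database sequences into GPU and CPU work batches.

    Sequences at or above `threshold` characters go to the GPU batch,
    where there is enough parallel work per sequence to keep the device
    busy; shorter ones go to the CPU batch. Each batch would then be
    scored (e.g. with Smith-Waterman) on its own device.
    """
    gpu_batch = [s for s in sequences if len(s) >= threshold]
    cpu_batch = [s for s in sequences if len(s) < threshold]
    return gpu_batch, cpu_batch
```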

  1. Performance Engineering of the Kernel Polynomial Method on Large-Scale CPU-GPU Systems

    CERN Document Server

    Kreutzer, Moritz; Wellein, Gerhard; Pieper, Andreas; Alvermann, Andreas; Fehske, Holger

    2014-01-01

    The Kernel Polynomial Method (KPM) is a well-established scheme in quantum physics and quantum chemistry to determine the eigenvalue density and spectral properties of large sparse matrices. In this work we demonstrate the high optimization potential and feasibility of peta-scale heterogeneous CPU-GPU implementations of the KPM. At the node level we show that it is possible to decouple the sparse matrix problem posed by KPM from main memory bandwidth both on CPU and GPU. To alleviate the effects of scattered data access we combine loosely coupled outer iterations with tightly coupled block sparse matrix multiple-vector operations, which enables pure data streaming. All optimizations are guided by a performance analysis and modelling process that indicates how the computational bottlenecks change with each optimization step. Finally we use the optimized node-level KPM with a hybrid-parallel framework to perform large-scale heterogeneous electronic structure calculations for novel topological materials at peta-scale.

  2. Promise of a Low Power Mobile CPU based Embedded System in Artificial Leg Control

    Science.gov (United States)

    Hernandez, Robert; Zhang, Fan; Zhang, Xiaorong; Huang, He; Yang, Qing

    2013-01-01

    This paper presents the design and implementation of a low-power embedded system using mobile processor technology (Intel Atom™ Z530 processor) specifically tailored to a neural-machine interface (NMI) for artificial limbs. This embedded system effectively performs our previously developed NMI algorithm based on neuromuscular-mechanical fusion and phase-dependent pattern classification. The analysis shows that the NMI embedded system can meet real-time constraints with high accuracy in recognizing the user's locomotion mode. Our implementation utilizes the mobile processor efficiently, allowing a power consumption of 2.2 watts and low CPU utilization (less than 4.3%) while executing the complex NMI algorithm. Our experiments have shown that the highly optimized C implementation on the embedded system has clear advantages over existing PC implementations in MATLAB. The study results suggest that mobile-CPU-based embedded systems are promising for implementing advanced control for powered lower-limb prostheses. PMID:23367113

  3. Design Patterns for Sparse-Matrix Computations on Hybrid CPU/GPU Platforms

    Directory of Open Access Journals (Sweden)

    Valeria Cardellini

    2014-01-01

    Full Text Available We apply object-oriented software design patterns to develop code for scientific software involving sparse matrices. Design patterns arise when multiple independent developments produce similar designs which converge onto a generic solution. We demonstrate how to use design patterns to implement an interface for sparse matrix computations on NVIDIA GPUs starting from PSBLAS, an existing sparse matrix library, and from existing sets of GPU kernels for sparse matrices. We also compare the throughput of the PSBLAS sparse matrix–vector multiplication on two platforms exploiting the GPU with that obtained by a CPU-only PSBLAS implementation. Our experiments exhibit encouraging results regarding the comparison between CPU and GPU executions in double precision, obtaining a speedup of up to 35.35 on NVIDIA GTX 285 with respect to AMD Athlon 7750, and up to 10.15 on NVIDIA Tesla C2050 with respect to Intel Xeon X5650.

  4. Turbo Charge CPU Utilization in Fork/Join Using the ManagedBlocker

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Fork/Join is a framework for parallelizing calculations using recursive decomposition, also called divide and conquer. These algorithms occasionally end up duplicating work, especially at the beginning of the run. We can reduce wasted CPU cycles by implementing a reserved caching scheme. Before a task starts its calculation, it tries to reserve an entry in the shared map. If it is successful, it immediately begins. If not, it blocks until the other thread has finished its calculation. Unfortunately this might result in a significant number of blocked threads, decreasing CPU utilization. In this talk we will demonstrate this issue and offer a solution in the form of the ManagedBlocker. Combined with the Fork/Join, it can keep parallelism at the desired level.
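
The reserved-caching scheme described above can be sketched as follows. The talk itself targets Java's ForkJoinPool and ManagedBlocker; this is a language-neutral illustration in Python of the reservation step only: the first thread to reserve a key computes the value, and any other thread asking for the same key blocks on the same Future instead of duplicating the work.

```python
# Sketch of a reserved caching scheme: reserve an entry in a shared map
# before computing; losers of the reservation block until the winner
# publishes the result.
from concurrent.futures import Future, ThreadPoolExecutor
import threading

cache = {}
lock = threading.Lock()

def compute_once(key, compute):
    with lock:                        # try to reserve an entry for key
        fut = cache.get(key)
        owner = fut is None
        if owner:
            fut = cache[key] = Future()
    if owner:
        fut.set_result(compute(key))  # we won the reservation: do the work
    return fut.result()               # everyone else blocks here

calls = []
def slow_square(n):
    calls.append(n)                   # count how often we really compute
    return n * n

with ThreadPoolExecutor(4) as pool:
    results = list(pool.map(lambda k: compute_once(k, slow_square), [7, 7, 7, 7]))
# results == [49, 49, 49, 49], and slow_square ran exactly once
```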

  5. Screening methods for linear-scaling short-range hybrid calculations on CPU and GPU architectures

    Science.gov (United States)

    Beuerle, Matthias; Kussmann, Jörg; Ochsenfeld, Christian

    2017-04-01

    We present screening schemes that allow for efficient, linear-scaling short-range exchange calculations employing Gaussian basis sets for both CPU and GPU architectures. They are based on the LinK [C. Ochsenfeld et al., J. Chem. Phys. 109, 1663 (1998)] and PreLinK [J. Kussmann and C. Ochsenfeld, J. Chem. Phys. 138, 134114 (2013)] methods, but account for the decay introduced by the attenuated Coulomb operator in short-range hybrid density functionals. Furthermore, we discuss the implementation of short-range electron repulsion integrals on GPUs. The introduction of our screening methods allows for speedups of up to a factor 7.8 as compared to the underlying linear-scaling algorithm, while retaining full numerical control over the accuracy. With the increasing number of short-range hybrid functionals, our new schemes will allow for significant computational savings on CPU and GPU architectures.

  6. Promise of a low power mobile CPU based embedded system in artificial leg control.

    Science.gov (United States)

    Hernandez, Robert; Zhang, Fan; Zhang, Xiaorong; Huang, He; Yang, Qing

    2012-01-01

    This paper presents the design and implementation of a low power embedded system using mobile processor technology (Intel Atom™ Z530 Processor) specifically tailored for a neural-machine interface (NMI) for artificial limbs. This embedded system effectively performs our previously developed NMI algorithm based on neuromuscular-mechanical fusion and phase-dependent pattern classification. The analysis shows that the NMI embedded system can meet real-time constraints with high accuracy in recognizing the user's locomotion mode. Our implementation utilizes the mobile processor efficiently, allowing a power consumption of 2.2 watts and low CPU utilization (less than 4.3%) while executing the complex NMI algorithm. Our experiments have shown that the highly optimized C implementation on the embedded system has clear advantages over existing PC implementations in MATLAB. The study results suggest that a mobile-CPU-based embedded system is promising for implementing advanced control of powered lower limb prostheses.

  7. A CPU/MIC Collaborated Parallel Framework for GROMACS on Tianhe-2 Supercomputer.

    Science.gov (United States)

    Peng, Shaoliang; Yang, Shunyun; Su, Wenhe; Zhang, Xiaoyu; Zhang, Tenglilang; Liu, Weiguo; Zhao, Xingming

    2017-06-16

    Molecular Dynamics (MD) is the simulation of the dynamic behavior of atoms and molecules. As the most popular software for molecular dynamics, GROMACS cannot handle large-scale data because of limited computing resources. In this paper, we propose a CPU and Intel® Xeon Phi Many Integrated Core (MIC) collaborated parallel framework to accelerate GROMACS using the offload mode on a MIC coprocessor, which improves the performance of GROMACS significantly, especially on the Tianhe-2 supercomputer. Furthermore, we optimize GROMACS so that it can run on the CPU and MIC at the same time, and we accelerate multi-node GROMACS so that it can be used in practice. Benchmarking on real data, our accelerated GROMACS performs very well and reduces computation time significantly. Source code: https://github.com/tianhe2/gromacs-mic.

  8. Wrist dermatitis: contact allergy to neoprene in a keyboard wrist rest.

    Science.gov (United States)

    Johnson, R C; Elston, D M

    1997-09-01

    A case of allergic contact dermatitis to a keyboard wrist rest containing neoprene is reported. The patient, who had a history of sensitivity to rubber products, developed an acute vesicular reaction of the palmar aspects of her distal wrists, followed by eczematous patches of her extremities and face. Treatment with prednisone, a 3-week tapering dose (60, 40, 20 mg), cleared the dermatitis. The widespread uses of neoprene are discussed and suggest that neoprene will become a common source of contact dermatitis as the potential sources of exposure increase.

  9. Slim and comfortable: the Microsoft Bluetooth Mobile Keyboard 5000

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    Microsoft has released the Bluetooth Mobile Keyboard 5000, a slim wireless mobile keyboard only 13 mm thick. It is 355 mm long and 165 mm wide, and weighs about 434 g including the built-in battery. Following ergonomic principles, the designers arranged the keys in a comfortable curve.

  10. DOE SBIR Phase-1 Report on Hybrid CPU-GPU Parallel Development of the Eulerian-Lagrangian Barracuda Multiphase Program

    Energy Technology Data Exchange (ETDEWEB)

    Dr. Dale M. Snider

    2011-02-28

    This report gives the results of the Phase-1 work demonstrating a greater than 10x speedup of the Barracuda computer program using parallel methods and GPU (graphics processing unit) processors. Phase-1 demonstrated a 12x speedup of a typical Barracuda function on the GPU. The test case used about 5 million particles and 250,000 Eulerian grid cells; the relative speedup over a single CPU increases with the number of particles, exceeding 12x. Phase-1 also charted the data structure modifications needed for good parallel performance while keeping a friendly environment for new physics development and code maintenance; these changes will be implemented in Phase-2. Phase-1 laid the groundwork for the complete parallelization of Barracuda in Phase-2, with the caveat that the parallel programming practices adopted in Phase-1 already give an immediate speedup in the current serial Barracuda code. The Phase-1 tasks were completed successfully. The detailed results of Phase-1 are within this document. In general, the speedup of one function is expected to be higher than the speedup of the entire code because of I/O and communication between the algorithms. However, because one of the most difficult Barracuda algorithms was parallelized in Phase-1, and because the advanced parallelization methods and optimization techniques identified in Phase-1 will be used in Phase-2, an overall Barracuda code speedup (relative to a single CPU) is expected to be greater than 10x. This means that a job which now takes 30 days to complete will be done in 3 days.
Tasks completed in Phase-1 are: Task 1: Profile the entire Barracuda code and select which subroutines are to be parallelized (See Section Choosing a Function to Accelerate) Task 2: Select a GPU consultant company and

  11. LHCb: Statistical Comparison of CPU performance for LHCb applications on the Grid

    CERN Multimedia

    Graciani, R

    2009-01-01

    The usage of CPU resources by LHCb on the Grid is dominated by two applications: Gauss and Brunel. Gauss is the application performing the Monte Carlo simulation of proton-proton collisions. Brunel is the application responsible for reconstructing the signals recorded by the detector, converting them into objects that can be used for later physics analysis of the data (tracks, clusters,…). Both applications are based on the Gaudi and LHCb software frameworks. Gauss uses Pythia and Geant as underlying libraries for the simulation of the collision and the subsequent passage of the generated particles through the LHCb detector, while Brunel uses LHCb-specific code to process the data from each sub-detector. Both applications are CPU bound. Large Monte Carlo productions or data reconstructions running on the Grid are an ideal benchmark to compare the performance of the different CPU models for each case. Since the processed events are only statistically comparable, only statistical comparison of the...

  12. Exploring Heterogeneous NoC Design Space in Heterogeneous GPU-CPU Architectures

    Institute of Scientific and Technical Information of China (English)

    方娟; 姚治成; 冷镇宇; 隋秀峰; 刘思彤

    2015-01-01

    Computer architecture is transitioning from the multicore era into the heterogeneous era, in which heterogeneous architectures use on-chip networks to access shared resources, and how a network is configured will likely have a significant impact on overall performance and power consumption. Recently, heterogeneous network on chip (NoC) has been proposed not only to achieve performance comparable to that of NoCs with buffered routers but also to reduce buffer cost and energy consumption. However, heterogeneous NoC design for heterogeneous GPU-CPU architectures has not been studied in depth. This paper first evaluates the performance and power consumption of a variety of static hot-potato based heterogeneous NoCs with different buffered and bufferless router placements, which helps explore the design space for heterogeneous GPU-CPU interconnection. It then proposes Unidirectional Flow Control (UFC), a simple credit-based flow control mechanism for heterogeneous NoC in GPU-CPU architectures to control network congestion. UFC guarantees that there are always unoccupied entries in buffered routers to receive flits coming from adjacent bufferless routers. Our evaluations show that, compared to hot-potato routing, UFC improves performance by an average of 14.1%, with energy increased by an average of only 5.3%.

  13. Research on the Prediction Model of CPU Utilization Based on ARIMA-BP Neural Network

    Directory of Open Access Journals (Sweden)

    Wang Jina

    2016-01-01

    Full Text Available The dynamic deployment technology of the virtual machine is one of the current research focuses in cloud computing. Traditional methods mainly act after service performance has already degraded, and therefore lag. To solve this problem, a new prediction model of CPU utilization is constructed in this paper. The model provides a reference for the VM dynamic deployment process, allowing deployment to finish before service performance degrades. This method not only ensures the quality of services but also improves server performance and resource utilization. The new prediction method of CPU utilization based on the ARIMA-BP neural network comprises four parts: preprocessing the collected data, building the ARIMA-BP neural network predictive model, correcting the nonlinear residuals of the time series with the BP prediction algorithm, and obtaining the prediction results by comprehensive analysis of the above data.
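
The two-stage structure of such a hybrid forecaster can be sketched as follows. This is an illustrative toy, not the paper's implementation: a least-squares AR(1) model stands in for the full ARIMA stage, and the BP neural network that corrects the residuals is replaced by a trivial residual-mean corrector, purely to show how a linear forecast and a residual correction combine into the final prediction.

```python
# Toy sketch of a hybrid "linear model + residual correction" forecaster
# for CPU utilization. The residual corrector here is a stand-in for the
# BP neural network described in the abstract.

def ar1_fit(xs):
    """Least-squares AR(1) coefficient phi for x[t] ~ phi * x[t-1]."""
    num = sum(xs[t] * xs[t - 1] for t in range(1, len(xs)))
    den = sum(x * x for x in xs[:-1])
    return num / den

def hybrid_forecast(xs):
    phi = ar1_fit(xs)
    residuals = [xs[t] - phi * xs[t - 1] for t in range(1, len(xs))]
    correction = sum(residuals) / len(residuals)  # stand-in for the BP net
    return phi * xs[-1] + correction

history = [40.0, 42.0, 41.0, 43.0, 44.0, 43.5]   # CPU utilization samples (%)
forecast = hybrid_forecast(history)
print(round(forecast, 2))                        # next-step prediction
```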

  14. Improving the Performance of CPU Architectures by Reducing the Operating System Overhead (Extended Version)

    Directory of Open Access Journals (Sweden)

    Zagan Ionel

    2016-07-01

    Full Text Available Predictable CPU architectures that run hard real-time tasks must execute them in isolation in order to provide timing-analyzable execution for real-time systems. The major problems for real-time operating systems stem from excessive jitter, introduced mainly through task switching, which can violate deadline requirements and, consequently, the predictability of hard real-time tasks. New requirements also arise for real-time operating systems used in mixed-criticality systems, where the execution of hard real-time applications requires timing predictability. The present article discusses several solutions to improve the performance of CPU architectures and overcome the overhead of the operating system. This paper focuses on the innovative CPU implementation named nMPRA-MT, designed for small real-time applications. This implementation uses replication and remapping techniques for the program counter, general-purpose registers and pipeline registers, enabling multiple threads to share a single pipeline assembly line. In order to increase predictability, the proposed architecture partially removes hazard situations at the expense of larger execution latency per instruction.

  15. Novel automatic mapping technology on CPU-GPU heterogeneous systems: source-to-source automatic mapping for CPU-GPU architectures

    Institute of Scientific and Technical Information of China (English)

    朱正东; 刘袁; 魏洪昌; 颜康; 王寅峰; 董小社

    2015-01-01

    Aiming at the difficulty of developing and porting GPU-based applications, a mapping approach is proposed that converts serial source code into equivalent parallel source code. The approach acquires the hierarchies of parallelizable loops from the serial source, establishes the correspondence between loop structures and GPU threads, and generates the kernel function code for the GPU. Meanwhile, CPU-side control code is generated according to the read/write attributes of variable references. A compiler prototype based on this approach was implemented, which translates C code into CUDA code automatically. Functionality and performance evaluations of the prototype show that the generated CUDA code is functionally equivalent to the original C code, with significantly improved performance, thus easing the difficulty of porting compute-intensive applications to CPU-GPU heterogeneous systems.

  16. Design of a Playable Electronic Keyboard Based on LabVIEW

    Institute of Scientific and Technical Information of China (English)

    2013-01-01

    This paper describes the design of a playable electronic keyboard implemented with LabVIEW. By setting the sound frequencies and using common LabVIEW programming controls, the design makes the keyboard produce sound and realizes the basic functions of keyboard playing; the debugging results are good.

  17. A Finger-Pressing Position Detector for Assisting People with Developmental Disabilities to Control Their Environmental Stimulation through Fine Motor Activities with a Standard Keyboard

    Science.gov (United States)

    Shih, Ching-Hsiang

    2012-01-01

    This study used a standard keyboard with a newly developed finger-pressing position detection program (FPPDP), i.e. a new software program, which turns a standard keyboard into a finger-pressing position detector, to evaluate whether two people with developmental disabilities would be able to actively perform fine motor activities to control their…

  19. Instruction of Keyboarding Skills: A Whole Language Approach to Teaching Functional Literacy Skills to Students Who are Blind and Have Additional Disabilities

    Science.gov (United States)

    Stauffer, Mary

    2008-01-01

    This article describes an unconventional method to teach un-contracted braille reading and writing skills to students who are blind and have additional disabilities. It includes a keyboarding curriculum that focuses on the whole language approach to literacy. A special feature is the keyboard that is adapted with braille symbols. Un-contracted…

  20. Keyboarding Compared with Handwriting on a High-Stakes Writing Assessment: Student Choice of Composing Medium, Raters' Perceptions, and Text Quality

    Science.gov (United States)

    Whithaus, Carl; Harrison, Scott B.; Midyette, Jeb

    2008-01-01

    This article examines the influence of keyboarding versus handwriting in a high-stakes writing assessment. Conclusions are based on data collected from a pilot project to move Old Dominion University's Exit Exam of Writing Proficiency from a handwritten format into a dual-option format (i.e., the students may choose to handwrite or keyboard the…

  1. Internal Structure and Development of Keyboard Skills in Spanish-Speaking Primary-School Children with and without LD in Writing

    Science.gov (United States)

    Jiménez, Juan E.; Marco, Isaac; Suárez, Natalia; González, Desirée

    2017-01-01

    This study had two purposes: examining the internal structure of the "Test Estandarizado para la Evaluación Inicial de la Escritura con Teclado" (TEVET; Spanish Keyboarding Writing Test), and analyzing the development of keyboarding skills in Spanish elementary school children with and without learning disabilities (LD) in writing. A…

  2. The Influence of Emotion on Keyboard Typing: An Experimental Study Using Auditory Stimuli.

    Directory of Open Access Journals (Sweden)

    Po-Ming Lee

    Full Text Available In recent years, a novel approach to emotion recognition based on keystroke dynamics has been reported. The advantages of this approach are that the data used are rather non-intrusive and easy to obtain. However, previous studies included only limited investigations of the phenomenon itself. Hence, this study aimed to examine the source of variance in keyboard typing patterns caused by emotions. A controlled experiment was conducted to collect subjects' keystroke data in different emotional states induced by the International Affective Digitized Sounds (IADS). Two-way Valence (3) × Arousal (3) ANOVAs were used to examine the collected dataset. The results of the experiment indicate that the effect of arousal is significant for keystroke duration (p < .05) and keystroke latency (p < .01), but not for the accuracy rate of keyboard typing. The size of the emotional effect is small compared to the individual variability. Our findings support the conclusion that keystroke duration and latency are influenced by arousal. The finding about the size of the effect suggests that the accuracy of emotion recognition technology could be further improved if personalized models are utilized. Notably, the experiment was conducted using standard instruments and hence is expected to be highly reproducible.
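
The two keystroke features the study analyses can be computed as follows. This is a sketch over a hypothetical event format (the study's instruments are not specified at this level of detail): duration is the interval from key-down to key-up of the same key, and latency is the interval between successive key-down events.

```python
# Sketch: extract keystroke duration and latency from a stream of
# (time_ms, 'down'|'up', key) events. The event format is hypothetical.

def keystroke_features(events):
    downs, durations, latencies = {}, [], []
    last_down = None
    for t, kind, key in events:
        if kind == 'down':
            if last_down is not None:
                latencies.append(t - last_down)   # down-to-down interval
            last_down = t
            downs[key] = t
        else:  # 'up'
            durations.append(t - downs.pop(key))  # down-to-up interval

    return durations, latencies

events = [(0, 'down', 'a'), (90, 'up', 'a'),
          (150, 'down', 'b'), (240, 'up', 'b')]
durations, latencies = keystroke_features(events)
# durations == [90, 90]; latencies == [150]
```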

  3. A high-performance keyboard neural prosthesis enabled by task optimization.

    Science.gov (United States)

    Nuyujukian, Paul; Fan, Joline M; Kao, Jonathan C; Ryu, Stephen I; Shenoy, Krishna V

    2015-01-01

    Communication neural prostheses are an emerging class of medical devices that aim to restore efficient communication to people suffering from paralysis. These systems rely on an interface with the user, either via the use of a continuously moving cursor (e.g., mouse) or the discrete selection of symbols (e.g., keyboard). In developing these interfaces, many design choices have a significant impact on the performance of the system. The objective of this study was to explore the design choices of a continuously moving cursor neural prosthesis and optimize the interface to maximize information theoretic performance. We swept interface parameters of two keyboard-like tasks to find task and subject-specific optimal parameters as measured by achieved bitrate using two rhesus macaques implanted with multielectrode arrays. In this paper, we present the highest performing free-paced neural prosthesis under any recording modality with sustainable communication rates of up to 3.5 bits/s. These findings demonstrate that meaningful high performance can be achieved using an intracortical neural prosthesis, and that, when optimized, these systems may be appropriate for use as communication devices for those with physical disabilities.

  4. Effects of altered auditory feedback across effector systems: production of melodies by keyboard and singing.

    Science.gov (United States)

    Pfordresher, Peter Q; Mantell, James T

    2012-01-01

    We report an experiment that tested whether effects of altered auditory feedback (AAF) during piano performance differ from its effects during singing. These effector systems differ with respect to the mapping between motor gestures and pitch content of auditory feedback. Whereas this action-effect mapping is highly reliable during phonation in any vocal motor task (singing or speaking), mapping between finger movements and pitch occurs only in limited situations, such as piano playing. Effects of AAF in both tasks replicated results previously found for keyboard performance (Pfordresher, 2003), in that asynchronous (delayed) feedback slowed timing whereas alterations to feedback pitch increased error rates, and the effect of asynchronous feedback was similar in magnitude across tasks. However, manipulations of feedback pitch had larger effects on singing than on keyboard production, suggesting effector-specific differences in sensitivity to action-effect mapping with respect to feedback content. These results support the view that disruption from AAF is based on abstract, effector independent, response-effect associations but that the strength of associations differs across effector systems.

  5. With practice, keyboard shortcuts become faster than menu selection: A crossover interaction.

    Science.gov (United States)

    Remington, Roger W; Yuen, Ho Wang Holman; Pashler, Harold

    2016-03-01

    It is widely believed that a graphical user interface (GUI) is superior to a command line interface (CLI) for novice users, but less efficient than the CLI after practice. However, there appears to be no detailed study of the crossover interaction that this implies. The rate of learning may shed light on the reluctance of experienced users to adopt keyboard shortcuts, even though, when mastered, shortcut use would reduce task completion times. We report 2 experiments examining changes in the efficiency of and preference for keyboard input versus GUI with practice. Experiment 1 had separate groups of subjects make speeded choice responses to words on a 20-item list either by clicking on a tab in a dropdown menu (GUI version) or by entering a preassigned keystroke combination (CLI version). The predicted crossover was observed after approximately 200 responses. Experiment 2 showed that following training all but 1 subject in the CLI-trained group chose to continue using shortcuts. These results suggest that frequency of shortcut use is a function of ease of retrieval, which develops over the course of multiple repetitions of the command. We discuss possible methods for promoting shortcut learning and the practical implications of our results.

  6. Brain responses to altered auditory feedback during musical keyboard production: an fMRI study.

    Science.gov (United States)

    Pfordresher, Peter Q; Mantell, James T; Brown, Steven; Zivadinov, Robert; Cox, Jennifer L

    2014-03-27

    Alterations of auditory feedback during piano performance can be profoundly disruptive. Furthermore, different alterations can yield different types of disruptive effects. Whereas alterations of feedback synchrony disrupt performed timing, alterations of feedback pitch contents can disrupt accuracy. The current research tested whether these behavioral dissociations correlate with differences in brain activity. Twenty pianists performed simple piano keyboard melodies while being scanned in a 3-T magnetic resonance imaging (MRI) scanner. In different conditions they experienced normal auditory feedback, altered auditory feedback (asynchronous delays or altered pitches), or control conditions that excluded movement or sound. Behavioral results replicated past findings. Neuroimaging data suggested that asynchronous delays led to increased activity in Broca's area and its right homologue, whereas disruptive alterations of pitch elevated activations in the cerebellum, area Spt, inferior parietal lobule, and the anterior cingulate cortex. Both disruptive conditions increased activations in the supplementary motor area. These results provide the first evidence of neural responses associated with perception/action mismatch during keyboard production.

  7. Optimisation of an exemplar oculomotor model using multi-objective genetic algorithms executed on a GPU-CPU combination.

    Science.gov (United States)

    Avramidis, Eleftherios; Akman, Ozgur E

    2017-03-24

    Parameter optimisation is a critical step in the construction of computational biology models. In eye movement research, computational models are increasingly important to understanding the mechanistic basis of normal and abnormal behaviour. In this study, we considered an existing neurobiological model of fast eye movements (saccades), capable of generating realistic simulations of: (i) normal horizontal saccades; and (ii) infantile nystagmus - pathological ocular oscillations that can be subdivided into different waveform classes. By developing appropriate fitness functions, we optimised the model to existing experimental saccade and nystagmus data, using a well-established multi-objective genetic algorithm. This algorithm required the model to be numerically integrated for very large numbers of parameter combinations. To address this computational bottleneck, we implemented a master-slave parallelisation, in which the model integrations were distributed across the compute units of a GPU, under the control of a CPU. While previous nystagmus fitting has been based on reproducing qualitative waveform characteristics, our optimisation protocol enabled us to perform the first direct fits of a model to experimental recordings. The fits to normal eye movements showed that although saccades of different amplitudes can be accurately simulated by individual parameter sets, a single set capable of fitting all amplitudes simultaneously cannot be determined. The fits to nystagmus oscillations systematically identified the parameter regimes in which the model can reproduce a number of canonical nystagmus waveforms to a high accuracy, whilst also identifying some waveforms that the model cannot simulate. Using a GPU to perform the model integrations yielded a speedup of around 20 compared to a high-end CPU. 
The results of both optimisation problems enabled us to quantify the predictive capacity of the model, suggesting specific modifications that could expand its repertoire of

  8. CUDA Based Performance Evaluation of the Computational Efficiency of the DCT Image Compression Technique on Both the CPU and GPU

    Directory of Open Access Journals (Sweden)

    Kgotlaetsile Mathews Modieginyane

    2013-06-01

    Full Text Available Recent advances in computing, such as massively parallel GPUs (Graphical Processing Units), coupled with the need to store and deliver large quantities of digital data, especially images, have brought a number of challenges for computer scientists, the research community and other stakeholders. These challenges, such as the prohibitively large cost of manipulating digital data, have been the focus of the research community in recent years and have led to the investigation of image compression techniques that can achieve excellent results. One such technique is the Discrete Cosine Transform (DCT), which helps separate an image into parts of differing frequencies and has the advantage of excellent energy compaction. This paper investigates the use of the Compute Unified Device Architecture (CUDA) programming model to implement the DCT-based Cordic based Loeffler algorithm for efficient image compression. The computational efficiency is analyzed and evaluated on both the CPU and GPU. The PSNR (Peak Signal to Noise Ratio) is used to evaluate image reconstruction quality in this paper. The results are presented and discussed.
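
For reference, the transform being accelerated is the 2-D DCT-II. The paper uses the fast Cordic based Loeffler algorithm on CUDA; the naive O(N^4) definition below only illustrates what is computed, including the energy-compaction property mentioned above (a flat block collapses into a single DC coefficient).

```python
# Naive reference 2-D DCT-II (orthonormal form) on an NxN block.
import math

def dct2(block):
    N = len(block)
    def c(k):
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

flat = [[10.0] * 8 for _ in range(8)]  # constant 8x8 block
coeffs = dct2(flat)
# Energy compaction: coeffs[0][0] == 80.0 (up to float rounding) and
# every other coefficient is ~0.
```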

  9. Fast computation of myelin maps from MRI T₂ relaxation data using multicore CPU and graphics card parallelization.

    Science.gov (United States)

    Yoo, Youngjin; Prasloski, Thomas; Vavasour, Irene; MacKay, Alexander; Traboulsee, Anthony L; Li, David K B; Tam, Roger C

    2015-03-01

    To develop a fast algorithm for computing myelin maps from multiecho T2 relaxation data using parallel computation with multicore CPUs and graphics processing units (GPUs). Using an existing MATLAB (MathWorks, Natick, MA) implementation with basic (nonalgorithm-specific) parallelism as a guide, we developed a new version to perform the same computations but using C++ to optimize the hybrid utilization of multicore CPUs and GPUs, based on experimentation to determine which algorithmic components would benefit from CPU versus GPU parallelization. Using 32-echo T2 data of dimensions 256 × 256 × 7 from 17 multiple sclerosis patients and 18 healthy subjects, we compared the two methods in terms of speed, myelin values, and the ability to distinguish between the two patient groups using Student's t-tests. The new method was faster than the MATLAB implementation by 4.13 times for computing a single map and 14.36 times for batch-processing 10 scans. The two methods produced very similar myelin values, with small and explainable differences that did not impact the ability to distinguish the two patient groups. The proposed hybrid multicore approach represents a more efficient alternative to MATLAB, especially for large-scale batch processing. © 2014 Wiley Periodicals, Inc.

  10. Using a French keyboard layout on an American keyboard in a networked environment

    Institute of Scientific and Technical Information of China (English)

    杨宁霞

    2015-01-01

    In the era of information technology, building a digital learning environment is an important part of teaching. We work in an environment accustomed to the American keyboard, and most French learners are not in a position to equip their computers with a French keyboard. Moreover, out of habit, even where a French keyboard is available, it is inconvenient for English and Chinese input. To solve this problem, this paper studies and introduces the input of French characters on the American keyboard.

  11. Accelerating hyper-spectral data processing on the multi-CPU and multi-GPU heterogeneous computing platform

    Science.gov (United States)

    Zhang, Lei; Gao, Jiao Bo; Hu, Yu; Wang, Ying Hui; Sun, Ke Feng; Cheng, Juan; Sun, Dan Dan; Li, Yu

    2017-02-01

    In research on hyper-spectral imaging spectrometers, processing the huge volume of image data is a difficult problem for all researchers: the data rate is on the order of several hundred megabytes per second, and parallel computing is the only practical way to keep up. With the development of multi-core CPUs and GPUs, parallel computing on these devices is increasingly applied to large-scale data processing. In this paper, we propose a new parallel solution for hyper-spectral data processing based on a heterogeneous multi-CPU and multi-GPU computing platform. We use OpenMP to control the multi-core CPUs and CUDA to schedule the parallel computation across multiple GPUs. Experimental results show that hyper-spectral data processing on the heterogeneous multi-CPU and multi-GPU platform is markedly faster than the traditional serial algorithm running on a single CPU core. This research is significant for the engineering application of the windowing Fourier transform imaging spectrometer.
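Stripped of the OpenMP/CUDA specifics, the core of such a pipeline is frame-level data parallelism: each spectral frame is independent, so frames can be distributed across workers. A minimal stdlib-only Python sketch, with a stand-in per-frame transform in place of the paper's windowed Fourier transform:

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame(frame):
    # Stand-in per-frame computation: mean-centre the samples.  In the
    # paper's pipeline this slot would hold the windowed Fourier
    # transform of one hyper-spectral frame.
    mean = sum(frame) / len(frame)
    return [x - mean for x in frame]

def process_batch(frames, workers=4):
    # Frames are independent, so they can be farmed out to a worker
    # pool, mirroring an OpenMP parallel-for over frames.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_frame, frames))
```

In the real system each worker would instead dispatch its frame to a CPU core or a CUDA stream.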

  12. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    Science.gov (United States)

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel Core 2 Quad Q6600 CPU and a GeForce 8800GT GPU, with software support from OpenMP and CUDA. It was tested in three parallelization settings: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus one core of the CPU, and (c) a four-core CPU plus a general-purpose GPU. To take full advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied in setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation over the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
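The load-prediction dynamic scheduling idea can be sketched as follows: the scheduler keeps a running estimate of each device's throughput from previous time steps and splits the next step's work proportionally. The moving-average form and the function names below are illustrative assumptions, not the paper's exact algorithm:

```python
def split_workload(total_items, cpu_rate, gpu_rate):
    # Split the next batch between CPU and GPU in proportion to their
    # predicted throughputs (items per second).
    gpu_items = round(total_items * gpu_rate / (cpu_rate + gpu_rate))
    return total_items - gpu_items, gpu_items

class LoadPredictor:
    """Exponential moving average of a device's observed throughput,
    updated after every simulation step (illustrative form)."""
    def __init__(self, initial_rate, alpha=0.5):
        self.rate = initial_rate
        self.alpha = alpha

    def update(self, items_done, seconds):
        observed = items_done / seconds
        self.rate = self.alpha * observed + (1 - self.alpha) * self.rate
```

A GPU four times faster than the CPU would thus be handed four fifths of each step's work, and the split self-corrects as the measured rates drift.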

  13. Improvement of heat pipe performance through integration of a coral biomaterial wick structure into the heat pipe of a CPU cooling system

    Science.gov (United States)

    Putra, Nandy; Septiadi, Wayan Nata

    2016-08-01

    The very high heat flux dissipated by a Central Processing Unit (CPU) can no longer be handled by a conventional single-phase cooling system, so CPU thermal management is moving towards two-phase systems that keep CPUs below their maximum temperature. The heat pipe is one of the emerging cooling devices for this purpose because of its superior efficiency and independence from external energy input. The goal of this research is to improve the performance of a heat pipe by integrating a biomaterial as the wick structure. In this work, the heat pipe was made from copper pipe and the biomaterial wick structure was made from tabulate coral with a mean pore diameter of 52.95 µm; for comparison, a wick was also fabricated from sintered Cu powder with a mean pore diameter of 58.57 µm. The working fluid was water. The experiment used a processor as the heat source and a simulator plate to measure the heat flux. Using coral as the wick structure can improve heat-pipe performance, decreasing the simulator-plate temperature at the maximum heat load by as much as 38.6% compared with a conventional copper heat sink, and by as much as 44.25 °C compared with a heat pipe using a sintered Cu-powder wick.

  14. Improvement of heat pipe performance through integration of a coral biomaterial wick structure into the heat pipe of a CPU cooling system

    Science.gov (United States)

    Putra, Nandy; Septiadi, Wayan Nata

    2017-04-01

    The very high heat flux dissipated by a Central Processing Unit (CPU) can no longer be handled by a conventional single-phase cooling system, so CPU thermal management is moving towards two-phase systems that keep CPUs below their maximum temperature. The heat pipe is one of the emerging cooling devices for this purpose because of its superior efficiency and independence from external energy input. The goal of this research is to improve the performance of a heat pipe by integrating a biomaterial as the wick structure. In this work, the heat pipe was made from copper pipe and the biomaterial wick structure was made from tabulate coral with a mean pore diameter of 52.95 µm; for comparison, a wick was also fabricated from sintered Cu powder with a mean pore diameter of 58.57 µm. The working fluid was water. The experiment used a processor as the heat source and a simulator plate to measure the heat flux. Using coral as the wick structure can improve heat-pipe performance, decreasing the simulator-plate temperature at the maximum heat load by as much as 38.6% compared with a conventional copper heat sink, and by as much as 44.25 °C compared with a heat pipe using a sintered Cu-powder wick.

  15. Development of a GPU and multi-CPU accelerated non-isothermal, multiphase, incompressible Navier-Stokes solver with phase-change

    Science.gov (United States)

    Forster, Christopher J.; Glezer, Ari; Smith, Marc K.

    2012-11-01

    Accurate 3D boiling simulations often use excessive computational resources - in many cases taking several weeks or months to solve. To alleviate this problem, a parallelized, multiphase fluid solver using a particle level-set (PLS) method was implemented. The PLS method offers increased accuracy in interface location tracking, the ability to capture sharp interfacial features with minimal numerical diffusion, and significantly improved mass conservation. The independent nature of the particles is amenable to parallelization using graphics processing unit (GPU) and multi-CPU implementations, since each particle can be updated simultaneously. The present work will explore the speedup provided by GPU and multi-CPU implementations and determine the effectiveness of PLS for accurately capturing sharp interfacial features. The numerical model will be validated by comparison to experimental data for vibration-induced droplet atomization. Further development will add the physics of boiling in the presence of acoustic fields. It is hoped that the resultant boiling simulations will be sufficiently improved to allow for optimization studies of various boiling configurations to be performed in a timely manner. Supported by ONR.

  16. Commodity CPU-GPU System for Low-Cost , High-Performance Computing

    Science.gov (United States)

    Wang, S.; Zhang, S.; Weiss, R. M.; Barnett, G. A.; Yuen, D. A.

    2009-12-01

    We have put together a desktop computer system for under $2500 from commodity components, consisting of one quad-core CPU (Intel Core 2 Quad Q6600 Kentsfield 2.4 GHz) and two high-end GPUs (NVIDIA's GeForce GTX 295 and Tesla C1060); a 1200-watt power supply is required. On this commodity system, we have constructed an easy-to-use hybrid computing environment in which the Message Passing Interface (MPI) is used for managing the workloads, transferring data among the GPU devices, and minimizing the need for CPU memory. Test runs using the MAGMA (Matrix Algebra on GPU and Multicore Architectures) library show that the speedups for double-precision calculations can be greater than 10 (GPU vs. CPU) and larger still (> 20) for single-precision calculations. In addition, we have enabled the combination of MATLAB with CUDA for interactive visualization through MPI: two GPU devices are used for simulation and one GPU device is used for visualizing the results as the simulation proceeds. Our experience with this commodity system has shown that running multiple applications on one GPU device, or running one application across multiple GPU devices, can be done as conveniently as on CPUs. With NVIDIA CEO Jen-Hsun Huang's claim that over the next 6 years GPU processing power will increase by 570x, compared with 3x for CPUs, future low-cost commodity computers such as ours may be a remedy for the long wait queues of the world's supercomputers, especially for small- and mid-scale computation. Our goal here is to explore the limits and capabilities of this emerging technology and to ready ourselves to run large-scale simulations on the next generation of computing environments, which we believe will hybridize CPU and GPU architectures.

  17. The effect of a touch-typing program on keyboarding skills of higher education students with and without learning disabilities.

    Science.gov (United States)

    Weigelt Marom, H; Weintraub, N

    2015-12-01

    This study examined the effect of a touch-typing instructional program on the keyboarding skills of higher education students. One group included students with developmental learning disabilities (LD, n=44), consisting of students with reading and/or handwriting difficulties; the second included normally achieving students (NA, n=30). The main goal of the program was to increase keyboarding speed while maintaining accuracy. The program comprised 14 bi-weekly touch-typing lessons using the "Easy-Fingers" software (Weigelt Marom & Weintraub, 2010a), which combines a touch-typing instructional program with a keystroke-logging program that documents the time and accuracy of each typed key. The effect of the program was examined by comparing keyboarding skills at the beginning of the program (pre-test), at its end (post-test), and 3 months after its termination (long-term). Results showed that at the end of the program the keyboarding speed of the NA students decreased while that of the students with LD somewhat increased. In the long-term evaluation, both groups significantly improved their speed compared to the pre-test, and in both cases high accuracy (above 95%) was maintained. These results suggest that touch-typing instruction may benefit students in general, and more specifically students with LD studying in higher education, who often use computers to circumvent their handwriting difficulties.

  18. A virtual keyboard based on a single camera

    Institute of Scientific and Technical Information of China (English)

    杨骋

    2012-01-01

    A design for a camera-based virtual keyboard is proposed, in which finger movements under a camera replace a physical keyboard for human-computer interaction. The design needs only a single ordinary camera: the hand is tracked using its colour and contour information, and the position and action of the fingers are judged in real time. Taking the motion of striking a key as the trigger, the system implements a virtual keyboard that requires no physical device.

  19. Microswitch and Keyboard-Emulator Technology to Facilitate the Writing Performance of Persons with Extensive Motor Disabilities

    Science.gov (United States)

    Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Green, Vanessa; Oliva, Doretta; Lang, Russell

    2011-01-01

    This study assessed the effectiveness of microswitches for simple responses (i.e., partial hand closure, vocalization, and hand stroking) and a keyboard emulator to facilitate the writing performance of three participants with extensive motor disabilities. The study was carried out according to an ABAB design. During the A phases, the participants…

  20. The Feasibility of Using OpenCL Instead of OpenMP for Parallel CPU Programming

    OpenAIRE

    Karimi, Kamran

    2015-01-01

    OpenCL, along with CUDA, is one of the main tools used to program GPGPUs. However, it allows running the same code on multi-core CPUs too, making it a rival for the long-established OpenMP. In this paper we compare OpenCL and OpenMP when developing and running compute-heavy code on a CPU. Both ease of programming and performance aspects are considered. Since, unlike a GPU, no memory copy operation is involved, our comparisons measure the code generation quality, as well as thread management e...

  1. UniMAT releases the UN CPU224 DC/DC/DC

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    In December 2011, UniMAT officially released the UN CPU224 DC/DC/DC, order number UN214-1AD23-0XB0. This 200-series own-brand PLC controller module was developed by UniMAT's R&D department over the course of a year, drawing on many years of market experience with PLC modules. While remaining compatible with the functions of Siemens PLCs, it adds a number of optimizations and functional enhancements.

  2. VMware vSphere performance designing CPU, memory, storage, and networking for performance-intensive workloads

    CERN Document Server

    Liebowitz, Matt; Spies, Rynardt

    2014-01-01

    Covering the latest VMware vSphere software, an essential book aimed at solving vSphere performance problems before they happen VMware vSphere is the industry's most widely deployed virtualization solution. However, if you improperly deploy vSphere, performance problems occur. Aimed at VMware administrators and engineers and written by a team of VMware experts, this resource provides guidance on common CPU, memory, storage, and network-related problems. Plus, step-by-step instructions walk you through techniques for solving problems and shed light on possible causes behind the problems. Divu

  3. CMSA: a heterogeneous CPU/GPU computing system for multiple similar RNA/DNA sequence alignment.

    Science.gov (United States)

    Chen, Xi; Wang, Chen; Tang, Shanjiang; Yu, Ce; Zou, Quan

    2017-06-24

    Multiple sequence alignment (MSA) is a classic and powerful technique for sequence analysis in bioinformatics. With the rapid growth of biological datasets, MSA must be parallelized to keep its running time acceptable. Although there is a great deal of work on MSA, existing approaches are either insufficient or rest on implicit assumptions that limit their generality. First, the characteristics of users' sequences, including the size of the dataset and the lengths of the sequences, can take arbitrary values and are generally unknown before submission, a fact that previous work unfortunately ignores. Second, the center star strategy is well suited to aligning similar sequences, but its first stage, center sequence selection, is highly time-consuming and requires further optimization. Moreover, on the heterogeneous CPU/GPU platform, prior studies parallelize MSA on the GPU only, leaving the CPUs idle during the computation. Co-run computation, by contrast, maximizes the utilization of the computing resources by running the workload on both CPU and GPU simultaneously. This paper presents CMSA, a robust and efficient MSA system for large-scale datasets on the heterogeneous CPU/GPU platform. It performs and optimizes multiple sequence alignment automatically for users' submitted sequences without any assumptions. CMSA adopts the co-run computation model so that both CPU and GPU devices are fully utilized, and it proposes an improved center star strategy that reduces the time complexity of its center sequence selection process from O(mn^2) to O(mn). The experimental results show that CMSA achieves up to an 11× speedup and outperforms the state-of-the-art software. CMSA focuses on multiple similar RNA/DNA sequence alignment and proposes a novel bitmap-based algorithm to improve the center star strategy. We can conclude that harvesting the high performance of modern GPU is a promising approach to
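The abstract does not spell out how the O(mn) center selection works; one plausible sketch (an assumption for illustration, not CMSA's actual algorithm) scores each sequence by the k-mers it shares with the rest of the set, using a single global k-mer table instead of all-pairs alignment:

```python
from collections import Counter

def kmers(seq, k=3):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def select_center(seqs, k=3):
    """Pick the sequence sharing the most k-mers with the whole set.
    One pass over each sequence (O(mn) overall) replaces the all-pairs
    comparison (O(mn^2)) of the naive center star first stage."""
    table = Counter()
    for s in seqs:
        for km in set(kmers(s, k)):
            table[km] += 1

    def score(s):
        # For each distinct k-mer of s, count the OTHER sequences containing it.
        return sum(table[km] - 1 for km in set(kmers(s, k)))

    return max(seqs, key=score)
```

The selected center is then aligned pairwise against every other sequence, as in the standard center star strategy.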

  4. Brownian dynamics simulations on CPU and GPU with BD_BOX.

    Science.gov (United States)

    Długosz, Maciej; Zieliński, Paweł; Trylska, Joanna

    2011-09-01

    There has been growing interest in simulating biological processes under in vivo conditions due to recent advances in experimental techniques dedicated to study single particle behavior in crowded environments. We have developed a software package, BD_BOX, for multiscale Brownian dynamics simulations. BD_BOX can simulate either single molecules or multicomponent systems of diverse, interacting molecular species using flexible, coarse-grained bead models. BD_BOX is written in C and employs modern computer architectures and technologies; these include MPI for distributed-memory architectures, OpenMP for shared-memory platforms, NVIDIA CUDA framework for GPGPU, and SSE vectorization for CPU. Copyright © 2011 Wiley Periodicals, Inc.

  5. Improvement of CPU time of Linear Discriminant Function based on MNM criterion by IP

    Directory of Open Access Journals (Sweden)

    Shuichi Shinmura

    2014-05-01

    Full Text Available Revised IP-OLDF (optimal linear discriminant function by integer programming) is a linear discriminant function that minimizes the number of misclassifications (NM) of training samples by integer programming (IP). However, IP requires substantial computation (CPU) time. In this paper, we propose reducing CPU time by using linear programming (LP). In the first phase, Revised LP-OLDF is applied to all cases, which are categorized into two groups: those that are classified correctly and those that are misclassified by support vectors (SVs). In the second phase, Revised IP-OLDF is applied to the cases misclassified by SVs. This method is called Revised IPLP-OLDF. In this research, we evaluate whether the NM of Revised IPLP-OLDF is a good estimate of the minimum number of misclassifications (MNM) obtained by Revised IP-OLDF. Four kinds of real data (Iris data, Swiss banknote data, student data, and CPD data) are used as training samples, and four sets of 20,000 resampled cases generated from these data are used as evaluation samples. In total there are 149 models formed from all combinations of independent variables in these data. The NMs and CPU times of the 149 models are compared between Revised IPLP-OLDF and Revised IP-OLDF. The following results are obtained: (1) Revised IPLP-OLDF significantly improves CPU time. (2) For the training samples, all 149 NMs of Revised IPLP-OLDF equal the MNM of Revised IP-OLDF. (3) For the evaluation samples, most NMs of Revised IPLP-OLDF equal the NM of Revised IP-OLDF. (4) The generalization abilities of both discriminant functions are concluded to be high, because the differences between the error rates of the training and evaluation samples are almost within 2%. Therefore, Revised IPLP-OLDF is recommended for the analysis of big data instead of Revised IP-OLDF. Next, Revised IPLP-OLDF is compared with LDF and logistic regression by 100-fold cross-validation using 100 resampled samples. Means of error rates of

  6. Vectorized K-Means Algorithm on Heterogeneous CPU/MIC Architecture

    Institute of Scientific and Technical Information of China (English)

    谭郁松; 伍复慧; 吴庆波; 陈微; 孙晓利

    2014-01-01

    In the era of big data, K-Means is an important clustering algorithm in data mining, and processing massive high-dimensional data places strong performance demands on K-Means implementations. The recently introduced MIC (Many Integrated Core) architecture provides both thread-level parallelism between cores and instruction-level parallelism within each core, making MIC a good choice for algorithm acceleration. This paper first describes the basic K-Means algorithm and analyzes its bottleneck, then proposes a novel vectorized K-Means algorithm with an optimized vector data layout that achieves higher parallel performance. The vectorized algorithm is implemented on a heterogeneous CPU/MIC platform, and MIC optimization strategies are explored for applications outside traditional HPC (high-performance computing). Experimental results show that the vectorized K-Means algorithm has excellent performance and scalability.
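The kind of data-parallel kernel that benefits from MIC's SIMD lanes can be illustrated with one K-Means iteration written in NumPy, where the point-to-centroid distance matrix is computed by array broadcasting instead of per-point loops (an illustrative sketch, not the paper's implementation):

```python
import numpy as np

def kmeans_step(points, centroids):
    """One vectorized K-Means iteration: assignment and update phases
    expressed as whole-array operations, the style that maps onto
    MIC/SIMD vector lanes."""
    # (n, 1, d) - (1, k, d) -> (n, k, d): all pairwise differences at once.
    diff = points[:, None, :] - centroids[None, :, :]
    dist2 = (diff ** 2).sum(axis=2)          # squared distances, shape (n, k)
    labels = dist2.argmin(axis=1)            # nearest centroid per point
    new_centroids = np.array([
        points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
        for j in range(centroids.shape[0])   # keep empty clusters in place
    ])
    return labels, new_centroids
```

The same contiguous, loop-free data layout is what a structure-of-arrays C implementation would present to the MIC's vector units.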

  7. Finger exercise with keyboard playing in adults with cerebral palsy: A preliminary study.

    Science.gov (United States)

    Chong, Hyun Ju; Cho, Sung-Rae; Jeong, Eunju; Kim, Soo Ji

    2013-01-01

    The purpose of this study is to examine the effects of Therapeutic Instrument Music Performance (TIMP) for fine motor exercises in adults with cerebral palsy (CP). Individuals with CP (n = 5) received a total of twelve, 30-min TIMP sessions, two days per week for six to nine weeks. Pre- and post-Music Instrument Digital Interface (MIDI) data were used as a measure of hand function. Pre-velocity was significantly different from the normative data obtained from typical adults (n = 20); however, post-velocity did not yield significance, specifically in the second and fifth fingers, indicating improvement in hand function for the adults with cerebral palsy. The finding implies that TIMP using keyboard playing may effectively improve manual dexterity and velocity of finger movement. Based on these results, future program development of instrumental playing for adults with CP is called for to enhance both their independent living skills and quality of life.

  8. Localised strain sensing of dielectric elastomers in a stretchable soft-touch musical keyboard

    Science.gov (United States)

    Xu, Daniel; Tairych, Andreas; Anderson, Iain A.

    2015-04-01

    We present a new sensing method that can measure the strain at different locations in a dielectric elastomer. The method uses multiple sensing frequencies to target different regions of the same dielectric elastomer, simultaneously detecting position and pressure through only a single pair of connections. The dielectric elastomer is modelled as an RC transmission line, and its internal voltage and current distribution is used to determine localised capacitance changes resulting from contact and pressure. This sensing method greatly simplifies high-degree-of-freedom systems and does not require any modifications to the dielectric elastomer or sensing hardware. It is demonstrated on a multi-touch musical keyboard made from a single low-cost carbon-based dielectric elastomer, with 4 distinct musical tones mapped along a length of 0.1 m. Loudness was controlled by the amount of pressure applied to each of these 4 positions.
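The RC transmission-line view can be illustrated by computing the input impedance of a discretized, open-ended RC ladder: high sensing frequencies are shunted by the first few capacitive segments, while low frequencies reach further down the line, which is what lets different frequencies target different regions. The component values below are arbitrary illustrative numbers, not the keyboard's parameters:

```python
import math

def input_impedance(freq_hz, n_segments, r_per_seg, c_per_seg):
    """Input impedance of a uniform open-ended RC ladder, built by
    folding from the far end back towards the input terminals."""
    omega = 2 * math.pi * freq_hz
    z = None  # open-circuit termination at the far end
    for _ in range(n_segments):
        zc = 1 / (1j * omega * c_per_seg)          # shunt capacitor of this segment
        z_par = zc if z is None else zc * z / (zc + z)
        z = r_per_seg + z_par                      # series resistance of this segment
    return z
```

A localised capacitance change near the far end shifts the impedance seen at low frequencies much more than at high frequencies, which is the basis for resolving touch position from a single pair of connections.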

  9. CPU-12, a novel synthesized oxazolo[5,4-d]pyrimidine derivative, showed superior anti-angiogenic activity.

    Science.gov (United States)

    Liu, Jiping; Deng, Ya-Hui; Yang, Ling; Chen, Yijuan; Lawali, Manzo; Sun, Li-Ping; Liu, Yu

    2015-09-01

    Angiogenesis is a crucial requirement for malignant tumor growth, progression and metastasis: tumor-derived factors stimulate the formation of new blood vessels that actively support tumor growth and spread, and various drugs have been applied to inhibit tumor angiogenesis. CPU-12, 4-chloro-N-(4-((2-(4-methoxyphenyl)-5-methyloxazolo[5,4-d]pyrimidin-7-yl)amino)phenyl)benzamide, is a novel oxazolo[5,4-d]pyrimidine derivative that showed potent activity in inhibiting VEGF-induced angiogenesis in vitro and ex vivo. In cell toxicity experiments, CPU-12 significantly inhibited human umbilical vein endothelial cell (HUVEC) proliferation in a dose-dependent manner with a low IC50 value of 9.30 ± 1.24 μM. In vitro, CPU-12 remarkably inhibited HUVEC migration, chemotactic invasion and capillary-like tube formation in a dose-dependent manner; ex vivo, it effectively inhibited new microvessels sprouting from the rat aortic ring. In addition, the downstream signalings of vascular endothelial growth factor receptor-2 (VEGFR-2), including the phosphorylation of PI3K, ERK1/2 and p38 MAPK, were effectively down-regulated by CPU-12. This evidence suggests that the angiogenic response induced via VEGFR through distinct signal transduction pathways regulating proliferation, migration and tube formation of endothelial cells is significantly inhibited by the novel small-molecule compound CPU-12 in vitro and ex vivo. In conclusion, CPU-12 showed superior anti-angiogenic activity in vitro. Copyright © 2015 The Authors. Production and hosting by Elsevier B.V. All rights reserved.

  10. Designing of Vague Logic Based 2-Layered Framework for CPU Scheduler

    Directory of Open Access Journals (Sweden)

    Supriya Raheja

    2016-01-01

    Full Text Available Fuzzy-based CPU schedulers have attracted great interest from operating-system designers because of their ability to handle the imprecise information associated with tasks. This paper extends the fuzzy-based round-robin scheduler to a Vague Logic Based Round Robin (VBRR) scheduler. The VBRR scheduler works on a two-layered framework. At the first layer, the scheduler has a vague inference system that handles the impreciseness of tasks using vague logic. At the second layer, the VBRR scheduling algorithm schedules the tasks. The VBRR scheduler has a learning capability through which it intelligently adapts an optimum length for the time quantum. An optimum time quantum reduces the overhead on the scheduler by eliminating unnecessary context switches, which improves the overall performance of the system. The work is simulated using MATLAB and compared with the conventional round-robin scheduler and two other fuzzy-based approaches to CPU scheduling. The simulation analysis and results prove the effectiveness and efficiency of the VBRR scheduler.
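The effect of adapting the time quantum can be sketched with a toy round-robin simulation: recomputing the quantum from the mean remaining burst of the ready queue (a crude stand-in for the vague-inference layer, not the paper's VBRR rule) cuts the number of dispatches, i.e. context switches, relative to a fixed quantum:

```python
from collections import deque

def round_robin_switches(bursts, quantum=None):
    """Count dispatches in a round-robin simulation.  With quantum=None
    the quantum is recomputed each dispatch as the mean remaining burst
    of the ready queue; otherwise the given fixed quantum is used."""
    remaining = list(bursts)
    ready = deque(range(len(bursts)))
    dispatches = 0
    while ready:
        q = quantum or max(1, round(sum(remaining[i] for i in ready) / len(ready)))
        i = ready.popleft()
        run = min(q, remaining[i])      # run the task for up to one quantum
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)             # not finished: back of the queue
        dispatches += 1
    return dispatches
```

For three tasks of burst 10 each, a fixed quantum of 2 costs 15 dispatches, while the adaptive quantum finishes each task in a single dispatch.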

  11. CPU Cooling of Desktop PC by Closed-end Oscillating Heat-pipe (CEOHP

    Directory of Open Access Journals (Sweden)

    S. Rittidech

    2005-01-01

    Full Text Available The CEOHP cooling module consists of two main parts: the aluminum housing and the CEOHP itself. The housing was designed to suit the CEOHP, with holes drilled into it so the CEOHP could be inserted. The CEOHP design employed copper tubes: two sets of capillary tubes with an inner diameter of 0.002 m, an evaporator length of 0.05 m and a condenser length of 0.16 m, each with six meandering turns. The evaporator section was embedded in the aluminum housing and attached to the thermal pad of a Pentium 4 CPU, model SL6PB, 2.26 GHz, while the condenser section was embedded in the cooling-fin housing and cooled by forced convection. R134a was used as the working fluid with a filling ratio of 50%. In the experiment, the CPU chip, dissipating 58 W, reached 70°C. Fan speeds of 2000 and 4000 rpm were tested, and it was found that cooling performance increases with fan speed. The CEOHP cooling module had better thermal performance than a conventional heat sink.

  12. Influence of the compiler on multi-CPU performance of WRFv3

    Directory of Open Access Journals (Sweden)

    T. Langkamp

    2011-07-01

    Full Text Available The Weather Research and Forecasting system version 3 (WRFv3) is an open-source, state-of-the-art numerical Regional Climate Model used in climate-related sciences. In recent years the model has been successfully optimized on a wide variety of clustered compute nodes connected with high-speed interconnects. This is currently the most used hardware architecture for high-performance computing (Shainer et al., 2009). As such, understanding the influence of hardware like the CPU and its interconnects, and of the software, on WRF's performance is crucial for saving computing time. This is important because computing time in general is scarce, resource intensive, and hence very expensive.

    This paper evaluates the influence of different compilers on WRF's performance, which was found to differ by up to 26%. The paper also evaluates the performance of different versions of the Message Passing Interface library, software needed for multi-CPU runs, and of different WRF versions. Neither showed a significant influence on performance for this test case on the High Performance Cluster (HPC) hardware used.

    Emphasis is also laid on the non-standard method of performance measurement that was applied, which was required because of performance fluctuations between identical runs on the HPC used. These fluctuations are caused by contention for network resources, a phenomenon examined for many HPCs (Wright et al., 2009).

  13. Hybrid computing: CPU+GPU co-processing and its application to tomographic reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Agulleiro, J.I.; Vazquez, F.; Garzon, E.M. [Supercomputing and Algorithms Group, Associated Unit CSIC-UAL, University of Almeria, 04120 Almeria (Spain); Fernandez, J.J., E-mail: JJ.Fernandez@csic.es [National Centre for Biotechnology, National Research Council (CNB-CSIC), Campus UAM, C/Darwin 3, Cantoblanco, 28049 Madrid (Spain)

    2012-04-15

    Modern computers are equipped with powerful computing engines like multicore processors and GPUs. The 3DEM community has rapidly adapted to this scenario and many software packages now make use of high performance computing techniques to exploit these devices. However, the implementations thus far are purely focused on either GPUs or CPUs. This work presents a hybrid approach that collaboratively combines the GPUs and CPUs available in a computer and applies it to the problem of tomographic reconstruction. Proper orchestration of workload in such a heterogeneous system is an issue. Here we use an on-demand strategy whereby the computing devices request a new piece of work to do when idle. Our hybrid approach thus takes advantage of the whole computing power available in modern computers and further reduces the processing time. This CPU+GPU co-processing can be readily extended to other image processing tasks in 3DEM. -- Highlights: • Hybrid computing allows full exploitation of the power (CPU+GPU) in a computer. • Proper orchestration of workload is managed by an on-demand strategy. • Total number of threads running in the system should be limited to the number of CPUs.
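The on-demand strategy described above can be sketched with a shared work queue: each computing device, when idle, pulls the next slab to process, so faster devices naturally claim more work without any static partitioning. A minimal Python sketch with threads standing in for the CPU and GPU workers:

```python
import queue
import threading

def co_process(slabs, devices):
    """Distribute slabs on demand: any idle device thread pulls the
    next slab from a shared queue until the queue is drained."""
    work = queue.Queue()
    for s in slabs:
        work.put(s)
    done = {name: [] for name in devices}

    def worker(name):
        while True:
            try:
                slab = work.get_nowait()
            except queue.Empty:
                return  # no work left: this device is finished
            # A real worker would run the CPU or GPU reconstruction
            # kernel on `slab` here; we only record who took it.
            done[name].append(slab)

    threads = [threading.Thread(target=worker, args=(n,)) for n in devices]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done
```

Every slab is processed exactly once, and the per-device share emerges from relative speed rather than from an up-front split.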

  14. The “Chimera”: An Off-The-Shelf CPU/GPGPU/FPGA Hybrid Computing Platform

    Directory of Open Access Journals (Sweden)

    Ra Inta

    2012-01-01

    Full Text Available The nature of modern astronomy means that a number of interesting problems exhibit a substantial computational bound, and this situation is gradually worsening. Scientists, increasingly fighting for valuable resources on conventional high-performance computing (HPC) facilities, often with a limited customizable user environment, are increasingly looking to hardware-acceleration solutions. We describe here a heterogeneous CPU/GPGPU/FPGA desktop computing system (the “Chimera”), built with commercial off-the-shelf components. We show that this platform may be a viable alternative solution to many common computationally bound problems found in astronomy, though not without significant challenges. The most significant bottleneck in pipelines involving real data is most likely to be the interconnect (in this case the PCI Express bus residing on the CPU motherboard). Finally, we speculate on the merits of our Chimera system within the entire landscape of parallel computing, through the analysis of representative problems from UC Berkeley's “Thirteen Dwarves”.

  15. Multithreading Image Processing in Single-core and Multi-core CPU using Java

    Directory of Open Access Journals (Sweden)

    Alda Kika

    2013-10-01

    Full Text Available Multithreading has been shown to be a powerful approach for boosting system performance. One good example of an application class that benefits from multithreading is image processing, which demands considerable resources and run time because the calculations are often done on a matrix of pixels. The Java programming language supports multithreaded programming as part of the language itself instead of delegating threads to the operating system. In this paper we explore the performance of Java image processing applications designed with a multithreading approach. To test how multithreading influences program performance, we ran several image processing algorithms implemented in Java using both a sequential single-threaded and a multithreaded approach on single- and multi-core CPUs. The experiments varied not only the platforms and the complexity of the algorithms, but also the sizes of the images and the number of threads used in the multithreaded runs. Performance improved on single-core and multi-core CPUs in different ways, depending on image size, algorithm complexity, and platform.
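
    The row-band decomposition the experiments rely on can be sketched briefly. The study itself used Java; the following is a minimal Python analogue built on a thread pool (in CPython, pure-Python pixel loops are serialized by the GIL, so real speedups come from native kernels — the structure, not the timings, is the point here). The function names and the invert operation are illustrative, not from the paper.

```python
from concurrent.futures import ThreadPoolExecutor

def invert_rows(image, rows):
    # Per-pixel operation on one band of rows (invert an 8-bit value).
    for r in rows:
        image[r] = [255 - p for p in image[r]]

def invert_parallel(image, n_threads=4):
    # Split row indices into n_threads interleaved bands and process
    # each band in its own worker thread.
    bands = [list(range(i, len(image), n_threads)) for i in range(n_threads)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        for band in bands:
            pool.submit(invert_rows, image, band)
    # Leaving the with-block waits for all submitted bands to finish.
    return image
```

    Varying `n_threads` against the image size reproduces, in miniature, the kind of sweep the paper performs.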

  16. Influence of the compiler on multi-CPU performance of WRFv3

    Directory of Open Access Journals (Sweden)

    T. Langkamp

    2011-03-01

    Full Text Available The Weather Research and Forecasting system version 3 (WRFv3) is an open source, state of the art numerical regional climate model used in climate related sciences. Over the years the model has been successfully optimized on a wide variety of clustered compute nodes connected with high speed interconnects, currently the most widely used hardware architecture for high-performance computing. As such, understanding WRF's dependency on the various hardware elements like the CPU and its interconnects, as well as on the software, is crucial for saving computing time. This is important because computing time in general is scarce, resource intensive, and hence very expensive.

    This paper evaluates the influence of different compilers on WRF's performance, which was found to differ by up to 26%. The paper also evaluates the performance of different versions of the message passing interface library, software needed for multi-CPU runs, and of different WRF versions. Neither showed a significant influence on performance for this test case on the High Performance Cluster (HPC) hardware used.

    Some emphasis is also laid on the non-standard method of performance measurement applied, which was required because of performance fluctuations between identical runs on the HPC used. These fluctuations are caused by contention for network resources, a phenomenon examined for many HPCs.

  17. Deferred High Level Trigger in LHCb: A Boost to CPU Resource Utilization

    CERN Document Server

    Frank, Markus; v.Herwijnen, E; Jost, B; Neufeld, N

    2014-01-01

    The LHCb experiment at the LHC accelerator at CERN collects collisions of particle bunches at 40 MHz. After a first level of hardware trigger with an output rate of 1 MHz, the physically interesting collisions are selected by running dedicated trigger algorithms in the High Level Trigger (HLT) computing farm. This farm consists of up to roughly 25000 CPU cores in roughly 1600 physical nodes, each equipped with at least 1 TB of local storage space. This work describes an architecture that trebles the available CPU power of the HLT farm, given that in previous years the LHC collider delivered stable physics beams only about 30% of the time. The gain is achieved by splitting the event selection process in two: a first stage reduces the data taken during stable beams and buffers the preselected particle collisions locally; a second processing stage, running constantly at lower priority, then finalizes the event filtering process and benefits fully from the time when LHC does not deliver stable beams, e.g. while preparing a ne...
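
    The headline gain can be checked with back-of-the-envelope arithmetic: if online filtering can run only during stable beams, deferring buffered events extends computing to the full wall time. This is an idealized sketch; actual gains depend on buffer sizes, duty cycles, and overheads.

```python
# If filtering runs only during stable beams (~30% of wall time),
# buffering collisions locally and also filtering during the remaining
# ~70% roughly trebles the usable CPU time (idealized estimate).
stable_beam_fraction = 0.30
effective_gain = 1.0 / stable_beam_fraction

print(round(effective_gain, 1))  # ~3.3x, i.e. roughly "treble"
```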

  18. Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation.

    Science.gov (United States)

    Ziegenhein, Peter; Pirner, Sven; Ph Kamerling, Cornelis; Oelfke, Uwe

    2015-08-07

    Monte Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes conventional implementations of MC algorithms require to deliver sufficiently accurate results on high resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs can work with several hundred gigabytes of RAM, the typical GPU memory limitation does not apply to our implementation, and high resolution clinical plans can be calculated.

  19. Fast hybrid CPU- and GPU-based CT reconstruction algorithm using air skipping technique.

    Science.gov (United States)

    Lee, Byeonghun; Lee, Ho; Shin, Yeong Gil

    2010-01-01

    This paper presents a fast hybrid CPU- and GPU-based CT reconstruction algorithm that reduces the amount of back-projection work through air skipping based on polygon clipping. The algorithm easily and rapidly selects air areas, which have significantly higher contrast in each projection image, by applying the K-means clustering method on the CPU, and then generates boundary tables from the segmented air areas for identifying valid regions. Based on these boundary tables for each projection image, a clipped polygon indicating the active region for the back-projection operation on the GPU is determined on each volume slice. This polygon clipping process reduces the number of voxels to be back-projected, which leads to a faster GPU-based reconstruction method. The approach has been applied to a clinical data set and to Shepp-Logan phantom data sets with various ratios of air regions for quantitative and qualitative comparison of our method with conventional GPU-based reconstruction methods. The algorithm is shown to cut computation time in half without losing any diagnostic information, compared to conventional GPU-based approaches.

  20. Energy consumption optimization of the total-FETI solver by changing the CPU frequency

    Science.gov (United States)

    Horak, David; Riha, Lubomir; Sojka, Radim; Kruzik, Jakub; Beseda, Martin; Cermak, Martin; Schuchart, Joseph

    2017-07-01

    The energy consumption of supercomputers is one of the critical problems for the upcoming Exascale supercomputing era. Awareness of power and energy consumption is required on both the software and hardware sides. This paper deals with the energy consumption evaluation of solvers based on the Finite Element Tearing and Interconnect (FETI) method, an established approach for solving linear systems arising in real-world engineering problems. We have evaluated the effect of the CPU frequency on the energy consumption of the FETI solver using a linear elasticity 3D cube synthetic benchmark. In this problem, we have evaluated the effect of frequency tuning on the energy consumption of the essential processing kernels of the FETI method. The paper provides results for two types of frequency tuning: (1) static tuning and (2) dynamic tuning. For static tuning experiments, the frequency is set before execution and kept constant during runtime. For dynamic tuning, the frequency is changed during program execution to adapt the system to the actual needs of the application. The paper shows that static tuning brings up to 12% energy savings when compared to the default CPU settings (the highest clock rate). Dynamic tuning improves this further by up to 3%.
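
    Why static tuning can beat the highest clock rate is visible in a toy energy model (all constants and function names below are illustrative, not measured values from the paper): for a compute-bound kernel, runtime scales roughly as 1/f while dynamic power grows superlinearly with f, so the minimum-energy frequency sits below the maximum.

```python
def energy_per_task(f_ghz, f_max_ghz, p_static_w=5.0, p_dyn_max_w=20.0):
    # Toy DVFS model: runtime of a compute-bound kernel scales as 1/f,
    # dynamic power roughly as f^3 (voltage tracks frequency), and static
    # power is paid for the whole runtime. Constants are illustrative.
    runtime = f_max_ghz / f_ghz                            # 1.0 at f_max
    power = p_static_w + p_dyn_max_w * (f_ghz / f_max_ghz) ** 3
    return power * runtime

def best_static_frequency(freqs_ghz):
    # "Static tuning": pick one frequency before the run and keep it.
    f_max = max(freqs_ghz)
    return min(freqs_ghz, key=lambda f: energy_per_task(f, f_max))

freqs_ghz = [1.2, 1.6, 2.0, 2.4, 3.0]
best = best_static_frequency(freqs_ghz)
```

    Dynamic tuning generalizes this by re-running the selection per kernel phase, since memory-bound phases tolerate lower frequencies with little runtime penalty.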

  1. Connection of PS/2 Standard Keyboard and MCU

    Institute of Scientific and Technical Information of China (English)

    邬法磊; 李晓阳; 尘源

    2012-01-01

    This paper analyzes the communication protocol of the PS/2 standard keyboard and describes the scanning principle behind its encoding. It also presents an interface design between a 51-series MCU and a PS/2 keyboard, and displays the keyboard input on an LCD1602.
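
    For reference, a PS/2 device-to-host frame is 11 bits sampled on falling clock edges: a start bit (0), eight data bits LSB first, an odd parity bit, and a stop bit (1). The frame logic can be sketched compactly; the 51-MCU version in the paper would do the same in C on an external-interrupt pin, so this Python function is just an illustration of the protocol.

```python
def decode_ps2_frame(bits):
    # bits: 11 samples of the DATA line, one per falling clock edge,
    # ordered start, d0..d7 (LSB first), parity, stop.
    if len(bits) != 11 or bits[0] != 0 or bits[10] != 1:
        raise ValueError("framing error")
    data = bits[1:9]
    byte = sum(b << i for i, b in enumerate(data))
    # PS/2 uses odd parity: data bits plus parity must hold an odd
    # number of ones.
    if (sum(data) + bits[9]) % 2 != 1:
        raise ValueError("parity error")
    return byte
```

    For example, the scan code set 2 make code of the A key, 0x1C, arrives on the wire as start 0, data 0,0,1,1,1,0,0,0 (LSB first), parity 0, stop 1.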

  2. Implementation of Linux 0.01 Keyboard Interrupt Functions in C

    Institute of Scientific and Technical Information of China (English)

    于江涛; 曲波

    2011-01-01

    The paper describes the main techniques for rewriting the keyboard interrupt functions of Linux 0.01 in C, including the programming methods and key code for the ASCII map tables, the routine jump table, the keyboard handling routines, and the keyboard interrupt handler.
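
    The core of such a module is a pair of translation tables indexed by scan code plus state for the modifier keys. A hypothetical fragment in Python (Linux 0.01 keeps these as assembler/C arrays; the key codes below are standard set-1 values, but the table contents are a tiny illustrative subset, not the kernel's):

```python
# Hypothetical subset of a set-1 scan-code map in the spirit of the
# key_map / shift_map tables described above.
KEY_MAP   = {0x1E: 'a', 0x30: 'b', 0x02: '1', 0x39: ' '}
SHIFT_MAP = {0x1E: 'A', 0x30: 'B', 0x02: '!', 0x39: ' '}
LSHIFT_MAKE, LSHIFT_BREAK = 0x2A, 0xAA  # make/break codes of left shift

def translate(scancodes):
    shift, out = False, []
    for code in scancodes:
        if code == LSHIFT_MAKE:
            shift = True
        elif code == LSHIFT_BREAK:
            shift = False
        elif code & 0x80:
            continue  # other break codes: key released, nothing to emit
        else:
            table = SHIFT_MAP if shift else KEY_MAP
            if code in table:
                out.append(table[code])
    return ''.join(out)
```

    The interrupt handler proper would read the code from port 0x60 and feed it to a routine like this, queueing the translated characters for the tty layer.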

  3. 16-bit CPU Design and FPGA Implementation

    Institute of Scientific and Technical Information of China (English)

    白广治; 陈泉根

    2007-01-01

    To develop a central processing unit (CPU) independently, a 16-bit CPU was studied. A decode-and-execute scheme that minimizes the number of execution cycles is proposed. The CPU was designed with a top-down methodology and coded in the Verilog hardware description language, and the code was verified both by simulation and on a field programmable gate array (FPGA). The results show that the CPU's execution efficiency is considerably improved over general-purpose CPUs such as Intel's. This independently developed CPU can be used as an IP core in FPGA applications as well as in SoC designs.

  4. Vector Regroup for the Domestically Produced CPU SW-1600

    Institute of Scientific and Technical Information of China (English)

    魏帅; 赵荣彩; 姚远

    2011-01-01

    Since vectorized regroup instructions are comparatively complex and different instructions incur different delays, it is hard to find a uniform, efficient vector regroup algorithm. This paper analyzes the shift and insert/extract instructions offered by the domestically produced CPU SW-1600 and presents an optimal algorithm that realizes vector regroup using only shift or only insert/extract instructions, as well as an efficient algorithm that combines the two instruction types. Experiments show that the algorithms vectorize programs well: the speedup for integer data reaches 7.31, and that for complex double-precision floating-point programs reaches 1.83.
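
    The two instruction families can be mimicked with a small lane-level model that shows why combining them helps: a whole-vector shift moves many elements at once but only in a fixed pattern, while insert/extract moves one element anywhere. Everything below (lane count, operation names, semantics) is an illustrative sketch, not the SW-1600 ISA:

```python
def vshift_up(vec, n):
    # Whole-vector element shift: drop the first n lanes, zero-fill the tail.
    return vec[n:] + [0] * n

def vinsert(vec, lane, val):
    # Insert/extract style operation: place one scalar into a chosen lane.
    out = list(vec)
    out[lane] = val
    return out

def interleave_low(a, b):
    # Regroup the low halves of two 4-lane vectors as [a0, b0, a1, b1].
    # Here single-lane inserts do the rearrangement; a shift-only version
    # needs more steps, which is why mixing both instruction types pays off.
    out = vshift_up(a, 0)        # copy of a: [a0, a1, a2, a3]
    out = vinsert(out, 1, b[0])
    out = vinsert(out, 2, a[1])
    out = vinsert(out, 3, b[1])
    return out
```

    A compiler's regroup pass would pick, per pattern, whichever instruction mix minimizes the summed instruction delays.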

  5. An hybrid CPU-GPU framework for quantitative follow-up of abdominal aortic aneurysm volume by CT angiography

    Science.gov (United States)

    Kauffmann, Claude; Tang, An; Therasse, Eric; Soulez, Gilles

    2010-03-01

    We developed a hybrid CPU-GPU framework enabling semi-automated segmentation of abdominal aortic aneurysm (AAA) on Computed Tomography Angiography (CTA) examinations. AAA maximal diameter (D-max) and volume measurements, and their progression between 2 examinations, can be generated by this software, improving patient follow-up. In order to improve workflow efficiency, some segmentation tasks were implemented and executed on the graphics processing unit (GPU). A GPU-based algorithm is used to automatically segment the lumen of the aneurysm within a short computing time. In a second step, the user interacts with the software to validate the boundaries of the intra-luminal thrombus (ILT) on GPU-based curved image reformations. Automatic computation of D-max and volume is performed on the 3D AAA model. Clinical validation was conducted on 34 patients, each having 2 consecutive MDCT examinations within a minimum interval of 6 months. The AAA segmentation was performed twice by an experienced radiologist (reference standard) and once by 3 unsupervised technologists on all 68 MDCT examinations. The ICC for intra-observer reproducibility was 0.992 (>=0.987) for D-max and 0.998 (>=0.994) for volume measurement. The ICC for inter-observer reproducibility was 0.985 (0.977-0.990) for D-max and 0.998 (0.996-0.999) for volume measurement. Semi-automated AAA segmentation for volume follow-up was more than twice as sensitive as D-max follow-up, while providing equivalent reproducibility.

  6. The PAMELA storage and control unit

    Energy Technology Data Exchange (ETDEWEB)

    Casolino, M. [INFN, Structure of Rome II, Physics Department, University of Rome II 'Tor Vergata', I-00133 Rome (Italy)]. E-mail: Marco.Casolino@roma2.infn.it; Altamura, F.; Basili, A.; De Pascale, M.P.; Minori, M.; Nagni, M.; Picozza, P.; Sparvoli, R. [all: INFN, Structure of Rome II, Physics Department, University of Rome II 'Tor Vergata', I-00133 Rome (Italy)]; Adriani, O.; Papini, P.; Spillantini, P. [all: INFN, Structure of Florence, Physics Department, University of Florence, I-50019 Sesto Fiorentino (Italy)]; Castellini, G. [CNR-Istituto di Fisica Applicata 'Nello Carrara', I-50127 Florence (Italy)]; Boezio, M. [INFN, Structure of Trieste, Physics Department, University of Trieste, I-34147 Trieste (Italy)]

    2007-03-01

    The PAMELA Storage and Control Unit (PSCU) comprises a Central Processing Unit (CPU) and a Mass Memory (MM). The CPU of the experiment is based on the ERC-32 architecture (a SPARC v7 implementation) running a real-time operating system (RTEMS). The main purpose of the CPU is to handle slow control and acquisition and to store data on a 2 GB MM. Communication between PAMELA and the satellite is done via a 1553B bus. Data acquisition from the sub-detectors is performed via a 2 MB/s interface. Download from the PAMELA MM to the satellite's main storage unit is handled by a 16 MB/s bus. The maximum daily amount of data transmitted to ground is about 20 GB.

  7. High Performance Commodity Networking in a 512-CPU Teraflop Beowulf Cluster for Computational Astrophysics

    CERN Document Server

    Dubinski, J; Pen, U L; Loken, C; Martin, P; Dubinski, John; Humble, Robin; Loken, Chris; Martin, Peter; Pen, Ue-Li

    2003-01-01

    We describe a new 512-CPU Beowulf cluster with Teraflop performance dedicated to problems in computational astrophysics. The cluster incorporates a cubic network topology based on inexpensive commodity 24-port gigabit switches and point-to-point connections through the second gigabit port on each Linux server. This configuration has network performance competitive with more expensive cluster configurations and is scalable to much larger systems using other network topologies. Networking represents only about 9% of our total system cost of USD$561K. The standard Top 500 HPL Linpack benchmark rating is 1.202 Teraflops on 512 CPUs, so computing costs by this measure are $0.47/Megaflop. We also describe 4 different astrophysical applications, using complex parallel algorithms for studying large-scale structure formation, galaxy dynamics, magnetohydrodynamic flows onto black holes, and planet formation, currently running on the cluster and achieving high parallel performance. The MHD code achieved a sustained speed of...

  8. An FPGA Based Multiprocessing CPU for Beam Synchronous Timing in CERN's SPS and LHC

    CERN Document Server

    Ballester, F J; Gras, J J; Lewis, J; Savioz, J J; Serrano, J

    2003-01-01

    The Beam Synchronous Timing system (BST) will be used around the LHC and its injector, the SPS, to broadcast timing messages and synchronize actions with the beam in different receivers. To achieve beam synchronization, the BST Master card encodes messages using the bunch clock, with a nominal value of 40.079 MHz for the LHC. These messages are produced by a set of tasks every revolution period, which is every 89 μs for the LHC and every 23 μs for the SPS, therefore imposing a hard real-time constraint on the system. To achieve determinism, the BST Master uses a dedicated CPU inside its main Field Programmable Gate Array (FPGA) featuring zero-delay hardware task switching and a reduced instruction set. This paper describes the BST Master card, stressing the main FPGA design, as well as the associated software, including the LynxOS driver and the tailor-made assembler.
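
    The quoted LHC revolution period follows directly from the bunch clock: one LHC turn contains 3564 bunch slots (a machine constant not stated in the abstract) clocked at the 40.079 MHz bunch frequency.

```python
# Revolution period = bunch slots per turn / bunch clock frequency.
BUNCH_CLOCK_HZ = 40.079e6   # nominal LHC bunch clock
LHC_BUNCH_SLOTS = 3564      # bunch slots in one LHC turn

revolution_period_us = LHC_BUNCH_SLOTS / BUNCH_CLOCK_HZ * 1e6  # ~89 us
```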

  9. Accelerating mesh-based Monte Carlo method on modern CPU architectures.

    Science.gov (United States)

    Fang, Qianqian; Kaeli, David R

    2012-12-01

    In this report, we discuss the use of contemporary ray-tracing techniques to accelerate 3D mesh-based Monte Carlo photon transport simulations. Single Instruction Multiple Data (SIMD) based computation and branch-less design are exploited to accelerate ray-tetrahedron intersection tests and yield a 2-fold speed-up for ray-tracing calculations on a multi-core CPU. As part of this work, we have also studied SIMD-accelerated random number generators and math functions. The combination of these techniques achieved an overall improvement of 22% in simulation speed as compared to using a non-SIMD implementation. We applied this new method to analyze a complex numerical phantom and both the phantom data and the improved code are available as open-source software at http://mcx.sourceforge.net/mmc/.

  10. Hybrid computing: CPU+GPU co-processing and its application to tomographic reconstruction.

    Science.gov (United States)

    Agulleiro, J I; Vázquez, F; Garzón, E M; Fernández, J J

    2012-04-01

    Modern computers are equipped with powerful computing engines like multicore processors and GPUs. The 3DEM community has rapidly adapted to this scenario and many software packages now make use of high performance computing techniques to exploit these devices. However, the implementations thus far are purely focused on either GPUs or CPUs. This work presents a hybrid approach that collaboratively combines the GPUs and CPUs available in a computer and applies it to the problem of tomographic reconstruction. Proper orchestration of workload in such a heterogeneous system is an issue. Here we use an on-demand strategy whereby the computing devices request a new piece of work to do when idle. Our hybrid approach thus takes advantage of the whole computing power available in modern computers and further reduces the processing time. This CPU+GPU co-processing can be readily extended to other image processing tasks in 3DEM. Copyright © 2012 Elsevier B.V. All rights reserved.
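
    The on-demand strategy is essentially a shared work queue from which each device-driving thread pulls the next slice when it becomes idle, so faster devices naturally take more work. A minimal Python sketch (slice processing is stubbed out with a callback; the paper's implementation would dispatch to GPU kernels or CPU threads):

```python
import queue
import threading

def co_process(n_slices, devices, process):
    # On-demand scheduling: every device thread repeatedly pulls a slice
    # index from a shared queue until no work remains.
    work = queue.Queue()
    for s in range(n_slices):
        work.put(s)
    results = {}

    def worker(dev):
        while True:
            try:
                s = work.get_nowait()
            except queue.Empty:
                return                 # device goes idle: all work done
            results[s] = process(dev, s)

    threads = [threading.Thread(target=worker, args=(d,)) for d in devices]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

    Because pulling is demand-driven, no static CPU/GPU workload split has to be guessed in advance.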

  11. Design and implementation of a low power mobile CPU based embedded system for artificial leg control.

    Science.gov (United States)

    Hernandez, Robert; Yang, Qing; Huang, He; Zhang, Fan; Zhang, Xiaorong

    2013-01-01

    This paper presents the design and implementation of a new neural-machine interface (NMI) for the control of artificial legs. The requirements of high accuracy, real-time processing, low power consumption, and mobility of the NMI place great challenges on the computation engine of the system. By utilizing the architectural features of a mobile embedded CPU, we are able to implement our decision-making algorithm, based on neuromuscular phase-dependent support vector machines (SVM), with exceptional accuracy and processing speed. To demonstrate the superiority of our NMI, real-time experiments were performed on an able-bodied subject with a 20 ms window increment. The 20 ms testing yielded accuracies of 99.94% while executing our algorithm efficiently with less than 11% processor load.

  12. A Bit String Content Aware Chunking Strategy for Reduced CPU Energy on Cloud Storage

    Directory of Open Access Journals (Sweden)

    Bin Zhou

    2015-01-01

    Full Text Available In order to achieve energy savings and reduce the total cost of ownership, green storage has become a first priority for data centers. Detecting and deleting redundant data are the key factors in reducing the energy consumption of the CPU, while a high-performance, stable chunking strategy provides the groundwork for detecting redundant data. Existing chunking algorithms greatly reduce system performance when confronted with big data, and they waste a great deal of energy. Factors affecting chunking performance are analyzed and discussed in the paper, and a new fingerprint signature calculation is implemented. Furthermore, a Bit String Content Aware Chunking Strategy (BCCS) is put forward. This strategy reduces the cost of signature computation in the chunking process to improve system performance and cut down the energy consumption of the cloud storage data center. On the basis of the test scenarios and test data of this paper, the advantages of the chunking strategy are verified.
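
    Content-defined chunking in general declares a cut point wherever a cheap fingerprint of the most recent bytes satisfies a bit condition, so identical content yields identical chunks regardless of its position in the stream. The sketch below uses a deliberately simple windowed byte sum as the fingerprint; BCCS's actual bit-string fingerprint differs, and the window and mask constants are illustrative.

```python
def chunk_boundaries(data, window=4, mask=0x0F):
    # Declare a chunk boundary after position i when a fingerprint of the
    # last `window` bytes has its low bits all zero. Because the decision
    # depends only on local content, equal content cuts at the same
    # relative positions wherever it appears.
    boundaries = []
    for i in range(window - 1, len(data)):
        fp = sum(data[i - window + 1 : i + 1])
        if fp & mask == 0:
            boundaries.append(i + 1)
    return boundaries
```

    Fingerprints computed this cheaply are what lets the chunker keep up with big data while spending less CPU energy per byte than a full cryptographic hash at every offset.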

  13. A Robust Ultra-Low Voltage CPU Utilizing Timing-Error Prevention

    Directory of Open Access Journals (Sweden)

    Markus Hiienkari

    2015-04-01

    Full Text Available To minimize the energy consumption of a digital circuit, logic can be operated at sub- or near-threshold voltage. Operation in this region is challenging due to device and environment variations, and the resulting performance may not be adequate for all applications. This article presents two variants of a 32-bit RISC CPU targeted for near-threshold voltage. Both CPUs are placed on the same die and manufactured in a 28 nm CMOS process. They employ timing-error prevention with clock stretching to enable operation with minimal safety margins while maximizing performance and energy efficiency at a given operating point. Measurements show a minimum energy of 3.15 pJ/cyc at 400 mV, which corresponds to a 39% energy saving compared to operation based on static signoff timing.

  14. Comparative substrate specificity study of carboxypeptidase U (TAFIa) and carboxypeptidase N: development of highly selective CPU substrates as useful tools for assay development.

    Science.gov (United States)

    Willemse, Johan L; Polla, Magnus; Olsson, Thomas; Hendriks, Dirk F

    2008-01-01

    Measurement of procarboxypeptidase U (TAFI) in plasma by activity-based assays is complicated by the presence of plasma carboxypeptidase N (CPN). Accurate blank measurements, correcting for this interfering CPN activity, should therefore be performed. A selective CPU substrate would make proCPU determination much less time-consuming. We searched for selective and sensitive CPU substrates by kinetic screening of different Bz-Xaa-Arg (Xaa = a naturally occurring amino acid) substrates using a novel kinetic assay. The presence of an aromatic amino acid (Phe, Tyr, Trp) resulted in a fairly high selectivity for CPU, which was most pronounced with Bz-Trp-Arg, showing a 56-fold higher kcat/Km value for CPU compared to CPN. Next we performed chemical modifications on the structure of those aromatic amino acids. This approach resulted in a fully selective CPU substrate with a 2.5-fold increase in kcat value compared to the commonly used Hip-Arg (Bz-Gly-Arg). We demonstrated significant differences in substrate specificity between CPU and CPN that were previously not fully appreciated. The selective CPU substrate presented in this paper will allow straightforward determination of proCPU in plasma in the future.

  15. Alphabet Writing and Allograph Selection as Predictors of Spelling in Sentences Written by Spanish-Speaking Children Who Are Poor or Good Keyboarders.

    Science.gov (United States)

    Peake, Christian; Diaz, Alicia; Artiles, Ceferino

    This study examined how fluency in writing the alphabet from memory and allograph selection predict the fluency and accuracy of spelling in a free-writing sentence task when keyboarding. The Test Estandarizado para la Evaluación de la Escritura con Teclado ("Spanish Keyboarding Writing Test"; Jiménez, 2012) was used as the assessment tool. A sample of 986 children from Grades 1 through 3 was classified as poor or good keyboarders according to transcription skills measured by keyboard ability across the grades. Results demonstrated that fluency in writing the alphabet and selecting allographs mediated the differences in spelling between good and poor keyboarders in the free-writing task. Performance on the allograph selection task and on writing the alphabet from memory predicted fluency and spelling in the free-writing sentences to different degrees in each group, depending on the grade. These results suggest that early assessment of writing by means of the computer keyboard can provide clues and guidelines for intervention and training to strengthen specific transcription skills and improve keyboarded writing performance in the early primary grades.

  16. Improving fine motor activities of people with disabilities by using the response-stimulation strategy with a standard keyboard.

    Science.gov (United States)

    Chang, Man-Ling; Shih, Ching-Hsiang

    2014-08-01

    The aim of this study was to use the finger-pressing position detection program (FPPDP) with a standard keyboard to improve the fine motor activities of disabled people through environmental stimulation. The FPPDP is a software solution which turns a standard keyboard into a finger-pressing position detector. Using this technique, this study examined whether two students with developmental disabilities would be able to perform fine motor activities effectively through the triggering of environmental stimulation. The study was based on an ABAB design, and the results showed that both participants demonstrated an obvious increase in their willingness to perform target responses during the intervention phases. The practical and developmental implications of the findings are discussed.

  17. Assisting people with multiple disabilities to improve computer typing efficiency through a mouse wheel and on-screen keyboard software.

    Science.gov (United States)

    Shih, Ching-Hsiang

    2014-09-01

    The main purpose of this study was to find out whether three students with multiple disabilities could increase their keyboard typing performance by poking the standard mouse scroll wheel with the newly developed Dynamic Typing Assistive Program (DTAP) and the built-in On-Screen Keyboard (OSK) computer software. The DTAP is a software solution that allows users to complete typing tasks with OSK software easily, quickly, and accurately by poking the mouse wheel. This study was performed according to a multiple baseline design across participants, and the experimental data showed that all of the participants significantly increased their typing efficiency in the intervention phase. Moreover, this improved performance was maintained during the maintenance phase. Practical and developmental implications of the findings were discussed.

  18. Usability Evaluation of Windows 8 with Keyboard and Mouse: Challenges Related to Operating System Migration in Large Organizations

    OpenAIRE

    Jansen, Nikolas

    2013-01-01

    The purpose of this study has been to evaluate the usability of Windows 8 when using keyboard and mouse. Sub goals have been to uncover the usability problems and to generate recommendations for organizations upgrading to Windows 8.Usability testing according to ISO/IEC 25062:2006 was performed on users that had experience from Windows 7. Tests were performed on both Windows 7 and 8 for comparison purposes. Interviews with administrators involved in the operating system migration process were...

  19. Lodovico Giustini and the Emergence of the Keyboard Sonata in Italy

    Directory of Open Access Journals (Sweden)

    Freeman, Daniel E.

    2003-12-01

    Full Text Available The twelve keyboard sonatas, Op. 1, of Ludovico Giustini (1685-1743) constitute the earliest music explicitly indicated for performance on the pianoforte. They are attractive compositions in early classic style that exhibit an interesting mixture of influences from Italian keyboard music, the Italian violin sonata, and French harpsichord music. Their unusual format of dances, contrapuntal excursions, and novelties in four or five movements appears to have been inspired by the Op. 1 violin sonatas of Francesco Veracini, a fellow Tuscan. Although the only source of the sonatas is a print dated Florence, 1732, it is clear that the print could only have appeared between 1734 and 1740. It was probably disseminated out of Lisbon, not Florence, as a result of the patronage of the Infante Antonio of Portugal and Dom João de Seixas, a prominent courtier in Lisbon during the late 1730s.


  20. Conformational characteristics of dimeric subunits of RNA from energy minimization studies. Mixed sugar-puckered ApG, ApU, CpG, and CpU.

    Science.gov (United States)

    Thiyagarajan, P; Ponnuswamy, P K

    1981-09-01

    Following the procedure described in the preceding article, the low energy conformations located for the four dimeric subunits of RNA, ApG, ApU, CpG, and CpU are presented. The A-RNA type and Watson-Crick type helical conformations and a number of different kinds of loop promoting ones were identified as low energy in all the units. The 3E-3E and 3E-2E pucker sequences are found to be more or less equally preferred; the 2E-2E sequence is occasionally preferred, while the 2E-3E is highly prohibited in all the units. A conformation similar to the one observed in the drug-dinucleoside monophosphate complex crystals becomes a low energy case only for the CpG unit. The low energy conformations obtained for the four model units were used to assess the stability of the conformational states of the dinucleotide segments in the four crystal models of the tRNAPhe molecule. Information on the occurrence of the less preferred sugar-pucker sequences in the various loop regions in the tRNAPhe molecule has been obtained. A detailed comparison of the conformational characteristics of DNA and RNA subunits at the dimeric level is presented on the basis of the results.

  1. Performance of the OVERFLOW-MLP and LAURA-MLP CFD Codes on the NASA Ames 512 CPU Origin System

    Science.gov (United States)

    Taft, James R.

    2000-01-01

    The shared-memory Multi-Level Parallelism (MLP) technique, developed last year at NASA Ames, has been very successful in dramatically improving the performance of important NASA CFD codes. This new and very simple parallel programming technique was first inserted into the OVERFLOW production CFD code in FY 1998. The OVERFLOW-MLP code's parallel performance scaled linearly to 256 CPUs on the NASA Ames 256-CPU Origin 2000 system (steger). Overall performance exceeded 20.1 GFLOP/s, about 4.5x the performance of a dedicated 16-CPU C90 system. All of this was achieved without any major modification to the original vector-based code. The OVERFLOW-MLP code is now in production on the in-house Origin systems and is also used offsite at commercial aerospace companies. Partially as a result of this work, NASA Ames has purchased a new 512-CPU Origin 2000 system to further test the limits of parallel performance for NASA codes of interest. This paper presents the performance obtained from the latest optimization efforts on this machine for the LAURA-MLP and OVERFLOW-MLP codes. The Langley Aerothermodynamics Upwind Relaxation Algorithm (LAURA) code is a key simulation tool in the development of the next-generation shuttle, interplanetary reentry vehicles, and nearly all "X" plane development. This code sustains about 4-5 GFLOP/s on a dedicated 16-CPU C90. At this rate, expected workloads would require over 100 C90 CPU-years of computing over the next few calendar years. It is not feasible to expect that this would be affordable or available to the user community; dramatic performance gains on cheaper systems are needed. This code is expected to be perhaps the largest consumer of NASA Ames compute cycles per run in the coming year. The OVERFLOW CFD code is extensively used in the government and commercial aerospace communities to evaluate new aircraft designs. It is one of the largest consumers of NASA supercomputing cycles and large simulations of highly resolved full

  2. Design of a Double-Potential Symmetrical Extended Keyboard

    Institute of Scientific and Technical Information of China (English)

    姜磊

    2013-01-01

    This paper presents a high-efficiency MCU-based keyboard input system. Through a rational layout of discrete buttons and ordinary diodes, it realizes a keyboard circuit design in which N bidirectional I/O ports identify N²+N key values. Experiments show that the keyboard circuit has a simple structure, strong anti-interference capability, and good stability, and is of high practical value.
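
    The N²+N figure can be checked with a simple enumeration. Below is a counting sketch of one plausible diode arrangement (the abstract does not give the exact circuit, so the key-placement scheme is an assumption): one key per ordered pair of lines, where a diode fixes which line drives and which senses, plus one key from each line to each supply rail.

```python
from itertools import permutations

def key_positions(n):
    """Enumerate distinguishable key positions for n bidirectional I/O
    lines in one plausible diode arrangement (hypothetical):
    - n*(n-1) keys: one per ordered (driver line, sense line) pair,
      the diode direction making (i, j) distinct from (j, i)
    - 2*n keys: one from each line to GND and one to VCC
    Total: n*(n-1) + 2*n = n**2 + n.
    """
    pair_keys = [("pair", i, j) for i, j in permutations(range(n), 2)]
    rail_keys = [(rail, i) for rail in ("GND", "VCC") for i in range(n)]
    return pair_keys + rail_keys

for n in (2, 3, 8):
    assert len(key_positions(n)) == n * n + n
print(len(key_positions(8)))  # 8 I/O lines -> 72 keys
```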

  3. Typing performance and body discomfort among overweight and obese office workers: A pilot study of keyboard modification.

    Science.gov (United States)

    Smith, Matthew Lee; Pickens, Adam W; Ahn, SangNam; Ory, Marcia G; DeJoy, David M; Young, Kristi; Bishop, Gary; Congleton, Jerome J

    2015-01-01

    Obesity in the workplace is associated with loss of productivity, high medical care expenses, and increased rates of work-related injuries and illness. Thus, effective, low-cost interventions are needed to accommodate the size of today's obese office worker while alleviating potential physical harm associated with musculoskeletal disorders. Utilizing a sample of 22 overweight and obese office workers, this pilot study assessed the impact of introducing an alternative, more ergonomically sound keyboard on perceptions about design, acceptability, and usability; self-reported body discomfort; and typing productivity. Data were collected using self-reported questionnaires and objective typing tests administered before and after the intervention. The intervention duration was six weeks. After switching from their standard work keyboard to the alternative keyboard, all participants reported significant decreases in lower back discomfort (t = 2.14, P = 0.044), while obese participants reported significant decreases in both upper (t = 2.46, P = 0.032) and lower (t = 2.39, P = 0.036) back discomfort. No significant changes were observed in overall typing performance scores from baseline to follow-up. Findings suggest that such interventions may be introduced into the workforce with positive gains for workers without reducing short-term worker productivity.

  4. Overtaking CPU DBMSes with a GPU in whole-query analytic processing with parallelism-friendly execution plan optimization

    NARCIS (Netherlands)

    A. Agbaria (Adnan); D. Minor (David); N. Peterfreund (Natan); E. Rozenberg (Eyal); O. Rosenberg (Ofer); Huawei Research

    2016-01-01

    Existing work on accelerating analytic DB query processing with (discrete) GPUs fails to fully realize their potential for speedup through parallelism: published results do not achieve significant speedup over more performant CPU-only DBMSes when processing complete queries. This paper p

  5. Development of an SSVEP-based BCI spelling system adopting a QWERTY-style LED keyboard.

    Science.gov (United States)

    Hwang, Han-Jeong; Lim, Jeong-Hwan; Jung, Young-Jin; Choi, Han; Lee, Sang Woo; Im, Chang-Hwan

    2012-06-30

    In this study, we introduce a new mental spelling system based on steady-state visual evoked potentials (SSVEP), adopting a QWERTY-style layout keyboard with 30 LEDs flickering at different frequencies. The proposed electroencephalography (EEG)-based mental spelling system allows users to spell one target character per target selection, without the multiple-step selections adopted by conventional SSVEP-based mental spelling systems. Through preliminary offline experiments and online experiments, we confirmed that human SSVEPs elicited by visual flickering stimuli with a frequency resolution of 0.1 Hz could be classified with accuracy high enough for a practical brain-computer interface (BCI) system. During the preliminary offline experiments performed with five participants, we optimized various factors influencing the performance of the mental spelling system, such as distances between adjacent keys, light source arrangements, stimulation frequencies, recording electrodes, and visual angles. Additional online experiments were conducted with six participants to verify the feasibility of the optimized mental spelling system. The online experiments yielded an average typing speed of 9.39 letters per minute (LPM) with an average success rate of 87.58%, corresponding to an average information transfer rate of 40.72 bits per minute, demonstrating the high performance of the developed mental spelling system. Indeed, the average typing speed of 9.39 LPM attained in this study is one of the best LPM results reported in the BCI literature.
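
    The bits-per-minute figure is an information transfer rate; the standard Wolpaw formula relates it to the number of selectable targets, the success rate, and the selection speed. A sketch using the rounded numbers from the abstract (the exact reported 40.72 bits/min presumably derives from unrounded per-session data, so the rounded inputs below give only an approximation):

```python
import math

def wolpaw_itr(n_targets, accuracy, selections_per_min):
    """Wolpaw information transfer rate in bits/min:
    bits/selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)).
    """
    p = accuracy
    bits = math.log2(n_targets)
    if p > 0:
        bits += p * math.log2(p)
    if p < 1:
        bits += (1 - p) * math.log2((1 - p) / (n_targets - 1))
    return bits * selections_per_min

# 30-key QWERTY layout, 87.58% accuracy, 9.39 selections (letters) per minute
print(round(wolpaw_itr(30, 0.8758, 9.39), 1))  # ~35.3 with these rounded inputs
```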

  6. Computer keyboard interaction as an indicator of early Parkinson’s disease

    Science.gov (United States)

    Giancardo, L.; Sánchez-Ferro, A.; Arroyo-Gallego, T.; Butterworth, I.; Mendoza, C. S.; Montero, P.; Matarazzo, M.; Obeso, J. A.; Gray, M. L.; Estépar, R. San José

    2016-10-01

    Parkinson’s disease (PD) is a slowly progressing neurodegenerative disease with early manifestation of motor signs. Objective measurements of motor signs are of vital importance for diagnosing, monitoring and developing disease modifying therapies, particularly for the early stages of the disease when putative neuroprotective treatments could stop neurodegeneration. Current medical practice has limited tools to routinely monitor PD motor signs with enough frequency and without undue burden for patients and the healthcare system. In this paper, we present data indicating that the routine interaction with computer keyboards can be used to detect motor signs in the early stages of PD. We explore a solution that measures the key hold times (the time required to press and release a key) during the normal use of a computer without any change in hardware and converts it to a PD motor index. This is achieved by the automatic discovery of patterns in the time series of key hold times using an ensemble regression algorithm. This new approach discriminated early PD groups from controls with an AUC = 0.81 (n = 42/43; mean age = 59.0/60.1; women = 43%/60%; PD/controls). The performance was comparable or better than two other quantitative motor performance tests used clinically: alternating finger tapping (AUC = 0.75) and single key tapping (AUC = 0.61).
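
    The AUC values quoted above can be computed nonparametrically from any scalar motor index via the Mann-Whitney construction. A minimal sketch on hypothetical index values (not the study's data):

```python
def auc_from_scores(pd_scores, control_scores):
    """Empirical AUC: the probability that a randomly chosen PD index
    exceeds a randomly chosen control index (ties count 0.5).
    Equivalent to the Mann-Whitney U statistic divided by n1*n2."""
    wins = 0.0
    for a in pd_scores:
        for b in control_scores:
            wins += 1.0 if a > b else (0.5 if a == b else 0.0)
    return wins / (len(pd_scores) * len(control_scores))

# Hypothetical PD motor indices: PD subjects tend to score higher.
pd = [0.9, 0.8, 0.7, 0.4]
ctl = [0.5, 0.3, 0.2, 0.1]
print(auc_from_scores(pd, ctl))  # 0.9375
```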

  7. The Influence of Emotion on Keyboard Typing: An Experimental Study Using Auditory Stimuli.

    Science.gov (United States)

    Lee, Po-Ming; Tsui, Wei-Hsuan; Hsiao, Tzu-Chien

    2015-01-01

    In recent years, a novel approach to emotion recognition based on keystroke dynamics has been reported. Its advantages are that the data used are non-intrusive and easy to obtain. However, previous studies offered only limited investigation of the phenomenon itself. Hence, this study aimed to examine the source of variance in keyboard typing patterns caused by emotions. A controlled experiment was conducted to collect subjects' keystroke data in different emotional states induced by International Affective Digitized Sounds (IADS). Two-way Valence (3) x Arousal (3) ANOVAs were used to examine the collected dataset. The results of the experiment indicate that the effect of arousal is significant for both keystroke duration and keystroke latency; that is, keystroke duration and latency are influenced by arousal. The size of the effect suggests that the accuracy of emotion recognition technology could be further improved if personalized models are utilized. Notably, the experiment was conducted using standard instruments and hence is expected to be highly reproducible.

  8. Beyond Mouse and Keyboard: Expanding Design Considerations for Information Visualization Interactions.

    Science.gov (United States)

    Lee, Bongshin; Isenberg, P; Riche, N H; Carpendale, S

    2012-12-01

    The importance of interaction to Information Visualization (InfoVis) and, in particular, of the interplay between interactivity and cognition is widely recognized [12, 15, 32, 55, 70]. This interplay, combined with the demands from increasingly large and complex datasets, is driving the increased significance of interaction in InfoVis. In parallel, there have been rapid advances in many facets of interaction technologies. However, InfoVis interactions have yet to take full advantage of these new possibilities in interaction technologies, as they largely still employ the traditional desktop, mouse, and keyboard setup of WIMP (Windows, Icons, Menus, and a Pointer) interfaces. In this paper, we reflect more broadly about the role of more "natural" interactions for InfoVis and provide opportunities for future research. We discuss and relate general HCI interaction models to existing InfoVis interaction classifications by looking at interactions from a novel angle, taking into account the entire spectrum of interactions. Our discussion of InfoVis-specific interaction design considerations helps us identify a series of underexplored attributes of interaction that can lead to new, more "natural," interaction techniques for InfoVis.

  9. Preparation of forefinger's sequence on keyboard orients ocular fixations on computer screen.

    Science.gov (United States)

    Coutté, Alexandre; Olivier, Gérard; Faure, Sylvane; Baccino, Thierry

    2014-08-01

    This study examined the links between attention, hand movements and eye movements when performed in different spatial areas. Participants performed a visual search task on a computer screen while preparing to press two keyboard keys sequentially with their index finger. Results showed that the planning of the manual sequence influenced the latency of the first saccade and the placement of the first fixation. In particular, although the first fixation placement was influenced by the combination of both components of the prepared manual sequence in some trials, it was affected principally by the first component. Moreover, the probability that the first fixation placement reflected a combination of both components of the manual sequence was correlated with the speed of the second component. This finding suggests that the preparation of the second component of the sequence influences simultaneous oculomotor behavior when motor control of the manual sequence relies on proactive motor planning. These results are discussed in light of the current debate in eye-hand coordination research.

  10. The Linguistics of Keyboard-to-Screen Communication: A New Terminological Framework

    Directory of Open Access Journals (Sweden)

    Andreas H. Jucker

    2012-01-01

    Full Text Available New forms of communication that have recently developed in the context of Web 2.0 make it necessary to reconsider some of the analytical tools of linguistic analysis. In the context of keyboard-to-screen communication (KSC, as we shall call it), a range of old dichotomies have become blurred or ceased to be useful altogether, e.g. "asynchronous" versus "synchronous", "written" versus "spoken", "monologic" versus "dialogic", and in particular "text" versus "utterance". We propose alternative terminologies ("communicative act" and "communicative act sequence") that are more adequate to describe the new realities of online communication and can usefully be applied to such diverse entities as weblog entries, tweets, status updates on social network sites, comments on other postings, and to sequences of such entities. Furthermore, in the context of social network sites, different forms of communication traditionally kept separate (i.e. blog, chat, email and so on) seem to converge. We illustrate and discuss these phenomena with data from Twitter and Facebook.

  11. Computer keyboard interaction as an indicator of early Parkinson’s disease

    Science.gov (United States)

    Giancardo, L.; Sánchez-Ferro, A.; Arroyo-Gallego, T.; Butterworth, I.; Mendoza, C. S.; Montero, P.; Matarazzo, M.; Obeso, J. A.; Gray, M. L.; Estépar, R. San José

    2016-01-01

    Parkinson’s disease (PD) is a slowly progressing neurodegenerative disease with early manifestation of motor signs. Objective measurements of motor signs are of vital importance for diagnosing, monitoring and developing disease modifying therapies, particularly for the early stages of the disease when putative neuroprotective treatments could stop neurodegeneration. Current medical practice has limited tools to routinely monitor PD motor signs with enough frequency and without undue burden for patients and the healthcare system. In this paper, we present data indicating that the routine interaction with computer keyboards can be used to detect motor signs in the early stages of PD. We explore a solution that measures the key hold times (the time required to press and release a key) during the normal use of a computer without any change in hardware and converts it to a PD motor index. This is achieved by the automatic discovery of patterns in the time series of key hold times using an ensemble regression algorithm. This new approach discriminated early PD groups from controls with an AUC = 0.81 (n = 42/43; mean age = 59.0/60.1; women = 43%/60%;PD/controls). The performance was comparable or better than two other quantitative motor performance tests used clinically: alternating finger tapping (AUC = 0.75) and single key tapping (AUC = 0.61). PMID:27703257

  12. Performance analysis of the FDTD method applied to holographic volume gratings: Multi-core CPU versus GPU computing

    Science.gov (United States)

    Francés, J.; Bleda, S.; Neipp, C.; Márquez, A.; Pascual, I.; Beléndez, A.

    2013-03-01

    The finite-difference time-domain method (FDTD) allows electromagnetic field distribution analysis as a function of time and space. The method is applied to analyze holographic volume gratings (HVGs) for the near-field distribution at optical wavelengths. Usually, this application requires the simulation of wide areas, which implies more memory and time processing. In this work, we propose a specific implementation of the FDTD method including several add-ons for a precise simulation of optical diffractive elements. Values in the near-field region are computed considering the illumination of the grating by means of a plane wave for different angles of incidence and including absorbing boundaries as well. We compare the results obtained by FDTD with those obtained using a matrix method (MM) applied to diffraction gratings. In addition, we have developed two optimized versions of the algorithm, for both CPU and GPU, in order to analyze the improvement of using the new NVIDIA Fermi GPU architecture versus highly tuned multi-core CPU as a function of the size simulation. In particular, the optimized CPU implementation takes advantage of the arithmetic and data transfer streaming SIMD (single instruction multiple data) extensions (SSE) included explicitly in the code and also of multi-threading by means of OpenMP directives. A good agreement between the results obtained using both FDTD and MM methods is obtained, thus validating our methodology. Moreover, the performance of the GPU is compared to the SSE+OpenMP CPU implementation, and it is quantitatively determined that a highly optimized CPU program can be competitive for a wider range of simulation sizes, whereas GPU computing becomes more powerful for large-scale simulations.
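
    For readers unfamiliar with the method, a minimal 1D sketch of the leapfrog E/H update that FDTD is built on, in normalized units (this is a toy illustration, not the paper's optimized SSE/OpenMP or CUDA implementation, and it omits the absorbing boundaries and plane-wave illumination the abstract describes):

```python
import math

def fdtd_1d(nx=200, nt=300, src=50):
    """Minimal 1D FDTD leapfrog update with Courant number 0.5 and a soft
    Gaussian source; perfectly reflecting (PEC-like) boundaries."""
    ez = [0.0] * nx  # electric field samples
    hy = [0.0] * nx  # magnetic field samples (staggered half cell)
    for t in range(nt):
        for i in range(nx - 1):          # H update (half time step)
            hy[i] += 0.5 * (ez[i + 1] - ez[i])
        for i in range(1, nx):           # E update (next half step)
            ez[i] += 0.5 * (hy[i] - hy[i - 1])
        ez[src] += math.exp(-((t - 30) ** 2) / 100.0)  # soft Gaussian source
    return ez

ez = fdtd_1d()
```

Under the Courant condition used here the scheme is stable, so the field remains bounded while the injected pulse propagates outward.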

  13. Using hybrid GPU/CPU kernel splitting to accelerate spherical convolutions

    Science.gov (United States)

    Sutter, P. M.; Wandelt, B. D.; Elsner, F.

    2015-06-01

    We present a general method for accelerating by more than an order of magnitude the convolution of pixelated functions on the sphere with a radially-symmetric kernel. Our method splits the kernel into a compact real-space component and a compact spherical harmonic space component. These components can then be convolved in parallel using an inexpensive commodity GPU and a CPU. We provide models for the computational cost of both real-space and Fourier space convolutions and an estimate for the approximation error. Using these models we can determine the optimum split that minimizes the wall clock time for the convolution while satisfying the desired error bounds. We apply this technique to the problem of simulating a cosmic microwave background (CMB) anisotropy sky map at the resolution typical of the high resolution maps produced by the Planck mission. For the main Planck CMB science channels we achieve a speedup of over a factor of ten, assuming an acceptable fractional rms error of order 10^-5 in the power spectrum of the output map.
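
    The split-selection idea reduces to a one-dimensional minimization: when the two kernel components run concurrently on GPU and CPU, wall-clock time is the slower of the two, so the optimum split balances their costs. A sketch with toy linear cost functions (hypothetical; the paper derives calibrated cost and error models):

```python
def optimal_split(total, cost_real, cost_harmonic):
    """Choose the split point s that minimizes wall-clock time when the
    real-space part (cost_real(s), e.g. on the GPU) and the harmonic part
    (cost_harmonic(total - s), e.g. on the CPU) execute concurrently, so
    elapsed time is the maximum of the two."""
    return min(range(total + 1),
               key=lambda s: max(cost_real(s), cost_harmonic(total - s)))

# Hypothetical costs: GPU-side real-space cost grows with the split point,
# CPU-side harmonic cost grows with the remainder (and is slower per unit).
real = lambda s: 2.0 * s
harm = lambda r: 5.0 * r
s = optimal_split(100, real, harm)
print(s)  # 72: the integer split that best balances 2s against 5(100 - s)
```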

  14. Predictable CPU Architecture Designed for Small Real-Time Application - Concept and Theory of Operation

    Directory of Open Access Journals (Sweden)

    Nicoleta Cristina GAITAN

    2015-04-01

    Full Text Available The purpose of this paper is to describe a predictable CPU architecture based on a five-stage pipeline and a hardware scheduler engine. We aim at developing a fine-grained multithreading implementation, named nMPRA-MT. The proposed architecture uses replication and remapping techniques for the program counter, the register file, and the pipeline registers, and is implemented on an FPGA device. An original implementation of a MIPS processor with a thread-interleaved pipeline is obtained, using dynamic scheduling of hard real-time tasks and interrupts. For interrupt handling, the architecture uses a particular method that assigns interrupts to tasks, which ensures efficient control of both the context switch and the system's real-time behavior. The originality of the approach resides in the predictability and spatial isolation of the hard real-time tasks, each executed every two clock cycles. The nMPRA-MT architecture is enabled by an innovative predictable scheduling scheme that never stalls the pipeline.
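
    The "every two clock cycles" guarantee can be illustrated with a trivial round-robin issue model: with two interleaved hardware thread contexts, each task issues on a fixed cadence regardless of what the other does, which is the source of the timing predictability. A hypothetical sketch (the real nMPRA-MT replicates PC, register file, and pipeline registers in hardware rather than scheduling in software):

```python
def interleave_schedule(n_contexts, cycles):
    """Round-robin issue model for a thread-interleaved pipeline: cycle c
    issues an instruction from hardware context c mod n_contexts, so each
    context issues exactly every n_contexts cycles."""
    return [cycle % n_contexts for cycle in range(cycles)]

sched = interleave_schedule(2, 8)
print(sched)  # [0, 1, 0, 1, 0, 1, 0, 1]
# Task 0 issues on a strict two-cycle cadence, independent of task 1:
assert [c for c, t in enumerate(sched) if t == 0] == [0, 2, 4, 6]
```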

  15. Model Predictive Obstacle Avoidance and Wheel Allocation Control of Mobile Robots Using Embedded CPU

    Science.gov (United States)

    Takahashi, Naoki; Nonaka, Kenichiro

    In this study, we propose a real-time model predictive control method for leg/wheel mobile robots which simultaneously achieves both obstacle avoidance and wheel allocation at flexible positions. The proposed method generates both the obstacle avoidance path and dynamical wheel positions, and controls the heading angle depending on the slope of the predicted path, so that the robot keeps a good balance between stability and mobility in narrow and complex spaces such as indoor environments. Moreover, we reduce the computational effort of the algorithm by eliminating calls to mathematical functions in the repetitive numerical computation, so the proposed real-time optimization method can be applied to the low-speed on-board CPUs used in commercially produced vehicles. We conducted experiments to verify the efficacy and feasibility of a real-time implementation of the proposed method, using a leg/wheel mobile robot equipped with two laser range finders to detect obstacles and an embedded CPU whose clock speed is only 80 MHz. Experiments indicate that the proposed method achieves improved obstacle avoidance compared with the previous method, in the sense that it generates an avoidance path with balanced allocation of the right- and left-side wheels.

  16. Efficient Irregular Wavefront Propagation Algorithms on Hybrid CPU-GPU Machines.

    Science.gov (United States)

    Teodoro, George; Pan, Tony; Kurc, Tahsin; Kong, Jun; Cooper, Lee; Saltz, Joel

    2013-04-01

    We address the problem of efficient execution of a computation pattern, referred to here as the irregular wavefront propagation pattern (IWPP), on hybrid systems with multiple CPUs and GPUs. The IWPP is common in several image processing operations. In the IWPP, data elements in the wavefront propagate waves to their neighboring elements on a grid if a propagation condition is satisfied. Elements receiving the propagated waves become part of the wavefront. This pattern results in irregular data accesses and computations. We develop and evaluate strategies for efficient computation and propagation of wavefronts using a multi-level queue structure. This queue structure improves the utilization of fast memories in a GPU and reduces synchronization overheads. We also develop a tile-based parallelization strategy to support execution on multiple CPUs and GPUs. We evaluate our approaches on a state-of-the-art GPU accelerated machine (equipped with 3 GPUs and 2 multicore CPUs) using the IWPP implementations of two widely used image processing operations: morphological reconstruction and Euclidean distance transform. Our results show significant performance improvements on GPUs. The use of multiple CPUs and GPUs cooperatively attains speedups of 50× and 85× with respect to single core CPU executions for morphological reconstruction and Euclidean distance transform, respectively.
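
    The propagation pattern itself can be sketched with a serial FIFO queue, here in the style of grayscale morphological reconstruction: active cells propagate values to 4-neighbors while a mask permits, and any neighbor that changes joins the wavefront. A toy serial version (the paper's contribution is the multi-level queue structure and the tiled CPU-GPU parallelization, not this baseline):

```python
from collections import deque

def wavefront_propagate(marker, mask):
    """Serial IWPP sketch: propagate min(current value, mask) to each
    4-neighbor; a neighbor whose value increases re-enters the wavefront."""
    h, w = len(marker), len(marker[0])
    out = [row[:] for row in marker]
    queue = deque((i, j) for i in range(h) for j in range(w))  # initial wavefront
    while queue:
        i, j = queue.popleft()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < h and 0 <= nj < w:
                new = min(out[i][j], mask[ni][nj])
                if new > out[ni][nj]:      # propagation condition
                    out[ni][nj] = new
                    queue.append((ni, nj))  # changed cell joins the wavefront
    return out

marker = [[0, 0, 0], [0, 5, 0], [0, 0, 0]]
mask   = [[3, 3, 3], [3, 9, 3], [3, 3, 3]]
print(wavefront_propagate(marker, mask))  # [[3, 3, 3], [3, 5, 3], [3, 3, 3]]
```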

  17. SPILADY: A parallel CPU and GPU code for spin-lattice magnetic molecular dynamics simulations

    Science.gov (United States)

    Ma, Pui-Wai; Dudarev, S. L.; Woo, C. H.

    2016-10-01

    Spin-lattice dynamics generalizes molecular dynamics to magnetic materials, where dynamic variables describing an evolving atomic system include not only coordinates and velocities of atoms but also directions and magnitudes of atomic magnetic moments (spins). Spin-lattice dynamics simulates the collective time evolution of spins and atoms, taking into account the effect of non-collinear magnetism on interatomic forces. Applications of the method include atomistic models for defects, dislocations and surfaces in magnetic materials, thermally activated diffusion of defects, magnetic phase transitions, and various magnetic and lattice relaxation phenomena. Spin-lattice dynamics retains all the capabilities of molecular dynamics, adding to them the treatment of non-collinear magnetic degrees of freedom. The spin-lattice dynamics time integration algorithm uses symplectic Suzuki-Trotter decomposition of atomic coordinate, velocity and spin evolution operators, and delivers highly accurate numerical solutions of dynamic evolution equations over extended intervals of time. The code is parallelized in coordinate and spin spaces, and is written in OpenMP C/C++ for CPU and in CUDA C/C++ for Nvidia GPU implementations. Temperatures of atoms and spins are controlled by Langevin thermostats. Conduction electrons are treated by coupling the discrete spin-lattice dynamics equations for atoms and spins to the heat transfer equation for the electrons. Worked examples include simulations of thermalization of ferromagnetic bcc iron, the dynamics of laser pulse demagnetization, and collision cascades.

  18. Brain morphometry shows effects of long-term musical practice in middle-aged keyboard players

    Directory of Open Access Journals (Sweden)

    Hanna Gärtner

    2013-09-01

    Full Text Available To what extent does musical practice change the structure of the brain? In order to understand how long-lasting musical training changes brain structure, 20 male right-handed, middle-aged professional musicians and 19 matched controls were investigated. Among the musicians, 13 were pianists or organists with intensive practice regimes. The others were either music teachers at schools or string instrumentalists, who had studied the piano at least as a subsidiary subject, and practiced less intensively. The study was based on T1-weighted MR images, which were analyzed using Deformation Field Morphometry. Cytoarchitectonic probabilistic maps of cortical areas and subcortical nuclei as well as myeloarchitectonic maps of fiber tracts were used as regions of interest to compare volume differences in the brains of musicians and controls. In addition, maps of voxel-wise volume differences were computed and analyzed. Musicians showed a significantly better symmetric motor performance as well as a greater capability of controlling hand independence than controls. Structural MRI-data revealed significant volumetric differences between the brains of keyboard players, who practiced intensively and controls in right sensorimotor areas and the corticospinal tract as well as in the entorhinal cortex and the left superior parietal lobule. Moreover, they showed also larger volumes in a comparable set of regions than the less intensively practicing musicians. The structural changes in the sensory and motor systems correspond well to the behavioral results, and can be interpreted in terms of plasticity as a result of intensive motor training. Areas of the superior parietal lobule and the entorhinal cortex might be enlarged in musicians due to their special skills in sight-playing and memorizing of scores.
In conclusion, intensive and specific musical training seems to have an impact on brain structure, not only during the sensitive period of childhood but throughout

  19. Brain morphometry shows effects of long-term musical practice in middle-aged keyboard players.

    Science.gov (United States)

    Gärtner, H; Minnerop, M; Pieperhoff, P; Schleicher, A; Zilles, K; Altenmüller, E; Amunts, K

    2013-01-01

    To what extent does musical practice change the structure of the brain? In order to understand how long-lasting musical training changes brain structure, 20 male right-handed, middle-aged professional musicians and 19 matched controls were investigated. Among the musicians, 13 were pianists or organists with intensive practice regimes. The others were either music teachers at schools or string instrumentalists, who had studied the piano at least as a subsidiary subject, and practiced less intensively. The study was based on T1-weighted MR images, which were analyzed using deformation-based morphometry. Cytoarchitectonic probabilistic maps of cortical areas and subcortical nuclei as well as myeloarchitectonic maps of fiber tracts were used as regions of interest to compare volume differences in the brains of musicians and controls. In addition, maps of voxel-wise volume differences were computed and analyzed. Musicians showed a significantly better symmetric motor performance as well as a greater capability of controlling hand independence than controls. Structural MRI-data revealed significant volumetric differences between the brains of keyboard players, who practiced intensively and controls in right sensorimotor areas and the corticospinal tract as well as in the entorhinal cortex and the left superior parietal lobule. Moreover, they showed also larger volumes in a comparable set of regions than the less intensively practicing musicians. The structural changes in the sensory and motor systems correspond well to the behavioral results, and can be interpreted in terms of plasticity as a result of intensive motor training. Areas of the superior parietal lobule and the entorhinal cortex might be enlarged in musicians due to their special skills in sight-playing and memorizing of scores. In conclusion, intensive and specific musical training seems to have an impact on brain structure, not only during the sensitive period of childhood but throughout life.

  20. Internal Structure and Development of Keyboard Skills in Spanish-Speaking Primary-School Children With and Without LD in Writing.

    Science.gov (United States)

    Jiménez, Juan E; Marco, Isaac; Suárez, Natalia; González, Desirée

    2016-04-07

    This study had two purposes: examining the internal structure of the Test Estandarizado para la Evaluación Inicial de la Escritura con Teclado (TEVET; Spanish Keyboarding Writing Test), and analyzing the development of keyboarding skills in Spanish elementary school children with and without learning disabilities (LD) in writing. A group of 1,168 elementary school children carried out the following writing tasks: writing the alphabet in order from memory, allograph selection, word copying, writing dictated words with inconsistent spelling, writing pseudowords from dictation, and independent composition of a sentence. Exploratory factor analysis of the TEVET was conducted; principal component analysis with varimax rotation identified three factors with eigenvalues greater than 1.0. Based on the factor analysis, we analyzed keyboarding skills across grades in Spanish elementary school children with and without LD (i.e., poor handwriters, poor spellers, and mixed writers, each compared with typically achieving writers). The results indicated that poor handwriters did not differ from typically achieving writers in phonological processing, visual-orthographic processing, and sentence production components by keyboard. The educational implications of the findings are analyzed with regard to the acquisition of keyboarding skills in children with and without LD in transcription.

  1. Design and Improvement of an Intellectualized Keyboard for Telephones

    Institute of Scientific and Technical Information of China (English)

    杨聚庆; 吴柏林

    2013-01-01

    To address the excessive electromagnetic interference (EMI) detected in the matrix keyboard interface of the HIC998(3)T IC-card telephone during "3C" certification testing, an intellectualized keyboard was designed as an upgrade. The new keyboard adopts a single-port-line A/D-conversion key layout and I2C interface communication. Comparative experiments show that the new design performs markedly better than the matrix keyboard, and its modularized standard circuit makes it easy to popularize and apply.
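
    A single-port-line A/D-conversion keyboard reads one ADC input whose voltage, set by a resistor ladder, identifies which button is pressed. A decoding sketch with hypothetical thresholds and band widths (the abstract does not describe the HIC998(3)T's actual ladder values, so all constants below are illustrative):

```python
def decode_adc_key(adc_value, n_keys=12, vref=1023):
    """Map a 10-bit ADC reading on a single key line to a key index.
    Pressing key k taps the ladder at a distinct voltage, so the reading
    is quantized into one of n_keys bands; a reading near full scale
    means no key is pressed (line pulled up to Vref)."""
    if adc_value > vref - vref // (2 * n_keys):  # near Vref: nothing pressed
        return None
    band = vref // n_keys
    return min(adc_value // band, n_keys - 1)

# Illustrative readings: full scale = idle, 0 = first key, mid-ladder keys.
assert decode_adc_key(1023) is None
assert decode_adc_key(0) == 0
assert decode_adc_key(900) == 10
```

This is the classic trade-off the record alludes to: one analog line replaces the matrix rows/columns whose fast scanning edges radiate EMI.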

  2. DESIGNING AND REALISING AN MCU-BASED IP-KEYBOARD

    Institute of Scientific and Technical Information of China (English)

    王君; 罗焕佐; 许石哲; 邹媛媛

    2011-01-01

    Addressing the field of intelligent maintenance of industrial equipment, we studied and analysed the interface, electrical characteristics, and protocol specification of the PS/2 keyboard, and implemented a standard PC keyboard with an MCU. Combining this with network technology, we designed and implemented a network keyboard, named the IP-KeyBoard, which extends the working range of a traditional keyboard and can be used to perform local keyboard operations, such as debugging and parameter setting, on remote equipment. Experiments in a LAN environment verified the feasibility of remote operation with the IP-KeyBoard.
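The PS/2 protocol analysed here frames each scan code as 11 bits clocked by the keyboard: a start bit (0), 8 data bits transmitted LSB-first, an odd parity bit, and a stop bit (1). A hedged decoding sketch, independent of the paper's MCU firmware:

```python
def decode_ps2_frame(bits):
    """Decode an 11-bit PS/2 frame given as a list of 0/1 values sampled on
    falling clock edges: start(0), 8 data bits LSB-first, odd parity, stop(1).
    Returns the scan code byte; raises ValueError on a framing/parity error."""
    if len(bits) != 11 or bits[0] != 0 or bits[10] != 1:
        raise ValueError("framing error")
    data, parity = bits[1:9], bits[9]
    if (sum(data) + parity) % 2 != 1:   # data bits + parity must sum to odd
        raise ValueError("parity error")
    byte = 0
    for i, b in enumerate(data):        # LSB is transmitted first
        byte |= b << i
    return byte
```

For example, the frame for scan code 0x1C carries data bits 0,0,1,1,1,0,0,0 (LSB-first) with parity 0, and 0xF0 is the "break" prefix a real keyboard sends on key release.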

  3. Parallelism for cryo-EM 3D reconstruction on CPU-GPU heterogeneous system

    Institute of Scientific and Technical Information of China (English)

    李兴建; 李临川; 谭光明; 张佩珩

    2011-01-01

    It is a challenge to efficiently utilize massive parallelism in both applications and architectures on heterogeneous systems. A practice of accelerating a cryo-EM 3D reconstruction program is presented, showing how to exploit and orchestrate the parallelism of the application to take advantage of the underlying parallelism exposed at the architecture level. All possible parallelism in cryo-EM 3D reconstruction was exploited, and a self-adaptive dynamic scheduling algorithm was leveraged to efficiently implement the parallelism mapping between application and architecture. An experiment on part of the Dawning Nebulae system (32 nodes) confirms that hierarchical parallelism is an efficient pattern of parallel programming for utilizing the capabilities of both CPU and GPU on a heterogeneous system. The hybrid CPU-GPU program improves performance by 2.4 times over the best CPU-only version for certain problem sizes.

  4. Design and research of electrosurgical controller based on dual CPU+PSD

    Institute of Scientific and Technical Information of China (English)

    包晔峰; 张强; 蒋永锋; 赵虎成; 陈俊生

    2011-01-01

    An electrosurgical controller based on dual CPU + PSD is introduced. Direct digital frequency synthesis is used to generate an output waveform whose pulse frequency and width are adjustable, and parallel running of the master and slave CPUs is implemented through shared RAM. A dual feedback circuit for output current and voltage was designed; based on the feedback current and voltage signals, the detection function effectively controls the output energy. An incremental PID control algorithm, using the voltage and current as feedback variables, realizes constant-voltage, constant-current, and constant-power control. Tests on a high-frequency electrosurgical unit show that the system has high output accuracy.

  5. Application of PCI9052 in CPU Unit

    Institute of Scientific and Technical Information of China (English)

    韩洪丽

    2009-01-01

    This article describes the application of the PCI9052 chip in the CPU unit of the H20-20 switch system, detailing the working principle and implementation of using the PCI9052 to convert PCI accesses into ISA accesses within the CPU unit. The chip makes it possible to upgrade the CPU of the H20-20 switch system, greatly improving the processing speed and performance of the switch's CPU unit, providing a platform for developing value-added switch services, and enhancing the market competitiveness of the H20-20 switch product.

  6. SpaceCubeX: A Framework for Evaluating Hybrid Multi-Core CPU/FPGA/DSP Architectures

    Science.gov (United States)

    Schmidt, Andrew G.; Weisz, Gabriel; French, Matthew; Flatley, Thomas; Villalpando, Carlos Y.

    2017-01-01

    The SpaceCubeX project is motivated by the need for high performance, modular, and scalable on-board processing to help scientists answer critical 21st century questions about global climate change, air quality, ocean health, and ecosystem dynamics, while adding new capabilities such as low-latency data products for extreme event warnings. These goals translate into on-board processing throughput requirements that are on the order of 100-1,000x more than those of previous Earth Science missions for standard processing, compression, storage, and downlink operations. To study possible future architectures to achieve these performance requirements, the SpaceCubeX project provides an evolvable testbed and framework that enables a focused design space exploration of candidate hybrid CPU/FPGA/DSP processing architectures. The framework includes ArchGen, an architecture generator tool populated with candidate architecture components, performance models, and IP cores, that allows an end user to specify the type, number, and connectivity of a hybrid architecture. The framework requires minimal extensions to integrate new processors, such as the anticipated High Performance Spaceflight Computer (HPSC), reducing time to initiate benchmarking by months. To evaluate the framework, we leverage a wide suite of high performance embedded computing benchmarks and Earth science scenarios to ensure robust architecture characterization. We report on our project's Year 1 efforts and demonstrate the capabilities across four simulation testbed models: a baseline SpaceCube 2.0 system, a dual ARM A9 processor system, a hybrid quad ARM A53 and FPGA system, and a hybrid quad ARM A53 and DSP system.

  7. Fast Parallel Image Registration on CPU and GPU for Diagnostic Classification of Alzheimer's Disease

    Directory of Open Access Journals (Sweden)

    Denis P Shamonin

    2014-01-01

    Full Text Available Nonrigid image registration is an important, but time-consuming task in medical image analysis. In typical neuroimaging studies, multiple image registrations are performed, i.e. for atlas-based segmentation or template construction. Faster image registration routines would therefore be beneficial. In this paper we explore acceleration of the image registration package elastix by a combination of several techniques: (i) parallelization on the CPU, to speed up the cost function derivative calculation; (ii) parallelization on the GPU, building on and extending the OpenCL framework from ITKv4, to speed up the Gaussian pyramid computation and the image resampling step; (iii) exploitation of certain properties of the B-spline transformation model; (iv) further software optimizations. The accelerated registration tool is employed in a study on diagnostic classification of Alzheimer's disease and cognitively normal controls based on T1-weighted MRI. We selected 299 participants from the publicly available Alzheimer's Disease Neuroimaging Initiative database. Classification is performed with a support vector machine based on gray matter volumes as a marker for atrophy. We evaluated two types of strategies (voxel-wise and region-wise) that heavily rely on nonrigid image registration. Parallelization and optimization resulted in an acceleration factor of 4-5x on an 8-core machine. Using OpenCL a speedup factor of ~2 was realized for computation of the Gaussian pyramids, and 15-60 for the resampling step, for larger images. The voxel-wise and the region-wise classification methods had an area under the receiver operator characteristic curve of 88% and 90%, respectively, both for standard and accelerated registration. We conclude that the image registration package elastix was substantially accelerated, with nearly identical results to the non-optimized version. The new functionality will become available in the next release of elastix as open source under the BSD license.

  8. Fast parallel image registration on CPU and GPU for diagnostic classification of Alzheimer's disease.

    Science.gov (United States)

    Shamonin, Denis P; Bron, Esther E; Lelieveldt, Boudewijn P F; Smits, Marion; Klein, Stefan; Staring, Marius

    2013-01-01

    Nonrigid image registration is an important, but time-consuming task in medical image analysis. In typical neuroimaging studies, multiple image registrations are performed, i.e., for atlas-based segmentation or template construction. Faster image registration routines would therefore be beneficial. In this paper we explore acceleration of the image registration package elastix by a combination of several techniques: (i) parallelization on the CPU, to speed up the cost function derivative calculation; (ii) parallelization on the GPU building on and extending the OpenCL framework from ITKv4, to speed up the Gaussian pyramid computation and the image resampling step; (iii) exploitation of certain properties of the B-spline transformation model; (iv) further software optimizations. The accelerated registration tool is employed in a study on diagnostic classification of Alzheimer's disease and cognitively normal controls based on T1-weighted MRI. We selected 299 participants from the publicly available Alzheimer's Disease Neuroimaging Initiative database. Classification is performed with a support vector machine based on gray matter volumes as a marker for atrophy. We evaluated two types of strategies (voxel-wise and region-wise) that heavily rely on nonrigid image registration. Parallelization and optimization resulted in an acceleration factor of 4-5x on an 8-core machine. Using OpenCL a speedup factor of 2 was realized for computation of the Gaussian pyramids, and 15-60 for the resampling step, for larger images. The voxel-wise and the region-wise classification methods had an area under the receiver operator characteristic curve of 88 and 90%, respectively, both for standard and accelerated registration. We conclude that the image registration package elastix was substantially accelerated, with nearly identical results to the non-optimized version. The new functionality will become available in the next release of elastix as open source under the BSD license.

  9. High-throughput Analysis of Large Microscopy Image Datasets on CPU-GPU Cluster Platforms.

    Science.gov (United States)

    Teodoro, George; Pan, Tony; Kurc, Tahsin M; Kong, Jun; Cooper, Lee A D; Podhorszki, Norbert; Klasky, Scott; Saltz, Joel H

    2013-05-01

    Analysis of large pathology image datasets offers significant opportunities for the investigation of disease morphology, but the resource requirements of analysis pipelines limit the scale of such studies. Motivated by a brain cancer study, we propose and evaluate a parallel image analysis application pipeline for high throughput computation of large datasets of high resolution pathology tissue images on distributed CPU-GPU platforms. To achieve efficient execution on these hybrid systems, we have built runtime support that allows us to express the cancer image analysis application as a hierarchical data processing pipeline. The application is implemented as a coarse-grain pipeline of stages, where each stage may be further partitioned into another pipeline of fine-grain operations. The fine-grain operations are efficiently managed and scheduled for computation on CPUs and GPUs using performance aware scheduling techniques along with several optimizations, including architecture aware process placement, data locality conscious task assignment, data prefetching, and asynchronous data copy. These optimizations are employed to maximize the utilization of the aggregate computing power of CPUs and GPUs and minimize data copy overheads. Our experimental evaluation shows that the cooperative use of CPUs and GPUs achieves significant improvements on top of GPU-only versions (up to 1.6×) and that the execution of the application as a set of fine-grain operations provides more opportunities for runtime optimizations and attains better performance than coarser-grain, monolithic implementations used in other works. An implementation of the cancer image analysis pipeline using the runtime support was able to process an image dataset consisting of 36,848 4Kx4K-pixel image tiles (about 1.8TB uncompressed) in less than 4 minutes (150 tiles/second) on 100 nodes of a state-of-the-art hybrid cluster system.
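The performance-aware scheduling described in this abstract can be illustrated, in heavily simplified form, by a greedy list scheduler that assigns each fine-grain operation to whichever device (CPU or GPU) would finish it earliest. The task names and costs below are hypothetical, and this is a sketch of the general idea, not the runtime's actual policy:

```python
# Greedy heterogeneous list scheduling: each fine-grain operation has an
# estimated CPU cost and GPU cost; operations are assigned, longest first,
# to whichever device finishes them earliest. Illustrative only.

def schedule(tasks):
    """tasks: list of (name, cpu_cost, gpu_cost).
    Returns (assignment, makespan), where assignment maps name -> 'cpu'/'gpu'."""
    finish = {"cpu": 0.0, "gpu": 0.0}
    assignment = {}
    # Longest-processing-time-first ordering improves greedy load balance.
    for name, c, g in sorted(tasks, key=lambda t: -min(t[1], t[2])):
        if finish["cpu"] + c <= finish["gpu"] + g:
            finish["cpu"] += c
            assignment[name] = "cpu"
        else:
            finish["gpu"] += g
            assignment[name] = "gpu"
    return assignment, max(finish.values())
```

A real runtime would add the data-locality, prefetching, and asynchronous-copy considerations the abstract mentions, since moving a tile between devices can cost more than the compute itself.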

  10. Gridded design rule scaling: taking the CPU toward the 16nm node

    Science.gov (United States)

    Bencher, Christopher; Dai, Huixiong; Chen, Yongmei

    2009-03-01

    The Intel 45nm Penryn(TM) CPU was a landmark design, not only for its implementation of high-K metal gate materials [1], but also for the adoption of a nearly gridded design rule (GDR) layout architecture for the poly-silicon gate layer [2]. One key advantage of using gridded design rules is the reduction of design rules and the ease of 1-dimensional scaling compared to complex random 2-dimensional layouts. In this paper, we demonstrate the scaling capability of GDR to the 16nm and 22nm logic nodes. Copying the design of published images of the Intel 45nm Penryn(TM) poly-silicon layer [2], we created a mask set designed to duplicate those patterns targeting final pitches of 64nm and 52nm using sidewall spacer double patterning for the extreme pitch shrinking, and performed exploratory work at a final pitch of 44nm. Mask sets were made in both tones to enable demonstration of both damascene (dark field) patterning and poly-silicon gate layer (clear field) GDR layouts, although the results discussed focus primarily on poly-silicon gate layer scaling. The paper discusses the line-and-cut double patterning technique for generating GDR structures, the use of sidewall spacer double patterning for scaling parallel lines, and the lithographic process window (CD and alignment) for applying cut masks. Through the demonstration, we highlight process margin issues and suggest corrective actions to be implemented in future demonstrations and more advanced studies. Overall, the process window is quite large and the technique has strong manufacturing possibilities.

  11. Don’t Interrupt Me While I Type: Inferring Text Entered Through Gesture Typing on Android Keyboards

    Directory of Open Access Journals (Sweden)

    Simon Laurent

    2016-07-01

    Full Text Available We present a new side-channel attack against soft keyboards that support gesture typing on Android smartphones. An application without any special permissions can observe the number and timing of the screen hardware interrupts and system-wide software interrupts generated during user input, and analyze this information to make inferences about the text being entered by the user. System-wide information is usually considered less sensitive than app-specific information, but we provide concrete evidence that this may be mistaken. Our attack applies to all Android versions, including Android M where the SELinux policy is tightened.

  12. Temperature of the Central Processing Unit

    Directory of Open Access Journals (Sweden)

    Ivan Lavrov

    2016-10-01

    Full Text Available Heat is inevitably generated in semiconductors during operation. Cooling in a computer, and in its main part, the Central Processing Unit (CPU), is crucial, allowing proper functioning without overheating, malfunctioning, and damage. In order to estimate the temperature as a function of time, it is important to solve the differential equations describing the heat flow and to understand how it depends on the physical properties of the system. This project aims to answer these questions by considering a simplified model of the CPU + heat sink. A similarity with an electrical circuit and certain methods from electrical circuit analysis are discussed.
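The electrical-circuit analogy mentioned here treats the CPU + heat sink as a lumped RC network: a thermal resistance R (K/W) and heat capacity C (J/K) driven by power P (W) give C·dT/dt = P − (T − T_amb)/R, with steady state T_amb + P·R. A small numerical sketch with illustrative parameter values (not the paper's):

```python
import math

# Lumped RC thermal model of a CPU + heat sink: C*dT/dt = P - (T - T_amb)/R.
# R in K/W, C in J/K, P in W; all values below are illustrative.

def simulate(P, R, C, T_amb, t_end, dt=0.01):
    """Forward-Euler integration; returns the temperature at t_end."""
    T = T_amb
    for _ in range(int(t_end / dt)):
        T += dt * (P - (T - T_amb) / R) / C
    return T

def steady_state(P, R, T_amb):
    # As t -> infinity, all power flows through R: T -> T_amb + P*R.
    return T_amb + P * R
```

The closed-form solution is T(t) = T_amb + P·R·(1 − e^(−t/RC)), so after one time constant RC the temperature rise reaches about 63% of its final value, which the Euler integration reproduces for small dt.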

  13. 3D Kirchhoff depth migration algorithm: A new scalable approach for parallelization on multicore CPU based cluster

    Science.gov (United States)

    Rastogi, Richa; Londhe, Ashutosh; Srivastava, Abhishek; Sirasala, Kirannmayi M.; Khonde, Kiran

    2017-03-01

    In this article, a new scalable 3D Kirchhoff depth migration algorithm is presented on a state-of-the-art multicore CPU based cluster. Parallelization of 3D Kirchhoff depth migration is challenging due to its high demand for compute time, memory, storage and I/O, along with the need for their effective management. The most resource-intensive modules of the algorithm are traveltime calculation and migration summation, which exhibit an inherent trade-off between compute time and other resources. The parallelization strategy of the algorithm largely depends on the storage of calculated traveltimes and its feeding mechanism to the migration process. The presented work is an extension of our previous work, wherein a 3D Kirchhoff depth migration application for multicore CPU based parallel systems had been developed. Recently, we have worked on improving the parallel performance of this application by re-designing the parallelization approach. The new algorithm is capable of efficiently migrating both prestack and poststack 3D data. It exhibits flexibility for migrating a large number of traces within the available node memory and with minimal requirements for storage, I/O and inter-node communication. The resultant application is tested using 3D Overthrust data on PARAM Yuva II, which is a Xeon E5-2670 based multicore CPU cluster with 16 cores/node and 64 GB shared memory. The parallel performance of the algorithm is studied through different numerical experiments, and the scalability results show striking improvement over its previous version. An impressive 49.05X speedup with 76.64% efficiency is achieved for 3D prestack data and 32.00X speedup with 50.00% efficiency for 3D poststack data, using 64 nodes. The results also demonstrate the effectiveness and robustness of the improved algorithm with high scalability and efficiency on a multicore CPU cluster.
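As a quick consistency check on the figures quoted above, parallel efficiency is simply the speedup divided by the number of nodes:

```python
def parallel_efficiency(speedup, workers):
    """Parallel efficiency = speedup / number of workers (1.0 is ideal)."""
    return speedup / workers

# Reproduce the figures quoted in the abstract for 64 nodes:
prestack = parallel_efficiency(49.05, 64)   # -> ~0.7664, i.e. 76.64%
poststack = parallel_efficiency(32.00, 64)  # -> 0.50, i.e. 50.00%
```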

  14. Design and Implementation Methods for Soft Keyboard Based on Linux Qt

    Institute of Scientific and Technical Information of China (English)

    田福英

    2011-01-01

    This paper introduces the design and implementation of a soft keyboard based on Qt under the Linux operating system Ubuntu. The keyboard includes most of the functions of a general keyboard, has a friendly interface, and is easy to operate. It can be used on any touch-screen device running a Linux operating system.

  15. Cpu/gpu Computing for AN Implicit Multi-Block Compressible Navier-Stokes Solver on Heterogeneous Platform

    Science.gov (United States)

    Deng, Liang; Bai, Hanli; Wang, Fang; Xu, Qingxin

    2016-06-01

    CPU/GPU computing allows scientists to tremendously accelerate their numerical codes. In this paper, we port and optimize a double precision alternating direction implicit (ADI) solver for three-dimensional compressible Navier-Stokes equations from our in-house Computational Fluid Dynamics (CFD) software on heterogeneous platform. First, we implement a full GPU version of the ADI solver to remove a lot of redundant data transfers between CPU and GPU, and then design two fine-grain schemes, namely “one-thread-one-point” and “one-thread-one-line”, to maximize the performance. Second, we present a dual-level parallelization scheme using the CPU/GPU collaborative model to exploit the computational resources of both multi-core CPUs and many-core GPUs within the heterogeneous platform. Finally, considering the fact that memory on a single node becomes inadequate when the simulation size grows, we present a tri-level hybrid programming pattern MPI-OpenMP-CUDA that merges fine-grain parallelism using OpenMP and CUDA threads with coarse-grain parallelism using MPI for inter-node communication. We also propose a strategy to overlap the computation with communication using the advanced features of CUDA and MPI programming. We obtain speedups of 6.0 for the ADI solver on one Tesla M2050 GPU in contrast to two Xeon X5670 CPUs. Scalability tests show that our implementation can offer significant performance improvement on heterogeneous platform.
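The core kernel of each ADI sweep is a batch of independent tridiagonal solves, one per grid line, which is what the "one-thread-one-line" scheme maps onto GPU threads. A minimal pure-Python sketch of the standard Thomas algorithm for a single line (illustrative, not the paper's CUDA code):

```python
# Thomas algorithm: O(n) direct solve of a tridiagonal system, the per-line
# kernel inside an ADI sweep. a = sub-diagonal, b = main diagonal,
# c = super-diagonal, d = right-hand side; a[0] and c[-1] are ignored.

def thomas(a, b, c, d):
    """Return the solution x of the tridiagonal system (a, b, c) x = d."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Because each grid line's system is independent, the batch parallelizes trivially, which is why the "one-thread-one-line" mapping works; the sweeps are sequential only across directions.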

  16. 38 CFR 36.4232 - Allowable fees and charges; manufactured home unit.

    Science.gov (United States)

    2010-07-01

    ... operator-assisted telephone, terminal entry, or central processing unit-to-central processing unit (CPU-to... charges; manufactured home unit. 36.4232 Section 36.4232 Pensions, Bonuses, and Veterans' Relief... Manufactured Homes and Lots, Including Site Preparation Financing Manufactured Home Units § 36.4232 Allowable...

  17. Book Review: Placing the Suspect behind the Keyboard: Using Digital Forensics and Investigative Techniques to Identify Cybercrime Suspects

    Directory of Open Access Journals (Sweden)

    Thomas Nash

    2013-06-01

    Full Text Available Shavers, B. (2013). Placing the Suspect behind the Keyboard: Using Digital Forensics and Investigative Techniques to Identify Cybercrime Suspects. Waltham, MA: Elsevier, 290 pages, ISBN 978-1-59749-985-9, US$51.56. Includes bibliographical references and index. Reviewed by Detective Corporal Thomas Nash (tnash@bpdvt.org), Burlington Vermont Police Department, Internet Crimes Against Children Task Force; Adjunct Instructor, Champlain College, Burlington VT. In this must-read for the aspiring novice cybercrime investigator and the seasoned professional computer guru alike, Brett Shavers takes the reader into the ever-changing and dynamic world of cybercrime investigation. Shavers, an experienced criminal investigator, lays out the details and intricacies of a computer-related crime investigation in a clear and concise manner in his new, easy-to-read publication, Placing the Suspect behind the Keyboard: Using Digital Forensics and Investigative Techniques to Identify Cybercrime Suspects. Shavers takes the reader from start to finish through each step of the investigative process in well-organized and easy-to-follow sections, with real case file examples, to reach the ultimate goal of any investigation: identifying the suspect and proving their guilt in the crime. Do not be fooled by the title. This excellent, easily accessible reference is beneficial to both criminal and civil investigations and should be in every investigator's library, regardless of their respective criminal or civil investigative responsibilities. (See PDF for full review.)

  18. A validation study of the Keyboard Personal Computer Style instrument (K-PeCS) for use with children.

    Science.gov (United States)

    Green, Dido; Meroz, Anat; Margalit, Adi Edit; Ratzon, Navah Z

    2012-11-01

    This study examines a potential instrument for measuring the typing postures of children. This paper describes the inter-rater reliability, test-retest reliability, and concurrent validity of the Keyboard Personal Computer Style instrument (K-PeCS), an observational measure of postures and movements during keyboarding, for use with children. Two trained raters independently rated videos of 24 children (aged 7-10 years). Six children returned one week later to assess test-retest reliability. Concurrent validity was assessed by comparing ratings obtained using the K-PeCS to scores from a 3D motion analysis system. Inter-rater reliability was moderate to high for 12 out of 16 items (kappa: 0.46 to 1.00; correlation coefficients: 0.77-0.95), and test-retest reliability varied across items (kappa: 0.25 to 0.67; correlation coefficients: r = 0.20 to r = 0.95). Concurrent validity compared favourably across arm pathlength, wrist extension, and ulnar deviation. In light of the limitations of other tools, the K-PeCS offers a fairly affordable, reliable, and valid instrument that addresses the gap in measuring the typing styles of children, despite the shortcomings of some items. However, further research is required to refine the instrument for use in evaluating typing among children.

  19. A Brain-Computer Interface (BCI) system to use arbitrary Windows applications by directly controlling mouse and keyboard.

    Science.gov (United States)

    Spuler, Martin

    2015-08-01

    A Brain-Computer Interface (BCI) allows a user to control a computer by brain activity alone, without the need for muscle control. In this paper, we present an EEG-based BCI system based on code-modulated visual evoked potentials (c-VEPs) that enables the user to work with arbitrary Windows applications. Other BCI systems, like the P300 speller or BCI-based browsers, allow control of one dedicated application designed for use with a BCI. In contrast, the system presented in this paper does not consist of one dedicated application, but enables the user to control mouse cursor and keyboard input at the level of the operating system, thereby making it possible to use arbitrary applications. As the c-VEP BCI method was shown to enable very fast communication speeds (writing more than 20 error-free characters per minute), the presented system is the next step in replacing the traditional mouse and keyboard and enabling complete brain-based control of a computer.

  20. Performance of Basic Geodynamic Solvers on BG/p and on Modern Mid-sized CPU Clusters

    Science.gov (United States)

    Omlin, S.; Keller, V.; Podladchikov, Y.

    2012-04-01

    Nowadays, most researchers have access to computer clusters. For the community developing numerical applications in geodynamics, this constitutes a very important potential: besides speeding up current applications, much bigger problems can be solved. This is particularly relevant for 3D applications. However, current practical experiments in geodynamic high-performance applications normally end with the successful demonstration of the potential by exploring the performance of the simplest example (typically the Poisson solver); more advanced practical examples are rare. For this reason, we optimize algorithms for 3D scalar problems and 3D mechanics and design concise, educational Fortran 90 templates that allow other researchers to easily plug in their own geodynamic computations: in these templates, the geodynamic computations are entirely separated from the technical programming needed for parallelized running on a computer cluster; additionally, we develop our code with minimal syntactical differences from the MATLAB language, such that prototypes of the desired geodynamic computations can be programmed in MATLAB and then copied into the template with only minimal syntactical changes. High-performance programming requires, to a large extent, taking into account the specificities of the available hardware. The hardware of the world's largest CPU clusters is very different from that of a modern mid-sized CPU cluster. In this context, we investigate the performance of basic memory-bounded geodynamic solvers on the large-sized BlueGene/P cluster, having 13 Gb/s peak memory bandwidth, and compare it with the performance of a typical modern mid-sized CPU cluster, having 100 Gb/s peak memory bandwidth. A memory-bounded solver's performance depends only on the amount of data required for its computations and on the speed this data can be read from memory (or from the CPUs' cache). In consequence, we speed up the solvers by optimizing memory access and CPU

  1. Use of general purpose graphics processing units with MODFLOW.

    Science.gov (United States)

    Hughes, Joseph D; White, Jeremy T

    2013-01-01

    To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.
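Of the preconditioners listed, Jacobi (diagonal) scaling is the simplest to sketch. Below is a minimal dense-matrix illustration of Jacobi-preconditioned conjugate gradient; the real UPCG solver operates on compressed-sparse-row matrices, so this is an assumption-laden teaching sketch, not its code:

```python
# Jacobi-preconditioned conjugate gradient for a symmetric positive-definite
# system A x = b. A is a list of row lists (dense, for clarity only).

def jacobi_pcg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # residual r = b - A*0
    Minv = [1.0 / A[i][i] for i in range(n)]   # Jacobi preconditioner M^-1
    z = [Minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [Minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

Every operation in the loop is a vector update, dot product, or matrix-vector product, which is why the abstract can offload "all basic linear algebra operations" to the GPGPU with copies only at the start and end of the solve.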

  2. HD fisheye video stream real-time correction system based on embedded CPU-GPU

    Institute of Scientific and Technical Information of China (English)

    公维理

    2016-01-01

    In the field of security video surveillance, a real-time monitoring system is needed that achieves high-definition 360°×180° panoramic monitoring without blind spots. Existing fisheye correction systems suffer from high cost and poor flexibility, and in particular from low definition and poor real-time performance. To improve the real-time performance of panoramic HD fisheye video correction, this article proposes a CPU-GPU high-speed communication protocol based on the embedded platform STiH418 and an embedded CPU-GPU memory-sharing method based on programmable shaders, and implements a real-time correction system for panoramic HD fisheye video using GPU texture mapping. Experimental results show that, compared with related correction systems, the system balances algorithm efficiency with the quality and completeness of the corrected image, and fully meets real-time correction of panoramic HD 360°×180° (4 megapixel, 2048×2048p30) fisheye video. The embedded STiH418 platform reduces overall system cost compared with a PC server, the correction map is generated and updated in software by the ARM CPU, and a virtual PTZ improves system flexibility and stability, so the system has high practical value in the security video surveillance market.

  3. Full domain-decomposition scheme for diffuse optical tomography of large-sized tissues with a combined CPU and GPU parallelization.

    Science.gov (United States)

    Yi, Xi; Wang, Xin; Chen, Weiting; Wan, Wenbo; Zhao, Huijuan; Gao, Feng

    2014-05-01

    The common approach to diffuse optical tomography is to solve a nonlinear and ill-posed inverse problem using a linearized iteration process that involves repeated use of the forward and inverse solvers on an appropriately discretized domain of interest. This scheme normally imposes severe computation and storage burdens in applications to large-sized tissues, such as breast tumor diagnosis and brain functional imaging, and prevents the use of matrix-fashioned linear inversions for improved image quality. To cope with these difficulties, we propose in this paper a parallelized full domain-decomposition scheme, which divides the whole domain into several overlapped subdomains and solves the corresponding subinversions independently within the framework of the Schwarz-type iterations, with the support of a combined multicore CPU and multithread graphics processing unit (GPU) parallelization strategy. The numerical and phantom experiments both demonstrate that the proposed method can effectively reduce the computation time and memory occupation for the large-sized problem and improve the quantitative performance of the reconstruction.
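The overlapped Schwarz-type iteration can be illustrated on a toy 1-D problem: solve u'' = 0 on [0, 1] with u(0) = 0, u(1) = 1 by alternately solving two overlapping subdomains exactly (for u'' = 0 an exact subdomain solve is just linear interpolation between its boundary values). This is an illustrative analogue of the decomposition idea, not the paper's reconstruction code:

```python
# Multiplicative (alternating) Schwarz iteration on a 1-D grid.
# Domain [0,1] is split into two overlapping index ranges; each sweep solves
# one subdomain exactly using the current values at its boundaries.

def schwarz_1d(n=21, overlap=5, sweeps=30):
    """Return grid values after Schwarz sweeps; the exact answer is u(x) = x."""
    u = [0.0] * n
    u[-1] = 1.0                       # Dirichlet data: u(0)=0, u(1)=1
    left = (0, n // 2 + overlap)      # subdomain indices [0, 15] for n=21
    right = (n // 2 - overlap, n - 1) # subdomain indices [5, 20] for n=21
    for _ in range(sweeps):
        for lo, hi in (left, right):
            # exact subdomain solve of u''=0: linear between u[lo] and u[hi]
            for i in range(lo + 1, hi):
                u[i] = u[lo] + (u[hi] - u[lo]) * (i - lo) / (hi - lo)
    return u
```

The interface values exchanged at each sweep contract the error geometrically, faster for larger overlap, which is why the overlapped subinversions in the paper can be iterated to agreement with the global solution.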

  4. Improving the execution performance of FreeSurfer : a new scheduled pipeline scheme for optimizing the use of CPU and GPU resources.

    Science.gov (United States)

    Delgado, J; Moure, J C; Vives-Gilabert, Y; Delfino, M; Espinosa, A; Gómez-Ansón, B

    2014-07-01

    A scheme to significantly speed up the processing of MRI with FreeSurfer (FS) is presented. The scheme is aimed at maximizing the productivity (number of subjects processed per unit time) for the use case of research projects with datasets involving many acquisitions. The scheme combines the already existing GPU-accelerated version of the FS workflow with a task-level parallel scheme supervised by a resource scheduler. This allows for an optimum utilization of the computational power of a given hardware platform while avoiding problems with shortages of platform resources. The scheme can be executed on a wide variety of platforms, as its implementation only involves the script that orchestrates the execution of the workflow components and the FS code itself requires no modifications. The scheme has been implemented and tested on a commodity platform within the reach of most research groups (a personal computer with four cores and an NVIDIA GeForce 480 GTX graphics card). Using the scheduled task-level parallel scheme, a productivity above 0.6 subjects per hour is achieved on the test platform, corresponding to a speedup of over six times compared to the default CPU-only serial FS workflow.
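The task-level idea can be sketched in miniature, under the assumption (invented here, not taken from the paper) of one GPU slot and four CPU workers: subject pipelines run concurrently, but each pipeline's GPU-accelerated stage must hold a scheduler-managed slot, so the GPU is never oversubscribed:

```python
# Task-level parallel pipeline with a resource scheduler (sketch).
# Assumptions for illustration: 4 CPU workers, 1 GPU slot; the stage
# workloads are placeholders, not FreeSurfer timings.
import threading
from concurrent.futures import ThreadPoolExecutor

gpu_slot = threading.Semaphore(1)   # only one pipeline on the GPU at once
lock = threading.Lock()
gpu_users = 0
peak_gpu_users = 0
done = []

def process_subject(subject_id):
    global gpu_users, peak_gpu_users
    # CPU-only preprocessing stage (placeholder work)
    _ = sum(i * i for i in range(10_000))
    # GPU-accelerated stage: must hold the scheduler's GPU slot
    with gpu_slot:
        with lock:
            gpu_users += 1
            peak_gpu_users = max(peak_gpu_users, gpu_users)
        _ = sum(i for i in range(10_000))   # stand-in for GPU work
        with lock:
            gpu_users -= 1
    # CPU-only postprocessing stage
    with lock:
        done.append(subject_id)

with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(process_subject, range(8)))
```

The design point matches the abstract: the orchestration lives entirely in the scheduling script, so the per-subject workflow code needs no modification.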

  5. CPU0213, a novel endothelin type A and type B receptor antagonist, protects against myocardial ischemia/reperfusion injury in rats

    Directory of Open Access Journals (Sweden)

    Z.Y. Wang

    2011-11-01

    Full Text Available The efficacy of endothelin receptor antagonists in protecting against myocardial ischemia/reperfusion (I/R) injury is controversial, and the mechanisms remain unclear. The aim of this study was to investigate the effects of CPU0213, a novel endothelin type A and type B receptor antagonist, on myocardial I/R injury and to explore the mechanisms involved. Male Sprague-Dawley rats weighing 200-250 g were randomized to three groups (6-7 per group): group 1, Sham; group 2, I/R + vehicle, in which rats were subjected to in vivo myocardial I/R injury by ligation of the left anterior descending coronary artery and 0.5% sodium carboxymethyl cellulose (1 mL/kg) was injected intraperitoneally immediately prior to coronary occlusion; and group 3, I/R + CPU0213, in which rats were subjected to identical surgical procedures and CPU0213 (30 mg/kg) was injected intraperitoneally immediately prior to coronary occlusion. Infarct size, cardiac function and biochemical changes were measured. CPU0213 pretreatment reduced infarct size as a percentage of the ischemic area by 44.5% (I/R + vehicle: 61.3 ± 3.2% vs I/R + CPU0213: 34.0 ± 5.5%, P < 0.05) and improved ejection fraction by 17.2% (I/R + vehicle: 58.4 ± 2.8% vs I/R + CPU0213: 68.5 ± 2.2%, P < 0.05) compared to vehicle-treated animals. This protection was associated with inhibition of myocardial inflammation and oxidative stress. Moreover, the reduction in Akt (protein kinase B) and endothelial nitric oxide synthase (eNOS) phosphorylation induced by myocardial I/R injury was limited by CPU0213 (P < 0.05). These data suggest that CPU0213, a non-selective antagonist, has protective effects against myocardial I/R injury in rats, which may be related to the Akt/eNOS pathway.

  6. Comparative Performance Analysis of Best Performance Round Robin Scheduling Algorithm (BPRR) using Dynamic Time Quantum with Priority Based Round Robin (PBRR) CPU Scheduling Algorithm in Real Time Systems

    OpenAIRE

    Pallab Banerjee; Talat Zabin; ShwetaKumai; Pushpa Kumari

    2015-01-01

    Round Robin Scheduling algorithm is designed especially for Real Time Operating Systems (RTOS). It is a preemptive CPU scheduling algorithm which switches between the processes when a static time quantum expires. The existing Round Robin CPU scheduling algorithm cannot be implemented in real-time operating systems due to its high context-switch rate, large waiting time, large response time, large turnaround time and low throughput. In this paper a new algorithm is presented called Best Performan...
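The baseline preemptive round-robin behaviour that both BPRR and PBRR build on can be sketched as a simple simulation. This is the generic textbook algorithm with a fixed quantum and all processes arriving at t = 0; the dynamic-quantum and priority refinements from the paper are not reproduced here:

```python
# Round-robin CPU scheduling simulation (fixed quantum, all processes
# arriving at t = 0). Computes per-process turnaround and waiting time.
from collections import deque

def round_robin(bursts, quantum):
    remaining = list(bursts)
    ready = deque(range(len(bursts)))
    t = 0
    completion = [0] * len(bursts)
    while ready:
        p = ready.popleft()
        run = min(quantum, remaining[p])    # run for one quantum or less
        t += run
        remaining[p] -= run
        if remaining[p] > 0:
            ready.append(p)                 # preempted: back of the queue
        else:
            completion[p] = t
    turnaround = completion                 # arrival time is 0 for all
    waiting = [ta - b for ta, b in zip(turnaround, bursts)]
    return turnaround, waiting

# Three processes with burst times 3, 4 and 2 units, quantum 2
turnaround, waiting = round_robin([3, 4, 2], quantum=2)
# turnaround == [7, 9, 6], waiting == [4, 5, 4]
```

The metrics the abstract criticizes (waiting time, turnaround time, context switches) fall directly out of such a simulation, which is how variants like dynamic-quantum schemes are typically compared.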

  7. Fluency and Accuracy in Alphabet Writing by Keyboarding: A Cross-Sectional Study in Spanish-Speaking Children With and Without Learning Disabilities.

    Science.gov (United States)

    Bisschop, Elaine; Morales, Celia; Gil, Verónica; Jiménez-Suárez, Elizabeth

    2016-04-11

    The aim of this study was to analyze whether children with and without difficulties in handwriting, spelling, or both differed in alphabet writing when using a keyboard. The total sample consisted of 1,333 children from Grades 1 through 3. Scores on the spelling and handwriting factors from the Early Grade Writing Assessment (Jiménez, in press) were used to assign the participants to one of four groups with different ability patterns: poor handwriters, poor spellers, a mixed group, and typically achieving students. Groups were equalized by a matching strategy, resulting in a final sample of 352 children. A MANOVA was executed to analyze effects of group and grade on orthographic motor integration (fluency of alphabet writing) and the number of omissions when writing the alphabet (accuracy of alphabet writing) by keyboard writing mode. The results indicated that poor handwriters did not differ from typically achieving children in both variables, whereas the poor spellers did perform below the typical achievers and the poor handwriters. The difficulties of poor handwriters seem to be alleviated by the use of the keyboard; however, children with spelling difficulties might need extra instruction to become fluent keyboard writers.

  8. 保健型人机键盘的参数优化与工效学设计%Optimization and Ergonomics Design of Human-computer Keyboard

    Institute of Scientific and Technical Information of China (English)

    楚杰

    2009-01-01

    Objective: This article explores the relationship between keyboard parameter design and working comfort, aiming to design a health-oriented human-computer keyboard with a reasonable layout and optimized parameters. Methods: A questionnaire survey was combined with ergonomic measurement; a sample of 100 people was surveyed, and hand shape and finger task allocation were measured to derive keyboard operation patterns. Results: The main keyboard adopts a split design, the Backspace and Delete keys are moved onto the main keyboard, a group of shortcut function keys is added, a palm rest is added, and the colors and materials of the keycaps are reselected, completing a preliminary design of a new integrated office keyboard.

  9. A COMPARATIVE STUDY OF PROGRAMMED AND TRADITIONAL TECHNIQUES FOR TEACHING MUSIC READING IN THE UPPER ELEMENTARY SCHOOLS, UTILIZING A KEYBOARD APPROACH. FINAL REPORT.

    Science.gov (United States)

    MANDLE, WILLIAM DEE

    The purpose of this study was to initiate a program for teaching music reading skills, using the piano keyboard in combination with programmed learning, and to compare it with conventional methods of music instruction. Fourth, fifth, and sixth graders at one Cleveland public school comprised the control group receiving conventional music reading…

  10. Using Interpretative Phenomenological Analysis in a Mixed Methods Research Design to Explore Music in the Lives of Mature Age Amateur Keyboard Players

    Science.gov (United States)

    Taylor, Angela

    2015-01-01

    This article discusses the use of interpretative phenomenological analysis (IPA) in a mixed methods research design with reference to five recent publications about music in the lives of mature age amateur keyboard players. It explores the links between IPA and the data-gathering methods of "Rivers of Musical Experience",…

  11. Implementing of Light Script Engine for Simulating Keyboard and Mouse%轻量级键盘鼠标模拟脚本引擎实现*

    Institute of Scientific and Technical Information of China (English)

    吴文辉; 任毅

    2013-01-01

    The paper studies the light script engine for simulating keyboard and mouse, giving the key technology of programming and algorithms.%文章研究了轻量级键盘鼠标模拟脚本引擎,给出了编程实现的关键技术和算法。

  13. A Voice-Detecting Sensor and a Scanning Keyboard Emulator to Support Word Writing by Two Boys with Extensive Motor Disabilities

    Science.gov (United States)

    Lancioni, Giulio E.; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Green, Vanessa; Chiapparino, Claudia; Stasolla, Fabrizio; Oliva, Doretta

    2009-01-01

    The present study assessed the use of a voice-detecting sensor interfaced with a scanning keyboard emulator to allow two boys with extensive motor disabilities to write. Specifically, the study (a) compared the effects of the voice-detecting sensor with those of a familiar pressure sensor on the boys' writing time, (b) checked which of the sensors…

  14. Gaining the upper hand: comparison of alphabetic and keyboard positions as spatial features of letters producing distinct S-R compatibility effects.

    Science.gov (United States)

    Kozlik, Julia; Neumann, Roland

    2013-09-01

    The present study explored which stimulus feature, alphabetic or keyboard position, primarily influences letter processing in different task settings. In Experiment 1 (alphabetic position judgment) a response side effect (faster responses when the location of letters within the alphabet or on the keyboard maps onto the response hand) could be observed for alphabetic position as the task-relevant stimulus feature. In Experiments 2 and 3 participants responded to a non-spatial stimulus feature (uppercase-lowercase classification) so that both attributes can be characterized as task-irrelevant. The pattern indicated that a keyboard position-hand correspondence effect emerged independent of the time window (after stimulus onset) in which the response was given. However, an alphabetic position-hand correspondence effect only emerged when participants were forced to delay their responses by 450 ms. The overall pattern indicated that although both features were processed and translated into a spatial code reflecting their position within the alphabet vs. on the keyboard, the relevance of these features to the task as well as the time that elapsed since stimulus onset determined which attribute of the letters was effective in yielding a stimulus-response compatibility effect.

  15. A Comparison of Accuracy and Rate of Transcription by Adults with Learning Disabilities Using a Continuous Speech Recognition System and a Traditional Computer Keyboard

    Science.gov (United States)

    Millar, Diane C.; McNaughton, David B.; Light, Janice C.

    2005-01-01

    A single-subject, alternating-treatments design was implemented for three adults with learning disabilities to compare the transcription of college-level texts using a speech recognition system and a traditional keyboard. The accuracy and rate of transcribing after editing was calculated for each transcribed passage. The results provide evidence…

  16. Design Principles and Example of Control Keyboard for Paver%摊铺机控制键盘设计原则与实现

    Institute of Scientific and Technical Information of China (English)

    欧青立; 何克忠

    2001-01-01

    The keyboard for the control system of an automatic paver based on an industrial PC must be developed according to the paver's control objects and operating procedures, because a standard PC keyboard cannot satisfy these requirements. This paper discusses the design principles of an IPC real-time control keyboard meeting the requirements of an automatic asphalt-concrete paver, and then describes the hardware and software scheme of the special control keyboard for the LTU125A automatic asphalt-concrete paver.

  17. Continuity, Change and Mature Musical Identity Construction: Using "Rivers of Musical Experience" to Trace the Musical Lives of Six Mature-Age Keyboard Players

    Science.gov (United States)

    Taylor, Angela

    2011-01-01

    "Rivers of Musical Experience" were used as a research tool to explore the wide range of musical experiences and concomitant identity construction that six amateur keyboard players over the age of 55 brought to their learning as mature adults. It appears that significant changes in their lives acted as triggers for them to engage in musical…

  18. Alphabet Writing and Allograph Selection as Predictors of Spelling in Sentences Written by Spanish-Speaking Children Who Are Poor or Good Keyboarders

    Science.gov (United States)

    Peake, Christian; Diaz, Alicia; Artiles, Ceferino

    This study examined the relationship and degree of predictability that the fluency of writing the alphabet from memory and the selection of allographs have on measures of fluency and accuracy of spelling in a free-writing sentence task when keyboarding. The "Test Estandarizado para la Evaluación de la Escritura con Teclado"…

  20. hybridMANTIS: a CPU-GPU Monte Carlo method for modeling indirect x-ray detectors with columnar scintillators.

    Science.gov (United States)

    Sharma, Diksha; Badal, Andreu; Badano, Aldo

    2012-04-21

    The computational modeling of medical imaging systems often requires obtaining a large number of simulated images with low statistical uncertainty which translates into prohibitive computing times. We describe a novel hybrid approach for Monte Carlo simulations that maximizes utilization of CPUs and GPUs in modern workstations. We apply the method to the modeling of indirect x-ray detectors using a new and improved version of the code MANTIS, an open source software tool used for the Monte Carlo simulations of indirect x-ray imagers. We first describe a GPU implementation of the physics and geometry models in fastDETECT2 (the optical transport model) and a serial CPU version of the same code. We discuss its new features like on-the-fly column geometry and columnar crosstalk in relation to the MANTIS code, and point out areas where our model provides more flexibility for the modeling of realistic columnar structures in large area detectors. Second, we modify PENELOPE (the open source software package that handles the x-ray and electron transport in MANTIS) to allow direct output of location and energy deposited during x-ray and electron interactions occurring within the scintillator. This information is then handled by optical transport routines in fastDETECT2. A load balancer dynamically allocates optical transport showers to the GPU and CPU computing cores. Our hybridMANTIS approach achieves a significant speed-up factor of 627 when compared to MANTIS and of 35 when compared to the same code running only in a CPU instead of a GPU. Using hybridMANTIS, we successfully hide hours of optical transport time by running it in parallel with the x-ray and electron transport, thus shifting the computational bottleneck from optical to x-ray transport. The new code requires much less memory than MANTIS and, as a result, allows us to efficiently simulate large area detectors.

  1. 键盘按键信息辐射泄漏研究%Study on Electromagnetic Compromising Emanations of Keyboard

    Institute of Scientific and Technical Information of China (English)

    周一帆; 吕英华

    2014-01-01

    Compromising electromagnetic emanations from a keyboard can be detected covertly from a considerable distance. Among such emanations, modulated spurious carriers offer stronger resistance to interference, and the "red" (information-bearing) signal can be extracted from them with a good signal-to-noise ratio, so these carriers were selected for recovering keyboard information clearly. First, spectrum comparison with a spectrum analyzer was used to identify a frequency band of interest; the carriers were then acquired and demodulated at that frequency with a broadband receiver, and a restoration algorithm was applied to the demodulated red signal to reproduce the keystrokes. Experimental results show that this interception technique is effective. Finally, based on the experimental scheme, the feasibility of miniaturized, practical electromagnetic interception of keyboard signals is discussed.
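The intercept-and-demodulate chain described above can be caricatured in a few lines: generate a spurious carrier whose amplitude is keyed on and off by a keystroke bit stream, then recover the bits by envelope detection (rectify and average over one carrier period) followed by thresholding. All parameters below are invented for illustration and are far simpler than a real TEMPEST setup:

```python
# Toy amplitude-keyed carrier plus envelope-detector demodulation.
# Invented parameters: 10 kHz sample rate, 1 kHz carrier, 10 ms bits.
import math

FS = 10_000            # samples per second
F_CARRIER = 1_000      # Hz -> 10 samples per carrier cycle
BIT_SAMPLES = 100      # 10 ms per bit

bits = [1, 0, 1, 1, 0, 0, 1, 0]   # stand-in for keystroke data

# On-off keyed "spurious carrier": amplitude follows the bit stream.
signal = [b * math.sin(2 * math.pi * F_CARRIER * n / FS)
          for b in bits for n in range(BIT_SAMPLES)]

# Envelope detector: rectify, then average over one carrier period.
PERIOD = FS // F_CARRIER
envelope = [sum(abs(x) for x in signal[max(0, i - PERIOD + 1):i + 1]) / PERIOD
            for i in range(len(signal))]

# Sample each bit in the middle and threshold (|sin| averages about 0.64).
recovered = [1 if envelope[k * BIT_SAMPLES + BIT_SAMPLES // 2] > 0.3 else 0
             for k in range(len(bits))]
```

In the paper's setting the spectrum-comparison step first locates the carrier frequency; the sketch assumes the carrier is already known.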

  2. A Case Study of MasterMind Chess: Comparing Mouse/Keyboard Interaction with Kinect-Based Gestural Interface

    Directory of Open Access Journals (Sweden)

    Gabriel Alves Mendes Vasiljevic

    2016-01-01

    Full Text Available As gestural interfaces emerged as a new type of user interface, their use has been vastly explored by the entertainment industry to better immerse the player in games. Despite being mainly used in dance and sports games, little use was made of gestural interaction in more slow-paced genres, such as board games. In this work, we present a Kinect-based gestural interface for an online and multiplayer chess game and describe a case study with users with different playing skill levels. Comparing the mouse/keyboard interaction with the gesture-based interaction, the results of the activity were synthesized into lessons learned regarding general usability and design of game control mechanisms. These results could be applied to slow-paced board games like chess. Our findings indicate that gestural interfaces may not be suitable for competitive chess matches, yet it can be fun to play while using them in casual matches.

  3. Blockade of L-type calcium channel in myocardium and calcium-induced contractions of vascular smooth muscle by CPU 86017.

    Science.gov (United States)

    Dai, De-zai; Hu, Hui-juan; Zhao, Jing; Hao, Xue-mei; Yang, Dong-mei; Zhou, Pei-ai; Wu, Cai-hong

    2004-04-01

    To assess the blockade by CPU 86017 on the L-type calcium channels in the myocardium and on the Ca(2+)-related contractions of vascular smooth muscle. The whole-cell patch-clamp was applied to investigate the blocking effect of CPU 86017 on the L-type calcium current in isolated guinea pig myocytes, and contractions by KCl or phenylephrine (Phe) of the isolated rat tail arteries were measured. Suppression of the L-type current of the isolated myocytes by CPU 86017 was moderate, in a time- and concentration-dependent manner and with no influence on the activation and inactivation curves. The IC(50) was 11.5 micromol/L. The suppressive effect of CPU 86017 on vaso-contractions induced by KCl 100 mmol/L or phenylephrine 1 micromol/L in KH solution (phase 1), Ca(2+)-free KH solution (phase 2), and by addition of CaCl(2) into Ca(2+)-free KH solution (phase 3) was observed. The IC(50) to suppress vaso-contractions by calcium entry via the receptor-operated channel (ROC) and voltage-dependent channel (VDC) was 0.324 micromol/L and 16.3 micromol/L, respectively. The relative potency of CPU 86017 to suppress vascular tone by Ca(2+) entry through ROC and VDC is 1/187 of prazosin and 1/37 of verapamil, respectively. The blocking effects of CPU 86017 on the L-type calcium channel of myocardium and vessel are moderate and non-selective. CPU 86017 is approximately 50 times more potent in inhibiting ROC than VDC.
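IC(50) values like those above are read off concentration-response curves. As a generic illustration (not the authors' data analysis), one can model fractional inhibition with a Hill equation and solve numerically for the concentration giving 50% effect; only the 11.5 micromol/L figure is taken from the abstract, and the Hill slope of 1 is an assumption:

```python
# Generic Hill-equation inhibition model and numerical IC50 extraction.
# Illustrative parameters: IC50 = 11.5 umol/L (from the abstract),
# Hill slope assumed to be 1.

def inhibition(c, ic50=11.5, hill=1.0):
    """Fractional block of the response at concentration c (umol/L)."""
    return c ** hill / (c ** hill + ic50 ** hill)

def find_ic50(f, lo=1e-6, hi=1e6, tol=1e-9):
    """Bisection (on a log scale) for the concentration giving 50% effect."""
    while hi - lo > tol * hi:
        mid = (lo * hi) ** 0.5          # geometric midpoint
        if f(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return (lo * hi) ** 0.5

ic50_est = find_ic50(inhibition)        # recovers ~11.5
```

Bisection on a log scale is used because drug concentrations in such assays span several orders of magnitude.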

  4. Blockade of L-type calcium channel in myocardium and calcium-induced contractions of vascular smooth muscle by CPU 86017

    Institute of Scientific and Technical Information of China (English)

    De-zai DAI; Hui-juan HU; Jing ZHAO; Xue-mei HAO; Dong-mei YANG; Pei-ai ZHOU; Cai-hong WU

    2004-01-01

    AIM: To assess the blockade by CPU 86017 on the L-type calcium channels in the myocardium and on the Ca2+-related contractions of vascular smooth muscle. METHODS: The whole-cell patch-clamp was applied to investigate the blocking effect of CPU 86017 on the L-type calcium current in isolated guinea pig myocytes, and contractions by KCl or phenylephrine (Phe) of the isolated rat tail arteries were measured. RESULTS: Suppression of the L-type current of the isolated myocytes by CPU 86017 was moderate, in a time- and concentration-dependent manner and with no influence on the activation and inactivation curves. The IC50 was 11.5 μmol/L. The suppressive effect of CPU 86017 on vaso-contractions induced by KCl 100 mmol/L or phenylephrine 1 μmol/L in KH solution (phase 1), Ca2+-free KH solution (phase 2), and by addition of CaCl2 into Ca2+-free KH solution (phase 3) was observed. The IC50 to suppress vaso-contractions by calcium entry via the receptor-operated channel (ROC) and voltage-dependent channel (VDC) was 0.324 μmol/L and 16.3 μmol/L, respectively. The relative potency of CPU 86017 to suppress vascular tone by Ca2+ entry through ROC and VDC is 1/187 of prazosin and 1/37 of verapamil, respectively. CONCLUSION: The blocking effects of CPU 86017 on the L-type calcium channel of myocardium and vessel are moderate and non-selective. CPU 86017 is approximately 50 times more potent in inhibiting ROC than VDC.

  5. General purpose parallel programing using new generation graphic processors: CPU vs GPU comparative analysis and opportunities research

    Directory of Open Access Journals (Sweden)

    Donatas Krušna

    2013-03-01

    Full Text Available OpenCL, a modern programming language for parallel heterogeneous systems, enables problems to be partitioned and executed on modern CPU and GPU hardware, which increases the performance of such applications considerably. Since GPUs are optimized for, and specialize in, floating-point and vector operations, they greatly outperform general-purpose CPUs in this field. The language greatly simplifies the creation of applications for heterogeneous systems, since it is cross-platform, vendor-independent, and embeddable, and can therefore be used from any general-purpose programming language via libraries. More and more tools are being developed that are aimed at low-level programmers as well as scientists and engineers who develop applications or libraries for today's CPUs and GPUs and other heterogeneous platforms. The tendency today is to increase the number of cores or CPUs in the hope of increasing performance; however, the increasing difficulty of parallelizing applications for such systems and the ever-increasing overhead of communication and synchronization limit the potential performance. This means there is a point at which adding cores or CPUs no longer increases application performance, and can even diminish it. Even though parallel programming and GPUs with stream-computing capabilities have decreased the need for communication and synchronization (since only the final result needs to be committed to memory), this still remains a weak link in developing such applications.

  6. Robotic goalie with 3 ms reaction time at 4% CPU load using event-based dynamic vision sensor.

    Science.gov (United States)

    Delbruck, Tobi; Lang, Manuel

    2013-01-01

    Conventional vision-based robotic systems that must operate quickly require high video frame rates and consequently high computational costs. Visual response latencies are lower-bound by the frame period, e.g., 20 ms for 50 Hz frame rate. This paper shows how an asynchronous neuromorphic dynamic vision sensor (DVS) silicon retina is used to build a fast self-calibrating robotic goalie, which offers high update rates and low latency at low CPU load. Independent and asynchronous per pixel illumination change events from the DVS signify moving objects and are used in software to track multiple balls. Motor actions to block the most "threatening" ball are based on measured ball positions and velocities. The goalie also sees its single-axis goalie arm and calibrates the motor output map during idle periods so that it can plan open-loop arm movements to desired visual locations. Blocking capability is about 80% for balls shot from 1 m from the goal even with the fastest-shots, and approaches 100% accuracy when the ball does not beat the limits of the servo motor to move the arm to the necessary position in time. Running with standard USB buses under a standard preemptive multitasking operating system (Windows), the goalie robot achieves median update rates of 550 Hz, with latencies of 2.2 ± 2 ms from ball movement to motor command at a peak CPU load of less than 4%. Practical observations and measurements of USB device latency are provided.
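The core control step above, predicting where an approaching ball will cross the goal line from its measured position and velocity, reduces to a linear extrapolation clamped to the arm's travel. A hedged sketch, with coordinates and arm limits invented for illustration:

```python
# Predict where a tracked ball crosses the goal line x = 0 and clamp
# the command to the arm's travel. Geometry is invented for illustration:
# x is the distance to the goal line, y the lateral position (meters).
ARM_MIN, ARM_MAX = -0.3, 0.3     # reachable arm positions (m), assumed

def arm_command(x, y, vx, vy):
    """Return the target arm position, or None if the ball is not approaching."""
    if vx >= 0:                   # moving away from (or parallel to) the goal
        return None
    t_hit = x / -vx               # time until the ball reaches x = 0
    y_hit = y + vy * t_hit        # linearly extrapolated crossing height
    return max(ARM_MIN, min(ARM_MAX, y_hit))

# Ball 1 m out, drifting sideways at 0.1 m/s while closing at 2 m/s:
# crosses the goal line 0.5 s later at y = 0.25 m.
target = arm_command(1.0, 0.2, -2.0, 0.1)
```

In the actual system the position and velocity estimates come from clustering DVS events rather than frames, which is what makes millisecond-scale reaction times possible.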

  7. Robotic Goalie with 3ms Reaction Time at 4% CPU Load Using Event-Based Dynamic Vision Sensor

    Directory of Open Access Journals (Sweden)

    Tobi eDelbruck

    2013-11-01

    Full Text Available Conventional vision-based robotic systems that must operate quickly require high video frame rates and consequently high computational costs. Visual response latencies are lower-bound by the frame period, e.g. 20 ms for 50 Hz frame rate. This paper shows how an asynchronous neuromorphic dynamic vision sensor (DVS) silicon retina is used to build a fast self-calibrating robotic goalie, which offers high update rates and low latency at low CPU load. Independent and asynchronous per pixel illumination change events from the DVS signify moving objects and are used in software to track multiple balls. Motor actions to block the most threatening ball are based on measured ball positions and velocities. The goalie also sees its single-axis goalie arm and calibrates the motor output map during idle periods so that it can plan open-loop arm movements to desired visual locations. Blocking capability is about 80% for balls shot from 1 m from the goal even with the fastest shots, and approaches 100% accuracy when the ball does not beat the limits of the servo motor to move the arm to the necessary position in time. Running with standard USB buses under a standard preemptive multitasking operating system (Windows), the goalie robot achieves median update rates of 550 Hz, with latencies of 2.2 ± 2 ms from ball movement to motor command at a peak CPU load of less than 4%. Practical observations and measurements of USB device latency are provided.

  8. C Language Extensions for Hybrid CPU/GPU Programming with StarPU

    OpenAIRE

    Courtès, Ludovic

    2013-01-01

    Modern platforms used for high-performance computing (HPC) include machines with both general-purpose CPUs, and "accelerators", often in the form of graphical processing units (GPUs). StarPU is a C library to exploit such platforms. It provides users with ways to define "tasks" to be executed on CPUs or GPUs, along with the dependencies among them, and it automatically schedules them over all the available processing units. In doing so, it also relieves programmers from the need to know the ...
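The programming model sketched above — tasks with declared dependencies, dispatched to whichever processing unit suits them — can be mimicked in miniature. This is an illustrative scheduler with invented task names; StarPU's real C API (codelets, data handles) is quite different:

```python
# Miniature dependency-aware task scheduler in the spirit of StarPU:
# each task declares its dependencies and a preferred processing unit.
# Task names and units are invented for illustration.

tasks = {
    "load":    {"deps": [],                "unit": "cpu"},
    "fft":     {"deps": ["load"],          "unit": "gpu"},
    "filter":  {"deps": ["load"],          "unit": "gpu"},
    "combine": {"deps": ["fft", "filter"], "unit": "cpu"},
}

def schedule(tasks):
    """Emit (task, unit) pairs in an order that respects dependencies."""
    done, order = set(), []
    while len(done) < len(tasks):
        ready = [t for t, spec in tasks.items()
                 if t not in done and all(d in done for d in spec["deps"])]
        if not ready:
            raise ValueError("dependency cycle")
        for t in sorted(ready):             # deterministic tie-break
            order.append((t, tasks[t]["unit"]))
            done.add(t)
    return order

order = schedule(tasks)
```

A real runtime would launch the ready tasks concurrently on their units instead of emitting a sequential order, but the dependency bookkeeping is the same.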

  9. Asymptotic fitting optimization technology for source-to-source compile system on CPU-GPU architecture%面向CPU-GPU源到源编译系统的渐近拟合优化方法

    Institute of Scientific and Technical Information of China (English)

    魏洪昌; 朱正东; 董小社; 宁洁

    2016-01-01

    To address the problem of inadequate performance optimization after applications are developed for or ported to CPU-GPU heterogeneous parallel systems, a new approach is proposed that combines asymptotic fitting optimization with source-to-source compilation. The approach translates C code annotated with directives into CUDA code and profiles the generated code several times; based on the source program's characteristics and hardware information, it then completes source-to-source compilation and optimization of the generated code automatically. A prototype system based on the approach is also realized in this paper. Functionality and performance evaluations of the prototype show that the generated CUDA code is functionally equivalent to the original C code while its performance is improved significantly; compared with CUDA benchmarks, the generated code clearly outperforms code produced by other source-to-source compilation techniques.

  10. Effects of CPU 86017 (chlorobenzyltetrahydroberberine chloride) and its enantiomers on thyrotoxicosis-induced overactive endothelin-1 system and oxidative stress in rat testes.

    Science.gov (United States)

    Tang, XiaoYun; Qi, MinYou; Dai, DeZai; Zhang, Can

    2006-08-01

    To study the effects of CPU 86017, a berberine derivative, and its four enantiomers on thyrotoxicosis-induced oxidative stress and the excessive endothelin-1 system in rat testes. Adult male SD rats were given high-dose L-thyroxin (0.2 mg/kg subcutaneously) once daily for 10 days to develop thyrotoxicosis. Subsets of the rats were treated with CPU 86017 or its four enantiomers (SR, SS, RS, and RR) once daily from day 6 to day 10. The alterations of redox, nitric oxide synthase, and endothelin-1 system in testes were examined by spectrophotometry and reverse transcriptase-polymerase chain reaction assay. After 10 days of high-dose L-thyroxin administration, increased mRNA expression of prepro-endothelin-1 and endothelin-converting enzyme was observed in the rat testes, accompanied by an elevated inducible nitric oxide synthase activity and oxidative stress. CPU 86017 and its enantiomer SR significantly improved these abnormalities. High-dose L-thyroxin results in an overactive endothelin-1 system and oxidative stress in adult rat testis. CPU 86017 and its enantiomer SR suppressed the excessive ET-1 system by improving oxidative stress, and SR exhibited more potent efficacy than CPU 86017 and other enantiomers.

  11. KEYBOARD DESIGNING AND PROGRAMMING BASED ON M68HC08 SERIES MCU%基于M68HC08系列单片机的键盘设计与编程

    Institute of Scientific and Technical Information of China (English)

    王宜怀; 王林; 张志平

    2001-01-01

    In this paper, the programming method for the keyboard interrupt module of Motorola's new-generation M68HC08 series MCUs is discussed. A design example of a keyboard interface circuit based on the MC68HC908GP32 MCU is given, and the methods for programming key identification, key-value definition, and keyboard-interrupt handling are discussed.
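Key identification on such MCUs is typically done by scanning a row-column matrix: drive one row low at a time and read the column lines, since a pressed key connects its row to its column. A simulated sketch in Python — the 4x4 layout and key map are invented for illustration, not the MC68HC908GP32 circuit from the paper:

```python
# Simulated 4x4 matrix keyboard scan: drive each row low in turn and
# read which columns go low. Layout and key codes are illustrative.

KEYMAP = [["1", "2", "3", "A"],
          ["4", "5", "6", "B"],
          ["7", "8", "9", "C"],
          ["*", "0", "#", "D"]]

def read_columns(driven_row, pressed):
    """Hardware stand-in: a column reads active iff a pressed key joins
    it to the row currently driven low."""
    return [(driven_row, col) in pressed for col in range(4)]

def scan(pressed):
    """One full scan pass; returns the labels of all detected keys."""
    keys = []
    for row in range(4):                     # drive row `row` low
        for col, active in enumerate(read_columns(row, pressed)):
            if active:
                keys.append(KEYMAP[row][col])
    return keys

detected = scan({(1, 2), (3, 0)})    # keys "6" and "*" held down
```

On the real MCU the keyboard interrupt module removes the need to poll continuously: the scan routine runs only when a key edge wakes the part, which is the feature the paper's interrupt service routines exploit.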

  12. Towards 100,000 CPU Cycle-Scavenging by Genetic Algorithms

    Science.gov (United States)

    Globus, Al; Biegel, Bryan A. (Technical Monitor)

    2001-01-01

    We examine a web-centric design using standard tools such as web servers, web browsers, PHP, and mySQL. We also consider the applicability of Information Power Grid tools such as the Globus (no relation to the author) Toolkit. We intend to implement this architecture with JavaGenes running on at least two cycle-scavengers: Condor and United Devices. JavaGenes, a genetic algorithm code written in Java, will be used to evolve multi-species reactive molecular force field parameters.

  13. Utilizing Graphics Processing Units for Network Anomaly Detection

    Science.gov (United States)

    2012-09-13

    matching system using deterministic finite automata and extended finite automata, resulting in a speedup of 9x over the CPU implementation [SGO09]. Kovach accelerated malware detection via a graphics processing unit [Kov10].

  14. Optimization of a Superconducting Magnetic Energy Storage Device via a CPU-Efficient Semi-Analytical Simulation

    CERN Document Server

    Dimitrov, I K; Solovyov, V F; Chubar, O; Li, Qiang

    2014-01-01

    Recent advances in second-generation (YBCO) high-temperature superconducting wire could potentially enable the design of super-high-performance energy storage devices that combine the high energy density of chemical storage with the high power of superconducting magnetic storage. However, the high aspect ratio and considerable filament size of these wires require the concomitant development of dedicated optimization methods that account for both the critical current density and the ac losses in type II superconductors. Here, we report on the novel application and results of a CPU-efficient semi-analytical computer code based on the Radia 3D magnetostatics software package. Our algorithm is used to simulate and optimize the energy density of a superconducting magnetic energy storage device model under design constraints such as overall size and number of coils. The rapid performance of the code rests on analytical calculations of the magnetic field based on an efficient implementation of the Biot-Savart...
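The Biot-Savart law underlying such field calculations can be illustrated with a small numerical check in Python: sum the contributions of straight segments around a circular current loop and compare against the closed-form field at the loop center, B = mu0*I/(2R). This is a toy discretization, not the Radia implementation:

```python
import math

def loop_field_center(current, radius, segments=2000):
    """Magnetic field (tesla) at the center of a circular loop, summing
    Biot-Savart contributions dB = mu0/(4*pi) * I * (dl x r_hat) / r^2
    of straight chord segments."""
    mu0 = 4e-7 * math.pi
    bz = 0.0
    for k in range(segments):
        t0 = 2 * math.pi * k / segments
        t1 = 2 * math.pi * (k + 1) / segments
        tm = (t0 + t1) / 2
        xm, ym = radius * math.cos(tm), radius * math.sin(tm)   # segment midpoint
        dlx = radius * (math.cos(t1) - math.cos(t0))            # chord vector dl
        dly = radius * (math.sin(t1) - math.sin(t0))
        r2 = xm * xm + ym * ym
        # z-component of dl x r, with r pointing from the source to the origin
        cross_z = dlx * (-ym) - dly * (-xm)
        bz += mu0 / (4 * math.pi) * current * cross_z / r2 ** 1.5
    return bz

b_num = loop_field_center(current=100.0, radius=0.05)
b_ana = 4e-7 * math.pi * 100.0 / (2 * 0.05)  # analytic B = mu0*I/(2R)
```

The semi-analytical trick in the paper is to avoid exactly this kind of brute-force segment summation where a closed form exists.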

  15. Design and Development of a Vector Control System of Induction Motor Based on Dual CPU for Electric Vehicle

    Institute of Scientific and Technical Information of China (English)

    孙逢春; 翟丽; 张承宁; 彭连云

    2003-01-01

    A vector control system for an electric vehicle (EV) induction motor drive system is designed and developed. Its hardware, based on a dual CPU (the 80C196KC microcontroller and the TMS320F2407 DSP), is implemented. The fundamental mathematical equations of the induction motor in the general synchronously rotating reference frame (M-T frame) used for vector control are obtained by coordinate transformation. The rotor flux equation and torque equation are deduced. From these equations, an induction motor mathematical model and a rotor flux observer model are built separately. The rotor-flux field-oriented vector control method is implemented based on these models in the system software, and some simulation results with Matlab/Simulink are given. The simulation results show that the vector control system for the EV induction motor drive has good static and dynamic performance, and the rotor-flux field-oriented vector control method is verified in practice.
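The coordinate transformation into the synchronously rotating M-T frame can be sketched with the standard Clarke and Park transforms (amplitude-invariant form); this is textbook field-oriented-control math, not the authors' 80C196KC/DSP code:

```python
import math

def clarke(ia, ib, ic):
    """abc -> stationary alpha-beta frame (amplitude-invariant form)."""
    alpha = (2 * ia - ib - ic) / 3
    beta = (ib - ic) / math.sqrt(3)
    return alpha, beta

def park(alpha, beta, theta):
    """alpha-beta -> rotating M-T (d-q) frame at rotor-flux angle theta."""
    d = alpha * math.cos(theta) + beta * math.sin(theta)
    q = -alpha * math.sin(theta) + beta * math.cos(theta)
    return d, q

# Balanced three-phase currents rotating at angle theta map to a
# constant vector in the synchronously rotating frame.
amp, theta = 10.0, 0.7
ia = amp * math.cos(theta)
ib = amp * math.cos(theta - 2 * math.pi / 3)
ic = amp * math.cos(theta + 2 * math.pi / 3)
d, q = park(*clarke(ia, ib, ic), theta)
```

With the frame aligned to the rotor flux, the d (M-axis) component controls flux and the q (T-axis) component controls torque, which is the point of the method.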

  16. Leveraging the checkpoint-restart technique for optimizing CPU efficiency of ATLAS production applications on opportunistic platforms

    CERN Document Server

    Cameron, David; The ATLAS collaboration

    2017-01-01

    Data processing applications of the ATLAS experiment, such as event simulation and reconstruction, spend a considerable amount of time in the initialization phase. This phase includes loading a large number of shared libraries, reading detector geometry and conditions data from external databases, building a transient representation of the detector geometry, and initializing various algorithms and services. In some cases the initialization step can take as long as 10-15 minutes. Such slow initialization, being inherently serial, has a significant negative impact on the overall CPU efficiency of a production job, especially when the job is executed on opportunistic, often short-lived, resources such as commercial clouds or volunteer computing. In order to improve this situation, we can take advantage of the fact that ATLAS runs large numbers of production jobs with similar configuration parameters (e.g. jobs within the same production task). This allows us to checkpoint one job at the end of its configuration step a...
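The checkpoint-restart idea, i.e. pay the slow initialization once and restore its result for subsequent jobs, can be sketched in Python with `pickle`; the actual ATLAS work checkpoints whole processes, so the file name and state layout here are illustrative assumptions:

```python
import os
import pickle
import tempfile
import time

def slow_initialization():
    """Stand-in for the expensive serial setup phase
    (geometry, conditions data, service configuration)."""
    time.sleep(0.1)  # pretend this takes minutes in the real job
    return {"geometry": list(range(1000)), "conditions": {"field": 3.8}}

def get_initialized_state(checkpoint_path):
    """Restore the post-initialization state from a checkpoint if one
    exists; otherwise initialize once and write the checkpoint."""
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path, "rb") as fh:
            return pickle.load(fh)
    state = slow_initialization()
    with open(checkpoint_path, "wb") as fh:
        pickle.dump(state, fh)
    return state

ckpt = os.path.join(tempfile.mkdtemp(), "init.ckpt")
first = get_initialized_state(ckpt)   # pays the initialization cost
second = get_initialized_state(ckpt)  # restores from disk instead
```

The payoff grows with the number of similarly configured jobs that can share one checkpoint.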

  17. Study on and Design of the Color of the Computer Keyboard

    Institute of Scientific and Technical Information of China (English)

    李辉

    2014-01-01

    This paper analyzes problems in the current color design of computer keyboards using the basic theories of color design, studies keyboard design from the two aspects of color functionality and non-functionality, and puts forward a series of improvements.

  18. Comparative Performance Analysis of Intel Xeon Phi, GPU, and CPU: A Case Study from Microscopy Image Analysis.

    Science.gov (United States)

    Teodoro, George; Kurc, Tahsin; Kong, Jun; Cooper, Lee; Saltz, Joel

    2014-05-01

    We study and characterize the performance of operations in an important class of applications on GPUs and Many Integrated Core (MIC) architectures. Our work is motivated by applications that analyze low-dimensional spatial datasets captured by high resolution sensors, such as image datasets obtained from whole slide tissue specimens using microscopy scanners. Common operations in these applications involve the detection and extraction of objects (object segmentation), the computation of features of each extracted object (feature computation), and the characterization of objects based on these features (object classification). In this work, we have identified the data access and computation patterns of operations in the object segmentation and feature computation categories. We systematically implement and evaluate the performance of these operations on modern CPUs, GPUs, and MIC systems for a microscopy image analysis application. Our results show that the performance on a MIC of operations that perform regular data access is comparable to, or sometimes better than, that on a GPU. On the other hand, GPUs are significantly more efficient than MICs for operations that access data irregularly. This is a result of the low performance of MICs for random data access. We have also examined the coordinated use of MICs and CPUs. Our experiments show that a performance-aware task scheduling strategy improves performance by about 1.29× over a first-come-first-served strategy. This allows applications to obtain high performance efficiency on CPU-MIC systems - the example application attained an efficiency of 84% on 192 nodes (3072 CPU cores and 192 MICs).

  19. Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks

    Science.gov (United States)

    Naveros, Francisco; Garrido, Jesus A.; Carrillo, Richard R.; Ros, Eduardo; Luque, Niceto R.

    2017-01-01

    Modeling and simulating the neural structures which make up our central neural system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranged from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity together with a better handling of the synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running in CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications which constitute the main contribution of this study systematically outperform the traditional event- and time-driven techniques under
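The flavor of the bi-fixed-step idea, a coarse step when the dynamics are slow and a fine step when they stiffen near the spiking threshold, can be sketched with a forward-Euler leaky integrate-and-fire neuron in Python; the step sizes, threshold margin, and input drive below are illustrative assumptions, not the paper's method or parameters:

```python
def simulate_lif(drive, t_end, dt_big=1e-3, dt_small=1e-4,
                 tau=0.02, v_rest=-0.065, v_th=-0.050, v_reset=-0.065):
    """Leaky integrate-and-fire neuron, forward Euler, with a bi-fixed-step
    scheme: a coarse step far from threshold, a fine step within 5 mV of it.
    `drive` is the input expressed in volts (R*I lumped together)."""
    v, t, spikes = v_rest, 0.0, []
    while t < t_end:
        dt = dt_small if v > v_th - 0.005 else dt_big
        v += (-(v - v_rest) + drive) / tau * dt
        t += dt
        if v >= v_th:
            spikes.append(t)
            v = v_reset
    return spikes

# Suprathreshold drive: steady state -45 mV is above the -50 mV threshold,
# so the neuron spikes regularly (period ~ tau*ln(4) ~ 28 ms here).
spikes = simulate_lif(drive=0.02, t_end=0.5)
```

In the paper the step-size switch is what keeps stiff models such as Hodgkin-Huxley accurate without paying the fine step everywhere.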

  20. Computational performance comparison of wavefront reconstruction algorithms for the European Extremely Large Telescope on multi-CPU architecture.

    Science.gov (United States)

    Feng, Lu; Fedrigo, Enrico; Béchet, Clémentine; Brunner, Elisabeth; Pirani, Werther

    2012-06-01

    The European Southern Observatory (ESO) is studying the next generation giant telescope, called the European Extremely Large Telescope (E-ELT). With a 42 m diameter primary mirror, it is a significant step up from currently existing telescopes. The E-ELT with its instruments therefore poses new challenges in terms of cost and computational complexity for the control system, including its adaptive optics (AO). Since the conventional matrix-vector multiplication (MVM) method successfully used so far for AO wavefront reconstruction cannot be efficiently scaled to the size of the AO systems on the E-ELT, faster algorithms are needed. Among recently developed wavefront reconstruction algorithms, three are studied in this paper from the point of view of design, implementation, and absolute speed on three multicore multi-CPU platforms. We focus on a single-conjugate AO system for the E-ELT. The algorithms are the MVM, the Fourier transform reconstructor (FTR), and the fractal iterative method (FRiM). This study examines how these algorithms scale with an increasing number of CPUs involved in the computation. We discuss implementation strategies, depending on various CPU architecture constraints, and we present the first quantitative execution times so far at the E-ELT scale. MVM suffers from a large computational burden, making the current computing platform undersized to reach timings short enough for AO wavefront reconstruction. In our study, the FTR currently provides the fastest reconstruction. FRiM is a recently developed algorithm, and several strategies are investigated and presented here in order to implement it for real-time AO wavefront reconstruction and to optimize its execution time. The difficulty of parallelizing the algorithm on such an architecture is highlighted. We also show that FRiM can provide interesting scalability using a sparse matrix approach.

  1. Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks.

    Science.gov (United States)

    Naveros, Francisco; Garrido, Jesus A; Carrillo, Richard R; Ros, Eduardo; Luque, Niceto R

    2017-01-01

    Modeling and simulating the neural structures which make up our central neural system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranged from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity together with a better handling of the synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running in CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications which constitute the main contribution of this study systematically outperform the traditional event- and time-driven techniques under

  2. Molecular Dynamics Simulation of Macromolecules Using Graphics Processing Unit

    CERN Document Server

    Xu, Ji; Ge, Wei; Yu, Xiang; Yang, Xiaozhen; Li, Jinghai

    2010-01-01

    Molecular dynamics (MD) simulation is a powerful computational tool for studying the behavior of macromolecular systems. But many simulations in this field are limited in spatial or temporal scale by the available computational resources. In recent years, the graphics processing unit (GPU) has provided unprecedented computational power for scientific applications. Many MD algorithms suit the multithreaded nature of the GPU. In this paper, MD algorithms for macromolecular systems that run entirely on the GPU are presented. Compared to MD simulation with the free software GROMACS on a single CPU core, our codes achieve about a 10 times speed-up on a single GPU. For validation, we have performed MD simulations of polymer crystallization on the GPU, and the observed results agree perfectly with computations on the CPU. Therefore, our single-GPU codes already provide an inexpensive alternative for macromolecular simulations on traditional CPU clusters, and they can also be used as a basis to develop parallel GPU programs to further spee...
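The core time-stepping loop of an MD code, on CPU or GPU alike, is typically velocity Verlet; here is a minimal Python version on a toy harmonic "bond", checking that total energy is conserved. This is a one-particle sketch of the integrator, not the authors' GPU kernels:

```python
def velocity_verlet(x, v, force, mass, dt, steps):
    """Velocity-Verlet integrator, the standard time-stepping scheme in
    MD codes such as GROMACS: half-kick, drift, recompute force, half-kick."""
    f = force(x)
    for _ in range(steps):
        v += 0.5 * f / mass * dt   # half velocity update
        x += v * dt                # position update
        f = force(x)               # force at the new position
        v += 0.5 * f / mass * dt   # second half velocity update
    return x, v

# Harmonic bond as a toy potential: F = -k*x, energy E = v^2/2 + k*x^2/2.
k, m, dt = 1.0, 1.0, 0.01
x, v = velocity_verlet(1.0, 0.0, lambda x: -k * x, m, dt, steps=10000)
energy = 0.5 * m * v * v + 0.5 * k * x * x  # should stay ~0.5
```

On a GPU, the force evaluation inside this loop is what gets parallelized across threads, one or more per particle.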

  3. Polymer Field-Theory Simulations on Graphics Processing Units

    CERN Document Server

    Delaney, Kris T

    2012-01-01

    We report the first CUDA graphics-processing-unit (GPU) implementation of the polymer field-theoretic simulation framework for determining fully fluctuating expectation values of equilibrium properties for periodic and select aperiodic polymer systems. Our implementation is suitable both for self-consistent field theory (mean-field) solutions of the field equations, and for fully fluctuating simulations using the complex Langevin approach. Running on NVIDIA Tesla T20 series GPUs, we find double-precision speedups of up to 30x compared to single-core serial calculations on a recent reference CPU, while single-precision calculations proceed up to 60x faster than those on the single CPU core. Due to intensive communications overhead, an MPI implementation running on 64 CPU cores remains two times slower than a single GPU.

  4. Noniterative Multireference Coupled Cluster Methods on Heterogeneous CPU-GPU Systems.

    Science.gov (United States)

    Bhaskaran-Nair, Kiran; Ma, Wenjing; Krishnamoorthy, Sriram; Villa, Oreste; van Dam, Hubertus J J; Aprà, Edoardo; Kowalski, Karol

    2013-04-09

    A novel parallel algorithm for noniterative multireference coupled cluster (MRCC) theories, which merges recently introduced reference-level parallelism (RLP) [Bhaskaran-Nair, K.; Brabec, J.; Aprà, E.; van Dam, H. J. J.; Pittner, J.; Kowalski, K. J. Chem. Phys.2012, 137, 094112] with the possibility of accelerating numerical calculations using graphics processing units (GPUs) is presented. We discuss the performance of this approach applied to the MRCCSD(T) method (iterative singles and doubles and perturbative triples), where the corrections due to triples are added to the diagonal elements of the MRCCSD effective Hamiltonian matrix. The performance of the combined RLP/GPU algorithm is illustrated on the example of the Brillouin-Wigner (BW) and Mukherjee (Mk) state-specific MRCCSD(T) formulations.

  5. Mixed precision numerical weather prediction on hybrid GPU-CPU supercomputers

    Science.gov (United States)

    Lapillonne, Xavier; Osuna, Carlos; Spoerri, Pascal; Osterried, Katherine; Charpilloz, Christophe; Fuhrer, Oliver

    2017-04-01

    A new version of the climate and weather model COSMO that runs faster on traditional high performance computing systems with CPUs as well as on heterogeneous architectures using graphics processing units (GPUs) has been developed. The model was in addition adapted to be able to run in "single precision" mode. After discussing the key changes introduced in this new model version and the tools used in the porting approach, we present 3 applications, namely the MeteoSwiss operational weather prediction system, COSMO-LEPS and the CALMO project, which already take advantage of the performance improvement, up to a factor 4, by running on GPU system and using the single precision mode. We discuss how the code changes open new perspectives for scientific research and can enable researchers to get access to a new class of supercomputers.

  6. Keyboard and Mouse Insertion Authorization System Based on a Filter Driver

    Institute of Scientific and Technical Information of China (English)

    姜富强; 郑扣根

    2011-01-01

    In order to ensure the information security of financial ATMs, this paper studies and analyzes the keyboard and mouse drivers in the Windows operating system and, based on a filter driver, designs and implements a keyboard and mouse insertion authorization system. The authorization system can detect the insertion of a USB or PS/2 keyboard or mouse into the computer and keeps the new device disabled until it is authorized. Tests demonstrate that the system can be deployed and run stably and efficiently on Windows XP and Windows 7.

  7. Toward Optimal Computation of Ultrasound Image Reconstruction Using CPU and GPU.

    Science.gov (United States)

    Techavipoo, Udomchai; Worasawate, Denchai; Boonleelakul, Wittawat; Keinprasit, Rachaporn; Sunpetchniyom, Treepop; Sugino, Nobuhiko; Thajchayapong, Pairash

    2016-11-24

    An ultrasound image is reconstructed from echo signals received by the array elements of a transducer. The time of flight of the echo depends on the distance between the focus and the array elements. The received echo signals have to be delayed to make their wave fronts phase-coherent before summing. In digital beamforming, the delays do not always fall on sampled points. Generally, the values of the delayed signals are estimated by the values of the nearest samples. This method is fast and easy, but inaccurate. Other methods are available for increasing the accuracy of the delayed signals and, consequently, the quality of the beamformed signals; for example, in-phase (I)/quadrature (Q) interpolation, which is more time consuming but provides more accurate values than the nearest samples. This paper compares the signals after dynamic receive beamforming, in which the echo signals are delayed using two methods, the nearest-sample method and the I/Q interpolation method. Comparisons of the visual quality of the reconstructed images and the quality of the beamformed signals are reported. Moreover, the computational speeds of these methods are optimized by reorganizing the data processing flow and by applying the graphics processing unit (GPU). The use of single- and double-precision floating-point formats for the intermediate data is also considered. The speeds with and without these optimizations are compared.
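The difference between nearest-sample and interpolated fractional delays can be illustrated in Python with a delayed sinusoid; plain linear interpolation stands in here for the I/Q interpolation discussed in the paper, and the sampling rate, center frequency, and delay are assumed values:

```python
import math

fs = 20e6   # sampling rate (Hz), assumed
f0 = 2e6    # echo center frequency (Hz), assumed
n = 256
signal = [math.sin(2 * math.pi * f0 * i / fs) for i in range(n)]

def delayed(sig, delay_samples, interp):
    """Delay sig by a fractional number of samples using either the
    nearest sample or linear interpolation between neighbours."""
    out = []
    for i in range(len(sig) - 2):
        pos = i + delay_samples
        lo = int(pos)
        if interp == "nearest":
            out.append(sig[round(pos)])
        else:
            frac = pos - lo
            out.append(sig[lo] * (1 - frac) + sig[lo + 1] * frac)
    return out

delay = 0.4  # fractional delay in samples
exact = [math.sin(2 * math.pi * f0 * (i + delay) / fs) for i in range(n - 2)]
err_nearest = max(abs(a - b) for a, b in zip(delayed(signal, delay, "nearest"), exact))
err_linear = max(abs(a - b) for a, b in zip(delayed(signal, delay, "linear"), exact))
```

The interpolated delay tracks the true waveform far more closely, which is exactly the accuracy/cost trade-off the paper quantifies for beamformed images.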

  8. Efficient Execution of Microscopy Image Analysis on CPU, GPU, and MIC Equipped Cluster Systems.

    Science.gov (United States)

    Andrade, G; Ferreira, R; Teodoro, George; Rocha, Leonardo; Saltz, Joel H; Kurc, Tahsin

    2014-10-01

    High performance computing is experiencing a major paradigm shift with the introduction of accelerators, such as graphics processing units (GPUs) and the Intel Xeon Phi (MIC). These processors have made tremendous computing power available at low cost and are transforming machines into hybrid systems equipped with CPUs and accelerators. Although these systems can deliver a very high peak performance, making full use of their resources in real-world applications is a complex problem. Most current applications deployed on these machines still execute on a single processor, leaving the other devices underutilized. In this paper we explore a scenario in which applications are composed of hierarchical data-flow tasks which are allocated to nodes of a distributed-memory machine at coarse grain, but each of which may be composed of several finer-grain tasks which can be allocated to different devices within the node. We propose and implement novel performance-aware scheduling techniques that can be used to allocate tasks to devices. We evaluate our techniques using a pathology image analysis application used to investigate brain cancer morphology, and our experimental evaluation shows that the proposed scheduling strategies significantly outperform other efficient scheduling techniques, such as Heterogeneous Earliest Finish Time (HEFT), in cooperative executions using CPUs, GPUs, and MICs. We also show experimentally that our strategies are less sensitive to inaccuracy in the scheduling input data and that the performance gains are maintained as the application scales.
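A performance-aware assignment policy of the kind evaluated here can be sketched as a greedy earliest-finish-time rule in Python; the device names, per-device task costs, and the round-robin stand-in for first-come-first-served are illustrative assumptions, not the authors' scheduler:

```python
def schedule(tasks, devices, performance_aware):
    """Assign tasks (each a device -> runtime dict) to devices and return
    the makespan. The naive policy hands tasks out round-robin, blind to
    heterogeneity; the performance-aware policy picks, per task, the
    device with the earliest finish time."""
    finish = {d: 0.0 for d in devices}
    for i, cost in enumerate(tasks):
        if performance_aware:
            dev = min(devices, key=lambda d: finish[d] + cost[d])
        else:
            dev = devices[i % len(devices)]
        finish[dev] += cost[dev]
    return max(finish.values())

# Irregular-access tasks run poorly on the MIC; regular ones run well.
devices = ["cpu", "gpu", "mic"]
tasks = [{"cpu": 4.0, "gpu": 1.0, "mic": 6.0},   # irregular: bad on MIC
         {"cpu": 4.0, "gpu": 1.0, "mic": 6.0},
         {"cpu": 3.0, "gpu": 1.5, "mic": 1.2},   # regular: MIC competitive
         {"cpu": 3.0, "gpu": 1.5, "mic": 1.2},
         {"cpu": 3.0, "gpu": 1.5, "mic": 1.2}]
naive = schedule(tasks, devices, performance_aware=False)
aware = schedule(tasks, devices, performance_aware=True)
```

Steering each operation to the device that suits its access pattern is what drives the gains the paper reports over device-blind policies.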

  9. Toward Optimal Computation of Ultrasound Image Reconstruction Using CPU and GPU

    Science.gov (United States)

    Techavipoo, Udomchai; Worasawate, Denchai; Boonleelakul, Wittawat; Keinprasit, Rachaporn; Sunpetchniyom, Treepop; Sugino, Nobuhiko; Thajchayapong, Pairash

    2016-01-01

    An ultrasound image is reconstructed from echo signals received by the array elements of a transducer. The time of flight of the echo depends on the distance between the focus and the array elements. The received echo signals have to be delayed to make their wave fronts phase-coherent before summing. In digital beamforming, the delays do not always fall on sampled points. Generally, the values of the delayed signals are estimated by the values of the nearest samples. This method is fast and easy, but inaccurate. Other methods are available for increasing the accuracy of the delayed signals and, consequently, the quality of the beamformed signals; for example, in-phase (I)/quadrature (Q) interpolation, which is more time consuming but provides more accurate values than the nearest samples. This paper compares the signals after dynamic receive beamforming, in which the echo signals are delayed using two methods, the nearest-sample method and the I/Q interpolation method. Comparisons of the visual quality of the reconstructed images and the quality of the beamformed signals are reported. Moreover, the computational speeds of these methods are optimized by reorganizing the data processing flow and by applying the graphics processing unit (GPU). The use of single- and double-precision floating-point formats for the intermediate data is also considered. The speeds with and without these optimizations are compared. PMID:27886149

  10. Toward Optimal Computation of Ultrasound Image Reconstruction Using CPU and GPU

    Directory of Open Access Journals (Sweden)

    Udomchai Techavipoo

    2016-11-01

    Full Text Available An ultrasound image is reconstructed from echo signals received by the array elements of a transducer. The time of flight of the echo depends on the distance between the focus and the array elements. The received echo signals have to be delayed to make their wave fronts phase-coherent before summing. In digital beamforming, the delays do not always fall on sampled points. Generally, the values of the delayed signals are estimated by the values of the nearest samples. This method is fast and easy, but inaccurate. Other methods are available for increasing the accuracy of the delayed signals and, consequently, the quality of the beamformed signals; for example, in-phase (I)/quadrature (Q) interpolation, which is more time consuming but provides more accurate values than the nearest samples. This paper compares the signals after dynamic receive beamforming, in which the echo signals are delayed using two methods, the nearest-sample method and the I/Q interpolation method. Comparisons of the visual quality of the reconstructed images and the quality of the beamformed signals are reported. Moreover, the computational speeds of these methods are optimized by reorganizing the data processing flow and by applying the graphics processing unit (GPU). The use of single- and double-precision floating-point formats for the intermediate data is also considered. The speeds with and without these optimizations are compared.

  11. Feasibility Analysis of Low Cost Graphical Processing Units for Electromagnetic Field Simulations by Finite Difference Time Domain Method

    CERN Document Server

    Choudhari, A V; Gupta, M R

    2013-01-01

    Among the several techniques available for solving Computational Electromagnetics (CEM) problems, the Finite Difference Time Domain (FDTD) method is one of the best suited approaches when a parallelized hardware platform is used. In this paper we investigate the feasibility of implementing the FDTD method using the NVIDIA GT 520, a low cost Graphical Processing Unit (GPU), for solving the differential form of Maxwell's equations in the time domain. Initially, a generalized benchmarking problem of a bandwidth test and another benchmarking problem of 'matrix left division' are discussed to understand the correlation between problem size and performance on the CPU and the GPU respectively. This is followed by a discussion of the FDTD method, again implemented on both the CPU and the GT 520 GPU. For both of the above comparisons, the CPU used is the Intel E5300, a low cost dual core CPU.
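A minimal 1D FDTD update loop in Python shows the structure that maps so well to parallel hardware: two sweeps over the grid per time step, each cell touching only its nearest neighbors. Normalized units and a soft Gaussian source are illustrative simplifications; the paper's 3D GPU implementation is of course far more involved:

```python
import math

def fdtd_1d(nz=400, steps=250, src=100):
    """Minimal 1D FDTD (Yee) scheme in normalized units with Courant
    number S = 1: E and H live on staggered grids and are leapfrogged
    in time. Every cell update reads only adjacent cells, which is why
    the method parallelizes so naturally."""
    ez = [0.0] * nz
    hy = [0.0] * nz
    for t in range(steps):
        for k in range(nz - 1):
            hy[k] += ez[k + 1] - ez[k]                 # magnetic field sweep
        ez[src] += math.exp(-((t - 30) ** 2) / 100.0)  # soft Gaussian source
        for k in range(1, nz):
            ez[k] += hy[k] - hy[k - 1]                 # electric field sweep
    return ez

ez = fdtd_1d()
# The right-going half of the pulse ends up centered near
# cell 100 + (250 - 30) = 320; the region between the two
# outgoing pulses stays quiet.
```

On a GPU, each of the two sweeps becomes one kernel launch with one thread per cell.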

  12. Transport and Uptake Characteristics of a New Derivative of Berberine (CPU-86017) by the Human Intestinal Epithelial Cell Line Caco-2

    Institute of Scientific and Technical Information of China (English)

    杨海涛; 王广基

    2003-01-01

    AIM: The characteristics of transepithelial transport and uptake of CPU-86017 [7-(4-chlorobenzyl)-7,8,13,13a-tetrahydroberberine chloride, CTHB], a new antiarrhythmic agent and a new derivative of berberine, were investigated in the epithelial cell line Caco-2 to further understand the absorption mechanism of berberine and its derivatives. METHODS: Caco-2 cells were used. RESULTS: 1) The permeability coefficient for apical (AP) to basolateral (BL) transport of CPU-86017 was approximately 5 times higher than that for BL-to-AP transport. The effects of the P-glycoprotein (P-gp) inhibitor cyclosporine A, some surfactants, and lower pH on the transepithelial transport of CPU-86017 were also observed. Cyclosporine A at 7.5 mg/L had no effect on the transepithelial electrical resistance (TEER); an approximately 4-fold enhancement of the transepithelial transport of CPU-86017 was observed. Some surfactants (sodium citrate, sodium deoxycholate, and sodium dodecyl sulfate) at 100 μmol/L and low pH (pH 6.0) induced a reversible decrease in TEER; enhancements of the transepithelial transport of CPU-86017 were also observed with some surfactants. 2) The initial uptake rates of CPU-86017 were saturable, with a Vmax of (250±39) μg.min-1.g-1 (protein) and a Km of (0.90±0.12) mmol/L. This process was enhanced by cyclosporine A (7.5 mg/L), with a Vmax of (588±49) μg.min-1.g-1 (protein) and a Km of (0.42±0.08) mmol/L. CONCLUSION: Some surfactants and P-gp inhibitors can be considered enhancers of its transepithelial transport and uptake.
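The saturable uptake reported above follows Michaelis-Menten kinetics; using the Vmax and Km values from the abstract, the rate at any substrate concentration is v = Vmax*S/(Km + S) (the function name is mine; the numbers are the paper's):

```python
def uptake_rate(s_mmol_l, vmax=250.0, km=0.90):
    """Michaelis-Menten initial uptake rate, v = Vmax*S/(Km + S),
    with Vmax in ug/min/g protein and Km in mmol/L
    (baseline values reported for CPU-86017)."""
    return vmax * s_mmol_l / (km + s_mmol_l)

# At S = Km the rate is exactly half of Vmax. Cyclosporine A raises
# Vmax and lowers Km, so uptake rises at any substrate concentration.
v_base = uptake_rate(0.90)                       # = 125.0
v_csa = uptake_rate(0.90, vmax=588.0, km=0.42)   # enhanced by cyclosporine A
```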

  13. A novel synthetic oleanolic acid derivative (CPU-II2) attenuates liver fibrosis in mice through regulating the function of hepatic stellate cells.

    Science.gov (United States)

    Wu, Li-Mei; Wu, Xing-Xin; Sun, Yang; Kong, Xiang-Wen; Zhang, Yi-Hua; Xu, Qiang

    2008-03-01

    Regulation of the function of hepatic stellate cells (HSCs) is one of the proposed therapeutic approaches to liver fibrosis. In the present study, we examined the in vitro and in vivo effects of CPU-II2, a novel synthetic oleanolic acid (OLA) derivative with a nitrate moiety, on hepatic fibrosis. This compound alleviated CCl4-induced hepatic fibrosis in mice, with a decrease in hepatic hydroxyproline (Hyp) content and histological changes. CPU-II2 also attenuated the mRNA expression of alpha-smooth muscle actin (alpha-SMA) and tissue inhibitor of metalloproteinase type 1 (TIMP-1) induced by CCl4 in mice and reduced both mRNA and protein levels of alpha-SMA in HSC-T6 cells. Interestingly, CPU-II2 did not affect the survival of HSC-T6 cells but decreased the expression of procollagen-alpha1(I) in HSC-T6 cells by down-regulating the phosphorylation of p38 MAPK. CPU-II2 attenuates the development of liver fibrosis by regulating the function of HSCs through the p38 MAPK pathway rather than by damaging the stellate cells.

  14. Fast Clustering of Radar Reflectivity Data Using a GPU-CPU Pipeline Scheme

    Institute of Scientific and Technical Information of China (English)

    周伟; 施宁; 王健; 汪群山

    2012-01-01

    In our meteorological application, a clustering algorithm is used for the analysis and processing of radar reflectivity data. Faced with a large dataset and high-dimensional feature vectors, the clustering algorithm is too time-consuming to satisfy the real-time constraints of our application. This paper proposes a parallelized clustering algorithm using a GPU-CPU pipeline scheme to solve this problem. Our method exploits asynchronous execution between the GPU and CPU and organizes the steps of the clustering process as a pipeline, which exposes much of the parallelism in the algorithm. The experimental results show that our GPU-CPU pipeline-based parallel clustering algorithm outperforms a conventionally parallelized CUDA clustering algorithm without the pipeline by 38%. Compared to serial code on the CPU, our approach achieves a 47x speedup, satisfying the real-time requirements of meteorological analysis applications.

  15. Adaptation of a Multi-Block Structured Solver for Effective Use in a Hybrid CPU/GPU Massively Parallel Environment

    Science.gov (United States)

    Gutzwiller, David; Gontier, Mathieu; Demeulenaere, Alain

    2014-11-01

    Multi-block structured solvers hold many advantages over their unstructured counterparts, such as a smaller memory footprint and efficient serial performance. Historically, multi-block structured solvers have not been easily adapted for use in a High Performance Computing (HPC) environment, and the recent trend towards hybrid GPU/CPU architectures has further complicated the situation. This paper will elaborate on developments and innovations applied to the NUMECA FINE/Turbo solver that have allowed near-linear scalability with real-world problems on over 250 hybrid GPU/CPU cluster nodes. Discussion will focus on the implementation of virtual partitioning and load balancing algorithms using a novel meta-block concept. This implementation is transparent to the user, allowing all pre- and post-processing steps to be performed using a simple, unpartitioned grid topology. Additional discussion will cover developments that have improved parallel performance, including fully parallel I/O with the ADIOS API and the GPU porting of the computationally heavy CPUBooster convergence acceleration module. Head of HPC and Release Management, Numeca International.

  16. Accelerating the SCE-UA Global Optimization Method Based on Multi-Core CPU and Many-Core GPU

    Directory of Open Access Journals (Sweden)

    Guangyuan Kan

    2016-01-01

    Full Text Available The famous global optimization SCE-UA method, which has been widely used in the field of environmental model parameter calibration, is an effective and robust method. However, the SCE-UA method has a high computational load, which prohibits its application to high-dimensional and complex problems. In recent years, computer hardware, such as multi-core CPUs and many-core GPUs, has improved significantly. These much more powerful new hardware platforms and their software ecosystems provide an opportunity to accelerate the SCE-UA method. In this paper, we propose two parallel SCE-UA methods and implement them on an Intel multi-core CPU and an NVIDIA many-core GPU using OpenMP and CUDA Fortran, respectively. The Griewank benchmark function was adopted to test and compare the performance of the serial and parallel SCE-UA methods. Based on the results of the comparison, some practical advice is given on how to use the parallel SCE-UA methods properly.
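The Griewank benchmark used in the paper, and the independent candidate-point evaluations that both the OpenMP and CUDA versions parallelize, can be sketched as follows. The thread pool here is only a stand-in for OpenMP worker threads; the function is the standard Griewank form:

```python
import math
from multiprocessing.dummy import Pool  # thread pool, standing in for OpenMP workers

def griewank(x):
    """Standard Griewank function: global minimum 0 at the origin."""
    s = sum(v * v for v in x) / 4000.0
    p = math.prod(math.cos(v / math.sqrt(i)) for i, v in enumerate(x, start=1))
    return 1.0 + s - p

# SCE-UA evaluates whole complexes of points per iteration; those evaluations
# are independent, so they can be farmed out to parallel workers.
points = [[0.0] * 10, [1.0] * 10, [100.0] * 10]
with Pool(4) as pool:
    values = pool.map(griewank, points)

assert abs(values[0]) < 1e-12   # the global minimum at the origin
```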

  17. High temperature transformations of waste printed circuit boards from computer monitor and CPU: Characterisation of residues and kinetic studies.

    Science.gov (United States)

    Rajagopal, Raghu Raman; Rajarao, Ravindra; Sahajwalla, Veena

    2016-11-01

    This paper investigates the high temperature transformation, specifically the kinetic behaviour, of waste printed circuit boards (WPCB) derived from computer monitors (single-sided/SSWPCB) and computer processing boards - CPU (multi-layered/MLWPCB), using Thermo-Gravimetric Analyser (TGA) and Vertical Thermo-Gravimetric Analyser (VTGA) techniques under a nitrogen atmosphere. Furthermore, the resulting WPCB residues were characterised using X-ray Fluorescence spectrometry (XRF), a Carbon Analyser, X-ray Photoelectron Spectroscopy (XPS) and Scanning Electron Microscopy (SEM). In order to analyse the material degradation of WPCB, TGA from 40°C to 700°C at heating rates of 10°C, 20°C and 30°C per minute and VTGA at 700°C, 900°C and 1100°C were performed, respectively. The data obtained were analysed on the basis of first order reaction kinetics. The experiments show a substantial difference between SSWPCB and MLWPCB in their decomposition levels, kinetic behaviour and structural properties. The calculated activation energy (EA) of SSWPCB is found to be lower than that of MLWPCB. Elemental analysis shows that SSWPCB has a high carbon content, in contrast to MLWPCB, which is ceramic rich; these differences in material properties have a significant influence on the kinetics and the physicochemical behaviour. These high temperature transformation studies and associated analytical investigations provide a fundamental understanding of different WPCB types and their major variations. Copyright © 2015 Elsevier Ltd. All rights reserved.
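The first-order kinetic analysis behind the reported activation energies can be illustrated with an Arrhenius fit: the slope of ln(k) against 1/T is -EA/R. The rate constants below are synthetic, for illustration only, not data from the paper:

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def activation_energy(temps_K, rate_constants):
    """Least-squares slope of ln(k) vs 1/T gives -Ea/R (Arrhenius plot)."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(k) for k in rate_constants]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope * R  # J/mol

# Synthetic first-order rate constants with Ea = 120 kJ/mol, A = 1e9 s^-1.
Ea_true, A = 120e3, 1e9
temps = [973.0, 1173.0, 1373.0]   # illustrative temperatures in kelvin
ks = [A * math.exp(-Ea_true / (R * T)) for T in temps]
assert abs(activation_energy(temps, ks) - Ea_true) < 1e-3 * Ea_true
```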

  18. Parallelizing ATLAS Reconstruction and Simulation: Issues and Optimization Solutions for Scaling on Multi- and Many-CPU Platforms

    Science.gov (United States)

    Leggett, C.; Binet, S.; Jackson, K.; Levinthal, D.; Tatarkhanov, M.; Yao, Y.

    2011-12-01

    Thermal limitations have forced CPU manufacturers to shift from simply increasing clock speeds to improve processor performance, to producing chip designs with multi- and many-core architectures. Furthermore, the cores themselves can run multiple threads with zero-overhead context switches, allowing low-level resource sharing (Intel Hyperthreading). To maximize bandwidth and minimize memory latency, memory access has become non-uniform (NUMA). As manufacturers add more cores to each chip, a careful understanding of the underlying architecture is required in order to fully utilize the available resources. We present AthenaMP and the ATLAS event loop manager, the driver of the simulation and reconstruction engines, which have been rewritten to make use of multiple cores by means of event-based parallelism and final-stage I/O synchronization. However, initial studies on 8 and 16 core Intel architectures have shown marked non-linearities as parallel process counts increase, with as much as 30% reductions in event throughput in some scenarios. Since the Intel Nehalem architecture (both Gainestown and Westmere) will be the most common choice for the next round of hardware procurements, an understanding of these scaling issues is essential. Using hardware-based event counters and Intel's Performance Tuning Utility, we have studied the performance bottlenecks at the hardware level and discovered optimization schemes to maximize processor throughput. We have also produced optimization mechanisms, common to all large experiments, that address the extreme nature of today's HEP code, which, due to its size, places huge burdens on the memory infrastructure of today's processors.
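Event-based parallelism of the kind AthenaMP uses, independent per-event processing followed by a final-stage merge, can be sketched as follows. The thread pool is a stand-in for AthenaMP's forked worker processes, and `reconstruct` is a toy placeholder for the reconstruction engine:

```python
from multiprocessing.dummy import Pool  # thread-backed stand-in for forked workers

def reconstruct(event):
    """Toy per-event work: events are independent, so they parallelize trivially."""
    return {"id": event["id"], "energy_sum": sum(event["hits"])}

events = [{"id": i, "hits": [i, i + 1, i + 2]} for i in range(8)]

with Pool(4) as pool:
    results = pool.map(reconstruct, events)

# Final-stage I/O synchronization: merge worker output back into event order.
merged = sorted(results, key=lambda r: r["id"])
assert [r["energy_sum"] for r in merged] == [3 * i + 3 for i in range(8)]
```

The scaling non-linearities the abstract reports come from what this sketch omits: shared memory bandwidth and NUMA effects once many such workers run on one chip.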

  19. Research on the Application of the State Variable Method in Keyboard Control Programs

    Institute of Scientific and Technical Information of China (English)

    刘德全

    2012-01-01

    This paper studies the application of state variables in the design of keyboard control programs. By analyzing the relationship between the keyboard state matrix and the keyboard state diagram, a one-to-one correspondence is established between state variables and keyboard control subroutines, enabling program-driven keyboard control. An example illustrates the specific design process and programming approach of the method.

  20. Design and Implementation of a Winch Keyboard Simulation Training System Based on C8051F020

    Institute of Scientific and Technical Information of China (English)

    王成昆

    2014-01-01

    To facilitate sonar operator training, a simulation training system for the dipping-sonar winch keyboard is designed and developed that can operate independently of the actual equipment; the on-screen winch display and keyboard layout are consistent with the real winch keyboard. The keyboard drives the winch by controlling a frequency converter that powers the motor, so operation of the actual winch can be simulated realistically. Training schemes for specific conditions can also be configured, in order to improve the training level of sonar operators.

  1. Advanced Investigation and Comparative Study of Graphics Processing Unit-queries Countered

    Directory of Open Access Journals (Sweden)

    A. Baskar

    2014-10-01

    Full Text Available GPU, the Graphics Processing Unit, is the buzzword ruling the market these days. What it is and how it has gained such importance is what this research work sets out to answer. The study has been constructed with full attention paid to answering the following questions: What is a GPU? How is it different from a CPU? How good or bad is it computationally when compared to a CPU? Can the GPU replace the CPU, or is that a daydream? How significant is the arrival of the APU (Accelerated Processing Unit) in the market? What tools are needed to make a GPU work? What are the improvement and focus areas for the GPU to stand in the market? All the above questions are discussed and answered in this study with relevant explanations.

  2. Kernel density estimation using graphical processing unit

    Science.gov (United States)

    Sunarko, Su'ud, Zaki

    2015-09-01

    Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphics processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution, by assigning the calculation for each equally-spaced node point to a scalar processor in the GPU. The number of particles, blocks and threads is varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
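A minimal serial version of the calculation, a bivariate Gaussian kernel summed at each node point (the work assigned to one GPU scalar processor per node in the paper), might look like this; particle positions and bandwidth are illustrative:

```python
import math

def kde_grid(particles, nodes, h=0.5):
    """Bivariate Gaussian KDE evaluated at each node point.
    On the GPU, each node's sum would be computed by one thread."""
    norm = 1.0 / (2.0 * math.pi * h * h * len(particles))
    out = []
    for nx, ny in nodes:
        s = sum(math.exp(-((nx - px) ** 2 + (ny - py) ** 2) / (2.0 * h * h))
                for px, py in particles)
        out.append(norm * s)
    return out

particles = [(0.0, 0.0), (1.0, 0.0)]
nodes = [(0.0, 0.0), (0.5, 0.0)]
dens = kde_grid(particles, nodes)

# The midpoint between the two particles has the highest density here.
assert dens[1] > dens[0] > 0.0
```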

  3. Extended Gesture Recognition for the Laser Projection Keyboard

    Institute of Scientific and Technical Information of China (English)

    贺天章; 杨沛; 汪良会

    2014-01-01

    Since its invention in 1992, the laser projection keyboard has attracted many enthusiasts with its engaging light-based interactive experience. Adding some simple planar gesture recognition on top of the keyboard function would satisfy more personalized interaction needs. This paper studies how to extend the laser projection keyboard with gesture recognition, using a human-computer interaction recognition algorithm based on a radial basis function neural network, and tests it with 10 predefined gestures. The results show a recognition rate of 96.4%, which to a degree realizes the goal of personalized interaction through gesture recognition on the laser keyboard.
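As a loose illustration of RBF-based gesture classification (not the paper's trained network), each gesture class can be scored by summing Gaussian radial basis responses around stored prototype feature vectors. The two-dimensional features and gesture names below are hypothetical:

```python
import math

def rbf_score(x, centers, sigma=1.0):
    """Sum of Gaussian radial basis responses of x to a set of centers."""
    return sum(
        math.exp(-sum((a - b) ** 2 for a, b in zip(x, c)) / (2.0 * sigma ** 2))
        for c in centers)

def classify(x, prototypes):
    """prototypes: gesture name -> list of prototype feature vectors."""
    return max(prototypes, key=lambda g: rbf_score(x, prototypes[g]))

prototypes = {
    "swipe_left":  [(-1.0, 0.0), (-0.9, 0.1)],
    "swipe_right": [(1.0, 0.0), (0.9, -0.1)],
}
assert classify((-0.8, 0.05), prototypes) == "swipe_left"
assert classify((0.95, 0.0), prototypes) == "swipe_right"
```

A trained RBF network would additionally learn output weights over the basis responses; this prototype-scoring variant keeps the sketch self-contained.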

  4. Keyboard Input Information Protection Based on Windows NT Filter Drivers

    Institute of Scientific and Technical Information of China (English)

    叶磊; 葛万成

    2012-01-01

    With the development of e-commerce, the increasingly widespread application of online banking systems, and the growing sophistication of RootKit technology, information theft targeting these application systems keeps increasing, and malicious software has become much harder to detect. Building on the design of a keyboard protection module for an online banking system, this paper proposes a comparatively complete keyboard input protection strategy targeting currently popular RootKit techniques. By adopting a safer and more reliable keyboard entry mechanism, the strategy protects user input well and achieves a better information security result.

  5. Humanized Design of Keyboard and Mouse Based on Ergonomics

    Institute of Scientific and Technical Information of China (English)

    刘旭辉; 陈牧; 李青

    2011-01-01

    Aiming at the problems and inconvenience in using a computer keyboard and mouse, a concentrated effort was made to design a human-oriented keyboard and mouse for young people aged 18 to 30, based on ergonomics, psychology, physiology, etc., expressing the concept of humanized design. The design covers color, appearance and function, with attention to the meanings they convey. The keyboard was improved from the perspectives of plane layout, the keys of the central section, the edit and positioning section, and the numeric section, and color design. The results indicate that the newly designed keyboard and mouse are helpful for users' health, relieving tiredness and pain in the neck, wrist and back, while remaining convenient and efficient, and providing a pleasant, healthy environment for their users.

  6. On the cost of approximating and recognizing a noise perturbed straight line or a quadratic curve segment in the plane. [central processing units]

    Science.gov (United States)

    Cooper, D. B.; Yalabik, N.

    1975-01-01

    Approximation of noisy data in the plane by straight lines or elliptic or single-branch hyperbolic curve segments arises in pattern recognition, data compaction, and other problems. The efficient search for and approximation of data by such curves were examined. Recursive least-squares linear curve-fitting was used, and ellipses and hyperbolas are parameterized as quadratic functions in x and y. The error minimized by the algorithm is interpreted, and central processing unit (CPU) times for estimating parameters for fitting straight lines and quadratic curves were determined and compared. CPU time for data search was also determined for the case of straight line fitting. Quadratic curve fitting is shown to require about six times as much CPU time as does straight line fitting, and curves relating CPU time and fitting error were determined for straight line fitting. Results are derived on early sequential determination of whether or not the underlying curve is a straight line.
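The core straight-line fit whose CPU cost the paper measures is, in batch form, ordinary least squares with a sum-of-squares fitting error (the paper's recursive formulation updates the same estimates point by point as data arrive):

```python
def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b, returning (a, b, sse)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    sse = sum((y - (a * x + b)) ** 2 for x, y in points)  # fitting error
    return a, b, sse

pts = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # exactly y = 2x + 1
a, b, sse = fit_line(pts)
assert abs(a - 2.0) < 1e-12 and abs(b - 1.0) < 1e-12 and sse < 1e-12
```

The quadratic-curve fit the paper compares against solves a larger normal-equation system in the conic coefficients, which is the source of its roughly sixfold higher CPU time.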

  7. Effective electron-density map improvement and structure validation on a Linux multi-CPU web cluster: The TB Structural Genomics Consortium Bias Removal Web Service.

    Science.gov (United States)

    Reddy, Vinod; Swanson, Stanley M; Segelke, Brent; Kantardjieff, Katherine A; Sacchettini, James C; Rupp, Bernhard

    2003-12-01

    Anticipating a continuing increase in the number of structures solved by molecular replacement in high-throughput crystallography and drug-discovery programs, a user-friendly web service for automated molecular replacement, map improvement, bias removal and real-space correlation structure validation has been implemented. The service is based on an efficient bias-removal protocol, Shake&wARP, and implemented using EPMR and the CCP4 suite of programs, combined with various shell scripts and Fortran90 routines. The service returns improved maps, converted data files and real-space correlation and B-factor plots. User data are uploaded through a web interface and the CPU-intensive iteration cycles are executed on a low-cost Linux multi-CPU cluster using the Condor job-queuing package. Examples of map improvement at various resolutions are provided and include model completion and reconstruction of absent parts, sequence correction, and ligand validation in drug-target structures.

  8. Design of a CPU with Cache and Precise Interrupt Response

    Institute of Scientific and Technical Information of China (English)

    刘秋菊; 李飞; 刘书伦

    2012-01-01

    In this paper, the design of a CPU with cache and precise interrupt response is proposed. Fifteen instructions from the MIPS instruction set were selected as the basic instructions of the CPU. Using a basic 5-stage pipeline, the instruction cache, data cache and precise interrupt response were designed and implemented. The test results show that the scheme meets the design requirements.

  9. A hybrid CPU-GPU accelerated framework for fast mapping of high-resolution human brain connectome.

    Science.gov (United States)

    Wang, Yu; Du, Haixiao; Xia, Mingrui; Ren, Ling; Xu, Mo; Xie, Teng; Gong, Gaolang; Xu, Ningyi; Yang, Huazhong; He, Yong

    2013-01-01

    Recently, a combination of non-invasive neuroimaging techniques and graph theoretical approaches has provided a unique opportunity for understanding the patterns of the structural and functional connectivity of the human brain (referred to as the human brain connectome). Currently, there is a very large amount of brain imaging data that have been collected, and there are very high requirements for the computational capabilities that are used in high-resolution connectome research. In this paper, we propose a hybrid CPU-GPU framework to accelerate the computation of the human brain connectome. We applied this framework to a publicly available resting-state functional MRI dataset from 197 participants. For each subject, we first computed Pearson's Correlation coefficient between any pairs of the time series of gray-matter voxels, and then we constructed unweighted undirected brain networks with 58 k nodes and a sparsity range from 0.02% to 0.17%. Next, graphic properties of the functional brain networks were quantified, analyzed and compared with those of 15 corresponding random networks. With our proposed accelerating framework, the above process for each network cost 80∼150 minutes, depending on the network sparsity. Further analyses revealed that high-resolution functional brain networks have efficient small-world properties, significant modular structure, a power law degree distribution and highly connected nodes in the medial frontal and parietal cortical regions. These results are largely compatible with previous human brain network studies. Taken together, our proposed framework can substantially enhance the applicability and efficacy of high-resolution (voxel-based) brain network analysis, and have the potential to accelerate the mapping of the human brain connectome in normal and disease states.
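The first step of the pipeline, Pearson correlation between voxel time series followed by thresholding into an unweighted, undirected network, can be sketched at toy scale (the full problem is 58 k nodes, which is what motivates the GPU acceleration):

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length time series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

def adjacency(series, r_thresh):
    """Unweighted undirected graph: connect nodes whose |r| meets the threshold."""
    n = len(series)
    return [[1 if i != j and abs(pearson(series[i], series[j])) >= r_thresh else 0
             for j in range(n)] for i in range(n)]

# Three toy "voxel" time series: two perfectly correlated, one anti-correlated.
ts = [[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1]]
A = adjacency(ts, 0.9)
assert all(A[i][i] == 0 for i in range(3))   # no self-loops
```

On the GPU, each thread computes one correlation pair; the O(n²) pair count is what turns this into an hours-long computation at voxel resolution.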

  10. A hybrid CPU-GPU accelerated framework for fast mapping of high-resolution human brain connectome.

    Directory of Open Access Journals (Sweden)

    Yu Wang

    Full Text Available Recently, a combination of non-invasive neuroimaging techniques and graph theoretical approaches has provided a unique opportunity for understanding the patterns of the structural and functional connectivity of the human brain (referred to as the human brain connectome). Currently, there is a very large amount of brain imaging data that have been collected, and there are very high requirements for the computational capabilities that are used in high-resolution connectome research. In this paper, we propose a hybrid CPU-GPU framework to accelerate the computation of the human brain connectome. We applied this framework to a publicly available resting-state functional MRI dataset from 197 participants. For each subject, we first computed Pearson's Correlation coefficient between any pairs of the time series of gray-matter voxels, and then we constructed unweighted undirected brain networks with 58 k nodes and a sparsity range from 0.02% to 0.17%. Next, graphic properties of the functional brain networks were quantified, analyzed and compared with those of 15 corresponding random networks. With our proposed accelerating framework, the above process for each network cost 80∼150 minutes, depending on the network sparsity. Further analyses revealed that high-resolution functional brain networks have efficient small-world properties, significant modular structure, a power law degree distribution and highly connected nodes in the medial frontal and parietal cortical regions. These results are largely compatible with previous human brain network studies. Taken together, our proposed framework can substantially enhance the applicability and efficacy of high-resolution (voxel-based) brain network analysis, and have the potential to accelerate the mapping of the human brain connectome in normal and disease states.

  11. Experimental Study on a CPU Water-Cooling Heat Dissipation System

    Institute of Scientific and Technical Information of China (English)

    吕玉坤; 刘海峰; 徐国涛

    2012-01-01

    Based on experiments on the heat transfer and pressure drop characteristics of the CPU heat-absorbing box in a desktop computer water-cooling system, it is shown that the resistance pressure drop of the CPU heat-absorbing box varies quadratically with the inlet velocity, and that the amount of heat exchanged first increases and then decreases with increasing flow. Resistance and heat transfer performance experiments under different pipeline layouts were then conducted, showing that connecting the north bridge heat-absorbing box and the graphics card heat-absorbing box in parallel is the optimal layout: compared to the series connection, the total resistance is 2.4% lower and the heat exchange of the CPU heat-absorbing box increases by 21%. Formulas for the total drag coefficient and the resistance loss of the pipeline (excluding the CPU heat-absorbing box pipeline) are also derived.

  12. Abrupt changes in FKBP12.6 and SERCA2a expression contribute to sudden occurrence of ventricular fibrillation on reperfusion and are prevented by CPU86017

    Institute of Scientific and Technical Information of China (English)

    Tao NA; Zhi-jiang HUANG; De-zai DAI; Yuan ZHANG; Yin DAI

    2007-01-01

    Aim: The occurrence of ventricular fibrillation (VF) is dependent on the deterioration of channelopathy in the myocardium. It is interesting to investigate molecular changes in relation to the abrupt appearance of VF on reperfusion. We aimed to study whether changes in the expression of FKBP12.6 and SERCA2a and the endothelin (ET) system on reperfusion after ischemia were related to the rapid occurrence of VF, and whether CPU86017, a class III antiarrhythmic agent which blocks IKr, IKs, and ICa.L, suppressed VF by correcting the molecular changes on reperfusion. Methods: Cardiomyopathy (CM) was produced by 0.4 mg/kg sc L-thyroxin for 10 d in rats, which were subjected to 10 min coronary artery ligation/reperfusion on d 11. Expression of the Ca2+ handling and ET systems and calcium transients were measured, and CPU86017 was injected (4 mg/kg, sc) on d 6-10. Results: A high incidence of VF was found on reperfusion of the rat CM hearts, but there was no VF before reperfusion. The elevation of diastolic calcium was significant in the CM myocytes, which exhibited abnormality of the Ca2+ handling system. Rapid downregulation of the mRNA and protein expression of FKBP12.6 and SERCA2a was found on reperfusion, in association with upregulation of the expression of endothelin-converting enzyme (ECE) and protein kinase A (PKA); in contrast, no change in the ryanodine type 2 receptor (RyR2), phospholamban (PLB), endothelin A receptor (ETAR), or iNOS was found. CPU86017 removed these changes and suppressed VF. Conclusion: Abrupt changes in the expression of FKBP12.6, SERCA2a, PKA, and ECE on reperfusion after ischemia, which are responsible for the rapid occurrence of VF, have been observed. These changes are effectively prevented by CPU86017.

  13. Finite differences numerical method for two-dimensional superlattice Boltzmann transport equation and case comparison of CPU(C) and GPGPU(CUDA) implementations

    CERN Document Server

    Priimak, Dmitri

    2014-01-01

    We present a finite-difference numerical algorithm for solving the 2D spatially homogeneous Boltzmann transport equation for semiconductor superlattices (SL), subject to a time-dependent electric field along the SL axis and a constant perpendicular magnetic field. The algorithm is implemented in the C language targeting CPUs and in CUDA C targeting commodity NVidia GPUs. We compare the performance and merits of one implementation versus the other and discuss various methods of optimization.
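The explicit finite-difference time-stepping pattern, one independent update per grid node, is what maps naturally onto one CUDA thread per node. It can be illustrated with a 1D upwind step for the simpler advection equation f_t + c f_x = 0, used here only as a stand-in for the Boltzmann transport operator:

```python
def upwind_step(f, c, dt, dx):
    """One explicit upwind finite-difference step of f_t + c f_x = 0 (c > 0),
    with periodic boundaries. Each output entry depends only on local
    neighbors, so every grid point can be updated by an independent thread."""
    nu = c * dt / dx                      # Courant number; stable for nu <= 1
    return [f[i] - nu * (f[i] - f[i - 1]) for i in range(len(f))]

f = [0.0, 1.0, 0.0, 0.0]
g = upwind_step(f, c=1.0, dt=0.5, dx=1.0)  # nu = 0.5
assert g == [0.0, 0.5, 0.5, 0.0]           # the pulse spreads downstream
```

In the CUDA version of such a scheme, `f` lives in device memory and the list comprehension becomes a kernel indexed by thread ID.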

  14. Isoproterenol disperses distribution of NADPH oxidase, MMP-9, and pPKCε in the heart, which are mitigated by endothelin receptor antagonist CPU0213

    Institute of Scientific and Technical Information of China (English)

    Yusi CHENG; De-zai DAI; Yin DAI

    2009-01-01

    Aim: Spatial dispersion of bioactive substances in the myocardium could serve as a pathological basis for arrhythmogenesis and cardiac impairment under β-adrenoceptor stimulation. We hypothesized that the dispersion of NADPH oxidase, protein kinase Cε (PKCε), early response gene (ERG), and matrix metalloproteinase 9 (MMP-9) across the heart by isoproterenol (ISO) medication might be mediated by the endothelin (ET)-ROS pathway. We aimed to verify whether the ISO-induced spatially heterogeneous distribution of pPKCε, NADPH oxidase, MMP-9 and ERG could be mitigated by either the ET receptor antagonist CPU0213 or the iNOS inhibitor aminoguanidine. Methods: Rats were treated with ISO (1 mg/kg sc) for 10 days, and drug interventions (mg/kg), either CPU0213 (30 sc) or aminoguanidine (100 ip), were administered on days 8-10. Expression of NADPH oxidase, MMP-9, ERG, and PKCε in the left and right ventricles (LV, RV) and septum (S) was measured separately. Results: Ventricular hypertrophy was found in the LV, S, and RV, in association with dispersed QTc and oxidative stress in ISO-treated rats. mRNA and protein expression of MMP-9, PKCε, NADPH oxidase and ERG in the LV, S, and RV was clearly dispersed, with augmented expression mainly in the LV and S. The dispersed parameters were re-harmonized by either CPU0213 or aminoguanidine. Conclusion: We found for the first time that ISO induced a dispersed distribution of pPKCε, NADPH oxidase, MMP-9, and ERG in the LV, S, and RV of the heart, which was suppressed by either CPU0213 or aminoguanidine. This indicates that the ET-ROS pathway plays a role in the dispersed distribution of bioactive substances following sustained β-receptor stimulation.

  15. CPU0213,a novel endothelin receptor antagonist,ameliorates septic renal lesion by suppressing ET system and NF-κB in rats

    Institute of Scientific and Technical Information of China (English)

    Haibo HE; De-zai DAI; Yin DAI

    2006-01-01

    Aim: To examine whether a novel endothelin receptor antagonist, CPU0213, is effective in relieving the acute renal failure (ARF) of septic shock by suppressing the activated endothelin-reactive oxygen species (ET-ROS) pathway and nuclear factor kappa B (NF-κB). Methods: The cecum was ligated and punctured in rats under anesthesia. CPU0213 (30 mg·kg-1·d-1, bid, sc×3 d) was administered 8 h after the surgical operation. Results: In the untreated septic shock group, the mean arterial pressure and survival rate were markedly decreased (P<0.01), and heart rate, kidney weight index, serum creatinine and blood urea nitrogen, and 24 h urinary protein and creatinine were significantly increased (P<0.01). The levels of ET-1, total NO synthase (tNOS), inducible nitric oxide synthase (iNOS), nitric oxide (NO), and ROS in serum and the renal cortex were markedly increased (P<0.01). The upregulation of the mRNA levels of preproET-1, endothelin-converting enzyme, ETA, ETB, iNOS, and tumor necrosis factor-alpha in the renal cortex was significant (P<0.01). The amount of activated NF-κB protein was significantly increased (P<0.01) in comparison with the sham operation group. All of these changes were significantly reversed after CPU0213 administration. Conclusion: Upregulation of the ET signaling pathway and NF-κB plays an important role in the ARF of septic shock. Amelioration of the renal lesions was achieved by suppressing the ETA and ETB receptors in the renal cortex following CPU0213 medication.

  16. Developing a framework for predicting upper extremity muscle activities, postures, velocities, and accelerations during computer use: the effect of keyboard use, mouse use, and individual factors on physical exposures.

    Science.gov (United States)

    Bruno Garza, Jennifer L; Catalano, Paul J; Katz, Jeffrey N; Huysmans, Maaike A; Dennerlein, Jack T

    2012-01-01

    Prediction models were developed based on keyboard and mouse use in combination with individual factors that could be used to predict the median upper extremity muscle activities, postures, velocities, and accelerations experienced during computer use. In the laboratory, 25 participants performed five simulated computer trials with different amounts of keyboard and mouse use, ranging from a highly keyboard-intensive trial to a highly mouse-intensive trial. During each trial, muscle activity and postures of the shoulder and wrist and velocities and accelerations of the wrists, along with percentage keyboard and mouse use, were measured. Four individual factors (hand length, shoulder width, age, and gender) were also measured on the day of data collection. Percentage keyboard and mouse use explained a large amount of the variability in wrist velocities and accelerations. Although hand length, shoulder width, and age were each significant predictors of at least one median muscle activity, posture, velocity, or acceleration exposure, these individual factors explained very little variability beyond percentage keyboard and mouse use in any of the physical exposures investigated. The amount of variability explained by models predicting median wrist velocities and accelerations ranged from 75 to 84%, but was much lower for median muscle activities and postures (0-50%). RMS errors ranged from 8 to 13% of the range observed. While the predictions for wrist velocities and accelerations may be usable to improve exposure assessment in future epidemiologic studies, more research is needed to identify other factors that may improve the predictions for muscle activities and postures.

  17. Der ATLAS LVL2-Trigger mit FPGA-Prozessoren : Entwicklung, Aufbau und Funktionsnachweis des hybriden FPGA/CPU-basierten Prozessorsystems ATLANTIS

    CERN Document Server

    Singpiel, Holger

    2000-01-01

    This thesis describes the conception and implementation of the hybrid FPGA/CPU-based processing system ATLANTIS as a trigger processor for the proposed ATLAS experiment at CERN. CompactPCI provides close coupling between a multi-FPGA system and a standard CPU. The system is scalable in computing power and flexible in use owing to its partitioning into dedicated FPGA boards for computation, I/O tasks and private communication. The research activities based on the ATLANTIS system focus on two areas of the second level trigger (LVL2). First, the major aim is the acceleration of time-critical B-physics trigger algorithms. The execution of the full-scan TRT algorithm on ATLANTIS, which has been used as a demonstrator, results in a speedup of 5.6 compared to a standard CPU. Next, the ATLANTIS system is used as a hardware platform for research work in conjunction with the ATLAS readout systems. For further studies a permanent installation of the ATLANTIS system in the LVL2 application testbed is f...

  18. The Application of HD7279A in the Keyboard and Display Interface of Single-Chip Computers

    Institute of Scientific and Technical Information of China (English)

    李外云

    2001-01-01

    This paper introduces in detail the functions and usage of the control command words of the HD7279A serial keyboard and display interface chip, and presents its application in the keyboard and display interface of a single-chip computer from both the hardware and software perspectives, via an example.

  19. Design and Implementation of a Simple Microcontroller-Based Electronic Keyboard

    Institute of Scientific and Technical Information of China (English)

    章丹

    2014-01-01

    The electronic keyboard, a product of the combination of modern electronic technology and music, is a new type of keyboard instrument that plays an important role in modern music. The microcontroller, with its powerful control functions and flexible programming, has become an irreplaceable part of modern life. The main content of this paper is the design of a simple electronic keyboard using the 8253 chip as the core control element. By pressing keys 1-7 in area G6 of the STAR ES598PCI single-board computer, tones are selected through the board's 8255A chip; the 8253 chip then generates square waves of different frequencies, which are output to the buzzer in area D1 of the board, so that keys 1-7 of area G6 produce the musical scale 1-7 from low to high. The 8255A chip also controls the working state of the 8253 chip, enabling or disabling the buzzer's sound, thereby implementing the playing function of the simple electronic keyboard. In addition, a piece of music can be played back from a preset "score", implementing both playback of preset music and replay of a passage previously performed by the user. The user can select the playback, replay, or performance functions through a menu under the DOS interface.

  20. Mesh-particle interpolations on graphics processing units and multicore central processing units.

    Science.gov (United States)

    Rossinelli, Diego; Conti, Christian; Koumoutsakos, Petros

    2011-06-13

    Particle-mesh interpolations are fundamental operations for particle-in-cell codes, as implemented in vortex methods, plasma dynamics and electrostatics simulations. In these simulations, the mesh is used to solve the field equations and the gradients of the fields are used in order to advance the particles. The time integration of particle trajectories is performed through an extensive resampling of the flow field at the particle locations. The computational performance of this resampling turns out to be limited by the memory bandwidth of the underlying computer architecture. We investigate how mesh-particle interpolation can be efficiently performed on graphics processing units (GPUs) and multicore central processing units (CPUs), and we present two implementation techniques. The single-precision results for the multicore CPU implementation show an acceleration of 45-70×, depending on system size, and an acceleration of 85-155× for the GPU implementation over an efficient single-threaded C++ implementation. In double precision, we observe a performance improvement of 30-40× for the multicore CPU implementation and 20-45× for the GPU implementation. With respect to the 16-threaded standard C++ implementation, the present CPU technique leads to a performance increase of roughly 2.8-3.7× in single precision and 1.7-2.4× in double precision, whereas the GPU technique leads to an improvement of 9× in single precision and 2.2-2.8× in double precision.
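
    As an illustration of the mesh-to-particle resampling described above, here is a minimal 1-D linear (cloud-in-cell) interpolation sketch in NumPy. It is not the paper's implementation, and the array names and the linear kernel are assumptions (vortex-method codes typically use higher-order kernels).

```python
import numpy as np

def mesh_to_particle(field, h, xp):
    """Linearly interpolate a 1-D mesh field to particle positions xp.

    field: values at mesh nodes located at i*h; h: mesh spacing;
    xp: particle positions. This is the mesh-to-particle half of the
    resampling step whose memory-bandwidth cost the paper addresses.
    """
    idx = np.floor(xp / h).astype(int)
    idx = np.clip(idx, 0, len(field) - 2)   # keep the two-node stencil in bounds
    w = xp / h - idx                        # fractional distance to the left node
    return (1.0 - w) * field[idx] + w * field[idx + 1]

# Usage: sample a linear field -- linear interpolation reproduces it exactly.
h = 0.1
nodes = np.arange(11) * h
field = 2.0 * nodes + 1.0                   # f(x) = 2x + 1
xp = np.array([0.05, 0.42, 0.99])
print(mesh_to_particle(field, h, xp))
```

    The gather (`field[idx]`, `field[idx + 1]`) is exactly the memory-bound access pattern whose throughput limits this operation on both CPUs and GPUs.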

  1. Encryption and Decryption Subsystem Based on an Embedded CPU

    Institute of Scientific and Technical Information of China (English)

    王剑非; 马德; 熊东亮; 陈亮; 黄凯; 葛海通

    2014-01-01

    To improve the efficiency of system-on-chip (SoC) integration and verification across different information-security levels and applications, a complete, pre-verified encryption and decryption subsystem based on an embedded CPU is proposed. The subsystem includes cryptographic modules such as RSA, DES and AES, and can be configured in hardware to satisfy applications with different security requirements. A low-power, high-performance embedded CPU serves as a coprocessor to the SoC's main CPU and controls the operation of the cryptographic modules, reducing accesses to the main CPU and thus lowering power consumption. Integrating the pre-verified subsystem into the SoC as a whole enables subsystem reuse, reduces SoC design and integration effort, and lowers the difficulty of SoC verification. Gated-clock technology, which manages each cryptographic module's clock according to its working state, further reduces the subsystem's power consumption. Using the CKSoC design-integration method, differently configured embedded-CPU-based subsystems can be rapidly integrated on an SoC integration tool platform. Experimental results show that the subsystem markedly reduces SoC design and verification effort and improves productivity.

  2. Evaluating Mobile Graphics Processing Units (GPUs) for Real-Time Resource Constrained Applications

    Energy Technology Data Exchange (ETDEWEB)

    Meredith, J; Conger, J; Liu, Y; Johnson, J

    2005-11-11

    Modern graphics processing units (GPUs) can provide tremendous performance boosts for some applications beyond what a single CPU can accomplish, and their performance is growing at a rate faster than CPUs as well. Mobile GPUs available for laptops have the small form factor and low power requirements suitable for use in embedded processing. We evaluated several desktop and mobile GPUs and CPUs on traditional and non-traditional graphics tasks, as well as on the most time consuming pieces of a full hyperspectral imaging application. Accuracy remained high despite small differences in arithmetic operations like rounding. Performance improvements are summarized here relative to a desktop Pentium 4 CPU.

  3. Permeability of CPU-86017 through the BBB after Intravenous and Intracerebroventricular Injection in Mice

    Institute of Scientific and Technical Information of China (English)

    查玛拉; 林生; 戴德哉

    2003-01-01

    AIM: To investigate the bi-directional penetration of CPU-86017 across the blood-brain barrier (BBB) following intravenous (iv) and intracerebroventricular (icv) administration in mice. METHOD: The levels of CPU-86017 (p-chlorobenzyltetrahydroberberine hydrochloride) in the brain, heart, kidney and blood of mice after acute administration of 3.0 mg/kg CPU-86017 were measured by a validated HPLC assay at 5, 10, 20, 30 and 60 minutes. RESULT: The maximum concentrations of CPU-86017 in the brain, heart, kidney and plasma were reached at about 10 minutes by both routes of administration: 0.83 ± 0.335, 25.13 ± 4.17, 56.0 ± 19.69 and 2.23 ± 0.97 μg/ml in the iv group, and 23.68 ± 4.2, 15.9 ± 10.24, 7.93 ± 4.68 and 3.32 ± 2.3 μg/ml in the icv group, respectively. Plasma concentrations declined rapidly. The highest concentration after iv administration was found in the kidney (56.0 ± 19.69 μg/g); after icv administration, the highest concentration was found in the brain (23.68 ± 4.2 μg/g). Sixty minutes after iv administration there was no significant difference between kidney and brain concentrations. After icv administration, CPU-86017 concentrations in peripheral tissues and plasma changed at every 5-minute time point, whereas after iv administration the drug remained undetectable in the brain at 20, 30 and 60 minutes. CONCLUSION: CPU-86017 can penetrate the BBB by both routes, from the circulation into the brain and from the brain into the systemic circulation; however, the two routes differ greatly, and penetration from blood into brain is much more difficult than from brain to periphery.

  4. Computer Keyboard Savvy.

    Science.gov (United States)

    Binderup, Denise Belick

    1988-01-01

    Elementary school students are often exposed to computer usage before they have been taught correct typing techniques. Sample exercises to teach touch-typing to young children are provided on a reproducible page. (JL)

  5. Computer Workstations: Keyboards

    Science.gov (United States)

    ... repeatedly at a fast pace and with little variation. When motions are isolated and repeated frequently for prolonged periods, there may be inadequate time for your muscles and tendons to recover. Combining repetitive tasks with ...

  6. Validation of columnar CsI x-ray detector responses obtained with hybridMANTIS, a CPU-GPU Monte Carlo code for coupled x-ray, electron, and optical transport.

    Science.gov (United States)

    Sharma, Diksha; Badano, Aldo

    2013-03-01

    hybridMANTIS is a Monte Carlo package for modeling indirect x-ray imagers using columnar geometry based on a hybrid concept that maximizes the utilization of available CPU and graphics processing unit processors in a workstation. The authors compare hybridMANTIS x-ray response simulations to previously published MANTIS and experimental data for four cesium iodide scintillator screens. These screens have a variety of reflective and absorptive surfaces with different thicknesses. The authors analyze hybridMANTIS results in terms of modulation transfer function and calculate the root mean square difference and Swank factors from simulated and experimental results. The comparison suggests that hybridMANTIS better matches the experimental data as compared to MANTIS, especially at high spatial frequencies and for the thicker screens. hybridMANTIS simulations are much faster than MANTIS with speed-ups up to 5260. hybridMANTIS is a useful tool for improved description and optimization of image acquisition stages in medical imaging systems and for modeling the forward problem in iterative reconstruction algorithms.

  7. Design and Realization of a Common Keyboard and Mouse Interface Board for Radar Terminal Equipment

    Institute of Scientific and Technical Information of China (English)

    莫兰宾; 黄少锋

    2013-01-01

    To address the poor compatibility of ordinary keyboards and mice with radar terminal recording equipment, a common keyboard and mouse interface board was designed. It successfully enables ordinary keyboards and mice to be used with radar recording terminals, reduces reliance on special-purpose keyboards and mice, saves considerable cost, improves the supportability of radar terminal recording equipment, and satisfies the requirements of the armed forces.

  8. Keyboard Layout for an Augmentative and Alternative Communication Board

    Directory of Open Access Journals (Sweden)

    Luciane Aparecida Liegel

    2008-12-01

    The aim of this article is to describe and discuss a novel keyboard layout designed especially for an augmentative and alternative communication board, with mechanical and remote activation, to be used by people with cerebral palsy whose cognitive abilities are preserved. To compose the layout, a study of the arrangement and content of the keys was carried out. Eleven volunteers participated: five special-education teachers, four pedagogues specialized in special education, and two speech-language therapists. The layout comprises 95 keys arranged in groups: alphabetic keys, accented letters, numeric keys, function keys, and augmentative and alternative communication keys. The communication keys carry icons associated with words or phrases; the icons belong to a Brazilian visual communication language currently under development. Differentiated key and character sizes, as well as distinct key background colors, are used to aid localization. The accented-letter and communication keys are intended to ease and speed up message typing, reducing typing time and, consequently, the occurrence of muscle fatigue.

  9. A Qualitative and Quantitative Analysis of Multi-core CPU Power and Performance Impact on Server Virtualization for Enterprise Cloud Data Centers

    Directory of Open Access Journals (Sweden)

    S. Suresh

    2015-02-01

    Cloud computing is an on-demand service-provisioning technique that uses virtualization as its underlying technology for managing and improving the utilization of data-center resources through server consolidation. Although virtualization is a software technology, it makes the hardware more important for achieving a high consolidation ratio. Performance and energy efficiency are among the most important issues for large-scale server systems in current and future cloud data centers. As improved performance drives the migration to multi-core processors, this study presents an analytic and simulation study of the impact of multi-core processors on server virtualization for new levels of performance and energy efficiency in cloud data centers. A system model of a virtualized server cluster is developed and validated for the impact of CPU cores on performance and power consumption, in terms of mean response time (mean delay) versus offered cloud load. Analytic and simulation results show that the multi-core virtualized model yields the best results (smallest mean delays) over a single fat CPU processor (with a faster clock speed) for diverse cloud workloads. By sharing the processing load, multiple cores improve overall system performance under all workload conditions, whereas the fat single-CPU model is suited only to lighter loads. In addition, multi-core processors do not consume more power or generate more heat than a single-core processor, giving users more processing power without the drawbacks typically associated with such increases. Cloud data centers today therefore rely almost exclusively on multi-core systems.
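
    The study's central metric, mean response time versus offered load, can be illustrated with the textbook M/M/c queue and the Erlang C delay probability. This sketch uses assumed, illustrative parameters, and the simple queueing model deliberately omits the virtualization overheads that the study's own system model captures.

```python
import math

def mmc_mean_response(lam, mu, c):
    """Mean response time (waiting + service) of an M/M/c queue.

    lam: arrival rate, mu: per-server service rate, c: number of servers.
    Requires lam < c * mu; uses the Erlang C formula for the delay probability.
    """
    rho = lam / (c * mu)
    assert rho < 1.0, "queue must be stable"
    a = lam / mu                                  # offered load in Erlangs
    tail = a ** c / (math.factorial(c) * (1.0 - rho))
    erlang_c = tail / (sum(a ** k / math.factorial(k) for k in range(c)) + tail)
    wq = erlang_c / (c * mu - lam)                # mean wait in queue
    return wq + 1.0 / mu                          # plus mean service time

# Illustrative comparison: four cores of rate 1 vs one "fat" CPU of rate 4,
# at increasing offered load (arrivals per unit time).
for lam in (1.0, 3.0, 3.8):
    print(f"lambda={lam}: M/M/4 {mmc_mean_response(lam, 1.0, 4):.3f}  "
          f"fat M/M/1 {mmc_mean_response(lam, 4.0, 1):.3f}")
```

    With c = 1 the formula reduces to the familiar M/M/1 response time 1/(mu - lam), which makes a convenient sanity check.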

  10. Reversal of isoproterenol-induced downregulation of phospholamban and FKBP12.6 by CPU0213-mediated antagonism of endothelin receptors

    Institute of Scientific and Technical Information of China (English)

    Yu FENG; Xiao-yun TANG; De-zai DAI; Yin DAI

    2007-01-01

    Aim: The downregulation of phospholamban (PLB) and FKBP12.6 as a result of β-receptor activation is involved in the pathway(s) of congestive heart failure. We hypothesized that the endothelin (ET)-1 system may be linked to downregulated PLB and FKBP12.6. Methods: Rats were subjected to ischemia/reperfusion (I/R) to cause heart failure (HF). Isoproterenol (ISO, 1 mg/kg) was injected subcutaneously (sc) for 10 d to worsen HF. CPU0213 (30 mg/kg, sc), a dual ET receptor (ETAR/ETBR) antagonist, was given from d 6 to d 10. On d 11, cardiac function was assessed together with the mRNA levels of ryanodine receptor 2, calstabin-2 (FKBP12.6), PLB, and sarcoplasmic reticulum Ca2+-ATPase. Isolated adult rat ventricular myocytes were incubated with ISO at 1×10⁻⁶ mol/L to set up an in vitro model of HF. Propranolol (PRO), CPU0213, and darusentan (DAR, an ETAR antagonist) were incubated with cardiomyocytes at 1×10⁻⁵ mol/L or 1×10⁻⁶ mol/L in the presence of ISO (1×10⁻⁶ mol/L). Immunocytochemistry and Western blotting were applied to measure the protein levels of PLB and FKBP12.6. Results: The worsened hemodynamics produced by I/R were exacerbated by ISO pretreatment. The significant downregulation of PLB and FKBP12.6 gene expression and the worsened cardiac function caused by ISO were reversed by CPU0213. In vitro, ISO at 1×10⁻⁶ mol/L produced a sharp decline of PLB and FKBP12.6 proteins relative to control. This downregulation of protein expression was significantly reversed by the ET receptor antagonist CPU0213 or by DAR, comparable to that achieved by PRO. Conclusion: This study demonstrates a role of ET in mediating the downregulation of cardiac Ca2+-handling proteins by ISO. Acknowledgement: We are most grateful to Prof David J TRIGGLE from the State University of New York at Buffalo for assistance in revising the English of the manuscript.

  11. Design of a Universal Network Interface Based on MiniGUI Keyboard Input

    Institute of Scientific and Technical Information of China (English)

    黄健

    2011-01-01

    Based on an analysis of the MiniGUI system architecture, a universal keyboard network-interface communication protocol is designed and the network interface is implemented under embedded Linux and MiniGUI. UDP is adopted as the transport protocol: the control terminal sends UDP packets conforming to the network-keyboard protocol to the MiniGUI network keyboard interface, thereby realizing MiniGUI keyboard input. According to the layout requirements of a particular keypad, the keyboard mapping codes of the network interface can be changed to suit different keyboard-command control scenarios. The network interface can also be extended over LANs and WANs for remote control.
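
    A control-terminal sender for the kind of UDP keyboard protocol described above might look like the following sketch. The packet layout, magic byte, port, and scancode values are all hypothetical; the paper does not publish its wire format.

```python
import socket
import struct

# Hypothetical packet layout: 1-byte magic, 1-byte press/release flag,
# 16-bit big-endian keycode. These values are illustrative only.
MAGIC = 0x4B            # 'K'
PRESS, RELEASE = 1, 0

def encode_key_event(keycode: int, pressed: bool) -> bytes:
    """Pack one key event into a 4-byte UDP payload."""
    return struct.pack("!BBH", MAGIC, PRESS if pressed else RELEASE, keycode)

def send_key(sock, addr, keycode):
    """Emit a press followed by a release, i.e. one remote keypress."""
    sock.sendto(encode_key_event(keycode, True), addr)
    sock.sendto(encode_key_event(keycode, False), addr)

if __name__ == "__main__":
    # Address of the MiniGUI device (assumed; loopback used here for the demo).
    target = ("127.0.0.1", 5555)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        send_key(s, target, 0x001C)   # an assumed scancode value
```

    On the device side, the MiniGUI keyboard interface would decode the same layout and inject the keycode into the input layer; remapping keys then amounts to changing the keycode table at either end.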

  12. Research and Implementation of an Intelligent Terminal Keyboard Driver on Linux

    Institute of Scientific and Technical Information of China (English)

    刘思敏

    2013-01-01

    To meet the input user-interface requirements of intelligent terminal systems, a matrix-keyboard device driver was implemented under the Montavista Linux environment on the OMAP5912 dual-core processor, exploiting its rich set of peripheral interfaces and further improving the human-machine interaction of intelligent terminals. The implementation of this input device driver is the main contribution of this paper.

  13. Comparison of the CPU and memory performance of StatPatternRecognition (SPR) and Toolkit for MultiVariate Analysis (TMVA)

    CERN Document Server

    Palombo, Giulio

    2011-01-01

    High Energy Physics data sets are often characterized by a huge number of events. Therefore, it is extremely important to use statistical packages able to efficiently analyze these unprecedented amounts of data. We compare the performance of the statistical packages StatPatternRecognition (SPR) and Toolkit for MultiVariate Analysis (TMVA). We focus on how CPU time and memory usage of the learning process scale versus data set size. As classifiers, we consider Random Forests, Boosted Decision Trees and Neural Networks. For our tests, we employ a data set widely used in the machine learning community, "Threenorm" data set, as well as data tailored for testing various edge cases. For each data set, we constantly increase its size and check CPU time and memory needed to build the classifiers implemented in SPR and TMVA. We show that SPR is often significantly faster and consumes significantly less memory. For example, the SPR implementation of Random Forest is by an order of magnitude faster and consumes an order...

  14. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    Science.gov (United States)

    Setiani, Tia Dwi; Suprijadi, Haryanto, Freddy

    2016-03-01

    Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to model radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study investigated the computation time of x-ray imaging simulation on a GPU (graphics processing unit) compared with a standard CPU (central processing unit). Furthermore, the effect of physical parameters on radiographic image quality, and a comparison of the image quality produced by GPU and CPU simulations, are evaluated. The simulations were run serially on a CPU and on two GPUs with 384 and 2304 cores. In the GPU simulations each core tracks one photon, so a large number of photons are calculated simultaneously. Results show that simulations on the GPU were significantly accelerated: the 2304-core GPU performed about 64-114 times faster than the CPU, while the 384-core GPU performed about 20-31 times faster than a single CPU core. A further result is that optimum image quality was obtained with 10⁸ histories or more and energies from 60 keV to 90 keV. Analyzed statistically, the quality of GPU and CPU images is essentially the same.

  15. Toward a formal verification of a floating-point coprocessor and its composition with a central processing unit

    Science.gov (United States)

    Pan, Jing; Levitt, Karl N.; Cohen, Gerald C.

    1991-01-01

    Discussed here is work to formally specify and verify a floating-point coprocessor based on the MC68881, using the HOL verification system developed at Cambridge University. The coprocessor consists of two independent units: a bus interface unit used to communicate with the CPU, and an arithmetic processing unit used to perform the actual calculations. Reasoning about the interaction and synchronization among processes using higher-order logic is demonstrated.

  16. Parallel particle swarm optimization on a graphics processing unit with application to trajectory optimization

    Science.gov (United States)

    Wu, Q.; Xiong, F.; Wang, F.; Xiong, Y.

    2016-10-01

    In order to reduce the computational time, a fully parallel implementation of the particle swarm optimization (PSO) algorithm on a graphics processing unit (GPU) is presented. Instead of being executed on the central processing unit (CPU) sequentially, PSO is executed in parallel via the GPU on the compute unified device architecture (CUDA) platform. The processes of fitness evaluation, updating of velocity and position of all particles are all parallelized and introduced in detail. Comparative studies on the optimization of four benchmark functions and a trajectory optimization problem are conducted by running PSO on the GPU (GPU-PSO) and CPU (CPU-PSO). The impact of design dimension, number of particles and size of the thread-block in the GPU and their interactions on the computational time is investigated. The results show that the computational time of the developed GPU-PSO is much shorter than that of CPU-PSO, with comparable accuracy, which demonstrates the remarkable speed-up capability of GPU-PSO.
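
    The per-particle parallelism described above can be sketched on the CPU with NumPy vectorization, where each array operation updates all particles at once, mirroring the one-thread-per-particle GPU layout. The function name and PSO parameters below are illustrative, not the paper's.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=64, iters=200, seed=0,
                 w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimize f with a vectorized PSO: fitness evaluation and the
    velocity/position updates for every particle are single array
    operations, the same structure the paper maps onto CUDA threads."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), f(x)
    g = pbest[np.argmin(pbest_val)].copy()        # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = f(x)
        better = val < pbest_val                  # particles that improved
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Usage: the sphere benchmark, whose optimum value 0 lies at the origin.
sphere = lambda x: np.sum(x * x, axis=1)
best_x, best_val = pso_minimize(sphere, dim=3)
print(best_val)
```

    On a GPU the three vectorized statements in the loop body become kernels; the thread-block size mentioned in the paper governs how particles are grouped into those kernels.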

  17. Arbitrary Angular Momentum Electron Repulsion Integrals with Graphical Processing Units: Application to the Resolution of Identity Hartree-Fock Method.

    Science.gov (United States)

    Kalinowski, Jaroslaw; Wennmohs, Frank; Neese, Frank

    2017-07-11

    A resolution of identity based implementation of the Hartree-Fock method on graphical processing units (GPUs) is presented that is capable of handling basis functions with arbitrary angular momentum. For practical reasons, only functions up to (ff|f) angular momentum are presently calculated on the GPU, thus leaving the calculation of higher angular momenta integrals on the CPU of the hybrid CPU-GPU environment. Speedups of up to a factor of 30 are demonstrated relative to state-of-the-art serial and parallel CPU implementations. Benchmark calculations with over 3500 contracted basis functions (def2-SVP or def2-TZVP basis sets) are reported. The presented implementation supports all devices with OpenCL support and is capable of utilizing multiple GPU cards over either MPI or OpenCL itself.

  18. Pharmacological efficacy of CPU 86017 on hypoxic pulmonary hypertension in rats: mediated by direct inhibition of calcium channels and antioxidant action, but indirect effects on the ET-1 pathway.

    Science.gov (United States)

    Zhang, Tian-Tai; Cui, Bing; Dai, De-Zai; Tang, Xiao-Yun

    2005-12-01

    Endothelin-1 (ET-1) plays a key role in the pathogenesis of pulmonary hypertension. The present study examined the effects of a novel compound, p-chlorobenzyltetrahydroberberine (CPU 86017), on the endothelin-1 system in hypoxia-induced pulmonary hypertension in rats. Male SD rats were divided into control, untreated pulmonary hypertension, nifedipine (10 mg/kg p.o.), and CPU 86017 (80, 40, and 20 mg/kg p.o.) groups. Pulmonary hypertension was established by housing the rats in a hypoxic (10 ± 0.5% oxygen) chamber 8 hours per day for 4 weeks. Hemodynamic and morphologic assessment showed a significant increase in central venous pressure (CVP), right ventricular systolic pressure (RVSP), and pulmonary arteriole remodeling in the pulmonary hypertensive rats, which were improved by CPU 86017 at 80 and 40 mg/kg (P …) … CPU 86017 groups. The maladjustment of the redox enzyme system in pulmonary hypertensive rats was corrected after treatment. We conclude that CPU 86017 improves pulmonary hypertension mainly by suppressing the endothelin-1 pathway upstream and downstream via calcium antagonism and antioxidant action, thereby relieving the pathogenesis of the disease.

  19. Exploiting graphics processing units for computational biology and bioinformatics.

    Science.gov (United States)

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of generalpurpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
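
    The article's running example, the all-pairs distance between instances in a dataset, can be written so that the bulk of the work is a single matrix product, the same restructuring that favors GPU execution. A NumPy sketch (names assumed):

```python
import numpy as np

def all_pairs_dist(X):
    """Euclidean distance between every pair of rows of X (n x d).

    Uses the expansion ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, so the
    whole computation reduces to one matrix product plus broadcasting --
    a memory-access pattern that maps well onto coalesced GPU reads.
    """
    sq = np.sum(X * X, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.sqrt(np.maximum(d2, 0.0))   # clamp tiny negatives from rounding

# Usage: two points at distance 5 (a 3-4-5 triangle).
X = np.array([[0.0, 0.0], [3.0, 4.0]])
print(all_pairs_dist(X))
```

    The naive double loop over pairs touches memory one element at a time; rewriting it as `X @ X.T` hands the work to a tuned BLAS routine on the CPU, or to a matrix-multiply kernel on the GPU.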

  20. Air pollution modelling using a graphics processing unit with CUDA

    CERN Document Server

    Molnar, Ferenc; Meszaros, Robert; Lagzi, Istvan; 10.1016/j.cpc.2009.09.008

    2010-01-01

    The graphics processing unit (GPU) is a powerful tool for parallel computing. In recent years the performance and capabilities of GPUs have increased, and the Compute Unified Device Architecture (CUDA), a parallel computing architecture, has been developed by NVIDIA to exploit this performance in general-purpose computations. Here we show for the first time a possible application of GPUs to environmental studies, serving as a basis for decision-making strategies. A stochastic Lagrangian particle model has been developed on CUDA to estimate the transport and transformation of radionuclides from a single point source during an accidental release. Our results show that the parallel implementation achieves typical acceleration values on the order of 80-120 times compared with a single-threaded CPU implementation on a 2.33 GHz desktop computer. Only very small differences were found between the results obtained from GPU and CPU simulations, which are comparable with the effect of stochastic tran...
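
    A single step of a stochastic Lagrangian particle model of the kind described, deterministic advection plus a Gaussian random walk representing turbulent diffusion, can be sketched as follows. The wind, diffusivity, and time-step values are illustrative, not the paper's, and each particle's update is independent, which is what makes the model embarrassingly parallel on a GPU.

```python
import numpy as np

def step_particles(pos, wind, K, dt, rng):
    """Advance particles one time step: advection by the wind vector plus
    a Gaussian random walk with eddy diffusivity K (standard Lagrangian
    random-walk model; parameters here are illustrative)."""
    drift = wind * dt
    noise = rng.normal(0.0, np.sqrt(2.0 * K * dt), size=pos.shape)
    return pos + drift + noise

rng = np.random.default_rng(42)
pos = np.zeros((10_000, 2))           # all particles released at the origin
wind = np.array([5.0, 0.0])           # 5 m/s along x (assumed)
K, dt = 10.0, 1.0                     # eddy diffusivity m^2/s, step s (assumed)
for _ in range(100):
    pos = step_particles(pos, wind, K, dt, rng)
print(pos.mean(axis=0))               # plume centre drifts downwind
```

    After 100 steps the plume centre sits near x = wind * t = 500 m, while the Gaussian spread about it grows as sqrt(2 K t), the behaviour a dispersion model of this type is built to reproduce.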

  1. Study on finite deformation finite element analysis algorithm of turbine blade based on CPU+GPU heterogeneous parallel computation

    Directory of Open Access Journals (Sweden)

    Liu Tian-Yuan

    2016-01-01

    Blades are among the core components of turbomachinery, and their reliability is directly related to the normal operation of the plant unit. However, as blade length and flow rate increase, nonlinear effects such as finite deformation must be considered in strength computations to guarantee sufficient accuracy. Parallel computation is adopted to improve the efficiency of the classical nonlinear finite element method and shorten the blade design period, which is of great importance for engineering practice. In this paper, the dynamic partial differential equations and their finite element forms for turbine blades under centrifugal and flow loads are given first. Then, according to the characteristics of the turbine blade model, the classical method is optimized using heterogeneous CPU + GPU parallel computation. Finally, numerical experiments validate the approach, comparing the computation speed of the proposed algorithm with that of ANSYS. For a rectangular plate model with 10 k to 4000 k mesh elements, a maximum speed-up of 4.31 is obtained; for a real blade-rim model with 500 k mesh elements, a speed-up of 4.54 is obtained.

  2. GPU/CPU Co-Processing Parallel Computation for Seismic Data Processing in Oil and Gas Exploration

    Institute of Scientific and Technical Information of China (English)

    刘国峰; 刘钦; 李博; 佟小龙; 刘洪

    2009-01-01

    As the graphics processing unit (GPU) matures for general-purpose computation, GPU/CPU co-processing parallel computing can be applied to seismic data processing in oil and gas exploration, bringing major improvements to many large-scale, computationally critical steps. This paper presents the idea, architecture, and programming environment of GPU/CPU co-processing with CUDA, and analyzes in detail why this approach greatly improves computational efficiency. Prestack time migration and Gazdag depth migration in seismic data processing are taken as the entry points, and images of the prototype test results are shown. In production practice, one must often compromise between algorithmic accuracy and computing speed; the high degree of parallelism offered by GPU/CPU co-processing opens broad room for optimizing this trade-off. The prototypes in this paper show clear advantages over a common PC cluster, and the desktop co-processing parallel architecture described here may serve as a basis for a new choice of high-performance computing configuration in geophysics.

  3. CPU-GPU mixed implementation of virtual node method for real-time interactive cutting of deformable objects using OpenCL.

    Science.gov (United States)

    Jia, Shiyu; Zhang, Weizhong; Yu, Xiaokang; Pan, Zhenkuan

    2015-09-01

    Surgical simulators need to simulate interactive cutting of deformable objects in real time. The goal of this work was to design an interactive cutting algorithm that eliminates traditional cutting state classification and can work simultaneously with real-time GPU-accelerated deformation without affecting its numerical stability. A modified virtual node method for cutting is proposed. Deformable object is modeled as a real tetrahedral mesh embedded in a virtual tetrahedral mesh, and the former is used for graphics rendering and collision, while the latter is used for deformation. Cutting algorithm first subdivides real tetrahedrons to eliminate all face and edge intersections, then splits faces, edges and vertices along cutting tool trajectory to form cut surfaces. Next virtual tetrahedrons containing more than one connected real tetrahedral fragments are duplicated, and connectivity between virtual tetrahedrons is updated. Finally, embedding relationship between real and virtual tetrahedral meshes is updated. Co-rotational linear finite element method is used for deformation. Cutting and collision are processed by CPU, while deformation is carried out by GPU using OpenCL. Efficiency of GPU-accelerated deformation algorithm was tested using block models with varying numbers of tetrahedrons. Effectiveness of our cutting algorithm under multiple cuts and self-intersecting cuts was tested using a block model and a cylinder model. Cutting of a more complex liver model was performed, and detailed performance characteristics of cutting, deformation and collision were measured and analyzed. Our cutting algorithm can produce continuous cut surfaces when traditional minimal element creation algorithm fails. Our GPU-accelerated deformation algorithm remains stable with constant time step under multiple arbitrary cuts and works on both NVIDIA and AMD GPUs. GPU-CPU speed ratio can be as high as 10 for models with 80,000 tetrahedrons. Forty to sixty percent real

  4. Endoplasmic reticulum stress mediating downregulated StAR and 3-beta-HSD and low plasma testosterone caused by hypoxia is attenuated by CPU86017-RS and nifedipine

    Directory of Open Access Journals (Sweden)

    Liu Gui-Lai

    2012-01-01

    Background: Hypoxia exposure initiates low serum testosterone levels that could be attributed to downregulated androgen-biosynthesizing genes such as StAR (steroidogenic acute regulatory protein) and 3-beta-HSD (3-beta-hydroxysteroid dehydrogenase) in the testis. It was hypothesized that these testicular abnormalities under hypoxia are associated with oxidative stress and an increase in chaperones of endoplasmic reticulum stress (ER stress), and that ER stress could be modulated by a reduction in calcium influx. We therefore verified whether an application of CPU86017-RS (simplified as RS), a derivative of berberine, could alleviate the ER stress, the depressed gene expression of StAR and 3-beta-HSD, and the low plasma testosterone in hypoxic rats, in comparison with nifedipine. Methods: Adult male Sprague-Dawley rats were randomly divided into control, hypoxia for 28 days, and hypoxia treated (mg/kg, p.o.) during the last 14 days with nifedipine (Nif, 10) or one of three doses of RS (20, 40, 80), plus normal rats treated with RS isomer (80). Serum testosterone (T) and luteinizing hormone (LH) were measured. Testicular expression of biomarkers including StAR, 3-beta-HSD, immunoglobulin heavy chain binding protein (Bip), double-strand RNA-activated protein kinase-like ER kinase (PERK) and the pro-apoptotic transcription factor C/EBP homologous protein (CHOP) was measured. Results: In hypoxic rats, serum testosterone levels decreased, and mRNA and protein expression of the testosterone biosynthesis related genes StAR and 3-beta-HSD was downregulated. These changes were linked to an increase in oxidants, upregulated ER stress chaperones (Bip, PERK, CHOP) and distorted histological structure of the seminiferous tubules in the testis. These abnormalities were significantly attenuated by CPU86017-RS and nifedipine. Conclusion: Downregulated StAR and 3-beta-HSD contribute significantly to low testosterone in hypoxic rats and are associated with ER stress

  5. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Chuanfu, E-mail: xuchuanfu@nudt.edu.cn [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Deng, Xiaogang; Zhang, Lilun [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Fang, Jianbin [Parallel and Distributed Systems Group, Delft University of Technology, Delft 2628CD (Netherlands); Wang, Guangxue; Jiang, Yi [State Key Laboratory of Aerodynamics, P.O. Box 211, Mianyang 621000 (China); Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua [College of Computer Science, National University of Defense Technology, Changsha 410073 (China)

    2014-12-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the memory-poor GPU and the memory-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach improves the performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as much as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To our best knowledge, those are the largest-scale CPU–GPU collaborative simulations
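
    The CPU-GPU load-balancing idea above can be sketched as a static split of grid cells proportional to each device's measured throughput, capped by the GPU's smaller memory. This is a hypothetical illustration, not HOSTA's actual balancer; the rates and capacity below are made up.

```python
# Sketch (hypothetical numbers): statically split grid cells between a
# memory-rich CPU and a memory-poor GPU so both finish at the same time,
# subject to the GPU memory cap.

def split_cells(total_cells, cpu_rate, gpu_rate, gpu_capacity):
    """Return (cpu_cells, gpu_cells).

    cpu_rate / gpu_rate: cells processed per second on each device.
    gpu_capacity: maximum number of cells that fit in GPU memory.
    """
    gpu_share = gpu_rate / (cpu_rate + gpu_rate)   # equal-time split
    gpu_cells = min(int(total_cells * gpu_share), gpu_capacity)
    return total_cells - gpu_cells, gpu_cells

# Example: GPU about 1.3x faster than the two CPUs combined, echoing the
# speedup reported in the abstract.
cpu_cells, gpu_cells = split_cells(1_000_000, cpu_rate=1.0, gpu_rate=1.3,
                                   gpu_capacity=800_000)
```

    When the memory cap binds, the CPU absorbs the overflow, which is exactly why collaborating the two devices raises the maximum problem size per node.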

  6. Application Research on Dual-CPU Temperature Measurement of Coal-Conveyor Rollers Based on SMBus

    Institute of Scientific and Technical Information of China (English)

    王亚林; 郭文慧; 查文华

    2014-01-01

    The friction temperature between the coal conveyor belt and the roller shaft is monitored in a coal mine. To achieve high-speed data exchange and processing, a dual-CPU design separates the main control and data processing functions. Multiple MLX90614 infrared temperature sensors are fixed near each roller; their simple SCK and SDA interfaces connect directly to general-purpose CPU I/O pins, each sensor responding to a different address, so roller temperatures can be read quickly. A dual-port RAM serves as shared data memory: its two ports have fully independent sets of control, address and I/O data buses and can independently read or write any address unit. Both CPUs share this memory, which raises the data transfer rate of the system.
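
    The host-side temperature handling can be sketched as below. The 0.02 K-per-count scaling comes from the MLX90614 datasheet; the over-temperature threshold and the sensor-address bookkeeping are hypothetical, and the SMBus I/O itself is not reproduced.

```python
# Sketch of the host-side processing (threshold hypothetical). The
# MLX90614 returns temperature over SMBus as a 16-bit word in units of
# 0.02 K per its datasheet; bus transactions are omitted here.

def mlx90614_raw_to_celsius(raw):
    """Convert a raw MLX90614 object-temperature word to degrees Celsius."""
    if raw & 0x8000:                 # MSB set flags an error frame
        raise ValueError("sensor error flag set")
    return raw * 0.02 - 273.15

def overheated_rollers(readings, limit_c=70.0):
    """Return addresses of rollers whose temperature exceeds the limit.

    readings: mapping of sensor address -> raw temperature word.
    """
    return [addr for addr, raw in readings.items()
            if mlx90614_raw_to_celsius(raw) > limit_c]

# Example: raw 0x3AF7 = 15095 counts -> 15095 * 0.02 - 273.15 = 28.75 C
```

    In the dual-CPU design, one CPU would fill `readings` from the sensors and the other would run checks like `overheated_rollers` out of the shared dual-port RAM.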

  7. MASSIVELY PARALLEL LATENT SEMANTIC ANALYSES USING A GRAPHICS PROCESSING UNIT

    Energy Technology Data Exchange (ETDEWEB)

    Cavanagh, J.; Cui, S.

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of datasets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. A graphics processing unit (GPU) can solve some highly parallel problems much faster than a traditional sequential processor or central processing unit (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a PC cluster. Due to the GPU's application-specific architecture, harnessing the GPU's computational prowess for LSA is a great challenge. We presented a parallel LSA implementation on the GPU, using NVIDIA® Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms software. The performance of this implementation is compared to traditional LSA implementation on a CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1,000x1,000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits of the GPU for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary from relative normal when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.
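
    The numerical core of LSA is a truncated SVD of the term-document matrix. As a plain-Python illustration (the paper offloads this dense algebra to CUBLAS on the GPU), the dominant singular triple can be found by power iteration on A^T A:

```python
# Sketch: find the dominant singular value/vector of a term-document
# matrix by power iteration on A^T A. Illustrative only; a real LSA run
# computes many singular triples with an optimized (GPU) SVD.

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def norm(x):
    return sum(v * v for v in x) ** 0.5

def top_singular(A, iters=100):
    """Dominant singular value and right singular vector of A."""
    At = transpose(A)
    v = [1.0] * len(A[0])
    for _ in range(iters):
        w = matvec(At, matvec(A, v))   # one power-iteration step on A^T A
        v = [x / norm(w) for x in w]
    return norm(matvec(A, v)), v

# Example: A = [[3, 0], [0, 1]] has top singular value 3.
```

    Each iteration is just two matrix-vector products, which is why the workload maps so directly onto BLAS routines, on CPU or GPU alike.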

  8. Research on the Architecture of CPU + Multi-GPU Heterogeneous Collaborative Computing

    Institute of Scientific and Technical Information of China (English)

    李龙飞; 贺占庄; 徐丹妮

    2014-01-01

    Taking the CUDA architecture as an example, this paper analyzes the traditional CPU + single-GPU architecture and proposes a CPU + multi-GPU heterogeneous collaborative computing system. The key issues of CPU management of multiple GPUs and of data communication among the GPUs are discussed in detail, the feasibility of the scheme is analyzed theoretically, and corresponding optimization methods are given.

  9. A Method to Design a Password Lock Controlled by a PS/2 Keyboard with VHDL

    Institute of Scientific and Technical Information of China (English)

    胡彩霞; 吴晓盼

    2011-01-01

    This paper introduces a method for designing a password lock based on an FPGA chip. It describes the function of every module and gives the key VHDL programs. The system uses a standard PS/2 keyboard in place of the common matrix keyboard as the input device, which demonstrates the powerful capability of FPGAs in embedded system design.

  10. Design of a Multifunction Keyboard in a Single-Chip Microcomputer System with the Display Function

    Institute of Scientific and Technical Information of China (English)

    马安良

    2013-01-01

    This paper presents a design method for a multifunction keyboard in a single-chip microcomputer system that reuses the system's display function. The method can be widely applied in systems where MCU interface resources are scarce. Drawing on the development process, the hardware composition and the software design of the keyboard system are described in detail.

  11. A Computer Keyboard Design to Reduce Arm Fatigue

    Institute of Scientific and Technical Information of China (English)

    张晓凡; 张鹏

    2012-01-01

    The factors that cause physical fatigue when using a standard keyboard are analyzed. Based on ergonomic theory, a computer keyboard design that reduces arm fatigue is proposed. Concrete measures are given for the overall layout of the keyboard, its shape, and the arrangement of the keys, so as to improve operating comfort, reduce arm fatigue, and ultimately raise work efficiency.

  12. Design of a USB/PS2 Self-Adapting Keyboard and Mouse Based on the F340 Microcontroller

    Institute of Scientific and Technical Information of China (English)

    袁启孟; 张久明; 翟乐

    2016-01-01

    Keyboards and mice are indispensable input devices for computers. USB devices are now in universal use, but some PS2-interface devices remain in service, especially in ruggedized military equipment: the longer transmission distance of PS2 signals compared with USB keeps PS2 devices widely used in ruggedized computers and servers. This design, based on the F340 microcontroller, develops a combined keyboard-and-mouse device that adapts automatically between USB and PS2 signaling.

  13. Analysis and Optimization of Central Processing Unit Process Parameters

    Science.gov (United States)

    Kaja Bantha Navas, R.; Venkata Chaitana Vignan, Budi; Durganadh, Margani; Rama Krishna, Chunduri

    2017-05-01

    The rapid growth of computing has increased the amount of data processed and, with it, heat dissipation; the CPU in the system unit must therefore be kept within its operating temperature. This paper presents a novel approach to optimizing the operating parameters of a central processing unit with a single response based on the response graph method. The proposed approach comprises a series of steps capable of reducing the uncertainty caused by engineering judgment in the Taguchi method. Orthogonal array values were taken from an ANSYS report. The method shows good convergence between the experimental and the optimum process parameters.
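
    The Taguchi-style analysis the abstract builds on can be sketched as a main-effect computation over an orthogonal array with a smaller-the-better response (CPU temperature). The L4 array and temperature values below are illustrative, not the paper's ANSYS data.

```python
# Sketch (illustrative data): main-effect analysis over an L4 orthogonal
# array using the Taguchi smaller-the-better signal-to-noise ratio.

import math

# L4 array: each row assigns a level (0/1) to factors A, B, C.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
temps = [62.0, 58.0, 55.0, 60.0]   # hypothetical measured temperatures

def sn_smaller_better(y):
    """Taguchi S/N ratio for one smaller-the-better observation."""
    return -10.0 * math.log10(y * y)

def main_effects(array, responses):
    """Mean S/N per factor level; the best level maximizes S/N."""
    n_factors = len(array[0])
    effects = []
    for f in range(n_factors):
        per_level = {}
        for row, y in zip(array, responses):
            per_level.setdefault(row[f], []).append(sn_smaller_better(y))
        effects.append({lvl: sum(v) / len(v) for lvl, v in per_level.items()})
    return effects

# Pick, for each factor, the level with the highest mean S/N.
best = [max(e, key=e.get) for e in main_effects(L4, temps)]
```

    A response graph simply plots these per-level mean S/N values per factor; the optimum setting reads off as the highest point on each factor's curve.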

  14. Large scale neural circuit mapping data analysis accelerated with the graphical processing unit (GPU)

    Science.gov (United States)

    Shi, Yulin; Veidenbaum, Alexander V.; Nicolau, Alex; Xu, Xiangmin

    2014-01-01

    Background: Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post-hoc processing and analysis. New Method: Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab software with GPU-enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. Results: We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that the GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU with up to a 22x speedup, depending on computational tasks. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. Comparison with Existing Method(s): To our best knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Conclusions: Together, GPU-enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. PMID:25277633
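
    The GPU-vs-CPU accuracy comparison mentioned above usually comes down to precision: GPUs often run in single precision. A common sanity check, sketched here with illustrative values (not the paper's data), is to accumulate the same series in float32 and float64 and compare the drift:

```python
# Sketch: emulate single-precision (GPU-style) accumulation with struct
# round-trips and compare against double precision. Data is illustrative.

import struct

def to_f32(x):
    """Round a Python float to the nearest IEEE-754 single."""
    return struct.unpack('f', struct.pack('f', x))[0]

def accumulate(values, single_precision):
    total = 0.0
    for v in values:
        total += v
        if single_precision:
            total = to_f32(total)   # drop to a 24-bit mantissa each step
    return total

data = [0.1] * 100_000              # exact sum is 10000
err32 = abs(accumulate(data, True) - 10000)
err64 = abs(accumulate(data, False) - 10000)
# err32 is orders of magnitude larger than err64
```

    When such drift matters, GPU pipelines either switch the sensitive reductions to double precision or use compensated (Kahan-style) summation.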

  15. Handwriting or Typewriting? The Influence of Pen- or Keyboard-Based Writing Training on Reading and Writing Performance in Preschool Children.

    Science.gov (United States)

    Kiefer, Markus; Schuler, Stefanie; Mayer, Carmen; Trumpp, Natalie M; Hille, Katrin; Sachse, Steffi

    2015-01-01

    Digital writing devices associated with the use of computers, tablet PCs, or mobile phones are increasingly replacing writing by hand. However, it remains controversial how writing modes influence reading and writing performance in children at the start of literacy. On the one hand, the ease of typing on digital devices may accelerate reading and writing in young children, who have less developed sensory-motor skills. On the other hand, the meaningful coupling between action and perception during handwriting, which establishes sensory-motor memory traces, could facilitate written language acquisition. In order to decide between these theoretical alternatives, for the present study we developed an intensive training program with 16 sessions for preschool children attending German kindergarten. Using closely matched letter learning games, eight letters of the German alphabet were trained either by handwriting with a pen on a sheet of paper or by typing on a computer keyboard. Letter recognition, naming, and writing performance as well as word reading and writing performance were assessed. Results did not indicate a superiority of typing training over handwriting training in any of these tasks. In contrast, handwriting training was superior to typing training in word writing, and, as a tendency, in word reading. The results of our study therefore support theories of action-perception coupling assuming a facilitatory influence of sensory-motor representations established during handwriting on reading and writing.

  16. Design of a Split-Screen LCD Menu Based on Keyboard Interaction

    Institute of Scientific and Technical Information of China (English)

    张小鸣; 张岩

    2013-01-01

    To realize variable display, parameter setting, and other liquid crystal display (LCD) functions with a minimal keyboard on a smart meter operation panel, a split-screen LCD menu design method based on four interactive keys is presented. A non-encoded keyboard in which each key carries multiple meanings is used, a mark-transfer algorithm is proposed, and a split-screen menu system based on keyboard interaction is designed. On an STC89C54RD+ and LM12864LFW hardware platform, an assembly program implementing the split-screen menu structure was developed, providing reverse-video character display, menu selection, screen paging, and parameter setting. Experimental results show that the keyboard interface is simple, occupies little memory, and is convenient and intuitive to operate, improving human-computer interaction; the design offers reference value for the operation-panel design of smart meters.
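
    The four-key menu logic can be illustrated as a small state machine. The key names, screen contents, and edit behavior below are hypothetical, standing in for the assembly implementation described above:

```python
# Sketch of a four-key split-screen menu state machine (keys and menu
# items hypothetical): SELECT cycles items on the current screen, UP/DOWN
# page between screens, ENTER toggles parameter-edit mode.

class MenuFSM:
    def __init__(self, screens):
        self.screens = screens          # list of screens, each a list of items
        self.screen = 0
        self.item = 0
        self.editing = False

    def press(self, key):
        if key == "ENTER":
            self.editing = not self.editing    # enter/leave parameter edit
        elif self.editing:
            pass                               # UP/DOWN would adjust the value
        elif key == "SELECT":
            self.item = (self.item + 1) % len(self.screens[self.screen])
        elif key == "UP":
            self.screen = (self.screen - 1) % len(self.screens)
            self.item = 0
        elif key == "DOWN":
            self.screen = (self.screen + 1) % len(self.screens)
            self.item = 0

    def current(self):
        return self.screens[self.screen][self.item]

menu = MenuFSM([["voltage", "current"], ["alarm limit", "contrast"]])
menu.press("SELECT")        # highlight moves to "current"
menu.press("DOWN")          # page to the second screen
```

    Giving each key a mode-dependent meaning is what lets four keys cover selection, paging, and editing, much as the "one key, multiple meanings" scheme in the abstract.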

  17. Accelerating sparse linear algebra using graphics processing units

    Science.gov (United States)

    Spagnoli, Kyle E.; Humphrey, John R.; Price, Daniel K.; Kelmelis, Eric J.

    2011-06-01

    The modern graphics processing unit (GPU) found in many standard personal computers is a highly parallel math processor capable of over 1 TFLOPS of peak computational throughput at a cost similar to a high-end CPU, with an excellent FLOPS-to-watt ratio. High-level sparse linear algebra operations are computationally intense, often requiring large amounts of parallel operations, and would seem a natural fit for the processing power of the GPU. Our work is on a GPU-accelerated implementation of sparse linear algebra routines. We present results from both direct and iterative sparse system solvers. The GPU execution model featured by NVIDIA GPUs based on CUDA demands very strong parallelism, requiring between hundreds and thousands of simultaneous operations to achieve high performance. Some constructs from linear algebra map extremely well to the GPU and others map poorly. CPUs, on the other hand, do well at smaller-order parallelism and perform acceptably during low-parallelism code segments. Our work addresses this via a hybrid processing model, in which the CPU and GPU work simultaneously to produce results. In many cases, this is accomplished by allowing each platform to do the work it performs most naturally. For example, the CPU is responsible for the graph-theory portion of the direct solvers while the GPU simultaneously performs the low-level linear algebra routines.
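
    A representative "low-level linear algebra routine" from such solvers is the sparse matrix-vector product in compressed sparse row (CSR) storage, sketched here in plain Python for illustration; a GPU version assigns rows (or groups of rows) to parallel threads.

```python
# Sketch: y = A @ x for A stored in CSR format (values, column indices,
# and row pointers). The per-row inner loop is what GPU kernels
# parallelize across threads.

def csr_matvec(values, col_idx, row_ptr, x):
    """Multiply a CSR matrix by a dense vector."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):   # nonzeros of row r
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

# A = [[4, 0, 1],
#      [0, 3, 0],
#      [2, 0, 5]]
values  = [4.0, 1.0, 3.0, 2.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
y = csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0])
```

    Irregular row lengths are precisely what makes this kernel map well or poorly onto the GPU, motivating the hybrid CPU/GPU split the abstract describes.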

  18. Effect of CPU-XT-008, a combretastatin A-4 analogue, on the proliferation, apoptosis and expression of vascular endothelial growth factor and basic fibroblast growth factor in human umbilical vein endothelial cells.

    Science.gov (United States)

    Xiong, Rui; Sun, Jing; Liu, Kun; Xu, Yungen; He, Shuying

    2016-01-01

    The present study investigated the effect of the combretastatin A-4 analogue CPU-XT-008 on the proliferation, apoptosis and expression of vascular endothelial growth factor (VEGF) and basic fibroblast growth factor (FGF-2) in human umbilical vein endothelial cells (HUVECs). The proliferation capacity of HUVECs was analyzed with a cell viability assay, while their apoptosis and migration abilities were evaluated via flow cytometry and monolayer denudation assay, respectively. The mRNA and protein expression levels of VEGF and FGF-2 in these cells were determined by reverse transcription-polymerase chain reaction, and cell-based ELISA, western blotting and immunocytochemistry, respectively. The results demonstrated that CPU-XT-008 inhibited proliferation and migration, and induced apoptosis in HUVECs in a dose-dependent manner. In addition, CPU-XT-008 downregulated the mRNA and protein expression levels of VEGF and FGF-2 in these cells. These findings suggest that CPU-XT-008 exerts anti-angiogenic effects in HUVECs, which may explain the inhibition of cell proliferation and migration, induction of apoptosis, and reduction in the mRNA and protein expression levels of VEGF and FGF-2 observed in the present study.

  19. Massively Parallel Latent Semantic Analyzes using a Graphics Processing Unit

    Energy Technology Data Exchange (ETDEWEB)

    Cavanagh, Joseph M [ORNL; Cui, Xiaohui [ORNL

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of data sets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. The Graphics Processing Unit (GPU) can solve some highly parallel problems much faster than the traditional sequential processor (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a computer cluster. Due to the GPU's application-specific architecture, harnessing the GPU's computational prowess for LSA is a great challenge. We present a parallel LSA implementation on the GPU, using NVIDIA Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms. The performance of this implementation is compared to a traditional LSA implementation on the CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1000x1000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits the GPU has for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary from relative normal when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.

  20. Rapid learning-based video stereolization using graphic processing unit acceleration

    Science.gov (United States)

    Sun, Tian; Jung, Cheolkon; Wang, Lei; Kim, Joongkyu

    2016-09-01

    Video stereolization has received much attention in recent years due to the lack of stereoscopic three-dimensional (3-D) contents. Although video stereolization can enrich stereoscopic 3-D contents, it is hard to achieve automatic two-dimensional-to-3-D conversion with less computational cost. We proposed rapid learning-based video stereolization using graphics processing unit (GPU) acceleration. We first generated an initial depth map based on learning from examples. Then, we refined the depth map using saliency and cross-bilateral filtering to make object boundaries clear. Finally, we performed depth-image-based rendering to generate stereoscopic 3-D views. To accelerate the computation of video stereolization, we provided a parallelizable hybrid GPU-central processing unit (CPU) solution suitable for running on the GPU. Experimental results demonstrate that the proposed method is nearly 180 times faster than CPU-based processing and achieves performance comparable to the state-of-the-art ones.
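
    The depth-image-based rendering step can be illustrated on a single scanline: each pixel shifts horizontally by a disparity proportional to its depth, and unfilled positions become holes for later inpainting. The 1-D row, depth scaling, and hole marker below are all illustrative, not the paper's implementation.

```python
# Sketch of DIBR on one scanline: nearer pixels (larger depth value)
# shift further; near pixels must overwrite far ones (occlusion), and
# uncovered positions are holes.

def dibr_row(colors, depths, max_disparity=3):
    """Render one scanline of a synthesized view from color + depth.

    depths are in [0, 1] (1 = near). Unfilled positions are left as
    None, i.e. holes to be filled by inpainting.
    """
    out = [None] * len(colors)
    # Process far-to-near so near pixels overwrite far ones.
    for i in sorted(range(len(colors)), key=lambda i: depths[i]):
        j = i - round(depths[i] * max_disparity)
        if 0 <= j < len(out):
            out[j] = colors[i]
    return out

row = dibr_row(colors=[10, 20, 30, 40], depths=[0.0, 0.0, 1.0, 1.0])
# pixel 2 shifts off-screen; pixel 3 lands on index 0, occluding pixel 0
```

    Because every pixel's shift is independent, this warp is trivially parallel, which is why DIBR is a natural candidate for the GPU stage of the hybrid pipeline.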