WorldWideScience

Sample records for idle computing resources

  1. Anti-idling campaign : Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-11-01

    The efficient use of transportation fuels and other petroleum products is being promoted by the Canadian Petroleum Products Institute. The Institute was busy during the past year in attempting to gain an understanding of the measures that could be adopted to help motorists clearly identify the relationship between fuel consumption, personal transportation spending, and environmental impacts. The Institute undertook these efforts with the Natural Resources Canada (NRCan) Office of Energy Efficiency and the Public Policy Forum, both of which provided funding. A first step proposed was the development of an anti-idling public awareness campaign. It was recognized that idling a vehicle for more than ten seconds costs money and wastes fuel, while simultaneously contributing to air pollution, greenhouse gas emissions, and climate change. The campaign also involved Esso, Shell, Petro-Canada, Canadian Tire and Sunoco for the development and implementation phases over the last two weeks of August 2002. A pilot campaign was tested in Mississauga, Ontario. Various materials were used for this campaign, such as posters, banners, cling vinyl window decals, air fresheners and information cards. The main successes of the campaign were: testing the methods of communicating the anti-idling message to drivers at gasoline retailing sites, increasing awareness among the driving public concerning the problems resulting from excessive idling, and encouraging the reduction of idling whenever and wherever it takes place. 1 tab.

  2. An Idle-State Detection Algorithm for SSVEP-Based Brain-Computer Interfaces Using a Maximum Evoked Response Spatial Filter.

    Science.gov (United States)

    Zhang, Dan; Huang, Bisheng; Wu, Wei; Li, Siliang

    2015-11-01

    Although accurate recognition of the idle state is essential for the application of brain-computer interfaces (BCIs) in real-world situations, it remains a challenging task due to the variability of the idle state. In this study, a novel algorithm was proposed for idle state detection in a steady-state visual evoked potential (SSVEP)-based BCI. The proposed algorithm aims to solve the idle state detection problem by constructing a better model of the control states. For feature extraction, a maximum evoked response (MER) spatial filter was developed to extract neurophysiologically plausible SSVEP responses, by finding the combination of multi-channel electroencephalogram (EEG) signals that maximized the evoked responses while suppressing the unrelated background EEG. The extracted SSVEP responses at the frequencies of both the attended and the unattended stimuli were then used to form feature vectors, and a series of binary classifiers for recognition of each control state and the idle state were constructed. EEG data from nine subjects in a three-target SSVEP BCI experiment with a variety of idle state conditions were used to evaluate the proposed algorithm. Compared to the most popular canonical correlation analysis-based algorithm and the conventional power spectrum-based algorithm, the proposed algorithm outperformed them by achieving an offline control state classification accuracy of 88.0 ± 11.1% and idle state false positive rates (FPRs) ranging from 7.4 ± 5.6% to 14.2 ± 10.1%, depending on the specific idle state conditions. Moreover, the online simulation reported BCI performance close to practical use: 22.0 ± 2.9 out of the 24 control commands were correctly recognized, and the FPRs were as low as approximately 0.5 events/min in the idle state conditions with eyes open and 0.05 events/min in the idle state condition with eyes closed. These results demonstrate the potential of the proposed algorithm for implementing practical SSVEP BCI systems.
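
    The abstract does not spell out the MER filter computation, but the stated goal of maximizing evoked response power while suppressing background EEG maps naturally onto a Rayleigh-quotient (generalized eigenvalue) formulation. The Python sketch below is a hypothetical illustration under that assumption; the array shapes, regularization term and function names are invented and are not the authors' implementation.

      # Hypothetical sketch of a maximum-evoked-response (MER)-style spatial filter.
      # Assumption: the filter w maximizes trial-averaged (evoked) power relative to
      # residual background power, solved as a generalized eigenvalue problem.
      import numpy as np
      from scipy.linalg import eigh

      def mer_spatial_filter(trials):
          """trials: array of shape (n_trials, n_channels, n_samples)."""
          evoked = trials.mean(axis=0)                      # trial-averaged SSVEP response
          residual = trials - evoked                        # background EEG left after averaging
          S = evoked @ evoked.T                             # evoked covariance (channels x channels)
          N = sum(r @ r.T for r in residual) / len(residual)
          vals, vecs = eigh(S, N + 1e-6 * np.eye(len(S)))   # regularized generalized eigenproblem
          return vecs[:, -1]                                # eigenvector with the largest ratio

      # Usage: project the multi-channel EEG onto the filter, then build the
      # per-frequency features feeding the control-state vs. idle-state classifiers.
      rng = np.random.default_rng(0)
      trials = rng.standard_normal((40, 8, 256))            # 40 trials, 8 channels, 1 s at 256 Hz
      w = mer_spatial_filter(trials)
      projected = np.einsum("c,tcs->ts", w, trials)         # shape (40, 256)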

  3. Advances in ATLAS@Home towards a major ATLAS computing resource

    CERN Document Server

    Cameron, David; The ATLAS collaboration

    2018-01-01

    The volunteer computing project ATLAS@Home has been providing a stable computing resource for the ATLAS experiment since 2013. It has recently undergone some significant developments and as a result has become one of the largest resources contributing to ATLAS computing, by expanding its scope beyond traditional volunteers and into exploitation of idle computing power in ATLAS data centres. Removing the need for virtualization on Linux and instead using container technology has made the entry barrier for data centre participation significantly lower, and in this paper we describe the implementation and results of this change. We also present other recent changes and improvements in the project. In early 2017 the ATLAS@Home project was merged into a combined LHC@Home platform, providing a unified gateway to all CERN-related volunteer computing projects. The ATLAS Event Service shifts data processing from file-level to event-level and we describe how ATLAS@Home was incorporated into this new paradigm. The finishing...

  4. xdamp Version 6: an IDL-based data and image manipulation program.

    Energy Technology Data Exchange (ETDEWEB)

    Ballard, William Parker

    2012-04-01

    The original DAMP (DAta Manipulation Program) was written by Mark Hedemann of Sandia National Laboratories and used the CA-DISSPLA™ (available from Computer Associates International, Inc., Garden City, NY) graphics package as its engine. It was used to plot, modify, and otherwise manipulate the one-dimensional data waveforms (data vs. time) from a wide variety of accelerators. With the waning of CA-DISSPLA and the increasing popularity of Unix®-based workstations, a replacement was needed. This package uses the IDL® software, available from Research Systems Incorporated, a Xerox company, in Boulder, Colorado, as the engine, and creates a set of widgets to manipulate the data in a manner similar to the original DAMP and earlier versions of xdamp. IDL is currently supported on a wide variety of Unix platforms such as IBM® workstations, Hewlett Packard workstations, SUN® workstations, Microsoft® Windows™ computers, Macintosh® computers and Digital Equipment Corporation VMS® and Alpha® systems. Thus, xdamp is portable across many platforms. We have verified operation, albeit with some minor IDL bugs, on personal computers using Windows 7 and Windows Vista; Unix platforms; and Macintosh computers. Version 6 is an update that uses the IDL Virtual Machine to resolve the need for licensing IDL.

  5. 48 CFR 31.205-17 - Idle facilities and idle capacity costs.

    Science.gov (United States)

    2010-10-01

    ... workload; or (2) Were necessary when acquired and are now idle because of changes in requirements..., or sale, in accordance with sound business, economics, or security practices. Widespread idle...

  6. xdamp Version 6.100: An IDL®-based data and image manipulation program

    International Nuclear Information System (INIS)

    Ballard, William Parker

    2012-01-01

    The original DAMP (DAta Manipulation Program) was written by Mark Hedemann of Sandia National Laboratories and used the CA-DISSPLA™ (available from Computer Associates International, Inc., Garden City, NY) graphics package as its engine. It was used to plot, modify, and otherwise manipulate the one-dimensional data waveforms (data vs. time) from a wide variety of accelerators. With the waning of CA-DISSPLA and the increasing popularity of Unix®-based workstations, a replacement was needed. This package uses the IDL® software, available from Research Systems Incorporated, a Xerox company, in Boulder, Colorado, as the engine, and creates a set of widgets to manipulate the data in a manner similar to the original DAMP and earlier versions of xdamp. IDL is currently supported on a wide variety of Unix platforms such as IBM® workstations, Hewlett Packard workstations, SUN® workstations, Microsoft® Windows™ computers, Macintosh® computers and Digital Equipment Corporation VMS® and Alpha® systems. Thus, xdamp is portable across many platforms. We have verified operation, albeit with some minor IDL bugs, on personal computers using Windows 7 and Windows Vista; Unix platforms; and Macintosh computers. Version 6 is an update that uses the IDL Virtual Machine to resolve the need for licensing IDL.

  7. Idling Reduction for Personal Vehicles

    Energy Technology Data Exchange (ETDEWEB)

    None

    2015-05-07

    Fact sheet on reducing engine idling in personal vehicles. Idling your vehicle--running your engine when you're not driving it--truly gets you nowhere. Idling reduces your vehicle's fuel economy, costs you money, and creates pollution. Idling for more than 10 seconds uses more fuel and produces more emissions that contribute to smog and climate change than stopping and restarting your engine does.

  8. The IDA-LIKE peptides IDL6 and IDL7 are negative modulators of stress responses in Arabidopsis thaliana.

    Science.gov (United States)

    Vie, Ane Kjersti; Najafi, Javad; Winge, Per; Cattan, Ester; Wrzaczek, Michael; Kangasjärvi, Jaakko; Miller, Gad; Brembu, Tore; Bones, Atle M

    2017-06-15

    Small signalling peptides have emerged as important cell to cell messengers in plant development and stress responses. However, only a few of the predicted peptides have been functionally characterized. Here, we present functional characterization of two members of the IDA-LIKE (IDL) peptide family in Arabidopsis thaliana, IDL6 and IDL7. Localization studies suggest that the peptides require a signal peptide and C-terminal processing to be correctly transported out of the cell. Both IDL6 and IDL7 appear to be unstable transcripts under post-transcriptional regulation. Treatment of plants with synthetic IDL6 and IDL7 peptides resulted in down-regulation of a broad range of stress-responsive genes, including early stress-responsive transcripts, dominated by a large group of ZINC FINGER PROTEIN (ZFP) genes, WRKY genes, and genes encoding calcium-dependent proteins. IDL7 expression was rapidly induced by hydrogen peroxide, and idl7 and idl6 idl7 double mutants displayed reduced cell death upon exposure to extracellular reactive oxygen species (ROS). Co-treatment of the bacterial elicitor flg22 with IDL7 peptide attenuated the rapid ROS burst induced by treatment with flg22 alone. Taken together, our results suggest that IDL7, and possibly IDL6, act as negative modulators of stress-induced ROS signalling in Arabidopsis. © The Author 2017. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  9. Impact of idling on fuel consumption and exhaust emissions and available idle-reduction technologies for diesel vehicles – A review

    International Nuclear Information System (INIS)

    Rahman, S.M. Ashrafur; Masjuki, H.H.; Kalam, M.A.; Abedin, M.J.; Sanjid, A.; Sajjad, H.

    2013-01-01

    Highlights: • In this paper we reviewed the impact of diesel vehicle idling on fuel consumption and exhaust emissions. • Fuel consumption and emissions during idling are very high compared to the driving cycle. • The effects of various operating conditions on fuel consumption and exhaust emissions were discussed. • The impact of available idle-reduction technologies on idling fuel consumption and emissions was discussed. • Idling reduction technologies reduce fuel consumption and emissions significantly. - Abstract: In order to maintain cab comfort, truck drivers have to idle their engine to obtain the required power for accessories, such as the air conditioner, heater, television, refrigerator, and lights. This idling of the engine has a major impact on its fuel consumption and exhaust emissions. Idling emissions can be as high as 86.4 g/h, 16,500 g/h, 5130 g/h, 4 g/h, and 375 g/h for HC, CO2, CO, PM, and NOx, respectively. The idling fuel consumption rate can be as high as 1.85 gal/h. The accessory loading, truck model, fuel-injection system, ambient temperature, idling speed, etc., also significantly affect the emission levels and fuel consumption rate. An increase in accessory loading and ambient temperature increases the emissions and fuel consumption. During idling, electronic fuel-injection systems reduce HC, PM, and CO emissions, but increase NOx emissions compared with a mechanical fuel-injection system. An increase of idling speed increases the fuel consumption rate. There are many systems available on the market to reduce engine idling and improve air quality and fuel consumption rate, such as an auxiliary power unit (APU), truck stop electrification, thermal storage systems, fuel cells, and direct fire heaters. A direct fire heater reduces fuel consumption by 94–96% and an APU reduces consumption by 60–87%. Furthermore, these technologies improve air quality significantly by reducing idling emissions, which is the reason why they are considered as key alternatives to
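
    As a rough, purely illustrative check on the scale of these figures, the short calculation below combines the 1.85 gal/h upper-bound idle rate quoted above with an assumed 1,800 idle hours per year (a figure that appears elsewhere in this listing for long-haul trucks) and the mid-points of the quoted savings ranges; none of these numbers come from a single measured vehicle.

      # Back-of-the-envelope illustration only; the inputs are assumptions noted above.
      idle_hours_per_year = 1800          # assumed annual idle hours (long-haul figure)
      idle_rate_gal_per_hr = 1.85         # upper-bound idle fuel rate quoted in the review

      baseline_gal = idle_hours_per_year * idle_rate_gal_per_hr   # roughly 3,330 gal/yr

      # Mid-points of the 94-96% (direct fire heater) and 60-87% (APU) ranges above.
      for name, reduction in [("direct fire heater", 0.95), ("APU", 0.735)]:
          saved = baseline_gal * reduction
          print(f"{name}: roughly {saved:,.0f} of {baseline_gal:,.0f} gal/yr saved")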

  10. xdamp Version 4: An IDL Based Data and Image Manipulation Program

    International Nuclear Information System (INIS)

    William P. Ballard

    2002-01-01

    The original DAMP (DAta Manipulation Program) was written by Mark Hedemann of Sandia National Laboratories and used the CA-DISSPLA™ (available from Computer Associates International, Inc., Garden City, NY) graphics package as its engine. It was used to plot, modify, and otherwise manipulate the one-dimensional data waveforms (data vs. time) from a wide variety of accelerators. With the waning of CA-DISSPLA and the increasing popularity of Unix®-based workstations, a replacement was needed. This package uses the IDL® software, available from Research Systems Incorporated, a Xerox company, in Boulder, Colorado, as the engine, and creates a set of widgets to manipulate the data in a manner similar to the original DAMP and earlier versions of xdamp. IDL is currently supported on a wide variety of Unix platforms such as IBM® workstations, Hewlett Packard workstations, SUN® workstations, Microsoft® Windows™ computers, Macintosh® computers and Digital Equipment Corporation VMS® and Alpha® systems. Thus, xdamp is portable across many platforms. We have verified operation, albeit with some minor IDL bugs, on personal computers using Windows 95 and Windows NT; IBM Unix platforms; DEC Alpha and VMS systems; HP 9000/700 series workstations; and Macintosh computers, both regular and PowerPC™ versions. Version 4 is an update that removes some obsolete features and better supports very large arrays and Excel formatted data import.

  11. Idling is Not the Way to Go

    Energy Technology Data Exchange (ETDEWEB)

    None

    2013-06-01

    Researchers estimate that idling from heavy-duty and light-duty vehicles combined wastes about 6 billion gallons of fuel annually. Many states have put restrictions on idling, especially in metropolitan areas. Clearly, idling is not the way to go.

  12. Coronal Magnetism and FORWARD SolarSoft IDL Package

    Science.gov (United States)

    Gibson, S. E.

    2014-12-01

    The FORWARD suite of Solar Soft IDL codes is a community resource for model-data comparison, with a particular emphasis on analyzing coronal magnetic fields. FORWARD may be used both to synthesize a broad range of coronal observables, and to access and compare to existing data. FORWARD works with numerical model datacubes, interfaces with the web-served Predictive Science Inc MAS simulation datacubes and the Solar Soft IDL Potential Field Source Surface (PFSS) package, and also includes several analytic models (more can be added). It connects to the Virtual Solar Observatory and other web-served observations to download data in a format directly comparable to model predictions. It utilizes the CHIANTI database in modeling UV/EUV lines, and links to the CLE polarimetry synthesis code for forbidden coronal lines. FORWARD enables "forward-fitting" of specific observations, and helps to build intuition into how the physical properties of coronal magnetic structures translate to observable properties.

  13. xdamp Version 3: An IDL®-based data and image manipulation program

    International Nuclear Information System (INIS)

    Ballard, W.P.

    1998-05-01

    The original DAMP (DAta Manipulation Program) was written by Mark Hedemann of Sandia National Laboratories and used the CA-DISSPLA™ (available from Computer Associates International, Inc., Garden City, NY) graphics package as its engine. It was used to plot, modify, and otherwise manipulate the one-dimensional data waveforms (data vs. time) from a wide variety of accelerators. With the waning of CA-DISSPLA and the increasing popularity of Unix®-based workstations, a replacement was needed. This package uses the IDL® software, available from Research Systems Incorporated in Boulder, Colorado, as the engine, and creates a set of widgets to manipulate the data in a manner similar to the original DAMP and earlier versions of xdamp. IDL is currently supported on a wide variety of Unix platforms such as IBM® workstations, Hewlett Packard workstations, SUN® workstations, Microsoft® Windows™ computers, Macintosh® computers and Digital Equipment Corporation VMS® and Alpha® systems. Thus, xdamp is portable across many platforms. The author has verified operation, albeit with some minor IDL bugs, on personal computers using Windows 95 and Windows NT; IBM Unix platforms; DEC Alpha and VMS systems; HP 9000/700 series workstations; and Macintosh computers, both regular and PowerPC™ versions. Version 3 adds the capability to manipulate images to the original xdamp capabilities.

  14. Optimization of Steering System of Forklift Vehicle for Idle Performance

    Directory of Open Access Journals (Sweden)

    Yuan Shen

    2015-01-01

    This paper presents an optimal design process for the steering system of a forklift vehicle. An efficient procedure for minimizing engine-induced idle vibration is developed in this study. Reciprocating unbalance and gas pressure torque, the two major sources of engine excitation, are studied. Using field vibration tests and FEM analysis, the cause and characteristics of the steering system's idle vibration are identified. To distribute the characteristic modes according to the optimization strategy, a global sensitivity analysis of the main parameters is also carried out to achieve the optimal combination of the optimization factors. Based on all the analysis above, some structural modifications are presented to control the idle vibration. The effectiveness and rationality of the improvements are also verified through experimental prototype testing. This study also makes it possible to provide a design guideline using CAE (computer aided engineering) analysis for other similar applications.

  15. Interactive data language (IDL) for medical image processing

    International Nuclear Information System (INIS)

    Md Saion Salikin

    2002-01-01

    Interactive Data Language (IDL) is one of many software packages available on the market for medical image processing and analysis. IDL is a complete, structured language that can be used both interactively and to create sophisticated functions, procedures, and applications. It provides suitable processing routines and display methods, including animation, colour table specification with 24-bit capability, 3-D visualization and many graphics operations. The important features of IDL for medical imaging are segmentation, visualization, quantification and pattern recognition. In visualization, IDL allows greater precision and flexibility when visualizing data; for example, IDL eliminates limits on the number of contour levels. In terms of data analysis, IDL is capable of handling complicated functions such as the Fast Fourier Transform (FFT), Hough and Radon transforms, and Legendre polynomials, as well as simple functions such as the histogram. In pattern recognition, a pattern description is defined in points rather than pixels. With this functionality, it is easy to re-use the same pattern on more than one destination device (even if the destinations have varying resolution); in other words, it has the ability to specify values in points. However, there are a few disadvantages of using IDL. Licensing is by dongle key and licences are limited, hence access for potential IDL users is limited. A few examples are shown to demonstrate the capabilities of IDL in carrying out its functions for medical image processing. (Author)

  16. Caterpillar MorElectric DOE Idle Reduction Demonstration Program

    Energy Technology Data Exchange (ETDEWEB)

    John Bernardi

    2007-09-30

    This project, titled 'Demonstration of the New MorElectric™ Technology as an Idle Reduction Solution', is one of four demonstration projects awarded by the US Department of Energy in 2002. The goal of these demonstration and evaluation projects was to gather objective in-use information on the performance of available idle reduction technologies by characterizing the cost; fuel, maintenance, and engine life savings; payback; and user impressions of various systems and techniques. In brief, the Caterpillar Inc. project involved applying electrically driven accessories for cab comfort during engine-off stops and for reducing fuel consumption during on-highway operation. Caterpillar equipped and operated five new trucks with the technology in conjunction with International Truck and Engine Corporation and COX Transfer. The most significant result of the project was a demonstrated average idle reduction of 13.8% for the 5-truck MEI fleet over the control fleet. It should be noted that the control fleet trucks were also equipped with an idle reduction device that would start and stop the main engine automatically in order to maintain cab temperature. The control fleet idle usage would have been reduced by 3858 hours over the 2-year period with the MEI system installed, or approximately 2315 gallons of fuel less (calculations assume a fuel consumption of 0.6 gallons per hour for the 13 liter engine at idle). The fuel saved will be significantly larger for higher-displacement engines without idle reduction equipment such as the engine auto start/stop device used by COX Transfer. It is common for engines to consume 1.0 gallons per hour, which would increase the fuel savings to approximately 1260 gallons per truck per year of typical idling (1800 hours idle/yr).

  17. xdamp Version 4: An IDL Based Data and Image Manipulation Program

    CERN Document Server

    Ballard, William P.

    2002-01-01

    The original DAMP (DAta Manipulation Program) was written by Mark Hedemann of Sandia National Laboratories and used the CA-DISSPLA™ (available from Computer Associates International, Inc., Garden City, NY) graphics package as its engine. It was used to plot, modify, and otherwise manipulate the one-dimensional data waveforms (data vs. time) from a wide variety of accelerators. With the waning of CA-DISSPLA and the increasing popularity of Unix®-based workstations, a replacement was needed. This package uses the IDL® software, available from Research Systems Incorporated, a Xerox company, in Boulder, Colorado, as the engine, and creates a set of widgets to manipulate the data in a manner similar to the original DAMP and earlier versions of xdamp. IDL is currently supported on a wide variety of Unix platforms such as IBM® workstations, Hewlett Packard workstations, SUN® workstations, Microsoft® Windows™ computers, Macinto...

  18. Remote Data Exploration with the Interactive Data Language (IDL)

    Science.gov (United States)

    Galloy, Michael

    2013-01-01

    A difficulty for many NASA researchers is that often the data to analyze is located remotely from the scientist and is too large to transfer for local analysis. Researchers have developed the Data Access Protocol (DAP) for accessing remote data. Presently one can use DAP from within IDL, but the IDL-DAP interface is both limited and cumbersome. A more powerful and user-friendly interface to DAP for IDL has been developed. Users are able to browse remote data sets graphically, select partial data to retrieve, import that data and make customized plots, and have an interactive IDL command line session simultaneous with the remote visualization. All of these IDL-DAP tools are easily and seamlessly usable by any IDL user. IDL and DAP are both widely used in science, but were not easily used together. The IDL DAP bindings were incomplete and had numerous bugs that prevented their serious use. For example, the existing bindings did not read DAP Grid data, which is the organization of nearly all NASA datasets currently served via DAP. This project uniquely provides a fully featured, user-friendly interface to DAP from IDL, both from the command line and a GUI application. The DAP Explorer GUI application makes browsing a dataset more user-friendly, while also providing the capability to run user-defined functions on specified data. Methods for running remote functions on the DAP server were investigated, and a technique for accomplishing this task was decided upon.

  19. Application of Sleeper Cab Thermal Management Technologies to Reduce Idle Climate Control Loads in Long-Haul Trucks

    Energy Technology Data Exchange (ETDEWEB)

    Lustbader, J. A.; Venson, T.; Adelman, S.; Dehart, C.; Yeakel, S.; Castillo, M. S.

    2012-10-01

    Each intercity long-haul truck in the U.S. idles approximately 1,800 hrs per year, primarily for sleeper cab hotel loads. Including workday idling, over 2 billion gallons of fuel are used annually for truck idling. NREL's CoolCab project works closely with industry to design efficient thermal management systems for long-haul trucks that keep the cab comfortable with minimized engine idling and fuel use. The impact of thermal load reduction technologies on idle reduction systems was characterized by conducting thermal soak tests, overall heat transfer tests, and 10-hour rest period A/C tests. Technologies evaluated include advanced insulation packages, a solar reflective film applied to the vehicle's opaque exterior surfaces, a truck featuring both film and insulation, and a battery-powered A/C system. Opportunities were identified to reduce heating and cooling loads for long-haul truck idling by 36% and 34%, respectively, which yielded a 23% reduction in the battery pack capacity of the idle-reduction system. Data were also collected for development and validation of a CoolCalc HVAC truck cab model. CoolCalc is an easy-to-use, simplified, physics-based HVAC load estimation tool that requires no meshing, has flexible geometry, excludes unnecessary detail, and is less time-intensive than more detailed computer-aided engineering modeling approaches.

  20. Framework for Computation Offloading in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Dejan Kovachev

    2012-12-01

    The inherently limited processing power and battery lifetime of mobile phones hinder the possible execution of computationally intensive applications like content-based video analysis or 3D modeling. Offloading computationally intensive application parts from the mobile platform into a remote cloud infrastructure or nearby idle computers addresses this problem. This paper presents our Mobile Augmentation Cloud Services (MACS) middleware, which enables adaptive extension of Android application execution from a mobile client into the cloud. Applications are developed using the standard Android development pattern. The middleware does the heavy lifting of adaptive application partitioning, resource monitoring and computation offloading. These elastic mobile applications can run as usual mobile applications, but they can also use remote computing resources transparently. Two prototype applications using the MACS middleware demonstrate the benefits of the approach. The evaluation shows that applications which involve costly computations can benefit from offloading, with around 95% energy savings and significant performance gains compared to local execution only.
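
    The abstract does not describe the MACS partitioning or cost model in detail; as an illustration only, the sketch below shows the kind of time-and-energy comparison an adaptive offloading middleware can make. Every name and parameter here (Task, should_offload, the cost constants) is hypothetical and not part of the MACS API.

      # Hypothetical offload decision in the spirit of adaptive partitioning middleware.
      from dataclasses import dataclass

      @dataclass
      class Task:
          cycles: float          # estimated CPU cycles for the task
          input_bytes: float     # data shipped to the cloud
          output_bytes: float    # result shipped back

      def should_offload(task, local_hz, cloud_hz, bandwidth_bps,
                         local_joules_per_cycle, radio_joules_per_byte):
          """Offload only if the remote path is faster and cheaper in energy."""
          transfer_s = (task.input_bytes + task.output_bytes) * 8 / bandwidth_bps
          t_local, t_remote = task.cycles / local_hz, task.cycles / cloud_hz + transfer_s
          e_local = task.cycles * local_joules_per_cycle
          e_remote = (task.input_bytes + task.output_bytes) * radio_joules_per_byte
          return t_remote < t_local and e_remote < e_local

      # Example: a compute-heavy vision task with modest input typically offloads.
      task = Task(cycles=5e9, input_bytes=2e5, output_bytes=1e4)
      print(should_offload(task, local_hz=1e9, cloud_hz=1e10, bandwidth_bps=5e6,
                           local_joules_per_cycle=1e-9, radio_joules_per_byte=5e-7))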

  1. Flight Management System Execution of Idle-Thrust Descents in Operations

    Science.gov (United States)

    Stell, Laurel L.

    2011-01-01

    To enable arriving aircraft to fly optimized descents computed by the flight management system (FMS) in congested airspace, ground automation must accurately predict descent trajectories. To support development of the trajectory predictor and its error models, commercial flights executed idle-thrust descents, and the recorded data includes the target speed profile and FMS intent trajectories. The FMS computes the intended descent path assuming idle thrust after top of descent (TOD), and any intervention by the controllers that alters the FMS execution of the descent is recorded so that such flights are discarded from the analysis. The horizontal flight path, cruise and meter fix altitudes, and actual TOD location are extracted from the radar data. Using more than 60 descents in Boeing 777 aircraft, the actual speeds are compared to the intended descent speed profile. In addition, three aspects of the accuracy of the FMS intent trajectory are analyzed: the meter fix crossing time, the TOD location, and the altitude at the meter fix. The actual TOD location is within 5 nmi of the intent location for over 95% of the descents. Roughly 90% of the time, the airspeed is within 0.01 of the target Mach number and within 10 KCAS of the target descent CAS, but the meter fix crossing time is only within 50 sec of the time computed by the FMS. Overall, the aircraft seem to be executing the descents as intended by the designers of the onboard automation.

  2. Bargaining and idle public sector capacity in health care

    OpenAIRE

    Barros, Pedro Pita

    2005-01-01

    A feature present in countries with a National Health Service is the co-existence of a public and a private sector. Often, the public payer contracts with private providers while holding idle capacity. This is often seen as inefficiency from the management of public facilities. We present here a different rationale for the existence of such idle capacity: the public sector may opt to have idle capacity as a way to gain bargaining power vis-à-vis the private provider, under the assumption of a...

  3. Bargaining and idle public sector capacity in health care

    OpenAIRE

    Xavier Martinez-Giralt; Barros Pedro Pita

    2005-01-01

    A feature present in countries with a National Health Service is the co-existence of a public and a private sector. Often, the public payer contracts with private providers while holding idle capacity. This is often seen as inefficiency from the management of public facilities. We present here a different rationale for the existence of such idle capacity: the public sector may opt to have idle capacity as a way to gain bargaining power vis-à-vis the private provider, under the assumption of ...

  4. Internet Connection Control based on Idle Time Using User Behavior Pattern Analysis

    Directory of Open Access Journals (Sweden)

    Fadilah Fahrul Hardiansyah

    2014-12-01

    The increasing capability of smartphones is rapidly increasing their power consumption. Many methods have been proposed to reduce smartphone power consumption. Most of these methods control the internet connection based on the available battery power level, regardless of when and where energy is being wasted. This paper proposes a new approach to control the internet connection based on idle time, using user behavior pattern analysis. User behavior patterns are used to predict the idle time duration, and internet connection control is performed during idle time. During idle time the internet connection is periodically switched on and off at a certain time interval. This method effectively reduces wasted energy. Control of the internet connection does not disturb the user because it is applied during idle time. Keywords: Smartphone, User Behavior, Pattern Recognition, Idle Time, Internet Connection Control
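
    The paper's prediction model and the platform-level mechanism for toggling the connection are not given in the abstract; the sketch below only illustrates the described idea of duty-cycling the connection during a predicted idle period. The IdlePredictor class, the interval values and toggle_radio are all invented for illustration.

      # Illustrative duty-cycling of the internet connection during predicted idle time.
      from collections import defaultdict

      class IdlePredictor:
          """Predicts the idle-period length per hour of day from past observations."""
          def __init__(self):
              self.history = defaultdict(list)          # hour -> observed idle minutes

          def record(self, hour, idle_minutes):
              self.history[hour].append(idle_minutes)

          def predict(self, hour):
              obs = self.history[hour]
              return sum(obs) / len(obs) if obs else 0.0

      def duty_cycle_connection(predicted_idle_min, on_min=1, off_min=9,
                                toggle_radio=lambda on: None):
          """Keep the radio mostly off while idle, waking briefly for background sync."""
          remaining = predicted_idle_min
          while remaining > 0:
              toggle_radio(False)
              remaining -= min(off_min, remaining)
              if remaining <= 0:
                  break
              toggle_radio(True)
              remaining -= min(on_min, remaining)
          toggle_radio(True)                            # restore connectivity afterwards

      # Usage: predict tonight's idle period from past behavior, then duty-cycle.
      predictor = IdlePredictor()
      predictor.record(hour=23, idle_minutes=420)
      duty_cycle_connection(predictor.predict(hour=23))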

  5. Idle emissions from medium heavy-duty diesel and gasoline trucks.

    Science.gov (United States)

    Khan, A B M S; Clark, Nigel N; Gautam, Mridul; Wayne, W Scott; Thompson, Gregory J; Lyons, Donald W

    2009-03-01

    Idle emissions data from 19 medium heavy-duty diesel and gasoline trucks are presented in this paper. Emissions from these trucks were characterized using full-flow exhaust dilution as part of the Coordinating Research Council (CRC) Project E-55/59. Idle emissions data were not available from dedicated measurements, but were extracted from the continuous emissions data on the low-speed transient mode of the medium heavy-duty truck (MHDTLO) cycle. The four gasoline trucks produced very low oxides of nitrogen (NOx) and negligible particulate matter (PM) during idle. However, carbon monoxide (CO) and hydrocarbons (HCs) from these four trucks were approximately 285 and 153 g/hr on average, respectively. The gasoline trucks consumed substantially more fuel at an hourly rate (0.84 gal/hr) than their diesel counterparts (0.44 gal/hr) during idling. The diesel trucks, on the other hand, emitted higher NOx (79 g/hr) and comparatively higher PM (4.1 g/hr), on average, than the gasoline trucks (3.8 g/hr of NOx and 0.9 g/hr of PM, on average). Idle NOx emissions from diesel trucks were high for post-1992 model year engines, but no trends were observed for fuel consumption. Idle emissions and fuel consumption from the medium heavy-duty diesel trucks (MHDDTs) were marginally lower than those from the heavy heavy-duty diesel trucks (HHDDTs), previously reported in the literature.

  6. Idleness, returns to education and child labor

    Directory of Open Access Journals (Sweden)

    José Raimundo Carvalho

    2012-12-01

    Although recent trends in child labor are positive (see ILO, 2006), there are still important shortcomings which require further investigation. Among them, the exclusion of the category "idle children" (those who neither work nor study) from past studies, as well as the lack of reliable information on returns to education, are two significant omissions. By using a database that contains details on idle children and a proxy for the returns to education, we find evidence that confirms traditional findings both with regard to the strong positive effect of parental background and to the positive relationship between the number of children in the household and child labor. On the other hand, our estimates point out new insights, such as the great regional variation of the estimates and the fact that the Body Mass Index effect is positive. Finally, we suggest a new perspective on the issue of "street children" through the analysis of the category of "idle children".

  7. Summary of OEM Idling Recommendations from Vehicle Owner's Manuals

    Energy Technology Data Exchange (ETDEWEB)

    Keel-Blackmon, Kristy [East Tennessee Clean Fuels Coalition (ETCleanFuels), Knoxville, TN (United States); Curran, Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lapsa, Melissa Voss [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-01-01

    The project upon which this report is based was conceived in 2012 during discussions between the East Tennessee Clean Fuels Coalition (ETCleanFuels) and Oak Ridge National Laboratory (ORNL), both of which noted that a detailed summary of idling recommendations for a wide variety of engines and vehicles was not available in the literature. The two organizations agreed that ETCleanFuels would develop a first-of-its-kind collection of idling recommendations from the owner’s manuals of modern production vehicles. Vehicle engine idling, a subject that has long been debated, is largely shrouded in misinformation. The justifications for idling seem to be many: driver comfort, waiting in lines, and talking on cell phones, to name a few. Assuredly, a great number of people idle because of the myths and misinformation surrounding this issue. This report addresses these myths by turning to statements taken directly from the automobile and engine manufacturers themselves.

  8. Idling operation apparatus for multicylinder fuel injection engine

    Energy Technology Data Exchange (ETDEWEB)

    Kanahira, A

    1974-11-20

    A device to cut off the fuel supply to a number of cylinders at idling is described for those engines equipped with multicylinder fuel injection systems. The discontinuation of the fuel gas supply to the cylinders is made by a magnetically operated valve which is related to the accelerator. When the engine is idling, a switch activates the magnetic valve and the tube leading to the cylinder closes while a valve on the tube leading to a dual tank opens, and the pumped gas returns to the tank. This valve is installed on several cylinders, but not on all. Thus, at idling only a certain number of cylinders are firing, which lowers the hydrocarbon levels in the exhaust gas since non-firing cylinders intake and discharge only air.

  9. Costly myths. An analysis of idling beliefs and behavior in personal motor vehicles

    International Nuclear Information System (INIS)

    Carrico, Amanda R.; Padgett, Paul; Vandenbergh, Michael P.; Gilligan, Jonathan; Wallston, Kenneth A.

    2009-01-01

    Despite the large contribution of individuals and households to climate change, little has been done in the US to reduce the CO2 emissions attributable to this sector. Motor vehicle idling among individual private citizens is one behavior that may be amenable to large-scale policy interventions. Currently, little data are available to quantify the potential reductions in emissions that could be realized by successful policy interventions. In addition, little is known about the motivations and beliefs that underlie idling. In the fall of 2007, 1300 drivers in the US were surveyed to assess typical idling practices, beliefs and motivations. Results indicate that the average individual idled for over 16 min a day and believed that a vehicle can be idled for at least 3.6 min before it is better to turn it off. Those who held inaccurate beliefs idled, on average, over 1 min longer than the remainder of the sample. These data suggest that idling accounts for over 93 MMt of CO2 and 10.6 billion gallons (40.1 billion liters) of gasoline a year, equaling 1.6% of all US emissions. Much of this idling is unnecessary and economically disadvantageous to drivers. The policy implications of these findings are discussed. (author)

  10. Experimental and numerical investigation of idling car exposure

    Energy Technology Data Exchange (ETDEWEB)

    McNabola, A; Broderick, B M; Gill, L W [Trinity College, Dublin (Ireland). Dept. of Civil, Structural, and Environmental Engineering

    2006-07-01

    This study examined the effect of maintaining a 2 metre distance between vehicles on commuter pollution exposure levels. Air quality samples were recorded inside cars on a busy road in Dublin. A turbulent dispersion model was used to predict the exposure levels from idling cars. Samples were recorded along the route by keeping a distance of approximately 2 metres by sight to the car in front, and then a second time keeping a distance of approximately 1 metre. Traffic numbers were recorded during each sample from local authority loops. Meteorological and idling time data were also recorded for a total of 10 pairs of samples. Experiments were then conducted to measure volatile organic compounds (VOCs) and particulate matter (PM2.5). A calibrated computational fluid dynamics (CFD) model was then used to predict car exposure levels under varying conditions. Key parameters included ventilation rates, wind speed, and distance. The calibrated numerical model demonstrated that the pollution concentration decreased rapidly within the first 2 metres of the preceding exhaust. Maintaining a distance of 2 metres to the preceding vehicle showed a reduction in VOCs and particulate matter of approximately 30 to 40 per cent. It was concluded that further research is needed to determine if modified driving behaviours will promote higher levels of traffic congestion. 11 refs., 6 tabs., 11 figs.

  11. Extending the Operational Envelope of a Turbofan Engine Simulation into the Sub-Idle Region

    Science.gov (United States)

    Chapman, Jeffryes Walter; Hamley, Andrew J.; Guo, Ten-Huei; Litt, Jonathan S.

    2016-01-01

    In many non-linear gas turbine simulations, operation in the sub-idle region can lead to model instability. This paper lays out a method for extending the operational envelope of a map based gas turbine simulation to include the sub-idle region. This method develops a multi-simulation solution where the baseline component maps are extrapolated below the idle level and an alternate model is developed to serve as a safety net when the baseline model becomes unstable or unreliable. Sub-idle model development takes place in two distinct operational areas, windmilling/shutdown and purge/cranking/startup. These models are based on derived steady state operating points with transient values extrapolated between initial (known) and final (assumed) states. Model transitioning logic is developed to predict baseline model sub-idle instability, and transition smoothly and stably to the backup sub-idle model. Results from the simulation show a realistic approximation of sub-idle behavior as compared to generic sub-idle engine performance that allows the engine to operate continuously and stably from shutdown to full power.
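
    The paper's actual sub-idle model construction and transition logic are more involved than the abstract can convey; the toy sketch below only illustrates the multi-simulation idea of handing off from a baseline map-based model to a simpler sub-idle backup model, with the threshold, blending band and placeholder models all invented for illustration.

      # Toy illustration of switching/blending between a baseline model (valid above
      # idle) and a sub-idle backup model, so the output stays continuous and stable.
      def baseline_model(n_shaft):
          """Placeholder map-based thrust estimate, valid above roughly idle speed."""
          return 100.0 * (n_shaft / 100.0) ** 2.5

      def subidle_model(n_shaft):
          """Placeholder extrapolated sub-idle (windmill/startup) thrust estimate."""
          return 2.0 * n_shaft / 100.0

      def thrust(n_shaft, idle_speed=60.0, blend_band=10.0):
          """Linearly blend the two models across a band below the idle threshold."""
          if n_shaft >= idle_speed:
              return baseline_model(n_shaft)
          if n_shaft <= idle_speed - blend_band:
              return subidle_model(n_shaft)
          w = (n_shaft - (idle_speed - blend_band)) / blend_band   # 0 = sub-idle, 1 = baseline
          return w * baseline_model(n_shaft) + (1 - w) * subidle_model(n_shaft)

      # Sweep from shutdown to full power; the estimate remains continuous through idle.
      for n in (0, 20, 50, 55, 60, 80, 100):
          print(n, round(thrust(n), 2))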

  12. Novel 3D Approach to Flare Modeling via Interactive IDL Widget Tools

    Science.gov (United States)

    Nita, G. M.; Fleishman, G. D.; Gary, D. E.; Kuznetsov, A.; Kontar, E. P.

    2011-12-01

    Currently available, and soon-to-be available, sophisticated 3D models of particle acceleration and transport in solar flares require a new level of user-friendly visualization and analysis tools allowing quick and easy adjustment of the model parameters and computation of realistic radiation patterns (images, spectra, polarization, etc.). We report the current state of the art of these tools in development, which have already proved to be highly efficient for direct flare modeling. We present an interactive IDL widget application intended to provide a flexible tool that allows the user to generate spatially resolved radio and X-ray spectra. The object-based architecture of this application provides full interaction with imported 3D magnetic field models (e.g., from an extrapolation) that may be embedded in a global coronal model. The various tools provided allow users to explore the magnetic connectivity of the model by generating magnetic field lines originating at user-specified volume positions. Such lines may serve as reference lines for creating magnetic flux tubes, which are further populated with user-defined analytical thermal/non-thermal particle distribution models. By default, the application integrates IDL-callable DLLs and shared libraries containing fast GS emission codes developed in FORTRAN and C++ and soft and hard X-ray codes developed in IDL. However, the interactive interface allows interchanging these default libraries with any user-defined IDL or external callable codes designed to solve the radiation transfer equation in the same or other wavelength ranges of interest. To illustrate the tool's capacity and generality, we present a step-by-step real-time computation of microwave and X-ray images from realistic magnetic structures obtained from a magnetic field extrapolation preceding a real event, and compare them with the actual imaging data obtained by the NORH and RHESSI instruments. We discuss further anticipated developments of the tools needed to accommodate

  13. Analysis of Technology Options to Reduce the Fuel Consumption of Idling Trucks; FINAL

    International Nuclear Information System (INIS)

    Stodolsky, F.; Gaines, L.; Vyas, A.

    2000-01-01

    Long-haul trucks idling overnight consume more than 838 million gallons (20 million barrels) of fuel annually. Idling also emits pollutants. Truck drivers idle their engines primarily to (1) heat or cool the cab and/or sleeper, (2) keep the fuel warm in winter, and (3) keep the engine warm in the winter so that the engine is easier to start. Alternatives to overnight idling could save much of this fuel, reduce emissions, and cut operating costs. Several fuel-efficient alternatives to idling are available to provide heating and cooling: (1) direct-fired heater for cab/sleeper heating, with or without storage cooling; (2) auxiliary power units; and (3) truck stop electrification. Many of these technologies have drawbacks that limit market acceptance. Options that supply electricity are economically viable for trucks that are idled for 1,000-3,000 or more hours a year, while heater units could be used across the board. Payback times for fleets, which would receive quantity discounts on the prices, would be somewhat shorter

  14. Case Study – Idling Reduction Technologies for Emergency Service Vehicles

    Energy Technology Data Exchange (ETDEWEB)

    Laughlin, Michael [Argonne National Lab. (ANL), Argonne, IL (United States); Owens, Russell J. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-01

    This case study explores the use of idle reduction technologies (IRTs) on emergency service vehicles in police, fire, and ambulance applications. Various commercially available IRT systems and approaches can decrease, or ultimately eliminate, engine idling. Fleets will thus save money on fuel, and will also decrease their criteria pollutant emissions, greenhouse gas emissions, and noise.

  15. Resource Management in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Andrei IONESCU

    2015-01-01

    Mobile cloud computing is a major research topic in Information Technology & Communications. It integrates cloud computing, mobile computing and wireless networks. While mainly built on cloud computing, it has to operate using more heterogeneous resources, with implications for how these resources are managed and used. Managing the resources of a mobile cloud is not a trivial task, involving vastly different architectures, and the process is outside the scope of human users. Using these resources from applications at both the platform and software tiers comes with its own challenges. This paper presents different approaches in use for managing cloud resources at the infrastructure and platform levels.

  16. A Heuristic Scheduling Algorithm for Minimizing Makespan and Idle Time in a Nagare Cell

    Directory of Open Access Journals (Sweden)

    M. Muthukumaran

    2012-01-01

    Adopting a focused factory is a powerful approach for today's manufacturing enterprise. This paper introduces the basic manufacturing concept for a struggling manufacturer with limited conventional resources, providing an alternative solution to cell scheduling by implementing the technique of the Nagare cell. The Nagare cell is a Japanese concept with more objectives than a cellular manufacturing system. It is a combination of manual and semiautomatic machine layouts as cells, which gives maximum output flexibility for all kinds of low-to-medium- and medium-to-high-volume production. The solution adopted is to create a dedicated group of conventional machines, all but one of which are already available on the shop floor. This paper focuses on the step-by-step development of a heuristic scheduling algorithm. The algorithm first calculates the total processing time of all products on each machine and then sorts these totals by the shortest processing time rule to obtain the assignment schedule. Based on the assignment schedule, the Nagare cell layout is arranged for processing the products. In addition, the algorithm provides steps to determine the product ready time, machine idle time, and product idle time. The Gantt chart, the experimental analysis, and the comparative results are illustrated with five (1×8 to 5×8) scheduling problems. Finally, the objective of minimizing makespan and idle time with greater customer satisfaction is studied.
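
    Only the outline of the heuristic is given above, so the Python sketch below is a hedged reading of it: machines are ordered by their total processing time (shortest first), products flow through that sequence, and the makespan, machine idle time and product idle (waiting) time are reported. Tie-breaking and the precise Nagare cell layout rules are assumptions, not the authors' exact procedure.

      # Hedged sketch of the summarized heuristic (SPT-ordered machines, flow-line pass).
      def nagare_schedule(p):
          """p[j][m] = processing time of product j on machine m."""
          n_jobs, n_machines = len(p), len(p[0])
          # Step 1: total processing time per machine, sorted by the SPT rule.
          order = sorted(range(n_machines),
                         key=lambda m: sum(p[j][m] for j in range(n_jobs)))
          # Step 2: completion times of each product through the ordered machines.
          C = [[0.0] * n_machines for _ in range(n_jobs)]
          for j in range(n_jobs):
              for k, m in enumerate(order):
                  machine_free = C[j - 1][k] if j else 0.0
                  product_ready = C[j][k - 1] if k else 0.0
                  C[j][k] = max(machine_free, product_ready) + p[j][m]
          makespan = C[-1][-1]
          machine_idle = [C[-1][k] - sum(p[j][order[k]] for j in range(n_jobs))
                          for k in range(n_machines)]
          product_idle = [C[j][-1] - sum(p[j][m] for m in order) for j in range(n_jobs)]
          return order, makespan, machine_idle, product_idle

      # Example with illustrative times: 3 products x 4 machines.
      p = [[4, 2, 6, 3], [3, 5, 2, 4], [5, 1, 3, 2]]
      print(nagare_schedule(p))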

  17. Characterization of PTO and Idle Behavior for Utility Vehicles

    Energy Technology Data Exchange (ETDEWEB)

    Duran, Adam W. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Konan, Arnaud M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Miller, Eric S. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Kelly, Kenneth J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Prohaska, Robert S. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-09-28

    This report presents the results of analyses performed on utility vehicle data composed primarily of aerial lift bucket trucks sampled from the National Renewable Energy Laboratory's Fleet DNA database to characterize power takeoff (PTO) and idle operating behavior for utility trucks. Two major data sources were examined in this study: a 75-vehicle sample of Odyne electric PTO (ePTO)-equipped vehicles drawn from multiple fleets spread across the United States and 10 conventional PTO-equipped Pacific Gas and Electric fleet vehicles operating in California. Novel data mining approaches were developed to identify PTO and idle operating states for each of the datasets using telematics and controller area network/onboard diagnostics data channels. These methods were applied to the individual datasets and aggregated to develop utilization curves and distributions describing PTO and idle behavior in both absolute and relative operating terms. This report also includes background information on the source vehicles, development of the analysis methodology, and conclusions regarding the study's findings.

  18. A study experiment of auto idle application in the excavator engine performance

    Energy Technology Data Exchange (ETDEWEB)

    Purwanto, Wawan, E-mail: wawan5527@gmail.com; Maksum, Hasan; Putra, Dwi Sudarno, E-mail: dwisudarnoputra@ft.unp.ac.id; Wahyudi, Retno [State University of Padang, West Sumatera (Indonesia); Azmi, Meri, E-mail: meriazmi@gmail.com [State Polytechnic of Padang, West Sumatera (Indonesia)

    2016-03-29

    The purpose of this study was to analyze the effect of applying auto idle on excavator engine performance, in terms of machine utilization and fuel consumption. The steps taken were to modify systems JA 44 and 67 in the Vehicle Electronic Control Unit (V-ECU). The effect of the modifications is observed in the resulting engine speed pattern: if the excavator attachment is not operated, the engine speed returns to the idle speed automatically. The experimental results show that auto idle reduces fuel consumption in the excavator engine.

  19. The downside of downtime: The prevalence and work pacing consequences of idle time at work.

    Science.gov (United States)

    Brodsky, Andrew; Amabile, Teresa M

    2018-05-01

    Although both media commentary and academic research have focused much attention on the dilemma of employees being too busy, this paper presents evidence of the opposite phenomenon, in which employees do not have enough work to fill their time and are left with hours of meaningless idle time each week. We conducted six studies that examine the prevalence and work pacing consequences of involuntary idle time. In a nationally representative cross-occupational survey (Study 1), we found that idle time occurs frequently across all occupational categories; we estimate that employers in the United States pay roughly $100 billion in wages for time that employees spend idle. Studies 2a-3b experimentally demonstrate that there are also collateral consequences of idle time; when workers expect idle time following a task, their work pace declines and their task completion time increases. This decline reverses the well-documented deadline effect, producing a deadtime effect, whereby workers slow down as a task progresses. Our analyses of work pace patterns provide evidence for a time discounting mechanism: workers discount idle time when it is relatively distant, but act to avoid it increasingly as it becomes more proximate. Finally, Study 4 demonstrates that the expectation of being able to engage in leisure activities during posttask free time (e.g., surfing the Internet) can mitigate the collateral work pace losses due to idle time. Through examination and discussion of the effects of idle time at work, we broaden theory on work pacing. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  20. Statistics Online Computational Resource for Education

    Science.gov (United States)

    Dinov, Ivo D.; Christou, Nicolas

    2009-01-01

    The Statistics Online Computational Resource (http://www.SOCR.ucla.edu) provides one of the largest collections of free Internet-based resources for probability and statistics education. SOCR develops, validates and disseminates two core types of materials--instructional resources and computational libraries. (Contains 2 figures.)

  1. Delivering LHC software to HPC compute elements

    CERN Document Server

    Blomer, Jakob; Hardi, Nikola; Popescu, Radu

    2017-01-01

    In recent years, there has been growing interest in improving the utilization of supercomputers by running applications of experiments at the Large Hadron Collider (LHC) at CERN when idle cores cannot be assigned to traditional HPC jobs. At the same time, the upcoming LHC machine and detector upgrades will produce some 60 times higher data rates and challenge LHC experiments to use so-far untapped compute resources. LHC experiment applications are tailored to run on high-throughput computing resources and they have a different anatomy than HPC applications. LHC applications comprise a core framework that allows hundreds of researchers to plug in their specific algorithms. The software stacks easily accumulate to many gigabytes for a single release. New releases are often produced on a daily basis. To facilitate the distribution of these software stacks to world-wide distributed computing resources, LHC experiments use a purpose-built, global, POSIX file system, the CernVM File System. CernVM-FS pre-processes dat...

  2. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    International Nuclear Information System (INIS)

    Evans, D; Fisk, I; Holzman, B; Pordes, R; Tiradani, A; Melo, A; Sheldon, P; Metson, S

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost-structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly 'on-demand', as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a University, and conclude that it is most cost effective to purchase dedicated resources for the 'base-line' needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage are required.

  3. Effect of Alcohol on Diesel Engine Combustion Operating with Biodiesel-Diesel Blend at Idling Conditions

    Science.gov (United States)

    Mahmudul, H. M.; Hagos, Ftwi. Y.; A, M. Mukhtar N.; Mamat, Rizalman; Abdullah, A. Adam

    2018-03-01

    Biodiesel is a promising alternative fuel for automotive engines. However, its blends have not been properly investigated under idling, which is a major operating condition for vehicles in large cities. The purpose of this study is to evaluate the impact of alcohol additives such as butanol and ethanol on combustion parameters under idling conditions when a single-cylinder diesel engine operates with diesel, diesel-biodiesel blends, and diesel-biodiesel-alcohol blends. The combustion parameters peak pressure, heat release rate, and ignition delay were computed. The investigation revealed that, among the alcohol blends with diesel and biodiesel, the BU20 blend yields a higher maximum peak cylinder pressure than diesel. The B5 blend showed the lowest energy release of all the fuels, and B20 was slightly lower than diesel. The BU20 blend showed the highest peak energy release, while the E20 blend was more advanced than diesel. The blends containing an alcohol component showed shorter ignition delays. The B5 and B20 blends were influenced by the biodiesel content, and their burning fractions were slightly slower than conventional diesel, whereas the BU20 and E20 blends were slightly faster than diesel. Based on these results, butanol and ethanol blends can be promising alternatives at idling conditions and can be used without any engine modifications.
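
    Combustion parameters of this kind are typically derived from in-cylinder pressure data. The sketch below shows the standard single-zone apparent heat release rate, dQ/dθ = γ/(γ−1)·p·dV/dθ + 1/(γ−1)·V·dp/dθ, applied to hypothetical pressure and volume traces; the traces, the γ value, and the crank-angle grid are assumptions, not data from this study.

    ```python
    # Single-zone apparent heat release rate from a cylinder pressure trace.
    # dQ/dtheta = gamma/(gamma-1) * p * dV/dtheta + 1/(gamma-1) * V * dp/dtheta
    # All input arrays here are placeholders; real data would come from an
    # engine indicating system.

    import numpy as np

    def heat_release_rate(theta_deg, pressure_pa, volume_m3, gamma=1.35):
        """Apparent heat release rate (J/deg) versus crank angle."""
        theta = np.asarray(theta_deg, dtype=float)
        p = np.asarray(pressure_pa, dtype=float)
        v = np.asarray(volume_m3, dtype=float)
        dv = np.gradient(v, theta)   # dV/dtheta
        dp = np.gradient(p, theta)   # dp/dtheta
        return gamma / (gamma - 1.0) * p * dv + 1.0 / (gamma - 1.0) * v * dp

    theta = np.linspace(-30, 60, 181)                        # crank angle (deg)
    volume = 5e-5 + 4.5e-4 * (1 - np.cos(np.radians(theta))) / 2
    pressure = 2e6 + 4e6 * np.exp(-((theta - 8) / 12) ** 2)  # toy pressure curve
    hrr = heat_release_rate(theta, pressure, volume)
    peak = int(np.argmax(hrr))
    print(f"peak heat release {hrr[peak]:.1f} J/deg at {theta[peak]:.1f} deg")
    ```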

  4. Easy web interfaces to IDL code for NSTX Data Analysis

    International Nuclear Information System (INIS)

    Davis, W.M.

    2012-01-01

    Highlights: ► Web interfaces to IDL code can be developed quickly. ► Dozens of Web Tools are used effectively on NSTX for Data Analysis. ► Web interfaces are easier to use than X-window applications. - Abstract: Reusing code is a well-known Software Engineering practice to substantially increase the efficiency of code production, as well as to reduce errors and debugging time. A variety of “Web Tools” for the analysis and display of raw and analyzed physics data are in use on NSTX [1], and new ones can be produced quickly from existing IDL [2] code. A Web Tool with only a few inputs, and which calls an IDL routine written in the proper style, can be created in less than an hour; more typical Web Tools with dozens of inputs, and the need for some adaptation of existing IDL code, can be working in a day or so. Efficiency is also increased for users of Web Tools because of the familiar interface of the web browser, and not needing X-windows, or accounts and passwords, when used within our firewall. Web Tools were adapted for use by PPPL physicists accessing EAST data stored in MDSplus with only a few man-weeks of effort; adapting to additional sites should now be even easier. An overview of Web Tools in use on NSTX, and a list of the most useful features, is also presented.

  5. A community-based participatory research partnership to reduce vehicle idling near public schools.

    Science.gov (United States)

    Eghbalnia, Cynthia; Sharkey, Ken; Garland-Porter, Denisha; Alam, Mohammad; Crumpton, Marilyn; Jones, Camille; Ryan, Patrick H

    2013-05-01

    The authors implemented and assessed the effectiveness of a public health initiative aimed at reducing traffic-related air pollution exposure of the school community at four Cincinnati public schools. A partnership was fostered with academic environmental health researchers and community members. Anti-idling campaign materials were developed and education and training were provided to school bus drivers, students, parents, and school staff. Pledge drives and pre- and posteducation assessments were documented to measure the effectiveness of the program. After completing the educational component of the public health initiative, bus drivers (n = 397), community members (n = 53), and staff (n = 214) demonstrated significantly increased knowledge about the health effects of idling following the public health intervention. A community-driven public health initiative can be effective in both 1) enhancing community awareness about the benefits of reducing idling vehicles and 2) increasing active participation in idling reduction. The partnership initially developed has continued to develop toward a sustainable and growing process.

  6. Enabling opportunistic resources for CMS Computing Operations

    Energy Technology Data Exchange (ETDEWEB)

    Hufnagel, Dick [Fermilab

    2015-11-19

    With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize “opportunistic” resources (resources not owned by, or a priori configured for, CMS) to meet peak demands. In addition to our dedicated resources we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and parrot wrappers are used to enable access and bring the CMS environment into these non-CMS resources. Here we describe our strategy to supplement our native capabilities with opportunistic resources and our experience so far using them.

  7. A review on idling reduction strategies to improve fuel economy and reduce exhaust emissions of transport vehicles

    International Nuclear Information System (INIS)

    Shancita, I.; Masjuki, H.H.; Kalam, M.A.; Rizwanul Fattah, I.M.; Rashed, M.M.; Rashedul, H.K.

    2014-01-01

    Highlights: • Introduce various idling reduction technologies for transport vehicles. • Exhibit their energy use, advantages, disadvantages to understand their capability. • Conduct critical review to improve fuel economy and exhaust emissions. • Suggest better technology according to their performance ability. - Abstract: To achieve reductions in vehicle idling, strategies and actions must be taken to minimize the time spent by drivers idling their engines. A number of benefits can be obtained by limiting the idling time. These benefits include savings in fuel use and maintenance costs, vehicle life extension, and reduction in exhaust emissions. The main objective of idling reduction (IR) devices is to reduce the amount of energy wasted by idling trucks, rail locomotives, and automobiles. During idling, gasoline vehicles emit a minimum amount of nitrogen oxides (NOx) and negligible particulate matter (PM). However, a large amount of carbon monoxide (CO) and hydrocarbons (HC) is generally produced by these vehicles. Gasoline vehicles consume far more fuel at an hourly rate than their diesel counterparts during idling. On average, diesel vehicles produce higher NOx and comparatively more PM than gasoline vehicles during idling. Auxiliary power units (APU), direct-fired heaters, fuel cells, thermal storage systems, truck stop electrification, battery-based systems, engine idle management (shutdown) systems, electrical (shore power) solutions, cab comfort systems, and hybridization are some of the available IR technologies whose performances for reducing fuel consumption and exhaust emissions have been compared. This paper analyzes the availability and capability of the most efficient technologies to reduce fuel consumption and exhaust emissions from diesel and gasoline vehicles by comparing the findings of previous studies. The analysis reveals that, among all the options, direct-fired heaters, APUs and electrified parking spaces exhibit better

  8. Efficient Resource Management in Cloud Computing

    OpenAIRE

    Rushikesh Shingade; Amit Patil; Shivam Suryawanshi; M. Venkatesan

    2015-01-01

    Cloud computing is one of the most widely used technologies for providing cloud services to users, who are charged for the services they receive. Given the very large number of resources involved, the performance of Cloud resource management policies is difficult to evaluate and optimize efficiently. Different simulation toolkits are available for simulating and modelling the Cloud computing environment, such as GridSim, CloudAnalyst, CloudSim, GreenCloud, CloudAuction, etc. In the proposed Efficient Resource Manage...

  9. Green Cloud Computing: An Experimental Validation

    International Nuclear Information System (INIS)

    Monteiro, Rogerio Castellar; Dantas, M A R; Rodriguez y Rodriguez, Martius Vicente

    2014-01-01

    Cloud configurations can be computational environments with interesting cost efficiency for organizations of several sizes. However, indiscriminately buying servers and network devices may not translate into a corresponding gain in performance. The academic and commercial literature highlights that these environments are idle for long periods. Energy management is therefore an essential concern for any organization, because energy bills can cause remarkable negative cost impacts. In this paper, we present a research work characterized by an analysis of energy consumption in a private cloud computing environment, considering both computational resources and network devices. The study was motivated by a real case in a large organization. In the first part of the study we performed empirical experiments; in a second stage we used the GreenCloud simulator to foresee different configurations. The research reached a successful and differentiated goal in presenting key issues, related to energy consumption, from computational resources and the network in a real private cloud

  10. The downside of downtime: The prevalence and work pacing consequences of idle time at work

    OpenAIRE

    Brodsky, Andrew; Amabile, Teresa M.

    2018-01-01

    Although both media commentary and academic research have focused much attention on the dilemma of employees being too busy, this paper presents evidence of the opposite phenomenon, in which employees do not have enough work to fill their time and are left with hours of meaningless idle time each week. We conducted six studies that examine the prevalence and work pacing consequences of involuntary idle time. In a nationally representative cross-occupational survey (Study 1), we found that idl...

  11. Characterization of high level nuclear waste glass samples following extended melter idling

    Energy Technology Data Exchange (ETDEWEB)

    Fox, Kevin M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Peeler, David K. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kruger, Albert A. [USDOE Office of River Protection, Richland, WA (United States)

    2015-06-16

    The Savannah River Site Defense Waste Processing Facility (DWPF) melter was recently idled with glass remaining in the melt pool and riser for approximately three months. This situation presented a unique opportunity to collect and analyze glass samples since outages of this duration are uncommon. The objective of this study was to obtain insight into the potential for crystal formation in the glass resulting from an extended idling period. The results will be used to support development of a crystal-tolerant approach for operation of the high-level waste melter at the Hanford Tank Waste Treatment and Immobilization Plant (WTP). Two glass pour stream samples were collected from DWPF when the melter was restarted after idling for three months. The samples did not contain crystallization that was detectible by X-ray diffraction. Electron microscopy identified occasional spinel and noble metal crystals of no practical significance. Occasional platinum particles were observed by microscopy as an artifact of the sample collection method. Reduction/oxidation measurements showed that the pour stream glasses were fully oxidized, which was expected after the extended idling period. Chemical analysis of the pour stream glasses revealed slight differences in the concentrations of some oxides relative to analyses of the melter feed composition prior to the idling period. While these differences may be within the analytical error of the laboratories, the trends indicate that there may have been some amount of volatility associated with some of the glass components, and that there may have been interaction of the glass with the refractory components of the melter. These changes in composition, although small, can be attributed to the idling of the melter for an extended period. The changes in glass composition resulted in a 70-100 °C increase in the predicted spinel liquidus temperature (TL) for the pour stream glass samples relative to the analysis of the melter feed prior to

  12. Processing Optimization of Typed Resources with Synchronized Storage and Computation Adaptation in Fog Computing

    Directory of Open Access Journals (Sweden)

    Zhengyang Song

    2018-01-01

    Full Text Available Wide application of the Internet of Things (IoT system has been increasingly demanding more hardware facilities for processing various resources including data, information, and knowledge. With the rapid growth of generated resource quantity, it is difficult to adapt to this situation by using traditional cloud computing models. Fog computing enables storage and computing services to perform at the edge of the network to extend cloud computing. However, there are some problems such as restricted computation, limited storage, and expensive network bandwidth in Fog computing applications. It is a challenge to balance the distribution of network resources. We propose a processing optimization mechanism of typed resources with synchronized storage and computation adaptation in Fog computing. In this mechanism, we process typed resources in a wireless-network-based three-tier architecture consisting of Data Graph, Information Graph, and Knowledge Graph. The proposed mechanism aims to minimize processing cost over network, computation, and storage while maximizing the performance of processing in a business value driven manner. Simulation results show that the proposed approach improves the ratio of performance over user investment. Meanwhile, conversions between resource types deliver support for dynamically allocating network resources.

  13. Substituting computers for services - potential to reduce ICT's environmental footprint

    Energy Technology Data Exchange (ETDEWEB)

    Plepys, A. [The International Inst. for Industrial Environmental Economics at Lund Univ. (Sweden)

    2004-07-01

    The environmental footprint of IT products is significant and, in spite of manufacturing and product design improvements, the growing consumption of electronics results in an increasing absolute environmental impact. Computers have a short technological lifespan, and much of their built-in performance, although necessary, remains idle most of the time. Today, most computers used in non-residential sectors are connected to networks. The premise of this paper is that computer networks are an untapped resource which could allow the environmental impacts of IT products to be addressed by centralising and sharing computing resources. The article presents the results of a comparative study of two computing architectures. The first is the traditional decentralised PC-based system and the second is a centralised server-based computing (SBC) system. Both systems deliver equivalent functions to the final users and can therefore be compared on a one-to-one basis. The study evaluates product lifespan, energy consumption in the use stage, and product design and its environmental implications in manufacturing. (orig.)

  14. Computer Resources | College of Engineering & Applied Science

    Science.gov (United States)


  15. Idle emissions from heavy-duty diesel and natural gas vehicles at high altitude.

    Science.gov (United States)

    McCormick, R L; Graboski, M S; Alleman, T L; Yanowitz, J

    2000-11-01

    Idle emissions of total hydrocarbon (THC), CO, NOx, and particulate matter (PM) were measured from 24 heavy-duty diesel-fueled (12 trucks and 12 buses) and 4 heavy-duty compressed natural gas (CNG)-fueled vehicles. The volatile organic fraction (VOF) of PM and aldehyde emissions were also measured for many of the diesel vehicles. Experiments were conducted at 1609 m above sea level using a full exhaust flow dilution tunnel method identical to that used for heavy-duty engine Federal Test Procedure (FTP) testing. Diesel trucks averaged 0.170 g/min THC, 1.183 g/min CO, 1.416 g/min NOx, and 0.030 g/min PM. Diesel buses averaged 0.137 g/min THC, 1.326 g/min CO, 2.015 g/min NOx, and 0.048 g/min PM. Results are compared to idle emission factors from the MOBILE5 and PART5 inventory models. The models significantly (45-75%) overestimate emissions of THC and CO in comparison with results measured from the fleet of vehicles examined in this study. Measured NOx emissions were significantly higher (30-100%) than model predictions. For the pre-1999 (pre-consent decree) truck engines examined in this study, idle NOx emissions increased with model year with a linear fit (r² = 0.6). PART5 nationwide fleet average emissions are within 1 order of magnitude of emissions for the group of vehicles tested in this study. Aldehyde emissions for bus idling averaged 6 mg/min. The VOF averaged 19% of total PM for buses and 49% for trucks. CNG vehicle idle emissions averaged 1.435 g/min for THC, 1.119 g/min for CO, 0.267 g/min for NOx, and 0.003 g/min for PM. The g/min PM emissions are only a small fraction of g/min PM emissions during vehicle driving. However, idle emissions of NOx, CO, and THC are significant in comparison with driving emissions.

  16. Exploitation of heterogeneous resources for ATLAS Computing

    CERN Document Server

    Chudoba, Jiri; The ATLAS collaboration

    2018-01-01

    LHC experiments require significant computational resources for Monte Carlo simulations and real data processing and the ATLAS experiment is not an exception. In 2017, ATLAS steadily used almost 3M HS06 units, which corresponds to about 300 000 standard CPU cores. The total disk and tape capacity managed by the Rucio data management system exceeded 350 PB. Resources are provided mostly by Grid computing centers distributed in geographically separated locations and connected by the Grid middleware. The ATLAS collaboration developed several systems to manage computational jobs, data files and network transfers. ATLAS solutions for job and data management (PanDA and Rucio) were generalized and are now used also by other collaborations. More components are needed to include new resources such as private and public clouds, volunteers' desktop computers and primarily supercomputers in major HPC centers. Workflows and data flows significantly differ for these less traditional resources and extensive software re...

  17. Integration of Cloud resources in the LHCb Distributed Computing

    CERN Document Server

    Ubeda Garcia, Mario; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) – instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keepin...

  18. 40 CFR 86.1527 - Idle test procedure; overview.

    Science.gov (United States)

    2010-07-01

    ... from a single exhaust pipe in which exhaust products are homogeneously mixed. The configuration for... additional “Y” pipe be placed in the exhaust system before dilution. [48 FR 52252, Nov. 16, 1983... determine the raw concentration (in percent) of CO in the exhaust flow at idle. The test procedure begins...

  19. EFFECTS OF ENGINE SPEED AND ACCESSORY LOAD ON IDLING EMISSIONS FROM HEAVY-DUTY DIESEL TRUCK ENGINES

    Science.gov (United States)

    A nontrivial portion of heavy-duty vehicle emissions of nitrogen oxides (NOx) and particulate matter (PM) occurs during idling. Regulators and the environmental community are interested in curtailing truck idling emissions, but current emissions models do not characterize them ac...

  20. Framework of Resource Management for Intercloud Computing

    Directory of Open Access Journals (Sweden)

    Mohammad Aazam

    2014-01-01

    Full Text Available There has been a very rapid increase in digital media content, due to which the media cloud is gaining importance. The cloud computing paradigm provides management of resources and helps create an extended portfolio of services. Through cloud computing, not only are services managed more efficiently, but service discovery is also made possible. To handle the rapid increase in content, the media cloud plays a very vital role. But it is not possible for standalone clouds to handle everything with the increasing user demands. For scalability and better service provisioning, clouds at times have to communicate with other clouds and share their resources. This scenario is called Intercloud computing or cloud federation. The study of Intercloud computing is still in its early stages. Resource management is one of the key concerns to be addressed in Intercloud computing. Existing studies discuss this issue only in a trivial and simplistic way. In this study, we present a resource management model that takes into account different types of services, different customer types, customer characteristics, pricing, and refunding. The presented framework was implemented using Java and NetBeans 8.0 and evaluated using the CloudSim 3.0.3 toolkit. The presented results and their discussion validate our model and its efficiency.

  1. Long-Haul Truck Sleeper Heating Load Reduction Package for Rest Period Idling

    Energy Technology Data Exchange (ETDEWEB)

    Lustbader, Jason Aaron; Kekelia, Bidzina; Tomerlin, Jeff; Kreutzer, Cory J.; Yeakel, Skip; Adelman, Steven; Luo, Zhiming; Zehme, John

    2016-04-05

    Annual fuel use for sleeper cab truck rest period idling is estimated at 667 million gallons in the United States, or 6.8% of long-haul truck fuel use. Truck idling during a rest period represents zero freight efficiency and is largely done to supply accessory power for climate conditioning of the cab. The National Renewable Energy Laboratory's CoolCab project aims to reduce heating, ventilating, and air conditioning (HVAC) loads and resulting fuel use from rest period idling by working closely with industry to design efficient long-haul truck thermal management systems while maintaining occupant comfort. Enhancing the thermal performance of cab/sleepers will enable smaller, lighter, and more cost-effective idle reduction solutions. In addition, if the fuel savings provide a one- to three-year payback period, fleet owners will be economically motivated to incorporate them. For candidate idle reduction technologies to be implemented by original equipment manufacturers and fleets, their effectiveness must be quantified. To address this need, several promising candidate technologies were evaluated through experimentation and modeling to determine their effectiveness in reducing rest period HVAC loads. Load reduction strategies were grouped into the focus areas of solar envelope, occupant environment, conductive pathways, and efficient equipment. Technologies in each of these focus areas were investigated in collaboration with industry partners. The most promising of these technologies were then combined with the goal of exceeding a 30% reduction in HVAC loads. These technologies included 'ultra-white' paint, advanced insulation, and advanced curtain design. Previous testing showed more than a 35.7% reduction in air conditioning loads. This paper describes the overall heat transfer coefficient testing of this advanced load reduction technology package that showed more than a 43% reduction in heating load. Adding an additional layer of advanced insulation
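
    Results like the heating-load reduction above are commonly summarized by an overall heat transfer coefficient (UA) test: with the cab held at a fixed interior-to-ambient temperature difference, UA is the steady-state heater power divided by that temperature difference. The sketch below only illustrates the calculation; the power and temperature values are invented and are not the test data from this work.

    ```python
    # Overall heat transfer coefficient (UA) from a steady-state heat-up test:
    # UA [W/K] = steady heater power / (interior temperature - ambient temperature).
    # The measurements below are invented for illustration.

    def overall_ua(heater_power_w: float, t_interior_c: float, t_ambient_c: float) -> float:
        return heater_power_w / (t_interior_c - t_ambient_c)

    baseline_ua = overall_ua(heater_power_w=900.0, t_interior_c=22.0, t_ambient_c=-10.0)
    package_ua = overall_ua(heater_power_w=500.0, t_interior_c=22.0, t_ambient_c=-10.0)

    reduction = 1.0 - package_ua / baseline_ua
    print(f"baseline UA = {baseline_ua:.1f} W/K, package UA = {package_ua:.1f} W/K")
    print(f"heating load reduction = {reduction:.0%}")
    ```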

  2. Improving ATLAS computing resource utilization with HammerCloud

    CERN Document Server

    Schovancova, Jaroslava; The ATLAS collaboration

    2018-01-01

    HammerCloud is a framework to commission, test, and benchmark ATLAS computing resources and components of various distributed systems with realistic full-chain experiment workflows. HammerCloud contributes to ATLAS Distributed Computing (ADC) Operations and automation efforts, providing the automated resource exclusion and recovery tools, that help re-focus operational manpower to areas which have yet to be automated, and improve utilization of available computing resources. We present recent evolution of the auto-exclusion/recovery tools: faster inclusion of new resources in testing machinery, machine learning algorithms for anomaly detection, categorized resources as master vs. slave for the purpose of blacklisting, and a tool for auto-exclusion/recovery of resources triggered by Event Service job failures that is being extended to other workflows besides the Event Service. We describe how HammerCloud helped commissioning various concepts and components of distributed systems: simplified configuration of qu...

  3. ResourceGate: A New Solution for Cloud Computing Resource Allocation

    OpenAIRE

    Abdullah A. Sheikh

    2012-01-01

    Cloud computing has become a focus of both educational and business communities. Their concerns include the need to improve the Quality of Service (QoS) provided, as well as qualities such as reliability and performance, and to reduce costs. Cloud computing provides many benefits in terms of low cost and accessibility of data. Ensuring these benefits is considered to be the major factor in the cloud computing environment. This paper surveys recent research related to cloud computing resource al...

  4. PERHITUNGAN IDLE CAPACITY DENGAN MENGGUNAKAN CAM-I CAPACITY MODEL DALAM RANGKA EFISIENSI BIAYA PADA PT X

    Directory of Open Access Journals (Sweden)

    Muammar Aditya

    2015-09-01

    Full Text Available The aim of this research is to analyze the capacity cost incurred by the company's production machines and the human resources that operate them, using the CAM-I capacity model. The CAM-I capacity model is an approach that focuses on how to manage company resources. This research was carried out at PT X and focuses on production activity using a small mixer machine, an extruder machine, an oven drying machine, an enrober machine, pan coating machines (consisting of hot and cold pan coating machines), and packing machines (consisting of vertical and horizontal packing machines), as well as the human resources that operate these machines. The research focuses on rated capacity, productive capacity, idle capacity, and nonproductive capacity to measure capacity cost. The results show that most of the capacity owned by both the production machines and the human resources is not utilized to its maximum potential. The capacity cost of the production machines and human resources needs to be reduced by increasing product sales; if this is not achievable, the efficiency of the production machines and human resources needs to be increased by reducing their quantity. DOI: 10.15408/ess.v4i1.1961
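
    In the CAM-I capacity model, rated capacity is split into productive, nonproductive, and idle portions, and the cost of each portion is its share of the total capacity cost. The sketch below illustrates that split with invented machine-hour and cost figures; it is not data from PT X.

    ```python
    # CAM-I style capacity cost split:
    # rated capacity = productive + nonproductive + idle capacity.
    # Hours and total cost below are invented for illustration.

    def capacity_cost_split(rated_hours, productive_hours, nonproductive_hours, total_cost):
        idle_hours = rated_hours - productive_hours - nonproductive_hours
        cost_per_hour = total_cost / rated_hours
        return {
            "productive_cost": productive_hours * cost_per_hour,
            "nonproductive_cost": nonproductive_hours * cost_per_hour,
            "idle_cost": idle_hours * cost_per_hour,
            "idle_share": idle_hours / rated_hours,
        }

    split = capacity_cost_split(rated_hours=8760, productive_hours=4200,
                                nonproductive_hours=900, total_cost=500_000.0)
    for key, value in split.items():
        print(f"{key}: {value:,.2f}")
    ```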

  5. Aggregated Computational Toxicology Resource (ACTOR)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Aggregated Computational Toxicology Resource (ACTOR) is a database on environmental chemicals that is searchable by chemical name and other identifiers, and by...

  6. Aggregated Computational Toxicology Online Resource

    Data.gov (United States)

    U.S. Environmental Protection Agency — Aggregated Computational Toxicology Online Resource (AcTOR) is EPA's online aggregator of all the public sources of chemical toxicity data. ACToR aggregates data...

  7. Using Puppet to contextualize computing resources for ATLAS analysis on Google Compute Engine

    International Nuclear Information System (INIS)

    Öhman, Henrik; Panitkin, Sergey; Hendrix, Valerie

    2014-01-01

    With the advent of commercial as well as institutional and national clouds, new opportunities for on-demand computing resources become available to the HEP community. The new cloud technologies also come with new challenges, one of which is the contextualization of computing resources with regard to the requirements of the user and their experiment. In particular, on Google's new cloud platform, Google Compute Engine (GCE), the upload of users' virtual machine images is not possible. This precludes the application of ready-to-use technologies like CernVM and forces users to build and contextualize their own VM images from scratch. We investigate the use of Puppet to facilitate the contextualization of cloud resources on GCE, with particular regard to ease of configuration and dynamic resource scaling.

  8. Integration of Cloud resources in the LHCb Distributed Computing

    Science.gov (United States)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) - instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  9. Integration of cloud resources in the LHCb distributed computing

    International Nuclear Information System (INIS)

    García, Mario Úbeda; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel; Muñoz, Víctor Méndez

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) – instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  10. 'tomo_display' and 'vol_tools': IDL VM Packages for Tomography Data Reconstruction, Processing, and Visualization

    Science.gov (United States)

    Rivers, M. L.; Gualda, G. A.

    2009-05-01

    One of the challenges in tomography is the availability of suitable software for image processing and analysis in 3D. We present here 'tomo_display' and 'vol_tools', two packages created in IDL that enable reconstruction, processing, and visualization of tomographic data. They complement in many ways the capabilities offered by Blob3D (Ketcham 2005 - Geosphere, 1: 32-41, DOI: 10.1130/GES00001.1) and, in combination, allow users without programming knowledge to perform all steps necessary to obtain qualitative and quantitative information using tomographic data. The package 'tomo_display' was created and is maintained by Mark Rivers. It allows the user to: (1) preprocess and reconstruct parallel beam tomographic data, including removal of anomalous pixels, ring artifact reduction, and automated determination of the rotation center, (2) visualization of both raw and reconstructed data, either as individual frames, or as a series of sequential frames. The package 'vol_tools' consists of a series of small programs created and maintained by Guilherme Gualda to perform specific tasks not included in other packages. Existing modules include simple tools for cropping volumes, generating histograms of intensity, sample volume measurement (useful for porous samples like pumice), and computation of volume differences (for differential absorption tomography). The module 'vol_animate' can be used to generate 3D animations using rendered isosurfaces around objects. Both packages use the same NetCDF format '.volume' files created using code written by Mark Rivers. Currently, only 16-bit integer volumes are created and read by the packages, but floating point and 8-bit data can easily be stored in the NetCDF format as well. A simple GUI to convert sequences of tiffs into '.volume' files is available within 'vol_tools'. Both 'tomo_display' and 'vol_tools' include options to (1) generate onscreen output that allows for dynamic visualization in 3D, (2) save sequences of tiffs to disk

  11. Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data

    Science.gov (United States)

    Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.

    2018-03-01

    One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to the significant idle time of computational resources, and, in turn, to the decrease in speed of scientific research. This paper presents three approaches to study the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach performs an analysis of computing resource utilization statistics, which allows to identify different typical classes of programs, to explore the structure of the supercomputer job flow and to track overall trends in the supercomputer behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs – jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow – are being detected. For each approach, the results obtained in practice in the Supercomputer Center of Moscow State University are demonstrated.
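
    One simple way to flag the "abnormal jobs" mentioned in the third approach is to compare each job's efficiency metric against the distribution of the overall job flow, for example with a z-score threshold. The sketch below is a minimal illustration with made-up CPU-utilization values; it is not the detection method used at the Moscow State University center.

    ```python
    # Flag jobs whose efficiency metric deviates strongly from the overall job flow.
    # A plain z-score rule on CPU utilization; all values are invented for illustration.

    from statistics import mean, stdev

    def abnormal_jobs(job_utilization, z_threshold=2.5):
        values = list(job_utilization.values())
        mu, sigma = mean(values), stdev(values)
        if sigma == 0:
            return []
        return [job for job, util in job_utilization.items()
                if abs(util - mu) / sigma > z_threshold]

    jobs = {"job-01": 0.82, "job-02": 0.79, "job-03": 0.85, "job-04": 0.81,
            "job-05": 0.77, "job-06": 0.80, "job-07": 0.83, "job-08": 0.78,
            "job-09": 0.84, "job-10": 0.03}
    print("abnormal:", abnormal_jobs(jobs))   # expected: ['job-10']
    ```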

  12. Framework Resources Multiply Computing Power

    Science.gov (United States)

    2010-01-01

    As an early proponent of grid computing, Ames Research Center awarded Small Business Innovation Research (SBIR) funding to 3DGeo Development Inc., of Santa Clara, California, (now FusionGeo Inc., of The Woodlands, Texas) to demonstrate a virtual computer environment that linked geographically dispersed computer systems over the Internet to help solve large computational problems. By adding to an existing product, FusionGeo enabled access to resources for calculation- or data-intensive applications whenever and wherever they were needed. Commercially available as Accelerated Imaging and Modeling, the product is used by oil companies and seismic service companies, which require large processing and data storage capacities.

  13. Idleness, returns to education and child labor

    Directory of Open Access Journals (Sweden)

    José Raimundo Carvalho

    2012-12-01

    Full Text Available Although recent trends in child labor are positive (see ILO, 2006), there are still important shortcomings that require further investigation. Among them, the exclusion of the category of "idle children" (those who neither work nor study) from past studies, as well as the lack of reliable information on returns to education, are two significant omissions. Using a database that contains details on idle children and a proxy for the returns to education, we find evidence that confirms traditional findings, both with regard to the strong positive effect of parental background and to the positive relationship between the number of children in the household and child labor. On the other hand, our estimates point to new insights, such as the great regional variation of the estimates and the fact that the Body Mass Index effect is positive. Finally, we suggest a new perspective on the issue of "street children" through the analysis of the category of "idle children".

  14. Long-Haul Truck Sleeper Heating Load Reduction Package for Rest Period Idling: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Lustbader, Jason; Kekelia, Bidzina; Tomerlin, Jeff; Kreutzer, Cory; Adelman, Steve; Yeakel, Skip; Luo, Zhiming; Zehme, John

    2016-03-24

    Annual fuel use for sleeper cab truck rest period idling is estimated at 667 million gallons in the United States, or 6.8% of long-haul truck fuel use. Truck idling during a rest period represents zero freight efficiency and is largely done to supply accessory power for climate conditioning of the cab. The National Renewable Energy Laboratory's CoolCab project aims to reduce heating, ventilating, and air conditioning (HVAC) loads and resulting fuel use from rest period idling by working closely with industry to design efficient long-haul truck thermal management systems while maintaining occupant comfort. Enhancing the thermal performance of cab/sleepers will enable smaller, lighter, and more cost-effective idle reduction solutions. In addition, if the fuel savings provide a one- to three-year payback period, fleet owners will be economically motivated to incorporate them. For candidate idle reduction technologies to be implemented by original equipment manufacturers and fleets, their effectiveness must be quantified. To address this need, several promising candidate technologies were evaluated through experimentation and modeling to determine their effectiveness in reducing rest period HVAC loads. Load reduction strategies were grouped into the focus areas of solar envelope, occupant environment, conductive pathways, and efficient equipment. Technologies in each of these focus areas were investigated in collaboration with industry partners. The most promising of these technologies were then combined with the goal of exceeding a 30% reduction in HVAC loads. These technologies included 'ultra-white' paint, advanced insulation, and advanced curtain design. Previous testing showed more than a 35.7% reduction in air conditioning loads. This paper describes the overall heat transfer coefficient testing of this advanced load reduction technology package that showed more than a 43% reduction in heating load. Adding an additional layer of advanced insulation

  15. Turning Video Resource Management into Cloud Computing

    Directory of Open Access Journals (Sweden)

    Weili Kou

    2016-07-01

    Full Text Available Big data makes cloud computing more and more popular in various fields. Video resources are very useful and important to education, security monitoring, and so on. However, issues of their huge volumes, complex data types, inefficient processing performance, weak security, and long times for loading pose challenges in video resource management. The Hadoop Distributed File System (HDFS is an open-source framework, which can provide cloud-based platforms and presents an opportunity for solving these problems. This paper presents video resource management architecture based on HDFS to provide a uniform framework and a five-layer model for standardizing the current various algorithms and applications. The architecture, basic model, and key algorithms are designed for turning video resources into a cloud computing environment. The design was tested by establishing a simulation system prototype.

  16. Computing Bounds on Resource Levels for Flexible Plans

    Science.gov (United States)

    Muscettola, Nicola; Rijsman, David

    2009-01-01

    A new algorithm efficiently computes the tightest exact bound on the levels of resources induced by a flexible activity plan (see figure). Tightness of bounds is extremely important for computations involved in planning because tight bounds can save potentially exponential amounts of search (through early backtracking and detection of solutions), relative to looser bounds. The bound computed by the new algorithm, denoted the resource-level envelope, constitutes the measure of maximum and minimum consumption of resources at any time for all fixed-time schedules in the flexible plan. At each time, the envelope guarantees that there are two fixed-time instantiations: one that produces the minimum level and one that produces the maximum level. Therefore, the resource-level envelope is the tightest possible resource-level bound for a flexible plan because any tighter bound would exclude the contribution of at least one fixed-time schedule. If the resource-level envelope can be computed efficiently, one could substitute looser bounds that are currently used in the inner cores of constraint-posting scheduling algorithms, with the potential for great improvements in performance. What is needed to reduce the cost of computation is an algorithm, the measure of complexity of which is no greater than a low-degree polynomial in N (where N is the number of activities). The new algorithm satisfies this need. In this algorithm, the computation of resource-level envelopes is based on a novel combination of (1) the theory of shortest paths in the temporal-constraint network for the flexible plan and (2) the theory of maximum flows for a flow network derived from the temporal and resource constraints. The measure of asymptotic complexity of the algorithm is O(N · O(maxflow(N))), where O(x) denotes an amount of computing time or a number of arithmetic operations proportional to a number of the order of x, and O(maxflow(N)) is the measure of complexity (and thus of cost) of a maximum-flow
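
    To make the notion of a resource-level bound concrete, the sketch below computes the resource profile of one fixed-time schedule, where each activity consumes or produces a fixed amount at a fixed time; the envelope described above is the pointwise minimum/maximum of such profiles over all schedules allowed by the flexible plan. The events used are invented for illustration and are not part of the algorithm itself.

    ```python
    # Resource level over time for one fixed-time instantiation of a plan.
    # Each event is (time, delta): positive delta produces resource, negative consumes.
    # The envelope discussed above bounds these levels over all allowed schedules.

    from itertools import accumulate

    def resource_profile(events):
        """Return [(time, level_after_event)] for a fixed-time schedule."""
        ordered = sorted(events)                       # sort events by time
        times = [t for t, _ in ordered]
        levels = list(accumulate(delta for _, delta in ordered))
        return list(zip(times, levels))

    # Hypothetical schedule: consume 2 units at t=1, 3 at t=4, replenish 4 at t=6.
    profile = resource_profile([(1, -2), (4, -3), (6, +4)])
    print(profile)                                     # [(1, -2), (4, -5), (6, -1)]
    print("minimum level:", min(level for _, level in profile))
    ```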

  17. Idling Reduction for Long-Haul Trucks: An Economic Comparison of On-Board and Wayside Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Gaines, Linda [Argonne National Lab. (ANL), Argonne, IL (United States); Weikersheimer, Patricia [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-09-01

    Reducing the idling of long-haul heavy-duty trucks has long been recognized as a particularly low-hanging fruit of fuel efficiency and emissions reduction. The displacement of about 10 hours of diesel idling every day, for most days of the year, for as many as a million long-haul trucks has very clear benefits. This report considers the costs and return on investment (ROI) for idling reduction (IR) equipment for both truck owners and electrified parking space (EPS) equipment owners. For the truck owners, the key variables examined are idling hours to be displaced (generally 1,000 to 2,000 hours per year) and the price of fuel ($0 to $5/gal). The ideal IR option would provide complete services in varied climates in any location and offer the best ROI on trucks that log many idling hours. For trucks that have fewer idling hours, options with a fixed cost per hour (i.e., EPS) might be most attractive if they were available to all, or even most, truck drivers. EPS, however, is particularly cost effective for trucks on prescribed routes with a need for regular, extended stops at terminals. (EPS is also called truck stop electrification, or TSE.) The analysis shows that all IR options save money when fuel costs more than $2/gal. For trucks requiring bunk heat, a simple heater (plug-in or diesel) is almost always the most cost-effective way to provide heat, even if the truck is equipped with an auxiliary power unit (APU) or is parked at a single-system EPS location. For trucks requiring bunk air-conditioning, the use of single-system EPS is most cost effective for those logging fewer idling hours. Even for trucks with higher idling hours, the cost of EPS may be about the same as that for on-board air-conditioning. Clearly, trucks’ locations and seasonal factors—and the availability of EPS—are significant in the choice of “best fit” IR equipment for truck owners. This report also considers costs and payback for owners of EPS infrastructure. An industry that
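
    The ROI reasoning above reduces to a simple payback calculation: annual savings are the displaced idle hours times the idle fuel rate times the fuel price, minus any operating cost of the idle-reduction equipment. The equipment and fuel figures in the sketch below are invented assumptions, not the report's numbers.

    ```python
    # Simple payback period for an idle-reduction (IR) device.
    # All figures below are hypothetical and only illustrate the calculation.

    def payback_years(equipment_cost, idle_hours_per_year, idle_fuel_gal_per_hour,
                      fuel_price_per_gal, annual_operating_cost=0.0):
        annual_savings = (idle_hours_per_year * idle_fuel_gal_per_hour * fuel_price_per_gal
                          - annual_operating_cost)
        if annual_savings <= 0:
            return float("inf")
        return equipment_cost / annual_savings

    # Hypothetical APU: $9,000 installed, displacing 1,500 idle hours/year at 0.8 gal/h.
    for price in (2.0, 3.0, 4.0, 5.0):
        years = payback_years(9000.0, 1500, 0.8, price, annual_operating_cost=300.0)
        print(f"fuel at ${price:.2f}/gal -> payback {years:.1f} years")
    ```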

  18. Some issues of creation of belarusian language computer resources

    OpenAIRE

    Rubashko, N.; Nevmerjitskaia, G.

    2003-01-01

    The main reason for creating computer resources for a natural language is the need to bring the ways of language normalization into accord with the form of the language's existence: the computer form of language usage should correspond to the computer form in which language standards are fixed. This paper discusses various aspects of the creation of Belarusian language computer resources. It also briefly gives an overview of the objectives of the project involved.

  19. Physical-resource requirements and the power of quantum computation

    International Nuclear Information System (INIS)

    Caves, Carlton M; Deutsch, Ivan H; Blume-Kohout, Robin

    2004-01-01

    The primary resource for quantum computation is the Hilbert-space dimension. Whereas Hilbert space itself is an abstract construction, the number of dimensions available to a system is a physical quantity that requires physical resources. Avoiding a demand for an exponential amount of these resources places a fundamental constraint on the systems that are suitable for scalable quantum computation. To be scalable, the number of degrees of freedom in the computer must grow nearly linearly with the number of qubits in an equivalent qubit-based quantum computer. These considerations rule out quantum computers based on a single particle, a single atom, or a single molecule consisting of a fixed number of atoms or on classical waves manipulated using the transformations of linear optics

  20. Estimation of fuel loss due to idling of vehicles at a signalized intersection in Chennai, India

    Science.gov (United States)

    Vasantha Kumar, S.; Gulati, Himanshu; Arora, Shivam

    2017-11-01

    Vehicles waiting at signalized intersections are generally found to be idling, i.e., drivers do not switch off their engines during red times. This idling of vehicles during red times at signalized intersections can lead to huge economic losses, as a lot of fuel is consumed by vehicles in idling condition. The situation may be even worse in countries like India, as different vehicle types consume varying amounts of fuel. Only limited studies have been reported on the estimation of fuel loss due to idling of vehicles in India. In the present study, one of the busy intersections in Chennai, namely, Tidel Park Junction on Rajiv Gandhi Salai, was considered. Data collection was carried out on one approach road of the intersection during morning and evening peak hours on a typical working day by manually noting down the red timings of each cycle and the corresponding numbers of two-wheelers, three-wheelers, passenger cars, light commercial vehicles (LCV), and heavy motorized vehicles (HMV) that were in idling mode. Using the fuel consumption values for the various vehicle types suggested by the Central Road Research Institute (CRRI), the total fuel loss during the study period was found to be Rs. 4,93,849/-. The installation of red timers, synchronization of signals, use of non-motorized transport for short trips, and public awareness are some of the measures on which the government needs to focus to save the fuel wasted at signalized intersections in major cities of India.
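
    The estimation procedure described above amounts to summing, over vehicle categories, the number of idling vehicles times their idling time times a per-category idle fuel consumption rate and fuel price. The rates and prices in the sketch below are placeholders and are not the CRRI values used in the study.

    ```python
    # Fuel cost of idling at a signalized intersection:
    # cost = sum over vehicle types of (idling vehicle-hours * fuel rate * fuel price).
    # The consumption rates and prices below are placeholders, not CRRI figures.

    IDLE_FUEL_L_PER_HOUR = {"two_wheeler": 0.3, "three_wheeler": 0.5, "car": 0.8,
                            "lcv": 1.0, "hmv": 1.5}
    FUEL_PRICE_PER_L = {"two_wheeler": 100.0, "three_wheeler": 80.0, "car": 100.0,
                        "lcv": 90.0, "hmv": 90.0}   # rupees per litre (assumed)

    def idling_fuel_cost(observations):
        """observations: list of (vehicle_type, vehicles_idling, red_time_seconds)."""
        total = 0.0
        for vtype, count, red_seconds in observations:
            litres = count * (red_seconds / 3600.0) * IDLE_FUEL_L_PER_HOUR[vtype]
            total += litres * FUEL_PRICE_PER_L[vtype]
        return total

    observed = [("two_wheeler", 40, 90), ("car", 25, 90), ("three_wheeler", 10, 90),
                ("lcv", 5, 90), ("hmv", 3, 90)]      # one hypothetical signal cycle
    print(f"fuel cost for this cycle: Rs. {idling_fuel_cost(observed):.2f}")
    ```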

  1. Research on elastic resource management for multi-queue under cloud computing environment

    Science.gov (United States)

    CHENG, Zhenjing; LI, Haibo; HUANG, Qiulan; Cheng, Yaodong; CHEN, Gang

    2017-10-01

    As a new approach to managing computing resources, virtualization technology is more and more widely applied in the high-energy physics field. A virtual computing cluster based on OpenStack was built at IHEP, using HTCondor as the job queue management system. In a traditional static cluster, a fixed number of virtual machines are pre-allocated to the job queues of different experiments. However, this method cannot adapt well to the volatility of computing resource requirements. To solve this problem, an elastic computing resource management system for a cloud computing environment has been designed. This system performs unified management of virtual computing nodes on the basis of the HTCondor job queues, using dual resource thresholds as well as a quota service. A two-stage pool is designed to improve the efficiency of resource pool expansion. This paper presents several use cases of the elastic resource management system in IHEPCloud. In practical operation, virtual computing resources dynamically expanded or shrank as computing requirements changed. Additionally, the CPU utilization ratio of the computing resources was significantly increased compared with traditional resource management. The system also performs well when there are multiple Condor schedulers and multiple job queues.
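
    A minimal version of the dual-threshold expand/shrink logic described above could look like the sketch below: the pool grows when queued jobs exceed an upper threshold and shrinks when idle virtual machines exceed a lower one, subject to a quota. Function names, thresholds, and the quota are invented; this is not the IHEPCloud implementation.

    ```python
    # Dual-threshold elastic scaling decision for one job queue (illustrative only).
    # Names, thresholds, and quota are assumptions, not the IHEPCloud implementation.

    def scaling_decision(queued_jobs, idle_vms, running_vms,
                         expand_threshold=10, shrink_threshold=5, quota=200):
        """Return a positive number of VMs to start, a negative number to stop, or 0."""
        if queued_jobs > expand_threshold and running_vms < quota:
            # Expand, but never beyond the queue's quota.
            return min(queued_jobs - expand_threshold, quota - running_vms)
        if idle_vms > shrink_threshold:
            # Shrink by releasing surplus idle virtual machines.
            return -(idle_vms - shrink_threshold)
        return 0

    print(scaling_decision(queued_jobs=42, idle_vms=1, running_vms=120))   # expand: 32
    print(scaling_decision(queued_jobs=0, idle_vms=14, running_vms=60))    # shrink: -9
    print(scaling_decision(queued_jobs=3, idle_vms=2, running_vms=60))     # hold: 0
    ```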

  2. A Semi-Preemptive Computational Service System with Limited Resources and Dynamic Resource Ranking

    Directory of Open Access Journals (Sweden)

    Fang-Yie Leu

    2012-03-01

    Full Text Available In this paper, we integrate a grid system and a wireless network to present a convenient computational service system, called the Semi-Preemptive Computational Service system (SePCS for short), which provides users with a wireless access environment and through which a user can share his/her resources with others. In the SePCS, each node is dynamically given a score based on its CPU level, available memory size, current length of waiting queue, CPU utilization and bandwidth. With the scores, resource nodes are classified into three levels. User requests based on their time constraints are also classified into three types. Resources of higher levels are allocated to more tightly constrained requests so as to increase the total performance of the system. To achieve this, a resource broker with the Semi-Preemptive Algorithm (SPA) is also proposed. When the resource broker cannot find suitable resources for the requests of higher type, it preempts the resource that is now executing a lower type request so that the request of higher type can be executed immediately. The SePCS can be applied to a Vehicular Ad Hoc Network (VANET), users of which can then exploit the convenient mobile network services and the wireless distributed computing. As a result, the performance of the system is higher than that of the tested schemes.
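
    The scoring and three-level classification described above can be sketched as a weighted sum of normalized node metrics, with higher-level (higher-scoring) nodes reserved for the most tightly constrained requests. All weights and level cut-offs below are assumptions for illustration, not the SePCS parameters.

    ```python
    # Score a resource node from its metrics and map the score to one of three levels.
    # Weights and level cut-offs are invented; they are not the SePCS parameters.

    def node_score(cpu_ghz, free_mem_gb, queue_len, cpu_util, bandwidth_mbps):
        return (0.30 * min(cpu_ghz / 4.0, 1.0)
                + 0.25 * min(free_mem_gb / 32.0, 1.0)
                + 0.15 * (1.0 / (1.0 + queue_len))      # shorter queue scores higher
                + 0.15 * (1.0 - cpu_util)                # lower utilization scores higher
                + 0.15 * min(bandwidth_mbps / 1000.0, 1.0))

    def node_level(score):
        if score >= 0.7:
            return 1          # highest level: reserved for the most constrained requests
        if score >= 0.4:
            return 2
        return 3

    score = node_score(cpu_ghz=3.2, free_mem_gb=16, queue_len=2, cpu_util=0.35,
                       bandwidth_mbps=800)
    print(f"score={score:.2f}, level={node_level(score)}")
    ```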

  3. Using distributed processing on a local area network to increase available computing power

    International Nuclear Information System (INIS)

    Capps, K.S.; Sherry, K.J.

    1996-01-01

    The migration from central computers to desktop computers distributed the total computing horsepower of a system over many different machines. A typical engineering office may have several networked desktop computers that are sometimes idle, especially after work hours and when people are absent. Users would benefit if applications were able to use these networked computers collectively. This paper describes a method of distributing the workload of an application on one desktop system to otherwise idle systems on the network. The authors present this discussion from a developer's viewpoint, because the developer must modify an application before the user can realize any benefit of distributed computing on available systems
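
    A modern, minimal analogue of the approach described above is to fan a divisible workload out to whatever workers are currently available and combine the partial results. The sketch below uses Python's standard concurrent.futures on a single host to illustrate the pattern; the original article distributed chunks to otherwise idle machines on a local area network, so this is only a stand-in, not the authors' implementation.

    ```python
    # Fan a divisible workload out to available workers and combine partial results.
    # Illustrative only: runs worker processes on one host, whereas the article
    # distributed work chunks to otherwise idle desktop systems on the network.

    from concurrent.futures import ProcessPoolExecutor

    def process_chunk(chunk):
        """Placeholder for the real per-chunk computation."""
        return sum(x * x for x in chunk)

    def run_distributed(data, chunk_size=1000, workers=4):
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return sum(pool.map(process_chunk, chunks))

    if __name__ == "__main__":
        print(run_distributed(list(range(10_000))))
    ```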

  4. Lean hydrous and anhydrous bioethanol combustion in spark ignition engine at idle

    International Nuclear Information System (INIS)

    Chuepeng, Sathaporn; Srisuwan, Sudecha; Tongroon, Manida

    2016-01-01

    Highlights: • Anhydrous ethanol burns fastest in uncalibrated engine at equal equivalence ratio. • The leaner hydrous ethanol combustion tends to elevate the COV in imep. • Hydrous ethanol consumption was 10% greater than anhydrous ethanol at ϕ = 0.67 limit. • Optimizing alternative fuel engine at idle for stability and emission is suggested. - Abstract: The application of anhydrous bioethanol to substitute or replace gasoline fuel has been shown to provide benefits in terms of engine thermal efficiency, power output and exhaust emissions from spark ignition engines. Hydrous bioethanol has also gained more attention due to its energy and cost effectiveness. The main aim of this work is to minimize the fuel quantity injected to the intake ports of a four-cylinder engine under idle condition. The engine running on hydrous ethanol is operated under lean-burn conditions while its combustion stability is analyzed using an engine indicating system. The coefficient of variation in indicated mean effective pressure is the indicator for combustion stability, with hydrocarbon and carbon monoxide emission monitoring as a supplement. Anhydrous ethanol burns faster than hydrous ethanol and gasoline in the uncalibrated engine at the same fuel-to-air equivalence ratio under idle condition. The leaner hydrous ethanol combustion tends to elevate the coefficient of variation in indicated mean effective pressure. The experimental results show that the engine consumes 10% more hydrous ethanol on a mass basis than anhydrous ethanol at the lean limit of a fuel-to-air equivalence ratio of 0.67. The results of exhaust gas analysis were compared with those predicted by chemical equilibrium analysis of the fuel-air combustion; similar trends were found. Calibrating the alternative-fueled engine for fuel injection quantity should be accomplished at idle with combustion stability and emissions optimization.
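
    The stability indicator used above, the coefficient of variation (COV) in indicated mean effective pressure (IMEP), is simply the cycle-to-cycle standard deviation of IMEP divided by its mean. The per-cycle IMEP values in the sketch are invented for illustration only.

    ```python
    # Coefficient of variation (COV) of indicated mean effective pressure (IMEP):
    # COV = standard deviation of cycle IMEP / mean cycle IMEP.
    # The per-cycle IMEP values below are invented for illustration.

    from statistics import mean, stdev

    def cov_imep(imep_per_cycle):
        return stdev(imep_per_cycle) / mean(imep_per_cycle)

    cycles_bar = [2.05, 1.98, 2.10, 1.95, 2.02, 2.08, 1.97, 2.04]   # IMEP in bar
    cov = cov_imep(cycles_bar)
    print(f"COV(imep) = {cov:.1%}")   # values above a few percent suggest instability
    ```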

  5. Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds

    Science.gov (United States)

    Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni

    2012-09-01

Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. Each new user session request requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources in SDR cloud data centers and the numerous session requests at certain hours of the day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupations and that a tradeoff exists between cluster size and algorithm complexity.
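A toy sketch of the kind of allocation such tools analyze: the data center is split into clusters and each new session request is placed by a simple first-fit search. The cluster sizes, the request cost and the first-fit policy are illustrative assumptions, not the strategies evaluated in the paper.

```python
class Cluster:
    def __init__(self, name, total_mops):
        self.name = name
        self.free_mops = total_mops        # remaining processing capacity (MOPS)

    def try_allocate(self, demand_mops):
        if self.free_mops >= demand_mops:
            self.free_mops -= demand_mops
            return True
        return False

def place_session(clusters, demand_mops):
    """First-fit placement of one SDR transceiver chain across clusters."""
    for c in clusters:
        if c.try_allocate(demand_mops):
            return c.name
    return None   # request blocked: no cluster has enough spare capacity

clusters = [Cluster("c0", 1000), Cluster("c1", 1000)]   # two illustrative clusters
print(place_session(clusters, 300))                      # -> 'c0'
```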

  6. SOCR: Statistics Online Computational Resource

    Directory of Open Access Journals (Sweden)

    Ivo D. Dinov

    2006-10-01

Full Text Available The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result, a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning.

  7. PROSPECTS OF GEOTHERMAL RESOURCES DEVELOPMENT FOR EAST CISCAUCASIA

    Directory of Open Access Journals (Sweden)

    A. B. Alkhasov

    2013-01-01

Full Text Available Abstract. Work subject. Aim. The Northern Caucasus is one of the prospective regions for the development of geothermal energy. The hydrogeothermal resources of the East Ciscaucasian Artesian basin alone are estimated at up to 10000 MW of heat and 1000 MW of electric power. Their large-scale development requires wells of big diameter and high flow rate, involving huge capital investments. Reconstruction of idle wells for the production of thermal water would reduce the capital investments needed for building geothermal power installations. In the East Ciscaucasian Artesian basin there are many promising areas with idle wells which can be converted for the production of thermal water. The purpose of this work is to substantiate the possibility of efficient development of the geothermal resources of the Northern Caucasus region using idle oil and gas wells. Methods. A schematic diagram is presented for a binary geothermal power plant (GPP) using idle oil and gas wells, in which the primary heat carrier in the loop of the geothermal circulation system is used for heating and evaporating the low-boiling working agent circulating in the secondary contour of the steam-power unit. Calculations are carried out to select the optimum parameters of the geothermal circulation system for obtaining the maximum useful power of the GPP. A thermodynamic analysis of low-boiling working agents is made. Development of medium-enthalpy thermal waters in combined geothermal-steam-gas power installations is proposed, where the exhaust gases of a gas-turbine installation are used for evaporating and overheating the working agent circulating in the GPP contour, while heating of the working agent up to the evaporation temperature is carried out by the thermal water. Results. The possibility of efficient development of the geothermal resources of the Northern Caucasus region by constructing binary geothermal power plants using idle oil and gas wells is substantiated. The capacities and the basic

  8. A study of computer graphics technology in application of communication resource management

    Science.gov (United States)

    Li, Jing; Zhou, Liang; Yang, Fei

    2017-08-01

With the development of computer technology, computer graphics technology has come into wide use. In particular, the success of object-oriented technology and multimedia technology has promoted the development of graphics technology in computer software systems. Computer graphics theory and application technology have therefore become an important topic in the computer field, and computer graphics technology is being applied in more and more areas. In recent years, with the development of the social economy and especially the rapid development of information technology, the traditional way of managing communication resources can no longer effectively meet the needs of resource management. Communication resource management still relies on the original tools and methods for managing and maintaining resources and equipment, which has caused many problems: it is very difficult for non-professionals to understand the equipment and its status, resource utilization is relatively low, and managers cannot quickly and accurately grasp the resource conditions. Aimed at the above problems, this paper proposes introducing computer graphics technology into communication resource management. The introduction of computer graphics not only makes communication resource management more vivid, but also reduces the cost of resource management and improves work efficiency.

  9. 40 CFR 85.2218 - Preconditioned idle test-EPA 91.

    Science.gov (United States)

    2010-07-01

    .... (2) Idle mode—(i) Ford Motor Company and Honda vehicles. The engines of 1981-1987 model year Ford Motor Company vehicles and 1984-1985 model year Honda Preludes must be shut off for not more than ten...-1989 model year Ford Motor Company vehicles but may not be used for other vehicles. (ii) The mode timer...

  10. Reduction of atmospheric fine particle level by restricting the idling vehicles around a sensitive area.

    Science.gov (United States)

    Lee, Yen-Yi; Lin, Sheng-Lun; Yuan, Chung-Shin; Lin, Ming-Yeng; Chen, Kang-Shin

    2018-07-01

Atmospheric particles are a major problem that can lead to harmful effects on human health, especially in densely populated urban areas. Chiayi is a typical city with very high population and traffic density, located on the downwind side of several pollution sources. Multiple contributors to PM2.5 (particulate matter with an aerodynamic diameter ≤2.5 μm) and ultrafine particles cause complicated air quality problems. This study focused on the inhibition of local emission sources by restricting idling vehicles around a school area and evaluating the changes in the surrounding atmospheric PM conditions. Two stationary sites were monitored before and after the idling prohibition: a background site on the upwind side of the school and a campus site inside the school, to monitor the exposure level. In the base condition, the PM2.5 mass concentration was found to increase 15% over the background, and the nitrate (NO3−) content showed a significant increase at the campus site. The anthropogenic metal contents in PM2.5 were higher at the campus site than at the background site. Mobile emissions were found to be the most likely contributor to the school hot-spot area by chemical mass balance modeling (CMB8.2). On the other hand, the PM2.5 increase at the campus site fell to only 2% after the idling vehicle control, as the mobile source contribution dropped from 42.8% to 36.7%. Mobile monitoring also showed significant reductions in atmospheric PM2.5, PM0.1, polycyclic aromatic hydrocarbons (PAHs), and black carbon (BC) levels of 16.5%, 33.3%, 48.0%, and 11.5%, respectively. Consequently, restricting local idling emissions was shown to significantly reduce PM and harmful pollutants in the hot spots around the school environment. The emission of idling vehicles strongly affects the levels of particles and related pollutants in near-ground air around a school area. The PM2.5 mass concentration at a campus site increased from

  11. A Matchmaking Strategy Of Mixed Resource On Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Wisam Elshareef

    2015-08-01

Full Text Available Abstract Today cloud computing has become a key technology for online allotment of computing resources and online storage of user data at a lower cost, where computing resources are available all the time over the Internet with a pay-per-use concept. Recently, there has been a growing need for resource management strategies in a cloud computing environment that encompass both end-user satisfaction and a high job submission throughput with appropriate scheduling. One of the major and essential issues in resource management is allocating incoming tasks to suitable virtual machines (matchmaking). The main objective of this paper is to propose a matchmaking strategy between the incoming requests and the various resources in the cloud environment to satisfy the requirements of users and to balance the workload on resources. Load balancing is an important aspect of resource management in a cloud computing environment. This paper therefore proposes a dynamic weight active monitor (DWAM) load balancing algorithm, which allocates incoming requests on the fly to all available virtual machines in an efficient manner in order to achieve better performance parameters such as response time, processing time and resource utilization. The feasibility of the proposed algorithm is analyzed using the CloudSim simulator, which demonstrates the superiority of the proposed DWAM algorithm over its counterparts in the literature. Simulation results show that the proposed algorithm dramatically improves response time and data processing time and makes better use of resources compared with the Active Monitor and VM-assign algorithms.
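A minimal sketch of a DWAM-style decision: each virtual machine carries a dynamic weight that grows with capacity and shrinks with its current load, and a request goes to the VM with the highest weight. The weight formula and field names are assumptions for illustration, not the published definition of the algorithm.

```python
class VM:
    def __init__(self, vm_id, mips, ram_mb):
        self.vm_id = vm_id
        self.mips = mips            # processing capacity
        self.ram_mb = ram_mb
        self.active_requests = 0    # current load tracked by the monitor

    def weight(self):
        # Higher capacity and lower current load give a larger dynamic weight.
        return (self.mips + self.ram_mb / 10.0) / (1 + self.active_requests)

def assign_request(vms):
    """Pick the VM with the highest dynamic weight and record the assignment."""
    best = max(vms, key=lambda v: v.weight())
    best.active_requests += 1
    return best.vm_id

def complete_request(vm):
    vm.active_requests = max(0, vm.active_requests - 1)
```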

  12. Sleeper Cab Climate Control Load Reduction for Long-Haul Truck Rest Period Idling

    Energy Technology Data Exchange (ETDEWEB)

    Lustbader, J. A.; Kreutzer, C.; Adelman, S.; Yeakel, S.; Zehme, J.

    2015-04-29

    Annual fuel use for long-haul truck rest period idling is estimated at 667 million gallons in the United States. The U.S. Department of Energy’s National Renewable Energy Laboratory’s CoolCab project aims to reduce heating, ventilating, and air conditioning (HVAC) loads and resulting fuel use from rest period idling by working closely with industry to design efficient long-haul truck climate control systems while maintaining occupant comfort. Enhancing the thermal performance of cab/sleepers will enable smaller, lighter, and more cost-effective idle reduction solutions. In order for candidate idle reduction technologies to be implemented at the original equipment manufacturer and fleet level, their effectiveness must be quantified. To address this need, a number of promising candidate technologies were evaluated through experimentation and modeling to determine their effectiveness in reducing rest period HVAC loads. For this study, load reduction strategies were grouped into the focus areas of solar envelope, occupant environment, and conductive pathways. The technologies selected for a complete-cab package of technologies were “ultra-white” paint, advanced insulation, and advanced curtains. To measure the impact of these technologies, a nationally-averaged solar-weighted reflectivity long-haul truck paint color was determined and applied to the baseline test vehicle. Using the complete-cab package of technologies, electrical energy consumption for long-haul truck daytime rest period air conditioning was reduced by at least 35% for summer weather conditions in Colorado. The National Renewable Energy Laboratory's CoolCalc model was then used to extrapolate the performance of the thermal load reduction technologies nationally for 161 major U.S. cities using typical weather conditions for each location over an entire year.

  13. 41 CFR 101-25.109-1 - Identification of idle equipment.

    Science.gov (United States)

    2010-07-01

    ... comprised of senior program management, property management, and scientific personnel who are familiar with... 41 Public Contracts and Property Management 2 2010-07-01 2010-07-01 true Identification of idle equipment. 101-25.109-1 Section 101-25.109-1 Public Contracts and Property Management Federal Property...

  14. Optimal Computing Resource Management Based on Utility Maximization in Mobile Crowdsourcing

    Directory of Open Access Journals (Sweden)

    Haoyu Meng

    2017-01-01

Full Text Available Mobile crowdsourcing, as an emerging service paradigm, enables a computing resource requestor (CRR) to outsource computation tasks to computing resource providers (CRPs). Considering the importance of pricing as an essential incentive to coordinate the real-time interaction between the CRR and CRPs, in this paper we propose an optimal real-time pricing strategy for computing resource management in mobile crowdsourcing. Firstly, we analytically model the CRR and CRP behaviors in the form of carefully selected utility and cost functions, based on concepts from microeconomics. Secondly, we propose a distributed algorithm based on the exchange of control messages, which contain information about computing resource demand/supply and real-time prices. We show that there exist real-time prices that can align individual optimality with systemic optimality. Finally, we also take into account the interaction among CRPs and formulate the computing resource management problem as a game whose Nash equilibrium is achievable via best response. Simulation results demonstrate that the proposed distributed algorithm can potentially benefit both the CRR and the CRPs. The coordinator in mobile crowdsourcing can thus use the optimal real-time pricing strategy to manage computing resources towards the benefit of the overall system.
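The price-exchange idea can be sketched as a simple iterative market-clearing loop: a coordinator moves the real-time price toward the point where the CRR's demand matches the CRPs' aggregate supply. The linear demand/supply responses, the parameters and the update step are illustrative assumptions, not the paper's utility and cost functions.

```python
def crr_demand(price, a=100.0, b=8.0):
    """Requestor's demand response: buys less computation as the price rises."""
    return max(0.0, a - b * price)

def crp_supply(price, cost=2.0, c=5.0):
    """One provider's supply response: offers more resources as the price rises."""
    return max(0.0, c * (price - cost))

def find_clearing_price(n_providers=4, step=0.01, iters=2000):
    price = 1.0
    for _ in range(iters):
        demand = crr_demand(price)
        supply = n_providers * crp_supply(price)
        price += step * (demand - supply) / 10.0   # raise the price if demand exceeds supply
        price = max(price, 0.0)
    return price

p = find_clearing_price()
print(f"clearing price ≈ {p:.2f}, demand ≈ {crr_demand(p):.1f}, "
      f"supply ≈ {4 * crp_supply(p):.1f}")   # demand and supply meet near the equilibrium
```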

  15. Argonne Laboratory Computing Resource Center - FY2004 Report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R.

    2005-04-14

    In the spring of 2002, Argonne National Laboratory founded the Laboratory Computing Resource Center, and in April 2003 LCRC began full operations with Argonne's first teraflops computing cluster. The LCRC's driving mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting application use and development. This report describes the scientific activities, computing facilities, and usage in the first eighteen months of LCRC operation. In this short time LCRC has had broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. Steering for LCRC comes from the Computational Science Advisory Committee, composed of computing experts from many Laboratory divisions. The CSAC Allocations Committee makes decisions on individual project allocations for Jazz.

  16. Load/resource matching for period-of-record computer simulation

    International Nuclear Information System (INIS)

    Lindsey, E.D. Jr.; Robbins, G.E. III

    1991-01-01

The Southwestern Power Administration (Southwestern), an agency of the Department of Energy, is responsible for marketing the power and energy produced at Federal hydroelectric power projects developed by the U.S. Army Corps of Engineers in the southwestern United States. This paper reports that, in order to maximize benefits from limited resources, to evaluate proposed changes in the operation of existing projects, and to determine the feasibility and marketability of proposed new projects, Southwestern utilizes a period-of-record computer simulation model created in the 1960s. Southwestern is constructing a new computer simulation model to take advantage of changes in computers, policy, and procedures. Within all hydroelectric power reservoir systems, the ability of the resources to match the load demand is critical and presents complex problems. Therefore, the method used to compare available energy resources to energy load demands is a very important aspect of the new model. Southwestern has developed an innovative method which compares a resource duration curve with a load duration curve, adjusting the resource duration curve to make the most efficient use of the available resources.
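The comparison described above can be sketched directly: hourly loads and hourly available energy are each sorted in descending order to form duration curves, which are then compared point by point to locate any shortfall. The sample numbers below are invented for illustration.

```python
def duration_curve(hourly_values):
    """Sort hourly values in descending order to form a duration curve."""
    return sorted(hourly_values, reverse=True)

def unmet_energy(load_mwh, resource_mwh):
    """Compare load and resource duration curves; return the total shortfall."""
    load_dc = duration_curve(load_mwh)
    res_dc = duration_curve(resource_mwh)
    return sum(max(0.0, load - res) for load, res in zip(load_dc, res_dc))

# Made-up hourly energy values (MWh) for a short period.
load = [120, 90, 60, 150, 80, 70, 110, 95]
resource = [100, 100, 100, 100, 60, 60, 60, 60]
print(f"unmet energy: {unmet_energy(load, resource)} MWh")
```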

  17. Distributed Problem Solving: Adaptive Networks with a Computer Intermediary Resource. Intelligent Executive Computer Communication

    Science.gov (United States)

    1991-06-01

Interim Report: Distributed Problem Solving: Adaptive Networks With a Computer Intermediary Resource: Intelligent Executive Computer Communication. John Lyman and Carla J. Conaway, University of California at Los Angeles. Related publication: Proceedings of The National Conference on Artificial Intelligence, pages 181-184, The American Association for Artificial Intelligence, Pittsburgh.

  18. Shared-resource computing for small research labs.

    Science.gov (United States)

    Ackerman, M J

    1982-04-01

A real-time laboratory computer network is described. This network is composed of four real-time laboratory minicomputers located in each of four division laboratories and a larger minicomputer in a centrally located computer room. Off-the-shelf hardware and software were used with no customization. The network is configured for resource sharing using DECnet communications software and the RSX-11M multi-user real-time operating system. The cost effectiveness of the shared-resource network and multiple real-time processing using priority scheduling is discussed. Examples of utilization within a medical research department are given.

  19. Towards minimal resources of measurement-based quantum computation

    International Nuclear Information System (INIS)

    Perdrix, Simon

    2007-01-01

    We improve the upper bound on the minimal resources required for measurement-only quantum computation (M A Nielsen 2003 Phys. Rev. A 308 96-100; D W Leung 2004 Int. J. Quantum Inform. 2 33; S Perdrix 2005 Int. J. Quantum Inform. 3 219-23). Minimizing the resources required for this model is a key issue for experimental realization of a quantum computer based on projective measurements. This new upper bound also allows one to reply in the negative to the open question presented by Perdrix (2004 Proc. Quantum Communication Measurement and Computing) about the existence of a trade-off between observable and ancillary qubits in measurement-only QC

  20. NASA Center for Computational Sciences: History and Resources

    Science.gov (United States)

    2000-01-01

The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  1. Effect of oxygenate additive on diesel engine fuel consumption and emissions operating with biodiesel-diesel blend at idling conditions

    Science.gov (United States)

    Mahmudul, H. M.; Hagos, F. Y.; Mamat, R.; Noor, M. M.; Yusri, I. M.

    2017-10-01

Biodiesel is a promising alternative fuel for automotive engines, but idling is a major problem for vehicles operating in big cities. Vehicles running under idling conditions consume more fuel and emit more pollutants because fuel residues remain in the exhaust. The purpose of this study is to evaluate the impact of an alcohol additive on fuel consumption and emission parameters under idling conditions when a multi-cylinder diesel engine operates on a diesel-biodiesel blend. The study found that using 5% butanol as an additive with B5 (5% palm biodiesel + 95% diesel) blended fuel lowers brake-specific fuel consumption and CO emissions by 38% and 20%, respectively. However, the addition of butanol increases NOx and CO2 emissions. Based on these results, it can be said that 5% butanol can be used in a diesel engine with B5, without any engine modifications, to tackle the idling problem.

  2. A review of Computer Science resources for learning and teaching with K-12 computing curricula: an Australian case study

    Science.gov (United States)

    Falkner, Katrina; Vivian, Rebecca

    2015-10-01

To support teachers in implementing Computer Science curricula in classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age children with the intention of engaging children and increasing interest, rather than formally teaching concepts and skills. What is the educational quality of existing Computer Science resources, and to what extent are they suitable for classroom learning and teaching? In this paper, an assessment framework is presented to evaluate the quality of online Computer Science resources. Further, a semi-systematic review of available online Computer Science resources was conducted to evaluate resources available for classroom learning and teaching and to identify gaps in resource availability, using the Australian curriculum as a case study. The findings reveal a predominance of quality resources; however, a number of critical gaps were identified. This paper provides recommendations and guidance for the development of new and supplementary resources and future research.

  3. A multipurpose computing center with distributed resources

    Science.gov (United States)

    Chudoba, J.; Adam, M.; Adamová, D.; Kouba, T.; Mikula, A.; Říkal, V.; Švec, J.; Uhlířová, J.; Vokáč, P.; Svatoš, M.

    2017-10-01

The Computing Center of the Institute of Physics (CC IoP) of the Czech Academy of Sciences serves a broad spectrum of users with various computing needs. It runs a WLCG Tier-2 center for the ALICE and ATLAS experiments; the same group of services is used by the astroparticle physics projects Pierre Auger Observatory (PAO) and Cherenkov Telescope Array (CTA). An OSG stack is installed for the NOvA experiment. Other groups of users directly use the local batch system. Storage capacity is distributed over several locations. The DPM servers used by ATLAS and the PAO are all in the same server room, but several xrootd servers for the ALICE experiment are operated in the Nuclear Physics Institute in Řež, about 10 km away. The storage capacity for ATLAS and the PAO is extended by resources of CESNET - the Czech National Grid Initiative representative. Those resources are in Plzen and Jihlava, more than 100 km away from the CC IoP. Both distant sites use a hierarchical storage solution based on disks and tapes. They installed one common dCache instance, which is published in the CC IoP BDII. ATLAS users can use these resources with the standard ATLAS tools in the same way as the local storage, without noticing the geographical distribution. The computing clusters LUNA and EXMAG, dedicated to users mostly from the Solid State Physics departments, offer resources for parallel computing. They are part of the Czech NGI infrastructure MetaCentrum, with a distributed batch system based on Torque with a custom scheduler. Clusters are installed remotely by the MetaCentrum team and a local contact helps only when needed. Users from the IoP have exclusive access to only a part of these two clusters and take advantage of higher priorities on the rest (1500 cores in total), which can also be used by any user of MetaCentrum. IoP researchers can also use distant resources located in several towns of the Czech Republic with a total capacity of more than 12000 cores.

  4. Exploiting volatile opportunistic computing resources with Lobster

    Science.gov (United States)

    Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2015-12-01

    Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS specific software environment is provided via CVMFS and Parrot. Data is handled via Chirp and Hadoop for local data storage and XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools have been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.

  5. Chapter 29: Using an Existing Environment in the VO (IDL)

    Science.gov (United States)

    Miller, C. J.

    The local environment of a Brightest Cluster Galaxy (BCG) can provide insight into the (still not understood) formation process of the BCG itself. BCGs are the most massive galaxies in the Universe, and their formation and evolution are a popular and current research topic (Linden et al. 2006, Bernardi et al. 2006, Lauer et al. 2006). They have been studied for some time (Sandage 1972, Ostriker & Tremaine 1975, White 1976, Thuan & Romanishin 1981, Merritt 1985, Postman and Lauer 1995, among many others). Our goal in this chapter is to study how the local environment can affect the physical and measurable properties of BCGs. We will conduct an exploratory research exercise. In this chapter, we will show how the Virtual Observatory (VO) can be effectively utilized for doing modern scientific research on BCGs. We identify the scientific functionalities we need, the datasets we require, and the service locations in order to discover and access those data. This chapter utilizes IDL's VOlib, which is described in Chapter 24 of this book and is available at http://www.nvo.noao.edu. IDL provides the capability to perform the entire range of astronomical scientific analyses in one environment: from image reduction and analysis to complex catalog manipulations, statistics, and publication quality figures. At the 2005 and 2006 NVO Summer Schools, user statistics show that IDL was the most commonly used programming language by the students (nearly 3-to-1 over languages like IRAF, Perl, and Python). In this chapter we show how the integration of IDL to the VO through VOlib provides even greater capabilities and possibilities for conducting science in the era of the Virtual Observatory. The reader should familiarize themselves with the VOlib libraries before attempting the examples in this tutorial. We first build a research plan. We then discover the service URLs we will need to access the data. We then apply the necessary functions and tools to these data before we can do our

  6. iTools: a framework for classification, categorization and integration of computational biology resources.

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    2008-05-01

    Full Text Available The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long

  7. Argonne's Laboratory Computing Resource Center 2009 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B. (CLS-CI)

    2011-05-13

    Now in its seventh year of operation, the Laboratory Computing Resource Center (LCRC) continues to be an integral component of science and engineering research at Argonne, supporting a diverse portfolio of projects for the U.S. Department of Energy and other sponsors. The LCRC's ongoing mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting high-performance computing application use and development. This report describes scientific activities carried out with LCRC resources in 2009 and the broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. The LCRC Allocations Committee makes decisions on individual project allocations for Jazz. Committee members are appointed by the Associate Laboratory Directors and span a range of computational disciplines. The 350-node LCRC cluster, Jazz, began production service in April 2003 and has been a research work horse ever since. Hosting a wealth of software tools and applications and achieving high availability year after year, researchers can count on Jazz to achieve project milestones and enable breakthroughs. Over the years, many projects have achieved results that would have been unobtainable without such a computing resource. In fiscal year 2009, there were 49 active projects representing a wide cross-section of Laboratory research and almost all research divisions.

  8. Using OSG Computing Resources with (iLC)Dirac

    CERN Document Server

    AUTHOR|(SzGeCERN)683529; Petric, Marko

    2017-01-01

CPU cycles for small experiments and projects can be scarce, so making use of all available resources, whether dedicated or opportunistic, is mandatory. While enabling uniform access to the LCG computing elements (ARC, CREAM), the DIRAC grid interware was not able to use OSG computing elements (GlobusCE, HTCondor-CE) without dedicated support at the grid site through so-called 'SiteDirectors', which submit directly to the local batch system. This in turn requires additional dedicated effort for small experiments at the grid site. Adding interfaces to the OSG CEs through the respective grid middleware therefore allows them to be accessed within the DIRAC software without additional site-specific infrastructure. This enables greater use of opportunistic resources for experiments and projects without dedicated clusters or an established computing infrastructure with the DIRAC software. To allow sending jobs to HTCondor-CE and legacy Globus computing elements inside DIRAC, the required wrapper classes were develo...

  9. Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious

    Science.gov (United States)

    Cirasella, Jill

    2009-01-01

    This article presents an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news. All of the quantum computing resources described in this article are freely available, English-language web sites that fall into one…

  10. Enabling Grid Computing resources within the KM3NeT computing model

    Directory of Open Access Journals (Sweden)

    Filippidis Christos

    2016-01-01

Full Text Available KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that – located at the bottom of the Mediterranean Sea – will open a new window on the universe and answer fundamental questions in both particle physics and astrophysics. International collaborative scientific experiments like KM3NeT are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. Most of these experiments adopt computing models consisting of different tiers with several computing centres, providing a specific set of services for the different steps of data processing such as detector calibration, simulation, data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support these demanding computing requirements, we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method by which KM3NeT users can utilize the EGI computing resources in a simulation-driven use-case.

  11. GNU Data Language (GDL) - a free and open-source implementation of IDL

    Science.gov (United States)

    Arabas, Sylwester; Schellens, Marc; Coulais, Alain; Gales, Joel; Messmer, Peter

    2010-05-01

GNU Data Language (GDL) is developed with the aim of providing an open-source drop-in replacement for ITTVIS's Interactive Data Language (IDL). It is free software developed by an international team of volunteers led by Marc Schellens - the project's founder (a list of contributors is available on the project's website). The development is hosted on SourceForge, where GDL continuously ranks in the 99th percentile of most active projects. GDL with its library routines is designed as a tool for numerical data analysis and visualisation. Like its proprietary counterparts (IDL and PV-WAVE), GDL is used particularly in geosciences and astronomy. GDL is dynamically typed, vectorized and has object-oriented programming capabilities. The library routines handle numerical calculations, data visualisation, signal/image processing, interaction with the host OS and data input/output. GDL supports several data formats such as netCDF, HDF4, HDF5, GRIB, PNG, TIFF, DICOM, etc. Graphical output is handled by X11, PostScript, SVG or z-buffer terminals, the last one allowing output to be saved in a variety of raster graphics formats. GDL is an incremental compiler with integrated debugging facilities. It is written in C++ using the ANTLR language-recognition framework. Most of the library routines are implemented as interfaces to open-source packages such as the GNU Scientific Library, PLplot, FFTW, ImageMagick, and others. GDL features a Python bridge (Python code can be called from GDL; GDL can be compiled as a Python module). Extensions to GDL can be written in C++, GDL, and Python. A number of open software libraries written in IDL, such as the NASA Astronomy Library, MPFIT, CMSVLIB and TeXtoIDL, are fully or partially functional under GDL. Packaged versions of GDL are available for several Linux distributions and Mac OS X. The source code compiles on some other UNIX systems, including BSD and OpenSolaris. The presentation will cover the current status of the project, the key

  12. Argonne's Laboratory computing resource center : 2006 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P.

    2007-05-31

    Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff

  13. Automating usability of ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    Tupputi, S A; Girolamo, A Di; Kouba, T; Schovancová, J

    2014-01-01

The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective, a crucial case is the automatic handling of outages of ATLAS computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both the task of providing global monitoring and that of performing automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage area monitoring and central management at all levels. This review has involved the reordering and optimization of SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows the status of storage resources to be monitored with fine time granularity and automatic actions to be taken in foreseen cases, such as automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up problems, where and when needed. In this work we show the SAAB working principles and features. We also present the decrease in human interventions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
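A toy sketch of the inference idea behind a SAAB-like tool: a storage area's status is decided from its recent monitoring-test history. The window length, thresholds and two-state outcome are illustrative assumptions, not the actual SAAB algorithm.

```python
from collections import deque

class StorageAreaStatus:
    def __init__(self, window=10, blacklist_below=0.3, whitelist_above=0.8):
        self.history = deque(maxlen=window)   # most recent test outcomes (True = passed)
        self.blacklist_below = blacklist_below
        self.whitelist_above = whitelist_above
        self.state = "online"

    def record_test(self, passed: bool) -> str:
        """Update the history with one test result and infer the current state."""
        self.history.append(passed)
        if len(self.history) == self.history.maxlen:
            success_rate = sum(self.history) / len(self.history)
            if success_rate < self.blacklist_below:
                self.state = "blacklisted"     # automatic outage handling
            elif success_rate > self.whitelist_above:
                self.state = "online"          # recovered, re-enable the storage area
        return self.state
```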

  14. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Science.gov (United States)

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.
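A minimal sketch, under assumed thresholds and pricing, of the kind of on-the-fly scaling described above: the allocation changes when the observed response time violates (or comfortably satisfies) the service level agreement, and the price follows the allocation.

```python
def rescale(allocated_cpus, observed_rt_ms, sla_rt_ms, price_per_cpu, step=1):
    """Adjust the number of virtual CPUs allocated to a client during execution.
    Scale up when the SLA response time is violated, scale down when there is
    ample headroom; the new price follows the new allocation."""
    if observed_rt_ms > sla_rt_ms:
        allocated_cpus += step                               # SLA violated: add capacity
    elif observed_rt_ms < 0.5 * sla_rt_ms and allocated_cpus > 1:
        allocated_cpus -= step                               # plenty of headroom: save cost
    return allocated_cpus, allocated_cpus * price_per_cpu

print(rescale(allocated_cpus=2, observed_rt_ms=450, sla_rt_ms=300, price_per_cpu=0.05))
# -> scaled to 3 CPUs, at a correspondingly higher price
```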

  15. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Directory of Open Access Journals (Sweden)

    Bruno Guazzelli Batista

    Full Text Available Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  16. Development of Computer-Based Resources for Textile Education.

    Science.gov (United States)

    Hopkins, Teresa; Thomas, Andrew; Bailey, Mike

    1998-01-01

    Describes the production of computer-based resources for students of textiles and engineering in the United Kingdom. Highlights include funding by the Teaching and Learning Technology Programme (TLTP), courseware author/subject expert interaction, usage test and evaluation, authoring software, graphics, computer-aided design simulation, self-test…

  17. Effect of leaving milk trucks empty and idle for 6 h between raw milk loads.

    Science.gov (United States)

    Kuhn, Eva; Meunier-Goddik, Lisbeth; Waite-Cusic, Joy G

    2018-02-01

    The US Pasteurized Milk Ordinance (PMO) allows milk tanker trucks to be used repeatedly for 24 h before mandatory clean-in-place cleaning, but no specifications are given for the length of time a tanker can be empty between loads. We defined a worst-case hauling scenario as a hauling vessel left empty and dirty (idle) for extended periods between loads, especially in warm weather. Initial studies were conducted using 5-gallon milk cans (pilot-scale) as a proof-of-concept and to demonstrate that extended idle time intervals could contribute to compromised raw milk quality. Based on pilot-scale results, a commercial hauling study was conducted through partnership with a Pacific Northwest dairy co-op to verify that extended idle times of 6 h between loads have minimal influence on the microbiological populations and enzyme activity in subsequent loads of milk. Milk cans were used to haul raw milk (load 1), emptied, incubated at 30°C for 3, 6, 10, and 20 h, and refilled with commercially pasteurized whole milk (load 2) to measure cross-contamination. For the commercial-scale study, a single tanker was filled with milk from a farm known to have poorer quality milk (farm A, load 1), emptied, and refilled immediately (0 h) or after a delay (6 h) with milk from a farm known to have superior quality milk (farm B, load 2). In both experiments, milk samples were obtained from each farm's bulk tank and from the milk can or tanker before unloading. Each sample was microbiologically assessed for standard plate count (SPC), lactic acid bacteria (LAB), and coliform counts. Selected isolates were assessed for lipolytic and proteolytic activity using spirit blue agar and skim milk agar, respectively. The pilot-scale experiment effectively demonstrated that extended periods of idle (>3 h) of soiled hauling vessels can significantly affect the microbiological quality of raw milk in subsequent loads; however, extended idle times of 6 h or less would not measurably compromise milk

  18. Reflections on the different sides of idleness in contemporary times

    Directory of Open Access Journals (Sweden)

    Patrícia Zaczuk Bassinello

    2015-04-01

Full Text Available Over the last century, idleness went through a process of modernization and democratization, especially with the crisis of a society focused on work – the post-Industrial Revolution – and the emergence of new ideas that placed free time, leisure and recreation in the role of structural elements of the new social context and as tools for new ways of life. In this work, we seek to focus on the significant aspects of the reality and function of leisure in our time, clarifying its relationship with processes of personal, social and economic innovation by establishing a balance in our ways of thinking about leisure and work, and leisure and life, from different angles of approach. In order to analyze this phenomenon, we drew on scientific sources representative of the context and then elaborated a general overview of the subject from the contributions of Bakhtinian perspectives. We observed that the increase in leisure options in the last decades of the twentieth century, along with the growth of studies of the idleness phenomenon and its possibilities, allowed an evolution of its concepts, from activities or practices associated with consumption and digital entertainment to its understanding as an experience in which the key point of the discussion is the subject living these experiences. We believe that this reflection on idleness may open possibilities for a better comprehension of its insertion in the field of the social and human sciences and, especially, of its contribution to a new attitude of relational production, centered on the subject, which stimulates a society that creates and innovates goods and services and deepens the studies of leisure from the dynamic experiential horizon to the right to otherness and to one's time – one's own and that of others – such as "the right to unfunctionality", from listening to the other's word.

  19. Parallel visualization on leadership computing resources

    Energy Technology Data Exchange (ETDEWEB)

    Peterka, T; Ross, R B [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Shen, H-W [Department of Computer Science and Engineering, Ohio State University, Columbus, OH 43210 (United States); Ma, K-L [Department of Computer Science, University of California at Davis, Davis, CA 95616 (United States); Kendall, W [Department of Electrical Engineering and Computer Science, University of Tennessee at Knoxville, Knoxville, TN 37996 (United States); Yu, H, E-mail: tpeterka@mcs.anl.go [Sandia National Laboratories, California, Livermore, CA 94551 (United States)

    2009-07-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using similar techniques as those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  20. Parallel visualization on leadership computing resources

    International Nuclear Information System (INIS)

    Peterka, T; Ross, R B; Shen, H-W; Ma, K-L; Kendall, W; Yu, H

    2009-01-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using similar techniques as those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  1. Argonne's Laboratory Computing Resource Center : 2005 annual report.

    Energy Technology Data Exchange (ETDEWEB)

    Bair, R. B.; Coghlan, S. C; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P.

    2007-06-30

    Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure

  2. Dimetrodon: Processor-level Preventive Thermal Management via Idle Cycle Injection

    OpenAIRE

    Reddi, Vijay Janapa; Gandhi, Sanjay; Brooks, David M.; Seltzer, Margo I.; Bailis, Peter

    2011-01-01

    Processor-level dynamic thermal management techniques have long targeted worst-case thermal margins. We examine the thermal-performance trade-offs in average-case, preventive thermal management by actively degrading application performance to achieve long-term thermal control. We propose Dimetrodon, the use of idle cycle injection, a flexible, per-thread technique, as a preventive thermal management mechanism and demonstrate its efficiency compared to hardware techniques in a commodity operatin...
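The idle-cycle-injection idea can be sketched as a worker loop that interleaves useful work with injected sleeps whose length grows as the measured temperature exceeds a target, trading throughput for long-term thermal control. The temperature source, thresholds and duty-cycle rule below are illustrative assumptions, not Dimetrodon's actual mechanism.

```python
import time

def read_core_temp() -> float:
    """Placeholder for a real temperature reading (e.g. from /sys or a monitoring daemon)."""
    return 72.0

def throttled_loop(work_item_fn, target_temp=70.0, max_idle_frac=0.5):
    """Interleave slices of useful work with injected idle cycles when the core runs hot."""
    while True:                                       # runs until the thread is stopped
        over = max(0.0, read_core_temp() - target_temp)
        idle_frac = min(max_idle_frac, over / 10.0)   # degrade performance gradually
        start = time.monotonic()
        work_item_fn()                                # do one slice of useful work
        busy = time.monotonic() - start
        if idle_frac > 0:
            # Sleep long enough that idle time is idle_frac of the total period.
            time.sleep(busy * idle_frac / (1 - idle_frac))
```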

  3. Application of Selective Algorithm for Effective Resource Provisioning in Cloud Computing Environment

    OpenAIRE

    Katyal, Mayanka; Mishra, Atul

    2014-01-01

Modern-day continued demand for resource-hungry services and applications in the IT sector has led to the development of Cloud computing. A Cloud computing environment involves high-cost infrastructure on one hand and needs large-scale computational resources on the other hand. These resources need to be provisioned (allocated and scheduled) to the end users in the most efficient manner so that the tremendous capabilities of the cloud are utilized effectively and efficiently. In this paper we discuss a selecti...

  4. Computing Resource And Work Allocations Using Social Profiles

    Directory of Open Access Journals (Sweden)

    Peter Lavin

    2013-01-01

    Full Text Available If several distributed and disparate computer resources exist, many of which have been created for different and diverse reasons, and several large scale computing challenges also exist with similar diversity in their backgrounds, then one problem which arises in trying to assemble enough of these resources to address such challenges is the need to align and accommodate the different motivations and objectives which may lie behind the existence of both the resources and the challenges. Software agents are offered as a mainstream technology for modelling the types of collaborations and relationships needed to do this. As an initial step towards forming such relationships, agents need a mechanism to consider social and economic backgrounds. This paper explores addressing social and economic differences using a combination of textual descriptions known as social profiles and search engine technology, both of which are integrated into an agent technology.

  5. An Economic Framework for Resource Allocation in Ad-hoc Grids

    OpenAIRE

    Pourebrahimi, B.

    2009-01-01

    In this dissertation, we present an economic framework to study and develop different market-based mechanisms for resource allocation in an ad-hoc Grid. Such an economic framework helps to understand the impact of certain choices and explores what are the suitable mechanisms from Grid user/owner perspectives under given circumstances. We focus on resource allocation in a Grid-based environment in the case where some resources are lying idle and could be linked with overloaded nodes in a netwo...

  6. Idle reduction assessment for the New York State Department of Transportation region 4 fleet.

    Science.gov (United States)

    2015-03-01

    Energetics Incorporated conducted a study to evaluate the operational, economic, and environmental impacts of advanced technologies to reduce idling in : the New York State Department of Transportation (NYSDOT) Region 4 fleet without compromising fun...

  7. IUE Data Analysis Software for Personal Computers

    Science.gov (United States)

    Thompson, R.; Caplinger, J.; Taylor, L.; Lawton, P.

    1996-01-01

    This report summarizes the work performed for the program titled, "IUE Data Analysis Software for Personal Computers" awarded under Astrophysics Data Program NRA 92-OSSA-15. The work performed was completed over a 2-year period starting in April 1994. As a result of the project, 450 IDL routines and eight database tables are now available for distribution for Power Macintosh computers and Personal Computers running Windows 3.1.

  8. GridFactory - Distributed computing on ephemeral resources

    DEFF Research Database (Denmark)

    Orellana, Frederik; Niinimaki, Marko

    2011-01-01

    A novel batch system for high throughput computing is presented. The system is specifically designed to leverage virtualization and web technology to facilitate deployment on cloud and other ephemeral resources. In particular, it implements a security model suited for forming collaborations...

  9. LHCb Computing Resources: 2012 re-assessment, 2013 request and 2014 forecast

    CERN Document Server

    Graciani Diaz, Ricardo

    2012-01-01

    This note covers the following aspects: re-assessment of computing resource usage estimates for 2012 data-taking period, request of computing resource needs for 2013, and a first forecast of the 2014 needs, when restart of data-taking is foreseen. Estimates are based on 2011 experience, as well as on the results of a simulation of the computing model described in the document. Differences in the model and deviations in the estimates from previous presented results are stressed.

  10. Large Data at Small Universities: Astronomical processing using a computer classroom

    Science.gov (United States)

    Fuller, Nathaniel James; Clarkson, William I.; Fluharty, Bill; Belanger, Zach; Dage, Kristen

    2016-06-01

    The use of large computing clusters for astronomy research is becoming more commonplace as datasets expand, but access to these required resources is sometimes difficult for research groups working at smaller Universities. As an alternative to purchasing processing time on an off-site computing cluster, or purchasing dedicated hardware, we show how one can easily build a crude on-site cluster by utilizing idle cycles on instructional computers in computer-lab classrooms. Since these computers are maintained as part of the educational mission of the University, the resource impact on the investigator is generally low. By using open source Python routines, it is possible to have a large number of desktop computers working together via a local network to sort through large data sets. By running traditional analysis routines in an “embarrassingly parallel” manner, gains in speed are accomplished without requiring the investigator to learn how to write routines using highly specialized methodology. We demonstrate this concept here applied to (1) photometry of large-format images and (2) statistical significance tests for X-ray lightcurve analysis. In these scenarios, we see a speed-up factor which scales almost linearly with the number of cores in the cluster. Additionally, we show that the usage of the cluster does not severely limit performance for a local user, and indeed the processing can be performed while the computers are in use for classroom purposes.
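
    As a minimal sketch of the "embarrassingly parallel" pattern described above (not the authors' code), the standard-library multiprocessing pool below stands in for a room of lab machines; the per-image measurement function and the data directory are placeholders.

      import glob
      from multiprocessing import Pool

      def measure_image(path):
          """Placeholder for a per-image photometry routine."""
          # A real pipeline would load the frame and measure source fluxes here.
          return path, len(path)

      if __name__ == "__main__":
          frames = sorted(glob.glob("frames/*.fits"))    # hypothetical data directory
          # Each image is independent, so no inter-task communication is needed.
          with Pool(processes=8) as pool:                # worker count is an assumption
              for path, result in pool.imap_unordered(measure_image, frames):
                  print(path, result)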

  11. 40 CFR 85.2219 - Idle test with loaded preconditioning-EPA 91.

    Science.gov (United States)

    2010-07-01

    ... (5.1-6.3). 7 or more 32-35 (52-56) 8.4-10.8 (6.3-8.1). (2) Idle mode—(i) Ford Motor Company and Honda vehicles. (Optional.) The engines of 1981-1987 model year Ford Motor Company vehicles and 1984-1985 model... also be used for 1988-1989 model year Ford Motor Company vehicles but may not be used for other...

  12. LHCb Computing Resources: 2011 re-assessment, 2012 request and 2013 forecast

    CERN Document Server

    Graciani, R

    2011-01-01

    This note covers the following aspects: re-assessment of computing resource usage estimates for the 2011 data-taking period, request of computing resource needs for the 2012 data-taking period, and a first forecast of the 2013 needs, when no data taking is foreseen. Estimates are based on 2010 experience and the latest updates to the LHC schedule, as well as on a new implementation of the computing model simulation tool. Differences in the model and deviations in the estimates from previously presented results are stressed.

  13. Can the identification of an idle line facilitate its removal? A comparison between a proposed guideline and clinical practice.

    Science.gov (United States)

    Kara, Areeba; Johnson, Cynthia S; Murray, Michelle; Dillon, Jill; Hui, Siu L

    2016-07-01

    There are 250,000 cases of central line-associated blood stream infections in the United States annually, some of which may be prevented by the removal of lines that are no longer needed. We aimed to test the performance of criteria identifying an idle line as a guideline to facilitate its removal. Patients with central lines on the wards were identified. Criteria for justified use were defined. If none were met, the line was considered "idle." We proposed the guideline that a line may be removed the day following the first idle day and compared actual practice with our proposed guideline. One hundred twenty-six lines in 126 patients were observed. Eighty-three (65.9%) were peripherally inserted central catheters. Twenty-seven percent (n = 34) were placed for antibiotics. Seventy-six patients had lines removed prior to discharge. In these patients, the line was in place for 522 days, of which 32.7% were idle. The most common reasons to justify the line included parenteral antibiotics and meeting systemic inflammatory response syndrome (SIRS) criteria. In 11 (14.5%) patients, the line was removed prior to the proposed guideline. Most (n = 36, 47.4%) line removals were observed to be in accordance with our guideline. In another 29 (38.2%), line removal was delayed compared to our guideline. Idle days are common. Central line days may be reduced by the consistent daily reevaluation of a line's justification using defined criteria. The practice of routine central line placement for prolonged antibiotics and the inclusion of SIRS criteria to justify the line may need to be reevaluated. Journal of Hospital Medicine 2016;11:489-493. © 2016 Society of Hospital Medicine.

  14. Economic models for management of resources in peer-to-peer and grid computing

    Science.gov (United States)

    Buyya, Rajkumar; Stockinger, Heinz; Giddy, Jonathan; Abramson, David

    2001-07-01

    The accelerated development in Peer-to-Peer (P2P) and Grid computing has positioned them as promising next generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development and usage models in these environments is a complex undertaking. This is due to the geographic distribution of resources that are owned by different organizations or peers. The resource owners of each of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real world market, there exist various economic models for setting the price for goods based on supply-and-demand and their value to the user. They include commodity market, posted price, tenders and auctions. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value and the necessary infrastructure to realize them. In addition to normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed that contains peer-to-peer resources located on five continents: Asia, Australia, Europe, North America, and South America.
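
    As a toy illustration of two of the pricing models named above (not the Nimrod/G implementation), the sketch below contrasts a posted-price allocation with a simple sealed-bid auction for a single resource slot; all names, prices and bids are invented.

      def posted_price_allocation(providers, budget):
          """Pick the cheapest provider whose posted price fits the consumer's budget."""
          affordable = [p for p in providers if p["price"] <= budget]
          return min(affordable, key=lambda p: p["price"]) if affordable else None

      def first_price_auction(bids):
          """Sealed-bid auction: the highest bidder wins and pays its own bid."""
          winner = max(bids, key=lambda b: b["bid"])
          return winner["consumer"], winner["bid"]

      providers = [{"name": "siteA", "price": 4.0}, {"name": "siteB", "price": 2.5}]
      print(posted_price_allocation(providers, budget=3.0))           # -> siteB entry
      print(first_price_auction([{"consumer": "jobX", "bid": 1.2},
                                 {"consumer": "jobY", "bid": 2.0}]))  # -> ('jobY', 2.0)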

  15. Next Generation Computer Resources: Reference Model for Project Support Environments (Version 2.0)

    National Research Council Canada - National Science Library

    Brown, Alan

    1993-01-01

    The objective of the Next Generation Computer Resources (NGCR) program is to restructure the Navy's approach to acquisition of standard computing resources to take better advantage of commercial advances and investments...

  16. Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments

    Directory of Open Access Journals (Sweden)

    Jose M. Moya

    2012-08-01

    Full Text Available Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, require constantly increasing high computational demands in order to process data and offer services to users. The nature of these applications imply the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSNs infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.

  17. Ubiquitous green computing techniques for high demand applications in Smart environments.

    Science.gov (United States)

    Zapater, Marina; Sanchez, Cesar; Ayala, Jose L; Moya, Jose M; Risco-Martín, José L

    2012-01-01

    Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, require constantly increasing high computational demands in order to process data and offer services to users. The nature of these applications imply the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSNs infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.
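
    A greedy toy version of the redistribution idea (not the authors' algorithm): low-demand tasks are placed on idle low- and medium-resource nodes whenever one has enough spare capacity, and fall back to the high-performance facility otherwise. Capacities and power figures are invented.

      def assign_tasks(tasks, idle_nodes, datacenter):
          """Greedy, application-aware placement: small tasks go to idle nodes first."""
          placement = {}
          for task in sorted(tasks, key=lambda t: t["demand"]):        # smallest first
              candidates = [n for n in idle_nodes if n["capacity"] >= task["demand"]]
              if candidates:
                  node = min(candidates, key=lambda n: n["watts_per_unit"])
                  node["capacity"] -= task["demand"]
                  placement[task["name"]] = node["name"]
              else:
                  placement[task["name"]] = datacenter                  # fallback
          return placement

      tasks = [{"name": "t1", "demand": 2}, {"name": "t2", "demand": 9}]
      nodes = [{"name": "node-a", "capacity": 4, "watts_per_unit": 0.5},
               {"name": "node-b", "capacity": 3, "watts_per_unit": 0.4}]
      print(assign_tasks(tasks, nodes, datacenter="hpc-facility"))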

  18. Active resources concept of computation for enterprise software

    Directory of Open Access Journals (Sweden)

    Koryl Maciej

    2017-06-01

    Full Text Available Traditional computational models for enterprise software are still to a great extent centralized. However, rapid growing of modern computation techniques and frameworks causes that contemporary software becomes more and more distributed. Towards development of new complete and coherent solution for distributed enterprise software construction, synthesis of three well-grounded concepts is proposed: Domain-Driven Design technique of software engineering, REST architectural style and actor model of computation. As a result new resources-based framework arises, which after first cases of use seems to be useful and worthy of further research.

  19. LHCb Computing Resource usage in 2017

    CERN Document Server

    Bozzi, Concezio

    2018-01-01

    This document reports the usage of computing resources by the LHCb collaboration during the period January 1st – December 31st 2017. The data in the following sections have been compiled from the EGI Accounting portal: https://accounting.egi.eu. For LHCb specific information, the data is taken from the DIRAC Accounting at the LHCb DIRAC Web portal: http://lhcb-portal-dirac.cern.ch.

  20. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing.

    Science.gov (United States)

    Zhang, Nan; Yang, Xiaolong; Zhang, Min; Sun, Yan

    2016-01-01

    Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we may allocate the revenues based on their attributions according to the concept of the "Shapley value" to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network.

  1. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing.

    Directory of Open Access Journals (Sweden)

    Nan Zhang

    Full Text Available Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we may allocate the revenues based on their attributions according to the concept of the "Shapley value" to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network.
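
    The revenue-sharing step can be illustrated with a brute-force Shapley value computation (exact only for small coalitions); the characteristic function below is a made-up example, not data from the paper.

      from itertools import permutations

      def shapley_values(players, value):
          """Exact Shapley value: average marginal contribution over all orderings."""
          totals = {p: 0.0 for p in players}
          orderings = list(permutations(players))
          for order in orderings:
              coalition = []
              for p in order:
                  before = value(frozenset(coalition))
                  coalition.append(p)
                  after = value(frozenset(coalition))
                  totals[p] += after - before          # marginal contribution of p
          return {p: totals[p] / len(orderings) for p in players}

      # Made-up characteristic function: revenue a set of devices can earn together.
      revenues = {frozenset(): 0, frozenset({"A"}): 1, frozenset({"B"}): 2,
                  frozenset({"A", "B"}): 5}
      print(shapley_values(["A", "B"], lambda s: revenues[s]))   # -> {'A': 2.0, 'B': 3.0}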

  2. Multicriteria Resource Brokering in Cloud Computing for Streaming Service

    Directory of Open Access Journals (Sweden)

    Chih-Lun Chou

    2015-01-01

    Full Text Available By leveraging cloud computing such as Infrastructure as a Service (IaaS, the outsourcing of computing resources used to support operations, including servers, storage, and networking components, is quite beneficial for various providers of Internet application. With this increasing trend, resource allocation that both assures QoS via Service Level Agreement (SLA and avoids overprovisioning in order to reduce cost becomes a crucial priority and challenge in the design and operation of complex service-based platforms such as streaming service. On the other hand, providers of IaaS also concern their profit performance and energy consumption while offering these virtualized resources. In this paper, considering both service-oriented and infrastructure-oriented criteria, we regard this resource allocation problem as Multicriteria Decision Making problem and propose an effective trade-off approach based on goal programming model. To validate its effectiveness, a cloud architecture for streaming application is addressed and extensive analysis is performed for related criteria. The results of numerical simulations show that the proposed approach strikes a balance between these conflicting criteria commendably and achieves high cost efficiency.
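
    A toy, weighted-goal flavor of the trade-off described above (not the paper's goal programming model): candidate allocations are scored by the weighted deviations of their cost, energy and SLA metrics from target goals, and the least-deviating one is chosen. Goals, weights and candidate values are invented.

      def goal_programming_choice(candidates, goals, weights):
          """Pick the candidate minimizing the weighted sum of unwanted deviations."""
          def deviation(c):
              total = 0.0
              for metric, target in goals.items():
                  # Only deviations in the undesirable direction are penalized.
                  total += weights[metric] * max(0.0, c[metric] - target)
              return total
          return min(candidates, key=deviation)

      candidates = [
          {"name": "allocA", "cost": 8.0, "energy": 5.0, "sla_violation": 0.02},
          {"name": "allocB", "cost": 6.0, "energy": 7.0, "sla_violation": 0.05},
      ]
      goals   = {"cost": 7.0, "energy": 6.0, "sla_violation": 0.03}
      weights = {"cost": 1.0, "energy": 0.5, "sla_violation": 100.0}
      print(goal_programming_choice(candidates, goals, weights)["name"])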

  3. Application of Finite Element Based Simulation and Modal Testing Methods to Improve Vehicle Powertrain Idle Vibration

    Directory of Open Access Journals (Sweden)

    Polat Sendur

    2017-01-01

    Full Text Available Current practice of analytical and test methods related to the analysis, testing and improvement of vehicle vibrations is overviewed. The methods are illustrated on the determination and improvement of powertrain induced steering wheel vibration of a heavy commercial truck. More specifically, the transmissibility of powertrain idle vibration to cabin is investigated with respect to powertrain rigid body modes and modal alignment of the steering column/wheel system is considered. It is found out that roll mode of the powertrain is not separated from idle excitation for effective vibration isolation as well as steering wheel column mode is close to the 3rd engine excitation frequency order, which results in high vibration levels. Powertrain roll mode is optimized by tuning the powertrain mount stiffness to improve the performance. Steering column mode is also separated from the 3rd engine excitation frequency by the application of a mass absorber. It is concluded that the use of analytical and test methods to address the complex relation between design parameters and powertrain idle response is effective to optimize the system performance and evaluate the trade-offs in the vehicle design such as vibration performance and weight. Reference Number: www.asrongo.org/doi:4.2017.2.1.10

  4. Dynamic provisioning of local and remote compute resources with OpenStack

    Science.gov (United States)

    Giffels, M.; Hauth, T.; Polgart, F.; Quast, G.

    2015-12-01

    Modern high-energy physics experiments rely on the extensive usage of computing resources, both for the reconstruction of measured events as well as for Monte-Carlo simulation. The Institut fur Experimentelle Kernphysik (EKP) at KIT is participating in both the CMS and Belle experiments with computing and storage resources. In the upcoming years, these requirements are expected to increase due to growing amount of recorded data and the rise in complexity of the simulated events. It is therefore essential to increase the available computing capabilities by tapping into all resource pools. At the EKP institute, powerful desktop machines are available to users. Due to the multi-core nature of modern CPUs, vast amounts of CPU time are not utilized by common desktop usage patterns. Other important providers of compute capabilities are classical HPC data centers at universities or national research centers. Due to the shared nature of these installations, the standardized software stack required by HEP applications cannot be installed. A viable way to overcome this constraint and offer a standardized software environment in a transparent manner is the usage of virtualization technologies. The OpenStack project has become a widely adopted solution to virtualize hardware and offer additional services like storage and virtual machine management. This contribution will report on the incorporation of the institute's desktop machines into a private OpenStack Cloud. The additional compute resources provisioned via the virtual machines have been used for Monte-Carlo simulation and data analysis. Furthermore, a concept to integrate shared, remote HPC centers into regular HEP job workflows will be presented. In this approach, local and remote resources are merged to form a uniform, virtual compute cluster with a single point-of-entry for the user. Evaluations of the performance and stability of this setup and operational experiences will be discussed.
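
    In outline, the demand-driven part of such a cloud manager reduces to a scaling loop of the following shape; start_vm and stop_vm are placeholders for whatever cloud API (for example OpenStack) is actually used, not real library calls, and the slots-per-VM figure is an assumption.

      SLOTS_PER_VM = 4   # assumed number of job slots per virtual worker

      def scale(queued_jobs, running_vms, idle_vms, start_vm, stop_vm):
          """One iteration of a demand-driven scaling decision."""
          needed_vms = -(-queued_jobs // SLOTS_PER_VM)   # ceiling division
          if needed_vms > running_vms:
              for _ in range(needed_vms - running_vms):
                  start_vm()                             # boot an additional worker
          elif queued_jobs == 0 and idle_vms > 0:
              for _ in range(idle_vms):
                  stop_vm()                              # release idle workers

      # Usage sketch with stub calls standing in for the real cloud API.
      scale(queued_jobs=10, running_vms=1, idle_vms=0,
            start_vm=lambda: print("requesting VM"),
            stop_vm=lambda: print("terminating VM"))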

  5. LHCb Computing Resources: 2019 requests and reassessment of 2018 requests

    CERN Document Server

    Bozzi, Concezio

    2017-01-01

    This document presents the computing resources needed by LHCb in 2019 and a reassessment of the 2018 requests, as resulting from the current experience of Run2 data taking and minor changes in the LHCb computing model parameters.

  6. A Review of Computer Science Resources for Learning and Teaching with K-12 Computing Curricula: An Australian Case Study

    Science.gov (United States)

    Falkner, Katrina; Vivian, Rebecca

    2015-01-01

    To support teachers to implement Computer Science curricula into classrooms from the very first year of school, teachers, schools and organisations seek quality curriculum resources to support implementation and teacher professional development. Until now, many Computer Science resources and outreach initiatives have targeted K-12 school-age…

  7. Resource-Aware Load Balancing Scheme using Multi-objective Optimization in Cloud Computing

    OpenAIRE

    Kavita Rana; Vikas Zandu

    2016-01-01

    Cloud computing is a service-based, on-demand, pay-per-use model consisting of interconnected and virtualized resources delivered over the internet. In cloud computing, there are usually a number of jobs that need to be executed with the available resources to achieve optimal performance, the least possible total completion time, the shortest response time, and efficient utilization of resources. Hence, job scheduling is the most important concern, aiming to ensure that users' requirements are ...

  8. VECTR: Virtual Environment Computational Training Resource

    Science.gov (United States)

    Little, William L.

    2018-01-01

    The Westridge Middle School Curriculum and Community Night is an annual event designed to introduce students and parents to potential employers in the Central Florida area. NASA participated in the event in 2017, and has been asked to come back for the 2018 event on January 25. We will be demonstrating our Microsoft Hololens Virtual Rovers project, and the Virtual Environment Computational Training Resource (VECTR) virtual reality tool.

  9. BelleII@home: Integrate volunteer computing resources into DIRAC in a secure way

    Science.gov (United States)

    Wu, Wenjing; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo; Kan, Wenxiao; Urquijo, Phillip

    2017-10-01

    The exploitation of volunteer computing resources has become a popular practice in the HEP computing community as the huge amount of potential computing power it provides. In the recent HEP experiments, the grid middleware has been used to organize the services and the resources, however it relies heavily on the X.509 authentication, which is contradictory to the untrusted feature of volunteer computing resources, therefore one big challenge to utilize the volunteer computing resources is how to integrate them into the grid middleware in a secure way. The DIRAC interware which is commonly used as the major component of the grid computing infrastructure for several HEP experiments proposes an even bigger challenge to this paradox as its pilot is more closely coupled with operations requiring the X.509 authentication compared to the implementations of pilot in its peer grid interware. The Belle II experiment is a B-factory experiment at KEK, and it uses DIRAC for its distributed computing. In the project of BelleII@home, in order to integrate the volunteer computing resources into the Belle II distributed computing platform in a secure way, we adopted a new approach which detaches the payload running from the Belle II DIRAC pilot which is a customized pilot pulling and processing jobs from the Belle II distributed computing platform, so that the payload can run on volunteer computers without requiring any X.509 authentication. In this approach we developed a gateway service running on a trusted server which handles all the operations requiring the X.509 authentication. So far, we have developed and deployed the prototype of BelleII@home, and tested its full workflow which proves the feasibility of this approach. This approach can also be applied on HPC systems whose work nodes do not have outbound connectivity to interact with the DIRAC system in general.
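
    The gateway pattern described above can be caricatured as follows: the volunteer host only speaks plain HTTPS to a trusted gateway, which holds the X.509 credentials and talks to DIRAC on its behalf. The URL and JSON fields are purely illustrative; this is not the BelleII@home code.

      import json
      import urllib.request

      GATEWAY = "https://gateway.example.org"   # hypothetical trusted gateway

      def fetch_payload():
          """Volunteer side: pull a job description without any X.509 credentials."""
          with urllib.request.urlopen(GATEWAY + "/next-job") as resp:
              return json.load(resp)            # e.g. {"job_id": 1, "cmd": [...]}

      def report_result(job_id, status):
          """Volunteer side: push the outcome back; the gateway uploads it to DIRAC."""
          data = json.dumps({"job_id": job_id, "status": status}).encode()
          req = urllib.request.Request(GATEWAY + "/result", data=data,
                                       headers={"Content-Type": "application/json"})
          urllib.request.urlopen(req)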

  10. Quantum Computing: Selected Internet Resources for Librarians, Researchers, and the Casually Curious

    OpenAIRE

    Cirasella, Jill

    2009-01-01

    This article is an annotated selection of the most important and informative Internet resources for learning about quantum computing, finding quantum computing literature, and tracking quantum computing news.

  11. Software Defined Resource Orchestration System for Multitask Application in Heterogeneous Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Qi Qi

    2016-01-01

    Full Text Available Mobile cloud computing (MCC), which combines mobile computing and the cloud concept, takes the wireless access network as the transmission medium and uses mobile devices as the client. When offloading a complicated multitask application to the MCC environment, each task executes individually in terms of its own computation, storage, and bandwidth requirements. Due to user mobility, the provided resources have different performance metrics that may affect the destination choice. Nevertheless, these heterogeneous MCC resources lack integrated management and can hardly cooperate with each other. Thus, how to choose the appropriate offload destination and orchestrate the resources for multiple tasks is a challenging problem. This paper realizes programmable resource provisioning for heterogeneous energy-constrained computing environments, where a software-defined controller is responsible for resource orchestration, offloading, and migration. The resource orchestration is formulated as a multiobjective optimization problem that contains the metrics of energy consumption, cost, and availability. Finally, a particle swarm algorithm is used to obtain approximate optimal solutions. Simulation results show that the solutions for all of our studied cases almost hit the Pareto optimum and surpass the comparative algorithm in approximation, coverage, and execution time.
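
    A compact particle swarm optimization sketch in the spirit of the solver mentioned above, minimizing a stand-in cost over continuous decision variables; the objective, bounds and PSO constants are generic textbook choices rather than the paper's.

      import random

      def pso(objective, dim, bounds, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
          """Minimize objective(x) over a box using basic particle swarm optimization."""
          lo, hi = bounds
          pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
          vel = [[0.0] * dim for _ in range(particles)]
          pbest = [p[:] for p in pos]
          pbest_val = [objective(p) for p in pos]
          g = pbest[pbest_val.index(min(pbest_val))][:]
          for _ in range(iters):
              for i in range(particles):
                  for d in range(dim):
                      vel[i][d] = (w * vel[i][d]
                                   + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                   + c2 * random.random() * (g[d] - pos[i][d]))
                      pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                  val = objective(pos[i])
                  if val < pbest_val[i]:
                      pbest[i], pbest_val[i] = pos[i][:], val
                      if val < objective(g):
                          g = pos[i][:]
          return g

      # Stand-in cost: weighted energy plus price for a two-variable allocation.
      print(pso(lambda x: 0.6 * x[0] ** 2 + 0.4 * (x[1] - 1) ** 2, dim=2, bounds=(-5, 5)))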

  12. General-purpose computer networks and resource sharing in ERDA. Volume 3. Remote resource-sharing experience and findings

    Energy Technology Data Exchange (ETDEWEB)

    1977-07-15

    The investigation focused on heterogeneous networks in which a variety of dissimilar computers and operating systems were interconnected nationwide. Homogeneous networks, such as MFE net and SACNET, were not considered since they could not be used for general purpose resource sharing. Issues of privacy and security are of concern in any network activity. However, consideration of privacy and security of sensitive data arise to a much lesser degree in unclassified scientific research than in areas involving personal or proprietary information. Therefore, the existing mechanisms at individual sites for protecting sensitive data were relied on, and no new protection mechanisms to prevent infringement of privacy and security were attempted. Further development of ERDA networking will need to incorporate additional mechanisms to prevent infringement of privacy. The investigation itself furnishes an excellent example of computational resource sharing through a heterogeneous network. More than twenty persons, representing seven ERDA computing sites, made extensive use of both ERDA and non-ERDA computers in coordinating, compiling, and formatting the data which constitute the bulk of this report. Volume 3 analyzes the benefits and barriers encountered in actual resource sharing experience, and provides case histories of typical applications.

  13. Low IDL-B and high LDL-1 subfraction levels in serum of ALS patients.

    Science.gov (United States)

    Delaye, J B; Patin, F; Piver, E; Bruno, C; Vasse, M; Vourc'h, P; Andres, C R; Corcia, P; Blasco, H

    2017-09-15

    Converging evidence highlights that lipid metabolism plays a key role in ALS pathophysiology. Dyslipidemia has been described in ALS patients and may be protective, but peripheral lipoprotein subclasses have never been studied. We collected sera from 30 ALS patients and 30 gender- and age-matched controls. We analyzed 11 distinct lipoprotein subclasses by linear polyacrylamide gel electrophoresis (Lipoprint, Quantimetrix Corporation, USA). We also measured lipoprotein (a), apolipoprotein B, and apolipoprotein E levels. ALS patients had significantly higher total cholesterol, HDL-cholesterol, and LDL-cholesterol levels than controls, with lower IDL-B and higher LDL-1 subfraction levels in ALS patients than controls. Our preliminary work confirmed the association between ALS and dyslipidemia. The low IDL-B levels may explain the hepatic steatosis frequently reported in ALS. The high levels of the cholesterol-rich LDL-1 subfraction are consistent with previously reported hypercholesterolemia. This study describes, for the first time, the distribution of serum lipoproteins in ALS patients, with low IDL-B and high LDL-1 subfraction levels. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Associations of PM2.5 and black carbon concentrations with traffic, idling, background pollution, and meteorology during school dismissals.

    Science.gov (United States)

    Richmond-Bryant, J; Saganich, C; Bukiewicz, L; Kalin, R

    2009-05-01

    An air quality study was performed outside a cluster of schools in the East Harlem neighborhood of New York City. PM(2.5) and black carbon concentrations were monitored using real-time equipment with a one-minute averaging interval. Monitoring was performed at 1:45-3:30 PM during school days over the period October 31-November 17, 2006. The designated time period was chosen to capture vehicle emissions during end-of-day dismissals from the schools. During the monitoring period, minute-by-minute volume counts of idling and passing school buses, diesel trucks, and automobiles were obtained. These data were transcribed into time series of number of diesel vehicles idling, number of gasoline automobiles idling, number of diesel vehicles passing, and number of automobiles passing along the block adjacent to the school cluster. Multivariate regression models of the log-transform of PM(2.5) and black carbon (BC) concentrations in the East Harlem street canyon were developed using the observation data and data from the New York State Department of Environmental Conservation on meteorology and background PM(2.5). Analysis of variance was used to test the contribution of each covariate to variability in the log-transformed concentrations as a means to judge the relative contribution of each covariate. The models demonstrated that variability in background PM(2.5) contributes 80.9% of the variability in log[PM(2.5)] and 81.5% of the variability in log[BC]. Local traffic sources were demonstrated to contribute 5.8% of the variability in log[BC] and only 0.43% of the variability in log[PM(2.5)]. Diesel idling and passing were both significant contributors to variability in log[BC], while diesel passing was a significant contributor to log[PM(2.5)]. Automobile idling and passing did not contribute significant levels of variability to either concentration. The remainder of variability in each model was explained by temperature, along-canyon wind, and cross-canyon wind, which were
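
    In outline, the regression described above (log-transformed concentration against background PM2.5, traffic counts and meteorology) can be fitted with ordinary least squares; the data below are random placeholders, not the study's observations.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      # Placeholder covariates: background PM2.5, diesel idling/passing counts, wind.
      background = rng.uniform(5, 25, n)
      diesel_idle = rng.integers(0, 10, n)
      diesel_pass = rng.integers(0, 20, n)
      wind = rng.normal(0, 2, n)
      log_pm = 0.05 * background + 0.02 * diesel_pass + rng.normal(0, 0.1, n)

      # Design matrix with an intercept column; fit the log concentration by OLS.
      X = np.column_stack([np.ones(n), background, diesel_idle, diesel_pass, wind])
      coef, residuals, rank, _ = np.linalg.lstsq(X, log_pm, rcond=None)
      print(dict(zip(["intercept", "background", "diesel_idle", "diesel_pass", "wind"],
                     np.round(coef, 3))))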

  15. Dynamic integration of remote cloud resources into local computing clusters

    Energy Technology Data Exchange (ETDEWEB)

    Fleig, Georg; Erli, Guenther; Giffels, Manuel; Hauth, Thomas; Quast, Guenter; Schnepf, Matthias [Institut fuer Experimentelle Kernphysik, Karlsruher Institut fuer Technologie (Germany)

    2016-07-01

    In modern high-energy physics (HEP) experiments enormous amounts of data are analyzed and simulated. Traditionally dedicated HEP computing centers are built or extended to meet this steadily increasing demand for computing resources. Nowadays it is more reasonable and more flexible to utilize computing power at remote data centers providing regular cloud services to users as they can be operated in a more efficient manner. This approach uses virtualization and allows the HEP community to run virtual machines containing a dedicated operating system and transparent access to the required software stack on almost any cloud site. The dynamic management of virtual machines depending on the demand for computing power is essential for cost efficient operation and sharing of resources with other communities. For this purpose the EKP developed the on-demand cloud manager ROCED for dynamic instantiation and integration of virtualized worker nodes into the institute's computing cluster. This contribution will report on the concept of our cloud manager and the implementation utilizing a remote OpenStack cloud site and a shared HPC center (bwForCluster located in Freiburg).

  16. Discovery of resources using MADM approaches for parallel and distributed computing

    Directory of Open Access Journals (Sweden)

    Mandeep Kaur

    2017-06-01

    Full Text Available Grid, a form of parallel and distributed computing, allows the sharing of data and computational resources among its users from various geographical locations. The grid resources are diverse in terms of their underlying attributes. The majority of the state-of-the-art resource discovery techniques rely on the static resource attributes during resource selection. However, the matching resources based on the static resource attributes may not be the most appropriate resources for the execution of user applications because they may have heavy job loads, less storage space or less working memory (RAM. Hence, there is a need to consider the current state of the resources in order to find the most suitable resources. In this paper, we have proposed a two-phased multi-attribute decision making (MADM approach for discovery of grid resources by using P2P formalism. The proposed approach considers multiple resource attributes for decision making of resource selection and provides the best suitable resource(s to grid users. The first phase describes a mechanism to discover all matching resources and applies SAW method to shortlist the top ranked resources, which are communicated to the requesting super-peer. The second phase of our proposed methodology applies integrated MADM approach (AHP enriched PROMETHEE-II on the list of selected resources received from different super-peers. The pairwise comparison of the resources with respect to their attributes is made and the rank of each resource is determined. The top ranked resource is then communicated to the grid user by the grid scheduler. Our proposed methodology enables the grid scheduler to allocate the most suitable resource to the user application and also reduces the search complexity by filtering out the less suitable resources during resource discovery.
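
    For concreteness, a small simple additive weighting (SAW) ranking of the kind used in the first phase is sketched below with made-up resource attributes; benefit attributes are normalized by the column maximum and cost attributes (such as load) by the column minimum.

      def saw_rank(resources, weights, cost_attrs=()):
          """Score resources by simple additive weighting over normalized attributes."""
          scores = {name: 0.0 for name in resources}
          for attr, w in weights.items():
              values = [r[attr] for r in resources.values()]
              best = min(values) if attr in cost_attrs else max(values)
              for name, r in resources.items():
                  norm = best / r[attr] if attr in cost_attrs else r[attr] / best
                  scores[name] += w * norm
          return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

      resources = {
          "grid-1": {"cpu_ghz": 2.4, "ram_gb": 16, "load": 0.7},
          "grid-2": {"cpu_ghz": 3.0, "ram_gb": 8,  "load": 0.2},
      }
      weights = {"cpu_ghz": 0.4, "ram_gb": 0.3, "load": 0.3}
      print(saw_rank(resources, weights, cost_attrs=("load",)))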

  17. FATCOP: A Fault Tolerant Condor-PVM Mixed Integer Program Solver

    National Research Council Canada - National Science Library

    Chen, Qun

    1999-01-01

    We describe FATCOP, a new parallel mixed integer program solver written in PVM. The implementation uses the Condor resource management system to provide a virtual machine composed of otherwise idle computers...

  18. CloudGC: Recycling Idle Virtual Machines in the Cloud

    OpenAIRE

    Zhang, Bo; Al-Dhuraibi, Yahya; Rouvoy, Romain; Paraiso, Fawaz; Seinturier, Lionel

    2017-01-01

    Cloud computing conveys the image of a pool of unlimited virtual resources that can be quickly and easily provisioned to accommodate the user requirements. However, this flexibility may require adjusting physical resources at the infrastructure level to keep pace with user requests. While elasticity can be considered as the de facto solution to support this issue, this elasticity can still be broken by budget requirements or physical limitations of a private cloud. I...

  19. Optimised resource construction for verifiable quantum computation

    International Nuclear Information System (INIS)

    Kashefi, Elham; Wallden, Petros

    2017-01-01

    Recent developments have brought the possibility of achieving scalable quantum networks and quantum devices closer. From the computational point of view these emerging technologies become relevant when they are no longer classically simulatable. Hence a pressing challenge is the construction of practical methods to verify the correctness of the outcome produced by universal or non-universal quantum devices. A promising approach that has been extensively explored is the scheme of verification via encryption through blind quantum computation. We present here a new construction that simplifies the required resources for any such verifiable protocol. We obtain an overhead that is linear in the size of the input (computation), while the security parameter remains independent of the size of the computation and can be made exponentially small (with a small extra cost). Furthermore our construction is generic and could be applied to any universal or non-universal scheme with a given underlying graph. (paper)

  20. Impact of changing computer technology on hydrologic and water resource modeling

    OpenAIRE

    Loucks, D.P.; Fedra, K.

    1987-01-01

    The increasing availability of substantial computer power at relatively low costs and the increasing ease of using computer graphics, of communicating with other computers and data bases, and of programming using high-level problem-oriented computer languages, is providing new opportunities and challenges for those developing and using hydrologic and water resources models. This paper reviews some of the progress made towards the development and application of computer support systems designe...

  1. ACToR - Aggregated Computational Toxicology Resource

    International Nuclear Information System (INIS)

    Judson, Richard; Richard, Ann; Dix, David; Houck, Keith; Elloumi, Fathi; Martin, Matthew; Cathey, Tommy; Transue, Thomas R.; Spencer, Richard; Wolf, Maritja

    2008-01-01

    ACToR (Aggregated Computational Toxicology Resource) is a database and set of software applications that bring into one central location many types and sources of data on environmental chemicals. Currently, the ACToR chemical database contains information on chemical structure, in vitro bioassays and in vivo toxicology assays derived from more than 150 sources including the U.S. Environmental Protection Agency (EPA), Centers for Disease Control (CDC), U.S. Food and Drug Administration (FDA), National Institutes of Health (NIH), state agencies, corresponding government agencies in Canada, Europe and Japan, universities, the World Health Organization (WHO) and non-governmental organizations (NGOs). At the EPA National Center for Computational Toxicology, ACToR helps manage large data sets being used in a high-throughput environmental chemical screening and prioritization program called ToxCast TM

  2. Study on Cloud Computing Resource Scheduling Strategy Based on the Ant Colony Optimization Algorithm

    OpenAIRE

    Lingna He; Qingshui Li; Linan Zhu

    2012-01-01

    In order to replace traditional Internet software usage patterns and enterprise management modes, this paper proposes a new business calculation mode: cloud computing. Resource scheduling strategy is the key technology in cloud computing. Based on a study of the cloud computing system structure and mode of operation, the key research addresses the work scheduling process and resource allocation problems using the ant colony algorithm. Detailed analysis and design of the...

  3. Perspectives on Sharing Models and Related Resources in Computational Biomechanics Research.

    Science.gov (United States)

    Erdemir, Ahmet; Hunter, Peter J; Holzapfel, Gerhard A; Loew, Leslie M; Middleton, John; Jacobs, Christopher R; Nithiarasu, Perumal; Löhner, Rainald; Wei, Guowei; Winkelstein, Beth A; Barocas, Victor H; Guilak, Farshid; Ku, Joy P; Hicks, Jennifer L; Delp, Scott L; Sacks, Michael; Weiss, Jeffrey A; Ateshian, Gerard A; Maas, Steve A; McCulloch, Andrew D; Peng, Grace C Y

    2018-02-01

    The role of computational modeling for biomechanics research and related clinical care will be increasingly prominent. The biomechanics community has been developing computational models routinely for exploration of the mechanics and mechanobiology of diverse biological structures. As a result, a large array of models, data, and discipline-specific simulation software has emerged to support endeavors in computational biomechanics. Sharing computational models and related data and simulation software has first become a utilitarian interest, and now, it is a necessity. Exchange of models, in support of knowledge exchange provided by scholarly publishing, has important implications. Specifically, model sharing can facilitate assessment of reproducibility in computational biomechanics and can provide an opportunity for repurposing and reuse, and a venue for medical training. The community's desire to investigate biological and biomechanical phenomena crossing multiple systems, scales, and physical domains, also motivates sharing of modeling resources as blending of models developed by domain experts will be a required step for comprehensive simulation studies as well as the enhancement of their rigor and reproducibility. The goal of this paper is to understand current perspectives in the biomechanics community for the sharing of computational models and related resources. Opinions on opportunities, challenges, and pathways to model sharing, particularly as part of the scholarly publishing workflow, were sought. A group of journal editors and a handful of investigators active in computational biomechanics were approached to collect short opinion pieces as a part of a larger effort of the IEEE EMBS Computational Biology and the Physiome Technical Committee to address model reproducibility through publications. A synthesis of these opinion pieces indicates that the community recognizes the necessity and usefulness of model sharing. There is a strong will to facilitate

  4. Computer-aided resource planning and scheduling for radiological services

    Science.gov (United States)

    Garcia, Hong-Mei C.; Yun, David Y.; Ge, Yiqun; Khan, Javed I.

    1996-05-01

    There exists tremendous opportunity in hospital-wide resource optimization based on system integration. This paper defines the resource planning and scheduling requirements integral to PACS, RIS and HIS integration. A multi-site case study is conducted to define the requirements. A well-tested planning and scheduling methodology, called the Constrained Resource Planning model, has been applied to the chosen problem of radiological service optimization. This investigation focuses on resource optimization issues for minimizing the turnaround time to increase clinical efficiency and customer satisfaction, particularly in cases where the scheduling of multiple exams is required for a patient. How best to combine information system efficiency and human intelligence in improving radiological services is described. Finally, an architecture for interfacing a computer-aided resource planning and scheduling tool with the existing PACS, HIS and RIS implementation is presented.

  5. Assessing attitudes toward computers and the use of Internet resources among undergraduate microbiology students

    Science.gov (United States)

    Anderson, Delia Marie Castro

    Computer literacy and use have become commonplace in our colleges and universities. In an environment that demands the use of technology, educators should be knowledgeable of the components that make up the overall computer attitude of students and be willing to investigate the processes and techniques of effective teaching and learning that can take place with computer technology. The purpose of this study is two fold. First, it investigates the relationship between computer attitudes and gender, ethnicity, and computer experience. Second, it addresses the question of whether, and to what extent, students' attitudes toward computers change over a 16 week period in an undergraduate microbiology course that supplements the traditional lecture with computer-driven assignments. Multiple regression analyses, using data from the Computer Attitudes Scale (Loyd & Loyd, 1985), showed that, in the experimental group, no significant relationships were found between computer anxiety and gender or ethnicity or between computer confidence and gender or ethnicity. However, students who used computers the longest (p = .001) and who were self-taught (p = .046) had the lowest computer anxiety levels. Likewise students who used computers the longest (p = .001) and who were self-taught (p = .041) had the highest confidence levels. No significant relationships between computer liking, usefulness, or the use of Internet resources and gender, ethnicity, or computer experience were found. Dependent T-tests were performed to determine whether computer attitude scores (pretest and posttest) increased over a 16-week period for students who had been exposed to computer-driven assignments and other Internet resources. Results showed that students in the experimental group were less anxious about working with computers and considered computers to be more useful. In the control group, no significant changes in computer anxiety, confidence, liking, or usefulness were noted. Overall, students in

  6. Image enhancement by using IDL for a mammographic x-ray image in Medical Physics Laboratory

    International Nuclear Information System (INIS)

    Asmaliza Hashim; Md Saion Salikin; Wan Hazlinda Ismail; Norriza Mohd Isa; Azuhar Ripin

    2004-01-01

    Digital image enhancement techniques can have a significant impact on the diagnostic quality of a radiographic image. The main aim of image enhancement is to process the image so that the enhanced image is clearer and more useful for a specific application. There are three types of image enhancement, namely noise reduction, edge enhancement and contrast enhancement. The objective of this project is to enhance mammographic images by using Interactive Data Language (IDL) software in the spatial and frequency domains by various methods. In the spatial-domain methods, direct manipulation of the pixels in an image is used, whereas in the frequency-domain methods, modification of the spectral components or Fourier transform of an image is used. In order to obtain a good-quality mammographic image, a breast phantom Model 12A with 4.0 cm compressed thickness and a Bennett Model DMF-150 mammography machine with various kV and mA settings are employed. The results of images enhanced with the selected techniques using IDL are presented in this paper. (Author)
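
    The project used IDL; an equivalent outline in Python/NumPy is sketched below, with histogram equalization as the spatial-domain step and a Gaussian low-pass filter applied in the Fourier domain as the frequency-domain step. The synthetic array stands in for a phantom exposure, and the parameter values are arbitrary.

      import numpy as np

      def histogram_equalize(img, levels=256):
          """Spatial-domain contrast enhancement via histogram equalization."""
          hist, bins = np.histogram(img.flatten(), bins=levels, range=(0, levels))
          cdf = hist.cumsum()
          cdf = (levels - 1) * cdf / cdf[-1]                 # normalized mapping
          return np.interp(img.flatten(), bins[:-1], cdf).reshape(img.shape)

      def gaussian_lowpass(img, sigma=20.0):
          """Frequency-domain noise reduction: attenuate high spatial frequencies."""
          rows, cols = img.shape
          u = np.fft.fftfreq(rows)[:, None]
          v = np.fft.fftfreq(cols)[None, :]
          mask = np.exp(-(u ** 2 + v ** 2) / (2 * (sigma / max(rows, cols)) ** 2))
          return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

      # Synthetic stand-in for a mammographic exposure.
      img = np.clip(np.random.default_rng(1).normal(128, 20, (256, 256)), 0, 255)
      enhanced = gaussian_lowpass(histogram_equalize(img))
      print(enhanced.shape, float(enhanced.mean()))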

  7. Energy-efficient pulse-coupled synchronization strategy design for wireless sensor networks through reduced idle listening

    Science.gov (United States)

    Wang, Yongqiang; Núñez, Felipe; Doyle, Francis J.

    2013-01-01

    Synchronization is crucial to wireless sensor networks due to their decentralized structure. We propose an energy-efficient pulse-coupled synchronization strategy to achieve this goal. The basic idea is to reduce idle listening by intentionally introducing a large refractory period in the sensors’ cooperation. The large refractory period greatly reduces idle listening in each oscillation period, and is analytically proven to have no influence on the time to synchronization. Hence, it significantly reduces the total energy consumption in a synchronization process. A topology control approach tailored for pulse-coupled synchronization is given to guarantee a k-edge strongly connected interaction topology, which is tolerant to communication-link failures. The topology control approach is totally decentralized and needs no information exchange among sensors, and it is applicable to dynamic network topologies as well. This facilitates a completely decentralized implementation of the synchronization strategy. The strategy is applicable to mobile sensor networks, too. QualNet case studies confirm the effectiveness of the synchronization strategy. PMID:24307831
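
    A small simulation sketch of the mechanism (not the authors' model or proofs): integrate-and-fire oscillators advance a phase, fire on reaching threshold, and nudge their neighbours forward, while pulses arriving during the refractory fraction of the cycle are ignored, which is the energy-saving ingredient highlighted above. Parameter values are arbitrary.

      import random

      def simulate(n=10, steps=20000, dt=1e-3, coupling=0.05, refractory=0.4):
          """All-to-all pulse-coupled oscillators with a refractory period."""
          phase = [random.random() for _ in range(n)]
          for _ in range(steps):
              fired = []
              for i in range(n):
                  phase[i] += dt                       # free-running phase advance
                  if phase[i] >= 1.0:
                      phase[i] = 0.0
                      fired.append(i)
              for i in fired:                          # broadcast a pulse on firing
                  for j in range(n):
                      if j != i and phase[j] > refractory:   # refractory nodes ignore it
                          phase[j] = min(1.0, phase[j] + coupling)
          return phase

      print(simulate())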

  8. Can the Teachers' Creativity Overcome Limited Computer Resources?

    Science.gov (United States)

    Nikolov, Rumen; Sendova, Evgenia

    1988-01-01

    Describes experiences of the Research Group on Education (RGE) at the Bulgarian Academy of Sciences and the Ministry of Education in using limited computer resources when teaching informatics. Topics discussed include group projects; the use of Logo; ability grouping; and out-of-class activities, including publishing a pupils' magazine. (13…

  9. Social Media and the Idle No More Movement: Citizenship, Activism and Dissent in Canada

    Science.gov (United States)

    Tupper, Jennifer

    2014-01-01

    This paper, informed by a critique of traditional understandings of citizenship and civic education, explores the use of social media as a means of fostering activism and dissent. Specifically, the paper explores the ways in which the Idle No More Movement, which began in Canada in 2012 marshalled social media to educate about and protest Bill…

  10. 75 FR 63110 - Small Business Investment Companies-Conflicts of Interest and Investment of Idle Funds

    Science.gov (United States)

    2010-10-14

    ... conflict of interest exemption for a particular type of transaction. This change is expected to reduce the...--Conflicts of Interest and Investment of Idle Funds AGENCY: U.S. Small Business Administration. ACTION... rules, unless it first obtains a conflict of interest exemption from SBA. The revision would eliminate...

  11. 77 FR 20292 - Small Business Investment Companies-Conflicts of Interest and Investment of Idle Funds

    Science.gov (United States)

    2012-04-04

    ... conflict of interest, unless the SBIC obtains a prior written exemption from SBA. The most common type of...--Conflicts of Interest and Investment of Idle Funds AGENCY: U.S. Small Business Administration. ACTION: Final..., unless it first obtains a conflict of interest exemption from SBA. The revision eliminates the...

  12. Integration of Openstack cloud resources in BES III computing cluster

    Science.gov (United States)

    Li, Haibo; Cheng, Yaodong; Huang, Qiulan; Cheng, Zhenjing; Shi, Jingyan

    2017-10-01

    Cloud computing provides a new technical means for data processing in high energy physics experiments. However, in a traditional job management system the resources of each queue are fixed and resource usage is static. In order to make it simple and transparent for physicists to use, we developed a virtual cluster system (vpmanager) to integrate IHEPCloud and different batch systems such as Torque and HTCondor. Vpmanager provides dynamic virtual machine scheduling according to the job queue. The BES III use case results show that resource efficiency is greatly improved.

  13. Gaseous and Particulate Emissions from Diesel Engines at Idle and under Load: Comparison of Biodiesel Blend and Ultralow Sulfur Diesel Fuels.

    Science.gov (United States)

    Chin, Jo-Yu; Batterman, Stuart A; Northrop, William F; Bohac, Stanislav V; Assanis, Dennis N

    2012-11-15

    Diesel exhaust emissions have been reported for a number of engine operating strategies, after-treatment technologies, and fuels. However, information is limited regarding emissions of many pollutants during idling and when biodiesel fuels are used. This study investigates regulated and unregulated emissions from both light-duty passenger car (1.7 L) and medium-duty (6.4 L) diesel engines at idle and load and compares a biodiesel blend (B20) to conventional ultralow sulfur diesel (ULSD) fuel. Exhaust aftertreatment devices included a diesel oxidation catalyst (DOC) and a diesel particle filter (DPF). For the 1.7 L engine under load without a DOC, B20 reduced brake-specific emissions of particulate matter (PM), elemental carbon (EC), nonmethane hydrocarbons (NMHCs), and most volatile organic compounds (VOCs) compared to ULSD; however, formaldehyde brake-specific emissions increased. With a DOC and high load, B20 increased brake-specific emissions of NMHC, nitrogen oxides (NO x ), formaldehyde, naphthalene, and several other VOCs. For the 6.4 L engine under load, B20 reduced brake-specific emissions of PM 2.5 , EC, formaldehyde, and most VOCs; however, NO x brake-specific emissions increased. When idling, the effects of fuel type were different: B20 increased NMHC, PM 2.5 , EC, formaldehyde, benzene, and other VOC emission rates from both engines, and changes were sometimes large, e.g., PM 2.5 increased by 60% for the 6.4 L/2004 calibration engine, and benzene by 40% for the 1.7 L engine with the DOC, possibly reflecting incomplete combustion and unburned fuel. Diesel exhaust emissions depended on the fuel type and engine load (idle versus loaded). The higher emissions found when using B20 are especially important given the recent attention to exposures from idling vehicles and the health significance of PM 2.5 . The emission profiles demonstrate the effects of fuel type, engine calibration, and emission control system, and they can be used as source profiles for

  14. Gaseous and Particulate Emissions from Diesel Engines at Idle and under Load: Comparison of Biodiesel Blend and Ultralow Sulfur Diesel Fuels

    Science.gov (United States)

    Chin, Jo-Yu; Batterman, Stuart A.; Northrop, William F.; Bohac, Stanislav V.; Assanis, Dennis N.

    2015-01-01

    Diesel exhaust emissions have been reported for a number of engine operating strategies, after-treatment technologies, and fuels. However, information is limited regarding emissions of many pollutants during idling and when biodiesel fuels are used. This study investigates regulated and unregulated emissions from both light-duty passenger car (1.7 L) and medium-duty (6.4 L) diesel engines at idle and load and compares a biodiesel blend (B20) to conventional ultralow sulfur diesel (ULSD) fuel. Exhaust aftertreatment devices included a diesel oxidation catalyst (DOC) and a diesel particle filter (DPF). For the 1.7 L engine under load without a DOC, B20 reduced brake-specific emissions of particulate matter (PM), elemental carbon (EC), nonmethane hydrocarbons (NMHCs), and most volatile organic compounds (VOCs) compared to ULSD; however, formaldehyde brake-specific emissions increased. With a DOC and high load, B20 increased brake-specific emissions of NMHC, nitrogen oxides (NOx), formaldehyde, naphthalene, and several other VOCs. For the 6.4 L engine under load, B20 reduced brake-specific emissions of PM2.5, EC, formaldehyde, and most VOCs; however, NOx brake-specific emissions increased. When idling, the effects of fuel type were different: B20 increased NMHC, PM2.5, EC, formaldehyde, benzene, and other VOC emission rates from both engines, and changes were sometimes large, e.g., PM2.5 increased by 60% for the 6.4 L/2004 calibration engine, and benzene by 40% for the 1.7 L engine with the DOC, possibly reflecting incomplete combustion and unburned fuel. Diesel exhaust emissions depended on the fuel type and engine load (idle versus loaded). The higher emissions found when using B20 are especially important given the recent attention to exposures from idling vehicles and the health significance of PM2.5. The emission profiles demonstrate the effects of fuel type, engine calibration, and emission control system, and they can be used as source profiles for apportionment

  15. Recent development of computational resources for new antibiotics discovery

    DEFF Research Database (Denmark)

    Kim, Hyun Uk; Blin, Kai; Lee, Sang Yup

    2017-01-01

    Understanding a complex working mechanism of biosynthetic gene clusters (BGCs) encoding secondary metabolites is a key to discovery of new antibiotics. Computational resources continue to be developed in order to better process increasing volumes of genome and chemistry data, and thereby better...

  16. Mobile Cloud Computing: Resource Discovery, Session Connectivity and Other Open Issues

    NARCIS (Netherlands)

    Schüring, Markus; Karagiannis, Georgios

    2011-01-01

    Abstract—Cloud computing can be considered as a model that provides network access to a shared pool of resources, such as storage and computing power, which can be rapidly provisioned and released with minimal management effort. This paper describes a research activity in the area of mobile cloud

  17. Big Data and HPC collocation: Using HPC idle resources for Big Data Analytics

    OpenAIRE

    MERCIER , Michael; Glesser , David; Georgiou , Yiannis; Richard , Olivier

    2017-01-01

    International audience; Executing Big Data workloads upon High Performance Computing (HPC) infrastructures has become an attractive way to improve their performance. However, the collocation of HPC and Big Data workloads is not an easy task, mainly because of their core concepts' differences. This paper focuses on the challenges related to the scheduling of both Big Data and HPC workloads on the same computing platform. In classic HPC workloads, the rigidity of jobs tends to create holes in ...

  18. Computational resources for ribosome profiling: from database to Web server and software.

    Science.gov (United States)

    Wang, Hongwei; Wang, Yan; Xie, Zhi

    2017-08-14

    Ribosome profiling is emerging as a powerful technique that enables genome-wide investigation of in vivo translation at sub-codon resolution. The increasing application of ribosome profiling in recent years has achieved remarkable progress toward understanding the composition, regulation and mechanism of translation. This benefits from not only the awesome power of ribosome profiling but also an extensive range of computational resources available for ribosome profiling. At present, however, a comprehensive review on these resources is still lacking. Here, we survey the recent computational advances guided by ribosome profiling, with a focus on databases, Web servers and software tools for storing, visualizing and analyzing ribosome profiling data. This review is intended to provide experimental and computational biologists with a reference to make appropriate choices among existing resources for the question at hand. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    Science.gov (United States)

    Meyer, Jörg; Quadt, Arnulf; Weber, Pavel; ATLAS Collaboration

    2011-12-01

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community, GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center is presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup the challenge of the common operation of the cluster is detailed. The benefits are an efficient use of computer and personpower resources.

  20. PROSPECTS OF GEOTHERMAL RESOURCES DEVELOPMENT FOR EAST CISCAUCASIA

    OpenAIRE

    A. B. Alkhasov; D. A. Alkhasova

    2013-01-01

    Abstract. The Northern Caucasus is one of the prospective regions for development of geothermal energy. The hydrogeothermal resources of the East Ciscaucasian Artesian basin alone are estimated at up to 10000 MW of heat and 1000 MW of electric power. For their large-scale development it is necessary to build wells of big diameter and high flow rate, involving huge capital investments. Reconstruction of idle wells for production of thermal water will make it possible to reduce capital invest...

  1. On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers

    Science.gov (United States)

    Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.

    2017-10-01

    This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.

  2. Function Package for Computing Quantum Resource Measures

    Science.gov (United States)

    Huang, Zhiming

    2018-05-01

    In this paper, we present a function package for calculating quantum resource measures and dynamics of open systems. Our package includes common operators and operator lists, frequently-used functions for computing quantum entanglement, quantum correlation, quantum coherence, quantum Fisher information and dynamics in noisy environments. We briefly explain the functions of the package and illustrate how to use the package with several typical examples. We expect that this package is a useful tool for future research and education.

  3. Functional requirements of computer systems for the U.S. Geological Survey, Water Resources Division, 1988-97

    Science.gov (United States)

    Hathaway, R.M.; McNellis, J.M.

    1989-01-01

    Investigating the occurrence, quantity, quality, distribution, and movement of the Nation's water resources is the principal mission of the U.S. Geological Survey's Water Resources Division. Reports of these investigations are published and available to the public. To accomplish this mission, the Division requires substantial computer technology to process, store, and analyze data from more than 57,000 hydrologic sites. The Division's computer resources are organized through the Distributed Information System Program Office that manages the nationwide network of computers. The contract that provides the major computer components for the Water Resources Division's Distributed Information System expires in 1991. Five work groups were organized to collect the information needed to procure a new generation of computer systems for the U.S. Geological Survey, Water Resources Division. Each group was assigned a major Division activity and asked to describe its functional requirements of computer systems for the next decade. The work groups and major activities are: (1) hydrologic information; (2) hydrologic applications; (3) geographic information systems; (4) reports and electronic publishing; and (5) administrative. The work groups identified 42 functions and described their functional requirements for 1988, 1992, and 1997. A few new functions, such as Decision Support Systems and Executive Information Systems, were identified, but most are the same as performed today. Although the number of functions will remain about the same, steady growth in the size, complexity, and frequency of many functions is predicted for the next decade. No compensating increase in the Division's staff is anticipated during this period. To handle the increased workload and perform these functions, new approaches will be developed that use advanced computer technology. The advanced technology is required in a unified, tightly coupled system that will support all functions simultaneously.

  4. Universal resources for approximate and stochastic measurement-based quantum computation

    International Nuclear Information System (INIS)

    Mora, Caterina E.; Piani, Marco; Miyake, Akimasa; Van den Nest, Maarten; Duer, Wolfgang; Briegel, Hans J.

    2010-01-01

    We investigate which quantum states can serve as universal resources for approximate and stochastic measurement-based quantum computation in the sense that any quantum state can be generated from a given resource by means of single-qubit (local) operations assisted by classical communication. More precisely, we consider the approximate and stochastic generation of states, resulting, for example, from a restriction to finite measurement settings or from possible imperfections in the resources or local operations. We show that entanglement-based criteria for universality obtained in M. Van den Nest et al. [New J. Phys. 9, 204 (2007)] for the exact, deterministic case can be lifted to the much more general approximate, stochastic case. This allows us to move from the idealized situation (exact, deterministic universality) considered in previous works to the practically relevant context of nonperfect state preparation. We find that any entanglement measure fulfilling some basic requirements needs to reach its maximum value on some element of an approximate, stochastic universal family of resource states, as the resource size grows. This allows us to rule out various families of states as being approximate, stochastic universal. We prove that approximate, stochastic universality is in general a weaker requirement than deterministic, exact universality and provide resources that are efficient approximate universal, but not exact deterministic universal. We also study the robustness of universal resources for measurement-based quantum computation under realistic assumptions about the (imperfect) generation and manipulation of entangled states, giving an explicit expression for the impact that errors made in the preparation of the resource have on the possibility to use it for universal approximate and stochastic state preparation. Finally, we discuss the relation between our entanglement-based criteria and recent results regarding the uselessness of states with a high

  5. Conceptualization of Idle (Laghw) and its relation to medical futility.

    Science.gov (United States)

    Rezaei Aderyani, Mohsen; Javadi, Mohsen; Nazari Tavakkoli, Saeid; Kiani, Mehrzad; Abbasi, Mahmood

    2016-01-01

    A major debate in medical ethics is the request for futile treatment. The topic of medical futility requires discrete assessment in Iran for at least two reasons. First, the common principles and foundations of medical ethics have taken shape in the context of Western culture and secularism. Accordingly, the implementation of the same guidelines and codes of medical ethics as Western societies in Muslim communities does not seem rational. Second, the challenges arising in health service settings are divergent across different countries. The Quranic concept of idle (laghw) and its derivatives are used in 11 honorable verses of the Holy Quran. Among these verses, the 3rd verse of the blessed Al-Muminūn Surah was selected for its closer connection to the concept under examination. The selected verse was researched in the context of all dictionaries presented in Noor Jami` al-Tafasir 2 (The Noor Collection of Interpretations 2) software. "Idle" is known as any insignificant speech, act, or thing that is not beneficial; an action from which no benefit is gained; any falsehood (that is not stable or realized); an entertaining act; any foul, futile talk and action unworthy of attention; loss of hope; and something that is not derived from method and thought. The word has also been used to refer to anything insignificant. The notes and derived interpretations were placed in the following categories: A) Having no significant benefit (When medical care does not benefit the patient (his body and/or soul and his life in this world and/or the Hereafter), it is wrong to proceed with that medical modality; B) Falsehood (Actions that fail to provide, maintain, and improve health are clearly futile); C) Unworthy of attention (An action that neither improves health nor threatens it is wrong and impermissible).

  6. 78 FR 11751 - Approval and Promulgation of Implementation Plans; State of Kansas; Idle Reduction of Heavy-Duty...

    Science.gov (United States)

    2013-02-20

    ...; mechanical work; armored vehicles; bus idling for passenger comfort (no greater than fifteen minutes in any...).) List of Subjects in 40 CFR Part 52 Environmental protection, Air pollution control, Carbon monoxide, Incorporation by reference, Intergovernmental relations, Motor carriers, Motor vehicles, Motor vehicle pollution...

  7. Getting the Most from Distributed Resources With an Analytics Platform for ATLAS Computing Services

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00225336; The ATLAS collaboration; Gardner, Robert; Bryant, Lincoln

    2016-01-01

    To meet a sharply increasing demand for computing resources for LHC Run 2, ATLAS distributed computing systems reach far and wide to gather CPU resources and storage capacity to execute an evolving ecosystem of production and analysis workflow tools. Indeed more than a hundred computing sites from the Worldwide LHC Computing Grid, plus many “opportunistic” facilities at HPC centers, universities, national laboratories, and public clouds, combine to meet these requirements. These resources have characteristics (such as local queuing availability, proximity to data sources and target destinations, network latency and bandwidth capacity, etc.) affecting the overall processing efficiency and throughput. To quantitatively understand and in some instances predict behavior, we have developed a platform to aggregate, index (for user queries), and analyze the more important information streams affecting performance. These data streams come from the ATLAS production system (PanDA), the distributed data management s...

  8. An IDL-based analysis package for COBE and other skycube-formatted astronomical data

    Science.gov (United States)

    Ewing, J. A.; Isaacman, Richard B.; Gales, J. M.

    1992-01-01

    UIMAGE is a data analysis package written in IDL for the Cosmic Background Explorer (COBE) project. COBE has extraordinarily stringent accuracy requirements: 1 percent mid-infrared absolute photometry, 0.01 percent submillimeter absolute spectrometry, and 0.0001 percent submillimeter relative photometry. Thus, many of the transformations and image enhancements common to analysis of large data sets must be done with special care. UIMAGE is unusual in this sense in that it performs as many of its operations as possible on the data in its native format and projection, which in the case of COBE is the quadrilateralized spherical cube ('skycube'). That is, after reprojecting the data, e.g., onto an Aitoff map, the user who performs an operation such as taking a crosscut or extracting data from a pixel is transparently acting upon the skycube data from which the projection was made, thereby preserving the accuracy of the result. Current plans call for formatting external data bases such as CO maps into the skycube format with a high-accuracy transformation, thereby allowing Guest Investigators to use UIMAGE for direct comparison of the COBE maps with those at other wavelengths from other instruments. It is completely menu-driven so that its use requires no knowledge of IDL. Its functionality includes I/O from the COBE archives, FITS files, and IDL save sets as well as standard analysis operations such as smoothing, reprojection, zooming, statistics of areas, spectral analysis, etc. One of UIMAGE's more advanced and attractive features is its terminal independence. Most of the operations (e.g., menu-item selection or pixel selection) that are driven by the mouse on an X-windows terminal are also available using arrow keys and keyboard entry (e.g., pixel coordinates) on VT200 and Tektronix-class terminals. Even limited grey scales of images are available this way. Obviously, image processing is very limited on this type of terminal, but it is nonetheless surprising how

  9. The Trope Tank: A Laboratory with Material Resources for Creative Computing

    Directory of Open Access Journals (Sweden)

    Nick Montfort

    2014-12-01

    Full Text Available http://dx.doi.org/10.5007/1807-9288.2014v10n2p53 Principles for organizing and making use of a laboratory with material computing resources are articulated. This laboratory, the Trope Tank, is a facility for teaching, research, and creative collaboration and offers hardware (in working condition and set up for use) from the 1970s, 1980s, and 1990s, including videogame systems, home computers, and an arcade cabinet. To aid in investigating the material history of texts, the lab has a small 19th century letterpress, a typewriter, a print terminal, and dot-matrix printers. Other resources include controllers, peripherals, manuals, books, and software on physical media. These resources are used for teaching, loaned for local exhibitions and presentations, and accessed by researchers and artists. The space is primarily a laboratory (rather than a library, studio, or museum), so materials are organized by platform and intended use. Textual information about the historical contexts of the available systems is provided, and resources are set up to allow easy operation, and even casual use, by researchers, teachers, students, and artists.

  10. A multi-site analysis of the association between black carbon concentrations and vehicular idling, traffic, background pollution, and meteorology during school dismissals.

    Science.gov (United States)

    Richmond-Bryant, J; Bukiewicz, L; Kalin, R; Galarraga, C; Mirer, F

    2011-05-01

    A study was performed to assess the relationship between black carbon (BC), passing traffic, and vehicular idling outside New York City (NYC) schools during student dismissal. Monitoring was performed at three school sites in East Harlem, the Bronx, and Brooklyn for 1 month per year over a two-year period from November 2006 to October 2008. Monitoring at each site was conducted before and after the Asthma Free School Zone (AFSZ) asthma reduction education program was administered. Real-time equipment with a one-minute averaging interval was used to obtain the BC data, while volume counts of idling and passing school buses, trucks, and automobiles were collected each minute by study staff. These data were matched to ambient PM(2.5) and meteorology data obtained from the New York State Department of Environmental Conservation. A generalized additive model (GAM) was run to examine the relationship between BC concentration and each variable while accounting for site-to-site differences. F-tests were employed to assess the significance of each of the predictor variables. The model results suggested that variability in ambient PM(2.5) concentration contributed 24% of the variability in transformed BC concentration, while variability in the number of idling buses and trucks on the street during dismissal contributed 20% of the variability in transformed BC concentration. The results of this study suggest that a combination of urban-scale and local traffic control approaches, together with cessation of school bus idling, will produce improved local BC concentrations outside schools. Published by Elsevier B.V.
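
    As an illustration of the modelling approach named in this record (a generalized additive model relating transformed BC to background pollution, idling counts and meteorology), the sketch below fits a GAM to synthetic data with the pygam package; the variable names, smooth terms and data are assumptions, not the authors' actual specification.

    ```python
    # Fit a generalized additive model to synthetic data resembling the study's
    # setup; all variables and coefficients below are invented for illustration.
    import numpy as np
    from pygam import LinearGAM, s

    rng = np.random.default_rng(0)
    n = 500
    ambient_pm25 = rng.gamma(5.0, 2.0, n)        # ug/m3, background pollution
    idling_count = rng.poisson(3.0, n)           # idling buses/trucks per minute
    wind_speed   = rng.gamma(2.0, 1.5, n)        # m/s
    log_bc = (0.05 * ambient_pm25 + 0.10 * idling_count
              - 0.08 * wind_speed + rng.normal(0, 0.3, n))

    X = np.column_stack([ambient_pm25, idling_count, wind_speed])
    gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, log_bc)
    gam.summary()   # per-term significance, analogous to the F-tests in the study
    ```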

  11. NMRbox: A Resource for Biomolecular NMR Computation.

    Science.gov (United States)

    Maciejewski, Mark W; Schuyler, Adam D; Gryk, Michael R; Moraru, Ion I; Romero, Pedro R; Ulrich, Eldon L; Eghbalnia, Hamid R; Livny, Miron; Delaglio, Frank; Hoch, Jeffrey C

    2017-04-25

    Advances in computation have been enabling many recent advances in biomolecular applications of NMR. Due to the wide diversity of applications of NMR, the number and variety of software packages for processing and analyzing NMR data is quite large, with labs relying on dozens, if not hundreds of software packages. Discovery, acquisition, installation, and maintenance of all these packages is a burdensome task. Because the majority of software packages originate in academic labs, persistence of the software is compromised when developers graduate, funding ceases, or investigators turn to other projects. To simplify access to and use of biomolecular NMR software, foster persistence, and enhance reproducibility of computational workflows, we have developed NMRbox, a shared resource for NMR software and computation. NMRbox employs virtualization to provide a comprehensive software environment preconfigured with hundreds of software packages, available as a downloadable virtual machine or as a Platform-as-a-Service supported by a dedicated compute cloud. Ongoing development includes a metadata harvester to regularize, annotate, and preserve workflows and facilitate and enhance data depositions to BioMagResBank, and tools for Bayesian inference to enhance the robustness and extensibility of computational analyses. In addition to facilitating use and preservation of the rich and dynamic software environment for biomolecular NMR, NMRbox fosters the development and deployment of a new class of metasoftware packages. NMRbox is freely available to not-for-profit users. Copyright © 2017 Biophysical Society. All rights reserved.

  12. Forecasting Model for Network Throughput of Remote Data Access in Computing Grids

    CERN Document Server

    Begy, Volodimir; The ATLAS collaboration

    2018-01-01

    Computing grids are one of the key enablers of eScience. Researchers from many fields (e.g. High Energy Physics, Bioinformatics, Climatology, etc.) employ grids to run computational jobs in a highly distributed manner. The current state of the art approach for data access in the grid is data placement: a job is scheduled to run at a specific data center, and its execution starts only when the complete input data has been transferred there. This approach has two major disadvantages: (1) jobs stay idle while waiting for the input data; (2) due to the limited infrastructure resources, the distributed data management system handling the data placement may queue the transfers up to several days. An alternative approach is remote data access: a job may stream the input data directly from storage elements, which may be located at local or remote data centers. Remote data access brings two innovative benefits: (1) the jobs can be executed asynchronously with respect to the data transfer; (2) when combined...

  13. A novel resource management method of providing operating system as a service for mobile transparent computing.

    Science.gov (United States)

    Xiong, Yonghua; Huang, Suzhen; Wu, Min; Zhang, Yaoxue; She, Jinhua

    2014-01-01

    This paper presents a framework for mobile transparent computing. It extends the PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agent for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable.

  14. A Novel Resource Management Method of Providing Operating System as a Service for Mobile Transparent Computing

    Directory of Open Access Journals (Sweden)

    Yonghua Xiong

    2014-01-01

    Full Text Available This paper presents a framework for mobile transparent computing. It extends the PC transparent computing to mobile terminals. Since resources contain different kinds of operating systems and user data that are stored in a remote server, how to manage the network resources is essential. In this paper, we apply the technologies of quick emulator (QEMU) virtualization and mobile agent for mobile transparent computing (MTC) to devise a method of managing shared resources and services management (SRSM). It has three layers: a user layer, a manage layer, and a resource layer. A mobile virtual terminal in the user layer and virtual resource management in the manage layer cooperate to maintain the SRSM function accurately according to the user's requirements. An example of SRSM is used to validate this method. Experiment results show that the strategy is effective and stable.

  15. Measuring the impact of computer resource quality on the software development process and product

    Science.gov (United States)

    Mcgarry, Frank; Valett, Jon; Hall, Dana

    1985-01-01

    The availability and quality of computer resources during the software development process was speculated to have measurable, significant impact on the efficiency of the development process and the quality of the resulting product. Environment components such as the types of tools, machine responsiveness, and quantity of direct access storage may play a major role in the effort to produce the product and in its subsequent quality as measured by factors such as reliability and ease of maintenance. During the past six years, the NASA Goddard Space Flight Center has conducted experiments with software projects in an attempt to better understand the impact of software development methodologies, environments, and general technologies on the software process and product. Data was extracted and examined from nearly 50 software development projects. All were related to support of satellite flight dynamics ground-based computations. The relationship between computer resources and the software development process and product as exemplified by the subject NASA data was examined. Based upon the results, a number of computer resource-related implications are provided.

  16. Common accounting system for monitoring the ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    Karavakis, E; Andreeva, J; Campana, S; Saiz, P; Gayazov, S; Jezequel, S; Sargsyan, L; Schovancova, J; Ueda, I

    2014-01-01

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources; either generic or ATLAS specific. This set of tools provides quality and scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  17. Studi Implementasi Lean Six Sigma dengan Pendekatan Value Stream Mapping untuk Mereduksi Idle Time Material pada Gudang Pelat dan Profil

    Directory of Open Access Journals (Sweden)

    Wawan Widiatmoko

    2013-03-01

    Full Text Available The growing volume of maritime industry activity in Indonesia requires the shipbuilding industry in the Surabaya area to further improve its services, both for newbuilding and for ship repair. Accordingly, a shipyard must be able to manage its production process well so that it yields maximum profit. One element of this is an effective inventory and material transport process. This final project aims to characterize the inventory system applied by the sampled company and the idle time of plate and profile material in the raw material warehouse, using the lean six sigma method with a value stream mapping approach. The calculations yielded a sigma value for idle time of 0.1976, so efforts are needed to raise the sigma value of material procurement itself. A root cause analysis (RCA) of the causes of idle time identified several factors: the low sigma value of material usage, failure to meet work targets in the fabrication process, and a material procurement process that does not take the ship construction strategy into account. Applying lean six sigma with a value stream mapping approach produced proposed improvements to the company's inventory process, including raising the sigma value of material usage, purchasing material in line with the zone-based ship construction strategy, and improving cooperation with the suppliers of plate and profile material. The future state mapping yielded an improvement proposal in which material procurement planning takes the zone-based ship construction strategy into account, resulting in a procurement strategy in which material is ordered four times.
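
    For readers unfamiliar with the sigma values quoted above, the sketch below shows one common way to convert a defect rate into a six sigma "sigma level" using the conventional 1.5-sigma shift; whether the thesis uses exactly this convention is an assumption made here for illustration.

    ```python
    # Illustrative conversion from a defect rate to a six-sigma "sigma level"
    # using the conventional 1.5-sigma shift; the example numbers are invented.
    from scipy.stats import norm

    def sigma_level(defects, opportunities, shift=1.5):
        dpmo = 1_000_000 * defects / opportunities      # defects per million opportunities
        return norm.ppf(1 - dpmo / 1_000_000) + shift   # long-term shifted sigma level

    # Example: 9 out of 10 material lots sitting idle longer than planned.
    print(round(sigma_level(defects=9, opportunities=10), 4))
    ```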

  18. Reducing the throughput time of the diagnostic track involving CT scanning with computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Lent, Wineke A.M. van, E-mail: w.v.lent@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); University of Twente, IGS Institute for Innovation and Governance Studies, Department of Health Technology Services Research (HTSR), Enschede (Netherlands); Deetman, Joost W., E-mail: j.deetman@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Teertstra, H. Jelle, E-mail: h.teertstra@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Muller, Sara H., E-mail: s.muller@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Hans, Erwin W., E-mail: e.w.hans@utwente.nl [University of Twente, School of Management and Governance, Dept. of Industrial Engineering and Business Intelligence Systems, Enschede (Netherlands); Harten, Wim H. van, E-mail: w.v.harten@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); University of Twente, IGS Institute for Innovation and Governance Studies, Department of Health Technology Services Research (HTSR), Enschede (Netherlands)

    2012-11-15

    Introduction: To examine the use of computer simulation to reduce the time between the CT request and the consult in which the CT report is discussed (diagnostic track) while restricting idle time and overtime. Methods: After a pre-implementation analysis in our case study hospital, three scenarios were evaluated by computer simulation on access time, overtime and idle time of the CT; after implementation these same aspects were evaluated again. Effects on throughput time were measured for outpatient short-term and urgent requests only. Conclusion: The pre-implementation analysis showed an average CT access time of 9.8 operating days and an average diagnostic track of 14.5 operating days. Based on the outcomes of the simulation, management changed the capacity for the different patient groups to facilitate a diagnostic track of 10 operating days, with a CT access time of 7 days. After the implementation of changes, the average diagnostic track duration was 12.6 days with an average CT access time of 7.3 days. The fraction of patients with a total throughput time within 10 days increased from 29% to 44%, while the utilization remained constant at 82%, the idle time increased by 11% and the overtime decreased by 82%. The fraction of patients that completed the diagnostic track within 10 days improved by 52%. Computer simulation proved useful for studying the effects of proposed scenarios in radiology management. Besides the tangible effects, the simulation increased the awareness that optimizing capacity allocation can reduce access times.
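
    A toy discrete-event simulation in the spirit of the scenario analysis described above can be written in a few lines; the arrival rate, scan capacity and run length below are invented parameters, not the study's data.

    ```python
    # Toy discrete-event simulation of CT access time for one patient group,
    # using simpy; all parameters are illustrative assumptions.
    import random
    import simpy

    SCANS_PER_DAY, ARRIVALS_PER_DAY, DAYS = 10, 9, 200
    waits = []

    def patient(env, ct):
        requested = env.now
        with ct.request() as slot:
            yield slot
            waits.append(env.now - requested)        # access time in days
            yield env.timeout(1 / SCANS_PER_DAY)     # scanner occupied for one slot

    def arrivals(env, ct):
        while True:
            yield env.timeout(random.expovariate(ARRIVALS_PER_DAY))
            env.process(patient(env, ct))

    random.seed(1)
    env = simpy.Environment()
    ct = simpy.Resource(env, capacity=1)
    env.process(arrivals(env, ct))
    env.run(until=DAYS)
    print(f"mean CT access time: {sum(waits) / len(waits):.2f} days")
    ```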

  19. Reducing the throughput time of the diagnostic track involving CT scanning with computer simulation

    International Nuclear Information System (INIS)

    Lent, Wineke A.M. van; Deetman, Joost W.; Teertstra, H. Jelle; Muller, Sara H.; Hans, Erwin W.; Harten, Wim H. van

    2012-01-01

    Introduction: To examine the use of computer simulation to reduce the time between the CT request and the consult in which the CT report is discussed (diagnostic track) while restricting idle time and overtime. Methods: After a pre-implementation analysis in our case study hospital, three scenarios were evaluated by computer simulation on access time, overtime and idle time of the CT; after implementation these same aspects were evaluated again. Effects on throughput time were measured for outpatient short-term and urgent requests only. Conclusion: The pre-implementation analysis showed an average CT access time of 9.8 operating days and an average diagnostic track of 14.5 operating days. Based on the outcomes of the simulation, management changed the capacity for the different patient groups to facilitate a diagnostic track of 10 operating days, with a CT access time of 7 days. After the implementation of changes, the average diagnostic track duration was 12.6 days with an average CT access time of 7.3 days. The fraction of patients with a total throughput time within 10 days increased from 29% to 44%, while the utilization remained constant at 82%, the idle time increased by 11% and the overtime decreased by 82%. The fraction of patients that completed the diagnostic track within 10 days improved by 52%. Computer simulation proved useful for studying the effects of proposed scenarios in radiology management. Besides the tangible effects, the simulation increased the awareness that optimizing capacity allocation can reduce access times.

  20. Regional research exploitation of the LHC a case-study of the required computing resources

    CERN Document Server

    Almehed, S; Eerola, Paule Anna Mari; Mjörnmark, U; Smirnova, O G; Zacharatou-Jarlskog, C; Åkesson, T

    2002-01-01

    A simulation study to evaluate the required computing resources for a research exploitation of the Large Hadron Collider (LHC) has been performed. The evaluation was done as a case study, assuming existence of a Nordic regional centre and using the requirements for performing a specific physics analysis as a yard-stick. Other input parameters were: assumptions for the distribution of researchers at the institutions involved, an analysis model, and two different functional structures of the computing resources.

  1. An Architecture of IoT Service Delegation and Resource Allocation Based on Collaboration between Fog and Cloud Computing

    Directory of Open Access Journals (Sweden)

    Aymen Abdullah Alsaffar

    2016-01-01

    Full Text Available Despite the wide utilization of cloud computing (e.g., services, applications, and resources), some of the services, applications, and smart devices are not able to fully benefit from this attractive cloud computing paradigm due to the following issues: (1) smart devices might be lacking in their capacity (e.g., processing, memory, storage, battery, and resource allocation), (2) they might be lacking in their network resources, and (3) the high network latency to a centralized server in the cloud might not be efficient for delay-sensitive applications, services, and resource allocation requests. Fog computing is a promising paradigm that can extend cloud resources to the edge of the network, solving the abovementioned issues. As a result, in this work, we propose an architecture of IoT service delegation and resource allocation based on collaboration between fog and cloud computing. We provide a new algorithm consisting of decision rules of a linearized decision tree based on three conditions (service size, completion time, and VM capacity) for managing and delegating user requests in order to balance the workload. Moreover, we propose an algorithm to allocate resources to meet service level agreement (SLA) and quality of service (QoS) requirements, as well as optimizing big data distribution in fog and cloud computing. Our simulation results show that our proposed approach can efficiently balance the workload, improve resource allocation, optimize big data distribution, and show better performance than other existing methods.
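
    The decision rules themselves are not given in the record, so the following sketch only illustrates rule-based delegation over the three conditions named above (service size, completion time and VM capacity); the thresholds and rule order are assumptions.

    ```python
    # Hedged sketch of rule-based delegation between fog and cloud, following the
    # three conditions named in the abstract; thresholds are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Request:
        size_mb: float          # size of the requested service/data
        deadline_s: float       # required completion time

    @dataclass
    class FogNode:
        free_vm_capacity: int   # VMs currently available at the edge

    def delegate(req: Request, fog: FogNode,
                 small_mb=50.0, tight_deadline_s=1.0) -> str:
        if fog.free_vm_capacity == 0:
            return "cloud"                   # no edge capacity left
        if req.deadline_s <= tight_deadline_s:
            return "fog"                     # delay-sensitive: keep at the edge
        if req.size_mb <= small_mb:
            return "fog"                     # small services fit edge resources
        return "cloud"                       # large, delay-tolerant work goes up

    print(delegate(Request(size_mb=10, deadline_s=0.5), FogNode(free_vm_capacity=2)))
    ```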

  2. Research on Turbofan Engine Model above Idle State Based on NARX Modeling Approach

    Science.gov (United States)

    Yu, Bing; Shu, Wenjun

    2017-03-01

    A nonlinear model of a turbofan engine above idle state based on NARX is studied. First, data sets for the JT9D engine are obtained via simulation of an existing model. Then, a nonlinear modeling scheme based on NARX is proposed and several models with different parameters are built from these data sets. Finally, simulations are performed to verify the precision and dynamic performance of the models; the results show that the NARX model reflects the dynamic characteristics of the turbofan engine with high accuracy.
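
    As a minimal illustration of the NARX idea (predicting the next output from lagged outputs and inputs through a nonlinear map), the sketch below fits a small neural network regressor to lagged features of a synthetic plant; the lag orders, network size and plant are assumptions, not the paper's JT9D setup.

    ```python
    # NARX-style sketch: next output predicted from lagged outputs and inputs
    # via a nonlinear regressor; the toy plant and lags are invented.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    T = 2000
    u = rng.uniform(0.4, 1.0, T)                       # fuel-flow-like input
    y = np.zeros(T)
    for k in range(2, T):                              # toy nonlinear plant
        y[k] = 0.6 * y[k-1] - 0.1 * y[k-2] + 0.8 * u[k-1] - 0.3 * u[k-1] * y[k-1]

    ny, nu = 2, 2                                      # output and input lag orders
    rows = range(max(ny, nu), T)
    X = np.array([[y[k-1], y[k-2], u[k-1], u[k-2]] for k in rows])
    t = np.array([y[k] for k in rows])

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(X[:1500], t[:1500])
    print("one-step-ahead R^2:", round(model.score(X[1500:], t[1500:]), 3))
    ```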

  3. Uranium resource assessments

    International Nuclear Information System (INIS)

    1981-01-01

    The objective of this investigation is to examine what is generally known about uranium resources, what is subject to conjecture, how well the explorers themselves understand the occurrence of uranium, and who the various participants in the exploration process are. From this we hope to reach a better understanding of the quality of uranium resource estimates as well as the nature of the exploration process. The underlying questions will remain unanswered. But given an inability to estimate our uranium resources precisely, how much do we really need to know? To answer this latter question, the various Department of Energy needs for uranium resource estimates are examined. This allows consideration of whether or not, given the absence of more complete long-term supply data and the associated problems of uranium deliverability for the electric utility industry, we are now threatened with nuclear power plants eventually standing idle due to an unanticipated lack of fuel for their reactors. Obviously this is of some consequence to the government and energy consuming public. The report is organized into four parts. Section I evaluates the uranium resource data base and the various methodologies of resource assessment. Part II describes the manner in which a private company goes about exploring for uranium and the nature of its internal need for resource information. Part III examines the structure of the industry for the purpose of determining the character of the industry with respect to resource development. Part IV arrives at conclusions about the emerging pattern of industrial behavior with respect to uranium supply and the implications this has for coping with national energy issues.

  4. Elaboration d’Indice composite de Développement du secteur bovin Laitier (IDL

    Directory of Open Access Journals (Sweden)

    K. KESSAB BELKHAYAT

    2014-03-01

    Full Text Available Several research studies have been published on measuring the level of development of the dairy cattle sector. However, none of them addresses the sector as a whole. The aim of this work is to build a dashboard for the dairy cattle sector through the development of a composite index. To construct the composite index, 39 indicators of the dairy cattle sector were identified within a conceptual framework covering 8 dimensions. Data collection covered the 41 variables making up the indicators, 37 countries, and a period of 11 years (2000-2010). After handling missing data, the complete database comprised 21 indicators for 23 countries over 9 years. A model was developed for normalizing and weighting the indicators and then computing the composite index. Robustness was tested by computing the Pearson correlation coefficient; this test showed that the composite index, calculated with 3 different normalization and weighting methods, is robust. Countries were ranked according to their IDL. Several lines of analysis are possible through the IDL, notably its evolution over time, the strengths and weaknesses of each country, and the levers for developing the sector.

  5. Network robustness assessed within a dual connectivity framework: joint dynamics of the Active and Idle Networks.

    Science.gov (United States)

    Tejedor, Alejandro; Longjas, Anthony; Zaliapin, Ilya; Ambroj, Samuel; Foufoula-Georgiou, Efi

    2017-08-17

    Network robustness against attacks has been widely studied in fields as diverse as the Internet, power grids and human societies. But the current definition of robustness accounts for only half of the story: the connectivity of the nodes unaffected by the attack. Here we propose a new framework to assess network robustness, wherein the connectivity of the affected nodes is also taken into consideration, acknowledging that it plays a crucial role in properly evaluating the overall network robustness in terms of its future recovery from the attack. Specifically, we propose a dual perspective approach wherein, at any instant in the network evolution under attack, two distinct networks are defined: (i) the Active Network (AN) composed of the unaffected nodes and (ii) the Idle Network (IN) composed of the affected nodes. The proposed robustness metric considers both the efficiency of destroying the AN and that of building up the IN. We show, via analysis of well-known prototype networks and real world data, that trade-offs between the efficiency of Active and Idle Network dynamics give rise to surprising robustness crossovers and re-rankings, which can have significant implications for decision making.
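
    The dual Active/Idle Network view can be illustrated with a short networkx sketch that tracks both subgraphs during a degree-based attack; the example graph, attack order and the simple largest-component summary are illustrative choices, not the paper's exact metric.

    ```python
    # Track the Active Network (unaffected nodes) and Idle Network (affected
    # nodes) during a degree-based attack on a synthetic scale-free graph.
    import networkx as nx

    def largest_cc_fraction(G):
        if G.number_of_nodes() == 0:
            return 0.0
        return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

    G = nx.barabasi_albert_graph(200, 2, seed=1)
    attack_order = sorted(G.nodes, key=G.degree, reverse=True)   # highest degree first

    removed = set()
    for step, node in enumerate(attack_order[:100], start=1):
        removed.add(node)
        active = G.subgraph(n for n in G if n not in removed)    # Active Network (AN)
        idle = G.subgraph(removed)                               # Idle Network (IN)
        if step % 25 == 0:
            print(step, round(largest_cc_fraction(active), 2),
                  round(largest_cc_fraction(idle), 2))
    ```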

  6. Research on the digital education resources of sharing pattern in independent colleges based on cloud computing environment

    Science.gov (United States)

    Xiong, Ting; He, Zhiwen

    2017-06-01

    Cloud computing, first proposed by Google in the United States, is centered on the Internet and provides a standard, open approach to shared network services. With the rapid development of higher education in China, the educational resources provided by colleges and universities fall far short of the actual needs for teaching resources. Therefore, cloud computing, which uses Internet technology to provide sharing services, has become an important means of sharing digital education resources in current higher education. Based on the cloud computing environment, this paper analyzed the existing problems in the sharing of digital educational resources in the independent colleges of Jiangxi Province. Given the cloud computing characteristics of mass storage, efficient operation and low investment, the author explored and studied the design of a sharing model for the digital educational resources of higher education in independent colleges. Finally, the design of the sharing model was put into practical application.

  7. An integrated system for land resources supervision based on the IoT and cloud computing

    Science.gov (United States)

    Fang, Shifeng; Zhu, Yunqiang; Xu, Lida; Zhang, Jinqu; Zhou, Peiji; Luo, Kan; Yang, Jie

    2017-01-01

    Integrated information systems are important safeguards for the utilisation and development of land resources. Information technologies, including the Internet of Things (IoT) and cloud computing, are inevitable requirements for the quality and efficiency of land resources supervision tasks. In this study, an economical and highly efficient supervision system for land resources has been established based on IoT and cloud computing technologies; a novel online and offline integrated system with synchronised internal and field data that includes the entire process of 'discovering breaches, analysing problems, verifying fieldwork and investigating cases' was constructed. The system integrates key technologies, such as the automatic extraction of high-precision information based on remote sensing, semantic ontology-based technology to excavate and discriminate public sentiment on the Internet that is related to illegal incidents, high-performance parallel computing based on MapReduce, uniform storing and compressing (bitwise) technology, global positioning system data communication and data synchronisation mode, intelligent recognition and four-level ('device, transfer, system and data') safety control technology. The integrated system based on a 'One Map' platform has been officially implemented by the Department of Land and Resources of Guizhou Province, China, and was found to significantly increase the efficiency and level of land resources supervision. The system promoted the overall development of informatisation in fields related to land resource management.

  8. Building an application for computing the resource requests such as disk, CPU, and tape and studying the time evolution of computing model

    CERN Document Server

    Noormandipour, Mohammad Reza

    2017-01-01

    The goal of this project was to build an application to calculate the computing resources needed by the LHCb experiment for data processing and analysis, and to predict their evolution in future years. The source code was developed in the Python programming language and the application was built and developed in CERN GitLab. This application will facilitate the calculation of the resources required by LHCb in both qualitative and quantitative aspects. The granularity of computations is improved to a weekly basis, in contrast with the yearly basis used so far. The LHCb computing model will benefit from the new possibilities and options added, as the new predictions and calculations are aimed at giving more realistic and accurate estimates.
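
    The record gives no formulas, so the following is only a back-of-the-envelope sketch of the kind of weekly resource estimate such an application computes; the trigger rate, event size and CPU time per event are invented placeholders, not LHCb computing-model numbers.

    ```python
    # Back-of-the-envelope weekly CPU and disk estimate; all inputs are invented
    # placeholders used purely to illustrate the arithmetic.
    SECONDS_PER_WEEK = 7 * 24 * 3600

    def weekly_needs(trigger_rate_hz, live_fraction, cpu_s_per_event,
                     raw_kb_per_event, replicas=2):
        events = trigger_rate_hz * live_fraction * SECONDS_PER_WEEK
        cpu_hours = events * cpu_s_per_event / 3600.0
        disk_tb = events * raw_kb_per_event * replicas / 1e9   # kB -> TB
        return events, cpu_hours, disk_tb

    events, cpu_hours, disk_tb = weekly_needs(
        trigger_rate_hz=5000, live_fraction=0.3,
        cpu_s_per_event=2.0, raw_kb_per_event=70)
    print(f"{events:.2e} events, {cpu_hours:.0f} CPU-hours, {disk_tb:.1f} TB on disk")
    ```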

  9. A Ternary Hybrid EEG-NIRS Brain-Computer Interface for the Classification of Brain Activation Patterns during Mental Arithmetic, Motor Imagery, and Idle State.

    Science.gov (United States)

    Shin, Jaeyoung; Kwon, Jinuk; Im, Chang-Hwan

    2018-01-01

    The performance of a brain-computer interface (BCI) can be enhanced by simultaneously using two or more modalities to record brain activity, which is generally referred to as a hybrid BCI. To date, many BCI researchers have tried to implement a hybrid BCI system by combining electroencephalography (EEG) and functional near-infrared spectroscopy (NIRS) to improve the overall accuracy of binary classification. However, since hybrid EEG-NIRS BCI, which will be denoted by hBCI in this paper, has not been applied to ternary classification problems, paradigms and classification strategies appropriate for ternary classification using hBCI are not well investigated. Here we propose the use of an hBCI for the classification of three brain activation patterns elicited by mental arithmetic, motor imagery, and idle state, with the aim to elevate the information transfer rate (ITR) of hBCI by increasing the number of classes while minimizing the loss of accuracy. EEG electrodes were placed over the prefrontal cortex and the central cortex, and NIRS optodes were placed only on the forehead. The ternary classification problem was decomposed into three binary classification problems using the "one-versus-one" (OVO) classification strategy to apply the filter-bank common spatial patterns filter to EEG data. A 10 × 10-fold cross validation was performed using shrinkage linear discriminant analysis (sLDA) to evaluate the average classification accuracies for EEG-BCI, NIRS-BCI, and hBCI when the meta-classification method was adopted to enhance classification accuracy. The ternary classification accuracies for EEG-BCI, NIRS-BCI, and hBCI were 76.1 ± 12.8, 64.1 ± 9.7, and 82.2 ± 10.2%, respectively. The classification accuracy of the proposed hBCI was thus significantly higher than those of the other BCIs ( p < 0.005). The average ITR for the proposed hBCI was calculated to be 4.70 ± 1.92 bits/minute, which was 34.3% higher than that reported for a previous binary hBCI study.
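
    The classification strategy named above (one-versus-one decomposition with shrinkage LDA) can be sketched directly with scikit-learn; the random feature matrix below merely stands in for real EEG/NIRS features.

    ```python
    # Ternary classification with one-versus-one decomposition and shrinkage LDA,
    # on placeholder features standing in for real EEG/NIRS data.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.multiclass import OneVsOneClassifier

    rng = np.random.default_rng(0)
    n_trials, n_features = 90, 40
    X = rng.normal(size=(n_trials, n_features))       # placeholder hBCI features
    y = np.repeat([0, 1, 2], n_trials // 3)           # arithmetic / imagery / idle
    X += y[:, None] * 0.4                             # inject some class separation

    slda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    clf = OneVsOneClassifier(slda)
    scores = cross_val_score(clf, X, y, cv=10)
    print("mean ternary accuracy:", round(scores.mean(), 3))
    ```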

  10. An interactive computer approach to performing resource analysis for a multi-resource/multi-project problem. [Spacelab inventory procurement planning

    Science.gov (United States)

    Schlagheck, R. A.

    1977-01-01

    New planning techniques and supporting computer tools are needed for the optimization of resources and costs for space transportation and payload systems. Heavy emphasis on cost effective utilization of resources has caused NASA program planners to look at the impact of various independent variables that affect procurement buying. A description is presented of a category of resource planning which deals with Spacelab inventory procurement analysis. Spacelab is a joint payload project between NASA and the European Space Agency and will be flown aboard the Space Shuttle starting in 1980. In order to respond rapidly to the various procurement planning exercises, a system was built that could perform resource analysis in a quick and efficient manner. This system is known as the Interactive Resource Utilization Program (IRUP). Attention is given to aspects of problem definition, an IRUP system description, questions of data base entry, the approach used for project scheduling, and problems of resource allocation.

  11. Testing a computer-based ostomy care training resource for staff nurses.

    Science.gov (United States)

    Bales, Isabel

    2010-05-01

    Fragmented teaching and ostomy care provided by nonspecialized clinicians unfamiliar with state-of-the-art care and products have been identified as problems in teaching ostomy care to the new ostomate. After conducting a literature review of theories and concepts related to the impact of nurse behaviors and confidence on ostomy care, the author developed a computer-based learning resource and assessed its effect on staff nurse confidence. Of 189 staff nurses with a minimum of 1 year acute-care experience employed in the acute care, emergency, and rehabilitation departments of an acute care facility in the Midwestern US, 103 agreed to participate and returned completed pre- and post-tests, each comprising the same eight statements about providing ostomy care. F and P values were computed for differences between pre- and post test scores. Based on a scale where 1 = totally disagree and 5 = totally agree with the statement, baseline confidence and perceived mean knowledge scores averaged 3.8 and after viewing the resource program post-test mean scores averaged 4.51, a statistically significant improvement (P = 0.000). The largest difference between pre- and post test scores involved feeling confident in having the resources to learn ostomy skills independently. The availability of an electronic ostomy care resource was rated highly in both pre- and post testing. Studies to assess the effects of increased confidence and knowledge on the quality and provision of care are warranted.

  12. Blockchain-Empowered Fair Computational Resource Sharing System in the D2D Network

    Directory of Open Access Journals (Sweden)

    Zhen Hong

    2017-11-01

    Full Text Available Device-to-device (D2D) communication is becoming an increasingly important technology in future networks with the climbing demand for local services. For instance, resource sharing in the D2D network features ubiquitous availability, flexibility, low latency and low cost. However, these features also bring along challenges when building a satisfactory resource sharing system in the D2D network. Specifically, user mobility is one of the top concerns for designing a cooperative D2D computational resource sharing system since mutual communication may not be stably available due to user mobility. A previous endeavour has demonstrated and proven how connectivity can be incorporated into cooperative task scheduling among users in the D2D network to effectively lower average task execution time. There are doubts about whether this type of task scheduling scheme, though effective, presents fairness among users. In other words, it can be unfair for users who contribute many computational resources while receiving little when in need. In this paper, we propose a novel blockchain-based credit system that can be incorporated into the connectivity-aware task scheduling scheme to enforce fairness among users in the D2D network. Users' computational task cooperation will be recorded on the public blockchain ledger in the system as transactions, and each user's credit balance can be easily accessible from the ledger. A supernode at the base station is responsible for scheduling cooperative computational tasks based on user mobility and user credit balance. We investigated the performance of the credit system, and simulation results showed that with a minor sacrifice of average task execution time, the level of fairness can obtain a major enhancement.
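
    The abstract does not detail the ledger format, so the sketch below only illustrates the credit idea: each completed cooperative task is appended to a log and credit balances are updated; the data structures and credit formula are assumptions.

    ```python
    # Minimal credit-ledger sketch: completed cooperative tasks are recorded as
    # transactions in an append-only log and credit balances are derived from it.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class TaskTransaction:
        helper: str        # device that contributed CPU time
        requester: str     # device whose task was executed
        cpu_seconds: float

    class CreditLedger:
        def __init__(self):
            self.transactions = []              # append-only, blockchain-style log
            self.balance = defaultdict(float)

        def record(self, tx: TaskTransaction):
            self.transactions.append(tx)
            self.balance[tx.helper] += tx.cpu_seconds
            self.balance[tx.requester] -= tx.cpu_seconds

    ledger = CreditLedger()
    ledger.record(TaskTransaction(helper="deviceA", requester="deviceB", cpu_seconds=12.0))
    ledger.record(TaskTransaction(helper="deviceB", requester="deviceC", cpu_seconds=5.0))
    print(dict(ledger.balance))   # a scheduler could favour devices with higher credit
    ```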

  13. Interactive Whiteboards and Computer Games at Highschool Level: Digital Resources for Enhancing Reflection in Teaching and Learning

    DEFF Research Database (Denmark)

    Sorensen, Elsebeth Korsgaard; Poulsen, Mathias; Houmann, Rita

    The general potential of computer games for teaching and learning is becoming widely recognized. In particular, within the application contexts of primary and lower secondary education, the relevance and value of computer games seem more accepted, and the possibility and willingness to incorporate...... computer games as a possible resource at the level of other educational resources seem more frequent. For some reason, however, applying computer games in processes of teaching and learning at the high school level seems an almost non-existent event. This paper reports on a study of incorporating...... the learning game “Global Conflicts: Latin America” as a resource into the teaching and learning of a course involving the two subjects “English language learning” and “Social studies” in the final year of a Danish high school. The study adapts an explorative research design approach and investigates...

  14. The ATLAS Computing Agora: a resource web site for citizen science projects

    CERN Document Server

    Bourdarios, Claire; The ATLAS collaboration

    2016-01-01

    The ATLAS collaboration has recently set up a number of citizen science projects which have a strong IT component and could not have been envisaged without the growth of general public computing resources and network connectivity: event simulation through volunteer computing, algorithm improvement via Machine Learning challenges, event display analysis on citizen science platforms, use of open data, etc. Most of the interactions with volunteers are handled through message boards, but specific outreach material was also developed, giving enhanced visibility to the ATLAS software and computing techniques, challenges and community. In this talk the ATLAS Computing Agora (ACA) web platform will be presented, as well as some of the specific material developed for the projects.

  15. Discrete sliding mode control for engine idle speed%离散滑模控制在发动机怠速设计中的应用

    Institute of Scientific and Technical Information of China (English)

    郭兴进; 刘珺

    2009-01-01

    A novel discrete sliding mode (DSM) controller is designed for engine idle speed. The engine idle speed is controlled using a previously developed nonlinear model of the idle speed control (ISC) system of a 4-cylinder, 1.4-litre AJR engine. The experimental results show that, compared with the original controller, the DSM control system has superior performance in tracking the desired idle speed and rejecting system disturbances.
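
    The record gives no equations for the controller. For orientation only, the sketch below simulates a generic discrete sliding-mode regulator on a toy first-order engine model; the plant, the gains and the boundary-layer smoothing are assumptions made for this demo and are not taken from the paper's nonlinear AJR idle-speed model.

      # Illustrative discrete sliding-mode speed regulator on a toy first-order plant
      # (not the paper's controller; all parameters are assumed for the demo).
      import math

      def simulate(n_ref=800.0, steps=400, dt=0.01):
          n = 650.0                          # engine speed [rpm]
          a, b, n0 = 0.5, 10.0, 700.0        # toy plant: dn/dt = -a*(n - n0) + b*u + d
          lam, K, phi = 5.0, 10.0, 10.0      # surface slope, switching gain, boundary layer
          e_prev = n_ref - n
          for k in range(steps):
              e = n_ref - n
              s = e + lam * (e - e_prev)             # discrete sliding surface
              u_eq = a * (n - n0) / b                # equivalent control (cancels known drift)
              u = u_eq + K * math.tanh(s / phi)      # smoothed switching term limits chattering
              d = 2.0 * math.sin(0.1 * k)            # unknown load disturbance
              n += dt * (-a * (n - n0) + b * u + d)
              e_prev = e
          return n

      print(round(simulate(), 1))   # ends close to the 800 rpm reference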

  16. Negative quasi-probability as a resource for quantum computation

    International Nuclear Information System (INIS)

    Veitch, Victor; Ferrie, Christopher; Emerson, Joseph; Gross, David

    2012-01-01

    A central problem in quantum information is to determine the minimal physical resources that are required for quantum computational speed-up and, in particular, for fault-tolerant quantum computation. We establish a remarkable connection between the potential for quantum speed-up and the onset of negative values in a distinguished quasi-probability representation, a discrete analogue of the Wigner function for quantum systems of odd dimension. This connection allows us to resolve an open question on the existence of bound states for magic state distillation: we prove that there exist mixed states outside the convex hull of stabilizer states that cannot be distilled to non-stabilizer target states using stabilizer operations. We also provide an efficient simulation protocol for Clifford circuits that extends to a large class of mixed states, including bound universal states. (paper)

  17. Values of decentralized systems that avoid investments in idle capacity within the wastewater sector: a theoretical justification.

    Science.gov (United States)

    Wang, Sheng

    2014-04-01

    In this work, the values of decentralized (onsite) systems that avoid investments in idle capacity within wastewater plans are quantitatively justified using the specific net present value (SNPV) approach. SNPV is a recently proposed criterion in environmental engineering economics that is defined as the net present value of the cost per unit of service or per population equivalent (PE). The SNPV approach was reintroduced with bugs fixed and then applied to the economic analysis of the capital and operating costs of one-stage completed central plants, stage-expanded central plants, and decentralized treatment facilities. The results show that under a demand growth scenario, the central plant will inevitably carry idle capacity, which can be reduced by a staged expansion. However, the staged expansion plan loses the economies of scale and, hence, is only viable under projections of a low or moderate price inflation rate or a high demand growth rate. Onsite treatment systems can theoretically achieve 100% utilization. Assuming that the capital costs per PE of the onsite and central systems are equal, the former is economically favorable in most cases of price inflation as a result of its cost savings on idle capacity. Onsite treatment systems can be viable even when their capital expenditures per PE are higher than those of a comparable centralized option. This finding suggests a much wider range of viable onsite technology choices. Use of the SNPV showed that average operating expenses of centralized plants decrease as demand growth rates increase, as a benefit of economies of scale, whereas those of onsite treatment systems depend only on price inflation. Semi-decentralized systems feature both the financial advantage of the onsite system (capital investment) and the superiority of centralized systems (operation and maintenance); they are thus worth consideration. The results of this study illustrate not only the value of decentralized systems but
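
    The record defines SNPV only verbally (the net present value of cost per unit of service or per population equivalent). One plausible formalization, written here purely for orientation and not taken from the paper, discounts both the yearly costs C_t and the population equivalents PE_t served over a planning horizon T at discount rate r, in the spirit of a levelized unit cost:

      \[
        \mathrm{SNPV} \;=\; \frac{\sum_{t=0}^{T} C_t\,(1+r)^{-t}}{\sum_{t=0}^{T} PE_t\,(1+r)^{-t}}
      \]

    Under this reading, a plant built with idle capacity carries its full discounted capital cost in the numerator while only the PE actually served enter the denominator, which is consistent with the abstract's conclusion that staged expansion or onsite systems with near-100% utilization can score better despite losing economies of scale.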

  18. Emotor control: computations underlying bodily resource allocation, emotions, and confidence.

    Science.gov (United States)

    Kepecs, Adam; Mensh, Brett D

    2015-12-01

    Emotional processes are central to behavior, yet their deeply subjective nature has been a challenge for neuroscientific study as well as for psychiatric diagnosis. Here we explore the relationships between subjective feelings and their underlying brain circuits from a computational perspective. We apply recent insights from systems neuroscience-approaching subjective behavior as the result of mental computations instantiated in the brain-to the study of emotions. We develop the hypothesis that emotions are the product of neural computations whose motor role is to reallocate bodily resources mostly gated by smooth muscles. This "emotor" control system is analogous to the more familiar motor control computations that coordinate skeletal muscle movements. To illustrate this framework, we review recent research on "confidence." Although familiar as a feeling, confidence is also an objective statistical quantity: an estimate of the probability that a hypothesis is correct. This model-based approach helped reveal the neural basis of decision confidence in mammals and provides a bridge to the subjective feeling of confidence in humans. These results have important implications for psychiatry, since disorders of confidence computations appear to contribute to a number of psychopathologies. More broadly, this computational approach to emotions resonates with the emerging view that psychiatric nosology may be best parameterized in terms of disorders of the cognitive computations underlying complex behavior.

  19. Modified stretched exponential model of computer system resources management limitations-The case of cache memory

    Science.gov (United States)

    Strzałka, Dominik; Dymora, Paweł; Mazurek, Mirosław

    2018-02-01

    In this paper we present some preliminary results in the field of computer systems management in relation to Tsallis thermostatistics and the ubiquitous problem of hardware-limited resources. In the case of systems with non-deterministic behaviour, management of their resources is a key point that guarantees their acceptable performance and proper operation. This is a very broad problem that poses many challenges in the financial, transport, water and food, health, and other areas. We focus on computer systems, with attention paid to cache memory, and propose to use an analytical model that is able to connect the non-extensive entropy formalism, long-range dependencies, management of system resources and queuing theory. The obtained analytical results are related to a practical experiment, showing interesting and valuable results.
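
    The record refers to a modified stretched exponential model but does not reproduce it. For reference, the classical stretched-exponential (Kohlrausch) relaxation function on which such models are built is shown below; the authors' precise modification is not given in the record:

      \[
        f(t) \;=\; \exp\!\left[-\left(\frac{t}{\tau}\right)^{\beta}\right],
        \qquad 0 < \beta \le 1,
      \]

    where \tau is a characteristic time scale and \beta stretches the exponential, producing the long tails associated with long-range dependence.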

  20. Adaptive Management of Computing and Network Resources for Spacecraft Systems

    Science.gov (United States)

    Pfarr, Barbara; Welch, Lonnie R.; Detter, Ryan; Tjaden, Brett; Huh, Eui-Nam; Szczur, Martha R. (Technical Monitor)

    2000-01-01

    It is likely that NASA's future spacecraft systems will consist of distributed processes which will handle dynamically varying workloads in response to perceived scientific events, the spacecraft environment, spacecraft anomalies and user commands. Since all situations and possible uses of sensors cannot be anticipated during pre-deployment phases, an approach for dynamically adapting the allocation of distributed computational and communication resources is needed. To address this, we are evolving the DeSiDeRaTa adaptive resource management approach to enable reconfigurable ground and space information systems. The DeSiDeRaTa approach embodies a set of middleware mechanisms for adapting resource allocations, and a framework for reasoning about the real-time performance of distributed application systems. The framework and middleware will be extended to accommodate (1) the dynamic aspects of intra-constellation network topologies, and (2) the complete real-time path from the instrument to the user. We are developing a ground-based testbed that will enable NASA to perform early evaluation of adaptive resource management techniques without the expense of first deploying them in space. The benefits of the proposed effort are numerous, including the ability to use sensors in new ways not anticipated at design time; the production of information technology that ties the sensor web together; the accommodation of greater numbers of missions with fewer resources; and the opportunity to leverage the DeSiDeRaTa project's expertise, infrastructure and models for adaptive resource management for distributed real-time systems.

  1. Collocational Relations in Japanese Language Textbooks and Computer-Assisted Language Learning Resources

    Directory of Open Access Journals (Sweden)

    Irena SRDANOVIĆ

    2011-05-01

    Full Text Available In this paper, we explore the presence of collocational relations in computer-assisted language learning systems and other language resources for the Japanese language, on the one hand, and in Japanese language learning textbooks and wordlists, on the other. After introducing how important it is to learn collocational relations in a foreign language, we examine their coverage in the various learners’ resources for the Japanese language. We particularly concentrate on a few collocations at the beginner’s level, where we demonstrate their treatment across various resources. Special attention is paid to what is referred to as unpredictable collocations, which have a bigger foreign-language learning burden than the predictable ones.

  2. Cost-Benefit Analysis of Computer Resources for Machine Learning

    Science.gov (United States)

    Champion, Richard A.

    2007-01-01

    Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
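
    The report's workflow is described only in words; the toy NumPy sketch below illustrates the underlying cost/benefit idea (subsample calibration points, stratified so that regions with a more variable response receive more points, and watch how the fit changes with sample size). The PNN, the data and the allocation rule are all replaced by stand-ins, so this is an assumption-laden illustration rather than the report's procedure.

      # Toy stratified-sampling illustration (NumPy only); the PNN is replaced by a
      # cheap polynomial fit and the data by a synthetic quadratic relationship.
      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.uniform(0, 1, 5000)                              # e.g. roadless-area measure
      y = 3 * (x - 0.5) ** 2 + rng.normal(0, 0.05, x.size)     # e.g. population-density proxy

      def stratified_sample(x, y, n_total, n_strata=10):
          """Allocate more calibration points to strata where the response varies more."""
          edges = np.linspace(0, 1, n_strata + 1)
          strata = [np.where((x >= lo) & (x < hi))[0] for lo, hi in zip(edges[:-1], edges[1:])]
          spread = np.array([y[idx].std() + 1e-9 for idx in strata])
          alloc = np.maximum(1, (n_total * spread / spread.sum()).astype(int))
          return np.concatenate([rng.choice(idx, min(k, idx.size), replace=False)
                                 for idx, k in zip(strata, alloc)])

      for n in (50, 200, 1000):                                # growing calibration "cost"
          sel = stratified_sample(x, y, n)
          coef = np.polyfit(x[sel], y[sel], deg=2)             # stand-in for PNN calibration
          rmse = np.sqrt(np.mean((np.polyval(coef, x) - y) ** 2))
          print(n, round(float(rmse), 4))                      # benefit flattens: diminishing returns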

  3. Photonic entanglement as a resource in quantum computation and quantum communication

    OpenAIRE

    Prevedel, Robert; Aspelmeyer, Markus; Brukner, Caslav; Jennewein, Thomas; Zeilinger, Anton

    2008-01-01

    Entanglement is an essential resource in current experimental implementations for quantum information processing. We review a class of experiments exploiting photonic entanglement, ranging from one-way quantum computing over quantum communication complexity to long-distance quantum communication. We then propose a set of feasible experiments that will underline the advantages of photonic entanglement for quantum information processing.

  4. Critical phenomena in communication/computation networks with various topologies and suboptimal to optimal resource allocation

    Science.gov (United States)

    Cogoni, Marco; Busonera, Giovanni; Anedda, Paolo; Zanetti, Gianluigi

    2015-01-01

    We generalize previous studies on critical phenomena in communication networks [1,2] by adding computational capabilities to the nodes. In our model, a set of tasks with random origin, destination and computational structure is distributed on a computational network, modeled as a graph. By varying the temperature of a Metropolis Monte Carlo process, we explore the global latency for an optimal to suboptimal resource assignment at a given time instant. By computing the two-point correlation function for the local overload, we study the behavior of the correlation distance (both for links and nodes) while approaching the congested phase: a transition from peaked to spread g(r) is seen above a critical (Monte Carlo) temperature Tc. The average latency trend of the system is predicted by averaging over several network traffic realizations while maintaining spatially detailed information for each node: a sharp decrease of performance is found over Tc independently of the workload. The globally optimized computational resource allocation and network routing defines a baseline for a future comparison of the transition behavior with respect to existing routing strategies [3,4] for different network topologies.
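
    As a concrete, deliberately tiny illustration of the optimization loop sketched above, the Python snippet below assigns tasks to nodes and applies Metropolis moves that accept latency-increasing reassignments with probability exp(-ΔE/T); lowering T drives the assignment from suboptimal toward optimal. The graph, the capacities and the quadratic overload penalty are assumptions made only for this demo, not the paper's model.

      # Toy Metropolis Monte Carlo over task-to-node assignments; the "energy" is a
      # simple overload penalty standing in for the paper's global latency.
      import math, random

      random.seed(1)
      n_nodes, n_tasks = 20, 60
      capacity = [5.0] * n_nodes
      tasks = [random.uniform(0.5, 2.0) for _ in range(n_tasks)]      # computational weight
      initial = [random.randrange(n_nodes) for _ in range(n_tasks)]

      def energy(assign):
          load = [0.0] * n_nodes
          for t, node in enumerate(assign):
              load[node] += tasks[t]
          return sum(max(l - c, 0.0) ** 2 for l, c in zip(load, capacity))

      def metropolis(assign, T, sweeps=2000):
          e = energy(assign)
          for _ in range(sweeps):
              t = random.randrange(n_tasks)
              old = assign[t]
              assign[t] = random.randrange(n_nodes)                   # propose a reassignment
              e_new = energy(assign)
              if e_new <= e or random.random() < math.exp(-(e_new - e) / T):
                  e = e_new                                           # accept
              else:
                  assign[t] = old                                     # reject
          return e

      for T in (5.0, 1.0, 0.1):          # lower temperature -> assignment closer to optimal
          print(T, round(metropolis(list(initial), T), 3))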

  5. LHCb: Control and Monitoring of the Online Computer Farm for Offline processing in LHCb

    CERN Multimedia

    Granado Cardoso, L A; Closier, J; Frank, M; Gaspar, C; Jost, B; Liu, G; Neufeld, N; Callot, O

    2013-01-01

    LHCb, one of the 4 experiments at the LHC accelerator at CERN, uses approximately 1500 PCs (averaging 12 cores each) for processing the High Level Trigger (HLT) during physics data taking. During periods when data acquisition is not required most of these PCs are idle. In these periods it is possible to profit from the unused processing capacity to run offline jobs, such as Monte Carlo simulation. The LHCb offline computing environment is based on LHCbDIRAC (Distributed Infrastructure with Remote Agent Control). In LHCbDIRAC, job agents are started on Worker Nodes, pull waiting tasks from the central WMS (Workload Management System) and process them on the available resources. A Control System was developed which is able to launch, control and monitor the job agents for the offline data processing on the HLT Farm. This control system is based on the existing Online System Control infrastructure, the PVSS SCADA and the FSM toolkit. It has been extensively used for launching and monitoring 22,000+ agents simultaneo...

  6. Architecture for dynamic load balancing by adaptive demand, using CORBA in JAVA-IDL

    OpenAIRE

    Jesús Chávez Esparza; Gerardo Rentería Castillo; Francisco Javier Luna Rosas

    2008-01-01

    In this work we describe the development of a new architecture for dynamic load balancing by adaptive demand, using CORBA in JAVA-IDL. A load-balancing architecture is a system that distributes the computational work among several machines, with the aim of reducing the overall response time of the system. The tests justify the use of the architecture and define the parameters to be considered in order to obtain optimal performance, referring...

  7. Resource-constrained project scheduling: computing lower bounds by solving minimum cut problems

    NARCIS (Netherlands)

    Möhring, R.H.; Nesetril, J.; Schulz, A.S.; Stork, F.; Uetz, Marc Jochen

    1999-01-01

    We present a novel approach to compute Lagrangian lower bounds on the objective function value of a wide class of resource-constrained project scheduling problems. The basis is a polynomial-time algorithm to solve the following scheduling problem: Given a set of activities with start-time dependent

  8. Building a Snow Data Management System using Open Source Software (and IDL)

    Science.gov (United States)

    Goodale, C. E.; Mattmann, C. A.; Ramirez, P.; Hart, A. F.; Painter, T.; Zimdars, P. A.; Bryant, A.; Brodzik, M.; Skiles, M.; Seidel, F. C.; Rittger, K. E.

    2012-12-01

    At NASA's Jet Propulsion Laboratory, free and open source software is used every day to support a wide range of projects, from planetary to climate to research and development. In this abstract I will discuss the key role that open source software has played in building a robust science data processing pipeline for snow hydrology research, and how the system is also able to leverage programs written in IDL, making JPL's Snow Data System a hybrid of open source and proprietary software. Main points: the design of the Snow Data System (illustrating how the collection of sub-systems is combined to create a complete data processing pipeline); the challenges of moving from a single algorithm on a laptop to running hundreds of parallel algorithms on a cluster of servers (lessons learned), including code changes, software-license-related challenges and storage requirements; system evolution (from data archiving, to data processing, to data on a map, to near-real-time products and maps); and the road map for the next 6 months (including how easily we re-used the snowDS code base to support the Airborne Snow Observatory Mission). Software in use and software licenses: IDL - used for pre- and post-processing of data, licensed under a proprietary software license held by Exelis; Apache OODT - used for data management and workflow processing, licensed under the Apache License Version 2; GDAL - geospatial data processing library, currently used for data re-projection, licensed under the X/MIT license; GeoServer - WMS server, licensed under the General Public License Version 2.0; Leaflet.js - JavaScript web mapping library, licensed under the Berkeley Software Distribution License; Python - glue code and miscellaneous data processing support, licensed under the Python Software Foundation License; Perl - script wrapper for running the SCAG algorithm, licensed under the General Public License Version 3; PHP - front-end web application programming, licensed under the PHP License Version

  9. A Safety Resource Allocation Mechanism against Connection Fault for Vehicular Cloud Computing

    Directory of Open Access Journals (Sweden)

    Tianpeng Ye

    2016-01-01

    Full Text Available The Intelligent Transportation System (ITS) is becoming an important component of the smart city, working toward safer roads, better traffic control, and on-demand services by utilizing and processing the information collected from sensors on vehicles and road side infrastructure. In ITS, Vehicular Cloud Computing (VCC) is a novel technology balancing the requirements of complex services and the limited capability of on-board computers. However, the behaviors of the vehicles in VCC are dynamic, random, and complex. Thus, one of the key safety issues is the frequent disconnection between the vehicle and the Vehicular Cloud (VC) while this vehicle is computing for a service. More importantly, connection faults seriously disturb the normal services of VCC and impact the safe operation of the transportation system. In this paper, a safety resource allocation mechanism is proposed against connection faults in VCC by using a modified workflow with prediction capability. We first propose a probability model for vehicle movement which satisfies the high dynamics and real-time requirements of VCC. We then propose a Prediction-based Reliability Maximization Algorithm (PRMA) to realize safety resource allocation for VCC. The evaluation shows that our mechanism can improve the reliability and guarantee the real-time performance of the VCC.

  10. A multi-group and preemptable scheduling of cloud resource based on HTCondor

    Science.gov (United States)

    Jiang, Xiaowei; Zou, Jiaheng; Cheng, Yaodong; Shi, Jingyan

    2017-10-01

    Due to the features of virtual machines - flexibility, easy control and varied system environments - more and more fields, including high energy physics, utilize virtualization technology to construct distributed systems from virtual resources. This paper introduces a method, used in high energy physics, that supports multiple resource groups and preemptable cloud resource scheduling, combining virtual machines with HTCondor (a batch system). It makes resource control more flexible and more efficient and makes resource scheduling independent of job scheduling. Firstly, the resources belong to different experiment-groups, and the mapping of user-groups to resource-groups (the same as experiment-groups) is one-to-one or many-to-one. To keep this group structure simple to manage, we designed a permission control component to ensure that the different resource-groups get suitable jobs. Secondly, for the purpose of elastically allocating resources to the appropriate resource-group, it is necessary to schedule resources in the same way as jobs. So this paper designs a cloud resource scheduler that maintains a resource queue and allocates an appropriate amount of virtual resources to the requesting resource-group. Thirdly, in some situations, resources occupied for a long time need to be preempted. This paper adds a preemption function to the resource scheduling that implements resource preemption based on group priority. Additionally, the preemption is soft: when virtual resources are preempted, jobs are not killed but are held and rematched later. It is implemented with the help of HTCondor, storing the held job information in the scheduler, releasing the job to idle status and performing a second match. At IHEP (Institute of High Energy Physics), we have built a batch system based on HTCondor with a virtual resource pool based on OpenStack, and this paper will show some cases from the JUNO experiment
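
    HTCondor configuration itself is not reproduced in the record; purely as an illustration of the "soft preemption" idea described above (hold the lower-priority job and re-match it later instead of killing it), the following Python sketch models a pool with group priorities. The class names, the priority numbers and the group names are invented for the example and are not the IHEP implementation.

      # Generic sketch of group-priority-based soft preemption: if no slot is idle,
      # a slot owned by a lower-priority group is reclaimed and its job is held
      # for a later re-match rather than being killed.
      class Slot:
          def __init__(self, slot_id):
              self.slot_id, self.group, self.job = slot_id, None, None

      class Pool:
          def __init__(self, n_slots, group_priority):
              self.slots = [Slot(i) for i in range(n_slots)]
              self.priority = group_priority          # higher number = higher priority
              self.held_jobs = []                     # (group, job) pairs awaiting re-match

          def request(self, group, job):
              slot = next((s for s in self.slots if s.group is None), None)
              if slot is None:
                  victims = [s for s in self.slots
                             if self.priority[s.group] < self.priority[group]]
                  if not victims:
                      return None                     # nothing preemptable; request must wait
                  slot = min(victims, key=lambda s: self.priority[s.group])
                  self.held_jobs.append((slot.group, slot.job))   # soft preemption: hold, don't kill
              slot.group, slot.job = group, job
              return slot.slot_id

      pool = Pool(2, {"juno": 2, "opportunistic": 1})
      pool.request("opportunistic", "mc-sim-1")
      pool.request("opportunistic", "mc-sim-2")
      print(pool.request("juno", "reco-1"))   # claims a slot by preempting an opportunistic job
      print(pool.held_jobs)                   # the preempted job is held for a second match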

  11. Elastic Extension of a CMS Computing Centre Resources on External Clouds

    Science.gov (United States)

    Codispoti, G.; Di Maria, R.; Aiftimiei, C.; Bonacorsi, D.; Calligola, P.; Ciaschini, V.; Costantini, A.; Dal Pra, S.; DeGirolamo, D.; Grandi, C.; Michelotto, D.; Panella, M.; Peco, G.; Sapunenko, V.; Sgaravatto, M.; Taneja, S.; Zizzi, G.

    2016-10-01

    After the successful LHC data taking in Run-I and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run-II is dimensioned to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may however originate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, CMS - along the lines followed by other LHC experiments - is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present the proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, on an external OpenStack infrastructure. We focus on the “Cloud Bursting” of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time they serve as an extension of the farm for local usage. The amount of resources allocated can thus be elastically modeled to cope with the needs of the CMS experiment and local users. Moreover, a direct access/integration of OpenStack resources to the CMS workload management system is explored. In this paper we present this approach, we report on the performance of the on-demand allocated resources, and we discuss the lessons learned and the next steps.

  12. Job Management and Task Bundling

    Directory of Open Access Journals (Sweden)

    Berkowitz Evan

    2018-01-01

    Full Text Available High Performance Computing is often performed on scarce and shared computing resources. To ensure computers are used to their full capacity, administrators often incentivize large workloads that are not possible on smaller systems. Measurements in Lattice QCD frequently do not scale to machine-size workloads. By bundling tasks together we can create large jobs suitable for gigantic partitions. We discuss METAQ and mpi_jm, software developed to dynamically group computational tasks together, that can intelligently backfill to consume idle time without substantial changes to users’ current workflows or executables.

  13. Job Management and Task Bundling

    Science.gov (United States)

    Berkowitz, Evan; Jansen, Gustav R.; McElvain, Kenneth; Walker-Loud, André

    2018-03-01

    High Performance Computing is often performed on scarce and shared computing resources. To ensure computers are used to their full capacity, administrators often incentivize large workloads that are not possible on smaller systems. Measurements in Lattice QCD frequently do not scale to machine-size workloads. By bundling tasks together we can create large jobs suitable for gigantic partitions. We discuss METAQ and mpi_jm, software developed to dynamically group computational tasks together, that can intelligently backfill to consume idle time without substantial changes to users' current workflows or executables.

  14. PredMP: A Web Resource for Computationally Predicted Membrane Proteins via Deep Learning

    KAUST Repository

    Wang, Sheng; Fei, Shiyang; Zongan, Wang; Li, Yu; Zhao, Feng; Gao, Xin

    2018-01-01

    structures in Protein Data Bank (PDB). To elucidate the MP structures computationally, we developed a novel web resource, denoted as PredMP (http://52.87.130.56:3001/#/proteinindex), that delivers one-dimensional (1D) annotation of the membrane topology

  15. Open Educational Resources: The Role of OCW, Blogs and Videos in Computer Networks Classroom

    Directory of Open Access Journals (Sweden)

    Pablo Gil

    2012-09-01

    Full Text Available This paper analyzes the learning experiences and opinions obtained from a group of undergraduate students in their interaction with several on-line multimedia resources included in a free on-line course about Computer Networks. These new educational resources employed are based on the Web2.0 approach such as blogs, videos and virtual labs which have been added in a web-site for distance self-learning.

  16. Monitoring of Computing Resource Use of Active Software Releases in ATLAS

    CERN Document Server

    Limosani, Antonio; The ATLAS collaboration

    2016-01-01

    The LHC is the world's most powerful particle accelerator, colliding protons at centre of mass energy of 13 TeV. As the energy and frequency of collisions has grown in the search for new physics, so too has demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as "MemoryMonitor", to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed...

  17. In-Cabin Air Quality during Driving and Engine Idling in Air-Conditioned Private Vehicles in Hong Kong.

    Science.gov (United States)

    Barnes, Natasha Maria; Ng, Tsz Wai; Ma, Kwok Keung; Lai, Ka Man

    2018-03-27

    Many people spend lengthy periods each day in enclosed vehicles in Hong Kong. However, comparably limited data is available about in-cabin air quality in air-conditioned private vehicles, and the car usage that may affect the air quality. Fifty-one vehicles were tested for particulate matter (PM0.3 and PM2.5), total volatile organic compounds (TVOCs), carbon monoxide (CO), carbon dioxide (CO₂), airborne bacteria, and fungi levels during their routine travel journey. Ten of these vehicles were further examined for PM0.3, PM2.5, TVOCs, CO, and CO₂ during engine idling. In general, during driving, in-cabin PM2.5 levels decreased over time, but PM0.3 levels did not. For TVOCs, 24% of vehicles exceeded the recommended Indoor Air Quality (IAQ) level for offices and public places set by the Hong Kong Environmental Protection Department. The total volatile organic compound (TVOC) concentration positively correlated with the age of the vehicle. Carbon monoxide (CO) levels in all of the vehicles were lower than the IAQ recommendation, while 96% of vehicles exceeded the recommended CO₂ level of 1000 ppmv; 16% of vehicles exceeded 5000 ppmv. Microbial counts were relatively low. TVOC levels during engine idling were higher than those during driving. Although the time we spend in vehicles is short, the potential exposure to high levels of pollutants should not be overlooked.

  18. In-Cabin Air Quality during Driving and Engine Idling in Air-Conditioned Private Vehicles in Hong Kong

    Directory of Open Access Journals (Sweden)

    Natasha Maria Barnes

    2018-03-01

    Full Text Available Many people spend lengthy periods each day in enclosed vehicles in Hong Kong. However, comparably limited data is available about in-cabin air quality in air-conditioned private vehicles, and the car usage that may affect the air quality. Fifty-one vehicles were tested for particulate matter (PM0.3 and PM2.5), total volatile organic compounds (TVOCs), carbon monoxide (CO), carbon dioxide (CO2), airborne bacteria, and fungi levels during their routine travel journey. Ten of these vehicles were further examined for PM0.3, PM2.5, TVOCs, CO, and CO2 during engine idling. In general, during driving, in-cabin PM2.5 levels decreased over time, but PM0.3 levels did not. For TVOCs, 24% of vehicles exceeded the recommended Indoor Air Quality (IAQ) level for offices and public places set by the Hong Kong Environmental Protection Department. The total volatile organic compound (TVOC) concentration positively correlated with the age of the vehicle. Carbon monoxide (CO) levels in all of the vehicles were lower than the IAQ recommendation, while 96% of vehicles exceeded the recommended CO2 level of 1000 ppmv; 16% of vehicles exceeded 5000 ppmv. Microbial counts were relatively low. TVOC levels during engine idling were higher than those during driving. Although the time we spend in vehicles is short, the potential exposure to high levels of pollutants should not be overlooked.

  19. Higher capacity, lower carbon dioxide emissions. Idle power compensation in HV lines; Mehr Kapazitaet, weniger Kohlendioxid. Blindleistungskompensation bei Hochspannungsleitungen

    Energy Technology Data Exchange (ETDEWEB)

    Auer, Jan-Hendrik von [Alstom Grid GmbH, Berlin (Germany). Team Leistungselektronik und Kompensationsanlagen

    2012-07-01

    Even today, many HV lines have reached their limits. It is therefore highly urgent to find measures for optimum utilization of the available overhead transmission capacities, e.g. by reactive (idle) power compensation. Together with a filter for harmonics reduction, this will ensure higher grid stability and enhance transport capacities while reducing transport losses, thus saving money and reducing CO2 emissions. (orig./AKB)

  20. Production of palm and Calophyllum inophyllum based biodiesel and investigation of blend performance and exhaust emission in an unmodified diesel engine at high idling conditions

    International Nuclear Information System (INIS)

    Rahman, S.M. Ashrafur; Masjuki, H.H.; Kalam, M.A.; Abedin, M.J.; Sanjid, A.; Sajjad, H.

    2013-01-01

    Highlights: • Biodiesel produced from palm and Calophyllum oil using the trans-esterification process. • Produced biodiesel properties were compared with ASTM D6751 standards. • Engine performance and exhaust emissions were evaluated at high idling conditions. • Idling CO and HC emissions were reduced using biodiesel–diesel blends. • For low percentages of biodiesel–diesel blends, NOx emission increased negligibly. - Abstract: Rapid depletion of fossil fuels, increasing fossil-fuel prices, the carbon price, and the quest for low-carbon fuel for a cleaner environment – these are the reasons researchers are looking for alternatives to fossil fuels. Being renewable, non-flammable, biodegradable, and non-toxic are some reasons that make biodiesel a suitable candidate to replace fossil fuel in the near future. In recent years, production and use of biodiesel has gained popularity in many countries of the world. In this research, biodiesel from palm and Calophyllum inophyllum oil has been produced using the trans-esterification process. Properties of the produced biodiesels were compared with the ASTM D6751 standard: biodiesel standard and testing methods. Density, kinematic viscosity, flash point, cloud point, pour point and calorific value are the six main physicochemical properties that were investigated. Both palm biodiesel and Calophyllum biodiesel were within the standard limits, so both can be used as alternatives to diesel fuel. Furthermore, engine performance and emission parameters of a diesel engine run on both palm biodiesel–diesel and Calophyllum biodiesel–diesel blends were evaluated at high idling conditions. Brake specific fuel consumption increased for both biodiesel–diesel blends compared to pure diesel fuel; however, at the highest idling condition, this increase was almost negligible. Exhaust gas temperatures decreased as blend percentages increased for both biodiesel–diesel blends. For low blend percentages the increase in NO

  1. Resource allocation in grid computing

    NARCIS (Netherlands)

    Koole, Ger; Righter, Rhonda

    2007-01-01

    Grid computing, in which a network of computers is integrated to create a very fast virtual computer, is becoming ever more prevalent. Examples include the TeraGrid and Planet-lab.org, as well as applications on the existing Internet that take advantage of unused computing and storage capacity of

  2. Automated Spatio-Temporal Analysis of Remotely Sensed Imagery for Water Resources Management

    Science.gov (United States)

    Bahr, Thomas

    2016-04-01

    Since 2012, the state of California faces an extreme drought, which impacts water supply in many ways. Advanced remote sensing is an important technology to better assess water resources, monitor drought conditions and water supplies, plan for drought response and mitigation, and measure drought impacts. In the present case study latest time series analysis capabilities are used to examine surface water in reservoirs located along the western flank of the Sierra Nevada region of California. This case study was performed using the COTS software package ENVI 5.3. Integration of custom processes and automation is supported by IDL (Interactive Data Language). Thus, ENVI analytics is running via the object-oriented and IDL-based ENVITask API. A time series from Landsat images (L-5 TM, L-7 ETM+, L-8 OLI) of the AOI was obtained for 1999 to 2015 (October acquisitions). Downloaded from the USGS EarthExplorer web site, they already were georeferenced to a UTM Zone 10N (WGS-84) coordinate system. ENVITasks were used to pre-process the Landsat images as follows: • Triangulation based gap-filling for the SLC-off Landsat-7 ETM+ images. • Spatial subsetting to the same geographic extent. • Radiometric correction to top-of-atmosphere (TOA) reflectance. • Atmospheric correction using QUAC®, which determines atmospheric correction parameters directly from the observed pixel spectra in a scene, without ancillary information. Spatio-temporal analysis was executed with the following tasks: • Creation of Modified Normalized Difference Water Index images (MNDWI, Xu 2006) to enhance open water features while suppressing noise from built-up land, vegetation, and soil. • Threshold based classification of the water index images to extract the water features. • Classification aggregation as a post-classification cleanup process. • Export of the respective water classes to vector layers for further evaluation in a GIS. • Animation of the classification series and export to
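
    The processing chain above runs through ENVITask and IDL; as a library-neutral illustration of the central water-mapping step only, the short NumPy sketch below computes the Modified NDWI of Xu (2006), (Green - SWIR) / (Green + SWIR), and thresholds it into a water mask. The toy band arrays and the 0.0 threshold are placeholders, not the study's data or settings.

      # Minimal NumPy sketch: MNDWI (Xu 2006) followed by threshold classification.
      import numpy as np

      def mndwi(green, swir):
          """MNDWI = (Green - SWIR) / (Green + SWIR) on reflectance arrays."""
          green = green.astype(np.float64)
          swir = swir.astype(np.float64)
          return (green - swir) / np.clip(green + swir, 1e-6, None)

      def water_mask(green, swir, threshold=0.0):
          return mndwi(green, swir) > threshold      # True where the pixel is classified as water

      # toy "time series": two synthetic acquisitions of a 4x4 scene
      rng = np.random.default_rng(0)
      for year in (1999, 2015):
          green = rng.uniform(0.05, 0.4, (4, 4))
          swir = rng.uniform(0.01, 0.3, (4, 4))
          print(year, int(water_mask(green, swir).sum()), "water pixels")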

  3. Rational use of cognitive resources: levels of analysis between the computational and the algorithmic.

    Science.gov (United States)

    Griffiths, Thomas L; Lieder, Falk; Goodman, Noah D

    2015-04-01

    Marr's levels of analysis-computational, algorithmic, and implementation-have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: How do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the notion of rationality, often used in defining computational-level models, deeper toward the algorithmic level. We offer a simple recipe for reverse-engineering the mind's cognitive strategies by deriving optimal algorithms for a series of increasingly more realistic abstract computational architectures, which we call "resource-rational analysis." Copyright © 2015 Cognitive Science Society, Inc.

  4. An Automatic Decision-Making Mechanism for Virtual Machine Live Migration in Private Clouds

    Directory of Open Access Journals (Sweden)

    Ming-Tsung Kao

    2014-01-01

    Full Text Available Due to the increasing number of computer hosts deployed in an enterprise, automatic management of electronic applications is inevitable. To provide diverse services, there will be increases in procurement, maintenance, and electricity costs. Virtualization technology is getting popular in cloud computing environment, which enables the efficient use of computing resources and reduces the operating cost. In this paper, we present an automatic mechanism to consolidate virtual servers and shut down the idle physical machines during the off-peak hours, while activating more machines at peak times. Through the monitoring of system resources, heavy system loads can be evenly distributed over physical machines to achieve load balancing. By integrating the feature of load balancing with virtual machine live migration, we successfully develop an automatic private cloud management system. Experimental results demonstrate that, during the off-peak hours, we can save power consumption of about 69 W by consolidating the idle virtual servers. And the load balancing implementation has shown that two machines with 80% and 40% CPU loads can be uniformly balanced to 60% each. And, through the use of preallocated virtual machine images, the proposed mechanism can be easily applied to a large amount of physical machines.
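
    To make the two decisions described above concrete (consolidate lightly loaded hosts so idle machines can be powered down, and relieve overloaded hosts by migrating virtual machines), the toy Python sketch below picks migrations from a dictionary of per-host CPU shares. The thresholds and the host/VM names are invented for the example and do not come from the paper.

      # Toy migration planner: empty hosts below the "low" threshold and relieve hosts
      # above the "high" threshold by moving their cheapest VM to the least-loaded host.
      def plan_migrations(hosts, high=0.75, low=0.20):
          """hosts: {host: {vm: cpu_share}}; returns a list of (vm, src, dst) moves."""
          migrations = []
          load = lambda h: sum(hosts[h].values())
          for src in sorted(hosts, key=load):
              while hosts[src] and (load(src) < low or load(src) > high):
                  vm = min(hosts[src], key=hosts[src].get)        # cheapest VM to move
                  dst = min((h for h in hosts if h != src), key=load)
                  if load(dst) + hosts[src][vm] > high:
                      break                                       # no host can absorb it safely
                  hosts[dst][vm] = hosts[src].pop(vm)
                  migrations.append((vm, src, dst))
          return migrations

      hosts = {"pm1": {"vm1": 0.45, "vm2": 0.25}, "pm2": {"vm3": 0.4}, "pm3": {"vm4": 0.05}}
      print(plan_migrations(hosts))   # vm4 leaves pm3, so pm3 can be shut down off-peak
      print({h: round(sum(v.values()), 2) for h, v in hosts.items()})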

  5. Holding-time-aware asymmetric spectrum allocation in virtual optical networks

    Science.gov (United States)

    Lyu, Chunjian; Li, Hui; Liu, Yuze; Ji, Yuefeng

    2017-10-01

    Virtual optical networks (VONs) have been considered a promising solution to support current high-capacity dynamic traffic and achieve rapid application deployment. Since most of the network services (e.g., high-definition video service, cloud computing, distributed storage) in VONs are provisioned by dedicated data centers, needing different amounts of bandwidth in the two directions, the network traffic is mostly asymmetric. The common strategy of symmetric provisioning of traffic in optical networks leads to a waste of spectrum resources under such traffic patterns. In this paper, we design a holding-time-aware asymmetric spectrum allocation module based on an SDON architecture, and an asymmetric spectrum allocation algorithm based on the module is proposed. For the purpose of reducing the waste of spectrum resources, the algorithm attempts to reallocate the idle unidirectional spectrum slots in VONs, which arise due to the asymmetry of services' bidirectional bandwidth. This portion of resources can be exploited by other requests, such as short-time non-VON requests. We also introduce a two-dimensional asymmetric resource model for maintaining idle spectrum resource information of a VON in the spectrum and time domains. Moreover, a simulation is designed to evaluate the performance of the proposed algorithm, and results show that our proposed asymmetric spectrum allocation algorithm can reduce resource waste and blocking probability.

  6. Development of 1D Liner Compression Code for IDL

    Science.gov (United States)

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony

    2015-11-01

    A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), where an FRC plasmoid is compressed via inductively driven metal liners. The driver circuit, magnetic field, joule heating, and liner dynamics calculations are performed at each time step in sequence to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction factor table lookup approach. The commercial low-frequency electromagnetic field solver ANSYS Maxwell 3D is used to solve the magnetic field profile for the static liner condition at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with the results from the commercial explicit dynamics solver ANSYS Explicit Dynamics and a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.

  7. INTEGRATED EXPLORATION OF GEOTHERMAL RESOURCES

    Directory of Open Access Journals (Sweden)

    A. B. Alkhasov

    2016-01-01

    Full Text Available The aim. The aim is to develop energy-efficient technologies to explore hydrogeothermal resources of different energy potential. Methods. Evaluation of the effectiveness of the proposed technologies has been carried out with the use of physical-mathematical, thermodynamic and optimization methods of calculation, and physico-chemical experimental research. Results. We propose a technology for integrated exploration of low-grade geothermal resources with the application of their heat and water resource potential for various purposes. We also argue for the possibility of effective exploration of geothermal resources by building binary geothermal power plants using idle oil and gas wells. We prove the prospects of geothermal steam-and-gas technologies enabling highly efficient use of thermal water of low energy potential (80-100 °C) to generate electricity, and the prospects of complex processing of the high-temperature geothermal brine of the Tarumovsky field. Thermal energy is utilized in a binary geothermal power plant in a supercritical Rankine cycle operating with a low-boiling agent. The low-temperature spent brine from the geothermal power plant is supplied to the chemical plant, where the main chemical components are extracted - lithium carbonate, burnt magnesia, calcium carbonate and sodium chloride. Next, the waste water is used for various water management objectives. Electricity generated in the binary geothermal power plant is used for the extraction of the chemical components. Conclusions. Implementation of the proposed technologies will facilitate the most efficient development of the hydrogeothermal resources of the North Caucasus region. Integrated exploration of the Tarumovsky field resources will fully meet Russian demand for lithium carbonate and sodium chloride.

  8. Contract on using computer resources of another

    Directory of Open Access Journals (Sweden)

    Cvetković Mihajlo

    2016-01-01

    Full Text Available Contractual relations involving the use of another's property are quite common. Yet, the use of computer resources of others over the Internet and the legal transactions arising thereof certainly diverge from the traditional framework embodied in the special part of contract law dealing with this issue. Modern performance concepts (such as infrastructure, software or platform as high-tech services) are highly unlikely to be described by terminology derived from Roman law. The overwhelming novelty of high-tech services obscures the disadvantageous position of the contracting parties. In most cases, service providers are global multinational companies which tend to secure their own unjustified privileges and gain by providing lengthy and intricate contracts, often comprising a number of legal documents. General terms and conditions in these service provision contracts are further complicated by the 'service level agreement', rules of conduct and (non)confidentiality guarantees. Without giving the issue a second thought, users easily accept the pre-fabricated offer without reservations, unaware that such a pseudo-gratuitous contract actually conceals a highly lucrative and mutually binding agreement. The author examines the extent to which the legal provisions governing the sale of goods and services, lease, loan and commodatum may apply to 'cloud computing' contracts, and analyses the scope and advantages of contractual consumer protection, as a relatively new area in contract law. The termination of a service contract between the provider and the user features specific post-contractual obligations which are inherent to an online environment.

  9. Monitoring of computing resource use of active software releases at ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219183; The ATLAS collaboration

    2017-01-01

    The LHC is the world’s most powerful particle accelerator, colliding protons at a centre of mass energy of 13 TeV. As the energy and frequency of collisions has grown in the search for new physics, so too has demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and dis...

  10. Big Data in Cloud Computing: A Resource Management Perspective

    Directory of Open Access Journals (Sweden)

    Saeed Ullah

    2018-01-01

    Full Text Available The modern day advancement is increasingly digitizing our lives which has led to a rapid growth of data. Such multidimensional datasets are precious due to the potential of unearthing new knowledge and developing decision-making insights from them. Analyzing this huge amount of data from multiple sources can help organizations to plan for the future and anticipate changing market trends and customer requirements. While the Hadoop framework is a popular platform for processing larger datasets, there are a number of other computing infrastructures, available to use in various application domains. The primary focus of the study is how to classify major big data resource management systems in the context of cloud computing environment. We identify some key features which characterize big data frameworks as well as their associated challenges and issues. We use various evaluation metrics from different aspects to identify usage scenarios of these platforms. The study came up with some interesting findings which are in contradiction with the available literature on the Internet.

  11. Cross stratum resources protection in fog-computing-based radio over fiber networks for 5G services

    Science.gov (United States)

    Guo, Shaoyong; Shao, Sujie; Wang, Yao; Yang, Hui

    2017-09-01

    In order to meet the requirements of the internet of things (IoT) and 5G, the cloud radio access network is a paradigm which converges all base stations' computational resources into a cloud baseband unit (BBU) pool, while the distributed radio frequency signals are collected by remote radio heads (RRH). A precondition for centralized processing in the BBU pool is an interconnecting fronthaul network with high capacity and low delay. However, the interaction between RRH and BBU and the resource scheduling among BBUs in the cloud have become more complex and frequent. A cloud radio over fiber network has already been proposed in our previous work. In order to overcome the complexity and latency, in this paper we first present a novel cross stratum resources protection (CSRP) architecture in fog-computing-based radio over fiber networks (F-RoFN) for 5G services. Additionally, a cross stratum protection (CSP) scheme considering network survivability is introduced in the proposed architecture. The CSRP with the CSP scheme can effectively pull the remote processing resources locally to implement cooperative radio resource management, enhance the responsiveness and resilience to dynamic end-to-end 5G service demands, and globally optimize optical network, wireless and fog resources. The feasibility and efficiency of the proposed architecture with the CSP scheme are verified on our software defined networking testbed in terms of service latency, transmission success rate, resource occupation rate and blocking probability.

  12. Mobile devices and computing cloud resources allocation for interactive applications

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2017-06-01

    Full Text Available Using mobile devices such as smartphones or iPads for various interactive applications is currently very common. In the case of complex applications, e.g. chess games, the capabilities of these devices are insufficient to run the application in real time. One of the solutions is to use cloud computing. However, this poses an optimization problem of mobile device and cloud resource allocation. An iterative heuristic algorithm for application distribution is proposed. The algorithm minimizes the energy cost of application execution under a constrained execution time.
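
    The record does not spell out the heuristic; the sketch below is one plausible greedy variant written only to illustrate the trade-off (offloading a task to the cloud may cut energy but must keep the total execution time within a deadline). The task numbers, the serial time model and the acceptance rule are assumptions, not the paper's algorithm.

      # Illustrative greedy offloading heuristic: move tasks from the device to the
      # cloud whenever that lowers total energy while the deadline stays satisfied.
      def allocate(tasks, deadline):
          """tasks: list of dicts with local/cloud time and energy; returns placement."""
          place = ["local"] * len(tasks)               # start fully on the mobile device
          def totals():
              t = sum(tk[p + "_time"] for tk, p in zip(tasks, place))
              e = sum(tk[p + "_energy"] for tk, p in zip(tasks, place))
              return t, e
          t_now, e_now = totals()
          improved = True
          while improved:
              improved = False
              for i, tk in enumerate(tasks):
                  if place[i] == "local" and tk["cloud_energy"] < tk["local_energy"]:
                      place[i] = "cloud"               # tentative offload
                      t_new, e_new = totals()
                      if e_new < e_now and t_new <= deadline:
                          t_now, e_now, improved = t_new, e_new, True
                      else:
                          place[i] = "local"           # revert: deadline or energy violated
          return place, e_now, t_now

      tasks = [
          {"local_time": 2.0, "local_energy": 4.0, "cloud_time": 0.5, "cloud_energy": 1.5},
          {"local_time": 1.0, "local_energy": 1.0, "cloud_time": 0.8, "cloud_energy": 1.2},
          {"local_time": 3.0, "local_energy": 6.0, "cloud_time": 1.0, "cloud_energy": 2.0},
      ]
      print(allocate(tasks, deadline=4.0))   # offloads tasks 1 and 3, stays under the deadline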

  13. Production Experience with the ATLAS Event Service

    CERN Document Server

    Benjamin, Douglas; The ATLAS collaboration

    2016-01-01

    The ATLAS Event Service (ES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources, such as spot market commercial clouds, supercomputers and volunteer computing. The Event Service architecture allows real time delivery of fine grained workloads to running payload applications which process dispatched events or event ranges and immediately stream the outputs to highly scalable Object Stores. Thanks to its agile and flexible architecture the ES is currently being used by grid sites for assigning low priority workloads to otherwise idle computing resources; similarly harvesting HPC resources in an efficient back-fill mode; and massively scaling out to the 50-100k concurrent core level on the Amazon spot market to efficiently utilize those transient resources for peak production needs. Platform ports in development include ATLAS@Home (BOINC) and the Google Comput...

  14. Pedagogical Utilization and Assessment of the Statistic Online Computational Resource in Introductory Probability and Statistics Courses.

    Science.gov (United States)

    Dinov, Ivo D; Sanchez, Juana; Christou, Nicolas

    2008-01-01

    Technology-based instruction represents a recent pedagogical paradigm that is rooted in the realization that new generations are much more comfortable with, and excited about, new technologies. The rapid technological advancement over the past decade has fueled an enormous demand for the integration of modern networking, informational and computational tools with classical pedagogical instruments. Consequently, teaching with technology typically involves utilizing a variety of IT and multimedia resources for online learning, course management, electronic course materials, and novel tools of communication, engagement, experimentation, critical thinking and assessment. The NSF-funded Statistics Online Computational Resource (SOCR) provides a number of interactive tools for enhancing instruction in various undergraduate and graduate courses in probability and statistics. These resources include online instructional materials, statistical calculators, interactive graphical user interfaces, computational and simulation applets, and tools for data analysis and visualization. The tools provided as part of SOCR include conceptual simulations and statistical computing interfaces, which are designed to bridge between the introductory and the more advanced computational and applied probability and statistics courses. In this manuscript, we describe our designs for utilizing SOCR technology in instruction in a recent study. In addition, we present the results of the effectiveness of using SOCR tools at two different course intensity levels on three outcome measures: exam scores, student satisfaction and choice of technology to complete assignments. Learning styles assessment was completed at baseline. We have used three very different designs for three different undergraduate classes. Each course included a treatment group, using the SOCR resources, and a control group, using classical instruction techniques. Our findings include marginal effects of the SOCR treatment per individual

  15. Computer Processing 10-20-30. Teacher's Manual. Senior High School Teacher Resource Manual.

    Science.gov (United States)

    Fisher, Mel; Lautt, Ray

    Designed to help teachers meet the program objectives for the computer processing curriculum for senior high schools in the province of Alberta, Canada, this resource manual includes the following sections: (1) program objectives; (2) a flowchart of curriculum modules; (3) suggestions for short- and long-range planning; (4) sample lesson plans;…

  16. Evolutionary heuristic for makespan minimization in no-idle flow shop production systems - doi: 10.4025/actascitechnol.v35i2.12534

    Directory of Open Access Journals (Sweden)

    Marcelo Seido Nagano

    2013-04-01

    Full Text Available This paper deals with the no-idle flow shop scheduling problem with the objective of minimizing makespan. A new hybrid metaheuristic is proposed for solving the scheduling problem. The proposed method is compared with the best method reported in the literature. Experimental results show that the new method provides better-quality solutions for the set of problems evaluated.

  17. Edge detection using IDL for mammographic image in Medical Physics laboratory

    International Nuclear Information System (INIS)

    Wan Hazlinda Ismail; Md Saion Salikin; Asmaliza Hashim; Norriza Mohd Isa; Azuhar Ripin

    2004-01-01

    Over the decades, doctors, physicists and scientists have been using radiographic images to diagnose patient illness as well as to study the anatomy of the human body without having to cut it open. Nowadays, with the advancement of technology, these images are available in digital form. The image data can be manipulated to determine exactly the information doctors, physicists and scientists want, which can help them in decision making during diagnosis as well as in understanding the human body better. In this paper, the edge detection technique, which is extensively used in image segmentation, is discussed in brief; the method works by finding the boundaries between objects, thus indirectly defining the objects. A Bennet Model DMF-150 Mammography Machine and breast phantom model 12A with 4.0 cm compressed thickness are employed in this study. A Vidar film digitizer is used to digitize the images. The digitized images are then manipulated by using the Interactive Data Language (IDL) software. Results of this study are presented in brief in this presentation. (Author)
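
    The record's analysis is done in IDL; as a hedged illustration of the gradient-based edge detection idea it describes, the Python/SciPy sketch below computes a Sobel edge map. The function name, threshold and the phantom_image placeholder are assumptions, not the study's code.

```python
import numpy as np
from scipy import ndimage

def sobel_edges(image, threshold=0.2):
    """Return a binary edge map from a 2-D grayscale image array."""
    img = image.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)   # normalise to [0, 1]
    gx = ndimage.sobel(img, axis=1)                    # horizontal gradient
    gy = ndimage.sobel(img, axis=0)                    # vertical gradient
    magnitude = np.hypot(gx, gy)
    magnitude /= magnitude.max() + 1e-12
    return magnitude > threshold                       # object boundaries

# edges = sobel_edges(phantom_image)   # phantom_image: digitized film as a 2-D array
```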

  18. SYSTEMATIC LITERATURE REVIEW ON RESOURCE ALLOCATION AND RESOURCE SCHEDULING IN CLOUD COMPUTING

    OpenAIRE

    B. Muni Lavanya; C. Shoba Bindu

    2016-01-01

    The objective of this work is to highlight the key features of, and offer promising future directions for, research on Resource Allocation, Resource Scheduling and Resource Management from 2009 to 2016, exemplifying how research on Resource Allocation, Resource Scheduling and Resource Management has progressively increased in the past decade by inspecting articles and papers from scientific and standard publications. The survey materialized in a three-fold process. Firstly, investigate on t...

  19. The Mini-Grid Framework: Application Programming Support for Ad hoc Volunteer Grids

    DEFF Research Database (Denmark)

    Venkataraman, Neela Narayanan

    2013-01-01

    To harvest idle, unused computational resources in networked environments, researchers have proposed different architectures for desktop grid infrastructure. However, most of the existing research work focuses on a centralized approach. In this thesis, we present the development and deployment of one......, and the performance of the framework in a real grid environment. The main contributions of this thesis are: i) modeling entities such as resources and applications using their context, ii) the context-based auction strategy for dynamic task distribution, iii) scheduling through application specific quality parameters...

  20. Modeling of Groundwater Resources Heavy Metals Concentration Using Soft Computing Methods: Application of Different Types of Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Meysam Alizamir

    2017-09-01

    Full Text Available Nowadays, groundwater resources play a vital role as a source of drinking water in arid and semiarid regions, and forecasting of pollutant content in these resources is very important. Therefore, this study aimed to compare two soft computing methods for modeling Cd, Pb and Zn concentrations in groundwater resources of Asadabad Plain, Western Iran. The relative accuracy of several soft computing models, namely the multi-layer perceptron (MLP) and radial basis function (RBF), for forecasting of heavy metal concentrations was investigated. In addition, Levenberg-Marquardt, gradient descent and conjugate gradient training algorithms were utilized for the MLP models. The ANN models for this study were developed using the MATLAB R2014 software. The simulation results revealed that the MLP performed better than the other models and was able to model heavy metal concentrations in groundwater resources favorably; it can be utilized effectively in environmental applications and in water quality estimation. In addition, of the three training algorithms, Levenberg-Marquardt performed best. This study proposed soft computing modeling techniques for the prediction and estimation of heavy metal concentrations in groundwater resources of Asadabad Plain. Based on data collected from the plain, MLP and RBF models were developed for each heavy metal.
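
    The study builds its MLP/RBF models in MATLAB with Levenberg-Marquardt training; purely as an analogous sketch, the Python example below shows an MLP regression workflow with scikit-learn (which does not offer Levenberg-Marquardt, so a generic solver is used). The CSV file, predictor columns and network size are assumptions, not the study's data or settings.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

df = pd.read_csv("groundwater_samples.csv")          # hypothetical file
X = df[["EC", "pH", "TDS", "depth"]]                 # assumed predictors
y = df["Cd"]                                         # one heavy metal at a time

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out samples:", r2_score(y_te, model.predict(X_te)))
```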

  1. Integrating GRID tools to build a computing resource broker: activities of DataGrid WP1

    International Nuclear Information System (INIS)

    Anglano, C.; Barale, S.; Gaido, L.; Guarise, A.; Lusso, S.; Werbrouck, A.

    2001-01-01

    Resources on a computational Grid are geographically distributed, heterogeneous in nature, owned by different individuals or organizations with their own scheduling policies, have different access cost models with dynamically varying loads and availability conditions. This makes traditional approaches to workload management, load balancing and scheduling inappropriate. The first work package (WP1) of the EU-funded DataGrid project is addressing the issue of optimizing the distribution of jobs onto Grid resources based on a knowledge of the status and characteristics of these resources that is necessarily out-of-date (collected in a finite amount of time at a very loosely coupled site). The authors describe the DataGrid approach in integrating existing software components (from Condor, Globus, etc.) to build a Grid Resource Broker, and the early efforts to define a workable scheduling strategy

  2. Resource allocation on computational grids using a utility model and the knapsack problem

    CERN Document Server

    Van der ster, Daniel C; Parra-Hernandez, Rafael; Sobie, Randall J

    2009-01-01

    This work introduces a utility model (UM) for resource allocation on computational grids and formulates the allocation problem as a variant of the 0–1 multichoice multidimensional knapsack problem. The notion of task-option utility is introduced, and it is used to effect allocation policies. We present a variety of allocation policies, which are expressed as functions of metrics that are both intrinsic and external to the task and resources. An external user-defined credit-value metric is shown to allow users to intervene in the allocation of urgent or low priority tasks. The strategies are evaluated in simulation against random workloads as well as those drawn from real systems. We measure the sensitivity of the UM-derived schedules to variations in the allocation policies and their corresponding utility functions. The UM allocation strategy is shown to optimally allocate resources congruent with the chosen policies.
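
    As a hedged toy illustration of the knapsack view of allocation described above: the paper's formulation is a multichoice multidimensional 0-1 knapsack, while the sketch below handles only the simplest single-dimension 0-1 case, choosing which tasks to place on a resource of limited capacity so that total task utility is maximised. Task names, demands and utilities are made up.

```python
def knapsack_allocate(tasks, capacity):
    """tasks: list of (name, demand, utility); capacity: resource units available."""
    best = [[0] * (capacity + 1) for _ in range(len(tasks) + 1)]
    for i, (_, demand, utility) in enumerate(tasks, start=1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]                            # skip task i
            if demand <= c:
                best[i][c] = max(best[i][c],
                                 best[i - 1][c - demand] + utility)  # or take it
    # Walk back through the table to recover the chosen tasks.
    chosen, c = [], capacity
    for i in range(len(tasks), 0, -1):
        if best[i][c] != best[i - 1][c]:
            name, demand, _ = tasks[i - 1]
            chosen.append(name)
            c -= demand
    return chosen, best[len(tasks)][capacity]

print(knapsack_allocate([("t1", 4, 10), ("t2", 3, 7), ("t3", 5, 12)], 8))
```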

  3. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  4. Production experience with the ATLAS Event Service

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00066086; The ATLAS collaboration; Calafiura, Paolo; Childers, John Taylor; De, Kaushik; Guan, Wen; Maeno, Tadashi; Nilsson, Paul; Tsulaia, Vakhtang; van Gemmeren, Peter; Wenaus, Torre

    2017-01-01

    The ATLAS Event Service (AES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources, such as spot market commercial clouds, supercomputers and volunteer computing. The Event Service architecture allows real time delivery of fine grained workloads to running payload applications which process dispatched events or event ranges and immediately stream the outputs to highly scalable Object Stores. Thanks to its agile and flexible architecture the AES is currently being used by grid sites for assigning low priority workloads to otherwise idle computing resources; similarly harvesting HPC resources in an efficient back-fill mode; and massively scaling out to the 50-100k concurrent core level on the Amazon spot market to efficiently utilize those transient resources for peak production needs. Platform ports in development include ATLAS@Home (BOINC) and the Google Comp...

  5. Learning for VMM + WTA Embedded Classifiers

    Science.gov (United States)

    2016-03-31

    ...enabling correct classification of each novel acoustic signal (generator, idle car, and idle truck). The classification structure requires, after... measured on our SoC FPAA IC. The test input is composed of signals from an urban environment for 3 objects (generator, idle car, and idle truck

  6. Exploiting short-term memory in soft body dynamics as a computational resource.

    Science.gov (United States)

    Nakajima, K; Li, T; Hauser, H; Pfeifer, R

    2014-11-06

    Soft materials are not only highly deformable, but they also possess rich and diverse body dynamics. Soft body dynamics exhibit a variety of properties, including nonlinearity, elasticity and potentially infinitely many degrees of freedom. Here, we demonstrate that such soft body dynamics can be employed to conduct certain types of computation. Using body dynamics generated from a soft silicone arm, we show that they can be exploited to emulate functions that require memory and to embed robust closed-loop control into the arm. Our results suggest that soft body dynamics have a short-term memory and can serve as a computational resource. This finding paves the way towards exploiting passive body dynamics for control of a large class of underactuated systems. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
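
    A hedged sketch of the readout idea behind using body dynamics as a computational resource: a linear readout is trained on recorded "body state" time series to emulate a memory-requiring target (here, recalling a delayed input). The synthetic surrogate states, signal sizes and delay below are illustrative assumptions, not the paper's soft silicone arm data.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_sensors, delay = 2000, 20, 3
u = rng.uniform(-1, 1, T)                         # random input stream

# Surrogate "body states": leaky, nonlinearly mixed responses to the input,
# standing in for sensor readings from a physical soft body.
W_in = rng.normal(size=n_sensors)
x = np.zeros((T, n_sensors))
for t in range(1, T):
    x[t] = 0.8 * x[t - 1] + np.tanh(W_in * u[t])

target = np.roll(u, delay)                        # task: recall input delay steps ago
X, y = x[delay:], target[delay:]

# Linear readout fitted by least squares; the "memory" lives in the dynamics.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
prediction = X @ w
print("readout correlation:", np.corrcoef(prediction, y)[0, 1])
```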

  7. LHCb: Self managing experiment resources

    CERN Multimedia

    Stagni, F

    2013-01-01

    Within this paper we present an autonomic computing resources management system used by LHCb for assessing the status of their Grid resources. Virtual Organization Grids include heterogeneous resources. For example, LHC experiments very often use resources not provided by WLCG, and Cloud Computing resources will soon provide a non-negligible fraction of their computing power. The lack of standards and procedures across experiments and sites generated the appearance of multiple information systems, monitoring tools, ticket portals, etc... which nowadays coexist and represent a very precious source of information for running HEP experiments' computing systems as well as sites. These two facts lead to many particular solutions for a general problem: managing the experiment resources. In this paper we present how LHCb, via the DIRAC interware, addressed such issues. With a renewed Central Information Schema hosting all resources metadata and a Status System (Resource Status System) delivering real time informatio...

  8. The NILE system architecture: fault-tolerant, wide-area access to computing and data resources

    International Nuclear Information System (INIS)

    Ricciardi, Aleta; Ogg, Michael; Rothfus, Eric

    1996-01-01

    NILE is a multi-disciplinary project building a distributed computing environment for HEP. It provides wide-area, fault-tolerant, integrated access to processing and data resources for collaborators of the CLEO experiment, though the goals and principles are applicable to many domains. NILE has three main objectives: a realistic distributed system architecture design, the design of a robust data model, and a Fast-Track implementation providing a prototype design environment which will also be used by CLEO physicists. This paper focuses on the software and wide-area system architecture design and the computing issues involved in making NILE services highly-available. (author)

  9. Selecting, Evaluating and Creating Policies for Computer-Based Resources in the Behavioral Sciences and Education.

    Science.gov (United States)

    Richardson, Linda B., Comp.; And Others

    This collection includes four handouts: (1) "Selection Criteria Considerations for Computer-Based Resources" (Linda B. Richardson); (2) "Software Collection Policies in Academic Libraries" (a 24-item bibliography, Jane W. Johnson); (3) "Circulation and Security of Software" (a 19-item bibliography, Sara Elizabeth Williams); and (4) "Bibliography of…

  10. Building Resilient Cloud Over Unreliable Commodity Infrastructure

    OpenAIRE

    Kedia, Piyus; Bansal, Sorav; Deshpande, Deepak; Iyer, Sreekanth

    2012-01-01

    Cloud Computing has emerged as a successful computing paradigm for efficiently utilizing managed compute infrastructure such as high speed rack-mounted servers, connected with high speed networking, and reliable storage. Usually such infrastructure is dedicated, physically secured and has reliable power and networking infrastructure. However, much of our idle compute capacity is present in unmanaged infrastructure like idle desktops, lab machines, physically distant server machines, and lapto...

  11. Logical and physical resource management in the common node of a distributed function laboratory computer network

    International Nuclear Information System (INIS)

    Stubblefield, F.W.

    1976-01-01

    A scheme for managing resources required for transaction processing in the common node of a distributed function computer system has been given. The scheme has been found to be satisfactory for all common node services provided so far

  12. EGI-EUDAT integration activity - Pair data and high-throughput computing resources together

    Science.gov (United States)

    Scardaci, Diego; Viljoen, Matthew; Vitlacil, Dejan; Fiameni, Giuseppe; Chen, Yin; Sipos, Gergely; Ferrari, Tiziana

    2016-04-01

    EGI (www.egi.eu) is a publicly funded e-infrastructure put together to give scientists access to more than 530,000 logical CPUs, 200 PB of disk capacity and 300 PB of tape storage to drive research and innovation in Europe. The infrastructure provides both high throughput computing and cloud compute/storage capabilities. Resources are provided by about 350 resource centres which are distributed across 56 countries in Europe, the Asia-Pacific region, Canada and Latin America. EUDAT (www.eudat.eu) is a collaborative Pan-European infrastructure providing research data services, training and consultancy for researchers, research communities, research infrastructures and data centres. EUDAT's vision is to enable European researchers and practitioners from any research discipline to preserve, find, access, and process data in a trusted environment, as part of a Collaborative Data Infrastructure (CDI) conceived as a network of collaborating, cooperating centres, combining the richness of numerous community-specific data repositories with the permanence and persistence of some of Europe's largest scientific data centres. EGI and EUDAT, in the context of their flagship projects, EGI-Engage and EUDAT2020, started in March 2015 a collaboration to harmonise the two infrastructures, including technical interoperability, authentication, authorisation and identity management, policy and operations. The main objective of this work is to provide end-users with a seamless access to an integrated infrastructure offering both EGI and EUDAT services and, then, pairing data and high-throughput computing resources together. To define the roadmap of this collaboration, EGI and EUDAT selected a set of relevant user communities, already collaborating with both infrastructures, which could bring requirements and help to assign the right priorities to each of them. In this way, from the beginning, this activity has been really driven by the end users. The identified user communities are

  13. A computer simulation of the transient response of a 4 cylinder Stirling engine with burner and air preheater in a vehicle

    Science.gov (United States)

    Martini, W. R.

    1981-01-01

    A series of computer programs is presented with full documentation; the programs simulate the transient behavior of a modern 4 cylinder Siemens arrangement Stirling engine with burner and air preheater. Cold start, cranking, idling, acceleration through 3 gear changes and steady speed operation are simulated. Sample results and complete operating instructions are given. A full source code listing of all programs is included.

  14. Developing Online Learning Resources: Big Data, Social Networks, and Cloud Computing to Support Pervasive Knowledge

    Science.gov (United States)

    Anshari, Muhammad; Alas, Yabit; Guan, Lim Sei

    2016-01-01

    Utilizing online learning resources (OLR) from multiple channels in learning activities promises extended benefits, moving from traditional learning-centred approaches to a collaborative learning-centred approach that emphasises pervasive learning anywhere and anytime. While compiling big data, cloud computing, and the semantic web into OLR offers a broader spectrum of…

  15. Multi-Layer Traffic Steering

    DEFF Research Database (Denmark)

    Fotiadis, Panagiotis; Polignano, Michele; Gimenez, Lucas Chavarria

    2013-01-01

    This paper investigates the potential of traffic steering in the Radio Resource Control (RRC) Idle state by evaluating the Absolute Priorities (AP) framework in a multilayer Long Term Evolution (LTE) macrocell scenario. Frequency priorities are broadcast on the system information and RRC Idle...... periods are not significantly long. Finally, better alignment between the RRC Connected and Idle mobility procedures is observed, guaranteeing a significant decrease of handovers/reselections and potential battery life savings by minimizing the Inter-Frequency (IF) measurement rate in the RRC Idle....

  16. Controlling user access to electronic resources without password

    Science.gov (United States)

    Smith, Fred Hewitt

    2015-06-16

    Described herein are devices and techniques for remotely controlling user access to a restricted computer resource. The process includes pre-determining an association of the restricted computer resource and computer-resource-proximal environmental information. Indicia of user-proximal environmental information are received from a user requesting access to the restricted computer resource. Received indicia of user-proximal environmental information are compared to associated computer-resource-proximal environmental information. User access to the restricted computer resource is selectively granted responsive to a favorable comparison in which the user-proximal environmental information is sufficiently similar to the computer-resource-proximal environmental information. In at least some embodiments, the process further includes receiving a user-supplied biometric measure and comparing it with a predetermined association of at least one biometric measure of an authorized user. Access to the restricted computer resource is granted in response to a favorable comparison.

  17. Self managing experiment resources

    International Nuclear Information System (INIS)

    Stagni, F; Ubeda, M; Charpentier, P; Tsaregorodtsev, A; Romanovskiy, V; Roiser, S; Graciani, R

    2014-01-01

    Within this paper we present an autonomic computing resources management system, used by LHCb for assessing the status of their Grid resources. Virtual Organization Grids include heterogeneous resources. For example, LHC experiments very often use resources not provided by WLCG, and Cloud Computing resources will soon provide a non-negligible fraction of their computing power. The lack of standards and procedures across experiments and sites generated the appearance of multiple information systems, monitoring tools, ticket portals, etc... which nowadays coexist and represent a very precious source of information for running HEP experiments' computing systems as well as sites. These two facts lead to many particular solutions for a general problem: managing the experiment resources. In this paper we present how LHCb, via the DIRAC interware, addressed such issues. With a renewed Central Information Schema hosting all resources metadata and a Status System (Resource Status System) delivering real time information, the system controls the resources topology, independently of the resource types. The Resource Status System applies data mining techniques against all possible information sources available and assesses the status changes, which are then propagated to the topology description. Obviously, giving full control to such an automated system is not risk-free. Therefore, in order to minimise the probability of misbehavior, a battery of tests has been developed in order to certify the correctness of its assessments. We will demonstrate the performance and efficiency of such a system in terms of cost reduction and reliability.

  18. DrugSig: A resource for computational drug repositioning utilizing gene expression signatures.

    Directory of Open Access Journals (Sweden)

    Hongyu Wu

    Full Text Available Computational drug repositioning has been proven to be an effective approach to developing new drug uses. However, currently existing strategies rely strongly on drug response gene signatures that are scattered across separate or individual experimental datasets, resulting in low-efficiency outputs. Thus, a comprehensive database of drug response gene signatures would be very helpful to these methods. We collected drug response microarray data and annotated related drug and target information from public databases and the scientific literature. By selecting the top 500 up-regulated and down-regulated genes as drug signatures, we manually established the DrugSig database. Currently DrugSig contains more than 1300 drugs, 7000 microarrays and 800 targets. Moreover, we developed signature-based and target-based functions to aid drug repositioning. The constructed database can serve as a resource to quicken computational drug repositioning. Database URL: http://biotechlab.fudan.edu.cn/database/drugsig/.

  19. Improving Energy Efficiency in Idle Listening of IEEE 802.11 WLANs

    Directory of Open Access Journals (Sweden)

    Muhammad Adnan

    2016-01-01

    Full Text Available This paper aims to improve the energy efficiency of IEEE 802.11 wireless local area networks (WLANs) by effectively dealing with idle listening (IL), which is required for channel sensing and is unavoidable in a contention-based channel access mechanism. Firstly, we show that IL is a dominant source of energy drain in WLANs and that it cannot be effectively alleviated by the power saving mechanism proposed in the IEEE 802.11 standard. To solve this problem, we propose an energy-efficient mechanism that combines three schemes in a systematic way: downclocking, frame aggregation, and contention window adjustment. The downclocking scheme lets a station remain in a semi-sleep state when overhearing frames destined to neighbor stations, whereby the station consumes minimal energy without impairing channel access capability. As well as decreasing the channel access overhead, the frame aggregation scheme prolongs the period of semi-sleep time. Moreover, by controlling the size of the contention window based on the number of stations, the proposed mechanism decreases unnecessary IL time due to collisions and retransmissions. By deriving an analysis model and performing extensive simulations, we confirm that the proposed mechanism significantly improves the energy efficiency and throughput, by up to 2.8 and 1.8 times, respectively, compared to the conventional power saving mechanisms.
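
    One ingredient of the mechanism above is adapting the contention window (CW) to the number of contending stations so that idle listening wasted on collisions and retransmissions shrinks. The sketch below is a generic illustration of that idea (the CW grows with an estimated station count within typical 802.11 bounds), not the paper's exact rule or parameters.

```python
CW_MIN, CW_MAX = 16, 1024   # illustrative bounds in slot units

def contention_window(n_stations):
    """Pick a contention window that scales with the number of contenders."""
    cw = CW_MIN
    # Double the window until it comfortably exceeds the expected contenders.
    while cw < 2 * n_stations and cw < CW_MAX:
        cw *= 2
    return cw

for n in (2, 10, 40, 200):
    print(n, "stations ->", contention_window(n))
```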

  20. Berkeley lab checkpoint/restart (BLCR) for Linux clusters

    International Nuclear Information System (INIS)

    Hargrove, Paul H; Duell, Jason C

    2006-01-01

    This article describes the motivation, design and implementation of Berkeley Lab Checkpoint/Restart (BLCR), a system-level checkpoint/restart implementation for Linux clusters that targets the space of typical High Performance Computing applications, including MPI. Application-level solutions, including both checkpointing and fault-tolerant algorithms, are recognized as more time and space efficient than system-level checkpoints, which cannot make use of any application-specific knowledge. However, system-level checkpointing allows for preemption, making it suitable for responding to "fault precursors" (for instance, elevated error rates from ECC memory or network CRCs, or elevated temperature from sensors). Preemption can also increase the efficiency of batch scheduling; for instance reducing idle cycles (by allowing for shutdown without any queue draining period or reallocation of resources to eliminate idle nodes when better fitting jobs are queued), and reducing the average queued time (by limiting large jobs to running during off-peak hours, without the need to limit the length of such jobs). Each of these potential uses makes BLCR a valuable tool for efficient resource management in Linux clusters

  1. Client/server models for transparent, distributed computational resources

    International Nuclear Information System (INIS)

    Hammer, K.E.; Gilman, T.L.

    1991-01-01

    Client/server models are proposed to address issues of shared resources in a distributed, heterogeneous UNIX environment. The recent development of an automated Remote Procedure Call (RPC) interface generator has simplified the development of client/server models. Previously, implementation of the models was only possible at the UNIX socket level. An overview of RPCs and the interface generator will be presented and will include a discussion of the generation and installation of remote services, the RPC paradigm, and the three levels of RPC programming. Two applications, the Nuclear Plant Analyzer (NPA) and a fluids simulation using molecular modelling, will be presented to demonstrate how client/server models using RPCs and External Data Representations (XDR) have been used in production/computation situations. The NPA incorporates a client/server interface for the transfer/translation of TRAC or RELAP results from the UNICOS Cray to a UNIX workstation. The fluids simulation program utilizes the client/server model to access the Cray via a single function, allowing it to become a shared co-processor to the workstation application. 5 refs., 6 figs
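
    The paper's client/server models are built on UNIX RPC with XDR; purely as a present-day analogue of exposing a remote "shared co-processor" function, the sketch below uses Python's standard xmlrpc modules. The host, port and simulate_step function are hypothetical placeholders, not the NPA or fluids-simulation interfaces.

```python
# --- server side -------------------------------------------------------------
from xmlrpc.server import SimpleXMLRPCServer

def simulate_step(state, dt):
    """Toy remote computation standing in for a shared co-processor call."""
    return [s + dt * (1.0 - s) for s in state]

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(simulate_step)
# server.serve_forever()   # uncomment to run the service

# --- client side -------------------------------------------------------------
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://localhost:8000")
# new_state = proxy.simulate_step([0.0, 0.5, 1.0], 0.1)   # remote procedure call
```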

  2. Planning Committee for a National Resource for Computation in Chemistry. Final report, October 1, 1974--June 30, 1977

    International Nuclear Information System (INIS)

    1978-11-01

    The Planning Committee for a National Resource for Computation in Chemistry (NRCC) was charged with the responsibility of formulating recommendations regarding organizational structure for an NRCC including the composition, size, and responsibilities of its policy board, the relationship of such a board to the operating structure of the NRCC, to federal funding agencies, and to user groups; desirable priorities, growth rates, and levels of operations for the first several years; and facilities, access and site requirements for such a Resource. By means of site visits, questionnaires, and a workshop, the Committee sought advice from a wide range of potential users and organizations interested in chemical computation. Chemical kinetics, crystallography, macromolecular science, nonnumerical methods, physical organic chemistry, quantum chemistry, and statistical mechanics are covered

  3. Planning Committee for a National Resource for Computation in Chemistry. Final report, October 1, 1974--June 30, 1977

    Energy Technology Data Exchange (ETDEWEB)

    Bigeleisen, Jacob; Berne, Bruce J.; Coton, F. Albert; Scheraga, Harold A.; Simmons, Howard E.; Snyder, Lawrence C.; Wiberg, Kenneth B.; Wipke, W. Todd

    1978-11-01

    The Planning Committee for a National Resource for Computation in Chemistry (NRCC) was charged with the responsibility of formulating recommendations regarding organizational structure for an NRCC including the composition, size, and responsibilities of its policy board, the relationship of such a board to the operating structure of the NRCC, to federal funding agencies, and to user groups; desirable priorities, growth rates, and levels of operations for the first several years; and facilities, access and site requirements for such a Resource. By means of site visits, questionnaires, and a workshop, the Committee sought advice from a wide range of potential users and organizations interested in chemical computation. Chemical kinetics, crystallography, macromolecular science, nonnumerical methods, physical organic chemistry, quantum chemistry, and statistical mechanics are covered.

  4. Improved Cloud resource allocation: how INDIGO-DataCloud is overcoming the current limitations in Cloud schedulers

    Science.gov (United States)

    Lopez Garcia, Alvaro; Zangrando, Lisa; Sgaravatto, Massimo; Llorens, Vincent; Vallero, Sara; Zaccolo, Valentina; Bagnasco, Stefano; Taneja, Sonia; Dal Pra, Stefano; Salomoni, Davide; Donvito, Giacinto

    2017-10-01

    Performing efficient resource provisioning is a fundamental aspect for any resource provider. Local Resource Management Systems (LRMS) have been used in data centers for decades in order to obtain the best usage of the resources, providing their fair usage and partitioning for the users. In contrast, current cloud schedulers are normally based on the immediate allocation of resources on a first-come, first-served basis, meaning that a request will fail if there are no resources (e.g. OpenStack) or it will be trivially queued, ordered by entry time (e.g. OpenNebula). Moreover, these scheduling strategies are based on a static partitioning of the resources, meaning that existing quotas cannot be exceeded, even if there are idle resources allocated to other projects. This is a consequence of the fact that cloud instances are not associated with a maximum execution time and leads to a situation where the resources are under-utilized. These facts have been identified by the INDIGO-DataCloud project as being too simplistic for accommodating scientific workloads in an efficient way, leading to an underutilization of the resources, an undesirable situation in scientific data centers. In this work, we will present the work done in the scheduling area during the first year of the INDIGO project and the foreseen evolutions.

  5. A new Nawaz-Enscore-Ham-based heuristic for permutation flow-shop problems with bicriteria of makespan and machine idle time

    Science.gov (United States)

    Liu, Weibo; Jin, Yan; Price, Mark

    2016-10-01

    A new heuristic based on the Nawaz-Enscore-Ham algorithm is proposed in this article for solving a permutation flow-shop scheduling problem. A new priority rule is proposed by accounting for the average, mean absolute deviation, skewness and kurtosis, in order to fully describe the distribution style of processing times. A new tie-breaking rule is also introduced for achieving effective job insertion with the objective of minimizing both makespan and machine idle time. Statistical tests illustrate better solution quality of the proposed algorithm compared to existing benchmark heuristics.
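
    For context, a minimal sketch of the classic Nawaz-Enscore-Ham (NEH) constructive heuristic that the paper builds on: jobs are ordered by a priority rule (here the plain NEH rule of total processing time, rather than the paper's moment-based rule) and inserted one by one at the position that minimizes makespan. Ties are broken arbitrarily, unlike the paper's dedicated tie-breaking rule, and machine idle time is not considered.

```python
def makespan(sequence, p):
    """p[j][m]: processing time of job j on machine m; sequence: job order."""
    n_machines = len(p[0])
    completion = [0.0] * n_machines
    for j in sequence:
        completion[0] += p[j][0]
        for m in range(1, n_machines):
            completion[m] = max(completion[m], completion[m - 1]) + p[j][m]
    return completion[-1]

def neh(p):
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))   # NEH priority rule
    sequence = [jobs[0]]
    for j in jobs[1:]:
        # Try every insertion position and keep the one with the smallest makespan.
        candidates = [sequence[:k] + [j] + sequence[k:]
                      for k in range(len(sequence) + 1)]
        sequence = min(candidates, key=lambda s: makespan(s, p))
    return sequence, makespan(sequence, p)

print(neh([[5, 3, 4], [2, 6, 3], [4, 4, 2], [3, 2, 5]]))  # 4 jobs, 3 machines
```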

  6. Fabrication of 4-cylinder transparent engine and measurement of the flame propagation behavior with high speed camera at idle condition

    Energy Technology Data Exchange (ETDEWEB)

    Joo, S.H. [Yonsei University Graduate School, Seoul (Korea, Republic of); Chun, K.M. [Yonsei University, Seoul (Korea, Republic of)

    1998-04-01

    A transparent engine for visualization studies was made using a production 4 cylinder engine. Flame propagation results from individual combustion cycles captured with high-speed cinematography are presented and discussed for the idle condition. The flame propagation image and the in-cylinder pressure were obtained simultaneously, and image processing software which can calculate the flame area and the flame center was developed. The flame propagation behavior of each cycle shows high cyclic variation, and there is a linear correlation between flame area and in-cylinder pressure. (author). 4 refs., 6 figs., 1 tab.

  7. Statistical Evaluation of the Identified Structural Parameters of an idling Offshore Wind Turbine

    International Nuclear Information System (INIS)

    Kramers, Hendrik C.; Van der Valk, Paul L.C.; Van Wingerden, Jan-Willem

    2016-01-01

    With the increased need for renewable energy, new offshore wind farms are being developed at an unprecedented scale. However, as the costs of offshore wind energy are still too high, design optimization and new innovations are required for lowering its cost. The design of modern day offshore wind turbines relies on numerical models for estimating ultimate and fatigue loads of the turbines. The dynamic behavior and the resulting structural loading of the turbines are determined for a large part by their structural properties, such as the natural frequencies and damping ratios. Hence, it is important to obtain accurate estimates of these modal properties. For this purpose stochastic subspace identification (SSI), in combination with clustering and statistical evaluation methods, is used to obtain the variance of the identified modal properties of an installed 3.6MW offshore wind turbine in idling conditions. It is found that one is able to obtain confidence intervals for the means of eigenfrequencies and damping ratios of the fore-aft and side-side modes of the wind turbine. (paper)

  8. The DIII-D Computing Environment: Characteristics and Recent Changes

    International Nuclear Information System (INIS)

    McHarg, B.B. Jr.

    1999-01-01

    The DIII-D tokamak national fusion research facility along with its predecessor Doublet III has been operating for over 21 years. The DIII-D computing environment consists of real-time systems controlling the tokamak, heating systems, and diagnostics, and systems acquiring experimental data from instrumentation; major data analysis server nodes performing short term and long term data access and data analysis; and systems providing mechanisms for remote collaboration and the dissemination of information over the world wide web. Computer systems for the facility have undergone incredible changes over the course of time as the computer industry has changed dramatically. Yet there are certain valuable characteristics of the DIII-D computing environment that have been developed over time and have been maintained to this day. Some of these characteristics include: continuous computer infrastructure improvements, distributed data and data access, computing platform integration, and remote collaborations. These characteristics are being carried forward as well as new characteristics resulting from recent changes which have included: a dedicated storage system and a hierarchical storage management system for raw shot data, various further infrastructure improvements including deployment of Fast Ethernet, the introduction of MDSplus, LSF and common IDL based tools, and improvements to remote collaboration capabilities. This paper will describe this computing environment, important characteristics that over the years have contributed to the success of DIII-D computing systems, and recent changes to computer systems

  9. CLUSTER ENERGY OPTIMIZATION: A THEORETICAL APPROACH

    OpenAIRE

    Vikram Yadav; G. Sahoo

    2013-01-01

    The optimization of energy consumption in the cloud computing environment concerns how to use various energy conservation strategies to allocate resources efficiently. The need for different resources in a cloud environment is unpredictable. It is observed that load management in the cloud is greatly needed in order to provide QoS. Jobs at an over-loaded physical machine are shifted to an under-loaded physical machine, and the idle machine is turned off, in order to provide a green cloud. For energy opt...
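
    A hedged sketch of the consolidation idea just described: jobs are shifted from over-loaded physical machines onto machines that still have headroom, packing work so that lightly used machines end up empty and can be switched off. The threshold, loads and the best-fit packing rule are illustrative assumptions, not a specific scheduler's policy.

```python
OVERLOAD = 0.8   # fraction of one machine's capacity

def rebalance(machines):
    """machines: dict name -> list of job loads (fractions of one machine)."""
    load = lambda m: sum(machines[m])
    for hot in [m for m in machines if load(m) > OVERLOAD]:
        for job in sorted(machines[hot]):                 # try smallest jobs first
            # Best-fit: prefer the busiest machine that still has room, so that
            # lightly loaded machines can be drained and powered off.
            targets = [m for m in machines
                       if m != hot and load(m) + job <= OVERLOAD]
            if targets:
                dest = max(targets, key=load)
                machines[hot].remove(job)
                machines[dest].append(job)
            if load(hot) <= OVERLOAD:
                break
    idle = [m for m in machines if not machines[m]]        # candidates to power off
    return machines, idle

placement, idle_hosts = rebalance({"pm1": [0.5, 0.4], "pm2": [0.2], "pm3": []})
print(placement, "power off:", idle_hosts)
```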

  10. Optimizing qubit resources for quantum chemistry simulations in second quantization on a quantum computer

    International Nuclear Information System (INIS)

    Moll, Nikolaj; Fuhrer, Andreas; Staar, Peter; Tavernelli, Ivano

    2016-01-01

    Quantum chemistry simulations on a quantum computer suffer from the overhead needed for encoding the Fermionic problem in a system of qubits. By exploiting the block diagonality of a Fermionic Hamiltonian, we show that the number of required qubits can be reduced while the number of terms in the Hamiltonian will increase. All operations for this reduction can be performed in operator space. The scheme is conceived as a pre-computational step that would be performed prior to the actual quantum simulation. We apply this scheme to reduce the number of qubits necessary to simulate both the Hamiltonian of the two-site Fermi–Hubbard model and the hydrogen molecule. Both quantum systems can then be simulated with a two-qubit quantum computer. Despite the increase in the number of Hamiltonian terms, the scheme still remains a useful tool to reduce the dimensionality of specific quantum systems for quantum simulators with a limited number of resources. (paper)

  11. Resource management in utility and cloud computing

    CERN Document Server

    Zhao, Han

    2013-01-01

    This SpringerBrief reviews the existing market-oriented strategies for economically managing resource allocation in distributed systems. It describes three new schemes that address cost-efficiency, user incentives, and allocation fairness with regard to different scheduling contexts. The first scheme, taking the Amazon EC2 market as a case of study, investigates the optimal resource rental planning models based on linear integer programming and stochastic optimization techniques. This model is useful to explore the interaction between the cloud infrastructure provider and the cloud resource c

  12. The Usage of informal computer based communication in the context of organization’s technological resources

    OpenAIRE

    Raišienė, Agota Giedrė; Jonušauskas, Steponas

    2011-01-01

    The purpose of the article is to theoretically and practically analyze the features of informal computer based communication in the context of an organization's technological resources. Methodology - meta analysis, survey and descriptive analysis. According to scientists, the functions of informal communication cover sharing of work related information, coordination of team activities, spread of organizational culture and feeling of interdependence and affinity. Also, informal communication widens the ...

  13. High-Performance Data Analysis Tools for Sun-Earth Connection Missions

    Science.gov (United States)

    Messmer, Peter

    2011-01-01

    The data analysis tool of choice for many Sun-Earth Connection missions is the Interactive Data Language (IDL) by ITT VIS. The increasing amount of data produced by these missions and the increasing complexity of image processing algorithms require access to higher computing power. Parallel computing is a cost-effective way to increase the speed of computation, but algorithms oftentimes have to be modified to take advantage of parallel systems. Enhancing IDL to work on clusters gives scientists access to increased performance in a familiar programming environment. The goal of this project was to enable IDL applications to benefit from both computing clusters as well as graphics processing units (GPUs) for accelerating data analysis tasks. The tool suite developed in this project now enables scientists to solve demanding data analysis problems in IDL that previously required specialized software, and it allows them to be solved orders of magnitude faster than on conventional PCs. The tool suite consists of three components: (1) TaskDL, a software tool that simplifies the creation and management of task farms, collections of tasks that can be processed independently and require only small amounts of data communication; (2) mpiDL, a tool that allows IDL developers to use the Message Passing Interface (MPI) inside IDL for problems that require large amounts of data to be exchanged among multiple processors; and (3) GPULib, a tool that simplifies the use of GPUs as mathematical coprocessors from within IDL. mpiDL is unique in its support for the full MPI standard and its support of a broad range of MPI implementations. GPULib is unique in enabling users to take advantage of an inexpensive piece of hardware, possibly already installed in their computer, and achieve orders of magnitude faster execution time for numerically complex algorithms. TaskDL enables the simple setup and management of task farms on compute clusters. The products developed in this project have the

  14. Protocol to Exploit Waiting Resources for UASNs

    Directory of Open Access Journals (Sweden)

    Li-Ling Hung

    2016-03-01

    Full Text Available The transmission speed of acoustic waves in water is much slower than that of radio waves in terrestrial wireless sensor networks. Thus, the propagation delay in underwater acoustic sensor networks (UASNs) is much greater. Longer propagation delay leads to complicated communication and collision problems. To solve collision problems, some studies have proposed waiting mechanisms; however, long waiting mechanisms result in low bandwidth utilization. To improve throughput, this study proposes a slotted medium access control protocol to enhance bandwidth utilization in UASNs. The proposed mechanism increases communication by exploiting temporal and spatial resources that are typically idle, in order to protect communication against interference. By reducing wait time, network performance and energy consumption can be improved. A performance evaluation demonstrates that when the data packets are large or sensor deployment is dense, the energy consumption of the proposed protocol is lower, and its throughput higher, than those of existing protocols.

  15. Resource-adaptive cognitive processes

    CERN Document Server

    Crocker, Matthew W

    2010-01-01

    This book investigates the adaptation of cognitive processes to limited resources. The central topics of this book are heuristics considered as results of the adaptation to resource limitations, through natural evolution in the case of humans, or through artificial construction in the case of computational systems; the construction and analysis of resource control in cognitive processes; and an analysis of resource-adaptivity within the paradigm of concurrent computation. The editors integrated the results of a collaborative 5-year research project that involved over 50 scientists. After a mot

  16. The Identification of Land Utilization in Coastal Reclamation Areas in Tianjin Using High Resolution Remote Sensing Images

    Science.gov (United States)

    Meng, Y.; Cao, Y.; Tian, H.; Han, Z.

    2018-04-01

    In recent decades, land reclamation activities have developed rapidly in Chinese coastal regions, especially in Bohai Bay. Land reclamation areas can effectively alleviate the contradiction between land resource shortage and human needs, but idle lands left unused after the government approved the usage of sea areas also deserve attention. Due to the particular features of land coverage identification in large regions, traditional monitoring approaches are unable to fully meet the need for effective and rapid land use classification. In this paper, Gaofen-1 remotely sensed satellite imagery together with sea area usage ownership data were used to identify land use classes and find idle land resources. The results show that most of the land use types and idle land resources can be identified precisely.

  17. Exploiting on-node heterogeneity for in-situ analytics of climate simulations via a functional partitioning framework

    Science.gov (United States)

    Sapra, Karan; Gupta, Saurabh; Atchley, Scott; Anantharaj, Valentine; Miller, Ross; Vazhkudai, Sudharshan

    2016-04-01

    Efficient resource utilization is critical for improved end-to-end computing and the workflow of scientific applications. Heterogeneous node architectures, such as the GPU-enabled Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), present us with further challenges. In many HPC applications on Titan, the accelerators are the primary compute engines while the CPUs orchestrate the offloading of work onto the accelerators and move the output back to the main memory. On the other hand, in applications that do not exploit GPUs, the CPU usage is dominant while the GPUs sit idle. We utilized the Heterogeneous Functional Partitioning (HFP) runtime framework, which can optimize the usage of resources on a compute node to expedite an application's end-to-end workflow. This approach is different from existing techniques for in-situ analyses in that it provides a framework for on-the-fly analysis on-node by dynamically exploiting under-utilized resources therein. We have implemented in the Community Earth System Model (CESM) a new concurrent diagnostic processing capability enabled by the HFP framework. Various single-variate statistics, such as means and distributions, are computed in-situ by launching HFP tasks on the GPU via the node-local HFP daemon. Since our current configuration of CESM does not use GPU resources heavily, we can move these tasks to the GPU using the HFP framework. Each rank running the atmospheric model in CESM pushes the variables of interest via HFP function calls to the HFP daemon. This node-local daemon is responsible for receiving the data from the main program and launching the designated analytics tasks on the GPU. We have implemented these analytics tasks in C and use OpenACC directives to enable GPU acceleration. This methodology is also advantageous when executing GPU-enabled configurations of CESM, when the CPUs will be idle during portions of the runtime. In our implementation results, we demonstrate that it is more efficient to use HFP
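
    The in-situ analytics above compute single-variate statistics (e.g. means and distributions) on data pushed from simulation ranks to a node-local daemon. As a hedged illustration of keeping such running statistics in one pass, the Python sketch below uses Welford's online algorithm; it is not the CESM/HFP implementation, and the sample values are made up.

```python
class RunningStats:
    """Single-pass (streaming) mean and variance for one variable."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def push(self, x):
        """Update the statistics with one new sample streamed from the model."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningStats()
for value in (288.1, 287.6, 289.4, 290.0):     # e.g. temperatures from one rank
    stats.push(value)
print(stats.mean, stats.variance)
```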

  18. Monitoring of computing resource use of active software releases at ATLAS

    Science.gov (United States)

    Limosani, Antonio; ATLAS Collaboration

    2017-10-01

    The LHC is the world’s most powerful particle accelerator, colliding protons at centre of mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier-0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed in plots generated using Python visualization libraries and collected into pre-formatted auto-generated Web pages, which allow the ATLAS developer community to track the performance of their algorithms. This information is however preferentially filtered to domain leaders and developers through the use of JIRA and via reports given at ATLAS software meetings. Finally, we take a glimpse of the future by reporting on the expected CPU and RAM usage in benchmark workflows associated with the High Luminosity LHC and anticipate the ways performance monitoring will evolve to understand and benchmark future workflows.

  19. Using Mosix for Wide-Area Computational Resources

    Science.gov (United States)

    Maddox, Brian G.

    2004-01-01

    One of the problems with using traditional Beowulf-type distributed processing clusters is that they require an investment in dedicated computer resources. These resources are usually needed in addition to pre-existing ones such as desktop computers and file servers. Mosix is a series of modifications to the Linux kernel that creates a virtual computer, featuring automatic load balancing by migrating processes from heavily loaded nodes to less used ones. An extension of the Beowulf concept is to run a Mosix-enabled Linux kernel on a large number of computer resources in an organization. This configuration would provide a very large amount of computational resources based on pre-existing equipment. The advantage of this method is that it provides much more processing power than a traditional Beowulf cluster without the added costs of dedicating resources.

  20. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    Science.gov (United States)

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on the modern Smartphones. However, such applications are still constrained by limitations in processing potentials, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resources limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resources intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resources utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in the real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resources utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245

  1. A lightweight distributed framework for computational offloading in mobile cloud computing.

    Directory of Open Access Journals (Sweden)

    Muhammad Shiraz

    Full Text Available The latest developments in mobile computing technology have enabled intensive applications on the modern Smartphones. However, such applications are still constrained by limitations in processing potentials, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resources limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resources intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resources utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in the real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resources utilization and therefore offers a lightweight solution for computational offloading in MCC.

  2. The Usage of Informal Computer Based Communication in the Context of Organization’s Technological Resources

    Directory of Open Access Journals (Sweden)

    Steponas Jonušauskas

    2011-12-01

    Full Text Available The purpose of the article is to theoretically and practically analyze the features of informal computer based communication in the context of the organization’s technological resources. Methodology—meta analysis, survey and descriptive analysis. Findings. According to scientists, the functions of informal communication cover sharing of work related information, coordination of team activities, spread of organizational culture and feeling of interdependence and affinity. Also, informal communication widens the individuals’ recognition of reality, creates general context of environment between talkers, and strengthens interpersonal attraction. For these reasons, informal communication is desirable and even necessary in organizations because it helps to ensure efficient functioning of the enterprise. However, communicating through electronic channels suppresses informal connections or addresses them to the outside of the organization. So, electronic communication is not beneficial for developing ties in an informal organizational network. The empirical research showed that a significant part of courts administration staff is prone to use the technological resources of their office for informal communication. Representatives of courts administration choose friends for computer based communication much more often than colleagues (72% and 63% respectively). 93% of the research respondents use an additional e-mail box serviced by commercial providers for non work communication. The high intensity of informal electronic communication with friends and familiars shows that workers of court administration are used to meeting their psycho-emotional needs outside the workplace. The survey confirmed the conclusion of the theoretical analysis: computer based communication is not beneficial for developing informal contacts between workers. In order for informal communication to carry out its functions and for the technological resources of the organization to be used effectively, staff

  3. The Usage of Informal Computer Based Communication in the Context of Organization’s Technological Resources

    Directory of Open Access Journals (Sweden)

    Agota Giedrė Raišienė

    2013-08-01

    Full Text Available The purpose of the article is to theoretically and practically analyze the features of informal computer-based communication in the context of an organization's technological resources. Methodology—meta-analysis, survey and descriptive analysis. Findings. According to scientists, the functions of informal communication cover sharing of work-related information, coordination of team activities, the spread of organizational culture and a feeling of interdependence and affinity. Also, informal communication widens individuals' recognition of reality, creates a general context of environment between talkers, and strengthens interpersonal attraction. For these reasons, informal communication is desirable and even necessary in organizations because it helps to ensure efficient functioning of the enterprise. However, communicating through electronic channels suppresses informal connections or pushes them outside the organization. So, electronic communication is not beneficial for developing ties in the informal organizational network. The empirical research showed that a significant part of the courts administration staff is prone to use the technological resources of their office for informal communication. Representatives of the courts administration choose friends for computer-based communication much more often than colleagues (72% and 63% respectively). 93% of the research respondents use an additional e-mail box serviced by commercial providers for non-work communication. The high intensity of informal electronic communication with friends and familiars shows that workers of the court administration are used to meeting their psycho-emotional needs outside the workplace. The survey confirmed the conclusion of the theoretical analysis: computer-based communication is not beneficial for developing informal contacts between workers. In order for informal communication to carry out its functions and for the technological resources of the organization to be used effectively, staff

  4. Efficient Redundancy Techniques in Cloud and Desktop Grid Systems using MAP/G/c-type Queues

    Science.gov (United States)

    Chakravarthy, Srinivas R.; Rumyantsev, Alexander

    2018-03-01

    Cloud computing is continuing to prove its flexibility and versatility in helping industries, businesses and academia as a way of providing needed computing capacity. As an important alternative to cloud computing, desktop grids make it possible to utilize the idle computer resources of an enterprise or community by means of a distributed computing system, providing a more secure and controllable environment with lower operational expenses. Further, both cloud computing and desktop grids are meant to optimize limited resources and at the same time to decrease the expected latency for users. The crucial parameter for optimization, both in cloud computing and in desktop grids, is the level of redundancy (replication) for service requests/workunits. In this paper we study optimal replication policies by considering three variations of Fork-Join systems in the context of a multi-server queueing system with a versatile point process for the arrivals. For services we consider phase-type distributions as well as shifted exponential and Weibull. We use both analytical and simulation approaches in our analysis and report some interesting qualitative results.

  5. Efficient Redundancy Techniques in Cloud and Desktop Grid Systems using MAP/G/c-type Queues

    Directory of Open Access Journals (Sweden)

    Chakravarthy Srinivas R.

    2018-03-01

    Full Text Available Cloud computing is continuing to prove its flexibility and versatility in helping industries, businesses and academia as a way of providing needed computing capacity. As an important alternative to cloud computing, desktop grids make it possible to utilize the idle computer resources of an enterprise or community by means of a distributed computing system, providing a more secure and controllable environment with lower operational expenses. Further, both cloud computing and desktop grids are meant to optimize limited resources and at the same time to decrease the expected latency for users. The crucial parameter for optimization, both in cloud computing and in desktop grids, is the level of redundancy (replication) for service requests/workunits. In this paper we study optimal replication policies by considering three variations of Fork-Join systems in the context of a multi-server queueing system with a versatile point process for the arrivals. For services we consider phase-type distributions as well as shifted exponential and Weibull. We use both analytical and simulation approaches in our analysis and report some interesting qualitative results.
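
    The effect of the replication level can be illustrated with a small Monte Carlo sketch (a deliberate simplification of the MAP/G/c models studied in the paper, with arbitrary parameter values): each workunit is copied to r servers with shifted-exponential service times, and the first replica to finish wins.

      import random

      def shifted_exp(shift=0.5, rate=1.0):
          # Shifted exponential service time: constant overhead plus random work.
          return shift + random.expovariate(rate)

      def mean_completion(replicas, trials=100_000):
          total = 0.0
          for _ in range(trials):
              total += min(shifted_exp() for _ in range(replicas))
          return total / trials

      for r in (1, 2, 3, 4):
          print(f"replicas={r}  mean completion ~ {mean_completion(r):.3f}")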

  6. Reducing usage of the computational resources by event driven approach to model predictive control

    Science.gov (United States)

    Misik, Stefan; Bradac, Zdenek; Cela, Arben

    2017-08-01

    This paper deals with real-time optimal control of dynamic systems while also considering the constraints to which these systems might be subject. The main objective of this work is to propose a simple modification of the existing Model Predictive Control approach to better suit the needs of computational resource-constrained real-time systems. An example using a model of a mechanical system is presented and the performance of the proposed method is evaluated in a simulated environment.
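
    The event-driven idea can be sketched as follows (the plant, tolerance and toy "optimizer" below are placeholders, not the authors' controller): the optimization is re-run only when the measured state drifts from the last prediction by more than a tolerance; otherwise the previously computed input sequence is replayed, saving computation on the embedded target.

      import numpy as np

      def solve_mpc(x0, horizon=10, umax=1.0):
          # Toy stand-in for the MPC optimizer: drive a scalar state to zero
          # with saturated inputs, returning the planned inputs and states.
          us, xs, x = [], [], x0
          for _ in range(horizon):
              u = float(np.clip(-0.8 * x, -umax, umax))
              x = x + u                                   # x[k+1] = x[k] + u[k]
              us.append(u)
              xs.append(x)
          return us, xs

      x, plan_u, plan_x, k, optimizer_calls = 5.0, [], [], 0, 0
      for step in range(30):
          # Event condition: no plan left, or prediction error above tolerance.
          if not plan_u or k >= len(plan_u) or abs(x - plan_x[k - 1]) > 0.1:
              plan_u, plan_x = solve_mpc(x)
              k, optimizer_calls = 0, optimizer_calls + 1
          u = plan_u[k]
          k += 1
          x = x + u + np.random.normal(scale=0.02)        # plant with a small disturbance
      print(f"final state {x:.3f}, optimizer calls {optimizer_calls} / 30 steps")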

  7. Energy efficiency models and optimization algorithm to enhance on-demand resource delivery in a cloud computing environment / Thusoyaone Joseph Moemi

    OpenAIRE

    Moemi, Thusoyaone Joseph

    2013-01-01

    Online hosted services are what is referred to as cloud computing. Access to these services is via the Internet. It shifts the traditional IT resource ownership model to renting. Thus, the high cost of infrastructure cannot limit the less privileged from experiencing the benefits that this new paradigm brings. Therefore, cloud computing provides flexible services to cloud users in the form of software, platform and infrastructure as services. The goal behind cloud computing is to provi...

  8. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  9. Piping data bank and erection system of Angra 2: structure, computational resources and systems

    International Nuclear Information System (INIS)

    Abud, P.R.; Court, E.G.; Rosette, A.C.

    1992-01-01

    The piping data bank of Angra 2, called the Erection Management System, was developed to manage the piping erection of the Angra 2 nuclear power plant. Beyond the erection follow-up of piping and supports, it manages the piping design, material procurement, the flow of fabrication documents, weld testing and material stocks at the warehouse. The work carried out to define the structure of the data bank, the computational resources and the systems is described here. (author)

  10. Application of computer graphics to generate coal resources of the Cache coal bed, Recluse geologic model area, Campbell County, Wyoming

    Science.gov (United States)

    Schneider, G.B.; Crowley, S.S.; Carey, M.A.

    1982-01-01

    Low-sulfur subbituminous coal resources have been calculated, using both manual and computer methods, for the Cache coal bed in the Recluse Model Area, which covers the White Tail Butte, Pitch Draw, Recluse, and Homestead Draw SW 7 1/2 minute quadrangles, Campbell County, Wyoming. Approximately 275 coal thickness measurements obtained from drill hole data are evenly distributed throughout the area. The Cache coal and associated beds are in the Paleocene Tongue River Member of the Fort Union Formation. The depth from the surface to the Cache bed ranges from 269 to 1,257 feet. The thickness of the coal is as much as 31 feet, but in places the Cache coal bed is absent. Comparisons between hand-drawn and computer-generated isopach maps show minimal differences. Total coal resources calculated by computer show the bed to contain 2,316 million short tons or about 6.7 percent more than the hand-calculated figure of 2,160 million short tons.

  11. A Resource Service Model in the Industrial IoT System Based on Transparent Computing.

    Science.gov (United States)

    Li, Weimin; Wang, Bin; Sheng, Jinfang; Dong, Ke; Li, Zitong; Hu, Yixiang

    2018-03-26

    The Internet of Things (IoT) has received a lot of attention, especially in industrial scenarios. One of the typical applications is the intelligent mine, which actually constructs the Six-Hedge underground systems with IoT platforms. Based on a case study of the Six Systems in the underground metal mine, this paper summarizes the main challenges of industrial IoT from the aspects of heterogeneity in devices and resources, security, reliability, deployment and maintenance costs. Then, a novel resource service model for the industrial IoT applications based on Transparent Computing (TC) is presented, which supports centralized management of all resources including operating system (OS), programs and data on the server-side for the IoT devices, thus offering an effective, reliable, secure and cross-OS IoT service and reducing the costs of IoT system deployment and maintenance. The model has five layers: sensing layer, aggregation layer, network layer, service and storage layer and interface and management layer. We also present a detailed analysis on the system architecture and key technologies of the model. Finally, the efficiency of the model is shown by an experiment prototype system.

  12. An evaluation of interventions for reducing the risk of PRRSV introduction to filtered farms via retrograde air movement through idle fans.

    Science.gov (United States)

    Alonso, Carmen; Otake, Satoshi; Davies, Peter; Dee, Scott

    2012-06-15

    Porcine reproductive and respiratory syndrome virus (PRRSV) is an economically significant pathogen of pigs that can be transported via the airborne route out to 9.1 km. To reduce this risk, large swine facilities have started to implement systems to filter contaminated incoming air. A proposed means of air filtration failure is the retrograde movement of air (back-drafting) from the external environment into the animal air space through non-filtered points such as idle wall fans; however, this risk has not been validated. Therefore, the purpose of this study was threefold: (1) to prove that PRRSV introduction via retrograde air movement through idle fans is a true risk; (2) to determine the minimum retrograde air velocity necessary to introduce PRRSV to an animal airspace from an external source; and (3) to evaluate the efficacy of different interventions designed to reduce this risk. A retrograde air movement model was used to test a range of velocities and interventions, including a standard plastic shutter, a plastic shutter plus a canvas cover, a nylon air chute, an aluminum shutter plus an air chute and a double shutter system. Results indicated that retrograde air movement is a real risk for PRRSV introduction to a filtered air space; however, it required a velocity of 0.76 m/s. In addition, while all the interventions designed to reduce this risk were superior when compared to a standard plastic shutter, significant differences were detected between treatments. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Project Final Report: Ubiquitous Computing and Monitoring System (UCoMS) for Discovery and Management of Energy Resources

    Energy Technology Data Exchange (ETDEWEB)

    Tzeng, Nian-Feng; White, Christopher D.; Moreman, Douglas

    2012-07-14

    The UCoMS research cluster has spearheaded three research areas since August 2004, including wireless and sensor networks, Grid computing, and petroleum applications. The primary goals of UCoMS research are three-fold: (1) creating new knowledge to push forward the technology forefronts on pertinent research on the computing and monitoring aspects of energy resource management, (2) developing and disseminating software codes and toolkits for the research community and the public, and (3) establishing system prototypes and testbeds for evaluating innovative techniques and methods. Substantial progress and diverse accomplishments have been made by research investigators in their respective areas of expertise cooperatively on such topics as sensors and sensor networks, wireless communication and systems, and computational Grids, particularly relevant to petroleum applications.

  14. Computer Engineers.

    Science.gov (United States)

    Moncarz, Roger

    2000-01-01

    Looks at computer engineers and describes their job, employment outlook, earnings, and training and qualifications. Provides a list of resources related to computer engineering careers and the computer industry. (JOW)

  15. The Development of an Individualized Instructional Program in Beginning College Mathematics Utilizing Computer Based Resource Units. Final Report.

    Science.gov (United States)

    Rockhill, Theron D.

    Reported is an attempt to develop and evaluate an individualized instructional program in pre-calculus college mathematics. Four computer based resource units were developed in the areas of set theory, relations and function, algebra, trigonometry, and analytic geometry. Objectives were determined by experienced calculus teachers, and…

  16. Bounding the Resource Availability of Partially Ordered Events with Constant Resource Impact

    Science.gov (United States)

    Frank, Jeremy

    2004-01-01

    We compare existing techniques to bound the resource availability of partially ordered events. We first show that, contrary to intuition, two existing techniques, one due to Laborie and one due to Muscettola, are not strictly comparable in terms of the size of the search trees generated under chronological search with a fixed heuristic. We describe a generalization of these techniques called the Flow Balance Constraint to tightly bound the amount of available resource for a set of partially ordered events with piecewise constant resource impact. We prove that the new technique generates smaller proof trees under chronological search with a fixed heuristic, at little increase in computational expense. We then show how to construct tighter resource bounds but at increased computational cost.
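
    A much simpler pessimistic bound conveys the flavor of the problem (this is not the Flow Balance Constraint described above; the events, resource deltas and ordering below are made-up examples): predecessors of an event have certainly already happened, while unordered events are counted only when their effect is negative.

      initial = 5
      delta = {"a": -2, "b": +3, "c": -1, "d": -2}      # per-event resource change (made up)
      precedes = {("a", "b"), ("b", "d")}               # partial order: a before b, b before d

      def must_precede(x, y, order=precedes):
          # Transitive-closure test over the tiny example order.
          frontier, seen = {x}, set()
          while frontier:
              n = frontier.pop()
              seen.add(n)
              frontier |= {b for (a, b) in order if a == n and b not in seen}
          return y in seen and y != x

      def pessimistic_lower_bound(e):
          # Lower bound on the resource level just after event e.
          level = initial + delta[e]
          for other in delta:
              if other == e:
                  continue
              if must_precede(other, e):
                  level += delta[other]                  # certainly already applied
              elif not must_precede(e, other) and delta[other] < 0:
                  level += delta[other]                  # unordered: assume the worst
          return level

      for e in delta:
          print(e, pessimistic_lower_bound(e))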

  17. Computer modelling of the UK wind energy resource. Phase 2. Application of the methodology

    Energy Technology Data Exchange (ETDEWEB)

    Burch, S F; Makari, M; Newton, K; Ravenscroft, F; Whittaker, J

    1993-12-31

    This report presents the results of the second phase of a programme to estimate the UK wind energy resource. The overall objective of the programme is to provide quantitative resource estimates using a mesoscale (resolution about 1km) numerical model for the prediction of wind flow over complex terrain, in conjunction with digitised terrain data and wind data from surface meteorological stations. A network of suitable meteorological stations has been established and long term wind data obtained. Digitised terrain data for the whole UK were obtained, and wind flow modelling using the NOABL computer program has been performed. Maps of extractable wind power have been derived for various assumptions about wind turbine characteristics. Validation of the methodology indicates that the results are internally consistent, and in good agreement with available comparison data. Existing isovent maps, based on standard meteorological data which take no account of terrain effects, indicate that 10m annual mean wind speeds vary between about 4.5 and 7 m/s over the UK with only a few coastal areas over 6 m/s. The present study indicates that 28% of the UK land area had speeds over 6 m/s, with many hill sites having 10m speeds over 10 m/s. It is concluded that these 'first order' resource estimates represent a substantial improvement over the presently available 'zero order' estimates. The results will be useful for broad resource studies and initial site screening. Detailed resource evaluation for local sites will require more detailed local modelling or ideally long term field measurements. (12 figures, 14 tables, 21 references). (Author)

  18. Performance analysis of cloud computing services for many-tasks scientific computing

    NARCIS (Netherlands)

    Iosup, A.; Ostermann, S.; Yigitbasi, M.N.; Prodan, R.; Fahringer, T.; Epema, D.H.J.

    2011-01-01

    Cloud computing is an emerging commercial infrastructure paradigm that promises to eliminate the need for maintaining expensive computing facilities by companies and institutes alike. Through the use of virtualization and resource time sharing, clouds serve with a single set of physical resources a

  19. Computational modeling as a tool for water resources management: an alternative approach to problems of multiple uses

    Directory of Open Access Journals (Sweden)

    Haydda Manolla Chaves da Hora

    2012-04-01

    Full Text Available Today in Brazil there are many cases of incompatibility between the use of water and its availability. Due to the increase in required variety and volume, the concept of multiple uses was created, as stated by Pinheiro et al. (2007). The use of the same resource to satisfy different needs with several restrictions (qualitative and quantitative) creates conflicts. Aiming to minimize these conflicts, this work was applied to the particular cases of Hydrographic Regions VI and VIII of Rio de Janeiro State, using computational modeling techniques (based on the MOHID software – Water Modeling System) as a tool for water resources management.

  20. A resource facility for kinetic analysis: modeling using the SAAM computer programs.

    Science.gov (United States)

    Foster, D M; Boston, R C; Jacquez, J A; Zech, L

    1989-01-01

    Kinetic analysis and integrated system modeling have contributed significantly to understanding the physiology and pathophysiology of metabolic systems in humans and animals. Many experimental biologists are aware of the usefulness of these techniques and recognize that kinetic modeling requires special expertise. The Resource Facility for Kinetic Analysis (RFKA) provides this expertise through: (1) development and application of modeling technology for biomedical problems, and (2) development of computer-based kinetic modeling methodologies concentrating on the computer program Simulation, Analysis, and Modeling (SAAM) and its conversational version, CONversational SAAM (CONSAM). The RFKA offers consultation to the biomedical community in the use of modeling to analyze kinetic data and trains individuals in using this technology for biomedical research. Early versions of SAAM were widely applied in solving dosimetry problems; many users, however, are not familiar with recent improvements to the software. The purpose of this paper is to acquaint biomedical researchers in the dosimetry field with RFKA, which, together with the joint National Cancer Institute-National Heart, Lung and Blood Institute project, is overseeing SAAM development and applications. In addition, RFKA provides many service activities to the SAAM user community that are relevant to solving dosimetry problems.

  1. Cloud Computing: The Future of Computing

    OpenAIRE

    Aggarwal, Kanika

    2013-01-01

    Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start small and increase resources only when there is a rise in service demand. The basic principle of cloud computing is to have the computation carried out by a great number of distributed computers, rather than local computers ...

  2. Resource Aware Intelligent Network Services (RAINS) Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, Tom; Yang, Xi

    2018-01-16

    The Resource Aware Intelligent Network Services (RAINS) project conducted research and developed technologies in the area of cyberinfrastructure resource modeling and computation. The goal of this work was to provide a foundation to enable intelligent, software defined services which spanned the network AND the resources which connect to the network. A Multi-Resource Service Plane (MRSP) was defined, which allows resource owners/managers to locate and place themselves from a topology and service availability perspective within the dynamic networked cyberinfrastructure ecosystem. The MRSP enables the presentation of integrated topology views and computation results which can include resources across the spectrum of compute, storage, and networks. The MRSP developed by the RAINS project includes the following key components: i) Multi-Resource Service (MRS) Ontology/Multi-Resource Markup Language (MRML), ii) Resource Computation Engine (RCE), iii) Modular Driver Framework (to allow integration of a variety of external resources). The MRS/MRML is a general and extensible modeling framework that allows resource owners to model, or describe, a wide variety of resource types. All resources are described using three categories of elements: Resources, Services, and Relationships between the elements. This modeling framework defines a common method for the transformation of cyberinfrastructure resources into data in the form of MRML models. In order to realize this infrastructure datification, the RAINS project developed a model-based computation system, the "RAINS Computation Engine (RCE)". The RCE has the ability to ingest, process, integrate, and compute based on automatically generated MRML models. The RCE interacts with the resources through system drivers which are specific to the type of external network or resource controller. The RAINS project developed a modular and pluggable driver system which enables a variety of resource controllers to automatically generate
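
    The three-element modeling idea can be sketched as a tiny data structure (the element and relationship names below are illustrative, not actual MRML): every item is a Resource or a Service, tied together by Relationships, so a computation engine can traverse the combined compute/storage/network topology.

      from dataclasses import dataclass, field

      @dataclass
      class Element:
          name: str
          kind: str                      # "Resource" or "Service"
          relationships: list = field(default_factory=list)   # (relation, other-name) pairs

      model = {
          "cluster-A": Element("cluster-A", "Resource"),
          "storage-B": Element("storage-B", "Resource"),
          "link-A-B":  Element("link-A-B", "Resource"),
          "xfer-svc":  Element("xfer-svc", "Service"),
      }
      model["link-A-B"].relationships += [("connects", "cluster-A"), ("connects", "storage-B")]
      model["xfer-svc"].relationships += [("providedBy", "link-A-B")]

      # A toy "computation": which resources does a service ultimately depend on?
      def depends_on(name, seen=None):
          seen = seen or set()
          for _, other in model[name].relationships:
              if other not in seen:
                  seen.add(other)
                  depends_on(other, seen)
          return seen

      print(depends_on("xfer-svc"))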

  3. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  4. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc; Vitriolo, Alessandro; Adamo, Antonio; Laise, Pasquale; Das, Vivek; Testa, Giuseppe

    2016-01-01

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created an R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  5. PRISM: Processing routines in IDL for spectroscopic measurements (installation manual and user's guide, version 1.0)

    Science.gov (United States)

    Kokaly, Raymond F.

    2011-01-01

    This report describes procedures for installing and using the U.S. Geological Survey Processing Routines in IDL for Spectroscopic Measurements (PRISM) software. PRISM provides a framework to conduct spectroscopic analysis of measurements made using laboratory, field, airborne, and space-based spectrometers. Using PRISM functions, the user can compare the spectra of materials of unknown composition with reference spectra of known materials. This spectroscopic analysis allows the composition of the material to be identified and characterized. Among its other functions, PRISM contains routines for the storage of spectra in database files, import/export of ENVI spectral libraries, importation of field spectra, correction of spectra to absolute reflectance, arithmetic operations on spectra, interactive continuum removal and comparison of spectral features, correction of imaging spectrometer data to ground-calibrated reflectance, and identification and mapping of materials using spectral feature-based analysis of reflectance data. This report provides step-by-step instructions for installing the PRISM software and running its functions.

  6. Discharge Fee Policy Analysis: A Computable General Equilibrium (CGE) Model of Water Resources and Water Environments

    OpenAIRE

    Guohua Fang; Ting Wang; Xinyi Si; Xin Wen; Yu Liu

    2016-01-01

    To alleviate increasingly serious water pollution and shortages in developing countries, various kinds of policies have been implemented by local governments. It is vital to quantify and evaluate the performance and potential economic impacts of these policies. This study develops a Computable General Equilibrium (CGE) model to simulate the regional economic and environmental effects of discharge fees. Firstly, water resources and water environment factors are separated from the input and out...

  7. Production experience with the ATLAS Event Service

    Science.gov (United States)

    Benjamin, D.; Calafiura, P.; Childers, T.; De, K.; Guan, W.; Maeno, T.; Nilsson, P.; Tsulaia, V.; Van Gemmeren, P.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The ATLAS Event Service (AES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources, such as spot market commercial clouds, supercomputers and volunteer computing. The Event Service architecture allows real time delivery of fine grained workloads to running payload applications which process dispatched events or event ranges and immediately stream the outputs to highly scalable Object Stores. Thanks to its agile and flexible architecture the AES is currently being used by grid sites for assigning low priority workloads to otherwise idle computing resources; similarly harvesting HPC resources in an efficient back-fill mode; and massively scaling out to the 50-100k concurrent core level on the Amazon spot market to efficiently utilize those transient resources for peak production needs. Platform ports in development include ATLAS@Home (BOINC) and the Google Compute Engine, and a growing number of HPC platforms. After briefly reviewing the concept and the architecture of the Event Service, we will report the status and experience gained in AES commissioning and production operations on supercomputers, and our plans for extending ES application beyond Geant4 simulation to other workflows, such as reconstruction and data analysis.

  8. Cloud Computing:Strategies for Cloud Computing Adoption

    OpenAIRE

    Shimba, Faith

    2010-01-01

    The advent of cloud computing in recent years has sparked an interest from different organisations, institutions and users to take advantage of web applications. This is a result of the new economic model for the Information Technology (IT) department that cloud computing promises. The model promises a shift from an organisation required to invest heavily for limited IT resources that are internally managed, to a model where the organisation can buy or rent resources that are managed by a clo...

  9. Information resource management concepts for records managers

    Energy Technology Data Exchange (ETDEWEB)

    Seesing, P.R.

    1992-10-01

    Information Resource Management (IRM) is the label given to the various approaches used to foster greater accountability for the use of computing resources. It is a corporate philosophy that treats information as it would its other resources. There is a reorientation from simply expenditures to considering the value of the data stored on that hardware. Accountability for computing resources is expanding beyond just the data processing (DP) or management information systems (MIS) manager to include senior organization management and user management. Management's goal for office automation is being refocused from saving money to improving productivity. A model developed by Richard Nolan (1982) illustrates the basic evolution of computer use in organizations. Computer Era: (1) Initiation (computer acquisition), (2) Contagion (intense system development), (3) Control (proliferation of management controls). Data Resource Era: (4) Integration (user service orientation), (5) Data Administration (corporate value of information), (6) Maturity (strategic approach to information technology). The first three stages mark the growth of traditional data processing and management information systems departments. The development of the IRM philosophy in an organization involves the restructuring of the DP organization and new management techniques. The three stages of the Data Resource Era represent the evolution of IRM. This paper examines each of them in greater detail.

  11. Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services as Computing Utilities

    OpenAIRE

    Buyya, Rajkumar; Yeo, Chee Shin; Venugopal, Srikumar

    2008-01-01

    This keynote paper: presents a 21st century vision of computing; identifies various computing paradigms promising to deliver the vision of computing utilities; defines Cloud computing and provides the architecture for creating market-oriented Clouds by leveraging technologies such as VMs; provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; presents...

  12. Natural resources and energy systems: a strategic perspective

    International Nuclear Information System (INIS)

    Lee, T.H.; Schmidt, E.; Anderer, J.

    1986-06-01

    Oil prices fall to below ten dollars a barrel. The US synfuel program is cancelled after billions of dollars are invested. The Tennessee Valley Authority tries to sell unfinished nuclear plants to China. A completed nuclear plant stands idle in Austria. Canadians seek uses for excess power from Candu plants. A glut of cheap oil, a general excess of operating nuclear capacity, an ever-growing number of mothballed or not-quite-completed non-operating nuclear plants. Today the formidable challenge is to use abundant energy sources in ways that support social and economic development and protect the environment. In this paper we seek to provide a strategic perspective on how to meet this challenge. Toward this end, we explore the misconceptions of the past that led to costly errors in energy planning. The issue here is to dispel the myth of resource depletion as the driving force for the shift from one energy source to another. To gain insight into the actual basis for energy substitution, we turn our attention to energy patterns, viewing these in retrospect and prospect. This review of energy development provides an opportunity to consider some of the environmental implications of the expanded use of energy resources. These findings are then drawn together in an attempt to highlight certain R and D options that we believe offer a sound basis for strategic energy management. (Author, shortened by G.Q.)

  13. The pilot way to Grid resources using glideinWMS

    CERN Document Server

    Sfiligoi, Igor; Holzman, Burt; Mhashilkar, Parag; Padhi, Sanjay; Wurthwein, Frank

    Grid computing has become very popular in big and widespread scientific communities with high computing demands, like high energy physics. Computing resources are being distributed over many independent sites with only a thin layer of grid middleware shared between them. This deployment model has proven to be very convenient for computing resource providers, but has introduced several problems for the users of the system, the three major ones being the complexity of job scheduling, the non-uniformity of compute resources, and the lack of good job monitoring. Pilot jobs address all the above problems by creating a virtual private computing pool on top of grid resources. This paper presents both the general pilot concept, as well as a concrete implementation, called glideinWMS, deployed in the Open Science Grid.
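
    The pilot concept itself is simple enough to sketch (this is conceptual pseudocode in Python, not glideinWMS code; all names are illustrative): a pilot lands on a grid worker node, validates the environment, then repeatedly pulls real user payloads from a central queue, giving users a uniform virtual pool on top of heterogeneous sites.

      import queue
      import time

      work_queue = queue.Queue()            # stands in for the central (VO) job queue
      for i in range(3):
          work_queue.put(f"user-job-{i}")

      def environment_ok():
          return True                       # real pilots validate OS, disk, software, etc.

      def run_pilot(max_wall_seconds=3600):
          start = time.time()
          if not environment_ok():
              return                        # bad node: no user job is ever scheduled here
          while time.time() - start < max_wall_seconds:
              try:
                  job = work_queue.get_nowait()
              except queue.Empty:
                  break                     # no matching work: pilot exits, slot is released
              print(f"pilot executing {job} on this worker node")

      run_pilot()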

  14. National Uranium Resource Evaluation Program. Hydrogeochemical and Stream Sediment Reconnaissance Basic Data Reports Computer Program Requests Manual

    International Nuclear Information System (INIS)

    1980-01-01

    This manual is intended to aid those who are unfamiliar with ordering computer output for verification and preparation of Uranium Resource Evaluation (URE) Project reconnaissance basic data reports. The manual is also intended to help standardize the procedures for preparing the reports. Each section describes a program or group of related programs. The sections are divided into three parts: Purpose, Request Forms, and Requested Information

  15. Using Amazon's Elastic Compute Cloud to scale CMS' compute hardware dynamically.

    CERN Document Server

    Melo, Andrew Malone

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud-computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost-structure and transient nature of EC2 services makes them inappropriate for some CMS production services and functions. We also found that the resources are not truly on-demand as limits and caps on usage are imposed. Our trial workflows allow us t...

  16. Security of fixed and wireless computer networks

    NARCIS (Netherlands)

    Verschuren, J.; Degen, A.J.G.; Veugen, P.J.M.

    2003-01-01

    A few decades ago, most computers were stand-alone machines: they were able to process information using their own resources. Later, computer systems were connected to each other enabling a computer system to exchange data with another computer and to use resources of another computer. With the

  17. Computer-modeling codes to improve exploration nuclear-logging methods. National Uranium Resource Evaluation

    International Nuclear Information System (INIS)

    Wilson, R.D.; Price, R.K.; Kosanke, K.L.

    1983-03-01

    As part of the Department of Energy's National Uranium Resource Evaluation (NURE) project's Technology Development effort, a number of computer codes and accompanying data bases were assembled for use in modeling responses of nuclear borehole logging sondes. The logging methods include fission neutron, active and passive gamma-ray, and gamma-gamma. These CDC-compatible computer codes and data bases are available on magnetic tape from the DOE Technical Library at its Grand Junction Area Office. Some of the computer codes are standard radiation-transport programs that have been available to the radiation shielding community for several years. Other codes were specifically written to model the response of borehole radiation detectors or are specialized borehole modeling versions of existing Monte Carlo transport programs. Results from several radiation modeling studies are available as two large data bases (neutron and gamma-ray). These data bases are accompanied by appropriate processing programs that permit the user to model a wide range of borehole and formation-parameter combinations for fission-neutron, neutron-activation and gamma-gamma logs. The first part of this report consists of a brief abstract for each code or data base. The abstract gives the code name and title, short description, auxiliary requirements, typical running time (CDC 6600), and a list of references. The next section gives format specifications and/or directory for the tapes. The final section of the report presents listings for programs used to convert data bases between machine floating-point and EBCDIC

  18. CLOUD COMPUTING OVERVIEW AND CHALLENGES: A REVIEW PAPER

    OpenAIRE

    Satish Kumar*, Vishal Thakur, Payal Thakur, Ashok Kumar Kashyap

    2017-01-01

    The cloud computing era is the most resourceful, elastic and scalable period for Internet technology, allowing computing resources to be used successfully over the Internet. Cloud computing provides not only speed, accuracy, storage capacity and efficiency for computing, but also promotes green computing and resource utilization. In this research paper, a brief description of cloud computing, cloud services and cloud security challenges is given. Also the literature review o...

  19. Batch efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Schwickerath, Ulrich; Silva, Ricardo; Uria, Christian, E-mail: Ulrich.Schwickerath@cern.c, E-mail: Ricardo.Silva@cern.c [CERN IT, 1211 Geneve 23 (Switzerland)

    2010-04-01

    A frequent source of concern for resource providers is the efficient use of computing resources in their centers. This has a direct impact on requests for new resources. There are two different but strongly correlated aspects to be considered: while users are mostly interested in a good turn-around time for their jobs, resource providers are mostly interested in a high and efficient usage of their available resources. Both things, the box usage and the efficiency of individual user jobs, need to be closely monitored so that the sources of the inefficiencies can be identified. At CERN, the Lemon monitoring system is used for both purposes. Examples of such sources are poorly written user code, inefficient access to mass storage systems, and dedication of resources to specific user groups. As a first step for improvements CERN has launched a project to develop a scheduler add-on that allows careful overloading of worker nodes that run idle jobs.
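
    A minimal sketch of the overloading idea described above (the thresholds and function are assumptions, not CERN's scheduler add-on): a node already running its nominal number of jobs accepts one extra job only if the monitored CPU efficiency of the current jobs is low, i.e. the occupied slots are largely idle.

      def accept_extra_job(running_jobs, nominal_slots, cpu_efficiency,
                           max_overload=1, efficiency_threshold=0.30):
          # cpu_efficiency: measured CPU time / wall time of the jobs on the node.
          if running_jobs < nominal_slots:
              return True                                 # normal scheduling, node not full
          overloaded_by = running_jobs - nominal_slots
          return overloaded_by < max_overload and cpu_efficiency < efficiency_threshold

      print(accept_extra_job(running_jobs=8, nominal_slots=8, cpu_efficiency=0.12))  # True
      print(accept_extra_job(running_jobs=8, nominal_slots=8, cpu_efficiency=0.85))  # False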

  20. Dynamic resource allocation engine for cloud-based real-time video transcoding in mobile cloud computing environments

    Science.gov (United States)

    Adedayo, Bada; Wang, Qi; Alcaraz Calero, Jose M.; Grecos, Christos

    2015-02-01

    The recent explosion in video-related Internet traffic has been driven by the widespread use of smart mobile devices, particularly smartphones with advanced cameras that are able to record high-quality videos. Although many of these devices offer the facility to record videos at different spatial and temporal resolutions, primarily with local storage considerations in mind, most users only ever use the highest quality settings. The vast majority of these devices are optimised for compressing the acquired video using a single built-in codec and have neither the computational resources nor battery reserves to transcode the video to alternative formats. This paper proposes a new low-complexity dynamic resource allocation engine for cloud-based video transcoding services that are both scalable and capable of being delivered in real-time. Firstly, through extensive experimentation, we establish resource requirement benchmarks for a wide range of transcoding tasks. The set of tasks investigated covers the most widely used input formats (encoder type, resolution, amount of motion and frame rate) associated with mobile devices and the most popular output formats derived from a comprehensive set of use cases, e.g. a mobile news reporter directly transmitting videos to the TV audience of various video format requirements, with minimal usage of resources both at the reporter's end and at the cloud infrastructure end for transcoding services.

  1. TethysCluster: A comprehensive approach for harnessing cloud resources for hydrologic modeling

    Science.gov (United States)

    Nelson, J.; Jones, N.; Ames, D. P.

    2015-12-01

    Advances in water resources modeling are improving the information that can be supplied to support decisions affecting the safety and sustainability of society. However, as water resources models become more sophisticated and data-intensive they require more computational power to run. Purchasing and maintaining the computing facilities needed to support certain modeling tasks has been cost-prohibitive for many organizations. With the advent of the cloud, the computing resources needed to address this challenge are now available and cost-effective, yet there still remains a significant technical barrier to leverage these resources. This barrier inhibits many decision makers and even trained engineers from taking advantage of the best science and tools available. Here we present the Python tools TethysCluster and CondorPy, that have been developed to lower the barrier to model computation in the cloud by providing (1) programmatic access to dynamically scalable computing resources, (2) a batch scheduling system to queue and dispatch the jobs to the computing resources, (3) data management for job inputs and outputs, and (4) the ability to dynamically create, submit, and monitor computing jobs. These Python tools leverage the open source, computing-resource management, and job management software, HTCondor, to offer a flexible and scalable distributed-computing environment. While TethysCluster and CondorPy can be used independently to provision computing resources and perform large modeling tasks, they have also been integrated into Tethys Platform, a development platform for water resources web apps, to enable computing support for modeling workflows and decision-support systems deployed as web apps.

  2. A resource management architecture for metacomputing systems.

    Energy Technology Data Exchange (ETDEWEB)

    Czajkowski, K.; Foster, I.; Karonis, N.; Kesselman, C.; Martin, S.; Smith, W.; Tuecke, S.

    1999-08-24

    Metacomputing systems are intended to support remote and/or concurrent use of geographically distributed computational resources. Resource management in such systems is complicated by five concerns that do not typically arise in other situations: site autonomy and heterogeneous substrates at the resources, and application requirements for policy extensibility, co-allocation, and online control. We describe a resource management architecture that addresses these concerns. This architecture distributes the resource management problem among distinct local manager, resource broker, and resource co-allocator components and defines an extensible resource specification language to exchange information about requirements. We describe how these techniques have been implemented in the context of the Globus metacomputing toolkit and used to implement a variety of different resource management strategies. We report on our experiences applying our techniques in a large testbed, GUSTO, incorporating 15 sites, 330 computers, and 3600 processors.

  3. COMPUTATIONAL TOXICOLOGY-WHERE IS THE DATA? ...

    Science.gov (United States)

    This talk will briefly describe the state of the data world for computational toxicology and one approach to improve the situation, called ACToR (Aggregated Computational Toxicology Resource).

  4. COMPARATIVE STUDY OF CLOUD COMPUTING AND MOBILE CLOUD COMPUTING

    OpenAIRE

    Nidhi Rajak*, Diwakar Shukla

    2018-01-01

    The present era is that of Information and Communication Technology (ICT), and a number of research efforts are ongoing in Cloud Computing and Mobile Cloud Computing, on topics such as security issues, data management, load balancing and so on. Cloud computing provides services to the end user over the Internet, and the primary objectives of this computing are resource sharing and pooling among the end users. Mobile Cloud Computing is a combination of Cloud Computing and Mobile Computing. Here, data is stored in...

  5. Using Multiple Seasonal Holt-Winters Exponential Smoothing to Predict Cloud Resource Provisioning

    OpenAIRE

    Ashraf A. Shahin

    2016-01-01

    Elasticity is one of the key features of cloud computing that attracts many SaaS providers seeking to minimize the cost of their services. Cost is minimized by automatically provisioning and releasing computational resources depending on actual computational needs. However, the delay in starting up new virtual resources can cause Service Level Agreement violations. Consequently, predicting cloud resource provisioning has gained a lot of attention as a way to scale computational resources in advance. However, most of the current approac...
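
    A minimal additive Holt-Winters sketch (a single seasonal cycle, a simplification of the multiple-seasonal method the abstract refers to; all parameter values are illustrative): smooth a demand series and forecast the next period so resources can be provisioned ahead of the load.

      def holt_winters_additive(y, season_len, alpha=0.4, beta=0.1, gamma=0.3, horizon=4):
          # Initialize level, trend and seasonal components from the first cycles.
          level = sum(y[:season_len]) / season_len
          trend = (sum(y[season_len:2 * season_len]) - sum(y[:season_len])) / season_len ** 2
          seasonal = [y[i] - level for i in range(season_len)]
          for t in range(len(y)):
              s = seasonal[t % season_len]
              last_level = level
              level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
              trend = beta * (level - last_level) + (1 - beta) * trend
              seasonal[t % season_len] = gamma * (y[t] - level) + (1 - gamma) * s
          # h-step-ahead forecasts reuse the matching seasonal component.
          return [level + (h + 1) * trend + seasonal[(len(y) + h) % season_len]
                  for h in range(horizon)]

      # Synthetic hourly CPU demand with a 4-hour cycle.
      demand = [20, 35, 50, 30] * 6
      print(holt_winters_additive(demand, season_len=4))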

  6. SOCR: Statistics Online Computational Resource

    OpenAIRE

    Dinov, Ivo D.

    2006-01-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis...

  7. National Energy Research Scientific Computing Center (NERSC): Advancing the frontiers of computational science and technology

    Energy Technology Data Exchange (ETDEWEB)

    Hules, J. [ed.

    1996-11-01

    National Energy Research Scientific Computing Center (NERSC) provides researchers with high-performance computing tools to tackle science's biggest and most challenging problems. Founded in 1974 by DOE/ER, the Controlled Thermonuclear Research Computer Center was the first unclassified supercomputer center and was the model for those that followed. Over the years the center's name was changed to the National Magnetic Fusion Energy Computer Center and then to NERSC; it was relocated to LBNL. NERSC, one of the largest unclassified scientific computing resources in the world, is the principal provider of general-purpose computing services to DOE/ER programs: Magnetic Fusion Energy, High Energy and Nuclear Physics, Basic Energy Sciences, Health and Environmental Research, and the Office of Computational and Technology Research. NERSC users are a diverse community located throughout US and in several foreign countries. This brochure describes: the NERSC advantage, its computational resources and services, future technologies, scientific resources, and computational science of scale (interdisciplinary research over a decade or longer; examples: combustion in engines, waste management chemistry, global climate change modeling).

  8. Concrete resource analysis of the quantum linear-system algorithm used to compute the electromagnetic scattering cross section of a 2D target

    Science.gov (United States)

    Scherer, Artur; Valiron, Benoît; Mau, Siun-Chuon; Alexander, Scott; van den Berg, Eric; Chapuran, Thomas E.

    2017-03-01

    We provide a detailed estimate for the logical resource requirements of the quantum linear-system algorithm (Harrow et al. in Phys Rev Lett 103:150502, 2009) including the recently described elaborations and application to computing the electromagnetic scattering cross section of a metallic target (Clader et al. in Phys Rev Lett 110:250504, 2013). Our resource estimates are based on the standard quantum-circuit model of quantum computation; they comprise circuit width (related to parallelism), circuit depth (total number of steps), the number of qubits and ancilla qubits employed, and the overall number of elementary quantum gate operations as well as more specific gate counts for each elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}. In order to perform these estimates, we used an approach that combines manual analysis with automated estimates generated via the Quipper quantum programming language and compiler. Our estimates pertain to the explicit example problem size N = 332,020,680, beyond which, according to a crude big-O complexity comparison, the quantum linear-system algorithm is expected to run faster than the best known classical linear-system solving algorithm. For this problem size, a desired calculation accuracy ε = 0.01 requires an approximate circuit width of 340 and a circuit depth of order 10^25 if oracle costs are excluded, and a circuit width and circuit depth of order 10^8 and 10^29, respectively, if the resource requirements of oracles are included, indicating that the commonly ignored oracle resources are considerable. In addition to providing detailed logical resource estimates, it is also the purpose of this paper to demonstrate explicitly (using a fine-grained approach rather than relying on coarse big-O asymptotic approximations) how these impressively large numbers arise with an actual circuit implementation of a quantum algorithm. While our estimates may prove to be conservative as more efficient

  9. Grid Computing

    Indian Academy of Sciences (India)

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organisations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers ...

  10. Radiotherapy infrastructure and human resources in Switzerland. Present status and projected computations for 2020

    International Nuclear Information System (INIS)

    Datta, Niloy Ranjan; Khan, Shaka; Marder, Dietmar; Zwahlen, Daniel; Bodis, Stephan

    2016-01-01

    The purpose of this study was to evaluate the present status of radiotherapy infrastructure and human resources in Switzerland and compute projections for 2020. The European Society of Therapeutic Radiation Oncology "Quantification of Radiation Therapy Infrastructure and Staffing" guidelines (ESTRO-QUARTS) and those of the International Atomic Energy Agency (IAEA) were applied to estimate the requirements for teleradiotherapy (TRT) units, radiation oncologists (RO), medical physicists (MP) and radiotherapy technologists (RTT). The databases used for computation of the present gap and additional requirements are (a) Global Cancer Incidence, Mortality and Prevalence (GLOBOCAN) for cancer incidence, (b) the Directory of Radiotherapy Centres (DIRAC) of the IAEA for existing TRT units, (c) human resources from the recent ESTRO "Health Economics in Radiation Oncology" (HERO) survey and (d) radiotherapy utilization (RTU) rates for each tumour site, published by the Ingham Institute for Applied Medical Research (IIAMR). In 2015, 30,999 of 45,903 cancer patients would have required radiotherapy. By 2020, this will have increased to 34,041 of 50,427 cancer patients. Switzerland presently has an adequate number of TRTs, but a deficit of 57 ROs, 14 MPs and 36 RTTs. By 2020, an additional 7 TRTs, 72 ROs, 22 MPs and 66 RTTs will be required. In addition, a realistic dynamic model for calculation of staff requirements due to anticipated changes in future radiotherapy practices has been proposed. This model could be tailor-made and individualized for any radiotherapy centre. A 9.8% increase in radiotherapy requirements is expected for cancer patients over the next 5 years. The present study should assist the stakeholders and health planners in designing an appropriate strategy for meeting future radiotherapy needs for Switzerland. (orig.)
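
    The projection logic reduces to simple arithmetic, sketched below (the workload norms used here are placeholder assumptions, not the actual ESTRO-QUARTS/IAEA figures applied in the study): incident cases times the radiotherapy utilization rate gives the number of treatment courses, which is then divided by per-unit and per-staff workload norms.

      import math

      def staffing_needs(cancer_incidence, rtu_rate,
                         patients_per_trt=450, patients_per_ro=250,
                         patients_per_mp=500, patients_per_rtt=150):
          courses = cancer_incidence * rtu_rate            # patients needing radiotherapy
          return {
              "RT courses": round(courses),
              "TRT units": math.ceil(courses / patients_per_trt),
              "ROs":       math.ceil(courses / patients_per_ro),
              "MPs":       math.ceil(courses / patients_per_mp),
              "RTTs":      math.ceil(courses / patients_per_rtt),
          }

      # 2020 projection from the abstract: 50,427 incident cases, ~67.5% needing RT.
      print(staffing_needs(50_427, 34_041 / 50_427))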

  11. Computed Tomography (CT) -- Sinuses

    Medline Plus

    Full Text Available Computed tomography (CT) of the sinuses ...

  12. Exploiting opportunistic resources for ATLAS with ARC CE and the Event Service

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00226583; The ATLAS collaboration; Filipčič, Andrej; Guan, Wen; Tsulaia, Vakhtang; Walker, Rodney; Wenaus, Torre

    2017-01-01

    With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from the resources that comprise the Grid computing of most experiments; therefore, exploiting these resources requires a change in strategy for the experiment. The resources may be highly restrictive in what can be run or in connections to the outside world, or tolerate opportunistic usage only on condition that tasks may be terminated without warning. The ARC CE, with its non-intrusive architecture, is designed to integrate resources such as High Performance Computing (HPC) systems into a computing Grid. The ATLAS experiment developed the Event Service primarily to address the issue of jobs that can be terminated at any point when opportunistic resources are needed by someone else. This paper describes the integration of these two systems in order to exploit opportunistic resources for ATLAS in...

  13. Exploiting Opportunistic Resources for ATLAS with ARC CE and the Event Service

    CERN Document Server

    Cameron, David; The ATLAS collaboration

    2016-01-01

    With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from the resources that comprise the Grid computing of most experiments; therefore, exploiting these resources requires a change in strategy for the experiment. The resources may be highly restrictive in what can be run or in connections to the outside world, or tolerate opportunistic usage only on condition that tasks may be terminated without warning. The ARC CE, with its non-intrusive architecture, is designed to integrate resources such as High Performance Computing (HPC) systems into a computing Grid. The ATLAS experiment developed the Event Service primarily to address the issue of jobs that can be terminated at any point when opportunistic resources are needed by someone else. This paper describes the integration of these two systems in order to exploit opportunistic resources for ATLAS in...

  14. Application of Evolutionary Computation in Automotive Powertrain Mount Tuning

    Directory of Open Access Journals (Sweden)

    Anab Akanda

    2006-01-01

    Full Text Available Engine mount tuning is a multi-disciplinary exercise since it affects idle-shake, road-shake and powertrain noise response. Engine inertia is often used as a tuned absorber for controlling suspension-resonance-related road-shake issues. Last but not least, vehicle ride and handling may also be affected by mount tuning. In this work, Torque-Roll-Axis (TRA) decoupling of the rigid powertrain was used as a starting point for mount tuning. The nodal point of flexible powertrain bending was used to define the envelope for transmission mount locations. The frequency corresponding to the decoupled roll mode of the rigid powertrain was then adjusted for idle-shake and road-shake response management.

  15. Research Computing and Data for Geoscience

    OpenAIRE

    Smith, Preston

    2015-01-01

    This presentation will discuss the data storage and computational resources available for GIS researchers at Purdue.

  16. Computational Science at the Argonne Leadership Computing Facility

    Science.gov (United States)

    Romero, Nichols

    2014-03-01

    The goal of the Argonne Leadership Computing Facility (ALCF) is to extend the frontiers of science by solving problems that require innovative approaches and the largest-scale computing systems. ALCF's most powerful computer - Mira, an IBM Blue Gene/Q system - has nearly one million cores. How does one program such systems? What software tools are available? Which scientific and engineering applications are able to utilize such levels of parallelism? This talk will address these questions and describe a sampling of projects that are using ALCF systems in their research, including ones in nanoscience, materials science, and chemistry. Finally, the ways to gain access to ALCF resources will be presented. This research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under contract DE-AC02-06CH11357.

  17. Cloud computing geospatial application for water resources based on free and open source software and open standards - a prototype

    Science.gov (United States)

    Delipetrev, Blagoj

    2016-04-01

    Presently, most existing software is desktop-based, designed to work on a single computer, which represents a major limitation in many ways, starting from limited computer processing and storage power, accessibility, availability, etc. The only feasible solution lies in the web and cloud. This abstract presents research and development of a cloud computing geospatial application for water resources based on free and open source software and open standards, using a hybrid public-private cloud deployment model running on two separate virtual machines (VMs). The first one (VM1) is running on Amazon web services (AWS) and the second one (VM2) is running on a Xen cloud platform. The presented cloud application is developed using free and open source software, open standards and prototype code. The cloud application presents a framework for how to develop a specialized cloud geospatial application that needs only a web browser to be used. This cloud application is the ultimate collaboration geospatial platform because multiple users across the globe with an internet connection and a browser can jointly model geospatial objects, enter attribute data and information, execute algorithms, and visualize results. The presented cloud application is available all the time, accessible from everywhere, scalable, works in a distributed computer environment, creates a real-time multiuser collaboration platform, uses interoperable programming-language code and components, and is flexible in including additional components. The cloud geospatial application is implemented as a specialized water resources application with three web services for 1) data infrastructure (DI), 2) support for water resources modelling (WRM), 3) user management. The web services are running on two VMs that communicate over the internet, providing services to users. The application was tested on the Zletovica river basin case study with concurrent multiple users. The application is a state ...

  18. A new major SETI project based on Project SERENDIP data and 100,000 personal computers

    Science.gov (United States)

    Sullivan, Woodruff T., III; Werthimer, Dan; Bowyer, Stuart; Cobb, Jeff; Gedye, David; Anderson, David

    1997-01-01

    We are now developing an innovative SETI project involving massively parallel computation on desktop computers scattered around the world. The public will be uniquely involved in a real scientific project. Individuals will download a screensaver program that will not only provide the usual attractive graphics when their computer is idle, but will also perform sophisticated analysis of SETI data using the host computer. The data are tapped off Project SERENDIP IV's receiver and SETI survey operating on the 305-m-diameter Arecibo radio telescope. We make a continuous tape-recording of a 2-MHz bandwidth signal centered on the 21-cm H I line. The data on these tapes are then preliminarily screened and parceled out by a server that supplies small chunks of data over the Internet to clients possessing the screen-saver software. After the client computer has automatically analyzed a complete chunk of data, a report on the best candidate signals is sent back to the server, whereupon a new chunk of data is sent out. If 50,000-100,000 customers can be achieved, the computing power will be equivalent to a substantial fraction of a typical supercomputer, and the project will cover a volume of parameter space comparable to that of SERENDIP IV.
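
    A minimal sketch of the work-distribution loop described above: a server parcels out small data chunks, an idle client analyzes each chunk, and a report of the best candidate signals is returned before the next chunk is issued. The in-process "server", the toy scoring function and all names are illustrative assumptions, not the SERENDIP/SETI@home software.

        from collections import deque
        import random

        # Toy stand-in for the tape data: each chunk is a list of samples.
        chunks = deque([random.random() for _ in range(1024)] for _ in range(10))
        reports = []

        def analyze(chunk):
            """Hypothetical client-side analysis: report the strongest 'signal'
            (here simply the largest sample) found in the chunk."""
            best = max(chunk)
            return {"best_power": best, "index": chunk.index(best)}

        # Server loop: hand out a chunk, collect one report, then hand out the next,
        # mirroring the screensaver's round trip over the Internet.
        while chunks:
            chunk = chunks.popleft()        # server sends a small chunk to a client
            reports.append(analyze(chunk))  # client analyzes it while otherwise idle
                                            # and returns its best candidate signals

        print(f"processed {len(reports)} chunks; "
              f"top candidate power = {max(r['best_power'] for r in reports):.3f}")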

  19. Dynamic Resource Allocation with the arcControlTower

    CERN Document Server

    Filipcic, Andrej; The ATLAS collaboration; Nilsen, Jon Kerr

    2015-01-01

    Distributed computing resources available for high-energy physics research are becoming less dedicated to one type of workflow and researchers’ workloads are increasingly exploiting modern computing technologies such as parallelism. The current pilot job management model used by many experiments relies on static dedicated resources and cannot easily adapt to these changes. The model used for ATLAS in Nordic countries and some other places enables a flexible job management system based on dynamic resources allocation. Rather than a fixed set of resources managed centrally, the model allows resources to be requested on the fly. The ARC Computing Element (ARC-CE) and ARC Control Tower (aCT) are the key components of the model. The aCT requests jobs from the ATLAS job management system (PanDA) and submits a fully-formed job description to ARC-CEs. ARC-CE can then dynamically request the required resources from the underlying batch system. In this paper we describe the architecture of the model and the experienc...

  20. Application of a Resource Theory for Magic States to Fault-Tolerant Quantum Computing.

    Science.gov (United States)

    Howard, Mark; Campbell, Earl

    2017-03-03

    Motivated by their necessity for most fault-tolerant quantum computation schemes, we formulate a resource theory for magic states. First, we show that robustness of magic is a well-behaved magic monotone that operationally quantifies the classical simulation overhead for a Gottesman-Knill-type scheme using ancillary magic states. Our framework subsequently finds immediate application in the task of synthesizing non-Clifford gates using magic states. When magic states are interspersed with Clifford gates, Pauli measurements, and stabilizer ancillas-the most general synthesis scenario-then the class of synthesizable unitaries is hard to characterize. Our techniques can place nontrivial lower bounds on the number of magic states required for implementing a given target unitary. Guided by these results, we have found new and optimal examples of such synthesis.

  1. Brain-Computer Interface Controlled Functional Electrical Stimulation System for Ankle Movement

    Directory of Open Access Journals (Sweden)

    King Christine E

    2011-08-01

    Full Text Available Abstract Background Many neurological conditions, such as stroke, spinal cord injury, and traumatic brain injury, can cause chronic gait function impairment due to foot-drop. Current physiotherapy techniques provide only a limited degree of motor function recovery in these individuals, and therefore novel therapies are needed. Brain-computer interface (BCI) is a relatively novel technology with a potential to restore, substitute, or augment lost motor behaviors in patients with neurological injuries. Here, we describe the first successful integration of a noninvasive electroencephalogram (EEG)-based BCI with a noninvasive functional electrical stimulation (FES) system that enables the direct brain control of foot dorsiflexion in able-bodied individuals. Methods A noninvasive EEG-based BCI system was integrated with a noninvasive FES system for foot dorsiflexion. Subjects underwent computer-cued epochs of repetitive foot dorsiflexion and idling while their EEG signals were recorded and stored for offline analysis. The analysis generated a prediction model that allowed EEG data to be analyzed and classified in real time during online BCI operation. The real-time online performance of the integrated BCI-FES system was tested in a group of five able-bodied subjects who used repetitive foot dorsiflexion to elicit BCI-FES mediated dorsiflexion of the contralateral foot. Results Five able-bodied subjects performed 10 alternations of idling and repetitive foot dorsiflexion to trigger BCI-FES mediated dorsiflexion of the contralateral foot. The epochs of BCI-FES mediated foot dorsiflexion were highly correlated with the epochs of voluntary foot dorsiflexion (correlation coefficient ranged between 0.59 and 0.77) with latencies ranging from 1.4 sec to 3.1 sec. In addition, all subjects achieved a 100% BCI-FES response (no omissions), and one subject had a single false alarm. Conclusions This study suggests that the integration of a noninvasive BCI with a lower-extremity FES system is feasible.

  2. Brain-computer interface controlled functional electrical stimulation system for ankle movement.

    Science.gov (United States)

    Do, An H; Wang, Po T; King, Christine E; Abiri, Ahmad; Nenadic, Zoran

    2011-08-26

    Many neurological conditions, such as stroke, spinal cord injury, and traumatic brain injury, can cause chronic gait function impairment due to foot-drop. Current physiotherapy techniques provide only a limited degree of motor function recovery in these individuals, and therefore novel therapies are needed. Brain-computer interface (BCI) is a relatively novel technology with a potential to restore, substitute, or augment lost motor behaviors in patients with neurological injuries. Here, we describe the first successful integration of a noninvasive electroencephalogram (EEG)-based BCI with a noninvasive functional electrical stimulation (FES) system that enables the direct brain control of foot dorsiflexion in able-bodied individuals. A noninvasive EEG-based BCI system was integrated with a noninvasive FES system for foot dorsiflexion. Subjects underwent computer-cued epochs of repetitive foot dorsiflexion and idling while their EEG signals were recorded and stored for offline analysis. The analysis generated a prediction model that allowed EEG data to be analyzed and classified in real time during online BCI operation. The real-time online performance of the integrated BCI-FES system was tested in a group of five able-bodied subjects who used repetitive foot dorsiflexion to elicit BCI-FES mediated dorsiflexion of the contralateral foot. Five able-bodied subjects performed 10 alternations of idling and repetitive foot dorsiflexion to trigger BCI-FES mediated dorsiflexion of the contralateral foot. The epochs of BCI-FES mediated foot dorsiflexion were highly correlated with the epochs of voluntary foot dorsiflexion (correlation coefficient ranged between 0.59 and 0.77) with latencies ranging from 1.4 sec to 3.1 sec. In addition, all subjects achieved a 100% BCI-FES response (no omissions), and one subject had a single false alarm. This study suggests that the integration of a noninvasive BCI with a lower-extremity FES system is feasible. With additional modifications ...

  3. Impact of remote sensing upon the planning, management and development of water resources. Summary of computers and computer growth trends for hydrologic modeling and the input of ERTS image data processing load

    Science.gov (United States)

    Castruccio, P. A.; Loats, H. L., Jr.

    1975-01-01

    An analysis of current computer usage by major water resources users was made to determine the trends of usage and costs for the principal hydrologic users/models. The laws and empirical relationships governing the growth of the data processing loads were described and applied to project the future data loads. Data loads for ERTS CCT image processing were computed and projected through the 1985 era. The analysis shows a significant impact due to the utilization and processing of ERTS CCT data.

  4. NET-COMPUTER: Internet Computer Architecture and its Application in E-Commerce

    Directory of Open Access Journals (Sweden)

    P. O. Umenne

    2012-12-01

    Full Text Available Research in Intelligent Agents has yielded interesting results, some of which have been translated into commercial ventures. Intelligent Agents are executable software components that represent the user, perform tasks on behalf of the user and, when the task terminates, send the result to the user. Intelligent Agents are best suited for the Internet: a collection of computers connected together in a world-wide computer network. The Swarm and HYDRA computer architectures for Agents' execution were developed at the University of Surrey, UK in the 90s. The objective of the research was to develop a software-based computer architecture on which Agents' execution could be explored. The combination of Intelligent Agents and the HYDRA computer architecture gave rise to a new computer concept: the NET-Computer, in which the computing resources reside on the Internet. The Internet computers form the hardware and software resources, and the user is provided with a simple interface to access the Internet and run user tasks. The Agents autonomously roam the Internet (NET-Computer) executing the tasks. A growing segment of the Internet is E-Commerce for online shopping for products and services. The Internet computing resources provide a marketplace for product suppliers and consumers alike. Consumers are looking for suppliers selling products and services, while suppliers are looking for buyers. Searching the vast amount of information available on the Internet causes a great deal of problems for both consumers and suppliers. Intelligent Agents executing on the NET-Computer can surf through the Internet and select specific information of interest to the user. The simulation results show that Intelligent Agents executing on the HYDRA computer architecture could be applied in E-Commerce.

  5. Cloud Computing

    CERN Document Server

    Antonopoulos, Nick

    2010-01-01

    Cloud computing has recently emerged as a subject of substantial industrial and academic interest, though its meaning and scope is hotly debated. For some researchers, clouds are a natural evolution towards the full commercialisation of grid systems, while others dismiss the term as a mere re-branding of existing pay-per-use technologies. From either perspective, 'cloud' is now the label of choice for accountable pay-per-use access to third-party applications and computational resources on a massive scale. Clouds support patterns of less predictable resource use for applications and services ...

  6. Resource utilization and costs during the initial years of lung cancer screening with computed tomography in Canada.

    Science.gov (United States)

    Cressman, Sonya; Lam, Stephen; Tammemagi, Martin C; Evans, William K; Leighl, Natasha B; Regier, Dean A; Bolbocean, Corneliu; Shepherd, Frances A; Tsao, Ming-Sound; Manos, Daria; Liu, Geoffrey; Atkar-Khattra, Sukhinder; Cromwell, Ian; Johnston, Michael R; Mayo, John R; McWilliams, Annette; Couture, Christian; English, John C; Goffin, John; Hwang, David M; Puksa, Serge; Roberts, Heidi; Tremblay, Alain; MacEachern, Paul; Burrowes, Paul; Bhatia, Rick; Finley, Richard J; Goss, Glenwood D; Nicholas, Garth; Seely, Jean M; Sekhon, Harmanjatinder S; Yee, John; Amjadi, Kayvan; Cutz, Jean-Claude; Ionescu, Diana N; Yasufuku, Kazuhiro; Martel, Simon; Soghrati, Kamyar; Sin, Don D; Tan, Wan C; Urbanski, Stefan; Xu, Zhaolin; Peacock, Stuart J

    2014-10-01

    It is estimated that millions of North Americans would qualify for lung cancer screening and that billions of dollars of national health expenditures would be required to support population-based computed tomography lung cancer screening programs. The decision to implement such programs should be informed by data on resource utilization and costs. Resource utilization data were collected prospectively from 2059 participants in the Pan-Canadian Early Detection of Lung Cancer Study using low-dose computed tomography (LDCT). Participants who had 2% or greater lung cancer risk over 3 years using a risk prediction tool were recruited from seven major cities across Canada. A cost analysis was conducted from the Canadian public payer's perspective for resources that were used for the screening and treatment of lung cancer in the initial years of the study. The average per-person cost for screening individuals with LDCT was $453 (95% confidence interval [CI], $400-$505) for the initial 18 months of screening following a baseline scan. The screening costs were highly dependent on the detected lung nodule size, presence of cancer, screening intervention, and the screening center. The mean per-person cost of treating lung cancer with curative surgery was $33,344 (95% CI, $31,553-$34,935) over 2 years. This was lower than the cost of treating advanced-stage lung cancer with chemotherapy, radiotherapy, or supportive care alone ($47,792; 95% CI, $43,254-$52,200; p = 0.061). In the Pan-Canadian study, the average cost to screen individuals with a high risk for developing lung cancer using LDCT and the average initial cost of curative-intent treatment were lower than the average per-person cost of treating advanced-stage lung cancer, which infrequently results in a cure.

  7. Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter

    Directory of Open Access Journals (Sweden)

    Shyamala Loganathan

    2015-01-01

    Full Text Available Cloud computing is an on-demand computing model, which uses virtualization technology to provide cloud resources to users in the form of virtual machines through the internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations forming their own private cloud. Since the resources are limited in these private clouds, maximizing the utilization of resources and providing a guaranteed service to the user are the ultimate goals; for that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and a resource scheduling technique in a private cloud environment, and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decision. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, like V-MCT and priority scheduling algorithms.
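
    A minimal sketch of a scheduler that, like the one described above, weighs both job type and current resource availability when placing work. The VM and job attributes, the greedy placement rule and all values are assumptions for illustration, not the paper's actual algorithm.

        from dataclasses import dataclass, field

        @dataclass
        class VM:
            name: str
            cpu_free: int
            mem_free: int          # GB
            kind: str              # e.g. "compute"- or "memory"-optimised
            queue: list = field(default_factory=list)

        @dataclass
        class Job:
            name: str
            cpu: int
            mem: int
            kind: str              # preferred VM kind

        def schedule(jobs, vms):
            """Greedy placement: prefer a VM of the matching kind with enough
            free capacity; otherwise fall back to any VM that fits."""
            for job in sorted(jobs, key=lambda j: (j.cpu, j.mem), reverse=True):
                fits = [vm for vm in vms if vm.cpu_free >= job.cpu and vm.mem_free >= job.mem]
                preferred = [vm for vm in fits if vm.kind == job.kind] or fits
                if not preferred:
                    print(f"{job.name}: deferred (no capacity)")
                    continue
                vm = max(preferred, key=lambda v: v.cpu_free)   # most free capacity first
                vm.queue.append(job.name)
                vm.cpu_free -= job.cpu
                vm.mem_free -= job.mem

        vms = [VM("vm1", 8, 32, "compute"), VM("vm2", 4, 64, "memory")]
        jobs = [Job("render", 6, 8, "compute"), Job("analytics", 2, 48, "memory")]
        schedule(jobs, vms)
        print({vm.name: vm.queue for vm in vms})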

  8. ATLAS Cloud Computing R&D project

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2013-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  9. Calibration data Analysis Package (CAP): An IDL based widget application for analysis of X-ray calibration data

    Science.gov (United States)

    Vaishali, S.; Narendranath, S.; Sreekumar, P.

    An IDL (interactive data language) based widget application developed for the calibration of the C1XS instrument (Narendranath et al., 2010) on Chandrayaan-1 is modified to provide a generic package for the analysis of data from X-ray detectors. The package supports files in ASCII as well as FITS format. Data can be fitted with a list of inbuilt functions to derive the spectral redistribution function (SRF). We have incorporated functions such as 'HYPERMET' (Philips & Marlow 1976), including non-Gaussian components in the SRF such as a low-energy tail, a low-energy shelf and an escape peak. In addition, users can incorporate additional models which may be required to describe detector-specific features. Spectral fits use the routine 'mpfit', which implements the Levenberg-Marquardt least-squares fitting method. The SRF derived from this tool can be fed into an accompanying program to generate a redistribution matrix file (RMF) compatible with the X-ray spectral analysis package XSPEC. The tool provides a user-friendly interface to help beginners, and also provides transparency and advanced features for experts.
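
    A minimal Python sketch (not the IDL tool itself) of the kind of detector-response fit described above: a Gaussian photopeak plus a simplified low-energy tail and shelf, fitted with Levenberg-Marquardt least squares. The model form, the synthetic spectrum and all parameter values are illustrative assumptions, not the full HYPERMET function.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.special import erfc

        def response(x, amp, mu, sigma, tail_amp, tail_len, shelf_amp):
            """Simplified spectral redistribution function: Gaussian photopeak,
            one-sided exponential low-energy tail, and a step-like low-energy shelf."""
            gauss = amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)
            tail = tail_amp * np.exp(-(mu - x) / abs(tail_len)) * (x < mu)
            shelf = shelf_amp * erfc((x - mu) / (np.sqrt(2) * sigma))
            return gauss + tail + shelf

        # Synthetic calibration spectrum around a 5.9 keV peak (illustrative numbers).
        x = np.linspace(4.0, 8.0, 400)
        true = response(x, 1000, 5.9, 0.08, 50, 0.3, 5)
        rng = np.random.default_rng(0)
        y = rng.poisson(np.clip(true, 0, None)).astype(float)   # counting noise

        p0 = [900, 5.85, 0.1, 30, 0.2, 3]              # initial parameter guesses
        popt, pcov = curve_fit(response, x, y, p0=p0)  # Levenberg-Marquardt least squares
        print("fitted peak centre:", popt[1], "sigma:", popt[2])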

  10. GRID : unlimited computing power on your desktop Conference MT17

    CERN Multimedia

    2001-01-01

    The Computational GRID is an analogy to the electrical power grid for computing resources. It decouples the provision of computing, data, and networking from its use, it allows large-scale pooling and sharing of resources distributed world-wide. Every computer, from a desktop to a mainframe or supercomputer, can provide computing power or data for the GRID. The final objective is to plug your computer into the wall and have direct access to huge computing resources immediately, just like plugging-in a lamp to get instant light. The GRID will facilitate world-wide scientific collaborations on an unprecedented scale. It will provide transparent access to major distributed resources of computer power, data, information, and collaborations.

  11. Cloud Computing Security: A Survey

    OpenAIRE

    Khalil, Issa; Khreishah, Abdallah; Azeem, Muhammad

    2014-01-01

    Cloud computing is an emerging technology paradigm that migrates current technological and computing concepts into utility-like solutions similar to electricity and water systems. Clouds bring out a wide range of benefits including configurable computing resources, economic savings, and service flexibility. However, security and privacy concerns are shown to be the primary obstacles to a wide adoption of clouds. The new concepts that clouds introduce, such as multi-tenancy, resource sharing a...

  12. The dynamic management system for grid resources information of IHEP

    International Nuclear Information System (INIS)

    Gu Ming; Sun Gongxing; Zhang Weiyi

    2003-01-01

    The Grid information system is an essential base for building a Grid computing environment: it collects the resource information of each resource in a Grid in a timely manner, and provides an overall view of all resources to the other components in a Grid computing system. Grid technology can strongly support the computing needs of HEP (High Energy Physics), with its big-science and multi-organization features. In this article, the architecture and implementation of a dynamic management system are described, based on the Grid and LDAP (Lightweight Directory Access Protocol), including a Web-based design for collecting, querying and modifying resource information. (authors)

  13. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    Science.gov (United States)

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
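
    For context, a minimal Python sketch of the serial recurrence the paper starts from: each integral-image entry combines one image pixel with previously computed neighbours, which creates the row-by-row dependency that the paper's row-parallel decomposition relaxes. This is a generic illustration, not the paper's hardware algorithm.

        import numpy as np

        def integral_image(img):
            """Serial computation via the standard recurrence
            ii[y, x] = img[y, x] + ii[y-1, x] + ii[y, x-1] - ii[y-1, x-1]."""
            h, w = img.shape
            ii = np.zeros((h, w), dtype=np.int64)
            for y in range(h):
                row_sum = 0
                for x in range(w):
                    row_sum += int(img[y, x])                     # running sum of this row
                    ii[y, x] = row_sum + (ii[y - 1, x] if y > 0 else 0)
            return ii

        def box_sum(ii, top, left, bottom, right):
            """Sum of img[top:bottom+1, left:right+1] from four integral-image lookups."""
            total = ii[bottom, right]
            if top > 0:
                total -= ii[top - 1, right]
            if left > 0:
                total -= ii[bottom, left - 1]
            if top > 0 and left > 0:
                total += ii[top - 1, left - 1]
            return total

        img = np.arange(16, dtype=np.uint8).reshape(4, 4)
        ii = integral_image(img)
        assert box_sum(ii, 1, 1, 3, 3) == img[1:4, 1:4].sum()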

  14. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    Directory of Open Access Journals (Sweden)

    Shoaib Ehsan

    2015-07-01

    Full Text Available The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  15. Probing the structure of complex solids using a distributed computing approach-Applications in zeolite science

    International Nuclear Information System (INIS)

    French, Samuel A.; Coates, Rosie; Lewis, Dewi W.; Catlow, C. Richard A.

    2011-01-01

    We demonstrate the viability of distributed computing techniques employing idle desktop computers in investigating complex structural problems in solids. Through the use of a combined Monte Carlo and energy minimisation method, we show how a large parameter space can be effectively scanned. By controlling the generation and running of different configurations through a database engine, we are able not only to analyse the data 'on the fly' but also to direct the running of jobs and the algorithms for generating further structures. As an exemplar case, we probe the distribution of Al and extra-framework cations in the structure of the zeolite Mordenite. We compare our computed unit cells with experiment and find that, whilst there is excellent correlation between computed and experimentally derived unit cell volumes, cation positioning and short-range Al ordering (i.e. near-neighbour environment), there remains some discrepancy in the distribution of Al throughout the framework. We also show that stability-structure correlations only become apparent once a sufficiently large sample is used. Graphical Abstract: Aluminium distributions in zeolites are determined using e-science methods. Highlights: use of e-science methods to search configurational space; automated control of space searching; identification of key structural features conveying stability; improved correlation of computed structures with experimental data.

  16. DPSO resource load balancing in cloud computing

    Institute of Scientific and Technical Information of China (English)

    冯小靖; 潘郁

    2013-01-01

    Load balancing is one of the hot issues in cloud computing. A discrete particle swarm optimization (DPSO) algorithm is used to study load balancing in a cloud computing environment. Because resource demand changes dynamically and the demands placed on individual servers are low, each resource management node is treated as a node of the topological structure, and an appropriate resource-task model is established and solved with DPSO. Verification results show that the algorithm improves resource utilization and the load balancing of cloud computing resources.
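
    A minimal sketch of a discrete particle swarm optimizer assigning tasks to resource nodes so as to even out load. The encoding (one node index per task), the load-imbalance fitness, the simplified "velocity" rule and all parameter values are illustrative assumptions rather than the algorithm from the paper.

        import random

        TASKS = [4, 2, 7, 1, 3, 5, 2, 6]        # task costs (illustrative)
        NODES = 3                                # number of resource nodes
        SWARM, ITERS = 20, 200

        def imbalance(assign):
            """Fitness: spread between the heaviest and lightest node (lower is better)."""
            load = [0] * NODES
            for task, node in zip(TASKS, assign):
                load[node] += task
            return max(load) - min(load)

        def move(pos, pbest, gbest, w=0.3, c1=0.3, c2=0.3):
            """Discrete 'velocity' step: per task, copy the node from the global best
            (prob c2), the personal best (c1), keep the current node (w), or mutate."""
            new = []
            for cur, pb, gb in zip(pos, pbest, gbest):
                r = random.random()
                if r < c2:
                    new.append(gb)
                elif r < c1 + c2:
                    new.append(pb)
                elif r < w + c1 + c2:
                    new.append(cur)
                else:
                    new.append(random.randrange(NODES))
            return new

        swarm = [[random.randrange(NODES) for _ in TASKS] for _ in range(SWARM)]
        pbest = list(swarm)
        gbest = min(swarm, key=imbalance)

        for _ in range(ITERS):
            swarm = [move(p, pb, gbest) for p, pb in zip(swarm, pbest)]
            pbest = [min(pair, key=imbalance) for pair in zip(swarm, pbest)]
            gbest = min(pbest + [gbest], key=imbalance)

        print("best assignment:", gbest, "imbalance:", imbalance(gbest))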

  17. Cloud Computing Security Latest Issues amp Countermeasures

    Directory of Open Access Journals (Sweden)

    Shelveen Pandey

    2015-08-01

    Full Text Available Abstract Cloud computing describes effective computing services provided by a third-party organization, known as a cloud service provider, for organizations to perform different tasks over the internet for a fee. Cloud service providers' computing resources are dynamically reallocated per demand, and their infrastructure, platform, software and other resources are shared by multiple corporate and private clients. With the steady increase in the number of cloud computing subscribers of these shared resources over the years, security on the cloud is a growing concern. In this review paper the current cloud security issues and practices are described and a few innovative solutions are proposed that can help improve cloud computing security in the future.

  18. Cloud Computing: Architecture and Services

    OpenAIRE

    Ms. Ravneet Kaur

    2018-01-01

    Cloud computing is Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand, like the electricity grid. It is a method for delivering information technology (IT) services where resources are retrieved from the Internet through web-based tools and applications, as opposed to a direct connection to a server. Rather than keeping files on a proprietary hard drive or local storage device, cloud-based storage makes it possib...

  19. DIaaS: Resource Management System for the Intra-Cloud with On-Premise Desktops

    Directory of Open Access Journals (Sweden)

    Hyun-Woo Kim

    2017-01-01

    Full Text Available Infrastructure as a service with desktops (DIaaS) based on the extensible mark-up language (XML) is herein proposed to utilize surplus resources. DIaaS is a traditional surplus-resource integrated management technology. It is designed to provide fast work distribution and computing services based on user service requests as well as storage services through desktop-based distributed computing and storage resource integration. DIaaS includes a nondisruptive resource service and an auto-scalable scheme to enhance the availability and scalability of intra-cloud computing resources. A performance evaluation of the proposed scheme measured the clustering performance time for surplus resource utilization. The results showed improvement in computing and storage services in a connection of at least two computers compared to the traditional method for high-availability measurement of nondisruptive services. Furthermore, an artificial server error environment was used to create a clustering delay for computing and storage services and for nondisruptive services. It was compared to the Hadoop distributed file system (HDFS).

  20. CHEP2015: Dynamic Resource Allocation with arcControlTower

    CERN Document Server

    Filipcic, Andrej; The ATLAS collaboration; Nilsen, Jon Kerr

    2015-01-01

    Distributed computing resources available for high-energy physics research are becoming less dedicated to one type of workflow and researchers’ workloads are increasingly exploiting modern computing technologies such as parallelism. The current pilot job management model used by many experiments relies on static dedicated resources and cannot easily adapt to these changes. The model used for ATLAS in Nordic countries and some other places enables a flexible job management system based on dynamic resources allocation. Rather than a fixed set of resources managed centrally, the model allows resources to be requested on the fly. The ARC Computing Element (ARC-CE) and ARC Control Tower (aCT) are the key components of the model. The aCT requests jobs from the ATLAS job management system (PanDA) and submits a fully-formed job description to ARC-CEs. ARC-CE can then dynamically request the required resources from the underlying batch system. In this paper we describe the architecture of the model and the experience...

  1. ``Carbon Credits'' for Resource-Bounded Computations Using Amortised Analysis

    Science.gov (United States)

    Jost, Steffen; Loidl, Hans-Wolfgang; Hammond, Kevin; Scaife, Norman; Hofmann, Martin

    Bounding resource usage is important for a number of areas, notably real-time embedded systems and safety-critical systems. In this paper, we present a fully automatic static type-based analysis for inferring upper bounds on resource usage for programs involving general algebraic datatypes and full recursion. Our method can easily be used to bound any countable resource, without needing to revisit proofs. We apply the analysis to the important metrics of worst-case execution time, stack- and heap-space usage. Our results from several realistic embedded control applications demonstrate good matches between our inferred bounds and measured worst-case costs for heap and stack usage. For time usage we infer good bounds for one application. Where we obtain less tight bounds, this is due to the use of software floating-point libraries.

  2. A Distributed Snapshot Protocol for Efficient Artificial Intelligence Computation in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    JongBeom Lim

    2018-01-01

    Full Text Available Many artificial intelligence applications often require a huge amount of computing resources. As a result, cloud computing adoption rates are increasing in the artificial intelligence field. To support the demand for artificial intelligence applications and guarantee the service level agreement, cloud computing should provide not only computing resources but also fundamental mechanisms for efficient computing. In this regard, a snapshot protocol has been used to create a consistent snapshot of the global state in cloud computing environments. However, the existing snapshot protocols are not optimized in the context of artificial intelligence applications, where large-scale iterative computation is the norm. In this paper, we present a distributed snapshot protocol for efficient artificial intelligence computation in cloud computing environments. The proposed snapshot protocol is based on a distributed algorithm to run interconnected multiple nodes in a scalable fashion. Our snapshot protocol is able to deal with artificial intelligence applications, in which a large number of computing nodes are running. We reveal that our distributed snapshot protocol guarantees the correctness, safety, and liveness conditions.

  3. Radiotherapy infrastructure and human resources in Switzerland : Present status and projected computations for 2020.

    Science.gov (United States)

    Datta, Niloy Ranjan; Khan, Shaka; Marder, Dietmar; Zwahlen, Daniel; Bodis, Stephan

    2016-09-01

    The purpose of this study was to evaluate the present status of radiotherapy infrastructure and human resources in Switzerland and compute projections for 2020. The European Society of Therapeutic Radiation Oncology "Quantification of Radiation Therapy Infrastructure and Staffing" guidelines (ESTRO-QUARTS) and those of the International Atomic Energy Agency (IAEA) were applied to estimate the requirements for teleradiotherapy (TRT) units, radiation oncologists (RO), medical physicists (MP) and radiotherapy technologists (RTT). The databases used for computation of the present gap and additional requirements are (a) Global Cancer Incidence, Mortality and Prevalence (GLOBOCAN) for cancer incidence (b) the Directory of Radiotherapy Centres (DIRAC) of the IAEA for existing TRT units (c) human resources from the recent ESTRO "Health Economics in Radiation Oncology" (HERO) survey and (d) radiotherapy utilization (RTU) rates for each tumour site, published by the Ingham Institute for Applied Medical Research (IIAMR). In 2015, 30,999 of 45,903 cancer patients would have required radiotherapy. By 2020, this will have increased to 34,041 of 50,427 cancer patients. Switzerland presently has an adequate number of TRTs, but a deficit of 57 ROs, 14 MPs and 36 RTTs. By 2020, an additional 7 TRTs, 72 ROs, 22 MPs and 66 RTTs will be required. In addition, a realistic dynamic model for calculation of staff requirements due to anticipated changes in future radiotherapy practices has been proposed. This model could be tailor-made and individualized for any radiotherapy centre. A 9.8 % increase in radiotherapy requirements is expected for cancer patients over the next 5 years. The present study should assist the stakeholders and health planners in designing an appropriate strategy for meeting future radiotherapy needs for Switzerland.

  4. Computed Tomography (CT) -- Head

    Medline Plus

    Full Text Available Computed tomography (CT) of the head uses special x-ray ...

  5. Computer system architecture for laboratory automation

    International Nuclear Information System (INIS)

    Penney, B.K.

    1978-01-01

    This paper describes the various approaches that may be taken to provide computing resources for laboratory automation. Three distinct approaches are identified, the single dedicated small computer, shared use of a larger computer, and a distributed approach in which resources are provided by a number of computers, linked together, and working in some cooperative way. The significance of the microprocessor in laboratory automation is discussed, and it is shown that it is not simply a cheap replacement of the minicomputer. (Auth.)

  6. Usage of Cloud Computing Simulators and Future Systems For Computational Research

    OpenAIRE

    Lakshminarayanan, Ramkumar; Ramalingam, Rajasekar

    2016-01-01

    Cloud Computing is Internet-based computing, whereby shared resources, software and information are provided to computers and devices on demand, like the electricity grid. Currently, IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service) are used as business models for Cloud Computing. Nowadays, the adoption and deployment of Cloud Computing is increasing in various domains, forcing researchers to conduct research in the area of Cloud Computing ...

  7. Cybernetics in water resources management

    International Nuclear Information System (INIS)

    Alam, N.

    2005-01-01

    The term Water Resources is used to refer to the management and use of water primarily for the benefit of people. Hence, successful management of water resources requires a solid understanding of Hydrology. Cybernetics in Water Resources Management is an endeavor to analyze and enhance the beneficial exploitation of diverse scientific approaches and communication methods; to control the complexity of water management; and to highlight the importance of making the right decisions at the right time, avoiding the devastating effects of drought and floods. Recent developments in computer technology and advancements in mathematics have created a new field of system analysis, i.e. Mathematical Modeling. Based on mathematical models, several computer-based Water Resources System (WRS) models were developed across the world to solve water resources management problems, but these were not adaptable and were limited to computation by a well-defined algorithm, with information input at various stages, and the management tasks were also formalized in that well-structured algorithm. The recent advancements in information technology have revolutionized every field of the contemporary world, and thus the WRS also has to be diversified by broadening the knowledge base of the system. The updating of this knowledge should be a continuous process, acquired through the latest networking techniques from all concerned sources, together with the expertise of specialists and the analysis of practical experiences. The system should then be made capable of making inferences and should tend to apply rules based on the latest information and inferences at a given stage of problem solving. Rigid programs cannot adapt to changing conditions and new knowledge. Thus, there is a need for evolutionary development based on the mutual independence of computational procedure and knowledge, with the capability to adapt to the increasing complexity of the problem. The subject ...

  8. Grid Computing in High Energy Physics

    International Nuclear Information System (INIS)

    Avery, Paul

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them.Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software resources, regardless of location); (4) collaboration (providing tools that allow members full and fair access to all collaboration resources and enable distributed teams to work effectively, irrespective of location); and (5) education, training and outreach (providing resources and mechanisms for training students and for communicating important information to the public).It is believed that computing infrastructures based on Data Grids and optical networks can meet these challenges and can offer data intensive enterprises in high energy physics and elsewhere a comprehensive, scalable framework for collaboration and resource sharing. A number of Data Grid projects have been underway since 1999. Interestingly, the most exciting and far ranging of these projects are led by collaborations of high energy physicists, computer scientists and scientists from other disciplines in support of experiments with massive, near-term data needs. I review progress in this

  9. Lightweight on-demand computing with Elasticluster and Nordugrid ARC

    CERN Document Server

    Pedersen, Maiken; The ATLAS collaboration; Filipcic, Andrej

    2018-01-01

    The cloud computing paradigm allows scientists to elastically grow or shrink computing resources as requirements demand, so that resources only need to be paid for when necessary. The challenge of integrating cloud computing into distributed computing frameworks used by HEP experiments has led to many different solutions in the past years, however none of these solutions offer a complete, fully integrated cloud resource out of the box. This paper describes how to offer such a resource using stripped-down minimal versions of existing distributed computing software components combined with off-the-shelf cloud tools. The basis of the cloud resource is Elasticluster, and the glue to join to the HEP computing infrastructure is provided by the NorduGrid ARC middleware and the ARC Control Tower. These latter two components are stripped down to bare minimum edge services, removing the need for administering complex grid middleware, yet still provide the complete job and data management required to fully exploit the c...

  10. Eucalyptus: an open-source cloud computing infrastructure

    International Nuclear Information System (INIS)

    Nurmi, Daniel; Wolski, Rich; Grzegorczyk, Chris; Obertelli, Graziano; Soman, Sunil; Youseff, Lamia; Zagorodnov, Dmitrii

    2009-01-01

    Utility computing, elastic computing, and cloud computing are all terms that refer to the concept of dynamically provisioning processing time and storage space from a ubiquitous 'cloud' of computational resources. Such systems allow users to acquire and release the resources on demand and provide ready access to data from processing elements, while relegating the physical location and exact parameters of the resources. Over the past few years, such systems have become increasingly popular, but nearly all current cloud computing offerings are either proprietary or depend upon software infrastructure that is invisible to the research community. In this work, we present Eucalyptus, an open-source software implementation of cloud computing that utilizes compute resources that are typically available to researchers, such as clusters and workstation farms. In order to foster community research exploration of cloud computing systems, the design of Eucalyptus emphasizes modularity, allowing researchers to experiment with their own security, scalability, scheduling, and interface implementations. In this paper, we outline the design of Eucalyptus, describe our own implementations of the modular system components, and provide results from experiments that measure performance and scalability of a Eucalyptus installation currently deployed for public use. The main contribution of our work is the presentation of the first research-oriented open-source cloud computing system focused on enabling methodical investigations into the programming, administration, and deployment of systems exploring this novel distributed computing model.

  11. Cloud Computing Security Latest Issues amp Countermeasures

    OpenAIRE

    Shelveen Pandey; Mohammed Farik

    2015-01-01

    Abstract Cloud computing describes effective computing services provided by a third-party organization, known as a cloud service provider, for organizations to perform different tasks over the internet for a fee. Cloud service providers' computing resources are dynamically reallocated per demand, and their infrastructure, platform, software and other resources are shared by multiple corporate and private clients. With the steady increase in the number of cloud computing subscribers of these shar...

  12. CMS computing support at JINR

    International Nuclear Information System (INIS)

    Golutvin, I.; Koren'kov, V.; Lavrent'ev, A.; Pose, R.; Tikhonenko, E.

    1998-01-01

    Participation of JINR specialists in the CMS experiment at LHC requires a wide use of computer resources. In the context of JINR activities in the CMS Project hardware and software resources have been provided for full participation of JINR specialists in the CMS experiment; the JINR computer infrastructure was made closer to the CERN one. JINR also provides the informational support for the CMS experiment (web-server http://sunct2.jinr.dubna.su). Plans for further CMS computing support at JINR are stated

  13. Parallel quantum computing in a single ensemble quantum computer

    International Nuclear Information System (INIS)

    Long Guilu; Xiao, L.

    2004-01-01

    We propose a parallel quantum computing mode for an ensemble quantum computer. In this mode, some qubits are in pure states while other qubits are in mixed states. It enables a single ensemble quantum computer to perform 'single-instruction, multiple-data' type parallel computation. Parallel quantum computing can provide additional speedup in Grover's algorithm and Shor's algorithm. In addition, it also makes fuller use of qubit resources in an ensemble quantum computer. As a result, some qubits discarded in the preparation of an effective pure state in the Schulman-Vazirani and the Cleve-DiVincenzo algorithms can be reutilized.

  14. A Computer Security Course in the Undergraduate Computer Science Curriculum.

    Science.gov (United States)

    Spillman, Richard

    1992-01-01

    Discusses the importance of computer security and considers criminal, national security, and personal privacy threats posed by security breakdown. Several examples are given, including incidents involving computer viruses. Objectives, content, instructional strategies, resources, and a sample examination for an experimental undergraduate computer…

  15. Comparison of Resource Platform Selection Approaches for Scientific Workflows

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Ramakrishnan, Lavanya

    2010-03-05

    Cloud computing is increasingly considered as an additional computational resource platform for scientific workflows. The cloud offers the opportunity to scale out applications from desktops and local cluster resources. At the same time, it can eliminate the challenges of restricted software environments and queue delays in shared high performance computing environments. Choosing from these diverse resource platforms for a workflow execution poses a challenge for many scientists. Scientists are often faced with deciding resource platform selection trade-offs with limited information on the actual workflows. While many workflow planning methods have explored task scheduling onto different resources, these methods often require fine-scale characterization of the workflow that is onerous for a scientist. In this position paper, we describe our early exploratory work into using blackbox characteristics to do a cost-benefit analysis of using cloud platforms. We use only very limited high-level information on the workflow length, width, and data sizes. The length and width are indicative of the workflow duration and parallelism. The data size characterizes the IO requirements. We compare the effectiveness of this approach to other resource selection models using two exemplar scientific workflows scheduled on desktops, local clusters, HPC centers, and clouds. Early results suggest that the blackbox model often makes the same resource selections as a more fine-grained whitebox model. We believe the simplicity of the blackbox model can help inform a scientist on the applicability of cloud computing resources even before porting an existing workflow.
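
    A minimal sketch of the kind of blackbox trade-off the position paper describes, estimating workflow makespan and cost on a local cluster versus a cloud from only the workflow length, width, and data size. Every rate and price below is an invented placeholder, not a figure from the paper.

        def blackbox_estimate(length_hours, width_tasks, data_gb,
                              cluster_cores=64, cluster_queue_wait_h=6.0,
                              cloud_core_usd_h=0.10, cloud_gb_transfer_usd=0.09):
            """Coarse estimate from three blackbox workflow characteristics:
            length (critical-path hours), width (max parallel tasks), data size."""
            # Local cluster: no direct cost, but capped at cluster_cores and delayed by the queue.
            cluster_makespan = cluster_queue_wait_h + length_hours * max(1.0, width_tasks / cluster_cores)
            # Cloud: assume one rented core per task, paid per core-hour plus data transfer.
            cloud_makespan = length_hours
            cloud_cost = width_tasks * length_hours * cloud_core_usd_h + data_gb * cloud_gb_transfer_usd
            return {"cluster_hours": round(cluster_makespan, 1),
                    "cloud_hours": round(cloud_makespan, 1),
                    "cloud_cost_usd": round(cloud_cost, 2)}

        # Example: a 3-hour-long, 256-wide workflow moving 50 GB of data.
        print(blackbox_estimate(length_hours=3, width_tasks=256, data_gb=50))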

  16. Computer Simulation and Digital Resources for Plastic Surgery Psychomotor Education.

    Science.gov (United States)

    Diaz-Siso, J Rodrigo; Plana, Natalie M; Stranix, John T; Cutting, Court B; McCarthy, Joseph G; Flores, Roberto L

    2016-10-01

    Contemporary plastic surgery residents are increasingly challenged to learn a greater number of complex surgical techniques within a limited period. Surgical simulation and digital education resources have the potential to address some limitations of the traditional training model, and have been shown to accelerate knowledge and skills acquisition. Although animal, cadaver, and bench models are widely used for skills and procedure-specific training, digital simulation has not been fully embraced within plastic surgery. Digital educational resources may play a future role in a multistage strategy for skills and procedures training. The authors present two virtual surgical simulators addressing procedural cognition for cleft repair and craniofacial surgery. Furthermore, the authors describe how partnerships among surgical educators, industry, and philanthropy can be a successful strategy for the development and maintenance of digital simulators and educational resources relevant to plastic surgery training. It is our responsibility as surgical educators not only to create these resources, but to demonstrate their utility for enhanced trainee knowledge and technical skills development. Currently available digital resources should be evaluated in partnership with plastic surgery educational societies to guide trainees and practitioners toward effective digital content.

  17. Security Implications of Typical Grid Computing Usage Scenarios

    International Nuclear Information System (INIS)

    Humphrey, Marty; Thompson, Mary R.

    2001-01-01

    A Computational Grid is a collection of heterogeneous computers and resources spread across multiple administrative domains with the intent of providing users uniform access to these resources. There are many ways to access the resources of a Computational Grid, each with unique security requirements and implications for both the resource user and the resource provider. A comprehensive set of Grid usage scenarios is presented and analyzed with regard to security requirements such as authentication, authorization, integrity, and confidentiality. The main value of these scenarios and the associated security discussions is to provide a library of situations against which an application designer can match, thereby facilitating security-aware application use and development from the initial stages of the application design and invocation. A broader goal of these scenarios is to increase the awareness of security issues in Grid Computing.

  18. Security Implications of Typical Grid Computing Usage Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Humphrey, Marty; Thompson, Mary R.

    2001-06-05

    A Computational Grid is a collection of heterogeneous computers and resources spread across multiple administrative domains with the intent of providing users uniform access to these resources. There are many ways to access the resources of a Computational Grid, each with unique security requirements and implications for both the resource user and the resource provider. A comprehensive set of Grid usage scenarios is presented and analyzed with regard to security requirements such as authentication, authorization, integrity, and confidentiality. The main value of these scenarios and the associated security discussions is to provide a library of situations against which an application designer can match, thereby facilitating security-aware application use and development from the initial stages of the application design and invocation. A broader goal of these scenarios is to increase the awareness of security issues in Grid Computing.

  19. Tracking the Flow of Resources in Electronic Waste - The Case of End-of-Life Computer Hard Disk Drives.

    Science.gov (United States)

    Habib, Komal; Parajuly, Keshav; Wenzel, Henrik

    2015-10-20

    Recovery of resources, in particular, metals, from waste flows is widely seen as a prioritized option to reduce their potential supply constraints in the future. The current waste electrical and electronic equipment (WEEE) treatment system is more focused on bulk metals, where the recycling rate of specialty metals, such as rare earths, is negligible compared to their increasing use in modern products, such as electronics. This study investigates the challenges in recovering these resources in the existing WEEE treatment system. It is illustrated by following the material flows of resources in a conventional WEEE treatment plant in Denmark. Computer hard disk drives (HDDs) containing neodymium-iron-boron (NdFeB) magnets were selected as the case product for this experiment. The resulting output fractions were tracked until their final treatment in order to estimate the recovery potential of rare earth elements (REEs) and other resources contained in HDDs. The results show that out of the 244 kg of HDDs treated, 212 kg, consisting mainly of aluminum and steel, can ultimately be recovered from the metallurgic process. The results further demonstrate the complete loss of REEs in the existing shredding-based WEEE treatment processes. Dismantling and separate processing of NdFeB magnets from their end-use products can be a preferable option to shredding. However, it remains a technological and logistic challenge for the existing system.

  20. System for simulating fluctuation diagnostics for application to turbulence computations

    International Nuclear Information System (INIS)

    Bravenec, R.V.; Nevins, W.M.

    2006-01-01

    Present-day nonlinear microstability codes are able to compute the saturated fluctuations of a turbulent fluid versus space and time, whether the fluid be liquid, gas, or plasma. They are therefore able to determine turbulence-induced fluid (or particle) and energy fluxes. These codes, however, must be tested against experimental data not only with respect to transport but also with respect to the characteristics of the fluctuations. The latter is challenging because of limitations in the diagnostics (e.g., finite spatial resolution) and the fact that the diagnostics typically do not measure exactly the quantities that the codes compute. In this work, we present a system based on the IDL® analysis and visualization software in which user-supplied 'diagnostic filters' are applied to the code outputs to generate simulated diagnostic signals. The same analysis techniques as applied to the measurements, e.g., digital time-series analysis, may then be applied to the synthesized signals. Their statistical properties, such as rms fluctuation level, mean wave numbers, phase and group velocities, correlation lengths and times, and in some cases full S(k,ω) spectra, can then be compared directly to those of the measurements.
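
    A minimal sketch of the 'diagnostic filter' idea, assuming the code output is a two-dimensional fluctuation field sampled in space and time: the simulated signal is smeared with a Gaussian spatial response (finite resolution) and then analysed like a measurement. The field, resolution width, and sampling rate below are made up for illustration.

```python
# Illustrative synthetic-diagnostic filter: smear simulated fluctuation data
# with a finite spatial resolution, then analyse the filtered signal the same
# way a measured signal would be analysed. All numbers here are made up.
import numpy as np

rng = np.random.default_rng(0)
nx, nt = 256, 4096
x = np.linspace(0.0, 1.0, nx)           # normalised radius
t = np.arange(nt) * 1e-6                # 1 MHz sampling
field = rng.standard_normal((nt, nx))   # stand-in for code output delta-n(x, t)

def diagnostic_filter(field, x, sigma=0.02):
    """Apply a Gaussian spatial response of width sigma (finite resolution)."""
    dx = x[1] - x[0]
    half = int(4 * sigma / dx)
    kern = np.exp(-0.5 * ((np.arange(-half, half + 1) * dx) / sigma) ** 2)
    kern /= kern.sum()
    return np.apply_along_axis(lambda row: np.convolve(row, kern, mode="same"),
                               1, field)

synthetic = diagnostic_filter(field, x)

# Same statistics one would compute from the measurement:
channel = synthetic[:, nx // 2]
freqs = np.fft.rfftfreq(nt, d=t[1] - t[0])
spectrum = np.abs(np.fft.rfft(channel)) ** 2
print(f"rms fluctuation level at mid-radius: {channel.std():.3f}")
print(f"spectral peak near {freqs[spectrum.argmax()] / 1e3:.1f} kHz")
```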

  1. Cloud Computing and Information Technology Resource Cost Management for SMEs

    DEFF Research Database (Denmark)

    Kuada, Eric; Adanu, Kwame; Olesen, Henning

    2013-01-01

    This paper analyzes the decision-making problem confronting SMEs considering the adoption of cloud computing as an alternative to in-house computing services provision. The economics of choosing between in-house computing and a cloud alternative is analyzed by comparing the total economic costs...... in determining the relative value of cloud computing....

  2. Microwave and Electron Beam Computer Programs

    Science.gov (United States)

    1988-06-01

    Research (ONR). SCRIBE was adapted by MRC from the Stanford Linear Accelerator Center Beam Trajectory Program, EGUN. ... achieved with SCRIBE. It is a version of the Stanford Linear Accelerator Center (SLAC) code EGUN (Ref. 8), extensively modified by MRC for research on

  3. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    Science.gov (United States)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is at a disadvantage compared to traditional hosting when predictable application and service performance is required. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two-part model framework characterizes both the demand, using a probability distribution for each type of service request, and the enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
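
    A toy sketch of the kind of discrete event simulation described above: service requests arrive according to a probability distribution and contend for a fixed pool of compute slots. The distributions, slot count, and reported metric are illustrative assumptions, not the authors' two-part model.

```python
# Minimal discrete-event simulation of service requests contending for a fixed
# pool of compute resources; parameters and distributions are illustrative only.
import heapq, random

random.seed(1)
SLOTS = 8                      # enterprise resource constraint
SIM_END = 1000.0               # minutes of simulated arrivals

events = []                    # heap of (time, kind, service_minutes)
t = 0.0
while t < SIM_END:             # exponential inter-arrival times
    t += random.expovariate(1 / 2.0)
    heapq.heappush(events, (t, "arrive", random.expovariate(1 / 12.0)))

busy, waiting, done, wait_total = 0, [], 0, 0.0
while events:
    now, kind, dur = heapq.heappop(events)
    if kind == "arrive":
        waiting.append((now, dur))
    else:                      # a "finish" event frees one slot
        busy -= 1
        done += 1
    while busy < SLOTS and waiting:
        arrived, d = waiting.pop(0)
        wait_total += now - arrived
        busy += 1
        heapq.heappush(events, (now + d, "finish", 0.0))

print(f"completed {done} requests, mean queue wait {wait_total / max(done, 1):.1f} min")
```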

  4. Computed Tomography (CT) -- Sinuses

    Medline Plus

    Full Text Available Computed tomography (CT) of the sinuses uses special x-ray equipment ...

  5. Computed Tomography (CT) -- Head

    Medline Plus

    Full Text Available Computed tomography (CT) of the head uses special x-ray equipment ...

  6. SU-E-T-573: Normal Tissue Dose Effect of Prescription Isodose Level Selection in Lung Stereotactic Body Radiation Therapy

    International Nuclear Information System (INIS)

    Zhang, Q; Lei, Y; Zheng, D; Zhu, X; Wahl, A; Lin, C; Zhou, S; Zhen, W

    2015-01-01

    Purpose: To evaluate dose fall-off in normal tissue for lung stereotactic body radiation therapy (SBRT) cases planned with different prescription isodose levels (IDLs), by calculating the dose dropping speed (DDS) in normal tissue on plans computed with both Pencil Beam (PB) and Monte-Carlo (MC) algorithms. Methods: The DDS was calculated on 32 plans for 8 lung SBRT patients. For each patient, 4 dynamic conformal arc plans were individually optimized for prescription isodose levels (IDLs) ranging from 60% to 90% of the maximum dose with 10% increments to conformally cover the PTV. Eighty non-overlapping rind structures, each of 1 mm thickness, were created layer by layer from each PTV surface. The average dose in each rind was calculated and fitted with a double exponential function (DEF) of the distance from the PTV surface, which models the steep- and moderate-slope portions of the average dose curve in normal tissue. The parameter characterizing the steep portion of the average dose curve in the DEF quantifies the DDS in the immediate normal tissue receiving high dose. Provided that the prescription dose covers the whole PTV, a greater DDS indicates better normal tissue sparing. The DDS was compared among plans with different prescription IDLs, for plans computed with both PB and MC algorithms. Results: For all patients, the DDS was found to be the lowest for the 90% prescription IDL and reached its highest plateau for 60% or 70% prescriptions. The trend was the same for both PB and MC plans. Conclusion: Among the range of prescription IDLs accepted by lung SBRT RTOG protocols, prescriptions to 60% and 70% IDLs were found to provide the best normal tissue sparing.
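
    The abstract does not give the exact double exponential form; a plausible parameterization consistent with the description (a steep plus a moderate slope, fitted to the mean rind dose as a function of the distance x from the PTV surface) is

    $$\bar{D}(x) = A\,e^{-k_1 x} + B\,e^{-k_2 x}, \qquad k_1 > k_2 \ge 0,$$

    where the faster decay constant $k_1$ characterizes the steep high-dose portion and would play the role of the dose dropping speed (DDS): the larger $k_1$, the faster the dose falls off in the normal tissue immediately outside the PTV.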

  7. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    Science.gov (United States)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them towards running large scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to those of native applications, and to utilize the power of Graphics Processing Units (GPU). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models, and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computing resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds without installing any software. The framework splits the model simulation into small spatial and computational units and distributes them to thousands of nodes. A relational database system is used for managing data connections and job queues for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large scale hydrological simulations and model runs in an open and integrated environment.
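
    The record does not include implementation details; the sketch below illustrates one plausible server-side design for the work-unit queue, using a relational database to hand out small simulation chunks to volunteers and to re-issue stale assignments. The schema, timeout, and payload names are hypothetical, not the authors' implementation.

```python
# Illustrative server-side queue for a volunteer-computing setup: work units are
# handed out to browser clients and results collected. Schema and logic are a
# simplified assumption, not the authors' implementation.
import sqlite3, time

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE work_units (
    id INTEGER PRIMARY KEY, payload TEXT, state TEXT DEFAULT 'pending',
    assigned_at REAL, result TEXT)""")
db.executemany("INSERT INTO work_units (payload) VALUES (?)",
               [(f"subbasin-{i}",) for i in range(100)])   # small spatial chunks

def checkout_unit():
    """Hand the oldest pending chunk to a volunteer; re-issue stale ones."""
    db.execute("UPDATE work_units SET state='pending' "
               "WHERE state='assigned' AND assigned_at < ?", (time.time() - 600,))
    row = db.execute("SELECT id, payload FROM work_units "
                     "WHERE state='pending' ORDER BY id LIMIT 1").fetchone()
    if row:
        db.execute("UPDATE work_units SET state='assigned', assigned_at=? WHERE id=?",
                   (time.time(), row[0]))
    return row

def submit_result(unit_id, result):
    db.execute("UPDATE work_units SET state='done', result=? WHERE id=?",
               (result, unit_id))

unit = checkout_unit()
submit_result(unit[0], "runoff=42.0")
print(db.execute("SELECT COUNT(*) FROM work_units WHERE state='done'").fetchone())
```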

  8. Exploiting opportunistic resources for ATLAS with ARC CE and the Event Service

    Science.gov (United States)

    Cameron, D.; Filipčič, A.; Guan, W.; Tsulaia, V.; Walker, R.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from those that comprise the Grid computing of most experiments, therefore exploiting them requires a change in strategy for the experiment. They may be highly restrictive in what can be run or in connections to the outside world, or tolerate opportunistic usage only on condition that tasks may be terminated without warning. The Advanced Resource Connector Computing Element (ARC CE) with its nonintrusive architecture is designed to integrate resources such as High Performance Computing (HPC) systems into a computing Grid. The ATLAS experiment developed the ATLAS Event Service (AES) primarily to address the issue of jobs that can be terminated at any point when opportunistic computing capacity is needed by someone else. This paper describes the integration of these two systems in order to exploit opportunistic resources for ATLAS in a restrictive environment. In addition to the technical details, results from deployment of this solution in the SuperMUC HPC centre in Munich are shown.

  9. CMS computing upgrade and evolution

    CERN Document Server

    Hernandez Calama, Jose

    2013-01-01

    The distributed Grid computing infrastructure has been instrumental in the successful exploitation of the LHC data leading to the discovery of the Higgs boson. The computing system will need to face new challenges from 2015 on when LHC restarts with an anticipated higher detector output rate and event complexity, but with only a limited increase in the computing resources. A more efficient use of the available resources will be mandatory. CMS is improving the data storage, distribution and access as well as the processing efficiency. Remote access to the data through the WAN, dynamic data replication and deletion based on the data access patterns, and separation of disk and tape storage are some of the areas being actively developed. Multi-core processing and scheduling is being pursued in order to make a better use of the multi-core nodes available at the sites. In addition, CMS is exploring new computing techniques, such as Cloud Computing, to get access to opportunistic resources or as a means of using wit...

  10. Computer group

    International Nuclear Information System (INIS)

    Black, I.; Heusler, A.; Hoeptner, G.; Krafft, F.; Lang, R.; Moellenkamp, R.; Mueller, W.; Mueller, W.F.; Schmidt, A.; Schwind, D.; Weber, G.

    1989-01-01

    The VAX-8650 has been running with no idle time for more than 98% of the time. Early in 1988 it became the boot member of a local area VAX cluster. Up to five satellites (MicroVAX II and VAXstation2000) joined the cluster, building a pool of 22 disk drives. Experiences with the cluster system have shown a way to expand the capacity: Early in 1989, a second boot member (VAX3000) and several VAXstations (VAXstation2000 and VAXstation3200) will be added with additional disk space. (orig.)

  11. An Overview of Cloud Computing in Distributed Systems

    Science.gov (United States)

    Divakarla, Usha; Kumari, Geetha

    2010-11-01

    Cloud computing is the emerging trend in the field of distributed computing. Cloud computing evolved from grid computing and distributed computing. The cloud plays an important role in large organizations by maintaining huge volumes of data with limited resources. The cloud also helps in resource sharing through specific virtual machines provided by the cloud service provider. This paper gives an overview of the cloud organization and some of the basic security issues pertaining to the cloud.

  12. Trying to Predict the Future - Resource Planning and Allocation in CMS

    CERN Document Server

    Kreuzer, Peter; Fisk, Ian; Merino, Gonzalo

    2012-01-01

    In the large LHC experiments the majority of computing resources are provided by the participating countries. These resource pledges account for more than three quarters of the total available computing. The experiments are asked to give indications of their requests three years in advance and to evolve these as the details and constraints become clearer. In this presentation we will discuss the resource planning techniques used in CMS to predict the computing resources several years in advance. We will discuss how we attempt to implement the activities of the computing model in spreadsheets and formulas to calculate the needs. We will talk about how those needs are reflected in the 2012 running and how the planned long shutdown of the LHC in 2013 and 2014 impacts the planning process and the outcome. In the end we will speculate on the computing needs in the second major run of LHC.

  13. Study on the application of mobile internet cloud computing platform

    Science.gov (United States)

    Gong, Songchun; Fu, Songyin; Chen, Zheng

    2012-04-01

    The innovative development of computer technology promotes the application of the cloud computing platform, which is essentially a resource-service model that meets users' needs for different resources after adjustments in multiple aspects. "Cloud computing" offers advantages in many respects: it not only reduces the difficulty of operating the system but also makes it easy for users to search, acquire and process resources. Accordingly, the author takes the management of digital libraries as the research focus of this paper, and analyzes the key technologies of the mobile internet cloud computing platform in operation. The popularization and promotion of computer technology have driven people to create digital library models, whose core idea is to strengthen the optimal management of library resource information through computers and to construct a high-performance inquiry and search platform, allowing users to access the necessary information resources at any time. Cloud computing, for its part, distributes computations over a large number of distributed computers and thereby connects the services of multiple machines. Digital libraries, as a typical representative of the applications of cloud computing, can be used to carry out an analysis of the key technologies of cloud computing.

  14. Petascale Computational Systems

    OpenAIRE

    Bell, Gordon; Gray, Jim; Szalay, Alex

    2007-01-01

    Computational science is changing to be data intensive. Super-Computers must be balanced systems; not just CPU farms but also petascale IO and networking arrays. Anyone building CyberInfrastructure should allocate resources to support a balanced Tier-1 through Tier-3 design.

  15. Eucalyptus: an open-source cloud computing infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Nurmi, Daniel; Wolski, Rich; Grzegorczyk, Chris; Obertelli, Graziano; Soman, Sunil; Youseff, Lamia; Zagorodnov, Dmitrii, E-mail: rich@cs.ucsb.ed [Computer Science Department, University of California, Santa Barbara, CA 93106 (United States) and Eucalyptus Systems Inc., 130 Castilian Dr., Goleta, CA 93117 (United States)

    2009-07-01

    Utility computing, elastic computing, and cloud computing are all terms that refer to the concept of dynamically provisioning processing time and storage space from a ubiquitous 'cloud' of computational resources. Such systems allow users to acquire and release the resources on demand and provide ready access to data from processing elements, while relegating the physical location and exact parameters of the resources. Over the past few years, such systems have become increasingly popular, but nearly all current cloud computing offerings are either proprietary or depend upon software infrastructure that is invisible to the research community. In this work, we present Eucalyptus, an open-source software implementation of cloud computing that utilizes compute resources that are typically available to researchers, such as clusters and workstation farms. In order to foster community research exploration of cloud computing systems, the design of Eucalyptus emphasizes modularity, allowing researchers to experiment with their own security, scalability, scheduling, and interface implementations. In this paper, we outline the design of Eucalyptus, describe our own implementations of the modular system components, and provide results from experiments that measure performance and scalability of a Eucalyptus installation currently deployed for public use. The main contribution of our work is the presentation of the first research-oriented open-source cloud computing system focused on enabling methodical investigations into the programming, administration, and deployment of systems exploring this novel distributed computing model.

  16. Quantum computing with incoherent resources and quantum jumps.

    Science.gov (United States)

    Santos, M F; Cunha, M Terra; Chaves, R; Carvalho, A R R

    2012-04-27

    Spontaneous emission and the inelastic scattering of photons are two natural processes usually associated with decoherence and the reduction in the capacity to process quantum information. Here we show that, when suitably detected, these photons are sufficient to build all the fundamental blocks needed to perform quantum computation in the emitting qubits while protecting them from deleterious dissipative effects. We exemplify this by showing how to efficiently prepare graph states for the implementation of measurement-based quantum computation.

  17. Research on Key Technologies of Cloud Computing

    Science.gov (United States)

    Zhang, Shufen; Yan, Hongcan; Chen, Xuebin

    With the development of multi-core processors, virtualization, distributed storage, broadband Internet and automatic management, a new computing mode named cloud computing has emerged. It distributes computation tasks over a resource pool consisting of massive numbers of computers, so that application systems can obtain computing power, storage space and software services according to their demand. It can concentrate all the computing resources and manage them automatically through software, without human intervention. This frees application providers from tedious details and lets them concentrate on their business, which favours innovation and reduces cost. The ultimate goal of cloud computing is to provide computation, services and applications as a public utility, so that people can use computing resources just as they use water, electricity, gas and the telephone. Currently, the understanding of cloud computing is still developing and changing, and cloud computing has no unanimous definition. This paper describes the three main service forms of cloud computing (SaaS, PaaS, IaaS), compares the definitions of cloud computing given by Google, Amazon, IBM and other companies, summarizes the basic characteristics of cloud computing, and emphasizes key technologies such as data storage, data management, virtualization and the programming model.

  18. Self-guaranteed measurement-based quantum computation

    Science.gov (United States)

    Hayashi, Masahito; Hajdušek, Michal

    2018-05-01

    In order to guarantee the output of a quantum computation, we usually assume that the component devices are trusted. However, when the total computation process is large, it is not easy to guarantee the whole system when we have scaling effects, unexpected noise, or unaccounted for correlations between several subsystems. If we do not trust the measurement basis or the prepared entangled state, we do need to be worried about such uncertainties. To this end, we propose a self-guaranteed protocol for verification of quantum computation under the scheme of measurement-based quantum computation where no prior-trusted devices (measurement basis or entangled state) are needed. The approach we present enables the implementation of verifiable quantum computation using the measurement-based model in the context of a particular instance of delegated quantum computation where the server prepares the initial computational resource and sends it to the client, who drives the computation by single-qubit measurements. Applying self-testing procedures, we are able to verify the initial resource as well as the operation of the quantum devices and hence the computation itself. The overhead of our protocol scales with the size of the initial resource state to the power of 4 times the natural logarithm of the initial state's size.

  19. Environmental Impact Assessment, Brownfield Areas. Brownfields are defined by the Florida DEP as abandoned, idled, or underused industrial and commercial facilities where expansion or redevelopment is complicated by real or perceived environmental contamination., Published in 2001, 1:24000 (1in=2000ft) scale, Florida Department of Environmental Protection (FDEP).

    Data.gov (United States)

    NSGIC State | GIS Inventory — Environmental Impact Assessment dataset current as of 2001. Brownfield Areas. Brownfields are defined by the Florida DEP as abandoned, idled, or underused industrial...

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  1. Exploiting Virtualization and Cloud Computing in ATLAS

    International Nuclear Information System (INIS)

    Harald Barreiro Megino, Fernando; Van der Ster, Daniel; Benjamin, Doug; De, Kaushik; Gable, Ian; Paterson, Michael; Taylor, Ryan; Hendrix, Val; Vitillo, Roberto A; Panitkin, Sergey; De Silva, Asoka; Walker, Rod

    2012-01-01

    The ATLAS Computing Model was designed around the concept of grid computing; since the start of data-taking, this model has proven very successful in the federated operation of more than one hundred Worldwide LHC Computing Grid (WLCG) sites for offline data distribution, storage, processing and analysis. However, new paradigms in computing, namely virtualization and cloud computing, present improved strategies for managing and provisioning IT resources that could allow ATLAS to more flexibly adapt and scale its storage and processing workloads on varied underlying resources. In particular, ATLAS is developing a “grid-of-clouds” infrastructure in order to utilize WLCG sites that make resources available via a cloud API. This work will present the current status of the Virtualization and Cloud Computing R and D project in ATLAS Distributed Computing. First, strategies for deploying PanDA queues on cloud sites will be discussed, including the introduction of a “cloud factory” for managing cloud VM instances. Next, performance results when running on virtualized/cloud resources at CERN LxCloud, StratusLab, and elsewhere will be presented. Finally, we will present the ATLAS strategies for exploiting cloud-based storage, including remote XROOTD access to input data, management of EC2-based files, and the deployment of cloud-resident LCG storage elements.

  2. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Full Text Available Pediatric computed tomography (CT) ... are the limitations of Children's CT? What is Children's CT? Computed tomography, more commonly known as a ...

  3. LHCb computing model

    CERN Document Server

    Frank, M; Pacheco, Andreu

    1998-01-01

    This document is a first attempt to describe the LHCb computing model. The CPU power needed to process data for the event filter and reconstruction is estimated to be 2.2 × 10^6 MIPS. This will be installed at the experiment and will be reused during non data-taking periods for reprocessing. The maximal I/O of these activities is estimated to be around 40 MB/s. We have studied three basic models concerning the placement of the CPU resources for the other computing activities, Monte Carlo simulation (1.4 × 10^6 MIPS) and physics analysis (0.5 × 10^6 MIPS): CPU resources may either be located at the physicist's home lab, national computer centres (Regional Centres) or at CERN. The CPU resources foreseen for analysis are sufficient to allow 100 concurrent analyses. It is assumed that physicists will work in physics groups that produce analysis data at an average rate of 4.2 MB/s or 11 TB per month. However, producing these group analysis data requires reading capabilities of 660 MB/s. It is further assu...
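
    The two figures quoted for group analysis output are mutually consistent: taking roughly 2.6 × 10^6 seconds per month,

    $$4.2\ \mathrm{MB/s} \times 2.6\times10^{6}\ \mathrm{s} \approx 1.1\times10^{7}\ \mathrm{MB} \approx 11\ \mathrm{TB\ per\ month}.$$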

  4. Computer Labs | College of Engineering & Applied Science

    Science.gov (United States)


  5. Computer Science | Classification | College of Engineering & Applied

    Science.gov (United States)


  6. PROOF on the Cloud for ALICE using PoD and OpenNebula

    International Nuclear Information System (INIS)

    Berzano, D; Bagnasco, S; Brunetti, R; Lusso, S

    2012-01-01

    In order to optimize the use and management of computing centres, their conversion to cloud facilities is becoming increasingly popular. In a medium to large cloud facility, many different virtual clusters may compete for the same resources: unused resources can be freed either by turning off idle virtual machines, or by lowering the resources assigned to a virtual machine at runtime. PROOF, a ROOT-based parallel and interactive analysis framework, is officially endorsed in the computing model of the ALICE experiment as complementary to the Grid, and it has become very popular over the last three years. The locality of PROOF-based analysis facilities forces system administrators to scavenge resources, yet the chaotic nature of user analysis tasks renders them unstable and inconsistently used, making PROOF a typical use-case for HPC cloud computing. Currently, PoD dynamically and easily provides a PROOF-enabled cluster by submitting agents to a job scheduler. Unfortunately, a Tier-2 does not comfortably share the same queue between interactive and batch jobs, due to the very large average time to completion of the latter: an elastic cloud approach would enable interactive virtual machines to temporarily take resources away from the batch ones, without a noticeable impact on them. In this work we describe our setup of a dynamic PROOF-based cloud analysis facility based on PoD and OpenNebula, orchestrated by a simple and lightweight control daemon that makes virtualization transparent for the user.

  7. Semantic Web integration of Cheminformatics resources with the SADI framework

    Directory of Open Access Journals (Sweden)

    Chepelev Leonid L

    2011-05-01

    Full Text Available Abstract Background The diversity and the largely independent nature of chemical research efforts over the past half century are, most likely, the major contributors to the current poor state of chemical computational resource and database interoperability. While open software for chemical format interconversion and database entry cross-linking have partially addressed database interoperability, computational resource integration is hindered by the great diversity of software interfaces, languages, access methods, and platforms, among others. This has, in turn, translated into limited reproducibility of computational experiments and the need for application-specific computational workflow construction and semi-automated enactment by human experts, especially where emerging interdisciplinary fields, such as systems chemistry, are pursued. Fortunately, the advent of the Semantic Web, and the very recent introduction of RESTful Semantic Web Services (SWS), may present an opportunity to integrate all of the existing computational and database resources in chemistry into a machine-understandable, unified system that draws on the entirety of the Semantic Web. Results We have created a prototype framework of Semantic Automated Discovery and Integration (SADI) SWS that exposes the QSAR descriptor functionality of the Chemistry Development Kit. Since each of these services has formal ontology-defined input and output classes, and each service consumes and produces RDF graphs, clients can automatically reason about the services and available reference information necessary to complete a given overall computational task specified through a simple SPARQL query. We demonstrate this capability by carrying out QSAR analysis backed by a simple formal ontology to determine whether a given molecule is drug-like. Further, we discuss parameter-based control over the execution of SADI SWS. Finally, we demonstrate the value of computational resource

  8. SCEAPI: A unified Restful Web API for High-Performance Computing

    Science.gov (United States)

    Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi

    2017-10-01

    The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need high-quality programming interface for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources with HTTP or HTTPs protocols. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI including authentication, file transfer and job management for creating, submitting and monitoring, and how to use SCEAPI in an easy-to-use way. Finally, we discuss how to exploit more HPC resources quickly for the ATLAS experiment by implementing the custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution to extend opportunistic HPC resources.
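
    A minimal sketch of how such a RESTful HPC API is typically driven over HTTPS (authenticate, stage an input file, submit a job, poll its state); the base URL, endpoint paths, token scheme and JSON fields below are assumptions for illustration and do not reproduce the actual SCEAPI specification.

```python
# Illustrative client for a RESTful HPC web API (authenticate, upload input,
# submit a job, poll its state). Endpoints and fields are hypothetical.
import time
import requests

BASE = "https://sceapi.example.org/api/v1"      # placeholder base URL

def get_token(user, password):
    r = requests.post(f"{BASE}/auth/tokens", json={"user": user, "password": password})
    r.raise_for_status()
    return r.json()["token"]

def submit_job(token, script, input_path):
    headers = {"Authorization": f"Bearer {token}"}
    with open(input_path, "rb") as f:
        requests.put(f"{BASE}/files/input.dat", headers=headers, data=f).raise_for_status()
    r = requests.post(f"{BASE}/jobs", headers=headers,
                      json={"script": script, "inputs": ["input.dat"], "cores": 64})
    r.raise_for_status()
    return r.json()["job_id"]

def wait(token, job_id, poll=30):
    headers = {"Authorization": f"Bearer {token}"}
    while True:
        state = requests.get(f"{BASE}/jobs/{job_id}", headers=headers).json()["state"]
        if state in ("FINISHED", "FAILED"):
            return state
        time.sleep(poll)
```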

  9. Analysis On Security Of Cloud Computing

    Directory of Open Access Journals (Sweden)

    Muhammad Zunnurain Hussain

    2017-01-01

    Full Text Available In this paper the author discusses the security issues and challenges faced by the industry in securing cloud computing and how these problems can be tackled. Cloud computing is a modern technique of sharing resources, such as data and files, without launching one's own infrastructure, instead using third-party resources to avoid huge investment. It is very challenging these days to secure the communication between two users, although people use different encryption techniques.

  10. Decentralized vs. centralized economic coordination of resource allocation in grids

    OpenAIRE

    Eymann, Torsten; Reinicke, Michael; Ardáiz Villanueva, Óscar; Artigas Vidal, Pau; Díaz de Cerio Ripalda, Luis Manuel; Freitag, Fèlix; Meseguer Pallarès, Roc; Navarro Moldes, Leandro; Royo Vallés, María Dolores; Sanjeevan, Kanapathipillai

    2003-01-01

    Application layer networks are software architectures that allow the provisioning of services requiring a huge amount of resources by connecting large numbers of individual computers, like in Grid or Peer-to-Peer computing. Controlling the resource allocation in those networks is nearly impossible using a centralized arbitrator. The network simulation project CATNET will evaluate a decentralized mechanism for resource allocation, which is based on the economic paradigm of th...

  11. Grid computing in high energy physics

    CERN Document Server

    Avery, P

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software r...

  12. Green cloud environment by using robust planning algorithm

    Directory of Open Access Journals (Sweden)

    Jyoti Thaman

    2017-11-01

    Full Text Available Cloud computing provides a framework for seamless access to resources through the network. Access to resources is quantified through SLAs between service providers and users. Service providers try to exploit their resources as fully as possible and to reduce the idle times of the resources. Growing energy concerns make this even more pressing. Users' requests are served by allocating user tasks to resources in Cloud and Grid environments through scheduling and planning algorithms. With only a few planning algorithms in existence, planning and scheduling algorithms are rarely differentiated. This paper proposes a robust hybrid planning algorithm, Robust Heterogeneous-Earliest-Finish-Time (RHEFT), for binding tasks to VMs. The allocation of tasks to VMs is based on a novel task matching algorithm called Interior Scheduling. The consistent performance of the proposed RHEFT algorithm is compared with Heterogeneous-Earliest-Finish-Time (HEFT) and Distributed HEFT (DHEFT) for various parameters like utilization ratio, makespan, speed-up and energy consumption. RHEFT's consistent performance against HEFT and DHEFT has established the robustness of the hybrid planning algorithm through rigorous simulations.
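
    For orientation, the sketch below shows the classic HEFT idea that RHEFT builds on: rank tasks by upward rank, then map each task to the VM giving the earliest finish time, with heterogeneity modelled by per-VM speeds. This is generic HEFT with communication costs ignored, not the paper's RHEFT or its Interior Scheduling matcher.

```python
# Simplified HEFT-style list scheduling: tasks (a DAG) are ranked and each is
# mapped to the VM that yields the earliest finish time. VM speeds and task
# costs are illustrative; communication costs are ignored for brevity.
def heft(tasks, deps, vm_speeds):
    """tasks: {name: work}; deps: {name: [predecessors]}; vm_speeds: list of floats."""
    rank = {}                                 # upward rank: remaining work to exit
    def upward(t):
        if t not in rank:
            succ = [s for s, ps in deps.items() if t in ps]
            rank[t] = tasks[t] + max((upward(s) for s in succ), default=0.0)
        return rank[t]
    order = sorted(tasks, key=upward, reverse=True)

    vm_free = [0.0] * len(vm_speeds)          # time each VM becomes idle
    finish, placement = {}, {}
    for t in order:
        ready = max((finish[p] for p in deps.get(t, [])), default=0.0)
        best = min(range(len(vm_speeds)),
                   key=lambda v: max(ready, vm_free[v]) + tasks[t] / vm_speeds[v])
        start = max(ready, vm_free[best])
        finish[t] = start + tasks[t] / vm_speeds[best]
        vm_free[best] = finish[t]
        placement[t] = best
    return placement, max(finish.values())    # mapping and makespan

tasks = {"a": 4, "b": 6, "c": 3, "d": 5}
deps = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
print(heft(tasks, deps, vm_speeds=[1.0, 2.0]))
```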

  13. Grid Computing Making the Global Infrastructure a Reality

    CERN Document Server

    Fox, Geoffrey C; Hey, Anthony J G

    2003-01-01

    Grid computing is applying the resources of many computers in a network to a single problem at the same time. Grid computing appears to be a promising trend for three reasons: (1) its ability to make more cost-effective use of a given amount of computer resources; (2) as a way to solve problems that can't be approached without an enormous amount of computing power; and (3) because it suggests that the resources of many computers can be cooperatively and perhaps synergistically harnessed and managed as a collaboration toward a common objective. A number of corporations, professional groups, university consortiums, and other groups have developed or are developing frameworks and software for managing grid computing projects. The European Community (EU) is sponsoring a project for a grid for high-energy physics, earth observation, and biology applications. In the United States, the National Technology Grid is prototyping a computational grid for infrastructure and an access grid for people. Sun Microsystems offers Gri...

  14. Security Problems in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Rola Motawie

    2016-12-01

    Full Text Available Cloud is a pool of computing resources which are distributed among cloud users. Cloud computing has many benefits like scalability, flexibility, cost savings, reliability, maintenance and mobile accessibility. Since cloud-computing technology is growing day by day, it comes with many security problems. Securing data in the cloud environment is one of the most critical challenges and acts as a barrier when implementing the cloud. The many new concepts that the cloud introduces, such as resource sharing, multi-tenancy, and outsourcing, create new challenges for the security community. In this work, we provide a comparative study of cloud computing privacy and security concerns. We identify and classify known security threats, cloud vulnerabilities, and attacks.

  15. Science and Technology Resources on the Internet: Computer Security.

    Science.gov (United States)

    Kinkus, Jane F.

    2002-01-01

    Discusses issues related to computer security, including confidentiality, integrity, and authentication or availability; and presents a selected list of Web sites that cover the basic issues of computer security under subject headings that include ethics, privacy, kids, antivirus, policies, cryptography, operating system security, and biometrics.…

  16. Database of Information technology resources

    OpenAIRE

    Barzda, Erlandas

    2005-01-01

    The subject of this master's work is an internet information resource database. The work also addresses the problems of old information systems that do not meet contemporary requirements. The aim is to create an internet information system based on object-oriented technologies and tailored to computer users' needs. The internet information database system helps computer administrators to get all the needed information about computer network elements and to easily register all changes int...

  17. Using multiple metaphors and multimodalities as a semiotic resource when teaching year 2 students computational strategies

    Science.gov (United States)

    Mildenhall, Paula; Sherriff, Barbara

    2017-06-01

    Recent research indicates that using multimodal learning experiences can be effective in teaching mathematics. Using a social semiotic lens within a participationist framework, this paper reports on a professional learning collaboration with a primary school teacher designed to explore the use of metaphors and modalities in mathematics instruction. This video case study was conducted in a year 2 classroom over two terms, with the focus on building children's understanding of computational strategies. The findings revealed that the teacher was able to successfully plan both multimodal and multiple metaphor learning experiences that acted as semiotic resources to support the children's understanding of abstract mathematics. The study also led to implications for teaching when using multiple metaphors and multimodalities.

  18. A computer software system for integration and analysis of grid-based remote sensing data with other natural resource data. Remote Sensing Project

    Science.gov (United States)

    Tilmann, S. E.; Enslin, W. R.; Hill-Rowley, R.

    1977-01-01

    A computer-based information system is described that is designed to assist in the integration of commonly available spatial data for regional planning and resource analysis. The Resource Analysis Program (RAP) provides a variety of analytical and mapping phases for single factor or multi-factor analyses. The unique analytical and graphic capabilities of RAP are demonstrated with a study conducted in Windsor Township, Eaton County, Michigan. Soil, land cover/use, topographic and geological maps were used as a database to develop an eleven-map portfolio. The major themes of the portfolio are land cover/use, non-point water pollution, waste disposal, and ground water recharge.

  19. GapMap: Enabling Comprehensive Autism Resource Epidemiology.

    Science.gov (United States)

    Albert, Nikhila; Daniels, Jena; Schwartz, Jessey; Du, Michael; Wall, Dennis P

    2017-05-04

    For individuals with autism spectrum disorder (ASD), finding resources can be a lengthy and difficult process. The difficulty in obtaining global, fine-grained autism epidemiological data hinders researchers from quickly and efficiently studying large-scale correlations among ASD, environmental factors, and geographical and cultural factors. The objective of this study was to define resource load and resource availability for families affected by autism and subsequently create a platform to enable a more accurate representation of prevalence rates and resource epidemiology. We created a mobile application, GapMap, to collect locational, diagnostic, and resource use information from individuals with autism to compute accurate prevalence rates and better understand autism resource epidemiology. GapMap is hosted on AWS S3, running on a React and Redux front-end framework. The backend framework is comprised of an AWS API Gateway and Lambda Function setup, with secure and scalable end points for retrieving prevalence and resource data, and for submitting participant data. Measures of autism resource scarcity, including resource load, resource availability, and resource gaps were defined and preliminarily computed using simulated or scraped data. The average distance from an individual in the United States to the nearest diagnostic center is approximately 182 km (50 miles), with a standard deviation of 235 km (146 miles). The average distance from an individual with ASD to the nearest diagnostic center, however, is only 32 km (20 miles), suggesting that individuals who live closer to diagnostic services are more likely to be diagnosed. This study confirmed that individuals closer to diagnostic services are more likely to be diagnosed and proposes GapMap, a means to measure and enable the alleviation of increasingly overburdened diagnostic centers and resource-poor areas where parents are unable to diagnose their children as quickly and easily as needed. GapMap will

  20. Toward Cloud Computing Evolution

    OpenAIRE

    Susanto, Heru; Almunawar, Mohammad Nabil; Kang, Chen Chin

    2012-01-01

    Information Technology (IT) shaped the success of organizations, giving them a solid foundation that increases both their efficiency and their productivity. The computing industry is witnessing a paradigm shift in the way computing is performed worldwide. There is a growing awareness among consumers and enterprises of accessing their IT resources extensively through a "utility" model known as "cloud computing." Cloud computing was initially rooted in distributed grid-based computing. ...

  1. A Dynamic Resource Scheduling Method Based on Fuzzy Control Theory in Cloud Environment

    Directory of Open Access Journals (Sweden)

    Zhijia Chen

    2015-01-01

    Full Text Available The resources in a cloud environment are characterized by large scale, diversity, and heterogeneity. Moreover, the user requirements for cloud computing resources are commonly characterized by uncertainty and imprecision. Hence, to improve the quality of cloud computing services, not only should traditional standards such as cost and bandwidth be satisfied, but particular emphasis should also be laid on extended standards such as system friendliness. This paper proposes a dynamic resource scheduling method based on fuzzy control theory. First, a resource requirements prediction model is established. Then the relationships between resource availability and resource requirements are derived. Afterwards, fuzzy control theory is adopted to realize a friendly match between user needs and resource availability. Results show that this approach improves the resource scheduling efficiency and the quality of service (QoS) of cloud computing.
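
    A toy sketch of the fuzzy-control idea: fuzzify the predicted load with triangular membership functions and combine simple rules into a scaling decision. The membership shapes, rule set, and output scale are illustrative assumptions, not the controller from the paper.

```python
# Toy fuzzy matcher between predicted resource demand and current availability.
# Membership functions and rules are illustrative assumptions only.
def tri(x, a, b, c):
    """Triangular membership function peaking at b on [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify_load(load):          # load = predicted demand / capacity
    return {"low": tri(load, -0.5, 0.0, 0.7),
            "medium": tri(load, 0.3, 0.7, 1.1),
            "high": tri(load, 0.8, 1.5, 2.5)}

def scaling_decision(load):
    """Weighted (centroid-like) combination of rule outputs: -1 shrink ... +1 grow."""
    m = fuzzify_load(load)
    actions = {"low": -1.0, "medium": 0.0, "high": +1.0}
    total = sum(m.values()) or 1.0
    return sum(m[k] * actions[k] for k in m) / total

for load in (0.2, 0.7, 1.3):
    print(f"load {load:.1f} -> scale {scaling_decision(load):+.2f}")
```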

  2. Vanderbilt University: Campus Computing Environment.

    Science.gov (United States)

    CAUSE/EFFECT, 1988

    1988-01-01

    Despite the decentralized nature of computing at Vanderbilt, there is significant evidence of cooperation and use of each other's resources by the various computing entities. Planning for computing occurs in every school and department. Caravan, a campus-wide network, is described. (MLW)

  3. Trying to predict the future – resource planning and allocation in CMS

    International Nuclear Information System (INIS)

    Bloom, Kenneth; Fisk, Ian; Kreuzer, Peter; Merino, Gonzalo

    2012-01-01

    In the large LHC experiments the majority of computing resources are provided by the participating countries. These resource pledges account for more than three quarters of the total available computing. The experiments are asked to give indications of their requests three years in advance and to evolve these as the details and constraints become clearer. In this paper we will discuss the resource planning techniques used in CMS to predict the computing resources several years in advance. We will discuss how we attempt to implement the activities of the computing model in spread-sheets and formulas to calculate the needs. We will talk about how those needs are reflected in the 2012 running and how the planned long shutdown of the LHC in 2013 and 2014 impacts the planning process and the outcome. In the end we will speculate on the computing needs in the second major run of LHC.

  4. A generic remote method invocation for intensive data processing

    International Nuclear Information System (INIS)

    Neto, A.; Alves, D.; Fernandes, H.; Ferreira, J.S.; Varandas, C.A.F.

    2006-01-01

    Based on the Extensible Markup Language (XML) and the Remote Method Invocation (RMI) standards, a client/server remote data analysis application has been developed for intensive data processing. This GRID-oriented philosophy provides a powerful tool for maintaining updated code and centralized computational resources. Another major feature is the ability to share proprietary algorithms in remote computers without the need for local installation and maintenance of code and libraries. The 16 CPU Orionte cluster in operation at Centro de Fusao Nuclear (CFN) is currently used to provide remote data analysis. The codes running in languages such as Octave, C, Fortran or IDL are called through a remote script invocation and data is released to the client as soon as it is available. The remote calculation parameters are described in an XML file containing the configuration for the server runtime environment. Since the execution is made by calling a script, any program can be launched to perform the analysis; the only requirement is the implementation of the protocol described in XML. Some plasma properties of the CFN tokamak (ISTTOK) that require heavy computational resources are already obtained using this approach, allowing ready inter-shot analysis and parameterization decisions.

  5. A generic remote method invocation for intensive data processing

    Energy Technology Data Exchange (ETDEWEB)

    Neto, A. [Associacao Euratom/IST, Centro de Fusao Nuclear, Av. Rovisco Pais, P-1049-001 Lisbon (Portugal)]. E-mail: andre.neto@cfn.ist.utl.pt; Alves, D. [Associacao Euratom/IST, Centro de Fusao Nuclear, Av. Rovisco Pais, P-1049-001 Lisbon (Portugal); Fernandes, H. [Associacao Euratom/IST, Centro de Fusao Nuclear, Av. Rovisco Pais, P-1049-001 Lisbon (Portugal); Ferreira, J.S. [Associacao Euratom/IST, Centro de Fusao Nuclear, Av. Rovisco Pais, P-1049-001 Lisbon (Portugal); Varandas, C.A.F. [Associacao Euratom/IST, Centro de Fusao Nuclear, Av. Rovisco Pais, P-1049-001 Lisbon (Portugal)

    2006-07-15

    Based on the Extensible Markup Language (XML) and the Remote Method Invocation (RMI) standards, a client/server remote data analysis application has been developed for intensive data processing. This GRID-oriented philosophy provides a powerful tool for maintaining updated code and centralized computational resources. Another major feature is the ability to share proprietary algorithms in remote computers without the need for local installation and maintenance of code and libraries. The 16 CPU Orionte cluster in operation at Centro de Fusao Nuclear (CFN) is currently used to provide remote data analysis. The codes running in languages such as Octave, C, Fortran or IDL are called through a remote script invocation and data is released to the client as soon as it is available. The remote calculation parameters are described in an XML file containing the configuration for the server runtime environment. Since the execution is made by calling a script, any program can be launched to perform the analysis; the only requirement is the implementation of the protocol described in XML. Some plasma properties of the CFN tokamak (ISTTOK) that require heavy computational resources are already obtained using this approach, allowing ready inter-shot analysis and parameterization decisions.
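
    A rough sketch of the pattern described in the two records above: the client ships an XML description of the requested analysis, and the server maps it to a script invocation and streams results back as they are produced. The element names, interpreter, and script path are invented for illustration; they are not the CFN protocol.

```python
# Sketch of the XML-described remote-invocation pattern: the request carries the
# analysis name and parameters; the server turns it into a script call and
# returns results as they are produced. Tag names and paths are hypothetical.
import subprocess
import xml.etree.ElementTree as ET

request_xml = """
<analysis name="density_profile">
  <shot>31415</shot>
  <param name="window_ms">5.0</param>
  <runtime interpreter="octave" script="density_profile.m"/>
</analysis>
"""

def run_request(xml_text):
    root = ET.fromstring(xml_text)
    runtime = root.find("runtime")
    cmd = [runtime.get("interpreter"), runtime.get("script"), root.findtext("shot")]
    cmd += [f'{p.get("name")}={p.text}' for p in root.findall("param")]
    # Stream stdout line by line so results reach the client as soon as available.
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        yield line.rstrip()
    proc.wait()

# for line in run_request(request_xml): print(line)   # requires octave installed
```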

  6. Cloudbus Toolkit for Market-Oriented Cloud Computing

    Science.gov (United States)

    Buyya, Rajkumar; Pandey, Suraj; Vecchiola, Christian

    This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing SDK (Software Development Kit) for construction of Cloud applications and deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of 3rd party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on capabilities of IaaS providers such as Amazon along with Grid mashups; (iv) CloudSim supporting modelling and simulation of Clouds for performance studies; (v) Energy Efficient Resource Allocation Mechanisms and Techniques for creation and management of Green Clouds; and (vi) pathways for future research.

  7. Optimizing Resource Utilization in Grid Batch Systems

    International Nuclear Information System (INIS)

    Gellrich, Andreas

    2012-01-01

    On Grid sites, the requirements of the computing tasks (jobs) for computing, storage, and network resources differ widely. For instance, Monte Carlo production jobs are almost purely CPU-bound, whereas physics analysis jobs demand high data rates. In order to optimize the utilization of the compute node resources, jobs must be distributed intelligently over the nodes. Although the job resource requirements cannot be deduced directly, jobs are mapped to POSIX UID/GID according to the VO, VOMS group and role information contained in the VOMS proxy. The UID/GID then makes it possible to distinguish jobs, if users are using VOMS proxies as planned by the VO management, e.g. ‘role=production’ for Monte Carlo jobs. It is possible to set up and configure batch systems (queuing system and scheduler) at Grid sites based on these considerations, although scaling limits were observed with the scheduler MAUI. In tests these limitations could be overcome with a home-made scheduler.
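
    A small sketch of the mapping idea: classify a job from its VOMS FQAN (for example, a production role versus user analysis) and use the class to steer how many such jobs may share a worker node. The FQANs, classes, and per-node limits are illustrative assumptions, not a site's real configuration.

```python
# Illustrative classification of jobs by VOMS attributes so CPU-bound and
# I/O-bound work can be spread over nodes differently. FQANs and the
# per-class node policy are assumptions, not a site's real configuration.
def classify(fqan):
    """Map a VOMS FQAN to a coarse workload class."""
    if "Role=production" in fqan:
        return "cpu_bound"          # e.g. Monte Carlo production
    if "/analysis" in fqan or "Role=pilot" in fqan:
        return "io_bound"           # user analysis with high data rates
    return "default"

node_policy = {                      # max jobs of each class per worker node
    "cpu_bound": 8,
    "io_bound": 2,
    "default": 4,
}

job_fqan = "/atlas/Role=production/Capability=NULL"
cls = classify(job_fqan)
print(cls, "-> at most", node_policy[cls], "such jobs per node")
```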

  8. Reconfiguration of Computation and Communication Resources in Multi-Core Real-Time Embedded Systems

    DEFF Research Database (Denmark)

    Pezzarossa, Luca

    This thesis investigates the use of reconfiguration in the context of multi-core real-time systems targeting embedded applications. We address the reconfiguration of both the computation and the communication resources of a multi-core platform. Our approach is to associate reconfiguration with operational mode changes where the system, during normal operation, changes a subset of the executing tasks to adapt its behaviour to new conditions. Reconfiguration is therefore used during a mode change to modify the real-time guaranteed services of the communication channels between the tasks that are affected by the reconfiguration ... by the communication fabric between the cores of the platform. To support this, we present a new network on chip architecture, named Argo 2, that allows instantaneous and time-predictable reconfiguration of the communication channels. Our reconfiguration-capable architecture is prototyped using the existing time...

  9. Elucidating reaction mechanisms on quantum computers

    Science.gov (United States)

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M.; Wecker, Dave; Troyer, Matthias

    2017-01-01

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources. PMID:28674011

  10. Elucidating reaction mechanisms on quantum computers

    Science.gov (United States)

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M.; Wecker, Dave; Troyer, Matthias

    2017-07-01

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources.

  11. Elucidating reaction mechanisms on quantum computers.

    Science.gov (United States)

    Reiher, Markus; Wiebe, Nathan; Svore, Krysta M; Wecker, Dave; Troyer, Matthias

    2017-07-18

    With rapid recent advances in quantum technology, we are close to the threshold of quantum devices whose computational powers can exceed those of classical supercomputers. Here, we show that a quantum computer can be used to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical computer simulations used to probe these reaction mechanisms, to significantly increase their accuracy and enable hitherto intractable simulations. Our resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. Our results demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources.

  12. Efficient Nash Equilibrium Resource Allocation Based on Game Theory Mechanism in Cloud Computing by Using Auction.

    Science.gov (United States)

    Nezarat, Amin; Dastghaibifard, G H

    2015-01-01

    One of the most complex issues in the cloud computing environment is the problem of resource allocation: on one hand, the cloud provider expects the most profitability and, on the other hand, users expect to have the best resources at their disposal considering their budget and time constraints. In most previous work, heuristic and evolutionary approaches have been used to solve this problem. Nevertheless, since the nature of this environment is based on economic methods, using such methods can decrease response time and reduce the complexity of the problem. In this paper, an auction-based method is proposed which determines the auction winner by applying a game theory mechanism and holding a repeated game with incomplete information in a non-cooperative environment. In this method, users calculate a suitable price bid with their objective function over several rounds and repetitions and send it to the auctioneer, and the auctioneer chooses the winning player based on the suggested utility function. In the proposed method, the end point of the game is the Nash equilibrium point, where players are no longer inclined to alter their bid for that resource and the final bid also satisfies the auctioneer's utility function. To prove the convexity of the response space, the Lagrange method is used; the proposed model is simulated in CloudSim and the results are compared with previous work. It is concluded that this method converges to a response in a shorter time, produces the lowest service level agreement violations and provides the most utility to the provider.
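
    A much simplified single-item illustration of best-response bidding settling at a fixed point is sketched below; the ascending-bid rule and the valuations are invented for the example and merely stand in for the utility functions and repeated incomplete-information game described in the paper.

      # Toy first-price auction: sequential best responses until no bidder wants to deviate.
      def run_auction(values, step=1):
          bids = [0] * len(values)
          changed = True
          while changed:                              # fixed point ~ Nash equilibrium of the toy game
              changed = False
              for i, value in enumerate(values):
                  top = max(bids)
                  winner = bids.index(top)
                  # only a currently losing bidder raises, and only while it stays profitable
                  if i != winner and value > top:
                      bids[i] = min(value, top + step)
                      changed = True
          return bids

      print(run_auction([10, 7, 3]))   # -> [7, 6, 2]: the highest-value bidder wins near the runner-up's value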

  13. Computer Software Reviews.

    Science.gov (United States)

    Hawaii State Dept. of Education, Honolulu. Office of Instructional Services.

    Intended to provide guidance in the selection of the best computer software available to support instruction and to make optimal use of schools' financial resources, this publication provides a listing of computer software programs that have been evaluated according to their currency, relevance, and value to Hawaii's educational programs. The…

  14. Unconditionally verifiable blind quantum computation

    Science.gov (United States)

    Fitzsimons, Joseph F.; Kashefi, Elham

    2017-07-01

    Blind quantum computing (BQC) allows a client to have a server carry out a quantum computation for them such that the client's input, output, and computation remain private. A desirable property for any BQC protocol is verification, whereby the client can verify with high probability whether the server has followed the instructions of the protocol or if there has been some deviation resulting in a corrupted output state. A verifiable BQC protocol can be viewed as an interactive proof system leading to consequences for complexity theory. We previously proposed [A. Broadbent, J. Fitzsimons, and E. Kashefi, in Proceedings of the 50th Annual Symposium on Foundations of Computer Science, Atlanta, 2009 (IEEE, Piscataway, 2009), p. 517] a universal and unconditionally secure BQC scheme where the client only needs to be able to prepare single qubits in separable states randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. In this paper we extend that protocol with additional functionality allowing blind computational basis measurements, which we use to construct another verifiable BQC protocol based on a different class of resource states. We rigorously prove that the probability of failing to detect an incorrect output is exponentially small in a security parameter, while resource overhead remains polynomial in this parameter. This resource state allows entangling gates to be performed between arbitrary pairs of logical qubits with only constant overhead. This is a significant improvement on the original scheme, which required that all computations to be performed must first be put into a nearest-neighbor form, incurring linear overhead in the number of qubits. Such an improvement has important consequences for efficiency and fault-tolerance thresholds.

  15. Cloud computing for radiologists

    OpenAIRE

    Amit T Kharat; Amjad Safvi; S S Thind; Amarjit Singh

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as...

  16. Energy-Aware Computation Offloading of IoT Sensors in Cloudlet-Based Mobile Edge Computing.

    Science.gov (United States)

    Ma, Xiao; Lin, Chuang; Zhang, Han; Liu, Jianwei

    2018-06-15

    Mobile edge computing is proposed as a promising computing paradigm to relieve the excessive burden of data centers and mobile networks, which is induced by the rapid growth of Internet of Things (IoT). This work introduces the cloud-assisted multi-cloudlet framework to provision scalable services in cloudlet-based mobile edge computing. Due to the constrained computation resources of cloudlets and limited communication resources of wireless access points (APs), IoT sensors with identical computation offloading decisions interact with each other. To optimize the processing delay and energy consumption of computation tasks, theoretic analysis of the computation offloading decision problem of IoT sensors is presented in this paper. In more detail, the computation offloading decision problem of IoT sensors is formulated as a computation offloading game and the condition of Nash equilibrium is derived by introducing the tool of a potential game. By exploiting the finite improvement property of the game, the Computation Offloading Decision (COD) algorithm is designed to provide decentralized computation offloading strategies for IoT sensors. Simulation results demonstrate that the COD algorithm can significantly reduce the system cost compared with the random-selection algorithm and the cloud-first algorithm. Furthermore, the COD algorithm can scale well with increasing IoT sensors.
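
    The sketch below illustrates the finite-improvement idea behind such decentralized offloading decisions using a toy congestion cost; the cost model, parameters and per-sensor local costs are assumptions for the example, not the formulas from the paper.

      # Toy congestion game: each sensor repeatedly switches to its cheaper option until stable.
      def offload_cost(n_offloading, base=2.0, congestion=0.8):
          """Delay/energy cost of offloading grows as more sensors share the cloudlet and AP."""
          return base + congestion * n_offloading

      def improvement_dynamics(local_costs):
          decisions = [False] * len(local_costs)        # False = compute locally
          changed = True
          while changed:                                # finite improvement property -> terminates
              changed = False
              for i, local in enumerate(local_costs):
                  n_others = sum(decisions) - decisions[i]
                  prefer_offload = offload_cost(n_others + 1) < local
                  if prefer_offload != decisions[i]:
                      decisions[i], changed = prefer_offload, True
          return decisions

      print(improvement_dynamics([5.0, 4.0, 2.5, 6.0]))  # which sensors end up offloading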

  17. Hard-real-time resource management for autonomous spacecraft

    Science.gov (United States)

    Gat, E.

    2000-01-01

    This paper describes tickets, a computational mechanism for hard-real-time autonomous resource management. Autonomous spacecraft control can be considered abstractly as a computational process whose outputs are spacecraft commands.

  18. Analysis of problem solving on project based learning with resource based learning approach computer-aided program

    Science.gov (United States)

    Kuncoro, K. S.; Junaedi, I.; Dwijanto

    2018-03-01

    This study aimed to reveal the effectiveness of Project Based Learning with a Resource Based Learning approach in a computer-aided program, and analyzed problem-solving abilities in terms of problem-solving steps based on Polya's stages. The research method used was a mixed method with a sequential explanatory design. The subjects of this research were fourth-semester mathematics students. The results showed that the S-TPS (Strong Top Problem Solving) and W-TPS (Weak Top Problem Solving) subjects had good problem-solving abilities on each problem-solving indicator. The problem-solving ability of the S-MPS (Strong Middle Problem Solving) and W-MPS (Weak Middle Problem Solving) subjects on each indicator was also good. The S-BPS (Strong Bottom Problem Solving) subject had difficulty solving the problem with the computer program, was less precise in writing the final conclusion, and could not reflect on the problem-solving process using Polya’s steps. The W-BPS (Weak Bottom Problem Solving) subject was not able to meet most of the indicators of problem-solving and could not precisely construct the initial completion table, so the completion phase with Polya’s steps was constrained.

  19. A performance analysis of EC2 cloud computing services for scientific computing

    NARCIS (Netherlands)

    Ostermann, S.; Iosup, A.; Yigitbasi, M.N.; Prodan, R.; Fahringer, T.; Epema, D.H.J.; Avresky, D.; Diaz, M.; Bode, A.; Bruno, C.; Dekel, E.

    2010-01-01

    Cloud Computing is emerging today as a commercial infrastructure that eliminates the need for maintaining expensive computing hardware. Through the use of virtualization, clouds promise to address with the same shared set of physical resources a large user base with different needs. Thus, clouds

  20. COMPUTER GAMES AND EDUCATION

    OpenAIRE

    Sukhov, Anton

    2018-01-01

    This paper is devoted to research on the educational resources and possibilities of modern computer games. The “internal” educational aspects of computer games include the educational mechanism (a separate or integrated “tutorial”) and the representation of a real or even fantastic educational process within virtual worlds. The “external” dimension represents the educational opportunities of computer games for personal and professional development across different genres of computer games (various transport, so...

  1. Optimal usage of computing grid network in the fields of nuclear fusion computing task

    International Nuclear Information System (INIS)

    Tenev, D.

    2006-01-01

    Nowadays, nuclear power is becoming a main source of energy. To make its use more efficient, scientists have created complicated simulation models which require powerful computers. Grid computing is the answer to the need for powerful and accessible computing resources. The article examines and estimates the optimal configuration of the grid environment for complicated nuclear fusion computing tasks. (author)

  2. RHIC off-line computing

    International Nuclear Information System (INIS)

    Featherly, J.; Gibbard, B.; Gould, J.

    1993-01-01

    A report was prepared in Sept 1992, RHIC/DET Note 8, also known as ROCOCO, which estimated the various computing resources which will be required by the RHIC experimental program. A study has now been undertaken to review technical issues associated with supplying these resources. This study, organized by the HEP/NP Computing Group but including other appropriate participants, addresses questions of technologies, manpower, cost and schedule. The following document is an interim summary of this study both in terms of discussions which have occurred and initial conclusions reached

  3. Osei et al

    African Journals Online (AJOL)

    ... KNUST, Kumasi. Keywords: IDL-KNUST, graduates, tracer study, job performance, relevance. ... and professional demands for management and public administration education at the postgraduate level; to develop human resources in ...

  4. Sensitive Data Protection Based on Intrusion Tolerance in Cloud Computing

    OpenAIRE

    Jingyu Wang; xuefeng Zheng; Dengliang Luo

    2011-01-01

    Service integration and on-demand supply in cloud computing can significantly improve the utilization of computing resources, reduce the power consumption per service, and effectively avoid errors in computing resources. However, cloud computing still faces the problem of intrusion tolerance for the cloud computing platform and for the sensitive data of the new enterprise data center. In order to address the problem of intrusion tolerance of the cloud computing platform and sensitive data in...

  5. Resources for GCSE.

    Science.gov (United States)

    Anderton, Alain

    1987-01-01

    Argues that new resources are needed to help teachers prepare students for the new General Certificate in Secondary Education (GCSE) examination. Compares previous examinations with new examinations to illustrate the problem. Presents textbooks, workbooks, computer programs, and other curriculum materials to demonstrate the gap between resources…

  6. Computer Viruses: Pathology and Detection.

    Science.gov (United States)

    Maxwell, John R.; Lamon, William E.

    1992-01-01

    Explains how computer viruses were originally created, how a computer can become infected by a virus, how viruses operate, symptoms that indicate a computer is infected, how to detect and remove viruses, and how to prevent a reinfection. A sidebar lists eight antivirus resources. (four references) (LRW)

  7. Investigation of Diesel combustion using multiple injection strategies for idling after cold start of passenger-car engines

    Energy Technology Data Exchange (ETDEWEB)

    Payri, F.; Broatch, A.; Salavert, J.M.; Martin, J. [CMT-Motores Termicos, Universidad Politecnica de Valencia, Aptdo. 22012, E-46071 Valencia (Spain)

    2010-10-15

    A comprehensive investigation was carried out in order to better understand the combustion behaviour in a low compression ratio DI Diesel engine when multiple injection strategies are applied just after the engine cold starts in low temperature conditions (idling). More specifically, the aim of this study was twofold: on one hand, to understand the effect of the multiple injection strategies on the indicated mean effective pressure; on the other hand, to contribute to the understanding of combustion stability characterized by the coefficient of variation of indicated mean effective pressure. The first objective was fulfilled by analyzing the rate of heat release obtained by in-cylinder pressure diagnosis. The results showed that the timing of the pilot injection closest to the main injection was the most influential parameter based on the behaviour of the rate of heat release (regardless of the multiple injection strategy applied). For the second objective, the combustion stability was found to be correlated with the combustion centroid angle. The results showed a trend between them and the existence of a range of centroid angles where the combustion stability is strong enough. In addition, it was also evident that convenient split injection allows shifting the centroid to such a zone and improves combustion stability after start. (author)

  8. CHPS IN CLOUD COMPUTING ENVIRONMENT

    OpenAIRE

    K.L.Giridas; A.Shajin Nargunam

    2012-01-01

    Workflows have been utilized to characterize various forms of applications with high processing and storage demands. To make the cloud computing environment more eco-friendly, our research project aims at reducing the e-waste accumulated by computers. In a hybrid cloud, the user has the flexibility offered by public cloud resources, which can be combined with the private resource pool as required. Our previous work described the process of combining the low-range and mid-range proce...

  9. Research on cloud computing solutions

    OpenAIRE

    Liudvikas Kaklauskas; Vaida Zdanytė

    2015-01-01

    Cloud computing can be defined as a new style of computing in which dynamically scalable and often virtualized resources are provided as services over the Internet. Advantages of cloud computing technology include cost savings, high availability, and easy scalability. Voas and Zhang identified six phases of computing paradigms, from dummy terminals/mainframes, to PCs, network computing, to grid and cloud computing. There are four types of cloud computing: public cloud, private cloud, ...

  10. Contextuality supplies the 'magic' for quantum computation.

    Science.gov (United States)

    Howard, Mark; Wallman, Joel; Veitch, Victor; Emerson, Joseph

    2014-06-19

    Quantum computers promise dramatic advantages over their classical counterparts, but the source of the power in quantum computing has remained elusive. Here we prove a remarkable equivalence between the onset of contextuality and the possibility of universal quantum computation via 'magic state' distillation, which is the leading model for experimentally realizing a fault-tolerant quantum computer. This is a conceptually satisfying link, because contextuality, which precludes a simple 'hidden variable' model of quantum mechanics, provides one of the fundamental characterizations of uniquely quantum phenomena. Furthermore, this connection suggests a unifying paradigm for the resources of quantum information: the non-locality of quantum theory is a particular kind of contextuality, and non-locality is already known to be a critical resource for achieving advantages with quantum communication. In addition to clarifying these fundamental issues, this work advances the resource framework for quantum computation, which has a number of practical applications, such as characterizing the efficiency and trade-offs between distinct theoretical and experimental schemes for achieving robust quantum computation, and putting bounds on the overhead cost for the classical simulation of quantum algorithms.

  11. AGIS: Evolution of Distributed Computing Information system for ATLAS

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria; Karavakis, Edward

    2015-01-01

    The variety of the ATLAS Computing Infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data which are needed by the various ATLAS software components. The ATLAS Grid Information System is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services.

  12. User's guide to the Geothermal Resource Areas Database

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence, J.D.; Leung, K.; Yen, W.

    1981-10-01

    The National Geothermal Information Resource project at the Lawrence Berkeley Laboratory is developing a Geothermal Resource Areas Database, called GRAD, designed to answer questions about the progress of geothermal energy development. This database will contain extensive information on geothermal energy resources for selected areas, covering development from initial exploratory surveys to plant construction and operation. The database is available for on-line interactive query by anyone with an account number on the computer, a computer terminal with an acoustic coupler, and a telephone. This report will help in making use of the database. Some information is provided on obtaining access to the computer system being used, instructions on obtaining standard reports, and some aids to using the query language.

  13. The Need for Computer Science

    Science.gov (United States)

    Margolis, Jane; Goode, Joanna; Bernier, David

    2011-01-01

    Broadening computer science learning to include more students is a crucial item on the United States' education agenda, these authors say. Although policymakers advocate more computer science expertise, computer science offerings in high schools are few--and actually shrinking. In addition, poorly resourced schools with a high percentage of…

  14. LHCb Distributed Data Analysis on the Computing Grid

    CERN Document Server

    Paterson, S; Parkes, C

    2006-01-01

    LHCb is one of the four Large Hadron Collider (LHC) experiments based at CERN, the European Organisation for Nuclear Research. The LHC experiments will start taking an unprecedented amount of data when they come online in 2007. Since no single institute has the compute resources to handle this data, resources must be pooled to form the Grid. Where the Internet has made it possible to share information stored on computers across the world, Grid computing aims to provide access to computing power and storage capacity on geographically distributed systems. LHCb software applications must work seamlessly on the Grid allowing users to efficiently access distributed compute resources. It is essential to the success of the LHCb experiment that physicists can access data from the detector, stored in many heterogeneous systems, to perform distributed data analysis. This thesis describes the work performed to enable distributed data analysis for the LHCb experiment on the LHC Computing Grid.

  15. Design and implementation of distributed spatial computing node based on WPS

    International Nuclear Information System (INIS)

    Liu, Liping; Li, Guoqing; Xie, Jibo

    2014-01-01

    Current research on SIG (Spatial Information Grid) technology mostly emphasizes spatial data sharing in grid environments, while the importance of spatial computing resources is ignored. In order to implement the sharing and cooperation of spatial computing resources in a grid environment, this paper presents a systematic study of the key technologies for constructing a Spatial Computing Node based on the WPS (Web Processing Service) specification by the OGC (Open Geospatial Consortium). A framework for the Spatial Computing Node is designed according to the features of spatial computing resources. Finally, a prototype Spatial Computing Node is implemented and the relevant verification work in this environment is completed
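
    For orientation, a client of such a node can discover and inspect processes with standard WPS 1.0.0 key-value-pair requests; the endpoint URL and the process identifier below are placeholders and not part of the prototype described in the paper.

      # Build WPS 1.0.0 KVP request URLs against a (hypothetical) Spatial Computing Node endpoint.
      from urllib.parse import urlencode

      WPS_ENDPOINT = "http://example.org/spatial-node/wps"   # placeholder address

      def wps_url(request, **extra):
          params = {"service": "WPS", "version": "1.0.0", "request": request, **extra}
          return WPS_ENDPOINT + "?" + urlencode(params)

      print(wps_url("GetCapabilities"))                        # list the processes the node offers
      print(wps_url("DescribeProcess", identifier="ndvi"))     # inspect one (hypothetical) process
      # urllib.request.urlopen(wps_url("GetCapabilities"))     # would fetch the capabilities XML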

  16. Probability calculations for three-part mineral resource assessments

    Science.gov (United States)

    Ellefsen, Karl J.

    2017-06-27

    Three-part mineral resource assessment is a methodology for predicting, in a specified geographic region, both the number of undiscovered mineral deposits and the amount of mineral resources in those deposits. These predictions are based on probability calculations that are performed with newly implemented computer software. Compared to the previous implementation, the new implementation includes new features for the probability calculations themselves and for checks of those calculations. The development of the new implementation led to a new understanding of the probability calculations, namely of the assumptions inherent in them. Several assumptions strongly affect the mineral resource predictions, so it is crucial that they are checked during an assessment. The evaluation of the new implementation leads to new findings about the probability calculations, namely findings regarding the precision of the computations, the computation time, and the sensitivity of the calculation results to the input.
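
    A toy Monte Carlo combination of the two probabilistic ingredients (the number of undiscovered deposits and the tonnage per deposit) is sketched below; the distributions, parameters and percentile convention are invented for illustration and do not reproduce the assessment software.

      # Illustrative Monte Carlo: sample deposit counts, sum sampled per-deposit tonnages, summarize.
      import random

      def simulate_total_tonnage(n_trials=10_000, seed=1):
          rng = random.Random(seed)
          totals = []
          for _ in range(n_trials):
              n_deposits = rng.choices([0, 1, 2, 3], weights=[0.3, 0.4, 0.2, 0.1])[0]
              totals.append(sum(rng.lognormvariate(2.0, 1.0) for _ in range(n_deposits)))
          totals.sort()
          return {"P90": totals[int(0.10 * n_trials)],   # amount exceeded in 90% of trials
                  "P50": totals[int(0.50 * n_trials)],
                  "P10": totals[int(0.90 * n_trials)]}

      print(simulate_total_tonnage())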

  17. Evaluation and Ranking of Geothermal Resources for Electrical Generation or Electrical Offset in Idaho, Montana, Oregon and Washington. Volume II.

    Energy Technology Data Exchange (ETDEWEB)

    Bloomquist, R. Gordon

    1985-06-01

    This volume contains appendices on: (1) resource assessment - electrical generation computer results; (2) resource assessment summary - direct use computer results; (3) electrical generation (high temperature) resource assessment computer program listing; (4) direct utilization (low temperature) resource assessment computer program listing; (5) electrical generation computer program CENTPLANT and related documentation; (6) electrical generation computer program WELLHEAD and related documentation; (7) direct utilization computer program HEATPLAN and related documentation; (8) electrical generation ranking computer program GEORANK and related documentation; (9) direct utilization ranking computer program GEORANK and related documentation; and (10) life cycle cost analysis computer program and related documentation. (ACR)

  18. Cloud Computing : Research Issues and Implications

    OpenAIRE

    Marupaka Rajenda Prasad; R. Lakshman Naik; V. Bapuji

    2013-01-01

    Cloud computing is a rapidly developing and highly promising technology. It has attracted the attention of the computing community across the whole world. Cloud computing is Internet-based computing, whereby shared information, resources, and software are provided to terminals and portable devices on demand, like the energy grid. Cloud computing is the product of the combination of grid computing, distributed computing, parallel computing, and ubiquitous computing. It aims to build and forecast sophisti...

  19. Polyphony: A Workflow Orchestration Framework for Cloud Computing

    Science.gov (United States)

    Shams, Khawaja S.; Powell, Mark W.; Crockett, Tom M.; Norris, Jeffrey S.; Rossi, Ryan; Soderstrom, Tom

    2010-01-01

    Cloud Computing has delivered unprecedented compute capacity to NASA missions at affordable rates. Missions like the Mars Exploration Rovers (MER) and Mars Science Lab (MSL) are enjoying the elasticity that enables them to leverage hundreds, if not thousands, of machines for short durations without making any hardware procurements. In this paper, we describe Polyphony, a resilient, scalable, and modular framework that efficiently leverages a large set of computing resources to perform parallel computations. Polyphony can employ resources on the cloud, excess capacity on local machines, as well as spare resources at the supercomputing center, and it enables these resources to work in concert to accomplish a common goal. Polyphony is resilient to node failures, even if they occur in the middle of a transaction. We will conclude with an evaluation of a production-ready application built on top of Polyphony to perform image-processing operations on images from around the solar system, including Mars, Saturn, and Titan.
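
    The core resilience idea, a failed task simply returning to the shared queue to be retried elsewhere, can be sketched as follows; the worker names and failure model are assumptions for the example and this is not Polyphony's actual implementation.

      # Toy work queue: tasks whose worker "fails" are put back and eventually complete elsewhere.
      from collections import deque
      import random

      def process(tasks, workers, fail_prob=0.2, seed=7):
          rng, pending, done = random.Random(seed), deque(tasks), []
          while pending:
              task = pending.popleft()
              worker = rng.choice(workers)             # a cloud node, a local machine, or an HPC slot
              if rng.random() < fail_prob:
                  pending.append(task)                 # node failed mid-task: re-queue and retry
              else:
                  done.append((task, worker))
          return done

      print(process(["img_%03d" % i for i in range(5)], ["cloud-1", "local-2", "hpc-3"]))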

  20. Fundamentals of universality in one-way quantum computation

    International Nuclear Information System (INIS)

    Nest, M van den; Duer, W; Miyake, A; Briegel, H J

    2007-01-01

    In this paper, we build a framework allowing for a systematic investigation of the fundamental issue: 'Which quantum states serve as universal resources for measurement-based (one-way) quantum computation?' We start our study by re-examining what is exactly meant by 'universality' in quantum computation, and what the implications are for universal one-way quantum computation. Given the framework of a measurement-based quantum computer, where quantum information is processed by local operations only, we find that the most general universal one-way quantum computer is one which is capable of accepting arbitrary classical inputs and producing arbitrary quantum outputs-we refer to this property as CQ-universality. We then show that a systematic study of CQ-universality in one-way quantum computation is possible by identifying entanglement features that are required to be present in every universal resource. In particular, we find that a large class of entanglement measures must reach its supremum on every universal resource. These insights are used to identify several families of states as being not universal, such as one-dimensional (1D) cluster states, Greenberger-Horne-Zeilinger (GHZ) states, W states, and ground states of non-critical 1D spin systems. Our criteria are strengthened by considering the efficiency of a quantum computation, and we find that entanglement measures must obey a certain scaling law with the system size for all efficient universal resources. This again leads to examples of non-universal resources, such as, e.g. ground states of critical 1D spin systems. On the other hand, we provide several examples of efficient universal resources, namely graph states corresponding to hexagonal, triangular and Kagome lattices. Finally, we consider the more general notion of encoded CQ-universality, where quantum outputs are allowed to be produced in an encoded form. Again we provide entanglement-based criteria for encoded universality. Moreover, we present a

  1. Computational Fair Division

    DEFF Research Database (Denmark)

    Branzei, Simina

    Fair division is a fundamental problem in economic theory and one of the oldest questions faced through the history of human society. The high level scenario is that of several participants having to divide a collection of resources such that everyone is satisfied with their allocation -- e.g. two heirs dividing a car, house, and piece of land inherited. The literature on fair division was developed in the 20th century in mathematics and economics, but computational work on fair division is still sparse. This thesis can be seen as an excursion in computational fair division divided in two parts. The first part tackles the cake cutting problem, where the cake is a metaphor for a heterogeneous divisible resource such as land, time, mineral deposits, and computer memory. We study the equilibria of classical protocols and design an algorithmic framework for reasoning about their game theoretic...

  2. Distributed GPU Computing in GIScience

    Science.gov (United States)

    Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.

    2013-12-01

    Geoscientists strive to discover potential principles and patterns hidden inside ever-growing Big Data for scientific discoveries. To better achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira et al., 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges caused by the increasing amount of datasets from different domains, such as social media, earth observation and environmental sensing (Li et al., 2013). Meanwhile, CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, as GPU-based technology has matured in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared to the traditional computing microprocessor, the modern GPU, as a compelling alternative microprocessor, has outstanding parallel processing capability with cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphics rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) for each single computer, both GPU-based and CPU-based computing resources can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build up a virtual supercomputer to support CPU-based and GPU-based computing in a distributed computing environment; 3) GPUs, as specific graphics-targeted devices, are used to greatly improve the rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Key words: Geovisualization, GIScience, Spatiotemporal Studies. Reference: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. IEEE Transactions on Visualization and Computer Graphics.

  3. A Distributed Computational Infrastructure for Science and Education

    Directory of Open Access Journals (Sweden)

    Rustam K. Bazarov

    2014-06-01

    Researchers have lately been paying increasingly more attention to parallel and distributed algorithms for solving high-dimensionality problems. In this regard, the issue of acquiring or renting computational resources becomes a topical one for employees of scientific and educational institutions. This article examines technology and methods for organizing a distributed computational infrastructure. The author describes the experience of creating a high-performance system powered by existing clusterization and grid computing technology. The approach examined in the article helps minimize financial costs, aggregate territorially distributed computational resources and ensure a more rational use of available computer equipment, eliminating its downtime.

  4. Dynamic Allocation of Computing Resources for Business-Oriented Objects

    Institute of Scientific and Technical Information of China (English)

    尚海鹰

    2017-01-01

    This paper summarizes the development trend of computer system infrastructure. For the business scenarios of transaction processing systems in the current Internet+ era, the mainstream methods of computing resource allocation and load balancing are analyzed. To meet the differentiated service-level requirements of individual business objects and to make full use of the overall processing capacity of the transaction processing system, a method for dynamically allocating computing resources to business objects is introduced. Based on reference values for the processing performance of the actual application platform, the method derives a computing resource allocation plan and a dynamic adjustment strategy for each business object. Large-volume tests on the actual clearing business of a city card system achieved the expected results.
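
    A hedged sketch of the general idea, deriving a per-business-object allocation plan from benchmark values and rebalancing when backlogs drift, is given below; the object names, worker counts and threshold are illustrative and not the paper's actual strategy.

      # Proportional allocation plan plus a simple backlog-driven rebalancing step (toy numbers).
      def allocation_plan(benchmarks, total_workers):
          """Assign workers proportionally to each business object's benchmarked demand."""
          total = sum(benchmarks.values())
          return {obj: max(1, round(total_workers * share / total)) for obj, share in benchmarks.items()}

      def rebalance(plan, backlog, threshold=1000):
          """Move one worker from a lightly loaded object to one whose backlog exceeds the threshold."""
          hot = [obj for obj, b in backlog.items() if b > threshold]
          cold = [obj for obj in plan if obj not in hot and plan[obj] > 1]
          if hot and cold:
              donor = min(cold, key=lambda obj: backlog[obj])
              plan[donor] -= 1
              plan[hot[0]] += 1
          return plan

      plan = allocation_plan({"clearing": 50, "top_up": 30, "reporting": 20}, total_workers=10)
      print(rebalance(plan, {"clearing": 2500, "top_up": 300, "reporting": 100}))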

  5. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    Science.gov (United States)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (such as the central BDII, GOCDB and MyOSG), AGIS defines the relations between the experiment-specific resources in use and the physical distributed computing capabilities. Having been in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving to fit newer requirements from the ADC community. In this note, we describe the evolution and recent developments of AGIS functionality related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as flexible utilization of opportunistic Cloud and HPC resources, the integration of ObjectStore services for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, and the unified declaration of storage protocols required for the PanDA Pilot site movers, among others. Improvements to the information model and general updates are also shown; in particular, we explain how other collaborations outside ATLAS could benefit from the system as a computing resource information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.

  6. JINR CLOUD SERVICE FOR SCIENTIFIC AND ENGINEERING COMPUTATIONS

    Directory of Open Access Journals (Sweden)

    Nikita A. Balashov

    2018-03-01

    Quite often, small research groups do not have access to computational resources powerful enough for their research work to be productive. Global computational infrastructures used by large scientific collaborations can be challenging for small research teams because of the bureaucratic overhead as well as the usage complexity of the underlying tools. Some researchers buy a set of powerful servers to cover their own needs in computational resources. A drawback of such an approach is the necessity of taking care of a proper hosting environment for this hardware and its maintenance, which requires a certain level of expertise. Moreover, such resources may be underutilized much of the time, because a researcher needs to spend a certain amount of time preparing computations and analyzing results, and does not always need all the resources of modern multi-core CPU servers. The JINR cloud team developed a service which provides access for scientists of small research groups from JINR and its Member State organizations to computational resources via a problem-oriented (i.e. application-specific) web interface. It allows a scientist to focus on his research domain by interacting with the service in a convenient way via a browser, abstracting away from the underlying infrastructure and its maintenance. A user just sets the required values for his job via the web interface and specifies a location for uploading the result. The computational workloads are run on virtual machines deployed in the JINR cloud infrastructure.

  7. Computing meaning v.4

    CERN Document Server

    Bunt, Harry; Pulman, Stephen

    2013-01-01

    This book is a collection of papers by leading researchers in computational semantics. It presents a state-of-the-art overview of recent and current research in computational semantics, including descriptions of new methods for constructing and improving resources for semantic computation, such as WordNet, VerbNet, and semantically annotated corpora. It also presents new statistical methods in semantic computation, such as the application of distributional semantics in the compositional calculation of sentence meanings. Computing the meaning of sentences, texts, and spoken or texted dialogue i

  8. Multidimensional Environmental Data Resource Brokering on Computational Grids and Scientific Clouds

    Science.gov (United States)

    Montella, Raffaele; Giunta, Giulio; Laccetti, Giuliano

    Grid computing has evolved widely over the past years, and its capabilities have found their way even into business products and are no longer relegated to scientific applications. Today, grid computing technology is not restricted to a set of specific open-source or industrial grid products, but rather comprises a set of capabilities available virtually within any kind of software to create shared and highly collaborative production environments. These environments are focused on computational (workload) capabilities and the integration of information (data) into those computational capabilities. An active field of grid computing application is the full virtualization of scientific instruments in order to increase their availability and decrease operational and maintenance costs. Computational and information grids allow real-world objects to be managed in a service-oriented way using widespread industrial standards.

  9. Research on cloud computing solutions

    Directory of Open Access Journals (Sweden)

    Liudvikas Kaklauskas

    2015-07-01

    Cloud computing can be defined as a new style of computing in which dynamically scalable and often virtualized resources are provided as services over the Internet. Advantages of cloud computing technology include cost savings, high availability, and easy scalability. Voas and Zhang identified six phases of computing paradigms, from dummy terminals/mainframes, to PCs, network computing, to grid and cloud computing. There are four types of cloud computing: public cloud, private cloud, hybrid cloud and community cloud. The most common and well-known deployment model is the public cloud. A private cloud is suited for sensitive data, where the customer is dependent on a certain degree of security. According to the different types of services offered, cloud computing can be considered to consist of three layers (service models): IaaS (infrastructure as a service), PaaS (platform as a service), and SaaS (software as a service). The main cloud computing solutions are web applications, data hosting, virtualization, database clusters and terminal services. An advantage of cloud computing is the ability to virtualize and share resources among different applications with the objective of better server utilization; without a clustering solution, a service may fail the moment the server crashes. DOI: 10.15181/csat.v2i2.914

  10. A comprehensive overview of computational resources to aid in precision genome editing with engineered nucleases.

    Science.gov (United States)

    Periwal, Vinita

    2017-07-01

    Genome editing with engineered nucleases (zinc finger nucleases, TAL effector nucleases and Clustered regularly inter-spaced short palindromic repeats/CRISPR-associated) has recently been shown to have great promise in a variety of therapeutic and biotechnological applications. However, their exploitation in genetic analysis and clinical settings largely depends on their specificity for the intended genomic target. Large and complex genomes often contain highly homologous/repetitive sequences, which limits the specificity of genome editing tools and could result in off-target activity. Over the past few years, various computational approaches have been developed to assist the design process and predict/reduce the off-target activity of these nucleases. These tools could be efficiently used to guide the design of constructs for engineered nucleases and evaluate results after genome editing. This review provides a comprehensive overview of various databases, tools, web servers and resources for genome editing and compares their features and functionalities. Additionally, it also describes tools that have been developed to analyse post-genome editing results. The article also discusses important design parameters that could be considered while designing these nucleases. This review is intended to be a quick reference guide for experimentalists as well as computational biologists working in the field of genome editing with engineered nucleases. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
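
    At its simplest, the off-target scanning such tools perform amounts to counting mismatches between the guide sequence and every genomic window; the naive sketch below uses toy sequences and ignores PAM requirements, bulges and the scoring schemes used by real tools.

      # Naive off-target scan: report genome positions within a fixed mismatch budget of the guide.
      def off_target_sites(genome, guide, max_mismatches=2):
          hits = []
          for i in range(len(genome) - len(guide) + 1):
              window = genome[i:i + len(guide)]
              mismatches = sum(a != b for a, b in zip(window, guide))
              if mismatches <= max_mismatches:
                  hits.append((i, window, mismatches))
          return hits

      print(off_target_sites("ACGTACGTTACGAACGTACGA", "ACGTACGA"))  # toy genome and guide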

  11. The Future of Distributed Computing Systems in ATLAS: Boldly Venturing Beyond Grids

    CERN Document Server

    Barreiro Megino, Fernando Harald; The ATLAS collaboration

    2018-01-01

    The Production and Distributed Analysis system (PanDA) for the ATLAS experiment at the Large Hadron Collider has seen big changes over the past couple of years to accommodate new types of distributed computing resources: clouds, HPCs, volunteer computers and other external resources. While PanDA was originally designed for fairly homogeneous resources available through the Worldwide LHC Computing Grid, the new resources are heterogeneous, at diverse scales and with diverse interfaces. Up to a fifth of the resources available to ATLAS are of such new types and require special techniques for integration into PanDA. In this talk, we present the nature and scale of these resources. We provide an overview of the various challenges faced, spanning infrastructure, software distribution, workload requirements, scaling requirements, workflow management, data management, network provisioning, and associated software and computing facilities. We describe the strategies for integrating these heterogeneous resources into ...

  12. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00291854; The ATLAS collaboration; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-01-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment specific used resources and physical distributed computing capabilities. Having been in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computin...

  13. Using the CMS high level trigger as a cloud resource

    International Nuclear Information System (INIS)

    Colling, David; Huffman, Adam; Bauer, Daniela; McCrae, Alison; Cinquilli, Mattia; Gowdy, Stephen; Coarasa, Jose Antonio; Ozga, Wojciech; Chaze, Olivier; Lahiff, Andrew; Grandi, Claudio; Tiradani, Anthony; Sgaravatto, Massimo

    2014-01-01

    The CMS High Level Trigger is a compute farm of more than 10,000 cores. During data taking this resource is heavily used and is an integral part of the experiment's triggering system. However, outside of data taking periods this resource is largely unused. We describe why CMS wants to use the HLT as a cloud resource (outside of data taking periods) and how this has been achieved. In doing this we have turned a single-use cluster into an agile resource for CMS production computing. While we are able to use the HLT as a production cloud resource, there is still considerable further work that CMS needs to carry out before this resource can be used with the desired agility. This report, therefore, represents a snapshot of this activity at the time of CHEP 2013.

  14. The Evolution of Cloud Computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; Berghaus, Frank; Brasolin, Franco; Cordeiro, Cristovao; Desmarais, Ron; Field, Laurence; Gable, Ian; Giordano, Domenico; Di Girolamo, Alessandro; Hover, John; Leblanc, Matthew Edgar; Love, Peter; Paterson, Michael; Sobie, Randall; Zaytsev, Alexandr

    2015-01-01

    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This paper describes the overall evolution of cloud computing in ATLAS. The current status of the virtual machine (VM) management systems used for harnessing infrastructure as a service (IaaS) resources are discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for ma...

  15. The Evolution of Cloud Computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; The ATLAS collaboration; Berghaus, Frank; Love, Peter; Leblanc, Matthew Edgar; Di Girolamo, Alessandro; Paterson, Michael; Gable, Ian; Sobie, Randall; Field, Laurence

    2015-01-01

    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This work will describe the overall evolution of cloud computing in ATLAS. The current status of the VM management systems used for harnessing IAAS resources will be discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for managing VM images across multiple clouds, ...

  16. Cloud Computing. Technology Briefing. Number 1

    Science.gov (United States)

    Alberta Education, 2013

    2013-01-01

    Cloud computing is Internet-based computing in which shared resources, software and information are delivered as a service that computers or mobile devices can access on demand. Cloud computing is already used extensively in education. Free or low-cost cloud-based services are used daily by learners and educators to support learning, social…

  17. Children's (Pediatric) CT (Computed Tomography)

    Medline Plus

    Pediatric computed tomography (CT) is ... a CT scan. Some imaging tests and treatments have special ...

  18. An Efficient Resource Management System for a Streaming Media Distribution Network

    Science.gov (United States)

    Cahill, Adrian J.; Sreenan, Cormac J.

    2006-01-01

    This paper examines the design and evaluation of a TV on Demand (TVoD) system, consisting of a globally accessible storage architecture where all TV content broadcast over a period of time is made available for streaming. The proposed architecture consists of idle Internet Service Provider (ISP) servers that can be rented and released dynamically…
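
    The rent-and-release decision for otherwise idle ISP servers can be sketched as a simple capacity rule; the per-server stream capacity and node names below are invented for illustration.

      # Size the rented server pool to current streaming demand (toy capacity figures).
      def servers_needed(active_streams, streams_per_server=200):
          """Ceiling division: how many rented servers the current load requires."""
          return -(-active_streams // streams_per_server)

      def adjust_pool(rented, active_streams):
          target = servers_needed(active_streams)
          if target > len(rented):
              rented = rented + ["isp-node-%d" % i for i in range(len(rented), target)]
          elif target < len(rented):
              rented = rented[:max(target, 1)]         # release idle servers, keep one warm
          return rented

      pool = adjust_pool([], 950)      # evening peak: 950 concurrent streams -> rent 5 servers
      print(pool)
      print(adjust_pool(pool, 120))    # overnight lull -> release down to a single server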

  19. Consolidation of cloud computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; The ATLAS collaboration; Cordeiro, Cristovao; Hover, John; Kouba, Tomas; Love, Peter; Mcnab, Andrew; Schovancova, Jaroslava; Sobie, Randall; Giordano, Domenico

    2017-01-01

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in resp...

  20. Cloud computing basics for librarians.

    Science.gov (United States)

    Hoy, Matthew B

    2012-01-01

    "Cloud computing" is the name for the recent trend of moving software and computing resources to an online, shared-service model. This article briefly defines cloud computing, discusses different models, explores the advantages and disadvantages, and describes some of the ways cloud computing can be used in libraries. Examples of cloud services are included at the end of the article. Copyright © Taylor & Francis Group, LLC