WorldWideScience

Sample records for significantly reduced computational

  1. Structural mode significance using INCA [Interactive Controls Analysis computer program]

    Science.gov (United States)

    Bauer, Frank H.; Downing, John P.; Thorpe, Christopher J.

    1990-01-01

    Structural finite element models are often too large to be used in the design and analysis of control systems. Model reduction techniques must be applied to reduce the structural model to manageable size. In the past, engineers either performed the model order reduction by hand or used distinct computer programs to retrieve the data, to perform the significance analysis and to reduce the order of the model. To expedite this process, the latest version of INCA has been expanded to include an interactive graphical structural mode significance and model order reduction capability.

  2. The effect of ergonomic training and intervention on reducing occupational stress among computer users

    Directory of Open Access Journals (Sweden)

    T. Yektaee

    2014-05-01

    Result: According to the covariance analysis, ergonomic training and interventions led to a reduction in occupational stress among computer users. Conclusion: Training computer users, informing them of ergonomic principles, and providing interventions such as correction of posture, reduced duration of work time, and use of an armrest and footrest would have significant implications for reducing occupational stress among computer users.

  3. Rackspace: Significance of Cloud Computing to CERN

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    The research collaboration between Rackspace and CERN is contributing to how OpenStack cloud computing will move science work around the world for CERN, and to reducing the barriers between clouds for Rackspace.

  4. Computational toxicology: Its essential role in reducing drug attrition.

    Science.gov (United States)

    Naven, R T; Louise-May, S

    2015-12-01

    Predictive toxicology plays a critical role in reducing the failure rate of new drugs in pharmaceutical research and development. Despite recent gains in our understanding of drug-induced toxicity, however, it is urgent that the utility and limitations of our current predictive tools be determined in order to identify gaps in our understanding of mechanistic and chemical toxicology. Using recently published computational regression analyses of in vitro and in vivo toxicology data, it will be demonstrated that significant gaps remain in early safety screening paradigms. More strategic analyses of these data sets will allow for a better understanding of their domain of applicability and help identify those compounds that cause significant in vivo toxicity but which are currently mis-predicted by in silico and in vitro models. These 'outliers' and falsely predicted compounds are metaphorical lighthouses that shine light on existing toxicological knowledge gaps, and it is essential that these compounds are investigated if attrition is to be reduced significantly in the future. As such, the modern computational toxicologist is more productively engaged in understanding these gaps and driving investigative toxicology towards addressing them. © The Author(s) 2015.

  5. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    Science.gov (United States)

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework using the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223
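
    As a rough, hypothetical illustration of the idea behind MRPack (not the authors' Hadoop code), the sketch below runs two toy algorithms over the same input in a single map pass and keeps their intermediate data separate with composite (algorithm-id, key) keys; the algorithm names and data are made up.

```python
# Hypothetical sketch of multi-algorithm packing in one map/reduce pass.
from collections import defaultdict

def word_count_map(record):
    for w in record.split():
        yield ("wordcount", w), 1          # composite key: (algorithm id, key)

def char_count_map(record):
    for w in record.split():
        yield ("charcount", w), len(w)

ALGORITHMS = [word_count_map, char_count_map]  # "related algorithms" packed together

def packed_map(record):
    # A single pass over each input record feeds every packed algorithm.
    for alg in ALGORITHMS:
        yield from alg(record)

def packed_reduce(key, values):
    # The algorithm id inside the key keeps intermediate data separated.
    return key, sum(values)

if __name__ == "__main__":
    data = ["to be or not to be", "be quick"]
    groups = defaultdict(list)
    for record in data:
        for key, value in packed_map(record):
            groups[key].append(value)
    for key in sorted(groups):
        print(packed_reduce(key, groups[key]))
```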

  6. Mapping the Most Significant Computer Hacking Events to a Temporal Computer Attack Model

    OpenAIRE

    Heerden, Renier; Pieterse, Heloise; Irwin, Barry

    2012-01-01

    Part 4: Section 3: ICT for Peace and War; This paper presents eight of the most significant computer hacking events (also known as computer attacks). These events were selected because of their unique impact, methodology, or other properties. A temporal computer attack model is presented that can be used to model computer-based attacks. This model consists of the following stages: Target Identification, Reconnaissance, Attack, and Post-Attack Reconnaissance. The...

  7. Reducing Computational Overhead of Network Coding with Intrinsic Information Conveying

    DEFF Research Database (Denmark)

    Heide, Janus; Zhang, Qi; Pedersen, Morten V.

    This paper investigates the possibility of intrinsic information conveying in network coding systems. The information is embedded into the coding vector by constructing the vector based on a set of predefined rules, and can subsequently be retrieved by any receiver. The starting point is RLNC (Random Linear Network Coding), and the goal is to reduce the amount of coding operations both at the coding and decoding node, and at the same time remove the need for dedicated signaling messages. In a traditional RLNC system, coding operations take up significant computational resources and add (...) the coding operations must be performed in a particular way, which we introduce. Finally, we evaluate the suggested system and find that the amount of coding can be significantly reduced both at nodes that recode and decode.

  8. Sucralfate significantly reduces ciprofloxacin concentrations in serum.

    OpenAIRE

    Garrelts, J C; Godley, P J; Peterie, J D; Gerlach, E H; Yakshe, C C

    1990-01-01

    The effect of sucralfate on the bioavailability of ciprofloxacin was evaluated in eight healthy subjects utilizing a randomized, crossover design. The area under the concentration-time curve from 0 to 12 h was reduced from 8.8 to 1.1 micrograms.h/ml by sucralfate (P less than 0.005). Similarly, the maximum concentration of ciprofloxacin in serum was reduced from 2.0 to 0.2 micrograms/ml (P less than 0.005). We conclude that concurrent ingestion of sucralfate significantly reduces the concentr...

  9. Reduced-Complexity Direction of Arrival Estimation Using Real-Valued Computation with Arbitrary Array Configurations

    Directory of Open Access Journals (Sweden)

    Feng-Gang Yan

    2018-01-01

    A low-complexity algorithm is presented to dramatically reduce the complexity of the multiple signal classification (MUSIC) algorithm for direction of arrival (DOA) estimation, in which both tasks of eigenvalue decomposition (EVD) and spectral search are implemented with efficient real-valued computations, leading to about 75% complexity reduction as compared to the standard MUSIC. Furthermore, the proposed technique has no dependence on array configurations and is hence suitable for arbitrary array geometries, which is a significant implementation advantage over most state-of-the-art unitary estimators, including unitary MUSIC (U-MUSIC). Numerical simulations over a wide range of scenarios are conducted to show the performance of the new technique, which demonstrate that, with a significantly reduced computational complexity, the new approach provides accuracy close to that of the standard MUSIC.
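
    For context, the sketch below implements only the standard MUSIC baseline the paper starts from (complex EVD plus a grid search on a uniform linear array); the real-valued EVD, real-valued spectral search and arbitrary-geometry support that give the reported ~75% reduction are not reproduced, and all array and signal parameters are illustrative.

```python
# Baseline (standard) MUSIC on a uniform linear array -- illustrative only.
# The paper's real-valued EVD/search and arbitrary-geometry support are omitted.
import numpy as np

M, d = 8, 0.5                                   # sensors, spacing in wavelengths
true_doas = np.deg2rad([-20.0, 35.0])           # hypothetical source directions
K, snapshots, snr_db = len(true_doas), 200, 10

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

rng = np.random.default_rng(0)
S = (rng.standard_normal((K, snapshots)) + 1j * rng.standard_normal((K, snapshots))) / np.sqrt(2)
noise = (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots))) / np.sqrt(2)
X = steering(true_doas) @ S + 10 ** (-snr_db / 20) * noise
R = X @ X.conj().T / snapshots                  # sample covariance (complex, M x M)

eigval, eigvec = np.linalg.eigh(R)              # complex EVD -- the costly step
En = eigvec[:, : M - K]                         # noise subspace
grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
A = steering(grid)
p = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)   # MUSIC pseudospectrum

peaks = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1
est = np.sort(grid[peaks[np.argsort(p[peaks])[-K:]]])
print(np.rad2deg(est))                          # should be near the true DOAs
```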

  10. Next-generation nozzle check valve significantly reduces operating costs

    Energy Technology Data Exchange (ETDEWEB)

    Roorda, O. [SMX International, Toronto, ON (Canada)

    2009-01-15

    Check valves perform an important function in preventing reverse flow and protecting plant and mechanical equipment. However, the variety of different types of valves and extreme differences in performance even within one type can change maintenance requirements and life cycle costs, amounting to millions of dollars over the typical 15-year design life of piping components. A next-generation non-slam nozzle check valve which prevents return flow has greatly reduced operating costs by protecting the mechanical equipment in a piping system. This article described the check valve varieties such as the swing check valve, a dual-plate check valve, and nozzle check valves. Advancements in optimized design of a non-slam nozzle check valve were also discussed, with particular reference to computer flow modelling such as computational fluid dynamics; computer stress modelling such as finite element analysis; and flow testing (using rapid prototype development and flow loop testing), both to improve dynamic performance and reduce hydraulic losses. The benefits of maximized dynamic performance and minimized pressure loss from the new designed valve were also outlined. It was concluded that this latest non-slam nozzle check valve design has potential applications in natural gas, liquefied natural gas, and oil pipelines, including subsea applications, as well as refineries, and petrochemical plants among others, and is suitable for horizontal and vertical installation. The result of this next-generation nozzle check valve design is not only superior performance, and effective protection of mechanical equipment but also minimized life cycle costs. 1 fig.

  11. Reducing musculoskeletal disorders among computer operators: comparison between ergonomics interventions at the workplace.

    Science.gov (United States)

    Levanon, Yafa; Gefen, Amit; Lerman, Yehuda; Givon, Uri; Ratzon, Navah Z

    2012-01-01

    Typing is associated with musculoskeletal disorders (MSDs) caused by multiple risk factors. This control study aimed to evaluate the efficacy of a workplace intervention for reducing MSDs among computer workers. Sixty-six subjects with and without MSD were assigned consecutively to one of three groups: ergonomics intervention (work site and body posture adjustments, muscle activity training and exercises) accompanied by biofeedback training, the same ergonomics intervention without biofeedback, and a control group. Evaluation of MSDs, body posture, psychosocial status, upper extremity (UE) kinematics and muscle surface electromyography was carried out before and after the intervention in the workplace and the motion lab. Our main hypothesis, that significant differences in the reduction of MSDs would exist between subjects in the study groups and controls, was confirmed (χ² = 13.3; p = 0.001). Significant changes were found in UE kinematics and posture as well. Both ergonomics interventions effectively reduced MSDs and improved body posture. This study aimed to test the efficacy of an individual workplace intervention programme among computer workers by evaluating musculoskeletal disorders (MSDs), body posture, upper extremity kinematics, muscle activity and psychosocial factors. The proposed ergonomics interventions effectively reduced MSDs and improved body posture.

  12. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  13. A REDUCE program for symbolic computation of Puiseux expansions

    International Nuclear Information System (INIS)

    Gerdt, V.P.; Tiller, P.

    1991-01-01

    A program for the computation of Puiseux expansions of algebraic functions is described. The Newton diagram method is used to construct the initial coefficients of all the Puiseux series at the given point. The program is written in the computer algebra language Reduce. Some illustrative examples are given. 20 refs

  14. Significant decimal digits for energy representation on short-word computers

    International Nuclear Information System (INIS)

    Sartori, E.

    1989-01-01

    The general belief that single-precision floating point numbers always have at least seven significant decimal digits on short-word computers such as IBM machines is erroneous. Seven significant digits are, however, required for representing the energy variable in nuclear cross-section data sets containing sharp p-wave resonances at 0 Kelvin. It is suggested that either the energy variable be stored in double precision or that cross-section resonances be reconstructed to room temperature or higher on short-word computers.
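
    The point is easy to demonstrate: two resonance energies that differ only in the eighth significant digit collapse onto the same single-precision value, while double precision keeps them apart. The energies below are made-up numbers, not values from any evaluated data file.

```python
# Single precision (~7 significant decimal digits) cannot separate two
# closely spaced resonance energies; double precision can. Energies (eV)
# are invented for illustration.
import numpy as np

e1, e2 = 1234567.10, 1234567.15            # differ in the 8th significant digit
print(np.float32(e1) == np.float32(e2))    # True  -> the energy grid points collapse
print(np.float64(e1) == np.float64(e2))    # False -> double precision keeps them apart
print(np.finfo(np.float32).precision,      # ~6 decimal digits guaranteed
      np.finfo(np.float64).precision)      # ~15 decimal digits
```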

  15. Computation of spatial significance of mountain objects extracted from multiscale digital elevation models

    International Nuclear Information System (INIS)

    Sathyamoorthy, Dinesh

    2014-01-01

    The derivation of spatial significance is an important aspect of geospatial analysis, and hence various methods have been proposed to compute the spatial significance of entities based on their spatial distances to other entities within the cluster. This paper studies the spatial significance of mountain objects extracted from multiscale digital elevation models (DEMs). At each scale, the value of the spatial significance index (SSI) of a mountain object is the minimum number of morphological dilation iterations required to occupy all the other mountain objects in the terrain. The mountain object with the lowest value of SSI is the spatially most significant mountain object, indicating that it has the shortest distance to the other mountain objects. It is observed that as the areas of the mountain objects reduce with increasing scale, the distances between the mountain objects increase, resulting in increasing values of SSI. The results obtained indicate that the strategic location of a mountain object at the centre of the terrain is more important than its size in determining its reach to other mountain objects and thus its spatial significance.
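
    A minimal sketch of the SSI computation as we read it (dilate one object's binary mask until it covers every other object and report the iteration count) is given below; the toy masks and the exact occupancy criterion are our assumptions, not the paper's data.

```python
# Hedged sketch of the spatial significance index (SSI): dilate one object's
# binary mask until it covers all other objects; the iteration count is its SSI.
import numpy as np
from scipy.ndimage import binary_dilation

def spatial_significance_index(obj, others, max_iter=1000):
    grown = obj.copy()
    for i in range(1, max_iter + 1):
        grown = binary_dilation(grown)
        if all(grown[other].all() for other in others):
            return i                      # lower SSI -> spatially more significant
    return max_iter

shape = (40, 40)
objs = []
for r, c in [(5, 5), (20, 20), (34, 33)]:     # three toy "mountain objects"
    m = np.zeros(shape, dtype=bool)
    m[r - 2:r + 2, c - 2:c + 2] = True
    objs.append(m)

for k, obj in enumerate(objs):
    others = [o for j, o in enumerate(objs) if j != k]
    print(f"object {k}: SSI = {spatial_significance_index(obj, others)}")
```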

  16. Reducing image noise in computed tomography (CT) colonography: effect of an integrated circuit CT detector.

    Science.gov (United States)

    Liu, Yu; Leng, Shuai; Michalak, Gregory J; Vrieze, Thomas J; Duan, Xinhui; Qu, Mingliang; Shiung, Maria M; McCollough, Cynthia H; Fletcher, Joel G

    2014-01-01

    To investigate whether the integrated circuit (IC) detector results in reduced noise in computed tomography (CT) colonography (CTC). Three hundred sixty-six consecutive patients underwent clinically indicated CTC using the same CT scanner system, except for a difference in CT detectors (IC or conventional). Image noise, patient size, and scanner radiation output (volume CT dose index) were quantitatively compared between patient cohorts using each detector system, with separate comparisons for the abdomen and pelvis. For the abdomen and pelvis, despite significantly larger patient sizes in the IC detector cohort, image noise was significantly lower with the IC detector, while scanner radiation output did not differ significantly between the cohorts (both P > 0.18). Based on the observed image noise reduction, radiation dose could alternatively be reduced by approximately 20% to result in similar levels of image noise. Computed tomography colonography images acquired using the IC detector had significantly lower noise than images acquired using the conventional detector. This noise reduction can permit further radiation dose reduction in CTC.

  17. Implementation of a solution Cloud Computing with MapReduce model

    International Nuclear Information System (INIS)

    Baya, Chalabi

    2014-01-01

    In recent years, large-scale computer systems have emerged to meet the demands of high storage, supercomputing, and applications using very large data sets. The emergence of Cloud Computing offers the potential for analysis and processing of large data sets. MapReduce is the most popular programming model used to support the development of such applications. It was initially designed by Google for building large-scale datacenters, to provide Web search services with rapid response and high availability. In this paper we test the K-means clustering algorithm in a Cloud Computing environment. This algorithm is implemented on MapReduce and was chosen because its characteristics are representative of many iterative data analysis algorithms. We then modify the CloudSim framework to simulate the MapReduce execution of K-means clustering on different Cloud Computing infrastructures, depending on the size and characteristics of the target platforms. The experiments show that the implementation of K-means clustering gives good results, especially for large data sets, and that the Cloud infrastructure has an influence on these results.
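
    As a stand-alone, hypothetical illustration of why K-means maps naturally onto MapReduce (this is not the paper's CloudSim/Hadoop setup): each map call assigns one point to its nearest centroid, and each reduce call averages the points assigned to one centroid.

```python
# One K-means iteration expressed as map and reduce steps -- illustrative only.
import numpy as np
from collections import defaultdict

def kmeans_map(point, centroids):
    # Map: emit (nearest-centroid-id, point) for each input point.
    cid = int(np.argmin(np.linalg.norm(centroids - point, axis=1)))
    return cid, point

def kmeans_reduce(cid, points):
    # Reduce: the new centroid is the mean of all points assigned to it.
    return cid, np.mean(points, axis=0)

def kmeans_iteration(data, centroids):
    groups = defaultdict(list)
    for p in data:
        cid, point = kmeans_map(p, centroids)
        groups[cid].append(point)
    new_centroids = centroids.copy()
    for cid, pts in groups.items():
        new_centroids[cid] = kmeans_reduce(cid, pts)[1]
    return new_centroids

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
centroids = data[rng.choice(len(data), 2, replace=False)]
for _ in range(10):
    centroids = kmeans_iteration(data, centroids)
print(centroids)                 # should land near (0, 0) and (5, 5)
```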

  18. A strategy for reducing turnaround time in design optimization using a distributed computer system

    Science.gov (United States)

    Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

    1988-01-01

    There is a need to explore methods for reducing lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.

  19. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Cambridge, MA; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.

  20. Reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda A [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-01-10

    Methods, apparatus, and products are disclosed for reducing power consumption while synchronizing a plurality of compute nodes during execution of a parallel application that include: beginning, by each compute node, performance of a blocking operation specified by the parallel application, each compute node beginning the blocking operation asynchronously with respect to the other compute nodes; reducing, for each compute node, power to one or more hardware components of that compute node in response to that compute node beginning the performance of the blocking operation; and restoring, for each compute node, the power to the hardware components having power reduced in response to all of the compute nodes beginning the performance of the blocking operation.
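
    A rough, non-authoritative sketch of the control flow claimed in these two records, with threads standing in for compute nodes, a barrier standing in for the blocking collective, and a made-up placeholder for the hardware power control:

```python
# Simulated compute nodes reduce power on entering a blocking operation and
# restore it once all nodes have begun -- an illustration, not the patented code.
import random
import threading
import time

NUM_NODES = 4
all_entered = threading.Barrier(NUM_NODES)     # "all nodes have begun the blocking operation"

def set_power(node, level):
    print(f"node {node}: power -> {level}")    # placeholder for a real power control

def node_main(node):
    time.sleep(random.random())                # nodes reach the collective asynchronously
    print(f"node {node}: entering blocking operation")
    set_power(node, "reduced")                 # cut power to idle components while waiting
    all_entered.wait()                         # returns only when every node has begun
    set_power(node, "nominal")                 # restore power and proceed

threads = [threading.Thread(target=node_main, args=(n,)) for n in range(NUM_NODES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```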

  1. Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment.

    Science.gov (United States)

    Meng, Bowen; Pratx, Guillem; Xing, Lei

    2011-12-01

    Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution and dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT∕CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. In this work, we accelerated the Feldkamp-Davis-Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from a 4DCT scan were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate those partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed on a commercial cloud computing environment using the proposed 4D CBCT∕CT reconstruction algorithm. Speedup of reconstruction time is found to be roughly linear with the number of nodes employed. For instance, greater than 10 times speedup was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. Root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10^(-7). Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed
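
    To make the Map/Reduce split concrete, here is a deliberately simplified 2D parallel-beam analogue (no ramp filtering, no Hadoop, and not the authors' FDK code): each map task backprojects a subset of projections into a partial image, and the reduce step sums the partial images.

```python
# Toy 2D backprojection split into map (per-subset backprojection) and reduce (sum).
import numpy as np
from scipy.ndimage import rotate

N, K = 64, 8                                   # image size, number of "map tasks"
angles = np.linspace(0.0, 180.0, 72, endpoint=False)
phantom = np.zeros((N, N))
phantom[24:40, 20:44] = 1.0                    # simple synthetic object

def project(image, theta):
    # Toy parallel-beam projection: rotate, then sum along one axis.
    return rotate(image, theta, reshape=False, order=1).sum(axis=0)

def map_backproject(subset):
    # Map: smear a subset of projections back into a partial image.
    partial = np.zeros((N, N))
    for theta, p in subset:
        partial += rotate(np.tile(p, (N, 1)), -theta, reshape=False, order=1)
    return partial

def reduce_sum(partials):
    # Reduce: aggregate the partial backprojections into one image/volume.
    return sum(partials) / len(angles)

sinogram = [(t, project(phantom, t)) for t in angles]
chunks = [sinogram[i::K] for i in range(K)]    # one projection subset per map task
recon = reduce_sum(map_backproject(c) for c in chunks)
print(recon.shape, float(recon.max()))
```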

  2. A computational environment for creating and testing reduced chemical kinetic mechanisms

    Energy Technology Data Exchange (ETDEWEB)

    Montgomery, C.J.; Swensen, D.A.; Harding, T.V.; Cremer, M.A.; Bockelie, M.J. [Reaction Engineering International, Salt Lake City, UT (USA)

    2002-02-01

    This paper describes software called computer assisted reduced mechanism problem solving environment (CARM-PSE) that gives the engineer the ability to rapidly set up, run and examine large numbers of problems comparing detailed and reduced (approximate) chemistry. CARM-PSE integrates the automatic chemical mechanism reduction code CARM and the codes that simulate perfectly stirred reactors and plug flow reactors into a user-friendly computational environment. CARM-PSE gives the combustion engineer the ability to easily test chemical approximations over many hundreds of combinations of inputs in a multidimensional parameter space. The demonstration problems compare detailed and reduced chemical kinetic calculations for methane-air combustion, including nitrogen oxide formation, in a stirred reactor and selective non-catalytic reduction of NOx in coal combustion flue gas.

  3. The impact of slice-reduced computed tomography on histogram-based densitometry assessment of lung fibrosis in patients with systemic sclerosis.

    Science.gov (United States)

    Nguyen-Kim, Thi Dan Linh; Maurer, Britta; Suliman, Yossra A; Morsbach, Fabian; Distler, Oliver; Frauenfelder, Thomas

    2018-04-01

    To evaluate the usability of slice-reduced sequential computed tomography (CT) compared to standard high-resolution CT (HRCT) in patients with systemic sclerosis (SSc) for qualitative and quantitative assessment of interstitial lung disease (ILD) with respect to (I) detection of lung parenchymal abnormalities, (II) qualitative and semiquantitative visual assessment, (III) quantification of ILD by histograms and (IV) accuracy for the 20% cut-off discrimination. From standard chest HRCT of 60 SSc patients, sequential 9-slice computed tomography (reduced HRCT) was retrospectively reconstructed. ILD was assessed by visual scoring and quantitative histogram parameters. Results from standard and reduced HRCT were compared using non-parametric tests and analysed by univariate linear regression analyses. With respect to the detection of parenchymal abnormalities, only the detection of intrapulmonary bronchiectasis was significantly lower in reduced HRCT compared to standard HRCT (P=0.039). No differences were found comparing visual scores for fibrosis severity and extension from standard and reduced HRCT (P=0.051-0.073). All scores correlated significantly with histogram parameters derived from both standard and reduced HRCT. Significantly higher values of kurtosis and skewness were found for reduced HRCT. Histogram parameters from reduced HRCT showed significant discrimination at a cut-off of 20% fibrosis (sensitivity 88% for kurtosis and skewness; specificity 81% for kurtosis and 86% for skewness; cut-off kurtosis ≤26, cut-off skewness ≤4). Histogram parameters derived from the reduced HRCT approach could discriminate at a threshold of 20% lung fibrosis with high sensitivity and specificity. Hence, reduced HRCT might be used to detect early disease progression of lung fibrosis in the context of monitoring and treatment of SSc patients.
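
    For orientation only, the densitometry step above amounts to computing skewness and kurtosis of the segmented lung attenuation histogram and comparing them with the reported cut-offs; the sketch below uses synthetic Hounsfield values (real lung histograms are far from Gaussian), so only the mechanics carry over.

```python
# Compute histogram skewness/kurtosis of lung voxels and apply the quoted cut-offs.
# `lung_hu` is a synthetic stand-in, not patient data.
import numpy as np
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(1)
lung_hu = rng.normal(-800, 100, 50_000)        # placeholder voxel attenuation values (HU)

s = skew(lung_hu)
k = kurtosis(lung_hu, fisher=False)            # Pearson definition (normal distribution = 3)
flag = (k <= 26) and (s <= 4)                  # cut-offs quoted in the abstract above
print(f"skewness={s:.2f}, kurtosis={k:.2f}, flagged_as_>=20%_fibrosis={flag}")
```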

  4. Reducing power consumption during execution of an application on a plurality of compute nodes

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2012-06-05

    Methods, apparatus, and products are disclosed for reducing power consumption during execution of an application on a plurality of compute nodes that include: executing, by each compute node, an application, the application including power consumption directives corresponding to one or more portions of the application; identifying, by each compute node, the power consumption directives included within the application during execution of the portions of the application corresponding to those identified power consumption directives; and reducing power, by each compute node, to one or more components of that compute node according to the identified power consumption directives during execution of the portions of the application corresponding to those identified power consumption directives.

  5. Reducing power consumption while performing collective operations on a plurality of compute nodes

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2011-10-18

    Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.

  6. Using a Cloud Computing System to Reduce Door-to-Balloon Time in Acute ST-Elevation Myocardial Infarction Patients Transferred for Percutaneous Coronary Intervention.

    Science.gov (United States)

    Ho, Chi-Kung; Chen, Fu-Cheng; Chen, Yung-Lung; Wang, Hui-Ting; Lee, Chien-Ho; Chung, Wen-Jung; Lin, Cheng-Jui; Hsueh, Shu-Kai; Hung, Shin-Chiang; Wu, Kuan-Han; Liu, Chu-Feng; Kung, Chia-Te; Cheng, Cheng-I

    2017-01-01

    This study evaluated the impact on clinical outcomes of using a cloud computing system to reduce percutaneous coronary intervention hospital door-to-balloon (DTB) time for ST segment elevation myocardial infarction (STEMI). A total of 369 patients before and after implementation of the transfer protocol were enrolled. Of these patients, 262 were transferred through the protocol while the other 107 patients were transferred through the traditional referral process. There were no significant differences in DTB time, pain to door of STEMI receiving center arrival time, and pain to balloon time between the two groups. Pain to electrocardiography time in patients with Killip I/II and catheterization laboratory to balloon time in patients with Killip III/IV were significantly reduced in the transferred-through-protocol group compared to the traditional referral process group (both p<0.05). However, using the cloud computing system in our present protocol did not reduce DTB time.

  7. Clinical significance of measurement of hepatic volume by computed tomography

    International Nuclear Information System (INIS)

    Sato, Hiroyuki; Matsuda, Yoshiro; Takada, Akira

    1984-01-01

    Hepatic volumes were measured by computed tomography (CT) in 91 patients with chronic liver diseases. Mean hepatic volume in alcoholic liver disease was significantly larger than that in non-alcoholic liver disease. Hepatic volumes in the majority of decompensated liver cirrhosis were significantly smaller than those of compensated liver cirrhosis. In liver cirrhosis, significant correlations between hepatic volume and various hepatic tests which reflect the total functioning hepatic cell masses were found. Combinations of hepatic volume with ICG maximum removal rate and with serum cholinesterase activity were most useful for the assessment of prognosis in liver cirrhosis. These results indicated that estimation of hepatic volume by CT is useful for analysis of pathophysiology and prognosis of chronic liver diseases, and for diagnosis of alcoholic liver diseases. (author)

  8. Significance of computed tomography for diagnosis of heart diseases

    International Nuclear Information System (INIS)

    Senda, Kohei; Sakuma, Sadayuki

    1983-01-01

    Computed tomography (CT) with a 2 sec scanner was carried out on 105 cases with various heart diseases in order to detect the CT findings in each disease. The significance of CT as an imaging study was evaluated in comparison with scintigraphic, echographic and roentgenographic studies. CT with contrast enhancement during moderate inspiration was able to accurately demonstrate organic changes of intra- and extracardiac structures. Compared with other imaging studies, CT was superior in the detection of calcified or intracardiac mass lesions, despite its limited value in evaluating cardiac function or dynamics. (author)

  9. Fixed-point image orthorectification algorithms for reduced computational cost

    Science.gov (United States)

    French, Joseph Clinton

    Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed-point arithmetic removes the floating point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with a multiplication of the inverse. The inverse must operate iteratively. Therefore, the inverse is replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing and over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing for an integer multiplication calculation
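
    The two modifications can be shown in miniature. The hypothetical sketch below scales values into integers (fixed point) and replaces the divide in a projection-style ratio x/z with multiplication by a first-order (linear) approximation of 1/z; the scale factor and the operating point of the approximation are arbitrary choices for illustration, not values from the thesis.

```python
# Fixed-point ratio x/z without a floating-point divide, using a linearly
# approximated reciprocal. Constants are illustrative only.
SHIFT = 16
ONE = 1 << SHIFT                        # fixed-point representation of 1.0

def to_fixed(x):
    return int(round(x * ONE))

def to_float(x):
    return x / ONE

def approx_recip(z_fixed, z0=2.0):
    # First-order Taylor expansion of 1/z around a nominal operating point z0:
    # 1/z ~= 2/z0 - z/z0**2, evaluated entirely in integer arithmetic.
    a = to_fixed(2.0 / z0)
    b = to_fixed(1.0 / (z0 * z0))
    return a - ((b * z_fixed) >> SHIFT)

x, z = 3.7, 2.1                          # a projection-style ratio x/z
x_f, z_f = to_fixed(x), to_fixed(z)
ratio_fixed = (x_f * approx_recip(z_f)) >> SHIFT
print(to_float(ratio_fixed), x / z)      # fixed-point approximation vs. exact value
```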

  10. Reducing the Computational Complexity of Reconstruction in Compressed Sensing Nonuniform Sampling

    DEFF Research Database (Denmark)

    Grigoryan, Ruben; Jensen, Tobias Lindstrøm; Arildsen, Thomas

    2013-01-01

    sparse signals, but requires computationally expensive reconstruction algorithms. This can be an obstacle for real-time applications. The reduction of complexity is achieved by applying a multi-coset sampling procedure. This proposed method reduces the size of the dictionary matrix, the size...

  11. Optimizing contrast agents with respect to reducing beam hardening in nonmedical X-ray computed tomography experiments.

    Science.gov (United States)

    Nakashima, Yoshito; Nakano, Tsukasa

    2014-01-01

    Iodine is commonly used as a contrast agent in nonmedical science and engineering, for example, to visualize Darcy flow in porous geological media using X-ray computed tomography (CT). Undesirable beam hardening artifacts occur when a polychromatic X-ray source is used, which makes the quantitative analysis of CT images difficult. To optimize the chemistry of a contrast agent in terms of beam hardening reduction, we performed computer simulations and generated synthetic CT images of a homogeneous cylindrical sand-pack (diameter, 28 or 56 mm; porosity, 39 vol.%) saturated with aqueous suspensions of heavy elements, assuming the use of a polychromatic medical CT scanner. The degree of cupping derived from the beam hardening was assessed using the reconstructed CT images to find the chemistry of the suspension that induced the least cupping. The results showed that (i) the degree of cupping depended on the position of the K absorption edge of the heavy element relative to the peak of the polychromatic incident X-ray spectrum, (ii) (53)I was not an ideal contrast agent because it causes marked cupping, and (iii) a single element much heavier than (53)I ((64)Gd to (79)Au) reduced the cupping artifact significantly, and a four-heavy-element mixture of elements from (64)Gd to (79)Au reduced the artifact most significantly.

  12. Reducing the Digital Divide among Children Who Received Desktop or Hybrid Computers for the Home

    Directory of Open Access Journals (Sweden)

    Gila Cohen Zilka

    2016-06-01

    Researchers and policy makers have been exploring ways to reduce the digital divide. Parameters commonly used to examine the digital divide worldwide, as well as in this study, are: (a) the digital divide in the accessibility and mobility of the ICT infrastructure and of the content infrastructure (e.g., sites used in school); and (b) the digital divide in literacy skills. In the present study we examined the degree of effectiveness of receiving a desktop or hybrid computer for the home in reducing the digital divide among children of low socio-economic status aged 8-12 from various localities across Israel. The sample consisted of 1,248 respondents assessed in two measurements. As part of the mixed-method study, 128 children were also interviewed. Findings indicate that after the children received desktop or hybrid computers, changes occurred in their frequency of access, mobility, and computer literacy. Differences were found between the groups: hybrid computers reduce disparities and promote work with the computer and surfing the Internet more than do desktop computers. Narrowing the digital divide for this age group has many implications for the acquisition of skills and study habits, and consequently, for the realization of individual potential. The children spoke about self-improvement as a result of exposure to the digital environment, about a sense of empowerment and of improvement in their advantage in the social fabric. Many children expressed a desire to continue their education and expand their knowledge of computer applications, the use of software, of games, and more. Therefore, if there is no computer in the home and it is necessary to decide between a desktop and a hybrid computer, a hybrid computer is preferable.

  13. Complex functionality with minimal computation: Promise and pitfalls of reduced-tracer ocean biogeochemistry models

    Science.gov (United States)

    Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; Marvasti, Seyedehsafoura Sedigh

    2015-12-01

    Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the-art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING), and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. These results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate "sub-ecosystem-scale" parameterizations.

  14. The significance of reduced respiratory chain enzyme activities: clinical, biochemical and radiological associations.

    Science.gov (United States)

    Mordekar, S R; Guthrie, P; Bonham, J R; Olpin, S E; Hargreaves, I; Baxter, P S

    2006-03-01

    Mitochondrial diseases are an important group of neurometabolic disorders in children with varied clinical presentations and diagnosis that can be difficult to confirm. To report the significance of reduced respiratory chain enzyme (RCE) activity in muscle biopsy samples from children. Retrospective odds ratio was used to compare clinical and biochemical features, DNA studies, neuroimaging, and muscle biopsies in 18 children with and 48 without reduced RCE activity. Children with reduced RCE activity were significantly more likely to have consanguineous parents, to present with acute encephalopathy and lactic acidaemia and/or within the first year of life; to have an axonal neuropathy, CSF lactate >4 mmol/l; and/or to have signal change in the basal ganglia. There were positive associations with a maternal family history of possible mitochondrial cytopathy; a presentation with failure to thrive and lactic acidaemia, ragged red fibres, reduced fibroblast fatty acid oxidation and with an abnormal allopurinol loading test. There was no association with ophthalmic abnormalities, deafness, epilepsy or myopathy. The association of these clinical, biochemical and radiological features with reduced RCE activity suggests a possible causative link.

  15. Significance of a postenhancement computed tomography findings in liver cirrhosis: In view of hemodynamics

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Suck Hong; Kim, Byung Soo [Pusan National University College of Medicine, Pusan (Korea, Republic of)

    1985-04-15

    We observed a significant sign in postenhancement computed tomography of liver cirrhosis, namely visualization of portal venous branches. During postenhancement computed tomography scanning of the liver, the portal vein cannot be identified in the liver parenchyma in 84% of patients without known cirrhosis (including chronic active hepatitis). The two conditions share the same hemodynamic changes, in that there is diffuse fibrosis and a resultant decrease in the vascular bed. Visualization of intrahepatic portal branches in postenhancement computed tomography is attributable to decreased diffusion ability and portal hypertension.

  16. Computational design of patterned interfaces using reduced order models

    International Nuclear Information System (INIS)

    Vattre, A.J.; Abdolrahim, N.; Kolluri, K.; Demkowicz, M.J.

    2014-01-01

    Patterning is a familiar approach for imparting novel functionalities to free surfaces. We extend the patterning paradigm to interfaces between crystalline solids. Many interfaces have non-uniform internal structures comprised of misfit dislocations, which in turn govern interface properties. We develop and validate a computational strategy for designing interfaces with controlled misfit dislocation patterns by tailoring interface crystallography and composition. Our approach relies on a novel method for predicting the internal structure of interfaces: rather than obtaining it from resource-intensive atomistic simulations, we compute it using an efficient reduced order model based on anisotropic elasticity theory. Moreover, our strategy incorporates interface synthesis as a constraint on the design process. As an illustration, we apply our approach to the design of interfaces with rapid, 1-D point defect diffusion. Patterned interfaces may be integrated into the microstructure of composite materials, markedly improving performance. (authors)

  17. Four-phonon scattering significantly reduces intrinsic thermal conductivity of solids

    Science.gov (United States)

    Feng, Tianli; Lindsay, Lucas; Ruan, Xiulin

    2017-10-01

    For decades, the three-phonon scattering process has been considered to govern thermal transport in solids, while the role of higher-order four-phonon scattering has been persistently unclear and so ignored. However, recent quantitative calculations of three-phonon scattering have often shown a significant overestimation of thermal conductivity as compared to experimental values. In this Rapid Communication we show that four-phonon scattering is generally important in solids and can remedy such discrepancies. For silicon and diamond, the predicted thermal conductivity is reduced by 30% at 1000 K after including four-phonon scattering, bringing predictions in excellent agreement with measurements. For the projected ultrahigh-thermal conductivity material, zinc-blende BAs, a competitor of diamond as a heat sink material, four-phonon scattering is found to be strikingly strong as three-phonon processes have an extremely limited phase space for scattering. The four-phonon scattering reduces the predicted thermal conductivity from 2200 to 1400 W/m K at room temperature. The reduction at 1000 K is 60%. We also find that optical phonon scattering rates are largely affected, being important in applications such as phonon bottlenecks in equilibrating electronic excitations. Recognizing that four-phonon scattering is expensive to calculate, in the end we provide some guidelines on how to quickly assess the significance of four-phonon scattering, based on energy surface anharmonicity and the scattering phase space. Our work clears the decades-long fundamental question of the significance of higher-order scattering, and points out ways to improve thermoelectrics, thermal barrier coatings, nuclear materials, and radiative heat transfer.

  18. Evaluation of a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography of scaphoid fixation screws.

    Science.gov (United States)

    Filli, Lukas; Marcon, Magda; Scholz, Bernhard; Calcagni, Maurizio; Finkenstädt, Tim; Andreisek, Gustav; Guggenberger, Roman

    2014-12-01

    The aim of this study was to evaluate a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography (FDCT) of scaphoid fixation screws. FDCT has gained interest in imaging small anatomic structures of the appendicular skeleton. Angiographic C-arm systems with flat detectors allow fluoroscopy and FDCT imaging in a one-stop procedure emphasizing their role as an ideal intraoperative imaging tool. However, FDCT imaging can be significantly impaired by artefacts induced by fixation screws. Following ethical board approval, commercially available scaphoid fixation screws were inserted into six cadaveric specimens in order to fix artificially induced scaphoid fractures. FDCT images corrected with the algorithm were compared to uncorrected images both quantitatively and qualitatively by two independent radiologists in terms of artefacts, screw contour, fracture line visibility, bone visibility, and soft tissue definition. Normal distribution of variables was evaluated using the Kolmogorov-Smirnov test. In case of normal distribution, quantitative variables were compared using paired Student's t tests. The Wilcoxon signed-rank test was used for quantitative variables without normal distribution and all qualitative variables. A p value of < 0.05 was considered to indicate statistically significant differences. Metal artefacts were significantly reduced by the correction algorithm (p < 0.001), and the fracture line was more clearly defined (p < 0.01). The inter-observer reliability was "almost perfect" (intra-class correlation coefficient 0.85, p < 0.001). The prototype correction algorithm in FDCT for metal artefacts induced by scaphoid fixation screws may facilitate intra- and postoperative follow-up imaging. Flat detector computed tomography (FDCT) is a helpful imaging tool for scaphoid fixation. The correction algorithm significantly reduces artefacts in FDCT induced by scaphoid fixation screws. This may facilitate intra

  19. Reduced frontal and occipital lobe asymmetry on the CT-scans of schizophrenic patients. Its specificity and clinical significance

    International Nuclear Information System (INIS)

    Falkai, P.; Schneider, T.; Greve, B.; Klieser, E.; Bogerts, B.

    1995-01-01

    Frontal and occipital lobe widths were determined in the computed tomographic (CT) scans of 135 schizophrenic patients, 158 neuropsychiatrically healthy and 102 psychiatric control subjects, including patients with affective psychosis, neurosis and schizoaffective psychosis. Most healthy right-handed subjects demonstrate a relative enlargement of the right frontal as well as the left occipital lobe compared to the opposite hemisphere. These normal frontal and occipital lobe asymmetries were selectively reduced in schizophrenics (f.: 5%, p < .0005; o.: 3%, p < .05), irrespective of the pathophysiological subgroup. Schizophrenic neuroleptic non-responders revealed a significant reduction of frontal lobe asymmetry (3%, p < .05), while no correlation between BPRS sub-scores and disturbed cerebral laterality could be detected. In sum, the present study demonstrates disturbed cerebral lateralisation in schizophrenic patients, supporting the hypothesis of interrupted early brain development in schizophrenia. (author)

  20. Bringing MapReduce Closer To Data With Active Drives

    Science.gov (United States)

    Golpayegani, N.; Prathapan, S.; Warmka, R.; Wyatt, B.; Halem, M.; Trantham, J. D.; Markey, C. A.

    2017-12-01

    Moving computation closer to the data location has been a much theorized improvement to computation for decades. The increase in processor performance, the decrease in processor size and power requirement combined with the increase in data intensive computing has created a push to move computation as close to data as possible. We will show the next logical step in this evolution in computing: moving computation directly to storage. Hypothetical systems, known as Active Drives, have been proposed as early as 1998. These Active Drives would have a general-purpose CPU on each disk allowing for computations to be performed on them without the need to transfer the data to the computer over the system bus or via a network. We will utilize Seagate's Active Drives to perform general purpose parallel computing using the MapReduce programming model directly on each drive. We will detail how the MapReduce programming model can be adapted to the Active Drive compute model to perform general purpose computing with comparable results to traditional MapReduce computations performed via Hadoop. We will show how an Active Drive based approach significantly reduces the amount of data leaving the drive when performing several common algorithms: subsetting and gridding. We will show that an Active Drive based design significantly improves data transfer speeds into and out of drives compared to Hadoop's HDFS while at the same time keeping comparable compute speeds as Hadoop.

  1. A Flexible Computational Framework Using R and Map-Reduce for Permutation Tests of Massive Genetic Analysis of Complex Traits.

    Science.gov (United States)

    Mahjani, Behrang; Toor, Salman; Nettelblad, Carl; Holmgren, Sverker

    2017-01-01

    In quantitative trait locus (QTL) mapping, the significance of putative QTL is often determined using permutation testing. The computational needs to calculate the significance level are immense: 10^4 up to 10^8 or even more permutations can be needed. We have previously introduced the PruneDIRECT algorithm for multiple QTL scan with epistatic interactions. This algorithm has specific strengths for permutation testing. Here, we present a flexible, parallel computing framework for identifying multiple interacting QTL using the PruneDIRECT algorithm, which uses the map-reduce model as implemented in Hadoop. The framework is implemented in R, a widely used software tool among geneticists. This enables users to rearrange algorithmic steps to adapt genetic models, search algorithms, and parallelization steps to their needs in a flexible way. Our work underlines the maturity of accessing distributed parallel computing for computationally demanding bioinformatics applications through building workflows within existing scientific environments. We investigate the PruneDIRECT algorithm, comparing its performance to exhaustive search and the DIRECT algorithm using our framework on a public cloud resource. We find that PruneDIRECT is vastly superior for permutation testing, and perform 2×10^5 permutations for a 2D QTL problem in 15 hours, using 100 cloud processes. We show that our framework scales out almost linearly for a 3D QTL search.
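
    As a generic, hypothetical illustration of the pattern being parallelized (this is not PruneDIRECT, Hadoop, or the authors' R code): each map task scores a batch of phenotype permutations, and the reduce step pools the per-permutation maximum statistics into an empirical genome-wide significance threshold. All data below are synthetic.

```python
# Permutation testing for a (toy) QTL scan, parallelized map/reduce style.
import numpy as np
from multiprocessing import Pool

rng = np.random.default_rng(0)
n_ind, n_markers = 200, 500
genotypes = rng.integers(0, 2, size=(n_ind, n_markers)).astype(float)
phenotype = rng.normal(size=n_ind) + 0.5 * genotypes[:, 42]   # marker 42: made-up "true QTL"

def max_score(pheno):
    # Per-marker statistic: squared marker-phenotype correlation; keep the genome-wide maximum.
    g = (genotypes - genotypes.mean(0)) / genotypes.std(0)
    y = (pheno - pheno.mean()) / pheno.std()
    return float(np.max((g.T @ y / n_ind) ** 2))

def map_batch(seed, n_perm=100):
    # "Map": score one batch of permutations with its own RNG stream.
    r = np.random.default_rng(seed)
    return [max_score(r.permutation(phenotype)) for _ in range(n_perm)]

if __name__ == "__main__":
    with Pool(4) as pool:
        null = np.concatenate(pool.map(map_batch, range(10)))   # "Reduce": pool 1000 null maxima
    threshold = np.quantile(null, 0.95)                          # 5% genome-wide threshold
    print(threshold, max_score(phenotype) > threshold)
```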

  2. Using a Cloud Computing System to Reduce Door-to-Balloon Time in Acute ST-Elevation Myocardial Infarction Patients Transferred for Percutaneous Coronary Intervention

    Directory of Open Access Journals (Sweden)

    Chi-Kung Ho

    2017-01-01

    Background. This study evaluated the impact on clinical outcomes of using a cloud computing system to reduce percutaneous coronary intervention hospital door-to-balloon (DTB) time for ST segment elevation myocardial infarction (STEMI). Methods. A total of 369 patients before and after implementation of the transfer protocol were enrolled. Of these patients, 262 were transferred through the protocol while the other 107 patients were transferred through the traditional referral process. Results. There were no significant differences in DTB time, pain to door of STEMI receiving center arrival time, and pain to balloon time between the two groups. Pain to electrocardiography time in patients with Killip I/II and catheterization laboratory to balloon time in patients with Killip III/IV were significantly reduced in the transferred-through-protocol group compared to the traditional referral process group (both p<0.05). There were also no remarkable differences in the complication rate and 30-day mortality between the two groups. The multivariate analysis revealed that the independent predictors of 30-day mortality were elderly patients, advanced Killip score, and higher level of troponin-I. Conclusions. This study showed that patients transferred through our present protocol had reduced pain to electrocardiography and catheterization laboratory to balloon times in Killip I/II and III/IV patients, respectively. However, using a cloud computing system in our present protocol did not reduce DTB time.

  3. Substituting computers for services - potential to reduce ICT's environmental footprint

    Energy Technology Data Exchange (ETDEWEB)

    Plepys, A. [The International Inst. for Industrial Environmental Economics at Lund Univ. (Sweden)

    2004-07-01

    The environmental footprint of IT products is significant and, in spite of manufacturing and product design improvements, growing consumption of electronics results in increasing absolute environmental impact. Computers have a short technological lifespan, and much of the built-in performance, although necessary, remains idle for most of the time. Today, most computers used in non-residential sectors are connected to networks. The premise of this paper is that computer networks are an untapped resource which could allow the environmental impacts of IT products to be addressed through centralising and sharing computing resources. The article presents the results of a comparative study of two computing architectures. The first is the traditional decentralised PC-based system and the second a centralised server-based computing (SBC) system. Both systems deliver equivalent functions to the final users and can therefore be compared on a one-to-one basis. The study evaluates product lifespan, energy consumption in the use stage, product design and its environmental implications in manufacturing. (orig.)

  4. Male circumcision significantly reduces prevalence and load of genital anaerobic bacteria.

    Science.gov (United States)

    Liu, Cindy M; Hungate, Bruce A; Tobian, Aaron A R; Serwadda, David; Ravel, Jacques; Lester, Richard; Kigozi, Godfrey; Aziz, Maliha; Galiwango, Ronald M; Nalugoda, Fred; Contente-Cuomo, Tania L; Wawer, Maria J; Keim, Paul; Gray, Ronald H; Price, Lance B

    2013-04-16

    Male circumcision reduces female-to-male HIV transmission. Hypothesized mechanisms for this protective effect include decreased HIV target cell recruitment and activation due to changes in the penis microbiome. We compared the coronal sulcus microbiota of men from a group of uncircumcised controls (n = 77) and from a circumcised intervention group (n = 79) at enrollment and year 1 follow-up in a randomized circumcision trial in Rakai, Uganda. We characterized microbiota using 16S rRNA gene-based quantitative PCR (qPCR) and pyrosequencing, log response ratio (LRR), Bayesian classification, nonmetric multidimensional scaling (nMDS), and permutational multivariate analysis of variance (PerMANOVA). At baseline, men in both study arms had comparable coronal sulcus microbiota; however, by year 1, circumcision decreased the total bacterial load and reduced microbiota biodiversity. Specifically, the prevalence and absolute abundance of 12 anaerobic bacterial taxa decreased significantly in the circumcised men. While aerobic bacterial taxa also increased postcircumcision, these gains were minor. The reduction in anaerobes may partly account for the effects of circumcision on reduced HIV acquisition. The bacterial changes identified in this study may play an important role in the HIV risk reduction conferred by male circumcision. Decreasing the load of specific anaerobes could reduce HIV target cell recruitment to the foreskin. Understanding the mechanisms that underlie the benefits of male circumcision could help to identify new intervention strategies for decreasing HIV transmission, applicable to populations with high HIV prevalence where male circumcision is culturally less acceptable.
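
    The analysis above summarises abundance changes with a log response ratio (LRR). The toy Python sketch below shows the basic LRR computation; the function name and numbers are illustrative assumptions, not data from the study.

```python
# A minimal sketch of the log response ratio (LRR) used above to compare
# bacterial abundances before and after circumcision. Names and numbers are
# illustrative, not data from the study.
import numpy as np

def log_response_ratio(abundance_followup, abundance_baseline):
    """LRR = ln(mean follow-up abundance / mean baseline abundance);
    negative values indicate a decrease after the intervention."""
    return np.log(np.mean(abundance_followup) / np.mean(abundance_baseline))

baseline = np.array([1.2e5, 8.0e4, 2.3e5])    # hypothetical qPCR copies/swab
followup = np.array([3.1e4, 1.5e4, 6.0e4])
print(f"LRR = {log_response_ratio(followup, baseline):.2f}")
```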

  5. Computer-based versus in-person interventions for preventing and reducing stress in workers.

    Science.gov (United States)

    Kuster, Anootnara Talkul; Dalsbø, Therese K; Luong Thanh, Bao Yen; Agarwal, Arnav; Durand-Moreau, Quentin V; Kirkehei, Ingvild

    2017-08-30

    Chronic exposure to stress has been linked to several negative physiological and psychological health outcomes. Among employees, stress and its associated effects can also result in productivity losses and higher healthcare costs. In-person (face-to-face) and computer-based (web- and mobile-based) stress management interventions have been shown to be effective in reducing stress in employees compared to no intervention. However, it is unclear if one form of intervention delivery is more effective than the other. It is conceivable that computer-based interventions are more accessible, convenient, and cost-effective. To compare the effects of computer-based interventions versus in-person interventions for preventing and reducing stress in workers. We searched CENTRAL, MEDLINE, PubMed, Embase, PsycINFO, NIOSHTIC, NIOSHTIC-2, HSELINE, CISDOC, and two trials registers up to February 2017. We included randomised controlled studies that compared the effectiveness of a computer-based stress management intervention (using any technique) with a face-to-face intervention that had the same content. We included studies that measured stress or burnout as an outcome, and used workers from any occupation as participants. Three authors independently screened and selected 75 unique studies for full-text review from 3431 unique reports identified from the search. We excluded 73 studies based on full-text assessment. We included two studies. Two review authors independently extracted stress outcome data from the two included studies. We contacted study authors to gather additional data. We used standardised mean differences (SMDs) with 95% confidence intervals (CIs) to report study results. We did not perform meta-analyses due to variability in the primary outcome and considerable statistical heterogeneity. We used the GRADE approach to rate the quality of the evidence. Two studies met the inclusion criteria, including a total of 159 participants in the included arms of the studies
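
    The review above reports results as standardised mean differences (SMDs) with 95% confidence intervals. The short Python sketch below shows one common way to compute such an effect size (Hedges' g with a small-sample correction); it is a hedged illustration with made-up numbers, not the review's exact calculation.

```python
# A small sketch of the standardised mean difference (SMD) with a 95% CI, the
# effect measure the review above reports. This follows a common Hedges' g
# formulation; it is an illustration, not the review's exact computation.
import math

def smd_with_ci(m1, sd1, n1, m2, sd2, n2):
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    j = 1 - 3 / (4 * (n1 + n2) - 9)              # small-sample correction factor
    g = j * d
    se = math.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2)))
    return g, (g - 1.96 * se, g + 1.96 * se)

# Hypothetical stress scores for a computer-based vs an in-person group
g, ci = smd_with_ci(m1=18.2, sd1=5.1, n1=40, m2=20.6, sd2=5.4, n2=42)
print(f"Hedges' g = {g:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```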

  6. Reduced-Order Computational Model for Low-Frequency Dynamics of Automobiles

    Directory of Open Access Journals (Sweden)

    A. Arnoux

    2013-01-01

    Full Text Available A reduced-order model is constructed to predict, for the low-frequency range, the dynamical responses in the stiff parts of an automobile made up of stiff and flexible parts. The vehicle therefore has many elastic modes in this range due to the presence of many flexible parts and equipment. A non-standard reduced-order model is introduced. The family of elastic modes is not used and is replaced by an adapted vector basis of the admissible space of global displacements. Such a construction requires a decomposition of the domain of the structure into subdomains in order to control the spatial wavelength of the global displacements. The fast marching method is used to carry out the subdomain decomposition. A probabilistic model of uncertainties is introduced. The parameters controlling the level of uncertainties are estimated by solving a statistical inverse problem. The methodology is validated with a large computational model of an automobile.

  7. Quilting after mastectomy significantly reduces seroma formation

    African Journals Online (AJOL)

    reduce or prevent seroma formation among mastectomy patients ... of this prospective study is to evaluate the effect of surgical quilting ... Seroma was more common in smokers (p=0.003) and was not decreased by the .... explain its aetiology.

  8. Study of Propagation Mechanisms in Dynamical Railway Environment to Reduce Computation Time of 3D Ray Tracing Simulator

    Directory of Open Access Journals (Sweden)

    Siham Hairoud

    2013-01-01

    Full Text Available In order to better assess the behaviour of the propagation channel in a confined environment such as a railway tunnel for subway applications, we present an optimization method for a deterministic channel simulator based on 3D ray tracing combined with the laws of geometrical optics and the uniform theory of diffraction. This tool requires a detailed description of the environment. Thus, the complexity of this model is directly tied to the complexity of the environment and specifically to the number of facets that compose it. In this paper, we propose an algorithm to identify facets that have no significant impact on the wave propagation. This allows us to simplify the description of the geometry of the modelled environment by removing them, thereby reducing the complexity of our model and therefore its computation time. A comparative study between the full and simplified environments shows the impact of the proposed method on the characteristic parameters of the propagation channel. The computation time obtained with the simplified environment is 6 times lower than that of the full model, without significant degradation of simulation accuracy.
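
    The abstract above describes pruning geometry facets that contribute little to the propagation result. The Python sketch below illustrates one generic way such a filter could work, keeping only the facets that account for most of some attributed contribution; the criterion, threshold, and data are assumptions for illustration, not the paper's actual algorithm.

```python
# A hedged sketch of the facet-pruning idea described above: a coarse
# pre-simulation attributes some contribution metric (here, received power)
# to each facet, then facets with negligible cumulative contribution are
# dropped before the full 3D ray-tracing run. The criterion used in the paper
# is not reproduced here; the threshold and data are illustrative.
from dataclasses import dataclass

@dataclass
class Facet:
    facet_id: int
    contribution_w: float      # power attributed to reflections off this facet

def prune_facets(facets, keep_fraction=0.99):
    """Keep the smallest set of facets accounting for `keep_fraction` of the
    total attributed power; the rest are removed from the geometry."""
    ranked = sorted(facets, key=lambda f: f.contribution_w, reverse=True)
    total = sum(f.contribution_w for f in facets)
    kept, running = [], 0.0
    for f in ranked:
        kept.append(f)
        running += f.contribution_w
        if running >= keep_fraction * total:
            break
    return kept

facets = [Facet(i, c) for i, c in enumerate([5.0, 2.0, 0.8, 0.05, 0.01, 0.001])]
print([f.facet_id for f in prune_facets(facets)])   # facets that survive pruning
```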

  9. A computer language for reducing activation analysis data

    International Nuclear Information System (INIS)

    Friedman, M.H.; Tanner, J.T.

    1978-01-01

    A program, written in FORTRAN, which defines a language for reducing activation analysis data is described. An attempt was made to optimize the choice of commands and their definitions so as to concisely express what should be done, make the language natural to use and easy to learn, arrange a system of checks to guard against communication errors, and make the language inclusive. Communications are effected through commands, and these can be given in almost any order. Consistency checks are done and diagnostic messages are printed automatically to guard against the incorrect use of commands. Default options on the commands allow instructions to be expressed concisely while providing a capability to specify details for the data reduction process. The program has been implemented on a UNIVAC 1108 computer. A complete description of the commands, the algorithms used, and the internal consistency checks is given elsewhere. The applications of the program and the methods for obtaining data automatically have already been described. (T.G.)

  10. Non-conforming finite-element formulation for cardiac electrophysiology: an effective approach to reduce the computation time of heart simulations without compromising accuracy

    Science.gov (United States)

    Hurtado, Daniel E.; Rojas, Guillermo

    2018-04-01

    Computer simulations constitute a powerful tool for studying the electrical activity of the human heart, but the computational effort remains prohibitively high. In order to recover accurate conduction velocities and wavefront shapes, the mesh size in linear element (Q1) formulations cannot exceed 0.1 mm. Here we propose a novel non-conforming finite-element formulation for the non-linear cardiac electrophysiology problem that results in accurate wavefront shapes and lower mesh-dependence of the conduction velocity, while retaining the same number of global degrees of freedom as Q1 formulations. As a result, coarser discretizations of cardiac domains can be employed in simulations without significant loss of accuracy, thus reducing the overall computational effort. We demonstrate the applicability of our formulation in biventricular simulations using a coarse mesh size of ~1 mm, and show that the activation wave pattern closely follows that obtained in fine-mesh simulations at a fraction of the computation time, thus improving the accuracy-efficiency trade-off of cardiac simulations.

  11. Defibrillator charging before rhythm analysis significantly reduces hands-off time during resuscitation

    DEFF Research Database (Denmark)

    Hansen, L. K.; Folkestad, L.; Brabrand, M.

    2013-01-01

    BACKGROUND: Our objective was to reduce hands-off time during cardiopulmonary resuscitation, as increased hands-off time leads to higher mortality. METHODS: The European Resuscitation Council (ERC) 2005 and ERC 2010 guidelines were compared with an alternative sequence (ALT). Pulseless ventricular ... physicians were included. All had prior experience in advanced life support. Chest compressions were interrupted for shorter periods using ALT (mean, 6.7 vs 13.0 seconds). Analyzing data for ventricular tachycardia scenarios only, hands-off time was shorter using ALT (mean, 7.1 vs 18.2 seconds). In ERC 2010 vs ALT, 12 ... physicians were included. Two physicians had no prior experience in advanced life support. Hands-off time was reduced using ALT (mean, 3.9 vs 5.6 seconds). Looking solely at ventricular tachycardia scenarios, hands-off time was shortened using ALT (mean, 4.5 vs 7.6 seconds). No significant reduction ...

  12. Sodium-Reduced Meat and Poultry Products Contain a Significant Amount of Potassium from Food Additives.

    Science.gov (United States)

    Parpia, Arti Sharma; Goldstein, Marc B; Arcand, JoAnne; Cho, France; L'Abbé, Mary R; Darling, Pauline B

    2018-05-01

    counterparts (mean difference [95% CI]: 486 [334-638]; P ...). Additives appearing on the product label ingredient list did not significantly differ between the two groups. Potassium additives are frequently added to sodium-reduced MPPs in amounts that significantly contribute to the potassium load for patients with impaired renal handling of potassium caused by chronic kidney disease and certain medications. Patients requiring potassium restriction should be counseled to be cautious regarding the potassium content of sodium-reduced MPPs and encouraged to make food choices accordingly. Copyright © 2018 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  13. Challenges in reducing the computational time of QSTS simulations for distribution system analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Deboever, Jeremiah [Georgia Inst. of Technology, Atlanta, GA (United States); Zhang, Xiaochen [Georgia Inst. of Technology, Atlanta, GA (United States); Reno, Matthew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Broderick, Robert Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grijalva, Santiago [Georgia Inst. of Technology, Atlanta, GA (United States); Therrien, Francis [CME International T&D, St. Bruno, QC (Canada)

    2017-06-01

    The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
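
    One of the challenges listed above is the time dependence between steps: controllable devices carry state forward, so each power flow depends on the previous one. The toy Python loop below illustrates that coupling with a deliberately trivial feeder and regulator model; every number in it is an illustrative assumption, not part of the report.

```python
# A toy sketch of why QSTS simulations are hard to parallelise in time, as
# discussed above: the controllable element (here a crude voltage-regulator
# tap) carries state from one time step to the next, so the power flows must
# be solved sequentially. The feeder model is deliberately trivial and purely
# illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_steps = 86400 // 60                      # one day at 1-minute resolution
load_profile = (1.0 + 0.3 * np.sin(np.linspace(0, 2 * np.pi, n_steps))
                + 0.02 * rng.normal(size=n_steps))

tap = 0                                    # regulator state carried between steps
voltages = []
for load in load_profile:
    # "power flow": voltage drops with load and is boosted by the current tap
    v = 1.0 - 0.05 * load + 0.00625 * tap
    # time-dependent control: tap moves one step per interval if out of band
    if v < 0.985 and tap < 16:
        tap += 1
    elif v > 1.015 and tap > -16:
        tap -= 1
    voltages.append(v)

print(f"min {min(voltages):.4f} pu, max {max(voltages):.4f} pu, final tap {tap}")
```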

  14. Computational Approach to Annotating Variants of Unknown Significance in Clinical Next Generation Sequencing.

    Science.gov (United States)

    Schulz, Wade L; Tormey, Christopher A; Torres, Richard

    2015-01-01

    Next generation sequencing (NGS) has become a common technology in the clinical laboratory, particularly for the analysis of malignant neoplasms. However, most mutations identified by NGS are variants of unknown clinical significance (VOUS). Although the approach to define these variants differs by institution, software algorithms that predict variant effect on protein function may be used. However, these algorithms commonly generate conflicting results, potentially adding uncertainty to interpretation. In this review, we examine several computational tools used to predict whether a variant has clinical significance. In addition to describing the role of these tools in clinical diagnostics, we assess their efficacy in analyzing known pathogenic and benign variants in hematologic malignancies. Copyright© by the American Society for Clinical Pathology (ASCP).

  15. A chimpanzee recognizes synthetic speech with significantly reduced acoustic cues to phonetic content.

    Science.gov (United States)

    Heimbauer, Lisa A; Beran, Michael J; Owren, Michael J

    2011-07-26

    A long-standing debate concerns whether humans are specialized for speech perception, which some researchers argue is demonstrated by the ability to understand synthetic speech with significantly reduced acoustic cues to phonetic content. We tested a chimpanzee (Pan troglodytes) that recognizes 128 spoken words, asking whether she could understand such speech. Three experiments presented 48 individual words, with the animal selecting a corresponding visuographic symbol from among four alternatives. Experiment 1 tested spectrally reduced, noise-vocoded (NV) synthesis, originally developed to simulate input received by human cochlear-implant users. Experiment 2 tested "impossibly unspeechlike" sine-wave (SW) synthesis, which reduces speech to just three moving tones. Although receiving only intermittent and noncontingent reward, the chimpanzee performed well above chance level, including when hearing synthetic versions for the first time. Recognition of SW words was least accurate but improved in experiment 3 when natural words in the same session were rewarded. The chimpanzee was more accurate with NV than SW versions, as were 32 human participants hearing these items. The chimpanzee's ability to spontaneously recognize acoustically reduced synthetic words suggests that experience rather than specialization is critical for speech-perception capabilities that some have suggested are uniquely human. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Intensity-modulated radiotherapy significantly reduces xerostomia compared with conventional radiotherapy

    International Nuclear Information System (INIS)

    Braam, Petra M.; Terhaard, Chris H.J. M.D.; Roesink, Judith M.; Raaijmakers, Cornelis P.J.

    2006-01-01

    Purpose: Xerostomia is a severe complication after radiotherapy for oropharyngeal cancer, as the salivary glands are in close proximity with the primary tumor. Intensity-modulated radiotherapy (IMRT) offers theoretical advantages for normal tissue sparing. A Phase II study was conducted to determine the value of IMRT for salivary output preservation compared with conventional radiotherapy (CRT). Methods and Materials: A total of 56 patients with oropharyngeal cancer were prospectively evaluated. Of these, 30 patients were treated with IMRT and 26 with CRT. Stimulated parotid salivary flow was measured before, 6 weeks, and 6 months after treatment. A complication was defined as a stimulated parotid flow rate <25% of the preradiotherapy flow rate. Results: The mean dose to the parotid glands was 48.1 Gy (SD 14 Gy) for CRT and 33.7 Gy (SD 10 Gy) for IMRT (p < 0.005). The mean parotid flow ratio 6 weeks and 6 months after treatment was respectively 41% and 64% for IMRT and respectively 11% and 18% for CRT. As a result, 6 weeks after treatment, the number of parotid flow complications was significantly lower after IMRT (55%) than after CRT (87%) (p = 0.002). The number of complications 6 months after treatment was 56% for IMRT and 81% for CRT (p = 0.04). Conclusions: IMRT significantly reduces the number of parotid flow complications for patients with oropharyngeal cancer

  17. Biofeedback effectiveness to reduce upper limb muscle activity during computer work is muscle specific and time pressure dependent

    DEFF Research Database (Denmark)

    Vedsted, Pernille; Søgaard, Karen; Blangsted, Anne Katrine

    2011-01-01

    Continuous electromyographic (EMG) activity level is considered a risk factor in developing muscle disorders. EMG biofeedback is known to be useful in reducing EMG activity in working muscles during computer work. The purpose was to test the following hypotheses: (1) unilateral biofeedback from trapezius (TRA) can reduce bilateral TRA activity but not extensor digitorum communis (EDC) activity; (2) biofeedback from EDC can reduce activity in EDC but not in TRA; (3) biofeedback is more effective in the no time constraint than in the time constraint working condition. Eleven healthy women performed computer work during two different working conditions (time constraint/no time constraint) while receiving biofeedback. Biofeedback was given from right TRA or EDC through two modes (visual/auditory) by the use of EMG or mechanomyography as biofeedback source. During control sessions (no biofeedback), EMG ...

  18. Impact of reduced-radiation dual-energy protocols using 320-detector row computed tomography for analyzing urinary calculus components: initial in vitro evaluation.

    Science.gov (United States)

    Cai, Xiangran; Zhou, Qingchun; Yu, Juan; Xian, Zhaohui; Feng, Youzhen; Yang, Wencai; Mo, Xukai

    2014-10-01

    To evaluate the impact of reduced-radiation dual-energy (DE) protocols using 320-detector row computed tomography on the differentiation of urinary calculus components. A total of 58 urinary calculi were placed into the same phantom and underwent DE scanning with 320-detector row computed tomography. Each calculus was scanned 4 times with the DE protocols using 135 kV and 80 kV tube voltage and different tube current combinations, including 100 mA and 570 mA (group A), 50 mA and 290 mA (group B), 30 mA and 170 mA (group C), and 10 mA and 60 mA (group D). The acquisition data of all 4 groups were then analyzed by stone DE analysis software, and the results were compared with x-ray diffraction analysis. Noise, contrast-to-noise ratio, and radiation dose were compared. Calculi were correctly identified in 56 of 58 stones (96.6%) using group A and B protocols. However, only 35 stones (60.3%) and 16 stones (27.6%) were correctly diagnosed using group C and D protocols, respectively. Mean noise increased significantly and mean contrast-to-noise ratio decreased significantly from groups A to D (P ...). ... calculus component analysis while reducing patient radiation exposure to 1.81 mSv. Further reduction of tube currents may compromise diagnostic accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Reduced-dose C-arm computed tomography applications at a pediatric institution

    Energy Technology Data Exchange (ETDEWEB)

    Acord, Michael; Shellikeri, Sphoorti; Vatsky, Seth; Srinivasan, Abhay; Krishnamurthy, Ganesh; Keller, Marc S.; Cahill, Anne Marie [The Children' s Hospital of Philadelphia, Department of Radiology, Philadelphia, PA (United States)

    2017-12-15

    Reduced-dose C-arm computed tomography (CT) uses flat-panel detectors to acquire real-time 3-D images in the interventional radiology suite to assist with anatomical localization and procedure planning. To describe dose-reduction techniques for C-arm CT at a pediatric institution and to provide guidance for implementation. We conducted a 5-year retrospective study on procedures using an institution-specific reduced-dose protocol: 5 or 8 s Dyna Rotation, 248/396 projection images/acquisition and 0.1-0.17 μGy/projection dose at the detector with 0.3/0.6/0.9-mm copper (Cu) filtration. We categorized cases by procedure type and average patient age and calculated C-arm CT and total dose area product (DAP). Two hundred twenty-two C-arm CT-guided procedures were performed with a dose-reduction protocol. The most common procedures were temporomandibular and sacroiliac joint injections (48.6%) and sclerotherapy (34.2%). C-arm CT was utilized in cases of difficult percutaneous access in less common applications such as cecostomy and gastrostomy placement, foreign body retrieval and thoracentesis. C-arm CT accounted for between 9.9% and 80.7% of the total procedural DAP. Dose-reducing techniques can preserve image quality for intervention while reducing radiation exposure to the child. This technology has multiple applications within pediatric interventional radiology and can be considered as an adjunctive imaging tool in a variety of procedures, particularly when percutaneous access is challenging despite routine fluoroscopic or ultrasound guidance. (orig.)

  20. GATE Monte Carlo simulation of dose distribution using MapReduce in a cloud computing environment.

    Science.gov (United States)

    Liu, Yangchuan; Tang, Yuguo; Gao, Xin

    2017-12-01

    The GATE Monte Carlo simulation platform has good application prospects of treatment planning and quality assurance. However, accurate dose calculation using GATE is time consuming. The purpose of this study is to implement a novel cloud computing method for accurate GATE Monte Carlo simulation of dose distribution using MapReduce. An Amazon Machine Image installed with Hadoop and GATE is created to set up Hadoop clusters on Amazon Elastic Compute Cloud (EC2). Macros, the input files for GATE, are split into a number of self-contained sub-macros. Through Hadoop Streaming, the sub-macros are executed by GATE in Map tasks and the sub-results are aggregated into final outputs in Reduce tasks. As an evaluation, GATE simulations were performed in a cubical water phantom for X-ray photons of 6 and 18 MeV. The parallel simulation on the cloud computing platform is as accurate as the single-threaded simulation on a local server and the simulation correctness is not affected by the failure of some worker nodes. The cloud-based simulation time is approximately inversely proportional to the number of worker nodes. For the simulation of 10 million photons on a cluster with 64 worker nodes, time decreases of 41× and 32× were achieved compared to the single worker node case and the single-threaded case, respectively. The test of Hadoop's fault tolerance showed that the simulation correctness was not affected by the failure of some worker nodes. The results verify that the proposed method provides a feasible cloud computing solution for GATE.
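
    The study above splits GATE macros into sub-macros that run as Map tasks, with Reduce tasks aggregating the sub-results. The Python sketch below shows that aggregation step in the Hadoop Streaming style (plain text on stdin/stdout); the per-voxel output format and field names are assumptions for illustration, not the study's actual scripts.

```python
# A hedged sketch of the map/reduce aggregation step described above, in the
# Hadoop Streaming style (plain text key/value pairs). The mapper would
# normally invoke GATE on one sub-macro; here it is assumed the sub-simulation
# already produced lines of "voxel_id dose_Gy", which the reducer sums per
# voxel. File formats and field names are assumptions, not the study's scripts.
import sys
from collections import defaultdict

def mapper(lines):
    """Emit (voxel_id, dose) pairs from one sub-simulation's output."""
    for line in lines:
        voxel_id, dose = line.split()
        yield voxel_id, float(dose)

def reducer(pairs):
    """Sum the partial doses of all sub-simulations for each voxel."""
    totals = defaultdict(float)
    for voxel_id, dose in pairs:
        totals[voxel_id] += dose
    return totals

if __name__ == "__main__":
    # Standalone demo: in Hadoop Streaming the mapper and reducer would run
    # as separate processes connected by the shuffle phase.
    sub_results = ["0 0.012", "1 0.034", "0 0.011", "2 0.005", "1 0.030"]
    for voxel, total in sorted(reducer(mapper(sub_results)).items()):
        sys.stdout.write(f"{voxel}\t{total:.3f}\n")
```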

  1. Significance testing in ridge regression for genetic data

    Directory of Open Access Journals (Sweden)

    De Iorio Maria

    2011-09-01

    Full Text Available Abstract Background Technological developments have increased the feasibility of large scale genetic association studies. Densely typed genetic markers are obtained using SNP arrays, next-generation sequencing technologies and imputation. However, SNPs typed using these methods can be highly correlated due to linkage disequilibrium among them, and standard multiple regression techniques fail with these data sets due to their high dimensionality and correlation structure. There has been increasing interest in using penalised regression in the analysis of high dimensional data. Ridge regression is one such penalised regression technique which does not perform variable selection, instead estimating a regression coefficient for each predictor variable. It is therefore desirable to obtain an estimate of the significance of each ridge regression coefficient. Results We develop and evaluate a test of significance for ridge regression coefficients. Using simulation studies, we demonstrate that the performance of the test is comparable to that of a permutation test, with the advantage of a much-reduced computational cost. We introduce the p-value trace, a plot of the negative logarithm of the p-values of ridge regression coefficients with increasing shrinkage parameter, which enables the visualisation of the change in p-value of the regression coefficients with increasing penalisation. We apply the proposed method to a lung cancer case-control data set from EPIC, the European Prospective Investigation into Cancer and Nutrition. Conclusions The proposed test is a useful alternative to a permutation test for the estimation of the significance of ridge regression coefficients, at a much-reduced computational cost. The p-value trace is an informative graphical tool for evaluating the results of a test of significance of ridge regression coefficients as the shrinkage parameter increases, and the proposed test makes its production computationally feasible.
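
    The abstract above introduces a significance test and a p-value trace for ridge regression coefficients over increasing shrinkage. The Python sketch below computes such a trace using one common Wald-type approximation for the variance of ridge coefficients; the paper's actual test statistic may differ, and the simulated data, shrinkage grid, and formulas used here are illustrative assumptions.

```python
# A hedged sketch of a p-value trace for ridge regression coefficients, in the
# spirit of the method above. Significance here uses a Wald-type approximation
# with Var(beta_ridge) ~ sigma^2 * A X'X A, where A = (X'X + k I)^-1; the test
# developed in the paper may differ in its details. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, p = 300, 20
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)        # two correlated "SNPs"
y = 0.8 * X[:, 0] + rng.normal(size=n)

def ridge_pvalues(X, y, k):
    A = np.linalg.inv(X.T @ X + k * np.eye(X.shape[1]))
    beta = A @ X.T @ y
    resid = y - X @ beta
    df = X.shape[0] - np.trace(X @ A @ X.T)          # effective residual df
    sigma2 = resid @ resid / df
    var_beta = sigma2 * A @ X.T @ X @ A
    z = beta / np.sqrt(np.diag(var_beta))
    return 2 * stats.norm.sf(np.abs(z))

for k in [0.1, 1.0, 10.0, 100.0]:                    # shrinkage grid for the trace
    pvals = ridge_pvalues(X, y, k)
    print(f"k={k:6.1f}  -log10 p (first 3 predictors): "
          + " ".join(f"{-np.log10(pv):.2f}" for pv in pvals[:3]))
```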

  2. Reduce in Variation and Improve Efficiency of Target Volume Delineation by a Computer-Assisted System Using a Deformable Image Registration Approach

    International Nuclear Information System (INIS)

    Chao, K.S. Clifford; Bhide, Shreerang FRCR; Chen, Hansen; Asper, Joshua PAC; Bush, Steven; Franklin, Gregg; Kavadi, Vivek; Liengswangwong, Vichaivood; Gordon, William; Raben, Adam; Strasser, Jon; Koprowski, Christopher; Frank, Steven; Chronowski, Gregory; Ahamad, Anesa; Malyapa, Robert; Zhang Lifei; Dong Lei

    2007-01-01

    Purpose: To determine whether a computer-assisted target volume delineation (CAT) system using a deformable image registration approach can reduce the variation of target delineation among physicians with different head and neck (HN) IMRT experiences and reduce the time spent on the contouring process. Materials and Methods: We developed a deformable image registration method for mapping contours from a template case to a patient case with a similar tumor manifestation but different body configuration. Eight radiation oncologists with varying levels of clinical experience in HN IMRT performed target delineation on two HN cases, one with base-of-tongue (BOT) cancer and another with nasopharyngeal cancer (NPC), by first contouring from scratch and then by modifying the contours deformed by the CAT system. The gross target volumes were provided. Regions of interest for comparison included the clinical target volumes (CTVs) and normal organs. The volumetric and geometric variation of these regions of interest and the time spent on contouring were analyzed. Results: We found that the variation in delineating CTVs from scratch among the physicians was significant, and that using the CAT system reduced volumetric variation and improved geometric consistency in both BOT and NPC cases. The average timesaving when using the CAT system was 26% to 29% for more experienced physicians and 38% to 47% for the less experienced ones. Conclusions: A computer-assisted target volume delineation approach, using a deformable image-registration method with template contours, was able to reduce the variation among physicians with different experiences in HN IMRT while saving contouring time

  3. Innovative Phase Change Approach for Significant Energy Savings

    Science.gov (United States)

    2016-09-01

    related to the production, use, transmission, storage, control, or conservation of energy that will (A) reduce the need for additional energy supplies ... Conditions set for operation were: (a) the computer with the broadband wireless card is to be used for data collection, transmission and ... FINAL REPORT: Innovative Phase Change Approach for Significant Energy Savings, ESTCP Project EW-201138, September 2016, Dr. Aly H Shaaban, Applied ...

  4. A hybrid solution using computational prediction and measured data to accurately determine process corrections with reduced overlay sampling

    Science.gov (United States)

    Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen

    2017-03-01

    Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of said system as new lots have to wait until the previous lot is measured. One solution is using a less dense overlay sampling scheme and employing computationally up-sampled data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system shown in Fig. 1 that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.
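
    The hybrid approach above combines a computationally up-sampled global fingerprint with measured data so that local overlay errors are not lost. The Python sketch below illustrates one simple way to realise that idea, adding interpolated measurement residuals back onto a dense model grid; the model form, grid, and data are illustrative assumptions rather than the paper's implementation.

```python
# A hedged sketch of the hybrid idea described above: a smooth, computationally
# up-sampled overlay model is combined with sparse measurements by
# interpolating the measurement residuals onto the dense grid, so local errors
# the global model misses are partially recovered. Everything here is an
# illustrative assumption, not the paper's implementation.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)

def global_model(x, y):
    """Low-order polynomial 'fingerprint' model over the wafer (a.u.)."""
    return 0.5 * x + 0.3 * y - 0.2 * x * y

# Sparse measurement sites, with a localised error the global model misses
meas_xy = rng.uniform(-1, 1, size=(60, 2))
local_bump = 0.8 * np.exp(-((meas_xy[:, 0] - 0.5) ** 2 +
                            (meas_xy[:, 1] - 0.5) ** 2) / 0.05)
measured = global_model(meas_xy[:, 0], meas_xy[:, 1]) + local_bump

# Dense evaluation grid (the "up-sampled" fingerprint)
gx, gy = np.meshgrid(np.linspace(-1, 1, 41), np.linspace(-1, 1, 41))
model_dense = global_model(gx, gy)

# Hybrid: add interpolated residuals (measured minus model) to the dense model
residuals = measured - global_model(meas_xy[:, 0], meas_xy[:, 1])
resid_dense = griddata(meas_xy, residuals, (gx, gy), method="linear", fill_value=0.0)
hybrid = model_dense + resid_dense

print(f"max |local residual| captured by hybrid grid: {np.max(np.abs(resid_dense)):.3f}")
```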

  5. Reducing the throughput time of the diagnostic track involving CT scanning with computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Lent, Wineke A.M. van, E-mail: w.v.lent@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); University of Twente, IGS Institute for Innovation and Governance Studies, Department of Health Technology Services Research (HTSR), Enschede (Netherlands); Deetman, Joost W., E-mail: j.deetman@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Teertstra, H. Jelle, E-mail: h.teertstra@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Muller, Sara H., E-mail: s.muller@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); Hans, Erwin W., E-mail: e.w.hans@utwente.nl [University of Twente, School of Management and Governance, Dept. of Industrial Engineering and Business Intelligence Systems, Enschede (Netherlands); Harten, Wim H. van, E-mail: w.v.harten@nki.nl [Netherlands Cancer Institute - Antoni van Leeuwenhoek Hospital (NKI-AVL), P.O. Box 90203, 1006 BE Amsterdam (Netherlands); University of Twente, IGS Institute for Innovation and Governance Studies, Department of Health Technology Services Research (HTSR), Enschede (Netherlands)

    2012-11-15

    Introduction: To examine the use of computer simulation to reduce the time between the CT request and the consult in which the CT report is discussed (diagnostic track) while restricting idle time and overtime. Methods: After a pre-implementation analysis in our case study hospital, three scenarios were evaluated by computer simulation on access time, overtime and idle time of the CT; after implementation these same aspects were evaluated again. Effects on throughput time were measured for outpatient short-term and urgent requests only. Conclusion: The pre-implementation analysis showed an average CT access time of 9.8 operating days and an average diagnostic track of 14.5 operating days. Based on the outcomes of the simulation, management changed the capacity for the different patient groups to facilitate a diagnostic track of 10 operating days, with a CT access time of 7 days. After the implementation of changes, the average diagnostic track duration was 12.6 days with an average CT access time of 7.3 days. The fraction of patients with a total throughput time within 10 days increased from 29% to 44% while the utilization remained equal at 82%, the idle time increased by 11% and the overtime decreased by 82%. The fraction of patients that completed the diagnostic track within 10 days improved by 52%. Computer simulation proved useful for studying the effects of proposed scenarios in radiology management. Besides the tangible effects, the simulation increased the awareness that optimizing capacity allocation can reduce access times.

  6. Reducing the throughput time of the diagnostic track involving CT scanning with computer simulation

    International Nuclear Information System (INIS)

    Lent, Wineke A.M. van; Deetman, Joost W.; Teertstra, H. Jelle; Muller, Sara H.; Hans, Erwin W.; Harten, Wim H. van

    2012-01-01

    Introduction: To examine the use of computer simulation to reduce the time between the CT request and the consult in which the CT report is discussed (diagnostic track) while restricting idle time and overtime. Methods: After a pre-implementation analysis in our case study hospital, three scenarios were evaluated by computer simulation on access time, overtime and idle time of the CT; after implementation these same aspects were evaluated again. Effects on throughput time were measured for outpatient short-term and urgent requests only. Conclusion: The pre-implementation analysis showed an average CT access time of 9.8 operating days and an average diagnostic track of 14.5 operating days. Based on the outcomes of the simulation, management changed the capacity for the different patient groups to facilitate a diagnostic track of 10 operating days, with a CT access time of 7 days. After the implementation of changes, the average diagnostic track duration was 12.6 days with an average CT access time of 7.3 days. The fraction of patients with a total throughput time within 10 days increased from 29% to 44% while the utilization remained equal at 82%, the idle time increased by 11% and the overtime decreased by 82%. The fraction of patients that completed the diagnostic track within 10 days improved by 52%. Computer simulation proved useful for studying the effects of proposed scenarios in radiology management. Besides the tangible effects, the simulation increased the awareness that optimizing capacity allocation can reduce access times.

  7. Reduced chemical kinetic mechanisms for hydrocarbon fuels

    International Nuclear Information System (INIS)

    Montgomery, C.J.; Cremer, M.A.; Heap, M.P.; Chen, J-Y.; Westbrook, C.K.; Maurice, L.Q.

    1999-01-01

    Using CARM (Computer Aided Reduction Method), a computer program that automates the mechanism reduction process, a variety of different reduced chemical kinetic mechanisms for ethylene and n-heptane have been generated. The reduced mechanisms have been compared to detailed chemistry calculations in simple homogeneous reactors and experiments. Reduced mechanisms for combustion of ethylene having as few as 10 species were found to give reasonable agreement with detailed chemistry over a range of stoichiometries and showed significant improvement over currently used global mechanisms. The performance of reduced mechanisms derived from a large detailed mechanism for n-heptane was compared to results from a reduced mechanism derived from a smaller semi-empirical mechanism. The semi-empirical mechanism was advantageous as a starting point for reduction for ignition delay, but not for PSR calculations. Reduced mechanisms with as few as 12 species gave excellent results for n-heptane/air PSR calculations but 16-25 or more species are needed to simulate n-heptane ignition delay

  8. Differences in prevalence of self-reported musculoskeletal symptoms among computer and non-computer users in a Nigerian population: a cross-sectional study

    Directory of Open Access Journals (Sweden)

    Ayanniyi O

    2010-08-01

    Full Text Available Abstract Background Literature abounds on the prevalent nature of Self-Reported Musculoskeletal Symptoms (SRMS) among computer users, but studies that actually compared this with non-computer users are meagre, thereby reducing the strength of the evidence. This study compared the prevalence of SRMS between computer and non-computer users and assessed the risk factors associated with SRMS. Methods A total of 472 participants comprising equal numbers of age- and sex-matched computer and non-computer users were assessed for the presence of SRMS. Information concerning musculoskeletal symptoms and discomforts from the neck, shoulders, upper back, elbows, wrists/hands, low back, hips/thighs, knees and ankles/feet was obtained using the Standardized Nordic questionnaire. Results The prevalence of SRMS was significantly higher in the computer users than the non-computer users both over the past 7 days (χ2 = 39.11, p = 0.001) and during the past 12 months (χ2 = 53.56, p = 0.001). The odds of reporting musculoskeletal symptoms were lowest for participants above the age of 40 years (OR = 0.42, 95% CI = 0.31-0.64 over the past 7 days, and OR = 0.61, 95% CI = 0.47-0.77 during the past 12 months) and were also reduced in female participants. Increasing daily hours and accumulated years of computer use and tasks of data processing and designs/graphics were significantly (p ...) associated with SRMS. Conclusion The prevalence of SRMS was significantly higher in the computer users than the non-computer users, and younger age, being male, working longer hours daily, increasing years of computer use, data entry tasks and computer designs/graphics were the significant risk factors for reporting musculoskeletal symptoms among the computer users. Computer use may explain the increase in prevalence of SRMS among the computer users.

  9. Effective computation of stochastic protein kinetic equation by reducing stiffness via variable transformation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Lijin, E-mail: ljwang@ucas.ac.cn [School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049 (China)

    2016-06-08

    The stochastic protein kinetic equations can be stiff for certain parameters, which makes their numerical simulation rely on very small time step sizes, resulting in large computational cost and accumulated round-off errors. For such situations, we provide a method of reducing the stiffness of the stochastic protein kinetic equation by means of a kind of variable transformation. Theoretical and numerical analysis show the effectiveness of this method. Its generalization to a more general class of stochastic differential equation models is also discussed.

  10. Reduced iodinated contrast media for abdominal imaging by dual-layer spectral detector computed tomography for patients with kidney disease

    Directory of Open Access Journals (Sweden)

    Hirokazu Saito, MD

    2018-04-01

    Full Text Available Contrast-enhanced computed tomography using iodinated contrast media is useful for the diagnosis of gastrointestinal diseases. However, contrast-induced nephropathy remains problematic for patients with kidney disease. Although current guidelines recommend the use of the minimal dose of contrast media necessary to obtain adequate images for diagnosis, obtaining adequate images with sufficient contrast enhancement is difficult with conventional computed tomography using reduced contrast media. Dual-layer spectral detector computed tomography enables the simultaneous acquisition of low- and high-energy data and the retrospective reconstruction of virtual monochromatic images ranging from 40 to 200 keV. Low-energy virtual monochromatic images can enhance the contrast of images, thereby facilitating reduced doses of contrast media. In case 1, abdominal computed tomography angiography at 50 keV using 40% of the conventional dose of contrast media revealed the artery that was the source of diverticular bleeding in the ascending colon. In case 2, ischemia of the transverse colon was diagnosed by contrast-enhanced computed tomography and iodine-selective imaging using 40% of the conventional dose of contrast media. In case 3, advanced esophagogastric junctional cancer was staged and preoperative abdominal computed tomography angiography could be obtained with 30% of the conventional dose of contrast media. However, the texture of virtual monochromatic images may be a limitation at low energy. Keywords: Virtual monochromatic images, Contrast-induced nephropathy

  11. Nano-CL-20/HMX Cocrystal Explosive for Significantly Reduced Mechanical Sensitivity

    Directory of Open Access Journals (Sweden)

    Chongwei An

    2017-01-01

    Full Text Available Spray drying method was used to prepare cocrystals of hexanitrohexaazaisowurtzitane (CL-20) and cyclotetramethylene tetranitramine (HMX). Raw materials and cocrystals were characterized using scanning electron microscopy, X-ray diffraction, differential scanning calorimetry, Raman spectroscopy, and Fourier transform infrared spectroscopy. Impact and friction sensitivity of the cocrystals were tested and analyzed. Results show that, after preparation by the spray drying method, microparticles were spherical in shape and 0.5–5 µm in size. Particles formed aggregates of numerous tiny plate-like cocrystals, with the CL-20/HMX cocrystals having thicknesses below 100 nm. Cocrystals were formed by C–H⋯O bonding between –NO2 (CL-20) and –CH2– (HMX). The nanococrystal explosive exhibited a drop height of 47.3 cm, and friction testing demonstrated an explosion probability of 64%. Compared with raw HMX, the cocrystals displayed significantly reduced mechanical sensitivity.

  12. Significance of triplane computed tomography in otolaryngology

    International Nuclear Information System (INIS)

    Taiji, Hidenobu; Namiki, Hideo; Kano, Shigeru; Hojoh, Yoshio

    1985-01-01

    The authors obtained direct sagittal CT scans of the head using a new method for positioning the head of the patient in a sitting position. Direct sagittal scans are more useful than computed rearranged scans because of their better spatial and density resolution. The triplane CT (axial, coronal, and sagittal CT) greatly improves three-dimensional recognition of the intracranial and facial structures and the extent of the lesion. A series of patients with various nasal and oropharyngeal tumors was examined with the triplane CT. The advantages of direct sagittal scans are (1) the recognition of localization and extension of the lesion, (2) the evaluation of the extent of deep facial and nasopharyngeal tumors, especially in the intracranial and intraorbital regions, and (3) the more accurate determination of staging of maxillary cancer. (author)

  13. Quantized Average Consensus on Gossip Digraphs with Reduced Computation

    Science.gov (United States)

    Cai, Kai; Ishii, Hideaki

    The authors have recently proposed a class of randomized gossip algorithms which solve the distributed averaging problem on directed graphs, with the constraint that each node has an integer-valued state. The essence of this algorithm is to maintain local records, called “surplus”, of individual state updates, thereby achieving quantized average consensus even though the state sum of all nodes is not preserved. In this paper we study a modified version of this algorithm, whose feature is primarily in reducing both computation and communication effort. Concretely, each node needs to update fewer local variables, and can transmit surplus by requiring only one bit. Under this modified algorithm we prove that reaching the average is ensured for arbitrary strongly connected graphs. The condition of arbitrary strong connection is less restrictive than those known in the literature for either real-valued or quantized states; in particular, it does not require the special structure on the network called balanced. Finally, we provide numerical examples to illustrate the convergence result, with emphasis on convergence time analysis.
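
    The paper above treats quantized average consensus on directed gossip graphs with a surplus mechanism. For contrast, the Python sketch below implements the simpler classical quantized gossip on an undirected graph, where each activated pair replaces its integer states with the floor and ceiling of their average so the state sum is preserved; it is not the authors' surplus-based scheme, which additionally handles directed, non-balanced graphs.

```python
# A sketch of classical sum-preserving quantized gossip on an undirected
# graph, shown for contrast with the surplus-based directed algorithm
# summarised above. States stay integer-valued and converge to within one
# quantization level of the average. The graph and states are illustrative.
import math
import random

def quantized_gossip(states, edges, steps=10_000, seed=0):
    rng = random.Random(seed)
    x = list(states)
    for _ in range(steps):
        i, j = rng.choice(edges)               # activate a random undirected edge
        s = x[i] + x[j]
        x[i], x[j] = math.floor(s / 2), math.ceil(s / 2)   # sum-preserving update
    return x

states = [13, 2, 7, 0, 9]                      # integer-valued node states
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # ring graph
final = quantized_gossip(states, edges)
print(final, "average =", sum(states) / len(states))
```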

  14. Control Synthesis of Discrete-Time T-S Fuzzy Systems: Reducing the Conservatism Whilst Alleviating the Computational Burden.

    Science.gov (United States)

    Xie, Xiangpeng; Yue, Dong; Zhang, Huaguang; Peng, Chen

    2017-09-01

    The augmented multi-indexed matrix approach acts as a powerful tool in reducing the conservatism of control synthesis of discrete-time Takagi-Sugeno fuzzy systems. However, its computational burden is sometimes too heavy as a tradeoff. Nowadays, reducing the conservatism whilst alleviating the computational burden becomes an ideal but very challenging problem. This paper is toward finding an efficient way to achieve one of satisfactory answers. Different from the augmented multi-indexed matrix approach in the literature, we aim to design a more efficient slack variable approach under a general framework of homogenous matrix polynomials. Thanks to the introduction of a new extended representation for homogeneous matrix polynomials, related matrices with the same coefficient are collected together into one sole set and thus those redundant terms of the augmented multi-indexed matrix approach can be removed, i.e., the computational burden can be alleviated in this paper. More importantly, due to the fact that more useful information is involved into control design, the conservatism of the proposed approach as well is less than the counterpart of the augmented multi-indexed matrix approach. Finally, numerical experiments are given to show the effectiveness of the proposed approach.

  15. Computer-based training (CBT) intervention reduces workplace violence and harassment for homecare workers.

    Science.gov (United States)

    Glass, Nancy; Hanson, Ginger C; Anger, W Kent; Laharnar, Naima; Campbell, Jacquelyn C; Weinstein, Marc; Perrin, Nancy

    2017-07-01

    The study examines the effectiveness of a workplace violence and harassment prevention and response program with female homecare workers in a consumer-driven model of care. Homecare workers were randomized to either computer-based training (CBT only) or computer-based training with homecare worker peer facilitation (CBT + peer). Participants completed measures on confidence, incidents of violence and harassment, and health and work outcomes at baseline and 3 and 6 months post-baseline. Homecare workers reported improved confidence to prevent and respond to workplace violence and harassment and a reduction in incidents of workplace violence and harassment in both groups at 6-month follow-up. A decrease in negative health and work outcomes associated with violence and harassment was not reported in either group. CBT alone or with trained peer facilitation with homecare workers can increase confidence and reduce incidents of workplace violence and harassment in a consumer-driven model of care. © 2017 Wiley Periodicals, Inc.

  16. REDUCED DATA FOR CURVE MODELING – APPLICATIONS IN GRAPHICS, COMPUTER VISION AND PHYSICS

    Directory of Open Access Journals (Sweden)

    Małgorzata Janik

    2013-06-01

    Full Text Available In this paper we consider the problem of modeling curves in Rn via interpolation without a priori specified interpolation knots. We discuss two approaches to estimate the missing knots for non-parametric data (i.e. a collection of points). The first approach (uniform evaluation) is based on a blind guess in which knots are chosen uniformly. The second approach (cumulative chord parameterization) incorporates the geometry of the distribution of data points. More precisely, the difference between consecutive knots is set equal to the Euclidean distance between the corresponding data points qi+1 and qi. The second method partially compensates for the loss of the information carried by the reduced data. We also present the application of the above schemes for fitting non-parametric data in computer graphics (light-source motion rendering), in computer vision (image segmentation) and in physics (high-velocity particle trajectory modeling). Though experiments are conducted for points in R2 and R3, the entire method is equally applicable in Rn.
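
    The sketch below contrasts the two knot-selection schemes described above, uniform knots versus cumulative-chord knots, and then interpolates each coordinate with a cubic spline; the sample points and the choice of spline are illustrative assumptions, not the paper's fitting scheme.

```python
# A small sketch of the two knot-selection schemes discussed above for
# non-parametric data: uniform knots versus cumulative-chord knots, followed
# by cubic-spline interpolation of each coordinate. Data points are
# illustrative.
import numpy as np
from scipy.interpolate import CubicSpline

points = np.array([[0.0, 0.0], [1.0, 0.2], [1.2, 1.5], [3.0, 1.7], [4.0, 0.0]])

# Uniform evaluation: knots chosen blindly, ignoring the geometry
t_uniform = np.linspace(0.0, 1.0, len(points))

# Cumulative chord: knot increments equal the Euclidean distances |q_{i+1} - q_i|
chord = np.linalg.norm(np.diff(points, axis=0), axis=1)
t_chord = np.concatenate([[0.0], np.cumsum(chord)])

def interpolate(t, pts, n_samples=100):
    """Fit one cubic spline per coordinate and sample the resulting curve."""
    splines = [CubicSpline(t, pts[:, k]) for k in range(pts.shape[1])]
    ts = np.linspace(t[0], t[-1], n_samples)
    return np.column_stack([s(ts) for s in splines])

curve_uniform = interpolate(t_uniform, points)
curve_chord = interpolate(t_chord, points)
print(curve_chord[:3])       # first few samples of the cumulative-chord curve
```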

  17. Head multidetector computed tomography: emergency medicine physicians overestimate the pretest probability and legal risk of significant findings.

    Science.gov (United States)

    Baskerville, Jerry Ray; Herrick, John

    2012-02-01

    This study focuses on clinically assigned prospective estimated pretest probability and pretest perception of legal risk as independent variables in the ordering of multidetector computed tomographic (MDCT) head scans. Our primary aim is to measure the association between pretest probability of a significant finding and pretest perception of legal risk. Secondarily, we measure the percentage of MDCT scans that physicians would not order if there was no legal risk. This study is a prospective, cross-sectional, descriptive analysis of patients 18 years and older for whom emergency medicine physicians ordered a head MDCT. We collected a sample of 138 patients subjected to head MDCT scans. The prevalence of a significant finding in our population was 6%, yet the pretest probability expectation of a significant finding was 33%. The legal risk presumed was even more dramatic at 54%. These data support the hypothesis that physicians presume the legal risk to be significantly higher than the risk of a significant finding. A total of 21% or 15% patients (95% confidence interval, ±5.9%) would not have been subjected to MDCT if there was no legal risk. Physicians overestimated the probability that the computed tomographic scan would yield a significant result and indicated an even greater perceived medicolegal risk if the scan was not obtained. Physician test-ordering behavior is complex, and our study queries pertinent aspects of MDCT testing. The magnification of legal risk vs the pretest probability of a significant finding is demonstrated. Physicians significantly overestimated pretest probability of a significant finding on head MDCT scans and presumed legal risk. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. The Effort to Reduce a Muscle Fatigue Through Gymnastics Relaxation and Ergonomic Approach for Computer Users in Central Building State University of Medan

    Science.gov (United States)

    Gultom, Syamsul; Darma Sitepu, Indra; Hasibuan, Nurman

    2018-03-01

    Fatigue due to long and continuous computer use can lead to decreased performance and work motivation. The specific targets of the first phase of this research were achieved: (1) identifying complaints among workers using computers, using the Bourdon Wiersma test kit; (2) drafting appropriate relaxation exercises and work postures as a solution to reduce muscle fatigue in computer-based workers. The study uses a research-and-development method, which aims to produce new products or refine existing ones. The final products are a prototype back-holder, a monitor filter, a relaxation exercise routine, and a manual explaining how to perform the exercises while in front of the computer, to lower fatigue levels for computer users in Unimed's Administration Center. In the first phase, observations and interviews were conducted to identify the level of fatigue among employees who use computers at Unimed's Administration Center, using the Bourdon Wiersma test, with the following results: (1) the average speed of respondents in BAUK, BAAK and BAPSI after work, with an interpreted speed value of 8.4 (WS 13), was in the "good enough" category; (2) the average accuracy of respondents in BAUK, BAAK and BAPSI after work, with an interpreted accuracy value of 5.5 (WS 8), was in the "doubtful" category, showing that computer users at the Unimed Administration Center experienced significant fatigue; (3) the average consistency of the fatigue measurements for computer users in Unimed's Administration Center after work, with an interpreted consistency value of 5.5 (WS 8), was in the "doubtful" category, meaning that computer users in the Unimed Administration Center suffered severe fatigue. In phase II, based on the results of the first phase of this research, the researcher offers

  19. The significance of computed tomography in optic neuropathy

    International Nuclear Information System (INIS)

    Awai, Tsugumi; Yasutake, Hirohide; Ono, Yoshiko; Kumagai, Kazuhisa; Kairada, Kensuke

    1981-01-01

    Computed tomography (CT scan) has become one of the important and useful modes of examination for ophthalmological and neuro-ophthalmological disorders. CT scanning (EMI scan) was performed on 21 patients with optic neuropathy in order to detect the cause. Of these 21 patients, the CT scan was abnormal in six. These six patients were verified, histopathologically, as having chromophobe pituitary adenoma, craniopharyngioma, plasmocytoma arising from the sphenoidal sinus, optic nerve glioma, and giant aneurysm of the anterior communicating artery. The practical diagnostic value of the CT scan for optic neuropathy is discussed. (author)

  20. Shadow Replication: An Energy-Aware, Fault-Tolerant Computational Model for Green Cloud Computing

    Directory of Open Access Journals (Sweden)

    Xiaolong Cui

    2014-08-01

    Full Text Available As the demand for cloud computing continues to increase, cloud service providers face the daunting challenge of meeting the negotiated SLA, in terms of reliability and timely performance, while achieving cost-effectiveness. This challenge is compounded by the increasing likelihood of failure in large-scale clouds and the rising impact of energy consumption and CO2 emission on the environment. This paper proposes Shadow Replication, a novel fault-tolerance model for cloud computing, which seamlessly addresses failure at scale, while minimizing energy consumption and reducing its impact on the environment. The basic tenet of the model is to associate a suite of shadow processes that execute concurrently with the main process, but initially at a much reduced execution speed, to overcome failures as they occur. Two computationally feasible schemes are proposed to achieve Shadow Replication. A performance evaluation framework is developed to analyze these schemes and compare their performance to traditional replication-based fault tolerance methods, focusing on the inherent tradeoff between fault tolerance, the specified SLA and profit maximization. The results show that Shadow Replication leads to significant energy reduction, and is better suited for compute-intensive execution models, where up to 30% more profit can be achieved due to reduced energy consumption.

  1. The significance of sensory appeal for reduced meat consumption.

    Science.gov (United States)

    Tucker, Corrina A

    2014-10-01

    Reducing meat (over-)consumption as a way to help address environmental deterioration will require a range of strategies, and any such strategies will benefit from understanding how individuals might respond to various meat consumption practices. To investigate how New Zealanders perceive such a range of practices, in this instance in vitro meat, eating nose-to-tail, entomophagy and reducing meat consumption, focus groups involving a total of 69 participants were held around the country. While it is the damaging environmental implications of intensive farming practices and the projected continuation of increasing global consumer demand for meat products that have propelled this research, when asked to consider variations on the conventional meat-centric diet common to many New Zealanders, it was the sensory appeal of the options considered that was deemed most problematic. While an ecological rationale for considering these 'meat' alternatives was recognised and considered important by most, transforming this value into action looks far less promising given the recurrent sensory objections to consuming different protein-based foods or to reducing meat consumption. This article considers the responses of focus group participants in relation to each of the dietary practices outlined, and offers suggestions on ways to encourage a more environmentally viable diet. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Doing Very Big Calculations on Modest Size Computers: Reducing the Cost of Exact Diagonalization Using Singular Value Decomposition

    International Nuclear Information System (INIS)

    Weinstein, M.

    2012-01-01

    I will talk about a new way of implementing Lanczos and contraction algorithms to diagonalize lattice Hamiltonians that dramatically reduces the memory required to do the computation, without restricting to variational ansatzes. (author)
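
    The abstract gives no implementation details, but the kernel of any such scheme is the Lanczos iteration itself, which needs only a matrix-vector product and a few vectors of storage. The following is a minimal sketch (without the reorthogonalization a production code for large lattice Hamiltonians would add); the random sparse symmetric matrix is merely a stand-in for a lattice Hamiltonian.

        import numpy as np
        from scipy.sparse import random as sparse_random

        def lanczos_lowest(matvec, n, num_steps=60, seed=0):
            """Estimate the lowest eigenvalue using only matrix-vector products."""
            rng = np.random.default_rng(seed)
            v = rng.standard_normal(n)
            v /= np.linalg.norm(v)
            v_prev = np.zeros(n)
            beta = 0.0
            alphas, betas = [], []
            for _ in range(num_steps):
                w = matvec(v) - beta * v_prev
                alpha = v @ w
                w -= alpha * v
                beta = np.linalg.norm(w)
                alphas.append(alpha)
                betas.append(beta)
                if beta < 1e-12:        # invariant subspace reached
                    break
                v_prev, v = v, w / beta
            # The lowest eigenvalue of the small tridiagonal matrix approximates the target.
            T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
            return np.linalg.eigvalsh(T)[0]

        n = 2000
        A = sparse_random(n, n, density=1e-3, random_state=1)
        H = (A + A.T) * 0.5                      # stand-in sparse symmetric "Hamiltonian"
        print(lanczos_lowest(lambda x: H @ x, n))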

  3. Prognostic significance of tumor size of small lung adenocarcinomas evaluated with mediastinal window settings on computed tomography.

    Directory of Open Access Journals (Sweden)

    Yukinori Sakao

    BACKGROUND: We aimed to clarify that the size of the lung adenocarcinoma evaluated using the mediastinal window on computed tomography is an important and useful modality for predicting invasiveness, lymph node metastasis and prognosis in small adenocarcinoma. METHODS: We evaluated 176 patients with small lung adenocarcinomas (diameter, 1-3 cm) who underwent standard surgical resection. Tumours were examined using computed tomography with thin-section conditions (1.25 mm thick on high-resolution computed tomography), with tumour dimensions evaluated under two settings: lung window and mediastinal window. We also determined the patient age, gender, preoperative nodal status, tumour size, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and pathological status (lymphatic vessel, vascular vessel or pleural invasion). Recurrence-free survival was used for prognosis. RESULTS: Lung window, mediastinal window, tumour disappearance ratio and preoperative nodal status were significant predictive factors for recurrence-free survival in univariate analyses. Areas under the receiver operator curves for recurrence were 0.76, 0.73 and 0.65 for mediastinal window, tumour disappearance ratio and lung window, respectively. Lung window, mediastinal window, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and preoperative nodal status were significant predictive factors for lymph node metastasis in univariate analyses; areas under the receiver operator curves were 0.61, 0.76, 0.72 and 0.66, for lung window, mediastinal window, tumour disappearance ratio and preoperative serum carcinoembryonic antigen levels, respectively. Lung window, mediastinal window, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and preoperative nodal status were significant factors for lymphatic vessel, vascular vessel or pleural invasion in univariate analyses; areas under the receiver operator curves were 0

  4. Prognostic Significance of Tumor Size of Small Lung Adenocarcinomas Evaluated with Mediastinal Window Settings on Computed Tomography

    Science.gov (United States)

    Sakao, Yukinori; Kuroda, Hiroaki; Mun, Mingyon; Uehara, Hirofumi; Motoi, Noriko; Ishikawa, Yuichi; Nakagawa, Ken; Okumura, Sakae

    2014-01-01

    Background We aimed to clarify that the size of the lung adenocarcinoma evaluated using mediastinal window on computed tomography is an important and useful modality for predicting invasiveness, lymph node metastasis and prognosis in small adenocarcinoma. Methods We evaluated 176 patients with small lung adenocarcinomas (diameter, 1–3 cm) who underwent standard surgical resection. Tumours were examined using computed tomography with thin section conditions (1.25 mm thick on high-resolution computed tomography) with tumour dimensions evaluated under two settings: lung window and mediastinal window. We also determined the patient age, gender, preoperative nodal status, tumour size, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and pathological status (lymphatic vessel, vascular vessel or pleural invasion). Recurrence-free survival was used for prognosis. Results Lung window, mediastinal window, tumour disappearance ratio and preoperative nodal status were significant predictive factors for recurrence-free survival in univariate analyses. Areas under the receiver operator curves for recurrence were 0.76, 0.73 and 0.65 for mediastinal window, tumour disappearance ratio and lung window, respectively. Lung window, mediastinal window, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and preoperative nodal status were significant predictive factors for lymph node metastasis in univariate analyses; areas under the receiver operator curves were 0.61, 0.76, 0.72 and 0.66, for lung window, mediastinal window, tumour disappearance ratio and preoperative serum carcinoembryonic antigen levels, respectively. Lung window, mediastinal window, tumour disappearance ratio, preoperative serum carcinoembryonic antigen levels and preoperative nodal status were significant factors for lymphatic vessel, vascular vessel or pleural invasion in univariate analyses; areas under the receiver operator curves were 0.60, 0.81, 0

  5. Significance of Computed Tomography in the Diagnosis of Cerebrovascular Accidents

    Directory of Open Access Journals (Sweden)

    Sumnima Acharya

    2014-06-01

    Introduction: Cerebrovascular Accident (CVA) is defined as abrupt onset of a neurological deficit that is attributable to a focal vascular cause. CT scan is a widely available, affordable, non-invasive and relatively accurate investigation in patients with stroke and is important to identify stroke pathology and exclude mimics. The aim of this study is to establish the diagnostic significance of computed tomography in cerebrovascular accident and to differentiate between cerebral infarction and cerebral haemorrhage with CT for better management of CVA. Methods: A one-year observational cross-sectional study was conducted in 100 patients who presented at the department of radiodiagnosis from the emergency department or ward within the one-year study period with the clinical diagnosis of stroke, and had a brain CT scan done within one to fourteen days of onset. Results: A total of 100 patients were studied; 66 were male and 34 were female, with a male/female ratio of 1.9:1. The maximum number of cases (39%) was in the age group of 61-80 years. Among the 100 patients, 55 cases were clinically diagnosed as hemorrhagic stroke and 45 cases were clinically diagnosed with an infarct. Out of the 55 hemorrhagic cases, two cases were diagnosed as both hemorrhage and infarct by CT scan, one case had normal CT scan findings and one had subdural haemorrhage. These four cases were excluded while comparing the clinical diagnosis with the CT scan finding. Among 51 clinically diagnosed cases of hemorrhagic stroke, 32 (62.7%) cases were proved by CT scan as hemorrhagic stroke, and among clinically diagnosed cases of infarct, 39 (86.7%) cases were proved by CT scan as infarct, which is statistically significant (p < 0.001). A significant agreement between clinical and CT diagnosis was observed, as indicated by a kappa value of 0.49. Sensitivity, specificity, positive predictive value and negative predictive value of clinical findings as compared to CT in diagnosing hemorrhage were 84.2%, 67.2%, 62.8% and 86

  6. Significance of frontal cortical atrophy in Parkinson's disease: computed tomographic study

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kyung Sang; Suh, Jung Ho; Chung, Tae Sub; Kim, Dong Ik [College of Medicine, Yonsei University, Seoul (Korea, Republic of)

    1987-10-15

    Fifty-five patients with Parkinson's disease were evaluated clinically and with brain computed tomography (CT) in order to determine the incidence of frontal cortical and subcortical atrophy. Twenty cases of an age-matched healthy control group were also scanned. The CT criteria of frontal cortical atrophy used in this study were the maximum width of the frontal hemispheric cortical sulci and the width of the anterior interhemispheric fissure between the frontal lobes, compared with the maximum width of the hemispheric cortical sulci excluding the frontal lobes. The criteria of frontal subcortical atrophy were the bifrontal index, bicaudate index and Evans index. The results are as follows: 1. Cortical atrophic changes in Parkinson's disease were more prominent in the frontal lobe than in cortical atrophy from other causes. 2. Frontal cortical and subcortical atrophic changes were also more prominent in Parkinson's disease than in the age-matched control group. 3. Subcortical atrophic changes in the frontal lobe were always associated with cortical atrophic changes. 4. Changes of the basal ganglia were hardly seen in Parkinson's disease. 5. Cortical atrophic changes in the frontal lobe must be regarded as one of the significant findings in Parkinson's disease.

  7. Significance of frontal cortical atrophy in Parkinson's disease: computed tomographic study

    International Nuclear Information System (INIS)

    Lee, Kyung Sang; Suh, Jung Ho; Chung, Tae Sub; Kim, Dong Ik

    1987-01-01

    Fifty-five patients with Parkinson's disease were evaluated clinically and with brain computed tomography (CT) in order to determine the incidence of frontal cortical and subcortical atrophy. Twenty cases of an age-matched healthy control group were also scanned. The CT criteria of frontal cortical atrophy used in this study were the maximum width of the frontal hemispheric cortical sulci and the width of the anterior interhemispheric fissure between the frontal lobes, compared with the maximum width of the hemispheric cortical sulci excluding the frontal lobes. The criteria of frontal subcortical atrophy were the bifrontal index, bicaudate index and Evans index. The results are as follows: 1. Cortical atrophic changes in Parkinson's disease were more prominent in the frontal lobe than in cortical atrophy from other causes. 2. Frontal cortical and subcortical atrophic changes were also more prominent in Parkinson's disease than in the age-matched control group. 3. Subcortical atrophic changes in the frontal lobe were always associated with cortical atrophic changes. 4. Changes of the basal ganglia were hardly seen in Parkinson's disease. 5. Cortical atrophic changes in the frontal lobe must be regarded as one of the significant findings in Parkinson's disease.

  8. Sensitive Data Protection Based on Intrusion Tolerance in Cloud Computing

    OpenAIRE

    Jingyu Wang; xuefeng Zheng; Dengliang Luo

    2011-01-01

    Service integration and on-demand supply arising from cloud computing can significantly improve the utilization of computing resources and reduce the power consumption per service, and can effectively avoid errors in computing resources. However, cloud computing still faces the problem of intrusion tolerance of the cloud computing platform and of the sensitive data of the new enterprise data center. In order to address the problem of intrusion tolerance of cloud computing platform and sensitive data in...

  9. Parallel algorithms and architectures for computational structural mechanics

    Science.gov (United States)

    Patrick, Merrell; Ma, Shing; Mahajan, Umesh

    1989-01-01

    The determination of the fundamental (lowest) natural vibration frequencies and associated mode shapes is a key step used to uncover and correct potential failures or problem areas in most complex structures. However, the computation time taken by finite element codes to evaluate these natural frequencies is significant, often the most computationally intensive part of structural analysis calculations. There is continuing need to reduce this computation time. This study addresses this need by developing methods for parallel computation.

  10. Effects of a Web-Based Computer-Tailored Game to Reduce Binge Drinking Among Dutch Adolescents: A Cluster Randomized Controlled Trial.

    Science.gov (United States)

    Jander, Astrid; Crutzen, Rik; Mercken, Liesbeth; Candel, Math; de Vries, Hein

    2016-02-03

    Binge drinking among Dutch adolescents is among the highest in Europe. Few interventions so far have focused on adolescents aged 15 to 19 years. Because binge drinking increases significantly during those years, it is important to develop binge drinking prevention programs for this group. Web-based computer-tailored interventions can be an effective tool for reducing this behavior in adolescents. Embedding the computer-tailored intervention in a serious game may make it more attractive to adolescents. The aim was to assess whether a Web-based computer-tailored intervention is effective in reducing binge drinking in Dutch adolescents aged 15 to 19 years. Secondary outcomes were reduction in excessive drinking and overall consumption during the previous week. Personal characteristics associated with program adherence were also investigated. A cluster randomized controlled trial was conducted among 34 Dutch schools. Each school was randomized into either an experimental (n=1622) or a control (n=1027) condition. Baseline assessment took place in January and February 2014. At baseline, demographic variables and alcohol use were assessed. Follow-up assessment of alcohol use took place 4 months later (May and June 2014). After the baseline assessment, participants in the experimental condition started with the intervention consisting of a game about alcohol in which computer-tailored feedback regarding motivational characteristics was embedded. Participants in the control condition only received the baseline questionnaire. Both groups received the 4-month follow-up questionnaire. Effects of the intervention were assessed using logistic regression mixed models analyses for binge and excessive drinking and linear regression mixed models analyses for weekly consumption. Factors associated with intervention adherence in the experimental condition were explored by means of a linear regression model. In total, 2649 adolescents participated in the baseline assessment. At follow

  11. Predictive modeling of liquid-sodium thermal–hydraulics experiments and computations

    International Nuclear Information System (INIS)

    Arslan, Erkan; Cacuci, Dan G.

    2014-01-01

    Highlights: • We applied the predictive modeling method of Cacuci and Ionescu-Bujor (2010). • We assimilated data from sodium flow experiments. • We used computational fluid dynamics simulations of sodium experiments. • The predictive modeling method greatly reduced uncertainties in predicted results. - Abstract: This work applies the predictive modeling procedure formulated by Cacuci and Ionescu-Bujor (2010) to assimilate data from liquid-sodium thermal–hydraulics experiments in order to reduce systematically the uncertainties in the predictions of computational fluid dynamics (CFD) simulations. The predicted CFD-results for the best-estimate model parameters and results describing sodium-flow velocities and temperature distributions are shown to be significantly more precise than the original computations and experiments, in that the predicted uncertainties for the best-estimate results and model parameters are significantly smaller than both the originally computed and the experimental uncertainties

  12. Development of a Computational Steering Framework for High Performance Computing Environments on Blue Gene/P Systems

    KAUST Repository

    Danani, Bob K.

    2012-01-01

    of simulation results in a post-processing step is now transformed into a real-time interactive workflow that significantly reduces development and testing time. Computational steering provides the capability to direct or re-direct the progress of a simulation

  13. A computer-assisted motivational social network intervention to reduce alcohol, drug and HIV risk behaviors among Housing First residents.

    Science.gov (United States)

    Kennedy, David P; Hunter, Sarah B; Chan Osilla, Karen; Maksabedian, Ervant; Golinelli, Daniela; Tucker, Joan S

    2016-03-15

    Individuals transitioning from homelessness to housing face challenges to reducing alcohol, drug and HIV risk behaviors. To aid in this transition, this study developed and will test a computer-assisted intervention that delivers personalized social network feedback by an intervention facilitator trained in motivational interviewing (MI). The intervention goal is to enhance motivation to reduce high risk alcohol and other drug (AOD) use and reduce HIV risk behaviors. In this Stage 1b pilot trial, 60 individuals who are transitioning from homelessness to housing will be randomly assigned to the intervention or control condition. The intervention condition consists of four biweekly social network sessions conducted using MI. AOD use and HIV risk behaviors will be monitored prior to and immediately following the intervention and compared to control participants' behaviors to explore whether the intervention was associated with any systematic changes in AOD use or HIV risk behaviors. Social network health interventions are an innovative approach for reducing future AOD use and HIV risk problems, but little is known about their feasibility, acceptability, and efficacy. The current study develops and pilot-tests a computer-assisted intervention that incorporates social network visualizations and MI techniques to reduce high risk AOD use and HIV behaviors among the formerly homeless. ClinicalTrials.gov: NCT02140359.

  14. Computer Game Play Reduces Intrusive Memories of Experimental Trauma via Reconsolidation-Update Mechanisms.

    Science.gov (United States)

    James, Ella L; Bonsall, Michael B; Hoppitt, Laura; Tunbridge, Elizabeth M; Geddes, John R; Milton, Amy L; Holmes, Emily A

    2015-08-01

    Memory of a traumatic event becomes consolidated within hours. Intrusive memories can then flash back repeatedly into the mind's eye and cause distress. We investigated whether reconsolidation-the process during which memories become malleable when recalled-can be blocked using a cognitive task and whether such an approach can reduce these unbidden intrusions. We predicted that reconsolidation of a reactivated visual memory of experimental trauma could be disrupted by engaging in a visuospatial task that would compete for visual working memory resources. We showed that intrusive memories were virtually abolished by playing the computer game Tetris following a memory-reactivation task 24 hr after initial exposure to experimental trauma. Furthermore, both memory reactivation and playing Tetris were required to reduce subsequent intrusions (Experiment 2), consistent with reconsolidation-update mechanisms. A simple, noninvasive cognitive-task procedure administered after emotional memory has already consolidated (i.e., > 24 hours after exposure to experimental trauma) may prevent the recurrence of intrusive memories of those emotional events. © The Author(s) 2015.

  15. A three-dimensional ground-water-flow model modified to reduce computer-memory requirements and better simulate confining-bed and aquifer pinchouts

    Science.gov (United States)

    Leahy, P.P.

    1982-01-01

    The Trescott computer program for modeling groundwater flow in three dimensions has been modified to (1) treat aquifer and confining bed pinchouts more realistically and (2) reduce the computer memory requirements needed for the input data. Using the original program, simulation of aquifer systems with nonrectangular external boundaries may result in a large number of nodes that are not involved in the numerical solution of the problem, but require computer storage. (USGS)

  16. Computation of 3-D magnetostatic fields using a reduced scalar potential

    International Nuclear Information System (INIS)

    Biro, O.; Preis, K.; Vrisk, G.; Richter, K.R.

    1993-01-01

    The paper presents some improvements to the finite element computation of static magnetic fields in three dimensions using a reduced magnetic scalar potential. New methods are described for obtaining an edge element representation of the rotational part of the magnetic field from a given source current distribution. In the case when the current distribution is not known in advance, a boundary value problem is set up in terms of a current vector potential. An edge element representation of the solution can be directly used in the subsequent magnetostatic calculation. The magnetic field in a D.C. arc furnace is calculated by first determining the current distribution in terms of a current vector potential. A three dimensional problem involving a permanent magnet as well as a coil is solved and the magnetic field in some points is compared with measurement results

  17. Algebraic computing

    International Nuclear Information System (INIS)

    MacCallum, M.A.H.

    1990-01-01

    The implementation of a new computer algebra system is time consuming: designers of general purpose algebra systems usually say it takes about 50 man-years to create a mature and fully functional system. Hence the range of available systems and their capabilities changes little between one general relativity meeting and the next, despite which there have been significant changes in the period since the last report. The introductory remarks aim to give a brief survey of the capabilities of the principal available systems and to highlight one or two trends. A reference to the most recent full survey of computer algebra in relativity and brief descriptions of Maple, REDUCE, SHEEP and other applications are given. (author)

  18. Evaluation of a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography of scaphoid fixation screws

    Energy Technology Data Exchange (ETDEWEB)

    Filli, Lukas; Finkenstaedt, Tim; Andreisek, Gustav; Guggenberger, Roman [University Hospital of Zurich, Department of Diagnostic and Interventional Radiology, Zurich (Switzerland); Marcon, Magda [University Hospital of Zurich, Department of Diagnostic and Interventional Radiology, Zurich (Switzerland); University of Udine, Institute of Diagnostic Radiology, Department of Medical and Biological Sciences, Udine (Italy); Scholz, Bernhard [Imaging and Therapy Division, Siemens AG, Healthcare Sector, Forchheim (Germany); Calcagni, Maurizio [University Hospital of Zurich, Division of Plastic Surgery and Hand Surgery, Zurich (Switzerland)

    2014-12-15

    The aim of this study was to evaluate a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography (FDCT) of scaphoid fixation screws. FDCT has gained interest in imaging small anatomic structures of the appendicular skeleton. Angiographic C-arm systems with flat detectors allow fluoroscopy and FDCT imaging in a one-stop procedure emphasizing their role as an ideal intraoperative imaging tool. However, FDCT imaging can be significantly impaired by artefacts induced by fixation screws. Following ethical board approval, commercially available scaphoid fixation screws were inserted into six cadaveric specimens in order to fix artificially induced scaphoid fractures. FDCT images corrected with the algorithm were compared to uncorrected images both quantitatively and qualitatively by two independent radiologists in terms of artefacts, screw contour, fracture line visibility, bone visibility, and soft tissue definition. Normal distribution of variables was evaluated using the Kolmogorov-Smirnov test. In case of normal distribution, quantitative variables were compared using paired Student's t tests. The Wilcoxon signed-rank test was used for quantitative variables without normal distribution and all qualitative variables. A p value of < 0.05 was considered to indicate statistically significant differences. Metal artefacts were significantly reduced by the correction algorithm (p < 0.001), and the fracture line was more clearly defined (p < 0.01). The inter-observer reliability was "almost perfect" (intra-class correlation coefficient 0.85, p < 0.001). The prototype correction algorithm in FDCT for metal artefacts induced by scaphoid fixation screws may facilitate intra- and postoperative follow-up imaging. (orig.)

  19. Evaluation of a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography of scaphoid fixation screws

    International Nuclear Information System (INIS)

    Filli, Lukas; Finkenstaedt, Tim; Andreisek, Gustav; Guggenberger, Roman; Marcon, Magda; Scholz, Bernhard; Calcagni, Maurizio

    2014-01-01

    The aim of this study was to evaluate a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography (FDCT) of scaphoid fixation screws. FDCT has gained interest in imaging small anatomic structures of the appendicular skeleton. Angiographic C-arm systems with flat detectors allow fluoroscopy and FDCT imaging in a one-stop procedure emphasizing their role as an ideal intraoperative imaging tool. However, FDCT imaging can be significantly impaired by artefacts induced by fixation screws. Following ethical board approval, commercially available scaphoid fixation screws were inserted into six cadaveric specimens in order to fix artificially induced scaphoid fractures. FDCT images corrected with the algorithm were compared to uncorrected images both quantitatively and qualitatively by two independent radiologists in terms of artefacts, screw contour, fracture line visibility, bone visibility, and soft tissue definition. Normal distribution of variables was evaluated using the Kolmogorov-Smirnov test. In case of normal distribution, quantitative variables were compared using paired Student's t tests. The Wilcoxon signed-rank test was used for quantitative variables without normal distribution and all qualitative variables. A p value of < 0.05 was considered to indicate statistically significant differences. Metal artefacts were significantly reduced by the correction algorithm (p < 0.001), and the fracture line was more clearly defined (p < 0.01). The inter-observer reliability was "almost perfect" (intra-class correlation coefficient 0.85, p < 0.001). The prototype correction algorithm in FDCT for metal artefacts induced by scaphoid fixation screws may facilitate intra- and postoperative follow-up imaging. (orig.)

  20. Significance of computed tomography in urology

    International Nuclear Information System (INIS)

    Harada, Takashi

    1981-01-01

    It has been more than five years since computed tomography (CT) was first introduced in this country for practical use; however, the cumulative diagnostic experience in urology has not yet been discussed thoroughly. In the Department of Urology of Kansai Medical University, CT diagnosis was attempted more than 120 times over the past three years, and the instrument employed during this period has advanced from a first-generation type (ACTA 150) to a third-generation one (CT-3W) this year. Seventy of these cases were pelvic lesions, and retroperitoneal surveys were made in the rest. As a result, detection of a space-occupying mass in the kidney, adrenal gland and their surroundings was comparatively easy with this method, but there are several pitfalls that can lead to misunderstanding in the diagnosis of pelvic organs. It seems difficult to obtain reliable results for closely packed viscera with tightly adherent connective tissue in a small space. However, these difficulties can be overcome, for instance, by bladder insufflation with olive oil and by scanning in the prone position. Contrast enhancement by injection of dye also gives more definite results in assessment of the genitourinary tract. Moreover, there is much benefit in the diagnosis of renal parenchymal changes, including lacerating renal trauma that cannot be differentiated by conventional methods. Bolus injection of contrast material also allows CT values obtained from a region of interest on the tomogram to be calculated and fitted to a time-activity curve, as in scintillation scanning. In the forthcoming days, new devices in this field, including emission CT, NMR-CT and others, will open new horizons for an ideal diagnostic facility in urology. (author)

  1. Application of ubiquitous computing in personal health monitoring systems.

    Science.gov (United States)

    Kunze, C; Grossmann, U; Stork, W; Müller-Glaser, K D

    2002-01-01

    One way to significantly reduce the costs of public health systems is the increased use of information technology. The Laboratory for Information Processing Technology (ITIV) at the University of Karlsruhe is developing a personal health monitoring system, which should improve health care and at the same time reduce costs by combining micro-technological smart sensors with personalized, mobile computing systems. In this paper we present how ubiquitous computing theory can be applied in the health-care domain.

  2. Large scale particle simulations in a virtual memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Million, R.; Wagner, J.S.; Tajima, T.

    1983-01-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time. (orig.)

  3. Large-scale particle simulations in a virtual-memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Wagner, J.S.; Tajima, T.; Million, R.

    1982-08-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time
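
    The locality-improving sort described above is straightforward to express: give every particle the index of the grid cell it occupies and reorder all particle arrays by that index, so that particles that are close in space become close in memory (and therefore on the same memory pages). The sketch below is a generic illustration of this idea, not the specific scheme used in the paper.

        import numpy as np

        def sort_particles_by_cell(x, y, attrs, nx, ny, lx, ly):
            """Reorder particle arrays so that spatial neighbours are adjacent in memory."""
            ix = np.clip((x / lx * nx).astype(np.intp), 0, nx - 1)
            iy = np.clip((y / ly * ny).astype(np.intp), 0, ny - 1)
            cell = ix * ny + iy                       # row-major cell index
            order = np.argsort(cell, kind="stable")   # stable sort keeps prior order within a cell
            return x[order], y[order], [a[order] for a in attrs]

        # Example: one million particles in a 64 x 64 cell domain of unit size.
        rng = np.random.default_rng(0)
        x, y = rng.random(1_000_000), rng.random(1_000_000)
        vx, vy = rng.standard_normal(1_000_000), rng.standard_normal(1_000_000)
        x, y, (vx, vy) = sort_particles_by_cell(x, y, (vx, vy), 64, 64, 1.0, 1.0)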

  4. A Randomized Controlled Trial to Compare Computer-assisted Motivational Intervention with Didactic Educational Counseling to Reduce Unprotected Sex in Female Adolescents.

    Science.gov (United States)

    Gold, Melanie A; Tzilos, Golfo K; Stein, L A R; Anderson, Bradley J; Stein, Michael D; Ryan, Christopher M; Zuckoff, Allan; DiClemente, Carlo

    2016-02-01

    To examine a computer-assisted, counselor-guided motivational intervention (CAMI) aimed at reducing the risk of unprotected sexual intercourse. DESIGN, SETTING, PARTICIPANTS, INTERVENTIONS, AND MAIN OUTCOME MEASURES: We conducted a 9-month, longitudinal randomized controlled trial with a multisite recruitment strategy including clinic, university, and social referrals, and compared the CAMI with didactic educational counseling in 572 female adolescents with a mean age of 17 years (SD = 2.2 years; range = 13-21 years; 59% African American) who were at risk for pregnancy and sexually transmitted diseases. The primary outcome was the acceptability of the CAMI according to self-reported rating scales. The secondary outcome was the reduction of pregnancy and sexually transmitted disease risk using a 9-month, self-report timeline follow-back calendar of unprotected sex. The CAMI was rated easy to use. Compared with the didactic educational counseling, there was a significant effect of the intervention which suggested that the CAMI helped reduce unprotected sex among participants who completed the study. However, because of the high attrition rate, the intent to treat analysis did not demonstrate a significant effect of the CAMI on reducing the rate of unprotected sex. Among those who completed the intervention, the CAMI reduced unprotected sex among an at-risk, predominantly minority sample of female adolescents. Modification of the CAMI to address methodological issues that contributed to a high drop-out rate is needed to make the intervention more acceptable and feasible for use among sexually active, predominantly minority, at-risk female adolescents. Copyright © 2016 North American Society for Pediatric and Adolescent Gynecology. Published by Elsevier Inc. All rights reserved.

  5. An enhanced technique for mobile cloudlet offloading with reduced computation using compression in the cloud

    Science.gov (United States)

    Moro, A. C.; Nadesh, R. K.

    2017-11-01

    The cloud computing paradigm has transformed the way we do business in today's world. Services on the cloud have come a long way since just providing basic storage or software on demand. One of the fastest growing factors in this is mobile cloud computing. With the option of offloading now available, mobile users can offload entire applications onto cloudlets. Given the problems of availability and the limited storage capacity of these mobile cloudlets, it becomes difficult for the mobile user to decide when to use local memory and when to use the cloudlets. Hence, we take a look at a fast algorithm that decides, based on an offloading probability, whether the mobile user should use a cloudlet or rely on local memory. We have partially implemented the algorithm, which decides whether the task can be carried out locally or given to a cloudlet. Because performing the complete computation becomes a burden on the mobile device, in this paper we look to offload it to a cloud. We also use a file compression technique before sending the file to the cloud to further reduce the load.
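
    As a rough illustration of the kind of decision involved (the cost model, thresholds and compression choice below are assumptions made for the sketch, not the algorithm proposed in the paper), the offload-or-local choice and the pre-upload compression step might look like this:

        import zlib

        def offload_or_local(local_time_s, remote_time_s, data_bytes,
                             bandwidth_bps, p_cloudlet_available):
            """Pick the execution site with the lower expected completion time."""
            transfer_s = 8 * data_bytes / bandwidth_bps
            expected_offload_s = (p_cloudlet_available * (transfer_s + remote_time_s)
                                  + (1 - p_cloudlet_available) * local_time_s)
            return "offload" if expected_offload_s < local_time_s else "local"

        def prepare_payload(raw: bytes) -> bytes:
            # Compress before sending to the cloud to reduce the transferred volume.
            return zlib.compress(raw, level=6)

        decision = offload_or_local(local_time_s=4.0, remote_time_s=0.5,
                                    data_bytes=2_000_000, bandwidth_bps=10_000_000,
                                    p_cloudlet_available=0.8)
        payload = prepare_payload(b"task input data " * 1000)
        print(decision, len(payload))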

  6. Reduced opiate use after total knee arthroplasty using computer-assisted cryotherapy.

    Science.gov (United States)

    Thijs, Elke; Schotanus, Martijn G M; Bemelmans, Yoeri F L; Kort, Nanne P

    2018-05-03

    Despite multimodal pain management and advances in anesthetic techniques, total knee arthroplasty (TKA) remains painful during the early postoperative phase. This trial investigated whether computer-assisted cryotherapy (CAC) is effective in reducing pain and opioid consumption in patients operated for TKA following an outpatient surgery pathway. Sixty patients scheduled for primary TKA were included in this prospective, double-blind, randomized controlled trial, receiving CAC at 10-12 °C (Cold group, n = 30) or at 21 °C (Warm group, n = 30) during the first 7 days after TKA according to a fixed schedule. All patients received the same pre-, peri- and postoperative care with a multimodal pain protocol. Pain was assessed before and after every session of cryotherapy using the numerical rating scale for pain (NRS-pain). The consumption of opioids was strictly noted during the first 4 postoperative days. Secondary outcomes were knee swelling, visual hematoma and patient-reported outcome measures (PROMs). These parameters were measured preoperatively and at 1, 2 and 6 weeks postoperatively. In both study groups, a reduction in NRS-pain after every CAC session was seen during the postoperative period of 7 days. Mean reductions of 0.9 and 0.7 on the NRS-pain were seen for the Cold group (P = 0.008) and the Warm group (n.s.), respectively. A significantly (P = 0.001) lower number of opioid tablets was used by the Cold group during the acute postoperative phase of 4 days: 47 and 83 tablets for the Cold and Warm group, respectively. No difference could be observed in secondary outcomes and adverse effects between the two study groups. Postoperative CAC can be of added value in patients following an outpatient surgery pathway for TKA, resulting in reduced experienced pain and opioid consumption during the first postoperative days.

  7. Significant prognosticators after primary radiotherapy in 903 nondisseminated nasopharyngeal carcinoma evaluated by computer tomography

    International Nuclear Information System (INIS)

    Teo, P.; Yu, P.; Lee, W.Y.; Leung, S.F.; Kwan, W.H.; Yu, K.H.; Choi, P.; Johnson, P.J.

    1996-01-01

    Purpose: To evaluate the significant prognosticators in nasopharyngeal carcinoma (NPC). Methods and Materials: From 1984 to 1989, 903 treatment-naive nondisseminated (M0) NPC patients were given primary radical radiotherapy to 60-62.5 Gy in 6 weeks. All patients had computed tomographic (CT) and endoscopic evaluation of the primary tumor. Potentially significant parameters (the patient's age and sex, the anatomical structures infiltrated by the primary lesion, the cervical nodal characteristics, the tumor histological subtypes, and various treatment variables) were analyzed by both monovariate and multivariate methods for each of the five clinical endpoints: actuarial survival, disease-free survival, free from distant metastasis, free from local failure, and free from regional failure. Results: The significant prognosticators predicting for an increased risk of distant metastases and poorer survival included male sex, skull base and cranial nerve(s) involvement, advanced Ho's N level, and presence of fixed or partially fixed nodes or nodes contralateral to the side of the bulk of the nasopharyngeal primary. Advanced patient age led to significantly worse survival and poorer local tumor control. Local and regional failures were both increased by tumor infiltrating the skull base and/or the cranial nerves. In addition, regional failure was increased significantly by advancing Ho's N level. Parapharyngeal tumor involvement was the strongest independent prognosticator that determined distant metastasis and survival rates in the absence of the overriding prognosticators of skull base infiltration, cranial nerve(s) palsy, and cervical nodal metastasis. Conclusions: The significant prognosticators are delineated after the advent of CT and these should form the foundation of the modern stage classification for NPC

  8. The feasibility of using computer graphics in environmental evaluations : interim report, documenting historic site locations using computer graphics.

    Science.gov (United States)

    1981-01-01

    This report describes a method for locating historic site information using a computer graphics program. If adopted for use by the Virginia Department of Highways and Transportation, this method should significantly reduce the time now required to de...

  9. Image quality analysis to reduce dental artifacts in head and neck imaging with dual-source computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Ketelsen, D.; Werner, M.K.; Thomas, C.; Tsiflikas, I.; Reimann, A.; Claussen, C.D.; Heuschmid, M. [Tuebingen Univ. (Germany). Abt. fuer Diagnostische und Interventionelle Radiologie; Koitschev, A. [Tuebingen Univ. (Germany). Abt. fuer Hals-Nasen-Ohrenheilkunde

    2009-01-15

    Purpose: Important oropharyngeal structures can be superimposed by metallic artifacts due to dental implants. The aim of this study was to compare the image quality of multiplanar reconstructions and an angulated spiral in dual-source computed tomography (DSCT) of the neck. Materials and Methods: Sixty-two patients were included for neck imaging with DSCT. MPRs from an axial dataset and an additional short spiral parallel to the mouth floor were acquired. Leading anatomical structures were then evaluated with respect to the extent to which they were affected by dental artifacts using a visual scale, ranging from 1 (least artifacts) to 4 (most artifacts). Results: In MPR, 87.1 % of anatomical structures had significant artifacts (3.12 ± 0.86), while in angulated slices leading anatomical structures of the oropharynx showed negligible artifacts (1.28 ± 0.46). The diagnostic gain of the primarily angulated slices with respect to artifact severity was significant (p < 0.01). Conclusion: MPRs are not capable of reducing dental artifacts sufficiently. In patients with dental artifacts overlying the anatomical structures of the oropharynx, an additional short angulated spiral parallel to the floor of the mouth is recommended and should be applied for daily routine. As a result of the static gantry design of DSCT, the use of a flexible head holder is essential. (orig.)

  10. Facilitating NASA Earth Science Data Processing Using Nebula Cloud Computing

    Science.gov (United States)

    Pham, Long; Chen, Aijun; Kempler, Steven; Lynnes, Christopher; Theobald, Michael; Asghar, Esfandiari; Campino, Jane; Vollmer, Bruce

    2011-01-01

    Cloud Computing has been implemented in several commercial arenas. The NASA Nebula Cloud Computing platform is an Infrastructure as a Service (IaaS) built in 2008 at NASA Ames Research Center and in 2010 at GSFC. Nebula is an open source Cloud platform intended to: a) Make NASA realize significant cost savings through efficient resource utilization, reduced energy consumption, and reduced labor costs. b) Provide an easier way for NASA scientists and researchers to efficiently explore and share large and complex data sets. c) Allow customers to provision, manage, and decommission computing capabilities on an as-needed basis.

  11. A pilot weight reduction program over one year significantly reduced DNA strand breaks in obese subjects

    Directory of Open Access Journals (Sweden)

    Karl-Heinz Wagner

    2015-05-01

    Conclusion: A sustainable, supervised lifestyle change including physical activity and diet quality over a period of one year not only reduced body weight and BMI but also led to a significant reduction in all parameters of the comet assay. These results underline the importance of body weight reduction and highlight the positive changes in DNA stability.

  12. Reduced computational cost in the calculation of worst case response time for real time systems

    OpenAIRE

    Urriza, José M.; Schorb, Lucas; Orozco, Javier D.; Cayssials, Ricardo

    2009-01-01

    Modern Real Time Operating Systems require reduced computational costs even though microprocessors become more powerful each day. It is usual for Real Time Operating Systems for embedded systems to have advanced features to administer the resources of the applications that they support. In order to guarantee either the schedulability of the system or the schedulability of a new task in a dynamic Real Time System, it is necessary to know the Worst Case Response Time of the Real Time tasks ...
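
    For context, the quantity referred to above is usually computed with the classical response-time recurrence for fixed-priority preemptive scheduling, R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j, iterated to a fixed point. The sketch below implements that textbook recurrence under the common assumption that deadlines equal periods; the task set is invented for illustration and none of this is taken from the paper itself.

        import math

        def worst_case_response_time(tasks, i):
            """tasks = [(C, T), ...] sorted by decreasing priority; returns R_i or None."""
            c_i, t_i = tasks[i]
            r = c_i
            while True:
                interference = sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
                r_next = c_i + interference
                if r_next > t_i:
                    return None        # task i misses its deadline (D = T assumed)
                if r_next == r:
                    return r           # fixed point reached
                r = r_next

        # Example task set: (worst-case execution time, period), highest priority first.
        task_set = [(1, 4), (2, 6), (3, 12)]
        for idx in range(len(task_set)):
            print(idx, worst_case_response_time(task_set, idx))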

  13. Cloud Computing with iPlant Atmosphere.

    Science.gov (United States)

    McKay, Sheldon J; Skidmore, Edwin J; LaRose, Christopher J; Mercer, Andre W; Noutsos, Christos

    2013-10-15

    Cloud Computing refers to distributed computing platforms that use virtualization software to provide easy access to physical computing infrastructure and data storage, typically administered through a Web interface. Cloud-based computing provides access to powerful servers, with specific software and virtual hardware configurations, while eliminating the initial capital cost of expensive computers and reducing the ongoing operating costs of system administration, maintenance contracts, power consumption, and cooling. This eliminates a significant barrier to entry into bioinformatics and high-performance computing for many researchers. This is especially true of free or modestly priced cloud computing services. The iPlant Collaborative offers a free cloud computing service, Atmosphere, which allows users to easily create and use instances on virtual servers preconfigured for their analytical needs. Atmosphere is a self-service, on-demand platform for scientific computing. This unit demonstrates how to set up, access and use cloud computing in Atmosphere. Copyright © 2013 John Wiley & Sons, Inc.

  14. The Research of the Parallel Computing Development from the Angle of Cloud Computing

    Science.gov (United States)

    Peng, Zhensheng; Gong, Qingge; Duan, Yanyu; Wang, Yun

    2017-10-01

    Cloud computing is the development of parallel computing, distributed computing and grid computing. The development of cloud computing brings parallel computing into people's lives. Firstly, this paper expounds the concept of cloud computing and introduces several traditional parallel programming models. Secondly, it analyzes and studies the principles, advantages and disadvantages of OpenMP, MPI and MapReduce, respectively. Finally, it compares the MPI and OpenMP models with MapReduce from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing.
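
    To make the comparison concrete, the MapReduce model can be illustrated with the canonical word-count example, here expressed with Python's multiprocessing pool as a toy stand-in for a Hadoop-style cluster (the documents and the use of multiprocessing are illustrative choices, not material from the paper):

        from collections import Counter
        from functools import reduce
        from multiprocessing import Pool

        def map_phase(document):
            # Map: emit per-split (word, count) pairs.
            return Counter(document.lower().split())

        def reduce_phase(left, right):
            # Reduce: merge partial counts that share the same key.
            left.update(right)
            return left

        if __name__ == "__main__":
            documents = [
                "cloud computing is the development of parallel computing",
                "parallel computing comes into daily life through the cloud",
                "map reduce is one parallel programming model",
            ]
            with Pool() as pool:
                partial_counts = pool.map(map_phase, documents)
            totals = reduce(reduce_phase, partial_counts, Counter())
            print(totals.most_common(5))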

  15. Total variation-based neutron computed tomography

    Science.gov (United States)

    Barnard, Richard C.; Bilheux, Hassina; Toops, Todd; Nafziger, Eric; Finney, Charles; Splitter, Derek; Archibald, Rick

    2018-05-01

    We perform the neutron computed tomography reconstruction problem via an inverse problem formulation with a total variation penalty. In the case of highly under-resolved angular measurements, the total variation penalty suppresses high-frequency artifacts which appear in filtered back projections. In order to efficiently compute solutions for this problem, we implement a variation of the split Bregman algorithm; due to the error-forgetting nature of the algorithm, the computational cost of updating can be significantly reduced via very inexact approximate linear solvers. We present the effectiveness of the algorithm in the significantly low-angular sampling case using synthetic test problems as well as data obtained from a high flux neutron source. The algorithm removes artifacts and can even roughly capture small features when an extremely low number of angles are used.
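
    As a rough illustration of the machinery described, and only that, the sketch below applies a split Bregman loop (shrinkage step, Bregman updates and an inexact Jacobi inner solve) to the simpler total-variation denoising problem with an identity data term; in the reconstruction problem of the paper the data term instead involves the tomographic projection operator, and all parameter values here are arbitrary.

        import numpy as np

        def dx(u):  return np.roll(u, -1, axis=0) - u       # forward difference, periodic
        def dy(u):  return np.roll(u, -1, axis=1) - u
        def dxt(p): return np.roll(p, 1, axis=0) - p         # adjoint of dx
        def dyt(p): return np.roll(p, 1, axis=1) - p

        def shrink(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def tv_denoise_split_bregman(f, mu=20.0, lam=1.0, outer=40, inner=2):
            """Anisotropic TV denoising: min_u (mu/2)*||u - f||^2 + |grad u|_1."""
            u = f.copy()
            bx = np.zeros_like(f); by = np.zeros_like(f)
            dxv = np.zeros_like(f); dyv = np.zeros_like(f)
            for _ in range(outer):
                # Inexact inner solve of (mu - lam*Laplacian) u = rhs with a few Jacobi sweeps;
                # the outer Bregman loop forgives the resulting error.
                rhs = mu * f + lam * (dxt(dxv - bx) + dyt(dyv - by))
                for _ in range(inner):
                    neighbours = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                                  + np.roll(u, 1, 1) + np.roll(u, -1, 1))
                    u = (rhs + lam * neighbours) / (mu + 4.0 * lam)
                # Shrinkage (proximal step for the l1 term) and Bregman variable updates.
                dxv = shrink(dx(u) + bx, 1.0 / lam)
                dyv = shrink(dy(u) + by, 1.0 / lam)
                bx += dx(u) - dxv
                by += dy(u) - dyv
            return u

        # Quick check on a noisy synthetic phantom.
        rng = np.random.default_rng(0)
        img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
        noisy = img + 0.2 * rng.standard_normal(img.shape)
        print(np.abs(tv_denoise_split_bregman(noisy) - img).mean())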

  16. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
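
    The abstract only reports complexity bounds, but the underlying problem, assigning a chain of m pipelined modules to n processors as contiguous blocks so that the most heavily loaded processor (the pipeline bottleneck) is as light as possible, has a well-known dynamic-programming formulation. The sketch below is a generic O(n*m^2) illustration of that formulation, not the improved algorithm of the paper, and the module weights are invented.

        def min_bottleneck_mapping(weights, processors):
            """Split the module chain into at most `processors` contiguous blocks,
            minimizing the maximum block weight (the pipeline bottleneck)."""
            m = len(weights)
            prefix = [0.0]
            for w in weights:
                prefix.append(prefix[-1] + w)
            block = lambda i, j: prefix[j] - prefix[i]    # total weight of modules i..j-1

            INF = float("inf")
            # best[k][j]: minimal bottleneck when the first j modules use k processors.
            best = [[INF] * (m + 1) for _ in range(processors + 1)]
            best[0][0] = 0.0
            for k in range(1, processors + 1):
                for j in range(m + 1):
                    for i in range(j + 1):
                        best[k][j] = min(best[k][j], max(best[k - 1][i], block(i, j)))
            return best[processors][m]

        # Eight pipeline stages with these per-stage costs, mapped onto three processors.
        print(min_bottleneck_mapping([3, 1, 4, 1, 5, 9, 2, 6], 3))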

  17. Noninvasive Coronary Angiography using 64-Detector-Row Computed Tomography in Patients with a Low to Moderate Pretest Probability of Significant Coronary Artery Disease

    International Nuclear Information System (INIS)

    Schlosser, T.; Mohrs, O.K.; Magedanz, A.; Nowak, B.; Voigtlaender, T.; Barkhausen, J.; Schmermund, A.

    2007-01-01

    Purpose: To evaluate the value of 64-detector-row computed tomography for ruling out high-grade coronary stenoses in patients with a low to moderate pretest probability of significant coronary artery disease. Material and Methods: The study included 61 patients with a suspicion of coronary artery disease on the basis of atypical angina or ambiguous findings in noninvasive stress testing and a class II indication for invasive coronary angiography (ICA). All patients were examined by 64-detector-row computed tomography angiography (CTA) and ICA. On a coronary segmental level, the presence of significant (>50% diameter) stenoses was examined. Results: In a total of 915 segments, CTA detected 62 significant stenoses. Thirty-four significant stenoses were confirmed by ICA, whereas 28 stenoses could not be confirmed by ICA. Twenty-two of them showed wall irregularities on ICA, and six were angiographically normal. Accordingly, on a coronary segmental basis, 28 false-positive and 0 false-negative findings resulted in a sensitivity of 100%, a specificity of 96.8%, a positive predictive value of 54.8%, and a negative predictive value of 100%. The diagnostic accuracy was 96.9%. Conclusion: Sixty-four-detector-row computed tomography reliably detects significant coronary stenoses in patients with suspected coronary artery disease and appears to be helpful in the selection of patients who need to undergo ICA. Calcified and non-calcified plaques are detected. Grading of stenoses in areas with calcification is difficult. Frequently, stenosis severity is overestimated by 64-detector-row computed tomography

  18. Noninvasive Coronary Angiography using 64-Detector-Row Computed Tomography in Patients with a Low to Moderate Pretest Probability of Significant Coronary Artery Disease

    Energy Technology Data Exchange (ETDEWEB)

    Schlosser, T.; Mohrs, O.K.; Magedanz, A.; Nowak, B.; Voigtlaender, T.; Barkhausen, J.; Schmermund, A. [Dept. of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen (Germany)

    2007-04-15

    Purpose: To evaluate the value of 64-detector-row computed tomography for ruling out high-grade coronary stenoses in patients with a low to moderate pretest probability of significant coronary artery disease. Material and Methods: The study included 61 patients with a suspicion of coronary artery disease on the basis of atypical angina or ambiguous findings in noninvasive stress testing and a class II indication for invasive coronary angiography (ICA). All patients were examined by 64-detector-row computed tomography angiography (CTA) and ICA. On a coronary segmental level, the presence of significant (>50% diameter) stenoses was examined. Results: In a total of 915 segments, CTA detected 62 significant stenoses. Thirty-four significant stenoses were confirmed by ICA, whereas 28 stenoses could not be confirmed by ICA. Twenty-two of them showed wall irregularities on ICA, and six were angiographically normal. Accordingly, on a coronary segmental basis, 28 false-positive and 0 false-negative findings resulted in a sensitivity of 100%, a specificity of 96.8%, a positive predictive value of 54.8%, and a negative predictive value of 100%. The diagnostic accuracy was 96.9%. Conclusion: Sixty-four-detector-row computed tomography reliably detects significant coronary stenoses in patients with suspected coronary artery disease and appears to be helpful in the selection of patients who need to undergo ICA. Calcified and non-calcified plaques are detected. Grading of stenoses in areas with calcification is difficult. Frequently, stenosis severity is overestimated by 64-detector-row computed tomography.

  19. Using computer, mobile and wearable technology enhanced interventions to reduce sedentary behaviour: a systematic review and meta-analysis.

    Science.gov (United States)

    Stephenson, Aoife; McDonough, Suzanne M; Murphy, Marie H; Nugent, Chris D; Mair, Jacqueline L

    2017-08-11

    High levels of sedentary behaviour (SB) are associated with negative health consequences. Technology enhanced solutions such as mobile applications, activity monitors, prompting software, texts, emails and websites are being harnessed to reduce SB. The aim of this paper is to evaluate the effectiveness of such technology enhanced interventions aimed at reducing SB in healthy adults and to examine the behaviour change techniques (BCTs) used. Five electronic databases were searched to identify randomised-controlled trials (RCTs), published up to June 2016. Interventions using computer, mobile or wearable technologies to facilitate a reduction in SB, using a measure of sedentary time as an outcome, were eligible for inclusion. Risk of bias was assessed using the Cochrane Collaboration's tool and interventions were coded using the BCT Taxonomy (v1). Meta-analysis of 15/17 RCTs suggested that computer, mobile and wearable technology tools resulted in a mean reduction of -41.28 min per day (min/day) of sitting time (95% CI -60.99, -21.58, I2 = 77%, n = 1402), in favour of the intervention group at end point follow-up. The pooled effects showed mean reductions at short (≤ 3 months), medium (>3 to 6 months), and long-term follow-up (>6 months) of -42.42 min/day, -37.23 min/day and -1.65 min/day, respectively. Overall, 16/17 studies were deemed as having a high or unclear risk of bias, and 1/17 was judged to be at a low risk of bias. A total of 46 BCTs (14 unique) were coded for the computer, mobile and wearable components of the interventions. The most frequently coded were "prompts and cues", "self-monitoring of behaviour", "social support (unspecified)" and "goal setting (behaviour)". Interventions using computer, mobile and wearable technologies can be effective in reducing SB. Effectiveness appeared most prominent in the short-term and lessened over time. A range of BCTs have been implemented in these interventions. Future studies need to improve reporting

  20. Blink rate, incomplete blinks and computer vision syndrome.

    Science.gov (United States)

    Portello, Joan K; Rosenfield, Mark; Chu, Christina A

    2013-05-01

    Computer vision syndrome (CVS), a highly prevalent condition, is frequently associated with dry eye disorders. Furthermore, a reduced blink rate has been observed during computer use. The present study examined whether post task ocular and visual symptoms are associated with either a decreased blink rate or a higher prevalence of incomplete blinks. An additional trial tested whether increasing the blink rate would reduce CVS symptoms. Subjects (N = 21) were required to perform a continuous 15-minute reading task on a desktop computer at a viewing distance of 50 cm. Subjects were videotaped during the task to determine their blink rate and amplitude. Immediately after the task, subjects completed a questionnaire regarding ocular symptoms experienced during the trial. In a second session, the blink rate was increased by means of an audible tone that sounded every 4 seconds, with subjects being instructed to blink on hearing the tone. The mean blink rate during the task without the audible tone was 11.6 blinks per minute (SD, 7.84). The percentage of blinks deemed incomplete for each subject ranged from 0.9 to 56.5%, with a mean of 16.1% (SD, 15.7). A significant positive correlation was observed between the total symptom score and the percentage of incomplete blinks during the task (p = 0.002). Furthermore, a significant negative correlation was noted between the blink score and symptoms (p = 0.035). Increasing the mean blink rate to 23.5 blinks per minute by means of the audible tone did not produce a significant change in the symptom score. Whereas CVS symptoms are associated with a reduced blink rate, the completeness of the blink may be equally significant. Because instructing a patient to increase his or her blink rate may be ineffective or impractical, actions to achieve complete corneal coverage during blinking may be more helpful in alleviating symptoms during computer operation.

  1. New scanning technique using Adaptive Statistical Iterative Reconstruction (ASIR) significantly reduced the radiation dose of cardiac CT.

    Science.gov (United States)

    Tumur, Odgerel; Soon, Kean; Brown, Fraser; Mykytowycz, Marcus

    2013-06-01

    The aims of our study were to evaluate the effect of application of the Adaptive Statistical Iterative Reconstruction (ASIR) algorithm on the radiation dose of coronary computed tomography angiography (CCTA) and its effects on image quality of CCTA, and to evaluate the effects of various patient and CT scanning factors on the radiation dose of CCTA. This was a retrospective study that included 347 consecutive patients who underwent CCTA at a tertiary university teaching hospital between 1 July 2009 and 20 September 2011. Analysis was performed comparing patient demographics, scan characteristics, radiation dose and image quality in two groups of patients in whom conventional Filtered Back Projection (FBP) or ASIR was used for image reconstruction. There were 238 patients in the FBP group and 109 patients in the ASIR group. There was no difference between the groups in the use of prospective gating, scan length or tube voltage. In the ASIR group, a significantly lower tube current was used compared with the FBP group, 550 mA (450-600) vs. 650 mA (500-711.25) (median (interquartile range)), respectively, P < 0.001. There was a 27% effective radiation dose reduction in the ASIR group compared with the FBP group, 4.29 mSv (2.84-6.02) vs. 5.84 mSv (3.88-8.39) (median (interquartile range)), respectively, P < 0.001. Although ASIR was associated with increased image noise compared with FBP (39.93 ± 10.22 vs. 37.63 ± 18.79 (mean ± standard deviation), respectively, P < 0.001), it did not affect the signal intensity, signal-to-noise ratio, contrast-to-noise ratio or the diagnostic quality of CCTA. Application of ASIR reduces the radiation dose of CCTA without affecting the image quality. © 2013 The Authors. Journal of Medical Imaging and Radiation Oncology © 2013 The Royal Australian and New Zealand College of Radiologists.
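
    As a point of reference, the reported 27% dose saving follows directly from the median effective doses quoted above; the snippet below is illustrative arithmetic only (group medians, not patient-level data).

      # Relative dose reduction implied by the median effective doses in the abstract.
      dose_asir = 4.29   # mSv, median, ASIR group
      dose_fbp = 5.84    # mSv, median, FBP group

      reduction = (dose_fbp - dose_asir) / dose_fbp * 100
      print(f"relative dose reduction ≈ {reduction:.1f}%")  # ≈ 26.5%, reported as 27%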

  2. Bessel function expansion to reduce the calculation time and memory usage for cylindrical computer-generated holograms.

    Science.gov (United States)

    Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko

    2017-07-10

    This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transformation of the kernel function involving this convolution integral is analytically performed using a Bessel function expansion. The analytical solution can drastically reduce the calculation time and the memory usage without any cost, compared with the numerical method using fast Fourier transform to Fourier transform the kernel function. In this study, we present the analytical derivation, the efficient calculation of Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.
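
    The saving described above comes from knowing the kernel's spectrum in closed form, so only the field itself needs a numerical transform. The sketch below shows that general pattern with a hypothetical Gaussian kernel whose Fourier transform is known analytically; it does not reproduce the cylindrical geometry or the Bessel-function series of the paper.

      import numpy as np

      def convolve_with_analytic_kernel_spectrum(field, kernel_spectrum):
          """Frequency-domain convolution where the kernel spectrum is supplied in
          closed form, so no forward FFT of a sampled kernel is needed."""
          return np.fft.ifft2(np.fft.fft2(field) * kernel_spectrum)

      # Toy usage: Gaussian kernel, whose spectrum exp(-2*(pi*sigma)^2*f^2) is analytic.
      n = 256
      fx = np.fft.fftfreq(n)
      fy = np.fft.fftfreq(n)
      FX, FY = np.meshgrid(fx, fy, indexing="ij")
      sigma = 4.0
      kernel_spectrum = np.exp(-2 * (np.pi * sigma) ** 2 * (FX**2 + FY**2))

      field = np.zeros((n, n))
      field[n // 2, n // 2] = 1.0                 # point source
      out = convolve_with_analytic_kernel_spectrum(field, kernel_spectrum)
      print(out.shape, abs(out).max())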

  3. Rhythmic chaos: irregularities of computer ECG diagnosis.

    Science.gov (United States)

    Wang, Yi-Ting Laureen; Seow, Swee-Chong; Singh, Devinder; Poh, Kian-Keong; Chai, Ping

    2017-09-01

    Diagnostic errors can occur when physicians rely solely on computer electrocardiogram interpretation. Cardiologists often receive referrals for computer misdiagnoses of atrial fibrillation. Patients may have been inappropriately anticoagulated for pseudo atrial fibrillation. Anticoagulation carries significant risks, and such errors may carry a high cost. Have we become overreliant on machines and technology? In this article, we illustrate three such cases and briefly discuss how we can reduce these errors. Copyright: © Singapore Medical Association.

  4. Reduced bone mineral density is not associated with significantly reduced bone quality in men and women practicing long-term calorie restriction with adequate nutrition.

    Science.gov (United States)

    Villareal, Dennis T; Kotyk, John J; Armamento-Villareal, Reina C; Kenguva, Venkata; Seaman, Pamela; Shahar, Allon; Wald, Michael J; Kleerekoper, Michael; Fontana, Luigi

    2011-02-01

    Calorie restriction (CR) reduces bone quantity but not bone quality in rodents. Nothing is known regarding the long-term effects of CR with adequate intake of vitamin and minerals on bone quantity and quality in middle-aged lean individuals. In this study, we evaluated body composition, bone mineral density (BMD), and serum markers of bone turnover and inflammation in 32 volunteers who had been eating a CR diet (approximately 35% less calories than controls) for an average of 6.8 ± 5.2 years (mean age 52.7 ± 10.3 years) and 32 age- and sex-matched sedentary controls eating Western diets (WD). In a subgroup of 10 CR and 10 WD volunteers, we also measured trabecular bone (TB) microarchitecture of the distal radius using high-resolution magnetic resonance imaging. We found that the CR volunteers had significantly lower body mass index than the WD volunteers (18.9 ± 1.2 vs. 26.5 ± 2.2 kg m⁻²; P = 0.0001). BMD of the lumbar spine (0.870 ± 0.11 vs. 1.138 ± 0.12 g cm⁻², P = 0.0001) and hip (0.806 ± 0.12 vs. 1.047 ± 0.12 g cm⁻², P = 0.0001) was also lower in the CR than in the WD group. Serum C-terminal telopeptide and bone-specific alkaline phosphatase concentration were similar between groups, while serum C-reactive protein (0.19 ± 0.26 vs. 1.46 ± 1.56 mg L⁻¹, P = 0.0001) was lower in the CR group. Trabecular bone microarchitecture parameters such as the erosion index (0.916 ± 0.087 vs. 0.877 ± 0.088; P = 0.739) and surface-to-curve ratio (10.3 ± 1.4 vs. 12.1 ± 2.1, P = 0.440) were not significantly different between groups. These findings suggest that markedly reduced BMD is not associated with significantly reduced bone quality in middle-aged men and women practicing long-term calorie restriction with adequate nutrition.

  5. Specialized computer architectures for computational aerodynamics

    Science.gov (United States)

    Stevenson, D. K.

    1978-01-01

    In recent years, computational fluid dynamics has made significant progress in modelling aerodynamic phenomena. Currently, one of the major barriers to future development lies in the compute-intensive nature of the numerical formulations and the relative high cost of performing these computations on commercially available general purpose computers, a cost high with respect to dollar expenditure and/or elapsed time. Today's computing technology will support a program designed to create specialized computing facilities to be dedicated to the important problems of computational aerodynamics. One of the still unresolved questions is the organization of the computing components in such a facility. The characteristics of fluid dynamic problems which will have significant impact on the choice of computer architecture for a specialized facility are reviewed.

  6. Efficient 2-D DCT Computation from an Image Representation Point of View

    OpenAIRE

    Papakostas, G.A.; Koulouriotis, D.E.; Karakasis, E.G.

    2009-01-01

    A novel methodology that ensures the computation of 2-D DCT coefficients in gray-scale images as well as in binary ones, with high computation rates, was presented in the previous sections. Through a new image representation scheme, called ISR (Image Slice Representation) the 2-D DCT coefficients can be computed in significantly reduced time, with the same accuracy.
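
    For orientation, the baseline transform that the ISR scheme accelerates is the standard separable 2-D DCT. A minimal reference computation using SciPy (not the slice-based representation itself) looks like this.

      import numpy as np
      from scipy.fft import dctn

      # Standard 2-D DCT (type II, orthonormal) of a gray-scale image block.
      rng = np.random.default_rng(0)
      block = rng.integers(0, 256, size=(8, 8)).astype(float)
      coeffs = dctn(block, type=2, norm="ortho")
      print(coeffs[0, 0], block.mean() * 8)  # DC coefficient equals 8 * mean for an 8x8 block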

  7. Tilting the jaw to improve the image quality or to reduce the dose in cone-beam computed tomography

    International Nuclear Information System (INIS)

    Luckow, Marlen; Deyhle, Hans; Beckmann, Felix; Dagassan-Berndt, Dorothea; Müller, Bert

    2011-01-01

    Objective: The image quality in cone-beam computed tomography (CBCT) should be improved by tilting the mandible that contains two dental titanium implants, within the relevant range of motion. Materials and methods: Using the mandible of a five-month-old pig, CBCT was performed varying the accelerating voltage, beam current, the starting rotation angle of the mandible in the source-detector plane and the tilt angles of the jaw with respect to the source-detector plane. The different datasets were automatically registered with respect to micro CT data to extract the common volume and the deviance to the pre-defined standard that characterizes the image quality. Results: The variations of the accelerating voltage, beam current and the rotation within the source-detector plane provided the expected quantitative behavior indicating the appropriate choice of the imaging quality factor. The tilting of the porcine mandible by about 14° improves the image quality by almost a factor of two. Conclusions: The tilting of the mandible with two dental implants can be used to significantly reduce the artifacts of the strongly X-ray absorbing materials in the CBCT images. The comparison of 14° jaw tilting with respect to the currently recommended arrangement in plane with the teeth demonstrates that the applied exposure time and the related dose can be reduced by a factor of four without decreasing the image quality.

  8. Energy-Aware Computation Offloading of IoT Sensors in Cloudlet-Based Mobile Edge Computing.

    Science.gov (United States)

    Ma, Xiao; Lin, Chuang; Zhang, Han; Liu, Jianwei

    2018-06-15

    Mobile edge computing is proposed as a promising computing paradigm to relieve the excessive burden of data centers and mobile networks, which is induced by the rapid growth of Internet of Things (IoT). This work introduces the cloud-assisted multi-cloudlet framework to provision scalable services in cloudlet-based mobile edge computing. Due to the constrained computation resources of cloudlets and limited communication resources of wireless access points (APs), IoT sensors with identical computation offloading decisions interact with each other. To optimize the processing delay and energy consumption of computation tasks, theoretic analysis of the computation offloading decision problem of IoT sensors is presented in this paper. In more detail, the computation offloading decision problem of IoT sensors is formulated as a computation offloading game and the condition of Nash equilibrium is derived by introducing the tool of a potential game. By exploiting the finite improvement property of the game, the Computation Offloading Decision (COD) algorithm is designed to provide decentralized computation offloading strategies for IoT sensors. Simulation results demonstrate that the COD algorithm can significantly reduce the system cost compared with the random-selection algorithm and the cloud-first algorithm. Furthermore, the COD algorithm can scale well with increasing IoT sensors.
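
    The finite improvement property the authors exploit can be illustrated with a toy congestion-style offloading game: sensors repeatedly switch to their cheapest option until no one can improve, at which point a Nash equilibrium has been reached. The sketch below is a hedged stand-in for the COD algorithm, with made-up cost parameters.

      # Toy best-response loop: each sensor offloads to one cloudlet or computes locally,
      # and a cloudlet's delay grows with the number of sensors sharing it.

      def cost(option, load, local_cost=10.0, base=2.0, per_user=1.5):
          # load = number of sensors (including this one) using the chosen cloudlet
          return local_cost if option == "local" else base + per_user * load

      def best_response_dynamics(n_sensors=20, cloudlets=("c1", "c2", "c3"), max_rounds=100):
          choices = ["local"] * n_sensors
          options = ["local", *cloudlets]
          for _ in range(max_rounds):
              improved = False
              for i in range(n_sensors):
                  def option_cost(o, i=i):
                      load = sum(1 for j, ch in enumerate(choices) if ch == o and j != i) + 1
                      return cost(o, load)
                  best = min(options, key=option_cost)
                  if option_cost(best) + 1e-9 < option_cost(choices[i]):
                      choices[i] = best
                      improved = True
              if not improved:      # no unilateral improvement left: a Nash equilibrium
                  break
          return choices

      print(best_response_dynamics())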

  9. A reduced adjoint approach to variational data assimilation

    KAUST Repository

    Altaf, Muhammad

    2013-02-01

    The adjoint method has been used very often for variational data assimilation. The computational cost to run the adjoint model often exceeds several original model runs and the method needs significant programming efforts to implement the adjoint model code. The work proposed here is variational data assimilation based on proper orthogonal decomposition (POD) which avoids the implementation of the adjoint of the tangent linear approximation of the original nonlinear model. An ensemble of the forward model simulations is used to determine the approximation of the covariance matrix and only the dominant eigenvectors of this matrix are used to define a model subspace. The adjoint of the tangent linear model is replaced by the reduced adjoint based on this reduced space. Thus the adjoint model is run in reduced space with negligible computational cost. Once the gradient is obtained in reduced space it is projected back in full space and the minimization process is carried in full space. In the paper the reduced adjoint approach to variational data assimilation is introduced. The characteristics and performance of the method are illustrated with a number of data assimilation experiments in a ground water subsurface contaminant model. © 2012 Elsevier B.V.

  10. A reduced adjoint approach to variational data assimilation

    KAUST Repository

    Altaf, Muhammad; El Gharamti, Mohamad; Heemink, Arnold W.; Hoteit, Ibrahim

    2013-01-01

    The adjoint method has been used very often for variational data assimilation. The computational cost to run the adjoint model often exceeds several original model runs and the method needs significant programming efforts to implement the adjoint model code. The work proposed here is variational data assimilation based on proper orthogonal decomposition (POD) which avoids the implementation of the adjoint of the tangent linear approximation of the original nonlinear model. An ensemble of the forward model simulations is used to determine the approximation of the covariance matrix and only the dominant eigenvectors of this matrix are used to define a model subspace. The adjoint of the tangent linear model is replaced by the reduced adjoint based on this reduced space. Thus the adjoint model is run in reduced space with negligible computational cost. Once the gradient is obtained in reduced space it is projected back in full space and the minimization process is carried in full space. In the paper the reduced adjoint approach to variational data assimilation is introduced. The characteristics and performance of the method are illustrated with a number of data assimilation experiments in a ground water subsurface contaminant model. © 2012 Elsevier B.V.
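
    The core reduction step described above, building a dominant-mode basis from an ensemble of forward-model snapshots and mapping gradients between full and reduced space, can be sketched with a plain SVD. The sizes and data below are hypothetical, and the adjoint model and minimisation loop of the paper are not reproduced.

      import numpy as np

      rng = np.random.default_rng(1)
      n_state, n_snapshots, r = 500, 40, 10                # hypothetical sizes

      snapshots = rng.standard_normal((n_state, n_snapshots))   # stand-in ensemble of model runs
      mean = snapshots.mean(axis=1, keepdims=True)
      U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
      basis = U[:, :r]                                     # dominant POD modes (columns)

      energy = np.cumsum(s**2) / np.sum(s**2)              # captured variance per mode count
      grad_reduced = basis.T @ rng.standard_normal(n_state)    # a gradient projected to reduced space
      grad_full = basis @ grad_reduced                     # projected back to full space
      print(basis.shape, round(float(energy[r - 1]), 3))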

  11. Mindfulness significantly reduces self-reported levels of anxiety and depression

    DEFF Research Database (Denmark)

    Würtzen, Hanne; Dalton, Susanne Oksbjerg; Elsass, Peter

    2013-01-01

    INTRODUCTION: As the incidence of and survival from breast cancer continue to rise, interventions to reduce anxiety and depression before, during and after treatment are needed. Previous studies have reported positive effects of a structured 8-week group mindfulness-based stress reduction program...

  12. [Study on computed tomography features of nasal septum cellule and its clinical significance].

    Science.gov (United States)

    Huang, Dingqiang; Li, Wanrong; Gao, Liming; Xu, Guanqiang; Ou, Xiaoyi; Tang, Guangcai

    2008-03-01

    To investigate the features of nasal septum cellule in computed tomographic (CT) images and its clinical significance. CT scan data of the nasal septum from 173 patients were randomly obtained from January 2001 to June 2005. Prevalence and clinical features were summarized retrospectively for the 19 patients with nasal septum cellule. (1) Nineteen cases with nasal septum cellule were found among the 173 patients. (2) In all 19 cases the nasal septum cellule was located in the perpendicular plate of the ethmoid bone, with 8 cases in the upper part of the nasal septum and 11 in the middle. (3) Seven patients in total had nasal diseases related to nasal septum cellule: 3 cases of inflammation, 2 cases of bone fracture, 1 case of cholesterol granuloma and 1 case of mucocele. Nasal septum cellule is an anatomic variation of the nasal septum bone, and its features can provide further understanding of some diseases related to nasal septum cellule.

  13. Reducing radiation dose in the diagnosis of pulmonary embolism using adaptive statistical iterative reconstruction and lower tube potential in computed tomography

    International Nuclear Information System (INIS)

    Kaul, David; Grupp, Ulrich; Kahn, Johannes; Wiener, Edzard; Hamm, Bernd; Streitparth, Florian; Ghadjar, Pirus

    2014-01-01

    To assess the impact of ASIR (adaptive statistical iterative reconstruction) and lower tube potential on dose reduction and image quality in chest computed tomography angiographies (CTAs) of patients with pulmonary embolism. CT data from 44 patients with pulmonary embolism were acquired using different protocols - Group A: 120 kV, filtered back projection, n = 12; Group B: 120 kV, 40 % ASIR, n = 12; Group C: 100 kV, 40 % ASIR, n = 12 and Group D: 80 kV, 40 % ASIR, n = 8. Normalised effective dose was calculated; image quality was assessed quantitatively and qualitatively. Normalised effective dose in Group B was 33.8 % lower than in Group A (p = 0.014) and 54.4 % lower in Group C than in Group A (p < 0.001). Group A, B and C did not show significant differences in qualitative or quantitative analysis of image quality. Group D showed significantly higher noise levels in qualitative and quantitative analysis, significantly more artefacts and decreased overall diagnosability. Best results, considering dose reduction and image quality, were achieved in Group C. The combination of ASIR and lower tube potential is an option to reduce radiation without significant worsening of image quality in the diagnosis of pulmonary embolism. (orig.)

  14. Reducing radiation dose in the diagnosis of pulmonary embolism using adaptive statistical iterative reconstruction and lower tube potential in computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Kaul, David [Campus Virchow-Klinikum, Department of Radiation Oncology, Charite School of Medicine and University Hospital, Berlin (Germany); Charite School of Medicine and University Hospital, Department of Radiology, Berlin (Germany); Grupp, Ulrich; Kahn, Johannes; Wiener, Edzard; Hamm, Bernd; Streitparth, Florian [Charite School of Medicine and University Hospital, Department of Radiology, Berlin (Germany); Ghadjar, Pirus [Campus Virchow-Klinikum, Department of Radiation Oncology, Charite School of Medicine and University Hospital, Berlin (Germany)

    2014-11-15

    To assess the impact of ASIR (adaptive statistical iterative reconstruction) and lower tube potential on dose reduction and image quality in chest computed tomography angiographies (CTAs) of patients with pulmonary embolism. CT data from 44 patients with pulmonary embolism were acquired using different protocols - Group A: 120 kV, filtered back projection, n = 12; Group B: 120 kV, 40 % ASIR, n = 12; Group C: 100 kV, 40 % ASIR, n = 12 and Group D: 80 kV, 40 % ASIR, n = 8. Normalised effective dose was calculated; image quality was assessed quantitatively and qualitatively. Normalised effective dose in Group B was 33.8 % lower than in Group A (p = 0.014) and 54.4 % lower in Group C than in Group A (p < 0.001). Group A, B and C did not show significant differences in qualitative or quantitative analysis of image quality. Group D showed significantly higher noise levels in qualitative and quantitative analysis, significantly more artefacts and decreased overall diagnosability. Best results, considering dose reduction and image quality, were achieved in Group C. The combination of ASIR and lower tube potential is an option to reduce radiation without significant worsening of image quality in the diagnosis of pulmonary embolism. (orig.)

  15. Reduced content of chloroatranol and atranol in oak moss absolute significantly reduces the elicitation potential of this fragrance material.

    Science.gov (United States)

    Andersen, Flemming; Andersen, Kirsten H; Bernois, Armand; Brault, Christophe; Bruze, Magnus; Eudes, Hervé; Gadras, Catherine; Signoret, Anne-Cécile J; Mose, Kristian F; Müller, Boris P; Toulemonde, Bernard; Andersen, Klaus Ejner

    2015-02-01

    Oak moss absolute, an extract from the lichen Evernia prunastri, is a valued perfume ingredient but contains extreme allergens. To compare the elicitation properties of two preparations of oak moss absolute: 'classic oak moss', the historically used preparation, and 'new oak moss', with reduced contents of the major allergens atranol and chloroatranol. The two preparations were compared in randomized double-blinded repeated open application tests and serial dilution patch tests in 30 oak moss-sensitive volunteers and 30 non-allergic control subjects. In both test models, new oak moss elicited significantly less allergic contact dermatitis in oak moss-sensitive subjects than classic oak moss. The control subjects did not react to either of the preparations. New oak moss is still a fragrance allergen, but elicits less allergic contact dermatitis in previously oak moss-sensitized individuals, suggesting that new oak moss is less allergenic to non-sensitized individuals. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  16. Computer scientist looks at reliability computations

    International Nuclear Information System (INIS)

    Rosenthal, A.

    1975-01-01

    Results from the theory of computational complexity are applied to reliability computations on fault trees and networks. A well known class of problems which almost certainly have no fast solution algorithms is presented. It is shown that even approximately computing the reliability of many systems is difficult enough to be in this class. In the face of this result, which indicates that for general systems the computation time will be exponential in the size of the system, decomposition techniques which can greatly reduce the effective size of a wide variety of realistic systems are explored
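
    The decomposition idea can be illustrated with the elementary series/parallel reductions for independent components, which collapse whole sub-systems into single equivalent reliabilities before any general-purpose (and potentially exponential) computation is attempted; the component reliabilities below are arbitrary.

      # Series/parallel reduction for independent components.
      def series(*p):          # all components must work
          out = 1.0
          for pi in p:
              out *= pi
          return out

      def parallel(*p):        # at least one component must work
          out = 1.0
          for pi in p:
              out *= (1.0 - pi)
          return 1.0 - out

      # (A in series with B) in parallel with C, all in series with D.
      system = series(parallel(series(0.9, 0.95), 0.8), 0.99)
      print(round(system, 4))   # 0.9613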

  17. New scanning technique using Adaptive Statistical Iterative Reconstruction (ASIR) significantly reduced the radiation dose of cardiac CT

    International Nuclear Information System (INIS)

    Tumur, Odgerel; Soon, Kean; Brown, Fraser; Mykytowycz, Marcus

    2013-01-01

    The aims of our study were to evaluate the effect of application of Adaptive Statistical Iterative Reconstruction (ASIR) algorithm on the radiation dose of coronary computed tomography angiography (CCTA) and its effects on image quality of CCTA and to evaluate the effects of various patient and CT scanning factors on the radiation dose of CCTA. This was a retrospective study that included 347 consecutive patients who underwent CCTA at a tertiary university teaching hospital between 1 July 2009 and 20 September 2011. Analysis was performed comparing patient demographics, scan characteristics, radiation dose and image quality in two groups of patients in whom conventional Filtered Back Projection (FBP) or ASIR was used for image reconstruction. There were 238 patients in the FBP group and 109 patients in the ASIR group. There was no difference between the groups in the use of prospective gating, scan length or tube voltage. In the ASIR group, significantly lower tube current was used compared with the FBP group, 550 mA (450–600) vs. 650 mA (500–711.25) (median (interquartile range)), respectively, P<0.001. There was 27% effective radiation dose reduction in the ASIR group compared with the FBP group, 4.29 mSv (2.84–6.02) vs. 5.84 mSv (3.88–8.39) (median (interquartile range)), respectively, P<0.001. Although ASIR was associated with increased image noise compared with FBP (39.93±10.22 vs. 37.63±18.79 (mean ± standard deviation), respectively, P<0.001), it did not affect the signal intensity, signal-to-noise ratio, contrast-to-noise ratio or the diagnostic quality of CCTA. Application of ASIR reduces the radiation dose of CCTA without affecting the image quality.

  18. Evaluation of two iterative techniques for reducing metal artifacts in computed tomography.

    Science.gov (United States)

    Boas, F Edward; Fleischmann, Dominik

    2011-06-01

    To evaluate two methods for reducing metal artifacts in computed tomography (CT)--the metal deletion technique (MDT) and the selective algebraic reconstruction technique (SART)--and compare these methods with filtered back projection (FBP) and linear interpolation (LI). The institutional review board approved this retrospective HIPAA-compliant study; informed patient consent was waived. Simulated projection data were calculated for a phantom that contained water, soft tissue, bone, and iron. Clinical projection data were obtained retrospectively from 11 consecutively identified CT scans with metal streak artifacts, with a total of 178 sections containing metal. Each scan was reconstructed using FBP, LI, SART, and MDT. The simulated scans were evaluated quantitatively by calculating the average error in Hounsfield units for each pixel compared with the original phantom. Two radiologists who were blinded to the reconstruction algorithms used qualitatively evaluated the clinical scans, ranking the overall severity of artifacts for each algorithm. P values for comparisons of the image quality ranks were calculated from the binomial distribution. The simulations showed that MDT reduces artifacts due to photon starvation, beam hardening, and motion and does not introduce new streaks between metal and bone. MDT had the lowest average error (76% less than FBP, 42% less than LI, 17% less than SART). Blinded comparison of the clinical scans revealed that MDT had the best image quality 100% of the time (95% confidence interval: 72%, 100%). LI had the second best image quality, and SART and FBP had the worst image quality. On images from two CT scans, as compared with images generated by the scanner, MDT revealed information of potential clinical importance. For a wide range of scans, MDT yields reduced metal streak artifacts and better-quality images than does FBP, LI, or SART. http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11101782/-/DC1. RSNA, 2011
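
    The quantitative criterion used for the simulated scans, the average per-pixel error in Hounsfield units against the known phantom, is straightforward to compute; the arrays below are placeholders, not the study's data.

      import numpy as np

      def mean_hu_error(reconstruction, phantom):
          # Mean absolute per-pixel difference, in HU, against the known ground truth.
          return float(np.mean(np.abs(reconstruction - phantom)))

      phantom = np.zeros((256, 256))                                            # hypothetical ground truth
      recon = phantom + np.random.default_rng(0).normal(0, 30, phantom.shape)   # hypothetical noisy reconstruction
      print(f"average error = {mean_hu_error(recon, phantom):.1f} HU")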

  19. Reduced-order computational model in nonlinear structural dynamics for structures having numerous local elastic modes in the low-frequency range. Application to fuel assemblies

    International Nuclear Information System (INIS)

    Batou, A.; Soize, C.; Brie, N.

    2013-01-01

    Highlights: • A ROM of a nonlinear dynamical structure is built with a global displacements basis. • The reduced order model of fuel assemblies is accurate and of very small size. • The shocks between grids of a row of seven fuel assemblies are computed. -- Abstract: We are interested in the construction of a reduced-order computational model for nonlinear complex dynamical structures which are characterized by the presence of numerous local elastic modes in the low-frequency band. This high modal density makes the use of the classical modal analysis method not suitable. Therefore the reduced-order computational model is constructed using a basis of a space of global displacements, which is constructed a priori and which allows the nonlinear dynamical response of the structure observed on the stiff part to be predicted with a good accuracy. The methodology is applied to a complex industrial structure which is made up of a row of seven fuel assemblies with possibility of collisions between grids and which is submitted to a seismic loading

  20. Reduced-order computational model in nonlinear structural dynamics for structures having numerous local elastic modes in the low-frequency range. Application to fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Batou, A., E-mail: anas.batou@univ-paris-est.fr [Université Paris-Est, Laboratoire Modélisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, 5 bd Descartes, 77454 Marne-la-Vallee (France); Soize, C., E-mail: christian.soize@univ-paris-est.fr [Université Paris-Est, Laboratoire Modélisation et Simulation Multi Echelle, MSME UMR 8208 CNRS, 5 bd Descartes, 77454 Marne-la-Vallee (France); Brie, N., E-mail: nicolas.brie@edf.fr [EDF R and D, Département AMA, 1 avenue du général De Gaulle, 92140 Clamart (France)

    2013-09-15

    Highlights: • A ROM of a nonlinear dynamical structure is built with a global displacements basis. • The reduced order model of fuel assemblies is accurate and of very small size. • The shocks between grids of a row of seven fuel assemblies are computed. -- Abstract: We are interested in the construction of a reduced-order computational model for nonlinear complex dynamical structures which are characterized by the presence of numerous local elastic modes in the low-frequency band. This high modal density makes the use of the classical modal analysis method not suitable. Therefore the reduced-order computational model is constructed using a basis of a space of global displacements, which is constructed a priori and which allows the nonlinear dynamical response of the structure observed on the stiff part to be predicted with a good accuracy. The methodology is applied to a complex industrial structure which is made up of a row of seven fuel assemblies with possibility of collisions between grids and which is submitted to a seismic loading.
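
    The construction described above amounts to a Galerkin projection of the full mass and stiffness operators onto a small basis of global displacement vectors. The sketch below shows that projection step on randomly generated stand-in matrices; the basis here is an arbitrary orthonormal block, not the a priori global-displacement basis of the paper, and the nonlinear contact forces are omitted.

      import numpy as np

      rng = np.random.default_rng(0)
      n_dof, n_modes = 1000, 12                          # hypothetical sizes

      A = rng.standard_normal((n_dof, n_dof))
      M = A @ A.T + n_dof * np.eye(n_dof)                # SPD stand-in for the mass matrix
      K = A.T @ A + n_dof * np.eye(n_dof)                # SPD stand-in for the stiffness matrix
      Phi, _ = np.linalg.qr(rng.standard_normal((n_dof, n_modes)))   # orthonormal "global" basis

      M_r = Phi.T @ M @ Phi                              # reduced mass matrix (n_modes x n_modes)
      K_r = Phi.T @ K @ Phi                              # reduced stiffness matrix
      f_r = Phi.T @ rng.standard_normal(n_dof)           # reduced load vector
      q = np.linalg.solve(K_r, f_r)                      # small reduced solve
      u_approx = Phi @ q                                 # displacement approximated back in full space
      print(M_r.shape, K_r.shape, u_approx.shape)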

  1. Effectiveness of Adaptive Statistical Iterative Reconstruction for 64-Slice Dual-Energy Computed Tomography Pulmonary Angiography in Patients With a Reduced Iodine Load: Comparison With Standard Computed Tomography Pulmonary Angiography.

    Science.gov (United States)

    Lee, Ji Won; Lee, Geewon; Lee, Nam Kyung; Moon, Jin Il; Ju, Yun Hye; Suh, Young Ju; Jeong, Yeon Joo

    2016-01-01

    The aim of the study was to assess the effectiveness of the adaptive statistical iterative reconstruction (ASIR) for dual-energy computed tomography pulmonary angiography (DE-CTPA) with a reduced iodine load. One hundred forty patients referred for chest CT were randomly divided into a DE-CTPA group with a reduced iodine load or a standard CTPA group. Quantitative and qualitative image qualities of virtual monochromatic spectral (VMS) images with filtered back projection (VMS-FBP) and those with 50% ASIR (VMS-ASIR) in the DE-CTPA group were compared. Image qualities of VMS-ASIR images in the DE-CTPA group and ASIR images in the standard CTPA group were also compared. All quantitative and qualitative indices, except the attenuation value of the pulmonary artery, were superior in the VMS-ASIR subgroup to those in the VMS-FBP subgroup. Image quality of VMS-ASIR images was also superior to that of ASIR images in the standard CTPA group, and a significant difference was observed between VMS-ASIR images of the DE-CTPA group and ASIR images of the standard CTPA group (P = 0.001). The ASIR technique tends to improve the image quality of VMS imaging. Dual-energy computed tomography pulmonary angiography with ASIR can reduce contrast medium volume and produce images of comparable quality to those of standard CTPA.

  2. A New Human-Machine Interfaces of Computer-based Procedure System to Reduce the Team Errors in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Kim, Sa Kil; Sim, Joo Hyun; Lee, Hyun Chul

    2016-01-01

    In this study, we identify the emerging types of team errors, especially in digitalized control rooms of nuclear power plants such as the APR-1400 main control room of Korea. Most work in the nuclear industry is performed by a team of more than two persons. Even though individual errors can be detected and recovered by qualified others and/or a well-trained team, errors made by the team are seldom easily detected and properly recovered by the team itself. Note that a team is defined as two or more people who interact appropriately with each other; the team is a dependent aggregate that accomplishes a valuable goal. Organizational errors sometimes increase the likelihood of operator errors through the active failure pathway and, at the same time, enhance the possibility of adverse outcomes through defensive weaknesses. We adopt crew resource management as a representative approach to deal with the team factors of human errors, and we suggest a set of crew resource management training procedures for unsafe environments where human errors can have devastating effects. We are developing alternative interfaces against team errors for conditions in which a computer-based procedure system is used in a digitalized main control room; the computer-based procedure system is a representative feature of the digitalized control room. In this study, we propose new interfaces for the computer-based procedure system to reduce feasible team errors, and we are conducting an effectiveness test to validate whether the new interfaces can reduce team errors when operating with a computer-based procedure system in a digitalized control room.

  3. A New Human-Machine Interfaces of Computer-based Procedure System to Reduce the Team Errors in Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sa Kil; Sim, Joo Hyun; Lee, Hyun Chul [Korea Atomic Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    In this study, we identify the emerging types of team errors, especially in digitalized control rooms of nuclear power plants such as the APR-1400 main control room of Korea. Most work in the nuclear industry is performed by a team of more than two persons. Even though individual errors can be detected and recovered by qualified others and/or a well-trained team, errors made by the team are seldom easily detected and properly recovered by the team itself. Note that a team is defined as two or more people who interact appropriately with each other; the team is a dependent aggregate that accomplishes a valuable goal. Organizational errors sometimes increase the likelihood of operator errors through the active failure pathway and, at the same time, enhance the possibility of adverse outcomes through defensive weaknesses. We adopt crew resource management as a representative approach to deal with the team factors of human errors, and we suggest a set of crew resource management training procedures for unsafe environments where human errors can have devastating effects. We are developing alternative interfaces against team errors for conditions in which a computer-based procedure system is used in a digitalized main control room; the computer-based procedure system is a representative feature of the digitalized control room. In this study, we propose new interfaces for the computer-based procedure system to reduce feasible team errors, and we are conducting an effectiveness test to validate whether the new interfaces can reduce team errors when operating with a computer-based procedure system in a digitalized control room.

  4. Review of pump suction reducer selection: Eccentric or concentric reducers

    OpenAIRE

    Mahaffey, R M; van Vuuren, S J

    2014-01-01

    Eccentric reducers are traditionally recommended for the pump suction reducer fitting to allow for transportation of air through the fitting to the pump. The ability of a concentric reducer to provide an improved approach flow to the pump while still allowing air to be transported through the fitting is investigated. Computational fluid dynamics (CFD) were utilised to analyse six concentric and six eccentric reducer geometries at four different inlet velocities to determine the flow velocity ...

  5. Intriguing model significantly reduces boarding of psychiatric patients, need for inpatient hospitalization.

    Science.gov (United States)

    2015-01-01

    As new approaches to the care of psychiatric emergencies emerge, one solution is gaining particular traction. Under the Alameda model, which has been put into practice in Alameda County, CA, patients who are brought to regional EDs with emergency psychiatric issues are quickly transferred to a designated emergency psychiatric facility as soon as they are medically stabilized. This alleviates boarding problems in area EDs while also quickly connecting patients with specialized care. With data in hand on the model's effectiveness, developers believe the approach could alleviate boarding problems in other communities as well. The model is funded through a billing code established by California's Medicaid program for crisis stabilization services. Currently, only 22% of the patients brought to the emergency psychiatric facility ultimately need to be hospitalized; the other 78% are able to go home or to an alternative situation. In a 30-day study of the model, involving five community hospitals in Alameda County, CA, researchers found that ED boarding times were as much as 80% lower than comparable ED averages, and that patients were stabilized at least 75% of the time, significantly reducing the need for inpatient hospitalization.

  6. SSTRAP: A computational model for genomic motif discovery ...

    African Journals Online (AJOL)

    Computational methods can potentially provide high-quality prediction of biological molecules such as DNA binding sites and Transcription factors and therefore reduce the time needed for experimental verification and challenges associated with experimental methods. These biological molecules or motifs have significant ...

  7. Significant association between sulfate-reducing bacteria and uranium-reducing microbial communities as revealed by a combined massively parallel sequencing-indicator species approach.

    Science.gov (United States)

    Cardenas, Erick; Wu, Wei-Min; Leigh, Mary Beth; Carley, Jack; Carroll, Sue; Gentry, Terry; Luo, Jian; Watson, David; Gu, Baohua; Ginder-Vogel, Matthew; Kitanidis, Peter K; Jardine, Philip M; Zhou, Jizhong; Criddle, Craig S; Marsh, Terence L; Tiedje, James M

    2010-10-01

    Massively parallel sequencing has provided a more affordable and high-throughput method to study microbial communities, although it has mostly been used in an exploratory fashion. We combined pyrosequencing with a strict indicator species statistical analysis to test if bacteria specifically responded to ethanol injection that successfully promoted dissimilatory uranium(VI) reduction in the subsurface of a uranium contamination plume at the Oak Ridge Field Research Center in Tennessee. Remediation was achieved with a hydraulic flow control consisting of an inner loop, where ethanol was injected, and an outer loop for flow-field protection. This strategy reduced uranium concentrations in groundwater to levels below 0.126 μM and created geochemical gradients in electron donors from the inner-loop injection well toward the outer loop and downgradient flow path. Our analysis with 15 sediment samples from the entire test area found significant indicator species that showed a high degree of adaptation to the three different hydrochemical-created conditions. Castellaniella and Rhodanobacter characterized areas with low pH, heavy metals, and low bioactivity, while sulfate-, Fe(III)-, and U(VI)-reducing bacteria (Desulfovibrio, Anaeromyxobacter, and Desulfosporosinus) were indicators of areas where U(VI) reduction occurred. The abundance of these bacteria, as well as the Fe(III) and U(VI) reducer Geobacter, correlated with the hydraulic connectivity to the substrate injection site, suggesting that the selected populations were a direct response to electron donor addition by the groundwater flow path. A false-discovery-rate approach was implemented to discard false-positive results by chance, given the large amount of data compared.
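
    The false-discovery-rate step mentioned above can be illustrated with the Benjamini-Hochberg procedure, one common FDR control; the study's exact correction may differ, and the p-values below are invented.

      import numpy as np

      def benjamini_hochberg(pvals, alpha=0.05):
          # Reject the hypotheses corresponding to the k smallest p-values, where k is
          # the largest i such that p_(i) <= alpha * i / m.
          p = np.asarray(pvals, dtype=float)
          m = p.size
          order = np.argsort(p)
          thresholds = alpha * (np.arange(1, m + 1) / m)
          below = p[order] <= thresholds
          reject = np.zeros(m, dtype=bool)
          if below.any():
              k = np.max(np.nonzero(below)[0]) + 1
              reject[order[:k]] = True
          return reject

      print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20, 0.74]))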

  8. In-cylinder diesel spray combustion simulations using parallel computation: A performance benchmarking study

    International Nuclear Information System (INIS)

    Pang, Kar Mun; Ng, Hoon Kiat; Gan, Suyin

    2012-01-01

    Highlights: ► A performance benchmarking exercise is conducted for diesel combustion simulations. ► The reduced chemical mechanism shows its advantages over base and skeletal models. ► High efficiency and great reduction of CPU runtime are achieved through 4-node solver. ► Increasing ISAT memory from 0.1 to 2 GB reduces the CPU runtime by almost 35%. ► Combustion and soot processes are predicted well with minimal computational cost. - Abstract: In the present study, in-cylinder diesel combustion simulation was performed with parallel processing on an Intel Xeon Quad-Core platform to allow both fluid dynamics and chemical kinetics of the surrogate diesel fuel model to be solved simultaneously on multiple processors. Here, Cartesian Z-Coordinate was selected as the most appropriate partitioning algorithm since it computationally bisects the domain such that the dynamic load associated with fuel particle tracking was evenly distributed during parallel computations. Other variables examined included number of compute nodes, chemistry sizes and in situ adaptive tabulation (ISAT) parameters. Based on the performance benchmarking test conducted, parallel configuration of 4-compute node was found to reduce the computational runtime most efficiently whereby a parallel efficiency of up to 75.4% was achieved. The simulation results also indicated that accuracy level was insensitive to the number of partitions or the partitioning algorithms. The effect of reducing the number of species on computational runtime was observed to be more significant than reducing the number of reactions. Besides, the study showed that an increase in the ISAT maximum storage of up to 2 GB reduced the computational runtime by 50%. Also, the ISAT error tolerance of 10 −3 was chosen to strike a balance between results accuracy and computational runtime. The optimised parameters in parallel processing and ISAT, as well as the use of the in-house reduced chemistry model allowed accurate
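
    For readers unfamiliar with the metric, parallel efficiency is simply speedup per node; the arithmetic below back-calculates a runtime consistent with the reported 75.4% on four nodes from a hypothetical single-node time.

      # Illustrative arithmetic only: the serial runtime is hypothetical.
      t_serial = 100.0                               # hypothetical 1-node runtime (hours)
      n_nodes = 4
      t_parallel = t_serial / (0.754 * n_nodes)      # runtime consistent with 75.4% efficiency

      speedup = t_serial / t_parallel
      efficiency = speedup / n_nodes * 100
      print(f"speedup = {speedup:.2f}x, efficiency = {efficiency:.1f}%")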

  9. Computationally fast estimation of muscle tension for realtime bio-feedback.

    Science.gov (United States)

    Murai, Akihiko; Kurosaki, Kosuke; Yamane, Katsu; Nakamura, Yoshihiko

    2009-01-01

    In this paper, we propose a method for realtime estimation of whole-body muscle tensions. The main problem of muscle tension estimation is that there are infinite number of solutions to realize a particular joint torque due to the actuation redundancy. Numerical optimization techniques, e.g. quadratic programming, are often employed to obtain a unique solution, but they are usually computationally expensive. For example, our implementation of quadratic programming takes about 0.17 sec per frame on the musculoskeletal model with 274 elements, which is far from realtime computation. Here, we propose to reduce the computational cost by using EMG data and by reducing the number of unknowns in the optimization. First, we compute the tensions of muscles with surface EMG data based on a biological muscle data, which is a very efficient process. We also assume that their synergists have the same activity levels and compute their tensions with the same model. Tensions of the remaining muscles are then computed using quadratic programming, but the number of unknowns is significantly reduced by assuming that the muscles in the same heteronymous group have the same activity level. The proposed method realizes realtime estimation and visualization of the whole-body muscle tensions that can be applied to sports training and rehabilitation.
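
    The redundancy problem and the effect of grouping muscles can be sketched with a minimum-norm torque distribution. The moment arms below are invented, non-negativity of muscle tension is ignored, and the EMG-driven and physiological terms of the paper are omitted, so this is only a stand-in for the quadratic program.

      import numpy as np

      moment_arms = np.array([[0.04, 0.035, 0.02, -0.03, -0.025]])   # m, one joint, 5 muscles
      tau = np.array([12.0])                                          # Nm, required joint torque

      # Least-norm tensions via the pseudo-inverse (redundant: 1 equation, 5 unknowns).
      tensions = np.linalg.pinv(moment_arms) @ tau
      print(tensions, moment_arms @ tensions)                         # reproduces tau

      # Grouping synergists (muscles 0-2 share one value, 3-4 another) shrinks the unknowns.
      groups = np.array([[1, 1, 1, 0, 0],
                         [0, 0, 0, 1, 1]]).T                          # 5 muscles -> 2 group values
      A = moment_arms @ groups
      group_tensions = np.linalg.pinv(A) @ tau
      print(groups @ group_tensions)                                  # per-muscle tensions again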

  10. Reduced-order modeling (ROM) for simulation and optimization powerful algorithms as key enablers for scientific computing

    CERN Document Server

    Milde, Anja; Volkwein, Stefan

    2018-01-01

    This edited monograph collects research contributions and addresses the advancement of efficient numerical procedures in the area of model order reduction (MOR) for simulation, optimization and control. The topical scope includes, but is not limited to, new out-of-the-box algorithmic solutions for scientific computing, e.g. reduced basis methods for industrial problems and MOR approaches for electrochemical processes. The target audience comprises research experts and practitioners in the field of simulation, optimization and control, but the book may also be beneficial for graduate students.

  11. Reduced radiation exposure to the mammary glands in thoracic computed tomography using organ-based tube-current modulation

    International Nuclear Information System (INIS)

    Munechika, Jiro; Ohgiya, Yoshimitsu; Gokan, Takehiko; Hashimoto, Toshi; Iwai, Tsugunori

    2013-01-01

    Organ-based tube-current modulation has been used to reduce radiation exposure to specific organs. However, there are no reports yet published on reducing radiation exposure in clinical cases. In this study, we assessed the reduction in radiation exposure to the mammary glands during thoracic computed tomography (CT) using X-CARE. In a phantom experiment, the use of X-CARE reduced radiation exposure at the midline of the precordial region by a maximum of 45.1%. In our corresponding clinical study, CT was performed using X-CARE in 15 patients, and without X-CARE in another 15. Compared to the non-X-CARE group, radiation exposure was reduced in the X-CARE group at the midline of the precordial region by 22.3% (P < 0.05). X-CARE thus reduced radiation exposure at the midline of the precordial region and allowed us to obtain consistent CT values without increasing noise. However, this study revealed increases in radiation exposure at the lateral sides of the breasts. It is conceivable that patients' breasts were laterally displaced by gravity under the standard thoracic imaging conditions. Further studies that consider factors such as body size and adjustment of imaging conditions may be needed in the future. (author)

  12. Secure cloud computing implementation study for Singapore military operations

    OpenAIRE

    Guoquan, Lai

    2016-01-01

    Approved for public release; distribution is unlimited. Cloud computing benefits organizations in many ways. With characteristics such as resource pooling, broad network access, on-demand self-service, and rapid elasticity, an organization's overall IT management can be significantly reduced (in terms of labor, software, and hardware) and its work processes made more efficient. However, is cloud computing suitable for the Singapore Armed Forces (SAF)? How can the SAF migrate its traditional...

  13. Lime and Phosphate Amendment Can Significantly Reduce Uptake of Cd and Pb by Field-Grown Rice

    Directory of Open Access Journals (Sweden)

    Rongbo Xiao

    2017-03-01

    Agricultural soils are suffering from increasing heavy metal pollution, among which paddy soil polluted by heavy metals is frequently reported and has elicited great public concern. In this study, we carried out field experiments on paddy soil around a Pb-Zn mine to study the amelioration effects of four soil amendments on the uptake of Cd and Pb by rice, and to make recommendations for paddy soil heavy metal remediation, particularly for combined pollution of Cd and Pb. The results showed that all four treatments can significantly reduce the Cd and Pb content in the late rice grain compared with the early rice, among which the combination amendment of lime and phosphate had the best remediation effects: rice grain Cd content was reduced by 85% and 61%, respectively, for the late rice and the early rice, and Pb content by 30% in the late rice grain. The high reduction effects under the Ca + P treatment might be attributed to the increase of soil pH from 5.5 to 6.7. We also found that the influence of the Ca + P treatment on rice production was insignificant, while the available Cd and Pb content in soil was reduced by 16.5% and 11.7%, respectively.

  14. A Comprehensive study on Cloud Green Computing: To Reduce Carbon Footprints Using Clouds

    OpenAIRE

    Chiranjeeb Roy Chowdhury, Arindam Chatterjee, Alap Sardar, Shalabh Agarwal, Asoke Nath

    2013-01-01

    Cloud computing and green computing are two of the most emergent areas in information communication technology (ICT), with immense applications across the entire globe. The future trends of ICT will be more towards cloud computing and green computing. Due to tremendous improvements in computer networks, people now prefer network-based computing to in-house computing. In any business sector, daily business and individual computing are now migrating from individual hard drives...

  15. The use of wavelet filters for reducing noise in posterior fossa Computed Tomography images

    International Nuclear Information System (INIS)

    Pita-Machado, Reinado; Perez-Diaz, Marlen; Lorenzo-Ginori, Juan V.; Bravo-Pino, Rolando

    2014-01-01

    Wavelet transform based de-noising, like wavelet shrinkage, gives good results in CT. This procedure affects the spatial resolution very little. Some applications are reconstruction methods, while others are a posteriori de-noising methods. De-noising after reconstruction is very difficult because the noise is non-stationary and has unknown distribution. Therefore, methods which work in the sinogram space don't have this problem, because they always work over a known noise distribution at this point. On the other hand, the posterior fossa in a head CT is a very complex region for physicians, because it is commonly affected by artifacts and noise which are not eliminated during the reconstruction procedure. This can lead to some false positive evaluations. The purpose of our present work is to compare different wavelet shrinkage de-noising filters to reduce noise in the sinogram space, particularly in images of the posterior fossa within CT scans. This work describes an experimental search for the best wavelets to reduce Poisson noise in Computed Tomography (CT) scans. Results showed that de-noising with wavelet filters improved the quality of the posterior fossa region in terms of an increased CNR, without noticeable structural distortions.
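
    A generic wavelet-shrinkage pass (soft thresholding of detail coefficients) is sketched below using PyWavelets, which is assumed to be available; it operates on a plain 2-D array for illustration rather than on sinogram data, and the wavelet, decomposition level and threshold are arbitrary choices.

      import numpy as np
      import pywt  # PyWavelets, assumed available

      def wavelet_denoise(image, wavelet="db4", level=3, threshold=20.0):
          # Decompose, soft-threshold the detail coefficients, and reconstruct.
          coeffs = pywt.wavedec2(image, wavelet, level=level)
          coeffs = [coeffs[0]] + [
              tuple(pywt.threshold(c, threshold, mode="soft") for c in detail)
              for detail in coeffs[1:]
          ]
          return pywt.waverec2(coeffs, wavelet)

      rng = np.random.default_rng(0)
      noisy = 100.0 + rng.poisson(lam=50.0, size=(128, 128)).astype(float)
      denoised = wavelet_denoise(noisy)
      print(noisy.std(), denoised.std())   # variance is reduced after shrinkage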

  16. Computational fluid dynamics simulations of light water reactor flows

    International Nuclear Information System (INIS)

    Tzanos, C.P.; Weber, D.P.

    1999-01-01

    Advances in computational fluid dynamics (CFD), turbulence simulation, and parallel computing have made feasible the development of three-dimensional (3-D) single-phase and two-phase flow CFD codes that can simulate fluid flow and heat transfer in realistic reactor geometries with significantly reduced reliance, especially in single phase, on empirical correlations. The objective of this work was to assess the predictive power and computational efficiency of a CFD code in the analysis of a challenging single-phase light water reactor problem, as well as to identify areas where further improvements are needed

  17. Incorporation of catalytic dehydrogenation into fischer-tropsch synthesis to significantly reduce carbon dioxide emissions

    Science.gov (United States)

    Huffman, Gerald P.

    2012-11-13

    A new method of producing liquid transportation fuels from coal and other hydrocarbons that significantly reduces carbon dioxide emissions by combining Fischer-Tropsch synthesis with catalytic dehydrogenation is claimed. Catalytic dehydrogenation (CDH) of the gaseous products (C1-C4) of Fischer-Tropsch synthesis (FTS) can produce large quantities of hydrogen while converting the carbon to multi-walled carbon nanotubes (MWCNT). Incorporation of CDH into a FTS-CDH plant converting coal to liquid fuels can eliminate all or most of the CO.sub.2 emissions from the water-gas shift (WGS) reaction that is currently used to elevate the H.sub.2 level of coal-derived syngas for FTS. Additionally, the FTS-CDH process saves large amounts of water used by the WGS reaction and produces a valuable by-product, MWCNT.

  18. Computer vision syndrome among computer office workers in a developing country: an evaluation of prevalence and risk factors.

    Science.gov (United States)

    Ranasinghe, P; Wathurapatha, W S; Perera, Y S; Lamabadusuriya, D A; Kulatunga, S; Jayawardana, N; Katulanda, P

    2016-03-09

    Computer vision syndrome (CVS) is a group of visual symptoms experienced in relation to the use of computers. Nearly 60 million people suffer from CVS globally, resulting in reduced productivity at work and reduced quality of life of the computer worker. The present study aims to describe the prevalence of CVS and its associated factors among a nationally-representative sample of Sri Lankan computer workers. Two thousand five hundred computer office workers were invited for the study from all nine provinces of Sri Lanka between May and December 2009. A self-administered questionnaire was used to collect socio-demographic data, symptoms of CVS and its associated factors. A binary logistic regression analysis was performed in all patients with 'presence of CVS' as the dichotomous dependent variable and age, gender, duration of occupation, daily computer usage, pre-existing eye disease, not using a visual display terminal (VDT) filter, adjusting brightness of screen, use of contact lenses, angle of gaze and ergonomic practices knowledge as the continuous/dichotomous independent variables. A similar binary logistic regression analysis was performed in all patients with 'severity of CVS' as the dichotomous dependent variable and other continuous/dichotomous independent variables. Sample size was 2210 (response rate: 88.4%). Mean age was 30.8 ± 8.1 years and 50.8% of the sample were males. The 1-year prevalence of CVS in the study population was 67.4%. Female gender (OR: 1.28), duration of occupation (OR: 1.07), daily computer usage (OR: 1.10), pre-existing eye disease (OR: 4.49), not using a VDT filter (OR: 1.02), use of contact lenses (OR: 3.21) and ergonomics practices knowledge (OR: 1.24) were all significantly associated with the presence of CVS. The duration of occupation (OR: 1.04) and presence of pre-existing eye disease (OR: 1.54) were significantly associated with the presence of 'severe CVS'. Sri Lankan computer workers had a high prevalence of CVS. Female gender
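
    The analysis described, a binary logistic regression whose exponentiated coefficients are reported as odds ratios, can be sketched as follows with synthetic data (statsmodels assumed available); the predictors and effect sizes below are invented, not the study's.

      import numpy as np
      import statsmodels.api as sm   # assumed available

      rng = np.random.default_rng(0)
      n = 500
      female = rng.integers(0, 2, n)
      daily_hours = rng.uniform(1, 10, n)
      pre_existing_eye_disease = rng.integers(0, 2, n)

      # Synthetic outcome generated from a hypothetical logistic model.
      logit = -1.0 + 0.25 * female + 0.10 * daily_hours + 1.5 * pre_existing_eye_disease
      cvs = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      X = sm.add_constant(np.column_stack([female, daily_hours, pre_existing_eye_disease]))
      result = sm.Logit(cvs, X).fit(disp=False)
      odds_ratios = np.exp(result.params)      # OR per predictor (const, female, hours, eye disease)
      print(odds_ratios)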

  19. A study on the effectiveness of lockup-free caches for a Reduced Instruction Set Computer (RISC) processor

    OpenAIRE

    Tharpe, Leonard.

    1992-01-01

    Approved for public release; distribution is unlimited. This thesis presents a simulation and analysis of the Reduced Instruction Set Computer (RISC) architecture and the effects on RISC performance of a lockup-free cache interface. RISC architectures achieve high performance by having a small, but sufficient, instruction set with most instructions executing in one clock cycle. Current RISC performance ranges from 1.5 to 2.0 CPI. The goal of RISC is to attain a CPI of 1.0. The major hind...

  20. Iterative concurrent reconstruction algorithms for emission computed tomography

    International Nuclear Information System (INIS)

    Brown, J.K.; Hasegawa, B.H.; Lang, T.F.

    1994-01-01

    Direct reconstruction techniques, such as those based on filtered backprojection, are typically used for emission computed tomography (ECT), even though it has been argued that iterative reconstruction methods may produce better clinical images. The major disadvantage of iterative reconstruction algorithms, and a significant reason for their lack of clinical acceptance, is their computational burden. We outline a new class of "concurrent" iterative reconstruction techniques for ECT in which the reconstruction process is reorganized such that a significant fraction of the computational processing occurs concurrently with the acquisition of ECT projection data. These new algorithms use the 10-30 min required for acquisition of a typical SPECT scan to iteratively process the available projection data, significantly reducing the requirements for post-acquisition processing. These algorithms are tested on SPECT projection data from a Hoffman brain phantom acquired with 2 × 10⁵ counts in 64 views, each having 64 projections. The SPECT images are reconstructed as 64 × 64 tomograms, starting with six angular views. Other angular views are added to the reconstruction process sequentially, in a manner that reflects their availability for a typical acquisition protocol. The results suggest that if T seconds of concurrent processing are used, the reconstruction processing time required after completion of the data acquisition can be reduced by at least T/3 seconds. (Author)

  1. Prevalence and clinical significance of pleural microbubbles in computed tomography of thoracic empyema

    International Nuclear Information System (INIS)

    Smolikov, A.; Smolyakov, R.; Riesenberg, K.; Schlaeffer, F.; Borer, A.; Cherniavsky, E.; Gavriel, A.; Gilad, J.

    2006-01-01

    AIM: To determine the prevalence and clinical significance of pleural microbubbles in thoracic empyema. MATERIALS AND METHODS: The charts of 71 consecutive patients with empyema were retrospectively reviewed for relevant demographic, laboratory, microbiological, therapeutic and outcome data. Computed tomography (CT) images were reviewed for various signs of empyema as well as pleural microbubbles. Two patient groups, with and without microbubbles were compared. RESULTS: Mean patient age was 49 years and 72% were males. Microbubbles were detected in 58% of patients. There were no significant differences between patients with and without microbubbles in regard to pleural fluid chemistry. A causative organism was identified in about 75% of cases in both. There was no difference in the rates of pleural thickening and enhancement, increased extra-pleural fat attenuation, air-fluid levels or loculations. Microbubbles were diagnosed after a mean of 7.8 days from admission. Thoracentesis before CT was performed in 90 and 57% of patients with and without microbubbles (p=0.0015), respectively. Patients with microbubbles were more likely to require repeated drainage (65.9 versus 36.7%, p=0.015) and surgical decortication (31.7 versus 6.7%, p=0.011). Mortalities were 9.8 and 6.6% respectively (p=0.53). CONCLUSION: Pleural microbubbles are commonly encountered in CT imaging of empyema but have not been systematically studied to date. Microbubbles may be associated with adverse outcome such as repeated drainage or surgical decortication. The sensitivity and specificity of this finding and its prognostic implications need further assessment

  2. Interaction between FOXO1A-209 Genotype and Tea Drinking is Significantly Associated with Reduced Mortality at Advanced Ages

    DEFF Research Database (Denmark)

    Zeng, Yi; Chen, Huashuai; Ni, Ting

    2016-01-01

    Based on the genotypic/phenotypic data from the Chinese Longitudinal Healthy Longevity Survey (CLHLS) and a Cox proportional hazards model, the present study demonstrates that interactions between carrying FOXO1A-209 genotypes and tea drinking are significantly associated with lower risk of mortality at advanced ages. Such significant association is replicated in two independent Han Chinese CLHLS cohorts (p = 0.028-0.048 in the discovery and replication cohorts, and p = 0.003-0.016 in the combined dataset). We found the associations between tea drinking and reduced mortality are much stronger among carriers of the FOXO1A-209 genotype compared to non-carriers, and drinking tea is associated with a reversal of the negative effects of carrying FOXO1A-209 minor alleles, that is, from a substantially increased mortality risk to substantially reduced mortality risk at advanced ages. The impacts are considerably...
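
    As a schematic illustration of how such a gene-by-environment interaction can be tested in a Cox model, the sketch below fits a proportional hazards model with an explicit interaction term on synthetic data; the column names, effect sizes and data are hypothetical and are not the CLHLS variables or results.

    ```python
    # Hypothetical sketch of testing a genotype-by-tea-drinking interaction in a
    # Cox proportional hazards model (synthetic data, not the CLHLS analysis).
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    n = 2000
    df = pd.DataFrame({
        "foxo1a_carrier": rng.integers(0, 2, n),   # hypothetical minor-allele carrier flag
        "tea_drinker": rng.integers(0, 2, n),
    })
    df["carrier_x_tea"] = df["foxo1a_carrier"] * df["tea_drinker"]

    # Synthetic survival times so the example runs end to end.
    log_hazard = 0.2 * df["foxo1a_carrier"] - 0.1 * df["tea_drinker"] - 0.3 * df["carrier_x_tea"]
    df["duration"] = rng.exponential(5.0 / np.exp(log_hazard))
    df["event"] = (rng.random(n) < 0.8).astype(int)  # roughly 20% treated as right-censored

    cph = CoxPHFitter()
    cph.fit(df, duration_col="duration", event_col="event")
    print(cph.summary[["coef", "exp(coef)", "p"]])   # the interaction row tests the joint effect
    ```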

  3. Induction-heating MOCVD reactor with significantly improved heating efficiency and reduced harmful magnetic coupling

    KAUST Repository

    Li, Kuang-Hui; Alotaibi, Hamad S.; Sun, Haiding; Lin, Ronghui; Guo, Wenzhe; Torres-Castanedo, Carlos G.; Liu, Kaikai; Galan, Sergio V.; Li, Xiaohang

    2018-01-01

    In a conventional induction-heating III-nitride metalorganic chemical vapor deposition (MOCVD) reactor, the induction coil is outside the chamber. Therefore, the magnetic field does not couple well with the susceptor, leading to compromised heating efficiency and harmful coupling with the gas inlet and thus possible overheating. Hence, the gas inlet has to be at a minimum distance away from the susceptor. Because of the elongated flow path, premature reactions can be more severe, particularly between Al- and B-containing precursors and NH3. Here, we propose a structure that can significantly improve the heating efficiency and allow the gas inlet to be closer to the susceptor. Specifically, the induction coil is designed to surround the vertical cylinder of a T-shaped susceptor comprising the cylinder and a top horizontal plate holding the wafer substrate within the reactor. Therefore, the cylinder couples most of the magnetic field and serves as the thermal source for the plate. Furthermore, the plate can block and thus significantly reduce the uncoupled magnetic field above the susceptor, thereby allowing the gas inlet to be closer. The results show increases of approximately 140% and 2.6 times in the heating and susceptor coupling efficiencies, respectively, as well as a 90% reduction in the harmful magnetic flux on the gas inlet.

  4. Induction-heating MOCVD reactor with significantly improved heating efficiency and reduced harmful magnetic coupling

    KAUST Repository

    Li, Kuang-Hui

    2018-02-23

    In a conventional induction-heating III-nitride metalorganic chemical vapor deposition (MOCVD) reactor, the induction coil is outside the chamber. Therefore, the magnetic field does not couple well with the susceptor, leading to compromised heating efficiency and harmful coupling with the gas inlet and thus possible overheating. Hence, the gas inlet has to be at a minimum distance away from the susceptor. Because of the elongated flow path, premature reactions can be more severe, particularly between Al- and B-containing precursors and NH3. Here, we propose a structure that can significantly improve the heating efficiency and allow the gas inlet to be closer to the susceptor. Specifically, the induction coil is designed to surround the vertical cylinder of a T-shaped susceptor comprising the cylinder and a top horizontal plate holding the wafer substrate within the reactor. Therefore, the cylinder couples most of the magnetic field and serves as the thermal source for the plate. Furthermore, the plate can block and thus significantly reduce the uncoupled magnetic field above the susceptor, thereby allowing the gas inlet to be closer. The results show increases of approximately 140% and 2.6 times in the heating and susceptor coupling efficiencies, respectively, as well as a 90% reduction in the harmful magnetic flux on the gas inlet.

  5. The transhumanism of Ray Kurzweil. Is biological ontology reducible to computation?

    Directory of Open Access Journals (Sweden)

    Javier Monserrat

    2016-02-01

    Computer programs, primarily the engineering of machine vision and the programming of somatic sensors, have already made it possible to build highly refined androids and cyborgs, and will do so even more convincingly in the future. These machines will collaborate with humans and prompt new moral reflection on respecting the ontological dignity of the new humanoid machines. In addition, both humans and the new androids will be connected to vast external computer networks that will raise efficiency in the mastery of body and nature to almost incredible levels. However, our current scientific knowledge about, on the one hand, the hardware and software that will support both the humanoid machines and the external computer networks, built with existing (and foreseeable medium- and long-term) engineering, and, on the other hand, animal and human behavior arising from the neurobiological structures that produce a psychic system, allows us to conclude that there is no scientific basis for claiming an ontological identity between computational machines and human beings. Accordingly, different ontologies (computational machines and biological entities) will produce different functional systems. There may be simulation, but never ontological identity. These ideas are essential for assessing the transhumanism of Ray Kurzweil.

  6. Computing Gröbner fans

    DEFF Research Database (Denmark)

    Fukuda, K.; Jensen, Anders Nedergaard; Thomas, R.R.

    2005-01-01

    This paper presents algorithms for computing the Gröbner fan of an arbitrary polynomial ideal. The computation involves enumeration of all reduced Gröbner bases of the ideal. Our algorithms are based on a uniform definition of the Gröbner fan that applies to both homogeneous and non-homogeneous ideals and a proof that this object is a polyhedral complex. We show that the cells of a Gröbner fan can easily be oriented acyclically and with a unique sink, allowing their enumeration by the memory-less reverse search procedure. The significance of this follows from the fact that Gröbner fans are not always normal fans of polyhedra, in which case reverse search applies automatically. Computational results using our implementation of these algorithms in the software package Gfan are included.
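
    The objects being enumerated are the distinct reduced Gröbner bases of one ideal as the term order varies (one per cell of the fan). A small illustration of that dependence, using sympy rather than the Gfan package discussed in the paper:

    ```python
    # Illustration with sympy (not the Gfan package): the reduced Groebner basis
    # of a fixed ideal changes with the monomial order; a Groebner fan enumerates
    # all such distinct reduced bases.
    from sympy import groebner, symbols

    x, y, z = symbols("x y z")
    ideal = [x**2 + y*z - 1, x*y - z**2]

    for order in ("lex", "grlex", "grevlex"):
        gb = groebner(ideal, x, y, z, order=order)
        print(order, gb)
    ```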

  7. Reducing constraints on quantum computer design by encoded selective recoupling

    International Nuclear Information System (INIS)

    Lidar, D.A.; Wu, L.-A.

    2002-01-01

    The requirement of performing both single-qubit and two-qubit operations in the implementation of universal quantum logic often leads to very demanding constraints on quantum computer design. We show here how to eliminate the need for single-qubit operations in a large subset of quantum computer proposals: those governed by isotropic and XXZ-, XY-type anisotropic exchange interactions. Our method employs an encoding of one logical qubit into two physical qubits, while logic operations are performed using an analogue of the NMR selective recoupling method.
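
    The encoded-qubit idea can be checked numerically. Assuming the common choice of code words |0_L> = |01> and |1_L> = |10> (an assumption of this sketch, not a statement of the paper's exact conventions), exchange-type two-qubit terms restricted to the code space act as single-qubit logic:

    ```python
    # Numerical check of the encoding idea, assuming |0_L> = |01>, |1_L> = |10>:
    # exchange-type two-qubit terms act as logical single-qubit operators.
    import numpy as np

    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.diag([1.0, -1.0]).astype(complex)

    ket01 = np.array([0, 1, 0, 0], dtype=complex)     # |0_L>
    ket10 = np.array([0, 0, 1, 0], dtype=complex)     # |1_L>
    P = np.column_stack([ket01, ket10])               # 4x2 isometry onto the code space

    xy_exchange = np.kron(X, X) + np.kron(Y, Y)       # XY-type exchange interaction
    z_difference = 0.5 * (np.kron(Z, I2) - np.kron(I2, Z))

    print(np.real_if_close(P.conj().T @ xy_exchange @ P))   # equals 2 * logical X
    print(np.real_if_close(P.conj().T @ z_difference @ P))  # equals logical Z
    ```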

  8. Office ergonomics: deficiencies in computer workstation design.

    Science.gov (United States)

    Shikdar, Ashraf A; Al-Kindi, Mahmoud A

    2007-01-01

    The objective of this research was to study and identify ergonomic deficiencies in computer workstation design in typical offices. Physical measurements and a questionnaire were used to study 40 workstations. Major ergonomic deficiencies were found in physical design and layout of the workstations, employee postures, work practices, and training. The consequences in terms of user health and other problems were significant. Forty-five percent of the employees used nonadjustable chairs, 48% of computers faced windows, 90% of the employees used computers more than 4 hrs/day, 45% of the employees adopted bent and unsupported back postures, and 20% used office tables for computers. Major problems reported were eyestrain (58%), shoulder pain (45%), back pain (43%), arm pain (35%), wrist pain (30%), and neck pain (30%). These results indicated serious ergonomic deficiencies in office computer workstation design, layout, and usage. Strategies to reduce or eliminate ergonomic deficiencies in computer workstation design were suggested.

  9. A Recombinant Multi-Stage Vaccine against Paratuberculosis Significantly Reduces Bacterial Level in Tissues without Interference in Diagnostics

    DEFF Research Database (Denmark)

    Jungersen, Gregers; Thakur, Aneesh; Aagaard, C.

    ...... PPDj-specific IFN-γ responses or positive PPDa or PPDb skin tests developed in vaccinees. Antibodies and cell-mediated immune responses were developed against FET11 antigens, however. At necropsy at 8 or 12 months of age, relative Map burden was determined in a number of gut tissues by quantitative IS900 PCR and revealed significantly reduced levels of Map and reduced histopathology. Diagnostic tests for antibody responses and cell-mediated immune responses, used as surrogates of infection, corroborated the observed vaccine efficacy: Five of seven non-vaccinated calves seroconverted in ID Screen......-γ assay responses from 40 to 52 weeks compared to non-vaccinated calves. These results indicate the FET11 vaccine can be used to accelerate eradication of paratuberculosis while surveillance or test-and-manage control programs for tuberculosis and Johne's disease remain in place. Funded by EMIDA ERA...

  10. Behavior Life Style Analysis for Mobile Sensory Data in Cloud Computing through MapReduce

    Science.gov (United States)

    Hussain, Shujaat; Bang, Jae Hun; Han, Manhyung; Ahmed, Muhammad Idris; Amin, Muhammad Bilal; Lee, Sungyoung; Nugent, Chris; McClean, Sally; Scotney, Bryan; Parr, Gerard

    2014-01-01

    Cloud computing has revolutionized healthcare in today's world as it can be seamlessly integrated into a mobile application and sensor devices. The sensory data is then transferred from these devices to the public and private clouds. In this paper, a hybrid and distributed environment is built which is capable of collecting data from the mobile phone application and store it in the cloud. We developed an activity recognition application and transfer the data to the cloud for further processing. Big data technology Hadoop MapReduce is employed to analyze the data and create user timeline of user's activities. These activities are visualized to find useful health analytics and trends. In this paper a big data solution is proposed to analyze the sensory data and give insights into user behavior and lifestyle trends. PMID:25420151
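
    As a rough illustration of the kind of MapReduce step involved, the following Hadoop-streaming-style mapper/reducer aggregates per-user activity durations into a daily timeline; the comma-separated record layout and field names are hypothetical, not the paper's actual schema.

    ```python
    # Hadoop-streaming-style sketch (hypothetical record format: each input line
    # is "user_id,date,activity,duration_seconds"); not the authors' code.
    import sys
    from collections import defaultdict

    def mapper(lines):
        for line in lines:
            user, date, activity, seconds = line.strip().split(",")
            # Key groups records per user and day; value carries the activity time.
            print(f"{user}:{date}\t{activity}:{seconds}")

    def reducer(lines):
        totals = defaultdict(lambda: defaultdict(float))
        for line in lines:
            key, value = line.strip().split("\t")
            activity, seconds = value.split(":")
            totals[key][activity] += float(seconds)
        for key, per_activity in sorted(totals.items()):
            timeline = ", ".join(f"{a}={s:.0f}s" for a, s in sorted(per_activity.items()))
            print(f"{key}\t{timeline}")

    if __name__ == "__main__":
        # Run as `python lifestyle.py map < raw.csv` or `python lifestyle.py reduce < mapped.tsv`.
        (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)
    ```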

  11. Evaluation of Musculoskeletal Disorders among computer Users in Isfahan

    Directory of Open Access Journals (Sweden)

    Ayoub Ghanbary

    2015-08-01

    Along with the widespread use of computers, work-related musculoskeletal disorders (MSDs) have become the most prevalent ergonomic problem among computer users. Evaluating musculoskeletal disorders among computer users makes it possible to plan interventions that reduce them. The aim of the present study was to assess musculoskeletal disorders among computer users at Isfahan University using the Rapid Office Strain Assessment (ROSA) method and the Nordic questionnaire. This cross-sectional study was conducted on 96 computer users at Isfahan University. Data were collected with the Nordic questionnaire and the ROSA checklist and analyzed in SPSS 20 using descriptive statistics, correlation, linear regression and ANOVA. The Nordic questionnaire showed that musculoskeletal disorders were most prevalent in the shoulder (62.1%), neck (54.9%) and back (53.1%). Based on the ROSA risk level, 19 individuals were in the low-risk area, 50 in the notification area and 27 in the hazard area requiring ergonomic intervention. Musculoskeletal disorders were more prevalent in women than in men. ANOVA also showed a direct and significant association of age and work experience with the final ROSA score (p<0.001). The results indicate that the prevalence of MSDs among computer users at Isfahan universities is high, and ergonomic interventions should be carried out, such as computer workstation redesign, training users in ergonomic principles of computer work, reducing daily hours of computer work, and keeping the elbows close to the body at an angle between 90 and 120 degrees.

  12. Reproducibility of Dynamic Computed Tomography Brain Perfusion Measurements in Patients with Significant Carotid Artery Stenosis

    International Nuclear Information System (INIS)

    Serafin, Z.; Kotarski, M.; Karolkiewicz, M.; Mindykowski, R.; Lasek, W.; Molski, S.; Gajdzinska, M.; Nowak-Nowacka, A.

    2009-01-01

    Background: Perfusion computed tomography (PCT) determination is a minimally invasive and widely available technique for brain blood flow assessment, but its application may be restricted by large variation of results. Purpose: To determine the intraobserver, interobserver, and inter examination variability of brain PCT absolute measurements in patients with significant carotid artery stenosis (CAS), and to evaluate the effect of the use of relative perfusion values on PCT reproducibility. Material and Methods: PCT imaging was completed in 61 patients before endarterectomy, and in 38 of these within 4 weeks after treatment. Cerebral blood flow (CBF), cerebral blood volume (CBV), time to peak (TTP), and peak enhancement intensity (PEI) were calculated with the maximum slope method. Inter examination variability was evaluated based on perfusion of hemisphere contralateral to the treated CAS, from repeated examinations. Interobserver and intraobserver variability were established for the untreated side, based on pretreatment examination. Results: Interobserver and intraobserver variability were highest for CBF measurement (28.8% and 32.5%, respectively), and inter examination variability was the highest for CBV (24.1%). Intraobserver and interobserver variability were higher for absolute perfusion values compared with their respective ratios for CBF and TTP. The only statistically significant difference between perfusion values measured by two observers was for CBF (mean 78.3 vs. 67.5 ml/100 g/min). The inter examination variability of TTP (12.1%) was significantly lower than the variability of other absolute perfusion measures, and the inter examination variability of ratios was significantly lower than absolute values for all the parameters. Conclusion: In longitudinal studies of patients with chronic cerebral ischemia, PCT ratios and either TTP or CBV are more suitable measures than absolute CBF values, because of their considerably lower inter- and intraobserver

  13. Reproducibility of Dynamic Computed Tomography Brain Perfusion Measurements in Patients with Significant Carotid Artery Stenosis

    Energy Technology Data Exchange (ETDEWEB)

    Serafin, Z.; Kotarski, M.; Karolkiewicz, M.; Mindykowski, R.; Lasek, W.; Molski, S.; Gajdzinska, M.; Nowak-Nowacka, A. (Dept. of Radiology and Diagnostic Imaging, and Dept. of General and Vascular Surgery, Nicolaus Copernicus Univ., Collegium Medicum, Bydgoszcz (Poland))

    2009-02-15

    Background: Perfusion computed tomography (PCT) determination is a minimally invasive and widely available technique for brain blood flow assessment, but its application may be restricted by large variation of results. Purpose: To determine the intraobserver, interobserver, and inter examination variability of brain PCT absolute measurements in patients with significant carotid artery stenosis (CAS), and to evaluate the effect of the use of relative perfusion values on PCT reproducibility. Material and Methods: PCT imaging was completed in 61 patients before endarterectomy, and in 38 of these within 4 weeks after treatment. Cerebral blood flow (CBF), cerebral blood volume (CBV), time to peak (TTP), and peak enhancement intensity (PEI) were calculated with the maximum slope method. Inter examination variability was evaluated based on perfusion of hemisphere contralateral to the treated CAS, from repeated examinations. Interobserver and intraobserver variability were established for the untreated side, based on pretreatment examination. Results: Interobserver and intraobserver variability were highest for CBF measurement (28.8% and 32.5%, respectively), and inter examination variability was the highest for CBV (24.1%). Intraobserver and interobserver variability were higher for absolute perfusion values compared with their respective ratios for CBF and TTP. The only statistically significant difference between perfusion values measured by two observers was for CBF (mean 78.3 vs. 67.5 ml/100 g/min). The inter examination variability of TTP (12.1%) was significantly lower than the variability of other absolute perfusion measures, and the inter examination variability of ratios was significantly lower than absolute values for all the parameters. Conclusion: In longitudinal studies of patients with chronic cerebral ischemia, PCT ratios and either TTP or CBV are more suitable measures than absolute CBF values, because of their considerably lower inter- and intraobserver

  14. Computational enzyme design approaches with significant biological outcomes: progress and challenges

    OpenAIRE

    Li, Xiaoman; Zhang, Ziding; Song, Jiangning

    2012-01-01

    Enzymes are powerful biocatalysts, however, so far there is still a large gap between the number of enzyme-based practical applications and that of naturally occurring enzymes. Multiple experimental approaches have been applied to generate nearly all possible mutations of target enzymes, allowing the identification of desirable variants with improved properties to meet the practical needs. Meanwhile, an increasing number of computational methods have been developed to assist in the modificati...

  15. Computation of External Quality Factors for RF Structures by Means of Model Order Reduction and a Perturbation Approach

    CERN Document Server

    Flisgen, Thomas; van Rienen, Ursula

    2016-01-01

    External quality factors are significant quantities to describe losses via waveguide ports in radio frequency resonators. The current contribution presents a novel approach to determine external quality factors by means of a two-step procedure: First, a state-space model for the lossless radio frequency structure is generated and its model order is reduced. Subsequently, a perturbation method is applied on the reduced model so that external losses are accounted for. The advantage of this approach results from the fact that the challenges in dealing with lossy systems are shifted to the reduced order model. This significantly saves computational costs. The present paper provides a short overview on existing methods to compute external quality factors. Then, the novel approach is introduced and validated in terms of accuracy and computational time by means of commercial software.

  16. Significant Association between Sulfate-Reducing Bacteria and Uranium-Reducing Microbial Communities as Revealed by a Combined Massively Parallel Sequencing-Indicator Species Approach▿ †

    Science.gov (United States)

    Cardenas, Erick; Wu, Wei-Min; Leigh, Mary Beth; Carley, Jack; Carroll, Sue; Gentry, Terry; Luo, Jian; Watson, David; Gu, Baohua; Ginder-Vogel, Matthew; Kitanidis, Peter K.; Jardine, Philip M.; Zhou, Jizhong; Criddle, Craig S.; Marsh, Terence L.; Tiedje, James M.

    2010-01-01

    Massively parallel sequencing has provided a more affordable and high-throughput method to study microbial communities, although it has mostly been used in an exploratory fashion. We combined pyrosequencing with a strict indicator species statistical analysis to test if bacteria specifically responded to ethanol injection that successfully promoted dissimilatory uranium(VI) reduction in the subsurface of a uranium contamination plume at the Oak Ridge Field Research Center in Tennessee. Remediation was achieved with a hydraulic flow control consisting of an inner loop, where ethanol was injected, and an outer loop for flow-field protection. This strategy reduced uranium concentrations in groundwater to levels below 0.126 μM and created geochemical gradients in electron donors from the inner-loop injection well toward the outer loop and downgradient flow path. Our analysis with 15 sediment samples from the entire test area found significant indicator species that showed a high degree of adaptation to the three different hydrochemical-created conditions. Castellaniella and Rhodanobacter characterized areas with low pH, heavy metals, and low bioactivity, while sulfate-, Fe(III)-, and U(VI)-reducing bacteria (Desulfovibrio, Anaeromyxobacter, and Desulfosporosinus) were indicators of areas where U(VI) reduction occurred. The abundance of these bacteria, as well as the Fe(III) and U(VI) reducer Geobacter, correlated with the hydraulic connectivity to the substrate injection site, suggesting that the selected populations were a direct response to electron donor addition by the groundwater flow path. A false-discovery-rate approach was implemented to discard false-positive results by chance, given the large amount of data compared. PMID:20729318
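
    The false-discovery-rate step mentioned at the end is commonly a Benjamini-Hochberg style correction; a generic sketch of that procedure (not the study's exact pipeline) is shown below for a vector of indicator-species p-values.

    ```python
    # Generic Benjamini-Hochberg FDR control applied to indicator-species
    # p-values (illustrative; not the study's exact statistical pipeline).
    import numpy as np

    def benjamini_hochberg(pvals, alpha=0.05):
        p = np.asarray(pvals, dtype=float)
        m = p.size
        order = np.argsort(p)
        thresholds = alpha * np.arange(1, m + 1) / m      # step-up thresholds i*alpha/m
        passed = p[order] <= thresholds
        rejected = np.zeros(m, dtype=bool)
        if passed.any():
            k = np.nonzero(passed)[0].max()               # largest i with p_(i) <= i*alpha/m
            rejected[order[: k + 1]] = True
        return rejected                                   # True = significant after FDR control

    pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.360]
    print(benjamini_hochberg(pvals, alpha=0.05))
    ```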

  17. Significance evaluation in factor graphs

    DEFF Research Database (Denmark)

    Madsen, Tobias; Hobolth, Asger; Jensen, Jens Ledet

    2017-01-01

    in genomics and the multiple-testing issues accompanying them, accurate significance evaluation is of great importance. We here address the problem of evaluating the statistical significance of observations from factor graph models. Results: Two novel numerical approximations for evaluation of statistical significance are presented: first, a method using importance sampling; second, a method based on a saddlepoint approximation. We develop algorithms to efficiently compute the approximations and compare them to naive sampling and the normal approximation. The individual merits of the methods are analysed both from.... Conclusions: The applicability of the saddlepoint approximation and importance sampling is demonstrated on known models in the factor graph framework. Using the two methods we can substantially reduce computational cost without compromising accuracy. This contribution allows analyses of large datasets....
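
    To make the importance-sampling idea concrete, the following generic sketch estimates a small tail probability P(S >= s_obs) for a sum of independent scores by sampling from an exponentially tilted proposal and reweighting; it is a toy stand-in for the factor-graph statistic, not the paper's implementation.

    ```python
    # Generic importance-sampling estimate of a small tail probability
    # P(S >= s_obs); a toy stand-in for a factor-graph score statistic.
    import numpy as np

    rng = np.random.default_rng(2)
    n, s_obs, theta = 50, 35.0, 0.7          # theta > 0 tilts the proposal toward the tail

    # Null model: S is a sum of n standard normals. Sampling each component from
    # N(theta, 1) and reweighting by the analytic likelihood ratio
    # exp(-theta*S + n*theta^2/2) gives an unbiased, low-variance estimate.
    samples = rng.normal(loc=theta, scale=1.0, size=(200_000, n))
    S = samples.sum(axis=1)
    log_weights = -theta * S + n * theta**2 / 2
    p_hat = np.mean(np.exp(log_weights) * (S >= s_obs))

    print(f"importance-sampling estimate of the p-value: {p_hat:.2e}")
    # A naive sampler would need on the order of 1/p draws to see a single exceedance.
    ```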

  18. Pegasus project. DLC coating and low viscosity oil reduce energy losses significantly

    Energy Technology Data Exchange (ETDEWEB)

    Doerwald, Dave; Jacobs, Ruud [Hauzer Techno Coating (Netherlands). Tribological Coatings

    2012-03-15

    Pegasus, the flying horse from Greek mythology, is a suitable name for the research project initiated by a German automotive OEM with participation of Hauzer Techno Coating and several automotive suppliers. It will enable future automotive vehicles to reduce fuel consumption without losing power. The project described in this article focuses on the rear differential, because reducing friction here can contribute considerably to efficiency improvement of the whole vehicle. Surfaces, coating and oil viscosity have been investigated and interesting conclusions have been reached. (orig.)

  19. Implementation of standardized follow-up care significantly reduces peritonitis in children on chronic peritoneal dialysis.

    Science.gov (United States)

    Neu, Alicia M; Richardson, Troy; Lawlor, John; Stuart, Jayne; Newland, Jason; McAfee, Nancy; Warady, Bradley A

    2016-06-01

    The Standardizing Care to improve Outcomes in Pediatric End stage renal disease (SCOPE) Collaborative aims to reduce peritonitis rates in pediatric chronic peritoneal dialysis patients by increasing implementation of standardized care practices. To assess this, monthly care bundle compliance and annualized monthly peritonitis rates were evaluated from 24 SCOPE centers that were participating at collaborative launch and that provided peritonitis rates for the 13 months prior to launch. Changes in bundle compliance were assessed using either a logistic regression model or a generalized linear mixed model. Changes in average annualized peritonitis rates over time were illustrated using the latter model. In the first 36 months of the collaborative, 644 patients with 7977 follow-up encounters were included. The likelihood of compliance with follow-up care practices increased significantly (odds ratio 1.15, 95% confidence interval 1.10, 1.19). Mean monthly peritonitis rates significantly decreased from 0.63 episodes per patient year (95% confidence interval 0.43, 0.92) prelaunch to 0.42 (95% confidence interval 0.31, 0.57) at 36 months postlaunch. A sensitivity analysis confirmed that as mean follow-up compliance increased, peritonitis rates decreased, reaching statistical significance at 80% at which point the prelaunch rate was 42% higher than the rate in the months following achievement of 80% compliance. In its first 3 years, the SCOPE Collaborative has increased the implementation of standardized follow-up care and demonstrated a significant reduction in average monthly peritonitis rates. Copyright © 2016 International Society of Nephrology. Published by Elsevier Inc. All rights reserved.

  20. Clinical significance of adrenal computed tomography in Addison's disease

    International Nuclear Information System (INIS)

    Sun, Zhong-Hua; Nomura, Kaoru; Toraya, Shohzoh; Ujihara, Makoto; Horiba, Nobuo; Suda, Toshihiro; Tsushima, Toshio; Demura, Hiroshi; Kono, Atsushi

    1992-01-01

    Adrenal computed tomographic (CT) scanning was conducted in twelve patients with Addison's disease during the clinical course. In tuberculous Addison's disease (n=8), three of four patients examined during the first two years after disease onset had bilaterally enlarged adrenals, while one of four had a unilaterally enlarged one. At least one adrenal gland was enlarged after onset in all six patients examined during the first four years. Thereafter, the adrenal glands were atrophied bilaterally, in contrast to the adrenal glands in idiopathic Addison's disease, which were atrophied bilaterally from disease onset (n=2). Adrenal calcification was a less sensitive clue in tracing pathogenesis, i.e., adrenal calcification was observed in five of eight patients with tuberculous Addison's disease, but not in idiopathic patients. Thus, adrenal CT scanning could show the etiology of Addison's disease (infection or autoimmunity) and the phase of Addison's disease secondary to tuberculosis, which may be clinically important for initiating antituberculous treatment. (author)

  1. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Green computing is all about using computers in a smarter, eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals, as well as reduced resource consumption and proper disposal of electronic waste. Computers make up a large part of many people's lives and have traditionally been extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computing and electronic waste in any way they can. Research continues into key areas such as making the use of computers as energy-efficient as possible and designing algorithms and systems for efficiency-related computer technologies.

  2. SLMRACE: a noise-free RACE implementation with reduced computational time

    Science.gov (United States)

    Chauvin, Juliet; Provenzi, Edoardo

    2017-05-01

    We present a faster and noise-free implementation of the RACE algorithm. RACE has mixed characteristics between the famous Retinex model of Land and McCann and the automatic color equalization (ACE) color-correction algorithm. The original random spray-based RACE implementation suffers from two main problems: its computational time and the presence of noise. Here, we will show that it is possible to adapt two techniques recently proposed by Banić et al. to the RACE framework in order to drastically decrease the computational time and noise generation. The implementation will be called smart-light-memory-RACE (SLMRACE).

  3. Assessment of right ventricular function using gated blood pool single photon emission computed tomography in inferior myocardial infarction with or without hemodynamically significant right ventricular infarction

    International Nuclear Information System (INIS)

    Takahashi, Masaharu

    1992-01-01

    Right ventricular function was assessed using gated blood pool single photon emission computed tomography (GSPECT) in 10 normal subjects and 14 patients with inferior myocardial infarction. Three-dimensional background subtraction was achieved by applying an optimal cut-off level. The patient group consisted of 6 patients with definite hemodynamic abnormalities indicative of right ventricular infarction (RVI) and 8 other patients with a significant obstructive lesion at the proximal portion of the right coronary artery without obvious hemodynamic signs of RVI. Right ventricular regional wall motion abnormalities were demonstrated on GSPECT functional images, and the indices of right ventricular function (i.e., the right ventricular ejection fraction (RVEF), the right ventricular peak ejection rate (RVPER) and the right ventricular peak filling rate (RVPFR)) were significantly reduced in the patient group, not only in patients with definite RVI but also in those without obvious hemodynamic signs of RVI, indicating that right ventricular function can be impaired even in the absence of definite hemodynamic signs when the proximal portion of the right coronary artery is obstructed. It is concluded that GSPECT is reliable for the assessment of right ventricular function and regional wall motion, and is also useful for the diagnosis of RVI. (author)

  4. Reduced-rank approximations to the far-field transform in the gridded fast multipole method

    Science.gov (United States)

    Hesford, Andrew J.; Waag, Robert C.

    2011-05-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.
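
    The strategy described above can be sketched generically in a few lines of numpy: build a low-rank cross approximation of a smooth kernel block with partially pivoted ACA, then recompress the thin factors with a truncated SVD. This is an illustration on a toy kernel, not the FMM transformation operators themselves.

    ```python
    # ACA with partial pivoting followed by truncated-SVD recompression, shown on
    # a generic smooth far-field kernel block (illustrative, not the FMM code).
    import numpy as np

    def aca(get_row, get_col, shape, tol=1e-8, max_rank=60):
        m, n = shape
        U, V = [], []                                  # A ~= sum_k outer(U[k], V[k])
        used_rows, i = set(), 0
        for _ in range(max_rank):
            used_rows.add(i)
            row = get_row(i) - sum(u[i] * v for u, v in zip(U, V))
            j = int(np.argmax(np.abs(row)))
            if abs(row[j]) < tol:
                break
            v = row / row[j]
            u = get_col(j) - sum(uk * vk[j] for uk, vk in zip(U, V))
            U.append(u)
            V.append(v)
            candidates = np.abs(u)
            candidates[list(used_rows)] = -1.0         # pick an unused pivot row next
            i = int(np.argmax(candidates))
        return np.array(U).T, np.array(V)              # shapes (m, k) and (k, n)

    rng = np.random.default_rng(3)
    src = rng.random((200, 3))                         # source points
    obs = rng.random((300, 3)) + 5.0                   # well-separated observation points
    K = 1.0 / np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=-1)

    U, V = aca(lambda i: K[i, :], lambda j: K[:, j], K.shape)

    # Recompression: QR the thin factors, SVD the small core, truncate.
    Qu, Ru = np.linalg.qr(U)
    Qv, Rv = np.linalg.qr(V.T)
    W, s, Zt = np.linalg.svd(Ru @ Rv.T)
    keep = s > 1e-8 * s[0]
    U2 = (Qu @ W[:, keep]) * s[keep]
    V2 = Zt[keep, :] @ Qv.T

    print("ACA rank:", U.shape[1], "-> recompressed rank:", U2.shape[1],
          "relative error:", np.linalg.norm(K - U2 @ V2) / np.linalg.norm(K))
    ```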

  5. Behavior Life Style Analysis for Mobile Sensory Data in Cloud Computing through MapReduce

    Directory of Open Access Journals (Sweden)

    Shujaat Hussain

    2014-11-01

    Cloud computing has revolutionized healthcare in today’s world as it can be seamlessly integrated into a mobile application and sensor devices. The sensory data is then transferred from these devices to the public and private clouds. In this paper, a hybrid and distributed environment is built which is capable of collecting data from the mobile phone application and store it in the cloud. We developed an activity recognition application and transfer the data to the cloud for further processing. Big data technology Hadoop MapReduce is employed to analyze the data and create user timeline of user’s activities. These activities are visualized to find useful health analytics and trends. In this paper a big data solution is proposed to analyze the sensory data and give insights into user behavior and lifestyle trends.

  6. MUSIDH, multiple use of simulated demographic histories, a novel method to reduce computation time in microsimulation models of infectious diseases.

    Science.gov (United States)

    Fischer, E A J; De Vlas, S J; Richardus, J H; Habbema, J D F

    2008-09-01

    Microsimulation of infectious diseases requires simulation of many life histories of interacting individuals. In particular, relatively rare infections such as leprosy need to be studied in very large populations. Computation time increases disproportionally with the size of the simulated population. We present a novel method, MUSIDH, an acronym for multiple use of simulated demographic histories, to reduce computation time. Demographic history refers to the processes of birth, death and all other demographic events that should be unrelated to the natural course of an infection, thus non-fatal infections. MUSIDH attaches a fixed number of infection histories to each demographic history, and these infection histories interact as if being the infection history of separate individuals. With two examples, mumps and leprosy, we show that the method can give a factor 50 reduction in computation time at the cost of a small loss in precision. The largest reductions are obtained for rare infections with complex demographic histories.
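
    A toy reading of the MUSIDH idea in a few lines: the (relatively expensive) demographic histories are simulated once, and each is then reused for several independent infection histories of a non-fatal infection. The model below is entirely hypothetical and only illustrates the reuse pattern.

    ```python
    # Toy illustration of reusing simulated demographic histories (MUSIDH idea):
    # simulate births/deaths once, then layer several independent infection
    # histories on top of them. Hypothetical model, not the authors' simulator.
    import numpy as np

    rng = np.random.default_rng(4)

    def simulate_demography(n_people):
        birth = rng.uniform(0.0, 80.0, n_people)            # hypothetical birth times (years)
        lifespan = rng.gamma(shape=8.0, scale=9.0, size=n_people)
        return birth, birth + lifespan                      # (birth, death) per person

    def simulate_infection(birth, death, annual_risk=0.01):
        exposure = death - birth                            # years at risk for a non-fatal infection
        return rng.random(birth.size) < 1.0 - np.exp(-annual_risk * exposure)

    birth, death = simulate_demography(100_000)             # expensive part, done once
    reuse_factor = 50                                       # infection histories per demographic history
    prevalence = [simulate_infection(birth, death).mean() for _ in range(reuse_factor)]
    print(f"mean simulated prevalence over {reuse_factor} reused histories: {np.mean(prevalence):.3f}")
    ```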

  7. Computational Modeling | Bioenergy | NREL

    Science.gov (United States)

    NREL uses computational modeling in its bioenergy research: its models investigate the properties of plant cell walls, which are the source of biofuels and biomaterials, and its quantum mechanical models address chemical and electronic properties and processes in order to reduce barriers.

  8. Significantly reduced c-axis thermal diffusivity of graphene-based papers

    Science.gov (United States)

    Han, Meng; Xie, Yangsu; Liu, Jing; Zhang, Jingchao; Wang, Xinwei

    2018-06-01

    Owing to their very high thermal conductivity as well as large surface-to-volume ratio, graphene-based films/papers have been proposed as promising candidates of lightweight thermal interface materials and lateral heat spreaders. In this work, we study the cross-plane (c-axis) thermal conductivity (k_c) and diffusivity (α_c) of two typical graphene-based papers, which are partially reduced graphene paper (PRGP) and graphene oxide paper (GOP), and compare their thermal properties with highly-reduced graphene paper and graphite. The determined α_c of PRGP varies from (1.02 ± 0.09) × 10^-7 m^2 s^-1 at 295 K to (2.31 ± 0.18) × 10^-7 m^2 s^-1 at 12 K. This low α_c is mainly attributed to the strong phonon scattering at the grain boundaries and defect centers due to the small grain sizes and high-level defects. For GOP, α_c varies from (1.52 ± 0.05) × 10^-7 m^2 s^-1 at 295 K to (2.28 ± 0.08) × 10^-7 m^2 s^-1 at 12.5 K. The cross-plane thermal transport of GOP is attributed to the high density of functional groups between carbon layers which provide weak thermal transport tunnels across the layers in the absence of direct energy coupling among layers. This work sheds light on the understanding and optimizing of nanostructure of graphene-based paper-like materials for desired thermal performance.

  9. Computed tomographic pelvimetry in English bulldogs.

    Science.gov (United States)

    Dobak, Tetyda P; Voorhout, George; Vernooij, Johannes C M; Boroffka, Susanne A E B

    2018-05-31

    English bulldogs have been reported to have a high incidence of dystocia and caesarean section is often performed electively in this breed. A narrow pelvic canal is the major maternal factor contributing to obstructive dystocia. The objective of this cross-sectional study was to assess the pelvic dimensions of 40 clinically healthy English bulldogs using computed tomography pelvimetry. A control group consisting of 30 non-brachycephalic dogs that underwent pelvic computed tomography was retrospectively collected from the patient archive system. Univariate analysis of variance was used to compare computed tomography pelvimetry of both groups and the effects of weight and gender on the measurements. In addition, ratios were obtained to address pelvic shape differences. A significantly (P = 0.00) smaller pelvic size was found in English bulldogs compared to the control group for all computed tomography measurements: width and length of the pelvis, pelvic inlet and caudal pelvic aperture. The pelvic conformation was significantly different between the groups: English bulldogs had an overall shorter pelvis and pelvic canal and a narrower pelvic outlet. Weight had a significant effect on all measurements, whereas gender had a significant effect on only some (4/11) pelvic dimensions. Our findings prove that English bulldogs have a generally reduced pelvic size as well as a shorter pelvis and narrower pelvic outlet when compared to non-brachycephalic breeds. We suggest that some of our measurements may serve as a baseline for pelvic dimensions in English bulldogs and may be useful for future studies on dystocia in this breed. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Clinical significance of computed tomographic arteriography for minute hepatocellular carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Itoh, H; Matsui, O; Suzuki, M; Ida, M; Kitagawa, K [Kanazawa Univ. (Japan). School of Medicine

    1982-03-01

    Computed tomographic arteriography (CTA) can clearly demonstrate minute hepatocellular carcinoma (H.C.C.) larger than 2 cm in diameter as an enhanced mass lesion. In such cases the precise localization of the H.C.C. becomes so clear that CTA plays an important role in evaluating its resectability. However, H.C.C. between 1 and 2 cm in diameter, which can be visualized with celiac and infusion hepatic angiography, is more difficult to detect, and H.C.C. smaller than 1 cm in diameter can hardly be recognized or diagnosed as a malignant nodule by CTA. It therefore appears that for these sizes of H.C.C. the detectability of CTA is not superior to that of hepatic angiography.

  11. Principles for the wise use of computers by children.

    Science.gov (United States)

    Straker, L; Pollock, C; Maslen, B

    2009-11-01

    Computer use by children at home and school is now common in many countries. Child computer exposure varies with the type of computer technology available and the child's age, gender and social group. This paper reviews the current exposure data and the evidence for positive and negative effects of computer use by children. Potential positive effects of computer use by children include enhanced cognitive development and school achievement, reduced barriers to social interaction, enhanced fine motor skills and visual processing and effective rehabilitation. Potential negative effects include threats to child safety, inappropriate content, exposure to violence, bullying, Internet 'addiction', displacement of moderate/vigorous physical activity, exposure to junk food advertising, sleep displacement, vision problems and musculoskeletal problems. The case for child specific evidence-based guidelines for wise use of computers is presented based on children using computers differently to adults, being physically, cognitively and socially different to adults, being in a state of change and development and the potential to impact on later adult risk. Progress towards child-specific guidelines is reported. Finally, a set of guideline principles is presented as the basis for more detailed guidelines on the physical, cognitive and social impact of computer use by children. The principles cover computer literacy, technology safety, child safety and privacy and appropriate social, cognitive and physical development. The majority of children in affluent communities now have substantial exposure to computers. This is likely to have significant effects on child physical, cognitive and social development. Ergonomics can provide and promote guidelines for wise use of computers by children and by doing so promote the positive effects and reduce the negative effects of computer-child, and subsequent computer-adult, interaction.

  12. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    Science.gov (United States)

    Mitry, Mina

    Often, computationally expensive engineering simulations can hinder the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high-dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
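
    A compact sketch of the linear variant of this idea, under our own assumptions (synthetic data, PCA for the output reduction, thin-plate-spline radial basis functions for the mapping); it is not the thesis code, just an illustration of the PCA-plus-RBF construction:

    ```python
    # Linear reduced-order surrogate sketch: PCA compresses the high-dimensional
    # outputs, radial basis functions interpolate the reduced coordinates.
    # Synthetic stand-in for an expensive simulation; not the thesis code.
    import numpy as np
    from sklearn.decomposition import PCA
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(5)

    def expensive_model(params):                  # stand-in for a costly solver
        x = np.linspace(0.0, 1.0, 500)
        return np.sin(6.0 * params[:, :1] * x) * np.exp(-params[:, 1:2] * x)

    X_train = rng.random((60, 2))                 # design-of-experiments inputs
    Y_train = expensive_model(X_train)            # (60, 500) high-dimensional outputs

    pca = PCA(n_components=5).fit(Y_train)        # linear reduction of the output space
    surrogate = RBFInterpolator(X_train, pca.transform(Y_train), kernel="thin_plate_spline")

    X_test = rng.random((10, 2))
    Y_pred = pca.inverse_transform(surrogate(X_test))
    Y_true = expensive_model(X_test)
    print("relative error:", np.linalg.norm(Y_pred - Y_true) / np.linalg.norm(Y_true))
    ```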

  13. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.

    Science.gov (United States)

    Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui

    2017-01-08

    Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered to be a comprehensive data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation in raw data simulation, which otherwise greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration.

  14. Reduced combustion mechanism for C1-C4 hydrocarbons and its application in computational fluid dynamics flare modeling.

    Science.gov (United States)

    Damodara, Vijaya; Chen, Daniel H; Lou, Helen H; Rasel, Kader M A; Richmond, Peyton; Wang, Anan; Li, Xianchang

    2017-05-01

    Emissions from flares constitute unburned hydrocarbons, carbon monoxide (CO), soot, and other partially burned and altered hydrocarbons along with carbon dioxide (CO2) and water. Soot or visible smoke is of particular concern for flare operators/regulatory agencies. The goal of the study is to develop a computational fluid dynamics (CFD) model capable of predicting flare combustion efficiency (CE) and soot emission. Since detailed combustion mechanisms are too complicated for CFD application, a 50-species reduced mechanism, LU 3.0.1, was developed. LU 3.0.1 is capable of handling C4 hydrocarbons and soot precursor species (C2H2, C2H4, C6H6). The new reduced mechanism LU 3.0.1 was first validated against experimental performance indicators: laminar flame speed, adiabatic flame temperature, and ignition delay. Further, CFD simulations using LU 3.0.1 were run to predict soot emission and CE of air-assisted flare tests conducted in 2010 in Tulsa, Oklahoma, using ANSYS Fluent software. Results of the non-premixed probability density function (PDF) model and the eddy dissipation concept (EDC) model are discussed. It is also noteworthy that when used in conjunction with the EDC turbulence-chemistry model, LU 3.0.1 can reasonably predict volatile organic compound (VOC) emissions as well. A reduced combustion mechanism containing 50 C1-C4 species and soot precursors has been developed and validated against experimental data. The combustion mechanism is then employed in CFD modeling of soot emission and combustion efficiency (CE) of controlled flares for which experimental soot and CE data are available. The validated CFD modeling tools are useful for oil, gas, and chemical industries to comply with the U.S. Environmental Protection Agency's (EPA) mandate to achieve smokeless flaring with a high CE.

  15. Pharmacological kynurenine 3-monooxygenase enzyme inhibition significantly reduces neuropathic pain in a rat model.

    Science.gov (United States)

    Rojewska, Ewelina; Piotrowska, Anna; Makuch, Wioletta; Przewlocka, Barbara; Mika, Joanna

    2016-03-01

    Recent studies have highlighted the involvement of the kynurenine pathway in the pathology of neurodegenerative diseases, but the role of this system in neuropathic pain requires further extensive research. Therefore, the aim of our study was to examine the role of kynurenine 3-monooxygenase (Kmo), an enzyme that is important in this pathway, in a rat model of neuropathy after chronic constriction injury (CCI) to the sciatic nerve. For the first time, we demonstrated that the injury-induced increase in the Kmo mRNA levels in the spinal cord and the dorsal root ganglia (DRG) was reduced by chronic administration of the microglial inhibitor minocycline and that this effect paralleled a decrease in the intensity of neuropathy. Further, minocycline administration alleviated the lipopolysaccharide (LPS)-induced upregulation of Kmo mRNA expression in microglial cell cultures. Moreover, we demonstrated that not only indirect inhibition of Kmo using minocycline but also direct inhibition using Kmo inhibitors (Ro61-6048 and JM6) decreased neuropathic pain intensity on the third and the seventh days after CCI. Chronic Ro61-6048 administration diminished the protein levels of IBA-1, IL-6, IL-1beta and NOS2 in the spinal cord and/or the DRG. Both Kmo inhibitors potentiated the analgesic properties of morphine. In summary, our data suggest that in neuropathic pain model, inhibiting Kmo function significantly reduces pain symptoms and enhances the effectiveness of morphine. The results of our studies show that the kynurenine pathway is an important mediator of neuropathic pain pathology and indicate that Kmo represents a novel pharmacological target for the treatment of neuropathy. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Reducing Eating Disorder Onset in a Very High Risk Sample with Significant Comorbid Depression: A Randomized Controlled Trial

    Science.gov (United States)

    Taylor, C. Barr; Kass, Andrea E.; Trockel, Mickey; Cunning, Darby; Weisman, Hannah; Bailey, Jakki; Sinton, Meghan; Aspen, Vandana; Schecthman, Kenneth; Jacobi, Corinna; Wilfley, Denise E.

    2015-01-01

    Objective Eating disorders (EDs) are serious problems among college-age women and may be preventable. An indicated on-line eating disorder (ED) intervention, designed to reduce ED and comorbid pathology, was evaluated. Method 206 women (M age = 20 ± 1.8 years; 51% White/Caucasian, 11% African American, 10% Hispanic, 21% Asian/Asian American, 7% other) at very high risk for ED onset (i.e., with high weight/shape concerns plus a history of being teased, current or lifetime depression, and/or non-clinical levels of compensatory behaviors) were randomized to a 10-week, Internet-based, cognitive-behavioral intervention or wait-list control. Assessments included the Eating Disorder Examination (EDE to assess ED onset), EDE-Questionnaire, Structured Clinical Interview for DSM Disorders, and Beck Depression Inventory-II. Results ED attitudes and behaviors improved more in the intervention than control group (p = 0.02, d = 0.31); although ED onset rate was 27% lower, this difference was not significant (p = 0.28, NNT = 15). In the subgroup with highest shape concerns, ED onset rate was significantly lower in the intervention than control group (20% versus 42%, p = 0.025, NNT = 5). For the 27 individuals with depression at baseline, depressive symptomatology improved more in the intervention than control group (p = 0.016, d = 0.96); although ED onset rate was lower in the intervention than control group, this difference was not significant (25% versus 57%, NNT = 4). Conclusions An inexpensive, easily disseminated intervention might reduce ED onset among those at highest risk. Low adoption rates need to be addressed in future research. PMID:26795936

  17. Reducing eating disorder onset in a very high risk sample with significant comorbid depression: A randomized controlled trial.

    Science.gov (United States)

    Taylor, C Barr; Kass, Andrea E; Trockel, Mickey; Cunning, Darby; Weisman, Hannah; Bailey, Jakki; Sinton, Meghan; Aspen, Vandana; Schecthman, Kenneth; Jacobi, Corinna; Wilfley, Denise E

    2016-05-01

    Eating disorders (EDs) are serious problems among college-age women and may be preventable. An indicated online eating disorder (ED) intervention, designed to reduce ED and comorbid pathology, was evaluated. 206 women (M age = 20 ± 1.8 years; 51% White/Caucasian, 11% African American, 10% Hispanic, 21% Asian/Asian American, 7% other) at very high risk for ED onset (i.e., with high weight/shape concerns plus a history of being teased, current or lifetime depression, and/or nonclinical levels of compensatory behaviors) were randomized to a 10-week, Internet-based, cognitive-behavioral intervention or waitlist control. Assessments included the Eating Disorder Examination (EDE, to assess ED onset), EDE-Questionnaire, Structured Clinical Interview for DSM Disorders, and Beck Depression Inventory-II. ED attitudes and behaviors improved more in the intervention than control group (p = .02, d = 0.31); although ED onset rate was 27% lower, this difference was not significant (p = .28, NNT = 15). In the subgroup with highest shape concerns, ED onset rate was significantly lower in the intervention than control group (20% vs. 42%, p = .025, NNT = 5). For the 27 individuals with depression at baseline, depressive symptomatology improved more in the intervention than control group (p = .016, d = 0.96); although ED onset rate was lower in the intervention than control group, this difference was not significant (25% vs. 57%, NNT = 4). An inexpensive, easily disseminated intervention might reduce ED onset among those at highest risk. Low adoption rates need to be addressed in future research. (c) 2016 APA, all rights reserved).

  18. Low-power hardware implementation of movement decoding for brain computer interface with reduced-resolution discrete cosine transform.

    Science.gov (United States)

    Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E

    2014-01-01

    This paper describes a low-power hardware implementation for movement decoding of brain computer interface. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on reduced-resolution discrete cosine transform (DCT), and (ii) a new hardware architecture of dual look-up table to perform discrete cosine transform without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of electrocorticography (ECoG) signal by using a Xilinx FPGA Zynq-7000 board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.
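
    The feature-extraction step can be sketched in ordinary floating-point software (the paper's contribution is the fixed-point, dual look-up-table hardware, which is not reproduced here): compute a DCT per channel window and keep only the first few low-frequency coefficients as decoder features. Window sizes and channel counts below are hypothetical.

    ```python
    # Software sketch of reduced-resolution DCT feature extraction for a neural
    # decoder: keep only the first few low-frequency DCT coefficients per channel.
    # (The paper's fixed-point, look-up-table hardware is not reproduced here.)
    import numpy as np
    from scipy.fft import dct

    rng = np.random.default_rng(6)
    n_channels, window_len, n_keep = 32, 256, 8        # hypothetical ECoG window layout

    ecog_window = rng.standard_normal((n_channels, window_len))   # one decoding window
    coeffs = dct(ecog_window, type=2, norm="ortho", axis=-1)      # full DCT per channel
    features = coeffs[:, :n_keep].ravel()                         # reduced-resolution feature vector

    print(features.shape)   # 32 channels x 8 coefficients = 256 features per window
    ```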

  19. Computer vision syndrome: a study of knowledge and practices in university students.

    Science.gov (United States)

    Reddy, S C; Low, C K; Lim, Y P; Low, L L; Mardina, F; Nursaleha, M P

    2013-01-01

    Computer vision syndrome (CVS) is a condition in which a person experiences one or more eye symptoms as a result of prolonged work on a computer. To determine the prevalence of CVS symptoms, knowledge and practices of computer use in students studying in different universities in Malaysia, and to evaluate the association of various factors in computer use with the occurrence of symptoms. In a cross-sectional, questionnaire survey study, data were collected from college students regarding demography, use of spectacles, duration of daily continuous computer use, symptoms of CVS, preventive measures taken to reduce the symptoms, use of a radiation filter on the computer screen, and lighting in the room. A total of 795 students, aged between 18 and 25 years, from five universities in Malaysia were surveyed. The prevalence of symptoms of CVS (one or more) was found to be 89.9%; the most disturbing symptom was headache (19.7%) followed by eye strain (16.4%). Students who used a computer for more than 2 hours per day experienced significantly more symptoms of CVS (p=0.0001). Looking at distant objects in between work was significantly (p=0.0008) associated with a lower frequency of CVS symptoms. The use of a radiation filter on the screen (p=0.6777) did not help in reducing the CVS symptoms. Ninety percent of university students in Malaysia experienced symptoms related to CVS, which was seen more often in those who used a computer for more than 2 hours continuously per day. © NEPjOPH.

  20. When cloud computing meets bioinformatics: a review.

    Science.gov (United States)

    Zhou, Shuigeng; Liao, Ruiqi; Guan, Jihong

    2013-10-01

    In the past decades, with the rapid development of high-throughput technologies, biology research has generated an unprecedented amount of data. In order to store and process such a great amount of data, cloud computing and MapReduce were applied to many fields of bioinformatics. In this paper, we first introduce the basic concepts of cloud computing and MapReduce, and their applications in bioinformatics. We then highlight some problems challenging the applications of cloud computing and MapReduce to bioinformatics. Finally, we give a brief guideline for using cloud computing in biology research.

  1. Technical Note: Method of Morris effectively reduces the computational demands of global sensitivity analysis for distributed watershed models

    Directory of Open Access Journals (Sweden)

    J. D. Herman

    2013-07-01

    The increase in spatially distributed hydrologic modeling warrants a corresponding increase in diagnostic methods capable of analyzing complex models with large numbers of parameters. Sobol' sensitivity analysis has proven to be a valuable tool for diagnostic analyses of hydrologic models. However, for many spatially distributed models, the Sobol' method requires a prohibitive number of model evaluations to reliably decompose output variance across the full set of parameters. We investigate the potential of the method of Morris, a screening-based sensitivity approach, to provide results sufficiently similar to those of the Sobol' method at a greatly reduced computational expense. The methods are benchmarked on the Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) over a six-month period in the Blue River watershed, Oklahoma, USA. The Sobol' method required over six million model evaluations to ensure reliable sensitivity indices, corresponding to more than 30 000 computing hours and roughly 180 gigabytes of storage space. We find that the method of Morris is able to correctly screen the most and least sensitive parameters with 300 times fewer model evaluations, requiring only 100 computing hours and 1 gigabyte of storage space. The method of Morris proves to be a promising diagnostic approach for global sensitivity analysis of highly parameterized, spatially distributed hydrologic models.
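
    A self-contained sketch of the Morris screening idea (one-at-a-time trajectories, with elementary effects summarized by mu* and sigma) is given below; the cheap analytic test function merely stands in for an expensive distributed model such as HL-RDHM.

    ```python
    # Morris elementary-effects screening sketch: mu* (importance) and sigma
    # (interaction/nonlinearity) from one-at-a-time trajectories. The test
    # function is a cheap stand-in for an expensive hydrologic simulator.
    import numpy as np

    rng = np.random.default_rng(7)

    def model(x):
        return 4.0 * x[0] + 2.0 * x[1] ** 2 + 0.5 * x[2] * x[3] + 0.01 * x[4]

    def morris_screening(model, n_params, n_traj=40, delta=0.5):
        effects = np.zeros((n_traj, n_params))
        for t in range(n_traj):
            x = rng.uniform(0.0, 1.0 - delta, n_params)   # random base point in [0, 1-delta]
            y = model(x)
            for i in rng.permutation(n_params):           # perturb one factor at a time
                x_next = x.copy()
                x_next[i] += delta
                y_next = model(x_next)
                effects[t, i] = (y_next - y) / delta      # elementary effect of factor i
                x, y = x_next, y_next
        return np.abs(effects).mean(axis=0), effects.std(axis=0)

    mu_star, sigma = morris_screening(model, n_params=5)
    print("mu*  :", np.round(mu_star, 3))
    print("sigma:", np.round(sigma, 3))
    ```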

  2. Reduced service of the “IT Service Desk” (Computing Helpdesk) on the afternoon of Friday 8 October 2010

    CERN Multimedia

    IT Department

    2010-01-01

    Please note that due to relocation, the “IT Service Desk” will be operating a reduced service on Friday 8th October from 12:30. In particular, the telephone line 78888 will not be available and users will be invited to submit their requests by e-mail (Computing.Helpdesk@cern.ch). E-mail requests will be treated as normal, but some delays are possible. In the event of urgent problems you may call the IT Manager on Duty on 163013. We also take this opportunity to remind you about the “IT Service Status Board” where all computing incidents and scheduled interventions are updated online. Please see: http://cern.ch/it-servicestatus. Normal service will be resumed at 8:30 a.m. on Monday 11 October. Thank you in advance for your understanding. The CERN “User Support” Team (IT-UDS-HUS)

  3. Roadmap to greener computing

    CERN Document Server

    Nguemaleu, Raoul-Abelin Choumin

    2014-01-01

    A concise and accessible introduction to green computing and green IT, this book addresses how computer science and the computer infrastructure affect the environment and presents the main challenges in making computing more environmentally friendly. The authors review the methodologies, designs, frameworks, and software development tools that can be used in computer science to reduce energy consumption and still compute efficiently. They also focus on Computer Aided Design (CAD) and describe what design engineers and CAD software applications can do to support new streamlined business directi…

  4. PA positioning significantly reduces testicular dose during sacroiliac joint radiography

    Energy Technology Data Exchange (ETDEWEB)

    Mekis, Nejc [Faculty of Health Sciences, University of Ljubljana (Slovenia); Mc Entee, Mark F., E-mail: mark.mcentee@ucd.i [School of Medicine and Medical Science, University College Dublin 4 (Ireland); Stegnar, Peter [Jozef Stefan International Postgraduate School, Ljubljana (Slovenia)

    2010-11-15

    Radiation dose to the testes in the antero-posterior (AP) and postero-anterior (PA) projection of the sacroiliac joint (SIJ) was measured with and without a scrotal shield. Entrance surface dose, the dose received by the testicles and the dose area product (DAP) were used. DAP measurements revealed the dose received by the phantom in the PA position is 12.6% lower than the AP (p ≤ 0.009) with no statistically significant reduction in image quality (p ≤ 0.483). The dose received by the testes in the PA projection in SIJ imaging is 93.1% lower than the AP projection when not using protection (p ≤ 0.020) and 94.9% lower with protection (p ≤ 0.019). The dose received by the testicles was not changed by the use of a scrotal shield in the AP position (p ≤ 0.559), but was lowered by its use in the PA (p ≤ 0.058). Use of the PA projection in SIJ imaging significantly lowers the dose received by the testes compared to the AP projection without significant loss of image quality.

  5. A criterion based on computational singular perturbation for the identification of quasi steady state species: A reduced mechanism for methane oxidation with NO chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Tianfeng; Law, Chung K. [Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, NJ 08544 (United States)

    2008-09-15

    A criterion based on computational singular perturbation (CSP) is proposed to effectively distinguish the quasi steady state (QSS) species from the fast species induced by reactions in partial equilibrium. Together with the method of directed relation graph (DRG), it was applied to the reduction of GRI-Mech 3.0 for methane oxidation, leading to the development of a 19-species reduced mechanism with 15 lumped steps, with the concentrations of the QSS species solved analytically for maximum computational efficiency. Compared to the 12-step and 16-species augmented reduced mechanism (ARM) previously developed by Sung, Law and Chen, three species, namely O, CH₃OH, and CH₂CO, are now excluded from the QSS species list. The reduced mechanism was validated with a variety of phenomena including perfectly stirred reactors, auto-ignition, and premixed and non-premixed flames, with the worst-case error being less than 10% over a wide range of parameters. This mechanism was then supplemented with the reactions involving NO formation, followed by validations in both homogeneous and diffusive systems. (author)
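
    To make the quasi steady state idea concrete, the sketch below applies QSS to a toy two-step mechanism A -> I -> P, replacing the ODE for the intermediate with an algebraic expression. The rate constants are hypothetical placeholders; this only illustrates the general technique, not the 19-species reduced mechanism described in the abstract.

```python
# Toy QSS illustration for A -> I -> P; rates are hypothetical.
import numpy as np

k1, k2 = 1.0, 100.0            # I is consumed much faster than it is produced

def rhs_full(y):
    """Full ODE right-hand side for (A, I, P)."""
    A, I, P = y
    return np.array([-k1 * A, k1 * A - k2 * I, k2 * I])

def qss_intermediate(A):
    """QSS: set d[I]/dt = k1*[A] - k2*[I] ~ 0, giving an algebraic expression for [I]."""
    return k1 * A / k2

# With QSS only A needs integrating (dA/dt = -k1*A has an analytic solution here),
# while [I] is recovered algebraically -- the kind of saving exploited above.
t = np.linspace(0.0, 2.0, 5)
A = 1.0 * np.exp(-k1 * t)
I_qss = qss_intermediate(A)
```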

  6. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    Science.gov (United States)

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on the modern Smartphones. However, such applications are still constrained by limitations in processing potentials, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resources utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in the real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resources utilization and therefore offers a lightweight solution for computational offloading in MCC. PMID:25127245
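
    The abstract above does not give the framework's decision logic, so the following is only a generic, hypothetical sketch of the kind of cost comparison an offloading scheme might use: offload a component when the estimated transfer-plus-cloud time beats local execution. All parameter names and numbers are illustrative assumptions, not values from the paper.

```python
# Generic offloading decision rule (hypothetical, not the framework above):
# offload only if remote cost (data transfer + cloud execution) is lower
# than executing locally on the device.
from dataclasses import dataclass

@dataclass
class Component:
    cycles: float          # CPU cycles required
    data_bytes: float      # state/data that must be shipped to the cloud

def should_offload(c: Component,
                   local_speed=1e9,       # device cycles/s (assumed)
                   cloud_speed=10e9,      # cloud cycles/s (assumed)
                   bandwidth=1e6):        # uplink bytes/s (assumed)
    local_time = c.cycles / local_speed
    remote_time = c.data_bytes / bandwidth + c.cycles / cloud_speed
    return remote_time < local_time

# A compute-heavy, data-light component is a good offloading candidate:
print(should_offload(Component(cycles=5e9, data_bytes=1e5)))   # True
```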

  7. A lightweight distributed framework for computational offloading in mobile cloud computing.

    Directory of Open Access Journals (Sweden)

    Muhammad Shiraz

    Full Text Available The latest developments in mobile computing technology have enabled intensive applications on the modern Smartphones. However, such applications are still constrained by limitations in processing potentials, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resource limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time consuming and resource intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resources utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in the real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resources utilization and therefore offers a lightweight solution for computational offloading in MCC.

  8. Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming

    Science.gov (United States)

    Philip A. Araman

    1990-01-01

    This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...

  9. Computer classes and games in virtual reality environment to reduce loneliness among students of an elderly reference center: Study protocol for a randomised cross-over design.

    Science.gov (United States)

    Antunes, Thaiany Pedrozo Campos; Oliveira, Acary Souza Bulle de; Crocetta, Tania Brusque; Antão, Jennifer Yohanna Ferreira de Lima; Barbosa, Renata Thais de Almeida; Guarnieri, Regiani; Massetti, Thais; Monteiro, Carlos Bandeira de Mello; Abreu, Luiz Carlos de

    2017-03-01

    Physical and mental changes associated with aging commonly lead to a decrease in communication capacity, reducing social interactions and increasing loneliness. Computer classes for older adults make significant contributions to social and cognitive aspects of aging. Games in a virtual reality (VR) environment stimulate the practice of communicative and cognitive skills and might also bring benefits to older adults. Furthermore, it might help to initiate their contact to the modern technology. The purpose of this study protocol is to evaluate the effects of practicing VR games during computer classes on the level of loneliness of students of an elderly reference center. This study will be a prospective longitudinal study with a randomised cross-over design, with subjects aged 50 years and older, of both genders, spontaneously enrolled in computer classes for beginners. Data collection will be done in 3 moments: moment 0 (T0) - at baseline; moment 1 (T1) - after 8 typical computer classes; and moment 2 (T2) - after 8 computer classes which include 15 minutes for practicing games in VR environment. A characterization questionnaire, the short version of the Short Social and Emotional Loneliness Scale for Adults (SELSA-S) and 3 games with VR (Random, MoviLetrando, and Reaction Time) will be used. For the intervention phase 4 other games will be used: Coincident Timing, Motor Skill Analyser, Labyrinth, and Fitts. The statistical analysis will compare the evolution in loneliness perception, performance, and reaction time during the practice of the games between the 3 moments of data collection. Performance and reaction time during the practice of the games will also be correlated to the loneliness perception. The protocol is approved by the host institution's ethics committee under the number 52305215.3.0000.0082. Results will be disseminated via peer-reviewed journal articles and conferences. This clinical trial is registered at ClinicalTrials.gov identifier: NCT

  10. PA positioning significantly reduces testicular dose during sacroiliac joint radiography

    International Nuclear Information System (INIS)

    Mekis, Nejc; Mc Entee, Mark F.; Stegnar, Peter

    2010-01-01

    Radiation dose to the testes in the antero-posterior (AP) and postero-anterior (PA) projection of the sacroiliac joint (SIJ) was measured with and without a scrotal shield. Entrance surface dose, the dose received by the testicles and the dose area product (DAP) were used. DAP measurements revealed the dose received by the phantom in the PA position is 12.6% lower than the AP (p ≤ 0.009) with no statistically significant reduction in image quality (p ≤ 0.483). The dose received by the testes in the PA projection in SIJ imaging is 93.1% lower than the AP projection when not using protection (p ≤ 0.020) and 94.9% lower with protection (p ≤ 0.019). The dose received by the testicles was not changed by the use of a scrotal shield in the AP position (p ≤ 0.559), but was lowered by its use in the PA (p ≤ 0.058). Use of the PA projection in SIJ imaging significantly lowers the dose received by the testes compared to the AP projection without significant loss of image quality.

  11. Computation of complexity measures of morphologically significant zones decomposed from binary fractal sets via multiscale convexity analysis

    International Nuclear Information System (INIS)

    Lim, Sin Liang; Koo, Voon Chet; Daya Sagar, B.S.

    2009-01-01

    Multiscale convexity analysis of certain fractal binary objects, such as the 8-segment Koch quadric, Koch triadic, and random Koch quadric and triadic islands, is performed via (i) morphologic openings with respect to recursively changing the size of a template, and (ii) construction of convex hulls through half-plane closings. Based on the scale vs. convexity measure relationship, transition levels between the morphologic regimes are determined as crossover scales. These crossover scales are taken as the basis to segment binary fractal objects into various morphologically prominent zones. Each segmented zone is characterized through normalized morphologic complexity measures. Despite the fact that there is no notably significant relationship between the zone-wise complexity measures and the fractal dimensions computed by the conventional box counting method, fractal objects, whether they are generated deterministically or by introducing randomness, possess morphologically significant sub-zones with varied degrees of spatial complexity. Classification of realistic fractal sets and/or fields according to sub-zones possessing varied degrees of spatial complexity provides insight to explore links with the physical processes involved in the formation of fractal-like phenomena.

  12. Computer ray tracing speeds.

    Science.gov (United States)

    Robb, P; Pawlowski, B

    1990-05-01

    The results of measuring the ray trace speed and compilation speed of thirty-nine computers in fifty-seven configurations, ranging from personal computers to supercomputers, are described. A correlation of ray trace speed has been made with the LINPACK benchmark, which allows the ray trace speed to be estimated using LINPACK performance data. The results indicate that the latest generation of workstations, using CPUs based on RISC (Reduced Instruction Set Computer) technology, are as fast or faster than mainframe computers in compute-bound situations.

  13. The Adaptive QSE-reduced Nuclear Reaction Network for Silicon Burning

    International Nuclear Information System (INIS)

    Parete-Koon, Suzanne; Hix, William Raphael; Thielemann, Friedrich-Karl W.

    2008-01-01

    The nuclei of the 'iron peak' are formed in massive stars shortly before core collapse and during their supernova outbursts as well as during thermonuclear supernovae. Complete and incomplete silicon burning during these events are responsible for the production of a wide range of nuclei with atomic mass numbers from 28 to 64. Because of the large number of nuclei involved, accurate modeling of silicon burning is computationally expensive. However, examination of the physics of silicon burning has revealed that the nuclear evolution is dominated by large groups of nuclei in mutual equilibrium. We present an improvement on our hybrid equilibrium-network scheme which takes advantage of this quasi-equilibrium in order to reduce the number of independent variables calculated. Because the size and membership of these groups vary as the temperature, density and electron fraction change, achieving maximal efficiency requires dynamic adjustment of group number and membership. Toward this end, we are implementing a scheme beginning with 2 QSE groups at appropriately high temperature, then progressing through 3 and 3* group stages (with successively more independent variables) as temperature declines. This combination allows accurate prediction of the nuclear abundance evolution, deleptonization and energy generation at a further reduced computational cost when compared to a conventional nuclear reaction network or our previous 3 fixed group QSE-reduced network. During silicon burning, the resultant QSE-reduced network is up to 20 times faster than the full network it replaces without significant loss of accuracy. These reductions in computational cost and the number of species evolved make QSE-reduced networks well suited for inclusion within hydrodynamic simulations, particularly in multi-dimensional applications

  14. NASA Lewis Stirling SPRE testing and analysis with reduced number of cooler tubes

    International Nuclear Information System (INIS)

    Wong, W.A.; Cairelli, J.E.; Swec, D.M.; Doeberling, T.J.; Lakatos, T.F.; Madi, F.J.

    1994-01-01

    Free-piston Stirling power converters are a candidate for high capacity space power applications. The Space Power Research Engine (SPRE), a free-piston Stirling engine coupled with a linear alternator, is being tested at the NASA Lewis Research Center in support of the Civil Space Technology Initiative. The SPRE is used as a test bed for evaluating converter modifications which have the potential to improve converter performance and for validating computer code predictions. Reducing the number of cooler tubes on the SPRE has been identified as a modification with the potential to significantly improve power and efficiency. This paper describes experimental tests designed to investigate the effects of reducing the number of cooler tubes on converter power, efficiency and dynamics. Presented are test results from the converter operating with a reduced number of cooler tubes and comparisons between this data and both baseline test data and computer code predictions

  15. Iterative Metallic Artifact Reduction for In-Plane Gonadal Shielding During Computed Tomographic Venography of Young Males.

    Science.gov (United States)

    Choi, Jin Woo; Lee, Sang Yub; Kim, Jong Hyo; Jin, Hyeongmin; Lee, Jaewon; Choi, Young Hun; Lee, Hyeon-Kyeong; Park, Jae Hyung

    The purpose of this study was to evaluate a gonadal shield (GS) and iterative metallic artifact reduction (IMAR) during computed tomography scans, regarding the image quality and radiation dose. A phantom was imaged with and without a GS. Prospectively enrolled, young male patients underwent lower extremity computed tomography venography (precontrast imaging without the GS and postcontrast imaging with the GS). Radiation dose was measured each time, and the GS-applied images were reconstructed by weighted filtered back projection and IMAR. In the phantom study, image artifacts were significantly reduced by using IMAR (P = 0.031), whereas the GS reduced the radiation dose by 61.3%. In the clinical study (n = 29), IMAR mitigated artifacts from the GS, thus 96.6% of the IMAR image sets were clinically usable. Gonadal shielding reduced the radiation dose to the testes by 69.0%. The GS in conjunction with IMAR significantly reduced the radiation dose to the testes while maintaining the image quality.

  16. Reverse time migration by Krylov subspace reduced order modeling

    Science.gov (United States)

    Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali

    2018-04-01

    Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations associated with the performance cost of reverse time migration are the intensive computation of the forward and backward simulations, time consumption, and the memory allocation related to the imaging condition. Based on reduced order modeling, we propose an algorithm that addresses all of the aforementioned factors. Our method uses the Krylov subspace method to compute certain mode shapes of the velocity model, which serve as an orthogonal basis for reduced order modeling. Reverse time migration by reduced order modeling lends itself to highly parallel computation and strongly reduces the memory requirement of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational cost of reverse time migration by several orders of magnitude compared with reverse time migration by the finite element method.
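
    The Krylov subspace basis mentioned above can be built with the Arnoldi iteration; the sketch below is a generic numpy implementation, not the authors' wave-equation code, and the operator A and starting vector b are random placeholders.

```python
# Minimal Arnoldi iteration: builds an orthonormal basis Q of the Krylov
# subspace span{b, Ab, A^2 b, ...} that can serve as a reduced order basis.
import numpy as np

def arnoldi(A, b, m):
    n = b.size
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # happy breakdown
            return Q[:, : j + 1], H[: j + 1, : j]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))              # placeholder operator
Q, H = arnoldi(A, rng.standard_normal(50), m=10)
# Columns of Q are orthonormal: Q.T @ Q is close to the identity.
```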

  17. Reduced Stress Tensor and Dissipation and the Transport of Lamb Vector

    Science.gov (United States)

    Wu, Jie-Zhi; Zhou, Ye; Wu, Jian-Ming

    1996-01-01

    We develop a methodology to ensure that the stress tensor, regardless of its number of independent components, can be reduced to an exactly equivalent one which has the same number of independent components as the surface force. It is applicable to the momentum balance if the shear viscosity is constant. A direct application of this method to the energy balance also leads to a reduction of the dissipation rate of kinetic energy. Following this procedure, significant savings in analysis and computation may be achieved. For turbulent flows, this strategy immediately implies that a given Reynolds stress model can always be replaced by a reduced one before putting it into computation. Furthermore, we show how the modeling of the Reynolds stress tensor can be reduced to that of the mean turbulent Lamb vector alone, which is much simpler. As a first step of this alternative modeling development, we derive the governing equations for the Lamb vector and its square. These equations form a basis for new second-order closure schemes and, we believe, should compare favorably to the traditional Reynolds stress transport equation.
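
    For reference, the Lamb vector is l = ω × u with ω = ∇ × u. The sketch below computes it on a uniform 3-D grid with numpy finite differences; the velocity field is a synthetic placeholder, and this only illustrates the quantity whose transport is being modeled, not the closure scheme proposed in the paper.

```python
# Compute the Lamb vector l = omega x u (omega = curl u) on a uniform 3-D grid
# using central differences; the velocity field is synthetic.
import numpy as np

def lamb_vector(u, v, w, dx, dy, dz):
    # gradients along the x, y, z axes ("ij" indexing assumed)
    du = np.gradient(u, dx, dy, dz)
    dv = np.gradient(v, dx, dy, dz)
    dw = np.gradient(w, dx, dy, dz)
    wx = dw[1] - dv[2]        # omega_x = dw/dy - dv/dz
    wy = du[2] - dw[0]        # omega_y = du/dz - dw/dx
    wz = dv[0] - du[1]        # omega_z = dv/dx - du/dy
    # Lamb vector components: omega x u
    lx = wy * w - wz * v
    ly = wz * u - wx * w
    lz = wx * v - wy * u
    return lx, ly, lz

n, h = 32, 1.0 / 32
x, y, z = np.meshgrid(*(np.arange(n) * h,) * 3, indexing="ij")
u, v, w = np.sin(y), np.sin(z), np.sin(x)       # synthetic velocity field
lx, ly, lz = lamb_vector(u, v, w, h, h, h)
```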

  18. Co-registration of the initial PET with the radiotherapy planning CT significantly reduces the variability of the anatomo-clinical target volume in childhood Hodgkin disease

    International Nuclear Information System (INIS)

    Metwally, H.; Blouet, A.; David, I.; Rives, M.; Izar, F.; Courbon, F.; Filleron, T.; Laprie, A.; Plat, G.; Vial, J.

    2009-01-01

    There is considerable interobserver variability in the definition of the anatomo-clinical target volume (CTV) in children with Hodgkin disease. In this study, co-registration of the FDG PET with the planning computed tomography led to significantly greater consistency in clinical target volume definition. (N.C.)

  19. Using the Hadoop/MapReduce approach for monitoring the CERN storage system and improving the ATLAS computing model

    CERN Document Server

    Russo, Stefano Alberto; Lamanna, M

    The processing of huge amounts of data, already a fundamental task for research in the elementary particle physics field, is becoming more and more important also for companies operating in the Information Technology (IT) industry. In this context, if conventional approaches are adopted, several problems arise, starting from the congestion of the communication channels. In the IT sector, one of the approaches designed to minimize this congestion is to exploit data locality, or in other words, to bring the computation as close as possible to where the data resides. The most common implementation of this concept is the Hadoop/MapReduce framework. In this thesis work I evaluate the usage of Hadoop/MapReduce in two areas: a standard one similar to typical IT analyses, and an innovative one related to high energy physics analyses. The first consists in monitoring the history of the storage cluster which stores the data generated by the LHC experiments, the second in the physics analysis of the latter, ...

  20. A computationally fast, reduced model for simulating landslide dynamics and tsunamis generated by landslides in natural terrains

    Science.gov (United States)

    Mohammed, F.

    2016-12-01

    Landslide hazards such as fast-moving debris flows, slow-moving landslides, and other mass flows cause numerous fatalities, injuries, and damage. Landslide occurrences in fjords, bays, and lakes can additionally generate tsunamis with locally extremely high wave heights and runups. Two-dimensional depth-averaged models can successfully simulate the entire lifecycle of the three-dimensional landslide dynamics and tsunami propagation efficiently and accurately with the appropriate assumptions. Landslide rheology is defined using viscous fluids, visco-plastic fluids, and granular material to account for the possible landslide source materials. Saturated and unsaturated rheologies are further included to simulate debris flows, debris avalanches, mudflows, and rockslides. The models are obtained by reducing the fully three-dimensional Navier-Stokes equations, together with the internal rheological definition of the landslide material and the water body, under appropriate scaling assumptions to the depth-averaged two-dimensional form. The landslide and tsunami models are coupled to include the interaction between the landslide and the water body for tsunami generation. The reduced models are solved numerically with a fast semi-implicit finite-volume, shock-capturing based algorithm. The well-balanced, positivity preserving algorithm accurately accounts for wet-dry interface transition for the landslide runout, landslide-water body interface, and the tsunami wave flooding on land. The models are implemented as a General-Purpose computing on Graphics Processing Unit-based (GPGPU) suite of models, either coupled or run independently within the suite. The GPGPU implementation provides up to 1000 times speedup over a CPU-based serial computation. This enables simulations of multiple scenarios of hazard realizations that provide a basis for a probabilistic hazard assessment. The models have been successfully validated against experiments, past studies, and field data
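
    To hint at the structure of such depth-averaged solvers, the sketch below performs a 1-D, single-layer shallow water finite-volume update with a Rusanov flux on a flat bed. It omits wetting/drying, landslide rheology, coupling, and GPU code, so it is only a heavily simplified stand-in for the models described above; the grid, time horizon, and dam-break initial state are hypothetical.

```python
# 1-D shallow water, flat bed, Rusanov flux, explicit finite-volume update.
# Heavily simplified; not the coupled landslide-tsunami model described above.
import numpy as np

g = 9.81

def flux(h, hu):
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

def rusanov_step(h, hu, dx, cfl=0.4):
    U = np.vstack([h, hu])
    a = np.abs(hu / h) + np.sqrt(g * h)              # local wave speed estimate
    dt = cfl * dx / a.max()
    FL = flux(h[:-1], hu[:-1])
    FR = flux(h[1:], hu[1:])
    amax = np.maximum(a[:-1], a[1:])
    F = 0.5 * (FL + FR) - 0.5 * amax * (U[:, 1:] - U[:, :-1])
    Unew = U.copy()
    Unew[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])  # interior cell update
    return Unew[0], Unew[1], dt

# Hypothetical dam-break initial condition
n, dx = 200, 0.5
h = np.where(np.arange(n) < n // 2, 2.0, 1.0)
hu = np.zeros(n)
t = 0.0
while t < 5.0:
    h, hu, dt = rusanov_step(h, hu, dx)
    t += dt
```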

  1. Large-scale parallel genome assembler over cloud computing environment.

    Science.gov (United States)

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance, both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure over a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of a traditional HPC cluster.
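
    As background for the data structure GiGA distributes, the following is a toy, in-memory construction of a de Bruijn graph from reads, where nodes are (k-1)-mers and each k-mer contributes a directed edge. It is not the Giraph/Hadoop pipeline, and the reads and k are hypothetical.

```python
# Toy de Bruijn graph construction: prefix (k-1)-mer -> list of suffix (k-1)-mers.
from collections import defaultdict

def de_bruijn(reads, k=4):
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

graph = de_bruijn(["ACGTACG", "CGTACGA"])
# e.g. node "ACG" has outgoing edges to "CGT" (from "ACGT") and "CGA" (from "ACGA")
for node, successors in sorted(graph.items()):
    print(node, "->", successors)
```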

  2. Reducing the Requirements and Cost of Astronomical Telescopes

    Science.gov (United States)

    Smith, W. Scott; Whitakter, Ann F. (Technical Monitor)

    2002-01-01

    Limits on astronomical telescope apertures are being rapidly approached. These limits result from logistics, increasing complexity, and finally budgetary constraints. In an historical perspective, great strides have been made in the area of aperture, adaptive optics, wavefront sensors, detectors, stellar interferometers and image reconstruction. What will be the next advances? Emerging data analysis techniques based on communication theory holds the promise of yielding more information from observational data based on significant computer post-processing. This paper explores some of the current telescope limitations and ponders the possibilities increasing the yield of scientific data based on the migration computer post-processing techniques to higher dimensions. Some of these processes hold the promise of reducing the requirements on the basic telescope hardware making the next generation of instruments more affordable.

  3. Phasic firing in vasopressin cells: understanding its functional significance through computational models.

    Directory of Open Access Journals (Sweden)

    Duncan J MacGregor

    Full Text Available Vasopressin neurons, responding to input generated by osmotic pressure, use an intrinsic mechanism to shift from slow irregular firing to a distinct phasic pattern, consisting of long bursts and silences lasting tens of seconds. With increased input, bursts lengthen, eventually shifting to continuous firing. The phasic activity remains asynchronous across the cells and is not reflected in the population output signal. Here we have used a computational vasopressin neuron model to investigate the functional significance of the phasic firing pattern. We generated a concise model of the synaptic input driven spike firing mechanism that gives a close quantitative match to vasopressin neuron spike activity recorded in vivo, tested against endogenous activity and experimental interventions. The integrate-and-fire based model provides a simple physiological explanation of the phasic firing mechanism involving an activity-dependent slow depolarising afterpotential (DAP) generated by a calcium-inactivated potassium leak current. This is modulated by the slower, opposing, action of activity-dependent dendritic dynorphin release, which inactivates the DAP, the opposing effects generating successive periods of bursting and silence. Model cells are not spontaneously active, but fire when perturbed by random perturbations mimicking synaptic input. We constructed one population of such phasic neurons, and another population of similar cells but which lacked the ability to fire phasically. We then studied how these two populations differed in the way that they encoded changes in afferent inputs. By comparison with the non-phasic population, the phasic population responds linearly to increases in tonic synaptic input. Non-phasic cells respond to transient elevations in synaptic input in a way that strongly depends on background activity levels, phasic cells in a way that is independent of background levels, and show a similar strong linearization of the response
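
    The sketch below is a schematic integrate-and-fire cell with an activity-dependent depolarising afterpotential opposed by a slower dynorphin-like variable, intended only to illustrate the mechanism described above; it is not the authors' calibrated model, and every parameter value is a hypothetical placeholder.

```python
# Schematic integrate-and-fire cell: each spike boosts a slow DAP (excitatory)
# and a slower dynorphin-like variable that inactivates the DAP. Parameters
# are illustrative placeholders, not fitted values.
import numpy as np

rng = np.random.default_rng(1)
dt = 1.0                                   # ms
T = int(200e3 / dt)                        # 200 s of simulated time
V_rest, V_thresh, V_reset = -62.0, -50.0, -65.0
tau_m, tau_dap, tau_dyn = 20.0, 2e3, 30e3  # membrane, DAP, dynorphin time constants (ms)

V, dap, dyn = V_rest, 0.0, 0.0
spikes = []
for step in range(T):
    syn = 1.5 * rng.standard_normal()                     # random synaptic drive
    drive = dap * max(0.0, 1.0 - dyn)                     # dynorphin inactivates the DAP
    V += dt * (-(V - V_rest) + drive) / tau_m + syn
    dap += dt * (-dap / tau_dap)
    dyn += dt * (-dyn / tau_dyn)
    if V >= V_thresh:
        spikes.append(step * dt)
        V = V_reset
        dap += 1.0                                        # activity-dependent DAP build-up
        dyn += 0.05                                       # slower opposing build-up
```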

  4. Intra-operative cone beam computed tomography can help avoid reinterventions and reduce CT follow up after infrarenal EVAR.

    Science.gov (United States)

    Törnqvist, P; Dias, N; Sonesson, B; Kristmundsson, T; Resch, T

    2015-04-01

    Re-interventions after endovascular abdominal aortic aneurysm repair (EVAR) are common and therefore a strict imaging follow up protocol is required. The purpose of this study was to evaluate whether cone beam computed tomography (CBCT) can detect intra-operative complications and to compare this with angiography and the 1 month CT follow up (computed tomography angiography [CTA]). Fifty-one patients (44 men) were enrolled in a prospective trial. Patients underwent completion angiography and CBCT during infrarenal EVAR. Contrast was used except when pre-operative renal insufficiency was present or if the maximum contrast dose threshold was reached. CBCT reconstruction included the top of the stent graft to the iliac bifurcation. Endoleaks, kinks, or compressions were recorded. CBCT was technically successful in all patients. Twelve endoleaks were detected on completion digital subtraction angiography (CA). CBCT detected 4/5 type 1 endoleaks, but only one type 2 endoleak. CTA identified eight type 2 endoleaks and one residual type I endoleak. Two cases of stent compression were seen on CA. CBCT revealed five stent compressions and one kink, which resulted in four intra-operative adjunctive manoeuvres. CTA identified all cases of kinks or compressions that were left untreated. Two of them were corrected later. No additional kinks/compressions were found on CTA. Groin closure consisted of 78 fascia sutures, nine cut downs, and 11 percutaneous sutures. Seven femoral artery pseudoaneurysms (<1 cm) were detected on CTA, but no intervention was needed. CA is better than CBCT in detecting and categorizing endoleaks but CBCT (with or without contrast) is better than CA for detection of kinks or stentgraft compression. CTA plus CBCT identified all significant complications noted on the 1 month follow up CTA. The use of intra-operative CA and CBCT could replace early CTA after standard EVAR thus reducing overall radiation and contrast use. Technical development might further

  5. Transport coefficient computation based on input/output reduced order models

    Science.gov (United States)

    Hurst, Joshua L.

    … equation. It is a recursive method that solves nonlinear ODEs by solving an LTV system at each iteration to obtain a new, closer solution. LTV models are derived for both Gosling and Lees-Edwards type models. Particular attention is given to SLLOD Lees-Edwards models because they are in a form most amenable to performing a Taylor series expansion, and they are the most commonly used models for examining viscosity. With the linear models developed, a method is presented to calculate viscosity based on LTI Gosling models, but it is shown to have some limitations. To address these issues, LTV SLLOD models are analyzed with both Balanced Truncation and POD, and both show that significant order reduction is possible. By examining the singular values of both techniques it is shown that Balanced Truncation has the potential to offer greater reduction, which should be expected as it is based on the input/output mapping instead of just the state information as in POD. Obtaining reduced order systems that capture the property of interest is challenging. For Balanced Truncation, reduced order models for 1-D LJ and FENE systems are obtained and are shown to capture the output of interest fairly well. However, numerical challenges currently limit this analysis to small order systems. Suggestions are presented to extend this method to larger systems. In addition, reduced second-order systems are obtained from POD. Here the challenge is extending the solution beyond the original period used for the projection, in particular identifying the manifold the solution travels along. The remaining challenges are presented and discussed.
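
    As a reference point for the POD step discussed above, the following is a generic sketch of extracting a POD basis from simulation snapshots via the singular value decomposition; the snapshot data here is synthetic, not a molecular dynamics trajectory, and the retained-energy threshold is an arbitrary assumption.

```python
# Generic POD from snapshots: the reduced basis is taken from the left singular
# vectors of the snapshot matrix; the retained energy fraction sets the order.
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """snapshots: (n_states, n_snapshots) array of state vectors."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :r], s

# Synthetic snapshot matrix dominated by a few modes
rng = np.random.default_rng(0)
modes = rng.standard_normal((500, 3))
coeffs = rng.standard_normal((3, 200))
X = modes @ coeffs + 1e-3 * rng.standard_normal((500, 200))
Phi, s = pod_basis(X)
X_reduced = Phi.T @ X          # reduced coordinates, dimension r << 500
```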

  6. A High-Throughput Computational Framework for Identifying Significant Copy Number Aberrations from Array Comparative Genomic Hybridisation Data

    Directory of Open Access Journals (Sweden)

    Ian Roberts

    2012-01-01

    Full Text Available Reliable identification of copy number aberrations (CNA) from comparative genomic hybridization data would be improved by the availability of a generalised method for processing large datasets. To this end, we developed swatCGH, a data analysis framework and region detection heuristic for computational grids. swatCGH analyses sequentially displaced (sliding) windows of neighbouring probes and applies adaptive thresholds of varying stringency to identify the 10% of each chromosome that contains the most frequently occurring CNAs. We used the method to analyse a published dataset, comparing data preprocessed using four different DNA segmentation algorithms, and two methods for prioritising the detected CNAs. The consolidated list of the most commonly detected aberrations confirmed the value of swatCGH as a simplified high-throughput method for identifying biologically significant CNA regions of interest.
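
    The sliding-window idea can be illustrated with a toy sketch: average neighbouring probe log2-ratios in displaced windows and flag windows whose scores exceed a percentile threshold. This is not swatCGH itself; the simulated probe values, window size, step, and threshold are hypothetical.

```python
# Toy sliding-window scan over probe log2-ratios with a percentile threshold.
import numpy as np

def windowed_scores(log_ratios, window=10, step=5):
    starts = np.arange(0, len(log_ratios) - window + 1, step)
    return starts, np.array([log_ratios[s:s + window].mean() for s in starts])

def flag_windows(scores, percentile=90):
    threshold = np.percentile(np.abs(scores), percentile)
    return np.abs(scores) >= threshold

rng = np.random.default_rng(0)
ratios = rng.normal(0.0, 0.2, 1000)
ratios[400:450] += 0.8                      # simulated gained region
starts, scores = windowed_scores(ratios)
flags = flag_windows(scores)
candidate_starts = starts[flags]            # windows overlapping the aberration
```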

  7. Hybrid reduced order modeling for assembly calculations

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Youngsuk, E-mail: ysbang00@fnctech.com [FNC Technology, Co. Ltd., Yongin-si (Korea, Republic of); Abdel-Khalik, Hany S., E-mail: abdelkhalik@purdue.edu [Purdue University, West Lafayette, IN (United States); Jessee, Matthew A., E-mail: jesseema@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Mertyurek, Ugur, E-mail: mertyurek@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2015-12-15

    Highlights: • Reducing computational cost in engineering calculations. • Reduced order modeling algorithm for multi-physics problems like assembly calculations. • Non-intrusive algorithm with random sampling. • Pattern recognition in the components with high sensitivity and large variation. - Abstract: While the accuracy of assembly calculations has considerably improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution on small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.

  8. Hybrid reduced order modeling for assembly calculations

    International Nuclear Information System (INIS)

    Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; Mertyurek, Ugur

    2015-01-01

    Highlights: • Reducing computational cost in engineering calculations. • Reduced order modeling algorithm for multi-physics problems like assembly calculations. • Non-intrusive algorithm with random sampling. • Pattern recognition in the components with high sensitivity and large variation. - Abstract: While the accuracy of assembly calculations has considerably improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution on small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.

  9. Sustained attention training reduces spatial bias in Parkinson's disease: a pilot case series.

    Science.gov (United States)

    DeGutis, Joseph; Grosso, Mallory; VanVleet, Thomas; Esterman, Michael; Pistorino, Laura; Cronin-Golomb, Alice

    2016-01-01

    Individuals with Parkinson's disease (PD) commonly demonstrate lateralized spatial biases, which affect daily functioning. Those with PD with initial motor symptoms on the left body side (LPD) have reduced leftward attention, whereas PD with initial motor symptoms on the right side (RPD) may display reduced rightward attention. We investigated whether a sustained attention training program could help reduce these spatial biases. Four non-demented individuals with PD (2 LPD, 2 RPD) performed a visual search task before and after 1 month of computer training. Before training, all participants showed a significant spatial bias and after training, all participants' spatial bias was eliminated.

  10. Effect of yoga on self-rated visual discomfort in computer users.

    Science.gov (United States)

    Telles, Shirley; Naveen, K V; Dash, Manoj; Deginal, Rajendra; Manjunath, N K

    2006-12-03

    'Dry eye' appears to be the main contributor to the symptoms of computer vision syndrome. Regular breaks and the use of artificial tears or certain eye drops are some of the options to reduce visual discomfort. A combination of yoga practices has been shown to reduce visual strain in persons with progressive myopia. The present randomized controlled trial was planned to evaluate the effect of a combination of yoga practices on self-rated symptoms of visual discomfort in professional computer users in Bangalore. Two hundred and ninety-one professional computer users were randomly assigned to two groups, yoga (YG, n = 146) and wait list control (WL, n = 145). Both groups were assessed at baseline and after sixty days for self-rated visual discomfort using a standard questionnaire. During these 60 days the YG group practiced an hour of yoga daily, five days a week, and the WL group did their usual recreational activities, also for an hour daily, for the same duration. At 60 days there were 62 participants remaining in the YG group and 55 in the WL group. While the scores for visual discomfort of both groups were comparable at baseline, after 60 days there was a significantly decreased score in the YG group, whereas the WL group showed significantly increased scores. The results suggest that the yoga practice appeared to reduce visual discomfort, while the group who had no yoga intervention (WL) showed an increase in discomfort at the end of sixty days.

  11. Modest hypoxia significantly reduces triglyceride content and lipid droplet size in 3T3-L1 adipocytes

    Energy Technology Data Exchange (ETDEWEB)

    Hashimoto, Takeshi, E-mail: thashimo@fc.ritsumei.ac.jp [Faculty of Sport and Health Science, Ritsumeikan University, 1-1-1 Nojihigashi, Kusatsu, Shiga 525-8577 (Japan); Yokokawa, Takumi; Endo, Yuriko [Faculty of Sport and Health Science, Ritsumeikan University, 1-1-1 Nojihigashi, Kusatsu, Shiga 525-8577 (Japan); Iwanaka, Nobumasa [Ritsumeikan Global Innovation Research Organization, Ritsumeikan University, 1-1-1 Nojihigashi, Kusatsu, Shiga 525-8577 (Japan); Higashida, Kazuhiko [Faculty of Sport and Health Science, Ritsumeikan University, 1-1-1 Nojihigashi, Kusatsu, Shiga 525-8577 (Japan); Faculty of Sport Science, Waseda University, 2-579-15 Mikajima, Tokorozawa, Saitama 359-1192 (Japan); Taguchi, Sadayoshi [Faculty of Sport and Health Science, Ritsumeikan University, 1-1-1 Nojihigashi, Kusatsu, Shiga 525-8577 (Japan)

    2013-10-11

    Highlights: •Long-term hypoxia decreased the size of LDs and lipid storage in 3T3-L1 adipocytes. •Long-term hypoxia increased basal lipolysis in 3T3-L1 adipocytes. •Hypoxia decreased lipid-associated proteins in 3T3-L1 adipocytes. •Hypoxia decreased basal glucose uptake and lipogenic proteins in 3T3-L1 adipocytes. •Hypoxia-mediated lipogenesis may be an attractive therapeutic target against obesity. -- Abstract: Background: A previous study has demonstrated that endurance training under hypoxia results in a greater reduction in body fat mass compared to exercise under normoxia. However, the cellular and molecular mechanisms that underlie this hypoxia-mediated reduction in fat mass remain uncertain. Here, we examine the effects of modest hypoxia on adipocyte function. Methods: Differentiated 3T3-L1 adipocytes were incubated at 5% O₂ for 1 week (long-term hypoxia, HL) or one day (short-term hypoxia, HS) and compared with a normoxia control (NC). Results: HL, but not HS, resulted in a significant reduction in lipid droplet size and triglyceride content (by 50%) compared to NC (p < 0.01). As estimated by glycerol release, isoproterenol-induced lipolysis was significantly lowered by hypoxia, whereas the release of free fatty acids under the basal condition was prominently enhanced with HL compared to NC or HS (p < 0.01). Lipolysis-associated proteins, such as perilipin 1 and hormone-sensitive lipase, were unchanged, whereas adipose triglyceride lipase and its activator protein CGI-58 were decreased with HL in comparison to NC. Interestingly, such lipogenic proteins as fatty acid synthase, lipin-1, and peroxisome proliferator-activated receptor gamma were decreased. Furthermore, the uptake of glucose, the major precursor of 3-glycerol phosphate for triglyceride synthesis, was significantly reduced in HL compared to NC or HS (p < 0.01). Conclusion: We conclude that hypoxia has a direct impact on reducing the triglyceride content and lipid droplet size via

  12. Modest hypoxia significantly reduces triglyceride content and lipid droplet size in 3T3-L1 adipocytes

    International Nuclear Information System (INIS)

    Hashimoto, Takeshi; Yokokawa, Takumi; Endo, Yuriko; Iwanaka, Nobumasa; Higashida, Kazuhiko; Taguchi, Sadayoshi

    2013-01-01

    Highlights: •Long-term hypoxia decreased the size of LDs and lipid storage in 3T3-L1 adipocytes. •Long-term hypoxia increased basal lipolysis in 3T3-L1 adipocytes. •Hypoxia decreased lipid-associated proteins in 3T3-L1 adipocytes. •Hypoxia decreased basal glucose uptake and lipogenic proteins in 3T3-L1 adipocytes. •Hypoxia-mediated lipogenesis may be an attractive therapeutic target against obesity. -- Abstract: Background: A previous study has demonstrated that endurance training under hypoxia results in a greater reduction in body fat mass compared to exercise under normoxia. However, the cellular and molecular mechanisms that underlie this hypoxia-mediated reduction in fat mass remain uncertain. Here, we examine the effects of modest hypoxia on adipocyte function. Methods: Differentiated 3T3-L1 adipocytes were incubated at 5% O₂ for 1 week (long-term hypoxia, HL) or one day (short-term hypoxia, HS) and compared with a normoxia control (NC). Results: HL, but not HS, resulted in a significant reduction in lipid droplet size and triglyceride content (by 50%) compared to NC (p < 0.01). As estimated by glycerol release, isoproterenol-induced lipolysis was significantly lowered by hypoxia, whereas the release of free fatty acids under the basal condition was prominently enhanced with HL compared to NC or HS (p < 0.01). Lipolysis-associated proteins, such as perilipin 1 and hormone-sensitive lipase, were unchanged, whereas adipose triglyceride lipase and its activator protein CGI-58 were decreased with HL in comparison to NC. Interestingly, such lipogenic proteins as fatty acid synthase, lipin-1, and peroxisome proliferator-activated receptor gamma were decreased. Furthermore, the uptake of glucose, the major precursor of 3-glycerol phosphate for triglyceride synthesis, was significantly reduced in HL compared to NC or HS (p < 0.01). Conclusion: We conclude that hypoxia has a direct impact on reducing the triglyceride content and lipid droplet size via

  13. Smoking cessation programmes in radon affected areas: can they make a significant contribution to reducing radon-induced lung cancers?

    International Nuclear Information System (INIS)

    Denman, A.R.; Groves-Kirkby, C.J.; Timson, K.; Shield, G.; Rogers, S.; Phillips, P.S.

    2008-01-01

    Domestic radon levels in parts of the UK are sufficiently high to increase the risk of lung cancer in the occupants. Public health campaigns in Northamptonshire, a designated radon affected area with 6.3% of homes having average radon levels over the UK action level of 200 Bq m⁻³, have encouraged householders to test for radon and then to carry out remediation in their homes, but have been only partially successful. Only 40% of Northamptonshire houses have been tested, and only 15% of householders finding raised levels proceed to remediate. Of those who did remediate, only 9% smoked, compared to a countywide average of 28.8%. This is unfortunate, since radon and smoking combine to place the individual at higher risk by a factor of around 4, and suggests that current strategies to reduce domestic radon exposure are not reaching those most at risk. During 2004-5, the NHS Stop Smoking Services in Northamptonshire assisted 2,808 smokers to quit to the 4-week stage, with some 30% of 4-week quitters remaining quitters at 1 year. We consider whether smoking cessation campaigns make significant contributions to radon risk reduction on their own, by assessing individual occupants' risk of developing lung cancer from knowledge of their age, gender, and smoking habits, together with the radon level in their house. The results demonstrate that smoking cessation programmes have significant added value in radon affected areas, and contribute a greater health benefit than reducing radon levels in the smokers' homes, whilst they remain smokers. Additionally, results are presented from a questionnaire-based survey of quitters, addressing their reasons for seeking help in quitting smoking, and whether knowledge of radon risks influenced this decision. The impact of these findings on future public health campaigns to reduce the impact of radon and smoking is discussed. (author)

  14. [The significance of dermatologic management in computer-assisted occupational dermatology consultation].

    Science.gov (United States)

    Rakoski, J; Borelli, S

    1989-01-15

    At our occupational outpatient clinic, 230 patients were treated over a period of about 15 months. With the help of a standardized questionnaire, we recorded all data regarding the relevant substances the patients came into contact with during their work, as well as the various jobs they had held since leaving school. The patients were seen repeatedly and trained in procedures of skin care and skin protection. If required, we took steps to find new jobs for them within their employing company; this was done in cooperation with the trade cooperative association in accordance with the dermatological insurance consultancy procedure. If these proceedings did not work out, the patient had to change profession altogether. All data were computerized. As an example of this computer-based documentation we present the data of barbers.

  15. Computing and the Crisis: The Significant Role of New Information Technologies in the Current Socio-economic Meltdown

    Directory of Open Access Journals (Sweden)

    David Hakken

    2010-08-01

    Full Text Available There is good reason to be concerned about the long-term implications of the current crisis for the reproduction of contemporary social formations. Thus there is an urgent need to understand its character, especially its distinctive features. This article identifies profound ambiguities in valuing assets as new and key economic features of this crisis, ambiguities traceable to the dominant, “computationalist” computing used to develop new financial instruments. After some preliminaries, the article identifies four specific ways in which computerization of finance is generative of crisis. It then demonstrates how computationalist computing is linked to other efforts to extend commodification based on the ideology of so-called “intellectual property” (IP. Several other accounts for the crisis are considered and then demonstrated to have less explanatory value. After considering how some commons-oriented (e.g., Free/Libre and/or Open Source Software development projects) forms of computing also undermine the IP project, the article concludes with a brief discussion of what research on Socially Robust and Enduring Computing might contribute to fostering alternative, non-crisis generative ways to compute.

  16. Cone-beam computed tomography analysis of accessory maxillary ostium and Haller cells: Prevalence and clinical significance

    Energy Technology Data Exchange (ETDEWEB)

    Ali, Ibrahim K.; Sansare, Kaustubh; Karjodkar, Freny R.; Vanga, Kavita; Salve, Prashant [Dept. of Oral Medicine and Radiology, Nair Hospital Dental College, Mumbai (India); Pawar, Ajinkya M. [Dept. of Conservative Dentistry and Endodontics, Nair Hospital Dental College, Mumbai (India)

    2017-03-15

    This study aimed to evaluate the prevalence of Haller cells and accessory maxillary ostium (AMO) in cone-beam computed tomography (CBCT) images, and to analyze the relationships among Haller cells, AMO, and maxillary sinusitis. Volumetric CBCT scans from 201 patients were retrieved from our institution's Digital Imaging and Communications in Medicine archive folder. Two observers evaluated the presence of Haller cells, AMO, and maxillary sinusitis in the CBCT scans. AMO was observed in 114 patients, of whom 27 (23.7%) had AMO exclusively on the right side, 26 (22.8%) only on the left side, and 61 (53.5%) bilaterally. Haller cells were identified in 73 (36.3%) patients. In 24 (32.9%) they were present exclusively on the right side, in 17 (23.3%) they were only present on the left side, and in 32 (43.8%) they were located bilaterally. Of the 73 (36.3%) patients with Haller cells, maxillary sinusitis was also present in 50 (68.5%). On using chi-square test, a significant association was observed between AMO and maxillary sinusitis in the presence of Haller cells. Our results showed AMO and Haller cells to be associated with maxillary sinusitis. This study provides evidence for the usefulness of CBCT in imaging the bony anatomy of the sinonasal complex with significantly higher precision and a smaller radiation dose.

  17. Computer science: Key to a space program renaissance. The 1981 NASA/ASEE summer study on the use of computer science and technology in NASA. Volume 2: Appendices

    Science.gov (United States)

    Freitas, R. A., Jr. (Editor); Carlson, P. A. (Editor)

    1983-01-01

    Adoption of an aggressive computer science research and technology program within NASA will: (1) enable new mission capabilities such as autonomous spacecraft, reliability and self-repair, and low-bandwidth intelligent Earth sensing; (2) lower manpower requirements, especially in the areas of Space Shuttle operations, by making fuller use of control center automation, technical support, and internal utilization of state-of-the-art computer techniques; (3) reduce project costs via improved software verification, software engineering, enhanced scientist/engineer productivity, and increased managerial effectiveness; and (4) significantly improve internal operations within NASA with electronic mail, managerial computer aids, an automated bureaucracy and uniform program operating plans.

  18. Problems experienced by people with arthritis when using a computer.

    Science.gov (United States)

    Baker, Nancy A; Rogers, Joan C; Rubinstein, Elaine N; Allaire, Saralynn H; Wasko, Mary Chester

    2009-05-15

    To describe the prevalence of computer use problems experienced by a sample of people with arthritis, and to determine differences in the magnitude of these problems among people with rheumatoid arthritis (RA), osteoarthritis (OA), and fibromyalgia (FM). Subjects were recruited from the Arthritis Network Disease Registry and asked to complete a survey, the Computer Problems Survey, which was developed for this study. Descriptive statistics were calculated for the total sample and the 3 diagnostic subgroups. Ordinal regressions were used to determine differences between the diagnostic subgroups with respect to each equipment item while controlling for confounding demographic variables. A total of 359 respondents completed a survey. Of the 315 respondents who reported using a computer, 84% reported a problem with computer use attributed to their underlying disorder, and approximately 77% reported some discomfort related to computer use. Equipment items most likely to account for problems and discomfort were the chair, keyboard, mouse, and monitor. Of the 3 subgroups, significantly more respondents with FM reported more severe discomfort, more problems, and greater limitations related to computer use than those with RA or OA for all 4 equipment items. Computer use is significantly affected by arthritis. This could limit the ability of a person with arthritis to participate in work and home activities. Further study is warranted to delineate disease-related limitations and develop interventions to reduce them.

  19. Point Cloud-Based Automatic Assessment of 3D Computer Animation Courseworks

    Science.gov (United States)

    Paravati, Gianluca; Lamberti, Fabrizio; Gatteschi, Valentina; Demartini, Claudio; Montuschi, Paolo

    2017-01-01

    Computer-supported assessment tools can bring significant benefits to both students and teachers. When integrated in traditional education workflows, they may help to reduce the time required to perform the evaluation and consolidate the perception of fairness of the overall process. When integrated within on-line intelligent tutoring systems,…

  20. Fast computation of the characteristics method on vector computers

    International Nuclear Information System (INIS)

    Kugo, Teruhiko

    2001-11-01

    Fast computation of the characteristics method to solve the neutron transport equation in a heterogeneous geometry has been studied. Two vector computation algorithms, an odd-even sweep (OES) method and an independent sequential sweep (ISS) method, have been developed and their efficiency for a typical fuel assembly calculation has been investigated. For both methods, the vector computation is 15 times faster than a scalar computation. Comparing the OES and ISS methods, the following was found: 1) there is only a small difference in computation speed, 2) the ISS method shows faster convergence, and 3) the ISS method saves about 80% of the computer memory required by the OES method. It is, therefore, concluded that the ISS method is superior to the OES method as a vectorization method. In the vector computation, a table-look-up method that replaces direct evaluation of the exponential function saves only 20% of the whole computation time. Both the coarse mesh rebalance method and the Aitken acceleration method are effective as acceleration methods for the characteristics method; a combination of the two saves 70-80% of the outer iterations compared with a free iteration. (author)
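    The table look-up mentioned above is a standard way to avoid repeatedly evaluating the exponential attenuation factor inside a transport sweep. The sketch below only illustrates that idea, assuming a tabulation of exp(-x) on a fixed grid with linear interpolation; the range, table size, and function names are illustrative and not taken from the paper.

    ```python
    import numpy as np

    # Precompute exp(-x) on a uniform grid once, then interpolate at run time.
    X_MAX, N_ENTRIES = 20.0, 4096          # illustrative choices
    _grid = np.linspace(0.0, X_MAX, N_ENTRIES)
    _table = np.exp(-_grid)
    _step = _grid[1] - _grid[0]

    def exp_neg(x):
        """Approximate exp(-x) for 0 <= x <= X_MAX via table look-up."""
        i = min(int(x / _step), N_ENTRIES - 2)
        frac = x / _step - i
        return (1.0 - frac) * _table[i] + frac * _table[i + 1]
    ```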

  1. Reduced Numerical Approximation of Reduced Fluid-Structure Interaction Problems With Applications in Hemodynamics

    Directory of Open Access Journals (Sweden)

    Claudia M. Colciago

    2018-06-01

    Full Text Available This paper deals with fast simulations of the hemodynamics in large arteries by considering a reduced model of the associated fluid-structure interaction problem, which in turn allows an additional reduction in terms of the numerical discretisation. The resulting method is both accurate and computationally cheap. This goal is achieved by means of two levels of reduction: first, we describe the model equations with a reduced mathematical formulation which allows the fluid-structure interaction problem to be written as a Navier-Stokes system with non-standard boundary conditions; second, we employ numerical reduction techniques to further and drastically lower the computational costs. The non-standard boundary condition is of generalized Robin type, with boundary mass and boundary stiffness terms accounting for the arterial wall compliance. The numerical reduction is obtained by coupling two well-known techniques: the proper orthogonal decomposition and the reduced basis method, in particular the greedy algorithm. We start by reducing the numerical dimension of the problem at hand with a proper orthogonal decomposition and we measure the system energy with specific norms; this makes it possible to take into account the different orders of magnitude of the state variables, the velocity and the pressure. Then, we introduce a strategy based on a greedy procedure which aims at enriching the reduced discretization space with low offline computational costs. As an application, we consider a realistic hemodynamics problem with a perturbation in the boundary conditions and we show the good performance of the reduction techniques presented in the paper. The results obtained with the numerical reduction algorithm are compared with those obtained by a standard finite element method. The gains obtained in terms of CPU time are of three orders of magnitude.
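    To make the proper orthogonal decomposition step concrete, the sketch below extracts a POD basis from a matrix of solution snapshots via the singular value decomposition, retaining enough modes to capture a prescribed fraction of the snapshot energy. This is a generic illustration under assumed names and toy data, not the authors' implementation (which additionally uses problem-specific norms and a greedy reduced-basis enrichment).

    ```python
    import numpy as np

    def pod_basis(snapshots, energy=0.999):
        """Return POD modes capturing the given fraction of snapshot energy.

        snapshots: array of shape (n_dof, n_snapshots), one solution per column.
        """
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        cumulative = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(cumulative, energy)) + 1
        return U[:, :r], s[:r]

    # Toy usage: 200 snapshots of a 1000-dof field that lies on ~3 modes.
    rng = np.random.default_rng(0)
    snapshots = rng.standard_normal((1000, 3)) @ rng.standard_normal((3, 200))
    modes, singular_values = pod_basis(snapshots)
    print(modes.shape)  # (1000, 3): the low-dimensional structure is recovered
    ```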

  2. Utilizing Commercial Hardware and Open Source Computer Vision Software to Perform Motion Capture for Reduced Gravity Flight

    Science.gov (United States)

    Humphreys, Brad; Bellisario, Brian; Gallo, Christopher; Thompson, William K.; Lewandowski, Beth

    2016-01-01

    Long duration space travel to Mars or to an asteroid will expose astronauts to extended periods of reduced gravity. Since gravity is not present to aid loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize the loss of bone density, muscle mass and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft is limited. Therefore, compact resistance exercise device prototypes are being developed. The NASA Digital Astronaut Project (DAP) is supporting the Advanced Exercise Concepts (AEC) Project, the Exercise Physiology and Countermeasures (ExPC) project and National Space Biomedical Research Institute (NSBRI) funded researchers by developing computational models of exercising with these new advanced exercise device concepts. To perform validation of these models and to support the Advanced Exercise Concepts Project, several candidate devices have been flown onboard NASA's Reduced Gravity Aircraft. In terrestrial laboratories, researchers typically have motion capture systems available for the measurement of subject kinematics. Onboard the parabolic flight aircraft it is not practical to utilize the traditional motion capture systems due to the large working volume they require and their relatively high replacement cost if damaged. To support measuring kinematics on board parabolic aircraft, a motion capture system is being developed utilizing open source computer vision code with commercial off-the-shelf (COTS) video camera hardware. While the system's accuracy is lower than that of laboratory setups, it provides a means to produce quantitative, comparative motion capture kinematic data. Additionally, data such as the required exercise volume for small spaces such as the Orion capsule can be determined. METHODS: OpenCV is an open source computer vision library that provides the
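    As a hedged illustration of the kind of processing such a system performs, the sketch below uses standard OpenCV calls to track the centroid of a colour-coded marker frame by frame from a COTS camera. The camera index, HSV thresholds, and marker colour are placeholder assumptions; this is not the flight software described above.

    ```python
    import cv2
    import numpy as np

    LOWER_HSV = np.array([40, 80, 80])       # assumed bright green marker
    UPPER_HSV = np.array([80, 255, 255])

    cap = cv2.VideoCapture(0)                # placeholder camera index
    trajectory = []                          # 2D pixel trajectory of the marker
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
        m = cv2.moments(mask)
        if m["m00"] > 0:                     # marker visible in this frame
            trajectory.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    cap.release()
    ```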

  3. Communication: Proper treatment of classically forbidden electronic transitions significantly improves detailed balance in surface hopping

    Energy Technology Data Exchange (ETDEWEB)

    Sifain, Andrew E. [Department of Physics and Astronomy, University of Southern California, Los Angeles, California 90089-0485 (United States); Wang, Linjun [Department of Chemistry, Zhejiang University, Hangzhou 310027 (China); Prezhdo, Oleg V. [Department of Physics and Astronomy, University of Southern California, Los Angeles, California 90089-0485 (United States); Department of Chemistry, University of Southern California, Los Angeles, California 90089-1062 (United States)

    2016-06-07

    Surface hopping is the most popular method for nonadiabatic molecular dynamics. Many have reported that it does not rigorously attain detailed balance at thermal equilibrium, but does so approximately. We show that convergence to the Boltzmann populations is significantly improved when the nuclear velocity is reversed after a classically forbidden hop. The proposed prescription significantly reduces the total number of classically forbidden hops encountered along a trajectory, suggesting that some randomization in nuclear velocity is needed when classically forbidden hops constitute a large fraction of attempted hops. Our results are verified computationally using two- and three-level quantum subsystems, coupled to a classical bath undergoing Langevin dynamics.

  4. Fast algorithms for computing phylogenetic divergence time.

    Science.gov (United States)

    Crosby, Ralph W; Williams, Tiffani L

    2017-12-06

    The inference of species divergence time is a key step in most phylogenetic studies. Methods have been available for the last ten years to perform the inference, but the performance of the methods does not yet scale well to studies with hundreds of taxa and thousands of DNA base pairs. For example, a study of 349 primate taxa was estimated to require over 9 months of processing time. In this work, we present a new algorithm, AncestralAge, that significantly improves the performance of the divergence time process. As part of AncestralAge, we demonstrate a new method for the computation of phylogenetic likelihood, and our experiments show a 90% improvement in likelihood computation time on the aforementioned dataset of 349 primate taxa with over 60,000 DNA base pairs. Additionally, we show that our new method for the computation of the Bayesian prior on node ages reduces the running time for this computation on the 349 taxa dataset by 99%. Through the use of these new algorithms we open up the ability to perform divergence time inference on large phylogenetic studies.

  5. Using lytic bacteriophages to eliminate or significantly reduce contamination of food by foodborne bacterial pathogens.

    Science.gov (United States)

    Sulakvelidze, Alexander

    2013-10-01

    Bacteriophages (also called 'phages') are viruses that kill bacteria. They are arguably the oldest (3 billion years old, by some estimates) and most ubiquitous (total number estimated to be 10^30-10^32) known organisms on Earth. Phages play a key role in maintaining microbial balance in every ecosystem where bacteria exist, and they are part of the normal microflora of all fresh, unprocessed foods. Interest in various practical applications of bacteriophages has been gaining momentum recently, with perhaps the most attention focused on using them to improve food safety. That approach, called 'phage biocontrol', typically includes three main types of applications: (i) using phages to treat domesticated livestock in order to reduce their intestinal colonization with, and shedding of, specific bacterial pathogens; (ii) treatments for decontaminating inanimate surfaces in food-processing facilities and other food establishments, so that foods processed on those surfaces are not cross-contaminated with the targeted pathogens; and (iii) post-harvest treatments involving direct applications of phages onto the harvested foods. This mini-review primarily focuses on the last type of intervention, which has been gaining the most momentum recently. Indeed, the results of recent studies dealing with improving food safety, and several recent regulatory approvals of various commercial phage preparations developed for post-harvest food safety applications, strongly support the idea that lytic phages may provide a safe, environmentally-friendly, and effective approach for significantly reducing contamination of various foods with foodborne bacterial pathogens. However, some important technical and nontechnical problems may need to be addressed before phage biocontrol protocols can become an integral part of routine food safety intervention strategies implemented by food industries in the USA. © 2013 Society of Chemical Industry.

  6. From meatless Mondays to meatless Sundays: motivations for meat reduction among vegetarians and semi-vegetarians who mildly or significantly reduce their meat intake.

    Science.gov (United States)

    De Backer, Charlotte J S; Hudders, Liselot

    2014-01-01

    This study explores vegetarians' and semi-vegetarians' motives for reducing their meat intake. Participants are categorized as vegetarians (remove all meat from their diet); semi-vegetarians (significantly reduce meat intake: at least three days a week); or light semi-vegetarians (mildly reduce meat intake: once or twice a week). Most differences appear between vegetarians and both groups of semi-vegetarians. Animal-rights and ecological concerns, together with taste preferences, predict vegetarianism, while an increase in health motives increases the odds of being semi-vegetarian. Even within each group, subgroups with different motives appear, and it is recommended that future researchers pay more attention to these differences.

  7. Reduced Chemical Kinetic Mechanisms for JP-8 Combustion

    National Research Council Canada - National Science Library

    Montgomery, Christopher J; Cannon, S. M; Mawid, M. A; Sekar, B

    2002-01-01

    Using CARM (Computer Aided Reduction Method), a computer program that automates the mechanism reduction process, six different reduced chemical kinetic mechanisms for JP-8 combustion have been generated...

  8. Simulation of reactive nanolaminates using reduced models: III. Ingredients for a general multidimensional formulation

    Energy Technology Data Exchange (ETDEWEB)

    Salloum, Maher; Knio, Omar M. [Department of Mechanical Engineering, The Johns Hopkins University, Baltimore, MD 21218-2686 (United States)

    2010-06-15

    A transient multidimensional reduced model is constructed for the simulation of reaction fronts in Ni/Al multilayers. The formulation is based on the generalization of earlier methodologies developed for quasi-1D axial and normal propagation, specifically by adapting the reduced formalism for atomic mixing and heat release. This approach enables us to focus on resolving the thermal front structure, whose evolution is governed by thermal diffusion and heat release. A mixed integration scheme is used for this purpose, combining an extended-stability, Runge-Kutta-Chebychev (RKC) integration of the diffusion term with exact treatment of the chemical source term. Thus, a detailed description of atomic mixing within individual layers is avoided, which enables transient modeling of the reduced equations of motion in multiple dimensions. Two-dimensional simulations are first conducted of front propagation in composites combining two bilayer periods. Results are compared with the experimental measurements of Knepper et al., which reveal that the reaction velocity can depend significantly on layering frequency. The comparison indicates that, using a concentration-dependent conductivity model, the transient 2D computations can reasonably reproduce the experimental behavior. Additional tests are performed based on 3D computations of surface initiated reactions. Comparison of computed predictions with laser ignition measurements indicates that the computations provide reasonable estimates of ignition thresholds. A detailed discussion is finally provided of potential generalizations and associated hurdles. (author)

  9. Data mining in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Ruxandra-Ştefania PETRE

    2012-10-01

    Full Text Available This paper describes how data mining is used in cloud computing. Data mining is used for extracting potentially useful information from raw data. The integration of data mining techniques into normal day-to-day activities has become commonplace. Every day people are confronted with targeted advertising, and data mining techniques help businesses to become more efficient by reducing costs. Data mining techniques and applications are very much needed in the cloud computing paradigm. The implementation of data mining techniques through cloud computing will allow users to retrieve meaningful information from a virtually integrated data warehouse, reducing the costs of infrastructure and storage.

  10. [Computer-aided prescribing: from utopia to reality].

    Science.gov (United States)

    Suárez-Varela Ubeda, J; Beltrán Calvo, C; Molina López, T; Navarro Marín, P

    2005-05-31

    To determine whether the introduction of computer-aided prescribing helped reduce the administrative burden at primary care centers. Descriptive, cross-sectional design. Torreblanca Health Center in the province of Seville, southern Spain. From 29 October 2003 to the present a pilot project involving nine pharmacies in the basic health zone served by this health center has been running to evaluate computer-aided prescribing (the Receta XXI project) with real patients. All patients on the center's list of patients who came to the center for an administrative consultation to renew prescriptions for medications or supplies for long-term treatment. Total number of administrative visits per patient for patients who came to the center to renew prescriptions for long-term treatment, as recorded by the Diraya system (Historia Clinica Digital del Ciudadano, or Citizen's Digital Medical Record) during the period from February to July 2004. Total number of the same type of administrative visits recorded by the previous system (TASS) during the period from February to July 2003. The mean number of administrative visits per month during the period from February to July 2003 was 160, compared to a mean number of 64 visits during the period from February to July 2004. The reduction in the number of visits for prescription renewal was 60%. Introducing a system for computer-aided prescribing significantly reduced the number of administrative visits for prescription renewal for long-term treatment. This could help reduce the administrative burden considerably in primary care if the system were used in all centers.

  11. Low-cost addition-subtraction sequences for the final exponentiation computation in pairings

    DEFF Research Database (Denmark)

    Guzmán-Trampe, Juan E; Cruz-Cortéz, Nareli; Dominguez Perez, Luis

    2014-01-01

    In this paper, we address the problem of finding low cost addition–subtraction sequences for situations where a doubling step is significantly cheaper than a non-doubling one. One application of this setting appears in the computation of the final exponentiation step of the reduced Tate pairing d...
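    Although the abstract above is truncated, the setting it describes (exponentiation where doubling steps are much cheaper than non-doubling ones) can be illustrated generically with a signed-digit recoding: allowing subtraction digits reduces the number of non-doubling steps. The sketch below uses a non-adjacent-form (NAF) recoding with plain modular arithmetic standing in for the pairing group; the names and the modular setting are illustrative assumptions, not the authors' construction.

    ```python
    def naf(e):
        """Signed-binary (non-adjacent form) digits of e, least significant first."""
        digits = []
        while e > 0:
            if e & 1:
                d = 2 - (e % 4)        # +1 or -1, leaves the next bit zero
                e -= d
            else:
                d = 0
            digits.append(d)
            e //= 2
        return digits

    def power_naf(x, e, p):
        """x**e mod p via squarings (the cheap 'doubling' steps) plus occasional
        multiply/divide steps driven by the NAF digits (requires Python 3.8+)."""
        x_inv = pow(x, -1, p)          # inversion is cheap in pairing groups
        acc = 1
        for d in reversed(naf(e)):
            acc = acc * acc % p        # doubling step
            if d == 1:
                acc = acc * x % p      # addition step
            elif d == -1:
                acc = acc * x_inv % p  # subtraction step
        return acc

    assert power_naf(7, 1000003, 2**61 - 1) == pow(7, 1000003, 2**61 - 1)
    ```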

  12. Automatic Identification of the Repolarization Endpoint by Computing the Dominant T-wave on a Reduced Number of Leads.

    Science.gov (United States)

    Giuliani, C; Agostinelli, A; Di Nardo, F; Fioretti, S; Burattini, L

    2016-01-01

    Electrocardiographic (ECG) T-wave endpoint (Tend) identification suffers from a lack of reliability due to the presence of noise and variability among leads. Tend identification can be improved by using global repolarization waveforms obtained by combining several leads. The dominant T-wave (DTW) is a global repolarization waveform that has been shown to improve Tend identification when computed using the 15 leads (I to III, aVr, aVl, aVf, V1 to V6, X, Y, Z) usually available in clinics, of which only 8 (I, II, V1 to V6) are independent. The aim of the present study was to evaluate whether the 8 independent leads are sufficient to obtain a DTW which allows a reliable Tend identification. To this aim, Tend measures automatically identified from 15-dependent-lead DTWs of 46 control healthy subjects (CHS) and 103 acute myocardial infarction patients (AMIP) were compared with those obtained from 8-independent-lead DTWs. Results indicate that the Tend distributions have median values that are not statistically different (CHS: 340 ms vs. 340 ms, respectively; AMIP: 325 ms vs. 320 ms, respectively), besides being strongly correlated (CHS: ρ=0.97; AMIP: ρ=0.88). Thus, for automatic Tend identification from the DTW, the 8 independent leads can be used without a statistically significant loss of accuracy but with a significant decrease in computational effort. The lead dependence of 7 out of 15 leads does not introduce a significant bias in the Tend determination from 15-dependent-lead DTWs.
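    One common way to build a single global repolarization waveform from several leads is to take the first principal component of the lead signals over the repolarization window; the sketch below shows that generic construction with NumPy. It is offered only as an illustration: the exact definition of the dominant T-wave used in the study may differ, and the array names and toy data are assumptions.

    ```python
    import numpy as np

    def dominant_waveform(leads):
        """First principal component of a (n_leads, n_samples) signal matrix.

        Returns one waveform capturing the direction of largest common
        variation across the leads (an illustrative 'dominant' wave).
        """
        centered = leads - leads.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(centered, full_matrices=False)
        return s[0] * Vt[0]              # dominant temporal waveform

    # Toy usage with 8 synthetic leads sharing one underlying T-wave shape.
    t = np.linspace(0, 1, 500)
    t_wave = np.exp(-((t - 0.5) ** 2) / 0.01)
    gains = np.random.default_rng(1).uniform(0.5, 1.5, 8)
    noise = 0.01 * np.random.default_rng(2).standard_normal((8, 500))
    dtw = dominant_waveform(np.outer(gains, t_wave) + noise)
    ```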

  13. Self-Awareness in Computer Networks

    Directory of Open Access Journals (Sweden)

    Ariane Keller

    2014-01-01

    Full Text Available The Internet architecture works well for a wide variety of communication scenarios. However, its flexibility is limited because it was initially designed to provide communication links between a few static nodes in a homogeneous network and did not attempt to solve the challenges of today’s dynamic network environments. Although the Internet has evolved to a global system of interconnected computer networks, which links together billions of heterogeneous compute nodes, its static architecture remained more or less the same. Nowadays the diversity in networked devices, communication requirements, and network conditions vary heavily, which makes it difficult for a static set of protocols to provide the required functionality. Therefore, we propose a self-aware network architecture in which protocol stacks can be built dynamically. Those protocol stacks can be optimized continuously during communication according to the current requirements. For this network architecture we propose an FPGA-based execution environment called EmbedNet that allows for a dynamic mapping of network protocols to either hardware or software. We show that our architecture can reduce the communication overhead significantly by adapting the protocol stack and that the dynamic hardware/software mapping of protocols considerably reduces the CPU load introduced by packet processing.

  14. Evaluating computer program performance on the CRAY-1

    International Nuclear Information System (INIS)

    Rudsinski, L.; Pieper, G.W.

    1979-01-01

    The Advanced Scientific Computers Project of Argonne's Applied Mathematics Division has two objectives: to evaluate supercomputers and to determine their effect on Argonne's computing workload. Initial efforts have focused on the CRAY-1, which is the only advanced computer currently available. Users from seven Argonne divisions executed test programs on the CRAY and made performance comparisons with the IBM 370/195 at Argonne. This report describes these experiences and discusses various techniques for improving run times on the CRAY. Direct translations of code from scalar to vector processor reduced running times as much as two-fold, and this reduction will become more pronounced as the CRAY compiler is developed. Further improvement (two- to ten-fold) was realized by making minor code changes to facilitate compiler recognition of the parallel and vector structure within the programs. Finally, extensive rewriting of the FORTRAN code structure reduced execution times dramatically, in three cases by a factor of more than 20; and even greater reduction should be possible by changing algorithms within a production code. It is concluded that the CRAY-1 would be of great benefit to Argonne researchers. Existing codes could be modified with relative ease to run significantly faster than on the 370/195. More important, the CRAY would permit scientists to investigate complex problems currently deemed infeasible on traditional scalar machines. Finally, an interface between the CRAY-1 and IBM computers such as the 370/195, scheduled by Cray Research for the first quarter of 1979, would considerably facilitate the task of integrating the CRAY into Argonne's Central Computing Facility. 13 tables

  15. Walking with a four wheeled walker (rollator) significantly reduces EMG lower-limb muscle activity in healthy subjects.

    Science.gov (United States)

    Suica, Zorica; Romkes, Jacqueline; Tal, Amir; Maguire, Clare

    2016-01-01

    To investigate the immediate effect of four-wheeled-walker (rollator) walking on lower-limb muscle activity and trunk-sway in healthy subjects. In this cross-sectional design electromyographic (EMG) data was collected in six lower-limb muscle groups and trunk-sway was measured as peak-to-peak angular displacement of the centre-of-mass (level L2/3) in the sagittal and frontal-planes using the SwayStar balance system. 19 subjects walked at self-selected speed, firstly without a rollator, then in randomised order: (1) with rollator, (2) with rollator with increased weight-bearing. Rollator-walking caused statistically significant reductions in EMG activity in lower-limb muscle groups and effect-sizes were medium to large. Increased weight-bearing increased the effect. Trunk-sway in the sagittal and frontal-planes showed no statistically significant difference between conditions. Rollator-walking reduces lower-limb muscle activity but trunk-sway remains unchanged as stability is likely gained through forces generated by the upper-limbs. Short-term stability is gained but the long-term effect is unclear and requires investigation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Improving computational efficiency of Monte Carlo simulations with variance reduction

    International Nuclear Information System (INIS)

    Turner, A.; Davis, A.

    2013-01-01

    CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effectively intractable. To combat this effect, CCFE has developed an adaptation of MCNP which dynamically adjusts the WW where a large weight deviation is encountered. The method effectively 'de-optimises' the WW, reducing the VR performance but this is offset by a significant increase in parallel efficiency. Testing with a simple geometry has shown the method does not bias the result. This 'long history method' has enabled CCFE to significantly improve the performance of MCNP calculations for ITER on parallel clusters, and will be beneficial for any geometry combining streaming and deep penetration effects. (authors)
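    As a rough, hedged illustration of the weight-window mechanics described above (splitting particles whose weight is above the window, playing Russian roulette with those below it, and capping the splitting so that no single history becomes intractably long), consider the sketch below. The thresholds, the cap, and the function itself are illustrative assumptions, not the CCFE adaptation or the MCNP implementation.

    ```python
    import random

    def apply_weight_window(weight, w_low, w_high, max_split=10):
        """Return a list of (count, weight) continuations for one particle.

        Particles above the window are split (capped at max_split, mimicking a
        'de-optimised' window), particles below play Russian roulette, and
        particles inside the window continue unchanged.
        """
        w_mid = 0.5 * (w_low + w_high)
        if weight > w_high:
            n = min(int(weight / w_mid) + 1, max_split)
            return [(n, weight / n)]          # split into n lighter particles
        if weight < w_low:
            if random.random() < weight / w_mid:
                return [(1, w_mid)]           # survives roulette, weight raised
            return []                         # killed
        return [(1, weight)]
    ```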

  17. Concatenated codes for fault tolerant quantum computing

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E.; Laflamme, R.; Zurek, W.

    1995-05-01

    The application of concatenated codes to fault tolerant quantum computing is discussed. We have previously shown that for quantum memories and quantum communication, a state can be transmitted with error ε provided each gate has error at most cε. We show how this can be used with Shor's fault tolerant operations to reduce the accuracy requirements when maintaining states not currently participating in the computation. Viewing Shor's fault tolerant operations as a method for reducing the error of operations, we give a concatenated implementation which promises to propagate the reduction hierarchically. This has the potential of reducing the accuracy requirements in long computations.

  18. Advanced Computational Methods for Thermal Radiative Heat Transfer

    Energy Technology Data Exchange (ETDEWEB)

    Tencer, John; Carlberg, Kevin Thomas; Larsen, Marvin E.; Hogan, Roy E.,

    2016-10-01

    Participating media radiation (PMR) in weapon safety calculations for abnormal thermal environments is too costly to model routinely. This cost may be substantially reduced by applying reduced order modeling (ROM) techniques. The application of ROM to PMR is a new and unique approach for this class of problems. This approach was investigated by the authors and shown to provide significant reductions in the computational expense associated with typical PMR simulations. Once this technology is migrated into production heat transfer analysis codes, this capability will enable the routine use of PMR heat transfer in higher-fidelity simulations of weapon response in fire environments.

  19. Reconfigurable Computing As an Enabling Technology for Single-Photon-Counting Laser Altimetry

    Science.gov (United States)

    Powell, Wesley; Hicks, Edward; Pinchinat, Maxime; Dabney, Philip; McGarry, Jan; Murray, Paul

    2003-01-01

    Single-photon-counting laser altimetry is a new measurement technique offering significant advantages in vertical resolution, reducing instrument size, mass, and power, and reducing laser complexity as compared to analog or threshold detection laser altimetry techniques. However, these improvements come at the cost of a dramatically increased requirement for onboard real-time data processing. Reconfigurable computing has been shown to offer considerable performance advantages in performing this processing. These advantages have been demonstrated on the Multi-KiloHertz Micro-Laser Altimeter (MMLA), an aircraft based single-photon-counting laser altimeter developed by NASA Goddard Space Flight Center with several potential spaceflight applications. This paper describes how reconfigurable computing technology was employed to perform MMLA data processing in real-time under realistic operating constraints, along with the results observed. This paper also expands on these prior results to identify concepts for using reconfigurable computing to enable spaceflight single-photon-counting laser altimeter instruments.

  20. Potent corticosteroid cream (mometasone furoate) significantly reduces acute radiation dermatitis: results from a double-blind, randomized study

    International Nuclear Information System (INIS)

    Bostroem, Aasa; Lindman, Henrik; Swartling, Carl; Berne, Berit; Bergh, Jonas

    2001-01-01

    Purpose: Radiation-induced dermatitis is a very common side effect of radiation therapy, and may necessitate interruption of the therapy. There is a substantial lack of evidence-based treatments for this condition. The aim of this study was to investigate the effect of mometasone furoate cream (MMF) on radiation dermatitis in a prospective, double-blind, randomized study. Material and methods: The study comprised 49 patients with node-negative breast cancer. They were operated on with sector resection and scheduled for postoperative radiotherapy using photons with identical radiation qualities and dosage to the breast parenchyma. The patients were randomized to receive either MMF or emollient cream. The cream was applied on the irradiated skin twice a week from the start of radiotherapy until the 12th fraction (24 Gy) and thereafter once daily until 3 weeks after completion of radiation. Both groups additionally received non-blinded emollient cream daily. The intensity of the acute radiation dermatitis was evaluated on a weekly basis regarding erythema and pigmentation, using a reflectance spectrophotometer together with visual scoring of the skin reactions. Results: MMF in combination with emollient cream treatment significantly decreased acute radiation dermatitis (P=0.0033) compared with emollient cream alone. There was no significant difference in pigmentation between the two groups. Conclusions: Adding MMF, a potent topical corticosteroid, to an emollient cream is statistically significantly more effective than emollient cream alone in reducing acute radiation dermatitis

  1. Effect of yoga on self-rated visual discomfort in computer users

    Directory of Open Access Journals (Sweden)

    Deginal Rajendra

    2006-12-01

    Full Text Available Abstract Background 'Dry eye' appears to be the main contributor to the symptoms of computer vision syndrome. Regular breaks and the use of artificial tears or certain eye drops are some of the options to reduce visual discomfort. A combination of yoga practices has been shown to reduce visual strain in persons with progressive myopia. The present randomized controlled trial was planned to evaluate the effect of a combination of yoga practices on self-rated symptoms of visual discomfort in professional computer users in Bangalore. Methods Two hundred and ninety-one professional computer users were randomly assigned to two groups, yoga (YG, n = 146) and wait list control (WL, n = 145). Both groups were assessed at baseline and after sixty days for self-rated visual discomfort using a standard questionnaire. During these 60 days the YG group practiced an hour of yoga daily for five days a week and the WL group did their usual recreational activities also for an hour daily for the same duration. At 60 days there were 62 in the YG group and 55 in the WL group. Results While the scores for visual discomfort of both groups were comparable at baseline, after 60 days there was a significantly decreased score in the YG group, whereas the WL group showed significantly increased scores. Conclusion The results suggest that the yoga practice appeared to reduce visual discomfort, while the group who had no yoga intervention (WL) showed an increase in discomfort at the end of sixty days.

  2. Diagnostic significance of rib series in minor thorax trauma compared to plain chest film and computed tomography.

    Science.gov (United States)

    Hoffstetter, Patrick; Dornia, Christian; Schäfer, Stephan; Wagner, Merle; Dendl, Lena M; Stroszczynski, Christian; Schreyer, Andreas G

    2014-01-01

    Rib series (RS) are a special radiological technique to improve the visualization of the bony parts of the chest. The aim of this study was to evaluate the diagnostic accuracy of rib series in minor thorax trauma. Retrospective study of 56 patients who received RS, of whom 39 were additionally evaluated by plain chest film (PCF). All patients underwent a computed tomography (CT) of the chest. RS and PCF were re-read independently by three radiologists, and the results were compared with the CT as gold standard. Sensitivity, specificity, negative and positive predictive value were calculated. Significance of the differences in findings was determined by the McNemar test, interobserver variability by Cohen's kappa test. 56 patients were evaluated (34 men, 22 women, mean age = 61 years). In 22 patients one or more rib fractures could be identified by CT. In 18 of these cases (82%) the correct diagnosis was made by RS, in 16 cases (73%) the correct number of involved ribs was detected. These differences were significant (p = 0.03). Specificity was 100%, negative and positive predictive value were 85% and 100%. Kappa values for the interobserver agreement were 0.92-0.96. Sensitivity of PCF was 46% and was significantly lower (p = 0.008) compared to CT. Rib series does not seem to be a useful examination in evaluating minor thorax trauma. CT seems to be the method of choice to detect rib fractures, but the clinical value of the radiological proof has to be discussed and investigated in larger follow-up studies.

  3. Environmental program with operational cases to reduce risk to the marine environment significantly

    International Nuclear Information System (INIS)

    Cline, J.T.; Forde, R.

    1991-01-01

    In this paper Amoco Norway Oil Company's environmental program is detailed, followed by example operational programs and achievements aimed at minimizing environmental risks to the marine environment at the Valhall platform. With a corporate goal to be a leader in protecting the environment, the appropriate strategies and policies that form the basis of the environmental management system are incorporated in the quality assurance programs. Also included in the program are necessary organizational structures, responsibilities of environmental affairs and line organization personnel, compliance procedures and a waste task force obliged to implement operations improvements. An internal environmental audit system has been initiated, in addition to corporate level audits, which, when communicated to the line organization, closes the environmental management loop through experience feedback. Environmental projects underway are significantly decreasing the extent and/or risk of pollution from offshore activities. Cradle-to-grave responsibility is assumed, with waste separated offshore and onshore followed by disposal in audited sites. A $5 MM program is underway to control produced oily solids and reduce oil in produced water, aiming at less than 20 ppm. When oil-based mud is used in deeper hole sections, drill solids disposed at sea average less than 60 g oil/kg dry cuttings using appropriate shaker screens, and a washing/centrifuge system to remove fines. Certain oily liquid wastes are being injected down hole whereas previously they were burned using a mud burner. Finally, a program is underway with a goal to eliminate sea discharge of oil on cuttings through injection disposal of oily wastes, drilling with alternative muds such as a cationic water base mud, and/or proper onshore disposal of oily wastes

  4. Cloud Computing-An Ultimate Technique to Minimize Computing cost for Developing Countries

    OpenAIRE

    Narendra Kumar; Shikha Jain

    2012-01-01

    The presented paper deals with how remotely managed computing and IT resources can be beneficial in developing countries like India and other Asian sub-continent countries. This paper not only defines the architectures and functionalities of cloud computing but also argues strongly for the current demand for cloud computing to achieve organizational and personal levels of IT support at very minimal cost and with high flexibility. The power of cloud can be used to reduce the cost of IT - r...

  5. From Computer-interpretable Guidelines to Computer-interpretable Quality Indicators: A Case for an Ontology.

    Science.gov (United States)

    White, Pam; Roudsari, Abdul

    2014-01-01

    In the United Kingdom's National Health Service, quality indicators are generally measured electronically by using queries and data extraction, resulting in overlap and duplication of query components. Electronic measurement of health care quality indicators could be improved through an ontology intended to reduce duplication of effort during healthcare quality monitoring. While much research has been published on ontologies for computer-interpretable guidelines, quality indicators have lagged behind. We aimed to determine progress on the use of ontologies to facilitate computer-interpretable healthcare quality indicators. We assessed potential for improvements to computer-interpretable healthcare quality indicators in England. We concluded that an ontology for a large, diverse set of healthcare quality indicators could benefit the NHS and reduce workload, with potential lessons for other countries.

  6. Enabling big geoscience data analytics with a cloud-based, MapReduce-enabled and service-oriented workflow framework.

    Directory of Open Access Journals (Sweden)

    Zhenlong Li

    Full Text Available Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework techniques are proposed by leveraging cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. And service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists.

  7. Enabling big geoscience data analytics with a cloud-based, MapReduce-enabled and service-oriented workflow framework.

    Science.gov (United States)

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework techniques are proposed by leveraging cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. And service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists.

  8. Enabling Big Geoscience Data Analytics with a Cloud-Based, MapReduce-Enabled and Service-Oriented Workflow Framework

    Science.gov (United States)

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework techniques are proposed by leveraging cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. And service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012

  9. Applications of the MapReduce programming framework to clinical big data analysis: current landscape and future trends.

    Science.gov (United States)

    Mohammed, Emad A; Far, Behrouz H; Naugler, Christopher

    2014-01-01

    The emergence of massive datasets in a clinical setting presents both challenges and opportunities in data storage and analysis. This so-called "big data" challenges traditional analytic tools and will increasingly require novel solutions adapted from other fields. Advances in information and communication technology present the most viable solutions to big data analysis in terms of efficiency and scalability. It is vital that big data solutions are multithreaded and that data access approaches be precisely tailored to large volumes of semi-structured/unstructured data. The MapReduce programming framework uses two tasks common in functional programming: Map and Reduce. MapReduce is a new parallel processing framework and Hadoop is its open-source implementation on a single computing node or on clusters. Compared with existing parallel processing paradigms (e.g. grid computing and graphical processing unit (GPU)), MapReduce and Hadoop have two advantages: 1) fault-tolerant storage resulting in reliable data processing by replicating the computing tasks, and cloning the data chunks on different computing nodes across the computing cluster; 2) high-throughput data processing via a batch processing framework and the Hadoop distributed file system (HDFS). Data are stored in the HDFS and made available to the slave nodes for computation. In this paper, we review the existing applications of the MapReduce programming framework and its implementation platform Hadoop in clinical big data and related medical health informatics fields. The usage of MapReduce and Hadoop on a distributed system represents a significant advance in clinical big data processing and utilization, and opens up new opportunities in the emerging era of big data analytics. The objective of this paper is to summarize the state-of-the-art efforts in clinical big data analytics and highlight what might be needed to enhance the outcomes of clinical big data analytics tools. This paper is concluded by
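    The Map and Reduce tasks mentioned above are easy to illustrate with a single-process word count; the sketch below is purely illustrative (a real Hadoop job distributes the map calls across nodes, shuffles intermediate pairs by key, and runs the reducers in parallel), and the function names are assumptions.

    ```python
    from collections import defaultdict
    from itertools import chain

    def map_fn(record):
        """Map task: emit (key, value) pairs for one input record."""
        return [(word, 1) for word in record.split()]

    def reduce_fn(key, values):
        """Reduce task: combine all values emitted for one key."""
        return key, sum(values)

    def mapreduce(records):
        groups = defaultdict(list)
        for key, value in chain.from_iterable(map(map_fn, records)):
            groups[key].append(value)        # stand-in for the shuffle phase
        return dict(reduce_fn(k, v) for k, v in groups.items())

    print(mapreduce(["clinical big data", "big data analytics"]))
    # {'clinical': 1, 'big': 2, 'data': 2, 'analytics': 1}
    ```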

  10. REST-MapReduce: An Integrated Interface but Differentiated Service

    Directory of Open Access Journals (Sweden)

    Jong-Hyuk Park

    2014-01-01

    Full Text Available With the fast deployment of cloud computing, MapReduce architectures are becoming the major technologies for mobile cloud computing. The concept of MapReduce was first introduced as a novel programming model and implementation for a large set of computing devices. In this research, we propose a novel concept of REST-MapReduce, enabling users to use only the REST interface without using the MapReduce architecture. This approach provides a higher level of abstraction by integrating the two types of access interface, the REST API and MapReduce. The motivation of this research stems from the slower response time for accessing a simple RDBMS on Hadoop than for direct access to the RDBMS. This is because there is overhead for job scheduling, initiating, starting, tracking, and management during MapReduce-based parallel execution. Therefore, we provide good performance for the REST Open API service and for MapReduce, respectively. This is very useful for constructing REST Open API services on Hadoop hosting services, for example, Amazon AWS (Macdonald, 2005) or IBM Smart Cloud. For evaluating the performance of our REST-MapReduce framework, we conducted experiments with the Jersey REST web server and Hadoop. Experimental results show that our approach outperforms conventional approaches.

  11. All-Particle Multiscale Computation of Hypersonic Rarefied Flow

    Science.gov (United States)

    Jun, E.; Burt, J. M.; Boyd, I. D.

    2011-05-01

    This study examines a new hybrid particle scheme used as an alternative means of multiscale flow simulation. The hybrid particle scheme employs the direct simulation Monte Carlo (DSMC) method in rarefied flow regions and the low diffusion (LD) particle method in continuum flow regions. The numerical procedures of the low diffusion particle method are implemented within an existing DSMC algorithm. The performance of the LD-DSMC approach is assessed by studying Mach 10 nitrogen flow over a sphere with a global Knudsen number of 0.002. The hybrid scheme results show good overall agreement with results from standard DSMC and CFD computation. Subcell procedures are utilized to improve computational efficiency and reduce sensitivity to DSMC cell size in the hybrid scheme. This makes it possible to perform the LD-DSMC simulation on a much coarser mesh that leads to a significant reduction in computation time.

  12. Use of theory in computer-based interventions to reduce alcohol use among adolescents and young adults: a systematic review.

    Science.gov (United States)

    Tebb, Kathleen P; Erenrich, Rebecca K; Jasik, Carolyn Bradner; Berna, Mark S; Lester, James C; Ozer, Elizabeth M

    2016-06-17

    Alcohol use and binge drinking among adolescents and young adults remain frequent causes of preventable injuries, disease, and death, and there has been growing attention to computer-based modes of intervention delivery to prevent/reduce alcohol use. Research suggests that health interventions grounded in established theory are more effective than those with no theoretical basis. The goal of this study was to conduct a literature review of computer-based interventions (CBIs) designed to address alcohol use among adolescents and young adults (aged 12-21 years) and examine the extent to which CBIs use theories of behavior change in their development and evaluations. This study also provides an update on extant CBIs addressing alcohol use among youth and their effectiveness. Between November and December of 2014, a literature review of CBIs aimed at preventing or reducing alcohol in PsychINFO, PubMed, and Google Scholar was conducted. The use of theory in each CBI was examined using a modified version of the classification system developed by Painter et al. (Ann Behav Med 35:358-362, 2008). The search yielded 600 unique articles, 500 were excluded because they did not meet the inclusion criteria. The 100 remaining articles were retained for analyses. Many articles were written about a single intervention; thus, the search revealed a total of 42 unique CBIs. In examining the use of theory, 22 CBIs (52 %) explicitly named one or more theoretical frameworks. Primary theories mentioned were social cognitive theory, transtheoretical model, theory of planned behavior and reasoned action, and health belief model. Less than half (48 %), did not use theory, but mentioned either use of a theoretical construct (such as self-efficacy) or an intervention technique (e.g., manipulating social norms). Only a few articles provided detailed information about how the theory was applied to the CBI; the vast majority included little to no information. Given the importance of theory in

  13. Use of theory in computer-based interventions to reduce alcohol use among adolescents and young adults: a systematic review

    Directory of Open Access Journals (Sweden)

    Kathleen P. Tebb

    2016-06-01

    Full Text Available Abstract Background Alcohol use and binge drinking among adolescents and young adults remain frequent causes of preventable injuries, disease, and death, and there has been growing attention to computer-based modes of intervention delivery to prevent/reduce alcohol use. Research suggests that health interventions grounded in established theory are more effective than those with no theoretical basis. The goal of this study was to conduct a literature review of computer-based interventions (CBIs) designed to address alcohol use among adolescents and young adults (aged 12–21 years) and examine the extent to which CBIs use theories of behavior change in their development and evaluations. This study also provides an update on extant CBIs addressing alcohol use among youth and their effectiveness. Methods Between November and December of 2014, a literature review of CBIs aimed at preventing or reducing alcohol in PsychINFO, PubMed, and Google Scholar was conducted. The use of theory in each CBI was examined using a modified version of the classification system developed by Painter et al. (Ann Behav Med 35:358–362, 2008). Results The search yielded 600 unique articles, 500 were excluded because they did not meet the inclusion criteria. The 100 remaining articles were retained for analyses. Many articles were written about a single intervention; thus, the search revealed a total of 42 unique CBIs. In examining the use of theory, 22 CBIs (52 %) explicitly named one or more theoretical frameworks. Primary theories mentioned were social cognitive theory, transtheoretical model, theory of planned behavior and reasoned action, and health belief model. Less than half (48 %) did not use theory, but mentioned either use of a theoretical construct (such as self-efficacy) or an intervention technique (e.g., manipulating social norms). Only a few articles provided detailed information about how the theory was applied to the CBI; the vast majority included little

  14. Occupational stress in human computer interaction.

    Science.gov (United States)

    Smith, M J; Conway, F T; Karsh, B T

    1999-04-01

    There have been a variety of research approaches that have examined the stress issues related to human computer interaction including laboratory studies, cross-sectional surveys, longitudinal case studies and intervention studies. A critical review of these studies indicates that there are important physiological, biochemical, somatic and psychological indicators of stress that are related to work activities where human computer interaction occurs. Many of the stressors of human computer interaction at work are similar to those stressors that have historically been observed in other automated jobs. These include high workload, high work pressure, diminished job control, inadequate employee training to use new technology, monotonous tasks, poor supervisory relations, and fear for job security. New stressors have emerged that can be tied primarily to human computer interaction. These include technology breakdowns, technology slowdowns, and electronic performance monitoring. The effects of the stress of human computer interaction in the workplace are increased physiological arousal; somatic complaints, especially of the musculoskeletal system; mood disturbances, particularly anxiety, fear and anger; and diminished quality of working life, such as reduced job satisfaction. Interventions to reduce the stress of computer technology have included improved technology implementation approaches and increased employee participation in implementation. Recommendations for ways to reduce the stress of human computer interaction at work are presented. These include proper ergonomic conditions, increased organizational support, improved job content, proper workload to decrease work pressure, and enhanced opportunities for social support. A model approach to the design of human computer interaction at work that focuses on the system "balance" is proposed.

  15. Reducing dose calculation time for accurate iterative IMRT planning

    International Nuclear Information System (INIS)

    Siebers, Jeffrey V.; Lauterbach, Marc; Tong, Shidong; Wu Qiuwen; Mohan, Radhe

    2002-01-01

    A time-consuming component of IMRT optimization is the dose computation required in each iteration for the evaluation of the objective function. Accurate superposition/convolution (SC) and Monte Carlo (MC) dose calculations are currently considered too time-consuming for iterative IMRT dose calculation. Thus, fast, but less accurate algorithms such as pencil beam (PB) algorithms are typically used in most current IMRT systems. This paper describes two hybrid methods that utilize the speed of fast PB algorithms yet achieve the accuracy of optimizing based upon SC algorithms via the application of dose correction matrices. In one method, the ratio method, an infrequently computed voxel-by-voxel dose ratio matrix (R = D_SC/D_PB) is applied for each beam to the dose distributions calculated with the PB method during the optimization. That is, D_PB × R is used for the dose calculation during the optimization. The optimization proceeds until both the IMRT beam intensities and the dose correction ratio matrix converge. In the second method, the correction method, a periodically computed voxel-by-voxel correction matrix for each beam, defined to be the difference between the SC and PB dose computations, is used to correct PB dose distributions. To validate the methods, IMRT treatment plans developed with the hybrid methods are compared with those obtained when the SC algorithm is used for all optimization iterations and with those obtained when PB-based optimization is followed by SC-based optimization. In the 12 patient cases studied, no clinically significant differences exist in the final treatment plans developed with each of the dose computation methodologies. However, the number of time-consuming SC iterations is reduced from 6-32 for pure SC optimization to four or less for the ratio matrix method and five or less for the correction method. Because the PB algorithm is faster at computing dose, this reduces the inverse planning optimization time for our implementation
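    The voxel-by-voxel corrections described above amount to simple element-wise array operations once the two dose grids are available; the sketch below shows the idea on toy NumPy arrays. The array names, the low-dose guard, and the update frequency are illustrative assumptions, not the authors' code.

    ```python
    import numpy as np

    def ratio_matrix(d_sc, d_pb, eps=1e-6):
        """Ratio method: R = D_SC / D_PB per voxel, guarding near-zero PB dose."""
        return np.where(d_pb > eps, d_sc / np.maximum(d_pb, eps), 1.0)

    def correction_matrix(d_sc, d_pb):
        """Correction method: additive per-voxel difference D_SC - D_PB."""
        return d_sc - d_pb

    # Inside the (illustrative) optimization loop, only the fast PB dose is
    # recomputed; the stored R (or C) rescales it toward SC accuracy:
    #   dose_ratio_method      = d_pb_new * R
    #   dose_correction_method = d_pb_new + C
    ```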

  16. Accelerating Climate and Weather Simulations through Hybrid Computing

    Science.gov (United States)

    Zhou, Shujia; Cruz, Carlos; Duffy, Daniel; Tucker, Robert; Purcell, Mark

    2011-01-01

    Unconventional multi- and many-core processors (e.g. IBM (R) Cell B.E.(TM) and NVIDIA (R) GPU) have emerged as effective accelerators in trial climate and weather simulations. Yet these climate and weather models typically run on parallel computers with conventional processors (e.g. Intel, AMD, and IBM) using Message Passing Interface. To address challenges involved in efficiently and easily connecting accelerators to parallel computers, we investigated using IBM's Dynamic Application Virtualization (TM) (IBM DAV) software in a prototype hybrid computing system with representative climate and weather model components. The hybrid system comprises two Intel blades and two IBM QS22 Cell B.E. blades, connected with both InfiniBand(R) (IB) and 1-Gigabit Ethernet. The system significantly accelerates a solar radiation model component by offloading compute-intensive calculations to the Cell blades. Systematic tests show that IBM DAV can seamlessly offload compute-intensive calculations from Intel blades to Cell B.E. blades in a scalable, load-balanced manner. However, noticeable communication overhead was observed, mainly due to IP over the IB protocol. Full utilization of IB Sockets Direct Protocol and the lower latency production version of IBM DAV will reduce this overhead.

  17. Aeroelastic simulation using CFD based reduced order models

    International Nuclear Information System (INIS)

    Zhang, W.; Ye, Z.; Li, H.; Yang, Q.

    2005-01-01

    This paper aims at providing an accurate and efficient method for aeroelastic simulation. System identification is used to obtain reduced order models of the unsteady aerodynamics. Unsteady Euler codes are used to compute the output signals, while 3211 multistep input signals are utilized. The LS (least squares) method is used to estimate the coefficients of the input-output difference model. The reduced order models are then used in place of the unsteady CFD code for aeroelastic simulation. The aeroelastic equations are marched by an improved 4th-order Runge-Kutta method that needs to compute the aerodynamic loads only once per time step. The computed results agree well with those of the directly coupled CFD/CSD method. The computational efficiency is improved by 1-2 orders of magnitude while still retaining high accuracy. A standard aeroelastic test case (the Isogai wing), which exhibits an S-type flutter boundary, is computed and analyzed. This behavior is due to the system having more than one neutral point in the Mach range 0.875-0.9. (author)
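
    The identification step described above amounts to a least-squares fit of an input-output difference (ARX-type) model. The sketch below illustrates that idea on a toy second-order system standing in for the unsteady CFD response; the model orders, signals and coefficients are assumptions for illustration, not the paper's aerodynamic data.

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j]."""
    n0 = max(na, nb)
    rows, targets = [], []
    for k in range(n0, len(y)):
        rows.append(np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]]))
        targets.append(y[k])
    coeffs, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return coeffs[:na], coeffs[na:]

def simulate_arx(a, b, u, y0):
    """March the identified reduced-order model in place of the CFD code."""
    na, nb = len(a), len(b)
    y = list(y0)
    for k in range(len(y0), len(u)):
        y.append(a @ np.array(y[k - na:k][::-1]) + b @ np.array(u[k - nb:k][::-1]))
    return np.array(y)

# Toy "truth" system standing in for the unsteady aerodynamic response (assumption).
rng = np.random.default_rng(1)
u = rng.standard_normal(500)     # a 3211-type multistep signal could be used here instead
y = np.zeros_like(u)
for k in range(2, len(u)):
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 0.2 * u[k - 1] + 0.1 * u[k - 2]

a, b = fit_arx(u, y)
print(np.round(a, 3), np.round(b, 3))          # recovers ~[1.5, -0.7] and ~[0.2, 0.1]
y_rom = simulate_arx(a, b, u, y[:2])
print(float(np.max(np.abs(y_rom - y))))        # ROM closely reproduces the response
```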

  18. Numerical Feynman integrals with physically inspired interpolation: Faster convergence and significant reduction of computational cost

    Directory of Open Access Journals (Sweden)

    Nikesh S. Dattani

    2012-03-01

    Full Text Available One of the most successful methods for calculating reduced density operator dynamics in open quantum systems, one that can give numerically exact results, uses Feynman integrals. However, when simulating the dynamics for a given amount of time, the number of time steps that can realistically be used with this method is always limited; therefore one often obtains an approximation of the reduced density operator at a sparse grid of points in time. Instead of relying only on ad hoc interpolation methods (such as splines) to estimate the system density operator in between these points, I propose a method that uses physical information to assist with this interpolation. This method is tested on a physically significant system, on which its use allows important qualitative features of the density operator dynamics to be captured with as little as two time steps in the Feynman integral. This method allows for an enormous reduction in the amount of memory and CPU time required for approximating density operator dynamics within a desired accuracy. Since this method does not change the way the Feynman integral itself is calculated, the value of the density operator approximation at the points in time used to discretize the Feynman integral will be the same whether or not this method is used, but its approximation in between these points in time is considerably improved by this method. A list of ways in which this proposed method can be further improved is presented in the last section of the article.

  19. Definition of bulky disease in early stage Hodgkin lymphoma in computed tomography era: prognostic significance of measurements in the coronal and transverse planes.

    Science.gov (United States)

    Kumar, Anita; Burger, Irene A; Zhang, Zhigang; Drill, Esther N; Migliacci, Jocelyn C; Ng, Andrea; LaCasce, Ann; Wall, Darci; Witzig, Thomas E; Ristow, Kay; Yahalom, Joachim; Moskowitz, Craig H; Zelenetz, Andrew D

    2016-10-01

    Disease bulk is an important prognostic factor in early stage Hodgkin lymphoma, but its definition is unclear in the computed tomography era. This retrospective analysis investigated the prognostic significance of bulky disease measured in the transverse and coronal planes on computed tomography imaging. Early stage Hodgkin lymphoma patients (n=185) treated with chemotherapy with or without radiotherapy from 2000-2010 were included. The longest diameter of the largest lymph node mass was measured in the transverse and coronal axes on pre-treatment imaging. The optimal cut-off for disease bulk was a maximal diameter greater than 7 cm measured in either the transverse or coronal plane. Thirty patients with a maximal transverse diameter of 7 cm or under were found to have bulk in the coronal axis. The 4-year overall survival was 96.5% (CI: 93.3%, 100%) and 4-year relapse-free survival was 86.8% (CI: 81.9%, 92.1%) for all patients. Relapse-free survival at four years for bulky patients was 80.5% (CI: 73%, 88.9%) compared to 94.4% (CI: 89.1%, 100%) for non-bulky; Cox HR 4.21 (CI: 1.43, 12.38) (P=0.004). Among bulky patients, relapse-free survival was not impacted for those treated with chemoradiotherapy; however, it was significantly lower for those treated with chemotherapy alone. In an independent validation cohort of 38 patients treated with chemotherapy alone, patients with bulky disease had an inferior relapse-free survival [at 4 years, 71.1% (CI: 52.1%, 97%) vs 94.1% (CI: 83.6%, 100%), Cox HR 5.27 (CI: 0.62, 45.16); P=0.09]. The presence of bulky disease on multidimensional computed tomography imaging is a significant prognostic factor in early stage Hodgkin lymphoma. Coronal reformations may be included for routine Hodgkin lymphoma staging evaluation. In future, our definition of disease bulk may be useful in identifying patients who are most appropriate for chemotherapy alone. Copyright © Ferrata Storti Foundation.
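
    The bulk criterion used above (largest nodal mass exceeding 7 cm in either the transverse or coronal plane) can be written as a one-line rule; the sketch below is a minimal illustration with made-up measurements, not part of the study's analysis.

```python
BULK_THRESHOLD_CM = 7.0  # cut-off reported in the abstract

def is_bulky(transverse_cm: float, coronal_cm: float,
             threshold_cm: float = BULK_THRESHOLD_CM) -> bool:
    """Bulky disease if the largest mass exceeds the threshold in either plane."""
    return max(transverse_cm, coronal_cm) > threshold_cm

# A mass measuring 6.5 cm transversely but 8.2 cm cranio-caudally counts as bulky,
# and would be missed by a transverse-only definition (illustrative numbers).
print(is_bulky(6.5, 8.2))   # True
print(is_bulky(6.5, 6.0))   # False
```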

  20. Radiological protection procedures for industrial applications of computed radiography

    International Nuclear Information System (INIS)

    Aquino, Josilto Oliveira de

    2009-03-01

    Due to its very particular characteristics, industrial radiography is responsible for roughly half of the relevant accidents in the nuclear industry, in developed as well as in developing countries, according to the International Atomic Energy Agency (IAEA). Thus, safety and radiological protection in industrial gamma radiography have been receiving special treatment by the regulatory authorities of most Member States. The main objective of the present work was to evaluate, from the radioprotection point of view, the main advantages of computed radiography (CR) for filmless industrial radiography. To accomplish this, both techniques, i.e., conventional (film) radiography and filmless computed radiography, were evaluated and compared through practical studies. From the studies performed in the present work it was concluded that computed radiography significantly reduces the doses involved, resulting in smaller restricted areas and lower costs, with a consequent improvement in radiological protection and safety. (author)

  1. Respiratory-Gated Positron Emission Tomography and Breath-Hold Computed Tomography Coupling to Reduce the Influence of Respiratory Motion: Methodology and Feasibility

    International Nuclear Information System (INIS)

    Daouk, J.; Fin, L.; Bailly, P.; Meyer, M.E.

    2009-01-01

    Background: Respiratory motion causes uptake in positron emission tomography (PET) images of chest and abdominal structures to be blurred and reduced in intensity. Purpose: To compare two respiratory-gated PET binning methods (based on frequency and amplitude analyses of the respiratory signal) and to propose a 'BH-based' method based on an additional breath-hold computed tomography (CT) acquisition. Material and Methods: Respiratory-gated PET consists of list-mode (LM) acquisition with simultaneous respiratory signal recording. A phantom study featured rectilinear movement of a 0.5-ml sphere filled with (18)F-fluorodeoxyglucose ((18)F-FDG) solution, placed in a radioactive background (sphere-to-background contrast 6:1). Two patients were also examined. Three figures of merit were calculated: the target-to-background ratio profile (TBRP) in the axial direction through the uptake (i.e., the sphere or lesion), full-width-at-half-maximum (FWHM) values, and maximized standard uptake values (SUVmax). Results: In the phantom study, the peak TBRP was 0.9 for the non-gated volume, 1.83 for the BH-based volume, and varied between 1.13 and 1.73 for Freq-based volumes and between 1.34 and 1.66 for Amp-based volumes. A reference volume (REF-static) was also acquired for the phantom (in a static, 'expiratory' state), with a peak TBRP of 1.88. TBRPs were computed for the patient data, with higher peak values for all gated volumes than for non-gated volumes. Conclusion: Respiratory-gated PET acquisition reduces the blurring effect and increases image contrast. However, Freq-based and Amp-based volumes are still influenced by inappropriate attenuation correction and misregistration of mobile lesions on CT images. The proposed BH-based method both reduces motion artifacts and improves PET-CT registration.

  2. Computing Bounds on Resource Levels for Flexible Plans

    Science.gov (United States)

    Muscettola, Nicola; Rijsman, David

    2009-01-01

    algorithm applied to an auxiliary flow network of 2N nodes. The algorithm is believed to be efficient in practice; experimental analysis shows the practical cost of maxflow to be as low as O(N^1.5). The algorithm could be enhanced following at least two approaches. In the first approach, incremental subalgorithms for the computation of the envelope could be developed. By use of temporal scanning of the events in the temporal network, it may be possible to significantly reduce the size of the networks on which it is necessary to run the maximum-flow subalgorithm, thereby significantly reducing the time required for envelope calculation. In the second approach, the practical effectiveness of resource envelopes in the inner loops of search algorithms could be tested for multi-capacity resource scheduling. This testing would include inner-loop backtracking and termination tests and variable- and value-ordering heuristics that exploit the properties of resource envelopes more directly.
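
    The envelope computation discussed above reduces to repeated maximum-flow calls on an auxiliary network of about 2N nodes. The sketch below shows only that building block with networkx on a toy capacitated graph; the graph itself is an assumption and does not reproduce the paper's auxiliary-network construction.

```python
import networkx as nx

# Toy capacitated network standing in for the auxiliary flow network derived
# from a flexible plan (the actual construction is not reproduced here).
G = nx.DiGraph()
G.add_edge("s", "e1", capacity=3)    # e.g. resource-producing events
G.add_edge("s", "e2", capacity=2)
G.add_edge("e1", "e3", capacity=2)   # temporal/precedence links
G.add_edge("e2", "e3", capacity=3)
G.add_edge("e3", "t", capacity=4)    # resource-consuming events

flow_value, flow_dict = nx.maximum_flow(G, "s", "t")
print(flow_value)        # 4 for this toy graph
print(flow_dict["e1"])   # per-edge flows out of node e1
```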

  3. Improvements in fast-response flood modeling: desktop parallel computing and domain tracking

    Energy Technology Data Exchange (ETDEWEB)

    Judi, David R. [Los Alamos National Laboratory]; McPherson, Timothy N. [Los Alamos National Laboratory]; Burian, Steven J. [Univ. of Utah]

    2009-01-01

    It is becoming increasingly important to have the ability to accurately forecast flooding, as flooding accounts for the largest losses due to natural disasters in the world and in the United States. Flood inundation modeling has been dominated by one-dimensional approaches. These models are computationally efficient and are considered by many engineers to produce reasonably accurate water surface profiles. However, because the profiles estimated in these models must be superimposed on digital elevation data to create a two-dimensional map, the result may be sensitive to the ability of the elevation data to capture relevant features (e.g., dikes/levees, roads, walls, etc.). Moreover, one-dimensional models do not explicitly represent the complex flow processes present in floodplains and urban environments. Because two-dimensional models based on the shallow water equations have a significantly greater ability to determine flow velocity and direction, the National Research Council (NRC) has recommended that two-dimensional models be used over one-dimensional models for flood inundation studies. This paper has shown that two-dimensional flood modeling computational time can be greatly reduced through the use of Java multithreading on multi-core computers, which effectively provides a means for parallel computing on a desktop computer. In addition, this paper has shown that when desktop parallel computing is coupled with a domain tracking algorithm, significant computation time can be eliminated when computations are completed only on inundated cells. The drastic reduction in computational time shown here enhances the ability of two-dimensional flood inundation models to be used as a near-real-time flood forecasting tool, engineering design tool, or planning tool. Perhaps of even greater significance, the reduction in computation time makes the incorporation of risk and uncertainty/ensemble forecasting more feasible for flood inundation modeling (NRC 2000; Sayers et al
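
    A minimal Python sketch of the two ideas combined above, desktop parallelism plus domain tracking so that work is done only on inundated cells, is given below. The grid, the per-cell update rule and the wetting threshold are toy assumptions (the paper itself used Java multithreading and a shallow-water solver).

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def update_cell(depth, idx):
    """Toy per-cell update standing in for a 2-D shallow-water step (not mass-conserving)."""
    i, j = idx
    window = depth[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
    return idx, window.mean()

def neighbors(idx, shape):
    i, j = idx
    return {(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if 0 <= i + di < shape[0] and 0 <= j + dj < shape[1]}

def step(depth, active, threshold=1e-3, workers=4):
    """Advance one step, computing only on the tracked (inundated) sub-domain."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda idx: update_cell(depth, idx), sorted(active)))
    new_depth = depth.copy()
    for idx, value in results:
        new_depth[idx] = value
    # Domain tracking: the next step only visits wet cells and their neighbours.
    wet = {(int(i), int(j)) for i, j in np.argwhere(new_depth > threshold)}
    new_active = set().union(*(neighbors(c, depth.shape) for c in wet)) if wet else set()
    return new_depth, new_active

depth = np.zeros((50, 50))
depth[25, 25] = 1.0                         # point source of flooding
active = neighbors((25, 25), depth.shape)   # initial tracked domain
for _ in range(5):
    depth, active = step(depth, active)
print(len(active), "cells tracked instead of", depth.size)
```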

  4. Computer calculations of compressibility of natural gas

    Energy Technology Data Exchange (ETDEWEB)

    Abou-Kassem, J.H.; Mattar, L.; Dranchuk, P.M.

    An alternative method for the calculation of pseudo reduced compressibility of natural gas is presented. The method is incorporated into the routines by adding a single FORTRAN statement before the RETURN statement. The method is suitable for computer and hand-held calculator applications. It produces the same reduced compressibility as other available methods but is computationally superior. Tabular definitions of coefficients and comparisons of predicted pseudo reduced compressibility using different methods are presented, along with appended FORTRAN subroutines. 7 refs., 2 tabs.
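
    A small numerical sketch of the quantity being computed is given below, using the standard relation c_pr = 1/p_pr - (1/z)(dz/dp_pr) at constant pseudo-reduced temperature. The z-factor correlation shown is a crude placeholder for illustration only; the paper instead adds its statement to previously published z-factor routines.

```python
def z_factor(p_pr, t_pr):
    """Crude placeholder z-factor correlation (assumption, for illustration only)."""
    return 1.0 - 0.06 * p_pr / t_pr + 0.004 * (p_pr / t_pr) ** 2

def pseudo_reduced_compressibility(p_pr, t_pr, dp=1e-4):
    """c_pr = 1/p_pr - (1/z) * dz/dp_pr, with dz/dp_pr by central difference."""
    z = z_factor(p_pr, t_pr)
    dzdp = (z_factor(p_pr + dp, t_pr) - z_factor(p_pr - dp, t_pr)) / (2 * dp)
    return 1.0 / p_pr - dzdp / z

p_pr, t_pr = 2.5, 1.6
c_pr = pseudo_reduced_compressibility(p_pr, t_pr)
print(round(c_pr, 4))   # dimensionless pseudo-reduced compressibility
# The gas compressibility itself follows as c = c_pr / p_pc for a given
# pseudo-critical pressure p_pc.
```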

  5. Motion estimation by data assimilation in reduced dynamic models

    International Nuclear Information System (INIS)

    Drifi, Karim

    2013-01-01

    Motion estimation is a major challenge in the field of image sequence analysis. This thesis is a study of the dynamics of geophysical flows visualized by satellite imagery. Satellite image sequences are currently underused for the task of motion estimation. A good understanding of geophysical flows allows a better analysis and forecast of phenomena in domains such as oceanography and meteorology. Data assimilation provides an excellent framework for achieving a compromise between heterogeneous data, especially numerical models and observations. Hence, in this thesis we set out to apply variational data assimilation methods to estimate motion on image sequences. Because one of the major drawbacks of applying these assimilation techniques is the considerable computation time and memory required, we define and use a model reduction method in order to significantly decrease both. We then explore the possibilities that reduced models provide for motion estimation, particularly the possibility of strictly imposing some known constraints on the computed solutions. In particular, we show how to estimate a divergence-free motion with boundary conditions on a complex spatial domain.
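
    One common route to the kind of reduced dynamic model mentioned above is projection onto a basis obtained from snapshots (proper orthogonal decomposition via the SVD). The sketch below illustrates that generic step on synthetic snapshots; it is an assumption-laden stand-in, not the reduction method developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy snapshot matrix: each column is a state (e.g. a velocity field) at one time.
n_state, n_snapshots = 400, 60
modes = rng.standard_normal((n_state, 3))
amplitudes = rng.standard_normal((3, n_snapshots))
snapshots = modes @ amplitudes + 0.01 * rng.standard_normal((n_state, n_snapshots))

# POD basis from the SVD; keep the r leading modes.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 3
basis = U[:, :r]                      # (n_state, r)

# Reduction and reconstruction: work with r coefficients instead of n_state values.
x_full = snapshots[:, 0]
x_reduced = basis.T @ x_full          # r-dimensional representation
x_approx = basis @ x_reduced

rel_err = np.linalg.norm(x_full - x_approx) / np.linalg.norm(x_full)
print(r, "modes, relative reconstruction error:", round(rel_err, 4))
```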

  6. Lipid Replacement Therapy Drink Containing a Glycophospholipid Formulation Rapidly and Significantly Reduces Fatigue While Improving Energy and Mental Clarity

    Directory of Open Access Journals (Sweden)

    Robert Settineri

    2011-08-01

    Full Text Available Background: Fatigue is the most common complaint of patients seeking general medical care and is often treated with stimulants. It is also important in various physical activities of relatively healthy men and women, such as sports performance. Recent clinical trials using patients with chronic fatigue have shown the benefit of Lipid Replacement Therapy in restoring mitochondrial electron transport function and reducing moderate to severe chronic fatigue. Methods: Lipid Replacement Therapy was administered for the first time as an all-natural functional food drink (60 ml) containing polyunsaturated glycophospholipids but devoid of stimulants or herbs to reduce fatigue. This preliminary study used the Piper Fatigue Survey instrument as well as a supplemental questionnaire to assess the effects of the glycophospholipid drink on fatigue and the acceptability of the test drink in adult men and women. A volunteer group of 29 subjects of mean age 56.2±4.5 years with various fatigue levels was randomly recruited in a clinical health fair setting to participate in an afternoon open-label trial on the effects of the test drink. Results: Using the Piper Fatigue instrument, overall fatigue among participants was reduced within the 3-hour seminar by a mean of 39.6% (p<0.0001). All of the subcategories of fatigue showed significant reductions. Some subjects responded within 15 minutes, and the majority responded within one hour with increased energy and activity and perceived improvements in cognitive function, mental clarity and focus. The test drink was determined to be quite acceptable in terms of taste and appearance. There were no adverse events from the energy drink during the study. Functional Foods in Health and Disease 2011; 8:245-254. Conclusions: The Lipid Replacement Therapy functional food drink appeared to be a safe, acceptable and potentially useful new method to reduce fatigue, sustain energy and improve perceptions of mental function.

  7. Survey of Storage and Fault Tolerance Strategies Used in Cloud Computing

    Science.gov (United States)

    Ericson, Kathleen; Pallickara, Shrideep

    Cloud computing has gained significant traction in recent years. Companies such as Google, Amazon and Microsoft have been building massive data centers over the past few years. Spanning geographic and administrative domains, these data centers tend to be built out of commodity desktops, with the total number of computers managed by these companies being on the order of millions. Additionally, the use of virtualization allows a physical node to be presented as a set of virtual nodes, resulting in a seemingly inexhaustible set of computational resources. By leveraging economies of scale, these data centers can provision CPU, networking, and storage at substantially reduced prices, which in turn underpins the move by many institutions to host their services in the cloud.

  8. Development of a Computational Steering Framework for High Performance Computing Environments on Blue Gene/P Systems

    KAUST Repository

    Danani, Bob K.

    2012-07-01

    Computational steering has revolutionized the traditional workflow in high performance computing (HPC) applications. The standard workflow that consists of preparation of an application’s input, running of a simulation, and visualization of simulation results in a post-processing step is now transformed into a real-time interactive workflow that significantly reduces development and testing time. Computational steering provides the capability to direct or re-direct the progress of a simulation application at run-time. It allows modification of application-defined control parameters at run-time using various user-steering applications. In this project, we propose a computational steering framework for HPC environments that provides an innovative solution and easy-to-use platform, which allows users to connect and interact with running application(s) in real-time. This framework uses RealityGrid as the underlying steering library and adds several enhancements to the library to enable steering support for Blue Gene systems. Included in the scope of this project is the development of a scalable and efficient steering relay server that supports many-to-many connectivity between multiple steered applications and multiple steering clients. Steered applications can range from intermediate simulation and physical modeling applications to complex computational fluid dynamics (CFD) applications or advanced visualization applications. The Blue Gene supercomputer presents special challenges for remote access because the compute nodes reside on private networks. This thesis presents an implemented solution and demonstrates it on representative applications. Thorough implementation details and application enablement steps are also presented in this thesis to encourage direct usage of this framework.

  9. Projected role of advanced computational aerodynamic methods at the Lockheed-Georgia company

    Science.gov (United States)

    Lores, M. E.

    1978-01-01

    Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large and specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should be used to provide a tool for designing better aerospace vehicles while at the same time reducing development costs, by performing computations using Navier-Stokes solution algorithms and by permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry; therefore, the computer should handle many remote terminals efficiently. The capability of transferring data to and from other computers needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations, and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.

  10. The effectiveness of a training method using self-modeling webcam photos for reducing musculoskeletal risk among office workers using computers.

    Science.gov (United States)

    Taieb-Maimon, Meirav; Cwikel, Julie; Shapira, Bracha; Orenstein, Ido

    2012-03-01

    An intervention study was conducted to examine the effectiveness of an innovative self-modeling photo-training method for reducing musculoskeletal risk among office workers using computers. Sixty workers were randomly assigned to one of three groups: 1) a control group; 2) an office-training group that received personal ergonomic training and workstation adjustments; or 3) a photo-training group that received both office training and an automatic frequent-feedback system that displayed on the computer screen a photo of the worker's current sitting posture together with the correct posture photo taken earlier during office training. Musculoskeletal risk was evaluated using the Rapid Upper Limb Assessment (RULA) method before, during and after the six-week intervention. Both training methods provided effective short-term posture improvement; however, sustained improvement was only attained with the photo-training method. Both interventions had a greater effect on older workers and on workers suffering more musculoskeletal pain. The photo-training method had a greater positive effect on women than on men. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  11. Augmented Quadruple-Phase Contrast Media Administration and Triphasic Scan Protocol Increases Image Quality at Reduced Radiation Dose During Computed Tomography Urography.

    Science.gov (United States)

    Saade, Charbel; Mohamad, May; Kerek, Racha; Hamieh, Nadine; Alsheikh Deeb, Ibrahim; El-Achkar, Bassam; Tamim, Hani; Abdul Razzak, Farah; Haddad, Maurice; Abi-Ghanem, Alain S; El-Merhi, Fadi

    The aim of this article was to investigate the opacification of the renal vasculature and the urogenital system during computed tomography urography by using a quadruple-phase contrast media in a triphasic scan protocol. A total of 200 patients with possible urinary tract abnormalities were equally divided between 2 protocols. Protocol A used the conventional single bolus and quadruple-phase scan protocol (pre, arterial, venous, and delayed), retrospectively. Protocol B included a quadruple-phase contrast media injection with a triphasic scan protocol (pre, arterial and combined venous, and delayed), prospectively. Each protocol used 100 mL contrast and saline at a flow rate of 4.5 mL. Attenuation profiles and contrast-to-noise ratio of the renal arteries, veins, and urogenital tract were measured. Effective radiation dose calculation, data analysis by independent sample t test, receiver operating characteristic, and visual grading characteristic analyses were performed. In arterial circulation, only the inferior interlobular arteries in both protocols showed a statistical significance (P contrast-to-noise ratio than protocol A (protocol B: 22.68 ± 13.72; protocol A: 14.75 ± 5.76; P contrast media and triphasic scan protocol usage increases the image quality at a reduced radiation dose.
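
    The study compares contrast-to-noise ratios between protocols; as a reminder of the usual ROI-based definition, the sketch below computes CNR as the difference of mean attenuations divided by the background noise. The Hounsfield-unit samples are invented for illustration and are not the study's measurements.

```python
import numpy as np

def contrast_to_noise_ratio(roi_hu, background_hu):
    """CNR = (mean ROI attenuation - mean background) / background standard deviation."""
    roi_hu = np.asarray(roi_hu, dtype=float)
    background_hu = np.asarray(background_hu, dtype=float)
    return (roi_hu.mean() - background_hu.mean()) / background_hu.std(ddof=1)

# Illustrative Hounsfield-unit samples for a renal-artery ROI and adjacent tissue.
rng = np.random.default_rng(3)
artery = rng.normal(320, 15, size=200)       # opacified vessel
background = rng.normal(45, 12, size=200)    # soft-tissue background
print(round(contrast_to_noise_ratio(artery, background), 1))
```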

  12. Effectiveness of a Web-Based Computer-Tailored Multiple-Lifestyle Intervention for People Interested in Reducing their Cardiovascular Risk: A Randomized Controlled Trial.

    Science.gov (United States)

    Storm, Vera; Dörenkämper, Julia; Reinwand, Dominique Alexandra; Wienert, Julian; De Vries, Hein; Lippke, Sonia

    2016-04-11

    Web-based computer-tailored interventions for multiple health behaviors can improve the strength of behavior habits in people who want to reduce their cardiovascular risk. Nonetheless, few randomized controlled trials have tested this assumption to date. The study aim was to test an 8-week Web-based computer-tailored intervention designed to improve habit strength for physical activity and fruit and vegetable consumption among people who want to reduce their cardiovascular risk. In a randomized controlled design, self-reported changes in perceived habit strength, self-efficacy, and planning across different domains of physical activity as well as fruit and vegetable consumption were evaluated. This study was a randomized controlled trial involving an intervention group (n=403) and a waiting control group (n=387). Web-based data collection was performed in Germany and the Netherlands during 2013-2015. The intervention content was based on the Health Action Process Approach and involved personalized feedback on lifestyle behaviors, which indicated whether participants complied with behavioral guidelines for physical activity and fruit and vegetable consumption. There were three Web-based assessments: baseline (T0, N=790), a posttest 8 weeks after the baseline (T1, n=206), and a follow-up 3 months after the baseline (T2, n=121). Data analysis was conducted by analyzing variances and structural equation analysis. Significant group by time interactions revealed superior treatment effects for the intervention group, with substantially higher increases in self-reported habit strength for physical activity (F(1,199)=7.71, P=.006, Cohen's d=0.37) and fruit and vegetable consumption (F(1,199)=7.71, P=.006, Cohen's d=0.30) at posttest T1 for the intervention group. Mediation analyses yielded behavior-specific sequential mediator effects for T1 planning and T1 self-efficacy between the intervention and habit strength at follow-up T2 (fruit and vegetable consumption: beta=0.12, 95

  13. Security and Privacy in Fog Computing: Challenges

    OpenAIRE

    Mukherjee, Mithun; Matam, Rakesh; Shu, Lei; Maglaras, Leandros; Ferrag, Mohamed Amine; Choudhry, Nikumani; Kumar, Vikas

    2017-01-01

    The fog computing paradigm extends the storage, networking, and computing facilities of cloud computing toward the edge of the network while offloading the cloud data centers and reducing service latency to end users. However, the characteristics of fog computing give rise to new security and privacy challenges. The existing security and privacy measures for cloud computing cannot be directly applied to fog computing due to its features, such as mobility, heteroge...

  14. Reducing overlay sampling for APC-based correction per exposure by replacing measured data with computational prediction

    Science.gov (United States)

    Noyes, Ben F.; Mokaberi, Babak; Oh, Jong Hun; Kim, Hyun Sik; Sung, Jun Ha; Kea, Marc

    2016-03-01

    One of the keys to successful mass production of sub-20nm nodes in the semiconductor industry is the development of an overlay correction strategy that can meet specifications, reduce the number of layers that require dedicated chuck overlay, and minimize measurement time. Three important aspects of this strategy are: correction per exposure (CPE), integrated metrology (IM), and the prioritization of automated correction over manual subrecipes. The first and third aspects are accomplished through an APC system that uses measurements from production lots to generate CPE corrections that are dynamically applied to future lots. The drawback of this method is that production overlay sampling must be extremely high in order to provide the system with enough data to generate CPE. That drawback makes IM particularly difficult because of the throughput impact that can be created on expensive bottleneck photolithography process tools. The goal is to realize the cycle time and feedback benefits of IM coupled with the enhanced overlay correction capability of automated CPE without impacting process tool throughput. This paper will discuss the development of a system that sends measured data with reduced sampling via an optimized layout to the exposure tool's computational modelling platform to predict and create "upsampled" overlay data in a customizable output layout that is compatible with the fab user CPE APC system. The result is dynamic CPE without the burden of extensive measurement time, which leads to increased utilization of IM.
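
    The prediction step described above can be pictured as fitting a smooth overlay model to the sparsely measured sites and evaluating it on a dense layout. The sketch below does this with a simple least-squares fit of a low-order field model; the basis, coordinates and coefficients are illustrative assumptions, not the exposure tool's computational modelling platform.

```python
import numpy as np

def design_matrix(x, y):
    """Simple per-wafer overlay model: dx ~ translation plus linear terms in (x, y)."""
    return np.column_stack([np.ones_like(x), x, y])

rng = np.random.default_rng(4)

# Sparse measured sites (normalized wafer coordinates) and their overlay errors (nm).
x_meas = rng.uniform(-1, 1, 25)
y_meas = rng.uniform(-1, 1, 25)
true_coeff = np.array([1.0, 3.0, -2.0])                   # assumed underlying field
dx_meas = design_matrix(x_meas, y_meas) @ true_coeff + rng.normal(0, 0.3, 25)

# Fit the model to the reduced sample, then predict a dense per-exposure layout.
coeff, *_ = np.linalg.lstsq(design_matrix(x_meas, y_meas), dx_meas, rcond=None)
xg, yg = np.meshgrid(np.linspace(-1, 1, 13), np.linspace(-1, 1, 13))
dx_pred = design_matrix(xg.ravel(), yg.ravel()) @ coeff    # "upsampled" overlay data

print(np.round(coeff, 2))   # close to the assumed coefficients
print(dx_pred.shape)        # dense grid that would feed the CPE/APC system
```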

  15. Soil nitrate reducing processes - drivers, mechanisms for spatial variation, and significance for nitrous oxide production.

    Science.gov (United States)

    Giles, Madeline; Morley, Nicholas; Baggs, Elizabeth M; Daniell, Tim J

    2012-01-01

    The microbial processes of denitrification and dissimilatory nitrate reduction to ammonium (DNRA) are two important nitrate reducing mechanisms in soil, which are responsible for the loss of nitrate (NO(3)(-)) and production of the potent greenhouse gas, nitrous oxide (N(2)O). A number of factors are known to control these processes, including O(2) concentrations and moisture content, N, C, pH, and the size and community structure of nitrate reducing organisms responsible for the processes. There is an increasing understanding associated with many of these controls on flux through the nitrogen cycle in soil systems. However, there remains uncertainty about how the nitrate reducing communities are linked to environmental variables and the flux of products from these processes. The high spatial variability of environmental controls and microbial communities across small sub-centimeter areas of soil may prove to be critical in determining why an understanding of the links between biotic and abiotic controls has proved elusive. This spatial effect is often overlooked as a driver of nitrate reducing processes. An increased knowledge of the effects of spatial heterogeneity in soil on nitrate reduction processes will be fundamental in understanding the drivers, location, and potential for N(2)O production from soils.

  16. The Simulation and Analysis of the Closed Die Hot Forging Process by A Computer Simulation Method

    Directory of Open Access Journals (Sweden)

    Dipakkumar Gohil

    2012-06-01

    Full Text Available The objective of this research work is to study the variation of various parameters such as stress, strain, temperature, and force during the closed die hot forging process. A computer simulation modeling approach has been adopted to transform the theoretical aspects into a computer algorithm which would be used to simulate and analyze the closed die hot forging process. For the purpose of process study, the entire deformation process has been divided into a finite number of steps, and the output values have been computed at each deformation step. The results of the simulation have been graphically represented, and suitable corrective measures are also recommended if the simulation results do not agree with the theoretical values. This computer simulation approach would significantly improve productivity and reduce the energy consumption of the overall process for components manufactured by closed die forging, and would contribute towards efforts to reduce global warming.

  17. Significantly reduced hypoxemic events in morbidly obese patients undergoing gastrointestinal endoscopy: Predictors and practice effect

    Directory of Open Access Journals (Sweden)

    Basavana Gouda Goudra

    2014-01-01

    Full Text Available Background: Providing anesthesia for gastrointestinal (GI) endoscopy procedures in morbidly obese patients is a challenge for a variety of reasons. The negative impact of obesity on the respiratory system, combined with the need to share the upper airway and the necessity to preserve spontaneous ventilation, together add to the difficulties. Materials and Methods: This retrospective cohort study included patients with a body mass index (BMI) >40 kg/m(2) who underwent out-patient GI endoscopy between September 2010 and February 2011. Patient data were analyzed for procedure, airway management technique, as well as hypoxemic and cardiovascular events. Results: A total of 119 patients met the inclusion criteria. Our innovative airway management technique resulted in a lower rate of intraoperative hypoxemic events compared with any published data available. The frequency of desaturation episodes showed a statistically significant relation to a previous history of obstructive sleep apnea (OSA). These desaturation episodes were found to be statistically independent of increasing BMI of patients. Conclusion: A pre-operative history of OSA, irrespective of the associated BMI values, can potentially be used as a predictor of intra-procedural desaturation. With suitable modification of the anesthesia technique, it is possible to reduce the incidence of adverse respiratory events in morbidly obese patients undergoing GI endoscopy procedures, thereby avoiding the need for endotracheal intubation.

  18. Significantly reduced hypoxemic events in morbidly obese patients undergoing gastrointestinal endoscopy: Predictors and practice effect.

    Science.gov (United States)

    Goudra, Basavana Gouda; Singh, Preet Mohinder; Penugonda, Lakshmi C; Speck, Rebecca M; Sinha, Ashish C

    2014-01-01

    Providing anesthesia for gastrointestinal (GI) endoscopy procedures in morbidly obese patients is a challenge for a variety of reasons. The negative impact of obesity on the respiratory system combined with a need to share the upper airway and necessity to preserve the spontaneous ventilation, together add to difficulties. This retrospective cohort study included patients with a body mass index (BMI) >40 kg/m(2) that underwent out-patient GI endoscopy between September 2010 and February 2011. Patient data was analyzed for procedure, airway management technique as well as hypoxemic and cardiovascular events. A total of 119 patients met the inclusion criteria. Our innovative airway management technique resulted in a lower rate of intraoperative hypoxemic events compared with any published data available. Frequency of desaturation episodes showed statistically significant relation to previous history of obstructive sleep apnea (OSA). These desaturation episodes were found to be statistically independent of increasing BMI of patients. Pre-operative history of OSA irrespective of associated BMI values can be potentially used as a predictor of intra-procedural desaturation. With suitable modification of anesthesia technique, it is possible to reduce the incidence of adverse respiratory events in morbidly obese patients undergoing GI endoscopy procedures, thereby avoiding the need for endotracheal intubation.

  19. Prognostic Significance of Tumor Size of Small Lung Adenocarcinomas Evaluated with Mediastinal Window Settings on Computed Tomography

    OpenAIRE

    Sakao, Yukinori; Kuroda, Hiroaki; Mun, Mingyon; Uehara, Hirofumi; Motoi, Noriko; Ishikawa, Yuichi; Nakagawa, Ken; Okumura, Sakae

    2014-01-01

    BACKGROUND: We aimed to clarify whether the size of a lung adenocarcinoma evaluated using the mediastinal window setting on computed tomography is an important and useful measure for predicting invasiveness, lymph node metastasis and prognosis in small adenocarcinomas. METHODS: We evaluated 176 patients with small lung adenocarcinomas (diameter, 1-3 cm) who underwent standard surgical resection. Tumours were examined using computed tomography with thin-section conditions (1.25 mm thick on high-resolution ...

  20. Dermal application of nitric oxide releasing acidified nitrite-containing liniments significantly reduces blood pressure in humans.

    Science.gov (United States)

    Opländer, Christian; Volkmar, Christine M; Paunel-Görgülü, Adnana; Fritsch, Thomas; van Faassen, Ernst E; Mürtz, Manfred; Grieb, Gerrit; Bozkurt, Ahmet; Hemmrich, Karsten; Windolf, Joachim; Suschek, Christoph V

    2012-02-15

    Vascular ischemic diseases, hypertension, and other systemic hemodynamic and vascular disorders may be the result of impaired bioavailability of nitric oxide (NO). NO, as well as its active derivatives such as nitrite and nitroso compounds, is an important effector and signaling molecule with vasodilating properties. Our previous findings point to a therapeutic potential of cutaneous administration of NO in the treatment of systemic hemodynamic disorders. Unfortunately, no reliable data are available on the mechanisms, kinetics and biological responses of dermal application of nitric oxide in humans in vivo. The aim of the study was to close this gap and to explore the therapeutic potential of dermal nitric oxide application. Using human skin in vitro and in vivo, we characterized the capacity of NO, applied in a NO-releasing acidified form of nitrite-containing liniments, to penetrate the epidermis and to influence local as well as systemic hemodynamic parameters. We found that dermal application of NO led to a very rapid and significant transepidermal translocation of NO into the underlying tissue. Depending on the size of the treated skin area, this translocation manifests itself in a significant systemic increase of the NO derivatives nitrite and nitroso compounds. In parallel, this translocation was accompanied by increased systemic vasodilatation and blood flow as well as reduced blood pressure. We here give evidence that, in humans, dermal application of NO has a therapeutic potential for systemic hemodynamic disorders that might arise from locally or systemically insufficient availability of NO or its bioactive derivatives. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. Significance of cranial computer tomography for the early diagnosis of peri- and postnatal damage

    Energy Technology Data Exchange (ETDEWEB)

    Richter, E I

    1981-01-01

    This report describes the examination-technique possibilities of craniocerebral computed tomography in the peri- and postnatal period. Some typical tomographic images from our own patient material of 327 children, collected over a period of 17 1/2 months, are presented. The special advantages of this new, technically extensive method are exact diagnoses, the possibility of longitudinal follow-up, and its absolute harmlessness to the child.

  2. Applications of computer algebra

    CERN Document Server

    1985-01-01

    Today, certain computer software systems exist which surpass the computational ability of researchers when their mathematical techniques are applied to many areas of science and engineering. These computer systems can perform a large portion of the calculations seen in mathematical analysis. Despite this massive power, thousands of people use these systems as a routine resource for everyday calculations. These software programs are commonly called "Computer Algebra" systems. They have names such as MACSYMA, MAPLE, muMATH, REDUCE and SMP. They are receiving credit as a computational aid with increasing regularity in articles in the scientific and engineering literature. When most people think about computers and scientific research these days, they imagine a machine grinding away, processing numbers arithmetically. It is not generally realized that, for a number of years, computers have been performing non-numeric computations. This means, for example, that one inputs an equation and obtains a closed form...
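
    A small example of the kind of non-numeric computation described above, using SymPy as a present-day stand-in for systems such as REDUCE or MACSYMA: one inputs an equation or expression and obtains a closed-form, symbolic result.

```python
import sympy as sp

x = sp.symbols('x')

# Solve a quadratic symbolically rather than numerically.
solutions = sp.solve(sp.Eq(x**2 - 3*x + 2, 0), x)
print(solutions)                      # [1, 2]

# Symbolic differentiation and integration, typical computer-algebra tasks.
expr = sp.sin(x) * sp.exp(x)
print(sp.diff(expr, x))               # exp(x)*sin(x) + exp(x)*cos(x)
print(sp.integrate(expr, x))          # exp(x)*sin(x)/2 - exp(x)*cos(x)/2
```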

  3. Low tube voltage dual source computed tomography to reduce contrast media doses in adult abdomen examinations: A phantom study

    Energy Technology Data Exchange (ETDEWEB)

    Thor, Daniel [Department of Diagnostic Medical Physics, Karolinska University Hospital, Stockholm 14186 (Sweden); Brismar, Torkel B., E-mail: torkel.brismar@gmail.com; Fischer, Michael A. [Department of Clinical Science, Intervention and Technology at Karolinska Institutet and Department of Radiology, Karolinska University Hospital in Huddinge, Stockholm 14186 (Sweden)

    2015-09-15

    Purpose: To evaluate the potential of low tube voltage dual source (DS) single energy (SE) and dual energy (DE) computed tomography (CT) to reduce contrast media (CM) dose in adult abdominal examinations of various sizes while maintaining soft tissue and iodine contrast-to-noise ratio (CNR). Methods: Four abdominal phantoms simulating a body mass index of 16 to 35 kg/m² with four inserted syringes of 0, 2, 4, and 8 mgI/ml CM were scanned using a 64-slice DS-CT scanner. Six imaging protocols were used: one single source (SS) reference protocol (120 kV, 180 reference mAs), four low-kV SE protocols (70 and 80 kV using both SS and DS), and one DE protocol at 80/140 kV. Potential CM reduction with unchanged CNRs relative to the 120 kV protocol was calculated along with the corresponding increase in radiation dose. Results: The potential contrast media reductions were determined to be approximately 53% for DS 70 kV, 51% for SS 70 kV, 44% for DS 80 kV, 40% for SS 80 kV, and 20% for DE (all differences were significant, P < 0.05). Constant CNR could be achieved by using DS 70 kV for small to medium phantom sizes (16–26 kg/m²) and for all sizes (16–35 kg/m²) when using DS 80 kV and DE. Corresponding radiation doses increased by 60%–107%, 23%–83%, and 6%–12%, respectively. Conclusions: DS single energy CT can be used to reduce CM dose by 44%–53% with maintained CNR in adult abdominal examinations at the cost of an increased radiation dose. DS dual-energy CT allows reduction of CM dose by 20% at a similar radiation dose as compared to a standard 120 kV single source protocol.

  4. On the Use of Surface Porosity to Reduce Unsteady Lift

    Science.gov (United States)

    Tinetti, Ana F.; Kelly, Jeffrey J.; Bauer, Steven X. S.; Thomas, Russell H.

    2001-01-01

    An innovative application of existing technology is proposed for attenuating the effects of transient phenomena, such as rotor-stator and rotor-strut interactions, linked to noise and fatigue failure in turbomachinery environments. A computational study was designed to assess the potential of passive porosity technology as a mechanism for alleviating interaction effects by reducing the unsteady lift developed on a stator airfoil subject to wake impingement. The study involved a typical high-bypass fan stator airfoil (solid baseline and several porous configurations), immersed in a free field and exposed to the effects of a transversely moving wake. It was found that, for the airfoil under consideration, the magnitude of the unsteady lift could be reduced by more than 18% without incurring significant performance losses.

  5. Modification of Flow Structure Over a Van Model By Suction Flow Control to Reduce Aerodynamics Drag

    Directory of Open Access Journals (Sweden)

    Harinaldi Harinaldi

    2012-05-01

    Full Text Available Automobile aerodynamic studies are typically undertaken to improve safety and increase fuel efficiency, as well as to find new innovations in automobile technology to deal with the problems of the energy crisis and global warming. Some car companies aim to develop control solutions that reduce the aerodynamic drag of vehicles, and significant further improvement is still possible by reducing mass, rolling friction or aerodynamic drag. Some flow control methods provide the possibility to modify flow separation and so reduce the development of the swirling structures around the vehicle. In this study, a family van is modeled with a modified form of Ahmed's body by changing the orientation of the flow from its original form (modified/reversed Ahmed body). This model is equipped with suction on the rear side to comprehensively examine the pressure field modifications that occur. The investigation combines computational and experimental work. The computational approach used commercial software with the standard k-epsilon turbulence model, and its objective was to determine the characteristics of the flow field and the aerodynamic drag reduction obtained for the test model. The experimental approach used a load cell in order to validate the aerodynamic drag reduction obtained by the computational approach. The results show that applying suction to the rear part of the van model reduces the wake and the vortex formation. Furthermore, aerodynamic drag reductions close to 13.86% for the computational approach and 16.32% for the experimental approach have been obtained.
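
    The drag-reduction figures quoted above follow from the usual drag-coefficient arithmetic, sketched below. The force, density, velocity and frontal-area values are invented for illustration and are not the paper's data.

```python
def drag_coefficient(force_n, air_density, velocity, frontal_area):
    """C_d = 2 F / (rho * V^2 * A)."""
    return 2.0 * force_n / (air_density * velocity**2 * frontal_area)

def drag_reduction_percent(cd_baseline, cd_controlled):
    return 100.0 * (cd_baseline - cd_controlled) / cd_baseline

# Illustrative values for a small reversed-Ahmed-body van model in a wind tunnel.
rho, v, area = 1.2, 20.0, 0.05            # kg/m^3, m/s, m^2
cd_base = drag_coefficient(4.0, rho, v, area)
cd_suction = drag_coefficient(3.4, rho, v, area)
print(round(cd_base, 3), round(cd_suction, 3))
print(round(drag_reduction_percent(cd_base, cd_suction), 1), "% drag reduction")
```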

  6. Cloud computing: An innovative tool for library services

    OpenAIRE

    Sahu, R.

    2015-01-01

    Cloud computing is a new information and communication technology approach, attractive because of its potential benefits such as reduced cost, accessibility anywhere at any time, and its elasticity and flexibility. This paper defines cloud computing and describes its essential characteristics, cloud computing models, the components of the cloud, the advantages and drawbacks of cloud computing, and the use of cloud computing in libraries.

  7. Cerebral Embolic Protection During Transcatheter Aortic Valve Replacement Significantly Reduces Death and Stroke Compared With Unprotected Procedures.

    Science.gov (United States)

    Seeger, Julia; Gonska, Birgid; Otto, Markus; Rottbauer, Wolfgang; Wöhrle, Jochen

    2017-11-27

    The aim of this study was to evaluate the impact of cerebral embolic protection on stroke-free survival in patients undergoing transcatheter aortic valve replacement (TAVR). Imaging data on cerebral embolic protection devices have demonstrated a significant reduction in number and volume of cerebral lesions. A total of 802 consecutive patients were enrolled. The Sentinel cerebral embolic protection device (Claret Medical Inc., Santa Rosa, California) was used in 34.9% (n = 280) of consecutive patients. In 65.1% (n = 522) of patients TAVR was performed in the identical setting except without cerebral embolic protection. Neurological follow-up was done within 7 days post-procedure. The primary endpoint was a composite of all-cause mortality or all-stroke according to Valve Academic Research Consortium-2 criteria within 7 days. Propensity score matching was performed to account for possible confounders. Both filters of the device were successfully positioned in 280 of 305 (91.8%) consecutive patients. With use of cerebral embolic protection rate of disabling and nondisabling stroke was significantly reduced from 4.6% to 1.4% (p = 0.03; odds ratio: 0.29, 95% confidence interval: 0.10 to 0.93) in the propensity-matched population (n = 560). The primary endpoint occurred significantly less frequently, with 2.1% (n = 6 of 280) in the protected group compared with 6.8% (n = 19 of 280) in the control group (p = 0.01; odds ratio: 0.30; 95% confidence interval: 0.12 to 0.77). In multivariable analysis Society of Thoracic Surgeons score for mortality (p = 0.02) and TAVR without protection (p = 0.02) were independent predictors for the primary endpoint. In patients undergoing TAVR use of a cerebral embolic protection device demonstrated a significant higher rate of stroke-free survival compared with unprotected TAVR. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  8. Reduced cost mission design using surrogate models

    Science.gov (United States)

    Feldhacker, Juliana D.; Jones, Brandon A.; Doostan, Alireza; Hampton, Jerrad

    2016-01-01

    This paper uses surrogate models to reduce the computational cost associated with spacecraft mission design in three-body dynamical systems. Sampling-based least squares regression is used to project the system response onto a set of orthogonal bases, providing a representation of the ΔV required for rendezvous as a reduced-order surrogate model. Models are presented for mid-field rendezvous of spacecraft in orbits in the Earth-Moon circular restricted three-body problem, including a halo orbit about the Earth-Moon L2 libration point (EML-2) and a distant retrograde orbit (DRO) about the Moon. In each case, the initial position of the spacecraft, the time of flight, and the separation between the chaser and the target vehicles are all considered as design inputs. The results show that sample sizes on the order of 10^2 are sufficient to produce accurate surrogates, with RMS errors reaching 0.2 m/s for the halo orbit and falling below 0.01 m/s for the DRO. A single function call to the resulting surrogate is up to two orders of magnitude faster than computing the same solution using full fidelity propagators. The expansion coefficients solved for in the surrogates are then used to conduct a global sensitivity analysis of the ΔV on each of the input parameters, which identifies the separation between the spacecraft as the primary contributor to the ΔV cost. Finally, the models are demonstrated to be useful for cheap evaluation of the cost function in constrained optimization problems seeking to minimize the ΔV required for rendezvous. These surrogate models show significant advantages for mission design in three-body systems, in terms of both computational cost and capabilities, over traditional Monte Carlo methods.
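
    The surrogate construction described above, sampling-based least-squares regression onto an orthogonal basis, can be sketched in a few lines. Below, a cheap analytic function stands in for the expensive full-fidelity ΔV evaluation, and a Legendre basis is fitted from roughly 10^2 samples; all values are illustrative assumptions, not the paper's models.

```python
import numpy as np
from numpy.polynomial import legendre

def expensive_delta_v(x):
    """Stand-in for the costly full-fidelity Delta-V evaluation (assumption)."""
    return 1.0 + 0.5 * x + 0.3 * x**2 + 0.05 * np.sin(5 * x)

rng = np.random.default_rng(5)
degree, n_samples = 6, 100                 # ~10^2 samples, as in the abstract

# Sample the design input on [-1, 1] and evaluate the expensive model once per sample.
x_train = rng.uniform(-1, 1, n_samples)
y_train = expensive_delta_v(x_train)

# Least-squares projection onto the Legendre basis gives the surrogate coefficients.
coeffs = legendre.legfit(x_train, y_train, degree)

# The surrogate is now a cheap polynomial evaluation.
x_test = np.linspace(-1, 1, 400)
rms_err = np.sqrt(np.mean((legendre.legval(x_test, coeffs) - expensive_delta_v(x_test))**2))
print("surrogate RMS error:", round(rms_err, 5))
```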

  9. Extending OFDM Symbols to Reduce Power Consumption

    NARCIS (Netherlands)

    Kokkeler, Andre B.J.; Smit, Gerardus Johannes Maria

    2012-01-01

    Existing communication standards have limited capabilities to adapt to low-SNR environments or to exploit low data rate requirements in a power-efficient way. Existing techniques, such as control coding, do not reduce the computational load when reducing data rates. In this paper, we introduce

  10. Soil nitrate reducing processes - drivers, mechanisms for spatial variation, and significance for nitrous oxide production

    OpenAIRE

    Giles, M.; Morley, N.; Baggs, E.M.; Daniell, T.J.

    2012-01-01

    The microbial processes of denitrification and dissimilatory nitrate reduction to ammonium (DNRA) are two important nitrate reducing mechanisms in soil, which are responsible for the loss of nitrate (NO3-) and production of the potent greenhouse gas, nitrous oxide (N2O). A number of factors are known to control these processes, including O2 concentrations and moisture content, N, C, pH, and the size and community structure of nitrate reducing organisms responsible for the ...

  11. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    Science.gov (United States)

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

    In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several giga voxels), this computational burden prevents their actual breakthrough. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.

  12. The significance of routine thoracic computed tomography in patients with blunt chest trauma.

    Science.gov (United States)

    Çorbacıoğlu, Seref Kerem; Er, Erhan; Aslan, Sahin; Seviner, Meltem; Aksel, Gökhan; Doğan, Nurettin Özgür; Güler, Sertaç; Bitir, Aysen

    2015-05-01

    The purpose of this study is to investigate whether the use of thoracic computed tomography (TCT) as part of nonselective computed tomography (CT) guidelines is superior to selective CT during the diagnosis of blunt chest trauma. This study was planned as a prospective cohort study, and it was conducted at the emergency department between 2013 and 2014. A total of 260 adult patients who did not meet the exclusion criteria were enrolled in the study. All patients were evaluated by an emergency physician, and their primary surveys were completed based on the Advanced Trauma Life Support (ATLS) principles. Based on the initial findings and ATLS recommendations, patients in whom thoracic CT was indicated were determined (selective CT group). Routine CTs were then performed on all patients. Thoracic injuries were found in 97 (37.3%) patients following routine TCT. In 53 (20%) patients, thoracic injuries were found by selective CT. Routine TCT was able to detect chest injury in 44 (16%) patients for whom selective TCT would not otherwise be ordered based on the EP evaluation (nonselective TCT group). Five (2%) patients in this nonselective TCT group required tube thoracostomy, while there was no additional treatment provided for thoracic injuries in the remaining 39 (15%). In conclusion, we found that the nonselective TCT method was superior to the selective TCT method in detecting thoracic injuries in patients with blunt trauma. Furthermore, we were able to demonstrate that the nonselective TCT method can change the course of patient management albeit at low rates. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Impact of singular excessive computer game and television exposure on sleep patterns and memory performance of school-aged children.

    Science.gov (United States)

    Dworak, Markus; Schierl, Thomas; Bruns, Thomas; Strüder, Heiko Klaus

    2007-11-01

    Television and computer game consumption are a powerful influence in the lives of most children. Previous evidence has supported the notion that media exposure could impair a variety of behavioral characteristics. Excessive television viewing and computer game playing have been associated with many psychiatric symptoms, especially emotional and behavioral symptoms, somatic complaints, attention problems such as hyperactivity, and family interaction problems. Nevertheless, there is insufficient knowledge about the relationship between singular excessive media consumption on sleep patterns and linked implications on children. The aim of this study was to investigate the effects of singular excessive television and computer game consumption on sleep patterns and memory performance of children. Eleven school-aged children were recruited for this polysomnographic study. Children were exposed to voluntary excessive television and computer game consumption. In the subsequent night, polysomnographic measurements were conducted to measure sleep-architecture and sleep-continuity parameters. In addition, a visual and verbal memory test was conducted before media stimulation and after the subsequent sleeping period to determine visuospatial and verbal memory performance. Only computer game playing resulted in significant reduced amounts of slow-wave sleep as well as significant declines in verbal memory performance. Prolonged sleep-onset latency and more stage 2 sleep were also detected after previous computer game consumption. No effects on rapid eye movement sleep were observed. Television viewing reduced sleep efficiency significantly but did not affect sleep patterns. The results suggest that television and computer game exposure affect children's sleep and deteriorate verbal cognitive performance, which supports the hypothesis of the negative influence of media consumption on children's sleep, learning, and memory.

  14. Computer code MLCOSP for multiple-correlation and spectrum analysis with a hybrid computer

    International Nuclear Information System (INIS)

    Oguma, Ritsuo; Fujii, Yoshio; Usui, Hozumi; Watanabe, Koichi

    1975-10-01

    Usage of the computer code MLCOSP (Multiple Correlation and Spectrum), developed for a hybrid computer installed at JAERI, is described. Functions of the hybrid computer and its terminal devices are utilized ingeniously in the code to reduce the complexity of the data handling which occurs in the analysis of multivariable experimental data and to perform the analysis in perspective. Features of the code are as follows: experimental data can be fed to the digital computer through the analog part of the hybrid computer by connecting a data recorder; the computed results are displayed in figures, and hardcopies are taken when necessary; messages from the code are shown on the terminal, so man-machine communication is possible; and, furthermore, data can be entered through a keyboard, so that case studies based on the results of the analysis are possible. (auth.)
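
    The quantities the code computes, multiple correlations and spectra of multivariable data, correspond to the cross-correlation and cross-power spectral density sketched below with SciPy on two synthetic channels. This is a modern illustration of that kind of analysis, not the MLCOSP code itself.

```python
import numpy as np
from scipy import signal

fs = 200.0                                   # sampling rate, Hz
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(6)

# Two noisy channels sharing a common 5 Hz component, as in multivariable data.
common = np.sin(2 * np.pi * 5 * t)
x = common + 0.5 * rng.standard_normal(t.size)
y = 0.8 * np.roll(common, 10) + 0.5 * rng.standard_normal(t.size)

# Cross-correlation (time domain) and cross-power spectral density (frequency domain).
lags = signal.correlation_lags(x.size, y.size)
xcorr = signal.correlate(x - x.mean(), y - y.mean()) / x.size
f, pxy = signal.csd(x, y, fs=fs, nperseg=1024)

print("peak correlation at lag (samples):", lags[np.argmax(np.abs(xcorr))])
print("dominant cross-spectrum frequency (Hz):", round(f[np.argmax(np.abs(pxy))], 2))
```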

  15. Uncertainty analysis in Monte Carlo criticality computations

    International Nuclear Information System (INIS)

    Qi Ao

    2011-01-01

    Highlights: ► Two types of uncertainty methods for k_eff Monte Carlo computations are examined. ► The sampling method has the fewest restrictions on perturbations but requires the most computing resources. ► The analytical method is limited to small perturbations of material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k_eff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes to criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because the administrative margin of subcriticality has a substantial impact on the economics and safety of nuclear fuel cycle operations, recent interest in reducing this margin makes uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k_eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in k_eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
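
    To make the sampling-based approach concrete, the short Python sketch below draws random samples of uncertain input parameters, re-runs a criticality calculation for each sample, and reports the resulting spread in k_eff. The run_keff function, its parameters, and the input distributions are purely illustrative stand-ins for a real Monte Carlo transport code, not part of the cited work.

        import numpy as np

        def run_keff(density, enrichment):
            # Hypothetical stand-in for a Monte Carlo criticality calculation;
            # in practice this would drive a transport code and return its k_eff estimate.
            return 0.95 + 0.02 * (density - 1.0) + 0.5 * (enrichment - 0.04)

        rng = np.random.default_rng(seed=1)
        n_samples = 200

        # Sample the uncertain inputs from assumed (illustrative) distributions.
        densities = rng.normal(loc=1.00, scale=0.01, size=n_samples)        # g/cm^3
        enrichments = rng.normal(loc=0.040, scale=0.0005, size=n_samples)   # weight fraction

        keff = np.array([run_keff(d, e) for d, e in zip(densities, enrichments)])

        # The sample mean and standard deviation quantify the k_eff uncertainty
        # induced by the sampled input uncertainties.
        print(f"k_eff = {keff.mean():.5f} +/- {keff.std(ddof=1):.5f}")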

  16. Cloud Computing Organizational Benefits : A Managerial concern

    OpenAIRE

    Mandala, Venkata Bhaskar Reddy; Chandra, Marepalli Sharat

    2012-01-01

    Context: The software industry is looking for new methods and opportunities to reduce project management problems and operational costs. The Cloud Computing concept provides answers to these problems. Cloud Computing is made possible by the availability of high internet bandwidth, and it provides a wide range of services to a varied customer base. Cloud Computing has some key elements such as on-demand services, a large pool of configurable computing resources and minimal man...

  17. Computational engineering applied to the concentrating solar power technology

    International Nuclear Information System (INIS)

    Giannuzzi, Giuseppe Mauro; Miliozzi, Adio

    2006-01-01

    Solar power plants based on parabolic-trough collectors present innumerable thermo-structural problems, related on the one hand to the high temperatures of the heat transfer fluid, and on the other to the need for highly precise aiming and structural resistance. Devising an engineering response to these problems implies analysing generally unconventional solutions. At present, computational engineering is the principal investigative tool; it speeds the design of prototype installations and significantly reduces the necessary but costly experimental programmes. [it]

  18. Cloud Computing Bible

    CERN Document Server

    Sosinsky, Barrie

    2010-01-01

    The complete reference guide to the hot technology of cloud computing. Its potential for lowering IT costs makes cloud computing a major force for both IT vendors and users; it is expected to gain momentum rapidly with the launch of Office Web Apps later this year. Because cloud computing involves various technologies, protocols, platforms, and infrastructure elements, this comprehensive reference is just what you need if you'll be using or implementing cloud computing. Cloud computing offers significant cost savings by eliminating upfront expenses for hardware and software; its growing popularit

  19. Learning to REDUCE: A Reduced Electricity Consumption Prediction Ensemble

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Chelmis, Charalampos; Prasanna, Viktor

    2016-02-12

    Utilities use Demand Response (DR) to balance supply and demand in the electric grid by involving customers in efforts to reduce electricity consumption during peak periods. To implement and adapt DR under dynamically changing grid conditions, reliable prediction of reduced consumption is critical. However, despite the wealth of research on electricity consumption prediction and the long practice of DR, the problem of reduced consumption prediction remains largely unaddressed. In this paper, we identify unique computational challenges associated with the prediction of reduced consumption and contrast them with those of normal consumption and DR baseline prediction. We propose a novel ensemble model that leverages different sequences of daily electricity consumption on DR event days as well as contextual attributes for reduced consumption prediction. We demonstrate the success of our model on a large, real-world, high-resolution dataset from a university microgrid comprising over 950 DR events across a diverse set of 32 buildings. Our model achieves an average error of 13.5%, an 8.8% improvement over the baseline. Our work is particularly relevant for buildings where electricity consumption is not tied to strict schedules. Our results and insights should prove useful to researchers and practitioners working in the sustainable energy domain.
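
    As a rough illustration of the ensemble idea (not the authors' model), the sketch below fits two simple regressors on different views of a DR event day, the same-day consumption sequence and a handful of contextual attributes, and averages their predictions of reduced consumption. All data and features are synthetic placeholders.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic DR event days: 24 hourly consumption values plus one contextual attribute.
        n_days = 200
        consumption = rng.normal(50.0, 5.0, size=(n_days, 24))        # kWh, same-day sequence
        temperature = rng.normal(30.0, 3.0, size=(n_days, 1))         # contextual attribute
        reduced = 0.8 * consumption[:, -1:] - 0.3 * temperature + rng.normal(0, 1, size=(n_days, 1))

        def fit_linear(X, y):
            # Ordinary least squares with an intercept column.
            X1 = np.hstack([X, np.ones((X.shape[0], 1))])
            coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
            return lambda Z: np.hstack([Z, np.ones((Z.shape[0], 1))]) @ coef

        # Ensemble members trained on different feature views of the same days.
        seq_model = fit_linear(consumption, reduced)
        ctx_model = fit_linear(np.hstack([consumption[:, -3:], temperature]), reduced)

        # Ensemble prediction = simple average of the member predictions.
        prediction = 0.5 * (seq_model(consumption)
                            + ctx_model(np.hstack([consumption[:, -3:], temperature])))
        mape = np.mean(np.abs((prediction - reduced) / reduced)) * 100.0
        print(f"in-sample MAPE: {mape:.1f}%")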

  20. Advanced computations in plasma physics

    International Nuclear Information System (INIS)

    Tang, W.M.

    2002-01-01

    Scientific simulation in tandem with theory and experiment is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPP's). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPP's to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to

  1. PseKRAAC: a flexible web server for generating pseudo K-tuple reduced amino acids composition.

    Science.gov (United States)

    Zuo, Yongchun; Li, Yuan; Chen, Yingli; Li, Guangpeng; Yan, Zhenhe; Yang, Lei

    2017-01-01

    Reduced amino acid alphabets offer a powerful means of both simplifying protein complexity and identifying functionally conserved regions. However, different protein problems may require different kinds of clustering methods. Encouraged by the success of the pseudo-amino acid composition algorithm, we developed a freely available web server, called PseKRAAC (the pseudo K-tuple reduced amino acids composition). By implementing reduced amino acid alphabets, protein complexity can be significantly simplified, which decreases the chance of overfitting, lowers the computational burden and reduces information redundancy. PseKRAAC delivers more capability for protein research by incorporating three crucial parameters that describe protein composition. Users can easily generate many different modes of PseKRAAC tailored to their needs by selecting various reduced amino acid alphabets and other characteristic parameters. It is anticipated that the PseKRAAC web server will become a very useful tool in computational proteomics and protein sequence analysis. Freely available on the web at http://bigdata.imu.edu.cn/psekraac. Contacts: yczuo@imu.edu.cn or imu.hema@foxmail.com or yanglei_hmu@163.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. Virtually going green: The role of quantum computational chemistry in reducing pollution and toxicity in chemistry

    Science.gov (United States)

    Stevens, Jonathan

    2017-07-01

    Continuing advances in computational chemistry have permitted quantum mechanical calculations to assist research in green chemistry and to contribute to the greening of chemical practice. Presented here are recent examples illustrating the contribution of computational quantum chemistry to green chemistry, including the possibility of using computation as a green alternative to experiments, as well as contributions to greener catalysis and the search for greener solvents. Examples of applications of computation to ambitious projects for green synthetic chemistry using carbon dioxide are also presented.

  3. Studi Perbandingan Layanan Cloud Computing [A Comparative Study of Cloud Computing Services]

    OpenAIRE

    Afdhal, Afdhal

    2013-01-01

    In the past few years, cloud computing has become a dominant topic in the IT area. Cloud computing offers hardware, infrastructure, platforms and applications without requiring end-users' knowledge of the physical location and configuration of the providers who deliver the services. It has been a good solution for increasing reliability, reducing computing costs, and creating opportunities for IT industries to gain more advantages. The purpose of this article is to present a better understanding of cloud d...

  4. Denver screening protocol for blunt cerebrovascular injury reduces the use of multi-detector computed tomography angiography.

    Science.gov (United States)

    Beliaev, Andrei M; Barber, P Alan; Marshall, Roger J; Civil, Ian

    2014-06-01

    Blunt cerebrovascular injury (BCVI) occurs in 0.2-2.7% of blunt trauma patients and has up to 30% mortality. Conventional screening does not recognize up to 20% of BCVI patients. To improve the diagnosis of BCVI, both an expanded battery of screening criteria and multi-detector computed tomography angiography (CTA) have been suggested. The aim of this study is to investigate whether restricting CTA to Denver protocol screen-positive patients would reduce the unnecessary use of CTA as a pre-emptive screening tool. This is a registry-based study of blunt trauma patients admitted to Auckland City Hospital from 1998 to 2012. The diagnosis of BCVI was confirmed or excluded with CTA, magnetic resonance angiography and, if this imaging was inconclusive, four-vessel digital subtraction angiography. Thirty (61%) BCVI and 19 (39%) non-BCVI patients met the eligibility criteria. The Denver protocol applied to our cohort of patients had a sensitivity of 97% (95% confidence interval (CI): 83-100%) and a specificity of 42% (95% CI: 20-67%). With a prevalence of BCVI in blunt trauma patients of 0.2% and 2.7%, post-test odds of a screen-positive test were 0.03 (95% CI: 0.002-0.005) and 0.046 (95% CI: 0.314-0.068), respectively. Application of CTA to Denver protocol screen-positive trauma patients can decrease the use of CTA as a pre-emptive screening tool by 95-97% and reduce its hazards. © 2013 Royal Australasian College of Surgeons.
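
    The post-test odds reported above follow from standard screening arithmetic: pre-test odds (derived from BCVI prevalence) multiplied by the positive likelihood ratio, sensitivity / (1 - specificity). The short sketch below applies that formula to the quoted sensitivity, specificity and prevalence range; it is a generic worked example rather than the authors' statistical code.

        def post_test_odds_positive(sensitivity, specificity, prevalence):
            # Pre-test odds from prevalence, then multiply by the positive likelihood ratio.
            pre_test_odds = prevalence / (1.0 - prevalence)
            lr_positive = sensitivity / (1.0 - specificity)
            return pre_test_odds * lr_positive

        # Values quoted in the abstract: sensitivity 97%, specificity 42%,
        # BCVI prevalence between 0.2% and 2.7% of blunt trauma patients.
        for prevalence in (0.002, 0.027):
            odds = post_test_odds_positive(0.97, 0.42, prevalence)
            print(f"prevalence {prevalence:.1%}: post-test odds of a positive screen = {odds:.3f}")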

  5. Noise filtering algorithm for the MFTF-B computer based control system

    International Nuclear Information System (INIS)

    Minor, E.G.

    1983-01-01

    An algorithm to reduce the message traffic in the MFTF-B computer based control system is described. The algorithm filters analog inputs to the control system. Its purpose is to distinguish between changes in the inputs due to noise and changes due to significant variations in the quantity being monitored. Noise is rejected while significant changes are reported to the control system data base, thus keeping the data base updated with a minimum number of messages. The algorithm is memory efficient, requiring only four bytes of storage per analog channel, and computationally simple, requiring only subtraction and comparison. Quantitative analysis of the algorithm is presented for the case of additive Gaussian noise. It is shown that the algorithm is stable and tends toward the mean value of the monitored variable over a wide variety of additive noise distributions
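
    A minimal sketch of such a filter is shown below: it keeps one value of state per analog channel and reports a new reading only when it differs from the last reported value by more than a noise threshold, using nothing but subtraction and comparison. The threshold and channel naming are illustrative assumptions, not details of the MFTF-B implementation.

        class DeadbandFilter:
            """Report a reading only when it moves outside a noise band
            around the last value sent to the control-system database."""

            def __init__(self, threshold):
                self.threshold = threshold
                self.last_reported = {}            # channel id -> last reported value

            def update(self, channel, value):
                last = self.last_reported.get(channel)
                # Subtraction and comparison only: is the change significant?
                if last is None or abs(value - last) > self.threshold:
                    self.last_reported[channel] = value
                    return value                   # significant change: send a message
                return None                        # treated as noise: no message

        filt = DeadbandFilter(threshold=0.05)
        for sample in (1.00, 1.01, 0.99, 1.20, 1.22):
            print(filt.update("TC-07", sample))    # only 1.00 and 1.20 are reported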

  6. Constructing Precisely Computing Networks with Biophysical Spiking Neurons.

    Science.gov (United States)

    Schwemmer, Michael A; Fairhall, Adrienne L; Denéve, Sophie; Shea-Brown, Eric T

    2015-07-15

    While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks

  7. Computed tomography versus invasive coronary angiography

    DEFF Research Database (Denmark)

    Napp, Adriane E.; Haase, Robert; Laule, Michael

    2017-01-01

    Objectives: More than 3.5 million invasive coronary angiographies (ICA) are performed in Europe annually. Approximately 2 million of these invasive procedures might be reduced by noninvasive tests because no coronary intervention is performed. Computed tomography (CT) is the most accurate ... angiography (ICA) is the reference standard for detection of CAD. • Noninvasive computed tomography angiography excludes CAD with high sensitivity. • CT may effectively reduce the approximately 2 million negative ICAs in Europe. • DISCHARGE addresses this hypothesis in patients with low-to-intermediate pretest...

  8. Automatic Computer Mapping of Terrain

    Science.gov (United States)

    Smedes, H. W.

    1971-01-01

    Computer processing of 17 wavelength bands of visible, reflective infrared, and thermal infrared scanner spectrometer data, and of three wavelength bands derived from color aerial film, has resulted in successful automatic computer mapping of eight or more terrain classes in a Yellowstone National Park test site. The tests involved: (1) supervised and non-supervised computer programs; (2) special preprocessing of the scanner data to reduce computer processing time and cost and improve the accuracy; and (3) studies of the effectiveness of the proposed Earth Resources Technology Satellite (ERTS) data channels in the automatic mapping of the same terrain, based on simulations using the same set of scanner data. The following terrain classes have been mapped with greater than 80 percent accuracy in a 12-square-mile area with 1,800 feet of relief: (1) bedrock exposures, (2) vegetated rock rubble, (3) talus, (4) glacial kame meadow, (5) glacial till meadow, (6) forest, (7) bog, and (8) water. In addition, shadows of clouds and cliffs are depicted, but they were greatly reduced by the preprocessing techniques.

  9. Extreme learning machine for reduced order modeling of turbulent geophysical flows

    Science.gov (United States)

    San, Omer; Maulik, Romit

    2018-04-01

    We investigate the application of artificial neural networks to stabilize proper orthogonal decomposition-based reduced order models for quasistationary geophysical turbulent flows. An extreme learning machine concept is introduced for computing an eddy-viscosity closure dynamically to incorporate the effects of the truncated modes. We consider a four-gyre wind-driven ocean circulation problem as our prototype setting to assess the performance of the proposed data-driven approach. Our framework provides a significant reduction in computational time and effectively retains the dynamics of the full-order model during the forward simulation period beyond the training data set. Furthermore, we show that the method is robust for larger choices of time steps and can be used as an efficient and reliable tool for long time integration of general circulation models.
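
    The extreme learning machine concept itself is compact enough to sketch: the hidden-layer weights are drawn at random and frozen, and only the linear output weights are fitted, here by least squares, to map resolved-mode features to a closure (eddy-viscosity) value. The synthetic features and target below are placeholders, not the authors' ocean-circulation data.

        import numpy as np

        rng = np.random.default_rng(42)

        # Synthetic training set: inputs could be truncated POD mode amplitudes,
        # the target an eddy-viscosity closure value (purely illustrative).
        n_samples, n_features, n_hidden = 500, 10, 60
        X = rng.normal(size=(n_samples, n_features))
        y = np.tanh(X @ rng.normal(size=n_features)) + 0.05 * rng.normal(size=n_samples)

        # 1) Random, fixed hidden layer.
        W_in = rng.normal(size=(n_features, n_hidden))
        b = rng.normal(size=n_hidden)
        H = np.tanh(X @ W_in + b)                      # hidden-layer activations

        # 2) Train only the output weights, via linear least squares.
        W_out, *_ = np.linalg.lstsq(H, y, rcond=None)

        def elm_predict(X_new):
            return np.tanh(X_new @ W_in + b) @ W_out

        rmse = np.sqrt(np.mean((elm_predict(X) - y) ** 2))
        print(f"training RMSE: {rmse:.4f}")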

  10. The reduced basis method for the electric field integral equation

    International Nuclear Information System (INIS)

    Fares, M.; Hesthaven, J.S.; Maday, Y.; Stamm, B.

    2011-01-01

    We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics for problems where field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation which results in a two step procedure. The first step consists of a computationally intense assembling of the reduced basis, that needs to be effected only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, for many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave and its polarization.

  11. Data-Intensive Text Processing with MapReduce

    CERN Document Server

    Lin, Jimmy

    2010-01-01

    Our world is being revolutionized by data-driven methods: access to large amounts of data has generated new insights and opened exciting new opportunities in commerce, science, and computing applications. Processing the enormous quantities of data necessary for these advances requires large clusters, making distributed computing paradigms more crucial than ever. MapReduce is a programming model for expressing distributed computations on massive datasets and an execution framework for large-scale data processing on clusters of commodity servers. The programming model provides an easy-to-underst
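
    As a toy illustration of the programming model the book describes, the classic word-count example is sketched below in plain Python. A real MapReduce framework distributes the map and reduce phases across a cluster and handles the shuffle itself; only the data flow (map to key/value pairs, group by key, reduce per key) is reproduced here.

        from collections import defaultdict

        def map_phase(document):
            # Emit a (word, 1) pair for every word in the document.
            for word in document.split():
                yield word.lower(), 1

        def shuffle(pairs):
            # Group intermediate values by key (done by the framework in real MapReduce).
            grouped = defaultdict(list)
            for key, value in pairs:
                grouped[key].append(value)
            return grouped

        def reduce_phase(key, values):
            # Sum the counts emitted for one word.
            return key, sum(values)

        documents = ["map reduce makes distributed computing simple",
                     "map and reduce run on clusters of commodity servers"]

        intermediate = [pair for doc in documents for pair in map_phase(doc)]
        counts = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
        print(counts["map"], counts["reduce"])     # -> 2 2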

  12. Prenatal prochloraz treatment significantly increases pregnancy length and reduces offspring weight but does not affect social-olfactory memory in rats

    DEFF Research Database (Denmark)

    Dmytriyeva, Oksana; Klementiev, Boris; Berezin, Vladimir

    2013-01-01

    Metabolites of the commonly used imidazole fungicide prochloraz are androgen receptor antagonists. They have been shown to block androgen-driven development and compromise reproductive function. We tested the effect of prochloraz on cognitive behavior following exposure to this fungicide during...... the perinatal period. Pregnant Wistar rats were administered a 200mg/kg dose of prochloraz on gestational day (GD) 7, GD11, and GD15. The social recognition test (SRT) was performed on 7-week-old male rat offspring. We found an increase in pregnancy length and a significantly reduced pup weight on PND15 and PND...

  13. Framework for Computation Offloading in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Dejan Kovachev

    2012-12-01

    Full Text Available The inherently limited processing power and battery lifetime of mobile phones hinder the possible execution of computationally intensive applications like content-based video analysis or 3D modeling. Offloading of computationally intensive application parts from the mobile platform into a remote cloud infrastructure or nearby idle computers addresses this problem. This paper presents our Mobile Augmentation Cloud Services (MACS) middleware, which enables adaptive extension of Android application execution from a mobile client into the cloud. Applications are developed using the standard Android development pattern. The middleware does the heavy lifting of adaptive application partitioning, resource monitoring and computation offloading. These elastic mobile applications can run as usual mobile applications, but they can also use remote computing resources transparently. Two prototype applications using the MACS middleware demonstrate the benefits of the approach. The evaluation shows that applications which involve costly computations can benefit from offloading, with around 95% energy savings and significant performance gains compared to local execution only.

  14. Efficient Backprojection-Based Synthetic Aperture Radar Computation with Many-Core Processors

    Directory of Open Access Journals (Sweden)

    Jongsoo Park

    2013-01-01

    Full Text Available Tackling computationally challenging problems with high efficiency often requires the combination of algorithmic innovation, advanced architecture, and thorough exploitation of parallelism. We demonstrate this synergy through synthetic aperture radar (SAR) via backprojection, an image reconstruction method that can require hundreds of TFLOPS. Computation cost is significantly reduced by our new algorithm of approximate strength reduction; data movement cost is economized by software locality optimizations facilitated by advanced architecture support; parallelism is fully harnessed in various patterns and granularities. We deliver over 35 billion backprojections per second throughput per compute node on an Intel® Xeon® processor E5-2670-based cluster, equipped with Intel® Xeon Phi™ coprocessors. This corresponds to processing a 3K×3K image within a second using a single node. Our study can be extended to other settings: backprojection is applicable elsewhere, including medical imaging; approximate strength reduction is a general code transformation technique; and many-core processors are emerging as a solution to energy-efficient computing.
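
    A heavily simplified, serial sketch of time-domain backprojection is given below to show the per-pixel accumulation structure that the paper accelerates (the actual implementation adds approximate strength reduction, locality optimizations and many-core parallelism). Geometry, sampling parameters and the random placeholder data are all illustrative assumptions.

        import numpy as np

        c = 3.0e8                                   # propagation speed (m/s)
        fc = 1.0e9                                  # carrier frequency (Hz), illustrative
        fs = 2.0e8                                  # range sampling rate (Hz), illustrative
        r0 = 400.0                                  # range of the first sample (m), illustrative

        # Synthetic platform track and range-compressed pulses (placeholders).
        n_pulses, n_range = 64, 512
        platform = np.stack([np.linspace(-50.0, 50.0, n_pulses),
                             np.full(n_pulses, -500.0),
                             np.full(n_pulses, 300.0)], axis=1)
        rng = np.random.default_rng(0)
        pulses = rng.normal(size=(n_pulses, n_range)) + 1j * rng.normal(size=(n_pulses, n_range))

        # Image grid on the ground plane.
        xs = np.linspace(-20.0, 20.0, 64)
        ys = np.linspace(-20.0, 20.0, 64)
        image = np.zeros((ys.size, xs.size), dtype=complex)

        # Backprojection: for every pixel, interpolate each pulse at the pixel's
        # two-way range and accumulate with the matching phase correction.
        for p in range(n_pulses):
            for iy, y in enumerate(ys):
                for ix, x in enumerate(xs):
                    r = np.linalg.norm(platform[p] - np.array([x, y, 0.0]))
                    bin_pos = (r - r0) * 2.0 * fs / c
                    i0 = int(bin_pos)
                    if 0 <= i0 < n_range - 1:
                        frac = bin_pos - i0
                        sample = (1.0 - frac) * pulses[p, i0] + frac * pulses[p, i0 + 1]
                        image[iy, ix] += sample * np.exp(4j * np.pi * fc * r / c)

        print("peak image magnitude:", np.abs(image).max())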

  15. Modified computation of the nozzle damping coefficient in solid rocket motors

    Science.gov (United States)

    Liu, Peijin; Wang, Muxin; Yang, Wenjing; Gupta, Vikrant; Guan, Yu; Li, Larry K. B.

    2018-02-01

    In solid rocket motors, the bulk advection of acoustic energy out of the nozzle constitutes a significant source of damping and can thus influence the thermoacoustic stability of the system. In this paper, we propose and test a modified version of a historically accepted method of calculating the nozzle damping coefficient. Building on previous work, we separate the nozzle from the combustor, but compute the acoustic admittance at the nozzle entry using the linearized Euler equations (LEEs) rather than with short nozzle theory. We compute the combustor's acoustic modes also with the LEEs, taking the nozzle admittance as the boundary condition at the combustor exit while accounting for the mean flow field in the combustor using an analytical solution to Taylor-Culick flow. We then compute the nozzle damping coefficient via a balance of the unsteady energy flux through the nozzle. Compared with established methods, the proposed method offers competitive accuracy at reduced computational costs, helping to improve predictions of thermoacoustic instability in solid rocket motors.

  16. Simple and practical approach for computing the ray Hessian matrix in geometrical optics.

    Science.gov (United States)

    Lin, Psang Dain

    2018-02-01

    A method is proposed for simplifying the computation of the ray Hessian matrix in geometrical optics by replacing the angular variables in the system variable vector with their equivalent cosine and sine functions. The variable vector of a boundary surface is similarly defined in such a way as to exclude any angular variables. It is shown that the proposed formulations reduce the computation time of the Hessian matrix by around 10 times compared to the previous method reported by the current group in Advanced Geometrical Optics (2016). Notably, the method proposed in this study involves only polynomial differentiation, i.e., trigonometric function calls are not required. As a consequence, the computation complexity is significantly reduced. Five illustrative examples are given. The first three examples show that the proposed method is applicable to the determination of the Hessian matrix for any pose matrix, irrespective of the order in which the rotation and translation motions are specified. The last two examples demonstrate the use of the proposed Hessian matrix in determining the axial and lateral chromatic aberrations of a typical optical system.

  17. Towards reducing impact-induced brain injury: lessons from a computational study of army and football helmet pads.

    Science.gov (United States)

    Moss, William C; King, Michael J; Blackman, Eric G

    2014-01-01

    We use computational simulations to compare the impact response of different football and U.S. Army helmet pad materials. We conduct experiments to characterise the material response of different helmet pads. We simulate experimental helmet impact tests performed by the U.S. Army to validate our methods. We then simulate a cylindrical impactor striking different pads. The acceleration history of the impactor is used to calculate the head injury criterion for each pad. We conduct sensitivity studies exploring the effects of pad composition, geometry and material stiffness. We find that (1) the football pad materials do not outperform the currently used military pad material in militarily relevant impact scenarios; (2) optimal material properties for a pad depend on impact energy and (3) thicker pads perform better at all velocities. Although we considered only the isolated response of pad materials, not entire helmet systems, our analysis suggests that by using larger helmet shells with correspondingly thicker pads, impact-induced traumatic brain injury may be reduced.

  18. Reducing dysfunctional beliefs about sleep does not significantly improve insomnia in cognitive behavioral therapy.

    Science.gov (United States)

    Okajima, Isa; Nakajima, Shun; Ochi, Moeko; Inoue, Yuichi

    2014-01-01

    The present study examined whether improvement of insomnia is mediated by a reduction in sleep-related dysfunctional beliefs through cognitive behavioral therapy for insomnia. In total, 64 patients with chronic insomnia received cognitive behavioral therapy for insomnia consisting of 6 biweekly individual treatment sessions of 50 minutes in length. Participants were asked to complete the Athens Insomnia Scale and the Dysfunctional Beliefs and Attitudes about Sleep scale both at baseline and at the end of treatment. The results showed that although cognitive behavioral therapy for insomnia greatly reduced individuals' scores on both scales, the decrease in dysfunctional beliefs and attitudes about sleep with treatment did not seem to mediate the improvement in insomnia. The findings suggest that sleep-related dysfunctional beliefs endorsed by patients with chronic insomnia may be attenuated by cognitive behavioral therapy for insomnia, but changes in such beliefs are not likely to play a crucial role in reducing the severity of insomnia.

  19. Accommodation and convergence during sustained computer work.

    Science.gov (United States)

    Collier, Juanita D; Rosenfield, Mark

    2011-07-01

    With computer usage becoming almost universal in contemporary society, the reported prevalence of computer vision syndrome (CVS) is extremely high. However, the precise physiological mechanisms underlying CVS remain unclear. Although abnormal accommodation and vergence responses have been cited as being responsible for the symptoms produced, there is little objective evidence to support this claim. Accordingly, this study measured both of these oculomotor parameters during a sustained period of computer use. Subjects (N = 20) were required to read text aloud from a laptop computer at a viewing distance of 50 cm for a sustained 30-minute period through their habitual refractive correction. At 2-minute intervals, the accommodative response (AR) to the computer screen was measured objectively using a Grand Seiko WAM 5500 optometer (Grand Seiko, Hiroshima, Japan). Additionally, the vergence response was assessed by measuring the associated phoria (AP), i.e., the prism required to eliminate fixation disparity, using a customized fixation disparity target that appeared on the computer screen. Subjects were asked to rate the degree of difficulty of the reading task on a scale from 1 to 10. Mean accommodation and AP values during the task were 1.07 diopters and 0.74∆ base-in (BI), respectively. The mean discomfort score was 4.9. No significant changes in accommodation or vergence were observed during the course of the 30-minute test period. There was no significant difference in the AR as a function of subjective difficulty. However, the mean AP for the subjects who reported the least and greatest discomfort during the task was 1.55∆ BI and 0, respectively (P = 0.02). After 30 minutes, CVS was worse in subjects exhibiting zero fixation disparity than in those having a BI AP, but it does not appear to be related to differences in accommodation. A slightly reduced vergence response increases subject comfort during the task. Copyright © 2011 American Optometric

  20. Secure Data Access Control for Fog Computing Based on Multi-Authority Attribute-Based Signcryption with Computation Outsourcing and Attribute Revocation.

    Science.gov (United States)

    Xu, Qian; Tan, Chengxiang; Fan, Zhijie; Zhu, Wenye; Xiao, Ya; Cheng, Fujia

    2018-05-17

    Nowadays, fog computing provides computation, storage, and application services to end users in the Internet of Things. One of the major concerns in fog computing systems is how fine-grained access control can be imposed. As a logical combination of attribute-based encryption and attribute-based signature, Attribute-based Signcryption (ABSC) can provide confidentiality and anonymous authentication for sensitive data and is more efficient than the traditional "encrypt-then-sign" or "sign-then-encrypt" strategies. Thus, ABSC is suitable for fine-grained access control in a semi-trusted cloud environment and is gaining more and more attention recently. However, in many existing ABSC systems, the computation cost required of the end users in signcryption and designcryption is linear in the complexity of the signing and encryption access policy. Moreover, only a single authority responsible for attribute management and key generation exists in previously proposed ABSC schemes, whereas in reality different authorities typically monitor different attributes of the user. In this paper, we propose OMDAC-ABSC, a novel data access control scheme based on Ciphertext-Policy ABSC, to provide data confidentiality, fine-grained control, and anonymous authentication in a multi-authority fog computing system. The signcryption and designcryption overhead for the user is significantly reduced by outsourcing the undesirable computation operations to fog nodes. The proposed scheme is proven to be secure in the standard model and can provide attribute revocation and public verifiability. The security analysis, asymptotic complexity comparison, and implementation results indicate that our construction can balance the security goals with practical efficiency in computation.

  1. [Effective Techniques to Reduce Radiation Exposure to Medical Staff during Assist of X-ray Computed Tomography Examination].

    Science.gov (United States)

    Miyajima, Ryuichi; Fujibuchi, Toshioh; Miyachi, Yusuke; Tateishi, Satoshi; Uno, Yoshinori; Amakawa, Kazutoshi; Ohura, Hiroki; Orita, Shinichi

    2018-01-01

    Medical staff such as radiological technologists, doctors, and nurses are at an increased risk of exposure to radiation while positioning the patient or monitoring contrast medium injection during computed tomography (CT). However, methods to protect medical staff from radiation exposure and protocols for using radiological protection equipment have not been standardized and differ among hospitals. In this study, the distribution of scattered X-rays in a CT room was measured by placing electronic personal dosimeters at the locations where medical staff stand beside the CT scanner gantry while assisting the patient, and the exposure dose was measured. Moreover, we evaluated non-uniform exposure and identified effective techniques to reduce the exposure dose to medical staff during CT. The dose of scattered X-rays was lowest at the gantry and at the examination table during both head and abdominal CT. The dose was highest at the trunk of the upper body of the operator, corresponding to a height of 130 cm, during head CT and at the head, corresponding to a height of 150 cm, during abdominal CT. The maximum dose to the crystalline lens was approximately 600 μSv during head CT. We found that the use of volumetric CT scanning and X-ray protective goggles, and facing the gantry, reduced the exposure dose, particularly to the crystalline lens, for which a lower equivalent dose during CT scans has recently been recommended in International Commission on Radiological Protection Publication 118.

  2. Hybrid reduced order modeling for assembly calculations

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Y.; Abdel-Khalik, H. S. [North Carolina State University, Raleigh, NC (United States); Jessee, M. A.; Mertyurek, U. [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2013-07-01

    While the accuracy of assembly calculations has considerably improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated executions on small computing environment, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of the reduced order modeling for a single physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclides transmutation/depletion models representing the components of the coupled code system. (authors)

  3. A Robust Computational Technique for Model Order Reduction of Two-Time-Scale Discrete Systems via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Othman M. K. Alsmadi

    2015-01-01

    Full Text Available A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have a significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.
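
    A minimal sketch of the idea, using a simple genetic algorithm to fit a reduced-order discrete model so that its step response matches a full-order system, is given below. The full-order coefficients, GA settings and the plain parameter encoding are illustrative assumptions and much simpler than the upper-triangular transformation used in the paper.

        import numpy as np

        rng = np.random.default_rng(3)

        # Full-order discrete system (illustrative), described by difference-equation coefficients.
        a_full = np.array([1.0, -1.5, 0.7, -0.1])       # denominator (output recursion)
        b_full = np.array([0.0, 0.5, 0.2, 0.05])        # numerator (input terms)

        def step_response(b, a, n=60):
            y, u = np.zeros(n), np.ones(n)
            for k in range(n):
                acc = sum(b[i] * u[k - i] for i in range(len(b)) if k - i >= 0)
                acc -= sum(a[i] * y[k - i] for i in range(1, len(a)) if k - i >= 0)
                y[k] = acc / a[0]
            return y

        target = step_response(b_full, a_full)

        def fitness(params):
            b1, b0, a1, a0 = params                     # second-order reduced model
            y = step_response(np.array([0.0, b1, b0]), np.array([1.0, a1, a0]))
            if not np.all(np.isfinite(y)):              # penalize unstable candidates
                return -np.inf
            return -np.sum((y - target) ** 2)           # maximize = minimize response deviation

        # Minimal GA: keep the fitter half, produce children by blending and mutation.
        population = rng.uniform(-1.0, 1.0, size=(40, 4))
        for generation in range(200):
            scores = np.array([fitness(ind) for ind in population])
            parents = population[np.argsort(scores)[-20:]]
            children = [0.5 * (parents[rng.integers(20)] + parents[rng.integers(20)])
                        + rng.normal(0.0, 0.05, size=4) for _ in range(20)]
            population = np.vstack([parents, np.array(children)])

        best = population[np.argmax([fitness(ind) for ind in population])]
        print("reduced-order parameters [b1, b0, a1, a0]:", np.round(best, 3))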

  4. Encoded diffractive optics for full-spectrum computational imaging

    KAUST Repository

    Heide, Felix; Fu, Qiang; Peng, Yifan; Heidrich, Wolfgang

    2016-01-01

    Diffractive optical elements can be realized as ultra-thin plates that offer significantly reduced footprint and weight compared to refractive elements. However, such elements introduce severe chromatic aberrations and are not variable, unless used in combination with other elements in a larger, reconfigurable optical system. We introduce numerically optimized encoded phase masks in which different optical parameters such as focus or zoom can be accessed through changes in the mechanical alignment of an ultra-thin stack of two or more masks. Our encoded diffractive designs are combined with a new computational approach for self-calibrating imaging (blind deconvolution) that can restore high-quality images several orders of magnitude faster than the state of the art without pre-calibration of the optical system. This co-design of optics and computation enables tunable, full-spectrum imaging using thin diffractive optics.

  5. Encoded diffractive optics for full-spectrum computational imaging

    KAUST Repository

    Heide, Felix

    2016-09-16

    Diffractive optical elements can be realized as ultra-thin plates that offer significantly reduced footprint and weight compared to refractive elements. However, such elements introduce severe chromatic aberrations and are not variable, unless used in combination with other elements in a larger, reconfigurable optical system. We introduce numerically optimized encoded phase masks in which different optical parameters such as focus or zoom can be accessed through changes in the mechanical alignment of an ultra-thin stack of two or more masks. Our encoded diffractive designs are combined with a new computational approach for self-calibrating imaging (blind deconvolution) that can restore high-quality images several orders of magnitude faster than the state of the art without pre-calibration of the optical system. This co-design of optics and computation enables tunable, full-spectrum imaging using thin diffractive optics.

  6. Many Masses on One Stroke:. Economic Computation of Quark Propagators

    Science.gov (United States)

    Frommer, Andreas; Nöckel, Bertold; Güsken, Stephan; Lippert, Thomas; Schilling, Klaus

    The computational effort in the calculation of Wilson fermion quark propagators in Lattice Quantum Chromodynamics can be considerably reduced by exploiting the Wilson fermion matrix structure in inversion algorithms based on the non-symmetric Lanczos process. We consider two such methods: QMR (quasi minimal residual) and BCG (biconjugate gradients). Based on the decomposition M/κ = 1/κ-D of the Wilson mass matrix, using QMR, one can carry out inversions on a whole trajectory of masses simultaneously, merely at the computational expense of a single propagator computation. In other words, one has to compute the propagator corresponding to the lightest mass only, while all the heavier masses are given for free, at the price of extra storage. Moreover, the symmetry γ5M = M†γ5 can be used to cut the computational effort in QMR and BCG by a factor of two. We show that both methods then become — in the critical regime of small quark masses — competitive to BiCGStab and significantly better than the standard MR method, with optimal relaxation factor, and CG as applied to the normal equations.

  7. Quantum computing with trapped ions

    Energy Technology Data Exchange (ETDEWEB)

    Hughes, R.J.

    1998-01-01

    The significance of quantum computation for cryptography is discussed. Following a brief survey of the requirements for quantum computational hardware, an overview of the ion trap quantum computation project at Los Alamos is presented. The physical limitations to quantum computation with trapped ions are analyzed and an assessment of the computational potential of the technology is made.

  8. Energy efficient distributed computing systems

    CERN Document Server

    Lee, Young-Choon

    2012-01-01

    The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005.  From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems.  These contradicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems.  This book brings together a group of outsta

  9. Significance of computed tomography in the diagnosis of the mediastinal mass lesions

    Energy Technology Data Exchange (ETDEWEB)

    Kimura, Masanori; Takashima, Tsutomu; Suzuki, Masayuki; Itoh, Hiroshi; Hirose, Jinichiro; Choto, Shuichi (Kanazawa Univ. (Japan). School of Medicine)

    1983-08-01

    Thirty cases of mediastinal mass lesions were examined by computed tomography, and the diagnostic ability of CT was retrospectively evaluated. We divided them into two major groups: cystic and solid lesions. Cysts and cystic teratomas were differentiated on the thickness of their walls. Pericardial cysts were typically present at the cardiophrenic angle. In the solid mediastinal lesions, the presence of calcific and/or fatty components, the presence of necrosis, the irregularity of the margin and the obliteration of the surrounding fat layer were the clues to differential diagnosis and to the evaluation of their invasiveness. Although differential diagnosis of the solid anterior mediastinal tumors was often difficult, teratomas with calcific and fatty components were easily diagnosed. Invasiveness of malignant thymomas and other malignant lesions was successfully evaluated to some extent. Neurogenic posterior mediastinal tumors were easily diagnosed because of the presence of spine deformity and a typical dumbbell-shaped appearance. We stress that our diagnostic approach is useful for differentiating mediastinal mass lesions.

  10. Significance of computed tomography in the diagnosis of the mediastinal mass lesions

    International Nuclear Information System (INIS)

    Kimura, Masanori; Takashima, Tsutomu; Suzuki, Masayuki; Itoh, Hiroshi; Hirose, Jinichiro; Choto, Shuichi

    1983-01-01

    Thirty cases of mediastinal mass lesions were examined by computed tomography, and the diagnostic ability of CT was retrospectively evaluated. We divided them into two major groups: cystic and solid lesions. Cysts and cystic teratomas were differentiated on the thickness of their walls. Pericardial cysts were typically present at the cardiophrenic angle. In the solid mediastinal lesions, the presence of calcific and/or fatty components, the presence of necrosis, the irregularity of the margin and the obliteration of the surrounding fat layer were the clues to differential diagnosis and to the evaluation of their invasiveness. Although differential diagnosis of the solid anterior mediastinal tumors was often difficult, teratomas with calcific and fatty components were easily diagnosed. Invasiveness of malignant thymomas and other malignant lesions was successfully evaluated to some extent. Neurogenic posterior mediastinal tumors were easily diagnosed because of the presence of spine deformity and a typical dumbbell-shaped appearance. We stress that our diagnostic approach is useful for differentiating mediastinal mass lesions. (author)

  11. Calculation of normalised organ and effective doses to adult reference computational phantoms from contemporary computed tomography scanners

    International Nuclear Information System (INIS)

    Jansen, Jan T.M.; Shrimpton, Paul C.

    2010-01-01

    The general-purpose Monte Carlo radiation transport code MCNPX has been used to simulate photon transport and energy deposition in anthropomorphic phantoms due to the x-ray exposure from the Philips iCT 256 and Siemens Definition CT scanners, together with the previously studied General Electric 9800. The MCNPX code was compiled with the Intel FORTRAN compiler and run on a Linux PC cluster. A patch has been successfully applied to reduce computing times by about 4%. The International Commission on Radiological Protection (ICRP) has recently published the Adult Male (AM) and Adult Female (AF) reference computational voxel phantoms as successors to the Medical Internal Radiation Dose (MIRD) stylised hermaphrodite mathematical phantoms that form the basis for the widely-used ImPACT CT dosimetry tool. Comparisons of normalised organ and effective doses calculated for a range of scanner operating conditions have demonstrated significant differences in results (in excess of 30%) between the voxel and mathematical phantoms as a result of variations in anatomy. These analyses illustrate the significant influence of choice of phantom on normalised organ doses and the need for standardisation to facilitate comparisons of dose. Further such dose simulations are needed in order to update the ImPACT CT Patient Dosimetry spreadsheet for contemporary CT practice. (author)

  12. Constructing optimized binary masks for reservoir computing with delay systems

    Science.gov (United States)

    Appeltant, Lennert; van der Sande, Guy; Danckaert, Jan; Fischer, Ingo

    2014-01-01

    Reservoir computing is a novel bio-inspired computing method, capable of solving complex tasks in a computationally efficient way. It has recently been successfully implemented using delayed feedback systems, allowing the hardware complexity of brain-inspired computers to be reduced drastically. In this approach, the pre-processing procedure relies on the definition of a temporal mask which serves as a scaled time-multiplexing of the input. Originally, random masks had been chosen, motivated by the random connectivity in reservoirs. This random generation can sometimes fail. Moreover, for hardware implementations random generation is not ideal due to its complexity and the requirement for trial and error. We outline a procedure to reliably construct an optimal mask pattern in terms of multipurpose performance, derived from the concept of maximum length sequences. Not only does this ensure the creation of the shortest possible mask that leads to maximum variability in the reservoir states for the given reservoir, it also allows for an interpretation of the statistical significance of the provided training samples for the task at hand.
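
    The maximum length sequence construction can be illustrated with a small linear-feedback shift register: with feedback taps taken from a primitive polynomial, an n-bit register cycles through all 2^n - 1 nonzero states, and its output bits give a binary mask of maximal variability. The tap set below and the mapping of bits to ±1 mask values are just one illustrative choice, not the masks used in the paper.

        def m_sequence(n_bits, taps, seed=1):
            """One period (2**n_bits - 1 bits) of a maximum length sequence from a
            Fibonacci-style linear-feedback shift register with the given feedback taps."""
            state = seed & ((1 << n_bits) - 1)
            assert state != 0, "the all-zero state is a fixed point and must be avoided"
            out = []
            for _ in range((1 << n_bits) - 1):
                out.append(state & 1)                            # output the lowest bit
                feedback = 0
                for t in taps:                                   # XOR the tapped bits
                    feedback ^= (state >> (n_bits - t)) & 1
                state = (state >> 1) | (feedback << (n_bits - 1))
            return out

        # Taps (5, 3) correspond to the primitive polynomial x^5 + x^3 + 1, giving period 31.
        bits = m_sequence(5, taps=(5, 3))
        mask = [2 * b - 1 for b in bits]                         # map {0, 1} -> {-1, +1} mask values
        print(len(mask), mask[:10])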

  13. Temporal fringe pattern analysis with parallel computing

    International Nuclear Information System (INIS)

    Tuck Wah Ng; Kar Tien Ang; Argentini, Gianluca

    2005-01-01

    Temporal fringe pattern analysis is invaluable in transient phenomena studies but necessitates long processing times. Here we describe a parallel computing strategy based on the single-program multiple-data model and hyperthreading processor technology to reduce the execution time. In a two-node cluster workstation configuration we found that execution periods were reduced by 1.6 times when four virtual processors were used. To allow even lower execution times with an increasing number of processors, the time allocated for data transfer, data read, and waiting should be minimized. Parallel computing is found here to present a feasible approach to reduce execution times in temporal fringe pattern analysis
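
    A minimal sketch of the single-program multiple-data idea is given below: the pixel columns of a temporal stack of fringe images are split into chunks and each worker process runs the same per-pixel temporal analysis on its chunk. The synthetic data and the simple FFT-based estimate of the dominant temporal frequency are illustrative placeholders for the actual fringe analysis.

        import numpy as np
        from multiprocessing import Pool

        def analyse_chunk(block):
            # Same program on every chunk: estimate the dominant temporal frequency
            # of the intensity signal at each pixel (frames along axis 0).
            spectrum = np.abs(np.fft.rfft(block, axis=0))
            spectrum[0] = 0.0                       # ignore the DC component
            return np.argmax(spectrum, axis=0)

        if __name__ == "__main__":
            n_frames, height, width = 128, 240, 320
            t = np.arange(n_frames)[:, None, None]
            phase = 2.0 * np.pi * np.random.rand(1, height, width)
            frames = 1.0 + np.cos(2.0 * np.pi * 0.05 * t + phase)   # synthetic temporal fringes

            # Split the image columns across workers (multiple data, one program).
            n_workers = 4
            chunks = [frames[:, :, i::n_workers] for i in range(n_workers)]

            with Pool(processes=n_workers) as pool:
                results = pool.map(analyse_chunk, chunks)

            print("dominant frequency bin at one pixel:", results[0][0, 0])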

  14. Advances and challenges in computational plasma science

    International Nuclear Information System (INIS)

    Tang, W M; Chan, V S

    2005-01-01

    Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. This

  15. Enhancing Trusted Cloud Computing Platform for Infrastructure as a Service

    Directory of Open Access Journals (Sweden)

    KIM, H.

    2017-02-01

    Full Text Available The characteristics of cloud computing, including on-demand self-service, resource pooling, and rapid elasticity, have made it grow in popularity. However, security concerns still obstruct widespread adoption of cloud computing in industry. In particular, security risks related to virtual machines make cloud users worry about exposure of their private data in IaaS environments. In this paper, we propose an enhanced trusted cloud computing platform to provide confidentiality and integrity of the user's data and computation. The presented platform provides secure and efficient virtual machine management protocols, not only to protect against eavesdropping and tampering during transfer but also to guarantee that the virtual machine is hosted only on trusted cloud nodes, against inside attackers. The protocols utilize both symmetric key operations and public key operations, together with an efficient node authentication model, hence both the computational cost of cryptographic operations and the number of communication steps are significantly reduced. As a result, simulation shows that the performance of the proposed platform is approximately doubled compared to previous platforms. The proposed platform eliminates the above concerns of cloud users by providing confidentiality and integrity of their private data with better performance, and thus it contributes to wider industry adoption of cloud computing.
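
    The performance gain claimed above rests on the familiar hybrid pattern: bulk data such as a virtual machine image is protected with a fast symmetric AEAD cipher, and only the short symmetric key is handled with the slow public-key operation. The Python sketch below (using the cryptography package) illustrates that general pattern only; it is not the protocol proposed in the paper, and the key handling is deliberately simplified.

        import os
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa, padding
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        # Key pair of the trusted destination node (generated here only for the example).
        node_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        node_public = node_private.public_key()

        vm_image = b"...virtual machine image bytes..."          # placeholder payload

        # 1) Bulk protection with a symmetric AEAD cipher (confidentiality + integrity).
        sym_key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        ciphertext = AESGCM(sym_key).encrypt(nonce, vm_image, None)

        # 2) Only the short symmetric key goes through the expensive public-key operation.
        oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)
        wrapped_key = node_public.encrypt(sym_key, oaep)

        # Receiving trusted node: unwrap the key, then decrypt and verify the bulk data.
        recovered = AESGCM(node_private.decrypt(wrapped_key, oaep)).decrypt(nonce, ciphertext, None)
        assert recovered == vm_image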

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  17. Significant change of local atomic configurations at surface of reduced activation Eurofer steels induced by hydrogenation treatments

    Energy Technology Data Exchange (ETDEWEB)

    Greculeasa, S.G.; Palade, P.; Schinteie, G. [National Institute for Materials Physics, P.O. Box MG-7, 77125, Bucharest-Magurele (Romania); Kuncser, A.; Stanciu, A. [National Institute for Materials Physics, P.O. Box MG-7, 77125, Bucharest-Magurele (Romania); University of Bucharest, Faculty of Physics, 77125, Bucharest-Magurele (Romania); Lungu, G.A. [National Institute for Materials Physics, P.O. Box MG-7, 77125, Bucharest-Magurele (Romania); Porosnicu, C.; Lungu, C.P. [National Institute for Laser, Plasma and Radiation Physics, 77125, Bucharest-Magurele (Romania); Kuncser, V., E-mail: kuncser@infim.ro [National Institute for Materials Physics, P.O. Box MG-7, 77125, Bucharest-Magurele (Romania)

    2017-04-30

    Highlights: • Engineering of Eurofer slab properties by hydrogenation treatments. • Hydrogenation significantly modifies the local atomic configurations at the surface. • Hydrogenation increases the expulsion of the Cr atoms toward the very surface. • Hydrogenation brings the atomic distribution in the outermost 100 nm closer to binomial. - Abstract: Reduced-activation steels such as Eurofer alloys are candidates for supporting plasma-facing components in tokamak-like nuclear fusion reactors. In order to investigate the impact of hydrogen/deuterium insertion into their crystalline lattice, annealing treatments in a hydrogen atmosphere have been applied to Eurofer slabs. The resulting samples have been analyzed with respect to local structure and atomic configuration both before and after successive annealing treatments, by X-ray diffractometry (XRD), scanning electron microscopy and energy dispersive spectroscopy (SEM-EDS), X-ray photoelectron spectroscopy (XPS) and conversion electron Mössbauer spectroscopy (CEMS). The corroborated data point to a bcc-type structure of the non-hydrogenated alloy, with an average alloy composition approaching Fe0.9Cr0.1 over a depth of about 100 nm. EDS elemental maps do not indicate surface inhomogeneities in concentration, whereas the Mössbauer spectra reveal significant deviations from homogeneous alloying. The hydrogenation increases the expulsion of the Cr atoms toward the surface layer and decreases their oxidation, with considerable influence on the surface properties of the steel. The hydrogenation treatment is therefore proposed as a potential alternative for convenient engineering of the surface of different Fe-Cr based alloys.

  18. Computational biology in the cloud: methods and new insights from computing at scale.

    Science.gov (United States)

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, and experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and provide easy reproducibility by making the datasets and computational methods easily available.

  19. Soil nitrate reducing processes – drivers, mechanisms for spatial variation, and significance for nitrous oxide production

    Science.gov (United States)

    Giles, Madeline; Morley, Nicholas; Baggs, Elizabeth M.; Daniell, Tim J.

    2012-01-01

    The microbial processes of denitrification and dissimilatory nitrate reduction to ammonium (DNRA) are two important nitrate reducing mechanisms in soil, which are responsible for the loss of nitrate (NO3−) and production of the potent greenhouse gas, nitrous oxide (N2O). A number of factors are known to control these processes, including O2 concentrations and moisture content, N, C, pH, and the size and community structure of nitrate reducing organisms responsible for the processes. Understanding of many of these controls on flux through the nitrogen cycle in soil systems is increasing. However, there remains uncertainty about how the nitrate reducing communities are linked to environmental variables and the flux of products from these processes. The high spatial variability of environmental controls and microbial communities across small sub-centimeter areas of soil may prove to be critical in determining why an understanding of the links between biotic and abiotic controls has proved elusive. This spatial effect is often overlooked as a driver of nitrate reducing processes. An increased knowledge of the effects of spatial heterogeneity in soil on nitrate reduction processes will be fundamental in understanding the drivers, location, and potential for N2O production from soils. PMID:23264770

  20. Soil nitrate reducing processes – drivers, mechanisms for spatial variation and significance for nitrous oxide production

    Directory of Open Access Journals (Sweden)

    Madeline Eleanore Giles

    2012-12-01

    Full Text Available The microbial processes of denitrification and dissimilatory nitrate reduction to ammonium (DNRA) are two important nitrate reducing mechanisms in soil, which are responsible for the loss of nitrate (NO3−) and production of the potent greenhouse gas, nitrous oxide (N2O). A number of factors are known to control these processes, including O2 concentrations and moisture content, N, C, pH and the size and community structure of nitrate reducing organisms responsible for the processes. Understanding of many of these controls on flux through the nitrogen cycle in soil systems is increasing. However, there remains uncertainty about how the nitrate reducing communities are linked to environmental variables and the flux of products from these processes. The high spatial variability of environmental controls and microbial communities across small sub-centimeter areas of soil may prove to be critical in determining why an understanding of the links between biotic and abiotic controls has proved elusive. This spatial effect is often overlooked as a driver of nitrate reducing processes. An increased knowledge of the effects of spatial heterogeneity in soil on nitrate reduction processes will be fundamental in understanding the drivers, location and potential for N2O production from soils.

  1. Optical trapping of nanoparticles with significantly reduced laser powers by using counter-propagating beams (Presentation Recording)

    Science.gov (United States)

    Zhao, Chenglong; LeBrun, Thomas W.

    2015-08-01

    Gold nanoparticles (GNPs) have wide applications ranging from nanoscale heating to cancer therapy and biological sensing. Optical trapping of GNPs as small as 18 nm has been successfully achieved with laser power as high as 855 mW, but such high powers can damage trapped particles (particularly biological systems) as well as heat the fluid, thereby destabilizing the trap. In this article, we show that counter-propagating beams (CPB) can successfully trap GNPs with laser powers reduced by a factor of 50 compared to that with a single beam. The trapping position of a GNP inside a counter-propagating trap can be easily modulated by changing either the relative power or the position of the two beams. Furthermore, we find that under our conditions, while a single beam most stably traps a single particle, the counter-propagating beams can more easily trap multiple particles. This CPB trap is compatible with the feedback control system we recently demonstrated to increase the trapping lifetimes of nanoparticles by more than an order of magnitude. Thus, we believe that the future development of advanced trapping techniques combining counter-propagating traps together with control systems should significantly extend the capabilities of optical manipulation of nanoparticles for prototyping and testing 3D nanodevices and bio-sensing.

  2. Computer-aided assessment in statistics: the CAMPUS project

    Directory of Open Access Journals (Sweden)

    Neville Hunt

    1998-12-01

    Full Text Available The relentless drive for 'efficiency' in higher education, and the consequent increase in workloads, has given university teachers a compelling incentive to investigate alternative forms of assessment. Some forms of assessment with a clear educational value can no longer be entertained because of the burden placed on the teacher. An added concern is plagiarism, which anecdotal evidence would suggest is on the increase yet which is difficult to detect in large modules with more than one assessor. While computer-aided assessment (CAA) has an enthusiastic following, it is not clear to many teachers that it either reduces workloads or reduces the risk of cheating. In an ideal world, most teachers would prefer to give individual attention and personal feedback to each student when marking their work. In this sense CAA must be seen as second best and will therefore be used only if it is seen to offer significant benefits in terms of reduced workloads or increased validity.

  3. Gender Digital Divide and Challenges in Undergraduate Computer Science Programs

    Science.gov (United States)

    Stoilescu, Dorian; McDougall, Douglas

    2011-01-01

    Previous research revealed a reduced number of female students registered in computer science studies. In addition, the female students feel isolated, have reduced confidence, and underperform. This article explores differences between female and male students in undergraduate computer science programs in a mid-size university in Ontario. Based on…

  4. Legal issues of computer imaging in plastic surgery: a primer.

    Science.gov (United States)

    Chávez, A E; Dagum, P; Koch, R J; Newman, J P

    1997-11-01

    Although plastic surgeons are increasingly incorporating computer imaging techniques into their practices, many fear the possibility of legally binding themselves to achieve surgical results identical to those reflected in computer images. Computer imaging allows surgeons to manipulate digital photographs of patients to project possible surgical outcomes. Among the many benefits imaging techniques offer are improved doctor-patient communication, easier education and training of residents, and reduced administrative and storage costs. Despite the many advantages computer imaging systems offer, however, surgeons understandably worry that imaging systems expose them to immense legal liability. The possible exploitation of computer imaging by novice surgeons as a marketing tool, coupled with the lack of consensus regarding the treatment of computer images, adds to the concern of surgeons. A careful analysis of the law, however, reveals that surgeons who use computer imaging carefully and conservatively, and adopt a few simple precautions, substantially reduce their vulnerability to legal claims. In particular, surgeons face possible claims of implied contract, failure to instruct, and malpractice from their use or failure to use computer imaging. Nevertheless, legal and practical obstacles frustrate each of those causes of action. Moreover, surgeons who incorporate a few simple safeguards into their practice may further reduce their legal susceptibility.

  5. Minimal ancilla mediated quantum computation

    International Nuclear Information System (INIS)

    Proctor, Timothy J.; Kendon, Viv

    2014-01-01

    Schemes of universal quantum computation in which the interactions between the computational elements, in a computational register, are mediated by some ancillary system are of interest due to their relevance to the physical implementation of a quantum computer. Furthermore, reducing the level of control required over both the ancillary and register systems has the potential to simplify any experimental implementation. In this paper we consider how to minimise the control needed to implement universal quantum computation in an ancilla-mediated fashion. Considering computational schemes which require no measurements and hence evolve by unitary dynamics for the global system, we show that when employing an ancilla qubit there are certain fixed-time ancilla-register interactions which, along with ancilla initialisation in the computational basis, are universal for quantum computation with no additional control of either the ancilla or the register. We develop two distinct models based on locally inequivalent interactions and we then discuss the relationship between these unitary models and the measurement-based ancilla-mediated models known as ancilla-driven quantum computation. (orig.)

  6. Brief: Managing computing technology

    International Nuclear Information System (INIS)

    Startzman, R.A.

    1994-01-01

    While computing is applied widely in the production segment of the petroleum industry, its effective application is the primary goal of computing management. Computing technology has changed significantly since the 1950's, when computers first began to influence petroleum technology. The ability to accomplish traditional tasks faster and more economically probably is the most important effect that computing has had on the industry. While speed and lower cost are important, are they enough? Can computing change the basic functions of the industry? When new computing technology is introduced improperly, it can clash with traditional petroleum technology. This paper examines the role of management in merging these technologies

  7. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem being addressed, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large amount of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much shorter time (microseconds instead of hours/days).
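
    The surrogate-modeling idea described above can be illustrated with a generic sketch: fit an inexpensive regression model to a handful of expensive simulation runs, then query the surrogate instead of the physics code. This is not the RISMC implementation; it assumes scikit-learn is available and uses a stand-in expensive_simulation function as a hypothetical placeholder for the actual simulation code.

      # Minimal surrogate-model sketch (not the RISMC tools): train a Gaussian
      # process on a few expensive runs, then evaluate the cheap surrogate.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def expensive_simulation(x):
          # Placeholder for a long-running physics code (hypothetical).
          return np.sin(3.0 * x) + 0.5 * x

      # A small design of experiments: only a few expensive runs.
      x_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
      y_train = expensive_simulation(x_train).ravel()

      surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                           normalize_y=True)
      surrogate.fit(x_train, y_train)

      # Thousands of surrogate evaluations cost microseconds each, so they can
      # replace brute-force simulation runs inside a risk-analysis loop.
      x_query = np.linspace(0.0, 2.0, 1000).reshape(-1, 1)
      y_pred, y_std = surrogate.predict(x_query, return_std=True)
      print(y_pred[:3], y_std[:3])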

  8. Sparsity enabled cluster reduced-order models for control

    Science.gov (United States)

    Kaiser, Eurika; Morzyński, Marek; Daviller, Guillaume; Kutz, J. Nathan; Brunton, Bingni W.; Brunton, Steven L.

    2018-01-01

    Characterizing and controlling nonlinear, multi-scale phenomena are central goals in science and engineering. Cluster-based reduced-order modeling (CROM) was introduced to exploit the underlying low-dimensional dynamics of complex systems. CROM builds a data-driven discretization of the Perron-Frobenius operator, resulting in a probabilistic model for ensembles of trajectories. A key advantage of CROM is that it embeds nonlinear dynamics in a linear framework, which enables the application of standard linear techniques to the nonlinear system. CROM is typically computed on high-dimensional data; however, access to and computations on this full-state data limit the online implementation of CROM for prediction and control. Here, we address this key challenge by identifying a small subset of critical measurements to learn an efficient CROM, referred to as sparsity-enabled CROM. In particular, we leverage compressive measurements to faithfully embed the cluster geometry and preserve the probabilistic dynamics. Further, we show how to identify fewer optimized sensor locations tailored to a specific problem that outperform random measurements. Both of these sparsity-enabled sensing strategies significantly reduce the burden of data acquisition and processing for low-latency in-time estimation and control. We illustrate this unsupervised learning approach on three different high-dimensional nonlinear dynamical systems from fluids with increasing complexity, with one application in flow control. Sparsity-enabled CROM is a critical facilitator for real-time implementation on high-dimensional systems where full-state information may be inaccessible.
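
    The core CROM construction, as summarized above, is to cluster the snapshots and then estimate transition probabilities between clusters. The sketch below is an illustrative outline rather than the authors' code; the synthetic snapshot data and cluster count are assumptions made for the example.

      # Sketch of cluster-based reduced-order modeling (CROM): k-means clusters
      # of flow snapshots plus a cluster-to-cluster transition probability matrix.
      import numpy as np
      from sklearn.cluster import KMeans

      def build_crom(snapshots, n_clusters=10, seed=0):
          """snapshots: array (n_time, n_state) of time-ordered state vectors."""
          km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
          labels = km.fit_predict(snapshots)

          # Count transitions between consecutive snapshots, then row-normalize
          # to obtain a Markov (Perron-Frobenius-like) transition matrix.
          counts = np.zeros((n_clusters, n_clusters))
          for a, b in zip(labels[:-1], labels[1:]):
              counts[a, b] += 1.0
          row_sums = counts.sum(axis=1, keepdims=True)
          transition = np.divide(counts, row_sums, out=np.zeros_like(counts),
                                 where=row_sums > 0)
          return km.cluster_centers_, transition

      # Example with synthetic data standing in for high-dimensional snapshots.
      rng = np.random.default_rng(1)
      demo_snapshots = rng.normal(size=(500, 64))
      centroids, P = build_crom(demo_snapshots, n_clusters=5)
      print(P.round(2))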

  9. EFFECT OF AEROBIC EXERCISES ON CARDIOPULMONARY FITNESS IN COMPUTER USING SEDENTARY INDIVIDUALS

    Directory of Open Access Journals (Sweden)

    P. Chinna Reddemma

    2015-04-01

    Full Text Available Background: A sedentary lifestyle is a lifestyle with no or irregular physical activity and is a leading cause of reduced cardio-respiratory fitness and physical activity. Most employees who work at computers for more than five hours a day show altered physical activity and fitness. Objective: To study the effect of an aerobic exercise programme on cardio-respiratory fitness in computer-using sedentary individuals. Methods: Thirty subjects were selected according to the inclusion criteria, and all selected subjects followed an aerobic protocol for 12 weeks. Pre- and post-therapeutic outcome measures were assessed: Body Mass Index was calculated as weight (kg)/height (m²), waist-hip ratio was measured with an inch tape in centimeters, and VO2 peak was recorded. Results: A significant difference was found between the pre- and post-test values of Body Mass Index, waist-hip ratio and VO2 peak in both the experimental and control groups (p<0.05), with comparatively greater changes in the experimental group than in the control group (p<0.05). Conclusion: Aerobic exercise had a significant influence on computer-using sedentary individuals, with significant changes in physical factors such as body mass index and waist-hip ratio and in functional factors such as VO2 peak. Hence, it is concluded that 12 weeks of aerobic exercise is effective in improving cardiovascular fitness in computer-using sedentary individuals.

  10. Impaired bone formation in ovariectomized mice reduces implant integration as indicated by longitudinal in vivo micro-computed tomography.

    Science.gov (United States)

    Li, Zihui; Kuhn, Gisela; Schirmer, Michael; Müller, Ralph; Ruffoni, Davide

    2017-01-01

    Although osteoporotic bone, with low bone mass and deteriorated bone architecture, provides a less favorable mechanical environment than healthy bone for implant fixation, there is no general agreement on the impact of osteoporosis on peri-implant bone (re)modeling, which is ultimately responsible for the long term stability of the bone-implant system. Here, we inserted an implant in a mouse model mimicking estrogen deficiency-induced bone loss and we monitored with longitudinal in vivo micro-computed tomography the spatio-temporal changes in bone (re)modeling and architecture, considering the separate contributions of trabecular, endocortical and periosteal surfaces. Specifically, 12 week-old C57BL/6J mice underwent OVX/SHM surgery; 9 weeks later, we inserted special metal-ceramic implants into the 6th caudal vertebra and we measured bone response with in vivo micro-CT weekly for the following 6 weeks. Our results indicated that ovariectomized mice showed a reduced ability to increase the thickness of the cortical shell close to the implant because of impaired peri-implant bone formation, especially at the periosteal surface. Moreover, we observed that healthy mice had a significantly higher loss of trabecular bone far from the implant than estrogen depleted animals. Such behavior suggests that, in healthy mice, the substantial increase in peri-implant bone formation which rapidly thickened the cortex to secure the implant may raise bone resorption elsewhere and, specifically, in the trabecular network of the same bone but far from the implant. Considering the already deteriorated bone structure of estrogen depleted mice, further bone loss seemed to be hindered. The obtained knowledge on the dynamic response of diseased bone following implant insertion should provide useful guidelines to develop advanced treatments for osteoporotic fracture fixation based on local and selective manipulation of bone turnover in the peri-implant region.

  11. Impaired bone formation in ovariectomized mice reduces implant integration as indicated by longitudinal in vivo micro-computed tomography.

    Directory of Open Access Journals (Sweden)

    Zihui Li

    Full Text Available Although osteoporotic bone, with low bone mass and deteriorated bone architecture, provides a less favorable mechanical environment than healthy bone for implant fixation, there is no general agreement on the impact of osteoporosis on peri-implant bone (re)modeling, which is ultimately responsible for the long term stability of the bone-implant system. Here, we inserted an implant in a mouse model mimicking estrogen deficiency-induced bone loss and we monitored with longitudinal in vivo micro-computed tomography the spatio-temporal changes in bone (re)modeling and architecture, considering the separate contributions of trabecular, endocortical and periosteal surfaces. Specifically, 12 week-old C57BL/6J mice underwent OVX/SHM surgery; 9 weeks later, we inserted special metal-ceramic implants into the 6th caudal vertebra and we measured bone response with in vivo micro-CT weekly for the following 6 weeks. Our results indicated that ovariectomized mice showed a reduced ability to increase the thickness of the cortical shell close to the implant because of impaired peri-implant bone formation, especially at the periosteal surface. Moreover, we observed that healthy mice had a significantly higher loss of trabecular bone far from the implant than estrogen depleted animals. Such behavior suggests that, in healthy mice, the substantial increase in peri-implant bone formation which rapidly thickened the cortex to secure the implant may raise bone resorption elsewhere and, specifically, in the trabecular network of the same bone but far from the implant. Considering the already deteriorated bone structure of estrogen depleted mice, further bone loss seemed to be hindered. The obtained knowledge on the dynamic response of diseased bone following implant insertion should provide useful guidelines to develop advanced treatments for osteoporotic fracture fixation based on local and selective manipulation of bone turnover in the peri-implant region.

  12. Determining casting defects in near-net shape casting aluminum parts by computed tomography

    Science.gov (United States)

    Li, Jiehua; Oberdorfer, Bernd; Habe, Daniel; Schumacher, Peter

    2018-03-01

    Three types of near-net shape casting aluminum parts were investigated by computed tomography to determine casting defects and evaluate quality. The first, second, and third parts were produced by low-pressure die casting (Al-12Si-0.8Cu-0.5Fe-0.9Mg-0.7Ni-0.2Zn alloy), die casting (A356, Al-7Si-0.3Mg), and semi-solid casting (A356, Al-7Si-0.3Mg), respectively. Unlike die casting (second part), low-pressure die casting (first part) significantly reduced the formation of casting defects (i.e., porosity) due to its smooth filling and solidification under pressure. No significant casting defect was observed in the third part, and this absence of defects indicates that semi-solid casting could produce high-quality near-net shape casting aluminum parts. Moreover, casting defects were mostly distributed along the eutectic grain boundaries. This finding reveals that refinement of eutectic grains is necessary to optimize the distribution of casting defects and reduce their size. This investigation demonstrated that computed tomography is an efficient method to determine casting defects in near-net shape casting aluminum parts.

  13. Cluster computing software for GATE simulations

    International Nuclear Information System (INIS)

    Beenhouwer, Jan de; Staelens, Steven; Kruecker, Dirk; Ferrer, Ludovic; D'Asseler, Yves; Lemahieu, Ignace; Rannou, Fernando R.

    2007-01-01

    Geometry and tracking (GEANT4) is a Monte Carlo package designed for high energy physics experiments. It is used as the basis layer for Monte Carlo simulations of nuclear medicine acquisition systems in GEANT4 Application for Tomographic Emission (GATE). GATE allows the user to realistically model experiments using accurate physics models and time synchronization for detector movement through a script language contained in a macro file. The downside of this high accuracy is long computation time. This paper describes a platform independent computing approach for running GATE simulations on a cluster of computers in order to reduce the overall simulation time. Our software automatically creates fully resolved, nonparametrized macros accompanied with an on-the-fly generated cluster specific submit file used to launch the simulations. The scalability of GATE simulations on a cluster is investigated for two imaging modalities, positron emission tomography (PET) and single photon emission computed tomography (SPECT). Due to a higher sensitivity, PET simulations are characterized by relatively high data output rates that create rather large output files. SPECT simulations, on the other hand, have lower data output rates but require a long collimator setup time. Both of these characteristics hamper scalability as a function of the number of CPUs. The scalability of PET simulations is improved here by the development of a fast output merger. The scalability of SPECT simulations is improved by greatly reducing the collimator setup time. Accordingly, these two new developments result in higher scalability for both PET and SPECT simulations and reduce the computation time to more practical values
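
    The cluster approach described above boils down to generating one fully resolved macro per job and merging the partial outputs afterwards. The sketch below is a generic stand-in, not the actual GATE cluster software: the macro template, the placeholder names, the run_simulation command and the submit-file format are all hypothetical.

      # Generic job-splitting sketch (not the GATE cluster tools): write one
      # resolved macro per job from a template, plus a trivial output merger.
      from pathlib import Path

      MACRO_TEMPLATE = """# hypothetical macro template
      # seed = {seed}
      # output = {output}
      """

      def write_jobs(n_jobs, workdir="cluster_run"):
          out = Path(workdir)
          out.mkdir(exist_ok=True)
          for i in range(n_jobs):
              macro = MACRO_TEMPLATE.format(seed=1000 + i, output=f"job_{i}.out")
              (out / f"job_{i}.mac").write_text(macro)
          # A minimal submit file listing one command line per job
          # (the command name is a placeholder, not a real CLI).
          cmds = "\n".join(f"run_simulation job_{i}.mac" for i in range(n_jobs))
          (out / "submit.txt").write_text(cmds + "\n")

      def merge_counts(files):
          # Merge per-job outputs assumed to contain one event count per line.
          total = 0
          for f in files:
              total += sum(int(line) for line in Path(f).read_text().split())
          return total

      write_jobs(4)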

  14. Computer-associated health complaints and sources of ergonomic instructions in computer-related issues among Finnish adolescents: A cross-sectional study

    Science.gov (United States)

    2010-01-01

    Background The use of computers has increased among adolescents, as have musculoskeletal symptoms. There is evidence that these symptoms can be reduced through an ergonomics approach and through education. The purpose of this study was to examine where adolescents had received ergonomic instructions related to computer use, and whether receiving these instructions was associated with a reduced prevalence of computer-associated health complaints. Methods Mailed survey with nationally representative sample of 12 to 18-year-old Finns in 2001 (n = 7292, response rate 70%). In total, 6961 youths reported using a computer. We tested the associations of computer use time and received ergonomic instructions (predictor variables) with computer-associated health complaints (outcome variables) using logistic regression analysis. Results To prevent computer-associated complaints, 61.2% reported having been instructed to arrange their desk/chair/screen in the right position, 71.5% to take rest breaks. The older age group (16-18 years) reported receiving instructions or being self-instructed more often than the 12- to 14-year-olds. Conclusions Ergonomic instructions on how to prevent computer-related musculoskeletal problems fail to reach a substantial number of children. Furthermore, the reported sources of instructions vary greatly in terms of reliability. PMID:20064250

  15. Computed Tomography (CT) -- Head

    Medline Plus

    Full Text Available ... to reduce the risk of an allergic reaction. These medications must be taken 12 hours prior to ... planes, and can even generate three-dimensional images. These images can be viewed on a computer monitor, ...

  16. Computed Tomography (CT) -- Sinuses

    Medline Plus

    Full Text Available ... to reduce the risk of an allergic reaction. These medications must be taken 12 hours prior to ... planes, and can even generate three-dimensional images. These images can be viewed on a computer monitor, ...

  17. A hierarchical approach to reducing communication in parallel graph algorithms

    KAUST Repository

    Harshvardhan,

    2015-01-01

    Large-scale graph computing has become critical due to the ever-increasing size of data. However, distributed graph computations are limited in their scalability and performance due to the heavy communication inherent in such computations. This is exacerbated in scale-free networks, such as social and web graphs, which contain hub vertices that have large degrees and therefore send a large number of messages over the network. Furthermore, many graph algorithms and computations send the same data to each of the neighbors of a vertex. Our proposed approach recognizes this, and reduces communication performed by the algorithm without change to user-code, through a hierarchical machine model imposed upon the input graph. The hierarchical model takes advantage of locale information of the neighboring vertices to reduce communication, both in message volume and total number of bytes sent. It is also able to better exploit the machine hierarchy to further reduce the communication costs, by aggregating traffic between different levels of the machine hierarchy. Results of an implementation in the STAPL GL show improved scalability and performance over the traditional level-synchronous approach, with 2.5×-8× improvements for a variety of graph algorithms at 12,000+ cores.
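
    The communication saving comes from grouping messages that target vertices on the same locale and shipping them as one aggregate. A language-neutral sketch of that aggregation step (written in Python rather than STAPL, with a hypothetical send_to_locale callback) follows.

      # Sketch of hierarchical message aggregation: instead of one message per
      # destination vertex, send one combined message per destination locale.
      from collections import defaultdict

      def aggregate_and_send(messages, vertex_to_locale, send_to_locale):
          """messages: iterable of (dest_vertex, payload) pairs.
          vertex_to_locale: dict mapping vertex id -> locale (machine) id.
          send_to_locale: callable(locale_id, batch) performing the transfer."""
          buckets = defaultdict(list)
          for dest, payload in messages:
              buckets[vertex_to_locale[dest]].append((dest, payload))
          for locale, batch in buckets.items():
              # One network transfer per locale instead of one per vertex.
              send_to_locale(locale, batch)

      # Toy usage: four vertex-level messages collapse into two locale transfers.
      v2l = {0: 0, 1: 0, 2: 1, 3: 1}
      aggregate_and_send([(0, "a"), (1, "b"), (2, "c"), (3, "d")],
                         v2l, lambda loc, batch: print(loc, batch))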

  18. Offline computing and networking

    International Nuclear Information System (INIS)

    Appel, J.A.; Avery, P.; Chartrand, G.

    1985-01-01

    This note summarizes the work of the Offline Computing and Networking Group. The report is divided into two sections; the first deals with the computing and networking requirements and the second with the proposed way to satisfy those requirements. In considering the requirements, we have considered two types of computing problems. The first is CPU-intensive activity such as production data analysis (reducing raw data to DST), production Monte Carlo, or engineering calculations. The second is physicist-intensive computing such as program development, hardware design, physics analysis, and detector studies. For both types of computing, we examine a variety of issues. These included a set of quantitative questions: how much CPU power (for turn-around and for through-put), how much memory, mass-storage, bandwidth, and so on. There are also very important qualitative issues: what features must be provided by the operating system, what tools are needed for program design, code management, database management, and for graphics

  19. Simple and Effective Algorithms: Computer-Adaptive Testing.

    Science.gov (United States)

    Linacre, John Michael

    Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…
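
    As a concrete illustration of the "simple approaches" the record alludes to, the sketch below implements a bare-bones adaptive testing loop under an assumed Rasch (one-parameter logistic) model: pick the unused item whose difficulty is closest to the current ability estimate, then nudge the estimate up or down according to the response. It is an illustration only, not a production CAT algorithm.

      # Minimal computer-adaptive testing loop under a Rasch model (illustrative).
      import math
      import random

      def prob_correct(ability, difficulty):
          return 1.0 / (1.0 + math.exp(difficulty - ability))

      def run_cat(item_difficulties, answer, n_items=10, step=0.6):
          """answer(difficulty) -> True/False is the examinee's response."""
          ability = 0.0
          unused = list(item_difficulties)
          for _ in range(min(n_items, len(unused))):
              # Select the item best matched to the current ability estimate.
              item = min(unused, key=lambda d: abs(d - ability))
              unused.remove(item)
              correct = answer(item)
              # Simple stepwise update: raise the estimate after a success,
              # lower it after a failure (a crude stand-in for ML estimation).
              ability += step if correct else -step
              step *= 0.85  # shrink the step so the estimate settles down
          return ability

      random.seed(0)
      true_ability = 1.2
      simulated = lambda d: random.random() < prob_correct(true_ability, d)
      print(run_cat([i / 4 - 2 for i in range(17)], simulated))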

  20. IHadoop: Asynchronous iterations for MapReduce

    KAUST Repository

    Elnikety, Eslam Mohamed Ibrahim

    2011-11-01

    MapReduce is a distributed programming framework designed to ease the development of scalable data-intensive applications for large clusters of commodity machines. Most machine learning and data mining applications involve iterative computations over large datasets, such as the Web hyperlink structures and social network graphs. Yet, the MapReduce model does not efficiently support this important class of applications. The architecture of MapReduce, most critically its dataflow techniques and task scheduling, is completely unaware of the nature of iterative applications; tasks are scheduled according to a policy that optimizes the execution for a single iteration which wastes bandwidth, I/O, and CPU cycles when compared with an optimal execution for a consecutive set of iterations. This work presents iHadoop, a modified MapReduce model, and an associated implementation, optimized for iterative computations. The iHadoop model schedules iterations asynchronously. It connects the output of one iteration to the next, allowing both to process their data concurrently. iHadoop's task scheduler exploits inter-iteration data locality by scheduling tasks that exhibit a producer/consumer relation on the same physical machine allowing a fast local data transfer. For those iterative applications that require satisfying certain criteria before termination, iHadoop runs the check concurrently during the execution of the subsequent iteration to further reduce the application's latency. This paper also describes our implementation of the iHadoop model, and evaluates its performance against Hadoop, the widely used open source implementation of MapReduce. Experiments using different data analysis applications over real-world and synthetic datasets show that iHadoop performs better than Hadoop for iterative algorithms, reducing execution time of iterative applications by 25% on average. Furthermore, integrating iHadoop with HaLoop, a variant Hadoop implementation that caches

  1. IHadoop: Asynchronous iterations for MapReduce

    KAUST Repository

    Elnikety, Eslam Mohamed Ibrahim; El Sayed, Tamer S.; Ramadan, Hany E.

    2011-01-01

    MapReduce is a distributed programming framework designed to ease the development of scalable data-intensive applications for large clusters of commodity machines. Most machine learning and data mining applications involve iterative computations over large datasets, such as the Web hyperlink structures and social network graphs. Yet, the MapReduce model does not efficiently support this important class of applications. The architecture of MapReduce, most critically its dataflow techniques and task scheduling, is completely unaware of the nature of iterative applications; tasks are scheduled according to a policy that optimizes the execution for a single iteration which wastes bandwidth, I/O, and CPU cycles when compared with an optimal execution for a consecutive set of iterations. This work presents iHadoop, a modified MapReduce model, and an associated implementation, optimized for iterative computations. The iHadoop model schedules iterations asynchronously. It connects the output of one iteration to the next, allowing both to process their data concurrently. iHadoop's task scheduler exploits inter-iteration data locality by scheduling tasks that exhibit a producer/consumer relation on the same physical machine allowing a fast local data transfer. For those iterative applications that require satisfying certain criteria before termination, iHadoop runs the check concurrently during the execution of the subsequent iteration to further reduce the application's latency. This paper also describes our implementation of the iHadoop model, and evaluates its performance against Hadoop, the widely used open source implementation of MapReduce. Experiments using different data analysis applications over real-world and synthetic datasets show that iHadoop performs better than Hadoop for iterative algorithms, reducing execution time of iterative applications by 25% on average. Furthermore, integrating iHadoop with HaLoop, a variant Hadoop implementation that caches
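
    To make the class of iterative MapReduce applications concrete, here is a small synchronous PageRank driver written as explicit map and reduce phases. It only illustrates the iteration-to-iteration dataflow that iHadoop overlaps; it is not iHadoop itself, and the asynchronous scheduling is not reproduced here.

      # Synchronous iterative MapReduce-style PageRank (illustration only; the
      # asynchronous overlap that iHadoop introduces is not modeled, and
      # dangling-node mass is ignored for brevity).
      from collections import defaultdict

      def pagerank(graph, iterations=20, damping=0.85):
          """graph: dict node -> list of outgoing neighbors."""
          ranks = {n: 1.0 / len(graph) for n in graph}
          for _ in range(iterations):
              # Map phase: each node emits (neighbor, share) pairs.
              emitted = []
              for node, neighbors in graph.items():
                  share = ranks[node] / len(neighbors) if neighbors else 0.0
                  emitted.extend((nbr, share) for nbr in neighbors)
              # Reduce phase: sum the shares received by each node.
              sums = defaultdict(float)
              for node, share in emitted:
                  sums[node] += share
              ranks = {n: (1 - damping) / len(graph) + damping * sums[n]
                       for n in graph}
          return ranks

      demo = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
      print(pagerank(demo))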

  2. Fast Evaluation of Segmentation Quality with Parallel Computing

    Directory of Open Access Journals (Sweden)

    Henry Cruz

    2017-01-01

    Full Text Available In digital image processing and computer vision, a fairly frequent task is the performance comparison of different algorithms on enormous image databases. This task is usually time-consuming and tedious, such that any kind of tool to simplify this work is welcome. To achieve an efficient and more practical handling of a normally tedious evaluation, we implemented an automatic detection system with the help of MATLAB®'s Parallel Computing Toolbox™. The key parts of the system have been parallelized to achieve simultaneous execution and analysis of segmentation algorithms on the one hand, and the evaluation of detection accuracy for the nonforested regions, as a case study, on the other hand. As a positive side effect, CPU usage was reduced and processing time was significantly decreased, by 68.54% compared to sequential processing (i.e., executing the system with each algorithm one by one).
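
    The same embarrassingly parallel pattern, evaluating a segmentation quality metric independently for every image in a database, can be expressed without MATLAB's toolbox. The Python sketch below uses a process pool and a simple intersection-over-union score with random masks standing in for real data, so treat it as an outline rather than the authors' system.

      # Parallel evaluation of segmentation quality (illustrative outline).
      import numpy as np
      from multiprocessing import Pool

      def iou(mask_pred, mask_true):
          """Intersection over union of two boolean masks."""
          inter = np.logical_and(mask_pred, mask_true).sum()
          union = np.logical_or(mask_pred, mask_true).sum()
          return inter / union if union else 1.0

      def evaluate_pair(pair):
          mask_pred, mask_true = pair
          return iou(mask_pred, mask_true)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          # Stand-ins for segmentation results and ground-truth masks.
          pairs = [(rng.random((128, 128)) > 0.5, rng.random((128, 128)) > 0.5)
                   for _ in range(200)]
          with Pool() as pool:
              scores = pool.map(evaluate_pair, pairs)  # one image pair per task
          print(float(np.mean(scores)))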

  3. Vanderbilt University: Campus Computing Environment.

    Science.gov (United States)

    CAUSE/EFFECT, 1988

    1988-01-01

    Despite the decentralized nature of computing at Vanderbilt, there is significant evidence of cooperation and use of each other's resources by the various computing entities. Planning for computing occurs in every school and department. Caravan, a campus-wide network, is described. (MLW)

  4. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  5. Evaluation of the potential benefit of computer-aided diagnosis (CAD) for lung cancer screening using photofluorography

    International Nuclear Information System (INIS)

    Matsumoto, Tsuneo; Nakamura, Hiroshi; Nakanishi, Takashi; Doi, Kunio; Kano, Akiko.

    1993-01-01

    To evaluate the potential benefit of computer-aided diagnosis (CAD) in lung cancer screenings using photofluorographic films, we performed an observer test with 12 radiologists. We used 60 photofluorographic films obtained from a lung cancer screening program in Yamaguchi Prefecture (30 contained cancerous nodules and 30 had no nodules). In these cases, our current automated detection scheme achieved a sensitivity of 80%, but yielded an average of 11 false-positives per image. The observer study consisted of three viewing conditions: 1) only the original image (single reading), 2) the original image and computer output obtained from the current CAD scheme (CAD 1), 3) the original image and computer output obtained from a simulated improved CAD scheme with the same 80% true-positive rate, but with an average of one false-positive per image (CAD 2). Compared with double reading using independent interpretations, which takes the higher score of the two single readings, CAD 2 was more sensitive in subtle cases. The specificity of CAD was superior to that of double reading. Although CAD 1 (Az=0.805) was inferior to double reading (Az=0.837) in terms of the ROC curve, CAD 2 (Az=0.872) significantly improved the ROC curve and also significantly reduced observation time (p<0.05). If the number of false positives can be reduced, computer-aided diagnosis may play an important role in lung cancer screening programs. (author)
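
    The area under the ROC curve (Az) quoted above comes from fitted observer ROC models; as a hedged illustration of the underlying quantity, the sketch below computes an empirical area under the curve from confidence scores using the rank-based (Mann-Whitney) formulation, with made-up ratings.

      # Empirical area under the ROC curve via the Mann-Whitney statistic
      # (an illustration of Az; the study itself used fitted observer ROC models).
      import numpy as np

      def empirical_auc(scores_positive, scores_negative):
          pos = np.asarray(scores_positive, dtype=float)
          neg = np.asarray(scores_negative, dtype=float)
          # Probability that a random positive case outranks a random negative,
          # counting ties as one half.
          greater = (pos[:, None] > neg[None, :]).sum()
          ties = (pos[:, None] == neg[None, :]).sum()
          return (greater + 0.5 * ties) / (pos.size * neg.size)

      # Toy confidence ratings for cancer and non-cancer cases.
      print(empirical_auc([0.9, 0.8, 0.7, 0.6], [0.65, 0.5, 0.4, 0.3]))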

  6. A memory-array architecture for computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Balsara, P.T.

    1989-01-01

    With the fast advances in the area of computer vision and robotics there is a growing need for machines that can understand images at a very high speed. A conventional von Neumann computer is not suited for this purpose because it takes a tremendous amount of time to solve most typical image processing problems. Exploiting the inherent parallelism present in various vision tasks can significantly reduce the processing time. Fortunately, parallelism is increasingly affordable as hardware gets cheaper. Thus it is now imperative to study computer vision in a parallel processing framework. One must first design a computational structure which is well suited for a wide range of vision tasks and then develop parallel algorithms which can run efficiently on this structure. Recent advances in VLSI technology have led to several proposals for parallel architectures for computer vision. In this thesis the author demonstrates that a memory array architecture with efficient local and global communication capabilities can be used for high speed execution of a wide range of computer vision tasks. This architecture, called the Access Constrained Memory Array Architecture (ACMAA), is efficient for VLSI implementation because of its modular structure, simple interconnect and limited global control. Several parallel vision algorithms have been designed for this architecture. The choice of vision problems demonstrates the versatility of ACMAA for a wide range of vision tasks. These algorithms were simulated on a high level ACMAA simulator running on the Intel iPSC/2 hypercube, a parallel architecture. The results of this simulation are compared with those of sequential algorithms running on a single hypercube node. Details of the ACMAA processor architecture are also presented.

  7. Software designs of image processing tasks with incremental refinement of computation.

    Science.gov (United States)

    Anastasia, Davide; Andreopoulos, Yiannis

    2010-08-01

    Software realizations of computationally-demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent nonincremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
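
    Because convolution is linear, a signal can be processed one bitplane at a time from the most significant bit downward, with each additional bitplane refining the earlier approximation; that is the flavor of incremental computation described above. The sketch below is a simplified one-dimensional illustration, not the authors' released designs.

      # Incremental bitplane-based convolution: process bitplanes MSB -> LSB so
      # the result can be refined (or the computation stopped) at any point.
      import numpy as np

      def incremental_convolution(signal_uint8, kernel):
          x = np.asarray(signal_uint8, dtype=np.uint8)
          partial = np.zeros(len(x) + len(kernel) - 1)
          approximations = []
          for bit in range(7, -1, -1):               # most significant bit first
              plane = ((x >> bit) & 1).astype(float)
              partial += (2 ** bit) * np.convolve(plane, kernel)
              approximations.append(partial.copy())  # usable result so far
          return approximations                      # last entry is the full result

      sig = np.array([12, 200, 37, 255, 0, 90], dtype=np.uint8)
      ker = np.array([0.25, 0.5, 0.25])
      approx = incremental_convolution(sig, ker)
      print(np.allclose(approx[-1], np.convolve(sig.astype(float), ker)))  # True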

  8. Green Cloud Computing: A Literature Survey

    Directory of Open Access Journals (Sweden)

    Laura-Diana Radu

    2017-11-01

    Full Text Available Cloud computing is a dynamic field of information and communication technologies (ICTs), introducing new challenges for environmental protection. Cloud computing technologies have a variety of application domains, since they offer scalability, are reliable and trustworthy, and offer high performance at relatively low cost. The cloud computing revolution is redesigning modern networking, and offering promising environmental protection prospects as well as economic and technological advantages. These technologies have the potential to improve energy efficiency and to reduce carbon footprints and (e-)waste. These features can transform cloud computing into green cloud computing. In this survey, we review the main achievements of green cloud computing. First, an overview of cloud computing is given. Then, recent studies and developments are summarized, and environmental issues are specifically addressed. Finally, future research directions and open problems regarding green cloud computing are presented. This survey is intended to serve as up-to-date guidance for research with respect to green cloud computing.

  9. Recycling potential of neodymium: the case of computer hard disk drives.

    Science.gov (United States)

    Sprecher, Benjamin; Kleijn, Rene; Kramer, Gert Jan

    2014-08-19

    Neodymium, one of the more critically scarce rare earth metals, is often used in sustainable technologies. In this study, we investigate the potential contribution of neodymium recycling to reducing scarcity in supply, with a case study on computer hard disk drives (HDDs). We first review the literature on neodymium production and recycling potential. From this review, we find that recycling of computer HDDs is currently the most feasible pathway toward large-scale recycling of neodymium, even though HDDs do not represent the largest application of neodymium. We then use a combination of dynamic modeling and empirical experiments to conclude that within the application of NdFeB magnets for HDDs, the potential for loop-closing is significant: up to 57% in 2017. However, compared to the total NdFeB production capacity, the recovery potential from HDDs is relatively small (in the 1-3% range). The distributed nature of neodymium poses a significant challenge for recycling of neodymium.

  10. Error Mitigation in Computational Design of Sustainable Energy Materials

    DEFF Research Database (Denmark)

    Christensen, Rune

    by individual C=O bonds. Energy corrections applied to C=O bonds significantly reduce systematic errors and can be extended to adsorbates. A similar study is performed for intermediates in the oxygen evolution and oxygen reduction reactions. An identified systematic error on peroxide bonds is found to also be present in the OOH* adsorbate. However, the systematic error will almost be canceled by inclusion of van der Waals energy. The energy difference between key adsorbates is thus similar to that previously found. Finally, a method is developed for error estimation in computationally inexpensive neural

  11. Model's sparse representation based on reduced mixed GMsFE basis methods

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn [Institute of Mathematics, Hunan University, Changsha 410082 (China); Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn [College of Mathematics and Econometrics, Hunan University, Changsha 410082 (China)

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. Mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem in a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by the random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in
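
    One of the two sampling strategies mentioned above, proper orthogonal decomposition, reduces a set of solution snapshots to a handful of dominant basis vectors. The sketch below is a generic numerical illustration of that step (not the paper's mixed GMsFE code); the synthetic snapshot matrix and energy threshold are assumptions for the example.

      # Proper orthogonal decomposition (POD) of a snapshot matrix via the SVD:
      # keep the leading left singular vectors as a reduced basis (illustrative).
      import numpy as np

      def pod_basis(snapshots, energy=0.99):
          """snapshots: array (n_dof, n_samples); returns (basis, n_modes)."""
          u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
          cumulative = np.cumsum(s ** 2) / np.sum(s ** 2)
          n_modes = int(np.searchsorted(cumulative, energy) + 1)
          return u[:, :n_modes], n_modes

      rng = np.random.default_rng(0)
      # Synthetic snapshots that really live in a 3-dimensional subspace.
      low_rank = rng.normal(size=(400, 3)) @ rng.normal(size=(3, 60))
      basis, k = pod_basis(low_rank + 1e-8 * rng.normal(size=(400, 60)))
      print(k)  # expected to be close to 3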

  12. Participatory workplace interventions can reduce sedentary time for office workers--a randomised controlled trial.

    Science.gov (United States)

    Parry, Sharon; Straker, Leon; Gilson, Nicholas D; Smith, Anne J

    2013-01-01

    Occupational sedentary behaviour is an important contributor to overall sedentary risk. There is limited evidence for effective workplace interventions to reduce occupational sedentary time and increase light activity during work hours. The purpose of the study was to determine if participatory workplace interventions could reduce total sedentary time, sustained sedentary time (bouts >30 minutes), increase the frequency of breaks in sedentary time and promote light intensity activity and moderate/vigorous activity (MVPA) during work hours. A randomised controlled trial (ANZCTR NUMBER: ACTN12612000743864) was conducted using clerical, call centre and data processing workers (n = 62, aged 25-59 years) in 3 large government organisations in Perth, Australia. Three groups developed interventions with a participatory approach: 'Active office' (n = 19), 'Active Workstation' and promotion of incidental office activity; 'Traditional physical activity' (n = 14), pedometer challenge to increase activity between productive work time and 'Office ergonomics' (n = 29), computer workstation design and breaking up computer tasks. Accelerometer (ActiGraph GT3X, 7 days) determined sedentary time, sustained sedentary time, breaks in sedentary time, light intensity activity and MVPA on work days and during work hours were measured before and following a 12 week intervention period. For all participants there was a significant reduction in sedentary time on work days (-1.6%, p = 0.006) and during work hours (-1.7%, p = 0.014) and a significant increase in number of breaks/sedentary hour on work days (0.64, p = 0.005) and during work hours (0.72, p = 0.015); there was a concurrent significant increase in light activity during work hours (1.5%, p = 0.012) and MVPA on work days (0.6%, p = 0.012). This study explored novel ways to modify work practices to reduce occupational sedentary behaviour. Participatory workplace interventions can reduce

  13. Combination of ray-tracing and the method of moments for electromagnetic radiation analysis using reduced meshes

    Science.gov (United States)

    Delgado, Carlos; Cátedra, Manuel Felipe

    2018-05-01

    This work presents a technique that substantially reduces the computational requirements of full-wave electromagnetic simulations based on the Method of Moments. A ray-tracing analysis of the geometry is performed in order to extract the critical points with significant contributions. These points are then used to generate a reduced mesh, considering the regions of the geometry that surround each critical point and taking into account the electrical path followed from the source. The electromagnetic analysis of the reduced mesh produces very accurate results, requiring a fraction of the resources that the conventional analysis would utilize.
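
    The reduced-mesh idea, keeping only the facets around the ray-tracing critical points, can be illustrated with a purely geometric filter; the radius, data layout and random test geometry below are assumptions for the sketch, not values from the paper.

      # Geometric sketch of reduced-mesh selection: retain mesh elements whose
      # centroids lie within a chosen radius of any ray-tracing critical point.
      import numpy as np

      def reduce_mesh(centroids, critical_points, radius):
          """centroids: (n_elem, 3); critical_points: (n_crit, 3)."""
          d = np.linalg.norm(centroids[:, None, :] - critical_points[None, :, :],
                             axis=-1)
          keep = d.min(axis=1) <= radius
          return np.nonzero(keep)[0]          # indices of retained elements

      rng = np.random.default_rng(0)
      elems = rng.uniform(0, 10, size=(5000, 3))
      crit = np.array([[2.0, 2.0, 2.0], [8.0, 7.5, 1.0]])
      kept = reduce_mesh(elems, crit, radius=1.5)
      print(len(kept), "of", len(elems), "elements retained")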

  14. Reduced Calibration Curve for Proton Computed Tomography

    International Nuclear Information System (INIS)

    Yevseyeva, Olga; Assis, Joaquim de; Evseev, Ivan; Schelin, Hugo; Paschuk, Sergei; Milhoretto, Edney; Setti, Joao; Diaz, Katherin; Hormaza, Joel; Lopes, Ricardo

    2010-01-01

    pCT deals with relatively thick targets like the human head or trunk. Thus, the fidelity of pCT as a tool for proton therapy planning depends on the accuracy of the physical formulas used for proton interaction with thick absorbers. Although the actual overall accuracy of the proton stopping power in the Bethe-Bloch domain is about 1%, analytical calculations and Monte Carlo simulations with codes like TRIM/SRIM, MCNPX and GEANT4 do not agree with each other. Attempts to validate the codes against experimental data for thick absorbers face some difficulties: only limited data are available, and the existing data sets have been acquired at different initial proton energies and for different absorber materials. In this work we compare the results of our Monte Carlo simulations with existing experimental data in terms of a reduced calibration curve, i.e., the range-energy dependence normalized on the range scale by the full projected CSDA range for a given initial proton energy in a given material, taken from the NIST PSTAR database, and on the final proton energy scale by the given initial energy of the protons. This approach is almost energy and material independent. The results of our analysis are important for pCT development because the contradictions observed at arbitrarily low initial proton energies can now be easily scaled to typical pCT energies.
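
    The reduced calibration curve described above is just the range-energy relation rescaled by the full CSDA range and by the initial beam energy. A small sketch of that normalization follows; the depth-energy pairs and the CSDA range value are made-up stand-ins for measured or tabulated (e.g., NIST PSTAR) data.

      # Reduced calibration curve sketch: normalize depth by the full CSDA range
      # and residual proton energy by the initial energy (numbers are made up).
      import numpy as np

      def reduced_curve(depths_cm, residual_energies_mev, csda_range_cm,
                        initial_energy_mev):
          depths = np.asarray(depths_cm, dtype=float)
          energies = np.asarray(residual_energies_mev, dtype=float)
          return depths / csda_range_cm, energies / initial_energy_mev

      # Hypothetical data points for a 200 MeV beam in water (illustrative only).
      depth = [0.0, 5.0, 10.0, 15.0, 20.0]
      energy = [200.0, 170.0, 135.0, 92.0, 30.0]
      x, y = reduced_curve(depth, energy, csda_range_cm=25.8,
                           initial_energy_mev=200.0)
      print(np.round(x, 3), np.round(y, 3))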

  15. Molecular Magnets for Quantum Computation

    Science.gov (United States)

    Kuroda, Takayoshi

    2009-06-01

    We review recent progress in molecular magnets especially in the viewpoint of the application for quantum computing. After a brief introduction to single-molecule magnets (SMMs), a method for qubit manipulation by using non-equidistant spin sublevels of a SMM will be introduced. A weakly-coupled dimer of two SMMs is also a candidate for quantum computing, which shows no quantum tunneling of magnetization (QTM) at zero field. In the AF ring Cr7Ni system, the large tunnel splitting is a great advantage to reduce decoherence during manipulation, which can be a possible candidate to realize quantum computer devices in future.

  16. Nurses' computer literacy and attitudes towards the use of computers in health care.

    Science.gov (United States)

    Gürdaş Topkaya, Sati; Kaya, Nurten

    2015-05-01

    This descriptive and cross-sectional study was designed to address nurses' computer literacy and attitudes towards the use of computers in health care and to determine the correlation between these two variables. This study was conducted with the participation of 688 nurses who worked at two university-affiliated hospitals. These nurses were chosen using a stratified random sampling method. The data were collected using the Multicomponent Assessment of Computer Literacy and the Pretest for Attitudes Towards Computers in Healthcare Assessment Scale v. 2. The nurses, in general, had positive attitudes towards computers, and their computer literacy was good. Computer literacy in general had significant positive correlations with individual elements of computer competency and with attitudes towards computers. If the computer is to be an effective and beneficial part of the health-care system, it is necessary to help nurses improve their computer competency. © 2014 Wiley Publishing Asia Pty Ltd.

  17. Reducing multi-qubit interactions in adiabatic quantum computation without adding auxiliary qubits. Part 1: The "deduc-reduc" method and its application to quantum factorization of numbers

    OpenAIRE

    Tanburn, Richard; Okada, Emile; Dattani, Nike

    2015-01-01

    Adiabatic quantum computing has recently been used to factor 56153 [Dattani & Bryans, arXiv:1411.6758] at room temperature, which is orders of magnitude larger than any number attempted yet using Shor's algorithm (circuit-based quantum computation). However, this number is still vastly smaller than RSA-768 which is the largest number factored thus far on a classical computer. We address a major issue arising in the scaling of adiabatic quantum factorization to much larger numbers. Namely, the...

  18. Enabling Wide-Scale Computer Science Education through Improved Automated Assessment Tools

    Science.gov (United States)

    Boe, Bryce A.

    There is a proliferating demand for newly trained computer scientists as the number of computer science related jobs continues to increase. University programs will only be able to train enough new computer scientists to meet this demand when two things happen: when there are more primary and secondary school students interested in computer science, and when university departments have the resources to handle the resulting increase in enrollment. To meet these goals, significant effort is being made to both incorporate computational thinking into existing primary school education, and to support larger university computer science class sizes. We contribute to this effort through the creation and use of improved automated assessment tools. To enable wide-scale computer science education we do two things. First, we create a framework called Hairball to support the static analysis of Scratch programs targeted for fourth, fifth, and sixth grade students. Scratch is a popular building-block language utilized to pique interest in and teach the basics of computer science. We observe that Hairball allows for rapid curriculum alterations and thus contributes to wide-scale deployment of computer science curriculum. Second, we create a real-time feedback and assessment system utilized in university computer science classes to provide better feedback to students while reducing assessment time. Insights from our analysis of student submission data show that modifications to the system configuration support the way students learn and progress through course material, making it possible for instructors to tailor assignments to optimize learning in growing computer science classes.

  19. [Influence of coping material selection and porcelain firing on marginal and internal fit of computer-aided design/computer-aided manufacturing of zirconia and titanium ceramic implant-supported crowns].

    Science.gov (United States)

    Cuiling, Liu; Liyuan, Yang; Xu, Gao; Hong, Shang

    2016-06-01

    This study aimed to investigate the influence of coping material and porcelain firing on the marginal and internal fit of computer-aided design/computer-aided manufacturing (CAD/CAM) of zirconia ceramic implant- and titanium ceramic implant-supported crowns. Zirconia ceramic implant (group A, n = 8) and titanium metal ceramic implant-supported crowns (group B, n = 8) were produced from copings using the CAD/CAM system. The marginal and internal gaps of the copings and crowns were measured by using a light-body silicone replica technique combined with micro-computed tomography scanning to obtain a three-dimensional image. Marginal gap (MG), horizontal marginal discrepancy (HMD), and axial wall (AW) were measured. Statistical analyses were performed using SPSS 17.0. Prior to porcelain firing, the measurements for MG, HMD, and AW of copings in group A were significantly larger than those in group B (P < 0.05). Porcelain firing significantly reduced MG (P < 0.05). The marginal fits of CAD/CAM zirconia ceramic implant-supported crowns were superior to those of CAD/CAM titanium ceramic-supported crowns. The fits of both the CAD/CAM zirconia ceramic implant- and titanium ceramic implant-supported crowns were obviously influenced by porcelain firing.

  20. Application of computers in a Radiological Survey Program

    International Nuclear Information System (INIS)

    Berven, B.A.; Blair, M.S.; Doane, R.W.; Little, C.A.; Perdue, P.T.

    1984-01-01

    A brief description of some of the applications of computers in a radiological survey program is presented. It has been our experience that computers and computer software have allowed our staff personnel to more productively use their time by using computers to perform the mechanical acquisition, analyses, and storage of data. It is hoped that other organizations may similarly profit from this experience. This effort will ultimately minimize errors and reduce program costs

  1. Quantum analogue computing.

    Science.gov (United States)

    Kendon, Vivien M; Nemoto, Kae; Munro, William J

    2010-08-13

    We briefly review what a quantum computer is, what it promises to do for us and why it is so hard to build one. Among the first applications anticipated to bear fruit is the quantum simulation of quantum systems. While most quantum computation is an extension of classical digital computation, quantum simulation differs fundamentally in how the data are encoded in the quantum computer. To perform a quantum simulation, the Hilbert space of the system to be simulated is mapped directly onto the Hilbert space of the (logical) qubits in the quantum computer. This type of direct correspondence is how data are encoded in a classical analogue computer. There is no binary encoding, and increasing precision becomes exponentially costly: an extra bit of precision doubles the size of the computer. This has important consequences for both the precision and error-correction requirements of quantum simulation, and significant open questions remain about its practicality. It also means that the quantum version of analogue computers, continuous-variable quantum computers, becomes an equally efficient architecture for quantum simulation. Lessons from past use of classical analogue computers can help us to build better quantum simulators in future.

  2. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Science.gov (United States)

    Yazar, Seyhan; Gooden, George E C; Mackey, David A; Hewitt, Alex W

    2014-01-01

    A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.

  3. Benchmarking undedicated cloud computing providers for analysis of genomic datasets.

    Directory of Open Access Journals (Sweden)

    Seyhan Yazar

    Full Text Available A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5-78.2) for E.coli and 53.5% (95% CI: 34.4-72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5-303.1) and 173.9% (95% CI: 134.6-213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE.

  4. Defining Spaces of Potential Art: The significance of representation in computer-aided creativity

    DEFF Research Database (Denmark)

    Dahlstedt, Palle

    2005-01-01

    One way of looking at the creative process is as a search in a space of possible answers. One way of simulating such a process is through evolutionary algorithms, i.e., simulated evolution by random variation and selection. The search space is defined by the chosen genetic representation, a kind of formal description, and the ways of navigating the space are defined by the choice of genetic operators (e.g., mutations). In creative systems, such as computer-aided music composition tools, these choices determine the efficiency of the system, in terms of the diversity of the results, the degree of novelty and the coherence within the material. Based on various implementations developed during five years of research, and experiences from real-life artistic applications, I will explain and discuss these mechanisms, from a perspective of the creative artist.

  5. Accuracy evaluation of metal copings fabricated by computer-aided milling and direct metal laser sintering systems.

    Science.gov (United States)

    Park, Jong-Kyoung; Lee, Wan-Sun; Kim, Hae-Young; Kim, Woong-Chul; Kim, Ji-Hwan

    2015-04-01

    To assess the marginal and internal gaps of the copings fabricated by computer-aided milling and direct metal laser sintering (DMLS) systems in comparison to casting method. Ten metal copings were fabricated by casting, computer-aided milling, and DMLS. Seven mesiodistal and labiolingual positions were then measured, and each of these were divided into the categories; marginal gap (MG), cervical gap (CG), axial wall at internal gap (AG), and incisal edge at internal gap (IG). Evaluation was performed by a silicone replica technique. A digital microscope was used for measurement of silicone layer. Statistical analyses included one-way and repeated measure ANOVA to test the difference between the fabrication methods and categories of measured points (α=.05), respectively. The mean gap differed significantly with fabrication methods (P<.001). Casting produced the narrowest gap in each of the four measured positions, whereas CG, AG, and IG proved narrower in computer-aided milling than in DMLS. Thus, with the exception of MG, all positions exhibited a significant difference between computer-aided milling and DMLS (P<.05). Although the gap was found to vary with fabrication methods, the marginal and internal gaps of the copings fabricated by computer-aided milling and DMLS fell within the range of clinical acceptance (<120 µm). However, the statistically significant difference to conventional casting indicates that the gaps in computer-aided milling and DMLS fabricated restorations still need to be further reduced.

  6. A Comparative Study of Cloud Computing Services

    Directory of Open Access Journals (Sweden)

    Afdhal Afdhal

    2014-03-01

    Full Text Available In the past few years, cloud computing has become a dominant topic in the IT area. Cloud computing offers hardware, infrastructure, platforms and applications without requiring end-users' knowledge of the physical location and configuration of the providers who deliver the services. It has proven a good solution for increasing reliability, reducing computing costs, and creating opportunities for IT industries to gain further advantages. The purpose of this article is to present a better understanding of cloud delivery services, their correlation and inter-dependency. This article compares and contrasts the different levels of delivery services and the development models, identifies issues, and outlines future directions in cloud computing. End-users' comprehension of the cloud computing delivery service classification will equip them with the knowledge to decide which business model to choose and adopt securely and comfortably. The last part of this article provides several recommendations for cloud computing service providers and end-users.

  7. Non-invasive imaging of myocardial bridge by coronary computed tomography angiography: the value of transluminal attenuation gradient to predict significant dynamic compression

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yuehua; Yu, Mengmeng; Zhang, Jiayin; Li, Minghua [Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Institute of Diagnostic and Interventional Radiology, Shanghai (China); Lu, Zhigang; Wei, Meng [Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Department of Cardiology, Shanghai (China)

    2017-05-15

    To study the diagnostic value of transluminal attenuation gradient (TAG) measured by coronary computed tomography angiography (CCTA) for identifying relevant dynamic compression of myocardial bridge (MB). Patients with confirmed MB who underwent both CCTA and ICA within one month were retrospectively included. TAG was defined as the linear regression coefficient between luminal attenuation and distance. The TAG of MB vessel, length and depth of MB were measured and correlated with the presence and degree of dynamic compression observed at ICA. Systolic compression ≥50 % was considered significant. 302 patients with confirmed MB lesions were included. TAG was lowest (-17.4 ± 6.7 HU/10 mm) in patients with significant dynamic compression and highest in patients without MB compression (-9.5 ± 4.3 HU/10 mm, p < 0.001). Linear correlation revealed relation between the percentage of systolic compression and TAG (Pearson correlation, r = -0.52, p < 0.001) and no significant relation between the percentage of systolic compression and MB depth or length. ROC curve analysis determined the best cut-off value of TAG as -14.8HU/10 mm (area under curve = 0.813, 95 % confidence interval = 0.764-0.855, p < 0.001), which yielded high diagnostic accuracy (82.1 %, 248/302). The degree of ICA-assessed systolic compression of MB significantly correlates with TAG but not MB depth or length. (orig.)
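
    The TAG defined above is simply the slope of a straight-line fit of luminal attenuation against centerline distance, reported per 10 mm. A minimal sketch of that calculation in Python, using made-up attenuation samples rather than data from this study and the reported -14.8 HU/10 mm cut-off, could look like this:

      import numpy as np

      def transluminal_attenuation_gradient(distance_mm, attenuation_hu):
          """Linear-regression slope of attenuation vs. distance, scaled to HU per 10 mm."""
          slope_per_mm, _intercept = np.polyfit(distance_mm, attenuation_hu, deg=1)
          return slope_per_mm * 10.0

      # Illustrative (made-up) attenuation samples every 5 mm along an MB vessel
      distance = np.arange(0, 45, 5)
      attenuation = np.array([480, 472, 460, 455, 445, 430, 422, 410, 400], dtype=float)

      tag = transluminal_attenuation_gradient(distance, attenuation)
      print(f"TAG = {tag:.1f} HU/10 mm; significant compression suspected: {tag <= -14.8}")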

  8. The DYD-RCT protocol: an on-line randomised controlled trial of an interactive computer-based intervention compared with a standard information website to reduce alcohol consumption among hazardous drinkers

    Directory of Open Access Journals (Sweden)

    Godfrey Christine

    2007-10-01

    Full Text Available Abstract Background Excessive alcohol consumption is a significant public health problem throughout the world. Although there are a range of effective interventions to help heavy drinkers reduce their alcohol consumption, these have little proven population-level impact. Researchers internationally are looking at the potential of Internet interventions in this area. Methods/Design In a two-arm randomised controlled trial, an on-line psychologically enhanced interactive computer-based intervention is compared with a flat, text-based information web-site. Recruitment, consent, randomisation and data collection are all on-line. The primary outcome is total past-week alcohol consumption; secondary outcomes include hazardous or harmful drinking, dependence, harm caused by alcohol, and mental health. A health economic analysis is included. Discussion This trial will provide information on the effectiveness and cost-effectiveness of an on-line intervention to help heavy drinkers drink less. Trial registration International Standard Randomised Controlled Trial Number Register ISRCTN31070347

  9. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning.

    Science.gov (United States)

    Liu, Yang; Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation especially when the size of data is large. Nowadays, big data has received a momentum from both industry and academia. To fulfill the potentials of ANNs for big data applications, the computation process must be speeded up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model to facilitate data intensive applications. Three data intensive scenarios are considered in the parallelization process in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation.
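
    The parallelization described above follows the usual data-parallel MapReduce pattern: mappers compute partial results on their data partitions and a reducer combines them. A toy sketch of that pattern for gradient-based training of a single linear neuron, written in plain Python/NumPy rather than Hadoop and not taken from the authors' implementation, is:

      import numpy as np

      def map_gradient(weights, x_chunk, y_chunk):
          """Mapper: squared-error gradient for one data partition (single linear neuron)."""
          preds = x_chunk @ weights
          return x_chunk.T @ (preds - y_chunk) / len(y_chunk)

      def reduce_gradients(gradients):
          """Reducer: combine the partial gradients by averaging."""
          return np.mean(gradients, axis=0)

      rng = np.random.default_rng(0)
      X, true_w = rng.normal(size=(1200, 3)), np.array([0.5, -1.0, 2.0])
      y = X @ true_w
      w = np.zeros(3)

      for _ in range(200):                                        # one "MapReduce" round per step
          chunks = zip(np.array_split(X, 4), np.array_split(y, 4))
          grads = [map_gradient(w, xc, yc) for xc, yc in chunks]  # map phase
          w -= 0.1 * reduce_gradients(grads)                      # reduce phase + update
      print("learned weights:", np.round(w, 3))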

  10. Analysis of parallel computing performance of the code MCNP

    International Nuclear Information System (INIS)

    Wang Lei; Wang Kan; Yu Ganglin

    2006-01-01

    Parallel computing can effectively reduce the running time of the code MCNP. With the MPI message-passing software, MCNP5 can run in parallel on a PC cluster under the Windows operating system. The parallel computing performance of MCNP is influenced by factors such as the type, the complexity level and the parameter configuration of the computing problem. This paper analyzes the parallel computing performance of MCNP with regard to these factors and gives measures to improve it. (authors)

  11. Computational Procedures for a Class of GI/D/k Systems in Discrete Time

    Directory of Open Access Journals (Sweden)

    Md. Mostafizur Rahman

    2009-01-01

    Full Text Available A class of discrete time GI/D/k systems is considered for which the interarrival times have finite support and customers are served in first-in first-out (FIFO) order. The system is formulated as a single server queue with new general independent interarrival times and constant service duration by assuming cyclic assignment of customers to the identical servers. Then the queue length is set up as a quasi-birth-death (QBD) type Markov chain. It is shown that this transformed GI/D/1 system has special structures which make the computation of the matrix R simple and efficient, thereby reducing the number of multiplications in each iteration significantly. As a result we were able to keep the computation time very low. Moreover, use of the resulting structural properties makes the computation of the distribution of queue length of the transformed system efficient. The computation of the distribution of waiting time is also shown to be simple by exploiting the special structures.
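
    For readers unfamiliar with QBD chains, the matrix R mentioned above is the minimal nonnegative solution of a matrix-quadratic equation, often written R = A0 + R·A1 + R²·A2 with A0/A1/A2 the one-level-up, same-level and one-level-down transition blocks (labelling conventions vary between authors). A generic fixed-point sketch on toy blocks, not the structure-exploiting scheme of the paper, is:

      import numpy as np

      def solve_R(A0, A1, A2, tol=1e-12, max_iter=10_000):
          """Natural fixed-point iteration R <- A0 + R*A1 + R^2*A2, starting from R = 0."""
          R = np.zeros_like(A0)
          for _ in range(max_iter):
              R_next = A0 + R @ A1 + R @ R @ A2
              if np.max(np.abs(R_next - R)) < tol:
                  return R_next
              R = R_next
          raise RuntimeError("R iteration did not converge")

      # Toy 2x2 blocks of a positive-recurrent discrete-time QBD (illustrative only)
      A0 = np.array([[0.10, 0.05], [0.05, 0.10]])   # level up
      A1 = np.array([[0.30, 0.10], [0.10, 0.30]])   # same level
      A2 = np.array([[0.30, 0.15], [0.20, 0.25]])   # level down
      R = solve_R(A0, A1, A2)
      print("spectral radius of R:", max(abs(np.linalg.eigvals(R))))  # < 1 for a stable queue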

  12. A comparative study of the tail ion distribution with reduced Fokker-Planck models

    Science.gov (United States)

    McDevitt, C. J.; Tang, Xian-Zhu; Guo, Zehua; Berk, H. L.

    2014-03-01

    A series of reduced models are used to study the fast ion tail in the vicinity of a transition layer between plasmas at disparate temperatures and densities, which is typical of the gas and pusher interface in inertial confinement fusion targets. Emphasis is placed on utilizing progressively more comprehensive models in order to identify the essential physics for computing the fast ion tail at energies comparable to the Gamow peak. The resulting fast ion tail distribution is subsequently used to compute the fusion reactivity as a function of collisionality and temperature. While a significant reduction of the fusion reactivity in the hot spot compared to the nominal Maxwellian case is present, this reduction is found to be partially recovered by an increase of the fusion reactivity in the neighboring cold region.

  13. A Hybrid Online Intervention for Reducing Sedentary Behavior in Obese Women

    Directory of Open Access Journals (Sweden)

    Melanie M Adams

    2013-10-01

    Full Text Available Sedentary behavior (SB) has emerged as an independent risk factor for cardiovascular disease and type 2 diabetes. While exercise is known to reduce these risks, reducing SB through increases in non-structured PA and breaks from sitting may appeal to obese women who have lower self-efficacy for PA. This study examined effects of a combined face-to-face and online intervention to reduce SB in overweight and obese women. A two-group quasi-experimental study was used with measures taken pre and post. Female volunteers (M age=58.5, SD=12.5 yrs) were enrolled in the intervention (n=40) or waitlisted (n=24). The intervention, based on the Social Cognitive Theory, combined group sessions with email messages over 6 weeks. Individualized feedback to support mastery and peer models of active behaviors were included in the emails. Participants self-monitored PA with a pedometer. Baseline and post measures of PA and SB were assessed by accelerometer and self-report. Standard measures of height, weight and waist circumference were conducted. Repeated measures ANOVA was used for analyses. Self-reported SB and light PA in the intervention group (I) changed significantly over time [SB, F(1,2) = 3.81, p=.03; light PA, F(1,2) = 3.39, p=.04]. Significant Group x Time interactions were found for light PA, F(1,63) = 5.22, p=.03, moderate PA, F(1,63) = 3.90, p=.05, and for waist circumference, F(1,63) = 16.0, p=.001. The I group decreased significantly while the comparison group was unchanged. Hybrid computer interventions to reduce SB may provide a non-exercise alternative for increasing daily PA and potentially reduce waist circumference, a risk factor for type 2 diabetes. Consumer-grade accelerometers may aid improvements to PA and SB and should be tested as part of future interventions.

  14. Computer vision syndrome: a review of ocular causes and potential treatments.

    Science.gov (United States)

    Rosenfield, Mark

    2011-09-01

    Computer vision syndrome (CVS) is the combination of eye and vision problems associated with the use of computers. In modern western society the use of computers for both vocational and avocational activities is almost universal. However, CVS may have a significant impact not only on visual comfort but also occupational productivity since between 64% and 90% of computer users experience visual symptoms which may include eyestrain, headaches, ocular discomfort, dry eye, diplopia and blurred vision either at near or when looking into the distance after prolonged computer use. This paper reviews the principal ocular causes for this condition, namely oculomotor anomalies and dry eye. Accommodation and vergence responses to electronic screens appear to be similar to those found when viewing printed materials, whereas the prevalence of dry eye symptoms is greater during computer operation. The latter is probably due to a decrease in blink rate and blink amplitude, as well as increased corneal exposure resulting from the monitor frequently being positioned in primary gaze. However, the efficacy of proposed treatments to reduce symptoms of CVS is unproven. A better understanding of the physiology underlying CVS is critical to allow more accurate diagnosis and treatment. This will enable practitioners to optimize visual comfort and efficiency during computer operation. Ophthalmic & Physiological Optics © 2011 The College of Optometrists.

  15. Cryptography, quantum computation and trapped ions

    Energy Technology Data Exchange (ETDEWEB)

    Hughes, Richard J.

    1998-03-01

    The significance of quantum computation for cryptography is discussed. Following a brief survey of the requirements for quantum computational hardware, an overview of the ion trap quantum computation project at Los Alamos is presented. The physical limitations to quantum computation with trapped ions are analyzed and an assessment of the computational potential of the technology is made.

  16. A Computer-Based Intervention to Reduce Internalized Heterosexism in Men

    Science.gov (United States)

    Lin, Yen-Jui; Israel, Tania

    2012-01-01

    Internalized heterosexism (IH) is a strong predictor of the psychological well-being of lesbian, gay, bisexual (LGB), or other same-sex attracted individuals. To respond to the call for interventions to address IH, the current study developed and tested an online intervention to reduce IH among gay, bisexual, and other same-sex attracted men. A…

  17. The Computational Materials Repository

    DEFF Research Database (Denmark)

    Landis, David D.; Hummelshøj, Jens S.; Nestorov, Svetlozar

    2012-01-01

    The possibilities for designing new materials based on quantum physics calculations are rapidly growing, but these design efforts lead to a significant increase in the amount of computational data created. The Computational Materials Repository (CMR) addresses this data challenge and provides...

  18. Multi-objective reverse logistics model for integrated computer waste management.

    Science.gov (United States)

    Ahluwalia, Poonam Khanijo; Nema, Arvind K

    2006-12-01

    This study aimed to address the issues involved in the planning and design of a computer waste management system in an integrated manner. A decision-support tool is presented for selecting an optimum configuration of computer waste management facilities (segregation, storage, treatment/processing, reuse/recycle and disposal) and allocation of waste to these facilities. The model is based on an integer linear programming method with the objectives of minimizing environmental risk as well as cost. The issue of uncertainty in the estimated waste quantities from multiple sources is addressed using the Monte Carlo simulation technique. An illustrated example of computer waste management in Delhi, India is presented to demonstrate the usefulness of the proposed model and to study tradeoffs between cost and risk. The results of the example problem show that it is possible to reduce the environmental risk significantly by a marginal increase in the available cost. The proposed model can serve as a powerful tool to address the environmental problems associated with exponentially growing quantities of computer waste which are presently being managed using rudimentary methods of reuse, recovery and disposal by various small-scale vendors.
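
    The decision model described above is an integer linear program that trades off cost against environmental risk. A toy weighted-sum sketch of such a formulation, with purely illustrative numbers, the PuLP library assumed to be available, and no connection to the authors' Delhi case study, is:

      import pulp

      facilities = ["recycle", "treat", "landfill"]
      cost = {"recycle": 40, "treat": 70, "landfill": 20}      # cost per tonne (illustrative)
      risk = {"recycle": 1, "treat": 2, "landfill": 8}         # relative environmental risk per tonne
      capacity = {"recycle": 60, "treat": 50, "landfill": 100} # tonnes each facility can take
      waste_total = 120                                        # tonnes of waste to allocate
      w_cost, w_risk = 1.0, 5.0                                # weights of the two objectives

      prob = pulp.LpProblem("computer_waste_allocation", pulp.LpMinimize)
      x = {f: pulp.LpVariable(f"x_{f}", lowBound=0, cat="Integer") for f in facilities}

      prob += pulp.lpSum(w_cost * cost[f] * x[f] + w_risk * risk[f] * x[f] for f in facilities)
      prob += pulp.lpSum(x[f] for f in facilities) == waste_total
      for f in facilities:
          prob += x[f] <= capacity[f]

      prob.solve()
      print({f: x[f].value() for f in facilities})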

  19. Prenatal prochloraz treatment significantly increases pregnancy length and reduces offspring weight but does not affect social-olfactory memory in rats.

    Science.gov (United States)

    Dmytriyeva, Oksana; Klementiev, Boris; Berezin, Vladimir; Bock, Elisabeth

    2013-07-01

    Metabolites of the commonly used imidazole fungicide prochloraz are androgen receptor antagonists. They have been shown to block androgen-driven development and compromise reproductive function. We tested the effect of prochloraz on cognitive behavior following exposure to this fungicide during the perinatal period. Pregnant Wistar rats were administered a 200 mg/kg dose of prochloraz on gestational day (GD) 7, GD11, and GD15. The social recognition test (SRT) was performed on 7-week-old male rat offspring. We found an increase in pregnancy length and a significantly reduced pup weight on PND15 and PND40 but no effect of prenatal prochloraz exposure on social investigation or acquisition of social-olfactory memory. Copyright © 2012 Elsevier GmbH. All rights reserved.

  20. VLSI Architectures for Computing DFT's

    Science.gov (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Reed, I. S.; Pei, D. Y.

    1986-01-01

    Simplifications result from use of residue Fermat number systems. System of finite arithmetic over residue Fermat number systems enables calculation of discrete Fourier transform (DFT) of series of complex numbers with reduced number of multiplications. Computer architectures based on approach suitable for design of very-large-scale integrated (VLSI) circuits for computing DFT's. General approach not limited to DFT's; Applicable to decoding of error-correcting codes and other transform calculations. System readily implemented in VLSI.

  1. Technological significances to reduce the material problems. Feasibility of heat flux reduction

    International Nuclear Information System (INIS)

    Yamazaki, Seiichiro; Shimada, Michiya.

    1994-01-01

    For a divertor plate in a fusion power reactor, a high temperature coolant must be used for heat removal to keep thermal efficiency high. It makes the temperature and thermal stress of wall materials higher than the design limits. Issues of the coolant itself, e.g. burnout of high temperature water, will also become a serious problem. Sputtering erosion of the surface material will be a great concern of its lifetime. Therefore, it is necessary to reduce the heat and particle loads to the divertor plate technologically. The feasibility of some technological methods of heat reduction, such as separatrix sweeping, is discussed. As one of the most promising ideas, the methods of radiative cooling of the divertor plasma are summarized based on the recent results of large tokamaks. The feasibility of remote radiative cooling and gas divertor is discussed. The ideas are considered in recent design studies of tokamak power reactors and experimental reactors. By way of example, conceptual designs of divertor plate for the steady state tokamak power reactor are described. (author)

  2. Computer science handbook

    CERN Document Server

    Tucker, Allen B

    2004-01-01

    Due to the great response to the famous Computer Science Handbook edited by Allen B. Tucker, … in 2004 Chapman & Hall/CRC published a second edition of this comprehensive reference book. Within more than 70 chapters, every one new or significantly revised, one can find any kind of information and references about computer science one can imagine. … All in all, there is absolute nothing about computer science that can not be found in the encyclopedia with its 110 survey articles …-Christoph Meinel, Zentralblatt MATH

  3. Ceiling personalized ventilation combined with desk fans for reduced direct and indirect cross-contamination and efficient use of office space

    International Nuclear Information System (INIS)

    Habchi, Carine; Ghali, Kamel; Ghaddar, Nesreen; Chakroun, Walid; Alotaibi, Sorour

    2016-01-01

    Highlights: • Cross-infection occurs by direct inhalation and contact of contaminated surfaces. • Mixing ventilation performance is degraded at reduced distance between occupants. • Ceiling personalized ventilation significantly reduces cross-contamination. • The optimized system induces large energy savings compared to mixing ventilation. • The optimized system improves the occupation density from 12 to 8 m² per occupant. - Abstract: Crowded offices with short distances separating workers’ stations increase the probability of respiratory cross-infection via two different paths. One path is contaminant transmission through the air by direct inhalation, and the other is through body contact with contaminated surfaces and walls. The mixing ventilation principles used today reduce the probability of cross-contamination by increasing the distance between the stations, challenging the efficient use of the space, or by supplying more fresh air in the space, which is energy inefficient. In this work, a new cooling and ventilation configuration is studied by computational fluid dynamics modeling, with consideration of space occupancy density while providing good indoor air quality. The configuration considers a ceiling personalized ventilation system equipped with desk fans. The ability of the computational fluid dynamics model to compute the thermal, velocity and concentration fields was validated by experiments and published data. The main objective of the performed experiments was to ensure that the developed computational fluid dynamics model can capture the effect of the desk fan flow rate on particle behavior. The studied system is found to provide acceptable indoor air quality at a shorter distance between the occupants compared to the mixing system, at considerable energy savings. By optimizing the design of the proposed personalized ventilation system, the occupancy density in an office is enhanced to 8 m² per occupant compared to 12 m² per occupant for

  4. A cost modelling system for cloud computing

    OpenAIRE

    Ajeh, Daniel; Ellman, Jeremy; Keogh, Shelagh

    2014-01-01

    An advance in technology unlocks new opportunities for organizations to increase their productivity, efficiency and process automation while reducing the cost of doing business as well. The emergence of cloud computing addresses these prospects through the provision of agile systems that are scalable, flexible and reliable as well as cost effective. Cloud computing has made hosting and deployment of computing resources cheaper and easier with no up-front charges but pay per-use flexible payme...

  5. Computer tablet distraction reduces pain and anxiety in pediatric burn patients undergoing hydrotherapy: A randomized trial.

    Science.gov (United States)

    Burns-Nader, Sherwood; Joe, Lindsay; Pinion, Kelly

    2017-09-01

    Distraction is often used in conjunction with analgesics to minimize pain in pediatric burn patients during treatment procedures. Computer tablets provide many options for distraction items in one tool and are often used during medical procedures. Few studies have examined the effectiveness of tablet distraction in improving the care of pediatric burn patients. This study examines the effectiveness of tablet distraction provided by a child life specialist to minimize pain and anxiety in pediatric burn patients undergoing hydrotherapy. Thirty pediatric patients (4-12) undergoing hydrotherapy for the treatment of burns participated in this randomized clinical trial. The tablet distraction group received tablet distraction provided by a child life specialist while those in the control group received standard care. Pain was assessed through self-reports and observation reports. Anxiety was assessed through behavioral observations. Length of procedure was also recorded. Nurses reported significantly less pain for the tablet distraction group compared to the control group. There was no significant difference between groups on self-reported pain. The tablet distraction group displayed significantly less anxiety during the procedure compared to the control group. Also, the tablet distraction group returned to baseline after the procedure while those in the control group displayed higher anxiety post-procedure. There was no difference in the length of the procedure between groups. These findings suggest tablet distraction provided by a child life specialist may be an effective method for improving pain and anxiety in children undergoing hydrotherapy treatment for burns. Copyright © 2017 Elsevier Ltd and ISBI. All rights reserved.

  6. Evaluating Active Interventions to Reduce Student Procrastination

    OpenAIRE

    Martin, Joshua Deckert

    2015-01-01

    Procrastination is a pervasive problem in education. In computer science, procrastination and lack of necessary time management skills to complete programming projects are viewed as primary causes of student attrition. The most effective techniques known to reduce procrastination are resource-intensive and do not scale well to large classrooms. In this thesis, we examine three course interventions designed to both reduce procrastination and be scalable for large classrooms. Reflective writ...

  7. MapReduce Based Parallel Bayesian Network for Manufacturing Quality Control

    Science.gov (United States)

    Zheng, Mao-Kuan; Ming, Xin-Guo; Zhang, Xian-Yu; Li, Guo-Ming

    2017-09-01

    The increasing complexity of industrial products and manufacturing processes has challenged conventional statistics-based quality management approaches in the circumstances of dynamic production. A Bayesian network and big data analytics integrated approach for manufacturing process quality analysis and control is proposed. Based on the Hadoop distributed architecture and the MapReduce parallel computing model, the large volume and variety of quality-related data generated during the manufacturing process can be dealt with. Artificial intelligence algorithms, including Bayesian network learning, classification and reasoning, are embedded into the Reduce process. Relying on the ability of the Bayesian network to deal with dynamic and uncertain problems and the parallel computing power of MapReduce, a Bayesian network of impact factors on quality is built based on the prior probability distribution and modified with the posterior probability distribution. A case study on hull segment manufacturing precision management for ship and offshore platform building shows that computing speed accelerates almost directly proportionally to the increase in computing nodes. It is also shown that the proposed model is feasible for locating and reasoning about root causes, forecasting manufacturing outcomes, and intelligent decision-making for precision problem solving. The integration of big data analytics and the BN method offers a whole new perspective on manufacturing quality control.

  8. Breast Cancer Detection with Reduced Feature Set

    Directory of Open Access Journals (Sweden)

    Ahmet Mert

    2015-01-01

    Full Text Available This paper explores the feature reduction properties of independent component analysis (ICA) in a breast cancer decision support system. The Wisconsin diagnostic breast cancer (WDBC) dataset is reduced to a one-dimensional feature vector by computing an independent component (IC). The original data with 30 features and the reduced single feature (IC) are used to evaluate the diagnostic accuracy of classifiers such as k-nearest neighbor (k-NN), artificial neural network (ANN), radial basis function neural network (RBFNN), and support vector machine (SVM). The comparison of the proposed classification using the IC with the original feature set is also tested with different validation (5/10-fold cross-validation) and partitioning (20%–40%) methods. These classifiers are evaluated on how effectively they categorize tumors as benign and malignant in terms of specificity, sensitivity, accuracy, F-score, Youden’s index, discriminant power, and the receiver operating characteristic (ROC) curve with its criterion values, including area under the curve (AUC) and 95% confidence interval (CI). This represents an improvement in the diagnostic decision support system while reducing computational complexity.
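
    The core idea above, reducing the 30 WDBC features to a single independent component before classification, can be sketched with scikit-learn as follows; this is a rough re-creation under assumed settings (standardization, 5 neighbours, 10-fold cross-validation), not the paper's exact protocol:

      from sklearn.datasets import load_breast_cancer
      from sklearn.decomposition import FastICA
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import cross_val_score

      X, y = load_breast_cancer(return_X_y=True)          # WDBC: 569 samples, 30 features

      one_ic_knn = make_pipeline(StandardScaler(),
                                 FastICA(n_components=1, random_state=0),
                                 KNeighborsClassifier(n_neighbors=5))
      full_knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))

      print("1 IC   :", cross_val_score(one_ic_knn, X, y, cv=10).mean().round(3))
      print("30 dims:", cross_val_score(full_knn, X, y, cv=10).mean().round(3))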

  9. Higher-order techniques in computational electromagnetics

    CERN Document Server

    Graglia, Roberto D

    2016-01-01

    Higher-Order Techniques in Computational Electromagnetics explains 'high-order' techniques that can significantly improve the accuracy, computational cost, and reliability of computational techniques for high-frequency electromagnetics, such as antennas, microwave devices and radar scattering applications.

  10. The Evolution of Polymer Composition during PHA Accumulation: The Significance of Reducing Equivalents

    Directory of Open Access Journals (Sweden)

    Liliana Montano-Herrera

    2017-03-01

    Full Text Available This paper presents a systematic investigation into monomer development during mixed culture Polyhydroxyalkanoates (PHA) accumulation involving concurrent active biomass growth and polymer storage. A series of mixed culture PHA accumulation experiments, using several different substrate-feeding strategies, was carried out. The feedstock comprised volatile fatty acids, which were applied as single carbon sources, as mixtures, or in series, using a fed-batch feed-on-demand controlled bioprocess. A dynamic trend in active biomass growth as well as polymer composition was observed. The observations were consistent over replicate accumulations. Metabolic flux analysis (MFA) was used to investigate metabolic activity through time. It was concluded that carbon flux, and consequently copolymer composition, could be linked with how reducing equivalents are generated.

  11. Interface between computational fluid dynamics (CFD) and plant analysis computer codes

    International Nuclear Information System (INIS)

    Coffield, R.D.; Dunckhorst, F.F.; Tomlinson, E.T.; Welch, J.W.

    1993-01-01

    Computational fluid dynamics (CFD) can provide valuable input to the development of advanced plant analysis computer codes. The types of interfacing discussed in this paper will directly contribute to modeling and accuracy improvements throughout the plant system and should result in significant reduction of design conservatisms that have been applied to such analyses in the past

  12. Iterative reconstruction reduces abdominal CT dose

    International Nuclear Information System (INIS)

    Martinsen, Anne Catrine Trægde; Sæther, Hilde Kjernlie; Hol, Per Kristian; Olsen, Dag Rune; Skaane, Per

    2012-01-01

    Objective: In medical imaging, lowering radiation dose from computed tomography scanning, without reducing diagnostic performance is a desired achievement. Iterative image reconstruction may be one tool to achieve dose reduction. This study reports the diagnostic performance using a blending of 50% statistical iterative reconstruction (ASIR) and filtered back projection reconstruction (FBP) compared to standard FBP image reconstruction at different dose levels for liver phantom examinations. Methods: An anthropomorphic liver phantom was scanned at 250, 185, 155, 140, 120 and 100 mA s, on a 64-slice GE Lightspeed VCT scanner. All scans were reconstructed with ASIR and FBP. Four readers evaluated independently on a 5-point scale 21 images, each containing 32 test sectors. In total 672 areas were assessed. ROC analysis was used to evaluate the differences. Results: There was a difference in AUC between the 250 mA s FBP images and the 120 and 100 mA s FBP images. ASIR reconstruction gave a significantly higher diagnostic performance compared to standard reconstruction at 100 mA s. Conclusion: A blending of 50–90% ASIR and FBP may improve image quality of low dose CT examinations of the liver, and thus give a potential for reducing radiation dose.

  13. A REVIEW ON SECURITY AND PRIVACY ISSUES IN CLOUD COMPUTING

    OpenAIRE

    Gulshan Kumar*, Dr.Vijay Laxmi

    2017-01-01

    Cloud computing is an upcoming paradigm that offers tremendous advantages in economical aspects, such as reduced time to market, flexible computing capabilities, and limitless computing power. To use the full potential of cloud computing, data is transferred, processed and stored by external cloud providers. However, data owners are very skeptical to place their data outside their own control sphere. Cloud computing is a new development of grid, parallel, and distributed computing with visual...

  14. A Computer-Based Glucose Management System Reduces the Incidence of Forgotten Glucose Measurements: A Retrospective Observational Study.

    Science.gov (United States)

    Okura, Tsuyoshi; Teramoto, Kei; Koshitani, Rie; Fujioka, Yohei; Endo, Yusuke; Ueki, Masaru; Kato, Masahiko; Taniguchi, Shin-Ichi; Kondo, Hiroshi; Yamamoto, Kazuhiro

    2018-04-17

    Frequent glucose measurements are needed for good blood glucose control in hospitals; however, this requirement means that measurements can be forgotten. We developed a novel glucose management system using an iPod® and electronic health records. A time schedule system for glucose measurement was developed using point-of-care testing, an iPod®, and electronic health records. The system contains the glucose measurement schedule and an alarm sounds if a measurement is forgotten. The number of times measurements were forgotten was analyzed. Approximately 7000 glucose measurements were recorded per month. Before implementation of the system, the average number of times measurements were forgotten was 4.8 times per month. This significantly decreased to 2.6 times per month after the system started. We also analyzed the incidence of forgotten glucose measurements as a proportion of the total number of measurements for each period and found a significant difference between the two 9-month periods (43/64,049 vs 24/65,870, P = 0.014, chi-squared test). This computer-based blood glucose monitoring system is useful for the management of glucose monitoring in hospitals. Johnson & Johnson Japan.
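
    The reported incidence comparison can be checked with a standard chi-squared test on the 2x2 table of forgotten versus completed measurements. The sketch below assumes the test was run without continuity correction, which roughly reproduces the reported P = 0.014:

      from scipy.stats import chi2_contingency

      table = [[43, 64_049 - 43],    # before the system: forgotten, not forgotten
               [24, 65_870 - 24]]    # after the system
      chi2, p, dof, _expected = chi2_contingency(table, correction=False)
      print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.3f}")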

  15. Determining coding CpG islands by identifying regions significant for pattern statistics on Markov chains.

    Science.gov (United States)

    Singer, Meromit; Engström, Alexander; Schönhuth, Alexander; Pachter, Lior

    2011-09-23

    Recent experimental and computational work confirms that CpGs can be unmethylated inside coding exons, thereby showing that codons may be subjected to both genomic and epigenomic constraint. It is therefore of interest to identify coding CpG islands (CCGIs) that are regions inside exons enriched for CpGs. The difficulty in identifying such islands is that coding exons exhibit sequence biases determined by codon usage and constraints that must be taken into account. We present a method for finding CCGIs that showcases a novel approach we have developed for identifying regions of interest that are significant (with respect to a Markov chain) for the counts of any pattern. Our method begins with the exact computation of tail probabilities for the number of CpGs in all regions contained in coding exons, and then applies a greedy algorithm for selecting islands from among the regions. We show that the greedy algorithm provably optimizes a biologically motivated criterion for selecting islands while controlling the false discovery rate. We applied this approach to the human genome (hg18) and annotated CpG islands in coding exons. The statistical criterion we apply to evaluating islands reduces the number of false positives in existing annotations, while our approach to defining islands reveals significant numbers of undiscovered CCGIs in coding exons. Many of these appear to be examples of functional epigenetic specialization in coding exons.
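
    One standard way to obtain such tail probabilities, shown here only as an illustration and not as the authors' exact-computation algorithm, is a dynamic program over positions and pattern counts under a first-order Markov chain:

      import numpy as np

      BASES = "ACGT"

      def cpg_count_tail(P, start, n, k_obs):
          """P[i, j] = Pr(next base j | current base i); returns Pr(#CG dinucleotides >= k_obs)."""
          c, g = BASES.index("C"), BASES.index("G")
          dist = np.zeros((4, n))                  # dist[b, k] = Pr(current base is b, k CGs so far)
          dist[BASES.index(start), 0] = 1.0
          for _ in range(n - 1):
              new = np.zeros_like(dist)
              for b in range(4):
                  for nb in range(4):
                      shift = 1 if (b == c and nb == g) else 0
                      new[nb, shift:] += P[b, nb] * dist[b, :n - shift]
              dist = new
          return dist[:, k_obs:].sum()

      P = np.full((4, 4), 0.25)                    # uniform chain as a toy null model
      print(cpg_count_tail(P, start="A", n=200, k_obs=20))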

  16. Improving Quality of Service and Reducing Power Consumption with WAN accelerator in Cloud Computing Environments

    OpenAIRE

    Shin-ichi Kuribayashi

    2013-01-01

    The widespread use of cloud computing services is expected to degrade the Quality of Service and to increase the power consumption of ICT devices, since the distance to a server becomes longer than before. Migration of virtual machines over a wide area can solve many problems such as load balancing and power saving in cloud computing environments. This paper proposes to dynamically apply a WAN accelerator within the network when a virtual machine is moved to a distant center, in order to preve...

  17. Interactive Computer Graphics

    Science.gov (United States)

    Kenwright, David

    2000-01-01

    Aerospace data analysis tools that significantly reduce the time and effort needed to analyze large-scale computational fluid dynamics simulations have emerged this year. The current approach for most postprocessing and visualization work is to explore the 3D flow simulations with one of a dozen or so interactive tools. While effective for analyzing small data sets, this approach becomes extremely time consuming when working with data sets larger than one gigabyte. An active area of research this year has been the development of data mining tools that automatically search through gigabyte data sets and extract the salient features with little or no human intervention. With these so-called feature extraction tools, engineers are spared the tedious task of manually exploring huge amounts of data to find the important flow phenomena. The software tools identify features such as vortex cores, shocks, separation and attachment lines, recirculation bubbles, and boundary layers. Some of these features can be extracted in a few seconds; others take minutes to hours on extremely large data sets. The analysis can be performed off-line in a batch process, either during or following the supercomputer simulations. These computations have to be performed only once, because the feature extraction programs search the entire data set and find every occurrence of the phenomena being sought. Because the important questions about the data are being answered automatically, interactivity is less critical than it is with traditional approaches.

  18. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2015-01-01

    Full Text Available Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation especially when the size of data is large. Nowadays, big data has received a momentum from both industry and academia. To fulfill the potentials of ANNs for big data applications, the computation process must be speeded up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model to facilitate data intensive applications. Three data intensive scenarios are considered in the parallelization process in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation.

  19. Thrombolysis significantly reduces transient myocardial ischaemia following first acute myocardial infarction

    DEFF Research Database (Denmark)

    Mickley, H; Pless, P; Nielsen, J R

    1992-01-01

    In order to investigate whether thrombolysis affects residual myocardial ischaemia, we prospectively performed a predischarge maximal exercise test and early out-of-hospital ambulatory ST segment monitoring in 123 consecutive men surviving a first acute myocardial infarction (AMI). Seventy-four patients fulfilled our criteria for thrombolysis, but only the last 35 patients included received thrombolytic therapy. As thrombolysis was not available in our Department at the start of the study, the first 39 patients included were conservatively treated (controls). No significant differences in baseline clinical characteristics were found between the two groups. In-hospital atrial fibrillation and digoxin therapy was more prevalent in controls (P less than 0.05). During exercise, thrombolysed patients reached a higher maximal work capacity compared with controls: 160 +/- 41 vs 139 +/- 34 W (P...

  20. CHPS IN CLOUD COMPUTING ENVIRONMENT

    OpenAIRE

    K.L.Giridas; A.Shajin Nargunam

    2012-01-01

    Workflows have been utilized to characterize various forms of applications with high processing and storage space demands. So, to make the cloud computing environment more eco-friendly, our research project aimed at reducing the E-waste accumulated by computers. In a hybrid cloud, the user has the flexibility offered by public cloud resources, which can be combined with the private resource pool as required. Our previous work described the process of combining the low range and mid range proce...

  1. Cloud computing can simplify HIT infrastructure management.

    Science.gov (United States)

    Glaser, John

    2011-08-01

    Software as a Service (SaaS), built on cloud computing technology, is emerging as the forerunner in IT infrastructure because it helps healthcare providers reduce capital investments. Cloud computing leads to predictable, monthly, fixed operating expenses for hospital IT staff. Outsourced cloud computing facilities are state-of-the-art data centers boasting some of the most sophisticated networking equipment on the market. The SaaS model helps hospitals safeguard against technology obsolescence, minimizes maintenance requirements, and simplifies management.

  2. A Revolution in Information Technology - Cloud Computing

    OpenAIRE

    Divya BHATT

    2012-01-01

    What is the Internet? It is collection of “interconnected networks” represented as a Cloud in network diagrams and Cloud Computing is a metaphor for certain parts of the Internet. The IT enterprises and individuals are searching for a way to reduce the cost of computation, storage and communication. Cloud Computing is an Internet-based technology providing “On-Demand” solutions for addressing these scenarios that should be flexible enough for adaptation and responsive to requirements. The hug...

  3. FACC: A Novel Finite Automaton Based on Cloud Computing for the Multiple Longest Common Subsequences Search

    Directory of Open Access Journals (Sweden)

    Yanni Li

    2012-01-01

    Full Text Available Searching for the multiple longest common subsequences (MLCS) has significant applications in areas such as bioinformatics, information processing, and data mining. Although a few parallel MLCS algorithms have been proposed, the efficiency and effectiveness of these algorithms are not satisfactory given the increasing complexity and size of biologic data. To overcome the shortcomings of the existing MLCS algorithms, and considering that the MapReduce parallel framework of cloud computing is a promising technology for cost-effective high-performance parallel computing, a novel finite automaton (FA) based on cloud computing, called FACC, is proposed under the MapReduce parallel framework, so as to provide a more efficient and effective general parallel MLCS algorithm. FACC adopts the ideas of matched pairs and finite automata by preprocessing sequences, constructing successor tables, and building a common-subsequence finite automaton to search for the MLCS. Simulation experiments on a set of benchmarks from both real DNA and amino acid sequences have been conducted, and the results show that the proposed FACC algorithm outperforms the current leading parallel MLCS algorithm FAST-MLCS.
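
    As background, MLCS generalizes the classical longest common subsequence problem from two sequences to many. The familiar two-sequence dynamic program below is only that baseline, not FACC:

      def lcs(a: str, b: str) -> str:
          """Classical O(len(a)*len(b)) LCS of two sequences, with traceback."""
          m, n = len(a), len(b)
          L = [[0] * (n + 1) for _ in range(m + 1)]      # L[i][j] = LCS length of a[:i], b[:j]
          for i in range(1, m + 1):
              for j in range(1, n + 1):
                  L[i][j] = L[i-1][j-1] + 1 if a[i-1] == b[j-1] else max(L[i-1][j], L[i][j-1])
          out, i, j = [], m, n                            # trace back one optimal subsequence
          while i and j:
              if a[i-1] == b[j-1]:
                  out.append(a[i-1]); i -= 1; j -= 1
              elif L[i-1][j] >= L[i][j-1]:
                  i -= 1
              else:
                  j -= 1
          return "".join(reversed(out))

      print(lcs("GATTACA", "GCATGCU"))   # one optimal answer, e.g. "GATC"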

  4. Customization of user interfaces to reduce errors and enhance user acceptance.

    Science.gov (United States)

    Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram

    2014-03-01

    Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  5. Efficient 3D geometric and Zernike moments computation from unstructured surface meshes.

    Science.gov (United States)

    Pozo, José María; Villa-Uriol, Maria-Cruz; Frangi, Alejandro F

    2011-03-01

    This paper introduces and evaluates a fast exact algorithm and a series of faster approximate algorithms for the computation of 3D geometric moments from an unstructured surface mesh of triangles. Being based on the object surface reduces the computational complexity of these algorithms with respect to volumetric grid-based algorithms. In contrast, it can only be applied for the computation of geometric moments of homogeneous objects. This advantage and restriction is shared with other proposed algorithms based on the object boundary. The proposed exact algorithm reduces the computational complexity for computing geometric moments up to order N with respect to previously proposed exact algorithms, from N^9 to N^6. The approximate series algorithm appears as a power series on the rate between triangle size and object size, which can be truncated at any desired degree. The higher the number and quality of the triangles, the better the approximation. This approximate algorithm reduces the computational complexity to N^3. In addition, the paper introduces a fast algorithm for the computation of 3D Zernike moments from the computed geometric moments, with a computational complexity N^4, while the previously proposed algorithm is of order N^6. The error introduced by the proposed approximate algorithms is evaluated in different shapes, and the cost-benefit ratio in terms of error and computational time is analyzed for different moment orders.
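
    The boundary-based idea is easy to see for the lowest-order moments: the volume and centroid of a closed triangle mesh follow from signed tetrahedra formed with the origin. The sketch below covers only those low orders, not the paper's general moment algorithm or its Zernike extension:

      import numpy as np

      def volume_and_centroid(vertices, triangles):
          """Zeroth and first geometric moments of a closed, outward-oriented triangle mesh."""
          v = np.asarray(vertices, dtype=float)
          total_vol, weighted_centroid = 0.0, np.zeros(3)
          for i, j, k in triangles:
              a, b, c = v[i], v[j], v[k]
              vol = np.dot(a, np.cross(b, c)) / 6.0         # signed tetrahedron (origin, a, b, c)
              total_vol += vol
              weighted_centroid += vol * (a + b + c) / 4.0  # tetra centroid; origin term is zero
          return total_vol, weighted_centroid / total_vol

      # Unit cube triangulated into 12 outward-facing triangles
      verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
      tris = [(0, 1, 3), (0, 3, 2), (4, 6, 7), (4, 7, 5), (0, 4, 5), (0, 5, 1),
              (2, 3, 7), (2, 7, 6), (0, 2, 6), (0, 6, 4), (1, 5, 7), (1, 7, 3)]
      print(volume_and_centroid(verts, tris))               # ~ (1.0, [0.5, 0.5, 0.5])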

  6. Finite-element model evaluation of barrier configurations to reduce infiltration into waste-disposal structures: preliminary results and design considerations

    International Nuclear Information System (INIS)

    Lu, A.H.; Phillips, S.J.; Adams, M.R.

    1982-09-01

    Barriers to reduce infiltration into waste burial disposal structures (trenches, pits, etc.) may be required to provide adequate waste confinement. The preliminary engineering design of these barriers should consider interrelated barrier performance factors. This paper summarizes preliminary computer simulation activities to further engineering barrier design efforts. Several barrier configurations were conceived and evaluated. Models were simulated for each barrier configuration using a finite element computer code. Results of this preliminary evaluation indicate that barrier configurations, depending on their morphology and materials, may significantly influence infiltration, flux, drainage, and storage of water through and within waste disposal structures. 9 figures

  7. Region-oriented CT image representation for reducing computing time of Monte Carlo simulations

    International Nuclear Information System (INIS)

    Sarrut, David; Guigues, Laurent

    2008-01-01

    Purpose. We propose a new method for efficient particle transportation in voxelized geometry for Monte Carlo simulations. We describe its use for calculating dose distribution in CT images for radiation therapy. Material and methods. The proposed approach, based on an implicit volume representation named segmented volume, coupled with an adapted segmentation procedure and a distance map, allows us to minimize the number of boundary crossings, which slows down simulation. The method was implemented with the GEANT4 toolkit and compared to four other methods: One box per voxel, parameterized volumes, octree-based volumes, and nested parameterized volumes. For each representation, we compared dose distribution, time, and memory consumption. Results. The proposed method allows us to decrease computational time by up to a factor of 15, while keeping memory consumption low, and without any modification of the transportation engine. Speeding up is related to the geometry complexity and the number of different materials used. We obtained an optimal number of steps with removal of all unnecessary steps between adjacent voxels sharing a similar material. However, the cost of each step is increased. When the number of steps cannot be decreased enough, due for example, to the large number of material boundaries, such a method is not considered suitable. Conclusion. This feasibility study shows that optimizing the representation of an image in memory potentially increases computing efficiency. We used the GEANT4 toolkit, but we could potentially use other Monte Carlo simulation codes. The method introduces a tradeoff between speed and geometry accuracy, allowing computational time gain. However, simulations with GEANT4 remain slow and further work is needed to speed up the procedure while preserving the desired accuracy
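
    The gain comes from letting a particle take one step per homogeneous region instead of one step per voxel. A toy sketch of that merging idea along a single voxel row, not the GEANT4 segmented-volume implementation itself, is:

      from itertools import groupby

      def merge_voxel_row(materials):
          """Collapse a row of per-voxel material labels into (material, run_length) regions."""
          return [(mat, sum(1 for _ in run)) for mat, run in groupby(materials)]

      row = ["air"] * 5 + ["soft_tissue"] * 12 + ["bone"] * 3 + ["soft_tissue"] * 8
      regions = merge_voxel_row(row)
      print(regions)                  # [('air', 5), ('soft_tissue', 12), ('bone', 3), ('soft_tissue', 8)]
      print(f"boundary crossings: {len(row)} voxel steps -> {len(regions)} region steps")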

  8. The potential benefits of photonics in the computing platform

    Science.gov (United States)

    Bautista, Jerry

    2005-03-01

    The increase in computational requirements for real-time image processing, complex computational fluid dynamics, very large scale data mining in the health industry/Internet, and predictive models for financial markets are driving computer architects to consider new paradigms that rely upon very high speed interconnects within and between computing elements. Further challenges result from reduced power requirements, reduced transmission latency, and greater interconnect density. Optical interconnects may solve many of these problems, with the added benefit of extended reach. In addition, photonic interconnects provide relative EMI immunity, which is becoming an increasingly important issue given the greater dependence on wireless connectivity. However, to be truly functional, the optical interconnect mesh should be able to support arbitration, addressing, etc. completely in the optical domain, with a BER that is more stringent than "traditional" communication requirements. Outlined are challenges in the advanced computing environment, some possible optical architectures and relevant platform technologies, as well as a rough sizing of these opportunities, which are quite large relative to the more "traditional" optical markets.

  9. On nonlinear reduced order modeling

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.

    2011-01-01

    When applied to a model that receives n input parameters and predicts m output responses, a reduced order model estimates the variations in the m outputs of the original model resulting from variations in its n inputs. While direct execution of the forward model could provide these variations, reduced order modeling plays an indispensable role for most real-world complex models. This follows because the solutions of complex models are expensive in terms of required computational overhead, thus rendering their repeated execution computationally infeasible. To overcome this problem, reduced order modeling determines a relationship (often referred to as a surrogate model) between the input and output variations that is much cheaper to evaluate than the original model. While it is desirable to seek highly accurate surrogates, the computational overhead quickly becomes intractable, especially for high-dimensional models, n ≫ 10. In this manuscript, we demonstrate a novel reduced order modeling method for building a surrogate model that employs only 'local first-order' derivatives and a new tensor-free expansion to efficiently identify all the important features of the original model to reach a predetermined level of accuracy. This is achieved via a hybrid approach in which local first-order derivatives (i.e., gradients) of a pseudo response (a pseudo response represents a random linear combination of the original model's responses) are randomly sampled utilizing a tensor-free expansion around some reference point, with the resulting gradient information aggregated in a subspace (denoted the active subspace) of dimension much smaller than the dimension of the input parameter space. The active subspace is then sampled employing state-of-the-art techniques for global sampling methods. The proposed method hybridizes the use of global sampling methods for uncertainty quantification and local variational methods for sensitivity analysis. In a similar manner to
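
    The gradient-aggregation step described above can be sketched generically: sample gradients of pseudo responses (random linear combinations of the outputs) around a reference point, stack them, and take the dominant left singular vectors as the active subspace. The Python example below uses a made-up model and plain finite-difference gradients, so it illustrates the idea rather than the tensor-free expansion of the manuscript.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 5                                   # number of inputs and outputs

def model(x):
    """Hypothetical forward model: m responses that depend mostly on two directions."""
    w1, w2 = np.arange(1, n + 1) / n, np.cos(np.arange(n))
    t1, t2 = x @ w1, x @ w2
    return np.array([np.sin(t1), t1 * t2, np.exp(0.1 * t2), t1 ** 2, t2])

def fd_gradient(f, x, h=1e-5):
    """Forward-difference gradient of a scalar function f at x."""
    f0, g = f(x), np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - f0) / h
    return g

grads = []
for _ in range(50):                            # sampled pseudo responses
    c = rng.standard_normal(m)                 # random combination of the outputs
    pseudo = lambda x, c=c: c @ model(x)
    grads.append(fd_gradient(pseudo, 0.1 * rng.standard_normal(n)))

U, s, _ = np.linalg.svd(np.array(grads).T, full_matrices=False)
k = int(np.sum(s > 0.01 * s[0]))               # effective active-subspace dimension
print("active subspace dimension ~", k)        # expected: about 2 for this toy model
print("basis of the active subspace:", U[:, :k].shape)
```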

  10. Computer code for quantitative ALARA evaluations

    International Nuclear Information System (INIS)

    Voilleque, P.G.

    1984-01-01

    A FORTRAN computer code has been developed to simplify the determination of whether dose reduction actions meet the as low as is reasonably achievable (ALARA) criterion. The calculations are based on the methodology developed for the Atomic Industrial Forum. The code is used for analyses of eight types of dose reduction actions, characterized as follows: reduce dose rate, reduce job frequency, reduce productive working time, reduce crew size, increase administrative dose limit for the task, and increase the workers' time utilization and dose utilization through (a) improved working conditions, (b) basic skill training, or (c) refresher training for special skills. For each type of action, two analysis modes are available. The first is a generic analysis in which the program computes potential benefits (in dollars) for a range of possible improvements, e.g., for a range of lower dose rates. Generic analyses are most useful in the planning stage and for evaluating the general feasibility of alternative approaches. The second is a specific analysis in which the potential annual benefits of a specific level of improvement and the annual implementation cost are compared. The potential benefits reflect savings in operational and societal costs that can be realized if occupational radiation doses are reduced. Because the potential benefits depend upon many variables which characterize the job, the workplace, and the workers, there is no unique relationship between the potential dollar savings and the dose savings. The computer code permits rapid quantitative analyses of alternatives and is a tool that supplements the health physicist's professional judgment. The program output provides a rational basis for decision-making and a record of the assumptions employed
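
    In its simplest form, the specific-analysis mode amounts to comparing the monetized value of the avoided collective dose with the annual implementation cost. The Python sketch below is a generic ALARA-style comparison with hypothetical numbers, including an assumed monetary value per person-sievert; it does not reproduce the Atomic Industrial Forum methodology implemented in the FORTRAN code.

```python
def alara_net_benefit(dose_before, dose_after, value_per_person_sv, annual_cost):
    """Annual net benefit (dollars) of a dose reduction action: monetized
    collective-dose savings (person-Sv) minus the implementation cost."""
    benefit = (dose_before - dose_after) * value_per_person_sv
    return benefit - annual_cost, benefit

# Hypothetical example: extra shielding lowers the annual collective dose for a
# maintenance task from 0.30 to 0.12 person-Sv; the monetary value per person-Sv
# and the costs are illustrative only.
net, benefit = alara_net_benefit(0.30, 0.12, value_per_person_sv=200_000.0,
                                 annual_cost=25_000.0)
print(f"dose-saving benefit = ${benefit:,.0f}, net benefit = ${net:,.0f}")
# A positive net benefit suggests the action passes an ALARA-style cost-benefit test.
```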

  11. Computer Vision Syndrome in Eleven to Eighteen-Year-Old Students in Qazvin

    Directory of Open Access Journals (Sweden)

    Khalaj

    2015-08-01

    Full Text Available Background Prolonged use of computers can lead to complications such as eye strain, eye aches and headaches, double and blurred vision, tired eyes, irritation, burning and itching eyes, eye redness, light sensitivity, dry eyes, muscle strains, and other problems. Objectives The aim of the present study was to evaluate visual problems and major symptoms, and their associations, among computer users aged between 11 and 18 years old residing in the Qazvin city of Iran, during year 2010. Patients and Methods This cross-sectional study was done on 642 secondary to pre-university students who were referred to the eye clinic of Buali hospital of Qazvin during year 2013. A questionnaire consisting of demographic information and 26 questions on visual effects of the computer was used to gather information. Participants answered all questions and then underwent complete eye examinations and, in some cases, cycloplegic refraction. Visual acuity (VA) was measured with a logMAR chart at six meters. Refraction errors were determined using an auto refractometer (Potece) and a Heine retinoscope. The collected data were then analyzed using the SPSS statistical software. Results The results of this study indicated that 63.86% of the subjects had refractive errors. Refractive errors were significantly different in children of different genders (P < 0.05). The most common complaints associated with the continuous use of computers were eyestrain, eye pain, eye redness, headache, and blurred vision. The most prevalent eye-related problem in computer users was eyestrain (81.8%) and the least prevalent was dry eyes (7.84%). In order to reduce computer-related problems, 54.2% of the participants suggested taking enough rest, 37.9% recommended use of computers only for necessary tasks, while 24.4% and 19.1% suggested the use of monitor shields and proper working distance, respectively. Conclusions Our findings revealed that using computers for prolonged periods of time can lead to eye

  12. Quantum plug n’ play: modular computation in the quantum regime

    Science.gov (United States)

    Thompson, Jayne; Modi, Kavan; Vedral, Vlatko; Gu, Mile

    2018-01-01

    Classical computation is modular. It exploits plug n’ play architectures which allow us to use pre-fabricated circuits without knowing their construction. This bestows advantages such as allowing parts of the computational process to be outsourced, and permitting individual circuit components to be exchanged and upgraded. Here, we introduce a formal framework to describe modularity in the quantum regime. We demonstrate a ‘no-go’ theorem, stipulating that it is not always possible to make use of quantum circuits without knowing their construction. This has significant consequences for quantum algorithms, forcing the circuit implementation of certain quantum algorithms to be rebuilt almost entirely from scratch after incremental changes in the problem—such as changing the number being factored in Shor’s algorithm. We develop a workaround capable of restoring modularity, and apply it to design a modular version of Shor’s algorithm that exhibits increased versatility and reduced complexity. In doing so we pave the way to a realistic framework whereby ‘quantum chips’ and remote servers can be invoked (or assembled) to implement various parts of a more complex quantum computation.

  13. Applications and issues in automotive computational aeroacoustics

    International Nuclear Information System (INIS)

    Karbon, K.J.; Kumarasamy, S.; Singh, R.

    2002-01-01

    Automotive aeroacoustics is the noise generated due to the airflow around a moving vehicle. Previously regarded as a minor contributor, wind noise is now recognized as one of the dominant vehicle sound sources, since significant progress has been made in suppressing engine and tire noise. Currently, almost all aeroacoustic development work is performed experimentally on a full-scale vehicle in the wind tunnel. Any reduction in hardware models is recognized as one of the major enablers to quickly bring the vehicle to market. In addition, prediction of noise sources and characteristics at the early stages of vehicle design will help in reducing the costly fixes at the later stages. However, predictive methods such as Computational Fluid Dynamics (CFD) and Computational Aeroacoustics (CAA) are still under development and are not considered mainstream design tools. This paper presents some initial applications and findings of CFD and CAA analysis towards vehicle aeroacoustics. Transient Reynolds Averaged Navier Stokes (RANS) and Lighthill-Curle methods are used to model low frequency buffeting and high frequency wind rush noise. Benefits and limitations of the approaches are described. (author)

  14. A fast method for the unit scheduling problem with significant renewable power generation

    International Nuclear Information System (INIS)

    Osório, G.J.; Lujano-Rojas, J.M.; Matias, J.C.O.; Catalão, J.P.S.

    2015-01-01

    Highlights: • A model for the scheduling of power systems with significant renewable power generation is provided. • A new methodology that takes information from the analysis of each scenario separately is proposed. • Based on a probabilistic analysis, unit scheduling and the corresponding economic dispatch are estimated. • A comparison with other methodologies is in favour of the proposed approach. - Abstract: Optimal operation of power systems with high integration of renewable power sources has become difficult as a consequence of the random nature of some sources like wind energy and photovoltaic energy. Nowadays, this problem is solved using the Monte Carlo Simulation (MCS) approach, which allows considering important statistical characteristics of wind and solar power production such as the correlation between consecutive observations, the diurnal profile of the forecasted power production, and the forecasting error. However, the MCS method requires the analysis of a representative number of trials, which is an intensive calculation task that increases considerably with the number of scenarios considered. In this paper, a model for the scheduling of power systems with significant renewable power generation is proposed, based on a scenario generation/reduction method that establishes a proportional relationship between the number of scenarios and the computational time required to analyse them. The methodology takes information from the analysis of each scenario separately to determine the probabilistic behaviour of each generator at each hour in the scheduling problem. Then, for a given significance level, the units to be committed are selected and the load dispatch is determined. The proposed technique was illustrated through a case study, and a comparison with the stochastic programming approach was carried out, concluding that the proposed methodology can provide an acceptable solution in a reduced computational time

  15. The role of dedicated data computing centers in the age of cloud computing

    Science.gov (United States)

    Caramarcu, Costin; Hollowell, Christopher; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr

    2017-10-01

    Brookhaven National Laboratory (BNL) anticipates significant growth in scientific programs with large computing and data storage needs in the near future and has recently reorganized support for scientific computing to meet these needs. A key component is the enhanced role of the RHIC-ATLAS Computing Facility (RACF) in support of high-throughput and high-performance computing (HTC and HPC) at BNL. This presentation discusses the evolving role of the RACF at BNL, in light of its growing portfolio of responsibilities and its increasing integration with cloud (academic and for-profit) computing activities. We also discuss BNL’s plan to build a new computing center to support the new responsibilities of the RACF and present a summary of the cost benefit analysis done, including the types of computing activities that benefit most from a local data center vs. cloud computing. This analysis is partly based on an updated cost comparison of Amazon EC2 computing services and the RACF, which was originally conducted in 2012.

  16. Design of Computer Experiments

    DEFF Research Database (Denmark)

    Dehlendorff, Christian

    The main topic of this thesis is design and analysis of computer and simulation experiments and is dealt with in six papers and a summary report. Simulation and computer models have in recent years received increasingly more attention due to their increasing complexity and usability. Software...... packages make the development of rather complicated computer models using predefined building blocks possible. This implies that the range of phenomena that are analyzed by means of a computer model has expanded significantly. As the complexity grows so does the need for efficient experimental designs...... and analysis methods, since the complex computer models often are expensive to use in terms of computer time. The choice of performance parameter is an important part of the analysis of computer and simulation models and Paper A introduces a new statistic for waiting times in health care units. The statistic...

  17. Numbers for reducible cubic scrolls

    Directory of Open Access Journals (Sweden)

    Israel Vainsencher

    2004-12-01

    Full Text Available We show how to compute the number of reducible cubic scrolls of codimension 2 in ℙ^n incident to the appropriate number of linear spaces.

  18. GPU-Based FFT Computation for Multi-Gigabit WirelessHD Baseband Processing

    Directory of Open Access Journals (Sweden)

    Nicholas Hinitt

    2010-01-01

    Full Text Available The next generation Graphics Processing Units (GPUs) are being considered for non-graphics applications. Millimeter wave (60 GHz) wireless networks that are capable of multi-gigabit per second (Gbps) transfer rates require a significant baseband throughput. In this work, we consider the baseband of WirelessHD, a 60 GHz communications system, which can provide a data rate of up to 3.8 Gbps over a short range wireless link. Thus, we explore the feasibility of achieving gigabit baseband throughput using GPUs. One of the most computationally intensive functions commonly used in baseband communications, the Fast Fourier Transform (FFT) algorithm, is implemented on an NVIDIA GPU using their general-purpose computing platform called the Compute Unified Device Architecture (CUDA). The paper first investigates the implementation of an FFT algorithm using the GPU hardware and exploiting the available computational capability. It then outlines the limitations discovered and the methods used to overcome these challenges. Finally, a new algorithm to compute the FFT is proposed, which reduces inter-processor communication. It is further optimized by improving memory access, enabling the processing rate to exceed 4 Gbps and achieving a processing time for a 512-point FFT of less than 200 ns using a two-GPU solution.
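
    Whether an FFT implementation can keep up with a multi-gigabit baseband can be checked with a rough benchmark of the kind sketched below: time a large batch of 512-point transforms and convert the per-transform time into an equivalent bit rate. The NumPy/CPU version shown here only illustrates the bookkeeping; it is not the CUDA implementation of the paper, and the assumption of 4 bits per sample is ours.

```python
import time
import numpy as np

N = 512                  # FFT size used in the OFDM baseband
batch = 4096             # number of transforms timed per measurement
bits_per_sample = 4      # assumption: roughly 16-QAM loading per subcarrier

x = (np.random.randn(batch, N) + 1j * np.random.randn(batch, N)).astype(np.complex64)

t0 = time.perf_counter()
X = np.fft.fft(x, axis=1)                     # batched 512-point FFTs
dt = time.perf_counter() - t0

print(f"{dt / batch * 1e9:.0f} ns per 512-point FFT")
print(f"~{batch * N * bits_per_sample / dt / 1e9:.2f} Gbit/s equivalent throughput")
```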

  19. Computational materials design

    International Nuclear Information System (INIS)

    Snyder, R.L.

    1999-01-01

    Full text: Trial and error experimentation is an extremely expensive route to the development of new materials. The coming age of reduced defense funding will dramatically alter the way in which advanced materials have developed. In the absence of large funding we must concentrate on reducing the time and expense that the R and D of a new material consumes. This may be accomplished through the development of computational materials science. Materials are selected today by comparing the technical requirements to the materials databases. When existing materials cannot meet the requirements we explore new systems to develop a new material using experimental databases like the PDF. After proof of concept, the scaling of the new material to manufacture requires evaluating millions of parameter combinations to optimize the performance of the new device. Historically this process takes 10 to 20 years and requires hundreds of millions of dollars. The development of a focused set of computational tools to predict the final properties of new materials will permit the exploration of new materials systems with only a limited amount of materials characterization. However, to bound computational extrapolations, the experimental formulations and characterization will need to be tightly coupled to the computational tasks. The required experimental data must be obtained by dynamic, in-situ, very rapid characterization. Finally, to evaluate the optimization matrix required to manufacture the new material, very rapid in situ analysis techniques will be essential to intelligently monitor and optimize the formation of a desired microstructure. Techniques and examples for the rapid real-time application of XRPD and optical microscopy will be shown. Recent developments in the cross linking of the world's structural and diffraction databases will be presented as the basis for the future Total Pattern Analysis by XRPD. Copyright (1999) Australian X-ray Analytical Association Inc

  20. Computer Self-Efficacy: A Practical Indicator of Student Computer Competency in Introductory IS Courses

    Directory of Open Access Journals (Sweden)

    Rex Karsten

    1998-01-01

    Full Text Available Students often receive their first college-level computer training in introductory information systems courses. Students and faculty frequently expect this training to develop a level of student computer competence that will support computer use in future courses. In this study, we applied measures of computer self-efficacy to students in a typical introductory IS course. The measures provided useful evidence that student perceptions of their ability to use computers effectively in the future significantly improved as a result of their training experience. The computer self-efficacy measures also provided enhanced insight into course-related factors of practical concern to IS educators. Study results also suggest computer self-efficacy measures may be a practical and informative means of assessing computer-training outcomes in the introductory IS course context

  1. Artificial Neural Networks for Reducing Computational Effort in Active Truncated Model Testing of Mooring Lines

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Voie, Per Erlend Torbergsen; Høgsberg, Jan Becker

    2015-01-01

    simultaneously, this method is very demanding in terms of numerical efficiency and computational power. Therefore, this method has not yet proved to be feasible. It has recently been shown how a hybrid method combining classical numerical models and artificial neural networks (ANN) can provide a dramatic...... prior to the experiment, and with a properly trained ANN it is no problem to obtain accurate simulations much faster than real time, without any need for large computational capacity. The present study demonstrates how this hybrid method can be applied to the active truncated experiments yielding a system...
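
    The offline-training/online-evaluation structure of such a hybrid scheme can be sketched with a toy regression problem: a made-up nonlinear map from fairlead motion to line tension stands in for the expensive numerical simulator, and a small scikit-learn MLP serves as the surrogate. The example below is only an illustration of the workflow, not the mooring-line model or network used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def simulated_line_tension(motion):
    """Hypothetical stand-in for an expensive mooring-line simulation: maps
    fairlead surge/heave/velocity to a quasi-static plus drag-like tension [N]."""
    surge, heave, vel = motion.T
    return 1e3 * (1.0 + 0.8 * surge + 0.3 * heave ** 2 + 0.5 * vel * np.abs(vel))

# Offline stage: generate training data from the (slow) simulator.
X_train = rng.uniform(-1, 1, size=(2000, 3))
y_train = simulated_line_tension(X_train)
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
ann.fit(X_train, y_train)

# Online stage: the trained ANN replaces the simulator during the experiment.
X_test = rng.uniform(-1, 1, size=(200, 3))
err = ann.predict(X_test) - simulated_line_tension(X_test)
print("RMS surrogate error [N]:", float(np.sqrt(np.mean(err ** 2))))
```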

  2. Computer games and prosocial behaviour.

    Science.gov (United States)

    Mengel, Friederike

    2014-01-01

    We relate different self-reported measures of computer use to individuals' propensity to cooperate in the Prisoner's dilemma. The average cooperation rate is positively related to the self-reported amount participants spend playing computer games. None of the other computer time use variables (including time spent on social media, browsing internet, working etc.) are significantly related to cooperation rates.

  3. Low Computational Complexity Network Coding For Mobile Networks

    DEFF Research Database (Denmark)

    Heide, Janus

    2012-01-01

    Network Coding (NC) is a technique that can provide benefits in many types of networks, some examples from wireless networks are: In relay networks, either the physical or the data link layer, to reduce the number of transmissions. In reliable multicast, to reduce the amount of signaling and enable......-flow coding technique. One of the key challenges of this technique is its inherent computational complexity which can lead to high computational load and energy consumption in particular on the mobile platforms that are the target platform in this work. To increase the coding throughput several...
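
    The computational load referred to above comes from the coding operations themselves: coded packets are random linear combinations of source packets, and decoding is Gaussian elimination over a finite field. The sketch below shows a minimal GF(2) variant (XOR combinations of byte arrays) with made-up packets; practical implementations usually work over larger fields with far more optimized arithmetic.

```python
import numpy as np

rng = np.random.default_rng(2)
G = 4                                           # generation size (packets per generation)
packets = [rng.integers(0, 256, 8, dtype=np.uint8) for _ in range(G)]

def encode():
    """One coded packet: random GF(2) coefficients and the XOR of the selected packets."""
    coeff = rng.integers(0, 2, G, dtype=np.uint8)
    payload = np.zeros(8, dtype=np.uint8)
    for c, p in zip(coeff, packets):
        if c:
            payload ^= p
    return coeff, payload

def decode(coded):
    """Gaussian elimination over GF(2) on the augmented matrix [coefficients | payload]."""
    A = np.array([np.concatenate([c, p]) for c, p in coded], dtype=np.uint8)
    row = 0
    for col in range(G):
        pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
        if pivot is None:
            continue
        A[[row, pivot]] = A[[pivot, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]
        row += 1
    return [A[i, G:] for i in range(G)] if row == G else None   # None: rank still deficient

coded, decoded = [], None
while decoded is None:                          # collect coded packets until decodable
    coded.append(encode())
    if len(coded) >= G:
        decoded = decode(coded)
print("decoded after", len(coded), "coded packets:",
      all(np.array_equal(d, p) for d, p in zip(decoded, packets)))
```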

  4. Central Computer Science Concepts to Research-Based Teacher Training in Computer Science: An Experimental Study

    Science.gov (United States)

    Zendler, Andreas; Klaudt, Dieter

    2012-01-01

    The significance of computer science for economics and society is undisputed. In particular, computer science is acknowledged to play a key role in schools (e.g., by opening multiple career paths). The provision of effective computer science education in schools is dependent on teachers who are able to properly represent the discipline and whose…

  5. Reducing Conservatism of Analytic Transient Response Bounds via Shaping Filters

    Science.gov (United States)

    Kwan, Aiyueh; Bedrossian, Nazareth; Jan, Jiann-Woei; Grigoriadis, Karolos; Hua, Tuyen (Technical Monitor)

    1999-01-01

    Recent results show that the peak transient response of a linear system to bounded energy inputs can be computed using the energy-to-peak gain of the system. However, the analytically computed peak response bound can be conservative for a class of bounded energy signals, specifically pulse trains generated by jet firings encountered in space vehicles. In this paper, shaping filters are proposed as a methodology to reduce the conservatism of analytic peak response bounds. This methodology was applied to a realistic Space Station assembly operation subject to jet firings. The results indicate that shaping filters indeed reduce the predicted peak response bounds.
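
    For a stable, strictly proper system dx/dt = Ax + Bw, y = Cx, the energy-to-peak (L2-to-L-infinity) gain can be computed as sqrt(lambda_max(C Wc C^T)), where Wc is the controllability Gramian solving A Wc + Wc A^T + B B^T = 0. The sketch below evaluates this bound for a made-up lightly damped mode; adding a shaping filter amounts to augmenting (A, B, C) with the filter dynamics and recomputing the same quantity.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def energy_to_peak_gain(A, B, C):
    """sqrt(lambda_max(C Wc C^T)) with A Wc + Wc A^T + B B^T = 0 (A must be stable)."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)        # controllability Gramian
    return float(np.sqrt(np.max(np.linalg.eigvalsh(C @ Wc @ C.T))))

# Hypothetical lightly damped structural mode excited by a bounded-energy disturbance.
wn, zeta = 2.0, 0.02
A = np.array([[0.0, 1.0], [-wn ** 2, -2 * zeta * wn]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])                             # peak displacement is of interest

print("peak response bound per unit input energy:", energy_to_peak_gain(A, B, C))
# A shaping filter on the input is handled by forming the series interconnection
# (A_aug, B_aug, C_aug) with the filter states and calling the same function.
```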

  6. Development of Boundary Condition Independent Reduced Order Thermal Models using Proper Orthogonal Decomposition

    Science.gov (United States)

    Raghupathy, Arun; Ghia, Karman; Ghia, Urmila

    2008-11-01

    Compact Thermal Models (CTMs) to represent IC packages have traditionally been developed using the DELPHI-based (DEvelopment of Libraries of PHysical models for an Integrated design) methodology. The drawbacks of this method are presented, and an alternative method is proposed. A reduced-order model that provides the complete thermal information accurately with less computational resources can be effectively used in system level simulations. Proper Orthogonal Decomposition (POD), a statistical method, can be used to reduce the number of degrees of freedom or variables in the computations for such a problem. POD along with the Galerkin projection allows us to create reduced-order models that reproduce the characteristics of the system with a considerable reduction in computational resources while maintaining a high level of accuracy. The goal of this work is to show that this method can be applied to obtain a boundary condition independent reduced-order thermal model for complex components. The methodology is applied to the 1D transient heat equation.
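
    The mode-extraction step of POD reduces to a singular value decomposition of a snapshot matrix whose columns are saved temperature fields; the leading left singular vectors are the modes onto which the governing equations are then Galerkin-projected. The sketch below shows only that step on a synthetic 1D field; the projection itself and the boundary-condition-independence treatment of the paper are not reproduced.

```python
import numpy as np

# Synthetic snapshots: a 1D transient temperature field T(x, t) on 200 nodes.
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 5.0, 60)
snapshots = np.array([np.exp(-ti) * np.sin(np.pi * x)
                      + 0.3 * np.exp(-3 * ti) * np.sin(3 * np.pi * x)
                      for ti in t]).T                    # shape (n_nodes, n_snapshots)

mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_field, full_matrices=False)

energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.9999)) + 1             # number of modes kept
modes = U[:, :r]                                         # POD basis
print(f"{r} POD modes capture {energy[r - 1]:.6f} of the snapshot energy")

# Reduced representation: every snapshot is described by r modal amplitudes.
a = modes.T @ (snapshots - mean_field)
reconstruction = mean_field + modes @ a
print("max reconstruction error:", float(np.abs(reconstruction - snapshots).max()))
```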

  7. Securing the Data Storage and Processing in Cloud Computing Environment

    Science.gov (United States)

    Owens, Rodney

    2013-01-01

    Organizations increasingly utilize cloud computing architectures to reduce costs and energy consumption both in the data warehouse and on mobile devices by better utilizing the computing resources available. However, the security and privacy issues with publicly available cloud computing infrastructures have not been studied to a sufficient depth…

  8. Computation of the MTF with a microdensitometer and a computer on-line system

    International Nuclear Information System (INIS)

    Sukenobu, Yoshiharu; Sasagaki, Michihiro; Asada, Tomohiro; Yamaguchi, Kazuya; Takigawa, Atsushi

    1984-01-01

    One of the most popular evaluation methods for radiographic images is the measurement of the modulation transfer function (MTF) using the slit method, which is very useful for comparing similar characteristics of film-screen systems. However, it is difficult to measure the MTF simply without a loss in accuracy. We therefore connected a microdensitometer to the computer and wrote a computer program able to measure the MTF automatically. To reduce the fluctuations and uncertainties in the calculated MTF, we examined the whole process, from film scanning by the microdensitometer to MTF calculation by Fourier transformation. We firmly believe that the linearization process is the most important step for reducing MTF fluctuations. As a result, using the SPLINE function (an approximate expression of the H-D curve) in the linearization process decreased the fluctuations and uncertainties of the MTF compared with those obtained in manual measurement. We were also able to save a great deal of work when measuring the MTF. (author)
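
    The final step of the slit method, turning a measured and linearized line spread function (LSF) into an MTF, is a normalized Fourier transform of the background-corrected, area-normalized LSF. A minimal NumPy version of that step is sketched below using a synthetic Gaussian LSF in place of the microdensitometer scan; the H-D curve linearization discussed above would be applied before this point.

```python
import numpy as np

dx = 0.01                                    # sampling interval of the scan [mm]
x = np.arange(-2.56, 2.56, dx)
# Synthetic, already linearized slit scan: Gaussian LSF plus a little noise.
lsf = np.exp(-x ** 2 / (2 * 0.08 ** 2)) + 0.002 * np.random.randn(x.size)

lsf -= np.median(lsf[:50])                   # remove the background (tail) level
lsf /= lsf.sum()                             # unit area so that MTF(0) = 1

mtf = np.abs(np.fft.rfft(lsf))               # modulus of the Fourier transform
mtf /= mtf[0]
freq = np.fft.rfftfreq(lsf.size, d=dx)       # spatial frequency [cycles/mm]

for f_target in (0.5, 1.0, 2.0):
    i = int(np.argmin(np.abs(freq - f_target)))
    print(f"MTF at {freq[i]:.2f} cycles/mm: {mtf[i]:.3f}")
```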

  9. Cluster Computing for Embedded/Real-Time Systems

    Science.gov (United States)

    Katz, D.; Kepner, J.

    1999-01-01

    Embedded and real-time systems, like other computing systems, seek to maximize computing power for a given price, and thus can significantly benefit from the advancing capabilities of cluster computing.

  10. COMPUTATIONAL EFFICIENCY OF A MODIFIED SCATTERING KERNEL FOR FULL-COUPLED PHOTON-ELECTRON TRANSPORT PARALLEL COMPUTING WITH UNSTRUCTURED TETRAHEDRAL MESHES

    Directory of Open Access Journals (Sweden)

    JONG WOON KIM

    2014-04-01

    In this paper, we introduce a modified scattering kernel approach to avoid the unnecessarily repeated calculations involved in the scattering source calculation, and used it with parallel computing to effectively reduce the computation time. Its computational efficiency was tested for three-dimensional full-coupled photon-electron transport problems using our computer program, which solves the multi-group discrete ordinates transport equation by using the discontinuous finite element method with unstructured tetrahedral meshes for complicated geometrical problems. The numerical tests show that the modified scattering kernel improves the speed by a factor of 17 to 42 in elapsed time per iteration, not only in single-CPU calculations but also in parallel computing with several CPUs.

  11. Computational mathematics in China

    CERN Document Server

    Shi, Zhong-Ci

    1994-01-01

    This volume describes the most significant contributions made by Chinese mathematicians over the past decades in various areas of computational mathematics. Some of the results are quite important and complement Western developments in the field. The contributors to the volume range from noted senior mathematicians to promising young researchers. The topics include finite element methods, computational fluid mechanics, numerical solutions of differential equations, computational methods in dynamical systems, numerical algebra, approximation, and optimization. Containing a number of survey articles, the book provides an excellent way for Western readers to gain an understanding of the status and trends of computational mathematics in China.

  12. COMPUTATIONAL THINKING

    Directory of Open Access Journals (Sweden)

    Evgeniy K. Khenner

    2016-01-01

    Full Text Available Abstract. The aim of the research is to draw the attention of the educational community to the phenomenon of computational thinking, which has been actively discussed over the last decade in the foreign scientific and educational literature, and to substantiate its importance, practical utility and its right to a place in Russian education. Methods. The research is based on the analysis of foreign studies of the phenomenon of computational thinking and the ways it is formed in the process of education, and on comparing the notion of «computational thinking» with related concepts used in the Russian scientific and pedagogical literature. Results. The concept of «computational thinking» is analyzed from the point of view of intuitive understanding and of scientific and applied aspects. It is shown how computational thinking has evolved along with the development of computer hardware and software. The practice-oriented interpretation of computational thinking that is dominant among educators is described, along with some ways of forming it. It is shown that computational thinking is both a metasubject result of general education and a tool for it. From the point of view of the author, the purposeful development of computational thinking should be one of the tasks of Russian education. Scientific novelty. The author gives a theoretical justification of the role of computational thinking schemes as metasubject results of learning. The dynamics of the development of this concept is described, a process connected with the evolution of computer and information technologies and with the growing number of tasks whose effective solution requires computational thinking. The author substantiates the claim that including «computational thinking» in the set of pedagogical concepts used in the national education system fills an existing gap. Practical significance. New metasubject result of education associated with

  13. Computing the Gromov hyperbolicity of a discrete metric space

    KAUST Repository

    Fournier, Hervé ; Ismail, Anas; Vigneron, Antoine E.

    2015-01-01

    We give exact and approximation algorithms for computing the Gromov hyperbolicity of an n-point discrete metric space. We observe that computing the Gromov hyperbolicity from a fixed base-point reduces to a (max,min) matrix product. Hence, using
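
    The reduction mentioned above can be written down directly: fixing a base point w and forming the Gromov products (x|y)_w = (d(x,w) + d(y,w) - d(x,y))/2, the hyperbolicity from that base point is the largest violation of min((x|z)_w, (z|y)_w) <= (x|y)_w + delta, which is exactly a comparison of the Gromov-product matrix with its (max,min) square. The cubic-time sketch below only illustrates the definition; the point of the paper is that faster (max,min) matrix product algorithms then apply.

```python
import numpy as np

def gromov_hyperbolicity(D, base=0):
    """Hyperbolicity of a finite metric space (distance matrix D) from a base point,
    via the four-point condition / (max,min) product of the Gromov-product matrix."""
    A = 0.5 * (D[:, [base]] + D[[base], :] - D)              # A[x, y] = (x|y)_base
    # (max,min) matrix product: P[x, y] = max_z min(A[x, z], A[z, y])
    P = np.max(np.minimum(A[:, :, None], A[None, :, :]), axis=1)
    return float(np.max(P - A))

# Example: shortest-path metric of a 6-cycle.
n = 6
D = np.array([[min(abs(i - j), n - abs(i - j)) for j in range(n)]
              for i in range(n)], dtype=float)
print("delta from base point 0:", gromov_hyperbolicity(D))
```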

  14. Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment

    Science.gov (United States)

    Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.

    2013-12-01

    Dust storm has serious negative impacts on environment, human health, and assets. The continuing global climate change has increased the frequency and intensity of dust storm in the past decades. To better understand and predict the distribution, intensity and structure of dust storm, a series of dust storm models have been developed, such as Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust) and Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The developments and applications of these models have contributed significantly to both scientific research and our daily life. However, dust storm simulation is a data and computing intensive process. Normally, a simulation for a single dust storm event may take several days or hours to run. It seriously impacts the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node need to communicate with other geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalance task loads and unnecessary communications among computing nodes. Therefore, task allocation method is the key factor, which may impact the feasibility of the paralleling. The allocation algorithm needs to carefully leverage the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire system. This presentation introduces two algorithms for such allocation and compares them with evenly distributed allocation method. Specifically, 1) In order to get optimized solutions, a

  15. Computing nucleon EDM on a lattice

    Science.gov (United States)

    Abramczyk, Michael; Aoki, Sinya; Blum, Tom; Izubuchi, Taku; Ohki, Hiroshi; Syritsyn, Sergey

    2018-03-01

    I will discuss briefly recent changes in the methodology of computing the baryon EDM on a lattice. The associated correction substantially reduces presently existing lattice values for the proton and neutron theta-induced EDMs, so that even the most precise previous lattice results become consistent with zero. On one hand, this change removes previous disagreements between these lattice results and the phenomenological estimates of the nucleon EDM. On the other hand, the nucleon EDM becomes much harder to compute on a lattice. In addition, I will review the progress in computing quark chromo-EDM-induced nucleon EDM using chiral quark action.

  16. Computing nucleon EDM on a lattice

    Energy Technology Data Exchange (ETDEWEB)

    Abramczyk, Michael; Izubuchi, Taku

    2017-06-18

    I will discuss briefly recent changes in the methodology of computing the baryon EDM on a lattice. The associated correction substantially reduces presently existing lattice values for the proton and neutron theta-induced EDMs, so that even the most precise previous lattice results become consistent with zero. On one hand, this change removes previous disagreements between these lattice results and the phenomenological estimates of the nucleon EDM. On the other hand, the nucleon EDM becomes much harder to compute on a lattice. In addition, I will review the progress in computing quark chromo-EDM-induced nucleon EDM using chiral quark action.

  17. a Recursive Approach to Compute Normal Forms

    Science.gov (United States)

    HSU, L.; MIN, L. J.; FAVRETTO, L.

    2001-06-01

    Normal forms are instrumental in the analysis of dynamical systems described by ordinary differential equations, particularly when singularities close to a bifurcation are to be characterized. However, the computation of a normal form up to an arbitrary order is numerically hard. This paper focuses on the computer programming of some recursive formulas developed earlier to compute higher order normal forms. A computer program to reduce the system to its normal form on a center manifold is developed using the Maple symbolic language. However, it should be stressed that the program relies essentially on recursive numerical computations, while symbolic calculations are used only for minor tasks. Some strategies are proposed to save computation time. Examples are presented to illustrate the application of the program to obtain high order normalization or to handle systems with large dimension.

  18. Oxidation of naturally reduced uranium in aquifer sediments by dissolved oxygen and its potential significance to uranium plume persistence

    Science.gov (United States)

    Davis, J. A.; Smith, R. L.; Bohlke, J. K.; Jemison, N.; Xiang, H.; Repert, D. A.; Yuan, X.; Williams, K. H.

    2015-12-01

    The occurrence of naturally reduced zones is common in alluvial aquifers in the western U.S.A. due to the burial of woody debris in flood plains. Such reduced zones are usually heterogeneously dispersed in these aquifers and characterized by high concentrations of organic carbon, reduced mineral phases, and reduced forms of metals, including uranium(IV). The persistence of high concentrations of dissolved uranium(VI) at uranium-contaminated aquifers on the Colorado Plateau has been attributed to slow oxidation of insoluble uranium(IV) mineral phases found in association with these reducing zones, although there is little understanding of the relative importance of various potential oxidants. Four field experiments were conducted within an alluvial aquifer adjacent to the Colorado River near Rifle, CO, wherein groundwater associated with the naturally reduced zones was pumped into a gas-impermeable tank, mixed with a conservative tracer (Br-), bubbled with a gas phase composed of 97% O2 and 3% CO2, and then returned to the subsurface in the same well from which it was withdrawn. Within minutes of re-injection of the oxygenated groundwater, dissolved uranium(VI) concentrations increased from less than 1 μM to greater than 2.5 μM, demonstrating that oxygen can be an important oxidant for uranium in such field systems if supplied to the naturally reduced zones. Dissolved Fe(II) concentrations decreased to the detection limit, but increases in sulfate could not be detected due to high background concentrations. Changes in nitrogen species concentrations were variable. The results contrast with other laboratory and field results in which oxygen was introduced to systems containing high concentrations of mackinawite (FeS), rather than the more crystalline iron sulfides found in aged, naturally reduced zones. The flux of oxygen to the naturally reduced zones in the alluvial aquifers occurs mainly through interactions between groundwater and gas phases at the water table

  19. Use of a hybrid iterative reconstruction technique to reduce image noise and improve image quality in obese patients undergoing computed tomographic pulmonary angiography.

    Science.gov (United States)

    Kligerman, Seth; Mehta, Dhruv; Farnadesh, Mahmmoudreza; Jeudy, Jean; Olsen, Kathryn; White, Charles

    2013-01-01

    To determine whether an iterative reconstruction (IR) technique (iDose, Philips Healthcare) can reduce image noise and improve image quality in obese patients undergoing computed tomographic pulmonary angiography (CTPA). The study was Health Insurance Portability and Accountability Act compliant and approved by our institutional review board. A total of 33 obese patients (average body mass index: 42.7) underwent CTPA studies following standard departmental protocols. The data were reconstructed with filtered back projection (FBP) and 3 iDose strengths (iDoseL1, iDoseL3, and iDoseL5) for a total of 132 studies. FBP data were collected from 33 controls (average body mass index: 22) undergoing CTPA. Regions of interest were drawn at 6 identical levels in the pulmonary artery (PA), from the main PA to a subsegmental branch, in both the control group and study groups using each algorithm. Noise and attenuation were measured at all PA levels. Three thoracic radiologists graded each study on a scale of 1 (very poor) to 5 (ideal) in 4 categories: image quality, noise, PA enhancement, and "plastic" appearance. Statistical analysis was performed using an unpaired t test, 1-way analysis of variance, and linear weighted κ. Compared with the control group, there was significantly higher noise with the FBP, iDoseL1, and iDoseL3 algorithms (P < 0.05), but no significant difference in noise between the control group and the iDoseL5 algorithm in the study group. Analysis within the study group showed a significant and progressive decrease in noise and increase in the contrast-to-noise ratio as the level of IR was increased (P < 0.05), and qualitative scores improved for image quality, noise, and PA enhancement with increasing levels of iDose. The use of an IR technique leads to qualitative and quantitative improvements in image noise and image quality in obese patients undergoing CTPA.

  20. Evolutionary Computing for Intelligent Power System Optimization and Control

    DEFF Research Database (Denmark)

    This new book focuses on how evolutionary computing techniques benefit engineering research and development tasks by converting practical problems of growing complexities into simple formulations, thus largely reducing development efforts. This book begins with an overview of optimization theory and modern evolutionary computing techniques, and goes on to cover specific applications of evolutionary computing to power system optimization and control problems....

  1. Sorting, Searching, and Simulation in the MapReduce Framework

    DEFF Research Database (Denmark)

    Goodrich, Michael T.; Sitchinava, Nodari; Zhang, Qin

    2011-01-01

    usefulness of our approach by designing and analyzing efficient MapReduce algorithms for fundamental sorting, searching, and simulation problems. This study is motivated by a goal of ultimately putting the MapReduce framework on an equal theoretical footing with the well-known PRAM and BSP parallel...... in parallel computational geometry for the MapReduce framework, which result in efficient MapReduce algorithms for sorting, 2- and 3-dimensional convex hulls, and fixed-dimensional linear programming. For the case when mappers and reducers have a memory/message-I/O size of M = Θ(N^ε), for a small constant ε > 0...
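
    The flavor of a memory-bounded MapReduce sorting round can be conveyed by a small single-process simulation: mappers sample the input to agree on splitters, the shuffle routes each key to the reducer owning its range, and each reducer sorts its bucket locally. The sketch below is a generic sample-based range-partitioning sort, not the specific algorithms or simulation results of the paper.

```python
import random

random.seed(0)

def mapreduce_sort(data, reducers=4, sample_rate=0.05):
    """Sample-based range partitioning: one sampling round, one reduce round."""
    # Map-side sampling: agree on splitters so reducer buckets are roughly balanced.
    sample = sorted(random.sample(data, max(reducers, int(len(data) * sample_rate))))
    splitters = [sample[i * len(sample) // reducers] for i in range(1, reducers)]

    # Shuffle: route every key to the reducer that owns its key range.
    buckets = [[] for _ in range(reducers)]
    for key in data:
        buckets[sum(key >= s for s in splitters)].append(key)

    # Reduce: each reducer sorts locally; concatenating the buckets is globally sorted.
    return [k for bucket in buckets for k in sorted(bucket)], buckets

data = [random.randrange(10 ** 6) for _ in range(10_000)]
out, buckets = mapreduce_sort(data)
print("sorted correctly:", out == sorted(data))
print("records per reducer (memory bound M):", [len(b) for b in buckets])
```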

  2. Teaching programming to non-STEM novices: a didactical study of computational thinking and non-STEM computing education

    DEFF Research Database (Denmark)

    Spangsberg, Thomas Hvid

    research approach. Computational thinking plays a significant role in computing education but it is still unclear how it should be interpreted to best serve its purpose. Constructionism and Computational Making seems to be promising frameworks to do this. In regards to specific teaching activities...

  3. An assessment of future computer system needs for large-scale computation

    Science.gov (United States)

    Lykos, P.; White, J.

    1980-01-01

    Data ranging from specific computer capability requirements to opinions about the desirability of a national computer facility are summarized. It is concluded that considerable attention should be given to improving the user-machine interface. Otherwise, increased computer power may not improve the overall effectiveness of the machine user. Significant improvement in throughput requires highly concurrent systems plus the willingness of the user community to develop problem solutions for that kind of architecture. An unanticipated result was the expression of need for an on-going cross-disciplinary users group/forum in order to share experiences and to more effectively communicate needs to the manufacturers.

  4. Cloud computing for comparative genomics.

    Science.gov (United States)

    Wall, Dennis P; Kudtarkar, Parul; Fusaro, Vincent A; Pivovarov, Rimma; Patil, Prasad; Tonellato, Peter J

    2010-05-18

    Large comparative genomics studies and tools are becoming increasingly more compute-expensive as the number of available genome sequences continues to rise. The capacity and cost of local computing infrastructures are likely to become prohibitive with the increase, especially as the breadth of questions continues to rise. Alternative computing architectures, in particular cloud computing environments, may help alleviate this increasing pressure and enable fast, large-scale, and cost-effective comparative genomics strategies going forward. To test this, we redesigned a typical comparative genomics algorithm, the reciprocal smallest distance algorithm (RSD), to run within Amazon's Elastic Computing Cloud (EC2). We then employed the RSD-cloud for ortholog calculations across a wide selection of fully sequenced genomes. We ran more than 300,000 RSD-cloud processes within the EC2. These jobs were farmed simultaneously to 100 high capacity compute nodes using the Amazon Web Service Elastic Map Reduce and included a wide mix of large and small genomes. The total computation time took just under 70 hours and cost a total of $6,302 USD. The effort to transform existing comparative genomics algorithms from local compute infrastructures is not trivial. However, the speed and flexibility of cloud computing environments provides a substantial boost with manageable cost. The procedure designed to transform the RSD algorithm into a cloud-ready application is readily adaptable to similar comparative genomics problems.

  5. DE-BLURRING SINGLE PHOTON EMISSION COMPUTED TOMOGRAPHY IMAGES USING WAVELET DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    Neethu M. Sasi

    2016-02-01

    Full Text Available Single photon emission computed tomography imaging is a popular nuclear medicine imaging technique which generates images by detecting radiations emitted by radioactive isotopes injected in the human body. Scattering of these emitted radiations introduces blur in this type of images. This paper proposes an image processing technique to enhance cardiac single photon emission computed tomography images by reducing the blur in the image. The algorithm works in two main stages. In the first stage a maximum likelihood estimate of the point spread function and the true image is obtained. In the second stage Lucy Richardson algorithm is applied on the selected wavelet coefficients of the true image estimate. The significant contribution of this paper is that processing of images is done in the wavelet domain. Pre-filtering is also done as a sub stage to avoid unwanted ringing effects. Real cardiac images are used for the quantitative and qualitative evaluations of the algorithm. Blur metric, peak signal to noise ratio and Tenengrad criterion are used as quantitative measures. Comparison against other existing de-blurring algorithms is also done. The simulation results indicate that the proposed method effectively reduces blur present in the image.
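
    The overall pipeline (estimate a PSF, decompose the image with a 2D wavelet transform, deconvolve selected coefficients with Lucy-Richardson, reconstruct) can be sketched with standard Python tools. The version below uses a synthetic phantom, an assumed Gaussian PSF, and applies Richardson-Lucy only to the approximation band, so it illustrates the structure of the method rather than reproducing the paper's maximum-likelihood PSF estimation or pre-filtering.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

# Synthetic "cardiac" phantom: a bright ring on a dim background.
yy, xx = np.mgrid[0:128, 0:128]
r = np.hypot(xx - 64, yy - 64)
truth = np.exp(-((r - 30) ** 2) / (2 * 4.0 ** 2))
blurred = gaussian_filter(truth, sigma=3.0) + 0.01 * np.random.rand(128, 128)

# Assumed Gaussian PSF (the paper estimates the PSF by maximum likelihood instead).
k = np.arange(-10, 11)
g = np.exp(-k ** 2 / (2 * 3.0 ** 2))
psf = np.outer(g, g)
psf /= psf.sum()

# Wavelet decomposition; deconvolve only the approximation coefficients.
cA, details = pywt.wavedec2(blurred, "db2", level=1)
cA = np.maximum(cA, 0.0)                          # keep the band nonnegative for RL
cA_deblurred = richardson_lucy(cA / cA.max(), psf) * cA.max()
restored = pywt.waverec2([cA_deblurred, details], "db2")

print("peak value, blurred vs restored:", float(blurred.max()), float(restored.max()))
```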

  6. Improving the Eco-Efficiency of High Performance Computing Clusters Using EECluster

    Directory of Open Access Journals (Sweden)

    Alberto Cocaña-Fernández

    2016-03-01

    Full Text Available As data and supercomputing centres increase their performance to improve service quality and target more ambitious challenges every day, their carbon footprint also continues to grow, and has already reached the magnitude of the aviation industry. Also, high power consumptions are building up to a remarkable bottleneck for the expansion of these infrastructures in economic terms due to the unavailability of sufficient energy sources. A substantial part of the problem is caused by current energy consumptions of High Performance Computing (HPC clusters. To alleviate this situation, we present in this work EECluster, a tool that integrates with multiple open-source Resource Management Systems to significantly reduce the carbon footprint of clusters by improving their energy efficiency. EECluster implements a dynamic power management mechanism based on Computational Intelligence techniques by learning a set of rules through multi-criteria evolutionary algorithms. This approach enables cluster operators to find the optimal balance between a reduction in the cluster energy consumptions, service quality, and number of reconfigurations. Experimental studies using both synthetic and actual workloads from a real world cluster support the adoption of this tool to reduce the carbon footprint of HPC clusters.

  7. A Distributed OpenCL Framework using Redundant Computation and Data Replication

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Junghyun [Seoul National University, Korea; Gangwon, Jo [Seoul National University, Korea; Jaehoon, Jung [Seoul National University, Korea; Lee, Jaejin [Seoul National University, Korea

    2016-01-01

    Applications written solely in OpenCL or CUDA cannot execute on a cluster as a whole. Most previous approaches that extend these programming models to clusters are based on a common idea: designating a centralized host node and coordinating the other nodes with the host for computation. However, the centralized host node is a serious performance bottleneck when the number of nodes is large. In this paper, we propose a scalable and distributed OpenCL framework called SnuCL-D for large-scale clusters. SnuCL-D's remote device virtualization provides an OpenCL application with an illusion that all compute devices in a cluster are confined in a single node. To reduce the amount of control-message and data communication between nodes, SnuCL-D replicates the OpenCL host program execution and data in each node. We also propose a new OpenCL host API function and a queueing optimization technique that significantly reduce the overhead incurred by the previous centralized approaches. To show the effectiveness of SnuCL-D, we evaluate SnuCL-D with a microbenchmark and eleven benchmark applications on a large-scale CPU cluster and a medium-scale GPU cluster.

  8. Ontology Design for Solving Computationally-Intensive Problems on Heterogeneous Architectures

    Directory of Open Access Journals (Sweden)

    Hossam M. Faheem

    2018-02-01

    Full Text Available Viewing a computationally-intensive problem as a self-contained challenge with its own hardware, software and scheduling strategies is an approach that should be investigated. We might suggest assigning heterogeneous hardware architectures to solve a problem, while parallel computing paradigms may play an important role in writing efficient code to solve the problem; moreover, the scheduling strategies may be examined as a possible solution. Depending on the problem complexity, finding the best possible solution using an integrated infrastructure of hardware, software and scheduling strategy can be a complex job. Developing and using ontologies and reasoning techniques play a significant role in reducing the complexity of identifying the components of such integrated infrastructures. Undertaking reasoning and inferencing regarding the domain concepts can help to find the best possible solution through a combination of hardware, software and scheduling strategies. In this paper, we present an ontology and show how we can use it to solve computationally-intensive problems from various domains. As a potential use for the idea, we present examples from the bioinformatics domain. Validation by using problems from the Elastic Optical Network domain has demonstrated the flexibility of the suggested ontology and its suitability for use with any other computationally-intensive problem domain.

  9. Cloud Computing Databases: Latest Trends and Architectural Concepts

    OpenAIRE

    Tarandeep Singh; Parvinder S. Sandhu

    2011-01-01

    Economic factors are leading to the rise of infrastructures that provide software and computing facilities as a service, known as cloud services or cloud computing. Cloud services can provide efficiencies for application providers, both by limiting up-front capital expenses, and by reducing the cost of ownership over time. Such services are made available in a data center, using shared commodity hardware for computation and storage. There is a varied set of cloud services...

  10. Cloud Computing Application on Transport Dispatching Informational Support Systems

    Directory of Open Access Journals (Sweden)

    Dmitry Olegovich Gusenitsa

    2015-05-01

    Full Text Available Transport dispatching informational support systems have received widespread attention due to their high quality information density, strong coherence and applicable visualization features. Nevertheless, because of the large volume of data, complex integration requirements and the need for information exchange between different users, the time costs of developing and implementing such informational support systems, problems associated with the compatibility of various data formats, security protocols and high maintenance costs, the opportunities for the application of such systems are significantly reduced. This article reviews the possibility of creating a cloud storage data system for a transport dispatching informational support system (TDIS) using modern computer technology to meet the challenges of mass data processing, information security and reduced operational costs. The system is expected to make full use of the advantages offered by cloud storage technology. An integrated cloud will increase the amount of data available to the system, reduce the processing speed requirements and reduce the overall cost of system implementation. The creation and integration of cloud storage is one of the most important areas of TDIS development, which is stimulating and promoting the further development of TDIS to ensure the requirements of its users.

  11. Signal Amplification Technique (SAT): an approach for improving resolution and reducing image noise in computed tomography

    International Nuclear Information System (INIS)

    Phelps, M.E.; Huang, S.C.; Hoffman, E.J.; Plummer, D.; Carson, R.

    1981-01-01

    Spatial resolution improvements in computed tomography (CT) have been limited by the large and unique error propagation properties of this technique. The desire to provide maximum image resolution has resulted in the use of reconstruction filter functions designed to produce tomographic images with resolution as close as possible to the intrinsic detector resolution. Thus, many CT systems produce images with excessive noise with the system resolution determined by the detector resolution rather than the reconstruction algorithm. CT is a rigorous mathematical technique which applies an increasing amplification to increasing spatial frequencies in the measured data. This mathematical approach to spatial frequency amplification cannot distinguish between signal and noise and therefore both are amplified equally. We report here a method in which tomographic resolution is improved by using very small detectors to selectively amplify the signal and not noise. Thus, this approach is referred to as the signal amplification technique (SAT). SAT can provide dramatic improvements in image resolution without increases in statistical noise or dose because increases in the cutoff frequency of the reconstruction algorithm are not required to improve image resolution. Alternatively, in cases where image counts are low, such as in rapid dynamic or receptor studies, statistical noise can be reduced by lowering the cutoff frequency while still maintaining the best possible image resolution. A possible system design for a positron CT system with SAT is described

  12. Energy consumption program: A computer model simulating energy loads in buildings

    Science.gov (United States)

    Stoller, F. W.; Lansing, F. L.; Chai, V. W.; Higgins, S.

    1978-01-01

    The JPL energy consumption computer program, developed as a useful tool in the on-going building modification studies of the DSN energy conservation project, is described. The program simulates building heating and cooling loads and computes thermal and electric energy consumption and cost. The accuracy of the computations is not sacrificed, however, since the results lie within a ±10 percent margin of those read from energy meters. The program is carefully structured to reduce both the user's time and the running cost by requesting minimal information from the user and reducing many time-consuming internal computational loops. Many unique features, not found in any other program, were added to handle two-level electronics control rooms.

  13. Use of an Enactive Insole for Reducing the Risk of Falling on Different Types of Soil Using Vibrotactile Cueing for the Elderly.

    Directory of Open Access Journals (Sweden)

    Martin J-D Otis

    Full Text Available Our daily activities imply displacements on various types of soil. For persons with a gait disorder or losing functional autonomy, walking on some types of soil can be challenging because of the risk of falling it represents. In this paper, we present, in a first part, the use of an enactive shoe for the automatic differentiation of several types of soil. In a second part, using a second improved prototype (an enactive insole), twelve participants with Parkinson's disease (PD) and nine age-matched controls performed the Timed Up and Go (TUG) test on six types of soil with and without cueing. The frequency of the cueing was set at 10% above the cadence computed at the lowest risk of falling (walking over concrete). Depending on the cadence computed at the lowest risk, the enactive insole activates vibrotactile cueing aimed at improving gait and balance control. Finally, a risk index is computed using gait parameters in relation to a given type of soil. The frequency analysis of the heel strike vibration allows the differentiation of various types of soil. The computed risk is associated with an appropriate rhythmic cueing in order to improve balance and gait impairment. The results show that vibrotactile cueing could help to reduce the risk of falling. Firstly, this paper demonstrates the feasibility of reducing the risk of falling while walking on different types of soil using vibrotactile cueing. We found a significant difference and a significant decrease in the computed risks of falling for most types of soil, especially for deformable soils, which can lead to falls. Secondly, the heel strike provides an approximation of the impulse response of the soil that can be analyzed with time- and frequency-domain modeling. From these analyses, an index is computed enabling differentiation of the types of soil.

  14. RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction

    Directory of Open Access Journals (Sweden)

    Michael M. Abdel-Sayed

    2016-11-01

    Full Text Available Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted ℓ1 minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have recently been proposed for signal reconstruction at a lower computational complexity than the optimal ℓ1 minimization, while maintaining good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches, which either select too many or too few values per iteration, RMP aims at selecting a sufficient number of correlation values per iteration, which improves both the reconstruction time and the error. Furthermore, RMP prunes the estimated signal and hence excludes incorrectly selected values. The RMP algorithm achieves higher reconstruction accuracy at a significantly lower computational complexity than existing greedy recovery algorithms. It is even superior to ℓ1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples.
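
    The published RMP algorithm is not reproduced in this record; the sketch below is a generic select-then-prune greedy pursuit (in the CoSaMP/subspace-pursuit family) that only illustrates the idea of picking several correlation values per iteration and pruning incorrectly selected ones. Matrix sizes and sparsity are assumed.

```python
import numpy as np

def greedy_pursuit(A, y, sparsity, n_iter=20):
    """Generic select-then-prune greedy recovery (a CoSaMP-style sketch, not RMP).

    A: measurement matrix (m x n), y: measurements (m,), sparsity: expected k.
    Each iteration adds the columns most correlated with the residual, solves a
    least-squares problem on the enlarged support, then prunes back to k atoms.
    """
    m, n = A.shape
    x = np.zeros(n)
    residual = y.copy()
    support = np.array([], dtype=int)
    for _ in range(n_iter):
        correlations = np.abs(A.T @ residual)
        new_atoms = np.argsort(correlations)[-sparsity:]          # select several per iteration
        support = np.union1d(support, new_atoms).astype(int)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        keep = support[np.argsort(np.abs(coeffs))[-sparsity:]]    # prune wrongly selected atoms
        coeffs, *_ = np.linalg.lstsq(A[:, keep], y, rcond=None)
        x = np.zeros(n)
        x[keep] = coeffs
        residual = y - A @ x
        if np.linalg.norm(residual) < 1e-9:
            break
    return x

# Illustrative noiseless test with random Gaussian measurements (assumed sizes).
rng = np.random.default_rng(1)
A = rng.standard_normal((64, 256)) / np.sqrt(64)
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
x_hat = greedy_pursuit(A, A @ x_true, sparsity=8)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # relative recovery error
```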

  15. RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.

    Science.gov (United States)

    Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F

    2016-11-01

    Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted ℓ1 minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have recently been proposed for signal reconstruction at a lower computational complexity than the optimal ℓ1 minimization, while maintaining good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches, which either select too many or too few values per iteration, RMP aims at selecting a sufficient number of correlation values per iteration, which improves both the reconstruction time and the error. Furthermore, RMP prunes the estimated signal and hence excludes incorrectly selected values. The RMP algorithm achieves higher reconstruction accuracy at a significantly lower computational complexity than existing greedy recovery algorithms. It is even superior to ℓ1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples.

  16. Some experience in applying the REDUCE algebraic system to the calculation of scattering processes in QED and QCD

    International Nuclear Information System (INIS)

    Mohring, H.J.; Schiller, A.

    1980-01-01

    The problems arising in the use of the REDUCE algebraic system for calculating traces of the Dirac matrix products describing scattering processes in quantum electrodynamics (QED) and quantum chromodynamics (QCD) are considered. Application of the REDUCE system to describing two-photon processes in e+e- reactions is discussed. An example of using the REDUCE system for calculating matrix elements of elementary hard-scattering processes is described. The calculations were performed with the REDUCE2 version on an EC1040 computer; they take almost 10 minutes of machine time and require a computer storage capacity of about 800 kilobytes.
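
    REDUCE evaluates such traces symbolically through its high-energy-physics facilities; as a purely numerical cross-check of the same kind of identity, the sketch below builds the Dirac matrices in the standard representation with NumPy and verifies Tr(γ^μ γ^ν) = 4 g^{μν}.

```python
import numpy as np

# Dirac gamma matrices in the standard (Dirac) representation.
I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

gamma = [np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]]).astype(complex)]
gamma += [np.block([[np.zeros((2, 2)), s], [-s, np.zeros((2, 2))]]).astype(complex)
          for s in sigma]

metric = np.diag([1.0, -1.0, -1.0, -1.0])      # g^{mu nu}, signature (+,-,-,-)

# Verify the basic trace identity Tr(gamma^mu gamma^nu) = 4 g^{mu nu},
# the kind of contraction REDUCE performs symbolically for QED/QCD amplitudes.
traces = np.array([[np.trace(gamma[mu] @ gamma[nu]).real for nu in range(4)]
                   for mu in range(4)])
assert np.allclose(traces, 4 * metric)
print(traces)
```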

  17. Linear programming computation

    CERN Document Server

    PAN, Ping-Qi

    2014-01-01

    With emphasis on computation, this book is a real breakthrough in the field of LP. In addition to conventional topics, such as the simplex method, duality, and interior-point methods, all deduced in a fresh and clear manner, it introduces the state of the art by highlighting brand-new and advanced results, including efficient pivot rules, Phase-I approaches, reduced simplex methods, deficient-basis methods, face methods, and pivotal interior-point methods. In particular, it covers the determination of the optimal solution set, feasible-point simplex method, decomposition principle for solving large-scale problems, controlled-branch method based on generalized reduced simplex framework for solving integer LP problems.
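
    As a small, self-contained example of the kind of LP the book treats (not code from the book), the snippet below solves a two-variable problem with SciPy's linprog; the data are invented.

```python
from scipy.optimize import linprog

# Maximize 3x + 5y subject to x + 2y <= 14, 3x - y >= 0, x - y <= 2, x, y >= 0.
# linprog minimizes, so the objective is negated; the ">=" row is negated into "<=".
c = [-3.0, -5.0]
A_ub = [[1.0, 2.0],    # x + 2y <= 14
        [-3.0, 1.0],   # -(3x - y) <= 0
        [1.0, -1.0]]   # x - y <= 2
b_ub = [14.0, 0.0, 2.0]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(result.x, -result.fun)   # optimal point and (maximized) objective value
```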

  18. Segmentation process significantly influences the accuracy of 3D surface models derived from cone beam computed tomography

    NARCIS (Netherlands)

    Fourie, Zacharias; Damstra, Janalt; Schepers, Rutger H; Gerrits, Pieter; Ren, Yijin

    AIMS: To assess the accuracy of surface models derived from 3D cone beam computed tomography (CBCT) with two different segmentation protocols. MATERIALS AND METHODS: Seven fresh-frozen cadaver heads were used. There was no conflict of interests in this study. CBCT scans were made of the heads and 3D
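
    Although the record is truncated, the general point — that the segmentation step determines the surface model extracted from the volume — can be illustrated with a synthetic volume and two iso-surface thresholds standing in for two segmentation protocols. The volume, intensities, and thresholds below are all invented.

```python
import numpy as np
from skimage import measure

# Synthetic "CBCT" volume: a blurred high-intensity sphere in a dark background.
grid = np.mgrid[-32:32, -32:32, -32:32]
radius = np.sqrt((grid ** 2).sum(axis=0))
volume = 1000.0 / (1.0 + np.exp(radius - 20.0))   # smooth transition around r = 20

# Two different "segmentation protocols" = two iso-surface thresholds.
for threshold in (300.0, 700.0):
    verts, faces, normals, values = measure.marching_cubes(volume, level=threshold)
    # The extracted surface radius shifts with the threshold, which is how the
    # segmentation step can change the accuracy of the derived 3D model.
    mean_radius = np.linalg.norm(verts - np.array([32.0, 32.0, 32.0]), axis=1).mean()
    print(f"threshold={threshold:.0f}: {len(verts)} vertices, mean radius ~ {mean_radius:.1f}")
```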

  19. Reduced functional connectivity between V1 and inferior frontal cortex associated with visuomotor performance in autism.

    Science.gov (United States)

    Villalobos, Michele E; Mizuno, Akiko; Dahl, Branelle C; Kemmotsu, Nobuko; Müller, Ralph-Axel

    2005-04-15

    Some recent evidence has suggested abnormalities of the dorsal stream and possibly the mirror neuron system in autism, which may be responsible for impairments of joint attention and imitation, and secondarily for language delays. The current study investigates functional connectivity along the dorsal stream in autism, examining interregional blood oxygenation level dependent (BOLD) signal cross-correlation during visuomotor coordination. Eight high-functioning autistic men and eight handedness- and age-matched controls were included. Visually prompted button presses were performed with the preferred hand. For each subject, functional connectivity was computed in terms of BOLD signal correlation with the mean time series in bilateral visual area 17. Our hypothesis of reduced dorsal stream connectivity in autism was confirmed only in part. Functional connectivity with superior parietal areas was not significantly reduced. However, the autism group showed significantly reduced connectivity with bilateral inferior frontal area 44, which is compatible with the hypothesis of mirror neuron defects in autism. More generally, our findings suggest that dorsal stream connectivity in autism may not be fully functional.
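
    A minimal sketch of the seed-based connectivity measure described above: the Pearson correlation of each voxel's BOLD time course with the mean time course of a seed region such as V1. The time series below are synthetic.

```python
import numpy as np

def seed_connectivity(seed_timeseries, voxel_timeseries):
    """Seed-based functional connectivity: Pearson correlation between the mean
    seed (e.g., V1) BOLD time course and every target voxel's time course.

    seed_timeseries: (T,) mean signal of the seed region
    voxel_timeseries: (N, T) signals of N target voxels
    """
    seed = (seed_timeseries - seed_timeseries.mean()) / seed_timeseries.std()
    vox = voxel_timeseries - voxel_timeseries.mean(axis=1, keepdims=True)
    vox /= vox.std(axis=1, keepdims=True)
    return (vox * seed).mean(axis=1)          # one correlation value per voxel

# Illustrative synthetic data: 200 time points, 5 "voxels", assumed values only.
rng = np.random.default_rng(2)
seed = rng.standard_normal(200)
voxels = 0.6 * seed + 0.8 * rng.standard_normal((5, 200))   # partially coupled voxels
print(seed_connectivity(seed, voxels))
```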

  20. Computer-assisted determination of left ventricular endocardial borders reduces variability in the echocardiographic assessment of ejection fraction

    Directory of Open Access Journals (Sweden)

    Lindstrom Lena

    2008-11-01

    Full Text Available Background Left ventricular size and function are important prognostic factors in heart disease. Their measurement is the most frequent reason for sending patients to the echo lab. These measurements have important implications for therapy but are sensitive to the skill of the operator. Earlier automated echo-based methods have not become widely used. The aim of our study was to evaluate an automatic echocardiographic method (with manual correction if needed) for determining left ventricular ejection fraction (LVEF), based on an active appearance model of the left ventricle (syngo® AutoEF, Siemens Medical Solutions). Comparisons were made with manual planimetry (manual Simpson), visual assessment, and automatically determined LVEF from quantitative myocardial gated single photon emission computed tomography (SPECT). Methods 60 consecutive patients referred for myocardial perfusion imaging (MPI) were included in the study. Two-dimensional echocardiography was performed within one hour of MPI at rest. Image quality did not constitute an exclusion criterion. Analysis was performed by five experienced observers and by two novices. Results LVEF (%), end-diastolic volume/BSA and end-systolic volume/BSA (ml/m2) were 54 ± 10, 51 ± 16 and 24 ± 13 for uncorrected AutoEF; 53 ± 10, 53 ± 18 and 26 ± 14 for corrected AutoEF; 51 ± 11, 56 ± 20 and 28 ± 15 for manual Simpson; and 52 ± 12, 67 ± 26 and 35 ± 23 for MPI. The time required for analysis differed significantly among all four echocardiographic methods: 79 ± 5 s for uncorrected AutoEF, 159 ± 46 s for corrected AutoEF, 177 ± 66 s for manual Simpson, and 33 ± 14 s for visual assessment. Compared with the expert manual Simpson, the limits of agreement for novice corrected AutoEF were narrower than for novice manual Simpson (0.8 ± 10.5 vs. -3.2 ± 11.4 LVEF percentage points). Calculated for experts and with LVEF (% categorized into Conclusion Corrected AutoEF reduces the variation in measurements compared with
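
    A minimal sketch of the two quantities reported above: ejection fraction from end-diastolic and end-systolic volumes, and Bland-Altman style limits of agreement between two methods. The volumes and LVEF values below are invented, not the study data.

```python
import numpy as np

def ejection_fraction(edv_ml, esv_ml):
    """LVEF (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

def limits_of_agreement(method_a, method_b):
    """Bland-Altman style bias and 95% limits of agreement between two methods,
    the kind of statistic behind values such as 0.8 ± 10.5 LVEF points."""
    diff = np.asarray(method_a) - np.asarray(method_b)
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    return bias, (bias - spread, bias + spread)

# Invented example values (not the study data).
print(ejection_fraction(edv_ml=120.0, esv_ml=55.0))            # ~54 %
auto_ef = np.array([54, 60, 48, 52, 57], dtype=float)
manual_ef = np.array([51, 62, 47, 50, 55], dtype=float)
print(limits_of_agreement(auto_ef, manual_ef))
```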