WorldWideScience

Sample records for high leverage design

  1. Leveraging Failure in Design Research

    Science.gov (United States)

    Lobato, Joanne; Walters, C. David; Hohensee, Charles; Gruver, John; Diamond, Jaime Marie

    2015-01-01

    Even in the resource-rich, more ideal conditions of many design-based classroom interventions, unexpected events can lead to disappointing results in student learning. However, if later iterations in a design research study are more successful, the previous failures can provide opportunities for comparisons to reveal subtle differences in…

  2. Leveraging advances in biology to design biomaterials

    Science.gov (United States)

    Darnell, Max; Mooney, David J.

    2017-12-01

    Biomaterials have dramatically increased in functionality and complexity, allowing unprecedented control over the cells that interact with them. From these engineering advances arises the prospect of improved biomaterial-based therapies, yet practical constraints favour simplicity. Tools from the biology community are enabling high-resolution and high-throughput bioassays that, if incorporated into a biomaterial design framework, could help achieve unprecedented functionality while minimizing the complexity of designs by identifying the most important material parameters and biological outputs. However, to avoid data explosions and to effectively match the information content of an assay with the goal of the experiment, material screens and bioassays must be arranged in specific ways. By borrowing methods to design experiments and workflows from the bioprocess engineering community, we outline a framework for the incorporation of next-generation bioassays into biomaterials design to effectively optimize function while minimizing complexity. This framework can inspire biomaterials designs that maximize functionality and translatability.

  3. On the Performance of the Measure for Diagnosing Multiple High Leverage Collinearity-Reducing Observations

    Directory of Open Access Journals (Sweden)

    Arezoo Bagheri

    2012-01-01

Full Text Available There is strong evidence that the existing measures designed to detect a single high leverage collinearity-reducing observation are not effective in the presence of multiple high leverage collinearity-reducing observations. In this paper, we propose a cutoff point for a newly developed high leverage collinearity-influential measure and for two existing measures to identify high leverage collinearity-reducing observations, the high leverage points that hide multicollinearity in a data set. It is important to detect these observations, as they are responsible for misleading inferences about the fit of the regression model. The merit of our proposed measure and cutoff point in detecting high leverage collinearity-reducing observations is investigated using engineering data and Monte Carlo simulations.
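The abstract's specific collinearity-influential measures are not reproduced in this listing (their symbols were lost in extraction), but the leverage values and cutoff logic they build on are standard. A minimal sketch, assuming classical hat-matrix leverage and the common 2p/n rule-of-thumb cutoff rather than the paper's proposed measure:

```python
import numpy as np

def hat_leverages(X):
    """Leverage values: the diagonal of the hat matrix H = X (X'X)^{-1} X'.
    Computed via QR, since H = Q Q' and h_ii is the squared norm of row i of Q."""
    Q, _ = np.linalg.qr(X)
    return np.sum(Q ** 2, axis=1)

def flag_high_leverage(X, multiplier=2.0):
    """Flag observations whose leverage exceeds multiplier * p / n,
    a common rule-of-thumb cutoff (the multiplier is an illustrative choice)."""
    n, p = X.shape
    return hat_leverages(X) > multiplier * p / n

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
X[0, 1:] = 10.0  # plant one extreme point in the X-direction
print(flag_high_leverage(X).nonzero()[0])  # index 0 appears among the flagged points
```

The leverages always sum to the number of columns p, which is why cutoffs are phrased as multiples of the average p/n.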

  4. Leveraging two-way probe-level block design for identifying differential gene expression with high-density oligonucleotide arrays.

    Science.gov (United States)

    Barrera, Leah; Benner, Chris; Tao, Yong-Chuan; Winzeler, Elizabeth; Zhou, Yingyao

    2004-04-20

To identify differentially expressed genes across experimental conditions in oligonucleotide microarray experiments, existing statistical methods commonly use a summary of probe-level expression data for each probe set and compare replicates of these values across conditions using a form of the t-test or rank sum test. Here we propose the use of a statistical method that takes advantage of the built-in redundancy architecture of high-density oligonucleotide arrays. We employ parametric and nonparametric variants of two-way analysis of variance (ANOVA) on probe-level data to account for probe-level variation, and use the false-discovery rate (FDR) to account for simultaneous testing on thousands of genes (multiple testing problem). Using publicly available data sets, we systematically compared the performance of parametric two-way ANOVA and the nonparametric Mack-Skillings test to the t-test and Wilcoxon rank-sum test for detecting differentially expressed genes at varying levels of fold change, concentration, and sample size. Using receiver operating characteristic (ROC) curve comparisons, we observed that two-way methods with FDR control on sample sizes with 2-3 replicates exhibit the same high sensitivity and specificity as a t-test with FDR control on sample sizes with 6-9 replicates in detecting at least a two-fold change. Our results suggest that the two-way ANOVA methods using probe-level data are substantially more powerful tests for detecting differential gene expression than corresponding methods for probe-set level data.
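The probe-level two-way layout and FDR control described here can be sketched generically. The following assumes a balanced array (conditions × probes × replicates), a main-effects-only parametric model, and a textbook Benjamini-Hochberg step; none of this reproduces the authors' exact pipeline:

```python
import numpy as np
from scipy import stats

def condition_pvalue(y):
    """p-value for the condition main effect in a two-way ANOVA
    (condition + probe, no interaction) on a balanced probe-level
    array y of shape (n_cond, n_probe, n_rep)."""
    c, p, r = y.shape
    grand = y.mean()
    ss_cond = p * r * np.sum((y.mean(axis=(1, 2)) - grand) ** 2)
    ss_probe = c * r * np.sum((y.mean(axis=(0, 2)) - grand) ** 2)
    ss_error = np.sum((y - grand) ** 2) - ss_cond - ss_probe
    df_cond, df_error = c - 1, c * p * r - c - p + 1
    F = (ss_cond / df_cond) / (ss_error / df_error)
    return stats.f.sf(F, df_cond, df_error)

def benjamini_hochberg(pvals, q=0.05):
    """Boolean mask of discoveries under Benjamini-Hochberg FDR control at level q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    passed = p[order] <= q * np.arange(1, m + 1) / m
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

# One unchanged gene and one gene shifted by 1 (a two-fold change on the log2 scale)
rng = np.random.default_rng(1)
null_gene = rng.normal(size=(2, 11, 3))  # 2 conditions, 11 probe pairs, 3 replicates
de_gene = rng.normal(size=(2, 11, 3)) + np.array([0.0, 1.0])[:, None, None]
print(benjamini_hochberg([condition_pvalue(null_gene), condition_pvalue(de_gene)]))
```

Modeling the probe effect explicitly is what lets the test pool all probe-level measurements per condition instead of collapsing each probe set to a single summary first.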

  5. Leveraging two-way probe-level block design for identifying differential gene expression with high-density oligonucleotide arrays

    Directory of Open Access Journals (Sweden)

    Tao Yong-Chuan

    2004-04-01

Full Text Available Abstract Background To identify differentially expressed genes across experimental conditions in oligonucleotide microarray experiments, existing statistical methods commonly use a summary of probe-level expression data for each probe set and compare replicates of these values across conditions using a form of the t-test or rank sum test. Here we propose the use of a statistical method that takes advantage of the built-in redundancy architecture of high-density oligonucleotide arrays. Results We employ parametric and nonparametric variants of two-way analysis of variance (ANOVA) on probe-level data to account for probe-level variation, and use the false-discovery rate (FDR) to account for simultaneous testing on thousands of genes (multiple testing problem). Using publicly available data sets, we systematically compared the performance of parametric two-way ANOVA and the nonparametric Mack-Skillings test to the t-test and Wilcoxon rank-sum test for detecting differentially expressed genes at varying levels of fold change, concentration, and sample size. Using receiver operating characteristic (ROC) curve comparisons, we observed that two-way methods with FDR control on sample sizes with 2–3 replicates exhibit the same high sensitivity and specificity as a t-test with FDR control on sample sizes with 6–9 replicates in detecting at least a two-fold change. Conclusions Our results suggest that the two-way ANOVA methods using probe-level data are substantially more powerful tests for detecting differential gene expression than corresponding methods for probe-set level data.

  6. Leveraging design activism to guide public projects towards citizen inclusion

    DEFF Research Database (Denmark)

    Casciola, Lara; Götzen, Amalia De; Morelli, Nicola

    2017-01-01

This paper explores a case wherein design activism was leveraged to guide the governance of a public project towards greater citizen inclusion. This exploration, part of a master's thesis in Service Design at Aalborg University, centres on Copenhagen's Street Lab – a living lab where technological…

  7. On risk, leverage and banks: do highly leveraged banks take on excessive risk?

    NARCIS (Netherlands)

    Koudstaal, M.; van Wijnbergen, S.

    2012-01-01

This paper deals with the relation between excessive risk taking and capital structure in banks. Examining a quarterly dataset of U.S. banks between 1993 and 2010, we find that equity is valued higher when more risky portfolios are chosen when leverage is high, and that more risk taking has a…

  8. Embedded Leverage

    DEFF Research Database (Denmark)

    Frazzini, Andrea; Heje Pedersen, Lasse

    Many financial instruments are designed with embedded leverage such as options and leveraged exchange traded funds (ETFs). Embedded leverage alleviates investors’ leverage constraints and, therefore, we hypothesize that embedded leverage lowers required returns. Consistent with this hypothesis, we......, with t-statistics of 8.6 for equity options, 6.3 for index options, and 2.5 for ETFs. We provide extensive robustness tests and discuss the broader implications of embedded leverage for financial economics....

  9. Learning Leverage: Designing Meaningful Professional Development for "All" Teachers

    Science.gov (United States)

    Hunzicker, Jana

    2008-01-01

    The leverage of National Board candidacy provides a unique opportunity for substantial teacher learning in a way that many professional development experiences do not. The key is learning leverage--an appropriate balance of rigor, reward, and risk. Learning leverage occurs naturally among teachers who choose to pursue National Board certification,…

  10. News from CEC: High-Leverage Practices in Special Education

    Science.gov (United States)

    TEACHING Exceptional Children, 2017

    2017-01-01

    In fall 2014, the Council for Exceptional Children's (CEC) Board of Directors approved a proposal from the Professional Standards and Practice Committee (PSPC) to develop a set of high-leverage practices (HLPs) for special education teachers. The CEEDAR Center at the University of Florida, which is funded by the U.S. Department of Education's…

  11. Monitoring Leverage

    DEFF Research Database (Denmark)

    Geanakoplos, John; Heje Pedersen, Lasse

    2014-01-01

    We argue that leverage is a central element of economic cycles and discuss how leverage can be properly monitored. While traditionally the interest rate has been regarded as the single key feature of a loan, we contend that the size of the loan, i.e., the leverage, is in fact a more important...... measure of systemic risk. Indeed, systemic crises tend to erupt when highly leveraged economic agents are forced to deleverage, sending the economy into recession. We emphasize the importance of measuring both the average leverage on old loans (which captures the economy's vulnerability) and the leverage...... and monitored....

  12. Trajectory Design for a Cislunar Cubesat Leveraging Dynamical Systems Techniques: The Lunar Icecube Mission

    Science.gov (United States)

    Bosanac, Natasha; Cox, Andrew; Howell, Kathleen C.; Folta, David C.

    2017-01-01

    Lunar IceCube is a 6U CubeSat that is designed to detect and observe lunar volatiles from a highly inclined orbit. This spacecraft, equipped with a low-thrust engine, will be deployed from the upcoming Exploration Mission-1 vehicle in late 2018. However, significant uncertainty in the deployment conditions for secondary payloads impacts both the availability and geometry of transfers that deliver the spacecraft to the lunar vicinity. A framework that leverages dynamical systems techniques is applied to a recently updated set of deployment conditions and spacecraft parameter values for the Lunar IceCube mission, demonstrating the capability for rapid trajectory design.

  13. Leveraging design thinking to build sustainable mobile health systems.

    Science.gov (United States)

    Eckman, Molly; Gorski, Irena; Mehta, Khanjan

Mobile health, or mHealth, technology has the potential to improve health care access in the developing world. However, the majority of mHealth projects do not expand beyond the pilot stage. A core reason is that they do not account for the individual needs and wants of those involved. A collaborative approach is needed to integrate the perspectives of all stakeholders into the design and operation of mHealth endeavours. Design thinking is a methodology used to develop and evaluate novel concepts for systems. With roots in participatory processes and self-determined pathways, design thinking provides a compelling framework to understand and apply the needs of diverse stakeholders to mHealth project development through a highly iterative process. The methodology presented in this article provides a structured approach to apply design thinking principles to assess the feasibility of novel mHealth endeavours during early conceptualisation.

  14. Leveraging human-centered design in chronic disease prevention.

    Science.gov (United States)

    Matheson, Gordon O; Pacione, Chris; Shultz, Rebecca K; Klügl, Martin

    2015-04-01

    Bridging the knowing-doing gap in the prevention of chronic disease requires deep appreciation and understanding of the complexities inherent in behavioral change. Strategies that have relied exclusively on the implementation of evidence-based data have not yielded the desired progress. The tools of human-centered design, used in conjunction with evidence-based data, hold much promise in providing an optimal approach for advancing disease prevention efforts. Directing the focus toward wide-scale education and application of human-centered design techniques among healthcare professionals will rapidly multiply their effective ability to bring the kind of substantial results in disease prevention that have eluded the healthcare industry for decades. This, in turn, would increase the likelihood of prevention by design. Copyright © 2015 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  15. Leveraging the Experimental Method to Inform Solar Cell Design

    Science.gov (United States)

    Rose, Mary Annette; Ribblett, Jason W.; Hershberger, Heather Nicole

    2010-01-01

    In this article, the underlying logic of experimentation is exemplified within the context of a photoelectrical experiment for students taking a high school engineering, technology, or chemistry class. Students assume the role of photochemists as they plan, fabricate, and experiment with a solar cell made of copper and an aqueous solution of…

  16. Two-Step Robust Diagnostic Method for Identification of Multiple High Leverage Points

    OpenAIRE

    Arezoo Bagheri; Habshah Midi; A. H.M.R. Imon

    2009-01-01

    Problem statement: High leverage points are extreme outliers in the X-direction. In regression analysis, the detection of these leverage points becomes important due to their arbitrary large effects on the estimations as well as multicollinearity problems. Mahalanobis Distance (MD) has been used as a diagnostic tool for identification of outliers in multivariate analysis where it finds the distance between normal and abnormal groups of the data. Since the computation of MD relies on non-robus...
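The classical diagnostic this abstract starts from is easy to sketch. The chi-square cutoff and planted outlier below are generic illustrations; they exhibit exactly the non-robust behaviour (masking under multiple outliers) that motivates the paper's two-step robust method, which is not reproduced here:

```python
import numpy as np
from scipy import stats

def mahalanobis_sq(X):
    """Squared Mahalanobis distance of each row from the (non-robust) sample
    mean, using the (non-robust) sample covariance."""
    centered = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    return np.einsum('ij,jk,ik->i', centered, cov_inv, centered)

def flag_outliers(X, alpha=0.975):
    """Flag rows whose squared distance exceeds the chi-square(p) quantile.
    A cluster of high leverage points can inflate the mean and covariance and
    mask one another, which is why robust two-step variants are needed."""
    return mahalanobis_sq(X) > stats.chi2.ppf(alpha, df=X.shape[1])

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 2))
X[0] = [8.0, 8.0]  # a single extreme point in the X-direction
print(flag_outliers(X).nonzero()[0])  # index 0 appears among the flagged rows
```

A single extreme point is detected reliably; replacing it with several identical extreme points is the easiest way to see the masking effect the abstract alludes to.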

  17. Trajectory design for a cislunar CubeSat leveraging dynamical systems techniques: The Lunar IceCube mission

    Science.gov (United States)

    Bosanac, Natasha; Cox, Andrew D.; Howell, Kathleen C.; Folta, David C.

    2018-03-01

    Lunar IceCube is a 6U CubeSat that is designed to detect and observe lunar volatiles from a highly inclined orbit. This spacecraft, equipped with a low-thrust engine, is expected to be deployed from the upcoming Exploration Mission-1 vehicle. However, significant uncertainty in the deployment conditions for secondary payloads impacts both the availability and geometry of transfers that deliver the spacecraft to the lunar vicinity. A framework that leverages dynamical systems techniques is applied to a recently updated set of deployment conditions and spacecraft parameter values for the Lunar IceCube mission, demonstrating the capability for rapid trajectory design.

  18. Design considerations for micro- and nanopositioning: leveraging the latest for biophysical applications.

    Science.gov (United States)

    Jordan, S C; Anthony, P C

    2009-08-01

    Biophysical applications ranging from fluorescence microassays to single-molecule microscopy are increasingly dependent on automated nanoscale positional control and stability. A whirlwind of motion-industry innovation has resulted in an array of new motion options offering significant improvements in application performance, reproducibility and throughput. The challenge to leverage these developments depends on researchers, engineers and motion vendors acquiring a common language of specifications and a shared understanding of the challenges posed by application needs. To assist in building this shared understanding, this article reviews today's motion technologies, beginning with a concise review of key principles of motion control focusing on applications. It progresses through illustrations of sensor/encoder technologies and servo techniques. A spectrum of classical and recent motion technologies is explored, from stepper and servo actuation of conventional microscopy stages, to advanced piezo stack nanopositioners capable of picometer precision, to novel ultrasonic resonant piezomotors and piezo-ceramic-based mechanisms capable of high-force positioning over many millimeters while providing resolutions down into the sub-nanometer range. A special emphasis is placed on the effects of integrating multiple motion technologies into an application, such as stacking a fine nanopositioner atop a long-travel stage. Examples and data are presented to clarify these issues, including important and insightful new stability measurements taken directly from an advanced optical trapping application. The important topics of software and interfacing are also explored from an applications perspective, since design-and-debugging time, synchronization capabilities and overall throughput are heavily dependent on these often-overlooked aspects of motion system design. The discussion is designed to illuminate specifications-related topics that become increasingly important as

  19. The effect of high leverage points on the logistic ridge regression estimator having multicollinearity

    Science.gov (United States)

    Ariffin, Syaiba Balqish; Midi, Habshah

    2014-06-01

This article is concerned with the performance of the logistic ridge regression estimation technique in the presence of multicollinearity and high leverage points. In logistic regression, multicollinearity exists among predictors and in the information matrix. The maximum likelihood estimator suffers a huge setback in the presence of multicollinearity, which causes regression estimates to have unduly large standard errors. To remedy this problem, a logistic ridge regression estimator is put forward. It is evident that the logistic ridge regression estimator outperforms the maximum likelihood approach for handling multicollinearity. The effect of high leverage points is then investigated on the performance of the logistic ridge regression estimator through a real data set and a simulation study. The findings signify that the logistic ridge regression estimator fails to provide better parameter estimates in the presence of both high leverage points and multicollinearity.
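Logistic ridge estimation itself can be sketched in a few lines. The gradient-descent fit below, with an unpenalized intercept and an illustrative penalty weight, is a generic version rather than the estimator evaluated in the article:

```python
import numpy as np

def fit_logistic_ridge(X, y, lam=1.0, n_iter=2000, lr=0.5):
    """Logistic ridge regression: gradient descent on the average negative
    log-likelihood plus an L2 penalty on the slopes (intercept unpenalized).
    The penalty keeps estimates finite even under severe multicollinearity."""
    n, p = X.shape
    Xa = np.column_stack([np.ones(n), X])  # prepend intercept column
    beta = np.zeros(p + 1)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-Xa @ beta))  # fitted probabilities
        grad = Xa.T @ (mu - y) / n
        grad[1:] += lam * beta[1:] / n         # ridge shrinkage on slopes only
        beta -= lr * grad
    return beta

rng = np.random.default_rng(3)
x = rng.normal(size=(200, 1))
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-2.0 * x[:, 0]))).astype(float)
beta = fit_logistic_ridge(x, y)
print(beta)  # slope estimate near the true value of 2, shrunk slightly toward 0
```

The same shrinkage is what keeps the estimates bounded under the separation problem discussed in the following record, where the unpenalized maximum likelihood estimator does not exist.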

  20. The effect of high leverage points on the maximum estimated likelihood for separation in logistic regression

    Science.gov (United States)

    Ariffin, Syaiba Balqish; Midi, Habshah; Arasan, Jayanthi; Rana, Md Sohel

    2015-02-01

This article is concerned with the performance of the maximum estimated likelihood estimator in the presence of separation in the space of the independent variables and high leverage points. The maximum likelihood estimator suffers from the problem of non-overlapping cases in the covariates, where the regression coefficients are not identifiable and the maximum likelihood estimator does not exist. Consequently, the iteration scheme fails to converge and gives faulty results. To remedy this problem, the maximum estimated likelihood estimator is put forward. It is evident that the maximum estimated likelihood estimator is resistant against separation and the estimates always exist. The effect of high leverage points is then investigated on the performance of the maximum estimated likelihood estimator through real data sets and a Monte Carlo simulation study. The findings signify that the maximum estimated likelihood estimator fails to provide better parameter estimates in the presence of both separation and high leverage points.

  1. A High-Leverage Language Teaching Practice: Leading an Open-Ended Group Discussion

    Science.gov (United States)

    Kearney, Erin

    2015-01-01

    In response to calls for more practice-based teacher education, this study investigated the way in which two high-performing novice world language teachers, one in Spanish and one in Latin, implemented a high-leverage teaching practice, leading an open-ended group discussion. Observational data revealed a number of constituent micro-practices. The…

2. High School Vocational Counseling Role in Leveraging Students' Professional Inclinations

    Directory of Open Access Journals (Sweden)

    Gabriel Brătucu

    2014-08-01

Full Text Available The experience of many countries with a well-educated workforce highlights the important role of vocational counselling services in advantageous youth professional orientation. Researchers, in turn, show a growing interest in studying the role of vocational counselling from the perspective of increasing the efficiency of investment in education and strengthening the capacity of enterprises to meet the challenges of the knowledge economy. In Romania, high school students have access to career guidance services, but there is little information on the extent to which they use these services or how useful they consider them. There is often a social conformism among high school graduates that leads them to choose professions valued at a certain moment, without making a personal judgement. The aim of this paper is to analyse, as a good practice, the role of vocational counselling for high school graduates in developing professional skills, in order to help them make the right career decision. To monitor high school students' opinions on vocational guidance and their perceptions of integration into the labour market, a market research study was conducted: a survey of a sample of 2,364 high school students in their final year of study (twelfth grade). The research showed that only a small percentage of the interviewed high school students had knowledge of the vocational guidance activity. Of those who had used these services, most were satisfied. The study also highlighted that the most important criteria for getting a job are the skills acquired during studies.

  3. Strategies for high-throughput comparative modeling: applications to leverage analysis in structural genomics and protein family organization.

    Science.gov (United States)

    Mirkovic, Nebojsa; Li, Zhaohui; Parnassa, Andrew; Murray, Diana

    2007-03-01

    The technological breakthroughs in structural genomics were designed to facilitate the solution of a sufficient number of structures, so that as many protein sequences as possible can be structurally characterized with the aid of comparative modeling. The leverage of a solved structure is the number and quality of the models that can be produced using the structure as a template for modeling and may be viewed as the "currency" with which the success of a structural genomics endeavor can be measured. Moreover, the models obtained in this way should be valuable to all biologists. To this end, at the Northeast Structural Genomics Consortium (NESG), a modular computational pipeline for automated high-throughput leverage analysis was devised and used to assess the leverage of the 186 unique NESG structures solved during the first phase of the Protein Structure Initiative (January 2000 to July 2005). Here, the results of this analysis are presented. The number of sequences in the nonredundant protein sequence database covered by quality models produced by the pipeline is approximately 39,000, so that the average leverage is approximately 210 models per structure. Interestingly, only 7900 of these models fulfill the stringent modeling criterion of being at least 30% sequence-identical to the corresponding NESG structures. This study shows how high-throughput modeling increases the efficiency of structure determination efforts by providing enhanced coverage of protein structure space. In addition, the approach is useful in refining the boundaries of structural domains within larger protein sequences, subclassifying sequence diverse protein families, and defining structure-based strategies specific to a particular family. (c) 2006 Wiley-Liss, Inc.

  4. User input in iterative design for prevention product development: leveraging interdisciplinary methods to optimize effectiveness.

    Science.gov (United States)

    Guthrie, Kate M; Rosen, Rochelle K; Vargas, Sara E; Guillen, Melissa; Steger, Arielle L; Getz, Melissa L; Smith, Kelley A; Ramirez, Jaime J; Kojic, Erna M

    2017-10-01

    The development of HIV-preventive topical vaginal microbicides has been challenged by a lack of sufficient adherence in later stage clinical trials to confidently evaluate effectiveness. This dilemma has highlighted the need to integrate translational research earlier in the drug development process, essentially applying behavioral science to facilitate the advances of basic science with respect to the uptake and use of biomedical prevention technologies. In the last several years, there has been an increasing recognition that the user experience, specifically the sensory experience, as well as the role of meaning-making elicited by those sensations, may play a more substantive role than previously thought. Importantly, the role of the user-their sensory perceptions, their judgements of those experiences, and their willingness to use a product-is critical in product uptake and consistent use post-marketing, ultimately realizing gains in global public health. Specifically, a successful prevention product requires an efficacious drug, an efficient drug delivery system, and an effective user. We present an integrated iterative drug development and user experience evaluation method to illustrate how user-centered formulation design can be iterated from the early stages of preclinical development to leverage the user experience. Integrating the user and their product experiences into the formulation design process may help optimize both the efficiency of drug delivery and the effectiveness of the user.

  5. PCA leverage: outlier detection for high-dimensional functional magnetic resonance imaging data.

    Science.gov (United States)

    Mejia, Amanda F; Nebel, Mary Beth; Eloyan, Ani; Caffo, Brian; Lindquist, Martin A

    2017-07-01

Outlier detection for high-dimensional (HD) data is a popular topic in modern statistical research. However, one source of HD data that has received relatively little attention is functional magnetic resonance images (fMRI), which consist of hundreds of thousands of measurements sampled at hundreds of time points. At a time when the availability of fMRI data is rapidly growing, primarily through large, publicly available grassroots datasets, automated quality control and outlier detection methods are greatly needed. We propose principal components analysis (PCA) leverage and demonstrate how it can be used to identify outlying time points in an fMRI run. Furthermore, PCA leverage is a measure of the influence of each observation on the estimation of principal components, which are often of interest in fMRI data. We also propose an alternative measure, PCA robust distance, which is less sensitive to outliers and has controllable statistical properties. The proposed methods are validated through simulation studies and are shown to be highly accurate. We also conduct a reliability study using resting-state fMRI data from the Autism Brain Imaging Data Exchange and find that removal of outliers using the proposed methods results in more reliable estimation of subject-level resting-state networks using independent components analysis. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
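PCA leverage as described can be computed directly from the SVD of the centered time-by-voxel matrix. In the sketch below, the number of components and the three-times-the-mean flagging rule are illustrative choices, not the validated cutoffs or the robust-distance variant from the paper:

```python
import numpy as np

def pca_leverage(Y, k):
    """Leverage of each time point on the top-k principal components:
    the diagonal of U_k U_k' from the SVD of the centered T x V data matrix.
    Leverages are nonnegative and average exactly k / T."""
    Yc = Y - Y.mean(axis=0)  # center each voxel's time series
    U, _, _ = np.linalg.svd(Yc, full_matrices=False)
    return np.sum(U[:, :k] ** 2, axis=1)

def flag_timepoints(Y, k=5, multiplier=3.0):
    """Flag time points whose leverage exceeds multiplier times the mean leverage."""
    lev = pca_leverage(Y, k)
    return lev > multiplier * lev.mean()

rng = np.random.default_rng(4)
Y = rng.normal(size=(100, 200))  # 100 time points x 200 voxels
Y[17] += 10.0                    # inject a global intensity spike at t = 17
print(flag_timepoints(Y).nonzero()[0])  # time point 17 appears among the flagged
```

Because the leverages always sum to k, thresholds are naturally expressed as multiples of the average k/T, mirroring the hat-matrix convention in regression.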

  6. Leveraging health information exchange to improve population health reporting processes: lessons in using a collaborative-participatory design process.

    Science.gov (United States)

    Revere, Debra; Dixon, Brian E; Hills, Rebecca; Williams, Jennifer L; Grannis, Shaun J

    2014-01-01

    Surveillance, or the systematic monitoring of disease within a population, is a cornerstone function of public health. Despite significant investment in information technologies (IT) to improve the public's health, health care providers continue to rely on manual, spontaneous reporting processes that can result in incomplete and delayed surveillance activities. Participatory design principles advocate including real users and stakeholders when designing an information system to ensure high ecological validity of the product, incorporate relevance and context into the design, reduce misconceptions designers can make due to insufficient domain expertise, and ultimately reduce barriers to adoption of the system. This paper focuses on the collaborative and informal participatory design process used to develop enhanced, IT-enabled reporting processes that leverage available electronic health records in a health information exchange to prepopulate notifiable-conditions report forms used by public health authorities. Over nine months, public health stakeholders, technical staff, and informatics researchers were engaged in a multiphase participatory design process that included public health stakeholder focus groups, investigator-engineering team meetings, public health survey and census regarding high-priority data elements, and codesign of exploratory prototypes and final form mock-ups. A number of state-mandated report fields that are not highly used or desirable for disease investigation were eliminated, which allowed engineers to repurpose form space for desired and high-priority data elements and improve the usability of the forms. Our participatory design process ensured that IT development was driven by end user expertise and needs, resulting in significant improvements to the layout and functionality of the reporting forms. In addition to informing report form development, engaging with public health end users and stakeholders through the participatory design

  7. Protein leverage affects energy intake of high-protein diets in humans.

    Science.gov (United States)

    Martens, Eveline A; Lemmens, Sofie G; Westerterp-Plantenga, Margriet S

    2013-01-01

The protein leverage hypothesis requires specific evidence that protein intake is regulated more strongly than energy intake. The objective was to determine ad libitum energy intake, body weight changes, and appetite profile in response to protein-to-carbohydrate + fat ratio over 12 consecutive days and in relation to age, sex, BMI, and type of protein. A 12-d randomized crossover study was performed in 40 men and 39 women [mean ± SD age: 34.0 ± 17.6 y; BMI (in kg/m²): 23.7 ± 3.4] with the use of diets containing 5%, 15%, and 30% of energy from protein from a milk or plant source. Protein-content effects did not differ by age, sex, BMI, or type of protein. Total energy intake was significantly lower in the high-protein (7.21 ± 3.08 MJ/d) condition than in the low-protein (9.33 ± 3.52 MJ/d) and normal-protein (9.62 ± 3.51 MJ/d) conditions (P = 0.001), which was predominantly the result of a lower energy intake from meals (P = 0.001). Protein intake varied directly according to the amount of protein in the diet (P = 0.001). The AUC of visual analog scale appetite ratings did not differ significantly, yet fluctuations in hunger (P = 0.019) and desire to eat (P = 0.026) over the day were attenuated in the high-protein condition compared with the normal-protein condition. We found evidence to support the protein leverage hypothesis in that individuals underate relative to energy balance from diets containing a higher protein-to-carbohydrate + fat ratio. No evidence for protein leverage effects from diets containing a lower ratio of protein to carbohydrate + fat was obtained. It remains to be shown whether a relatively low protein intake would cause overeating or would be the effect of overeating of carbohydrate and fat. The study was registered at clinicaltrials.gov as NCT01320189.

  8. Prospective Science Teachers' Field Experiences in K-12 STEM Academy Classrooms: Opportunities to Learn High-Leverage Science Teaching Practices

    Science.gov (United States)

    Carpenter, Stacey Lynn

    Science education reform efforts in the U.S. have emphasized shifting away from teacher-centered instruction and teaching science as isolated facts, to more student-centered instruction where students engage in disciplinary discourse and science and engineering practices to learn more connected concepts. As such, teachers need to be prepared to teach science in these reform-based ways; however, many teachers have neither experienced reform-based science instruction in their own science learning, nor witnessed reform-based science instruction in their preservice classroom field experiences. At the same time, there has been an emphasis in teacher education on organizing the preparation of new teachers around high-leverage teaching practices--equitable teaching practices that are known to result in student learning and form a strong base for future teacher learning. In this qualitative study, I investigated eight prospective secondary science teachers as they participated in the unique field experience contexts of classrooms in STEM-focused high school academies. Using a lens of situated learning theory, I examined how prospective teachers from two classroom-based field experiences engaged in high-leverage teaching practices and how their experiences in these classrooms shaped their own visions of science teaching. I analyzed video data of classroom instruction, along with prospective and mentor teacher interviews and surveys, to determine the instructional contexts of each academy and the science teaching strategies (including high-leverage practices) that prospective teachers had opportunities to observe and participate in. I also analyzed prospective teacher interviews and surveys to determine their visions of effective science teaching, what high-leverage science teaching practices prospective teachers included in their visions, and how their visions changed throughout the experience. I found that both academy contexts featured more student work, particularly

  9. Leveraging multi-layer imager detector design to improve low-dose performance for megavoltage cone-beam computed tomography

    Science.gov (United States)

    Hu, Yue-Houng; Rottmann, Joerg; Fueglistaller, Rony; Myronakis, Marios; Wang, Adam; Huber, Pascal; Shedlock, Daniel; Morf, Daniel; Baturin, Paul; Star-Lack, Josh; Berbeco, Ross

    2018-02-01

    While megavoltage cone-beam computed tomography (CBCT) using an electronic portal imaging device (EPID) provides many advantages over kilovoltage (kV) CBCT, clinical adoption is limited by its high doses. Multi-layer imager (MLI) EPIDs increase DQE(0) while maintaining high resolution. However, even well-designed, high-performance MLIs suffer from increased electronic noise from each readout, degrading low-dose image quality. To improve low-dose performance, shift-and-bin addition (ShiBA) imaging is proposed, leveraging the unique architecture of the MLI. ShiBA combines hardware readout-binning and super-resolution concepts, reducing electronic noise while maintaining native image sampling. The imaging performance of full-resolution (FR); standard, aligned binned (BIN); and ShiBA images is compared in terms of noise power spectrum (NPS), electronic NPS, modulation transfer function (MTF), and the ideal observer signal-to-noise ratio (SNR)—the detectability index (d′). The FR 4-layer readout of the prototype MLI exhibits an electronic NPS magnitude 6-times higher than a state-of-the-art single-layer (SLI) EPID. Although the MLI is built on the same readout platform as the SLI, with each layer exhibiting equivalent electronic noise, the multi-stage readout of the MLI results in electronic noise 50% higher than simple summation. Electronic noise is mitigated in both BIN and ShiBA imaging, reducing its total by ~12 times. ShiBA further reduces the NPS, effectively upsampling the image, resulting in a multiplication by a sinc² function. Normalized NPS show that neither ShiBA nor BIN otherwise affects image noise. The line spread function (LSF) shows that ShiBA removes the pixelation artifact of BIN images and mitigates the effect of detector shift, but does not quantifiably improve the MTF. ShiBA provides a pre-sampled representation of the images, mitigating phase dependence. Hardware binning strategies lower the quantum noise floor, with 2 × 2 implementation reducing the
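
The shift-and-bin idea can be sketched numerically. In the toy below (the 4-layer geometry, noise figure, and circular-shift binning are illustrative assumptions, not the prototype's actual readout), each layer is binned 2 × 2 under a distinct one-pixel shift and the binned reads are re-interleaved at native sampling, so each native pixel is covered by four binned reads instead of four full-resolution reads:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.poisson(100.0, size=(64, 64)).astype(float)  # toy quantum image
sigma_e = 5.0                                            # assumed electronic noise per read
n_layers = 4

def bin2x2(img, dy, dx):
    """2x2 hardware binning after a (dy, dx) pixel shift (circular, for simplicity)."""
    s = np.roll(img, (dy, dx), axis=(0, 1))
    h, w = s.shape
    return s.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# Full-resolution readout: one noisy read per pixel, per layer
fr = sum(scene + rng.normal(0.0, sigma_e, scene.shape) for _ in range(n_layers))

# ShiBA: each layer binned 2x2 under a distinct sub-bin shift, then the binned
# reads are upsampled and shifted back, restoring native sampling
shiba = np.zeros_like(scene)
for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    binned = bin2x2(scene, dy, dx) + rng.normal(0.0, sigma_e, (32, 32))
    up = np.repeat(np.repeat(binned, 2, axis=0), 2, axis=1) / 4.0
    shiba += np.roll(up, (-dy, -dx), axis=(0, 1))

# Electronic-noise budget: FR takes n_layers noisy reads per native pixel,
# while ShiBA shares each binned read across four pixels
print("FR reads per native pixel:   ", n_layers)
print("ShiBA reads per native pixel:", n_layers / 4)
```

The shifts remove the pixelation that plain aligned binning would introduce, mirroring the LSF observation in the abstract.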

  10. Leveraging High Performance Computing for Managing Large and Evolving Data Collections

    Directory of Open Access Journals (Sweden)

    Ritu Arora

    2014-10-01

    Full Text Available The process of developing a digital collection in the context of a research project often involves a pipeline pattern during which data growth, data types, and data authenticity need to be assessed iteratively in relation to the different research steps and in the interest of archiving. Throughout a project’s lifecycle curators organize newly generated data while cleaning and integrating legacy data when it exists, and deciding what data will be preserved for the long term. Although these actions should be part of a well-oiled data management workflow, there are practical challenges in doing so if the collection is very large and heterogeneous, or is accessed by several researchers contemporaneously. There is a need for data management solutions that can help curators with efficient and on-demand analyses of their collection so that they remain well-informed about its evolving characteristics. In this paper, we describe our efforts towards developing a workflow to leverage open science High Performance Computing (HPC resources for routinely and efficiently conducting data management tasks on large collections. We demonstrate that HPC resources and techniques can significantly reduce the time for accomplishing critical data management tasks, and enable a dynamic archiving throughout the research process. We use a large archaeological data collection with a long and complex formation history as our test case. We share our experiences in adopting open science HPC resources for large-scale data management, which entails understanding usage of the open source HPC environment and training users. These experiences can be generalized to meet the needs of other data curators working with large collections.
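
One routine task of this kind, checksum-based fixity verification across a collection, parallelizes naturally. A minimal standard-library sketch with thread-parallel I/O (this stands in for, and is much simpler than, the open science HPC workflow the paper describes):

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def sha256_of(path):
    """Stream one file through SHA-256 in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return path, h.hexdigest()

def fixity_report(root, workers=8):
    """Hash every file under root concurrently; returns {path: digest}."""
    files = [p for p in Path(root).rglob("*") if p.is_file()]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(sha256_of, files))
```

Re-running the report at each ingest and diffing it against stored digests flags silent corruption or accidental edits as the collection evolves.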

  11. Leveraging voice

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2017-01-01

    researchers improve our practices and how could digital online video help offer more positive stories about research and higher education? How can academics in higher education be better at telling about our research, thereby reclaiming and leveraging our voice in a post-factual era? As higher education...

  12. A path to better healthcare simulation systems: leveraging the integrated systems design approach.

    Science.gov (United States)

    Scerbo, Mark W; Murray, W Bosseau; Alinier, Guillaume; Antonius, Tim; Caird, Jeff; Stricker, Eric; Rice, John; Kyle, Richard

    2011-08-01

    This article addresses the necessary steps in the design of simulation-based instructional systems. A model for designing instructional systems is presented which stipulates that the outcome metrics be defined before the simulation system is designed. This ensures integration of educational objectives and measures of competency into the design and development process. The article ends with a challenge to simulator users and instructors: become involved in the integrated system design process by the daily collection of standardized data and working with the simulation engineers throughout the design process.

  13. PD-atricians: Leveraging Physicians and Participatory Design to Develop Novel Clinical Information Tools.

    Science.gov (United States)

    Pollack, Ari H; Miller, Andrew; Mishra, Sonali R; Pratt, Wanda

    2016-01-01

    Participatory design, a method by which system users and stakeholders meaningfully contribute to the development of a new process or technology, has great potential to revolutionize healthcare technology, yet has seen limited adoption. We conducted a design session with eleven physicians working to create a novel clinical information tool utilizing participatory design methods. During the two-hour session, the physicians quickly engaged in the process and generated a large quantity of information, informing the design of a future tool. By utilizing facilitators experienced in design methodology, with detailed domain expertise, and well integrated into the healthcare organization, the participatory design session engaged a group of users who are often disenfranchised with existing processes as well as health information technology in general. We provide insight into why participatory design works with clinicians and provide guiding principles for how to implement these methods in healthcare organizations interested in advancing health information technology.

  14. Leverage bubble

    Science.gov (United States)

    Yan, Wanfeng; Woodard, Ryan; Sornette, Didier

    2012-01-01

    Leverage is strongly related to liquidity in a market and lack of liquidity is considered a cause and/or consequence of the recent financial crisis. A repurchase agreement is a financial instrument where a security is sold simultaneously with an agreement to buy it back at a later date. Repurchase agreement (repo) market size is a very important element in calculating the overall leverage in a financial market. Therefore, studying the behavior of repo market size can help to understand a process that can contribute to the birth of a financial crisis. We hypothesize that herding behavior among large investors led to massive over-leveraging through the use of repos, resulting in a bubble (built up over the previous years) and subsequent crash in this market in early 2008. We use the Johansen-Ledoit-Sornette (JLS) model of rational expectation bubbles and behavioral finance to study the dynamics of the repo market that led to the crash. The JLS model qualifies a bubble by the presence of characteristic patterns in the price dynamics, called log-periodic power law (LPPL) behavior. We show that there was significant LPPL behavior in the market before that crash and that the range of times predicted by the model for the end of the bubble is consistent with the observations.
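
The LPPL signature the JLS model fits has a standard closed form in the literature; a minimal sketch with illustrative parameter values (not those fitted to the repo data):

```python
import numpy as np

def lppl(t, tc, m, omega, A, B, C, phi):
    """JLS log-periodic power law for the expected log-price before critical time tc."""
    dt = tc - t
    return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) - phi)

# Synthetic bubble: super-exponential growth decorated with log-periodic oscillations
t = np.linspace(0.0, 0.95, 200)     # tc = 1.0, illustrative time units
y = lppl(t, tc=1.0, m=0.5, omega=8.0, A=10.0, B=-2.0, C=0.2, phi=0.0)
print(round(float(y[0]), 2))        # at dt = 1: A + B + C = 8.2
```

Fitting tc (the critical time), the power m, and the log-frequency omega to observed prices is what yields the predicted end-of-bubble window mentioned in the abstract.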

  15. Leveraging Understanding of Flow of Variable Complex Fluid to Design Better Absorbent Hygiene Products

    Science.gov (United States)

    Krautkramer, C.; Rend, R. R.

    2014-12-01

    Menstrual flow, which is a result of shedding of uterus endometrium, occurs periodically in sync with a woman's hormonal cycle. Management of this flow while allowing women to pursue their normal daily lives is the purpose of many commercial products. Some of these products, e.g. feminine hygiene pads and tampons, utilize porous materials in achieving their goal. In this paper we will demonstrate different phenomena that have been observed in flow of menstrual fluid through these porous materials, share some of the advances made in experimental and analytical study of these phenomena, and also present some of the unsolved challenges and difficulties encountered while studying this kind of flow. Menstrual fluid is generally composed of four main components: blood plasma, blood cells, cervical mucus, and tissue debris. This non-homogeneous, multiphase fluid displays very complex rheological behavior, e.g., yield stress, thixotropy, and visco-elasticity, that varies throughout and between menstrual cycles and among women due to various factors. Flow rates are also highly variable during menstruation and across the population, and the rheological properties of the fluid change during the flow into and through the product. In addition to these phenomena, changes to the structure of the porous medium within the product can also be seen due to fouling and/or swelling of the material. This paper will also share how the fluid components impact the flow and the consequences for computer simulation, the creation of a simulant fluid and testing methods, and for designing products that best meet consumer needs. We hope to bring to light the challenges of managing this complex flow to meet a basic need of women all over the world. An opportunity exists to apply lessons from research in other disciplines to improve the scientific knowledge related to the flow of this complex fluid through the porous medium that is a sanitary product.

  16. Interrogating the Learning Sciences as a Design Science: Leveraging Insights from Chinese Philosophy and Chinese Medicine

    Science.gov (United States)

    Chee, Yam San

    2014-01-01

    Design research has been positioned as an important methodological contribution of the learning sciences. Despite the publication of a handbook on the subject, the practice of design research in education remains an eclectic collection of specific approaches implemented by different researchers and research groups. In this paper, I examine the…

  17. The Flipped Classroom in Systems Analysis & Design: Leveraging Technology to Increase Student Engagement

    Science.gov (United States)

    Saulnier, Bruce M.

    2015-01-01

    Problems associated with the ubiquitous presence of technology on college campuses are discussed and the concept of the flipped classroom is explained. Benefits of using the flipped classroom to offset issues associated with the presence of technology in the classroom are explored. Fink's Integrated Course Design is used to develop a flipped class…

  18. Leveraging Insights from Mainstream Gameplay to Inform STEM Game Design: Great Idea, but What Comes Next?

    Science.gov (United States)

    Biles, Melissa

    2012-01-01

    This response to Leah A. Bricker and Phillip Bell's paper, "GodMode is his video game name", examines their assertion that the social nexus of gaming practices is an important factor to consider for those looking to design STEM video games. I propose that we need to go beyond the investigation into which aspects of games play a role in learning,…

  19. Industrial Sponsor Perspective on Leveraging Capstone Design Projects to Enhance Their Business

    Science.gov (United States)

    Weissbach, Robert S.; Snyder, Joseph W.; Evans, Edward R., Jr.; Carucci, James R., Jr.

    2017-01-01

    Capstone design projects have become commonplace among engineering and engineering technology programs. These projects are valuable tools when assessing students, as they require students to work in teams, communicate effectively, and demonstrate technical competency. The use of industrial sponsors enhances these projects by giving these projects…

  20. MO-FG-BRA-04: Leveraging the Abscopal Effect Via New Design Radiotherapy Biomaterials Loaded with Immune Checkpoint Inhibitors

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Y; Cifter, G; Altundal, Y; Moreau, M; Sajo, E [Univ Massachusetts Lowell, Lowell, MA (United States); Sinha, N [Wentworth Institute of Technology, Boston, MA (United States); Makrigiorgos, G [Dana Farber Cancer Institute, Boston, MA (United States); Harvard Medical School, Boston, MA (United States); Ngwa, W [Univ Massachusetts Lowell, Lowell, MA (United States); Dana Farber Cancer Institute, Boston, MA (United States); Harvard Medical School, Boston, MA (United States)

    2015-06-15

    Purpose: Studies show that stereotactic body radiation therapy (SBRT) of a primary tumor in combination with immune checkpoint inhibitors (ICI) could result in an immune-mediated regression of metastasis outside the radiation field, a phenomenon known as abscopal effect. However, toxicities due to repeated systemic administration of ICI have been shown to be a major obstacle in clinical trials. Towards overcoming these toxicity limitations, we investigate a potential new approach whereby the ICI are administered via sustained in-situ release from radiotherapy (RT) biomaterials (e.g. fiducials) coated with a polymer containing the ICI. Methods: New design RT biomaterials were prepared by coating commercially available spacers/fiducials with a biocompatible polymer (PLGA) film containing fluorescent nanoparticles of size needed to load the ICI. The release of the nanoparticles was investigated in-vitro. Meanwhile, an experimentally determined in-vivo nanoparticle diffusion coefficient was employed in analytic calculations based on Fick’s second law to estimate the time for achieving the concentrations of ICI in the tumor draining lymph node (TDLN) that are needed to engender the abscopal effect during SBRT. The ICI investigated here was anti-CTLA-4 antibody (ipilimumab) at FDA-approved concentrations. Results: Our in-vitro study results showed that RT biomaterials could be designed to achieve burst release of nanoparticles within one day. Meanwhile, our calculations indicate that for a 2 to 4 cm tumor it would take 4–22 days, respectively, following burst release, for the required concentration of ICI nanoparticles to accumulate in the TDLN during SBRT. Conclusion: Current investigations combining RT and immunotherapy involve repeated intravenous administration of ICI leading to significant systemic toxicities. Our preliminary results highlight a potential new approach for sustained in-situ release of the ICI from new design RT biomaterials. These results
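
The time scale in such estimates follows from Fick's second law: for three-dimensional diffusion, the characteristic transport time over a distance L scales as t ≈ L²/6D. A back-of-the-envelope sketch (the diffusion coefficient and distances below are assumed for illustration, not the experimentally determined values used in the abstract):

```python
# Characteristic 3-D diffusion time t ~ L^2 / (6 D) over a distance L
D = 5.0e-7                       # cm^2/s, assumed effective diffusion coefficient
seconds_per_day = 86400.0

for L_cm in (1.0, 2.0):          # illustrative tumor-to-lymph-node distances
    t_days = L_cm**2 / (6.0 * D) / seconds_per_day
    print(f"L = {L_cm:.0f} cm -> ~{t_days:.0f} days")
```

The quadratic dependence on L is why the estimated accumulation time spans days to weeks as tumor size grows.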

  1. Leverage Aversion and Risk Parity

    DEFF Research Database (Denmark)

    Asness, Clifford; Frazzini, Andrea; Heje Pedersen, Lasse

    2012-01-01

    The authors show that leverage aversion changes the predictions of modern portfolio theory: Safer assets must offer higher risk-adjusted returns than riskier assets. Consuming the high risk-adjusted returns of safer assets requires leverage, creating an opportunity for investors with the ability...

  2. Book-to-Market Equity, Financial Leverage, and the Cross-Section of Stock Returns

    OpenAIRE

    Iulian Obreja

    2013-01-01

    I propose a new dynamic model of the firm that links operating leverage to both value premium and book-leverage premium in stock returns. Value firms are low-productivity firms with either high operating leverage or high financial leverage. Firms with high operating leverage maintain low book leverage ratios. When operating leverage is economically significant, both value firms and low book-leverage firms can have high equity risk premiums. In particular, value premium becomes positive while ...

  3. Financial intermediary leverage and value at risk

    OpenAIRE

    Adrian,Tobias; Shin, Hyun Song

    2008-01-01

    We study a contracting model for the determination of leverage and balance sheet size for financial intermediaries that fund their activities through collateralized borrowing. The model gives rise to two features: First, leverage is procyclical in the sense that leverage is high when the balance sheet is large. Second, leverage and balance sheet size are both determined by the riskiness of assets. For U.S. investment banks, we find empirical support for both features of our model, that is, le...
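
The procyclicality described here falls out of a Value-at-Risk constraint directly: if equity must cover k standard deviations of asset value, target leverage is 1/(k·σ), so measured risk falling (as it does when balance sheets expand in booms) mechanically raises leverage. A stylized illustration with invented parameter values:

```python
# VaR-targeting intermediary: equity E = k * sigma * assets A,
# so target leverage  L = A / E = 1 / (k * sigma)
k = 2.33                          # ~99% one-sided normal quantile
for sigma in (0.08, 0.04, 0.02):  # falling measured asset volatility
    leverage = 1.0 / (k * sigma)
    print(f"sigma = {sigma:.2f} -> leverage ~ {leverage:.1f}x")
```

Halving measured volatility doubles target leverage, which is the "leverage is high when the balance sheet is large" feature in the abstract.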

  4. Equitable Leadership on the Ground: Converging on High-Leverage Practices

    Science.gov (United States)

    Galloway, Mollie K.; Ishimaru, Ann M.

    2017-01-01

    What would leadership standards look like if developed through a lens and language of equity? We engaged with a group of 40 researchers, practitioners, and community leaders recognized as having expertise on equity in education to address this question. Using a Delphi technique, an approach designed to elicit expert feedback and measure…

  5. Can Studying Adolescents' Thinking Amplify High-Leverage Social Studies Teaching Practice? Challenges of Synthesizing Pedagogies of Investigation and Enactment in School-Institutional Contexts

    Science.gov (United States)

    Meuwissen, Kevin W.; Thomas, Andrew L.

    2016-01-01

    The notion that teacher education should emphasize high-leverage practice, which is research based, represents the complexity of the subject matter, bolsters teachers' understanding of student learning, is adaptable to different curricular circumstances, and can be mastered with regular use, has traction in scholarship. Nevertheless, how teacher…

  6. High Leverage Technologies for In-Space Assembly of Complex Structures

    Science.gov (United States)

    Hamill, Doris; Bowman, Lynn M.; Belvin, W. Keith; Gilman, David A.

    2016-01-01

    In-space assembly (ISA), the ability to build structures in space, has the potential to enable or support a wide range of advanced mission capabilities. Many different individual assembly technologies would be needed in different combinations to serve many mission concepts. The many-to-many relationship between mission needs and technologies makes it difficult to determine exactly which specific technologies should receive priority for development and demonstration. Furthermore, because enabling technologies are still immature, no realistic, near-term design reference mission has been described that would form the basis for flowing down requirements for such development and demonstration. This broad applicability without a single, well-articulated mission makes it difficult to advance the technology all the way to flight readiness. This paper reports on a study that prioritized individual technologies across a broad field of possible missions to determine priority for future technology investment.
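
The many-to-many prioritization the study describes reduces to a weighted matrix product: score each technology's relevance to each mission concept, weight the missions, and rank. A toy sketch (the technology names, relevance scores, and weights are invented for illustration):

```python
import numpy as np

techs = ["robotic joining", "modular interfaces", "on-orbit metrology"]  # hypothetical
relevance = np.array([   # rows: mission concepts, cols: technologies (0-3 scale)
    [3, 1, 2],           # mission A
    [2, 3, 1],           # mission B
    [1, 3, 3],           # mission C
])
mission_weight = np.array([0.5, 0.3, 0.2])   # relative mission importance

priority = mission_weight @ relevance        # weighted relevance per technology
for name, score in sorted(zip(techs, priority), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```

Ranking on the weighted sums rather than any single design reference mission is what lets investment priorities survive the absence of a well-articulated mission.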

  7. Analog circuit design designing high performance amplifiers

    CERN Document Server

    Feucht, Dennis

    2010-01-01

    The third volume Designing High Performance Amplifiers applies the concepts from the first two volumes. It is an advanced treatment of amplifier design/analysis emphasizing both wideband and precision amplification.

  8. Dynamic leverage asset pricing

    OpenAIRE

    Adrian,Tobias; Moench, Emanuel; Shin, Hyun Song

    2013-01-01

    We investigate intermediary asset pricing theories empirically and find strong support for models that have intermediary leverage as the relevant state variable. A parsimonious model that uses detrended dealer leverage as a price-of-risk variable, and innovations to dealer leverage as a pricing factor, is shown to perform well in time series and cross-sectional tests of a wide variety of equity and bond portfolios. The model outperforms alternative specifications of intermediary pricing model...

  9. Leverage points for sustainability transformation.

    Science.gov (United States)

    Abson, David J; Fischer, Joern; Leventon, Julia; Newig, Jens; Schomerus, Thomas; Vilsmaier, Ulli; von Wehrden, Henrik; Abernethy, Paivi; Ives, Christopher D; Jager, Nicolas W; Lang, Daniel J

    2017-02-01

    Despite substantial focus on sustainability issues in both science and politics, humanity remains on largely unsustainable development trajectories. Partly, this is due to the failure of sustainability science to engage with the root causes of unsustainability. Drawing on ideas by Donella Meadows, we argue that many sustainability interventions target highly tangible, but essentially weak, leverage points (i.e. using interventions that are easy, but have limited potential for transformational change). Thus, there is an urgent need to focus on less obvious but potentially far more powerful areas of intervention. We propose a research agenda inspired by systems thinking that focuses on transformational 'sustainability interventions', centred on three realms of leverage: reconnecting people to nature, restructuring institutions and rethinking how knowledge is created and used in pursuit of sustainability. The notion of leverage points has the potential to act as a boundary object for genuinely transformational sustainability science.

  10. A User-Centered Framework for Deriving A Conceptual Design From User Experiences: Leveraging Personas and Patterns to Create Usable Designs

    Science.gov (United States)

    Javahery, Homa; Deichman, Alexander; Seffah, Ahmed; Taleb, Mohamed

    Patterns are a design tool to capture best practices, tackling problems that occur in different contexts. A user interface (UI) design pattern spans several levels of design abstraction ranging from high-level navigation to low-level idioms detailing a screen layout. One challenge is to combine a set of patterns to create a conceptual design that reflects user experiences. In this chapter, we detail a user-centered design (UCD) framework that exploits the novel idea of using personas and patterns together. Personas are used initially to collect and model user experiences. UI patterns are selected based on persona specifications; these patterns are then used as building blocks for constructing conceptual designs. Through the use of a case study, we illustrate how personas and patterns can act as complementary techniques in narrowing the gap between two major steps in UCD: capturing users and their experiences, and building an early design based on that information. As a result of lessons learned from the study and by refining our framework, we define a more systematic process called UX-P (User Experiences to Pattern), with a supporting tool. The process introduces intermediate analytical steps and supports designers in creating usable designs.

  11. Seven "Chilis": Making Visible the Complexities in Leveraging Cultural Repertories of Practice in a Designed Teaching and Learning Environment

    Science.gov (United States)

    DiGiacomo, Daniela Kruel; Gutiérrez, Kris D.

    2017-01-01

    Drawing upon four years of research within a social design experiment, we focus on how teacher learning can be supported in designed environments that are organized around robust views of learning, culture, and equity. We illustrate both the possibility and difficulty of helping teachers disrupt the default teaching scripts that privilege…

  12. Risks of Leveraged Products

    NARCIS (Netherlands)

    A. Di Cesare (Antonio)

    2012-01-01

    Leveraged investments have become a fundamental feature of modern economies. The new financial products allow people to take greater-than-usual exposures to risk factors. This thesis analyzes several different aspects of the risks involved in some frequently used leveraged products:

  13. Responsive design high performance

    CERN Document Server

    Els, Dewald

    2015-01-01

    This book is ideal for developers who have experience in developing websites or possess minor knowledge of how responsive websites work. No experience of high-level website development or performance tweaking is required.

  14. A Proxy Design to Leverage the Interconnection of CoAP Wireless Sensor Networks with Web Applications

    Science.gov (United States)

    Ludovici, Alessandro; Calveras, Anna

    2015-01-01

    In this paper, we present the design of a Constrained Application Protocol (CoAP) proxy able to interconnect Web applications based on Hypertext Transfer Protocol (HTTP) and WebSocket with CoAP based Wireless Sensor Networks. Sensor networks are commonly used to monitor and control physical objects or environments. Smart Cities represent applications of such a nature. Wireless Sensor Networks gather data from their surroundings and send them to a remote application. This data flow may be short or long lived. The traditional HTTP long-polling used by Web applications may not be adequate in long-term communications. To overcome this problem, we include the WebSocket protocol in the design of the CoAP proxy. We evaluate the performance of the CoAP proxy in terms of latency and memory consumption. The tests consider long and short-lived communications. In both cases, we evaluate the performance obtained by the CoAP proxy according to the use of WebSocket and HTTP long-polling. PMID:25585107

  15. A proxy design to leverage the interconnection of CoAP Wireless Sensor Networks with Web applications.

    Science.gov (United States)

    Ludovici, Alessandro; Calveras, Anna

    2015-01-09

    In this paper, we present the design of a Constrained Application Protocol (CoAP) proxy able to interconnect Web applications based on Hypertext Transfer Protocol (HTTP) and WebSocket with CoAP based Wireless Sensor Networks. Sensor networks are commonly used to monitor and control physical objects or environments. Smart Cities represent applications of such a nature. Wireless Sensor Networks gather data from their surroundings and send them to a remote application. This data flow may be short or long lived. The traditional HTTP long-polling used by Web applications may not be adequate in long-term communications. To overcome this problem, we include the WebSocket protocol in the design of the CoAP proxy. We evaluate the performance of the CoAP proxy in terms of latency and memory consumption. The tests consider long and short-lived communications. In both cases, we evaluate the performance obtained by the CoAP proxy according to the use of WebSocket and HTTP long-polling.

  16. The challenge of causal inference in gene-environment interaction research: leveraging research designs from the social sciences.

    Science.gov (United States)

    Fletcher, Jason M; Conley, Dalton

    2013-10-01

    The integration of genetics and the social sciences will lead to a more complex understanding of the articulation between social and biological processes, although the empirical difficulties inherent in this integration are large. One key challenge is the implications of moving "outside the lab" and away from the experimental tools available for research with model organisms. Social science research methods used to examine human behavior in nonexperimental, real-world settings to date have not been fully taken advantage of during this disciplinary integration, especially in the form of gene-environment interaction research. This article outlines and provides examples of several prominent research designs that should be used in gene-environment research and highlights a key benefit to geneticists of working with social scientists.

  17. Designing High-Trust Organizations

    DEFF Research Database (Denmark)

    Jagd, Søren

    The specific problem considered in this paper is what key issues managers should consider when designing high-trust organizations, a design problem still not properly explored. This paper intends to take the first step in filling this lacuna. In the paper, I first present… the existing management and research literature on building high-trust organizations. Then I explore Alan Fox’s (1974) analysis of low-trust vs. high-trust dynamics which, I argue, may serve as a theoretically stronger basis for understanding the issues management has to consider when designing high-trust…

  18. High mobility and quantum well transistors design and TCAD simulation

    CERN Document Server

    Hellings, Geert

    2013-01-01

    For many decades, the semiconductor industry has miniaturized transistors, delivering increased computing power to consumers at decreased cost. However, mere transistor downsizing does no longer provide the same improvements. One interesting option to further improve transistor characteristics is to use high mobility materials such as germanium and III-V materials. However, transistors have to be redesigned in order to fully benefit from these alternative materials. High Mobility and Quantum Well Transistors: Design and TCAD Simulation investigates planar bulk Germanium pFET technology in chapters 2-4, focusing on both the fabrication of such a technology and on the process and electrical TCAD simulation. Furthermore, this book shows that Quantum Well based transistors can leverage the benefits of these alternative materials, since they confine the charge carriers to the high-mobility material using a heterostructure. The design and fabrication of one particular transistor structure - the SiGe Implant-Free Qu...

  19. Solving the Present Crisis and Managing the Leverage Cycle

    OpenAIRE

    John Geanakoplos

    2010-01-01

    The present crisis is the bottom of a recurring problem that I call the leverage cycle, in which leverage gradually rises too high and then suddenly falls much too low. The government must manage the leverage cycle in normal times by monitoring and regulating leverage to keep it from getting too high. In the crisis stage the government must stem the scary bad news that brought on the crisis, which often will entail coordinated write-downs of principal; it must restore sane leverage by going aroun...

  20. Binding leverage as a molecular basis for allosteric regulation.

    Science.gov (United States)

    Mitternacht, Simon; Berezovsky, Igor N

    2011-09-01

    Allosteric regulation involves conformational transitions or fluctuations between a few closely related states, caused by the binding of effector molecules. We introduce a quantity called binding leverage that measures the ability of a binding site to couple to the intrinsic motions of a protein. We use Monte Carlo simulations to generate potential binding sites and either normal modes or pairs of crystal structures to describe relevant motions. We analyze single catalytic domains and multimeric allosteric enzymes with complex regulation. For the majority of the analyzed proteins, we find that both catalytic and allosteric sites have high binding leverage. Furthermore, our analysis of the catabolite activator protein, which is allosteric without conformational change, shows that its regulation involves other types of motion than those modulated at sites with high binding leverage. Our results point to the importance of incorporating dynamic information when predicting functional sites. Because it is possible to calculate binding leverage from a single crystal structure, it can be used for characterizing proteins of unknown function and predicting latent allosteric sites in any protein, with implications for drug design.

  1. Systemic Risk, Bank's Capital Buffer, and Leverage

    OpenAIRE

    Wibowo, Buddi

    2017-01-01

    This paper measures individual banks' impact on banking systemic risk and examines the effect of an individual bank's capital buffer and leverage on its systemic risk impact in Indonesia during 2010-2014. Using Merton's distance-to-default to measure systemic risk, the study shows a significant negative relationship between a bank's capital buffer and systemic risk. A high capital buffer tends to lower a bank's impact on systemic risk. A bank's leverage level also influences its contribution to sy...

  2. Leveraging the geospatial advantage

    Science.gov (United States)

    Ben Butler; Andrew Bailey

    2013-01-01

    The Wildland Fire Decision Support System (WFDSS) web-based application leverages geospatial data to inform strategic decisions on wildland fires. A specialized data team, working within the Wildland Fire Management Research Development and Application group (WFM RD&A), assembles authoritative national-level data sets defining values to be protected. The use of...

  3. Hedge Ratios for short and leveraged ETFs

    Directory of Open Access Journals (Sweden)

    Leo Schubert

    2011-06-01

    Exchange Traded Funds (ETFs) exist for stock, bond and commodity markets. In most cases the underlying of an ETF is an index. Fund management today uses both active and passive approaches to construct a portfolio. ETFs can be used for passive portfolio management; there, ETFs with positive leverage factors are preferred. Within active portfolio management, ETFs with negative leverage factors can also be applied to hedge or cross-hedge a portfolio. These hedging possibilities are analyzed in this paper. Short ETFs exist with different leverage factors. In Europe, the leverage factors 1 (e.g. ShortDAX ETF) and 2 (e.g. DJ STOXX 600 Double Short) are offered, while in the financial markets of the United States factors from 1 to 4 can be found. To investigate the effect of the different leverage factors and other parameters, Monte Carlo simulation was used. The results show, e.g., that higher leverage factors achieve higher profits as well as losses. When a bearish market is assumed, minimizing the variance of the hedge does not seem to yield better hedging results, due to the very skewed return distribution of the hedge. The risk measure target-shortfall-probability confirms the use of the standard hedge weightings, which depend only on the leverage factor. This characteristic remains when a portfolio has to be hedged instead of the underlying index of the short ETF. For portfolios that have a low correlation with the index return, high leverage factors should not be used for hedging, due to the higher volatility and target-shortfall-probability.
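As a rough illustration of the Monte Carlo setup the abstract describes, the sketch below compares an unhedged index position with one hedged by a daily-rebalanced short ETF, using the standard hedge weighting of 1/beta units of short ETF per unit of index. The return model and all parameter values are our own simplifying assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def hedge_pnl(beta=2.0, n_paths=20000, n_days=60, mu=0.0, sigma=0.01):
    """Monte Carlo sketch of hedging an index position with a short ETF.

    Illustrative assumptions (not the paper's exact setup): i.i.d. normal
    daily index returns; the short ETF delivers -beta times the daily index
    return, compounded daily; the hedge is static over the horizon.
    """
    r = rng.normal(mu, sigma, size=(n_paths, n_days))
    index = np.prod(1.0 + r, axis=1)             # terminal index value
    short_etf = np.prod(1.0 - beta * r, axis=1)  # terminal short-ETF value
    h = 1.0 / beta                               # standard hedge weight
    pnl_unhedged = index - 1.0
    pnl_hedged = (index + h * short_etf) - (1.0 + h)
    return pnl_unhedged, pnl_hedged

unhedged, hedged = hedge_pnl()
# The 1/beta weight cancels the first-order index exposure, so the hedged
# P&L spread is far narrower; what remains is the second-order compounding
# residual of the daily-rebalanced short ETF.
```

A higher leverage factor shrinks the hedge weight but leaves a larger compounding residual, which is the intuition behind the abstract's caution about high leverage factors.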

  4. Leveraging the water-energy-food nexus for a sustainability transition: Institutional and policy design choices in a fragmented world (Invited)

    Science.gov (United States)

    Aggarwal, R.

    2013-12-01

    pricing and rationing policy on groundwater withdrawals and type of crops grown. Finally, we examine several emerging examples of innovative policies and institutions that have leveraged the synergies among sectors. Although these examples do not necessarily provide optimal solutions, these provide some clues as to how decision-making within individual sectors can be influenced through institutional and policy design to transition towards more sustainable pathways in a second best world. We conclude by exploring what lessons these cases might hold for navigating these tradeoffs in other contexts.

  5. Protein leverage and energy intake.

    Science.gov (United States)

    Gosby, A K; Conigrave, A D; Raubenheimer, D; Simpson, S J

    2014-03-01

    Increased energy intakes are contributing to overweight and obesity. Growing evidence supports the role of protein appetite in driving excess intake when dietary protein is diluted (the protein leverage hypothesis). Understanding the interactions between dietary macronutrient balance and nutrient-specific appetite systems will be required for designing dietary interventions that work with, rather than against, basic regulatory physiology. Data were collected from 38 published experimental trials measuring ad libitum intake in subjects confined to menus differing in macronutrient composition. Collectively, these trials encompassed considerable variation in percent protein (spanning 8-54% of total energy), carbohydrate (1.6-72%) and fat (11-66%). The data provide an opportunity to describe the individual and interactive effects of dietary protein, carbohydrate and fat on the control of total energy intake. Percent dietary protein was negatively associated with total energy intake (F = 6.9, P …), demonstrating protein leverage in lean, overweight and obese humans. A better appreciation of the targets and regulatory priorities for protein, carbohydrate and fat intake will inform the design of effective and health-promoting weight loss diets, food labelling policies, food production systems and regulatory frameworks. © 2013 The Authors. Obesity Reviews © 2013 International Association for the Study of Obesity.
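The protein leverage hypothesis has a simple arithmetic core: if appetite defends an absolute protein target, diluting the protein fraction of the diet drives total energy intake up in inverse proportion. A minimal sketch, with an illustrative protein target that is not taken from the paper:

```python
def predicted_intake_kj(protein_target_kj=2000.0, protein_fraction=0.15):
    """Strict protein-leverage sketch: if appetite defends a fixed absolute
    protein intake, total energy intake equals the protein target divided by
    the fraction of dietary energy coming from protein. The 2000 kJ target
    and the fractions used below are illustrative, not the paper's values."""
    return protein_target_kj / protein_fraction

# Diluting dietary protein from 20% to 10% of energy doubles predicted intake.
print(predicted_intake_kj(2000.0, 0.20))  # 10000.0 kJ
print(predicted_intake_kj(2000.0, 0.10))  # 20000.0 kJ
```

Real intakes sit between this strict-leverage extreme and complete insensitivity to protein dilution, which is what the meta-analysis quantifies.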

  6. Leveraging Formal Methods and Fuzzing to Verify Security and Reliability Properties of Large-Scale High-Consequence Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ruthruff, Joseph. R. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Armstrong, Robert C. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Davis, Benjamin Garry [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Mayo, Jackson R. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Punnoose, Ratish J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2012-09-01

    Formal methods describe a class of system analysis techniques that seek to prove specific properties about analyzed designs, or locate flaws compromising those properties. As an analysis capability, these techniques are the subject of increased interest from both internal and external customers of Sandia National Laboratories. Given this lab's other areas of expertise, Sandia is uniquely positioned to advance the state of the art with respect to several research and application areas within formal methods. This research project was a one-year effort funded by Sandia's CyberSecurity S&T Investment Area in its Laboratory Directed Research & Development program to investigate the opportunities for formal methods to impact Sandia's present mission areas, more fully understand the needs of the research community in the area of formal methods and where Sandia can contribute, and identify, from those potential research paths, the ones that would best advance the mission-area interests of Sandia. The accomplishments from this project reinforce the utility of formal methods in Sandia, particularly in areas relevant to Cyber Security, and set the stage for continued Sandia investments to ensure this capability is utilized and advanced within this laboratory to serve the national interest.

  7. Leveraging the Semantic Web for Adaptive Education

    NARCIS (Netherlands)

    Kravcik, Milos; Gasevic, Dragan

    2007-01-01

    Kravčík, M. & Gašević, D. (2007). Leveraging the Semantic Web for Adaptive Education. Journal of Interactive Media in Education (Adaptation and IMS Learning Design. Special Issue, ed. Daniel Burgos), 2007/06. [jime.open.ac.uk/2007/06]

  8. Stochastic volatility and stochastic leverage

    DEFF Research Database (Denmark)

    Veraart, Almut; Veraart, Luitgard A. M.

    This paper proposes the new concept of stochastic leverage in stochastic volatility models. Stochastic leverage refers to a stochastic process which replaces the classical constant correlation parameter between the asset return and the stochastic volatility process. We provide a systematic...... treatment of stochastic leverage and propose to model the stochastic leverage effect explicitly, e.g. by means of a linear transformation of a Jacobi process. Such models are both analytically tractable and allow for a direct economic interpretation. In particular, we propose two new stochastic volatility...... models which allow for a stochastic leverage effect: the generalised Heston model and the generalised Barndorff-Nielsen & Shephard model. We investigate the impact of a stochastic leverage effect in the risk neutral world by focusing on implied volatilities generated by option prices derived from our new...
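The abstract proposes modelling the leverage effect as a linear transformation of a Jacobi process. A minimal Euler-scheme sketch under that reading, with all parameter values chosen for illustration rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_leverage_path(y0=0.25, kappa=2.0, theta=0.2, sigma=0.5,
                             T=1.0, n_steps=2000):
    """Euler sketch of a stochastic leverage (correlation) path.

    Simulate a Jacobi process Y on [0, 1],
        dY = kappa*(theta - Y) dt + sigma*sqrt(Y*(1 - Y)) dW,
    then map it linearly to a correlation rho = 2*Y - 1 in [-1, 1],
    replacing the classical constant correlation parameter.
    """
    dt = T / n_steps
    y = np.empty(n_steps + 1)
    y[0] = y0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        y_next = (y[i] + kappa * (theta - y[i]) * dt
                  + sigma * np.sqrt(max(y[i] * (1.0 - y[i]), 0.0)) * dw)
        y[i + 1] = min(max(y_next, 0.0), 1.0)  # keep the Euler step in [0, 1]
    return 2.0 * y - 1.0

rho = stochastic_leverage_path()  # one correlation path, bounded in [-1, 1]
```

The square-root diffusion term vanishes at the boundaries, which is what keeps the process (and hence the correlation) in its admissible range; the clipping only guards the discretised step.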

  9. Can Leverage Constraints Help Investors?

    OpenAIRE

    Heimer, Rawley

    2014-01-01

    This paper provides causal evidence that leverage constraints can reduce the underperformance of individual investors. In accordance with Dodd-Frank, the CFTC was given regulatory authority over the retail market for foreign exchange and capped the maximum permissible leverage available to U.S. traders. By comparing U.S. traders on the same brokerages with their unregulated European counterparts, I show that the leverage constraint reduces average per-trade losses even after adjusting for ris...

  10. High Integrity Can Design Interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Shaber, E.L.

    1998-08-01

    The National Spent Nuclear Fuel Program is chartered with facilitating the disposition of DOE-owned spent nuclear fuel to allow disposal at a geologic repository. This is done through coordination with the repository program and by assisting DOE Site owners of SNF with needed information, standardized requirements, packaging approaches, etc. The High Integrity Can (HIC) will be manufactured to provide a substitute or barrier enhancement for normal fuel geometry and cladding. The can would be nested inside the DOE standardized canister which is designed to interface with the repository waste package. The HIC approach may provide the following benefits over typical canning approaches for DOE SNF. (a) It allows ready calculation and management of criticality issues for miscellaneous. (b) It segments and further isolates damaged or otherwise problem materials from normal SNF in the repository package. (c) It provides a very long term corrosion barrier. (d) It provides an extra internal pressure barrier for particulates, gaseous fission products, hydrogen, and water vapor. (e) It delays any potential release of fission products to the repository environment. (f) It maintains an additional level of fuel geometry control during design basis accidents, rock-fall, and seismic events. (g) When seal welded, it could provide the additional containment required for shipments involving plutonium content in excess of 20 Ci. (10 CFR 71.63.b) if integrated with an appropriate cask design. Long term corrosion protection is central to the HIC concept. The material selected for the HIC (Hastelloy C-22) has undergone extensive testing for repository service. The most severe theoretical interactions between iron, repository water containing chlorides and other repository construction materials have been tested. These expected chemical species have not been shown capable of corroding the selected HIC material. 
Therefore, the HIC should provide a significant barrier to DOE SNF dispersal.

  11. Leveraging organisational cultural capital

    Directory of Open Access Journals (Sweden)

    R Scheel

    2007-01-01

    Organisational culture discourse mandates a linear approach of diagnosis, measurement and gap analysis as standard practice in most culture change initiatives. A problem-solving framework geared toward "fixing" and/or realigning an organisation's culture is therefore usually prescribed. The traditional problem-solving model seeks to identify gaps between current and desired organisational cultural states, inhibiting the discovery of an organisation's unique values and strengths, namely its cultural capital. In pursuit of discovering and leveraging organisational cultural capital, a descriptive case study is used to show how an Appreciative Inquiry process can rejuvenate the spirit of an organisation as a system-wide inquiry mobilises a workforce toward a shared vision.

  12. Leveraging External Sources of Innovation

    DEFF Research Database (Denmark)

    West, Joel; Bogers, Marcel

    2014-01-01

    suggests several gaps in prior research. One is a tendency to ignore the importance of business models, despite their central role in distinguishing open innovation from earlier research on interorganizational collaboration in innovation. Another gap is a tendency in open innovation to use “innovation......This paper reviews research on open innovation that considers how and why firms commercialize external sources of innovations. It examines both the “outside-in” and “coupled” modes of open innovation. From an analysis of prior research on how firms leverage external sources of innovation......, it suggests a four-phase model in which a linear process—(1) obtaining, (2) integrating, and (3) commercializing external innovations—is combined with (4) interaction between the firm and its collaborators. This model is used to classify papers taken from the top 25 innovation journals, complemented by highly...

  13. High Efficiency Hall Thruster Discharge Power Converter Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Busek leveraged previous, internally sponsored, high power, Hall thruster discharge converter development which allowed it to design, build, and test new printed...

  14. 17 CFR 31.10 - Repurchase and resale of leverage contracts by leverage transaction merchants.

    Science.gov (United States)

    2010-04-01

    ... leverage contracts by leverage transaction merchants. 31.10 Section 31.10 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION LEVERAGE TRANSACTIONS § 31.10 Repurchase and resale of leverage contracts by leverage transaction merchants. (a) No leverage transaction merchant shall offer to...

  15. Macroeconomic Dynamics of Assets, Leverage and Trust

    Science.gov (United States)

    Rozendaal, Jeroen C.; Malevergne, Yannick; Sornette, Didier

    A macroeconomic model based on the economic variables (i) assets, (ii) leverage (defined as debt over assets) and (iii) trust (defined as the maximum sustainable leverage) is proposed to investigate the role of credit in the dynamics of economic growth, and how credit may be associated with both economic performance and confidence. Our first notable finding is the mechanism of reward/penalty associated with patience, as quantified by the return on assets. In regular economies where the EBITA/Assets ratio is larger than the cost of debt, starting with a trust higher than leverage results in the highest long-term return on assets (which can be seen as a proxy for economic growth). Therefore, patient economies that first build trust and then increase leverage are positively rewarded. Our second main finding concerns a recommendation for the reaction of a central bank to an external shock that negatively affects economic growth. We find that late policy intervention in the model economy results in the highest long-term return on assets. However, this comes at the cost of suffering longer from the crisis until the intervention occurs. The phenomenon that late intervention is most effective to attain a high long-term return on assets can be ascribed to the fact that postponing intervention allows trust to increase first, and it is most effective to intervene when trust is high. These results are derived from two fundamental assumptions underlying our model: (a) trust tends to increase when it is above leverage; (b) economic agents learn optimally to adjust debt for a given level of trust and amount of assets. Using a Markov Switching Model for the EBITA/Assets ratio, we have successfully calibrated our model to the empirical data of the return on equity of the EURO STOXX 50 for the time period 2000-2013. We find that dynamics of leverage and trust can be highly nonmonotonic with curved trajectories, as a result of the nonlinear coupling between the variables. This
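As a toy illustration of the model's two fundamental assumptions (trust rises while it exceeds leverage; agents adjust debt toward the trust ceiling), the sketch below uses our own simple functional forms and parameter values, not the paper's calibrated Markov Switching Model:

```python
def simulate_economy(n_steps=100, roa=0.08, cost_of_debt=0.05,
                     assets0=1.0, leverage0=0.2, trust0=0.5,
                     adj=0.1, trust_gain=0.05, trust_loss=0.2):
    """Toy discretisation of the assets/leverage/trust mechanism.

    All functional forms and parameters are illustrative assumptions:
    a 'regular economy' where the return on assets exceeds the cost of
    debt, leverage chasing the trust ceiling, and trust drifting up while
    it stays above leverage.
    """
    A, L, T = assets0, leverage0, trust0
    for _ in range(n_steps):
        debt = L * A
        A = A + roa * A - cost_of_debt * debt   # earnings minus interest
        L = L + adj * (T - L)                   # (b) debt adjusts toward trust
        if T > L:                               # (a) trust grows above leverage,
            T = T + trust_gain * (T - L)        #     erodes below it
        else:
            T = max(T - trust_loss * (L - T), 0.0)
    return A, L, T

assets, leverage, trust = simulate_economy()
# A 'patient' start (trust above leverage) lets both trust and leverage
# rise together while assets compound, echoing the paper's first finding.
```

Swapping the initial condition so leverage starts above trust makes trust erode and leverage contract, which is the penalty side of the reward/penalty mechanism.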

  16. 12 CFR 567.8 - Leverage ratio.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Leverage ratio. 567.8 Section 567.8 Banks and... § 567.8 Leverage ratio. (a) The minimum leverage capital requirement for a savings association assigned... associations not meeting the conditions set forth in paragraph (a) of this section, the minimum leverage...

  17. Leveraging the power of high performance computing for next generation sequencing data analysis: tricks and twists from a high throughput exome workflow.

    Directory of Open Access Journals (Sweden)

    Amit Kawalia

    Next generation sequencing (NGS) has been a great success and is now a standard method of research in the life sciences. With this technology, dozens of whole genomes or hundreds of exomes can be sequenced in rather short time, producing huge amounts of data. Complex bioinformatics analyses are required to turn these data into scientific findings. In order to run these analyses fast, automated workflows implemented on high performance computers are state of the art. While providing sufficient compute power and storage to meet the NGS data challenge, high performance computing (HPC) systems require special care when utilized for high throughput processing. This is especially true if the HPC system is shared by different users. Here, stability, robustness and maintainability are as important for automated workflows as speed and throughput. To achieve all of these aims, dedicated solutions have to be developed. In this paper, we present the tricks and twists that we utilized in the implementation of our exome data processing workflow. It may serve as a guideline for other high throughput data analysis projects using a similar infrastructure. The code implementing our solutions is provided in the supporting information files.

  18. Leveraging the Power of High Performance Computing for Next Generation Sequencing Data Analysis: Tricks and Twists from a High Throughput Exome Workflow

    Science.gov (United States)

    Wonczak, Stephan; Thiele, Holger; Nieroda, Lech; Jabbari, Kamel; Borowski, Stefan; Sinha, Vishal; Gunia, Wilfried; Lang, Ulrich; Achter, Viktor; Nürnberg, Peter

    2015-01-01

    Next generation sequencing (NGS) has been a great success and is now a standard method of research in the life sciences. With this technology, dozens of whole genomes or hundreds of exomes can be sequenced in rather short time, producing huge amounts of data. Complex bioinformatics analyses are required to turn these data into scientific findings. In order to run these analyses fast, automated workflows implemented on high performance computers are state of the art. While providing sufficient compute power and storage to meet the NGS data challenge, high performance computing (HPC) systems require special care when utilized for high throughput processing. This is especially true if the HPC system is shared by different users. Here, stability, robustness and maintainability are as important for automated workflows as speed and throughput. To achieve all of these aims, dedicated solutions have to be developed. In this paper, we present the tricks and twists that we utilized in the implementation of our exome data processing workflow. It may serve as a guideline for other high throughput data analysis projects using a similar infrastructure. The code implementing our solutions is provided in the supporting information files. PMID:25942438

  19. Leveraging the power of high performance computing for next generation sequencing data analysis: tricks and twists from a high throughput exome workflow.

    Science.gov (United States)

    Kawalia, Amit; Motameny, Susanne; Wonczak, Stephan; Thiele, Holger; Nieroda, Lech; Jabbari, Kamel; Borowski, Stefan; Sinha, Vishal; Gunia, Wilfried; Lang, Ulrich; Achter, Viktor; Nürnberg, Peter

    2015-01-01

    Next generation sequencing (NGS) has been a great success and is now a standard method of research in the life sciences. With this technology, dozens of whole genomes or hundreds of exomes can be sequenced in rather short time, producing huge amounts of data. Complex bioinformatics analyses are required to turn these data into scientific findings. In order to run these analyses fast, automated workflows implemented on high performance computers are state of the art. While providing sufficient compute power and storage to meet the NGS data challenge, high performance computing (HPC) systems require special care when utilized for high throughput processing. This is especially true if the HPC system is shared by different users. Here, stability, robustness and maintainability are as important for automated workflows as speed and throughput. To achieve all of these aims, dedicated solutions have to be developed. In this paper, we present the tricks and twists that we utilized in the implementation of our exome data processing workflow. It may serve as a guideline for other high throughput data analysis projects using a similar infrastructure. The code implementing our solutions is provided in the supporting information files.

  20. Leveraged exchange-traded funds price dynamics and options valuation

    CERN Document Server

    Leung, Tim

    2016-01-01

    This book provides an analysis, under both discrete-time and continuous-time frameworks, of the price dynamics of leveraged exchange-traded funds (LETFs), with emphasis on the roles of leverage ratio, realized volatility, investment horizon, and tracking errors. This study provides new insights on the risks associated with LETFs. It also leads to the discussion of new risk management concepts, such as admissible leverage ratios and admissible risk horizon, as well as the mathematical and empirical analyses of several trading strategies, including static portfolios, pairs trading, and stop-loss strategies involving ETFs and LETFs. The final part of the book addresses the pricing of options written on LETFs. Since different LETFs are designed to track the same reference index, these funds and their associated options share very similar sources of randomness. The authors provide a no-arbitrage pricing approach that consistently values options on LETFs with different leverage ratios with stochastic volatility and ...
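A minimal discrete-time sketch of the daily compounding such books analyse: an LETF multiplies each daily index return by its leverage ratio, so in a flat but volatile market both leveraged and inverse funds lose value (fees, financing, and tracking error are ignored here):

```python
import numpy as np

def letf_terminal(beta, daily_returns):
    """Terminal value of a daily-rebalanced leveraged ETF that delivers
    beta times each daily index return (fees and financing ignored)."""
    return float(np.prod(1.0 + beta * np.asarray(daily_returns)))

# A flat but choppy index: each up/down pair multiplies to exactly 1.
r = np.tile([0.05, -0.05 / 1.05], 50)

index_val = letf_terminal(1.0, r)    # ~1.0: the index ends flat
letf2_val = letf_terminal(2.0, r)    # < 1.0: compounding drag on the 2x LETF
short_val = letf_terminal(-1.0, r)   # < 1.0: the -1x inverse ETF decays too
```

This realized-volatility drag is why an LETF's horizon return is not simply beta times the index's horizon return, and why leverage ratio, volatility, and investment horizon all enter the option-pricing analysis.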

  1. Design Tech High School: d.tech

    Science.gov (United States)

    EDUCAUSE, 2015

    2015-01-01

    A Bay Area charter high school, d.tech develops "innovation-ready" students by combining content knowledge with the design thinking process while fostering a sense of autonomy and purpose. The academic model is grounded in self-paced learning through a flex schedule, high standards, and design thinking through a four-year design…

  2. Aerodynamic design on high-speed trains

    Science.gov (United States)

    Ding, San-San; Li, Qiang; Tian, Ai-Qin; Du, Jian; Liu, Jia-Li

    2016-04-01

    Compared with the traditional train, the operational speed of the high-speed train has largely improved, and the dynamic environment of the train has changed from one of mechanical domination to one of aerodynamic domination. The aerodynamic problem has become the key technological challenge of high-speed trains and significantly affects the economy, environment, safety, and comfort. In this paper, the relationships among the aerodynamic design principle, aerodynamic performance indexes, and design variables are first studied, and the research methods of train aerodynamics are proposed, including numerical simulation, a reduced-scale test, and a full-scale test. Technological schemes of train aerodynamics involve the optimization design of the streamlined head and the smooth design of the body surface. Optimization design of the streamlined head includes concept design, project design, numerical simulation, and a reduced-scale test. Smooth design of the body surface is mainly used for the key parts, such as the electric-current collecting system, wheel truck compartment, and windshield. The aerodynamic design method established in this paper has been successfully applied to various high-speed trains (CRH380A, CRH380AM, CRH6, CRH2G, and the Standard electric multiple unit (EMU)) that have met expected design objectives. The research results can provide an effective guideline for the aerodynamic design of high-speed trains.

  3. The Evolution of Household Leverage during the Recovery

    OpenAIRE

    Whitaker, Stephan

    2014-01-01

    Recent research has shown that geographic areas that experienced greater household deleveraging during the recession also experienced relatively severe economic contractions and slower recoveries. This analysis explores geographic variations in household debt over the past recession and recovery. It finds that regions that had very high household leverage at the start of the recession have shifted back toward national norms, while the variation of leverage within metro areas has maintained s...

  4. Leveraging the Educational Outreach Efforts of Low-Cost Missions

    Science.gov (United States)

    Fisher, Diane K.; Leon, Nancy J.

    2000-01-01

    A small portion of the budget for every NASA mission must be devoted to education and public outreach. The question is, how can projects best leverage these funds to create a high-quality message and get it disseminated to the largest and most appropriate audience? This paper describes the approach taken by a small educational outreach team for NASA's New Millennium Program (NMP). The team's approach has been twofold: develop a highly desirable suite of products designed to appeal to, as well as enlighten, the target audience; then negotiate relationships with existing, often under-utilized channels for dissemination of these products. Starting with NMP missions as the base of support for these efforts, the team has invited participation by other missions. This approach has resulted in a richer and broader message, and has allowed the continuing development of the audience base.

  5. Leveraging Graduate Education for a More Relevant Future

    Science.gov (United States)

    Davis, Meredith

    2012-01-01

    Arguing that the 21st-century context for design is significantly different from that of the previous century, a set of structural suggestions is posed that can leverage change. Administrative arrangements are questioned, along with the lack of clear differentiation or performance expectations among design degrees. While widespread, confusing and…

  6. High-Rate Receiver Design Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose an initial architectural and preliminary hardware design study for a high-rate receiver capable of decoding modulation suites specified by CCSDS 413.0-G-1...

  7. Pengaruh Leverage Operasi, Leverage Keuangan Dan Leverage Total Terhadap Risiko Sistematis Saham Pada Perusahaan Manufaktur Yang Terdaftar Di Bei Periode Sebelum Dan Sesudah Konvergensi Ifrs

    OpenAIRE

    Pawestri, Septi Ika; Sari, Ratna Candra

    2014-01-01

    This study aims to determine (1) the effect of operating leverage, financial leverage, and total leverage on the systematic risk of stocks, (2) differences in the effect of operating leverage, financial leverage, and total leverage on the systematic risk of stocks, and (3) differences in the level of systematic stock risk for manufacturing companies listed on the Indonesia Stock Exchange (BEI) in the periods before and after IFRS convergence. The data collection technique used in this study was documentation. The analysis techni...

  8. Leveraging on Easy Java Simulation tool and open source computer simulation library to create interactive digital media for mass customization of high school physics curriculum

    CERN Document Server

    Wee, Loo Kang

    2012-01-01

    This paper highlights the diverse possibilities in the rich community of educators from the Conceptual Learning of Science (CoLoS) and Open Source Physics (OSP) movement to engage, enable and empower educators and students to create interactive digital media through computer modeling. This concept revolves around a paradigmatic shift towards participatory learning through immersive computer modeling, as opposed to using technology for information transmission. We aim to engage high school educators to professionally develop themselves by creating and customizing simulations, made possible through Easy Java Simulation (Ejs) and its learning community. Ejs allows educators to be designers of learning environments by modifying the source code of a simulation. Educators can conduct lessons with students using these interactive digital simulations and rapidly enhance a simulation by changing its source code personally. The Ejs toolkit, its library of simulations and growing community-contributed simulation cod...

  9. Operating Leverage over the Business Cycle

    OpenAIRE

    Bhattacharjee, A.; Higson, C.; Holly, S.

    2015-01-01

    Operating leverage describes the extent to which a firm's operating costs are fixed in the short run. The effect of operating leverage is to amplify the impact on profit of a change in revenues; an effect which is further amplified by financial leverage and by asymmetry in the tax system. In this paper we provide empirical estimates of operating leverage at the firm level, using a long panel of data on UK quoted firms. We report sectoral differences in operating leverage around the business c...

  10. 17 CFR 31.22 - Prohibited trading in leverage contracts.

    Science.gov (United States)

    2010-04-01

    ... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Prohibited trading in leverage... LEVERAGE TRANSACTIONS § 31.22 Prohibited trading in leverage contracts. No futures commission merchant or... orders for any leverage contract. ...

  11. High-Average, High-Peak Current Injector Design

    CERN Document Server

    Biedron, S G; Virgo, M

    2005-01-01

    There is increasing interest in high-average-power (>100 kW), μm-range FELs. These machines require high peak current (~1 kA), modest transverse emittance, and beam energies of ~100 MeV. High average currents (~1 A) place additional constraints on the design of the injector. We present a design for an injector intended to produce the required peak currents at the injector, eliminating the need for magnetic compression within the linac. This reduces the potential for beam quality degradation due to coherent synchrotron radiation (CSR) and space-charge effects within magnetic chicanes.

  12. Exploring Customization in Higher Education: An Experiment in Leveraging Computer Spreadsheet Technology to Deliver Highly Individualized Online Instruction to Undergraduate Business Students

    Science.gov (United States)

    Kunzler, Jayson S.

    2012-01-01

    This dissertation describes a research study designed to explore whether customization of online instruction results in improved learning in a college business statistics course. The study involved utilizing computer spreadsheet technology to develop an intelligent tutoring system (ITS) designed to: a) collect and monitor individual real-time…

  13. Using high-throughput sequencing to leverage surveillance of genetic diversity and oseltamivir resistance: a pilot study during the 2009 influenza A(H1N1) pandemic.

    Directory of Open Access Journals (Sweden)

    Juan Téllez-Sosa

    Full Text Available BACKGROUND: Influenza viruses display a high mutation rate and complex evolutionary patterns. Next-generation sequencing (NGS) has been widely used for qualitative and semi-quantitative assessment of genetic diversity in complex biological samples. The "deep sequencing" approach, enabled by the enormous throughput of current NGS platforms, allows the identification of rare genetic viral variants in targeted genetic regions, but is usually limited to a small number of samples. METHODOLOGY AND PRINCIPAL FINDINGS: We designed a proof-of-principle study to test whether redistributing sequencing throughput from a high depth-small sample number towards a low depth-large sample number approach is feasible and contributes to influenza epidemiological surveillance. Using 454-Roche sequencing, we sequenced at a rather low depth a 307 bp amplicon of the neuraminidase gene of the Influenza A(H1N1) pandemic (A(H1N1)pdm) virus from cDNA amplicons pooled in 48 barcoded libraries obtained from nasal swab samples of infected patients (n = 299) taken during the May to November 2009 pandemic period in Mexico. This approach revealed that during the transition from the first (May-July) to the second wave (September-November) of the pandemic, the initial genetic variants were replaced by the N248D mutation in the NA gene, and enabled the establishment of temporal and geographic associations with genetic diversity and the identification of mutations associated with oseltamivir resistance. CONCLUSIONS: NGS sequencing of a short amplicon from the NA gene at low sequencing depth allowed genetic screening of a large number of samples, providing insights into viral genetic diversity dynamics and the identification of genetic variants associated with oseltamivir resistance. Further research is needed to explain the observed replacement of the genetic variants seen during the second wave. As sequencing throughput rises and library multiplexing and automation improves, we foresee that

  14. Using High-Throughput Sequencing to Leverage Surveillance of Genetic Diversity and Oseltamivir Resistance: A Pilot Study during the 2009 Influenza A(H1N1) Pandemic

    Science.gov (United States)

    Téllez-Sosa, Juan; Rodríguez, Mario Henry; Gómez-Barreto, Rosa E.; Valdovinos-Torres, Humberto; Hidalgo, Ana Cecilia; Cruz-Hervert, Pablo; Luna, René Santos; Carrillo-Valenzo, Erik; Ramos, Celso; García-García, Lourdes; Martínez-Barnetche, Jesús

    2013-01-01

    Background Influenza viruses display a high mutation rate and complex evolutionary patterns. Next-generation sequencing (NGS) has been widely used for qualitative and semi-quantitative assessment of genetic diversity in complex biological samples. The “deep sequencing” approach, enabled by the enormous throughput of current NGS platforms, allows the identification of rare genetic viral variants in targeted genetic regions, but is usually limited to a small number of samples. Methodology and Principal Findings We designed a proof-of-principle study to test whether redistributing sequencing throughput from a high depth-small sample number towards a low depth-large sample number approach is feasible and contributes to influenza epidemiological surveillance. Using 454-Roche sequencing, we sequenced at a rather low depth, a 307 bp amplicon of the neuraminidase gene of the Influenza A(H1N1) pandemic (A(H1N1)pdm) virus from cDNA amplicons pooled in 48 barcoded libraries obtained from nasal swab samples of infected patients (n  =  299) taken from May to November, 2009 pandemic period in Mexico. This approach revealed that during the transition from the first (May-July) to second wave (September-November) of the pandemic, the initial genetic variants were replaced by the N248D mutation in the NA gene, and enabled the establishment of temporal and geographic associations with genetic diversity and the identification of mutations associated with oseltamivir resistance. Conclusions NGS sequencing of a short amplicon from the NA gene at low sequencing depth allowed genetic screening of a large number of samples, providing insights to viral genetic diversity dynamics and the identification of genetic variants associated with oseltamivir resistance. Further research is needed to explain the observed replacement of the genetic variants seen during the second wave. As sequencing throughput rises and library multiplexing and automation improves, we foresee that the approach
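The temporal screening described above amounts to tallying variant frequencies per sampling period. A minimal sketch with hypothetical (period, variant) records, not the study's data:

```python
from collections import defaultdict

def variant_frequencies(records):
    """Per-period frequency of each variant from (period, variant) records.

    A toy version of the kind of temporal surveillance tally described
    above; the input records are hypothetical, not the study's data.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for period, variant in records:
        counts[period][variant] += 1
    freqs = {}
    for period, tally in counts.items():
        total = sum(tally.values())
        freqs[period] = {v: n / total for v, n in tally.items()}
    return freqs

# Hypothetical samples: a mutation rare in wave 1 dominates wave 2.
samples = [("wave1", "wt")] * 9 + [("wave1", "N248D")] * 1 + \
          [("wave2", "wt")] * 2 + [("wave2", "N248D")] * 8
freqs = variant_frequencies(samples)
```

Grouping samples by collection period rather than sequencing each one deeply is what makes the low depth-large sample number design informative.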

  15. 17 CFR 31.6 - Registration of leverage commodities.

    Science.gov (United States)

    2010-04-01

    ... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Registration of leverage... LEVERAGE TRANSACTIONS § 31.6 Registration of leverage commodities. (a) Registration of leverage commodities. Each leverage commodity upon which a leverage contract is offered for sale or purchase or is sold or...

  16. 7 CFR 4290.1130 - Leverage fees payable by RBIC.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Leverage fees payable by RBIC. 4290.1130 Section 4290...) PROGRAM Financial Assistance for RBICs (Leverage) General Information About Obtaining Leverage § 4290.1130 Leverage fees payable by RBIC. (a) Leverage fee. You must pay the Secretary a non-refundable leverage fee...

  17. Wireless Transceiver Design for High Velocity Scenarios

    NARCIS (Netherlands)

    Xu, T.

    2013-01-01

    This thesis is dedicated to transceiver designs for high data-rate wireless communication systems with rapidly moving terminals. The challenges are two-fold. On the one hand, more spectral bandwidth of the transmitted signals is required by future wireless systems to obtain higher transmission

  18. Leveraging Digital Innovation in Healthcare

    DEFF Research Database (Denmark)

    Brown, Carol V.; Jensen, Tina Blegind; Aanestad, Margun

    2014-01-01

    investments in digital infrastructures. New technologies are leveraged to achieve widespread 24x7 disease management, patients’ wellbeing, home-based healthcare and other patient-centric service innovations. Yet, digital innovations in healthcare face barriers in terms of standardization, data privacy...... and security concerns, fragmented markets, and misaligned incentives across stakeholders. The panel will focus on this apparent paradox and highlight the potential of big data, cloud and mobile computing for achieving better health. The panel co-chairs will introduce differences in healthcare delivery...

  19. The Leverage Effect on Wealth Distribution in a Controllable Laboratory Stock Market

    OpenAIRE

    Chenge Zhu; Guang Yang; Kenan An; Jiping Huang

    2014-01-01

    Wealth distribution has always been an important issue in our economic and social life, since it affects the harmony and stabilization of society. Against the background of the widespread use of financial tools to raise leverage in recent years, we studied the leverage effect on the wealth distribution of a population in a controllable laboratory market in which we conducted several human experiments, and drew the conclusion that higher leverage leads to a higher Gini coefficient in the market. A highe...
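The Gini coefficient used above as the inequality measure can be computed directly from a wealth vector; a minimal sketch using the standard closed form on sorted data (the wealth vectors are illustrative):

```python
def gini(wealth):
    """Gini coefficient: 0 = perfect equality, approaching 1 = maximal inequality.

    Uses the standard closed form on sorted non-negative data with a
    positive total: G = 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n, i 1-based.
    """
    x = sorted(wealth)
    n = len(x)
    total = sum(x)
    weighted = sum(i * xi for i, xi in enumerate(x, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

equal = gini([10, 10, 10, 10])   # everyone equal -> 0.0
skewed = gini([1, 1, 1, 37])     # concentrated wealth -> much higher Gini
```

A market in which leverage concentrates gains in a few hands would show up as a rising Gini in exactly this computation.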

  20. Heatsink Design of High Power Converter

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Chan Ki [Chungang University (Korea)

    1999-04-01

    Various ways of designing heat sinks are available for commercial high power converters, and among them air cooling is the most popular and practical method. In this paper, a practical method of cooling a high power converter is presented, including a method of reducing noise and vibration caused by the fan and a method of estimating the gap and contact resistances existing between the thyristor and the heat sink. Finally, the heat transfer analysis and implementation methods of the heat sink for a high power converter are presented. (author). 14 refs., 11 figs., 3 tabs.
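The gap and contact resistances mentioned above enter the standard series thermal-resistance model, in which each resistance in the junction-to-ambient chain adds to the temperature rise. A minimal sketch with hypothetical values, not the paper's measurements:

```python
def junction_temperature(power_w, t_ambient_c, resistances_k_per_w):
    """Steady-state junction temperature from a series thermal-resistance chain.

    Standard model: Tj = Ta + P * (R_junction_case + R_contact + R_sink).
    The resistance values used below are illustrative, not measured data.
    """
    return t_ambient_c + power_w * sum(resistances_k_per_w)

# Hypothetical thyristor dissipating 500 W through case, contact/gap, and sink:
tj = junction_temperature(500, 40.0, [0.02, 0.01, 0.05])  # 40 + 500*0.08 = 80 C
```

Because the terms simply add, even a small gap or contact resistance (here 0.01 K/W) contributes a measurable share of the total rise at high power, which is why estimating it matters.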

  1. Website Design Guidelines: High Power Distance and High Context Culture

    Directory of Open Access Journals (Sweden)

    Tanveer Ahmed

    2009-06-01

    Full Text Available This paper aims to address the question of offering a culturally adapted website for a local audience. So far, in the website design arena the vast majority of studies have examined mainly Western and American (low power distance and low context) cultures, disregarding possible cultural discrepancies. This study fills this gap and explores the key cultural parameters that are likely to have an impact on local website design for an Asian-Eastern culture (high power distance and high context), correlating with both Hofstede’s and Hall’s cultural dimensions. It also reviews how website localisation may be accomplished more effectively by extracting guidelines from two different yet compatible cultural dimensions: high power distance and high context.

  2. Intermediary Leverage Cycles and Financial Stability

    OpenAIRE

    Adrian,Tobias; Boyarchenko, Nina

    2012-01-01

    We develop a theory of financial intermediary leverage cycles in the context of a dynamic model of the macroeconomy. The interaction between a production sector, a financial intermediation sector, and a household sector gives rise to amplification of fundamental shocks that affect real economic activity. The model features two state variables that represent the dynamics of the economy: the net worth and the leverage of financial intermediaries. The leverage of the intermediaries is procyclica...

  3. Leveraging the national cyberinfrastructure for biomedical research

    Science.gov (United States)

    LeDuc, Richard; Vaughn, Matthew; Fonner, John M; Sullivan, Michael; Williams, James G; Blood, Philip D; Taylor, James; Barnett, William

    2014-01-01

    In the USA, the national cyberinfrastructure refers to a system of research supercomputer and other IT facilities and the high speed networks that connect them. These resources have been heavily leveraged by scientists in disciplines such as high energy physics, astronomy, and climatology, but until recently they have been little used by biomedical researchers. We suggest that many of the ‘Big Data’ challenges facing the medical informatics community can be efficiently handled using national-scale cyberinfrastructure. Resources such as the Extreme Science and Discovery Environment, the Open Science Grid, and Internet2 provide economical and proven infrastructures for Big Data challenges, but these resources can be difficult to approach. Specialized web portals, support centers, and virtual organizations can be constructed on these resources to meet defined computational challenges, specifically for genomics. We provide examples of how this has been done in basic biology as an illustration for the biomedical informatics community. PMID:23964072

  4. Creating Leverage to Counter Threats to Neurosurgical Practice.

    Science.gov (United States)

    Dyer, E Hunter

    2017-04-01

    This article describes guiding principles utilized in practice. It describes the evolution of one of the largest neurosurgical practices in the United States. The objective is to identify and effectively create leverage in neurosurgical practice and to describe principles instrumental in the growth of this practice. Methods included data collection, responsiveness, recruitment, and innovation. Results demonstrate important strategies for creating and maintaining leverage, as well as principles that have enabled the practice to remain independent and continue to provide high-quality care. In conclusion, it is important to stay focused on potential sources of leverage, to gain advantage for the future, and to maintain stability as healthcare changes occur. Quality data and outcomes will allow the practice to continue to grow strategically. Copyright © 2016 by the Congress of Neurological Surgeons.

  5. Leveraging Synergies in Global Sourcing

    DEFF Research Database (Denmark)

    Englyst, Linda; Stegmann Mikkelsen, Ole; Johansen, John

    2005-01-01

    Leveraging synergies in global sourcing is not a straightforward task, and requires a balanced approach to organizing, taking into consideration a number of situational factors. These include, but are not limited to, strategic significance, product specificity, market complexity, coherency...... and the need to involve low-interest parties to obtain synergies. The research advances three dimensions of purchasing organisation, namely coordination structure, compliance mechanism and synergy content. It is argued that these dimensions may be combined to yield various hybrid organisational forms, while...... it is recognised that some combinations are probably likely to occur more naturally and beneficially than others, as contingency factors may impact more than one dimension. In that sense, the dimensions are not entirely independent. The research findings are based primarily on empirical evidence from an industrial...

  6. High-resolution coupled physics solvers for analysing fine-scale nuclear reactor design problems

    Science.gov (United States)

    Mahadevan, Vijay S.; Merzari, Elia; Tautges, Timothy; Jain, Rajeev; Obabko, Aleksandr; Smith, Michael; Fischer, Paul

    2014-01-01

    An integrated multi-physics simulation capability for the design and analysis of current and future nuclear reactor models is being investigated, to tightly couple neutron transport and thermal-hydraulics physics under the SHARP framework. Over several years, high-fidelity, validated mono-physics solvers with proven scalability on petascale architectures have been developed independently. Based on a unified component-based architecture, these existing codes can be coupled with a mesh-data backplane and a flexible coupling-strategy-based driver suite to produce a viable tool for analysts. The goal of the SHARP framework is to perform fully resolved coupled physics analysis of a reactor on heterogeneous geometry, in order to reduce the overall numerical uncertainty while leveraging available computational resources. The coupling methodology and software interfaces of the framework are presented, along with verification studies on two representative fast sodium-cooled reactor demonstration problems to prove the usability of the SHARP framework. PMID:24982250
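The tight coupling described above is commonly realized as a fixed-point (Picard) iteration between the physics solvers, exchanging fields until both stop changing. The toy sketch below iterates a one-variable "neutronics" update (power with negative temperature feedback) against a lumped "thermal" update; all coefficients are illustrative and this is not SHARP's actual interface:

```python
def couple(power0=1.0, t_cool=300.0, alpha=-0.002, h=0.05,
           tol=1e-10, max_iter=100):
    """Toy Picard iteration between a 'neutronics' and a 'thermal' solver.

    Thermal solve: T = T_cool + P/h (lumped heat balance).
    Neutronics solve: P = P0 * (1 + alpha * (T - T_cool)), alpha < 0 is
    a stabilizing feedback coefficient. Numbers are illustrative only.
    """
    power, temp = power0, t_cool
    for _ in range(max_iter):
        new_temp = t_cool + power / h
        new_power = power0 * (1 + alpha * (new_temp - t_cool))
        if abs(new_power - power) < tol and abs(new_temp - temp) < tol:
            return new_power, new_temp
        power, temp = new_power, new_temp
    return power, temp

p, t = couple()  # converges to the fixed point P = 1/(1 - alpha/h)
```

The iteration converges here because the feedback loop gain |alpha|/h is well below 1; production frameworks add relaxation or Newton-type acceleration when the coupling is stiffer.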

  7. High-resolution coupled physics solvers for analysing fine-scale nuclear reactor design problems.

    Science.gov (United States)

    Mahadevan, Vijay S; Merzari, Elia; Tautges, Timothy; Jain, Rajeev; Obabko, Aleksandr; Smith, Michael; Fischer, Paul

    2014-08-06

    An integrated multi-physics simulation capability for the design and analysis of current and future nuclear reactor models is being investigated, to tightly couple neutron transport and thermal-hydraulics physics under the SHARP framework. Over several years, high-fidelity, validated mono-physics solvers with proven scalability on petascale architectures have been developed independently. Based on a unified component-based architecture, these existing codes can be coupled with a mesh-data backplane and a flexible coupling-strategy-based driver suite to produce a viable tool for analysts. The goal of the SHARP framework is to perform fully resolved coupled physics analysis of a reactor on heterogeneous geometry, in order to reduce the overall numerical uncertainty while leveraging available computational resources. The coupling methodology and software interfaces of the framework are presented, along with verification studies on two representative fast sodium-cooled reactor demonstration problems to prove the usability of the SHARP framework.

  8. Library Design-Facilitated High-Throughput Sequencing of Synthetic Peptide Libraries.

    Science.gov (United States)

    Vinogradov, Alexander A; Gates, Zachary P; Zhang, Chi; Quartararo, Anthony J; Halloran, Kathryn H; Pentelute, Bradley L

    2017-11-13

    A methodology to achieve high-throughput de novo sequencing of synthetic peptide mixtures is reported. The approach leverages shotgun nanoliquid chromatography coupled with tandem mass spectrometry-based de novo sequencing of library mixtures (up to 2000 peptides) as well as automated data analysis protocols to filter away incorrect assignments, noise, and synthetic side-products. For increasing the confidence in the sequencing results, mass spectrometry-friendly library designs were developed that enabled unambiguous decoding of up to 600 peptide sequences per hour while maintaining greater than 85% sequence identification rates in most cases. The reliability of the reported decoding strategy was additionally confirmed by matching fragmentation spectra for select authentic peptides identified from library sequencing samples. The methods reported here are directly applicable to screening techniques that yield mixtures of active compounds, including particle sorting of one-bead one-compound libraries and affinity enrichment of synthetic library mixtures performed in solution.
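One routine filter in pipelines like the one described above is a precursor-mass check: a candidate de novo sequence is kept only if its theoretical monoisotopic mass matches the observed precursor mass within a ppm tolerance. The sketch below uses standard monoisotopic residue masses (abbreviated table) and is an illustration of the technique, not the authors' actual protocol:

```python
# Monoisotopic residue masses (Da); abbreviated table for illustration.
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "L": 113.08406, "K": 128.09496, "E": 129.04259,
}
WATER = 18.010565  # H2O added once per linear peptide (termini)

def peptide_mass(seq):
    """Theoretical monoisotopic mass of a linear, unmodified peptide."""
    return sum(RESIDUE_MASS[aa] for aa in seq) + WATER

def mass_match(seq, observed_mass, tol_ppm=10.0):
    """Keep a de novo assignment only if it agrees with the precursor mass."""
    theo = peptide_mass(seq)
    return abs(theo - observed_mass) / theo * 1e6 <= tol_ppm

good = mass_match("GAVLK", peptide_mass("GAVLK"))        # exact match
bad = mass_match("GAVLK", peptide_mass("GAVLK") + 1.0)   # ~2000 ppm off
```

Filters of this kind cheaply discard noise and synthetic side-products before the more expensive fragment-spectrum matching step.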

  9. Premixer Design for High Hydrogen Fuels

    Energy Technology Data Exchange (ETDEWEB)

    Benjamin P. Lacy; Keith R. McManus; Balachandar Varatharajan; Biswadip Shome

    2005-12-16

    This 21-month project translated DLN technology to the unique properties of high hydrogen content IGCC fuels, and yielded designs in preparation for a future testing and validation phase. Fundamental flame characterization, mixing, and flame property measurement experiments were conducted to tailor computational design tools and criteria to create a framework for predicting nozzle operability (e.g., flame stabilization, emissions, resistance to flashback/flame-holding and auto-ignition). This framework was then used to establish, rank, and evaluate potential solutions to the operability challenges of IGCC combustion. The leading contenders were studied and developed with the most promising concepts evaluated via computational fluid dynamics (CFD) modeling and using the design rules generated by the fundamental experiments, as well as using GE's combustion design tools and practices. Finally, the project scoped the necessary steps required to carry the design through mechanical and durability review, testing, and validation, towards full demonstration of this revolutionary technology. This project was carried out in three linked tasks with the following results. (1) Develop conceptual designs of premixer and down-select the promising options. This task defined the "gap" between existing design capabilities and the targeted range of IGCC fuel compositions and evaluated the current capability of DLN pre-mixer designs when operated at similar conditions. Two concepts (1) swirl based and (2) multiple point lean direct injection based premixers were selected via a QFD from 13 potential design concepts. (2) Carry out CFD on chosen options (1 or 2) to evaluate operability risks. This task developed the leading options down-selected in Task 1. Both a GE15 swozzle based premixer and a lean direct injection concept were examined by performing a detailed CFD study wherein the aerodynamics of the design, together with the chemical kinetics of the

  10. Design of High Efficient MPPT Solar Inverter

    Directory of Open Access Journals (Sweden)

    Sunitha K. A.

    2017-01-01

    Full Text Available This work aims to design a High Efficient Maximum Power Point Tracking (MPPT) Solar Inverter. A boost converter is designed in the system to boost the power from the photovoltaic panel. With this experimental setup, a room with a 500 W load (eight fluorescent tubes) is completely controlled, with the aim of decreasing the maintenance cost. A microcontroller is introduced to run the P&O (Perturb and Observe) algorithm used for tracking the maximum power point. The duty cycle for the operation of the boost converter is optimally adjusted by the MPPT controller. An MPPT charge controller charges the battery, which also feeds the inverter that runs the load. Both the P&O scheme with a fixed variation of the reference current and the intelligent MPPT algorithm were able to identify the global maximum power point; however, the performance of the MPPT algorithm was better.
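The P&O algorithm referenced above perturbs the converter duty cycle, observes the resulting power change, and keeps the perturbation direction while power increases, reversing it otherwise. A minimal sketch against a toy PV power curve (the curve shape, peak, and step size are illustrative, not the paper's hardware):

```python
def pv_power(duty):
    """Toy PV power curve with a single maximum of 500 W at duty = 0.5."""
    return max(0.0, 500.0 * (1.0 - (duty - 0.5) ** 2 / 0.25))

def perturb_and_observe(steps=200, duty=0.2, delta=0.01):
    """Classic P&O: step the duty cycle, reverse direction when power drops."""
    power = pv_power(duty)
    direction = 1.0
    for _ in range(steps):
        duty += direction * delta
        new_power = pv_power(duty)
        if new_power < power:   # overshot the peak: reverse the perturbation
            direction = -direction
        power = new_power
    return duty, power

duty, power = perturb_and_observe()
# settles into a small oscillation around the maximum power point
```

The steady-state oscillation of one step size around the peak is the known drawback of fixed-step P&O, which is what adaptive or "intelligent" MPPT variants try to reduce.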

  11. Design of High Field Solenoids made of High Temperature Superconductors

    Energy Technology Data Exchange (ETDEWEB)

    Bartalesi, Antonio; /Pisa U.

    2010-12-01

    This thesis starts from the analytical mechanical analysis of a superconducting solenoid, loaded by self generated Lorentz forces. Also, a finite element model is proposed and verified with the analytical results. To study the anisotropic behavior of a coil made by layers of superconductor and insulation, a finite element meso-mechanic model is proposed and designed. The resulting material properties are then used in the main solenoid analysis. In parallel, design work is performed as well: an existing Insert Test Facility (ITF) is adapted and structurally verified to support a coil made of YBa₂Cu₃O₇, a High Temperature Superconductor (HTS). Finally, a technological winding process was proposed and the required tooling is designed.

  12. Bioblendstocks that Enable High Efficiency Engine Designs

    Energy Technology Data Exchange (ETDEWEB)

    McCormick, Robert L.; Fioroni, Gina M.; Ratcliff, Matthew A.; Zigler, Bradley T.; Farrell, John

    2016-11-03

    The past decade has seen a high level of innovation in production of biofuels from sugar, lipid, and lignocellulose feedstocks. As discussed in several talks at this workshop, ethanol blends in the E25 to E50 range could enable more highly efficient spark-ignited (SI) engines. This is because of their knock resistance properties that include not only high research octane number (RON), but also charge cooling from high heat of vaporization, and high flame speed. Emerging alcohol fuels such as isobutanol or mixed alcohols have desirable properties such as reduced gasoline blend vapor pressure, but also have lower RON than ethanol. These fuels may be able to achieve the same knock resistance benefits, but likely will require higher blend levels or higher RON hydrocarbon blendstocks. A group of very high RON (>150) oxygenates such as dimethyl furan, methyl anisole, and related compounds are also produced from biomass. While providing no increase in charge cooling, their very high octane numbers may provide adequate knock resistance for future highly efficient SI engines. Given this range of options for highly knock resistant fuels there appears to be a critical need for a fuel knock resistance metric that includes effects of octane number, heat of vaporization, and potentially flame speed. Emerging diesel fuels include highly branched long-chain alkanes from hydroprocessing of fats and oils, as well as sugar-derived terpenoids. These have relatively high cetane number (CN), which may have some benefits in designing more efficient CI engines. Fast pyrolysis of biomass can produce diesel boiling range streams that are high in aromatic, oxygen and acid contents. Hydroprocessing can be applied to remove oxygen and consequently reduce acidity, however there are strong economic incentives to leave up to 2 wt% oxygen in the product. This oxygen will primarily be present as low CN alkyl phenols and aryl ethers. While these have high heating value, their presence in diesel fuel

  13. Determinants of Leverage and Agency problems

    NARCIS (Netherlands)

    de Jong, A.; Dijk, R.

    1998-01-01

    In this paper we empirically investigate the determinants of leverage and agency problems and we examine the relationships between leverage and agency problems. As in Titman and Wessels (1988) we use structural equations modeling with latent variables. In contrast to Titman and Wessels (1988), who

  14. 17 CFR 31.17 - Records of leverage transactions.

    Science.gov (United States)

    2010-04-01

    ... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Records of leverage... LEVERAGE TRANSACTIONS § 31.17 Records of leverage transactions. (a) Each leverage transaction merchant receiving a leverage customer's order shall immediately upon receipt thereof prepare a written record of...

  15. 17 CFR 31.15 - Reporting to leverage customers.

    Science.gov (United States)

    2010-04-01

    ... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Reporting to leverage... LEVERAGE TRANSACTIONS § 31.15 Reporting to leverage customers. Each leverage transaction merchant shall furnish in writing directly to each leverage customer: (a) Promptly upon the repurchase, resale...

  16. 7 CFR 4290.1120 - General eligibility requirements for Leverage.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false General eligibility requirements for Leverage. 4290... INVESTMENT COMPANY (âRBICâ) PROGRAM Financial Assistance for RBICs (Leverage) General Information About Obtaining Leverage § 4290.1120 General eligibility requirements for Leverage. To be eligible for Leverage...

  17. ANALYSIS OF THE EFFECT OF FINANCIAL LEVERAGE AND OPERATING LEVERAGE ON COMPANY PROFITABILITY AT PT. PANCONIN CIPTA PERKASA IN MAKASSAR

    OpenAIRE

    NURINNA, RAHMA

    2008-01-01

    2013. This study aims to explain, identify and analyse the effect of financial leverage and operating leverage on company profitability at PT. Panconin Cipta Perkasa in Makassar. The analytical methods used in this study are financial leverage analysis, operating leverage analysis, profitability analysis, multiple linear regression analysis, correlation analysis and coefficient-of-determination analysis. The results of the analysis of financial leverage and operatin...

  18. Design of high performance CMC brake discs

    Energy Technology Data Exchange (ETDEWEB)

    Krenkel, W.; Henke, T. [Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Stuttgart (Germany)

    1999-03-01

    Ceramic matrix composite (CMC) materials based on 2D carbon fibre preforms show high heat-absorption capacities and good tribological as well as thermomechanical properties. To take advantage of the full lightweight potential of these new materials in high performance automotive brake discs, the thermal conductivity transverse to the friction surface has to be high in order to reduce the surface temperature. Experimental tests showed that lower surface temperatures prevent overheating of the brake's periphery and stabilise the friction behaviour. In this study, different design approaches with improved transverse heat conductivity have been investigated by finite element analysis. C/C-SiC bolts as well as SiC coatings and combinations of them have been investigated and compared with an orthotropic brake disc, showing a reduction of temperature of up to 50%. Original sized brake discs with C/C-SiC have been manufactured and tested under real conditions, which verified the calculations. Using only low-cost CMC materials and avoiding any additional processing steps, the potential of C/C-SiC brake discs is very attractive under tribological as well as economical aspects. (orig.) 4 refs.

  19. A graph theoretic analysis of leverage centrality

    Directory of Open Access Journals (Sweden)

    Roger Vargas, Jr.

    2017-12-01

    Full Text Available In 2010, Joyce et al. defined the leverage centrality of vertices in a graph as a means to analyze functional connections within the human brain. In this metric the degree of a vertex is compared to the degrees of all its neighbors. We investigate this property from a mathematical perspective. We first outline some of the basic properties and then compute leverage centralities of vertices in different families of graphs. In particular, we show there is a surprising connection between the number of distinct leverage centralities in the Cartesian product of paths and the triangle numbers.
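The metric described above (Joyce et al., 2010) compares a vertex's degree with each neighbor's degree: l(v) = (1/deg v) * sum over neighbors u of (deg v - deg u)/(deg v + deg u). A minimal sketch over an adjacency-set representation of an undirected graph:

```python
def leverage_centrality(adj):
    """Leverage centrality (Joyce et al., 2010) for an undirected graph.

    l(v) = (1/deg v) * sum_{u in N(v)} (deg v - deg u) / (deg v + deg u).
    `adj` maps each vertex to the set of its neighbors.
    """
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    return {
        v: sum((deg[v] - deg[u]) / (deg[v] + deg[u]) for u in nbrs) / deg[v]
        for v, nbrs in adj.items()
    }

# Star K_{1,3}: the hub out-degrees every neighbor, the leaves are dominated.
star = {"h": {"a", "b", "c"}, "a": {"h"}, "b": {"h"}, "c": {"h"}}
lc = leverage_centrality(star)
# hub: (1/3) * 3 * (3-1)/(3+1) = 0.5 ; each leaf: (1-3)/(1+3) = -0.5
```

Positive values mark vertices that influence their neighborhood (higher degree than neighbors), negative values mark dominated vertices, which is why the metric was proposed for brain connectivity analysis.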

  20. Leveraging OpenStudio's Application Programming Interfaces: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Long, N.; Ball, B.; Goldwasser, D.; Parker, A.; Elling, J.; Davis, O.; Kruchten, D.

    2013-11-01

    OpenStudio development efforts have been focused on providing Application Programming Interfaces (APIs) where users are able to extend OpenStudio without the need to compile the open source libraries. This paper will discuss the basic purposes and functionalities of the core libraries that have been wrapped with APIs, including the Building Model, Results Processing, Advanced Analysis, Uncertainty Quantification, and Data Interoperability through Translators. Several building energy modeling applications have been produced using OpenStudio's API and Software Development Kits (SDK), including the United States Department of Energy's Asset Score Calculator, a mobile-based audit tool, an energy design assistance reporting protocol, and a portfolio scale incentive optimization analysis methodology. Each of these software applications will be discussed briefly, describing how the APIs were leveraged for various uses including high-level modeling, data transformations from detailed building audits, error checking/quality assurance of models, and use of high-performance computing for mass simulations.

  1. Financing drug discovery via dynamic leverage.

    Science.gov (United States)

    Montazerhodjat, Vahid; Frishkopf, John J; Lo, Andrew W

    2016-03-01

    We extend the megafund concept for funding drug discovery to enable dynamic leverage in which the portfolio of candidate therapeutic assets is predominantly financed initially by equity, and debt is introduced gradually as assets mature and begin generating cash flows. Leverage is adjusted so as to maintain an approximately constant level of default risk throughout the life of the fund. Numerical simulations show that applying dynamic leverage to a small portfolio of orphan drug candidates can boost the return on equity almost twofold compared with securitization with a static capital structure. Dynamic leverage can also add significant value to comparable all-equity-financed portfolios, enhancing the return on equity without jeopardizing debt performance or increasing risk to equity investors. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
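As a loose illustration of the idea above (a toy, not the paper's simulation model), the schedule below starts all-equity and raises debt as portfolio value grows, holding the debt ratio at a fixed target so that default risk stays roughly constant:

```python
def dynamic_leverage_schedule(asset_values, target_debt_ratio=0.4):
    """Debt capacity over a fund's life under a constant target debt ratio.

    Starts all-equity and levers up as portfolio value grows, a toy version
    of 'introduce debt gradually as assets mature'. Debt is never forcibly
    retired mid-life. All numbers are illustrative.
    """
    schedule = []
    debt = 0.0
    for value in asset_values:
        debt = max(debt, target_debt_ratio * value)  # raise debt as value grows
        schedule.append((value, debt, debt / value))
    return schedule

# Hypothetical portfolio value as drug candidates mature through trials:
path = dynamic_leverage_schedule([100.0, 120.0, 180.0, 250.0])
```

Keeping the ratio fixed while the asset base grows is what lets equity returns scale up without the default probability drifting, in contrast to a static capital structure set at inception.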

  2. High specific energy, high capacity nickel-hydrogen cell design

    Science.gov (United States)

    Wheeler, James R.

    1993-01-01

    A 3.5 inch rabbit-ear-terminal nickel-hydrogen cell was designed and tested to deliver high capacity at steady discharge rates up to and including a C rate. Its specific energy yield of 60.6 Wh/kg is believed to be the highest yet achieved in a slurry-process nickel-hydrogen cell, and its 10 C capacity of 113.9 Ah the highest capacity yet of any type in a 3.5 inch diameter size. The cell also demonstrated a pulse capability of 180 amps for 20 seconds. Specific cell parameters and performance are described. Also covered is an episode of capacity fading due to electrode swelling and its successful recovery by means of additional activation procedures.

  3. Test of a High Power Target Design

    CERN Multimedia

    2002-01-01

    IS343: A high power tantalum disc-foil target (RIST) has been developed for the proposed radioactive beam facility, SIRIUS, at the Rutherford Appleton Laboratory. The yield and release characteristics of the RIST target design have been measured at ISOLDE. The results indicate that the yields are at least as good as the best ISOLDE roll-foil targets and that the release curves are significantly faster in most cases. Both targets use 20–25 $\mu$m thick foils, but in a different internal geometry. Investigations have continued at ISOLDE with targets having different foil thicknesses and internal geometries in an attempt to understand the release mechanisms and, in particular, to maximise the yield of short-lived isotopes. A theoretical model has been developed which fits the release curves and gives physical values of the diffusion constants. The latest target is constructed from 2 $\mu$m thick tantalum foils (mass only 10 mg) and shows very short release times. The yield of $^{11}$Li (half-life of ...

  4. Creation of a Rapid High-Fidelity Aerodynamics Module for a Multidisciplinary Design Environment

    Science.gov (United States)

    Srinivasan, Muktha; Whittecar, William; Edwards, Stephen; Mavris, Dimitri N.

    2012-01-01

    … surrogate model, which captures the relationships between input variables and responses into regression equations. Depending on the dimensionality of the problem and the fidelity of the code for which a surrogate model is being created, the initial DOE can itself be computationally prohibitive to run. Cokriging, a modeling approach from the field of geostatistics, provides a desirable compromise between computational expense and fidelity. To do this, cokriging leverages a large body of data generated by a low fidelity analysis, combines it with a smaller set of data from a higher fidelity analysis, and creates a kriging surrogate model with prediction fidelity approaching that of the higher fidelity analysis. When integrated into a multidisciplinary environment, a disciplinary analysis module employing cokriging can raise the analysis fidelity without drastically impacting the expense of design iterations. This is demonstrated through the creation of an aerodynamics analysis module in NASA's OpenMDAO framework. Aerodynamic analyses including Missile DATCOM, APAS, and USM3D are leveraged to create high fidelity aerodynamics decks for parametric vehicle geometries, which are created in NASA's Vehicle Sketch Pad (VSP). Several trade studies are performed to examine the achieved level of model fidelity, and the overall impact to vehicle design is quantified.
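    Cokriging proper builds a joint Gaussian-process model of both codes; the core idea can be illustrated more simply with an additive "bridge" correction, in which a cheap analysis (standing in for Missile DATCOM) is sampled densely and a handful of expensive runs (standing in for USM3D) calibrate the difference. Both response functions below are invented for the sketch:

    ```python
    import numpy as np

    def lofi(x):                       # cheap, biased analysis (hypothetical)
        return 0.20 * x**2 + 0.05 * x

    def hifi(x):                       # expensive "truth" (hypothetical)
        return 0.22 * x**2 + 0.04 * x + 0.01

    x_lo = np.linspace(0.0, 2.0, 50)           # large low-fidelity DOE
    x_hi = np.array([0.0, 0.7, 1.3, 2.0])      # only four high-fidelity runs

    # Step 1: surrogate of the cheap code, built from the big dataset.
    lo_model = np.polynomial.Polynomial.fit(x_lo, lofi(x_lo), deg=2)

    # Step 2: "bridge" model of the hi-fi residuals at the few expensive points.
    bridge = np.polynomial.Polynomial.fit(x_hi, hifi(x_hi) - lo_model(x_hi), deg=2)

    def surrogate(x):                  # multi-fidelity prediction
        return lo_model(x) + bridge(x)

    x_test = np.linspace(0.0, 2.0, 21)
    max_err = float(np.max(np.abs(surrogate(x_test) - hifi(x_test))))
    ```

    Most of the surrogate's shape comes from the 50 cheap runs; the four expensive runs only have to resolve the (smoother) discrepancy, which is why the combined model approaches high-fidelity accuracy at a fraction of the cost.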

  5. Leveraging Distributions in Physical Unclonable Functions

    Directory of Open Access Journals (Sweden)

    Wenjie Che

    2017-10-01

    Full Text Available A special class of Physical Unclonable Functions (PUFs) referred to as strong PUFs can be used in novel hardware-based authentication protocols. Strong PUFs are required for authentication because the bit strings and helper data are transmitted openly by the token to the verifier, and therefore are revealed to the adversary. This enables the adversary to carry out attacks against the token by systematically applying challenges and obtaining responses in an attempt to machine learn, and later predict, the token’s response to an arbitrary challenge. Therefore, strong PUFs must both provide an exponentially large challenge space and be resistant to machine-learning attacks in order to be considered secure. We investigate a transformation called temperature–voltage compensation (TVCOMP), which is used within the Hardware-Embedded Delay PUF (HELP) bit string generation algorithm. TVCOMP increases the diversity and unpredictability of the challenge–response space, and therefore increases resistance to model-building attacks. HELP leverages within-die variations in path delays as a source of random information. TVCOMP is a linear transformation designed specifically for dealing with changes in delay introduced by adverse temperature–voltage (environmental) variations. In this paper, we show that TVCOMP also increases entropy and expands the challenge–response space dramatically.
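    The effect of a TVCOMP-style linear standardization can be sketched as follows: a global shift and scaling of a chip's path delays (the first-order signature of a temperature–voltage change) cancels out, so bits derived from the compensated delays stay stable across corners. The reference constants, delay distribution, and thresholding rule here are invented for illustration and are not the HELP ASIC's actual parameters:

    ```python
    import numpy as np

    def tvcomp(delays, ref_mean=50.0, ref_range=20.0):
        """Standardize measured path delays by the chip-wide mean and range,
        then rescale onto a fixed reference distribution (a linear transform)."""
        mu = delays.mean()
        span = delays.max() - delays.min()
        return (delays - mu) / span * ref_range + ref_mean

    rng = np.random.default_rng(1)
    nominal = rng.normal(50.0, 5.0, size=256)   # per-path delays, nominal TV corner
    hot_lowv = 1.30 * nominal + 7.0             # same chip at a slower TV corner

    comp_nom = tvcomp(nominal)
    comp_hot = tvcomp(hot_lowv)
    ```

    Because the slower corner is modeled as a global affine change, the compensated delay distributions match, while the chip-to-chip (within-die) component that the PUF actually harvests survives the transform.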

  6. A molecular leverage for helicity control and helix inversion.

    Science.gov (United States)

    Akine, Shigehisa; Hotate, Sayaka; Nabeshima, Tatsuya

    2011-09-07

    The helical tetranuclear complex [LZn₃La(OAc)₃] having two benzocrown moieties was designed and synthesized as a novel molecular leverage for helicity control and helix inversion. Short alkanediammonium guests H₃N⁺(CH₂)ₙNH₃⁺ (n = 4, 6, 8) preferentially stabilized the P-helical isomer of [LZn₃La(OAc)₃], while the longer guest H₃N⁺(CH₂)₁₂NH₃⁺ caused a helix inversion to give the M-helical isomer as the major isomer. The differences in the molecular lengths were efficiently translated into helical handedness via the novel molecular leverage mechanism using the gauche/anti conversion of the trans-1,2-disubstituted ethylenediamine unit.

  7. Leveraging teamwork by Google+ in a lifelong learning perspective

    Directory of Open Access Journals (Sweden)

    Sabrina Leone

    2015-06-01

    Full Text Available The current affordances of ubiquitous global connections, of a large number of open resources, and of social and professional networks may boost innovation in open-minded organisations through their personnel’s empowerment. Lifelong and ubiquitous learning, cloud computing and smart working frameworks are the pillars of the change that is replacing the traditional work model and transforming the way crowds of people communicate, collaborate, work in teams, and produce value and growth for the entities of which they are part. All this directly involves the smart city concept. The “cloudworker” virtually works, learns and socially participates effectively from anywhere anytime, and comfortably interacts in a knowledge society built on networked ecologies. Cloud teamwork applications such as Google+ enable teams to be more productive and organisations to devote more time to their core mission. Social networking and collaboration technologies draw renewed attention to the evidence that organisations are social entities above all; as such, they can turn into whole systems of leadership and learning, that is, high-performance work systems. This paper aims to evaluate the effectiveness of Google+ as a tool for leveraging teamwork in learning organisations. Results show that technology is not only a means of social exchange; it also enables the joint design of learning and organisational strategies and the growth of learning communities.

  8. Performing high quality research into design practice

    NARCIS (Netherlands)

    Rianne Valkenburg; Maaike Kleinsmann

    2009-01-01

    This paper deals with the complexity of doing research in design practice. More and more projects and papers appear dealing with this topic and the time has come to draw up the balance sheet. This paper starts with explaining the status of design research until now, in which we indicate the

  9. VisualCommander for Rapid End-to-End Mission Design and Simulation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This proposal is for the development of a highly extensible and user-configurable software application for end-to-end mission simulation and design. We will leverage...

  10. 13 CFR 108.1120 - General eligibility requirement for Leverage.

    Science.gov (United States)

    2010-01-01

    ... for Leverage. 108.1120 Section 108.1120 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION NEW MARKETS VENTURE CAPITAL (“NMVC”) PROGRAM SBA Financial Assistance for NMVC Companies (Leverage) General Information About Obtaining Leverage § 108.1120 General eligibility requirement for Leverage. To...

  11. 17 CFR 31.8 - Cover of leverage contracts.

    Science.gov (United States)

    2010-04-01

    ... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Cover of leverage contracts. 31.8 Section 31.8 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION LEVERAGE TRANSACTIONS § 31.8 Cover of leverage contracts. (a)(1) Each leverage transaction merchant must at all times...

  12. 13 CFR 107.1120 - General eligibility requirements for Leverage.

    Science.gov (United States)

    2010-01-01

    ... for Leverage. 107.1120 Section 107.1120 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION SMALL BUSINESS INVESTMENT COMPANIES SBA Financial Assistance for Licensees (Leverage) General Information About Obtaining Leverage § 107.1120 General eligibility requirements for Leverage. To be eligible...

  13. Design of High Compressive Strength Concrete Mix without Additives

    National Research Council Canada - National Science Library

    N, M, Akasha; Mohamed, Mansour Ahmed; Abdelrazig, Nasreen Maruiod

    2017-01-01

    .... The selected materials with high specification using special production techniques, the properties, the mix design procedure, and the mix proportions of the high strength concrete (HSC) were discussed...

  14. Petascale supercomputing to accelerate the design of high-temperature alloys.

    Science.gov (United States)

    Shin, Dongwon; Lee, Sangkeun; Shyam, Amit; Haynes, J Allen

    2017-01-01

    Recent progress in high-performance computing and data informatics has opened up numerous opportunities to aid the design of advanced materials. Herein, we demonstrate a computational workflow that includes rapid population of high-fidelity materials datasets via petascale computing and subsequent analyses with modern data science techniques. We use a first-principles approach based on density functional theory to derive the segregation energies of 34 microalloying elements at the coherent and semi-coherent interfaces between the aluminium matrix and the θ'-Al2Cu precipitate, which requires several hundred supercell calculations. We also perform extensive correlation analyses to identify materials descriptors that affect the segregation behaviour of solutes at the interfaces. Finally, we show an example of leveraging machine learning techniques to predict segregation energies without performing computationally expensive physics-based simulations. The approach demonstrated in the present work can be applied to any high-temperature alloy system for which key materials data can be obtained using high-performance computing.
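    The machine-learning shortcut described in the last step reduces to fitting a model from materials descriptors to the computed segregation energies and predicting held-out elements. A minimal sketch with plain least squares standing in for their model; the descriptor table, weights, and noise level are entirely synthetic, not the paper's DFT data:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical descriptor table: one row per solute element, three invented
    # descriptors (e.g. size mismatch, electronegativity difference, cohesive
    # energy). None of these are the paper's actual features or values.
    X = rng.normal(size=(34, 3))
    true_w = np.array([0.8, -0.5, 0.2])
    E_seg = X @ true_w + 0.01 * rng.normal(size=34)   # synthetic "DFT" energies

    # Train on 24 elements, predict the 10 held out -- the cheap surrogate for
    # the expensive physics-based simulations that the abstract describes.
    w, *_ = np.linalg.lstsq(X[:24], E_seg[:24], rcond=None)
    rmse = float(np.sqrt(np.mean((X[24:] @ w - E_seg[24:]) ** 2)))
    ```

    The point of the workflow is that once the (petascale) supercell calculations have populated the training set, predictions for new solutes cost microseconds instead of supercomputer hours.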

  15. High-frequency analog integrated circuit design

    CERN Document Server

    1995-01-01

    To learn more about designing analog integrated circuits (ICs) at microwave frequencies using GaAs materials, turn to this text and reference. It addresses GaAs MESFET-based IC processing. Describes the newfound ability to apply silicon analog design techniques to reliable GaAs materials and devices which, until now, was only available through technical papers scattered throughout hundred of articles in dozens of professional journals.

  16. Leverage, the treatment relationship, and treatment participation.

    Science.gov (United States)

    McNiel, Dale E; Gormley, Barbara; Binder, Renée L

    2013-05-01

    OBJECTIVE Although many psychiatric patients experience various forms of pressure or leverage to participate in community treatment, the association between such experiences and treatment participation is controversial. This study evaluated the hypothesis that aspects of the treatment relationship, such as the working alliance, psychological reactance, and perceived coercion, could be important in understanding treatment adherence and satisfaction in a group of patients at risk of experiencing leverage. METHODS A total of 198 outpatients at two community mental health centers completed structured interviews including measures of the treatment relationship, treatment participation, experience of leverage, and clinical functioning. Regression analyses were used to assess associations between the treatment relationship and treatment adherence and satisfaction while concomitantly considering experiences of leverage, demographic characteristics, and clinical functioning. RESULTS Approximately four in ten participants reported experiencing some form of leverage to adhere to treatment during the previous six months, such as pressures related to the criminal justice system, money, housing, and outpatient commitment. Patients who perceived greater coercion to participate in treatment were more likely to report taking their medications as prescribed. Higher satisfaction with treatment was associated with lower perceived coercion, a better working alliance, and lower levels of psychological reactance. CONCLUSIONS Benefits in medication adherence associated with interventions that patients perceive as coercive may come at a cost of decreased satisfaction with treatment. Aspects of the treatment relationship hold promise for individualizing treatment planning in a way that addresses satisfaction as well as adherence.

  17. Highly optimized tolerance: robustness and design in complex systems

    Science.gov (United States)

    Carlson; Doyle

    2000-03-13

    Highly optimized tolerance (HOT) is a mechanism that relates evolving structure to power laws in interconnected systems. HOT systems arise where design and evolution create complex systems sharing common features, including (1) high efficiency, performance, and robustness to designed-for uncertainties, (2) hypersensitivity to design flaws and unanticipated perturbations, (3) nongeneric, specialized, structured configurations, and (4) power laws. We study the impact of incorporating increasing levels of design and find that even small amounts of design lead to HOT states in percolation.

  18. Design of a Highly Dependable Beamforming Chip

    NARCIS (Netherlands)

    Kerkhoff, Hans G.; Zhang, Xiao

    As CMOS process technology advances towards 32nm, SoC complexity continuously grows but its dependability significantly decreases. In this paper, a beamforming chip is designed using 64 reconfigurable Xentium tile processors. A functional dependability analysis for this application was carried out

  19. The high dynamic range pixel array detector (HDR-PAD): Concept and design

    Energy Technology Data Exchange (ETDEWEB)

    Shanks, Katherine S.; Philipp, Hugh T.; Weiss, Joel T.; Becker, Julian; Tate, Mark W. [Laboratory of Atomic and Solid State Physics, Cornell University, Ithaca, NY 14853 (United States); Gruner, Sol M., E-mail: smg26@cornell.edu [Laboratory of Atomic and Solid State Physics, Cornell University, Ithaca, NY 14853 (United States); Cornell High Energy Synchrotron Source (CHESS), Cornell University, Ithaca, NY 14853 (United States)

    2016-07-27

    Experiments at storage ring light sources as well as at next-generation light sources increasingly require detectors capable of high dynamic range operation, combining low-noise detection of single photons with large pixel well depth. XFEL sources in particular provide pulse intensities sufficiently high that a purely photon-counting approach is impractical. The High Dynamic Range Pixel Array Detector (HDR-PAD) project aims to provide a dynamic range extending from single-photon sensitivity to 10^6 photons/pixel in a single XFEL pulse while maintaining the ability to tolerate a sustained flux of 10^11 ph/s/pixel at a storage ring source. Achieving these goals involves the development of fast pixel front-end electronics as well as, in the XFEL case, leveraging the delayed charge collection due to plasma effects in the sensor. A first prototype of essential electronic components of the HDR-PAD readout ASIC, exploring different options for the pixel front-end, has been fabricated. Here, the HDR-PAD concept and preliminary design will be described.
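    The stated targets imply, as a back-of-envelope check, how many bits of dynamic range the front end must span and what sustained frame throughput it must support. The frame-capacity inference below is my assumption for illustration, not a published design figure:

    ```python
    import math

    full_well   = 1e6   # photons/pixel in a single XFEL pulse (abstract's target)
    noise_floor = 1     # single-photon sensitivity
    bits = math.log2(full_well / noise_floor)   # ~20 bits of dynamic range

    sustained = 1e11    # ph/s/pixel at a storage ring (abstract's target)
    frames_per_s = sustained / full_well        # >= 1e5 full-well integrations/s
                                                # if each frame saturates at 1e6
    ```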

  20. Leveraging e-learning in medical education.

    Science.gov (United States)

    Lewis, Kadriye O; Cidon, Michal J; Seto, Teresa L; Chen, Haiqin; Mahan, John D

    2014-07-01

    e-Learning has become a popular medium for delivering instruction in medical education. This innovative method of teaching offers unique learning opportunities for medical trainees. The purpose of this article is to define the present state of e-learning in pediatrics and how to best leverage e-learning for educational effectiveness and change in medical education. Through addressing under-examined and neglected areas in implementation strategies for e-learning, its usefulness in medical education can be expanded. This study used a systematic database review of published studies in the field of e-learning in pediatric training between 2003 and 2013. The search was conducted using educational and health databases: Scopus, ERIC, PubMed, and search engines Google and Hakia. A total of 72 reference articles were suitable for analysis. This review is supplemented by the use of "e-Learning Design Screening Questions" to define e-learning design and development in 10 randomly selected articles. Data analysis used template-based coding themes and counting of the categories using descriptive statistics. Our search for pediatric e-learning (using Google and Hakia) resulted in six well-defined resources designed to support the professional development of doctors, residents, and medical students. The majority of studies focused on instructional effectiveness and satisfaction. There were few studies about e-learning development, implementation, and needs assessments used to identify the institutional and learners' needs. Reviewed studies used various study designs, measurement tools, instructional time, and materials for e-learning interventions. e-Learning is a viable solution for medical educators faced with many challenges, including (1) promoting self-directed learning, (2) providing flexible learning opportunities that would offer continuous (24h/day/7 days a week) availability for learners, and (3) engaging learners through collaborative learning communities to gain

  1. On the Design of High Resolution Imaging Systems

    Science.gov (United States)

    Eckardt, A.; Reulke, R.

    2017-05-01

    The design of high-resolution systems is always a consideration of many parameters. Technological parameters of the imaging system, e.g. diameter of the imaging system, mass and power, as well as storage and data transfer, have a direct impact on spacecraft size and design. The paper describes the essential design parameters for the description of high-resolution systems.

  2. Energy Design Guidelines for High Performance Schools: Tropical Island Climates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2004-11-01

    Design guidelines outline high performance principles for the new or retrofit design of K-12 schools in tropical island climates. By incorporating energy improvements into construction or renovation plans, schools can reduce energy consumption and costs.

  3. High-resolution Modeling Assisted Design of Customized and Individualized Transcranial Direct Current Stimulation Protocols

    Science.gov (United States)

    Bikson, Marom; Rahman, Asif; Datta, Abhishek; Fregni, Felipe; Merabet, Lotfi

    2012-01-01

    Objectives Transcranial direct current stimulation (tDCS) is a neuromodulatory technique that delivers low-intensity currents facilitating or inhibiting spontaneous neuronal activity. tDCS is attractive since dose is readily adjustable by simply changing electrode number, position, size, shape, and current. In the recent past, computational models have been developed with increased precision with the goal to help customize tDCS dose. The aim of this review is to discuss the incorporation of high-resolution patient-specific computer modeling to guide and optimize tDCS. Methods In this review, we discuss the following topics: (i) The clinical motivation and rationale for models of transcranial stimulation is considered pivotal in order to leverage the flexibility of neuromodulation; (ii) The protocols and the workflow for developing high-resolution models; (iii) The technical challenges and limitations of interpreting modeling predictions, and (iv) Real cases merging modeling and clinical data illustrating the impact of computational models on the rational design of rehabilitative electrotherapy. Conclusions Though modeling for non-invasive brain stimulation is still in its development phase, it is predicted that with increased validation, dissemination, simplification and democratization of modeling tools, computational forward models of neuromodulation will become useful tools to guide the optimization of clinical electrotherapy. PMID:22780230

  4. Energy Design Guidelines for High Performance Schools: Tropical Island Climates

    Energy Technology Data Exchange (ETDEWEB)

    2004-11-01

    The Energy Design Guidelines for High Performance Schools--Tropical Island Climates provides school boards, administrators, and design staff with guidance to help them make informed decisions about energy and environmental issues important to school systems and communities. These design guidelines outline high performance principles for the new or retrofit design of your K-12 school in tropical island climates. By incorporating energy improvements into their construction or renovation plans, schools can significantly reduce energy consumption and costs.

  5. Leveraging Technology for Refugee Integration

    DEFF Research Database (Denmark)

    Abu Jarour, Safa'a; Krasnova, Hanna; Wenninger, Helena

    2016-01-01

    Spurred by military conflicts, the refugee crisis has swept Europe by surprise. With the challenge of integrating refugees into hosting societies comes the question of the role that ICTs could play in the ongoing integration efforts. Indeed, the unprecedented reliance of refugees on technology, especially smartphones, is an important distinction of the current crisis. ICT may support integrative efforts undertaken by local authorities and other stakeholders. Nonetheless, the question of how ICTs can be applied to support refugees and how detrimental effects for them and the hosting societies … of ICT use by refugees on an operational level, and how ICT systems should be designed and culturally adapted…

  6. High-gravity brewing utilizing factorial design

    Directory of Open Access Journals (Sweden)

    R. B. Almeida

    2000-06-01

    Full Text Available A number of factors can influence the behavior of yeast during fermentation. Some of these factors (initial wort concentration, initial pH, and percentage of corn syrup in the composition of the wort) were studied in order to determine their influence on the productivity of fermentation. Fermentations were carried out at 25 °C utilizing a 2³ factorial design of these factors. The results showed that the percentage of corn syrup had no influence on process productivity, whereas initial pH and especially initial wort concentration did. It can be concluded that using pH and initial wort concentration values higher than those utilized in this work (5.5 and 20 °P, respectively) will result in a higher productivity.
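    The 2³ arrangement the authors used can be reproduced generically: eight runs covering every low/high combination of the three factors, with each main effect read off as a difference of means. The productivity values below are fabricated to mirror the stated conclusion (wort concentration dominant, pH secondary, corn syrup negligible), not the paper's measurements:

    ```python
    from itertools import product
    import numpy as np

    # Coded levels (-1 = low, +1 = high) for the three factors: initial wort
    # concentration, initial pH, percentage of corn syrup -- 8 runs in total.
    design = np.array(list(product([-1, 1], repeat=3)))

    # Fabricated productivities consistent with the abstract's findings.
    y = 10.0 + 3.0 * design[:, 0] + 1.0 * design[:, 1] + 0.0 * design[:, 2]

    # Main effect of factor i = mean(y | level +1) - mean(y | level -1).
    effects = [float(y[design[:, i] == 1].mean() - y[design[:, i] == -1].mean())
               for i in range(3)]
    ```

    With these synthetic responses the estimated main effects come out as 6, 2, and 0, matching the qualitative ranking reported in the abstract.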

  7. Design and development of high voltage high power operational ...

    Indian Academy of Sciences (India)

    Normally power opamps can deliver current more than 50 mA and can operate on the supply voltage more than ±25 V. This paper gives the details of one of the power opamps developed to drive the Piezo Actuators for Active Vibration Control (AVC) of aircraft/aerospace structures. The designed power opamp will work on ...

  8. Leveraging Industry Relationships in the Academic Enterprise

    Science.gov (United States)

    Blackney, Kenneth S.; Papadakis, Constantine N.

    2004-01-01

    Drexel University has maintained a leadership role in academic technology by choosing technology initiatives wisely, timing them effectively and ensuring that they have the greatest value to the community at large while being affordable. Drexel has leveraged vendor relationships to help accomplish these initiatives, and has shared its expertise…

  9. 'Going to The Hague' as Coercive Leverage

    DEFF Research Database (Denmark)

    Schack, Marc

    2017-01-01

    (ICC) as leverage. This was the first time in history that an international actor used a possible recourse to the Court in such an explicitly coercive manner. Hence, this case enables us to conduct some preliminary analyses of this strategy’s effectiveness. Specifically, Palestine tried first to stop...

  10. The Complexity of Leveraging University Program Change

    Science.gov (United States)

    Crow, Gary M.; Arnold, Noelle Witherspoon; Reed, Cynthia J.; Shoho, Alan R.

    2012-01-01

    This article identifies four elements of complexity that influence how university educational leadership programs can leverage program change: faculty reward systems, faculty governance, institutional resources, and state-level influence on leadership preparation. Following the discussion of the elements of complexity, the article provides a…

  11. Topics in Finance Part III--Leverage

    Science.gov (United States)

    Laux, Judy

    2010-01-01

    This article investigates operating and financial leverage from the perspective of the financial manager, accenting the relationships to stockholder wealth maximization (SWM), risk and return, and potential agency problems. It also covers some of the pertinent literature related specifically to the implications of operating and financial risk and…

  12. High-activity liquid packaging design criteria

    Energy Technology Data Exchange (ETDEWEB)

    1994-05-01

    In recent studies, it has been acknowledged that there is an emerging need for packaging to transport high-activity liquid off the Hanford Site to support characterization and process development activities of liquid waste stored in underground tanks. These studies have dealt with specimen testing needs primarily at the Hanford Site; however, similar needs appear to be developing at other US Department of Energy (DOE) sites. The need to ship single and multiple specimens to offsite laboratories is anticipated because it is predicted that onsite laboratories will be overwhelmed by an increasing number and size (volume) of samples. Potentially, the specimen size could range from 250 mL to greater than 50 L. Presently, no certified Type-B packagings are available for transport of high-activity liquid radioactive specimens in sizes to support Site missions.

  13. Brittle Materials Design, High Temperature Gas Turbine

    Science.gov (United States)

    1981-03-01

    Modulus and Poisson’s Ratio were determined by sonic techniques; thermal expansion values were measured on a differential dilatometer and thermal...accumulation of potentially explosive gases. 4. Thermal conductivity of the nitriding atmosphere is important for production of high quality RBSN...of varying MgO content. Measurements were conducted on a differential dilatometer from room temperature up to 900°C, and are shown in Figure 3.2.3

  14. Patient experiences of autonomy and coercion while receiving legal leverage in forensic assertive community treatment.

    Science.gov (United States)

    Lamberti, J Steven; Russ, Ann; Cerulli, Catherine; Weisman, Robert L; Jacobowitz, David; Williams, Geoffrey C

    2014-01-01

    Legal leverage is broadly defined as the use of legal authority to promote treatment adherence. It is widely utilized within mental health courts, drug courts, mandated outpatient treatment programs, and other intervention strategies for individuals with mental illness or chemical dependency who have contact with the criminal justice system. Nonetheless, the ethics of using legal authority to promote treatment adherence remains a hotly debated issue within public and professional circles alike. While critics characterize legal leverage as a coercive form of social control that undermines personal autonomy, advocates contend that it supports autonomy because treatment strategies using legal leverage are designed to promote health and independence. Despite the controversy, there is little evidence regarding the impact of legal leverage on patient autonomy as experienced and expressed by patients themselves. This report presents findings from a qualitative study involving six focus groups with severely mentally ill outpatients who received legal leverage through three forensic assertive community treatment (FACT) programs in Northeastern, Midwestern, and West Coast cities. Findings are discussed in the context of the self-determination theory of human motivation, and practical implications for the use of legal leverage are considered.

  15. Design High Efficiency PWM Boost Converter for Wind Power Generation

    National Research Council Canada - National Science Library

    SULAIMAN R. Diary; MUHAMMAD A. Aree

    2010-01-01

    ...; it offers high efficiency performance and provides power management circuit designers with the ability to approach a broad range of design applications with flexible and easy-to-implement solutions...

  16. Heterogeneity in the Speed of Adjustment toward Target Leverage

    DEFF Research Database (Denmark)

    Elsas, Ralf; Florysiak, David

    2011-01-01

    Estimating the speed of adjustment toward target leverage using the standard partial adjustment model assumes that all firms within the sample adjust at the same (average) pace. Dynamic capital structure theory predicts heterogeneity in adjustment speed due to firm-specific adjustment costs… The speed of adjustment is the highest for firms with high default risk or expected bankruptcy costs, and if opportunity costs of deviating from a target are high. Our evidence is consistent with the general relevance of the trade-off theory.
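    The standard partial adjustment model the abstract refers to can be written L_t − L_{t−1} = λ(L* − L_{t−1}) + ε_t, where λ is the speed of adjustment toward the target L*. A minimal simulate-and-recover sketch, using toy data and a no-intercept OLS rather than the authors' estimator:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    lam_true, target = 0.4, 0.5        # true adjustment speed, target leverage

    # Simulate one firm's leverage ratio under the partial adjustment model:
    # L_t - L_{t-1} = lam * (target - L_{t-1}) + noise
    L = [0.1]
    for _ in range(200):
        L.append(L[-1] + lam_true * (target - L[-1]) + 0.01 * rng.normal())
    L = np.array(L)

    # Regress the observed change on the gap to target to recover lambda.
    gap, dL = target - L[:-1], np.diff(L)
    lam_hat = float(gap @ dL / (gap @ gap))
    ```

    Pooling many firms and estimating a single λ this way is exactly the homogeneity assumption the paper relaxes: if firms differ in adjustment costs, the pooled coefficient is only an average pace.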

  17. Leveraging Cognitive Technology Tools to Expand Opportunities for Critical Thinking in Elementary Mathematics

    Science.gov (United States)

    Suh, Jennifer

    2010-01-01

    The following study describes design research in an elementary school near the metropolitan D.C. area with a diverse student population. The goal of the project was to design tasks that leveraged technology and enhance the access to critical thinking in specific mathematical concepts: data analysis and probability. It highlights the opportunities…

  18. Leveraging Sociocultural Theory to Create a Mentorship Program for Doctoral Students

    Science.gov (United States)

    Crosslin, Matt; Wakefield, Jenny S.; Bennette, Phyllis; Black, James William, III

    2013-01-01

    This paper details a proposed doctoral student connections program that is based on sociocultural theory. It is designed to assist new students with starting their educational journey. This program is designed to leverage social interactions, peer mentorship, personal reflection, purposeful planning, and existing resources to assist students in…

  19. Conceptual design in a high-tech environment

    NARCIS (Netherlands)

    Bonnema, Gerrit Maarten; van Houten, Frederikus J.A.M.

    2003-01-01

    This article gives an overview of design process models before concentrating on its main subject: conceptual design, which has received less academic attention than the detail design phases. In high-tech environments, specific conditions apply; this article deals with these conditions. Some

  20. Design studies of a high-current radiofrequency quadrupole for ...

    Indian Academy of Sciences (India)

    Pramana – Journal of Physics, Volume 74, Issue 2. Design studies of a high-current radiofrequency quadrupole for accelerator-driven systems programme ... We have followed the conventional design technique with slight modifications and compared it with the equipartitioned (EP) type of design.

  1. Battery designs with high capacity anode materials and cathode materials

    Science.gov (United States)

    Masarapu, Charan; Anguchamy, Yogesh Kumar; Han, Yongbong; Deng, Haixia; Kumar, Sujeet; Lopez, Herman A.

    2017-10-03

    Improved high energy capacity designs for lithium ion batteries are described that take advantage of the properties of high specific capacity anode active compositions and high specific capacity cathode active compositions. In particular, specific electrode designs provide for achieving very high energy densities. Furthermore, the complex behavior of the active materials is used advantageously in a radical electrode balancing design that significantly reduced wasted electrode capacity in either electrode when cycling under realistic conditions of moderate to high discharge rates and/or over a reduced depth of discharge.
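The electrode-balancing idea in the abstract reduces to matching the areal capacities of the two electrodes. A rough arithmetic sketch of the anode-to-cathode (N/P) ratio follows; every capacity and loading value is hypothetical, chosen only to illustrate the calculation, not taken from the patent.

```python
# Illustrative electrode-balancing arithmetic:
# areal capacity (mAh/cm^2) = specific capacity (mAh/g) * loading (mg/cm^2) / 1000.
anode_specific = 1000.0   # mAh/g, e.g. a silicon-composite anode (assumed)
cathode_specific = 250.0  # mAh/g, e.g. a lithium-rich layered oxide (assumed)
anode_loading = 3.75      # mg/cm^2 active material (assumed)
cathode_loading = 14.0    # mg/cm^2 (assumed)

anode_areal = anode_specific * anode_loading / 1000     # mAh/cm^2
cathode_areal = cathode_specific * cathode_loading / 1000

np_ratio = anode_areal / cathode_areal
print(f"anode {anode_areal:.2f} mAh/cm^2, cathode {cathode_areal:.2f} mAh/cm^2")
print(f"N/P ratio = {np_ratio:.2f}")  # an N/P only slightly above 1 leaves little wasted anode capacity
```

Keeping the N/P ratio close to 1 is what "reduced wasted electrode capacity" amounts to; conventional designs carry a larger anode excess as margin.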

  2. Design and development of high voltage high power operational ...

    Indian Academy of Sciences (India)

    systems and the electron deflection systems. Power operational amplifiers have ... approach is the cost and availability of high-voltage devices in chip form. ... Amplifier with opamp input stage ... power opamp; using chip passive components and semiconductor bare dice minimizes the size while increasing the reliability.

  3. Planning and Leveraging Event Portfolios: Towards a Holistic Theory

    OpenAIRE

    Ziakas, Vassilios

    2014-01-01

    This conceptual paper seeks to advance the discourse on the leveraging and legacies of events by examining the planning, management, and leveraging of event portfolios. This examination shifts the common focus from analyzing single events towards multiple events and purposes that can enable cross-leveraging among different events in pursuit of attainment and magnification of specific ends. The following frameworks are proposed: (1) event portfolio planning and leveraging, and (2) analyz...

  4. The Disciplining Role of Leverage in Dutch Firms

    NARCIS (Netherlands)

    de Jong, A.

    2001-01-01

    In this study we investigate the role of leverage in disciplining overinvestment problems. We measure the relationships between leverage, Tobin's q and corporate governance characteristics for Dutch listed firms. Besides, our empirical analysis tests for determinants of leverage from tax and

  5. The effect of financial leverage on profitability of manufacturing ...

    African Journals Online (AJOL)

    For many years many studies have focused on the effect of financial leverage on firm performance and yet there has been no specific result that can be generalized regarding the extent of the relationship between financial leverage and firm performance. This study examines the effect of financial leverage on profitability of ...

  6. Leverage and Deepening Business Cycle Skewness

    DEFF Research Database (Denmark)

    Jensen, Henrik; Petrella, Ivan; Ravn, Søren Hove

    2017-01-01

    We document that the U.S. economy has been characterized by an increasingly negative business cycle asymmetry over the last three decades. This finding can be explained by the concurrent increase in the financial leverage of households and firms. To support this view, we devise and estimate a dynamic general equilibrium model with collateralized borrowing and occasionally binding credit constraints. Higher leverage increases the likelihood that constraints become slack in the face of expansionary shocks, while contractionary shocks are further amplified due to binding constraints. As a result, booms become progressively smoother and more prolonged than busts. We are therefore able to reconcile a more negatively skewed business cycle with the Great Moderation in cyclical volatility. Finally, in line with recent empirical evidence, financially-driven expansions lead to deeper contractions...

  7. Systematic Approach for Design of Broadband, High Efficiency, High Power RF Amplifiers

    National Research Council Canada - National Science Library

    Mohadeskasaei, Seyed Alireza; An, Jianwei; Chen, Yueyun; Li, Zhi; Abdullahi, Sani Umar; Sun, Tie

    2017-01-01

    ...‐AB RF amplifiers with high gain flatness. It is usually difficult to simultaneously achieve a high gain flatness and high efficiency in a broadband RF power amplifier, especially in a high power design...

  8. Towards high performing hospital enterprise systems: an empirical and literature based design framework

    Science.gov (United States)

    dos Santos Fradinho, Jorge Miguel

    2014-05-01

    Our understanding of enterprise systems (ES) is gradually evolving towards a sense of design which leverages multidisciplinary bodies of knowledge that may bolster hybrid research designs and together further the characterisation of ES operation and performance. This article aims to contribute towards ES design theory with its hospital enterprise systems design (HESD) framework, which reflects a rich multidisciplinary literature and two in-depth hospital empirical cases from the US and UK. In doing so it leverages systems thinking principles and traditionally disparate bodies of knowledge to bolster the theoretical evolution and foundation of ES. A total of seven core ES design elements are identified and characterised with 24 main categories and 53 subcategories. In addition, it builds on recent work which suggests that hospital enterprises are comprised of multiple internal ES configurations which may generate different levels of performance. Multiple sources of evidence were collected including electronic medical records, 54 recorded interviews, observation, and internal documents. Both in-depth cases compare and contrast higher and lower performing ES configurations. Following literal replication across in-depth cases, this article concludes that hospital performance can be improved through an enriched understanding of hospital ES design.

  9. High voltage pulsed cable design: a practical example

    Energy Technology Data Exchange (ETDEWEB)

    Kewish, R.W. Jr.; Boicourt, G.P.

    1979-01-01

    The design of an optimum high-voltage pulse cable is difficult because very little empirical data are available on performance in pulsed applications. This paper follows the design and testing of one high-voltage pulse cable, the 40/100 trigger cable. The design was based on an unproven theory, and the impressive outcome lends support to the theory. The theory is outlined, and it is shown that there exists an inductance which gives a cable of minimum size for a given maximum stress. Test results on cable manufactured according to the design are presented and compared with the test results on the cable that the 40/100 replaces.
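The minimum-size result the abstract alludes to echoes a classic coaxial-geometry fact: for a fixed outer radius b and voltage V, the peak dielectric stress E = V/(a·ln(b/a)) at the inner conductor is minimized when b/a = e, which in turn fixes a particular per-unit-length inductance. The sketch below uses assumed numbers, not the 40/100 cable's actual dimensions.

```python
# Find the inner radius that minimizes peak dielectric stress in a coax
# of fixed outer radius, and report the inductance that results.
import math

V = 100e3          # pulse voltage, 100 kV (assumed)
b = 10e-3          # outer conductor radius, 10 mm (assumed)

def peak_stress(a):
    """Peak field at the inner conductor of radius a: E = V / (a * ln(b/a))."""
    return V / (a * math.log(b / a))

# Scan inner radii and find the minimizer numerically.
best_a = min((b * x / 1000 for x in range(1, 1000)), key=peak_stress)
print(f"optimal b/a = {b / best_a:.3f} (theory: e = {math.e:.3f})")
print(f"minimum peak stress = {peak_stress(best_a) / 1e6:.1f} MV/m")

# Per-unit-length inductance at the optimum: L' = mu0/(2*pi) * ln(b/a) ~ 0.2 uH/m.
mu0 = 4e-7 * math.pi
print(f"L' = {mu0 / (2 * math.pi) * math.log(b / best_a) * 1e6:.3f} uH/m")
```

At the optimum, ln(b/a) = 1, so L' ≈ 0.2 µH/m regardless of scale, which is one way an "inductance that gives minimum size for a given maximum stress" can arise.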

  10. Philosophy of design for low cost and high reliability

    DEFF Research Database (Denmark)

    Jørgensen, John Leif; Liebe, Carl Christian

    1996-01-01

    The Ørsted Star Imager or Advanced Stellar Compass (ASC) includes the full functionality of a traditional star tracker plus autonomy, i.e. it is able to quickly and autonomously solve "the lost in space" attitude problem, and determine its attitude with high precision. The design also provides ... The design and process are described, starting with the system specifications and their derived design drivers, through the design process and its iterations, including the specification, design and capability of the prototyping facility, and ending with the final system design. The rationale for IC-level selection and system flexibility are addressed. KEY WORDS: Micro satellite, stellar compass, star tracker, attitude determination.

  11. Mechanical design of a high field common coil magnet

    CERN Document Server

    Caspi, S; Dietderich, D R; Gourlay, S A; Gupta, R; McInturff, A; Millos, G; Scanlan, R M

    1999-01-01

    A common coil design for high field 2-in-1 accelerator magnets has been previously presented as a "conductor-friendly" option for high field magnets applicable for a Very Large Hadron Collider. This paper presents the mechanical design for a 14 tesla 2-in-1 dipole based on the common coil design approach. The magnet will use a high current density Nb/sub 3/Sn conductor. The design addresses mechanical issues particular to the common coil geometry: horizontal support against coil edges, vertical preload on coil faces, end loading and support, and coil stresses and strains. The magnet is the second in a series of racetrack coil magnets that will provide experimental verification of the common coil design approach. (9 refs).

  12. Mechanical design of a high field common coil magnet

    Energy Technology Data Exchange (ETDEWEB)

    Caspi, S.; Chow, K.; Dietderich, D.; Gourlay, S.; Gupta, R.; McInturff, A.; Millos, G.; Scanlan, R.

    1999-03-18

    A common coil design for high field 2-in-1 accelerator magnets has been previously presented as a 'conductor-friendly' option for high field magnets applicable for a Very Large Hadron Collider. This paper presents the mechanical design for a 14 tesla 2-in-1 dipole based on the common coil design approach. The magnet will use a high current density Nb{sub 3}Sn conductor. The design addresses mechanical issues particular to the common coil geometry: horizontal support against coil edges, vertical preload on coil faces, end loading and support, and coil stresses and strains. The magnet is the second in a series of racetrack coil magnets that will provide experimental verification of the common coil design approach.

  13. High-Temperature Gas-Cooled Test Reactor Point Design

    Energy Technology Data Exchange (ETDEWEB)

    Sterbentz, James William [Idaho National Laboratory; Bayless, Paul David [Idaho National Laboratory; Nelson, Lee Orville [Idaho National Laboratory; Gougar, Hans David [Idaho National Laboratory; Kinsey, James Carl [Idaho National Laboratory; Strydom, Gerhard [Idaho National Laboratory; Kumar, Akansha [Idaho National Laboratory

    2016-04-01

    A point design has been developed for a 200 MW high-temperature gas-cooled test reactor. The point design concept uses standard prismatic blocks and 15.5% enriched UCO fuel. Reactor physics and thermal-hydraulics simulations have been performed to characterize the capabilities of the design. In addition to the technical data, overviews are provided on the technological readiness level, licensing approach and costs.

  14. Highly Directive Reflect Array Antenna Design for Wireless Power Transfer

    Science.gov (United States)

    2017-04-14

    AFRL-AFOSR-JP-TR-2017-0033: Highly Directive Reflect Array Antenna Design for Wireless Power Transfer. Siddhartha Prakash Duttagupta, INDIAN INSTITUTE...; grant number FA2386-14-1-4076. Journal publications (under review): 1. A Pattanayak and SP Duttagupta, "A Novel Broadband Reflect-array Design with sub-wavelength ring resonators..."

  15. Small, high pressure ratio compressor: Aerodynamic and mechanical design

    Science.gov (United States)

    Bryce, C. A.; Erwin, J. R.; Perrone, G. L.; Nelson, E. L.; Tu, R. K.; Bosco, A.

    1973-01-01

    The Small, High-Pressure-Ratio Compressor Program was directed toward the analysis, design, and fabrication of a centrifugal compressor providing a 6:1 pressure ratio and an airflow rate of 2.0 pounds per second. The program consists of preliminary design, detailed aerodynamic design, mechanical design, and mechanical acceptance tests. The preliminary design evaluated radial- and backward-curved blades, tandem-bladed impellers, impeller- and diffuser-passage boundary-layer control, and vane, pipe, and multiple-stage diffusers. Based on this evaluation, a configuration was selected for detailed aerodynamic and mechanical design. A mechanical acceptance test was performed to demonstrate that the mechanical design objectives of the research package were met.
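For context on what a 6:1 pressure ratio implies thermodynamically, a back-of-envelope stage calculation using the isentropic relation T2s/T1 = PR^((γ−1)/γ) follows. The stage efficiency is an assumed value for illustration, not a figure from the report.

```python
# Rough stage arithmetic for a 6:1 centrifugal compressor at standard inlet conditions.
gamma = 1.4
T1 = 288.15          # inlet temperature, K (sea-level standard)
PR = 6.0             # pressure ratio from the program goal
eta = 0.78           # assumed stage isentropic efficiency

T2s = T1 * PR ** ((gamma - 1) / gamma)   # ideal (isentropic) exit temperature
T2 = T1 + (T2s - T1) / eta               # actual exit temperature

cp = 1005.0          # J/(kg K) for air
mdot = 2.0 * 0.4536  # 2.0 lbm/s converted to kg/s
power = mdot * cp * (T2 - T1)            # shaft power absorbed by the flow

print(f"ideal exit temp {T2s:.0f} K, actual {T2:.0f} K")
print(f"shaft power ~ {power / 1e3:.0f} kW")
```

The ~250 K temperature rise in a single centrifugal stage is what makes such designs demanding on impeller tip speed and materials.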

  16. High Powered Rocketry: Design, Construction, and Launching Experience and Analysis

    Science.gov (United States)

    Paulson, Pryce; Curtis, Jarret; Bartel, Evan; Cyr, Waycen Owens; Lamsal, Chiranjivi

    2018-01-01

    In this study, the nuts and bolts of designing and building a high powered rocket have been presented. A computer simulation program called RockSim was used to design the rocket. Simulation results are consistent with time variations of altitude, velocity, and acceleration obtained in the actual flight. The actual drag coefficient was determined…

  17. High-Speed Low Power Design in CMOS

    DEFF Research Database (Denmark)

    Ghani, Arfan; Usmani, S. H.; Stassen, Flemming

    2004-01-01

    Static CMOS design displays benefits such as low power consumption, dominated by dynamic power consumption. In contrast, MOS Current Mode Logic (MCML) displays static rather than dynamic power consumption. High-speed low-power design is one of the many application areas in VLSI that require...

  18. Design methodology to enhance high impedance surfaces performances

    Directory of Open Access Journals (Sweden)

    M. Grelier

    2014-04-01

    Full Text Available A methodology is introduced for designing wideband, compact and ultra-thin high impedance surfaces (HIS. A parametric study is carried out to examine the effect of the periodicity on the electromagnetic properties of an HIS. This approach allows designers to reach the best trade-off for HIS performances.

  19. Multidisciplinary Design Optimization for High Reliability and Robustness

    National Research Council Canada - National Science Library

    Grandhi, Ramana

    2005-01-01

    .... Over the last 3 years Wright State University has been applying analysis tools to predict the behavior of critical disciplines to produce highly robust torpedo designs using robust multi-disciplinary...

  20. Design of microchannels for cryostabilization of high temperature superconducting magnets

    Energy Technology Data Exchange (ETDEWEB)

    Cha, Y.S.; Hull, J.R.; Niemann, R.C.

    1993-10-01

    Microchannel cooling using subcooled liquid nitrogen is proposed to cryogenically stabilize high-temperature superconducting magnets. Various design constraints and parameters are identified and summarized. A graphical method is proposed for the design of microchannel systems. Because a large number of parameters are involved in the design of a microchannel system, this graphical method helps reduce the work needed to achieve an optimum design for a specific application. The proposed graphical method is illustrated by three examples. The results show that a design window may appear for a given application; any point within this window is an acceptable design. Another advantage of the graphical method is that, by selecting a design point, the design margin against the various design constraints can be easily identified. Any two of the design variables can be selected as the independent variables; the choice depends on the specific application and, to a certain extent, on individual preference. The three examples revealed that, for high current density applications, the most restrictive constraints are the coolant temperature rise and the fin tip temperatures, provided that a moderate pressure drop can be tolerated.
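A toy version of the design-window idea can be written as a constraint scan: sweep two design variables, apply every constraint, and keep the feasible points. The placeholder formulas and limits below are assumptions chosen purely to illustrate the method, not the paper's correlations.

```python
# Scan channel height and coolant velocity, keeping (h, v) pairs that satisfy
# both a temperature-rise limit and a pressure-drop limit ("the design window").
import numpy as np

q = 2.0e4                 # heat flux to remove, W/m^2 (assumed)
rho, cp = 807.0, 2042.0   # liquid nitrogen density and specific heat (approximate)
L = 0.5                   # channel length, m (assumed)

window = []
for h in np.linspace(0.2e-3, 2e-3, 19):        # channel height, m
    for v in np.linspace(0.1, 2.0, 20):        # coolant velocity, m/s
        dT = q * L / (rho * cp * v * h)        # simple energy-balance temperature rise
        dP = 0.03 * (L / h) * 0.5 * rho * v**2 # crude friction pressure drop
        if dT < 5.0 and dP < 0.5e5:            # assumed limits: 5 K rise, 0.5 bar drop
            window.append((h, v))

print(f"{len(window)} of 380 (h, v) pairs fall inside the design window")
```

Plotting the feasible set in the (h, v) plane reproduces the "window" the paper describes, with each constraint forming one boundary.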

  1. THE EFFECT OF LEVERAGE, PROFITABILITY AND SIZE ON CORPORATE SOCIAL RESPONSIBILITY DISCLOSURE BY COMPANIES ON THE INDONESIA STOCK EXCHANGE

    Directory of Open Access Journals (Sweden)

    syailend eka saputra

    2016-10-01

    Full Text Available This study aims to examine the effect of leverage, profitability and size on the extent of corporate social responsibility disclosure. The sample consists of high-profile companies listed on the Indonesia Stock Exchange (Bursa Efek Indonesia) over the period 2010 to 2014. Leverage is measured by the debt-to-equity ratio, profitability by return on assets, and size by the natural logarithm of total assets. Hypotheses are tested using panel regression estimated in Eviews. The results show that leverage and profitability have a significant effect on corporate social responsibility disclosure among high-profile companies on the Indonesia Stock Exchange, while size has no significant effect.

  2. Managing patient populations in primary care: points of leverage.

    Science.gov (United States)

    Eidus, Robert; Pace, Wilson D; Staton, Elizabeth W

    2012-01-01

    Common "quality" metrics may represent the quality of care for large populations; however, they do not adequately represent quality in individual primary care settings, especially as stand-alone indices. Using discrete threshold values to measure quality in primary care may result in physicians focusing on managing patients by the numbers at the expense of making individualized and nuanced clinical decisions. Current performance measures may be misapplied as proxies for both cost savings and quality. We posit that developing and focusing measurement on high-leverage activities will yield better clinical outcomes and potentially lower cost. As a starting point for further work in this area, we suggest the development of metrics that track identification and management of depression; management of transitions of care; care coordination; team-based care; identification and support of socially frail/isolated individuals; pharmacologic management, including optimizing medication and dealing with adherence issues; and establishment of a therapeutic environment. These processes, or others like them, will require infrastructure that may be costly and time-consuming, and measuring these processes will require thought and effort. Nevertheless, we believe developing metrics based on high-leverage activities will yield greater clinical and economic returns than relying on the metrics currently in place.

  3. Integrated design and manufacturing for the high speed civil transport

    Science.gov (United States)

    Lee, Jae Moon; Gupta, Anurag; Mueller, Craig; Morrisette, Monica; Dec, John; Brewer, Jason; Donofrio, Kevin; Sturisky, Hilton; Smick, Doug; An, Meng Lin

    1994-01-01

    In June 1992, the School of Aerospace Engineering at Georgia Tech was awarded a three year NASA University Space Research Association (USRA) Advanced Design Program (ADP) grant to address issues associated with the Integrated Design and Manufacturing of High Speed Civil Transport (HSCT) configurations in its graduate Aerospace Systems Design courses. This report provides an overview of the on-going Georgia Tech initiative to address these design/manufacturing issues during the preliminary design phases of an HSCT concept. The new design methodology presented here has been incorporated in the graduate aerospace design curriculum and is based on the concept of Integrated Product and Process Development (IPPD). The selection of the HSCT as a pilot project was motivated by its potential global transportation payoffs; its technological, environmental, and economic challenges; and its impact on U.S. global competitiveness. This pilot project was the focus of each of the five design courses that form the graduate level aerospace systems design curriculum. This year's main objective was the development of a systematic approach to preliminary design and optimization and its implementation to an HSCT wing/propulsion configuration. The new methodology, based on the Taguchi Parameter Design Optimization Method (PDOM), was established and was used to carry out a parametric study where various feasible alternative configurations were evaluated. The comparison criterion selected for this evaluation was the economic impact of this aircraft, measured in terms of average yield per revenue passenger mile ($/RPM).

  4. Novel design for a high power superconducting delay line

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y.J.; Caporaso, G.J.

    1997-05-08

    Potential designs for a high power superconducting delay line of approximately 10 µs duration are described. The transmitted signal should have low dispersion and little attenuation to recapture the original signal. Such demands cannot be met using conventional metal conductors. This paper outlines a proposal for a new transmission line design using low temperature superconducting material which meets system specifications. The 25 Ω line is designed to carry pulsed signals with an approximate rise time of 8 nsec and a maximum voltage of 25 kV. Predicted electrical design and performance of the line is presented.

  5. Designs for a high power superconducting delay line

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y.J.; Caporaso, G.

    1997-06-26

    Potential designs for a high power superconducting delay line of approximately 10 µs duration are described. The transmitted signal should have low dispersion and little attenuation to recapture the original signal. Such demands cannot be met using conventional metal conductors. This paper outlines a proposal for a new transmission line design using low temperature superconducting material which meets system specifications. The 25 Ω line is designed to carry pulsed signals with an approximate rise time of 8 nsec and a maximum voltage magnitude of 25 kV. Predicted electrical design and performance of the line will be presented.
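To see why the specification is demanding, note that a one-way delay t requires a physical line length l = t·c/√εr. For the quoted 10 µs delay and an assumed polyethylene-like dielectric, that is roughly two kilometers of cable, over which ordinary conductors would attenuate and disperse the pulse:

```python
# Required cable length for a given delay: t = l * sqrt(eps_r) / c.
import math

c = 299_792_458.0      # speed of light, m/s
eps_r = 2.3            # assumed polyethylene-like dielectric constant
t_delay = 10e-6        # 10 microseconds, per the abstract

length = t_delay * c / math.sqrt(eps_r)
print(f"required length ~ {length / 1e3:.2f} km")
```

A higher-permittivity dielectric shortens the line only as 1/√εr, which is why the attenuation problem motivates the superconducting conductor rather than a shorter cable.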

  6. Investigation on Beam Dynamics Design of High-Intensity RFQs

    CERN Document Server

    Zhang, C

    2004-01-01

    Recently, various potential uses of high-intensity beams have brought new opportunities as well as challenges to RFQ accelerator research because of the new problems arising from strong space-charge effects. Unconventional concepts of beam dynamics design, which surround the choice of basic parameters and the optimization of the main dynamics parameters' variation along the machine, are illustrated by the design of the Peking University (PKU) Deuteron RFQ. RFQBAT, an efficient tool based on the LANL RFQ design codes for beam dynamics simulation and analysis, is introduced. Some quality criteria are also presented for evaluating design results.

  7. Design of high-bit-rate coherent communication links

    Science.gov (United States)

    Konyshev, V. A.; Leonov, A. V.; Nanii, O. E.; Novikov, A. G.; Treshchikov, V. N.; Ubaydullaev, R. R.

    2016-12-01

    We report an analysis of the problems encountered in the design of modern high-bit-rate coherent communication links. A phenomenological communication link model is described, which is suitable for solving applied tasks of the network design with nonlinear effects taken into account. We propose an engineering approach to the design that is based on the use of fundamental nonlinearity coefficients calculated in advance for the experimental configurations of communication links. An experimental method is presented for calculating the nonlinearity coefficient of communication links. It is shown that the proposed approach allows one to successfully meet the challenges in designing communication networks.
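The role of a pre-measured nonlinearity coefficient can be sketched with a simple noise budget of the kind used in such engineering approaches (all coefficient values below are assumed for illustration): if nonlinear interference grows as η·P³, the effective SNR P/(P_ase + η·P³) peaks at a closed-form optimum launch power.

```python
# Launch-power optimization with a fixed, pre-measured nonlinearity coefficient.
import numpy as np

p_ase = 0.02   # accumulated ASE noise power, mW (assumed)
eta = 0.5      # fiber nonlinearity coefficient, 1/mW^2 (assumed; measured in practice)

p = np.linspace(0.01, 1.0, 1000)            # candidate launch powers, mW
snr = p / (p_ase + eta * p**3)              # effective SNR with cubic nonlinear noise
p_best = p[np.argmax(snr)]

# Setting d(SNR)/dP = 0 gives P_opt**3 = p_ase / (2 * eta).
p_opt = (p_ase / (2 * eta)) ** (1 / 3)
print(f"numerical optimum {p_best:.3f} mW vs analytic {p_opt:.3f} mW")
```

At this optimum the nonlinear noise is exactly half the linear (ASE) noise, a well-known property of cubic nonlinear-interference models.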

  8. Secondary Containment Design for a High Speed Centrifuge

    Energy Technology Data Exchange (ETDEWEB)

    Snyder, K.W.

    1999-03-01

    Secondary containment for high speed rotating machinery, such as a centrifuge, is extremely important for operating personnel safety. Containment techniques can be very costly, ungainly and time consuming to construct. A novel containment concept is introduced which is fabricated out of modular sections of polycarbonate glazed into a Unistrut metal frame. A containment study for a high speed centrifuge is performed which includes the development of parameters for secondary containment design. The Unistrut/polycarbonate shield framing concept is presented including design details and proof testing procedures. The economical fabrication and modularity of the design indicates a usefulness for this shielding system in a wide variety of containment scenarios.

  9. High Altitude Venus Operations Concept Trajectory Design, Modeling and Simulation

    Science.gov (United States)

    Lugo, Rafael A.; Ozoroski, Thomas A.; Van Norman, John W.; Arney, Dale C.; Dec, John A.; Jones, Christopher A.; Zumwalt, Carlie H.

    2015-01-01

    A trajectory design and analysis that describes aerocapture, entry, descent, and inflation of manned and unmanned High Altitude Venus Operation Concept (HAVOC) lighter-than-air missions is presented. Mission motivation, concept of operations, and notional entry vehicle designs are presented. The initial trajectory design space is analyzed and discussed before investigating specific trajectories that are deemed representative of a feasible Venus mission. Under the project assumptions, while the high-mass crewed mission will require further research into aerodynamic decelerator technology, it was determined that the unmanned robotic mission is feasible using current technology.

  10. Leveraging Social Networks to Support Reproductive Health and Economic Wellbeing among Guatemalan Maya Women

    Science.gov (United States)

    Prescott, Alexandra S.; Luippold-Roge, Genevieve P.; Gurman, Tilly A.

    2016-01-01

    Objective: Maya women in Guatemala are disproportionately affected by poverty and negative reproductive health outcomes. Although social networks are valued in many Indigenous cultures, few studies have explored whether health education programmes can leverage these networks to improve reproductive health and economic wellbeing. Design: This…

  11. 17 CFR 31.23 - Limited right to rescind first leverage contract.

    Science.gov (United States)

    2010-04-01

    ... leverage contract. 31.23 Section 31.23 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION LEVERAGE TRANSACTIONS § 31.23 Limited right to rescind first leverage contract. (a) A leverage customer who is entering a leverage contract or contracts for the first time with a particular leverage...

  12. Preliminary design of nine high speed civil transports

    Science.gov (United States)

    Sandlin, Doral; Vantriet, Robert; Soban, Dani; Hoang, TY

    1992-01-01

    Sixty senior design students at Cal Poly, SLO have completed a year-long project to design the next generation of High Speed Civil Transports (HSCT). The design process was divided up into three distinct phases. The first third of the project was devoted entirely to research into the special problems associated with an HSCT. These included economic viability, airport compatibility, high speed aerodynamics, sonic boom minimization, environmental impact, and structures and materials. The result of this research was the development of nine separate Requests for Proposal (RFP) that outlined reasonable yet challenging design criteria for the aircraft. All were designed to be technically feasible in the year 2015. The next phase of the project divided the sixty students into nine design groups. Each group, with its own RFP, completed a Class 1 preliminary design of an HSCT. The nine configurations varied from conventional double deltas to variable geometry wings to a pivoting oblique wing design. The final phase of the project included a more detailed Class 2 sizing as well as performance and stability and control analysis. Cal Poly, San Luis Obispo presents nine unique solutions to the same problem: that of designing an economically viable, environmentally acceptable, safe and comfortable supersonic transport.

  13. Digital Analog Design: A Highly-Efficient Method to Design Analog Circuits

    Science.gov (United States)

    2017-03-01

    Digital Analog Design: A Highly-Efficient Method to Design Analog Circuits. Mark Horowitz and Byong Lim, Department of Electrical Engineering, Stanford University, Stanford, CA 94305. Abstract: The past 30 years have seen an enormous growth in the power and sophistication of digital design tools, while progress in analog tools has been much more modest. Digital tools use many abstractions to allow them to validate

  14. High-Fidelity Multidisciplinary Design Optimization of Aircraft Configurations

    Science.gov (United States)

    Martins, Joaquim R. R. A.; Kenway, Gaetan K. W.; Burdette, David; Jonsson, Eirikur; Kennedy, Graeme J.

    2017-01-01

    To evaluate new airframe technologies we need design tools based on high-fidelity models that consider multidisciplinary interactions early in the design process. The overarching goal of this NRA is to develop tools that enable high-fidelity multidisciplinary design optimization of aircraft configurations, and to apply these tools to the design of high aspect ratio flexible wings. We develop a geometry engine that is capable of quickly generating conventional and unconventional aircraft configurations including the internal structure. This geometry engine features adjoint derivative computation for efficient gradient-based optimization. We also added overset capability to a computational fluid dynamics solver, complete with an adjoint implementation and semiautomatic mesh generation. We also developed an approach to constraining buffet and started the development of an approach for constraining flutter. On the applications side, we developed a new common high-fidelity model for aeroelastic studies of high aspect ratio wings. We performed optimal design trade-offs between fuel burn and aircraft weight for metal, conventional composite, and carbon nanotube composite wings. We also assessed a continuous morphing trailing edge technology applied to high aspect ratio wings. This research resulted in the publication of 26 manuscripts so far, and the developed methodologies were used in two other NRAs.

  15. Designing high-Performance layered thermoelectric materials through orbital engineering

    DEFF Research Database (Denmark)

    Zhang, Jiawei; Song, Lirong; Madsen, Georg K. H.

    2016-01-01

    Thermoelectric technology, which possesses potential application in recycling industrial waste heat as energy, calls for novel high-performance materials. The systematic exploration of novel thermoelectric materials with excellent electronic transport properties is severely hindered by limited insight into the underlying bonding orbitals of atomic structures. Here we propose a simple yet successful strategy to discover and design high-performance layered thermoelectric materials through minimizing the crystal field splitting energy of orbitals to realize high orbital degeneracy. The approach ... earth-abundant elements. Moreover, the approach can be extended to several other non-cubic materials, thereby substantially accelerating the screening and design of new thermoelectric materials...

  16. On-chip High-Voltage Generator Design

    CERN Document Server

    Tanzawa, Toru

    2013-01-01

    This book describes high-voltage generator design with switched-capacitor multiplier techniques. The author provides various design techniques for switched-capacitor on-chip high-voltage generators, including charge pump circuits, regulators, level shifters, references, and oscillators. Readers will see these techniques applied to system design in order to address the challenge of how the on-chip high-voltage generator is designed for Flash memories, LCD drivers, and other semiconductor devices to optimize the entire circuit area and power efficiency with a low voltage supply, while minimizing the cost. The book: shows readers how to design charge pump circuits with lower voltage operation, higher power efficiency, and smaller circuit area; describes comprehensive circuit and system design of on-chip high-voltage generators; and covers all the component circuit blocks, including charge pumps, pump regulators, level shifters, oscillators, and references.

  17. CUDA Application Design and Development

    CERN Document Server

    Farber, Rob

    2011-01-01

    As the computer industry retools to leverage massively parallel graphics processing units (GPUs), this book is designed to meet the needs of working software developers who need to understand GPU programming with CUDA and increase efficiency in their projects. CUDA Application Design and Development starts with an introduction to parallel computing concepts for readers with no previous parallel experience, and focuses on issues of immediate importance to working software developers: achieving high performance, maintaining competitiveness, analyzing CUDA benefits versus costs, and determining …

  18. Financial Leverage and Corporate Performance: Does Financial Crisis Owe an Explanation?

    Directory of Open Access Journals (Sweden)

    Syed Jawad Hussain Shahzad

    2015-04-01

    Full Text Available The objective of this study is to investigate the impact of financial leverage on the corporate financial performance of Pakistan's textile sector from 1999-2012 using panel data. The leverage-performance relationship is examined with a special focus on the Global Financial Crisis of 2007-2008. Both accounting-based (Return on Assets, ROA) and market-based (Tobin's Q) measures of corporate financial performance are used. Regression analysis is performed with and without inclusion of a financial crisis dummy. Total Debt to Total Assets (TDTA), Long Term Debt to Total Assets (LDTA), Short Term Debt to Total Assets (SDTA) and Debt to Equity (DE) ratios are used as proxies for financial leverage, whereas firm size and firm efficiency are used as control variables. The results indicate that financial leverage has a negative impact on corporate performance when measured with ROA, whereas in the case of Tobin's Q the SDTA coefficient is positive. It can be concluded that, since the cost of borrowing is high in Pakistan and debt capital markets are less developed, firms are forced to resort to banks as their source of debt finance and thus have to repay large amounts of principal and interest, which takes a heavy toll on their financial health. In addition, the financial crisis was found to have a negative impact on corporate performance and to affect the leverage-performance relationship.
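    To make the four leverage proxies above concrete, the sketch below computes TDTA, LDTA, SDTA and DE from hypothetical balance-sheet figures; the numbers are illustrative, not drawn from the study's dataset:

```python
def leverage_ratios(total_debt, long_term_debt, short_term_debt,
                    total_assets, equity):
    # The four leverage proxies used in the study, from balance-sheet figures.
    return {
        "TDTA": total_debt / total_assets,      # Total Debt to Total Assets
        "LDTA": long_term_debt / total_assets,  # Long Term Debt to Total Assets
        "SDTA": short_term_debt / total_assets, # Short Term Debt to Total Assets
        "DE": total_debt / equity,              # Debt to Equity
    }

# A firm with 60 of debt (40 long-term, 20 short-term), 100 of assets, 40 of equity:
ratios = leverage_ratios(60, 40, 20, 100, 40)
print(ratios)  # {'TDTA': 0.6, 'LDTA': 0.4, 'SDTA': 0.2, 'DE': 1.5}
```

    A panel regression of ROA or Tobin's Q on these ratios would then use such firm-year values as regressors.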

  19. Leverage principle of retardation signal in titration of double protein via chip moving reaction boundary electrophoresis.

    Science.gov (United States)

    Zhang, Liu-Xia; Cao, Yi-Ren; Xiao, Hua; Liu, Xiao-Ping; Liu, Shao-Rong; Meng, Qing-Hua; Fan, Liu-Yin; Cao, Cheng-Xi

    2016-03-15

    In the present work we present a simple, rapid and quantitative analytical method for the detection of different proteins present in biological samples. For this, we propose the model of titration of double protein (TDP) and its leverage theory, based on the retardation signal of chip moving reaction boundary electrophoresis (MRBE). The leverage principle states that the product of the first protein content and its absolute retardation signal is equal to the product of the second protein content and its absolute retardation signal. To validate the model, we first demonstrated the leverage principle theoretically. Relevant experiments were then conducted on the TDP-MRBE chip. The results revealed that (i) the leverage principle of retardation signal held within the TDP of two pure proteins, and (ii) a lever also existed within two complex protein samples, demonstrating the validity of the TDP model and leverage theory in the MRBE chip. It was also shown that the proposed technique provides a rapid and simple quantitative analysis of two protein samples in a mixture. Finally, we successfully applied the developed technique to the quantification of soymilk in adulterated infant formula. TDP-MRBE opens up a new window for detecting the adulteration ratio of a low-quality food (milk) blended into a high-quality one. Copyright © 2015 Elsevier B.V. All rights reserved.
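    The lever rule above (first content times its absolute signal equals second content times its absolute signal) fixes the two mass fractions from the measured signals alone once they are normalised to sum to one. A minimal sketch, with hypothetical signal values:

```python
def mixture_fractions(signal_1, signal_2):
    # Lever rule: m1 * |s1| = m2 * |s2|, with m1 + m2 = 1.
    # signal_1, signal_2 are the absolute retardation signals of the two
    # proteins (hypothetical measured values); returns (m1, m2).
    s1, s2 = abs(signal_1), abs(signal_2)
    return s2 / (s1 + s2), s1 / (s1 + s2)

# If the second protein's signal is three times the first's, the lever
# balances at contents 0.75 and 0.25 (0.75 * 1.0 == 0.25 * 3.0):
print(mixture_fractions(1.0, 3.0))  # (0.75, 0.25)
```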

  20. Analysis and design technology for high-speed aircraft structures

    Science.gov (United States)

    Starnes, James H., Jr.; Camarda, Charles J.

    1992-01-01

    Recent high-speed aircraft structures research activities at NASA Langley Research Center are described. The following topics are covered: the development of analytical and numerical solutions to global and local thermal and structural problems, experimental verification of analysis methods, identification of failure mechanisms, and the incorporation of analysis methods into design and optimization strategies. The paper describes recent NASA Langley advances in analysis and design methods, structural and thermal concepts, and test methods.

  1. Machine learning assisted design of highly active peptides for drug discovery.

    Directory of Open Access Journals (Sweden)

    Sébastien Giguère

    2015-04-01

    Full Text Available The discovery of peptides possessing high biological activity is very challenging due to the enormous diversity for which only a minority have the desired properties. To lower cost and reduce the time to obtain promising peptides, machine learning approaches can greatly assist in the process and even partly replace expensive laboratory experiments by learning a predictor with existing data or with a smaller amount of data generation. Unfortunately, once the model is learned, selecting peptides having the greatest predicted bioactivity often requires a prohibitive amount of computational time. For this combinatorial problem, heuristics and stochastic optimization methods are not guaranteed to find adequate solutions. We focused on recent advances in kernel methods and machine learning to learn a predictive model with proven success. For this type of model, we propose an efficient algorithm based on graph theory, that is guaranteed to find the peptides for which the model predicts maximal bioactivity. We also present a second algorithm capable of sorting the peptides of maximal bioactivity. Extensive analyses demonstrate how these algorithms can be part of an iterative combinatorial chemistry procedure to speed up the discovery and the validation of peptide leads. Moreover, the proposed approach does not require the use of known ligands for the target protein since it can leverage recent multi-target machine learning predictors where ligands for similar targets can serve as initial training data. Finally, we validated the proposed approach in vitro with the discovery of new cationic antimicrobial peptides. Source code freely available at http://graal.ift.ulaval.ca/peptide-design/.

  2. Machine learning assisted design of highly active peptides for drug discovery.

    Science.gov (United States)

    Giguère, Sébastien; Laviolette, François; Marchand, Mario; Tremblay, Denise; Moineau, Sylvain; Liang, Xinxia; Biron, Éric; Corbeil, Jacques

    2015-04-01

    The discovery of peptides possessing high biological activity is very challenging due to the enormous diversity for which only a minority have the desired properties. To lower cost and reduce the time to obtain promising peptides, machine learning approaches can greatly assist in the process and even partly replace expensive laboratory experiments by learning a predictor with existing data or with a smaller amount of data generation. Unfortunately, once the model is learned, selecting peptides having the greatest predicted bioactivity often requires a prohibitive amount of computational time. For this combinatorial problem, heuristics and stochastic optimization methods are not guaranteed to find adequate solutions. We focused on recent advances in kernel methods and machine learning to learn a predictive model with proven success. For this type of model, we propose an efficient algorithm based on graph theory, that is guaranteed to find the peptides for which the model predicts maximal bioactivity. We also present a second algorithm capable of sorting the peptides of maximal bioactivity. Extensive analyses demonstrate how these algorithms can be part of an iterative combinatorial chemistry procedure to speed up the discovery and the validation of peptide leads. Moreover, the proposed approach does not require the use of known ligands for the target protein since it can leverage recent multi-target machine learning predictors where ligands for similar targets can serve as initial training data. Finally, we validated the proposed approach in vitro with the discovery of new cationic antimicrobial peptides. Source code freely available at http://graal.ift.ulaval.ca/peptide-design/.

  3. Fundamental understanding and rational design of high energy structural microbatteries

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yuxing; Li, Qiuyan; Cartmell, Samuel; Li, Huidong; Mendoza, Sarah; Zhang, Ji-Guang; Deng, Zhiqun Daniel; Xiao, Jie

    2018-01-01

    Microbatteries play a critical role in determining the lifetime of downsized sensors, wearable devices, medical applications and the like. Moreover, structural batteries are often required for reasons of aesthetics and space utilization, yet this area remains rarely explored. Herein, we discuss the fundamental issues associated with the rational design of practically usable high-energy microbatteries. The tubular shape of the cell further allows the flexible integration of microelectronics. A functioning acoustic micro-transmitter continuously powered by this tubular battery has been successfully demonstrated. Multiple design features adopted to accommodate large mechanical stress during the rolling process are discussed, providing new insights into the design of structural microbatteries for emerging technologies.

  4. Comparison of Sequential Designs of Computer Experiments in High Dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Kupresanin, A. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Johannesson, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-07-21

    We continue a long line of research in applying the design and analysis of computer experiments to the study of real world systems. The problem we consider is that of fitting a Gaussian process model for a computer model in applications where the simulation output is a function of a high dimensional input vector. Our computer experiments are designed sequentially as we learn about the model. We perform an empirical comparison of the effectiveness and efficiency of several statistical criteria that have been used in sequential experimental designs. The specific application that motivates this work comes from climatology.
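    The sequential loop described above (choose the next input where the fitted model is most uncertain, run the simulator, refit) can be sketched without fitting a full Gaussian process: for a stationary kernel, predictive variance grows with distance to the nearest design point, so a maximin-distance criterion mimics the maximum-variance rule. All names and values below are illustrative, not the paper's criteria:

```python
def next_design_point(candidates, design):
    # Maximin criterion: choose the candidate whose nearest design point
    # is farthest away, a proxy for maximum predictive variance under a
    # stationary Gaussian process. Points are tuples of equal dimension.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return max(candidates, key=lambda c: min(dist2(c, d) for d in design))

# Grow a design sequentially on a 5 x 5 grid over the unit square:
candidates = [(x / 4, y / 4) for x in range(5) for y in range(5)]
design = [(0.0, 0.0)]
for _ in range(3):
    design.append(next_design_point(candidates, design))
print(design)  # picks the remaining corners, in order of remaining uncertainty
```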

  5. Conceptual design of a helium heater for high temperature applications

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Xue Zhou, E-mail: jin@kit.edu; Chen, Yuming; Ghidersa, Bradut-Eugen

    2014-10-15

    Highlights: •A special design of heater with two vessels is introduced for the operation at 10 MPa and 800 °C. •The additional coupling between the cold leg and the hot leg of the loop due to the heater design has an impact on the loop energy budget. •Reducing the heat transfer between the two flow channels inside the heater by means of a helium gap in the inlet nozzle is proven to be effective. -- Abstract: The Karlsruhe Advanced Technologies Helium Loop (KATHELO) has been designed for testing divertor modules as well as qualifying materials for high heat flux, high temperature (up to 800 °C) and high pressure (10 MPa) applications. The test section inlet temperature level is controlled using a process electrical heater. To cope with the extreme operating conditions, a special design of this unit has been proposed. In this paper the conceptual design of the unit will be presented and the impact of the coupling between the cold and hot helium gas on the overall efficiency of the loop will be investigated. The detailed thermal-hydraulic analysis of the feed through of the hot helium into the low temperature pressure vessel using ANSYS CFX will be presented. The impact of the design choices on the overall energy budget of the loop will be analyzed using RELAP5-3D.

  6. Design of High Performance Permanent-Magnet Synchronous Wind Generators

    Directory of Open Access Journals (Sweden)

    Chun-Yu Hsiao

    2014-11-01

    Full Text Available This paper is devoted to the analysis and design of high performance permanent-magnet synchronous wind generators (PMSGs). A systematic and sequential methodology for the design of PMSGs is proposed, with a high performance wind generator as a design model. Aiming at high induced voltage, low harmonic distortion and high generator efficiency, optimal generator parameters such as the pole-arc to pole-pitch ratio and the stator-slot-shoe dimensions are determined with the proposed technique using Maxwell 2-D, Matlab software and the Taguchi method. The proposed double three-phase and six-phase winding configurations, which consist of six windings in the stator, can provide evenly distributed current for versatile applications regarding practical voltage and current demands. Specifically, the windings are connected in series to increase the output voltage at low wind speed, and in parallel at high wind speed to generate electricity even when one winding fails, thereby also enhancing reliability. A PMSG is designed and implemented based on the proposed method. When the simulation is performed with a 6 Ω load, the output power for the double three-phase winding and the six-phase winding is 10.64 and 11.13 kW, respectively. In addition, 24 Ω load experiments show that the efficiencies of the double three-phase winding and the six-phase winding are 96.56% and 98.54%, respectively, verifying the proposed high performance operation.

  7. Leverage and other informal pressures in community psychiatry in England.

    Science.gov (United States)

    Canvin, Krysia; Rugkåsa, Jorun; Sinclair, Julia; Burns, Tom

    2013-01-01

    Informal practices aimed at managing psychiatric patients in the community setting fall outside legal and policy provision or guidance. "Leverage" is an informal practice whereby practitioners attempt to influence patients' treatment adherence by, for example, making patients' access to subsidised housing conditional upon adherence to treatment or by making treatment adherence a condition of patients' avoidance of financial control. Lower rates of leverage are reported in the UK compared to the USA, possibly due to differences between the US and European social welfare systems. These differences raise questions as to the international comparability of leverage practices described in the literature. The study aimed to capture patients' experiences and perceptions of pressures and to explore (a) whether "leverage" can be distinguished from other pressures, and (b) how a concept of leverage derived from patient experiences in England might fit with the literature to date. In this article we present the different types of pressure that we identified from patients' accounts, and a set of criteria derived for the purpose of distinguishing between these different types of pressure. Twenty-nine qualitative interviews with a purposive subsample from a study of leverage in the English mental health system were analysed. Participants reported a range of what can be classified as both leveraged and non-leveraged pressures. These were perceived as pressures to adhere to treatment, as well as "staying well." Leveraged pressures were distinguishable from non-leveraged pressures by the presence of three features: conditionality, a lever and direct communication. The portrayal of "leverage" in the current literature does not fully capture patient experiences of pressure. Our analysis offers a clearer concept of leverage and other pressures that influence patients, and which may have different legal, ethical and clinical implications. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Design Strategies for Optically-Accessible, High-Temperature, High-Pressure Reactor

    Energy Technology Data Exchange (ETDEWEB)

    S. F. Rice; R. R. Steeper; C. A. LaJeunesse; R. G. Hanush; J. D. Aiken

    2000-02-01

    The authors have developed two optical cell designs for high-pressure and high-temperature fluid research: one for flow systems, and the other for larger batch systems. The flow system design uses spring washers to balance the unequal thermal expansions of the reactor and the window materials. A typical design calculation is presented showing the relationship between system pressure, operating temperature, and torque applied to the window-retaining nut. The second design employs a different strategy more appropriate for larger windows. This design uses two seals: one for the window that benefits from system pressure, and a second one that relies on knife-edge, metal-to-metal contact.

  10. Fashion Design: Designing a Learner-Active, Multi-Level High School Course

    Science.gov (United States)

    Nelson, Diane

    2009-01-01

    A high school fashion design teacher has much in common with the ringmaster of a three-ring circus. The challenges of teaching a hands-on course are to facilitate the entire class and to meet the needs of individual students. When teaching family and consumer sciences, the goal is to have a learner-active classroom. Revamping the high school's…

  11. Design and Analysis of High Speed Capacitive Pipeline DACs

    OpenAIRE

    Duong, Quoc-Tai; Dabrowski, Jerzy; Alvandpour, Atila

    2014-01-01

    Design of a high speed capacitive digital-to-analog converter (SC DAC) is presented for 65 nm CMOS technology. SC pipeline architecture is used followed by an output driver. For GHz frequency operation with output voltage swing suitable for wireless applications (300 mVpp) the DAC performance is shown to be limited by the capacitor array imperfections. While it is possible to design a highly linear output driver with HD3 < -70 dB and HD2 < -90 dB over 0.55 GHz band as we show, the maxi...

  12. The Mechanical Design Optimization of a High Field HTS Solenoid

    Energy Technology Data Exchange (ETDEWEB)

    Lalitha, SL; Gupta, RC

    2015-06-01

    This paper describes the conceptual design optimization of a large aperture, high field (24 T at 4 K) solenoid for a 1.7 MJ superconducting magnetic energy storage device. The magnet is designed to be built entirely of second generation (2G) high temperature superconductor tape with excellent electrical and mechanical properties at the cryogenic temperatures. The critical parameters that govern the magnet performance are examined in detail through a multiphysics approach using ANSYS software. The analysis results formed the basis for the performance specification as well as the construction of the magnet.

  13. Lessons Learned in High Frequency Data Transmissions Design

    CERN Document Server

    Sullivan, Stephanie W; The ATLAS collaboration

    2016-01-01

    Requirements of HEP experiments lead to highly integrated systems with many electrical, mechanical and thermal constraints, so a complex performance optimisation is required: high-speed data transmission lines must be designed while simultaneously minimising radiation length. Methods to improve the signal integrity of point-to-point links and multi-drop configurations are described. FEA calculations are an essential guide to the optimisation, which allows data rates of 640 Mbps for point-to-point links over lengths of up to 1.4 m, as well as 160 Mbps for the multi-drop configuration. The designs were validated using laboratory measurements of S-parameters and direct BER tests.

  14. Leverage hadoop framework for large scale clinical informatics applications.

    Science.gov (United States)

    Dong, Xiao; Bahroos, Neil; Sadhu, Eugene; Jackson, Tommie; Chukhman, Morris; Johnson, Robert; Boyd, Andrew; Hynes, Denise

    2013-01-01

    In this manuscript, we present our experiences using the Apache Hadoop framework for high data volume and computationally intensive applications, and discuss some best practice guidelines in a clinical informatics setting. There are three main aspects in our approach: (a) process and integrate diverse, heterogeneous data sources using standard Hadoop programming tools and customized MapReduce programs; (b) after fine-grained aggregate results are obtained, perform data analysis using the Mahout data mining library; (c) leverage the column oriented features in HBase for patient centric modeling and complex temporal reasoning. This framework provides a scalable solution to meet the rapidly increasing, imperative "Big Data" needs of clinical and translational research. The intrinsic advantage of fault tolerance, high availability and scalability of Hadoop platform makes these applications readily deployable at the enterprise level cluster environment.
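    Point (a) above, aggregating heterogeneous records with customized MapReduce programs, follows the standard map/shuffle/reduce pattern. The sketch below mimics that pattern in plain Python on hypothetical patient records; it is a conceptual stand-in, not the Hadoop API itself:

```python
from collections import defaultdict

def map_phase(records):
    # Mapper: emit a (diagnosis_code, 1) pair for every code in every record.
    for record in records:
        for code in record["codes"]:
            yield code, 1

def reduce_phase(pairs):
    # The shuffle/sort step groups pairs by key; the reducer sums each group.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: sum(values) for key, values in groups.items()}

# Hypothetical patient records with ICD-style codes (illustrative only):
records = [
    {"patient": "p1", "codes": ["E11", "I10"]},
    {"patient": "p2", "codes": ["I10"]},
    {"patient": "p3", "codes": ["E11", "E11"]},
]
counts = reduce_phase(map_phase(records))
print(counts)  # {'E11': 3, 'I10': 2}
```

    In Hadoop proper, the mapper and reducer run distributed across the cluster and the framework performs the shuffle; the per-key aggregates would then feed Mahout analyses or HBase tables as described above.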

  15. The Risk-Return Tradeoff and Leverage Effect in a Stochastic Volatility-in-Mean Model

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; Posedel, Petra

    We study the risk premium and leverage effect in the S&P500 market using the stochastic volatility-in-mean model of Barndorff-Nielsen & Shephard (2001). The Merton (1973, 1980) equilibrium asset pricing condition linking the conditional mean and conditional variance of discrete time returns is reinterpreted in terms of the continuous time model. Tests are performed on the risk-return relation, the leverage effect, and the overidentifying zero intercept restriction in the Merton condition. Results are compared across alternative volatility proxies, in particular, realized volatility from high-frequency (5-minute) returns, implied Black-Scholes volatility backed out from observed option prices, model-free implied volatility (VIX), and staggered bipower variation. Our results are consistent with a positive risk-return relation and a significant leverage effect, whereas an additional overidentifying …

  16. High performance APCS conceptual design and evaluation scoping study

    Energy Technology Data Exchange (ETDEWEB)

    Soelberg, N.; Liekhus, K.; Chambers, A.; Anderson, G.

    1998-02-01

    This Air Pollution Control System (APCS) Conceptual Design and Evaluation study was conducted to evaluate a high-performance air pollution control (APC) system for minimizing air emissions from mixed waste thermal treatment systems. Seven variations of high-performance APCS designs were conceptualized using several design objectives. One of the system designs was selected for detailed process simulation using ASPEN PLUS to determine material and energy balances and evaluate performance. Installed system capital costs were also estimated. Sensitivity studies were conducted to evaluate the incremental cost and benefit of added carbon adsorber beds for mercury control, selective catalytic reduction for NOx control, and offgas retention tanks for holding the offgas until sample analysis is conducted to verify that the offgas meets emission limits. Results show that the high-performance dry-wet APCS can easily meet all expected emission limits except possibly for mercury. The capability to achieve high levels of mercury control (potentially necessary for thermally treating some DOE mixed streams) could not be validated using current performance data for mercury control technologies. The engineering approach and ASPEN PLUS modeling tool developed and used in this study identified APC equipment and system performance, size, cost, and other issues that are not yet resolved. These issues need to be addressed in feasibility studies and conceptual designs for new facilities or for determining how to modify existing facilities to meet expected emission limits. The ASPEN PLUS process simulation with current and refined input assumptions and calculations can be used to provide system performance information for decision-making, identifying best options, estimating costs, reducing the potential for emission violations, providing information needed for waste flow analysis, incorporating new APCS technologies in existing designs, or performing facility design and permitting activities.

  17. High-throughput theoretical design of lithium battery materials

    Science.gov (United States)

    Shi-Gang, Ling; Jian, Gao; Rui-Juan, Xiao; Li-Quan, Chen

    2016-01-01

    The rapid evolution of high-throughput theoretical design schemes to discover new lithium battery materials is reviewed, including high-capacity cathodes, low-strain cathodes, anodes, solid state electrolytes, and electrolyte additives. With the development of efficient theoretical methods and inexpensive computers, high-throughput theoretical calculations have played an increasingly important role in the discovery of new materials. With the help of automatic simulation flow, many types of materials can be screened, optimized and designed from a structural database according to specific search criteria. In advanced cell technology, new materials for next generation lithium batteries are of great significance to achieve performance, and some representative criteria are: higher energy density, better safety, and faster charge/discharge speed. Project supported by the National Natural Science Foundation of China (Grant Nos. 11234013 and 51172274) and the National High Technology Research and Development Program of China (Grant No. 2015AA034201).
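    The screening step described above (filter a structural database by specific search criteria) reduces, at its core, to applying property predicates over candidate entries. A minimal sketch with hypothetical materials and thresholds; names and values are invented for illustration:

```python
def screen(materials, criteria):
    # Filter candidate materials: keep entries satisfying every predicate.
    # materials: list of dicts of computed properties; criteria maps a
    # property name to a predicate. This mirrors only the generic filter
    # step of a high-throughput workflow, not any specific screening code.
    return [m for m in materials
            if all(pred(m[prop]) for prop, pred in criteria.items())]

# Hypothetical cathode candidates with computed voltage (V) and strain (%):
db = [
    {"name": "A", "voltage": 4.1, "strain": 1.5},
    {"name": "B", "voltage": 3.2, "strain": 0.8},
    {"name": "C", "voltage": 4.3, "strain": 6.0},
]
hits = screen(db, {"voltage": lambda v: v >= 4.0, "strain": lambda s: s <= 2.0})
print([m["name"] for m in hits])  # ['A']
```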

  18. Design and development of ITER high-frequency magnetic sensor

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Y., E-mail: Yunxing.Ma@iter.org [ITER Organization, Route de Vinon-sur-Verdon, CS 90 046, 13067 St. Paul Lez Durance Cedex (France); Fircroft Engineering, Lingley House, 120 Birchwood Point, Birchwood Boulevard, Warrington, WA3 7QH (United Kingdom); Vayakis, G. [ITER Organization, Route de Vinon-sur-Verdon, CS 90 046, 13067 St. Paul Lez Durance Cedex (France); Begrambekov, L.B. [National Research Nuclear University (MEPhI), 115409, Moscow, Kashirskoe shosse 31 (Russian Federation); Cooper, J.-J. [Culham Centre for Fusion Energy (CCFE), Abingdon, Oxfordshire OX14 3DB (United Kingdom); Duran, I. [IPP Prague, Za Slovankou 1782/3, 182 00 Prague 8 (Czech Republic); Hirsch, M.; Laqua, H.P. [Max-Planck-Institut für Plasmaphysik, Teilinstitut Greifswald, Wendelsteinstraße 1, D-17491 Greifswald (Germany); Moreau, Ph. [CEA Cadarache, 13108 Saint Paul lez Durance Cedex (France); Oosterbeek, J.W. [Eindhoven University of Technology (TU/e), PO Box 513, 5600 MB Eindhoven (Netherlands); Spuig, P. [CEA Cadarache, 13108 Saint Paul lez Durance Cedex (France); Stange, T. [Max-Planck-Institut für Plasmaphysik, Teilinstitut Greifswald, Wendelsteinstraße 1, D-17491 Greifswald (Germany); Walsh, M. [ITER Organization, Route de Vinon-sur-Verdon, CS 90 046, 13067 St. Paul Lez Durance Cedex (France)

    2016-11-15

    Highlights: • ITER high-frequency magnetic sensor system has been designed. • Prototypes have been successfully manufactured. • Manufactured prototypes have been tested in various labs. • Test results experimentally validated the design. - Abstract: High-frequency (HF) inductive magnetic sensors are the primary ITER diagnostic set for Toroidal Alfvén Eigenmodes (TAE) detection, while they also supplement low-frequency MHD and plasma equilibrium measurements. These sensors will be installed on the inner surface of ITER vacuum vessel, operated in a harsh environment with considerable neutron/nuclear radiation and high thermal load. Essential components of the HF sensor system, including inductive coil, electron cyclotron heating (ECH) shield, electrical cabling and termination load, have been designed to meet ITER measurement requirements. System performance (e.g. frequency response, thermal conduction) has been assessed. A prototyping campaign was initiated to demonstrate the manufacturability of the designed components. Prototypes have been produced according to the specifications. A series of lab tests have been performed to examine assembly issues and validate electrical and thermo-mechanical aspects of the design. In-situ microwave radiation test has been conducted in the MISTRAL test facility at IPP-Greifswald to experimentally examine the microwave shielding efficiency and structural integrity of the ECH shield. Low-power microwave attenuation measurement and scanning electron microscopic inspection were conducted to probe and examine the quality of the metal coating on the ECH shield.

  19. The bonsai and the gardener: using flow data to better assess financial sector leverage

    OpenAIRE

    Javier Villar Burke

    2013-01-01

    This paper discusses the concept of leverage, its components and how to measure and monitor it. It proposes an innovative approach to assessing leverage based on flows using the concept of a marginal leverage ratio, which reveals the leverage related to new activities, as a valuable supplement to the traditional absolute leverage ratio. The marginal leverage ratio can be used as an early warning tool to signal potential episodes of excessive leverage and to understand if, and how, banks deleverage.
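    The distinction between the absolute (stock) and marginal (flow) ratios can be illustrated numerically. The definitions below follow the paper's flow-based idea in its simplest assets-over-equity form; the figures are invented:

```python
def absolute_leverage(assets, equity):
    # Stock measure: total assets over total equity at one point in time.
    return assets / equity

def marginal_leverage(assets_now, assets_prev, equity_now, equity_prev):
    # Flow measure: leverage of the *new* activity over the period,
    # i.e. the change in assets relative to the change in equity.
    return (assets_now - assets_prev) / (equity_now - equity_prev)

# A bank grows assets from 100 to 130 while equity edges up from 10 to 11:
print(absolute_leverage(130, 11))           # ~11.8, looks broadly stable
print(marginal_leverage(130, 100, 11, 10))  # 30.0, new business is highly leveraged
```

    The marginal ratio flags the highly leveraged new activity well before the absolute ratio drifts, which is exactly the early-warning use the paper proposes.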

  20. Leveraging Gaming Technology to Deliver Effective Training

    Science.gov (United States)

    Cimino, James D.

    2011-01-01

    The best way to engage soldiers is to present them with training content consistent with their learning preference. Blended Interactive Multimedia Instruction (IMI) can be used to teach soldiers what they need to do and how to do each step, and a COTS game engine lets them actually practice the skills learned. Blended IMI provides an enjoyable experience for the soldier, thereby increasing retention rates and motivation while decreasing the time to subject mastery. And now mobile devices have emerged as an exciting new platform, literally placing the training in the soldier's hands. In this paper, we discuss how we leveraged commercial game engine technology, tightly integrated with the Blended IMI, to train soldiers on both laptops and mobile devices. We provide a recent case study of how this training is being utilized, its benefits, and student/instructor feedback.

  1. Leveraging TSP Solver Complementarity through Machine Learning.

    Science.gov (United States)

    Kerschke, Pascal; Kotthoff, Lars; Bossek, Jakob; Hoos, Holger H; Trautmann, Heike

    2017-08-24

    The Travelling Salesperson Problem (TSP) is one of the best-studied NP-hard problems. Over the years, many different solution approaches and solvers have been developed. For the first time, we directly compare five state-of-the-art inexact solvers-namely, LKH, EAX, restart variants of those, and MAOS-on a large set of well-known benchmark instances and demonstrate complementary performance, in that different instances may be solved most effectively by different algorithms. We leverage this complementarity to build an algorithm selector, which selects the best TSP solver on a per-instance basis and thus achieves significantly improved performance compared to the single best solver, representing an advance in the state of the art in solving the Euclidean TSP. Our in-depth analysis of the selectors provides insight into what drives this performance improvement.
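    An algorithm selector of the kind described chooses a solver per instance from features computed on that instance. The sketch below uses a 1-nearest-neighbour rule over toy features as a stand-in for the paper's learned selection models; the feature names and training values are illustrative, not the actual TSP feature set:

```python
def select_solver(instance_features, training_data):
    # 1-nearest-neighbour selection: run the solver that was best on the
    # most similar benchmark instance. training_data is a list of
    # (feature_vector, best_solver) pairs from prior benchmark runs.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, solver = min(training_data, key=lambda t: dist2(t[0], instance_features))
    return solver

# Toy features, e.g. (normalised instance size, clustering score):
training = [
    ((0.1, 0.9), "EAX"),
    ((0.8, 0.2), "LKH"),
    ((0.5, 0.5), "MAOS"),
]
print(select_solver((0.7, 0.3), training))  # LKH
```

    A per-instance selector like this beats any single solver whenever the solvers' strengths are complementary across the instance space, which is the complementarity the study demonstrates.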

  2. Catalyst design for the growth of highly packed nanotube forests

    Energy Technology Data Exchange (ETDEWEB)

    Esconjauregui, Santiago; Fouquet, Martin; Bayer, Bernhard C.; Robertson, John [Engineering Department, University of Cambridge, CB2 1PZ Cambridge (United Kingdom); Ducati, Caterina [Materials Science Department, University of Cambridge, CB2 3QZ Cambridge (United Kingdom)

    2011-11-15

    We report a technique for the design of high-density catalyst nanoparticles (NPs) which allow the growth of highly packed forests of carbon nanotubes (CNTs). The technique consists of cycles of deposition and annealing of thin metal films, followed by NP immobilisation. This allows a CNT areal density of at least 10^13 cm^-2. (Copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  3. On-chip high-voltage generator design: design methodology for charge pumps

    CERN Document Server

    Tanzawa, Toru

    2016-01-01

    This book provides various design techniques for switched-capacitor on-chip high-voltage generators, including charge pump circuits, regulators, level shifters, references, and oscillators.  Readers will see these techniques applied to system design in order to address the challenge of how the on-chip high-voltage generator is designed for Flash memories, LCD drivers, and other semiconductor devices to optimize the entire circuit area and power efficiency with a low voltage supply, while minimizing the cost.  This new edition includes a variety of useful updates, including coverage of power efficiency and comprehensive optimization methodologies for DC-DC voltage multipliers, modeling of extremely low voltage Dickson charge pumps, and modeling and optimum design of AC-DC switched-capacitor multipliers for energy harvesting and power transfer for RFID.
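As a quick illustration of the kind of model the book covers, the first-order textbook expression for an ideal N-stage Dickson charge pump under load can be evaluated directly; the component values below are arbitrary examples, not taken from the book:

```python
# First-order model of an ideal-switch Dickson charge pump under load:
#   Vout = (N + 1) * Vdd - N * Iout / (f * C)
# Threshold drops, parasitic capacitance, and switching losses are ignored.

def dickson_vout(n_stages, vdd, i_out, f_clk, c_stage):
    """Open-loop DC output of an N-stage Dickson pump (textbook model)."""
    return (n_stages + 1) * vdd - n_stages * i_out / (f_clk * c_stage)

# Example: 4 stages, 1.8 V supply, 10 uA load, 10 MHz clock, 10 pF/stage
v = dickson_vout(4, 1.8, 10e-6, 10e6, 10e-12)
print(round(v, 2))  # 8.6  (9.0 V unloaded, minus 0.4 V of load droop)
```

The linear droop term is what the book's optimization methodologies trade against stage count, clock frequency, and capacitor area.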

  4. Phoenix: Preliminary design of a high speed civil transport

    Science.gov (United States)

    Aguilar, Joseph; Davis, Steven; Jett, Brian; Ringo, Leslie; Stob, John; Wood, Bill

    1992-01-01

    The goal of the Phoenix Design Project was to develop a second generation high speed civil transport (HSCT) that will meet the needs of the traveler and airline industry beginning in the 21st century. The primary emphasis of the HSCT is to take advantage of the growing needs of the Pacific Basin and the passengers who are involved in that growth. A passenger load of 150 persons, a mission range of 5150 nautical miles, and a cruise speed of Mach 2.5 constitute the primary design points of this HSCT. The design concept is made possible with the use of a well designed double delta wing and four mixed flow engines. Passenger comfort, compatibility with existing airport infrastructure, and cost competitiveness with current subsonic aircraft make the Phoenix a viable aircraft for the future.

  5. Probabilistic performance-based design for high performance control systems

    Science.gov (United States)

    Micheli, Laura; Cao, Liang; Gong, Yongqiang; Cancelli, Alessandro; Laflamme, Simon; Alipour, Alice

    2017-04-01

    High performance control systems (HPCS) are advanced damping systems capable of high damping performance over a wide frequency bandwidth, ideal for mitigation of multi-hazards. They include active, semi-active, and hybrid damping systems. However, HPCS are more expensive than typical passive mitigation systems, rely on power and hardware (e.g., sensors, actuators) to operate, and require maintenance. In this paper, a life cycle cost analysis (LCA) approach is proposed to estimate the economic benefit of these systems over the entire life of the structure. The novelty resides in integrating the life cycle cost analysis into performance-based design (PBD) tailored to multi-level wind hazards. This yields a probabilistic performance-based design approach for HPCS. Numerical simulations are conducted on a building located in Boston, MA. LCAs are conducted for passive control systems and HPCS, and the concept of controller robustness is demonstrated. Results highlight the promise of the proposed performance-based design procedure.
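The core of such a comparison can be sketched as an expected present-value cost; all numbers below are invented placeholders, not the paper's data:

```python
def life_cycle_cost(initial, annual_maintenance, annual_hazard_losses, rate, years):
    """Expected present-value life cycle cost over the structure's life.
    annual_hazard_losses: list of (annual_probability, repair_loss) pairs,
    one per hazard level (e.g. multi-level wind hazards)."""
    expected_annual = annual_maintenance + sum(p * loss for p, loss in annual_hazard_losses)
    pv = sum(expected_annual / (1.0 + rate) ** t for t in range(1, years + 1))
    return initial + pv

# Invented example: a passive system is cheaper up front but suffers larger
# expected wind losses than an HPCS over a 50-year life at a 3% discount rate.
passive = life_cycle_cost(1e6, 5e3, [(0.1, 5e5), (0.01, 5e6)], 0.03, 50)
hpcs = life_cycle_cost(2e6, 2e4, [(0.1, 5e4), (0.01, 5e5)], 0.03, 50)
print(passive > hpcs)  # True: the HPCS pays off despite its higher cost
```

Whether the inequality holds depends entirely on the hazard probabilities and loss estimates, which is why the paper frames the comparison probabilistically.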

  6. Leveraging model-based study designs and serial micro-sampling techniques to understand the oral pharmacokinetics of the potent LTB4 inhibitor, CP-105696, for mouse pharmacology studies.

    Science.gov (United States)

    Spilker, Mary E; Chung, Heekyung; Visswanathan, Ravi; Bagrodia, Shubha; Gernhardt, Steven; Fantin, Valeria R; Ellies, Lesley G

    2017-07-01

    1. Leukotriene B4 (LTB4) is a proinflammatory mediator important in the progression of a number of inflammatory diseases. Preclinical models can explore the role of LTB4 in pathophysiology using tool compounds, such as CP-105696, that modulate its activity. To support preclinical pharmacology studies, micro-sampling techniques and mathematical modeling were used to determine the pharmacokinetics of CP-105696 in mice within the context of systemic inflammation induced by a high-fat diet (HFD). 2. Following oral administration of doses > 35 mg/kg, CP-105696 kinetics can be described by a one-compartment model with first order absorption. The compound's half-life is 44-62 h with an apparent volume of distribution of 0.51-0.72 L/kg. Exposures in animals fed an HFD are within 2-fold of those fed a normal chow diet. Daily dosing at 100 mg/kg was not tolerated and resulted in a >20% weight loss in the mice. 3. CP-105696's long half-life has the potential to support a twice weekly dosing schedule. Given that most chronic inflammatory diseases will require long-term therapies, these results are useful in determining the optimal dosing schedules for preclinical studies using CP-105696.
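The reported one-compartment model with first-order absorption corresponds to the standard Bateman equation; in the sketch below only the half-life is drawn from the reported 44-62 h range, while the dose, volume, and absorption rate are illustrative assumptions rather than the paper's fitted parameters:

```python
import math

def conc_oral_1cpt(dose, F, V, ka, ke, t):
    """Bateman equation: plasma concentration for a one-compartment model
    with first-order absorption (ka) and first-order elimination (ke)."""
    return (F * dose * ka) / (V * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

# Illustrative values: a 35 mg/kg dose in a 25 g mouse, V in the reported
# 0.51-0.72 L/kg range, ke from a 50 h half-life (reported 44-62 h), and
# an assumed ka of 1/h.
dose_mg = 35 * 0.025
V_L = 0.6 * 0.025
ke = math.log(2) / 50.0
ka = 1.0
for t in (1, 8, 24, 72):
    print(t, round(conc_oral_1cpt(dose_mg, 1.0, V_L, ka, ke, t), 1))
```

The long terminal half-life relative to the absorption phase is what makes the twice-weekly dosing schedule mentioned above plausible.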

  7. NIF Rugby High Foot Campaign from the design side

    Science.gov (United States)

    Leidinger, J.-P.; Callahan, D. A.; Berzak-Hopkins, L. F.; Ralph, J. E.; Amendt, P.; Hinkel, D. E.; Michel, P.; Moody, J. D.; Ross, J. S.; Rygg, J. R.; Celliers, P.; Clouët, J.-F.; Dewald, E. L.; Kaiser, P.; Khan, S.; Kritcher, A. L.; Liberatore, S.; Marion, D.; Masson-Laborde, P.-E.; Milovich, J. L.; Morice, O.; Pak, A. E.; Poujade, O.; Strozzi, D.; Hurricane, O. A.

    2016-05-01

    The NIF Rugby High Foot campaign results, with 8 shots to date, are compared with the 2D FCI2 design simulations. A special emphasis is placed on the predictive features and on those areas where some work is still required to achieve the best possible modelling of these MJ-class experiments.

  8. Simulant Basis for the Standard High Solids Vessel Design

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, Reid A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fiskum, Sandra K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Suffield, Sarah R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Daniel, Richard C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gauglitz, Phillip A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wells, Beric E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-09-01

    This document provides the requirements for a test simulant suitable for demonstrating the mixing requirements for the Standard High Solids Vessel Design (SHSVD). This simulant has not been evaluated for other purposes such as gas retention and release or erosion. The objective of this work is to provide an underpinning for the simulant properties based on actual waste characterization.

  9. Fibre optic humidity sensor designed for highly alkaline environments

    OpenAIRE

    K. Bremer; Wollweber, M.; Guenther, S.; Werner, G.; Sun, T.; Grattan, K. T. V.; Roth, B.

    2014-01-01

    This paper presents the design of a sensor packaging for a Fibre Bragg Grating (FBG) based fibre optic humidity sensor. The evaluation of the developed fibre optic sensor was performed under experimental conditions and verified its capability to withstand highly alkaline environments. Therefore, the sensor can be applied to monitor the concrete humidity level and thus to indicate the maintenance of concrete structures.

  10. The Design of a High-Performance File Server

    NARCIS (Netherlands)

    van Renesse, R.; Tanenbaum, A.S.; Wilschut, A.N.

    The Bullet server is a file server that outperforms traditional file servers by more than a factor of three. It achieves high throughput and low delay by a software design radically different from that of file servers currently in use. Whereas files are normally stored as a sequence of disk blocks,

  11. Designing Customizable Reading Modules for a High School Literature Classroom

    Science.gov (United States)

    Russell, L. Roxanne; Cuevas, Joshua

    2014-01-01

    This design case follows an ongoing collaboration between an instructional technologist and a high school literature teacher promoting reading comprehension through modules that provide visually interesting display of text on a computer screen along with cognitive tools. The modules were found to boost comprehension of specific content in even one…

  12. Design for a High Energy Density Kelvin-Helmholtz Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Hurricane, O A

    2007-10-29

    While many high energy density physics (HEDP) Rayleigh-Taylor and Richtmyer-Meshkov instability experiments have been fielded as part of basic HEDP and astrophysics studies, not one HEDP Kelvin-Helmholtz (KH) experiment has been successfully performed. Herein, a design for a novel HEDP x-ray driven KH experiment is presented along with supporting radiation-hydrodynamic simulation and theory.

  13. Cadence® High-Speed PCB Design Flow Workshop

    CERN Document Server

    2006-01-01

    The latest release of the Cadence High-Speed PCB Design methodology (PE142), based on the Concept-HDL schematic editor, Constraint Manager, the SPECCTRAQuest signal integrity analysis tool, and ALLEGRO layout with the associated SPECCTRA auto-router, is now sufficiently developed and stable to be considered for high-speed board designs at CERN. The implementation of this methodology, built around the new Constraint Manager program, is essential when you have to develop a board with many high-speed design rules, such as terminated lines, large bus structures, maximum length, timing, crosstalk, etc., that cannot be controlled by traditional methods. On more conventional designs, the formal aspect of the methodology can avoid misunderstanding between hardware and ALLEGRO layout designers, minimizing prototype iterations, development time, and cost. The capability to keep trace of the original digital designer's intents in the schematic or board layout, loading formal constraints in EDMS, could also be considered for LHC electro...

  14. Blurring the Lines: Leveraging Internet Technology for Successful Blending of Secondary/Post-Secondary Technical Education

    Science.gov (United States)

    Ryan, Kenneth; Kopischke, Kevin

    2008-01-01

    The Remote Automation Management Platform (RAMP) is a real-time, interactive teaching tool which leverages common off-the-shelf internet technologies to provide high school learners extraordinary access to advanced technical education opportunities. This outreach paradigm is applicable to a broad range of advanced technical skills from automation…

  15. Creating geometrically robust designs for highly sensitive problems using topology optimization: Acoustic cavity design

    DEFF Research Database (Denmark)

    Christiansen, Rasmus E.; Lazarov, Boyan S.; Jensen, Jakob S.

    2015-01-01

    Resonance and wave-propagation problems are known to be highly sensitive towards parameter variations. This paper discusses topology optimization formulations for creating designs that perform robustly under spatial variations for acoustic cavity problems. For several structural problems, robust...... and limitations are discussed. In addition, a known explicit penalization approach is considered for comparison. For near-uniform spatial variations it is shown that highly robust designs can be obtained using the double filter approach. It is finally demonstrated that taking non-uniform variations into account...

  16. Improving Magnet Designs With High and Low Field Regions

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Bahl, Christian Robert Haffenden; Smith, Anders

    2011-01-01

    A general scheme for increasing the difference in magnetic flux density between a high and a low magnetic field region by removing unnecessary magnet material is presented. This is important in, e.g., magnetic refrigeration where magnet arrays have to deliver high field regions in close proximity...... to low field regions. Also, a general way to replace magnet material with a high permeability soft magnetic material where appropriate is discussed. As an example, these schemes are applied to a two dimensional concentric Halbach cylinder design resulting in a reduction of the amount of magnet material...

  17. Design and Analysis of a High Speed Carry Select Adder

    OpenAIRE

    Simarpreet Singh Chawla; Swapnil Aggarwal; Anshika; Nidhi Goel

    2015-01-01

    An optimal high-speed, low-power VLSI architecture requires an efficient arithmetic processing unit optimized for speed and power consumption. Adders are among the most widely used components in digital integrated circuit and system design. A high-speed adder is a necessary component in a data path, e.g. in microprocessors and digital signal processors. The present paper proposes a novel high-speed adder by combining the advantages of Carry Look Ahead Adder (CLAA) and Carry Select Adder (CSA), devi...
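The carry-select principle referenced above can be sketched at the bit level; this is a generic textbook illustration (ripple-carry sub-adders for simplicity), not the paper's proposed CLAA/CSA hybrid:

```python
# Carry-select addition: the upper half of the word is computed twice in
# parallel, once assuming carry-in 0 and once assuming carry-in 1; the
# real carry out of the lower half then selects between the two results,
# cutting the critical path versus a plain ripple adder.

def ripple_add(a_bits, b_bits, carry_in):
    """Add two little-endian bit lists; return (sum_bits, carry_out)."""
    out, c = [], carry_in
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ c)
        c = (a & b) | (c & (a ^ b))
    return out, c

def carry_select_add(a_bits, b_bits, half):
    lo, c = ripple_add(a_bits[:half], b_bits[:half], 0)
    hi0, c0 = ripple_add(a_bits[half:], b_bits[half:], 0)
    hi1, c1 = ripple_add(a_bits[half:], b_bits[half:], 1)
    return lo + (hi1 if c else hi0), (c1 if c else c0)

def to_bits(n, w): return [(n >> i) & 1 for i in range(w)]
def to_int(bits): return sum(b << i for i, b in enumerate(bits))

s, cout = carry_select_add(to_bits(173, 8), to_bits(99, 8), 4)
print(to_int(s) + (cout << 8))  # 272
```

A hardware CSA pays for the speedup with duplicated upper-half logic and a multiplexer, the area/speed trade the paper optimizes.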

  19. Aerodynamic design considerations for efficient high-lift supersonic wings

    Science.gov (United States)

    Miller, D. S.; Wood, R. M.

    1985-01-01

    A previously developed technique for selecting a design space for efficient supersonic wings is reviewed; this design-space concept is expanded to include thickness and camber effects and is evaluated for cambered wings at high-lift conditions. The original design-space formulation was based on experimental upper-surface and lower-surface normal-force characteristics for flat, uncambered delta wings; it is shown that these general characteristics hold for various thickness distributions and for various amounts of leading-edge camber. The original design-space formulation was also based on the assumption that the combination of Mach number and leading-edge sweep which would produce an equal division of flat-wing lift between the upper and lower surface would also be the proper combination to give the best cambered-wing performance. Using drag-due-to-lift factor as a measure of performance, for high-lift conditions cambered-wing performance is shown to significantly increase as conditions approach the design space; this correlation is demonstrated for both subcritical and supercritical flows.

  20. Leveraging Industry-Academia Collaborations in Adaptive Biomedical Innovation.

    Science.gov (United States)

    Stewart, S R; Barone, P W; Bellisario, A; Cooney, C L; Sharp, P A; Sinskey, A J; Natesan, S; Springs, S L

    2016-12-01

    Despite the rapid pace of biomedical innovation, research and development (R&D) productivity in the pharmaceutical industry has not improved broadly. Increasingly, firms need to leverage new approaches to product development and commercial execution, while maintaining adaptability to rapid changes in the marketplace and in biomedical science. Firms are also seeking ways to capture some of the talent, infrastructure, and innovation that depends on federal R&D investment. As a result, a major transition to external innovation is taking place across the industry. One example of these external innovation initiatives is the Sanofi-MIT Partnership, which provided seed funding to MIT investigators to develop novel solutions and approaches in areas of interest to Sanofi. These projects were highly collaborative, with information and materials flowing both ways. The relatively small amount of funding and short time frame of the awards built an adaptable and flexible process to advance translational science. © 2016 American Society for Clinical Pharmacology and Therapeutics.

  1. Information leverage in interconnected ecosystems: Overcoming the curse of dimensionality.

    Science.gov (United States)

    Ye, Hao; Sugihara, George

    2016-08-26

    In ecological analysis, complexity has been regarded as an obstacle to overcome. Here we present a straightforward approach for addressing complexity in dynamic interconnected systems. We show that complexity, in the form of multiple interacting components, can actually be an asset for studying natural systems from temporal data. The central idea is that multidimensional time series enable system dynamics to be reconstructed from multiple viewpoints, and these viewpoints can be combined into a single model. We show how our approach, multiview embedding (MVE), can improve forecasts for simulated ecosystems and a mesocosm experiment. By leveraging complexity, MVE is particularly effective for overcoming the limitations of short and noisy time series and should be highly relevant for many areas of science. Copyright © 2016, American Association for the Advancement of Science.
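A much-simplified sketch of the underlying idea (forecasting from lagged coordinates of multiple interacting variables) follows; the toy coupled map, the lag choice, and the nearest-neighbor rule are illustrative assumptions, not the authors' MVE algorithm, which ranks and combines many alternative embeddings:

```python
# Minimal nearest-neighbor forecast from a multivariate delay embedding.
# Illustrative only: MVE proper averages forecasts from many alternative
# embeddings; here a single embedding (x_t, x_{t-1}, y_t) is used.

def embed(x, y, lag=1):
    """State vectors (x_t, x_{t-lag}, y_t) for t >= lag."""
    return [(x[t], x[t - lag], y[t]) for t in range(lag, len(x))]

def nn_forecast(x, y, query, lag=1):
    """Predict the next x by finding the historical state closest to
    `query` and returning the x value that followed that state."""
    states = embed(x, y, lag)[:-1]   # drop the final state (no successor)
    d2 = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    i = min(range(len(states)), key=lambda j: d2(states[j], query))
    return x[i + lag + 1]            # x one step after state i

# Toy coupled two-species map standing in for an interconnected ecosystem
x, y = [0.4], [0.2]
for _ in range(200):
    x.append(x[-1] * (3.8 - 3.8 * x[-1] - 0.02 * y[-1]))
    y.append(y[-1] * (3.5 - 3.5 * y[-1] - 0.1 * x[-2]))

pred = nn_forecast(x[:-1], y[:-1], (x[-2], x[-3], y[-2]))
print(round(pred, 3), round(x[-1], 3))  # prediction vs. actual next value
```

The point of MVE is that with short, noisy series, combining several such embedded views beats any single one.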

  2. Market Designs for High Levels of Variable Generation: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Milligan, M.; Holttinen, H.; Kiviluoma, J.; Orths, A.; Lynch, M.; Soder, L.

    2014-10-01

    Variable renewable generation is increasing in penetration in modern power systems, leading to higher variability in the supply and price of electricity as well as lower average spot prices. This raises new challenges, particularly in ensuring sufficient capacity and flexibility from conventional technologies. Because the fixed costs and lifetimes of electricity generation investments are significant, designing markets and regulations that ensure the efficient integration of renewable generation is a significant challenge. This paper reviews the state of play of market designs for high levels of variable generation in the United States and Europe and considers new developments in both regions.

  3. Design manual. [High temperature heat pump for heat recovery system

    Energy Technology Data Exchange (ETDEWEB)

    Burch, T.E.; Chancellor, P.D.; Dyer, D.F.; Maples, G.

    1980-01-01

    The design and performance of a waste heat recovery system which utilizes a high temperature heat pump and which is intended for use in those industries incorporating indirect drying processes are described. It is estimated that use of this heat recovery system in the paper, pulp, and textile industries in the US could save 3.9 × 10^14 Btu/yr. Information is included on overall and component design for the heat pump system, comparison of prime movers for powering the compressor, control equipment, and system economics. (LCL)

  4. Design of 1 MHz Solid State High Frequency Power Supply

    Science.gov (United States)

    Parmar, Darshan; Singh, N. P.; Gajjar, Sandip; Thakar, Aruna; Patel, Amit; Raval, Bhavin; Dhola, Hitesh; Dave, Rasesh; Upadhay, Dishang; Gupta, Vikrant; Goswami, Niranjan; Mehta, Kush; Baruah, Ujjwal

    2017-04-01

    High Frequency Power Supply (HFPS) is used for various applications like AM transmitters, metallurgical applications, wireless power transfer, RF ion sources, etc. The ion source for a neutral beam injector at ITER-India uses an inductively coupled power source at high frequency (∼1 MHz). A switching-converter-based topology used to generate the 1 MHz sinusoidal output is expected to have advantages in efficiency and reliability compared to traditional RF tetrode-tube-based oscillators. In terms of power electronics, thermal and power-coupling issues are major challenges at such a high frequency. A conceptual design for a 200 kW, 1 MHz power supply and a prototype design for a 600 W source have been completed. The prototype design uses a Class-E amplifier topology in which a MOSFET is switched resonantly. The prototype uses two low-power modules and a ferrite combiner to add the voltage and power at the output. Subsequently, a Class-D H-bridge configuration was evaluated through simulation; its module design is more stable because the switching devices do not participate in resonance, and the switching-device voltage rating is substantially reduced. The rating of the modules is essentially driven by the maximum power-handling capacity of the MOSFETs and the ferrites in the combiner circuit. The output passive network, including a resonance-tuned network and an impedance-matching network, provides soft switching and matches the load impedance to 50 Ω, respectively. This paper describes the conceptual design of a 200 kW high-frequency power supply and experimental results of the prototype 600 W, 1 MHz source.
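For a series resonant converter of this kind, the tank components fix the switching frequency; the component values below are illustrative (only the 1 MHz operating frequency comes from the record):

```python
import math

def resonant_freq(L, C):
    """Series LC resonant frequency: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(L * C))

def series_C_for(f, L):
    """Capacitance that resonates a given inductance at frequency f."""
    return 1.0 / ((2 * math.pi * f) ** 2 * L)

# Example values (not the paper's design): a 10 uH tank inductance
# resonated at the 1 MHz switching frequency.
L = 10e-6
C = series_C_for(1e6, L)
print(round(C * 1e12))                       # tank capacitance in pF
print(round(resonant_freq(L, C) / 1e6, 3))   # back-check: 1.0
```

Soft switching in Class-E and resonant Class-D stages depends on operating at (or slightly above) this resonance, which is why the tank values dominate the design.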

  5. Design of Plasma Generator Driven by High-frequency High-voltage Power Supply

    OpenAIRE

    Yong-Nong, C.; K. Chih-Ming

    2013-01-01

    In this research, a high-frequency high-voltage power supply designed for a plasma generator is presented. The power supply mainly consists of a series resonant converter with a high-frequency high-voltage boost transformer. Due to the inherently high voltage in the operation of the plasma generator, the analysis of the transformer needs to consider not only winding resistance, leakage inductance, magnetizing inductance, and core-loss resistance, but also parasitic capacitance resulted fr...

  6. Propulsion Design with Freeform Fabrication Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Propulsion Design with Freeform Fabrication (PDFF) will develop and implement a novel design methodology that leverages the rapidly evolving Solid Freeform...

  7. An application of mechanical leverage to microactuation

    Energy Technology Data Exchange (ETDEWEB)

    Sniegowski, J.J. [Sandia National Labs., Albuquerque, NM (United States); Smith, C. [Wisconsin Univ., Madison, WI (United States)

    1994-12-31

    Preliminary results on the use of mechanical advantage to convert a short-displacement, high-force actuation mechanism into a long-displacement, medium-force actuator are presented. This micromechanical, mechanically-advantaged actuator is capable of relatively large displacement and force values. The target design values are a lever ratio of 17.5:1, leading to ±17.5 µm of displacement while providing no less than 2.25 µN of force throughout the actuator's range of motion for an applied voltage of less than 50 volts. The basis for the mechanical advantage is simple levers with fulcrums.
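The force/displacement trade-off behind this lever scheme is just conservation of work; the input force and stroke below are hypothetical (only the 17.5:1 ratio comes from the record):

```python
def lever_output(force_in, disp_in, ratio):
    """Ideal lever trade-off: output displacement is multiplied by the
    lever ratio while output force is divided by it (work is conserved)."""
    return force_in / ratio, disp_in * ratio

# Hypothetical input: a short-stroke, high-force drive (40 uN over 0.2 um)
# transformed by a 17.5:1 lever into a long-stroke, medium-force output.
f_out, d_out = lever_output(40.0, 0.2, 17.5)
print(round(f_out, 3), round(d_out, 1))      # 2.286 3.5  (uN, um)
print(abs(40.0 * 0.2 - f_out * d_out) < 1e-9)  # True: work in == work out
```

Real microlevers fall short of this ideal because compliant fulcrums and joints absorb part of the input work.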

  8. 7 CFR 4290.1100 - Type of Leverage and application procedures.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Type of Leverage and application procedures. 4290... INVESTMENT COMPANY ("RBIC") PROGRAM Financial Assistance for RBICs (Leverage) General Information About Obtaining Leverage § 4290.1100 Type of Leverage and application procedures. (a) Type of Leverage available...

  9. 13 CFR 107.1100 - Types of Leverage and application procedures.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Types of Leverage and application... BUSINESS INVESTMENT COMPANIES SBA Financial Assistance for Licensees (Leverage) General Information About Obtaining Leverage § 107.1100 Types of Leverage and application procedures. (a) Types of Leverageable...

  10. Leveraging natural dynamical structures to explore multi-body systems

    Science.gov (United States)

    Bosanac, Natasha

    Multi-body systems have become the target of an increasing number of mission concepts and observations, supplying further information about the composition, origin and dynamical environment of bodies within the solar system and beyond. In many of these scenarios, identification and characterization of the particular solutions that exist in a circular restricted three-body model is valuable. This insight into the underlying natural dynamical structures is achieved via the application of dynamical systems techniques. One application of such analysis is trajectory design for CubeSats, which are intended to explore cislunar space and other planetary systems. These increasingly complex mission objectives necessitate innovative trajectory design strategies for spacecraft within our solar system, as well as the capability for rapid and well-informed redesign. Accordingly, a trajectory design framework is constructed using dynamical systems techniques and demonstrated for the Lunar IceCube mission. An additional application explored in this investigation involves the motion of an exoplanet near a binary star system. Due to the strong gravitational field near a binary star, physicists have previously leveraged these systems as testbeds for examining the validity of gravitational and relativistic theories. In this investigation, a preliminary analysis into the effect of an additional three-body interaction on the dynamical environment near a large mass ratio binary system is conducted. As demonstrated through both of these sample applications, identification and characterization of the natural particular solutions that exist within a multi-body system supports a well-informed and guided analysis.
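The circular restricted three-body model mentioned above has well-known planar equations of motion in the rotating frame; the sketch below encodes them and checks the equilibrium at the triangular point L4 (the Earth-Moon mass parameter used is approximate):

```python
import math

def cr3bp_accel(state, mu):
    """Planar circular restricted three-body equations of motion in the
    rotating frame (nondimensional units). state = (x, y, vx, vy);
    returns (ax, ay) including Coriolis and centrifugal terms."""
    x, y, vx, vy = state
    r1 = math.hypot(x + mu, y)          # distance to the larger primary
    r2 = math.hypot(x - 1 + mu, y)      # distance to the smaller primary
    ax = 2 * vy + x - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
    ay = -2 * vx + y - (1 - mu) * y / r1**3 - mu * y / r2**3
    return ax, ay

mu = 0.01215  # approximate Earth-Moon mass parameter
# The triangular point L4 is an equilibrium: zero velocity there gives
# zero acceleration.
L4 = (0.5 - mu, math.sqrt(3) / 2, 0.0, 0.0)
ax, ay = cr3bp_accel(L4, mu)
print(abs(ax) < 1e-12 and abs(ay) < 1e-12)  # True
```

Dynamical-systems trajectory design of the kind described builds on exactly these equilibria and the periodic orbits and manifolds around them.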

  11. Simulant Basis for the Standard High Solids Vessel Design

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, Reid A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fiskum, Sandra K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Suffield, Sarah R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Daniel, Richard C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gauglitz, Phillip A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wells, Beric E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-09-30

    The Waste Treatment and Immobilization Plant (WTP) is working to develop a Standard High Solids Vessel Design (SHSVD) process vessel. To support testing of this new design, WTP engineering staff requested that a Newtonian simulant and a non-Newtonian simulant be developed that would represent the Most Adverse Design Conditions (in development) with respect to mixing performance as specified by WTP. The majority of the simulant requirements are specified in 24590-PTF-RPT-PE-16-001, Rev. 0. The first step in this process is to develop the basis for these simulants. This document describes the basis for the properties of these two simulant types. The simulant recipes that meet this basis will be provided in a subsequent document.

  12. Robust Optimization Design Algorithm for High-Frequency TWTs

    Science.gov (United States)

    Wilson, Jeffrey D.; Chevalier, Christine T.

    2010-01-01

    Traveling-wave tubes (TWTs), such as the Ka-band (26-GHz) model recently developed for the Lunar Reconnaissance Orbiter, are essential as communication amplifiers in spacecraft for virtually all near- and deep-space missions. This innovation is a computational design algorithm that, for the first time, optimizes the efficiency and output power of a TWT while taking into account the effects of dimensional tolerance variations. Because they are primary power consumers and power generation is very expensive in space, much effort has been exerted over the last 30 years to increase the power efficiency of TWTs. However, at frequencies higher than about 60 GHz, efficiencies of TWTs are still quite low. A major reason is that at higher frequencies, dimensional tolerance variations from conventional micromachining techniques become relatively large with respect to the circuit dimensions. When this is the case, conventional design- optimization procedures, which ignore dimensional variations, provide inaccurate designs for which the actual amplifier performance substantially under-performs that of the design. Thus, this new, robust TWT optimization design algorithm was created to take account of and ameliorate the deleterious effects of dimensional variations and to increase efficiency, power, and yield of high-frequency TWTs. This design algorithm can help extend the use of TWTs into the terahertz frequency regime of 300-3000 GHz. Currently, these frequencies are under-utilized because of the lack of efficient amplifiers, thus this regime is known as the "terahertz gap." The development of an efficient terahertz TWT amplifier could enable breakthrough applications in space science molecular spectroscopy, remote sensing, nondestructive testing, high-resolution "through-the-wall" imaging, biomedical imaging, and detection of explosives and toxic biochemical agents.
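The effect the algorithm is designed to counter can be illustrated with a toy tolerance-aware optimization; the performance curve, tolerance magnitude, and all numbers below are invented for illustration and are not NASA's design code:

```python
import math
import random

# Toy illustration of tolerance-aware design optimization: a sharply
# peaked nominal optimum loses more under dimensional jitter than a
# broader, slightly lower plateau.

def performance(dim):
    """Hypothetical efficiency vs. one circuit dimension: a narrow peak
    at 1.000 plus a broad plateau near 1.200 (all values invented)."""
    narrow = 0.30 * math.exp(-((dim - 1.000) / 0.005) ** 2)
    broad = 0.25 * math.exp(-((dim - 1.200) / 0.050) ** 2)
    return narrow + broad

def expected_performance(dim, tol_sigma, n=200, seed=0):
    """Average performance under Gaussian dimensional tolerance."""
    rng = random.Random(seed)
    return sum(performance(dim + rng.gauss(0.0, tol_sigma)) for _ in range(n)) / n

candidates = [i / 1000 for i in range(900, 1301)]
nominal_best = max(candidates, key=performance)
robust_best = max(candidates, key=lambda d: expected_performance(d, 0.01))
print(nominal_best)            # 1.0: the narrow peak wins with no tolerance
print(round(robust_best, 2))   # near 1.2: the broad plateau wins under jitter
```

Using the same random draws for every candidate (common random numbers) keeps the Monte Carlo comparison smooth, a standard trick in simulation-based robust optimization.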

  13. Thermal design and analysis of high power star sensors

    Directory of Open Access Journals (Sweden)

    Fan Jiang

    2015-09-01

    The requirement for temperature stability in star sensors is very high because of the high precision needed for attitude information. Thermal design and analysis are thus important for high power star sensors and their supporters. The CCD, normally with a Peltier thermoelectric cooler (PTC), is the most important sensor component in a star sensor and is also the main heat source in the star sensor suite. The major objective of the thermal design in this paper is to design a radiator to optimize the heat diffusion for the CCD and PTC. The structural configuration of the star sensors, the heat sources, and the orbit parameters are first introduced. The influences of the geometrical parameters and coating material characteristics of radiators on the heat diffusion were investigated by heat flux analysis. Carbon–carbon composites were then chosen to improve the thermal conductivity of the sensor supporters by studying the heat transfer path. The design is validated by simulation analysis and on-orbit experiments. The satellite data show that the temperatures of the three star sensors are from 17.8 °C to 19.6 °C, while the simulation results are from 18.1 °C to 20.1 °C. The temperatures of the radiator are from 16.1 °C to 16.8 °C and the corresponding simulation results are from 16.0 °C to 16.5 °C. The temperature variation of each star sensor is less than 2 °C, which satisfies the design objectives.
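The radiator sizing at the heart of such a design follows from a simple radiative heat balance; the heat load and emissivity below are invented placeholders (only the ~16.5 °C radiator temperature echoes the reported range), and a real design also accounts for solar and albedo loads:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiator_area(q_watts, emissivity, t_rad_K, t_sink_K=4.0):
    """Radiator area needed to reject q_watts purely by thermal radiation:
    Q = eps * sigma * A * (T^4 - Tsink^4), solved for A."""
    return q_watts / (emissivity * SIGMA * (t_rad_K**4 - t_sink_K**4))

# Illustrative case: reject 5 W of CCD/PTC heat from a radiator held near
# 16.5 C (289.65 K) with a high-emissivity coating.
A = radiator_area(5.0, 0.9, 289.65)
print(round(A * 1e4, 1))  # required area in cm^2
```

The strong T^4 dependence is why even a few degrees of radiator temperature margin changes the required area noticeably.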

  14. Integrated Circuit Design in US High-Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Geronimo, G. D. [Brookhaven National Lab. (BNL), Upton, NY (United States); Christian, D. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Bebek, C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Garcia-Sciveres, M. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lippe, H. V. D. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Haller, G. [SLAC National Accelerator Lab., Menlo Park, CA (United States); Grillo, AA [Univ. of California, Santa Cruz, CA (United States); Newcomer, M [Univ. of Pennsylvania, Philadelphia, PA (United States)

    2013-07-10

    This whitepaper summarizes the status, plans, and challenges in the area of integrated circuit design in the United States for future High Energy Physics (HEP) experiments. It has been submitted to CPAD (Coordinating Panel for Advanced Detectors) and the HEP Community Summer Study 2013 (Snowmass on the Mississippi) held in Minnesota July 29 to August 6, 2013. A workshop titled US Workshop on IC Design for High Energy Physics (HEPIC2013) was held May 30 to June 1, 2013, at Lawrence Berkeley National Laboratory (LBNL). A draft of the whitepaper was distributed to the attendees before the workshop, the content was discussed at the meeting, and this document is the resulting final product. The scope of the whitepaper includes the following topics: needs for IC technologies to enable future experiments in the three HEP frontiers (Energy, Cosmic, and Intensity); challenges in the different technology and circuit design areas and the related R&D needs; motivation for using different fabrication technologies; outlook of future technologies, including 2.5D and 3D; a survey of ICs used in current experiments and ICs targeted for approved or proposed experiments; and IC design at US institutes and recommendations for future collaboration.

  15. Designing high power targets with computational fluid dynamics (CFD)

    Energy Technology Data Exchange (ETDEWEB)

    Covrig, Silviu D. [JLAB

    2013-11-01

    High power liquid hydrogen (LH2) targets, up to 850 W, have been widely used at Jefferson Lab for the 6 GeV physics program. The typical luminosity loss of a 20 cm long LH2 target was 20% for a beam current of 100 μA rastered on a square of side 2 mm on the target. The 35 cm long, 2500 W LH2 target for the Qweak experiment had a luminosity loss of 0.8% at 180 μA beam rastered on a square of side 4 mm at the target. The Qweak target was the highest power liquid hydrogen target in the world and had the lowest noise figure. The Qweak target was the first one designed with CFD at Jefferson Lab. A CFD facility is being established at Jefferson Lab to design, build and test a new generation of low noise high power targets.

  18. Computationally Designed Oligomers for High Contrast Black Electrochromic Polymers

    Science.gov (United States)

    2017-05-05

    AFRL-AFOSR-VA-TR-2017-0097, Computationally Designed Oligomers for High Contrast Black Electrochromic Polymers, Aimee Tomlinson, University Of North... Grant number FA9550-15-1-0181. ...a nearly black neutral state. Additionally, upon oxidation these polymers would have little to no tailing from the near IR, thereby guaranteeing nearly a...

  19. The high-throughput highway to computational materials design.

    Science.gov (United States)

    Curtarolo, Stefano; Hart, Gus L W; Nardelli, Marco Buongiorno; Mingo, Natalio; Sanvito, Stefano; Levy, Ohad

    2013-03-01

    High-throughput computational materials design is an emerging area of materials science. By combining advanced thermodynamic and electronic-structure methods with intelligent data mining and database construction, and exploiting the power of current supercomputer architectures, scientists generate, manage and analyse enormous data repositories for the discovery of novel materials. In this Review we provide a current snapshot of this rapidly evolving field, and highlight the challenges and opportunities that lie ahead.

  20. Optimal design of high concentration reflected photovoltaic module

    Science.gov (United States)

    Hsu, Cheng-Yi; Lin, Yuli

    2017-09-01

    In this study, the fabrication and design process of a high concentration reflected photovoltaic (HCRPV) module, using 3x3 arrays with a light guide tube and III-V solar cells, is demonstrated. The 3x3 array modules with the light guide tube were developed and designed so that the key design aims, highly uniform irradiance on the solar cell absorber and maximum light collection efficiency, are both satisfied. Maximum peak power output was obtained with a tracking system that tracks the sun position precisely in two phases, X-Y axis and θ-axis. With an optimized HCRPV system of 3x3 array modules with the light guide tube, the optimal condition was established and the characteristics and efficiency were measured. The improved HCRPV performance is attributed to the enhanced collection of light power from a large reflecting mirror area. The HCRPV module was fabricated from aluminum and coated with silver. From the simulation results, the light collection efficiency reaches about 94.9% with uniform irradiance. From the measurement results, the power is calculated to be 2.62 W to 2.74 W, which is about 90% of the power of the solar cell (3 W) used.

  1. Improved design for high resolution electrospray ionization ion mobility spectrometry.

    Science.gov (United States)

    Jafari, M T

    2009-03-15

    An improved design for high resolution electrospray ionization ion mobility spectrometry (ESI-IMS) was developed by making some salient modifications to the IMS cell, and its performance was investigated. To enhance desolvation of electrospray droplets at high sample flow rates in this new design, the volume of the desolvation region was decreased by reducing its diameter, and the entrance position of the desolvation gas was shifted to the end of the desolvation region (near the ion gate). In addition, the ESI source (both needle and counter electrode) was positioned outside the heating oven of the IMS. This modification made it possible to use the instrument at higher temperatures and prevented needle clogging during the electrospray process. Ion mobility spectra of different chemical compounds were obtained. The resolving power and resolution of the instrument were increased by about 15-30% relative to the previous design. In this work, baseline separation of the two adjacent ion peaks of morphine and those of codeine was achieved for the first time, with resolutions of 1.5 and 1.3, respectively. These four ion peaks were well separated from each other using carbon dioxide (CO2) rather than nitrogen as the drift gas. Finally, the analytical parameters obtained for ethion, metalaxyl, and tributylamine indicated the high performance of the instrument for quantitative analysis.
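
For context, the two figures of merit quoted above are conventionally defined as below; the drift times and peak widths in the example are hypothetical illustrations, not values from the paper.

```python
def resolving_power(t_drift, fwhm):
    """Single-peak IMS resolving power, R = t_drift / FWHM."""
    return t_drift / fwhm

def peak_resolution(t1, t2, w1, w2):
    """Two-peak resolution, Rs = 2*(t2 - t1) / (w1 + w2), with widths at base."""
    return 2.0 * abs(t2 - t1) / (w1 + w2)

# Hypothetical drift times and base widths (ms) for two adjacent peaks:
rs = peak_resolution(20.0, 21.5, 1.0, 1.0)   # Rs = 1.5, i.e. baseline separation
```

By these definitions, Rs around 1.5 corresponds to baseline separation, which is why the morphine result quoted above is significant.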

  2. High-Pressure Design of Advanced BN-Based Materials

    Directory of Open Access Journals (Sweden)

    Oleksandr O. Kurakevych

    2016-10-01

    Full Text Available The aim of the present review is to highlight the state of the art in high-pressure design of new advanced materials based on boron nitride. Recent experimental achievements on the governing phase transformations, nanostructuring and chemical synthesis in systems containing boron nitride at high pressures and high temperatures are presented. All these developments allowed the discovery of new materials, e.g., ultrahard nanocrystalline cubic boron nitride (nano-cBN) with hardness comparable to diamond, and the superhard boron subnitride B13N2. Thermodynamic and kinetic aspects of high-pressure synthesis are described based on the data obtained by in situ and ex situ methods. Mechanical and thermal properties (hardness, thermoelastic equations of state, etc.) are discussed. New synthetic perspectives, combining both soft chemistry and extreme pressure–temperature conditions, are considered.

  3. Lattice Design for a High-Power Infrared FEL

    Science.gov (United States)

    Douglas, D. R.

    1997-05-01

    A 1 kW infrared FEL, funded by the U.S. Navy, is under construction at Jefferson Lab. This device will be driven by a compact, 42 MeV, 5 mA, energy-recovering, CW SRF-based linear accelerator to produce light in the 3-6.6 μm range. The machine concept comprises a 10 MeV injector, a linac based on a single high-gradient Jefferson Lab accelerator cryomodule, a wiggler and optical cavity, and an energy-recovery recirculation arc. Energy recovery limits cost and technical risk by reducing the RF power requirements in the driver accelerator. Following deceleration to 10 MeV, the beam is dumped. Stringent phase space requirements at the wiggler, low beam energy, and high beam current subject the accelerator lattice to numerous constraints. Principal considerations include: transport and delivery to the FEL of a high-quality, high-current beam; the impact of coherent synchrotron radiation (CSR) during beam recirculation transport; beam optics aberration control, to provide low-loss energy-recovery transport of a 5% relative momentum spread, high-current beam; attention to possible beam breakup (BBU) instabilities in the recirculating accelerator; and longitudinal phase space management during beam transport, to optimize RF drive system control during energy recovery and FEL operation. The presentation will address the design process and design solution for an accelerator transport lattice that meets the requirements imposed by these physical phenomena and operational necessities.

  4. Design of Plasma Generator Driven by High-frequency High-voltage Power Supply

    Directory of Open Access Journals (Sweden)

    C. Yong-Nong

    2013-04-01

    Full Text Available In this research, a high-frequency high-voltage power supply designed for a plasma generator is presented. The power supply mainly consists of a series resonant converter with a high-frequency high-voltage boost transformer. Because of the indispensably high voltage inherent in the operation of a plasma generator, the transformer analysis must consider not only the winding resistance, leakage inductance, magnetizing inductance, and core-loss resistance, but also the parasitic capacitance resulting from the insulation wrappings on the high-voltage side. This research exhibits a simple approach to measuring the equivalent circuit parameters of the high-frequency, high-voltage transformer, with the stray capacitance introduced into the conventional model. The proposed modeling scheme provides not only a precise measurement procedure but also effective design information for the series-load resonant converter. The plasma discharging plate is designed as part of the electric circuit of the series load-resonant converter, and a circuit model of the plasma discharging plate is also developed. Thus, the overall model of the high-voltage plasma generator is built, and design procedures for appropriate selection of the corresponding resonant-circuit parameters can be established. Finally, a high-voltage plasma generator with a 220 V, 60 Hz, 1 kW input and a 22 kHz, over 8 kV output is realized and implemented.
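
The operating point of such a series resonant tank follows the usual relation f0 = 1/(2π√(LC)); the inductance and capacitance below are assumed values chosen only to land near the 22 kHz output mentioned above, not the paper's actual component values.

```python
import math

def resonant_frequency(l_henry, c_farad):
    """Undamped series LC resonant frequency, f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

# Assumed tank values (not from the paper), chosen to sit near 22 kHz:
f0 = resonant_frequency(1.0e-3, 52.0e-9)   # ~22 kHz
```

In practice the transformer's leakage inductance and the parasitic capacitance discussed above both fold into L and C, which is why the measurement procedure matters for placing f0 accurately.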

  6. Leveraging HIPAA to support consumer empowerment.

    Science.gov (United States)

    Niedzwiecki, P; Priest, S L; Pivnicny, V C; Ruffino, B C

    2000-01-01

    The consumer empowerment movement needs to provide consumers with more access and control of their healthcare records. The premise of this article is that there is a fundamental market shift towards consumer empowerment--and technology is the driving force. We contend the results will satisfy the intent of the HIPAA mandate. Two restrictions impede the market from moving toward real consumer empowerment. First, managing one's own health history record is difficult because the complete record is segmented in disparate systems that are difficult to integrate. This is because unique identifiers and consistent coding are nonexistent. Second, security and control of patient identifiable health information is still evolving. There is no consensus among providers for Internet security, as we can see by all the legislative privacy bills trying to address the issue. HIPAA is both a legislative mandate and an enabler of the next healthcare paradigm. Providers must comply with the HIPAA mandates for electronic data interchange (EDI) code sets, administrative simplification, and privacy and confidentiality protocols. By recognizing HIPAA as part of a consumer-driven movement, organizations can incorporate empowerment strategies into a planning process that creates consumer options in healthcare and leverages HIPAA compliance to benefit both providers and consumers. This article suggests methods for meeting HIPAA compliance through innovative consumer empowerment methods.

  7. High Flux Isotope Reactor cold neutron source reference design concept

    Energy Technology Data Exchange (ETDEWEB)

    Selby, D.L.; Lucas, A.T.; Hyman, C.R. [and others

    1998-05-01

    In February 1995, Oak Ridge National Laboratory's (ORNL's) deputy director formed a group to examine the need for upgrades to the High Flux Isotope Reactor (HFIR) system in light of the cancellation of the Advanced Neutron Source Project. One of the major findings of this study was that there was an immediate need for the installation of a cold neutron source facility in the HFIR complex. In May 1995, a team was formed to examine the feasibility of retrofitting a liquid hydrogen (LH2) cold source facility into an existing HFIR beam tube. The results of this feasibility study indicated that the most practical location for such a cold source was the HB-4 beam tube. This location provides a potential flux environment higher than the Institut Laue-Langevin (ILL) vertical cold source and maximizes the space available for a future cold neutron guide hall expansion. It was determined that this cold neutron beam would be comparable, in cold neutron brightness, to the best facilities in the world, and a decision was made to complete a preconceptual design study with the intention of proceeding with an activity to install a working LH2 cold source in the HFIR HB-4 beam tube. During the development of the reference design the liquid hydrogen concept was changed to a supercritical hydrogen system for a number of reasons. This report documents the reference supercritical hydrogen design and its performance. The cold source project has been divided into four phases: (1) preconceptual, (2) conceptual design and testing, (3) detailed design and procurement, and (4) installation and operation. This report marks the conclusion of the conceptual design phase and establishes the baseline reference concept.

  8. High-Flow Jet Exit Rig Designed and Fabricated

    Science.gov (United States)

    Buehrle, Robert J.; Trimarchi, Paul A.

    2003-01-01

    The High-Flow Jet Exit Rig at the NASA Glenn Research Center is designed to test single flow jet nozzles and to measure the appropriate thrust and noise levels. The rig has been designed for the maximum hot condition of 16 lbm/sec of combustion air at 1960 R (maximum) and to produce a maximum thrust of 2000 lb. It was designed for cold flow of 29.1 lbm/sec of air at 530 R. In addition, it can test dual-flow nozzles (nozzles with bypass flow in addition to core flow) with independent control of each flow. The High-Flow Jet Exit Rig was successfully fabricated in late 2001 and is being readied for checkout tests. The rig will be installed in Glenn's Aeroacoustic Propulsion Laboratory. The High-Flow Jet Exit Rig consists of the following major components: a single component force balance, the natural-gas-fueled J-79 combustor assembly, the plenum and manifold assembly, an acoustic/instrumentation/seeding (A/I/S) section, a table, and the research nozzles. The rig will be unique in that it is designed to operate uncooled. The structure survives the 1960 R test condition because it uses carefully selected high temperature alloy materials such as Hastelloy-X. The lower plenum assembly was designed to operate at pressures to 450 psig at 1960 R, in accordance with the ASME B31.3 piping code. The natural gas-fueled combustor fires directly into the lower manifold. The hot air is directed through eight 1-1/2-in. supply pipes that supply the upper plenum. The flow is conditioned in the upper plenum prior to flowing to the research nozzle. The 1-1/2-in. supply lines are arranged in a U-shaped design to provide for a flexible piping system. The combustor assembly checkout was successfully conducted in Glenn's Engine Component Research Laboratory in the spring of 2001. The combustor is a low-smoke version of the J79 combustor used to power the F4 Phantom military aircraft. The natural gas-fueled combustor demonstrated high-efficiency combustion over a wide range of operating conditions.

  9. Design of UAV high resolution image transmission system

    Science.gov (United States)

    Gao, Qiang; Ji, Ming; Pang, Lan; Jiang, Wen-tao; Fan, Pengcheng; Zhang, Xingcheng

    2017-02-01

    In order to solve the problem of the bandwidth limitation of the image transmission system on a UAV, a scheme using image compression technology for a mini UAV is proposed, based on the requirements of a high-definition UAV image transmission system. The H.264 video coding standard module and its key technology were analyzed and studied for UAV area video communication. Based on research into high-resolution image encoding and decoding techniques and wireless transmission methods, the high-resolution image transmission system was designed on an Android architecture with a video codec chip. The constructed system was confirmed by laboratory experiments: the bit rate could be controlled easily, QoS was stable, and the low latency meets most application requirements, for military as well as industrial use.

  10. Launch vehicle systems design analysis

    Science.gov (United States)

    Ryan, Robert; Verderaime, V.

    1993-01-01

    Current launch vehicle design emphasis is on low life-cycle cost. This paper applies total quality management (TQM) principles to a conventional systems design analysis process to provide low-cost, high-reliability designs. Suggested TQM techniques include Steward's systems information flow matrix method, the quality leverage principle, quality through robustness and function deployment, Pareto's principle, Pugh's selection and enhancement criteria, and other design process procedures. TQM quality performance at least cost can be realized through competent concurrent engineering teams and the brilliance of their technical leadership.

  11. Lattice Design in High-energy Particle Accelerators

    CERN Document Server

    Holzer, B.J.

    2014-01-01

    This lecture gives an introduction to the design of high-energy storage ring lattices. Applying the formalism that has been established in transverse beam optics, the basic principles of the development of a magnet lattice are explained and the characteristics of the resulting magnet structure are discussed. The periodic assembly of a storage ring cell with its boundary conditions concerning stability and scaling of the beam optics parameters is addressed, as well as special lattice insertions such as drifts, mini beta sections, dispersion suppressors, etc. In addition to the exact calculations that are indispensable for a rigorous treatment of the matter, scaling rules are shown and simple rules of thumb are included that enable the lattice designer to do the first estimates and get the basic numbers 'on the back of an envelope'.
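
The kind of back-of-envelope estimate mentioned above can be checked numerically for the textbook thin-lens FODO cell; this sketch uses the standard one-cell stability criterion |Tr M| <= 2 and is illustrative, not taken from the lecture itself.

```python
# Thin-lens FODO cell: half QF, drift L/2, QD, drift L/2, half QF.
# Stability of the periodic cell requires |Tr(M)| <= 2, i.e. f > L/4.

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def fodo_cell(L, f):
    """One-cell transfer matrix for cell length L and quad focal length f."""
    drift = [[1.0, L / 2.0], [0.0, 1.0]]
    qf_half = [[1.0, 0.0], [-1.0 / (2.0 * f), 1.0]]   # half focusing quad
    qd = [[1.0, 0.0], [1.0 / f, 1.0]]                 # full defocusing quad
    return matmul(qf_half, matmul(drift, matmul(qd, matmul(drift, qf_half))))

def is_stable(M):
    """Periodic stability: |Tr M| = |2 cos(mu)| must not exceed 2."""
    return abs(M[0][0] + M[1][1]) <= 2.0
```

For L = 10 m and f = 5 m the trace is 2 cos(mu) = 2(1 - L^2/(8 f^2)) = 1, a stable cell, while f = 2 m (below L/4) is unstable.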

  12. Reprogrammable Controller Design From High-Level Specification

    Directory of Open Access Journals (Sweden)

    M. Benmohammed

    2003-10-01

    Full Text Available Existing techniques in high-level synthesis mostly assume a simple controller architecture model in the form of a single FSM. However, in reality more complex controller architectures are often used. On the other hand, in the case of programmable processors, the controller architecture is largely defined by the available control-flow instructions in the instruction set. With the wider acceptance of behavioral synthesis, the application of these methods for the design of programmable controllers is of fundamental importance in embedded system technology. This paper describes an important extension of an existing architectural synthesis system targeting the generation of ASIP reprogrammable architectures. The designer can then generate both style of architecture, hardwired and programmable, using the same synthesis system and can quickly evaluate the trade-offs of hardware decisions.

  13. Design considerations for achieving high vacuum integrity in fusion devices

    Energy Technology Data Exchange (ETDEWEB)

    Fuller, G.M.; Haines, J.R.

    1983-01-01

    Achieving high vacuum integrity in fusion devices requires close attention to both the overall system configuration and the design details of joints and seals. This paper describes the factors in selecting the system configuration, from a vacuum standpoint, for the Princeton Plasma Physics Laboratory (PPPL) DCT-8 Tokamak device. The DCT-8 (driven current tokamak) is the eighth design in a series of tokamak concepts defined to cover the magnetic confinement and development gap between the Tokamak Fusion Test Reactor (TFTR) and the Engineering Test Reactor (ETR). Leak detection concept development is considered a vital activity, as well as the definition of a configuration that minimizes the consequences of leaks. A major part of the vacuum boundaries of the magnet system and the plasma system is common. For the major penetrations, primary and secondary seals are provided with vacuum control over the region between seals. The intent is to instrument these cavities and provide automated recordings of these measurements for leak maintenance.

  14. High powered rocketry: design, construction, and launching experience and analysis

    Science.gov (United States)

    Paulson, Pryce; Curtis, Jarret; Bartel, Evan; Owens Cyr, Waycen; Lamsal, Chiranjivi

    2018-01-01

    In this study, the nuts and bolts of designing and building a high powered rocket are presented. A computer simulation program called RockSim was used to design the rocket. Simulation results are consistent with the time variations of altitude, velocity, and acceleration obtained in the actual flight. The actual drag coefficient was determined by using an altitude back-tracking method and found to be 0.825. The speed of the exhaust was determined to be 2.5 km/s by analyzing the thrust curve of the rocket. The acceleration in the coasting phase of the flight, represented by a second-degree polynomial with a small leading coefficient, was found to approach -g asymptotically.
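
The coasting-phase behaviour described above, deceleration approaching -g as drag dies away, can be sketched with a standard quadratic-drag model; only Cd = 0.825 comes from the abstract, while the mass, frontal area, and air density below are assumed for illustration.

```python
# Quadratic-drag coasting model: a(v) = -g - (rho * Cd * A / (2 * m)) * v * |v|.
# Cd = 0.825 is the abstract's value; mass, area, and density are assumed.
def coast_acceleration(v, cd=0.825, rho=1.225, area=0.008, mass=1.5, g=9.81):
    """Deceleration (m/s^2) during coast; tends to -g as the rocket slows."""
    return -g - (rho * cd * area / (2.0 * mass)) * v * abs(v)
```

As v approaches zero the drag term vanishes and a(v) approaches -g, which is the asymptotic behaviour the flight data showed.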

  15. Ultra-high-Q nanobeam cavity design in Diamond

    CERN Document Server

    Bayn, Igal; Kalish, Rafi

    2010-01-01

    A novel nanobeam design with a triangular cross-section is proposed. This design makes it possible to implement nanocavities with improved optical properties. The dependence of the quality factor Q and mode volume Vm of a diamond-based cavity on the geometry parameter space is studied via 3D FDTD computations. An ultra-high-Q cavity with Q ≈ 2.51 × 10^6 and Vm = 1.06 × (λ/n)^3 is predicted. The mode radiates preferentially upward. The implications for potential applications are discussed. The proposed nanobeam enables fabrication of the cavity without relying on a pre-existing free-standing diamond membrane, as required in most previous approaches.
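
One common use of such Q and Vm figures is estimating the Purcell enhancement, F = (3/4π²) · Q / V with V in units of (λ/n)³; plugging in the values predicted above is an illustrative calculation, not a claim made in the paper.

```python
import math

def purcell_factor(q, v_mode_normalized):
    """Purcell factor F = (3 / (4*pi^2)) * Q / V, with V in units of (lambda/n)^3."""
    return (3.0 / (4.0 * math.pi**2)) * q / v_mode_normalized

# Values quoted in the abstract: Q ~ 2.51e6, Vm = 1.06 (lambda/n)^3.
f = purcell_factor(2.51e6, 1.06)   # on the order of 1e5
```

An enhancement on the order of 10^5 is what makes such cavities attractive for coupling to diamond colour centres.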

  16. High-Speed EMU TCMS Design and LCC Technology Research

    Directory of Open Access Journals (Sweden)

    Hongwei Zhao

    2017-02-01

    Full Text Available This paper introduces the high-speed electrical multiple unit (EMU) life cycle, including the design, manufacturing, testing, and maintenance stages. It also presents the train control and monitoring system (TCMS) software development platform, the TCMS testing and verification bench, the EMU driving simulation platform, and the EMU remote data transmittal and maintenance platform. All these platforms and benches combined make up the EMU life cycle cost (LCC) system. Each platform facilitates EMU LCC management and is an important part of the system.

  17. Designing High-Refractive Index Polymers Using Materials Informatics

    Directory of Open Access Journals (Sweden)

    Vishwesh Venkatraman

    2018-01-01

    Full Text Available A machine learning strategy is presented for the rapid discovery of new polymeric materials satisfying multiple desirable properties. Of particular interest is the design of high refractive index polymers. Our in silico approach employs a series of quantitative structure–property relationship models that facilitate rapid virtual screening of polymers based on relevant properties such as the refractive index, glass transition and thermal decomposition temperatures, and solubility in standard solvents. Exploration of the chemical space is carried out using an evolutionary algorithm that assembles synthetically tractable monomers from a database of existing fragments. Selected monomer structures that were further evaluated using density functional theory calculations agree well with model predictions.
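
A minimal sketch of the multi-property screening idea described above; the property names, thresholds, and weights are invented placeholders, not the paper's actual QSPR models or cutoffs.

```python
# Toy multi-property fitness for screening candidate polymers.
# Property models, thresholds, and weights are placeholders (assumed).
def fitness(props, targets):
    """Weighted count of property targets a candidate meets.

    props:   predicted properties, e.g. {"ri": 1.72, "tg": 420.0}
    targets: {property_name: (minimum_value, weight)}
    """
    score = 0.0
    for name, (minimum, weight) in targets.items():
        if props.get(name, float("-inf")) >= minimum:
            score += weight
    return score

targets = {"ri": (1.70, 2.0), "tg": (400.0, 1.0)}   # refractive index, glass transition (K)
candidates = [{"ri": 1.72, "tg": 420.0}, {"ri": 1.65, "tg": 450.0}]
best = max(candidates, key=lambda p: fitness(p, targets))
```

In an evolutionary loop this fitness would rank monomer assemblies each generation; the high-index candidate wins here because it clears both thresholds.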

  18. Design of a highly segmented Endcap at a CLIC detector

    CERN Document Server

    Gerwig, H; Siegrist, N

    2010-01-01

    This technical note describes a possible design for a highly segmented end-cap of a CLIC detector with a strong magnetic field of up to 5 Tesla. The reinforcement is horizontal in order to allow insertion of the muon chambers from the side. Construction issues, assembly questions, and muon chamber access and support questions have been studied. An FEA analysis to optimize the dead space for physics and to check the weakening effect of alignment channels through the end-cap has been performed.

  19. Design and analysis of a high pressure and high temperature sulfuric acid experimental system

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Sung-Deok, E-mail: sdhong1@kaeri.re.kr [Korea Atomic Energy Research Institute, Yuseong-Gu, Daejeon 305-600 (Korea, Republic of); Kim, Chan-Soo; Kim, Yong-Wan [Korea Atomic Energy Research Institute, Yuseong-Gu, Daejeon 305-600 (Korea, Republic of); Seo, Dong-Un; Park, Goon-Cherl [Seoul National University, San56-1, Sillim-Dong, Kwanak-Gu, Seoul 151-742 (Korea, Republic of)

    2012-10-15

    We discuss the design and analysis of a small scale sulfuric acid experimental system that can simulate a part of the hydrogen production module. Because nuclear hydrogen coupled components such as a SO3 decomposer and a sulfuric acid evaporator should be tested under high pressure and high temperature operating conditions, we developed the sulfuric acid loop to satisfy design specifications of 900 °C in temperature and 1.0 MPa in pressure. The components for the sulfuric acid loop were specially designed using a combination of materials with good corrosion resistance: a ceramic and Hastelloy-C276. The design feature of the loop was tested for performance in a 10 h sulfuric acid experiment and optimized using Aspen+ code simulation.

  20. Crowdteaching: Supporting Teaching as Designing in Collective Intelligence Communities

    Science.gov (United States)

    Recker, Mimi; Yuan, Min; Ye, Lei

    2014-01-01

    The widespread availability of high-quality Web-based content offers new potential for supporting teachers as designers of curricula and classroom activities. When coupled with a participatory Web culture and infrastructure, teachers can share their creations as well as leverage from the best that their peers have to offer to support a collective…

  1. Designer laccases: a vogue for high-potential fungal enzymes?

    Science.gov (United States)

    Rodgers, Caroline J; Blanford, Christopher F; Giddens, Stephen R; Skamnioti, Pari; Armstrong, Fraser A; Gurr, Sarah J

    2010-02-01

    Laccases are blue multicopper oxidases that catalyse the four-electron reduction of O2 to water coupled with the oxidation of small organic substrates. Secreted basidiomycete white-rot fungal laccases orchestrate this with high thermodynamic efficiency, making these enzymes excellent candidates for exploitation as industrial oxidants. However, these fungi are less tractable genetically than the ascomycetes, which predominantly produce lower-potential laccases. We address the state of play regarding expression of high reduction potential laccases in heterologous hosts, and issues regarding enzyme glycosylation status. We describe the synergistic role of structural biology, particularly in unmasking structure-function relationships following genetic modification, and their collective impact on laccase yields. Such recent research draws closer the prospect of industrial quantities of designer, fit-for-purpose laccases.

  2. Designing high-temperature steels via surface science and thermodynamics

    Science.gov (United States)

    Gross, Cameron T.; Jiang, Zilin; Mathai, Allan; Chung, Yip-Wah

    2016-06-01

    Electricity in many countries such as the US and China is produced by burning fossil fuels in steam-turbine-driven power plants. The efficiency of these power plants can be improved by increasing the operating temperature of the steam generator. In this work, we adopted a combined surface science and computational thermodynamics approach to the design of high-temperature, corrosion-resistant steels for this application. The result is a low-carbon ferritic steel with nanosized transition metal monocarbide precipitates that are thermally stable, as verified by atom probe tomography. High-temperature Vickers hardness measurements demonstrated that these steels maintain their strength for extended periods at 700 °C. We hypothesize that the improved strength of these steels is derived from the semi-coherent interfaces of these thermally stable, nanosized precipitates exerting drag forces on impinging dislocations, thus maintaining strength at elevated temperatures.

  3. Design and control of a high precision drive mechanism

    Science.gov (United States)

    Pan, Bo; He, Yongqiang; Wang, Haowei; Zhang, Shuyang; Zhang, Donghua; Wei, Xiaorong; Jiang, Zhihong

    2017-01-01

    This paper summarizes the development of a high precision drive mechanism (HPDM) for space applications such as directional antennas, laser communication devices, mobile cameras, and other pointing mechanisms. In view of the great practical significance of high precision drive systems, control technology for the permanent magnet synchronous motor (PMSM) servo system is also studied, and a PMSM servo controller is designed in this paper. Software alignment was applied in the controller to eliminate the steady-state error of the optical encoder, which helps to realize 1 arcsec (1σ) control precision. To assess its capabilities, qualification environment testing, including thermal vacuum cycling and sinusoidal and random vibration, was carried out. The testing results show that the performance of the HPDM was almost unchanged before and after each test.

  4. Design of Reforma 509 with High Strength Steel

    Science.gov (United States)

    Smith, Stuart; Whitby, William; Easton, Marc

    Reforma 509 is a high-rise building located in the heart of the Central Business District of Mexico City. The building comprises office, hotel, residential, and parking uses and forms part of a cluster of tall buildings in the area. If completed today, Reforma 509 would be the tallest building in Mexico, at 238 m. All of the building's gravity and lateral (wind and seismic) loads are carried by an architecturally expressed perimeter frame formed from highly efficient Steel Reinforced Concrete (SRC) columns coupled together by steel tube perimeter bracing. This paper investigates the implications of substituting a grade 50 (fy = 345 MPa) carbon steel with a higher strength micro-alloyed grade 70 (fy = 480 MPa) steel in the design of Reforma 509.

  5. High concentration methanol fuel cells: Design and theory

    Science.gov (United States)

    Shaffer, Christian E.; Wang, Chao-Yang

    Use of highly concentrated methanol fuel is required for direct methanol fuel cells (DMFCs) to compete with the energy density of Li-ion batteries. Because one mole of H2O is needed to oxidize one mole of methanol (CH3OH) in the anode, low water crossover to the cathode, or even water back flow from the cathode into the anode, is a prerequisite for using highly concentrated methanol. It has previously been demonstrated that low or negative water crossover can be realized by the incorporation of a low-α membrane electrode assembly (MEA), which is essentially an MEA designed for optimal water management, using, e.g., hydrophobic anode and cathode microporous layers (aMPL and cMPL). In this paper we extend the low-α MEA concept to include an anode transport barrier (aTB) between the backing layer and hydrophobic aMPL. The main role of the aTB is to act as a barrier to CH3OH and H2O diffusion between a water-rich anode catalyst layer (aCL) and a methanol-rich fuel feed. The primary role of the hydrophobic aMPL in this MEA is to facilitate a low (or negative) water crossover to the cathode. Using a previously developed 1D, two-phase DMFC model, we show that this novel design yields a cell with low methanol crossover (i.e. high fuel efficiency, ∼80%, at a typical operating current density of ∼80-90% of the cell limiting current density), while directly feeding high concentration methanol fuel into the anode. The physics of how the aTB and aMPL work together to accomplish this is fully elucidated. We further show that a thicker, more hydrophilic, more permeable aTB, and a thicker, more hydrophobic, less permeable aMPL are most effective in accomplishing low CH3OH and H2O crossover.
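    The fuel efficiency quoted above is commonly defined as the fraction of methanol oxidized usefully at the anode rather than lost by crossover. A minimal sketch of that definition, with illustrative numbers that are not taken from the paper:

```python
def fuel_efficiency(i_cell, i_crossover):
    """Faradaic fuel efficiency of a DMFC: fraction of the methanol consumed
    that delivers useful current at the anode, rather than being lost by
    crossover to the cathode (expressed via equivalent current densities)."""
    return i_cell / (i_cell + i_crossover)

# Illustrative values only: a cell current density of 0.85 (arbitrary units)
# with a crossover-equivalent current density of 0.21 lands in the ~80%
# efficiency regime the abstract describes.
eta = fuel_efficiency(0.85, 0.21)
print(eta)
```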

  6. RISK, PROFITABILITY, OPERATING LEVERAGE, AND FIRM SIZE IN RELATION TO INCOME SMOOTHING

    Directory of Open Access Journals (Sweden)

    Syafriont By

    2017-03-01

    A recent analysis showed that there was a significant effect of firm size, corporate risk, profitability, and operating leverage on corporate income smoothing practices. The objective of this research was to empirically reexamine the factors that affected income smoothing practices. Four factors were examined: firm size, corporate risk, profitability, and operating leverage. The samples used in this study were 89 firms listed on the Indonesian Stock Exchange (ISE) between 2005 and 2007. In the multivariate test, logistic regression results showed that both risk and profitability significantly affected income smoothing practices, while firm size and operating leverage did not. The univariate test supported this, showing statistically significant differences in risk as well as profitability between smoother and non-smoother firms, whereas firm size and operating leverage were not statistically different.
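    The multivariate test described above amounts to a logistic regression of a binary smoothing indicator on the four firm characteristics. A minimal sketch on simulated data (the data, feature ordering, and effect sizes below are invented for illustration; the study's actual sample was 89 ISE-listed firms):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logit(X, y, lr=0.1, epochs=800):
    """Logistic regression by batch gradient descent; w[0] is the intercept."""
    n, k = len(X), len(X[0])
    w = [0.0] * (k + 1)
    for _ in range(epochs):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            err = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))) - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

# Simulated firms: columns are [size, risk, profitability, operating leverage].
# Labels are driven by risk + profitability only, mimicking the reported finding.
random.seed(1)
X = [[random.gauss(0.0, 1.0) for _ in range(4)] for _ in range(200)]
y = [1 if xi[1] + xi[2] > 0.0 else 0 for xi in X]

w = fit_logit(X, y)
# Coefficients on risk (w[2]) and profitability (w[3]) dominate size and leverage.
print([round(v, 2) for v in w])
```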

  7. Cash Holdings and Leverage of German Listed Firms

    DEFF Research Database (Denmark)

    Rapp, Marc Steffen; Killi, Andreas Maximilian

    2016-01-01

    We examine cash holdings and leverage levels of German listed (non-financial and non-utility) firms. We document a secular increase in cash ratios over the last twenty years (1992–2011), reducing the net debt book leverage ratio for the average sample firm close to zero. Using prediction models with standard firm characteristics, our results suggest a fundamental change in firms’ financial policies: in the second half of the sample period, both established firms and IPO firms exhibit substantially higher (lower) cash (net debt leverage) levels than predicted. The unexpected changes among established firms are associated with measures of uncertainty faced by firms. Our results suggest that German firms have increased (reduced) their cash (net debt leverage) levels over time in order to adopt more precautionary financial policies.

  8. Design concept of the high performance light water reactor

    Energy Technology Data Exchange (ETDEWEB)

    Schulenberg, Thomas; Starflinger, Joerg [Forschungszentrum Karlsruhe GmbH Technik und Umwelt (Germany). Inst. for Nuclear and Energy Technologies; Bittermann, Dietmar [AREVA NP GmbH, Erlangen (Germany). NEP-G Process

    2009-04-15

    The 'High Performance Light Water Reactor' (HPLWR) is a Light Water Reactor operating with supercritical water as coolant. At a pressure of 25 MPa in the core, water is heated from 280 to 500 °C. For these conditions, the envisaged net plant efficiency is 43.5%. The core design concept is based on a so-called '3-pass-core' in which the coolant is heated in three subsequent steps. After each step, the coolant is mixed to avoid hot streaks that could lead to unacceptable wall temperatures. The design of such a core comprises fuel assemblies containing 40 fuel rods and an inner and outer box for better neutron moderation. Nine of these are assembled into a cluster with a common head and foot piece. The coolant is mixed inside an upper and a lower mixing chamber and leaves the reactor pressure vessel through a co-axial pipe, which protects the vessel wall against excessive temperatures. (orig.)

  9. Standard High Solids Vessel Design De-inventory Simulant Qualification

    Energy Technology Data Exchange (ETDEWEB)

    Fiskum, Sandra K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Burns, Carolyn A.M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gauglitz, Phillip A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Linn, Diana T. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Peterson, Reid A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Smoot, Margaret R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-09-12

    The Hanford Tank Waste Treatment and Immobilization Plant (WTP) is working to develop a Standard High Solids Vessel Design (SHSVD) process vessel. To support testing of this new design, WTP engineering staff requested that a Newtonian simulant be developed that would represent the de-inventory (residual high-density tank solids cleanout) process. Its basis and target characteristics are defined in 24590-WTP-ES-ENG-16-021 and implemented through PNNL Test Plan TP-WTPSP-132 Rev. 1.0. This document describes the de-inventory Newtonian carrier fluid (DNCF) simulant composition that will satisfy the basis requirement to mimic the density (1.18 g/mL ± 0.1 g/mL) and viscosity (2.8 cP ± 0.5 cP) of 5 M NaOH at 25 °C. The simulant viscosity changes significantly with temperature. Therefore, various solution compositions may be required, dependent on the test stand process temperature range, to meet these requirements. Table ES.1 provides DNCF compositions at selected temperatures that will meet the density and viscosity specifications as well as the temperature range at which the solution will meet the acceptable viscosity tolerance.

  10. Designing the ATLAS trigger menu for high luminosities

    Science.gov (United States)

    Nakahama, Yu

    2012-12-01

    The LHC has a bunch-crossing rate of 20 MHz whereas the ATLAS detector has an average recording rate of about 400 Hz. To reduce the rate of events while maintaining a high efficiency for selecting the interesting events needed by ATLAS physics analyses, a three-level trigger system is used in ATLAS. Events are selected based on the Trigger Menu, the definitions of the physics signatures the experiment triggers on. In the 2012 data taking since April, approximately 700 chains are used online. The menu must reflect not only the physics goals of the collaboration but also take into consideration the LHC luminosity and the strict DAQ limitations. An overview of the design, validation, and performance of the trigger menu for the 2011 data-taking is given. During 2011, the menu had to evolve as the luminosity increased from below 2×10^33 cm^-2 s^-1 to almost 5×10^33 cm^-2 s^-1. The re-design of the menu for the upcoming high luminosity of around 10^34 cm^-2 s^-1, with a large pile-up of around 35 interactions per bunch crossing at √s = 8 TeV, is described. Initial performance in the 2012 data-taking is also reported.
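    The rates quoted above imply the overall rejection the three trigger levels must deliver. A back-of-envelope check (the equal split across levels is an illustrative assumption, not how ATLAS actually allocates rejection):

```python
# Rates from the abstract: 20 MHz bunch-crossing rate in, ~400 Hz recorded out.
input_rate_hz = 20e6
output_rate_hz = 400.0

overall_rejection = input_rate_hz / output_rate_hz      # factor of 50,000
per_level_rejection = overall_rejection ** (1.0 / 3.0)  # ~37x if split evenly

print(overall_rejection, per_level_rejection)
```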

  11. High performance integer arithmetic circuit design on FPGA architecture, implementation and design automation

    CERN Document Server

    Palchaudhuri, Ayan

    2016-01-01

    This book describes the optimized implementations of several arithmetic datapath, controlpath and pseudorandom sequence generator circuits for realization of high performance arithmetic circuits targeted towards a specific family of the high-end Field Programmable Gate Arrays (FPGAs). It explores regular, modular, cascadable, and bit-sliced architectures of these circuits, by directly instantiating the target FPGA-specific primitives in the HDL. Every proposed architecture is justified with detailed mathematical analyses. Simultaneously, constrained placement of the circuit building blocks is performed, by placing the logically related hardware primitives in close proximity to one another by supplying relevant placement constraints in the Xilinx proprietary “User Constraints File”. The book covers the implementation of a GUI-based CAD tool named FlexiCore integrated with the Xilinx Integrated Software Environment (ISE) for design automation of platform-specific high-performance arithmetic circuits from us...

  12. A simple high-precision Jacob's staff design for the high-resolution stratigrapher

    Science.gov (United States)

    Elder, W.P.

    1989-01-01

    The new generation of high-resolution stratigraphic research depends upon detailed bed-by-bed analysis to enhance regional correlation potential. The standard Jacob's staff is not an efficient and precise tool for measuring thin-bedded strata. The high-precision Jacob's staff design presented and illustrated in this paper meets the qualifications required of such an instrument. The prototype of this simple design consists of a sliding bracket that holds a Brunton-type compass at right angles to a ruled-off staff. This instrument provides rapid and accurate measurement of both thick- or thin-bedded sequences, thus decreasing field time and increasing stratigraphic precision. -Author

  13. PDF text classification to leverage information extraction from publication reports.

    Science.gov (United States)

    Bui, Duy Duc An; Del Fiol, Guilherme; Jonnalagadda, Siddhartha

    2016-06-01

    Data extraction from original study reports is a time-consuming, error-prone process in systematic review development. Information extraction (IE) systems have the potential to assist humans in the extraction task; however, the majority of IE systems were not designed to work on Portable Document Format (PDF) documents, an important and common extraction source for systematic reviews. In a PDF document, narrative content is often mixed with publication metadata or semi-structured text, which adds challenges for the underlying natural language processing algorithms. Our goal is to categorize PDF texts for strategic use by IE systems. We used an open-source tool to extract raw text from a PDF document and developed a text classification algorithm that follows a multi-pass sieve framework to automatically classify PDF text snippets (for brevity, texts) into TITLE, ABSTRACT, BODYTEXT, SEMISTRUCTURE, and METADATA categories. To validate the algorithm, we developed a gold standard of PDF reports that were included in the development of previous systematic reviews by the Cochrane Collaboration. In a two-step procedure, we evaluated (1) classification performance, compared with a machine learning classifier, and (2) the effects of the algorithm on an IE system that extracts clinical outcome mentions. The multi-pass sieve algorithm achieved an accuracy of 92.6%, 9.7% higher than the machine learning classifier on PDF documents. Text classification is an important prerequisite step to leverage information extraction from PDF documents. Copyright © 2016 Elsevier Inc. All rights reserved.
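    The multi-pass sieve idea — high-precision rules run first, and each pass labels only the snippets left unclaimed by earlier passes — can be sketched with toy rules. The heuristics and demo snippets below are invented for illustration; the paper's actual sieves (and its SEMISTRUCTURE category) are more elaborate:

```python
def classify_snippets(snippets):
    """Toy multi-pass sieve: high-precision rules claim snippets first;
    anything left unclaimed falls through to the default BODYTEXT label."""
    labels = {}

    def metadata_pass(s):
        return any(k in s.lower() for k in ("doi:", "copyright", "received:"))

    def title_pass(s):
        return s.isupper() and len(s.split()) <= 12  # crude all-caps heuristic

    def abstract_pass(s):
        return s.lower().startswith("abstract")

    sieves = [("METADATA", metadata_pass),
              ("TITLE", title_pass),
              ("ABSTRACT", abstract_pass)]
    for label, rule in sieves:
        for i, s in enumerate(snippets):
            if i not in labels and rule(s):
                labels[i] = label
    for i in range(len(snippets)):
        labels.setdefault(i, "BODYTEXT")
    return [labels[i] for i in range(len(snippets))]

demo = [
    "A RANDOMIZED TRIAL OF X",
    "Abstract: We evaluated ...",
    "doi:10.1000/xyz  Copyright 2016",
    "Patients were recruited from two sites.",
]
print(classify_snippets(demo))
```

Ordering the passes from most to least precise is the point of the sieve: a snippet claimed by an early, reliable rule is never reconsidered by a weaker one.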

  14. LIQUIDITY, LEVERAGE, INDEPENDENT COMMISSIONERS, AND EARNINGS MANAGEMENT IN RELATION TO CORPORATE TAX AGGRESSIVENESS

    Directory of Open Access Journals (Sweden)

    Krisnata Dwi Suyanto

    2017-03-01

    Tax aggressiveness is action designed to reduce taxable income through tax planning, which can be legal or illegal. This study investigated whether liquidity, leverage, independent commissioners, and earnings management affected corporate tax aggressiveness. The effective tax rate (ETR) and cash effective tax rate (CETR) were used to measure tax aggressiveness. The test was conducted for manufacturing firms listed on the Indonesian Stock Exchange during the period 2006-2010. Panel data regression was used to test the hypothesis. The study failed to find a significant relation between liquidity and tax aggressiveness. Independent commissioners had a negative impact on tax aggressiveness, while leverage and earnings management had a positive impact.

  15. Portfolio Acquisition - How the DoD Can Leverage the Commercial Product Line Model

    Science.gov (United States)

    2015-04-30

    but focused on delivering a fully integrated user experience across products and services. Toyota designs, develops, and produces its cars, trucks...

  16. Human health and climate change: leverage points for adaptation in urban environments.

    Science.gov (United States)

    Proust, Katrina; Newell, Barry; Brown, Helen; Capon, Anthony; Browne, Chris; Burton, Anthony; Dixon, Jane; Mu, Lisa; Zarafu, Monica

    2012-06-01

    The design of adaptation strategies that promote urban health and well-being in the face of climate change requires an understanding of the feedback interactions that take place between the dynamical state of a city, the health of its people, and the state of the planet. Complexity, contingency and uncertainty combine to impede the growth of such systemic understandings. In this paper we suggest that the collaborative development of conceptual models can help a group to identify potential leverage points for effective adaptation. We describe a three-step procedure that leads from the development of a high-level system template, through the selection of a problem space that contains one or more of the group's adaptive challenges, to a specific conceptual model of a sub-system of importance to the group. This procedure is illustrated by a case study of urban dwellers' maladaptive dependence on private motor vehicles. We conclude that a system dynamics approach, revolving around the collaborative construction of a set of conceptual models, can help communities to improve their adaptive capacity, and so better meet the challenge of maintaining, and even improving, urban health in the face of climate change.

  17. Leveraging cross-link modification events in CLIP-seq for motif discovery.

    Science.gov (United States)

    Bahrami-Samani, Emad; Penalva, Luiz O F; Smith, Andrew D; Uren, Philip J

    2015-01-01

    High-throughput protein-RNA interaction data generated by CLIP-seq has provided an unprecedented depth of access to the activities of RNA-binding proteins (RBPs), the key players in co- and post-transcriptional regulation of gene expression. Motif discovery forms part of the necessary follow-up data analysis for CLIP-seq, both to refine the exact locations of RBP binding sites, and to characterize them. The specific properties of RBP binding sites, and the CLIP-seq methods, provide additional information not usually present in the classic motif discovery problem: the binding site structure, and cross-linking induced events in reads. We show that CLIP-seq data contains clear secondary structure signals, as well as technology- and RBP-specific cross-link signals. We introduce Zagros, a motif discovery algorithm specifically designed to leverage this information and explore its impact on the quality of recovered motifs. Our results indicate that using both secondary structure and cross-link modifications can greatly improve motif discovery on CLIP-seq data. Further, the motifs we recover provide insight into the balance between sequence- and structure-specificity struck by RBP binding. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. Human Health and Climate Change: Leverage Points for Adaptation in Urban Environments

    Directory of Open Access Journals (Sweden)

    Monica Zarafu

    2012-06-01

    The design of adaptation strategies that promote urban health and well-being in the face of climate change requires an understanding of the feedback interactions that take place between the dynamical state of a city, the health of its people, and the state of the planet. Complexity, contingency and uncertainty combine to impede the growth of such systemic understandings. In this paper we suggest that the collaborative development of conceptual models can help a group to identify potential leverage points for effective adaptation. We describe a three-step procedure that leads from the development of a high-level system template, through the selection of a problem space that contains one or more of the group’s adaptive challenges, to a specific conceptual model of a sub-system of importance to the group. This procedure is illustrated by a case study of urban dwellers’ maladaptive dependence on private motor vehicles. We conclude that a system dynamics approach, revolving around the collaborative construction of a set of conceptual models, can help communities to improve their adaptive capacity, and so better meet the challenge of maintaining, and even improving, urban health in the face of climate change.

  19. Process Design Concepts for Stabilization of High Level Waste Calcine

    Energy Technology Data Exchange (ETDEWEB)

    T. R. Thomas; A. K. Herbst

    2005-06-01

    The current baseline assumption is that packaging "as is" and direct disposal of high level waste (HLW) calcine in a Monitored Geologic Repository will be allowed. The fall-back position is to develop a stabilized waste form for the HLW calcine that will meet the repository waste acceptance criteria currently in place, in case regulatory initiatives are unsuccessful. A decision between direct disposal and a stabilization alternative is anticipated by June 2006. The purposes of this Engineering Design File (EDF) are to provide a pre-conceptual design of three low-temperature processes under development for stabilization of high level waste calcine (i.e., the grout, hydroceramic grout, and iron phosphate ceramic processes) and to support a down-selection among the three candidates. The key assumptions for the pre-conceptual design assessment are that (a) a waste treatment plant would operate over eight years for 200 days a year, (b) a design processing rate of 3.67 m3/day or 4670 kg/day of HLW calcine would be needed, (c) the performance of the waste form would remove the HLW calcine from the hazardous waste category, and (d) the waste form loadings would range from about 21-25 wt% calcine. The conclusions of this EDF study are that: (a) To date, the grout formulation appears to be the best candidate stabilizer among the three being tested for HLW calcine and appears to be the easiest to mix, pour, and cure. (b) Only minor differences would exist between the process steps of the grout and hydroceramic grout stabilization processes. If temperature control of the mixer at about 80 °C is required, it would add a major level of complexity to the iron phosphate stabilization process. (c) It is too early in the development program to determine which stabilizer will produce the minimum amount of stabilized waste form for the entire HLW inventory, but the volume is assumed to be within the range of 12,250 to 14,470 m3. (d) The stacked vessel height of the hot process vessels
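    The throughput and loading assumptions above pin down the total calcine mass and a rough waste-form mass range. A quick consistency check (mass only; converting to the quoted volume range would additionally require waste-form densities, which are not computed here):

```python
# Assumptions (a), (b), and (d) from the EDF abstract.
days_per_year = 200
years = 8
calcine_kg_per_day = 4670

total_calcine_kg = calcine_kg_per_day * days_per_year * years

# At 21-25 wt% calcine loading, the stabilized waste-form mass brackets:
waste_form_kg_low = total_calcine_kg / 0.25   # highest loading -> least mass
waste_form_kg_high = total_calcine_kg / 0.21  # lowest loading -> most mass

print(total_calcine_kg, round(waste_form_kg_low), round(waste_form_kg_high))
```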

  20. High Quality Acquisition of Surface Electromyography - Conditioning Circuit Design

    Science.gov (United States)

    Shobaki, Mohammed M.; Malik, Noreha Abdul; Khan, Sheroz; Nurashikin, Anis; Haider, Samnan; Larbani, Sofiane; Arshad, Atika; Tasnim, Rumana

    2013-12-01

    The acquisition of surface electromyography (SEMG) signals is used for many applications, including the diagnosis of neuromuscular diseases and prosthesis control. The diagnostic quality of the SEMG signal is highly dependent on the conditioning circuit of the SEMG acquisition system. This paper presents the design of an SEMG conditioning circuit that collects a high quality signal with a high SNR and immunity to environmental noise. The conditioning circuit consists of four stages. The first two stages are an instrumentation amplifier with a gain of around 250 and a 4th-order band-pass filter covering the 20-500 Hz frequency range. The third stage is an amplifier whose gain is adjustable from 1000 to 50000 via a variable resistance. In the final stage the signal is level-shifted to meet the input requirements of the data acquisition device or ADC. Acquiring an accurate signal allows the characteristic features required for medical and clinical applications to be extracted. According to the experimental results, the SNR of the collected signal is 52.4 dB, which is higher than that of the commercial system; the power spectral density (PSD) graph is also presented, and it shows that the filter has eliminated the noise below 20 Hz.
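    The stage gains and the quoted SNR convert to and from decibels as follows. Treating both the gains and the 52.4 dB figure as amplitude ratios (20·log10) is an assumption about how they were reported:

```python
import math

def gain_db(gain):
    """Voltage (amplitude) gain expressed in decibels."""
    return 20.0 * math.log10(gain)

inamp_gain = 250                   # first-stage instrumentation amplifier
min_gain, max_gain = 1000, 50000   # adjustable third-stage gain range

print(gain_db(inamp_gain))         # about 48 dB
print(gain_db(min_gain), gain_db(max_gain))

snr_db = 52.4                      # quoted SNR of the collected signal
snr_amplitude_ratio = 10.0 ** (snr_db / 20.0)  # roughly 417:1
print(snr_amplitude_ratio)
```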

  1. High Pressure Angle Gears: Comparison to Typical Gear Designs

    Science.gov (United States)

    Handschuh, Robert F.; Zabrajsek, Andrew J.

    2010-01-01

    A preliminary study has been completed to determine the feasibility of using high-pressure angle gears in aeronautic and space applications. Tests were conducted in the NASA Glenn Research Center (GRC) Spur Gear Test Facility at speeds up to 10,000 rpm and 73 N*m (648 in.*lb) for 3.18, 2.12, and 1.59 module gears (8, 12, and 16 diametral pitch gears), all designed to operate in the same test facility. The 3.18 module (8-diametral pitch), 28 tooth, 20° pressure angle gears are the GRC baseline test specimen. Also, 2.12 module (12-diametral pitch), 42 tooth, 25° pressure angle gears were tested. Finally, 1.59 module (16-diametral pitch), 56 tooth, 35° pressure angle gears were tested. The high-pressure angle gears were the most efficient when operated in the high-speed aerospace mode (10,000 rpm, lubricated with a synthetic turbine engine oil), and produced the lowest wear rates when tested with a perfluoroether-based grease. The grease tests were conducted at 150 rpm and 71 N*m (630 in.*lb).
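    The paired module/diametral-pitch figures in the abstract follow the standard conversion m = 25.4 / Pd, with the module in millimetres and the diametral pitch in teeth per inch:

```python
def module_from_diametral_pitch(pd):
    """Gear module in millimetres from diametral pitch in teeth per inch."""
    return 25.4 / pd

# The three gear sets from the abstract, with the quoted module values.
for pd, quoted_module in [(8, 3.18), (12, 2.12), (16, 1.59)]:
    m = module_from_diametral_pitch(pd)
    assert abs(m - quoted_module) < 0.01  # quoted values are rounded to 2 dp
    print(pd, m)
```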

  2. Overview of High Power Vacuum Dry RF Load Designs

    Energy Technology Data Exchange (ETDEWEB)

    Krasnykh, Anatoly [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2015-08-27

    A specific feature of RF linacs based on the pulsed traveling wave (TW) mode of operation is that only a portion of the RF energy is used for beam acceleration. The residual RF energy has to be terminated in an RF load. Higher accelerating gradients require higher-power RF sources and RF loads that can stably terminate the residual RF power. RF feeders (from the RF source through the accelerating section to the load) are evacuated to transmit multi-megawatt high power RF. This overview covers vacuum RF loads only. A common method to terminate multi-MW RF power is to use circulating water (or another liquid) as an absorbing medium. A solid dielectric interface (a high quality ceramic) is required to separate the vacuum and liquid absorber media. Using such RF load approaches in TW linacs is troublesome because the ceramic window barrier is fragile and a failure could be catastrophic for the linac vacuum and RF systems. Traditional loads comprising a ceramic disk have limited peak and average power handling capability and are therefore not suitable for high gradient TW linacs. This overview will focus on "vacuum dry" or "all-metal" loads that do not employ any dielectric interface between vacuum and absorber. The first prototype is an original design of RF loads for the Stanford Two-Mile Accelerator.

  3. DSP Architecture Design Essentials

    CERN Document Server

    Marković, Dejan

    2012-01-01

    In DSP Architecture Design Essentials, authors Dejan Marković and Robert W. Brodersen cover a key subject for the successful realization of DSP algorithms for communications, multimedia, and healthcare applications. The book addresses the need for DSP architecture design that maps advanced DSP algorithms to hardware in the most power- and area-efficient way. The key feature of this text is a design methodology based on a high-level design model that leads to hardware implementation with minimum power and area. The methodology includes algorithm-level considerations such as automated word-length reduction and intrinsic data properties that can be leveraged to reduce hardware complexity. From a high-level data-flow graph model, an architecture exploration methodology based on linear programming is used to create an array of architectural solutions tailored to the underlying hardware technology. The book is supplemented with online material: bibliography, design examples, CAD tutorials and custom software.

  4. License Application Design Selection Enhanced Design Alternative V: Very High Thermal Loading

    Energy Technology Data Exchange (ETDEWEB)

    C. L. Linden

    1999-06-22

    The major goals of Enhanced Design Alternative (EDA) V are to keep the temperature of the cladding on the spent nuclear fuel (SNF) within the waste package below 350 °C (Section 4.2.3), the temperature of the emplacement drift walls below 225 °C (Section 4.2.3), and to keep the emplacement drifts dry for several thousand years. In addition, the design would produce relatively consistent heat output from waste package to waste package and ensure that waste package thermal outputs are spread more evenly across the repository. The design would also provide defense in depth (Section 5.3). The goals of this design would be achieved by the combination of design features described below. This EDA would have an areal mass loading (AML) of 150 metric tons of uranium equivalent (MTU) per acre (Section 4.1.16) as opposed to the 85 MTU/acre in the Viability Assessment (VA) reference design. To achieve this loading and the elements necessary to the EDA's overall goals, the design would require approximately 420 acres of emplacement area within the lower repository block (Appendix A, Section A.2). A conceptual layout was developed for EDA V (Section 5.4.3). The layout, as shown in Figure 2, contains openings that are sized and arranged in a configuration similar to the VA reference design. A total of 54 emplacement drifts will be required for emplacement of the 70,000 MTU of spent nuclear fuel and high level waste packages. A total of four ventilation shafts, one intake and three exhaust, are anticipated for the layout in order to provide sufficient air quantities to the emplacement drifts. Two exhaust mains will be located below the level of the emplacement drifts to provide exhaust from the emplacement drifts. In addition, the evaluation has confirmed that the decision to close the repository is possible 50 years after start of emplacement (Section 5.7.5). The licensing and preclosure period encompassed by the Mined Geologic Repository (MGR) extends from the year 2002

  5. Principia designae pre-design, design, and post-design : social motive for the highly advanced technological society

    CERN Document Server

    2015-01-01

    This book presents a broad design purview within the framework of “pre-design, design, and post-design” by focusing on the “motive of design,” which implies an underlying reason for the design of a product. The chapters comprise papers based on discussions at the “Design Research Leading Workshop” held in Nara, Japan, in 2013. This book encourages readers to enhance and expand their thinking within a widened design perspective.

  6. Design and implementation of a high performance network security processor

    Science.gov (United States)

    Wang, Haixin; Bai, Guoqiang; Chen, Hongyi

    2010-03-01

    The last few years have seen significant progress in the field of application-specific processors. One example is network security processors (NSPs), which perform the various cryptographic operations specified by network security protocols and help to offload the computation-intensive burdens from network processors (NPs). This article presents a high performance NSP system architecture implementation intended for both internet protocol security (IPSec) and secure socket layer (SSL) protocol acceleration, which are widely employed in virtual private network (VPN) and e-commerce applications. The efficient dual one-way pipelined data transfer skeleton and optimised integration scheme of the heterogeneous parallel crypto engine arrays lead to a Gbps-rate NSP, which is programmable with domain-specific descriptor-based instructions. The descriptor-based control flow fragments large data packets and distributes them to the crypto engine arrays, which fully utilises the parallel computation resources and improves the overall system data throughput. A prototyping platform for this NSP design is implemented with a Xilinx XC3S5000-based FPGA chip set. Results show that the design gives a peak throughput for the IPSec ESP tunnel mode of 2.85 Gbps with over 2100 full SSL handshakes per second at a clock rate of 95 MHz.

  7. Experimental Design of Formulations Utilizing High Dimensional Model Representation.

    Science.gov (United States)

    Li, Genyuan; Bastian, Caleb; Welsh, William; Rabitz, Herschel

    2015-07-23

    Many applications involve formulations or mixtures where large numbers of components are possible to choose from, but a final composition with only a few components is sought. Finding suitable binary or ternary mixtures from all the permissible components often relies on simplex-lattice sampling in traditional design of experiments (DoE), which requires performing a large number of experiments even for just tens of permissible components. The required number of experiments rises very rapidly with increasing numbers of components and can readily become impractical. This paper proposes constructing a single model for a mixture containing all permissible components from just a modest number of experiments. Yet the model is capable of satisfactorily predicting the performance for full as well as all possible binary and ternary component mixtures. To achieve this goal, we utilize biased random sampling combined with high dimensional model representation (HDMR) to replace DoE simplex-lattice design. Compared with DoE, the required number of experiments is significantly reduced, especially when the number of permissible components is large. This study is illustrated with a solubility model for solvent mixture screening.
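
    To make the combinatorial growth concrete (an illustrative sketch, not code from the paper): a classical {q, m} simplex-lattice design enumerates all mixtures whose component proportions are multiples of 1/m, giving C(q + m - 1, m) runs, which explodes as the number of permissible components q grows.

    ```python
    from math import comb

    def simplex_lattice_points(q: int, m: int) -> int:
        """Number of runs in a {q, m} simplex-lattice design: all mixtures
        whose proportions are multiples of 1/m, i.e. C(q + m - 1, m)."""
        return comb(q + m - 1, m)

    # Degree m = 3 is the minimum needed to include all ternary blends.
    for q in (5, 10, 20, 40):
        print(q, simplex_lattice_points(q, 3))
    # 5 35
    # 10 220
    # 20 1540
    # 40 11480
    ```

    Even at 40 permissible components the lattice already demands over ten thousand experiments, which is the impracticality the HDMR approach is designed to avoid.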

  8. The design of linear algebra libraries for high performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J.J. [Tennessee Univ., Knoxville, TN (United States). Dept. of Computer Science; Oak Ridge National Lab., TN (United States)]; Walker, D.W. [Oak Ridge National Lab., TN (United States)

    1993-08-01

    This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed by an outline of ScaLAPACK, which is a distributed memory version of LAPACK currently under development. The importance of block-partitioned algorithms in reducing the frequency of data movement between different levels of hierarchical memory is stressed. The use of such algorithms helps reduce the message startup costs on distributed memory concurrent computers. Other key ideas in our approach are the use of distributed versions of the Level 3 Basic Linear Algebra Subprograms (BLAS) as computational building blocks, and the use of Basic Linear Algebra Communication Subprograms (BLACS) as communication building blocks. Together the distributed BLAS and the BLACS can be used to construct higher-level algorithms, and hide many details of the parallelism from the application developer. The block-cyclic data distribution is described, and adopted as a good way of distributing block-partitioned matrices. Block-partitioned versions of the Cholesky and LU factorizations are presented, and optimization issues associated with the implementation of the LU factorization algorithm on distributed memory concurrent computers are discussed, together with its performance on the Intel Delta system. Finally, approaches to the design of library interfaces are reviewed.
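
    The block-cyclic distribution described above can be sketched in a few lines. This is an illustrative 1-D version (ScaLAPACK applies the same rule independently along the rows and columns of a 2-D process grid), and the function name is ours, not the library's.

    ```python
    def block_cyclic_owner(i: int, nb: int, p: int) -> int:
        """Process that owns global index i when contiguous blocks of
        size nb are dealt out cyclically to p processes."""
        return (i // nb) % p

    # 8 matrix rows, block size 2, 3 processes:
    print([block_cyclic_owner(i, nb=2, p=3) for i in range(8)])
    # [0, 0, 1, 1, 2, 2, 0, 0]
    ```

    Cycling blocks rather than single elements keeps BLAS-friendly contiguous chunks on each process while still balancing load across the matrix.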

  9. 13 CFR 107.1200 - SBA's Leverage commitment to a Licensee-application procedure, amount, and term.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false SBA's Leverage commitment to a... (Leverage) Conditional Commitments by Sba to Reserve Leverage for A Licensee § 107.1200 SBA's Leverage... type of Leverage for your future use. You may then apply to draw down Leverage against the commitment...

  10. Designing High Performance Factory Automation Applications on Top of DDS

    Directory of Open Access Journals (Sweden)

    Isidro Calvo

    2013-04-01

    DDS is a recent specification aimed at providing high-performance publisher/subscriber middleware solutions. Despite being a powerful and flexible technology, it may prove complex to use, especially for the inexperienced. This work provides some guidelines for connecting software components that represent a new generation of automation devices (such as PLCs, IPCs and robots) using the Data Distribution Service (DDS) as a virtual software bus. More specifically, it presents the design of a DDS-based component, the so-called Automation Component, and discusses how to map different traffic patterns onto DDS entities, exploiting the wealth of QoS management mechanisms provided by the DDS specification. A case study demonstrates the creation of factory automation applications out of software components that encapsulate independent stations.

  11. Design implications of high-speed digital PPM

    Science.gov (United States)

    Sibley, Martin J. N.

    1993-11-01

    Work in the area of digital pulse position modulation (digital PPM) has shown that this type of modulation can yield sensitivities that are typically 4 - 5 dB better than an equivalent PCM system. Recent experimental work has shown that the receiver in a digital PPM system does not need to have a wide bandwidth. Instead, the bandwidth can be very low so that the receiver is effectively impulsed by the digital PPM signal. The advent of very high speed Si digital ICs, and fast lasers, means that digital PPM can now be used to code gigabit PCM signals. This paper presents original theoretical results for a digital PPM system coding 1 Gbit/s PCM signals into 8 Gbit/s digital PPM signals. The paper also addresses the difficulties that the system designer is likely to encounter, and discusses some possible solutions.
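
    As a hedged sketch of the basic coding idea (the paper's actual frame format, guard slots, and line-rate bookkeeping are not given here), digital PPM maps each k-bit PCM word to a single pulse in one of 2^k time slots:

    ```python
    def ppm_encode(word: int, bits: int = 3) -> list[int]:
        """Encode a `bits`-bit PCM word as a digital-PPM frame:
        one pulse whose slot position equals the word's value."""
        if not 0 <= word < (1 << bits):
            raise ValueError("word out of range for the given bit width")
        frame = [0] * (1 << bits)
        frame[word] = 1
        return frame

    print(ppm_encode(5))  # [0, 0, 0, 0, 0, 1, 0, 0]
    ```

    Because only the pulse position carries information, the receiver must resolve timing rather than amplitude, which is consistent with the observation above that a low-bandwidth, impulse-like receiver response can suffice.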

  12. Holistic design in high-speed optical interconnects

    Science.gov (United States)

    Saeedi, Saman

    Integrated circuit scaling has enabled a huge growth in processing capability, which necessitates a corresponding increase in inter-chip communication bandwidth. As bandwidth requirements for chip-to-chip interconnection scale, deficiencies of electrical channels become more apparent. Optical links present a viable alternative due to their low frequency-dependent loss and higher bandwidth density in the form of wavelength division multiplexing. As integrated photonics and bonding technologies are maturing, commercialization of hybrid-integrated optical links is becoming a reality. Increasing silicon integration leads to better performance in optical links but necessitates a corresponding co-design strategy in both electronics and photonics. In this light, holistic design of high-speed optical links with an in-depth understanding of photonics and state-of-the-art electronics brings their performance to unprecedented levels. This thesis presents developments in high-speed optical links by co-designing and co-integrating the primary elements of an optical link: receiver, transmitter, and clocking. In the first part of this thesis a 3D-integrated CMOS/Silicon-photonic receiver will be presented. The electronic chip features a novel design that employs a low-bandwidth TIA front-end, double-sampling and equalization through dynamic offset modulation. Measured results show -14.9 dBm of sensitivity and an energy efficiency of 170 fJ/b at 25 Gb/s. The same receiver front-end is also used to implement a source-synchronous 4-channel WDM-based parallel optical receiver. Quadrature ILO-based clocking is employed for synchronization, and a novel frequency-tracking method that exploits the dynamics of injection locking in a quadrature ring oscillator is used to increase the effective locking range. An adaptive body-biasing circuit is designed to maintain the per-bit energy consumption constant across a wide range of data rates. The prototype measurements indicate a record-low power consumption of 153 fJ/b at 32 Gb/s. The

  13. Design of blue LEDs array with high optical power

    Science.gov (United States)

    Lu, Pengzhi; Yang, Hua; Xue, Bin; Xie, Haizhong; Li, Jing; Yi, Xiaoyan; Wang, Junxi; Wang, Guohong; Li, Jinmin

    2014-11-01

    In this paper, an array of blue LEDs with high optical power is presented and discussed. The optical design of the novel module was completed with the help of simulations run in TracePro to predict its performance. 36 Cree XP-E blue LEDs with a square reflector were used in the novel design. Optical simulation in TracePro showed that the total optical power of the LED array could reach 16.83 W. To verify the simulation results, an Aluminum PCB, a Copper PCB and an Aluminum square reflector were fabricated. First, 36 Cree XP-E blue LEDs with small pitch were fixed on each PCB; then an Aluminum square reflector was assembled on each PCB. This optical module was installed on a radiator and tested. At 2 A, the optical output power was 8.126 W for sample 1 (Aluminum PCB and Aluminum reflector) and 9.445 W for sample 2 (Copper PCB and Aluminum reflector). The higher output of sample 2 can be attributed to the better thermal dispersion performance of Copper. In order to improve the light reflectivity and reduce the loss of light, an ultrathin silver layer was coated on the Aluminum reflector by electron beam evaporation. The optical output power of sample 3 (Copper PCB and silver-plated Aluminum reflector) was 12.541 W at 2 A. A uniform square spot with high optical power was obtained.

  14. New lighting for the design of high quality biomedical devices

    Science.gov (United States)

    Jaffe, Claudia B.; Jaffe, Steven M.; Conner, Arlie R.

    2009-02-01

    Among the trends redefining 21st century biomedical diagnostics and therapeutics is the advent of low-cost portable analyzers. Because light is a powerful tool in many of today's most widely used life science instruments, high intensity, low cost light engines are essential to the design and proliferation of the newest bioanalytical instruments, medical devices and miniaturized analyzers. The development of new light technology represents a critical technical hurdle in the realization of point-of-care analysis. Lumencor has developed an inexpensive lighting solution, uniquely well suited to the production of safe, effective and commercially viable life science tools and biomedical devices. Lumencor's proprietary, solid-state light engine provides powerful, pure, stable, inexpensive light across the UV-Vis-NIR. Light engines are designed to directly replace the entire configuration of light management components with a single, simple unit. Power, spectral breadth and purity, stability and reliability data will demonstrate the advantages of these light engines for today's bioanalytical needs. Performance and cost analyses will be compared to traditional optical subsystems based on lamps, lasers and LEDs with respect to their suitability as sources for biomedical applications, implementation for development/evaluation of novel measurement tools and overall superior reliability. Next generation products based on such sources will be described to fulfill the demand for portable, hand-held analyzers and affordable devices with highly integrated light sources. A four color violet/cyan/green/red product will be demonstrated. A variety of multicolor prototypes, their spectral outputs and facile modulation will be discussed and their performance capabilities disclosed.

  15. Leveraging R&D Resources via the Joint LLC Model

    Science.gov (United States)

    Ganz, Matthew W.

    2008-03-01

    Industrial scientific research labs have become increasingly stressed in recent years by a variety of external forces. Both corporations and government funding agencies have shifted their priorities from long-term fundamental research toward projects that have a high probability of shorter-term payoff. Industrial funding has been further stressed by an increasing demand for quarterly results and fierce global competition. Industry leaders are now asking their R&D labs for "home runs" and not just a solid base in the physical sciences. The end of the Cold War has also left the US without a declared enemy whose overt intention was to defeat us through a mastery of large-scale weaponry based upon exploitation of fundamental physics. This, when combined with a bona fide need for technology gap fillers to respond to on-the-ground threats in the current Middle East conflicts, has led to diminished government emphasis on long-term research in the physical sciences. Simultaneously, the global sources of R&D spending are expanding. The dramatic growth of private equity in the technology development arena has both drawn talent from industry and changed the expectations on researchers. R&D spending in China, India and many other countries is growing significantly. Thus, in order to become relevant, industry must now keep its finger on the pulse of the hundreds of billions of dollars being invested privately and publicly around the world. HRL Laboratories, LLC in Malibu, California represents a unique and successful new business model for industrial R&D. HRL was founded by Howard Hughes in 1948 as the Hughes Research Laboratory and for more than four decades was the internal R&D lab for the Hughes Aircraft Company. After a series of mergers, acquisitions and divestitures over the past 15 years, HRL is now a stand-alone LLC that is owned jointly by General Motors and the Boeing Company. HRL, with a staff of about 300, performs R&D services for GM and Boeing as well as for

  16. Multiple pulse electron beam converter design for high power radiography

    Science.gov (United States)

    Pincosy, P. A.; Back, N.; Bergstrom, P. M.; Chen, Yu-Jiuan; Poulsen, P.

    2001-06-01

    The typical response of the x-ray converter material to the passage of a high-powered relativistic electron beam is vaporization and rapid dispersal. The effects of this dispersal on subsequent pulses in multi-pulse radiography are collective effects on the propagation of the electron beam through the expanding plasma and a reduced number of electron-to-photon interactions. Thus, for the dual-axis radiographic hydrodynamic test facility, the converter material must either be replaced or confined long enough to accommodate the entire pulse train. Typically the 1-mm-thick, high-Z, full-density converter material is chosen to give peak dose and minimum radiographic spot. For repeated pulses we propose a modified converter, constructed either of low density, high Z material in the form of foam or of foils spaced over ten times the axial thickness of the standard 1 mm converter. The converter material is confined within a tube to impede outward motion in radius outside the beam interaction region. We report single-pulse experiments which measure the dose and spot size produced by the modified converter and compare them to similar measurements made with the standard converter. For multiple pulses over a microsecond time scale, we calculate the radial and axial hydrodynamic flow to study the material reflux into the converter volume and the resultant density decrease as the electron beam energy is deposited. Both the electron transport through the expanding low density plasma and the beam in the higher density material are modeled. The x-ray source dose and spot size are calculated to evaluate the impact of the changing converter material density distribution on the radiographic spot size and dose. The results indicate that a multiple-pulse converter design for three or four high-power beam pulses is feasible.

  17. Impact of Solar Array Designs on High Voltage Operations

    Science.gov (United States)

    Brandhorst, Henry W., Jr.; Ferguson, Dale; Piszczor, Mike; ONeill, Mark

    2006-01-01

    As power levels of advanced spacecraft climb above 25 kW, higher solar array operating voltages become attractive. Even in today's satellites, operating spacecraft buses at 100 V and above has led to arcing in GEO communications satellites, so the issue of spacecraft charging and solar array arcing remains a design problem. In addition, micrometeoroid impacts on all of these arrays can also lead to arcing if the spacecraft is at an elevated potential. For example, tests on space station hardware disclosed arcing at 75 V on anodized Al structures that were struck with hypervelocity particles in Low Earth Orbit (LEO) plasmas. Thus an understanding of these effects is necessary to design reliable high voltage solar arrays of the future, especially in light of NASA's Vision for Space Exploration. In the future, large GEO communication satellites, lunar bases, solar electric propulsion missions, and high power communication systems around Mars can lead to power levels well above 100 kW. As noted above, it will be essential to increase operating voltages of the solar arrays well above 80 V to keep the mass of cabling needed to carry the high currents to an acceptable level. Thus, the purpose of this paper is to discuss various solar array approaches and the results of testing them at high voltages, in the presence of simulated space plasma and under hypervelocity impact. Three different types of arrays will be considered. One will be a planar array using thin film cells, the second will use planar single or multijunction cells, and the last will use the Stretched Lens Array (SLA - 8-fold concentration). Each of these has different approaches for protection from the space environment. The thin film cell based arrays have minimal covering due to their inherent radiation tolerance; conventional GaAs and multijunction cells have the traditional cerium-doped microsheet glasses (of appropriate thickness) that are usually attached with Dow Corning DC 93-500 silicone

  18. The Designs of High Efficiency Launcher of Quasi-Optical Mode Converter for High Power Gyrotrons

    Science.gov (United States)

    Minami, R.; Kasugai, A.; Takahashi, K.; Kobayashi, N.; Mitsunaka, Y.; Sakamoto, K.

    2006-01-01

    A high efficiency launcher for quasi-optical (QO) mode converters for high power gyrotrons has been designed and tested. A helical-cut launcher radiates the RF power via its straight cut onto the first phase-correcting mirror. The launchers have been optimized for the TE31.8 mode at 170 GHz and the TE22.6 mode at 110 GHz by numerically optimizing the launcher surface. The helical cut of the launcher has been optimized by taking the taper angle into account. Furthermore, the amplitude of the surface perturbation has been optimized for improved focusing in order to reduce the diffraction losses at the helical cut. Low power measurements show good agreement with the design. High efficiency characteristics of the design have also been calculated on the assumption of frequency downshift due to the thermal expansion of the cavity and stepwise frequency tuning by changing the operating mode. In addition, the possibility of a high efficiency launcher for higher-order modes is discussed, and these results give the prospect of high efficiency long pulse gyrotrons.

  19. Design Strategies for Ultra-high Efficiency Photovoltaics

    Science.gov (United States)

    Warmann, Emily Cathryn

    While concentrator photovoltaic cells have shown significant improvements in efficiency in the past ten years, once these cells are integrated into concentrating optics, connected to a power conditioning system and deployed in the field, the overall module efficiency drops to only 34 to 36%. This efficiency is impressive compared to conventional flat plate modules, but it is far short of the theoretical limits for solar energy conversion. A system capable of ultra-high efficiency of 50% or greater cannot be designed by refinement and iteration of current design approaches. This thesis takes a systems approach to designing a photovoltaic system capable of 50% efficient performance using conventional diode-based solar cells. The effort began with an exploration of the limiting efficiency of spectrum-splitting ensembles with 2 to 20 sub-cells in different electrical configurations. Incorporating realistic non-ideal performance with the computationally simple detailed balance approach resulted in practical limits that are useful to identify specific cell performance requirements. This effort quantified the relative benefit of additional cells and concentration for system efficiency, which will help in designing practical optical systems. Efforts to improve the quality of the solar cells themselves focused on the development of tunable lattice constant epitaxial templates. Initially intended to enable lattice-matched multijunction solar cells, these templates would enable increased flexibility in band gap selection for spectrum-splitting ensembles and enhanced radiative quality relative to metamorphic growth. The III-V material family is commonly used for multijunction solar cells both for its high radiative quality and for the ease of integrating multiple band gaps into one monolithic growth. The band gap flexibility is limited by the lattice constant of available growth templates. 
The virtual substrate consists of a thin III-V film with the desired

  20. Towards the development of run times leveraging virtualization for high performance computing; Contribution a l'elaboration de supports executifs exploitant la virtualisation pour le calcul hautes performances

    Energy Technology Data Exchange (ETDEWEB)

    Diakhate, F.

    2010-12-15

    In recent years, there has been a growing interest in using virtualization to improve the efficiency of data centers. This success is rooted in virtualization's excellent fault tolerance and isolation properties, in the overall flexibility it brings, and in its ability to exploit multi-core architectures efficiently. These characteristics also make virtualization an ideal candidate to tackle issues found in new compute cluster architectures. However, in spite of recent improvements in virtualization technology, overheads in the execution of parallel applications remain, which prevent its use in the field of high performance computing. In this thesis, we propose a virtual device dedicated to message passing between virtual machines, so as to improve the performance of parallel applications executed in a cluster of virtual machines. We also introduce a set of techniques facilitating the deployment of virtualized parallel applications. These functionalities have been implemented as part of a runtime system which makes it possible to benefit from virtualization's properties in a way that is as transparent as possible to the user while minimizing performance overheads. (author)

  1. 13 CFR 107.1130 - Leverage fees and additional charges payable by Licensee.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Leverage fees and additional... ADMINISTRATION SMALL BUSINESS INVESTMENT COMPANIES SBA Financial Assistance for Licensees (Leverage) General Information About Obtaining Leverage § 107.1130 Leverage fees and additional charges payable by Licensee. (a...

  2. 13 CFR 107.1210 - Payment of leverage fee upon receipt of commitment.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Payment of leverage fee upon... ADMINISTRATION SMALL BUSINESS INVESTMENT COMPANIES SBA Financial Assistance for Licensees (Leverage) Conditional Commitments by Sba to Reserve Leverage for A Licensee § 107.1210 Payment of leverage fee upon receipt of...

  3. 13 CFR 108.1150 - Maximum amount of Leverage for a NMVC Company.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum amount of Leverage for a... NEW MARKETS VENTURE CAPITAL ("NMVC") PROGRAM SBA Financial Assistance for NMVC Companies (Leverage) Maximum Amount of Leverage for Which A Nmvc Company Is Eligible § 108.1150 Maximum amount of Leverage for...

  4. 13 CFR 108.1100 - Type of Leverage and application procedures.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Type of Leverage and application... MARKETS VENTURE CAPITAL ("NMVC") PROGRAM SBA Financial Assistance for NMVC Companies (Leverage) General Information About Obtaining Leverage § 108.1100 Type of Leverage and application procedures. (a) Type of...

  5. 13 CFR 107.1230 - Draw-downs by Licensee under SBA's Leverage commitment.

    Science.gov (United States)

    2010-01-01

    ... Leverage commitment. 107.1230 Section 107.1230 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION SMALL BUSINESS INVESTMENT COMPANIES SBA Financial Assistance for Licensees (Leverage) Conditional Commitments by Sba to Reserve Leverage for A Licensee § 107.1230 Draw-downs by Licensee under SBA's Leverage...

  6. 17 CFR 31.21 - Leverage contracts entered into prior to April 13, 1984; subsequent transactions.

    Science.gov (United States)

    2010-04-01

    ... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Leverage contracts entered... Exchanges COMMODITY FUTURES TRADING COMMISSION LEVERAGE TRANSACTIONS § 31.21 Leverage contracts entered into... construed to affect any lawful activities that occurred prior to April 13, 1984. All leverage contracts...

  7. 17 CFR 31.13 - Financial reports of leverage transaction merchants.

    Science.gov (United States)

    2010-04-01

    ... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false Financial reports of leverage... COMMISSION LEVERAGE TRANSACTIONS § 31.13 Financial reports of leverage transaction merchants. (a) Each leverage transaction merchant who files an application for registration with the National Futures...

  8. 7 CFR 4290.1200 - Leverage commitment to a RBIC-application procedure, amount, and term.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Leverage commitment to a RBIC-application procedure... BUSINESS INVESTMENT COMPANY ("RBIC") PROGRAM Financial Assistance for RBICs (Leverage) Conditional Commitments to Reserve Leverage for A Rbic § 4290.1200 Leverage commitment to a RBIC—application procedure...

  9. 13 CFR 108.1130 - Leverage fees payable by NMVC Company.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Leverage fees payable by NMVC... MARKETS VENTURE CAPITAL ("NMVC") PROGRAM SBA Financial Assistance for NMVC Companies (Leverage) General Information About Obtaining Leverage § 108.1130 Leverage fees payable by NMVC Company. There is no fee for the...

  10. 7 CFR 4290.1150 - Maximum amount of Leverage for a RBIC.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum amount of Leverage for a RBIC. 4290.1150... ("RBIC") PROGRAM Financial Assistance for RBICs (Leverage) Maximum Amount of Leverage for Which A Rbic Is Eligible § 4290.1150 Maximum amount of Leverage for a RBIC. The face amount of a RBIC's...

  11. 7 CFR 4290.1230 - Draw-downs by RBIC under Leverage commitment.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Draw-downs by RBIC under Leverage commitment. 4290... INVESTMENT COMPANY ("RBIC") PROGRAM Financial Assistance for RBICs (Leverage) Conditional Commitments to Reserve Leverage for A Rbic § 4290.1230 Draw-downs by RBIC under Leverage commitment. (a) RBIC's...

  12. 17 CFR Appendix A to Part 31 - Schedule of Fees for Registration of Leverage Commodities

    Science.gov (United States)

    2010-04-01

    ... Registration of Leverage Commodities A Appendix A to Part 31 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION LEVERAGE TRANSACTIONS Pt. 31, App. A Appendix A to Part 31—Schedule of Fees for Registration of Leverage Commodities (a) Each application for registration of a leverage commodity must be...

  13. 13 CFR 107.1160 - Maximum amount of Leverage for a Section 301(d) Licensee.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum amount of Leverage for a... ADMINISTRATION SMALL BUSINESS INVESTMENT COMPANIES SBA Financial Assistance for Licensees (Leverage) Maximum Amount of Leverage for Which A Licensee Is Eligible § 107.1160 Maximum amount of Leverage for a Section...

  14. Leveraging multi-generational workforce values in interactive information societies

    Directory of Open Access Journals (Sweden)

    Sophie van der Walt

    2010-08-01

    Background: The success of organisations relies on various factors, including the ability of a multi-generational workforce to collaborate within the interactive information society. By developing an awareness of the different values of a diverse workforce, organisations may benefit from diversity. Various diversity factors, such as ethnicity, age and gender, impact the way people interact, especially in the interactive information society. Objectives: This article advocates the need for generational awareness and addresses how this awareness presents benefits to companies, such as increased productivity, improved succession planning policies, and strategies to recruit and retain a diverse workforce. The research problem is directed at how diversity management influences Traditionalists, Baby Boomers, Generation X and Generation Y in terms of their work performance and co-worker relationships. Method: The research design combines Critical Theory and Generational Theory within the mixed-method paradigm. The sequential exploratory design was decided upon as it studies the unknown relationships between different generations of employees. The literature review was followed by a quantitative empirical research component, and data was collected by means of a questionnaire. Results: The findings highlight specific differences between generations regarding their perspectives on work values and co-worker relationships, rewards, work-life balance and retirement. Conclusion: The article concludes with recommendations on the role diversity management plays in terms of work performance and co-worker relationships. By leveraging generational awareness in the interactive information society, organisations with a multi-generational workforce will succeed in the competitive business environment.

  15. Compressed gas domestic aerosol valve design using high viscous product

    Directory of Open Access Journals (Sweden)

    A Nourian

    2016-10-01

    Full Text Available Most current universal consumer aerosol products using highly viscous products such as cooking oil, antiperspirant and hair-removal cream rely primarily on LPG (Liquefied Petroleum Gas) propellant, which is environmentally unfriendly. The advantages of the new innovative technology described in this paper are: i. no butane or other liquefied hydrocarbon gas is used as a propellant; it is replaced with compressed air, nitrogen or another safe gas propellant; ii. customer-acceptable spray quality and consistency during can lifetime; iii. conventional cans and filling technology. The only feasible energy source to replace VOCs (Volatile Organic Compounds) and greenhouse gases, which must be avoided, is an inert gas (i.e. compressed air), used to improve atomisation by generating gas bubbles and turbulence inside the atomiser insert and the actuator. This research concentrates on using "bubbly flow" in the valve stem, with injection of compressed gas into the passing flow, thus also generating turbulence. The new valve designed in this investigation using inert gases is advantageous over a conventional valve with butane propellant for highly viscous products (> 400 cP) because, when the valving arrangement is fully open, there are negligible energy losses as fluid passes through the valve from the interior of the container to the actuator insert. The valving arrangement thus permits all pressure drops to be controlled, resulting in improved control of atomising efficiency and flow rate, whereas in conventional valves a significant pressure drop occurs through the valve, which has a complex effect on the corresponding spray.

  16. High Temperature Superconducting Space Experiment II (HTSSE II) cryogenic design

    Science.gov (United States)

    Kawecki, T. G.; Chappie, S. S.; Mahony, D. R.

    At 60 to 80 K large performance gains are possible from high temperature superconducting (HTS) microwave devices for communications applications. The High Temperature Superconducting Space Experiment II (HTSSE II) will demonstrate eight HTS experiments in space for up to 3 years of operation. HTSSE II is the first application of HTS technology to space. In addition to demonstrating HTS devices, an important secondary goal is to demonstrate the cryogenic technologies required for long life HTS space applications. HTSSE II utilizes a British Aerospace 80 K Stirling cycle cryocooler to refrigerate a central cryogenic bus of seven HTS experiments and has an additional stand-alone TRW HTS experiment cooled by a TRW Stirling cycle cryocooler. The HTSSE II flight unit has been assembled and has successfully passed vibration and thermal vacuum environmental tests. HTSSE II was developed on a fixed budget and a fast track schedule of 24 months and is due to launch in March 1997 on the ARGOS spacecraft. This paper presents the design and test results of the cryogenic subsystem, cryocooler integration and a cryogenic coaxial cable I/O assembly.

  17. Vacuum Window Design for High-Power Lasers

    CERN Document Server

    Shaftan, T V

    2005-01-01

    One of the problems in high-power laser design is coupling a powerful laser beam out of a vacuum volume into the atmosphere. Usually the laser device is located inside a vacuum tank, and the laser radiation is transported to the outside world through a transparent vacuum window. While considered transparent, the glass absorbs some of the light passing through it and converts it to heat. For most applications these properties are academic curiosities; in multi-kilowatt lasers, however, the heat becomes significant and can lead to a failure. The absorbed power can result in thermal stress, reduced light transmission and, consequently, window damage. Modern optical technology has developed different types of glass (silica, BK7, diamond, etc.) that have high thermal conductivity and damage threshold. However, for kilowatt- and megawatt-class lasers the issue still remains open. In this paper we present a solution that may relieve the heat load on the output window. We discuss advantages and issues of this part...

  18. Composite flywheel material design for high-speed energy storage

    Directory of Open Access Journals (Sweden)

    Michael A. Conteh

    2016-06-01

    Full Text Available Lamina and laminate mechanical properties of materials suitable for flywheel high-speed energy storage were investigated. Low density, low modulus and high strength composite material properties were implemented for the constant stress portion of the flywheel while higher density, higher modulus and strength were implemented for the constant thickness portion of the flywheel. Design and stress analysis were used to determine the maximum energy densities and shape factors for the flywheel. Analytical studies along with the use of the CADEC-online software were used to evaluate the lamina and laminate properties. This study found that a hybrid composite of M46J/epoxy–T1000G/epoxy for the flywheel exhibits a higher energy density when compared to known existing flywheel hybrid composite materials such as boron/epoxy–graphite/epoxy. Results from this study will contribute to further development of the flywheel that has recently re-emerged as a promising application for energy storage due to significant improvements in composite materials and technology.
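The maximum-energy-density analysis described above rests on the classic flywheel relation e = K·σ/ρ (shape factor times design strength over density), which explains why low-density, high-strength composites win. A minimal sketch with illustrative, assumed material values (not figures from the study):

```python
# Specific energy of a flywheel: e = K * sigma / rho, where K is the shape
# factor, sigma the design strength (Pa), and rho the density (kg/m^3).
# Material values below are illustrative assumptions, not data from the study.
def specific_energy_wh_per_kg(shape_factor, strength_pa, density_kg_m3):
    joules_per_kg = shape_factor * strength_pa / density_kg_m3
    return joules_per_kg / 3600.0  # J/kg -> Wh/kg

# Hypothetical carbon-fiber/epoxy lamina: sigma ~ 2.4 GPa, rho ~ 1600 kg/m^3,
# near-constant-stress disc shape factor K ~ 0.9
print(round(specific_energy_wh_per_kg(0.9, 2.4e9, 1600), 1))  # ~375 Wh/kg
```

The same formula shows why a steel rim (higher ρ, lower σ/ρ) stores far less energy per kilogram despite its greater strength in absolute terms.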

  19. High Performance Receiver Design for RX Carrier Aggregation

    Directory of Open Access Journals (Sweden)

    Jusung Kim

    2017-05-01

    Full Text Available Carrier aggregation is one of the key features to increase the data rate given a scarce bandwidth spectrum. This paper describes the design of a high-performance receiver suitable for carrier aggregation in LTE-Advanced and future 5G standards. The proposed architecture is versatile enough to support legacy mode (single carrier), inter-band carrier aggregation, and intra-band carrier aggregation. Performance with carrier-aggregation support is as good as that of legacy receivers. The contradicting requirements of high linearity and low noise are satisfied with the single-gm receiver architecture in addition to supporting carrier aggregation. The proposed cascode-shutoff low-noise trans-conductance amplifier (LNTA) achieves 57.1 dB voltage gain, 1.76 dB NF (noise figure), and -6.7 dBm IIP3 (third-order intercept point) with a power consumption of 21.3 mW in the intra-band carrier-aggregation scenario. In legacy mode, the same receiver signal path achieves 56.6 dB voltage gain, 1.33 dB NF, and -6.2 dBm IIP3 with a low power consumption of 7.4 mW.
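NF and IIP3 figures such as those reported above can be combined into a rough spurious-free dynamic range estimate via the standard relation SFDR = (2/3)(IIP3 − noise floor). A hedged sketch, assuming a 20 MHz LTE channel bandwidth (not stated in the abstract):

```python
import math

def sfdr_db(iip3_dbm, nf_db, bw_hz):
    # Input-referred noise floor (dBm); -174 dBm/Hz is thermal noise at 290 K.
    noise_floor = -174.0 + nf_db + 10.0 * math.log10(bw_hz)
    # Third-order-limited spurious-free dynamic range (dB).
    return (2.0 / 3.0) * (iip3_dbm - noise_floor)

# Legacy-mode figures from the abstract (IIP3 = -6.2 dBm, NF = 1.33 dB),
# with an assumed 20 MHz channel bandwidth.
print(round(sfdr_db(-6.2, 1.33, 20e6), 1))  # ~62.3 dB
```

The formula makes the design trade-off explicit: every 3 dB of extra IIP3 buys 2 dB of dynamic range, while every 1 dB of NF costs 2/3 dB.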

  20. Design requirements for high-efficiency high concentration ratio space solar cells

    Science.gov (United States)

    Rauschenbach, H.; Patterson, R.

    1980-01-01

    A miniaturized Cassegrainian concentrator system concept was developed for low-cost, multikilowatt space solar arrays. The system imposes some requirements on solar cells which are new and different from those imposed for conventional applications. The solar cells require a circular active area approximately 4 mm in diameter. High-reliability contacts are required on both front and back surfaces, and the back area must be metallurgically bonded to a heat sink. The cell should be designed to achieve the highest practical efficiency at 100 AM0 suns and at 80 °C. The cell design must minimize losses due to nonuniform illumination intensity and nonnormal light incidence. The primary radiation concern is the omnidirectional proton environment.

  1. Pressures to adhere to treatment ('leverage') in English mental healthcare.

    Science.gov (United States)

    Burns, Tom; Yeeles, Ksenija; Molodynski, Andrew; Nightingale, Helen; Vazquez-Montes, Maria; Sheehan, Kathleen; Linsell, Louise

    2011-08-01

    Coercion has usually been equated with legal detention. Non-statutory pressures to adhere to treatment, 'leverage', have been identified as widespread in US public mental healthcare. It is not clear if this is so outside the USA. To measure rates of different non-statutory pressures in distinct clinical populations in England, to test their associations with patient characteristics and compare them with US rates. Data were collected by a structured interview conducted by independent researchers supplemented by data extraction from case notes. We recruited a sample of 417 participants from four differing clinical populations. Lifetime experience of leverage was reported in 35% of the sample, 63% in substance misusers, 33% and 30% in the psychosis samples and 15% in the non-psychosis sample. Leverage was associated with repeated hospitalisations, substance misuse diagnosis and lower insight as measured by the Insight and Treatment Attitudes Questionnaire. Housing leverage was the most frequent form (24%). Levels were markedly lower than those reported in the USA. Non-statutory pressure to adhere to treatment (leverage) is common in English mental healthcare but has received little clinical or research attention. Urgent attention is needed to understand its variation and place in community practice.

  2. Systematic Assessment of a High-Impact Course Design Institute

    Science.gov (United States)

    Palmer, Michael S.; Streifer, Adriana C.; Williams-Duncan, Stacy

    2016-01-01

    Herein, we describe an intensive, week-long course design institute (CDI) designed to introduce participants to the scholarly and evidence-driven process of learning-focused course design. Impact of this intervention is demonstrated using a multifaceted approach: (a) post-CDI satisfaction and perception surveys, (b) pre-/post-CDI surveys probing…

  3. Impact of chemistry on Standard High Solids Vessel Design mixing

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-03-02

    The plan for resolving technical issues regarding mixing performance within vessels of the Hanford Waste Treatment Plant Pretreatment Facility directs a chemical impact study to be performed. The vessels involved are those that will process higher (e.g., 5 wt % or more) concentrations of solids. The mixing equipment design for these vessels includes both pulse jet mixers (PJM) and air spargers. This study assesses the impact of feed chemistry on the effectiveness of PJM mixing in the Standard High Solids Vessel Design (SHSVD). The overall purpose of this study is to complement the Properties that Matter document in helping to establish an acceptable physical simulant for full-scale testing. The specific objectives for this study are (1) to identify the relevant properties and behavior of the in-process tank waste that control the performance of the system being tested, (2) to assess the solubility limits of key components that are likely to precipitate or crystallize due to PJM and sparger interaction with the waste feeds, (3) to evaluate the impact of waste chemistry on rheology and agglomeration, (4) to assess the impact of temperature on rheology and agglomeration, (5) to assess the impact of organic compounds on PJM mixing, and (6) to provide the technical basis for using a physical-rheological simulant rather than a physical-rheological-chemical simulant for full-scale vessel testing. Among the conclusions reached are the following: The primary impact of precipitation or crystallization of salts due to interactions between PJMs or spargers and waste feeds is to increase the insoluble solids concentration in the slurries, which will increase the slurry yield stress. Slurry yield stress is a function of pH, ionic strength, insoluble solids concentration, and particle size. Ionic strength and chemical composition can affect particle size. Changes in temperature can affect SHSVD mixing through its effect on properties such as viscosity, yield stress, solubility

  4. How Can Home Care Patients and Their Caregivers Better Manage Fall Risks by Leveraging Information Technology?

    Science.gov (United States)

    Alhuwail, Dari; Koru, Güneş; Nahm, Eun-Shim

    2016-12-01

    From the perspectives of home care patients and caregivers, this study aimed to (a) identify the challenges for better fall-risk management during home care episodes and (b) explore the opportunities for them to leverage health information technology (IT) solutions to improve fall-risk management during home care episodes. Twelve in-depth semistructured interviews with the patients and caregivers were conducted within a descriptive single case study design in 1 home health agency (HHA) in the mid-Atlantic region of the United States. Patients and caregivers faced challenges to manage fall risks such as unmanaged expectations, deteriorating cognitive abilities, and poor care coordination between the HHA and physician practices. Opportunities to leverage health IT solutions included patient portals, telehealth, and medication reminder apps on smartphones. Effectively leveraging health IT could further empower patients and caregivers to reduce fall risks by acquiring the necessary information and following clinical advice and recommendations. The HHAs could improve the quality of care by adopting IT solutions that show more promise of improving the experiences of patients and caregivers in fall-risk management.

  5. Design of a high capacity long range cargo aircraft

    Science.gov (United States)

    Weisshaar, Terrence A.

    1994-01-01

    This report examines the design of a long range cargo transport to attempt to reduce ton-mile shipping costs and to stimulate the air cargo market. This design effort involves the usual issues but must also include consideration of: airport terminal facilities; cargo loading and unloading; and defeating the 'square-cube' law to design large structures. This report reviews the long range transport design problem and several solutions developed by senior student design teams at Purdue University. The results show that it will be difficult to build large transports unless the infrastructure is changed and unless the basic form of the airplane changes so that aerodynamic and structural efficiencies are employed.
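The "square-cube" law mentioned above can be illustrated numerically: under geometric similarity, lifting area scales with the square of the linear dimension while mass scales with the cube, so wing loading grows linearly with scale. A sketch with hypothetical numbers:

```python
# Square-cube law under geometric similarity: scaling every linear dimension
# by s multiplies areas (wing, lift) by s**2 but volumes (mass, structure)
# by s**3. Baseline numbers below are purely illustrative.
def scaled(wing_area_m2, mass_kg, s):
    return wing_area_m2 * s**2, mass_kg * s**3

area, mass = scaled(500.0, 300_000.0, 2.0)   # double all linear dimensions
print(area, mass)                            # 2000.0 2400000.0
print((mass / area) / (300_000.0 / 500.0))   # wing loading grows by s = 2.0
```

This is why simply photographically enlarging an existing transport fails: the structure must carry disproportionately more weight per unit of wing area, pushing designers toward different configurations rather than bigger copies.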

  6. High-reliability condenser design study. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Mussalli, Y.G.; Bell, R.J.; Impagliazzo, A.M.

    1983-07-01

    Steam condensers are a major source of problems causing poor power-plant availability and efficiency. EPRI sponsored this study to review US and European designs and practices in order to recommend improvements and further research needs. Improved designs to achieve condenser integrity, deaeration, and thermal performance are discussed. These designs include tube-material selections, joints, connections, tube-bundle designs, hot-well deaerators, air-removal capacities, and enhanced-heat-transfer tubes. Designs to improve the performance of condenser-associated systems are also presented. These include macro- and micro-fouling control technologies and improved cooling-water-pump designs. Innovative, but proven, features such as removable tube bundles, multiple tube bundles, and low-tuned turbine foundations can also reduce initial construction costs and improve maintenance conditions. This study determined that the technology is available to achieve a reliable condenser design. The report also identifies several areas for future research.

  7. Design and Development of LPU-B High School Website

    Directory of Open Access Journals (Sweden)

    Abner B. Tupas

    2015-12-01

    Full Text Available This study was conducted to develop and assess the LPU-B High School website as perceived by faculty members and selected staff and students in terms of content, efficiency, functionality and usability. It also sought to test the significant difference between the assessments of the two groups of respondents and to propose measures to enhance the website. The application was built on the web and comprises three aspects: a database component, the use of a programming/scripting language, and the design and implementation of the graphical user interface (GUI). The user interface was written primarily in Hypertext Markup Language (HTML) and Cascading Style Sheets (CSS) and is accessible via any web browser. The processing application handles requests/tasks performed by the user on the server side using PHP content-management-system technology. The server then returns the appropriate information from the database. Data storage takes the form of relational databases using MySQL, which stores the data needed by the application in its tables. All components of the application, including the user interface, processing scripts and database, reside on the server. The website Uniform Resource Locator (URL) is hs.lpubatangas.edu.ph.

  8. Design and high order optimization of the ATF2 lattices

    CERN Document Server

    Marin, E; Woodley, M; Kubo, K; Okugi, T; Tauchi, T; Urakawa, J; Tomas, R

    2013-01-01

    The next generation of future linear colliders (LC) demands nanometer beam sizes at the interaction point (IP) in order to reach the required luminosity. The final focus system (FFS) of an LC is meant to deliver such small beam sizes. The Accelerator Test Facility (ATF) aims to test the feasibility of the new local chromaticity correction scheme on which the future LCs are based. To this end the ATF2 nominal and ultra-low beta* lattices are designed to vertically focus the beam at the IP to 37 nm and 23 nm, respectively, if error-free lattices are considered. However, simulations show that the measured field errors of the ATF2 magnets preclude reaching the mentioned spot sizes. This paper describes the optimization of high-order aberrations of the ATF2 lattices in order to minimize the detrimental effect of the measured multipole components for both ATF2 lattices. Specifically, three solutions are studied: the replacement of the last focusing quadrupole (QF1FF), insertion of octupole magnets, and optics modification....

  9. Fast ignition integrated experiments and high-gain point design

    Energy Technology Data Exchange (ETDEWEB)

    Shiraga, H. [Osaka Univ., Osaka (Japan); Nagatomo, H. [Osaka Univ., Osaka (Japan); Theobald, W. [Univ. of Rochester, Rochester, NY (United States); Solodov, A. A. [Univ. of Rochester, Rochester, NY (United States); Tabak, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-04-17

    Here, integrated fast ignition experiments were performed at ILE, Osaka, and LLE, Rochester, in which a nanosecond driver laser implodes a deuterated plastic shell in front of the tip of a hollow metal cone and an intense ultrashort-pulse laser is injected through the cone to heat the compressed plasma. Based on the initial successful results of fast electron heating of cone-in-shell targets, large-energy short-pulse laser beam lines were constructed and became operational: OMEGA-EP at Rochester and LFEX at Osaka. Neutron enhancement due to heating with a ~kJ short-pulse laser has been demonstrated in the integrated experiments at Osaka and Rochester. The neutron yields are being analyzed by comparing the experimental results with simulations. Details of the fast electron beam transport and the electron energy deposition in the imploded fuel plasma are complicated and further studies are imperative. The hydrodynamics of the implosion was studied including the interaction of the imploded core plasma with the cone tip. Theory and simulation studies are presented on the hydrodynamics of a high-gain target for a fast ignition point design.

  10. High-Throughput Phase-Field Design of High-Energy-Density Polymer Nanocomposites.

    Science.gov (United States)

    Shen, Zhong-Hui; Wang, Jian-Jun; Lin, Yuanhua; Nan, Ce-Wen; Chen, Long-Qing; Shen, Yang

    2017-11-22

    Understanding the dielectric breakdown behavior of polymer nanocomposites is crucial to the design of high-energy-density dielectric materials with reliable performances. It is however challenging to predict the breakdown behavior due to the complicated factors involved in this highly nonequilibrium process. In this work, a comprehensive phase-field model is developed to investigate the breakdown behavior of polymer nanocomposites under electrostatic stimuli. It is found that the breakdown strength and path significantly depend on the microstructure of the nanocomposite. The predicted breakdown strengths for polymer nanocomposites with specific microstructures agree with existing experimental measurements. Using this phase-field model, a high throughput calculation is performed to seek the optimal microstructure. Based on the high-throughput calculation, a sandwich microstructure for PVDF-BaTiO3 nanocomposite is designed, where the upper and lower layers are filled with parallel nanosheets and the middle layer is filled with vertical nanofibers. It has an enhanced energy density of 2.44 times that of the pure PVDF polymer. The present work provides a computational approach for understanding the electrostatic breakdown, and it is expected to stimulate future experimental efforts on synthesizing polymer nanocomposites with novel microstructures to achieve high performances. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. High-Leverage Leadership: Improving Outcomes in Educational Settings

    Science.gov (United States)

    Mongon, Denis; Chapman, Christopher

    2011-01-01

    Globalisation of world trade, international media, technological innovation and social change are creating opportunities and challenges that today's pupils will inherit and build on. A pupil's academic, technical and social capacity will define their success or failure. Therefore, educational outcomes and well-being for young people across…

  12. Systemic risk and heterogeneous leverage in banking networks

    Science.gov (United States)

    Kuzubaş, Tolga Umut; Saltoğlu, Burak; Sever, Can

    2016-11-01

    This study probes systemic risk implications of leverage heterogeneity in banking networks. We show that the presence of heterogeneous leverages drastically changes the systemic effects of defaults and the nature of the contagion in interbank markets. Using financial leverage data from the US banking system, through simulations, we analyze the systemic significance of different types of borrowers, the evolution of the network, the consequences of interbank market size and the impact of market segmentation. Our study is related to the recent Basel III regulations on systemic risk and the treatment of the Global Systemically Important Banks (GSIBs). We also assess the extent to which the recent capital surcharges on GSIBs may curb financial fragility. We show the effectiveness of surcharge policy for the most-levered banks vis-a-vis uniform capital injection.
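A default cascade of the kind simulated in such network studies can be sketched as a toy model (this is an illustrative simplification with made-up balance sheets, not the authors' model): a bank fails when losses on its exposures to failed counterparties exceed its capital, and higher leverage means thinner capital for the same assets.

```python
# Toy interbank default cascade. exposures[i][j] is bank i's lending to
# bank j; capital = assets / leverage; a bank fails when its losses on
# exposures to already-failed banks exceed its capital.
def cascade(exposures, assets, leverage, shock):
    n = len(assets)
    capital = [assets[i] / leverage[i] for i in range(n)]
    failed = set(shock)
    while True:
        new = {i for i in range(n) if i not in failed
               and sum(exposures[i][j] for j in failed) > capital[i]}
        if not new:
            return failed
        failed |= new

exposures = [[0, 20, 0], [0, 0, 40], [10, 0, 0]]
assets = [100.0, 100.0, 100.0]
# Identical balance sheets except for leverage: when bank 1 is levered 10x
# its thin capital lets bank 2's failure propagate; at 2x it absorbs the hit.
print(cascade(exposures, assets, [4, 10, 10], shock={2}))  # {1, 2}
print(cascade(exposures, assets, [4, 2, 10], shock={2}))   # {2}
```

Even this minimal version reproduces the paper's qualitative point: contagion depends not just on the network of exposures but on where the high-leverage nodes sit within it.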

  13. PENGARUH PROFITABILITAS, LEVERAGE DAN LIKUIDITAS TERHADAP KINERJA LINGKUNGAN

    Directory of Open Access Journals (Sweden)

    Agus Widarsono

    2015-12-01

    Full Text Available This study aims to test and obtain empirical evidence of the factors that affect environmental performance, both partially and simultaneously. The factors studied in this research are profitability, leverage and liquidity. The research method used is the descriptive verificative method, with verificative testing using multiple regression, a partial test (t test) and a simultaneous test (F test). The data used are secondary data, namely the companies' annual reports and the PROPER report of the Ministry of Environment. The sample comprises 11 state-owned enterprises (BUMN) for the years 2009-2013, taken using the purposive sampling method. The results of this study indicate that profitability, leverage and liquidity have no significant effect on environmental performance, either partially or simultaneously.
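The combination of multiple regression with partial t tests and a simultaneous F test used in studies like this one can be sketched on synthetic data (the study's PROPER data are not reproduced; the sample size and coefficients below are illustrative assumptions):

```python
import numpy as np

# OLS with partial t statistics and a simultaneous F statistic, on
# synthetic data standing in for the study's panel.
rng = np.random.default_rng(0)
n, k = 55, 3                        # e.g. 11 firms x 5 years, 3 predictors
X = rng.normal(size=(n, k))         # profitability, leverage, liquidity (standardized)
y = 0.5 * X[:, 0] + rng.normal(size=n)   # only the first predictor matters here

Xd = np.column_stack([np.ones(n), X])          # add intercept column
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ beta
s2 = resid @ resid / (n - k - 1)               # residual variance
se = np.sqrt(s2 * np.diag(np.linalg.inv(Xd.T @ Xd)))
t_stats = beta / se                            # partial (t) tests, one per coefficient
r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
f_stat = (r2 / k) / ((1 - r2) / (n - k - 1))   # simultaneous (F) test
print(np.round(t_stats, 2), round(f_stat, 2))
```

The t statistics assess each predictor holding the others fixed, while the F statistic tests whether the three predictors jointly explain any variance, which is exactly the partial/simultaneous distinction the abstract draws.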

  14. Asimetri Informasi, Leverage, dan Ukuran Perusahaan pada Manajemen Laba

    Directory of Open Access Journals (Sweden)

    Tiya Mahawyahrti

    2017-03-01

    Full Text Available This study aims to find empirical evidence of the effect of information asymmetry, leverage, and firm size on earnings management. This research uses agency theory and positive accounting theory to explain these effects. The study was conducted on companies listed on the Indonesia Stock Exchange during the period 2009-2013. The samples were selected by the purposive sampling method; 39 companies were selected. Multiple linear regression analysis was used to analyze the data. Based on the data analysis, the study shows that information asymmetry has a positive effect on earnings management, leverage has a positive effect on earnings management, and firm size has a negative effect on earnings management.

  15. Landowners' ability to leverage in negotiations over habitat conservation

    DEFF Research Database (Denmark)

    Lennox, Gareth D.; Dallimer, Martin; Armsworth, Paul R.

    2012-01-01

    can be estimated solely on the basis of the value of alternative land uses. However, in a voluntary negotiation, a landowner could hold-out for a higher payment based on a conservation group or agency's willingness-to-pay by leveraging the value of biodiversity on the property. We examine landowners......' ability to leverage and the consequences for conservation planning. To explore this, we first use an analytical approximation that simplifies the situation to one where a conservation group prioritizes one site for acquisition. Landowners' ability to hold-out for higher payments in this situation ranges...... factors negatively covary. Next, we consider multi-site selection decisions accounting for community complementarity across parcels. We find that leverage potential can be significantly higher in this context, with a maximum increase of 237% of the value of alternative land uses, and that community...

  16. On the performance of a high head Francis turbine at design and off-design conditions

    Science.gov (United States)

    Aakti, B.; Amstutz, O.; Casartelli, E.; Romanelli, G.; Mangani, L.

    2015-01-01

    In the present paper, fully 360-degree transient and steady-state simulations of a Francis turbine were performed at three operating conditions, namely at part load (PL), best efficiency point (BEP), and high load (HL), using different numerical approaches for the pressure-velocity coupling. The simulation domain includes the spiral casing with stay and guide vanes, the runner and the draft tube. The main target of the investigations is the numerical prediction of the overall performance of the high head Francis turbine model as well as local and integral quantities of the complete machine in different operating conditions. All results were compared with experimental data published by the workshop organization. All CFD simulations were performed at model scale with a new in-house, 3D, unstructured, object-oriented finite volume code within the framework of the open-source OpenFOAM library. The novel fully coupled pressure-based solver is designed to solve the incompressible RANS equations and is capable of handling multiple reference frames (MRF). The obtained results show that the overall performance is well captured by the simulations. Regarding the local flow distributions within the inlet section of the draft tube, the axial velocity is better estimated than the circumferential component.

  17. PZT-Actuated and -Sensed Resonant Micromirrors with Large Scan Angles Applying Mechanical Leverage Amplification for Biaxial Scanning

    Directory of Open Access Journals (Sweden)

    Shanshan Gu-Stoppel

    2017-07-01

    Full Text Available This article presents the design, fabrication and characterization of lead zirconate titanate (PZT)-actuated micromirrors, which simultaneously enable extremely large scan angles of up to 106° and high frequencies of 45 kHz. Besides the high driving torque delivered by the PZT actuators, mechanical leverage amplification has been applied so the micromirrors reach large displacements while consuming little power. Additionally, the fracture strength and failure behavior of poly-Si, the base material of the micromirrors, have been studied to optimize the designs and prevent the devices from breaking under high mechanical stress. Since realizing biaxial scanning with two independent single-axis micromirrors shows considerable advantages over using a biaxial micromirror, a setup combining two single-axis micromirrors for biaxial scanning and its results are also presented. Moreover, integrated piezoelectric position sensors are implemented within the micromirrors, on the basis of which closed-loop control has been developed and studied.

  18. ANALISIS PENGARUH ROA, EPS, FINANCIAL LEVERAGE, PROCEED TERHADAP INITIAL RETURN

    Directory of Open Access Journals (Sweden)

    Andhi Wijayanto

    2010-03-01

    Full Text Available This research examines the influence of ROA, EPS, financial leverage, and proceeds on initial return. Initial return was measured as the difference between the firm's stock price on the first day of trading in the secondary market and its IPO price. The study expected return on assets (ROA), earnings per share (EPS), and proceeds to be negatively associated with initial return and financial leverage to be positively associated with it. Data were obtained from company prospectuses and ICMD. The sample was taken using the purposive sampling method with two criteria: companies that conducted an IPO during the period 2000-2006 and were underpriced. With these criteria, 67 companies were obtained as the sample. The analysis used multiple regression. The empirical results indicate that EPS and proceeds have a negative and significant effect on initial return, whereas ROA and financial leverage have no significant effect on initial return.

  19. Leveraging Affective Learning for Developing Future Airmen

    Science.gov (United States)

    2009-11-01

    brings a generic, systematic approach based on foundations-of-learning principles and standard system theory to the ID process (Tennyson, 1989)...solving: Effects on learning. Cognitive Science, 12(2), 257–285. Tennyson, R. D. (1989). Cognitive science update of instructional systems design models

  20. Leveraging architecture patterns to satisfy quality attributes

    NARCIS (Netherlands)

    Harrison, Neil B.; Avgeriou, Paris; Oquendo, F

    2007-01-01

    Architectural design has been characterized as making a series of decisions that have system-wide impact. These decisions have side effects which can have significant impact on the system. However, the impact may be first understood much later; when the system architecture is difficult to change.

  1. Leveraging the wisdom of the crowd in software testing

    CERN Document Server

    Sharma, Mukesh

    2015-01-01

    Its scale, flexibility, cost effectiveness, and fast turnaround are just a few reasons why crowdsourced testing has received so much attention lately. While there are a few online resources that explain what crowdsourced testing is all about, there's been a need for a book that covers best practices, case studies, and the future of this technique.Filling this need, Leveraging the Wisdom of the Crowd in Software Testing shows you how to leverage the wisdom of the crowd in your software testing process. Its comprehensive coverage includes the history of crowdsourcing and crowdsourced testing, im

  2. Designing large, high-efficiency, high-numerical-aperture, transmissive meta-lenses for visible light

    CERN Document Server

    Byrnes, Steven J; Aieta, Francesco; Capasso, Federico

    2015-01-01

    A metasurface lens (meta-lens) is a lens that bends light with an array of nanostructures on a flat surface, rather than by refraction. Macroscopic meta-lenses (mm- to cm-scale diameter) have been quite difficult to simulate and optimize, due to the large area, the lack of periodicity, and the billions of adjustable parameters. We describe a method for designing a large-area meta-lens that allows not only prediction of the efficiency and far-field, but also optimization of the shape and position of each individual nanostructure, with a computational cost that is almost independent of the lens size. Loosely speaking, the technique consists of designing a series of metasurface beam deflectors (blazed gratings), and then gluing them together. As a test of this framework, we design some high-numerical-aperture (NA=0.94) meta-lenses for visible light, based on TiO2 nano-pillars on a glass substrate. One of our designs is predicted to focus unpolarized 580nm light with 79% predicted efficiency; another focuses 580n...
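    As context for the lens design, the standard target phase profile of a flat lens can be sketched (a textbook relation, not the authors' per-nanostructure optimizer); the focal length `f` below is a hypothetical value, while the 580 nm wavelength and NA = 0.94 come from the abstract:

```python
# Sketch of the hyperbolic phase a meta-lens must impose to focus a plane
# wave of wavelength lam at focal length f, plus the NA set by the aperture.
import math

def target_phase(r, f, lam):
    """Required phase (radians, folded into [0, 2*pi)) at radius r."""
    return (-2 * math.pi / lam) * (math.sqrt(r ** 2 + f ** 2) - f) % (2 * math.pi)

def numerical_aperture(radius, f):
    """NA of a lens with the given aperture radius and focal length (in air)."""
    return radius / math.sqrt(radius ** 2 + f ** 2)

lam = 580e-9   # design wavelength from the abstract
f = 100e-6     # hypothetical focal length, for illustration only
radius = f * 0.94 / math.sqrt(1 - 0.94 ** 2)  # aperture radius giving NA = 0.94
print(round(numerical_aperture(radius, f), 2))  # 0.94
```

    The nanostructures (here, TiO2 pillars) are then chosen locally so their transmission phase approximates `target_phase` at each position, which is what the deflector-gluing procedure optimizes.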

  3. A high-throughput media design approach for high performance mammalian fed-batch cultures.

    Science.gov (United States)

    Rouiller, Yolande; Périlleux, Arnaud; Collet, Natacha; Jordan, Martin; Stettler, Matthieu; Broly, Hervé

    2013-01-01

    An innovative high-throughput medium development method based on media blending was successfully used to improve the performance of a Chinese hamster ovary fed-batch medium in shaking 96-deepwell plates. Starting from a proprietary chemically-defined medium, 16 formulations testing 43 of 47 components at 3 different levels were designed. Media blending was performed following a custom-made mixture design of experiments considering binary blends, resulting in 376 different blends that were tested during both cell expansion and fed-batch production phases in one single experiment. Three approaches were chosen to provide the best output of the large amount of data obtained. A simple ranking of conditions was first used as a quick approach to select new formulations with promising features. Then, prediction of the best mixes was done to maximize both growth and titer using the Design Expert software. Finally, a multivariate analysis enabled identification of individual potential critical components for further optimization. Applying this high-throughput method on a fed-batch, rather than on a simple batch, process opens new perspectives for medium and feed development that enables identification of an optimized process in a short time frame.
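    A plausible reading of the numbers in this abstract (an assumption, not taken from the paper) is that the 376 blends are the 120 unordered pairs of the 16 formulations, each mixed at three ratios, plus the 16 unblended media: 120 × 3 + 16 = 376. Enumerating such a binary-blend design is straightforward:

```python
# Hedged reconstruction of the blend count; formulation names and mixing
# ratios below are hypothetical.
from itertools import combinations

formulations = [f"F{i:02d}" for i in range(1, 17)]   # 16 hypothetical media
ratios = [(0.25, 0.75), (0.50, 0.50), (0.75, 0.25)]  # assumed mixing ratios

blends = [((a, 1.0),) for a in formulations]         # unblended controls
for a, b in combinations(formulations, 2):           # all 120 unordered pairs
    for ra, rb in ratios:
        blends.append(((a, ra), (b, rb)))            # binary blend a:b = ra:rb

print(len(blends))  # 16 + 120 * 3 = 376
```

    Each tuple lists the component media with their fractions (summing to 1), i.e. one well of the 96-deepwell screen.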

  4. The designer's guide to high-purity oscillators

    CERN Document Server

    Hegazi, Emad; Abidi, Asad

    2006-01-01

    Presents a comprehensive theory and design methodology for the design of LC CMOS oscillators used in every wireless transmission system. This book introduces the subject of phase noise and oscillators from the very first principles, and attempts to carry the reader to a very intuitive circuit-driven theory of phase noise in LC oscillators.

  5. Coach design for the Korean high-speed train: a systematic approach to passenger seat design and layout.

    Science.gov (United States)

    Jung, E S; Han, S H; Jung, M; Choe, J

    1998-12-01

    Proper ergonomic design of a passenger seat and coach layout for a high-speed train is an essential component that is directly related to passenger comfort. In this research, a systematic approach to the design of passenger seats was described and the coach layout which reflected the tradeoff between transportation capacity and passenger comfort was investigated for the Korean high-speed train. As a result, design recommendations and specifications of the passenger seat and its layout were suggested. The whole design process is composed of four stages. A survey and analysis of design requirement was first conducted, which formed the base for designing the first and second class passenger seats. Prototypes were made and evaluated iteratively, and seat arrangement and coach layout were finally obtained. The systematic approach and recommendations suggested in this study are expected to be applicable to the seat design for public transportations and to help modify and redesign existing vehicular seats.

  6. Leveraging Large-Scale Semantic Networks for Adaptive Robot Task Learning and Execution.

    Science.gov (United States)

    Boteanu, Adrian; St Clair, Aaron; Mohseni-Kabir, Anahita; Saldanha, Carl; Chernova, Sonia

    2016-12-01

    This work seeks to leverage semantic networks containing millions of entries encoding assertions of commonsense knowledge to enable improvements in robot task execution and learning. The specific application we explore in this project is object substitution in the context of task adaptation. Humans easily adapt their plans to compensate for missing items in day-to-day tasks, substituting a wrap for bread when making a sandwich, or stirring pasta with a fork when out of spoons. Robot plan execution, however, is far less robust, with missing objects typically leading to failure if the robot is not aware of alternatives. In this article, we contribute a context-aware algorithm that leverages the linguistic information embedded in the task description to identify candidate substitution objects without reliance on explicit object affordance information. Specifically, we show that the task context provided by the task labels within the action structure of a task plan can be leveraged to disambiguate information within a noisy large-scale semantic network containing hundreds of potential object candidates to identify successful object substitutions with high accuracy. We present two extensive evaluations of our work on both abstract and real-world robot tasks, showing that the substitutions made by our system are valid, accepted by users, and lead to a statistically significant reduction in robot learning time. In addition, we report the outcomes of testing our approach with a large number of crowd workers interacting with a robot in real time.
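    The disambiguation step can be illustrated with a toy sketch; the relatedness scores and the scoring formula below are invented for illustration and stand in for the paper's large-scale semantic network:

```python
# Toy object-substitution ranking: combine generic relatedness to the missing
# object with fit to the task-context labels. All edge weights are made up.
RELATED = {
    ("bread", "wrap"): 0.8, ("bread", "plate"): 0.3, ("bread", "tortilla"): 0.7,
    ("wrap", "sandwich"): 0.6, ("plate", "sandwich"): 0.2,
    ("tortilla", "sandwich"): 0.5,
}

def rel(a, b):
    """Symmetric relatedness lookup with a default of 0."""
    return RELATED.get((a, b), RELATED.get((b, a), 0.0))

def rank_substitutes(missing, candidates, context):
    """Score each candidate by relatedness to the missing object, boosted by
    its average relatedness to the task-label context (the disambiguation)."""
    scored = []
    for c in candidates:
        ctx = sum(rel(c, t) for t in context) / len(context)
        scored.append((rel(missing, c) * (1.0 + ctx), c))
    return [c for _, c in sorted(scored, reverse=True)]

order = rank_substitutes("bread", ["wrap", "plate", "tortilla"], ["sandwich"])
print(order)  # ['wrap', 'tortilla', 'plate']
```

    With the sandwich-making context, "wrap" outranks "plate" even though both are related to "bread", which mirrors the paper's point that task labels filter a noisy candidate set.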

  7. Beyond traditional advertisements: leveraging Facebook's social structures for research recruitment.

    Science.gov (United States)

    Valdez, Rupa S; Guterbock, Thomas M; Thompson, Morgan J; Reilly, Jeremiah D; Menefee, Hannah K; Bennici, Maria S; Williams, Ishan C; Rexrode, Deborah L

    2014-10-27

    Obtaining access to a demographically and geographically diverse sample for health-related research can be costly and time consuming. Previous studies have reported mixed results regarding the potential of using social media-based advertisements to overcome these challenges. Our aim was to develop and assess the feasibility, benefits, and challenges of recruiting for research studies related to consumer health information technology (IT) by leveraging the social structures embedded in the social networking platform, Facebook. Two recruitment strategies that involved direct communication with existing Facebook groups and pages were developed and implemented in two distinct populations. The first recruitment strategy involved posting a survey link directly to consenting groups and pages and was used to recruit Filipino-Americans to a study assessing the perceptions, use of, and preferences for consumer health IT. This study took place between August and December 2013. The second recruitment strategy targeted individuals with type 2 diabetes and involved creating a study-related Facebook group and asking administrators of other groups and pages to publicize our group to their members. Group members were then directly invited to participate in an online pre-study survey. This portion of a larger study to understand existing health management practices as a foundation for consumer health IT design took place between May and June 2014. In executing both recruitment strategies, efforts were made to establish trust and transparency. Recruitment rate, cost, content of interaction, and characteristics of the sample obtained were used to assess the recruitment methods. The two recruitment methods yielded 87 and 79 complete responses, respectively. 
The first recruitment method yielded a rate of study completion proportionate to that of the rate of posts made, whereas recruitment successes of the second recruitment method seemed to follow directly from the actions of a subset

  8. Mechanical Design of High Lift Systems for High Aspect Ratio Swept Wings

    Science.gov (United States)

    Rudolph, Peter K. C.

    1998-01-01

    The NASA Ames Research Center is working to develop a methodology for the optimization and design of the high lift system for future subsonic airliners with the involvement of two partners. Aerodynamic analysis methods for two dimensional and three dimensional wing performance with flaps and slats deployed are being developed through a grant with the aeronautical department of the University of California Davis, and a flap and slat mechanism design procedure is being developed through a contract with PKCR, Inc., of Seattle, WA. This report documents the work that has been completed in the contract with PKCR on mechanism design. Flap mechanism designs have been completed for seven (7) different mechanisms with a total of twelve (12) different layouts all for a common single slotted flap configuration. The seven mechanisms are as follows: Simple Hinge, Upside Down/Upright Four Bar Linkage (two layouts), Upside Down Four Bar Linkages (three versions), Airbus A330/340 Link/Track Mechanism, Airbus A320 Link/Track Mechanism (two layouts), Boeing Link/Track Mechanism (two layouts), and Boeing 767 Hinged Beam Four Bar Linkage. In addition, a single layout has been made to investigate the growth potential from a single slotted flap to a vane/main double slotted flap using the Boeing Link/Track Mechanism. All layouts show Fowler motion and gap progression of the flap from stowed to a fully deployed position, and evaluations based on spanwise continuity, fairing size and number, complexity, reliability, maintainability, and weight as well as Fowler motion and gap progression are presented. For slat design, the options have been limited to mechanisms for a shallow leading edge slat. Three (3) different layouts are presented for maximum slat angles of 20 deg, 15 deg and 10 deg, all mechanized with a rack and pinion drive similar to that on the Boeing 757 airplane. Based on the work of Ljungstroem in Sweden, this type of slat design appears to shift the lift curve so that

  9. INSTITUTIONALIZING SAFEGUARDS-BY-DESIGN: HIGH-LEVEL FRAMEWORK

    Energy Technology Data Exchange (ETDEWEB)

    Trond Bjornard PhD; Joseph Alexander; Robert Bean; Brian Castle; Scott DeMuth, Ph.D.; Phillip Durst; Michael Ehinger; Prof. Michael Golay, Ph.D.; Kevin Hase, Ph.D.; David J. Hebditch, DPhil; John Hockert, Ph.D.; Bruce Meppen; James Morgan; Jerry Phillips, Ph.D., PE

    2009-02-01

    The application of a Safeguards-by-Design (SBD) process for new nuclear facilities can reduce proliferation risks. A multi-laboratory team was sponsored in Fiscal Year (FY) 2008 to define a SBD process and determine how it could be incorporated into existing facility design and construction processes. The possibility to significantly influence major design features, such as process selection and plant layout, largely ends with the conceptual design step. Therefore SBD’s principal focus must be on the early inclusion of safeguards requirements and the early identification of beneficial design features. The result could help form the basis for a new international norm for integrating safeguards into facility design. This is an interim report describing progress and project status as of the end of FY08. In this effort, SBD is defined as a structured approach to ensure the timely, efficient, and cost-effective integration of international and national safeguards, physical security, and other nonproliferation objectives into the overall design process for a nuclear facility. A key objective is to ensure that security and nonproliferation issues are considered when weighing facility design alternatives. Central to the work completed in FY08 was a study in which a SBD process was developed in the context of the current DOE facility acquisition process. The DOE study enabled the development of a “SBD design loop” that is suitable for use in any facility design process. It is a graded, iterative process that incorporates safeguards concerns throughout the conceptual, preliminary and final design processes. Additionally, a set of proposed design principles for SBD was developed. A “Generic SBD Process” was then developed. Key features of the process include the initiation of safeguards design activities in the pre-conceptual planning phase, early incorporation of safeguards requirements into the project requirements, early appointment of an SBD team, and

  10. Design and selection of triazole-based compounds with high ...

    Indian Academy of Sciences (India)

    linked energetic polynitropyrazoles, which showed that these compounds have potential applications as energetic compounds. Zhu et al. [15] designed three novel explosives by introducing N-oxides into the 1,2,4-triazole, which were more pow ...

  11. High-Fidelity Aerodynamic Design with Transition Prediction Project

    Data.gov (United States)

    National Aeronautics and Space Administration — To enhance aerodynamic design capabilities, Desktop Aeronautics proposes to significantly improve upon the integration (performed in Phase 1) of a new sweep/taper...

  12. High-Fidelity Aerodynamic Design with Transition Prediction Project

    Data.gov (United States)

    National Aeronautics and Space Administration — To enhance aerodynamic design capabilities, Desktop Aeronautics proposes to combine a new sweep/taper integrated-boundary-layer (IBL) code that includes transition...

  13. ADVANCED DESIGN SOLUTIONS FOR HIGH-PRECISION WOODWORKING MACHINES

    Directory of Open Access Journals (Sweden)

    Giuseppe Lucisano

    2016-03-01

    Full Text Available With the aim of achieving the highest precision in woodworking, a mix of alternative approaches, fruitfully integrated into a common design strategy, is essential. This paper presents an overview of technical solutions recently developed by the authors in the design of machine tools, and their final effects on manufacturing. The most advanced solutions in machine design are reported side by side with common practices and small everyday expedients. These design actions are directly or indirectly related to the rational use of materials, sometimes very uncommon ones, as in the case of magnetorheological fluids chosen to implement active speed and force control on the electro-spindle, permitting an improvement in the quality of wood machining. Other actions are less unusual, such as the adoption of innovative anti-vibration supports for the machine base. Tradition or innovation, all these technical solutions contribute to the final result: the highest precision in wood machining.

  14. Total systems design analysis of high performance structures

    Science.gov (United States)

    Verderaime, V.

    1993-01-01

    Designer-control parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance disciplines integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integrating tasks are reliability and recurring structural costs. Significant interface designer control parameters were noted as shapes, dimensions, probability range factors, and cost. Structural failure concept is presented, and first-order reliability and deterministic methods, benefits, and limitations are discussed. A deterministic reliability technique combining benefits of both is proposed for static structures which is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.

  15. Total systems design analysis of high performance structures

    Science.gov (United States)

    Verderaime, V.

    1993-11-01

    Designer-control parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance disciplines integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integrating tasks are reliability and recurring structural costs. Significant interface designer control parameters were noted as shapes, dimensions, probability range factors, and cost. Structural failure concept is presented, and first-order reliability and deterministic methods, benefits, and limitations are discussed. A deterministic reliability technique combining benefits of both is proposed for static structures which is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.

  16. Design phase identification of high pile rebound soils : final report

    Science.gov (United States)

    2010-12-15

    An engineering problem has occurred when installing displacement piles in certain soils. During driving, piles rebound excessively with each hammer blow, causing delays and, as a result, may not achieve the required design capacities. Piles dri...

  17. Energy Design Guidelines for High Performance Schools: Hot and Dry Climates.

    Science.gov (United States)

    Department of Energy, Washington, DC. Office of Energy Efficiency and Renewable Energy.

    This guide contains recommendations for designing high performance, energy efficient schools located in hot and dry climates. A high performance checklist for designers is included along with several case studies of projects that successfully demonstrated high performance design solutions for hot and dry climates. The guide's 10 sections…

  18. Mining E-mail to Leverage Knowledge Networks in Organizations

    NARCIS (Netherlands)

    van Reijsen, J.; Helms, R.W.; Jackson, T.W.

    2009-01-01

    There is nothing new about the notion that in today's knowledge-driven economy, knowledge is the key strategic asset for competitive advantage in an organization. Also, we have learned that knowledge resides in the organization's informal network. Hence, to leverage business performance from a

  19. Leveraging Mobile Games for Place-Based Language Learning

    Science.gov (United States)

    Holden, Christopher L.; Sykes, Julie M.

    2011-01-01

    This paper builds on the emerging body of research aimed at exploring the educational potential of mobile technologies, specifically, how to leverage place-based, augmented reality mobile games for language learning. Mentira is the first place-based, augmented reality mobile game for learning Spanish in a local neighborhood in the Southwestern…

  20. Leveraging Institutional Knowledge for Student Success: Promoting Academic Advisors

    Science.gov (United States)

    Pellegrino, Jeffrey Louis; Snyder, Charity; Crutchfield, Nikki; Curtis, Cesquinn M.; Pringle, Eboni

    2015-01-01

    To engage students and meet institutional goals, higher education leaders need to leverage the institutional knowledge of their staff and their professional competencies. Evidence based decision-making provides a stepping-stone to strategic staffing practices. Strategically developing and retaining staff members moves the conversation from…

  1. 77 FR 19417 - Proposed Guidance on Leveraged Lending

    Science.gov (United States)

    2012-03-30

    ..., management information systems (MIS) at some institutions have proven less than satisfactory in accurately... leveraged finance that facilitates consistent application across all business lines. Well-defined... with the institution's risk appetite. Sound MIS that enable management to identify, aggregate, and...

  2. Factors affecting Leverage: An empirical analysis of Mauritius ...

    African Journals Online (AJOL)

    Nafiisah

    His findings also suggest that firms with substantial cash flow from depreciation exploit their higher debt capacity by maintaining a capital structure with significantly more debt than otherwise. Harris and Raviv (1991) found an inverse relationship between leverage and volatility, advertising expenditure, the probability of ...

  3. Technophobia and technostress as deterrents to leveraging ICTs for ...

    African Journals Online (AJOL)

    This paper therefore examined the possible manifestation of technophobia and techno-stress as deterrents and inhibitors to leveraging ICTs for user information service delivery in Nigerian academic libraries. It discussed concepts of technophobia and techno-stress, nature of user information services in academic libraries, ...

  4. Real interest rates, leverage, and bank risk-taking

    NARCIS (Netherlands)

    Dell’Ariccia, G.; Laeven, L.; Marquez, R.

    2014-01-01

    Do low interest rate environments lead to greater bank risk-taking? We show that, when banks can adjust their capital structures, reductions in real interest rates lead to greater leverage and higher risk for any downward sloping loan demand function. However, if the capital structure is fixed, the

  5. The effect of leverage increases on real earnings management

    NARCIS (Netherlands)

    I. Zagers-Mamedova (Irina)

    2009-01-01

    The main subject of this paper is to understand whether there could be an incentive for managers to manipulate cash flow from operating activities (CFO) through the use of real earnings management (REM) in situations of increasing leverage. Based upon a study of Jelinek (2007) who

  6. Knowledge-leverage-based TSK Fuzzy System modeling.

    Science.gov (United States)

    Zhaohong Deng; Yizhang Jiang; Kup-Sze Choi; Fu-Lai Chung; Shitong Wang

    2013-08-01

    Classical fuzzy system modeling methods consider only the current scene where the training data are assumed to be fully collectable. However, if the data available from the current scene are insufficient, the fuzzy systems trained by using the incomplete datasets will suffer from weak generalization capability for the prediction in the scene. In order to overcome this problem, a knowledge-leverage-based fuzzy system (KL-FS) is studied in this paper from the perspective of transfer learning. The KL-FS intends to not only make full use of the data from the current scene in the learning procedure, but also effectively leverage the existing knowledge from the reference scenes. Specifically, a knowledge-leverage-based Takagi-Sugeno-Kang-type Fuzzy System (KL-TSK-FS) is proposed by integrating the corresponding knowledge-leverage mechanism. The new fuzzy system modeling technique is evaluated through experiments on synthetic and real-world datasets. The results demonstrate that KL-TSK-FS has better performance and adaptability than the traditional fuzzy modeling methods in scenes with insufficient data.
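    A minimal sketch of the knowledge-leverage idea (illustrative only; the paper's KL-TSK-FS embeds this in a full transfer-learning objective): a zero-order TSK fuzzy system whose rule consequents are blended with parameters learned in a data-rich reference scene. All parameter values below are invented.

```python
# Zero-order TSK inference with a simple knowledge-leverage blend.
import math

def tsk_predict(x, centers, sigma, consequents):
    """Gaussian rule firing strengths, normalized, weighting the rule outputs."""
    w = [math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) for c in centers]
    s = sum(w)
    return sum(wi / s * ci for wi, ci in zip(w, consequents))

def leverage(current, reference, lam):
    """Blend current-scene consequents with reference-scene knowledge;
    lam in [0, 1] controls how much the reference scene is trusted."""
    return [(1 - lam) * c + lam * r for c, r in zip(current, reference)]

centers, sigma = [0.0, 1.0, 2.0], 0.5
current_conseq = [0.2, 0.9, 2.5]    # trained on sparse current-scene data
reference_conseq = [0.0, 1.0, 2.0]  # knowledge from a data-rich reference scene
blended = leverage(current_conseq, reference_conseq, 0.5)
print(tsk_predict(1.0, centers, sigma, blended))
```

    When current-scene data are insufficient, pulling the consequents toward the reference scene regularizes the model, which is the intuition behind the paper's leverage mechanism.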

  7. A feasible central limit theory for realised volatility under leverage

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Shephard, Neil

    In this note we show that the feasible central limit theory for realised volatility and realised covariation recently developed by Barndorff-Nielsen and Shephard applies under arbitrary diffusion-based leverage effects. Results from a simulation experiment suggest that the feasible version of the limit...
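    The feasible inference the note refers to can be sketched numerically: realised volatility with the Barndorff-Nielsen and Shephard feasible 95% confidence interval, whose standard error is estimated from the data itself via sqrt((2/3) * sum(r_i**4)). The simulated constant-volatility returns below are illustrative, not the authors' experiment.

```python
# Realised volatility and its feasible confidence interval from
# high-frequency returns (synthetic data for illustration).
import math
import random

random.seed(0)
n = 390                                    # e.g. one-minute returns in a day
returns = [random.gauss(0.0, 0.001) for _ in range(n)]

rv = sum(r * r for r in returns)           # realised volatility (variance form)
se = math.sqrt((2.0 / 3.0) * sum(r ** 4 for r in returns))  # feasible std. error
ci = (rv - 1.96 * se, rv + 1.96 * se)      # feasible 95% interval for the IV
print(rv, ci)
```

    The point of the feasible theory is that `se` uses only observed returns (via the realised quarticity), so the interval for the integrated variance can be computed without knowing the volatility path.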

  8. The impact of Taxation on Bank Leverage and Asset Risk

    NARCIS (Netherlands)

    Horvath, B.L.

    2013-01-01

    Abstract: The tax-benefit of interest deductibility encourages debt financing, but regulatory and market constraints create dependency between bank leverage and risk. Using a large international sample of banks, this paper estimates the short- and long-run effects of corporate income taxes (CIT) on bank

  9. Leverage Leadership: A Practical Guide to Building Exceptional Schools

    Science.gov (United States)

    Bambrick-Santoyo, Paul

    2012-01-01

    Paul Bambrick-Santoyo (Managing Director of Uncommon Schools) shows leaders how they can raise their schools to greatness by following a core set of principles. These seven principles, or "levers," allow for consistent, transformational, and replicable growth. With intentional focus on these areas, leaders will leverage much more…

  10. Design of high gradient, high repetition rate damped C-band rf structures

    Science.gov (United States)

    Alesini, David; Bellaveglia, Marco; Bini, Simone; Gallo, Alessandro; Lollo, Valerio; Pellegrino, Luigi; Piersanti, Luca; Cardelli, Fabio; Migliorati, Mauro; Mostacci, Andrea; Palumbo, Luigi; Tocci, Simone; Ficcadenti, Luca; Pettinacci, Valerio

    2017-03-01

    The gamma beam system of the European Extreme Light Infrastructure-Nuclear Physics project foresees the use of a multibunch train colliding with a high intensity recirculated laser pulse. The linac energy booster is composed of 12 traveling wave C-band structures, 1.8 m long with a field phase advance per cell of 2π/3 and a repetition rate of 100 Hz. Because of the multibunch operation, the structures have been designed with a dipole higher order mode (HOM) damping system to avoid beam breakup (BBU). They are quasiconstant gradient structures with symmetric input couplers and a very effective damping of the HOMs in each cell based on silicon carbide (SiC) rf absorbers coupled to each cell through waveguides. An optimization of the electromagnetic and mechanical design has been done to simplify the fabrication and to reduce the cost of the structures. In the paper, after a review of the beam dynamics issues related to the BBU effects, we discuss the electromagnetic and thermomechanic design criteria of the structures. We also illustrate the criteria to compensate the beam loading and the rf measurements that show the effectiveness of the HOM damping.

  11. Design of high gradient, high repetition rate damped C-band rf structures

    Directory of Open Access Journals (Sweden)

    David Alesini

    2017-03-01

    Full Text Available The gamma beam system of the European Extreme Light Infrastructure–Nuclear Physics project foresees the use of a multibunch train colliding with a high intensity recirculated laser pulse. The linac energy booster is composed of 12 traveling wave C-band structures, 1.8 m long with a field phase advance per cell of 2π/3 and a repetition rate of 100 Hz. Because of the multibunch operation, the structures have been designed with a dipole higher order mode (HOM) damping system to avoid beam breakup (BBU). They are quasiconstant gradient structures with symmetric input couplers and a very effective damping of the HOMs in each cell based on silicon carbide (SiC) rf absorbers coupled to each cell through waveguides. An optimization of the electromagnetic and mechanical design has been done to simplify the fabrication and to reduce the cost of the structures. In the paper, after a review of the beam dynamics issues related to the BBU effects, we discuss the electromagnetic and thermomechanic design criteria of the structures. We also illustrate the criteria to compensate the beam loading and the rf measurements that show the effectiveness of the HOM damping.

  12. A new design for a high resolution, high efficiency CZT gamma camera detector

    Science.gov (United States)

    Mestais, C.; Baffert, N.; Bonnefoy, J. P.; Chapuis, A.; Koenig, A.; Monnet, O.; Ouvrier Buffet, P.; Rostaing, J. P.; Sauvage, F.; Verger, L.

    2001-02-01

    We have designed a CZT gamma camera detector that provides an array of CZT pixels and associated front-end electronics - including an ASIC - and permits gamma camera measurements using the method patented by CEA-LETI and reported by Verger et al. [1]. Electron response in each CZT pixel is registered by correcting pulse height for position of interaction based on fast rise-time information. This method brings advantages of high scatter rejection while allowing high detection efficiency. These techniques and the systems approach have been developed at CEA-LETI in an exclusive joint development with BICRON and CRISMATEC who in turn are commercializing the technology. The initial system is implemented in an array framework with 1920 pixels, approximately 180×215 mm² in dimension, but the system architecture expands readily to 4096 pixels, and these arrays can be ganged into groups of up to 8 for pixel planes totaling over 32 000 pixels without architecture changes. The overall system design is described and brain phantom images are presented that were obtained by scanning with a small number of pixels.

  13. Design of a high linearity and high gain accuracy analog baseband circuit for DAB receiver

    Science.gov (United States)

    Li, Ma; Zhigong, Wang; Jian, Xu; Yiqiang, Wu; Junliang, Wang; Mi, Tian; Jianping, Chen

    2015-02-01

    An analog baseband circuit of high linearity and high gain accuracy for a digital audio broadcasting receiver is implemented in a 0.18-μm RFCMOS process. The circuit comprises a 3rd-order active-RC complex filter (CF) and a programmable gain amplifier (PGA). An automatic tuning circuit is also designed to tune the CF's pass band. Instead of the class-A fully differential operational amplifier (FDOPA) adopted in conventional CF and PGA designs, a class-AB FDOPA is specially employed in this circuit to achieve higher linearity and gain accuracy thanks to its large current swing capability with lower static current consumption. In the PGA circuit, a novel DC offset cancellation technique based on the MOS resistor is introduced to reduce the settling time significantly. A reformative switching network is proposed, which can eliminate the switch resistor's influence on the gain accuracy of the PGA. The measurement results show the gain range of the circuit is 10-50 dB with a 1-dB step size, and the gain accuracy is within ±0.3 dB. The OIP3 is 23.3 dBm at a gain of 10 dB. Simulation results show that the settling time is reduced from 100 ms to 1 ms. The image band rejection is about 40 dB. The circuit draws only 4.5 mA from a 1.8 V supply voltage.

  14. High Gradient $Nb_3Sn$ Quadrupole Demonstrator MKQXF Engineering Design

    CERN Document Server

    Kokkinos, C; Karppinen, Mikko; CERN. Geneva. ATS Department

    2016-01-01

    A new mechanical design concept for the $Nb_3Sn$ quadrupoles has been developed with a goal of an accelerator quality magnet that can be industrially produced in large series. This concept can easily be extended to any length and applied on both 1-in-1 and 2-in-1 configurations. It is based on the pole-loading concept and collared coils using dipole-type collars. Detailed design optimisation of a demonstrator magnet based on present base-line HL-LHC IR quadrupole QXF coil geometry has been carried out including the end regions. This report describes the design concept and the fully parametric multi-physics finite element (FE) models that were used to determine the optimal assembly parameters including the effects of the manufacturing tolerances.

  15. Designing high-quality interactive multimedia learning modules.

    Science.gov (United States)

    Huang, Camillan

    2005-01-01

Modern research has broadened scientific knowledge and revealed the interdisciplinary nature of the sciences. For today's students, this advance translates to learning a more diverse range of concepts, usually in less time and without supporting resources. Students can benefit from technology-enhanced learning supplements that unify concepts and are delivered on demand over the Internet. Such supplements, like imaging informatics databases, serve as innovative references for biomedical information, but could improve their interaction interfaces to support learning. With information from these digital datasets, multimedia learning tools can be designed to transform learning into an active process in which students can visualize relationships over time, interact with dynamic content, and immediately test their knowledge. This approach bridges knowledge gaps, fosters conceptual understanding, and builds problem-solving and critical-thinking skills, all essential components of informatics training for science and medicine. Additional benefits include cost-free access and ease of dissemination over the Internet or CD-ROM. However, current methods for the design of multimedia learning modules are not standardized and lack strong instructional design. Pressure from administrators at the top and from students at the bottom is pushing faculty to use modern technology to address the learning needs and expectations of contemporary students. Yet faculty lack adequate support and training to adopt this new approach. So how can faculty learn to create educational multimedia materials for their students? This paper provides guidelines on best practices in educational multimedia design, derived from the Virtual Labs Project at Stanford University. 
The development of a multimedia module consists of five phases: (1) understand the learning problem and the users' needs; (2) design the content to harness the enabling technologies; (3) build multimedia materials with web style standards and

  16. Design Optimization of a High Aspect Ratio Rigid/Inflatable Wing

    OpenAIRE

    Butt, Lauren Marie

    2011-01-01

High aspect-ratio, long-endurance aircraft require different design modeling from aircraft with traditional moderate aspect ratios. Because high aspect-ratio, long-endurance aircraft are generally more flexible structures than the traditional wing, they require modeling methods capable of handling a flexible structure even at the preliminary design stage. This work describes a design optimization method for combining rigid and inflatable wing design. The design will take advantage of the ...

  17. The Design of a High-Integrity Disk Management Subsystem

    NARCIS (Netherlands)

    Oey, M.A.

    2005-01-01

    This dissertation describes and experimentally evaluates the design of the Logical Disk, a disk management subsystem that guarantees the integrity of data stored on disk even after system failures, while still providing performance competitive to other storage systems. Current storage systems that

  18. Safe, High-Performance, Sustainable Precast School Design

    Science.gov (United States)

    Finsen, Peter I.

    2011-01-01

    School design utilizing integrated architectural and structural precast and prestressed concrete components has gained greater acceptance recently for numerous reasons, including increasingly sophisticated owners and improved learning environments based on material benefits such as: sustainability, energy efficiency, indoor air quality, storm…

  19. Design of Ultra High Performance Fiber Reinforced Concrete Shells

    DEFF Research Database (Denmark)

    Jepsen, Michael S.; Lambertsen, Søren Heide; Damkilde, Lars

    2013-01-01

    Fiber Reinforced Concrete shell. The major challenge in the design phase has been securing sufficient stiffness of the structure while keeping the weight at a minimum. The weight/stiffness issue has been investigated by means of the finite element method, to optimize the structure regarding overall...

  20. Session on High Speed Civil Transport Design Capability Using MDO and High Performance Computing

    Science.gov (United States)

    Rehder, Joe

    2000-01-01

    Since the inception of CAS in 1992, NASA Langley has been conducting research into applying multidisciplinary optimization (MDO) and high performance computing toward reducing aircraft design cycle time. The focus of this research has been the development of a series of computational frameworks and associated applications that increased in capability, complexity, and performance over time. The culmination of this effort is an automated high-fidelity analysis capability for a high speed civil transport (HSCT) vehicle installed on a network of heterogeneous computers with a computational framework built using Common Object Request Broker Architecture (CORBA) and Java. The main focus of the research in the early years was the development of the Framework for Interdisciplinary Design Optimization (FIDO) and associated HSCT applications. While the FIDO effort was eventually halted, work continued on HSCT applications of ever increasing complexity. The current application, HSCT4.0, employs high fidelity CFD and FEM analysis codes. For each analysis cycle, the vehicle geometry and computational grids are updated using new values for design variables. Processes for aeroelastic trim, loads convergence, displacement transfer, stress and buckling, and performance have been developed. In all, a total of 70 processes are integrated in the analysis framework. Many of the key processes include automatic differentiation capabilities to provide sensitivity information that can be used in optimization. A software engineering process was developed to manage this large project. Defining the interactions among 70 processes turned out to be an enormous, but essential, task. A formal requirements document was prepared that defined data flow among processes and subprocesses. A design document was then developed that translated the requirements into actual software design. 
A validation program was defined and implemented to ensure that codes integrated into the framework produced the same

  1. Low Power Design with High-Level Power Estimation and Power-Aware Synthesis

    CERN Document Server

    Ahuja, Sumit; Shukla, Sandeep Kumar

    2012-01-01

Low-power ASIC/FPGA based designs are important due to the need for extended battery life, reduced form factor, and lower packaging and cooling costs for electronic devices. These products require fast turnaround time because of the increasing demand for handheld electronic devices such as cell-phones, PDAs and high performance machines for data centers. To achieve short time to market, design flows must facilitate a much shortened time-to-product requirement. High-level modeling, architectural exploration and direct synthesis of design from high level description enable this design process. This book presents novel research techniques, algorithms, methodologies and experimental results for high-level power estimation and power-aware high-level synthesis. Readers will learn to apply such techniques to enable design flows resulting in shorter time to market and successful low power ASIC/FPGA design. Integrates power estimation and reduction for high level synthesis, with low-power, high-level design; Shows spec...

  2. Leveraging synergy for multiple agent infotaxis

    Energy Technology Data Exchange (ETDEWEB)

    Gintautas, Vadas [Los Alamos National Laboratory; Hagberg, Aric A [Los Alamos National Laboratory; Bettencourt, Luis M A [Los Alamos National Laboratory

    2008-01-01

Social computation, whether in the form of a search performed by a swarm of agents or the predictions of markets, often supplies remarkably good solutions to complex problems, which often elude the best experts. There is an intuition, built upon many anecdotal examples, that pervading principles are at play that allow individuals trying to solve a problem locally to aggregate their information to arrive at an outcome superior to any available to isolated parties. Here we show that the general structure of this problem can be cast in terms of information theory and derive general mathematical conditions for information sharing and coordination that lead to optimal multi-agent searches. Specifically we illustrate the problem in terms of the construction of local search algorithms for autonomous agents looking for the spatial location of a stochastic source. We explore the types of search problems (defined in terms of the properties of the source and the nature of measurements at each sensor) for which coordination among multiple searchers yields an advantage beyond that gained by having the same number of independent searchers. We assert that effective coordination corresponds to synergy and that ineffective coordination corresponds to redundancy as defined using information theory. We classify explicit types of sources in terms of their potential for synergy. We show that sources that emit uncorrelated particles based on a Poisson process provide no opportunity for synergetic coordination, while others, particularly sources that emit correlated signals, do allow for strong synergy between searchers. These general considerations are crucial for designing optimal algorithms for particular search problems in real world settings.
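The synergy/redundancy criterion described above can be made concrete with a toy joint distribution: for a source variable S and two sensor readings X1, X2, coordination is synergetic when I(S; X1, X2) exceeds I(S; X1) + I(S; X2). A minimal numpy sketch (the XOR-structured distribution below is a standard textbook example, not taken from the paper):

```python
import numpy as np

def H(p):
    """Shannon entropy in bits; zero-probability entries are ignored."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Toy joint distribution p[s, x1, x2]: x1, x2 are independent fair bits
# and the source bit is s = x1 XOR x2 -- the classic synergetic case.
p = np.zeros((2, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        p[x1 ^ x2, x1, x2] = 0.25

# Mutual informations via I(A;B) = H(A) + H(B) - H(A,B).
I_s_x1  = H(p.sum(axis=(1, 2))) + H(p.sum(axis=(0, 2))) - H(p.sum(axis=2))
I_s_x2  = H(p.sum(axis=(1, 2))) + H(p.sum(axis=(0, 1))) - H(p.sum(axis=1))
I_s_x12 = H(p.sum(axis=(1, 2))) + H(p.sum(axis=0)) - H(p)

# Positive synergy: the pair is informative although each sensor alone is not.
synergy = I_s_x12 - (I_s_x1 + I_s_x2)
print(I_s_x1, I_s_x2, I_s_x12, synergy)   # 0.0 0.0 1.0 1.0
```

A Poisson source of uncorrelated particles would instead give synergy ≤ 0, matching the paper's classification.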

  3. 30 CFR 77.804 - High-voltage trailing cables; minimum design requirements.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false High-voltage trailing cables; minimum design... OF UNDERGROUND COAL MINES Surface High-Voltage Distribution § 77.804 High-voltage trailing cables; minimum design requirements. (a) High-voltage trailing cables used in resistance grounded systems shall be...

  4. Developing a Coding Scheme to Analyse Creativity in Highly-constrained Design Activities

    DEFF Research Database (Denmark)

    Dekoninck, Elies; Yue, Huang; Howard, Thomas J.

    2010-01-01

    This work is part of a larger project which aims to investigate the nature of creativity and the effectiveness of creativity tools in highly-constrained design tasks. This paper presents the research where a coding scheme was developed and tested with a designer-researcher who conducted two rounds...... of design and analysis on a highly constrained design task. This paper shows how design changes can be coded using a scheme based on creative ‘modes of change’. The coding scheme can show the way a designer moves around the design space, and particularly the strategies that are used by a creative designer...... larger study with more designers working on different types of highly-constrained design task is needed, in order to draw conclusions on the modes of change and their relationship to creativity....

  5. Permanent magnet design for high-speed superconducting bearings

    Science.gov (United States)

    Hull, John R.; Uherka, Kenneth L.; Abdoud, Robert G.

    1996-01-01

    A high temperature superconducting bearing including a permanent magnet rotor levitated by a high temperature superconducting structure. The rotor preferably includes one or more concentric permanent magnet rings coupled to permanent magnet ring structures having substantially triangular and quadrangular cross-sections. Both alternating and single direction polarity magnet structures can be used in the bearing.

  6. High-Efficiency Klystron Design for the CLIC Project

    CERN Document Server

    Mollard, Antoine; Peauger, Franck; Plouin, Juliette; Beunas, Armel; Marchesin, Rodolphe

    2017-01-01

    The CLIC project requests new type of RF sources for the high power conditioning of the accelerating cavities. We are working on the development of a new kind of high-efficiency klystron to fulfill this need. This work is performed under the EuCARD-2 European program and involves theoretical and experimental study of a brand new klystron concept.

  7. Design studies of a high-current radiofrequency quadrupole for ...

    Indian Academy of Sciences (India)

employing the adiabatic bunching process. This process increases the capture efficiency of the RFQ to nearly 100%. Because of their high capture efficiency at low energies, RFQs are well suited as the first unit of high-current RF linear accelerators in many advanced applications, such as production of radioactive ion beams ...

  8. Design and Modeling of High Performance Permanent Magnet Synchronous Machines

    NARCIS (Netherlands)

    Van der Geest, M.

    2015-01-01

    The electrification of transportation, and especially aerospace transportation, increases the demand for high performance electrical machines. Those machines often need to be fault-tolerant, cheap, highly efficient, light and small, and interface well with the inverter. In addition, the development

  9. Hohlraum Designs for High Velocity Implosions on NIF

    Energy Technology Data Exchange (ETDEWEB)

    Meezan, N B; Hicks, D G; Callahan, D A; Olson, R E; Schneider, M S; Thomas, C A; Robey, H F; Celliers, P M; Kline, J K; Dixit, S N; Michel, P A; Jones, O S; Clark, D S; Ralph, J E; Doeppner, T; MacKinnon, A J; Haan, S W; Landen, O L; Glenzer, S H; Suter, L J; Edwards, M J; Macgowan, B J; Lindl, J D; Atherton, L J

    2011-10-19

In this paper, we compare experimental shock and capsule trajectories to design calculations using the radiation-hydrodynamics code HYDRA. The measured trajectories from surrogate ignition targets are consistent with reducing the x-ray flux on the capsule to about 85% of the simulated value. A new method of extracting the radiation temperature as seen by the capsule from x-ray intensity and image data shows that about half of the apparent 15% flux deficit in the data with respect to the simulations can be explained by HYDRA overestimating the x-ray flux on the capsule. The National Ignition Campaign (NIC) point-design target is designed to reach a peak fuel-layer velocity of 370 km/s by ablating 90% of its plastic (CH) ablator. The 192-beam National Ignition Facility laser drives a gold hohlraum to a radiation temperature (T_RAD) of 300 eV with a 20 ns-long, 420 TW, 1.3 MJ laser pulse. The hohlraum x-rays couple to the CH ablator to apply the required pressure to the outside of the capsule. The measured radial positions of the leading shock wave and the unablated shell are consistent with simulations in which the x-ray flux on the capsule is artificially reduced to 85%. Inferring the T_RAD seen by the capsule from time-dependent x-ray intensity data and static x-ray images shows that HYDRA overestimates the x-ray flux incident on the capsule by about 8%.
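A flux deficit of this size maps onto a much smaller error in radiation temperature, because radiated x-ray flux scales as T_RAD^4 (Stefan-Boltzmann). A quick arithmetic sketch (the 85% multiplier and 300 eV figure are from the abstract; the T^4 scaling step is standard radiation physics, not a statement from the paper):

```python
# Flux ~ T_rad^4, so a flux multiplier f corresponds to a temperature
# multiplier f**0.25.
flux_multiplier = 0.85                    # capsule sees ~85% of simulated flux
temp_multiplier = flux_multiplier ** 0.25
print(f"{temp_multiplier:.3f}")           # ~0.960: only ~4% lower effective T_rad

T_rad_design = 300.0                      # eV, hohlraum design temperature
print(f"{T_rad_design * temp_multiplier:.1f} eV")
```

This is why a 15% discrepancy in flux shows up as only a few eV of effective drive temperature.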

  10. Novel design for transparent high-pressure fuel injector nozzles

    Science.gov (United States)

    Falgout, Z.; Linne, M.

    2016-08-01

    The efficiency and emissions of internal combustion (IC) engines are closely tied to the formation of the combustible air-fuel mixture. Direct-injection engines have become more common due to their increased practical flexibility and efficiency, and sprays dominate mixture formation in these engines. Spray formation, or rather the transition from a cylindrical liquid jet to a field of isolated droplets, is not completely understood. However, it is known that nozzle orifice flow and cavitation have an important effect on the formation of fuel injector sprays, even if the exact details of this effect remain unknown. A number of studies in recent years have used injectors with optically transparent nozzles (OTN) to allow observation of the nozzle orifice flow. Our goal in this work is to design various OTN concepts that mimic the flow inside commercial injector nozzles, at realistic fuel pressures, and yet still allow access to the very near nozzle region of the spray so that interior flow structure can be correlated with primary breakup dynamics. This goal has not been achieved until now because interior structures can be very complex, and the most appropriate optical materials are brittle and easily fractured by realistic fuel pressures. An OTN design that achieves realistic injection pressures and grants visual access to the interior flow and spray formation will be explained in detail. The design uses an acrylic nozzle, which is ideal for imaging the interior flow. This nozzle is supported from the outside with sapphire clamps, which reduces tensile stresses in the nozzle and increases the nozzle's injection pressure capacity. An ensemble of nozzles were mechanically tested to prove this design concept.

  11. Recent progress in econophysics: Chaos, leverage, and business cycles as revealed by agent-based modeling and human experiments

    Science.gov (United States)

    Xin, Chen; Huang, Ji-Ping

    2017-12-01

    Agent-based modeling and controlled human experiments serve as two fundamental research methods in the field of econophysics. Agent-based modeling has been in development for over 20 years, but how to design virtual agents with high levels of human-like "intelligence" remains a challenge. On the other hand, experimental econophysics is an emerging field; however, there is a lack of experience and paradigms related to the field. Here, we review some of the most recent research results obtained through the use of these two methods concerning financial problems such as chaos, leverage, and business cycles. We also review the principles behind assessments of agents' intelligence levels, and some relevant designs for human experiments. The main theme of this review is to show that by combining theory, agent-based modeling, and controlled human experiments, one can garner more reliable and credible results on account of a better verification of theory; accordingly, this way, a wider range of economic and financial problems and phenomena can be studied.

  12. Design study of a high power rotary transformer

    Science.gov (United States)

    Weinberger, S. M.

    1982-01-01

A design study was made on a rotary transformer for transferring electrical power across a rotating spacecraft interface. The analysis was performed for a 100 kW, 20 kHz unit having a "pancake" geometry. The rotary transformer had a radial (vertical) gap and consisted of four 25 kW modules. It was assumed that the power conditioning comprised a Schwarz resonant circuit with a 20 kHz switching frequency. The rotary transformer, mechanical and structural design, heat rejection system and drive mechanism, which together provide a complete power transfer device, were examined. The rotary transformer losses, efficiency, weight and size were compared with those of an axial (axially symmetric) gap transformer having the same performance requirements and input characteristics, which was designed as part of a previous program. The "pancake" geometry results in a heavier rotary transformer, primarily because of inefficient use of the core material. It is shown that the radial gap rotary transformer is a feasible approach for the transfer of electrical power across a rotating interface and can be implemented using presently available technology.

  13. Using the protein leverage hypothesis to understand socioeconomic variation in obesity.

    Science.gov (United States)

    Bekelman, Traci A; Santamaría-Ulloa, Carolina; Dufour, Darna L; Marín-Arias, Lilliam; Dengo, Ana Laura

    2017-05-06

    The protein leverage hypothesis (PLH) predicts that protein appetite will stimulate excess energy intake, and consequently obesity, when the proportion of protein in the diet is low. Experimental studies support the PLH, but whether protein leverage can be used to understand socioeconomic (SES) variation in obesity is unknown. The objective of this study was to test two hypotheses from the PLH under non-experimental conditions. Consistent with the PLH, we expect that (1) absolute protein intake will be similar across populations, here defined as SES groups and, (2) the proportion of protein in the diet will be inversely associated with energy intake. This was a cross-sectional study conducted in a random sample of 135 low-, middle-, and high-SES women in Costa Rica. Anthropometry was used to calculate body mass index (BMI). Twenty-four-hour dietary recalls were used to measure dietary intake. The prevalence of obesity varied between low- (38.8%), middle- (43.9%), and high- (17.8%) SES women. Absolute protein intake was similar across low- (58.5 g), middle- (59.4 g), and high- (65.6 g) SES women (p = 0.12). Protein intake as a proportion of total energy intake was inversely associated with total energy intake only among middle- (r = -0.37, p = 0.02) and high- (r = -0.36, p = 0.01) SES women. Consistent with the PLH, absolute protein intake was similar across SES groups. The relationship between the proportion of protein in the diet and total energy intake should be studied further in the context of real world conditions that may influence protein leverage. © 2017 Wiley Periodicals, Inc.
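The inverse-association test reported above amounts to a Pearson correlation between the protein share of total energy and total energy intake. A minimal numpy sketch on synthetic data (the sample size matches the study; all values, means, and spreads are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 135                                   # sample size matching the study
protein_g = rng.normal(60, 12, n)         # grams of protein/day (synthetic)
energy_kcal = rng.normal(2000, 400, n)    # total energy intake (synthetic)

# Protein share of energy, using 4 kcal per gram of protein.
protein_share = 4.0 * protein_g / energy_kcal

# Pearson r between protein share and total energy; the PLH predicts r < 0.
r = np.corrcoef(protein_share, energy_kcal)[0, 1]
print(round(r, 2))
```

Note the built-in negative coupling: because energy appears in the denominator of the share, some inverse correlation arises mechanically, which is one reason the paper calls for further study under real-world conditions.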

  14. High efficiency endocrine operation protocol: From design to implementation.

    Science.gov (United States)

    Mascarella, Marco A; Lahrichi, Nadia; Cloutier, Fabienne; Kleiman, Simcha; Payne, Richard J; Rosenberg, Lawrence

    2016-10-01

    We developed a high efficiency endocrine operative protocol based on a mathematical programming approach, process reengineering, and value-stream mapping to increase the number of operations completed per day without increasing operating room time at a tertiary-care, academic center. Using this protocol, a case-control study of 72 patients undergoing endocrine operation during high efficiency days were age, sex, and procedure-matched to 72 patients undergoing operation during standard days. The demographic profile, operative times, and perioperative complications were noted. The average number of cases per 8-hour workday in the high efficiency and standard operating rooms were 7 and 5, respectively. Mean procedure times in both groups were similar. The turnaround time (mean ± standard deviation) in the high efficiency group was 8.5 (±2.7) minutes as compared with 15.4 (±4.9) minutes in the standard group (P < .001). Transient postoperative hypocalcemia was 6.9% (5/72) and 8.3% (6/72) for the high efficiency and standard groups, respectively (P = .99). In this study, patients undergoing high efficiency endocrine operation had similar procedure times and perioperative complications compared with the standard group. The proposed high efficiency protocol seems to better utilize operative time and decrease the backlog of patients waiting for endocrine operation in a country with a universal national health care program. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. High Throughput Atomic Layer Deposition Processes: High Pressure Operations, New Reactor Designs, and Novel Metal Processing

    Science.gov (United States)

    Mousa, MoatazBellah Mahmoud

Atomic Layer Deposition (ALD) is a vapor phase nano-coating process that deposits very uniform and conformal thin film materials with sub-angstrom level thickness control on various substrates. These unique properties made ALD a platform technology for numerous products and applications. However, most of these applications are limited to the lab scale due to the low process throughput relative to other deposition techniques, which hinders its industrial adoption. In addition to the low throughput, process development for certain applications usually faces other obstacles, such as a required new processing mode (e.g., batch vs continuous) or process conditions (e.g., low temperature), the absence of an appropriate reactor design for a specific substrate, and sometimes the lack of a suitable chemistry. This dissertation studies different aspects of ALD process development for prospective applications in the semiconductor, textiles, and battery industries, as well as novel organic-inorganic hybrid materials. The investigation of a high pressure, low temperature ALD process for metal oxide deposition using multiple process chemistries revealed the vital importance of the gas velocity over the substrate to achieve fast depositions at these challenging processing conditions. Also in this work, two unique high throughput ALD reactor designs are reported. The first is a continuous roll-to-roll ALD reactor for ultra-fast coatings on porous, flexible substrates with very high surface area. The second is an ALD delivery head that allows for in loco ALD coatings that can be executed under ambient conditions (even outdoors) on large surfaces while still maintaining very high deposition rates. As a proof of concept, part of a parked automobile window was coated using the ALD delivery head. 
Another process development shown herein is the improvement achieved in the selective synthesis of organic-inorganic materials using an ALD based process called sequential vapor

  16. Design and Development of LPU-B High School Website

    National Research Council Canada - National Science Library

    Abner B. Tupas

    2015-01-01

    This study was conducted to develop and assess the LPU-B High School website as perceived by the faculty members and selected staff and students in terms of content, efficiency, functionality and usability...

  17. High Burn Rate Hybrid Fuel for Improved Grain Design Project

    Data.gov (United States)

    National Aeronautics and Space Administration — A novel type of fuel providing high burning rate for hybrid rocket applications is proposed. This fuel maintains a hydrodynamically rough surface to...

  18. Modern trends in designing high-speed trains

    National Research Council Canada - National Science Library

    Golubović, Snežana D; Rašuo, Boško P; Lučanin, Vojkan J

    2015-01-01

    Increased advantages of railway transportation systems over other types of transportation systems in the past sixty years have been a result of an intensive development of the new generations of high-speed trains...

  19. Design of cryostat for testing high-Tc superconductors

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Ho Myung; Baik, Joun Hoon; Lee, Hoon; Kim, Young Kwon; Park, Jeong Soo; Song, Seung Jae [Hongik University, Seoul (Korea, Republic of)

    1997-07-01

This project is proposed to develop several design techniques concerning the gas-cooled or the refrigerator-cooled cryostats to test the HTS at temperature ranges between 20 K and 100 K. (1) It is shown by a numerical analysis that the thermal stability of HTS in a gas-cooled cryostat is satisfactory, mainly because of large heat capacity. The feasibility of the gas-cooled cryostat is demonstrated after the cooling load calculation, the selection of the cryocooler, and the detailed design and fabrication. It is also found that the current leads in the gas-cooled cryostat increase the cooling load but can make the cool-down time considerably shorter. (2) The thermal stability and the cooling load of HTS in a refrigerator-cooled cryostat do not differ much from those in a gas-cooled cryostat. On the other hand, it has been shown that thermal switches and soft-contact materials in the refrigerator-superconductor interface are necessary to shorten the cool-down time and to provide flexibility in the configuration of the cryostat. Various shapes and designs are demonstrated for the refrigerator-cooled cryostat. (3) Binary current leads are indispensable in a refrigerator-cooled cryostat. The current lead is a series combination of a normal metal at the warm side and a HTS at the cold side. It is shown that an optimal diameter-length relation exists for the minimum refrigeration work. It is also found that the refrigeration work decreases as the length of HTS increases. For a given length of HTS, there is an optimal cross-sectional area and it increases with the length. 54 refs., 9 tabs., 56 figs. (author)

  20. Hohlraum designs for high velocity implosions on NIF

    Directory of Open Access Journals (Sweden)

    Meezan Nathan B.

    2013-11-01

Full Text Available In this paper, we compare experimental shock and capsule trajectories to design calculations using the radiation-hydrodynamics code HYDRA. The measured trajectories from surrogate ignition targets are consistent with reducing the x-ray flux on the capsule to about 85% of the simulated value. A new method of extracting the radiation temperature from x-ray data shows that about half of the apparent 15% flux deficit in the data with respect to the simulations can be explained by HYDRA overestimating the x-ray flux on the capsule.

  1. Design Requirements for High-Efficiency Electronic Ballasts

    Science.gov (United States)

    Takahashi, Yuuji; Shimizu, Keiichi

The energy loss of an electronic ballast is mainly composed of switching loss and inductor loss. The loss decreases when the component values are set so as to reduce the phase angle of the load circuit. However, the lamp-current limiting action then degrades and the lamp fails to operate stably. We examined ways to improve the control stability using the loop transfer function, in particular its Bode plot. By designing a control circuit with the experimentally measured loop transfer function and the transfer function of each functional circuit block, the tested lamp was able to operate stably.
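The Bode-plot stability check described above can be sketched for a generic single-pole loop: find the gain-crossover frequency where |L(jω)| = 1 and read off the phase margin there. The loop gain and pole frequency below are invented placeholders, not values from the abstract:

```python
import numpy as np

# Loop transfer function L(jw) = K / (1 + j*w/wc): a single-pole sketch.
K, wc = 10.0, 2 * np.pi * 1e3          # DC loop gain and pole freq (placeholders)
w = np.logspace(2, 6, 500)             # rad/s frequency sweep

L = K / (1 + 1j * w / wc)
mag_db = 20 * np.log10(np.abs(L))
phase_deg = np.degrees(np.angle(L))

# Gain crossover: the sweep point where |L| is closest to 0 dB.
i = np.argmin(np.abs(mag_db))
phase_margin = 180.0 + phase_deg[i]    # margin above -180 deg at crossover
print(f"crossover ~{w[i]:.3g} rad/s, phase margin ~{phase_margin:.1f} deg")
```

A single-pole loop never reaches -180°, so it is unconditionally stable; the design problem in the paper arises because the real ballast loop has additional poles that erode this margin.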

  2. Application of Powered High Lift Systems to STOL Aircraft Design.

    Science.gov (United States)

    1979-09-01

Table-of-contents and data-sheet fragments; recoverable content follows. Aircraft analyzed: EWR SUD VJ-101C, Dornier DO-31, VFW-Fokker VAK 191 B, Hawker Siddeley P.1127/Harrier. Data sheet for the VJ-101C: manufacturer: EWR SUD; sponsor: Federal German Defense Ministry; concept: lift/vectored-thrust (jet) principle; milestones: EWR consortium formed 1959, hovering rig first free flight Mar 1962.

  3. Ultra high resolution stepper motors design, development, performance and application

    Science.gov (United States)

    Moll, H.; Roeckl, G.

    1979-01-01

The design and development of stepper motors with steps in the 10 arc sec to 2 arc min range are described. Some of the problem areas, e.g. rotor suspension, tribology aspects and environmental conditions, are covered. A summary of the achieved test results and of the motors' employment in different mechanisms already developed and tested is presented to give some examples of the possible uses of this interesting device. Adaptations to military and commercial requirements are proposed and show the wide range of possible applications.

  4. Design and Prototyping of a High Granularity Scintillator Calorimeter

    Energy Technology Data Exchange (ETDEWEB)

    Zutshi, Vishnu [Northern Illinois Univ., DeKalb, IL (United States). Dept. of Physics

    2016-03-27

    A novel approach for constructing fine-granularity scintillator calorimeters, based on the concept of an Integrated Readout Layer (IRL) was developed. The IRL consists of a printed circuit board inside the detector which supports the directly-coupled scintillator tiles, connects to the surface-mount SiPMs and carries the necessary front-end electronics and signal/bias traces. Prototype IRLs using this concept were designed, prototyped and successfully exposed to test beams. Concepts and implementations of an IRL carried out with funds associated with this contract promise to result in the next generation of scintillator calorimeters.

  5. Analisis Pengaruh Company Size, Return on Assets, Financial Leverage, dan Operating Leverage terhadap Income Smoothing Practices pada Perusahaan Manufaktur yang Terdaftar di Bursa Efek Indonesia

    Directory of Open Access Journals (Sweden)

    Auditya Williyarto Pradana

    2012-05-01

    Full Text Available The primary objectives of this research are to determine how many industrial companies use income smoothing practices and to examine the effect of company size, return on assets, financial leverage, and operating leverage on income smoothing practices, both simultaneously and partially. Secondary data were collected from industrial companies listed on the Indonesia Stock Exchange and from the journals of previous researchers. The results show that 11 of the 31 sampled industrial companies use income smoothing practices. Company size, return on assets, and financial leverage have no significant partial effect on income smoothing practices, while operating leverage does; taken together, however, company size, return on assets, financial leverage, and operating leverage have a significant simultaneous effect on income smoothing practices. This research could be extended to particular industrial groups or with additional independent variables, given the similarities and differences with previous studies.

  6. Learning to Leverage Student Thinking: What Novice Approximations Teach Us about Ambitious Practice

    Science.gov (United States)

    Singer-Gabella, Marcy; Stengel, Barbara; Shahan, Emily; Kim, Min-Joung

    2016-01-01

    Central to ambitious teaching is a constellation of practices we have come to call "leveraging student thinking." In leveraging, teachers position students' understanding and reasoning as a central means to drive learning forward. While leveraging typically is described as a feature of mature practice, in this article we examine…

  7. 7 CFR 4290.1700 - Secretary's transfer of interest in a RBIC's Leverage security.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Secretary's transfer of interest in a RBIC's Leverage... INVESTMENT COMPANY ("RBIC") PROGRAM Financial Assistance for RBICs (Leverage) Miscellaneous § 4290.1700 Secretary's transfer of interest in a RBIC's Leverage security. Upon such conditions and for such...

  8. 13 CFR 108.1910 - Non-waiver of SBA's rights or terms of Leverage security.

    Science.gov (United States)

    2010-01-01

    ... terms of Leverage security. 108.1910 Section 108.1910 Business Credit and Assistance SMALL BUSINESS... or terms of Leverage security. SBA's failure to exercise or delay in exercising any right or remedy...'s failure to require you to perform any term or provision of your Leverage does not affect SBA's...

  9. 13 CFR 108.1230 - Draw-downs by NMVC Company under SBA's Leverage commitment.

    Science.gov (United States)

    2010-01-01

    ... SBA's Leverage commitment. 108.1230 Section 108.1230 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION NEW MARKETS VENTURE CAPITAL ("NMVC") PROGRAM SBA Financial Assistance for NMVC Companies (Leverage) Conditional Commitments by SBA to Reserve Leverage for a NMVC Company § 108.1230 Draw-downs by NMVC Company...

  10. 13 CFR 107.1700 - Transfer by SBA of its interest in Licensee's Leverage security.

    Science.gov (United States)

    2010-01-01

    ... Licensee's Leverage security. 107.1700 Section 107.1700 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION SMALL BUSINESS INVESTMENT COMPANIES SBA Financial Assistance for Licensees (Leverage) Miscellaneous § 107.1700 Transfer by SBA of its interest in Licensee's Leverage security. Upon such conditions...

  11. 13 CFR 107.1910 - Non-waiver of SBA's rights or terms of Leverage security.

    Science.gov (United States)

    2010-01-01

    ... terms of Leverage security. 107.1910 Section 107.1910 Business Credit and Assistance SMALL BUSINESS... of Leverage security. SBA's failure to exercise or delay in exercising any right or remedy under the... failure to require you to perform any term or provision of your Leverage does not affect SBA's right to...

  12. 7 CFR 4290.1500 - Restrictions on distributions to RBIC investors while RBIC has outstanding Leverage.

    Science.gov (United States)

    2010-01-01

    ... RBIC has outstanding Leverage. 4290.1500 Section 4290.1500 Agriculture Regulations of the Department of... AGRICULTURE RURAL BUSINESS INVESTMENT COMPANY ("RBIC") PROGRAM Financial Assistance for RBICs (Leverage) Distributions by RBICs with Outstanding Leverage § 4290.1500 Restrictions on distributions to RBIC investors...

  13. 7 CFR 4290.1910 - Non-waiver of rights or terms of Leverage security.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Non-waiver of rights or terms of Leverage security... INVESTMENT COMPANY ("RBIC") PROGRAM Miscellaneous § 4290.1910 Non-waiver of rights or terms of Leverage... failure to require you to perform any term or provision of your Leverage does not affect the Secretary's...

  14. 17 CFR 31.7 - Maintenance of minimum financial, cover and segregation requirements by leverage transaction...

    Science.gov (United States)

    2010-04-01

    ... financial, cover and segregation requirements by leverage transaction merchants. 31.7 Section 31.7 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION LEVERAGE TRANSACTIONS § 31.7 Maintenance of minimum financial, cover and segregation requirements by leverage transaction merchants. (a) Each...

  15. Computational design and optimization of energy materials

    Science.gov (United States)

    Chan, Maria

    The use of density functional theory (DFT) to understand and improve energy materials for diverse applications - including energy storage, thermal management, catalysis, and photovoltaics - is widespread. The further step of using high-throughput DFT calculations to design materials has led to an acceleration in materials discovery and development. Due to various limitations of DFT, however, including accuracy and computational cost, it is important to leverage effective models and, in some cases, experimental information to aid the design process. In this talk, I will discuss efforts in the design and optimization of energy materials using a combination of effective models, DFT, machine learning, and experimental information.

  16. High-level assessment of LANL ABC Design

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-15

    An annual weapons-grade Pu disposition goal should be stated and related to the amount of Pu that needs to be disposed of. It needs to be determined to what extent it is possible to destroy Pu without building up any new Pu, i.e., how realistic this goal is. The strong positive Doppler coefficient for a Pu core might require the addition of some fertile material to ensure a negative Doppler coefficient. This in turn will affect the net Pu disposition rate. If a fertile material is required throughout the life of the ABC to ensure a negative Doppler coefficient, the difference between the molten salt ABC and other reactors in regard to Pu disposition is no longer a difference of principle but one of degree. A rationale then has to be developed that explains why "x" kg production of fissile material is acceptable but "y" kg is not. It is important to determine how a requirement for electricity production will impact the ABC design choices. It is conceivable that DOE will not insist on electricity generation. In this case, advantage has to be taken in terms of design simplifications and relaxed operating conditions.

  17. Design of high-performance parallelized gene predictors in MATLAB.

    Science.gov (United States)

    Rivard, Sylvain Robert; Mailloux, Jean-Gabriel; Beguenane, Rachid; Bui, Hung Tien

    2012-04-10

    This paper proposes a method of implementing parallel gene prediction algorithms in MATLAB. The proposed designs are based on either Goertzel's algorithm or on FFTs and have been implemented using varying amounts of parallelism on a central processing unit (CPU) and on a graphics processing unit (GPU). Results show that an implementation using a straightforward approach can require over 4.5 h to process 15 million base pairs (bps) whereas a properly designed one could perform the same task in less than five minutes. In the best case, a GPU implementation can yield these results in 57 s. The present work shows how parallelism can be used in MATLAB for gene prediction in very large DNA sequences to produce results that are over 270 times faster than a conventional approach. This is significant as MATLAB is typically overlooked due to its apparent slow processing time even though it offers a convenient environment for bioinformatics. From a practical standpoint, this work proposes two strategies for accelerating genome data processing which rely on different parallelization mechanisms. Using a CPU, the work shows that direct access to the MEX function increases execution speed and that the PARFOR construct should be used in order to take full advantage of the parallelizable Goertzel implementation. When the target is a GPU, the work shows that data needs to be segmented into manageable sizes within the GFOR construct before processing in order to minimize execution time.
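
    As a rough illustration of why Goertzel's algorithm suits gene prediction, coding DNA exhibits the well-known period-3 property, so the spectral power at frequency 1/3 separates coding-like from non-coding-like windows. This single-threaded Python sketch (the paper's MATLAB PARFOR/GFOR parallelization is not reproduced) uses invented example sequences.

```python
import numpy as np

def goertzel_power(x, k, N):
    """Power of bin k of an N-point DFT of x, via Goertzel's recurrence."""
    w = 2.0 * np.pi * k / N
    coeff = 2.0 * np.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

def period3_score(seq, base="G"):
    """Spectral power at frequency 1/3 of the binary indicator of one base,
    which is elevated in protein-coding regions (the period-3 property)."""
    x = np.array([1.0 if c == base else 0.0 for c in seq])
    N = len(x)
    return goertzel_power(x, N // 3, N)   # bin k = N/3 <-> frequency 1/3

coding_like = "GAAGATGCTGACGAAGCAGAC" * 6   # G recurs every 3 bases (invented)
random_like = "GATTACAGATTACACCTGAAT" * 6   # no period-3 bias (invented)
print(period3_score(coding_like), period3_score(random_like))
```

    Since only one frequency bin is needed per window, Goertzel avoids a full FFT; the paper's speedups come from running many such windows in parallel.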

  18. A 100% MOX core design using a highly moderated concept

    Energy Technology Data Exchange (ETDEWEB)

    Girieud, R.; Guigon, B. [CEA/Cadarache, 13 Saint-Paul-lez-Durance (France); Lenain, R.; Barbet, N.; Royer, E.

    1997-12-31

    In the framework of plutonium utilization in future French nuclear plants, feasibility studies were done on large Pressurized Water Reactor (PWR) cores, with 100% MOX (Mixed OXide) fuel assemblies reloads aiming at a large consumption of plutonium. Increasing the moderation ratio was adopted as an approach to make the reactivity control of 100% MOX realistically achievable. This paper presents the results of core design studies, operating transient analyses, fuel management strategies and isotopic balance studies recently performed at CEA/DRN in the framework of innovative systems. Three strategies of fuel management (at equilibrium) were studied: 4 x 12 months (with low leakage loading pattern), 3 x 18 months (reference) and 2 x 24 months (with and without burnable absorbers in fuel). With a moderation ratio of 4 (19 x 19 square lattice), an initial content of plutonium in fuel from 7.5% to 10.3% is needed. In these conditions, the feasibility of 100% MOX PWR is established in standard operation conditions. The presented design allows the unitary power to be maintained. (author)

  19. Design of high-performance parallelized gene predictors in MATLAB

    Directory of Open Access Journals (Sweden)

    Rivard Sylvain

    2012-04-01

    Full Text Available Abstract Background This paper proposes a method of implementing parallel gene prediction algorithms in MATLAB. The proposed designs are based on either Goertzel’s algorithm or on FFTs and have been implemented using varying amounts of parallelism on a central processing unit (CPU) and on a graphics processing unit (GPU). Findings Results show that an implementation using a straightforward approach can require over 4.5 h to process 15 million base pairs (bps) whereas a properly designed one could perform the same task in less than five minutes. In the best case, a GPU implementation can yield these results in 57 s. Conclusions The present work shows how parallelism can be used in MATLAB for gene prediction in very large DNA sequences to produce results that are over 270 times faster than a conventional approach. This is significant as MATLAB is typically overlooked due to its apparent slow processing time even though it offers a convenient environment for bioinformatics. From a practical standpoint, this work proposes two strategies for accelerating genome data processing which rely on different parallelization mechanisms. Using a CPU, the work shows that direct access to the MEX function increases execution speed and that the PARFOR construct should be used in order to take full advantage of the parallelizable Goertzel implementation. When the target is a GPU, the work shows that data needs to be segmented into manageable sizes within the GFOR construct before processing in order to minimize execution time.

  20. Defining and Leveraging Game Qualities for Serious Games

    Science.gov (United States)

    Martin, Michael W.; Shen, Yuzhong

    2011-01-01

    Serious games can and should leverage the unique qualities of video games to effectively deliver educational experiences to learners. However, leveraging these qualities depends on understanding what these unique 'game' qualities are, and how they can facilitate the learning process. This paper presents an examination of the meaning of the term 'game' as it applies to both serious games and digital entertainment games. Through the examination of counterexamples, we derive three game characteristics: games are self-contained, provide a variety of meaningful choices, and are intrinsically compelling. We also discuss the theoretical educational foundations which support the application of these 'game qualities' to educational endeavors. This paper concludes with a presentation of results achieved through the application of these qualities and the applicable educational theories to teach learners about the periodic table of elements via a serious game developed by the authors.

  1. Superstatistical fluctuations in time series of leverage returns

    Science.gov (United States)

    Katz, Y. A.; Tian, L.

    2014-07-01

    We analyze to what extent the emergence of fat-tailed q-Gaussian distributions of daily leverage returns of North American industrial companies that survive default and de-listing between 2006 and 2012 can be described by superstatistics. To this end, we compare mean values of the Tsallis entropic parameter q obtained by two independent methods: (i) direct fitting of q-Gaussians to distributions of leverage returns; and (ii) derived from shape parameters of Gamma distributions fitted to histograms of inverted realized variances of these returns. For a vast majority of companies, we observe the striking consistency of average values of q obtained by both methods. This finding supports the applicability of superstatistical hypothesis, which assumes that q-Gaussians result from the superposition of locally normal distributions with Gamma-distributed precision (inverted variance).
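
    Method (ii) above can be sketched as follows. Under the Beck-Cohen superstatistics assumed by the authors, a Gamma-distributed precision with shape c yields a q-Gaussian marginal with q = 1 + 2/(2c + 1). The snippet fits a Gamma to synthetic inverted variances (illustrative shape and scale, not the leverage-return data) and derives q.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for inverted realized variances: Gamma-distributed
# precision with an illustrative shape and scale (not the authors' data).
c_true, scale_true = 4.0, 0.5
precisions = rng.gamma(c_true, scale_true, size=20000)

# Fit a Gamma (location pinned at 0) and map its shape to the Tsallis q.
c_hat, _, scale_hat = stats.gamma.fit(precisions, floc=0.0)
q = 1.0 + 2.0 / (2.0 * c_hat + 1.0)
print(f"fitted shape c = {c_hat:.2f} -> q = {q:.3f}")
```

    Consistency between this q and the one obtained by fitting q-Gaussians directly to the return histograms is the check the authors perform per company.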

  2. Leveraging best practices to promote health, safety, sustainability, and stewardship.

    Science.gov (United States)

    Weiss, Marjorie D

    2013-08-01

    Strategically leveraging health and safety initiatives with sustainability and stewardship helps organizations improve profitability and positively impact team member and customer attachment to the organization. Collective efficacy enhances the triple bottom line: healthy people, healthy planet, and healthy profits. The HS(3)™ Best Practice Exchanges group demonstrated that collective efficacy can leverage the social cohesion, communication channels, and activities within workplaces to promote a healthy, sustainable work culture. This in turn (1) protects the health and safety of workers, (2) preserves the natural environment, and (3) increases attachment to the organization. Community-based participatory research using the Attach21 survey assessed the progress of these companies in their efforts to integrate health, safety, sustainability, and stewardship. Monthly Best Practice Exchanges promoted collective efficacy by providing support, encouragement, and motivation to share and adopt new ideas. Copyright 2013, SLACK Incorporated.

  3. Leverage points for improving global food security and the environment.

    Science.gov (United States)

    West, Paul C; Gerber, James S; Engstrom, Peder M; Mueller, Nathaniel D; Brauman, Kate A; Carlson, Kimberly M; Cassidy, Emily S; Johnston, Matt; MacDonald, Graham K; Ray, Deepak K; Siebert, Stefan

    2014-07-18

    Achieving sustainable global food security is one of humanity's contemporary challenges. Here we present an analysis identifying key "global leverage points" that offer the best opportunities to improve both global food security and environmental sustainability. We find that a relatively small set of places and actions could provide enough new calories to meet the basic needs for more than 3 billion people, address many environmental impacts with global consequences, and focus food waste reduction on the commodities with the greatest impact on food security. These leverage points in the global food system can help guide how nongovernmental organizations, foundations, governments, citizens' groups, and businesses prioritize actions. Copyright © 2014, American Association for the Advancement of Science.

  4. Modern trends in designing high-speed trains

    Directory of Open Access Journals (Sweden)

    Golubović Snežana D.

    2015-01-01

    Full Text Available The increased advantages of railway transportation systems over other types of transportation systems in the past sixty years have been a result of the intensive development of new generations of high-speed trains. Not only do these trains comply with the need for increased speed of transportation and make journeys shorter, but they also meet demands for increased reliability, safety and direct application of energy efficiency to the transportation system itself. Along with increased train speed, the motion resistance increases as well, and at speeds over 200 km/h air resistance becomes the dominant component. One of the most efficient measures for reducing air resistance, as well as other negative consequences of high-speed motion, is the development of the aerodynamic shape of the train. This paper presents some construction solutions that affect the aerodynamic properties of high-speed trains, first and foremost the nose shape, as well as the similarities and differences of individual subsystems necessary for the functioning of modern high-speed rail systems. We analysed two approaches to solving the problem of the aerodynamic shape of the train and the appropriate infrastructure using the examples of Japan and France. Two models of high-speed trains, Shinkansen (Japan) and TGV, i.e. AGV (France), have been discussed.
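
    The dominance of air resistance above roughly 200 km/h can be illustrated with a Davis-type running-resistance formula, R = A + B*v + C*v^2, where only the quadratic term is aerodynamic. The coefficients below are illustrative round numbers, not measured values for any particular trainset.

```python
def davis_resistance(v_kmh, A=6.0, B=0.1, C=0.00065):
    """Davis-type running resistance R = A + B*v + C*v**2 (kN, v in km/h).
    A and B represent mechanical resistance; the quadratic C term is
    aerodynamic. All coefficients are illustrative, not measured values."""
    return A + B * v_kmh + C * v_kmh**2

aero_share = {}
for v in (100, 200, 300):
    total = davis_resistance(v)
    aero_share[v] = 0.00065 * v**2 / total
    print(f"{v} km/h: total {total:.1f} kN, aerodynamic share {aero_share[v]:.0%}")
```

    With these coefficients the aerodynamic term overtakes the mechanical terms around 200 km/h, which is why nose shaping pays off so strongly at high speed.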

  5. Designing an Energy Drink: High School Students Learn Design and Marketing Skills in This Activity

    Science.gov (United States)

    Martin, Doug

    2008-01-01

    A decade ago, energy drinks were almost nonexistent in the United States, but in the past five years they've become wildly popular. In fact, the $3.4 billion energy-drink market is expected to double this year alone, and the younger generation is the market targeted by manufacturers. This article presents an energy-drink designing activity. This…

  6. How to design a good photoresist solvent package using solubility parameters and high-throughput research

    Science.gov (United States)

    Tate, Michael P.; Cutler, Charlotte; Sakillaris, Mike; Kaufman, Michael; Estelle, Thomas; Mohler, Carol; Tucker, Chris; Thackeray, Jim

    2014-03-01

    Understanding fundamental properties of photoresists and how interactions between photoresist components affect performance targets are crucial to the continued success of photoresists. More specifically, polymer solubility is critical to the overall performance capability of the photoresist formulation. While several theories describe polymer solvent solubility, the most common industrially applied method is Hansen's solubility parameters. Hansen's method, based on regular solution theory, describes a solute's ability to dissolve in a solvent or solvent blend using four physical properties determined experimentally through regression of solubility data in many known solvents. The four physical parameters are dispersion, polarity, hydrogen bonding, and radius of interaction. Using these parameters a relative cohesive energy difference (RED), which describes a polymer's likelihood to dissolve in a given solvent blend, may be calculated. Leveraging a high throughput workflow to prepare and analyze the thousands of samples necessary to calculate the Hansen's solubility parameters from many different methacrylate-based polymers, we compare the physical descriptors to reveal a large range of polarities and hydrogen bonding. Further, we find that Hansen's model correctly predicts the soluble/insoluble state of 3-component solvent blends where the dispersion, polar, hydrogen-bonding, and radius of interaction values were determined through regression of experimental values. These modeling capabilities have allowed for optimization of the photoresist solubility from initial blending through application providing valuable insights into the nature of photoresist.
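
    The RED calculation described above can be sketched directly. The Hansen distance uses the standard weighting Ra^2 = 4(dD1-dD2)^2 + (dP1-dP2)^2 + (dH1-dH2)^2, blend parameters are volume-fraction weighted, and RED = Ra/R0 < 1 predicts solubility. The polymer parameters and R0 below are hypothetical, and the solvent values are literature-style approximations, not the regressed values from this work.

```python
import math

def hansen_distance(p1, p2):
    """Hansen distance Ra between two (dD, dP, dH) sets (MPa^0.5).
    The factor 4 on the dispersion term is part of Hansen's standard metric."""
    dD1, dP1, dH1 = p1
    dD2, dP2, dH2 = p2
    return math.sqrt(4*(dD1-dD2)**2 + (dP1-dP2)**2 + (dH1-dH2)**2)

def blend(solvents, fractions):
    """Volume-fraction-weighted solubility parameters of a solvent blend."""
    return tuple(sum(f * s[i] for s, f in zip(solvents, fractions))
                 for i in range(3))

# Hypothetical resist polymer with radius of interaction R0, plus
# literature-style parameters for two common resist solvents.
polymer, R0 = (17.5, 8.0, 7.0), 8.0
pgmea = (15.6, 5.6, 9.8)
gbl   = (18.0, 16.6, 7.4)

mix = blend([pgmea, gbl], [0.7, 0.3])
red = hansen_distance(polymer, mix) / R0   # RED < 1 -> predicted soluble
print(f"RED = {red:.2f} -> {'soluble' if red < 1 else 'insoluble'}")
```

    In the high-throughput workflow, this evaluation would be repeated over a grid of blend fractions to map the soluble region of a 3-component solvent system.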

  7. Energy saving obligations—cutting the Gordian Knot of leverage?

    OpenAIRE

    Rohde, Clemens; Rosenow, Jan; Eyre, Nick; Giraudet, Louis-Gaëtan

    2015-01-01

    Better leverage of public funding is essential in order to trigger the investment needed for energy efficiency. In times of austerity, governments increasingly look at policy instruments not funded by public expenditure, and Energy Savings Obligations represent one option. Because Energy Savings Obligations are paid for by all energy customers, the degree to which they are able to raise additional private capital for energy efficiency investments is crucial with regar...

  8. Pengaruh Rasio Aktivitas Dan Rasio Leverage Terhadap Tingkat Profitabilitas

    OpenAIRE

    Tri Noormuliyaningsih; Fifi Swandari

    2016-01-01

    The objective of this research is to analyze the influence of activity ratios (inventory turnover, fixed assets turnover, and total assets turnover) and leverage ratios (debt ratio and debt to equity ratio) on profitability (return on assets and return on equity) in food and beverage companies listed on the Indonesia Stock Exchange (IDX). The sample consists of 14 (fourteen) food and beverage companies listed on the Indonesia Stock Exchange (IDX). The observation periods are...

  9. Leveraging tourism legacies: social capital and the 2010 Games

    OpenAIRE

    Elkhashab, Aliaa Shawky

    2010-01-01

    Conflicting views exist concerning the extent to which the planning, development and delivery processes related to mega-events leave positive tangible and intangible legacies for the host destinations. This research suggests that the 2010 Tourism Consortium activities have built the foundation for a legacy of social networks and social capital that can be leveraged well beyond the Games. Growing evidence reveals that investing in social capital yields various streams of human, intellectual, an...

  10. Leveraging data to transform nursing care: insights from nurse leaders.

    Science.gov (United States)

    Jeffs, Lianne; Nincic, Vera; White, Peggy; Hayes, Laureen; Lo, Joyce

    2015-01-01

    A study was undertaken to gain insight into how nurse leaders are influencing the use of performance data to improve nursing care in hospitals. Two themes emerged: getting relevant, reliable, and timely data into the hands of nurses, and the leaders' ability to "connect the dots" in working with different stakeholders. Study findings may inform nurse leaders in their efforts to leverage data to transform nursing care.

  11. High-power FEL design issues - a critical review

    Energy Technology Data Exchange (ETDEWEB)

    Litvinenko, V.N.; Madey, J.M.J.; O'Shea, P.G. [Duke Univ., Durham, NC (United States)

    1995-12-31

    The high-average-power capability of FELs has been much advertised but little realized. In this paper we provide a critical analysis of the technological and economic issues associated with high-average-power FEL operation from the UV to the near IR. The project of an IR FEL for the Siberian Center of Photochemical Research is described. The distinguishing features of this project are the use of the race-track microtron-recuperator and the "electron output of radiation". The building for the machine is now under reconstruction. About half of the hardware has been manufactured. The assembly of the installation has begun.

  12. Energy Design Guidelines for High Performance Schools: Arctic and Subarctic Climates

    Energy Technology Data Exchange (ETDEWEB)

    2004-11-01

    The Energy Design Guidelines for High Performance Schools--Arctic and Subarctic Climates provides school boards, administrators, and design staff with guidance to help them make informed decisions about energy and environmental issues important to school systems and communities. These design guidelines outline high performance principles for the new or retrofit design of your K-12 school in arctic and subarctic climates. By incorporating energy improvements into their construction or renovation plans, schools can significantly reduce energy consumption and costs.

  13. Testing protein leverage in lean humans: a randomised controlled experimental study.

    Directory of Open Access Journals (Sweden)

    Alison K Gosby

    Full Text Available A significant contributor to the rising rates of human obesity is an increase in energy intake. The 'protein leverage hypothesis' proposes that a dominant appetite for protein in conjunction with a decline in the ratio of protein to fat and carbohydrate in the diet drives excess energy intake and could therefore promote the development of obesity. Our aim was to test the 'protein leverage hypothesis' in lean humans by disguising the macronutrient composition of foods offered to subjects under ad libitum feeding conditions. Energy intakes and hunger ratings were measured for 22 lean subjects studied over three 4-day periods of in-house dietary manipulation. Subjects were restricted to fixed menus in random order comprising 28 foods designed to be similar in palatability, availability, variety and sensory quality and providing 10%, 15% or 25% energy as protein. Nutrient and energy intake was calculated as the product of the amount of each food eaten and its composition. Lowering the percent protein of the diet from 15% to 10% resulted in higher (+12±4.5%, p = 0.02) total energy intake, predominantly from savoury-flavoured foods available between meals. This increased energy intake was not sufficient to maintain protein intake constant, indicating that protein leverage is incomplete. Urinary urea on the 10% and 15% protein diets did not differ statistically, nor did they differ from habitual values prior to the study. In contrast, increasing protein from 15% to 25% did not alter energy intake. On the fourth day of the trial, however, there was a greater increase in the hunger score between 1-2 h after the 10% protein breakfast versus the 25% protein breakfast (1.6±0.4 vs 25%: 0.5±0.3, p = 0.005). In our study population a change in the nutritional environment that dilutes dietary protein with carbohydrate and fat promotes overconsumption, enhancing the risk for potential weight gain.

  14. Testing protein leverage in lean humans: a randomised controlled experimental study.

    Science.gov (United States)

    Gosby, Alison K; Conigrave, Arthur D; Lau, Namson S; Iglesias, Miguel A; Hall, Rosemary M; Jebb, Susan A; Brand-Miller, Jennie; Caterson, Ian D; Raubenheimer, David; Simpson, Stephen J

    2011-01-01

    A significant contributor to the rising rates of human obesity is an increase in energy intake. The 'protein leverage hypothesis' proposes that a dominant appetite for protein in conjunction with a decline in the ratio of protein to fat and carbohydrate in the diet drives excess energy intake and could therefore promote the development of obesity. Our aim was to test the 'protein leverage hypothesis' in lean humans by disguising the macronutrient composition of foods offered to subjects under ad libitum feeding conditions. Energy intakes and hunger ratings were measured for 22 lean subjects studied over three 4-day periods of in-house dietary manipulation. Subjects were restricted to fixed menus in random order comprising 28 foods designed to be similar in palatability, availability, variety and sensory quality and providing 10%, 15% or 25% energy as protein. Nutrient and energy intake was calculated as the product of the amount of each food eaten and its composition. Lowering the percent protein of the diet from 15% to 10% resulted in higher (+12±4.5%, p = 0.02) total energy intake, predominantly from savoury-flavoured foods available between meals. This increased energy intake was not sufficient to maintain protein intake constant, indicating that protein leverage is incomplete. Urinary urea on the 10% and 15% protein diets did not differ statistically, nor did they differ from habitual values prior to the study. In contrast, increasing protein from 15% to 25% did not alter energy intake. On the fourth day of the trial, however, there was a greater increase in the hunger score between 1-2 h after the 10% protein breakfast versus the 25% protein breakfast (1.6±0.4 vs 25%: 0.5±0.3, p = 0.005). In our study population a change in the nutritional environment that dilutes dietary protein with carbohydrate and fat promotes overconsumption, enhancing the risk for potential weight gain.
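
    The arithmetic behind "incomplete" leverage is simple: holding absolute protein intake constant when the dietary protein fraction falls from 15% to 10% would require 50% more energy, whereas the study observed only about +12%. A minimal sketch, with a hypothetical baseline intake:

```python
def energy_for_constant_protein(e_ref, frac_ref, frac_new):
    """kcal needed at protein fraction frac_new to match the absolute protein
    eaten at (e_ref kcal, frac_ref) -- i.e. complete protein leverage."""
    return e_ref * frac_ref / frac_new

e15 = 2000.0                                  # hypothetical intake on 15% protein
e10_complete = energy_for_constant_protein(e15, 0.15, 0.10)
print(f"complete leverage: {e10_complete / e15 - 1:+.0%} energy")   # +50%

e10_observed = e15 * 1.12                     # study observed roughly +12%
protein_shortfall = 1.0 - (e10_observed * 0.10) / (e15 * 0.15)
print(f"protein intake still falls ~{protein_shortfall:.0%}")
```

    A +12% energy increase against a required +50% leaves protein intake about a quarter short, which is what the unchanged urinary urea reflects.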

  15. Inclusive STEM High School Design: 10 Critical Components

    Science.gov (United States)

    Peters-Burton, Erin E.; Lynch, Sharon J.; Behrend, Tara S.; Means, Barbara B.

    2014-01-01

    Historically, the mission of science, technology, engineering, and mathematics (STEM) schools emphasized providing gifted and talented students with advanced STEM coursework. However, a newer type of STEM school is emerging in the United States: inclusive STEM high schools (ISHSs). ISHSs have open enrollment and are focused on preparing…

  16. VCSEL design and integration for high-capacity optical interconnects

    Science.gov (United States)

    Larsson, Anders; Gustavsson, Johan S.; Westbergh, Petter; Haglund, Erik; Haglund, Emanuel P.; Simpanen, Ewa; Lengyel, Tamas; Szczerba, Krzysztof; Karlsson, Magnus

    2017-02-01

    Vertical-cavity surface-emitting lasers and multi-mode fibers are the dominant technology for short-reach optical interconnects in datacenters and high-performance computing systems at current serial rates of up to 25-28 Gbit/s. This is likely to continue at 50-56 Gbit/s, and the technology shows potential for 100 Gbit/s.

  17. High Power Wind Generator Designs with Less or No PMs

    DEFF Research Database (Denmark)

    Boldea, Ion; Tutelea, Lucian; Blaabjerg, Frede

    2014-01-01

    synchronous generators, by doubly-fed (wound rotor) induction and cage induction generators and by introducing new topologies with pertinent costs for high power (MW range) wind energy conversion units. The present overview attempts, based on recent grid specifications, an evaluation of commercial and novel...

  18. Lattice design in high-energy particle accelerators

    CERN Document Server

    Holzer, B J

    2006-01-01

    This lecture introduces storage-ring lattice design. Applying the formalism that has been established in transverse beam optics, the basic principles of the development of a magnet lattice are explained and the characteristics of the resulting magnet structure are discussed. The periodic assembly of a storage ring cell with its boundary conditions concerning stability and scaling of the beam optics parameters is addressed as well as special lattice structures: drifts, mini beta insertions, dispersion suppressors, etc. In addition to the exact calculations indispensable for a rigorous treatment of the matter, scaling rules are shown and simple rules of thumb are included that enable the lattice designer to do the first estimates and get the basic numbers ‘on the back of an envelope’.

  19. Zero Index Metamaterial for Designing High-Gain Patch Antenna

    Directory of Open Access Journals (Sweden)

    Yahong Liu

    2013-01-01

    Full Text Available A planar wideband zero-index metamaterial (ZIM) based on a mesh grid structure is studied. It is demonstrated that the real part of the index approaches zero over a wide band from 9.9 GHz to 11.4 GHz. Two conventional patch antennas whose operating frequencies both lie in the range of zero-index frequencies are designed and fabricated. The ZIM is then placed over the conventional patch antennas to form the proposed antennas. The distance between the antenna and the ZIM cover is investigated. Antenna performances are studied with simulations and measurements. The results show that more directional, higher-gain patch antennas can be obtained. The measured results are in good agreement with the simulations. Compared to the conventional patch antenna without the ZIM, the beamwidth of the antenna with the ZIM cover becomes more convergent and the gain is much higher.

  20. Compact Beamformer Design with High Frame Rate for Ultrasound Imaging

    Directory of Open Access Journals (Sweden)

    Jun Luo

    2014-04-01

    Full Text Available In the medical field, two-dimensional ultrasound images are widely used in clinical diagnosis. The beamformer is critical in determining the complexity and performance of an ultrasound imaging system. Unlike traditional implementations built from separate chips, a compact beamformer with 64 effective channels in a single moderate Field Programmable Gate Array is presented in this paper. The compactness is achieved by employing receive synthetic aperture, harmonic imaging, time sharing and linear interpolation. Besides that, a multi-beam method is used to improve the frame rate of the ultrasound imaging system. Online dynamic configuration is employed to extend the system's flexibility to two kinds of transducers with multiple scanning modes. The design is verified on a prototype scanner board. Simulation results have shown that on-chip memories can be saved and the frame rate improved in the case of 64 effective channels, which meets the requirements of real-time application.

  1. High-performance pulsed magnets: Theory, design and construction

    Science.gov (United States)

    Li, Liang

    This thesis is an in-depth study of the design and construction of coils for pulsed magnets energised by a capacitor bank, including mathematical modelling and testing of the coils. The magnetic field generated by solenoid magnets with homogeneous and non-homogeneous current distribution is calculated with the elliptical integral method. Coupled partial differential equations for magnetic and thermal diffusion and the electric circuits are solved numerically to calculate the pulse shape and the heating in a pulsed magnet. The calculations are in good agreement with test results for a large range of different coils; this provides useful insights for optimised coil design. Stresses and strains in the mid-plane of the coil are analytically calculated by solving the system of equations describing the displacement in each layer of the coil. Non-linear stress-strain characteristics and the propagation of the plastic deformation are taken into account by subdividing each layer of the coil in the radial direction and changing the elastic-plastic matrix at each transition point. Conductors, insulating materials and techniques used for pulsed magnets are discussed in detail. More than 80 pulsed magnets with optimised combinations of conductors and reinforcements have been built and tested, with peak fields in the range 45-73 T and bore sizes from 8 mm to 35 mm. The pulse duration is of the order of 10 milliseconds. A dual-stage pulsed magnet for use at a free electron laser has been developed. This has a rise time of 10 microseconds and enables magneto-optical experiments in a parameter range previously inaccessible to condensed matter physicists. The joints of superconducting cables can be modelled by means of distributed circuit elements that characterise current diffusion.
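The elliptic-integral method is needed for off-axis field points; on the solenoid axis the field reduces to a closed form that makes a convenient sanity check for such calculations. A sketch under the thin, uniformly wound solenoid assumption (all names and numbers are illustrative, not from the thesis):

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, T*m/A

def solenoid_axis_field(n_per_m, current, length, radius, z):
    """On-axis flux density (tesla) of a thin, uniformly wound solenoid,
    at axial position z measured from the solenoid centre."""
    a = z + length / 2.0
    b = z - length / 2.0
    return 0.5 * MU0 * n_per_m * current * (
        a / math.hypot(a, radius) - b / math.hypot(b, radius))

# Sanity checks: the centre of a long thin solenoid approaches mu0*n*I,
# and the field at either end is close to half the central value.
b_centre = solenoid_axis_field(n_per_m=1e4, current=100.0, length=1.0,
                               radius=0.01, z=0.0)
b_end = solenoid_axis_field(n_per_m=1e4, current=100.0, length=1.0,
                            radius=0.01, z=0.5)
```

Agreement with these limits is a quick regression test before trusting the full off-axis elliptic-integral result.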

  2. High-performance green semiconductor devices: materials, designs, and fabrication

    Science.gov (United States)

    Jung, Yei Hwan; Zhang, Huilong; Gong, Shaoqin; Ma, Zhenqiang

    2017-06-01

    From large industrial computers to non-portable home appliances and finally to light-weight portable gadgets, the rapid evolution of electronics has facilitated our daily pursuits and increased our life comforts. However, these rapid advances have led to a significant decrease in the lifetime of consumer electronics. The serious environmental threat that comes from electronic waste not only involves materials like plastics and heavy metals, but also includes toxic materials like mercury, cadmium, arsenic, and lead, which can leak into the ground and contaminate the water we drink, the food we eat, and the animals that live around us. Furthermore, most electronics are comprised of non-renewable, non-biodegradable, and potentially toxic materials. Difficulties in recycling the increasing amount of electronic waste could eventually lead to permanent environmental pollution. As such, discarded electronics that can naturally degrade over time would reduce recycling challenges and minimize their threat to the environment. This review provides a snapshot of the current developments and challenges of green electronics at the semiconductor device level. It looks at the developments that have been made in an effort to help reduce the accumulation of electronic waste by utilizing unconventional, biodegradable materials as components. While many semiconductors are classified as non-biodegradable, a few biodegradable semiconducting materials exist and are used as electrical components. This review begins with a discussion of biodegradable materials for electronics, followed by designs and processes for the manufacturing of green electronics using different techniques and designs. In the later sections of the review, various examples of biodegradable electrical components, such as sensors, circuits, and batteries, that together can form a functional electronic device, are discussed and new applications using green electronics are reviewed.

  3. Design and Control of High Temperature PEM Fuel Cell System

    DEFF Research Database (Denmark)

    Andreasen, Søren Juhl

    . Converting a liquid renewable fuel such as methanol in a chemical reactor, a reformer system, can provide the high temperature PEM fuel cells with a hydrogen rich gas that efficiently produces electricity and heat at similar efficiencies as with pure hydrogen. The systems retain their small and simple...... configuration, because the high quality waste heat of the fuel cells can be used to support the steam reforming process and the heat and evaporation of the liquid methanol/water mixture. If efficient heat integration is manageable, similar performance to hydrogen based systems can be expected. In many......Efficient fuel cell systems have started to appear in many different commercial applications and large scale production facilities are already operating to supply fuel cells to support an ever growing market. Fuel cells are typically considered to replace lead-acid batteries in applications where...

  4. Design and High Precision Monitoring of Detector Structures at CERN

    CERN Document Server

    Lackner, Friedrich; Riegler, Werner

    2007-01-01

    Situated on the outskirts of Geneva, CERN is the leading center for particle physics in the world. The Large Hadron Collider (LHC) with its 27 km ring-shaped accelerator, which is currently under construction and will be operational in 2008, will begin a new era in high energy physics by revealing the basic constituents of the universe. One of the experiments is ALICE (A Large Ion Collider Experiment), a detector consisting of multiple layers of sub-detectors around the collision point to detect different types and properties of particles created in the collisions. Those particles are identified via their energy, momentum, track and decay products, and it is therefore important to align the various sub-detectors very precisely to each other and monitor their position. The monitoring systems have to operate for an extended period of time under extreme conditions (e.g. high radiation) and must not absorb too many of the particles created in the collisions. This dissertation describes monitoring systems develo...

  5. Modern trends in designing high-speed trains

    OpenAIRE

    Golubović Snežana D.; Rašuo Boško P.; Lučanin Vojkan J.

    2015-01-01

    Increased advantages of railway transportation systems over other types of transportation systems in the past sixty years have been a result of an intensive development of the new generations of high-speed trains. Not only do these types of trains comply with the need for increased speed of transportation and make the duration of the journey shorter, but they also meet the demands for increased reliability, safety and direct application of energy efficiency to the transportation system itself...

  6. Design and specification of a high speed transport protocol

    OpenAIRE

    McArthur, Robert C.

    1992-01-01

    Approved for public release; distribution is unlimited. Due to the increase in data throughput potential provided by high speed (fiber optic) networks, existing transport protocols are becoming increasingly incapable of providing reliable and timely transfer of data. Whereas in networks of the past it was the transmission medium that caused the greatest communications delay, it is the case today that the transport protocols themselves have become the bottleneck. This thesis provides de...

  7. Design High Efficiency PWM Boost Converter for Wind Power Generation

    OpenAIRE

    SULAIMAN R. Diary; MUHAMMAD A. Aree

    2010-01-01

    The use of renewable power sources to provide electric power has become a major consideration as an alternative to costly classical power sources. Moreover, thanks to research on very low-maintenance designs, small wind turbines are becoming a more popular and economical way to bring the benefits of power production to the home. Efficiency, size, and cost are the primary advantages of switching DC-DC boost power converters; they offer high-efficiency performance and provide power management circuit desig...

  8. Storage and compression design of high speed CCD

    Science.gov (United States)

    Cai, Xichang; Zhai, LinPei

    2009-05-01

    In the current field of CCD measurement, large-area, high-resolution CCDs are used to obtain large measurement images, so the speed and capacity of the CCD demand high performance from the downstream storage and processing system. The paper discusses how to use SCSI hard disks to construct the storage system and DSPs and an FPGA to realize image compression. As for the storage subsystem, because the CCD is divided into multiplexed outputs, a SCSI array is used in RAID0 mode. The storage system is composed of a high-speed buffer, DMA controller, control MCU, SCSI protocol controller and SCSI hard disks. As for the compression subsystem, according to the requirements of the communication and monitor system, the output is a fixed-resolution image and an analog PAL signal. The compression method is the JPEG 2000 standard, in which 9/7 wavelets in lifting format are used. Two DSPs and an FPGA compose the parallel compression system, consisting of an FPGA pre-processing module, DSP compression module, video decoder module, data buffer module and communication module. First, the discrete wavelet transform and quantization are realized in the FPGA. Second, entropy coding and stream adaptation are realized in the DSPs. Last, the analog PAL signal is output by the video decoder. The data buffer is realized in synchronous dual-port RAM and the state of the subsystem is transferred to the controller. Through subjective and objective evaluation, the storage and compression system satisfies the requirements of real-time application.
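The 9/7 lifting scheme mentioned above factors the irreversible JPEG 2000 wavelet into alternating predict/update passes plus a final scaling step. A one-level, one-dimensional sketch follows (replicate boundary extension is used for brevity; JPEG 2000 proper specifies symmetric mirroring, and the hardware version would be fixed-point):

```python
# CDF 9/7 lifting coefficients (JPEG 2000 irreversible transform)
ALPHA, BETA = -1.586134342, -0.05298011854
GAMMA, DELTA = 0.8829110762, 0.4435068522
K = 1.149604398

def _ext(arr, i):
    """Replicate (clamp) boundary extension."""
    return arr[min(max(i, 0), len(arr) - 1)]

def dwt97_forward(x):
    """One level of the CDF 9/7 DWT via lifting; len(x) must be even.
    Returns (lowpass, highpass) halves."""
    s = list(x[0::2])  # even samples
    d = list(x[1::2])  # odd samples
    n = len(d)
    for i in range(n):                      # predict 1
        d[i] += ALPHA * (s[i] + _ext(s, i + 1))
    for i in range(n):                      # update 1
        s[i] += BETA * (_ext(d, i - 1) + d[i])
    for i in range(n):                      # predict 2
        d[i] += GAMMA * (s[i] + _ext(s, i + 1))
    for i in range(n):                      # update 2
        s[i] += DELTA * (_ext(d, i - 1) + d[i])
    return [K * v for v in s], [v / K for v in d]

def dwt97_inverse(s, d):
    """Inverse of dwt97_forward: undo the lifting steps in reverse order."""
    s = [v / K for v in s]
    d = [v * K for v in d]
    n = len(d)
    for i in range(n):                      # undo update 2
        s[i] -= DELTA * (_ext(d, i - 1) + d[i])
    for i in range(n):                      # undo predict 2
        d[i] -= GAMMA * (s[i] + _ext(s, i + 1))
    for i in range(n):                      # undo update 1
        s[i] -= BETA * (_ext(d, i - 1) + d[i])
    for i in range(n):                      # undo predict 1
        d[i] -= ALPHA * (s[i] + _ext(s, i + 1))
    out = []
    for a, b in zip(s, d):                  # re-interleave even/odd samples
        out += [a, b]
    return out
```

Because every lifting step is trivially invertible, the forward/inverse pair reconstructs the input to floating-point precision regardless of the boundary rule, as long as forward and inverse use the same one.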

  9. Materials and Designs for High-Efficacy LED Light Engines

    Energy Technology Data Exchange (ETDEWEB)

    Ibbetson, James [Cree, Inc., Durham, NC (United States); Gresback, Ryan [Cree, Inc., Durham, NC (United States)

    2017-09-28

    Cree, Inc. conducted a narrow-band downconverter (NBD) materials development and implementation program which led to warm-white LED light engines with enhanced efficacy via improved spectral efficiency with respect to the human eye response. New red (600-630 nm) NBD materials could result in as much as a 20% improvement in warm-white efficacy at high color quality relative to conventional phosphor-based light sources. Key program innovations included: high quantum yield; narrow peak width; minimized component-level losses due to “cross-talk” and light scattering among red and yellow-green downconverters; and improved reliability to reach parity with conventional phosphors. NBD-enabled downconversion efficiency gains relative to conventional phosphors yielded an end-of-project LED light engine efficacy of >160 lm/W at room temperature and 35 A/cm2, with a correlated color temperature (CCT) of ~3500K and >90 CRI (Color Rendering Index). NBD-LED light engines exhibited luminous flux and color point maintenance at >1,000 hrs. of highly accelerated reliability testing equivalent to conventional phosphor LEDs. A demonstration luminaire utilizing an NBD-based LED light engine had a steady-state system efficacy of >150 lm/W at ~3500K and >90 CRI, which exceeded the 2014 DOE R&D Plan luminaire milestone for FY17 of >150 lm/W at just 80 CRI.

  10. High-$\\gamma$ Beta Beams within the LAGUNA design study

    CERN Document Server

    Orme, Christopher

    2010-01-01

    Within the LAGUNA design study, seven candidate sites are being assessed for their feasibility to host a next-generation, very large neutrino observatory. Such a detector will be expected to feature within a future European accelerator neutrino programme (Superbeam or Beta Beam), and hence the distance from CERN is of critical importance. In this article, the focus is a $^{18}$Ne and $^{6}$He Beta Beam sourced at CERN and directed towards a 50 kton Liquid Argon detector located at the LAGUNA sites: Slanic (L=1570 km) and Pyhäsalmi (L=2300 km). To improve sensitivity to the neutrino mass ordering, these baselines are then combined with a concurrent run with the same flux directed towards a large Water Čerenkov detector located at Canfranc (L=650 km). This degeneracy breaking combination is shown to provide comparable physics reach to the conservative Magic Baseline Beta Beam proposals. For $^{18}$Ne ions boosted to $\gamma=570$ and $^{6}$He ions boosted to $\gamma=350$, the correct mass ordering can be...

  11. Design and analysis of a high-accuracy flexure hinge.

    Science.gov (United States)

    Liu, Min; Zhang, Xianmin; Fatikow, Sergej

    2016-05-01

    This paper designs and analyzes a new kind of flexure hinge obtained by using a topology optimization approach, namely, a quasi-V-shaped flexure hinge (QVFH). The flexure hinges are formed by three segments: left and right segments with convex shapes and a straight middle segment. According to the results of topology optimization, the curve equations of the profiles of the flexure hinges are developed by numerical fitting. The in-plane dimensionless compliance equations of the flexure hinges are derived based on Castigliano's second theorem. The accuracy of rotation, characterized by the compliance with which the center of rotation deviates from the midpoint, is derived. Equations for evaluating the maximum stresses are also provided. These dimensionless equations are verified by finite element analysis and experimentation. The analytical results agree within 8% with the finite element analysis results and within 9% with the experimental measurement data. Compared with the filleted V-shaped flexure hinge, the QVFH has higher accuracy of rotation and better preserves the position of the center of rotation, but smaller compliance.

  12. Design of high-speed turnouts and crossings

    Science.gov (United States)

    Raif, Lukáš; Puda, Bohuslav; Havlík, Jiří; Smolka, Marek

    2017-09-01

    Recently, new ways to improve railway switches and crossings have been sought as railway transport increases its operating speed. These adjustments are expected to decrease the dynamic load, which usually increases together with velocity and influences the comfort of the vehicle passage, the wear of the structural parts and the cost of maintenance. The adjustments primarily concern turnout elements such as the optimized geometry of the turnout branch line by means of transition curves, which minimize the lateral acceleration during the vehicle passage through the track curve. The rail inclination is solved either by means of inclination in the fastening system, or by machining of the rail head shape, because these adjustments retain the wheel-rail interaction characteristics along the whole length of the turnout. Secondly, there is the crossing with a movable part, which excludes the interruption of the running surface, and the optimization of the railway stiffness throughout the whole turnout length as well. Because different stiffness along the turnout influences the dynamic load, it is necessary to optimize the discontinuities in stiffness along the whole length of the turnout. For this purpose, numerical modeling is carried out to find the areas with the highest stiffness, and subsequently a system of stiffness optimization will be designed.

  13. Smart Materials Technology for High Speed Adaptive Inlet/Nozzle Design Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Enabling a new generation of high speed civil aircraft will require breakthrough developments in propulsion design, including novel techniques to optimize inlet...

  14. Analysis, design and testing of high pressure waterjet nozzles

    Science.gov (United States)

    Mazzoleni, Andre P.

    1996-01-01

    The Hydroblast Research Cell at MSFC is both a research and a processing facility. The cell is used to investigate fundamental phenomena associated with waterjets as well as to clean hardware for various NASA and contractor projects. In the area of research, investigations are made regarding the use of high pressure waterjets to strip paint, grease, adhesive and thermal spray coatings from various substrates. Current industrial methods of cleaning often use ozone depleting chemicals (ODC) such as chlorinated solvents, and high pressure waterjet cleaning has proven to be a viable alternative. Standard methods of waterjet cleaning use hand held or robotically controlled nozzles. The nozzles used can be single-stream or multijet nozzles, and the multijet nozzles may be mounted in a rotating head or arranged in a fan-type shape. We consider in this paper the use of a rotating, multijet, high pressure water nozzle which is robotically controlled. This method enables rapid cleaning of a large area, but problems such as incomplete coverage (e.g. the formation of 'islands' of material not cleaned) and damage to the substrate from the waterjet have been observed. In addition, current stripping operations require the nozzle to be placed at a standoff distance of approximately 2 inches in order to achieve adequate performance. This close proximity of the nozzle to the target to be cleaned poses risks to the nozzle and the target in the event of robot error or the striking of unanticipated extrusions on the target surface as the nozzle sweeps past. Two key motivations of this research are to eliminate the formation of 'coating islands' and to increase the allowable standoff distance of the nozzle.

  15. Dual design resistor for high voltage conditioning and transmission lines

    Science.gov (United States)

    Siggins, Timothy Lynn [Newport News, VA; Murray, Charles W [Hayes, VA; Walker, Richard L [Norfolk, VA

    2007-01-23

    A dual resistor for eliminating the requirement for two different value resistors. The dual resistor includes a conditioning resistor at a high resistance value and a run resistor at a low resistance value. The run resistor can travel inside the conditioning resistor. The run resistor is capable of being advanced by a drive assembly until an electrical path is completed through the run resistor thereby shorting out the conditioning resistor and allowing the lower resistance run resistor to take over as the current carrier.

  16. Integrated Very High Frequency Switch Mode Power Supplies: Design Considerations

    DEFF Research Database (Denmark)

    Hertel, Jens Christian; Nour, Yasser; Knott, Arnold

    2017-01-01

    This paper presents a power supply using an increased switching frequency to minimize the size of energy storing components, thereby addressing the demands for increased power densities in power supplies. 100 MHz and higher switching frequencies have been used in resonant power converters, which...... simulations. The required spiral inductors was modeled, and simulations show Q values of as high as 14 at a switching frequency of 250 MHz. Simulations of the converter show an efficiency of 55 % with a self oscillating gate drive. However the modeled inductor was not adequate for operating with the self...

  17. Design alternatives for beam halo monitors in high intensity accelerators

    CERN Document Server

    Braun, H; Corsini, R; Lefèvre, T; Schulte, Daniel; Tecker, F A; Welsch, C P

    2005-01-01

    In future high intensity, high energy accelerators it must be ensured that particle losses are minimized, as activation of the vacuum chambers or other components makes maintenance and upgrade work time-consuming and costly. It is imperative to have a clear understanding of the mechanisms that can lead to halo formation and to have the possibility to test available theoretical models with an adequate experimental setup. Optical transition radiation (OTR) provides an interesting opportunity for linear real-time measurements of the transverse beam profile with a resolution which has so far been at best in the range of a few μm. However, the dynamic range of standard OTR systems is typically limited and needs to be improved for application to halo measurements. In this contribution, the existing OTR system as installed in the CLIC test facility (CTF3) is analyzed and the contribution of each component to the final image quality discussed. Finally, possible halo measurement techniques based on OTR are pres...

  18. Mechanical Design of the NSTX High-k Scattering Diagnostic

    Energy Technology Data Exchange (ETDEWEB)

    Feder, R.; Mazzucato, E.; Munsat, T.; Park, H.; Smith, D. R.; Ellis, R.; Labik, G.; Priniski, C.

    2005-09-26

    The NSTX High-k Scattering Diagnostic measures small-scale density fluctuations by the heterodyne detection of waves scattered from a millimeter wave probe beam at 280 GHz and λ = 1.07 mm. To enable this measurement, major alterations were made to the NSTX vacuum vessel and Neutral Beam armor. Close collaboration between the PPPL physics and engineering staff resulted in a flexible system with steerable launch and detection optics that can position the scattering volume either near the magnetic axis (ρ ≈ 0.1) or near the edge (ρ ≈ 0.8). 150 feet of carefully aligned corrugated waveguide was installed for injection of the probe beam and collection of the scattered signal into the detection electronics.

  19. High Temperature Electrolysis Pressurized Experiment Design, Operation, and Results

    Energy Technology Data Exchange (ETDEWEB)

    J.E. O'Brien; X. Zhang; G.K. Housley; K. DeWall; L. Moore-McAteer

    2012-09-01

    A new facility has been developed at the Idaho National Laboratory for pressurized testing of solid oxide electrolysis stacks. Pressurized operation is envisioned for large-scale hydrogen production plants, yielding higher overall efficiencies when the hydrogen product is to be delivered at elevated pressure for tank storage or pipelines. Pressurized operation also supports higher mass flow rates of the process gases with smaller components. The test stand can accommodate planar cells with dimensions up to 8.5 cm x 8.5 cm and stacks of up to 25 cells. It is also suitable for testing other cell and stack geometries including tubular cells. The pressure boundary for these tests is a water-cooled spool-piece pressure vessel designed for operation up to 5 MPa. Pressurized operation of a ten-cell internally manifolded solid oxide electrolysis stack has been successfully demonstrated up to 1.5 MPa. The stack is internally manifolded and operates in cross-flow with an inverted-U flow pattern. Feed-throughs for gas inlets/outlets, power, and instrumentation are all located in the bottom flange. The entire spool piece, with the exception of the bottom flange, can be lifted to allow access to the internal furnace and test fixture. Lifting is accomplished with a motorized threaded drive mechanism attached to a rigid structural frame. Stack mechanical compression is accomplished using springs that are located inside of the pressure boundary, but outside of the hot zone. Initial stack heatup and performance characterization occurs at ambient pressure followed by lowering and sealing of the pressure vessel and subsequent pressurization. Pressure equalization between the anode and cathode sides of the cells and the stack surroundings is ensured by combining all of the process gases downstream of the stack. Steady pressure is maintained by means of a backpressure regulator and a digital pressure controller. A full description of the pressurized test apparatus is provided in this

  20. A conceptual high flux reactor design with scope for use in ADS ...

    Indian Academy of Sciences (India)

    Pramana – Journal of Physics, Volume 68, Issue 2. A conceptual high flux reactor design with scope for use in ADS applications. Usha Pal, V. Jagannathan ... A 100 MWt reactor design has been conceived to support a flux level of the order of 10^15 n/cm^2/s in selected flux trap zones. The physics design ...

  1. Residential Interior Design as Complex Composition: A Case Study of a High School Senior's Composing Process

    Science.gov (United States)

    Smagorinsky, Peter; Zoss, Michelle; Reed, Patty M.

    2006-01-01

    This research analyzed the composing processes of one high school student as she designed the interiors of homes for a course in interior design. Data included field notes, an interview with the teacher, artifacts from the class, and the focal student's concurrent and retrospective protocols in relation to her design of home interiors. The…

  2. Manufacturing cereal bars with high nutritional value through experimental design

    Directory of Open Access Journals (Sweden)

    Roberta Covino

    2015-01-01

    Full Text Available Organizations responsible for public health throughout the world have been increasingly concerned with how to feed populations, encouraging a nutritious and balanced diet in order to decrease the occurrence of chronic diseases, which are consistently related to an inadequate diet. At the same time, owing to modern lifestyles, consumers increasingly seek convenient products. Cereal bars have thus been an option when the matter is a low-calorie fast food that is also a source of fiber. This study aimed to develop a cereal bar high in dietary fiber, iron, vitamin A and vitamin E, to help the adult population easily achieve the daily recommendation for such nutrients. Eight formulations plus the central point were produced through experimental planning; sensory analysis with 110 tasters for each block and texture analysis were conducted. Afterwards, proximate composition analysis was conducted for the three formulations presenting the best sensory results. After statistical analysis and comparison with products available in the market, it was possible to conclude that the product developed presented high acceptance and a fiber level more than twice the mean of commercial products.

  3. A highly efficient design strategy for regression with outcome pooling.

    Science.gov (United States)

    Mitchell, Emily M; Lyles, Robert H; Manatunga, Amita K; Perkins, Neil J; Schisterman, Enrique F

    2014-12-10

    The potential for research involving biospecimens can be hindered by the prohibitive cost of performing laboratory assays on individual samples. To mitigate this cost, strategies such as randomly selecting a portion of specimens for analysis or randomly pooling specimens prior to performing laboratory assays may be employed. These techniques, while effective in reducing cost, are often accompanied by a considerable loss of statistical efficiency. We propose a novel pooling strategy based on the k-means clustering algorithm to reduce laboratory costs while maintaining a high level of statistical efficiency when predictor variables are measured on all subjects, but the outcome of interest is assessed in pools. We perform simulations motivated by the BioCycle study to compare this k-means pooling strategy with current pooling and selection techniques under simple and multiple linear regression models. While all of the methods considered produce unbiased estimates and confidence intervals with appropriate coverage, pooling under k-means clustering provides the most precise estimates, closely approximating results from the full data and losing minimal precision as the total number of pools decreases. The benefits of k-means clustering evident in the simulation study are then applied to an analysis of the BioCycle dataset. In conclusion, when the number of lab tests is limited by budget, pooling specimens based on k-means clustering prior to performing lab assays can be an effective way to save money with minimal information loss in a regression setting. Copyright © 2014 John Wiley & Sons, Ltd.
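The pooling idea above can be sketched as: cluster subjects on their fully observed predictors, assay the outcome only at the pool level, and then regress pool means on pool means. A deliberately simplified one-predictor illustration (the study's setting is multivariate with measurement pooling of biospecimens; all names and numbers here are made up):

```python
import random

def kmeans_1d(xs, k, iters=30, seed=1):
    """Plain Lloyd's algorithm on a single predictor (sketch only)."""
    rng = random.Random(seed)
    centers = sorted(rng.sample(xs, k))
    labels = [0] * len(xs)
    for _ in range(iters):
        for i, x in enumerate(xs):
            labels[i] = min(range(k), key=lambda j: (x - centers[j]) ** 2)
        for j in range(k):
            members = [x for i, x in enumerate(xs) if labels[i] == j]
            if members:
                centers[j] = sum(members) / len(members)
    return labels

def pooled_regression(xs, ys, k):
    """Pool outcomes within k-means clusters of the predictor, then fit
    simple linear regression on the pool means."""
    labels = kmeans_1d(xs, k)
    px, py = [], []
    for j in range(k):
        idx = [i for i, lab in enumerate(labels) if lab == j]
        if idx:
            px.append(sum(xs[i] for i in idx) / len(idx))
            py.append(sum(ys[i] for i in idx) / len(idx))
    mx = sum(px) / len(px)
    my = sum(py) / len(py)
    slope = (sum((a - mx) * (b - my) for a, b in zip(px, py))
             / sum((a - mx) ** 2 for a in px))
    return slope, my - slope * mx

# Only k pooled assays are "paid for", yet the x -> y slope is recovered.
rng = random.Random(42)
xs = [i / 100 for i in range(100)]
ys = [2.0 * x + 1.0 + rng.gauss(0, 0.05) for x in xs]
slope, intercept = pooled_regression(xs, ys, k=5)
```

Clustering before pooling keeps predictor values homogeneous within each pool, which is what limits the efficiency loss relative to random pooling.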

  4. Design method for automotive high-beam LED optics

    Science.gov (United States)

    Byzov, Egor V.; Moiseev, Mikhail A.; Doskolovich, Leonid L.; Kazanskiy, Nikolay L.

    2015-09-01

    A new analytical method for the calculation of LED secondary optics for automotive high-beam lamps is presented. Automotive headlamps should illuminate the road and the curb at a distance of 100-150 meters and create a bright, flat, relatively powerful light beam. To generate an intensity distribution of this kind we propose to use a TIR optical element (a collimator working on the total internal reflection principle) with an array of microlenses (an optical corrector) on the upper surface. The TIR part of the optical element reflects the side rays to the front direction and provides a collimated beam which is incident on the microrelief. The microrelief, in its turn, spreads the light flux in the horizontal direction to meet the requirements of UNECE Regulations 112 and 113 and to provide a well-illuminated area across the road in the far field. As an example, we computed and simulated an optical element with a diameter of 33 millimeters and a height of 22 millimeters. Simulation data show that three illuminating modules, each including a Cree XP-G2 LED and lens, allow generating an appropriate intensity distribution for class D of the UNECE Regulations.
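The TIR part of such a collimator works because rays striking the side wall at angles beyond the critical angle are reflected without loss. A quick check of that angle (assuming a PMMA-like lens index of about 1.49, which the abstract does not specify):

```python
import math

def critical_angle_deg(n_lens, n_outside=1.0):
    """Critical angle (degrees) for total internal reflection at a
    lens/air interface: theta_c = asin(n_outside / n_lens)."""
    return math.degrees(math.asin(n_outside / n_lens))

# For an assumed PMMA collimator (n ~ 1.49) in air, rays hitting the
# side wall steeper than ~42 degrees from the normal are totally reflected.
theta_c = critical_angle_deg(1.49)
```

Wall profiles in TIR collimators are shaped so that every edge ray meets the surface beyond this angle, which is why no reflective coating is needed.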

  5. Soil mixing design methods and construction techniques for use in high organic soils.

    Science.gov (United States)

    2015-06-01

    Organic soils present a difficult challenge for roadway designers and constructors due to the high compressibility of the soil structure and the often associated high water table and moisture content. For other soft or loose inorganic soils, stab...

  6. Design of concrete for high flowability : Progress report of fib task group 4.3

    NARCIS (Netherlands)

    Schmidt, W.; Grunewald, S.; Ferrara, L.; Dehn, F.

    2015-01-01

    Flowable concretes can differ significantly from traditional vibrated concrete. Concrete types like self-compacting concrete (SCC), ultra-high performance concrete (UHPC) and high performance fibre reinforced cementitious composites (HPFRCCs) require novel mix design approaches. This has consequences

  7. PENGUJIAN STRUKTUR MODAL OPTIMAL MELALUI POLA HUBUNGAN ANTAR VARIABEL LEVERAGE, PROFITABILITAS, DAN NILAI PERUSAHAAN

    Directory of Open Access Journals (Sweden)

    Harmono Harmono

    2017-03-01

    This research reviewed theoretical and empirical literature comparing the capital structure theories (the Modigliani-Miller model and the traditional model), combined with the ROE framework of Evans (2000), to test the optimum capital structure in relation to the concept of firm value. The research design used a descriptive and correlational model; the population comprised industries listed on the Indonesia Stock Exchange, with purposive sampling to satisfy the capital structure assumptions and their relationship with the leverage concept. The descriptive and correlational analysis yielded two findings. First, the average capital structure across industries was at the 55% level (food and beverage 51%, automotive 51%, textile 68%, property 44%), and the descriptive analysis indicated an optimum capital structure at a debt-to-total-assets level of 40%. Second, multiple regressions indicated that financial performance had a significant effect on firm value but not on financial leverage, while the effect of leverage on firm value was significant. Thus, the theoretical framework of optimum capital structure was testable, showing a significant correlation with firm value.

  8. Holo3DGIS: Leveraging Microsoft HoloLens in 3D Geographic Information

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2018-02-01

    Three-dimensional geographic information systems (3D GIS) attempt to understand and express the real world from the perspective of 3D space. Currently, the display carriers for 3D GIS are mainly 2D screens rather than true 3D media, which constrains how 3D information is expressed and in turn affects users' cognition and understanding of that information. Using mixed reality as a carrier for 3D GIS is promising and may overcome the problems of 2D carriers. The objective of this paper is to propose an architecture and method to leverage the Microsoft HoloLens in 3D geographic information (Holo3DGIS). The architecture is designed around three processes for developing holographic 3D GIS: creating a 3D asset, developing the Holo3DGIS application, and compiling and deploying the Holo3DGIS application. Basic geographic data of Philadelphia were used to test the proposed methods and Holo3DGIS. The experimental results showed that Holo3DGIS can leverage 3D geographic information with the Microsoft HoloLens. Replacing the traditional 2D computer-screen perspective with the HoloLens's 3D holographic perspective changes the traditional visual, somatosensory, and interaction modes, enabling GIS users to experience truly 3D GIS.

  9. Steps Toward Technology Design to Beat Health Inequality - Participatory Design Walks in a Neighbourhood with High Health Risks.

    Science.gov (United States)

    Bertelsen, Pernille; Kanstrup, Anne Marie; Madsen, Jacob

    2017-01-01

    This paper explores participatory design walks (PD walks) as a first step toward a participatory design of health information technology (HIT) aimed at tackling health inequality in a neighbourhood identified as a high-risk health area. Existing research shows that traditional methods for health promotion, such as campaigns and teaching, have little to no effect in high-risk health areas. Rather, initiatives must be locally anchored - integrated into the local culture, and based on social relationships and group activities. This paper explains how we conducted PD walks with residents and community workers in the neighbourhood and how this participatory approach supported a first step toward HIT design that tackles health inequality. This is important, as people in neighbourhoods with high health risks are not the target audience for the health technology innovation currently taking place, despite the fact that this group suffers the most from health inequality and weighs most heavily on public healthcare services and costs. The study identifies social and cultural aspects that influence everyday health management and presents how a citizen-driven approach like PD walks can contribute valuable insights for the design of HIT. The paper provides concrete methodological recommendations on how to conduct PD walks that are valuable to HIT designers and developers who aim to do PD with neighbourhoods.

  10. A Quantitative Research Investigation into High School Design and Art Education in a Local High School in Texas

    Science.gov (United States)

    Lin, Yi-Hsien

    2013-01-01

    This study was designed to explore the differences between high school teachers with art and science backgrounds in terms of curriculum and student performance in art and design education, federal educational policy, and financial support. The study took place in a local independent school district in Texarkana, Texas. The independent school…

  11. High pressure humidification columns: Design equations, algorithm, and computer code

    Energy Technology Data Exchange (ETDEWEB)

    Enick, R.M. [Pittsburgh Univ., PA (United States). Dept. of Chemical and Petroleum Engineering; Klara, S.M. [USDOE Pittsburgh Energy Technology Center, PA (United States); Marano, J.J. [Burns and Roe Services Corp., Pittsburgh, PA (United States)

    1994-07-01

    This report describes the detailed development of a computer model to simulate the humidification of an air stream in contact with a water stream in a countercurrent, packed-tower humidification column. The computer model has been developed as a user model for the Advanced System for Process Engineering (ASPEN) simulator, both to utilize the powerful ASPEN flash algorithms and to provide ease of use when modeling systems containing humidification columns in ASPEN. The model can easily be modified for stand-alone use by incorporating any standard algorithm for performing flash calculations. The model was primarily developed to analyze Humid Air Turbine (HAT) power cycles; however, it can be used for any application that involves a humidifier or saturator. The solution is based on a multiple-stage model of a packed column which incorporates mass and energy balances, mass transfer and heat transfer rate expressions, the Lewis relation, and a thermodynamic equilibrium model for the air-water system. The inlet air properties, inlet water properties, and a measure of the mass transfer and heat transfer which occur in the column are the only required input parameters to the model. Several example problems are provided to illustrate the algorithm's ability to generate the temperature of the water, flow rate of the water, temperature of the air, flow rate of the air, and humidity of the air as a function of height in the column. The algorithm can be used to model any high-pressure air humidification column operating at pressures up to 50 atm. This discussion includes descriptions of various humidification processes, detailed derivations of the relevant expressions, and methods of incorporating these equations into a computer model for a humidification column.
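
    To make the stage-wise idea concrete, here is a deliberately simplified sketch (not the ASPEN user model described in the report): air marches up through a fixed number of stages, and on each stage its humidity ratio relaxes part of the way toward the saturation humidity at the local water temperature. The linear water temperature profile and the single relaxation factor are crude stand-ins for the coupled mass and energy balances and the Lewis relation of the real model; all operating values below are invented for illustration.

```python
import math

def p_sat_kpa(t_c):
    # Saturation pressure of water [kPa] for t_c in Celsius (approximate
    # correlation, reasonable between roughly 0 and 200 C).
    return math.exp(16.3872 - 3885.70 / (t_c + 230.170))

def w_sat(t_c, p_kpa):
    # Saturation humidity ratio [kg water / kg dry air] at total pressure p_kpa.
    p_v = p_sat_kpa(t_c)
    return 0.622 * p_v / (p_kpa - p_v)

def humidify(w_in, t_water_bottom, t_water_top, p_kpa, n_stages=20, eff=0.5):
    # Air enters at the bottom; on each stage it closes a fraction `eff` of
    # the gap to saturation at the local water temperature (a lumped stand-in
    # for the mass transfer rate expression).
    w = w_in
    for i in range(n_stages):
        t_w = t_water_bottom + (t_water_top - t_water_bottom) * i / (n_stages - 1)
        w += eff * (w_sat(t_w, p_kpa) - w)
    return w

# Dry air at about 20 atm (~2000 kPa) humidified against water from 40 to 80 C.
w_out = humidify(w_in=0.001, t_water_bottom=40.0, t_water_top=80.0, p_kpa=2000.0)
print(f"outlet humidity ratio ~ {w_out:.4f} kg/kg")
```

    The outlet humidity is bounded by saturation at the hot (water inlet) end of the column, which is the same limiting behavior the full flash-based model captures rigorously.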

  12. Features of the design for highly hazardous facilities

    Directory of Open Access Journals (Sweden)

    Telichenko Valeriy Ivanovich

    2015-03-01

    Part 1 of Article 48.1 of the Town Planning Code of the Russian Federation contains a list of objects classified as especially dangerous. Article 2, paragraph 59, of the contract system legislation establishes the customer's obligation to conduct an electronic auction, provided that the purchased goods, works, or services are included in a list established by the Government of the Russian Federation, or in an additional list approved by the highest executive body of a subject of the Russian Federation. The order of the Government of the Russian Federation dated 31.10.2013 no. 2019-r approved a list of goods and services for whose procurement the customer is obliged to conduct an auction in electronic form. The list includes "Building works" related to code 45 (excluding code 45.12) according to the All-Russian Classifier of Products by Economic Activity (OKPD OK 034-2007). The exception is the construction, reconstruction, and overhaul of high-risk, technically complex capital construction objects, provided that the contract price exceeds 150 million rubles for state needs, or 50 million rubles for municipal needs. Thus, the customer is obliged to conduct an electronic auction when procuring construction works (code 45 of OKPD OK 034-2007), other than works under code 45.12 (drilling), if the initial (maximum) purchase price does not exceed 150 million rubles for state needs, or 50 million rubles for municipal needs. Here is an example. In St. Petersburg, the customer announced three tenders for the construction of four underground stations with a total value of 940 million rubles. How should the order have been placed - as a public tender with limited participation? The results of the audit conducted by the St. Petersburg OFAS led to the cancellation of the tender, in particular due to the customer's incorrect choice of procurement method. According to

  13. Leveraging Transcultural Enrollments to Enhance Application of the Scientific Method

    Science.gov (United States)

    Loudin, M.

    2013-12-01

    Continued growth of transcultural academic programs presents an opportunity for all of the students involved to improve utilization of the scientific method. Our own business success depends on how effectively we apply the scientific method, and so it is unsurprising that our hiring programs focus on three broad areas of capability among applicants which are strongly related to the scientific method. These are 1) ability to continually learn up-to-date earth science concepts, 2) ability to effectively and succinctly communicate in the English language, both oral and written, and 3) ability to employ behaviors that are advantageous with respect to the various phases of the scientific method. This third area is often the most difficult to develop, because neither so-called Western nor Eastern cultures encourage a suite of behaviors that are ideally suited. Generally, the acceptance of candidates into academic programs, together with subsequent high performance evidenced by grades, is a highly valid measure of continuous learning capability. Certainly, students for whom English is not a native language face additional challenges, but succinct and effective communication is an art which requires practice and development, regardless of native language. The ability to communicate in English is crucial, since it is today's lingua franca for both science and commerce globally. Therefore, we strongly support the use of frequent English written assignments and oral presentations as an integral part of all scientific academic programs. There is no question but that this poses additional work for faculty; nevertheless it is a key ingredient to the optimal development of students. No one culture has a monopoly with respect to behaviors that promote effective leveraging of the scientific method. For instance, the growing complexity of experimental protocols argues for a high degree of interdependent effort, which is more often associated with so-called Eastern than Western

  14. Cerenkov counter design for a high energy, high intensity secondary beam

    Energy Technology Data Exchange (ETDEWEB)

    Borcherding, F.O.

    1986-04-01

    A Cerenkov counter design is given for operation in a 500 GeV/c secondary beam with 10^9 to 10^11 particles per 1-millisecond spill. The design allows the fractions of pions, kaons, and protons to be determined. In particular, the fraction of kaons should be measured with a relative accuracy of a few percent.
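
    Particle identification in such a counter rests on the tiny velocity differences among pions, kaons, and protons at the same momentum. A back-of-the-envelope check (using PDG masses; not taken from the report) shows the scale of 1 - beta, and hence the refractive index n - 1 needed to sit at each species' Cerenkov threshold, at 500 GeV/c:

```python
import math

p_beam = 500.0  # beam momentum, GeV/c
masses = {"pi": 0.13957, "K": 0.49368, "p": 0.93827}  # GeV/c^2

betas = {}
for name, m in masses.items():
    beta = p_beam / math.sqrt(p_beam**2 + m**2)
    betas[name] = beta
    # A particle radiates Cerenkov light only when n > 1/beta, so
    # 1/beta - 1 is the threshold value of n - 1 for this species.
    print(f"{name}: 1 - beta = {1 - beta:.2e}, threshold n - 1 = {1 / beta - 1:.2e}")
```

    Since 1 - beta is approximately m^2 / (2 p^2), the separations shrink quadratically with momentum; at 500 GeV/c the gas index must be controlled at roughly the 10^-7 level to separate kaons from pions.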

  15. Requirements for high level models supporting design space exploration in model-based systems engineering

    NARCIS (Netherlands)

    Haveman, Steven; Bonnema, Gerrit Maarten

    2013-01-01

    Most formal models are used in detailed design and focus on a single domain. Few approaches exist that can effectively tie these lower-level models to a high-level system model during design space exploration. This complicates the validation of high-level system requirements during

  16. High Temperature Gas-Cooled Test Reactor Point Design: Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Sterbentz, James William [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bayless, Paul David [Idaho National Lab. (INL), Idaho Falls, ID (United States); Nelson, Lee Orville [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gougar, Hans David [Idaho National Lab. (INL), Idaho Falls, ID (United States); Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-01-01

    A point design has been developed for a 200-MW high-temperature gas-cooled test reactor. The point design concept uses standard prismatic blocks and 15.5% enriched uranium oxycarbide fuel. Reactor physics and thermal-hydraulics simulations have been performed to characterize the capabilities of the design. In addition to the technical data, overviews are provided on the technology readiness level, licensing approach, and costs of the test reactor point design.

  17. High Temperature Gas-Cooled Test Reactor Point Design: Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Sterbentz, James William [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bayless, Paul David [Idaho National Lab. (INL), Idaho Falls, ID (United States); Nelson, Lee Orville [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gougar, Hans David [Idaho National Lab. (INL), Idaho Falls, ID (United States); Kinsey, J. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-03-01

    A point design has been developed for a 200-MW high-temperature gas-cooled test reactor. The point design concept uses standard prismatic blocks and 15.5% enriched uranium oxycarbide fuel. Reactor physics and thermal-hydraulics simulations have been performed to characterize the capabilities of the design. In addition to the technical data, overviews are provided on the technology readiness level, licensing approach, and costs of the test reactor point design.

  18. The effects of a model-eliciting activity on high school student design performance

    OpenAIRE

    Huffman, Tanner J

    2015-01-01

    Modeling allows students to become more effective designers. High school technology and engineering students engage in engineering design challenges as part of traditional instructional practices. Model-eliciting activities (MEAs) present students with opportunities to elicit mathematical thinking that facilitates modeling. Students (n=266) from four schools completed a model-eliciting activity (MEA) and design challenge procedure. The research design utilized a quasi-experimental method, po...

  19. A flexible and highly pressure-sensitive graphene-polyurethane sponge based on fractured microstructure design.

    Science.gov (United States)

    Yao, Hong-Bin; Ge, Jin; Wang, Chang-Feng; Wang, Xu; Hu, Wei; Zheng, Zhi-Jun; Ni, Yong; Yu, Shu-Hong

    2013-12-10

    A fractured microstructure design: a new type of piezoresistive sensor with ultra-high pressure sensitivity (0.26 kPa^-1) in the low-pressure range is realized through a fractured microstructure design in a graphene-nanosheet-wrapped polyurethane (PU) sponge. This low-cost and easily scalable graphene-wrapped PU sponge pressure sensor has potential application in high-spatial-resolution artificial skin without complex nanostructure design. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
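
    Piezoresistive pressure sensitivity is conventionally defined as S = delta(R/R0)/delta(P), the relative resistance change per unit pressure. Taking the reported 0.26 kPa^-1 at face value and assuming a linear response in the low-pressure regime (an assumption made purely for illustration), the signal for a light touch is easily estimated:

```python
# Relative resistance change implied by the reported sensitivity, assuming
# a linear low-pressure response; the 0.5 kPa "light touch" is hypothetical.
sensitivity = 0.26   # kPa^-1, from the abstract
pressure = 0.5       # kPa
rel_resistance_change = sensitivity * pressure
print(f"dR/R0 ~ {rel_resistance_change:.2f} ({rel_resistance_change:.0%}) at {pressure} kPa")
```

    A double-digit percentage resistance change for sub-kPa pressures is what makes such a sponge attractive for artificial-skin applications without elaborate nanostructuring.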

  20. The design of high performance asynchronous circuits for the Caltech MiniMIPS processor

    OpenAIRE

    Pénzes, Paul I.

    1998-01-01

    The purpose of this report is to describe the design and implementation of an asynchronous Fetch unit used in the high-performance Caltech MiniMIPS microprocessor. The Caltech MiniMIPS microprocessor was designed using the Martin synthesis techniques. The main goals of this project were to investigate new architectural issues in asynchronous processor design and to develop new techniques and tools that can meet high throughput requirements.