Software metrics: a rigorous and practical approach
Fenton, Norman
2014-01-01
A Framework for Managing, Measuring, and Predicting Attributes of Software Development Products and Processes. Reflecting the immense progress in the development and use of software metrics in the past decades, Software Metrics: A Rigorous and Practical Approach, Third Edition provides an up-to-date, accessible, and comprehensive introduction to software metrics. Like its popular predecessors, this third edition discusses important issues, explains essential concepts, and offers new approaches for tackling long-standing problems. New to the Third Edition: This edition contains new material relevant…
Efficiency versus speed in quantum heat engines: Rigorous constraint from Lieb-Robinson bound
Shiraishi, Naoto; Tajima, Hiroyasu
2017-08-01
The long-standing open problem of whether a heat engine with finite power can achieve the Carnot efficiency is investigated. We rigorously prove a general trade-off inequality between the thermodynamic efficiency and the time interval of a cyclic process in quantum heat engines. As a first step, employing the Lieb-Robinson bound we establish an inequality on the change in a local observable caused by an operation far from the support of that observable. This inequality gives a rigorous characterization of the following intuitive picture: most of the energy emitted from the engine to the cold bath remains near the engine when the cyclic process is finished. Using this description, we prove an upper bound on efficiency with the aid of quantum information geometry. Our result generally excludes the possibility of a process with finite speed at the Carnot efficiency in quantum heat engines. In particular, the obtained constraint covers engines evolving with non-Markovian dynamics, which almost all previous studies on this topic fail to address.
DEFF Research Database (Denmark)
Gaspar, Jozsef; Ritschel, Tobias Kasper Skovborg; Jørgensen, John Bagterp
2017-01-01
…-linear model based control to achieve optimal techno-economic performance. Accordingly, this work presents a computationally efficient and novel approach for solving a tray-by-tray equilibrium model and its implementation for open-loop optimal control of a cryogenic distillation column. Here, the optimisation objective is to reduce the cost of compression in a volatile electricity market while meeting the production requirements, i.e. product flow rate and purity. This model is implemented in Matlab and uses the ThermoLib rigorous thermodynamic library. The present work represents a first step towards plant…
Dimitrakopoulos, Panagiotis
2018-03-01
The calculation of polytropic efficiencies is a very important task, especially during the development of new compression units such as compressor impellers, stages and stage groups. Such calculations are also crucial for determining the performance of a whole compressor. As processors and computational capacities have improved substantially in recent years, the need has emerged for a new, rigorous, robust, accurate and at the same time standardized method for computing polytropic efficiencies, especially one based on the thermodynamics of real gases. The proposed method is based on the rigorous definition of the polytropic efficiency. The input consists of pressure and temperature values at the end points of the compression path (suction and discharge) for a given working fluid. The average relative error for the studied cases was 0.536%. This high-accuracy method is therefore proposed for efficiency calculations related to turbocompressors and their compression units, especially when they are operating at high power levels, for example in jet engines and high-power plants.
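The rigorous definition the method builds on can be illustrated in the ideal-gas limit, where the polytropic efficiency follows directly from the suction and discharge states. A minimal Python sketch, assuming a perfect gas with constant heat-capacity ratio (the paper's point is precisely to go beyond this to real-gas thermodynamics):

```python
import math

def polytropic_efficiency_ideal_gas(p_suction, t_suction,
                                    p_discharge, t_discharge, gamma=1.4):
    """Polytropic efficiency of a compression path from suction to discharge.

    Ideal-gas simplification of the rigorous definition; the real-gas method
    described in the abstract integrates along the actual compression path.
    Temperatures in kelvin, pressures in any consistent unit.
    """
    return ((gamma - 1.0) / gamma) * math.log(p_discharge / p_suction) \
           / math.log(t_discharge / t_suction)

# Example: air compressed from 1 bar / 293 K to 4 bar / 470 K.
print(polytropic_efficiency_ideal_gas(1.0, 293.0, 4.0, 470.0))  # ~0.84
```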
Evaluating Rigor in Qualitative Methodology and Research Dissemination
Trainor, Audrey A.; Graue, Elizabeth
2014-01-01
Despite previous and successful attempts to outline general criteria for rigor, researchers in special education have debated the application of rigor criteria, the significance or importance of small n research, the purpose of interpretivist approaches, and the generalizability of qualitative empirical results. Adding to these complications, the…
Recent Development in Rigorous Computational Methods in Dynamical Systems
Arai, Zin; Kokubu, Hiroshi; Pilarczyk, Paweł
2009-01-01
We highlight selected results of recent developments in the area of rigorous computations that use interval arithmetic to analyse dynamical systems. We describe general ideas and selected details of different approaches, and we provide specific sample applications to illustrate the effectiveness of these methods. The emphasis is on a topological approach which, combined with rigorous calculations, provides a broad range of new methods that yield mathematically rel...
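As a toy illustration of the interval arithmetic underlying such rigorous computations (hypothetical code, not the authors' tools), the sketch below encloses one iterate of the logistic map with exact rational endpoints, so the output interval is a guaranteed enclosure of the true image:

```python
from fractions import Fraction

class Interval:
    """Closed interval [lo, hi] with exact rational endpoints, so the
    enclosure below is rigorous without outward rounding."""
    def __init__(self, lo, hi):
        self.lo, self.hi = Fraction(lo), Fraction(hi)

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __rmul__(self, k):          # scalar * interval
        return Interval(min(k * self.lo, k * self.hi),
                        max(k * self.lo, k * self.hi))

    def one_minus(self):            # 1 - [lo, hi] = [1 - hi, 1 - lo]
        return Interval(1 - self.hi, 1 - self.lo)

def logistic(x, r=4):               # r*x*(1-x) in interval arithmetic
    return r * (x * x.one_minus())

x = Interval(Fraction(1, 3), Fraction(1, 3) + Fraction(1, 10**6))
y = logistic(x)
print(float(y.lo), float(y.hi))     # guaranteed enclosure of the true image
```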
The MIXED framework: A novel approach to evaluating mixed-methods rigor.
Eckhardt, Ann L; DeVon, Holli A
2017-10-01
Evaluation of rigor in mixed-methods (MM) research is a persistent challenge due to the combination of inconsistent philosophical paradigms, the use of multiple research methods requiring different skill sets, and the need to combine research at different points in the research process. Researchers have proposed a variety of ways to thoroughly evaluate MM research, but each method fails to provide a framework that is useful for the consumer of research. In contrast, the MIXED framework is meant to bridge the gap between an academic exercise and practical assessment of a published work. The MIXED framework (methods, inference, expertise, evaluation, and design) borrows from previously published frameworks to create a useful tool for the evaluation of a published study. It uses an experimental eight-item scale that allows for comprehensive, integrated assessment of MM rigor in published manuscripts. Mixed methods are becoming increasingly prevalent in nursing and healthcare research, requiring researchers and consumers to address issues unique to MM, such as the evaluation of rigor. © 2017 John Wiley & Sons Ltd.
Rigorous bounds on the free energy of electron-phonon models
Raedt, Hans De; Michielsen, Kristel
1997-01-01
We present a collection of rigorous upper and lower bounds on the free energy of electron-phonon models with linear electron-phonon interaction. These bounds are used to compare different variational approaches. It is shown rigorously that the ground states corresponding to the sharpest bounds do
Application of the rigorous method to x-ray and neutron beam scattering on rough surfaces
International Nuclear Information System (INIS)
Goray, Leonid I.
2010-01-01
The paper presents a comprehensive numerical analysis of x-ray and neutron scattering from finite-conducting rough surfaces, performed within the framework of the boundary integral equation method in a rigorous formulation for high ratios of characteristic dimension to wavelength. The single integral equation obtained involves boundary integrals of the single and double layer potentials. A more general treatment of the energy conservation law applicable to absorption gratings and rough mirrors is considered. In order to compute the scattering intensity of rough surfaces using the forward electromagnetic solver, Monte Carlo simulation is employed to average the deterministic diffraction grating efficiency of individual surfaces over an ensemble of realizations. Some rules appropriate for numerical implementation of the theory at small wavelength-to-period ratios are presented. The difference between the rigorous approach and approximations can be clearly seen in the specular reflectances of Au mirrors with different roughness parameters at wavelengths where grazing incidence occurs close to or beyond the critical angle. This difference may give rise to wrong estimates of rms roughness and correlation length if these are obtained by comparing experimental data with calculations. Moreover, the rigorous approach permits taking any known roughness statistics into account and allows exact computation of diffuse scattering.
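The Monte Carlo step of this procedure is easy to sketch: generate an ensemble of rough-surface realizations with prescribed rms roughness and correlation length, evaluate the deterministic efficiency for each, and average. In the Python sketch below the boundary-integral solver is replaced by a toy placeholder function; the surface generator, profile length, and roughness parameters are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def rough_profile(n, dx, rms, corr_len):
    """One realization of a Gaussian random surface with given rms height
    and Gaussian correlation length (standard spectral/filtering method)."""
    white = rng.standard_normal(n)
    x = np.arange(n) * dx
    kernel = np.exp(-((x - x.mean()) ** 2) / (corr_len ** 2 / 2.0))
    prof = np.convolve(white, kernel, mode="same")
    return rms * prof / prof.std()

def efficiency(profile):
    """Placeholder for the deterministic boundary-integral solver the paper
    uses; any function of one surface realization goes here."""
    return np.exp(-4.0 * np.var(profile))   # toy stand-in, not the real model

samples = [efficiency(rough_profile(1024, 1.0, rms=2.0, corr_len=50.0))
           for _ in range(200)]
print(np.mean(samples), np.std(samples) / np.sqrt(len(samples)))  # MC mean, s.e.
```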
Hidayat, D.; Nurlaelah, E.; Dahlan, J. A.
2017-09-01
Mathematical creative thinking and critical thinking are two abilities that need to be developed in the learning of mathematics. Therefore, efforts need to be made to design learning that is capable of developing both abilities. The purpose of this research is to examine the mathematical creative and critical thinking abilities of students who receive the rigorous mathematical thinking (RMT) approach and of students who receive the expository approach. This research was a quasi-experiment with a control-group pretest-posttest design. The population comprised all grade-11 students in one senior high school in Bandung. The results showed that the achievement of mathematical creative and critical thinking abilities of students who received RMT was better than that of students who received the expository approach. The use of psychological tools and mediation with the criteria of intentionality, reciprocity, and mediation of meaning in RMT helps students develop the conditions for critical and creative processes. This achievement contributes to the development of integrated learning design for students' critical and creative thinking processes.
Putrefactive rigor: apparent rigor mortis due to gas distension.
Gill, James R; Landi, Kristen
2011-09-01
Artifacts due to decomposition may cause confusion for the initial death investigator, leading to an incorrect suspicion of foul play. Putrefaction is a microorganism-driven process that results in foul odor, skin discoloration, purge, and bloating. Various decompositional gases including methane, hydrogen sulfide, carbon dioxide, and hydrogen will cause the body to bloat. We describe 3 instances of putrefactive gas distension (bloating) that produced the appearance of inappropriate rigor, so-called putrefactive rigor. These gases may distend the body to an extent that the extremities extend and lose contact with their underlying support surface. The medicolegal investigator must recognize that this is not true rigor mortis and the body was not necessarily moved after death for this gravity-defying position to occur.
Nakayama, Y; Aoki, Y; Niitsu, H; Saigusa, K
2001-04-15
Forensic dentistry plays an essential role in personal identification procedures. An adequate interincisal space in cadavers with rigor mortis is required to obtain detailed dental findings. We have developed intraoral and two-directional approaches for myotomy of the temporal muscles. The intraoral approach, in which the temporalis was dissected with scissors inserted via an intraoral incision, was adopted for elderly cadavers, females and emaciated or exhausted bodies, and has the merit of requiring no incision on the face. The two-directional approach, in which myotomy was performed with a thread-wire saw from behind and with scissors via the intraoral incision, was designed for muscular young males. Both approaches were effective in obtaining the desired degree of interincisal opening without facial damage.
Rigorous approach to the comparison between experiment and theory in Casimir force measurements
International Nuclear Information System (INIS)
Klimchitskaya, G L; Chen, F; Decca, R S; Fischbach, E; Krause, D E; Lopez, D; Mohideen, U; Mostepanenko, V M
2006-01-01
In most experiments on the Casimir force the comparison between measurement data and theory was done using the concept of the root-mean-square deviation, a procedure that has been criticized in the literature. Here we propose a special statistical analysis which should be performed separately for the experimental data and for the results of the theoretical computations. In so doing, the random, systematic and total experimental errors are found as functions of separation, taking into account the distribution laws for each error at 95% confidence. Independently, all theoretical errors are combined to obtain the total theoretical error at the same confidence. Finally, the confidence interval for the differences between theoretical and experimental values is obtained as a function of separation. This rigorous approach is applied to two recent experiments on the Casimir effect
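The final step of the proposed analysis, forming a confidence band for the theory-experiment differences as a function of separation, can be sketched as follows. This sketch assumes independent Gaussian total errors combined in quadrature at 95% confidence, whereas the paper combines the actual distribution laws of each error; all numbers are hypothetical:

```python
import numpy as np

def difference_band(theory, experiment, sigma_theory, sigma_exp, z95=1.96):
    """95% confidence interval for (theory - experiment) at each separation,
    assuming independent, normally distributed total errors; the paper's
    procedure combines non-Gaussian error laws, so this is only schematic."""
    diff = theory - experiment
    half_width = z95 * np.sqrt(sigma_theory**2 + sigma_exp**2)
    return diff - half_width, diff + half_width

# Hypothetical data: Casimir pressures at a few separations (arbitrary units).
theory = np.array([120.0, 80.0, 55.0])
experiment = np.array([118.5, 81.2, 54.6])
lo, hi = difference_band(theory, experiment, sigma_theory=1.0, sigma_exp=1.5)
print(np.all((lo <= 0) & (0 <= hi)))  # True -> theory consistent with data
```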
Accelerating Biomedical Discoveries through Rigor and Transparency.
Hewitt, Judith A; Brown, Liliana L; Murphy, Stephanie J; Grieder, Franziska; Silberberg, Shai D
2017-07-01
Difficulties in reproducing published research findings have garnered a lot of press in recent years. As a funder of biomedical research, the National Institutes of Health (NIH) has taken measures to address underlying causes of low reproducibility. Extensive deliberations resulted in a policy, released in 2015, to enhance reproducibility through rigor and transparency. We briefly explain what led to the policy, describe its elements, provide examples and resources for the biomedical research community, and discuss the potential impact of the policy on translatability with a focus on research using animal models. Importantly, while increased attention to rigor and transparency may lead to an increase in the number of laboratory animals used in the near term, it will lead to more efficient and productive use of such resources in the long run. The translational value of animal studies will be improved through more rigorous assessment of experimental variables and data, leading to better assessments of the translational potential of animal models, for the benefit of the research community and society. Published by Oxford University Press on behalf of the Institute for Laboratory Animal Research 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Interface-Resolving Simulation of Collision Efficiency of Cloud Droplets
Wang, Lian-Ping; Peng, Cheng; Rosa, Bodgan; Onishi, Ryo
2017-11-01
Small-scale air turbulence can enhance the geometric collision rate of cloud droplets, while large-scale air turbulence can augment their diffusional growth. Air turbulence can also enhance the collision efficiency of cloud droplets. Accurate simulation of collision efficiency, however, requires capturing the multi-scale droplet-turbulence and droplet-droplet interactions, which has only been partially achieved in the recent past using the hybrid direct numerical simulation (HDNS) approach, in which a Stokes disturbance flow is assumed. The HDNS approach has two major drawbacks: (1) the short-range droplet-droplet interaction is not treated rigorously; (2) the finite-Reynolds-number correction to the collision efficiency is not included. In this talk, using two independent numerical methods, we will develop an interface-resolved simulation approach in which the disturbance flows are directly resolved numerically, combined with a rigorous lubrication correction model for near-field droplet-droplet interaction. This multi-scale approach is first used to study the effect of finite flow Reynolds numbers on the droplet collision efficiency in still air. Our simulation results show a significant finite-Re effect on collision efficiency when the droplets are of similar sizes. Preliminary results on integrating this approach in a turbulent flow laden with droplets will also be presented. This work is partially supported by the National Science Foundation.
Krompecher, T
1981-01-01
Objective measurements were carried out to study the evolution of rigor mortis in rats at various temperatures. Our experiments showed that: (1) at 6 degrees C rigor mortis reaches full development between 48 and 60 hours post mortem, and is resolved at 168 hours post mortem; (2) at 24 degrees C rigor mortis reaches full development at 5 hours post mortem, and is resolved at 16 hours post mortem; (3) at 37 degrees C rigor mortis reaches full development at 3 hours post mortem, and is resolved at 6 hours post mortem; (4) the intensity of rigor mortis grows with increasing temperature (difference between values obtained at 24 degrees C and 37 degrees C); and (5) at 6 degrees C a "cold rigidity" was found, in addition to and independent of rigor mortis.
Rigorous simulation: a tool to enhance decision making
Energy Technology Data Exchange (ETDEWEB)
Neiva, Raquel; Larson, Mel; Baks, Arjan [KBC Advanced Technologies plc, Surrey (United Kingdom)
2012-07-01
The world refining industries continue to be challenged by population growth (increased demand), regional market changes and the pressure of regulatory requirements to operate a 'green' refinery. Environmental regulations are reducing the value and use of heavy fuel oils, driving refiners to convert more of the heavier products, or even heavier crude, into lighter products while meeting increasingly stringent transportation fuel specifications. As a result, actions are required to establish a sustainable advantage for future success. Rigorous simulation provides a key advantage, improving the timing and efficient use of capital investment and maximizing profitability. Sustainably maximizing profit through rigorous modeling is achieved through enhanced performance monitoring and improved Linear Programme (LP) model accuracy. This paper contains examples of both. The combination of the two increases overall rates of return. As refiners consider optimizing existing assets and expanding projects, the process agreed upon to achieve these goals is key to successful profit improvement. The benefit of rigorous kinetic simulation with detailed fractionation is that it allows optimizing the utilization of existing assets while focusing capital investment on the new unit(s), thereby optimizing the overall strategic plan and return on investment. Monitoring of individual process units works as a mechanism for validating and optimizing plant performance. Unit monitoring is important for rectifying poor performance and increasing profitability. The key to a good LP relies upon the accuracy of the data used to generate the LP sub-model data. The value of rigorous unit monitoring is that the results are heat and mass balanced consistently, and are unique to a refiner's unit/refinery. With the improved match of the refinery operation, the rigorous simulation models will allow capturing more accurately the non-linearity of those process units and therefore provide correct
Rigorous force field optimization principles based on statistical distance minimization
Energy Technology Data Exchange (ETDEWEB)
Vlcek, Lukas, E-mail: vlcekl1@ornl.gov [Chemical Sciences Division, Geochemistry & Interfacial Sciences Group, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6110 (United States); Joint Institute for Computational Sciences, University of Tennessee, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6173 (United States); Chialvo, Ariel A. [Chemical Sciences Division, Geochemistry & Interfacial Sciences Group, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6110 (United States)
2015-10-14
We use the concept of statistical distance to define a measure of distinguishability between a pair of statistical mechanical systems, i.e., a model and its target, and show that its minimization leads to general convergence of the model’s static measurable properties to those of the target. We exploit this feature to define a rigorous basis for the development of accurate and robust effective molecular force fields that are inherently compatible with coarse-grained experimental data. The new model optimization principles and their efficient implementation are illustrated through selected examples, whose outcome demonstrates the higher robustness and predictive accuracy of the approach compared to other currently used methods, such as force matching and relative entropy minimization. We also discuss relations between the newly developed principles and established thermodynamic concepts, which include the Gibbs-Bogoliubov inequality and the thermodynamic length.
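The central quantity here, the statistical distance between model and target, is the Bhattacharyya angle between the distributions of measurable properties. A minimal sketch of the optimization idea, fitting a single hypothetical force-field parameter by direct search (the paper's systems, observables and optimizer are of course far richer):

```python
import numpy as np

def statistical_distance(p, q):
    """Statistical (Bhattacharyya-angle) distance between two discrete
    distributions: arccos of the Bhattacharyya coefficient."""
    return np.arccos(np.clip(np.sum(np.sqrt(p * q)), 0.0, 1.0))

def model_histogram(beta_eps, bins):
    """Toy 'model': Boltzmann weights of binned energy levels for a
    dimensionless well depth beta_eps (hypothetical observable)."""
    w = np.exp(-beta_eps * bins)
    return w / w.sum()

bins = np.linspace(0.0, 5.0, 50)
target = model_histogram(1.3, bins)   # pretend this came from experiment

# One-parameter force-field optimization by direct search.
grid = np.linspace(0.5, 2.5, 401)
best = min(grid, key=lambda b: statistical_distance(model_histogram(b, bins),
                                                    target))
print(best)                           # recovers ~1.3
```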
The Relationship between Project-Based Learning and Rigor in STEM-Focused High Schools
Edmunds, Julie; Arshavsky, Nina; Glennie, Elizabeth; Charles, Karen; Rice, Olivia
2016-01-01
Project-based learning (PjBL) is an approach often favored in STEM classrooms, yet some studies have shown that teachers struggle to implement it with academic rigor. This paper explores the relationship between PjBL and rigor in the classrooms of ten STEM-oriented high schools. Utilizing three different data sources reflecting three different…
Araújo, Luciano V; Malkowski, Simon; Braghetto, Kelly R; Passos-Bueno, Maria R; Zatz, Mayana; Pu, Calton; Ferreira, João E
2011-12-22
Recent medical and biological technology advances have stimulated the development of new testing systems that provide huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent and validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control-flow specifications based on process algebra (ACP). The main difference between our approach and related works is that we join two important aspects: 1) process scalability, achieved through a relational database implementation, and 2) correctness of processes, ensured using process algebra. Furthermore, the software allows end users to define genetic testing without requiring any knowledge of business process notation or process algebra. This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have proved the feasibility and shown the usability benefits of a rigorous approach that is able to specify, validate, and perform genetic testing using easy end-user interfaces.
Directory of Open Access Journals (Sweden)
Elie Nadal
2018-02-01
In this study we fabricate gold nanocomposites and model their optical properties. The nanocomposites are either homogeneous films or gratings containing gold nanoparticles embedded in a polymer matrix. The samples are fabricated using a recently developed technique making use of laser interferometry. The gratings present original plasmon-enhanced diffraction properties. In this work, we develop a new approach to model the optical properties of our composites. We combine the extended Maxwell–Garnett model of effective media with the Rigorous Coupled Wave Analysis (RCWA) method and compute both the absorption spectra and the diffraction efficiency spectra of the gratings. We show that such a semi-analytical approach allows us to reproduce the original plasmonic features of the composites and can provide details about their inner structure. Such an approach, considering reasonably high particle concentrations, could be a simple and efficient tool to study complex micro-structured systems based on plasmonic components, such as metamaterials.
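The effective-medium half of this approach can be sketched with the classical Maxwell–Garnett mixing rule for spherical inclusions; the paper uses an extended MG model and couples it to RCWA, so the snippet below is only the textbook baseline, with illustrative (not tabulated) permittivity values:

```python
def maxwell_garnett(eps_incl, eps_host, fill):
    """Classical Maxwell-Garnett effective permittivity for spherical
    inclusions at volume fraction `fill`; the paper's *extended* MG model
    adds corrections beyond this textbook form."""
    num = eps_incl + 2 * eps_host + 2 * fill * (eps_incl - eps_host)
    den = eps_incl + 2 * eps_host - fill * (eps_incl - eps_host)
    return eps_host * num / den

# Hypothetical values near the gold plasmon resonance in a polymer host.
eps_gold = -2.5 + 3.0j     # illustrative, not tabulated optical data
eps_polymer = 2.25         # refractive index ~1.5
print(maxwell_garnett(eps_gold, eps_polymer, fill=0.05))
```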
The Rigor Mortis of Education: Rigor Is Required in a Dying Educational System
Mixon, Jason; Stuart, Jerry
2009-01-01
In an effort to answer the "Educational Call to Arms", our national public schools have turned to Advanced Placement (AP) courses as the predominant vehicle used to address the lack of academic rigor in our public high schools. Advanced Placement is believed by many to provide students with the rigor and work ethic necessary to…
Realizing rigor in the mathematics classroom
Hull, Ted H (Henry); Balka, Don S
2014-01-01
Rigor put within reach! Rigor: the Common Core has made it policy, and this first-of-its-kind guide takes math teachers and leaders through the process of making it reality. Using the Proficiency Matrix as a framework, the authors offer proven strategies and practical tools for successful implementation of the CCSS mathematical practices, with rigor as a central objective. You'll learn how to: define rigor in the context of each mathematical practice; identify and overcome potential issues, including differentiating instruction and using data
Mathematical framework for fast and rigorous track fit for the ZEUS detector
Energy Technology Data Exchange (ETDEWEB)
Spiridonov, Alexander
2008-12-15
In this note we present a mathematical framework for a rigorous approach to a common track fit for trackers located in the inner region of the ZEUS detector. The approach makes use of the Kalman filter and offers a rigorous treatment of magnetic field inhomogeneity, multiple scattering and energy loss. We describe the mathematical details of implementing the Kalman filter technique with a reduced amount of computation for a cylindrical drift chamber, barrel and forward silicon strip detectors and a forward straw drift chamber. Options with homogeneous and inhomogeneous fields are discussed. The fitting of tracks in one ZEUS event takes about 20 ms on a standard PC. (orig.)
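The structure of such a Kalman-filter fit can be sketched generically; the snippet below shows only the standard predict/update recursion on a toy one-dimensional track, not the ZEUS-specific transport through an inhomogeneous field, multiple scattering, or energy loss:

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, z):
    """One Kalman-filter predict/update step on track state x with
    covariance P. F: transport (propagation) matrix, Q: process noise
    (e.g. multiple scattering), H: measurement projection, R: hit noise."""
    # Predict: propagate state and covariance to the next detector surface.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend the prediction with the measured hit z.
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 1D straight-line track: state = (position, slope), hits measure position.
F = np.array([[1.0, 1.0], [0.0, 1.0]])            # drift by one unit step
Q = 1e-4 * np.eye(2)                              # scattering noise
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])
x, P = np.zeros(2), 10.0 * np.eye(2)
for z in [0.9, 2.1, 2.9, 4.2]:                    # simulated hits
    x, P = kalman_step(x, P, F, Q, H, R, np.array([z]))
print(x)                                          # ~[4.1, 1.07]: position, slope
```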
Rigorous time slicing approach to Feynman path integrals
Fujiwara, Daisuke
2017-01-01
This book proves that Feynman's original definition of the path integral actually converges to the fundamental solution of the Schrödinger equation, at least for short times, if the potential is differentiable sufficiently many times and its derivatives of order two and higher are bounded. The semi-classical asymptotic formula up to the second term of the fundamental solution is also proved, by a method different from that of Birkhoff. A bound on the remainder term is also proved. The Feynman path integral is a method of quantization using the Lagrangian function, whereas Schrödinger's quantization uses the Hamiltonian function. These two methods are believed to be equivalent. But the equivalence is not fully proved mathematically because, compared with Schrödinger's method, much remains to be done concerning the rigorous mathematical treatment of Feynman's method. Feynman himself defined a path integral as the limit of a sequence of integrals over finite-dimensional spaces which is obtained by...
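Feynman's time-slicing definition, which the book makes rigorous, can be stated compactly; the following is the standard textbook form (the book's precise notation and hypotheses differ in detail):

```latex
% Time-slicing definition of the Feynman path integral: the propagator is the
% limit of finite-dimensional integrals over broken-line paths.
K(x,y;T)\;=\;\lim_{n\to\infty}
\left(\frac{m}{2\pi i\hbar\,\Delta t}\right)^{\frac{n}{2}}
\int_{\mathbb{R}^{n-1}}
\exp\!\Bigl(\tfrac{i}{\hbar}\sum_{j=1}^{n}
\Bigl[\frac{m}{2}\,\frac{(x_j-x_{j-1})^2}{\Delta t}-V(x_j)\,\Delta t\Bigr]\Bigr)
\,dx_1\cdots dx_{n-1},
\qquad \Delta t=\frac{T}{n},\quad x_0=y,\quad x_n=x.
```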
International Nuclear Information System (INIS)
Yao Runming; Yang Yulan; Li Baizhan
2012-01-01
The assessment of building energy efficiency is one of the most effective measures for reducing building energy consumption. This paper proposes a holistic method (HMEEB) for assessing and certifying the energy efficiency of buildings based on the D-S (Dempster-Shafer) theory of evidence and the Evidential Reasoning (ER) approach. HMEEB has three main features: (i) it provides both a method to assess and certify building energy efficiency and an analytical tool to identify improvement opportunities; (ii) it combines a wealth of information on building energy efficiency assessment, including the identification of indicators and a weighting mechanism; and (iii) it provides a method to identify and deal with the inherent uncertainties within the assessment procedure. This paper demonstrates the robustness, flexibility and effectiveness of the proposed method, using two examples to assess the energy efficiency of two residential buildings, both located in the 'Hot Summer and Cold Winter' zone in China. The proposed certification method provides detailed recommendations for policymakers in the context of carbon emission reduction targets and promoting energy efficiency in the built environment. The method is transferable to other countries and regions, using the indicator weighting system to accommodate local climatic, economic and social factors. - Highlights: • Assessing the energy efficiency of buildings holistically. • Applying the D-S (Dempster-Shafer) theory of evidence and the Evidential Reasoning (ER) approach. • Handling large amounts of information and uncertainty in the energy efficiency decision-making process. • Rigorous measures for policymakers to meet carbon emission reduction targets.
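The evidence-combination core of such a D-S based method is Dempster's rule. A minimal sketch with two hypothetical pieces of evidence about a building's energy grade (the paper's indicator set, grades and weighting mechanism are not reproduced here):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two basic mass assignments whose focal
    elements are frozensets of grades; conflict mass is normalized out."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two pieces of evidence about a building's energy grade (hypothetical).
GOOD, AVG = frozenset({"good"}), frozenset({"average"})
EITHER = GOOD | AVG
m_envelope = {GOOD: 0.6, EITHER: 0.4}            # insulation indicator
m_hvac     = {GOOD: 0.3, AVG: 0.5, EITHER: 0.2}  # HVAC indicator
print(dempster_combine(m_envelope, m_hvac))      # GOOD: 0.60, AVG: 0.29, ...
```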
Rigor, vigor, and the study of health disparities.
Adler, Nancy; Bush, Nicole R; Pantell, Matthew S
2012-10-16
Health disparities research spans multiple fields and methods and documents strong links between social disadvantage and poor health. Associations between socioeconomic status (SES) and health are often taken as evidence for the causal impact of SES on health, but alternative explanations, including the impact of health on SES, are plausible. Studies showing the influence of parents' SES on their children's health provide evidence for a causal pathway from SES to health, but have limitations. Health disparities researchers face tradeoffs between "rigor" and "vigor" in designing studies that demonstrate how social disadvantage becomes biologically embedded and results in poorer health. Rigorous designs aim to maximize precision in the measurement of SES and health outcomes through methods that provide the greatest control over temporal ordering and causal direction. To achieve precision, many studies use a single SES predictor and single disease. However, doing so oversimplifies the multifaceted, entwined nature of social disadvantage and may overestimate the impact of that one variable and underestimate the true impact of social disadvantage on health. In addition, SES effects on overall health and functioning are likely to be greater than effects on any one disease. Vigorous designs aim to capture this complexity and maximize ecological validity through more complete assessment of social disadvantage and health status, but may provide less-compelling evidence of causality. Newer approaches to both measurement and analysis may enable enhanced vigor as well as rigor. Incorporating both rigor and vigor into studies will provide a fuller understanding of the causes of health disparities.
Reconciling the Rigor-Relevance Dilemma in Intellectual Capital Research
Andriessen, Daniel
2004-01-01
This paper raises the issue of research methodology for intellectual capital and other types of management research by focusing on the dilemma of rigour versus relevance. The more traditional explanatory approach to research often leads to rigorous results that are not of much help to solve practical problems. This paper describes an alternative…
Krompecher, T; Bergerioux, C; Brandt-Casadevall, C; Gujer, H R
1983-07-01
The evolution of rigor mortis was studied in cases of nitrogen asphyxia, drowning and strangulation, as well as in fatal intoxications due to strychnine, carbon monoxide and curariform drugs, using a modified method of measurement. Our experiments demonstrated that: (1) strychnine intoxication hastens the onset and passing of rigor mortis; (2) CO intoxication delays the resolution of rigor mortis; (3) the intensity of rigor may vary depending upon the cause of death; (4) if the stage of rigidity is to be used to estimate the time of death, it is necessary (a) to perform a succession of objective measurements of rigor mortis intensity, and (b) to check for the possible presence of factors that could modify its development.
Rigorous Science: a How-To Guide
Directory of Open Access Journals (Sweden)
Arturo Casadevall
2016-11-01
Proposals to improve the reproducibility of biomedical research have emphasized scientific rigor. Although the word "rigor" is widely used, there has been little specific discussion as to what it means and how it can be achieved. We suggest that scientific rigor combines elements of mathematics, logic, philosophy, and ethics. We propose a framework for rigor that includes redundant experimental design, sound statistical analysis, recognition of error, avoidance of logical fallacies, and intellectual honesty. These elements lead to five actionable recommendations for research education.
Krompecher, T; Bergerioux, C
1988-01-01
The influence of electrocution on the evolution of rigor mortis was studied in rats. Our experiments showed that: (1) electrocution hastens the onset of rigor mortis. After an electrocution of 90 s, complete rigor develops as early as 1 h post-mortem (p.m.), compared with 5 h p.m. in the controls. (2) Electrocution hastens the passing of rigor mortis. After an electrocution of 90 s, the first significant decrease occurs at 3 h p.m. (8 h p.m. in the controls). (3) These modifications in the evolution of rigor mortis are less pronounced in the limbs not directly touched by the electric current. (4) In cases of post-mortem electrocution, the changes are slightly less pronounced, the resistance is higher and the absorbed energy is lower as compared with the ante-mortem electrocution cases. The results are complemented by two practical observations on human electrocution cases.
International Nuclear Information System (INIS)
Galatolo, Stefano; Monge, Maurizio; Nisoli, Isaia
2016-01-01
We study the problem of the rigorous computation of the stationary measure and of the rate of convergence to equilibrium of an iterated function system described by a stochastic mixture of two or more dynamical systems that are either all uniformly expanding on the interval or all contracting. In the expanding case, the associated transfer operators satisfy a Lasota–Yorke inequality, and we show how to compute a rigorous approximation of the stationary measure in the L¹ norm and an estimate for the rate of convergence. The rigorous computation requires a computer-aided proof of the contraction of the transfer operators for the maps, and we show that this property propagates to the transfer operators of the IFS. In the contracting case we perform a rigorous approximation of the stationary measure in the Wasserstein–Kantorovich distance and of the rate of convergence, using the same functional analytic approach. We show that a finite computation can produce a realistic computation of all contraction rates for the whole parameter space. We conclude with a description of the implementation and numerical experiments. (paper)
Development of rigor mortis is not affected by muscle volume.
Kobayashi, M; Ikegaya, H; Takase, I; Hatanaka, K; Sakurada, K; Iwase, H
2001-04-01
There is a hypothesis suggesting that rigor mortis progresses more rapidly in small muscles than in large muscles. We measured rigor mortis as tension determined isometrically in rat musculus erector spinae that had been cut into muscle bundles of various volumes. The muscle volume did not influence either the progress or the resolution of rigor mortis, which contradicts the hypothesis. Differences in pre-rigor load on the muscles influenced the onset and resolution of rigor mortis in a few pairs of samples, but did not influence the time taken for rigor mortis to reach its full extent after death. Moreover, the progress of rigor mortis in this muscle was biphasic; this may reflect the early rigor of red muscle fibres and the late rigor of white muscle fibres.
Long persistence of rigor mortis at constant low temperature.
Varetto, Lorenzo; Curto, Ombretta
2005-01-06
We studied the persistence of rigor mortis by using physical manipulation. We tested the mobility of the knee in 146 corpses kept under refrigeration at Torino's city mortuary at a constant temperature of +4 degrees C. We found a persistence of complete rigor lasting for 10 days in all the cadavers we kept under observation; in one case, rigor lasted for 16 days. Between the 11th and the 17th days, a progressively increasing number of corpses showed a change from complete into partial rigor (characterized by partial bending of the articulation). After the 17th day, all the remaining corpses showed partial rigor, and in the two cadavers that were kept under observation to the very end ("à outrance") we found that absolute resolution of rigor mortis occurred on the 28th day. Our results prove that it is possible to find a persistence of rigor mortis much longer than expected when environmental conditions resemble average outdoor winter temperatures in temperate zones. This datum must therefore be considered when a corpse is found in such environmental conditions, so that the long persistence of rigor mortis does not mislead the estimation of the time of death.
International Nuclear Information System (INIS)
Anon.
2008-01-01
This session, 'Novel approaches to improve energy efficiency at refineries', includes two keynote addresses. In the first, entitled 'Refinery energy efficiency today', Zoran Milosevic (KBC) reported a survey of the current situation and of the margins for improvement in the future. This work then develops the main messages of the address by Jean-Bernard Sigaud, whose aim was to draw attention to the need for a rigorous methodological approach, lest decisions taken to abate energy consumption or CO2 releases at the local level ultimately produce the opposite result at the global level. The work shows in particular why the traditional perception of a refinery energy balance, which assimilates fuel consumption to energy consumption, can lead to deep misinterpretations. It also shows how hydrogen transfer amounts to delocalizing the energy consumption (which occurs essentially where the hydrogen is consumed) relative to the corresponding CO2 releases, which are produced where the hydrogen is produced. Finally, a comparison between the different routes for producing electric power and synthesized fuels illustrates the crucial importance of using each technology deliberately, while underlining the sometimes counter-intuitive character of the right decision. (O.M.)
Directory of Open Access Journals (Sweden)
Christian Kobbernagel
2016-06-01
In the last couple of decades there has been an unprecedented explosion of news media platforms and formats, as a succession of digital and social media have joined the ranks of legacy media. We live in a 'hybrid media system' (Chadwick, 2013), in which people build their cross-media news repertoires from the ensemble of old and new media available. This article presents an innovative mixed-method approach with considerable explanatory power for exploring patterns of news media consumption. This approach tailors Q-methodology in the direction of a qualitative study of news consumption, in which a card-sorting exercise serves to translate the participants' news media preferences into a form that enables the researcher to undertake a rigorous factor-analytical construction of their news consumption repertoires. This interpretive, factor-analytical procedure, which results in the building of six audience news repertoires in Denmark, also preserves the qualitative thickness of the participants' verbal accounts of the communicative figurations of their day-in-the-life with the news media.
A case of instantaneous rigor?
Pirch, J; Schulz, Y; Klintschar, M
2013-09-01
The question of whether instantaneous rigor mortis (IR), the hypothetical sudden onset of muscle stiffening upon death, actually exists has been controversially debated over the last 150 years. While modern German forensic literature rejects this concept, the contemporary British literature is more willing to embrace it. We present the case of a young woman who suffered from diabetes and who was found dead in an upright standing position with her back and shoulders leaning against a punchbag and a cupboard. Rigor mortis was fully established, and livor mortis was strong and consistent with the position in which the body was found. After autopsy and toxicological analysis, it was concluded that death most probably occurred due to a ketoacidotic coma, with markedly increased values of glucose and lactate in the cerebrospinal fluid as well as acetone in blood and urine. Whereas the position of the body is most unusual, a detailed analysis revealed that it is a stable position even without rigor mortis. Therefore, this case does not further support the controversial concept of IR.
Mathematical Rigor in Introductory Physics
Vandyke, Michael; Bassichis, William
2011-10-01
Calculus-based introductory physics courses intended for future engineers and physicists are often designed and taught in the same fashion as those intended for students of other disciplines. A more mathematically rigorous curriculum should be more appropriate and, ultimately, more beneficial for the student in his or her future coursework. This work investigates the effects of mathematical rigor on student understanding of introductory mechanics. Using a series of diagnostic tools in conjunction with individual student course performance, a statistical analysis will be performed to examine student learning of introductory mechanics and its relation to student understanding of the underlying calculus.
A Framework for Rigorously Identifying Research Gaps in Qualitative Literature Reviews
DEFF Research Database (Denmark)
Müller-Bloch, Christoph; Kranz, Johann
2015-01-01
Identifying research gaps is a fundamental goal of literature reviewing. While it is widely acknowledged that literature reviews should identify research gaps, there are no methodological guidelines for how to identify research gaps in qualitative literature reviews while ensuring rigor and replicability. Our study addresses this gap and proposes a framework that should help scholars in this endeavor without stifling creativity. To develop the framework we thoroughly analyze the state-of-the-art procedure for identifying research gaps in 40 recent literature reviews, using a grounded theory approach. Based on the data, we subsequently derive a framework for identifying research gaps in qualitative literature reviews and demonstrate its application with an example. Our results provide a modus operandi for identifying research gaps, thus enabling scholars to conduct literature reviews more rigorously...
An exergy approach to efficiency evaluation of desalination
Ng, Kim Choon; Shahzad, Muhammad Wakil; Son, Hyuk Soo; Hamed, Osman A.
2017-05-01
This paper presents an evaluation of process efficiency based on the consumption of primary energy for all types of practical desalination methods available hitherto. The conventional performance ratio has, thus far, been defined with respect to the consumption of derived energy, such as electricity or steam, which is subject to the conversion losses of the power plants and boilers that burned the input primary fuels. As derived energies are usually expressed in units of kWh or joules, these units cannot accurately differentiate the grade of energy supplied to the processes. In this paper, the specific energy consumption is revisited for the efficacy of all large-scale desalination plants. In today's combined production of electricity and desalinated water, accomplished with the advanced cogeneration concept, the input exergy of fuels is utilized optimally and efficiently in a temperature-cascaded manner. By discerning the exergy destruction successively in the turbines and desalination processes, the relative contribution of primary energy to the processes can be accurately apportioned to the input primary energy. Although efficiency is not a law of thermodynamics, a common platform for expressing figures of merit specific to the efficacy of desalination processes can be developed meaningfully, one with thermodynamic rigor up to the ideal or thermodynamic limit of seawater desalination, for all scientists and engineers to aspire to.
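The benchmarking idea can be sketched numerically: convert the derived (electrical) energy back to primary exergy through an assumed conversion efficiency and compare against the thermodynamic least work of separation. All figures below are illustrative assumptions, not the paper's values:

```python
def second_law_efficiency(kwh_elec_per_m3, grid_conversion=0.45,
                          least_work_kwh_per_m3=0.78):
    """Fraction of the thermodynamic limit achieved, charged against
    primary energy: electricity is converted back to primary fuel exergy
    through an assumed power-plant efficiency. All numbers illustrative."""
    primary_kwh_per_m3 = kwh_elec_per_m3 / grid_conversion
    return least_work_kwh_per_m3 / primary_kwh_per_m3

# A modern seawater RO plant at ~3.5 kWh_elec/m3 (illustrative figure).
print(f"{second_law_efficiency(3.5):.1%}")   # ~10% of the ideal limit
```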
"Rigor mortis" in a live patient.
Chakravarthy, Murali
2010-03-01
Rigor mortis is conventionally a postmortem change. Its occurrence suggests that death occurred at least a few hours earlier. The authors report a case of "rigor mortis" in a live patient after cardiac surgery. The factors that may have predisposed to such premortem muscle stiffening in this patient are profound low-cardiac-output status, the use of unusually high doses of inotropic and vasopressor agents, and probable sepsis. Such an event may be of importance when determining the time of death in individuals such as the one described in this report. It also suggests the need for careful examination of patients with muscle stiffening prior to the declaration of death. This report is being published to point out the controversies that might arise from muscle stiffening, which should not always be termed rigor mortis and/or postmortem.
Classroom Talk for Rigorous Reading Comprehension Instruction
Wolf, Mikyung Kim; Crosson, Amy C.; Resnick, Lauren B.
2004-01-01
This study examined the quality of classroom talk and its relation to academic rigor in reading-comprehension lessons. Additionally, the study aimed to characterize effective questions to support rigorous reading comprehension lessons. The data for this study included 21 reading-comprehension lessons in several elementary and middle schools from…
Krompecher, T; Fryc, O
1978-01-01
The use of new methods and an appropriate apparatus has allowed us to make successive measurements of rigor mortis and to study its evolution in the rat. By a comparative examination of the front and hind limbs, we determined the following: (1) The muscular mass of the hind limbs is 2.89 times greater than that of the front limbs. (2) In the initial phase rigor mortis is more pronounced in the front limbs. (3) The front and hind limbs reach maximum rigor mortis at the same time, and this state is maintained for 2 hours. (4) Resolution of rigor mortis is faster in the front limbs during the initial phase, but both front and hind limbs reach complete resolution at the same time.
[Experimental study of restiffening of the rigor mortis].
Wang, X; Li, M; Liao, Z G; Yi, X F; Peng, X M
2001-11-01
To observe changes in sarcomere length in the rat during restiffening, we measured the sarcomere length of the quadriceps in 40 rats under different conditions by scanning electron microscopy. The sarcomere length in undisturbed rigor mortis is clearly shorter than that after restiffening. Sarcomere length is negatively correlated with the intensity of rigor mortis. Measuring sarcomere length can determine the intensity of rigor mortis and provide evidence for estimating the time since death.
[Rigor mortis -- a definite sign of death?].
Heller, A R; Müller, M P; Frank, M D; Dressler, J
2005-04-01
In the past years an ongoing controversial debate has existed in Germany regarding the quality of the coroner's inquest and the declaration of death by physicians. We report the case of a 90-year-old female who was found an unknown time after a suicide attempt with benzodiazepine. The examination of the patient showed livores (mortis?) on the left forearm and left lower leg. Moreover, rigor (mortis?) of the left arm was apparent, which prevented arm flexion and extension. The hypothermic patient with insufficient respiration was intubated and mechanically ventilated. Chest compressions were not performed because central pulses were (barely) palpable and a sinus bradycardia of 45/min (AV block 2 degrees and isolated premature ventricular complexes) was present. After placement of an intravenous line (17 G, external jugular vein) the hemodynamic situation was stabilized with intermittent boli of epinephrine and with sodium bicarbonate. With improved circulation, livores and rigor disappeared. In the present case a minimal central circulation was noted, which could be stabilized despite the presence of supposedly certain signs of death (livores and rigor mortis). Considering the finding of abrogated peripheral perfusion (livores), we postulate a centripetal collapse of glycogen and ATP supply in the patient's left arm (rigor), which was restored after resuscitation and reperfusion. Thus, it appears that livores and rigor are not reliable enough to exclude a vita minima, in particular in hypothermic patients with intoxications. Consequently, a careful ABC check should be performed even in the presence of apparently certain signs of death, to avoid underdiagnosing a vita minima. Additional ECG monitoring is required to reduce the rate of false-positive declarations of death. To what extent basic life support by paramedics should be commenced when rigor and livores are present, pending a physician's DNR order, deserves further discussion.
Monitoring muscle optical scattering properties during rigor mortis
Xia, J.; Ranasinghesagara, J.; Ku, C. W.; Yao, G.
2007-09-01
The sarcomere is the fundamental functional unit of force generation in skeletal muscle. Sarcomere structure is also an important factor affecting the eating quality of muscle food, i.e. meat. Sarcomere structure is altered significantly during rigor mortis, the critical stage in transforming muscle into meat. In this paper, we investigated optical scattering changes during the rigor process in Sternomandibularis muscles. The measured optical scattering parameters were analyzed along with simultaneously measured passive tension, pH value, and histology. We found that the temporal changes of optical scattering, passive tension, pH value and fiber microstructure were closely correlated during the rigor process. These results suggest that sarcomere structure changes during rigor mortis can be monitored and characterized by optical scattering, which may find practical applications in predicting meat quality.
International Nuclear Information System (INIS)
Hoffstaetter, G.H.
1994-12-01
Analyzing the stability of particle motion in storage rings contributes to the general field of stability analysis in weakly nonlinear motion. A method which we call pseudo-invariant estimation (PIE) is used to compute lower bounds on the survival time in circular accelerators. The pseudo-invariants needed for this approach are computed via nonlinear perturbative normal form theory, and the required global maxima of the highly complicated multivariate functions could only be rigorously bounded with an extension of interval arithmetic. The bounds on the survival times are large enough to be relevant; the same is true for the lower bounds on dynamic apertures, which can also be computed. The PIE method can lead to novel design criteria with the objective of maximizing the survival time. A major effort in the direction of rigorous predictions only makes sense if accurate models of accelerators are available. Fringe fields often have a significant influence on optical properties, but the computation of fringe-field maps by DA-based integration is slower by several orders of magnitude than DA evaluation of the propagator for main-field maps. A novel computation of fringe-field effects called symplectic scaling (SYSCA) is introduced. It exploits the advantages of Lie transformations, generating functions, and scaling properties and is extremely accurate. The computation of fringe-field maps is typically made nearly two orders of magnitude faster. (orig.)
Regional level approach for increasing energy efficiency
International Nuclear Information System (INIS)
Viholainen, Juha; Luoranen, Mika; Väisänen, Sanni; Niskanen, Antti; Horttanainen, Mika; Soukka, Risto
2016-01-01
Highlights: • Comprehensive snapshot of regional energy system for decision makers. • Connecting regional sustainability targets and energy planning. • Involving local players in energy planning. - Abstract: Actions for increasing the renewable share in the energy supply and improving both production and end-use energy efficiency are often built into the regional level sustainability targets. Because of this, many local stakeholders such as local governments, energy producers and distributors, industry, and public and private sector operators require information on the current state and development aspects of the regional energy efficiency. The drawback is that an overall view on the focal energy system operators, their energy interests, and future energy service needs in the region is often not available for the stakeholders. To support the local energy planning and management of the regional energy services, an approach for increasing the regional energy efficiency is being introduced. The presented approach can be seen as a solid framework for gathering the required data for energy efficiency analysis and also evaluating the energy system development, planned improvement actions, and the required energy services at the region. This study defines the theoretical structure of the energy efficiency approach and the required steps for revealing such energy system improvement actions that support the regional energy plan. To demonstrate the use of the approach, a case study of a Finnish small-town of Lohja is presented. In the case example, possible actions linked to the regional energy targets were evaluated with energy efficiency analysis. The results of the case example are system specific, but the conducted study can be seen as a justified example of generating easily attainable and transparent information on the impacts of different improvement actions on the regional energy system.
Measuring economy-wide energy efficiency performance: A parametric frontier approach
International Nuclear Information System (INIS)
Zhou, P.; Ang, B.W.; Zhou, D.Q.
2012-01-01
This paper proposes a parametric frontier approach to estimating economy-wide energy efficiency performance from a production efficiency point of view. It uses the Shephard energy distance function to define an energy efficiency index and adopts the stochastic frontier analysis technique to estimate the index. A case study of measuring the economy-wide energy efficiency performance of a sample of OECD countries using the proposed approach is presented. It is found that the proposed parametric frontier approach has higher discriminating power in energy efficiency performance measurement compared to its nonparametric frontier counterparts.
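As a simplified stand-in for the proposed estimation (synthetic data, and corrected OLS instead of the paper's stochastic frontier maximum likelihood, which additionally separates noise from inefficiency), the following sketch shows how a frontier yields an energy efficiency index as the ratio of minimum feasible to observed energy use:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic economy-wide data: log energy use explained by log GDP,
# plus one-sided inefficiency (illustrative data, not OECD figures).
n = 30
log_gdp = rng.uniform(5.0, 8.0, n)
ineff = np.abs(rng.normal(0.0, 0.15, n))            # u_i >= 0
log_energy = 0.4 + 0.9 * log_gdp + ineff

# Corrected OLS: fit the average line, then shift it down through the best
# performer so it becomes a frontier of minimum energy requirements.
A = np.column_stack([np.ones(n), log_gdp])
coef, *_ = np.linalg.lstsq(A, log_energy, rcond=None)
resid = log_energy - A @ coef
frontier = A @ coef + resid.min()                   # passes through best observation

efficiency = np.exp(frontier - log_energy)          # E_min / E_observed, in (0, 1]
print(efficiency.round(2))                          # 1.0 marks the frontier country
```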
National Research Council Canada - National Science Library
Larson, C. W; Hargus, William A; Brown, Daniel L
2007-01-01
...) of the propellant jet on the conversion of anode electrical energy to jet kinetic energy. This enabled a mathematically rigorous distinction to be made between thrust efficiency and energy efficiency...
Efficient approach for reliability-based optimization based on weighted importance sampling approach
International Nuclear Information System (INIS)
Yuan, Xiukai; Lu, Zhenzhou
2014-01-01
An efficient methodology is presented to perform reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, referred to as the 'failure probability function (FPF)'. It expresses the FPF as a weighted sum of sample values obtained in the simulation-based reliability analysis. The required computational effort for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. Meanwhile, the proposed weighted approach is combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology.
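The weighting idea can be sketched directly: one reliability simulation at a reference design is reused, and the failure probability at any other design follows by re-weighting the same samples with a density ratio. A minimal sketch on a toy limit state with a Gaussian design variable (all distributions and numbers are illustrative, not the paper's examples):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def g(x):
    """Limit-state function: failure when g < 0 (toy example)."""
    return 6.0 - x

# One Monte Carlo reliability run at the reference design d0 (mean of X).
d0, sigma, N = 4.0, 1.0, 200_000
x = rng.normal(d0, sigma, N)
fail = g(x) < 0.0

def failure_probability(d):
    """FPF: failure probability at design d, re-weighting the *same*
    samples by the density ratio f(x; d) / f(x; d0); no new reliability
    analysis is needed when d changes."""
    w = stats.norm.pdf(x, d, sigma) / stats.norm.pdf(x, d0, sigma)
    return np.mean(fail * w)

for d in (3.5, 4.0, 4.5):
    exact = stats.norm.sf(6.0, d, sigma)
    print(f"d={d}: weighted {failure_probability(d):.2e}  exact {exact:.2e}")
```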
Biomedical text mining for research rigor and integrity: tasks, challenges, directions.
Kilicoglu, Halil
2017-06-13
An estimated quarter of a trillion US dollars is invested in the biomedical research enterprise annually. There is growing alarm that a significant portion of this investment is wasted because of problems in reproducibility of research findings and in the rigor and integrity of research conduct and reporting. Recent years have seen a flurry of activities focusing on standardization and guideline development to enhance the reproducibility and rigor of biomedical research. Research activity is primarily communicated via textual artifacts, ranging from grant applications to journal publications. These artifacts can be both the source and the manifestation of practices leading to research waste. For example, an article may describe a poorly designed experiment, or the authors may reach conclusions not supported by the evidence presented. In this article, we pose the question of whether biomedical text mining techniques can assist the stakeholders in the biomedical research enterprise in doing their part toward enhancing research integrity and rigor. In particular, we identify four key areas in which text mining techniques can make a significant contribution: plagiarism/fraud detection, ensuring adherence to reporting guidelines, managing information overload and accurate citation/enhanced bibliometrics. We review the existing methods and tools for specific tasks, if they exist, or discuss relevant research that can provide guidance for future work. With the exponential increase in biomedical research output and the ability of text mining approaches to perform automatic tasks at large scale, we propose that such approaches can support tools that promote responsible research practices, providing significant benefits for the biomedical research enterprise. Published by Oxford University Press 2017. This work is written by a US Government employee and is in the public domain in the US.
An ultramicroscopic study on rigor mortis.
Suzuki, T
1976-01-01
Gastrocnemius muscles, taken from decapitated mice at various intervals after death and from mice killed by 2,4-dinitrophenol or mono-iodoacetic acid injection to induce rigor mortis soon after death, were observed by electron microscopy. The prominent appearance of many fine cross striations in the myofibrils (occurring about every 400 Å) was considered to be characteristic of rigor mortis. These striations were caused by minute granules studded along the surfaces of both thick and thin filaments; they appeared to be the bridges connecting the two kinds of filaments and accounted for the hardness and rigidity of the muscle.
Energy Efficiency - Spectral Efficiency Trade-off: A Multiobjective Optimization Approach
Amin, Osama; Bedeer, Ebrahim; Ahmed, Mohamed; Dobre, Octavia
2015-01-01
In this paper, we consider the resource allocation problem for energy efficiency (EE) - spectral efficiency (SE) trade-off. Unlike traditional research that uses the EE as an objective function and imposes constraints either on the SE or achievable rate, we propound a multiobjective optimization approach that can flexibly switch between the EE and SE functions or change the priority level of each function using a trade-off parameter. Our dynamic approach is more tractable than the conventional approaches and more convenient to realistic communication applications and scenarios. We prove that the multiobjective optimization of the EE and SE is equivalent to a simple problem that maximizes the achievable rate/SE and minimizes the total power consumption. Then we apply the generalized framework of the resource allocation for the EE-SE trade-off to optimally allocate the subcarriers’ power for orthogonal frequency division multiplexing (OFDM) with imperfect channel estimation. Finally, we use numerical results to discuss the choice of the trade-off parameter and study the effect of the estimation error, transmission power budget and channel-to-noise ratio on the multiobjective optimization.
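The equivalence stated above, maximizing the achievable rate while minimizing total power, admits a closed-form per-subcarrier allocation in the simplest perfect-CSI setting. The sketch below is an illustrative scalarization with made-up channel gains and trade-off values, not the paper's full algorithm with imperfect channel estimation.

```python
import numpy as np

# Scalarized trade-off on an OFDM link:
#   maximise sum_i log2(1 + p_i g_i) - lam * sum_i p_i,  p_i >= 0.
g = np.array([0.3, 1.1, 2.4, 0.8, 1.7])      # channel-to-noise ratios (made up)

def allocate(lam):
    # Stationarity of the objective in p_i gives the water-filling form
    #   p_i = 1 / (lam ln 2) - 1 / g_i, clipped at zero.
    return np.maximum(0.0, 1.0 / (lam * np.log(2)) - 1.0 / g)

for lam in (0.2, 0.5, 1.0):
    p = allocate(lam)
    rate = np.sum(np.log2(1.0 + p * g))
    print(f"lam={lam}: total power {p.sum():.2f}, rate {rate:.2f} bit/s/Hz")
```

Sweeping the trade-off parameter lam traces the EE-SE frontier: small lam favours spectral efficiency, large lam favours energy efficiency.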
Tenderness of pre- and post rigor lamb longissimus muscle.
Geesink, Geert; Sujang, Sadi; Koohmaraie, Mohammad
2011-08-01
Lamb longissimus muscle (n=6) sections were cooked at different times post mortem (prerigor, at rigor, 1 day p.m., and 7 days p.m.) using two cooking methods. Using a boiling water bath, samples were either cooked to a core temperature of 70 °C or boiled for 3 h. The latter method was meant to reflect the traditional cooking method employed in countries where preparation of prerigor meat is practiced. The time postmortem at which the meat was prepared had a large effect on the tenderness (shear force) of the meat. Cooking prerigor and at-rigor meat to 70 °C resulted in higher shear force values than their post rigor counterparts at 1 and 7 days p.m. (9.4 and 9.6 vs. 7.2 and 3.7 kg, respectively). The differences in tenderness between the treatment groups could be largely explained by a difference in contraction status of the meat after cooking and the effect of ageing on tenderness. Cooking pre- and at-rigor meat resulted in severe muscle contraction, as evidenced by the differences in sarcomere length of the cooked samples. Mean sarcomere lengths in the pre- and at-rigor samples ranged from 1.05 to 1.20 μm. The mean sarcomere length in the post rigor samples was 1.44 μm. Cooking for 3 h at 100 °C did improve the tenderness of pre- and at-rigor prepared meat as compared to cooking to 70 °C, but not to the extent that ageing did. It is concluded that additional intervention methods are needed to improve the tenderness of prerigor cooked meat. Copyright © 2011 Elsevier B.V. All rights reserved.
Differential algebras with remainder and rigorous proofs of long-term stability
International Nuclear Information System (INIS)
Berz, Martin
1997-01-01
It is shown how in addition to determining Taylor maps of general optical systems, it is possible to obtain rigorous interval bounds for the remainder term of the n-th order Taylor expansion. To this end, the three elementary operations of addition, multiplication, and differentiation in the Differential Algebraic approach are augmented by suitable interval operations in such a way that a remainder bound of the sum, product, and derivative is obtained from the Taylor polynomial and remainder bound of the operands. The method can be used to obtain bounds for the accuracy with which a Taylor map represents the true map of the particle optical system. In a more general sense, it is also useful for a variety of other numerical problems, including rigorous global optimization of highly complex functions. Combined with methods to obtain pseudo-invariants of repetitive motion and extensions of the Lyapunov- and Nekhoroshev stability theory, the latter can be used to guarantee stability for storage rings and other weakly nonlinear systems
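The augmentation described above, carrying a rigorous remainder bound alongside the Taylor polynomial, can be sketched in a few lines. The toy below keeps an order-4 polynomial about 0 on a fixed interval plus a symmetric remainder bound and propagates the bound through addition and multiplication. Outward floating-point rounding, which a truly verified implementation must add, is omitted for brevity, so this only illustrates the bookkeeping.

```python
import numpy as np

H = 0.5        # half-width of the domain [-H, H]; order and domain arbitrary
ORDER = 4

class TaylorModel:
    """Polynomial p about 0 plus remainder bound r: the represented
    function is guaranteed to lie in p(x) +/- r for all |x| <= H."""
    def __init__(self, coeffs, r=0.0):
        self.c = np.zeros(ORDER + 1)
        self.c[:len(coeffs)] = coeffs
        self.r = r

    def bound(self):
        # Crude but valid bound of |p(x)| over [-H, H].
        return sum(abs(ck) * H**k for k, ck in enumerate(self.c))

    def __add__(self, other):
        return TaylorModel(self.c + other.c, self.r + other.r)

    def __mul__(self, other):
        full = np.convolve(self.c, other.c)
        trunc = sum(abs(ck) * H**k              # degree > ORDER overflow
                    for k, ck in enumerate(full) if k > ORDER)
        r = (self.r * other.bound() + other.r * self.bound()
             + self.r * other.r + trunc)
        return TaylorModel(full[:ORDER + 1], r)

# Enclosure of (1 + x)(1 - x): polynomial 1 - x^2 with zero remainder.
prod = TaylorModel([1.0, 1.0]) * TaylorModel([1.0, -1.0])
print(prod.c, prod.r)
```

A derivative operation would differentiate the polynomial part and require a matching bound for the remainder's derivative, which is exactly the extra structure the paper adds to the three elementary operations.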
A study into first-year engineering education success using a rigorous mixed methods approach
DEFF Research Database (Denmark)
van den Bogaard, M.E.D.; de Graaff, Erik; Verbraek, Alexander
2015-01-01
The aim of this paper is to combine qualitative and quantitative research methods into rigorous research into student success. Research methods have weaknesses that can be overcome by clever combinations. In this paper we use a situated study into student success as an example of how methods...... using statistical techniques. The main elements of the model were student behaviour and student disposition, which were influenced by the students’ perceptions of the education environment. The outcomes of the qualitative studies were useful in interpreting the outcomes of the structural equation...
Dosimetric effects of edema in permanent prostate seed implants: a rigorous solution
International Nuclear Information System (INIS)
Chen Zhe; Yue Ning; Wang Xiaohong; Roberts, Kenneth B.; Peschel, Richard; Nath, Ravinder
2000-01-01
Purpose: To derive a rigorous analytic solution to the dosimetric effects of prostate edema so that its impact on conventional pre-implant and post-implant dosimetry can be studied for any given radioactive isotope and edema characteristics. Methods and Materials: The edema characteristics observed by Waterman et al. (Int. J. Radiat. Oncol. Biol. Phys. 41:1069-1077; 1998) were used to model the time evolution of the prostate and the seed locations. The total dose to any part of the prostate tissue from a seed implant was calculated analytically by parameterizing the dose fall-off from a radioactive seed as a single inverse power function of distance, with proper account of the edema-induced time evolution. The dosimetric impact of prostate edema was determined by comparing the dose calculated with full consideration of prostate edema to that calculated with the conventional dosimetry approach, where the seed locations and the target volume are assumed to be stationary. Results: A rigorous analytic solution for the relative dosimetric effects of prostate edema was obtained. This solution proved explicitly that the relative dosimetric effects of edema, as found in the previous numerical studies by Yue et al. (Int. J. Radiat. Oncol. Biol. Phys. 43:447-454; 1999), are independent of the size and shape of the implant target volume and of the number and locations of the seeds implanted. It also showed that the magnitude of the relative dosimetric effects is independent of the location of the dose evaluation point within the edematous target volume. This implies that the relative dosimetric effects of prostate edema are universal with respect to a given isotope and edema characteristic. A set of master tables of the relative dosimetric effects of edema was compiled for a wide range of edema characteristics for both 125I and 103Pd prostate seed implants. Conclusions: A rigorous analytic solution for the relative dosimetric effects of prostate edema has been obtained.
Probabilistic Forecasting of Photovoltaic Generation: An Efficient Statistical Approach
DEFF Research Database (Denmark)
Wan, Can; Lin, Jin; Song, Yonghua
2017-01-01
This letter proposes a novel efficient probabilistic forecasting approach to accurately quantify the variability and uncertainty of the power production from photovoltaic (PV) systems. Distinguished from most existing models, a linear programming based prediction interval construction model for PV power generation is proposed based on extreme learning machine and quantile regression, featuring high reliability and computational efficiency. The proposed approach is validated through numerical studies on PV data from Denmark.
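The quantile-regression building block reduces to a linear program, which is where the computational efficiency claimed above comes from. The sketch below fits a single quantile by minimizing the pinball loss with scipy's linprog on synthetic data; the letter's model couples such quantile fits with an extreme learning machine feature map, omitted here.

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, tau):
    """min_beta sum_i pinball_tau(y_i - x_i' beta), posed as the LP
    min tau*1'u + (1-tau)*1'v  s.t.  y - X beta = u - v,  u, v >= 0."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

# Hypothetical stand-in for a PV feature (say, forecast irradiance) and output.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(200), rng.uniform(0.0, 1.0, 200)])
y = 2.0 * X[:, 1] + rng.normal(0.0, 0.3 + 0.5 * X[:, 1])  # heteroscedastic

lo = quantile_regression(X, y, 0.05)
hi = quantile_regression(X, y, 0.95)
print("90% prediction interval at x = 0.8:",
      round(lo @ [1.0, 0.8], 3), "to", round(hi @ [1.0, 0.8], 3))
```

Fitting the lower and upper quantiles directly yields the prediction interval, with reliability controlled by the chosen quantile levels.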
Scientific rigor through videogames.
Treuille, Adrien; Das, Rhiju
2014-11-01
Hypothesis-driven experimentation - the scientific method - can be subverted by fraud, irreproducibility, and lack of rigorous predictive tests. A robust solution to these problems may be the 'massive open laboratory' model, recently embodied in the internet-scale videogame EteRNA. Deploying similar platforms throughout biology could enforce the scientific method more broadly. Copyright © 2014 Elsevier Ltd. All rights reserved.
From everyday communicative figurations to rigorous audience news repertoires
DEFF Research Database (Denmark)
Kobbernagel, Christian; Schrøder, Kim Christian
2016-01-01
In the last couple of decades there has been an unprecedented explosion of news media platforms and formats, as a succession of digital and social media have joined the ranks of legacy media. We live in a ‘hybrid media system’ (Chadwick, 2013), in which people build their cross-media news...... repertoires from the ensemble of old and new media available. This article presents an innovative mixed-method approach with considerable explanatory power to the exploration of patterns of news media consumption. This approach tailors Q-methodology in the direction of a qualitative study of news consumption......, in which a card sorting exercise serves to translate the participants’ news media preferences into a form that enables the researcher to undertake a rigorous factor-analytical construction of their news consumption repertoires. This interpretive, factor-analytical procedure, which results in the building...
Emergency cricothyrotomy for trismus caused by instantaneous rigor in cardiac arrest patients.
Lee, Jae Hee; Jung, Koo Young
2012-07-01
Instantaneous rigor as muscle stiffening occurring in the moment of death (or cardiac arrest) can be confused with rigor mortis. If trismus is caused by instantaneous rigor, orotracheal intubation is impossible and a surgical airway should be secured. Here, we report 2 patients who had emergency cricothyrotomy for trismus caused by instantaneous rigor. This case report aims to help physicians understand instantaneous rigor and to emphasize the importance of securing a surgical airway quickly on the occurrence of trismus. Copyright © 2012 Elsevier Inc. All rights reserved.
New III-V cell design approaches for very high efficiency
Energy Technology Data Exchange (ETDEWEB)
Lundstrom, M.S.; Melloch, M.R.; Lush, G.B.; Patkar, M.P.; Young, M.P. (Purdue Univ., Lafayette, IN (United States))
1993-04-01
This report describes new solar cell design approaches for achieving very high conversion efficiencies. The program consists of two elements. The first centers on exploring new thin-film approaches specifically designed for III-V semiconductors. Substantial efficiency gains may be possible by employing light trapping techniques to confine the incident photons, as well as the photons emitted by radiative recombination. The thin-film approach is a promising route to substantial performance improvements in the already high-efficiency, single-junction III-V cell. The second element of the research involves exploring design approaches for achieving high conversion efficiencies without requiring extremely high-quality material. This work has applications to multiple-junction cells, for which the selection of a component cell often involves a compromise between optimum band gap and optimum material quality. It could also be of benefit in a manufacturing environment by making the cell's efficiency less dependent on material quality.
Rigor or mortis: best practices for preclinical research in neuroscience.
Steward, Oswald; Balice-Gordon, Rita
2014-11-05
Numerous recent reports document a lack of reproducibility of preclinical studies, raising concerns about potential lack of rigor. Examples of lack of rigor have been extensively documented and proposals for practices to improve rigor are appearing. Here, we discuss some of the details and implications of previously proposed best practices and consider some new ones, focusing on preclinical studies relevant to human neurological and psychiatric disorders. Copyright © 2014 Elsevier Inc. All rights reserved.
Efficient Integrative Multi-SNP Association Analysis via Deterministic Approximation of Posteriors.
Wen, Xiaoquan; Lee, Yeji; Luca, Francesca; Pique-Regi, Roger
2016-06-02
With the increasing availability of functional genomic data, incorporating genomic annotations into genetic association analysis has become a standard procedure. However, the existing methods often lack rigor and/or computational efficiency and consequently do not maximize the utility of functional annotations. In this paper, we propose a rigorous inference procedure to perform integrative association analysis incorporating genomic annotations for both traditional GWASs and emerging molecular QTL mapping studies. In particular, we propose an algorithm, named deterministic approximation of posteriors (DAP), which enables highly efficient and accurate joint enrichment analysis and identification of multiple causal variants. We use a series of simulation studies to highlight the power and computational efficiency of our proposed approach and further demonstrate it by analyzing the cross-population eQTL data from the GEUVADIS project and the multi-tissue eQTL data from the GTEx project. In particular, we find that genetic variants predicted to disrupt transcription factor binding sites are enriched in cis-eQTLs across all tissues. Moreover, the enrichment estimates obtained across the tissues are correlated with the cell types for which the annotations are derived. Copyright © 2016 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Zhaoyuan Liu; Kord Smith; Benoit Forget; Javier Ortensi
2016-05-01
A new method for computing homogenized assembly neutron transport cross sections and diffusion coefficients that is both rigorous and computationally efficient is proposed in this paper. In the limit of a homogeneous hydrogen slab, the new method is equivalent to the long-used, and only recently published, CASMO transport method. The rigorous method is used to demonstrate the sources of inaccuracy in the commonly applied “out-scatter” transport correction. It is also demonstrated that the newly developed method is directly applicable to lattice calculations performed by Monte Carlo and is capable of computing rigorous homogenized transport cross sections for arbitrarily heterogeneous lattices. Comparisons of several common transport cross section approximations are presented for a simple problem of an infinite hydrogen medium. The new method has also been applied in computing 2-group diffusion data for an actual PWR lattice from the BEAVRS benchmark.
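For reference, the “out-scatter” transport correction examined above takes the standard textbook form (group indices suppressed), with the transport cross section then feeding the diffusion coefficient:

```latex
\Sigma_{tr} \;=\; \Sigma_t \;-\; \bar{\mu}_0\,\Sigma_s,
\qquad
D \;=\; \frac{1}{3\,\Sigma_{tr}},
```

where \bar{\mu}_0 is the mean scattering cosine. The paper's point is that this simple correction can be a significant source of inaccuracy relative to a rigorously homogenized transport cross section, especially for hydrogen-dominated media.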
NEW APPROACHES TO EFFICIENCY OF MASSIVE ONLINE COURSE
Directory of Open Access Journals (Sweden)
Liubov S. Lysitsina
2014-09-01
This paper focuses on the efficiency of e-learning in general, and of a massive online course in programming and information technology in particular. Several innovative approaches and scenarios have been proposed, developed, implemented and verified by the authors, including (1) a new approach to organizing and using automatic immediate feedback, which significantly helps a learner to verify developed code and increases the efficiency of learning; (2) a new approach to constructing learning interfaces, based on a “develop a code – get a result – validate a code” technique; (3) three scenarios of visualization and verification of developed code; (4) a new multi-stage approach to solving complex programming assignments; and (5) a new implementation of “perfectionism” game mechanics in a massive online course. Overall, due to the implementation of the proposed approaches, the efficiency of the massive online course was considerably increased: (1) an additional 27.9 % of students were able to successfully complete the “Web design and development using HTML5 and CSS3” massive online course at ITMO University, and (2) based on feedback from 5588 students, the “perfectionism” game mechanics noticeably improved students' involvement in course activities and the retention factor.
Rigor in Qualitative Supply Chain Management Research
DEFF Research Database (Denmark)
Goffin, Keith; Raja, Jawwad; Claes, Björn
2012-01-01
Purpose – The purpose of this paper is to share the authors' experiences of using the repertory grid technique in two supply chain management studies. The paper aims to demonstrate how the two studies provided insights into how qualitative techniques such as the repertory grid can be made more rigorous than in the past, and how results can be generated that are inaccessible using quantitative methods. Design/methodology/approach – This paper presents two studies undertaken using the repertory grid technique to illustrate its application in supply chain management research. Findings – The paper … reliability, and theoretical saturation. Originality/value – It is the authors' contention that the addition of the repertory grid technique to the toolset of methods used by logistics and supply chain management researchers can only enhance insights and the building of robust theories. Qualitative studies …
Statistical mechanics rigorous results
Ruelle, David
1999-01-01
This classic book marks the beginning of an era of vigorous mathematical progress in equilibrium statistical mechanics. Its treatment of the infinite system limit has not been superseded, and the discussion of thermodynamic functions and states remains basic for more recent work. The conceptual foundation provided by the Rigorous Results remains invaluable for the study of the spectacular developments of statistical mechanics in the second half of the 20th century.
Rigor force responses of permeabilized fibres from fast and slow skeletal muscles of aged rats.
Plant, D R; Lynch, G S
2001-09-01
1. Ageing is generally associated with a decline in skeletal muscle mass and strength and a slowing of muscle contraction, factors that impact upon the quality of life for the elderly. The mechanisms underlying this age-related muscle weakness have not been fully resolved. The purpose of the present study was to determine whether the decrease in muscle force as a consequence of age could be attributed partly to a decrease in the number of cross-bridges participating during contraction. 2. Given that the rigor force is proportional to the approximate total number of interacting sites between the actin and myosin filaments, we tested the null hypothesis that the rigor force of permeabilized muscle fibres from young and old rats would not be different. 3. Permeabilized fibres from the extensor digitorum longus (fast-twitch; EDL) and soleus (predominantly slow-twitch) muscles of young (6 months of age) and old (27 months of age) male F344 rats were activated in Ca2+-buffered solutions to determine force-pCa characteristics (where pCa = -log10[Ca2+]) and then in solutions lacking ATP and Ca2+ to determine rigor force levels. 4. The rigor forces for EDL and soleus muscle fibres were not different between young and old rats, indicating that the approximate total number of cross-bridges that can be formed between filaments did not decline with age. We conclude that the age-related decrease in force output is more likely attributable to a decrease in the force per cross-bridge and/or decreases in the efficiency of excitation-contraction coupling.
High and low rigor temperature effects on sheep meat tenderness and ageing.
Devine, Carrick E; Payne, Steven R; Peachey, Bridget M; Lowe, Timothy E; Ingram, John R; Cook, Christian J
2002-02-01
Immediately after electrical stimulation, the paired m. longissimus thoracis et lumborum (LT) of 40 sheep were boned out and wrapped tightly with a polyethylene cling film. One of the paired LTs was chilled in 15°C air to reach a rigor mortis (rigor) temperature of 18°C, and the other side was placed in a water bath at 35°C and achieved rigor at this temperature. Wrapping reduced rigor shortening and mimicked meat left on the carcass. After rigor, the meat was aged at 15°C for 0, 8, 26 and 72 h and then frozen. The frozen meat was cooked to 75°C in an 85°C water bath and shear force values obtained from a 1×1 cm cross-section. The shear force values of meat for 18 and 35°C rigor were similar at zero ageing, but as ageing progressed, the 18°C rigor meat aged faster and became more tender than meat that went into rigor at 35°C; shear force values for the two rigor temperatures at each ageing time were significantly different, and after full ageing those for 35°C rigor were still significantly greater. Thus the toughness of 35°C meat was not a consequence of muscle shortening and appears to be due to both a faster rate of tenderisation and the meat tenderising to a greater extent at the lower temperature. The cook loss at 35°C rigor (30.5%) was greater than that at 18°C rigor (28.4%) (P<0.01), and the colour Hunter L values were higher at 35°C (P<0.01) compared with 18°C, but there were no significant differences in a or b values.
Physiological studies of muscle rigor mortis in the fowl
International Nuclear Information System (INIS)
Nakahira, S.; Kaneko, K.; Tanaka, K.
1990-01-01
A simple system was developed for continuous measurement of muscle contraction during rigor mortis. Longitudinal muscle strips dissected from the Peroneus Longus were suspended in a plastic tube containing liquid paraffin. Mechanical activity was transmitted to a strain-gauge transducer connected to a potentiometric pen-recorder. At the onset of measurement, 1.2 g was loaded on the muscle strip. This model was used to study the muscle response to various treatments during rigor mortis. All measurements were carried out under anaerobic conditions at 17°C, except where otherwise stated. 1. The present system was found to be quite useful for continuous measurement of the course of muscle rigor. 2. Muscle contraction under the anaerobic condition at 17°C reached a peak about 2 hours after the onset of measurement and thereafter relaxed at a slow rate. In contrast, the aerobic condition under high humidity resulted in a strong rigor, about three times stronger than that in the anaerobic condition. 3. Ultrasonic treatment (37,000-47,000 Hz) at 25°C for 10 minutes resulted in a moderate muscle rigor. 4. Treatment of the muscle strip with 2 mM EGTA at 30°C for 30 minutes led to relaxation of the muscle. 5. Muscle from birds killed during anesthesia with pentobarbital sodium showed a slow rate of rigor, whereas birds killed one day after hypophysectomy showed a quick muscle rigor, as seen in intact controls. 6. A slight muscle rigor was observed when the muscle strip was placed in a refrigerator at 0°C for 18.5 hours and the temperature was thereafter kept at 17°C. (author)
Promoting Energy Efficiency Best Practices in Cities
Energy Technology Data Exchange (ETDEWEB)
NONE
2008-07-01
This pilot project is the first attempt to address the lack of a rigorous and transparent approach to defining best practice in city energy efficiency programmes. The project has provided interesting insights into a range of exciting projects being implemented in cities around the world. However, the potential exists for far greater benefit. The study has found that it is possible to collate the detailed information needed to identify best practice energy efficiency projects in cities. However, gathering the data is not easy. The data is often not recorded in an easily accessible format, nor is it easy to get city officials to allocate time to the necessary data collation, given the many other competing demands on their time. A key area that this project identifies as requiring urgent attention is the development of a common data management format for energy efficiency projects by cities. Further work could also focus on refining the criteria used to define best practice, and on broadening the scope of projects beyond energy efficiency.
Rigorous RG Algorithms and Area Laws for Low Energy Eigenstates in 1D
Arad, Itai; Landau, Zeph; Vazirani, Umesh; Vidick, Thomas
2017-11-01
One of the central challenges in the study of quantum many-body systems is the complexity of simulating them on a classical computer. A recent advance (Landau et al. in Nat Phys, 2015) gave a polynomial time algorithm to compute a succinct classical description for unique ground states of gapped 1D quantum systems. Despite this progress many questions remained unsolved, including whether there exist efficient algorithms when the ground space is degenerate (and of polynomial dimension in the system size), or for the polynomially many lowest energy states, or even whether such states admit succinct classical descriptions or area laws. In this paper we give a new algorithm, based on a rigorously justified RG type transformation, for finding low energy states for 1D Hamiltonians acting on a chain of n particles. In the process we resolve some of the aforementioned open questions, including giving a polynomial time algorithm for poly(n) degenerate ground spaces and an n^{O(log n)} algorithm for the poly(n) lowest energy states (under a mild density condition). For these classes of systems the existence of a succinct classical description and area laws were not rigorously proved before this work. The algorithms are natural and efficient, and for the case of finding unique ground states for frustration-free Hamiltonians the running time is Õ(n·M(n)), where M(n) is the time required to multiply two n × n matrices.
Estimation of the breaking of rigor mortis by myotonometry.
Vain, A; Kauppila, R; Vuori, E
1996-05-31
Myotonometry was used to detect the breaking of rigor mortis. The myotonometer is a new instrument which measures the decaying oscillations of a muscle after a brief mechanical impact. The method gives two numerical parameters for rigor mortis, namely the period and decrement of the oscillations, both of which depend on the time elapsed after death. When rigor mortis was broken by lengthening the muscle, both the oscillation period and decrement decreased, whereas shortening the muscle caused the opposite changes. Fourteen hours after breaking, the stiffness characteristics (oscillation periods) of the right and left m. biceps brachii had equalized. However, the values for the decrement of the muscle, reflecting the dissipation of mechanical energy, maintained their differences.
International Nuclear Information System (INIS)
Havu, V.; Blum, V.; Havu, P.; Scheffler, M.
2009-01-01
We consider the problem of developing O(N)-scaling grid-based operations needed in many central steps of electronic structure calculations with numeric atom-centered orbitals as basis functions. We outline the overall formulation of localized algorithms, and specifically the creation of localized grid batches. The choice of the grid partitioning scheme plays an important role in the performance and memory consumption of the grid-based operations. Three different top-down partitioning methods are investigated and compared with formally more rigorous yet much more expensive bottom-up algorithms. We show that a conceptually simple top-down grid partitioning scheme achieves essentially the same efficiency as the more rigorous bottom-up approaches.
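A top-down partitioning of the kind compared above can be as simple as recursive bisection of the point cloud along the widest axis of its bounding box. The sketch below is a generic illustration of that idea, not the paper's specific scheme; the point distribution and batch size are arbitrary.

```python
import numpy as np

def partition(points, max_batch=64):
    """Recursively bisect the bounding box along its widest axis until
    every batch holds at most max_batch grid points."""
    if len(points) <= max_batch:
        return [points]
    widths = points.max(axis=0) - points.min(axis=0)
    axis = int(np.argmax(widths))
    median = np.median(points[:, axis])
    left = points[points[:, axis] <= median]
    right = points[points[:, axis] > median]
    if len(left) == 0 or len(right) == 0:    # degenerate split: stop
        return [points]
    return partition(left, max_batch) + partition(right, max_batch)

# Illustrative cloud of 3D integration-grid points around four "atoms".
rng = np.random.default_rng(3)
centers = rng.uniform(-5.0, 5.0, (4, 3))
pts = np.concatenate([c + rng.normal(0.0, 1.0, (500, 3)) for c in centers])
batches = partition(pts)
print(len(batches), "batches; largest:", max(len(b) for b in batches))
```

Compact, spatially localized batches are what make the subsequent grid operations O(N): each batch only touches the basis functions whose support overlaps it.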
Rigorous solution to Bargmann-Wigner equation for integer spin
Huang Shi Zhong; Wu Ning; Zheng Zhi Peng
2002-01-01
A rigorous method is developed to solve the Bargmann-Wigner equation for arbitrary integer spin in the coordinate representation in a step-by-step way. The Bargmann-Wigner equation is first transformed to a form easier to solve; the new equations are then solved rigorously in the coordinate representation, and the wave functions are thus derived in closed form.
RIGOR MORTIS AND THE INFLUENCE OF CALCIUM AND MAGNESIUM SALTS UPON ITS DEVELOPMENT.
Meltzer, S J; Auer, J
1908-01-01
Calcium salts hasten and magnesium salts retard the development of rigor mortis, that is, when these salts are administered subcutaneously or intravenously. When injected intra-arterially, concentrated solutions of both kinds of salts cause a nearly immediate onset of a strong stiffness of the muscles which is apparently a contraction, brought on by a stimulation caused by these salts and due to osmosis. This contraction, if strong, passes over without a relaxation into a real rigor. This form of rigor may be classed as work-rigor (Arbeitsstarre). In animals, at least in frogs, with intact cords, the early contraction and the following rigor are stronger than in animals with a destroyed cord. If M/8 solutions (nearly equimolecular to "physiological" solutions of sodium chloride) are used, even when injected intra-arterially, calcium salts hasten and magnesium salts retard the onset of rigor. The hastening and retardation in this case, as well as in the cases of subcutaneous and intravenous injections, are ion effects and essentially due to the cations, calcium and magnesium. In the rigor hastened by calcium the effects of the extensor muscles mostly prevail; in the rigor following magnesium injection, on the other hand, either the flexor muscles prevail or the muscles become stiff in the original position of the animal at death. There seems to be no difference in the degree of stiffness in the final rigor; only the onset and development of the rigor is hastened in the case of the one salt and retarded in the other. Calcium also hastens the development of heat rigor. No positive facts were obtained with regard to the effect of magnesium upon heat rigor. Calcium also hastens and magnesium retards the onset of rigor in the left ventricle of the heart. No definite data were gathered with regard to the effects of these salts upon the right ventricle.
Multiscale approaches to high efficiency photovoltaics
Directory of Open Access Journals (Sweden)
Connolly James Patrick
2016-01-01
While renewable energies are achieving parity around the globe, efforts to reach higher solar cell efficiencies become ever more difficult as they approach the limiting efficiency. The so-called third generation concepts attempt to break this limit through a combination of novel physical processes and new materials and concepts in organic and inorganic systems. Some examples of semi-empirical modelling in the field are reviewed, in particular for multispectral solar cells on silicon (French ANR project MultiSolSi). Their achievements are outlined, and the limits of these approaches shown. This introduces the main topic of this contribution: the use of multiscale experimental and theoretical techniques to go beyond the semi-empirical understanding of these systems. This approach has already led to great advances in modelling, which have resulted in widely known modelling software. Yet a survey of the topic reveals a fragmentation of efforts across disciplines, such as the organic and inorganic fields, but also between high-efficiency concepts such as hot carrier cells and intermediate band concepts. We show how this obstacle to the resolution of practical research problems may be lifted by interdisciplinary cooperation across length scales, across experimental and theoretical fields, and across materials systems. We present a European COST Action, “MultiscaleSolar”, kicking off in early 2015, which brings together experimental and theoretical partners in order to develop multiscale research in organic and inorganic materials. The goal of this defragmentation and interdisciplinary collaboration is to develop understanding across length scales, which will enable the full potential of third generation concepts to be evaluated in practice, for societal and industrial applications.
Directory of Open Access Journals (Sweden)
Kun Hu
2016-09-01
High precision geometric rectification of High Resolution Satellite Imagery (HRSI) is the basis of digital mapping and three-dimensional (3D) modeling. Taking advantage of line features as basic geometric control conditions instead of control points, the Line-Based Transformation Model (LBTM) provides a practical and efficient way of image rectification. It can build the mathematical relationship between image space and the corresponding object space accurately, while dramatically reducing the workloads of ground control and feature recognition. Based on a generalization and analysis of existing LBTMs, a novel rigorous LBTM is proposed in this paper, which can further eliminate the geometric deformation caused by sensor inclination and terrain variation. This improved nonlinear LBTM is constructed based on a generalized point strategy and resolved by least squares overall adjustment. Geo-positioning accuracy experiments with IKONOS, GeoEye-1 and ZiYuan-3 satellite imagery are performed to compare the rigorous LBTM with other relevant line-based and point-based transformation models. Both theoretical analysis and experimental results demonstrate that the rigorous LBTM is more accurate and reliable without adding extra ground control. The geo-positioning accuracy of satellite imagery rectified by the rigorous LBTM can reach about one pixel with eight control lines and can be further improved by optimizing the horizontal and vertical distribution of the control lines.
Absolute determination of photoluminescence quantum efficiency using an integrating sphere setup
International Nuclear Information System (INIS)
Leyre, S.; Coutino-Gonzalez, E.; Hofkens, J.; Joos, J. J.; Poelman, D.; Smet, P. F.; Ryckaert, J.; Meuret, Y.; Durinck, G.; Hanselaer, P.; Deconinck, G.
2014-01-01
An integrating sphere-based setup for a quick and reliable determination of the internal quantum efficiency of strongly scattering luminescent materials is presented. In the literature, two distinct but similar measurement procedures are frequently mentioned: a “two measurement” and a “three measurement” approach. Both methods are evaluated by applying rigorous integrating sphere theory. It was found that both measurement procedures are valid. Additionally, the two methods are compared with respect to the uncertainty budget of the obtained values of the quantum efficiency. An inter-laboratory validation using the two distinct procedures was performed. The conclusions from the theoretical study were confirmed by the experimental data
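For concreteness, one widely used “three measurement” analysis (following de Mello and co-workers) combines the integrated excitation signal L and emission signal P from three sphere configurations as below; the numbers are illustrative only.

```python
def quantum_efficiency(L_a, L_b, P_b, L_c, P_c):
    """de Mello-style three-measurement analysis:
    a: empty sphere; b: sample in, beam not hitting it; c: beam hitting it.
    L = integrated excitation signal, P = integrated emission signal."""
    A = 1.0 - L_c / L_b                 # fraction of excitation absorbed
    return (P_c - (1.0 - A) * P_b) / (L_a * A)

# Illustrative readings.
print(quantum_efficiency(L_a=1.00, L_b=0.95, P_b=0.010, L_c=0.60, P_c=0.200))
```

The “two measurement” variant omits one of these configurations; per the abstract, both procedures are valid and differ mainly in their uncertainty budgets.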
Trends: Rigor Mortis in the Arts.
Blodget, Alden S.
1991-01-01
Outlines how past art education provided a refuge for students from the rigors of other academic subjects. Observes that in recent years art education has become "discipline based." Argues that art educators need to reaffirm their commitment to a humanistic way of knowing. (KM)
Using grounded theory as a method for rigorously reviewing literature
Wolfswinkel, J.; Furtmueller-Ettinger, Elfriede; Wilderom, Celeste P.M.
2013-01-01
This paper offers guidance for conducting a rigorous literature review. We present this in the form of a five-stage process in which we use Grounded Theory as a method. We first probe the guidelines explicated by Webster and Watson, and then we show the added value of Grounded Theory for rigorously reviewing literature.
Efficient shortcut techniques in evanescently coupled waveguides
Paul, Koushik; Sarma, Amarendra K.
2016-10-01
The Shortcut to Adiabatic Passage (SHAPE) technique has gained considerable attention in the context of coherent control of atomic systems in recent years, primarily because of its ability to transfer population between quantum states far faster than adiabatic processes. Two methods in this regard have been explored rigorously, namely transitionless quantum driving and the Lewis-Riesenfeld invariant approach. We have applied these two methods to realize SHAPE in an adiabatic waveguide coupler. Waveguide couplers are integral components of photonic circuits, primarily used as switching devices. Our study shows that with appropriate engineering of the coupling coefficient and propagation constants of the coupler, it is possible to achieve efficient and complete power switching. We also observe that the coupler length can be reduced significantly without affecting the coupling efficiency of the system.
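The essence of the scheme can be reproduced with two-mode coupled-mode equations. The sketch below propagates light through a directional coupler whose coupling and mismatch profiles are invented for illustration, not taken from the paper: adding the transitionless-driving term θ'(z)σ_y restores complete power transfer in a device too short to be adiabatic.

```python
import numpy as np
from scipy.linalg import expm

# Coupled-mode model: i d/dz [a, b]^T = H(z) [a, b]^T,
# H = delta(z) * sz + kappa(z) * sx (+ counterdiabatic term).
L, N = 1.0, 4000
z = np.linspace(0.0, L, N)
kappa = 2.0 * np.exp(-((z - L / 2) ** 2) / 0.15 ** 2)  # coupling profile
delta = 40.0 * (z / L - 0.5)                           # mismatch sweep

theta = 0.5 * np.arctan2(kappa, delta)   # mixing angle of H(z)
dtheta = np.gradient(theta, z)           # counterdiabatic coupling rate

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def transfer(with_cd):
    psi = np.array([1.0, 0.0], dtype=complex)   # launch in waveguide 1
    dz = z[1] - z[0]
    for k in range(N):
        H = delta[k] * sz + kappa[k] * sx
        if with_cd:
            H = H + dtheta[k] * sy               # transitionless driving
        psi = expm(-1j * H * dz) @ psi
    return abs(psi[1]) ** 2                      # power in waveguide 2

print("without CD term:", round(transfer(False), 3))   # incomplete transfer
print("with CD term:   ", round(transfer(True), 3))    # close to 1.0
```

The σ_y term corresponds to a coupling in quadrature with the ordinary one; in waveguide implementations it is typically absorbed into redesigned coupling and mismatch profiles rather than realized directly.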
How Individual Scholars Can Reduce the Rigor-Relevance Gap in Management Research
Wolf, Joachim; Rosenberg, Timo
2012-01-01
This paper discusses a number of avenues management scholars could follow to reduce the existing gap between scientific rigor and practical relevance without relativizing the importance of the first goal dimension. Such changes are necessary because many management studies do not fully exploit the possibilities to increase their practical relevance while maintaining scientific rigor. We argue that this rigor-relevance gap is not only the consequence of the currently prevailing institutional c...
A Novel Energy-Efficient Approach for Human Activity Recognition.
Zheng, Lingxiang; Wu, Dihong; Ruan, Xiaoyang; Weng, Shaolin; Peng, Ao; Tang, Biyu; Lu, Hai; Shi, Haibin; Zheng, Huiru
2017-09-08
In this paper, we propose a novel energy-efficient approach for a mobile activity recognition system (ARS) to detect human activities. The proposed energy-efficient ARS, using low sampling rates, can achieve high recognition accuracy and low energy consumption. A novel classifier that integrates a hierarchical support vector machine and context-based classification (HSVMCC) is presented to achieve high activity recognition accuracy when the sampling rate is lower than the activity frequency, i.e., the Nyquist sampling theorem is not satisfied. We tested the proposed energy-efficient approach with data collected from 20 volunteers (14 males and 6 females), and an average recognition accuracy of around 96.0% was achieved. Results show that using a low sampling rate of 1 Hz can save 17.3% and 59.6% of energy compared with sampling rates of 5 Hz and 50 Hz, respectively. The proposed low sampling rate approach can greatly reduce power consumption while maintaining high activity recognition accuracy.
Onset of rigor mortis is earlier in red muscle than in white muscle.
Kobayashi, M; Takatori, T; Nakajima, M; Sakurada, K; Hatanaka, K; Ikegaya, H; Matsuda, Y; Iwase, H
2000-01-01
Rigor mortis is thought to be related to falling ATP levels in muscles postmortem. We measured rigor mortis as tension determined isometrically in three rat leg muscles in liquid paraffin kept at 37 degrees C or 25 degrees C--two red muscles, red gastrocnemius (RG) and soleus (SO) and one white muscle, white gastrocnemius (WG). Onset, half and full rigor mortis occurred earlier in RG and SO than in WG both at 37 degrees C and at 25 degrees C even though RG and WG were portions of the same muscle. This suggests that rigor mortis directly reflects the postmortem intramuscular ATP level, which decreases more rapidly in red muscle than in white muscle after death. Rigor mortis was more retarded at 25 degrees C than at 37 degrees C in each type of muscle.
Rigorous high-precision enclosures of fixed points and their invariant manifolds
Wittig, Alexander N.
The well-established concept of Taylor Models is introduced, which offers highly accurate C0 enclosures of functional dependencies, combining high-order polynomial approximation of functions with rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating point operations yielding exact values for round-off errors. An experimental high precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the high precision interval data type are developed and described in detail. The application of these operations in the implementation of high precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period 15 fixed point in a near-standard Hénon map. Verification is performed using different verified methods such as double precision Taylor Models, high precision intervals and high precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented. Previous work done by
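The containment test at the heart of such a verification can be shown in one dimension with an interval Newton step. The toy below hand-rolls the few interval operations it needs and omits outward directed rounding, so it demonstrates only the logic that a genuinely rigorous tool, like the high precision intervals above, implements carefully.

```python
import math

class I:
    """Minimal interval type (no outward rounding: illustration only)."""
    def __init__(self, lo, hi): self.lo, self.hi = lo, hi
    def __sub__(self, o): return I(self.lo - o.hi, self.hi - o.lo)
    def __truediv__(self, o):                    # assumes 0 not in o
        q = [self.lo / o.lo, self.lo / o.hi, self.hi / o.lo, self.hi / o.hi]
        return I(min(q), max(q))
    def subset(self, o): return o.lo <= self.lo and self.hi <= o.hi

# Verify the fixed point of f(x) = cos(x), i.e. the zero of
# g(x) = cos(x) - x, inside X = [0.73, 0.75], using the interval Newton
# operator N(X) = m - g(m) / g'(X) with g'(x) = -sin(x) - 1.
X = I(0.73, 0.75)
m = 0.74
gm = I(math.cos(m) - m, math.cos(m) - m)
dg = I(-math.sin(X.hi) - 1.0, -math.sin(X.lo) - 1.0)  # g' monotone on X
N = I(m, m) - gm / dg
print(f"N(X) = [{N.lo:.6f}, {N.hi:.6f}]")
print("unique fixed point verified:", N.subset(X))
```

If N(X) lands inside X, the interval Newton theorem guarantees existence and uniqueness of the zero in X, which is the same kind of certificate the automated fixed point finder issues for each enclosure it returns.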
Photoconductivity of amorphous silicon-rigorous modelling
International Nuclear Information System (INIS)
Brada, P.; Schauer, F.
1991-01-01
It is our great pleasure to express our gratitude to Prof. Grigorovici, a pioneer of the exciting field of the amorphous state, with this modest contribution to the area. This paper presents an outline of a rigorous modelling program for the steady-state photoconductivity in amorphous silicon and related materials. (Author)
Enhanced efficiency of a fluorescing nanoparticle with a silver shell
Energy Technology Data Exchange (ETDEWEB)
Choy, Wallace C H; Chen Xuewen [Department of Electrical and Electronic Engineering, University of Hong Kong, Pokfulam Road (Hong Kong); He Sailing [Centre for Optical and Electromagnetic Research, Zhejiang University, Zhijingang campus, Hangzhou 310058 (China)], E-mail: chchoy@eee.hku.hk
2009-09-01
The spontaneous emission (SE) rate and fluorescence efficiency of a bare fluorescing nanoparticle (NP), and of the same NP with a silver nanoshell, are analyzed rigorously using a classical electromagnetic approach that takes into account the nonlocal effect of the silver nanoshell. The dependence of the SE rate and fluorescence efficiency on the core-shell structure is carefully studied, and physical interpretations of the results are given. The results show that the SE rate of a bare NP is much slower than that in the infinite medium, by almost an order of magnitude, and consequently the fluorescence efficiency is usually low. However, by encapsulating the NP with a silver shell, highly efficient fluorescence can be achieved as a result of a large Purcell enhancement and a high out-coupling efficiency (OQE) for a well-designed core-shell structure. We also show that a higher SE rate may not yield a larger fluorescence efficiency, since the fluorescence efficiency depends not only on the internal quantum yield but also on the OQE.
Ju, Feng; Lee, Hyo Kyung; Yu, Xinhua; Faris, Nicholas R; Rugless, Fedoria; Jiang, Shan; Li, Jingshan; Osarogiagbon, Raymond U
2017-12-01
The process of lung cancer care from initial lesion detection to treatment is complex, involving multiple steps, each introducing the potential for substantial delays. Identifying the steps with the greatest delays enables a focused effort to improve the timeliness of care delivery without sacrificing quality. We retrospectively reviewed clinical events from initial detection, through histologic diagnosis, radiologic and invasive staging, and medical clearance, to surgery for all patients who had an attempted resection of a suspected lung cancer in a community healthcare system. We used a computer process modeling approach to evaluate delays in care delivery, in order to identify potential 'bottlenecks' in waiting time whose reduction could produce greater care efficiency. We also conducted 'what-if' analyses to predict the relative impact of simulated changes in the care delivery process and determine the most efficient pathways to surgery. The waiting time between radiologic lesion detection and diagnostic biopsy, and the waiting time from radiologic staging to surgery, were the two most critical bottlenecks impeding efficient care delivery (more than 3 times larger than the other waiting times). Additionally, instituting surgical consultation prior to cardiac consultation for medical clearance, and decreasing the waiting time between CT scans and diagnostic biopsies, were potentially the most impactful measures to reduce care delays before surgery. Rigorous computer simulation modeling, using clinical data, can provide useful information for identifying areas where process engineering can improve the efficiency of care delivery for patients who receive surgery for lung cancer.
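A minimal Monte Carlo rendering of such a pathway model is sketched below: the time to surgery is a sum of per-step waits, and the largest mean contribution flags the bottleneck. The step names mirror the paper, but every distribution and parameter here is invented; a 'what-if' analysis corresponds to re-running with a reduced mean for one step.

```python
import numpy as np

rng = np.random.default_rng(4)
steps = {                                  # mean waiting times in days (invented)
    "detection -> biopsy": 21.0,
    "biopsy -> staging": 10.0,
    "staging -> clearance": 7.0,
    "clearance -> surgery": 18.0,
}

n = 100_000
totals = np.zeros(n)
means = {}
for name, mean in steps.items():
    w = rng.exponential(mean, n)           # exponential waits, illustrative only
    means[name] = w.mean()
    totals += w

print(f"mean time to surgery: {totals.mean():.1f} days")
for name, m in sorted(means.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {m:.1f} days")       # largest contributors = bottlenecks
```

A fuller process model would additionally capture queueing and resource contention, which is what allows interventions to be ranked rather than just comparing mean waits.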
Reframing Rigor: A Modern Look at Challenge and Support in Higher Education
Campbell, Corbin M.; Dortch, Deniece; Burt, Brian A.
2018-01-01
This chapter describes the limitations of the traditional notions of academic rigor in higher education, and brings forth a new form of rigor that has the potential to support student success and equity.
Discovering the Network Topology: An Efficient Approach for SDN
Directory of Open Access Journals (Sweden)
Leonardo OCHOA-ADAY
2016-11-01
Network topology is a physical description of the overall resources in the network. Collecting this information using efficient mechanisms is a critical task for important network functions such as routing, network management, and quality of service (QoS), among many others. Recent technologies like Software-Defined Networks (SDN) have emerged as promising approaches for managing next generation networks. In order to ensure a proficient topology discovery service in SDN, we propose a simple agent-based mechanism that improves the overall efficiency of the topology discovery process. In this paper, an algorithm for a novel Topology Discovery Protocol (SD-TDP) is described. This protocol will be implemented in each switch through a software agent. Thus, this approach provides a distributed solution to the problem of network topology discovery in a simpler and more efficient way.
Analyzing the Approaches to the Interpretation of Efficiency of Activity of Enterprises
Directory of Open Access Journals (Sweden)
Gerasymov Oleksandr K.
2017-10-01
The article is aimed at studying, systematizing and analyzing scientific approaches to the definition of the category of «efficiency», the evolution of the formation and development of scientific schools, and the definition of the basic and general theories of efficiency: economic, dynamic, statistical, adaptive, and synergistic. The results of the study show that there is no uniform approach to understanding the concept of «efficiency» in the current circumstances. Efficiency is an indicator of the development of an actor (phenomenon) and works as an incentive for entrepreneurial activity. Efficiency is the target guideline of managerial activities for leaders of enterprises, who direct their activities towards substantiation, necessity, justification, and sufficiency. A prospect for further research in this area is the development of an organizational-economic mechanism for the marketing provision of an enterprise, as well as a methodical approach to assessing the efficiency of the enterprise's performance along with its marketing subsystem.
A two-stage DEA approach for environmental efficiency measurement.
Song, Malin; Wang, Shuhong; Liu, Wei
2014-05-01
The slacks-based measure (SBM) model based on constant returns to scale has achieved good results in addressing undesirable outputs, such as waste water and waste gas, in measuring environmental efficiency. However, the traditional SBM model cannot deal with the scenario in which desirable outputs are constant. Based on the axiomatic theory of productivity, this paper carries out a systematic study of the SBM model considering undesirable outputs, and further expands the SBM model from the perspective of network analysis. The new model can not only perform efficiency evaluation considering undesirable outputs, but also treat desirable and undesirable outputs separately. The latter advantage solves the 'dependence' problem of outputs, namely that desirable outputs cannot be increased without producing any undesirable outputs. The illustration shows that the efficiency values obtained by the two-stage approach are smaller than those obtained by the traditional SBM model. Our approach provides a more profound analysis of how to improve the environmental efficiency of the decision making units.
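For reference, the slacks-based efficiency with undesirable outputs on which this line of work builds is usually written as follows (after Tone; s^-, s^g and s^b are the input, desirable-output and undesirable-output slacks of the evaluated unit 0):

```latex
\rho^{*} \;=\; \min\;
\frac{\;1-\dfrac{1}{m}\sum_{i=1}^{m} s_i^{-}/x_{i0}\;}
     {\;1+\dfrac{1}{s_1+s_2}\Bigl(\sum_{r=1}^{s_1} s_r^{g}/y_{r0}^{g}
       \;+\;\sum_{t=1}^{s_2} s_t^{b}/y_{t0}^{b}\Bigr)\;}
```

The two-stage variant evaluates the desirable and undesirable output slacks in separate stages, consistent with the smaller efficiency values reported in the illustration.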
Materials Approach to Fuel Efficient Tires
Energy Technology Data Exchange (ETDEWEB)
Votruba-Drzal, Peter [PPG Industries, Monroeville, PA (United States); Kornish, Brian [PPG Industries, Monroeville, PA (United States)
2015-06-30
The objective of this project was to design, develop, and demonstrate fuel-efficient and safety-regulation-compliant tire filler and barrier coating technologies that would improve overall fuel efficiency by at least 2%. The program developed and validated two complementary approaches to improving fuel efficiency through tire improvements. The first technology was a modified silica-based product that is 15% lower in cost and/or enables a 10% improvement in tread wear while maintaining the already demonstrated minimum of 2% improvement in average fuel efficiency. The second technology was a barrier coating with a reduced oxygen transmission rate compared to state-of-the-art halobutyl rubber inner liners, which will provide extended placarded tire pressure retention at significantly reduced material usage. A lower-permeance, thinner inner liner coating that retains tire pressure was expected to deliver the additional 2% reduction in fleet fuel consumption. According to the 2006 Transportation Research Board report, a 10 percent reduction in rolling resistance can reduce consumer fuel expenditures by 1 to 2 percent for typical vehicles. This savings is equivalent to 6 to 12 gallons per year. A 1 psi drop in inflation pressure increases the tire's rolling resistance by about 1.4 percent.
Energy sustainability: consumption, efficiency, and ...
One of the critical challenges in achieving sustainability is finding a way to meet the energy consumption needs of a growing population in the face of increasing economic prosperity and finite resources. According to ecological footprint computations, global resource consumption began exceeding planetary supply in 1977, and by 2030 global energy demand, population, and gross domestic product are projected to greatly exceed 1977 levels. With the aim of finding sustainable energy solutions, we present a simple yet rigorous procedure for assessing and counterbalancing the relationship between energy demand, environmental impact, population, GDP, and energy efficiency. Our analyses indicated that infeasible increases in energy efficiency (over 100 %) would be required by 2030 to return to 1977 environmental impact levels, and that annual reductions (2 and 3 %) in energy demand resulted in physically possible yet impractical requirements; hence, a combination of policy and technology approaches is needed to tackle this critical challenge. This work emphasizes the difficulty of moving toward energy sustainability and helps to frame possible solutions useful for policy and management. Based on projected energy consumption, environmental impact, human population, gross domestic product (GDP), and energy efficiency, we explore the increase in energy-use efficiency and the decrease in energy-use intensity required to achieve sustainable environmental impact levels.
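The balancing exercise can be caricatured with a Kaya-style identity: if impact per unit of energy is fixed, holding total impact at its 1977 level requires energy efficiency to improve by the same factor by which population times GDP per capita grows. The growth rates below are placeholders, not the paper's data, but they reproduce the qualitative conclusion that the required gain exceeds 100 %.

```python
# Impact = Population * (GDP per capita) * (Energy / GDP) * (Impact / Energy)
pop_growth, gdp_pc_growth, years = 0.01, 0.02, 53   # 1977 -> 2030, assumed rates

activity = (1 + pop_growth) ** years * (1 + gdp_pc_growth) ** years
# Holding impact constant forces energy intensity (Energy/GDP) down by the
# same factor, i.e. an equivalent energy-efficiency improvement.
print(f"activity grows {activity:.1f}x; "
      f"required efficiency gain: {activity - 1:.0%}")
```

With these placeholder rates the required improvement is several hundred percent, which illustrates why the paper concludes that efficiency alone cannot close the gap.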
Single-case synthesis tools I: Comparing tools to evaluate SCD quality and rigor.
Zimmerman, Kathleen N; Ledford, Jennifer R; Severini, Katherine E; Pustejovsky, James E; Barton, Erin E; Lloyd, Blair P
2018-03-03
Tools for evaluating the quality and rigor of single case research designs (SCD) are often used when conducting SCD syntheses. Preferred components include evaluations of design features related to the internal validity of SCD to obtain quality and/or rigor ratings. Three tools for evaluating the quality and rigor of SCD (Council for Exceptional Children, What Works Clearinghouse, and Single-Case Analysis and Design Framework) were compared to determine if conclusions regarding the effectiveness of antecedent sensory-based interventions for young children changed based on choice of quality evaluation tool. Evaluation of SCD quality differed across tools, suggesting selection of quality evaluation tools impacts evaluation findings. Suggestions for selecting an appropriate quality and rigor assessment tool are provided and across-tool conclusions are drawn regarding the quality and rigor of studies. Finally, authors provide guidance for using quality evaluations in conjunction with outcome analyses when conducting syntheses of interventions evaluated in the context of SCD. Copyright © 2018 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Ahmadi, A.; Meyer, M.; Rouzineau, D.; Prevost, M.; Alix, P.; Laloue, N.
2010-01-01
This paper gives the first step in the development of a rigorous multicomponent reactive separation model. Such a model is essential for the optimization of acid gas removal plants (CO2 capture, gas treating, etc.) in terms of size and energy consumption, since chemical solvents are conventionally used. Firstly, two main modelling approaches are presented: the equilibrium-based and the rate-based approaches. Secondly, an extended rate-based model with a rigorous modelling methodology for diffusion-reaction phenomena is proposed. Film theory and the generalized Maxwell-Stefan equations are used to characterize multicomponent interactions. The complete chain of chemical reactions is taken into account. The reactions can be kinetically controlled or at chemical equilibrium, and they are considered in both the liquid film and the liquid bulk. Thirdly, the method of numerical resolution is described. Coupling the generalized Maxwell-Stefan equations with chemical equilibrium equations leads to a highly non-linear differential-algebraic equation (DAE) system of index 3. The set of equations is discretized with finite differences, as its direct integration by the Gear method is complex. The resulting algebraic system is solved by the Newton-Raphson method. Finally, the present model and the associated methods of numerical resolution are validated for the example of esterification of methanol. This archetypal non-electrolytic system permits an interesting analysis of the impact of reaction on mass transfer, especially near the phase interface. The numerical resolution of the model by the Newton-Raphson method gives good results in terms of calculation time and convergence. The simulations show that the impact on mass transfer of reactions at chemical equilibrium and of kinetically controlled reactions with fast kinetics is relatively similar. Moreover, Fick's law is less well adapted for multicomponent mixtures, where anomalies such as counter-diffusion can occur.
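The Newton-Raphson step at the heart of the numerical resolution can be sketched on a toy two-equation residual standing in for the discretized Maxwell-Stefan and equilibrium equations; the system and initial guess are illustrative assumptions:

```python
import numpy as np

# Minimal Newton-Raphson sketch: a toy 2-equation system (mass balance plus
# an equilibrium-type relation) stands in for the paper's much larger
# discretized Maxwell-Stefan + chemical-equilibrium system.

def residual(x):
    return np.array([x[0] + x[1] - 1.0,    # hypothetical mass balance
                     x[0] * x[1] - 0.21])  # hypothetical equilibrium relation

def jacobian(x):
    return np.array([[1.0, 1.0],
                     [x[1], x[0]]])

x = np.array([0.8, 0.1])  # initial guess
for _ in range(50):
    f = residual(x)
    if np.linalg.norm(f) < 1e-12:
        break
    x = x - np.linalg.solve(jacobian(x), f)  # Newton update
print(x)  # converges to (0.7, 0.3), a root of the toy system
```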
Studies on the estimation of the postmortem interval. 3. Rigor mortis (author's transl).
Suzutani, T; Ishibashi, H; Takatori, T
1978-11-01
The authors have devised a method for classifying rigor mortis into 10 types based on its appearance and strength in various parts of a cadaver. By applying the method to the findings of 436 cadavers which were subjected to medico-legal autopsies in our laboratory during the last 10 years, it has been demonstrated that the classifying method is effective for analyzing the phenomenon of onset, persistence and disappearance of rigor mortis statistically. The investigation of the relationship between each type of rigor mortis and the postmortem interval has demonstrated that rigor mortis may be utilized as a basis for estimating the postmortem interval but the values have greater deviation than those described in current textbooks.
Matrix approach to consistency of the additive efficient normalization of semivalues
Xu, G.; Driessen, Theo; Sun, H.; Sun, H.
2007-01-01
In fact, the Shapley value is the unique efficient semivalue. This motivated Ruiz et al. to introduce the additive efficient normalization for semivalues. In this paper, by a matrix approach we derive the relationship between the additive efficient normalization of semivalues and the Shapley value. Based on the
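A minimal sketch of the construction discussed: a semivalue computed from coalition-size weights and its additive efficient normalization, on an invented three-player game with Banzhaf-type weights assumed for illustration:

```python
from itertools import combinations

# A semivalue uses weights p[s] on coalition sizes s = |S| (S not containing i):
#   psi_i = sum_S p[|S|] * (v(S + {i}) - v(S))
# Its additive efficient normalization redistributes the efficiency gap equally:
#   phi_i = psi_i + (v(N) - sum_j psi_j) / n

n = 3
players = range(n)
v = {(): 0, (0,): 1, (1,): 2, (2,): 3,
     (0, 1): 4, (0, 2): 5, (1, 2): 6, (0, 1, 2): 10}  # invented game

p = [0.25, 0.25, 0.25]  # Banzhaf-type weights p[s] = 1 / 2^(n-1)

def semivalue(i):
    others = [j for j in players if j != i]
    total = 0.0
    for s in range(n):
        for S in combinations(others, s):
            total += p[s] * (v[tuple(sorted(S + (i,)))] - v[S])
    return total

psi = [semivalue(i) for i in players]
gap = v[(0, 1, 2)] - sum(psi)
phi = [x + gap / n for x in psi]
print(psi, phi, sum(phi))  # phi is efficient: it sums to v(N) = 10
```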
Rigor mortis in an unusual position: Forensic considerations.
D'Souza, Deepak H; Harish, S; Rajesh, M; Kiran, J
2011-07-01
We report a case in which the dead body was found with rigor mortis in an unusual position. The body was lying on its back with limbs raised, defying gravity. The direction of the salivary stains on the face also defied gravity. We opined that the scene of occurrence of the crime was unlikely to be the final place where the body was found. The clues pointed to a homicidal offence and an attempt to destroy the evidence. The forensic use of 'rigor mortis in an unusual position' lies in furthering the investigation and in the scientific confirmation of two facts: that the scene of death (occurrence) is different from the scene of disposal of the dead body, and that there was a time gap between the two places.
The quantile regression approach to efficiency measurement: insights from Monte Carlo simulations.
Liu, Chunping; Laporte, Audrey; Ferguson, Brian S
2008-09-01
In the health economics literature there is an ongoing debate over approaches used to estimate the efficiency of health systems at various levels, from the level of the individual hospital - or nursing home - up to that of the health system as a whole. The two most widely used approaches to evaluating the efficiency with which various units deliver care are non-parametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA). Productivity researchers tend to have very strong preferences over which methodology to use for efficiency estimation. In this paper, we use Monte Carlo simulation to compare the performance of DEA and SFA in terms of their ability to accurately estimate efficiency. We also evaluate quantile regression as a potential alternative approach. A Cobb-Douglas production function, random error terms and a technical inefficiency term with different distributions are used to calculate the observed output. The results, based on these experiments, suggest that neither DEA nor SFA can be regarded as clearly dominant, and that, depending on the quantile estimated, the quantile regression approach may be a useful addition to the armamentarium of methods for estimating technical efficiency.
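A single Monte Carlo replication in the spirit of the experiments can be sketched as follows; the Cobb-Douglas parameters and the choice of the 0.95 quantile as a frontier proxy are illustrative assumptions:

```python
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

# One Monte Carlo replication: a Cobb-Douglas frontier, half-normal
# inefficiency, and a high-quantile regression used as a frontier estimator.

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1, 10, n)              # single input
u = np.abs(rng.normal(0, 0.3, n))      # technical inefficiency >= 0
v = rng.normal(0, 0.1, n)              # statistical noise
log_y = 1.0 + 0.6 * np.log(x) + v - u  # log Cobb-Douglas output

X = np.column_stack([np.ones(n), np.log(x)])
fit = QuantReg(log_y, X).fit(q=0.95)   # near-frontier quantile
print(fit.params)                      # approx. intercept and elasticity 0.6

# Efficiency estimate per unit: distance below the fitted quantile "frontier"
eff = np.exp(log_y - X @ fit.params)
print(eff.mean())
```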
A new method for deriving rigorous results on ππ scattering
International Nuclear Information System (INIS)
Caprini, I.; Dita, P.
1979-06-01
We develop a new approach to the problem of constraining the ππ scattering amplitudes by means of the axiomatically proved properties of unitarity, analyticity and crossing symmetry. The method is based on the solution of an extremal problem on a convex set of analytic functions and provides a global description of the domain of values taken by any finite number of partial waves at an arbitrary set of unphysical energies, compatible with unitarity, the bounds at complex energies derived from generalized dispersion relations and the crossing integral relations. From this domain we obtain new absolute bounds for the amplitudes as well as rigorous correlations between the values of various partial waves. (author)
A support approach for the conceptual design of energy-efficient cooker hoods
International Nuclear Information System (INIS)
Cicconi, Paolo; Landi, Daniele; Germani, Michele; Russo, Anna Costanza
2017-01-01
Highlights: •An eco-innovation approach to support the design of household appliances. •The research is focused on the energy labelling for kitchen hoods. •A software platform provides tools to configure and optimize new solutions. •A tool can calculate the energy efficiency indexes of a product configuration. -- Abstract: In Europe, kitchen hoods currently come with an energy label showing their energy efficiency class and other information regarding the energy consumption and noise level, as established by the European Energy Labelling Directive. Because of recent regulations, designs of cooker hoods must consider new issues, such as the evaluation of the energy efficiency, analysis of the energy consumption, and product lifecycle impact. Therefore, the development of eco-driven products requires Ecodesign tools to support eco-innovation and related sustainability improvements. The scope of the proposed research is to define a method and an agile and affordable platform tool that can support designers in the early estimation of product energy performance, including the calculation of energy efficiency indexes. The approach also considers the use of genetic algorithm methods to optimize the product configuration in terms of energy efficiency. The research context concerns large and small productions of kitchen hoods. The paper describes the methodological approach within the developed tool. The results show a good correlation between real efficiency values and calculated ones. A validation activity has been described, and a test case shows how to apply the proposed approach for the design of a new efficient product with an A-class Energy Efficiency Index.
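A minimal sketch of the genetic-algorithm step of such an approach; the two normalized design variables and the quadratic energy-efficiency-index surrogate are invented for illustration and are not the platform's actual model:

```python
import numpy as np

# Toy genetic-algorithm loop searching hood configurations for a low
# energy-efficiency index (EEI); lower EEI is better.

rng = np.random.default_rng(1)

def eei(pop):
    fan, duct = pop[:, 0], pop[:, 1]             # normalized design variables
    return (fan - 0.3) ** 2 + (duct - 0.7) ** 2  # invented EEI surrogate

pop = rng.uniform(0, 1, size=(40, 2))
for gen in range(100):
    scores = eei(pop)
    elite = pop[np.argsort(scores)[:20]]                # selection: keep best half
    parents = elite[rng.integers(0, 20, size=(40, 2))]  # random parent pairs
    alpha = rng.uniform(size=(40, 1))
    children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # crossover
    children += rng.normal(0, 0.05, size=children.shape)            # mutation
    pop = np.clip(children, 0, 1)

print(pop[np.argmin(eei(pop))])  # approaches the optimum (0.3, 0.7)
```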
Kim, Hyun-Wook; Hwang, Ko-Eun; Song, Dong-Heon; Kim, Yong-Jae; Ham, Youn-Kyung; Yeo, Eui-Joo; Jeong, Tae-Jun; Choi, Yun-Sang; Kim, Cheon-Jei
2015-01-01
This study was conducted to evaluate the effect of pre-rigor salting level (0-4% NaCl concentration) on physicochemical and textural properties of pre-rigor chicken breast muscles. The pre-rigor chicken breast muscles were de-boned 10 min post-mortem and salted within 25 min post-mortem. An increase in pre-rigor salting level led to the formation of a high ultimate pH of chicken breast muscles at post-mortem 24 h. The addition of a minimum of 2% NaCl significantly improved water holding capacity, cooking loss, protein solubility, and hardness when compared to the non-salted chicken breast muscle (p<0.05). Increasing the pre-rigor salting level caused the inhibition of myofibrillar protein degradation and the acceleration of lipid oxidation. However, the difference in NaCl concentration between 3% and 4% produced no great differences in physicochemical and textural properties due to pre-rigor salting effects (p>0.05). Therefore, our study confirmed the pre-rigor salting effect of chicken breast muscle salted with 2% NaCl when compared to post-rigor muscle salted with an equal NaCl concentration, and suggests that a 2% NaCl concentration is minimally required to ensure a definite pre-rigor salting effect on chicken breast muscle.
Nugraheni, Z.; Budiyono, B.; Slamet, I.
2018-03-01
To reach higher-order thinking skills, students need to master conceptual understanding and strategic competence, the two basic components of higher-order thinking skills (HOTS). RMT is a unique realization of the cognitive conceptual construction approach based on Feuerstein's theory of Mediated Learning Experience (MLE) and Vygotsky's sociocultural theory. This was a quasi-experimental study comparing an experimental class taught with Rigorous Mathematical Thinking (RMT) as the learning method and a control class taught with Direct Learning (DL) as the conventional learning activity. The study examined whether the two learning models had different effects on the conceptual understanding and strategic competence of junior high school students. The data were analyzed using Multivariate Analysis of Variance (MANOVA), which showed a significant difference between the experimental and control classes when mathematics conceptual understanding and strategic competence were considered jointly (Wilks' Λ = 0.84). Further, independent t-tests showed a significant difference between the two classes in both mathematical conceptual understanding and strategic competence. These results indicate that Rigorous Mathematical Thinking (RMT) had a positive impact on mathematics conceptual understanding and strategic competence.
Stochastic Control of Energy Efficient Buildings: A Semidefinite Programming Approach
Energy Technology Data Exchange (ETDEWEB)
Ma, Xiao [ORNL; Dong, Jin [ORNL; Djouadi, Seddik M [ORNL; Nutaro, James J [ORNL; Kuruganti, Teja [ORNL
2015-01-01
The key goal in energy efficient buildings is to reduce the energy consumption of Heating, Ventilation, and Air-Conditioning (HVAC) systems while maintaining a comfortable temperature and humidity in the building. This paper proposes a novel stochastic control approach for achieving joint performance and power control of HVAC. We employ constrained Stochastic Linear Quadratic Control (cSLQC), minimizing a quadratic cost function with a disturbance assumed to be Gaussian. The problem is formulated to minimize the expected cost subject to a linear constraint and a probabilistic constraint. By using cSLQC, the problem is reduced to a semidefinite optimization problem, where the optimal control can be computed efficiently by semidefinite programming (SDP). Simulation results are provided to demonstrate the effectiveness and power efficiency of the proposed control approach.
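A one-step sketch of the chance-constraint tightening idea, using cvxpy; the scalar thermal dynamics, comfort limit, and cost weights are illustrative assumptions, not the paper's building model:

```python
import cvxpy as cp
import numpy as np
from scipy.stats import norm

# Single-step sketch: keep room temperature below a comfort limit with
# probability >= 95% despite a Gaussian disturbance, by tightening the
# constraint on the mean (the standard reduction for Gaussian noise).

a, b = 0.9, 0.5          # scalar thermal dynamics: T_next = a*T + b*u + w
T0, T_max = 26.0, 24.0   # current temperature and comfort limit (deg C)
sigma_w = 0.2            # std dev of the Gaussian disturbance w

u = cp.Variable()
T_next_mean = a * T0 + b * u
# Pr(T_next <= T_max) >= 0.95  <=>  mean <= T_max - z_0.95 * sigma_w
tightened_limit = T_max - norm.ppf(0.95) * sigma_w

cost = cp.square(T_next_mean - 22.0) + 0.1 * cp.square(u)  # comfort + energy
prob = cp.Problem(cp.Minimize(cost),
                  [T_next_mean <= tightened_limit, u >= -5, u <= 5])
prob.solve()
print(u.value, T_next_mean.value)  # control effort and resulting mean temperature
```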
Directory of Open Access Journals (Sweden)
Z. Halushka
2015-03-01
Full Text Available This article compares scientific approaches to understanding the economic and social efficiency of market income distribution. Drawing on multidisciplinary approaches, it defines the essence of the concepts of fairness and efficiency in distribution; explores approaches to combining efficiency and equity used in income distribution policy at different levels of management; and outlines the possible social and economic consequences of ineffective regulation of incomes in today's economy. The analysis compares four concepts of justice that are considered socially efficient: the utilitarian, formulated by J. Bentham; the egalitarian, which provides for equal distribution; the market (liberal) approach, polar to the egalitarian; and the Rawlsian, which treats justice as fairness. Based on a generalization of existing approaches, a method of estimating social justice in distribution and the possibility of its application are analyzed. The structure of the article includes the following sections: 1. Views on efficiency and equity in the distribution of resources and income; 2. Classical and modern approaches to combining efficiency and equity in distribution; 3. Conflicts in combining the principles of fairness and efficiency in income distribution policy. The authors also note that uneven income distribution is an objective reality, and the question is how to prevent dangerous levels of this unevenness. Market income distribution does not guarantee every person an acceptable level of income. The causes of inequality are differences in abilities, mental as well as physical; differences in property ownership and educational level; and group factors associated with luck and chance. This is a definite social injustice of the market. The state, taking a significant share of responsibility for maintaining the basic human right to a dignified life, organizes redistribution.
Student’s rigorous mathematical thinking based on cognitive style
Fitriyani, H.; Khasanah, U.
2017-12-01
The purpose of this research was to determine the rigorous mathematical thinking (RMT) of mathematics education students in solving math problems in terms of reflective and impulsive cognitive styles. The research used a descriptive qualitative approach. Subjects were 4 students of reflective and impulsive cognitive styles, comprising one male and one female subject for each style. Data collection techniques used a problem-solving test and interviews. Analysis of the research data used the Miles and Huberman model: data reduction, data presentation, and conclusion drawing. The results showed that the impulsive male subject used all three levels of the cognitive functions required for RMT (qualitative thinking, quantitative thinking with precision, and relational thinking), while the other three subjects were only able to use cognitive functions at the qualitative thinking level of RMT. Therefore the impulsive male subject has a better RMT ability than the other three research subjects.
Rigorous Numerics for ill-posed PDEs: Periodic Orbits in the Boussinesq Equation
Castelli, Roberto; Gameiro, Marcio; Lessard, Jean-Philippe
2018-04-01
In this paper, we develop computer-assisted techniques for the analysis of periodic orbits of ill-posed partial differential equations. As a case study, our proposed method is applied to the Boussinesq equation, which has been investigated extensively because of its role in the theory of shallow water waves. The idea is to use the symmetry of the solutions and a Newton-Kantorovich type argument (the radii polynomial approach) to obtain rigorous proofs of existence of the periodic orbits in a weighted ℓ1 Banach space of space-time Fourier coefficients with exponential decay. We present several computer-assisted proofs of the existence of periodic orbits at different parameter values.
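The radii polynomial argument can be illustrated on a scalar equation; here f(x) = x^2 - 2 stands in for the Boussinesq zero-finding problem in sequence space (a minimal sketch, not the paper's setup):

```python
import numpy as np

# Scalar illustration of the radii polynomial / Newton-Kantorovich argument:
# prove a true zero of f near a numerical approximation x_bar.

f = lambda x: x**2 - 2.0
df = lambda x: 2.0 * x

x_bar = 1.41421        # numerical approximate zero
A = 1.0 / df(x_bar)    # approximate inverse of the derivative

Y0 = abs(A * f(x_bar))         # defect bound
Z1 = abs(1.0 - A * df(x_bar))  # = 0 here, since A inverts df exactly
Z2 = abs(A) * 2.0              # |f''| = 2 everywhere

# p(r) = Z2*r^2 - (1 - Z1)*r + Y0 < 0 on (r_min, r_max) implies a unique
# zero of f within distance r of x_bar for any r in that interval.
disc = (1 - Z1) ** 2 - 4 * Z2 * Y0
r_min = ((1 - Z1) - np.sqrt(disc)) / (2 * Z2)
print(f"p(r) < 0 for r just above {r_min:.2e}; "
      f"|sqrt(2) - x_bar| = {abs(np.sqrt(2) - x_bar):.2e}")
```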
Holistic Approach to Data Center Energy Efficiency
Energy Technology Data Exchange (ETDEWEB)
Hammond, Steven W [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-09-18
This presentation discusses NREL's Energy Systems Integration Facility and NREL's holistic design approach to sustainable data centers that led to the world's most energy-efficient data center. It describes Peregrine, a warm-water liquid-cooled supercomputer; waste heat reuse in the data center; demonstrated PUE and ERE; and lessons learned during four years of operation.
ALTERNATIVE APPROACHES TO EFFICIENCY EVALUATION OF HIGHER EDUCATION INSTITUTIONS
Directory of Open Access Journals (Sweden)
Furková, Andrea
2013-09-01
Full Text Available Evaluation of the efficiency and ranking of higher education institutions is a very popular and important topic of public policy. The assessment of the quality of higher education institutions can stimulate positive changes in higher education. In this study we focus on the assessment and ranking of Slovak economic faculties. We apply two different quantitative approaches to the evaluation of Slovak economic faculties: Stochastic Frontier Analysis (SFA), an econometric approach, and PROMETHEE II, a multicriteria decision-making method. Via SFA we examine the faculties' success from a scientific point of view, i.e. their success in the area of publications and citations. The next part of the analysis deals with the assessment of Slovak economics faculties from an overall point of view through the multicriteria decision-making method. In the analysis we employ panel data covering 11 economic faculties observed over a period of 5 years. Our main aim is to point out other quantitative approaches to the efficiency estimation of higher education institutions.
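A minimal sketch of the PROMETHEE II ranking step with the usual preference function; the faculty scores and criteria weights below are invented placeholders:

```python
import numpy as np

# PROMETHEE II with the "usual" preference function: a is preferred to b on a
# criterion whenever its score is strictly higher; preferences are weighted,
# aggregated, and turned into net outranking flows.

scores = np.array([[12, 0.8, 300],    # faculty A: publications, ratio, citations
                   [20, 0.6, 250],    # faculty B
                   [15, 0.9, 100]])   # faculty C
weights = np.array([0.5, 0.2, 0.3])   # all criteria to be maximized

m = len(scores)
pi = np.zeros((m, m))                 # aggregated preference of a over b
for a in range(m):
    for b in range(m):
        if a != b:
            pi[a, b] = weights[scores[a] > scores[b]].sum()

phi_plus = pi.sum(axis=1) / (m - 1)   # positive outranking flow
phi_minus = pi.sum(axis=0) / (m - 1)  # negative outranking flow
net_flow = phi_plus - phi_minus
print(np.argsort(-net_flow))          # ranking, best first
```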
Moving beyond Data Transcription: Rigor as Issue in Representation of Digital Literacies
Hagood, Margaret Carmody; Skinner, Emily Neil
2015-01-01
Rigor in qualitative research has been based upon criteria of credibility, dependability, confirmability, and transferability. Drawing upon articles published during our editorship of the "Journal of Adolescent & Adult Literacy," we illustrate how the use of digital data in research study reporting may enhance these areas of rigor,…
Increased scientific rigor will improve reliability of research and effectiveness of management
Sells, Sarah N.; Bassing, Sarah B.; Barker, Kristin J.; Forshee, Shannon C.; Keever, Allison; Goerz, James W.; Mitchell, Michael S.
2018-01-01
Rigorous science that produces reliable knowledge is critical to wildlife management because it increases accurate understanding of the natural world and informs management decisions effectively. Application of a rigorous scientific method based on hypothesis testing minimizes unreliable knowledge produced by research. To evaluate the prevalence of scientific rigor in wildlife research, we examined 24 issues of the Journal of Wildlife Management from August 2013 through July 2016. We found 43.9% of studies did not state or imply a priori hypotheses, which are necessary to produce reliable knowledge. We posit that this is due, at least in part, to a lack of common understanding of what rigorous science entails, how it produces more reliable knowledge than other forms of interpreting observations, and how research should be designed to maximize inferential strength and usefulness of application. Current primary literature does not provide succinct explanations of the logic behind a rigorous scientific method or readily applicable guidance for employing it, particularly in wildlife biology; we therefore synthesized an overview of the history, philosophy, and logic that define scientific rigor for biological studies. A rigorous scientific method includes 1) generating a research question from theory and prior observations, 2) developing hypotheses (i.e., plausible biological answers to the question), 3) formulating predictions (i.e., facts that must be true if the hypothesis is true), 4) designing and implementing research to collect data potentially consistent with predictions, 5) evaluating whether predictions are consistent with collected data, and 6) drawing inferences based on the evaluation. Explicitly testing a priori hypotheses reduces overall uncertainty by reducing the number of plausible biological explanations to only those that are logically well supported. Such research also draws inferences that are robust to idiosyncratic observations and
Trends in Methodological Rigor in Intervention Research Published in School Psychology Journals
Burns, Matthew K.; Klingbeil, David A.; Ysseldyke, James E.; Petersen-Brown, Shawna
2012-01-01
Methodological rigor in intervention research is important for documenting evidence-based practices and has been a recent focus in legislation, including the No Child Left Behind Act. The current study examined the methodological rigor of intervention research in four school psychology journals since the 1960s. Intervention research has increased…
Efficiency bounds for nonequilibrium heat engines
International Nuclear Information System (INIS)
Mehta, Pankaj; Polkovnikov, Anatoli
2013-01-01
We analyze the efficiency of thermal engines (either quantum or classical) working with a single heat reservoir, such as the atmosphere. The engine first gets an energy intake, which can be done in an arbitrary nonequilibrium way, e.g. combustion of fuel. Then the engine performs the work and returns to the initial state. We distinguish two general classes of engines, where the working body first equilibrates within itself and then performs the work (ergodic engine) or where it performs the work before equilibrating (non-ergodic engine). We show that in both cases the second law of thermodynamics limits their efficiency. For ergodic engines we find a rigorous upper bound for the efficiency, which is strictly smaller than the equivalent Carnot efficiency. That is, the Carnot efficiency can never be achieved in single-reservoir heat engines. For non-ergodic engines the efficiency can be higher and can exceed the equilibrium Carnot bound. By extending the fundamental thermodynamic relation to nonequilibrium processes, we find a rigorous thermodynamic bound for the efficiency of both ergodic and non-ergodic engines and show that it is given by the relative entropy of the nonequilibrium and initial equilibrium distributions. These results suggest a new general strategy for designing more efficient engines. We illustrate our ideas by using simple examples. -- Highlights: ► Derived efficiency bounds for heat engines working with a single reservoir. ► Analyzed both ergodic and non-ergodic engines. ► Showed that non-ergodic engines can be more efficient. ► Extended fundamental thermodynamic relation to arbitrary nonequilibrium processes
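The relative-entropy bound can be evaluated numerically for a toy two-level system; the distributions below are arbitrary illustrations, not the paper's examples:

```python
import numpy as np

# The paper's bound ties the maximal extractable work to the relative entropy
# D(p||q) between the nonequilibrium distribution p and the initial
# equilibrium distribution q.

def rel_entropy(p, q):
    return float(np.sum(p * np.log(p / q)))

q = np.array([0.7, 0.3])  # equilibrium occupations of a two-level system
p = np.array([0.4, 0.6])  # distribution right after the energy intake

kT = 1.0                  # units with k_B * T = 1
print("D(p||q) =", rel_entropy(p, q))
print("work bound ~ kT * D(p||q) =", kT * rel_entropy(p, q))
```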
US residential energy demand and energy efficiency: A stochastic demand frontier approach
International Nuclear Information System (INIS)
Filippini, Massimo; Hunt, Lester C.
2012-01-01
This paper estimates a US frontier residential aggregate energy demand function using panel data for 48 ‘states’ over the period 1995 to 2007 using stochastic frontier analysis (SFA). Utilizing an econometric energy demand model, the (in)efficiency of each state is modeled and it is argued that this represents a measure of the inefficient use of residential energy in each state (i.e. ‘waste energy’). This underlying efficiency for the US is therefore observed for each state as well as the relative efficiency across the states. Moreover, the analysis suggests that energy intensity is not necessarily a good indicator of energy efficiency, whereas by controlling for a range of economic and other factors, the measure of energy efficiency obtained via this approach is. This is a novel approach to model residential energy demand and efficiency and it is arguably particularly relevant given current US energy policy discussions related to energy efficiency.
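A minimal sketch of the normal/half-normal frontier likelihood underlying SFA, fitted to synthetic data by maximum likelihood; the data-generating parameters are assumptions for illustration, not the paper's panel estimates:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Normal/half-normal stochastic frontier: for a demand/cost-type frontier the
# composed error is eps = v + u with noise v ~ N(0, sv^2) and one-sided
# inefficiency u ~ |N(0, su^2)| ("waste energy" in the paper's reading).

rng = np.random.default_rng(2)
n = 300
x = rng.normal(size=n)
u = np.abs(rng.normal(0, 0.4, n))  # inefficiency
e = rng.normal(0, 0.2, n)          # noise
y = 1.0 + 0.8 * x + e + u          # observed use lies above the frontier

def negloglik(theta):
    b0, b1, log_sv, log_su = theta
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma, lam = np.hypot(sv, su), su / sv
    eps = y - b0 - b1 * x
    dens = (2 / sigma) * norm.pdf(eps / sigma) * norm.cdf(eps * lam / sigma)
    return -np.sum(np.log(dens + 1e-300))

res = minimize(negloglik, x0=[0, 0, np.log(0.3), np.log(0.3)], method="Nelder-Mead")
print(res.x[:2], np.exp(res.x[2:]))  # slope ~0.8, sv ~0.2, su ~0.4
```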
Ehret, Gerd; Bodermann, Bernd; Woehler, Martin
2007-06-01
Optical microscopy is an important instrument for the dimensional characterisation or calibration of micro- and nanostructures, e.g. chrome structures on photomasks. In comparison to scanning electron microscopy (possible contamination of the sample) and atomic force microscopy (slow, risk of damage), optical microscopy is a fast and non-destructive metrology method. The precise quantitative determination of the linewidth from the microscope image is, however, only possible with knowledge of the geometry of the structures and its consideration in the optical modelling. We compared two different rigorous model approaches, the Rigorous Coupled Wave Analysis (RCWA) and the Finite Elements Method (FEM), for the modelling of structures with different edge angles, linewidths, line-to-space ratios and polarisations. The RCWA method can treat inclined edge profiles only by a staircase approximation, leading to increased modelling errors. Even today's sophisticated rigorous methods still show problems with TM-polarisation; therefore both rigorous methods are compared in terms of their convergence for TE- and TM-polarisation. Beyond that, the influence of typical illumination wavelengths (365 nm, 248 nm and 193 nm) on the microscope images and their contribution to the measurement uncertainty budget is discussed.
Using Project Complexity Determinations to Establish Required Levels of Project Rigor
Energy Technology Data Exchange (ETDEWEB)
Andrews, Thomas D.
2015-10-01
This presentation discusses the project complexity determination process that was developed by National Security Technologies, LLC, for the U.S. Department of Energy, National Nuclear Security Administration Nevada Field Office for implementation at the Nevada National Security Site (NNSS). The complexity determination process was developed to address the diversity of NNSS project types, sizes, and complexity; to fill the need for a single procedure with provision for tailoring the level of rigor to the project type, size, and complexity; to provide consistent, repeatable, effective application of project management processes across the enterprise; and to achieve higher levels of efficiency in project delivery. These needs are illustrated by the wide diversity of NNSS projects: Defense Experimentation, Global Security, weapons tests, military training areas, sensor development and testing, training in realistic environments, intelligence community support, environmental restoration/waste management, and disposal of radioactive waste, among others.
Memory sparing, fast scattering formalism for rigorous diffraction modeling
Iff, W.; Kämpfe, T.; Jourlin, Y.; Tishchenko, A. V.
2017-07-01
The basics and algorithmic steps of a novel scattering formalism suited for memory sparing and fast electromagnetic calculations are presented. The formalism, called ‘S-vector algorithm’ (by analogy with the known scattering-matrix algorithm), allows the calculation of the collective scattering spectra of individual layered micro-structured scattering objects. A rigorous method of linear complexity is applied to model the scattering at individual layers; here the generalized source method (GSM) resorting to Fourier harmonics as basis functions is used as one possible method of linear complexity. The concatenation of the individual scattering events can be achieved sequentially or in parallel, both having pros and cons. The present development will largely concentrate on a consecutive approach based on the multiple reflection series. The latter will be reformulated into an implicit formalism which will be associated with an iterative solver, resulting in improved convergence. The examples will first refer to 1D grating diffraction for the sake of simplicity and intelligibility, with a final 2D application example.
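The reformulated multiple-reflection series can be sketched as a fixed-point iteration x = b + Rx, which is the implicit equation an iterative solver converges to; the random "reflection" operator below is a stand-in for the actual layer couplings:

```python
import numpy as np

# Instead of summing the multiple-reflection series S = sum_k R^k b term by
# term, solve the implicit equation x = b + R x iteratively; both have the
# same fixed point when the spectral radius of R is below 1.

rng = np.random.default_rng(3)
R = 0.4 * rng.normal(size=(50, 50)) / np.sqrt(50)  # spectral radius < 1
b = rng.normal(size=50)                            # incident-field coefficients

x = b.copy()
for k in range(200):
    x_new = b + R @ x                # one multiple-reflection sweep
    if np.linalg.norm(x_new - x) < 1e-12:
        break
    x = x_new

# Agrees with the direct solve of (I - R) x = b:
print(k, np.linalg.norm(x - np.linalg.solve(np.eye(50) - R, b)))
```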
An Efficient Approach for Solving Mesh Optimization Problems Using Newton’s Method
Directory of Open Access Journals (Sweden)
Jibum Kim
2014-01-01
Full Text Available We present an efficient approach for solving various mesh optimization problems. Our approach is based on Newton's method, which uses both first-order (gradient) and second-order (Hessian) derivatives of the nonlinear objective function. The volume and surface mesh optimization algorithms are developed such that mesh validity and surface constraints are satisfied. We also propose several Hessian modification methods for use when the Hessian matrix is not positive definite. We demonstrate our approach by comparing our method with nonlinear conjugate gradient and steepest descent methods in terms of both efficiency and mesh quality.
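A minimal sketch of Newton's method with a Hessian modification of the kind described (adding a multiple of the identity until a Cholesky factorization succeeds); the toy objective stands in for a mesh-quality function:

```python
import numpy as np

# Newton's method with Hessian modification on f = (x0^2 - 1)^2 + x1^2,
# a toy nonconvex objective whose Hessian is indefinite near x0 = 0.

def grad(x):
    return np.array([4 * x[0] ** 3 - 4 * x[0], 2 * x[1]])

def hess(x):
    return np.array([[12 * x[0] ** 2 - 4, 0.0], [0.0, 2.0]])

def modified_newton_step(x, tau0=1e-3):
    H, tau = hess(x), 0.0
    while True:
        try:
            L = np.linalg.cholesky(H + tau * np.eye(2))  # positive-definite test
            break
        except np.linalg.LinAlgError:
            tau = max(2 * tau, tau0)                     # inflate the diagonal
    y = np.linalg.solve(L, -grad(x))                     # two triangular solves
    return x + np.linalg.solve(L.T, y)

x = np.array([0.1, 1.0])  # starts where the true Hessian is indefinite
for _ in range(30):
    x = modified_newton_step(x)
print(x)  # converges to a minimizer (+-1, 0)
```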
An efficient and extensible approach for compressing phylogenetic trees
Matthews, Suzanne J; Williams, Tiffani L
2011-01-01
Background: Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend
Systems approach used in the Gas Centrifuge Enrichment Plant
International Nuclear Information System (INIS)
Rooks, W.A. Jr.
1982-01-01
A requirement exists for effective and efficient transfer of technical knowledge from the design engineering team to the production work force. Performance-Based Training (PBT) is a systematic approach to the design, development, and implementation of technical training. This approach has been successfully used by the US Armed Forces, industry, and other organizations. The advantages of the PBT approach are: cost-effectiveness (lowest life-cycle training cost), learning effectiveness, reduced implementation time, and ease of administration. The PBT process comprises five distinctive and rigorous phases: Analysis of Job Performance, Design of Instructional Strategy, Development of Training Materials and Instructional Media, Validation of Materials and Media, and Implementation of the Instructional Program. Examples from the Gas Centrifuge Enrichment Plant (GCEP) are used to illustrate the application of PBT
Hughes, Brianna H; Greenberg, Neil J; Yang, Tom C; Skonberg, Denise I
2015-01-01
High-pressure processing (HPP) is used to increase meat safety and shelf-life, with conflicting quality effects depending on rigor status during HPP. In the seafood industry, HPP is used to shuck and pasteurize oysters, but its use on abalones has only been minimally evaluated and the effect of rigor status during HPP on abalone quality has not been reported. Farm-raised abalones (Haliotis rufescens) were divided into 12 HPP treatments and 1 unprocessed control treatment. Treatments were processed pre-rigor or post-rigor at 2 pressures (100 and 300 MPa) and 3 processing times (1, 3, and 5 min). The control was analyzed post-rigor. Uniform plugs were cut from adductor and foot meat for texture profile analysis, shear force, and color analysis. Subsamples were used for scanning electron microscopy of muscle ultrastructure. Texture profile analysis revealed that post-rigor processed abalone meat was significantly (P<0.05) more tender than pre-rigor processed meat, and post-rigor processed foot meat was lighter in color than pre-rigor processed foot meat, suggesting that waiting for rigor to resolve prior to processing abalones may improve consumer perceptions of quality and market value. © 2014 Institute of Food Technologists®
Rigorous spin-spin correlation function of Ising model on a special kind of Sierpinski Carpets
International Nuclear Information System (INIS)
Yang, Z.R.
1993-10-01
We have exactly calculated the rigorous spin-spin correlation function of the Ising model on a special kind of Sierpinski Carpets (SC's) by means of graph expansion and a combinatorial approach, and investigated the asymptotic behaviour in the limit of long distance. The results show there is no long-range correlation between spins at any finite temperature, which indicates the absence of a phase transition and thus finally confirms the conclusion produced by the renormalization group method and other physical arguments. (author). 7 refs, 6 figs
Cypress, Brigitte S
Issues are still raised even now in the 21st century by the persistent concern with achieving rigor in qualitative research. There is also a continuing debate about the analogous terms reliability and validity in naturalistic inquiries as opposed to quantitative investigations. This article presents the concept of rigor in qualitative research using a phenomenological study as an exemplar to further illustrate the process. Elaborating on epistemological and theoretical conceptualizations by Lincoln and Guba, strategies congruent with qualitative perspective for ensuring validity to establish the credibility of the study are described. A synthesis of the historical development of validity criteria evident in the literature during the years is explored. Recommendations are made for use of the term rigor instead of trustworthiness and the reconceptualization and renewed use of the concept of reliability and validity in qualitative research, that strategies for ensuring rigor must be built into the qualitative research process rather than evaluated only after the inquiry, and that qualitative researchers and students alike must be proactive and take responsibility in ensuring the rigor of a research study. The insights garnered here will move novice researchers and doctoral students to a better conceptual grasp of the complexity of reliability and validity and its ramifications for qualitative inquiry.
Some rigorous results concerning spectral theory for ideal MHD
International Nuclear Information System (INIS)
Laurence, P.
1986-01-01
Spectral theory for linear ideal MHD is laid on a firm foundation by defining appropriate function spaces for the operators associated with both the first- and second-order (in time and space) partial differential operators. Thus, it is rigorously established that a self-adjoint extension of F(xi) exists. It is shown that the operator L associated with the first-order formulation satisfies the conditions of the Hille--Yosida theorem. A foundation is laid thereby within which the domains associated with the first- and second-order formulations can be compared. This allows future work in a rigorous setting that will clarify the differences (in the two formulations) between the structure of the generalized eigenspaces corresponding to the marginal point of the spectrum ω = 0
Optimal correction and design parameter search by modern methods of rigorous global optimization
International Nuclear Information System (INIS)
Makino, K.; Berz, M.
2011-01-01
Frequently the design of schemes for correction of aberrations or the determination of possible operating ranges for beamlines and cells in synchrotrons exhibit multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, frequently an abundance of optimization runs are carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners to adjust nonlinear parameters to achieve correction of high order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and by using the underestimators to rigorously iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle
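The underestimate-and-eliminate idea behind rigorous branch-and-bound can be sketched in one dimension with a hand-coded natural interval extension; the objective is an invented example, not an optics model:

```python
# Branch-and-bound global minimization with rigorous-style interval bounds:
# boxes whose interval lower bound exceeds the best known upper bound are
# eliminated, exactly as in the approach described above.

def f(x):
    return (x * x - 2.0) ** 2 + 0.1 * x

def f_interval(lo, hi):
    # natural interval extension of f on [lo, hi]
    sq_lo, sq_hi = min(abs(lo), abs(hi)) ** 2, max(abs(lo), abs(hi)) ** 2
    if lo <= 0.0 <= hi:
        sq_lo = 0.0                          # range of x^2 when 0 is inside
    t_lo, t_hi = sq_lo - 2.0, sq_hi - 2.0    # range of x^2 - 2
    q_lo = 0.0 if t_lo <= 0.0 <= t_hi else min(t_lo * t_lo, t_hi * t_hi)
    q_hi = max(t_lo * t_lo, t_hi * t_hi)     # range of (x^2 - 2)^2
    return q_lo + 0.1 * lo, q_hi + 0.1 * hi

best_upper = f(0.0)                 # any sample point gives a valid upper bound
work, survivors = [(-3.0, 3.0)], []
while work:
    lo, hi = work.pop()
    flo, fhi = f_interval(lo, hi)
    if flo > best_upper:            # rigorously cannot contain the minimum
        continue
    mid = 0.5 * (lo + hi)
    best_upper = min(best_upper, f(mid))
    if hi - lo < 1e-6:
        survivors.append((lo, hi))  # candidate box at target resolution
    else:
        work += [(lo, mid), (mid, hi)]

print(survivors[:2], best_upper)    # boxes isolating the minimizer near -sqrt(2)
```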
Policy Pathways: Joint Public-Private Approaches for Energy Efficiency Finance
Energy Technology Data Exchange (ETDEWEB)
NONE
2012-09-06
This Policy Pathway outlines, through the experiences and lessons learned from country examples, the critical elements of putting in place a public-private partnership to finance energy efficiency. It focuses on three mechanisms - dedicated credit lines, risk guarantees, and energy performance service contracts - and presents the planning, implementing, monitoring, and evaluating phases of implementation. Accelerating and scaling up private investment in energy efficiency is crucial to exploit the potential of energy efficiency. However, many barriers to private investment remain, such as access to capital, uncertainty about future energy prices, transaction costs, perceived higher risk, and lack of knowledge. As part of the IEA 25 Energy Efficiency Policy Recommendations, the IEA recommends that governments support private investment in energy efficiency. A joint public-private approach can use public finance and regulatory policy to support the scaling up of private investment in energy efficiency.
Mitello, Lucia; D'Alba, Fabrizio; Milito, Francesca; Monaco, Cinzia; Orazi, Daniela; Battilana, Daniela; Marucci, Anna Rita; Longo, Angelo; Latina, Roberto
2017-01-01
The management of operating rooms (ORs) is a complex process which requires an effective organizational scheme. In order to achieve a more convenient allocation of resources, a rigorous monitoring plan is needed to ensure operating room performance, and all the necessary actions should be taken to improve the quality of the planning and scheduling procedure. Between April and December 2016, an organizational analysis was carried out on the performance of the A.O. S. Camillo-Forlanini Hospital Operating Block, applying the "process management" approach to OR efficiency. The project involved two different surgical areas of the same operating block: multi-specialist and elective surgery (BOE) and cardio-vascular surgery (CCH). The analysis of the processes was made through the product, patient and safety approach and from different points of view: the "as-is", process and stakeholder perspectives. Descriptive statistics were used to process raw data and Student's t-distribution was used to assess the difference between two means (significance threshold p < 0.05). The Coefficient of Variation (CV) was used to describe the variability among data. The as-is approach allowed us to describe the ORs' inbound activities. For both operating blocks, the most demanding weekly commitments in terms of time turned out to be the inventory management procedures of controlling and stocking medicines, general medical supplies and instruments (130 [SD = ±14] minutes for the BOE and 30 [SD = ±18] for the CCH). The average time spent on preparing the operating room, calculated separately starting from the first surgical case, was 27 minutes (SD = ±17), while for the following surgical procedures preparation time decreased to 15 minutes (SD = ±10), a meaningful difference of 12 minutes. A great variability was registered in the CCH due to the unpredictability of these operations (CV 82%). The stakeholders' perspective revealed a reasonable level of satisfaction among nurses and surgeons (2.9 vs 2.3, respectively
Energy efficiency and the law: A multidisciplinary approach
Directory of Open Access Journals (Sweden)
Willemien du Plessis
2015-01-01
Full Text Available South Africa is an energy-intensive country. The inefficient use of, mostly, coal-generated energy is the cause of South Africa's per capita contribution to greenhouse gas emissions, pollution and environmental degradation and negative health impacts. The inefficient use of the country's energy also amounts to the injudicious use of natural resources. Improvements in energy efficiency are an important strategy to stabilise the country's energy crisis. Government responded to this challenge by introducing measures such as policies and legislation to change energy consumption patterns by, amongst others, incentivising the transition to improved energy efficiencies. A central tenet underpinning this review is that the law and energy nexus requires a multidisciplinary approach as well as a multi-pronged adoption of diverse policy instruments to effectively transform the country's energy use patterns. Numerous, innovative instruments are introduced by relevant legislation to encourage the transformation of energy generation and consumption patterns of South Africans. One such innovative instrument is the ISO 50001 energy management standard. It is a voluntary instrument, to plan for, measure and verify energy-efficiency improvements. These improvements may also trigger tax concessions. In this paper, the nature and extent of the various policy instruments and legislation that relate to energy efficiency are explored, while the interactions between the law and the voluntary ISO 50001 standard and between the law and the other academic disciplines are highlighted. The introduction of energy-efficiency measures into law requires a multidisciplinary approach, as lawyers may be challenged to address the scientific and technical elements that characterise these legal measures and instruments. Inputs by several other disciplines such as engineering, mathematics or statistics, accounting, environmental management and auditing may be needed. Law is often
Critical Analysis of Strategies for Determining Rigor in Qualitative Inquiry.
Morse, Janice M
2015-09-01
Criteria for determining the trustworthiness of qualitative research were introduced by Guba and Lincoln in the 1980s when they replaced terminology for achieving rigor, reliability, validity, and generalizability with dependability, credibility, and transferability. Strategies for achieving trustworthiness were also introduced. This landmark contribution to qualitative research remains in use today, with only minor modifications in format. Despite the significance of this contribution over the past four decades, the strategies recommended to achieve trustworthiness have not been critically examined. Recommendations for where, why, and how to use these strategies have not been developed, and how well they achieve their intended goal has not been examined. We do not know, for example, what impact these strategies have on the completed research. In this article, I critique these strategies. I recommend that qualitative researchers return to the terminology of social sciences, using rigor, reliability, validity, and generalizability. I then make recommendations for the appropriate use of the strategies recommended to achieve rigor: prolonged engagement, persistent observation, and thick, rich description; inter-rater reliability, negative case analysis; peer review or debriefing; clarifying researcher bias; member checking; external audits; and triangulation. © The Author(s) 2015.
Cell sorting using efficient light shaping approaches
DEFF Research Database (Denmark)
Banas, Andrew; Palima, Darwin; Villangca, Mark Jayson
2016-01-01
The approach is gentler, less invasive and more economical compared to conventional FACS systems. As cells are less responsive to plastic or glass beads commonly used in the optical manipulation literature, and since laser safety would be an issue in clinical use, we develop efficient approaches in utilizing lasers and light modulation devices. The Generalized Phase Contrast (GPC) method, which can be used for efficiently illuminating spatial light modulators or creating well-defined contiguous optical traps, is supplemented by diffractive techniques capable of integrating the available light and creating 2D or 3D beam distributions aimed at the positions of the detected cells. Furthermore, the beam shaping freedom provided by GPC can allow optimizations in the beam's propagation and its interaction with the catapulted cells.
An Efficient PageRank Approach for Urban Traffic Optimization
Directory of Open Access Journals (Sweden)
Florin Pop
2012-01-01
Full Text Available This paper proposes an approach to determine optimal decisions for each traffic light, based on the solution given by Larry Page for page ranking in the Web environment (Page et al., 1999). Our approach is similar to the work presented by Sheng-Chung et al. (2009) and Yousef et al. (2010). We consider that the traffic lights are controlled by servers, and a score for each road is computed based on an efficient PageRank approach and used in a cost function to determine optimal decisions. We demonstrate that the cumulative contribution of each car in the traffic respects the main constraint of the PageRank approach, preserving all the properties of the matrix considered in our model.
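A minimal PageRank power-iteration sketch over an invented four-road adjacency; the damping factor 0.85 follows Page et al., everything else is an illustrative assumption:

```python
import numpy as np

# Power-iteration PageRank over a small road graph; A[i, j] = 1 when road i
# feeds traffic into road j. The resulting scores would enter a cost function
# of the kind described above.

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)

P = A / A.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
d = 0.85                              # damping factor from Page et al.
n = len(A)

rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - d) / n + d * (P.T @ rank)

print(rank / rank.sum())              # normalized road scores
```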
"Snow White" Coating Protects SpaceX Dragon's Trunk Against Rigors of Space
McMahan, Tracy
2013-01-01
He described it as "snow white." But NASA astronaut Don Pettit was not referring to the popular children's fairy tale. Rather, he was talking about the white coating of the Space Exploration Technologies Corp. (SpaceX) Dragon spacecraft that reflected from the International Space Station's light. As it approached the station for the first time in May 2012, the Dragon's trunk might have been described as the "fairest of them all," for its pristine coating, allowing Pettit to clearly see to maneuver the robotic arm to grab the Dragon for a successful nighttime berthing. This protective thermal control coating, developed by Alion Science and Technology Corp., based in McLean, Va., made its bright appearance again with the March 1 launch of SpaceX's second commercial resupply mission. Named Z-93C55, the coating was applied to the cargo portion of the Dragon to protect it from the rigors of space. "For decades, Alion has produced coatings to protect against the rigors of space," said Michael Kenny, senior chemist with Alion. "As space missions evolved, there was a growing need to dissipate electrical charges that build up on the exteriors of spacecraft, or there could be damage to the spacecraft's electronics. Alion's research led us to develop materials that would meet this goal while also providing thermal controls. The outcome of this research was Alion's proprietary Z-93C55 coating."
A Framework for Comparative Assessments of Energy Efficiency Policy Measures
Energy Technology Data Exchange (ETDEWEB)
Blum, Helcio; Atkinson, Barbara; Lekov, Alex
2011-05-24
When policy makers propose new policies, there is a need to assess the costs and benefits of the proposed policy measures, to compare them to existing and alternative policies, and to rank them according to their effectiveness. In the case of equipment energy efficiency regulations, comparing the effects of a range of alternative policy measures requires evaluating their effects on consumers’ budgets, on national energy consumption and economics, and on the environment. Such an approach should be able to represent in a single framework the particularities of each policy measure and provide comparable results. This report presents an integrated methodological framework to assess prospectively the energy, economic, and environmental impacts of energy efficiency policy measures. The framework builds on the premise that the comparative assessment of energy efficiency policy measures should (a) rely on a common set of primary data and parameters, (b) follow a single functional approach to estimate the energy, economic, and emissions savings resulting from each assessed measure, and (c) present results through a set of comparable indicators. This framework elaborates on models that the U.S. Department of Energy (DOE) has used in support of its rulemakings on mandatory energy efficiency standards. In addition to a rigorous analysis of the impacts of mandatory standards, DOE compares the projected results of alternative policy measures to those projected to be achieved by the standards. The framework extends such an approach to provide a broad, generic methodology, with no geographic or sectoral limitations, that is useful for evaluating any type of equipment energy efficiency market intervention. The report concludes with a demonstration of how to use the framework to compare the impacts estimated for twelve policy measures focusing on increasing the energy efficiency of gas furnaces in the United States.
Estimation of the convergence order of rigorous coupled-wave analysis for OCD metrology
Ma, Yuan; Liu, Shiyuan; Chen, Xiuguo; Zhang, Chuanwei
2011-12-01
In most cases of optical critical dimension (OCD) metrology, when applying rigorous coupled-wave analysis (RCWA) to optical modeling, a high order of Fourier harmonics is usually set up to guarantee the convergence of the final results. However, the total number of floating point operations grows dramatically as the truncation order increases. Therefore, it is critical to choose an appropriate order to obtain high computational efficiency without losing much accuracy in the meantime. In this paper, the convergence order associated with the structural and optical parameters has been estimated through simulation. The results indicate that the convergence order is linear with the period of the sample when fixing the other parameters, both for planar diffraction and conical diffraction. The illuminated wavelength also affects the convergence of a final result. With further investigations concentrated on the ratio of illuminated wavelength to period, it is discovered that the convergence order decreases with the growth of the ratio, and when the ratio is fixed, convergence order jumps slightly, especially in a specific range of wavelength. This characteristic could be applied to estimate the optimum convergence order of given samples to obtain high computational efficiency.
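In practice, an optimum order can be picked by increasing the truncation until the simulated signature stabilizes; a minimal sketch, with a mock convergent series standing in for a real RCWA solver:

```python
import numpy as np

# Pick the smallest truncation order whose simulated signature changes by
# less than a tolerance. `rcwa_signature` is a stand-in for a real RCWA
# computation; here it is a mock series with a convergent tail.

def rcwa_signature(order):
    # mock zeroth-order reflectance with a decaying alternating tail
    return 0.3 + sum((-0.5) ** m / (m * m) for m in range(1, order + 1))

def convergence_order(tol=1e-6, max_order=60):
    prev = rcwa_signature(1)
    for order in range(2, max_order + 1):
        cur = rcwa_signature(order)
        if abs(cur - prev) < tol:
            return order          # smallest order meeting the tolerance
        prev = cur
    return max_order

print(convergence_order())
```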
Demonstration of an efficient cooling approach for SBIRS-Low
Nieczkoski, S. J.; Myers, E. A.
2002-05-01
The Space Based Infrared System-Low (SBIRS-Low) segment is a near-term Air Force program for developing and deploying a constellation of low-earth orbiting observation satellites with gimbaled optics cooled to cryogenic temperatures. The optical system design and requirements present unique challenges that make conventional cooling approaches both complicated and risky. The Cryocooler Interface System (CIS) provides a remote, efficient, and interference-free means of cooling the SBIRS-Low optics. Technology Applications Inc. (TAI), through a two-phase Small Business Innovative Research (SBIR) program with Air Force Research Laboratory (AFRL), has taken the CIS from initial concept feasibility through the design, build, and test of a prototype system. This paper presents the development and demonstration testing of the prototype CIS. Prototype system testing has demonstrated the high efficiency of this cooling approach, making it an attractive option for SBIRS-Low and other sensitive optical and detector systems that require low-impact cryogenic cooling.
Birkeland, S; Akse, L
2010-01-01
Improved slaughtering procedures in the salmon industry have caused a delayed onset of rigor mortis and, thus, a potential for pre-rigor secondary processing. The aim of this study was to investigate the effect of rigor status at time of processing on quality traits (color, texture, sensory, microbiological) in injection salted, cold-smoked Atlantic salmon (Salmo salar). Injection of pre-rigor fillets gave significantly (P<0.05) different results compared to post-rigor processed fillets; post-rigor fillets (1477 ± 38 g) had a significantly (P<0.05) higher fracturability than pre-rigor fillets (1369 ± 71 g). Pre-rigor fillets were significantly (P<0.05) different from post-rigor fillets (37.8 ± 0.8) and had significantly (P<0.05) lower values than post-rigor processed fillets. This study showed that similar quality characteristics can be obtained in cold-smoked products processed either pre- or post-rigor when using suitable injection salting protocols and smoking techniques. © 2010 Institute of Food Technologists®
Krompecher, T
1994-10-21
The development of the intensity of rigor mortis was monitored in nine groups of rats. The measurements were initiated after 2, 4, 5, 6, 8, 12, 15, 24, and 48 h post mortem (p.m.) and lasted 5-9 h, which ideally should correspond to the usual procedure after the discovery of a corpse. The experiments were carried out at an ambient temperature of 24 degrees C. Measurements initiated early after death resulted in curves with a rising portion, a plateau, and a descending slope. Delaying the initial measurement translated into shorter rising portions, and curves initiated 8 h p.m. or later are comprised of a plateau and/or a downward slope only. Three different phases were observed suggesting simple rules that can help estimate the time since death: (1) if an increase in intensity was found, the initial measurements were conducted not later than 5 h p.m.; (2) if only a decrease in intensity was observed, the initial measurements were conducted not earlier than 7 h p.m.; and (3) at 24 h p.m., the resolution is complete, and no further changes in intensity should occur. Our results clearly demonstrate that repeated measurements of the intensity of rigor mortis allow a more accurate estimation of the time since death of the experimental animals than the single measurement method used earlier. A critical review of the literature on the estimation of time since death on the basis of objective measurements of the intensity of rigor mortis is also presented.
DEFF Research Database (Denmark)
Cappeln, Gertrud; Jessen, Flemming
2002-01-01
Variation in glycogen, ATP, and IMP contents within individual cod muscles was studied in ice-stored fish during the progress of rigor mortis. Rigor index was determined before muscle samples for chemical analyses were taken at 16 different positions on the fish. During development of rigor......, the contents of glycogen and ATP decreased differently in relation to rigor index depending on sampling location. Although fish were considered to be in strong rigor according to the rigor index method, parts of the muscle were not in rigor, as high ATP concentrations were found in dorsal and tail muscle....
An efficient statistical-based approach for road traffic congestion monitoring
Abdelhafid, Zeroual; Harrou, Fouzi; Sun, Ying
2017-01-01
In this paper, we propose an effective approach to detect traffic congestion. The detection strategy is based on the combined use of a piecewise switched linear traffic (PWSL) model and an exponentially-weighted moving average (EWMA) chart. The PWSL model describes the traffic flow dynamics; the PWSL residuals are then used as the input to the EWMA chart to detect congestion. Evaluation of the developed approach using data from a portion of the I210-W highway in California showed the efficiency of the PWSL-EWMA approach in detecting traffic congestion.
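To make the residual-monitoring idea concrete, the following Python sketch runs a standard EWMA control chart over model residuals. The smoothing weight, limit width, baseline window, and synthetic residuals are illustrative assumptions; the PWSL traffic model itself is not reproduced, so residuals are supplied directly.

```python
import numpy as np

def ewma_chart(residuals, lam=0.2, L=3.0):
    """Flag alarms by running an EWMA chart over model residuals.

    lam is the smoothing weight and L the control-limit width. In practice
    the residuals would be measured flow minus the traffic model prediction.
    """
    sigma = np.std(residuals[:50])          # baseline noise level (assumption)
    z, alarms = 0.0, []
    for t, r in enumerate(residuals):
        z = lam * r + (1 - lam) * z
        # time-varying control limit of the EWMA statistic
        limit = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        if abs(z) > limit:
            alarms.append(t)
    return alarms

rng = np.random.default_rng(0)
res = rng.normal(0, 1, 200)
res[120:] += 3.0                            # simulated congestion-induced shift
print(ewma_chart(res)[:5])                  # first alarms appear after the shift
```

The EWMA statistic accumulates small persistent shifts, which is why it suits congestion onsets that a raw-threshold test would miss.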
Applying rigorous decision analysis methodology to optimization of a tertiary recovery project
International Nuclear Information System (INIS)
Wackowski, R.K.; Stevens, C.E.; Masoner, L.O.; Attanucci, V.; Larson, J.L.; Aslesen, K.S.
1992-01-01
This paper reports that the intent of this study was to rigorously examine all of the possible expansion, investment, operational, and CO2 purchase/recompression scenarios (over 2500) to yield a strategy that would maximize the net present value of the CO2 project at the Rangely Weber Sand Unit. Traditional methods of project management, which involve analyzing large numbers of single-case economic evaluations, were found to be too cumbersome and inaccurate for an analysis of this scope. The decision analysis methodology utilized a statistical approach which resulted in a range of economic outcomes. Advantages of the decision analysis methodology included: a more organized approach to the classification of decisions and uncertainties; a clear sensitivity method to identify the key uncertainties; an application of probabilistic analysis through the decision tree; and a comprehensive display of the range of possible outcomes for communication to decision makers. This range made it possible to consider the upside and downside potential of the options and to weigh these against the Unit's strategies. Savings in time and manpower required to complete the study were also realized.
Disciplining Bioethics: Towards a Standard of Methodological Rigor in Bioethics Research
Adler, Daniel; Shaul, Randi Zlotnik
2012-01-01
Contemporary bioethics research is often described as multi- or interdisciplinary. Disciplines are characterized, in part, by their methods. Thus, when bioethics research draws on a variety of methods, it crosses disciplinary boundaries. Yet each discipline has its own standard of rigor—so when multiple disciplinary perspectives are considered, what constitutes rigor? This question has received inadequate attention, as there is considerable disagreement regarding the disciplinary status of bioethics. This disagreement has presented five challenges to bioethics research. Addressing them requires consideration of the main types of cross-disciplinary research, and consideration of proposals aiming to ensure rigor in bioethics research. PMID:22686634
Measuring energy efficiency in economics: Shadow value approach
Khademvatani, Asgar
For decades, academic scholars and policy makers have commonly applied a simple average measure, energy intensity, for studying energy efficiency. In contrast, we introduce a distinctive marginal measure called energy shadow value (SV) for modeling energy efficiency, drawing on economic theory. This thesis demonstrates the advantages of energy SV, conceptually and empirically, over the average measure, recognizing marginal technical energy efficiency and unveiling allocative energy efficiency (energy SV to energy price). Using a dual profit function, the study illustrates how treating energy as a quasi-fixed factor (the quasi-fixed approach) offers modeling advantages and is appropriate for developing an explicit model of energy efficiency. We address fallacies and misleading results from the average measure and demonstrate the advantage of energy SV in inter- and intra-country energy efficiency comparisons. Energy efficiency dynamics and the determination of efficient allocation of energy use are shown through factors impacting energy SV: capital, technology, and environmental obligations. To validate the energy SV, we applied a dual restricted cost model using a KLEM dataset for the 35 US sectors spanning 1958 to 2000 and selected a sample of four sectors. Following the empirical results, predicted wedges between energy price and SV growth indicate a misallocation of energy use in the stone, clay and glass (SCG) and communications (Com) sectors, with more evidence in the SCG than the Com sector, showing overshoot in energy use relative to optimal paths and cost increases from sub-optimal energy use. The results show that energy productivity is a measure of technical efficiency and is void of information on the economic efficiency of energy use. Decomposing energy SV reveals that energy, capital and technology played key roles in energy SV increases, helping to consider and analyze policy implications of energy efficiency improvement. Applying the marginal measure, we also
2016-04-01
AU/ACSC/2016, Air Command and Staff College, Air University. Masters of Analytical Tradecraft: Certifying the Standards and Analytic Rigor of ... establishing unit-level certified Masters of Analytic Tradecraft (MAT) analysts to be trained and entrusted to evaluate and rate the standards and ... cues) ideally should meet or exceed effective rigor (based on analytical process). To accomplish this, decision makers should not be left to their ...
Increasing rigor in NMR-based metabolomics through validated and open source tools.
Eghbalnia, Hamid R; Romero, Pedro R; Westler, William M; Baskaran, Kumaran; Ulrich, Eldon L; Markley, John L
2017-02-01
The metabolome, the collection of small molecules associated with an organism, is a growing subject of inquiry, with the data utilized for data-intensive systems biology, disease diagnostics, biomarker discovery, and the broader characterization of small molecules in mixtures. Owing to their close proximity to the functional endpoints that govern an organism's phenotype, metabolites are highly informative about functional states. The field of metabolomics identifies and quantifies endogenous and exogenous metabolites in biological samples. Information acquired from nuclear magnetic resonance (NMR) spectroscopy, mass spectrometry (MS), and the published literature, as processed by statistical approaches, is driving increasingly wider applications of metabolomics. This review focuses on the role of databases and software tools in advancing the rigor, robustness, reproducibility, and validation of metabolomics studies. Copyright © 2016. Published by Elsevier Ltd.
Fast and efficient indexing approach for object recognition
Hefnawy, Alaa; Mashali, Samia A.; Rashwan, Mohsen; Fikri, Magdi
1999-08-01
This paper introduces a fast and efficient indexing approach for both 2D and 3D model-based object recognition in the presence of rotation, translation, and scale variations of objects. The indexing entries are computed after preprocessing the data by Haar wavelet decomposition. The scheme is based on a unified image feature detection approach using Zernike moments. A set of low-level features, e.g., high-precision edges and gray-level corners, is estimated by a set of orthogonal Zernike moments calculated locally around every image point. High-dimensional, highly descriptive indexing entries are then calculated based on the correlation of these local features and employed for fast access to the model database to generate hypotheses. A list of the most likely candidate models is then produced by evaluating the hypotheses. Experimental results are included to demonstrate the effectiveness of the proposed indexing approach.
International Nuclear Information System (INIS)
Bian Yiwen; Yang Feng
2010-01-01
Data envelopment analysis (DEA) has been widely used in energy efficiency and environmental efficiency analysis in recent years. Based on the existing environmental DEA technology, this paper presents several DEA models for estimating the aggregated efficiency of resource use and the environment. These models can evaluate DMUs' energy efficiencies and environmental efficiencies simultaneously. However, the efficiency rankings obtained from these models are not the same, and each model provides some valuable information on DMUs' efficiencies which cannot be ignored. In this situation, it may be hard to choose a specific model in practice. To address this kind of performance evaluation problem, the current paper extends the Shannon-DEA procedure to establish a comprehensive efficiency measure for appraising DMUs' resource and environmental efficiencies. In the proposed approach, a measure for evaluating a model's importance degree is provided, and the target-setting approach for inputs/outputs, by which DMU managers can improve DMUs' energy and environmental efficiencies, is also discussed. We illustrate the proposed approach using a real data set of 30 provinces in China.
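As a concrete illustration of how a single DEA efficiency score is computed, here is a minimal Python sketch of the textbook input-oriented CCR model solved as a linear program. It is not the environment-DEA or Shannon-DEA variant of the paper, and the toy data are invented.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR DEA efficiency of DMU j0.

    X: (m inputs x n DMUs), Y: (s outputs x n DMUs). Decision variables are
    [theta, lambda_1..lambda_n]; theta is the proportional input contraction.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                  # minimise theta
    # inputs:  sum_j lam_j * x_ij - theta * x_i,j0 <= 0
    A_in = np.c_[-X[:, j0], X]
    # outputs: -sum_j lam_j * y_rj <= -y_r,j0 (outputs at least those of j0)
    A_out = np.c_[np.zeros(s), -Y]
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

X = np.array([[2.0, 3.0, 6.0], [3.0, 2.0, 7.0]])  # toy inputs (energy, labour)
Y = np.array([[3.0, 3.0, 4.0]])                   # toy output
print([round(ccr_efficiency(X, Y, j), 3) for j in range(3)])
```

A score of 1 marks a DMU on the efficient frontier; smaller values give the proportional input reduction needed to reach it.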
Energy Technology Data Exchange (ETDEWEB)
Menkveld, M.; Jablonska, B. [ECN Beleidsstudies, Petten (Netherlands)
2013-05-15
Article 5 of the Energy Efficiency Directive (EED) imposes an annual obligation to renovate 3% of the building stock of the central government. After renovation, the buildings must meet the minimum energy performance requirements laid down in Article 4 of the EPBD. The Directive leaves room for an alternative approach that achieves the same savings. The Ministry of Interior Affairs asked ECN to assist with this alternative approach. ECN calculated what savings are achieved with the 3% renovation obligation under the directive, and then examined the possibilities for an alternative approach to achieve the same savings. The obligation concerns buildings owned and used by the central government with a usable floor area greater than 500 m² (from July 2015, greater than 250 m²). The buildings owned by the Rijksgebouwendienst comprise offices of national services, courthouses, buildings of customs and police, and prisons. Of the Defence buildings, only offices and barracks need to comply with the obligation.
Electrocardiogram artifact caused by rigors mimicking narrow complex tachycardia: a case report.
Matthias, Anne Thushara; Indrakumar, Jegarajah
2014-02-04
The electrocardiogram (ECG) is useful in the diagnosis of cardiac and non-cardiac conditions. Rigors due to shivering can cause electrocardiogram artifacts mimicking various cardiac rhythm abnormalities. We describe an 80-year-old Sri Lankan man with an abnormal electrocardiogram mimicking narrow complex tachycardia during the immediate post-operative period. Electrocardiogram changes caused by muscle tremor during rigors could mimic a narrow complex tachycardia. Identification of muscle tremor as a cause of electrocardiogram artifact can avoid unnecessary pharmacological and non-pharmacological intervention to prevent arrhythmias.
An Efficient Context-Aware Privacy Preserving Approach for Smartphones
Directory of Open Access Journals (Sweden)
Lichen Zhang
2017-01-01
With the proliferation of smartphones and smartphone apps, privacy preservation has become an important issue. Existing privacy preservation approaches for smartphones are usually less efficient because they neglect active defense policies and the temporal correlations between contexts related to users. In this paper, by modeling the temporal correlations among contexts, we formalize the privacy preservation problem as an optimization problem and prove its correctness and optimality through theoretical analysis. To further speed up the running time, we transform the original optimization problem into an approximate problem expressed as a linear program. By solving the linear programming problem, an efficient context-aware privacy preserving algorithm (CAPP) is designed, which adopts an active defense policy and decides how to release the current context of a user to maximize the quality of service (QoS) of context-aware apps while preserving privacy. Extensive simulations on a real dataset demonstrate the improved performance of CAPP over other traditional approaches.
Fadıloğlu, Eylem Ezgi; Serdaroğlu, Meltem
2018-01-01
This study was conducted to evaluate the effects of pre- and post-rigor marinade injections on some quality parameters of Longissimus dorsi (LD) muscles. Three marinade formulations were prepared with 2% NaCl, 2% NaCl+0.5 M lactic acid, and 2% NaCl+0.5 M sodium lactate. Marinade uptake, pH, free water, cooking loss, drip loss, and color properties were analyzed. Injection time had a significant effect on the marinade uptake levels of samples. Regardless of marinade formulation, the marinade uptake of pre-rigor samples injected with marinade solutions was higher than that of post-rigor samples. Injection of sodium lactate increased the pH values of samples, whereas lactic acid injection decreased pH. Marinade treatment and storage period had a significant effect on cooking loss. At each evaluation period, the interaction between marinade treatment and injection time had a different effect on free water content. Storage period and marinade application had a significant effect on drip loss values. Drip loss in all samples increased during storage. During all storage days, the lowest CIE L* value was found in pre-rigor samples injected with sodium lactate. Lactic acid injection caused color fade in pre-rigor and post-rigor samples. The interaction between marinade treatment and storage period was statistically significant (p<0.05). At days 0 and 3, the lowest CIE b* values were obtained in pre-rigor samples injected with sodium lactate, and no differences were found among the other samples. At day 6, no significant differences were found in the CIE b* values of any samples. PMID:29805282
Diouf, Boucar; Rioux, Pierre
1999-01-01
Presents the rigor mortis process in brook charr (Salvelinus fontinalis) as a tool for better understanding skeletal muscle metabolism. Describes an activity that demonstrates how rigor mortis is related to the post-mortem decrease of muscular glycogen and ATP, how glycogen degradation produces lactic acid that lowers muscle pH, and how…
Warriss, P D; Brown, S N; Knowles, T G
2003-12-13
The degree of development of rigor mortis in the carcases of slaughter pigs was assessed subjectively on a three-point scale 35 minutes after they were exsanguinated, and related to the levels of cortisol, lactate and creatine kinase in blood collected at exsanguination. Earlier rigor development was associated with higher concentrations of these stress indicators in the blood. This relationship suggests that the mean rigor score, and the frequency distribution of carcases that had or had not entered rigor, could be used as an index of the degree of stress to which the pigs had been subjected.
Approaches to achieve high grain yield and high resource use efficiency in rice
Directory of Open Access Journals (Sweden)
Jianchang YANG
2015-06-01
This article discusses approaches to simultaneously increase grain yield and resource use efficiency in rice. Breeding nitrogen-efficient cultivars without sacrificing rice yield potential, improving grain fill in later-flowering inferior spikelets, and enhancing harvest index are three important approaches to achieving the dual goal of high grain yield and high resource use efficiency. Deeper root distribution and higher leaf photosynthetic N use efficiency at lower N rates could be used as selection criteria to develop N-efficient cultivars. Enhancing sink activity through increasing the sugar-spikelet ratio at heading time, and enhancing the conversion efficiency from sucrose to starch through increasing the ratio of abscisic acid to ethylene in grains during grain fill, could effectively improve grain fill in inferior spikelets. Several practices, such as post-anthesis controlled soil drying, an alternate wetting and moderate soil drying regime during the whole growing season, and non-flooded straw mulching cultivation, could substantially increase grain yield and water use efficiency, mainly via enhanced remobilization of stored carbon from vegetative tissues to grains and improved harvest index. Further research is needed to understand the synergistic interaction between water and N on crop and soil and the mechanisms underlying high resource use efficiency in high-yielding rice.
An efficient Bouc & Wen approach for seismic analysis of masonry tower
Directory of Open Access Journals (Sweden)
Luca Facchini
2014-07-01
The assessment of existing masonry towers under exceptional loads, such as earthquake loads, requires reliable, expeditious, and efficient methods of analysis. These approaches should take into account both the randomness that affects the masonry properties (in some cases also the distribution of the elastic parameters) and, of course, the nonlinear behavior of masonry. Considering the need for simplified but effective methods to assess the seismic response of such structures, the paper proposes an efficient approach for the seismic assessment of masonry towers that treats the material properties as a stochastic field. As a prototype of masonry towers, a cantilever beam is analyzed under the assumption that the first modal shape governs the structural motion. Under this hypothesis, a nonlinear hysteretic Bouc & Wen model is employed to reproduce the system response, which is subsequently used to evaluate the response bounds. The results of the simplified approach are compared with the results of a finite element model to show the effectiveness of the method.
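For readers unfamiliar with the hysteresis model, the following Python sketch integrates a single-degree-of-freedom Bouc & Wen oscillator under a toy ground acceleration. All parameter values are illustrative assumptions, not calibrated to masonry, and the stochastic material field of the paper is not modeled.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single-degree-of-freedom Bouc & Wen hysteretic oscillator under a toy
# pulse-like base excitation; parameters are illustrative only.
m, c, k, alpha = 1.0, 0.05, 1.0, 0.1      # mass, damping, stiffness, yield ratio
A, beta, gamma, n = 1.0, 0.5, 0.5, 1.0    # Bouc & Wen shape parameters

def a_g(t):
    """Toy ground acceleration: a decaying sinusoidal pulse."""
    return 0.4 * np.sin(2 * np.pi * t) * np.exp(-0.2 * t)

def rhs(t, s):
    x, v, z = s                            # displacement, velocity, hysteretic variable
    restoring = alpha * k * x + (1 - alpha) * k * z
    dz = A * v - beta * abs(v) * abs(z) ** (n - 1) * z - gamma * v * abs(z) ** n
    return [v, (-c * v - restoring - m * a_g(t)) / m, dz]

sol = solve_ivp(rhs, (0, 40), [0.0, 0.0, 0.0], max_step=0.01)
print("peak displacement:", np.abs(sol.y[0]).max())
```

The hysteretic variable z carries the memory of past loading, which is what lets the model reproduce the energy dissipation of cyclic masonry response.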
Einstein's Theory A Rigorous Introduction for the Mathematically Untrained
Grøn, Øyvind
2011-01-01
This book provides an introduction to the theory of relativity and the mathematics used in its processes. Three elements of the book make it stand apart from previously published books on the theory of relativity. First, the book starts at a lower mathematical level than standard books with tensor calculus of sufficient maturity to make it possible to give detailed calculations of relativistic predictions of practical experiments. Self-contained introductions are given to, for example, vector calculus, differential calculus and integration. Second, in-between calculations have been included, making it possible for the non-technical reader to follow step-by-step calculations. Third, the conceptual development is gradual and rigorous in order to provide the inexperienced reader with a philosophically satisfying understanding of the theory. Einstein's Theory: A Rigorous Introduction for the Mathematically Untrained aims to provide the reader with a sound conceptual understanding of both the special and genera...
Sonoelasticity to monitor mechanical changes during rigor and ageing.
Ayadi, A; Culioli, J; Abouelkaram, S
2007-06-01
We propose the use of sonoelasticity as a non-destructive method to monitor changes in the resistance of muscle fibres, unaffected by connective tissue. Vibrations were applied at low frequency to induce oscillations in soft tissues, and an ultrasound transducer was used to detect the motions. The experiments were carried out on the M. biceps femoris muscles of three beef cattle. In addition to the sonoelasticity measurements, the changes in meat during rigor and ageing were followed by measurements of both the mechanical resistance of myofibres and pH. The variations of mechanical resistance and pH were compared to those of the sonoelastic variables (velocity and attenuation) at two frequencies. The relationships between pH and velocity or attenuation, and between the velocity or attenuation and the stress at 20% deformation, were highly correlated. We conclude that sonoelasticity is a non-destructive method that can be used to monitor mechanical changes in muscle fibres during rigor mortis and ageing.
International Nuclear Information System (INIS)
Dominick, J.L.; Rasmussen, C.L.
2008-01-01
Several facilities and many projects at LLNL work exclusively with tritium. These operations have the potential to generate large quantities of Low-Level Radioactive Waste (LLW) with the same or similar radiological characteristics. A standardized, documented approach to characterizing these waste materials for disposal as radioactive waste will enhance the ability of the Laboratory to manage them in an efficient and timely manner while ensuring compliance with all applicable regulatory requirements. This standardized characterization approach couples documented process knowledge with analytical verification and is very conservative, overestimating the radioactivity concentration of the waste. The characterization approach documented here is the Normalized Tritium Quantification Approach (NoTQA). This document will serve as a Technical Basis Document that can be referenced in radioactive waste characterization documentation packages such as the Information Gathering Document. In general, radiological characterization of waste consists of both developing an isotopic breakdown (distribution) of the radionuclides contaminating the waste and using an appropriate method to quantify those radionuclides. Characterization approaches require varying degrees of rigor depending upon the radionuclides contaminating the waste and their concentrations relative to regulatory thresholds. Generally, as activity levels in the waste approach a regulatory or disposal facility threshold, the degree of required precision and accuracy, and therefore the level of rigor, increases. In the case of tritium, thresholds of concern for control, contamination, transportation, and waste acceptance are relatively high. Due to the benign nature of tritium and the resulting higher regulatory thresholds, this less rigorous yet conservative characterization approach is appropriate. The scope of this document is to define an appropriate and acceptable
Efficient simulation of tail probabilities of sums of correlated lognormals
DEFF Research Database (Denmark)
Asmussen, Søren; Blanchet, José; Juneja, Sandeep
We consider the problem of efficient estimation of tail probabilities of sums of correlated lognormals via simulation. This problem is motivated by the tail analysis of portfolios of assets driven by correlated Black-Scholes models. We propose two estimators that can be rigorously shown to be efficient...... optimize the scaling parameter of the covariance. The second estimator decomposes the probability of interest into two contributions and takes advantage of the fact that large deviations for a sum of correlated lognormals are (asymptotically) caused by the largest increment. Importance sampling...
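The covariance-scaling idea can be sketched in a few lines of Python: sample from a proposal with inflated covariance and reweight by the density ratio. The scaling factor here is fixed rather than optimised, and the dimension, covariance, and threshold are invented, so this is only a crude cousin of the paper's estimators.

```python
import numpy as np

rng = np.random.default_rng(1)

def tail_prob_is(mu, Sigma, b, s=1.6, N=200_000):
    """Estimate P(sum_i exp(X_i) > b) for X ~ N(mu, Sigma) by importance
    sampling from the inflated proposal N(mu, s^2 Sigma)."""
    d = len(mu)
    L = np.linalg.cholesky(Sigma)
    Sinv = np.linalg.inv(Sigma)
    Z = rng.standard_normal((N, d))
    X = mu + s * Z @ L.T                        # draws from the proposal
    q = np.einsum('ij,jk,ik->i', X - mu, Sinv, X - mu)
    w = s**d * np.exp(-0.5 * q * (1 - 1 / s**2))  # density ratio target/proposal
    hit = np.exp(X).sum(axis=1) > b
    return np.mean(w * hit)

mu = np.zeros(4)
Sigma = 0.25 * np.eye(4) + 0.05                 # mildly correlated lognormals
print(tail_prob_is(mu, Sigma, b=25.0))
```

Inflating the covariance pushes more samples into the tail event, and the weights w correct the resulting bias; choosing s well is exactly the optimisation the abstract alludes to.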
International Nuclear Information System (INIS)
Chen, Xin
2014-01-01
Understanding the roles of the temporal and spatial structures of quantum functional noise in open multilevel quantum molecular systems has attracted considerable theoretical interest. I want to establish a rigorous and general framework for functional quantum noises from the constructive and computational perspectives, i.e., how to generate the random trajectories that reproduce the kernel and path ordering of the influence functional with effective Monte Carlo methods for arbitrary spectral densities. This construction approach aims to unify the existing stochastic models to rigorously describe the temporal and spatial structure of Gaussian quantum noises. In this paper, I review the Euclidean imaginary-time influence functional and propose the stochastic matrix multiplication scheme to calculate reduced equilibrium density matrices (REDM). In addition, I review and discuss the Feynman-Vernon influence functional according to the Gaussian quadratic integral, particularly its imaginary part, which is critical to the rigorous description of quantum detailed balance. As a result, I establish the conditions under which the influence functional can be interpreted as the average of an exponential functional operator over real-valued Gaussian processes for open multilevel quantum systems. I also show the difference between local and nonlocal phonons within this framework. With the stochastic matrix multiplication scheme, I compare the normalized REDM with the Boltzmann equilibrium distribution for open multilevel quantum systems
A Human Systems Integration Approach to Energy Efficiency in Ground Transportation
2015-12-01
was shown to be true in the previous research done for E2O, in which a qualitative ethnographic approach using situational observations was ... used to achieve improved operational capabilities while creating a more effective and efficient workforce. This research was done through numerous interviews with a variety of personnel who use ...
Alternative Approaches to Technical Efficiency Estimation in the Stochastic Frontier Model
Acquah, H. de-Graft; Onumah, E. E.
2014-01-01
Estimating the stochastic frontier model and calculating the technical efficiency of decision making units are of great importance in applied production economics. This paper estimates technical efficiency from the stochastic frontier model using the Jondrow et al. and the Battese and Coelli approaches. In order to compare the alternative methods, simulated data with sample sizes of 60 and 200 are generated from a stochastic frontier model commonly applied to agricultural firms. Simulated data is employed to co...
Luo, Xin; You, Zhuhong; Zhou, Mengchu; Li, Shuai; Leung, Hareton; Xia, Yunni; Zhu, Qingsheng
2015-01-09
The comprehensive mapping of protein-protein interactions (PPIs) is highly desired for gaining deep insights into both fundamental cell biology processes and the pathology of diseases. Finely-set small-scale experiments are not only very expensive but also inefficient for identifying numerous interactomes, despite their high accuracy. High-throughput screening techniques enable efficient identification of PPIs; yet the desire to further extract useful knowledge from these data leads to the problem of binary interactome mapping. Network topology-based approaches prove to be highly efficient in addressing this problem; however, their performance deteriorates significantly on sparse putative PPI networks. Motivated by the success of collaborative filtering (CF)-based approaches to the problem of personalized recommendation on large, sparse rating matrices, this work aims at implementing a highly efficient CF-based approach to binary interactome mapping. To achieve this, we first propose a CF framework for it. Under this framework, we model the given data as an interactome weight matrix, from which the feature vectors of the involved proteins are extracted. With them, we design the rescaled cosine coefficient to model the inter-neighborhood similarity among the involved proteins, to carry out the mapping process. Experimental results on three large, sparse datasets demonstrate that the proposed approach outperforms several sophisticated topology-based approaches significantly.
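The paper's exact rescaled cosine coefficient is not reproduced here. The Python sketch below shows one plausible form of the idea: plain cosine similarity between protein feature vectors, shrunk by the number of commonly observed interactions so that sparsely supported pairs score lower. The shrinkage constant and the toy matrix are assumptions.

```python
import numpy as np

def rescaled_cosine(u, v, n_common, shrink=5.0):
    """Neighbourhood similarity between two protein feature vectors,
    damped for pairs with few commonly observed interactions.

    The shrinkage form n / (n + shrink) is an illustrative assumption,
    not the coefficient defined in the paper.
    """
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    if denom == 0:
        return 0.0
    return (n_common / (n_common + shrink)) * float(u @ v) / denom

# toy interactome rows: 1 = observed interaction, 0 = unknown
W = np.array([[1, 0, 1, 1, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 0, 1]], dtype=float)
common = int(np.sum((W[0] > 0) & (W[1] > 0)))
print(rescaled_cosine(W[0], W[1], common))
```

Damping by co-observation count is what keeps the similarity estimate stable on the sparse putative networks the abstract highlights.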
A rigorous proof for the Landauer-Büttiker formula
DEFF Research Database (Denmark)
Cornean, Horia Decebal; Jensen, Arne; Moldoveanu, V.
Recently, Avron et al. shed new light on the question of quantum transport in mesoscopic samples coupled to particle reservoirs by semi-infinite leads. They rigorously treat the case when the sample undergoes an adiabatic evolution, thus generating a current through the leads, and prove the so-call...
PRO development: rigorous qualitative research as the crucial foundation.
Lasch, Kathryn Eilene; Marquis, Patrick; Vigneux, Marc; Abetz, Linda; Arnould, Benoit; Bayliss, Martha; Crawford, Bruce; Rosa, Kathleen
2010-10-01
Recently published articles have described criteria to assess qualitative research in the health field in general, but very few articles have delineated qualitative methods to be used in the development of Patient-Reported Outcomes (PROs). In fact, how PROs are developed with subject input through focus groups and interviews has been given relatively short shrift in the PRO literature when compared to the plethora of quantitative articles on the psychometric properties of PROs. If documented at all, most PRO validation articles give little for the reader to evaluate the content validity of the measures and the credibility and trustworthiness of the methods used to develop them. Increasingly, however, scientists and authorities want to be assured that PRO items and scales have meaning and relevance to subjects. This article was developed by an international, interdisciplinary group of psychologists, psychometricians, regulatory experts, a physician, and a sociologist. It presents rigorous and appropriate qualitative research methods for developing PROs with content validity. The approach described combines an overarching phenomenological theoretical framework with grounded theory data collection and analysis methods to yield PRO items and scales that have content validity.
2013-05-01
Based on a recent study on cost-efficient alternative bridge approach slab (BAS) designs, Thiagarajan et al. (2010) have recommended three new BAS designs for possible implementation by MoDOT, namely a) a 20-foot cast-in-place slab with sleeper slab (C...
Characterization of rigor mortis of longissimus dorsi and triceps ...
African Journals Online (AJOL)
24 h) of the longissimus dorsi (LD) and triceps brachii (TB) muscles, as well as the shear force (meat tenderness) and colour, were evaluated, aiming at characterizing rigor mortis in the meat during industrial processing. Statistical treatment of the data demonstrated that carcass temperature and pH decreased gradually during ...
Rigorous constraints on the matrix elements of the energy–momentum tensor
Directory of Open Access Journals (Sweden)
Peter Lowdon
2017-11-01
The structure of the matrix elements of the energy–momentum tensor plays an important role in determining the properties of the form factors A(q2), B(q2), and C(q2) which appear in the Lorentz covariant decomposition of the matrix elements. In this paper we apply a rigorous frame-independent distributional-matching approach to the matrix elements of the Poincaré generators in order to derive constraints on these form factors as q→0. In contrast to the literature, we explicitly demonstrate that the vanishing of the anomalous gravitomagnetic moment B(0) and the condition A(0)=1 are independent of one another, and that these constraints are not related to the specific properties or conservation of the individual Poincaré generators themselves, but are in fact a consequence of the physical on-shell requirement of the states in the matrix elements and the manner in which these states transform under Poincaré transformations.
Diffraction-based overlay measurement on dedicated mark using rigorous modeling method
Lu, Hailiang; Wang, Fan; Zhang, Qingyun; Chen, Yonghui; Zhou, Chang
2012-03-01
Diffraction Based Overlay (DBO) has been widely evaluated by numerous authors; results show DBO can provide better performance than Imaging Based Overlay (IBO). However, DBO has its own problems. As is well known, model-based DBO (mDBO) faces challenges of low measurement sensitivity and crosstalk between various structure parameters, which may result in poor accuracy and precision. Meanwhile, the main obstacle encountered by empirical DBO (eDBO) is that several pads must be employed to gain sufficient information on overlay-induced diffraction signature variations, which consumes more wafer space and costs more measuring time. Also, eDBO may suffer from mark profile asymmetry caused by processes. In this paper, we propose an alternative DBO technology that employs a dedicated overlay mark and takes a rigorous modeling approach. This technology needs only two or three pads for each direction, which is economical and time-saving. While reducing the overlay measurement error induced by mark profile asymmetry, this technology is expected to be as accurate and precise as scatterometry technologies.
Forster, B; Ropohl, D; Raule, P
1977-07-05
The manual examination of rigor mortis as currently practiced, with its often subjective evaluation, frequently produces highly incorrect deductions. It is therefore desirable that such inaccuracies be replaced by objective measurement of rigor mortis at the extremities. To that purpose, a method is described that can also be applied in on-the-spot investigations, and a new formula for the determination of rigor mortis indices (FRR) is introduced.
International Nuclear Information System (INIS)
Khatir, Zinedine; Paton, Joe; Thompson, Harvey; Kapur, Nik; Toropov, Vassili
2013-01-01
Highlights: ► A scientific framework for optimising oven operating conditions is presented. ► Experiments measuring the local convective heat transfer coefficient are undertaken. ► An energy efficiency model is developed with experimentally calibrated CFD analysis. ► Designing ovens with optimum heat transfer coefficients reduces energy use. ► Results demonstrate a strong case to design and manufacture energy-optimised ovens. - Abstract: Changing legislation and rising energy costs are bringing the need for efficient baking processes into much sharper focus. High-speed air impingement bread-baking ovens are complex systems using air flow to transfer heat to the product. In this paper, computational fluid dynamics (CFD) is combined with experimental analysis to develop a rigorous scientific framework for the rapid generation of forced convection oven designs. A design parameterisation of a three-dimensional generic oven model is carried out for a wide range of oven sizes and flow conditions to optimise desirable features such as temperature uniformity throughout the oven, energy efficiency and manufacturability. Coupled with the computational model, a series of experiments measuring the local convective heat transfer coefficient (hc) are undertaken. The facility used for the heat transfer experiments is representative of a scaled-down production oven where the air temperature and velocity, as well as important physical constraints such as nozzle dimensions and nozzle-to-surface distance, can be varied. An efficient energy model is developed using a CFD analysis calibrated with experimentally determined inputs. Results from a range of oven designs are presented together with the ensuing energy usage and savings.
Evaluating Efficiencies of Dual AAV Approaches for Retinal Targeting
Directory of Open Access Journals (Sweden)
Livia S. Carvalho
2017-09-01
Retinal gene therapy has come a long way in the last few decades, and the development and improvement of new gene delivery technologies has been exponential. The recent promising results from the first clinical trials for inherited retinal degeneration due to mutations in RPE65 have provided a major breakthrough in the field and have helped cement the use of recombinant adeno-associated viruses (AAV) as the major tool for retinal gene supplementation. One of the key problems of AAV, however, is its limited capacity for packaging genomic information, with a maximum of around 4.8 kb. Previous studies have demonstrated that homologous recombination and/or inverted terminal repeat (ITR) mediated concatemerization of two overlapping AAV vectors can partially overcome the size limitation and help deliver larger transgenes. The aim of this study was to investigate and compare different dual AAV vector approaches in the mouse retina, systematically comparing efficiencies in vitro and in vivo using a unique oversized reporter construct. We show that the hybrid approach, relying on vector genome concatemerization by highly recombinogenic sequences and ITR sequence overlap, offers the best levels of reconstitution both in vitro and in vivo compared to the trans-splicing and overlap strategies. Our data also demonstrate that dose and vector serotype do not affect reconstitution efficiency, but a discrepancy between mRNA and protein expression data suggests a bottleneck affecting translation.
Efficiency of supply chain management. Strategic and operational approach
Directory of Open Access Journals (Sweden)
Grzegorz Lichocik
2013-06-01
Background: One of the most important issues subject to theoretical considerations and empirical studies is the measurement of efficiency of activities in logistics and supply chain management. At the same time, efficiency is a term interpreted in an ambiguous and multi-aspect manner, depending on the subject of a study. The multitude of analytical dimensions of this term means that, apart from economic efficiency as the basic study area, other dimensions perceived as added value by different groups of supply chain participants become more and more important. Methods: The objective of this paper is to attempt to explain the problem of supply chain management efficiency in the context of general theoretical considerations relating to supply chain management. The authors have also highlighted determinants and practical implications of supply chain management efficiency in strategic and operational contexts. The study employs critical analyses of the logistics literature and a free-form interview with top management representatives of a company operating in the TSL sector. Results: We must find a comprehensive approach to supply chain efficiency including all analytical dimensions connected with real goods and services flow. An effective supply chain must be cost-effective (ensuring economic efficiency of the chain), functional (reducing processes, lean, minimising the number of links in the chain to the necessary ones, adapting supply chain participants' internal processes to a common objective based on its efficiency) and must ensure high quality of services (customer-oriented logistics systems). Conclusions: Efficiency of supply chains is not only a task for which a logistics department is responsible; it is a strategic decision taken by the management as regards the method of the company's future operation. Correctly planned and fulfilled logistics tasks may result in improving the performance of a company as well as the whole
Efficient, Differentially Private Point Estimators
Smith, Adam
2008-01-01
Differential privacy is a recent notion of privacy for statistical databases that provides rigorous, meaningful confidentiality guarantees, even in the presence of an attacker with access to arbitrary side information. We show that for a large class of parametric probability models, one can construct a differentially private estimator whose distribution converges to that of the maximum likelihood estimator. In particular, it is efficient and asymptotically unbiased. This result provides (furt...
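To make the notion of a differentially private estimator concrete, here is a standard textbook sketch: the Laplace-mechanism mean of bounded data. It is not the paper's model-based construction (which converges to the maximum likelihood estimator); the bounds, epsilon, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_mean(x, lo, hi, eps):
    """Epsilon-differentially private mean of data assumed to lie in [lo, hi].

    Noise is calibrated to the sensitivity of the mean: changing one record
    can move it by at most (hi - lo) / n.
    """
    x = np.clip(x, lo, hi)                 # enforce the assumed bounds
    sensitivity = (hi - lo) / len(x)
    noise = rng.laplace(scale=sensitivity / eps)
    return x.mean() + noise

data = rng.normal(5.0, 2.0, size=10_000)
print(private_mean(data, lo=0.0, hi=10.0, eps=0.5))
```

With large samples the injected noise shrinks like 1/n, which is the intuition behind asymptotically unbiased private estimation.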
Constellation modulation - an approach to increase spectral efficiency.
Dash, Soumya Sunder; Pythoud, Frederic; Hillerkuss, David; Baeuerle, Benedikt; Josten, Arne; Leuchtmann, Pascal; Leuthold, Juerg
2017-07-10
Constellation modulation (CM) is introduced as a new degree of freedom to increase the spectral efficiency and to further approach the Shannon limit. Constellation modulation is the art of encoding information not only in the symbols within a constellation but also by selecting a constellation from a set of constellations that are switched from time to time. The set of constellations is not limited to sets of partitions from a given constellation but can, e.g., be obtained from an existing constellation by applying geometrical transformations such as rotations, translations, scaling, or even more abstract transformations. The architecture of the transmitter and the receiver allows constellation modulation to be used on top of existing modulations with little penalty on the bit-error ratio (BER) or on the required signal-to-noise ratio (SNR). The spectral bandwidth used by this modulation scheme is identical to that of the original modulation. Simulations demonstrate a particular advantage of the scheme in low-SNR situations. For instance, simulations show that spectral efficiency increases of up to 33% and 20% can be obtained at a BER of 10^-3 and 2×10^-2, respectively, for a regular BPSK modulation format. Applying constellation modulation, we derive a power-efficient 4D-CM-BPSK modulation format that provides a spectral efficiency of 0.7 bit/s/Hz for an SNR of 0.2 dB at a BER of 2×10^-2.
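A toy Python sketch of the encoding side: one extra bit per block selects which of two constellations (a BPSK set or a rotated copy of it) the block's symbols are drawn from. The block length, rotation angle, and framing are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

BLOCK = 8
SETS = [np.array([1, -1]),                           # plain BPSK
        np.array([1, -1]) * np.exp(1j * np.pi / 4)]  # rotated copy

def encode(payload_bits, set_bits):
    """Map BLOCK payload bits per selector bit onto the chosen constellation.

    set_bits carries the extra information: it picks the constellation each
    block is drawn from, without widening the occupied spectrum.
    """
    out = []
    for blk, sel in zip(np.reshape(payload_bits, (-1, BLOCK)), set_bits):
        const = SETS[sel]
        out.extend(const[blk])                       # bit -> constellation point
    return np.array(out)

rng = np.random.default_rng(2)
payload = rng.integers(0, 2, 2 * BLOCK)
selectors = [0, 1]                                   # one extra bit per block
symbols = encode(payload, selectors)
print(symbols[:4], "...", symbols[-4:])
```

The receiver's extra task is to detect which constellation a block came from before demapping the payload bits, which is where the scheme's low-SNR behaviour is decided.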
Sikes, Anita L; Mawson, Raymond; Stark, Janet; Warner, Robyn
2014-11-01
The delivery of a consistent quality product to the consumer is vitally important for the food industry. The aim of this study was to investigate the potential for using high frequency ultrasound applied to pre- and post-rigor beef muscle on the metabolism and subsequent quality. High frequency ultrasound (600 kHz at 48 kPa and 65 kPa acoustic pressure) applied to post-rigor beef striploin steaks resulted in no significant effect on the texture (peak force value) of cooked steaks as measured by a Tenderometer. There was no added benefit of ultrasound treatment above that of the normal ageing process after ageing of the steaks for 7 days at 4 °C. Ultrasound treatment of post-rigor beef steaks resulted in a darkening of fresh steaks, but after ageing for 7 days at 4 °C, the ultrasound-treated steaks were similar in colour to the aged, untreated steaks. High frequency ultrasound (2 MHz at 48 kPa acoustic pressure) applied to pre-rigor beef neck muscle had no effect on the pH, but the calculated exhaustion factor suggested that there was some effect on metabolism and actin-myosin interaction. However, the resultant texture of cooked, ultrasound-treated muscle was lower in tenderness compared to the control sample. After ageing for 3 weeks at 0 °C, the ultrasound-treated samples had the same peak force value as the control. High frequency ultrasound had no significant effect on the colour parameters of pre-rigor beef neck muscle. This proof-of-concept study showed no effect of ultrasound on quality but did indicate that the application of high frequency ultrasound to pre-rigor beef muscle shows potential for modifying ATP turnover, and further investigation is warranted. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
Rigorous derivation from Landau-de Gennes theory to Ericksen-Leslie theory
Wang, Wei; Zhang, Pingwen; Zhang, Zhifei
2013-01-01
Starting from the Beris-Edwards system for liquid crystals, we present a rigorous derivation of the Ericksen-Leslie system with general Ericksen stress and Leslie stress by using the Hilbert expansion method.
An efficient multiple particle filter based on the variational Bayesian approach
Ait-El-Fquih, Boujemaa; Hoteit, Ibrahim
2015-01-01
(VB) approach to propose a new MPF, the VBMPF. The proposed filter is computationally more efficient since the propagation of each particle requires generating one (new) particle only, while in the standard MPFs a set of (children) particles needs
de-Graft Acquah, Henry
2014-01-01
This paper highlights the sensitivity of technical efficiency estimates to estimation approaches using empirical data. Firm-specific technical efficiency and mean technical efficiency are estimated using the non-parametric Data Envelopment Analysis (DEA) and the parametric Corrected Ordinary Least Squares (COLS) and Stochastic Frontier Analysis (SFA) approaches. Mean technical efficiency is found to be sensitive to the choice of estimation technique. Analysis of variance and Tukey's test sugge...
Striation Patterns of Ox Muscle in Rigor Mortis
Locker, Ronald H.
1959-01-01
Ox muscle in rigor mortis offers a selection of myofibrils fixed at varying degrees of contraction from sarcomere lengths of 3.7 to 0.7 µ. A study of this material by phase contrast and electron microscopy has revealed four distinct successive patterns of contraction, including besides the familiar relaxed and contracture patterns, two intermediate types (2.4 to 1.9 µ, 1.8 to 1.5 µ) not previously well described. PMID:14417790
Energy Technology Data Exchange (ETDEWEB)
Ahn, Hye-Kyung; Kim, Byoung Chan; Jun, Seung-Hyun; Chang, Mun Seock; Lopez-Ferrer, Daniel; Smith, Richard D.; Gu, Man Bock; Lee, Sang-Won; Kim, Beom S.; Kim, Jungbae
2010-12-15
An efficient protein digestion in proteomic analysis requires the stabilization of proteases such as trypsin. In the present work, trypsin was stabilized in the form of an enzyme coating on electrospun polymer nanofibers (EC-TR), which crosslinks additional trypsin molecules onto covalently-attached trypsin (CA-TR). EC-TR showed better stability than CA-TR under rigorous conditions, such as at high temperatures of 40 °C and 50 °C, in the presence of organic co-solvents, and at various pH values. For example, the half-lives of CA-TR and EC-TR at 40 °C were 0.24 and 163.20 hours, respectively. The improved stability of EC-TR can be explained by covalent linkages on the surface of trypsin molecules, which effectively inhibit the denaturation, autolysis, and leaching of trypsin. Protein digestion was performed at 40 °C using both CA-TR and EC-TR on a model protein, enolase. EC-TR showed better performance and stability than CA-TR, maintaining good digestion of enolase over repeated uses for a period of one week. Under the same conditions, CA-TR performed poorly from the beginning and could not be used for digestion at all after a few uses. The enzyme coating approach is anticipated to be successfully employed not only for protein digestion in proteomic analysis, but also in various other fields where poor enzyme stability presently hampers the practical application of enzymes.
Learning from Science and Sport - How we, Safety, "Engage with Rigor"
Herd, A.
2012-01-01
As the world of spaceflight safety is relatively small and potentially inward-looking, we need to be aware of the "outside world". We should then try to remind ourselves to be open to the possibility that data, knowledge or experience from outside of the spaceflight community may provide some constructive alternate perspectives. This paper will assess aspects of two seemingly tangential fields, science and sport, and align these with the world of safety. In doing so, some useful insights will be given into the challenges we face, and solutions relevant to our everyday work of safety engineering may emerge. Sport, particularly a contact sport such as rugby union, requires direct interaction between members of two (opposing) teams: professional, accurately timed and positioned interaction for a desired outcome. These interactions, whilst an essential part of the game, are however not without their constraints. The rugby scrum has constraints as to the formation and engagement of the two teams. The controlled engagement provides for an interaction between the two teams in a safe manner. The constraints arise from the reality that an incorrect engagement could cause serious injury to members of either team. In academia, scientific rigor is applied to assure that the arguments provided and the conclusions drawn in academic papers presented for publication are valid, legitimate and credible. The scientific goal of the need for rigor may be expressed in the example of achieving a statistically relevant sample size, n, in order to assure the validity of analysis of the data pool. A failure to apply rigor could then place the entire study at risk of failing to have the respective paper published. This paper will consider the merits of these two different aspects, scientific rigor and sports engagement, and offer a reflective look at how this may provide a "modus operandi" for safety engineers at any level, whether at their desks (creating or reviewing safety assessments) or in a
An Efficient Heuristic Approach for Irregular Cutting Stock Problem in Ship Building Industry
Directory of Open Access Journals (Sweden)
Yan-xin Xu
2016-01-01
This paper presents an efficient approach for solving a real two-dimensional irregular cutting stock problem in the ship building industry. The cutting stock problem is a common cutting and packing problem that arises in a variety of industrial applications. A modification of the selection heuristic Exact Fit is applied in our research. For irregular shapes, a placement heuristic is more important to constructing a complete solution; a placement heuristic based on bottom-left-fill is presented. We evaluate the proposed approach using generated instances with only convex shapes from the literature and some instances with nonconvex shapes based on a real problem from the ship building industry. The results demonstrate that the effectiveness and efficiency of the proposed approach are significantly better than those of some conventional heuristics.
Kobayashi, Masahiko; Takemori, Shigeru; Yamaguchi, Maki
2004-02-10
Based on the molecular mechanism of rigor mortis, we have proposed that stiffness (elastic modulus evaluated with tension response against minute length perturbations) can be a suitable index of post-mortem rigidity in skeletal muscle. To trace the developmental process of rigor mortis, we measured stiffness and tension in both red and white rat skeletal muscle kept in liquid paraffin at 37 and 25 degrees C. White muscle (in which type IIB fibres predominate) developed stiffness and tension significantly more slowly than red muscle, except for soleus red muscle at 25 degrees C, which showed disproportionately slow rigor development. In each of the examined muscles, stiffness and tension developed more slowly at 25 degrees C than at 37 degrees C. In each specimen, tension always reached its maximum level earlier than stiffness, and then decreased more rapidly and markedly than stiffness. These phenomena may account for the sequential progress of rigor mortis in human cadavers.
Cavitt, L C; Sams, A R
2003-07-01
Studies were conducted to develop a non-destructive method for monitoring the rate of rigor mortis development in poultry and to evaluate the effectiveness of electrical stimulation (ES). In the first study, 36 male broilers in each of two trials were processed at 7 wk of age. After being bled, half of the birds received electrical stimulation (400 to 450 V, 400 to 450 mA, for seven pulses of 2 s on and 1 s off), and the other half were designated as controls. At 0.25 and 1.5 h postmortem (PM), carcasses were evaluated for the angles of the shoulder, elbow, and wing tip and the distance between the elbows. Breast fillets were harvested at 1.5 h PM (after chilling) from all carcasses. Fillet samples were excised and frozen for later measurement of pH and R-value, and the remainder of each fillet was held on ice until 24 h postmortem. Shear value and pH means were significantly lower, but R-value means were higher (P<0.05) … rigor mortis by ES. The physical dimensions of the shoulder and elbow changed (P<0.05) with rigor mortis development and with ES. These results indicate that physical measurements of the wings may be useful as a nondestructive indicator of rigor development and for monitoring the effectiveness of ES. In the second study, 60 male broilers in each of two trials were processed at 7 wk of age. At 0.25, 1.5, 3.0, and 6.0 h PM, carcasses were evaluated for the distance between the elbows. At each time point, breast fillets were harvested from each carcass. Fillet samples were excised and frozen for later measurement of pH and sarcomere length, whereas the remainder of each fillet was held on ice until 24 h PM. Shear value and pH means (P<0.05) … rigor mortis development. Elbow distance decreased (P<0.05) with rigor development and was correlated (P<0.05) with rigor mortis development in broiler carcasses.
Chen, Bowen; Zhao, Yongli; Zhang, Jie
2015-09-21
In this paper, we develop a virtual link priority mapping (LPM) approach and a virtual node priority mapping (NPM) approach to improve the energy efficiency and reduce the spectrum usage of converged flexible-bandwidth optical networks and data centers. For comparison, the lower bound of virtual optical network mapping is used as the benchmark solution. Simulation results show that the LPM approach achieves better performance in terms of power consumption, energy efficiency, spectrum usage, and the number of regenerators compared to the NPM approach.
A rigorous test for a new conceptual model for collisions
International Nuclear Information System (INIS)
Peixoto, E.M.A.; Mu-Tao, L.
1979-01-01
A rigorous theoretical foundation for the previously proposed model is formulated and applied to electron scattering by H2 in the gas phase. A rigorous treatment of the interaction potential between the incident electron and the hydrogen molecule is carried out to calculate differential cross sections for 1 keV electrons, using Glauber's approximation and Wang's molecular wave function for the ground electronic state of H2. Moreover, it is shown for the first time that, when adequately done, the omission of two-center terms does not adversely influence the results of molecular calculations. It is shown that the new model is far superior to the Independent Atom Model (or Independent Particle Model). The accuracy and simplicity of the new model suggest that it may be fruitfully applied to the description of other collision phenomena (e.g., in molecular beam experiments and nuclear physics). A new technique is presented for calculations involving two-center integrals within the framework of Glauber's approximation for scattering.
Rigorous Analysis of a Randomised Number Field Sieve
Lee, Jonathan; Venkatesan, Ramarathnam
2018-01-01
Factorisation of integers $n$ is of number theoretic and cryptographic significance. The Number Field Sieve (NFS) introduced circa 1990, is still the state of the art algorithm, but no rigorous proof that it halts or generates relationships is known. We propose and analyse an explicitly randomised variant. For each $n$, we show that these randomised variants of the NFS and Coppersmith's multiple polynomial sieve find congruences of squares in expected times matching the best-known heuristic e...
Phuong, Vu Hung
2018-03-01
This research applies the Data Envelopment Analysis (DEA) approach to analyze Total Factor Productivity (TFP) and efficiency changes in the Vietnam coal mining industry from 2007 to 2013. The TFP of Vietnamese coal mining companies decreased due to slow technological progress and unimproved efficiency. The decline of technical efficiency in many enterprises shows that the coal mining industry has large potential to increase productivity through technical efficiency improvement. Enhancing human resource training and investment in technology and research & development could help the industry improve its efficiency and productivity.
Technical Efficiency and Organ Transplant Performance: A Mixed-Method Approach
de-Pablos-Heredero, Carmen; Fernández-Renedo, Carlos; Medina-Merodio, Jose-Amelio
2015-01-01
Mixed methods research is useful for understanding complex processes. Organ transplants are complex processes in need of improved final performance in times of budgetary restrictions. The main objective of this article is to use a mixed-method approach to quantify the technical efficiency and the excellence achieved in organ transplant systems and to demonstrate the influence of organizational structures and internal processes on the observed technical efficiency. The results show that it is possible to implement mechanisms for the measurement of the different components by making use of quantitative and qualitative methodologies. The analyses show a positive relationship between the levels of the Baldrige indicators and the observed technical efficiency in the donation and transplant units of the 11 analyzed hospitals. It is therefore possible to conclude that high levels on the Baldrige indexes are a necessary condition for reaching an increased level of service. PMID:25950653
Efficient steady-state solver for hierarchical quantum master equations
Zhang, Hou-Dao; Qiao, Qin; Xu, Rui-Xue; Zheng, Xiao; Yan, YiJing
2017-07-01
Steady states play pivotal roles in many equilibrium and non-equilibrium open system studies. Their accurate evaluations call for exact theories with rigorous treatment of system-bath interactions. Therein, the hierarchical equations-of-motion (HEOM) formalism is a nonperturbative and non-Markovian quantum dissipation theory, which can faithfully describe the dissipative dynamics and nonlinear response of open systems. Nevertheless, solving the steady states of open quantum systems via HEOM is often a challenging task, due to the vast number of dynamical quantities involved. In this work, we propose a self-consistent iteration approach that quickly solves the HEOM steady states. We demonstrate its high efficiency with accurate and fast evaluations of low-temperature thermal equilibrium of a model Fenna-Matthews-Olson pigment-protein complex. Numerically exact evaluation of thermal equilibrium Rényi entropies and stationary emission line shapes is presented with detailed discussion.
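The full HEOM steady-state problem involves a large hierarchy of auxiliary density operators, but the stationarity condition it solves has the same shape as any master equation: find the null space of the generator, normalised by the trace condition. A minimal sketch on a toy classical rate matrix, purely illustrative of that condition; the paper's self-consistent iteration tackles a vastly larger coupled quantum system:

```python
import numpy as np
from scipy.linalg import null_space

# Toy three-state rate matrix: W[i, j] is the transition rate from state j to i.
W = np.array([[0.0, 0.2, 0.1],
              [0.5, 0.0, 0.3],
              [0.1, 0.4, 0.0]])
L = W - np.diag(W.sum(axis=0))   # generator: dp/dt = L p, columns sum to zero

p_ss = null_space(L)[:, 0]       # stationary state spans the 1-d null space
p_ss /= p_ss.sum()               # normalise (the trace condition)
print(p_ss, L @ p_ss)            # stationary distribution; residual ~ 0
```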
A rigorous pole representation of multilevel cross sections and its practical applications
International Nuclear Information System (INIS)
Hwang, R.N.
1987-01-01
In this article, a rigorous method for representing multilevel cross sections and its practical applications are described. It is a generalization of the rationale suggested by de Saussure and Perez for the s-wave resonances. A computer code, WHOPPER, has been developed to convert the Reich-Moore parameters into pole and residue parameters in momentum space. Sample calculations have been carried out to illustrate that the proposed method preserves the rigor of the Reich-Moore cross sections exactly. An analytical method has been developed to evaluate the pertinent Doppler-broadened line shape functions. A discussion is presented on how to minimize the number of pole parameters so that existing reactor codes can be best utilized.
Quality and efficiency in high dimensional Nearest neighbor search
Tao, Yufei; Yi, Ke; Sheng, Cheng; Kalnis, Panos
2009-01-01
Nearest neighbor (NN) search in high dimensional space is an important problem in many applications. Ideally, a practical solution (i) should be implementable in a relational database, and (ii) its query cost should grow sub-linearly with the dataset size, regardless of the data and query distributions. Despite the bulk of NN literature, no solution fulfills both requirements, except locality sensitive hashing (LSH). The existing LSH implementations are either rigorous or ad hoc. Rigorous-LSH ensures good quality of query results, but requires expensive space and query cost. Although adhoc-LSH is more efficient, it abandons quality control, i.e., the neighbor it outputs can be arbitrarily bad. As a result, currently no method is able to ensure both quality and efficiency simultaneously in practice. Motivated by this, we propose a new access method called the locality sensitive B-tree (LSB-tree) that enables fast high-dimensional NN search with excellent quality. The combination of several LSB-trees leads to a structure called the LSB-forest that ensures the same result quality as rigorous-LSH, but reduces its space and query cost dramatically. The LSB-forest also outperforms adhoc-LSH, even though the latter has no quality guarantee. Besides its appealing theoretical properties, the LSB-tree itself also serves as an effective index that consumes linear space and supports efficient updates. Our extensive experiments confirm that the LSB-tree is faster than (i) the state of the art of exact NN search by two orders of magnitude, and (ii) the best (linear-space) method of approximate retrieval by an order of magnitude, and at the same time, returns neighbors with much better quality. © 2009 ACM.
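The LSB-tree builds on the standard Euclidean LSH primitive h(v) = ⌊(a·v + b)/w⌋, then maps the concatenated hash keys onto a conventional B-tree. A minimal sketch of that primitive with a single in-memory hash table; the key-to-B-tree mapping and the multi-tree LSB-forest are omitted, and the parameter values are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

class E2LSH:
    """One locality-sensitive hash table for Euclidean distance:
    h(v) = floor((a.v + b) / w), concatenated k times."""
    def __init__(self, dim, k=8, w=4.0):
        self.a = rng.normal(size=(k, dim))   # random projection directions
        self.b = rng.uniform(0, w, size=k)   # random offsets
        self.w = w
        self.buckets = {}

    def key(self, v):
        return tuple(np.floor((self.a @ v + self.b) / self.w).astype(int))

    def insert(self, idx, v):
        self.buckets.setdefault(self.key(v), []).append(idx)

    def query(self, v):
        return self.buckets.get(self.key(v), [])

# Index 1000 random 32-d points, then query with a near-duplicate of point 42.
data = rng.normal(size=(1000, 32))
table = E2LSH(dim=32)
for i, v in enumerate(data):
    table.insert(i, v)
q = data[42] + 0.01 * rng.normal(size=32)
print(42 in table.query(q))   # likely True: close points usually share a bucket
```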
Do Energy Efficiency Standards Improve Quality? Evidence from a Revealed Preference Approach
Energy Technology Data Exchange (ETDEWEB)
Houde, Sebastien [Univ. of Maryland, College Park, MD (United States); Spurlock, C. Anna [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2015-06-01
Minimum energy efficiency standards have occupied a central role in U.S. energy policy for more than three decades, but little is known about their welfare effects. In this paper, we employ a revealed preference approach to quantify the impact of past revisions in energy efficiency standards on product quality. The micro-foundation of our approach is a discrete choice model that allows us to compute a price-adjusted index of vertical quality. Focusing on the appliance market, we show that several standard revisions during the period 2001-2011 have led to an increase in quality. We also show that these standards have had a modest effect on prices, and in some cases they even led to decreases in prices. For revision events where overall quality increases and prices decrease, the consumer welfare effect of tightening the standards is unambiguously positive. Finally, we show that after controlling for the effect of improvement in energy efficiency, standards have induced an expansion of quality in the non-energy dimension. We discuss how imperfect competition can rationalize these results.
Some research results by risk-inform approaches for NPP safety and operational efficiency
International Nuclear Information System (INIS)
Komarov, Yu.A.
2013-01-01
This article presents the monograph of the same name, which is planned for publication. It considers promising directions for the further development of the risk-oriented approach (ROA) for substantiating and implementing measures to increase the safety and operational efficiency of NPPs. Unlike the traditional approach, in which risk parameters are determined by probabilistic and/or deterministic methods, in the ROA the criterion functions and estimation measures are defined by the solution of a specific problem in the nuclear field. Application of the ROA substantially expands the opportunities for substantiating and implementing measures to increase NPP safety and operational efficiency.
Vada-Kovács, M
1996-01-01
Porcine biceps femoris strips of 10 cm original length were stretched by 50% and fixed within 1 h post mortem, then subjected to temperatures of 4, 15 or 36 °C until they attained their ultimate pH. Unrestrained control muscle strips, which were left to shorten freely, were similarly treated. Post-mortem metabolism (pH, R-value) and shortening were recorded; thereafter, ultimate meat quality traits (pH, lightness, extraction and swelling of myofibrils) were determined. The rate of pH fall at 36 °C, as well as ATP breakdown at 36 and 4 °C, were significantly reduced by pre-rigor stretch. The relationship between R-value and pH indicated cold shortening at 4 °C. Myofibrils isolated from pre-rigor stretched muscle strips kept at 36 °C showed the most severe reduction of hydration capacity, while paleness remained below extreme values. However, when stored at 4 °C, pre-rigor stretched myofibrils proved to be superior to shortened ones in their extractability and swelling.
Krompecher, Thomas; Gilles, André; Brandt-Casadevall, Conception; Mangin, Patrice
2008-04-07
Objective measurements were carried out to study the possible re-establishment of rigor mortis in rats after "breaking" (mechanical solution). Our experiments showed that: cadaveric rigidity can re-establish after breaking; significant rigidity can reappear if the breaking occurs before the process is complete; rigidity is considerably weaker after the breaking; and the time course of the intensity does not change in comparison to the controls (the re-establishment begins immediately after the breaking, maximal values are reached at the same time as in the controls, and the course of the resolution is the same as in the controls).
Dias, Weeratilake
1998-01-01
Efficient operation of agricultural credit markets is very important both for producers and for policy makers. A DEA approach is used to carry out the productivity analysis, which allows decomposition of the sources of productivity change into efficiency change and technical change. Measured efficiencies are comparable to those of the most recent parametric studies.
Directory of Open Access Journals (Sweden)
Huan Xu
2017-11-01
The aim of this paper is to provide a new approach for assessing the input-output efficiency of education and technology for national science and education departments. We used the Data Envelopment Analysis (DEA) method to analyze the efficiency of activities in the education and technology sector, classifying input and output variables accordingly. Using panel data for the education and technology sector of 53 countries, we found that the countries with significant progress in educational and technological efficiency are mainly concentrated in East Asia, especially Japan, Korea and Taiwan, and in some developing countries. We further evaluate the effect of educational and technological efficiency on national competitiveness, balanced development of the country, national energy efficiency, export, and employment. We found that the efficiency of science and technology has an effect on the balanced development of the country, whereas that of education has played a counter-productive role; educational efficiency plays a large role and is related to the country's educational development. In addition, using panel data analysis, we show that educational and technological efficiency contributed to different degrees to development from 2000 to 2014, depending mainly on economic development progress and the push for education and technology policy. The proposed approach provides decision-making support for education and technology policy formulation, especially the selection of appropriate strategies for resource allocation and process evaluation.
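For readers unfamiliar with DEA, the basic input-oriented CCR efficiency score of a decision-making unit comes from one small linear program per unit: shrink the unit's inputs by a factor θ while requiring a non-negative combination of peers to match its outputs. A minimal sketch with entirely hypothetical data; the paper's model has additional structure beyond this textbook form:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR DEA score of unit o.
    X: (n_units, n_inputs), Y: (n_units, n_outputs). Returns theta in (0, 1]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]          # minimise theta; vars = [theta, lambdas]
    A_in = np.c_[-X[o], X.T]             # sum_j lam_j x_ji <= theta * x_oi
    A_out = np.c_[np.zeros(s), -Y.T]     # sum_j lam_j y_jr >= y_or
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]], method="highs")
    return res.x[0]

# Three hypothetical units, two inputs (staff, budget), one output (graduates).
X = np.array([[20.0, 300], [40, 500], [30, 300]])
Y = np.array([[60.0], [90], [80]])
for o in range(3):
    print(f"unit {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```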
Methodical Approach to Diagnostics of Efficiency of Production Economic Activity of an Enterprise
Directory of Open Access Journals (Sweden)
Zhukov Andrii V.
2014-03-01
The article develops a methodical approach to diagnostics of the efficiency of the production and economic activity of an enterprise which, unlike existing ones, is realised through the following stages: analysis of the enterprise's external environment; analysis of the enterprise's internal environment; identification of the components of efficiency of production and economic activity for complex diagnostics along the directions of efficiency of the subsystems of production and economic activity, efficiency of use of individual types of resources, and socio-economic efficiency; scorecard formation; study of tendencies of change of indicators; identification of cause-effect dependencies between the main components of efficiency in order to diagnose the reasons for its level; diagnosing deviations of indicator values from their optimal values; and development of managerial decisions on preserving and increasing the efficiency of the production and economic activity of the enterprise.
Bamberger, Michael; Tarsilla, Michele; Hesse-Biber, Sharlene
2016-04-01
Many widely-used impact evaluation designs, including randomized control trials (RCTs) and quasi-experimental designs (QEDs), frequently fail to detect what are often quite serious unintended consequences of development programs. This seems surprising, as experienced planners and evaluators are well aware that unintended consequences frequently occur. Most evaluation designs are intended to determine whether there is credible evidence (statistical, theory-based or narrative) that programs have achieved their intended objectives, and the logic of many evaluation designs, even those that are considered the most "rigorous," does not permit the identification of outcomes that were not specified in the program design. We take the example of RCTs as they are considered by many to be the most rigorous evaluation designs. We present a number of cases to illustrate how infusing RCTs with a mixed-methods approach (sometimes called an "RCT+" design) can strengthen the credibility of these designs and can also capture important unintended consequences. We provide a Mixed Methods Evaluation Framework that identifies 9 ways in which unintended consequences (UCs) can occur, and we apply this framework to two of the case studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
Technical Efficiency in the Chilean Agribusiness Sector - a Stochastic Meta-Frontier Approach
Larkner, Sebastian; Brenes Muñoz, Thelma; Aedo, Edinson Rivera; Brümmer, Bernhard
2013-01-01
The Chilean economy is strongly export-oriented, which is also true for the Chilean agribusiness industry. This paper investigates the technical efficiency of the Chilean food processing industry between 2001 and 2007. We use a dataset of 2,471 firms in the food processing industry, with observations from the 'Annual National Industrial Survey'. A stochastic meta-frontier approach is used in order to analyse the drivers of technical efficiency. We include variables capturing the effec...
Effects of post mortem temperature on rigor tension, shortening and ...
African Journals Online (AJOL)
Fully developed rigor mortis in muscle is characterised by maximum loss of extensibility. The course of post mortem changes in ostrich muscle was studied by following isometric tension, shortening and change in pH during the first 24 h post mortem within muscle strips from the muscularis gastrocnemius, pars interna at ...
A Dynamic BI–Orthogonal Field Equation Approach to Efficient Bayesian Inversion
Directory of Open Access Journals (Sweden)
Tagade Piyush M.
2017-06-01
This paper proposes a novel, computationally efficient, stochastic spectral projection based approach to Bayesian inversion of a computer simulator with high-dimensional parametric and model-structure uncertainty. The proposed method is based on the decomposition of the solution into its mean and a random field using a generic Karhunen-Loève expansion. The random field is represented as a convolution of separable Hilbert spaces in the stochastic and spatial dimensions that are spectrally represented using respective orthogonal bases. In particular, the present paper investigates generalized polynomial chaos bases for the stochastic dimension and eigenfunction bases for the spatial dimension. Dynamic orthogonality is used to derive closed-form equations for the time evolution of the mean, spatial and stochastic fields. The resultant system of equations consists of a partial differential equation (PDE) that defines the dynamic evolution of the mean, a set of PDEs to define the time evolution of the eigenfunction bases, and a set of ordinary differential equations (ODEs) that define the dynamics of the stochastic field. This system of dynamic evolution equations efficiently propagates the prior parametric uncertainty to the system response. The resulting bi-orthogonal expansion of the system response is used to reformulate the Bayesian inference for efficient exploration of the posterior distribution. The efficacy of the proposed method is investigated for calibration of a 2D transient diffusion simulator with an uncertain source location and diffusivity. The computational efficiency of the method is demonstrated against a Monte Carlo method and a generalized polynomial chaos approach.
Market Efficiency of Oil Spot and Futures: A Stochastic Dominance Approach
H.H. Lean (Hooi Hooi); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)
2010-01-01
This paper examines the market efficiency of oil spot and futures prices by using a stochastic dominance (SD) approach. As there is no evidence of an SD relationship between oil spot and futures, we conclude that there is no arbitrage opportunity between these two markets, and that both
Devine, Carrick; Wells, Robyn; Lowe, Tim; Waller, John
2014-01-01
M. longissimus muscles from lambs electrically stimulated at 15 min post-mortem were removed after grading, wrapped in polythene film and held at 4 (n=6), 7 (n=6), 15 (n=6, n=8) and 35 °C (n=6) until rigor mortis, then aged at 15 °C for 0, 4, 24 and 72 h post-rigor. Centrifuged free water increased exponentially, and bound water, dry matter and shear force decreased exponentially over time. Decreases in shear force and increases in free water were closely related (r² = 0.52) and were unaffected by pre-rigor temperatures. © 2013.
Narasimhan, Seetharam; Chiel, Hillel J; Bhunia, Swarup
2009-01-01
For implantable neural interface applications, it is important to compress data and analyze spike patterns across multiple channels in real time. Such a computational task for online neural data processing requires an innovative circuit-architecture level design approach for low-power, robust and area-efficient hardware implementation. Conventional microprocessor or Digital Signal Processing (DSP) chips would dissipate too much power and are too large in size for an implantable system. In this paper, we propose a novel hardware design approach, referred to as "Preferential Design" that exploits the nature of the neural signal processing algorithm to achieve a low-voltage, robust and area-efficient implementation using nanoscale process technology. The basic idea is to isolate the critical components with respect to system performance and design them more conservatively compared to the noncritical ones. This allows aggressive voltage scaling for low power operation while ensuring robustness and area efficiency. We have applied the proposed approach to a neural signal processing algorithm using the Discrete Wavelet Transform (DWT) and observed significant improvement in power and robustness over conventional design.
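The compression idea underlying such designs is easy to demonstrate in software: take a DWT of the signal and transmit only the few large coefficients. A minimal sketch on a synthetic spike signal, assuming the PyWavelets package; the paper's contribution is the low-power hardware realisation, not this algorithmic core:

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
spike = np.exp(-((t - 0.5) / 0.01) ** 2)            # synthetic "spike"
signal = spike + 0.05 * rng.normal(size=t.size)     # plus recording noise

# 4-level DWT, then keep only the largest 5% of coefficients.
coeffs = pywt.wavedec(signal, "db4", level=4)
flat, slices = pywt.coeffs_to_array(coeffs)
thresh = np.quantile(np.abs(flat), 0.95)
flat[np.abs(flat) < thresh] = 0.0                   # ~20x fewer values to transmit
rec = pywt.waverec(pywt.array_to_coeffs(flat, slices, output_format="wavedec"), "db4")

print("kept:", np.count_nonzero(flat), "of", flat.size)
print("relative error:", np.linalg.norm(rec - signal) / np.linalg.norm(signal))
```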
New rigorous asymptotic theorems for inverse scattering amplitudes
International Nuclear Information System (INIS)
Lomsadze, Sh.Yu.; Lomsadze, Yu.M.
1984-01-01
The rigorous asymptotic theorems, both of integral and local types, obtained earlier and establishing logarithmic and in some cases even power correlations between the real and imaginary parts of scattering amplitudes F±, are extended to the inverse amplitudes 1/F±. We also succeed in establishing power correlations of a new type between the real and imaginary parts, both for the amplitudes themselves and for the inverse ones. All the obtained assertions are convenient to test in high energy experiments when the amplitudes show asymptotic behaviour.
A Hybrid Node Scheduling Approach Based on Energy Efficient Chain Routing for WSN
Directory of Open Access Journals (Sweden)
Yimei Kang
2014-04-01
Energy efficiency is usually a significant goal in wireless sensor networks (WSNs). In this work, an energy efficient chain (EEC) data routing approach is first presented. The coverage and connectivity of WSNs are discussed based on EEC. A hybrid node scheduling approach is then proposed. It includes sleep scheduling for cyclically monitoring regions of interest in time-driven modes and wakeup scheduling for tracking emergency events in event-driven modes. A failure rate is introduced into the sleep scheduling to improve the reliability of the system. A wakeup sensor threshold and a sleep time threshold are introduced into the wakeup scheduling to reduce energy consumption to the extent possible. The simulation results show that the proposed algorithm can extend the effective lifetime of the network to twice that of PEAS. In addition, the proposed methods are computationally efficient because they are very simple to implement.
Stillwell, A. S.; Chini, C. M.; Schreiber, K. L.; Barker, Z. A.
2015-12-01
Energy and water are two increasingly correlated resources. Electricity generation at thermoelectric power plants requires cooling, such that large water withdrawal and consumption rates are associated with electricity consumption. Drinking water and wastewater treatment require significant electricity inputs to clean, disinfect, and pump water. Due to this energy-water nexus, energy efficiency measures might be a cost-effective approach to reducing water use, and water efficiency measures might support energy savings as well. This research characterizes the cost-effectiveness of different efficiency approaches in households by quantifying the direct and indirect water and energy savings that could be realized through efficiency measures, such as low-flow fixtures, energy and water efficient appliances, distributed generation, and solar water heating. Potential energy and water savings from these efficiency measures were analyzed in a product-lifetime adjusted economic model comparing efficiency measures to conventional counterparts. Results were displayed as cost abatement curves indicating the most economical measures to implement for a target reduction in water and/or energy consumption. These cost abatement curves are useful in supporting market innovation and investment in residential-scale efficiency.
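A cost abatement curve of the kind described is straightforward to construct: rank measures by annualised cost per unit of resource saved, then accumulate the savings along the ranking. A minimal sketch with entirely hypothetical measure names and numbers:

```python
# Hypothetical household measures: (name, annual kWh saved, annualised net cost in $).
# A negative cost means the measure pays for itself over its lifetime.
measures = [
    ("low-flow fixtures",     250, -30.0),
    ("LED lighting",          400, -55.0),
    ("efficient dishwasher",  180,  12.0),
    ("heat-pump dryer",       550,  40.0),
    ("solar water heating",  1400,  95.0),
]

# Rank by cost per kWh saved, then accumulate savings along the curve.
ranked = sorted(measures, key=lambda m: m[2] / m[1])
cumulative = 0
for name, saved, cost in ranked:
    cumulative += saved
    print(f"{name:22s} {cost / saved:+8.3f} $/kWh   cumulative {cumulative:5d} kWh/yr")
```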
Muroya, Susumu; Ohnishi-Kameyama, Mayumi; Oe, Mika; Nakajima, Ikuyo; Shibata, Masahiro; Chikuni, Koichi
2007-05-16
To investigate changes in myosin light chains (MyLCs) during postmortem aging of the bovine longissimus muscle, we performed two-dimensional gel electrophoresis followed by identification with matrix-assisted laser desorption ionization time-of-flight mass spectrometry. The results of fluorescent differential gel electrophoresis showed that two spots of the myosin regulatory light chain (MyLC2) at pI values of 4.6 and 4.7 shifted toward pI values of 4.5 and 4.6, respectively, by 24 h postmortem, when rigor mortis was completed. Meanwhile, the MyLC1 and MyLC3 spots did not change during the 14 days postmortem. Phosphoprotein-specific staining of the gels demonstrated that the MyLC2 proteins at pI values of 4.5 and 4.6 were phosphorylated. Furthermore, possible N-terminal region peptides containing one and two phosphoserine residues were detected in the mass spectra of the MyLC2 spots at pI values of 4.5 and 4.6, respectively. These results demonstrated that MyLC2 became doubly phosphorylated during rigor formation of the bovine longissimus, suggesting involvement of MyLC2 phosphorylation in the progression of beef rigor mortis. Keywords: bovine; myosin regulatory light chain (RLC, MyLC2); phosphorylation; rigor mortis; skeletal muscle.
A Rigorous Methodology for Analyzing and Designing Plug-Ins
DEFF Research Database (Denmark)
Fasie, Marieta V.; Haxthausen, Anne Elisabeth; Kiniry, Joseph
2013-01-01
This paper addresses these problems by describing a rigorous methodology for analyzing and designing plug-ins. The methodology is grounded in the Extended Business Object Notation (EBON) and covers informal analysis and design of features, GUI, actions, and scenarios; formal architecture design, including behavioral semantics; and validation. The methodology is illustrated via a case study whose focus is an Eclipse environment for the RAISE formal method's tool suite.
Study Design Rigor in Animal-Experimental Research Published in Anesthesia Journals.
Hoerauf, Janine M; Moss, Angela F; Fernandez-Bustamante, Ana; Bartels, Karsten
2018-01-01
Lack of reproducibility of preclinical studies has been identified as an impediment for translation of basic mechanistic research into effective clinical therapies. Indeed, the National Institutes of Health has revised its grant application process to require more rigorous study design, including sample size calculations, blinding procedures, and randomization steps. We hypothesized that the reporting of such metrics of study design rigor has increased over time for animal-experimental research published in anesthesia journals. PubMed was searched for animal-experimental studies published in 2005, 2010, and 2015 in primarily English-language anesthesia journals. A total of 1466 publications were graded on the performance of sample size estimation, randomization, and blinding. Cochran-Armitage test was used to assess linear trends over time for the primary outcome of whether or not a metric was reported. Interrater agreement for each of the 3 metrics (power, randomization, and blinding) was assessed using the weighted κ coefficient in a 10% random sample of articles rerated by a second investigator blinded to the ratings of the first investigator. A total of 1466 manuscripts were analyzed. Reporting for all 3 metrics of experimental design rigor increased over time (2005 to 2010 to 2015): for power analysis, from 5% (27/516), to 12% (59/485), to 17% (77/465); for randomization, from 41% (213/516), to 50% (243/485), to 54% (253/465); and for blinding, from 26% (135/516), to 38% (186/485), to 47% (217/465). The weighted κ coefficients and 98.3% confidence interval indicate almost perfect agreement between the 2 raters beyond that which occurs by chance alone (power, 0.93 [0.85, 1.0], randomization, 0.91 [0.85, 0.98], and blinding, 0.90 [0.84, 0.96]). Our hypothesis that reported metrics of rigor in animal-experimental studies in anesthesia journals have increased during the past decade was confirmed. More consistent reporting, or explicit justification for absence
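Interrater agreement of the kind reported is typically computed with a weighted Cohen's kappa, which penalises near-misses on an ordinal scale less than gross disagreements. A minimal sketch using scikit-learn with hypothetical ratings; the study's actual data and grading scheme are not reproduced here:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical re-rating exercise: two raters grade 20 papers on a 0-2 scale
# (0 = metric absent, 1 = partially reported, 2 = fully reported).
rater1 = [2, 1, 0, 2, 2, 1, 0, 0, 2, 1, 1, 2, 0, 2, 1, 0, 2, 2, 1, 0]
rater2 = [2, 1, 0, 2, 1, 1, 0, 0, 2, 1, 2, 2, 0, 2, 1, 0, 2, 2, 1, 1]

# Linearly weighted kappa: agreement beyond chance, near-misses weighted lightly.
print(cohen_kappa_score(rater1, rater2, weights="linear"))
```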
A socio-technical approach to improving retail energy efficiency behaviours.
Christina, Sian; Waterson, Patrick; Dainty, Andrew; Daniels, Kevin
2015-03-01
In recent years, the UK retail sector has made a significant contribution to societal responses on carbon reduction. We provide a novel and timely examination of environmental sustainability from a systems perspective, exploring how energy-related technologies and strategies are incorporated into organisational life. We use a longitudinal case study approach, looking at behavioural energy efficiency from within one of the UK's leading retailers. Our data covers a two-year period, with qualitative data from a total of 131 participants gathered using phased interviews and focus groups. We introduce an adapted socio-technical framework approach in order to describe an existing organisational behavioural strategy to support retail energy efficiency. Our findings point to crucial socio-technical and goal-setting factors which both impede and/or enable energy efficient behaviours, these include: tensions linked to store level perception of energy management goals; an emphasis on the importance of technology for underpinning change processes; and, the need for feedback and incentives to support the completion of energy-related tasks. We also describe the evolution of a practical operational intervention designed to address issues raised in our findings. Our study provides fresh insights into how sustainable workplace behaviours can be achieved and sustained over time. Secondly, we discuss in detail a set of issues arising from goal conflict in the workplace; these include the development of a practical energy management strategy to facilitate secondary organisational goals through job redesign. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Simic, Vladimir
2016-06-01
As the number of end-of-life vehicles (ELVs) is estimated to increase to 79.3 million units per year by 2020 (e.g., 40 million units were generated in 2010), there is strong motivation to effectively manage this fast-growing waste flow. Intensive work on the management of ELVs is necessary in order to tackle this important environmental challenge more successfully. This paper proposes an interval-parameter chance-constraint programming model for end-of-life vehicle management under rigorous environmental regulations. The proposed model can incorporate various kinds of uncertainty information in the modeling process. The complex relationships between different ELV management sub-systems are successfully addressed. In particular, the formulated model can help identify optimal patterns of procurement from multiple sources of ELV supply, production and inventory planning in multiple vehicle recycling factories, and allocation of sorted material flows to multiple final destinations under rigorous environmental regulations. A case study is conducted in order to demonstrate the potential and applicability of the proposed model. Various constraint-violation probability levels are examined in detail, and the influence of parameter uncertainty on model solutions is thoroughly investigated. Useful solutions for the management of ELVs are obtained under different probabilities of violating the system constraints. The formulated model is able to tackle a hard ELV management problem involving uncertainty, and has advantages in providing a basis for determining long-term ELV management plans with desired compromises between the economic efficiency of the vehicle recycling system and system-reliability considerations. The results are helpful for supporting the generation and improvement of ELV management plans. Copyright © 2016 Elsevier Ltd. All rights reserved.
An efficient numerical approach to electrostatic microelectromechanical system simulation
International Nuclear Information System (INIS)
Pu, Li
2009-01-01
Computational analysis of electrostatic microelectromechanical systems (MEMS) requires an electrostatic analysis to compute the electrostatic forces acting on micromechanical structures and a mechanical analysis to compute their deformation. Typically, the mechanical analysis is performed on an undeformed geometry, whereas the electrostatic analysis is performed on the deformed position of the microstructures. In this paper, a new efficient approach to self-consistent analysis of electrostatic MEMS in the small-deformation case is presented. In this approach, when the microstructures undergo small deformations, the surface charge densities on the deformed geometry can be computed without updating the geometry of the microstructures. The algorithm uses the linear mode shapes of a microstructure as basis functions. A boundary integral equation for the electrostatic problem is expanded into a Taylor series around the undeformed configuration, and a new coupled-field equation is presented. The approach is validated by comparing its results with those available in the literature and with ANSYS solutions, and shows attractive features comparable to ANSYS.
The effect of temperature on the mechanical aspects of rigor mortis in a liquid paraffin model.
Ozawa, Masayoshi; Iwadate, Kimiharu; Matsumoto, Sari; Asakura, Kumiko; Ochiai, Eriko; Maebashi, Kyoko
2013-11-01
Rigor mortis is an important phenomenon to estimate the postmortem interval in forensic medicine. Rigor mortis is affected by temperature. We measured stiffness of rat muscles using a liquid paraffin model to monitor the mechanical aspects of rigor mortis at five temperatures (37, 25, 10, 5 and 0°C). At 37, 25 and 10°C, the progression of stiffness was slower in cooler conditions. At 5 and 0°C, the muscle stiffness increased immediately after the muscles were soaked in cooled liquid paraffin and then muscles gradually became rigid without going through a relaxed state. This phenomenon suggests that it is important to be careful when estimating the postmortem interval in cold seasons. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
An Efficient Approach for Node Localisation and Tracking in Wireless Sensor Networks
CSIR Research Space (South Africa)
Mwila, Martin
2014-08-01
Submitted in partial fulfilment of the requirements for the degree Magister Technologiae: Electrical Engineering, in the Department of Electrical Engineering...
A rigorous treatment of uncertainty quantification for Silicon damage metrics
International Nuclear Information System (INIS)
Griffin, P.
2016-01-01
This report summarizes the contributions made by Sandia National Laboratories in support of the International Atomic Energy Agency (IAEA) Nuclear Data Section (NDS) Technical Meeting (TM) on Nuclear Reaction Data and Uncertainties for Radiation Damage. The work focused on a rigorous treatment of the uncertainties affecting the characterization of displacement damage in silicon semiconductors. (author)
Paper 3: Content and Rigor of Algebra Credit Recovery Courses
Walters, Kirk; Stachel, Suzanne
2014-01-01
This paper describes the content, organization and rigor of the face-to-face (f2f) and online summer algebra courses delivered in the summers of 2011 and 2012. Examining the content of both types of courses is important because research suggests that algebra courses with certain features may be better than others in promoting success for struggling students.…
A Mathematical Programming Approach to Brand Efficiency of Smartphones in the US Market
Directory of Open Access Journals (Sweden)
Shiu-Wan Hung
2017-01-01
This study applied a mathematical programming approach to investigate the brand efficiency of smartphone brands, collecting data for 2013-2015 from Consumer Reports. Brand efficiency was computed by employing the slack-based measure in data envelopment analysis. The degree of inefficiency of each brand was evaluated, and each brand's metatechnology ratio was calculated using the metafrontier concept. The results revealed that the sampled smartphone brands reached their highest average brand efficiency in 2013, with Apple exhibiting the highest brand efficiency among the sampled brands. The high brand efficiency in 2013 is attributed to the small number of product types at the beginning of the growth period of smartphones. Finally, this study examined the efficiency of smartphone brands across the four major telecommunications operators in the United States. Apple demonstrated the highest efficiency with all four operators, while no significant difference was noted among operators and smartphone brands.
International Nuclear Information System (INIS)
Birol, Fatih; Okogu, B.E.
1997-01-01
The weaknesses of the traditional measure of national output are well known and, in recent years, efforts to find more appropriate alternatives have intensified. One such methodology is the PPP approach, which may capture the real value of the GDP. In general, this approach raises the incomes of developing countries by a substantial amount, and this has serious implications for the energy indicators on which policies are usually based. A further problem is that non-commercial energy is usually left out of energy-intensity calculations. We analyze the issue of energy efficiency and carry out calculations based on three approaches: the traditional approach, the PPP-based income approach and an approach which includes non-commercial energy. The results confirm the limitations of the PPP approach, as it results in spuriously high energy-efficiency levels, suggesting high technological sophistication in developing countries. The inclusion of non-commercial energy gives a more complete picture. The main conclusion is that applying the PPP method in energy-intensity calculations may be misleading. (Author)
Improving the efficiency of a chemotherapy day unit: applying a business approach to oncology.
van Lent, Wineke A M; Goedbloed, N; van Harten, W H
2009-03-01
To improve the efficiency of a hospital-based chemotherapy day unit (CDU), the CDU was benchmarked with two other CDUs to identify their attainable performance levels for efficiency and the causes of differences. Furthermore, an in-depth analysis using a business approach called lean thinking was performed. An integrated set of interventions was implemented, among them a new planning system. The results were evaluated using pre- and post-measurements. We observed 24% growth in treatments and bed utilisation, a 12% increase in staff productivity and an 81% reduction in overtime. The method improved process design and led to increased efficiency and more timely delivery of care. Thus, the business approaches, which were adapted for healthcare, were successfully applied. The method may serve as an example for other oncology settings with problems concerning waiting times, patient flow or lack of beds.
An efficient algebraic approach to observability analysis in state estimation
Energy Technology Data Exchange (ETDEWEB)
Pruneda, R.E.; Solares, C.; Conejo, A.J. [University of Castilla-La Mancha, 13071 Ciudad Real (Spain); Castillo, E. [University of Cantabria, 39005 Santander (Spain)
2010-03-15
An efficient and compact algebraic approach to state estimation observability is proposed. It is based on transferring rows to columns and vice versa in the Jacobian measurement matrix. The proposed methodology provides a unified approach to observability checking, critical measurement identification, determination of observable islands, and selection of pseudo-measurements to restore observability. Additionally, the observability information obtained from a given set of measurements can provide directly the observability obtained from any subset of measurements of the given set. Several examples are used to illustrate the capabilities of the proposed methodology, and results from a large case study are presented to demonstrate the appropriate computational behavior of the proposed algorithms. Finally, some conclusions are drawn. (author)
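The underlying criterion in numerical observability analysis is simple to state: a measurement set is observable iff its Jacobian has full column rank, and a measurement is critical iff removing its row destroys that rank. A brute-force sketch on a toy DC state estimation example; the paper's row/column-transfer technique reaches the same conclusions far more efficiently:

```python
import numpy as np

def is_observable(H: np.ndarray) -> bool:
    """A measurement set is observable iff its Jacobian has full column rank."""
    return np.linalg.matrix_rank(H) == H.shape[1]

def critical_measurements(H: np.ndarray):
    """A measurement is critical if deleting its row destroys observability."""
    return [i for i in range(H.shape[0])
            if np.linalg.matrix_rank(np.delete(H, i, axis=0)) < H.shape[1]]

# Toy 3-bus DC model, bus 0 as angle reference -> states (theta_1, theta_2).
H = np.array([[-1.0,  0.0],    # flow measurement on line 0-1
              [ 1.0, -1.0],    # flow measurement on line 1-2
              [-1.0,  1.0]])   # injection at bus 2 (redundant with line 1-2)
print(is_observable(H), critical_measurements(H))   # True [0]
```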
Re-establishment of rigor mortis: evidence for a considerably longer post-mortem time span.
Crostack, Chiara; Sehner, Susanne; Raupach, Tobias; Anders, Sven
2017-07-01
Re-establishment of rigor mortis following mechanical loosening is used as part of the complex method for the forensic estimation of the time since death in human bodies and has formerly been reported to occur up to 8-12 h post-mortem (hpm). We recently described our observation of the phenomenon at up to 19 hpm in cases of in-hospital death. Due to the case selection (preceding illness, immobilisation), transfer of these results to forensic cases might be limited. We therefore examined 67 out-of-hospital cases of sudden death with known times of death. Re-establishment of rigor mortis was positive in 52.2% of cases and was observed up to 20 hpm. In contrast to the current doctrine that a recurrence of rigor mortis is always of a lesser degree than its first manifestation in a given case, muscular rigidity at re-establishment equalled or even exceeded the degree observed before loosening in 21 joints. Furthermore, this is the first study to describe that the phenomenon appears to be independent of body or ambient temperature.
A multi-objective approach for developing national energy efficiency plans
International Nuclear Information System (INIS)
Haydt, Gustavo; Leal, Vítor; Dias, Luís
2014-01-01
This paper proposes a new approach to deal with the problem of building national energy efficiency (EE) plans, considering multiple objectives instead of only energy savings. The objectives considered are minimizing the influence of energy use on climate change, minimizing the financial risk from the investment, maximizing the security of energy supply, minimizing investment costs, minimizing the impacts of building new power plants and transmission infrastructures, and maximizing the local air quality. These were identified through literature review and interaction with real decision makers. A database of measures is established, from which millions of potential EE plans can be built by combining measures and their respective degree of implementation. Finally, a hybrid multi-objective and multi-criteria decision analysis (MCDA) model is proposed to search and select the EE plans that best match the decision makers’ preferences. An illustration of the working mode and the type of results obtained from this novel hybrid model is provided through an application to Portugal. For each of five decision perspectives a wide range of potential best plans were identified. These wide ranges show the relevance of introducing multi-objective analysis in a comprehensive search space as a tool to inform decisions about national EE plans. - Highlights: • A multiple objective approach to aid the choice of national energy efficiency plans. • A hybrid multi-objective MCDA model is proposed to search among the possible plans. • The model identified relevant plans according to five different idealized DMs. • The approach is tested with Portugal
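Searching millions of candidate plans against multiple objectives typically begins by discarding dominated plans. A minimal Pareto-filter sketch with hypothetical plan scores, where every column is oriented so that larger is better:

```python
import numpy as np

def pareto_front(scores: np.ndarray) -> np.ndarray:
    """Indices of non-dominated rows, assuming every column is to be maximised."""
    n = scores.shape[0]
    keep = []
    for i in range(n):
        dominated = any(np.all(scores[j] >= scores[i]) and np.any(scores[j] > scores[i])
                        for j in range(n) if j != i)
        if not dominated:
            keep.append(i)
    return np.array(keep)

# Hypothetical EE plans scored on (energy saved, -investment cost, supply security).
plans = np.array([[120, -40, 0.7],
                  [100, -25, 0.8],
                  [120, -60, 0.9],
                  [ 90, -50, 0.6]])   # last plan is dominated by the first
print(pareto_front(plans))           # [0 1 2]
```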
Preliminary survey on electric energy efficiency in Ethiopia:- Areas of ...
African Journals Online (AJOL)
In this paper the significance of electric energy efficiency improvement and major areas of loss in Ethiopia's electric power system are highlighted for further rigorous study. Major electric energy loss areas in the utility transmission and distribution systems and consumer premises are indicated. In the consumer area the loss ...
Polynomial Chaos Expansion Approach to Interest Rate Models
Directory of Open Access Journals (Sweden)
Luca Di Persio
2015-01-01
The Polynomial Chaos Expansion (PCE) technique allows us to recover a finite second-order random variable exploiting suitable linear combinations of orthogonal polynomials which are functions of a given stochastic quantity ξ, hence acting as a kind of random basis. The PCE methodology has been developed as a mathematically rigorous Uncertainty Quantification (UQ) method which aims at providing reliable numerical estimates for uncertain physical quantities defining the dynamics of certain engineering models and their related simulations. In the present paper, we use the PCE approach to analyze some equity and interest rate models. In particular, we consider models based on the Geometric Brownian Motion, the Vasicek model, and the CIR model. We present theoretical as well as related concrete numerical approximation results, considering, without loss of generality, the one-dimensional case. We also provide both an efficiency study and an accuracy study of our approach by comparing its outputs with the ones obtained adopting the Monte Carlo approach, both in its standard and its enhanced version.
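As a one-dimensional illustration of the PCE machinery with a standard Gaussian germ ξ: expand f(ξ) in probabilists' Hermite polynomials He_n, compute the coefficients c_n = E[f(ξ)He_n(ξ)]/n! by Gauss-Hermite quadrature, and read the mean and variance off the coefficients. For f(ξ) = exp(ξ) the exact coefficients c_n = √e/n! are known, which makes a convenient check; this is a generic sketch, not the paper's model setup:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, exp

# Expand f(xi) = exp(xi), xi ~ N(0,1), as f ≈ sum_n c_n He_n(xi),
# with c_n = E[f(xi) He_n(xi)] / n!  (since E[He_n^2] = n!).
order = 6
x, w = hermegauss(30)           # nodes/weights for the weight exp(-x^2 / 2)
w = w / np.sqrt(2 * np.pi)      # renormalise to the standard normal density

c = np.zeros(order + 1)
for n in range(order + 1):
    He_n = hermeval(x, [0.0] * n + [1.0])   # evaluate He_n at the nodes
    c[n] = np.sum(w * np.exp(x) * He_n) / factorial(n)

print(c)                                                 # numerical coefficients
print([sqrt(exp(1)) / factorial(n) for n in range(order + 1)])  # exact sqrt(e)/n!
mean = c[0]
var = sum(factorial(n) * c[n] ** 2 for n in range(1, order + 1))
print(mean, var)   # ~1.6487 (= e**0.5) and ~4.670 (exact e*(e-1), truncated)
```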
An Efficient Approach to Screening Epigenome-Wide Data
Directory of Open Access Journals (Sweden)
Meredith A. Ray
2016-01-01
Screening cytosine-phosphate-guanine dinucleotide (CpG) DNA methylation sites in association with some covariate(s) is desired due to high dimensionality. We incorporate surrogate variable analyses (SVAs) into ordinary or robust linear regressions and utilize training and testing samples for nested validation to screen CpG sites. SVA accounts for variation in the methylation not explained by the specified covariate(s) and adjusts for confounding effects. To make it easier for users, this screening method is built into a user-friendly R package, ttScreening, with efficient algorithms implemented. Various simulations were implemented to examine the robustness and sensitivity of the method compared to the classical approaches controlling for multiple testing: the false discovery rate (FDR)-based and Bonferroni-based methods. The proposed approach in general performs better and has the potential to control both type I and type II errors. We applied ttScreening to 383,998 CpG sites in association with maternal smoking, one of the leading factors for cancer risk.
Analyzing price and efficiency dynamics of large appliances with the experience curve approach
International Nuclear Information System (INIS)
Weiss, Martin; Patel, Martin K.; Junginger, Martin; Blok, Kornelis
2010-01-01
Large appliances are major power consumers in households of industrialized countries. Although their energy efficiency has been increasing substantially in past decades, still additional energy efficiency potentials exist. Energy policy that aims at realizing these potentials faces, however, growing concerns about possible adverse effects on commodity prices. Here, we address these concerns by applying the experience curve approach to analyze long-term price and energy efficiency trends of three wet appliances (washing machines, laundry dryers, and dishwashers) and two cold appliances (refrigerators and freezers). We identify a robust long-term decline in both specific price and specific energy consumption of large appliances. Specific prices of wet appliances decline at learning rates (LR) of 29±8% and thereby much faster than those of cold appliances (LR of 9±4%). Our results demonstrate that technological learning leads to substantial price decline, thus indicating that the introduction of novel and initially expensive energy efficiency technologies does not necessarily imply adverse price effects in the long term. By extending the conventional experience curve approach, we find a steady decline in the specific energy consumption of wet appliances (LR of 20-35%) and cold appliances (LR of 13-17%). Our analysis suggests that energy policy might be able to bend down energy experience curves. (author)
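An experience curve is fitted as P = P₀·x^(−b) for specific price P and cumulative production x, so the learning rate is LR = 1 − 2^(−b), the fractional price decline per doubling of cumulative production. A minimal log-log fit on hypothetical data constructed to mimic the ~28% learning rates reported for wet appliances:

```python
import numpy as np

# Hypothetical cumulative production (million units) and specific price (EUR),
# constructed so that each doubling of production cuts the price by ~28%.
cum_prod = np.array([10, 20, 40, 80, 160, 320], dtype=float)
price    = np.array([500, 360, 259, 187, 134, 97], dtype=float)

# Experience curve P = P0 * x**(-b) is linear in log-log space: slope = -b.
slope, intercept = np.polyfit(np.log2(cum_prod), np.log2(price), 1)
learning_rate = 1 - 2 ** slope   # LR = 1 - 2**(-b), decline per doubling
print(f"progress ratio = {2 ** slope:.3f}, learning rate = {learning_rate:.1%}")
```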
International Nuclear Information System (INIS)
Xu, Xin; Cui, Qiang
2017-01-01
This paper focuses on evaluating airline energy efficiency, which is first divided into four stages: Operations Stage, Fleet Maintenance Stage, Services Stage and Sales Stage. The new four-stage network structure of airline energy efficiency is a modification of existing models. A new approach, integrating the Network Epsilon-based Measure with the Network Slacks-based Measure, is applied to assess the overall energy efficiency and divisional efficiency of 19 international airlines from 2008 to 2014. The influencing factors of airline energy efficiency are analyzed through regression analysis. The results indicate the following: 1. The integrated model can identify the benchmark airlines in the overall system and in each stage. 2. Most airlines' energy efficiencies remain steady during the period, except for some sharp fluctuations; the efficiency decreases are mainly concentrated in 2008-2011, affected by the financial crisis in the USA. 3. The average age of the fleet is positively correlated with overall energy efficiency, and each divisional efficiency has different significant influencing factors. - Highlights: • An integrated approach with Network Epsilon-based Measure and Network Slacks-based Measure is developed. • 19 airlines' energy efficiencies are evaluated. • Garuda Indonesia has the highest overall energy efficiency.
Whitley, Meredith A.
2014-01-01
While the quality and quantity of research on service-learning has increased considerably over the past 20 years, researchers as well as governmental and funding agencies have called for more rigor in service-learning research. One key variable in improving rigor is using relevant existing theories to improve the research. The purpose of this…
The energy efficiency paradox revisited through a partial observability approach
International Nuclear Information System (INIS)
Kounetas, Kostas; Tsekouras, Kostas
2008-01-01
The present paper examines the energy efficiency paradox in Greek manufacturing firms through a partial observability approach. The data set comes from a survey of 161 firms that had adopted energy-saving technologies. Maximum likelihood estimates arising from an incidental truncation model reveal that the adoption of energy-saving technologies is indeed strongly correlated with the returns on the assets required to undertake the corresponding investments. The sources of the energy efficiency paradox lie within a wide range of factors. Policy schemes that aim to increase the adoption rate of energy-saving technologies in manufacturing are significantly affected by differences in firm size. Finally, mixed policies seem to be more effective than policies that are only capital-subsidy or regulation oriented.
Rigorous Results for the Distribution of Money on Connected Graphs
Lanchier, Nicolas; Reed, Stephanie
2018-05-01
This paper is concerned with general spatially explicit versions of three stochastic models for the dynamics of money that have been introduced and studied numerically by statistical physicists: the uniform reshuffling model, the immediate exchange model and the model with saving propensity. All three models consist of systems of economical agents that consecutively engage in pairwise monetary transactions. Computer simulations performed in the physics literature suggest that, when the number of agents and the average amount of money per agent are large, the limiting distribution of money as time goes to infinity approaches the exponential distribution for the first model, the gamma distribution with shape parameter two for the second model and a distribution similar but not exactly equal to a gamma distribution whose shape parameter depends on the saving propensity for the third model. The main objective of this paper is to give rigorous proofs of these conjectures and also extend these conjectures to generalizations of the first two models and a variant of the third model that include local rather than global interactions, i.e., instead of choosing the two interacting agents uniformly at random from the system, the agents are located on the vertex set of a general connected graph and can only interact with their neighbors.
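The uniform reshuffling model takes only a few lines to simulate, which is how the physics conjectures the paper proves were originally generated. A minimal sketch with global (mean-field) interactions; for an exponential limit with mean 100 the median is 100·ln 2 ≈ 69.3 and the 90th percentile is 100·ln 10 ≈ 230.3, which the empirical quantiles should approach:

```python
import numpy as np

rng = np.random.default_rng(0)
agents, steps = 1_000, 200_000
money = np.full(agents, 100.0)     # every agent starts with the average amount

# Uniform reshuffling: a random pair pools its money and splits it uniformly.
for _ in range(steps):
    i, j = rng.integers(agents), rng.integers(agents)
    if i == j:
        continue
    pot = money[i] + money[j]
    share = rng.uniform() * pot
    money[i], money[j] = share, pot - share

print(money.mean(), np.quantile(money, [0.5, 0.9]))   # ~100.0, ~[69, 230]
```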
Burrus, Barri B.; Scott, Alicia Richmond
2012-01-01
Adolescent parents and their children are at increased risk for adverse short- and long-term health and social outcomes. Effective interventions are needed to support these young families. We studied the evidence base and found a dearth of rigorously evaluated programs. Strategies from successful interventions are needed to inform both intervention design and policies affecting these adolescents. The lack of rigorous evaluations may be attributable to inadequate emphasis on and sufficient funding for evaluation, as well as to challenges encountered by program evaluators working with this population. More rigorous program evaluations are urgently needed to provide scientifically sound guidance for programming and policy decisions. Evaluation lessons learned have implications for other vulnerable populations. PMID:22897541
McCaig, Chris; Begon, Mike; Norman, Rachel; Shankland, Carron
2011-03-01
Changing scale, for example, the ability to move seamlessly from an individual-based model to a population-based model, is an important problem in many fields. In this paper, we introduce process algebra as a novel solution to this problem in the context of models of infectious disease spread. Process algebra allows us to describe a system in terms of the stochastic behaviour of individuals, and is a technique from computer science. We review the use of process algebra in biological systems, and the variety of quantitative and qualitative analysis techniques available. The analysis illustrated here solves the changing scale problem: from the individual behaviour we can rigorously derive equations to describe the mean behaviour of the system at the level of the population. The biological problem investigated is the transmission of infection, and how this relates to individual interactions.
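For standard infection dynamics, the population-level equations such a derivation produces are the familiar mean-field ODEs. A minimal sketch integrating an SIR system with hypothetical rates, as one would do after deriving it from the individual-level process description:

```python
from scipy.integrate import solve_ivp

# Mean-field SIR equations of the kind derived from individual-level rules:
# an S-I contact infects at rate beta/N, an I individual recovers at rate gamma.
N, beta, gamma = 1000.0, 0.3, 0.1

def sir(t, y):
    S, I, R = y
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

sol = solve_ivp(sir, (0, 200), [N - 1, 1, 0])
S, I, R = sol.y[:, -1]
print(f"final size: {R / N:.2%} of the population ever infected")
```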
A rigorous derivation of gravitational self-force
International Nuclear Information System (INIS)
Gralla, Samuel E; Wald, Robert M
2008-01-01
There is general agreement that the MiSaTaQuWa equations should describe the motion of a 'small body' in general relativity, taking into account the leading order self-force effects. However, previous derivations of these equations have made a number of ad hoc assumptions and/or contain a number of unsatisfactory features. For example, all previous derivations have invoked, without proper justification, the step of 'Lorenz gauge relaxation', wherein the linearized Einstein equation is written in the form appropriate to the Lorenz gauge, but the Lorenz gauge condition is then not imposed, thereby making the resulting equations for the metric perturbation inequivalent to the linearized Einstein equations. (Such a 'relaxation' of the linearized Einstein equations is essential in order to avoid the conclusion that 'point particles' move on geodesics.) In this paper, we analyze the issue of 'particle motion' in general relativity in a systematic and rigorous way by considering a one-parameter family of metrics, g_ab(λ), corresponding to having a body (or black hole) that is 'scaled down' to zero size and mass in an appropriate manner. We prove that the limiting worldline of such a one-parameter family must be a geodesic of the background metric, g_ab(λ = 0). Gravitational self-force, as well as the force due to coupling of the spin of the body to curvature, then arises as a first-order perturbative correction in λ to this worldline. No assumptions are made in our analysis apart from the smoothness and limit properties of the one-parameter family of metrics, g_ab(λ). Our approach should provide a framework for systematically calculating higher order corrections to gravitational self-force, including higher multipole effects, although we do not attempt to go beyond first-order calculations here. The status of the MiSaTaQuWa equations is explained.
A Rigorous Investigation on the Ground State of the Penson-Kolb Model
Yang, Kai-Hua; Tian, Guang-Shan; Han, Ru-Qi
2003-05-01
By using either numerical calculations or analytical methods, such as the bosonization technique, the ground state of the Penson-Kolb model has previously been studied by several groups. Some physicists argued that, as far as the existence of superconductivity in this model is concerned, it is canonically equivalent to the negative-U Hubbard model; others did not agree. In the present paper, we investigate this model by an independent and rigorous approach. We show that the ground state of the Penson-Kolb model is nondegenerate and has a nonvanishing overlap with the ground state of the negative-U Hubbard model. Furthermore, we show that the ground states of both models have the same good quantum numbers and may have superconducting long-range order at the same momentum q = 0. Our results support the equivalence between these models. The project was partially supported by the Special Funds for Major State Basic Research Projects (G20000365) and the National Natural Science Foundation of China under Grant No. 10174002.
International Nuclear Information System (INIS)
Azadeh, A.; Amalnick, M.S.; Ghaderi, S.F.; Asadzadeh, S.M.
2007-01-01
This paper introduces an integrated approach based on data envelopment analysis (DEA), principal component analysis (PCA) and numerical taxonomy (NT) for total energy efficiency assessment and optimization in energy-intensive manufacturing sectors. The proposed approach considers structural indicators in addition to conventional consumption and manufacturing sector output indicators. The validity of the DEA model is verified and validated by PCA and NT through a Spearman correlation experiment. Moreover, the proposed approach uses the measure-specific super-efficiency DEA model for sensitivity analysis to determine the critical energy carriers. Four energy-intensive manufacturing sectors are discussed in this paper: iron and steel, pulp and paper, petroleum refining and cement manufacturing. To show its superiority and applicability, the proposed approach has been applied to refinery sub-sectors of some OECD (Organization for Economic Cooperation and Development) countries. This study has several unique features: (1) a total approach which considers structural indicators in addition to conventional energy efficiency indicators; (2) a verification and validation mechanism for DEA by PCA and NT; and (3) utilization of DEA for total energy efficiency assessment and consumption optimization of energy-intensive manufacturing sectors.
Econometric models for distinguishing between market-driven and publicly-funded energy efficiency
International Nuclear Information System (INIS)
Horowitz, Marvin J.
2005-01-01
Central to the problem of estimating energy program benefits is the necessity to differentiate changes in energy use that would have occurred in the absence of public programs from declines in energy use that would not have occurred but for those programs. The former changes are often referred to as naturally-occurring or market-driven effects. They occur due to a combination of one or more independent variables, such as changes in prices, incomes, weather, and technology. For a rigorous, scientifically-valid program evaluation, it is essential to first control for these variables before making statistical inferences related to public program effects. This paper describes the economic and statistical issues surrounding quantitative studies of energy use, energy efficiency, and public programs. To illustrate the strengths and weaknesses of different impact evaluation approaches, this paper describes three new studies related to electricity use in the U.S. commercial buildings sector. Specification and estimation of time series and cross section econometric models are discussed, as are their capabilities for obtaining long-run estimates of the net impacts of energy efficiency programs.
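A minimal sketch of the kind of control-variable regression this describes, with invented variable names and synthetic data; the point is only that the program coefficient is read net of the market-driven drivers:

```python
# Regress energy use on market-driven drivers (price, income, weather) plus
# a program indicator, so the program coefficient is net of
# naturally-occurring effects. All names and data are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "log_price": rng.normal(0, 0.2, n),
    "log_income": rng.normal(10, 0.3, n),
    "hdd": rng.normal(2000, 300, n),       # heating degree days (weather)
    "program": rng.integers(0, 2, n),      # public program participation
})
df["log_energy"] = (2.0 - 0.3 * df.log_price + 0.5 * df.log_income
                    + 0.0002 * df.hdd - 0.05 * df.program
                    + rng.normal(0, 0.1, n))

fit = smf.ols("log_energy ~ log_price + log_income + hdd + program", df).fit()
print(fit.params["program"])   # net program effect after controls
```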
An efficient hybrid technique in RCS predictions of complex targets at high frequencies
Algar, María-Jesús; Lozano, Lorena; Moreno, Javier; González, Iván; Cátedra, Felipe
2017-09-01
Most computer codes for Radar Cross Section (RCS) prediction use Physical Optics (PO) and the Physical Theory of Diffraction (PTD) combined with Geometrical Optics (GO) and the Geometrical Theory of Diffraction (GTD). The latter approaches are computationally cheaper and much more accurate for curved surfaces, but they are not applicable to all surfaces of a complex object because of caustic problems in the analysis of concave surfaces or flat surfaces in the far field. The main contribution of this paper is the development of a hybrid method based on a new combination of two asymptotic techniques, GTD and PO, retaining the advantages and avoiding the disadvantages of each. The new combination yields a very efficient and accurate method for analyzing the RCS of complex structures at high frequencies. The proposed method has been validated by comparing, for some simple cases, RCS results obtained with the proposed approach against those from the rigorous Method of Moments (MoM) technique. Some complex cases have been examined at high frequencies, contrasting the results with PO. This study shows the accuracy and efficiency of the hybrid method and its suitability for computing the RCS of very large and complex targets at high frequencies.
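As a quick worked example of the PO side of such a hybrid (not taken from the paper), the classic physical-optics broadside RCS of a flat plate follows σ = 4πA²/λ², which is precisely the flat-surface regime where PO is accurate and GTD struggles:

```python
# Broadside PO RCS of a flat plate: sigma = 4*pi*A^2/lambda^2.
import numpy as np

c = 3e8
f = 10e9                       # 10 GHz radar
lam = c / f                    # 3 cm wavelength
A = 1.0                        # 1 m^2 plate
sigma = 4 * np.pi * A**2 / lam**2
print(f"RCS = {sigma:.0f} m^2 = {10*np.log10(sigma):.1f} dBsm")  # ~41.4 dBsm
```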
New challenges of Japanese energy efficiency program by Top Runner approach
International Nuclear Information System (INIS)
Murakoshi, Chiharu; Nakagami, Hidetoshi; Tsuruda, Masanori; Edamura, Nobuhisa
2005-01-01
The Top Runner standards are the key energy efficiency program in Japan. Last year, TVs and VCRs reached the target year for improvement in efficiency, and have improved their efficiencies far beyond the original energy savings targets. In order to accelerate energy savings by the Top Runner approach, the Government of Japan is now planning to add new items and to strengthen the standards for TVs and VCRs. On the other hand, high prices of efficient appliances are considered by many to be the main factor preventing wider diffusion. In order to increase diffusion of efficient appliances by promoting sales, the e-Shop Commendation System was started in 2003 for retail stores 'that are excellent in promoting diffusion of energy-efficient appliances.' Until then it was quite rare to see products with e-Mark energy-efficiency labels attached, and no incentive was given to retail stores to recommend efficient appliances to customers. Under the e-Shop Commendation System, we evaluated comprehensive measures at retail stores, such as the sales ratio of products achieving the standards, the percentage of products with e-Mark labels attached, employee education programs, and the creation of original posters. As a result of starting the e-Shop Commendation System, e-Mark labels have come to be posted on almost all appliances, original posters have been produced, and sales staff receive instruction in selling points and in how to talk with customers. We describe the new challenges of the Top Runner program and the contents, evaluation method, and considerable effect of the e-Shop Commendation System for retail stores.
Bombaerts, G.; Nickel, P.J.
2017-01-01
We inquire how peer and tutor feedback influences students' optimal rigor, basic needs and motivation. We analyze questionnaires from two courses in two consecutive years. We conclude that feedback in blended learning can contribute to rigor and basic needs, but it is not clear from our data what
Pavlacky, David C; Lukacs, Paul M; Blakesley, Jennifer A; Skorkowsky, Robert C; Klute, David S; Hahn, Beth A; Dreitz, Victoria J; George, T Luke; Hanni, David J
2017-01-01
Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous statistical
Reciprocity relations in transmission electron microscopy: A rigorous derivation.
Krause, Florian F; Rosenauer, Andreas
2017-01-01
A concise derivation of the principle of reciprocity applied to realistic transmission electron microscopy setups is presented, making use of the multislice formalism. The equivalence of images acquired in conventional and scanning mode is thereby rigorously shown. The conditions for the applicability of the derived reciprocity relations are discussed. Furthermore, the positions of apertures in relation to the corresponding lenses are considered, a subject which has scarcely been addressed in previous publications. Copyright © 2016 Elsevier Ltd. All rights reserved.
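A minimal sketch of one multislice propagation step, the formalism the derivation builds on; the grid, wavelength, and random phase object are illustrative assumptions, not the authors' code:

```python
# One multislice step: transmit the wave through a thin slice, then
# propagate through vacuum by a Fresnel propagator applied in Fourier space.
import numpy as np

def multislice_step(psi, t, dz, lam, dx):
    """psi: incident wave; t: slice transmission function; dz: thickness."""
    n = psi.shape[0]
    k = np.fft.fftfreq(n, d=dx)                     # spatial frequencies
    kx, ky = np.meshgrid(k, k)
    propagator = np.exp(-1j * np.pi * lam * dz * (kx**2 + ky**2))
    return np.fft.ifft2(np.fft.fft2(psi * t) * propagator)

n = 256
psi0 = np.ones((n, n), dtype=complex)               # plane-wave illumination
t = np.exp(1j * 0.05 * np.random.default_rng(1).random((n, n)))  # weak phase
psi1 = multislice_step(psi0, t, dz=2e-10, lam=2.5e-12, dx=1e-11)
print(np.abs(psi1).max())
```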
Nevalainen, T J; Gavin, J B; Seelye, R N; Whitehouse, S; Donnell, M
1978-07-01
The effect of normal and artificially induced rigor mortis on the vascular passage of erythrocytes and fluid through isolated dog hearts was studied. Increased rigidity of 6-mm thick transmural sections through the centre of the posterior papillary muscle was used as an indication of rigor. The perfusibility of the myocardium was tested by injecting 10 ml of 1% sodium fluorescein in Hanks solution into the circumflex branch of the left coronary artery. In prerigor hearts (20 minute incubation) fluorescein perfused the myocardium evenly whether or not it was preceded by an injection of 10 ml of heparinized dog blood. Rigor mortis developed in all hearts after 90 minutes incubation or within 20 minutes of perfusing the heart with 50 ml of 5 mM iodoacetate in Hanks solution. Fluorescein injected into hearts in rigor did not enter the posterior papillary muscle and adjacent subendocardium whether or not it was preceded by heparinized blood. Thus the vascular occlusion caused by rigor in the dog heart appears to be so effective that it prevents flow into the subendocardium of small soluble ions such as fluorescein.
Statistics for mathematicians a rigorous first course
Panaretos, Victor M
2016-01-01
This textbook provides a coherent introduction to the main concepts and methods of one-parameter statistical inference. Intended for students of Mathematics taking their first course in Statistics, the focus is on Statistics for Mathematicians rather than on Mathematical Statistics. The goal is not to focus on the mathematical/theoretical aspects of the subject, but rather to provide an introduction to the subject tailored to the mindset and tastes of Mathematics students, who are sometimes turned off by the informal nature of Statistics courses. This book can be used as the basis for an elementary semester-long first course on Statistics with a firm sense of direction that does not sacrifice rigor. The deeper goal of the text is to attract the attention of promising Mathematics students.
DEFF Research Database (Denmark)
Riegels, Niels; Pulido-Velazquez, Manuel; Doulgeris, Charalampos
2013-01-01
Economic theory suggests that water pricing can contribute to efficient management of water scarcity. The European Union (EU) Water Framework Directive (WFD) is a major legislative effort to introduce the use of economic instruments to encourage efficient water use and achieve environmental management objectives. However, the design and implementation of economic instruments for water management, including water pricing, have emerged as a challenging aspect of WFD implementation. This study demonstrates the use of a systems analysis approach to designing and comparing two economic approaches to efficient management of groundwater and surface water given EU WFD ecological flow requirements. Under the first approach, all wholesale water users in a river basin face the same volumetric price for water. This water price does not vary in space or in time, and surface water and groundwater are priced...
The efficiency frontier approach to economic evaluation of health-care interventions.
Caro, J Jaime; Nord, Erik; Siebert, Uwe; McGuire, Alistair; McGregor, Maurice; Henry, David; de Pouvourville, Gérard; Atella, Vincenzo; Kolominsky-Rabas, Peter
2010-10-01
IQWiG commissioned an international panel of experts to develop methods for the assessment of the relation of benefits to costs in the German statutory health-care system. The panel recommended that IQWiG inform German decision makers of the net costs and value of additional benefits of an intervention in the context of relevant other interventions in that indication. To facilitate guidance regarding maximum reimbursement, this information is presented in an efficiency plot with costs on the horizontal axis and value of benefits on the vertical. The efficiency frontier links the interventions that are not dominated and provides guidance. A technology that lies on the frontier or to its left is reasonably efficient, while one falling to the right requires further justification for reimbursement at that price. This information does not automatically give the maximum reimbursement, as other considerations may be relevant. Given that the estimates are for a specific indication, they do not address priority setting across the health-care system. This approach informs decision makers about the efficiency of interventions, conforms to the mandate and is consistent with basic economic principles. Empirical testing of its feasibility and usefulness is required.
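A small sketch of how such a frontier can be constructed from (cost, benefit) pairs, dropping dominated and extended-dominated interventions; the data are invented:

```python
# Efficiency frontier: sort interventions by cost, drop strictly dominated
# points, then enforce concavity (extended dominance) so the frontier has
# diminishing incremental benefit per unit cost.
interventions = {                 # name: (net cost, value of benefit)
    "A": (100, 2.0), "B": (250, 3.5), "C": (180, 2.2),
    "D": (400, 4.0), "E": (300, 2.5),
}
pts = sorted(interventions.items(), key=lambda kv: kv[1][0])

frontier = []                     # (name, cost, benefit) triples
for name, (c, b) in pts:
    if frontier and b <= frontier[-1][2]:
        continue                  # dominated: costs more, gains no more
    frontier.append((name, c, b))
    while len(frontier) >= 3:     # extended dominance check
        (_, c1, b1), (_, c2, b2), (_, c3, b3) = frontier[-3:]
        if (b2 - b1) * (c3 - c2) <= (b3 - b2) * (c2 - c1):
            frontier.pop(-2)      # middle point lies below the chord
        else:
            break

print([n for n, _, _ in frontier])   # e.g. ['A', 'B', 'D']
```

A technology plotting to the right of the segments traced by these points would, in the panel's terms, require further justification at that price.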
Erikson, U; Misimi, E
2008-03-01
The changes in skin and fillet color of anesthetized and exhausted Atlantic salmon were determined immediately after killing, during rigor mortis, and after ice storage for 7 d. Skin color (CIE L*, a*, b*, and related values) was determined by a Minolta Chroma Meter. Roche SalmoFan Lineal and Roche Color Card values were determined by a computer vision method and a sensory panel. Before color assessment, the stress levels of the 2 fish groups were characterized in terms of white muscle parameters (pH, rigor mortis, and core temperature). The results showed that perimortem handling stress initially significantly affected several color parameters of skin and fillets. Significant transient fillet color changes also occurred in the prerigor phase and during the development of rigor mortis. Our results suggested that fillet color was affected by postmortem glycolysis (pH drop, particularly in anesthetized fillets), then by onset and development of rigor mortis. The color change patterns during storage were different for the 2 groups of fish. The computer vision method was considered suitable for automated (online) quality control and grading of salmonid fillets according to color.
International Nuclear Information System (INIS)
Lin, Boqiang; Du, Kerui
2014-01-01
The importance of technology heterogeneity in estimating economy-wide energy efficiency has been emphasized by recent literature. Some studies use the metafrontier analysis approach to estimate energy efficiency. However, such studies need reliable a priori information to divide the sample observations properly, which makes unbiased estimation of energy efficiency difficult. Moreover, separately estimating group-specific frontiers might lose some common information across different groups. In order to overcome these weaknesses, this paper introduces a latent class stochastic frontier approach to measure energy efficiency under heterogeneous technologies. An application of the proposed model to the Chinese energy economy is presented. Results show that the overall energy efficiency of China's provinces is not high, with an average score of 0.632 during the period from 1997 to 2010. - Highlights: • We introduce a latent class stochastic frontier approach to measure energy efficiency. • Ignoring technological heterogeneity would cause biased estimates of energy efficiency. • An application of the proposed model to the Chinese energy economy is presented. • There is still a long way for China to go to develop an energy efficient regime.
Efficient weakly-radiative wireless energy transfer: An EIT-like approach
International Nuclear Information System (INIS)
Hamam, Rafif E.; Karalis, Aristeidis; Joannopoulos, J.D.; Soljacic, Marin
2009-01-01
Inspired by a quantum interference phenomenon known in the atomic physics community as electromagnetically induced transparency (EIT), we propose an efficient weakly radiative wireless energy transfer scheme between two identical classical resonant objects, strongly coupled to an intermediate classical resonant object of substantially different properties, but with the same resonance frequency. The transfer mechanism essentially makes use of the adiabatic evolution of an instantaneous (so-called 'dark') eigenstate of the coupled 3-object system. Our analysis is based on temporal coupled mode theory (CMT), and is general enough to be valid for various possible sorts of coupling, including the resonant inductive coupling on which witricity-type wireless energy transfer is based. We show that in certain parameter regimes of interest, this scheme can be more efficient and/or less radiative than other, more conventional approaches. A concrete example of wireless energy transfer between capacitively-loaded metallic loops is illustrated at the beginning, as a motivation for the more general case. We also explore the performance of the proposed EIT-like scheme, in terms of improving efficiency and reducing radiation, as the relevant parameters of the system are varied.
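A hedged coupled-mode-theory sketch of the mechanism: two identical end resonators coupled through a lossy intermediate one, with the couplings swept in the counterintuitive (STIRAP-like) order so that the adiabatic dark state carries the energy. All parameter values are illustrative assumptions, not the paper's:

```python
# Three-object CMT with Gaussian coupling sweeps; the mid object is lossy,
# but the dark state has little amplitude on it, so losses stay small.
import numpy as np
from scipy.integrate import solve_ivp

T = 200.0
k0, gamma_mid = 1.0, 0.05            # peak coupling, intermediate-object loss

def rhs(t, v):
    a = v[:3] + 1j * v[3:]
    k1 = k0 * np.exp(-((t - 0.6 * T) / (0.2 * T))**2)   # source<->mid, later
    k2 = k0 * np.exp(-((t - 0.4 * T) / (0.2 * T))**2)   # mid<->device, first
    H = np.array([[0, k1, 0],
                  [k1, -1j * gamma_mid, k2],
                  [0, k2, 0]])
    dadt = -1j * H @ a
    return np.r_[dadt.real, dadt.imag]

a0 = np.r_[[1.0, 0, 0], [0, 0, 0]]                      # energy starts in source
sol = solve_ivp(rhs, (0, T), a0, rtol=1e-8, atol=1e-10)
aT = sol.y[:3, -1] + 1j * sol.y[3:, -1]
print(np.abs(aT)**2)   # most energy in object 3, little spent in lossy object 2
```

With these parameters most of the initial energy ends up in object 3 while the occupation of the lossy object 2 stays small, which is the dark-state mechanism in miniature.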
Energy Efficiency in Logistics: An Interactive Approach to Capacity Utilisation
Directory of Open Access Journals (Sweden)
Jessica Wehner
2018-05-01
Logistics operations are energy-consuming and impact the environment negatively. Improving energy efficiency in logistics is crucial for environmental sustainability and can be achieved by increasing the utilisation of capacity. This paper takes an interactive approach to capacity utilisation, to contribute to sustainable freight transport and logistics, by identifying its causes and mitigations. From the literature, a conceptual framework was developed to highlight the different system levels of the logistics system in which energy efficiency improvement potential can be found, summarised in the categories activities, actors, and areas. Through semi-structured interviews with representatives of nine companies, empirical data was collected to validate the framework of the causes of unutilised capacity and proposed mitigations. The results suggest that activities such as inflexibilities and limited information sharing, as well as actors' over-delivery of logistics services, incorrect price setting, and sales campaigns, can cause unutilised capacity, and that problem areas include, inter alia, poor integration of reverse logistics and the last mile. The paper contributes by categorising causes of unutilised capacity and linking them to mitigations in a framework, providing a critical view of fill rates, highlighting the need for a standardised approach to measuring environmental impact that enables comparison between companies, and underlining that costs are not an appropriate indicator for measuring environmental impact.
Rigorous results on measuring the quark charge below color threshold
International Nuclear Information System (INIS)
Lipkin, H.J.
1979-01-01
Rigorous theorems are presented showing that contributions from a color nonsinglet component of the current to matrix elements of a second order electromagnetic transition are suppressed by factors inversely proportional to the energy of the color threshold. Parton models which obtain matrix elements proportional to the color average of the square of the quark charge are shown to neglect terms of the same order of magnitude as terms kept.
Directory of Open Access Journals (Sweden)
Horban Vasylyna B.
2016-11-01
A theoretical rationale is presented for using the stakeholder-oriented approach to improve the management of sustainable energy efficient development at the local level. The evolution of theories by scientific schools that studied the concepts of «stakeholders» and «interested parties» is analyzed and generalized. A classification of types of stakeholders along eighteen typological features is suggested, which allows their interests to be aligned more effectively and contributes to establishing constructive forms of cooperation in order to achieve efficient final results. An algorithm of interaction with interested parties in achieving the goals of sustainable energy efficient development at the local level is elaborated. Typical motivational interests of stakeholders at the local level in the field of sustainable energy efficient development (on the example of Ukraine) are identified. Instruments for prioritizing stakeholders depending on the life cycle stages of energy efficiency projects are proposed. The results obtained in the course of the research can be used to develop local energy efficiency programs, business plans and feasibility studies for energy efficiency projects.
Rigor mortis development in turkey breast muscle and the effect of electrical stunning.
Alvarado, C Z; Sams, A R
2000-11-01
Rigor mortis development in turkey breast muscle and the effect of electrical stunning on this process are not well characterized. Some electrical stunning procedures have been known to inhibit postmortem (PM) biochemical reactions, thereby delaying the onset of rigor mortis in broilers. Therefore, this study was designed to characterize rigor mortis development in stunned and unstunned turkeys. A total of 154 turkey toms in two trials were conventionally processed at 20 to 22 wk of age. Turkeys were either stunned with a pulsed direct current (500 Hz, 50% duty cycle) at 35 mA (40 V) in a saline bath for 12 seconds or left unstunned as controls. At 15 min and 1, 2, 4, 8, 12, and 24 h PM, pectoralis samples were collected to determine pH, R-value, L* value, sarcomere length, and shear value. In Trial 1, the samples obtained for pH, R-value, and sarcomere length were divided into surface and interior samples. There were no significant differences between the surface and interior samples among any parameters measured. Muscle pH significantly decreased over time in stunned and unstunned birds through 2 h PM. The R-values increased to 8 h PM in unstunned birds and 24 h PM in stunned birds. The L* values increased over time, with no significant differences after 1 h PM for the controls and 2 h PM for the stunned birds. Sarcomere length increased through 2 h PM in the controls and 12 h PM in the stunned fillets. Cooked meat shear values decreased through the 1 h PM deboning time in the control fillets and 2 h PM in the stunned fillets. These results suggest that stunning delayed the development of rigor mortis through 2 h PM, but had no significant effect on the measured parameters at later time points, and that deboning turkey breasts at 2 h PM or later will not significantly impair meat tenderness.
Energy Technology Data Exchange (ETDEWEB)
Galarraga, Ibon, E-mail: ibon.galarraga@bc3research.org; Gonzalez-Eguino, Mikel, E-mail: mikel.gonzalez@bc3research.org; Markandya, Anil, E-mail: anil.markandya@bc3research.org
2011-12-15
This article proposes a combined approach for estimating willingness to pay for the attributes represented by energy efficiency labels and providing reliable price elasticities of demand (own and cross) for close substitutes (e.g. those with low energy efficiency and those with higher energy efficiency). This is done by using the results of the hedonic approach together with the Quantity Based Demand System (QBDS) model. The elasticity results obtained with the latter are then compared with those simulated using the Linear Almost Ideal Demand System (LA/AIDS). The methodology is applied to the dishwasher market in Spain: it is found that 15.6% of the final price is actually paid for the energy efficiency attribute. This accounts for about Euro 80 of the average market price. The elasticity results confirm that energy efficient appliances are more price elastic than regular ones. - Highlights: > The article shows a combined approach for estimating willingness to pay for energy efficiency labels and price elasticities. > The results of the hedonic approach is used together with the Quantity Based Demand System (QBDS) model. > The elasticity results are compared with those simulated using the Linear Almost Ideal Demand System (LA/AIDS). > The methodology is applied to the dishwasher market in Spain.
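A sketch of the hedonic step, with invented data in which a label premium is planted by construction; the 0.156 below echoes the paper's 15.6% figure only because it is planted, and nothing here reproduces the Spanish dishwasher dataset:

```python
# Hedonic regression: log price on appliance attributes including an
# efficiency-label dummy; the label coefficient is (approximately) the
# share of price paid for the efficiency attribute. Names and data invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 400
df = pd.DataFrame({
    "label_A": rng.integers(0, 2, n),       # high-efficiency label dummy
    "capacity": rng.normal(12, 2, n),       # place settings
    "noise_db": rng.normal(46, 3, n),
})
df["log_price"] = (5.5 + 0.156 * df.label_A + 0.03 * df.capacity
                   - 0.01 * df.noise_db + rng.normal(0, 0.1, n))

fit = smf.ols("log_price ~ label_A + capacity + noise_db", df).fit()
print(fit.params["label_A"])   # recovers ~0.156 by construction
```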
Rigorous Integration of Non-Linear Ordinary Differential Equations in Chebyshev Basis
Czech Academy of Sciences Publication Activity Database
Dzetkulič, Tomáš
2015-01-01
Roč. 69, č. 1 (2015), s. 183-205 ISSN 1017-1398 R&D Projects: GA MŠk OC10048; GA ČR GD201/09/H057 Institutional research plan: CEZ:AV0Z10300504 Keywords : Initial value problem * Rigorous integration * Taylor model * Chebyshev basis Subject RIV: IN - Informatics, Computer Science Impact factor: 1.366, year: 2015
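Only the metadata of this record survives, so the following is a loosely related, non-rigorous illustration of the Chebyshev-basis idea named in the keywords: represent an ODE solution by a Chebyshev series and check its residual. A genuinely rigorous integrator of the kind referenced would additionally carry validated interval remainders, which this sketch omits:

```python
# Fit a Chebyshev series to the solution of y' = -y^2, y(0) = 1 on [0, 1]
# (exact solution 1/(1+t)) and check how well the series satisfies the ODE.
import numpy as np
from numpy.polynomial import chebyshev as C
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, y: -y**2, (0, 1), [1.0], dense_output=True,
                rtol=1e-12, atol=1e-12)

t = 0.5 * (np.cos(np.pi * np.arange(33) / 32) + 1)   # Chebyshev points on [0,1]
coef = C.chebfit(2 * t - 1, sol.sol(t)[0], 16)       # series in x = 2t - 1

tt = np.linspace(0, 1, 201)
y = C.chebval(2 * tt - 1, coef)
dy = C.chebval(2 * tt - 1, C.chebder(coef)) * 2      # chain rule dt -> x
print(np.max(np.abs(dy + y**2)))                     # small residual
```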
Stochastic Boolean networks: An efficient approach to modeling gene regulatory networks
Directory of Open Access Journals (Sweden)
Liang Jinghang
2012-08-01
network inferred from a T cell immune response dataset. An SBN can also implement the function of an asynchronous PBN and is potentially useful in a hybrid approach in combination with a continuous or single-molecule level stochastic model. Conclusions: Stochastic Boolean networks (SBNs) are proposed as an efficient approach to modelling gene regulatory networks (GRNs). The SBN approach is able to recover biologically-proven regulatory behaviours, such as the oscillatory dynamics of the p53-Mdm2 network and the dynamic attractors in a T cell immune response network. The proposed approach can further predict the network dynamics when the genes are under perturbation, thus providing biologically meaningful insights for a better understanding of the dynamics of GRNs. The algorithms and methods described in this paper have been implemented in Matlab packages, which are attached as Additional files.
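A minimal toy of the underlying idea, a probabilistic/stochastic Boolean network simulated to estimate its long-run behaviour; the 3-gene rules are invented and this is not the paper's Matlab implementation:

```python
# Each gene updates by one of several Boolean functions, chosen at random
# with fixed probabilities; long-run state frequencies approximate the
# steady-state distribution.
import numpy as np
from collections import Counter

rng = np.random.default_rng(7)

# per gene: list of (probability, boolean function of the full state)
rules = {
    0: [(1.0, lambda s: s[1] and not s[2])],
    1: [(0.7, lambda s: s[0] or s[2]), (0.3, lambda s: not s[0])],
    2: [(1.0, lambda s: not s[1])],
}

def step(state):
    new = list(state)
    for g, options in rules.items():
        probs = [p for p, _ in options]
        f = options[rng.choice(len(options), p=probs)][1]
        new[g] = f(state)
    return tuple(new)

state, visits = (1, 0, 0), Counter()
for _ in range(20000):
    state = step(state)
    visits[state] += 1
print(visits.most_common(3))    # approximate steady-state distribution
```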
Stochastic Geometry and Quantum Gravity: Some Rigorous Results
Zessin, H.
The aim of these lectures is a short introduction into some recent developments in stochastic geometry which have one of their origins in simplicial gravity theory (see Regge Nuovo Cimento 19: 558-571, 1961). The aim is to define and construct rigorously point processes on spaces of Euclidean simplices in such a way that the configurations of these simplices are simplicial complexes. The main interest then is concentrated on their curvature properties. We illustrate certain basic ideas from a mathematical point of view. An excellent presentation of this area can be found in Schneider and Weil (Stochastic and Integral Geometry, Springer, Berlin, 2008. German edition: Stochastische Geometrie, Teubner, 2000). In Ambjørn et al. (Quantum Geometry, Cambridge University Press, Cambridge, 1997) you find a beautiful account from the physical point of view. More recent developments in this direction can be found in Ambjørn et al. ("Quantum gravity as sum over spacetimes", Lect. Notes Phys. 807. Springer, Heidelberg, 2010). After an informal axiomatic introduction into the conceptual foundations of Regge's approach, the first lecture recalls the concepts and notations used. It presents the fundamental zero-infinity law of stochastic geometry and the construction of cluster processes based on it. The second lecture presents the main mathematical object, i.e. Poisson-Delaunay surfaces possessing an intrinsic random metric structure. The third and fourth lectures discuss their ergodic behaviour and present the two-dimensional Regge model of pure simplicial quantum gravity. We conclude with the formulation of basic open problems. Proofs are given in detail only in a few cases. In general the main ideas are developed.
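A plain simulation sketch of the basic object in these lectures, a Poisson point process and its Delaunay triangulation, whose simplices form a random simplicial complex (nothing rigorous here):

```python
# Sample a Poisson point process on the unit square and triangulate it.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(3)
intensity, area = 100, 1.0
n = rng.poisson(intensity * area)          # Poisson number of points
pts = rng.random((n, 2))                   # uniform positions on the window
tri = Delaunay(pts)
print(n, "points,", len(tri.simplices), "Delaunay triangles")
```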
Rigorous quantum limits on monitoring free masses and harmonic oscillators
Roy, S. M.
2018-03-01
There are heuristic arguments proposing that the accuracy of monitoring the position of a free mass m is limited by the standard quantum limit (SQL): σ²(X(t)) ≥ σ²(X(0)) + (t²/m²)σ²(P(0)) ≥ ℏt/m, where σ²(X(t)) and σ²(P(t)) denote variances of the Heisenberg representation position and momentum operators. Yuen [Phys. Rev. Lett. 51, 719 (1983), 10.1103/PhysRevLett.51.719] discovered that there are contractive states for which this result is incorrect. Here I prove universally valid rigorous quantum limits (RQL), viz. rigorous upper and lower bounds on σ²(X(t)) in terms of σ²(X(0)) and σ²(P(0)), given by Eq. (12) for a free mass and by Eq. (36) for an oscillator. I also obtain the maximally contractive and maximally expanding states which saturate the RQL, and use the contractive states to set up an Ozawa-type measurement theory with accuracies respecting the RQL but beating the standard quantum limit. The contractive states for oscillators improve on the Schrödinger coherent states of constant variance and may be useful for gravitational wave detection and optical communication.
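The Heisenberg-picture algebra behind the contractive-state loophole is standard and worth stating; schematically (this is textbook material, not the paper's Eq. (12)):

```latex
% For a free mass, X(t) = X(0) + t P(0)/m, hence
\sigma^{2}\!\big(X(t)\big) = \sigma^{2}\!\big(X(0)\big)
  + \frac{t^{2}}{m^{2}}\,\sigma^{2}\!\big(P(0)\big)
  + \frac{t}{m}\,\big\langle \{\Delta X(0),\,\Delta P(0)\} \big\rangle .
```

The SQL tacitly assumes the anticommutator (correlation) term is non-negative; a contractive state makes it negative, so σ²(X(t)) can initially decrease, and the RQL of the paper bound exactly how negative that correlation can be.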
Parent Management Training-Oregon Model: Adapting Intervention with Rigorous Research.
Forgatch, Marion S; Kjøbli, John
2016-09-01
Parent Management Training-Oregon Model (PMTO®) is a set of theory-based parenting programs with status as evidence-based treatments. PMTO has been rigorously tested in efficacy and effectiveness trials in different contexts, cultures, and formats. Parents, the presumed agents of change, learn core parenting practices, specifically skill encouragement, limit setting, monitoring/supervision, interpersonal problem solving, and positive involvement. The intervention effectively prevents and ameliorates children's behavior problems by replacing coercive interactions with positive parenting practices. Delivery format includes sessions with individual families in agencies or families' homes, parent groups, and web-based and telehealth communication. Mediational models have tested parenting practices as mechanisms of change for children's behavior and found support for the theory underlying PMTO programs. Moderating effects include children's age, maternal depression, and social disadvantage. The Norwegian PMTO implementation is presented as an example of how PMTO has been tailored to reach diverse populations as delivered by multiple systems of care throughout the nation. An implementation and research center in Oslo provides infrastructure and promotes collaboration between practitioners and researchers to conduct rigorous intervention research. Although evidence-based and tested within a wide array of contexts and populations, PMTO must continue to adapt to an ever-changing world. © 2016 Family Process Institute.
Rigorous simulations of a helical core fiber by the use of transformation optics formalism.
Napiorkowski, Maciej; Urbanczyk, Waclaw
2014-09-22
We report for the first time on rigorous numerical simulations of a helical-core fiber using a full vectorial method based on the transformation optics formalism. We modeled the dependence of the circular birefringence of the fundamental mode on the helix pitch and analyzed the birefringence increase caused by the mode displacement induced by the core twist. Furthermore, we analyzed the complex field evolution versus the helix pitch in the first-order modes, including polarization and intensity distribution. Finally, we show that the rigorous vectorial method allows better prediction of the confinement loss of the guided modes than approximate methods based on equivalent in-plane bending models.
Energy-efficient and safe driving using a situation-aware gamification approach in logistics
Klemke, Roland; Kravcik, Milos; Bohuschke, Felix
2013-01-01
Klemke, R., Kravčík, M., & Bohuschke, F. (2013, 23-25 October). Energy-efficient and safe driving using a situation-aware gamification approach in logistics. Presentation at the Games and Learning Alliance Conference (GALAConf 2013), Paris, France. http://www.galaconf.org/
Modelling efficient innovative work: integration of economic and social psychological approaches
Directory of Open Access Journals (Sweden)
Babanova Yulia
2017-01-01
The article addresses the integration of economic and social psychological approaches to enhancing the efficiency of innovation management. The content, features and specifics of the modelling methods within each approach are described, and options for integration are considered. The economic approach consists in an integrated matrix concept of managing the innovative development of an enterprise, aligned with the stages of innovative work, and in the use of an integrated vector method for evaluating an enterprise's level of innovative development. The social psychological approach consists in a system of psychodiagnostic indexes of activity resources within the scope of a psychological innovation audit of enterprise management, and in modelling methods for the balance of activity trends. Modelling the activity resources is based on a system of equations accounting for the interaction type of the psychodiagnostic indexes. Integration of the two approaches spans a methodological level, a level of empirical studies, and modelling methods. Options are suggested for integrating the economic and psychological approaches to analyze the material and non-material resources of enterprises' innovative work and to forecast an optimal development option based on the implemented modelling methods.
Stupl, Jan; Faber, Nicolas; Foster, Cyrus; Yang, Fan Yang; Nelson, Bron; Aziz, Jonathan; Nuttall, Andrew; Henze, Chris; Levit, Creon
2014-01-01
This paper provides an updated efficiency analysis of the LightForce space debris collision avoidance scheme. LightForce aims to prevent collisions on warning by utilizing photon pressure from ground based, commercial off the shelf lasers. Past research has shown that a few ground-based systems consisting of 10 kilowatt class lasers directed by 1.5 meter telescopes with adaptive optics could lower the expected number of collisions in Low Earth Orbit (LEO) by an order of magnitude. Our simulation approach utilizes the entire Two Line Element (TLE) catalogue in LEO for a given day as initial input. Least-squares fitting of a TLE time series is used for an improved orbit estimate. We then calculate the probability of collision for all LEO objects in the catalogue for a time step of the simulation. The conjunctions that exceed a threshold probability of collision are then engaged by a simulated network of laser ground stations. After those engagements, the perturbed orbits are used to re-assess the probability of collision and evaluate the efficiency of the system. This paper describes new simulations with three updated aspects: 1) By utilizing a highly parallel simulation approach employing hundreds of processors, we have extended our analysis to a much broader dataset. The simulation time is extended to one year. 2) We analyze not only the efficiency of LightForce on conjunctions that naturally occur, but also take into account conjunctions caused by orbit perturbations due to LightForce engagements. 3) We use a new simulation approach that is regularly updating the LightForce engagement strategy, as it would be during actual operations. In this paper we present our simulation approach to parallelize the efficiency analysis, its computational performance and the resulting expected efficiency of the LightForce collision avoidance system. Results indicate that utilizing a network of four LightForce stations with 20 kilowatt lasers, 85% of all conjunctions with a
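A sketch of the screening quantity such simulations revolve around: the probability of collision from a 2-D Gaussian miss-distance distribution in the encounter plane, integrated over the combined hard-body circle. The numbers are illustrative and this is not the LightForce code:

```python
# Numerically integrate a diagonal-covariance 2-D Gaussian (centred at the
# nominal miss vector) over a disc of the combined hard-body radius.
import numpy as np

def collision_probability(miss, sigma_x, sigma_y, radius, n=400):
    x = np.linspace(-radius, radius, n)
    xx, yy = np.meshgrid(x, x)
    inside = xx**2 + yy**2 <= radius**2
    px = np.exp(-0.5 * ((xx - miss[0]) / sigma_x)**2) / (sigma_x * np.sqrt(2*np.pi))
    py = np.exp(-0.5 * ((yy - miss[1]) / sigma_y)**2) / (sigma_y * np.sqrt(2*np.pi))
    dA = (x[1] - x[0])**2
    return float(np.sum(px * py * inside) * dA)

# 50 m nominal miss, 100 m / 300 m position uncertainty, 10 m combined radius
print(collision_probability((50.0, 0.0), 100.0, 300.0, 10.0))
```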
A rigorous proof of the Landau-Peierls formula and much more
DEFF Research Database (Denmark)
Briet, Philippe; Cornean, Horia; Savoie, Baptiste
2012-01-01
We present a rigorous mathematical treatment of the zero-field orbital magnetic susceptibility of a non-interacting Bloch electron gas, at fixed temperature and density, for both metals and semiconductors/insulators. In particular, we obtain the Landau-Peierls formula in the low temperature and density limit, as conjectured by Kjeldaas and Kohn (Phys Rev 105:806–813, 1957).
The Sources of Efficiency of the Nigerian Banking Industry: A Two- Stage Approach
Directory of Open Access Journals (Sweden)
Frances Obafemi
2013-11-01
The paper employed a two-stage Data Envelopment Analysis (DEA) approach to examine the sources of technical efficiency in the Nigerian banking sub-sector. Using a cross section of commercial and merchant banks, the study showed that the Nigerian banking industry was not efficient in both the pre- and post-liberalization eras. The study further revealed that market share was the strongest determinant of technical efficiency in the Nigerian banking industry. Thus, appropriate macroeconomic policy, institutional development and structural reforms must accompany financial liberalization to create the stable environment required for it to succeed. Hence, the present bank consolidation and reforms by the Central Bank of Nigeria, which started with Soludo and continued with Sanusi, are considered necessary, especially in the areas of e-banking and reorganizing the management of banks.
International Nuclear Information System (INIS)
Pollitt, Michael
2005-01-01
Electricity regulators around the world make use of efficiency analysis (or benchmarking) to produce estimates of the likely amount of cost reduction which regulated electric utilities can achieve. This short paper examines the use of such efficiency estimates by the UK electricity regulator (Ofgem) within electricity distribution and transmission price reviews. It highlights the place of efficiency analysis within the calculation of X factors. We suggest a number of problems with the current approach and make suggestions for the future development of X factor setting.
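A toy illustration of how an efficiency estimate feeds into price control of the RPI-X type (schematic arithmetic only, not Ofgem's actual calculation):

```python
# Under RPI-X style regulation, allowed revenue grows with inflation minus
# the efficiency factor X, i.e. it falls by X percent per year in real terms.
inflation, X = 0.03, 0.02          # 3% inflation, 2% efficiency target
revenue = 100.0
for year in range(1, 6):
    revenue *= (1 + inflation - X)
    print(year, round(revenue, 2))  # ~1% nominal growth: a real cut of X
```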
Unmet Need: Improving mHealth Evaluation Rigor to Build the Evidence Base.
Mookherji, Sangeeta; Mehl, Garrett; Kaonga, Nadi; Mechael, Patricia
2015-01-01
mHealth, the use of mobile technologies for health, is a growing element of health system activity globally, but evaluation of those activities remains scant and is an important knowledge gap for advancing mHealth activities. In 2010, the World Health Organization and Columbia University implemented a small-scale survey to generate preliminary data on evaluation activities used by mHealth initiatives. The authors describe self-reported data from 69 projects in 29 countries. The majority (74%) reported some sort of evaluation activity, primarily nonexperimental in design (62%). The authors developed a 6-point scale of evaluation rigor comprising information on use of comparison groups, sample size calculation, data collection timing, and randomization. The mean score was low (2.4); half (47%) were conducting evaluations with a minimum threshold (4+) of rigor, indicating use of a comparison group, while less than 20% had randomized the mHealth intervention. The authors were unable to assess whether the rigor score was appropriate for the type of mHealth activity being evaluated. What was clear was that although most data came from mHealth pilot projects aiming for scale-up, few had designed evaluations that would support crucial decisions on whether to scale up and how. Whether the mHealth activity is a strategy to improve health or a tool for achieving intermediate outcomes that should lead to better health, mHealth evaluations must be improved to generate robust evidence for cost-effectiveness assessment and to allow for accurate identification of the contribution of mHealth initiatives to health systems strengthening and the impact on actual health outcomes.
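A hedged reconstruction of how such a 6-point scale might be scored; the abstract does not give the exact point allocation, so the weights below are assumptions chosen only to match the named components and the 4+ threshold:

```python
# Hypothetical scoring: points for comparison group, sample-size calculation,
# pre/post data collection timing, and randomization (weights assumed).
def rigor_score(comparison_group, sample_size_calc, baseline_and_endline,
                randomized):
    score = 0
    score += 2 if comparison_group else 0
    score += 1 if sample_size_calc else 0
    score += 1 if baseline_and_endline else 0
    score += 2 if randomized else 0
    return score                       # 0..6; >= 4 is the minimum threshold

print(rigor_score(True, True, True, False))   # 4: meets threshold, no RCT
```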
McNamara, J P
2012-06-01
The role of the dairy cow is to help provide high-quality protein and other nutrients for humans. We must select and manage cows with the goal of reaching the greatest possible efficiency for any given environment. We have increased efficiency tremendously over the years, yet the variation in productive and reproductive efficiency among animals is still quite large. In part this is because of a lack of full integration of genetic, nutritional, and reproductive biology into management decisions. However, integration across these disciplines is increasing as biological research findings show more specific control points at which genetics, nutrition, and reproduction interact. An ordered systems biology approach that focuses on why and how cells regulate energy and N use and on how and why organs interact by endocrine and neurocrine mechanisms will speed improvements in efficiency. More sophisticated dairy managers will demand better information to improve the efficiency of their animals. Using genetic improvement and proper animal management to improve milk productive and reproductive efficiency requires a deeper understanding of metabolic processes during the transition period. Using existing metabolic models, we can design experiments specifically to integrate new data from transcriptional arrays into models that describe nutrient use in farm animals. A systems modeling approach can help focus our research to make faster and large advances in efficiency and show directly how this can be applied on the farms.
Estimation of the time since death--reconsidering the re-establishment of rigor mortis.
Anders, Sven; Kunz, Michaela; Gehl, Axel; Sehner, Susanne; Raupach, Tobias; Beck-Bornholdt, Hans-Peter
2013-01-01
In forensic medicine, the data underlying the phenomenon of re-establishment of rigor mortis after mechanical loosening are poorly defined; the phenomenon is thought to occur up to 8 h post-mortem and is used in establishing time since death in forensic casework. Nevertheless, the method is widely described in textbooks on forensic medicine. We examined 314 joints (elbow and knee) of 79 deceased at defined time points up to 21 h post-mortem (hpm). Data were analysed using a random intercept model. Here, we show that re-establishment occurred in 38.5% of joints at 7.5 to 19 hpm. Therefore, the maximum time span for the re-establishment of rigor mortis appears to be 2.5-fold longer than previously thought. These findings have major impact on the estimation of time since death in forensic casework.
[A motivational approach of cognitive efficiency in nursing home residents].
Clément, Evelyne; Vivicorsi, Bruno; Altintas, Emin; Guerrien, Alain
2014-06-01
Despite a widespread concern with self-determined motivation (behavior engaged in "out of pleasure" or "out of choice and valued as being important") and psychological adjustment in later life (well-being, satisfaction in life, meaning of life, or self-esteem), very little is known about the existence and nature of the links between self-determined motivation and cognitive efficiency. The aim of the present study was to investigate these links in nursing home residents in the framework of Self-determination theory (SDT) (Deci & Ryan, 2002), in which the motivational profile of a person is determined by the combination of different kinds of motivation. We hypothesized that self-determined motivation would lead to higher cognitive efficiency. Participants: 39 elderly nursing home residents (32 women, 7 men; mean age 83.6 ± 9.3 years) without any neurological or psychiatric disorders (DSM IV) or depression or anxiety (Hamilton depression rating scales) were included in the study. Methods: Cognitive efficiency was evaluated by two brief neuropsychological tests, the Mini mental state examination (MMSE) and the Frontal assessment battery (FAB). The motivational profile was assessed by the Elderly motivation scale (Vallerand & O'Connor, 1991), which includes four subscales assessing self- and non-self-determined motivation to engage oneself in different domains of daily life activity. Results: The neuropsychological scores were positively and significantly correlated with self-determined extrinsic motivation (behavior engaged in "out of choice" and valued as being important), and the global self-determination index (self-determined motivational profile) was the best predictor of cognitive efficiency. Conclusion: The results support the interest of SDT for a qualitative assessment of the motivation of elderly people and suggest that a motivational approach to cognitive efficiency could help to interpret cognitive performances exhibited during neuropsychological
College Readiness in California: A Look at Rigorous High School Course-Taking
Gao, Niu
2016-01-01
Recognizing the educational and economic benefits of a college degree, education policymakers at the federal, state, and local levels have made college preparation a priority. There are many ways to measure college readiness, but one key component is rigorous high school coursework. California has not yet adopted a statewide college readiness…
Rigorous Training of Dogs Leads to High Accuracy in Human Scent Matching-To-Sample Performance.
Directory of Open Access Journals (Sweden)
Sophie Marchal
Human scent identification is based on a matching-to-sample task in which trained dogs are required to compare a scent sample collected from an object found at a crime scene to that of a suspect. Based on dogs' greater olfactory ability to detect and process odours, this method has been used in forensic investigations to identify the odour of a suspect at a crime scene. The excellent reliability and reproducibility of the method largely depend on rigor in dog training. The present study describes the various steps of training that lead to high sensitivity scores, with dogs matching samples with 90% efficiency when the complexity of the scents presented in the sample is similar to that presented in the lineups, and specificity reaching a ceiling, with no false alarms in human scent matching-to-sample tasks. This high level of accuracy ensures reliable results in judicial human scent identification tests. Also, our data should convince law enforcement authorities to use these results as official forensic evidence when dogs are trained appropriately.
How to Map Theory: Reliable Methods Are Fruitless Without Rigorous Theory.
Gray, Kurt
2017-09-01
Good science requires both reliable methods and rigorous theory. Theory allows us to build a unified structure of knowledge, to connect the dots of individual studies and reveal the bigger picture. Some have criticized the proliferation of pet "Theories," but generic "theory" is essential to healthy science, because questions of theory are ultimately those of validity. Although reliable methods and rigorous theory are synergistic, Action Identification suggests psychological tension between them: The more we focus on methodological details, the less we notice the broader connections. Therefore, psychology needs to supplement training in methods (how to design studies and analyze data) with training in theory (how to connect studies and synthesize ideas). This article provides a technique for visually outlining theory: theory mapping. Theory mapping contains five elements, which are illustrated with moral judgment and with cars. Also included are 15 additional theory maps provided by experts in emotion, culture, priming, power, stress, ideology, morality, marketing, decision-making, and more (see all at theorymaps.org). Theory mapping provides both precision and synthesis, which helps to resolve arguments, prevent redundancies, assess the theoretical contribution of papers, and evaluate the likelihood of surprising effects.
From properties to materials: An efficient and simple approach.
Huwig, Kai; Fan, Chencheng; Springborg, Michael
2017-12-21
We present an inverse-design method, the poor man's materials optimization, that is designed to identify materials within a very large class with optimized values for a pre-chosen property. The method combines an efficient genetic-algorithm-based optimization, an automatic approach for generating modified molecules, a simple approach for calculating the property of interest, and a mathematical formulation of the quantity whose value shall be optimized. In order to illustrate the performance of our approach, we study the properties of organic molecules related to those used in dye-sensitized solar cells, whereby we, for the sake of proof of principle, consider benzene as a simple test system. Using a genetic algorithm, the substituents attached to the organic backbone are varied and the best performing molecules are identified. We consider several properties to describe the performance of organic molecules, including the HOMO-LUMO gap, the sunlight absorption, the spatial distance of the orbitals, and the reorganisation energy. The results show that our method is able to identify a large number of good candidate structures within a short time. In some cases, chemical/physical intuition can be used to rationalize the substitution pattern of the best structures, although this is not always possible. The present investigations provide a solid foundation for dealing with more complex and technically relevant systems such as porphyrins. Furthermore, our "properties first, materials second" approach is not limited to solar-energy harvesting but can be applied to many other fields, as is briefly discussed in the paper.
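A toy sketch of the search loop described: a genetic algorithm over substituent patterns with a stand-in property model. The real fitness would come from the electronic-structure properties named above; everything numeric here is a placeholder:

```python
# A "molecule" is a pattern of substituents on six backbone sites; the GA
# evolves patterns toward a target property value via selection, uniform
# crossover, and point mutation. Property model is a hypothetical stand-in.
import numpy as np

rng = np.random.default_rng(0)
SUBS = 4                       # e.g. -H, -CH3, -OH, -CN encoded as 0..3
TARGET = 7.0                   # desired (arbitrary) property value

def prop(mol):                 # placeholder property model
    return float(np.sum(mol * np.array([0.5, 1.0, 1.5, 0.7, 1.2, 0.9])))

def fitness(mol):
    return -abs(prop(mol) - TARGET)

pop = rng.integers(0, SUBS, (40, 6))
for gen in range(100):
    fit = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(fit)[-20:]]                        # selection
    kids = parents[rng.integers(0, 20, (40, 6)), np.arange(6)]  # crossover
    mut = rng.random((40, 6)) < 0.05                            # mutation
    kids[mut] = rng.integers(0, SUBS, mut.sum())
    pop = kids

best = max(pop, key=fitness)
print(best, prop(best))
```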
Energy Technology Data Exchange (ETDEWEB)
Lundstrom, M.S.; Melloch, M.R.; Lush, G.B.; O'Bradovich, G.J.; Young, M.P. [Purdue Univ., Lafayette, IN (United States)]
1993-01-01
This report describes progress during the first year of a three-year project. The objective of the research is to examine new design approaches for achieving very high conversion efficiencies. The program is divided into two areas. The first centers on exploring new thin-film approaches specifically designed for III-V semiconductors. The second area centers on exploring design approaches for achieving high conversion efficiencies without requiring extremely high quality material. Research activities consisted of an experimental study of minority carrier recombination in n-type, metal-organic chemical vapor deposition (MOCVD)-deposited GaAs, an assessment of the minority carrier lifetimes in n-GaAs grown by molecular beam epitaxy, and developing a high-efficiency cell fabrication process.
Bureaucratic Corruption: Efficiency Virtue or Distributive Vice?
Kulshreshtha Pravin
2003-01-01
Governments frequently allocate resources at low prices and on a first-come-first-served basis because of reasons of equity and a concern for the poor. However, bureaucrats who distribute these resources often take bribes. This paper develops a rigorous model to analyze the distributional, efficiency and public policy implications of bribery in such situations. It is shown that at low prices, the poor would choose to wait while the rich would pay the bribe to obtain the rationed commodity. If...
Mathematical beauty in service of deep approach to learning
DEFF Research Database (Denmark)
Karamehmedovic, Mirza
2015-01-01
was hands-on MATLAB programming, where the algorithms were tested and applied to solve physical model-based problems. To encourage a deep approach, and discourage a surface approach to learning, I introduced into the lectures a basic but rigorous mathematical treatment of crucial theoretical points
Volume Holograms in Photopolymers: Comparison between Analytical and Rigorous Theories
Gallego, Sergi; Neipp, Cristian; Estepa, Luis A.; Ortuño, Manuel; Márquez, Andrés; Francés, Jorge; Pascual, Inmaculada; Beléndez, Augusto
2012-01-01
There is no doubt that the concept of volume holography has led to an incredibly great amount of scientific research and technological applications. One of these applications is the use of volume holograms as optical memories, and in particular, the use of a photosensitive medium like a photopolymeric material to record information in all its volume. In this work we analyze the applicability of Kogelnik's Coupled Wave theory to the study of volume holograms recorded in photopolymers. Some of the theoretical models in the literature describing the mechanism of hologram formation in photopolymer materials use Kogelnik's theory to analyze the gratings recorded in photopolymeric materials. If Kogelnik's theory cannot be applied, it is necessary to use a more general Coupled Wave theory (CW) or the Rigorous Coupled Wave theory (RCW). The RCW does not incorporate any approximation and thus, since it is rigorous, permits judging the accuracy of the approximations included in Kogelnik's and CW theories. In this article, a comparison between the predictions of the three theories for phase transmission diffraction gratings is carried out. We have demonstrated the agreement in the prediction of CW and RCW and the validity of Kogelnik's theory only for gratings with spatial frequencies higher than 500 lines/mm for the usual values of the refractive index modulations obtained in photopolymers.
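For orientation, the Kogelnik prediction being tested is a closed-form diffraction efficiency; for a lossless phase transmission grating at Bragg incidence it reads η = sin²(π n₁ d / (λ cos θ)). The parameter values below are merely illustrative:

```python
# Kogelnik diffraction efficiency of a phase transmission grating at Bragg.
import numpy as np

lam = 633e-9                 # He-Ne wavelength in vacuum
n1 = 0.003                   # refractive-index modulation (photopolymer-like)
d = 50e-6                    # grating thickness
theta = np.deg2rad(10)       # Bragg angle inside the medium

eta = np.sin(np.pi * n1 * d / (lam * np.cos(theta)))**2
print(f"Kogelnik efficiency: {eta:.3f}")
```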
An efficient and extensible approach for compressing phylogenetic trees.
Matthews, Suzanne J; Williams, Tiffani L
2011-10-18
Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend our TreeZip algorithm by handling trees with weighted branches. Furthermore, by using the compressed TreeZip file as input, we have designed an extensible decompressor that can extract subcollections of trees, compute majority and strict consensus trees, and merge tree collections using set operations such as union, intersection, and set difference. On unweighted phylogenetic trees, TreeZip is able to compress Newick files in excess of 98%. On weighted phylogenetic trees, TreeZip is able to compress a Newick file by at least 73%. TreeZip can be combined with 7zip with little overhead, allowing space savings in excess of 99% (unweighted) and 92% (weighted). Unlike TreeZip, 7zip is not immune to branch rotations, and performs worse as the level of variability in the Newick string representation increases. Finally, since the TreeZip compressed text (TRZ) file contains all the semantic information in a collection of trees, we can easily filter and decompress a subset of trees of interest (such as the set of unique trees), or build the resulting consensus tree in a matter of seconds. We also show the ease with which set operations can be performed on TRZ files, at speeds quicker than those achieved on Newick or 7zip compressed Newick files, and without loss of space savings. TreeZip is an efficient approach for compressing large collections of phylogenetic trees. The semantic and compact nature of the TRZ file allows it to be operated upon directly and quickly, without a need to decompress the original Newick file. We believe that TreeZip will be vital for compressing and archiving trees in the biological community.
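The immunity to branch rotations mentioned above comes from operating on a tree's semantic content rather than its raw string form. As a minimal illustration of that idea (a sketch of the concept, not the actual TreeZip implementation, assuming plain unweighted Newick strings with no quoted or escaped labels), the following Python canonicalizes each tree by recursively sorting children, after which ordinary set operations become rotation-invariant:

```python
# Sketch (not the actual TreeZip implementation): canonicalize unweighted
# Newick strings so that set operations become immune to branch rotations.
# Assumes simple Newick: no branch lengths, no quoted/escaped labels.

def parse(newick):
    """Parse a Newick string into nested tuples of leaf names."""
    pos = 0
    def node():
        nonlocal pos
        if newick[pos] == '(':
            pos += 1                      # consume '('
            children = [node()]
            while newick[pos] == ',':
                pos += 1
                children.append(node())
            pos += 1                      # consume ')'
            return tuple(children)
        start = pos
        while newick[pos] not in ',()':
            pos += 1
        return newick[start:pos]
    return node()

def canonical(tree):
    """Recursively sort children so rotated trees share one representation."""
    if isinstance(tree, str):
        return tree
    return '(' + ','.join(sorted(canonical(c) for c in tree)) + ')'

def canon_set(newicks):
    return {canonical(parse(s.rstrip(';'))) for s in newicks}

a = canon_set(['((A,B),C);', '(C,(B,A));'])   # same tree, rotated
b = canon_set(['((A,C),B);'])
print(len(a))                 # 1 -> rotations collapse to one representative
print(a | b, a & b, a - b)    # union, intersection, set difference
```

TreeZip itself achieves this with bipartition encodings, which also support consensus computations; the sketch only shows why a canonical, semantics-aware representation is the key ingredient.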
Energy Technology Data Exchange (ETDEWEB)
Ngampitipan, Tritos, E-mail: tritos.ngampitipan@gmail.com [Faculty of Science, Chandrakasem Rajabhat University, Ratchadaphisek Road, Chatuchak, Bangkok 10900 (Thailand); Particle Physics Research Laboratory, Department of Physics, Faculty of Science, Chulalongkorn University, Phayathai Road, Patumwan, Bangkok 10330 (Thailand); Boonserm, Petarpa, E-mail: petarpa.boonserm@gmail.com [Department of Mathematics and Computer Science, Faculty of Science, Chulalongkorn University, Phayathai Road, Patumwan, Bangkok 10330 (Thailand); Chatrabhuti, Auttakit, E-mail: dma3ac2@gmail.com [Particle Physics Research Laboratory, Department of Physics, Faculty of Science, Chulalongkorn University, Phayathai Road, Patumwan, Bangkok 10330 (Thailand); Visser, Matt, E-mail: matt.visser@msor.vuw.ac.nz [School of Mathematics, Statistics, and Operations Research, Victoria University of Wellington, PO Box 600, Wellington 6140 (New Zealand)
2016-06-02
Hawking radiation is evidence for the existence of black holes. What an observer can measure through Hawking radiation is the transmission probability. In the laboratory, miniature black holes can successfully be generated; the generated black holes are, most commonly, Myers-Perry black holes. In this paper, we derive rigorous bounds on the transmission probabilities for massless scalar fields of non-negative-angular-momentum modes emitted from a generated Myers-Perry black hole in six, seven, and eight dimensions. The results show that for low energy the rigorous bounds increase with the energy of the emitted particles, whereas for high energy they decrease with it. When the black holes spin faster, the rigorous bounds decrease. The rigorous bounds also decrease with the number of extra dimensions. Furthermore, in comparison with the approximate transmission probability, the rigorous bound proves useful.
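For orientation, rigorous bounds of this kind (developed in earlier work by Boonserm and Visser) take a sech-squared form; one frequently quoted special case, stated here as background rather than taken from this abstract, bounds the transmission probability T of a mode of frequency ω through a potential barrier V(x) by

```latex
T \;\ge\; \operatorname{sech}^{2}\!\left(\int_{-\infty}^{\infty}\frac{|V(x)|}{2\omega}\,dx\right).
```

The bounds discussed in the abstract are of this general family, adapted to the effective potentials of the Myers-Perry geometry.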
International Nuclear Information System (INIS)
Ngampitipan, Tritos; Boonserm, Petarpa; Chatrabhuti, Auttakit; Visser, Matt
2016-01-01
Hawking radiation is evidence for the existence of black holes. What an observer can measure through Hawking radiation is the transmission probability. In the laboratory, miniature black holes can successfully be generated; the generated black holes are, most commonly, Myers-Perry black holes. In this paper, we derive rigorous bounds on the transmission probabilities for massless scalar fields of non-negative-angular-momentum modes emitted from a generated Myers-Perry black hole in six, seven, and eight dimensions. The results show that for low energy the rigorous bounds increase with the energy of the emitted particles, whereas for high energy they decrease with it. When the black holes spin faster, the rigorous bounds decrease. The rigorous bounds also decrease with the number of extra dimensions. Furthermore, in comparison with the approximate transmission probability, the rigorous bound proves useful.
How efficient are Greek hospitals? A case study using a double bootstrap DEA approach.
Kounetas, Kostas; Papathanassopoulos, Fotis
2013-12-01
The purpose of this study was to measure Greek hospital performance using different input-output combinations, and to identify the factors that influence their efficiency thus providing policy makers with valuable input for the decision-making process. Using a unique dataset, we estimated the productive efficiency of each hospital through a bootstrapped data envelopment analysis (DEA) approach. In a second stage, we explored, using a bootstrapped truncated regression, the impact of environmental factors on hospitals' technical and scale efficiency. Our results reveal that over 80% of the examined hospitals appear to have a technical efficiency lower than 0.8, while the majority appear to be scale efficient. Moreover, efficiency performance differed with inclusion of medical examinations as an additional variable. On the other hand, bed occupancy ratio appeared to affect both technical and scale efficiency in a rather interesting way, while the adoption of advanced medical equipment and the type of hospital improves scale and technical efficiency, correspondingly. The findings of this study on Greek hospitals' performance are not encouraging. Furthermore, our results raise questions regarding the number of hospitals that should operate, and which type of hospital is more efficient. Finally, the results indicate the role of medical equipment in performance, confirming its misallocation in healthcare expenditure.
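For readers unfamiliar with DEA, the core of the first stage is a small linear program per hospital. The sketch below (Python with SciPy) implements plain input-oriented, constant-returns DEA on toy data; the study's bootstrap correction and second-stage truncated regression are omitted, and the input/output names are illustrative only:

```python
# Minimal sketch of input-oriented, constant-returns-to-scale DEA.
# (The study uses a *bootstrapped* DEA; the bootstrap is omitted here.)
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Technical efficiency of unit o. X: (m inputs, n units), Y: (s outputs, n units)."""
    m, n = X.shape
    s, _ = Y.shape
    # Decision variables: [theta, lambda_1..lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # sum_j lambda_j x_ij <= theta * x_io   ->   -x_io*theta + X@lambda <= 0
    A_in = np.c_[-X[:, o], X]
    # sum_j lambda_j y_rj >= y_ro           ->   -Y@lambda <= -y_ro
    A_out = np.c_[np.zeros(s), -Y]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Toy data: 2 inputs (beds, staff), 1 output (cases) for 4 hospitals.
X = np.array([[20., 40., 40., 60.], [30., 30., 50., 60.]])
Y = np.array([[100., 150., 160., 180.]])
for o in range(4):
    print(f"hospital {o}: efficiency = {dea_efficiency(X, Y, o):.3f}")
```

A score of 1 marks a unit on the efficient frontier; scores below 1 measure the proportional input contraction needed to reach it, which is what the abstract's 0.8 threshold refers to.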
The jABC Approach to Rigorous Collaborative Development of SCM Applications
Hörmann, Martina; Margaria, Tiziana; Mender, Thomas; Nagel, Ralf; Steffen, Bernhard; Trinh, Hong
Our approach to the model-driven collaborative design of IKEA's P3 Delivery Management Process uses the jABC [9] for model driven mediation and choreography to complement a RUP-based (Rational Unified Process) development process. jABC is a framework for service development based on Lightweight Process Coordination. Users (product developers and system/software designers) easily develop services and applications by composing reusable building-blocks into (flow-) graph structures that can be animated, analyzed, simulated, verified, executed, and compiled. This way of handling the collaborative design of complex embedded systems has proven to be effective and adequate for the cooperation of non-programmers and non-technical people, which is the focus of this contribution, and it is now being rolled out in the operative practice.
Kaiser, C.; Roll, K.; Volk, W.
2017-09-01
In the automotive industry, the manufacturing of automotive outer panels requires hemming processes in which two sheet metal parts are joined together by bending the flange of the outer part over the inner part. Because of decreasing development times and the steadily growing number of vehicle derivatives, an efficient digital product and process validation is necessary. Commonly used simulations, which are based on the finite element method, demand significant modelling effort, which is a disadvantage especially in the early product development phase. To increase the efficiency of designing hemming processes, this paper presents a hemming-specific metamodel approach. The approach includes a part analysis in which the outline of the automotive outer panel is initially split into individual segments. By parametrizing each segment and assigning basic geometric shapes, the outline of the part is approximated. Based on this, the hemming parameters such as flange length, roll-in, wrinkling and plastic strains are calculated for each of the geometric basic shapes by performing a metamodel-based segmental product validation. The metamodel is based on an element-similar formulation that includes a reference dataset of various geometric basic shapes. A random automotive outer panel can now be analysed and optimized based on the hemming-specific database. By implementing this approach into a planning system, an efficient optimization of hemming process design will be enabled. Furthermore, valuable time and cost benefits can be realized in a vehicle's development process.
Towards a Rigorous Formulation of the Space Mapping Technique for Engineering Design
DEFF Research Database (Denmark)
Koziel, Slawek; Bandler, John W.; Madsen, Kaj
2005-01-01
This paper deals with the Space Mapping (SM) approach to engineering design optimization. We attempt here a theoretical justification of methods that have already proven efficient in solving practical problems, especially in the RF and microwave area. A formal definition of optimization algorithm...
Some comments on rigorous quantum field path integrals in the analytical regularization scheme
Energy Technology Data Exchange (ETDEWEB)
Botelho, Luiz C.L. [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil). Dept. de Matematica Aplicada]. E-mail: botelho.luiz@superig.com.br
2008-07-01
Through the systematic use of the Minlos theorem on the support of cylindrical measures on R^∞, we produce several mathematically rigorous path integrals in interacting euclidean quantum fields with Gaussian free measures defined by generalized powers of the Laplacian operator. (author)
Some comments on rigorous quantum field path integrals in the analytical regularization scheme
International Nuclear Information System (INIS)
Botelho, Luiz C.L.
2008-01-01
Through the systematic use of the Minlos theorem on the support of cylindrical measures on R^∞, we produce several mathematically rigorous path integrals in interacting euclidean quantum fields with Gaussian free measures defined by generalized powers of the Laplacian operator. (author)
A plea for rigorous conceptual analysis as central method in transnational law design
Rijgersberg, R.; van der Kaaij, H.
2013-01-01
Although shared problems are generally easily identified in transnational law design, it is considerably more difficult to design frameworks that transcend the peculiarities of local law in a univocal fashion. The following exposition is a plea for giving more prominence to rigorous conceptual
Jonas, Wayne B; Crawford, Cindy; Hilton, Lara; Elfenbaum, Pamela
2017-01-01
Answering the question of "what works" in healthcare can be complex and requires the careful design and sequential application of systematic methodologies. Over the last decade, the Samueli Institute has, along with multiple partners, developed a streamlined, systematic, phased approach to this process called the Scientific Evaluation and Review of Claims in Health Care (SEaRCH™). The SEaRCH process provides an approach for rigorously, efficiently, and transparently making evidence-based decisions about healthcare claims in research and practice with minimal bias. SEaRCH uses three methods combined in a coordinated fashion to help determine what works in healthcare. The first, the Claims Assessment Profile (CAP), seeks to clarify the healthcare claim and question, and its ability to be evaluated in the context of its delivery. The second method, the Rapid Evidence Assessment of the Literature (REAL©), is a streamlined, systematic review process conducted to determine the quantity, quality, and strength of evidence and risk/benefit for the treatment. The third method involves the structured use of expert panels (EPs). There are several types of EPs, depending on the purpose and need. Together, these three methods (CAP, REAL, and EP) can be integrated into a strategic approach to help answer the question "what works in healthcare?", and what it means, in a comprehensive way. SEaRCH is a systematic, rigorous approach for evaluating healthcare claims of therapies, practices, programs, or products in an efficient and stepwise fashion. It provides an iterative, protocol-driven process that is customized to the intervention, consumer, and context. Multiple communities, including those involved in health service and policy, can benefit from this organized framework, assuring that evidence-based principles determine which healthcare practices with the greatest promise are used for improving the public's health and wellness.
DEFF Research Database (Denmark)
Zhang, Yue; Dragoni, Nicola; Wang, Jiangtao
2015-01-01
efficiency to facilitate the design of fault detection methods and the evaluation of their energy efficiency. Following the same design principle of the fault detection framework, the paper proposes a classification for fault detection approaches. The classification is applied to a number of fault detection...
Yun, Wanying; Lu, Zhenzhou; Jiang, Xian
2018-06-01
To efficiently execute the variance-based global sensitivity analysis, the law of total variance in the successive intervals without overlapping is proved at first, on which an efficient space-partition sampling-based approach is subsequently proposed in this paper. Through partitioning the sample points of output into different subsets according to different inputs, the proposed approach can efficiently evaluate all the main effects concurrently by one group of sample points. In addition, there is no need for optimizing the partition scheme in the proposed approach. The maximum length of subintervals is decreased by increasing the number of sample points of model input variables in the proposed approach, which guarantees the convergence condition of the space-partition approach well. Furthermore, a new interpretation on the thought of partition is illuminated from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
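The essence of the space-partition estimator, as described, is that a single sample set suffices for all main effects: for each input, the output samples are grouped by successive non-overlapping intervals of that input, and the variance of the within-group means estimates Var(E[Y|X_i]). A minimal numerical sketch (equal-count bins; the paper's exact estimator may differ in detail):

```python
# Sketch of the core idea: estimate first-order Sobol indices from a single
# sample by partitioning the output according to successive non-overlapping
# intervals of each input.
import numpy as np

def main_effects(X, y, n_bins=32):
    """X: (N, d) inputs, y: (N,) outputs. Returns d main-effect estimates."""
    N, d = X.shape
    var_y = y.var()
    S = np.empty(d)
    for i in range(d):
        # Equal-count bins along input i (successive intervals, no overlap).
        order = np.argsort(X[:, i])
        bins = np.array_split(y[order], n_bins)
        means = np.array([b.mean() for b in bins])
        probs = np.array([len(b) for b in bins]) / N
        # Weighted variance of bin means  ~  Var(E[y | X_i])
        S[i] = np.sum(probs * (means - y.mean()) ** 2) / var_y
    return S

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(20000, 3))
y = np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])
print(main_effects(X, y))   # Ishigami function: roughly [0.31, 0.44, 0.0]
```

All d indices come from the same N model evaluations; only the sort order changes per input, which is what makes the approach efficient.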
International Nuclear Information System (INIS)
Beltrán-Esteve, Mercedes; Reig-Martínez, Ernest; Estruch-Guitart, Vicent
2017-01-01
Sustainability analysis requires a joint assessment of environmental, social and economic aspects of production processes. Here we propose the use of Life Cycle Analysis (LCA), a metafrontier (MF) directional distance function (DDF) approach, and Data Envelopment Analysis (DEA), to assess technological and managerial differences in eco-efficiency between production systems. We use LCA to compute six environmental and health impacts associated with the production processes of nearly 200 Spanish citrus farms belonging to organic and conventional farming systems. DEA is then employed to obtain joint economic-environmental farm's scores that we refer to as eco-efficiency. DDF allows us to determine farms' global eco-efficiency scores, as well as eco-efficiency scores with respect to specific environmental impacts. Furthermore, the use of an MF helps us to disentangle technological and managerial eco-inefficiencies by comparing the eco-efficiency of both farming systems with regards to a common benchmark. Our core results suggest that the shift from conventional to organic farming technology would allow a potential reduction in environmental impacts of 80% without resulting in any decline in economic performance. In contrast, as regards farmers' managerial capacities, both systems display quite similar mean scores.
Energy Technology Data Exchange (ETDEWEB)
Beltrán-Esteve, Mercedes, E-mail: mercedes.beltran@uv.es [Department of Applied Economics II, University of Valencia (Spain); Reig-Martínez, Ernest [Department of Applied Economics II, University of Valencia, Ivie (Spain); Estruch-Guitart, Vicent [Department of Economy and Social Sciences, Polytechnic University of Valencia (Spain)
2017-03-15
Sustainability analysis requires a joint assessment of environmental, social and economic aspects of production processes. Here we propose the use of Life Cycle Analysis (LCA), a metafrontier (MF) directional distance function (DDF) approach, and Data Envelopment Analysis (DEA), to assess technological and managerial differences in eco-efficiency between production systems. We use LCA to compute six environmental and health impacts associated with the production processes of nearly 200 Spanish citrus farms belonging to organic and conventional farming systems. DEA is then employed to obtain joint economic-environmental farm's scores that we refer to as eco-efficiency. DDF allows us to determine farms' global eco-efficiency scores, as well as eco-efficiency scores with respect to specific environmental impacts. Furthermore, the use of an MF helps us to disentangle technological and managerial eco-inefficiencies by comparing the eco-efficiency of both farming systems with regards to a common benchmark. Our core results suggest that the shift from conventional to organic farming technology would allow a potential reduction in environmental impacts of 80% without resulting in any decline in economic performance. In contrast, as regards farmers' managerial capacities, both systems display quite similar mean scores.
Efendiev, Y.
2009-11-01
The Markov chain Monte Carlo (MCMC) is a rigorous sampling method to quantify uncertainty in subsurface characterization. However, the MCMC usually requires many flow and transport simulations in evaluating the posterior distribution and can be computationally expensive for fine-scale geological models. We propose a methodology that combines coarse- and fine-scale information to improve the efficiency of MCMC methods. The proposed method employs off-line computations for modeling the relation between coarse- and fine-scale error responses. This relation is modeled using nonlinear functions with prescribed error precisions which are used in efficient sampling within the MCMC framework. We propose a two-stage MCMC where inexpensive coarse-scale simulations are performed to determine whether or not to run the fine-scale (resolved) simulations. The latter is determined on the basis of a statistical model developed off line. The proposed method is an extension of the approaches considered earlier where linear relations are used for modeling the response between coarse-scale and fine-scale models. The approach considered here does not rely on the proximity of approximate and resolved models and can employ much coarser and more inexpensive models to guide the fine-scale simulations. Numerical results for three-phase flow and transport demonstrate the advantages, efficiency, and utility of the method for uncertainty assessment in the history matching. Copyright 2009 by the American Geophysical Union.
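The two-stage screening logic can be written down compactly. In the sketch below (a generic two-stage Metropolis-Hastings in the style described, with toy one-dimensional densities standing in for the coarse and fine flow simulations), the fine model is evaluated only for proposals that survive the coarse test, and the second-stage acceptance ratio corrects for the screening so that the fine-scale posterior remains the invariant distribution:

```python
# Generic two-stage Metropolis-Hastings sketch (toy densities stand in for
# coarse- and fine-scale simulations; the proposal is symmetric).
import numpy as np

rng = np.random.default_rng(1)

def log_coarse(x):    # cheap, approximate log-posterior
    return -0.5 * (x / 1.2) ** 2

def log_fine(x):      # expensive, resolved log-posterior
    return -0.5 * x ** 2

x, lc_x, lf_x = 0.0, log_coarse(0.0), log_fine(0.0)
samples, fine_calls = [], 0
for _ in range(5000):
    y = x + rng.normal(scale=1.0)
    lc_y = log_coarse(y)
    # Stage 1: screen the proposal with the coarse model only.
    if np.log(rng.uniform()) < lc_y - lc_x:
        # Stage 2: run the fine model only for promoted proposals.
        lf_y = log_fine(y); fine_calls += 1
        # Corrected ratio keeps the fine posterior as the invariant law.
        if np.log(rng.uniform()) < (lf_y - lf_x) + (lc_x - lc_y):
            x, lc_x, lf_x = y, lc_y, lf_y
    samples.append(x)
print("fine-model calls:", fine_calls, "of 5000 proposals")
print("sample mean/std:", np.mean(samples), np.std(samples))
```

The savings come from the rejection rate of stage 1: the better the coarse model anticipates the fine one, the fewer expensive simulations are wasted on proposals that would have been rejected anyway.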
Computational Approaches to the Chemical Equilibrium Constant in Protein-ligand Binding.
Montalvo-Acosta, Joel José; Cecchini, Marco
2016-12-01
The physiological role played by protein-ligand recognition has motivated the development of several computational approaches to the ligand binding affinity. Some of them, termed rigorous, have a strong theoretical foundation but involve too much computation to be generally useful. Some others alleviate the computational burden by introducing strong approximations and/or empirical calibrations, which also limit their general use. Most importantly, there is no straightforward correlation between the predictive power and the level of approximation introduced. Here, we present a general framework for the quantitative interpretation of protein-ligand binding based on statistical mechanics. Within this framework, we re-derive self-consistently the fundamental equations of some popular approaches to the binding constant and pinpoint the inherent approximations. Our analysis represents a first step towards the development of variants with optimum accuracy/efficiency ratio for each stage of the drug discovery pipeline. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Efficient Approach for Harmonic Resonance Identification of Large Wind Power Plants
DEFF Research Database (Denmark)
Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei
2016-01-01
Unlike conventional power systems where the resonance frequencies are mainly determined by the passive components parameters, large Wind Power Plants (WPPs) may introduce additional harmonic resonances because of the interactions of the wideband control systems of power converters with each other...... and with passive components. This paper presents an efficient approach for identification of harmonic resonances in large WPPs containing power electronic converters, cable, transformer, capacitor banks, shunt reactors, etc. The proposed approach introduces a large WPP as a Multi-Input Multi-Output (MIMO) control...... system by considering the linearized models of the inner control loops of grid-side converters. Therefore, the resonance frequencies of the WPP resulting from passive components and the control loop interactions are identified based on the determinant of the transfer function matrix of the introduced...
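The determinant criterion alluded to above can be illustrated on a toy network: sweep frequency, evaluate the determinant of a small MIMO transfer (impedance-type) matrix, and look for frequencies where its magnitude dips. The sketch below uses hypothetical parameter values and a trivial 2x2 coupling, not the WPP model itself:

```python
# Toy illustration of the determinant criterion (not the WPP model itself):
# sweep frequency, evaluate det of a 2x2 impedance-type matrix, and flag
# local minima of |det| as candidate resonances.
import numpy as np

La, Ca, Lb, Cb = 1e-3, 2e-4, 5e-4, 1e-4   # hypothetical branch parameters

def tf_matrix(w):
    s = 1j * w
    za = s * La + 1 / (s * Ca)            # series-LC branch impedances
    zb = s * Lb + 1 / (s * Cb)
    zc = 10.0                             # hypothetical coupling branch
    return np.array([[za + zc, -zc], [-zc, zb + zc]])

w = np.linspace(2 * np.pi * 10, 2 * np.pi * 2000, 20000)
d = np.array([abs(np.linalg.det(tf_matrix(wi))) for wi in w])
dips = [i for i in range(1, len(d) - 1) if d[i] < d[i - 1] and d[i] < d[i + 1]]
print("candidate resonances (Hz):", [round(w[i] / (2 * np.pi)) for i in dips])
```

In the paper's setting the matrix entries would additionally contain the linearized converter control loops, so the dips shift with controller parameters as well as with the passive components.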
Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas
2009-10-01
This paper employs the Waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three-dimensional mask-induced imaging artifacts in EUV lithography. The major mask-diffraction-induced imaging artifacts are first identified by applying a Zernike analysis to the mask near-field spectrum of 2D lines/spaces. Three-dimensional mask features like 22nm semidense/dense contacts/posts, isolated elbows and line-ends are then investigated in terms of lithographic results. After that, the 3D mask-induced imaging artifacts such as feature-orientation-dependent best focus shift, process window asymmetries, and other aberration-like phenomena are explored for the studied mask features. The simulation results can help lithographers to understand the reasons for EUV-specific imaging artifacts and to devise illumination- and feature-dependent strategies for their compensation in the optical proximity correction (OPC) for EUV masks. Finally, an efficient approach using the Zernike analysis together with the Waveguide decomposition technique is proposed to characterize the impact of mask properties for the future OPC process.
Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P
2015-11-01
This paper presents an efficient approach to identify different stable and practically useful Hammerstein models, as well as an unstable nonlinear process along with its stable closed-loop counterpart, with the help of an evolutionary algorithm, the Colliding Bodies Optimization (CBO) algorithm. The performance measures of the CBO-based optimization approach, such as precision and accuracy, are justified by the minimum output mean square error (MSE), which signifies that the amount of bias and variance in the output domain is also the least. It is also observed that the optimization of output MSE in the presence of outliers has consistently resulted in a very close estimation of the output parameters, which justifies the effective general applicability of the CBO algorithm to the system identification problem and establishes the practical usefulness of the applied approach. Optimum values of the MSEs, computational times and statistical information of the MSEs are all found to be superior compared with those of other existing similar types of stochastic-algorithm-based approaches reported in recent literature, which establishes the robustness and efficiency of the applied CBO-based identification scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Crawford, Cindy; Hilton, Lara; Elfenbaum, Pamela
2017-01-01
Background: Answering the question of "what works" in healthcare can be complex and requires the careful design and sequential application of systematic methodologies. Over the last decade, the Samueli Institute has, along with multiple partners, developed a streamlined, systematic, phased approach to this process called the Scientific Evaluation and Review of Claims in Health Care (SEaRCH™). The SEaRCH process provides an approach for rigorously, efficiently, and transparently making evidence-based decisions about healthcare claims in research and practice with minimal bias. Methods: SEaRCH uses three methods combined in a coordinated fashion to help determine what works in healthcare. The first, the Claims Assessment Profile (CAP), seeks to clarify the healthcare claim and question, and its ability to be evaluated in the context of its delivery. The second method, the Rapid Evidence Assessment of the Literature (REAL©), is a streamlined, systematic review process conducted to determine the quantity, quality, and strength of evidence and risk/benefit for the treatment. The third method involves the structured use of expert panels (EPs). There are several types of EPs, depending on the purpose and need. Together, these three methods (CAP, REAL, and EP) can be integrated into a strategic approach to help answer the question "what works in healthcare?" and what it means in a comprehensive way. Discussion: SEaRCH is a systematic, rigorous approach for evaluating healthcare claims of therapies, practices, programs, or products in an efficient and stepwise fashion. It provides an iterative, protocol-driven process that is customized to the intervention, consumer, and context. Multiple communities, including those involved in health service and policy, can benefit from this organized framework, assuring that evidence-based principles determine which healthcare practices with the greatest promise are used for improving the public's health and
Restraining approach for the spurious kinematic modes in hybrid equilibrium element
Parrinello, F.
2013-10-01
The present paper proposes a rigorous approach for the elimination of spurious kinematic modes in hybrid equilibrium elements, for three well-known mesh patches. The approach is based on the identification of the dependent equations in the set of inter-element and boundary equilibrium equations of the sides involved in the spurious kinematic mode. The kinematic variables related to the dependent equations are then reciprocally constrained and, by applying the master-slave elimination method, the set of inter-element equilibrium equations is reduced to full rank. The elastic solutions produced by means of the proposed approach satisfy the homogeneous, inter-element and boundary equilibrium equations. The hybrid stress formulation is developed in a rigorous mathematical setting. The results of linear elastic analysis obtained by the proposed approach and by the classical displacement-based method are compared for some structural examples.
Li, Zhenlong; Hu, Fei; Schnase, John L.; Duffy, Daniel Q.; Lee, Tsengdar; Bowen, Michael K.; Yang, Chaowei
2016-01-01
Climate observations and model simulations are producing vast amounts of array-based spatiotemporal data. Efficient processing of these data is essential for assessing global challenges such as climate change, natural disasters, and diseases. This is challenging not only because of the large data volume, but also because of the intrinsic high-dimensional nature of geoscience data. To tackle this challenge, we propose a spatiotemporal indexing approach to efficiently manage and process big climate data with MapReduce in a highly scalable environment. Using this approach, big climate data are directly stored in a Hadoop Distributed File System in their original, native file format. A spatiotemporal index is built to bridge the logical array-based data model and the physical data layout, which enables fast data retrieval when performing spatiotemporal queries. Based on the index, a data-partitioning algorithm is applied to enable MapReduce to achieve high data locality, as well as balancing the workload. The proposed indexing approach is evaluated using the National Aeronautics and Space Administration (NASA) Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. The experimental results show that the index can significantly accelerate querying and processing (a 10x speedup compared to the baseline test using the same computing cluster), while keeping the index-to-data ratio small (0.0328). The applicability of the indexing approach is demonstrated by a climate anomaly detection deployed on a NASA Hadoop cluster. This approach is also able to support efficient processing of general array-based spatiotemporal data in various geoscience domains without special configuration on a Hadoop cluster.
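A stripped-down version of the bridging idea, with a hypothetical variable name and block layout rather than the MERRA format: keep an index from (variable, time, spatial block) to byte offsets in the native file, so a query reads only the blocks that intersect the requested region:

```python
# Minimal illustration (hypothetical layout, not NASA's MERRA format):
# an index maps (variable, time, spatial block) -> (offset, shape), so a
# spatiotemporal query reads only the byte ranges it actually needs.
import numpy as np

ny, nx, nb = 8, 8, 4            # 8x8 grid split into 4x4 blocks
field = np.arange(ny * nx, dtype=np.float64).reshape(ny, nx)
index = {}                      # (var, t, by, bx) -> (offset, shape)
with open("t000.bin", "wb") as f:
    for by in range(ny // nb):
        for bx in range(nx // nb):
            block = field[by*nb:(by+1)*nb, bx*nb:(bx+1)*nb]
            index[("T2M", 0, by, bx)] = (f.tell(), block.shape)
            f.write(np.ascontiguousarray(block).tobytes())

def query(var, t, y0, y1, x0, x1):
    """Read only the blocks intersecting the requested box."""
    out = np.empty((y1 - y0, x1 - x0))
    mm = np.memmap("t000.bin", dtype=np.float64, mode="r")
    for (v, tt, by, bx), (off, shp) in index.items():
        if (v, tt) != (var, t):
            continue
        oy, ox = by * nb, bx * nb
        if oy >= y1 or oy + nb <= y0 or ox >= x1 or ox + nb <= x0:
            continue                         # block misses the query box
        blk = mm[off // 8: off // 8 + shp[0] * shp[1]].reshape(shp)
        ya, yb = max(y0, oy), min(y1, oy + nb)
        xa, xb = max(x0, ox), min(x1, ox + nb)
        out[ya - y0:yb - y0, xa - x0:xb - x0] = blk[ya - oy:yb - oy, xa - ox:xb - ox]
    return out

print(query("T2M", 0, 2, 6, 2, 6))   # touches four blocks, reads no more
```

In the paper's setting the same index additionally drives the MapReduce partitioner, so that map tasks land on the nodes already holding the relevant blocks.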
Supersymmetry and the Parisi-Sourlas dimensional reduction: A rigorous proof
International Nuclear Information System (INIS)
Klein, A.; Landau, L.J.; Perez, J.F.
1984-01-01
Functional integrals that are formally related to the average correlation functions of a classical field theory in the presence of random external sources are given a rigorous meaning. Their dimensional reduction to the Schwinger functions of the corresponding quantum field theory in two fewer dimensions is proven. This is done by reexpressing those functional integrals as expectations of a supersymmetric field theory. The Parisi-Sourlas dimensional reduction of a supersymmetric field theory to a usual quantum field theory in two fewer dimensions is proven. (orig.)
Rigorous Screening Technology for Identifying Suitable CO2 Storage Sites II
Energy Technology Data Exchange (ETDEWEB)
George J. Koperna Jr.; Vello A. Kuuskraa; David E. Riestenberg; Aiysha Sultana; Tyler Van Leeuwen
2009-06-01
This report serves as the final technical report and user's manual for the 'Rigorous Screening Technology for Identifying Suitable CO2 Storage Sites II' SBIR project. Advanced Resources International has developed a screening tool by which users can technically screen, assess the storage capacity of, and quantify the costs of CO2 storage in four types of CO2 storage reservoirs. These include CO2-enhanced oil recovery reservoirs, depleted oil and gas fields (non-enhanced oil recovery candidates), deep coal seams that are amenable to CO2-enhanced methane recovery, and saline reservoirs. The screening function assesses whether the reservoir could likely serve as a safe, long-term CO2 storage reservoir. The storage capacity assessment uses rigorous reservoir simulation models to determine the timing, ultimate storage capacity, and potential for enhanced hydrocarbon recovery. Finally, the economic assessment function determines both the field-level and pipeline (transportation) costs for CO2 sequestration in a given reservoir. The screening tool was peer reviewed at an Electric Power Research Institute (EPRI) technical meeting in March 2009. A number of useful observations and recommendations emerged from the workshop on the costs of CO2 transport and storage that could be readily incorporated into a commercial version of the screening tool in a Phase III SBIR.
Directory of Open Access Journals (Sweden)
Md. Rezaul Karim
2012-03-01
Mining interesting patterns from DNA sequences is one of the most challenging tasks in bioinformatics and computational biology. Maximal contiguous frequent patterns are preferable for expressing the function and structure of DNA sequences and hence can capture the common data characteristics among related sequences. Biologists are interested in finding frequent orderly arrangements of motifs that are responsible for similar expression of a group of genes. In order to reduce mining time and complexity, however, most existing sequence mining algorithms either focus on finding short DNA sequences or require explicit specification of sequence lengths in advance. The challenge is to find longer sequences without specifying sequence lengths in advance. In this paper, we propose an efficient approach to mining maximal contiguous frequent patterns from large DNA sequence datasets. The experimental results show that our proposed approach is memory-efficient and mines maximal contiguous frequent patterns within a reasonable time.
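To make the problem statement concrete, here is a naive level-wise miner for maximal contiguous frequent patterns (a sketch for illustration only, not the paper's algorithm, and far less memory-efficient): patterns grow one base at a time until none meets the support threshold, so no length has to be fixed in advance, and non-maximal patterns are filtered at the end:

```python
# Sketch (level-wise, not the paper's algorithm): mine contiguous patterns
# occurring in at least `minsup` sequences, then keep only maximal ones.
def contiguous_frequent(seqs, minsup):
    def support(p):
        return sum(p in s for s in seqs)
    # Start from frequent single bases; extend by one base per level.
    current = {s[i:i+1] for s in seqs for i in range(len(s))}
    current = {p for p in current if support(p) >= minsup}
    frequent = []
    while current:
        frequent.extend(current)
        k = len(next(iter(current)))          # all patterns share length k
        longer = {s[i:i+k+1] for s in seqs
                  for i in range(len(s) - k) if s[i:i+k] in current}
        current = {p for p in longer if support(p) >= minsup}
    # Maximal = not a substring of any other frequent pattern.
    return [p for p in frequent
            if not any(p != q and p in q for q in frequent)]

seqs = ["ATGCGATGCA", "TTATGCGATG", "CATGCGATGC"]
print(contiguous_frequent(seqs, minsup=3))    # -> ['ATGCGATG']
```

The paper's contribution lies precisely in avoiding this sketch's repeated scans and candidate explosion on genome-scale datasets.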
International Nuclear Information System (INIS)
Geemert, René van
2014-01-01
eigenvalue to unity. This paper presents a rigorous derivation of the new approach, followed by a comparison on convergence efficiencies, for a number of 3D full core nodal grid resolution regimes, between the previously available multi-level rebalancing setup and the new multi-level surface rebalancing concept. The surface rebalancing methodology and a number of related concepts are covered in the patents EP2091049, EP2287855, EP2287854 and EP2287853 that were granted in 2012
Geldenhuys, Greta; Muller, Nina; Frylinck, Lorinda; Hoffman, Louwrens C
2016-01-15
Baseline research on the toughness of Egyptian goose meat is required. This study therefore investigates the post mortem pH and temperature decline (15 min-4 h 15 min post mortem) in the pectoralis muscle (breast portion) of this gamebird species. It also explores the enzyme activity of the Ca(2+)-dependent proteases (the calpain system) and the lysosomal cathepsins during the rigor mortis period. No differences were found for any of the variables between genders. The pH decline in the pectoralis muscle occurs quite rapidly (c = -0.806; ultimate pH ∼ 5.86) compared with other species, and it is speculated that the high rigor temperature (>20 °C) may contribute to the increased toughness. No calpain I was found in Egyptian goose meat and the µ/m-calpain activity remained constant during the rigor period, while a decrease in calpastatin activity was observed. The cathepsin B, B & L and H activity increased over the rigor period. Further research into the connective tissue content and myofibrillar breakdown during aging is required in order to establish whether the proteolytic enzymes do in fact contribute to tenderisation. © 2015 Society of Chemical Industry.
A methodology for the rigorous verification of plasma simulation codes
Riva, Fabio
2016-10-01
The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: verification, which is a mathematical issue targeted at assessing that the physical model is correctly solved, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on verification, which in turn is composed of code verification, targeted at assessing that a physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
Donders, S.; Pluymers, B.; Ragnarsson, P.; Hadjit, R.; Desmet, W.
2010-04-01
In the vehicle design process, design decisions are more and more based on virtual prototypes. Due to competitive and regulatory pressure, vehicle manufacturers are forced to improve product quality, to reduce time-to-market and to launch an increasing number of design variants on the global market. To speed up the design iteration process, substructuring and component mode synthesis (CMS) methods are commonly used, involving the analysis of substructure models and the synthesis of the substructure analysis results. Substructuring and CMS enable efficient decentralized collaboration across departments and allow to benefit from the availability of parallel computing environments. However, traditional CMS methods become prohibitively inefficient when substructures are coupled along large interfaces, i.e. with a large number of degrees of freedom (DOFs) at the interface between substructures. The reason is that the analysis of substructures involves the calculation of a number of enrichment vectors, one for each interface degree of freedom (DOF). Since large interfaces are common in vehicles (e.g. the continuous line connections to connect the body with the windshield, roof or floor), this interface bottleneck poses a clear limitation in the vehicle noise, vibration and harshness (NVH) design process. Therefore there is a need to describe the interface dynamics more efficiently. This paper presents a wave-based substructuring (WBS) approach, which allows reducing the interface representation between substructures in an assembly by expressing the interface DOFs in terms of a limited set of basis functions ("waves"). As the number of basis functions can be much lower than the number of interface DOFs, this greatly facilitates the substructure analysis procedure and results in faster design predictions. The waves are calculated once from a full nominal assembly analysis, but these nominal waves can be re-used for the assembly of modified components. The WBS approach thus
K. Di; Y. Liu; B. Liu; M. Peng
2012-01-01
Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of landing and surface operation of the Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of the CE-1 and CE-2 CCD cameras based on the push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D c...
Duan, Lili; Liu, Xiao; Zhang, John Z H
2016-05-04
Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
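As background to the "no extra computational cost" claim: in the interaction entropy formulation, as I understand it from the method's description, the entropic term is obtained directly from fluctuations of the protein-ligand interaction energy along the MD trajectory,

```latex
-T\Delta S \;=\; \frac{1}{\beta}\,\ln\!\left\langle e^{\,\beta\,\Delta E_{\mathrm{int}}}\right\rangle,
\qquad
\Delta E_{\mathrm{int}} = E_{\mathrm{int}} - \left\langle E_{\mathrm{int}}\right\rangle,
\qquad
\beta = \frac{1}{k_B T},
```

where the angle brackets denote the trajectory average. Since the interaction energies are already computed during the binding simulation, evaluating this exponential average adds essentially nothing on top of the simulation itself, in contrast to a separate normal-mode calculation.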
Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul
2012-01-01
The stock market is considered essential for economic growth and is expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, namely the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated-normal and half-normal distributions were used in the model, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated-normal distribution is preferable to the half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in the time-varying environment, whereas it was high for the investment group but low for the ceramic group in the time-invariant situation.
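For orientation, the Cobb-Douglas stochastic frontier referred to here has the standard composed-error form (sketched from the usual specification; the study's exact inefficiency dynamics may differ):

```latex
\ln y_{it} \;=\; \beta_0 + \sum_{k}\beta_k \ln x_{kit} + v_{it} - u_{it},
\qquad
v_{it}\sim N(0,\sigma_v^2),\quad u_{it}\ge 0,
```

where v is symmetric statistical noise, u is the one-sided inefficiency term (half-normal or truncated-normal, the two assumptions compared in the study), and technical efficiency is measured as TE_it = exp(-u_it).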
Directory of Open Access Journals (Sweden)
Md Zobaer Hasan
The stock market is considered essential for economic growth and is expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, namely the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated-normal and half-normal distributions were used in the model, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated-normal distribution is preferable to the half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in the time-varying environment, whereas it was high for the investment group but low for the ceramic group in the time-invariant situation.
EPIC: A Testbed for Scientifically Rigorous Cyber-Physical Security Experimentation
SIATERLIS CHRISTOS; GENGE BELA; HOHENADEL MARC
2013-01-01
Recent malware, like Stuxnet and Flame, constitute a major threat to Networked Critical Infrastructures (NCIs), e.g., power plants. They revealed several vulnerabilities in today's NCIs, but most importantly they highlighted the lack of an efficient scientific approach to conduct experiments that measure the impact of cyber threats on both the physical and the cyber parts of NCIs. In this paper we present EPIC, a novel cyber-physical testbed and a modern scientific instrument that can pr...
Sun, Xiang; Li, Xinyao; Song, Song; Zhu, Yuchao; Liang, Yu-Feng; Jiao, Ning
2015-05-13
An efficient Mn-catalyzed aerobic oxidative hydroxyazidation of olefins for the synthesis of β-azido alcohols has been developed. The aerobic oxidative generation of azido radical, employing air as the terminal oxidant, is disclosed as the key process for this transformation. The reaction is notable for its broad substrate scope, inexpensive Mn catalyst, high efficiency, easy operation under air, and mild conditions at room temperature. This chemistry provides a novel approach to high-value-added β-azido alcohols, which are useful precursors of aziridines, β-amino alcohols, and other important N- and O-containing heterocyclic compounds. It also provides an unexpected approach to azido-substituted cyclic peroxy alcohol esters. A DFT calculation indicates that the Mn catalyst plays dual key roles, as an efficient catalyst for the generation of azido radical and as a stabilizer for the peroxyl radical intermediate. Further calculation reasonably explains the proposed mechanism for the control of C-C bond cleavage and for the formation of β-azido alcohols.
International Nuclear Information System (INIS)
Flores M, E.; Avila, O.; Rodriguez V, M.; Massillon, J.L.G.; Buenfil A, E.; Ruiz T, C.; Brandan, M.E.; Gamboa De Buen, I.
2007-01-01
This work presents measurements of the relative thermoluminescent efficiency of the high-temperature peaks of TLD-100 dosemeters exposed to protons of 1.5 MeV and to helium nuclei of 3 and 7.5 MeV. A rigorous reading and deconvolution protocol was used for the calculation of the TL efficiencies. Additionally, an Excel program was developed that facilitated the deconvolution adjustment of the glow curves. (Author)
Zhang, Hongqin; Tian, Xiangjun
2018-04-01
Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
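The computational payoff of the alternating-directions construction is easy to demonstrate: when the correlation is (approximately) separable by direction, a Kronecker product of small 1-D decompositions reproduces the full matrix at a fraction of the cost of decomposing it directly. A minimal numpy sketch (Gaussian 1-D correlations; the low-resolution EOF plus spline-upsampling step of the paper is omitted):

```python
# Sketch of the alternating-directions idea: decompose small 1-D correlation
# matrices along each axis, truncate them, and recover the 3-D correlation
# as a Kronecker product instead of decomposing the huge matrix directly.
import numpy as np

def corr1d(n, L):
    """Gaussian correlation between n grid points with length scale L."""
    i = np.arange(n)
    return np.exp(-((i[:, None] - i[None, :]) / L) ** 2)

def truncated(C, k):
    w, V = np.linalg.eigh(C)               # C is symmetric
    idx = np.argsort(w)[::-1][:k]          # keep k leading modes
    return (V[:, idx] * w[idx]) @ V[:, idx].T

nx, ny, nz, k = 12, 10, 8, 4
Cx, Cy, Cz = corr1d(nx, 3.0), corr1d(ny, 2.5), corr1d(nz, 2.0)
C_full = np.kron(np.kron(Cx, Cy), Cz)      # exact separable correlation
C_apx = np.kron(np.kron(truncated(Cx, k), truncated(Cy, k)),
                truncated(Cz, k))
err = np.linalg.norm(C_full - C_apx) / np.linalg.norm(C_full)
print(f"relative error with {k} modes per direction: {err:.2e}")
```

Three eigendecompositions of order 12, 10 and 8 replace one of order 960 here; on assimilation-sized grids the same ratio is what makes the direct decomposition of C prohibitive and the factored one affordable.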
International Nuclear Information System (INIS)
Salari, Ehsan; Craft, David; Wala, Jeremiah
2012-01-01
To formulate and solve the fluence-map merging procedure of the recently-published VMAT treatment-plan optimization method, called vmerge, as a bi-criteria optimization problem. Using an exact merging method rather than the previously-used heuristic, we are able to better characterize the trade-off between the delivery efficiency and dose quality. vmerge begins with a solution of the fluence-map optimization problem with 180 equi-spaced beams that yields the ‘ideal’ dose distribution. Neighboring fluence maps are then successively merged, meaning that they are added together and delivered as a single map. The merging process improves the delivery efficiency at the expense of deviating from the initial high-quality dose distribution. We replace the original merging heuristic by considering the merging problem as a discrete bi-criteria optimization problem with the objectives of maximizing the treatment efficiency and minimizing the deviation from the ideal dose. We formulate this using a network-flow model that represents the merging problem. Since the problem is discrete and thus non-convex, we employ a customized box algorithm to characterize the Pareto frontier. The Pareto frontier is then used as a benchmark to evaluate the performance of the standard vmerge algorithm as well as two other similar heuristics. We test the exact and heuristic merging approaches on a pancreas and a prostate cancer case. For both cases, the shape of the Pareto frontier suggests that starting from a high-quality plan, we can obtain efficient VMAT plans through merging neighboring fluence maps without substantially deviating from the initial dose distribution. The trade-off curves obtained by the various heuristics are contrasted and shown to all be equally capable of initial plan simplifications, but to deviate in quality for more drastic efficiency improvements. This work presents a network optimization approach to the merging problem. Contrasting the trade-off curves of the
Salari, Ehsan; Wala, Jeremiah; Craft, David
2012-09-07
To formulate and solve the fluence-map merging procedure of the recently-published VMAT treatment-plan optimization method, called VMERGE, as a bi-criteria optimization problem. Using an exact merging method rather than the previously-used heuristic, we are able to better characterize the trade-off between the delivery efficiency and dose quality. VMERGE begins with a solution of the fluence-map optimization problem with 180 equi-spaced beams that yields the 'ideal' dose distribution. Neighboring fluence maps are then successively merged, meaning that they are added together and delivered as a single map. The merging process improves the delivery efficiency at the expense of deviating from the initial high-quality dose distribution. We replace the original merging heuristic by considering the merging problem as a discrete bi-criteria optimization problem with the objectives of maximizing the treatment efficiency and minimizing the deviation from the ideal dose. We formulate this using a network-flow model that represents the merging problem. Since the problem is discrete and thus non-convex, we employ a customized box algorithm to characterize the Pareto frontier. The Pareto frontier is then used as a benchmark to evaluate the performance of the standard VMERGE algorithm as well as two other similar heuristics. We test the exact and heuristic merging approaches on a pancreas and a prostate cancer case. For both cases, the shape of the Pareto frontier suggests that starting from a high-quality plan, we can obtain efficient VMAT plans through merging neighboring fluence maps without substantially deviating from the initial dose distribution. The trade-off curves obtained by the various heuristics are contrasted and shown to all be equally capable of initial plan simplifications, but to deviate in quality for more drastic efficiency improvements. This work presents a network optimization approach to the merging problem. Contrasting the trade-off curves of the merging
2010-05-27
... rigorous knowledge and skills in English-language arts and mathematics that employers and colleges expect... specialists and to access the student outcome data needed to meet annual evaluation and reporting requirements...
Use of spatial symmetry in atomic-integral calculations: an efficient permutational approach
International Nuclear Information System (INIS)
Rouzo, H.L.
1979-01-01
The minimal number of independent nonzero atomic integrals that occur over arbitrarily oriented basis orbitals of the form R(r).Y/sub lm/(Ω) is theoretically derived. The corresponding method can be easily applied to any point group, including the molecular continuous groups C/sub infinity v/ and D/sub infinity h/. On the basis of this (theoretical) lower bound, the efficiency of the permutational approach in generating sets of independent integrals is discussed. It is proved that lobe orbitals are always more efficient than the familiar Cartesian Gaussians, in the sense that GLOS provide the shortest integral lists. Moreover, it appears that the new axial GLOS often lead to a number of integrals, which is the theoretical lower bound previously defined. With AGLOS, the numbers of two-electron integrals to be computed, stored, and processed are divided by factors 2.9 (NH 3 ), 4.2 (C 5 H 5 ), and 3.6 (C 6 H 6 ) with reference to the corresponding CGTOS calculations. Remembering that in the permutational approach, atomic integrals are directly computed without any four-indice transformation, it appears that its utilization in connection with AGLOS provides one of the most powerful tools for treating symmetrical species. 34 references
Hursen, Cigdem; Fasli, Funda Gezer
2017-01-01
The main purpose of this research is to investigate the efficiency of scenario based learning and reflective learning approaches in teacher education. The impact of applications of scenario based learning and reflective learning on prospective teachers' academic achievement and views regarding application and professional self-competence…
An efficient and extensible approach for compressing phylogenetic trees
Matthews, Suzanne J
2011-01-01
Background: Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend our TreeZip algorithm by handling trees with weighted branches. Furthermore, by using the compressed TreeZip file as input, we have designed an extensible decompressor that can extract subcollections of trees, compute majority and strict consensus trees, and merge tree collections using set operations such as union, intersection, and set difference. Results: On unweighted phylogenetic trees, TreeZip is able to compress Newick files in excess of 98%. On weighted phylogenetic trees, TreeZip is able to compress a Newick file by at least 73%. TreeZip can be combined with 7zip with little overhead, allowing space savings in excess of 99% (unweighted) and 92% (weighted). Unlike TreeZip, 7zip is not immune to branch rotations, and performs worse as the level of variability in the Newick string representation increases. Finally, since the TreeZip compressed text (TRZ) file contains all the semantic information in a collection of trees, we can easily filter and decompress a subset of trees of interest (such as the set of unique trees), or build the resulting consensus tree in a matter of seconds. We also show the ease with which set operations can be performed on TRZ files, at speeds quicker than those achieved on Newick or 7zip compressed Newick files, and without loss of space savings. Conclusions: TreeZip is an efficient approach for compressing large collections of phylogenetic trees. The semantic and compact nature of the TRZ file allows it to be operated upon directly and quickly, without a need to decompress the original Newick file. We believe that TreeZip will be vital for compressing and archiving trees in the biological community. © 2011 Matthews and Williams; licensee BioMed Central Ltd.
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.
Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M
2016-12-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.
Towards energy and resource efficient manufacturing: A processes and systems approach
DEFF Research Database (Denmark)
Duflou, Joost R.; Sutherland, John W.; Dornfeld, David
2012-01-01
This paper aims to provide a systematic overview of the state of the art in energy and resource efficiency increasing methods and techniques in the domain of discrete part manufacturing, with attention for the effectiveness of the available options. For this purpose a structured approach, distinguishing different system scale levels, is applied: starting from a unit process focus, respectively the multi-machine, factory, multi-facility and supply chain levels are covered. Determined by the research contributions reported in literature, the de facto focus of the paper is mainly on energy related…
Rigorous patient-prosthesis matching of Perimount Magna aortic bioprosthesis.
Nakamura, Hiromasa; Yamaguchi, Hiroki; Takagaki, Masami; Kadowaki, Tasuku; Nakao, Tatsuya; Amano, Atsushi
2015-03-01
Severe patient-prosthesis mismatch, defined as an effective orifice area index ≤0.65 cm² m⁻², has been associated with poor long-term survival after aortic valve replacement. Reported rates of severe mismatch involving the Perimount Magna aortic bioprosthesis range from 4% to 20% in patients with a small annulus. Between June 2008 and August 2011, 251 patients (mean age 70.5 ± 10.2 years; mean body surface area 1.55 ± 0.19 m²) underwent aortic valve replacement with a Perimount Magna bioprosthesis, with or without concomitant procedures. We performed our procedure with rigorous patient-prosthesis matching to implant a valve appropriately sized to each patient, and carried out annular enlargement when a 19-mm valve did not fit. The bioprosthetic performance was evaluated by transthoracic echocardiography predischarge and at 1 and 2 years after surgery. Overall hospital mortality was 1.6%. Only 5 (2.0%) patients required annular enlargement. The mean follow-up period was 19.1 ± 10.7 months with a 98.4% completion rate. Predischarge data showed a mean effective orifice area index of 1.21 ± 0.20 cm² m⁻². Moderate mismatch, defined as an effective orifice area index ≤0.85 cm² m⁻², developed in 4 (1.6%) patients. None developed severe mismatch. Data at 1 and 2 years showed only two cases of moderate mismatch; neither was severe. Rigorous patient-prosthesis matching maximized the performance of the Perimount Magna, and no severe mismatch resulted in this Japanese population of aortic valve replacement patients.
An efficient approach to unstructured mesh hydrodynamics on the cell broadband engine
Energy Technology Data Exchange (ETDEWEB)
Ferenbaugh, Charles R [Los Alamos National Laboratory
2010-01-01
Unstructured mesh physics for the Cell Broadband Engine (CBE) has received little or no attention to date, largely because the CBE architecture poses particular challenges for unstructured mesh algorithms. The most common SPU memory management strategies cannot be applied to the irregular memory access patterns of unstructured meshes, and the SPU vector instruction set does not support the indirect addressing needed by connectivity arrays. This paper presents an approach to unstructured mesh physics that addresses these challenges, by creating a new mesh data structure and reorganizing code to give efficient CBE performance. The approach is demonstrated on the FLAG production hydrodynamics code using standard test problems, and results show an average speedup of more than 5x over the original code.
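The flavor of such a reorganization can be sketched as follows: instead of indirect global addressing inside the compute kernel, elements are grouped into fixed-size chunks with precomputed local gather lists, so each chunk's node data can be fetched as one contiguous block (as SPU DMA requires). This is our simplified illustration, not the FLAG data structure:

```python
# Convert a global connectivity array into per-chunk gather lists plus
# purely local indices, so the kernel never does global indirect addressing.
def build_chunks(conn, chunk_elems):
    chunks = []
    for s in range(0, len(conn), chunk_elems):
        elems = conn[s:s + chunk_elems]
        globals_used = sorted({n for e in elems for n in e})
        local = {g: i for i, g in enumerate(globals_used)}
        local_conn = [[local[n] for n in e] for e in elems]
        chunks.append((globals_used, local_conn))
    return chunks

conn = [(0, 1, 2), (1, 2, 3), (4, 5, 6), (5, 6, 7)]
for globals_used, local_conn in build_chunks(conn, 2):
    # one contiguous gather per chunk, then local indexing only
    print(globals_used, local_conn)
```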
Directory of Open Access Journals (Sweden)
Henrik von Wehrden
2017-02-01
Sustainability science encompasses a unique field that is defined through its purpose, the problem it addresses, and its solution-oriented agenda. However, this orientation creates significant methodological challenges. In this discussion paper, we conceptualize sustainability problems as wicked problems to tease out the key challenges that sustainability science is facing if scientists intend to deliver on its solution-oriented agenda. Building on the available literature, we discuss three aspects that demand increased attention for advancing sustainability science: (1) methods with higher diversity and complementarity are needed to increase the chance of deriving solutions to the unique aspects of wicked problems; for instance, mixed-methods approaches are potentially better suited to allow for an approximation of solutions, since they cover wider arrays of knowledge; (2) methodologies capable of dealing with wicked problems demand strict procedural and ethical guidelines, in order to ensure their integration potential; for example, learning from solution implementation in different contexts requires increased comparability between research approaches while carefully addressing issues of legitimacy and credibility; and (3) approaches are needed that allow for longitudinal research, since wicked problems are continuous and solutions can only be diagnosed in retrospect; for example, complex dynamics of wicked problems play out across temporal patterns that are not necessarily aligned with the common timeframe of participatory sustainability research. Taken together, we call for plurality in methodologies, emphasizing procedural rigor and the necessity of continuous research, to effectively address wicked problems as well as methodological challenges in sustainability science.
Nonlinear mechanics of non-rigid origami: an efficient computational approach
Liu, K.; Paulino, G. H.
2017-10-01
Origami-inspired designs possess attractive applications to science and engineering (e.g. deployable, self-assembling, adaptable systems). The special geometric arrangement of panels and creases gives rise to unique mechanical properties of origami, such as reconfigurability, making origami designs well suited for tunable structures. Although often ignored, origami structures exhibit additional soft modes beyond rigid folding due to the flexibility of thin sheets that further influence their behaviour. Actual behaviour of origami structures usually involves significant geometric nonlinearity, which amplifies the influence of additional soft modes. To investigate the nonlinear mechanics of origami structures with deformable panels, we present a structural engineering approach for simulating the nonlinear response of non-rigid origami structures. In this paper, we propose a fully nonlinear, displacement-based implicit formulation for performing static/quasi-static analyses of non-rigid origami structures based on 'bar-and-hinge' models. The formulation itself leads to an efficient and robust numerical implementation. Agreement between real models and numerical simulations demonstrates the ability of the proposed approach to capture key features of origami behaviour.
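As a flavor of one ingredient in such a formulation (our sketch only; the paper's full bar-and-hinge model also includes rotational springs at creases and across panels, assembled into a Newton solve of the nonlinear equilibrium equations), the internal force of a single elastic bar element might be computed as:

```python
import numpy as np

# Internal nodal force vector of one elastic bar between nodes a and b,
# using engineering strain; geometric nonlinearity enters through the
# dependence of the axial direction and length on the displacements.
def bar_internal_force(xa, xb, ua, ub, EA):
    L0 = np.linalg.norm(xb - xa)          # undeformed length
    d = (xb + ub) - (xa + ua)             # deformed chord vector
    L = np.linalg.norm(d)
    strain = (L - L0) / L0
    n = d / L                             # current axial direction
    f = EA * strain * n
    return np.concatenate([-f, f])        # forces on nodes a and b

# a 10%-stretched unit bar along x
print(bar_internal_force(np.zeros(3), np.array([1.0, 0, 0]),
                         np.zeros(3), np.array([0.1, 0, 0]), EA=1.0))
```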
Baran, Derya; Gasparini, Nicola; Wadsworth, Andrew; Tan, Ching Hong; Wehbe, Nimer; Song, Xin; Hamid, Zeinab; Zhang, Weimin; Neophytou, Marios; Kirchartz, Thomas; Brabec, Christoph J; Durrant, James R; McCulloch, Iain
2018-05-25
Nonfullerene solar cells have increased their efficiencies up to 13%, yet quantum efficiencies are still limited to 80%. Here we report efficient nonfullerene solar cells with quantum efficiencies approaching unity. This is achieved with overlapping absorption bands of donor and acceptor that increases the photon absorption strength in the range from about 570 to 700 nm, thus, almost all incident photons are absorbed in the active layer. The charges generated are found to dissociate with negligible geminate recombination losses resulting in a short-circuit current density of 20 mA cm⁻² along with open-circuit voltages >1 V, which is remarkable for a 1.6 eV bandgap system. Most importantly, the unique nano-morphology of the donor:acceptor blend results in a substantially improved stability under illumination. Understanding the efficient charge separation in nonfullerene acceptors can pave the way to robust and recombination-free organic solar cells.
Directory of Open Access Journals (Sweden)
Feng Xiao-Jiang
2008-10-01
Background: The analysis of large-scale data sets via clustering techniques is utilized in a number of applications. Biclustering in particular has emerged as an important problem in the analysis of gene expression data, since genes may only jointly respond over a subset of conditions. Biclustering algorithms also have important applications in sample classification where, for instance, tissue samples can be classified as cancerous or normal. Many of the methods for biclustering, and clustering algorithms in general, utilize simplified models or heuristic strategies for identifying the "best" grouping of elements according to some metric and cluster definition and thus result in suboptimal clusters. Results: In this article, we present a rigorous approach to biclustering, OREO, which is based on the Optimal RE-Ordering of the rows and columns of a data matrix so as to globally minimize the dissimilarity metric. The physical permutations of the rows and columns of the data matrix can be modeled as either a network flow problem or a traveling salesman problem. Cluster boundaries in one dimension are used to partition and re-order the other dimensions of the corresponding submatrices to generate biclusters. The performance of OREO is tested on (a) metabolite concentration data, (b) an image reconstruction matrix, (c) synthetic data with implanted biclusters, and gene expression data for (d) colon cancer data, (e) breast cancer data, as well as (f) yeast segregant data to validate the ability of the proposed method and compare it to existing biclustering and clustering methods. Conclusion: We demonstrate that this rigorous global optimization method for biclustering produces clusters with more insightful groupings of similar entities, such as genes or metabolites sharing common functions, than other clustering and biclustering algorithms and can reconstruct underlying fundamental patterns in the data for several distinct sets of data matrices arising…
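The re-ordering idea can be illustrated with a greedy stand-in (OREO itself solves the row/column permutation to global optimality as a network-flow or traveling-salesman model; the nearest-neighbor chaining below is only a heuristic illustration): re-order rows so adjacent rows are similar, then look for cluster boundaries where the adjacent-row dissimilarity jumps.

```python
import numpy as np

def greedy_reorder(data):
    n = data.shape[0]
    dist = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=2)
    order, remaining = [0], set(range(1, n))
    while remaining:                        # nearest-neighbor chaining
        nxt = min(remaining, key=lambda j: dist[order[-1], j])
        remaining.remove(nxt)
        order.append(nxt)
    return order

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (5, 4)), rng.normal(10, 1, (5, 4))])
order = greedy_reorder(X)
gaps = [float(np.linalg.norm(X[a] - X[b])) for a, b in zip(order, order[1:])]
print(order, np.argmax(gaps))   # the largest gap marks the cluster boundary
```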
Fast and Rigorous Assignment Algorithm Multiple Preference and Calculation
Directory of Open Access Journals (Sweden)
Ümit Çiftçi
2010-03-01
The goal of this paper is to develop an algorithm that evaluates students and then places them according to their desired choices and dependent preferences. The developed algorithm is also used to implement software. The success and accuracy of the software, as well as of the algorithm, are tested by applying it to the ability test at Beykent University. This ability test is repeated several times in order to fill all available places in the Fine Arts Faculty departments in every academic year. It has been shown that this algorithm is very fast and rigorous after application in the 2008-2009 and 2009-2010 academic years. Key words: assignment algorithm, student placement, ability test.
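A minimal sketch of score-then-place assignment in this spirit (hedged: the paper's dependent-preference rules are more involved than this single greedy pass, and all names here are illustrative):

```python
# Rank students by ability-test score; each student takes the first choice
# on their preference list that still has a free seat.
def place(students, capacity):
    # students: list of (name, score, [department preferences])
    seats = dict(capacity)                      # department -> remaining seats
    placement = {}
    for name, _, prefs in sorted(students, key=lambda s: -s[1]):
        for dept in prefs:
            if seats.get(dept, 0) > 0:
                seats[dept] -= 1
                placement[name] = dept
                break
    return placement

print(place([("ay", 91, ["painting", "sculpture"]),
             ("be", 88, ["painting"]),
             ("ce", 77, ["painting", "sculpture"])],
            {"painting": 1, "sculpture": 1}))
# {'ay': 'painting', 'ce': 'sculpture'}; 'be' remains unplaced this round
```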
Juvonen, Piia
2012-01-01
ABSTRACT Juvonen, Piia Suvi Päivikki 2012. Effective information flow through efficient supply chain management - Value stream mapping approach - Case Outokumpu Tornio Works. Master's Thesis. Kemi-Tornio University of Applied Sciences. Business and Culture. Pages 63. Appendices 2. The general aim of this thesis is to explore effective information flow through efficient supply chain management by following one of the lean management principles, value stream mapping. The specific research…
Auditing energy use - a systematic approach for enhancing energy efficiency
International Nuclear Information System (INIS)
Ardhapnrkar, P.M.; Mahalle, A.M.
2005-01-01
Energy management is a critical activity in developing as well as developed countries, owing to constraints on the availability of primary energy resources and the increasing demand for energy from industrial and non-industrial users. Energy consumption is a vital parameter that determines the economic growth of any country. An energy management system (EMS) can save money by allowing greater control over energy-consuming equipment. The foundation of the energy program is the energy audit, which is the systematic study of a factory or building to determine where and how well energy is being used. It is the nucleus of any successful energy saving program - it is a tool, not a solution. Conventional energy conservation methods are mostly sporadic and lack a coordinated plan of action. Consequently, only the apparent systems are treated, without analysis of system interactions. An energy audit, on the other hand, involves a total system approach and aims at optimizing energy use efficiently for the entire plant. In the present paper a new approach to pursuing energy conservation techniques is discussed. The focus is mainly on the methodology of the energy audit, energy use analysis, relating energy to production, and reducing energy losses. It is observed that this systematic approach, if adopted - consisting of three essential segments, namely capacity utilization, fine-tuning of the equipment, and technology up-gradation - can result in phenomenal energy savings, building a competitive edge for the industry. This approach, along with commitment, can provide the right impetus to reap the benefits of energy conservation on a sustained basis. (author)
Bringing scientific rigor to community-developed programs in Hong Kong
Directory of Open Access Journals (Sweden)
Fabrizio Cecilia S
2012-12-01
Background: This paper describes efforts to generate evidence for community-developed programs to enhance family relationships in the Chinese culture of Hong Kong, within the framework of community-based participatory research (CBPR). Methods: The CBPR framework was applied to help maximize the development of the intervention and the public health impact of the studies, while enhancing the capabilities of the social service sector partners. Results: Four academic-community research teams explored the process of designing and implementing randomized controlled trials in the community. In addition to the expected cultural barriers between teams of academics and community practitioners, with their different outlooks, concerns and languages, the teams navigated issues in utilizing the principles of CBPR unique to this Chinese culture. Eventually the teams developed tools for adaptation, such as an emphasis on building the relationship while respecting role delineation and an iterative process of defining the non-negotiable parameters of research design while maintaining scientific rigor. Lessons learned include the risk of underemphasizing the size of the operational and skills shift between usual agency practices and research studies, the importance of minimizing non-negotiable parameters in implementing rigorous research designs in the community, and the need to view community capacity enhancement as a long-term process. Conclusions: The four pilot studies under the FAMILY Project demonstrated that nuanced design adaptations, such as wait-list controls and shorter assessments, better served the needs of the community and led to the successful development and vigorous evaluation of a series of preventive, family-oriented interventions in the Chinese culture of Hong Kong.
Constitutional, legal and jurisprudential development of the principle of subsidiary rigor [Desarrollo constitucional, legal y jurisprudencial del principio de rigor subsidiario]
Directory of Open Access Journals (Sweden)
Germán Eduardo Cifuentes Sandoval
2013-09-01
In Colombia, state administration of the environment is the responsibility of the National Environmental System (SINA). SINA is made up of state entities that coexist under a mixed scheme of centralization and decentralization. SINA's decentralization expresses itself at the administrative and territorial levels, and the entities that operate under this structure are expected to act in a coordinated way in order to reach the objectives set out in the national environmental policy. To achieve coordinated environmental administration across the entities that make up SINA, Colombian environmental legislation includes three basic principles: (1) the principle of "regional harmony"; (2) the principle of "normative gradation"; and (3) the principle of "subsidiary rigor". These principles belong to Article 63 of Law 99 of 1993, and although equivalents of the first two can be found in other norms of the Colombian legal system, this is not the case for "subsidiary rigor", because its elements are unique to environmental law and do not appear similar to those of the principle of "subsidiarity" in Article 288 of the Political Constitution. "Subsidiary rigor" gives decentralized entities a special administrative power to make the current environmental legislation more demanding in order to defend the local ecological patrimony. It is an administrative power, founded on decentralized autonomy, that allows them to stand in for the regulatory role of the legislative power, on the condition that the new regulation be more demanding than that issued at the central level.
Cold homes, fuel poverty and energy efficiency improvements: A longitudinal focus group approach.
Grey, Charlotte N B; Schmieder-Gaite, Tina; Jiang, Shiyu; Nascimento, Christina; Poortinga, Wouter
2017-08-01
Cold homes and fuel poverty have been identified as factors in health and social inequalities that could be alleviated through energy efficiency interventions. Research on fuel poverty and the health impacts of affordable warmth initiatives has to date primarily been conducted using quantitative and statistical methods, limiting how fuel poverty is understood. This study took a longitudinal focus group approach that allowed exploration of lived experiences of fuel poverty before and after an energy efficiency intervention. Focus group discussions were held with residents from three low-income communities before (n = 28) and after (n = 22) they received energy efficiency measures funded through a government-led scheme. The results show that improving the energy efficiency of homes at risk of fuel poverty has a profound impact on wellbeing and quality of life, financial stress, thermal comfort, social interactions and indoor space use. However, the process of receiving the intervention was experienced by some as stressful. There is a need for better community engagement and communication to improve the benefits delivered by fuel poverty programmes, as well as further qualitative exploration to better understand the wider impacts of fuel poverty and policy-led intervention schemes.
Energy Efficient Hierarchical Clustering Approaches in Wireless Sensor Networks: A Survey
Directory of Open Access Journals (Sweden)
Bilal Jan
2017-01-01
Wireless sensor networks (WSNs) are one of the significant technologies due to their diverse applications, such as health care monitoring, smart phones, military, disaster management, and other surveillance systems. Sensor nodes are usually deployed in large numbers and work independently in unattended harsh environments. Due to constrained resources, typically the scarce battery power, these wireless nodes are grouped into clusters for energy-efficient communication. In clustering, hierarchical schemes have attracted great interest for minimizing energy consumption. Hierarchical schemes are generally categorized as cluster-based and grid-based approaches. In cluster-based approaches, nodes are grouped into clusters and a resourceful sensor node is nominated as a cluster head (CH), while in grid-based approaches the network is divided into confined virtual grids, usually managed by the base station. This paper highlights and discusses the design challenges for cluster-based schemes, the important cluster formation parameters, and the classification of hierarchical clustering protocols. Moreover, existing cluster-based and grid-based techniques are evaluated by considering certain parameters to help users in selecting the appropriate technique. Furthermore, a detailed summary of these protocols is presented with their advantages, disadvantages, and applicability in particular cases.
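As a concrete example of the cluster-based category surveyed here, a LEACH-style probabilistic cluster-head election can be sketched as follows (the textbook threshold rule, not a protocol introduced by this survey; parameter names are ours):

```python
import random

# Over an epoch of round(1/p) rounds, each node that has not yet served as
# cluster head this epoch elects itself with threshold T(n), so the CH role
# (and its energy cost) rotates across the network.
def is_cluster_head(p, rnd, served_this_epoch):
    if served_this_epoch:
        return False
    threshold = p / (1.0 - p * (rnd % round(1.0 / p)))
    return random.random() < threshold

# with all 100 nodes still eligible at round 7 and p = 0.05, the threshold
# is ~0.077, so roughly 8 heads are expected this round
heads = [n for n in range(100)
         if is_cluster_head(0.05, rnd=7, served_this_epoch=False)]
print(len(heads))
```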
Heimeshoff, Mareike; Schreyögg, Jonas; Kwietniewski, Lukas
2014-06-01
This is the first study to use stochastic frontier analysis to estimate both the technical and cost efficiency of physician practices. The analysis is based on panel data from 3,126 physician practices for the years 2006 through 2008. We specified the technical and cost frontiers as translog functions, using the one-step approach of Battese and Coelli to detect factors that influence the efficiency of general practitioners and specialists. Variables that were not analyzed previously in this context (e.g., the degree of practice specialization) and a range of control variables such as the patients' case mix were included in the estimation. Our results suggest that it is important to investigate both technical and cost efficiency, as results may depend on the type of efficiency analyzed. For example, the technical efficiency of group practices was significantly higher than that of solo practices, whereas the results for cost efficiency differed. This may be due to indivisibilities in expensive technical equipment, which can lead to different types of health care services being provided by different practice types (i.e., with group practices using more expensive inputs, leading to higher costs per case despite these practices being technically more efficient). Other practice characteristics such as participation in disease management programs show the same impact throughout both cost and technical efficiency: participation in disease management programs led to an increase in both technical and cost efficiency, and may also have had positive effects on the quality of care. Future studies should take quality-related issues into account.
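For concreteness, a cost frontier of the kind described might take the following translog form, with a one-sided inefficiency term whose determinants are estimated jointly in the Battese-Coelli one-step manner (our notation; the study's exact regressors are not listed in the abstract):

```latex
\ln C_{it} = \alpha_0 + \sum_k \beta_k \ln w_{kit} + \beta_y \ln y_{it}
           + \tfrac{1}{2}\sum_k\sum_l \beta_{kl}\,\ln w_{kit}\,\ln w_{lit}
           + \tfrac{1}{2}\beta_{yy}\,(\ln y_{it})^2
           + \sum_k \gamma_k \ln w_{kit}\,\ln y_{it} + v_{it} + u_{it},
\qquad u_{it} \sim N^{+}\!\left(z_{it}'\delta,\ \sigma_u^2\right)
```

Here C is cost, w input prices, y output, v statistical noise, and the mean of the truncated-normal inefficiency u depends on practice characteristics z (for instance group versus solo practice, specialization, disease-management participation, and case mix, as discussed above).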
A Qualitative Approach to Enzyme Inhibition
Waldrop, Grover L.
2009-01-01
Most general biochemistry textbooks present enzyme inhibition by showing how the basic Michaelis-Menten parameters K_m and V_max are affected mathematically by a particular type of inhibitor. This approach, while mathematically rigorous, does not lend itself to understanding how inhibition patterns are used to determine the…
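For reference, the mathematical treatment alluded to is typically the standard Michaelis-Menten inhibition rate laws (textbook equations, not material from this article):

```python
# Competitive inhibition scales the apparent K_m by (1 + [I]/K_i);
# noncompetitive inhibition scales the apparent V_max instead.
def v_competitive(S, I, Vmax, Km, Ki):
    return Vmax * S / (Km * (1.0 + I / Ki) + S)

def v_noncompetitive(S, I, Vmax, Km, Ki):
    return (Vmax / (1.0 + I / Ki)) * S / (Km + S)

print(v_competitive(S=5.0, I=2.0, Vmax=1.0, Km=1.0, Ki=1.0))
print(v_noncompetitive(S=5.0, I=2.0, Vmax=1.0, Km=1.0, Ki=1.0))
```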
Measurement and evaluation of energy efficiency programs: California and South Korea
International Nuclear Information System (INIS)
Vine, E.; Rhee, C.H.; Lee, K.D.
2006-01-01
One of the key challenges for countries facing electric utility restructuring is to ensure that key public goods, such as energy efficiency programs, do not lose support but are maintained and enhanced via regulatory policy and government action. Moreover, an infrastructure and process also needs to be designed and implemented for conducting the measurement and evaluation of energy efficiency programs. This paper describes the experiences of California and the Republic of Korea (Korea) in addressing these issues. These case studies confirm that the active involvement of regulatory bodies is needed to ensure that energy efficiency investments continue. The case studies also show that the development of an infrastructure and process for conducting rigorous measurement and evaluation takes time and needs the active participation of many stakeholders
Olsson, U; Hertzman, C; Tornberg, E
1994-01-01
The course of rigor mortis, ageing and tenderness have been evaluated for two beef muscles, M. semimembranosus (SM) and M. longissimus dorsi (LD), when entering rigor at constant temperatures in the cold-shortening region (1, 4, 7 and 10°C). The influence of electrical stimulation (ES) was also examined. Post-mortem changes were registered by shortening and isometric tension and by following the decline of pH, ATP and creatine phosphate. The effect of ageing on tenderness was recorded by measuring shear-force (2, 8 and 15 days post mortem) and the sensory properties were assessed 15 days post mortem. It was found that shortening increased with decreasing temperature, resulting in decreased tenderness. Tenderness for LD, but not for SM, was improved by ES at 1 and 4°C, whereas ES did not give rise to any decrease in the degree of shortening during rigor mortis development. This suggests that ES influences tenderization more than it prevents cold-shortening. The samples with a pre-rigor mortis temperature of 1°C could not be tenderized, when stored up to 15 days, whereas this was the case for the muscles entering rigor mortis at the other higher temperatures. The results show that under the conditions used in this study, the course of rigor mortis is more important for the ultimate tenderness than the course of ageing.
Alternative approaches to evaluation of cow efficiency
African Journals Online (AJOL)
anonymous
2017-01-26
Indexes that are consistent with the econometric definition of efficiency and seek to … defined as ratios, such as the biological efficiency metric calf weight/cow weight…
Bounding Averages Rigorously Using Semidefinite Programming: Mean Moments of the Lorenz System
Goluskin, David
2018-04-01
We describe methods for proving bounds on infinite-time averages in differential dynamical systems. The methods rely on the construction of nonnegative polynomials with certain properties, similarly to the way nonlinear stability can be proved using Lyapunov functions. Nonnegativity is enforced by requiring the polynomials to be sums of squares, a condition which is then formulated as a semidefinite program (SDP) that can be solved computationally. Although such computations are subject to numerical error, we demonstrate two ways to obtain rigorous results: using interval arithmetic to control the error of an approximate SDP solution, and finding exact analytical solutions to relatively small SDPs. Previous formulations are extended to allow for bounds depending analytically on parametric variables. These methods are illustrated using the Lorenz equations, a system with three state variables (x, y, z) and three parameters (β, σ, r). Bounds are reported for infinite-time averages of all eighteen moments x^l y^m z^n up to quartic degree that are symmetric under (x, y) → (-x, -y). These bounds apply to all solutions regardless of stability, including chaotic trajectories, periodic orbits, and equilibrium points. The analytical approach yields two novel bounds that are sharp: the mean of z^3 can be no larger than its value of (r-1)^3 at the nonzero equilibria, and the mean of xy^3 must be nonnegative. The interval arithmetic approach is applied at the standard chaotic parameters to bound eleven average moments that all appear to be maximized on the shortest periodic orbit. Our best upper bound on each such average exceeds its value on the maximizing orbit by less than 1%. Many bounds reported here are much tighter than would be possible without computer assistance.
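The sharp bound on the mean of z^3 can be checked numerically against a chaotic trajectory. The sketch below is our own sanity check, independent of the paper's sum-of-squares machinery: it integrates the Lorenz system at the standard parameters with RK4 and compares the time average of z^3 with (r-1)^3:

```python
import numpy as np

sigma, r, beta = 10.0, 28.0, 8.0 / 3.0   # standard chaotic parameters

def lorenz(s):
    x, y, z = s
    return np.array([sigma * (y - x), x * (r - z) - y, x * y - beta * z])

def rk4_step(s, dt):
    k1 = lorenz(s)
    k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2)
    k4 = lorenz(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

dt = 1e-3
s = np.array([1.0, 1.0, 1.0])
for _ in range(50_000):          # discard the transient (50 time units)
    s = rk4_step(s, dt)
acc, n = 0.0, 500_000            # average z^3 over the next 500 time units
for _ in range(n):
    s = rk4_step(s, dt)
    acc += s[2] ** 3
print(acc / n, "<=", (r - 1.0) ** 3)   # bound: mean z^3 <= (r-1)^3 = 19683
```

A finite-time average of one trajectory only approximates the infinite-time quantity, whereas the SDP bounds above hold rigorously for all trajectories at once.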
Directory of Open Access Journals (Sweden)
Salma Noor-E Islami
2014-12-01
Rigor index in market-size striped catfish (Pangasianodon hypophthalmus), locally called Thai-Pangas, was determined to assess fillet yield for the production of value-added products. In whole fish, rigor started within 1 hr after death under both iced and room-temperature conditions, while the rigor index reached a maximum of 72.23% within 8 hr at room temperature and 85.5% within 5 hr in iced condition, and was fully relaxed after 22 hr under both storage conditions. Post-mortem muscle pH decreased to 6.8 after 2 hr and 6.2 after 8 hr, then increased sharply to 6.9 after 9 hr. There was a positive correlation between rigor progress and pH shift in fish fillets. Hand filleting was done post-rigor, and the fillet yield experiment showed that 50.4±2.1% fillet, 8.0±0.2% viscera, 8.0±1.3% skin and 32.0±3.2% carcass could be obtained from Thai-Pangas. Proximate composition analysis of four regions of Thai-Pangas, viz., head region, middle region, tail region and viscera, revealed moisture 78.36%, 81.14%, 81.45% and 57.33%; protein 15.83%, 15.97%, 16.14% and 17.20%; lipid 4.61%, 1.82%, 1.32% and 24.31%; and ash 1.09%, 0.96%, 0.95% and 0.86%, respectively, indicating the suitability of Thai-Pangas for the production of value-added products such as fish fillets.
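Rigor index in such studies is commonly measured by Cutting's tail-drop method; assuming that convention (the abstract does not state the formula), the computation is:

```python
# With the front half of the body supported on a table edge, L0 is the
# vertical droop of the tail at death and Lt the droop at time t.
def rigor_index(L0, Lt):
    # percent; 0 = fully relaxed, approaching 100 = full rigor
    return 100.0 * (L0 - Lt) / L0

print(rigor_index(L0=10.0, Lt=2.8))  # 72.0, near the 8 hr room-temperature peak
```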
Chenu, K; van Oosterom, E J; McLean, G; Deifel, K S; Fletcher, A; Geetika, G; Tirfessa, A; Mace, E S; Jordan, D R; Sulman, R; Hammer, G L
2018-02-21
Following advances in genetics, genomics, and phenotyping, trait selection in breeding is limited by our ability to understand interactions within the plants and with their environments, and to target traits of most relevance for the target population of environments. We propose an integrated approach that combines insights from crop modelling, physiology, genetics, and breeding to identify traits valuable for yield gain in the target population of environments, develop relevant high-throughput phenotyping platforms, and identify genetic controls and their values in production environments. This paper uses transpiration efficiency (biomass produced per unit of water used) as an example of a complex trait of interest to illustrate how the approach can guide modelling, phenotyping, and selection in a breeding program. We believe that this approach, by integrating insights from diverse disciplines, can increase the resource use efficiency of breeding programs for improving yield gains in target populations of environments.
A statistical approach to the analysis of merger and acquisition efficiency in the Russian industry
Directory of Open Access Journals (Sweden)
Karelina M.
2017-01-01
At present, the success of economic institution transformations, as well as the creation of an efficient economic system with a fundamentally new nature of corporate relationships, is impossible without the statistical recording of factors contributing to the efficiency of merger and acquisition transactions in Russian industry. The paper proposes a method for analyzing the efficiency of merger and acquisition transactions of enterprises in the industrial sector of the Russian economy, based on simulation methods. The methodological approach developed to analyze the efficiency of the integration transactions of Russian industrial companies makes it possible to take account of the individual preferences of investors, as well as to give a complex statistical evaluation of the strategic economic benefits from M&A transactions. This method makes it possible to evaluate the probability and stability of the synergistic effect values in the context of increasing the competitiveness of Russian industrial enterprises in domestic and foreign markets.
Directory of Open Access Journals (Sweden)
Joana Maia Mendes
2015-06-01
This study evaluated the influence of pre-slaughter stress and slaughter method on the rigor mortis of tambaqui during storage in ice. Physiological stress responses of tambaqui were studied during the pre-slaughter period, which was divided into four stages: harvest, transport, recovery for 24 h, and recovery for 48 h. At the end of each stage, fish were sampled to characterize pre-slaughter stress through analysis of the plasma parameters glucose, lactate and ammonia, and the fish were then slaughtered by hypothermia or by asphyxia with carbon dioxide for the rigor mortis study. The physiological stress state of the fish was most acute immediately after transport, implying a faster onset of rigor mortis: 60 minutes for tambaqui slaughtered by hypothermia and 120 minutes for tambaqui slaughtered by carbon dioxide asphyxia. In the ponds, fish slaughtered immediately after harvest showed an intermediate stress state, with no difference in the time of onset of rigor mortis between slaughter methods (135 minutes). Fish that were allowed to recover from transport stress under simulated industry conditions entered rigor mortis later: at 225 minutes (24 h of recovery) and 255 minutes (48 h of recovery), likewise with no difference between the slaughter methods tested. Resolution of rigor mortis was fastest, at 12 days, in fish slaughtered after transport. In fish slaughtered immediately after harvest, resolution occurred at 16 days, and in fish slaughtered after recovery, at 20 days for 24 h of recovery from pre-slaughter stress and 24 days for 48 h of recovery, with no influence of the slaughter method on the resolution of rigor mortis. Thus, it is desirable that tambaqui destined for industry be slaughtered after a period of recovery from stress, with a view to increasing its…
International Nuclear Information System (INIS)
Valadkhani, Abbas; Roshdi, Israfil; Smyth, Russell
2016-01-01
We propose a multiplicative environmental data envelopment analysis (ME-DEA) approach to measure the performance of 46 countries that generate most of the world's carbon dioxide (CO₂) emissions. In the model, we combine economic (labour and capital), environmental (freshwater) and energy inputs with a desirable output (GDP) and three undesirable outputs (CO₂, methane and nitrous oxide emissions). We rank each country according to the optimum use of its resources employing a multiplicative extension of environmental DEA models. By computing partial efficiency scores for each input and output separately, we thus identify major sources of inefficiency for all sample countries. Based on the partial efficiency scores obtained from the model, we define aggregate economic, energy and environmental efficiency indexes for 2002, 2007 and 2011, reflecting points in time before and after the official enactment of the Kyoto Protocol. We find that for most countries efficiency scores increase over this period. In addition, there exists a positive relationship between economic and environmental efficiency, although, at the same time, our results suggest that environmental efficiency cannot be realized without first reaching a certain threshold of economic efficiency. We also find support for the Paradox of Plenty, whereby an abundance of natural and energy resources results in their inefficient use. - Highlights: • This study proposes a multiplicative extension of environmental DEA models. • We examine how countries utilize energy, labour, capital and freshwater over time. • We measure how efficiently countries minimize the emissions of greenhouse gases. • Results support the Paradox of Plenty among 46 countries in 2002, 2007 and 2011. • Countries richest in oil and gas exhibited the worst energy efficiency.
Technical and scale efficiency in public and private Irish nursing homes - a bootstrap DEA approach.
Ni Luasa, Shiovan; Dineen, Declan; Zieba, Marta
2016-10-27
This article provides methodological and empirical insights into the estimation of technical efficiency in the nursing home sector. Focusing on long-stay care and using primary data, we examine technical and scale efficiency in 39 public and 73 private Irish nursing homes by applying an input-oriented data envelopment analysis (DEA). We employ robust bootstrap methods to validate our nonparametric DEA scores and to integrate the effects of potential determinants in estimating the efficiencies. Both the homogenous and two-stage double bootstrap procedures are used to obtain confidence intervals for the bias-corrected DEA scores. Importantly, the application of the double bootstrap approach affords true DEA technical efficiency scores after adjusting for the effects of ownership, size, case-mix, and other determinants such as location, and quality. Based on our DEA results for variable returns to scale technology, the average technical efficiency score is 62%, and the mean scale efficiency is 88%, with nearly all units operating on the increasing returns to scale part of the production frontier. Moreover, based on the double bootstrap results, Irish nursing homes are less technically efficient, and more scale efficient than the conventional DEA estimates suggest. Regarding the efficiency determinants, in terms of ownership, we find that private facilities are less efficient than the public units. Furthermore, the size of the nursing home has a positive effect, and this reinforces our finding that Irish homes produce at increasing returns to scale. Also, notably, we find that a tendency towards quality improvements can lead to poorer technical efficiency performance.
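The program underlying the reported scores can be sketched as the standard input-oriented BCC (variable returns to scale) linear program; the sketch below uses illustrative names and omits the homogeneous and double-bootstrap bias-correction stages that the study layers on top:

```python
import numpy as np
from scipy.optimize import linprog

def dea_bcc_input(X, Y, j0):
    """Input-oriented BCC (VRS) efficiency of unit j0.
    X: inputs (m x n), Y: outputs (s x n); columns are decision-making units."""
    (m, n), s = X.shape, Y.shape[0]
    c = np.zeros(n + 1); c[0] = 1.0                       # minimise theta
    A_in = np.hstack([-X[:, [j0]], X])                    # X @ lam <= theta * x0
    A_out = np.hstack([np.zeros((s, 1)), -Y])             # Y @ lam >= y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, j0]])
    A_eq = np.hstack([[0.0], np.ones(n)]).reshape(1, -1)  # VRS: sum(lam) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return res.fun                                        # theta = 1: efficient

X = np.array([[2.0, 4.0, 6.0, 3.0]])   # one input, four toy nursing homes
Y = np.array([[1.0, 2.0, 3.0, 1.0]])   # one output
print([round(dea_bcc_input(X, Y, j), 3) for j in range(4)])  # unit 3: ~0.667
```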
Elementary calculus an infinitesimal approach
Keisler, H Jerome
2012-01-01
This first-year calculus book is centered around the use of infinitesimals, an approach largely neglected until recently for reasons of mathematical rigor. It exposes students to the intuition that originally led to the calculus, simplifying their grasp of the central concepts of derivatives and integrals. The author also teaches the traditional approach, giving students the benefits of both methods.Chapters 1 through 4 employ infinitesimals to quickly develop the basic concepts of derivatives, continuity, and integrals. Chapter 5 introduces the traditional limit concept, using approximation p
Heidari, A. A.; Moayedi, A.; Abbaspour, R. Ali
2017-09-01
Automated fare collection (AFC) systems are regarded as valuable resources for public transport planners. In this paper, AFC data are utilized to analyze and extract mobility patterns in a public transportation system. For this purpose, the smart card data are inserted into a proposed metaheuristic-based aggregation model and then converted to O-D matrices between stops, since the size of O-D matrices makes it difficult to reproduce the measured passenger flows precisely. The proposed strategy is applied to a case study from Haaglanden, Netherlands. In this research, the moth-flame optimizer (MFO) is utilized and evaluated for the first time as a new metaheuristic algorithm (MA) for estimating transit origin-destination matrices. The MFO is a novel, efficient swarm-based MA inspired by the celestial navigation of moths in nature. To investigate the capabilities of the proposed MFO-based approach, it is compared to methods that utilize the K-means algorithm, the gray wolf optimization algorithm (GWO) and the genetic algorithm (GA). The sum of the intra-cluster distances and the computational time of operations are considered as the evaluation criteria to assess the efficacy of the optimizers. The optimality of the solutions of the different algorithms is measured in detail. The travelers' behavior is analyzed to achieve a smooth and optimized transport system. The results reveal that the proposed MFO-based aggregation strategy can outperform the other evaluated approaches in terms of convergence tendency and optimality of the results, and that it can be utilized as an efficient approach to estimating transit O-D matrices.
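The core MFO loop can be sketched as follows; this is a compact reading of Mirjalili's published algorithm with illustrative parameters, where the paper would plug its O-D aggregation cost (the sum of intra-cluster distances) in place of the generic objective, and where the full algorithm also merges previous flames with the current population:

```python
import numpy as np

def mfo(objective, dim, n_moths=30, n_iter=200, lb=-10.0, ub=10.0, b=1.0):
    rng = np.random.default_rng(0)
    moths = rng.uniform(lb, ub, (n_moths, dim))
    best_pos, best_fit = None, np.inf
    for it in range(n_iter):
        fitness = np.array([objective(m) for m in moths])
        order = np.argsort(fitness)
        flames = moths[order].copy()               # best moths act as flames
        if fitness[order[0]] < best_fit:
            best_pos, best_fit = flames[0].copy(), fitness[order[0]]
        # flame count shrinks linearly so the swarm converges on the best
        n_flames = max(1, round(n_moths - it * (n_moths - 1) / n_iter))
        a = -1.0 - it / n_iter                     # spiral parameter, -1 -> -2
        for i in range(n_moths):
            j = min(i, n_flames - 1)               # flame assigned to moth i
            d = np.abs(flames[j] - moths[i])       # distance to its flame
            t = (a - 1.0) * rng.random(dim) + 1.0  # t drawn from [a, 1]
            # logarithmic spiral flight around the assigned flame
            moths[i] = d * np.exp(b * t) * np.cos(2.0 * np.pi * t) + flames[j]
        moths = np.clip(moths, lb, ub)
    return best_pos, best_fit

print(mfo(lambda x: float(np.sum(x ** 2)), dim=5))  # sphere test function
```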
Rigorous Performance Evaluation of Smartphone GNSS/IMU Sensors for ITS Applications
Directory of Open Access Journals (Sweden)
Vassilis Gikas
2016-08-01
With the rapid growth in smartphone technologies and improvements in their navigation sensors, an increasing amount of location information is now available, opening the road to the provision of new Intelligent Transportation System (ITS) services. Current smartphone devices embody miniaturized Global Navigation Satellite System (GNSS), Inertial Measurement Unit (IMU) and other sensors capable of providing user position, velocity and attitude. However, it is hard to characterize their actual positioning and navigation performance capabilities due to the disparate sensor and software technologies adopted among manufacturers and the high influence of environmental conditions, and therefore a unified certification process is missing. This paper presents the analysis results obtained from the assessment of two modern smartphones regarding their positioning accuracy (i.e., precision and trueness) capabilities (i.e., potential and limitations) based on a practical but rigorous methodological approach. Our investigation relies on the results of several vehicle tracking (i.e., cruising and maneuvering) tests, realized through comparing smartphone-obtained trajectories and kinematic parameters to those derived using a high-end GNSS/IMU system and advanced filtering techniques. Performance testing is undertaken for the HTC One S (Android) and iPhone 5s (iOS). Our findings indicate that the deviation of the smartphone locations from ground truth (trueness) deteriorates by a factor of two in obscured environments compared to those derived in open sky conditions. Moreover, it appears that the iPhone 5s produces relatively smaller and less dispersed error values compared to those computed for the HTC One S. Also, the navigation solution of the HTC One S appears to adapt faster to changes in environmental conditions, suggesting a somewhat different data filtering approach for the iPhone 5s. Testing the accuracy of the accelerometer and gyroscope sensors for a number of…
A Generic Model for Relative Adjustment Between Optical Sensors Using Rigorous Orbit Mechanics
Directory of Open Access Journals (Sweden)
B. Islam
2008-06-01
The classical calibration or space resection is a fundamental task in photogrammetry. Insufficient knowledge of the interior and exterior orientation parameters leads to unreliable results in the photogrammetric process. One of the earliest approaches used in photogrammetry was the plumb-line calibration method. This method is suitable for recovering the radial and decentering lens distortion coefficients, while the remaining interior (focal length and principal point coordinates) and exterior orientation parameters have to be determined by a complementary method. As the lens distortion remains very small, it is not considered among the interior orientation parameters in the present rigorous sensor model. There are several other available methods based on the photogrammetric collinearity equations, which consider the determination of the exterior orientation parameters, with no mention of the simultaneous determination of the interior orientation parameters. Normal space resection methods solve the problem using control points whose coordinates are known both in the image and object reference systems. The non-linearity of the model, the difficulty of point location in digital images, and the problem of identifying enough GPS-measured control points are the main drawbacks of the classical approaches. This paper addresses a mathematical model based on the fundamental assumption of collinearity of three points of two along-track stereo imagery sensors and an independent object point. Assuming this condition, it is possible to extract the exterior orientation (EO) parameters for a long strip and a single image together, both without and with the use of control points. Moreover, after extracting the EO parameters, the accuracies of the satellite data products are compared using a single control point and using no control points.
Energy Technology Data Exchange (ETDEWEB)
Sant' ana, Paulo Henrique de Mello [Universidade Federal do ABC (UFABC), Santo Andre, SP (Brazil). Centro de Engenharia e Ciencias Sociais Aplicadas. Nucleo Interdisciplinar de Planejamento Energetico; Bajay, Sergio Valdir [Universidade Estadual de Campinas (NIPE/UNICAMP), SP (Brazil). Fac. de Engenharia Mecanica. Nucleo Interdisciplinar de Planejamento Energetico
2010-07-01
A modern approach often adopted in the international literature holds that the government's role is to create favorable conditions for improving energy efficiency in industry, whether through policies, programs or actions. This article's main objective is to describe the main programs for promoting energy efficiency in industry in Brazil and in other countries, and then to propose a new approach for the management and development of energy efficiency programs for Brazilian industry. The creation of an executive agency, connected to the MME and with strong ties to ELETROBRAS and PETROBRAS, could effectively manage the enormous resources that are needed to mobilize energy efficiency programs as real alternatives to programs for additional expansion of energy supply. The creation of energy assessment centers, along with an energy efficiency program for energy-intensive industry, would help in promoting energy efficiency in industry. These actions would likely spill over into other industries and would assist in achieving optimal energy management standards in industry, consistent with ISO 9000 and ISO 14000, as used in countries like the USA and Sweden. (author)
A Rigorous Treatment of Energy Extraction from a Rotating Black Hole
Finster, F.; Kamran, N.; Smoller, J.; Yau, S.-T.
2009-05-01
The Cauchy problem is considered for the scalar wave equation in the Kerr geometry. We prove that by choosing a suitable wave packet as initial data, one can extract energy from the black hole, thereby putting superradiance, the wave analogue of the Penrose process, into a rigorous mathematical framework. We quantify the maximal energy gain. We also compute the infinitesimal change of mass and angular momentum of the black hole, in agreement with Christodoulou's result for the Penrose process. The main mathematical tool is our previously derived integral representation of the wave propagator.
Energy-Efficient Distributed Filtering in Sensor Networks: A Unified Switched System Approach.
Zhang, Dan; Shi, Peng; Zhang, Wen-An; Yu, Li
2016-04-21
This paper is concerned with the energy-efficient distributed filtering in sensor networks, and a unified switched system approach is proposed to achieve this goal. For the system under study, the measurement is first sampled under nonuniform sampling periods, then the local measurement elements are selected and quantized for transmission. Then, the transmission rate is further reduced to save constrained power in sensors. Based on the switched system approach, a unified model is presented to capture the nonuniform sampling, the measurement size reduction, the transmission rate reduction, the signal quantization, and the measurement missing phenomena. Sufficient conditions are obtained such that the filtering error system is exponentially stable in the mean-square sense with a prescribed H∞ performance level. Both simulation and experiment studies are given to show the effectiveness of the proposed new design technique.
Method for Determining Volumetric Efficiency and Its Experimental Validation
Directory of Open Access Journals (Sweden)
Ambrozik Andrzej
2017-12-01
Modern means of transport are basically powered by piston internal combustion engines. Increasingly rigorous demands are placed on IC engines in order to minimise the detrimental impact they have on the natural environment, and this stimulates the development of research on piston internal combustion engines. The research involves experimental and theoretical investigations carried out using computer technologies. While being filled, the cylinder is considered to be an open thermodynamic system, in which non-stationary processes occur. To calculate the thermodynamic parameters of the engine operating cycle based on the comparison of cycles, it is necessary to know the mean constant value of cylinder pressure throughout this process. Because of the character of the in-cylinder pressure pattern and the difficulties in determining the pressure experimentally, a novel method for the determination of this quantity is presented in this paper. The new approach uses an iteration method. In the method developed for determining the volumetric efficiency, the following equations are employed: the law of conservation of the amount of substance, the first law of thermodynamics for an open system, dependences for changes in the cylinder volume vs. the crankshaft rotation angle, and the equation of state. The results of calculations performed with this method were validated by means of experimental investigations carried out for a selected engine on an engine test bench. A satisfactory congruence of computational and experimental results as regards determining the volumetric efficiency was obtained. The method for determining the volumetric efficiency presented in this paper can be used to investigate the processes taking place in the cylinder of an IC engine.
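As a reference point for what the iterative method ultimately determines, the conventional four-stroke definition of volumetric efficiency from a measured air mass flow is (symbols are ours, not the paper's notation):

```python
# Volumetric efficiency = trapped air mass per cycle over the mass that
# would fill the displaced volume V_d at a reference density rho_ref.
def volumetric_efficiency(m_dot_air, rho_ref, V_d, rpm):
    cycles_per_s = rpm / 60.0 / 2.0          # four-stroke: one intake per 2 revs
    m_per_cycle = m_dot_air / cycles_per_s   # trapped mass per cycle [kg]
    return m_per_cycle / (rho_ref * V_d)

# e.g. a 1.6 L engine drawing 0.032 kg/s of air at 3000 rpm, ambient 1.2 kg/m^3
print(volumetric_efficiency(0.032, 1.2, 1.6e-3, 3000))   # ~0.667
```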
Katz, C M
1991-04-01
Sliding-scale insulin therapy is seldom the best way to treat hospitalized diabetic patients. In the few clinical situations in which it is appropriate, close attention to details and solidly based scientific principles is absolutely necessary. Well-organized alternative approaches to insulin therapy usually offer greater efficiency and effectiveness.
Guarino, Heidi; Yoder, Shaun
2015-01-01
"Seizing the Future: How Ohio's Career and Technical Education Programs Fuse Academic Rigor and Real-World Experiences to Prepare Students for College and Work," demonstrates Ohio's progress in developing strong policies for career and technical education (CTE) programs to promote rigor, including college- and career-ready graduation…
Li, Hao; Dong, Siping
2015-01-01
China has long been stuck in applying traditional data envelopment analysis (DEA) models to measure the technical efficiency of public hospitals without bias correction of the efficiency scores. In this article, we introduce the Bootstrap-DEA approach from the international literature to analyze the technical efficiency of public hospitals in Tianjin (China) and try to improve the application of this method for benchmarking and inter-organizational learning. It is found that the bias-corrected efficiency scores of Bootstrap-DEA differ significantly from those of the traditional Banker, Charnes, and Cooper (BCC) model, which means that Chinese researchers need to update their DEA models for more scientific calculation of hospital efficiency scores. Our research has helped shorten the gap between China and the international community in the relative efficiency measurement and improvement of hospitals. It is suggested that Bootstrap-DEA be widely applied in future research to measure the relative efficiency and productivity of Chinese hospitals, so as to better serve efficiency improvement and related decision making.
Market efficiency of oil spot and futures: A mean-variance and stochastic dominance approach
Energy Technology Data Exchange (ETDEWEB)
Lean, Hooi Hooi [Economics Program, School of Social Sciences, Universiti Sains Malaysia (Malaysia); McAleer, Michael [Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam, and, Tinbergen Institute (Netherlands); Wong, Wing-Keung, E-mail: awong@hkbu.edu.h [Department of Economics, Hong Kong Baptist University (Hong Kong)
2010-09-15
This paper examines the market efficiency of oil spot and futures prices by using both mean-variance (MV) and stochastic dominance (SD) approaches. Based on the West Texas Intermediate crude oil data for the sample period 1989-2008, we find no evidence of any MV and SD relationships between oil spot and futures indices. This implies that there is no arbitrage opportunity between these two markets, spot and futures do not dominate one another, investors are indifferent between investing in spot or futures, and the spot and futures oil markets are efficient and rational. The empirical findings are robust to each sub-period before and after the crises for different crises, and also to portfolio diversification.
Effect of muscle restraint on sheep meat tenderness with rigor mortis at 18°C.
Devine, Carrick E; Payne, Steven R; Wells, Robyn W
2002-02-01
The effect on shear force of skeletal restraint and of removing muscles from lamb m. longissimus thoracis et lumborum (LT) immediately after slaughter and electrical stimulation was investigated at a rigor temperature of 18°C (n=15). The temperature of 18°C was achieved by chilling electrically stimulated sheep carcasses in air at 12°C, with an air flow of 1-1.5 m s⁻¹. In other groups, the muscle was removed at 2.5 h post-mortem and either wrapped or left non-wrapped before being placed back on the carcass to follow the carcass cooling regime. Following rigor mortis, the meat was aged for 0, 16, 40 and 65 h at 15°C and frozen. For the non-stimulated samples, the meat was aged for 0, 12, 36 and 60 h before being frozen. The frozen meat was cooked to 75°C in an 85°C water bath and shear force values were obtained from a 1 × 1 cm cross-section. Commencement of ageing was considered to take place at rigor mortis, and this was taken as zero-aged meat. There were no significant differences in the rate of tenderisation and initial shear force for all treatments. The cook loss of 23% was similar for all wrapped and non-wrapped situations, and the values decreased slightly with longer ageing durations. Wrapping was shown to mimic meat left intact on the carcass, as it prevented significant pre-rigor shortening. Such techniques allow muscles to be removed and placed in a controlled temperature environment to enable precise studies of ageing processes.
An Efficient SF-ISF Approach for the Slepian-Wolf Source Coding Problem
Directory of Open Access Journals (Sweden)
Tu Zhenyu
2005-01-01
Full Text Available A simple but powerful scheme exploiting the binning concept for asymmetric lossless distributed source coding is proposed. The novelty of the proposed scheme is the introduction of a syndrome former (SF) in the source encoder and an inverse syndrome former (ISF) in the source decoder, which efficiently exploit an existing linear channel code without the need to modify the code structure or the decoding strategy. For most channel codes, the construction of SF-ISF pairs is a light task. For parallelly and serially concatenated codes, and particularly parallel and serial turbo codes where this appears less obvious, an efficient way of constructing linear-complexity SF-ISF pairs is demonstrated. It is shown that the proposed SF-ISF approach is simple, provably optimal, and generally applicable to any linear channel code. Simulation using conventional and asymmetric turbo codes demonstrates a compression rate that is only 0.06 bit/symbol from the theoretical limit, which is among the best results reported so far.
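The SF-ISF mechanics can be shown at toy scale. The sketch below is an illustration under stated assumptions, not the paper's turbo-code construction: it uses a (7,4) Hamming code, where the syndrome former compresses x to its 3-bit syndrome (the bin index) and the decoder recovers x from side information y that differs from x in at most one bit.

```python
import numpy as np

# Toy SF-ISF construction for asymmetric Slepian-Wolf coding with a (7,4)
# Hamming code (concept illustration; the paper targets turbo codes).
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])   # column j is the binary number j+1

def sf(x):
    """Syndrome former: compress x to its 3-bit syndrome (bin index)."""
    return (H @ x) % 2

def isf(s):
    """Inverse syndrome former: some vector t with H t = s.  Here: the
    single-bit pattern whose position is the syndrome read as binary."""
    t = np.zeros(7, dtype=int)
    pos = 4 * s[0] + 2 * s[1] + s[2]
    if pos:
        t[pos - 1] = 1
    return t

def decode(s, y):
    """Recover x from its syndrome s and side information y (<= 1 bit apart)."""
    t = isf(s)
    z = (y + t) % 2                  # shift y into the coset of x
    e = isf(sf(z))                   # nearest-codeword error pattern
    return (z + e + t) % 2           # correct, then shift back

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 7)
y = x.copy()
y[rng.integers(0, 7)] ^= 1           # correlated side info: one bit flipped
x_hat = decode(sf(x), y)
print("exact recovery:", np.array_equal(x, x_hat))
```

The point of the construction is visible even at this scale: the channel code is used as-is, and only the SF/ISF wrappers are added around the standard decoder.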
Efficiency evaluation of a safety department in a construction company-A case study: A DEA approach
Directory of Open Access Journals (Sweden)
Solomon Odeyale
2015-01-01
Full Text Available Data Envelopment Analysis (DEA) is a decision-making tool based on linear programming for measuring the relative efficiency of a set of comparable units. DEA helps identify the sources and level of inefficiency for each of the inputs and outputs. This approach has been used to evaluate the efficiency of the safety department in five construction companies. A constant returns-to-scale (CRS) model with three inputs (safety workforce, safety training, and safety budget) and two outputs (perfect days and uptime) was developed. The model indicated the improvements required in an inefficient unit's inputs and outputs to make it efficient, identifying which factors are responsible for low performance and which should be improved in order to raise the efficiency of the safety department. The results show that the safety departments of Firms A, B and D are efficient, whereas Firms C and E can improve their efficiency by reducing inputs by up to 3.34% and 6.05%, respectively. The inputs identified for reduction were the number of safety staff and the safety budget for Firms C and E, respectively.
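A minimal sketch of the model type described, an input-oriented constant returns-to-scale (CCR) DEA solved as a linear program; the five-firm numbers below are placeholders, not the study's data.

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CRS (CCR) DEA.  Synthetic 3-input x 2-output data for five
# firms (placeholder numbers, not the paper's).
X = np.array([[12, 15, 18, 10, 20],          # safety workforce
              [40, 55, 70, 35, 80],          # safety training hours
              [3.0, 4.0, 5.5, 2.5, 6.0]])    # safety budget
Y = np.array([[300, 340, 310, 280, 330],     # perfect days
              [95, 96, 93, 94, 92]])         # uptime

m, n = X.shape
s = Y.shape[0]
for o in range(n):
    c = np.r_[1.0, np.zeros(n)]                       # minimise theta
    A_ub = np.vstack([np.c_[-X[:, [o]], X],           # X lam <= theta x_o
                      np.c_[np.zeros((s, 1)), -Y]])   # Y lam >= y_o
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    theta = res.x[0]
    cut = (1 - theta) * 100
    print(f"firm {chr(65 + o)}: efficiency = {theta:.3f}"
          + (f", inputs reducible by {cut:.2f}%" if theta < 0.999 else " (efficient)"))
```

The `(1 - theta)` term is exactly the proportional input reduction that the abstract reports for the inefficient firms.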
Real analysis a constructive approach
Bridger, Mark
2012-01-01
A unique approach to analysis that lets you apply mathematics across a range of subjects This innovative text sets forth a thoroughly rigorous modern account of the theoretical underpinnings of calculus: continuity, differentiability, and convergence. Using a constructive approach, every proof of every result is direct and ultimately computationally verifiable. In particular, existence is never established by showing that the assumption of non-existence leads to a contradiction. The ultimate consequence of this method is that it makes sense, not just to math majors but also to students from a
An efficient approach to unstructured mesh hydrodynamics on the cell broadband engine (u)
Energy Technology Data Exchange (ETDEWEB)
Ferenbaugh, Charles R [Los Alamos National Laboratory
2010-12-14
Unstructured mesh physics for the Cell Broadband Engine (CBE) has received little or no attention to date, largely because the CBE architecture poses particular challenges for unstructured mesh algorithms. SPU memory management strategies such as data preloading cannot be applied to the irregular memory storage patterns of unstructured meshes; and the SPU vector instruction set does not support the indirect addressing needed by connectivity arrays. This paper presents an approach to unstructured mesh physics that addresses these challenges, by creating a new mesh data structure and reorganizing code to give efficient CBE performance. The approach is demonstrated on the FLAG production hydrodynamics code using standard test problems, and results show an average speedup of more than 5x over the original code.
Li, Weina; Li, Xuesong; Zhu, Wei; Li, Changxu; Xu, Dan; Ju, Yong; Li, Guangtao
2011-07-21
Based on a topochemical approach, a strategy for efficiently producing main-chain poly(bile acid)s in the solid state was developed. This strategy allows for facile and scalable synthesis of main-chain poly(bile acid)s not only with high molecular weights, but also with quantitative conversions and yields.
DEFF Research Database (Denmark)
Lasrado, Lester Allan; Vatrapu, Ravi; Andersen, Kim Normann
2016-01-01
Despite being widely accepted and applied across research domains, maturity models have been criticized for lacking academic rigor; methodologically rigorous and empirically grounded or tested maturity models are quite rare. Attempting to close this gap, we adopt a set-theoretic approach by applying the Necessary Condition Analysis (NCA) technique to derive maturity stages and stage boundary conditions. The ontology is to view stages (boundaries) in maturity models as a collection of necessary conditions. Using social media maturity data, we demonstrate the strength of our approach and evaluate some of the arguments presented by previous conceptually focused social media maturity models.
A statistical approach to plasma profile analysis
International Nuclear Information System (INIS)
Kardaun, O.J.W.F.; McCarthy, P.J.; Lackner, K.; Riedel, K.S.
1990-05-01
A general statistical approach to the parameterisation and analysis of tokamak profiles is presented. The modelling of the profile dependence on both the radius and the plasma parameters is discussed, and pertinent classical as well as robust methods of estimation are reviewed. Special attention is given to statistical tests for discriminating between the various models, and to the construction of confidence intervals for the parameterised profiles and the associated global quantities. The statistical approach is shown to provide a rigorous basis for the empirical testing of plasma profile invariance. (orig.)
Virtue-based Approaches to Professional Ethics: a Plea for More Rigorous Use of Empirical Science
Directory of Open Access Journals (Sweden)
Georg Spielthenner
2017-08-01
Full Text Available Until recently, the method of professional ethics has been largely principle-based. But the failure of this approach to take sufficient account of the character of professionals has led to a revival of virtue ethics. The kind of professional virtue ethics that I am concerned with in this paper is teleological in that it relates the virtues of a profession to the ends of this profession. My aim is to show how empirical research can (in addition to philosophical inquiry) be used to develop virtue-based accounts of professional ethics, and that such empirically well-informed approaches are more convincing than traditional kinds of professional virtue ethics. The paper is divided into four sections. In the first, I outline the structure of a teleological approach to virtue ethics. In Section 2, I show that empirical research can play an essential role in professional ethics by emphasizing the difference between conceptual and empirical matters. Section 3 demonstrates the relevance of virtues in professional life; and the last section is concerned with some meta-ethical issues that are raised by a teleological account of professional virtues.
International Nuclear Information System (INIS)
García-Valenzuela, A; Contreras-Tello, H; Márquez-Islas, R; Sánchez-Pérez, C
2013-01-01
We derive an optical model for the light intensity distribution around the critical angle in a standard Abbe refractometer when used on absorbing homogeneous fluids. The model is developed using rigorous electromagnetic optics. The obtained formula is very simple and can be used suitably in the analysis and design of optical sensors relying on Abbe-type refractometry.
A rigorous phenomenological analysis of the ππ scattering lengths
International Nuclear Information System (INIS)
Caprini, I.; Dita, P.; Sararu, M.
1979-11-01
The constraining power of the present experimental data, combined with the general theoretical knowledge about ππ scattering, upon the scattering lengths of this process, is investigated by means of a rigorous functional method. We take as input the experimental phase shifts and make no hypotheses about the high energy behaviour of the amplitudes, using only absolute bounds derived from axiomatic field theory and exact consequences of crossing symmetry. In the simplest application of the method, involving only the π⁰π⁰ S-wave, we explored numerically a number of values proposed by various authors for the scattering lengths a₀ and a₂ and found that no one appears to be especially favoured. (author)
An Efficient Similarity Digests Database Lookup - A Logarithmic Divide & Conquer Approach
Directory of Open Access Journals (Sweden)
Frank Breitinger
2014-09-01
Full Text Available Investigating seized devices within digital forensics represents a challenging task due to the increasing amount of data. Common procedures utilize automated file identification, which reduces the amount of data an investigator has to examine manually. In recent years the research field of approximate matching has emerged to detect similar data. However, if n denotes the number of similarity digests in a database, then the lookup for a single similarity digest is of complexity O(n). This paper presents a concept to extend existing approximate matching algorithms, which reduces the lookup complexity from O(n) to O(log(n)). Our proposed approach is based on the well-known divide and conquer paradigm and builds a Bloom filter-based tree data structure in order to enable an efficient lookup of similarity digests. Further, it is demonstrated that the presented technique is highly scalable, operating a trade-off between storage requirements and computational efficiency. We perform a theoretical assessment based on recently published results and reasonable magnitudes of input data, and show that the complexity reduction achieved by the proposed technique yields a 220-fold acceleration of look-up costs.
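The divide-and-conquer structure can be sketched directly. The Python below builds a binary tree in which every node carries a Bloom filter of its entire subtree, so a lookup descends only into branches whose filter matches. For simplicity the leaf test here is exact equality, where the paper's setting would run an approximate-matching comparison; the filter size and hash count are arbitrary choices.

```python
import hashlib

class Bloom:
    """Minimal Bloom filter stored in a single integer bit field."""
    def __init__(self, m=16384, k=3):
        self.m, self.k, self.bits = m, k, 0
    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p
    def query(self, item):
        return all((self.bits >> p) & 1 for p in self._positions(item))

def build(digests):
    """Binary tree; every node carries a Bloom filter of its whole subtree."""
    node = {"bf": Bloom()}
    for d in digests:
        node["bf"].add(d)
    if len(digests) == 1:
        node["digest"] = digests[0]
    else:
        mid = len(digests) // 2
        node["left"], node["right"] = build(digests[:mid]), build(digests[mid:])
    return node

def lookup(node, d):
    """Descend only into subtrees whose filter matches: O(log n) expected."""
    if not node["bf"].query(d):
        return False
    if "digest" in node:
        return node["digest"] == d        # the paper would compare digests here
    return lookup(node["left"], d) or lookup(node["right"], d)

db = [f"digest-{i:05d}" for i in range(1024)]
root = build(db)
print(lookup(root, "digest-00042"), lookup(root, "digest-99999"))
```

Shrinking the per-node filters lowers storage at the price of more false-positive descents, which is the storage/computation trade-off the abstract mentions.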
Evans, Mark I; Krantz, David A; Hallahan, Terrence; Sherwin, John; Britt, David W
2013-01-01
To determine if nuchal translucency (NT) quality correlates with the extent to which clinics vary in rigor and quality control. We correlated NT performance quality (bias and precision) of 246,000 patients with two alternative measures of clinic culture - % of cases for whom nasal bone (NB) measurements were performed and % of requisitions correctly filled for race-ethnicity and weight. When requisition errors occurred in 5% (33%), the curve lowered to 0.93 MoM (p 90%, MoM was 0.99 compared to those quality exists independent of individual variation in NT quality, and two divergent indices of program rigor are associated with NT quality. Quality control must be program wide, and to effect continued improvement in the quality of NT results across time, the cultures of clinics must become a target for intervention. Copyright © 2013 S. Karger AG, Basel.
Schwarz, Karsten; Rieger, Heiko
2013-03-01
We present an efficient Monte Carlo method to simulate reaction-diffusion processes with spatially varying particle annihilation or transformation rates as it occurs for instance in the context of motor-driven intracellular transport. Like Green's function reaction dynamics and first-passage time methods, our algorithm avoids small diffusive hops by propagating sufficiently distant particles in large hops to the boundaries of protective domains. Since for spatially varying annihilation or transformation rates the single particle diffusion propagator is not known analytically, we present an algorithm that generates efficiently either particle displacements or annihilations with the correct statistics, as we prove rigorously. The numerical efficiency of the algorithm is demonstrated with an illustrative example.
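The statistical problem of spatially varying annihilation can be illustrated with the standard "thinning" construction, which is exact for the annihilation time: propose candidate events at a bounding rate k_max, propagate free diffusion between proposals, and accept each proposal with probability k(x)/k_max. This hedged sketch omits the protective-domain machinery that gives the authors' algorithm its efficiency; all rates and profiles are invented.

```python
import numpy as np

# Thinning sketch for 1D diffusion with spatially varying annihilation rate
# k(x) <= k_max (not the authors' protective-domain algorithm, which also
# replaces small hops by large exact jumps to domain boundaries).
rng = np.random.default_rng(2)
D = 1.0                       # diffusion constant
k_max = 5.0                   # global bound on the annihilation rate

def k(x):
    """Example rate profile: annihilation localised around x = 1."""
    return k_max * np.exp(-(x - 1.0) ** 2 / 0.1)

def simulate_one(x0=0.0, t_end=2.0):
    """Return the annihilation time of one particle, or None if it survives."""
    x, t = x0, 0.0
    while True:
        # next candidate event from a Poisson process with rate k_max
        tau = rng.exponential(1.0 / k_max)
        if t + tau > t_end:
            return None
        # propagate the free diffusion exactly over the interval tau
        x += rng.normal(0.0, np.sqrt(2 * D * tau))
        t += tau
        # accept the candidate with probability k(x)/k_max (thinning step)
        if rng.random() < k(x) / k_max:
            return t

times = [simulate_one() for _ in range(20000)]
frac = sum(tt is not None for tt in times) / len(times)
print(f"annihilated within t_end: {frac:.3f}")
```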
Qian, Ma; Ma, Jie
2009-06-07
Fletcher's spherical substrate model [J. Chem. Phys. 29, 572 (1958)] is a basic model for understanding the heterogeneous nucleation phenomena in nature. However, a rigorous thermodynamic formulation of the model has been missing due to the significant complexities involved. This has not only left the classical model deficient but also likely obscured its other important features, which would otherwise have helped to better understand and control heterogeneous nucleation on spherical substrates. This work presents a rigorous thermodynamic formulation of Fletcher's model using a novel analytical approach and discusses the new perspectives derived. In particular, it is shown that the use of an intermediate variable, a selected geometrical angle or pseudocontact angle between the embryo and spherical substrate, revealed extraordinary similarities between the first derivatives of the free energy change with respect to embryo radius for nucleation on spherical and flat substrates. Enlightened by the discovery, it was found that there exists a local maximum in the difference between the equivalent contact angles for nucleation on spherical and flat substrates due to the existence of a local maximum in the difference between the shape factors for nucleation on spherical and flat substrate surfaces. This helps to understand the complexity of the heterogeneous nucleation phenomena in a practical system. Also, it was found that the unfavorable size effect occurs primarily when R < 5r* (R: radius of the substrate; r*: critical embryo radius) and diminishes rapidly with increasing R/r* beyond R/r* = 5. This finding provides a baseline for controlling the size effects in heterogeneous nucleation.
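For reference, Fletcher's shape factor in its textbook form, f(m, x) with m = cos θ and x = R/r*, can be evaluated directly. The short sketch below reproduces the qualitative size effect described above: rapid variation below R/r* ≈ 5 and little change beyond it. The contact angle is an arbitrary illustrative choice.

```python
import numpy as np

# Textbook Fletcher (1958) shape factor for nucleation on a convex spherical
# substrate: m = cos(theta), x = R / r*.
def fletcher_f(m, x):
    g = np.sqrt(1.0 + x**2 - 2.0 * m * x)
    a = (x - m) / g
    return 0.5 * (1.0 + ((1.0 - m * x) / g) ** 3
                  + x**3 * (2.0 - 3.0 * a + a**3)
                  + 3.0 * m * x**2 * (a - 1.0))

m = np.cos(np.deg2rad(60.0))          # contact angle of 60 degrees (assumed)
for x in [0.5, 1, 2, 5, 10, 100]:
    print(f"R/r* = {x:5}: f = {fletcher_f(m, x):.4f}")
# f falls from 1 (vanishing substrate) toward the flat-substrate value
# (2 + m)(1 - m)^2 / 4; the change is rapid below R/r* ~ 5 and marginal
# beyond it, consistent with the size effect discussed in the abstract.
```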
Metafrontier energy efficiency with CO2 emissions and its convergence analysis for China
International Nuclear Information System (INIS)
Li, Ke; Lin, Boqiang
2015-01-01
This paper measures the energy efficiency performance with carbon dioxide (CO2) emissions in 30 provinces in China during the period 1997-2011 using a meta-frontier framework with an improved directional distance function (DDF). We construct a new environmental production possibility set by combining the super-efficiency and sequential data envelopment analysis (DEA) models to avoid the "discriminating power problem" and "technical regress" when evaluating efficiency by DDF. This set is then used in a meta-frontier framework to reflect the technology heterogeneities across east, central and west China. The results indicate that eastern China achieved the highest progress in efficiency relative to the meta-frontier, followed by western and central China. By focusing on technology gaps, we offer suggestions for the different groups based on group-frontier and meta-frontier analyses. The inefficiency can be attributed to managerial failure for eastern and western China, and to technological differences for central China. The convergence analysis shows that energy and CO2 emission governance will produce negative effects on economic growth, and that it is suitable and acceptable to introduce rigorous environmental measures in eastern China. - Highlights: • We present an improved DEA model to calculate the directional distance function. • The improved directional distance function is combined with a meta-frontier analysis. • The reasons for energy inefficiency vary across regions. • Convergence analysis means east China should introduce rigorous environmental policy
A Differential Geometric Approach to Nonlinear Filtering: The Projection Filter
Brigo, D.; Hanzon, B.; LeGland, F.
1998-01-01
This paper presents a new and systematic method of approximating exact nonlinear filters with finite dimensional filters, using the differential geometric approach to statistics. The projection filter is defined rigorously in the case of exponential families. A convenient exponential family is
Personalization of models with many model parameters: an efficient sensitivity analysis approach.
Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T
2015-10-01
Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
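The screening step can be illustrated compactly. Below is a bare-bones Morris elementary-effects computation (a radial one-at-a-time design) applied to a toy four-parameter model; the model, trajectory count and grid are assumptions, and the second, gPCE-based variance decomposition of the retained parameters is not shown.

```python
import numpy as np

# Step 1 of the two-step approach: Morris elementary-effects screening on a
# toy model with two influential and two near-inert inputs (placeholders).
rng = np.random.default_rng(3)

def model(x):
    return 5 * x[0] + 2 * x[1] ** 2 + 0.01 * x[2] + 0.001 * x[2] * x[3]

d, r, levels = 4, 50, 8
delta = levels / (2.0 * (levels - 1))     # standard Morris step size
ee = np.zeros((r, d))
for t in range(r):
    # random base point on the grid, then perturb one coordinate at a time
    base = rng.integers(0, levels // 2, d) / (levels - 1)
    for i in range(d):
        xp = base.copy()
        xp[i] += delta
        ee[t, i] = (model(xp) - model(base)) / delta

mu_star = np.abs(ee).mean(axis=0)         # mu*: mean |elementary effect|
sigma = ee.std(axis=0)                    # spread: nonlinearity/interactions
for i in range(d):
    print(f"x{i}: mu* = {mu_star[i]:8.4f}, sigma = {sigma[i]:.4f}")
# only parameters with large mu* proceed to the expensive variance-based step
```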
A genetic algorithm approach to optimization for the radiological worker allocation problem
International Nuclear Information System (INIS)
Yan Chen; Masakuni Narita; Masashi Tsuji; Sangduk Sa
1996-01-01
The worker allocation optimization problem in radiological facilities inevitably involves various types of requirements and constraints relevant to radiological protection and labor management. Some of these goals and constraints are not amenable to a rigorous mathematical formulation. Conventional methods for this problem rely heavily on sophisticated algebraic or numerical algorithms, which cause difficulties in the search for optimal solutions in the search space of worker allocation optimization problems. Genetic algorithms (GAs) are stochastic search algorithms introduced by J. Holland in the 1970s based on ideas and techniques from genetic and evolutionary theories. The most striking characteristic of GAs is the large flexibility allowed in the formulation of the optimization problem and the process of the search for the optimal solution. In the formulation, it is not necessary to define the problem in rigorous mathematical terms, as required by conventional methods. Furthermore, by designing a model of evolution for the optimization problem, the optimal solution can be sought efficiently with computationally simple manipulations, without highly complex mathematical algorithms. We reported a GA approach to the worker allocation problem in radiological facilities in a previous study. In this study, two types of hard constraints were employed to reduce the huge search space, where the optimal solution is sought in such a way as to satisfy as many soft constraints as possible. It was demonstrated that the proposed evolutionary method could provide the optimal solution efficiently compared with conventional methods. However, although the employed hard constraints could localize the search space into a very small region, they introduced complexity into the designed genetic operators and demanded additional computational effort. In this paper, we propose a simplified evolutionary model with less restrictive hard constraints and make comparisons between
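A minimal GA of the kind described, with soft constraints folded into the fitness as penalties and the usual selection, crossover, mutation and elitism operators, can be sketched as follows. The dose matrix, limits and GA settings are invented for illustration and are not from the study.

```python
import numpy as np

# Toy GA for a worker-allocation flavour of problem: assign each of T tasks
# to one of W workers, minimising total dose while penalising (soft
# constraint) workers whose accumulated dose exceeds a limit.
rng = np.random.default_rng(4)
W, T = 6, 20
dose = rng.uniform(0.2, 2.0, size=(W, T))   # dose worker w receives on task t
limit = 4.0                                  # per-worker soft dose limit

def fitness(chrom):
    task_dose = dose[chrom, np.arange(T)]
    per_worker = np.bincount(chrom, weights=task_dose, minlength=W)
    penalty = np.clip(per_worker - limit, 0, None).sum()
    return -(task_dose.sum() + 10.0 * penalty)   # higher is better

pop = rng.integers(0, W, size=(60, T))           # chromosome: task -> worker
for gen in range(200):
    fit = np.array([fitness(c) for c in pop])
    # tournament selection
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = pop[np.where(fit[i] > fit[j], i, j)]
    # uniform crossover between consecutive parents
    mask = rng.random((len(pop), T)) < 0.5
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    # point mutation
    mut = rng.random(children.shape) < 0.02
    children[mut] = rng.integers(0, W, mut.sum())
    children[0] = pop[np.argmax(fit)]            # elitism
    pop = children

best = pop[np.argmax([fitness(c) for c in pop])]
print("best fitness:", fitness(best))
```

Hard constraints would instead be enforced inside the operators (e.g. repairing infeasible children), which is exactly the added operator complexity the abstract alludes to.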
Bespalov, Vadim; Udina, Natalya; Samarskaya, Natalya
2017-10-01
Use of wind energy is one of the promising directions among renewable energy sources. The article reviews a methodological approach to the simulation and selection of ecologically efficient and energetically economical wind turbines at the design stage, taking into account the characteristics of the natural-territorial complex and the peculiarities of the anthropogenic load in the territory of the wind turbine location.
Butler Ellis, M Clare; Kennedy, Marc C; Kuster, Christian J; Alanis, Rafael; Tuck, Clive R
2018-03-17
The BREAM (Bystander and Resident Exposure Assessment Model) (Kennedy et al. in BREAM: A probabilistic bystander and resident exposure assessment model of spray drift from an agricultural boom sprayer. Comput Electron Agric 2012;88:63-71) for bystander and resident exposure to spray drift from boom sprayers has recently been incorporated into the European Food Safety Authority (EFSA) guidance for determining non-dietary exposures of humans to plant protection products. The component of BREAM, which relates airborne spray concentrations to bystander and resident dermal exposure, has been reviewed to identify whether it is possible to improve this and its description of variability captured in the model. Two approaches have been explored: a more rigorous statistical analysis of the empirical data and a semi-mechanistic model based on established studies combined with new data obtained in a wind tunnel. A statistical comparison between field data and model outputs was used to determine which approach gave the better prediction of exposures. The semi-mechanistic approach gave the better prediction of experimental data and resulted in a reduction in the proposed regulatory values for the 75th and 95th percentiles of the exposure distribution.
Rigorous lower bound on the dynamic critical exponent of some multilevel Swendsen-Wang algorithms
International Nuclear Information System (INIS)
Li, X.; Sokal, A.D.
1991-01-01
We prove the rigorous lower bound z_exp ≥ α/ν for the dynamic critical exponent of a broad class of multilevel (or "multigrid") variants of the Swendsen-Wang algorithm. This proves that such algorithms do suffer from critical slowing down. We conjecture that such algorithms in fact lie in the same dynamic universality class as the standard Swendsen-Wang algorithm
Goodman, Lisa A.; Epstein, Deborah; Sullivan, Cris M.
2018-01-01
Programs for domestic violence (DV) victims and their families have grown exponentially over the last four decades. The evidence demonstrating the extent of their effectiveness, however, often has been criticized as stemming from studies lacking scientific rigor. A core reason for this critique is the widespread belief that credible evidence can…
Energy Technology Data Exchange (ETDEWEB)
Lee, S. [Korea Inst. of Energy Research, Daejeon (Korea, Republic of). Energy Policy Research Division; Mogi, G. [Tokyo Univ., (Japan). Dept. of Technology Management for Innovation, Graduate School of Engineering; Kim, J. [Korea Inst. of Energy Research, Daejeon (Korea, Republic of)
2009-07-01
As a provider of national energy security, the Korean Institute of Energy Research is seeking to establish a long term strategic technology roadmap for a hydrogen-based economy. This paper addressed 5 criteria regarding the strategy, notably economic impact, commercial potential, inner capacity, technical spinoff, and development cost. The fuzzy AHP and DEA hybrid model were used in a two-stage multi-criteria decision making approach to evaluate the relative efficiency of hydrogen technologies for the hydrogen economy. The fuzzy analytic hierarchy process reflects the uncertainty of human thoughts with interval values instead of clear-cut numbers. It therefore allocates the relative importance of 4 criteria, notably economic impact, commercial potential, inner capacity and technical spin-off. The relative efficiency of hydrogen technologies for the hydrogen economy can be measured via data envelopment analysis. It was concluded that the scientific decision making approach can be used effectively to allocate research and development resources and activities.
A MultiAir®/MultiFuel Approach to Enhancing Engine System Efficiency
Energy Technology Data Exchange (ETDEWEB)
Reese, Ronald [Chrysler Group LLC., Auburn Hills, MI (United States)
2015-05-20
FCA US LLC (formerly known as Chrysler Group LLC, and hereinafter "Chrysler") was awarded an American Recovery and Reinvestment Act (ARRA) funded project by the Department of Energy (DOE) titled "A MultiAir®/MultiFuel Approach to Enhancing Engine System Efficiency" (hereinafter "project"). This award was issued after Chrysler submitted a proposal for Funding Opportunity Announcement DE-FOA-0000079, "Systems Level Technology Development, Integration, and Demonstration for Efficient Class 8 Trucks (SuperTruck) and Advanced Technology Powertrains for Light-Duty Vehicles (ATP-LD)." Chrysler started work on this project on June 01, 2010 and completed testing activities on August 30, 2014. The overall objectives of this project were to: demonstrate a 25% improvement in combined Federal Test Procedure (FTP) City and Highway fuel economy over a 2009 Chrysler minivan; accelerate the development of highly efficient engine and powertrain systems for light-duty vehicles, while meeting future emissions standards; and create and retain jobs in accordance with the American Recovery and Reinvestment Act of 2009
Study design elements for rigorous quasi-experimental comparative effectiveness research.
Maciejewski, Matthew L; Curtis, Lesley H; Dowd, Bryan
2013-03-01
Quasi-experiments are likely to be the workhorse study design used to generate evidence about the comparative effectiveness of alternative treatments, because of their feasibility, timeliness, affordability and external validity compared with randomized trials. In this review, we outline potential sources of discordance in results between quasi-experiments and experiments, review study design choices that can improve the internal validity of quasi-experiments, and outline innovative data linkage strategies that may be particularly useful in quasi-experimental comparative effectiveness research. There is an urgent need to resolve the debate about the evidentiary value of quasi-experiments since equal consideration of rigorous quasi-experiments will broaden the base of evidence that can be brought to bear in clinical decision-making and governmental policy-making.
Effects of well-boat transportation on the muscle pH and onset of rigor mortis in Atlantic salmon.
Gatica, M C; Monti, G; Gallo, C; Knowles, T G; Warriss, P D
2008-07-26
During the transport of salmon (Salmo salar) in a well-boat, 10 fish were sampled at each of six stages: in cages after crowding at the farm (stage 1), in the well-boat after loading (stage 2), in the well-boat after eight hours transport and before unloading (stage 3), in the resting cages immediately after finishing unloading (stage 4), after 24 hours resting in cages (stage 5), and in the processing plant after pumping from the resting cages (stage 6). The water in the well-boat was at ambient temperature with recirculation to the sea. At each stage the fish were stunned percussively and bled by gill cutting. Immediately after death, and then every three hours for 18 hours, the muscle pH and rigor index of the fish were measured. At successive stages the initial muscle pH of the fish decreased, except for a slight gain at stage 5, after they had been rested for 24 hours. The lowest initial muscle pH was observed at stage 6. The fishes' rigor index showed that rigor developed more quickly at each successive stage, except for a slight decrease in rate at stage 5, attributable to the recovery of muscle reserves.
THE MAIN APPROACHES TO THE DEFINITION OF ECONOMIC EFFICIENCY OF ENTERPRISES OF CONSTRUCTION INDUSTRY
Directory of Open Access Journals (Sweden)
A. V. Dolgova
2015-01-01
Full Text Available The article addresses the ambiguous interpretation of the category of "efficiency". Given that economic efficiency is a complex and multifaceted category, coupled with economic laws and applicable to all activities of the enterprise, this indicator is one of the essential characteristics of the processes occurring in industrial organizations. The lack of a generally accepted view among domestic and foreign economists regarding the essence of the index of efficiency of an industrial enterprise's activity makes it impossible to use this index for managing processes. In a market economy, the economic evaluation of the results of an economic entity remains an important element of research on the company. Each new refinement of recorded knowledge acts as a stimulus for the development of fundamental knowledge categories. One of the central categories in the system of economic categories, in our opinion, is the effective functioning of the enterprise. A precise formulation of the conceptual framework and criteria has important theoretical value for justifying the subject of any research conducted under the conditions of modern economic development of the global space. The definition of criteria and performance indicators, as well as the development of sound economic policies for improving the economic mechanism of the enterprise, depends on a comprehensive study of the essence of the economic efficiency of industrial enterprises in the industry. The article examines the main approaches to the definition of economic efficiency and identifies the degree of relation of the category of "efficiency" to other economic categories. The author suggests a characterization of the essence of economic efficiency adequate to the tasks of the functioning and development of enterprises in the construction industry.
Rigorous Combination of GNSS and VLBI: How it Improves Earth Orientation and Reference Frames
Lambert, S. B.; Richard, J. Y.; Bizouard, C.; Becker, O.
2017-12-01
Current reference series (C04) of the International Earth Rotation and Reference Systems Service (IERS) are produced by a weighted combination of Earth orientation parameter (EOP) time series built up by combination centers of each technique (VLBI, GNSS, laser ranging, DORIS). In the future, we plan to derive EOP from a rigorous combination of the normal equation systems of the four techniques. We present here the results of a rigorous combination of VLBI and GNSS pre-reduced, constraint-free normal equations with the DYNAMO geodetic analysis software package developed and maintained by the French GRGS (Groupe de Recherche en Géodésie Spatiale). The normal equations used are those produced separately by the IVS and IGS combination centers, to which we apply our own minimal constraints. We address the usefulness of such a method with respect to the classical, a posteriori, combination method, and we show whether EOP determinations are improved. In particular, we implement external validations of the EOP series based on comparison with geophysical excitation and examination of the covariance matrices. Finally, we address the potential of the technique for the next generation of celestial reference frames, which are currently determined by VLBI only.
Rigorous vector wave propagation for arbitrary flat media
Bos, Steven P.; Haffert, Sebastiaan Y.; Keller, Christoph U.
2017-08-01
Precise modelling of the (off-axis) point spread function (PSF) to identify geometrical and polarization aberrations is important for many optical systems. In order to characterise the PSF of the system in all Stokes parameters, an end-to-end simulation of the system has to be performed in which Maxwell's equations are rigorously solved. We present the first results of a python code that we are developing to perform multiscale end-to-end wave propagation simulations that include all relevant physics. Currently we can handle plane-parallel near- and far-field vector diffraction effects of propagating waves in homogeneous isotropic and anisotropic materials, refraction and reflection at flat parallel surfaces, interference effects in thin films and unpolarized light. We show that the code has a numerical precision on the order of 10⁻¹⁶ for non-absorbing isotropic and anisotropic materials. For absorbing materials the precision is on the order of 10⁻⁸. The capabilities of the code are demonstrated by simulating a converging beam reflecting from a flat aluminium mirror at normal incidence.
Dynamics of harmonically-confined systems: Some rigorous results
Energy Technology Data Exchange (ETDEWEB)
Wu, Zhigang, E-mail: zwu@physics.queensu.ca; Zaremba, Eugene, E-mail: zaremba@sparky.phy.queensu.ca
2014-03-15
In this paper we consider the dynamics of harmonically-confined atomic gases. We present various general results which are independent of particle statistics, interatomic interactions and dimensionality. Of particular interest is the response of the system to external perturbations which can be either static or dynamic in nature. We prove an extended Harmonic Potential Theorem which is useful in determining the damping of the centre of mass motion when the system is prepared initially in a highly nonequilibrium state. We also study the response of the gas to a dynamic external potential whose position is made to oscillate sinusoidally in a given direction. We show in this case that either the energy absorption rate or the centre of mass dynamics can serve as a probe of the optical conductivity of the system. -- Highlights: •We derive various rigorous results on the dynamics of harmonically-confined atomic gases. •We derive an extension of the Harmonic Potential Theorem. •We demonstrate the link between the energy absorption rate in a harmonically-confined system and the optical conductivity.
Directory of Open Access Journals (Sweden)
Heng-Yi Su
2016-11-01
Full Text Available This paper proposes an efficient approach for the computation of the voltage stability margin (VSM) in a large-scale power grid. The objective is to accurately and rapidly determine the load power margin which corresponds to the voltage collapse phenomenon. The proposed approach is based on the impedance-match technique and the model-based technique. It combines the Thevenin equivalent (TE) network method with a cubic spline extrapolation technique and the continuation technique to achieve fast and accurate VSM computation for a bulk power grid. Moreover, the generator Q limits are taken into account for practical applications. Extensive case studies carried out on Institute of Electrical and Electronics Engineers (IEEE) benchmark systems and the Taiwan Power Company (Taipower, Taipei, Taiwan) system are used to demonstrate the effectiveness of the proposed approach.
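In its simplest form, the Thevenin-equivalent idea reduces a load bus to a two-bus system whose PV curve has a nose point giving the load power margin. The sketch below uses assumed per-unit values and a closed-form PV curve; it omits the paper's continuation, cubic-spline extrapolation and generator Q-limit handling.

```python
import numpy as np

# Toy two-bus illustration of TE-based VSM: source E behind reactance X
# feeds a load at constant power factor; trace the PV curve and read the
# load margin at the nose point.  All values are placeholders.
E, X = 1.0, 0.25                  # Thevenin voltage and reactance (p.u.)
phi = np.arccos(0.97)             # load power factor angle
P = np.linspace(0.0, 3.0, 3001)
Q = P * np.tan(phi)

# V^2 solves V^4 + V^2 (2 Q X - E^2) + X^2 (P^2 + Q^2) = 0; real solutions
# exist while the discriminant below is non-negative
disc = E**4 / 4 - (X * P) ** 2 - X * E**2 * Q
ok = disc >= 0
V_high = np.sqrt(E**2 / 2 - Q[ok] * X + np.sqrt(disc[ok]))  # upper PV branch

P_max = P[ok][-1]                 # nose point of the PV curve
P_now = 0.8                       # assumed present loading
print(f"nose point P_max = {P_max:.3f} p.u., margin = {P_max - P_now:.3f} p.u.")
print(f"voltage at the nose ~ {V_high[-1]:.3f} p.u.")
```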
Rigorous theory of molecular orientational nonlinear optics
International Nuclear Information System (INIS)
Kwak, Chong Hoon; Kim, Gun Yeup
2015-01-01
Classical statistical mechanics of the molecular optics theory proposed by Buckingham [A. D. Buckingham and J. A. Pople, Proc. Phys. Soc. A 68, 905 (1955)] has been extended to describe the field induced molecular orientational polarization effects on nonlinear optics. In this paper, we present the generalized molecular orientational nonlinear optical processes (MONLO) through the calculation of the classical orientational averaging using the Boltzmann type time-averaged orientational interaction energy in the randomly oriented molecular system under the influence of applied electric fields. The focal points of the calculation are (1) the derivation of rigorous tensorial components of the effective molecular hyperpolarizabilities, (2) the molecular orientational polarizations and the electronic polarizations including the well-known third-order dc polarization, dc electric field induced Kerr effect (dc Kerr effect), optical Kerr effect (OKE), dc electric field induced second harmonic generation (EFISH), degenerate four wave mixing (DFWM) and third harmonic generation (THG). We also present some of the new predictive MONLO processes. For second-order MONLO, second-order optical rectification (SOR), Pockels effect and difference frequency generation (DFG) are described in terms of the anisotropic coefficients of first hyperpolarizability. And, for third-order MONLO, third-order optical rectification (TOR), dc electric field induced difference frequency generation (EFIDFG) and pump-probe transmission are presented
International Nuclear Information System (INIS)
Espinoza-Ojeda, O M; Santoyo, E; Andaverde, J
2011-01-01
Approximate and rigorous solutions of seven heat transfer models were statistically examined, for the first time, to estimate stabilized formation temperatures (SFT) of geothermal and petroleum boreholes. Constant linear and cylindrical heat source models were used to describe the heat flow (either conductive or conductive/convective) involved during a borehole drilling. A comprehensive statistical assessment of the major error sources associated with the use of these models was carried out. The mathematical methods (based on approximate and rigorous solutions of heat transfer models) were thoroughly examined by using four statistical analyses: (i) the use of linear and quadratic regression models to infer the SFT; (ii) the application of statistical tests of linearity to evaluate the actual relationship between bottom-hole temperatures and time function data for each selected method; (iii) the comparative analysis of SFT estimates between the approximate and rigorous predictions of each analytical method using a β ratio parameter to evaluate the similarity of both solutions, and (iv) the evaluation of accuracy in each method using statistical tests of significance, and deviation percentages between 'true' formation temperatures and SFT estimates (predicted from approximate and rigorous solutions). The present study also enabled us to determine the sensitivity parameters that should be considered for a reliable calculation of SFT, as well as to define the main physical and mathematical constraints where the approximate and rigorous methods could provide consistent SFT estimates
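As a concrete example of one classical member of this model family, the Horner (constant linear heat source) method extrapolates shut-in bottom-hole temperatures linearly in ln((t_c + Δt)/Δt), the intercept being the SFT estimate. The sketch below uses synthetic data with assumed circulation and shut-in times.

```python
import numpy as np

# Horner-plot extrapolation of bottom-hole temperature (BHT) to the
# stabilized formation temperature (SFT).  Synthetic placeholder data.
t_c = 10.0                                   # circulation time, h
dt = np.array([2.0, 4.0, 6.0, 8.0, 12.0])    # shut-in times, h
T_bh = np.array([95.0, 103.0, 107.5, 110.5, 114.0])  # measured BHT, deg C

x = np.log((t_c + dt) / dt)                  # Horner time ratio
slope, intercept = np.polyfit(x, T_bh, 1)    # T_bh = SFT + slope * x
print(f"estimated SFT = {intercept:.1f} deg C (x -> 0 as dt -> infinity)")
```

The linear-regression step here is precisely the kind of fit whose statistical adequacy (linearity tests, confidence intervals, deviation percentages) the abstract's four analyses examine.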
Ye, Kai; Kosters, Walter A; Ijzerman, Adriaan P
2007-03-15
Pattern discovery in protein sequences is often based on multiple sequence alignments (MSA). The procedure can be computationally intensive and often requires manual adjustment, which may be particularly difficult for a set of deviating sequences. In contrast, two algorithms, PRATT2 (http://www.ebi.ac.uk/pratt/) and TEIRESIAS (http://cbcsrv.watson.ibm.com/) are used to directly identify frequent patterns from unaligned biological sequences without an attempt to align them. Here we propose a new algorithm with greater efficiency and more functionality than both PRATT2 and TEIRESIAS, and discuss some of its applications to G protein-coupled receptors, a protein family of important drug targets. In this study, we designed and implemented six algorithms to mine three different pattern types from either one or two datasets using a pattern growth approach. We compared our approach to PRATT2 and TEIRESIAS in efficiency, completeness and the diversity of pattern types. Compared to PRATT2, our approach is faster, capable of processing large datasets and able to identify the so-called type III patterns. Our approach is comparable to TEIRESIAS in the discovery of the so-called type I patterns but has additional functionality such as mining the so-called type II and type III patterns and finding discriminating patterns between two datasets. The source code for the pattern growth algorithms and their pseudo-code are available at http://www.liacs.nl/home/kosters/pg/.
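The pattern-growth idea itself is compact: extend candidate patterns one symbol at a time and prune by support, which is sound because support is anti-monotone under extension. The sketch below mines exact frequent substrings only; the published algorithms additionally grow wildcard and flexible-gap (type II and III) patterns and contrast two datasets.

```python
# Bare-bones pattern-growth miner for exact frequent substrings in unaligned
# sequences (a simplified, wildcard-free cousin of the "type I" patterns).
def grow(sequences, min_support, prefix="", alphabet="ACDEFGHIKLMNPQRSTVWY"):
    """Depth-first pattern growth: extend prefix one residue at a time and
    keep only extensions occurring in >= min_support sequences."""
    results = {}
    for ch in alphabet:
        pat = prefix + ch
        support = sum(pat in seq for seq in sequences)
        if support >= min_support:          # anti-monotonicity justifies pruning
            results[pat] = support
            results.update(grow(sequences, min_support, pat, alphabet))
    return results

seqs = ["MDVLNASGQW", "KDVLNASAQR", "GDVLNTSGQW", "ADVLKASGQY"]  # toy data
patterns = grow(seqs, min_support=3)
# report only patterns not extendable to the right with the same support
maximal = {p: s for p, s in patterns.items()
           if not any(q != p and q.startswith(p) and patterns[q] == s
                      for q in patterns)}
print(sorted(maximal.items(), key=lambda kv: -len(kv[0])))
```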
Community historians and the dilemma of rigor vs relevance : A comment on Danziger and van Rappard
Dehue, Trudy
1998-01-01
Since the transition from finalism to contextualism, the history of science seems to be caught up in a basic dilemma. Many historians fear that with the new contextualist standards of rigorous historiography, historical research can no longer be relevant to working scientists themselves. The present
Bigaj, Stephen J.; Bazinet, Gregory P.
1993-01-01
Suggests a team approach for effectively and efficiently providing services for postsecondary students with disabilities. Reviews various teaming concepts and presents a framework for a postsecondary disability problem-solving team. (Author/JOW)
A Game-Theoretical Approach for Spectrum Efficiency Improvement in Cloud-RAN
Directory of Open Access Journals (Sweden)
Zhuofu Zhou
2016-01-01
Full Text Available As a tremendous number of mobile devices access the Internet in the future, cells that can provide high data rates and more capacity are expected to be deployed. Specifically, in the next generation of mobile communication, 5G, cloud computing is expected to be applied to the radio access network. In a cloud radio access network (Cloud-RAN), the traditional base station is divided into two parts: remote radio heads (RRHs) and baseband units (BBUs). RRHs are geographically distributed and densely deployed so as to achieve high data rates and low latency. However, the ultradense deployment inevitably deteriorates spectrum efficiency due to more severe intercell interference among RRHs. In this paper, the downlink spectrum efficiency is improved through cooperative transmission based on forming coalitions of RRHs. We formulate the problem as a coalition formation game in partition form. In the process of coalition formation, each RRH can join or leave a coalition to maximize its own individual utility while taking the coalition utility into account at the same time. Moreover, the convergence and stability of the resulting coalition structure are studied. Numerical simulation results demonstrate that the proposed approach based on the coalition formation game is superior to the noncooperative method in terms of aggregate coalition utility.
Mahpeykar, Seyed Milad; Wang, Xihua
2017-02-01
Colloidal quantum dot (CQD) solar cells have been under the spotlight in recent years mainly due to their potential for low-cost solution-processed fabrication and efficient light harvesting through multiple exciton generation (MEG) and tunable absorption spectrum via the quantum size effect. Despite the impressive advances achieved in charge carrier mobility of quantum dot solids and the cells' light trapping capabilities, the recent progress in CQD solar cell efficiencies has been slow, leaving them behind other competing solar cell technologies. In this work, using comprehensive optoelectronic modeling and simulation, we demonstrate the presence of a strong efficiency loss mechanism, here called the "efficiency black hole", that can significantly hold back the improvements achieved by any efficiency enhancement strategy. We prove that this efficiency black hole is the result of sole focus on enhancement of either light absorption or charge extraction capabilities of CQD solar cells. This means that for a given thickness of CQD layer, improvements accomplished exclusively in optic or electronic aspect of CQD solar cells do not necessarily translate into tangible enhancement in their efficiency. The results suggest that in order for CQD solar cells to come out of the mentioned black hole, incorporation of an effective light trapping strategy and a high quality CQD film at the same time is an essential necessity. Using the developed optoelectronic model, the requirements for this incorporation approach and the expected efficiencies after its implementation are predicted as a roadmap for CQD solar cell research community.
Energy Technology Data Exchange (ETDEWEB)
Phadke, Amol [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shah, Nihar [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Abhyankar, Nikit [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Park, Won Young [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Diddi, Saurabh [Bureau of Energy Efficiency, Government of India (India); Ahuja, Deepanshu [Collaborative Labeling and Appliance Standards Program (CLASP), Washington, DC (United States); Mukherjee, P. K. [Collaborative Labeling and Appliance Standards Program (CLASP), Washington, DC (United States); Walia, Archana [Collaborative Labeling and Appliance Standards Program (CLASP), Washington, DC (United States)
2016-06-01
Improving the efficiency of air conditioners (ACs) typically involves improving the efficiency of various components such as compressors, heat exchangers, expansion valves, refrigerants, and fans. We estimate the incremental cost of improving the efficiency of room ACs based on the cost of improving the efficiency of their key components. Further, we estimate the retail price increase required to cover the cost of efficiency improvement, compare it with electricity bill savings, and calculate the payback period for consumers to recover the additional price of a more efficient AC. The finding that significant efficiency improvement is cost effective from a consumer perspective is robust over a wide range of assumptions. If we assume a 50% higher incremental price compared to our baseline estimate, the payback period for the efficiency level of 3.5 ISEER is 1.1 years. Given the findings of this study, establishing more stringent minimum efficiency performance criteria (one-star level) should be evaluated rigorously, considering the significant benefits to consumers, energy security, and the environment
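The payback computation referred to is a simple ratio of the incremental retail price to the annual electricity bill savings. A sketch with hypothetical figures (only the 1.1-year result is taken from the text):

```python
# Consumer payback logic from the abstract as a one-line calculation.
def payback_years(incremental_price, annual_bill_savings):
    return incremental_price / annual_bill_savings

# hypothetical example: an AC costing 2200 units more, saving 2000 units/year
print(f"{payback_years(2200, 2000):.1f} years")   # ~1.1 years
```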
Hospital efficiency and transaction costs: a stochastic frontier approach.
Ludwig, Martijn; Groot, Wim; Van Merode, Frits
2009-07-01
The make-or-buy decision of organizations is an important issue in the transaction cost theory, but is usually not analyzed from an efficiency perspective. Hospitals frequently have to decide whether to outsource or not. The main question we address is: Is the make-or-buy decision affected by the efficiency of hospitals? A one-stage stochastic cost frontier equation is estimated for Dutch hospitals. The make-or-buy decisions of ten different hospital services are used as explanatory variables to explain efficiency of hospitals. It is found that for most services the make-or-buy decision is not related to efficiency. Kitchen services are an important exception to this. Large hospitals tend to outsource less, which is supported by efficiency reasons. For most hospital services, outsourcing does not significantly affect the efficiency of hospitals. The focus on the make-or-buy decision may therefore be less important than often assumed.
Hu, Kexiang; Ding, Enjie; Wangyang, Peihua; Wang, Qingkang
2016-06-01
The electromagnetic spectrum and the photoelectric conversion efficiency of silicon hexagonal nanoconical hole (SiHNH) array based solar cells are systematically analyzed using Rigorous Coupled Wave Analysis (RCWA) and Modal Transmission Line (MTL) theory. The ultimate efficiency of the optimized SiHNH-array solar cell reaches 31.92% based on the absorption spectrum, 4.52% higher than that of silicon hexagonal nanoconical frustum (SiHNF) arrays. The absorption enhancement of the SiHNH arrays is due to their lower reflectance and the larger number of guided-mode resonances they support, and the enhanced ultimate efficiency is insensitive to the bottom diameter (D_bot) of the nanoconical hole and to the incident angle. The result provides an additional guideline for nanostructure surface texturing design for photovoltaic applications.
An Efficient Approach for Identifying Stable Lobes with Discretization Method
Directory of Open Access Journals (Sweden)
Baohai Wu
2013-01-01
Full Text Available This paper presents a new approach for quick identification of chatter stability lobes with the discretization method. Firstly, three different kinds of stability regions are defined: absolute stable region, valid region, and invalid region. Secondly, while identifying the chatter stability lobes, three different regions within the chatter stability lobes are identified with relatively large time intervals. Thirdly, the stability boundary within the valid regions is finely calculated to obtain the exact chatter stability lobes. The proposed method only needs to test a small portion of the spindle speed and cutting depth set; about 89% of the computation time is saved compared with the full discretization method. It takes only about 10 minutes to obtain the exact chatter stability lobes. Since it is based on the discretization method, the proposed approach can be used for different immersion cutting conditions, including low-immersion cutting, and can be directly implemented in the workshop to make machining parameter selection more efficient.
New approach for calibration the efficiency of HPGe detectors
International Nuclear Information System (INIS)
Alnour, I.A.; Wagiran, H.; Suhaimi Hamzah; Siong, W.B.; Mohd Suhaimi Elias
2013-01-01
Full-text: This work evaluates the efficiency calibration of HPGe detectors, a Canberra GC3018 with Genie 2000 software and an Ortec GEM25-76-XLB-C with Gamma Vision software, available at the neutron activation analysis laboratory of the Malaysian Nuclear Agency (NM). The efficiency calibration curve was constructed from measurements of an IAEA standard gamma point-source set comprising 241Am, 57Co, 133Ba, 152Eu, 137Cs and 60Co. The efficiency calibrations were performed for three different geometries: 5, 10 and 15 cm distances from the detector end cap. The polynomial parameter functions were fitted with a computer program, MATLAB, in order to find an accurate fit to the experimental data points. The efficiency equation was established from the fitted parameters, which allows the efficiency to be evaluated at a particular energy of interest. The study shows significant deviations in the efficiency depending on the source-detector distance and photon energy. (author)
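A common way to realise such a calibration fit, assumed here rather than taken from the paper, is a polynomial in ln E fitted to ln ε at the calibration energies. The energies below correspond to the listed sources, while the efficiency values are synthetic.

```python
import numpy as np

# Polynomial-in-ln(E) efficiency calibration: ln(eff) = sum_i a_i * ln(E)^i.
# Energies match the listed sources (241Am, 57Co, 133Ba, 137Cs, 60Co, 152Eu);
# the efficiency values are synthetic placeholders.
E = np.array([59.5, 122.1, 356.0, 661.7, 1173.2, 1332.5, 1408.0])     # keV
eff = np.array([0.062, 0.088, 0.041, 0.024, 0.014, 0.0125, 0.0118])   # absolute

coeffs = np.polyfit(np.log(E), np.log(eff), deg=3)

def efficiency(energy_keV):
    """Evaluate the fitted curve at an arbitrary energy of interest."""
    return np.exp(np.polyval(coeffs, np.log(energy_keV)))

print(f"efficiency at 834.8 keV ~ {efficiency(834.8):.4f}")
```

A separate fit per source-detector geometry captures the distance dependence the abstract reports.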
Hu, Jiajin; Guo, Zheng; Glasius, Marianne; Kristensen, Kasper; Xiao, Langtao; Xu, Xuebing
2011-08-26
To develop an efficient green extraction approach for the recovery of bioactive compounds from natural plants, we examined the potential of pressurized liquid extraction (PLE) of ginger (Zingiber officinale Roscoe) with bioethanol/water as solvents. In addition to reduced time and solvent cost, the PLE extract showed a distinct constituent profile from that of Soxhlet extraction, with significantly improved recovery of diarylheptanoids, among other advantages over other extraction approaches. Among the pure solvents tested for PLE, bioethanol yielded the highest efficiency in recovering most gingerol-related constituents, while across a broad concentration spectrum of ethanol aqueous solutions, 70% ethanol gave the best performance in terms of yield of total extract, completeness of the constituent profile and recovery of most gingerol-related components. PLE with 70% bioethanol operated at 1500 psi and 100 °C for 20 min (static extraction time: 5 min) is recommended as the optimized extraction condition, achieving 106.8%, 109.3% and 108.0% yields of [6]-, [8]- and [10]-gingerol relative to the yields of the corresponding constituents obtained by 8 h Soxhlet extraction (absolute ethanol as extraction solvent). Copyright © 2011 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
S.J. Mohammadi
2016-03-01
Full Text Available Introduction: In the developed world, and particularly in developing countries, livestock is the most important agricultural sub-sector. Livestock primary and secondary industries have a special place in the national economy because of the great value of their products, the job opportunities they create, the healthy products they provide for consumers, the export income they generate through access to global markets for livestock products and, finally, their undeniable role in achieving food security. The demand for milk in Iran has increased due to population growth, and the amount of milk production has also increased; the greater share of this increase comes from industrial dairy farms. One of the major ways to increase milk production continually is to make its production efficient and improve economic conditions. The current study attempts to determine the efficiency and ranking of industrial dairy farms in Saqqez and Divandarreh cities using a super-efficiency model. Materials and Methods: The statistical population of the study is all active industrial dairy farms of Saqqez and Divandarreh cities, about 19 farms. The data required for calculating efficiency were gathered by surveying and completing questionnaires for the year 2013. In this study, first, the Data Envelopment Analysis (DEA) method and the GAMS software package were used to estimate the super-efficiency of each farm. Super-efficiency is a modified DEA model in which each farm can obtain an efficiency score greater than one. Then, to ensure that the obtained super-efficiency scores were unbiased, the modified model of Banker and Gifford was re-estimated, and the conventional efficiency scores of the farms were compared after normalizing and removing some of the scores of outlier farms based on pre-selected screens. The model suggests conditions under which some of the estimates for dairy farms might have been contaminated with error. As a result, it has been
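For context, a minimal sketch of the input-oriented super-efficiency linear program (Andersen-Petersen form, constant returns to scale) is given below; this is a generic textbook formulation, not the specific Banker-Gifford variant used in the paper, and the farm data are toy numbers.

```python
import numpy as np
from scipy.optimize import linprog

def super_efficiency(X, Y, o):
    """Input-oriented CCR super-efficiency of unit o. X: (n_units, n_inputs),
    Y: (n_units, n_outputs). Scores > 1 are possible because unit o is
    excluded from its own reference set."""
    n = X.shape[0]
    peers = [j for j in range(n) if j != o]
    c = np.r_[1.0, np.zeros(len(peers))]              # minimize theta
    # inputs:  sum_j lam_j x_j - theta * x_o <= 0
    A_in = np.c_[-X[o][:, None], X[peers].T]
    # outputs: -sum_j lam_j y_j <= -y_o
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y[peers].T]
    A = np.vstack([A_in, A_out])
    b = np.r_[np.zeros(X.shape[1]), -Y[o]]
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(0, None)] * (1 + len(peers)))
    return res.fun

# Toy data: 5 farms, 2 inputs (feed, labour), 1 output (milk)
X = np.array([[4, 3], [7, 3], [8, 1], [4, 2], [2, 4]], float)
Y = np.array([[1], [1], [1], [1], [1]], float)
scores = [super_efficiency(X, Y, o) for o in range(len(X))]
```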
McKee, S R; Sams, A R
1998-01-01
Development of rigor mortis at elevated post-mortem temperatures may contribute to turkey meat characteristics similar to those found in pale, soft, exudative pork. To evaluate this effect, 36 Nicholas tom turkeys were processed at 19 wk of age and placed in water at 40, 20, or 0 C immediately after evisceration. Pectoralis muscle samples were taken at 15 min, 30 min, 1 h, 2 h, and 4 h post-mortem and analyzed for R-value (an indirect measure of adenosine triphosphate), glycogen, pH, color, and sarcomere length. At 4 h, the remaining intact Pectoralis muscle was harvested, aged on ice for 23 h, and analyzed for drip loss, cook loss, shear values, and sarcomere length. By 15 min post-mortem, the 40 C treatment had higher R-values, which persisted through 4 h. By 1 h, the pH and glycogen levels of the 40 C treatment were lower than those of the 0 C treatment; however, they did not differ from those of the 20 C treatment. Increased L* values indicated that color became more pale by 2 h post-mortem in the 40 C treatment when compared to the 20 and 0 C treatments. Drip loss, cook loss, and shear value were increased, whereas sarcomere lengths were decreased, as a result of the 40 C treatment. These findings suggest that elevated post-mortem temperatures during processing accelerated rigor mortis and produced biochemical changes in the muscle that resulted in pale, exudative meat characteristics in turkey.
Rigorous Quantum Field Theory: A Festschrift for Jacques Bros
Monvel, Anne Boutet; Iagolnitzer, Daniel; Moschella, Ugo
2007-01-01
Jacques Bros has greatly advanced our present understanding of rigorous quantum field theory through numerous fundamental contributions. This book arose from an international symposium held in honour of Jacques Bros on the occasion of his 70th birthday, at the Department of Theoretical Physics of the CEA in Saclay, France. The impact of the work of Jacques Bros is evident in several articles in this book. Quantum fields are regarded as genuine mathematical objects, whose various properties and relevant physical interpretations must be studied in a well-defined mathematical framework. The key topics in this volume include analytic structures of Quantum Field Theory (QFT), renormalization group methods, gauge QFT, stability properties and extension of the axiomatic framework, QFT on models of curved spacetimes, QFT on noncommutative Minkowski spacetime. Contributors: D. Bahns, M. Bertola, R. Brunetti, D. Buchholz, A. Connes, F. Corbetta, S. Doplicher, M. Dubois-Violette, M. Dütsch, H. Epstein, C.J. Fewster, K....
A multi-criteria decision approach to sorting actions for promoting energy efficiency
International Nuclear Information System (INIS)
Pires Neves, Luis; Gomes Martins, Antonio; Henggeler Antunes, Carlos; Candido Dias, Luis
2008-01-01
This paper proposes a multi-criteria decision approach for sorting energy-efficiency initiatives, promoted by electric utilities, with or without public funds authorized by a regulator, or promoted by an independent energy agency, overcoming the limitations and drawbacks of cost-benefit analysis. The proposed approach is based on the ELECTRE-TRI multi-criteria method and allows the consideration of different kinds of impacts, while avoiding difficult measurements and unit conversions. The decision is based on all the significant effects of the initiative, both positive and negative, including ancillary effects often overlooked in cost-benefit analysis. ELECTRE-TRI, like most multi-criteria methods, provides the decision maker with the ability to control, in a transparent way, the relevance each impact has on the final decision. The decision support process encompasses a robustness analysis which, together with good documentation of the parameters supplied to the model, should support sound decisions. The models were tested with a set of real-world initiatives and compared with possible decisions based on cost-benefit analysis
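To make the sorting mechanics concrete, here is a deliberately simplified Python sketch of ELECTRE-TRI pessimistic assignment using crisp concordance only; a full implementation would add indifference, preference and veto thresholds, and the weights and category profiles below are invented for illustration.

```python
import numpy as np

def electre_tri_pessimistic(perf, profiles, weights, lam=0.6):
    """Simplified ELECTRE-TRI pessimistic assignment (no veto thresholds).
    perf: criteria scores of one initiative (higher = better);
    profiles: category boundary profiles, worst to best, shape (k, m);
    weights: criterion weights summing to 1; lam: cutting level."""
    def outranks(a, b):
        # crisp concordance: share of criterion weight where a >= b
        return np.sum(weights[a >= b]) >= lam
    # walk boundaries from best to worst; assign to the first category
    # whose lower profile the initiative outranks (0 = worst category)
    for h in range(len(profiles) - 1, -1, -1):
        if outranks(perf, profiles[h]):
            return h + 1
    return 0

weights  = np.array([0.3, 0.3, 0.2, 0.2])
profiles = np.array([[3, 3, 3, 3], [6, 6, 6, 6]])   # 3 categories
print(electre_tri_pessimistic(np.array([7, 5, 6, 8]), profiles, weights))
```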
International Nuclear Information System (INIS)
Zhang Hongkun; Cen Song; Wang Haitao; Cheng Huanyu
2012-01-01
An efficient 3D approach is proposed for simulating the complicated response of the multi-body structure in a reactor core under seismic loading. By utilizing the rigid-body and connector functions of the software Abaqus, the multi-body structure of the reactor core is simplified as a mass-point system interlinked by spring-dashpot connectors, and reasonable schemes are used for determining the various connector coefficients. Furthermore, a scripting program is also compiled for the 3D parametric modeling. Numerical examples show that the proposed method can not only produce results which satisfy the engineering requirements, but also improve the computational efficiency by more than a factor of 100. (authors)
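The mass-point-plus-connector idealization can be sketched outside any particular FE package. The toy Python model below integrates a three-mass chain joined by spring-dashpot connectors under a prescribed base motion; all stiffness, damping and excitation values are illustrative, not reactor data.

```python
import numpy as np
from scipy.integrate import solve_ivp

m = np.array([50.0, 50.0, 50.0])     # masses, kg (illustrative)
k = np.array([1e5, 1e5, 1e5])        # connector stiffness, N/m
c = np.array([200.0, 200.0, 200.0])  # connector damping, N s/m

def base_motion(t):
    """Prescribed seismic base displacement and velocity (illustrative)."""
    w = 2 * np.pi * 5.0
    return 0.01 * np.sin(w * t), 0.01 * w * np.cos(w * t)

def rhs(t, y):
    x, v = y[:3], y[3:]
    xb, vb = base_motion(t)
    xa = np.r_[xb, x[:-1]]                      # member below each mass
    va = np.r_[vb, v[:-1]]
    f_low = k * (xa - x) + c * (va - v)         # connector below each mass
    f_up = -np.r_[f_low[1:], 0.0]               # reaction from connector above
    return np.r_[v, (f_low + f_up) / m]

sol = solve_ivp(rhs, (0.0, 2.0), np.zeros(6), max_step=1e-3)
```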
An efficient multiple particle filter based on the variational Bayesian approach
Ait-El-Fquih, Boujemaa
2015-12-07
This paper addresses the filtering problem in large-dimensional systems, in which conventional particle filters (PFs) remain computationally prohibitive owing to the large number of particles needed to obtain reasonable performance. To overcome this drawback, a class of multiple particle filters (MPFs) has been recently introduced in which the state-space is split into low-dimensional subspaces, and a separate PF is then applied to each subspace. In this paper, we adopt the variational Bayesian (VB) approach to propose a new MPF, the VBMPF. The proposed filter is computationally more efficient since the propagation of each particle requires generating only one (new) particle, while in standard MPFs a set of (children) particles needs to be generated. In a numerical test, the proposed VBMPF behaves better than the PF and MPF.
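For reference, the baseline that MPF/VBMPF variants aim to beat in high dimensions is the standard bootstrap particle filter. A minimal Python sketch for a scalar toy model is shown below (model parameters are arbitrary); it is not the VBMPF itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(y, n_particles=500):
    """Bootstrap PF for x_k = 0.9 x_{k-1} + w_k, y_k = x_k + v_k."""
    x = rng.normal(0.0, 1.0, n_particles)
    means = []
    for obs in y:
        x = 0.9 * x + rng.normal(0.0, 0.5, n_particles)   # propagate
        logw = -0.5 * ((obs - x) / 0.3) ** 2              # likelihood weights
        w = np.exp(logw - logw.max()); w /= w.sum()
        means.append(np.sum(w * x))                       # filtered mean
        x = rng.choice(x, n_particles, p=w)               # resample
    return np.array(means)

obs = 0.9 ** np.arange(30) + rng.normal(0.0, 0.3, 30)     # synthetic data
est = bootstrap_pf(obs)
```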
International Nuclear Information System (INIS)
Fu, Jie; Tian, Yanlong; Chang, Binbin; Li, Gengnan; Xi, Fengna; Dong, Xiaoping
2012-01-01
A novel Mn-intercalated layered titanate, highly active as a photocatalyst in the visible-light region, has been synthesized via a convenient and efficient exfoliation–flocculation approach with divalent Mn ions and monolayer titanate nanosheets. The 0.91 nm interlayer spacing of the obtained photocatalyst is in accordance with the sum of the thickness of the titanate nanosheet and the diameter of the Mn ions. The yellow photocatalyst shows a spectral response in the visible-light region, and the calculated band gap is 2.59 eV. The photocatalytic performance of this material was evaluated by degradation and mineralization of the aqueous dye methylene blue under visible-light irradiation, and an enhanced photocatalytic activity in comparison with protonated titanate as well as P25 TiO2 and N-doped TiO2 was obtained. Additionally, the layered structure is retained, no dye-ion intercalation occurs during the photocatalysis process, and ∼90% of the photocatalytic activity is retained after three reuse cycles. - Graphical abstract: Mn-intercalated layered titanate as a novel and efficient visible-light-harvesting photocatalyst was synthesized via a convenient and efficient exfoliation–flocculation approach under mild conditions. Highlights: ► Mn-intercalated titanate has been prepared by an exfoliation–flocculation approach. ► The as-prepared catalyst shows a spectral response in the visible-light region. ► Heat treatment at a certain temperature enables formation of Mn-doped TiO2. ► Dye can be degraded effectively by the catalyst under visible-light irradiation.
Schumacher, F.; Friederich, W.; Lamara, S.
2016-02-01
We present a new conceptual approach to scattering-integral-based seismic full waveform inversion (FWI) that allows a flexible, extendable, modular and both computationally and storage-efficient numerical implementation. To achieve maximum modularity and extendability, interactions between the three fundamental steps carried out sequentially in each iteration of the inversion procedure, namely solving the forward problem, computing waveform sensitivity kernels and deriving a model update, are kept to an absolute minimum and are implemented through dedicated interfaces. To realize storage efficiency and maximum flexibility, the spatial discretization of the inverted earth model is allowed to be completely independent of the spatial discretization employed by the forward solver. For reasons of computational efficiency, the inversion is done in the frequency domain. The benefits of our approach are as follows: (1) Each of the three stages of an iteration is realized by a stand-alone software program, avoiding the monolithic, inflexible and hard-to-modify codes that have often been written for solving inverse problems. (2) The solution of the forward problem, required for kernel computation, can be obtained by any wave propagation modelling code, giving users maximum flexibility in choosing the forward modelling method; both time-domain and frequency-domain approaches can be used. (3) Forward solvers typically demand spatial discretizations that are significantly denser than actually desired for the inverted model. Exploiting this fact by pre-integrating the kernels allows a dramatic reduction of disk space and makes kernel storage feasible; no assumptions are made on the spatial discretization scheme employed by the forward solver. (4) In addition, working in the frequency domain effectively reduces the amount of data, the number of kernels to be computed and the number of equations to be solved. (5) Updating the model by solving a large equation system can be
Ye, LvZhou; Zhang, Hou-Dao; Wang, Yao; Zheng, Xiao; Yan, YiJing
2017-08-21
An efficient low-frequency logarithmic discretization (LFLD) scheme for the decomposition of fermionic reservoir spectrum is proposed for the investigation of quantum impurity systems. The scheme combines the Padé spectrum decomposition (PSD) and a logarithmic discretization of the residual part in which the parameters are determined based on an extension of the recently developed minimum-dissipaton ansatz [J. J. Ding et al., J. Chem. Phys. 145, 204110 (2016)]. A hierarchical equations of motion (HEOM) approach is then employed to validate the proposed scheme by examining the static and dynamic system properties in both the Kondo and noninteracting regimes. The LFLD scheme requires a much smaller number of exponential functions than the conventional PSD scheme to reproduce the reservoir correlation function and thus facilitates the efficient implementation of the HEOM approach in extremely low temperature regimes.
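As a very small illustration of the discretization idea (not of the full PSD-plus-residual construction, whose coefficients the paper fixes via a minimum-dissipaton-type ansatz), the following Python snippet generates the logarithmically spaced sample frequencies that concentrate points toward the low-frequency end of the spectrum.

```python
import numpy as np

def log_grid(omega_max, Lambda=2.0, n_points=10):
    """Frequencies omega_max * Lambda**(-n), n = 0..n_points-1: a
    logarithmic grid that becomes dense as omega -> 0, where the sharp
    low-temperature features of the reservoir spectrum live."""
    return omega_max * Lambda ** (-np.arange(n_points))

print(log_grid(1.0))
```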
Theory of Randomized Search Heuristics in Combinatorial Optimization
DEFF Research Database (Denmark)
The rigorous mathematical analysis of randomized search heuristics (RSHs) with respect to their expected runtime is a growing research area where many results have been obtained in recent years. This class of heuristics includes well-known approaches such as Randomized Local Search (RLS) and the Metropolis algorithm. ... analysis of randomized algorithms to RSHs. Mostly, the expected runtime of RSHs on selected problems is analyzed. Thereby, we understand why and when RSHs are efficient optimizers and, conversely, when they cannot be efficient. The tutorial will give an overview on the analysis of RSHs for solving...
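A canonical example from this area is RLS on the OneMax function, whose expected runtime is Theta(n log n). The sketch below is a generic textbook implementation, not code from the tutorial itself.

```python
import random

def rls_onemax(n=50, max_iters=100_000, seed=1):
    """Randomized Local Search on OneMax: flip one uniformly chosen bit,
    keep the offspring if it is at least as good. The expected number of
    iterations until the all-ones string is Theta(n log n)."""
    random.seed(seed)
    x = [random.randint(0, 1) for _ in range(n)]
    for t in range(max_iters):
        if sum(x) == n:
            return t
        i = random.randrange(n)
        y = x[:]; y[i] ^= 1
        if sum(y) >= sum(x):
            x = y
    return max_iters

print(rls_onemax())
```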
Kranthiraja, Kakaraparthi; Aryal, Um Kanta; Sree, Vijaya Gopalan; Gunasekar, Kumarasamy; Lee, Changyeon; Kim, Minseok; Kim, Bumjoon J; Song, Myungkwan; Jin, Sung-Ho
2018-04-10
The ternary-blend approach has the potential to enhance the power conversion efficiencies (PCEs) of polymer solar cells (PSCs) by providing complementary absorption and efficient charge generation. Unfortunately, most PSCs are processed with toxic halogenated solvents, which are harmful to human health and the environment. Herein, we report the addition of a nonfullerene electron acceptor 3,9-bis(2-methylene-(3-(1,1-dicyanomethylene)-indanone))-5,5,11,11-tetrakis(4-hexylphenyl)-dithieno[2,3-d:2',3'-d']-s-indaceno[1,2-b:5,6-b']dithiophene (ITIC) to a binary blend (poly[4,8-bis(2-(4-(2-ethylhexyloxy)3-fluorophenyl)-5-thienyl)benzo[1,2-b:4,5-b']dithiophene-alt-1,3-bis(4-octylthien-2-yl)-5-(2-ethylhexyl)thieno[3,4-c]pyrrole-4,6-dione] (P1):[6,6]-phenyl-C71-butyric acid methyl ester (PC71BM), PCE = 8.07%) to produce an efficient nonhalogenated green-solvent-processed ternary PSC system with a high PCE of 10.11%. The estimated wetting coefficient value (0.086) for the ternary blend suggests that ITIC could be located at the P1:PC71BM interface, resulting in efficient charge generation and charge transport. In addition, the improved current density, sustained open-circuit voltage and PCE of the optimized ternary PSCs were highly correlated with their better external quantum efficiency response and the flat-band potential value obtained from the Mott-Schottky analysis. The ternary PSCs also showed excellent ambient stability over 720 h. Therefore, our results demonstrate that combining fullerene and nonfullerene acceptors in a ternary blend is an efficient approach to improve the performance of eco-friendly solvent-processed PSCs with long-term stability.
International Nuclear Information System (INIS)
Calahorra, Yonatan; Mendels, Dan; Epstein, Ariel
2014-01-01
Bounded geometries introduce a fundamental problem in calculating the image force barrier lowering of metal-wrapped semiconductor systems. In bounded geometries, the derivation of the barrier lowering requires calculating the reference energy of the system, when the charge is at the geometry center. In the following, we formulate and rigorously solve this problem; this allows combining the image force electrostatic potential with the band diagram of the bounded geometry. The suggested approach is applied to spheres as well as cylinders. Furthermore, although the expressions governing cylindrical systems are complex and can only be evaluated numerically, we present analytical approximations for the solution, which allow easy implementation in calculated band diagrams. The results are further used to calculate the image force barrier lowering of metal-wrapped cylindrical nanowires; calculations show that although the image force potential is stronger than that of planar systems, taking the complete band-structure into account results in a weaker effect of barrier lowering. Moreover, when considering small diameter nanowires, we find that the electrostatic effects of the image force exceed the barrier region, and influence the electronic properties of the nanowire core. This study is of interest to the nanowire community, and in particular for the analysis of nanowire I−V measurements where wrapped or omega-shaped metallic contacts are used. (paper)
International Nuclear Information System (INIS)
Krommes, J.A.; Kim, Chang-Bae
1990-06-01
The fundamental problem in the theory of turbulent transport is to find the flux Γ of a quantity such as heat. Methods based on statistical closures are mired in conceptual controversies and practical difficulties. However, it is possible to bound Γ by employing constraints derived rigorously from the equations of motion. Brief reviews of the general theory and its application to passive advection are given. Then, a detailed application is made to anomalous resistivity generated by self-consistent turbulence in a reversed-field pinch. A nonlinear variational principle for an upper bound on the turbulent electromotive force for fixed current is formulated from the magnetohydrodynamic equations in cylindrical geometry. Numerical solution of a case constrained solely by energy balance leads to a reasonable bound and nonlinear eigenfunctions that share intriguing features with experimental data: the dominant mode numbers appear to be correct, and field reversal is predicted at reasonable values of the pinch parameter. Although open questions remain, upon considering all bounding calculations to date one can conclude, remarkably, that global energy balance constrains transport sufficiently so that bounds derived therefrom are not unreasonable, and that bounding calculations are feasible even for involved practical problems. The potential of the method has hardly been tapped; it provides a fertile area for future research. 29 refs
Residual Generation for the Ship Benchmark Using Structural Approach
DEFF Research Database (Denmark)
Cocquempot, V.; Izadi-Zamanabadi, Roozbeh; Staroswiecki, M
1998-01-01
The prime objective of Fault-tolerant Control (FTC) systems is to handle faults and discrepancies using appropriate accommodation policies. The issue of obtaining information about the various parameters and signals that have to be monitored for fault detection purposes becomes a rigorous task with the growing number of subsystems. The structural approach, presented in this paper, constitutes a general framework for providing this information when the system becomes complex. The methodology of the approach is illustrated on the ship propulsion benchmark.
Robust Approach to Verifying the Weak Form of the Efficient Market Hypothesis
Střelec, Luboš
2011-09-01
The weak form of the efficient markets hypothesis states that prices incorporate only past information about the asset. An implication of this form of the hypothesis is that one cannot detect mispriced assets and consistently outperform the market through technical analysis of past prices. One possible formulation of the efficient market hypothesis used for weak-form tests is that share prices follow a random walk, meaning that returns are realizations of an IID sequence of random variables. Consequently, to verify the weak form of the efficient market hypothesis, we can use distribution tests, among others, i.e. tests of normality and/or graphical methods. Many procedures for testing the normality of univariate samples have been proposed in the literature [7]. Today the most popular omnibus test of normality for general use is the Shapiro-Wilk test. The Jarque-Bera test is the most widely adopted omnibus test of normality in econometrics and related fields; in particular, the Jarque-Bera test (based on the classical measures of skewness and kurtosis) is frequently used when one is more concerned about heavy-tailed alternatives. As these measures are based on moments of the data, the test has a zero breakdown value [2]; in other words, a single outlier can make the test worthless. The reason so many classical procedures are nonrobust to outliers is that the parameters of the model are expressed in terms of moments, and their classical estimators are expressed in terms of sample moments, which are very sensitive to outliers. Another approach to robustness is to concentrate on the parameters of interest suggested by the problem under study. Consequently, novel robust procedures for testing normality are presented in this paper to overcome the shortcomings of classical normality tests for financial data, which typically exhibit remote data points and additional types of deviations from
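To see why a single outlier dominates, the classical Jarque-Bera statistic can be computed directly from sample skewness and kurtosis; the short, generic Python sketch below (not the paper's robust procedure) shows one appended outlier inflating JB by orders of magnitude.

```python
import numpy as np

def jarque_bera(x):
    """Classical Jarque-Bera statistic JB = n/6 * (S^2 + (K-3)^2 / 4),
    built from sample skewness S and kurtosis K; asymptotically
    chi-squared with 2 degrees of freedom under normality."""
    x = np.asarray(x, float)
    n = len(x)
    z = x - x.mean()
    m2, m3, m4 = (z**2).mean(), (z**3).mean(), (z**4).mean()
    S, K = m3 / m2**1.5, m4 / m2**2
    return n / 6.0 * (S**2 + (K - 3.0)**2 / 4.0)

rng = np.random.default_rng(0)
sample = rng.normal(size=500)
print(jarque_bera(sample))                 # small under normality
print(jarque_bera(np.r_[sample, 15.0]))    # one outlier blows it up
```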
Prediction of Protein Thermostability by an Efficient Neural Network Approach
Directory of Open Access Journals (Sweden)
Jalal Rezaeenour
2016-10-01
significantly improves the accuracy of ELM in the prediction of thermostable enzymes. ELM tends to require more neurons in the hidden layer than conventional tuning-based learning algorithms. To overcome this, the proposed approach uses a GA which optimizes the structure and the parameters of the ELM. In summary, optimization of ELM with GA results in an efficient prediction method; numerical experiments proved that our approach yields excellent results.
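The core ELM step being tuned is small enough to sketch: random hidden-layer weights, a nonlinear hidden activation, and output weights obtained in closed form via the pseudoinverse. The Python code below is a generic ELM; the GA wrapper, which would search over the hidden-layer size and random weights, is omitted.

```python
import numpy as np

def elm_train(X, y, n_hidden=100, seed=0):
    """Basic Extreme Learning Machine: random input weights, tanh hidden
    layer, output weights from the Moore-Penrose pseudoinverse
    (closed form, no backpropagation)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ y        # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# A GA would wrap elm_train, evolving n_hidden / weight seeds against a
# cross-validated accuracy objective.
```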
Di, K.; Liu, Y.; Liu, B.; Peng, M.
2012-07-01
Chang'E-1 (CE-1) and Chang'E-2 (CE-2) are the two lunar orbiters of China's lunar exploration program. Topographic mapping using CE-1 and CE-2 images is of great importance for scientific research as well as for preparation of the landing and surface operation of the Chang'E-3 lunar rover. In this research, we developed rigorous sensor models of the CE-1 and CE-2 CCD cameras based on the push-broom imaging principle with interior and exterior orientation parameters. Based on the rigorous sensor model, the 3D coordinates of a ground point in the lunar body-fixed (LBF) coordinate system can be calculated by space intersection from the image coordinates of conjugate points in stereo images, and the image coordinates can be calculated from 3D coordinates by back-projection. Due to uncertainties of the orbit and the camera, the back-projected image points are different from the measured points. In order to reduce these inconsistencies and improve precision, we proposed two methods to refine the rigorous sensor model: 1) refining the EOPs by correcting the attitude angle bias, 2) refining the interior orientation model by calibration of the relative position of the two linear CCD arrays. Experimental results show that the mean back-projection residuals of CE-1 images are reduced to better than 1/100 pixel by method 1, and the mean back-projection residuals of CE-2 images are reduced from over 20 pixels to 0.02 pixel by method 2. Consequently, high precision DEM (Digital Elevation Model) and DOM (Digital Ortho Map) products are automatically generated.
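The back-projection residual at the heart of the refinement can be illustrated with the collinearity equations. The Python sketch below uses a single-frame simplification (a push-broom sensor applies the same relation per image line, with line-dependent exterior orientation); all symbols are generic placeholders, not the CE-1/CE-2 calibration values.

```python
import numpy as np

def back_project(P, C, R, f):
    """Collinearity back-projection of ground point P (body-fixed frame)
    into image coordinates, given camera centre C, rotation R
    (world -> camera) and focal length f."""
    q = R @ (P - C)                       # point in camera frame
    return np.array([-f * q[0] / q[2], -f * q[1] / q[2]])

def residual(measured_xy, P, C, R, f):
    """Difference between a measured image point and its back-projection,
    the quantity minimized when refining EOPs / interior orientation."""
    return measured_xy - back_project(P, C, R, f)

R, C, f = np.eye(3), np.zeros(3), 0.01
print(back_project(np.array([100.0, 50.0, -1000.0]), C, R, f))
```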
Energy efficiency of urban transportation system in Xiamen, China. An integrated approach
International Nuclear Information System (INIS)
Meng, Fanxin; Liu, Gengyuan; Yang, Zhifeng; Casazza, Marco; Cui, Shenghui; Ulgiati, Sergio
2017-01-01
Highlights: • An integrated life cycle approach is used to study Urban Transport Metabolism (UTM). • A selection of different material, energy and environmental assessment methods is synergically applied. • The study is based on an accurate inventory of infrastructure, machinery and operative resource costs. • Results show that the different methods provide much-needed insight into different aspects of UTM. • The innovative Bus Rapid Transit system shows better resource and environmental performance than the Normal Bus Transit system. - Abstract: An integrated life cycle approach framework, including material flow analysis (MFA), Cumulative Energy Demand (CED), exergy analysis (EXA), Emergy Assessment (EMA), and emissions (EMI), has been constructed and applied to examine the energy efficiency of high-speed urban bus transportation systems compared to conventional bus transport in the city of Xiamen, Fujian province, China. This paper explores the consistency of the results achieved by means of several evaluation methods and the sustainability of innovation in urban public transportation systems. The case study dealt with in this paper is a Bus Rapid Transit (BRT) system compared to Normal Bus Transit (NBT). All the analyses have been performed on a common yearly database of natural resources, material, labor, energy and fuel input flows used in all life cycle phases (resource extraction, processing and manufacturing, use and end of life) of the infrastructure, the vehicles and the vehicle fuel. Cumulative energy, material and environmental support demands of transport are accounted for, and selected pressure indicators are compared to yield a comprehensive picture of the public transportation system. Results show that the BRT system has much better energy and environmental performance than the NBT, as indicated by the set of sustainability indicators calculated by means of our integrated approach. This is because of the higher efficiency of such
Efficient approach to obtain free energy gradient using QM/MM MD simulation
International Nuclear Information System (INIS)
Asada, Toshio; Koseki, Shiro; Ando, Kanta
2015-01-01
An efficient computational approach, denoted the charge and atom dipole response kernel (CDRK) model, is described for taking account of polarization effects of the quantum mechanical (QM) region, using the charge response and atom dipole response kernels for free energy gradient (FEG) calculations in the quantum mechanical/molecular mechanical (QM/MM) method. The CDRK model can reasonably reproduce the energies, and also the energy gradients of QM and MM atoms, obtained by expensive QM/MM calculations in a drastically reduced computational time. The model is applied to the acylation reaction in a hydrated trypsin-BPTI complex to optimize the reaction path on the free energy surface by means of FEG and the nudged elastic band (NEB) method
DNA isolation by Chelex-100: an efficient approach to consider in leptospirosis early stages
Directory of Open Access Journals (Sweden)
Angel Alberto Noda
2014-06-01
Full Text Available Objective: To compare leptospiral DNA extraction procedures applied to clinical samples for the early diagnosis of leptospirosis. Methods: Three DNA extraction procedures were applied for microbiological analysis; the results of the QIAmp DNA mini kit (QIAGEN, Germany), the CLART HPV kit (GENOMICA, Spain) and the Chelex-100 assay were compared concerning extraction efficiency, DNA purity and DNA suitability for amplification of pathogenic leptospires by specific polymerase chain reaction from artificially infected blood, plasma and serum. Results: The comparison of extraction methods highlighted the efficiency of Chelex-100 and the QIAmp DNA mini kit. Chelex-100 achieved the isolation of the highest concentration of leptospiral DNA from the culture and the spiked samples, with acceptable purity and without PCR inhibitors. Conclusions: The Chelex-100 assay is a rapid and effective approach for DNA isolation from clinical samples containing pathogenic leptospires, and it could be useful in the early diagnosis of leptospirosis.
International Nuclear Information System (INIS)
Zhang, Chuan; Romagnoli, Alessandro; Zhou, Li; Kraft, Markus
2017-01-01
Highlights: •An intelligent energy management system for Eco-Industrial Parks (EIPs) is proposed. •An explicit domain ontology for EIP energy management is designed. •The ontology-based approach can increase knowledge interoperability within an EIP. •The ontology-based approach can allow self-optimization without human intervention in an EIP. •The proposed system harbours huge potential in the future scenario of the Internet of Things. -- Abstract: An ontology-based approach for Eco-Industrial Park (EIP) knowledge management is proposed in this paper. The ontology designed in this study is a formalized conceptualization of an EIP. Based on such an ontological representation, a Knowledge-Based System (KBS) for EIP energy management named J-Park Simulator (JPS) is developed. By applying JPS to the solution of an EIP waste-heat utilization problem, the results of this study show that ontology is a powerful tool for knowledge management of complex systems such as EIPs. The ontology-based approach can increase knowledge interoperability between different companies in an EIP, and can also allow intelligent decision making using disparate data from remote databases, which implies the possibility of self-optimization without human intervention in the future scenario of the Internet of Things (IoT). It is shown through this study that a KBS can bridge the communication gaps between different companies in an EIP, and subsequently more potential Industrial Symbiosis (IS) links can be established to improve the overall energy efficiency of the whole EIP.
Energy Technology Data Exchange (ETDEWEB)
Shah, Nihar [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis and Environmental Impacts Division; Abhyankar, Nikit [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis and Environmental Impacts Division; Park, Won Young [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis and Environmental Impacts Division; Phadke, Amol [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis and Environmental Impacts Division; Diddi, Saurabh [Government of India, New Delhi (India). Bureau of Energy Efficiency; Ahuja, Deepanshu [Collaborative Labeling and Appliance Standards Program (CLASP), Washington, DC (United States); Mukherjee, P. K. [Collaborative Labeling and Appliance Standards Program (CLASP), Washington, DC (United States); Walia, Archana [Collaborative Labeling and Appliance Standards Program (CLASP), Washington, DC (United States)
2016-06-30
Improving the efficiency of air conditioners (ACs) typically involves improving the efficiency of components such as compressors, heat exchangers, expansion valves, refrigerants and fans. We estimate the incremental cost of improving the efficiency of room ACs based on the cost of improving the efficiency of their key components. Further, we estimate the retail price increase required to cover the cost of efficiency improvement, compare it with electricity bill savings, and calculate the payback period for consumers to recover the additional price of a more efficient AC. We assess several efficiency levels, two of which are summarized in the report. The finding that significant efficiency improvement is cost-effective from a consumer perspective is robust over a wide range of assumptions: if we assume a 50% higher incremental price compared with our baseline estimate, the payback period for the efficiency level of 3.5 ISEER is 1.1 years. Given the findings of this study, establishing more stringent minimum efficiency performance criteria (one-star level) should be evaluated rigorously, considering the significant benefits to consumers, energy security and the environment.
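The payback arithmetic itself is one line: incremental retail price divided by annual bill savings. The Python snippet below walks through it with purely hypothetical inputs; the report's actual cost and usage assumptions are not reproduced here.

```python
# Simple-payback sketch with illustrative numbers (not the report's inputs).
incremental_price = 4000.0   # extra retail price of the efficient AC, hypothetical
baseline_kwh      = 1500.0   # annual AC consumption, kWh, hypothetical
efficiency_gain   = 0.30     # 30% lower consumption, hypothetical
tariff            = 8.0      # electricity price per kWh, hypothetical

annual_savings = baseline_kwh * efficiency_gain * tariff
payback_years  = incremental_price / annual_savings
print(f"payback: {payback_years:.1f} years")   # ~1.1 years with these inputs
```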
A Modern Approach to the Efficient-Market Hypothesis
Gabriel Frahm
2013-01-01
Market efficiency at least requires the absence of weak arbitrage opportunities, but this is not sufficient to establish a situation where the market is sensitive, i.e., where it "fully reflects" or "rapidly adjusts to" some information flow including the evolution of asset prices. By contrast, No Weak Arbitrage together with market sensitivity is sufficient and necessary for a market to be informationally efficient.
Directory of Open Access Journals (Sweden)
Yuanjiang Huang
2014-01-01
Full Text Available The sensor nodes in Wireless Sensor Networks (WSNs) are prone to failures for many reasons, for example, running out of battery or deployment in harsh environments; therefore, WSNs are expected to be able to maintain network connectivity and tolerate a certain amount of node failures. By applying a fuzzy-logic approach to control the network topology, this paper aims at improving the network connectivity and fault-tolerant capability in response to node failures, while taking into account that the control approach has to be localized and energy efficient. Two fuzzy controllers are proposed in this paper: one is Learning-based Fuzzy-logic Topology Control (LFTC), whose fuzzy controller is learnt from a training data set; the other is Rules-based Fuzzy-logic Topology Control (RFTC), whose fuzzy controller is obtained by designing if-then rules and membership functions. Both LFTC and RFTC are localized and do not rely on location information. Comparing them with three other representative algorithms (LTRT, List-based, and NONE) through extensive simulations, our two proposed fuzzy controllers have been proved to be very energy efficient in achieving the desired node degree and improving network connectivity when sensor nodes run out of battery or are subject to random attacks.
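A rules-based controller of the RFTC kind reduces to membership functions plus a small rule base. The Python sketch below is a toy two-input (residual energy, node degree) controller with invented triangular memberships and two rules; the paper's actual rule base and breakpoints are not reproduced.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def power_adjust(energy, degree, target_degree=4):
    """Toy rules-based fuzzy controller: raise radio power when the node
    degree is low and energy allows, lower it otherwise. Breakpoints are
    illustrative only; energy is normalized to [0, 1]."""
    low_deg  = tri(degree, 0, 0, target_degree)
    high_deg = tri(degree, target_degree, 2 * target_degree, 2 * target_degree)
    low_en   = tri(energy, 0.0, 0.0, 0.5)
    high_en  = tri(energy, 0.3, 1.0, 1.0)
    increase = min(low_deg, high_en)           # rule 1: low degree & energy ok
    decrease = max(high_deg, low_en)           # rule 2: dense or nearly drained
    return increase - decrease                 # >0: raise power, <0: lower

print(power_adjust(energy=0.8, degree=2))
```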
Stella, L.; Lorenz, C. D.; Kantorovich, L.
2014-04-01
The generalized Langevin equation (GLE) has been recently suggested to simulate the time evolution of classical solid and molecular systems when considering general nonequilibrium processes. In this approach, a part of the whole system (an open system), which interacts and exchanges energy with its dissipative environment, is studied. Because the GLE is derived by projecting out exactly the harmonic environment, the coupling to it is realistic, while the equations of motion are non-Markovian. Although the GLE formalism has already found promising applications, e.g., in nanotribology and as a powerful thermostat for equilibration in classical molecular dynamics simulations, efficient algorithms to solve the GLE for realistic memory kernels are highly nontrivial, especially if the memory kernels decay nonexponentially. This is due to the fact that one has to generate a colored noise and take account of the memory effects in a consistent manner. In this paper, we present a simple, yet efficient, algorithm for solving the GLE for practical memory kernels and we demonstrate its capability for the exactly solvable case of a harmonic oscillator coupled to a Debye bath.
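For a kernel that does decay exponentially, the GLE can be made Markovian with one auxiliary variable, which is a common building block for the more general colored-noise schemes discussed above. Below is a minimal Euler-Maruyama sketch in Python, assuming K(t) = (gamma/tau) exp(-t/tau) and a harmonic force; a nonexponential kernel is exactly what makes this simple embedding insufficient.

```python
import numpy as np

rng = np.random.default_rng(0)

# GLE for a harmonic oscillator with exponential memory kernel
# K(t) = (gamma/tau) * exp(-t/tau), via the standard Markovian embedding:
# one auxiliary variable z replaces the memory integral plus colored noise.
m, k, gamma, tau, kT, dt = 1.0, 1.0, 0.5, 2.0, 1.0, 1e-3
x, v, z = 1.0, 0.0, 0.0
traj = []
for step in range(200_000):
    F = -k * x                                    # conservative force
    xi = rng.normal()
    # fluctuation-dissipation fixes the noise amplitude sqrt(2*gamma*kT)/tau
    z += (-z / tau - (gamma / tau) * v) * dt \
         + np.sqrt(2 * gamma * kT) / tau * np.sqrt(dt) * xi
    v += (F + z) / m * dt
    x += v * dt
    traj.append(x)

# long-time average of 0.5*k*<x^2> should approach 0.5*kT (equipartition)
print(0.5 * k * np.mean(np.square(traj[50_000:])))
```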
The Researchers' View of Scientific Rigor-Survey on the Conduct and Reporting of In Vivo Research.
Reichlin, Thomas S; Vogt, Lucile; Würbel, Hanno
2016-01-01
Reproducibility in animal research is alarmingly low, and a lack of scientific rigor has been proposed as a major cause. Systematic reviews found low reporting rates of measures against risks of bias (e.g., randomization, blinding), and a correlation between low reporting rates and overstated treatment effects. Reporting rates of measures against bias are thus used as a proxy measure for scientific rigor, and reporting guidelines (e.g., ARRIVE) have become a major weapon in the fight against risks of bias in animal research. Surprisingly, animal scientists have never been asked about their use of measures against risks of bias and how they report these in publications. Whether poor reporting reflects poor use of such measures, and whether reporting guidelines may effectively reduce risks of bias has therefore remained elusive. To address these questions, we asked in vivo researchers about their use and reporting of measures against risks of bias and examined how self-reports relate to reporting rates obtained through systematic reviews. An online survey was sent out to all registered in vivo researchers in Switzerland (N = 1891) and was complemented by personal interviews with five representative in vivo researchers to facilitate interpretation of the survey results. Return rate was 28% (N = 530), of which 302 participants (16%) returned fully completed questionnaires that were used for further analysis. According to the researchers' self-report, they use measures against risks of bias to a much greater extent than suggested by reporting rates obtained through systematic reviews. However, the researchers' self-reports are likely biased to some extent. Thus, although they claimed to be reporting measures against risks of bias at much lower rates than they claimed to be using these measures, the self-reported reporting rates were considerably higher than reporting rates found by systematic reviews. Furthermore, participants performed rather poorly when asked to
Structural priority approach to fluid-structure interaction problems
International Nuclear Information System (INIS)
Au-Yang, M.K.; Galford, J.E.
1981-01-01
In a large class of dynamic problems occurring in nuclear reactor safety analysis, the forcing function is derived from the fluid enclosed within the structure itself. Since the structural displacement depends on the fluid pressure, which in turn depends on the structural boundaries, a rigorous approach to this class of problems involves simultaneous solution of the coupled fluid mechanics and structural dynamics equations with the structural response and the fluid pressure as unknowns. This paper offers an alternate approach to the foregoing problems. 8 refs
Elements of a function analytic approach to probability.
Energy Technology Data Exchange (ETDEWEB)
Ghanem, Roger Georges (University of Southern California, Los Angeles, CA); Red-Horse, John Robert
2008-02-01
We first provide a detailed motivation for using probability theory as a mathematical context in which to analyze engineering and scientific systems that possess uncertainties. We then present introductory notes on the function analytic approach to probabilistic analysis, emphasizing the connections to various classical deterministic mathematical analysis elements. Lastly, we describe how to use the approach as a means to augment deterministic analysis methods in a particular Hilbert space context, and thus enable a rigorous framework for commingling deterministic and probabilistic analysis tools in an application setting.
Cotar, Codina; Friesecke, Gero; Klüppelberg, Claudia
2018-06-01
We prove rigorously that the exact N-electron Hohenberg-Kohn density functional converges in the strongly interacting limit to the strictly correlated electrons (SCE) functional, and that the absolute value squared of the associated constrained-search wavefunction tends weakly, in the sense of probability measures, to a minimizer of the multi-marginal optimal transport problem with Coulomb cost associated to the SCE functional. This extends our previous work for N = 2 (Cotar et al. in Commun Pure Appl Math 66:548-599, 2013). The correct limit problem has been derived in the physics literature by Seidl (Phys Rev A 60:4387-4395, 1999) and by Seidl, Gori-Giorgi and Savin (Phys Rev A 75:042511, 2007); in these papers the lack of a rigorous proof was pointed out. We also give a mathematical counterexample to this type of result, by replacing the constraint of given one-body density (an infinite-dimensional quadratic expression in the wavefunction) by an infinite-dimensional quadratic expression in the wavefunction and its gradient. Connections with the Lavrentiev phenomenon in the calculus of variations are indicated.
Gatica, M C; Monti, G E; Knowles, T G; Gallo, C B
2010-01-09
Two systems for transporting live salmon (Salmo salar) were compared in terms of their effects on blood variables, muscle pH and rigor index: an 'open system' well-boat with recirculated sea water at 13.5 degrees C and a stocking density of 107 kg/m3 during an eight-hour journey, and a 'closed system' well-boat with water chilled from 16.7 to 2.1 degrees C and a stocking density of 243.7 kg/m3 during a seven-hour journey. Groups of 10 fish were sampled at each of four stages: in cages at the farm, in the well-boat after loading, in the well-boat after the journey and before unloading, and in the processing plant after they were pumped from the resting cages. At each sampling, the fish were stunned and bled by gill cutting. Blood samples were taken to measure lactate, osmolality, chloride, sodium, cortisol and glucose, and their muscle pH and rigor index were measured at death and three hours later. In the open system well-boat, the initial muscle pH of the fish decreased at each successive stage, and at the final stage they had a significantly lower initial muscle pH and more rapid onset of rigor than the fish transported on the closed system well-boat. At the final stage all the blood variables except glucose were significantly affected in the fish transported on both types of well-boat.
An optimized electroporation approach for efficient CRISPR/Cas9 genome editing in murine zygotes.
Directory of Open Access Journals (Sweden)
Simon E Tröder
Full Text Available Electroporation of zygotes represents a rapid alternative to the elaborate pronuclear injection procedure for CRISPR/Cas9-mediated genome editing in mice. However, current protocols for electroporation either require investment in specialized electroporators or corrosive pre-treatment of zygotes, which compromises embryo viability. Here, we describe an easily adaptable approach for the introduction of specific mutations in C57BL/6 mice by electroporation of intact zygotes using a common electroporator with synthetic CRISPR/Cas9 components and minimal technical requirements. Direct comparison to conventional pronuclear injection demonstrates significantly reduced physical damage and thus improved embryo development, with successful genome editing in up to 100% of living offspring. Hence, our novel approach for Easy Electroporation of Zygotes (EEZy) allows highly efficient generation of CRISPR/Cas9 transgenic mice while reducing the numbers of animals required.
DEA-Risk Efficiency and Stochastic Dominance Efficiency of Stock Indices
Martin Branda; Miloš Kopa
2012-01-01
In this article, the authors deal with the efficiency of world stock indices. Basically, they compare three approaches: mean-risk, data envelopment analysis (DEA), and stochastic dominance (SD) efficiency. In the DEA methodology, efficiency is defined as a weighted sum of outputs compared to a weighted sum of inputs when optimal weights are used. In DEA-risk efficiency, several risk measures and functionals which quantify the risk of the indices (variance, VaR, CVaR, etc.) are used as DEA inputs. ...
Karamintziou, Sofia D; Custódio, Ana Luísa; Piallat, Brigitte; Polosan, Mircea; Chabardès, Stéphan; Stathis, Pantelis G; Tagaris, George A; Sakas, Damianos E; Polychronaki, Georgia E; Tsirogiannis, George L; David, Olivier; Nikita, Konstantina S
2017-01-01
Advances in the field of closed-loop neuromodulation call for analysis and modeling approaches capable of confronting challenges related to the complex neuronal response to stimulation and the presence of strong internal and measurement noise in neural recordings. Here we elaborate on the algorithmic aspects of a noise-resistant closed-loop subthalamic nucleus deep brain stimulation system for advanced Parkinson's disease and treatment-refractory obsessive-compulsive disorder, ensuring remarkable performance in terms of both efficiency and selectivity of stimulation, as well as in terms of computational speed. First, we propose an efficient method drawn from dynamical systems theory, for the reliable assessment of significant nonlinear coupling between beta and high-frequency subthalamic neuronal activity, as a biomarker for feedback control. Further, we present a model-based strategy through which optimal parameters of stimulation for minimum energy desynchronizing control of neuronal activity are being identified. The strategy integrates stochastic modeling and derivative-free optimization of neural dynamics based on quadratic modeling. On the basis of numerical simulations, we demonstrate the potential of the presented modeling approach to identify, at a relatively low computational cost, stimulation settings potentially associated with a significantly higher degree of efficiency and selectivity compared with stimulation settings determined post-operatively. Our data reinforce the hypothesis that model-based control strategies are crucial for the design of novel stimulation protocols at the backstage of clinical applications.
Flagellum motion in 2-D: Work rate and efficiency of the non-sinusoidal approach
Viridi, Sparisoma; Nuraini, Nuning; Stephanie, Monica; Rifqi, Ainur; Christina, Dina; Thania, Elsa; Sihite, Erland
2018-03-01
Today microorganisms are widely used to support human life; examples include foodstuffs (Spirulina sp.), medical applications, and mining. At the same time, the development of technology has a very large influence on human life, and the combination of technology and health science will be very useful if we can develop it. One example is cancer treatment that utilizes the movement of flagella to build a nanorobot serving as a carrier of cancer drugs. The motion of flagella, which resembles arcs and straight lines, can be formulated and then applied to the design of the nanorobot tail. The nanorobot would then carry a cancer drug directly to the cancer cells, so such a nanorobot could hopefully minimize the death of healthy cells around the cancer cells. From the analysis of flagellar movement, it can be concluded that the smaller the mass of the flagellum, the greater the efficiency, so the energy needed by the nanorobot will be smaller. A model with the non-sinusoidal approach (Brokaw, 1965) is discussed in this work, and a formulation to obtain the energy efficiency is proposed and analyzed. Unfortunately, there is a negative value in the formulation.
A robust probabilistic approach for variational inversion in shallow water acoustic tomography
International Nuclear Information System (INIS)
Berrada, M; Badran, F; Crépon, M; Thiria, S; Hermand, J-P
2009-01-01
This paper presents a variational methodology for inverting shallow water acoustic tomography (SWAT) measurements. The aim is to determine the vertical sound-speed profile c(z), knowing the acoustic pressures generated by a frequency source and collected by a sparse vertical hydrophone array (VRA). A variational approach that minimizes a cost function measuring the distance between the observations and their modeled equivalents is used, with a regularization term added in the form of a quadratic restoring term to a background. To avoid inverting the variance–covariance matrix associated with the above weighted quadratic background, this work proposes to model the sound-speed vector using probabilistic principal component analysis (PPCA). The PPCA introduces an optimal reduced number of uncorrelated latent variables η, which determine a new control vector and a new regularization term, expressed as η^T η. The PPCA represents a rigorous formalism for the use of a priori information and allows an efficient implementation of the variational inverse method
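The reparameterization can be sketched with ordinary linear algebra: build a low-rank basis from an ensemble of profiles, write c = c_bg + U_k η, and regularize with η·η. The Python sketch below uses a synthetic ensemble and plain SVD-based EOFs as a stand-in for the full PPCA (which additionally estimates an isotropic noise variance).

```python
import numpy as np

rng = np.random.default_rng(0)
depth = np.linspace(0, 100, 50)
# Synthetic ensemble of sound-speed profiles; real priors come from
# historical or climatological ocean data.
ensemble = 1500 + 2 * np.sin(depth[None, :] / 30.0) + rng.normal(0, 0.5, (200, 50))

c_bg = ensemble.mean(axis=0)
A = ensemble - c_bg
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 3                                                    # retained latent variables
modes = Vt[:k].T * (s[:k] / np.sqrt(len(ensemble) - 1))  # scaled EOF modes

def profile(eta):
    """Sound-speed profile generated by k uncorrelated latent variables."""
    return c_bg + modes @ eta

def cost(eta, data_misfit):
    """Variational cost: acoustic misfit plus the simple prior eta.T @ eta."""
    return data_misfit(profile(eta)) + eta @ eta
```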
Efficiency analysis of Chinese industry: A directional distance function approach
International Nuclear Information System (INIS)
Watanabe, Michio; Tanaka, Katsuya
2007-01-01
Two efficiency measures of Chinese industry were estimated at the provincial level from 1994 to 2002, using a directional output distance function. One is a traditional efficiency measure that considers only desirable output, while the other considers both desirable and undesirable outputs simultaneously. A comparison of the two measures revealed that efficiency levels are biased only if desirable output is considered. Five coastal provinces/municipalities that have attracted a large amount of foreign direct investment are found to be the most efficient when only desirable output is considered, and also when both desirable and undesirable outputs are considered. However, omitting undesirable output tends to lead to an overestimate of industrial efficiency levels in Shandong, Sichuan, and Hebei provinces. We also found that a province's industrial structure has significant effects on its efficiency levels
Series-Tuned High Efficiency RF-Power Amplifiers
DEFF Research Database (Denmark)
Vidkjær, Jens
2008-01-01
An approach to high-efficiency RF power amplifier design is presented. It simultaneously addresses efficiency optimization and peak voltage limitations when transistors are pushed towards their power limits.
Kruskal, Jonathan B; Reedy, Allen; Pascal, Laurie; Rosen, Max P; Boiselle, Phillip M
2012-01-01
Many hospital radiology departments are adopting "lean" methods developed in automobile manufacturing to improve operational efficiency, eliminate waste, and optimize the value of their services. The lean approach, which emphasizes process analysis, has particular relevance to radiology departments, which depend on a smooth flow of patients and uninterrupted equipment function for efficient operation. However, the application of lean methods to isolated problems is not likely to improve overall efficiency or to produce a sustained improvement. Instead, the authors recommend a gradual but continuous and comprehensive "lean transformation" of work philosophy and workplace culture. Fundamental principles that must consistently be put into action to achieve such a transformation include equal involvement of and equal respect for all staff members, elimination of waste, standardization of work processes, improvement of flow in all processes, use of visual cues to communicate and inform, and use of specific tools to perform targeted data collection and analysis and to implement and guide change. Many categories of lean tools are available to facilitate these tasks: value stream mapping for visualizing the current state of a process and identifying activities that add no value; root cause analysis for determining the fundamental cause of a problem; team charters for planning, guiding, and communicating about change in a specific process; management dashboards for monitoring real-time developments; and a balanced scorecard for strategic oversight and planning in the areas of finance, customer service, internal operations, and staff development. © RSNA, 2012.
Direct integration of the S-matrix applied to rigorous diffraction
International Nuclear Information System (INIS)
Iff, W; Lindlein, N; Tishchenko, A V
2014-01-01
A novel Fourier method for rigorous diffraction computation at periodic structures is presented. The procedure is based on a differential equation for the S-matrix, which allows direct integration of the S-matrix blocks. This results in a new method in Fourier space, which can be considered as a numerically stable and well-parallelizable alternative to the conventional differential method based on T-matrix integration and subsequent conversion from the T-matrices to S-matrix blocks. Integration of the novel differential equation in an implicit manner is expounded. The applicability of the new method is shown on the basis of 1D periodic structures; it is clear, however, that the new technique can also be applied to arbitrary 2D periodic or periodized structures. The complexity of the new method is O(N^3), similar to the conventional differential method, with N being the number of diffraction orders. (fast track communication)
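For orientation, the conventional numerically stable way to combine S-matrix blocks of adjacent layers is the Redheffer star product; the Python sketch below shows that standard operation (under a common (S11, S12, S21, S22) block convention), not the paper's direct integration scheme.

```python
import numpy as np

def redheffer_star(SA, SB):
    """Combine two layer scattering matrices via the Redheffer star product,
    the stable alternative to multiplying T-matrices. Each argument is a
    tuple of N x N blocks (S11, S12, S21, S22)."""
    A11, A12, A21, A22 = SA
    B11, B12, B21, B22 = SB
    I = np.eye(A11.shape[0])
    F = np.linalg.inv(I - B11 @ A22)
    G = np.linalg.inv(I - A22 @ B11)
    return (A11 + A12 @ F @ B11 @ A21,   # combined S11
            A12 @ F @ B12,               # combined S12
            B21 @ G @ A21,               # combined S21
            B22 + B21 @ G @ A22 @ B12)   # combined S22
```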
Set-Theoretic Approach to Maturity Models
DEFF Research Database (Denmark)
Lasrado, Lester Allan
Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for lack of theoretical grounding, methodological rigor, empirical validation, and ignorance of multiple and non-linear paths to maturity. This PhD thesis focuses on addressing these criticisms by incorporating recent developments in configuration theory, in particular the application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and to empirically demonstrate equifinal paths to maturity. Specifically, the thesis offers methodological guidelines consisting of detailed procedures to systematically apply set-theoretic approaches for maturity model research and provides demonstrations of their application on three datasets. The thesis is a collection of six research papers written in a sequential manner. The first paper
van der Voort, Mariska; Van Meensel, Jef; Lauwers, Ludwig; Vercruysse, Jozef; Van Huylenbroeck, Guido; Charlier, Johannes
2014-01-01
The impact of gastrointestinal (GI) nematode infections in dairy farming has traditionally been assessed using partial productivity indicators, but such approaches ignore the impact of infection on the performance of the whole farm. In this study, efficiency analysis was used to study the association of the GI nematode Ostertagia ostertagi with the technical efficiency of dairy farms. Five years of accountancy data were linked to GI nematode infection data gained from a longitudinal parasitic monitoring campaign. The level of exposure to GI nematodes was based on bulk-tank milk ELISA tests, which measure antibodies to O. ostertagi, and was expressed as an optical density ratio (ODR). Two unbalanced data panels were created for the period 2006 to 2010. The first data panel contained 198 observations from the Belgian Farm Accountancy Data Network (Brussels, Belgium) and the second contained 622 observations from the Boerenbond Flemish farmers' union (Leuven, Belgium) accountancy system (Tiber Farm Accounting System). We used the stochastic frontier analysis approach and defined inefficiency effect models specified with the Cobb-Douglas and transcendental logarithmic (Translog) functional forms. To assess the efficiency scores, milk production was considered as the main output variable. Six input variables were used: concentrates, roughage, pasture, number of dairy cows, animal health costs, and labor. The ODR of each individual farm served as an explanatory variable of inefficiency. An increase in the level of exposure to GI nematodes was associated with a decrease in technical efficiency. Exposure to GI nematodes constrains the productivity of pasture, health, and labor but does not cause inefficiency in the use of concentrates, roughage, and dairy cows. Lowering the level of infection in the interquartile range (0.271 ODR) was associated with an average milk production increase of 27, 19, and 9 L/cow per year for Farm Accountancy Data Network farms and 63, 49, and
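Schematically, an inefficiency-effects frontier of the kind described (here in Cobb-Douglas form; symbols generic, not the authors' exact specification) reads

    \ln y_i = \beta_0 + \sum_{k=1}^{6} \beta_k \ln x_{ik} + v_i - u_i,
    \qquad v_i \sim N(0, \sigma_v^2), \quad u_i \ge 0,

where y_i is milk output, the x_{ik} are the six inputs, v_i is statistical noise, and the nonnegative inefficiency term is driven by exposure, e.g. u_i = \delta_0 + \delta_1 \mathrm{ODR}_i + w_i.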
Holladay, Jon; Day, Greg; Roberts, Barry; Leahy, Frank
2003-01-01
The efficiency of reusable aerospace systems requires a focus on the total operations process rather than just orbital performance. For the Multi-Purpose Logistics Module (MPLM), this activity included special attention to terrestrial conditions both pre-launch and post-landing and how they interrelate with the mission profile. Several of the efficiencies implemented for MPLM mission engineering were NASA firsts, and all served to improve the overall operations activities. This paper provides an explanation of how various issues were addressed and the resulting solutions. Topics range from statistical analysis of over 30 years of atmospheric data at the launch and landing sites to a new approach for operations with the Shuttle Carrier Aircraft. In each situation the goal was to "tune" the thermal management of the overall flight system to minimize requirement risk while optimizing power and energy performance.
Rigorous derivation of porous-media phase-field equations
Schmuck, Markus; Kalliadasis, Serafim
2017-11-01
The evolution of interfaces in Complex heterogeneous Multiphase Systems (CheMSs) plays a fundamental role in a wide range of scientific fields such as thermodynamic modelling of phase transitions, materials science, or as a computational tool for interfacial flow studies or material design. Here, we focus on phase-field equations in CheMSs such as porous media. To the best of our knowledge, we present the first rigorous derivation of error estimates for fourth-order, upscaled, and nonlinear evolution equations. For CheMSs with heterogeneity ε, we obtain the convergence rate ε^{1/4}, which governs the error between the solution of the new upscaled formulation and the solution of the microscopic phase-field problem. This error behaviour has recently been validated computationally. Due to the wide range of application of phase-field equations, we expect this upscaled formulation to allow for new modelling, analytic, and computational perspectives for interfacial transport and phase transformations in CheMSs. This work was supported by EPSRC, UK, through Grant Nos. EP/H034587/1, EP/L027186/1, EP/L025159/1, EP/L020564/1, EP/K008595/1, and EP/P011713/1 and from ERC via Advanced Grant No. 247031.
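In schematic form (notation generic; the paper states the precise norms and assumptions), the quoted result bounds the homogenization error as

    \| u_\varepsilon - u_0 \| \le C \, \varepsilon^{1/4},

where u_ε solves the microscopic phase-field problem, u_0 the upscaled equation, and the constant C is independent of the heterogeneity ε.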
An efficient multi-resolution GA approach to dental image alignment
Nassar, Diaa Eldin; Ogirala, Mythili; Adjeroh, Donald; Ammar, Hany
2006-02-01
Automating the process of postmortem identification of individuals using dental records is receiving increased attention in forensic science, especially with the large volume of victims encountered in mass disasters. Dental radiograph alignment is a key step required for automating the dental identification process. In this paper, we address the problem of dental radiograph alignment using a Multi-Resolution Genetic Algorithm (MR-GA) approach. We use location and orientation information of edge points as features; we assume that affine transformations suffice to restore geometric discrepancies between two images of a tooth; we efficiently search the 6D space of affine parameters using GA progressively across multi-resolution image versions; and we use a Hausdorff distance measure to compute the similarity between a reference tooth and a query tooth subject to a possible alignment transform. Testing results based on 52 tooth-pair images suggest that our algorithm converges to reasonable solutions in more than 85% of the test cases, with most of the error in the remaining cases due to excessive misalignments.
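The similarity measure at the core of the fitness evaluation can be sketched as follows (a minimal symmetric Hausdorff distance in Python/NumPy; inside the GA the query edge points would first be mapped through the candidate affine transform, and the negated distance used as fitness):

    import numpy as np

    def hausdorff(A, B):
        """Symmetric Hausdorff distance between two edge-point sets,
        given as (n, 2) and (m, 2) arrays of (x, y) coordinates."""
        D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
        h_ab = D.min(axis=1).max()   # sup over A of dist to nearest B
        h_ba = D.min(axis=0).max()   # sup over B of dist to nearest A
        return max(h_ab, h_ba)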
Directory of Open Access Journals (Sweden)
Raéf Bahrini
2017-02-01
Full Text Available This paper measures and analyzes the technical efficiency of Islamic banks in the Middle East and North Africa (MENA) region during the period 2007–2012. To do this, the bootstrap Data Envelopment Analysis (DEA) approach was employed in order to provide a robust estimation of the overall technical efficiency and its components: pure technical efficiency and scale efficiency in the case of MENA Islamic banks. The main results show that over the period of study, pure technical inefficiency was the main source of overall technical inefficiency instead of scale inefficiency. This finding was confirmed for all MENA Islamic banks as well as for the two subsamples: Gulf Cooperation Council (GCC) and non-GCC Islamic banks. Furthermore, our results show that GCC Islamic banks had stable efficiency scores during the global financial crisis (2007–2008) and in the early post-crisis period (2009–2010). However, a decline in overall technical efficiency of all panels of MENA Islamic banks was recorded in the last two years of the study period (2011–2012). Thus, we recommend that MENA Islamic bank managers focus more on improving their management practices rather than increasing their sizes. We also recommend that financial authorities in MENA countries implement several regulatory and financial measures in order to ensure the development of MENA Islamic banking.
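The building block that bootstrap DEA resamples is an ordinary DEA efficiency score, obtained for each bank from a linear program. A minimal input-oriented, constant-returns (CCR) sketch in Python/SciPy (illustrative only; the paper's bootstrap adds resampling and bias correction on top of such scores):

    import numpy as np
    from scipy.optimize import linprog

    def ccr_efficiency(X, Y, j0):
        """Input-oriented CCR score of unit j0; X: (n, m) inputs,
        Y: (n, s) outputs. min theta s.t. X' lam <= theta * x0,
        Y' lam >= y0, lam >= 0; decision vector = [theta, lam]."""
        n, m = X.shape
        s = Y.shape[1]
        c = np.zeros(n + 1); c[0] = 1.0
        A_in = np.hstack([-X[j0].reshape(m, 1), X.T])    # input rows
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])      # output rows
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([np.zeros(m), -Y[j0]]),
                      bounds=[(None, None)] + [(0, None)] * n)
        return res.x[0]

    X = np.array([[2.0, 1.0], [4.0, 3.0], [3.0, 2.0]])   # toy inputs
    Y = np.array([[1.0], [1.8], [1.5]])                  # toy output
    print([round(ccr_efficiency(X, Y, j), 3) for j in range(3)])
    # -> [1.0, 0.9, 1.0]: the second unit is dominated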
Measuring highway efficiency: A DEA approach and the Malmquist index
Sarmento, Joaquim Miranda; Renneboog, Luc; Verga-Matos, Pedro
A growing concern exists regarding the efficiency of public resources spent on transport infrastructure. In this paper, we measure the efficiency of seven highway projects in Portugal over the past decade by means of a data envelopment analysis and the Malmquist productivity and efficiency indices.
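The Malmquist index mentioned above is conventionally defined, for input-output bundles (x^t, y^t) and distance functions D^t measured against the period-t frontier, as

    M = \left[ \frac{D^t(x^{t+1}, y^{t+1})}{D^t(x^t, y^t)} \cdot
               \frac{D^{t+1}(x^{t+1}, y^{t+1})}{D^{t+1}(x^t, y^t)} \right]^{1/2},

with M > 1 indicating productivity growth; this is the textbook form, not necessarily the exact variant used in the paper.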
Analyzing public health policy: three approaches.
Coveney, John
2010-07-01
Policy is an important feature of public and private organizations. Within the field of health as a policy arena, public health has emerged in which policy is vital to decision making and the deployment of resources. Public health practitioners and students need to be able to analyze public health policy, yet many feel daunted by the subject's complexity. This article discusses three approaches that simplify policy analysis: Bacchi's "What's the problem?" approach examines the way that policy represents problems. Colebatch's governmentality approach provides a way of analyzing the implementation of policy. Bridgman and Davis's policy cycle allows for an appraisal of public policy development. Each approach provides an analytical framework from which to rigorously study policy. Practitioners and students of public health gain much in engaging with the politicized nature of policy, and a simple approach to policy analysis can greatly assist one's understanding and involvement in policy work.
Vereecken, Luc; Peeters, Jozef
2003-09-01
The rigorous implementation of transition state theory (TST) for a reaction system with multiple reactant rotamers and multiple transition state conformers is discussed by way of a statistical rate analysis of the 1,5-H-shift in 1-butoxy radicals, a prototype reaction for the important class of H-shift reactions in atmospheric chemistry. Several approaches for deriving a multirotamer TST expression are treated: oscillator versus (hindered) internal rotor models; distinguishable versus indistinguishable atoms; and direct count methods versus degeneracy factors calculated by (simplified) direct count methods or from symmetry numbers and numbers of enantiomers, where applicable. It is shown that the various treatments are fully consistent, even if the TST expressions themselves appear different. The 1-butoxy H-shift reaction is characterized quantum chemically using B3LYP-DFT; the performance of this level of theory is compared to other methods. Rigorous application of the multirotamer TST methodology in a harmonic oscillator approximation based on these data yields a rate coefficient of k(298 K, 1 atm) = 1.4×10^5 s^-1, and an Arrhenius expression k(T, 1 atm) = 1.43×10^11 exp(−8.17 kcal mol^-1/RT) s^-1, both of which closely match the experimental recommendations in the literature. The T-dependence is substantially influenced by the multirotamer treatment, as well as by the tunneling and fall-off corrections. The present results are compared to those of simplified TST calculations based solely on the properties of the lowest-energy 1-butoxy rotamer.
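As a quick consistency check, the quoted Arrhenius expression can be evaluated at 298 K in a few lines of Python (constants exactly as stated in the abstract):

    import math

    R = 1.987e-3                  # gas constant, kcal mol^-1 K^-1
    A, Ea = 1.43e11, 8.17         # prefactor (s^-1), barrier (kcal mol^-1)

    k_298 = A * math.exp(-Ea / (R * 298.0))
    print(f"k(298 K) = {k_298:.2e} s^-1")
    # ~1.5e5 s^-1, consistent with the reported 1.4e5 s^-1
    # (rounding of the prefactor and barrier accounts for the difference)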
Rigorous upper bounds for transport due to passive advection by inhomogeneous turbulence
International Nuclear Information System (INIS)
Krommes, J.A.; Smith, R.A.
1987-05-01
A variational procedure, due originally to Howard and explored by Busse and others for self-consistent turbulence problems, is employed to determine rigorous upper bounds for the advection of a passive scalar through an inhomogeneous turbulent slab with arbitrary generalized Reynolds number R and Kubo number K. In the basic version of the method, the steady-state energy balance is used as a constraint; the resulting bound, though rigorous, is independent of K. A pedagogical reference model (one dimension, K = ∞) is described in detail; the bound compares favorably with the exact solution. The direct-interaction approximation is also worked out for this model; it is somewhat more accurate than the bound, but requires considerably more labor to solve. For the basic bound, a general formalism is presented for several dimensions, finite correlation length, and reasonably general boundary conditions. Part of the general method, in which a Green's function technique is employed, applies to self-consistent as well as to passive problems, and thereby generalizes previous results in the fluid literature. The formalism is extended for the first time to include time-dependent constraints, and a bound is deduced which explicitly depends on K and has the correct physical scalings in all regimes of R and K. Two applications from the theory of turbulent plasmas are described: flux in velocity space, and test particle transport in stochastic magnetic fields. For the velocity space problem the simplest bound reproduces Dupree's original scaling for the strong turbulence diffusion coefficient. For the case of stochastic magnetic fields, the scaling of the bounds is described for the magnetic diffusion coefficient as well as for the particle diffusion coefficient in the so-called collisionless, fluid, and double-streaming regimes.
Provencher, Steeve; Archer, Stephen L; Ramirez, F Daniel; Hibbert, Benjamin; Paulin, Roxane; Boucherat, Olivier; Lacasse, Yves; Bonnet, Sébastien
2018-03-30
Despite advances in our understanding of the pathophysiology and the management of pulmonary arterial hypertension (PAH), significant therapeutic gaps remain for this devastating disease. Yet, few innovative therapies beyond the traditional pathways of endothelial dysfunction have reached clinical trial phases in PAH. Although there are inherent limitations of the currently available models of PAH, the leaky pipeline of innovative therapies relates, in part, to flawed preclinical research methodology, including lack of rigour in trial design, incomplete invasive hemodynamic assessment, and lack of careful translational studies that replicate randomized controlled trials in humans with attention to adverse effects and benefits. Rigorous methodology should include the use of prespecified eligibility criteria, sample sizes that permit valid statistical analysis, randomization, blinded assessment of standardized outcomes, and transparent reporting of results. Better design and implementation of preclinical studies can minimize inherent flaws in the models of PAH, reduce the risk of bias, and enhance external validity and our ability to distinguish truly promising therapies from many false-positive or overstated leads. Ideally, preclinical studies should use advanced imaging, study several preclinical pulmonary hypertension models, or correlate rodent and human findings and consider the fate of the right ventricle, which is the major determinant of prognosis in human PAH. Although these principles are widely endorsed, empirical evidence suggests that such rigor is often lacking in pulmonary hypertension preclinical research. The present article discusses the pitfalls in the design of preclinical pulmonary hypertension trials and discusses opportunities to create preclinical trials with improved predictive value in guiding early-phase drug development in patients with PAH, which will need support not only from researchers, peer reviewers, and editors but also from
Zhang, Panpan; Li, Jing; Lv, Lingxiao; Zhao, Yang; Qu, Liangti
2017-05-23
Efficient utilization of solar energy for clean water is an attractive, renewable, and environmentally friendly way to address the long-standing water crisis. For this task, we prepared a long-range vertically aligned graphene sheet membrane (VA-GSM) as a highly efficient solar thermal converter for the generation of clean water. The VA-GSM was prepared by the antifreeze-assisted freezing technique we developed, which possessed run-through channels facilitating water transport, high light absorption capacity for excellent photothermal transduction, and extraordinary stability under rigorous conditions. As a result, VA-GSM achieved average water evaporation rates of 1.62 and 6.25 kg m^-2 h^-1 under 1 and 4 sun illumination, with a superb solar thermal conversion efficiency of up to 86.5% and 94.2%, respectively, better than that of most carbon materials reported previously, and can efficiently produce clean water from seawater, common wastewater, and even concentrated acid and/or alkali solutions.
Balliu, Brunilda; Tsonaka, Roula; Boehringer, Stefan; Houwing-Duistermaat, Jeanine
2015-03-01
Integrative omics, the joint analysis of outcome and multiple types of omics data, such as genomics, epigenomics, and transcriptomics data, constitutes a promising approach for powerful and biologically relevant association studies. These studies often employ a case-control design and often include nonomics covariates, such as age and gender, that may modify the underlying omics risk factors. An open question is how best to integrate multiple omics and nonomics information to maximize statistical power in case-control studies that ascertain individuals based on the phenotype. Recent work on integrative omics has used prospective approaches, modeling case-control status conditional on omics and nonomics risk factors. Compared to univariate approaches, jointly analyzing multiple risk factors with a prospective approach increases power in nonascertained cohorts. However, these prospective approaches often lose power in case-control studies. In this article, we propose a novel statistical method for integrating multiple omics and nonomics factors in case-control association studies. Our method is based on a retrospective likelihood function that models the joint distribution of omics and nonomics factors conditional on case-control status. The new method provides accurate control of the Type I error rate and has increased efficiency over prospective approaches in both simulated and real data. © 2015 Wiley Periodicals, Inc.
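Schematically (generic notation, not the paper's exact formulation), the contrast is between a prospective likelihood, which models disease given risk factors, and the proposed retrospective likelihood, which conditions on the case-control status under which subjects were ascertained:

    L_{\text{prosp}} = \prod_i P(D_i \mid G_i, X_i), \qquad
    L_{\text{retro}} = \prod_i P(G_i, X_i \mid D_i),

where D_i is case-control status, G_i the omics measurements, and X_i the nonomics covariates.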
Warren, Mark R.; Calderón, José; Kupscznk, Luke Aubry; Squires, Gregory; Su, Celina
2018-01-01
Contrary to the charge that advocacy-oriented research cannot meet social science research standards because it is inherently biased, the authors of this article argue that collaborative, community-engaged scholarship (CCES) must meet high standards of rigor if it is to be useful to support equity-oriented, social justice agendas. In fact, they…
Directory of Open Access Journals (Sweden)
Jinping Sun
2017-01-01
Full Text Available The multiple hypothesis tracker (MHT) is currently the preferred method for addressing the data association problem in multitarget tracking (MTT) applications. MHT seeks the most likely global hypothesis by enumerating all possible associations over time, which is equal to calculating the maximum a posteriori (MAP) estimate over the report data. Despite being a well-studied method, MHT remains challenging, mostly because of the computational complexity of data association. In this paper, we describe an efficient method for solving the data association problem using graphical model approaches. The proposed method uses the graph representation to model the global hypothesis formation and subsequently applies an efficient message passing algorithm to obtain the MAP solution. Specifically, the graph representation of the data association problem is formulated as a maximum weight independent set problem (MWISP), which translates the best global hypothesis formation into finding the maximum weight independent set on the graph. Then, a max-product belief propagation (MPBP) inference algorithm is applied to seek the most likely global hypotheses while avoiding a brute-force hypothesis enumeration procedure. The simulation results show that the proposed MPBP-MHT method can achieve better tracking performance than other algorithms in challenging tracking situations.
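The MWISP formulation can be made concrete with a tiny brute-force sketch (Python): vertices are association hypotheses weighted by their likelihood scores, edges join incompatible hypotheses (e.g. ones sharing a report), and the best global hypothesis is the heaviest independent set. The paper's contribution is to replace this exponential enumeration with max-product belief propagation on the same graph; the sketch below is only the problem statement, not that algorithm.

    from itertools import combinations

    def max_weight_independent_set(weights, edges):
        """Exhaustive MWIS over a tiny hypothesis graph."""
        n = len(weights)
        conflict = {frozenset(e) for e in edges}
        best, best_w = (), float("-inf")
        for r in range(n + 1):
            for sub in combinations(range(n), r):
                if any(frozenset(p) in conflict
                       for p in combinations(sub, 2)):
                    continue                    # not an independent set
                w = sum(weights[v] for v in sub)
                if w > best_w:
                    best, best_w = sub, w
        return best, best_w

    # Hypotheses 0 and 1 share a report and are incompatible:
    print(max_weight_independent_set([2.0, 3.5, 1.0], [(0, 1)]))
    # -> ((1, 2), 4.5)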
Arriaza, Pablo; Nedjat-Haiem, Frances; Lee, Hee Yun; Martin, Shadi S
2015-01-01
The purpose of this article is to synthesize and chronicle the authors' experiences as four bilingual and bicultural researchers, each experienced in conducting cross-cultural/cross-language qualitative research. Through narrative descriptions of experiences with Latinos, Iranians, and Hmong refugees, the authors discuss their rewards, challenges, and methods of enhancing rigor, trustworthiness, and transparency when conducting cross-cultural/cross-language research. The authors discuss and explore how to effectively manage cross-cultural qualitative data, how to effectively use interpreters and translators, how to identify best methods of transcribing data, and the role of creating strong community relationships. The authors provide guidelines for health care professionals to consider when engaging in cross-cultural qualitative research.
Energy efficiency and behaviour
DEFF Research Database (Denmark)
Carstensen, Trine Agervig; Kunnasvirta, Annika; Kiviluoto, Katariina
separate key aspects hinders strategic energy efficiency planning. For this reason, the PLEEC project – "Planning for Energy Efficient Cities" – funded by the EU Seventh Framework Programme uses an integrative approach to achieve the sustainable, energy-efficient, smart city. By coordinating strategies… to conduct behavioural interventions, to be presented in Deliverable 5.5, the final report. This report will also provide valuable information for the WP6 general model for an Energy-Smart City. Altogether 38 behavioural interventions are analysed in this report. Each collected and analysed case study… One aim of the European Union's 20-20-20 plan is to improve energy efficiency by 20% in 2020. However, holistic knowledge about energy efficiency potentials in cities is far from complete. Currently, a variety of individual strategies and approaches by different stakeholders tackling…
Zhong, Victor W; Obeid, Jihad S; Craig, Jean B; Pfaff, Emily R; Thomas, Joan; Jaacks, Lindsay M; Beavers, Daniel P; Carey, Timothy S; Lawrence, Jean M; Dabelea, Dana; Hamman, Richard F; Bowlby, Deborah A; Pihoker, Catherine; Saydah, Sharon H
2016-01-01
Objective To develop an efficient surveillance approach for childhood diabetes by type across 2 large US health care systems, using phenotyping algorithms derived from electronic health record (EHR) data. Materials and Methods Presumptive diabetes cases were identified from diabetes-related billing codes, the patient problem list, and outpatient anti-diabetic medications. EHRs of all presumptive cases were manually reviewed, and true diabetes status and diabetes type were determined. Algorithms for identifying diabetes cases overall and classifying diabetes type were either prespecified or derived from classification and regression tree analysis. A surveillance approach was developed based on the best algorithms identified. Results We developed a stepwise surveillance approach using billing-code-based prespecified algorithms and targeted manual EHR review, which efficiently and accurately ascertained and classified diabetes cases by type in both health care systems. The sensitivity and positive predictive values in both systems were approximately ≥90% for ascertaining diabetes cases overall and classifying cases with type 1 or type 2 diabetes. About 80% of the cases with "other" type were also correctly classified. This stepwise surveillance approach resulted in a >70% reduction in the number of cases requiring manual validation compared to traditional surveillance methods. Conclusion EHR data may be used to establish an efficient approach for large-scale surveillance of childhood diabetes by type, although some manual effort is still needed. PMID:27107449
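Step 1 of such a pipeline, flagging presumptive cases from structured EHR fields before targeted manual review, might look like the following sketch; the field names and the code prefix are hypothetical placeholders, not the study's validated algorithm.

    def is_presumptive_diabetes(patient):
        """Flag a patient record for manual review (illustrative only)."""
        DM_CODE_PREFIXES = ("250",)   # hypothetical billing-code prefix list
        has_code = any(code.startswith(DM_CODE_PREFIXES)
                       for code in patient["billing_codes"])
        on_problem_list = any("diabetes" in entry.lower()
                              for entry in patient["problem_list"])
        on_meds = bool(patient["antidiabetic_meds"])
        return has_code or on_problem_list or on_meds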
Kobayashi, M; Takatori, T; Iwadate, K; Nakajima, M
1996-10-25
We examined the changes in adenosine triphosphate (ATP), lactic acid, adenosine diphosphate (ADP) and adenosine monophosphate (AMP) in five different rat muscles after death. Rigor mortis has been thought to develop at the same rate in all muscles of a body and hence to be completed in small muscles sooner than in large muscles. In this study we found that the rate of decrease in ATP was significantly different in each muscle. The greatest drop in ATP was observed in the masseter muscle. These findings contradict the conventional theory of rigor mortis. Similarly, the rates of change in ADP and lactic acid, which are thought to be related to the consumption or production of ATP, were different in each muscle. However, the rate of change of AMP was the same in each muscle.
Jeon, Hyeonjae; Park, Kwangjin; Hwang, Dae-Joon; Choo, Hyunseung
2009-01-01
Sensor nodes transmit the sensed information to the sink through wireless sensor networks (WSNs). They have limited power, computational capacities and memory. Portable wireless devices are increasing in popularity. Mechanisms that allow information to be efficiently obtained through mobile WSNs are of significant interest. However, a mobile sink introduces many challenges to data dissemination in large WSNs. For example, it is important to efficiently identify the locations of mobile sinks and disseminate information from multi-source nodes to the multi-mobile sinks. In particular, a stationary dissemination path may no longer be effective in mobile sink applications, due to sink mobility. In this paper, we propose a Sink-oriented Dynamic Location Service (SDLS) approach to handle sink mobility. In SDLS, we propose an Eight-Direction Anchor (EDA) system that acts as a location service server. EDA prevents intensive energy consumption at the border sensor nodes and thus provides energy balancing to all the sensor nodes. Then we propose a Location-based Shortest Relay (LSR) that efficiently forwards (or relays) data from a source node to a sink with minimal delay path. Our results demonstrate that SDLS not only provides an efficient and scalable location service, but also reduces the average data communication overhead in scenarios with multiple and moving sinks and sources.
Measurement system for diffraction efficiency of convex gratings
Liu, Peng; Chen, Xin-hua; Zhou, Jian-kang; Zhao, Zhi-cheng; Liu, Quan; Luo, Chao; Wang, Xiao-feng; Tang, Min-xue; Shen, Wei-min
2017-08-01
A measurement system for the diffraction efficiency of convex gratings is designed. The measurement system mainly includes four components: a light source, a front system, a dispersing system that contains a convex grating, and a detector. Based on the definition and measuring principle of diffraction efficiency, the optical scheme of the measurement system is analyzed and the design result is given. Then, in order to validate the feasibility of the designed system, the measurement system is set up and the diffraction efficiency of a convex grating with an aperture of 35 mm, a curvature radius of 72 mm, a blaze angle of 6.4°, a grating period of 2.5 μm, and a working waveband of 400–900 nm is tested. Based on the GUM (Guide to the Expression of Uncertainty in Measurement), the uncertainties in the measurement results are evaluated. The measured diffraction efficiency data are compared to theoretical values, calculated with Rigorous Coupled Wave Analysis from grating groove parameters obtained with an atomic force microscope, and the reliability of the measurement system is illustrated. Finally, the measurement performance of the system is analyzed and tested. The results show that the testing accuracy, stability, and repeatability are 2.5%, 0.085%, and 3.5%, respectively.
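The headline quantity and its GUM-style uncertainty reduce to a power ratio with first-order error propagation, as in this sketch (illustrative values; the actual system measures the powers through the front and dispersing systems):

    import math

    def diffraction_efficiency(p_diff, u_diff, p_inc, u_inc):
        """Efficiency = diffracted / incident power, with first-order
        (GUM-style) propagation of the two power uncertainties."""
        eta = p_diff / p_inc
        u_rel = math.sqrt((u_diff / p_diff) ** 2 + (u_inc / p_inc) ** 2)
        return eta, eta * u_rel

    eta, u = diffraction_efficiency(0.42, 0.005, 1.00, 0.008)  # mW, illustrative
    print(f"eta = {eta:.3f} +/- {u:.3f}")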
DEFF Research Database (Denmark)
Ou, Yiyu; Corell, Dennis Dan; Dam-Hansen, Carsten
2011-01-01
We have theoretically investigated the influence of antireflective sub-wavelength structures on a monolithic white light-emitting diode (LED). The simulation is based on the rigorous coupled wave analysis (RCWA) algorithm, and both cylinder and moth-eye structures have been studied in the work. Our… simulation results show that a moth-eye structure enhances the light extraction efficiency over the entire visible light range, with an extraction efficiency enhancement of up to 26%. Also, for the first time to our best knowledge, the influence of sub-wavelength structures on both the color rendering index
Research on an efficient preconditioner using GMRES method for the MOC
International Nuclear Information System (INIS)
Takeda, Satoshi; Kitada, Takanori; Smith, Michael A.
2011-01-01
The modeling accuracy of reactor analysis techniques has improved considerably with the progressive improvements in computational capabilities. The method of characteristics (MOC) solves the neutron transport equation using tracking lines that simulate the neutron paths. The MOC is an accurate calculation method and is becoming a mainstream solver as computational power increases. In this methodology, the transport equation is discretized into many spatial meshes and energy groups, which generates a large system that is costly to solve. To reduce the computational cost of MOC calculations, we investigated the Generalized Minimal RESidual (GMRES) method as an accelerator and developed an efficient preconditioner for the MOC calculation. The preconditioner was constructed by simplifying a rigorous preconditioner, and its efficiency was verified by comparing iteration counts in a one-dimensional MOC code.
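As a generic illustration of the acceleration strategy (not the authors' MOC-specific preconditioner), preconditioned GMRES with a simplified, approximate factorization looks like this in Python/SciPy; the tridiagonal operator merely stands in for the discretized transport system:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 1000
    A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    ilu = spla.spilu(A, drop_tol=1e-4)            # simplified factorization
    M = spla.LinearOperator((n, n), ilu.solve)    # preconditioner M ~ A^-1

    x, info = spla.gmres(A, b, M=M)
    print(info, np.linalg.norm(b - A @ x))        # info == 0 on convergence

A good preconditioner, like the simplified rigorous preconditioner described above, cuts the iteration count at a fraction of the cost of applying an exact inverse.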
Thermodynamics of accuracy in kinetic proofreading: dissipation and efficiency trade-offs
International Nuclear Information System (INIS)
Rao, Riccardo; Peliti, Luca
2015-01-01
The high accuracy exhibited by biological information transcription processes is due to kinetic proofreading, i.e., a mechanism which reduces the error rate of the information-handling process by driving it out of equilibrium. We provide a consistent thermodynamic description of enzyme-assisted assembly processes involving competing substrates, in a master equation framework. We introduce and evaluate a measure of efficiency based on rigorous non-equilibrium inequalities. The performances of several proofreading models are thus analyzed, and the related time, dissipation, and efficiency versus error trade-offs are exhibited for different discrimination regimes. We finally introduce and analyze, in the same framework, a simple model which takes into account correlations between consecutive enzyme-assisted assembly steps. This work highlights the relevance of the distinction between energetic and kinetic discrimination regimes in enzyme-substrate interactions. (paper)
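For orientation, the textbook Hopfield bound (a reference point under standard assumptions, not the paper's efficiency measure) states that if the equilibrium discrimination error is η₀ = e^{-Δ/k_BT}, with Δ the binding free-energy difference between correct and incorrect substrates, a single proofreading step driven out of equilibrium can reduce the error to roughly

    \eta_{\min} \approx \eta_0^2 = e^{-2\Delta / k_B T},

at the price of extra dissipation, which is precisely the trade-off the paper quantifies.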
Alamri, Haleema; Hadjichristidis, Nikolaos
2016-01-01
A highly efficient methodology, based on a novel catalyst switch approach with rapid crossover characteristics, was developed for the one-pot synthesis of block co/terpolymers of cyclic ethers and esters. This new approach, which takes advantage of one of the best catalysts for epoxide (t-BuP4) and cyclic ester (t-BuP2) polymerization, opens new horizons toward the synthesis of cyclic ether/ester complex macromolecular architectures. © The Royal Society of Chemistry 2016.
Alternative approaches to research in physical therapy: positivism and phenomenology.
Shepard, K F; Jensen, G M; Schmoll, B J; Hack, L M; Gwyer, J
1993-02-01
This article presents philosophical approaches to research in physical therapy. A comparison is made to demonstrate how the research purpose, research design, research methods, and research data differ when one approaches research from the philosophical perspective of positivism (predominantly quantitative) as compared with the philosophical perspective of phenomenology (predominantly qualitative). Differences between the two approaches are highlighted by examples from research articles published in Physical Therapy. The authors urge physical therapy researchers to become familiar with the tenets, rigor, and knowledge gained from the use of both approaches in order to increase their options in conducting research relevant to the practice of physical therapy.