WorldWideScience

Sample records for case minimal deterministic

  1. Nonlinear Boltzmann equation for the homogeneous isotropic case: Minimal deterministic Matlab program

    CERN Document Server

    Asinari, Pietro

    2010-01-01

    The homogeneous isotropic Boltzmann equation (HIBE) is a fundamental dynamic model for many applications in thermodynamics, econophysics and sociodynamics. Despite recent hardware improvements, the solution of the Boltzmann equation remains extremely challenging from the computational point of view, in particular by deterministic methods (free of stochastic noise). This work aims to improve a deterministic direct method recently proposed [V.V. Aristov, Kluwer Academic Publishers, 2001] for solving the HIBE with a generic collisional kernel and, in particular, for taking care of the late dynamics of the relaxation towards the equilibrium. Essentially (a) the original problem is reformulated in terms of particle kinetic energy (exact particle number and energy conservation during microscopic collisions) and (b) the computation of the relaxation rates is improved by the DVM-like correction, where DVM stands for Discrete Velocity Model (ensuring that the macroscopic conservation laws are exactly satisfied). Both ...

  2. Deterministic and stochastic algorithms for resolving the flow fields in ducts and networks using energy minimization

    Science.gov (United States)

    Sochi, Taha

    2016-09-01

    Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton and global) are investigated in conjunction with energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all those algorithms for all these types of fluid agree very well with the analytically derived solutions as obtained from the traditional methods which are based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of the flow dynamics systems. The investigation also enriches the methods of computational fluid dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.

  3. Minimizing Regret : The General Case

    NARCIS (Netherlands)

    Rustichini, A.

    1998-01-01

    In repeated games with differential information on one side, the labelling "general case" refers to games in which the action of the informed player is not known to the uninformed, who can only observe a signal which is the random outcome of his and his opponent's action. Here we consider the problem

  4. Minimizing Platelet Activation-Induced Clogging in Deterministic Lateral Displacement Arrays for High-Throughput Capture of Circulating Tumor Cells

    Science.gov (United States)

    D'Silva, Joseph; Loutherback, Kevin; Austin, Robert; Sturm, James

    2013-03-01

    Deterministic lateral displacement arrays have been used to separate circulating tumor cells (CTCs) from diluted whole blood at flow rates up to 10 mL/min (K. Loutherback et al., AIP Advances, 2012). However, the throughput is limited to 2 mL equivalent volume of undiluted whole blood due to clogging of the array. Since the concentration of CTCs can be as low as 1-10 cells/mL in clinical samples, processing larger volumes of blood is necessary for diagnostic and analytical applications. We have identified platelet activation by the micro-post array as the primary cause of this clogging. In this talk, we (i) show that clogging occurs at the beginning of the micro-post array and not in the injector channels because both acceleration and deceleration in fluid velocity are required for clogging to occur, and (ii) demonstrate how reduction in platelet concentration and decrease in platelet contact time within the device can be used in combination to achieve a 10x increase in the equivalent volume of undiluted whole blood processed. Finally, we discuss experimental efforts to separate the relative contributions of contact activated coagulation and shear-induced platelet activation to clogging and approaches to minimize these, such as surface treatment and post geometry design.

  5. Stochastic and deterministic multiscale models for systems biology: an auxin-transport case study

    Directory of Open Access Journals (Sweden)

    King John R

    2010-03-01

    Background: Stochastic and asymptotic methods are powerful tools in developing multiscale systems biology models; however, little has been done in this context to compare the efficacy of these methods. The majority of current systems biology modelling research, including that of auxin transport, uses numerical simulations to study the behaviour of large systems of deterministic ordinary differential equations, with little consideration of alternative modelling frameworks. Results: In this case study, we solve an auxin-transport model using analytical methods, deterministic numerical simulations and stochastic numerical simulations. Although the three approaches in general predict the same behaviour, they provide different information that we use to gain distinct insights into the modelled biological system. We show in particular that the analytical approach readily provides straightforward mathematical expressions for the concentrations and transport speeds, while the stochastic simulations naturally provide information on the variability of the system. Conclusions: Our study provides a constructive comparison which highlights the advantages and disadvantages of each of the considered modelling approaches. This will prove helpful to researchers when weighing up which modelling approach to select. In addition, the paper goes some way to bridging the gap between these approaches, which in the future we hope will lead to integrative hybrid models.

  6. A Minimization Method for Non-deterministic Finite Automata

    Institute of Scientific and Technical Information of China (English)

    张丽

    2012-01-01

    Automata state minimization seeks an automaton with fewer states that accepts the same language as the original automaton. The minimization problem for deterministic finite automata (DFA) is solvable in quadratic time: introducing an equivalence relation on the state set yields a quotient automaton, which is a minimal automaton accepting the same regular language. For non-deterministic finite automata (NFA), however, no efficient minimization algorithm has yet been found. Although an NFA can be converted to a DFA accepting the same language, the number of states may grow exponentially. From a language B one can construct a sub-language automaton accepting B, and a homomorphic contraction maps the sub-language automaton to a final system, which is then a minimal automaton accepting B.
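    The quadratic-time DFA case referred to above can be sketched as Moore-style partition refinement: start from the accepting/non-accepting split and keep splitting blocks until every state in a block behaves identically. The function and the toy automaton below are this sketch's own illustration, not the paper's construction.

```python
# Hedged sketch of quotient-automaton construction by partition refinement.
# delta maps (state, symbol) -> state and must be total.

def minimize_dfa(states, alphabet, delta, accepting):
    """Return the blocks of equivalent states of a DFA."""
    # Initial partition: accepting vs. non-accepting states.
    partition = [set(accepting), set(states) - set(accepting)]
    partition = [b for b in partition if b]
    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            # Group states by which block each input symbol sends them to.
            groups = {}
            for s in block:
                key = tuple(
                    next(i for i, b in enumerate(partition)
                         if delta[(s, a)] in b)
                    for a in alphabet
                )
                groups.setdefault(key, set()).add(s)
            new_partition.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = new_partition
    return partition
```

For example, in a DFA where states B and C are both accepting and have identical transitions, the refinement stops with B and C merged into one block, giving the quotient automaton the abstract describes.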

  7. A Case for Dynamic Reverse-code Generation to Debug Non-deterministic Programs

    Directory of Open Access Journals (Sweden)

    Jooyong Yi

    2013-09-01

    Backtracking (i.e., reverse execution) helps the user of a debugger to naturally think backwards along the execution path of a program, and thinking backwards makes it easy to locate the origin of a bug. So far backtracking has been implemented mostly by state saving or by checkpointing. These implementations, however, inherently do not scale. Meanwhile, a more recent backtracking method based on reverse-code generation seems promising because executing reverse code can restore the previous states of a program without state saving. In the literature, two methods that generate reverse code can be found: (a) static reverse-code generation, which pre-generates reverse code through static analysis before starting a debugging session, and (b) dynamic reverse-code generation, which generates reverse code by applying dynamic analysis on the fly during a debugging session. In particular, we espoused the latter in our previous work to accommodate non-determinism of a program caused by, e.g., multi-threading. To demonstrate the usefulness of our dynamic reverse-code generation, this article presents a case study of various backtracking methods including ours. We compare the memory usage of various backtracking methods in a simple but nontrivial example, a bounded-buffer program. In the case of non-deterministic programs such as this bounded-buffer program, our dynamic reverse-code generation outperforms the existing backtracking methods in terms of memory efficiency.

  8. A deterministic model of admixture and genetic introgression: the case of Neanderthal and Cro-Magnon.

    Science.gov (United States)

    Forhan, Gerald; Martiel, Jean-Louis; Blum, Michael G B

    2008-11-01

    There is an ongoing debate in the field of human evolution about the possible contribution of Neanderthals to the modern human gene pool. To study how the Neanderthal private alleles may have spread over the genes of Homo sapiens, we propose a deterministic model based on recursive equations and ordinary differential equations. If the Neanderthal population was large compared to the Homo sapiens population at the beginning of the contact period, we show that genetic introgression should have been fast and complete, meaning that most of the Neanderthal private alleles should be found in the modern human gene pool in case of ancient admixture. In order to test/reject ancient admixture from genome-wide data, we incorporate the model of genetic introgression into a statistical hypothesis-testing framework. We show that the power to reject ancient admixture increases as the ratio, at the time of putative admixture, of the population size of Homo sapiens over that of Neanderthal decreases. We find that the power to reject ancient admixture might be particularly low if the population size of Homo sapiens was comparable to the Neanderthal population size. PMID:18768141
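    The kind of deterministic recursive model the abstract describes can be illustrated with a minimal one-way gene-flow recursion. This is a generic textbook-style sketch, not the authors' model; the migration rate m and the initial frequency below are hypothetical.

```python
# Deterministic recursion for the frequency of a donor-private allele:
# each generation a fraction m of the recipient gene pool is replaced by
# donor alleles (where the allele is fixed), so p_{t+1} = (1 - m) p_t + m.

def introgressed_frequency(p0, m, generations):
    """Allele frequency in the recipient population after repeated admixture."""
    p = p0
    for _ in range(generations):
        p = (1 - m) * p + m
    return p
```

The recursion has the closed form p_t = 1 - (1 - m)^t (1 - p0), so the approach to fixation is exponentially fast in t, and faster for larger m, echoing the abstract's point that a relatively large donor population implies fast and complete introgression.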

  9. Hyperketonemia in early lactation dairy cattle: a deterministic estimate of component and total cost per case.

    Science.gov (United States)

    McArt, J A A; Nydam, D V; Overton, M W

    2015-03-01

    The purpose of this study was to develop a deterministic economic model to estimate the costs associated with (1) the component cost per case of hyperketonemia (HYK) and (2) the total cost per case of HYK when accounting for costs related to HYK-attributed diseases. Data from current literature was used to model the incidence and risks of HYK (defined as a blood β-hydroxybutyrate concentration≥1.2 mmol/L), displaced abomasa (DA), metritis, disease associations, milk production, culling, and reproductive outcomes. The component cost of HYK was estimated based on 1,000 calvings per year; the incidence of HYK in primiparous and multiparous animals; the percent of animals receiving clinical treatment; the direct costs of diagnostics, therapeutics, labor, and death loss; and the indirect costs of future milk production losses, future culling losses, and reproduction losses. Costs attributable to DA and metritis were estimated based on the incidence of each disease in the first 30 DIM; the number of cases of each disease attributable to HYK; the direct costs of diagnostics, therapeutics, discarded milk during treatment and the withdrawal period, veterinary service (DA only), and death loss; and the indirect costs of future milk production losses, future culling losses, and reproduction losses. The component cost per case of HYK was estimated at $134 and $111 for primiparous and multiparous animals, respectively; the average component cost per case of HYK was estimated to be $117. Thirty-four percent of the component cost of HYK was due to future reproductive losses, 26% to death loss, 26% to future milk production losses, 8% to future culling losses, 3% to therapeutics, 2% to labor, and 1% to diagnostics. The total cost per case of HYK was estimated at $375 and $256 for primiparous and multiparous animals, respectively; the average total cost per case of HYK was $289. Forty-one percent of the total cost of HYK was due to the component cost of HYK, 33% to costs

  11. Accuracy of probabilistic and deterministic record linkage: the case of tuberculosis

    Science.gov (United States)

    de Oliveira, Gisele Pinto; Bierrenbach, Ana Luiza de Souza; de Camargo, Kenneth Rochel; Coeli, Cláudia Medina; Pinheiro, Rejane Sobrino

    2016-01-01

    OBJECTIVE: To analyze the accuracy of deterministic and probabilistic record linkage to identify TB duplicate records, as well as the characteristics of discordant pairs. METHODS: The study analyzed all TB records from 2009 to 2011 in the state of Rio de Janeiro. A deterministic record linkage algorithm was developed using a set of 70 rules, based on the combination of fragments of the key variables with or without modification (Soundex or substring). Each rule was formed by three or more fragments. The probabilistic approach required a cutoff point for the score, above which the links would be automatically classified as belonging to the same individual. The cutoff point was obtained by linking the Notifiable Diseases Information System – Tuberculosis database with itself, followed by manual review and ROC and precision-recall curves. Sensitivity and specificity were calculated for the accuracy analysis. RESULTS: Accuracy ranged from 87.2% to 95.2% for sensitivity and 99.8% to 99.9% for specificity for probabilistic and deterministic record linkage, respectively. The occurrence of missing values for the key variables and the low percentage of similarity measure for name and date of birth were mainly responsible for the failure to identify records of the same individual with the techniques used. CONCLUSIONS: The two techniques showed a high level of correlation for pair classification. Although deterministic linkage identified more duplicate records than probabilistic linkage, the latter retrieved records not identified by the former. User need and experience should be considered when choosing the best technique to be used. PMID:27556963
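    A single deterministic-linkage rule in the spirit described above might declare two records a match when their name Soundex codes agree and their dates of birth agree exactly. The rule, field names, and records below are hypothetical illustrations; the study's actual 70-rule set is not reproduced here.

```python
# Classic 4-character American Soundex plus one hypothetical match rule.

def soundex(name):
    """Return the standard 4-character Soundex code of a name."""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    name = name.upper()
    out, prev = name[0], codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        if ch not in "HW":          # H and W do not break a run of equal codes
            prev = code
    return (out + "000")[:4]

def same_person(rec_a, rec_b):
    """One deterministic rule: Soundex(name) and date of birth both agree."""
    return (soundex(rec_a["name"]) == soundex(rec_b["name"])
            and rec_a["dob"] == rec_b["dob"])
```

Phonetic coding is what lets such a rule tolerate spelling variation ("Robert" and "Rupert" both code to R163) while the exact date-of-birth comparison keeps specificity high, mirroring the sensitivity/specificity trade-off the abstract reports.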

  12. Minimalism

    CERN Document Server

    Obendorf, Hartmut

    2009-01-01

    The notion of Minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction and thus forms a standpoint that allows definition of aspects of simplicity. This book traces the development of minimalism, defines the four types of minimalism in interaction design, and looks at how to apply it.

  13. Probabilistic and Deterministic Seismic Hazard Assessment: A Case Study in Babol

    Directory of Open Access Journals (Sweden)

    H.R. Tavakoli

    2013-01-01

    Earthquake ground motion parameters are important in the seismic design of structures and in the vulnerability and risk assessment of these structures against earthquake damage; earthquake engineering and seismology assess the damage caused by earthquakes and its social and economic consequences. This paper determines the seismic hazard in Babol via deterministic and probabilistic methods. Using the two methods together is a practical way of cross-checking results and overcoming the weakness of either approach alone. In the deterministic approach, the strong-motion parameters are estimated for the maximum credible earthquake, assumed to occur at the closest possible distance from the site of interest, without considering the likelihood of its occurrence during a specified exposure period. On the other hand, the probabilistic approach integrates the effects of all earthquakes expected to occur at different locations during a specified life period, with the associated uncertainties and randomness taken into account. The calculated bedrock horizontal and vertical peak ground acceleration (PGA) of the study area for different return periods are presented.

  14. Coefficient of reversibility and two particular cases of deterministic many body systems

    International Nuclear Information System (INIS)

    We discuss the importance of a new measure of chaos in the study of nonlinear dynamic systems, the coefficient of reversibility. This is defined as the probability of returning to the same point of phase space. It is very interesting to compare this coefficient with other measures such as the fractal dimension or the Lyapunov exponent. We have also studied two very interesting many body systems, both having any number of particles but a deterministic evolution. One system is composed of n particles initially at rest, having the same mass and interacting through harmonic bi-particle forces; the other is composed of two types of particles (with mass m1 and mass m2) initially at rest and also interacting through harmonic bi-particle forces

  15. Deterministic Seismic Hazard Assessment at Bed Rock Level: Case Study for the City of Bhubaneswar, India

    Directory of Open Access Journals (Sweden)

    Sukanti Rout

    2015-04-01

    In this study an updated deterministic seismic hazard contour map of Bhubaneswar (20°12'0"N to 20°23'0"N latitude and 85°44'0"E to 85°54'0"E longitude), one of the major cities of India with tourist importance, has been prepared in the form of spectral acceleration values. For assessing the seismic hazard, the study area has been divided into small grids of size 30˝×30˝ (approximately 1.0 km×1.0 km), and the hazard parameters in terms of spectral acceleration at bedrock level (PGA) are calculated at the center of each of these grid cells by considering the regional seismo-tectonic activity within a 400 km radius around the city center. A maximum credible earthquake of moment magnitude 7.2 has been used for the calculation of the hazard parameters, resulting in a PGA value of 0.017g towards the northeast side of the city and a corresponding maximum spectral acceleration of 0.0501g for a predominant period of 0.05 s at bedrock level.
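    The deterministic step described in this and the previous record can be sketched as: for each grid cell, evaluate a ground-motion attenuation relation for the maximum credible earthquake on every source within range and keep the largest value. The attenuation form and coefficients below are hypothetical placeholders for illustration, not the relations used in these studies.

```python
import math

def pga_attenuation(magnitude, distance_km, a=-1.0, b=0.5, c=1.0):
    """Generic attenuation: ln(PGA) = a + b*M - c*ln(R + 10); PGA in g.

    The coefficients a, b, c are invented for this sketch; a real study
    would use a published ground-motion prediction equation.
    """
    return math.exp(a + b * magnitude - c * math.log(distance_km + 10.0))

def deterministic_pga(site, sources):
    """Largest predicted PGA at `site` over all (x, y, magnitude) sources."""
    best = 0.0
    for x, y, mag in sources:
        r = math.hypot(site[0] - x, site[1] - y)
        best = max(best, pga_attenuation(mag, r))
    return best
```

Running `deterministic_pga` over every grid-cell center and contouring the results would yield a hazard map of the kind described, with the controlling (closest/largest) source dominating each cell.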

  16. Using CSP To Improve Deterministic 3-SAT

    CERN Document Server

    Kutzkov, Konstantin

    2010-01-01

    We show how one can use certain deterministic algorithms for higher-value constraint satisfaction problems (CSPs) to speed up deterministic local search for 3-SAT. This way, we improve the deterministic worst-case running time for 3-SAT to O(1.439^n).

  17. Nonlinear Boltzmann equation for the homogeneous isotropic case: Some improvements to deterministic methods and applications to relaxation towards local equilibrium

    Science.gov (United States)

    Asinari, P.

    2011-03-01

    The Boltzmann equation is one of the most powerful paradigms for explaining transport phenomena in fluids. Since the early fifties, it has received a lot of attention due to aerodynamic requirements for high altitude vehicles, vacuum technology requirements and, nowadays, micro-electro-mechanical systems (MEMS). Because of the intrinsic mathematical complexity of the problem, Boltzmann himself started his work by considering first the case when the distribution function does not depend on space (homogeneous case), but only on time and the magnitude of the molecular velocity (isotropic collisional integral). The interest in the homogeneous isotropic Boltzmann equation goes beyond simple dilute gases. In so-called econophysics, a Boltzmann-type model is sometimes introduced for studying the distribution of wealth in a simple market. Another recent application of the homogeneous isotropic Boltzmann equation is opinion formation modeling in quantitative sociology, also called socio-dynamics or sociophysics. The present work [1] aims to improve the deterministic method for solving the homogeneous isotropic Boltzmann equation proposed by Aristov [2] through two ideas: (a) the homogeneous isotropic problem is reformulated first in terms of particle kinetic energy (this allows one to ensure exact particle number and energy conservation during microscopic collisions) and (b) a DVM-like correction (where DVM stands for Discrete Velocity Model) is adopted for improving the relaxation rates (this allows one to satisfy exactly the conservation laws at macroscopic level, which is particularly important for describing the late dynamics in the relaxation towards the equilibrium).

  18. Travel path uncertainty: a case study combining stochastic and deterministic hydrodynamic models in the Rhône valley, Switzerland

    OpenAIRE

    Kimmeier, Francesco; Bouzelboudjen, Mahmoud; Ababou, Rachid; Ribeiro, Luis

    2014-01-01

    In the framework of waste storage in geological formations at shallow or greater depths and accidental pollution, the numerical simulation of groundwater flow and contaminant transport represents an important instrument to predict and quantify the pollution as a function of time and space. The numerical simulation problem, and the required hydrogeologic data, are often approached in a deterministic fashion. However, deterministic models do not allow one to evaluate the uncertainty of results. Fur...

  19. Deterministic multidimensional nonuniform gap sampling

    Science.gov (United States)

    Worley, Bradley; Powers, Robert

    2015-12-01

    Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities.
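    A deterministic gap schedule of the kind described can be sketched by replacing random gap deviates with their sinusoidally varying expected values: small gaps early in the grid (where signal is strongest) and larger gaps toward the end. The scheme below is a simplified illustration in that spirit, not the authors' published algorithm.

```python
import math

def sine_gap_schedule(grid_size, scale):
    """Deterministic sine-weighted gap sampling of a 1-D Nyquist grid.

    The gap after grid position p is 1 + scale * sin(pi/2 * p / grid_size),
    so gaps grow monotonically from ~1 at the start toward 1 + scale at the
    end. scale = 0 recovers uniform (full) sampling.
    """
    points, pos = [], 0.0
    while pos < grid_size:
        points.append(int(pos))
        pos += 1.0 + scale * math.sin(0.5 * math.pi * pos / grid_size)
    return points
```

Because the same inputs always yield the same schedule, reconstruction quality is reproducible across experiments, which is the "completely deterministic behavior" benefit the abstract highlights over randomly drawn Poisson-gap schedules.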

  20. A NEW CASE FOR IMAGE COMPRESSION USING LOGIC FUNCTION MINIMIZATION

    Directory of Open Access Journals (Sweden)

    Behrouz Zolfaghari

    2011-05-01

    Sum of minterms is a canonical form for representing logic functions. There are classical methods such as the Karnaugh map or Quine–McCluskey tabulation for minimizing a sum of products. This minimization reduces the minterms to smaller products called implicants. If minterms are represented by bit strings, the bit strings shrink through the minimization process. This can be considered a kind of data compression, provided that there is a way to retrieve the original bit strings from the compressed strings. This paper proposes, implements and evaluates an image compression method called YALMIC (Yet Another Logic Minimization Based Image Compression), which depends on logic function minimization. The method considers adjacent pixels of the image as disjoint minterms constructing a logic function and compresses 24-bit color images through minimizing the function. We compare the compression ratio of the proposed method to those of existing methods and show that YALMIC improves the compression ratio by about 25% on average.
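    The core shrinking step the abstract relies on is the classical Quine–McCluskey combining rule: two minterms differing in exactly one bit merge into a shorter implicant, with the differing position replaced by a dash. The toy functions below illustrate only that rule; the rest of the YALMIC pipeline is not reproduced here.

```python
# One Quine-McCluskey combining pass over bit-string minterms.

def combine(m1, m2):
    """Merge two equal-length bit strings that differ in exactly one bit."""
    diff = [i for i, (a, b) in enumerate(zip(m1, m2)) if a != b]
    if len(diff) != 1:
        return None                      # not adjacent: no merge possible
    i = diff[0]
    return m1[:i] + "-" + m1[i + 1:]

def one_pass(minterms):
    """Apply the combining rule once to every adjacent pair of terms."""
    merged, used = set(), set()
    terms = sorted(minterms)
    for i, a in enumerate(terms):
        for b in terms[i + 1:]:
            c = combine(a, b)
            if c is not None:
                merged.add(c)
                used.update((a, b))
    # Terms that merged with nothing survive unchanged (prime implicants).
    return merged | (set(terms) - used)
```

For example, the minterms 0110 and 0111 merge into the implicant 011-, one character shorter once the dash position is encoded; iterating such passes is what shrinks the pixel-derived bit strings.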

  1. A Comparison of deterministic and probabilistic approaches for assessing risks from contaminated aquifers: an Italian case study.

    Science.gov (United States)

    Rivera-Velasquez, Maria Fernanda; Fallico, Carmine; Guerra, Ignazio; Straface, Salvatore

    2013-12-01

    In this article we consider the methods of deterministic and probabilistic risk analysis regarding the presence of chemical contaminants in soil, water and air, giving the latter a broader meaning than usual: we extend the probabilistic treatment to the parameters that most influence the transport, in particular the hydraulic conductivity and the partition coefficient. These parameters, usually assigned a single value, are considered here as random variables. The objective of the study reported herein was to demonstrate that application of the probabilistic method of risk assessment is preferable to the use of the deterministic method. Both methods yield contaminant removal levels that will reduce adverse effects on human health and the environment, but results from the deterministic method are typically more conservative than necessary, and are thus more costly to achieve. In addition, we found it essential to consider the importance of the random variables (the parameters influencing the flow and the transport), such as the hydraulic conductivity and the partition coefficient, when assessing health risks. Both methodologies of health risk analysis, deterministic and probabilistic, were applied to a site in southern Italy contaminated by heavy metals. The results obtained confirm the purposes of this study. PMID:24293229

  2. Inferring deterministic causal relations

    OpenAIRE

    Daniusis, Povilas; Janzing, Dominik; Mooij, Joris; Zscheischler, Jakob; Steudel, Bastian; Zhang, Kun; Schoelkopf, Bernhard

    2012-01-01

    We consider two variables that are related to each other by an invertible function. While it has previously been shown that the dependence structure of the noise can provide hints to determine which of the two variables is the cause, we presently show that even in the deterministic (noise-free) case, there are asymmetries that can be exploited for causal inference. Our method is based on the idea that if the function and the probability density of the cause are chosen independently, then the ...

  3. Minimally Invasive Approach to Eliminate Pyogenic Granuloma: A Case Report

    OpenAIRE

    Chandrashekar, B.

    2012-01-01

    Pyogenic granuloma is one of the inflammatory hyperplasias seen in the oral cavity. The term is a misnomer because the lesion is not related to infection; it arises in response to various stimuli such as low-grade local irritation, traumatic injury, or hormonal factors. It is most commonly seen in females in their second decade of life due to the vascular effects of hormones. Although excisional surgery is the treatment of choice, this paper presents the safest and most minimally invasive procedure...

  4. Inferring deterministic causal relations

    CERN Document Server

    Daniusis, Povilas; Mooij, Joris; Zscheischler, Jakob; Steudel, Bastian; Zhang, Kun; Schoelkopf, Bernhard

    2012-01-01

    We consider two variables that are related to each other by an invertible function. While it has previously been shown that the dependence structure of the noise can provide hints to determine which of the two variables is the cause, we presently show that even in the deterministic (noise-free) case, there are asymmetries that can be exploited for causal inference. Our method is based on the idea that if the function and the probability density of the cause are chosen independently, then the distribution of the effect will, in a certain sense, depend on the function. We provide a theoretical analysis of this method, showing that it also works in the low noise regime, and link it to information geometry. We report strong empirical results on various real-world data sets from different domains.
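    The asymmetry the abstract exploits can be illustrated with a slope-based score in the spirit of this method: sort the sample by the candidate cause and average log |dy/dx|; the direction with the smaller score is inferred as causal. This is a simplified reading for illustration, not the authors' reference implementation, and it assumes the cause is roughly uniformly distributed.

```python
import math

def slope_score(xs, ys):
    """Average log-slope of y as a function of x over the sorted sample."""
    pairs = sorted(zip(xs, ys))
    total, count = 0.0, 0
    for (x1, y1), (x2, y2) in zip(pairs, pairs[1:]):
        if x2 != x1 and y2 != y1:
            total += math.log(abs((y2 - y1) / (x2 - x1)))
            count += 1
    return total / count

def infer_direction(xs, ys):
    """Infer the causal direction as the one with the smaller slope score."""
    return "X->Y" if slope_score(xs, ys) < slope_score(ys, xs) else "Y->X"
```

For a deterministic, invertible relation the two scores are negatives of each other, so the decision reduces to the sign of one score; intuitively, if the function was chosen independently of the cause's density, the function's curvature correlates with the effect's density but not the cause's, producing the asymmetry.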

  5. Deterministic Graphical Games Revisited

    DEFF Research Database (Denmark)

    Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro

    2008-01-01

    We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem.

  6. Minimally Invasive Approach to Eliminate Pyogenic Granuloma: A Case Report

    Directory of Open Access Journals (Sweden)

    B. Chandrashekar

    2012-01-01

    Pyogenic granuloma is one of the inflammatory hyperplasias seen in the oral cavity. The term is a misnomer because the lesion is not related to infection; it arises in response to various stimuli such as low-grade local irritation, traumatic injury, or hormonal factors. It is most commonly seen in females in their second decade of life due to the vascular effects of hormones. Although excisional surgery is the treatment of choice, this paper presents the safest and most minimally invasive procedure for the regression of pyogenic granuloma.

  7. Minimally invasive approach to eliminate pyogenic granuloma: a case report.

    Science.gov (United States)

    Chandrashekar, B

    2012-01-01

Pyogenic granuloma is one of the inflammatory hyperplasias seen in the oral cavity. The term is a misnomer because the lesion is not related to infection; it arises in response to various stimuli such as low-grade local irritation, traumatic injury, or hormonal factors. It is most commonly seen in females in their second decade of life because of the vascular effects of hormones. Although excisional surgery is the treatment of choice, this paper presents a safe and minimally invasive procedure for the regression of pyogenic granuloma. PMID:22567459

  8. Minimally invasive approaches in pancreatic pseudocyst: a Case report

    Directory of Open Access Journals (Sweden)

    Rohollah Y

    2009-09-01

Full Text Available Background: Given the importance of the postoperative period, admission duration, postoperative pain, and an acceptable rate of complications, minimally invasive endoscopic approaches to pancreatic pseudocyst management have become more popular, but the best choice of procedure and of patient selection is currently not completely established. During the past decade endoscopic procedures have become the first choice in most authors' therapeutic plans; however, open surgery remains the gold standard in pancreatic pseudocyst treatment. Methods: We present a patient with a pancreatic pseudocyst unresponsive to conservative management who underwent endoscopic intervention before the sixth week, and review the current literature to outline a management scheme. Results: A 16-year-old male patient presented with two episodes of acute pancreatitis with abdominal pain, nausea and vomiting. Hyperamylasemia, pancreatic ascites and a pseudocyst were found in our preliminary investigation. Despite optimal conservative management, including NPO (nil per os) and total parenteral nutrition, after four weeks clinical and para-clinical findings deteriorated. Therefore, ERCP and trans-papillary cannulation with placement of a 7 Fr stent was

  9. Uniform deterministic dictionaries

    DEFF Research Database (Denmark)

    Ruzic, Milan

    2008-01-01

    We present a new analysis of the well-known family of multiplicative hash functions, and improved deterministic algorithms for selecting “good” hash functions. The main motivation is realization of deterministic dictionaries with fast lookups and reasonably fast updates. The model of computation...

  10. Fulminant ulcerative colitis associated with steroid-resistant minimal change disease and psoriasis: A case report

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

A 43-year-old Chinese patient with a history of psoriasis developed fulminant ulcerative colitis after immunosuppressive therapy for steroid-resistant minimal change disease was stopped. Minimal change disease in association with inflammatory bowel disease is a rare condition. We here report a case showing an association between ulcerative colitis, minimal change disease, and psoriasis. The possible pathological link between the 3 diseases is discussed.

  11. Optimal Deterministic Auctions with Correlated Priors

    OpenAIRE

    Papadimitriou, Christos; Pierrakos, George

    2010-01-01

    We revisit the problem of designing the profit-maximizing single-item auction, solved by Myerson in his seminal paper for the case in which bidder valuations are independently distributed. We focus on general joint distributions, seeking the optimal deterministic incentive compatible auction. We give a geometric characterization of the optimal auction, resulting in a duality theorem and an efficient algorithm for finding the optimal deterministic auction in the two-bidder case and an NP-compl...

  12. A case of cutaneous paragonimiasis presented with minimal pleuritis.

    Science.gov (United States)

    Singh, T Shantikumar; Devi, Kh Ranjana; Singh, S Rajen; Sugiyama, Hiromu

    2012-07-01

Clinically, paragonimiasis is broadly classified into pulmonary, pleuropulmonary, and extrapulmonary forms. The common extrapulmonary forms are cerebral and cutaneous paragonimiasis. Cutaneous paragonimiasis usually presents as a slowly migrating, painless subcutaneous nodule. The correct diagnosis is often difficult or delayed, or the condition remains undiagnosed, until the nodule becomes enlarged and painful and the cause is investigated. We report here a case of cutaneous paragonimiasis in a male child who presented with mild respiratory symptoms. The diagnosis of paragonimiasis was based on a history of consumption of crabs, a positive specific serological test, and blood eosinophilia. The swelling and respiratory symptoms subsided after a prescribed course of praziquantel therapy.

  13. Deterministic uncertainty analysis

    International Nuclear Information System (INIS)

This paper presents a deterministic uncertainty analysis (DUA) method for calculating uncertainties that has the potential to significantly reduce the number of computer runs compared to conventional statistical analysis. The method is based upon the availability of derivative and sensitivity data such as that calculated using the well known direct or adjoint sensitivity analysis techniques. Formation of response surfaces using derivative data and the propagation of input probability distributions are discussed relative to their role in the DUA method. A sample problem that models the flow of water through a borehole is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. Propagation of uncertainties by the DUA method is compared for ten cases in which the number of reference model runs was varied from one to ten. The DUA method gives a more accurate representation of the true cumulative distribution of the flow rate based upon as few as two model executions compared to fifty model executions using a statistical approach. 16 refs., 4 figs., 5 tabs
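The derivative-based propagation at the heart of such a method can be illustrated with a toy example. The sketch below is an assumption for illustration only (it uses a simple quadratic response, not the paper's borehole model or its response-surface machinery): a single derivative evaluation gives a first-order uncertainty estimate that is then compared against a conventional Monte Carlo estimate.

```python
import math
import random

# Hedged sketch: first-order (derivative-based) uncertainty propagation,
# the basic idea behind derivative-driven uncertainty analysis, applied
# to a toy response f(x) = x**2 (an illustrative assumption).

def f(x):
    return x * x

def dfdx(x):
    # Analytic sensitivity, standing in for what direct/adjoint
    # sensitivity analysis would supply.
    return 2.0 * x

mu, sigma = 3.0, 0.1  # assumed input distribution: x ~ N(mu, sigma^2)

# Deterministic estimate: one model run's worth of derivative data.
sigma_f_deterministic = abs(dfdx(mu)) * sigma

# Statistical estimate for comparison: many model runs (Monte Carlo).
random.seed(0)
samples = [f(random.gauss(mu, sigma)) for _ in range(200_000)]
mean = sum(samples) / len(samples)
sigma_f_mc = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))

print(f"derivative-based: {sigma_f_deterministic:.3f}")
print(f"Monte Carlo:      {sigma_f_mc:.3f}")
```

The two estimates agree closely here because the response is nearly linear over the input spread, which is the regime in which a derivative-based method replaces many statistical model runs with one.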

  14. A rare case of minimal deviation adenocarcinoma of the uterine cervix in a renal transplant recipient.

    LENUS (Irish Health Repository)

    Fanning, D M

    2009-02-03

INTRODUCTION: We report the first described case of minimal deviation adenocarcinoma of the uterine cervix in the setting of a female renal cadaveric transplant recipient. MATERIALS AND METHODS: A retrospective review of this clinical case was performed. CONCLUSION: This rare cancer represents only about 1% of all cervical adenocarcinomas.

  15. A rare case of minimal deviation adenocarcinoma of the uterine cervix in a renal transplant recipient.

    LENUS (Irish Health Repository)

    Fanning, D M

    2012-02-01

INTRODUCTION: We report the first described case of minimal deviation adenocarcinoma of the uterine cervix in the setting of a female renal cadaveric transplant recipient. MATERIALS AND METHODS: A retrospective review of this clinical case was performed. CONCLUSION: This rare cancer represents only about 1% of all cervical adenocarcinomas.

  16. Abducens nerve palsy as a postoperative complication of minimally invasive thoracic spine surgery: a case report

    OpenAIRE

    Sandon, Luiz Henrique Dias; Choi, Gun; Park, EunSoo; Lee, Hyung-Chang

    2016-01-01

    Background Thoracic disc surgeries make up only a small number of all spine surgeries performed, but they can have a considerable number of postoperative complications. Numerous approaches have been developed and studied in an attempt to reduce the morbidity associated with the procedure; however, we still encounter cases that develop serious and unexpected outcomes. Case Presentation This case report presents a patient with abducens nerve palsy after minimally invasive surgery for thoracic d...

  17. Dark matter as a Bose-Einstein Condensate: the relativistic non-minimally coupled case

    Energy Technology Data Exchange (ETDEWEB)

    Bettoni, Dario; Colombo, Mattia; Liberati, Stefano, E-mail: bettoni@sissa.it, E-mail: mattia.colombo@studenti.unitn.it, E-mail: liberati@sissa.it [SISSA, Via Bonomea 265, Trieste, 34136 (Italy)

    2014-02-01

Bose-Einstein Condensates have recently been proposed as dark matter candidates. In order to characterize the phenomenology associated with such models, we extend previous investigations by studying the general case of a relativistic BEC on a curved background including a non-minimal coupling to curvature. In particular, we discuss the possibility of a two-phase cosmological evolution: a cold dark matter-like phase at large scales/early times and a condensed phase inside dark matter halos. During the first phase dark matter is described by a minimally coupled weakly self-interacting scalar field, while in the second one dark matter condenses and, we shall argue, develops the non-minimal coupling as a consequence. Finally, we discuss how such a non-minimal coupling could provide a new mechanism for addressing the cold dark matter paradigm issues at galactic scales.

  18. Minimal TestCase Generation for Object-Oriented Software with State Charts

    OpenAIRE

    Ranjita Kumari Swain; Prafulla Kumar Behera; Durga Prasad Mohapatra

    2012-01-01

Today statecharts are a de facto standard in industry for modeling system behavior. Test data generation is one of the key issues in software testing. This paper proposes a reduction approach to test data generation for state-based software testing. First, a state transition graph is derived from the statechart diagram. Then all the required information is extracted from the statechart diagram and test cases are generated. Lastly, the set of test cases is minimized by calcu...

  19. The human ECG nonlinear deterministic versus stochastic aspects

    CERN Document Server

    Kantz, H; Kantz, Holger; Schreiber, Thomas

    1998-01-01

We discuss aspects of randomness and of determinism in electrocardiographic signals. In particular, we take a critical look at attempts to apply methods of nonlinear time series analysis derived from the theory of deterministic dynamical systems. We will argue that deterministic chaos is not a likely explanation for the short time variability of the inter-beat interval times, except for certain pathologies. Conversely, densely sampled full ECG recordings possess properties typical of deterministic signals. In the latter case, methods of deterministic nonlinear time series analysis can yield new insights.

  20. Deterministic Brownian Motion

    Science.gov (United States)

    Trefan, Gyorgy

    1993-01-01

    The goal of this thesis is to contribute to the ambitious program of the foundation of developing statistical physics using chaos. We build a deterministic model of Brownian motion and provide a microscopic derivation of the Fokker-Planck equation. Since the Brownian motion of a particle is the result of the competing processes of diffusion and dissipation, we create a model where both diffusion and dissipation originate from the same deterministic mechanism--the deterministic interaction of that particle with its environment. We show that standard diffusion which is the basis of the Fokker-Planck equation rests on the Central Limit Theorem, and, consequently, on the possibility of deriving it from a deterministic process with a quickly decaying correlation function. The sensitive dependence on initial conditions, one of the defining properties of chaos insures this rapid decay. We carefully address the problem of deriving dissipation from the interaction of a particle with a fully deterministic nonlinear bath, that we term the booster. We show that the solution of this problem essentially rests on the linear response of a booster to an external perturbation. This raises a long-standing problem concerned with Kubo's Linear Response Theory and the strong criticism against it by van Kampen. Kubo's theory is based on a perturbation treatment of the Liouville equation, which, in turn, is expected to be totally equivalent to a first-order perturbation treatment of single trajectories. Since the boosters are chaotic, and chaos is essential to generate diffusion, the single trajectories are highly unstable and do not respond linearly to weak external perturbation. We adopt chaotic maps as boosters of a Brownian particle, and therefore address the problem of the response of a chaotic booster to an external perturbation. We notice that a fully chaotic map is characterized by an invariant measure which is a continuous function of the control parameters of the map
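The core idea of the abstract, that diffusion can arise from a fully deterministic chaotic mechanism, can be illustrated with a minimal sketch. The code below is an assumption for illustration and not the thesis's booster model: it coarse-grains iterates of the logistic map into ±1 steps and shows that the resulting mean squared displacement grows roughly linearly, as for ordinary Brownian motion.

```python
# Hedged sketch: deterministic "Brownian" motion from a chaotic map.
# The map choice (logistic, r = 3.99) and the sign coarse-graining are
# illustrative assumptions, not the booster model of the thesis.

R = 3.99  # chaotic regime of the logistic map x -> R*x*(1-x)

def walk(x0, n_steps, burn_in=100):
    """Deterministic walk whose +/-1 steps are decided by map iterates."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = R * x * (1.0 - x)
    pos = 0
    for _ in range(n_steps):
        x = R * x * (1.0 - x)
        pos += 1 if x > 0.5 else -1   # coarse-grain the chaotic iterate
    return pos

def msd(n_walkers, n_steps):
    """Mean squared displacement over an ensemble of nearby seeds;
    sensitive dependence on initial conditions decorrelates the walkers."""
    total = 0.0
    for i in range(n_walkers):
        x0 = (i + 0.5) / n_walkers    # distinct seeds in (0, 1)
        total += walk(x0, n_steps) ** 2
    return total / n_walkers

m100 = msd(2000, 100)
m400 = msd(2000, 400)
print(m100, m400)  # msd grows roughly linearly with the step count
```

No random number generator appears anywhere: the apparent randomness, and hence the diffusion, comes entirely from the rapid decay of correlations in the chaotic map, which is the mechanism the thesis formalizes via the Central Limit Theorem.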

  1. Deterministic Global Optimization

    CERN Document Server

    Scholz, Daniel

    2012-01-01

    This monograph deals with a general class of solution approaches in deterministic global optimization, namely the geometric branch-and-bound methods which are popular algorithms, for instance, in Lipschitzian optimization, d.c. programming, and interval analysis.It also introduces a new concept for the rate of convergence and analyzes several bounding operations reported in the literature, from the theoretical as well as from the empirical point of view. Furthermore, extensions of the prototype algorithm for multicriteria global optimization problems as well as mixed combinatorial optimization

  2. Generalized Deterministic Traffic Rules

    CERN Document Server

    Fuks, H; Fuks, Henryk; Boccara, Nino

    1997-01-01

    We study a family of deterministic models for highway traffic flow which generalize cellular automaton rule 184. This family is parametrized by the speed limit $m$ and another parameter $k$ that represents a ``degree of aggressiveness'' in driving, strictly related to the distance between two consecutive cars. We compare two driving strategies with identical maximum throughput: ``conservative'' driving with high speed limit and ``aggressive'' driving with low speed limit. Those two strategies are evaluated in terms of accident probability. We also discuss fundamental diagrams of generalized traffic rules and examine limitations of maximum achievable throughput. Possible modifications of the model are considered.
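The base model that this family generalizes, cellular automaton rule 184, is easy to state in code. The sketch below is a minimal illustration of rule 184 only; the paper's speed limit $m$ and aggressiveness parameter $k$ are not modelled here.

```python
# Minimal sketch of cellular automaton rule 184, the base traffic model
# (the generalized rules' parameters m and k are not modelled here).
# Each cell of a ring road holds a car (1) or is empty (0); in one
# synchronous step every car advances one cell iff the cell ahead is empty.

def rule184_step(road):
    n = len(road)
    new = [0] * n
    for i in range(n):
        ahead = (i + 1) % n
        if road[i] == 1 and road[ahead] == 1:
            new[i] = 1        # blocked: the car stays put
        elif road[i] == 1 and road[ahead] == 0:
            new[ahead] = 1    # free cell ahead: the car moves forward
    return new

road = [1, 1, 0, 0, 1, 0]
for _ in range(3):
    print(road)
    road = rule184_step(road)
```

A jam (two adjacent cars) dissolves from its downstream end one car per step, and the number of cars is conserved, which is why the rule reproduces the basic free-flow/congested branches of a traffic fundamental diagram.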

  3. Deterministic estimation of hydrological thresholds for shallow landslide initiation and slope stability models: case study from the Somma-Vesuvius area of southern Italy

    Science.gov (United States)

    Baum, Rex L.; Godt, Jonathan W.; De Vita, P.; Napolitano, E.

    2012-01-01

Rainfall-induced debris flows involving ash-fall pyroclastic deposits that cover steep mountain slopes surrounding the Somma-Vesuvius volcano are natural events and a source of risk for urban settlements located at footslopes in the area. This paper describes experimental methods and modelling results of shallow landslides that occurred on 5–6 May 1998 in selected areas of the Sarno Mountain Range. Stratigraphical surveys carried out in initiation areas show that ash-fall pyroclastic deposits are discontinuously distributed along slopes, with total thicknesses that vary from a maximum value on slopes inclined less than 30° to near zero thickness on slopes inclined greater than 50°. This distribution of cover thickness influences the stratigraphical setting and leads to downward thinning and the pinching out of pyroclastic horizons. Three engineering geological settings were identified in which most of the initial landslides that triggered the May 1998 debris flows occurred; these can be classified as (1) knickpoints, characterised by a downward progressive thinning of the pyroclastic mantle; (2) rocky scarps that abruptly interrupt the pyroclastic mantle; and (3) road cuts in the pyroclastic mantle that occur in a critical range of slope angle. Detailed topographic and stratigraphical surveys coupled with field and laboratory tests were conducted to define the geometric, hydraulic and mechanical features of pyroclastic soil horizons in the source areas and to carry out hydrological numerical modelling of hillslopes under different rainfall conditions. The slope stability for three representative cases was calculated considering the real sliding surface of the initial landslides and the pore pressures during the infiltration process. The hydrological modelling of hillslopes demonstrated localised increases of pore pressure, up to saturation, where pyroclastic horizons with higher hydraulic conductivity pinch out and the thickness of pyroclastic mantle reduces or is

  4. Rhabdomyolysis and acute renal failure following minimally invasive spine surgery: report of 5 cases.

    Science.gov (United States)

    Dakwar, Elias; Rifkin, Stephen I; Volcan, Ildemaro J; Goodrich, J Allan; Uribe, Juan S

    2011-06-01

    Minimally invasive spine surgery is increasingly used to treat various spinal pathologies with the goal of minimizing destruction of the surrounding tissues. Rhabdomyolysis (RM) is a rare but known complication of spine surgery, and acute renal failure (ARF) is in turn a potential complication of severe RM. The authors report the first known case series of RM and ARF following minimally invasive lateral spine surgery. The authors retrospectively reviewed data in all consecutive patients who underwent a minimally invasive lateral transpsoas approach for interbody fusion with the subsequent development of RM and ARF at 2 institutions between 2006 and 2009. Demographic variables, patient home medications, preoperative laboratory values, and anesthetic used during the procedure were reviewed. All patient data were recorded including the operative procedure, patient positioning, postoperative hospital course, operative time, blood loss, creatine phosphokinase (CPK), creatinine, duration of hospital stay, and complications. Five of 315 consecutive patients were identified with RM and ARF after undergoing minimally invasive lateral transpsoas spine surgery. There were 4 men and 1 woman with a mean age of 66 years (range 60-71 years). The mean body mass index was 31 kg/m2 and ranged from 25 to 40 kg/m2. Nineteen interbody levels had been fused, with a range of 3-6 levels per patient. The mean operative time was 420 minutes and ranged from 315 to 600 minutes. The CPK ranged from 5000 to 56,000 U/L, with a mean of 25,861 U/L. Two of the 5 patients required temporary hemodialysis, while 3 required only aggressive fluid resuscitation. The mean duration of the hospital stay was 12 days, with a range of 3-25 days. Rhabdomyolysis is a rare but known potential complication of spine surgery. The authors describe the first case series associated with the minimally invasive lateral approach. Surgeons must be aware of the possibility of postoperative RM and ARF, particularly in

  5. Minimally invasive keyhole approaches in spinal intradural tumor surgery: report of two cases and conceptual considerations.

    Science.gov (United States)

    Reisch, Robert; Koechlin, Nicolas O; Marcus, Hani J

    2016-09-01

    Despite their predominantly histologically benign nature, intradural tumors may become symptomatic by virtue of their space-occupying effect, causing severe neurological deficits. The gold standard treatment is total excision of the lesion; however, extended dorsal and dorsolateral approaches may cause late complications due to iatrogenic destruction of the posterolateral elements of the spine. In this article, we describe our concept of minimally invasive spinal tumor surgery. Two illustrative cases demonstrate the feasibility and safety of keyhole fenestrations exposing the spinal canal. PMID:25336048

  6. Deterministic Graphical Games Revisited

    DEFF Research Database (Denmark)

    Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro;

    2012-01-01

Starting from Zermelo's classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn's deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison-based algorithm for finding optimal strategies in such games. The existence of a linear time comparison-based algorithm remains an open problem.

  7. Deterministic analyses of severe accident issues

    International Nuclear Information System (INIS)

Severe accidents in light water reactors involve complex physical phenomena. In the past there has been a heavy reliance on simple assumptions regarding physical phenomena, alongside probability methods, to evaluate the risks associated with severe accidents. Recently GE has developed realistic methodologies that permit deterministic evaluations of severe accident progression and of some of the associated phenomena in the case of Boiling Water Reactors (BWRs). These deterministic analyses indicate that, with appropriate system modifications and operator actions, core damage can be prevented in most cases. Furthermore, in cases where core melt is postulated, containment failure can either be prevented or significantly delayed to allow sufficient time for recovery actions to mitigate severe accidents.

  8. White sponge naevus with minimal clinical and histological changes: report of three cases.

    Science.gov (United States)

    Lucchese, Alberta; Favia, Gianfranco

    2006-05-01

White sponge naevus (WSN) is a rare autosomal dominant disorder that predominantly affects non-cornified stratified squamous epithelia: the oral mucosa, oesophagus and anogenital area. It has been shown to be related to keratin defects caused by mutations in the genes encoding the mucosa-specific keratins K4 and K13. We describe three cases diagnosed as WSN according to clinical and histological criteria but with an unusual appearance: they presented with minimal clinical and histological changes that could be misleading in the diagnosis. The patients showed diffuse irregular plaques, with presentations ranging from white to rose-coloured mucosae, involving the entire oral cavity. In one case the lesion was also present in the vaginal area. The histological findings included epithelial thickening, parakeratosis and extensive vacuolization of the suprabasal keratinocytes, confirming the WSN diagnosis. The clinical presentation and histopathology of WSN are discussed in relation to the differential diagnosis of other oral leukokeratoses. PMID:16630298

  9. Deterministic behavioural models for concurrency

    DEFF Research Database (Denmark)

    Sassone, Vladimiro; Nielsen, Mogens; Winskel, Glynn

    1993-01-01

    This paper offers three candidates for a deterministic, noninterleaving, behaviour model which generalizes Hoare traces to the noninterleaving situation. The three models are all proved equivalent in the rather strong sense of being equivalent as categories. The models are: deterministic labelled...

  10. Deterministic Mean-Field Ensemble Kalman Filtering

    KAUST Repository

    Law, Kody J. H.

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.

  11. Minimally invasive surgery for superior mesenteric artery syndrome: A case report.

    Science.gov (United States)

    Yao, Si-Yuan; Mikami, Ryuichi; Mikami, Sakae

    2015-12-01

Superior mesenteric artery (SMA) syndrome is defined as compression of the third portion of the duodenum between the abdominal aorta and the overlying SMA. SMA syndrome associated with anorexia nervosa has been recognized, mainly among young female patients. The excessive weight loss owing to the eating disorder sometimes results in a reduced aorto-mesenteric angle and causes duodenal obstruction. Conservative treatment, including psychiatric and nutritional management, is recommended as initial therapy. If conservative treatment fails, surgery is often required. Currently, traditional open bypass surgery has been replaced by laparoscopic duodenojejunostomy as a curative surgical approach. However, a single-incision laparoscopic approach is rarely performed. A 20-year-old female patient with a diagnosis of anorexia nervosa and SMA syndrome was prepared for surgery after failed conservative management. As the patient had body image concerns, a single-incision laparoscopic duodenojejunostomy was performed to achieve minimal scarring. As a result, good perioperative outcomes and cosmetic results were achieved. We report the first case of a young patient with SMA syndrome successfully treated by single-incision laparoscopic duodenojejunostomy. This minimally invasive surgery would be beneficial for other patients with SMA syndrome associated with anorexia nervosa, in terms of both surgical and cosmetic outcomes. PMID:26668518

  12. Case Study : Auditory brain responses in a minimally verbal child with autism and cerebral palsy

    Directory of Open Access Journals (Sweden)

    Shu Hui Yau

    2015-06-01

Full Text Available An estimated 30% of individuals with autism spectrum disorders (ASD) remain minimally verbal into late childhood, but research on cognition and brain function in ASD focuses almost exclusively on those with good or only moderately impaired language. Here we present a case study investigating auditory processing of GM, a nonverbal child with ASD and cerebral palsy. At the age of 8 years, GM was tested using magnetoencephalography (MEG) whilst passively listening to speech and nonspeech sounds. Where typically developing children and verbal autistic children all demonstrated similar brain responses to speech and nonspeech sounds, GM produced much stronger responses to nonspeech than speech, particularly in the 65–165 ms (M50/M100) time window post stimulus onset. GM was retested aged 10 years using electroencephalography (EEG). Consistent with her MEG results, she showed an unusually early and strong response to pure tone stimuli. These results demonstrate both the potential and the feasibility of using MEG and EEG in the study of minimally verbal children with ASD.

  13. Minimal access direct spondylolysis repair using a pedicle screw-rod system: a case series

    Directory of Open Access Journals (Sweden)

    Mohi Eldin Mohamed

    2012-11-01

Full Text Available Abstract Introduction Symptomatic spondylolysis is always challenging to treat because the pars defect causing the instability needs to be stabilized while segmental fusion needs to be avoided. Direct repair of the pars defect is ideal in cases of spondylolysis in which posterior decompression is not necessary. We report clinical results using segmental pedicle-screw-rod fixation with bone grafting in patients with symptomatic spondylolysis, a modification of a technique first reported by Tokuhashi and Matsuzaki in 1996. We also describe the surgical technique, assess the fusion and analyze the outcomes of patients. Case presentation At Cairo University Hospital, eight out of twelve Egyptian patients’ acute pars fractures healed after conservative management. Of those, two young male patients underwent an operative procedure for chronic low back pain secondary to pars defect. Case one was a 25-year-old Egyptian man who presented with a one-year history of axial low back pain, not radiating to the lower limbs, after falling from height. Case two was a 29-year-old Egyptian man who presented with a one-year history of axial low back pain and a one-year history of mild claudication and infrequent radiation to the leg, never below the knee. Utilizing a standardized mini-access fluoroscopically-guided surgical protocol, fixation was established with two titanium pedicle screws placed into both pedicles, at the same level as the pars defect, without violating the facet joint. The cleaned pars defect was grafted; a curved titanium rod was then passed under the base of the spinous process of the affected vertebra, bridging the loose fragment, and attached to the pedicle screw heads, to uplift the spinous process, followed by compression of the defect. The patients were discharged three days after the procedure, with successful fusion at one-year follow-up. No rod breakage or implant-related complications were reported. Conclusions Where there is no

  14. Deterministic quantitative risk assessment development

    Energy Technology Data Exchange (ETDEWEB)

    Dawson, Jane; Colquhoun, Iain [PII Pipeline Solutions Business of GE Oil and Gas, Cramlington Northumberland (United Kingdom)

    2009-07-01

Current risk assessment practice in pipeline integrity management is to use a semi-quantitative index-based or model-based methodology. This approach has been found to be very flexible and to provide useful results for identifying high-risk areas and for prioritizing physical integrity assessments. However, as pipeline operators progressively adopt an operating strategy of continual risk reduction with a view to minimizing total expenditures within safety, environmental, and reliability constraints, the need for quantitative assessments of risk levels is becoming evident. Whereas reliability-based quantitative risk assessments can be and are routinely carried out on a site-specific basis, they require significant amounts of quantitative data for the results to be meaningful. This need for detailed and reliable data tends to make these methods unwieldy for system-wide risk assessment applications. This paper describes methods for estimating risk quantitatively through the calibration of semi-quantitative estimates to failure rates for peer pipeline systems. The methods involve the analysis of the failure rate distribution, and techniques for mapping the rate to the distribution of likelihoods available from currently available semi-quantitative programs. By applying point value probabilities to the failure rates, deterministic quantitative risk assessment (QRA) provides greater rigor and objectivity than can usually be achieved through the implementation of semi-quantitative risk assessment results. The method permits a fully quantitative approach or a mixture of QRA and semi-QRA to suit the operator's data availability and quality, and analysis needs. For example, consequence analysis can be quantitative or can address qualitative ranges for consequence categories. Likewise, failure likelihoods can be output as classical probabilities or as expected failure frequencies as required. (author)

  15. The cointegrated vector autoregressive model with general deterministic terms

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed, basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t) = Z(t) + Y(t), where Z(t) belongs to a large class of deterministic regressors and Y(t) is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that associated tests are chi-squared distributed.

  16. An Approach to Composition Based on a Minimal Techno Case Study

    OpenAIRE

    Bougaïeff, Nicolas

    2013-01-01

    This dissertation examines key issues relating to minimal techno, a sub-genre of electronic dance music (EDM) that emerged in the early 1990s. These key issues are the aesthetics, composition, performance, and technology of minimal techno, as well as the economics of EDM production. The study aims to answer the following question. What is the musical and social significance of minimal techno production and performance? The study is conducted in two parts. The history of minimal music is ...

  17. Deterministic methods in radiation transport

    International Nuclear Information System (INIS)

    The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.

  18. Deterministic Real-time Thread Scheduling

    CERN Document Server

    Yun, Heechul; Sha, Lui

    2011-01-01

    A race condition is a timing-sensitive problem. A significant source of timing variation comes from nondeterministic hardware interactions such as cache misses. While data race detectors and model checkers can check for races, the enormous state space of complex software makes it difficult to identify all of the races, and residual implementation errors remain a big challenge. In this paper, we propose deterministic real-time scheduling methods to address scheduling nondeterminism in uniprocessor systems. The main idea is to use timing-insensitive deterministic events, e.g., an instruction counter, in conjunction with a real-time clock to schedule threads. By introducing the concept of Worst Case Executable Instructions (WCEI), we guarantee both determinism and real-time performance.
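
    The instruction-counter idea can be illustrated with cooperative generators standing in for threads: the quantum counts logical "instructions" rather than wall-clock time, so the interleaving is identical on every run. This is a toy sketch of the principle, not the paper's WCEI mechanism.

```python
# Toy sketch of instruction-count scheduling: each `yield` marks one logical
# instruction; a thread is preempted after a fixed instruction quantum, so
# the interleaving is fully deterministic (illustration only, not the
# paper's WCEI mechanism).
def worker(name, steps, log):
    for i in range(steps):
        log.append((name, i))  # one logical "instruction"
        yield                  # scheduling point

def run(threads, quantum):
    ready = list(threads)      # deterministic round-robin queue
    while ready:
        t = ready.pop(0)
        try:
            for _ in range(quantum):
                next(t)        # execute one instruction
        except StopIteration:
            continue           # thread finished; do not requeue
        ready.append(t)

log = []
run([worker("A", 3, log), worker("B", 3, log)], quantum=2)
# interleaving is the same on every run: A0 A1 B0 B1 A2 B2
```

    Because preemption points depend only on the instruction count, replaying the same inputs reproduces the same thread interleaving, which is the property the paper exploits for race-free real-time execution.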

  19. Regret Bounds for Deterministic Gaussian Process Bandits

    CERN Document Server

    de Freitas, Nando; Zoghi, Masrour

    2012-01-01

    This paper analyses the problem of Gaussian process (GP) bandits with deterministic observations. The analysis uses a branch and bound algorithm that is related to the UCB algorithm of (Srinivas et al., 2010). For GPs with Gaussian observation noise, with variance strictly greater than zero, (Srinivas et al., 2010) proved that the regret vanishes at the approximate rate of $O(\frac{1}{\sqrt{t}})$, where t is the number of observations. To complement their result, we attack the deterministic case and attain a much faster exponential convergence rate. Under some regularity assumptions, we show that the regret decreases asymptotically according to $O(e^{-\frac{\tau t}{(\ln t)^{d/4}}})$ with high probability. Here, d is the dimension of the search space and $\tau$ is a constant that depends on the behaviour of the objective function near its global maximum.
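
    A minimal numpy sketch of GP bandits with noise-free observations helps build intuition: an RBF-kernel GP posterior drives a UCB rule, and because observations are deterministic the posterior variance collapses to zero at sampled points. This is an illustrative UCB loop, not the branch-and-bound algorithm of the paper; the objective, kernel length scale, and grid are all hypothetical choices.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    # squared-exponential kernel on 1-D inputs
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

f = lambda x: np.sin(3.0 * x)          # toy objective, maximum near x = pi/6
X = np.array([0.1, 0.9])               # initial deterministic observations
grid = np.linspace(0.0, 1.0, 201)

for _ in range(10):
    y = f(X)
    K = rbf(X, X) + 1e-8 * np.eye(len(X))   # tiny jitter for stability
    Ks = rbf(grid, X)
    mu = Ks @ np.linalg.solve(K, y)         # posterior mean
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    ucb = mu + 2.0 * np.sqrt(np.clip(var, 0.0, None))
    X = np.append(X, grid[np.argmax(ucb)])  # sample the UCB maximizer

best = X[np.argmax(f(X))]                   # best observed point
```

    With zero observation noise the GP interpolates exactly, so already-sampled points have no exploration bonus and the search concentrates near the global maximum, which is the intuition behind the exponential regret rate.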

  20. Self-organized criticality in deterministic systems with disorder

    OpenAIRE

    Rios, Paolo De Los; Valleriani, Angelo; Vega, Jose Luis

    1997-01-01

    Using the Bak-Sneppen model of biological evolution as our paradigm, we investigate in which cases noise can be substituted with a deterministic signal without destroying Self-Organized Criticality (SOC). If the deterministic signal is chaotic the universality class is preserved; some non-universal features, such as the threshold, depend on the time correlation of the signal. We also show that, if the signal introduced is periodic, SOC is preserved but in a different universality class, as lo...

  1. A Dual Approach for Solving Nonlinear Infinity-Norm Minimization Problems with Applications in Separable Cases

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, we consider nonlinear infinity-norm minimization problems. We devise a reliable Lagrangian dual approach for solving this kind of problem and, based on this method, we propose an algorithm for mixed linear and nonlinear infinity-norm minimization problems. Numerical results are presented.
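
    For background, the linear special case min_x ||Ax - b||_inf can be solved exactly by the standard epigraph reformulation as a linear program. The sketch below shows that classical trick (not the authors' Lagrangian dual method), on a tiny hypothetical instance.

```python
import numpy as np
from scipy.optimize import linprog

# Epigraph reformulation: minimize t  subject to  -t <= (A x - b)_i <= t.
A = np.array([[1.0], [1.0]])
b = np.array([0.0, 1.0])
m, n = A.shape

c = np.zeros(n + 1)                        # decision vector z = [x, t]
c[-1] = 1.0                                # objective: minimize t
ones = np.ones((m, 1))
A_ub = np.vstack([np.hstack([A, -ones]),   #  A x - t <= b
                  np.hstack([-A, -ones])]) # -A x - t <= -b
b_ub = np.concatenate([b, -b])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0.0, None)])
x_opt, t_opt = res.x[:n], res.x[-1]
# here min max(|x|, |x - 1|) is attained at x = 0.5 with value 0.5
```

    The nonlinear problems treated in the paper replace the linear residuals with nonlinear functions, which is where the Lagrangian dual machinery becomes necessary.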

  2. Minimally Invasive Antral Membrane Balloon Elevation (MIAMBE): A report of 3 cases

    Directory of Open Access Journals (Sweden)

    Roberto Arroyo

    2013-12-01

    Full Text Available ABSTRACT Long-standing partial edentulism in the posterior segment of an atrophic maxilla is challenging to treat. Sinus elevation via the Caldwell-Luc approach has several anatomical restrictions, causes post-operative discomfort, and requires complex surgical techniques. The osteotome approach has considerable safety and efficacy; as a variation of this technique, the "minimally invasive antral membrane balloon elevation" (MIAMBE), which uses a hydraulic system, has been developed. We present three cases in which the MIAMBE system was used for tooth replacement in the posterior maxilla. This procedure seems to be a relatively simple and safe solution for the insertion of endo-osseous implants in the posterior atrophic maxilla.

  3. Deterministic dynamic behaviour

    International Nuclear Information System (INIS)

    The dynamic load as the second given quantity in a dynamic analysis is less problematic as a rule than structure mapping, except for those cases where it cannot be specified completely independently directly for the structure but interacts with the non-structure environment or is influenced by the latter. In these cases, one should always check whether the study cannot be simplified by separate investigations of the two types of problems. The determination of the system response from the given quantities 'structure' and 'load' is the central function of dynamic analysis, although the importance and problems of the mapping steps should not be underestimated. The paper focuses on some aspects of this problem. The available methods are classified as modal and non-modal (direct) methods. In the first of these, the eigenvectors of the system are used as generalizing coordinates, while the degrees of freedom describing the model are used in the latter. The criteria for assessing methods of calculation are the accuracy and numerical stability of their solutions as well as their simplicity of use. Response quantities are represented (and calculated) in the form of response-time functions, frequency response functions, spectral density functions (or stochastic parameters derived from these), response spectra. The multitude of problems in dynamic studies requires also a multitude of possible approaches, and the selection of the most appropriate method for a given case is no slight task. (orig./GL)
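
    The modal route described above can be made concrete on a small system: solve the generalized eigenproblem K v = ω² M v and use the eigenvectors as generalized coordinates, which decouple the equations of motion. A minimal sketch for a hypothetical 2-DOF spring-mass chain (all values illustrative):

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 2-DOF chain: ground -- k -- m1 -- k -- m2
m, k = 1.0, 100.0
M = np.diag([m, m])
K = np.array([[2.0 * k, -k],
              [-k,       k]])

# Generalized eigenproblem K v = w^2 M v: natural frequencies and mode shapes.
w2, V = eigh(K, M)
wn = np.sqrt(w2)          # natural frequencies in rad/s, ascending

# Mass-normalized modes diagonalize both M and K, so in modal coordinates
# each mode responds as an independent single-DOF oscillator.
Mm = V.T @ M @ V          # ~ identity
Km = V.T @ K @ V          # ~ diag(w2)
```

    Direct (non-modal) methods would instead integrate the coupled equations in the physical degrees of freedom; the modal form trades an eigensolution up front for decoupled, cheap-to-evaluate responses afterwards.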

  4. Minimally invasive transforaminal lumbar interbody fusion: Results of 23 consecutive cases

    Directory of Open Access Journals (Sweden)

    Amit Jhala

    2014-01-01

    Conclusion: The study demonstrates a good clinicoradiological outcome of minimally invasive TLIF. It is also superior in terms of postoperative back pain, blood loss, hospital stay, recovery time as well as medication use.

  5. Minimally invasive video-assisted thyroidectomy: experience of 200 cases in a single center

    OpenAIRE

    Haitao, Zheng; Jie, Xu; Lixin, Jiang

    2014-01-01

    Introduction Minimally invasive techniques in thyroid surgery including video-assisted technique originally described by Miccoli have been accepted in several continents for more than 10 years. Aim To analyze our preliminary results from minimally invasive video-assisted thyroidectomy (MIVAT) and to evaluate the feasibility and effects of this method in a general department over a 4-year period. Material and methods Initial experience was presented based on a series of 200 patients selected f...

  6. Height-Deterministic Pushdown Automata

    DEFF Research Database (Denmark)

    Nowotka, Dirk; Srba, Jiri

    2007-01-01

    of regular languages and still closed under boolean language operations, are considered. Several of such language classes have been described in the literature. Here, we suggest a natural and intuitive model that subsumes all the formalisms proposed so far by employing height-deterministic pushdown automata...

  7. TACE with Ar-He Cryosurgery Combined Minimal Invasive Technique for the Treatment of Primary NSCLC in 139 Cases

    Directory of Open Access Journals (Sweden)

    Yunzhi ZHOU

    2010-01-01

    Full Text Available Background and objective TACE, Ar-He targeted cryosurgery and radioactive seed implantation are the main minimally invasive methods in the treatment of lung cancer. This article summarizes survival quality after treatment, clinical efficiency and survival period, and analyzes the advantages and shortcomings of each method, so as to evaluate the clinical effect of multiple minimally invasive treatments for non-small cell lung cancer. Methods All 139 cases were non-small cell lung cancer patients confirmed by pathology and followed up retrospectively from July 2006 to July 2009, and all of them had lost the chance of operation by comprehensive evaluation. Different combinations of multiple minimally invasive treatments were selected according to the blood supply, size and location of the lesion. Among the 139 cases (102 cases of primary cancer and 37 cases of metastasis to mediastinum, lung and chest wall), 71 cases with abundant blood supply used the combination of superselective target artery chemotherapy, Ar-He targeted cryoablation and radiochemotherapy with seed implantation; 48 cases with poor blood supply used single Ar-He targeted cryoablation; 20 cases with poor blood supply used the combination of Ar-He targeted cryoablation and radiochemotherapy with seed implantation. The pre- and post-treatment KPS scores, imaging data and follow-up results were then analyzed. Results The KPS score increased by 20.01 on average after treatment. After 3 years of follow-up, there were 44 cases of CR, 87 cases of PR, 3 cases of NC and 5 cases of PD, and the efficiency was 94.2%. There were 99 cases of 1-year survival (71.2%), 43 cases of 2-year survival (30.2%), and 4 cases of over 3-year survival; the median survival was 19 months and the average survival was (16±1.5) months. There were no severe complications, such as spinal cord injury or vessel and pericardial aspiration. Conclusion Minimally invasive technique is a highly successful, micro-invasive and effective method with mild complications

  8. Deterministic and risk-informed approaches for safety analysis of advanced reactors: Part I, deterministic approaches

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Sang Kyu [Korea Institute of Nuclear Safety, 19 Kusong-dong, Yuseong-gu, Daejeon 305-338 (Korea, Republic of); Kim, Inn Seock, E-mail: innseockkim@gmail.co [ISSA Technology, 21318 Seneca Crossing Drive, Germantown, MD 20876 (United States); Oh, Kyu Myung [Korea Institute of Nuclear Safety, 19 Kusong-dong, Yuseong-gu, Daejeon 305-338 (Korea, Republic of)

    2010-05-15

    The objective of this paper and a companion paper in this issue (part II, risk-informed approaches) is to derive technical insights from a critical review of deterministic and risk-informed safety analysis approaches that have been applied to develop licensing requirements for water-cooled reactors, or proposed for safety verification of the advanced reactor design. To this end, a review was made of a number of safety analysis approaches including those specified in regulatory guides and industry standards, as well as novel methodologies proposed for licensing of advanced reactors. This paper and the companion paper present the review insights on the deterministic and risk-informed safety analysis approaches, respectively. These insights could be used in making a safety case or developing a new licensing review infrastructure for advanced reactors including Generation IV reactors.

  9. Waste minimization in a research and development environment - a case history

    International Nuclear Information System (INIS)

    Brookhaven National Laboratory (BNL) research and development activities generate small and variable waste streams that present a unique minimization challenge. This paper describes how B&V Waste Science and Technology Corp. successfully planned and organized an assessment of these waste streams. It describes the procedures chosen to collect and evaluate data and the procedure adopted to determine the feasibility of waste minimization methods and program elements. The paper gives a brief account of the implementation of the assessment and summarizes the assessment results and recommendations. Also, the paper briefly describes a manual developed to train staff on materials handling and storage methods and a general information brochure to educate employees and visiting researchers. Both documents covered handling, storage, and disposal procedures that could be used to eliminate or minimize hazardous waste discharges to the environment

  10. Minimally invasive esophagectomy for cancer: Single center experience after 44 consecutive cases

    Directory of Open Access Journals (Sweden)

    Bjelović Miloš

    2015-01-01

    Full Text Available Introduction. At the Department of Minimally Invasive Upper Digestive Surgery of the Hospital for Digestive Surgery in Belgrade, hybrid minimally invasive esophagectomy (hMIE) has been a standard of care for patients with resectable esophageal cancer since 2009. As a next and final step in the change management, from January 2015 we utilized total minimally invasive esophagectomy (tMIE) as a standard of care. Objective. The aim of the study was to report initial experiences in hMIE (laparoscopic approach) for cancer and analyze surgical technique, major morbidity and 30-day mortality. Methods. A retrospective cohort study included 44 patients who underwent elective hMIE for esophageal cancer at the Department for Minimally Invasive Upper Digestive Surgery, Hospital for Digestive Surgery, Clinical Center of Serbia in Belgrade, from April 2009 to December 2014. Results. There were 16 (36%) tumors of the middle thoracic esophagus and 28 (64%) tumors of the distal thoracic esophagus. Mean duration of the operation was 319 minutes (approximately five hours and 20 minutes). The average blood loss was 173.6 ml. A total of 12 (27%) patients had postoperative complications, and the mean intensive care unit stay was 2.8 days. Mean hospital stay after surgery was 16 days. The average number of lymph nodes harvested during surgery was 31.9. The mortality rate within 30 days after surgery was 2%. Conclusion. As long as MIE is an oncological equivalent to open esophagectomy (OE), the better relation between cost savings and potentially increased effectiveness will make MIE the preferred approach in high-volume esophageal centers that are experienced in minimally invasive procedures.

  11. Analysis of FBC deterministic chaos

    Energy Technology Data Exchange (ETDEWEB)

    Daw, C.S.

    1996-06-01

    It has recently been discovered that the performance of a number of fossil energy conversion devices such as fluidized beds, pulsed combustors, steady combustors, and internal combustion engines are affected by deterministic chaos. It is now recognized that understanding and controlling the chaotic elements of these devices can lead to significantly improved energy efficiency and reduced emissions. Application of these techniques to key fossil energy processes are expected to provide important competitive advantages for U.S. industry.

  12. Gamma Probe Guided Minimally Invasive Parathyroidectomy without Quick Parathyroid Hormone Measurement in the Cases of Solitary Parathyroid Adenomas

    OpenAIRE

    Savaş Karyağar; Karyağar, Sevda S; Orhan Yalçın; Enis Yüney; Mehmet Mülazımoğlu; Tevfik Özpaçacı; Oğuzhan Karatepe; Yaşar Özdenkaya

    2013-01-01

    Objective: In this study, our aim was to study the efficiency of gamma probe guided minimally invasive parathyroidectomy (GP-MIP), conducted without intra-operative quick parathyroid hormone (QPTH) measurement, in cases of solitary parathyroid adenomas (SPA) detected with USG and dual-phase 99mTc-MIBI parathyroid scintigraphy (PS) in the preoperative period. Material and Methods: This clinical study was performed in 31 SPA patients (27 female, 4 male; mean age 51±11 years) between Febru...

  13. Minimally invasive two-incision total hip arthroplasty: a short-term retrospective report of 27 cases

    Institute of Scientific and Technical Information of China (English)

    ZHANG Xian-long; WANG Qi; SHEN Hao; JIANG Yao; ZENG Bing-fang

    2007-01-01

    Background Total hip arthroplasty (THA) is widely applied for the treatment of end-stage painful hip arthrosis. Traditional THA needed a long incision and caused significant soft tissue trauma. Patients usually required a long recovery time after the operation. In this research we aimed to study the feasibility and clinical outcomes of minimally invasive two-incision THA. Methods From February 2004 to March 2005, 27 patients, 12 males and 15 females with a mean age of 71 years (55-76), underwent minimally invasive two-incision THA in our department. The patients included 9 cases of osteoarthritis, 10 cases of osteonecrosis, and 8 cases of femoral neck fracture. The operations were done with VerSys cementless prostheses and minimally invasive instruments from Zimmer China. Operation time, blood loss, length of incision, postoperative hospital stay, and complications were observed. Results The mean operation time was 90 minutes (80-170 min). The mean blood loss was 260 ml (170-450 ml) and blood transfusion was carried out in 4 cases of femoral neck fracture (average 400 ml). The average length of the anterior incision was 5.0 cm (4.6-6.5 cm) and of the posterior incision 3.7 cm (3.0-4.2 cm). All of the patients were ambulant the day after surgery. Nineteen patients were discharged 5 days post-operatively and 8 patients 7 days post-operatively. The patients were followed for 18 months (13-25 months). One patient sustained a proximal femoral fracture intraoperatively. No other complications, including infections, dislocations, and vascular injuries, occurred. The mean Harris score was 94.5 (92-96). Conclusions Two-incision THA has the advantage of being muscle sparing and minimally invasive, with less blood loss and rapid recovery. However, this technique is time consuming, technically demanding, and requires fluoroscopy.

  14. Minimally-invasive elimination of iatrogenic intravenous foreign bodies: initial experience in five cases

    International Nuclear Information System (INIS)

    Objective: To investigate the effectiveness, technical points and complications of minimally-invasive treatment for iatrogenic intravenous foreign bodies. Methods: Five patients with iatrogenic intravenous foreign bodies due to the fracture or migration of a venous catheter were enrolled in this study. By using a grasping device, which was inserted into the target vein via the right femoral vein, the foreign bodies within the venous system were successfully eliminated. Results: The vascular foreign bodies were successfully removed in all five patients, with a success rate of 100%. No operation-related complications, such as vascular rupture or pulmonary embolism, occurred. Conclusion: As a minimally-invasive technique, the use of a grasping device for removing iatrogenic vascular foreign bodies has a high success rate; thus, major surgical procedures can be avoided. (authors)

  15. Minimally invasive video-assisted thyroidectomy: seven-year experience with 240 cases

    OpenAIRE

    Barczyński, Marcin; Konturek, Aleksander; Stopa, Małgorzata; Papier, Aleksandra; Nowak, Wojciech

    2012-01-01

    Introduction Minimally invasive video-assisted thyroidectomy (MIVAT) has gained acceptance in recent years as an alternative to conventional thyroid surgery. Aim Assessment of our 7-year experience with MIVAT. Material and methods A retrospective study of 240 consecutive patients who underwent MIVAT at our institution between 01/2004 and 05/2011 was conducted. The inclusion criterion was a single thyroid nodule below 30 mm in diameter within the thyroid of 25 ml or less in volume. The exclusi...

  16. Minimally invasive intervention in a case of a noncarious lesion and severe loss of tooth structure.

    Science.gov (United States)

    Reston, Eduardo G; Corba, Vanessa D; Broliato, Gustavo; Saldini, Bruno P; Stefanello Busato, Adair L

    2012-01-01

    The present article describes a minimally invasive technique used for the restoration of loss of tooth structure caused by erosion of intrinsic etiology. First, the cause of erosion was treated and controlled. Subsequently, taking into consideration patient characteristics, especially a young age, a more conservative technique was chosen for dental rehabilitation with the use of composite resin. The advantages and disadvantages of the technique employed are discussed.

  17. Design of deterministic OS for SPLC

    Energy Technology Data Exchange (ETDEWEB)

    Son, Choul Woong; Kim, Dong Hoon; Son, Gwang Seop [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    Existing safety PLCs for use in nuclear power plants operate based on priority-based scheduling, in which the highest-priority task runs first. This type of scheduling scheme determines processing priorities when there are multiple requests for processing or when there is a lack of resources available for processing, guaranteeing execution of higher-priority tasks. This type of scheduling is prone to exhaustion of resources and continuous preemptions by devices with high priorities, and therefore there is uncertainty every period in terms of smooth running of the overall system. Hence, it is difficult to apply this type of scheme where deterministic operation is required, such as in a nuclear power plant. Also, existing PLCs either have no output logic with regard to devices' redundant selection or it was set in a fixed way; as a result, it was extremely inefficient to use them for redundant systems such as that of a nuclear power plant, and their use was limited. Therefore, functional modules that can manage and control all devices need to be developed by improving on the way priorities are assigned among the devices, making it more flexible. A management module should be able to schedule all devices of the system, manage resources, analyze the states of the devices, give warnings in abnormal situations such as device failure or resource scarcity, and decide on how to handle them. Also, the management module should have output logic for device redundancy, as well as deterministic processing capabilities, such as with regard to device interrupt events.

  18. Streamflow disaggregation: a nonlinear deterministic approach

    Directory of Open Access Journals (Sweden)

    B. Sivakumar

    2004-01-01

    Full Text Available This study introduces a nonlinear deterministic approach for streamflow disaggregation. According to this approach, the streamflow transformation process from one scale to another is treated as a nonlinear deterministic process, rather than a stochastic process as generally assumed. The approach follows two important steps: (1) reconstruction of the scalar (streamflow) series in a multi-dimensional phase space for representing the transformation dynamics; and (2) use of a local approximation (nearest neighbor) method for disaggregation. The approach is employed for streamflow disaggregation in the Mississippi River basin, USA. Data of successively doubled resolutions between daily and 16 days (i.e. daily, 2-day, 4-day, 8-day, and 16-day) are studied, and disaggregations are attempted only between successive resolutions (i.e. 2-day to daily, 4-day to 2-day, 8-day to 4-day, and 16-day to 8-day). Comparisons between the disaggregated values and the actual values reveal excellent agreement for all the cases studied, indicating the suitability of the approach for streamflow disaggregation. A further insight into the results reveals that the best results are, in general, achieved for low embedding dimensions (2 or 3) and a small number of neighbors (less than 50), suggesting the possible presence of nonlinear determinism in the underlying transformation process. A decrease in accuracy with increasing disaggregation scale is also observed, a possible implication of the existence of a scaling regime in streamflow.
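
    The two steps (phase-space reconstruction, then local nearest-neighbor approximation) can be sketched on a synthetic deterministic series. The sketch below predicts the next value of a chaotic logistic-map series rather than performing true streamflow disaggregation, but the embedding and local-approximation machinery is the same; the embedding dimension and neighbor count are hypothetical choices.

```python
import numpy as np

# Synthetic deterministic series: the chaotic logistic map.
x = np.empty(500)
x[0] = 0.4
for i in range(499):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

m, k = 2, 5                          # embedding dimension, number of neighbors
train, test = x[:400], x[400:]

def embed(s, m):
    # Step 1: reconstruct the phase space from delay vectors.
    return np.array([s[i:i + m] for i in range(len(s) - m)])

E = embed(train, m)                  # delay vectors
targets = train[m:]                  # value following each delay vector

# Step 2: local approximation -- average the successors of the k nearest
# neighbors of the current state.
errors = []
for i in range(len(test) - m):
    v = test[i:i + m]
    d = np.linalg.norm(E - v, axis=1)
    nn = np.argsort(d)[:k]
    pred = targets[nn].mean()
    errors.append(abs(pred - test[i + m]))
mean_err = float(np.mean(errors))
```

    For a genuinely deterministic series the local approximation is accurate with a low embedding dimension and few neighbors, mirroring the paper's finding for streamflow.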

  19. Bottleneck Paths and Trees and Deterministic Graphical Games

    OpenAIRE

    Chechik, Shiri; Kaplan, Haim; Thorup, Mikkel; Zamir, Or; Zwick, Uri

    2016-01-01

    Gabow and Tarjan showed that the Bottleneck Path (BP) problem, i.e., finding a path between a given source and a given target in a weighted directed graph whose largest edge weight is minimized, as well as the Bottleneck spanning tree (BST) problem, i.e., finding a directed spanning tree rooted at a given vertex whose largest edge weight is minimized, can both be solved deterministically in O(m * log^*(n)) time, where m is the number of edges and n is the number of vertices in the graph. We p...
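
    For intuition about the BP objective (minimize the largest edge weight on a path), here is a plain textbook Dijkstra variant that propagates max(bottleneck so far, edge weight) instead of sums. This is the simple O(m log n) approach, not the faster O(m log*(n)) algorithm the abstract discusses; the graph is hypothetical.

```python
import heapq

def bottleneck_path(graph, s, t):
    """Smallest possible value of the largest edge weight on an s-t path.

    graph: dict mapping u -> list of (v, weight) edges.
    Dijkstra variant: relax with max() instead of +; returns inf if
    t is unreachable from s.
    """
    best = {s: 0}
    pq = [(0, s)]
    while pq:
        b, u = heapq.heappop(pq)
        if u == t:
            return b
        if b > best.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nb = max(b, w)
            if nb < best.get(v, float('inf')):
                best[v] = nb
                heapq.heappush(pq, (nb, v))
    return float('inf')

g = {'a': [('b', 5), ('c', 2)], 'c': [('b', 3)]}
# direct a->b has bottleneck 5; a->c->b has bottleneck max(2, 3) = 3
```

    The BST problem is analogous: grow a tree greedily, always attaching the vertex reachable through the smallest maximum edge weight.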

  20. Deterministic treatment of model error in geophysical data assimilation

    CERN Document Server

    Carrassi, Alberto

    2015-01-01

    This chapter describes a novel approach for the treatment of model error in geophysical data assimilation. In this method, model error is treated as a deterministic process fully correlated in time. This allows for the derivation of the evolution equations for the relevant moments of the model error statistics required in data assimilation procedures, along with an approximation suitable for application to large numerical models typical of environmental science. In this contribution we first derive the equations for the model error dynamics in the general case, and then for the particular situation of parametric error. We show how this deterministic description of the model error can be incorporated in sequential and variational data assimilation procedures. A numerical comparison with standard methods is given using low-order dynamical systems, prototypes of atmospheric circulation, and a realistic soil model. The deterministic approach proves to be very competitive with only minor additional computational c...

  1. Optimal timing of insecticide fogging to minimize dengue cases: modeling dengue transmission among various seasonalities and transmission intensities.

    Directory of Open Access Journals (Sweden)

    Mika Oki

    2011-10-01

    Full Text Available BACKGROUND: Dengue infection is endemic in many regions throughout the world. While insecticide fogging targeting the vector mosquito Aedes aegypti is a major control measure against dengue epidemics, the impact of this method remains controversial. A previous mathematical simulation study indicated that insecticide fogging minimized cases when conducted soon after peak disease prevalence, although the impact was minimal, possibly because seasonality and population immunity were not considered. Periodic outbreak patterns are also highly influenced by seasonal climatic conditions. Thus, these factors are important considerations when assessing the effect of vector control against dengue. We used mathematical simulations to identify the appropriate timing of insecticide fogging, considering seasonal change of vector populations, and to evaluate its impact on reducing dengue cases with various levels of transmission intensity. METHODOLOGY/PRINCIPAL FINDINGS: We created the Susceptible-Exposed-Infectious-Recovered (SEIR) model of dengue virus transmission. Mosquito lifespan was assumed to change seasonally and the optimal timing of insecticide fogging to minimize dengue incidence under various lengths of the wet season was investigated. We also assessed whether insecticide fogging was equally effective at higher and lower endemic levels by running simulations over a 500-year period with various transmission intensities to produce an endemic state. In contrast to the previous study, the optimal application of insecticide fogging was between the onset of the wet season and the prevalence peak. Although it has less impact in areas that have higher endemicity and longer wet seasons, insecticide fogging can prevent a considerable number of dengue cases if applied at the optimal time. CONCLUSIONS/SIGNIFICANCE: The optimal timing of insecticide fogging and its impact on reducing dengue cases were greatly influenced by seasonality and the level of
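
    A minimal SEIR sketch with seasonally forced transmission shows the kind of machinery involved. The seasonal sine term here is a crude stand-in for the paper's seasonally varying mosquito lifespan, and every parameter value is hypothetical.

```python
import numpy as np

# Human SEIR with a seasonally varying transmission rate as a proxy for
# seasonal mosquito abundance. All parameter values are hypothetical.
N = 1e5
beta0, amp = 0.4, 0.3        # mean transmission rate and seasonal amplitude
sigma, gamma = 1 / 5, 1 / 7  # incubation and recovery rates (per day)
dt, days = 0.1, 365.0
S, E, I, R = N - 10.0, 0.0, 10.0, 0.0

for step in range(int(days / dt)):
    t = step * dt
    beta = beta0 * (1.0 + amp * np.sin(2.0 * np.pi * t / 365.0))
    new_exposed = beta * S * I / N * dt      # S -> E (forward Euler step)
    new_infectious = sigma * E * dt          # E -> I
    new_recovered = gamma * I * dt           # I -> R
    S -= new_exposed
    E += new_exposed - new_infectious
    I += new_infectious - new_recovered
    R += new_recovered
```

    Shifting the timing of a fogging intervention in such a model amounts to temporarily reducing beta during a chosen window and comparing cumulative incidence, which is essentially the experiment the paper runs.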

  2. A deterministic width function model

    Directory of Open Access Journals (Sweden)

    C. E. Puente

    2003-01-01

    Full Text Available Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features like their overall shape and texture and their observed power-law scaling on their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.
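
    A fractal interpolating function can be sketched as an iterated function system of affine maps pinned to the interpolation points, with vertical scaling factors controlling the texture of the graph. This is a generic FIF construction, not the authors' calibrated FM procedure; the interpolation points and scaling factors are hypothetical.

```python
import numpy as np

# Interpolation points and vertical scaling factors (|d_i| < 1), hypothetical.
px = np.array([0.0, 0.4, 1.0])
py = np.array([0.0, 1.0, 0.2])
d = np.array([0.5, -0.4])
x0, xN, y0, yN = px[0], px[-1], py[0], py[-1]

maps = []
for i in range(1, len(px)):
    # w_i(x, y) = (a x + e, c x + d y + f) maps the whole graph onto the
    # piece over [px[i-1], px[i]], pinned to the segment endpoints.
    a = (px[i] - px[i - 1]) / (xN - x0)
    e = px[i - 1] - a * x0
    c = (py[i] - py[i - 1] - d[i - 1] * (yN - y0)) / (xN - x0)
    f = py[i - 1] - d[i - 1] * y0 - c * x0
    maps.append((a, e, c, d[i - 1], f))

def apply_map(m, x, y):
    a, e, c, dd, f = m
    return a * x + e, c * x + dd * y + f
```

    Iterating randomly chosen maps from an arbitrary starting point (the chaos game) traces the attractor, which is the graph of a continuous function through the interpolation points; the d_i tune its roughness, which is how the FM approach matches the texture of observed width functions.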

  3. Deterministic Circular Self Test Path

    Institute of Scientific and Technical Information of China (English)

    WEN Ke; HU Yu; LI Xiaowei

    2007-01-01

    Circular self test path (CSTP) is an attractive technique for testing digital integrated circuits (ICs) in the nanometer era, because it can easily provide at-speed test with small test data volume and short test application time. However, CSTP cannot reliably attain high fault coverage because of the difficulty of testing random-pattern-resistant faults. This paper presents a deterministic CSTP (DCSTP) structure that consists of a DCSTP chain and jumping logic, to attain high fault coverage with low area overhead. Experimental results on ISCAS'89 benchmarks show that 100% fault coverage can be obtained with low area overhead and CPU time, especially for large circuits.

  4. Minimally invasive direct coronary artery bypass plus coronary stent for acute coronary syndrome: a case report

    Institute of Scientific and Technical Information of China (English)

    Caiyi Lu; Gang Wang; Qi Zhou; Jinwen Tian; Lei Gao; Shenhua Zhou; Jinyue Zhai; Rui Chen; Zhongren Zhao; Cangqing Gao; Shiwen Wang; Yuxiao Zhang; Ming Yang; Qiao Xue; Cangsong Xiao; Wei Gao; Yang Wu

    2008-01-01

    A 69-year-old female patient was admitted because of 3 days of worsened chest pain. Coronary angiography showed 60% stenosis of the distal left main stem, chronic total occlusion of the left anterior descending artery (LAD), 70% stenosis at the ostium of a small left circumflex, 70-90% stenosis at the proximal and middle parts of a dominant right coronary artery (RCA), and a normal left internal mammary artery (LIMA) with normal origin and orientation. Percutaneous intervention was attempted on the occluded LAD lesion but failed. The patient received minimally invasive direct coronary artery bypass (MIDCAB) with LIMA isolation by da Vinci robot. Eleven days later, the RCA lesion was treated by percutaneous implantation of sirolimus-eluting stents, and the patient was discharged uneventfully after 3 days of hospitalization. Our experience suggests that a two-stop hybrid technique is feasible and safe in the treatment of elderly patients with multiple coronary lesions.

  5. Adverse reaction after hyaluronan injection for minimally invasive papilla volume augmentation. A report on two cases

    DEFF Research Database (Denmark)

    Bertl, Kristina; Gotfredsen, Klaus; Jensen, Simon S;

    2016-01-01

    OBJECTIVES: To report two cases of adverse reaction after mucosal hyaluronan (HY) injection around implant-supported crowns, with the aim to augment the missing interdental papilla. MATERIAL AND METHODS: Two patients with single, non-neighbouring, implants in the anterior maxilla, who were treated...... directly above the mucogingival junction, (ii) injection into the attached gingiva/mucosa below the missing papilla, and (iii) injection 2-3 mm apically to the papilla tip. The whole-injection session was repeated once after approximately 4 weeks. RESULTS: Both patients presented with swelling and extreme...... tenderness with a burning sensation on the lip next to the injection area, after the second injection session. In one of the cases, a net-like skin discoloration (livedo reticularis) was also noted. The symptoms lasted for up to 7 days, and in both cases, symptoms resolved without any signs of skin...

  6. Full Mouth Rehabilitation by Minimally Invasive Cosmetic Dentistry Coupled with Computer Guided Occlusal Analysis: A Case Report.

    Science.gov (United States)

    Sarita; Thumati, Prafulla

    2014-12-01

    Evidence of dentistry dates back to 7000 B.C., and the field has since come a long, sophisticated way in the treatment and management of dental patients. There have been admirable advances in prosthodontics in both techniques and materials, enabling the production of artificial teeth that feel, function and appear entirely natural. The following case report describes the management of maxillary edentulousness with a removable complete denture, and of mandibular attrition and missing teeth with onlays and FPDs, following the concept of minimally invasive cosmetic dentistry. Computer-guided occlusal analysis was used to guide sequential occlusal adjustments to obtain measurable, simultaneous bilateral occlusal contacts.

  7. Spontaneous spinal epidural hematoma management with minimally invasive surgery through tubular retractors: A case report and review of the literature.

    Science.gov (United States)

    Fu, Chao-Feng; Zhuang, Yuan-Dong; Chen, Chun-Mei; Cai, Gang-Feng; Zhang, Hua-Bin; Zhao, Wei; Ahmada, Said Idrissa; Devi, Ramparsad Doorga; Kibria, Md Golam

    2016-06-01

    To report a minimally invasive paraspinal approach in the treatment of a case of spontaneous spinal epidural hematoma (SSEH), and to review the relevant literature to enhance our knowledge of this disease. SSEH is an uncommon but potentially catastrophic disease. Currently, the most appropriate management is emergency decompression laminectomy and hematoma evacuation. An 81-year-old woman was admitted to the neurology department with a chief complaint of bilateral numbness and weakness of the lower limbs and difficulty walking for 4 days, with progressive weakness over the following 3 days accompanied by pain in the lower limbs and lower back. No history of trauma was reported. Magnetic resonance imaging of the thoracolumbar spine demonstrated an epidural hematoma extending from T-12 to L-5, displacing the thecal sac and cauda equina anteriorly. The patient was treated in our department with a minimally invasive approach; the operative method had been approved by the Chinese Independent Ethics Committee. Three months following the operation, the patient had regained the ability to walk with the aid of a cane, and myodynamia tests revealed normal results for the left lower limb and a 4/5 grade for the right limb. Importantly, no complications resulted from the surgical operation. The minimally invasive paraspinal approach through tubular retractors is demonstrated here as an effective alternative method for the treatment of SSEH. PMID:27367986

  8. MANAGEMENT OF PERIPROSTHETIC DISTAL FEMORAL FRACTURE AFTER TOTAL KNEE ARTHROPLASTY USING MINIMALLY INVASIVE PLATE OSTEOSYNTHESIS: A CASE REPORT

    Directory of Open Access Journals (Sweden)

    Reddy

    2015-07-01

    Full Text Available CONTEXT: The approximate incidence of periprosthetic supracondylar femur fractures after total knee arthroplasty ranges from 0.3 to 2.5 percent. Various methods of treatment of these fractures have been suggested in the past, such as conservative management, open reduction and plate fixation, and intramedullary nailing. However, there were complications like pain, stiffness, infection and delayed union. Minimally invasive plate osteosynthesis (MIPO) is a relatively newer technique in the treatment of distal femoral fractures, as it preserves the periosteal blood supply and bone perfusion as well as minimizes soft tissue dissection. AIM: To evaluate the effectiveness of the MIPO technique in the treatment of periprosthetic distal femoral fracture. SETTINGS AND DESIGN: In this study, we present a case report of a 54-year-old female patient who sustained a type 2 (Rorabeck et al. classification) periprosthetic distal femoral fracture after TKA. Her fracture fixation was done with distal femoral locking plates using a minimally invasive technique. METHODS AND MATERIAL: We evaluated the clinical (using the Oxford knee scoring system) and radiological outcomes of the patient until six months post-operatively. Radiologically, the fracture showed complete union, and she regained her full range of knee motion by the end of three months. CONCLUSION: We conclude that MIPO can be considered an effective surgical treatment option in the management of periprosthetic distal femoral fractures after TKA.

  9. The Significance of Minimally Invasive Core Needle Biopsy and Immunohistochemistry Analysis in 235 Cases with Breast Lesions

    Institute of Scientific and Technical Information of China (English)

    Yun Niu; Tieju Liu; Xuchen Cao; Xiumin Ding; Li Wei; Yuxia Gao; Jun Liu

    2009-01-01

    OBJECTIVE To evaluate core needle biopsy (CNB) as a minimally invasive method to examine breast lesions and discuss the clinical significance of subsequent immunohistochemistry (IHC) analysis. METHODS The clinical data and pathological results of 235 patients with breast lesions, who received CNB before surgery, were analyzed and compared. Based on the results of CNB done before surgery, 87 out of 204 patients diagnosed as invasive carcinoma were subjected to immunodetection for p53, c-erbB-2, ER and PR. The morphological change of cancer tissues in response to chemotherapy was also evaluated. RESULTS Of the 235 cases receiving CNB examination, 204 were diagnosed as invasive carcinoma, reaching a 100% consistency rate with the surgical diagnosis. Sixty percent of the cases diagnosed as non-invasive carcinoma by CNB were identified to have invading elements in surgical specimens, and similarly, 50% of the cases diagnosed as atypical ductal hyperplasia by CNB were confirmed to be carcinoma by the subsequent result of excision biopsy. There was no significant difference between the CNB biopsy and regular surgical samples in the positive rate of immunohistochemistry analysis (p53, c-erbB-2, ER and PR; P > 0.05). However, there was a significant difference in the expression rate of p53 and c-erbB-2 between the cases with and without morphological change in response to chemotherapy (P < 0.05). In most cases with p53 and c-erbB-2 positive, there was no obvious morphological change after chemotherapy. CONCLUSION CNB is a cost-effective diagnostic method with minimal invasion for breast lesions, although it still has some limitations. Immunodetection on CNB tissue is expected to have great significance in clinical applications.

  10. Integrated Deterministic-Probabilistic Safety Assessment Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Kudinov, P.; Vorobyev, Y.; Sanchez-Perea, M.; Queral, C.; Jimenez Varas, G.; Rebollo, M. J.; Mena, L.; Gomez-Magin, J.

    2014-02-01

    IDPSA (Integrated Deterministic-Probabilistic Safety Assessment) is a family of methods which use tightly coupled probabilistic and deterministic approaches to address respective sources of uncertainties, enabling risk-informed decision making in a consistent manner. The starting point of the IDPSA framework is that safety justification must be based on the coupling of deterministic (consequences) and probabilistic (frequency) considerations to address the mutual interactions between stochastic disturbances (e.g. failures of the equipment, human actions, stochastic physical phenomena) and deterministic response of the plant (i.e. transients). This paper gives a general overview of some IDPSA methods as well as some possible applications to PWR safety analyses. (Author)

  11. Integrated Deterministic-Probabilistic Safety Assessment Methodologies

    International Nuclear Information System (INIS)

    IDPSA (Integrated Deterministic-Probabilistic Safety Assessment) is a family of methods which use tightly coupled probabilistic and deterministic approaches to address respective sources of uncertainties, enabling risk-informed decision making in a consistent manner. The starting point of the IDPSA framework is that safety justification must be based on the coupling of deterministic (consequences) and probabilistic (frequency) considerations to address the mutual interactions between stochastic disturbances (e.g. failures of the equipment, human actions, stochastic physical phenomena) and deterministic response of the plant (i.e. transients). This paper gives a general overview of some IDPSA methods as well as some possible applications to PWR safety analyses. (Author)

  12. Deterministic mean-variance-optimal consumption and investment

    DEFF Research Database (Denmark)

    Christiansen, Marcus; Steffensen, Mogens

    2013-01-01

    In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution, including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically, and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies.

  13. Minimal invasive surgery for unicameral bone cyst using demineralized bone matrix: a case series

    Directory of Open Access Journals (Sweden)

    Cho Hwan

    2012-07-01

    Full Text Available Abstract Background Various treatments for unicameral bone cyst have been proposed. Recent concern focuses on the effectiveness of closed methods. This study evaluated the effectiveness of demineralized bone matrix as a graft material after intramedullary decompression for the treatment of unicameral bone cysts. Methods Between October 2008 and June 2010, twenty-five patients with a unicameral bone cyst were treated with intramedullary decompression followed by grafting of demineralized bone matrix. There were 21 male and 4 female patients with a mean age of 11.1 years (range, 3–19 years). The proximal metaphysis of the humerus was affected in 12 patients, the proximal femur in five, the calcaneum in three, the distal femur in two, the tibia in two, and the radius in one. There were 17 active cysts and 8 latent cysts. Radiologic change was evaluated according to a modified Neer classification. Time to healing was defined as the period required to achieve cortical thickening on the anteroposterior and lateral plain radiographs, as well as consolidation of the cyst. The patients were followed up for a mean period of 23.9 months (range, 15–36 months). Results Nineteen of 25 cysts had completely consolidated after a single procedure. The mean time to healing was 6.6 months (range, 3–12 months). Four had incomplete healing radiographically but no clinical symptoms, with enough cortical thickness to prevent fracture. None of these four cysts needed a second intervention until the last follow-up. Two of 25 patients required a second intervention because of cyst recurrence; both showed radiographic healing of the cyst after a mean of 10 additional months of follow-up. Conclusions A minimally invasive technique including the injection of DBM could serve as an excellent treatment method for unicameral bone cysts.

  14. Full-mouth adhesive rehabilitation in case of severe dental erosion, a minimally invasive approach following the 3-step technique.

    Science.gov (United States)

    Grütter, Linda; Vailati, Francesca

    2013-01-01

    A full-mouth adhesive rehabilitation in a case of severe dental erosion may present a challenge for both the clinician and the laboratory technician, not only because of the multiple teeth to be restored, but also because of a treatment schedule that is difficult to fit into the busy agenda of a private practice. Thanks to the simplicity of the 3-step technique, full-mouth rehabilitations become easier to handle. In this article the treatment of a very compromised case of dental erosion (ACE class V) is illustrated, implementing only adhesive techniques. The very pleasing clinical outcome was the result of the esthetic, mechanical and, most of all, biological success achieved, confirming that minimally invasive dentistry should always be the driving motor of any rehabilitation, especially in patients who have already suffered conspicuous tooth destruction.

  15. Survivability of Deterministic Dynamical Systems.

    Science.gov (United States)

    Hellmann, Frank; Schultz, Paul; Grabow, Carsten; Heitzig, Jobst; Kurths, Jürgen

    2016-01-01

    The notion of a part of phase space containing desired (or allowed) states of a dynamical system is important in a wide range of complex systems research. It has been called the safe operating space, the viability kernel or the sunny region. In this paper we define the notion of survivability: Given a random initial condition, what is the likelihood that the transient behaviour of a deterministic system does not leave a region of desirable states. We demonstrate the utility of this novel stability measure by considering models from climate science, neuronal networks and power grids. We also show that a semi-analytic lower bound for the survivability of linear systems allows a numerically very efficient survivability analysis in realistic models of power grids. Our numerical and semi-analytic work underlines that the type of stability measured by survivability is not captured by common asymptotic stability measures. PMID:27405955

  16. Survivability of Deterministic Dynamical Systems

    Science.gov (United States)

    Hellmann, Frank; Schultz, Paul; Grabow, Carsten; Heitzig, Jobst; Kurths, Jürgen

    2016-07-01

    The notion of a part of phase space containing desired (or allowed) states of a dynamical system is important in a wide range of complex systems research. It has been called the safe operating space, the viability kernel or the sunny region. In this paper we define the notion of survivability: Given a random initial condition, what is the likelihood that the transient behaviour of a deterministic system does not leave a region of desirable states. We demonstrate the utility of this novel stability measure by considering models from climate science, neuronal networks and power grids. We also show that a semi-analytic lower bound for the survivability of linear systems allows a numerically very efficient survivability analysis in realistic models of power grids. Our numerical and semi-analytic work underlines that the type of stability measured by survivability is not captured by common asymptotic stability measures.
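    The survivability measure defined above, the likelihood that a random initial condition's transient stays inside the desirable region, lends itself to a straightforward Monte Carlo estimate. The damped oscillator and the box-shaped region below are illustrative choices, not the models used in the paper:

```python
# Hedged Monte Carlo sketch of survivability: the fraction of random initial
# conditions whose transient never leaves a desirable region. The damped
# oscillator and the box |x|,|y| <= 2 are illustrative, not the paper's models.
import random

def stays_inside(x0, y0, steps=200, dt=0.05, bound=2.0):
    x, y = x0, y0
    for _ in range(steps):
        # Euler step of dx/dt = y, dy/dt = -x - 0.3*y (damped oscillator).
        x, y = x + dt * y, y + dt * (-x - 0.3 * y)
        if max(abs(x), abs(y)) > bound:     # transient left the region
            return False
    return True

def survivability(samples=2000, seed=1):
    rng = random.Random(seed)
    hits = sum(stays_inside(rng.uniform(-2, 2), rng.uniform(-2, 2))
               for _ in range(samples))
    return hits / samples

estimate = survivability()
```

Initial conditions near the origin decay and survive, while those near the corners transiently overshoot the region and do not, so the estimate lies strictly between 0 and 1; this is the kind of quantity the paper bounds semi-analytically for linear systems.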

  17. Piecewise deterministic Markov processes : an analytic approach

    NARCIS (Netherlands)

    Alkurdi, Taleb Salameh Odeh

    2013-01-01

    The subject of this thesis, piecewise deterministic Markov processes, an analytic approach, is on the border between analysis and probability theory. Such processes can either be viewed as random perturbations of deterministic dynamical systems in an impulsive fashion, or as a particular kind of sto

  18. Generalizing Several Theoretical Deterministic Secure Direct Bidirectional Communications to Improve Their Capacities

    OpenAIRE

    Zhang, Z. J.; Man, Z. X.

    2004-01-01

    Several theoretical Deterministic Secure Direct Bidirectional Communication protocols are generalized to improve their capacities by introducing the superdense-coding in the case of high-dimension quantum states.

  19. Control rod worth calculations using deterministic and stochastic methods

    Energy Technology Data Exchange (ETDEWEB)

    Varvayanni, M. [NCSR ' DEMOKRITOS' , PO Box 60228, 15310 Aghia Paraskevi (Greece); Savva, P., E-mail: melina@ipta.demokritos.g [NCSR ' DEMOKRITOS' , PO Box 60228, 15310 Aghia Paraskevi (Greece); Catsaros, N. [NCSR ' DEMOKRITOS' , PO Box 60228, 15310 Aghia Paraskevi (Greece)

    2009-11-15

    Knowledge of the efficiency of a control rod to absorb excess reactivity in a nuclear reactor, i.e. knowledge of its reactivity worth, is very important from many points of view. These include the analysis and the assessment of the shutdown margin of new core configurations (upgrade, conversion, refuelling, etc.) as well as several operational needs, such as calibration of the control rods, e.g. when reactivity insertion experiments are planned. The control rod worth can be assessed either experimentally or theoretically, mainly through the utilization of neutronic codes. In the present work two different theoretical approaches, i.e. a deterministic and a stochastic one, are used for the estimation of the integral and the differential worth of two control rods utilized in the Greek Research Reactor (GRR-1). The deterministic approach uses the neutronics code system SCALE (modules NITAWL/XSDRNPM) and CITATION, while the stochastic one uses the Monte Carlo code TRIPOLI. Both approaches follow the procedure of reactivity insertion steps and their results are tested against measurements conducted in the reactor. The goal of this work is to examine the capability of a deterministic code system to reliably simulate the worth of a control rod, based also on comparisons with the detailed Monte Carlo simulation, while various options are tested with respect to the reliability of the deterministic results.
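    The reactivity bookkeeping behind such comparisons is simple and worth making explicit: the integral worth is the reactivity difference between the rod-out and rod-in core states. As a hedged numeric sketch, with invented k-eff values rather than outputs of SCALE/CITATION or TRIPOLI:

```python
# Hedged numeric illustration of integral rod worth from two multiplication
# factors; the k-eff values are invented, not code results from the paper.

def reactivity(k_eff):
    # rho = (k - 1) / k, in absolute (dk/k) units.
    return (k_eff - 1.0) / k_eff

def rod_worth_pcm(k_rod_out, k_rod_in):
    # Integral worth = reactivity difference, expressed in pcm (1e-5 dk/k).
    return (reactivity(k_rod_out) - reactivity(k_rod_in)) * 1e5

worth = rod_worth_pcm(1.0050, 0.9930)
```

With these invented values the worth is about 1200 pcm; a differential worth curve is obtained by repeating this difference over stepwise rod positions, which is what the reactivity-insertion-step procedure in the paper does.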

  20. Maximizing Outcomes While Minimizing Morbidity: An Illustrated Case Review of Elbow Soft Tissue Reconstruction

    Science.gov (United States)

    Ooi, Adrian; Ng, Jonathan; Chui, Christopher; Goh, Terence; Tan, Bien Keem

    2016-01-01

    Background. Injuries to the elbow have led to consequences varying from significant limitation in function to loss of the entire upper limb. Soft tissue reconstruction with durable and pliable coverage balanced with the ability to mobilize the joint early to optimize rehabilitation outcomes is paramount. Methods. Methods of flap reconstruction have evolved from local and pedicled flaps to perforator-based flaps and free tissue transfer. Here we performed a review of 20 patients who have undergone flap reconstruction of the elbow at our institution. Discussion. 20 consecutive patients were identified and included in this study. Flap types include local (n = 5), regional pedicled (n = 7), and free (n = 8) flaps. The average size of defect was 138 cm2 (range 36–420 cm2). There were no flap failures in our series, and, at follow-up, the average range of movement of elbow flexion was 100°. Results. While the pedicled latissimus dorsi flap is the workhorse for elbow soft tissue coverage, advancements in microvascular knowledge and surgery have brought about great benefit, with the use of perforator flaps and free tissue transfer for wound coverage. Conclusion. We present here our case series on elbow reconstruction and an abbreviated algorithm on flap choice, highlighting our decision making process in the selection of safe flap choice for soft tissue elbow reconstruction. PMID:27313886

  1. A mesh adaptivity scheme on the Landau-de Gennes functional minimization case in 3D, and its driving efficiency

    CERN Document Server

    Bajc, Iztok; Žumer, Slobodan

    2015-01-01

    This paper presents a 3D mesh adaptivity strategy on unstructured tetrahedral meshes by a posteriori error estimates based on metrics, studied on the case of a nonlinear finite element minimization scheme for the Landau-de Gennes free energy functional of nematic liquid crystals. Newton's iteration for tensor fields is employed, with the steepest descent method possibly stepping in. Aspects relating to the driving of mesh adaptivity within the nonlinear scheme are considered. The algorithmic performance is found to depend on at least two factors: when to trigger each single mesh adaptation, and the precision of the correlated remeshing. Each factor is represented by a parameter, with its values possibly varying for every new mesh adaptation. We empirically show that the time of the overall algorithm convergence can vary considerably when different sequences of parameters are used, thus posing a question about optimality. The extensive testing and debugging done within this work on the simulation of systems of nemati...

  2. Minimally Invasive Alveolar Ridge Preservation Utilizing an In Situ Hardening β-Tricalcium Phosphate Bone Substitute: A Multicenter Case Series

    Science.gov (United States)

    Leventis, Minas D.; Fairbairn, Peter; Kakar, Ashish; Leventis, Angelos D.; Margaritis, Vasileios; Lückerath, Walter; Horowitz, Robert A.; Rao, Bappanadu H.; Lindner, Annette; Nagursky, Heiner

    2016-01-01

    Ridge preservation measures, which include the filling of extraction sockets with bone substitutes, have been shown to reduce ridge resorption, while methods that do not require primary soft tissue closure minimize patient morbidity and decrease surgical time and cost. In a case series of 10 patients requiring single extraction, in situ hardening beta-tricalcium phosphate (β-TCP) granules coated with poly(lactic-co-glycolic acid) (PLGA) were utilized as a grafting material that does not necessitate primary wound closure. After 4 months, clinical observations revealed excellent soft tissue healing without loss of attached gingiva in all cases. At reentry for implant placement, bone core biopsies were obtained and primary implant stability was measured by final seating torque and resonance frequency analysis. Histological and histomorphometrical analysis revealed pronounced bone regeneration (24.4 ± 7.9% new bone) in parallel to the resorption of the grafting material (12.9 ± 7.7% graft material) while high levels of primary implant stability were recorded. Within the limits of this case series, the results suggest that β-TCP coated with polylactide can support new bone formation at postextraction sockets, while the properties of the material improve the handling and produce a stable and porous bone substitute scaffold in situ, facilitating the application of noninvasive surgical techniques. PMID:27190516

  3. Minimally Invasive Alveolar Ridge Preservation Utilizing an In Situ Hardening β-Tricalcium Phosphate Bone Substitute: A Multicenter Case Series

    Directory of Open Access Journals (Sweden)

    Minas D. Leventis

    2016-01-01

    Full Text Available Ridge preservation measures, which include the filling of extraction sockets with bone substitutes, have been shown to reduce ridge resorption, while methods that do not require primary soft tissue closure minimize patient morbidity and decrease surgical time and cost. In a case series of 10 patients requiring single extraction, in situ hardening beta-tricalcium phosphate (β-TCP) granules coated with poly(lactic-co-glycolic acid) (PLGA) were utilized as a grafting material that does not necessitate primary wound closure. After 4 months, clinical observations revealed excellent soft tissue healing without loss of attached gingiva in all cases. At reentry for implant placement, bone core biopsies were obtained and primary implant stability was measured by final seating torque and resonance frequency analysis. Histological and histomorphometrical analysis revealed pronounced bone regeneration (24.4 ± 7.9% new bone) in parallel to the resorption of the grafting material (12.9 ± 7.7% graft material) while high levels of primary implant stability were recorded. Within the limits of this case series, the results suggest that β-TCP coated with polylactide can support new bone formation at postextraction sockets, while the properties of the material improve the handling and produce a stable and porous bone substitute scaffold in situ, facilitating the application of noninvasive surgical techniques.

  4. Accomplishing Deterministic XML Query Optimization

    Institute of Scientific and Technical Information of China (English)

    Dun-Ren Che

    2005-01-01

    As the popularity of XML (eXtensible Markup Language) keeps growing rapidly, the management of XML compliant structured-document databases has become a very interesting and compelling research area. Query optimization for XML structured-documents stands out as one of the most challenging research issues in this area because of the much enlarged optimization (search) space, which is a consequence of the intrinsic complexity of the underlying data model of XML data. We therefore propose to apply deterministic transformations on query expressions to most aggressively prune the search space and fast achieve a sufficiently improved alternative (if not the optimal) for each incoming query expression. This idea is not just exciting but practically attainable. This paper first provides an overview of our optimization strategy, and then focuses on the key implementation issues of our rule-based transformation system for XML query optimization in a database environment. The performance results we obtained from experimentation show that our approach is a valid and effective one.
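    The core mechanism described, deterministic transformations applied to query expressions until no rule fires, can be sketched generically as a term-rewriting loop. The expression encoding and the filter-merging rule below are invented for illustration and are not the paper's actual transformation system:

```python
# Toy deterministic rewrite system over nested tuples standing in for XML
# query expression trees; the rule set is invented for illustration.

def rewrite_once(expr, rules):
    # Apply each rule at the root (if it matches), then recurse into children.
    for rule in rules:
        out = rule(expr)
        if out is not None:
            expr = out
    if isinstance(expr, tuple):
        return expr[:1] + tuple(rewrite_once(c, rules) for c in expr[1:])
    return expr

def rewrite_to_fixpoint(expr, rules, limit=100):
    # Deterministic rules + bounded iteration: stop once nothing changes.
    for _ in range(limit):
        nxt = rewrite_once(expr, rules)
        if nxt == expr:
            return nxt
        expr = nxt
    return expr

def merge_filters(expr):
    # Example rule: collapse filter(p1, filter(p2, q)) into one conjunction.
    if isinstance(expr, tuple) and expr[0] == "filter":
        _, pred, child = expr
        if isinstance(child, tuple) and child[0] == "filter":
            _, pred2, grandchild = child
            return ("filter", ("and", pred, pred2), grandchild)
    return None

query = ("filter", "p1", ("filter", "p2", ("scan", "doc")))
optimized = rewrite_to_fixpoint(query, [merge_filters])
```

Because every rule is deterministic and strictly simplifying, the loop converges to a single improved expression without searching an exponential plan space, which is the pruning idea the abstract advocates.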

  5. Constructing stochastic models from deterministic process equations by propensity adjustment

    Directory of Open Access Journals (Sweden)

    Wu Jialiang

    2011-11-01

    Full Text Available Abstract Background Gillespie's stochastic simulation algorithm (SSA) for chemical reactions admits three kinds of elementary processes, namely, mass action reactions of 0th, 1st or 2nd order. All other types of reaction processes, for instance those containing non-integer kinetic orders or following other types of kinetic laws, are assumed to be convertible to one of the three elementary kinds, so that SSA can validly be applied. However, the conversion to elementary reactions is often difficult, if not impossible. Within deterministic contexts, a strategy of model reduction is often used. Such a reduction simplifies the actual system of reactions by merging or approximating intermediate steps and omitting reactants such as transient complexes. It would be valuable to adopt a similar reduction strategy to stochastic modelling. Indeed, efforts have been devoted to manipulating the chemical master equation (CME) in order to achieve a proper propensity function for a reduced stochastic system. However, manipulations of CME are almost always complicated, and successes have been limited to relatively simple cases. Results We propose a rather general strategy for converting a deterministic process model into a corresponding stochastic model and characterize the mathematical connections between the two. The deterministic framework is assumed to be a generalized mass action system and the stochastic analogue is in the format of the chemical master equation. The analysis identifies situations: where a direct conversion is valid; where internal noise affecting the system needs to be taken into account; and where the propensity function must be mathematically adjusted. The conversion from deterministic to stochastic models is illustrated with several representative examples, including reversible reactions with feedback controls, Michaelis-Menten enzyme kinetics, a genetic regulatory motif, and stochastic focusing. 
Conclusions The construction of a stochastic
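    For reference, Gillespie's SSA for a system that is already in elementary form fits in a few lines. The reversible isomerization A ⇌ B and its rate constants below are illustrative, not an example from the paper:

```python
# Minimal Gillespie SSA for the elementary reversible reaction A <-> B.
# Rate constants and initial counts are illustrative values.
import random

def gillespie_ab(a=100, b=0, k1=1.0, k2=0.5, t_end=50.0, seed=7):
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        props = (k1 * a, k2 * b)            # propensities of A->B and B->A
        total = props[0] + props[1]
        if total == 0.0:
            break                           # no reaction can fire
        t += rng.expovariate(total)         # exponential waiting time
        if rng.random() * total < props[0]:
            a, b = a - 1, b + 1             # fire A -> B
        else:
            a, b = a + 1, b - 1             # fire B -> A
    return a, b

a, b = gillespie_ab()
```

At stationarity the expected copy number of A is n·k2/(k1+k2), about 33 here. Non-elementary kinetics, where the propensities cannot be read off the rate law this directly, are exactly what the paper's propensity-adjustment strategy targets.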

  6. Detecting deterministic dynamics of cardiac rhythm

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Under the acceptable hypothesis that cardiac rhythm is an approximately deterministic process with a small-scale noise component, a practical way is provided to construct a model that reflects the prominent dynamics of the deterministic component. When applied to the analysis of 19 heart rate data sets, three main findings are stated: the obtained model can reflect the prominent dynamics of the deterministic component of cardiac rhythm; cardiac chaos is established in a reliable way; and dynamical noise plays an important role in the generation of complex cardiac rhythm.

  7. Gamma Probe Guided Minimally Invasive Parathyroidectomy without Quick Parathyroid Hormone Measurement in the Cases of Solitary Parathyroid Adenomas

    Directory of Open Access Journals (Sweden)

    Savaş Karyağar

    2013-04-01

    Full Text Available Objective: In this study, our aim was to study the efficiency of gamma probe guided minimally invasive parathyroidectomy (GP-MIP), conducted without intra-operative quick parathyroid hormone (QPTH) measurement, in cases of solitary parathyroid adenomas (SPA) detected with USG and dual-phase 99mTc-MIBI parathyroid scintigraphy (PS) in the preoperative period. Material and Methods: This clinical study was performed in 31 SPA patients (27 female, 4 male; mean age 51±11 years) between February 2006 and January 2009. All patients were operated within 30 days after detection of the SPA with dual-phase 99mTc-MIBI PS and USG. The GP-MIP was done 90-120 min after the iv injection of 740 MBq 99mTc-MIBI. The GP-MIP was performed under local anesthesia in all cases except one patient, in whom general anesthesia was chosen due to the large size of the SPA. Results: The operation time was 30-60 min (mean 38.2±7 min). On the first postoperative day, there was a more than 50% decrease in PTH levels in all patients, and all but one had normal serum calcium levels. Transient hypocalcemia was detected in one patient. Conclusion: GP-MIP without intra-operative QPTH measurement is a suitable method in the surgical treatment of SPA detected by dual-phase 99mTc-MIBI PS and USG.

  8. Cell sorting by deterministic cell rolling

    OpenAIRE

    Choi, Sungyoung; Karp, Jeffrey M.; Karnik, Rohit

    2011-01-01

    This communication presents the concept of “deterministic cell rolling”, which leverages transient cell-surface molecular interactions that mediate cell rolling to sort cells with high purity and efficiency in a single step.

  9. Linear systems control deterministic and stochastic methods

    CERN Document Server

    Hendricks, Elbert; Sørensen, Paul Haase

    2008-01-01

    Linear Systems Control provides a very readable graduate text giving a good foundation for reading more rigorous texts. There are multiple examples, problems and solutions. This unique book successfully combines stochastic and deterministic methods.

  10. A Deterministic and Polynomial Modified Perceptron Algorithm

    Directory of Open Access Journals (Sweden)

    Olof Barr

    2006-01-01

    Full Text Available We construct a modified perceptron algorithm that is deterministic, polynomial and also as fast as previously known algorithms. The algorithm runs in time O(mn³ log n log(1/ρ)), where m is the number of examples, n the number of dimensions and ρ is approximately the size of the margin. We also construct a non-deterministic modified perceptron algorithm running in time O(mn² log n log(1/ρ)).
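
    For context, the classical perceptron update that the modified algorithm builds on can be sketched as follows; this is a minimal generic sketch, not the deterministic, polynomial variant of the abstract, and all names are illustrative:

    ```python
    def perceptron(examples, labels, max_epochs=100):
        """Classical perceptron: repeatedly add y*x to the weight vector w
        whenever example x with label y in {-1, +1} is misclassified."""
        n = len(examples[0])
        w = [0.0] * n
        for _ in range(max_epochs):
            updated = False
            for x, y in zip(examples, labels):
                # misclassified (or on the boundary): apply the update rule
                if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                    w = [wi + y * xi for wi, xi in zip(w, x)]
                    updated = True
            if not updated:  # all examples correctly classified
                break
        return w
    ```

    On linearly separable data the loop terminates with a separating hyperplane; the margin ρ in the abstract controls how many updates that takes.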

  11. Deterministic algorithm with agglomerative heuristic for location problems

    Science.gov (United States)

    Kazakovtsev, L.; Stupina, A.

    2015-10-01

    The authors consider the clustering problem solved with the k-means method and the p-median problem with various distance metrics. The p-median problem, and the k-means problem as its special case, are among the most popular models of location theory. They are used for solving clustering problems and many practically important logistic problems such as optimal location of factories or warehouses, oil or gas wells, optimal offshore drilling for oil, and steam generators in heavy oil fields. The authors propose a new deterministic heuristic algorithm based on ideas of the Information Bottleneck Clustering and genetic algorithms with a greedy heuristic. In this paper, results of running the new algorithm on various data sets are given in comparison with known deterministic and stochastic methods. The new algorithm is shown to be significantly faster than the Information Bottleneck Clustering method while having comparable precision.
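
    For reference, the k-means special case mentioned here is usually solved by Lloyd's iteration; a minimal sketch with deterministic seeding from the first k points (the authors' agglomerative greedy heuristic is not reproduced, and all names are illustrative):

    ```python
    def kmeans(points, k, iters=100):
        """Lloyd's k-means: alternate nearest-center assignment and
        center recomputation until the centers stop moving."""
        centers = [tuple(p) for p in points[:k]]  # deterministic seeding
        for _ in range(iters):
            # assignment step: each point joins its nearest center
            clusters = [[] for _ in range(k)]
            for p in points:
                j = min(range(k),
                        key=lambda c: sum((a - b) ** 2
                                          for a, b in zip(p, centers[c])))
                clusters[j].append(p)
            # update step: move each center to its cluster mean
            new_centers = [
                tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl else centers[j]
                for j, cl in enumerate(clusters)
            ]
            if new_centers == centers:  # converged
                break
            centers = new_centers
        return centers
    ```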

  12. Minimally invasive technologies in the treatment of closed fractures of the intercondylar elevation of the knee: a clinical case

    Directory of Open Access Journals (Sweden)

    Евгений Владимирович Ворончихин

    2015-12-01

    Full Text Available This article presents a clinical case of the surgical treatment of a fracture of the intercondylar eminence of the knee joint in a 7-year-old child. Closed fractures of the intercondylar eminence are mainly characteristic of childhood. This type of injury leads to dysfunction of the knee resulting from instability. Because a fracture of the intercondylar eminence of the knee joint in children is similar to damage of the anterior cruciate ligament in adults, the current trend in knee surgery is toward minimally invasive techniques. These include fixation of the intercondylar eminence under video guidance with the assistance of various implants (e.g., screw, wire, and Dacron). In the children's Department of Traumatology and Orthopedics of the Federal Center of Traumatology, Orthopedics and Endoprosthesis Replacement in Barnaul, various surgeries are performed, including arthroscopy of the right knee joint, reposition of the intercondylar eminence, and fixation of the intercondylar eminence with a Lupine latch (DePuy Mitek).

  13. A mesh adaptivity scheme on the Landau-de Gennes functional minimization case in 3D, and its driving efficiency

    Science.gov (United States)

    Bajc, Iztok; Hecht, Frédéric; Žumer, Slobodan

    2016-09-01

    This paper presents a 3D mesh adaptivity strategy on unstructured tetrahedral meshes by a posteriori error estimates based on metrics derived from the Hessian of a solution. The study is made on the case of a nonlinear finite element minimization scheme for the Landau-de Gennes free energy functional of nematic liquid crystals. Newton's iteration for tensor fields is employed, with the steepest descent method possibly stepping in. Aspects relating to the driving of mesh adaptivity within the nonlinear scheme are considered. The algorithmic performance is found to depend on at least two factors: when to trigger each single mesh adaptation, and the precision of the correlated remeshing. Each factor is represented by a parameter, with its values possibly varying for every new mesh adaptation. We empirically show that the time of the overall algorithm convergence can vary considerably when different sequences of parameters are used, thus posing a question about optimality. The extensive testing and debugging done within this work on the simulation of systems of nematic colloids contributed substantially to upgrading the 3D meshing capabilities of an open source finite element-oriented programming language, as well as an outer 3D remeshing module.

  14. Linear embedding of free energy minimization

    OpenAIRE

    Moussa, Jonathan E.

    2016-01-01

    Exact free energy minimization is a convex optimization problem that is usually approximated with stochastic sampling methods. Deterministic approximations have been less successful because many desirable properties have been difficult to attain. Such properties include the preservation of convexity, lower bounds on free energy, and applicability to systems without subsystem structure. We satisfy all of these properties by embedding free energy minimization into a linear program over energy-r...

  15. Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones

    Science.gov (United States)

    Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto

    2015-04-01

    Probabilistic seismic hazard assessment (PSHA), usually adopted in the framework of seismic code redaction, is based on a Poissonian description of the temporal occurrence, a negative exponential distribution of magnitude, and an attenuation relationship with log-normal distribution of PGA or response spectrum. The main positive aspect of this approach stems from the fact that it is presently a standard for the majority of countries, but there are weak points, in particular regarding the physical description of the earthquake phenomenon. Factors like site effects and source characteristics such as duration of the strong motion and directivity, which could significantly influence the expected motion at the site, are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their prediction reliability depends on the degree of knowledge of the source, wave propagation and soil parameters. We compare these two approaches in selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction phenomena, unusual for a magnitude less than 6. We focus on sites liquefiable because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions. The looser the soil and the higher the liquefaction potential, the more suitable is the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long been recognized as relevant for inducing liquefaction; unfortunately a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant in the choice of the approach. The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billions

  16. Effect of Uncertainty on Deterministic Runway Scheduling

    Science.gov (United States)

    Gupta, Gautam; Malik, Waqar; Jung, Yoon C.

    2012-01-01

    Active runway scheduling involves scheduling departures for takeoffs and arrivals for runway crossing subject to numerous constraints. This paper evaluates the effect of uncertainty on a deterministic runway scheduler. The evaluation is done against a first-come-first-serve (FCFS) scheme. In particular, the sequence from the deterministic scheduler is frozen and the times adjusted to satisfy all separation criteria; this approach is tested against FCFS. The comparison considers both system performance (throughput and system delay) and predictability, at varying levels of congestion. Uncertainty is modeled in two ways: as equal uncertainty in runway availability for all aircraft, and as increasing uncertainty for later aircraft. Results indicate that the deterministic approach consistently performs better than first-come-first-serve in both system performance and predictability.
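
    The first-come-first-serve baseline can be sketched as follows, assuming a single runway and a single pairwise separation value; the paper uses richer per-pair separation criteria, and all names here are illustrative:

    ```python
    def fcfs_schedule(ready_times, separation):
        """FCFS runway schedule: aircraft take the runway in order of
        ready time, delayed only as needed to honor the separation."""
        order = sorted(range(len(ready_times)), key=lambda i: ready_times[i])
        times = {}
        prev = None
        for i in order:
            if prev is None:
                times[i] = ready_times[i]
            else:
                # wait for both the aircraft's own readiness and separation
                times[i] = max(ready_times[i], times[prev] + separation)
            prev = i
        return times
    ```

    System delay under this scheme is the sum of `times[i] - ready_times[i]`, which is the quantity the deterministic scheduler tries to beat.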

  17. Exploiting Deterministic TPG for Path Delay Testing

    Institute of Scientific and Technical Information of China (English)

    李晓维

    2000-01-01

    Detection of path delay faults requires two-pattern tests. BIST technique provides a low-cost test solution. This paper proposes an approach to designing a cost-effective deterministic test pattern generator (TPG) for path delay testing. Given a set of pre-generated test-pairs with pre-determined fault coverage, a deterministic TPG is synthesized to apply the given test-pair set in a limited test time. To achieve this objective, configurable linear feedback shift register (LFSR) structures are used. Techniques are developed to synthesize such a TPG, which is used to generate an unordered deterministic test-pair set. The resulting TPG is very efficient in terms of hardware size and speed performance. Simulation of academic benchmark circuits has given good results when compared to alternative solutions.
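
    Low-cost BIST pattern generators of this kind are built around linear feedback shift registers. A minimal Fibonacci LFSR sketch follows; the configurable deterministic TPG of the paper is more elaborate, and the tap positions and seed below are illustrative:

    ```python
    def lfsr_sequence(seed, taps, length):
        """Fibonacci LFSR: output the last register bit, then shift in the
        XOR of the tapped bits (tap positions index from the left)."""
        state = list(seed)  # list of 0/1 bits
        out = []
        for _ in range(length):
            out.append(state[-1])
            fb = 0
            for t in taps:
                fb ^= state[t]
            state = [fb] + state[:-1]  # shift right, insert feedback
        return out
    ```

    With a primitive feedback polynomial the register cycles through all 2^n - 1 nonzero states, which is what makes LFSRs attractive as compact pattern sources.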

  18. Optimal Deterministic Investment Strategies for Insurers

    Directory of Open Access Journals (Sweden)

    Ulrich Rieder

    2013-11-01

    Full Text Available We consider an insurance company whose risk reserve is given by a Brownian motion with drift and which is able to invest the money into a Black–Scholes financial market. As optimization criteria, we treat mean-variance problems, problems with other risk measures, exponential utility and the probability of ruin. Following recent research, we assume that investment strategies have to be deterministic. This leads to deterministic control problems, which are quite easy to solve. Moreover, it turns out that there are some interesting links between the optimal investment strategies of these problems. Finally, we also show that this approach works in the Lévy process framework.

  19. Stochastic versus deterministic systems of differential equations

    CERN Document Server

    Ladde, G S

    2003-01-01

    This peerless reference/text unfurls a unified and systematic study of the two types of mathematical models of dynamic processes-stochastic and deterministic-as placed in the context of systems of stochastic differential equations. Using the tools of variational comparison, generalized variation of constants, and probability distribution as its methodological backbone, Stochastic Versus Deterministic Systems of Differential Equations addresses questions relating to the need for a stochastic mathematical model and the between-model contrast that arises in the absence of random disturbances/flu

  20. A new deterministic model for chaotic reversals

    CERN Document Server

    Gissinger, Christophe

    2011-01-01

    In this article, we present a new chaotic system of three coupled ordinary differential equations, limited to quadratic terms. A wide variety of dynamical regimes are reported. For some parameters, chaotic reversals of the amplitudes are produced by crisis-induced intermittency, following a mechanism different from what is generally observed in similar deterministic models. Despite its simplicity, this system therefore generates a rich dynamics, able to model more complex physical systems. In particular, a comparison with reversals of the magnetic field of the Earth shows a surprisingly good agreement, and highlights the relevance of deterministic chaos to describe geomagnetic field dynamics.

  1. Deterministic doping and the exploration of spin qubits

    Energy Technology Data Exchange (ETDEWEB)

    Schenkel, T.; Weis, C. D.; Persaud, A. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Lo, C. C. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA 94720 (United States); London Centre for Nanotechnology (United Kingdom); Chakarov, I. [Global Foundries, Malta, NY 12020 (United States); Schneider, D. H. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Bokor, J. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA 94720 (United States)

    2015-01-09

    Deterministic doping by single ion implantation, the precise placement of individual dopant atoms into devices, is a path for the realization of quantum computer test structures where quantum bits (qubits) are based on electron and nuclear spins of donors or color centers. We present a donor - quantum dot type qubit architecture and discuss the use of medium and highly charged ions extracted from an Electron Beam Ion Trap/Source (EBIT/S) for deterministic doping. EBIT/S are attractive for the formation of qubit test structures due to the relatively low emittance of ion beams from an EBIT/S and due to the potential energy associated with the ions' charge state, which can aid single ion impact detection. Following ion implantation, dopant specific diffusion mechanisms during device processing affect the placement accuracy and coherence properties of donor spin qubits. For bismuth, range straggling is minimal but its relatively low solubility in silicon limits thermal budgets for the formation of qubit test structures.

  2. Shock-induced explosive chemistry in a deterministic sample configuration.

    Energy Technology Data Exchange (ETDEWEB)

    Stuecker, John Nicholas; Castaneda, Jaime N.; Cesarano, Joseph, III (,; ); Trott, Wayne Merle; Baer, Melvin R.; Tappan, Alexander Smith

    2005-10-01

    Explosive initiation and energy release have been studied in two sample geometries designed to minimize stochastic behavior in shock-loading experiments. These sample concepts include a design with explosive material occupying the hole locations of a close-packed bed of inert spheres and a design that utilizes infiltration of a liquid explosive into a well-defined inert matrix. Wave profiles transmitted by these samples in gas-gun impact experiments have been characterized by both velocity interferometry diagnostics and three-dimensional numerical simulations. Highly organized wave structures associated with the characteristic length scales of the deterministic samples have been observed. Initiation and reaction growth in an inert matrix filled with sensitized nitromethane (a homogeneous explosive material) result in wave profiles similar to those observed with heterogeneous explosives. Comparison of experimental and numerical results indicates that energetic material studies in deterministic sample geometries can provide an important new tool for validation of models of energy release in numerical simulations of explosive initiation and performance.

  3. Development of a Deterministic Optimization Model for Design of an Integrated Utility and Hydrogen Supply Network

    International Nuclear Information System (INIS)

    Many networks are constructed in a large-scale industrial complex. Each network meets its demands through production or transportation of the materials needed by the companies in the network. A network either directly produces materials to satisfy a company's demand or purchases them from outside, owing to demand uncertainty, financial factors, and so on. The utility network and the hydrogen network, in particular, are typical and major networks in a large-scale industrial complex. Many studies have been done, mainly focusing on minimizing the total cost or optimizing the network structure, but little research has tried to build an integrated model connecting the utility network and the hydrogen network. In this study, a deterministic mixed integer linear programming model is developed for integrating the utility network and the hydrogen network. A Steam Methane Reforming process is needed for combining the two networks: hydrogen is produced from steam vented by the utility network, and the produced hydrogen then enters the hydrogen network to fulfill its needs. The proposed model can suggest an optimized configuration of the integrated network and calculate the optimal total cost. The capability of the proposed model is tested by applying it to the Yeosu industrial complex in Korea, which hosts one of the biggest petrochemical complexes and for which data are available in various papers. In the case study, the integrated network model yields conclusions closer to the optimum than previous results obtained by researching the utility network and the hydrogen network individually

  4. DETERMINISTIC HOMOGENIZATION OF QUASILINEAR DAMPED HYPERBOLIC EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    Gabriel Nguetseng; Hubert Nnang; Nils Svanstedt

    2011-01-01

    Deterministic homogenization is studied for quasilinear monotone hyperbolic problems with a linear damping term.It is shown by the sigma-convergence method that the sequence of solutions to a class of multi-scale highly oscillatory hyperbolic problems converges to the solution to a homogenized quasilinear hyperbolic problem.

  5. Deterministic Kalman filtering in a behavioral framework

    NARCIS (Netherlands)

    Fagnani, F; Willems, JC

    1997-01-01

    The purpose of this paper is to obtain a deterministic version of the Kalman filtering equations. We will use a behavioral description of the plant, specifically, an image representation. The resulting algorithm requires a matrix spectral factorization. We also show that the filter can be implemente

  6. Nonterminals, homomorphisms and codings in different variations of OL-systems. I. Deterministic systems

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Rozenberg, Grzegorz; Salomaa, Arto;

    1974-01-01

    The use of nonterminals versus the use of homomorphisms of different kinds in the basic types of deterministic OL-systems is studied. A rather surprising result is that in some cases the use of nonterminals produces a comparatively low generative capacity, whereas in some other cases the use of n...

  7. Reinforcement learning output feedback NN control using deterministic learning technique.

    Science.gov (United States)

    Xu, Bin; Yang, Chenguang; Shi, Zhongke

    2014-03-01

    In this brief, a novel adaptive-critic-based neural network (NN) controller is investigated for nonlinear pure-feedback systems. The controller design is based on the transformed predictor form, and the actor-critic NN control architecture includes two NNs: the critic NN approximates the strategic utility function, and the action NN minimizes both the strategic utility function and the tracking error. A deterministic learning technique is employed to guarantee that the partial persistent excitation condition of internal states is satisfied during tracking control to a periodic reference orbit. The uniform ultimate boundedness of closed-loop signals is shown via Lyapunov stability analysis. Simulation results are presented to demonstrate the effectiveness of the proposed control. PMID:24807456

  8. Spatial continuity measures for probabilistic and deterministic geostatistics

    Energy Technology Data Exchange (ETDEWEB)

    Isaaks, E.H.; Srivastava, R.M.

    1988-05-01

    Geostatistics has traditionally used a probabilistic framework, one in which expected values or ensemble averages are of primary importance. The less familiar deterministic framework views geostatistical problems in terms of spatial integrals. This paper outlines the two frameworks and examines the issue of which spatial continuity measure, the covariance C(h) or the variogram σ(h), is appropriate for each framework. Although C(h) and σ(h) were defined originally in terms of spatial integrals, the convenience of probabilistic notation made the expected value definitions more common. These now classical expected value definitions entail a linear relationship between C(h) and σ(h); the spatial integral definitions do not. In a probabilistic framework, where available sample information is extrapolated to domains other than the one which was sampled, the expected value definitions are appropriate; furthermore, within a probabilistic framework, reasons exist for preferring the variogram to the covariance function. In a deterministic framework, where available sample information is interpolated within the same domain, the spatial integral definitions are appropriate and no reasons are known for preferring the variogram. A case study on a Wiener-Lévy process demonstrates differences between the two frameworks and shows that, for most estimation problems, the deterministic viewpoint is more appropriate. Several case studies on real data sets reveal that the sample covariance function reflects the character of spatial continuity better than the sample variogram. From both theoretical and practical considerations, clearly for most geostatistical problems, direct estimation of the covariance is better than the traditional variogram approach.
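
    For a 1-D data series at integer lags, the two spatial continuity measures compared here can be estimated with the standard sample formulas; this is a generic textbook sketch, not the paper's own code:

    ```python
    def sample_covariance(z, h):
        """Sample covariance C(h): covariance of the series with a
        lag-h shifted copy of itself, each with its own mean."""
        n = len(z) - h
        head, tail = z[:n], z[h:]
        m1 = sum(head) / n
        m2 = sum(tail) / n
        return sum((a - m1) * (b - m2) for a, b in zip(head, tail)) / n

    def sample_variogram(z, h):
        """Sample semivariogram: half the mean squared lag-h increment."""
        n = len(z) - h
        return sum((z[i + h] - z[i]) ** 2 for i in range(n)) / (2 * n)
    ```

    The linear relationship mentioned in the abstract (variogram = sill minus covariance) holds for the expected value definitions but not, in general, for these finite-sample spatial estimates.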

  9. Minimizing Tardy Jobs in a Single Machine Scheduling Problem with Fuzzy Processing Times and Due Dates

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The optimality of a fuzzy logic alternative to the usual treatment of uncertainties in a scheduling system using fuzzy numbers is examined formally. Processing times and due dates are fuzzified and represented by fuzzy numbers. By introducing the necessity measure, we compare fuzzy completion times of jobs with fuzzy due dates to decide whether jobs are tardy. The objective is to minimize the number of tardy jobs. An efficient solution method for this problem is proposed. The deterministic counterpart of this single machine scheduling problem is a special case of the fuzzy version.

  10. Influence of Deterministic Attachments for Large Unifying Hybrid Network Model

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The large unifying hybrid network model (LUHPM) introduces the deterministic mixing ratio fd on the basis of the harmonious unification hybrid preferential model, to describe the influence of deterministic attachment on the network topology characteristics,

  11. Use of fuzzy set theory for minimizing overbreak in underground blasting operations-A case study of Alborz Tunnel, Iran

    Institute of Scientific and Technical Information of China (English)

    Mohammadi Mohammad; Hossaini Mohammad Farouq; Mirzapour Bahman; Hajiantilaki Nabiollah

    2015-01-01

    In order to increase the safety of working environment and decrease the unwanted costs related to over-break in tunnel excavation projects, it is necessary to minimize overbreak percentage. Thus, based on regression analysis and fuzzy inference system, this paper tries to develop predictive models to estimate overbreak caused by blasting at the Alborz Tunnel. To develop the models, 202 datasets were utilized, out of which 182 were used for constructing the models. To validate and compare the obtained results, determination coefficient (R2) and root mean square error (RMSE) indexes were chosen. For the fuzzy model, R2 and RMSE are equal to 0.96 and 0.55 respectively, whereas for regression model, they are 0.41 and 1.75 respectively, proving that the fuzzy predictor performs, significantly, better than the statistical method. Using the developed fuzzy model, the percentage of overbreak was minimized in the Alborz Tunnel.

  12. Bayesian Uncertainty Analyses Via Deterministic Model

    Science.gov (United States)

    Krzysztofowicz, R.

    2001-05-01

    Rational decision-making requires that the total uncertainty about a variate of interest (a predictand) be quantified in terms of a probability distribution, conditional on all available information and knowledge. Suppose the state-of-knowledge is embodied in a deterministic model, which is imperfect and outputs only an estimate of the predictand. Fundamentals are presented of three Bayesian approaches to producing a probability distribution of the predictand via any deterministic model. The Bayesian Processor of Output (BPO) quantifies the total uncertainty in terms of a posterior distribution, conditional on model output. The Bayesian Processor of Ensemble (BPE) quantifies the total uncertainty in terms of a posterior distribution, conditional on an ensemble of model output. The Bayesian Forecasting System (BFS) decomposes the total uncertainty into input uncertainty and model uncertainty, which are characterized independently and then integrated into a predictive distribution.

  13. Deterministic nonlinear systems a short course

    CERN Document Server

    Anishchenko, Vadim S; Strelkova, Galina I

    2014-01-01

    This text is a short yet complete course on nonlinear dynamics of deterministic systems. Conceived as a modular set of 15 concise lectures it reflects the many years of teaching experience by the authors. The lectures treat in turn the fundamental aspects of the theory of dynamical systems, aspects of stability and bifurcations, the theory of deterministic chaos and attractor dimensions, as well as the elements of the theory of Poincare recurrences.Particular attention is paid to the analysis of the generation of periodic, quasiperiodic and chaotic self-sustained oscillations and to the issue of synchronization in such systems.  This book is aimed at graduate students and non-specialist researchers with a background in physics, applied mathematics and engineering wishing to enter this exciting field of research.

  14. Deterministic and Nondeterministic Behavior of Earthquakes and Hazard Mitigation Strategy

    Science.gov (United States)

    Kanamori, H.

    2014-12-01

    Earthquakes exhibit both deterministic and nondeterministic behavior. Deterministic behavior is controlled by length and time scales such as the dimension of seismogenic zones and plate-motion speed. Nondeterministic behavior is controlled by the interaction of many elements, such as asperities, in the system. Some subduction zones have strong deterministic elements which allow forecasts of future seismicity; the 2010 Mw=8.8 Maule, Chile, earthquake and the 2012 Mw=7.6 Costa Rica earthquake are good examples in which useful forecasts were made within a solid scientific framework using GPS. However, even in these cases, uncertainties are difficult to quantify because of the nondeterministic elements. In some subduction zones, nondeterministic behavior dominates because of complex plate boundary structures and defies useful forecasts. The 2011 Mw=9.0 Tohoku-Oki earthquake may be an example in which the physical framework was reasonably well understood, but complex interactions of asperities and insufficient knowledge about the subduction-zone structures led to the unexpected tragic consequence. Despite these difficulties, broadband seismology, GPS, and rapid data processing-telemetry technology can contribute to effective hazard mitigation through a scenario earthquake approach and real-time warning. A scale-independent relation between M0 (seismic moment) and the source duration, t, can be used for the design of average scenario earthquakes. However, outliers caused by the variation of stress drop, radiation efficiency, and aspect ratio of the rupture plane are often the most hazardous and need to be included in scenario earthquakes. The recent development of real-time technology will help seismologists cope with, and prepare for, devastating tsunamis and earthquakes. Combining a better understanding of earthquake diversity with modern technology is the key to effective and comprehensive hazard mitigation practices.
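
    The scale-independent M0-t relation referred to here is a cube-root scaling: for a fixed stress drop, source duration grows as the cube root of seismic moment. A minimal sketch, where the proportionality constant k is an assumed illustrative value, not a number from the abstract:

    ```python
    def source_duration(m0_newton_meters, k=5.0e-6):
        """Scenario-earthquake duration from the scaling t ∝ M0^(1/3).
        k is an assumed illustrative constant, not a published value."""
        return k * m0_newton_meters ** (1.0 / 3.0)
    ```

    The scaling itself is constant-free in the sense the abstract uses: multiplying the moment by 8 doubles the duration regardless of k, and the outliers mentioned above are events that depart from this average relation.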

  15. Deterministic quantum teleportation between distant atomic objects

    OpenAIRE

    Krauter, H.; D Salart; Muschik, C. A.; Petersen, J. M.; Shen, Heng; Fernholz, T.; Polzik, E. S.

    2013-01-01

    Quantum teleportation is a key ingredient of quantum networks and a building block for quantum computation. Teleportation between distant material objects using light as the quantum information carrier has been a particularly exciting goal. Here we demonstrate a new element of the quantum teleportation landscape, the deterministic continuous variable (cv) teleportation between distant material objects. The objects are macroscopic atomic ensembles at room temperature. Entanglement required for...

  16. Deterministically distinguishing a remote Bell state

    Institute of Scientific and Technical Information of China (English)

    Zhao Zhi-Guo; Peng Wei-Min; Tan Yong-Gang

    2011-01-01

    It has been proven that, with a single copy provided, the four Bell states cannot be distinguished by local operations and classical communications (LOCC). Traditionally, a Bell basis projective measurement is needed to distinguish the four Bell states, which is usually carried out with a local interference between two particles. This paper presents an interesting protocol that allows two remote parties to distinguish four Bell states deterministically. We prove that our protocol of distinguishing remote Bell states is beyond LOCC.

  17. Introducing Synchronisation in Deterministic Network Models

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.;

    2006-01-01

    The paper addresses performance analysis for distributed real time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented leading ... The suggested models are intended for incorporation into an existing analysis tool, a.k.a. CyNC, based on the MATLAB/SimuLink framework for graphical system analysis and design.

  18. Deterministic MST Sparsification in the Congested Clique

    OpenAIRE

    Korhonen, Janne H.

    2016-01-01

    We give a simple deterministic constant-round algorithm in the congested clique model for reducing the number of edges in a graph to $n^{1+\varepsilon}$ while preserving the minimum spanning forest, where $\varepsilon > 0$ is any constant. This implies that in the congested clique model, it is sufficient to improve MST and other connectivity algorithms on graphs with slightly superlinear number of edges to obtain a general improvement. As a byproduct, we also obtain a simple alternative proof...
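
    Sparsification of this kind rests on the cycle rule: an edge that is strictly heaviest on some cycle can be discarded without changing the minimum spanning forest. A minimal Kruskal sketch of the forest that must be preserved (the sequential baseline, not the congested-clique algorithm itself):

    ```python
    def minimum_spanning_forest(n, edges):
        """Kruskal's algorithm on an n-vertex graph with weighted edges
        (w, u, v): scan edges by increasing weight, keeping an edge only
        if it joins two different components (union-find with path halving)."""
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        forest = []
        for w, u, v in sorted(edges):
            ru, rv = find(u), find(v)
            if ru != rv:  # edge does not close a cycle: keep it
                parent[ru] = rv
                forest.append((w, u, v))
        return forest
    ```

    Any sparsifier that never deletes an edge of this forest leaves MST computation on the reduced graph unchanged, which is the guarantee the abstract's algorithm provides with only $n^{1+\varepsilon}$ surviving edges.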

  19. Deterministic Pattern Classifier Based on Genetic Programming

    Institute of Scientific and Technical Information of China (English)

    LI Jian-wu; LI Min-qiang; KOU Ji-song

    2001-01-01

This paper proposes a supervised training-test method with Genetic Programming (GP) for pattern classification. Compared and contrasted with traditional deterministic pattern classifiers, this method works for both linearly separable and linearly non-separable problems. For specific training samples, it can formulate the expression of the discriminant function well without any prior knowledge. Finally, an experiment is conducted, and the result reveals that this system is effective and practical.

  20. Derivation Of Probabilistic Damage Definitions From High Fidelity Deterministic Computations

    Energy Technology Data Exchange (ETDEWEB)

    Leininger, L D

    2004-10-26

    This paper summarizes a methodology used by the Underground Analysis and Planning System (UGAPS) at Lawrence Livermore National Laboratory (LLNL) for the derivation of probabilistic damage curves for US Strategic Command (USSTRATCOM). UGAPS uses high fidelity finite element and discrete element codes on the massively parallel supercomputers to predict damage to underground structures from military interdiction scenarios. These deterministic calculations can be riddled with uncertainty, especially when intelligence, the basis for this modeling, is uncertain. The technique presented here attempts to account for this uncertainty by bounding the problem with reasonable cases and using those bounding cases as a statistical sample. Probability of damage curves are computed and represented that account for uncertainty within the sample and enable the war planner to make informed decisions. This work is flexible enough to incorporate any desired damage mechanism and can utilize the variety of finite element and discrete element codes within the national laboratory and government contractor community.

  1. Treatment of Retinitis Pigmentosa-Associated Cystoid Macular Oedema Using Intravitreal Aflibercept (Eylea) despite Minimal Response to Ranibizumab (Lucentis): A Case Report

    Science.gov (United States)

    Strong, Stacey A.; Gurbaxani, Avinash; Michaelides, Michel

    2016-01-01

    Background We present an interesting case of bilateral retinitis pigmentosa (RP)-associated cystoid macular oedema that responded on two separate occasions to intravitreal injections of aflibercept, despite previously demonstrating only minimal response to intravitreal ranibizumab. This unique case would support a trial of intravitreal aflibercept for the treatment of RP-associated cystoid macular oedema. Case Presentation A 38-year-old man from Dubai, United Arab Emirates, presented to the UK with a 3-year history of bilateral RP-associated cystoid macular oedema. Previous treatment with topical dorzolamide, oral acetazolamide, and intravitreal ranibizumab had demonstrated only minimal reduction of cystoid macular oedema. Following re-confirmation of the diagnosis by clinical examination and optical coherence tomography imaging, bilateral loading doses of intravitreal aflibercept were given. Central macular thickness reduced and the patient returned to Dubai. After 6 months, the patient was treated with intravitreal ranibizumab due to re-accumulation of fluid and the unavailability of aflibercept in Dubai. Only minimal reduction of central macular thickness was observed. Once available in Dubai, intravitreal aflibercept was administered bilaterally with further reduction of central macular thickness observed. Visual acuity remained stable throughout. Conclusions This is the first case report to demonstrate a reduction of RP-associated CMO following intravitreal aflibercept, despite inadequate response to ranibizumab on two separate occasions. Aflibercept may provide superior action to other anti-VEGF medications due to its intermediate size (115 kDa) and higher binding affinity. This is worthy of further investigation in a large prospective cohort over an extended time to determine the safety and efficacy of intravitreal aflibercept for use in this condition.

2. Treatment of Retinitis Pigmentosa-Associated Cystoid Macular Oedema Using Intravitreal Aflibercept (Eylea) despite Minimal Response to Ranibizumab (Lucentis): A Case Report

    Directory of Open Access Journals (Sweden)

    Stacey A. Strong

    2016-09-01

Background: We present an interesting case of bilateral retinitis pigmentosa (RP)-associated cystoid macular oedema that responded on two separate occasions to intravitreal injections of aflibercept, despite previously demonstrating only minimal response to intravitreal ranibizumab. This unique case would support a trial of intravitreal aflibercept for the treatment of RP-associated cystoid macular oedema. Case Presentation: A 38-year-old man from Dubai, United Arab Emirates, presented to the UK with a 3-year history of bilateral RP-associated cystoid macular oedema. Previous treatment with topical dorzolamide, oral acetazolamide, and intravitreal ranibizumab had demonstrated only minimal reduction of cystoid macular oedema. Following re-confirmation of the diagnosis by clinical examination and optical coherence tomography imaging, bilateral loading doses of intravitreal aflibercept were given. Central macular thickness reduced and the patient returned to Dubai. After 6 months, the patient was treated with intravitreal ranibizumab due to re-accumulation of fluid and the unavailability of aflibercept in Dubai. Only minimal reduction of central macular thickness was observed. Once available in Dubai, intravitreal aflibercept was administered bilaterally with further reduction of central macular thickness observed. Visual acuity remained stable throughout. Conclusions: This is the first case report to demonstrate a reduction of RP-associated CMO following intravitreal aflibercept, despite inadequate response to ranibizumab on two separate occasions. Aflibercept may provide superior action to other anti-VEGF medications due to its intermediate size (115 kDa) and higher binding affinity. This is worthy of further investigation in a large prospective cohort over an extended time to determine the safety and efficacy of intravitreal aflibercept for use in this condition.

  3. Deterministic Method for Obtaining Nominal and Uncertainty Models of CD Drives

    DEFF Research Database (Denmark)

    Vidal, Enrique Sanchez; Stoustrup, Jakob; Andersen, Palle;

    2002-01-01

    In this paper a deterministic method for obtaining the nominal and uncertainty models of the focus loop in a CD-player is presented based on parameter identification and measurements in the focus loop of 12 actual CD drives that differ by having worst-case behaviors with respect to various...

  4. A Line Source In Minkowski For The de Sitter Spacetime Scalar Green's Function: Massless Minimally Coupled Case

    CERN Document Server

    Chu, Yi-Zen

    2013-01-01

    We show how, for certain classes of curved spacetimes, one might obtain its retarded or advanced minimally coupled massless scalar Green's function by using the corresponding Green's functions in the higher dimensional Minkowski spacetime where it is embedded. Analogous statements hold for certain classes of curved Riemannian spaces, with positive definite metrics, which may be embedded in higher dimensional Euclidean spaces. The general formula is applied to (d >= 2)-dimensional de Sitter spacetime, and the scalar Green's function is demonstrated to be sourced by a line emanating infinitesimally close to the origin of the ambient (d+1)-dimensional Minkowski spacetime and piercing orthogonally through the de Sitter hyperboloids of all finite sizes. This method does not require solving the de Sitter wave equation directly. Only the zero mode solution to an ordinary differential equation, the "wave equation" perpendicular to the hyperboloid -- followed by a one dimensional integral -- needs to be evaluated. A t...

  5. Rare earth elements minimal harvest year variation facilitates robust geographical origin discrimination: The case of PDO "Fava Santorinis".

    Science.gov (United States)

    Drivelos, Spiros A; Danezis, Georgios P; Haroutounian, Serkos A; Georgiou, Constantinos A

    2016-12-15

This study examines the trace and rare earth elemental (REE) fingerprint variations of PDO (Protected Designation of Origin) "Fava Santorinis" over three consecutive harvesting years (2011-2013). Classification of samples into harvesting years was studied by performing discriminant analysis (DA), k-nearest neighbours (k-NN), partial least squares (PLS) analysis and probabilistic neural networks (PNN) using rare earth elements and trace metals determined using ICP-MS. DA performed better than k-NN, producing 100% discrimination using trace elements and 79% using REEs. PLS was found to be superior to PNN, achieving 99% and 90% classification for trace elements and REEs, respectively, while PNN achieved 96% and 71% classification for trace elements and REEs, respectively. The information obtained using REEs did not enhance classification, indicating that REEs vary minimally per harvesting year, providing robust geographical origin discrimination. The results show that seasonal patterns can occur in the elemental composition of "Fava Santorinis", probably reflecting seasonality of climate. PMID:27451177
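As a toy illustration of the k-NN step in such fingerprint classification (a minimal sketch with invented two-element feature vectors, not the study's ICP-MS data or its DA/PLS/PNN pipeline):

```python
# Minimal k-NN classifier sketch on synthetic two-element fingerprints
# (the vectors and labels below are invented for illustration).
import math
from collections import Counter

def knn_predict(train, query, k=3):
    # train: list of (feature_vector, label); majority vote of k nearest.
    nearest = sorted(train, key=lambda fv: math.dist(fv[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

train = [([1.0, 0.20], "2011"), ([1.1, 0.25], "2011"), ([0.9, 0.18], "2011"),
         ([2.0, 0.80], "2012"), ([2.1, 0.75], "2012"), ([1.9, 0.85], "2012")]
print(knn_predict(train, [1.05, 0.22]))  # -> 2011
```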

  6. Successful Experience of Laparoscopic Pancreaticoduodenectomy and Digestive Tract Reconstruction With Minimized Complications Rate by 14 Case Reports

    Science.gov (United States)

    Fan, Yong; Zhao, Yanhui; Pang, Lan; Kang, Yingxing; Kang, Boxiong; Liu, Yongyong; Fu, Jie; Xia, Bowei; Wang, Chen; Zhang, Youcheng

    2016-01-01

Abstract Laparoscopic pancreatic surgery is one of the most sophisticated and advanced applications of laparoscopy in current surgical practice. The adoption of laparoscopic pancreaticoduodenectomy (LPD) has been relatively slow due to its technical challenges. The aim of this study is to review and characterize our successful LPD experiences in patients with distal bile duct carcinoma, periampullary adenocarcinoma, pancreas head cancer, and duodenal cancer, and to evaluate the clinical outcomes of LPD for its potential in oncologic surgery applications. We retrospectively analyzed the clinical data from 14 patients who underwent LPD from August 2013 to February 2015 in our institute. No case was converted to open surgery among all 14 cases, which included 10 cases of laparoscopic digestive tract reconstruction and 4 cases of open digestive tract reconstruction. There were no deaths during the perioperative period and no case of gastric emptying disorder or postoperative bleeding. The other clinical indexes were comparable to or better than those of open surgery. Based on our experience, LPD could be potentially safe and feasible for the treatment of early pancreas head cancer, distal bile duct carcinoma, periampullary adenocarcinoma, and duodenal cancer. Mastery of the LPD procedure requires technical expertise, but it can be accomplished with a short learning curve. PMID:27124014

  7. Deterministic Thinning of Finite Poisson Processes

    CERN Document Server

    Angel, Omer; Soo, Terry

    2009-01-01

    Let Pi and Gamma be homogeneous Poisson point processes on a fixed set of finite volume. We prove a necessary and sufficient condition on the two intensities for the existence of a coupling of Pi and Gamma such that Gamma is a deterministic function of Pi, and all points of Gamma are points of Pi. The condition exhibits a surprising lack of monotonicity. However, in the limit of large intensities, the coupling exists if and only if the expected number of points is at least one greater in Pi than in Gamma.

  8. Deterministic atom-light quantum interface

    CERN Document Server

    Sherson, J; Polzik, E S; Julsgaard, Brian; Sherson, Jacob

    2006-01-01

    The notion of an atom-light quantum interface has been developed in the past decade, to a large extent due to demands within the new field of quantum information processing and communication. A promising type of such interface using large atomic ensembles has emerged in the past several years. In this article we review this area of research with a special emphasis on deterministic high fidelity quantum information protocols. Two recent experiments, entanglement of distant atomic objects and quantum memory for light are described in detail.

  9. Deterministic quantum computation with one photonic qubit

    Science.gov (United States)

    Hor-Meyll, M.; Tasca, D. S.; Walborn, S. P.; Ribeiro, P. H. Souto; Santos, M. M.; Duzzioni, E. I.

    2015-07-01

    We show that deterministic quantum computing with one qubit (DQC1) can be experimentally implemented with a spatial light modulator, using the polarization and the transverse spatial degrees of freedom of light. The scheme allows the computation of the trace of a high-dimension matrix, being limited by the resolution of the modulator panel and the technical imperfections. In order to illustrate the method, we compute the normalized trace of unitary matrices and implement the Deutsch-Jozsa algorithm. The largest matrix that can be manipulated with our setup is 1080 ×1920 , which is able to represent a system with approximately 21 qubits.
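The trace estimation behind DQC1 can be checked numerically. The following NumPy sketch verifies the density-matrix identity that such experiments exploit, with a small random unitary standing in for the modulator-encoded matrix (an illustrative simulation, not the optical implementation):

```python
# Density-matrix sketch of DQC1 trace estimation: with the ancilla in |+>
# and the register maximally mixed, measuring sigma_x / sigma_y on the
# ancilla after a controlled-U yields Re tr(U)/d and Im tr(U)/d.
import numpy as np

rng = np.random.default_rng(0)
d = 4
# Random unitary from the QR decomposition of a complex Gaussian matrix.
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

plus = 0.5 * np.ones((2, 2))                    # |+><+| ancilla state
rho0 = np.kron(plus, np.eye(d) / d)             # ancilla (x) maximally mixed register
CU = np.block([[np.eye(d), np.zeros((d, d))],   # controlled-U, ancilla first
               [np.zeros((d, d)), U]])
rho1 = CU @ rho0 @ CU.conj().T

sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
re = np.trace(np.kron(sx, np.eye(d)) @ rho1).real   # = Re tr(U) / d
im = np.trace(np.kron(sy, np.eye(d)) @ rho1).real   # = Im tr(U) / d
print(np.allclose(re + 1j * im, np.trace(U) / d))   # -> True
```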

  10. Explicit Protocol for Deterministic Entanglement Concentration

    Institute of Scientific and Technical Information of China (English)

    GU Yong-Jian; GAO Peng; GUO Guang-Can

    2005-01-01

We present an explicit protocol for extraction of an EPR pair from two partially entangled pairs in a deterministic fashion via local operations and classical communication. This protocol consists of a local measurement described by a positive operator-valued measure (POVM), one-way classical communication, and a corresponding local unitary operation or a choice between the two pairs. We explicitly construct the required POVM by analysing the doubly stochastic matrix connecting the initial and the final states. Our scheme might be useful in future quantum communication.

  11. Experimental Demonstration of Deterministic Entanglement Transformation

    Institute of Scientific and Technical Information of China (English)

    CHEN Geng; XU Jin-Shi; LI Chuan-Feng; GONG Ming; CHEN Lei; GUO Guang-Can

    2009-01-01

According to Nielsen's theorem [Phys. Rev. Lett. 83 (1999) 436] and as a proof of principle, we demonstrate the deterministic transformation from a maximally entangled state to an arbitrary non-maximally entangled pure state with local operations and classical communication in an optical system. The output states are verified with a quantum tomography process. We further test the violation of a Bell-like inequality to demonstrate the quantum nonlocality of the states we generate. Our results may be useful in quantum information processing.

  12. Deterministic and probabilistic approach to safety analysis

    International Nuclear Information System (INIS)

The examples discussed in this paper show that reliability analysis methods can be applied fairly well to interpret deterministic safety criteria in quantitative terms. For a further, improved extension of applied reliability analysis, it has turned out that the influence of operational and control systems and of component protection devices should be considered in detail with the aid of reliability analysis methods. Of course, an extension of probabilistic analysis must be accompanied by further development of the methods and a broadening of the data base. (orig.)

  13. Nine challenges for deterministic epidemic models

    DEFF Research Database (Denmark)

    Roberts, Mick G; Andreasen, Viggo; Lloyd, Alun;

    2015-01-01

Deterministic models have a long history of being applied to the study of infectious disease epidemiology. We highlight and discuss nine challenges in this area. The first two concern the endemic equilibrium and its stability. We indicate the need for models that describe multi-strain infections, infections with time-varying infectivity, and those where superinfection is possible. We then consider the need for advances in spatial epidemic models, and draw attention to the lack of models that explore the relationship between communicable and non-communicable diseases. The final two challenges concern ...

  14. Minimally Invasive Treatment of Biventricular Hydrocephalus Caused by a Giant Basilar Apex Aneurysm via a Staged Combination of Endoscopy and Endovascular Embolization: A Case Report.

    Science.gov (United States)

    Setty, Pradeep; Volkov, Andrey; Richards, Boyd; Barrett, Ryan

    2015-01-01

    Biventricular hydrocephalus caused by a Giant Basilar Apex Aneurysm (GBAA) is a rare finding that presents unique and challenging treatment decisions. We report a case of GBAA causing a life-threatening biventricular hydrocephalus in which both the aneurysm and hydrocephalus were given definitive treatment through a staged, minimally invasive approach. An obtunded 82-year-old male was found to have biventricular hydrocephalus caused by an unruptured GBAA obstructing the foramina of Monro. The patient was treated via staged, minimally invasive technique that first involved endoscopic fenestration of the septum pellucidum to create communication between the lateral ventricles. A programmable ventriculo-peritoneal shunt was then placed with a high-pressure setting. The patient was then loaded with dual anti-platelet therapy prior to undergoing endovascular coiling of the GBAA with adjacent stenting of the Posterior Cerebral Artery. He remained on dual anti-platelet therapy and the shunt setting was lowered at the bedside to treat the hydrocephalus. At 6-month follow up, the patient had returned to his cognitive baseline, speaking fluently and appropriately. Biventricular hydrocephalus caused by a GBAA can successfully be treated in a minimally invasive fashion utilizing a combination of endoscopy and endovascular therapy, even when a stent-assisted coiling is needed.

  15. Measuring progress in reactor conversion and HEU minimization towards 2020 - the case of HEU-fuelled research facilities

    International Nuclear Information System (INIS)

This paper analyzes how to measure progress in the minimization of HEU-fueled research reactors with respect to the International Fuel Cycle Evaluation (INFCE) completed in 1978, and the establishment of new objectives towards 2020. All HEU-fueled research facilities converted, commissioned or decommissioned after 1978, in total more than 310 facilities, are included. More than 130 HEU-fuelled facilities still remain in operation today. The most important measure has been facility shut-down, accounting for 62% of the reduction in U-235 consumption from 1978 to 2007. Presently, only three regions worldwide use significant amounts of HEU: North America, Russia with the Newly Independent States, and Europe. Projected HEU consumption in 2020 will drop to less than 50 kg as the current HEU-fueled steady-state reactors are shut down or converted. However, if the current lack of concern for HEU in life-time cores is not changed, in particular in Russia, 50-100 such facilities may continue to be in operation in 2020. (author)

  16. $H^0 \\rightarrow Z^0 \\gamma$ channel in ATLAS. \\\\ A study of the Standard Model and \\\\ Minimal Supersymmetric SM case

    CERN Document Server

    Kiourkos, S

    1999-01-01

    One of the potentially accessible decay modes of the Higgs boson in the mass region $100 < m_H < 180$ GeV is the $H^0 \\rightarrow Z^0 \\gamma$ channel. The work presented in this note examines the Standard Model and Minimal Supersymmetric Standard Model predictions for the observability of this channel using particle level simulation as well as the ATLAS fast simulation (ATLFAST). It compares present estimates for the signal observability with previously reported ones in \\cite{unal} specifying the changes arising from the assumed energy of the colliding protons and the improvements in the treatment of theoretical predictions. With the present estimates, the expected significance for the SM Higgs does not exceed, in terms of $\\frac{S}{\\sqrt{B}}$, 1.5 $\\sigma$ (including $Z^0 \\rightarrow e^+ e^-$ and $Z^0 \\rightarrow {\\mu}^+ {\\mu}^-$) for an integrated luminosity of $10^5$ pb$^{-1}$ therefore not favouring this channel for SM Higgs searches. Comparable discovery potential is expected at most for the MSSM $...

  17. A line source in Minkowski for the de Sitter spacetime scalar Green's function: Massless minimally coupled case

    Energy Technology Data Exchange (ETDEWEB)

    Chu, Yi-Zen [Center for Particle Cosmology, Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)

    2014-09-15

    Motivated by the desire to understand the causal structure of physical signals produced in curved spacetimes – particularly around black holes – we show how, for certain classes of geometries, one might obtain its retarded or advanced minimally coupled massless scalar Green's function by using the corresponding Green's functions in the higher dimensional Minkowski spacetime where it is embedded. Analogous statements hold for certain classes of curved Riemannian spaces, with positive definite metrics, which may be embedded in higher dimensional Euclidean spaces. The general formula is applied to (d ≥ 2)-dimensional de Sitter spacetime, and the scalar Green's function is demonstrated to be sourced by a line emanating infinitesimally close to the origin of the ambient (d + 1)-dimensional Minkowski spacetime and piercing orthogonally through the de Sitter hyperboloids of all finite sizes. This method does not require solving the de Sitter wave equation directly. Only the zero mode solution to an ordinary differential equation, the “wave equation” perpendicular to the hyperboloid – followed by a one-dimensional integral – needs to be evaluated. A topological obstruction to the general construction is also discussed by utilizing it to derive a generalized Green's function of the Laplacian on the (d ≥ 2)-dimensional sphere.

  18. Hazardous waste minimization

    International Nuclear Information System (INIS)

    This book presents an overview of waste minimization. Covers applications of technology to waste reduction, techniques for implementing programs, incorporation of programs into R and D, strategies for private industry and the public sector, and case studies of programs already in effect

  19. Plausible Suggestion for a Deterministic Wave Function

    CERN Document Server

    Schulz, P

    2006-01-01

A deterministic axial vector model for photons is presented which is also suitable for particles. During a rotation around an axis, the deterministic wave function $a$ has the form $a = \omega_s r \exp(\pm i \omega_b t)$, where $\omega_s$ is either the axial or the scalar spin rotation frequency (the latter is proportional to the mass), $r$ is the radius of the orbit (also the amplitude of a vibration arising later from the interaction by fusing of two oppositely circling photons), $\omega_b$ is the orbital angular frequency (proportional to the velocity), and $t$ is time. A "+" before the imaginary unit $i$ denotes a right-handed rotation and a "-" a left-handed one. An interaction happens when particles (including photons) meet through collision and melt together.

  20. A mathematical theory for deterministic quantum mechanics

    Energy Technology Data Exchange (ETDEWEB)

't Hooft, Gerard [Institute for Theoretical Physics, Utrecht University (Netherlands); Spinoza Institute, Postbox 80.195, 3508 TD Utrecht (Netherlands)]

    2007-05-15

    Classical, i.e. deterministic theories underlying quantum mechanics are considered, and it is shown how an apparent quantum mechanical Hamiltonian can be defined in such theories, being the operator that generates evolution in time. It includes various types of interactions. An explanation must be found for the fact that, in the real world, this Hamiltonian is bounded from below. The mechanism that can produce exactly such a constraint is identified in this paper. It is the fact that not all classical data are registered in the quantum description. Large sets of values of these data are assumed to be indistinguishable, forming equivalence classes. It is argued that this should be attributed to information loss, such as what one might suspect to happen during the formation and annihilation of virtual black holes. The nature of the equivalence classes follows from the positivity of the Hamiltonian. Our world is assumed to consist of a very large number of subsystems that may be regarded as approximately independent, or weakly interacting with one another. As long as two (or more) sectors of our world are treated as being independent, they all must be demanded to be restricted to positive energy states only. What follows from these considerations is a unique definition of energy in the quantum system in terms of the periodicity of the limit cycles of the deterministic model.

  1. Deterministic Function Computation with Chemical Reaction Networks

    CERN Document Server

    Chen, Ho-Lin; Soloveichik, David

    2012-01-01

    We study the deterministic computation of functions on tuples of natural numbers by chemical reaction networks (CRNs). CRNs have been shown to be efficiently Turing-universal when allowing for a small probability of error. CRNs that are guaranteed to converge on a correct answer, on the other hand, have been shown to decide only the semilinear predicates. We introduce the notion of function, rather than predicate, computation by representing the output of a function f:N^k --> N^l by a count of some molecular species, i.e., if the CRN starts with n_1,...,n_k molecules of some "input" species X_1,...,X_k, the CRN is guaranteed to converge to having f(n_1,...,n_k) molecules of the "output" species Y_1,...,Y_l. We show that a function f:N^k --> N^l is deterministically computed by a CRN if and only if its graph {(x,y) \\in N^k x N^l | f(x) = y} is a semilinear set. Finally, we show that each semilinear function f can be computed on input x in expected time O(polylog |x|).
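A minimal example of such deterministic function computation (a standard textbook-style CRN, not one taken from the paper): the single reaction X1 + X2 -> Y converges to Y = min(x1, x2), which is a semilinear function of the inputs.

```python
# Sketch of deterministic CRN function computation: the single reaction
# X1 + X2 -> Y converges, regardless of the order of reaction events,
# to Y = min(x1, x2), a semilinear function.
def run_crn(counts, reactions):
    # Apply enabled reactions until none applies (the fixed point does
    # not depend on the order, which is what makes this deterministic).
    changed = True
    while changed:
        changed = False
        for reactants, products in reactions:
            if all(counts.get(s, 0) >= n for s, n in reactants.items()):
                for s, n in reactants.items():
                    counts[s] -= n
                for s, n in products.items():
                    counts[s] = counts.get(s, 0) + n
                changed = True
    return counts

out = run_crn({"X1": 5, "X2": 3}, [({"X1": 1, "X2": 1}, {"Y": 1})])
print(out["Y"])  # -> 3 = min(5, 3)
```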

  2. Additivity principle in high-dimensional deterministic systems.

    Science.gov (United States)

    Saito, Keiji; Dhar, Abhishek

    2011-12-16

    The additivity principle (AP), conjectured by Bodineau and Derrida [Phys. Rev. Lett. 92, 180601 (2004)], is discussed for the case of heat conduction in three-dimensional disordered harmonic lattices to consider the effects of deterministic dynamics, higher dimensionality, and different transport regimes, i.e., ballistic, diffusive, and anomalous transport. The cumulant generating function (CGF) for heat transfer is accurately calculated and compared with the one given by the AP. In the diffusive regime, we find a clear agreement with the conjecture even if the system is high dimensional. Surprisingly, even in the anomalous regime the CGF is also well fitted by the AP. Lower-dimensional systems are also studied and the importance of three dimensionality for the validity is stressed. PMID:22243060

  3. Classification and unification of the microscopic deterministic traffic models.

    Science.gov (United States)

    Yang, Bo; Monterola, Christopher

    2015-10-01

    We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model that can be important both for understanding human driving behaviors and algorithms for autonomous driverless vehicles. PMID:26565284
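As a sketch of the OV dynamics discussed above (with illustrative parameters and optimal-velocity function, not those of any specific published model), the following integrates dv_i/dt = a(V(h_i) - v_i) on a ring road and checks that uniform flow relaxes to the optimal velocity of the common headway:

```python
# Minimal optimal-velocity (OV) car-following sketch on a ring road:
# dv_i/dt = a * (V(h_i) - v_i), with headway h_i to the car ahead.
# Parameters and V(h) are illustrative choices, not from the paper.
import math

def V(h):
    # A common tanh-shaped optimal velocity function.
    return math.tanh(h - 2.0) + math.tanh(2.0)

def simulate(n=5, L=10.0, a=1.0, dt=0.01, steps=5000):
    x = [i * L / n for i in range(n)]   # equally spaced cars on the ring
    v = [0.0] * n
    for _ in range(steps):
        h = [(x[(i + 1) % n] - x[i]) % L for i in range(n)]
        v = [vi + a * (V(hi) - vi) * dt for vi, hi in zip(v, h)]
        x = [(xi + vi * dt) % L for xi, vi in zip(x, v)]
    return v, L / n   # final velocities and the (constant) headway

v, h = simulate()
print(all(abs(vi - V(h)) < 1e-6 for vi in v))  # uniform flow relaxes to V(headway)
```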

  4. Mixed deterministic statistical modelling of regional ozone air pollution

    KAUST Repository

    Kalenderski, Stoitchko Dimitrov

    2011-03-17

We develop a physically motivated statistical model for regional ozone air pollution by separating the ground-level pollutant concentration field into three components, namely: transport, local production and large-scale mean trend mostly dominated by emission rates. The model is novel in the field of environmental spatial statistics in that it is a combined deterministic-statistical model, which gives a new perspective to the modelling of air pollution. The model is presented in a Bayesian hierarchical formalism, and explicitly accounts for advection of pollutants, using the advection equation. We apply the model to a specific case of regional ozone pollution: the Lower Fraser valley of British Columbia, Canada. As a predictive tool, we demonstrate that the model vastly outperforms existing, simpler modelling approaches. Our study highlights the importance of simultaneously considering different aspects of an air pollution problem as well as taking into account the physical bases that govern the processes of interest. © 2011 John Wiley & Sons, Ltd.

  5. Analysis of deterministic cyclic gene regulatory network models with delays

    CERN Document Server

    Ahsen, Mehmet Eren; Niculescu, Silviu-Iulian

    2015-01-01

This brief examines a deterministic, ODE-based model for gene regulatory networks (GRN) that incorporates nonlinearities and time-delayed feedback. An introductory chapter provides some insights into molecular biology and GRNs. The mathematical tools necessary for studying the GRN model are then reviewed, in particular Hill functions and Schwarzian derivatives. One chapter is devoted to the analysis of GRNs under negative feedback with time delays, and a special case of a homogeneous GRN is considered. Asymptotic stability analysis of GRNs under positive feedback is then considered in a separate chapter, in which conditions leading to bi-stability are derived. Graduate and advanced undergraduate students and researchers in control engineering, applied mathematics, systems biology and synthetic biology will find this brief to be a clear and concise introduction to the modeling and analysis of GRNs.

  6. Deterministic Aided STAP for Target Detection in Heterogeneous Situations

    Directory of Open Access Journals (Sweden)

    J.-F. Degurse

    2013-01-01

Classical space-time adaptive processing (STAP) detectors are strongly limited when facing highly heterogeneous environments. Indeed, in this case, representative target-free data are no longer available. Single dataset algorithms, such as the MLED algorithm, have proved their efficiency in overcoming this problem by working only on primary data. These methods are based on the APES algorithm, which removes the useful signal from the covariance matrix. However, a small part of the clutter signal is also removed from the covariance matrix in this operation. Consequently, a degradation of clutter rejection performance is observed. We propose two algorithms that use deterministic aided STAP to overcome this issue of the single dataset APES method. The results on realistic simulated data and real data show that these methods outperform traditional single dataset methods in detection and in clutter rejection.

  7. Deterministic Safety Analysis for Nuclear Power Plants. Specific Safety Guide

    International Nuclear Information System (INIS)

    The objective of this Safety Guide is to provide harmonized guidance to designers, operators, regulators and providers of technical support on deterministic safety analysis for nuclear power plants. It provides information on the utilization of the results of such analysis for safety and reliability improvements. The Safety Guide addresses conservative, best estimate and uncertainty evaluation approaches to deterministic safety analysis and is applicable to current and future designs. Contents: 1. Introduction; 2. Grouping of initiating events and associated transients relating to plant states; 3. Deterministic safety analysis and acceptance criteria; 4. Conservative deterministic safety analysis; 5. Best estimate plus uncertainty analysis; 6. Verification and validation of computer codes; 7. Relation of deterministic safety analysis to engineering aspects of safety and probabilistic safety analysis; 8. Application of deterministic safety analysis; 9. Source term evaluation for operational states and accident conditions; References.

  8. Minimal cosmography

    Science.gov (United States)

    Piazza, Federico; Schücker, Thomas

    2016-04-01

    The minimal requirement for cosmography—a non-dynamical description of the universe—is a prescription for calculating null geodesics, and time-like geodesics as a function of their proper time. In this paper, we consider the most general linear connection compatible with homogeneity and isotropy, but not necessarily with a metric. A light-cone structure is assigned by choosing a set of geodesics representing light rays. This defines a "scale factor" and a local notion of distance, as that travelled by light in a given proper time interval. We find that the velocities and relativistic energies of free-falling bodies decrease in time as a consequence of cosmic expansion, but at a rate that can be different than that dictated by the usual metric framework. By extrapolating this behavior to photons' redshift, we find that the latter is in principle independent of the "scale factor". Interestingly, redshift-distance relations and other standard geometric observables are modified in this extended framework, in a way that could be experimentally tested. An extremely tight constraint on the model, however, is represented by the blackbody-ness of the cosmic microwave background. Finally, as a check, we also consider the effects of a non-metric connection in a different set-up, namely, that of a static, spherically symmetric spacetime.

  9. Minimal cosmography

    CERN Document Server

    Piazza, Federico

    2015-01-01

    The minimal requirement for cosmography - a nondynamical description of the universe - is a prescription for calculating null geodesics, and timelike geodesics as a function of their proper time. In this paper, we consider the most general linear connection compatible with homogeneity and isotropy, but not necessarily with a metric. A light-cone structure is assigned by choosing a set of geodesics representing light rays. This defines a "scale factor" and a local notion of distance, as that travelled by light in a given proper time interval. We find that the velocities and relativistic energies of free-falling bodies decrease in time as a consequence of cosmic expansion, but at a rate that can be different than that dictated by the usual metric framework. By extrapolating this behavior to photons redshift, we find that the latter is in principle independent of the "scale factor". Interestingly, redshift-distance relations and other standard geometric observables are modified in this extended framework, in a w...

  10. Deterministic, Nanoscale Fabrication of Mesoscale Objects

    Energy Technology Data Exchange (ETDEWEB)

    Jr., R M; Gilmer, J; Rubenchik, A; Shirk, M

    2004-12-08

    Neither LLNL nor any other organization has the capability to perform deterministic fabrication of mm-sized objects with arbitrary, µm-sized, 3-D features and with 100-nm-scale accuracy and smoothness. This is particularly true for materials such as high explosives and low-density aerogels, as well as materials such as diamond and vanadium. The motivation for this project was to investigate the physics and chemistry that control the interactions of solid surfaces with laser beams and ion beams, with a view towards their applicability to the desired deterministic fabrication processes. As part of this LDRD project, one of our goals was to advance the state of the art in experimental work, but, in order ultimately to create a deterministic capability for such precision micromachining, another goal was to build a new modeling/simulation capability that could also extend the state of the art in this field. We have achieved both goals. In this project, we have, for the first time, combined a 1-D hydrocode ("HYADES") with a 3-D molecular dynamics simulator ("MDCASK") in our modeling studies. In FY02 and FY03, we investigated the ablation/surface-modification processes that occur on copper, gold, and nickel substrates under sub-ps laser pulses. In FY04, we investigated laser ablation of carbon, including laser-enhanced chemical reaction on the carbon surface for both vitreous carbon and carbon aerogels. Both experimental and modeling results will be presented in the report that follows. The immediate impact of our investigation was a much better understanding of the chemical and physical processes that ensue when solid materials are exposed to femtosecond laser pulses. More broadly, we have better positioned LLNL to design a cluster tool for fabricating mesoscale objects utilizing laser pulses and ion beams as well as more traditional machining/manufacturing techniques for applications such as components in NIF

  11. Strategies to enhance waste minimization and energy conservation within organizations: a case study from the UK construction sector.

    Science.gov (United States)

    Jones, Jo; Jackson, Janet; Tudor, Terry; Bates, Margaret

    2012-09-01

    Strategies for enhancing environmental management are a key focus for the government in the UK. Using a manufacturing company from the construction sector as a case study, this paper evaluates selected interventionist techniques, including environmental teams, awareness raising and staff training to improve environmental performance. The study employed a range of methods including questionnaire surveys and audits of energy consumption and generation of waste to examine the outcomes of the selected techniques. The results suggest that initially environmental management was not a focus for either the employees or the company. However, as a result of employing the techniques, the company was able to reduce energy consumption, increase recycling rates and achieve costs savings in excess of £132,000.

  12. Appropriate small dam management for minimizing catchment-wide safety threats: International benchmarked guidelines and demonstrative cases studies

    Science.gov (United States)

    Pisaniello, John D.; Tingey-Holyoak, Joanne; Burritt, Roger L.

    2012-01-01

    Small dam safety is generally being ignored. The potential for dam failure to result in catastrophic consequences for downstream communities, property and the environment warrants exploration of the threats and policy issues associated with the management of small/farm dams. The paper achieves this through a comparative analysis of differing levels of dam safety assurance policy: absent, driven, strong, and model. A strategic review is undertaken to establish international dam safety policy benchmarks and to identify a best-practice model. A cost-effective engineering/accounting tool is presented to assist the policy selection process and complement the best-practice model. The paper then demonstrates the significance of the small-dam safety problem with a case study of four Australian States: policy-absent South Australia, policy-driven Victoria, policy-strong New South Wales, and policy-model Tasmania. Surveys of farmer behavior and practices provide empirical evidence of the importance of policy and its proper implementation. Both individual and cumulative farm dam failure threats are addressed and, with supporting empirical evidence, the need for "appropriate" supervision of small dams is demonstrated. The paper adds to the existing international dam policy literature by identifying an acceptable minimum level of practice in private/farm dam safety assurance policy as well as updated international best-practice policy guidelines, while providing a case study demonstration of how to apply the guidelines and empirical reinforcement of the need for "appropriate" policy. The policy guidelines, cost-effective technology, and comparative lessons presented can assist any jurisdiction to determine and implement appropriate dam safety policy.

  13. A critical evaluation of deterministic methods in size optimisation of reliable and cost effective standalone hybrid renewable energy systems

    International Nuclear Information System (INIS)

    Reliability of a hybrid renewable energy system (HRES) strongly depends on various uncertainties affecting the amount of power produced by the system. In the design of systems subject to uncertainties, both deterministic and non-deterministic design approaches can be adopted. In a deterministic design approach, the designer considers the presence of uncertainties and incorporates them indirectly into the design by applying safety factors. It is assumed that, by employing suitable safety factors and considering worst-case scenarios, reliable systems can be designed. In effect, the multi-objective optimisation problem with the two objectives of reliability and cost is reduced to a single-objective optimisation problem with the objective of cost only. In this paper the competence of deterministic design methods in the size optimisation of reliable standalone wind–PV–battery, wind–PV–diesel and wind–PV–battery–diesel configurations is examined. For each configuration, first, using different values of safety factors, the optimal size of the system components which minimises the system cost is found deterministically. Then, for each case, using a Monte Carlo simulation, the effect of the safety factors on the reliability and the cost is investigated. In performing the reliability analysis, several reliability measures, namely, unmet load, blackout durations (total, maximum and average) and mean time between failures are considered. It is shown that the traditional methods of considering the effect of uncertainties in deterministic designs, such as design for an autonomy period and employing safety factors, have either little or unpredictable impact on the actual reliability of the designed wind–PV–battery configuration. In the case of wind–PV–diesel and wind–PV–battery–diesel configurations it is shown that, while using a high-enough margin of safety in sizing the diesel generator leads to reliable systems, the optimum value for this margin of safety leading to a
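    The contrast the abstract draws between safety-factored deterministic sizing and Monte Carlo reliability checking can be illustrated with a toy simulation. Everything below is hypothetical (the Gaussian output model, the capacities, the function name); it only shows how a chosen safety factor maps to an estimated unmet-load probability:

```python
import random

def unmet_load_fraction(demand_kw, capacity_kw, sigma=0.2,
                        trials=10_000, seed=1):
    """Monte Carlo sketch: perturb the renewable output by a Gaussian
    factor each trial and count how often the installed capacity fails
    to cover the demand. All numbers are illustrative."""
    rng = random.Random(seed)
    unmet = 0
    for _ in range(trials):
        # output fluctuates around its nominal value; never negative
        output = capacity_kw * max(0.0, rng.gauss(1.0, sigma))
        if output < demand_kw:
            unmet += 1
    return unmet / trials

# Safety factor 1.5 vs 3.0 on a 100 kW demand:
print(unmet_load_fraction(100.0, 150.0))
print(unmet_load_fraction(100.0, 300.0))
```

The point mirrors the abstract: a larger safety factor lowers the estimated unmet-load fraction, but only a simulation quantifies by how much.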

  14. Primality deterministic and primality probabilistic tests

    Directory of Open Access Journals (Sweden)

    Alfredo Rizzi

    2007-10-01

    Full Text Available In this paper the author discusses the importance of prime numbers in mathematics and in cryptography, recalling the fundamental research of Euler, Fermat, Legendre, Riemann and other scholars. There are many expressions that generate prime numbers; among them, the Mersenne primes have interesting properties. There are also many conjectures that still have to be proved or refuted. Deterministic primality tests are algorithms that establish whether a number is prime or not. They are not applicable in many practical situations, for instance in public-key cryptography, because the computation time would be too long. Probabilistic primality tests allow one to test the null hypothesis that a number is prime. The paper comments on the most important statistical tests.
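    As an illustration of the probabilistic tests the abstract describes, the Miller-Rabin test checks the hypothesis "n is prime" with a bounded error probability per random round. The sketch below is a standard textbook version, not code from the paper:

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic primality test. Returns False if n is certainly
    composite; True if n is probably prime (error prob. < 4**-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a is a witness: n is composite
    return True

print(miller_rabin(2**31 - 1))  # True  (a Mersenne prime)
print(miller_rabin(2**31 - 3))  # False (divisible by 5)
```

A prime is never declared composite; the randomness only affects how quickly a composite is caught, which is why such tests are fast enough for public-key cryptography while deterministic tests often are not.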

  15. Deterministic polishing from theory to practice

    Science.gov (United States)

    Hooper, Abigail R.; Hoffmann, Nathan N.; Sarkas, Harry W.; Escolas, John; Hobbs, Zachary

    2015-10-01

    Improving predictability in optical fabrication can go a long way towards increasing profit margins and maintaining a competitive edge in an economic environment where pressure is mounting for optical manufacturers to cut costs. A major source of hidden cost is rework - the share of production that does not meet specification in the first pass through the polishing equipment. Rework adds substantially to a part's processing and labor costs, as well as creating bottlenecks in production lines and frustration for managers, operators and customers. The polishing process involves several interacting variables, including glass type, polishing pads, machine type, RPM, downforce, slurry type, Baumé level and even the operators themselves. Bringing every variable under control while operating in a robust space can not only yield a deterministic polishing process that improves profitability but also produce a higher-quality optic.

  16. Mechanics From Newton's Laws to Deterministic Chaos

    CERN Document Server

    Scheck, Florian

    2010-01-01

    This book covers all topics in mechanics from elementary Newtonian mechanics, the principles of canonical mechanics and rigid body mechanics to relativistic mechanics and nonlinear dynamics. It was among the first textbooks to include dynamical systems and deterministic chaos in due detail. As compared to the previous editions the present fifth edition is updated and revised with more explanations, additional examples and sections on Noether's theorem. Symmetries and invariance principles, the basic geometric aspects of mechanics as well as elements of continuum mechanics also play an important role. The book will enable the reader to develop general principles from which equations of motion follow, to understand the importance of canonical mechanics and of symmetries as a basis for quantum mechanics, and to get practice in using general theoretical concepts and tools that are essential for all branches of physics. The book contains more than 120 problems with complete solutions, as well as some practical exa...

  17. Deterministic aspects of nonlinear modulation instability

    CERN Document Server

    van Groesen, E; Karjanto, N

    2011-01-01

    Different from statistical considerations on stochastic wave fields, this paper aims to contribute to the understanding of (some of) the underlying physical phenomena that may give rise to the occurrence of extreme, rogue, waves. To that end a specific deterministic wavefield is investigated that develops extreme waves from a uniform background. For this explicitly described nonlinear extension of the Benjamin-Feir instability, the soliton on finite background of the NLS equation, the global down-stream evolving distortions, the time signal of the extreme waves, and the local evolution near the extreme position are investigated. As part of the search for conditions to obtain extreme waves, we show that the extreme wave has a specific optimization property for the physical energy, and comment on the possible validity for more realistic situations.

  18. Anisotropic permeability in deterministic lateral displacement arrays

    CERN Document Server

    Vernekar, Rohan; Loutherback, Kevin; Morton, Keith; Inglis, David

    2016-01-01

    We investigate anisotropic permeability of microfluidic deterministic lateral displacement (DLD) arrays. A DLD array can achieve high-resolution bimodal size-based separation of micro-particles, including bioparticles such as cells. Correct operation requires that the fluid flow remains at a fixed angle with respect to the periodic obstacle array. We show via experiments and lattice-Boltzmann simulations that subtle array design features cause anisotropic permeability. The anisotropy, which indicates the array's intrinsic tendency to induce an undesired lateral pressure gradient, can lead to off-axis flows and therefore local changes in the critical separation size. Thus, particle trajectories can become unpredictable and the device useless for the desired separation duty. We show that for circular posts the rotated-square layout, unlike the parallelogram layout, does not suffer from anisotropy and is the preferred geometry. Furthermore, anisotropy becomes severe for arrays with unequal axial and lateral gaps...

  19. Deterministic polarization chaos from a laser diode

    CERN Document Server

    Virte, Martin; Thienpont, Hugo; Sciamanna, Marc

    2014-01-01

    Fifty years after the invention of the laser diode and forty years after the report of the butterfly effect - i.e., the unpredictability of deterministic chaos - it is said that a laser diode behaves like a damped nonlinear oscillator, so that no chaos can be generated unless additional forcing or parameter modulation is applied. Here we report the first counter-example of a free-running laser diode generating chaos. The underlying physics is a nonlinear coupling between two elliptically polarized modes in a vertical-cavity surface-emitting laser. We identify chaos in experimental time-series and show theoretically the bifurcations leading to single- and double-scroll attractors with characteristics similar to Lorenz chaos. The reported polarization chaos resembles at first sight a noise-driven mode hopping but shows opposite statistical properties. Our findings open up new research areas that combine the high-speed performance of microcavity lasers with controllable and integrated sources of optical chaos.

  20. Deterministic seismic hazard macrozonation of India

    Indian Academy of Sciences (India)

    Sreevalsa Kolathayar; T G Sitharam; K S Vipin

    2012-10-01

    Earthquakes are known to have occurred in the Indian subcontinent from ancient times. This paper presents the results of a seismic hazard analysis of India (6°–38°N and 68°–98°E) based on the deterministic approach using the latest seismicity data (up to 2010). The hazard analysis was done using two different source models (linear sources and point sources) and 12 well recognized attenuation relations covering the varied tectonic provinces in the region. The earthquake data obtained from different sources were homogenized and declustered, and a total of 27,146 earthquakes of moment magnitude 4 and above were listed in the study area. The seismotectonic map of the study area was prepared by considering the faults, lineaments and shear zones which are associated with earthquakes of magnitude 4 and above. A new program was developed in MATLAB for smoothing of the point sources. For assessing the seismic hazard, the study area was divided into small grids of size 0.1° × 0.1° (approximately 10 × 10 km), and the hazard parameters were calculated at the center of each of these grid cells by considering all the seismic sources within a radius of 300 to 400 km. Rock level peak horizontal acceleration (PHA) and spectral accelerations for periods 0.1 and 1 s have been calculated for all the grid points with a deterministic approach using a code written in MATLAB. Epistemic uncertainty in the hazard definition has been tackled within a logic-tree framework considering two types of sources and three attenuation models for each grid point. A hazard evaluation without the logic-tree approach has also been carried out for comparison of the results. Contour maps showing the spatial variation of the hazard values are presented in the paper.
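    The grid-based deterministic procedure described above (take the maximum ground motion over all sources within a few hundred kilometres of each grid point) can be sketched as follows. The attenuation relation, its coefficients, and all coordinates here are hypothetical placeholders, not the 12 relations or the catalogue used in the paper:

```python
import math

def pha_grid(sites, sources, max_dist_km=300.0):
    """Deterministic hazard sketch: the PHA at each site is the maximum
    ground motion produced by any source within max_dist_km."""
    def atten(mw, r_km):
        # HYPOTHETICAL relation: ln PGA[g] = a + b*Mw - c*ln(r + 10)
        return math.exp(-3.5 + 0.9 * mw - 1.2 * math.log(r_km + 10.0))

    def dist_km(p, q):
        # crude flat-earth distance, ~111 km per degree
        return 111.0 * math.hypot(p[0] - q[0], p[1] - q[1])

    hazard = {}
    for site in sites:
        pha = 0.0
        for lat, lon, mw in sources:
            r = dist_km(site, (lat, lon))
            if r <= max_dist_km:
                pha = max(pha, atten(mw, r))
        hazard[site] = pha
    return hazard

# Invented catalogue entries: (lat, lon, moment magnitude)
sources = [(28.6, 77.2, 6.5), (30.0, 80.0, 7.2)]
sites = [(28.7, 77.3), (20.0, 77.0)]  # one near a source, one far away
print(pha_grid(sites, sources))
```

A site with no source inside the cut-off radius receives zero hazard in this sketch; the paper additionally smooths point sources and combines attenuation models in a logic tree.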

  1. Minimally Invasive Dentistry

    Science.gov (United States)


  2. Simple deterministically constructed cycle reservoirs with regular jumps.

    Science.gov (United States)

    Rodan, Ali; Tiňo, Peter

    2012-07-01

    A new class of state-space models, reservoir models, with a fixed state transition structure (the "reservoir") and an adaptable readout from the state space, has recently emerged as a way for time series processing and modeling. Echo state network (ESN) is one of the simplest, yet powerful, reservoir models. ESN models are generally constructed in a randomized manner. In our previous study (Rodan & Tiňo, 2011), we showed that a very simple, cyclic, deterministically generated reservoir can yield performance competitive with standard ESN. In this contribution, we extend our previous study in three aspects. First, we introduce a novel simple deterministic reservoir model, cycle reservoir with jumps (CRJ), with highly constrained weight values, that has superior performance to standard ESN on a variety of temporal tasks of different origin and characteristics. Second, we elaborate on the possible link between reservoir characterizations, such as eigenvalue distribution of the reservoir matrix or pseudo-Lyapunov exponent of the input-driven reservoir dynamics, and the model performance. It has been suggested that a uniform coverage of the unit disk by such eigenvalues can lead to superior model performance. We show that despite highly constrained eigenvalue distribution, CRJ consistently outperforms ESN (which has much more uniform eigenvalue coverage of the unit disk). Also, unlike in the case of ESN, pseudo-Lyapunov exponents of the selected optimal CRJ models are consistently negative. Third, we present a new framework for determining the short-term memory capacity of linear reservoir models to a high degree of precision. Using the framework, we study the effect of shortcut connections in the CRJ reservoir topology on its memory capacity. PMID:22428595
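    The CRJ reservoir described above has a fully deterministic weight structure: a unidirectional cycle plus bidirectional jumps of fixed length, each with a single shared weight. A minimal sketch of such a reservoir matrix (the parameter values and the wrap-around handling are illustrative, not taken from the paper):

```python
import numpy as np

def crj_reservoir(n, cycle_w=0.7, jump_w=0.4, jump_len=3):
    """Cycle Reservoir with Jumps: unit i feeds unit i+1 (mod n) with a
    single cycle weight, and every jump_len-th unit is linked in both
    directions to the unit jump_len ahead with a single jump weight."""
    W = np.zeros((n, n))
    for i in range(n):
        W[(i + 1) % n, i] = cycle_w           # the deterministic cycle
    for i in range(0, n - n % jump_len, jump_len):
        j = (i + jump_len) % n
        W[j, i] = jump_w                       # forward jump
        W[i, j] = jump_w                       # backward jump
    return W

W = crj_reservoir(9)
print(np.count_nonzero(W))  # 15: 9 cycle weights + 6 jump weights
```

Only three scalars (cycle weight, jump weight, jump length) define the whole matrix, which is what makes the construction deterministic, in contrast to the randomized initialization of a standard ESN.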

  3. Minimally invasive presacral approach for revision of an Axial Lumbar Interbody Fusion rod due to fall-related lumbosacral instability: a case report

    Directory of Open Access Journals (Sweden)

    Cohen Anders

    2011-09-01

    Full Text Available Abstract Introduction The purpose of this study was to describe procedural details of a minimally invasive presacral approach for revision of an L5-S1 Axial Lumbar Interbody Fusion rod. Case presentation A 70-year-old Caucasian man presented to our facility with marked thoracolumbar scoliosis, osteoarthritic changes characterized by high-grade osteophytes, and significant intervertebral disc collapse and calcification. Our patient required crutches during ambulation and reported intractable axial and radicular pain. Multi-level reconstruction of L1-4 was accomplished with extreme lateral interbody fusion, although focal lumbosacral symptoms persisted due to disc space collapse at L5-S1. Lumbosacral interbody distraction and stabilization was achieved four weeks later with the Axial Lumbar Interbody Fusion System (TranS1 Inc., Wilmington, NC, USA) and rod implantation via an axial presacral approach. Despite symptom resolution following this procedure, our patient suffered a fall six weeks postoperatively with direct sacral impaction, resulting in symptom recurrence and loss of L5-S1 distraction. Following seven months of unsuccessful conservative care, a revision of the Axial Lumbar Interbody Fusion rod was performed that utilized the same presacral approach and used a larger diameter implant. Minimal adhesions were encountered upon presacral re-entry. A precise operative trajectory to the base of the previously implanted rod was achieved using fluoroscopic guidance. Surgical removal of the implant was successful with minimal bone resection required. A larger diameter Axial Lumbar Interbody Fusion rod was then implanted and joint distraction was re-established. The radicular symptoms resolved following revision surgery and our patient was ambulating without assistance on post-operative day one. No adverse events were reported. Conclusions The Axial Lumbar Interbody Fusion distraction rod may be revised and replaced with a larger diameter rod using

  4. Recognition of deterministic ETOL languages in logarithmic space

    DEFF Research Database (Denmark)

    Jones, Neil D.; Skyum, Sven

    1977-01-01

    It is shown that if G is a deterministic ETOL system, there is a nondeterministic log space algorithm to determine membership in L(G). Consequently, every deterministic ETOL language is recognizable in polynomial time. As a corollary, all context-free languages of finite index, and all Indian par...

  5. Atomic routing in a deterministic queuing model

    Directory of Open Access Journals (Sweden)

    T.L. Werth

    2014-03-01

    We also consider the makespan objective (arrival time of the last user) and show that optimal solutions and Nash equilibria in these games, where every user selfishly tries to minimize her travel time, can be found efficiently.

  6. FP/FIFO scheduling: coexistence of deterministic and probabilistic QoS guarantees

    Directory of Open Access Journals (Sweden)

    Pascale Minet

    2007-01-01

    Full Text Available In this paper, we focus on applications having quantitative QoS (Quality of Service) requirements on their end-to-end response time (or jitter). We propose a solution allowing the coexistence of two types of quantitative QoS guarantees, deterministic and probabilistic, while providing high resource utilization. Our solution combines the advantages of the deterministic approach and the probabilistic one. The deterministic approach is based on a worst-case analysis. The probabilistic approach uses a mathematical model to obtain the probability that the response time exceeds a given value. We assume that flows are scheduled according to non-preemptive FP/FIFO: the packet with the highest fixed priority is scheduled first, and if two packets share the same priority, the packet that arrived first is scheduled first. We make no particular assumption concerning the flow priorities or the nature of the QoS guarantee requested by each flow. An admission control derived from these results is then proposed, allowing each flow to receive a quantitative QoS guarantee adapted to its QoS requirements. An example illustrates the merits of the coexistence of deterministic and probabilistic QoS guarantees.
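    The FP/FIFO discipline the abstract assumes can be sketched directly: among queued packets, the highest fixed priority is served first, and ties are broken by arrival time. This toy model only orders a static set of packets; it is not the paper's worst-case or probabilistic response-time analysis:

```python
import heapq

def fp_fifo_order(packets):
    """Serve packets under non-preemptive FP/FIFO: highest fixed
    priority first, FIFO (earliest arrival) among equal priorities.
    packets: iterable of (priority, arrival_time, name)."""
    # heapq is a min-heap, so negate the priority; the arrival time
    # acts as the tie-breaker, which gives exactly the FIFO rule.
    heap = [(-prio, arrival, name) for prio, arrival, name in packets]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)
        order.append(name)
    return order

pkts = [(1, 0.0, "a"), (3, 0.5, "b"), (3, 0.2, "c"), (2, 0.1, "d")]
print(fp_fifo_order(pkts))  # ['c', 'b', 'd', 'a']
```

Encoding the rule as a lexicographic key (priority, then arrival) is what makes the deterministic worst-case analysis tractable: the delay a packet suffers depends only on higher-priority traffic and earlier-arrived equal-priority traffic.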

  7. A case-study of landfill minimization and material recovery via waste co-gasification in a new waste management scheme

    International Nuclear Information System (INIS)

    Highlights: • A new waste management scheme and the effects of co-gasification of MSW were assessed. • A co-gasification system was compared with other conventional systems. • The co-gasification system can produce slag and metal with high-quality. • The co-gasification system showed an economic advantage when bottom ash is landfilled. • The sensitive analyses indicate an economic advantage when the landfill cost is high. - Abstract: This study evaluates municipal solid waste co-gasification technology and a new solid waste management scheme, which can minimize final landfill amounts and maximize material recycled from waste. This new scheme is considered for a region where bottom ash and incombustibles are landfilled or not allowed to be recycled due to their toxic heavy metal concentration. Waste is processed with incombustible residues and an incineration bottom ash discharged from existent conventional incinerators, using a gasification and melting technology (the Direct Melting System). The inert materials, contained in municipal solid waste, incombustibles and bottom ash, are recycled as slag and metal in this process as well as energy recovery. Based on this new waste management scheme with a co-gasification system, a case study of municipal solid waste co-gasification was evaluated and compared with other technical solutions, such as conventional incineration, incineration with an ash melting facility under certain boundary conditions. From a technical point of view, co-gasification produced high quality slag with few harmful heavy metals, which was recycled completely without requiring any further post-treatment such as aging. As a consequence, the co-gasification system had an economical advantage over other systems because of its material recovery and minimization of the final landfill amount. Sensitivity analyses of landfill cost, power price and inert materials in waste were also conducted. The higher the landfill costs, the greater the

  8. A case-study of landfill minimization and material recovery via waste co-gasification in a new waste management scheme

    Energy Technology Data Exchange (ETDEWEB)

    Tanigaki, Nobuhiro, E-mail: tanigaki.nobuhiro@eng.nssmc.com [NIPPON STEEL & SUMIKIN ENGINEERING CO., LTD., (EUROPEAN OFFICE), Am Seestern 8, 40547 Dusseldorf (Germany); Ishida, Yoshihiro [NIPPON STEEL & SUMIKIN ENGINEERING CO., LTD., 46-59, Nakabaru, Tobata-ku, Kitakyushu, Fukuoka 804-8505 (Japan); Osada, Morihiro [NIPPON STEEL & SUMIKIN ENGINEERING CO., LTD., (Head Office), Osaki Center Building 1-5-1, Osaki, Shinagawa-ku, Tokyo 141-8604 (Japan)

    2015-03-15

    Highlights: • A new waste management scheme and the effects of co-gasification of MSW were assessed. • A co-gasification system was compared with other conventional systems. • The co-gasification system can produce slag and metal with high-quality. • The co-gasification system showed an economic advantage when bottom ash is landfilled. • The sensitive analyses indicate an economic advantage when the landfill cost is high. - Abstract: This study evaluates municipal solid waste co-gasification technology and a new solid waste management scheme, which can minimize final landfill amounts and maximize material recycled from waste. This new scheme is considered for a region where bottom ash and incombustibles are landfilled or not allowed to be recycled due to their toxic heavy metal concentration. Waste is processed with incombustible residues and an incineration bottom ash discharged from existent conventional incinerators, using a gasification and melting technology (the Direct Melting System). The inert materials, contained in municipal solid waste, incombustibles and bottom ash, are recycled as slag and metal in this process as well as energy recovery. Based on this new waste management scheme with a co-gasification system, a case study of municipal solid waste co-gasification was evaluated and compared with other technical solutions, such as conventional incineration, incineration with an ash melting facility under certain boundary conditions. From a technical point of view, co-gasification produced high quality slag with few harmful heavy metals, which was recycled completely without requiring any further post-treatment such as aging. As a consequence, the co-gasification system had an economical advantage over other systems because of its material recovery and minimization of the final landfill amount. Sensitivity analyses of landfill cost, power price and inert materials in waste were also conducted. The higher the landfill costs, the greater the

  9. Human gait recognition via deterministic learning.

    Science.gov (United States)

    Zeng, Wei; Wang, Cong

    2012-11-01

    Recognition of temporal/dynamical patterns is among the most difficult pattern recognition tasks. Human gait recognition is a typical difficulty in the area of dynamical pattern recognition. It classifies and identifies individuals by their time-varying gait signature data. Recently, a new dynamical pattern recognition method based on deterministic learning theory was presented, in which a time-varying dynamical pattern can be effectively represented in a time-invariant manner and can be rapidly recognized. In this paper, we present a new model-based approach for human gait recognition via the aforementioned method, specifically for recognizing people by gait. The approach consists of two phases: a training (learning) phase and a test (recognition) phase. In the training phase, side silhouette lower limb joint angles and angular velocities are selected as gait features. A five-link biped model for human gait locomotion is employed to demonstrate that functions containing joint angle and angular velocity state vectors characterize the gait system dynamics. Due to the quasi-periodic and symmetrical characteristics of human gait, the gait system dynamics can be simplified to be described by functions of joint angles and angular velocities of one side of the human body, thus the feature dimension is effectively reduced. Locally-accurate identification of the gait system dynamics is achieved by using radial basis function (RBF) neural networks (NNs) through deterministic learning. The obtained knowledge of the approximated gait system dynamics is stored in constant RBF networks. A gait signature is then derived from the extracted gait system dynamics along the phase portrait of joint angles versus angular velocities. A bank of estimators is constructed using constant RBF networks to represent the training gait patterns. 
In the test phase, by comparing the set of estimators with the test gait pattern, a set of recognition errors are generated, and the average L(1) norms
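As an illustration of the deterministic-learning recipe sketched above — approximate the gait dynamics with an RBF network, store the constant weights, and recognize by the smallest average L1 prediction error — the following is a minimal NumPy sketch. The trajectories, centers and kernel width are hypothetical stand-ins; the five-link biped model and the actual deterministic-learning update law are not reproduced (ordinary least squares is used in their place).

```python
import numpy as np

def rbf_features(X, centers, width=1.0):
    # Gaussian RBF activations for a batch of state vectors X (n_samples, dim)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def learn_dynamics(X, Xdot, centers):
    # Least-squares RBF weights W so that rbf_features(X) @ W ~ Xdot;
    # a stand-in for locally-accurate identification via deterministic learning.
    Phi = rbf_features(X, centers)
    W, *_ = np.linalg.lstsq(Phi, Xdot, rcond=None)
    return W

def avg_l1_error(X, Xdot, centers, W):
    # Average L1 norm of the dynamics prediction error along a test trajectory.
    return np.abs(rbf_features(X, centers) @ W - Xdot).mean()
```

A test pattern would then be attributed to the stored estimator (constant RBF network) yielding the smallest `avg_l1_error`, mirroring the bank-of-estimators comparison described in the abstract.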

  10. A minimally invasive technique for closing an iatrogenic subclavian artery cannulation using the Angio-Seal closure device: two case reports

    Directory of Open Access Journals (Sweden)

    Szkup Peter L

    2012-03-01

    Full Text Available Abstract Introduction In the two cases described here, the subclavian artery was inadvertently cannulated during unsuccessful access to the internal jugular vein. The puncture was successfully closed using a closure device based on a collagen plug (Angio-Seal, St Jude Medical, St Paul, MN, USA). This technique is relatively simple and inexpensive. It can provide clinicians, such as intensive care physicians and anesthesiologists, with a safe and straightforward alternative to major surgery and can be a life-saving procedure. Case presentation In the first case, an anesthetist attempted ultrasound-guided access to the right internal jugular vein during the preoperative preparation of a 66-year-old Caucasian man. A 7-French (Fr) triple-lumen catheter was inadvertently placed into his arterial system. In the second case, an emergency physician inadvertently placed a 7-Fr catheter into the subclavian artery of a 77-year-old Caucasian woman whilst attempting access to her right internal jugular vein. Both arterial punctures were successfully closed by means of a percutaneous closure device (Angio-Seal). No complications were observed. Conclusions Inadvertent subclavian arterial puncture can be successfully managed with no adverse clinical sequelae by using a percutaneous vascular closure device. This minimally invasive technique may be an option for patients with non-compressible arterial punctures. This report demonstrates two practical points that may help clinicians in decision-making during daily practice. First, it provides a practical solution to a well-known vascular complication. Second, it emphasizes the role of proper vascular ultrasound training for the non-radiologist.

  11. A deterministic algorithm for fitting a step function to a weighted point-set

    KAUST Repository

    Fournier, Hervé

    2013-02-01

    Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(n log n)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance to the input points. It matches the expected time bound of the best known randomized algorithm for this problem. Our approach relies on Cole's improved parametric searching technique. As a direct application, our result yields the first O(n log n)-time algorithm for computing a k-center of a set of n weighted points on the real line. © 2012 Elsevier B.V.
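The paper's optimal algorithm rests on Cole's parametric search, which is beyond a short sketch; the underlying decision structure, however, is simple. For a candidate error ε, each point (x_i, y_i, w_i) constrains a step covering it to a level in [y_i − ε/w_i, y_i + ε/w_i], so a greedy left-to-right scan checks whether k steps suffice, and a numerical binary search over ε recovers the optimum approximately. The sketch below illustrates that structure; it is not the O(n log n) deterministic algorithm of the paper.

```python
def k_steps_feasible(pts, k, eps):
    """Greedy test: can a k-step function stay within weighted error eps?
    pts: list of (x, y, w) tuples sorted by x, with w > 0."""
    steps, lo, hi = 1, -float("inf"), float("inf")
    for _, y, w in pts:
        a, b = y - eps / w, y + eps / w
        lo, hi = max(lo, a), min(hi, b)
        if lo > hi:            # current step cannot cover this point: open a new one
            steps += 1
            lo, hi = a, b
    return steps <= k

def min_max_weighted_error(pts, k, tol=1e-9):
    # Binary search on the error; feasibility is monotone in eps.
    pts = sorted(pts)
    lo, hi = 0.0, max(w * abs(y) for _, y, w in pts)  # eps for the step y = 0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if k_steps_feasible(pts, k, mid):
            hi = mid
        else:
            lo = mid
    return hi
```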

  12. A Deterministic Approach to Earthquake Prediction

    Directory of Open Access Journals (Sweden)

    Vittorio Sgrigna

    2012-01-01

    Full Text Available The paper aims at giving suggestions for a deterministic approach to investigate possible earthquake prediction and warning. A fundamental contribution can come from observations and physical modeling of earthquake precursors, aiming at placing the earthquake phenomenon within the framework of a unified theory able to explain the causes of its genesis, and the dynamics, rheology, and microphysics of its preparation, occurrence, postseismic relaxation, and interseismic phases. Studies based on combined ground and space observations of earthquake precursors are essential to address the issue. Unfortunately, up to now, what is lacking is the demonstration of a causal relationship (with explained physical processes) and a correlation between data gathered simultaneously and continuously by space observations and ground-based measurements. In doing this, modern and/or new methods and technologies have to be adopted to try to solve the problem. Coordinated space- and ground-based observations imply available test sites on the Earth's surface to correlate ground data, collected by appropriate networks of instruments, with space data detected on board Low-Earth-Orbit (LEO) satellites. Moreover, a new, strong theoretical scientific effort is necessary to try to understand the physics of the earthquake.

  13. Simple Deterministically Constructed Recurrent Neural Networks

    Science.gov (United States)

    Rodan, Ali; Tiňo, Peter

    A large number of models for time series processing, forecasting or modeling follows a state-space formulation. Models in the specific class of state-space approaches, referred to as Reservoir Computing, fix their state-transition function. The state space with the associated state transition structure forms a reservoir, which is supposed to be sufficiently complex so as to capture a large number of features of the input stream that can be potentially exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents us from performing a deeper theoretical investigation of the dynamical properties of successful reservoirs. Reservoir construction is largely driven by a series of (more-or-less) ad-hoc randomized model building stages, with both the researchers and practitioners having to rely on a series of trials and errors. We show that a very simple deterministically constructed reservoir with simple cycle topology gives performances comparable to those of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proved theoretical limit.
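A simple cycle reservoir of the kind described can be sketched in a few lines: each unit receives only its cyclic predecessor through a single weight r, and all input weights share one magnitude v with a fixed deterministic sign pattern (the published construction derives the signs deterministically, e.g. from the digits of an irrational number; the aperiodic pattern below is a hypothetical stand-in). A linear readout is then trained by ridge regression on the collected states. Sizes and weights here are illustrative choices, not tuned values from the paper.

```python
import numpy as np

def scr_states(u, n=30, r=0.9, v=0.1):
    # Simple Cycle Reservoir: unit i is fed only by unit i-1 with weight r;
    # all input weights have magnitude v with a fixed aperiodic sign pattern
    # (hypothetical stand-in for the paper's deterministic sign scheme).
    signs = np.where(((np.arange(n) * 0.61803398875) % 1.0) < 0.5, 1.0, -1.0)
    win = v * signs
    states = np.zeros((len(u), n))
    x = np.zeros(n)
    for t, ut in enumerate(u):
        x = np.tanh(r * np.roll(x, 1) + win * ut)  # cycle topology via roll
        states[t] = x
    return states

def ridge_readout(states, target, reg=1e-8):
    # Linear readout weights trained by ridge regression.
    S = states
    return np.linalg.solve(S.T @ S + reg * np.eye(S.shape[1]), S.T @ target)
```

On a toy delay-1 memory task, such a reservoir lets the linear readout recover u(t−1) with small error, which is the kind of memory-capacity behavior the abstract refers to.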

  14. Deterministic Random Walks on Regular Trees

    CERN Document Server

    Cooper, Joshua; Friedrich, Tobias; Spencer, Joel; 10.1002/rsa.20314

    2010-01-01

    Jim Propp's rotor-router model is a deterministic analogue of a random walk on a graph. Instead of distributing chips randomly, each vertex serves its neighbors in a fixed order. Cooper and Spencer (Comb. Probab. Comput. (2006)) show a remarkable similarity between the two models. If an (almost) arbitrary population of chips is placed on the vertices of the grid $\\Z^d$ and does a simultaneous walk in the Propp model, then at all times and on each vertex, the number of chips on that vertex deviates from the expected number the random walk would have gotten there by at most a constant. This constant is independent of the starting configuration and the order in which each vertex serves its neighbors. This result raises the question of whether all graphs have this property. With quite some effort, we are now able to answer this question negatively. For the graph being an infinite $k$-ary tree ($k \\ge 3$), we show that for any deviation $D$ there is an initial configuration of chips such that after running the Propp model for a ...
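The parallel rotor-router walk described above is easy to simulate on the integer line. The sketch below (a hypothetical minimal setup: all chips at the origin, all rotors initially pointing left) compares the chip counts against the expected values of the corresponding simultaneous random walk, illustrating the constant-deviation phenomenon Cooper and Spencer proved for grids; the paper's negative result concerns infinite k-ary trees, which this sketch does not cover.

```python
def propp_step(chips, rotors):
    # One parallel step of the rotor-router walk on the integer line:
    # a vertex with c chips sends floor(c/2) in each direction; if c is odd,
    # the extra chip follows the rotor (0 = left, 1 = right), which then flips.
    new = {}
    for v, c in chips.items():
        half, odd = divmod(c, 2)
        left = half + (odd if rotors.get(v, 0) == 0 else 0)
        right = c - left
        if odd:
            rotors[v] = 1 - rotors.get(v, 0)
        if left:
            new[v - 1] = new.get(v - 1, 0) + left
        if right:
            new[v + 1] = new.get(v + 1, 0) + right
    return new

def random_walk_step(prob):
    # Expected chip distribution of the simultaneous simple random walk.
    new = {}
    for v, p in prob.items():
        new[v - 1] = new.get(v - 1, 0.0) + p / 2.0
        new[v + 1] = new.get(v + 1, 0.0) + p / 2.0
    return new
```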

  15. Deterministic Secure Positioning in Wireless Sensor Networks

    CERN Document Server

    Delaët, Sylvie; Rokicki, Mariusz; Tixeuil, Sébastien

    2007-01-01

    Properly locating sensor nodes is an important building block for a large subset of wireless sensor networks (WSN) applications. As a result, the performance of the WSN degrades significantly when misbehaving nodes report false location and distance information in order to fake their actual location. In this paper we propose a general distributed deterministic protocol for accurate identification of faking sensors in a WSN. Our scheme does \\emph{not} rely on a subset of \\emph{trusted} nodes that are not allowed to misbehave and are known to every node in the network. Thus, any subset of nodes is allowed to try faking its position. As in previous approaches, our protocol is based on distance evaluation techniques developed for WSN. On the positive side, we show that when the received signal strength (RSS) technique is used, our protocol handles at most $\\lfloor \\frac{n}{2} \\rfloor-2$ faking sensors. Also, when the time of flight (ToF) technique is used, our protocol manages at most $\\lfloor \\frac{n}{2} \\rfloor...

  16. Traffic chaotic dynamics modeling and analysis of deterministic network

    Science.gov (United States)

    Wu, Weiqiang; Huang, Ning; Wu, Zhitao

    2016-07-01

    Network traffic is an important and direct factor in network reliability and performance. To understand the behavior of network traffic, chaotic dynamics models have been proposed and have helped greatly in analyzing nondeterministic networks. Previous research held that chaotic dynamics behavior was caused by random factors, and that deterministic networks would not exhibit chaotic dynamics because they lack random factors. In this paper, we first adopted chaos theory to analyze traffic data collected from a typical deterministic network testbed — an avionics full duplex switched Ethernet (AFDX) testbed — and found that chaotic dynamics behavior also exists in deterministic networks. Then, in order to explore the chaos-generating mechanism, we applied mean field theory to construct a traffic dynamics equation (TDE) for deterministic network traffic modeling, without any network random factors. By studying the derived TDE, we propose that chaotic dynamics is one of the natural properties of network traffic, and that it can also be regarded as the effect of the TDE control parameters. A network simulation was performed and the results verified that network congestion results in chaotic dynamics for a deterministic network, which is identical with the expectation of the TDE. Our research will be helpful for analyzing the complicated dynamics behavior of traffic in deterministic networks and will contribute to network reliability design and analysis.

  17. Seismic-Hazard Assessment for a Characteristic Earthquake Scenario: An Integrated Probabilistic–Deterministic Method

    OpenAIRE

    Convertito, V.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione OV, Napoli, Italia; Emolo, A.; Dipartimento di Scienze Fisiche Universita` degli Studi “Federico II” di Napoli; Zollo, A.; Dipartimento di Scienze Fisiche Universita` degli Studi “Federico II” di Napoli

    2006-01-01

    Probabilistic seismic hazard analysis (PSHA) is classically performed through the Cornell approach by using a uniform earthquake distribution over the source area and a given magnitude range. This study aims at extending the PSHA approach to the case of a characteristic earthquake scenario associated with an active fault. The approach integrates PSHA with a high-frequency deterministic technique for the prediction of peak and spectral ground motion parameters in a characteristi...

  18. Minimally invasive periodontal therapy.

    Science.gov (United States)

    Dannan, Aous

    2011-10-01

    Minimally invasive dentistry is a concept that preserves the dentition and supporting structures. Minimally invasive procedures in periodontal treatment are mostly confined to periodontal surgery, where the aim is to provide alternative approaches that allow less extensive manipulation of surrounding tissues than conventional procedures while accomplishing the same objectives. In this review, the concept of minimally invasive periodontal surgery (MIPS) is first explained. An electronic search for all studies regarding the efficacy and effectiveness of MIPS between 2001 and 2009 was conducted. For this purpose, suitable key words from Medical Subject Headings on PubMed were used to extract the required studies. All studies are discussed and important results are summarized. Preliminary data from case cohorts and from many studies reveal that the microsurgical access flap, in terms of MIPS, has a high potential to seal the healing wound from the contaminated oral environment by achieving and maintaining primary closure. Soft tissues are mostly preserved and minimal gingival recession is observed, an important feature for meeting the demands of the patient and the clinician in the esthetic zone. However, although the potential efficacy of MIPS in the treatment of deep intrabony defects has been demonstrated, larger studies are required to confirm and extend the reported positive preliminary outcomes.

  19. Optical Realization of Deterministic Entanglement Concentration of Polarized Photons

    Institute of Scientific and Technical Information of China (English)

    GU Yong-Jian; XIAN Liang; LI Wen-Dong; MA Li-Zhen

    2008-01-01

    We propose a scheme for the optical realization of deterministic entanglement concentration of polarized photons. To overcome the difficulty due to the lack of sufficiently strong interactions between photons, teleportation is employed to transfer the polarization states of two photons onto the path and polarization states of a third photon, made possible by the recent experimental realization of deterministic and complete Bell-state measurement. The required positive operator-valued measurement and further operations can then be implemented deterministically using a linear optical setup. All of this is within the reach of current technology.

  20. CONDUCCIÓN ANESTÉSICA DE LA SUSTITUCIÓN VALVULAR MITRAL MÍNIMAMENTE INVASIVA. PRIMEROS CASOS REALIZADOS EN CUBA / Anaesthetic management of minimally invasive mitral valve replacement. First cases performed in Cuba

    OpenAIRE

    Odalys Ojeda Mollinedo; Elizabeth Rodríguez Rosales; Miguel Ángel Carrasco Molina; Amaury Fernández Molina; Antonio de Arazoza Hernández; Fausto Leonel Rodríguez Salgueiro

    2011-01-01

    Minimally invasive heart surgery has many advantages for the patient; however, the difficulties in performing and implementing this procedure lie not only in the surgical technique but also in the design of the anesthetic technique, which becomes a challenge for the anesthesiologist. This article presents the first two cases of minimally invasive mitral valve replacement performed in the country. The anesthetic techniques and the results obtained are described, and the advantages and complications of t...

  1. Understanding Vertical Jump Potentiation: A Deterministic Model.

    Science.gov (United States)

    Suchomel, Timothy J; Lamont, Hugh S; Moir, Gavin L

    2016-06-01

    This review article discusses previous postactivation potentiation (PAP) literature and provides a deterministic model for vertical jump (i.e., squat jump, countermovement jump, and drop/depth jump) potentiation. There are a number of factors that must be considered when designing an effective strength-power potentiation complex (SPPC) focused on vertical jump potentiation. Sport scientists and practitioners must consider the characteristics of the subject being tested and the design of the SPPC itself. Subject characteristics that must be considered when designing an SPPC focused on vertical jump potentiation include the individual's relative strength, sex, muscle characteristics, neuromuscular characteristics, current fatigue state, and training background. Aspects of the SPPC that must be considered for vertical jump potentiation include the potentiating exercise, level and rate of muscle activation, volume load completed, the ballistic or non-ballistic nature of the potentiating exercise, and the rest interval(s) used following the potentiating exercise. Sport scientists and practitioners should design and seek SPPCs that are practical in nature regarding the equipment needed and the rest interval required for a potentiated performance. If practitioners would like to incorporate PAP as a training tool, they must take the athlete's training time restrictions into account, as a number of previous SPPCs have been shown to require long rest periods before potentiation can be realized. Thus, practitioners should seek SPPCs that may be effectively implemented in training and that do not require excessive rest intervals that may take away from valuable training time. Practitioners may decrease the necessary time needed to realize potentiation by improving their subject's relative strength.

  2. Deterministic versus stochastic trends: Detection and challenges

    Science.gov (United States)

    Fatichi, S.; Barbosa, S. M.; Caporali, E.; Silva, M. E.

    2009-09-01

    The detection of a trend in a time series and the evaluation of its magnitude and statistical significance is an important task in geophysical research. This importance is amplified in climate change contexts, since trends are often used to characterize long-term climate variability and to quantify the magnitude and the statistical significance of changes in climate time series, both at global and local scales. Recent studies have demonstrated that the stochastic behavior of a time series can change the statistical significance of a trend, especially if the time series exhibits long-range dependence. The present study examines the trends in time series of daily average temperature recorded at 26 stations in the Tuscany region (Italy). In this study a new framework for trend detection is proposed. First, two parametric statistical tests, the Phillips-Perron test and the Kwiatkowski-Phillips-Schmidt-Shin test, are applied in order to test for trend-stationary and difference-stationary behavior in the temperature time series. Then long-range dependence is assessed using different approaches, including wavelet analysis, heuristic methods and the fitting of fractionally integrated autoregressive moving average models. The trend detection results are further compared with the results obtained using nonparametric trend detection methods: the Mann-Kendall, Cox-Stuart and Spearman's ρ tests. This study confirms an increase in uncertainty when pronounced stochastic behaviors are present in the data. Nevertheless, for approximately one third of the analyzed records, the stochastic behavior itself cannot explain the long-term features of the time series, and a deterministic positive trend is the most likely explanation.
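Of the nonparametric methods named above, the Mann-Kendall test is compact enough to sketch. The version below uses the usual normal approximation with continuity correction and omits the tie correction, so it assumes (mostly) untied values.

```python
import numpy as np
from math import erf, sqrt

def mann_kendall(x):
    """Mann-Kendall trend test (normal approximation, no tie correction).
    Returns the S statistic, the Z score, and a two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S = sum over all pairs i < j of sign(x_j - x_i)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0          # variance of S under H0
    z = (s - np.sign(s)) / sqrt(var) if s != 0 else 0.0   # continuity correction
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))  # two-sided
    return s, z, p
```

A monotonically increasing record yields a large positive S and a tiny p-value, while a series with no overall tendency (e.g. a symmetric up-then-down ramp) yields S near zero.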

  4. A Method to Separate Stochastic and Deterministic Information from Electrocardiograms

    CERN Document Server

    Gutíerrez, R M

    2004-01-01

    In this work we present a new idea for developing a method to separate the stochastic and deterministic information contained in an electrocardiogram (ECG), which may provide new sources of information for diagnostic purposes. We assume that the ECG contains information corresponding to many different processes related to the cardiac activity, as well as contamination from different sources related to the measurement procedure and the nature of the observed system itself. The method starts with the application of an improved archetypal analysis to separate the aforementioned stochastic and deterministic information. From the stochastic point of view we analyze Rényi entropies, and with respect to the deterministic perspective we calculate the autocorrelation function and the corresponding correlation time. We show that healthy and pathologic information may be stochastic and/or deterministic, can be identified by different measures, and can be located in different parts of the ECG.

  5. Non deterministic finite automata for power systems fault diagnostics

    Directory of Open Access Journals (Sweden)

    LINDEN, R.

    2009-06-01

    Full Text Available This paper introduces an application based on finite non-deterministic automata for power systems diagnosis. Automata for the simpler faults are presented and the proposed system is compared with an established expert system.
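The idea of diagnosing faults with a nondeterministic finite automaton can be illustrated by simulating the NFA on a sequence of observed events via the subset construction. The automaton, event names and fault pattern below are hypothetical illustrations, not the automata of the paper.

```python
def nfa_accepts(delta, start, accepting, events):
    """Subset-construction simulation of a nondeterministic finite automaton.
    delta maps (state, event) to a set of successor states; a missing entry
    means that branch of the computation dies."""
    current = {start}
    for e in events:
        nxt = set()
        for q in current:
            nxt |= delta.get((q, e), set())
        current = nxt
    return bool(current & accepting)
```

For example, a toy diagnosis automaton (hypothetical states and events) that flags "an overcurrent eventually followed by a breaker trip" can branch nondeterministically on each overcurrent:

```python
delta = {
    ("idle", "load"): {"idle"}, ("idle", "trip"): {"idle"},
    ("idle", "overcurrent"): {"idle", "armed"},   # nondeterministic branch
    ("armed", "trip"): {"fault"},
    ("fault", "load"): {"fault"}, ("fault", "trip"): {"fault"},
    ("fault", "overcurrent"): {"fault"},
}
```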

  6. A proof system for asynchronously communicating deterministic processes

    NARCIS (Netherlands)

    de Boer, F.S.; van Hulst, M.

    1994-01-01

    We introduce in this paper new communication and synchronization constructs which allow deterministic processes, communicating asynchronously via unbounded FIFO buffers, to cope with an indeterminate environment. We develop for the resulting parallel programming language, which subsumes deterministi

  7. Deterministic Versus Stochastic Interpretation of Continuously Monitored Sewer Systems

    DEFF Research Database (Denmark)

    Harremoës, Poul; Carstensen, Niels Jacob

    1994-01-01

    An analysis has been made of the uncertainty of input parameters to deterministic models for sewer systems. The analysis reveals a very significant uncertainty, which can be decreased, but not eliminated, and has to be considered for engineering application. Stochastic models have a potential for dealing with these uncertainties. Interpretations of hydraulic performance, a model of daily variation and simple unit hydrographs are used for illustration. The development is to use grey-box stochastic models, which combine the virtues of deterministic and stochastic features.

  8. Deterministic operations research models and methods in linear optimization

    CERN Document Server

    Rader, David J

    2013-01-01

    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations resear

  9. Universal quantification for deterministic chaos in dynamical systems

    OpenAIRE

    Selvam, A. Mary

    2000-01-01

    A cell dynamical system model for deterministic chaos enables precise quantification of the round-off error growth, i.e., deterministic chaos in digital computer realizations of mathematical models of continuum dynamical systems. The model predicts the following: (a) The phase space trajectory (strange attractor), when resolved as a function of the computer accuracy, has intrinsic logarithmic spiral curvature with the quasiperiodic Penrose tiling pattern for the internal structure. (b) The unive...

  10. An alternative approach to measure similarity between two deterministic transient signals

    Science.gov (United States)

    Shin, Kihong

    2016-06-01

    In many practical engineering applications, it is often required to measure the similarity of two signals to gain insight into the condition of a system. For example, an application that monitors machinery can regularly measure the vibration signal and compare it to a healthy reference signal in order to monitor whether or not any fault symptom is developing. Also, in modal analysis, a frequency response function (FRF) from a finite element model (FEM) is often compared with an FRF from experimental modal analysis. Many different similarity measures are applicable in such cases, and correlation-based similarity measures are perhaps the most frequently used, for example the correlation coefficient in the time domain and the frequency response assurance criterion (FRAC) in the frequency domain. Although correlation-based similarity measures may be particularly useful for random signals because they are based on probability and statistics, we frequently deal with signals that are largely deterministic and transient. Thus, it may be useful to develop another similarity measure that properly takes the characteristics of deterministic transient signals into account. In this paper, an alternative approach to measuring the similarity between two deterministic transient signals is proposed. This newly proposed similarity measure is based on the fictitious-system frequency response function, and it consists of a magnitude similarity and a shape similarity. Finally, a few examples are presented to demonstrate the use of the proposed similarity measure.
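The two correlation-based measures mentioned above can be stated concretely. The sketch below gives the standard definitions (the time-domain correlation coefficient, and FRAC as commonly defined for frequency response functions); it is not the paper's proposed magnitude/shape measure.

```python
import numpy as np

def time_corr(x, y):
    # Correlation coefficient between two time-domain signals.
    return np.corrcoef(x, y)[0, 1]

def frac(H1, H2):
    # Frequency Response Assurance Criterion between two FRFs (complex arrays):
    # |H1^H H2|^2 / ((H1^H H1)(H2^H H2)); equals 1 for proportional FRFs.
    num = abs(np.vdot(H1, H2)) ** 2
    den = np.vdot(H1, H1).real * np.vdot(H2, H2).real
    return num / den
```

Note that both measures are insensitive to an overall scaling of one signal, which is part of the motivation for a separate magnitude similarity in the paper's approach.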

  11. Deterministic chaos in the pitting phenomena of passivable alloys

    International Nuclear Information System (INIS)

    It was shown that electrochemical noise recorded under stable pitting conditions exhibits deterministic (even chaotic) features. The occurrence of deterministic behaviors depends on the material/solution severity. Thus, electrolyte composition (the [Cl-]/[NO3-] ratio, pH), passive film thickness or alloy composition can change the deterministic features. A single pit is sufficient to observe deterministic behaviors. The electrochemical noise signals are non-stationary, which is a hint of a change in pit behavior with time (propagation speed or mean). Modifications of the electrolyte composition reveal transitions between random and deterministic behaviors. Spontaneous transitions between deterministic behaviors with different features (bifurcations) are also evidenced. Such bifurcations highlight various routes to chaos. The routes to chaos and the features of the chaotic signals suggest models (both continuous and discontinuous models are proposed) of the electrochemical mechanisms inside a pit that describe the experimental behaviors and the effect of the various parameters quite well. The analysis of the chaotic behavior of a pit leads to a better understanding of propagation mechanisms and gives tools for pit monitoring. (author)

  12. Seismic hazard in Romania associated to Vrancea subcrustal source Deterministic evaluation

    CERN Document Server

    Radulian, M; Moldoveanu, C L; Panza, G F; Vaccari, F

    2002-01-01

    Our study presents an application of the deterministic approach to the particular case of Vrancea intermediate-depth earthquakes to show how efficient numerical synthesis is in predicting realistic ground motion, and how some striking peculiarities of the observed intensity maps are properly reproduced. The deterministic approach proposed by Costa et al. (1993) is particularly useful for computing seismic hazard in Romania, where the most destructive effects are caused by the intermediate-depth earthquakes generated in the Vrancea region. Vrancea is unique among the seismic sources of the world because of its striking peculiarities: the extreme concentration of seismicity with a remarkable invariance of the foci distribution, the unusually high rate of strong shocks (an average frequency of 3 events with magnitude greater than 7 per century) inside an exceptionally narrow focal volume, the predominance of a reverse faulting mechanism with the T-axis almost vertical and the P-axis almost horizontal and the mo...

  13. Longevity, Growth and Intergenerational Equity - The Deterministic Case

    DEFF Research Database (Denmark)

    Andersen, Torben M.; Gestsson, Marias Halldór

    We develop an overlapping generations model in continuous time which encompasses different generations with different mortality rates and thus longevity. Allowing for both trend increases in longevity and productivity, we address the issue of intergenerational equity under a utilitarian criterion

  14. Longevity, Growth and Intergenerational Equity: The Deterministic Case

    DEFF Research Database (Denmark)

    Andersen, Torben M.; Gestsson, Marias Halldór

    2016-01-01

    Challenges raised by aging (increasing longevity) have prompted policy debates featuring policy proposals justified by reference to some notion of intergenerational equity. However, very different policies ranging from presavings to indexation of retirement ages have been justified in this way. We...

  15. Deterministic effects of the ionizing radiation

    International Nuclear Information System (INIS)

    Full text: The deterministic effect is the somatic damage that appears when the radiation dose exceeds a minimum value, the 'threshold dose'. Above this threshold dose, the frequency and seriousness of the damage increase with the dose received. Sixteen percent of patients younger than 15 years of age with a diagnosis of cancer have the possibility of a cure. The consequences of cancer treatment in children are very serious, as they are physically and emotionally developing. The seriousness of the delayed effects of radiation therapy depends on three factors: a) the treatment (dose of radiation, schedule of treatment, time of treatment, beam energy, treatment volume, distribution of the dose, simultaneous chemotherapy, etc.); b) the patient (state of development, patient predisposition, inherent sensitivity of the tissue, the presence of other alterations, etc.); c) the tumor (degree of extension or infiltration, mechanical effects, etc.). The effect of radiation on normal tissue is related to cellular activity and the maturity of the tissue irradiated. Children have a mosaic of tissues in different stages of maturity at different moments in time. On the other hand, each tissue has a different pattern of development, so that sequelae differ between the irradiated tissues of the same patient. We should keep in mind that all tissues are affected to some degree. Bone tissue evidences damage through growth delay and the degree of calcification. Damage is small at 10 Gy; between 10 and 20 Gy growth arrest is partial, whereas at doses larger than 20 Gy growth arrest is complete. The central nervous system is the most affected, because radiation injuries produce demyelination, with or without focal or diffuse areas of necrosis in the white matter, causing character alterations, lower IQ and functional level, neurocognitive impairment, etc. The skin is also affected, showing effects ranging from erythema to ulceration and necrosis, different degrees of

  16. Esophagectomy - minimally invasive

    Science.gov (United States)

    Minimally invasive esophagectomy; Robotic esophagectomy; Removal of the esophagus - minimally invasive; Achalasia - esophagectomy; Barrett esophagus - esophagectomy; Esophageal cancer - esophagectomy - laparoscopic; Cancer of the ...

  17. Deterministic and heuristic models of forecasting spare parts demand

    Directory of Open Access Journals (Sweden)

    Ivan S. Milojević

    2012-04-01

    Full Text Available Knowing the demand for spare parts is the basis for successful spare parts inventory management. Inventory management has two aspects. The first is operational management: acting according to certain models and making decisions in specific situations which could not have been foreseen or are not covered by the models. The second aspect is optimization of the model parameters by means of inventory management. Supply item demand (asset demand) is the expression of customers' needs, in units, at the desired time, and it is one of the most important parameters in inventory management. The basic task of the supply system is demand fulfillment. In practice, demand is expressed through a requisition or request. Given the conditions in which inventory management is considered, demand can be: deterministic or stochastic; stationary or nonstationary; continuous or discrete; satisfied or unsatisfied. The applicable maintenance concept is determined by the technological level of the assets being maintained. For example, it is hard to imagine that the concept of self-maintenance can be applied to assets developed and put into use 50 or 60 years ago. Even less complex concepts cannot be applied to vehicles that only have an engine temperature indicator - one that reacts only when the engine is overheated. This means that the maintenance concepts that can be applied are traditional preventive maintenance and corrective maintenance. In order to be applied to a real system, modeling and simulation methods require a completely regulated system, and that is not the case with this spare parts supply system. Therefore, this method, which also enables model development, cannot be applied. Deterministic forecasting models are almost exclusively related to the concept of preventive maintenance. Maintenance procedures are planned in advance, in accordance with exploitation and time resources. 
Since the timing

  18. Calculating complete and exact Pareto front for multiobjective optimization: a new deterministic approach for discrete problems.

    Science.gov (United States)

    Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel

    2013-06-01

    Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted in which, following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
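
    The paper's ripple-spreading algorithm is specific to route optimization, but what a "complete exact Pareto front" means for a discrete problem is easy to illustrate: when the feasible set is small enough to enumerate, a brute-force dominance filter returns the true front rather than an approximation. The instance below is a made-up toy, not taken from the paper.

```python
def pareto_front(points):
    """Exact Pareto front (minimization in every objective) of a finite set
    of distinct objective vectors, via pairwise dominance checks."""
    front = []
    for p in points:
        dominated = any(
            q != p and all(qi <= pi for qi, pi in zip(q, p))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Toy discrete bi-objective problem: each tuple is (f1, f2), both minimized.
points = [(1, 5), (2, 4), (3, 3), (2, 6), (4, 4), (5, 1)]
print(pareto_front(points))  # [(1, 5), (2, 4), (3, 3), (5, 1)]
```

    The quadratic-time filter is exact but only practical for small enumerable sets; the point of the paper's k-best machinery is to reach the same exact front without enumerating the whole feasible set.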

  19. Ceramage – A Ceramo Polymer Restoration to be Used as an Alternative to Ceramics; As an Indirect Restorative Material in a Minimally Invasive Cosmetic Dentistry Protocol - A Case Report

    OpenAIRE

    Thumati, Prafulla; Reddy, K. Raghavendra

    2013-01-01

    Tooth wear and discoloration is a normal process in the lifetime of an individual. Severe wear and discoloration can result in cosmetic concern and loss of vertical dimension. These problems can best be treated by providing a fixed prosthesis. This case report describes management using the concept of Minimally Invasive Cosmetic Dentistry (MICD) with a ceramopolymer as the restorative material. Computer Guided Occlusal Analysis (CGOA) was used for establishing uniform occlusal force distribution. Case ...

  20. A NEW DETERMINISTIC FORMULATION FOR DYNAMIC STOCHASTIC PROGRAMMING PROBLEMS AND ITS NUMERICAL COMPARISON WITH OTHERS

    Institute of Scientific and Technical Information of China (English)

    陈志平

    2003-01-01

    A new deterministic formulation, called the conditional expectation formulation, is proposed for dynamic stochastic programming problems in order to overcome some disadvantages of existing deterministic formulations. We then check the impact of the new deterministic formulation and two other deterministic formulations on the corresponding problem size, nonzero elements and solution time by solving some typical dynamic stochastic programming problems with different interior point algorithms. Numerical results show the advantage and application of the new deterministic formulation.

  1. The Cover Time of Deterministic Random Walks for General Transition Probabilities

    OpenAIRE

    Shiraga, Takeharu

    2016-01-01

    The deterministic random walk is a deterministic process analogous to a random walk. While there are some results on the cover time of the rotor-router model, which is a deterministic random walk corresponding to a simple random walk, nothing is known about the cover time of deterministic random walks emulating general transition probabilities. This paper is concerned with the SRT-router model with multiple tokens, which is a deterministic process coping with general transition probabilities ...

  2. Deterministic sensing matrices in compressive sensing: a survey.

    Science.gov (United States)

    Nguyen, Thu L N; Shin, Yoan

    2013-01-01

    Compressive sensing is a sampling method which provides a new approach to efficient signal compression and recovery by exploiting the fact that a sparse signal can be suitably reconstructed from very few measurements. One of the main concerns in compressive sensing is the construction of the sensing matrices. While random sensing matrices have been widely studied, only a few deterministic sensing matrices have been considered. These matrices are highly desirable because their structure allows fast implementation with reduced storage requirements. In this paper, a survey of deterministic sensing matrices for compressive sensing is presented. We introduce a basic problem in compressive sensing and some disadvantages of random sensing matrices. Some recent results on the construction of deterministic sensing matrices are discussed.
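
    As a concrete illustration of a deterministic construction (DeVore's polynomial-based binary matrices are among the best-known examples; the parameters below are chosen for illustration), the sketch builds a p² × p^(r+1) binary matrix whose columns are incidence vectors of polynomial graphs over Z_p. Since two distinct polynomials of degree at most r agree in at most r points, distinct columns overlap in at most r positions.

```python
import itertools
import numpy as np

def devore_matrix(p, r):
    """Binary sensing matrix in the style of DeVore's deterministic
    construction: rows are indexed by pairs (x, y) in Z_p x Z_p, one
    column per polynomial of degree <= r over Z_p, with a 1 exactly
    where P(x) = y (mod p)."""
    cols = []
    for coeffs in itertools.product(range(p), repeat=r + 1):
        col = np.zeros(p * p, dtype=int)
        for x in range(p):
            y = sum(c * x**k for k, c in enumerate(coeffs)) % p
            col[x * p + y] = 1
        cols.append(col)
    return np.array(cols).T  # shape (p^2, p^(r+1))

A = devore_matrix(5, 2)
print(A.shape)  # (25, 125)
```

    Each column has exactly p ones, so after dividing columns by √p the coherence is at most r/p (0.4 here), which is the property that makes such matrices useful for sparse recovery.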

  3. Estimating the epidemic threshold on networks by deterministic connections

    Energy Technology Data Exchange (ETDEWEB)

    Li, Kezan, E-mail: lkzzr@sohu.com; Zhu, Guanghu [School of Mathematics and Computing Science, Guilin University of Electronic Technology, Guilin 541004 (China); Fu, Xinchu [Department of Mathematics, Shanghai University, Shanghai 200444 (China); Small, Michael [School of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009 (Australia)

    2014-12-15

    For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. Nonetheless, in these models, generic nonuniform stochastic connections and heterogeneous community structure are also considered. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since these deterministic connections are easier to detect than those stochastic connections, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
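
    The paper derives bounds tailored to mixed deterministic/stochastic connections; the classic spectral result it builds on is, however, easy to state: for SIS-type dynamics on a graph with adjacency matrix A, the epidemic threshold scales as 1/λ_max(A). A minimal numerical sketch on a toy graph (my own example, not from the paper):

```python
import numpy as np

# Toy graph: a triangle (0-1-2) with a pendant node 3 attached to node 0.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
lam_max = max(np.linalg.eigvalsh(A))   # spectral radius of the adjacency matrix
threshold = 1.0 / lam_max              # classic SIS epidemic threshold estimate
print(round(threshold, 3))
```

    The spectral radius always lies between the average and the maximum degree, so denser connectivity lowers the threshold; the paper's contribution is bounding λ_max using only the deterministic part of the connectivity.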

  4. MIMO capacity for deterministic channel models: sublinear growth

    CERN Document Server

    Bentosela, Francois; Marchetti, Nicola

    2012-01-01

    This is the second paper by the authors in a series concerned with the development of a deterministic model for the transfer matrix of a MIMO system. Starting from the Maxwell equations, we described in [BCFM] the generic structure of such a deterministic transfer matrix. In the current paper we apply the results of [BCFM] in order to study the (Shannon-Foschini) capacity behavior of a MIMO system as a function of the deterministic spread function of the environment and the number of transmitting and receiving antennas. The antennas are assumed to fill a given, fixed volume. Under some generic assumptions, we prove that the capacity grows much more slowly than linearly with the number of antennas. These results reinforce previous heuristic results obtained from statistical models of the transfer matrix, which also predict sublinear behavior.

  5. Ergodicity of Truncated Stochastic Navier Stokes with Deterministic Forcing and Dispersion

    Science.gov (United States)

    Majda, Andrew J.; Tong, Xin T.

    2016-10-01

    Turbulence in idealized geophysical flows is a very rich and important topic. The anisotropic effects of explicit deterministic forcing, dispersive effects from rotation due to the β-plane and F-plane, and topography together with random forcing all combine to produce a remarkable number of realistic phenomena. These effects have been studied through careful numerical experiments in the truncated geophysical models. These important results include transitions between coherent jets and vortices, and direct and inverse turbulence cascades as parameters are varied, and it is a contemporary challenge to explain these diverse statistical predictions. Here we contribute to these issues by proving with full mathematical rigor that for any values of the deterministic forcing, the β- and F-plane effects and topography, with minimal stochastic forcing, there is geometric ergodicity for any finite Galerkin truncation. This means that there is a unique smooth invariant measure which attracts all statistical initial data at an exponential rate. In particular, this rigorous statistical theory guarantees that there are no bifurcations to multiple stable and unstable statistical steady states as geophysical parameters are varied, in contrast to claims in the applied literature. The proof utilizes a new statistical Lyapunov function to account for enstrophy exchanges between the statistical mean and the variance fluctuations due to the deterministic forcing. It also requires careful proofs of hypoellipticity with geophysical effects and uses geometric control theory to establish reachability. To illustrate the necessity of these conditions, a two-dimensional example is developed which has the square of the Euclidean norm as the Lyapunov function and is hypoelliptic with nonzero noise forcing, yet fails to be reachable or ergodic.

  6. A model of deterministic detector with dynamical decoherence

    OpenAIRE

    Lee, Jae Weon; Dmitri V. Averin; Benenti, Giuliano; Shepelyansky, Dima L.

    2005-01-01

    We discuss a deterministic model of detector coupled to a two-level system (a qubit). The detector is a quasi-classical object whose dynamics is described by the kicked rotator Hamiltonian. We show that in the regime of quantum chaos the detector acts as a chaotic bath and induces decoherence of the qubit. We discuss the dephasing and relaxation rates and demonstrate that several features of Ohmic baths can be reproduced by our fully deterministic model. Moreover, we show that, for strong eno...

  7. Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes

    DEFF Research Database (Denmark)

    Starke, Jens; Reichert, Christian; Eiswirth, Markus;

    2007-01-01

    be derived rigorously for low-pressure conditions from the microscopic model, which is characterized as a moderately interacting many-particle system, in the limit as the particle number tends to infinity. Also the mesoscopic model is given by a many-particle system. However, the particles move on a lattice......, such that in contrast to the microscopic model the spatial resolution is reduced. The derivation of deterministic limit equations is in correspondence with the successful description of experiments under low-pressure conditions by deterministic reaction-diffusion equations while for intermediate pressures phenomena...

  8. MIMO capacity for deterministic channel models: sublinear growth

    DEFF Research Database (Denmark)

    Bentosela, Francois; Cornean, Horia; Marchetti, Nicola

    2013-01-01

    This is the second paper by the authors in a series concerned with the development of a deterministic model for the transfer matrix of a MIMO system. In our previous paper, we started from the Maxwell equations and described the generic structure of such a deterministic transfer matrix...... some generic assumptions, we prove that the capacity grows much more slowly than linearly with the number of antennas. These results reinforce previous heuristic results obtained from statistical models of the transfer matrix, which also predict a sublinear behavior....

  9. Deterministic approaches for noncoherent communications with chaotic carriers

    Institute of Scientific and Technical Information of China (English)

    Liu Xiongying; Qiu Shuisheng; Francis. C. M. Lau

    2005-01-01

    Two problems are addressed. The first is the noise decontamination of chaotic carriers using a deterministic approach to reconstruct pseudo-trajectories; the second is the design of communication schemes with chaotic carriers. After presenting our deterministic noise decontamination algorithm, a conventional chaos shift keying (CSK) communication system is applied. The Euclidean distance in phase space between the noisy trajectory and the decontaminated trajectory can be used to detect the transmitted symbol noncoherently, simply and effectively. It is shown that this detection method can achieve a bit error rate performance comparable to other noncoherent systems.

  10. Sludge minimization technologies - an overview

    Energy Technology Data Exchange (ETDEWEB)

    Oedegaard, Hallvard

    2003-07-01

    The management of wastewater sludge from wastewater treatment plants represents one of the major challenges in wastewater treatment today. In many cases the cost of sludge treatment amounts to more than the cost of treating the liquid stream. Therefore the focus on and interest in sludge minimization is steadily increasing. The paper gives an overview of sludge minimization (sludge mass reduction) options. It is demonstrated that sludge minimization may be a result of reduced sludge production and/or of disintegration processes that may take place both in the wastewater treatment stage and in the sludge stage. Various sludge disintegration technologies for sludge minimization are discussed, including mechanical methods (focusing on the stirred ball mill, high-pressure homogenizer and ultrasonic disintegrator), chemical methods (focusing on the use of ozone), physical methods (focusing on thermal and thermal/chemical hydrolysis) and biological methods (focusing on enzymatic processes). (author)

  11. Deterministic mathematical morphology for CAD/CAM

    OpenAIRE

    Sarabia Pérez, Rubén; Jimeno Morenilla, Antonio; Molina Carmona, Rafael

    2014-01-01

    Purpose – The purpose of this paper is to present a new geometric model based on the mathematical morphology paradigm, specialized to provide determinism to the classic morphological operations. The determinism is needed to model dynamic processes that require an order of application, as is the case for designing and manufacturing objects in CAD/CAM environments. Design/methodology/approach – The basic trajectory-based operation is the basis of the proposed morphological specialization. This ...

  12. An Efficient and Flexible Deterministic Framework for Multithreaded Programs

    Institute of Scientific and Technical Information of China (English)

    卢凯; 周旭; 王小平; 陈沉

    2015-01-01

    Determinism is very useful to multithreaded programs in debugging, testing, etc. Many deterministic approaches have been proposed, such as deterministic multithreading (DMT) and deterministic replay. However, these systems are either inefficient or target a single purpose, which is not flexible. In this paper, we propose an efficient and flexible deterministic framework for multithreaded programs. Our framework implements determinism in two steps: relaxed determinism and strong determinism. Relaxed determinism solves data races efficiently by using a proper weak memory consistency model. After that, we implement strong determinism by solving lock contentions deterministically. Since we can apply different approaches for these two steps independently, our framework provides a spectrum of deterministic choices, including a nondeterministic system (fast), a weak deterministic system (fast and conditionally deterministic), a DMT system, and a deterministic replay system. Our evaluation shows that the DMT configuration of this framework can even outperform a state-of-the-art DMT system.

  13. Comparison of some classification algorithms based on deterministic and nondeterministic decision rules

    KAUST Repository

    Delimata, Paweł

    2010-01-01

    We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. The first kind, called inhibitory rules, block only one decision value (i.e., they have all but one of the possible decisions on their right-hand sides). Contrary to this, any rule of the second kind, called a bounded nondeterministic rule, can have only a few decisions on its right-hand side. We show that both kinds of rules can be used to improve the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but the direct generation of rules is not required. Instead, for any new object the considered algorithms efficiently extract from a given decision table some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in the construction of rule-based classifiers. We include the results of experiments showing that by combining rule-based classifiers based on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality. © 2010 Springer-Verlag.

  14. Minimal Exit Trajectories with Optimum Correctional Manoeuvres

    Directory of Open Access Journals (Sweden)

    T. N. Srivastava

    1980-10-01

    Full Text Available Minimal exit trajectories with optimum correctional manoeuvres for a rocket between two coplanar, non-coaxial elliptic orbits in an inverse square gravitational field have been investigated. The case of trajectories with no correctional manoeuvres has been analysed. Finally, minimal exit trajectories through specified orbital terminals are discussed and the problem of ref. (2) is derived as a particular case.

  15. Classification and Unification of the Microscopic Deterministic Traffic Models with Identical Drivers

    OpenAIRE

    Yang, Bo; Monterola, Christopher

    2015-01-01

    We show that all existing deterministic microscopic traffic models with identical drivers (including both two-phase and three-phase models) can be understood as special cases of a master model by expansion around well-defined ground states. This allows two traffic models to be compared in a well-defined way. The three-phase models are characterized by the vanishing of the leading orders of the expansion within a certain density range, and as an example the popular intelligent driver model (IDM) is...

  16. Genetic algorithm-based wide-band deterministic maximum likelihood direction finding algorithm

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Wide-band direction finding is one of the hot and difficult tasks in array signal processing. This paper generalizes the narrow-band deterministic maximum likelihood direction finding algorithm to the wide-band case, constructing an objective function and then utilizing a genetic algorithm for nonlinear global optimization. The direction of arrival is estimated without preprocessing of the array data, so the algorithm eliminates the effect of pre-estimation on the final estimate. The algorithm is applied to a uniform linear array, and extensive simulation results prove its efficacy. In the process of simulation, we obtain the relation between the estimation error and the parameters of the genetic algorithm.
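
    The wideband DML objective itself requires array data, but the genetic-algorithm machinery the abstract refers to can be sketched generically. The toy below maximizes a single-peak surrogate "likelihood" over one angle-like parameter; the fitness function, population size and operators are all illustrative assumptions, not the paper's.

```python
import random

random.seed(1)

def fitness(theta):
    # Toy single-peak "likelihood" surface, maximal at theta = 0.7 rad.
    return -(theta - 0.7) ** 2

def ga(pop_size=40, gens=60, lo=-1.5, hi=1.5, pm=0.2):
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                 # keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = 0.5 * (a + b)                    # arithmetic crossover
            if random.random() < pm:
                child += random.gauss(0.0, 0.1)      # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
print(round(best, 3))
```

    In the paper's setting the scalar parameter would be replaced by a vector of candidate arrival angles and the fitness by the deterministic ML objective evaluated on the array snapshots.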

  17. The Role of Auxiliary Variables in Deterministic and Deterministic-Stochastic Spatial Models of Air Temperature in Poland

    Science.gov (United States)

    Szymanowski, Mariusz; Kryza, Maciej

    2015-11-01

    Our study examines the role of auxiliary variables in the process of spatial modelling and mapping of climatological elements, with air temperature in Poland used as an example. The multivariable algorithms are the most frequently applied for spatialization of air temperature, and their results in many studies are proved to be better in comparison to those obtained by various one-dimensional techniques. In most of the previous studies, two main strategies were used to perform multidimensional spatial interpolation of air temperature. First, it was accepted that all variables significantly correlated with air temperature should be incorporated into the model. Second, it was assumed that the more spatial variation of air temperature was deterministically explained, the better was the quality of spatial interpolation. The main goal of the paper was to examine both above-mentioned assumptions. The analysis was performed using data from 250 meteorological stations and for 69 air temperature cases aggregated on different levels: from daily means to 10-year annual mean. Two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. Another purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models: multiple linear (MLR) and geographically weighted (GWR) method, as well as their extensions to the regression-kriging form, MLRK and GWRK, respectively, were examined. Stepwise regression was used to select variables for the individual models and the cross-validation method was used to validate the results with a special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to rejection of both assumptions considered. Usually, including more than two or three of the most significantly

  18. Minimal surfaces in Riemannian manifolds

    International Nuclear Information System (INIS)

    A multiple solution to the Plateau problem in a Riemannian manifold is established. In Sⁿ, the existence of two solutions to this problem is obtained. The Morse-Tompkins-Shiffman theorem is extended to the case when the ambient space admits no minimal sphere. (author). 20 refs

  19. Scheme for deterministic Bell-state-measurement-free quantum teleportation

    OpenAIRE

    Yang, Ming; Cao, Zhuo-Liang

    2004-01-01

    A deterministic teleportation scheme for unknown atomic states is proposed in cavity QED. The Bell state measurement is not needed in the teleportation process, and the success probability can reach 1.0. In addition, the current scheme is insensitive to the cavity decay and thermal field.

  20. Enhanced deterministic phase retrieval using a partially developed speckle field

    DEFF Research Database (Denmark)

    Almoro, Percival F.; Waller, Laura; Agour, Mostafa;

    2012-01-01

    A technique for enhanced deterministic phase retrieval using a partially developed speckle field (PDSF) and a spatial light modulator (SLM) is demonstrated experimentally. A smooth test wavefront impinges on a phase diffuser, forming a PDSF that is directed to a 4f setup. Two defocused speckle in...

  1. A Deterministic Annealing Approach to Clustering AIRS Data

    Science.gov (United States)

    Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander

    2012-01-01

    We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method called the Deterministic Annealing technique.
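
    Deterministic annealing (in Rose's sense) replaces hard cluster assignments with Gibbs weights and slowly raises an inverse temperature β, so that clusters split through phase transitions instead of depending on initialization. A minimal one-dimensional sketch (data, schedule and parameters are illustrative, not from the paper):

```python
import numpy as np

def deterministic_annealing(x, k=2, beta=0.1, beta_max=1000.0, rate=1.5):
    """Minimal deterministic-annealing clustering sketch (after Rose):
    alternate soft (Gibbs) assignments and centroid updates while
    annealing the inverse temperature beta."""
    rng = np.random.default_rng(0)
    centers = x.mean() + 1e-3 * rng.standard_normal(k)
    while beta < beta_max:
        d = (x[:, None] - centers[None, :]) ** 2          # squared distances
        w = np.exp(-beta * (d - d.min(axis=1, keepdims=True)))
        p = w / w.sum(axis=1, keepdims=True)              # soft assignments
        centers = (p * x[:, None]).sum(axis=0) / p.sum(axis=0)
        beta *= rate                                      # cool the system
    return np.sort(centers)

# Two well-separated 1-D clusters around 0.1 and 5.1.
x = np.concatenate([0.01 * np.arange(20), 5.0 + 0.01 * np.arange(20)])
centers = deterministic_annealing(x)
print(centers)
```

    At low β both centroids sit at the global mean; once β exceeds a critical value set by the data variance, the symmetric solution becomes unstable and the centroids separate toward the two cluster means.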

  2. Simulation of Quantum Computation : A Deterministic Event-Based Approach

    NARCIS (Netherlands)

    Michielsen, K.; Raedt, K. De; Raedt, H. De

    2005-01-01

    We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and

  3. From deterministic cellular automata to coupled map lattices

    Science.gov (United States)

    García-Morales, Vladimir

    2016-07-01

    A general mathematical method is presented for the systematic construction of coupled map lattices (CMLs) out of deterministic cellular automata (CAs). The entire CA rule space is addressed by means of a universal map for CAs that we have recently derived and that is not dependent on any freely adjustable parameters. The CMLs thus constructed are termed real-valued deterministic cellular automata (RDCA) and encompass all deterministic CAs in rule space in the asymptotic limit κ → 0 of a continuous parameter κ. Thus, RDCAs generalize CAs in such a way that they constitute CMLs when κ is finite and nonvanishing. In the limit κ → ∞ all RDCAs are shown to exhibit a global homogeneous fixed point that attracts all initial conditions. A new bifurcation is discovered for RDCAs and its location is exactly determined from the linear stability analysis of the global quiescent state. In this bifurcation, fuzziness gradually begins to intrude into a purely deterministic CA-like dynamics. The mathematical method presented makes it possible to gain insight into some highly nontrivial behavior found after the bifurcation.
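
    The universal CA map and its κ-smoothing are specific to the paper, but the discrete starting point, a deterministic elementary CA, is compact enough to show. The sketch below applies one synchronous update of an arbitrary Wolfram rule; the RDCA construction would then replace this table lookup with a real-valued map depending on κ.

```python
def step(cells, rule):
    """One synchronous update of an elementary CA (periodic boundary).
    `rule` is the Wolfram rule number; bit k of `rule` gives the new
    state for the neighborhood whose binary encoding is k."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

state = [0, 0, 0, 1, 0, 0, 0]
print(step(state, 110))  # [0, 0, 1, 1, 0, 0, 0]
```

    Because the whole rule is packed into one integer, the same function covers all 256 elementary rules, mirroring how a single universal map can address the entire rule space.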

  4. Algebra and Theory of Order-Deterministic Pomsets

    NARCIS (Netherlands)

    Rensink, A.

    1996-01-01

    This paper is about partially ordered multisets (pomsets for short). We investigate a particular class of pomsets that we call order-deterministic, properly including all partially ordered sets, which satisfies a number of interesting properties: among other things, it forms a distributive lattice u

  5. Using a satisfiability solver to identify deterministic finite state automata

    NARCIS (Netherlands)

    Heule, M.J.H.; Verwer, S.

    2009-01-01

    We present an exact algorithm for identification of deterministic finite automata (DFA) which is based on satisfiability (SAT) solvers. Despite the size of the low level SAT representation, our approach seems to be competitive with alternative techniques. Our contributions are threefold: First, we p

  6. Deterministic control of ferroelastic switching in multiferroic materials

    NARCIS (Netherlands)

    Balke, N.; Choudhury, S.; Jesse, S.; Huijben, M.; Chu, Y.-H.; Baddorf, A.P.; Chen, L.Q.; Ramesh, R.; Kalinin, S.V.

    2009-01-01

    Multiferroic materials showing coupled electric, magnetic and elastic orderings provide a platform to explore complexity and new paradigms for memory and logic devices. Until now, the deterministic control of non-ferroelectric order parameters in multiferroics has been elusive. Here, we demonstrate

  7. Deterministic superresolution with coherent states at the shot noise limit

    DEFF Research Database (Denmark)

    Distante, Emanuele; Jezek, Miroslav; Andersen, Ulrik L.

    2013-01-01

    detection approaches. Here we show that superresolving phase measurements at the shot noise limit can be achieved without resorting to nonclassical optical states or to low-efficiency detection processes. Using robust coherent states of light, high-efficiency homodyne detection, and a deterministic...

  8. Controllability of deterministic networks with the identical degree sequence.

    Directory of Open Access Journals (Sweden)

    Xiujuan Ma

    Full Text Available Controlling complex networks is an essential problem in network science and engineering. Recent advances indicate that the controllability of a complex network depends on the network's topology. Liu, Barabási et al. speculated that the degree distribution was one of the most important factors affecting controllability for arbitrary complex directed networks with random link weights. In this paper, we analyse the effect of the degree distribution on the controllability of unweighted, undirected deterministic networks. We introduce a class of deterministic networks with identical degree sequence, called (x,y)-flowers. We analyse the controllability of two deterministic networks (the (1,3)-flower and the (2,2)-flower) in detail using exact controllability theory and give exact results for the minimum number of driver nodes for the two networks. In simulation, we compare the controllability of (x,y)-flower networks. Our results show that the family of (x,y)-flower networks have the same degree sequence, but their controllability is totally different. So the degree distribution itself is not sufficient to characterize the controllability of unweighted, undirected deterministic networks.
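
    Exact controllability theory gives the minimum number of driver nodes as the largest geometric multiplicity over the eigenvalues of the network matrix, N_D = max_λ (N − rank(λI − A)). A numerical sketch on a small star graph (my own example, not one of the (x,y)-flowers):

```python
import numpy as np

def min_driver_nodes(A, tol=1e-8):
    """Minimum number of driver nodes via exact controllability:
    the maximum geometric multiplicity over the eigenvalues of A."""
    n = A.shape[0]
    best = 0
    for lam in np.linalg.eigvals(A):
        mult = n - np.linalg.matrix_rank(lam * np.eye(n) - A, tol=tol)
        best = max(best, mult)
    return best

# Star graph K_{1,3}: eigenvalue 0 has multiplicity 2, so N_D = 2.
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
print(min_driver_nodes(A))  # 2
```

    Intuitively, the three leaves of the star are structurally interchangeable, so a single input cannot steer them independently; two of them must be driven directly.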

  9. Line and lattice networks under deterministic interference models

    NARCIS (Netherlands)

    Goseling, Jasper; Gastpar, Michael; Weber, Jos H.

    2011-01-01

    Capacity bounds are compared for four different deterministic models of wireless networks, representing four different ways of handling broadcast and superposition in the physical layer. In particular, the transport capacity under a multiple unicast traffic pattern is studied for a 1-D network of re

  10. Reasoning against a deterministic conception of the world

    NARCIS (Netherlands)

    L. Huppes-Cluysenaer

    2011-01-01

    Aristotle situates freedom in nature and slavery in reason. His concept of freedom is inherently connected with the indeterminist belief in a double impulse of the body. The deterministic conception of nature - introduced during Enlightenment - has brought a reversal of this relation: nature is slav

  11. Deterministic teleportation using single-photon entanglement as a resource

    DEFF Research Database (Denmark)

    Björk, Gunnar; Laghaout, Amine; Andersen, Ulrik L.

    2012-01-01

    We outline a proof that teleportation with a single particle is, in principle, just as reliable as with two particles. We thereby hope to dispel the skepticism surrounding single-photon entanglement as a valid resource in quantum information. A deterministic Bell-state analyzer is proposed which...

  12. Deterministic event-based simulation of quantum phenomena

    NARCIS (Netherlands)

    De Raedt, K; De Raedt, H; Michielsen, K

    2005-01-01

    We propose and analyse simple deterministic algorithms that can be used to construct machines that have primitive learning capabilities. We demonstrate that locally connected networks of these machines can be used to perform blind classification on an event-by-event basis, without storing the inform

  13. Simulation of quantum computation : A deterministic event-based approach

    NARCIS (Netherlands)

    Michielsen, K; De Raedt, K; De Raedt, H

    2005-01-01

    We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and

  14. Applicability of deterministic propagation models for mobile operators

    NARCIS (Netherlands)

    Mantel, O.C.; Oostveen, J.C.; Popova, M.P.

    2007-01-01

    Deterministic propagation models based on ray tracing or ray launching are widely studied in the scientific literature, because of their high accuracy. Also many commercial propagation modelling tools include ray-based models. In spite of this, they are hardly used in commercial operations by cellul

  15. The integrated model for solving the single-period deterministic inventory routing problem

    Science.gov (United States)

    Rahim, Mohd Kamarul Irwan Abdul; Abidin, Rahimi; Iteng, Rosman; Lamsali, Hendrik

    2016-08-01

    This paper discusses the problem of efficiently managing inventory and routing in a two-level supply chain system. Vendor Managed Inventory (VMI) is a policy that integrates decisions between a supplier and its customers. We assume that the demand at each customer is stationary and that the warehouse implements VMI. The objective of this paper is to minimize the inventory and transportation costs of the customers in a two-level supply chain. The problem is to determine the delivery quantities, delivery times and routes to the customers for the single-period deterministic inventory routing problem (SP-DIRP) system. As a result, a linear mixed-integer program is developed for solving the SP-DIRP problem.
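At toy scale, the trade-off the SP-DIRP objective captures (routing cost plus inventory cost, with deliveries fixed by deterministic demand under VMI) can be brute-forced rather than solved as a mixed-integer program. The coordinates, demands, and holding cost below are invented for illustration; this is a sketch of the objective, not the paper's formulation:

```python
from itertools import permutations

# Hypothetical single-period instance: a depot and three customers whose
# demands are fully replenished (VMI), served by one vehicle.
points = {"depot": (0, 0), "c1": (4, 0), "c2": (4, 3), "c3": (0, 3)}
demand = {"c1": 10, "c2": 20, "c3": 15}
holding_cost = 0.1  # assumed per-unit inventory holding cost

def dist(a, b):
    (ax, ay), (bx, by) = points[a], points[b]
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def route_cost(order):
    # Length of the tour depot -> customers in `order` -> depot.
    tour = ["depot", *order, "depot"]
    return sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))

# Enumerate all visiting orders (feasible only for tiny instances).
best_order = min(permutations(demand), key=route_cost)
transport = route_cost(best_order)
inventory = holding_cost * sum(demand.values())
print(best_order, round(transport + inventory, 2))
```

A real SP-DIRP model replaces the enumeration with binary routing variables and flow-conservation constraints in a MIP solver.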

  16. Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing

    Science.gov (United States)

    Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier

    2009-01-01

    The increasing technology of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and the Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
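The record's DSA framework minimizes an energy function under an annealing schedule without stochastic sampling. A related, well-known deterministic annealing scheme (Rose-style clustering) illustrates the core idea: soft assignments are deterministic Gibbs updates at a temperature that is gradually lowered, so the optimization escapes poor local minima. This is not the authors' classifier-combination energy; the data, schedule, and starting centers are invented for illustration:

```python
import numpy as np

# Two well-separated 1-D data modes and two nearly degenerate centers.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.3, 50), rng.normal(2, 0.3, 50)])
centers = np.array([0.1, -0.1])

# Deterministic annealing: mean-field (softmax) assignments at
# temperature T, followed by weighted-mean center updates; T decreases.
for T in [4.0, 2.0, 1.0, 0.5, 0.1, 0.01]:
    for _ in range(50):
        d2 = (data[:, None] - centers[None, :]) ** 2
        p = np.exp(-d2 / T)                 # deterministic soft assignment
        p /= p.sum(axis=1, keepdims=True)
        centers = (p * data[:, None]).sum(axis=0) / p.sum(axis=0)

print(np.sort(centers))  # two centers, one near each data mode
```

As T shrinks, the soft assignments harden and the final centers approximate a hard clustering, while the high-temperature phase keeps the early iterations from committing to a bad partition.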

  17. Minimally Invasive Lumbar Discectomy

    Medline Plus

    Full Text Available ... minimally invasive approach in terms of, you know, effectiveness of treating lumbar herniations? Well, the minimally ... think it’s important to stress here that the effectiveness of this procedure is about the same as ...

  18. Minimal distances between SCFTs

    Science.gov (United States)

    Buican, Matthew

    2014-01-01

    We study lower bounds on the minimal distance in theory space between four-dimensional superconformal field theories (SCFTs) connected via broad classes of renormalization group (RG) flows preserving various amounts of supersymmetry (SUSY). For N=1 RG flows, the ultraviolet (UV) and infrared (IR) endpoints of the flow can be parametrically close. On the other hand, for RG flows emanating from a maximally supersymmetric SCFT, the distance to the IR theory cannot be arbitrarily small regardless of the amount of (non-trivial) SUSY preserved along the flow. The case of RG flows from N=2 UV SCFTs is more subtle. We argue that for RG flows preserving the full N=2 SUSY, there are various obstructions to finding examples with parametrically close UV and IR endpoints. Under reasonable assumptions, these obstructions include: unitarity, known bounds on the c central charge derived from associativity of the operator product expansion, and the central charge bounds of Hofman and Maldacena. On the other hand, for RG flows that break N=2→N=1, it is possible to find IR fixed points that are parametrically close to the UV ones. In this case, we argue that if the UV SCFT possesses a single stress tensor, then such RG flows excite of order all the degrees of freedom of the UV theory. Furthermore, if the UV theory has some flavor symmetry, we argue that the UV central charges should not be too large relative to certain parameters in the theory.

  19. Minimal distances between SCFTs

    Energy Technology Data Exchange (ETDEWEB)

    Buican, Matthew [Department of Physics and Astronomy, Rutgers University,Piscataway, NJ 08854 (United States)

    2014-01-28

    We study lower bounds on the minimal distance in theory space between four-dimensional superconformal field theories (SCFTs) connected via broad classes of renormalization group (RG) flows preserving various amounts of supersymmetry (SUSY). For N=1 RG flows, the ultraviolet (UV) and infrared (IR) endpoints of the flow can be parametrically close. On the other hand, for RG flows emanating from a maximally supersymmetric SCFT, the distance to the IR theory cannot be arbitrarily small regardless of the amount of (non-trivial) SUSY preserved along the flow. The case of RG flows from N=2 UV SCFTs is more subtle. We argue that for RG flows preserving the full N=2 SUSY, there are various obstructions to finding examples with parametrically close UV and IR endpoints. Under reasonable assumptions, these obstructions include: unitarity, known bounds on the c central charge derived from associativity of the operator product expansion, and the central charge bounds of Hofman and Maldacena. On the other hand, for RG flows that break N=2→N=1, it is possible to find IR fixed points that are parametrically close to the UV ones. In this case, we argue that if the UV SCFT possesses a single stress tensor, then such RG flows excite of order all the degrees of freedom of the UV theory. Furthermore, if the UV theory has some flavor symmetry, we argue that the UV central charges should not be too large relative to certain parameters in the theory.

  20. Mimicking dark matter in clusters through a non-minimal gravitational coupling with matter: the case of the Abell cluster A586

    CERN Document Server

    Bertolami, Orfeu; Páramos, Jorge

    2011-01-01

    In this work, one shows that a specific non-minimal coupling between the scalar curvature and matter can mimic the dark matter component of relaxed galaxy clusters. For this purpose, one assesses the Abell Cluster A586, a massive strong-lensing nearby relaxed cluster of galaxies in virial equilibrium, where direct mass estimates are possible. The total density, which generally follows a cusped profile and reveals a very small baryonic component, can be effectively described within this framework.

  1. Effect of gamma irradiation on microbial quality of minimally processed carrot and lettuce: A case study in Greater Accra region of Ghana

    Science.gov (United States)

    Frimpong, G. K.; Kottoh, I. D.; Ofosu, D. O.; Larbi, D.

    2015-05-01

    The effect of ionizing radiation on the microbiological quality of minimally processed carrot and lettuce was studied. The aim was to investigate the effect of irradiation as a sanitizing agent on the bacteriological quality of some raw-eaten salad vegetables obtained from retailers in Accra, Ghana. Minimally processed carrot and lettuce were analysed for total viable count, total coliform count and pathogenic organisms. The samples collected were treated and analysed over a 15-day period. The total viable count for carrot ranged from 1.49 to 14.01 log10 cfu/10 g while that of lettuce was 0.70 to 8.57 log10 cfu/10 g. It was also observed that the total coliform count for carrot was 1.46-7.53 log10 cfu/10 g and 0.14-7.35 log10 cfu/10 g for lettuce. The predominant pathogenic organisms identified were Bacillus cereus, Cronobacter sakazakii, Staphylococcus aureus, and Klebsiella spp. It was concluded that 2 kGy was the most effective medium-dose treatment for minimally processed carrot and lettuce.

  2. Effect of gamma irradiation on microbial quality of minimally processed carrot and lettuce: A case study in Greater Accra region of Ghana

    International Nuclear Information System (INIS)

    The effect of ionizing radiation on the microbiological quality of minimally processed carrot and lettuce was studied. The aim was to investigate the effect of irradiation as a sanitizing agent on the bacteriological quality of some raw-eaten salad vegetables obtained from retailers in Accra, Ghana. Minimally processed carrot and lettuce were analysed for total viable count, total coliform count and pathogenic organisms. The samples collected were treated and analysed over a 15-day period. The total viable count for carrot ranged from 1.49 to 14.01 log10 cfu/10 g while that of lettuce was 0.70 to 8.57 log10 cfu/10 g. It was also observed that the total coliform count for carrot was 1.46–7.53 log10 cfu/10 g and 0.14–7.35 log10 cfu/10 g for lettuce. The predominant pathogenic organisms identified were Bacillus cereus, Cronobacter sakazakii, Staphylococcus aureus, and Klebsiella spp. It was concluded that 2 kGy was the most effective medium-dose treatment for minimally processed carrot and lettuce. - Highlights: • The microbial load on the cut-vegetables was beyond the acceptable level for consumption. • The microbial contamination of carrot was found to be higher than that of lettuce. • 2 kGy was most appropriate in treating cut-vegetables for microbial safety

  3. Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes

    International Nuclear Information System (INIS)

    The Canadian SCWR has the potential to achieve the goals that generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06 and different microscopic cross section libraries based on the ENDF/B-VII.0 evaluated nuclear data file have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option for this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give the results most consistent with those of SERPENT. (authors)

  4. Minimizing the number of late jobs in a stochastic setting using a chance constraint

    NARCIS (Netherlands)

    Akker, M. van den; Hoogeveen, H.

    2007-01-01

    We consider the single-machine scheduling problem of minimizing the number of late jobs. We omit here one of the standard assumptions in scheduling theory, namely that the processing times are deterministic. In this scheduling environment, the completion times will be stochastic variables as well.
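For reference, the deterministic counterpart that this record relaxes (minimizing the number of late jobs on a single machine with known processing times, the classic 1||ΣU_j problem) is solved exactly by the Moore-Hodgson algorithm. A sketch, with an invented job set:

```python
import heapq

def min_late_jobs(jobs):
    """Moore-Hodgson algorithm: minimize the number of late jobs on one
    machine with deterministic processing times.
    jobs: list of (processing_time, due_date). Returns the number of
    late jobs in an optimal schedule."""
    jobs = sorted(jobs, key=lambda j: j[1])   # earliest due date first
    scheduled = []                            # max-heap (negated times)
    t = 0                                     # current completion time
    for p, d in jobs:
        t += p
        heapq.heappush(scheduled, -p)
        if t > d:
            # The current set finishes late: discard its longest job,
            # which frees the most machine time.
            t += heapq.heappop(scheduled)     # popped value is -p_max
    return len(jobs) - len(scheduled)

# Hypothetical instance: the first three jobs cannot all be on time
# together with their due dates, so exactly one job ends up late.
print(min_late_jobs([(2, 3), (2, 4), (3, 5), (1, 6)]))  # 1
```

The stochastic variant in the record replaces the hard deadline test `t > d` with a chance constraint on the probability of lateness.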

  5. Probabilistic and deterministic soil structure interaction analysis including ground motion incoherency effects

    Energy Technology Data Exchange (ETDEWEB)

    Elkhoraibi, T., E-mail: telkhora@bechtel.com; Hashemi, A.; Ostadan, F.

    2014-04-01

    Soil-structure interaction (SSI) is a major step for seismic design of massive and stiff structures typical of the nuclear facilities and civil infrastructures such as tunnels, underground stations, dams and lock head structures. Currently most SSI analyses are performed deterministically, incorporating limited range of variation in soil and structural properties and without consideration of the ground motion incoherency effects. This often leads to overestimation of the seismic response particularly the In-Structure-Response Spectra (ISRS) with significant impositions of design and equipment qualification costs, especially in the case of high-frequency sensitive equipment at stiff soil or rock sites. The reluctance to incorporate a more comprehensive probabilistic approach is mainly due to the fact that the computational cost of performing probabilistic SSI analysis even without incoherency function considerations has been prohibitive. As such, bounding deterministic approaches have been preferred by the industry and accepted by the regulatory agencies. However, given the recently available and growing computing capabilities, the need for a probabilistic-based approach to the SSI analysis is becoming clear with the advances in performance-based engineering and the utilization of fragility analysis in the decision making process whether by the owners or the regulatory agencies. This paper demonstrates the use of both probabilistic and deterministic SSI analysis techniques to identify important engineering demand parameters in the structure. A typical nuclear industry structure is used as an example for this study. The system is analyzed for two different site conditions: rock and deep soil. Both deterministic and probabilistic SSI analysis approaches are performed, using the program SASSI, with and without ground motion incoherency considerations. In both approaches, the analysis begins at the hard rock level using the low frequency and high frequency hard rock

  6. Probabilistic and deterministic soil structure interaction analysis including ground motion incoherency effects

    International Nuclear Information System (INIS)

    Soil-structure interaction (SSI) is a major step for seismic design of massive and stiff structures typical of the nuclear facilities and civil infrastructures such as tunnels, underground stations, dams and lock head structures. Currently most SSI analyses are performed deterministically, incorporating limited range of variation in soil and structural properties and without consideration of the ground motion incoherency effects. This often leads to overestimation of the seismic response particularly the In-Structure-Response Spectra (ISRS) with significant impositions of design and equipment qualification costs, especially in the case of high-frequency sensitive equipment at stiff soil or rock sites. The reluctance to incorporate a more comprehensive probabilistic approach is mainly due to the fact that the computational cost of performing probabilistic SSI analysis even without incoherency function considerations has been prohibitive. As such, bounding deterministic approaches have been preferred by the industry and accepted by the regulatory agencies. However, given the recently available and growing computing capabilities, the need for a probabilistic-based approach to the SSI analysis is becoming clear with the advances in performance-based engineering and the utilization of fragility analysis in the decision making process whether by the owners or the regulatory agencies. This paper demonstrates the use of both probabilistic and deterministic SSI analysis techniques to identify important engineering demand parameters in the structure. A typical nuclear industry structure is used as an example for this study. The system is analyzed for two different site conditions: rock and deep soil. Both deterministic and probabilistic SSI analysis approaches are performed, using the program SASSI, with and without ground motion incoherency considerations. In both approaches, the analysis begins at the hard rock level using the low frequency and high frequency hard rock

  7. Developments based on stochastic and determinist methods for studying complex nuclear systems; Developpements utilisant des methodes stochastiques et deterministes pour l'analyse de systemes nucleaires complexes

    Energy Technology Data Exchange (ETDEWEB)

    Giffard, F.X

    2000-05-19

    In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first is the deterministic method, which is applicable in most practical cases but requires approximations. The other is the Monte Carlo method, which does not make these approximations but generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained with the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented, as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)

  8. MOx benchmark calculations by deterministic and Monte Carlo codes

    International Nuclear Information System (INIS)

    Highlights: ► MOx based depletion calculation. ► Methodology to create continuous energy pseudo cross section for lump of minor fission products. ► Mass inventory comparison between deterministic and Monte Carlo codes. ► Higher deviation was found for several isotopes. - Abstract: A depletion calculation benchmark devoted to MOx fuel is an ongoing objective of the OECD/NEA WPRS following the study of depletion calculation concerning UOx fuels. The objective of the proposed benchmark is to compare existing depletion calculations obtained with various codes and data libraries applied to fuel and back-end cycle configurations. In the present work the deterministic code NEWT/ORIGEN-S of the SCALE6 codes package and the Monte Carlo based code MONTEBURNS2.0 were used to calculate the masses of inventory isotopes. The methodology to apply the MONTEBURNS2.0 to this benchmark is also presented. Then the results from both code were compared.

  9. Deterministic error correction for nonlocal spatial-polarization hyperentanglement

    Science.gov (United States)

    Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu

    2016-02-01

    Hyperentanglement is an effective quantum source for quantum communication networks due to its high capacity and low loss rate, and because it allows the quantum state of a particle to be teleported completely. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of errors in the spatial entanglement during transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others for long-distance quantum communication.

  10. On the secure obfuscation of deterministic finite automata.

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, William Erik

    2008-06-01

    In this paper, we show how to construct secure obfuscation for Deterministic Finite Automata, assuming non-uniformly strong one-way functions exist. We revisit the software protection approaches originally proposed by [5, 10, 12, 17] and revise them to the current obfuscation setting of Barak et al. [2]. Under this model, we introduce an efficient oracle that retains some 'small' secret about the original program. Using this secret, we can construct an obfuscator and two-party protocol that securely obfuscates Deterministic Finite Automata against malicious adversaries. The security of this model retains the strong 'virtual black box' property originally proposed in [2] while incorporating the stronger condition of dependent auxiliary inputs in [15]. Additionally, we show that our techniques remain secure under concurrent self-composition with adaptive inputs and that Turing machines are obfuscatable under this model.
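The functionality class being obfuscated in this record is the deterministic finite automaton. As a minimal reminder of what such a program computes (plain DFA evaluation only, not the cryptographic construction; the transition table below is an invented example):

```python
def run_dfa(delta, start, accepting, word):
    """Evaluate a DFA given its transition function delta as a dict
    mapping (state, symbol) -> state. Returns True iff the word is
    accepted."""
    state = start
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accepting

# Example DFA: accepts binary strings containing an even number of 1s.
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd", ("odd", "1"): "even"}
print(run_dfa(delta, "even", {"even"}, "1101"))  # False (three 1s)
print(run_dfa(delta, "even", {"even"}, "1001"))  # True  (two 1s)
```

A secure obfuscation of such an automaton would let a client evaluate membership queries like these without learning the transition table itself.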

  11. Deterministic generation of multiparticle entanglement by quantum Zeno dynamics

    CERN Document Server

    Barontini, Giovanni; Haas, Florian; Estève, Jérôme; Reichel, Jakob

    2016-01-01

    Multiparticle entangled quantum states, a key resource in quantum-enhanced metrology and computing, are usually generated by coherent operations exclusively. However, unusual forms of quantum dynamics can be obtained when environment coupling is used as part of the state generation. In this work, we used quantum Zeno dynamics (QZD), based on nondestructive measurement with an optical microcavity, to deterministically generate different multiparticle entangled states in an ensemble of 36 qubit atoms in less than 5 microseconds. We characterized the resulting states by performing quantum tomography, yielding a time-resolved account of the entanglement generation. In addition, we studied the dependence of quantum states on measurement strength and quantified the depth of entanglement. Our results show that QZD is a versatile tool for fast and deterministic entanglement generation in quantum engineering applications.

  12. The road to deterministic matrices with the restricted isometry property

    CERN Document Server

    Bandeira, Afonso S; Mixon, Dustin G; Wong, Percy

    2012-01-01

    The restricted isometry property (RIP) is a well-known matrix condition that provides state-of-the-art reconstruction guarantees for compressed sensing. While random matrices are known to satisfy this property with high probability, deterministic constructions have found less success. In this paper, we consider various techniques for demonstrating RIP deterministically, some popular and some novel, and we evaluate their performance. In evaluating some techniques, we apply random matrix theory and inadvertently find a simple alternative proof that certain random matrices are RIP. Later, we propose a particular class of matrices as candidates for being RIP, namely, equiangular tight frames (ETFs). Using the known correspondence between real ETFs and strongly regular graphs, we investigate certain combinatorial implications of a real ETF being RIP. Specifically, we give probabilistic intuition for a new bound on the clique number of Paley graphs of prime order, and we conjecture that the corresponding ETFs are R...

  13. Deterministic event-based simulation of quantum interference

    OpenAIRE

    De Raedt, K.; De Raedt, H.; Michielsen, K.

    2004-01-01

    We propose and analyse simple deterministic algorithms that can be used to construct machines that have primitive learning capabilities. We demonstrate that locally connected networks of these machines can be used to perform blind classification on an event-by-event basis, without storing the information of the individual events. We also demonstrate that properly designed networks of these machines exhibit behavior that is usually only attributed to quantum systems. We present networks that s...

  14. Deterministic event-based simulation of quantum phenomena

    OpenAIRE

    De Raedt, K.; De Raedt, H.; Michielsen, K.

    2005-01-01

    We propose and analyse simple deterministic algorithms that can be used to construct machines that have primitive learning capabilities. We demonstrate that locally connected networks of these machines can be used to perform blind classification on an event-by-event basis, without storing the information of the individual events. We also demonstrate that properly designed networks of these machines exhibit behavior that is usually only attributed to quantum systems. We present networks that s...

  15. Deterministic and stochastic study of wind farm harmonic currents

    OpenAIRE

    Sainz Sapera, Luis; Mesas García, Juan José; Teodorescu, Remus; Rodríguez Cortés, Pedro

    2010-01-01

    Wind farm harmonic emissions are a well-known power quality problem, but little data based on actual wind farm measurements are available in the literature. In this paper, harmonic emissions of an 18 MW wind farm are investigated using extensive measurements, and the deterministic and stochastic characterization of wind farm harmonic currents is analyzed. Specific issues addressed in the paper include the harmonic variation with the wind farm operating point and the random char...

  16. Multidirectional sorting modes in deterministic lateral displacement devices

    DEFF Research Database (Denmark)

    Long, B.R.; Heller, Martin; Beech, J.P.;

    2008-01-01

    Deterministic lateral displacement (DLD) devices separate micrometer-scale particles in solution based on their size using a laminar microfluidic flow in an array of obstacles. We investigate array geometries with rational row-shift fractions in DLD devices by use of a simple model including both advection and diffusion. Our model predicts multidirectional sorting modes that could be experimentally tested in high-throughput DLD devices containing obstacles that are much smaller than the separation between obstacles.

  17. Notes on Deterministic Programming of Quantum Observables and Channels

    OpenAIRE

    Heinosaari, Teiko; Tukiainen, Mikko

    2014-01-01

    We study the limitations of deterministic programmability of quantum circuits, e.g., a quantum computer. More precisely, we analyse the programming of quantum observables and channels via quantum multimeters. We show that the programming vectors for any two different sharp observables are necessarily orthogonal whenever post-processing is not allowed. This result then directly implies that any two different unitary channels also require orthogonal programming vectors. This approach generalizes...

  18. Deterministic linear optics quantum computation utilizing linked photon circuits

    CERN Document Server

    Yoran, N; Yoran, Nadav; Reznik, Benni

    2003-01-01

    We suggest an efficient scheme for quantum computation with linear optical elements utilizing "linked" photon states. The linked states are designed according to the particular quantum circuit one wishes to process. Once a linked-state has been successfully prepared, the computation is pursued deterministically by a sequence of teleportation steps. The present scheme enables a significant reduction of the average number of elementary gates per logical gate to about 20-30 CZ_{9/16} gates.

  19. Receding Horizon Temporal Logic Control for Finite Deterministic Systems

    OpenAIRE

    Ding, Xuchu; Lazar, Mircea; Belta, Calin

    2012-01-01

    This paper considers receding horizon control of finite deterministic systems, which must satisfy a high level, rich specification expressed as a linear temporal logic formula. Under the assumption that time-varying rewards are associated with states of the system and they can be observed in real-time, the control objective is to maximize the collected reward while satisfying the high level task specification. In order to properly react to the changing rewards, a controller synthesis framewor...

  20. Deterministic Dynamic Programming in Discrete Time: A Monotone Convergence Principle

    OpenAIRE

    Takashi Kamihigashi; Masayuki Yao

    2015-01-01

    We consider infinite-horizon deterministic dynamic programming problems in discrete time. We show that the value function is always a fixed point of a modified version of the Bellman operator. We also show that value iteration monotonically converges to the value function if the initial function is dominated by the value function, is mapped upward by the modified Bellman operator, and satisfies a transversality-like condition. These results require no assumption except for the general framewo...
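The monotone convergence described here (value iteration increasing toward the value function when the initial function is dominated by it) can be observed on a toy deterministic dynamic program: with nonnegative rewards, starting from V0 = 0 makes every iterate a lower bound that rises toward the fixed point of the Bellman operator. The two-state instance and discount factor below are invented for illustration and are far simpler than the paper's general framework:

```python
# Infinite-horizon deterministic DP: in each state choose an action,
# collect its reward, and move to its successor state.
beta = 0.9                      # assumed discount factor
actions = {                     # state -> list of (reward, next_state)
    "A": [(1.0, "A"), (0.0, "B")],
    "B": [(2.0, "A"), (0.5, "B")],
}

# Value iteration from V0 = 0 (dominated by the value function, since
# all rewards are nonnegative): iterates increase monotonically.
V = {s: 0.0 for s in actions}
for _ in range(200):
    V = {s: max(r + beta * V[t] for r, t in actions[s]) for s in actions}

print({s: round(v, 2) for s, v in V.items()})  # {'A': 10.0, 'B': 11.0}
```

The fixed point can be checked by hand: staying in A gives V(A) = 1 + 0.9 V(A) = 10, and jumping from B to A gives V(B) = 2 + 0.9 · 10 = 11.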

  1. Location deterministic biosensing from quantum-dot-nanowire assemblies

    OpenAIRE

    Liu, Chao; Kim, Kwanoh; Fan, D. L.

    2014-01-01

    Semiconductor quantum dots (QDs), with high fluorescent brightness, stability, and tunable sizes, have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location-deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled gold (Au) nanowires. The manipulation mechanisms are quantitatively ...

  2. Deterministically – Probabilistic Approach for Determining the Steels Elasticity Modules

    Directory of Open Access Journals (Sweden)

    Popov Alexander

    2015-03-01

    Full Text Available The known deterministic relationships for estimating the elastic characteristics of materials do not adequately account for the significant variability of these parameters in solids. Therefore, a probabilistic approach to determining the moduli of elasticity, treating them as random values, is given, which increases the accuracy of the obtained results. By ultrasonic testing, a non-destructive evaluation of the structure and properties of the investigated steels has been made.

  3. Deterministic and Stochastic Study of Wind Farm Harmonic Currents

    DEFF Research Database (Denmark)

    Sainz, Luis; Mesas, Juan Jose; Teodorescu, Remus;

    2010-01-01

    Wind farm harmonic emissions are a well-known power quality problem, but little data based on actual wind farm measurements are available in the literature. In this paper, harmonic emissions of an 18 MW wind farm are investigated using extensive measurements, and the deterministic and stochastic characterization of wind farm harmonic currents is analyzed. Specific issues addressed in the paper include the harmonic variation with the wind farm operating point and the random characteristics of their magnitude and phase angle.

  4. Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates

    Science.gov (United States)

    Melechko, Anatoli V.; McKnight, Timothy E.; Guillorn, Michael A.; Ilic, Bojan; Merkulov, Vladimir I.; Doktycz, Mitchel J.; Lowndes, Douglas H.; Simpson, Michael L.

    2012-03-27

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.

  5. Uniform Deterministic Discrete Method for Three Dimensional Systems

    Institute of Scientific and Technical Information of China (English)

    1997-01-01

    For radiative direct exchange areas in three-dimensional systems, the Uniform Deterministic Discrete Method (UDDM) was adopted. The spherical-surface dividing method for a sending area element and the regular icosahedron for a sending volume element can handle the direct exchange area computation for any kind of zone pair. Numerical examples of direct exchange areas in three-dimensional systems with nonhomogeneous attenuation coefficients indicated that the UDDM can give very high numerical accuracy.

  6. A Semi-Deterministic Channel Model for VANETs Simulations

    Directory of Open Access Journals (Sweden)

    Jonathan Ledy

    2012-01-01

    Full Text Available Today's advanced simulators facilitate thorough studies on Vehicular Ad hoc NETworks (VANETs). However, the choice of the physical layer model in such simulators is a crucial issue that impacts the results. A solution to this challenge might be found with a hybrid model. In this paper, we propose a semi-deterministic channel propagation model for VANETs called UM-CRT. It is based on CRT (Communication Ray Tracer) and SCME-UM (Spatial Channel Model Extended - Urban Micro), which are, respectively, a deterministic channel simulator and a statistical channel model. It uses a process which adjusts the statistical model using relevant parameters obtained from the deterministic simulator. To evaluate realistic VANET transmissions, we have integrated our hybrid model in fully compliant 802.11p and 802.11n physical layers. This framework is then used with the NS-2 network simulator. Our simulation results show that UM-CRT is adapted for VANETs simulations in urban areas as it gives a good approximation of realistic channel propagation mechanisms while significantly improving simulation time.

  7. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    International Nuclear Information System (INIS)

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors
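The fission matrix idea described above can be illustrated on a toy problem: instead of waiting for a slowly converging power (source) iteration, a region-to-region fission matrix is tallied and its dominant eigenpair is extracted directly. This is a sketch only; the 3-region matrix and all values are invented, not taken from the thesis or from MCNP.

```python
import numpy as np

# Hypothetical 3-region fission matrix F: F[i, j] = expected fission
# neutrons produced in region i per fission neutron born in region j.
F = np.array([
    [0.60, 0.25, 0.05],
    [0.25, 0.60, 0.25],
    [0.05, 0.25, 0.60],
])

def power_iteration(F, s0, n_iter):
    """Unaccelerated source iteration: s_{n+1} = F s_n, renormalized."""
    s = s0 / s0.sum()
    k = 0.0
    for _ in range(n_iter):
        t = F @ s
        k = t.sum() / s.sum()   # current k-eigenvalue estimate
        s = t / t.sum()         # renormalized fission source
    return k, s

# Slowly converging start: all source concentrated in one region.
k_pi, s_pi = power_iteration(F, np.array([1.0, 0.0, 0.0]), 50)

# Fission matrix acceleration (sketch): once F has been tallied,
# take its dominant eigenpair directly instead of iterating.
w, V = np.linalg.eig(F)
i = np.argmax(w.real)
k_fm = w[i].real
s_fm = np.abs(V[:, i].real)
s_fm /= s_fm.sum()

print(f"k (power iteration) = {k_pi:.6f}")
print(f"k (fission matrix)  = {k_fm:.6f}")
```

For systems with a high dominance ratio (second eigenvalue close to the first), plain source iteration converges slowly, while the eigen-solve of the tallied matrix yields the fundamental mode at once; in the real method the matrix entries are estimated from (noisy) Monte Carlo tallies.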

  8. Demographic noise can reverse the direction of deterministic selection.

    Science.gov (United States)

    Constable, George W A; Rogers, Tim; McKane, Alan J; Tarnita, Corina E

    2016-08-01

    Deterministic evolutionary theory robustly predicts that populations displaying altruistic behaviors will be driven to extinction by mutant cheats that absorb common benefits but do not themselves contribute. Here we show that when demographic stochasticity is accounted for, selection can in fact act in the reverse direction to that predicted deterministically, instead favoring cooperative behaviors that appreciably increase the carrying capacity of the population. Populations that exist in larger numbers experience a selective advantage by being more stochastically robust to invasions than smaller populations, and this advantage can persist even in the presence of reproductive costs. We investigate this general effect in the specific context of public goods production and find conditions for stochastic selection reversal leading to the success of public good producers. This insight, developed here analytically, is missed by the deterministic analysis as well as by standard game theoretic models that enforce a fixed population size. The effect is found to be amplified by space; in this scenario we find that selection reversal occurs within biologically reasonable parameter regimes for microbial populations. Beyond the public good problem, we formulate a general mathematical framework for models that may exhibit stochastic selection reversal. In this context, we describe a stochastic analog to [Formula: see text] theory, by which small populations can evolve to higher densities in the absence of disturbance. PMID:27450085

  9. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.

  10. Anaesthetic management of minimally invasive mitral valve replacement: first cases performed in Cuba

    Directory of Open Access Journals (Sweden)

    Odalys Ojeda Mollinedo

    2011-09-01

    Full Text Available Minimally invasive heart surgery has many advantages for the patient; however, the difficulties in performing and implementing this procedure lie not only in the surgical technique but also in the design of the anesthetic technique, which becomes a challenge for the anesthesiologist. This article presents the first two cases of minimally invasive mitral valve replacement performed in the country. The anesthetic techniques and the results obtained are described, and the advantages and complications of the two techniques (anesthesia and surgery) are discussed. Although this series is small, we believe that it lays the basis for developing this technique in our center, as it is a safe option for patients with mitral valve disease who are not accepted for interventional cardiology.

  11. Minimally invasive approach for the Wheat procedure: report of 4 cases

    Institute of Scientific and Technical Information of China (English)

    沈金强; 夏利民; 魏来; 宋凯; 杨兆华; 刘欢; 朱家泗; 王春生

    2013-01-01

    Objective: To share clinical experience of the minimally invasive Wheat procedure via upper ministernotomy. Methods: In November 2011, we performed the minimally invasive Wheat procedure through an upper ministernotomy in our hospital for 4 patients with aortic valve disease combined with a dilated ascending aorta. There were 3 males and 1 female, aged from 43 to 62 years (mean age 55.3 years). Results: There were no in-hospital deaths. All patients recovered well, without low cardiac output syndrome, cerebrovascular accident, renal insufficiency or unfavorable wound healing. Follow-up one year after the operation indicated that all patients were in NYHA class I-II, with good function of the artificial vascular graft of the ascending aorta and of the aortic valve. Conclusions: The minimally invasive Wheat procedure via upper ministernotomy is a safe and feasible method and is worth applying selectively for the treatment of aortic valve disease combined with a dilated ascending aortic aneurysm.

  12. Magnetic resonance imaging findings and prognosis of gastric-type mucinous adenocarcinoma (minimal deviation adenocarcinoma or adenoma malignum) of the uterine corpus: Two case reports

    OpenAIRE

    HINO, MAYO; Yamaguchi, Ken; Abiko, Kaoru; YOSHIOKA, YUMIKO; HAMANISHI, JUNZO; Kondoh, Eiji; Koshiyama, Masafumi; Baba, Tsukasa; Matsumura, Noriomi; Minamiguchi, Sachiko; Kido, Aki; Konishi, Ikuo

    2016-01-01

    Our group previously documented the first, very rare case of primary gastric-type mucinous adenocarcinoma of the uterine corpus. Although this type of endometrial cancer appears to be similar to the gastric-type adenocarcinoma of the uterine cervix, its main symptoms, appearance on magnetic resonance imaging (MRI) and prognosis have not been fully elucidated due to its rarity. We herein describe an additional case of gastric-type mucinous adenocarcinoma of the endometrium and review the relev...

  13. Optimal Dividend Payments for the Piecewise-Deterministic Poisson Risk Model

    CERN Document Server

    Feng, Runhuan; Zhu, Chao

    2011-01-01

    This paper deals with optimal dividend payment problem in the general setup of a piecewise-deterministic compound Poisson risk model. The objective of an insurance business under consideration is to maximize the expected discounted dividend payout up to the time of ruin. Both restricted and unrestricted payment schemes are considered. In the case of restricted payment scheme, the value function is shown to be a classical solution of the corresponding Hamilton-Jacobi-Bellman equation, which, in turn, leads to an optimal restricted dividend payment policy. When the claims are exponentially distributed, the value function and an optimal dividend payment policy of the threshold type are determined in closed forms under certain conditions. The case of unrestricted payment scheme gives rise to a singular stochastic control problem. By solving the associated integro-differential quasi-variational inequality, the value function and an optimal barrier strategy are determined explicitly in exponential claim size distri...

  14. Four small supernumerary marker chromosomes derived from chromosomes 6, 8, 11 and 12 in a patient with minimal clinical abnormalities: a case report

    Directory of Open Access Journals (Sweden)

    Hamid Ahmed B

    2010-08-01

    Full Text Available Abstract Introduction Small supernumerary marker chromosomes are still a problem in cytogenetic diagnostics and genetic counseling. This holds especially true for the rare cases with multiple small supernumerary marker chromosomes. Most such cases are reported to be clinically severely affected due to the chromosomal imbalances induced by the presence of small supernumerary marker chromosomes. Here we report the first case of a patient with four different small supernumerary marker chromosomes who, apart from slight developmental retardation in youth and non-malignant hyperpigmentation, presented no other clinical signs. Case presentation Our patient was a 30-year-old Caucasian man, delivered by caesarean section because of macrosomia. At birth he presented with bilateral cryptorchidism but no other birth defects. At around two years of age he showed psychomotor delay and a bilateral convergent strabismus. Later he had slight learning difficulties, with normal social behavior, and he now lives an independent life as an adult. Apart from hypogenitalism, he has multiple hyperpigmented nevi all over his body and short feet with pes cavus and claw toes. At the age of 30 years, cytogenetic and molecular cytogenetic analysis revealed a karyotype of 50,XY,+min(6)(:p11.1->q11.1:),+min(8)(:p11.1->q11.1:),+min(11)(:p11.11->q11:),+min(12)(:p11.2~12->q10:), leading overall to a small partial trisomy of 12p11.1~12.1. Conclusions Including this one, four single case reports with a karyotype 50,XN,+4mar are available in the literature. For prenatally detected multiple small supernumerary marker chromosomes in particular, we learn from this case that such a cytogenetic condition may be correlated with a positive clinical outcome.

  15. Algorithms for Deterministic Call Admission Control of Pre-stored VBR Video Streams

    Directory of Open Access Journals (Sweden)

    Christos Tryfonas

    2009-08-01

    Full Text Available We examine the problem of accepting a new request for a pre-stored VBR video stream that has been smoothed using any of the smoothing algorithms found in the literature. The output of these algorithms is a piecewise constant-rate schedule for a Variable Bit-Rate (VBR) stream. The schedule guarantees that the decoder buffer neither overflows nor underflows. The problem addressed in this paper is the determination of the minimal time displacement of each newly requested VBR stream so that it can be accommodated by the network and/or the video server without overbooking the committed traffic. We prove that this call admission control problem for multiple requested VBR streams is NP-complete and inapproximable within a constant factor, via a reduction from the VERTEX COLORING problem. We also present a deterministic morphology-sensitive algorithm that calculates the minimal time displacement of a VBR stream request. The complexity of the proposed algorithm, along with the experimental results we provide, indicates that the algorithm is suitable for real-time determination of the time displacement parameter during the call admission phase.
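For a single request, the minimal time displacement can be found by a simple feasibility scan over candidate start offsets (the NP-completeness result above concerns the joint problem for multiple requests). A minimal sketch, with invented slot-based link loads and capacity; it is not the paper's morphology-sensitive algorithm:

```python
def min_displacement(load, capacity, schedule, horizon):
    """Return the smallest start offset d (in slots) such that adding the
    piecewise-constant `schedule` at offset d never exceeds `capacity`,
    or None if no offset within `horizon` fits. Slots beyond the end of
    `load` are treated as carrying no committed traffic."""
    for d in range(horizon):
        if all(load[d + t] + rate <= capacity
               for t, rate in enumerate(schedule)
               if d + t < len(load)):
            return d
    return None

# Hypothetical committed link load per time slot (Mb/s) and link capacity.
capacity = 10.0
load = [6.0, 9.0, 9.0, 4.0, 3.0, 3.0, 2.0, 2.0, 2.0, 2.0]
# Piecewise-constant schedule of the new smoothed VBR stream.
new_stream = [4.0, 4.0, 2.0]

d = min_displacement(load, capacity, new_stream, horizon=8)
print(f"minimal displacement: {d} slots")
```

Here offsets 0-2 collide with the 9 Mb/s peak, so the stream is admitted with a displacement of 3 slots.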

  16. Similarity matrix analysis and divergence measures for statistical detection of unknown deterministic signals hidden in additive noise

    International Nuclear Information System (INIS)

    This Letter proposes an algorithm to detect an unknown deterministic signal hidden in additive white Gaussian noise. The detector is based on recurrence analysis. It compares the distribution of the similarity matrix coefficients of the measured signal with an analytic expression of the distribution expected in the noise-only case. This comparison is achieved using divergence measures. Performance analysis based on the receiver operating characteristics shows that the proposed detector outperforms the energy detector, giving a probability of detection 10% to 50% higher, and has a similar performance to that of a sub-optimal filter detector. - Highlights: • We model the distribution of the similarity matrix coefficients of a Gaussian noise. • We use divergence measures for goodness-of-fit test between a model and measured data. • We distinguish deterministic signal and Gaussian noise with similarity matrix analysis. • Similarity matrix analysis outperforms energy detector
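The detection principle can be sketched as follows: the mean of the binary similarity-matrix coefficients (the recurrence rate) is compared, via a Kullback-Leibler divergence between Bernoulli distributions, against its value under the noise-only hypothesis. In this sketch the noise-only reference is estimated by simulation rather than derived analytically as in the Letter, and the square-wave test signal and all parameters are illustrative assumptions:

```python
import math
import random
import statistics

def recurrence_rate(x, eps=0.5):
    """Mean of the binary similarity-matrix coefficients: the fraction of
    sample pairs (i, j), i < j, with |x_i - x_j| < eps, after normalizing
    the signal to zero mean and unit variance."""
    m = statistics.fmean(x)
    s = statistics.stdev(x)
    z = [(v - m) / s for v in x]
    n = len(z)
    hits = sum(1 for i in range(n) for j in range(i + 1, n)
               if abs(z[i] - z[j]) < eps)
    return hits / (n * (n - 1) / 2)

def kl_bernoulli(p, q):
    """Kullback-Leibler divergence between Bernoulli(p) and Bernoulli(q)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

random.seed(7)
n = 400

# Reference recurrence rate under the noise-only hypothesis. The Letter
# derives this analytically; here it is simply estimated by simulation.
p_ref = statistics.fmean(
    recurrence_rate([random.gauss(0.0, 1.0) for _ in range(n)])
    for _ in range(30)
)

noise = [random.gauss(0.0, 1.0) for _ in range(n)]
hidden = [math.copysign(1.0, math.sin(0.37 * k)) + 0.3 * random.gauss(0.0, 1.0)
          for k in range(n)]

d_noise = kl_bernoulli(recurrence_rate(noise), p_ref)
d_signal = kl_bernoulli(recurrence_rate(hidden), p_ref)
print(f"divergence, noise only:        {d_noise:.4f}")
print(f"divergence, signal plus noise: {d_signal:.4f}")
```

A deterministic signal changes the distribution of the similarity coefficients, so its divergence from the noise-only reference is markedly larger than the divergence of a fresh noise realization; thresholding this divergence yields the detector.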

  17. Similarity matrix analysis and divergence measures for statistical detection of unknown deterministic signals hidden in additive noise

    Energy Technology Data Exchange (ETDEWEB)

    Le Bot, O., E-mail: lebotol@gmail.com [Univ. Grenoble Alpes, GIPSA-Lab, F-38000 Grenoble (France); CNRS, GIPSA-Lab, F-38000 Grenoble (France); Mars, J.I. [Univ. Grenoble Alpes, GIPSA-Lab, F-38000 Grenoble (France); CNRS, GIPSA-Lab, F-38000 Grenoble (France); Gervaise, C. [Univ. Grenoble Alpes, GIPSA-Lab, F-38000 Grenoble (France); CNRS, GIPSA-Lab, F-38000 Grenoble (France); Chaire CHORUS, Foundation of Grenoble Institute of Technology, 46 Avenue Félix Viallet, 38031 Grenoble Cedex 1 (France)

    2015-10-23

    This Letter proposes an algorithm to detect an unknown deterministic signal hidden in additive white Gaussian noise. The detector is based on recurrence analysis. It compares the distribution of the similarity matrix coefficients of the measured signal with an analytic expression of the distribution expected in the noise-only case. This comparison is achieved using divergence measures. Performance analysis based on the receiver operating characteristics shows that the proposed detector outperforms the energy detector, giving a probability of detection 10% to 50% higher, and has a similar performance to that of a sub-optimal filter detector. - Highlights: • We model the distribution of the similarity matrix coefficients of a Gaussian noise. • We use divergence measures for goodness-of-fit test between a model and measured data. • We distinguish deterministic signal and Gaussian noise with similarity matrix analysis. • Similarity matrix analysis outperforms energy detector.

  18. Simulation of dose deposition in stereotactic synchrotron radiation therapy: a fast approach combining Monte Carlo and deterministic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Smekens, F; Freud, N; Letang, J M; Babot, D [CNDRI (Nondestructive Testing using Ionizing Radiations) Laboratory, INSA-Lyon, 69621 Villeurbanne Cedex (France); Adam, J-F; Elleaume, H; Esteve, F [INSERM U-836, Equipe 6 ' Rayonnement Synchrotron et Recherche Medicale' , Institut des Neurosciences de Grenoble (France); Ferrero, C; Bravin, A [European Synchrotron Radiation Facility, Grenoble (France)], E-mail: francois.smekens@insa-lyon.fr

    2009-08-07

    A hybrid approach, combining deterministic and Monte Carlo (MC) calculations, is proposed to compute the distribution of dose deposited during stereotactic synchrotron radiation therapy treatment. The proposed approach divides the computation into two parts: (i) the dose deposited by primary radiation (coming directly from the incident x-ray beam) is calculated in a deterministic way using ray casting techniques and energy-absorption coefficient tables and (ii) the dose deposited by secondary radiation (Rayleigh and Compton scattering, fluorescence) is computed using a hybrid algorithm combining MC and deterministic calculations. In the MC part, a small number of particle histories are simulated. Every time a scattering or fluorescence event takes place, a splitting mechanism is applied, so that multiple secondary photons are generated with a reduced weight. The secondary events are further processed in a deterministic way, using ray casting techniques. The whole simulation, carried out within the framework of the Monte Carlo code Geant4, is shown to converge towards the same results as the full MC simulation. The speed of convergence is found to depend notably on the splitting multiplicity, which can easily be optimized. To assess the performance of the proposed algorithm, we compare it to state-of-the-art MC simulations, accelerated by the track length estimator technique (TLE), considering a clinically realistic test case. It is found that the hybrid approach is significantly faster than the MC/TLE method: in a test case, the gain in speed was about a factor of 25 at constant precision. Therefore, this method appears to be suitable for treatment planning applications.
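The splitting mechanism at the core of the hybrid algorithm can be illustrated with a one-dimensional toy slab problem. In this sketch, splitting is applied only at the first scattering of each history (to keep the particle population bounded), and the split descendants are still tracked by Monte Carlo rather than deterministically; the geometry and interaction data are invented:

```python
import random

MU = 1.0        # total interaction coefficient (1/cm), assumed
P_ABS = 0.3     # absorption probability per interaction, assumed
L = 2.0         # slab thickness (cm), assumed

def track(x, direction, weight, split, tally):
    """Follow one photon through the 1D slab, accumulating absorbed weight
    in tally[0]. If split > 1, the first scattering event is split into
    `split` descendants with weight w/split each, which is unbiased."""
    while True:
        x += direction * random.expovariate(MU)
        if x < 0.0 or x > L:              # photon escaped the slab
            return
        if random.random() < P_ABS:       # absorbed: deposit its weight
            tally[0] += weight
            return
        if split > 1:                     # scattering: apply splitting once
            for _ in range(split):
                track(x, random.choice((-1, 1)), weight / split, 1, tally)
            return
        direction = random.choice((-1, 1))  # analog isotropic scattering

def absorbed_fraction(n_histories, split):
    tally = [0.0]
    for _ in range(n_histories):
        track(0.0, +1, 1.0, split, tally)
    return tally[0] / n_histories

random.seed(3)
analog = absorbed_fraction(20_000, split=1)
hybrid = absorbed_fraction(20_000, split=3)
print(f"absorbed fraction, analog MC:      {analog:.3f}")
print(f"absorbed fraction, with splitting: {hybrid:.3f}")
```

Because each descendant carries weight w/split, the expected tallied weight per history is unchanged, so both estimators converge to the same answer; the splitting variant samples the secondary field more densely per primary history.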

  19. A deterministic solution of the first order linear Boltzmann transport equation in the presence of external magnetic fields

    Energy Technology Data Exchange (ETDEWEB)

    St Aubin, J., E-mail: joel.st.aubin@albertahealthservices.ca; Keyvanloo, A.; Fallone, B. G. [Department of Medical Physics, Cross Cancer Institute, 11560 University Avenue Northwest, Edmonton, Alberta T6G 1Z2 (Canada); Vassiliev, O. [Department of Medical Physics, Tom Baker Cancer Center, 1331 29 Street Northwest, Calgary, Alberta T2N 4N2 (Canada)

    2015-02-15

    Purpose: Accurate radiotherapy dose calculation algorithms are essential to any successful radiotherapy program, considering the high level of dose conformity and modulation in many of today’s treatment plans. As technology continues to progress, such as is the case with novel MRI-guided radiotherapy systems, the necessity for dose calculation algorithms to accurately predict delivered dose in increasingly challenging scenarios is vital. To this end, a novel deterministic solution has been developed to the first order linear Boltzmann transport equation which accurately calculates x-ray based radiotherapy doses in the presence of magnetic fields. Methods: The deterministic formalism discussed here with the inclusion of magnetic fields is outlined mathematically using a discrete ordinates angular discretization in an attempt to leverage existing deterministic codes. It is compared against the EGSnrc Monte Carlo code, utilizing the emf-macros addition which calculates the effects of electromagnetic fields. This comparison is performed in an inhomogeneous phantom that was designed to present a challenging calculation for deterministic calculations in 0, 0.6, and 3 T magnetic fields oriented parallel and perpendicular to the radiation beam. The accuracy of the formalism discussed here against Monte Carlo was evaluated with a gamma comparison using a standard 2%/2 mm and a more stringent 1%/1 mm criterion for a standard reference 10 × 10 cm² field as well as a smaller 2 × 2 cm² field. Results: Greater than 99.8% (94.8%) of all points analyzed passed a 2%/2 mm (1%/1 mm) gamma criterion for all magnetic field strengths and orientations investigated. All dosimetric changes resulting from the inclusion of magnetic fields were accurately calculated using the deterministic formalism. However, despite the algorithm’s high degree of accuracy, it is noticed that this formalism was not unconditionally stable using a discrete ordinates angular discretization.

  20. Oscillation and chaos in a deterministic traffic network

    International Nuclear Information System (INIS)

    Traffic dynamics of regular networks are of importance in theory and practice. In this paper, we study such a problem with a regular lattice structure. We specify the network structure and traffic protocols so that all the random features are removed. When a node is attacked and then removed, the traffic redistributes, causing complicated dynamical results. With different system redundancy, we observe rich dynamics, ranging from stable state to periodic to chaotic oscillation. Since this is a completely deterministic system, we can conclude that the nonlinear dynamics is purely due to the interior nonlinear feature of the traffic.

  1. Deterministic ants in labirynth -- information gained by map sharing

    CERN Document Server

    Malinowski, Janusz

    2014-01-01

    A few ant robots are dropped into a labyrinth formed by a square lattice with a small number of nodes removed. The ants move according to a deterministic algorithm designed to explore all corridors. Each ant remembers the shape of the corridors she has visited. Once two ants meet, they share the information acquired. We evaluate how the time needed for an ant to obtain complete information depends on the number of ants, and how the corridor length known to an ant depends on time. Numerical results are presented in the form of scaling relations.
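The setup can be sketched with a deterministic depth-first exploration on a lattice with a few nodes removed, where ants that land on the same node merge their maps. The maze, the fixed neighbor ordering, and the meeting rule are illustrative assumptions, not the paper's exact algorithm:

```python
from itertools import product

# A small square lattice with a few nodes removed (the "labyrinth").
N = 6
removed = {(2, 2), (3, 2), (2, 3)}
cells = {c for c in product(range(N), repeat=2) if c not in removed}

def neighbors(c):
    x, y = c
    # A fixed neighbor order keeps the exploration fully deterministic.
    return [n for n in ((x + 1, y), (x, y + 1), (x - 1, y), (x, y - 1))
            if n in cells]

class Ant:
    def __init__(self, start):
        self.pos = start
        self.visited = {start}    # drives the ant's own DFS movement
        self.known = {start}      # the ant's map (own visits + shared info)
        self.stack = [start]      # DFS backtracking trail

    def step(self):
        if not self.stack:
            return                # this ant has finished its own sweep
        for n in neighbors(self.pos):
            if n not in self.visited:
                self.visited.add(n)
                self.known.add(n)
                self.stack.append(n)
                self.pos = n
                return
        self.stack.pop()          # dead end: backtrack
        if self.stack:
            self.pos = self.stack[-1]

def time_to_full_map(starts):
    """Steps until some ant's map covers the whole labyrinth."""
    ants = [Ant(s) for s in starts]
    for t in range(1, 10_000):
        for a in ants:
            a.step()
        # Ants that meet on a node share everything they know.
        for a in ants:
            for b in ants:
                if a is not b and a.pos == b.pos:
                    shared = a.known | b.known
                    a.known, b.known = shared, set(shared)
        if any(a.known == cells for a in ants):
            return t
    return None

t1 = time_to_full_map([(0, 0)])
t2 = time_to_full_map([(0, 0), (N - 1, N - 1)])
print(f"steps until one ant knows the whole maze, 1 ant:  {t1}")
print(f"steps until one ant knows the whole maze, 2 ants: {t2}")
```

Repeating this over lattice sizes and ant counts gives the kind of data from which the paper's scaling relations are extracted.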

  2. Deterministic Single-Phonon Source Triggered by a Single Photon

    CERN Document Server

    Söllner, Immo; Lodahl, Peter

    2016-01-01

    We propose a scheme that enables the deterministic generation of single phonons at GHz frequencies triggered by single photons in the near infrared. This process is mediated by a quantum dot embedded on-chip in an opto-mechanical circuit, which allows for the simultaneous control of the relevant photonic and phononic frequencies. We devise new opto-mechanical circuit elements that constitute the necessary building blocks for the proposed scheme and are readily implementable within the current state-of-the-art of nano-fabrication. This will open new avenues for implementing quantum functionalities based on phonons as an on-chip quantum bus.

  3. Methods and models in mathematical biology deterministic and stochastic approaches

    CERN Document Server

    Müller, Johannes

    2015-01-01

    This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and  branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.

  4. Deterministic combination of numerical and physical coastal wave models

    DEFF Research Database (Denmark)

    Zhang, H.W.; Schäffer, Hemming Andreas; Jakobsen, K.P.

    2007-01-01

    A deterministic combination of numerical and physical models for coastal waves is developed. In the combined model, a Boussinesq model MIKE 21 BW is applied for the numerical wave computations. A piston-type 2D or 3D wavemaker and the associated control system with active wave absorption provides the interface between the numerical and physical models. The link between the numerical and physical models is given by an ad hoc unified wave generation theory which is devised in the study. This wave generation theory accounts for linear dispersion and shallow water non-linearity. Local wave phenomena (evanescent modes) ...

  5. A Deterministic Transport Code for Space Environment Electrons

    Science.gov (United States)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.

    2010-01-01

    A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.

  6. Noise-based deterministic logic and computing: a brief survey

    CERN Document Server

    Kish, Laszlo B; Bezrukov, Sergey M; Peper, Ferdinand; Gingl, Zoltan; Horvath, Tamas

    2010-01-01

    A short survey is provided about our recent explorations of the young topic of noise-based logic. After outlining the motivation behind noise-based computation schemes, we present a short summary of our ongoing efforts in the introduction, development and design of several noise-based deterministic multivalued logic schemes and elements. In particular, we describe classical, instantaneous, continuum, spike and random-telegraph-signal based schemes with applications such as circuits that emulate the brain's functioning and string verification via a slow communication channel.

  7. Deterministic secure quantum communication over a collective-noise channel

    Institute of Scientific and Technical Information of China (English)

    GU Bin; PEI ShiXin; SONG Biao; ZHONG Kun

    2009-01-01

    We present two deterministic secure quantum communication schemes over collective-noise channels: one against a collective-rotation noise and the other against a collective-dephasing noise. The two parties of the quantum communication can exploit the correlation of their subsystems to check for eavesdropping efficiently. Although the sender has to prepare a sequence of three-photon entangled states to accomplish secure communication against a collective noise, the two parties need only single-photon measurements, rather than Bell-state measurements, which makes our schemes convenient in practical applications.

  8. Steering Multiple Reverse Current into Unidirectional Current in Deterministic Ratchets

    Institute of Scientific and Technical Information of China (English)

    韦笃取; 罗晓曙; 覃英华

    2011-01-01

    Recent investigations have shown that, as the amplitude of the external force is varied, deterministic ratchets exhibit multiple current reversals, which are undesirable in certain circumstances. To steer the multiple reverse currents into a unidirectional current, an adaptive control law is presented, inspired by the relation between multiple current reversals and the chaos-periodic/quasiperiodic transition of the transport velocity. The designed controller stabilizes the transport velocity of the ratchets to a steady state and suppresses any chaos-periodic/quasiperiodic transition; that is, stable transport in the ratchets is achieved, which keeps the sign of the current unchanged.

  9. Deterministic entanglement of Rydberg ensembles by engineered dissipation

    DEFF Research Database (Denmark)

    Dasari, Durga; Mølmer, Klaus

    2014-01-01

    We propose a scheme that employs dissipation to deterministically generate entanglement in an ensemble of strongly interacting Rydberg atoms. With a combination of microwave driving between different Rydberg levels and a resonant laser coupling to a short-lived atomic state, the ensemble can be driven towards a dark steady state that entangles all atoms. The long-range resonant dipole-dipole interaction between different Rydberg states extends the entanglement beyond the van der Waals interaction range, with perspectives for entangling large and distant ensembles.

  10. The deterministic optical alignment of the HERMES spectrograph

    Science.gov (United States)

    Gers, Luke; Staszak, Nicholas

    2014-07-01

    The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four-channel, VPH-grating spectrograph fed by two 400-fiber slit assemblies, whose construction and commissioning have now been completed at the Anglo Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles by which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.

  11. Deterministic approximation for the cover time of trees

    CERN Document Server

    Feige, Uriel

    2009-01-01

    We present a deterministic algorithm that given a tree T with n vertices, a starting vertex v and a slackness parameter epsilon > 0, estimates within an additive error of epsilon the cover and return time, namely, the expected time it takes a simple random walk that starts at v to visit all vertices of T and return to v. The running time of our algorithm is polynomial in n/epsilon, and hence remains polynomial in n also for epsilon = 1/n^{O(1)}. We also show how the algorithm can be extended to estimate the expected cover (without return) time on trees.
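The quantity the algorithm approximates deterministically can be illustrated by naive Monte Carlo sampling (which the paper's method is designed to avoid); a minimal sketch in Python, using a 5-vertex path graph as an assumed example tree:

```python
import random

def cover_and_return_time(adj, start, rng):
    """One sampled walk: steps until every vertex is visited and the walk is back at start."""
    visited = {start}
    pos, steps = start, 0
    covered = False
    while True:
        pos = rng.choice(adj[pos])  # simple random walk: uniform neighbor
        steps += 1
        visited.add(pos)
        if len(visited) == len(adj):
            covered = True
        if covered and pos == start:
            return steps

# Path graph on 5 vertices (a tree), assumed example
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
rng = random.Random(0)
est = sum(cover_and_return_time(adj, 0, rng) for _ in range(2000)) / 2000
print(est)  # the exact expectation here is the endpoint commute time, 2 * 4 * 4 = 32
```

On a path with the walk started at one end, covering the tree amounts to reaching the far end and walking back, so the estimate should be close to the commute time between the endpoints; the deterministic algorithm of the paper achieves additive error epsilon without any sampling.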

  12. Deterministic versus stochastic aspects of superexponential population growth models

    Science.gov (United States)

    Grosjean, Nicolas; Huillet, Thierry

    2016-08-01

    Deterministic population growth models with power-law rates can exhibit a large variety of growth behaviors, ranging from algebraic and exponential to hyperexponential (finite-time explosion). In this setup, self-similarity considerations play a key role, together with two time substitutions. Two stochastic versions of such models are investigated, showing a much richer variety of behaviors. One is the Lamperti construction of self-similar positive stochastic processes based on the exponentiation of spectrally positive processes, followed by an appropriate time change. The other is based on stable continuous-state branching processes, given by another Lamperti time substitution applied to stable spectrally positive processes.

  13. Deterministic Smoluchowski-Feynman ratchets driven by chaotic noise.

    Science.gov (United States)

    Chew, Lock Yue

    2012-01-01

    We have elucidated the effect of statistical asymmetry on the directed current in Smoluchowski-Feynman ratchets driven by chaotic noise. Based on the inhomogeneous Smoluchowski equation and its generalized version, we arrive at analytical expressions of the directed current that includes a source term. The source term indicates that statistical asymmetry can drive the system further away from thermodynamic equilibrium, as exemplified by the constant flashing, the state-dependent, and the tilted deterministic Smoluchowski-Feynman ratchets, with the consequence of an enhancement in the directed current.

  14. Deterministic multidimensional growth model for small-world networks

    CERN Document Server

    Peng, Aoyuan

    2011-01-01

    We propose a deterministic multidimensional growth model for small-world networks. The model can characterize the distinguishing properties of many real-life networks with a geometric space structure. Our results show the model possesses the small-world effect: a larger clustering coefficient and a smaller characteristic path length. We also obtain and discuss accurate results for its properties, including the degree distribution, clustering coefficient, and network diameter. It is also worth noting that we derive an accurate analytical expression for the characteristic path length. We verify these main features numerically and experimentally.

  15. Intestinal intussusception and occlusion caused by small bowel polyps in the Peutz-Jeghers syndrome. Management by combined intraoperative enteroscopy and resection through minimal enterostomy: case report

    OpenAIRE

    Gama-Rodrigues Joaquim J.; Silva José Hyppolito da; Aisaka Adilson A.; Jureidini Ricardo; Falci Júnior Renato; Maluf Filho Fauze; Chong A. Kim; Tsai André Wan Wen; Bresciani Cláudio

    2000-01-01

    The Peutz-Jeghers syndrome is a hereditary disease that requires frequent endoscopic and surgical intervention, leading to secondary complications such as short bowel syndrome. CASE REPORT: This paper reports on a 15-year-old male patient with a family history of the disease, who underwent surgery for treatment of an intestinal occlusion due to a small intestine intussusception. DISCUSSION: An intra-operative fiberscopic procedure was included for the detection and treatment of numerous polyp...

  16. Minimally Invasive Lumbar Discectomy

    Medline Plus

    Full Text Available ... possible incision to minimize the injury to the tissues, particularly the muscles, the skin, and the ligaments, ... easier, and it limits the damage to the tissues around. So it’s a much safer procedure for ...

  17. Minimally Invasive Lumbar Discectomy

    Medline Plus

    Full Text Available ... approach. The word that we put in the operative record is minimal. We’re talking about maybe ... we’re looking at 20-times magnification. The operative area, the field that they’re working is ...

  18. Ruled Laguerre minimal surfaces

    KAUST Repository

    Skopenkov, Mikhail

    2011-10-30

    A Laguerre minimal surface is an immersed surface in ℝ³ that is an extremal of the functional ∫(H²/K − 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces R(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ) + λ(sin φ, cos φ, 0), where A, B, C, D ∈ ℝ are fixed. To achieve invariance under Laguerre transformations, we also derive all Laguerre minimal surfaces that are enveloped by a family of cones. The methodology is based on the isotropic model of Laguerre geometry. In this model a Laguerre minimal surface enveloped by a family of cones corresponds to the graph of a biharmonic function carrying a family of isotropic circles. We classify such functions by showing that the top view of the family of circles is a pencil. © 2011 Springer-Verlag.

  19. Minimally Invasive Lumbar Discectomy

    Medline Plus

    Full Text Available ... with the smallest possible incision to minimize the injury to the tissues, particularly the muscles, the skin, ... the world. It’s one of the most common injuries and one of the most common causes of ...

  20. Minimal Orderings Revisited

    Energy Technology Data Exchange (ETDEWEB)

    Peyton, B.W.

    1999-07-01

    When minimum orderings proved too difficult to deal with, Rose, Tarjan, and Lueker instead studied minimal orderings and how to compute them (Algorithmic aspects of vertex elimination on graphs, SIAM J. Comput., 5:266-283, 1976). This paper introduces an algorithm that is capable of computing much better minimal orderings much more efficiently than the algorithm in Rose et al. The new insight is a way to use certain structures and concepts from modern sparse Cholesky solvers to re-express one of the basic results in Rose et al. The new algorithm begins with any initial ordering and then refines it until a minimal ordering is obtained. It is simple to obtain high-quality, low-cost minimal orderings by using fill-reducing heuristic orderings as initial orderings for the algorithm. We examine several such initial orderings in some detail.
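The notion of a minimal ordering rests on the elimination game: eliminating a vertex pairwise-connects its remaining neighbors, and an ordering is minimal when no other ordering produces a strict subset of its fill edges. A hypothetical sketch of the fill computation (an illustration of the underlying definition, not the paper's refinement algorithm):

```python
def fill_in(n, edges, order):
    """Simulate symbolic elimination: return the set of fill edges created by `order`."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    fill, eliminated = set(), set()
    for v in order:
        nbrs = [u for u in adj[v] if u not in eliminated]
        # Eliminating v turns its surviving neighborhood into a clique;
        # every edge added in the process is a fill edge.
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b not in adj[a]:
                    adj[a].add(b); adj[b].add(a)
                    fill.add((min(a, b), max(a, b)))
        eliminated.add(v)
    return fill

# 4-cycle, assumed example: eliminating around the cycle creates one fill edge (1,3)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(fill_in(4, edges, [0, 1, 2, 3]))
```

An ordering is minimal precisely when no ordering's fill set is a proper subset of the one returned here; fill-reducing heuristics supply good starting points for the refinement described in the abstract.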

  1. Analysis of 285 cases with infratentorial lesions operated on via a minimally invasive keyhole approach of craniotomy

    Institute of Scientific and Technical Information of China (English)

    周明卫; 傅震; 朱风仪; 赵春生; 曹胜武; 骆慧; 刘宁

    2015-01-01

    Objective: To summarize the clinical outcomes of the minimally invasive keyhole approach of craniotomy for infratentorial lesions. Methods: The minimally invasive keyhole craniotomy was performed under the microscope and neuroendoscope in 285 cases with infratentorial lesions. The skin incision was 3-5 cm in length and the bone flap was 1-3 cm in diameter. The midline keyhole approach was used in 4 cases with cerebellar hemisphere lesions, and the suboccipital retrosigmoid keyhole approach was used in 281 cases with lesions in the cerebellopontine angle area. Results: Tumor resection was performed successfully in 152 cases, of which 96 were acoustic neurinoma, 23 meningioma, 17 cholesteatoma, and 12 trigeminal neurinoma (total resection in 8 cases and subtotal resection in 4 cases due to extension into the middle cranial fossa). Microvascular decompression was completed successfully in 129 cases, resection of deformed vessels in 3 cases, and resection of a giant arachnoid cyst in 1 case. Conclusion: Assisted by the microscope and endoscope, the minimally invasive keyhole approach provides effective operating space in infratentorial surgery, with the advantages of less trauma, fewer complications, and faster recovery; it can be applied to surgery for lesions of the cerebellum and the cerebellopontine angle area.

  2. Power Minimization techniques for Networked Data Centers.

    Energy Technology Data Exchange (ETDEWEB)

    Low, Steven; Tang, Kevin

    2011-09-28

    Our objective is to develop a mathematical model to optimize energy consumption at multiple levels in networked data centers, and to develop abstract algorithms that optimize not only individual servers but also coordinate the energy consumption of clusters of servers within a data center and across geographically distributed data centers, to minimize the overall energy cost and consumption of brown energy of an enterprise. In this project, we have formulated a variety of optimization models, some stochastic, others deterministic, and have obtained a variety of qualitative results on the structural properties, robustness, and scalability of the optimal policies. We have also systematically derived from these models decentralized algorithms to optimize energy efficiency, and analyzed their optimality and stability properties. Finally, we have conducted preliminary numerical simulations to illustrate the behavior of these algorithms. We draw the following conclusions. First, there is a substantial opportunity to minimize both the amount and the cost of electricity consumption in a network of data centers, by exploiting the fact that traffic load, electricity cost, and availability of renewable generation fluctuate over time and across geographical locations. Judiciously matching these stochastic processes can optimize the tradeoff between brown energy consumption, electricity cost, and response time. Second, given the stochastic nature of these three processes, real-time dynamic feedback should form the core of any optimization strategy. The key is to develop decentralized algorithms that can be implemented at different parts of the network as simple, local algorithms that coordinate through asynchronous message passing.

  3. Power Minimization techniques for Networked Data Centers

    International Nuclear Information System (INIS)

    Our objective is to develop a mathematical model to optimize energy consumption at multiple levels in networked data centers, and to develop abstract algorithms that optimize not only individual servers but also coordinate the energy consumption of clusters of servers within a data center and across geographically distributed data centers, to minimize the overall energy cost and consumption of brown energy of an enterprise. In this project, we have formulated a variety of optimization models, some stochastic, others deterministic, and have obtained a variety of qualitative results on the structural properties, robustness, and scalability of the optimal policies. We have also systematically derived from these models decentralized algorithms to optimize energy efficiency, and analyzed their optimality and stability properties. Finally, we have conducted preliminary numerical simulations to illustrate the behavior of these algorithms. We draw the following conclusions. First, there is a substantial opportunity to minimize both the amount and the cost of electricity consumption in a network of data centers, by exploiting the fact that traffic load, electricity cost, and availability of renewable generation fluctuate over time and across geographical locations. Judiciously matching these stochastic processes can optimize the tradeoff between brown energy consumption, electricity cost, and response time. Second, given the stochastic nature of these three processes, real-time dynamic feedback should form the core of any optimization strategy. The key is to develop decentralized algorithms that can be implemented at different parts of the network as simple, local algorithms that coordinate through asynchronous message passing.

  4. Hazardous waste minimization tracking system

    International Nuclear Information System (INIS)

    Under RCRA sections 3002(b) and 3005(h), hazardous waste generators and owners/operators of treatment, storage, and disposal facilities (TSDFs) are required to certify that they have a program in place to reduce the volume or quantity and toxicity of hazardous waste to the degree determined to be economically practicable. In many cases, there are environmental as well as economic benefits for agencies that pursue pollution prevention options. Several state governments have already enacted waste minimization legislation (e.g., the Massachusetts Toxic Use Reduction Act of 1989, and the Oregon Toxic Use Reduction Act and Hazardous Waste Reduction Act of July 2, 1989). About twenty-six other states have established legislation that will mandate some type of waste minimization program and/or facility planning. The need to address the HAZMIN (Hazardous Waste Minimization) Program at government agencies and private industries has prompted us to identify the importance of managing the HAZMIN Program and tracking various aspects of the program, as well as the progress made in this area. "WASTE" is a tracking system which can be used and modified to maintain the information related to a Hazardous Waste Minimization Program in a manageable fashion. This program maintains, modifies, and retrieves information related to hazardous waste minimization and recycling, and provides automated report-generating capabilities. It has a built-in menu, which can be printed either in part or in full. There are instructions on preparing the Annual Waste Report and the Annual Recycling Report. The program is very user friendly. It is available on 3.5-inch or 5.25-inch floppy disks. A computer with 640K of memory is required.

  5. Electrocardiogram (ECG) pattern modeling and recognition via deterministic learning

    Institute of Scientific and Technical Information of China (English)

    Xunde DONG; Cong WANG; Junmin HU; Shanxing OU

    2014-01-01

    A method for electrocardiogram (ECG) pattern modeling and recognition via deterministic learning theory is presented in this paper. Instead of recognizing ECG signals beat-to-beat, each ECG signal, which contains a number of heartbeats, is recognized as a whole. The method is based entirely on the temporal features (i.e., the dynamics) of ECG patterns, which contain complete information about the ECG patterns. A dynamical model capable of generating synthetic ECG signals is employed to demonstrate the method. Based on the dynamical model, the method proceeds in two phases: the identification (training) phase and the recognition (test) phase. In the identification phase, the dynamics of ECG patterns is accurately modeled and expressed as constant RBF neural weights through deterministic learning. In the recognition phase, the modeling results are used for ECG pattern recognition. The main feature of the proposed method is that the dynamics of ECG patterns is accurately modeled and then used for recognition. Experimental studies using the Physikalisch-Technische Bundesanstalt (PTB) database are included to demonstrate the effectiveness of the approach.

  6. Deterministic direct reprogramming of somatic cells to pluripotency.

    Science.gov (United States)

    Rais, Yoach; Zviran, Asaf; Geula, Shay; Gafni, Ohad; Chomsky, Elad; Viukov, Sergey; Mansour, Abed AlFatah; Caspi, Inbal; Krupalnik, Vladislav; Zerbib, Mirie; Maza, Itay; Mor, Nofar; Baran, Dror; Weinberger, Leehee; Jaitin, Diego A; Lara-Astiaso, David; Blecher-Gonen, Ronnie; Shipony, Zohar; Mukamel, Zohar; Hagai, Tzachi; Gilad, Shlomit; Amann-Zalcenstein, Daniela; Tanay, Amos; Amit, Ido; Novershtern, Noa; Hanna, Jacob H

    2013-10-01

    Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells from successfully and synchronously reprogramming remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency promoting conditions, results in deterministic and synchronized iPS cell reprogramming (near 100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex that potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for the dissection of molecular dynamics leading to the establishment of pluripotency at unprecedented flexibility and resolution. PMID:24048479

  7. On the deterministic and stochastic use of hydrologic models

    Science.gov (United States)

    Farmer, William H.; Vogel, Richard M.

    2016-07-01

    Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
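The residual-reintroduction idea described above can be sketched in a few lines; the toy linear "model", noise level, and resampling scheme below are illustrative assumptions, not the paper's watershed models:

```python
import random

def simulate_deterministic(x):
    # Toy calibrated model (assumed): captures the mean response but no noise
    return 0.8 * x + 1.0

random.seed(1)
observed_x = [random.uniform(0, 10) for _ in range(1000)]
observed_y = [simulate_deterministic(x) + random.gauss(0, 1.0) for x in observed_x]

# Residuals left over after "calibration"
residuals = [y - simulate_deterministic(x) for x, y in zip(observed_x, observed_y)]

# Deterministic use ignores the residuals; stochastic use resamples and reintroduces them
det = [simulate_deterministic(x) for x in observed_x]
sto = [simulate_deterministic(x) + random.choice(residuals) for x in observed_x]

def variance(v):
    m = sum(v) / len(v)
    return sum((a - m) ** 2 for a in v) / len(v)

# Deterministic output is under-dispersed relative to observations;
# reintroducing residuals brings the variance back in line.
print(variance(det), variance(sto), variance(observed_y))
```

The deterministic series reproduces the mean behavior but has too little spread, which is exactly the distributional bias the abstract warns about; the stochastic series recovers the observed variance.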

  8. Deterministic chaos in the X-Ray sources

    CERN Document Server

    Grzedzielski, M; Janiuk, A

    2015-01-01

    Hardly any of the observed black hole accretion disks in X-ray binaries and active galaxies shows constant flux. When the local stochastic variations of the disk occur at specific regions where a resonant behaviour takes place, quasi-periodic oscillations (QPOs) appear. If the global structure of the flow and its non-linear hydrodynamics affects the fluctuations, the variability is chaotic in the sense of deterministic chaos. Our aim is to solve the problem of the stochastic versus deterministic nature of black hole binary variability. We use both observational and analytic methods. We use recurrence analysis and study the occurrence of long diagonal lines in the recurrence plot of observed data series, comparing it to surrogate series. We analyze here the data of two X-ray binaries - XTE J1550-564 and GX 339-4 observed by Rossi X-ray Timing Explorer. In these sources, the non-linear variability is expected because of the global conditions (such as the mean accretion rate) leading to the possible instability of an accretion disk.

  9. Deterministic Chaos in the X-ray Sources

    Science.gov (United States)

    Grzedzielski, M.; Sukova, P.; Janiuk, A.

    2015-12-01

    Hardly any of the observed black hole accretion disks in X-ray binaries and active galaxies shows constant flux. When the local stochastic variations of the disk occur at specific regions where a resonant behaviour takes place, quasi-periodic oscillations (QPOs) appear. If the global structure of the flow and its non-linear hydrodynamics affects the fluctuations, the variability is chaotic in the sense of deterministic chaos. Our aim is to solve the problem of the stochastic versus deterministic nature of black hole binary variability. We use both observational and analytic methods. We use recurrence analysis and study the occurrence of long diagonal lines in the recurrence plot of observed data series, comparing it to surrogate series. We analyze here the data of two X-ray binaries - XTE J1550-564 and GX 339-4 observed by Rossi X-ray Timing Explorer. In these sources, the non-linear variability is expected because of the global conditions (such as the mean accretion rate) leading to the possible instability of an accretion disk. The thermal-viscous instability and fluctuations around the fixed-point solution occur at a high accretion rate, when the radiation pressure gives the dominant contribution to the stress tensor.
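The recurrence-plot diagnostic used in these two records is easy to sketch: threshold the pairwise distances between states and look for long diagonal lines, which indicate that nearby states evolve similarly, i.e., deterministic dynamics. The threshold and toy signals below are illustrative assumptions, not the paper's data:

```python
import math

def recurrence_plot(series, eps):
    """Binary recurrence matrix: R[i][j] = 1 when states i and j are closer than eps."""
    n = len(series)
    return [[1 if abs(series[i] - series[j]) < eps else 0 for j in range(n)]
            for i in range(n)]

def longest_diagonal(R):
    """Longest off-diagonal run of 1s; long diagonals suggest deterministic dynamics."""
    n, best = len(R), 0
    for k in range(1, n):  # offset of the diagonal (skip the trivial main diagonal)
        run = 0
        for i in range(n - k):
            run = run + 1 if R[i][i + k] else 0
            best = max(best, run)
    return best

# Deterministic (periodic) signal vs. an irregular one, assumed toy examples
periodic = [math.sin(0.5 * i) for i in range(100)]
irregular = [math.sin(0.7 * i * i) for i in range(100)]
p_len = longest_diagonal(recurrence_plot(periodic, 0.1))
i_len = longest_diagonal(recurrence_plot(irregular, 0.1))
print(p_len, i_len)
```

In the papers the same statistic is compared against surrogate series built from the observed light curves; here the periodic signal yields much longer diagonals than the irregular one.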

  10. A local deterministic model of quantum spin measurement

    CERN Document Server

    Palmer, T N

    1995-01-01

    The conventional view, that Einstein was wrong to believe that quantum physics is local and deterministic, is challenged. A parametrised model, Q, for the state vector evolution of spin 1/2 particles during measurement is developed. Q draws on recent work on so-called riddled basins in dynamical systems theory, and is local, deterministic, nonlinear and time asymmetric. Moreover the evolution of the state vector to one of two chaotic attractors (taken to represent observed spin states) is effectively uncomputable. Motivation for this model arises from Penrose's speculations about the nature and role of quantum gravity. Although the evolution of Q's state vector is uncomputable, the probability that the system will evolve to one of the two attractors is computable. These probabilities correspond quantitatively to the statistics of spin 1/2 particles. In an ensemble sense the evolution of the state vector towards an attractor can be described by a diffusive random walk. Bell's theorem and a version of the Bell-...

  12. Bayesian analysis of deterministic and stochastic prisoner's dilemma games

    Directory of Open Access Journals (Sweden)

    Howard Kunreuther

    2009-08-01

    This paper compares the behavior of individuals playing a classic two-person deterministic prisoner's dilemma (PD) game with choice data obtained from repeated interdependent security prisoner's dilemma games with varying probabilities of loss and the ability to learn (or not learn) about the actions of one's counterpart, an area of recent interest in experimental economics. This novel data set, from a series of controlled laboratory experiments, is analyzed using Bayesian hierarchical methods, the first application of such methods in this research domain. We find that individuals are much more likely to be cooperative when payoffs are deterministic than when the outcomes are probabilistic. A key factor explaining this difference is that subjects in a stochastic PD game respond not just to what their counterparts did but also to whether or not they suffered a loss. These findings are interpreted in the context of behavioral theories of commitment, altruism and reciprocity. The work provides a linkage between Bayesian statistics, experimental economics, and consumer psychology.

  13. Quantum secure direct communication and deterministic secure quantum communication

    Institute of Scientific and Technical Information of China (English)

    LONG Gui-lu; DENG Fu-guo; WANG Chuan; LI Xi-han; WEN Kai; WANG Wan-ying

    2007-01-01

    In this review article, we review the recent development of quantum secure direct communication (QSDC) and deterministic secure quantum communication (DSQC), both of which are used to transmit secret messages, including the criteria for QSDC, some interesting QSDC protocols, the DSQC protocols and QSDC networks, etc. The difference between these two branches of quantum communication is that DSQC requires the two parties to exchange at least one bit of classical information for reading out the message in each qubit, while QSDC does not. They are attractive because they are deterministic; in particular, the QSDC protocol is fully quantum mechanical. With sophisticated quantum technology in the future, QSDC may become more and more popular. For ensuring the safety of QSDC with single photons and quantum information sharing of a single qubit in a noisy channel, a quantum privacy amplification protocol has been proposed. It involves very simple CHC operations and reduces the information leakage to a negligibly small level. Moreover, with one-party quantum error correction, a relation has been established between classical linear codes and quantum one-party codes; hence it is convenient to transfer many good classical error correction codes to the quantum world. The one-party quantum error correction codes are especially designed for quantum dense coding and related QSDC protocols based on dense coding.

  15. Deterministic approach to microscopic three-phase traffic theory

    CERN Document Server

    Kerner, B S; Kerner, Boris S.; Klenov, Sergey L.

    2005-01-01

    A deterministic approach to three-phase traffic theory is presented. Two different deterministic microscopic traffic flow models are introduced. In an acceleration time delay model (ATD model), different time delays in driver acceleration, associated with driver behavior in various local driving situations, are explicitly incorporated into the model. Vehicle acceleration depends on the local traffic situation, i.e., whether a driver is within the free flow, synchronized flow, or wide moving jam traffic phase. In a speed adaptation model (SA model), driver time delays are simulated as a model effect: rather than driver acceleration, vehicle speed adaptation occurs with different time delays depending on which of the three traffic phases the vehicle is in. It is found that the ATD and SA models show spatiotemporal congested traffic patterns that are consistent with empirical results. It is shown that, in accordance with empirical results, in the ATD and SA models the onset of congestion in free flow at a...

  16. Pest persistence and eradication conditions in a deterministic model for sterile insect release.

    Science.gov (United States)

    Gordillo, Luis F

    2015-01-01

    The release of sterile insects is an environmentally friendly pest control method used in integrated pest management programmes. Difference or differential equations based on Knipling's model often provide satisfactory qualitative descriptions of pest populations subject to sterile release at relatively high densities with large mating encounter rates, but fail otherwise. In this paper, I derive and numerically explore deterministic population models that include sterile release together with scarce mating encounters in the particular case of species with long lifespans and multiple matings. The differential equations account separately for the effects of mating failure due to sterile male release and the frequency of mating encounters. When insects' spatial spread is incorporated through diffusion terms, computations reveal the possibility of steady pest persistence in finite-size patches. In the presence of density-dependent regulation, it is observed that sterile release might contribute to inducing sudden suppression of the pest population. PMID:25105593
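A Knipling-style model of the kind referred to above can be sketched by scaling the birth rate with the fraction of fertile matings, N/(N+S), where S is the density of released sterile males; the coefficients and functional form below are illustrative assumptions, not the paper's equations:

```python
def simulate(N0, S, r=0.5, m=0.2, dt=0.01, steps=10000):
    """Euler integration of a Knipling-style toy model (assumed form):
    dN/dt = r*N*(N/(N+S)) - m*N  --  births diluted by sterile males S."""
    N = N0
    for _ in range(steps):
        dN = r * N * (N / (N + S)) - m * N
        N = max(N + dN * dt, 0.0)
    return N

# Small sterile release: pest persists and grows; large release: eradication
print(simulate(100.0, S=10.0) > 100.0, simulate(100.0, S=500.0) < 1e-3)
```

The model has an unstable threshold at N* = mS/(r − m): populations above it escape control while populations below it collapse, which is the eradication-versus-persistence dichotomy the abstract analyzes (the paper additionally treats scarce mating encounters, diffusion, and density dependence).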

  17. Evaluating consistency of deterministic streamline tractography in non-linearly warped DTI data

    CERN Document Server

    Adluru, Nagesh; Tromp, Do P M; Davidson, Richard J; Zhang, Hui; Alexander, Andrew L

    2016-01-01

    Tractography is typically performed for each subject using the diffusion tensor imaging (DTI) data in its native subject space rather than in some space common to the entire study cohort. Performing tractography on a population average in a normalized space is viewed less favorably at the \\emph{individual} subject level because it requires spatial transformations of the DTI data that involve non-linear warping and reorientation of the tensors. Although the commonly used reorientation strategies such as finite strain and preservation of principal direction are expected to be adequately accurate for voxel based analyses of DTI measures such as fractional anisotropy (FA) and mean diffusivity (MD), the reorientations are not always exact except in the case of rigid transformations. Small imperfections in reorientation at the individual voxel level accumulate and could potentially affect the tractography results adversely. This study aims to evaluate and compare deterministic white matter fiber t...

  18. Deterministic Partial Differential Equation Model for Dose Calculation in Electron Radiotherapy

    CERN Document Server

    Duclous, Roland; Frank, Martin

    2009-01-01

    Treatment with high energy ionizing radiation is one of the main methods of modern cancer therapy in clinical use. During the last decades, two main approaches to dose calculation were used, Monte Carlo simulations and semi-empirical models based on Fermi-Eyges theory. A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. Starting from these, we derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free-streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on [BerCharDub] that exactly preserves key properties of the analytical solution on the discrete level. Several numerical results for test cases from the medical physics literature are presented.

  19. Automated Controller Synthesis for non-Deterministic Piecewise-Affine Hybrid Systems

    DEFF Research Database (Denmark)

    Grunnet, Jacob Deleuran

    interferometric measurements. Control of satellite formations presents a whole new set of challenges for spacecraft control systems requiring advances in actuation, sensing, communication, and control algorithms. Specifically having the control system duplicated onto multiple satellites increases the possibility...... of negating faults while the added number of components increases the likelihood of faults occurring. Combined with the fact that once a mission is launched it is prohibitively expensive to repair a failing component there is a good case for designing fault tolerant controllers specifically for satellite...... formations. This thesis uses a hybrid systems model of a satellite formation with possible actuator faults as a motivating example for developing an automated control synthesis method for non-deterministic piecewise-affine hybrid systems (PAHS). The method does not only open an avenue for further research...

  20. Deterministic Safety Analysis for Nuclear Power Plants. Specific Safety Guide (Russian Edition)

    International Nuclear Information System (INIS)

    The objective of this Safety Guide is to provide harmonized guidance to designers, operators, regulators and providers of technical support on deterministic safety analysis for nuclear power plants. It provides information on the utilization of the results of such analysis for safety and reliability improvements. The Safety Guide addresses conservative, best estimate and uncertainty evaluation approaches to deterministic safety analysis and is applicable to current and future designs. Contents: 1. Introduction; 2. Grouping of initiating events and associated transients relating to plant states; 3. Deterministic safety analysis and acceptance criteria; 4. Conservative deterministic safety analysis; 5. Best estimate plus uncertainty analysis; 6. Verification and validation of computer codes; 7. Relation of deterministic safety analysis to engineering aspects of safety and probabilistic safety analysis; 8. Application of deterministic safety analysis; 9. Source term evaluation for operational states and accident conditions; References

  1. Doses from aquatic pathways in CSA-N288.1: deterministic and stochastic predictions compared

    Energy Technology Data Exchange (ETDEWEB)

    Chouhan, S.L.; Davis, P.

    2002-04-01

    The conservatism and uncertainty in the Canadian Standards Association (CSA) model for calculating derived release limits (DRLs) for aquatic emissions of radionuclides from nuclear facilities were investigated. The model was run deterministically using the recommended default values for its parameters, and its predictions were compared with the distributed doses obtained by running the model stochastically. Probability density functions (PDFs) for the model parameters for the stochastic runs were constructed using data reported in the literature and results from experimental work done by AECL. The default values recommended for the CSA model for some parameters were found to be lower than the central values of the PDFs in about half of the cases. Doses (ingestion, groundshine and immersion) calculated as the median of 400 stochastic runs were higher than the deterministic doses predicted using the CSA default values of the parameters for more than half (85 out of the 163) of the cases. Thus, the CSA model is not conservative for calculating DRLs for aquatic radionuclide emissions, as it was intended to be. The output of the stochastic runs was used to determine the uncertainty in the CSA model predictions. The uncertainty in the total dose was high, with the 95% confidence interval exceeding an order of magnitude for all radionuclides. A sensitivity study revealed that total ingestion doses to adults predicted by the CSA model are sensitive primarily to water intake rates, bioaccumulation factors for fish and marine biota, dietary intakes of fish and marine biota, the fraction of consumed food arising from contaminated sources, the irrigation rate, occupancy factors and the sediment solid/liquid distribution coefficient. To improve DRL models, further research into aquatic exposure pathways should concentrate on reducing the uncertainty in these parameters. The PDFs given here can be used by other modellers to test and improve their models and to ensure that DRLs

  2. Developments based on stochastic and determinist methods for studying complex nuclear systems

    International Nuclear Information System (INIS)

    In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first one is the deterministic method, which is applicable in most practical cases but requires approximations. The other method is the Monte Carlo method, which does not make these approximations but which generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained from the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)
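    The variance-reduction principle behind importance-based biasing can be illustrated generically (this sketch has nothing to do with the TRIPOLI-4 or ERANOS codes themselves): a rare-event probability is estimated by sampling from a biased distribution that visits the important region more often, then reweighting each sample by the ratio of the true to the biased density.

```python
# Generic importance-sampling sketch: estimate P(X > 4) for X ~ Exp(1) by
# drawing from the flatter density g(x) = 0.25*exp(-0.25*x) and reweighting
# each hit by f(x)/g(x). The exact answer is exp(-4). Parameters are
# illustrative choices, not anything from the cited thesis.
import math
import random

def biased_estimate(n, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = -math.log(1.0 - rng.random()) / 0.25      # draw from g
        if x > 4:
            total += math.exp(-x) / (0.25 * math.exp(-0.25 * x))  # weight f/g
    return total / n

print(round(biased_estimate(200_000), 4), round(math.exp(-4), 4))
```

    Unbiased sampling from Exp(1) would see an exceedance only ~1.8% of the time; the biased density concentrates samples where they matter, which is the same idea the deterministic importance maps serve in the shielding calculations.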

  3. What is the relationship between the minimally important difference and health state utility values? The case of the SF-6D

    Directory of Open Access Journals (Sweden)

    Brazier John E

    2003-04-01

    Full Text Available Abstract Background The SF-6D is a new single summary preference-based measure of health derived from the SF-36. Empirical work is required to determine the smallest change in SF-6D scores that can be regarded as important and meaningful for health professionals, patients and other stakeholders. Objectives To use anchor-based methods to determine the minimally important difference (MID) for the SF-6D for various datasets. Methods All responders to the original SF-36 questionnaire can be assigned an SF-6D score provided the 11 items used in the SF-6D have been completed. The SF-6D can be regarded as a continuous outcome scored on a 0.29 to 1.00 scale, with 1.00 indicating "full health". Anchor-based methods examine the relationship between a health-related quality of life (HRQoL) measure and an independent measure (or anchor) to elucidate the meaning of a particular degree of change. One anchor-based approach uses an estimate of the MID, the difference in the QoL scale corresponding to a self-reported small but important change on a global scale. Patients were followed for a period of time, then asked, using question 2 of the SF-36 as our global rating scale (which is not part of the SF-6D), if their general health is much better (5), somewhat better (4), stayed the same (3), somewhat worse (2) or much worse (1) compared to the last time they were assessed. We considered patients whose global rating score was 4 or 2 as having experienced some change equivalent to the MID. In patients who reported a worsening of health (global change of 1 or 2) the sign of the change in the SF-6D score was reversed (i.e. multiplied by minus one). The MID was then taken as the mean change on the SF-6D scale of the patients who scored 2 or 4. Results This paper describes the MID for the SF-6D from seven longitudinal studies that had previously used the SF-36. Conclusions From the seven reviewed studies (with nine patient groups the MID for the SF-6D ranged from 0
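    The anchor-based MID computation described in the Methods reduces to a few lines. The patient data below are invented for illustration; only the procedure (select ratings 2 and 4, reverse the sign for worsened patients, average) follows the abstract.

```python
# Sketch of the anchor-based MID calculation: patients rating their change
# 4 ("somewhat better") or 2 ("somewhat worse") are taken as having
# experienced the MID; for worsened patients the sign of the SF-6D change
# is reversed before averaging. Example scores are hypothetical.
def sf6d_mid(baseline, followup, global_rating):
    changes = []
    for b, f, g in zip(baseline, followup, global_rating):
        if g in (2, 4):
            delta = f - b
            if g == 2:              # worsening: multiply by minus one
                delta = -delta
            changes.append(delta)
    return sum(changes) / len(changes)

# Three hypothetical patients, all reporting a small change:
print(sf6d_mid([0.60, 0.70, 0.55], [0.65, 0.62, 0.61], [4, 2, 4]))
```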

  4. Anti-deterministic behavior of discrete systems that are less predictable than noise

    OpenAIRE

    Urbanowicz, Krzysztof; Kantz, Holger; Holyst, Janusz A.

    2004-01-01

    We present a new type of deterministic dynamical behaviour that is less predictable than white noise. We call it anti-deterministic (AD) because time series corresponding to the dynamics of such systems do not generate deterministic lines in Recurrence Plots for small thresholds. We show that although the dynamics is chaotic in the sense of exponential divergence of nearby initial conditions and although some properties of AD data are similar to white noise, the AD dynamics is in fact less pr...

  5. Adaptive Alternating Minimization Algorithms

    CERN Document Server

    Niesen, Urs; Wornell, Gregory

    2007-01-01

    The classical alternating minimization (or projection) algorithm has been successful in the context of solving optimization problems over two variables or equivalently of finding a point in the intersection of two sets. The iterative nature and simplicity of the algorithm has led to its application to many areas such as signal processing, information theory, control, and finance. A general set of sufficient conditions for the convergence and correctness of the algorithm is quite well-known when the underlying problem parameters are fixed. In many practical situations, however, the underlying problem parameters are changing over time, and the use of an adaptive algorithm is more appropriate. In this paper, we study such an adaptive version of the alternating minimization algorithm. As a main result of this paper, we provide a general set of sufficient conditions for the convergence and correctness of the adaptive algorithm. Perhaps surprisingly, these conditions seem to be the minimal ones one would expect in ...
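    The two-set projection case mentioned at the start of the abstract can be shown concretely. This is a minimal sketch with arbitrarily chosen convex sets in the plane (the x-axis and the line y = x, whose intersection is the origin), not anything specific to the paper's adaptive setting.

```python
# Classical alternating projection (POCS) in R^2: repeatedly project onto
# the x-axis and onto the line y = x; the iterates converge to a point in
# the intersection of the two sets (here, the origin). Sets and the
# starting point are illustrative choices.
def project_x_axis(p):
    x, _ = p
    return (x, 0.0)

def project_diagonal(p):
    x, y = p
    m = (x + y) / 2.0
    return (m, m)

def alternating_projection(p, iters=60):
    for _ in range(iters):
        p = project_diagonal(project_x_axis(p))
    return p

print(alternating_projection((1.0, 3.0)))  # converges toward (0, 0)
```

    Each round halves the distance to the intersection in this example, which is the geometric convergence the classical theory guarantees for closed convex sets with nonempty intersection.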

  6. Minimal gaugino mediation

    International Nuclear Information System (INIS)

    We propose minimal gaugino mediation as the simplest known solution to the supersymmetric flavor and CP problems. The framework predicts a very minimal structure for the soft parameters at ultrahigh energies: gaugino masses are unified and non-vanishing whereas all other soft supersymmetry breaking parameters vanish. We show that this boundary condition naturally arises from a small extra dimension and present a complete model which includes a new extra-dimensional solution to the μ problem. We briefly discuss the predicted superpartner spectrum as a function of the two parameters of the model. The commonly ignored renormalization group evolution above the GUT scale is crucial to the viability of minimal gaugino mediation but does not introduce new model dependence

  7. Minimal Gaugino Mediation

    International Nuclear Information System (INIS)

    The authors propose Minimal Gaugino Mediation as the simplest known solution to the supersymmetric flavor and CP problems. The framework predicts a very minimal structure for the soft parameters at ultra-high energies: gaugino masses are unified and non-vanishing whereas all other soft supersymmetry breaking parameters vanish. The authors show that this boundary condition naturally arises from a small extra dimension and present a complete model which includes a new extra-dimensional solution to the mu problem. The authors briefly discuss the predicted superpartner spectrum as a function of the two parameters of the model. The commonly ignored renormalization group evolution above the GUT scale is crucial to the viability of Minimal Gaugino Mediation but does not introduce new model dependence

  8. Minimally Invasive Treatment of a Complex Tibial Plateau Fracture with Diaphyseal Extension in a Patient with Uncontrolled Diabetes Mellitus: A Case Report and Review of Literature.

    Science.gov (United States)

    Rathod, Ashok K; Dhake, Rakesh P; Pawaskar, Aditya

    2016-01-01

    Fractures of the proximal tibia comprise a wide spectrum of injuries with different fracture configurations. The combination of a tibial plateau fracture with diaphyseal extension is a rare injury with sparse literature available on its treatment. Various treatment modalities can be adopted with the aim of achieving a well-aligned, congruous, stable joint, which allows early motion and function. We report a case of a 40-year-old male who sustained a Schatzker type VI fracture of the left tibial plateau with diaphyseal extension. On further investigation, the patient was diagnosed with diabetes mellitus with grossly deranged blood sugar levels. The depressed tibial condyle was manipulated to lift its articular surface using a K-wire as a joystick and stabilized with an additional K-wire. Distal tibial skeletal traction was maintained for three weeks followed by an above knee cast. At eight months of follow-up, X-rays revealed a well-consolidated fracture site, and the patient had attained a reasonably good range of motion with only terminal restriction of squatting. A tibial plateau fracture with diaphyseal extension in a patient with uncontrolled diabetes mellitus is certainly a challenging entity. After an extended search of the literature, we could not find any reports highlighting a similar method of treatment for complex tibial plateau injuries in a patient with uncontrolled diabetes mellitus. PMID:27335711

  9. Minimal Genus One Curves

    OpenAIRE

    Sadek, Mohammad

    2010-01-01

    In this paper we consider genus one equations of degree $n$, namely a (generalised) binary quartic when $n=2$, a ternary cubic when $n=3$, and a pair of quaternary quadrics when $n=4$. A new definition for the minimality of genus one equations of degree $n$ over local fields is introduced. The advantage of this definition is that it does not depend on invariant theory of genus one curves. We prove that this definition coincides with the classical definition of minimality for all $n\\le4$. As a...

  10. Minimally Invasive Lumbar Discectomy

    Medline Plus

    Full Text Available ... called a “minimally invasive microscopic lumbar discectomy.” Now this is a patient who is a 46-year-old ... L-5, S-1. So that’s why she’s having this procedure. The man who is doing the procedure ...

  11. Periodic minimal surfaces

    Science.gov (United States)

    Mackay, Alan L.

    1985-04-01

    A minimal surface is one for which, like a soap film with the same pressure on each side, the mean curvature is zero and, thus, is one where the two principal curvatures are equal and opposite at every point. For every closed circuit in the surface, the area is a minimum. Schwarz [1] and Neovius [2] showed that elements of such surfaces could be put together to give surfaces periodic in three dimensions. These periodic minimal surfaces are geometrical invariants, as are the regular polyhedra, but the former are curved. Minimal surfaces are appropriate for the description of various structures where internal surfaces are prominent and seek to adopt a minimum area or a zero mean curvature subject to their topology; thus they merit more complete numerical characterization. There seem to be at least 18 such surfaces [3], with various symmetries and topologies, related to the crystallographic space groups. Recently, glyceryl mono-oleate (GMO) was shown by Longley and McIntosh [4] to take the shape of the F-surface. The structure postulated is shown here to be in good agreement with an analysis of the fundamental geometry of periodic minimal surfaces.
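    A concrete example of the triply periodic surfaces discussed above is the Schwarz P-surface. A commonly used level-set (nodal) approximation to it, cos x + cos y + cos z = 0, is only an approximation to the true minimal surface, but it makes the three-dimensional periodicity easy to verify numerically.

```python
# Nodal approximation to the triply periodic Schwarz P minimal surface:
# the zero set of f(x,y,z) = cos x + cos y + cos z. This level-set form is
# an approximation, used here only to illustrate the 2*pi periodicity in
# each coordinate direction.
import math

def schwarz_p(x, y, z):
    return math.cos(x) + math.cos(y) + math.cos(z)

period = 2 * math.pi
print(schwarz_p(0.3, 1.2, 2.0))
print(schwarz_p(0.3 + period, 1.2 - period, 2.0 + period))  # same value
```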

  12. Minimal Walking Technicolor

    DEFF Research Database (Denmark)

    Frandsen, Mads Toudal

    2007-01-01

    I report on our construction and analysis of the effective low energy Lagrangian for the Minimal Walking Technicolor (MWT) model. The parameters of the effective Lagrangian are constrained by imposing modified Weinberg sum rules and by imposing a value for the S parameter estimated from the under...

  13. Logarithmic Superconformal Minimal Models

    CERN Document Server

    Pearce, Paul A; Tartaglia, Elena

    2013-01-01

    The higher fusion level logarithmic minimal models LM(P,P';n) have recently been constructed as the diagonal GKO cosets (A_1^{(1)})_k \oplus (A_1^{(1)})_n / (A_1^{(1)})_{k+n} where n>0 is an integer fusion level and k=nP/(P'-P)-2 is a fractional level. For n=1, these are the logarithmic minimal models LM(P,P'). For n>1, we argue that these critical theories are realized on the lattice by n x n fusion of the n=1 models. For n=2, we call them logarithmic superconformal minimal models LSM(p,p') where P=|2p-p'|, P'=p' and p,p' are coprime, and they share the central charges of the rational superconformal minimal models SM(P,P'). Their mathematical description entails the fused planar Temperley-Lieb algebra which is a spin-1 BMW tangle algebra with loop fugacity beta_2=x^2+1+x^{-2} and twist omega=x^4 where x=e^{i(p'-p)\pi/p'}. Examples are superconformal dense polymers LSM(2,3) with c=-5/2, beta_2=0 and superconformal percolation LSM(3,4) with c=0, beta_2=1. We calculate the free energies analytically. By numerical...
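    The quoted loop fugacities are easy to check numerically from the formula beta_2 = x^2 + 1 + x^{-2} with x = e^{i(p'-p)pi/p'}: the two named examples should give beta_2 = 0 and beta_2 = 1.

```python
# Numerical check of the loop fugacity beta_2 = x^2 + 1 + x^{-2},
# x = exp(i*(p'-p)*pi/p'), for the two examples quoted in the abstract:
# superconformal dense polymers LSM(2,3) -> beta_2 = 0, and
# superconformal percolation LSM(3,4) -> beta_2 = 1.
import cmath

def beta2(p, p_prime):
    x = cmath.exp(1j * (p_prime - p) * cmath.pi / p_prime)
    return (x**2 + 1 + x**-2).real

print(beta2(2, 3), beta2(3, 4))
```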

  14. Minimally Invasive Lumbar Discectomy

    Medline Plus

    Full Text Available ... part of the sciatic nerve. You know one good important thing to talk about is the concept of “I ... in the minimally invasive procedures? The most important thing is to have a good trusting relationship between your surgeon and yourself, and ...

  15. Minimally Invasive Lumbar Discectomy

    Medline Plus

    Full Text Available ... Miami’s Baptist Hospital. You’re going to be seeing a procedure called a “minimally invasive microscopic lumbar discectomy.” Now this is a patient, a 46-year-old woman, who ...

  16. Minimally Invasive Stomas

    OpenAIRE

    Hellinger, Michael D.; Al Haddad, Abdullah

    2008-01-01

    Traditionally, stoma creation and end stoma reversal have been performed via a laparotomy incision. However, in many situations, stoma construction may be safely performed in a minimally invasive nature. This may include a trephine, laparoscopic, or combined approach. Furthermore, Hartmann's colostomy reversal, a procedure traditionally associated with substantial morbidity, may also be performed laparoscopically. The authors briefly review patient selection, preparation, and indications, and...

  17. Scattering of electromagnetic light waves from a deterministic anisotropic medium

    Science.gov (United States)

    Li, Jia; Chang, Liping; Wu, Pinghui

    2015-11-01

    Based on the weak scattering theory of electromagnetic waves, analytical expressions are derived for the spectral densities and degrees of polarization of an electromagnetic plane wave scattered from a deterministic anisotropic medium. It is shown that the normalized spectral densities of the scattered field depend strongly on the scattering angle and on the degrees of polarization of the incident plane waves. The degrees of polarization of the scattered field are also subject to variations of these parameters. In addition, the anisotropic effective radii of the dielectric susceptibility can essentially influence both the spectral densities and the degrees of polarization of the scattered field, which depend strongly on the effective radii of the medium. The obtained results may be applicable to determining anisotropic parameters of a medium by quantitatively measuring statistics of a far-zone scattered field.

  18. Quantum dissonance and deterministic quantum computation with a single qubit

    Science.gov (United States)

    Ali, Mazhar

    2014-11-01

    Mixed state quantum computation can perform certain tasks which are believed to be efficiently intractable on a classical computer. For a specific model of mixed state quantum computation, namely, deterministic quantum computation with a single qubit (DQC1), recent investigations suggest that quantum correlations other than entanglement might be responsible for the power of the DQC1 model. However, strictly speaking, the role of entanglement in this model of computation was not entirely clear. We provide conclusive evidence that there are instances where quantum entanglement is not present in any part of this model, yet we still have an advantage over classical computation. This establishes the fact that quantum dissonance (a kind of quantum correlation) present in fully separable (FS) states provides power to the DQC1 model.
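    The DQC1 model itself is concrete enough to sketch with plain linear algebra: one clean qubit prepared in |+> is coupled to a maximally mixed work register by a controlled-U, and the clean qubit's X and Y expectation values then read out the real and imaginary parts of the normalized trace Tr(U)/2^n. The toy unitary below is an invented example.

```python
# Density-matrix sketch of the DQC1 ("one clean qubit") trace-estimation
# circuit. The toy unitary U is an arbitrary illustrative choice.
import numpy as np

def dqc1_expectations(U):
    d = U.shape[0]
    I = np.eye(d)
    plus = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+| after the Hadamard
    rho = np.kron(plus, I / d)                  # clean qubit (x) mixed register
    CU = np.block([[I, np.zeros((d, d))],       # controlled-U on the register
                   [np.zeros((d, d)), U]])
    rho = CU @ rho @ CU.conj().T
    X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]])
    ex = np.trace(np.kron(X, I) @ rho).real     # = Re Tr(U) / d
    ey = np.trace(np.kron(Y, I) @ rho).real     # = Im Tr(U) / d
    return ex, ey

U = np.diag([1, 1j, 1j, 1])                     # toy unitary, Tr(U)/4 = 0.5+0.5j
print(dqc1_expectations(U))
```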

  19. Molecular dynamics with deterministic and stochastic numerical methods

    CERN Document Server

    Leimkuhler, Ben

    2015-01-01

    This book describes the mathematical underpinnings of algorithms used for molecular dynamics simulation, including both deterministic and stochastic numerical methods. Molecular dynamics is one of the most versatile and powerful methods of modern computational science and engineering and is used widely in chemistry, physics, materials science and biology. Understanding the foundations of numerical methods means knowing how to select the best one for a given problem (from the wide range of techniques on offer) and how to create new, efficient methods to address particular challenges as they arise in complex applications.  Aimed at a broad audience, this book presents the basic theory of Hamiltonian mechanics and stochastic differential equations, as well as topics including symplectic numerical methods, the handling of constraints and rigid bodies, the efficient treatment of Langevin dynamics, thermostats to control the molecular ensemble, multiple time-stepping, and the dissipative particle dynamics method...

  20. Linearly Bounded Liars, Adaptive Covering Codes, and Deterministic Random Walks

    CERN Document Server

    Cooper, Joshua N

    2009-01-01

    We analyze a deterministic form of the random walk on the integer line called the {\\em liar machine}, similar to the rotor-router model, finding asymptotically tight pointwise and interval discrepancy bounds versus random walk. This provides an improvement in the best-known winning strategies in the binary symmetric pathological liar game with a linear fraction of responses allowed to be lies. Equivalently, this proves the existence of adaptive binary block covering codes with block length $n$, covering radius $\\leq fn$ for $f\\in(0,1/2)$, and cardinality $O(\\sqrt{\\log \\log n}/(1-2f))$ times the sphere bound $2^n/\\binom{n}{\\leq \\lfloor fn\\rfloor}$.
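    The rotor-router mechanism the abstract compares against can be sketched directly: each site carries a rotor that sends successive chips alternately right and left, derandomizing the fair coin of a simple random walk. With 8 chips started at the origin and rotors initially pointing right (an illustrative initialization), three rounds reproduce the binomial n=3 profile exactly.

```python
# Rotor-router ("deterministic random") walk on the integer line: each site's
# rotor alternates direction per chip processed, so a mass of chips spreads
# like a derandomized simple random walk. Initialization is an assumption.
from collections import defaultdict

def rotor_step(chips, rotors):
    new = defaultdict(int)
    for site, count in chips.items():
        for _ in range(count):
            d = 1 if rotors[site] else -1    # current rotor direction
            rotors[site] = not rotors[site]  # rotate for the next chip
            new[site + d] += 1
    return new

chips = {0: 8}
rotors = defaultdict(lambda: True)           # all rotors initially point right
for _ in range(3):
    chips = rotor_step(chips, rotors)
print(dict(chips))                           # matches binomial: 1, 3, 3, 1
```

    The exact agreement with the binomial profile at small times illustrates the low pointwise discrepancy versus random walk that the paper quantifies.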

  1. Scaling Mobility Patterns and Collective Movements: Deterministic Walks in Lattices

    CERN Document Server

    Han, Xiao-Pu; Wang, Bing-Hong

    2010-01-01

    Scaling mobility patterns have been widely observed for animals. In this paper, we propose a deterministic walk model to understand the scaling mobility patterns, where walkers take the least-action walks on a lattice landscape and prey. Scaling laws in the displacement distribution emerge when the amount of prey resource approaches the critical point. Around the critical point, our model generates ordered collective movements of walkers with a quasi-periodic synchronization of walkers' directions. These results indicate that the co-evolution of walkers' least-action behavior and the landscape could be a potential origin of not only the individual scaling mobility patterns, but also the flocks of animals. Our findings provide a bridge to connect the individual scaling mobility patterns and the ordered collective movements.

  2. Sensitivity analysis in a Lassa fever deterministic mathematical model

    Science.gov (United States)

    Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman

    2015-05-01

    Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate and then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment and education to reduce person-to-person contact.
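    The kind of sensitivity index typically used in such analyses, the normalized forward sensitivity index Gamma_p = (dR0/dp)(p/R0), can be sketched on a toy reproduction number. The Lassa model's actual R0 and parameter values are more involved; everything below is invented for illustration.

```python
# Normalized forward sensitivity index Gamma_p = (dR0/dp) * (p / R0),
# computed by finite differences on a toy R0 = beta / (gamma + mu).
# Parameter names and values are illustrative assumptions only.
def r0(beta, gamma, mu):
    return beta / (gamma + mu)

def sensitivity_index(f, params, name, h=1e-7):
    p = dict(params)
    base = f(**p)
    p[name] += h
    return (f(**p) - base) / h * params[name] / base

params = {"beta": 0.5, "gamma": 0.1, "mu": 0.02}
for name in params:              # beta -> +1; gamma and mu -> negative indices
    print(name, round(sensitivity_index(r0, params, name), 4))
```

    An index of +1 for beta means a 10% rise in beta raises R0 by 10%; the negative indices for gamma and mu rank how strongly recovery and mortality suppress transmission, which is exactly the ranking exercise the abstract describes.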

  3. Integrated deterministic and probabilistic safety assessment: Concepts, challenges, research directions

    International Nuclear Information System (INIS)

    Highlights: • IDPSA contributes to robust risk-informed decision making in nuclear safety. • IDPSA considers time-dependent interactions among component failures and system process. • Also, IDPSA considers time-dependent interactions among control and operator actions. • Computational efficiency by advanced Monte Carlo and meta-modelling simulations. • Efficient post-processing of IDPSA output by clustering and data mining. - Abstract: Integrated deterministic and probabilistic safety assessment (IDPSA) is conceived as a way to analyze the evolution of accident scenarios in complex dynamic systems, like nuclear, aerospace and process ones, accounting for the mutual interactions between the failure and recovery of system components, the evolving physical processes, the control and operator actions, the software and firmware. In spite of the potential offered by IDPSA, several challenges need to be effectively addressed for its development and practical deployment. In this paper, we give an overview of these and discuss the related implications in terms of research perspectives

  4. Equilibrium, fluctuation relations and transport for irreversible deterministic dynamics

    CERN Document Server

    Colangeli, Matteo

    2011-01-01

    In a recent paper [M. Colangeli \\textit{et al.}, J.\\ Stat.\\ Mech.\\ P04021, (2011)] it was argued that the Fluctuation Relation for the phase space contraction rate $\\Lambda$ could suitably be extended to non-reversible dissipative systems. We strengthen here those arguments, providing analytical and numerical evidence based on the properties of a simple irreversible nonequilibrium baker model. We also consider the problem of response, showing that the transport coefficients are not affected by the irreversibility of the microscopic dynamics. In addition, we prove that a form of \\textit{detailed balance}, hence of equilibrium, holds in the space of relevant variables, despite the irreversibility of the phase space dynamics. This corroborates the idea that the same stochastic description, which arises from a projection onto a subspace of relevant coordinates, is compatible with quite different underlying deterministic dynamics. In other words, the details of the microscopic dynamics are largely irrelevant, for ...

  5. Receding Horizon Temporal Logic Control for Finite Deterministic Systems

    CERN Document Server

    Ding, Xuchu; Belta, Calin

    2012-01-01

    This paper considers receding horizon control of finite deterministic systems, which must satisfy a high level, rich specification expressed as a linear temporal logic formula. Under the assumption that time-varying rewards are associated with states of the system and they can be observed in real-time, the control objective is to maximize the collected reward while satisfying the high level task specification. In order to properly react to the changing rewards, a controller synthesis framework inspired by model predictive control is proposed, where the rewards are locally optimized at each time-step over a finite horizon, and the immediate optimal control is applied. By enforcing appropriate constraints, the infinite trajectory produced by the controller is guaranteed to satisfy the desired temporal logic formula. Simulation results demonstrate the effectiveness of the approach.

  6. Location deterministic biosensing from quantum-dot-nanowire assemblies.

    Science.gov (United States)

    Liu, Chao; Kim, Kwanoh; Fan, D L

    2014-08-25

    Semiconductor quantum dots (QDs) with high fluorescent brightness, stability, and tunable sizes, have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled Gold (Au) nanowires. The manipulation mechanisms are quantitatively understood as the synergetic effects of dielectrophoretic (DEP) and alternating current electroosmosis (ACEO) due to AC electric fields. The QD-nanowire hybrid sensors operate uniquely by concentrating bioanalytes to QDs on the tips of nanowires before detection, offering much enhanced efficiency and sensitivity, in addition to the position-predictable rationality. This research could result in advances in QD-based biomedical detection and inspires an innovative approach for fabricating various QD-based nanodevices. PMID:25316926

  7. Location deterministic biosensing from quantum-dot-nanowire assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Chao [Materials Science and Engineering Program, Texas Materials Institute, University of Texas at Austin, Austin, Texas 78712 (United States); Kim, Kwanoh [Department of Mechanical Engineering, University of Texas at Austin, Austin, Texas 78712 (United States); Fan, D. L., E-mail: dfan@austin.utexas.edu [Materials Science and Engineering Program, Texas Materials Institute, University of Texas at Austin, Austin, Texas 78712 (United States); Department of Mechanical Engineering, University of Texas at Austin, Austin, Texas 78712 (United States)

    2014-08-25

    Semiconductor quantum dots (QDs), with high fluorescent brightness, stability, and tunable sizes, have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location-deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled gold (Au) nanowires. The manipulation mechanisms are quantitatively understood as the synergistic effects of dielectrophoresis (DEP) and alternating-current electroosmosis (ACEO) due to AC electric fields. The QD-nanowire hybrid sensors operate uniquely by concentrating bioanalytes onto the QDs on the tips of nanowires before detection, offering much enhanced efficiency and sensitivity in addition to position predictability. This research could result in advances in QD-based biomedical detection and inspires an innovative approach for fabricating various QD-based nanodevices.

  8. Location deterministic biosensing from quantum-dot-nanowire assemblies

    International Nuclear Information System (INIS)

    Semiconductor quantum dots (QDs), with high fluorescent brightness, stability, and tunable sizes, have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location-deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled gold (Au) nanowires. The manipulation mechanisms are quantitatively understood as the synergistic effects of dielectrophoresis (DEP) and alternating-current electroosmosis (ACEO) due to AC electric fields. The QD-nanowire hybrid sensors operate uniquely by concentrating bioanalytes onto the QDs on the tips of nanowires before detection, offering much enhanced efficiency and sensitivity in addition to position predictability. This research could result in advances in QD-based biomedical detection and inspires an innovative approach for fabricating various QD-based nanodevices.

  9. Deterministic simulation of thermal neutron radiography and tomography

    Science.gov (United States)

    Pal Chowdhury, Rajarshi; Liu, Xin

    2016-05-01

    In recent years, thermal neutron radiography and tomography have gained much attention as nondestructive testing methods. However, their application is hindered by technical complexity, radiation shielding requirements, and time-consuming data collection. Monte Carlo simulations have previously been developed to improve the capabilities of neutron imaging facilities. In this paper, a new deterministic simulation approach is proposed and demonstrated that computes neutron radiographs numerically using a ray-tracing algorithm. This approach makes the simulation of neutron radiographs much faster than the previously used stochastic (i.e., Monte Carlo) methods. The major difficulty in simulating neutron radiography and tomography is finding a suitable scatter model; here, an analytic scatter model is proposed and validated against a Monte Carlo simulation.
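
    The core of such a ray-tracing simulation is the Beer-Lambert line integral of the attenuation coefficient along each detector ray, neglecting scatter. A minimal sketch under a simplified, hypothetical geometry (parallel rays traversing a 2-D voxel grid column-wise):

```python
import numpy as np

def radiograph(mu, dx):
    """Deterministic parallel-beam radiograph: for each detector pixel,
    integrate the linear attenuation coefficient along its ray (axis 0 of
    the voxel grid) and apply Beer-Lambert. Only uncollided neutrons are
    modelled, so scatter must be added by a separate model."""
    path_integral = mu.sum(axis=0) * dx   # line integral: mu [1/cm] times path [cm]
    return np.exp(-path_integral)         # transmitted fraction per pixel

# hypothetical object: a 2 cm absorber (mu = 3 /cm) in front of the middle pixel
mu = np.zeros((100, 3))
mu[40:60, 1] = 3.0
print(radiograph(mu, dx=0.1))  # middle pixel attenuated by exp(-6)
```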

  10. Deterministic VLSI Block Placement Algorithm Using Less Flexibility First Principle

    Institute of Scientific and Technical Information of China (English)

    DONG SheQin (董社勤); HONG XianLong (洪先龙); WU YuLiang (吴有亮); GU Jun (顾钧)

    2003-01-01

    In this paper, a simple yet effective deterministic algorithm for solving the VLSI block placement problem is proposed, considering packing area and interconnect wiring simultaneously. The algorithm is based on a principle inspired by how professionals of antiquity solved similar packing problems. Using this so-called Less Flexibility First principle, the algorithm greedily packs the block with the least packing flexibility, in terms of shape and interconnect requirements, into the empty space with the least packing flexibility. Experimental results demonstrate that the algorithm, though simple, is quite effective in solving the problem. The same philosophy could also be used to design efficient heuristics for other hard problems, such as placement with preplaced modules or placement with L/T-shaped modules.

  11. The advantages of minimally invasive dentistry.

    Science.gov (United States)

    Christensen, Gordon J

    2005-11-01

    Minimally invasive dentistry, in cases in which it is appropriate, is a concept that preserves dentitions and supporting structures. In this column, I have discussed several examples of minimally invasive dental techniques. This type of dentistry is gratifying for dentists and appreciated by patients. If more dentists would practice it, the dental profession could enhance the public's perception of its honesty and increase its professionalism as well.

  12. Minimal Braid in Applied Symbolic Dynamics

    Institute of Scientific and Technical Information of China (English)

    张成; 张亚刚; 彭守礼

    2003-01-01

    Based on the minimal braid assumption, three-dimensional periodic flows of a dynamical system are reconstructed in the case of a unimodal map, and their topological structures are compared with those of the periodic orbits of the Rössler system in phase space through numerical experiment. The numerical results justify the validity of the minimal braid assumption, which provides a suspension from one-dimensional symbolic dynamics in the Poincaré section to the knots of three-dimensional periodic flows.

  13. Deterministic calculations of radiation doses from brachytherapy seeds

    International Nuclear Information System (INIS)

    Brachytherapy is used for treating certain types of cancer by inserting radioactive sources into tumours. CDTN/CNEN is developing brachytherapy seeds to be used mainly in prostate cancer treatment. Dose calculations play a very significant role in the characterization of the developed seeds. The current state of the art in computational dosimetry relies on Monte Carlo methods using, for instance, MCNP codes. However, deterministic calculations have some advantages, such as short computation times. This paper presents software developed to calculate doses in a two-dimensional space surrounding the seed, using a deterministic algorithm. The analysed seeds consist of capsules similar to the commercially available IMC6711 (OncoSeed). The exposure rates and absorbed doses are computed using the Sievert integral and the Meisberger third-order polynomial, respectively. The software also allows isodose visualization on a plane. The user can choose between four radionuclides (192Ir, 198Au, 137Cs and 60Co) and must enter as input data: the exposure rate constant; the source activity; the active length of the source; the number of segments into which the source will be divided; the total source length; the source diameter; and the actual and effective source thickness. The computed results were benchmarked against results from the literature, and the software will be used to support the characterization of the source being developed at CDTN. The software was implemented using Borland Delphi in a Windows environment and is an alternative to Monte Carlo based codes. (author)
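
    For a line source, the Sievert integral mentioned above evaluates the exposure rate by integrating the filtered contributions of the source segments over the subtended angle. A minimal numerical sketch (midpoint rule; units and parameters are illustrative, and the Meisberger tissue-attenuation polynomial applied by the actual software is omitted):

```python
import math

def sievert_exposure(activity, gamma, L, mu, t, h, n=2000):
    """Exposure rate at perpendicular distance h from the centre of a line
    source of active length L seen through a wall of thickness t and
    attenuation coefficient mu: the Sievert integral
        X = (activity*gamma / (L*h)) * Int_{-T}^{+T} exp(-mu*t/cos(theta)) dtheta,
    with T = atan(L / (2 h)), evaluated by the midpoint rule."""
    theta_max = math.atan(L / (2.0 * h))
    dtheta = 2.0 * theta_max / n
    total = sum(math.exp(-mu * t / math.cos(-theta_max + (i + 0.5) * dtheta))
                for i in range(n))
    return activity * gamma / (L * h) * total * dtheta

# sanity check: with no filtration (mu = 0) the integral reduces to 2*atan(L/(2h))
print(sievert_exposure(activity=1.0, gamma=1.0, L=0.3, mu=0.0, t=0.05, h=1.0))
```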

  14. Evaluating deterministic motif significance measures in protein databases

    Directory of Open Access Journals (Sweden)

    Azevedo Paulo J

    2007-12-01

    Background: Assessing the outcome of motif mining algorithms is an essential task, as the number of reported motifs can be very large. Significance measures play a central role in automatically ranking those motifs, and therefore alleviate the analysis work. Spotting the most interesting and relevant motifs thus depends on choosing the right measures. The combined use of several measures may provide more robust results; however, caution has to be taken in order to avoid spurious evaluations. Results: From the set of conducted experiments, it was verified that several of the selected significance measures show very similar behavior in a wide range of situations, therefore providing redundant information. Some measures proved more appropriate for ranking highly conserved motifs, while others suit weakly conserved ones. Support appears to be a very important feature for correct motif ranking. We observed that not all measures are suitable for situations with poorly balanced class information, for instance when positive data are significantly scarcer than negative data. Finally, a visualization scheme was proposed that, when several measures are applied, enables easy identification of high-scoring motifs. Conclusion: In this work we have surveyed and categorized 14 significance measures for pattern evaluation. Their ability to rank three types of deterministic motifs was evaluated. Measures were applied under different testing conditions, and relations between them were identified. This study provides pertinent insights into the choice of the right set of significance measures for evaluating deterministic motifs extracted from protein databases.
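
    As an illustration of the kind of measures surveyed, the following sketch computes three common motif significance measures from a 2x2 match/class contingency table. The counts are toy values, and the paper's full set comprises 14 measures:

```python
def significance_measures(tp, fp, pos, neg):
    """Support, confidence and the chi-square statistic for a motif matching
    `tp` of `pos` positive and `fp` of `neg` negative sequences (a 2x2
    match/class contingency table)."""
    support = tp / pos
    confidence = tp / (tp + fp) if tp + fp else 0.0
    n = pos + neg
    matched = tp + fp
    chi2 = 0.0
    for obs, row, col in [(tp, matched, pos), (fp, matched, neg),
                          (pos - tp, n - matched, pos), (neg - fp, n - matched, neg)]:
        expected = row * col / n   # expected count if match and class were independent
        chi2 += (obs - expected) ** 2 / expected
    return support, confidence, chi2

# toy motif: matches 40 of 50 positive and 5 of 50 negative sequences
print(significance_measures(tp=40, fp=5, pos=50, neg=50))
```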

  15. A hybrid stochastic-deterministic computational model accurately describes spatial dynamics and virus diffusion in HIV-1 growth competition assay.

    Science.gov (United States)

    Immonen, Taina; Gibson, Richard; Leitner, Thomas; Miller, Melanie A; Arts, Eric J; Somersalo, Erkki; Calvetti, Daniela

    2012-11-01

    We present a new hybrid stochastic-deterministic, spatially distributed computational model to simulate growth competition assays on a relatively immobile monolayer of peripheral blood mononuclear cells (PBMCs), commonly used for determining ex vivo fitness of human immunodeficiency virus type-1 (HIV-1). The novel features of our approach include incorporation of viral diffusion through a deterministic diffusion model while simulating cellular dynamics via a stochastic Markov chain model. The model accounts for multiple infections of target cells, CD4-downregulation, and the delay between the infection of a cell and the production of new virus particles. The minimum threshold level of infection induced by a virus inoculum is determined via a series of dilution experiments, and is used to determine the probability of infection of a susceptible cell as a function of local virus density. We illustrate how this model can be used for estimating the distribution of cells infected by either a single virus type or two competing viruses. Our model captures experimentally observed variation in the fitness difference between two virus strains, and suggests a way to minimize variation and dual infection in experiments.

  16. A Kalman-filter bias correction of ozone deterministic, ensemble-averaged, and probabilistic forecasts

    Energy Technology Data Exchange (ETDEWEB)

    Monache, L D; Grell, G A; McKeen, S; Wilczak, J; Pagowski, M O; Peckham, S; Stull, R; McHenry, J; McQueen, J

    2006-03-20

    Kalman filtering (KF) is used to postprocess numerical-model output to estimate systematic errors in surface ozone forecasts. It is implemented with a recursive algorithm that updates its estimate of future ozone-concentration bias by using past forecasts and observations. KF performance is tested for three types of ozone forecasts: deterministic, ensemble-averaged, and probabilistic forecasts. Eight photochemical models were run for 56 days during summer 2004 over northeastern USA and southern Canada as part of the International Consortium for Atmospheric Research on Transport and Transformation New England Air Quality (AQ) Study. The raw and KF-corrected predictions are compared with ozone measurements from the Aerometric Information Retrieval Now data set, which includes roughly 360 surface stations. The completeness of the data set allowed a thorough sensitivity test of key KF parameters. It is found that the KF improves forecasts of ozone-concentration magnitude and the ability to predict rare events, both for deterministic and ensemble-averaged forecasts. It also improves the ability to predict the daily maximum ozone concentration, and reduces the time lag between the forecast and observed maxima. For this case study, KF considerably improves the predictive skill of probabilistic forecasts of ozone concentration greater than thresholds of 10 to 50 ppbv, but it degrades it for thresholds of 70 to 90 ppbv. Moreover, KF considerably reduces probabilistic forecast bias. The significance of KF postprocessing and ensemble-averaging is that they are both effective for real-time AQ forecasting. KF reduces systematic errors, whereas ensemble-averaging reduces random errors. When combined they produce the best overall forecast.
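
    The recursive bias estimator at the heart of such postprocessing can be sketched as a scalar Kalman filter: the systematic error is modelled as a slowly varying hidden state, updated each day from the latest forecast-observation pair. This is a minimal sketch with assumed noise variances q and r, not the study's tuned values:

```python
def kalman_bias_correct(forecasts, observations, q=0.1, r=1.0):
    """Scalar Kalman filter tracking the systematic forecast bias: the bias is
    a persistent hidden state with process variance q, observed each day
    through the forecast error with observation variance r. Returns the
    bias-corrected forecasts (today's forecast minus yesterday's bias estimate)."""
    x, p = 0.0, 1.0                     # bias estimate and its error variance
    corrected = []
    for f, o in zip(forecasts, observations):
        corrected.append(f - x)         # correct before seeing today's observation
        p += q                          # predict: bias assumed persistent
        k = p / (p + r)                 # Kalman gain
        x += k * ((f - o) - x)          # update with the newly observed error
        p *= 1.0 - k
    return corrected

# hypothetical ozone forecasts (ppbv) with a constant +2 ppbv bias
fc = [52.0, 51.0, 53.0, 52.0]
ob = [50.0, 49.0, 51.0, 50.0]
print(kalman_bias_correct(fc, ob))  # corrections approach the true bias of 2
```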

  17. Next-to-Minimal SOFTSUSY

    CERN Document Server

    Allanach, B C; Tunstall, Lewis C; Voigt, A; Williams, A G

    2013-01-01

    We describe an extension to the SOFTSUSY program that provides for the calculation of the sparticle spectrum in the Next-to-Minimal Supersymmetric Standard Model (NMSSM), where a chiral superfield that is a singlet of the Standard Model gauge group is added to the Minimal Supersymmetric Standard Model (MSSM) fields. Often, a $\mathbb{Z}_{3}$ symmetry is imposed upon the model. SOFTSUSY can calculate the spectrum in this case as well as in the case where general $\mathbb{Z}_{3}$-violating terms are added to the soft supersymmetry breaking terms and the superpotential. The user provides a theoretical boundary condition for the couplings and mass terms of the singlet. Radiative electroweak symmetry breaking data along with electroweak and CKM matrix data are used as weak-scale boundary conditions. The renormalisation group equations are solved numerically between the weak scale and a high energy scale using a nested iterative algorithm. This paper se...

  18. Similarity matrix analysis and divergence measures for statistical detection of unknown deterministic signals hidden in additive noise

    Science.gov (United States)

    Le Bot, O.; Mars, J. I.; Gervaise, C.

    2015-10-01

    This Letter proposes an algorithm to detect an unknown deterministic signal hidden in additive white Gaussian noise. The detector is based on recurrence analysis. It compares the distribution of the similarity matrix coefficients of the measured signal with an analytic expression of the distribution expected in the noise-only case. This comparison is achieved using divergence measures. Performance analysis based on the receiver operating characteristics shows that the proposed detector outperforms the energy detector, giving a probability of detection 10% to 50% higher, and has a similar performance to that of a sub-optimal filter detector.
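
    The detection scheme can be sketched as follows: build the similarity (recurrence) matrix of the measured series, compare its empirical coefficient distribution with the analytic noise-only distribution, and declare a detection when a divergence measure exceeds a threshold. This toy version uses a binary recurrence rate and a Bernoulli Kullback-Leibler divergence, which are simplifications of the Letter's construction; eps, sigma and the threshold are illustrative:

```python
import math
import random

def recurrence_rate(x, eps):
    """Fraction of off-diagonal similarity-matrix entries S_ij = 1{|x_i - x_j| < eps}."""
    n, hits = len(x), 0
    for i in range(n):
        for j in range(i + 1, n):
            hits += abs(x[i] - x[j]) < eps
    return hits / (n * (n - 1) / 2)

def detect(x, eps, sigma, thresh):
    """Declare a detection when the Kullback-Leibler divergence between the
    empirical recurrence rate and the noise-only value p0 = P(|N1 - N2| < eps),
    N_i ~ N(0, sigma^2), exceeds `thresh`. Since N1 - N2 ~ N(0, 2 sigma^2),
    p0 = erf(eps / (2 sigma))."""
    p0 = math.erf(eps / (2.0 * sigma))
    p = recurrence_rate(x, eps)
    kl = p * math.log(p / p0) + (1 - p) * math.log((1 - p) / (1 - p0))
    return kl > thresh

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(200)]
signal = [5 * math.sin(0.1 * i) + random.gauss(0, 1) for i in range(200)]
print(detect(noise, eps=0.5, sigma=1.0, thresh=0.02),
      detect(signal, eps=0.5, sigma=1.0, thresh=0.02))
```

    In practice, the threshold would be set from a target false-alarm probability, as in the receiver-operating-characteristic analysis of the Letter.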

  19. Deterministic and Stochastic Analysis of a Prey-Dependent Predator-Prey System

    Science.gov (United States)

    Maiti, Alakes; Samanta, G. P.

    2005-01-01

    This paper reports on studies of the deterministic and stochastic behaviours of a predator-prey system with prey-dependent response function. The first part of the paper deals with the deterministic analysis of uniform boundedness, permanence, stability and bifurcation. In the second part the reproductive and mortality factors of the prey and…

  20. Wetting and Minimal Surfaces

    CERN Document Server

    Bachas, C; Wiese, K J; Bachas, Constantin; Doussal, Pierre Le; Wiese, Kay Joerg

    2006-01-01

    We study minimal surfaces which arise in wetting and capillarity phenomena. Using conformal coordinates, we reduce the problem to a set of coupled boundary equations for the contact line of the fluid surface, and then derive simple diagrammatic rules to calculate the non-linear corrections to the Joanny-de Gennes energy. We argue that perturbation theory is quasi-local, i.e. that all geometric length scales of the fluid container decouple from the short-wavelength deformations of the contact line. This is illustrated by a calculation of the linearized interaction between contact lines on two opposite parallel walls. We present a simple algorithm to compute the minimal surface and its energy based on these ideas. We also point out the intriguing singularities that arise in the Legendre transformation from the pure Dirichlet to the mixed Dirichlet-Neumann problem.

  1. Minimal triangulations of simplotopes

    CERN Document Server

    Seacrest, Tyler

    2009-01-01

    We derive lower bounds for the size of simplicial covers of simplotopes, which are products of simplices. These also serve as lower bounds for triangulations of such polytopes, including triangulations with interior vertices. We establish that a minimal triangulation of a product of two simplices is given by a vertex triangulation, i.e., one without interior vertices. For products of more than two simplices, we produce bounds for products of segments and triangles. Our analysis yields linear programs that arise from considerations of covering exterior faces and exploiting the product structure of these polytopes. Aside from cubes, these are the first known lower bounds for triangulations of simplotopes with three or more factors. We also construct a minimal triangulation for the product of a triangle and a square, and compare it to our lower bound.

  2. Minimal Composite Inflation

    DEFF Research Database (Denmark)

    Channuie, Phongpichit; Jark Joergensen, Jakob; Sannino, Francesco

    2011-01-01

    We investigate models in which the inflaton emerges as a composite field of a four-dimensional, strongly interacting and nonsupersymmetric gauge theory featuring purely fermionic matter. We show that it is possible to obtain successful inflation via non-minimal coupling to gravity, and that the underlying dynamics is preferred to be near conformal. We discover that the compositeness scale of inflation is of the order of the grand unified energy scale.

  3. The Minimal R$\

    CERN Document Server

    Cai, Yi

    2016-01-01

    Incorporating neutrino mass generation and a dark matter candidate in a unified model has always been intriguing. We present the minimal model to realize the dual-task procedure based on the one-loop ultraviolet completion of the Weinberg operator, in the framework of minimal dark matter and radiative neutrino mass generation. In addition to the Standard Model particles, the model consists of a real scalar quintuplet, a pair of vector-like quadruplet fermions and a fermionic quintuplet. The neutral component of the fermionic quintuplet serves as a good dark matter candidate which can be tested by the future direct and indirect detection experiments. The constraints from flavor physics and electroweak-scale naturalness are also discussed.

  4. Minimal Modification to Tri-bimaximal Mixing

    CERN Document Server

    He, Xiao-Gang

    2011-01-01

    We explore some ways of minimally modifying the neutrino mixing matrix from tri-bimaximal, characterized by introducing at most one mixing angle and a CP-violating phase, thus extending our earlier work. One minimal modification, motivated to some extent by group-theoretic considerations, is a simple case with the elements $V_{\alpha 2}$ of the second column in the mixing matrix equal to $1/\sqrt{3}$. Modifications keeping one of the columns or one of the rows unchanged from tri-bimaximal mixing all belong to this class of minimal modification. Some of the cases have interesting experimentally testable consequences. In particular, the T2K collaboration has recently reported indications of a non-zero $\theta_{13}$. For the cases we consider, if we impose the T2K result as stated, the CP-violating phase angle $\delta$ is sharply constrained.

  5. Minimally Invasive Thoracic Surgery

    OpenAIRE

    McFadden, P. Michael

    2000-01-01

    To reduce the risk, trauma, and expense of intrathoracic surgical treatments, minimally invasive procedures performed with the assistance of fiberoptic video technology have been developed for thoracic and bronchial surgeries. The surgical treatment of nearly every intrathoracic condition can benefit from a video-assisted approach performed through a few small incisions. Video-assisted thoracoscopic and rigid-bronchoscopic surgery have improved the results of thoracic procedures by decreasing...

  6. Test Time Minimization for Hybrid BIST of Core-Based Systems

    Institute of Scientific and Technical Information of China (English)

    Gert Jervan; Petru Eles; Zebo Peng; Raimund Ubar; Maksim Jenihhin

    2006-01-01

    This paper presents a solution to the test time minimization problem for core-based systems. We assume a hybrid BIST approach, where a test set is assembled, for each core, from pseudorandom test patterns that are generated online and deterministic test patterns that are generated off-line and stored in the system. We propose an iterative algorithm to find the optimal combination of pseudorandom and deterministic test sets of the whole system, consisting of multiple cores, under given memory constraints, so that the total test time is minimized. Our approach employs a fast estimation methodology in order to avoid exhaustive search and to speed up the calculation process. Experimental results have shown the efficiency of the algorithm in finding near-optimal solutions.
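
    The optimization problem can be illustrated in miniature: each core offers several trade-off points between on-line pseudorandom test length and stored deterministic patterns, and the task is to pick one point per core so that the stored patterns fit in memory and the total test time is minimal. The sketch below solves a toy instance exhaustively; the paper's iterative algorithm with fast estimation replaces this brute force for realistic sizes, and the cost model here is hypothetical:

```python
from itertools import product

def minimize_test_time(cores, mem_limit):
    """Pick one (pseudorandom_cycles, stored_patterns) trade-off point per core
    so that the total number of stored deterministic patterns fits in memory
    and the total test time (modelled here as pseudorandom cycles plus one
    cycle per stored pattern) is minimal."""
    best = None
    for choice in product(*cores):
        memory = sum(det for _, det in choice)
        if memory > mem_limit:
            continue
        time = sum(pr + det for pr, det in choice)
        if best is None or time < best[0]:
            best = (time, choice)
    return best

# per-core alternatives: storing more deterministic patterns shortens the
# pseudorandom phase (hypothetical numbers)
cores = [[(100, 0), (40, 6), (10, 12)],
         [(80, 0), (30, 5), (5, 15)]]
print(minimize_test_time(cores, mem_limit=18))  # -> (57, ((10, 12), (30, 5)))
```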

  7. Performance and Complexity Analysis of Blind FIR Channel Identification Algorithms Based on Deterministic Maximum Likelihood in SIMO Systems

    DEFF Research Database (Denmark)

    De Carvalho, Elisabeth; Omar, Samir; Slock, Dirk

    2013-01-01

    We analyze two algorithms that have been introduced previously for Deterministic Maximum Likelihood (DML) blind estimation of multiple FIR channels. The first one is a modification of the Iterative Quadratic ML (IQML) algorithm. IQML gives biased estimates of the channel and performs poorly at low SNR due to noise-induced bias. The IQML cost function can be "denoised" by eliminating the noise contribution: the resulting algorithm, Denoised IQML (DIQML), gives consistent estimates and outperforms IQML. Furthermore, DIQML is asymptotically globally convergent and hence insensitive to initialization. The second algorithm, Pseudo-Quadratic ML (PQML), improves on DIQML but requires a consistent initialization. We furthermore compare DIQML and PQML to the strategy of alternating minimization w.r.t. symbols and channel for solving DML (AQML). An asymptotic performance analysis, a complexity evaluation and simulation results are also presented. The proposed DIQML and PQML...

  8. Matching allele dynamics and coevolution in a minimal predator-prey replicator model

    Energy Technology Data Exchange (ETDEWEB)

    Sardanyes, Josep [Complex Systems Lab (ICREA-UPF), Barcelona Biomedical Research Park (PRBB-GRIB), Dr. Aiguader 88, 08003 Barcelona (Spain)], E-mail: josep.sardanes@upf.edu; Sole, Ricard V. [Complex Systems Lab (ICREA-UPF), Barcelona Biomedical Research Park (PRBB-GRIB), Dr. Aiguader 88, 08003 Barcelona (Spain); Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501 (United States)

    2008-01-21

    A minimal Lotka-Volterra type predator-prey model describing coevolutionary traits among entities with a strength of interaction influenced by a pair of haploid diallelic loci is studied with a deterministic time continuous model. We show a Hopf bifurcation governing the transition from evolutionary stasis to periodic Red Queen dynamics. If predator genotypes differ in their predation efficiency the more efficient genotype asymptotically achieves lower stationary concentrations.
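
    The deterministic backbone of such a model is a Lotka-Volterra system of ordinary differential equations. A minimal forward-Euler sketch of the classic two-species form (the genotype-structured model of the abstract adds allele dynamics on top of equations of this shape; coefficients are illustrative):

```python
def simulate_lv(x0, y0, a, b, c, d, dt, steps):
    """Forward-Euler integration of the Lotka-Volterra pair
        dx/dt = x (a - b y),   dy/dt = y (-c + d x)
    with prey density x and predator density y."""
    x, y = x0, y0
    traj = [(x, y)]
    for _ in range(steps):
        x, y = x + dt * x * (a - b * y), y + dt * y * (-c + d * x)
        traj.append((x, y))
    return traj

# illustrative coefficients; the coexistence fixed point is (c/d, a/b) = (1, 1)
traj = simulate_lv(1.0, 0.5, a=1.0, b=1.0, c=1.0, d=1.0, dt=0.01, steps=2000)
xs, ys = zip(*traj)
print(min(xs), max(xs))  # prey density oscillates around the fixed point
```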

  9. Matching allele dynamics and coevolution in a minimal predator-prey replicator model

    Science.gov (United States)

    Sardanyés, Josep; Solé, Ricard V.

    2008-01-01

    A minimal Lotka-Volterra type predator-prey model describing coevolutionary traits among entities with a strength of interaction influenced by a pair of haploid diallelic loci is studied with a deterministic time-continuous model. We show a Hopf bifurcation governing the transition from evolutionary stasis to periodic Red Queen dynamics. If predator genotypes differ in their predation efficiency, the more efficient genotype asymptotically achieves lower stationary concentrations.

  10. Aspects of cell calculations in deterministic reactor core analysis

    Energy Technology Data Exchange (ETDEWEB)

    Varvayanni, M. [NCSR ' DEMOKRITOS' , PoB 60228, 15310 Aghia Paraskevi (Greece); Savva, P., E-mail: savvapan@ipta.demokritos.gr [NCSR ' DEMOKRITOS' , PoB 60228, 15310 Aghia Paraskevi (Greece); Catsaros, N. [NCSR ' DEMOKRITOS' , PoB 60228, 15310 Aghia Paraskevi (Greece)

    2011-02-15

    The capability of achieving optimum utilization of deterministic neutronic codes is very important since, although elaborate tools, they are still widely used for nuclear reactor core analyses, due to specific advantages they present compared to Monte Carlo codes. The user of a deterministic neutronic code system has to make some significant physical assumptions if correct results are to be obtained. A decisive first step at which such assumptions are required is the one-dimensional cell calculations, which provide the neutronic properties of the homogenized core cells and collapse the cross sections into user-defined energy groups. One of the most crucial determinations required at this stage, significantly influencing the subsequent three-dimensional calculations of reactivity, concerns the transverse leakages associated with each one-dimensional, user-defined core cell. For the appropriate definition of the transverse leakages, several parameters concerning the core configuration must be taken into account. Moreover, the suitability of the assumptions made for the transverse cell leakages depends on earlier user decisions, such as those made for the partition of the core into homogeneous cells. In the present work, the sensitivity of the calculated core reactivity to the assumed leakages of the individual cells constituting the core is studied, and appropriate assumptions concerning the transverse leakages in the one-dimensional cell calculations are sought. The study also examines the influence of the core size and the presence of a reflector, the effect of the decisions made for the partition of the core into homogeneous cells, and the effect of broadened moderator channels formed within the core (e.g. by removing fuel plates to create space for control rod hosting). Since the study required a large number of conceptual core configurations, experimental data could not be available

  11. Minimally invasive mediastinal surgery.

    Science.gov (United States)

    Melfi, Franca M A; Fanucchi, Olivia; Mussi, Alfredo

    2016-01-01

    In the past, mediastinal surgery was associated with the necessity of a maximum exposure, which was accomplished through various approaches. In the early 1990s, many surgical fields, including thoracic surgery, observed the development of minimally invasive techniques. These included video-assisted thoracic surgery (VATS), which confers clear advantages over an open approach, such as less trauma, short hospital stay, increased cosmetic results and preservation of lung function. However, VATS is associated with several disadvantages. For this reason, it is not routinely performed for resection of mediastinal mass lesions, especially those located in the anterior mediastinum, a tiny and remote space that contains vital structures at risk of injury. Robotic systems can overcome the limits of VATS, offering three-dimensional (3D) vision and wristed instrumentations, and are being increasingly used. With regards to thymectomy for myasthenia gravis (MG), unilateral and bilateral VATS approaches have demonstrated good long-term neurologic results with low complication rates. Nevertheless, some authors still advocate the necessity of maximum exposure, especially when considering the distribution of normal and ectopic thymic tissue. In recent studies, the robotic approach has shown to provide similar neurological outcomes when compared to transsternal and VATS approaches, and is associated with a low morbidity. Importantly, through a unilateral robotic technique, it is possible to dissect and remove at least the same amount of mediastinal fat tissue. Preliminary results on early-stage thymomatous disease indicated that minimally invasive approaches are safe and feasible, with a low rate of pleural recurrence, underlining the necessity of a "no-touch" technique. However, especially for thymomatous disease characterized by an indolent nature, further studies with long follow-up period are necessary in order to assess oncologic and neurologic results through minimally invasive

  12. Minimal E6 unification

    Science.gov (United States)

    Susič, Vasja

    2016-06-01

    A realistic model in the class of renormalizable supersymmetric E6 Grand Unified Theories is constructed. Its matter sector consists of $3\times 27$ representations, while the Higgs sector is $27+\overline{27}+351'+\overline{351'}+78$. An analytic solution for a Standard Model vacuum is found and the Yukawa sector analyzed. It is argued that if one considers the increased predictability due to only two symmetric Yukawa matrices in this model, it can be considered a minimal SUSY E6 model with this type of matter sector. This contribution is based on Ref. [1].

  13. Automated optimum design of wing structures. Deterministic and probabilistic approaches

    Science.gov (United States)

    Rao, S. S.

    1982-01-01

    The automated optimum design of airplane wing structures subjected to multiple behavior constraints is described. The structural mass of the wing is considered the objective function. The maximum stress, wing-tip deflection, root angle of attack, and flutter velocity during the pull-up maneuver (static load), the natural frequencies of the wing structure, and the stresses induced in the wing structure due to landing and gust loads are suitably constrained. Both deterministic and probabilistic approaches are used for finding the stresses induced in the airplane wing structure due to landing and gust loads. A wing design is represented by a uniform beam with a cross section in the form of a hollow symmetric double wedge. The airfoil thickness and chord length are the design variables, and a graphical procedure is used to find the optimum solutions. A supersonic wing design is represented by finite elements. The thicknesses of the skin and the web and the cross-sectional areas of the flanges are the design variables, and nonlinear programming techniques are used to find the optimum solution.

  14. Conversion of dependability deterministic requirements into probabilistic requirements

    International Nuclear Information System (INIS)

    This report concerns the ongoing survey conducted jointly by the DAM/CCE and NRE/SR branches on the inclusion of dependability requirements in control and instrumentation projects. Its purpose is to enable a customer (the prime contractor) to convert into probabilistic terms deterministic dependability requirements expressed in the form ''a maximum permissible number of failures, of maximum duration d, in a period t''. The customer selects a confidence level for each previously defined undesirable event by assigning a maximum probability of occurrence. Using the formulae we propose for two repair policies - constant rate or constant time - these probabilistic requirements can then be transformed into equivalent failure rates. It is shown that the same formula can be used for both policies, provided certain realistic assumptions hold, and that for a constant-time repair policy the correct result can always be obtained. The equivalent failure rates thus determined can be included in the specifications supplied to the contractors, who will then be able to proceed to their previsional justification. (author), 8 refs., 3 annexes
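The conversion described can be illustrated numerically. Assuming failures follow a homogeneous Poisson process (an illustrative assumption; the report's formulae additionally account for repair policy and the maximum failure duration d), the requirement "at most n failures in period t, with a given confidence level" can be inverted for an equivalent failure rate by bisection:

```python
import math

def poisson_cdf(n, mu):
    """P(X <= n) for X ~ Poisson(mu)."""
    return sum(math.exp(-mu) * mu ** k / math.factorial(k) for k in range(n + 1))

def equivalent_failure_rate(n_max, period, confidence):
    """Largest failure rate lam such that at most n_max failures occur in
    `period` with probability >= confidence, assuming a homogeneous Poisson
    process (illustrative assumption). Solved by bisection."""
    lo, hi = 0.0, 1.0
    while poisson_cdf(n_max, hi * period) >= confidence:
        hi *= 2.0  # grow until the requirement is violated
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_max, mid * period) >= confidence:
            lo = mid
        else:
            hi = mid
    return lo

# e.g. "at most 1 failure in 8760 h, with 95% confidence"
lam = equivalent_failure_rate(n_max=1, period=8760.0, confidence=0.95)
```

The returned rate is the boundary value: any smaller failure rate satisfies the probabilized requirement.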

  15. Deterministic transport of particles in a micro-pump

    CERN Document Server

    Beltrame, Philippe; Hänggi, Peter

    2012-01-01

    We study the drift of suspended micro-particles in a viscous liquid pumped back and forth through a periodic lattice of pores (drift ratchet). In order to explain the particle drift observed in such an experiment, we present a one-dimensional deterministic model based on Stokes drag. We show that the stability of particle oscillations is related to their amplitude. Under appropriate conditions, particles may drift, and two transport mechanisms are identified. The first is due to a spatio-temporal synchronization between the fluid and particle motions; as a result, the velocity is locked to the ratio of the spatial period to the temporal period. The direction of the transport may switch when the parameters are tuned. Notably, its emergence is related to a lattice of period-2 orbits but not necessarily to chaotic dynamics. The second mechanism is due to an intermittent bifurcation and leads to a slow transport composed of long-lasting oscillations followed by a relatively short transport to the next pore. ...

  16. Is there a sharp phase transition for deterministic cellular automata

    Energy Technology Data Exchange (ETDEWEB)

    Wootters, W.K. (Santa Fe Inst., NM (USA) Los Alamos National Lab., NM (USA) Williams Coll., Williamstown, MA (USA). Dept. of Physics); Langton, C.G. (Los Alamos National Lab., NM (USA))

    1990-01-01

    Previous work has suggested that there is a kind of phase transition between deterministic automata exhibiting periodic behavior and those exhibiting chaotic behavior. However, unlike the usual phase transitions of physics, this transition takes place over a range of values of the parameter rather than at a specific value. The present paper asks whether the transition can be made sharp, either by taking the limit of an infinitely large rule table, or by changing the parameter in terms of which the space of automata is explored. We find strong evidence that, for the class of automata we consider, the transition does become sharp in the limit of an infinite number of symbols, the size of the neighborhood being held fixed. Our work also suggests an alternative parameter in terms of which it is likely that the transition will become fairly sharp even if one does not increase the number of symbols. In the course of our analysis, we find that mean field theory, which is our main tool, gives surprisingly good predictions of the statistical properties of the class of automata we consider. 18 refs., 6 figs.

  17. Deterministic Polynomial-Time Algorithms for Designing Short DNA Words

    CERN Document Server

    Kao, Ming-Yang; Sun, He; Zhang, Yong

    2012-01-01

    Designing short DNA words is a problem of constructing a set (i.e., code) of n DNA strings (i.e., words) with the minimum length such that the Hamming distance between each pair of words is at least k and the n words satisfy a set of additional constraints. This problem has applications in, e.g., DNA self-assembly and DNA arrays. Previous works include those that extended results from coding theory to obtain bounds on code and word sizes for biologically motivated constraints and those that applied heuristic local searches, genetic algorithms, and randomized algorithms. In particular, Kao, Sanghi, and Schweller (2009) developed polynomial-time randomized algorithms to construct n DNA words of length within a multiplicative constant of the smallest possible word length (e.g., 9 max{log n, k}) that satisfy various sets of constraints with high probability. In this paper, we give deterministic polynomial-time algorithms to construct DNA words based on derandomization techniques. Our algorithms can construct n DN...
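As a toy illustration of the constraint being optimized (not the paper's derandomized algorithms), a naive rejection-sampling construction of n words with pairwise Hamming distance at least k might look like this; all names and parameter values are hypothetical:

```python
import random

def hamming(a, b):
    """Hamming distance between two equal-length words."""
    return sum(x != y for x, y in zip(a, b))

def random_dna_words(n, length, k, seed=0, max_tries=10000):
    """Rejection sampling: draw random words over {A,C,G,T} and keep one only
    if it is at Hamming distance >= k from every word kept so far."""
    rng = random.Random(seed)
    words = []
    tries = 0
    while len(words) < n and tries < max_tries:
        w = "".join(rng.choice("ACGT") for _ in range(length))
        if all(hamming(w, u) >= k for u in words):
            words.append(w)
        tries += 1
    return words

words = random_dna_words(n=8, length=12, k=4)
```

For small n and generous word length this succeeds almost immediately; the point of the paper's algorithms is to achieve word lengths within a constant factor of optimal, deterministically.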

  18. Three-dimensional gravity-driven deterministic lateral displacement

    CERN Document Server

    Du, Siqi

    2016-01-01

    We present a simple solution to enhance the separation ability of deterministic lateral displacement (DLD) systems by expanding the two-dimensional nature of these devices and driving the particles into size-dependent, fully three-dimensional trajectories. Specifically, we drive the particles through an array of long cylindrical posts, such that they not only move in the plane perpendicular to the posts as in traditional two-dimensional DLD systems (in-plane motion), but also along the axial direction of the solid posts (out-of-plane motion). We show that the (projected) in-plane motion of the particles is completely analogous to that observed in 2D-DLD systems. In fact, a theoretical model originally developed for force-driven, two-dimensional DLD systems accurately describes the experimental results. More importantly, we analyze the particles' out-of-plane motion and observe that, for certain orientations of the driving force, significant differences in the out-of-plane displacement depending on particle siz...

  19. Agent-Based Deterministic Modeling of the Bone Marrow Homeostasis.

    Science.gov (United States)

    Kurhekar, Manish; Deshpande, Umesh

    2016-01-01

    Modeling of stem cells not only describes but also predicts how a stem cell's environment can control its fate. The first stem cell populations discovered were hematopoietic stem cells (HSCs). In this paper, we present a deterministic model of bone marrow (which hosts HSCs) that is consistent with several qualitative biological observations. This model incorporates stem cell death (apoptosis) after a certain number of cell divisions and also demonstrates that a single HSC can potentially populate the entire bone marrow. It also demonstrates that a sufficient number of differentiated cells (RBCs, WBCs, etc.) is produced. We prove that our model of bone marrow is biologically consistent and that it overcomes the biological feasibility limitations of previously reported models. The major contribution of our model is the flexibility it allows in choosing model parameters, which permits several different simulations to be carried out in silico without affecting the homeostatic properties of the model. We have also performed an agent-based simulation of the bone marrow model proposed in this paper and include the parameter details and the results obtained from the simulation. The program for the agent-based simulation of the proposed model is available on a publicly accessible website. PMID:27340402

  20. A Modified Deterministic Model for Reverse Supply Chain in Manufacturing

    Directory of Open Access Journals (Sweden)

    R. N. Mahapatra

    2013-01-01

    Full Text Available Technology is becoming pervasive across all facets of our lives today. Technology innovation, leading to the development of new products and the enhancement of features in existing products, is happening at a faster pace than ever. It is becoming difficult for customers to keep up with the deluge of new technology. This trend has resulted in a gross increase in the use of new materials and decreased customers' interest in relatively older products. This paper deals with a novel model in which the stationary demand is fulfilled by remanufactured products along with newly manufactured products. The current model is based on the assumption that the returned items from the customers can be remanufactured at a fixed rate. The remanufactured products are assumed to be as good as the new ones in terms of features, quality, and worth. A methodology is used to calculate the optimum level of the newly manufactured items and the optimum level of the remanufactured products simultaneously. The model is formulated depending on the relationship between different parameters. An interpretive-modelling-based approach has been employed to model the reverse logistics variables typically found in supply chains (SCs). For simplicity of calculation, a deterministic approach is implemented for the proposed model.

  1. Comparison between Monte Carlo method and deterministic method

    International Nuclear Information System (INIS)

    A fast critical assembly consists of a lattice of plates of sodium, plutonium or uranium, resulting in a high inhomogeneity. The inhomogeneity in the lattice should be evaluated carefully to determine the bias factor accurately. Deterministic procedures are generally used for the lattice calculation. To reduce the required calculation time, various one-dimensional lattice models have been developed previously to replace multi-dimensional models. In the present study, calculations are made for a two-dimensional model and results are compared with those obtained with one-dimensional models in terms of the average microscopic cross section of a lattice and diffusion coefficient. Inhomogeneity in a lattice affects the effective cross section and distribution of neutrons in the lattice. The background cross section determined by the method proposed by Tone is used here to calculate the effective cross section, and the neutron distribution is determined by the collision probability method. Several other methods have been proposed to calculate the effective cross section. The present study also applies the continuous energy Monte Carlo method to the calculation. A code based on this method is employed to evaluate several one-dimensional models. (Nogami, K.)

  2. Entrepreneurs, chance, and the deterministic concentration of wealth.

    Science.gov (United States)

    Fargione, Joseph E; Lehman, Clarence; Polasky, Stephen

    2011-01-01

    In many economies, wealth is strikingly concentrated. Entrepreneurs--individuals with ownership in for-profit enterprises--comprise a large portion of the wealthiest individuals, and their behavior may help explain patterns in the national distribution of wealth. Entrepreneurs are less diversified and more heavily invested in their own companies than is commonly assumed in economic models. We present an intentionally simplified individual-based model of wealth generation among entrepreneurs to assess the role of chance and determinism in the distribution of wealth. We demonstrate that chance alone, combined with the deterministic effects of compounding returns, can lead to unlimited concentration of wealth, such that the percentage of all wealth owned by a few entrepreneurs eventually approaches 100%. Specifically, concentration of wealth results when the rate of return on investment varies by entrepreneur and by time. This result is robust to inclusion of realities such as differing skill among entrepreneurs. The most likely overall growth rate of the economy decreases as businesses become less diverse, suggesting that high concentrations of wealth may adversely affect a country's economic growth. We show that a tax on large inherited fortunes, applied to a small portion of the most fortunate in the population, can efficiently arrest the concentration of wealth at intermediate levels. PMID:21814540
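The mechanism described (chance returns plus compounding) is easy to reproduce in a toy simulation; this is a sketch with assumed parameter values, not the authors' exact model:

```python
import random

def simulate_wealth(n=200, years=100, mu=0.05, sigma=0.3, seed=1):
    """Each entrepreneur starts with wealth 1.0 and compounds an independent
    random return every year; luck alone then concentrates wealth.
    Returns the share of total wealth held by the single richest individual."""
    rng = random.Random(seed)
    wealth = [1.0] * n
    for _ in range(years):
        # returns vary by entrepreneur and by time; wealth cannot go negative
        wealth = [w * max(0.0, 1.0 + rng.gauss(mu, sigma)) for w in wealth]
    return max(wealth) / sum(wealth)

share = simulate_wealth()  # share of wealth owned by the luckiest entrepreneur
```

With identical skill and identical expected returns, the top individual's share ends up far above the egalitarian 1/n, purely because returns vary by entrepreneur and by time, as the abstract argues.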

  3. Is there a sharp phase transition for deterministic cellular automata?

    International Nuclear Information System (INIS)

    Previous work has suggested that there is a kind of phase transition between deterministic automata exhibiting periodic behavior and those exhibiting chaotic behavior. However, unlike the usual phase transitions of physics, this transition takes place over a range of values of the parameter rather than at a specific value. The present paper asks whether the transition can be made sharp, either by taking the limit of an infinitely large rule table, or by changing the parameter in terms of which the space of automata is explored. We find strong evidence that, for the class of automata we consider, the transition does become sharp in the limit of an infinite number of symbols, the size of the neighborhood being held fixed. Our work also suggests an alternative parameter in terms of which it is likely that the transition will become fairly sharp even if one does not increase the number of symbols. In the course of our analysis, we find that mean field theory, which is our main tool, gives surprisingly good predictions of the statistical properties of the class of automata we consider. 18 refs., 6 figs

  4. Minimal Higgs inflation

    CERN Document Server

    Maity, Debaprasad

    2016-01-01

    In this paper we propose two simple minimal Higgs inflation scenarios through a simple modification of the Higgs potential, as opposed to the usual non-minimal Higgs-gravity coupling prescription. The modification is done in such a way that it creates a flat plateau for a huge range of field values at the inflationary energy scale $\mu \simeq (\lambda)^{1/4} \alpha$. Assuming a perturbative Higgs quartic coupling, $\lambda \simeq {\cal O}(1)$, the inflation energy scale for the two models turns out to be $\mu \simeq (10^{14}, 10^{15})$ GeV, and the predictions for all the cosmologically relevant quantities, $(n_s, r, dn_s/d\ln k)$, fit extremely well with the observations made by PLANCK. Considering the observed central value of the scalar spectral index, $n_s = 0.968$, our two models predict the efolding numbers $N = (52, 47)$. Within a wide range of viable parameter space, we find that the predicted tensor-to-scalar ratio $r (\leq 10^{-5})$ is far below the current experimental sensitivity to be observed in the near future. The ...

  5. Logarithmic superconformal minimal models

    Science.gov (United States)

    Pearce, Paul A.; Rasmussen, Jørgen; Tartaglia, Elena

    2014-05-01

    The higher fusion level logarithmic minimal models {\cal LM}(P,P';n) have recently been constructed as the diagonal GKO cosets {(A_1^{(1)})_k\oplus (A_1^{(1)})_n}/{(A_1^{(1)})_{k+n}} where n ≥ 1 is an integer fusion level and k = nP/(P' - P) - 2 is a fractional level. For n = 1, these are the well-studied logarithmic minimal models {\cal LM}(P,P')\equiv {\cal LM}(P,P';1). For n ≥ 2, we argue that these critical theories are realized on the lattice by n × n fusion of the n = 1 models. We study the critical fused lattice models {\cal LM}(p,p')_{n\times n} within a lattice approach and focus our study on the n = 2 models. We call these logarithmic superconformal minimal models {\cal LSM}(p,p')\equiv {\cal LM}(P,P';2) where P = |2p - p'|, P' = p' and p, p' are coprime. These models share the central charges c=c^{P,P';2}=\frac {3}{2}\big (1-{2(P'-P)^2}/{P P'}\big ) of the rational superconformal minimal models {\cal SM}(P,P'). Lattice realizations of these theories are constructed by fusing 2 × 2 blocks of the elementary face operators of the n = 1 logarithmic minimal models {\cal LM}(p,p'). Algebraically, this entails the fused planar Temperley-Lieb algebra, which is a spin-1 Birman-Murakami-Wenzl tangle algebra with loop fugacity β² = [x]₃ = x² + 1 + x⁻² and twist ω = x⁴, where x = e^{iλ} and λ = (p' - p)π/p'. The first two members of this n = 2 series are superconformal dense polymers {\cal LSM}(2,3) with c=-\frac {5}{2}, β² = 0 and superconformal percolation {\cal LSM}(3,4) with c = 0, β² = 1. We calculate the bulk and boundary free energies analytically. By numerically studying finite-size conformal spectra on the strip with appropriate boundary conditions, we argue that, in the continuum scaling limit, these lattice models are associated with the logarithmic superconformal models {\cal LM}(P,P';2). For system size N, we propose finitized Kac character formulae of the form q^{-{c^{P,P';2}}/{24}+\Delta ^{P,P';2} _{r
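The quoted central charges can be checked directly from the formula in the abstract, using exact rational arithmetic:

```python
from fractions import Fraction

def central_charge(p, p_prime):
    """c^{P,P';2} = (3/2) * (1 - 2 (P'-P)^2 / (P P')) with P = |2p - p'|,
    P' = p' (formula transcribed from the abstract)."""
    P = abs(2 * p - p_prime)
    Pp = p_prime
    return Fraction(3, 2) * (1 - Fraction(2 * (Pp - P) ** 2, P * Pp))

c_polymers = central_charge(2, 3)     # superconformal dense polymers LSM(2,3)
c_percolation = central_charge(3, 4)  # superconformal percolation LSM(3,4)
```

This reproduces c = -5/2 for LSM(2,3) and c = 0 for LSM(3,4), the two values stated in the abstract.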

  6. Calculating the effective delayed neutron fraction in the Molten Salt Fast Reactor: Analytical, deterministic and Monte Carlo approaches

    International Nuclear Information System (INIS)

    Highlights: • Calculation of effective delayed neutron fraction in circulating-fuel reactors. • Extension of the Monte Carlo SERPENT-2 code for delayed neutron precursor tracking. • Forward and adjoint multi-group diffusion eigenvalue problems in OpenFOAM. • Analytical approach for βeff calculation in simple geometries and flow conditions. • Good agreement among the three proposed approaches in the MSFR test-case. - Abstract: This paper deals with the calculation of the effective delayed neutron fraction (βeff) in circulating-fuel nuclear reactors. The Molten Salt Fast Reactor is adopted as test case for the comparison of the analytical, deterministic and Monte Carlo methods presented. The Monte Carlo code SERPENT-2 has been extended to allow for delayed neutron precursors drift, according to the fuel velocity field. The forward and adjoint eigenvalue multi-group diffusion problems are implemented and solved adopting the multi-physics tool-kit OpenFOAM, by taking into account the convective and turbulent diffusive terms in the precursors balance. These two approaches show good agreement in the whole range of the MSFR operating conditions. An analytical formula for the circulating-to-static conditions βeff correction factor is also derived under simple hypotheses, which explicitly takes into account the spatial dependence of the neutron importance. Its accuracy is assessed against Monte Carlo and deterministic results. The effects of in-core recirculation vortex and turbulent diffusion are finally analysed and discussed

  7. Dens in dente: A minimally invasive nonsurgical approach!

    Science.gov (United States)

    Hegde, Vivek; Morawala, Abdul; Gupta, Abhilasha; Khandwawala, Naqiyaa

    2016-01-01

    Dens invaginatus, also known as dens in dente, is a rare anomaly affecting human dentition. The condition results in invagination of an amelodental structure within the pulp. This case report discusses the current management protocol of dens invaginatus using a minimally invasive and nonsurgical treatment option. As with most conditions, early diagnosis and preventive measures help minimize complications in dens invaginatus cases. PMID:27656073

  8. Linear Superposition of Minimal Surfaces: Generalized Helicoids and Minimal Cones

    OpenAIRE

    Hoppe, Jens

    2016-01-01

    Observing a linear superposition principle, a family of new minimal hypersurfaces in Euclidean space is found; moreover, linear combinations of generalized helicoids are shown to induce new algebraic minimal cones of arbitrarily high degree.

  9. [Deterministic analysis as a tool to investigate the contingency of various components of biocenosis].

    Science.gov (United States)

    Bulgakov, N G; Maksimov, V N

    2005-01-01

    Specific application of deterministic analysis to investigate the contingencies of various components of natural biocenosis was illustrated by the example of fish production and biomass of phyto- and zooplankton. Deterministic analysis confirms the theoretic assumptions on food preferences of herbivorous fish: both silver and bighead carps avoided feeding on cyanobacteria. Being a facultative phytoplankton feeder, silver carp preferred microalgae to zooplankton. Deterministic analysis allowed us to demonstrate the contingency of the mean biomass of phyto- and zooplankton during both the whole fish production cycle and the individual periods. PMID:16004266

  10. Deterministic chaos in government debt dynamics with mechanistic primary balance rules

    CERN Document Server

    Lindgren, Jussi Ilmari

    2011-01-01

    This paper shows that with mechanistic primary budget rules and some simple assumptions on interest rates, the well-known debt dynamics equation transforms into the infamous logistic map. The logistic map has very peculiar and rich nonlinear behaviour, and it can exhibit deterministic chaos in certain parameter regimes. Deterministic chaos implies the butterfly effect, which is qualitatively very important, as it shows that even deterministic budget rules produce unpredictable behaviour of the debt-to-GDP ratio, since chaotic systems are extremely sensitive to initial conditions.
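The butterfly effect the paper relies on is easy to demonstrate for the logistic map itself (the mapping of budget-rule parameters onto the map's parameter r is the paper's contribution; r = 4 below is simply a standard chaotic choice):

```python
def logistic_orbit(x0, r, n):
    """Iterate the logistic map x_{t+1} = r * x_t * (1 - x_t) for n steps."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two orbits whose initial values differ by 1e-4: in the chaotic regime
# (r = 4 here, a standard choice) they decorrelate within a few dozen steps.
a = logistic_orbit(0.2000, 4.0, 50)
b = logistic_orbit(0.2001, 4.0, 50)
divergence = max(abs(x - y) for x, y in zip(a[30:], b[30:]))
```

An initial uncertainty of 1e-4 in the debt ratio thus grows to order one, which is exactly why chaotic budget-rule dynamics defeat long-run forecasting.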

  11. Holographic dark energy from minimal supergravity

    OpenAIRE

    Landim, Ricardo C. G.

    2015-01-01

    We embed models of holographic dark energy coupled to dark matter in minimal supergravity plus matter, with one chiral superfield. We analyze two cases. The first one has the Hubble radius as the infrared cutoff and the interaction between the two fluids is proportional to the energy density of the dark energy. The second case has the future event horizon as infrared cutoff while the interaction is proportional to the energy density of both components of the dark sector.

  12. Minimal Mirror Twin Higgs

    CERN Document Server

    Barbieri, Riccardo; Harigaya, Keisuke

    2016-01-01

    In a Mirror Twin World with a maximally symmetric Higgs sector the little hierarchy of the Standard Model can be significantly mitigated, perhaps displacing the cutoff scale above the LHC reach. We show that consistency with observations requires that the Z2 parity exchanging the Standard Model with its mirror be broken in the Yukawa couplings. A minimal such effective field theory, with this sole Z2 breaking, can generate the Z2 breaking in the Higgs sector necessary for the Twin Higgs mechanism, and has constrained and correlated signals in invisible Higgs decays, direct Dark Matter Detection and Dark Radiation, all within reach of foreseen experiments. For dark matter, both mirror neutrons and a variety of self-interacting mirror atoms are considered. Neutrino mass signals and the effects of a possible additional Z2 breaking from the vacuum expectation values of B-L breaking fields are also discussed.

  13. Learn with SAT to Minimize Büchi Automata

    Directory of Open Access Journals (Sweden)

    Stephan Barth

    2012-10-01

    Full Text Available We describe a minimization procedure for nondeterministic Büchi automata (NBA). For an automaton A, another automaton A_min with the minimal number of states is learned with the help of a SAT solver. This is done by successively computing automata A' that approximate A in the sense that they accept a given finite set of positive examples and reject a given finite set of negative examples. In the course of the procedure these example sets are successively increased. Thus, our method can be seen as an instance of a generic learning algorithm based on a "minimally adequate teacher" in the sense of Angluin. We use a SAT solver to find an NBA for given sets of positive and negative examples. We use complementation via construction of deterministic parity automata to check candidates computed in this manner for equivalence with A. Failure of equivalence yields new positive or negative examples. Our method proved successful on complete samplings of small automata and on quite a few larger automata. We successfully ran the minimization on over ten thousand automata, mostly with up to ten states, including the complements of all possible automata with two states and alphabet size three, and we discuss results and runtimes; single examples had over 100 states.

  14. Clinical Analysis of 160 Cases of Individual Minimally Invasive Treatment for Lower-Limb Varicose Veins

    Institute of Scientific and Technical Information of China (English)

    王国栋; 王红超

    2015-01-01

    Objective:To investigate the individual minimally invasive treatment for lower limbs varicose and the effect.Methods:Retrospective reviewed the 160 patients of lower limbs varicose in a variety of minimally invasive treatment from Jan.2009 to Jun.2013, total 229 limbs.21 limbs in 15 patients were cured by endovenous laser treatment ( EVLT) only;123 limbs in 80 patients were cured by high ligation combined with EVLT;63 limbs in 48patents were cured by high ligation combined with EVLT and local varicose vein mass point removal;22 limbs in 17 patients were cured by high ligation combined with EVLT and subf-endoscopic surgery ( SEPS) .Results:All incisions were primary healing.30 cases felt pain and block or a column state at great saphenous vein trunk and crus local varicose veins which were burned;19 cases were found subcutaneous flake ecchymosis;18 cases felt local skin numbness;2 cases were recurrence after operation.Patients complicated with skin ulcer healed after operation by dressing change.All the patients had a clinically significant reduction in symptoms and no lower extremity deep vein thrombosis.141 patients were followed up for 6-54 months.Conclusion:Lower limbs varicose vein of different degree treat by individualized therapy to improve cure rate and to be an effective measure.%目的:探讨下肢静脉曲张的微创个体化治疗方法及疗效.方法:回顾性分析2009年1月~2013年6月期间综合应用多种微创方法治疗160例下肢静脉曲张患者的临床资料,共229 条肢体;其中单纯应用EVLT 15例,21条肢体;高位结扎加EVLT 80例,123条肢体;高位结扎加EVLT、局部曲张静脉团块点状剥除48例,63条肢体;高位结扎加EVLT联合腔镜下交通支离断(SEPS)17例,22条肢体.结果:160例患者切口均1期愈合.术后出现大隐静脉主干及小腿局部曲张静脉烧灼处条索状硬结、疼痛30例;不同程度皮下片状瘀斑19例;出现局部皮肤麻木18例.术后复发2例.合并皮肤溃疡

  15. Reduced-Complexity Deterministic Annealing for Vector Quantizer Design

    Directory of Open Access Journals (Sweden)

    Ortega Antonio

    2005-01-01

    Full Text Available This paper presents a reduced-complexity deterministic annealing (DA) approach for vector quantizer (VQ) design that uses soft information processing with simplified assignment measures. Low-complexity distributions are designed to mimic the Gibbs distribution, the optimal distribution used in the standard DA method. These low-complexity distributions are simple enough to facilitate fast computation, but at the same time they can closely approximate the Gibbs distribution to yield near-optimal performance. We have also derived the theoretical performance loss at a given system entropy due to using the simple soft measures instead of the optimal Gibbs measure. We use the derived result to obtain optimal annealing schedules for the simple soft measures that approximate the annealing schedule for the optimal Gibbs distribution. The proposed reduced-complexity DA algorithms significantly improve the quality of the final codebooks compared to the generalized Lloyd algorithm and standard stochastic relaxation techniques, both with and without the pairwise nearest neighbor (PNN) codebook initialization. The proposed algorithms are able to evade local minima, and the results show that they are not sensitive to the choice of the initial codebook. Compared to the standard DA approach, the reduced-complexity DA algorithms can operate over 100 times faster with negligible performance difference. For example, for the design of a 16-dimensional vector quantizer having a rate of 0.4375 bit/sample for a Gaussian source, the standard DA algorithm achieved 3.60 dB performance in 16 483 CPU seconds, whereas the reduced-complexity DA algorithm achieved the same performance in 136 CPU seconds. Beyond VQ design, the DA techniques are applicable to problems such as classification, clustering, and resource allocation.
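For orientation, the standard DA baseline that the paper's reduced-complexity measures approximate can be sketched for a scalar codebook; the Gibbs soft assignments and geometric cooling are the textbook ingredients, while all parameter values here are illustrative:

```python
import math
import random

def da_vq(data, m, t_init=0.5, t_min=1e-3, alpha=0.8, inner=5, seed=0):
    """Deterministic annealing for a scalar codebook of size m.

    This sketches the *standard* DA baseline: soft Gibbs assignments
    exp(-d^2 / T) followed by weighted-centroid updates, with geometric
    cooling. (The paper's contribution, replacing the Gibbs measure with
    cheaper approximate measures, is not reproduced here.)"""
    rng = random.Random(seed)
    code = [rng.choice(data) for _ in range(m)]
    t = t_init
    while t > t_min:
        for _ in range(inner):
            probs = []
            for x in data:
                e = [math.exp(-((x - c) ** 2) / t) for c in code]
                s = sum(e)
                if s == 0.0:
                    # underflow guard: hard-assign to the nearest codevector
                    j = min(range(m), key=lambda i: (x - code[i]) ** 2)
                    row = [0.0] * m
                    row[j] = 1.0
                else:
                    row = [v / s for v in e]
                probs.append(row)
            # each codevector moves to its probability-weighted centroid
            for j in range(m):
                den = sum(p[j] for p in probs)
                if den > 0.0:
                    code[j] = sum(p[j] * x for p, x in zip(probs, data)) / den
        # a tiny perturbation lets coinciding codevectors split as T drops
        code = [c + rng.uniform(-1e-6, 1e-6) for c in code]
        t *= alpha  # geometric cooling schedule
    return sorted(code)

samples = [0.0, 0.1, 0.2, 1.0, 1.1, 1.2]
codebook = da_vq(samples, m=2)  # expect two codevectors, one per cluster
```

At high temperature the assignments are nearly uniform; as T falls they harden into nearest-neighbor assignments, which is how DA evades the local minima that trap the generalized Lloyd algorithm.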

  16. Accurate deterministic solutions for the classic Boltzmann shock profile

    Science.gov (United States)

    Yue, Yubei

    The Boltzmann equation or Boltzmann transport equation is a classical kinetic equation devised by Ludwig Boltzmann in 1872. It is regarded as a fundamental law in rarefied gas dynamics. Rather than using macroscopic quantities such as density, temperature, and pressure to describe the underlying physics, the Boltzmann equation uses a distribution function in phase space to describe the physical system, and all the macroscopic quantities are weighted averages of the distribution function. The information contained in the Boltzmann equation is surprisingly rich, and the Euler and Navier-Stokes equations of fluid dynamics can be derived from it using series expansions. Moreover, the Boltzmann equation can reach regimes far beyond the capabilities of fluid dynamical equations, such as the realm of rarefied gases---the topic of this thesis. Although the Boltzmann equation is very powerful, it is extremely difficult to solve in most situations. Thus the only hope is to solve it numerically. But soon one finds that even a numerical simulation of the equation is extremely difficult, due both to the complex and high-dimensional integral in the collision operator and to the hyperbolic phase-space advection terms. For this reason, until a few years ago most numerical simulations had to rely on Monte Carlo techniques. In this thesis I will present a new and robust numerical scheme to compute direct deterministic solutions of the Boltzmann equation, and I will use it to explore some classical gas-dynamical problems. In particular, I will study in detail one of the most famous and intrinsically nonlinear problems in rarefied gas dynamics, namely the accurate determination of the Boltzmann shock profile for a gas of hard spheres.

  17. Development of a Deterministic Ethernet Building blocks for Space Applications

    Science.gov (United States)

    Fidi, C.; Jakovljevic, Mirko

    2015-09-01

    The benefits of using commercially based networking standards and protocols have been widely discussed and are expected to include reductions in overall mission cost, shortened integration and test (I&T) schedules, increased operations flexibility, and hardware and software upgradeability/scalability as developments continue in the commercial world. The deterministic Ethernet technology TTEthernet [1], deployed on the NASA Orion spacecraft, demonstrated its suitability for a safety-critical human spaceflight application during Exploration Flight Test 1 (EFT-1). The TTEthernet technology used within the NASA Orion program was matured for that mission but did not lead to broader use in space applications or to an international space standard. TTTech has therefore developed a new version that allows the technology to be scaled to different applications, not only high-end missions: decreasing the size of the building blocks reduces size, weight and power, enabling use in smaller applications. TTTech is currently developing a full space-product offering for its TTEthernet technology to allow its use in space applications beyond launchers and human spaceflight. A broad space-market assessment and the current ESA TRP7594 activity led to the development of a space-grade TTEthernet controller ASIC based on the ESA-qualified Atmel AT1C8RHA95 process [2]. In this paper we describe our current TTEthernet controller development towards a space-qualified network component, allowing future spacecraft to operate in significant radiation environments while using a single onboard network for reliable commanding and data transfer.

  18. Activity modes selection for project crashing through deterministic simulation

    Directory of Open Access Journals (Sweden)

    Ashok Mohanty

    2011-12-01

    Full Text Available Purpose: The time-cost trade-off problem addressed by CPM-based analytical approaches assumes unlimited resources and the existence of a continuous time-cost function. However, given the discrete nature of most resources, the activities can often be crashed only stepwise. Activity crashing for a discrete time-cost function is also known as the activity modes selection problem in project management. This problem is known to be NP-hard. Sophisticated optimization techniques such as Dynamic Programming, Integer Programming, Genetic Algorithms and Ant Colony Optimization have been used for finding efficient solutions to the activity modes selection problem. The paper presents a simple method that can provide efficient solutions to the activity modes selection problem for project crashing. Design/methodology/approach: A simulation-based method implemented on an electronic spreadsheet is used to determine activity modes for project crashing. The method is illustrated with the help of an example. Findings: The paper shows that a simple approach based on a heuristic and deterministic simulation can give good results comparable to sophisticated optimization techniques. Research limitations/implications: The simulation-based crashing method presented in this paper is developed to return satisfactory solutions but not necessarily an optimal solution. Practical implications: The use of spreadsheets for solving Management Science and Operations Research problems makes the techniques more accessible to practitioners. Spreadsheets provide a natural interface for model building, are easy to use in terms of inputs, solutions and report generation, and allow users to perform what-if analysis. Originality/value: The paper presents the application of simulation implemented on a spreadsheet to determine efficient solutions to the discrete time-cost trade-off problem.
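The stepwise, discrete crashing the abstract describes can be illustrated with a small greedy heuristic: repeatedly shorten the cheapest crashable activity by one day until the target duration is met. The activity data and the serial-project assumption (every activity on the critical path) below are hypothetical, not the paper's spreadsheet model:

```python
# Greedy stepwise crashing for a serial project (all activities critical).
# Hypothetical data: name -> (normal_days, crash_limit_days, cost_per_day).
activities = {
    "A": (6, 2, 100),   # may be shortened by up to 2 days at 100/day
    "B": (4, 1, 300),
    "C": (5, 3, 150),
}

def crash(activities, target):
    """Shorten total duration to `target` days at minimum extra cost."""
    remaining = {k: v[1] for k, v in activities.items()}
    duration = sum(v[0] for v in activities.values())
    cost = 0
    while duration > target:
        # pick the cheapest activity that can still be crashed one more day
        candidates = [k for k, r in remaining.items() if r > 0]
        if not candidates:
            break  # target unreachable with the given crash limits
        k = min(candidates, key=lambda a: activities[a][2])
        remaining[k] -= 1
        duration -= 1
        cost += activities[k][2]
    return duration, cost

print(crash(activities, 12))  # crash 3 days: (12, 350)
```

For a general network the same idea applies per critical path rather than to all activities, which is where the spreadsheet simulation of the paper comes in.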

  19. Heart bypass surgery - minimally invasive

    Science.gov (United States)

    Minimally invasive direct coronary artery bypass; MIDCAB; Robot assisted coronary artery bypass; RACAB; Keyhole heart surgery ... doctor may recommend a minimally invasive coronary artery bypass if you have a blockage in one or ...

  20. Deterministic Computer-Controlled Polishing Process for High-Energy X-Ray Optics

    Science.gov (United States)

    Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian

    2010-01-01

    A deterministic computer-controlled polishing process for large X-ray mirror mandrels is presented. Using the tool's influence function and material removal rate extracted from polishing experiments, design considerations for polishing laps and optimized operating parameters are discussed.

  1. A deterministic and statistical energy analysis of tyre cavity resonance noise

    Science.gov (United States)

    Mohamed, Zamri; Wang, Xu

    2016-03-01

    Tyre cavity resonance was studied using a combination of deterministic analysis and statistical energy analysis, where the deterministic part was implemented using the impedance compact mobility matrix method and the statistical part by the statistical energy analysis method. While the impedance compact mobility matrix method can offer a deterministic solution for the cavity pressure response and the compliant wall vibration velocity response in the low frequency range, the statistical energy analysis method can offer a statistical solution for the responses in the high frequency range. In the mid frequency range, a combination of the statistical energy analysis and deterministic analysis methods can identify system coupling characteristics. Results from both methods have been compared with those from commercial software in order to validate them. The combined analysis result has been verified by measurements from a tyre-cavity physical model. The analysis method developed in this study can be applied to other similar toroidal structural-acoustic systems.

  2. Analysis of the deterministic and stochastic SIRS epidemic models with nonlinear incidence

    Science.gov (United States)

    Liu, Qun; Chen, Qingmei

    2015-06-01

    In this paper, the deterministic and stochastic SIRS epidemic models with nonlinear incidence are introduced and investigated. For deterministic system, the basic reproductive number R0 is obtained. Furthermore, if R0 ≤ 1, then the disease-free equilibrium is globally asymptotically stable and if R0 > 1, then there is a unique endemic equilibrium which is globally asymptotically stable. For stochastic system, to begin with, we verify that there is a unique global positive solution starting from the positive initial value. Then when R0 > 1, we prove that stochastic perturbations may lead the disease to extinction in scenarios where the deterministic system is persistent. When R0 ≤ 1, a result on fluctuation of the solution around the disease-free equilibrium of deterministic model is obtained under appropriate conditions. At last, if the intensity of the white noise is sufficiently small and R0 > 1, then there is a unique stationary distribution to stochastic system.

  3. Minimally invasive percutaneous nephroscope with holmium laser treatment of renal pelvis stenosis or atresia: analysis of 26 cases

    Institute of Scientific and Technical Information of China (English)

    李程; 严景元; 刘利权; 王永胜; 岳良; 王波

    2012-01-01

    Objective: To investigate the clinical practicality and efficacy of minimally invasive percutaneous nephroscopy with holmium laser for the treatment of renal pelvis and calyx stenosis or atresia. Methods: From March 2008 to February 2011, 26 patients with postoperative secondary or primary stenosis or atresia of the renal pelvis and calyces underwent minimally invasive percutaneous renal puncture under C-arm X-ray or B-ultrasound guidance to establish an F16 percutaneous renal tract; the stenotic or atretic segment was then incised with a holmium laser under ureteroscopy, and an F6 D-J stent was left in place postoperatively for drainage. Results: The first-stage operation succeeded in 25 of the 26 patients; in one patient the minimally invasive holmium laser treatment failed because the atretic segment of the renal pelvis was longer than 2.0 cm, and second-stage open reconstructive surgery was performed. In the 25 successful cases, B-ultrasound at 2-3 months after surgery showed that the hydronephrosis had clearly resolved and renal function was normal, so the D-J stents were removed. Intravenous pyelography at 3-6 months showed good visualization of the renal pelvis and calyces in 23 cases; in 2 cases the stenosis recurred, so an F6 D-J stent was placed again under ureteroscopy for 3 months, after which the stenosis was relieved. Conclusion: Minimally invasive percutaneous nephroscopy with holmium laser treatment of renal pelvis and calyx stenosis is minimally traumatic, safe and effective, and is especially suitable for stenosis or atresia secondary to previous open surgery.

  4. Minimal Log Gravity

    CERN Document Server

    Giribet, Gaston

    2014-01-01

    Minimal Massive Gravity (MMG) is an extension of three-dimensional Topologically Massive Gravity that, when formulated about Anti-de Sitter space, resolves the tension between bulk and boundary unitarity that other models in three dimensions suffer from. We study this theory at the chiral point, i.e. at the point of the parameter space where one of the central charges of the dual conformal field theory vanishes. We investigate the non-linear regime of the theory, meaning that we study exact solutions to the MMG field equations that are not Einstein manifolds. We exhibit a large class of solutions of this type, which behave asymptotically in different manners. In particular, we find analytic solutions that represent two-parameter deformations of extremal Banados-Teitelboim-Zanelli (BTZ) black holes. These geometries behave asymptotically as solutions of the so-called Log Gravity, and, despite the weakened falling-off close to the boundary, they have finite mass and finite angular momentum, which w...

  5. Minimal dilaton model

    Directory of Open Access Journals (Sweden)

    Oda Kin-ya

    2013-05-01

    Full Text Available Both the ATLAS and CMS experiments at the LHC have reported the observation of a particle of mass around 125 GeV which is consistent with the Standard Model (SM) Higgs boson, but with an excess of events beyond the SM expectation in the diphoton decay channel at each of them. There still remains room for the logical possibility that we are not seeing the SM Higgs but something else. Here we introduce the minimal dilaton model, in which the LHC signals are explained by an extra singlet scalar of mass around 125 GeV that slightly mixes with an SM Higgs heavier than 600 GeV. When this scalar has a vacuum expectation value well beyond the electroweak scale, it can be identified as a linearly realized version of a dilaton field. Though the current experimental constraints from the Higgs search disfavor such a region, the singlet scalar model itself still provides a viable alternative to the SM Higgs in interpreting its search results.

  6. Deterministic and Probabilistic Analysis of NPP Communication Bridge Resistance Due to Extreme Loads

    Directory of Open Access Journals (Sweden)

    Králik Juraj

    2014-12-01

    Full Text Available This paper presents experiences from the deterministic and probabilistic analysis of the reliability of a communication bridge structure's resistance to extreme loads - wind and earthquake. The efficiency of the bracing systems is considered using the example of the steel bridge between two NPP buildings. The advantages and disadvantages of deterministic and probabilistic analyses of structural resistance are discussed. The advantages of using the LHS method to analyze the safety and reliability of structures are presented.
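The LHS method referred to above is Latin Hypercube Sampling. A minimal sketch of how an LHS design stratifies each input dimension so that every stratum is sampled exactly once (illustrative only, not the authors' implementation):

```python
import random

def latin_hypercube(n_samples, n_dims, rng=random.Random(0)):
    """One LHS design on the unit cube: each dimension's range [0, 1) is
    split into n_samples equal strata, and each stratum is sampled once."""
    design = []
    for _ in range(n_dims):
        # one uniform point inside each stratum, then shuffle the strata
        column = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(column)
        design.append(column)
    # transpose: one row (tuple) per sample point
    return list(zip(*design))

points = latin_hypercube(5, 2)
# every stratum of each dimension contains exactly one sample
for d in range(2):
    assert sorted(int(p[d] * 5) for p in points) == [0, 1, 2, 3, 4]
```

In a reliability study the unit-cube samples would then be mapped through the inverse CDFs of the input distributions before running the structural model.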

  7. Deterministic methods in radiation transport. A compilation of papers presented February 4--5, 1992

    Energy Technology Data Exchange (ETDEWEB)

    Rice, A.F.; Roussin, R.W. [eds.]

    1992-06-01

    The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.

  9. Implementation of Gy-Eq for deterministic effects limitation in shield design

    Science.gov (United States)

    Wilson, John W.; Kim, Myung-Hee Y.; De Angelis, Giovanni; Cucinotta, Francis A.; Yoshizawa, Nobuaki; Badavi, Francis F.

    2002-01-01

    The NCRP has recently defined RBE values and a new quantity (Gy-Eq) for use in estimation of deterministic effects in space shielding and operations. The NCRP's RBE for neutrons is left ambiguous and not fully defined. In the present report we will suggest a complete definition of neutron RBE consistent with the NCRP recommendations and evaluate attenuation properties of deterministic effects (Gy-Eq) in comparison with other dosimetric quantities.

  10. Risk-minimal routes for emergency cars

    OpenAIRE

    Woelki, Marko; Nippold, Ronald; Bonert, Michael; Ruppe, Sten

    2013-01-01

    The computation of an optimal route for a given start and destination in a static transportation network is used in many applications of private route planning. In this work we focus on route planning for emergency cars, such as police, fire brigade and ambulance vehicles. In private route planning, typical quantities to be minimized are travel time or route length. However, the idea of this paper is to minimize the risk of the travel time exceeding a certain limit. This is inspired by ...
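One common way to make "risk of exceeding a limit" tractable for a standard shortest-path search is to maximize the product of independent per-edge on-time probabilities via -log edge weights. The sketch below illustrates that idea under those independence assumptions; it is not the authors' formulation, and the network data are hypothetical:

```python
import heapq, math

def most_reliable_path(graph, src, dst):
    """Dijkstra on weights -log(p): maximizing the product of independent
    per-edge on-time probabilities == minimizing the sum of -log(p)."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v, p in graph.get(u, []):
            nd = d - math.log(p)
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], math.exp(-dist[dst])

# hypothetical network: node -> [(neighbour, on-time probability), ...]
graph = {
    "A": [("B", 0.9), ("C", 0.99)],
    "B": [("D", 0.9)],
    "C": [("D", 0.95)],
}
path, p = most_reliable_path(graph, "A", "D")
print(path, round(p, 4))  # ['A', 'C', 'D'] 0.9405
```

The A-C-D route wins (0.99 x 0.95 = 0.9405) over A-B-D (0.81) even though both have two edges, which is the flavour of trade-off a risk-minimal planner makes.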

  11. Minimally invasive restorative dentistry: a biomimetic approach.

    Science.gov (United States)

    Malterud, Mark I

    2006-08-01

    When providing dental treatment for a given patient, the practitioner should use a minimally invasive technique that conserves sound tooth structure as a clinical imperative. Biomimetics is a tenet that guides the author's practice and is generally described as the mimicking of natural life. This can be accomplished in many cases using contemporary composite resins and adhesive dental procedures. Both provide clinical benefits and support the biomimetic philosophy for treatment. This article illustrates a minimally invasive approach for the restoration of carious cervical defects created by poor hygiene exacerbated by the presence of orthodontic brackets.

  12. Monte Carlo and deterministic computational methods for the calculation of the effective delayed neutron fraction

    Science.gov (United States)

    Zhong, Zhaopeng; Talamo, Alberto; Gohar, Yousry

    2013-07-01

    The effective delayed neutron fraction β plays an important role in the kinetics and static analysis of reactor physics experiments. It is used as the reactivity unit referred to as the "dollar". Usually, it is obtained by computer simulation due to the difficulty of measuring it experimentally. In 1965, Keepin proposed a method, widely used in the literature, for the calculation of the effective delayed neutron fraction β. This method requires calculation of the adjoint neutron flux as a weighting function of the phase-space inner products and is easy to implement in deterministic codes. With Monte Carlo codes, the solution of the adjoint neutron transport equation is much more difficult because of the continuous-energy treatment of nuclear data. Consequently, alternative methods, which do not require the explicit calculation of the adjoint neutron flux, have been proposed. In 1997, Bretscher introduced the k-ratio method for calculating the effective delayed neutron fraction; this method is based on calculating the multiplication factor of a nuclear reactor core with and without the contribution of delayed neutrons. The multiplication factor set by the delayed neutrons (the delayed multiplication factor) is obtained as the difference between the total and the prompt multiplication factors. Using Monte Carlo calculations, Bretscher evaluated β as the ratio between the delayed and total multiplication factors (hence the name k-ratio method). In the present work, the k-ratio method is applied by Monte Carlo (MCNPX) and deterministic (PARTISN) codes. In the latter case, the ENDF/B nuclear data library of the fuel isotopes (235U and 238U) has been processed by the NJOY code with and without the delayed neutron data to prepare multi-group WIMSD neutron libraries for the lattice physics code DRAGON, which was used to generate the PARTISN macroscopic cross sections. In recent years Meulekamp and van der Marck in 2006 and Nauchi and Kameyama
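Bretscher's k-ratio estimate described above reduces to a one-line formula; the k values below are hypothetical placeholders for a pair of runs with and without delayed-neutron data:

```python
def beta_eff_k_ratio(k_total, k_prompt):
    """Bretscher's k-ratio estimate: the delayed multiplication is the
    difference between total and prompt k; beta_eff is its share of k_total."""
    return (k_total - k_prompt) / k_total

# hypothetical multiplication factors from paired criticality runs
k_total, k_prompt = 1.00215, 0.99560
print(f"{beta_eff_k_ratio(k_total, k_prompt):.5f}")  # 0.00654
```

In practice each k carries a Monte Carlo statistical uncertainty, so the difference of two nearly equal numbers demands tight convergence of both runs.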

  13. A three-level non-deterministic modeling methodology for the NVH behavior of rubber connections

    Science.gov (United States)

    Stenti, A.; Moens, D.; Sas, P.; Desmet, W.

    2010-03-01

    Complex built-up structures such as vehicles have a variety of joint types, such as spot-welds, bolted joints, rubber joints, etc. Rubber joints contribute strongly to the nonlinear behaviour of the structure and are a major source of uncertainty and variability. In the general framework of developing engineering tools for virtual prototyping and product refinement, modeling the NVH behavior of rubber joints involves the computational burden of including a detailed nonlinear model of the joint, and the uncertainties and variability typical of that joint, in a full-scale system model. However, in an engineering design phase the knowledge of the joint's rubber material properties is typically poor, and the working conditions a rubber joint will experience are generally not known in detail. This lack of knowledge often does not justify the computational burden and modeling effort of including detailed nonlinear joint models in a full-scale system model. Driven by these issues, a non-deterministic numerical methodology based on a three-level modeling approach is being developed. The methodology aims at evaluating, directly in the frequency domain, the sensitivity of the NVH behavior of a full-scale system model to the rubber joint material properties when nonlinear visco-elastic rubber material behavior is considered. Rather than including a representation of the nonlinear visco-elastic behavior directly in the model, the methodology models the material's nonlinear visco-elastic behavior by a linear visco-elastic material model defined in an interval sense, from which the scatter on the full-scale system NVH response is evaluated. Furthermore, a multi-level solution scheme reduces the computational burden introduced by the non-deterministic approach by allowing the definition of an equivalent linear interval parametric rubber joint model, ready to be assembled in a full-scale system model at a reasonable

  14. Parallel deterministic neutronics with AMR in 3D

    Energy Technology Data Exchange (ETDEWEB)

    Clouse, C.; Ferguson, J.; Hendrickson, C. [Lawrence Livermore National Lab., CA (United States)

    1997-12-31

    AMTRAN, a three-dimensional Sn neutronics code with adaptive mesh refinement (AMR), has been parallelized over spatial domains and energy groups and runs on the Meiko CS-2 with MPI message passing. Block-refined AMR is used with linear finite element representations for the fluxes, which allows for a straightforward interpretation of fluxes at block interfaces with zoning differences. The load balancing algorithm assumes 8 spatial domains, which minimizes idle time among processors.

  15. Seismic Hazard Assessment for a Characteristic Earthquake Scenario: Probabilistic-Deterministic Method

    Science.gov (United States)

    mouloud, Hamidatou

    2016-04-01

    The objective of this paper is to analyze the seismic activity and the statistical treatment of the seismicity catalog of the Constantine region between 1357 and 2014, comprising 7007 seismic events. Our research contributes to improving seismic risk management by evaluating the seismic hazard in north-east Algeria. In the present study, earthquake hazard maps for the Constantine region are calculated. Probabilistic seismic hazard analysis (PSHA) is classically performed through the Cornell approach, using a uniform earthquake distribution over the source area and a given magnitude range. This study aims at extending the PSHA approach to the case of a characteristic earthquake scenario associated with an active fault. The approach integrates PSHA with a high-frequency deterministic technique for the prediction of peak and spectral ground-motion parameters in a characteristic earthquake. The method is based on the site-dependent evaluation of the probability of exceedance for the chosen strong-motion parameter. We propose five seismotectonic zones. Five steps are necessary: (i) identification of potential sources of future earthquakes, (ii) assessment of their geological, geophysical and geometric characteristics, (iii) identification of the attenuation pattern of seismic motion, (iv) calculation of the hazard at a site, and finally (v) hazard mapping for a region. In this study, the procedure for earthquake hazard evaluation recently developed by Kijko and Sellevoll (1992) is used to estimate seismic hazard parameters in the northern part of Algeria.
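Step (iv), the hazard at a site, is conventionally expressed through a Poisson occurrence model. A minimal sketch of that standard PSHA convention (not specific to this study's parameters):

```python
import math

def prob_exceedance(annual_rate, years):
    """Poisson occurrence model used in standard PSHA: probability of at
    least one exceedance of a ground-motion level within `years`, given the
    annual rate at which that level is exceeded."""
    return 1.0 - math.exp(-annual_rate * years)

# e.g. the conventional 10%-in-50-years design level corresponds to an
# annual exceedance rate of about 1/475 (a 475-year return period)
print(round(prob_exceedance(1 / 475, 50), 3))  # ≈ 0.1
```

The hazard curve of a site is the annual rate as a function of the ground-motion level; the map in step (v) is built by repeating this evaluation over a grid of sites.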

  16. Modelling the protocol stack in NCS with deterministic and stochastic petri net

    Science.gov (United States)

    Hui, Chen; Chunjie, Zhou; Weifeng, Zhu

    2011-06-01

    The protocol stack is the basis of networked control systems (NCS). Full or partial reconfiguration of the protocol stack offers both optimised communication service and improved system performance. Field testing is an unrealistic way to determine the performance of a reconfigurable protocol stack, and the Petri net formal description technique offers the best combination of intuitive representation, tool support and analytical capability. Traditionally, separation between the different layers of the OSI model has been common practice. Nevertheless, such a layered modelling and analysis framework for the protocol stack precludes global optimisation of protocol reconfiguration. In this article, we propose a general modelling and analysis framework for NCS based on the cross-layer concept, which establishes an efficient system-scheduling model by abstracting the time-constraint, task-interrelation, processor and bus sub-models from the upper and lower layers (application, data link and physical layers). Cross-layer design can help to overcome the lack of global optimisation by sharing information between protocol layers. To illustrate the framework, we take the controller area network (CAN) as a case study. The simulation results of the deterministic and stochastic Petri net (DSPN) model can help us adjust the message scheduling scheme and obtain better system performance.

  17. Efficiency of transport in periodic potentials: dichotomous noise contra deterministic force

    Science.gov (United States)

    Spiechowicz, J.; Łuczka, J.; Machura, L.

    2016-05-01

    We study the transport of an inertial Brownian particle moving in a symmetric and periodic one-dimensional potential, and subjected to both a symmetric, unbiased external harmonic force as well as biased dichotomous noise η (t), also known as a random telegraph signal or a two-state continuous-time Markov process. In doing so, we concentrate on the previously reported regime (Spiechowicz et al 2014 Phys. Rev. E 90 032104) for which non-negative biased noise η (t) in the form of generalized white Poissonian noise can induce anomalous transport processes similar to those generated by a deterministic constant force F, but significantly more effectively than F: the particle moves much faster, the velocity fluctuations are noticeably reduced, and the transport efficiency is enhanced several times. Here, we confirm this result for the case of dichotomous fluctuations which, in contrast to white Poissonian noise, can assume positive as well as negative values, and examine the role of thermal noise in the observed phenomenon. We focus our attention on the impact of the bidirectionality of dichotomous fluctuations and reveal that the effect of nonequilibrium-noise-enhanced efficiency is still detectable. This result may explain transport phenomena occurring in strongly fluctuating environments of both physical and biological origin. Our predictions can be corroborated experimentally by use of a setup that consists of a resistively and capacitively shunted Josephson junction.
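A dichotomous (random telegraph) signal of the kind studied above can be sampled as a two-state Markov process. In the sketch below the amplitudes, switching rate and time step are hypothetical illustration values, not the paper's parameters:

```python
import random

def telegraph(t_max, dt, a=1.0, b=-0.5, rate=2.0, rng=random.Random(1)):
    """Sample path of biased dichotomous noise: eta(t) jumps between the
    two values a and b with switching rate `rate` (per unit time)."""
    eta, traj = a, []
    for _ in range(int(t_max / dt)):
        if rng.random() < rate * dt:   # Markov switching event in this step
            eta = b if eta == a else a
        traj.append(eta)
    return traj

traj = telegraph(t_max=100.0, dt=0.01)
mean = sum(traj) / len(traj)
# with equal switching rates the long-time mean approaches (a + b)/2 = 0.25,
# i.e. the bias <eta(t)> that plays the role of the constant force F
```

Feeding such a trajectory into an Euler integration of the inertial Langevin equation reproduces the driving used in studies of this type.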

  18. Deterministic and risk-informed approaches for safety analysis of advanced reactors: Part II, Risk-informed approaches

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Inn Seock, E-mail: innseockkim@gmail.co [ISSA Technology, 21318 Seneca Crossing Drive, Germantown, MD 20876 (United States); Ahn, Sang Kyu; Oh, Kyu Myung [Korea Institute of Nuclear Safety, 19 Kusong-dong, Yuseong-gu, Daejeon 305-338 (Korea, Republic of)

    2010-05-15

    Technical insights and findings from a critical review of the deterministic approaches typically applied to ensure the design safety of nuclear power plants were presented in the companion paper, Part I, included in this issue. In this paper we discuss the risk-informed approaches that have been proposed to make a safety case for advanced reactors, including Generation-IV reactors such as the Modular High-Temperature Gas-cooled Reactor (MHTGR), Pebble Bed Modular Reactor (PBMR), or Sodium-cooled Fast Reactor (SFR). Also considered herein are a risk-informed safety analysis approach suggested by Westinghouse as a means to improve conventional accident analysis, together with the Technology Neutral Framework recently developed by the US Nuclear Regulatory Commission as a high-level regulatory infrastructure for the safety evaluation of any type of reactor design. The insights from a comparative review of the various deterministic and risk-informed approaches can usefully inform the development of a new licensing architecture for enhanced safety of evolutionary or advanced plants.

  19. Studies of criticality Monte Carlo method convergence: use of a deterministic calculation and automated detection of the transient

    International Nuclear Information System (INIS)

    Monte Carlo criticality calculation allows estimation of the effective multiplication factor as well as local quantities such as local reaction rates. Some configurations presenting weak neutronic coupling (high burn-up profile, complete reactor core, ...) may induce biased estimates of keff or reaction rates. In order to improve the robustness of the iterative Monte Carlo methods, coupling with a deterministic code was studied. An adjoint flux is obtained by a deterministic calculation and then used in the Monte Carlo calculation: the initial guess is automated, the sampling of fission sites is modified, and the random walk of neutrons is modified using splitting and Russian-roulette strategies. An automated convergence-detection method has been developed. It locates and suppresses the transient due to initialization in an output series, applied here to keff and Shannon entropy. It relies on modeling stationary series by a first-order autoregressive process and applying statistical tests based on a Student bridge statistic. This method can easily be extended to any output of an iterative Monte Carlo calculation. The methods developed in this thesis are tested on different test cases. (author)
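The transient-suppression idea can be illustrated with a much cruder stationarity check than the Student-bridge test of the thesis: truncate the series at the first point from which the two halves of the remaining data have statistically indistinguishable means. Everything below (the synthetic keff series, the two-sample z-test, the tolerance) is an assumption for illustration only:

```python
import math, random, statistics

def find_transient(series, tol=2.0):
    """Return the first index from which the tail of `series` looks
    stationary: the means of the two halves of the remaining data differ
    by less than `tol` pooled standard errors (a crude two-sample z-test,
    not the Student-bridge test of the thesis)."""
    n = len(series)
    for start in range(n // 2):
        tail = series[start:]
        h = len(tail) // 2
        a, b = tail[:h], tail[h:]
        se = (statistics.pvariance(a) / len(a)
              + statistics.pvariance(b) / len(b)) ** 0.5
        if abs(statistics.mean(a) - statistics.mean(b)) < tol * se:
            return start
    return n // 2  # no stationary tail found

# synthetic keff series: exponential transient settling onto noise
rng = random.Random(0)
keff = [1.0 + 0.05 * math.exp(-i / 10) + rng.gauss(0, 0.002)
        for i in range(400)]
cut = find_transient(keff)
# active cycles start at `cut`; tallies would use keff[cut:] only
```

Applied to keff or Shannon entropy, the detected index plays the role of the inactive-cycle count that is otherwise chosen by hand.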

  20. An optimal deterministic control policy of servers in front and back rooms with a variable number of switching points and switching costs

    Institute of Scientific and Technical Information of China (English)

    WANG JiaMin

    2009-01-01

    In this paper we consider a retail service facility with cross-trained workers who can perform operations in both the front room and back room. Workers are brought from the back room to the front room and vice versa depending on the number of customers in the system. A loss of productivity occurs when a worker returns to the back room. Two problems are studied. In the first problem, given the number of workers available, we determine an optimal deterministic switching policy so that the expected number of customers in queue is minimized subject to a constraint ensuring that there is a sufficient workforce to fulfill the functions in the back room. In the second problem, the number of workers needed is minimized subject to an additional constraint requiring that the expected number of customers waiting in queue is bounded above by a given threshold value. Exact solution procedures are developed and illustrative numerical examples are presented.

  1. A Deterministic Approach to Active Debris Removal Target Selection

    Science.gov (United States)

    Lidtke, A.; Lewis, H.; Armellin, R.

    2014-09-01

    purpose of ADR are also drawn and a deterministic method for ADR target selection, which could reduce the number of ADR missions to be performed, is proposed.

  2. Deterministic Modeling of the High Temperature Test Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Ortensi, J.; Cogliati, J. J.; Pope, M. A.; Ferrer, R. M.; Ougouag, A. M.

    2010-06-01

    Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine INL’s current prismatic-reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn). A fine-group cross-section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full-core solver used in this study and is based on the Green’s function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and nodal diffusion solver codes. The results from this study show a consistent bias of 2–3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and 235U cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the

  3. Multiple objectives application approach to waste minimization

    Institute of Scientific and Technical Information of China (English)

    张清宇

    2002-01-01

    Besides economics and controllability, waste minimization has now become an objective in designing chemical processes, and usually leads to high costs of investment and operation. An attempt was made to minimize waste discharged from chemical reaction processes during the design and modification process while the operation conditions were also optimized to meet the requirements of technology and economics. Multi-objective decision nonlinear programming (NLP) was employed to optimize the operation conditions of a chemical reaction process and reduce waste. A modeling language package, SPEEDUP, was used to simulate the process. This paper presents a case study of the benzene production process. The flowsheet factors affecting the economics and waste generation were examined. Constraints were imposed to reduce the number of objectives and carry out optimal calculations easily. After comparison of all possible solutions, a best-compromise approach was applied to meet technological requirements and minimize waste.
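A best-compromise search over competing objectives can be illustrated with a weighted-sum scalarization on a toy problem. The objective functions, weights and search grid below are hypothetical, not the SPEEDUP flowsheet model of the paper:

```python
# Weighted-sum scalarization of a toy two-objective design problem
# (cost vs. waste) in one design variable x; both objectives and the
# weights are hypothetical illustration values.
def cost(x):
    return (x - 3.0) ** 2 + 1.0   # minimized at x = 3

def waste(x):
    return (x - 1.0) ** 2         # minimized at x = 1

def best_compromise(w_cost, w_waste, candidates):
    """Pick the candidate minimizing the weighted sum of the objectives."""
    return min(candidates, key=lambda x: w_cost * cost(x) + w_waste * waste(x))

xs = [i / 100 for i in range(0, 401)]   # grid over the design variable
x_star = best_compromise(0.5, 0.5, xs)
print(round(x_star, 2))  # 2.0 - equal weights settle midway between the optima
```

Sweeping the weights traces out the Pareto front between the two objectives, which is how such compromise solutions are usually compared.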

  5. A deterministic analysis of tsunami hazard and risk for the southwest coast of Sri Lanka

    Science.gov (United States)

    Wijetunge, J. J.

    2014-05-01

    This paper describes a multi-scenario, deterministic analysis carried out as a pilot study to evaluate the tsunami hazard and risk distribution in the southwest coast of Sri Lanka. The hazard and risk assessment procedure adopted was also assessed against available field records of the impact of the Indian Ocean tsunami in 2004. An evaluation of numerically simulated nearshore tsunami amplitudes corresponding to 'maximum-credible' scenarios from different subduction segments in the Indian Ocean surrounding Sri Lanka suggests that a seismic event similar to the one that generated the 2004 tsunami can still be considered the 'worst-case' scenario for the southwest coast. Furthermore, it appears that the formation of edge waves trapped by the primary waves diffracting around the southwest significantly influences the nearshore tsunami wave field and is largely responsible for relatively higher tsunami amplitudes in certain stretches of the coastline under study. The extent of inundation from numerical simulations corresponding to the worst-case scenario shows good overall agreement with the points of maximum penetration of inundation from field measurements in the aftermath of the 2004 tsunami. It can also be seen that the inundation distribution is strongly influenced by onshore topography. The present study indicates that the mean depth of inundation could be utilised as a primary parameter to quantify the spatial distribution of the tsunami hazard. The spatial distribution of the risk of the tsunami hazard to the population and residential buildings, computed by employing the standard risk formula, shows satisfactory correlation with published statistics of the affected population and the damage to residential property during the tsunami in 2004.

  6. A deterministic aggregate production planning model considering quality of products

    Science.gov (United States)

    Madadi, Najmeh; Yew Wong, Kuan

    2013-06-01

    Aggregate Production Planning (APP) is a medium-term planning which is concerned with the lowest-cost method of production planning to meet customers' requirements and to satisfy fluctuating demand over a planning time horizon. The APP problem has been studied widely since it was introduced and formulated in the 1950s. However, in several conducted studies in the APP area, most of the researchers have concentrated on some common objectives such as minimization of cost, fluctuation in the number of workers, and inventory level. Specifically, maintaining quality at the desirable level as an objective while minimizing cost has not been considered in previous studies. In this study, an attempt has been made to develop a multi-objective mixed integer linear programming model that serves those companies aiming to incur the minimum level of operational cost while maintaining quality at an acceptable level. In order to obtain the solution to the multi-objective model, the Fuzzy Goal Programming approach and the max-min operator of Bellman-Zadeh were applied to the model. At the final step, IBM ILOG CPLEX Optimization Studio software was used to obtain the experimental results based on the data collected from an automotive parts manufacturing company. The results show that incorporating quality in the model imposes some costs; however, a trade-off should be made between the cost resulting from producing products with higher quality and the cost that the firm may incur due to customer dissatisfaction and sales losses.
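
The Bellman-Zadeh max-min aggregation mentioned above can be sketched in a few lines. The toy problem below is purely illustrative and not the paper's mixed integer model: a single decision variable (inspection effort) trades a cost goal against a quality goal through linear membership functions, and the max-min operator picks the best compromise. All names and numbers here are invented assumptions.

```python
def linear_membership(x, worst, best):
    """Bellman-Zadeh linear membership: 0 at the worst acceptable value,
    1 at the best; orientation is encoded by which bound is 'best'."""
    t = (x - worst) / (best - worst)
    return max(0.0, min(1.0, t))

def cost(e):      # operational cost grows with inspection effort e (minimize)
    return 100 + 8 * e

def quality(e):   # quality score grows with inspection effort e (maximize)
    return 70 + 3 * e

def satisfaction(e):
    """Max-min aggregation: the overall satisfaction of a decision is the
    satisfaction of its *worst* fulfilled goal."""
    mu_cost = linear_membership(cost(e), worst=180, best=100)
    mu_qual = linear_membership(quality(e), worst=70, best=100)
    return min(mu_cost, mu_qual)

# The best-compromise effort maximizes the minimum membership value.
best_e = max(range(0, 11), key=satisfaction)
best_lambda = satisfaction(best_e)
```

With these illustrative goals, neither objective can be fully satisfied; the compromise lands where the two membership functions cross.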

  7. Against Explanatory Minimalism in Psychiatry.

    Science.gov (United States)

    Thornton, Tim

    2015-01-01

    The idea that psychiatry contains, in principle, a series of levels of explanation has been criticized not only as empirically false but also, by Campbell, as unintelligible because it presupposes a discredited pre-Humean view of causation. Campbell's criticism is based on an interventionist-inspired denial that mechanisms and rational connections underpin physical and mental causation, respectively, and hence underpin levels of explanation. These claims echo some superficially similar remarks in Wittgenstein's Zettel. But attention to the context of Wittgenstein's remarks suggests a reason to reject explanatory minimalism in psychiatry and reinstate a Wittgensteinian notion of levels of explanation. Only in a context broader than the one provided by interventionism is the ascription of propositional attitudes, even in the puzzling case of delusions, justified. Such a view, informed by Wittgenstein, can reconcile the idea that the ascription of mental phenomena presupposes a particular level of explanation with the rejection of an a priori claim about its connection to a neurological level of explanation.

  8. Minimal surfaces for architectural constructions

    Directory of Open Access Journals (Sweden)

    Velimirović Ljubica S.

    2008-01-01

    Full Text Available Minimal surfaces are the surfaces of the smallest area spanned by a given boundary. An equivalent definition is that they are surfaces of vanishing mean curvature. Minimal surface theory has developed rapidly in recent times. Many new examples have been constructed and old ones altered. The minimal-area property makes these surfaces suitable for application in architecture. The main reasons for their application are that weight and the amount of material are reduced to a minimum. Famous architects like Frei Otto created this new trend in architecture. In recent years it has become possible to enlarge the family of minimal surfaces by constructing new surfaces.

  9. Wildfire susceptibility mapping: comparing deterministic and stochastic approaches

    Science.gov (United States)

    Pereira, Mário; Leuenberger, Michael; Parente, Joana; Tonini, Marj

    2016-04-01

    Conservation of Nature and Forests (ICNF) (http://www.icnf.pt/portal) which provides a detailed description of the shape and the size of the area burnt by each fire in each year of occurrence. Two methodologies for susceptibility mapping were compared. First, the deterministic approach, based on the study of Verde and Zêzere (2010), which includes the computation of the favorability scores for each variable and the fire occurrence probability, as well as the validation of each model, resulting from the integration of different variables. Second, as a non-linear method we selected the Random Forest algorithm (Breiman, 2001): this led us to identify the most relevant variables conditioning the presence of wildfire and allowed us to generate a map of fire susceptibility based on the resulting variable importance measures. By means of GIS techniques, we mapped the obtained predictions, which represent the susceptibility of the study area to fires. Results obtained by applying both methodologies for wildfire susceptibility mapping, as well as wildfire hazard maps for different total annual burnt area scenarios, were compared with the reference maps, allowing us to assess the best approach for susceptibility mapping in Portugal. References: - Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32. - Verde, J. C., & Zêzere, J. L. (2010). Assessment and validation of wildfire susceptibility and hazard in Portugal. Natural Hazards and Earth System Science, 10(3), 485-497.

  10. McKeown minimally invasive esophagectomy for the treatment of esophageal cancer: a report of 507 cases

    Institute of Scientific and Technical Information of China (English)

    陈保富; 孔敏; 朱成楚; 张波; 叶中瑞; 王春国; 马德华; 叶敏华

    2013-01-01

    after McKeown minimally invasive esophagectomy (MMIE) for the treatment of esophageal cancer. Methods From August 1997 to December 2012, MMIE was performed in 507 patients. Esophageal tumors were located in the upper third in 39 (7.69%), middle third in 312 (61.54%), and lower third in 156 (30.77%). Preoperative neoadjuvant chemoradiotherapy was used in 21 cases (4.14%). Resection was performed for squamous cancer (463 cases, 91.32%) or adenocarcinoma and other histologic types (44 cases, 8.68%) in patients with stage 0 (55, 10.85%), I (167, 32.94%), II (203, 40.04%), III (69, 13.61%), and IV (13, 2.56%) disease. Surgery was completed by thoracoscopy combined with laparotomy (281 cases, 55.42%), a totally thoracoscopic/laparoscopic approach (179 cases, 35.31%), thoracotomy combined with laparoscopy (32 cases, 6.31%), or conversion to thoracotomy/laparotomy (15 cases, 2.96%). Results MMIE was successfully completed in 492 (97.04%) patients. The operative time of thoracoscopic esophageal mobilization and pleural lymph node dissection was (81.5 ± 34.7) min (60-180 min), and of laparoscopic gastric mobilization and abdominal lymphadenectomy (60.3 ± 17.5) min (40-105 min). The blood loss of thoracoscopic surgery was (105.2 ± 73.1) ml (55-1080 ml), and of laparoscopic surgery (43.5 ± 21.4) ml (30-350 ml). The total number of lymph nodes dissected was 5-48 [(23.7 ± 11.5)/case]; the number of thoracic lymph nodes dissected was 3-32 [(14.6 ± 7.7)/case], abdominal lymph nodes 2-29 [(8.7 ± 5.2)/case], and neck lymph nodes 0-7 [(1.3 ± 1.1)/case]. Esophageal reconstruction was through the esophageal bed in 198 cases and through the retrosternal route in 309 cases. There were no deaths in the whole group; there were 3 cases of intraoperative bleeding due to azygos vein/spleen injury, 3 cases of accidental tracheal injury by the hook cautery/ultrasonic scalpel, 13 cases of thoracic duct injury, 9 cases of atrial fibrillation, and 3 cases of margin-positive (R1) resection. Among major complications in the early postoperative period, the lung infection rate was

  11. On Time with Minimal Expected Cost!

    DEFF Research Database (Denmark)

    David, Alexandre; Jensen, Peter Gjøl; Larsen, Kim Guldstrand;

    2014-01-01

    (Priced) timed games are two-player quantitative games involving an environment assumed to be completely antagonistic. Classical analysis consists in the synthesis of strategies ensuring safety, time-bounded or cost-bounded reachability objectives. Assuming a randomized environment, the (priced) timed game essentially defines an infinite-state Markov (reward) decision process. In this setting the objective is classically to find a strategy that will minimize the expected reachability cost, but with no guarantees on worst-case behaviour. In this paper, we provide efficient methods for computing reachability strategies that will both ensure worst-case time-bounds as well as provide (near-) minimal expected cost. Our method extends the synthesis algorithms of the synthesis tool Uppaal-Tiga with suitably adapted reinforcement learning techniques, and exhibits several orders of magnitude improvements w...
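
The objective of minimizing expected reachability cost can be illustrated with classic value iteration on a Markov decision process. The tiny MDP below is an invented example, not the priced timed game formalism or the Uppaal-Tiga algorithms of the paper; it only shows the same objective in miniature.

```python
# Hedged sketch: value iteration for the minimal *expected* cost of reaching
# a goal state in a small, made-up MDP. A cheap "safe" action with a reliable
# path beats an expensive "fast" gamble once expectations are accounted for.

MDP = {  # state -> action -> (cost, {successor: probability})
    "s0": {"fast": (4.0, {"goal": 0.5, "s0": 0.5}),
           "safe": (2.0, {"s1": 1.0})},
    "s1": {"go":   (2.0, {"goal": 0.9, "s0": 0.1})},
}

def value_iteration(mdp, goal="goal", sweeps=200):
    """Iterate V(s) = min_a [cost(a) + sum_t p(t|s,a) V(t)] to a fixed point."""
    V = {s: 0.0 for s in mdp}
    V[goal] = 0.0  # the goal costs nothing more to reach
    for _ in range(sweeps):
        for s, actions in mdp.items():
            V[s] = min(c + sum(p * V[t] for t, p in succ.items())
                       for c, succ in actions.values())
    return V

V = value_iteration(MDP)
# Greedy (optimal) action in the initial state under the converged values.
best_action_s0 = min(MDP["s0"],
                     key=lambda a: MDP["s0"][a][0]
                     + sum(p * V[t] for t, p in MDP["s0"][a][1].items()))
```

Solving the fixed-point equations by hand gives V(s0) = 4/0.9, which the iteration reproduces.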

  12. Singly-even self-dual codes with minimal shadow

    CERN Document Server

    Bouyuklieva, Stefka

    2011-01-01

    In this note we investigate extremal singly-even self-dual codes with minimal shadow. For particular parameters we prove non-existence of such codes. By a result of Rains \\cite{Rains-asymptotic}, the length of extremal singly-even self-dual codes is bounded. We give explicit bounds in case the shadow is minimal.

  13. Minimal repair under step-stress test

    OpenAIRE

    Balakrishnan, N.; Kamps, U.; Kateri, M.

    2009-01-01

    In the one- and multi-sample cases, in the context of life-testing reliability experiments, we introduce minimal repair processes under a simple step-stress test, based on exponential distributions and an associated cumulative exposure model, and then develop likelihood inference for such a model.

  14. Open and minimally open lips schizencephaly.

    Directory of Open Access Journals (Sweden)

    Srikanth S

    2000-04-01

    Full Text Available Two patients with isolated schizencephaly, a very rare congenital anomaly of the brain, who presented with epilepsy are reported. According to imaging morphology, there are two types of schizencephaly, 'open lip' and 'minimally open lip'. These two cases emphasize that while MRI is superior to CT in the diagnosis of congenital brain anomalies, schizencephaly can be diagnosed by its characteristic CT features.

  15. Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate

    Science.gov (United States)

    Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing

    2014-09-01

    We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. By using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease will prevail, the infection persists, and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infection disappears and the disease dies out. In addition, stochastic noises around the endemic equilibrium will be added to the deterministic MSIR model so that the deterministic model is extended to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. In addition, regarding the value of ℛ0, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.
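
The ℛ0 threshold behaviour described above can be demonstrated numerically. The sketch below is a single-group SIR model with vaccination at birth, integrated with a simple Euler scheme; it is a stand-in for the paper's multi-group MSIR system, and all parameter values are illustrative assumptions.

```python
# Hedged sketch: deterministic SIR with demography and a vaccinated fraction v
# of newborns. With effective R0 > 1 the infection settles at an endemic
# level; with R0 < 1 it dies out, mirroring the threshold result above.

def simulate(beta, gamma=0.2, mu=0.01, v=0.0, days=2000.0, dt=0.1):
    """Euler integration; returns final (S, I, R) fractions."""
    S, I, R = (1.0 - v) - 1e-3, 1e-3, v   # start near the disease-free state
    for _ in range(int(days / dt)):
        dS = mu * (1.0 - v) - beta * S * I - mu * S
        dI = beta * S * I - (gamma + mu) * I
        dR = mu * v + gamma * I - mu * R
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
    return S, I, R

def r0(beta, gamma=0.2, mu=0.01, v=0.0):
    """Effective reproduction number with a fraction v vaccinated at birth."""
    return beta * (1.0 - v) / (gamma + mu)

_, I_endemic, _ = simulate(beta=0.5, v=0.0)   # effective R0 ≈ 2.38 > 1
_, I_extinct, _ = simulate(beta=0.5, v=0.8)   # effective R0 ≈ 0.48 < 1
```

Raising the vaccination rate v drives the effective reproduction number below 1 and the simulated infection fraction to zero.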

  16. Guidelines for mixed waste minimization

    Energy Technology Data Exchange (ETDEWEB)

    Owens, C.

    1992-02-01

    Currently, there is no commercial mixed waste disposal available in the United States. Storage and treatment for commercial mixed waste is limited. Host state and compact region officials are encouraging their mixed waste generators to minimize their mixed wastes because of management limitations. This document provides a guide to mixed waste minimization.

  17. Uniqueness of PL Minimal Surfaces

    Institute of Scientific and Technical Information of China (English)

    Yi NI

    2007-01-01

    Using a standard fact in hyperbolic geometry, we give a simple proof of the uniqueness of PL minimal surfaces, thus filling in a gap in the original proof of Jaco and Rubinstein. Moreover, in order to clarify some ambiguity, we sharpen the definition of PL minimal surfaces, and prove a technical lemma on the Plateau problem in the hyperbolic space.

  18. Guidelines for mixed waste minimization

    International Nuclear Information System (INIS)

    Currently, there is no commercial mixed waste disposal available in the United States. Storage and treatment for commercial mixed waste is limited. Host state and compact region officials are encouraging their mixed waste generators to minimize their mixed wastes because of management limitations. This document provides a guide to mixed waste minimization.

  19. Minimal massive 3D gravity

    NARCIS (Netherlands)

    Bergshoeff, Eric; Hohm, Olaf; Merbis, Wout; Routh, Alasdair J.; Townsend, Paul K.

    2014-01-01

    We present an alternative to topologically massive gravity (TMG) with the same 'minimal' bulk properties; i.e. a single local degree of freedom that is realized as a massive graviton in linearization about an anti-de Sitter (AdS) vacuum. However, in contrast to TMG, the new 'minimal massive gravity'

  20. Waste minimization handbook, Volume 1

    International Nuclear Information System (INIS)

    This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996

  1. Waste minimization handbook, Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Boing, L.E.; Coffey, M.J.

    1995-12-01

    This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996.

  2. EVALUATION OF STATISTIC AND DETERMINISTIC INTERPOLATORS AS INSTRUMENT OF Eucalyptus sp CLONE STANDS STRATIFICATION

    Directory of Open Access Journals (Sweden)

    Honório Kanegae Junior

    2006-06-01

    Full Text Available The stratification of stands for successive forest inventories is usually based on stand cadastral information, such as the age, the species, the spacing, and the management regime, among others. The size of the sample is usually conditioned by the variability of the forest and by the required precision. Thus, the control of the variation through efficient stratification has a strong influence on sample precision and size. This study evaluated: the stratification provided by two spatial interpolators, a statistical one represented by kriging and a deterministic one represented by the inverse of the square of the distance (IDW); the interpolators in relation to simple random sampling and the traditional stratification based on cadastral data, in the reduction of the variance of the average and of the sampling error; and the optimal number of strata when spatial interpolators are used. For the generation of the strata, 4 different dendrometric variables were studied: volume, basal area, dominant height and site index, at 2 different ages: 2.5 years and 3.5 years. It was concluded that kriging of the volume per hectare obtained at 3.5 years of age reduced the stand average variance by 47% and the inventory sampling error by 32%, when compared to simple random sampling. The volume interpolator IDW, at 3.5 years of age, reduced the stand average variance by 74% and the inventory sampling error by 48%. The least efficient stratification was the one based on age, species and spacing. In spite of the high efficiency presented by the IDW method, there is no guarantee that this efficiency will be maintained if a new sampling is carried out in the same projects, in contrast to geostatistical kriging. In forest stands that do not present spatial dependence, the IDW method can be used with great efficiency in the traditional stratification. The least efficient stratification method is the one based on the control of age, species and spacing (STR
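
The deterministic interpolator compared above, the inverse of the square of the distance (IDW), is simple enough to sketch directly. The sample plot coordinates and volumes below are invented for illustration; they are not the study's data.

```python
import math

# Hedged sketch: IDW interpolation with power 2. The estimate at an unsampled
# location is a distance-weighted average of the observed plot values.

def idw(x, y, samples, power=2.0):
    """samples: list of (xi, yi, zi) plot observations; returns z at (x, y)."""
    num = den = 0.0
    for xi, yi, zi in samples:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:          # exactly on a sampled plot: return its value
            return zi
        w = 1.0 / d ** power  # closer plots dominate the estimate
        num += w * zi
        den += w
    return num / den

# Four hypothetical inventory plots with stand volume in m3/ha.
plots = [(0, 0, 200.0), (1, 0, 240.0), (0, 1, 220.0), (1, 1, 260.0)]
z_centre = idw(0.5, 0.5, plots)   # symmetric point: the plain average
```

IDW estimates are always bounded by the observed minimum and maximum, which is one reason it cannot model spatial dependence the way kriging does.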

  3. Deterministic Approach for Estimating Critical Rainfall Threshold of Rainfall-induced Landslide in Taiwan

    Science.gov (United States)

    Chung, Ming-Chien; Tan, Chih-Hao; Chen, Mien-Min; Su, Tai-Wei

    2013-04-01

    Taiwan is an active mountain belt created by the oblique collision between the northern Luzon arc and the Asian continental margin. The inherent complexities of geological nature create numerous discontinuities through rock masses and relatively steep hillsides on the island. In recent years, the increase in the frequency and intensity of extreme natural events due to global warming or climate change has brought significant landslides. The causes of landslides in these slopes are attributed to a number of factors. As is well known, rainfall is one of the most significant triggering factors for landslide occurrence. In general, rainfall infiltration results in changing the suction and the moisture of soil, raising the unit weight of soil, and reducing the shear strength of soil in the colluvium of a landslide. The stability of a landslide is closely related to the groundwater pressure in response to rainfall infiltration, the geological and topographical conditions, and the physical and mechanical parameters. To assess the potential susceptibility to landslides, an effective modeling of rainfall-induced landslides is essential. In this paper, a deterministic approach is adopted to estimate the critical rainfall threshold of the rainfall-induced landslide. The critical rainfall threshold is defined as the accumulated rainfall at which the safety factor of the slope is equal to 1.0. First, the deterministic approach establishes the hydrogeological conceptual model of the slope based on a series of in-situ investigations, including geological drilling, surface geological investigation, geophysical investigation, and borehole explorations. The material strength and hydraulic properties of the model were given by the field and laboratory tests. Second, the hydraulic and mechanical parameters of the model are calibrated with the long-term monitoring data. Furthermore, a two-dimensional numerical program, GeoStudio, was employed to perform the modelling practice. Finally
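
The definition above, a critical rainfall threshold as the accumulated rainfall at which the safety factor reaches 1.0, can be sketched with the textbook infinite-slope model. This is not the paper's calibrated GeoStudio analysis; the soil parameters and the linear infiltration law below are invented assumptions.

```python
import math

# Hedged sketch: infinite-slope factor of safety plus a toy infiltration law
# (saturated fraction m of the soil column proportional to accumulated
# rainfall), used to locate the rainfall at which FS first drops to 1.0.

def factor_of_safety(m, c=8.0, phi=30.0, beta=35.0, z=2.0,
                     gamma=19.0, gamma_w=9.81):
    """FS for an infinite slope; m is the saturated fraction of the column.
    c [kPa], friction angle phi and slope angle beta [deg], depth z [m],
    unit weights gamma, gamma_w [kN/m3] -- all illustrative values."""
    b, p = math.radians(beta), math.radians(phi)
    u = gamma_w * m * z * math.cos(b) ** 2                    # pore pressure
    resisting = c + (gamma * z * math.cos(b) ** 2 - u) * math.tan(p)
    driving = gamma * z * math.sin(b) * math.cos(b)
    return resisting / driving

def critical_rainfall(k=0.004, step=1.0):
    """Accumulated rainfall (mm) at which FS first reaches 1.0, assuming the
    hypothetical infiltration law m = min(1, k * rainfall)."""
    rain = 0.0
    while rain < 1000.0:
        if factor_of_safety(min(1.0, k * rain)) <= 1.0:
            return rain
        rain += step
    return None   # slope stays stable even fully saturated

critical = critical_rainfall()
```

Rising pore pressure erodes the frictional term until the resisting and driving stresses balance, which is exactly where the threshold is read off.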

  4. Experimental demonstration on the deterministic quantum key distribution based on entangled photons

    Science.gov (United States)

    Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu

    2016-02-01

    As an important resource, entangled light sources have been used in developing quantum information technologies, such as quantum key distribution (QKD). There are few experiments implementing entanglement-based deterministic QKD protocols since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified “Ping-Pong” (PP) protocol. The experimental results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entangled light sources in real-life fiber-based quantum communications.

  5. Deterministic creation, pinning, and manipulation of quantized vortices in a Bose-Einstein condensate

    Science.gov (United States)

    Samson, E. C.; Wilson, K. E.; Newman, Z. L.; Anderson, B. P.

    2016-02-01

    We experimentally and numerically demonstrate deterministic creation and manipulation of a pair of oppositely charged singly quantized vortices in a highly oblate Bose-Einstein condensate (BEC). Two identical blue-detuned, focused Gaussian laser beams that pierce the BEC serve as repulsive obstacles for the superfluid atomic gas; by controlling the positions of the beams within the plane of the BEC, superfluid flow is deterministically established around each beam such that two vortices of opposite circulation are generated by the motion of the beams, with each vortex pinned to the in situ position of a laser beam. We study the vortex creation process, and show that the vortices can be moved about within the BEC by translating the positions of the laser beams. This technique can serve as a building block in future experimental techniques to create, on-demand, deterministic arrangements of few or many vortices within a BEC for precise studies of vortex dynamics and vortex interactions.

  6. Deterministic analysis of operational events in nuclear power plants. Proceedings of a technical meeting

    International Nuclear Information System (INIS)

    Computer codes are being used to analyse operational events in nuclear power plants, but until now no special attention has been given to the dissemination of the benefits from these analyses. The IAEA's Incident Reporting System contains more than 3000 reported operational events. Even though deterministic analyses were certainly performed for some of them, only a few reports are supplemented by the results of computer code analysis. From 23-26 May 2005 a Technical Meeting on Deterministic Analysis of Operational Events in Nuclear Power Plants was organized by the IAEA and held at the International Centre of Croatian Universities in Dubrovnik, Croatia. The objective of the meeting was to provide an international forum for presentations and discussions on how deterministic analysis can be utilized for the evaluation of operational events at nuclear power plants in addition to the traditional root cause evaluation methods.

  7. Structural stability of gravity dams during floods: deterministic, semi-probabilistic and probabilistic structural safety assessment

    Energy Technology Data Exchange (ETDEWEB)

    Leger, Pierre [Ecole Polytechnique de Montreal, Montreal, (Canada); Vauvy, Pierre; Boissier, Daniel [Polytech Clermont-Ferrand, Clermont-Ferrand, (France); Peyras, Laurent [Cemagref, Aix-en-Provance, (France)

    2010-07-01

    The evaluation of the safety of existing gravity dams uses semi-probabilistic or probabilistic risk analyses when the traditional deterministic safety requirements are not applicable. This paper presented a methodology to compare three safety evaluation formats for gravity dams during floods. This paper first presented a review of the structural safety assessment of gravity dams using progressive approaches. A full description of the different dam safety evaluation formats, including the deterministic approach, the most recent French dam safety guidelines based on a semi-probabilistic safety evaluation format, and probabilistic methods using Monte-Carlo simulations was provided. The hydrostatic safety of a gravity dam 46 m in height was investigated using the gravity method and provided data for comparisons of the methods. It was found that the semi-probabilistic approach proved more flexible than the deterministic approach because the partial material strength reduction and load factors could be adjusted. Advantages and disadvantages of each safety evaluation format were discussed.
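
The probabilistic format using Monte Carlo simulation can be sketched for the classical sliding check of a gravity dam section. The geometry, loads, and parameter distributions below are invented for illustration; the paper's 46 m dam and the gravity method details are not reproduced.

```python
import random

# Hedged sketch: Monte Carlo estimate of the probability that the sliding
# safety factor of a (hypothetical) gravity dam section drops below 1 during
# a flood, with cohesion and friction treated as random strength parameters.

random.seed(42)

W = 25_000.0   # self-weight per metre of dam (kN/m), illustrative
U = 9_000.0    # uplift force (kN/m), illustrative
H = 8_000.0    # horizontal hydrostatic thrust during the flood (kN/m)
A = 30.0       # base contact area per metre (m2/m)

def sliding_fs(cohesion, tan_phi):
    """Shear-friction sliding factor of safety along the base joint."""
    return (cohesion * A + (W - U) * tan_phi) / H

def prob_failure(n=100_000):
    """Fraction of sampled parameter sets with FS < 1 (probability of
    sliding failure under the assumed normal distributions)."""
    fails = 0
    for _ in range(n):
        c = random.gauss(100.0, 30.0)        # cohesion (kPa), assumed normal
        tan_phi = random.gauss(0.40, 0.05)   # friction coefficient, assumed
        if sliding_fs(max(c, 0.0), tan_phi) < 1.0:
            fails += 1
    return fails / n

pf = prob_failure()
```

The mean safety factor here is about 1.17, yet the scatter in the strength parameters still produces a non-trivial failure probability, which is precisely the information the deterministic format hides.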

  8. Concert Investigation of Novel Deterministic Interleaver for OFDM-IDMA System

    Directory of Open Access Journals (Sweden)

    A. Mary Juliet

    2013-10-01

    Full Text Available In recent years, wireless communication systems have been expected to adopt new technologies that support the high capacity needed to match the increasing demand for wireless services. A new hybrid scheme, namely Orthogonal Frequency Division Multiplexing Interleave Division Multiple Access (OFDM-IDMA), is a promising candidate for future wireless communication systems. In OFDM-IDMA the users are separated by having unique interleaver patterns. Thus the design of the interleaver plays a prominent role in user separation. In this paper, the methodology and design analysis of novel deterministic interleavers are discussed. Two different deterministic interleavers are proposed and their collision probability is studied using correlation analysis. We also analyze the multiple access interference (MAI) performance of the proposed interleavers using peak correlation analysis. The new deterministic interleavers are namely the modified circular shifting interleaver and the clockwise interleaver. The BER vs. Eb/N0 analysis is carried out for the proposed interleavers and their performance is studied.
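
User separation by deterministic interleavers can be sketched with the plain circular-shifting baseline. The paper's *modified* circular shifting and clockwise variants are not reproduced here; the block below only illustrates the idea of per-user permutations and a simple collision check, with invented parameters.

```python
# Hedged sketch: a circular-shifting deterministic interleaver for IDMA-style
# user separation, plus a collision count (positions where two users'
# interleavers read the same source index -- a crude proxy for shared MAI).

def circular_interleaver(user, n, shift=7):
    """Permutation for `user`: position i is read from (i + user*shift) mod n."""
    return [(i + user * shift) % n for i in range(n)]

def interleave(bits, pi):
    return [bits[p] for p in pi]

def deinterleave(bits, pi):
    out = [0] * len(pi)
    for i, p in enumerate(pi):
        out[p] = bits[i]
    return out

def collisions(pi_a, pi_b):
    return sum(1 for a, b in zip(pi_a, pi_b) if a == b)

N = 64
pi1 = circular_interleaver(1, N)
pi2 = circular_interleaver(2, N)
```

As long as the relative shift between two users is non-zero modulo the block length, their interleavers never coincide at any position, which is the deterministic analogue of picking independent random interleavers.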

  9. Deterministic approach for multiple-source tsunami hazard assessment for Sines, Portugal

    Science.gov (United States)

    Wronna, M.; Omira, R.; Baptista, M. A.

    2015-11-01

    In this paper, we present a deterministic approach to tsunami hazard assessment for the city and harbour of Sines, Portugal, one of the test sites of project ASTARTE (Assessment, STrategy And Risk Reduction for Tsunamis in Europe). Sines has one of the most important deep-water ports, which has oil-bearing, petrochemical, liquid-bulk, coal, and container terminals. The port and its industrial infrastructures face the ocean southwest towards the main seismogenic sources. This work considers two different seismic zones: the Southwest Iberian Margin and the Gloria Fault. Within these two regions, we selected a total of six scenarios to assess the tsunami impact at the test site. The tsunami simulations are computed using NSWING, a Non-linear Shallow Water model wIth Nested Grids. In this study, the static effect of tides is analysed for three different tidal stages: MLLW (mean lower low water), MSL (mean sea level), and MHHW (mean higher high water). For each scenario, the tsunami hazard is described by maximum values of wave height, flow depth, drawback, maximum inundation area and run-up. Synthetic waveforms are computed at virtual tide gauges at specific locations outside and inside the harbour. The final results describe the impact at the Sines test site considering the single scenarios at mean sea level, the aggregate scenario, and the influence of the tide on the aggregate scenario. The results confirm the composite source of Horseshoe and Marques de Pombal faults as the worst-case scenario, with wave heights of over 10 m, which reach the coast approximately 22 min after the rupture. It dominates the aggregate scenario by about 60 % of the impact area at the test site, considering maximum wave height and maximum flow depth. The HSMPF scenario inundates a total area of 3.5 km2.

  10. A non-deterministic approach to forecasting the trophic evolution of lakes

    Directory of Open Access Journals (Sweden)

    Roberto Bertoni

    2016-03-01

    Full Text Available Limnologists have long recognized that one of the goals of their discipline is to increase its predictive capability. In recent years, the role of prediction in applied ecology has escalated, mainly due to man’s increased ability to change the biosphere. Such alterations often come with unplanned and noticeably negative side effects mushrooming from lack of proper attention to long-term consequences. Regression analysis of common limnological parameters has been successfully applied to develop predictive models relating the variability of limnological parameters to specific key causes. These approaches, though, are biased by the requirement of an a priori cause-relation assumption, oftentimes difficult to find in the complex, nonlinear relationships entangling ecological data. A set of quantitative tools that can help address current environmental challenges while avoiding such restrictions is currently being researched and developed within the framework of ecological informatics. One of these approaches, which attempts to model the relationship between a set of inputs and known outputs, is based on genetic algorithms and programming (GP). This stochastic optimization tool is based on the process of evolution in natural systems and was inspired by a direct analogy to sexual reproduction and Charles Darwin’s principle of natural selection. GP works through genetic algorithms that use selection and recombination operators to generate a population of equations. Thanks to a 25-year-long time series of regular limnological data, the deep, large, oligotrophic Lake Maggiore (Northern Italy) is the ideal case study to test the predictive ability of GP. Testing of GP on the multi-year data series of this lake has allowed us to verify the forecasting efficacy of the models emerging from GP application. In addition, this non-deterministic approach leads to the discovery of non-obvious relationships between variables and enables the formulation of new stochastic models.
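
The selection and recombination loop that GP relies on can be shown on a much simpler task than evolving equation trees: evolving two real coefficients so that a line fits a small data set. This is a plain genetic algorithm, not full tree-based GP, and every parameter below is an illustrative assumption.

```python
import random

# Hedged sketch: tournament selection, averaging crossover, Gaussian mutation
# and elitism, evolving (a, b) so that y = a*x + b fits invented data whose
# true coefficients are a = 2, b = 1.

random.seed(1)
DATA = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]

def fitness(ind):
    a, b = ind
    return -sum((a * x + b - y) ** 2 for x, y in DATA)  # negative SSE

def tournament(pop, k=3):
    """Selection: the fittest of k randomly drawn individuals."""
    return max(random.sample(pop, k), key=fitness)

def crossover(p1, p2):
    """Recombination: a random convex combination of two parents."""
    w = random.random()
    return tuple(w * u + (1 - w) * v for u, v in zip(p1, p2))

def mutate(ind, sigma=0.3):
    return tuple(g + random.gauss(0.0, sigma) for g in ind)

pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(40)]
for _ in range(80):
    elite = max(pop, key=fitness)          # elitism: keep the best unchanged
    pop = [elite] + [mutate(crossover(tournament(pop), tournament(pop)))
                     for _ in range(len(pop) - 1)]
best = max(pop, key=fitness)
```

Full GP replaces the fixed (a, b) genome with variable-shape expression trees and the averaging crossover with subtree exchange, but the evolutionary loop is the same.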

  11. Verification & Validation of High-Order Short-Characteristics-Based Deterministic Transport Methodology on Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States); Wang, Yaqi [North Carolina State Univ., Raleigh, NC (United States)

    2013-12-20

    The research team has developed a practical, high-order, discrete-ordinates, short characteristics neutron transport code for three-dimensional configurations represented on unstructured tetrahedral grids that can be used for realistic reactor physics applications at both the assembly and core levels. This project will perform a comprehensive verification and validation of this new computational tool against both a continuous-energy Monte Carlo simulation (e.g. MCNP) and experimentally measured data, an essential prerequisite for its deployment in reactor core modeling. Verification is divided into three phases. The team will first conduct spatial mesh and expansion order refinement studies to monitor convergence of the numerical solution to reference solutions. This is quantified by convergence rates that are based on integral error norms computed from the cell-by-cell difference between the code’s numerical solution and its reference counterpart. The latter is either analytic or very fine-mesh numerical solutions from independent computational tools. For the second phase, the team will create a suite of code-independent benchmark configurations to enable testing the theoretical order of accuracy of any particular discretization of the discrete ordinates approximation of the transport equation. For each tested case (i.e. mesh and spatial approximation order), researchers will execute the code and compare the resulting numerical solution to the exact solution on a per cell basis to determine the distribution of the numerical error. The final activity comprises a comparison to continuous-energy Monte Carlo solutions for zero-power critical configuration measurements at Idaho National Laboratory’s Advanced Test Reactor (ATR). Results of this comparison will allow the investigators to distinguish between modeling errors and the above-listed discretization errors introduced by the deterministic method, and to separate the sources of uncertainty.
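
    The first-phase refinement logic can be illustrated with the standard observed-order estimate p = log(e_coarse/e_fine) / log(h_coarse/h_fine). The snippet below is a generic sketch (not the project's code), applied to manufactured error norms from a method that is exactly second order, so the recovered rate should be close to 2.

```python
import math

def observed_order(h_coarse, e_coarse, h_fine, e_fine):
    # Observed convergence rate from error norms on two mesh resolutions.
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

C = 3.7                                                # arbitrary error constant
errors = {h: C * h ** 2 for h in (0.1, 0.05, 0.025)}   # e(h) = C*h^p with p = 2

hs = sorted(errors, reverse=True)
for h_c, h_f in zip(hs, hs[1:]):
    p = observed_order(h_c, errors[h_c], h_f, errors[h_f])
    print(f"h: {h_c} -> {h_f}, observed order = {p:.3f}")
```

    In a real study the errors would come from integral norms of the cell-by-cell difference against the reference solution, but the rate extraction is exactly this calculation.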

  12. Deterministic Simulation of Alternative Breeding Objectives and Schemes for Pure Bred Cattle in Kenya

    International Nuclear Information System (INIS)

    Alternative breeding objectives and schemes for milk production were evaluated for their economic efficiency using deterministic simulation. A two-tier open nucleus breeding scheme and a young bull system (YBS) were assumed, with intensive recording and 100% artificial insemination (AI) in the nucleus and 35% AI in the commercial population, which was assumed to comprise the smallholder herds. Since most production systems are dual purpose, breeding objectives were defined which represented different scenarios. These objectives represented the present (objective 1: dual purpose), smallholder (objective 2: dual purpose with limited mature live weight) and future production situations (objective 3: dual purpose with fat-based milk price). Breeding objectives differed in the traits included and their economic values, while the breeding schemes differed in the records available for use as selection criteria as well as in the costs and investment parameters. Since the main question in establishing a breeding and recording programme is that of efficiency of investment, the monetary genetic response and profit per cow in the population were used as evaluation criteria. All breeding objectives and schemes realized profits. The objectives and schemes that ranked highly for annual monetary genetic response and total return per cow did not rank the same in profit per cow in all cases. In objective 3, the scheme that assumed records on fat yield (FY) were available for use as a selection criterion and the one that assumed no records on FY differed very little in profit per cow (approximately 4%). Therefore, under the current production and marketing conditions, a breeding scheme that requires measuring the fat content does not seem to be justified from an economic point of view. There is evidence that a well-organised breeding programme utilizing an open nucleus, a YBS and the smallholder farms as well as the commercial population could sustain itself.

  13. Buffer clustering policy for sequential production lines with deterministic processing times

    Directory of Open Access Journals (Sweden)

    Francesca Schuler

    2016-09-01

    Full Text Available A sequential production line is defined as a set of sequential operations within a factory or distribution center whereby entities undergo one or more processes to produce a final product. Sequential production lines may gain efficiencies such as increased throughput or reduced work in progress by utilizing specific configurations while maintaining the chronological order of operations. One problem identified by the authors via a case study is that some configurations, such as work-cell or U-shaped production lines that have groups of buffers, often increase space utilization. Therefore, many facilities do not take advantage of the configuration efficiencies that a work-cell or U-shaped production line provides. To solve this problem, the authors introduce the concept of a buffer cluster. A production line implemented with one or more buffer clusters maintains the throughput of the line, identical to that with dedicated buffers, but reduces the buffer storage space. The paper derives a time-based parametric model that determines the sizing of the buffer cluster, provides a reduced time space in which to search for the buffer cluster sizing, and determines an optimal buffer clustering policy that can be applied to any N-server, (N+1)-buffer sequential line configuration with deterministic processing times. This solution reduces the buffer storage space utilized while ensuring no overflows or underflows occur in the buffers. Furthermore, the paper demonstrates how the buffer clustering policy serves as an input to a facility layout tool that provides the optimal production line layout.

  14. Improving Deterministic Reserve Requirements for Security Constrained Unit Commitment and Scheduling Problems in Power Systems

    Science.gov (United States)

    Wang, Fengyu

    Traditional deterministic reserve requirements rely on ad hoc, rule-of-thumb methods to determine adequate reserve in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. Existing deterministic reserve requirements acquire operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserves. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict transfer capabilities and network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserves by explicitly modelling uncertainties, there are still scalability as well as pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is their potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. Three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch

  15. Solving the flow fields in conduits and networks using energy minimization principle with simulated annealing

    CERN Document Server

    Sochi, Taha

    2014-01-01

    In this paper, we propose and test an intuitive assumption that the pressure field in single conduits and networks of interconnected conduits adjusts itself to minimize the total energy consumption required for transporting a specific quantity of fluid. We test this assumption by using linear flow models of Newtonian fluids transported through rigid tubes and networks in conjunction with a simulated annealing (SA) protocol to minimize the total energy cost. All the results confirm our hypothesis as the SA algorithm produces very close results to those obtained from the traditional deterministic methods of identifying the flow fields by solving a set of simultaneous equations based on the conservation principles. The same results apply to electric ohmic conductors and networks of interconnected ohmic conductors. Computational experiments conducted in this regard confirm this extension. Further studies are required to test the energy minimization hypothesis for the non-linear flow systems.
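
    A minimal version of this experiment can be sketched for a three-segment chain of tubes (segment resistances, boundary pressures, and the annealing schedule below are illustrative assumptions, not the paper's setup): simulated annealing adjusts the two interior node pressures to minimize total viscous dissipation, and the minimizer should coincide with the Kirchhoff solution from the conservation principles.

```python
import math
import random

random.seed(0)

# Linear (Newtonian/ohmic) chain of three segments with resistances R,
# fixed end pressures. Total dissipation = sum((dp)^2 / R); simulated
# annealing searches the two interior pressures for its minimum.
R = [1.0, 2.0, 1.0]          # segment resistances (assumed values)
p_in, p_out = 1.0, 0.0       # fixed boundary pressures

def dissipation(p1, p2):
    drops = [p_in - p1, p1 - p2, p2 - p_out]
    return sum(dp * dp / r for dp, r in zip(drops, R))

p = [0.5, 0.5]               # initial guess for interior pressures
energy = dissipation(*p)
T = 1.0
while T > 1e-6:
    trial = [x + random.uniform(-0.05, 0.05) for x in p]
    e_trial = dissipation(*trial)
    # Metropolis acceptance: always downhill, occasionally uphill
    if e_trial < energy or random.random() < math.exp((energy - e_trial) / T):
        p, energy = trial, e_trial
    T *= 0.999               # geometric cooling schedule

print(p)   # Kirchhoff solution for this chain is p1 = 0.75, p2 = 0.25
```

    For this chain the conservation-based solution is a uniform flow of 0.25 through all segments, i.e. interior pressures 0.75 and 0.25, which the annealed pressures should approach.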

  16. Solving difficult problems creatively: A role for energy optimised deterministic/stochastic hybrid computing

    Directory of Open Access Journals (Sweden)

    Tim ePalmer

    2015-10-01

    Full Text Available How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.

  17. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing.

    Science.gov (United States)

    Palmer, Tim N; O'Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for 'eureka' moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173

  18. VISCO-ELASTIC SYSTEMS UNDER BOTH DETERMINISTIC AND BOUND RANDOM PARAMETRIC EXCITATION

    Institute of Scientific and Technical Information of China (English)

    徐伟; 戎海武; 方同

    2003-01-01

    The principal resonance of visco-elastic systems under both deterministic and random parametric excitation was investigated. The method of multiple scales was used to determine the equations of modulation of amplitude and phase. The behavior, stability and bifurcation of the steady-state response were studied by means of qualitative analysis. The contributions of the visco-elastic force to both damping and stiffness can be taken into account. The effects of damping, detuning, bandwidth, and the magnitudes of the deterministic and random excitations were analyzed. The theoretical analysis is verified by numerical results.

  19. Minimal flows and their extensions

    CERN Document Server

    Auslander, J

    1988-01-01

    This monograph presents developments in the abstract theory of topological dynamics, concentrating on the internal structure of minimal flows (actions of groups on compact Hausdorff spaces for which every orbit is dense) and their homomorphisms (continuous equivariant maps). Various classes of minimal flows (equicontinuous, distal, point distal) are intensively studied, and a general structure theorem is obtained. Another theme is the ``universal'' approach - entire classes of minimal flows are studied, rather than flows in isolation. This leads to the consideration of disjointness of flows, w

  20. Performance assessment of deterministic and probabilistic weather predictions for the short-term optimization of a tropical hydropower reservoir

    Science.gov (United States)

    Mainardi Fan, Fernando; Schwanenberg, Dirk; Alvarado, Rodolfo; Assis dos Reis, Alberto; Naumann, Steffi; Collischonn, Walter

    2016-04-01

    Hydropower is the most important electricity source in Brazil. During recent years, it accounted for 60% to 70% of the total electric power supply. Marginal costs of hydropower are lower than for thermal power plants; therefore, there is a strong economic motivation to maximize its share. On the other hand, hydropower depends on the availability of water, which has a natural variability. Its extremes lead to the risks of power production deficits during droughts and safety issues in the reservoir and downstream river reaches during flood events. One building block of the proper management of hydropower assets is the short-term forecast of reservoir inflows as input for an online, event-based optimization of the release strategy. While deterministic forecasts and optimization schemes are the established techniques for short-term reservoir management, the use of probabilistic ensemble forecasts and stochastic optimization techniques receives growing attention, and a number of studies have shown their benefits. The present work shows one of the first hindcasting and closed-loop control experiments for a multi-purpose hydropower reservoir in a tropical region of Brazil. The case study is the hydropower project (HPP) Três Marias, located in southeast Brazil. The HPP reservoir is operated with two main objectives: (i) hydroelectricity generation and (ii) flood control at Pirapora City, located 120 km downstream of the dam. In the experiments, precipitation forecasts based on observed data, together with deterministic and probabilistic forecasts with 50 ensemble members from the ECMWF, are used as forcing for the MGB-IPH hydrological model to generate streamflow forecasts over a period of 2 years. The online optimization relies on deterministic and multi-stage stochastic versions of a model predictive control scheme. Results for the perfect forecasts show the potential benefit of the online optimization and indicate a desired forecast lead time of 30 days. In comparison, the use of

  1. On balanced minimal repeated measurements designs

    Directory of Open Access Journals (Sweden)

    Shakeel Ahmad Mir

    2014-10-01

    Full Text Available Repeated measurements designs are concerned with scientific experiments in which each experimental unit is assigned more than once to a treatment, either different or identical. This class of designs has the property that unbiased estimators for elementary contrasts among direct and residual effects are obtainable. Afsarinejad (1983) provided a method of constructing balanced minimal repeated measurements designs for p < t, when t is odd or a prime power; one or more treatments may occur more than once in some sequences, and designs so constructed no longer remain uniform in periods. In this paper an attempt has been made to provide a new method to overcome this drawback. Specifically, two cases have been considered: RM[t, n = t(t-1)/(p-1), p], λ2 = 1, for balanced minimal repeated measurements designs, and RM[t, n = 2t(t-1)/(p-1), p], λ2 = 2, for balanced repeated measurements designs. In addition, a method has been provided for constructing extra-balanced minimal designs for the special case RM[t, n = t²/(p-1), p], λ2 = 1.

  2. Fabrication of the Advanced X-ray Astrophysics Facility (AXAF) Optics: A Deterministic, Precision Engineering Approach to Optical Fabrication

    Science.gov (United States)

    Gordon, T. E.

    1995-01-01

    The mirror assembly of the AXAF observatory consists of four concentric, confocal, Wolter type 1 telescopes. Each telescope includes two conical grazing incidence mirrors, a paraboloid followed by a hyperboloid. Fabrication of these state-of-the-art optics is now complete, with predicted performance that surpasses the goals of the program. The fabrication of these optics, whose size and requirements exceed those of any previous x-ray mirrors, presented a challenging task requiring the use of precision engineering in many different forms. Virtually all of the equipment used for this effort required precision engineering. Accurate metrology required deterministic support of the mirrors in order to model the gravity distortions which will not be present on orbit. The primary axial instrument, known as the Precision Metrology Station (PMS), was a unique scanning Fizeau interferometer. After metrology was complete, the optics were placed in specially designed Glass Support Fixtures (GSF's) for installation on the Automated Cylindrical Grinder/Polishers (ACG/P's). The GSF's were custom molded for each mirror element to match the shape of the outer surface to minimize distortions of the inner surface. The final performance of the telescope is expected to far exceed the original goals and expectations of the program.

  3. A deterministic oscillatory model of microtubule growth and shrinkage for differential actions of short chain fatty acids.

    Science.gov (United States)

    Kilner, Josephine; Corfe, Bernard M; McAuley, Mark T; Wilkinson, Stephen J

    2016-01-01

    Short chain fatty acids (SCFA), principally acetate, propionate, butyrate and valerate, are produced in pharmacologically relevant concentrations by the gut microbiome. Investigations indicate that they exert beneficial effects on colon epithelia. There is increasing interest in whether different SCFAs have distinct functions which may be exploited for prevention or treatment of colonic diseases including colorectal cancer (CRC), inflammatory bowel disease and obesity. Based on experimental evidence, we hypothesised that odd-chain SCFAs may possess anti-mitotic capabilities in colon cancer cells by disrupting microtubule (MT) structural integrity via dysregulation of β-tubulin isotypes. MT dynamic instability is an essential characteristic of MT cellular activity. We report a minimal deterministic model that takes a novel approach to explore the hypothesised pathway by triggering spontaneous oscillations to represent MT dynamic behaviour. The dynamicity parameters in silico were compared to those reported in vitro. Simulations of untreated and butyrate (even-chain length) treated cells reflected MT behaviour in interphase or untreated control cells. The propionate and valerate (odd-chain length) simulations displayed increased catastrophe frequencies and longer periods of MT-fibre shrinkage. Their enhanced dynamicity was dissimilar to that observed in mitotic cells, but parallel to that induced by MT-destabilisation treatments. Antimicrotubule drugs act through upward or downward modulation of MT dynamic instability. Our computational modelling suggests that metabolic engineering of the microbiome may facilitate managing CRC risk by predicting outcomes of SCFA treatments in combination with AMDs. PMID:26562762
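
    The dynamic-instability vocabulary used here (growth and shrinkage speeds, catastrophe and rescue frequencies) can be illustrated with a common two-state caricature. The snippet below is not the authors' deterministic oscillatory model; it is a simple stochastic switching simulation with assumed parameter values, showing how a higher catastrophe frequency, as in the odd-chain SCFA simulations, shortens the time-averaged fibre length.

```python
import random

random.seed(42)

def mean_length(f_cat, f_res, v_g=1.0, v_s=3.0, dt=0.01, steps=200_000):
    # Two-state microtubule caricature: grow at v_g, shrink at v_s,
    # switch with catastrophe frequency f_cat and rescue frequency f_res.
    length, growing, total = 0.0, True, 0.0
    for _ in range(steps):
        if growing:
            length += v_g * dt
            if random.random() < f_cat * dt:
                growing = False                 # catastrophe
        else:
            length = max(0.0, length - v_s * dt)
            if length == 0.0:
                growing = True                  # regrow from the nucleation site
            elif random.random() < f_res * dt:
                growing = True                  # rescue
        total += length
    return total / steps

low_cat = mean_length(f_cat=0.05, f_res=0.1)
high_cat = mean_length(f_cat=0.5, f_res=0.1)
print(low_cat, high_cat)  # higher catastrophe frequency -> shorter average fibre
```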

  4. Minimally Invasive Aortic Valve Replacement

    Medline Plus

    Full Text Available ... his choice. Tariq asks, can we see one day that minimally invasive techniques to replace conventional heart ... rarely need those pacemaker wires after the first day after surgery. Okay, we have another question from ...

  5. Minimally Invasive Aortic Valve Replacement

    Medline Plus

    Full Text Available ... to minimize their symptoms, but that doesn't impact the course of the disease itself. When I' ... more likely we see aortic stenosis. Again, patient education is part of the evaluation and management of ...

  6. Minimally Invasive Aortic Valve Replacement

    Medline Plus

    Full Text Available ... to minimize their symptoms, but that doesn't impact the course of the disease itself. When I' ... echocardiography is? Echocardiography is the use of ultrasound technology. Ultrasound technology is a form of the same ...

  7. Cost minimization and asset pricing

    OpenAIRE

    Chambers, Robert G.; John Quiggin

    2005-01-01

    A cost-based approach to asset-pricing equilibrium relationships is developed. A cost function induces a stochastic discount factor (pricing kernel) that is a function of random output, prices, and capital stock. By eliminating opportunities for arbitrage between financial markets and the production technology, firms minimize the current cost of future consumption. The first-order conditions for this cost minimization problem generate the stochastic discount factor. The cost-based approach i...

  8. $\\alpha$-minimal Banach spaces

    CERN Document Server

    Rosendal, Christian

    2011-01-01

    A Banach space with a Schauder basis is said to be $\\alpha$-minimal for some countable ordinal $\\alpha$ if, for any two block subspaces, the Bourgain embeddability index of one into the other is at least $\\alpha$. We prove a dichotomy that characterises when a Banach space has an $\\alpha$-minimal subspace, which contributes to the ongoing project, initiated by W. T. Gowers, of classifying separable Banach spaces by identifying characteristic subspaces.

  9. Minimal Massive 3D Gravity

    OpenAIRE

    Bergshoeff, Eric; Hohm, Olaf; Merbis, Wout; Routh, Alasdair J.; Townsend, Paul K

    2014-01-01

    We present an alternative to Topologically Massive Gravity (TMG) with the same "minimal" bulk properties; i.e. a single local degree of freedom that is realized as a massive graviton in linearization about an anti-de Sitter (AdS) vacuum. However, in contrast to TMG, the new "minimal massive gravity" has both a positive energy graviton and positive central charges for the asymptotic AdS-boundary conformal algebra.

  10. BOT3P: a mesh generation software package for transport analysis with deterministic and Monte Carlo codes

    International Nuclear Information System (INIS)

    BOT3P consists of a set of standard Fortran 77 language programs that gives the users of the deterministic transport codes DORT, TORT, TWODANT, THREEDANT, PARTISN and the sensitivity code SUSD3D some useful diagnostic tools to prepare and check the geometry of their input data files for both Cartesian and cylindrical geometries, including graphical display modules. Users can produce the geometrical and material distribution data for all the cited codes for both two-dimensional and three-dimensional applications and, only in 3-dimensional Cartesian geometry, for the Monte Carlo Transport Code MCNP, starting from the same BOT3P input. Moreover, BOT3P stores the fine mesh arrays and the material zone map in a binary file, the content of which can be easily interfaced to any deterministic and Monte Carlo transport code. This makes it possible to compare directly for the same geometry the effects stemming from the use of different data libraries and solution approaches on transport analysis results. BOT3P Version 5.0 lets users optionally and with the desired precision compute the area/volume error of material zones with respect to the theoretical values, if any, because of the stair-cased representation of the geometry, and automatically update material densities on the whole zone domains to conserve masses. A local (per mesh) density correction approach is also available. BOT3P is designed to run on Linux/UNIX platforms and is publicly available from the Organization for Economic Cooperation and Development (OECD/NEA)/Nuclear Energy Agency Data Bank. Through the use of BOT3P, radiation transport problems with complex 3-dimensional geometrical structures can be modelled easily, as a relatively small amount of engineer-time is required and refinement is achieved by changing a few parameters.
This tool is useful for solving very large challenging problems, as successfully demonstrated not only in some complex neutron shielding and criticality benchmarks but also in a power
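
    The stair-casing area/volume error and whole-zone density correction described above can be sketched generically (this is not BOT3P code; the circular test zone and grid size are assumptions): a circle is represented by flagging Cartesian cells whose centres fall inside it, the stair-cased area is compared with the theoretical πr², and the density is scaled by their ratio to conserve the zone's mass.

```python
import math

def staircase_area(r, n_cells=200, extent=1.5):
    # Stair-cased representation: count square cells whose centres lie
    # inside the circle of radius r, over [-extent, extent]^2.
    h = 2 * extent / n_cells                 # square cell size
    area = 0.0
    for i in range(n_cells):
        for j in range(n_cells):
            x = -extent + (i + 0.5) * h
            y = -extent + (j + 0.5) * h
            if x * x + y * y <= r * r:
                area += h * h
    return area

r = 1.0
true_area = math.pi * r * r                  # theoretical zone area
mesh_area = staircase_area(r)
correction = true_area / mesh_area           # density scale factor for the zone
print(mesh_area, correction)                 # correction -> 1 as the mesh refines
```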

  11. Minimizing shortfall risk for multiple assets derivatives

    CERN Document Server

    Barski, Michal

    2011-01-01

    The risk minimizing problem $\\mathbf{E}[l((H-X_T^{x,\\pi})^{+})]\\overset{\\pi}{\\longrightarrow}\\min$ in the Black-Scholes framework with correlation is studied. General formulas for the minimal risk function and the cost reduction function for the option $H$ depending on multiple underlying are derived. The case of a linear and a strictly convex loss function $l$ are examined. Explicit computation for $l(x)=x$ and $l(x)=x^p$, with $p>1$ for digital, quantos, outperformance and spread options are presented. The method is based on the quantile hedging approach presented in \\cite{FL1}, \\cite{FL2} and developed for the multidimensional options in \\cite{Barski}.

  12. Minimal flavour violation and SU(5)-unification

    International Nuclear Information System (INIS)

    Minimal flavour violation in its strong or weak versions, based on U(3)3 and U(2)3, respectively, allows suitable extensions of the standard model at the TeV scale to comply with current flavour constraints in the quark sector. Here we discuss considerations analogous to minimal flavour violation (MFV) in the context of SU(5)-unification, showing the new effects/constraints that arise both in the quark and in the lepton sector, where quantitative statements can be made controlled by the CKM matrix elements. The case of supersymmetry is examined in detail as a particularly motivated example. Third generation sleptons and neutralinos in the few hundred GeV range are shown to be compatible with current constraints. (orig.)

  13. Minimal flavour violation and SU(5)-unification

    Energy Technology Data Exchange (ETDEWEB)

    Barbieri, Riccardo, E-mail: barbieri@sns.it; Senia, Fabrizio, E-mail: fabrizio.senia@sns.it [Scuola Normale Superiore and INFN, Piazza dei Cavalieri 7, 56126, Pisa (Italy)

    2015-12-17

    Minimal flavour violation in its strong or weak versions, based on U(3){sup 3} and U(2){sup 3}, respectively, allows suitable extensions of the standard model at the TeV scale to comply with current flavour constraints in the quark sector. Here we discuss considerations analogous to minimal flavour violation (MFV) in the context of SU(5)-unification, showing the new effects/constraints that arise both in the quark and in the lepton sector, where quantitative statements can be made controlled by the CKM matrix elements. The case of supersymmetry is examined in detail as a particularly motivated example. Third generation sleptons and neutralinos in the few hundred GeV range are shown to be compatible with current constraints.

  14. The phase-space analysis of scalar fields with non-minimally derivative coupling

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Yumei [Beijing Normal University, Department of Astronomy, Beijing (China); Gao, Qing; Gong, Yungui [Huazhong University of Science and Technology, MOE Key Laboratory of Fundamental Quantities Measurement, School of Physics, Wuhan, Hubei (China)

    2015-04-01

    We perform a dynamical analysis for the exponential scalar field with non-minimally derivative coupling. For the quintessence case, the stable fixed points are the same with and without the non-minimally derivative coupling. For the phantom case, the attractor with dark energy domination exists for the minimal coupling only. For the non-minimally derivative coupling without the standard canonical kinetic term, only the de Sitter attractor exists, and the dark matter solution is unstable. (orig.)

  15. Optimal deterministic shallow cuttings for 3D dominance ranges

    DEFF Research Database (Denmark)

    Afshani, Peyman; Tsakalidis, Konstantinos

    2014-01-01

    In the concurrent range reporting (CRR) problem, the input is L disjoint sets S1..., SL of points in Rd with a total of N points. The goal is to preprocess the sets into a structure such that, given a query range r and an arbitrary set Q ⊆ {1,..., L}, we can efficiently report all the points in Si...... model (as well as comparison models such as the real RAM model), answering queries requires Ω(|Q|log(L/|Q|) + logN + K) time in the worst case, where K is the number of output points. In one dimension, we achieve this query time with a linear-space dynamic data structure that requires optimal O(log N...... times of O(|Q|log(N/|Q|) + K) and O(2LL + logN + K). Finally, we give an optimal data structure for three-sided ranges for the case L = O(log N). Copyright © 2014 by the Society for Industrial and Applied Mathematics....

  16. Deterministic Secure Direct Communication by Using Swapping Quantum Entanglement and Local Unitary Operations

    Institute of Scientific and Technical Information of China (English)

    MAN Zhong-Xiao; ZHANG Zhan-Jun; LI Yong

    2005-01-01

    A deterministic direct quantum communication protocol is proposed by using swapping quantum entanglement and local unitary operations. The present protocol is secure; the proof of its security is the same as that for the two-step protocol [Phys. Rev. A 68 (2003) 042317]. Additionally, the advantages and disadvantages of the present protocol are also discussed.

  17. RISK ESTIMATES FOR DETERMINISTIC HEALTH EFFECTS OF INHALED WEAPONS GRADE PLUTONIUM

    Science.gov (United States)

    Risk estimates for deterministic effects of inhaled weapons-grade plutonium (WG Pu) are needed to evaluate potential serious harm to: (1) U. S. Department of Energy nuclear workers from accidental or other work-place releases of WG Pu; and (2) the public from terrorist actions re...

  18. 3D shape measurement using deterministic phase retrieval and a partially developed speckle field

    DEFF Research Database (Denmark)

    Almoro, Percival F.; Waller, Laura; Agour, Mostafa;

    2012-01-01

    For deterministic phase retrieval, the problem of insignificant axial intensity variations upon defocus of a smooth object wavefront is addressed. Our proposed solution is based on the use of a phase diffuser facilitating the formation of a partially-developed speckle field (i.e., a field with bo...

  19. Tag-mediated cooperation with non-deterministic genotype-phenotype mapping

    Science.gov (United States)

    Zhang, Hong; Chen, Shu

    2016-01-01

    Tag-mediated cooperation provides a helpful framework for resolving evolutionary social dilemmas. However, most of the previous studies have not taken into account the genotype-phenotype distinction in tags, which may play an important role in the process of evolution. To take this into consideration, we introduce non-deterministic genotype-phenotype mapping into a tag-based model with a spatial prisoner's dilemma. By our definition, the similarity between genotypic tags does not directly imply the similarity between phenotypic tags. We find that the non-deterministic mapping from genotypic tag to phenotypic tag has non-trivial effects on tag-mediated cooperation. Although we observe that high levels of cooperation can be established under a wide variety of conditions, especially when the decisiveness is moderate, the uncertainty in the determination of phenotypic tags may have a detrimental effect on the tag mechanism by disturbing the homophilic interaction structure which can explain the promotion of cooperation in tag systems. Furthermore, the non-deterministic mapping may undermine the robustness of the tag mechanism with respect to various factors such as the structure of the tag space and the tag flexibility. This observation warns us about the danger of applying classical tag-based models to the analysis of empirical phenomena if the genotype-phenotype distinction is significant in the real world. Non-deterministic genotype-phenotype mapping thus provides a new perspective on the understanding of tag-mediated cooperation.
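
    The disruption of the homophilic interaction structure by a noisy genotype-to-phenotype mapping can be illustrated directly. In the sketch below (the tolerance and noise levels are assumptions, not the paper's parameters), an agent's displayed tag is its genotypic tag plus Gaussian noise, and we estimate how often two agents with identical genotypic tags still recognise each other as similar.

```python
import random

random.seed(7)

def recognition_prob(noise_sd, tolerance=0.1, trials=50_000):
    # Probability that two agents sharing a genotypic tag still satisfy
    # |phenotype_i - phenotype_j| < tolerance under a noisy mapping.
    hits = 0
    for _ in range(trials):
        g = random.random()                      # shared genotypic tag
        p_i = g + random.gauss(0, noise_sd)      # displayed phenotypic tags
        p_j = g + random.gauss(0, noise_sd)
        hits += abs(p_i - p_j) < tolerance
    return hits / trials

for sd in (0.0, 0.05, 0.2):
    print(sd, recognition_prob(sd))
```

    With zero noise the mapping is deterministic and recognition is certain; as the noise grows, genotypically identical agents increasingly fail to recognise each other, which is the mechanism the abstract identifies as detrimental to tag-mediated cooperation.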

  20. Bounds for right tails of deterministic and stochastic sums of random variables

    NARCIS (Netherlands)

    G. Darkiewicz; G. Deelstra; J. Dhaene; T. Hoedemakers; M. Vanmaele

    2009-01-01

    We investigate lower and upper bounds for right tails (stop-loss premiums) of deterministic and stochastic sums of nonindependent random variables. The bounds are derived using the concepts of comonotonicity, convex order, and conditioning. The performance of the presented approximations is investig