WorldWideScience

Sample records for case minimal deterministic

  1. Nonlinear Boltzmann equation for the homogeneous isotropic case: Minimal deterministic Matlab program

    Science.gov (United States)

    Asinari, Pietro

    2010-10-01

    The homogeneous isotropic Boltzmann equation (HIBE) is a fundamental dynamic model for many applications in thermodynamics, econophysics and sociodynamics. Despite recent hardware improvements, the solution of the Boltzmann equation remains extremely challenging from the computational point of view, in particular by deterministic methods (free of stochastic noise). This work aims to improve a deterministic direct method recently proposed [V.V. Aristov, Kluwer Academic Publishers, 2001] for solving the HIBE with a generic collisional kernel and, in particular, for taking care of the late dynamics of the relaxation towards equilibrium. Essentially, (a) the original problem is reformulated in terms of particle kinetic energy (exact particle number and energy conservation during microscopic collisions) and (b) the computation of the relaxation rates is improved by a DVM-like correction, where DVM stands for Discrete Velocity Model (ensuring that the macroscopic conservation laws are exactly satisfied). Both corrections make it possible to derive very accurate reference solutions for this test case. Moreover, this work aims to distribute an open-source program (called HOMISBOLTZ), which can be redistributed and/or modified for dealing with different applications, under the terms of the GNU General Public License. The program has been purposely designed to be minimal, not only with regard to its reduced number of lines (less than 1000), but also with regard to its coding style (as simple as possible).

    Program summary
    Program title: HOMISBOLTZ
    Catalogue identifier: AEGN_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGN_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License
    No. of lines in distributed program, including test data, etc.: 23 340
    No. of bytes in distributed program, including test data, etc.: 7 635 236
    Distribution format: tar
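
    A minimal sketch of the two ideas highlighted above (an energy-space reformulation plus a discrete conservation correction), written in Python with an illustrative grid, initial state and BGK-type relaxation as assumptions; this is not the distributed Matlab code:

      import numpy as np

      # Hedged sketch, not HOMISBOLTZ itself: relax an isotropic distribution
      # on a kinetic-energy grid and enforce exact discrete conservation of
      # particle number and energy after each step (DVM-like correction).
      E = np.linspace(0.01, 12.0, 240)         # kinetic-energy grid (assumed)
      dE = E[1] - E[0]
      w = np.sqrt(E)                           # isotropic density of states

      def mom(g):                              # discrete number and energy
          return np.array([(w * g).sum(), (w * E * g).sum()]) * dE

      f = np.exp(-(np.sqrt(E) - 1.5) ** 2)     # non-equilibrium initial state
      n0, e0 = mom(f)
      for _ in range(400):
          T = e0 / (1.5 * n0)                  # temperature fixed by invariants
          feq = np.exp(-E / T)
          feq *= n0 / ((w * feq).sum() * dE)   # match particle number
          f += 0.05 * (feq - f)                # BGK-like relaxation, dt/tau = 0.05
          # correction f <- f + a*feq + b*E*feq, with (a, b) chosen so that
          # the discrete number/energy defect vanishes exactly
          A = np.column_stack([mom(feq), mom(E * feq)])
          a, b = np.linalg.solve(A, np.array([n0, e0]) - mom(f))
          f += a * feq + b * E * feq
      print(mom(f) - np.array([n0, e0]))       # ~[0, 0] to machine precision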

  2. Nonlinear Markov processes: Deterministic case

    International Nuclear Information System (INIS)

    Frank, T.D.

    2008-01-01

    Deterministic Markov processes that exhibit nonlinear transition mechanisms for probability densities are studied. In this context, the following issues are addressed: the Markov property, conditional probability densities, propagation of probability densities, multistability in terms of multiple stationary distributions, stability analysis of stationary distributions, and basins of attraction of stationary distributions.
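
    For a concrete feel of the multistability discussed above, here is a hedged toy example (the rates are invented) of a two-state nonlinear Markov process in which the transition mechanism depends on the current probability density, so different initial densities relax to different stationary distributions:

      # dp/dt = (1-p)*w_up(p) - p*w_down, with density-dependent rate
      # w_up(p) = p**2 and constant w_down = 0.16 (illustrative choices).
      # Stationary points: p = 0 (stable), 0.2 (unstable), 0.8 (stable).
      def flow(p):
          return (1 - p) * p**2 - 0.16 * p

      for p0 in (0.1, 0.3, 0.9):
          p, dt = p0, 0.1
          for _ in range(5000):        # forward Euler propagation
              p += dt * flow(p)
          print(p0, "->", round(p, 3))
      # 0.1 -> 0.0 while 0.3 and 0.9 both -> 0.8: two coexisting
      # stationary distributions, i.e. multistability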

  3. A deterministic annealing algorithm for approximating a solution of the linearly constrained nonconvex quadratic minimization problem.

    Science.gov (United States)

    Dang, Chuangyin; Liang, Jianqing; Yang, Yang

    2013-03-01

    A deterministic annealing algorithm is proposed for approximating a solution of the linearly constrained nonconvex quadratic minimization problem. The algorithm is derived from applications of a Hopfield-type barrier function in dealing with box constraints and Lagrange multipliers in handling linear equality constraints, and attempts to obtain a solution of good quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the algorithm searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that the box constraints are always satisfied automatically if the step length is a number between zero and one. At each iteration, the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the algorithm converges to a stationary point of the barrier problem. Preliminary numerical results show that the algorithm seems effective and efficient. Copyright © 2012 Elsevier Ltd. All rights reserved.
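
    The core loop is easy to sketch. Below is a hedged illustration (problem data and step size invented, linear equality constraints omitted) of minimizing a nonconvex quadratic over the unit box with a Hopfield-type entropic barrier and a descending barrier parameter:

      import numpy as np

      Q = np.array([[-2.0,  1.0],
                    [ 1.0, -3.0]])            # indefinite, hence nonconvex
      c = np.array([0.5, -0.2])

      def grad(x, mu):
          # gradient of x'Qx/2 + c'x + mu*sum(x*log x + (1-x)*log(1-x));
          # the barrier gradient mu*log(x/(1-x)) diverges at the box
          # boundary, so iterates stay inside [0, 1]^n automatically
          return Q @ x + c + mu * np.log(x / (1 - x))

      x = np.full(2, 0.5)                     # interior starting point
      for mu in (1.0, 0.3, 0.1, 0.03, 0.01):  # descending barrier parameter
          for _ in range(500):                # descent for fixed mu
              x -= 0.05 * grad(x, mu)
              x = np.clip(x, 1e-9, 1 - 1e-9)  # numerical safeguard
      print(x)                                # approximate box-constrained minimizer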

  4. Stochastic and deterministic multiscale models for systems biology: an auxin-transport case study

    Directory of Open Access Journals (Sweden)

    King John R

    2010-03-01

    Background: Stochastic and asymptotic methods are powerful tools in developing multiscale systems biology models; however, little has been done in this context to compare the efficacy of these methods. The majority of current systems biology modelling research, including that of auxin transport, uses numerical simulations to study the behaviour of large systems of deterministic ordinary differential equations, with little consideration of alternative modelling frameworks.

    Results: In this case study, we solve an auxin-transport model using analytical methods, deterministic numerical simulations and stochastic numerical simulations. Although the three approaches in general predict the same behaviour, the approaches provide different information that we use to gain distinct insights into the modelled biological system. We show in particular that the analytical approach readily provides straightforward mathematical expressions for the concentrations and transport speeds, while the stochastic simulations naturally provide information on the variability of the system.

    Conclusions: Our study provides a constructive comparison which highlights the advantages and disadvantages of each of the considered modelling approaches. This will prove helpful to researchers when weighing up which modelling approach to select. In addition, the paper goes some way to bridging the gap between these approaches, which in the future we hope will lead to integrative hybrid models.
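
    To make the comparison concrete, here is a hedged side-by-side sketch of the two numerical frameworks on a toy birth-death process (not the auxin model itself; the rates are invented): the deterministic ODE gives the mean behaviour, while the Gillespie simulation exposes the variability:

      import random

      k, g, T = 10.0, 0.1, 50.0       # production rate, decay rate, horizon

      # deterministic ODE dn/dt = k - g*n, forward Euler
      n, dt = 0.0, 0.01
      for _ in range(int(T / dt)):
          n += dt * (k - g * n)

      # exact stochastic simulation (Gillespie) of the same kinetics
      random.seed(1)
      m, t = 0, 0.0
      while t < T:
          a_birth, a_death = k, g * m          # reaction propensities
          a_total = a_birth + a_death
          t += random.expovariate(a_total)     # waiting time to next event
          m += 1 if random.random() < a_birth / a_total else -1

      print(round(n, 1), m)   # ODE mean ~99.3; one noisy stochastic realization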

  5. A Case for Dynamic Reverse-code Generation to Debug Non-deterministic Programs

    Directory of Open Access Journals (Sweden)

    Jooyong Yi

    2013-09-01

    Backtracking (i.e., reverse execution) helps the user of a debugger to naturally think backwards along the execution path of a program, and thinking backwards makes it easy to locate the origin of a bug. So far backtracking has been implemented mostly by state saving or by checkpointing. These implementations, however, inherently do not scale. Meanwhile, a more recent backtracking method based on reverse-code generation seems promising because executing reverse code can restore the previous states of a program without state saving. Two methods that generate reverse code can be found in the literature: (a) static reverse-code generation, which pre-generates reverse code through static analysis before starting a debugging session, and (b) dynamic reverse-code generation, which generates reverse code by applying dynamic analysis on the fly during a debugging session. In particular, we espoused the latter in our previous work to accommodate non-determinism of a program caused by, e.g., multi-threading. To demonstrate the usefulness of our dynamic reverse-code generation, this article presents a case study of various backtracking methods including ours. We compare the memory usage of various backtracking methods in a simple but nontrivial example, a bounded-buffer program. In the case of non-deterministic programs such as this bounded-buffer program, our dynamic reverse-code generation outperforms the existing backtracking methods in terms of memory efficiency.
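
    A toy illustration of the dynamic idea (an invented example, far simpler than real multi-threaded debugging): while the program runs, each destructive update pushes its own inverse, and backtracking replays the inverses instead of restoring saved snapshots:

      state = {"x": 0, "buf": []}
      reverse_code = []                 # inverse statements, generated on the fly

      def assign(var, value):
          old = state[var]              # dynamic analysis: observe the overwritten value
          reverse_code.append(lambda var=var, old=old: state.__setitem__(var, old))
          state[var] = value

      def push(item):
          state["buf"].append(item)
          reverse_code.append(lambda: state["buf"].pop())

      assign("x", 42); push("a"); push("b"); assign("x", 7)
      while reverse_code:               # backtrack to the initial state
          reverse_code.pop()()
      print(state)                      # {'x': 0, 'buf': []}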

  6. Deterministic chaos

    CERN Document Server

    Eckmann, Jean-Pierre

    1999-01-01

    In these lectures, I will give an overview of the mathematical and physical aspects of deterministic chaotic systems. Starting from simple examples, I plan to cover some crucial notions of the theory such as: hyperbolicity, shadowing and ergodic properties.
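
    The logistic map is the standard "simple example" in such lectures; the snippet below (conventional parameters) estimates its Lyapunov exponent, whose positivity is the hallmark of deterministic chaos:

      import math

      # logistic map x -> r*x*(1-x); at r = 4 the Lyapunov exponent is ln 2
      r, x, acc, N = 4.0, 0.2, 0.0, 100_000
      for _ in range(N):
          acc += math.log(abs(r * (1 - 2 * x)))   # log of local stretching rate
          x = r * x * (1 - x)
      print(acc / N)                              # ~0.693 = ln 2 > 0: chaos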

  7. Accuracy of probabilistic and deterministic record linkage: the case of tuberculosis.

    Science.gov (United States)

    Oliveira, Gisele Pinto de; Bierrenbach, Ana Luiza de Souza; Camargo, Kenneth Rochel de; Coeli, Cláudia Medina; Pinheiro, Rejane Sobrino

    2016-08-22

    To analyze the accuracy of deterministic and probabilistic record linkage to identify TB duplicate records, as well as the characteristics of discordant pairs.

    The study analyzed all TB records from 2009 to 2011 in the state of Rio de Janeiro. A deterministic record linkage algorithm was developed using a set of 70 rules, based on the combination of fragments of the key variables with or without modification (Soundex or substring). Each rule was formed by three or more fragments. The probabilistic approach required a cutoff point for the score, above which the links would be automatically classified as belonging to the same individual. The cutoff point was obtained by linkage of the Notifiable Diseases Information System - Tuberculosis database with itself, subsequent manual review, and ROC and precision-recall curves. Sensitivity and specificity were calculated for the accuracy analysis.

    Accuracy ranged from 87.2% to 95.2% for sensitivity and from 99.8% to 99.9% for specificity for probabilistic and deterministic record linkage, respectively. The occurrence of missing values for the key variables and the low percentage of similarity for name and date of birth were mainly responsible for the failure to identify records of the same individual with the techniques used. The two techniques showed a high level of correlation for pair classification. Although deterministic linkage identified more duplicate records than probabilistic linkage, the latter retrieved records not identified by the former. User need and experience should be considered when choosing the best technique to be used.
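
    For illustration, one deterministic rule of the kind described might look as follows; the field names, fragment lengths and the simplified Soundex are our assumptions, not the paper's actual 70 rules:

      def soundex(name):
          # simplified Soundex (ignores the h/w subtleties of the full algorithm)
          codes = {}
          for group, digit in (("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                               ("l", "4"), ("mn", "5"), ("r", "6")):
              codes.update(dict.fromkeys(group, digit))
          name = name.lower()
          out, last = name[0].upper(), codes.get(name[0], "")
          for ch in name[1:]:
              code = codes.get(ch, "")
              if code and code != last:
                  out += code
              last = code
          return (out + "000")[:4]

      def rule(a, b):
          # link if the name sounds alike, the birth year matches, and a
          # substring fragment of the mother's name matches
          return (soundex(a["name"]) == soundex(b["name"])
                  and a["birth"][:4] == b["birth"][:4]
                  and a["mother"][:6] == b["mother"][:6])

      rec1 = {"name": "Silva", "birth": "1975-03-02", "mother": "Mariana Souza"}
      rec2 = {"name": "Sylva", "birth": "1975-03-20", "mother": "Marianna S."}
      print(rule(rec1, rec2))   # True: this rule links the pair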

  8. Minimalism

    CERN Document Server

    Obendorf, Hartmut

    2009-01-01

    The notion of Minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction and thus forms a standpoint that allows definition of aspects of simplicity. This book traces the development of minimalism, defines the four types of minimalism in interaction design, and looks at how to apply it.

  9. Treatment of cervical agenesis with minimally invasive therapy: Case report

    Directory of Open Access Journals (Sweden)

    Azami Denas Azinar

    2017-11-01

    Cervical agenesis is a very rare congenital disorder in which the cervix is not formed. Because the cervix is occluded, menstrual blood cannot be drained. We report the case of a 19-year-old, unmarried woman with endometrioma, hematometra and cervical agenesis, in whom we performed combined laparoscopic and transvaginal surgery with laparoscopic cystectomy, creation of a neocervix, and placement of a No. 24F catheter in the new cervix. She now has normal menstruation. Minimally invasive therapy is recommended in such congenital anomaly cases to preserve reproductive function. Keywords: cervical agenesis, minimally invasive therapy

  10. Minimally Invasive Surgical Treatment of Acute Epidural Hematoma: Case Series

    Directory of Open Access Journals (Sweden)

    Weijun Wang

    2016-01-01

    Background and Objective. Although minimally invasive surgical treatment of acute epidural hematoma attracts increasing attention, no generalized indications for the surgery have been adopted. This study aimed to evaluate the effects of minimally invasive surgery in acute epidural hematoma with various hematoma volumes. Methods. Minimally invasive puncture and aspiration surgery was performed in 59 cases of acute epidural hematoma with various hematoma volumes (13–145 mL); postoperative follow-up was 3 months. Clinical data, including surgical trauma, surgery time, complications, and outcome of hematoma drainage, recovery, and Barthel index scores, were assessed, as well as treatment outcome. Results. Surgical trauma was minimal and surgery time was short (10–20 minutes); no anesthesia accidents or surgical complications occurred. Two patients died. Drainage was completed within 7 days in the remaining 57 cases. Barthel index scores of ADL were ≤40 (n=1), 41–60 (n=1), and >60 (n=55); scores of 100 were obtained in 48 cases, with no dysfunctions. Conclusion. Satisfactory results can be achieved with minimally invasive surgery in treating acute epidural hematoma with hematoma volumes ranging from 13 to 145 mL. For patients with hematoma volume >50 mL and even cerebral herniation, flexible application of minimally invasive surgery would help improve treatment efficacy.

  11. [Minimally invasive surgery for Chance fractures: Three case studies].

    Science.gov (United States)

    Blondel, B; Fuentes, S; Rambolarimanana, T; Metellus, P; Dufour, H

    2010-02-01

    Chance fractures are quite rare injuries that require surgical treatment in cases of spinal instability. Development of percutaneous and minimally invasive procedures can alter the management of such lesions, resulting in fewer related soft tissue lesions and morbidities. We present our experience with three patients who underwent percutaneous posterior osteosynthesis associated with a minimally invasive anterior graft for discal lesion. The first two cases presented fracture through the disc and osteosynthesis was done on a single mobile level. In the third case with a bony Chance fracture, we performed a short-segment fixation one level above and below the fractured vertebra. In all three cases, operative blood loss was minimal and clinical outcomes were favorable, with tolerable postoperative pain. Fusion and consolidation were visible for all the patients without loss of correction or implant failure. Percutaneous osteosynthesis and minimally invasive surgery can be an advantageous alternative for the management of Chance fractures. They allow early mobilization of the patient with less soft tissue trauma and morbidities associated with open procedures. Copyright 2009 Elsevier Masson SAS. All rights reserved.

  12. Minimally invasive treatment of hepatic adenoma in special cases

    Energy Technology Data Exchange (ETDEWEB)

    Nasser, Felipe; Affonso, Breno Boueri; Galastri, Francisco Leonardo [Hospital Israelita Albert Einstein, São Paulo, SP (Brazil); Odisio, Bruno Calazans [MD Anderson Cancer Center, Houston (United States); Garcia, Rodrigo Gobbo [Hospital Israelita Albert Einstein, São Paulo, SP (Brazil)

    2013-07-01

    Hepatocellular adenoma is a rare benign tumor that was increasingly diagnosed in the 1980s and 1990s. This increase has been attributed to the widespread use of oral hormonal contraceptives and the broader availability and advances of radiological tests. We report two cases of patients with large hepatic adenomas who were subjected to minimally invasive treatment using arterial embolization. One case underwent elective embolization due to the presence of multiple adenomas and recent bleeding in one of the nodules. The second case was a victim of blunt abdominal trauma with rupture of a hepatic adenoma and clinical signs of hemodynamic shock secondary to intra-abdominal hemorrhage, which required urgent treatment. The development of minimally invasive locoregional treatments, such as arterial embolization, introduced novel approaches for the treatment of individuals with hepatic adenoma. The mortality rate of emergency resection of ruptured hepatic adenomas varies from 5 to 10%, but this rate decreases to 1% when resection is elective. Arterial embolization of hepatic adenomas in the presence of bleeding is a subject of debate. This observation suggests a role for transarterial embolization in the treatment of ruptured and non-ruptured adenomas, which might reduce the indication for surgery in selected cases and decrease morbidity and mortality. Magnetic resonance imaging showed a reduction of the embolized lesions and significant avascular component 30 days after treatment in the two cases in this report. No novel lesions were observed, and a reduction in the embolized lesions was demonstrated upon radiological assessment at a 12-month follow-up examination.

  13. Minimally invasive treatment of hepatic adenoma in special cases

    International Nuclear Information System (INIS)

    Nasser, Felipe; Affonso, Breno Boueri; Galastri, Francisco Leonardo; Odisio, Bruno Calazans; Garcia, Rodrigo Gobbo

    2013-01-01

    Hepatocellular adenoma is a rare benign tumor that was increasingly diagnosed in the 1980s and 1990s. This increase has been attributed to the widespread use of oral hormonal contraceptives and the broader availability and advances of radiological tests. We report two cases of patients with large hepatic adenomas who were subjected to minimally invasive treatment using arterial embolization. One case underwent elective embolization due to the presence of multiple adenomas and recent bleeding in one of the nodules. The second case was a victim of blunt abdominal trauma with rupture of a hepatic adenoma and clinical signs of hemodynamic shock secondary to intra-abdominal hemorrhage, which required urgent treatment. The development of minimally invasive locoregional treatments, such as arterial embolization, introduced novel approaches for the treatment of individuals with hepatic adenoma. The mortality rate of emergency resection of ruptured hepatic adenomas varies from 5 to 10%, but this rate decreases to 1% when resection is elective. Arterial embolization of hepatic adenomas in the presence of bleeding is a subject of debate. This observation suggests a role for transarterial embolization in the treatment of ruptured and non-ruptured adenomas, which might reduce the indication for surgery in selected cases and decrease morbidity and mortality. Magnetic resonance imaging showed a reduction of the embolized lesions and significant avascular component 30 days after treatment in the two cases in this report. No novel lesions were observed, and a reduction in the embolized lesions was demonstrated upon radiological assessment at a 12-month follow-up examination

  14. Entropy Generation Minimization in Dimethyl Ether Synthesis: A Case Study

    Science.gov (United States)

    Kingston, Diego; Razzitte, Adrián César

    2018-04-01

    Entropy generation minimization is a method that helps improve the efficiency of real processes and devices. In this article, we study the entropy production (due to chemical reactions, heat exchange and friction) in a conventional reactor that synthesizes dimethyl ether and minimize it by modifying different operating variables of the reactor, such as composition, temperature and pressure, while aiming at a fixed production of dimethyl ether. Our results indicate that it is possible to reduce the entropy production rate by nearly 70% and that, by changing only the inlet composition, it is possible to cut it by nearly 40%, though this comes at the expense of greater dissipation due to heat transfer. We also study the alternative of coupling the reactor with another, where dehydrogenation of methylcyclohexane takes place. In that case, entropy generation can be reduced by 54% when pressure, temperature and inlet molar flows are varied. These examples show that entropy generation analysis can be a valuable tool in engineering design and applications aiming at process intensification and efficient operation of plant equipment.
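
    The quantity being minimized is the total entropy production rate; in standard irreversible thermodynamics it is a sum of flux-force products for the three mechanisms named above. The generic form below is our assumption, not necessarily the authors' exact expressions:

      % local entropy production rate (generic flux-force form)
      \sigma \;=\;
        -\sum_j \frac{r_j\,\Delta_r G_j}{T}                  % chemical reactions
        \;+\; J_q\,\Delta\!\Big(\frac{1}{T}\Big)             % heat exchange
        \;-\; \frac{v}{T}\,\frac{\mathrm{d}P}{\mathrm{d}z}   % friction
        \;\ge\; 0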

  15. Deterministic Graphical Games Revisited

    DEFF Research Database (Denmark)

    Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro

    2008-01-01

    We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem.

  16. Minimal Self and Timing Disorders in Schizophrenia: A Case Report

    Directory of Open Access Journals (Sweden)

    Brice Martin

    2018-04-01

    For years, phenomenological psychiatry has proposed that distortions of the temporal structure of consciousness contribute to the abnormal experiences described before schizophrenia emerges, and may relate to basic disturbances in consciousness of the self. However, considering that temporality refers mainly to an implicit aspect of our relationship with the world, disturbances in the temporal structure of consciousness remain difficult to access. Nonetheless, previous studies have shown a correlation between self disorders and the automatic ability to expect an event in time, suggesting timing is a key issue for the psychopathology of schizophrenia. Timing disorders may represent a target for cognitive remediation, but this requires that disorders can be demonstrated at an individual level. Since cognitive impairments in patients with schizophrenia are discrete, and there is no standardized timing exploration, we focused on timing impairments suggested to be related to self disorders. We present the case report of AF, a 22-year-old man suffering from schizophrenia, with no antipsychotic intake. Although AF shows few positive and negative symptoms and has a normal neurocognitive assessment, he shows a high level of Minimal Self Disorders (SDs; assessed with the EASE scale). Moreover, AF has a rare ability to describe his self and time difficulties. An objective assessment of timing ability (a variable foreperiod task) confirmed that AF had temporal impairments similar to those previously described in patients, i.e., a preserved ability to distinguish time intervals, but a difficulty in benefitting from the passage of time to expect a visual stimulus. He presents additional difficulties in benefitting from temporal cues and adapting to changes in time delays. The impairments were ample enough to yield significant effects with analyses at the individual level. Although causal relationships between subjective and objective impairments cannot

  17. Assessment of earthquake locations in 3-D deterministic velocity models: A case study from the Altotiberina Near Fault Observatory (Italy)

    Science.gov (United States)

    Latorre, D.; Mirabella, F.; Chiaraluce, L.; Trippetta, F.; Lomax, A.

    2016-11-01

    The accuracy of earthquake locations and their correspondence with subsurface geology depends strongly on the accuracy of the available seismic velocity model. Most modern methods to construct a velocity model for earthquake location are based on the inversion of passive source seismological data. Another approach is the integration of high-resolution geological and geophysical data to construct deterministic velocity models in which earthquake locations can be directly correlated to the geological structures. Such models have to be kinematically consistent with independent seismological data in order to provide precise hypocenter solutions. We present the Altotiberina (AT) seismic model, a three-dimensional velocity model for the Upper Tiber Valley region (Northern Apennines, Italy), constructed by combining 300 km of seismic reflection profiles, six deep boreholes (down to 5 km depth), detailed data from geological surveys and direct measurements of P and S wave velocities performed in situ and in laboratory. We assess the robustness of the AT seismic model by locating 11,713 earthquakes with a nonlinear, global-search inversion method and comparing the probabilistic hypocenter solutions to those calculated in three previously published velocity models, constructed by inverting passive seismological data only. Our results demonstrate that the AT seismic model is able to provide higher-quality hypocenter locations than the previous velocity models. Earthquake locations are consistent with the subsurface geological structures and show a high degree of spatial correlation with specific lithostratigraphic units, suggesting a lithological control on the seismic activity evolution.
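
    The "nonlinear, global-search" location step can be caricatured in a few lines. Below is a hedged toy (constant velocity, 2-D, invented stations and picks, none of the richness of the AT model) that grid-searches for the epicenter minimizing travel-time residuals:

      import numpy as np

      v = 5.0                                   # assumed uniform velocity, km/s
      stations = np.array([[0, 0], [30, 5], [10, 40], [35, 35]], float)
      true_src = np.array([22.0, 18.0])
      picks = np.linalg.norm(stations - true_src, axis=1) / v  # synthetic arrivals

      best, best_rms = None, np.inf
      for x in np.arange(0.0, 50.0, 0.5):
          for y in np.arange(0.0, 50.0, 0.5):
              tt = np.linalg.norm(stations - [x, y], axis=1) / v
              res = (picks - tt) - (picks - tt).mean()   # origin time is free
              rms = float(np.sqrt((res ** 2).mean()))
              if rms < best_rms:                         # keep best-fitting node
                  best, best_rms = (x, y), rms
      print(best, round(best_rms, 6))                    # recovers ~(22.0, 18.0)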

  18. Deterministic chaos in the processor load

    International Nuclear Information System (INIS)

    Halbiniak, Zbigniew; Jozwiak, Ireneusz J.

    2007-01-01

    In this article we present the results of research whose purpose was to identify the phenomenon of deterministic chaos in the processor load. We analysed the time series of the processor load during efficiency tests of database software. Our research was done on a Sparc Alpha processor working on the UNIX Sun Solaris 5.7 operating system. The conducted analyses proved the presence of the deterministic chaos phenomenon in the processor load in this particular case

  19. Minimally invasive approaches in pancreatic pseudocyst: a Case report

    Directory of Open Access Journals (Sweden)

    Rohollah Y

    2009-09-01

    Full Text Available "n Normal 0 false false false EN-US X-NONE AR-SA MicrosoftInternetExplorer4 /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin:0in; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:Arial; mso-bidi-theme-font:minor-bidi;} Background: According to importance of post operative period, admission duration, post operative pain, and acceptable rate of complications, minimally invasive approaches with endoscope in pancreatic pseudocyst management becomes more popular, but the best choice of procedure and patient selection is currently not completely established. During past decade endoscopic procedures are become first choice in most authors' therapeutic plans, however, open surgery remains gold standard in pancreatic pseudocyst treatment."n"nMethods: we present here a patient with pancreatic pseudocyst unresponsive to conservative management that is intervened endoscopically before 6th week, and review current literatures to depict a schema to management navigation."n"nResults: A 16 year old male patient presented with two episodes of acute pancreatitis with abdominal pain, nausea and vomiting. Hyperamilasemia, pancreatic ascites and a pseudocyst were found in our preliminary investigation. Despite optimal conservative management, including NPO (nil per os and total parentral nutrition, after four weeks, clinical and para-clinical findings deteriorated. Therefore, ERCP and trans-papillary cannulation with placement of 7Fr stent was

  20. RBE for deterministic effects

    International Nuclear Information System (INIS)

    1990-01-01

    In the present report, data on RBE values for effects in tissues of experimental animals and man are analysed to assess whether for specific tissues the present dose limits or annual limits of intake based on Q values, are adequate to prevent deterministic effects. (author)

  1. Minimal change disease in a patient with myasthenia gravis: A case report.

    Science.gov (United States)

    Tsai, Jun-Li; Tsai, Shang-Feng

    2016-09-01

    Myasthenia gravis superimposed with proteinuria is a very rare disorder, with only 39 cases reported so far. Of these cases, the most commonly associated disorder is minimal change disease. Myasthenia gravis and minimal change disease are both related to the dysfunction of T lymphocytes, and hence the 2 disorders may be connected. Here we report the first case of a patient diagnosed with myasthenia gravis concurrently with minimal change disease, presenting in the absence of thymoma or thymic hyperplasia. Treatment for myasthenia gravis also lowered the proteinuria of minimal change disease, and the patient at first achieved good control of both disorders. However, pneumonia-related septic shock occurred and he ultimately died. Minimal change disease is generally considered to occur subsequent to the onset of myasthenia gravis, with a causal association. After an extensive literature review, however, we noted that only 47.8% of minimal change disease cases had occurred after the onset of myasthenia gravis. Minimal change disease mostly occurs in children, and if it is diagnosed in adults, clinicians should search for a potential cause such as myasthenia gravis and other associated thymic disorders.

  2. The cointegrated vector autoregressive model with general deterministic terms

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t)= Z(t) + Y(t), where Z(t) belongs to a large class...

  3. Deterministic dense coding with partially entangled states

    Science.gov (United States)

    Mozes, Shay; Oppenheim, Jonathan; Reznik, Benni

    2005-01-01

    The utilization of a d-level partially entangled state, shared by two parties wishing to communicate classical information without errors over a noiseless quantum channel, is discussed. We analytically construct deterministic dense coding schemes for certain classes of nonmaximally entangled states, and numerically obtain schemes in the general case. We study the dependency of the maximal alphabet size of such schemes on the partially entangled state shared by the two parties. Surprisingly, for d>2 it is possible to have deterministic dense coding with less than one ebit. In this case the number of alphabet letters that can be communicated by a single particle is between d and 2d. In general, we numerically find that the maximal alphabet size is any integer in the range [d, d²] with the possible exception of d²−1. We also find that states with less entanglement can have a greater deterministic communication capacity than other more entangled states.
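
    For reference, the maximally entangled d = 2 protocol that this result generalizes is standard textbook dense coding (two bits per transmitted qubit):

      % Alice applies one of four local Pauli operations to her half of the
      % shared Bell state and sends her qubit; the four resulting states are
      % mutually orthogonal, so Bob decodes 2 classical bits = 4 letters.
      |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\,(|00\rangle + |11\rangle), \qquad
      (\sigma \otimes I)\,|\Phi^{+}\rangle, \quad
      \sigma \in \{\,I,\ \sigma_x,\ i\sigma_y,\ \sigma_z\,\}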

  4. A rare case of minimal deviation adenocarcinoma of the uterine cervix in a renal transplant recipient.

    LENUS (Irish Health Repository)

    Fanning, D M

    2009-02-03

    INTRODUCTION: We report the first described case of minimal deviation adenocarcinoma of the uterine cervix in the setting of a female renal cadaveric transplant recipient. MATERIALS AND METHODS: A retrospective review of this clinical case was performed. CONCLUSION: This rare cancer represents only about 1% of all cervical adenocarcinoma.

  5. A rare case of minimal deviation adenocarcinoma of the uterine cervix in a renal transplant recipient.

    LENUS (Irish Health Repository)

    Fanning, D M

    2012-02-01

    INTRODUCTION: We report the first described case of minimal deviation adenocarcinoma of the uterine cervix in the setting of a female renal cadaveric transplant recipient. MATERIALS AND METHODS: A retrospective review of this clinical case was performed. CONCLUSION: This rare cancer represents only about 1% of all cervical adenocarcinoma.

  6. Thunderclap headache caused by minimally invasive medical procedures: description of 2 cases.

    Science.gov (United States)

    Devetag Chalaupka, Flavio; Caneve, Giorgio; Mauri, Michela; Zaiotti, Giuseppe

    2007-02-01

    We report 2 very unusual cases of thunderclap headache complicating minimally invasive medical procedures. In the first case, headache developed as the consequence of a pneumocephalus caused by an inadvertent intrathecal puncture during oxygen-ozone therapy for lumbar disk herniation. The second case involved intracranial hypotension caused by the needle used for epidural anesthesia persisting and then penetrating into the subarachnoid space.

  7. Deterministic Global Optimization

    CERN Document Server

    Scholz, Daniel

    2012-01-01

    This monograph deals with a general class of solution approaches in deterministic global optimization, namely the geometric branch-and-bound methods which are popular algorithms, for instance, in Lipschitzian optimization, d.c. programming, and interval analysis. It also introduces a new concept for the rate of convergence and analyzes several bounding operations reported in the literature, from the theoretical as well as from the empirical point of view. Furthermore, extensions of the prototype algorithm for multicriteria global optimization problems as well as mixed combinatorial optimization
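
    A one-dimensional caricature of geometric branch-and-bound (the objective, Lipschitz constant and tolerances are invented; real bounding operations are exactly what the monograph's convergence analysis studies):

      import heapq

      f = lambda x: x**4 - 3 * x**2 + x      # nonconvex test function
      L = 60.0                               # Lipschitz constant valid on [-2, 2]

      def lower(a, b):                       # Lipschitz lower bound on [a, b]
          m = (a + b) / 2
          return f(m) - L * (b - a) / 2

      best_x, best_f = 0.0, f(0.0)           # incumbent solution
      boxes = [(lower(-2.0, 2.0), -2.0, 2.0)]
      while boxes:
          bound, a, b = heapq.heappop(boxes)
          if bound > best_f - 1e-9:          # prune: box cannot beat incumbent
              continue
          m = (a + b) / 2
          if f(m) < best_f:                  # update incumbent
              best_x, best_f = m, f(m)
          if b - a > 1e-5:                   # branch: split the box
              heapq.heappush(boxes, (lower(a, m), a, m))
              heapq.heappush(boxes, (lower(m, b), m, b))
      print(best_x, best_f)                  # global minimum near x = -1.30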

  8. Chronic Lyme borreliosis associated with minimal change glomerular disease: a case report.

    Science.gov (United States)

    Florens, N; Lemoine, S; Guebre-Egziabher, F; Valour, F; Kanitakis, J; Rabeyrin, M; Juillard, L

    2017-02-06

    There are only a few cases of renal pathology induced by Lyme borreliosis in the literature, as this complication is rare and uncommon in humans. We present the first case of minimal change glomerular disease associated with chronic Lyme borreliosis. A 65-year-old Caucasian woman was admitted for an acute edematous syndrome related to a nephrotic syndrome. Clinical examination revealed violaceous skin lesions of the right calf and the gluteal region that had appeared 2 years earlier. Serological tests were positive for Lyme borreliosis and a skin biopsy revealed lesions of chronic atrophic acrodermatitis. Renal biopsy showed minimal change glomerular disease. The skin lesions and the nephrotic syndrome resolved with sequential treatment with first ceftriaxone and then corticosteroids. We report here the first case of minimal change disease associated with Lyme borreliosis. The pathogenesis of minimal change disease in the setting of Lyme disease is discussed, but the association of Lyme disease and minimal change disease may imply a synergistic effect of phenotypic and bacterial factors. Regression of proteinuria after sequential treatment with ceftriaxone and corticosteroids seems to strengthen this conceivable association.

  9. Cervical spinal cord bullet fragment removal using a minimally invasive surgical approach: a case report

    Directory of Open Access Journals (Sweden)

    Lawton Cort D

    2012-08-01

    Introduction: We present a case of penetrating gunshot injury to the high-cervical spinal cord and describe a minimally invasive approach used for removal of the bullet fragment. We present this report to demonstrate the technical feasibility of a minimally invasive approach to projectile removal.

    Case presentation: An 18-year-old African-American man presented to our hospital with a penetrating gunshot injury to the high-cervical spine. The bullet lodged in the spinal cord at the C1 level and rendered our patient quadriplegic and dependent on a ventilator. For personal and forensic reasons, our patient and his family requested removal of the bullet fragment almost one year following the injury. Given the significant comorbidity associated with quadriplegia and ventilator dependency, a minimally invasive approach was used to limit the peri-operative complication risk and expedite recovery. Using a minimally invasive expandable retractor system and the aid of a microscope, the posterior arch of C1 was removed, the dura was opened, and the bullet fragment was successfully removed from the spinal cord.

    Conclusions: Here we describe a minimally invasive procedure demonstrating the technical feasibility of removing an intramedullary foreign object from the high-cervical spine. We do not suggest that the availability of minimally invasive procedures should lower the threshold or expand the indications for the removal of bullet fragments in the spinal canal. Rather, our objective is to expand the indications for minimally invasive procedures in an effort to reduce the morbidity and mortality associated with spinal procedures. In addition, this report may help to highlight the feasibility of this approach.

  10. Deterministic Graphical Games Revisited

    DEFF Research Database (Denmark)

    Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro

    2012-01-01

    Starting from Zermelo’s classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn’s deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison-based algorithm for finding optimal strategies in such games. The existence of a linear time comparison-based algorithm remains an open problem.
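
    Retrograde analysis itself fits in a screenful; here is a hedged sketch on an invented win/lose/draw game graph (positions never labelled by the backward propagation, e.g. those on cycles, are draws):

      from collections import deque

      succ = {"a": ["b", "c"], "b": ["d"], "c": ["c2"], "c2": ["c"], "d": []}
      value = {p: "LOSS" for p, s in succ.items() if not s}  # terminal = loss to move
      degree = {p: len(s) for p, s in succ.items()}
      pred = {p: [] for p in succ}
      for p, s in succ.items():
          for q in s:
              pred[q].append(p)

      queue = deque(value)                    # propagate values backwards
      while queue:
          q = queue.popleft()
          for p in pred[q]:
              if p in value:
                  continue
              if value[q] == "LOSS":          # p can move to a lost position: win
                  value[p] = "WIN"; queue.append(p)
              else:
                  degree[p] -= 1              # one more move refuted
                  if degree[p] == 0:          # every move reaches a won position
                      value[p] = "LOSS"; queue.append(p)

      print({p: value.get(p, "DRAW") for p in succ})
      # {'a': 'DRAW', 'b': 'WIN', 'c': 'DRAW', 'c2': 'DRAW', 'd': 'LOSS'}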

  11. Deterministic estimation of hydrological thresholds for shallow landslide initiation and slope stability models: case study from the Somma-Vesuvius area of southern Italy

    Science.gov (United States)

    Baum, Rex L.; Godt, Jonathan W.; De Vita, P.; Napolitano, E.

    2012-01-01

    Rainfall-induced debris flows involving ash-fall pyroclastic deposits that cover steep mountain slopes surrounding the Somma-Vesuvius volcano are natural events and a source of risk for urban settlements located at footslopes in the area. This paper describes experimental methods and modelling results of shallow landslides that occurred on 5–6 May 1998 in selected areas of the Sarno Mountain Range. Stratigraphical surveys carried out in initiation areas show that ash-fall pyroclastic deposits are discontinuously distributed along slopes, with total thicknesses that vary from a maximum value on slopes inclined less than 30° to near zero thickness on slopes inclined greater than 50°. This distribution of cover thickness influences the stratigraphical setting and leads to downward thinning and the pinching out of pyroclastic horizons. Three engineering geological settings were identified, in which most of the initial landslides that triggered the May 1998 debris flows occurred; these can be classified as (1) knickpoints, characterised by a downward progressive thinning of the pyroclastic mantle; (2) rocky scarps that abruptly interrupt the pyroclastic mantle; and (3) road cuts in the pyroclastic mantle that occur in a critical range of slope angle. Detailed topographic and stratigraphical surveys coupled with field and laboratory tests were conducted to define geometric, hydraulic and mechanical features of pyroclastic soil horizons in the source areas and to carry out hydrological numerical modelling of hillslopes under different rainfall conditions. The slope stability for three representative cases was calculated considering the real sliding surface of the initial landslides and the pore pressures during the infiltration process. The hydrological modelling of hillslopes demonstrated localised increase of pore pressure, up to saturation, where pyroclastic horizons with higher hydraulic conductivity pinch out and the thickness of pyroclastic mantle reduces or is
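
    The slope-stability calculation referred to is typically a limit-equilibrium factor of safety. A common infinite-slope form is shown below as our generic statement (the paper's exact formulation may differ), with c' the cohesion, γ the unit weight, z the depth of the sliding surface, β the slope angle, u the pore pressure and φ' the friction angle; pore pressures from the infiltration modelling drive FS below 1 where the pyroclastic mantle thins:

      FS = \frac{c' + (\gamma z \cos^2\beta - u)\,\tan\varphi'}
                {\gamma z \sin\beta \cos\beta}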

  12. On integration of probabilistic and deterministic safety analysis

    International Nuclear Information System (INIS)

    Cepin, M.; Wardzinski, A.

    1996-01-01

    The paper presents a case study on probabilistic and deterministic safety analysis of the Engineered Safety Features Actuation System. The Fault Tree, as a Probabilistic Safety Assessment tool, is developed and analysed. The same Fault Tree is then specified in a formal way. When formalized, it can include the timing requirements of the analysed system, which cannot be included in a probabilistic approach to Fault Tree Analysis. This inclusion of time is the main advantage of the formalized Fault Tree, which extends it to a dynamic tool. Its results are Minimal Cut Sets with time relations, which form the basis for the definition of safety requirements. Definition of safety requirements is one of the early phases of the software lifecycle and is of special importance in designing safety-related computer systems. (author)
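
    As a toy illustration of the probabilistic side (classical, untimed Minimal Cut Sets on an invented fault tree; the timed extension described above is beyond this sketch), cut sets can be computed and minimized as follows:

      from itertools import product

      def cut_sets(gate):
          op, children = gate[0], gate[1:]
          child_sets = [cut_sets(ch) if isinstance(ch, tuple) else [{ch}]
                        for ch in children]
          if op == "OR":               # union of the children's cut sets
              return [s for sets in child_sets for s in sets]
          # AND: merge one cut set from each child, in every combination
          return [set().union(*combo) for combo in product(*child_sets)]

      def minimize(sets):              # keep only inclusion-minimal sets
          return [s for s in sets if not any(t < s for t in sets)]

      top = ("OR", ("AND", "A", "B"), ("AND", "A", "B", "C"))
      print(minimize(cut_sets(top)))   # [{'A', 'B'}]: {A, B, C} is absorbed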

  13. Minimal access excision of aortic valve fibroelastoma: a case report and review of the literature.

    Science.gov (United States)

    Harling, Leanne; Athanasiou, Thanos; Ashrafian, Hutan; Kokotsakis, John; Brown, Virginia; Nathan, Anthony; Casula, Roberto

    2012-09-03

    Papillary fibroelastomas are rare primary tumours of cardiac origin accounting for approximately 10% of all primary cardiac neoplasms. Due to a high thromboembolic risk, surgical excision is the mainstay of treatment in these patients and median sternotomy the most widely used approach. We describe the case of a 43-year-old lady presenting with acute myocardial infarction secondary to aortic valve papillary fibroelastoma subsequently excised using a minimal access technique. From our experience mini-sternotomy offers excellent exposure and allows for safe resection in such cases, improving cosmesis without compromising either intra- or post-operative outcome.

  14. Minimal access excision of aortic valve fibroelastoma: a case report and review of the literature

    Directory of Open Access Journals (Sweden)

    Harling Leanne

    2012-09-01

    Papillary fibroelastomas are rare primary tumours of cardiac origin accounting for approximately 10% of all primary cardiac neoplasms. Due to a high thromboembolic risk, surgical excision is the mainstay of treatment in these patients and median sternotomy the most widely used approach. We describe the case of a 43-year-old lady presenting with acute myocardial infarction secondary to aortic valve papillary fibroelastoma subsequently excised using a minimal access technique. From our experience mini-sternotomy offers excellent exposure and allows for safe resection in such cases, improving cosmesis without compromising either intra- or post-operative outcome.

  15. Deterministic Mean-Field Ensemble Kalman Filtering

    KAUST Repository

    Law, Kody

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.
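
    For orientation, the standard (stochastic) EnKF analysis step that the mean-field construction idealizes, shown on a scalar Gaussian toy problem with invented numbers:

      import numpy as np

      rng = np.random.default_rng(0)
      N, R, y = 1000, 0.25, 1.2            # ensemble size, obs variance, observation
      ens = rng.normal(0.0, 1.0, N)        # forecast ensemble ~ prior N(0, 1)

      C = ens.var()                        # forecast variance
      K = C / (C + R)                      # Kalman gain
      y_pert = y + rng.normal(0.0, np.sqrt(R), N)   # perturbed observations
      ens = ens + K * (y_pert - ens)       # analysis update

      # exact Bayesian posterior here: mean 0.96, variance 0.20
      print(round(ens.mean(), 2), round(ens.var(), 2))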

  16. Height-Deterministic Pushdown Automata

    DEFF Research Database (Denmark)

    Nowotka, Dirk; Srba, Jiri

    2007-01-01

    We define the notion of height-deterministic pushdown automata, a model where for any given input string the stack heights during any (nondeterministic) computation on the input are a priori fixed. Different subclasses of height-deterministic pushdown automata, strictly containing the class of regular languages and still closed under boolean language operations, are considered. Several such language classes have been described in the literature. Here, we suggest a natural and intuitive model that subsumes all the formalisms proposed so far by employing height-deterministic pushdown automata...

  17. Nail gun injuries to the head with minimal neurological consequences: a case series.

    Science.gov (United States)

    Makoshi, Ziyad; AlKherayf, Fahad; Da Silva, Vasco; Lesiuk, Howard

    2016-03-16

    An estimated 3700 individuals are seen annually in US emergency departments for nail gun-related injuries. Approximately 45 cases have been reported in the literature concerning nail gun injuries penetrating the cranium. These cases pose a challenge for the neurosurgeon because of the uniqueness of each case, the dynamics of high-pressure nail gun injuries, and the surgical planning to remove the foreign body without further vascular injury or uncontrolled intracranial hemorrhage. Here we present four cases of penetrating nail gun injuries with variable presentations. Case 1 is of a 33-year-old white man who sustained 10 nail gunshot injuries to his head. Case 2 is of a 51-year-old white man who sustained bi-temporal nail gun injuries to his head. Cases 3 and 4 are of two white men aged 22 years and 49 years with a single nail gun injury to the head. In the context of these individual cases and a review of similar cases in the literature we present surgical approaches and considerations in the management of nail gun injuries to the cranium. Case 1 presented with cranial nerve deficits, Case 2 required intubation for a low Glasgow Coma Scale score, while Cases 3 and 4 were neurologically intact on presentation. Three patients underwent angiography for assessment of vascular injury and all patients underwent surgical removal of foreign objects using a vice-grip. No neurological deficits were found in these patients on follow-up. Nail gun injuries can present with variable clinical status; mortality and morbidity are low for surgically managed isolated nail gun-related injuries to the head. The current case series describes the surgical use of a vice-grip for a good grip of the nail head and controlled extraction, and these patients appear to have a good postoperative prognosis with minimal neurological deficits postoperatively and on follow-up.

  18. Deterministic methods in radiation transport

    International Nuclear Information System (INIS)

    Rice, A.F.; Roussin, R.W.

    1992-06-01

    The Seminar on Deterministic Methods in Radiation Transport was held February 4–5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.

  19. Operational State Complexity of Deterministic Unranked Tree Automata

    Directory of Open Access Journals (Sweden)

    Xiaoxue Piao

    2010-08-01

    We consider the state complexity of basic operations on tree languages recognized by deterministic unranked tree automata. For the operations of union and intersection the upper and lower bounds of both weakly and strongly deterministic tree automata are obtained. For tree concatenation we establish a tight upper bound that is of a different order than the known state complexity of concatenation of regular string languages. We show that (n+1)((m+1)2^n − 2^(n−1)) − 1 vertical states are sufficient, and necessary in the worst case, to recognize the concatenation of tree languages recognized by (strongly or weakly) deterministic automata with, respectively, m and n vertical states.

  20. The effect of head size/shape, miscentering, and bowtie filter on peak patient tissue doses from modern brain perfusion 256-slice CT: How can we minimize the risk for deterministic effects?

    Energy Technology Data Exchange (ETDEWEB)

    Perisinakis, Kostas; Seimenis, Ioannis; Tzedakis, Antonis; Papadakis, Antonios E.; Damilakis, John [Department of Medical Physics, Faculty of Medicine, University of Crete, P.O. Box 2208, Heraklion 71003, Crete (Greece); Medical Diagnostic Center 'Ayios Therissos', P.O. Box 28405, Nicosia 2033, Cyprus and Department of Medical Physics, Medical School, Democritus University of Thrace, Panepistimioupolis, Dragana 68100, Alexandroupolis (Greece); Department of Medical Physics, University Hospital of Heraklion, P.O. Box 1352, Heraklion 71110, Crete (Greece); Department of Medical Physics, Faculty of Medicine, University of Crete, P.O. Box 2208, Heraklion 71003, Crete (Greece)]

    2013-01-15

    Purpose: To determine patient-specific absorbed peak doses to skin, eye lens, brain parenchyma, and cranial red bone marrow (RBM) of adult individuals subjected to low-dose brain perfusion CT studies on a 256-slice CT scanner, and investigate the effect of patient head size/shape, head position during the examination and bowtie filter used on peak tissue doses.

    Methods: The peak doses to eye lens, skin, brain, and RBM were measured in 106 individual-specific adult head phantoms subjected to the standard low-dose brain perfusion CT on a 256-slice CT scanner using a novel Monte Carlo simulation software dedicated for patient CT dosimetry. Peak tissue doses were compared to corresponding thresholds for induction of cataract, erythema, cerebrovascular disease, and depression of hematopoiesis, respectively. The effects of patient head size/shape, head position during acquisition and bowtie filter used on resulting peak patient tissue doses were investigated. The effect of eye-lens position in the scanned head region was also investigated. The effect of miscentering and use of narrow bowtie filter on image quality was assessed.

    Results: The mean peak doses to eye lens, skin, brain, and RBM were found to be 124, 120, 95, and 163 mGy, respectively. The effect of patient head size and shape on peak tissue doses was found to be minimal since maximum differences were less than 7%. Patient head miscentering and bowtie filter selection were found to have a considerable effect on peak tissue doses. The peak eye-lens dose saving achieved by elevating head by 4 cm with respect to isocenter and using a narrow wedge filter was found to approach 50%. When the eye lies outside of the primarily irradiated head region, the dose to eye lens was found to drop to less than 20% of the corresponding dose measured when the eye lens was located in the middle of the x-ray beam. Positioning head phantom off-isocenter by 4 cm and employing a narrow wedge filter results in a moderate reduction of

  1. The effect of head size/shape, miscentering, and bowtie filter on peak patient tissue doses from modern brain perfusion 256-slice CT: how can we minimize the risk for deterministic effects?

    Science.gov (United States)

    Perisinakis, Kostas; Seimenis, Ioannis; Tzedakis, Antonis; Papadakis, Antonios E; Damilakis, John

    2013-01-01

    To determine patient-specific absorbed peak doses to skin, eye lens, brain parenchyma, and cranial red bone marrow (RBM) of adult individuals subjected to low-dose brain perfusion CT studies on a 256-slice CT scanner, and investigate the effect of patient head size/shape, head position during the examination and bowtie filter used on peak tissue doses. The peak doses to eye lens, skin, brain, and RBM were measured in 106 individual-specific adult head phantoms subjected to the standard low-dose brain perfusion CT on a 256-slice CT scanner using a novel Monte Carlo simulation software dedicated for patient CT dosimetry. Peak tissue doses were compared to corresponding thresholds for induction of cataract, erythema, cerebrovascular disease, and depression of hematopoiesis, respectively. The effects of patient head size/shape, head position during acquisition and bowtie filter used on resulting peak patient tissue doses were investigated. The effect of eye-lens position in the scanned head region was also investigated. The effect of miscentering and use of narrow bowtie filter on image quality was assessed. The mean peak doses to eye lens, skin, brain, and RBM were found to be 124, 120, 95, and 163 mGy, respectively. The effect of patient head size and shape on peak tissue doses was found to be minimal since maximum differences were less than 7%. Patient head miscentering and bowtie filter selection were found to have a considerable effect on peak tissue doses. The peak eye-lens dose saving achieved by elevating head by 4 cm with respect to isocenter and using a narrow wedge filter was found to approach 50%. When the eye lies outside of the primarily irradiated head region, the dose to eye lens was found to drop to less than 20% of the corresponding dose measured when the eye lens was located in the middle of the x-ray beam. Positioning head phantom off-isocenter by 4 cm and employing a narrow wedge filter results in a moderate reduction of signal-to-noise ratio

  2. The probabilistic approach and the deterministic licensing procedure

    International Nuclear Information System (INIS)

    Fabian, H.; Feigel, A.; Gremm, O.

    1984-01-01

    If safety goals are given, the creativity of the engineers is necessary to transform the goals into actual safety measures. That is, safety goals are not sufficient for the derivation of a safety concept; the licensing process asks: 'What does a safe plant look like?' The answer cannot be given by a probabilistic procedure, but needs definite deterministic statements; the conclusion is that the licensing process needs a deterministic approach. The probabilistic approach should be used in a complementary role in cases where deterministic criteria are not complete, not detailed enough or not consistent, and additional arguments for decision making in connection with the adequacy of a specific measure are necessary. But also in these cases the probabilistic answer has to be transformed into a clear deterministic statement. (orig.)

  3. Minimally invasive spine surgery in lumbar spondylodiscitis: a retrospective single-center analysis of 67 cases.

    Science.gov (United States)

    Tschugg, Anja; Hartmann, Sebastian; Lener, Sara; Rietzler, Andreas; Neururer, Sabrina; Thomé, Claudius

    2017-12-01

    Minimally invasive surgical techniques have been developed to minimize tissue damage, reduce narcotic requirements, decrease blood loss, and, therefore, potentially avoid prolonged immobilization. Thus, the purpose of the present retrospective study was to assess the safety and efficacy of a minimally invasive posterior approach with transforaminal lumbar interbody debridement and fusion plus pedicle screw fixation in lumbar spondylodiscitis, in comparison to an open surgical approach. Furthermore, treatment decisions based on the patient's preoperative condition were analyzed. 67 patients with lumbar spondylodiscitis treated at our department were included in this retrospective analysis. The patients were categorized into two groups based on the surgical procedure: group MIS, minimally invasive lumbar spinal fusion (n = 19), and group OPEN, open lumbar spinal fusion (n = 48). Evaluation included radiological parameters on magnetic resonance imaging (MRI), laboratory values, and clinical outcome. Preoperative MRI showed higher rates of paraspinal abscess (35.5 vs. 5.6%; p = 0.016) and multilocular location in the OPEN group (20 vs. 0%, p = 0.014). Overall pain at discharge was less in the MIS group: NRS 2.4 ± 1 vs. NRS 1.6 ± 1 (p = 0.036). The duration of hospital stay was longer in the OPEN than the MIS group (19.1 ± 12 days vs. 13.7 ± 5 days, p = 0.018). The open technique is effective in all varieties of spondylodiscitis, including epidural abscess formation. MIS can be applied safely and effectively in selected cases as well, even with epidural abscess.

  4. Deterministic Search Methods for Computational Protein Design.

    Science.gov (United States)

    Traoré, Seydou; Allouche, David; André, Isabelle; Schiex, Thomas; Barbe, Sophie

    2017-01-01

    One main challenge in Computational Protein Design (CPD) lies in the exploration of the amino-acid sequence space, while considering, to some extent, side chain flexibility. The exorbitant size of the search space calls for the development of efficient exact deterministic search methods enabling identification of low-energy sequence-conformation models, corresponding either to the global minimum energy conformation (GMEC) or to an ensemble of guaranteed near-optimal solutions. In contrast to stochastic local search methods that are not guaranteed to find the GMEC, exact deterministic approaches always identify the GMEC and prove its optimality in finite but exponential worst-case time. After a brief overview of these two classes of methods, we discuss the grounds and merits of four deterministic methods that have been applied to solve CPD problems. These approaches are based either on the Dead-End-Elimination theorem combined with the A* algorithm (DEE/A*), on Cost Function Network algorithms (CFN), on Integer Linear Programming solvers (ILP) or on Markov Random Field solvers (MRF). The way two of these methods (DEE/A* and CFN) can be used in practice to identify low-energy sequence-conformation models starting from a pairwise decomposed energy matrix is detailed in this review.
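
    The classical dead-end elimination condition underlying DEE/A* prunes rotamer i_r at position i whenever some competitor i_t dominates it for every choice of rotamers j_s at the other positions j (the standard form from the DEE literature):

      E(i_r) + \sum_{j \neq i} \min_{s}\, E(i_r, j_s)
      \;>\;
      E(i_t) + \sum_{j \neq i} \max_{s}\, E(i_t, j_s)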

  5. Retroperitoneal abscess after transanal minimally invasive surgery: case report and review of literature

    Directory of Open Access Journals (Sweden)

    Aaron Raney

    2017-10-01

    Full Text Available Abscesses are a rare complication of transanal minimally invasive surgery (TAMIS) and transanal endoscopic microsurgery (TEMS). Reported cases have been in the rectal and pre-sacral areas and have been managed with either antibiotics alone or in conjunction with laparotomy and diverting colostomy. We report a case of a large retroperitoneal abscess following a TAMIS full-thickness rectal polyp excision. The patient was successfully managed conservatively with antibiotics and a percutaneous drain. Retroperitoneal infection should be included in the differential diagnosis following a TAMIS procedure, as the presentation can be insidious and timely intervention is needed to prevent further morbidity. Keywords: Colorectal surgery, Transanal minimally invasive surgery (TAMIS), Retroperitoneal abscess, Natural orifice transluminal endoscopic surgery (NOTES), Single-site laparoscopic surgery (SILS), Surgical oncology

  6. Minimally invasive carcinosarcoma ex pleomorphic adenoma: A case report and literature review with cytohistological correlation.

    Science.gov (United States)

    Mok, Yingting; Nga, Min En; Lim, Chwee Ming; Petersson, Fredrik

    2016-09-01

    Carcinosarcoma of the salivary glands is a rare neoplasm, and the minimally invasive form constitutes a subgroup with a more favorable prognosis. The cytomorphologic features of this neoplasm can be appreciated on fine-needle aspiration biopsy. We present a patient with a minimally invasive carcinosarcoma ex non-recurrent pleomorphic adenoma (Ca ex PA) who underwent initial fine-needle aspiration biopsy followed by surgical resection. On light microscopy, the tumor was composed predominantly of a pleomorphic high-grade sarcoma exhibiting partial myoepithelial immunohistochemical features, with a minor component of in situ and invasive salivary duct carcinoma (10%). A limited area with features of a hyalinized pleomorphic adenoma was identified. This is the third case report of the cytological features of Ca ex PA of the salivary gland, with histologic correlation. It further illustrates the oncogenic relationship between epithelial and myoepithelial elements in the early stages of carcinosarcomatous transformation. © 2016 Wiley Periodicals, Inc. Head Neck 38: E2483-E2489, 2016.

  7. Moving beyond resistance to restraint minimization: a case study of change management in aged care.

    Science.gov (United States)

    Johnson, Susan; Ostaszkiewicz, Joan; O'Connell, Beverly

    2009-01-01

    This case study describes a quality initiative to minimize restraint in an Australian residential aged care facility. The process of improving practice is examined with reference to the literature on implementation of research into practice and change management. The differences between planned and emergent approaches to change management are discussed. The concepts of resistance and attractors are explored in relation to our experiences of managing the change process in this initiative. The importance of the interpersonal interactions that were involved in facilitating the change process is highlighted. Recommendations are offered for dealing with change management processes in clinical environments, particularly the need to move beyond an individual mind-set to a systems-based approach for quality initiatives in residential aged care.

  8. Desmoplastic fibroma of the distal tibia: A case report of a minimally invasive histological diagnosis

    Science.gov (United States)

    Levrini, Gabriele; Pattacini, Pierpaolo

    2016-01-01

    Desmoplastic fibroma (DF) is a benign, rare fibroblastic intraosseous neoplasm histologically resembling a desmoid soft tissue tumor. Although classified as benign, DF frequently exhibits an aggressive behavior, has a moderate-to-high recurrence rate, and often causes pathological fractures and extensive bone destruction. This case report presents an incidentally detected DF of the tibia, which was diagnosed using a minimally invasive approach. A 36-year-old African female patient was referred to the Department of Diagnostic Imaging of Arcispedale Santa Maria Nuova-IRCCS (Reggio Emilia, Italy), to be examined by a computed tomography scan on an outpatient basis, after an x-ray examination of the tibia, which was performed after an injury to exclude the presence of a fracture, revealed a hyperlucency of unknown origin. The aim of this study was to discuss the clinical, histological, immunohistochemical and radiographic characteristics of this rare neoplasm, with a focus on image-guided bone biopsy. PMID:27882239

  9. Deterministic indexing for packed strings

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Skjoldjensen, Frederik Rye

    2017-01-01

    Given a string S of length n, the classic string indexing problem is to preprocess S into a compact data structure that supports efficient subsequent pattern queries. In the deterministic variant the goal is to solve the string indexing problem without any randomization (at preprocessing time or query time). In the packed variant the strings are stored with several characters in a single word, giving us the opportunity to read multiple characters simultaneously. Our main result is a new string index in the deterministic and packed setting. Given a packed string S of length n over an alphabet σ, we show how to preprocess S in O(n) (deterministic) time and space O(n) such that given a packed pattern string of length m we can support queries in (deterministic) time O(m/α + log m + log log σ), where α = w/log σ is the number of characters packed in a word of size w = Θ(log n).
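
    As a rough illustration of what packing buys (a toy under assumed parameters, not the paper's index), the sketch below packs characters of a small alphabet into 64-bit words, so that a single integer comparison inspects α = w/log σ characters at once.

```python
import math

def pack(s, alphabet, w=64):
    """Pack string s over `alphabet` into w-bit words; return (words, alpha)."""
    bits = max(1, math.ceil(math.log2(len(alphabet))))
    alpha = w // bits                        # characters per word
    code = {c: i for i, c in enumerate(alphabet)}
    words = []
    for k in range(0, len(s), alpha):
        word = 0
        for c in s[k:k + alpha]:
            word = (word << bits) | code[c]
        words.append(word)
    return words, alpha

text_words, alpha = pack("abracadabra", "abcdr")
pattern_words, _ = pack("abracadabra", "abcdr")
# One integer comparison now checks up to `alpha` characters at once.
print(alpha, text_words[0] == pattern_words[0])   # 21 True
```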

  10. Minimal access direct spondylolysis repair using a pedicle screw-rod system: a case series

    Directory of Open Access Journals (Sweden)

    Mohi Eldin Mohamed

    2012-11-01

    Full Text Available Abstract Introduction Symptomatic spondylolysis is always challenging to treat because the pars defect causing the instability needs to be stabilized while segmental fusion needs to be avoided. Direct repair of the pars defect is ideal in cases of spondylolysis in which posterior decompression is not necessary. We report clinical results using segmental pedicle-screw-rod fixation with bone grafting in patients with symptomatic spondylolysis, a modification of a technique first reported by Tokuhashi and Matsuzaki in 1996. We also describe the surgical technique, assess the fusion and analyze the outcomes of the patients. Case presentation At Cairo University Hospital, eight out of twelve Egyptian patients' acute pars fractures healed after conservative management. Two young male patients among the remainder underwent an operative procedure for chronic low back pain secondary to a pars defect. Case one was a 25-year-old Egyptian man who presented with a one-year history of axial low back pain, not radiating to the lower limbs, after falling from a height. Case two was a 29-year-old Egyptian man who presented with a one-year history of axial low back pain, mild claudication and infrequent radiation to the leg, never below the knee. Utilizing a standardized mini-access fluoroscopically-guided surgical protocol, fixation was established with two titanium pedicle screws placed into both pedicles, at the same level as the pars defect, without violating the facet joint. The cleaned pars defect was grafted; a curved titanium rod was then passed under the base of the spinous process of the affected vertebra, bridging the loose fragment, and attached to the pedicle screw heads to uplift the spinous process, followed by compression of the defect. The patients were discharged three days after the procedure, with successful fusion at one-year follow-up. No rod breakage or implant-related complications were reported. Conclusions Where there is no

  11. Deterministic multi-player Dynkin games

    OpenAIRE

    Solan, Eilon; Vieille, Nicolas

    2015-01-01

    A multi-player Dynkin game is a sequential game in which at every stage one of the players is chosen, and that player can decide whether to continue the game or to stop it, in which case all players receive some terminal payoff. We study a variant of this model, where the order by which players are chosen is deterministic, and the probability that the game terminates once the chosen player decides to stop may be strictly less than one. We prove that a subgame-perfect ε-equilibrium in Markovia...

  12. The Learning Curve for Robot-Assisted Minimally Invasive Thoracoscopic Esophagectomy: Results from 312 Cases.

    Science.gov (United States)

    van der Sluis, Pieter C; Ruurda, Jelle P; van der Horst, Sylvia; Goense, Lucas; van Hillegersberg, Richard

    2018-02-15

    Robot-assisted minimally invasive thoraco-laparoscopic esophagectomy (RAMIE) was developed in 2003. RAMIE has been shown to be safe and oncologically effective. The aim of this study was to assess the learning curve and the proctoring program for a newly introduced surgeon (surgeon 2). The "learning curve" was defined as the number of operations that must be performed by a surgeon to achieve a steady level of performance. Measures of proficiency describing the learning curves of the proctor and the newly introduced surgeon 2 included operating time, blood loss and conversion rate, and were analyzed using the cumulative sum (CUSUM) method. Results of the newly introduced surgeon were compared to those of the proctor over the same period of time. The proctor performed 232 of 312 procedures (74%) and surgeon 2 performed 80 of 312 procedures (26%). The proctor reached proficiency after 70 procedures in 55 months. The structured proctoring program for surgeon 2 started with 20 procedures as assisting table surgeon, followed by 5 observational and 15 supervised cases. Surgeon 2 performed at the same level as the proctor with respect to operating time, blood loss, conversion rate, radicality and complications. For surgeon 2, the learning phase of RAMIE was completed within 24 cases (15 supervised and 9 independent cases) in 13 months: a reduction of 66% in the number of operations and a reduction of 76% in time compared to the proctor. The learning phase of RAMIE consisted of 70 procedures in 55 months. A structured proctoring program for RAMIE significantly reduced the number of cases and the time required to achieve proficiency. Copyright © 2018 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
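
    The CUSUM analysis used for such learning curves can be sketched as follows (invented operating times and an arbitrary 400-minute target, not the study's data): the curve accumulates each case's deviation from the target, and its peak marks where performance settles below the target.

```python
import numpy as np

def cusum(values, target):
    """Cumulative sum of deviations from a target value."""
    return np.cumsum(np.asarray(values, dtype=float) - target)

operating_times = [480, 465, 450, 430, 420, 410, 395, 390, 385, 380]  # min
curve = cusum(operating_times, target=400.0)
peak = int(np.argmax(curve))   # curve turns down once times drop below target
print(curve.round(0), "learning phase ends around case", peak + 1)
```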

  13. Cauliflower ear – a minimally invasive treatment method in a wrestling athlete: a case report

    Directory of Open Access Journals (Sweden)

    Haik J

    2018-01-01

    Full Text Available Josef Haik, Or Givol, Rachel Kornhaber, Michelle Cleary, Hagit Ofir, Moti Harats (Sheba Medical Center, Tel Hashomer, Ramat Gan; Sackler School of Medicine, Tel Aviv University; University of Notre Dame Fremantle; University of Tasmania) Abstract: Acute auricular hematoma can be caused by direct blunt trauma or other injury to the external ear. It is typically seen in those who practice full contact sports such as boxing, wrestling, and rugby. “Cauliflower ear” deformity, fibrocartilage formation during scarring, is a common complication of auricular hematomas. Therefore, acute drainage of the hematoma and postprocedural techniques for preventing recurrence are necessary to prevent the deformity. Many techniques exist, although no single method of treatment has been shown to be superior. In this case report, we describe a novel method using needle aspiration followed by the application of a magnet and an adapted disc to the affected area of the auricle. This minimally invasive, simple, and accessible method could potentially facilitate the treatment of cauliflower ear among full contact sports athletes. Keywords: cauliflower ear, hematoma, ear deformity, athletic injuries, wrestling, case report

  14. Algorithms for Computing Nash Equilibria in Deterministic LQ Games

    NARCIS (Netherlands)

    Engwerda, J.C.

    2006-01-01

    In this paper we review a number of algorithms to compute Nash equilibria in deterministic linear quadratic differential games. We review the open-loop and feedback information case. In both cases we address both the finite and the infinite planning horizon.

  15. Deterministic extraction from weak random sources

    CERN Document Server

    Gabizon, Ariel

    2011-01-01

    In this research monograph, the author constructs deterministic extractors for several types of sources, using a methodology of recycling randomness which enables increasing the output length of deterministic extractors to near optimal length.

  16. Deterministic hydrodynamics: Taking blood apart

    Science.gov (United States)

    Davis, John A.; Inglis, David W.; Morton, Keith J.; Lawrence, David A.; Huang, Lotien R.; Chou, Stephen Y.; Sturm, James C.; Austin, Robert H.

    2006-10-01

    We show the fractionation of whole blood components and isolation of blood plasma with no dilution by using a continuous-flow deterministic array that separates blood components by their hydrodynamic size, independent of their mass. Using the deterministic array technology we developed, we separate white blood cells, red blood cells, and platelets from blood plasma at flow velocities of 1,000 μm/sec and volume rates up to 1 μl/min. We verified by flow cytometry that an array using focused injection removed 100% of the lymphocytes and monocytes from the main red blood cell and platelet stream. Using a second design, we demonstrated the separation of blood plasma from the blood cells (white, red, and platelets) with virtually no dilution of the plasma and no cellular contamination of the plasma. Keywords: cells, plasma, separation, microfabrication

  17. Stochastic and deterministic trend models

    OpenAIRE

    Estela Bee Dagum; Camilo Dagum

    2008-01-01

    In this paper we provide an overview of some trend models formulated for global and local estimation. Global trend models are based on the assumption that the trend or nonstationary mean of a time series can be approximated closely by simple functions of time over the entire span of the series. The most common representations of deterministic and stochastic trends are introduced. In particular, for the former we analyze polynomial and transcendental functions, whereas for the latter we assume t...
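
    The distinction can be made concrete with a small simulation (illustrative series, not the paper's examples): a deterministic trend is a fixed function of time plus stationary noise, whereas a stochastic trend accumulates permanent shocks, so a global polynomial fit is meaningful only for the former.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)

# Deterministic trend: polynomial in time plus stationary noise.
deterministic = 1.0 + 0.05 * t + 2e-4 * t**2 + rng.normal(0, 1, t.size)

# Stochastic trend: random walk with drift (shocks never decay).
stochastic = np.cumsum(0.05 + rng.normal(0, 1, t.size))

# The same global fit is appropriate for the first series only;
# applied to the random walk it yields a spurious "trend".
print(np.polyfit(t, deterministic, 2).round(4))
print(np.polyfit(t, stochastic, 2).round(4))
```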

  18. Successful treatment of rare-earth magnet ingestion via minimally invasive techniques: a case series.

    Science.gov (United States)

    Kosut, Jessica S; Johnson, Sidney M; King, Jeremy L; Garnett, Gwendolyn; Woo, Russell K

    2013-04-01

    Cases of rare-earth magnet ingestions have been increasingly reported in the literature. However, these descriptions have focused on the severity of the injuries, rather than the clinical presentation and/or therapeutic approach. We report a series of eight children, ranging in age from 2 to 10 years, who ingested powerful rare-earth magnets. The rare-earth magnets were marketed in 2009 under the trade name Buckyballs(®) (Maxfield & Oberton, New York, NY). They are about 5 mm in size, spherical, and brightly colored, making them appealing for young children to play with and place in their mouths. Three children presented within hours of ingestion, and the magnets were successfully removed via endoscopy in two, whereas the third child required laparoscopy. No fistulas were found in these children. A fourth child presented 2 days after ingestion with evidence of bowel wall erosion, but without fistula formation; the magnets were removed via laparoscopy. A fifth child ingested nine magnets in a ring formation, which were removed via colonoscopy without evidence of injury or fistula formation. The three remaining children presented late (5-8 days after ingestion) and were found to have associated fistulas. They were treated successfully with a combination of endoscopy and laparoscopy with fluoroscopy. None of the children in our series required an open surgical procedure. All children were discharged home without complications. This case series highlights the potential dangers of rare-earth magnet ingestion in children. Our experience suggests that prompt intervention using minimally invasive approaches can lead to successful outcomes.

  19. Design Optimization of a Speed Reducer Using Deterministic Techniques

    OpenAIRE

    Lin, Ming-Hua; Tsai, Jung-Fa; Hu, Nian-Ze; Chang, Shu-Chuan

    2013-01-01

    The optimal design problem of minimizing the total weight of a speed reducer under constraints is a generalized geometric programming problem. Since the metaheuristic approaches cannot guarantee to find the global optimum of a generalized geometric programming problem, this paper applies an efficient deterministic approach to globally solve speed reducer design problems. The original problem is converted by variable transformations and piecewise linearization techniques. The reformulated prob...

  20. A minimally invasive treatment of an asymptomatic case of mesh erosion into the caecum after total extraperitoneal inguinal hernia repair.

    Science.gov (United States)

    Mulleners, Gert; Olivier, Frederick; Abasbassi, Mohamed

    2017-12-28

    Mesh migration and erosion into adjacent viscera is a rare complication after laparoscopic inguinal hernia repair. We present a minimally invasive treatment of an asymptomatic case of mesh erosion into the caecum after total extraperitoneal inguinal hernia repair, including an overview of the relevant recent literature. A male patient underwent a laparoscopic inguinal hernia repair at the age of 42. Two years after this procedure, a screening colonoscopy revealed erosion of the mesh into the caecum. A laparoscopy was performed with partial resection of the mesh and minimal resection of the involved colon. The results of a systematic review of English PubMed articles on mesh migration and erosion after inguinal hernia repair are presented. We report the first minimally invasive treatment of mesh erosion into the colon. A laparoscopic approach is feasible and provides excellent exposure. Partial removal of the mesh is suggested in uncomplicated cases to avoid the complications associated with complete mesh removal.

  1. The dialectical thinking about deterministic and probabilistic safety analysis

    International Nuclear Information System (INIS)

    Qian Yongbai; Tong Jiejuan; Zhang Zuoyi; He Xuhong

    2005-01-01

    There are two methods for designing and analysing the safety performance of a nuclear power plant: the traditional deterministic method and the probabilistic method. To date, the design of nuclear power plants has been based on the deterministic method, and practice has proved the deterministic method effective for current nuclear power plants. However, the probabilistic method (Probabilistic Safety Assessment - PSA) considers a much wider range of faults, takes an integrated look at the plant as a whole, and uses realistic criteria for the performance of the systems and structures of the plant. PSA can be seen, in principle, to provide a broader and more realistic perspective on safety issues than the deterministic approaches. In this paper, the historical origins and development trends of the above two methods are reviewed and summarized in brief. Based on the discussion of two application cases - one the changes to specific design provisions of the general design criteria (GDC), the other the risk-informed categorization of structures, systems and components - it can be concluded that the deterministic method and the probabilistic method are dialectical and unified, that they are gradually merging into each other, and that they are being used in coordination. (authors)

  2. A deterministic width function model

    Directory of Open Access Journals (Sweden)

    C. E. Puente

    2003-01-01

    Full Text Available Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features, like their overall shape and texture and their observed power-law scaling on their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.
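
    A minimal sketch of the fractal interpolating functions at the heart of the FM method (interpolation points and vertical scalings d_i are arbitrary illustrative choices; the affine-map coefficients follow the standard Barnsley construction):

```python
import random

# Graph of a fractal interpolating function through given points;
# the points and vertical scalings below are invented for illustration.
pts = [(0.0, 0.0), (0.4, 0.6), (0.7, 0.2), (1.0, 1.0)]   # (x_i, y_i)
d = [0.4, -0.3, 0.35]                                    # |d_i| < 1

(x0, y0), (xN, yN) = pts[0], pts[-1]
maps = []
for (xa, ya), (xb, yb), di in zip(pts, pts[1:], d):
    a = (xb - xa) / (xN - x0)
    e = (xN * xa - x0 * xb) / (xN - x0)
    c = (yb - ya) / (xN - x0) - di * (yN - y0) / (xN - x0)
    f = (xN * ya - x0 * yb) / (xN - x0) - di * (xN * y0 - x0 * yN) / (xN - x0)
    maps.append((a, e, c, f, di))

# Render the attractor (the function's graph) by iterating the maps.
x, y, graph = 0.5, 0.5, []
for _ in range(10000):
    a, e, c, f, di = random.choice(maps)
    x, y = a * x + e, c * x + di * y + f
    graph.append((x, y))
```

    In the FM approach, a simple multifractal measure is transported through such a function, and the resulting derived distribution plays the role of the modeled width function.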

  3. TACE with Ar-He Cryosurgery Combined Minimal Invasive Technique for the Treatment of Primary NSCLC in 139 Cases

    Directory of Open Access Journals (Sweden)

    Yunzhi ZHOU

    2010-01-01

    Full Text Available Background and objective TACE, Ar-He target cryosurgery and radioactive seed implantation are the main minimally invasive methods in the treatment of lung cancer. This article summarizes the quality of survival after treatment, the clinical efficacy and the survival period, and analyzes the advantages and shortcomings of each method, so as to evaluate the clinical effect of multiple minimally invasive treatments for non-small cell lung cancer. Methods All 139 cases were non-small cell lung cancer patients confirmed by pathology, followed up retrospectively from July 2006 to July 2009; all had been assessed as inoperable by comprehensive evaluation. Different combinations of minimally invasive treatments were selected according to the blood supply, size and location of the lesion. Among the 139 cases, there were 102 cases of primary cancer and 37 cases of metastasis to the mediastinum, lung and chest wall; 71 cases with abundant blood supply were treated with a combination of superselective target artery chemotherapy, Ar-He target cryoablation and radiochemotherapy with seed implantation; 48 cases with poor blood supply were treated with Ar-He target cryoablation alone; 20 cases with poor blood supply were treated with a combination of Ar-He target cryoablation and radiochemotherapy with seed implantation. The pre- and post-treatment KPS scores, imaging data and follow-up results were then analyzed. Results The KPS score increased by a mean of 20.01 after treatment. Over 3 years of follow-up there were 44 cases of CR, 87 cases of PR, 3 cases of NC and 5 cases of PD, for an efficiency of 94.2%. There were 99 cases of 1-year survival (71.2%), 43 cases of 2-year survival (30.2%) and 4 cases of over 3-year survival, with a median survival of 19 months. Average survival was (16 ± 1.5) months. There were no severe complications, such as spinal cord injury, or vessel and pericardial aspiration. Conclusion Minimally invasive technique is a highly successful, micro-invasive and effective method with mild complications.

  4. Deterministic SLIR model for tuberculosis disease mapping

    Science.gov (United States)

    Aziz, Nazrina; Diah, Ijlal Mohd; Ahmad, Nazihah; Kasim, Maznah Mat

    2017-11-01

    Tuberculosis (TB) occurs worldwide. It can be transmitted to others directly through the air when persons with active TB sneeze, cough or spit. In Malaysia, TB has been recognized as one of the most infectious diseases that lead to death. Disease mapping is one of the methods that can be used in prevention strategies, since it displays a clear picture of high- and low-risk areas. An important consideration when studying disease occurrence is relative risk estimation. The transmission of TB is studied through a mathematical model. Therefore, in this study, deterministic SLIR models are used to estimate the relative risk of TB transmission.
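
    A hedged sketch of what a deterministic SLIR (susceptible-latent-infectious-recovered) model looks like in code (all rates are invented, not the study's calibrated values):

```python
from scipy.integrate import solve_ivp

# Illustrative SLIR compartment model for TB-like transmission.
beta, sigma, gamma, mu = 0.3, 0.1, 0.05, 0.01   # infection, activation,
                                                # recovery, demography

def slir(t, y):
    S, L, I, R = y
    N = S + L + I + R
    dS = mu * N - beta * S * I / N - mu * S
    dL = beta * S * I / N - (sigma + mu) * L
    dI = sigma * L - (gamma + mu) * I
    dR = gamma * I - mu * R
    return [dS, dL, dI, dR]

sol = solve_ivp(slir, (0, 365), [990, 0, 10, 0])
S, L, I, R = sol.y[:, -1]
print(f"after one year: S={S:.0f} L={L:.0f} I={I:.0f} R={R:.0f}")
```

    Running such a model with district-specific parameters and comparing infectious prevalence across districts is one way a relative-risk surface for disease mapping could be assembled.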

  5. Opportunity-based age replacement policy with minimal repair

    International Nuclear Information System (INIS)

    Jhang, J.P.; Sheu, S.H.

    1999-01-01

    This paper proposes an opportunity-based age replacement policy with minimal repair. The system has two types of failures. Type I failures (minor failures) are removed by minimal repairs, whereas type II failures are removed by replacements. Type I and type II failures are age-dependent. A system is replaced at type II failure (catastrophic failure) or at the first opportunity after age T, whichever occurs first. The cost of the minimal repair of the system at age z depends on the random part C(z) and the deterministic part c(z). Opportunities arise according to a Poisson process, independent of failures of the component. The expected cost rate is obtained. The optimal T* which minimizes the cost rate is discussed. Various special cases are considered. Finally, a numerical example is given.
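
    The policy can be illustrated with a small Monte Carlo sketch (invented costs, a Weibull type II failure age and a toy minor-failure intensity; the paper itself derives the expected cost rate analytically):

```python
import random

LAM = 0.5                                  # opportunity rate (Poisson)
C_REPAIR, C_OPP, C_FAIL = 1.0, 10.0, 25.0  # minimal repair / opportunity
                                           # replacement / failure replacement

def cycle(T, rng):
    t2 = rng.weibullvariate(8.0, 2.0)      # type II (catastrophic) failure age
    opp = T + rng.expovariate(LAM)         # first opportunity after age T
    end = min(t2, opp)
    cost = C_FAIL if t2 <= opp else C_OPP
    # type I failures on [0, end] with intensity r1(z) = 0.1 z, by thinning
    z, bound = 0.0, 0.1 * end
    while bound > 0:
        z += rng.expovariate(bound)
        if z >= end:
            break
        if rng.random() < (0.1 * z) / bound:
            cost += C_REPAIR               # minimal repair, age unchanged
    return cost, end

def cost_rate(T, n=20000, seed=1):
    rng = random.Random(seed)
    costs, lengths = zip(*(cycle(T, rng) for _ in range(n)))
    return sum(costs) / sum(lengths)       # long-run cost per unit time

best_T = min([2, 4, 6, 8, 10], key=cost_rate)
print(best_T, round(cost_rate(best_T), 3))
```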

  6. b-c system approach to minimal models. The genus-zero case

    International Nuclear Information System (INIS)

    Bonora, L.; Matone, M.; Toppan, F.; Wu Ke

    1989-01-01

    We present a method based on real-weight b-c systems to study conformal minimal models. In particular, it reproduces the results of the Coulomb gas approach, while shedding light on the topological origin of the charge at infinity. (orig.)

  7. Deterministic Bragg Coherent Diffraction Imaging.

    Science.gov (United States)

    Pavlov, Konstantin M; Punegov, Vasily I; Morgan, Kaye S; Schmalz, Gerd; Paganin, David M

    2017-04-25

    A deterministic variant of Bragg Coherent Diffraction Imaging is introduced in its kinematical approximation, for X-ray scattering from an imperfect crystal whose imperfections span no more than half of the volume of the crystal. This approach provides a unique analytical reconstruction of the object's structure factor and displacement fields from the 3D diffracted intensity distribution centred around any particular reciprocal lattice vector. The simple closed-form reconstruction algorithm, which requires only one multiplication and one Fourier transformation, is not restricted by assumptions of smallness of the displacement field. The algorithm performs well in simulations incorporating a variety of conditions, including both realistic levels of noise and departures from ideality in the reference (i.e. imperfection-free) part of the crystal.

  8. Minimally invasive esophagectomy for cancer: Single center experience after 44 consecutive cases

    Directory of Open Access Journals (Sweden)

    Bjelović Miloš

    2015-01-01

    Full Text Available Introduction. At the Department of Minimally Invasive Upper Digestive Surgery of the Hospital for Digestive Surgery in Belgrade, hybrid minimally invasive esophagectomy (hMIE) has been the standard of care for patients with resectable esophageal cancer since 2009. As the next and final step in change management, from January 2015 we adopted total minimally invasive esophagectomy (tMIE) as the standard of care. Objective. The aim of the study was to report initial experience with hMIE (laparoscopic approach) for cancer and to analyze the surgical technique, major morbidity and 30-day mortality. Methods. A retrospective cohort study included 44 patients who underwent elective hMIE for esophageal cancer at the Department for Minimally Invasive Upper Digestive Surgery, Hospital for Digestive Surgery, Clinical Center of Serbia in Belgrade from April 2009 to December 2014. Results. There were 16 (36%) tumors of the middle thoracic esophagus and 28 (64%) tumors of the distal thoracic esophagus. Mean duration of the operation was 319 minutes (approximately five hours and 20 minutes). The average blood loss was 173.6 ml. A total of 12 (27%) patients had postoperative complications, and the mean intensive care unit stay was 2.8 days. Mean hospital stay after surgery was 16 days. The average number of lymph nodes harvested during surgery was 31.9. The overall mortality rate within 30 days after surgery was 2%. Conclusion. As long as MIE is oncologically equivalent to open esophagectomy (OE), the better relation between cost savings and potentially increased effectiveness will make MIE the preferred approach in high-volume esophageal centers experienced in minimally invasive procedures.

  9. Primary Sjögren’s syndrome with minimal change disease—A case report

    Directory of Open Access Journals (Sweden)

    Mei-Li Yang

    2011-05-01

    Full Text Available Glomerular involvement in patients with primary Sjögren's syndrome (pSS) has rarely been reported. Among the reported types, membranoproliferative glomerulonephritis and membranous nephropathy are the more common. We report a middle-aged female presenting concurrently with nephrotic syndrome and microscopic hematuria, whose pSS was diagnosed by positive anti-Ro (SSA)/anti-La (SSB) autoantibodies, dry mouth, severely and diffusely impaired function of the bilateral parotid and submandibular glands, and a positive Schirmer test. Renal pathology revealed minimal change disease and thin basement membrane nephropathy. The patient's nephrotic syndrome resolved after treatment with corticosteroids. To our knowledge, this is the first report of minimal change disease in a patient with pSS.

  10. Political Minimalism and Social Debates: The Case of Human-Enhancement Technologies.

    Science.gov (United States)

    Rodríguez-Alcázar, Javier

    2017-09-01

    A faulty understanding of the relationship between morality and politics encumbers many contemporary debates on human enhancement. As a result, some ethical reflections on enhancement undervalue its social dimensions, while some social approaches to the topic lack normative import. In this essay, I use my own conception of the relationship between ethics and politics, which I call "political minimalism," in order to support and strengthen the existing social perspectives on human-enhancement technologies.

  11. Source authenticity in the UMLS--a case study of the Minimal Standard Terminology.

    Science.gov (United States)

    Elhanan, Gai; Huang, Kuo-Chuan; Perl, Yehoshua

    2010-12-01

    As the UMLS integrates multiple source vocabularies, the integration process requires that certain adaptation be applied to the source. Our interest is in examining the relationship between the UMLS representation of a source vocabulary and the source vocabulary itself. We investigated the integration of the Minimal Standard Terminology (MST) into the UMLS in order to examine how close its UMLS representation is to the source MST. The MST was conceived as a "minimal" list of terms and structure intended for use within computer systems to facilitate standardized reporting of gastrointestinal endoscopic examinations. Although the MST has an overall schema and implied relationship structure, many of the UMLS integrated MST terms were found to be hierarchically orphaned, and with lateral relationships that do not closely adhere to the source MST. Thus, the MST representation within the UMLS significantly differs from that of the source MST. These representation discrepancies may affect the usability of the MST representation in the UMLS for knowledge acquisition. Furthermore, they pose a problem from the perspective of application developers. While these findings may not necessarily apply to other source terminologies, they highlight the conflict between preservation of authentic concept orientation and the UMLS overall desire to provide fully specified names for all source terms. Copyright © 2010 Elsevier Inc. All rights reserved.

  12. A case report of minimal change nephrotic syndrome complicated with portal, splenic and superior mesenteric vein thrombosis.

    Science.gov (United States)

    Wang, Jun; Fan, QiuLing; Chen, Ying; Dong, Xuezhu; Zhang, YuXia; Feng, JiangMin; Ma, JianFei; Wang, LiNing

    2012-06-01

    Venous thrombosis is common in nephrotic syndrome, but portal vein thrombosis has a relatively low incidence in patients with nephrotic syndrome. We describe here the case of an 18-year-old male student with newly diagnosed nephrotic syndrome that was complicated by portal, splenic and superior mesenteric vein thrombosis. In newly diagnosed nephrotic syndrome due to minimal change disease, thrombus formation can occur and should be noted, particularly when it occurs in rare sites. Recognition of nephrotic syndrome complicated by portal, splenic and superior mesenteric vein thrombosis should be emphasized.

  13. Minimizing Spatial Variability of Healthcare Spatial Accessibility—The Case of a Dengue Fever Outbreak

    Directory of Open Access Journals (Sweden)

    Hone-Jay Chu

    2016-12-01

    Full Text Available Outbreaks of infectious diseases or multi-casualty incidents have the potential to generate a large number of patients. It is a challenge for the healthcare system when demand for care suddenly surges. Traditionally, evaluation of health care spatial accessibility has been based on static supply and demand information. In this study, we propose an optimal model with the three-step floating catchment area (3SFCA) method to allocate supply so as to minimize variability in spatial accessibility. We used empirical dengue fever outbreak data from Tainan City, Taiwan in 2015 to demonstrate the dynamic change in spatial accessibility over the course of the epidemic. The x and y coordinates of dengue-infected patients, with precision loss, were provided publicly by the Tainan City government and were used as our model's demand. The spatial accessibility of health care during the dengue outbreak from August to October 2015 was analyzed spatially and temporally by producing accessibility maps and conducting capacity change analysis. This study also utilized the particle swarm optimization (PSO) model to decrease the spatial variation in accessibility and the shortage areas of healthcare resources as the epidemic went on. The proposed method can help decision makers reallocate healthcare resources spatially when the ratios of demand and supply surge too quickly and form clusters in some locations.
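
    A minimal PSO sketch in the spirit described (a toy objective standing in for the 3SFCA-based variability measure; all parameters and data are invented):

```python
import numpy as np

rng = np.random.default_rng(42)
demand = rng.uniform(1, 5, size=8)            # toy demand at 8 locations

def f(x):
    """Toy objective: variance of per-location accessibility scores."""
    return (x / demand).var()

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(0, 10, (n, dim))          # particle positions (supply)
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, 0, 10)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, f(g)

allocation, spread = pso(f, dim=8)
print(allocation.round(2), round(spread, 6))
```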

  14. Respiratory System Function in Patients After Minimally Invasive Aortic Valve Replacement Surgery: A Case Control Study.

    Science.gov (United States)

    Stoliński, Jarosław; Musiał, Robert; Plicner, Dariusz; Andres, Janusz

    The aim of the study was to comparatively analyze respiratory system function after minimally invasive aortic valve replacement through right minithoracotomy (RT-AVR) versus conventional AVR. Analysis of 201 patients scheduled for RT-AVR and 316 for AVR between January 2010 and November 2013. Complications of the respiratory system and pulmonary functional status are presented. Complications of the respiratory system occurred in 16.8% of AVR and 11.0% of RT-AVR patients (P = 0.067). The rate of pleural effusions, thoracenteses, pneumonias, or phrenic nerve dysfunctions was not significantly different between groups. Perioperative mortality was 1.9% in AVR and 1.0% in RT-AVR (P = 0.417). Mechanical ventilation time after surgery was 9.7 ± 5.9 hours for AVR and 7.2 ± 3.2 hours for RT-AVR patients. Spirometry examinations revealed that pulmonary functional status was more impaired after AVR in comparison with RT-AVR surgery.

  15. Source Authenticity in the UMLS – A Case Study of the Minimal Standard Terminology

    Science.gov (United States)

    Elhanan, Gai; Huang, Kuo-Chuan; Perl, Yehoshua

    2010-01-01

    As the UMLS integrates multiple source vocabularies, the integration process requires that certain adaptation be applied to the source. Our interest is in examining the relationship between the UMLS representation of a source vocabulary and the source vocabulary itself. We investigated the integration of the Minimal Standard Terminology (MST) into the UMLS in order to examine how close its UMLS representation is to the source MST. The MST was conceived as a “minimal” list of terms and structure intended for use within computer systems to facilitate standardized reporting of gastrointestinal endoscopic examinations. Although the MST has an overall schema and implied relationship structure, many of the UMLS integrated MST terms were found to be hierarchically orphaned, and with lateral relationships that do not closely adhere to the source MST. Thus, the MST representation within the UMLS significantly differs from that of the source MST. These representation discrepancies may affect the usability of the MST representation in the UMLS for knowledge acquisition. Furthermore, they pose a problem from the perspective of application developers. While these findings may not necessarily apply to other source terminologies, they highlight the conflict between preservation of authentic concept orientation and the UMLS overall desire to provide fully specified names for all source terms. PMID:20692366

  16. Deterministic and stochastic models for Middle East respiratory syndrome (MERS)

    Science.gov (United States)

    Suryani, Dessy Rizki; Zevika, Mona; Nuraini, Nuning

    2018-03-01

    World Health Organization (WHO) data state that since September 2012 there have been 1,733 cases of Middle East Respiratory Syndrome (MERS), with 628 deaths, occurring in 27 countries. MERS was first identified in Saudi Arabia in 2012, and the largest outbreak of MERS outside Saudi Arabia occurred in South Korea in 2015. MERS is a disease that attacks the respiratory system and is caused by infection with MERS-CoV. MERS-CoV transmission occurs directly, through contact between infected and non-infected individuals, or indirectly, through objects contaminated by free virus. It is suspected that MERS can spread quickly because of free virus in the environment. Mathematical modeling is used to describe the transmission of MERS using a deterministic model and a stochastic model. The deterministic model is used to investigate the temporal dynamics of the system and analyze the steady-state condition. The stochastic model, a Continuous Time Markov Chain (CTMC), is used to predict future states using random variables. From the models that were built, the threshold values for the deterministic and stochastic models are obtained in the same form, and the probability of disease extinction can be computed from the stochastic model. Simulations of both models using several different parameters are shown, and the probability of disease extinction is compared for several initial conditions.
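
    The stochastic side can be sketched with a simple continuous-time Markov chain (Gillespie-style) simulation of an SIR-type model (rates invented; the deterministic counterpart would be the matching ODE system with the same threshold R0 = β/γ):

```python
import random

def ctmc_sir(beta=0.25, gamma=0.1, S=995, I=5, R=0, seed=7):
    """One CTMC sample path; returns (duration, final recovered count)."""
    rng, t, N = random.Random(seed), 0.0, S + I + R
    while I > 0:
        rate_inf = beta * S * I / N
        rate_rec = gamma * I
        t += rng.expovariate(rate_inf + rate_rec)
        if rng.random() < rate_inf / (rate_inf + rate_rec):
            S, I = S - 1, I + 1           # infection event
        else:
            I, R = I - 1, R + 1           # recovery/removal event
    return t, R

# Fraction of runs where the outbreak fizzles approximates the extinction
# probability (roughly (gamma/beta)**I0 when R0 > 1).
extinct = sum(ctmc_sir(seed=s)[1] < 50 for s in range(200)) / 200
print(extinct)
```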

  17. Deterministic equation solving over finite fields

    NARCIS (Netherlands)

    Woestijne, Christiaan Evert van de

    2006-01-01

    It is shown how to solve diagonal forms in many variables over finite fields by means of a deterministic efficient algorithm. Applications to norm equations, quadratic forms, and elliptic curves are given.

  18. A Deterministic and Polynomial Modified Perceptron Algorithm

    Directory of Open Access Journals (Sweden)

    Olof Barr

    2006-01-01

    Full Text Available We construct a modified perceptron algorithm that is deterministic, polynomial and also as fast as previously known algorithms. The algorithm runs in time O(mn³ log n log(1/ρ)), where m is the number of examples, n the number of dimensions and ρ is approximately the size of the margin. We also construct a non-deterministic modified perceptron algorithm running in time O(mn² log n log(1/ρ)).
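
    For orientation, the classic deterministic perceptron that such modified algorithms build on looks like this (synthetic separable data; this is the baseline, not the authors' modified version):

```python
import numpy as np

def perceptron(X, y, max_epochs=1000):
    """Deterministic sweeps over the data; update on each mistake."""
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        updated = False
        for xi, yi in zip(X, y):          # fixed, deterministic order
            if yi * (w @ xi) <= 0:        # misclassified (or on boundary)
                w += yi * xi
                updated = True
        if not updated:                   # all m examples separated
            break
    return w

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 2))
y = np.where(X @ np.array([1.0, -2.0]) > 0, 1, -1)
w = perceptron(X, y)
print(np.all(np.sign(X @ w) == y))        # True once converged
```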

  19. Minimal invasive surgery for unicameral bone cyst using demineralized bone matrix: a case series

    Directory of Open Access Journals (Sweden)

    Cho Hwan

    2012-07-01

    Full Text Available Abstract Background Various treatments for unicameral bone cysts have been proposed. Recent concern focuses on the effectiveness of closed methods. This study evaluated the effectiveness of demineralized bone matrix as a graft material after intramedullary decompression for the treatment of unicameral bone cysts. Methods Between October 2008 and June 2010, twenty-five patients with a unicameral bone cyst were treated with intramedullary decompression followed by grafting of demineralized bone matrix. There were 21 male and 4 female patients with a mean age of 11.1 years (range, 3–19 years). The proximal metaphysis of the humerus was affected in 12 patients, the proximal femur in five, the calcaneum in three, the distal femur in two, the tibia in two, and the radius in one. There were 17 active cysts and 8 latent cysts. Radiologic change was evaluated according to a modified Neer classification. Time to healing was defined as the period required to achieve cortical thickening on the anteroposterior and lateral plain radiographs, as well as consolidation of the cyst. The patients were followed up for a mean period of 23.9 months (range, 15–36 months). Results Nineteen of 25 cysts had completely consolidated after a single procedure. The mean time to healing was 6.6 months (range, 3–12 months). Four had incomplete healing radiographically but no clinical symptoms, with enough cortical thickness to prevent fracture. None of these four cysts needed a second intervention up to the last follow-up. Two of the 25 patients required a second intervention because of cyst recurrence. Both showed radiographic healing of the cyst after a mean of 10 additional months of follow-up. Conclusions A minimally invasive technique including the injection of DBM could serve as an excellent treatment method for unicameral bone cysts.

  20. Risk-based and deterministic regulation

    International Nuclear Information System (INIS)

    Fischer, L.E.; Brown, N.W.

    1995-07-01

    Both risk-based and deterministic methods are used for regulating the nuclear industry to protect the public safety and health from undue risk. The deterministic method is one where performance standards are specified for each kind of nuclear system or facility. The deterministic performance standards address normal operations and design basis events which include transient and accident conditions. The risk-based method uses probabilistic risk assessment methods to supplement the deterministic one by (1) addressing all possible events (including those beyond the design basis events), (2) using a systematic, logical process for identifying and evaluating accidents, and (3) considering alternative means to reduce accident frequency and/or consequences. Although both deterministic and risk-based methods have been successfully applied, there is need for a better understanding of their applications and supportive roles. This paper describes the relationship between the two methods and how they are used to develop and assess regulations in the nuclear industry. Preliminary guidance is suggested for determining the need for using risk based methods to supplement deterministic ones. However, it is recommended that more detailed guidance and criteria be developed for this purpose

  1. Deterministic Approach to Detect Heart Sound Irregularities

    Directory of Open Access Journals (Sweden)

    Richard Mengko

    2017-07-01

    Full Text Available A new method to detect heart sounds that does not require machine learning is proposed. The heart sound is a time-series event generated by the heart's mechanical system. From analysis of the heart sound's S-transform and an understanding of how the heart works, it can be deduced that each heart sound component has unique properties in terms of timing, frequency, and amplitude. Based on these facts, a deterministic method can be designed to identify each heart sound component. The recorded heart sound can then be printed with each component correctly labeled, which greatly helps the physician to diagnose the heart problem. The results show that most known heart sounds were successfully detected. There are some murmur cases where the detection failed. This can be improved by adding more heuristics, including setting initial parameters such as the noise threshold accurately, and taking into account the recording equipment and the environmental conditions. It is expected that this method can be integrated into an electronic stethoscope biomedical system.
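
    The deterministic idea can be sketched as follows (synthetic signal; the envelope threshold and the timing rule are illustrative assumptions, not the paper's tuned values):

```python
import numpy as np

fs = 1000                                     # sampling rate, Hz
t = np.arange(0, 3, 1 / fs)
sig = np.zeros_like(t)
for beat in np.arange(0.0, 3.0, 1.0):         # synthetic 60 bpm heart
    for comp in (0.0, 0.30):                  # S1 then S2, 300 ms apart
        idx = np.abs(t - (beat + comp)) < 0.02
        sig[idx] += np.hanning(idx.sum())

env = np.abs(sig)                             # crude amplitude envelope
threshold = 0.2                               # explicit noise threshold
above = env > threshold
starts = np.flatnonzero(above & ~np.roll(above, 1))
events = t[starts]

labels = []
for i, e in enumerate(events):                # deterministic timing rule:
    gap = e - events[i - 1] if i else np.inf  # a short gap after the previous
    labels.append("S2" if gap < 0.45 else "S1")  # sound marks S2
print(list(zip(labels, events.round(2))))
```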

  2. Surgical case volume in Canadian urology residency: a comparison of trends in open and minimally invasive surgical experience.

    Science.gov (United States)

    Mamut, Adiel E; Afshar, Kourosh; Mickelson, Jennifer J; Macneily, Andrew E

    2011-06-01

    The application of minimally invasive surgery (MIS) has become increasingly common in urology training programs and clinical practice. Our objective was to review surgical case data from all 12 Canadian residency programs to identify trends in resident exposure to MIS and open procedures. Every year, beginning in 2003, an average of 41 postgraduate year 3 to 5 residents reported surgical case data to a secure internet relational database. Data were anonymized and extracted for the period 2003 to 2009 by measuring a set of 11 predefined index cases that could be performed in both an open and MIS fashion. 16,687 index cases were recorded by a total of 198 residents. As a proportion, there was a significant increase in MIS from 12% in 2003 to 2004 to 32% in 2008 to 2009 (P = 0.01). A significant decrease in the proportion of index cases performed with an open approach was also observed, from 88% in 2003 to 2004 to 68% in 2008 to 2009 (P = 0.01). The majority of these shifts were secondary to the increased application of MIS for nephrectomies of all types (29%-45%), nephroureterectomy (27%-76%), adrenalectomy (15%-71%), and pyeloplasty (17%-54%); the remaining index cases continued to be performed predominantly in an open fashion during the study period. MIS constitutes an increasingly significant component of surgical volume in Canadian urology residencies, with a reciprocal decrease in exposure to open surgery. These trends necessitate ongoing evaluation to maintain the integrity of postgraduate urologic training.

  3. Minimally Invasive Total Hip Replacement in an Ipsilateral Post-traumatic above-knee Amputation: A Case Report.

    Science.gov (United States)

    Patnaik, Sanjeev; Nayak, Biswaranjan; Sahoo, Akshaya Kumar; Sahu, Nabin Kumar

    2017-01-01

    Total hip replacement (THR) is a highly successful operation for alleviating pain and improving overall hip function in end-stage arthritis of the hip in otherwise fit patients. However, THR as a surgical option in post-traumatic hip arthritis with ipsilateral above-knee amputation is rarely reported. We present a case report of a 30-year-old male who had previously undergone an above-knee amputation after a road-traffic accident, having presented 24 h after the injury with a segmental femoral fracture and popliteal artery laceration, for which the limb could not be salvaged. He had an impacted anteroinferior dislocation of the ipsilateral hip with significant cartilage damage of the femoral head, which required open reduction. Subsequently, he developed traumatic arthritis of the involved hip, which required conversion to an uncemented THR using a minimally invasive (MIS) anterolateral approach. The preoperative management, surgical technique, and postoperative rehabilitation are described to highlight the technical challenges these lower limb amputees may present, along with a review of the literature on such rare cases. THR in an above-knee amputee with post-traumatic hip arthritis using an MIS technique is an encouraging surgical option for early functional recovery and minimizing surgical complications.

  4. Minimally invasive maxillary sinus elevation using balloon system: A case series

    Directory of Open Access Journals (Sweden)

    Radha Bharathi Dhandapani

    2016-01-01

    Full Text Available The posterior maxillary segment frequently exhibits insufficient bone mass to support dental implants. Sinus floor augmentation enables implant placement in the posterior maxilla. This case series included ten sites in which sinus floor elevation was performed using a sinus lift balloon system, followed by augmentation utilizing irradiated cancellous bone allograft. Postoperative radiographic assessment of vertical bone gain was done at 3 and 6 months of follow-up. The mean initial and final bone heights were 6.16 and 10.50 mm, respectively, with a mean increase of 4.34 mm at 6 months and no complications. The presented technique might represent a viable alternative for sinus elevation in the posterior atrophied maxilla. Irradiated cancellous bone allograft can be advocated as an ideal bone graft material for sinus augmentation procedures.

  5. Minimally invasive surgical treatment for early-stage ovarian cancer: a case report

    Directory of Open Access Journals (Sweden)

    Alexandre Pupo-Nogueira

    2006-12-01

    Full Text Available Case report of a 54-year-old patient, with no complaints and no alterations detected during the physical examination, who underwent a routine pelvic ultrasound that showed a complex cyst on the right ovary, which was confirmed with a CT scan. The serum CA125 level was elevated while other tumor markers - carcinoembryonic antigen, alpha-fetoprotein antigen and beta human chorionic gonadotrophin - were normal. Videolaparoscopy was used for the diagnosis and therapeutic management, revealing vegetating lesions on both ovaries but no other alterations. Biopsies were performed on the tumor masses and analyzed using the frozen section technique during the surgical procedure, which revealed a serous neoplasm of low malignant potential - borderline. Next, ovarian carcinoma staging was performed in accordance with the standards recommended by the International Federation of Gynecology and Obstetrics: bilateral salpingo-oophorectomy, total abdominal hysterectomy, bilateral pelvic and para-aortic lymphadenectomy. To complete the staging, an omentectomy was performed by means of a 4 cm transverse incision in the epigastric region which was enlarged using a special Dexterity Protractor™ retractor. The incision also enabled the removal of surgical specimens. The patient was discharged from the hospital on the following day and recovered without any complications. Histological analysis confirmed the borderline tumor and no malignant cells were found on the other surgical specimens. Videolaparoscopy, minilaparotomy and the special retractor enabled adequate diagnosis, staging and removal of the localized ovarian tumor.

  6. Minimization of the energy loss of nuclear power plants in case of partial in-core monitoring system failure

    Science.gov (United States)

    Zagrebaev, A. M.; Ramazanov, R. N.; Lunegova, E. A.

    2017-01-01

    In this paper we consider the problem of minimizing the energy loss of a nuclear power plant in the case of partial in-core monitoring system failure. The options are continued reactor operation at reduced power, or total replacement of the failed neutron measurement channels, which requires shutting down the reactor and a stock of spare detectors. This article examines the reconstruction of the energy release in the core of a nuclear reactor on the basis of the readings of the height (axial) sensors. The missing measurement information can be reconstructed by mathematical methods, so that replacement of the failed sensors can be avoided. It is suggested that a set of 'natural' basis functions, determined by means of statistical estimates obtained from archival data, be constructed. The proposed procedure makes it possible to reconstruct the field even with a significant loss of measurement information. Improving the accuracy of the reconstruction of the neutron flux density under partial loss of measurement information minimizes the stock of spare components required and the associated losses.
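
    A hedged sketch of the reconstruction step (synthetic axial profiles; here the 'natural' functions are taken to be the leading SVD modes of the archival data, which is one plausible reading of the statistical construction):

```python
import numpy as np

rng = np.random.default_rng(0)
z = np.linspace(0, 1, 64)                        # axial positions
archive = np.array([np.sin(np.pi * z) * (1 + 0.2 * rng.normal())
                    + 0.1 * rng.normal() * np.sin(2 * np.pi * z)
                    for _ in range(200)])        # archival axial profiles
mean = archive.mean(axis=0)
_, _, Vt = np.linalg.svd(archive - mean, full_matrices=False)
basis = Vt[:3]                                   # leading "natural" modes

true = np.sin(np.pi * z) + 0.05 * np.sin(2 * np.pi * z)
working = rng.choice(64, size=12, replace=False) # the sensors still alive
A = basis[:, working].T                          # fit only where we measure
coef, *_ = np.linalg.lstsq(A, true[working] - mean[working], rcond=None)
reconstructed = mean + coef @ basis
print(np.max(np.abs(reconstructed - true)))      # small residual
```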

  7. Precision production: enabling deterministic throughput for precision aspheres with MRF

    Science.gov (United States)

    Maloney, Chris; Entezarian, Navid; Dumas, Paul

    2017-10-01

    Aspherical lenses offer advantages over spherical optics by improving image quality or reducing the number of elements necessary in an optical system. Aspheres are no longer being used exclusively by high-end optical systems but are now replacing spherical optics in many applications. The need for a method of production-manufacturing of precision aspheres has emerged and is part of the reason that the optics industry is shifting away from artisan-based techniques towards more deterministic methods. Not only does Magnetorheological Finishing (MRF) empower deterministic figure correction for the most demanding aspheres but it also enables deterministic and efficient throughput for series production of aspheres. The Q-flex MRF platform is designed to support batch production in a simple and user friendly manner. Thorlabs routinely utilizes the advancements of this platform and has provided results from using MRF to finish a batch of aspheres as a case study. We have developed an analysis notebook to evaluate necessary specifications for implementing quality control metrics. MRF brings confidence to optical manufacturing by ensuring high throughput for batch processing of aspheres.

  8. Successful minimally-invasive management of a case of giant prostatic hypertrophy associated with recurrent nephrogenic adenoma of the prostate.

    Science.gov (United States)

    Learney, Robert M; Malde, Sachin; Downes, Mark; Shrotri, Nitin

    2013-04-08

    Benign Prostatic Hypertrophy (BPH) is said to affect at least a third of men over 60. However, the literature contains fewer than 200 reports of prostates over 200 g in mass - Giant Prostatic Hypertrophy (GPH). Nephrogenic adenomas are benign lesions of the urinary tract that are believed to represent the local proliferation of shed renal tubular cells implanting at sites of urothelial injury. We present the first case in the literature of these two rare pathologies co-existing in the same patient, and the successful management and 36-month follow-up of the patient's symptoms with minimally invasive therapy, including the still-uncommon selective prostatic artery embolisation. We also briefly discuss the role of PAX2 in injured renal tissues and nephrogenic adenomas. Symptomatic Giant Prostatic Hypertrophy (GPH) can be successfully managed with a combination of serial TURPs, 5α-reductase inhibition and selective prostatic artery embolisation (SPAE).

  9. Minimally Invasive Alveolar Ridge Preservation Utilizing an In Situ Hardening β-Tricalcium Phosphate Bone Substitute: A Multicenter Case Series

    Directory of Open Access Journals (Sweden)

    Minas D. Leventis

    2016-01-01

    Full Text Available Ridge preservation measures, which include the filling of extraction sockets with bone substitutes, have been shown to reduce ridge resorption, while methods that do not require primary soft tissue closure minimize patient morbidity and decrease surgical time and cost. In a case series of 10 patients requiring single extraction, in situ hardening beta-tricalcium phosphate (β-TCP) granules coated with poly(lactic-co-glycolic acid) (PLGA) were utilized as a grafting material that does not necessitate primary wound closure. After 4 months, clinical observations revealed excellent soft tissue healing without loss of attached gingiva in all cases. At reentry for implant placement, bone core biopsies were obtained and primary implant stability was measured by final seating torque and resonance frequency analysis. Histological and histomorphometrical analysis revealed pronounced bone regeneration (24.4 ± 7.9% new bone) in parallel to the resorption of the grafting material (12.9 ± 7.7% graft material) while high levels of primary implant stability were recorded. Within the limits of this case series, the results suggest that β-TCP coated with polylactide can support new bone formation at postextraction sockets, while the properties of the material improve the handling and produce a stable and porous bone substitute scaffold in situ, facilitating the application of noninvasive surgical techniques.

  10. Minimally Invasive Alveolar Ridge Preservation Utilizing an In Situ Hardening β-Tricalcium Phosphate Bone Substitute: A Multicenter Case Series

    Science.gov (United States)

    Leventis, Minas D.; Fairbairn, Peter; Kakar, Ashish; Leventis, Angelos D.; Margaritis, Vasileios; Lückerath, Walter; Horowitz, Robert A.; Rao, Bappanadu H.; Lindner, Annette; Nagursky, Heiner

    2016-01-01

    Ridge preservation measures, which include the filling of extraction sockets with bone substitutes, have been shown to reduce ridge resorption, while methods that do not require primary soft tissue closure minimize patient morbidity and decrease surgical time and cost. In a case series of 10 patients requiring single extraction, in situ hardening beta-tricalcium phosphate (β-TCP) granules coated with poly(lactic-co-glycolic acid) (PLGA) were utilized as a grafting material that does not necessitate primary wound closure. After 4 months, clinical observations revealed excellent soft tissue healing without loss of attached gingiva in all cases. At reentry for implant placement, bone core biopsies were obtained and primary implant stability was measured by final seating torque and resonance frequency analysis. Histological and histomorphometrical analysis revealed pronounced bone regeneration (24.4 ± 7.9% new bone) in parallel to the resorption of the grafting material (12.9 ± 7.7% graft material) while high levels of primary implant stability were recorded. Within the limits of this case series, the results suggest that β-TCP coated with polylactide can support new bone formation at postextraction sockets, while the properties of the material improve the handling and produce a stable and porous bone substitute scaffold in situ, facilitating the application of noninvasive surgical techniques. PMID:27190516

  11. Introducing Synchronisation in Deterministic Network Models

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.

    2006-01-01

    The paper addresses performance analysis for distributed real time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented, leading to the suggestion of suitable network models. An existing model for flow control is presented and an inherent weakness is revealed and remedied. Examples are given and numerically analysed through deterministic network modelling. Results are presented to highlight the properties of the suggested models.

  12. DETERMINISTIC METHODS USED IN FINANCIAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    MICULEAC Melania Elena

    2014-06-01

    Full Text Available The deterministic methods are those quantitative methods that aim to quantify numerically the mechanisms by which factorial and causal relations of influence and of propagation of effects are created and expressed, in cases where the phenomenon can be expressed through a direct functional cause-effect relation. The functional, deterministic relations are causal relations in which a well-defined value of the resulting phenomenon corresponds to a given value of the characteristic. They can directly express the correlation between the phenomenon and its influence factors, in the form of a function-type mathematical formula.
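
    As a worked sketch of such a function-type cause-effect relation (hypothetical figures; the chain-substitution decomposition below is one representative deterministic technique, not necessarily the method the paper describes):

      # Chain-substitution decomposition of the two-factor relation
      # revenue = price * quantity. All figures are hypothetical.
      def chain_substitution(p0, q0, p1, q1):
          """Split the change in revenue into price and quantity influences."""
          effect_price = p1 * q0 - p0 * q0      # substitute price first
          effect_quantity = p1 * q1 - p1 * q0   # then substitute quantity
          total_change = p1 * q1 - p0 * q0
          assert abs(total_change - (effect_price + effect_quantity)) < 1e-9
          return effect_price, effect_quantity

      dp, dq = chain_substitution(p0=10.0, q0=100.0, p1=12.0, q1=90.0)
      print(f"price effect: {dp:+.1f}, quantity effect: {dq:+.1f}")
      # price effect: +200.0, quantity effect: -120.0 (total change: +80.0)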

  13. Comparison of probabilistic and deterministic fiber tracking of cranial nerves.

    Science.gov (United States)

    Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H

    2017-09-01

    OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery, and DTI used with deterministic tracking methods has been reported previously. However, there are still no good methods usable for the elimination of noise from the resulting depictions. The authors have hypothesized that probabilistic tracking could lead to more accurate results, because it more efficiently extracts information from the underlying data. Moreover, the authors have adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, this work provides a comparison of the gradual threshold increase method in probabilistic and in deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects, which were obtained from 2 public databases: the Kirby repository (KR) and Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CN. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic than with deterministic tracking

  14. Deterministic doping and the exploration of spin qubits

    Energy Technology Data Exchange (ETDEWEB)

    Schenkel, T.; Weis, C. D.; Persaud, A. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Lo, C. C. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA 94720 (United States); London Centre for Nanotechnology (United Kingdom); Chakarov, I. [Global Foundries, Malta, NY 12020 (United States); Schneider, D. H. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Bokor, J. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA 94720 (United States)

    2015-01-09

    Deterministic doping by single ion implantation, the precise placement of individual dopant atoms into devices, is a path for the realization of quantum computer test structures where quantum bits (qubits) are based on electron and nuclear spins of donors or color centers. We present a donor-quantum dot type qubit architecture and discuss the use of medium and highly charged ions extracted from an Electron Beam Ion Trap/Source (EBIT/S) for deterministic doping. EBIT/S are attractive for the formation of qubit test structures due to the relatively low emittance of ion beams from an EBIT/S and due to the potential energy associated with the ions' charge state, which can aid single ion impact detection. Following ion implantation, dopant specific diffusion mechanisms during device processing affect the placement accuracy and coherence properties of donor spin qubits. For bismuth, range straggling is minimal but its relatively low solubility in silicon limits thermal budgets for the formation of qubit test structures.

  15. Shock-induced explosive chemistry in a deterministic sample configuration.

    Energy Technology Data Exchange (ETDEWEB)

    Stuecker, John Nicholas; Castaneda, Jaime N.; Cesarano, Joseph III; Trott, Wayne Merle; Baer, Melvin R.; Tappan, Alexander Smith

    2005-10-01

    Explosive initiation and energy release have been studied in two sample geometries designed to minimize stochastic behavior in shock-loading experiments. These sample concepts include a design with explosive material occupying the hole locations of a close-packed bed of inert spheres and a design that utilizes infiltration of a liquid explosive into a well-defined inert matrix. Wave profiles transmitted by these samples in gas-gun impact experiments have been characterized by both velocity interferometry diagnostics and three-dimensional numerical simulations. Highly organized wave structures associated with the characteristic length scales of the deterministic samples have been observed. Initiation and reaction growth in an inert matrix filled with sensitized nitromethane (a homogeneous explosive material) result in wave profiles similar to those observed with heterogeneous explosives. Comparison of experimental and numerical results indicates that energetic material studies in deterministic sample geometries can provide an important new tool for validation of models of energy release in numerical simulations of explosive initiation and performance.

  16. Deterministic geologic processes and stochastic modeling

    International Nuclear Information System (INIS)

    Rautman, C.A.; Flint, A.L.

    1991-01-01

    Recent outcrop sampling at Yucca Mountain, Nevada, has produced significant new information regarding the distribution of physical properties at the site of a potential high-level nuclear waste repository. Consideration of the spatial distribution of measured values and geostatistical measures of spatial variability indicates that there are a number of widespread deterministic geologic features at the site that have important implications for numerical modeling of such performance aspects as ground water flow and radionuclide transport. These deterministic features have their origin in the complex, yet logical, interplay of a number of deterministic geologic processes, including magmatic evolution; volcanic eruption, transport, and emplacement; post-emplacement cooling and alteration; and late-stage (diagenetic) alteration. Because the geologic processes responsible for the formation of Yucca Mountain are relatively well understood and operate on a more-or-less regional scale, understanding of these processes can be used in modeling the physical properties and performance of the site. Information reflecting these deterministic geologic processes may be incorporated into the modeling program explicitly, using geostatistical concepts such as soft information, or implicitly, through the adoption of a particular approach to modeling. It is unlikely that any single representation of physical properties at the site will be suitable for all modeling purposes. Instead, the same underlying physical reality will need to be described many times, each in a manner conducive to assessing specific performance issues

  17. Deterministic seismic hazard macrozonation of India

    Indian Academy of Sciences (India)

    Rock level peak horizontal acceleration (PHA) and spectral accelerations for periods 0.1 and 1 s have been calculated for all the grid points with a deterministic approach using a code written in MATLAB. Epistemic uncertainty in hazard definition has been tackled within a logic-tree framework considering two types of ...

  18. Deterministic algorithms for multi-criteria TSP

    NARCIS (Netherlands)

    Manthey, Bodo; Ogihara, Mitsunori; Tarui, Jun

    2011-01-01

    We present deterministic approximation algorithms for the multi-criteria traveling salesman problem (TSP). Our algorithms are faster and simpler than the existing randomized algorithms. First, we devise algorithms for the symmetric and asymmetric multi-criteria Max-TSP that achieve ratios of

  19. LQ control without Riccati equations: deterministic systems

    NARCIS (Netherlands)

    D.D. Yao (David); S. Zhang (Shuzhong); X.Y. Zhou (Xun Yu)

    1999-01-01

    We study a deterministic linear-quadratic (LQ) control problem over an infinite horizon, and develop a general approach to the problem based on semi-definite programming (SDP) and related duality analysis. This approach allows the control cost matrix R to be non-negative (semi-definite), a

  20. A Numerical Simulation for a Deterministic Compartmental ...

    African Journals Online (AJOL)

    In this work, an earlier deterministic mathematical model of HIV/AIDS is revisited and numerical solutions obtained using Euler's numerical method. Using hypothetical values for the parameters, a program was written in VISUAL BASIC programming language to generate series for the system of difference equations from the ...

  1. Minimally invasive endoscopic repair of refractory lateral skull base cerebrospinal fluid rhinorrhea: case report and review of the literature.

    Science.gov (United States)

    Lucke-Wold, Brandon; Brown, Erik C; Cetas, Justin S; Dogan, Aclan; Gupta, Sachin; Hullar, Timothy E; Smith, Timothy L; Ciporen, Jeremy N

    2018-03-01

    Cerebrospinal fluid (CSF) leaks occur in approximately 10% of patients undergoing a translabyrinthine, retrosigmoid, or middle fossa approach for vestibular schwannoma resection. Cerebrospinal fluid rhinorrhea also results from trauma, neoplasms, and congenital defects. A high degree of difficulty in repair sometimes requires repetitive microsurgical revisions-a rate of 10% of cases is often cited. This can not only lead to morbidity but is also costly and burdensome to the health care system. In this case-based theoretical analysis, the authors summarize the literature regarding endoscopic endonasal techniques to obliterate the eustachian tube (ET) as well as compare endoscopic endonasal versus open approaches for repair. Given the results of their analysis, they recommend endoscopic endonasal ET obliteration (EEETO) as a first- or second-line technique for the repair of CSF rhinorrhea from a lateral skull base source refractory to spontaneous healing and CSF diversion. They present a case in which EEETO resolved refractory CSF rhinorrhea over a 10-month follow-up after CSF diversions, wound reexploration, revised packing of the ET via a lateral microscopic translabyrinthine approach, and the use of a vascularized flap had failed. They further summarize the literature regarding studies that describe various iterations of EEETO. By its minimally invasive nature, EEETO imposes less morbidity as well as less risk to the patient. It can be readily implemented into algorithms once CSF diversion (for example, lumbar drain) has failed, prior to considering open surgery for repair. Additional studies are warranted to further demonstrate the outcome and cost-saving benefits of EEETO as the data until now have been largely empirical yet very hopeful. The summaries and technical notes described in this paper may serve as a resource for those skull base teams faced with similar challenging and otherwise refractory CSF leaks from a lateral skull base source.

  2. [Minimal invasive esophageal resection with anastomosis on the neck [McKeown]. Our experiences after 20 cases].

    Science.gov (United States)

    Mohos, Elemér; Nagy, Attila; Szabados, György; Réti, György; Kovács, Tamás; Jánó, Zoltán; Berki, Csaba; Mohay, József; Szabó, Lóránt; Bene, Krisztina; Bognár, Gábor; Horzov, Myroslav; Mohos, Petra; Sándor, Gábor; Tornai, Gábor; Szenkovits, Péter; Nagy, Tibor; Orbán, Csaba; Herpai, Vivien

    2016-12-01

    Esophageal resection is a traumatic intervention usually performed on patients in poor condition, resulting in high mortality and morbidity. To reduce the high incidence of complications, minimally invasive interventions were introduced. The results of thoracoscopically and laparoscopically performed esophageal resection (McKeown) were investigated after 20 cases, and the technical details of the surgical intervention are presented. 20 thoracoscopic esophageal resections with laparoscopic gastric tube formation (sec. Akiyama), preparing the esophago-gastric anastomosis on the neck, were performed in our department in the last four years. 1 patient with stricture and the other 19 patients with esophageal cancer were operated on; among them, 11 had T4-stage disease. 17 patients received neoadjuvant chemo-radiotherapy because of advanced disease. Regular follow-up examinations were performed in the oncological outpatient department. 8 patients are alive after a mean follow-up period of 25 months; 2 of them are being treated oncologically because of recurrent disease. 19 patients were extubated within 12 hours after the intervention, and the time spent in the intensive care unit was reduced to 1 or 2 days. The mean duration of the intervention was 320 minutes. Thoracoscopic dissection was performed in 8 patients without ventilation of the right lung using a double-lumen tracheal tube; among them, 3 patients developed pneumonia in the postoperative period. The remaining 12 patients were operated on with the right lung ventilated; among them, one patient developed pneumonia. One patient was converted because of injury of the thoracic aorta; after urgent thoracotomy we managed to suture the aortic wall. 1 patient died within 30 days after the operation, caused by leakage of the anastomosis resulting in mediastinitis and an esophago-tracheal fistula. In two patients, re-thoracoscopy and ligation of the thoracic duct were performed because of chylothorax refractory to conservative treatment. According to our

  3. Minimal surfaces

    CERN Document Server

    Dierkes, Ulrich; Sauvigny, Friedrich; Jakob, Ruben; Kuster, Albrecht

    2010-01-01

    Minimal Surfaces is the first volume of a three volume treatise on minimal surfaces (Grundlehren Nr. 339-341). Each volume can be read and studied independently of the others. The central theme is boundary value problems for minimal surfaces. The treatise is a substantially revised and extended version of the monograph Minimal Surfaces I, II (Grundlehren Nr. 295 & 296). The first volume begins with an exposition of basic ideas of the theory of surfaces in three-dimensional Euclidean space, followed by an introduction of minimal surfaces as stationary points of area, or equivalently

  4. Scheduling stochastic two-machine flow shop problems to minimize expected makespan

    Directory of Open Access Journals (Sweden)

    Mehdi Heydari

    2013-07-01

    Full Text Available During the past few years, despite the tremendous contributions on the deterministic flow shop problem, only a limited number of works have been dedicated to stochastic cases. This paper examines stochastic scheduling problems in a two-machine flow shop environment for expected makespan minimization, where the processing times of jobs are normally distributed. Since jobs have stochastic processing times, to minimize the expected makespan, the expected sum of the second machine's free times is minimized. In other words, by minimizing waiting times for the second machine, it is possible to reach the minimum of the objective function. A mathematical method is proposed which utilizes the properties of the normal distributions. Furthermore, this method can be used as a heuristic method for other distributions, as long as the means and variances are available. The performance of the proposed method is explored using some numerical examples.
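
    The paper's analytical method is not reproduced here; as a rough illustration of the objective itself, the expected makespan of a candidate sequence can be estimated by Monte Carlo sampling of the normally distributed processing times (job data hypothetical):

      import numpy as np
      from itertools import permutations

      rng = np.random.default_rng(0)

      # Hypothetical jobs: (mean, std) of processing times on machines 1 and 2.
      jobs = [((4, 1), (6, 2)), ((8, 2), (3, 1)), ((5, 1), (5, 1))]

      def expected_makespan(seq, n_samples=5000):
          """Monte Carlo estimate of E[makespan] for a two-machine flow shop."""
          total = 0.0
          for _ in range(n_samples):
              t1 = t2 = 0.0
              for j in seq:
                  (m1, s1), (m2, s2) = jobs[j]
                  t1 += max(rng.normal(m1, s1), 0.0)               # machine 1 finishes job j
                  t2 = max(t1, t2) + max(rng.normal(m2, s2), 0.0)  # machine 2 finishes job j
              total += t2
          return total / n_samples

      best = min(permutations(range(len(jobs))), key=expected_makespan)
      print("sequence with lowest estimated expected makespan:", best)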

  5. Design Optimization of a Speed Reducer Using Deterministic Techniques

    Directory of Open Access Journals (Sweden)

    Ming-Hua Lin

    2013-01-01

    Full Text Available The optimal design problem of minimizing the total weight of a speed reducer under constraints is a generalized geometric programming problem. Since metaheuristic approaches cannot guarantee to find the global optimum of a generalized geometric programming problem, this paper applies an efficient deterministic approach to globally solve speed reducer design problems. The original problem is converted by variable transformations and piecewise linearization techniques. The reformulated problem is a convex mixed-integer nonlinear programming problem solvable to reach an approximate global solution within an acceptable error. Experimental results from solving a practical speed reducer design problem indicate that this study obtains a better solution compared with other existing methods.

  6. Nonterminals, homomorphisms and codings in different variations of OL-systems. I. Deterministic systems

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Rozenberg, Grzegorz; Salomaa, Arto

    1974-01-01

    The use of nonterminals versus the use of homomorphisms of different kinds in the basic types of deterministic OL-systems is studied. A rather surprising result is that in some cases the use of nonterminals produces a comparatively low generative capacity, whereas in some other cases the use of n...

  7. Deterministic dynamics of plasma focus discharges

    International Nuclear Information System (INIS)

    Gratton, J.; Alabraba, M.A.; Warmate, A.G.; Giudice, G.

    1992-04-01

    The performance (neutron yield, X-ray production, etc.) of plasma focus discharges fluctuates strongly in series performed with fixed experimental conditions. Previous work suggests that these fluctuations are due to a deterministic ''internal'' dynamics involving degrees of freedom not controlled by the operator, possibly related to adsorption and desorption of impurities from the electrodes. According to these dynamics the yield of a discharge depends on the outcome of the previous ones. We study 8 series of discharges in three different facilities, with various electrode materials and operating conditions. More evidence of a deterministic internal dynamics is found. The fluctuation pattern depends on the electrode materials and other characteristics of the experiment. A heuristic mathematical model that describes adsorption and desorption of impurities from the electrodes and their consequences on the yield is presented. The model predicts steady yield or periodic and chaotic fluctuations, depending on parameters related to the experimental conditions. (author). 27 refs, 7 figs, 4 tabs

  8. Dynamic optimization deterministic and stochastic models

    CERN Document Server

    Hinderer, Karl; Stieglitz, Michael

    2016-01-01

    This book explores discrete-time dynamic optimization and provides a detailed introduction to both deterministic and stochastic models. Covering problems with finite and infinite horizon, as well as Markov renewal programs, Bayesian control models and partially observable processes, the book focuses on the precise modelling of applications in a variety of areas, including operations research, computer science, mathematics, statistics, engineering, economics and finance. Dynamic Optimization is a carefully presented textbook which starts with discrete-time deterministic dynamic optimization problems, providing readers with the tools for sequential decision-making, before proceeding to the more complicated stochastic models. The authors present complete and simple proofs and illustrate the main results with numerous examples and exercises (without solutions). With relevant material covered in four appendices, this book is completely self-contained.

  9. Piecewise deterministic processes in biological models

    CERN Document Server

    Rudnicki, Ryszard

    2017-01-01

    This book presents a concise introduction to piecewise deterministic Markov processes (PDMPs), with particular emphasis on their applications to biological models. Further, it presents examples of biological phenomena, such as gene activity and population growth, where different types of PDMPs appear: continuous time Markov chains, deterministic processes with jumps, processes with switching dynamics, and point processes. Subsequent chapters present the necessary tools from the theory of stochastic processes and semigroups of linear operators, as well as theoretical results concerning the long-time behaviour of stochastic semigroups induced by PDMPs and their applications to biological models. As such, the book offers a valuable resource for mathematicians and biologists alike. The first group will find new biological models that lead to interesting and often new mathematical questions, while the second can observe how to include seemingly disparate biological processes into a unified mathematical theory, and...

  10. Advances in stochastic and deterministic global optimization

    CERN Document Server

    Zhigljavsky, Anatoly; Žilinskas, Julius

    2016-01-01

    Current research results in stochastic and deterministic global optimization including single and multiple objectives are explored and presented in this book by leading specialists from various fields. Contributions include applications to multidimensional data visualization, regression, survey calibration, inventory management, timetabling, chemical engineering, energy systems, and competitive facility location. Graduate students, researchers, and scientists in computer science, numerical analysis, optimization, and applied mathematics will be fascinated by the theoretical, computational, and application-oriented aspects of stochastic and deterministic global optimization explored in this book. This volume is dedicated to the 70th birthday of Antanas Žilinskas who is a leading world expert in global optimization. Professor Žilinskas's research has concentrated on studying models for the objective function, the development and implementation of efficient algorithms for global optimization with single and mu...

  11. Deterministic nanoparticle assemblies: from substrate to solution

    International Nuclear Information System (INIS)

    Barcelo, Steven J; Gibson, Gary A; Yamakawa, Mineo; Li, Zhiyong; Kim, Ansoon; Norris, Kate J

    2014-01-01

    The deterministic assembly of metallic nanoparticles is an exciting field with many potential benefits. Many promising techniques have been developed, but challenges remain, particularly for the assembly of larger nanoparticles which often have more interesting plasmonic properties. Here we present a scalable process combining the strengths of top down and bottom up fabrication to generate deterministic 2D assemblies of metallic nanoparticles and demonstrate their stable transfer to solution. Scanning electron and high-resolution transmission electron microscopy studies of these assemblies suggested the formation of nanobridges between touching nanoparticles that hold them together so as to maintain the integrity of the assembly throughout the transfer process. The application of these nanoparticle assemblies as solution-based surface-enhanced Raman scattering (SERS) materials is demonstrated by trapping analyte molecules in the nanoparticle gaps during assembly, yielding uniformly high enhancement factors at all stages of the fabrication process. (paper)

  12. Deterministic properties of mine tremor aftershocks

    CSIR Research Space (South Africa)

    Kgarume, TE

    2010-10-01

    Full Text Available Aftershocks provide insight into earthquake generation and rupture mechanisms (Persh and Houston, 2004). Yang and Ben-Zion (2009) found that aftershock productivity has an inverse relationship with the mean heat flow.

  13. Introducing Synchronisation in Deterministic Network Models

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.

    2006-01-01

    The paper addresses performance analysis for distributed real time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented, leading to the suggestion of suitable network models. The suggested models are intended for incorporation into an existing analysis tool, a.k.a. CyNC, based on the MATLAB/SimuLink framework for graphical system analysis and design.

  14. Deterministic automata for extended regular expressions

    Directory of Open Access Journals (Sweden)

    Syzdykov Mirzakhmet

    2017-12-01

    Full Text Available In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions, such as intersection, subtraction and complement. A method of “overriding” the source NFA (an NFA not defined with subset construction rules) is used. Past work described only the algorithm for the AND-operator (the intersection of regular languages); in this paper the construction for the MINUS-operator (and complement) is shown.
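
    The paper's “overriding” construction is not reproduced here; as a baseline for the AND-operator, the standard product construction over two complete DFAs is sketched below (complementing a complete DFA then just swaps accepting and non-accepting states, and subtraction is intersection with the complement):

      from itertools import product

      def dfa_intersection(dfa1, dfa2):
          """Product construction: a DFA accepting L(dfa1) ∩ L(dfa2).

          Each DFA is (states, alphabet, delta, start, accepting), where
          delta is a total transition map delta[(state, symbol)] -> state.
          """
          s1, alpha, d1, q1, f1 = dfa1
          s2, _, d2, q2, f2 = dfa2
          states = set(product(s1, s2))
          delta = {((a, b), c): (d1[(a, c)], d2[(b, c)])
                   for (a, b) in states for c in alpha}
          accepting = {(a, b) for (a, b) in states if a in f1 and b in f2}
          return states, alpha, delta, (q1, q2), accepting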

  15. A deterministic model of electron transport for electron probe microanalysis

    Science.gov (United States)

    Bünger, J.; Richter, S.; Torrilhon, M.

    2018-01-01

    Within the last decades significant improvements in the spatial resolution of electron probe microanalysis (EPMA) were obtained by instrumental enhancements. In contrast, the quantification procedures essentially remained unchanged. As the classical procedures assume either homogeneity or a multi-layered structure of the material, they limit the spatial resolution of EPMA. The possibilities of improving the spatial resolution through more sophisticated quantification procedures are therefore almost untouched. We investigate a new analytical model (M1-model) for the quantification procedure based on fast and accurate modelling of electron-X-ray-matter interactions in complex materials using a deterministic approach to solve the electron transport equations. We outline the derivation of the model from the Boltzmann equation for electron transport using the method of moments with a minimum entropy closure and present first numerical results for three different test cases (homogeneous, thin film and interface). Taking Monte Carlo as a reference, the results for the three test cases show that the M1-model is able to reproduce the electron dynamics in EPMA applications very well. Compared to classical analytical models like XPP and PAP, the M1-model is more accurate and far more flexible, which indicates the potential of deterministic models of electron transport to further increase the spatial resolution of EPMA.
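
    For orientation, the moment hierarchy and minimum-entropy closure behind such a model take the following generic form in transport theory (reproduced from standard M1 theory as an illustration; the paper's EPMA-specific variant also carries energy dependence not written out here):

      \[
      \partial_t \psi^{(0)} + \nabla_x \cdot \psi^{(1)} = C^{(0)}, \qquad
      \partial_t \psi^{(1)} + \nabla_x \cdot \psi^{(2)} = C^{(1)},
      \]
      closed by the minimum-entropy ansatz
      \[
      \psi^{(2)} = \left( \frac{1-\chi}{2}\,\mathrm{I} + \frac{3\chi-1}{2}\,
      \frac{\psi^{(1)} \otimes \psi^{(1)}}{\lvert\psi^{(1)}\rvert^{2}} \right) \psi^{(0)},
      \qquad
      \chi(f) = \frac{3 + 4f^{2}}{5 + 2\sqrt{4 - 3f^{2}}}, \quad
      f = \frac{\lvert\psi^{(1)}\rvert}{\psi^{(0)}}.
      \]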

  16. Application of AHP for the development of waste management systems that minimize infection risks in developing countries: Case studies Lesotho and South Africa

    CSIR Research Space (South Africa)

    Brent, AC

    2006-09-01

    Full Text Available of HCWM systems, that is to minimize infection of patients and workers, and the public within the system. The tool was applied to two case studies: the sub-Saharan African countries of Lesotho and South Africa. Quantitative weightings from the AHP are used...

  17. Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.

    Science.gov (United States)

    Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O

    2006-03-01

    The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables. This is to generate counts of molecules for chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourage the experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
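
    The MATLAB functions themselves are available at the URL above; as a language-neutral sketch of the same deterministic-versus-stochastic contrast, consider a single degradation reaction X -> 0 with a hypothetical rate constant:

      import numpy as np

      k, x0, t_end = 0.5, 100, 10.0   # hypothetical rate constant, initial count, horizon

      # Deterministic (ODE) view: dx/dt = -k*x, with closed form x(t) = x0*exp(-k*t).
      t_grid = np.linspace(0.0, t_end, 101)
      x_ode = x0 * np.exp(-k * t_grid)

      # Stochastic (CME) view: one Gillespie (SSA) realisation of the same reaction.
      rng = np.random.default_rng(1)
      t, x, times, counts = 0.0, x0, [0.0], [x0]
      while x > 0 and t < t_end:
          a = k * x                      # propensity of the single reaction channel
          t += rng.exponential(1.0 / a)  # exponentially distributed waiting time
          x -= 1                         # one firing removes one molecule
          times.append(t); counts.append(x)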

  18. Forecasting project schedule performance using probabilistic and deterministic models

    Directory of Open Access Journals (Sweden)

    S.A. Abdel Azeem

    2014-04-01

    Full Text Available Earned value management (EVM) was originally developed for cost management and has not widely been used for forecasting project duration. In addition, EVM-based formulas for cost or schedule forecasting are still deterministic and do not provide any information about the range of possible outcomes and the probability of meeting the project objectives. The objective of this paper is to develop three models to forecast the estimated duration at completion. Two of these models are deterministic: the earned value (EV) and earned schedule (ES) models. The third model is probabilistic and is developed based on the Kalman filter algorithm and earned schedule management. Hence, the accuracies of the EV, ES and Kalman Filter Forecasting Model (KFFM) through the different project periods will be assessed and compared with other forecasting methods such as the Critical Path Method (CPM), which makes the time forecast at activity level by revising the actual reporting data for each activity at a certain data date. A case study project is used to validate the results of the three models. Hence, the best model is selected based on the lowest average percentage of error. The results showed that the KFFM developed in this study provides probabilistic prediction bounds of project duration at completion and can be applied through the different project periods with smaller errors than those observed in the EV and ES forecasting models.
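
    A sketch of the two deterministic forecasts (EV- and ES-based) using the standard earned-schedule formulas; the Kalman-filter model is not reproduced, and all figures are hypothetical:

      def earned_schedule(pv, ev):
          """ES = whole periods with cumulative PV <= EV, plus a linear fraction
          of the period in which EV is reached (standard ES interpolation)."""
          prev = 0.0
          for t, cum in enumerate(pv):   # t is a 0-based period index
              if cum >= ev:
                  return t + (ev - prev) / (cum - prev)
              prev = cum
          return float(len(pv))

      pv = [10, 25, 45, 70, 100]   # hypothetical cumulative planned value per period
      at, ev, pd = 3, 40.0, 5      # actual time (periods), earned value, planned duration

      es = earned_schedule(pv, ev)        # about 2.75 periods "earned"
      spi_t = es / at                     # time-based schedule performance index
      edac_es = pd / spi_t                # ES-based duration forecast, about 5.45
      edac_ev = pd / (ev / pv[at - 1])    # EV-based forecast via SPI = EV/PV, about 5.63
      print(round(edac_es, 2), round(edac_ev, 2))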

  19. The Clinical Course of Minimal Change Nephrotic Syndrome With Onset in Adulthood or Late Adolescence: A Case Series.

    Science.gov (United States)

    Maas, Rutger J; Deegens, Jeroen K; Beukhof, Johan R; Reichert, Louis J; Ten Dam, Marc A; Beutler, Jaap J; van den Wall Bake, A Warmold L; Rensma, Pieter L; Konings, Constantijn J; Geerse, Daniel A; Feith, Geert W; Van Kuijk, Willi H; Wetzels, Jack F

    2017-05-01

    Few studies have examined the treatment and outcome of adult-onset minimal change nephrotic syndrome (MCNS). We retrospectively studied 125 patients who had MCNS with onset in either adulthood or late adolescence. Presenting characteristics, duration of initial treatment and response to treatment, relapse patterns, complications, and long-term outcome were studied. Case series. Patients with new-onset nephrotic syndrome 16 years or older and a histologic diagnosis of MCNS in 1985 to 2011 were identified from pathology records of 10 participating centers. Partial and complete remission, treatment resistance, relapse, complications, renal survival. Corticosteroids were given as initial treatment in 105 (84%) patients. After 16 weeks of corticosteroid treatment, 92 (88%) of these patients had reached remission. Median time to remission was 4 (IQR, 2-7) weeks. 7 (6%) patients initially received cyclophosphamide with or without corticosteroids, and all attained remission after a median of 4 (IQR, 3-11) weeks. 13 (10%) patients reached remission without immunosuppressive treatment. One or more relapses were observed in 57 (54%) patients who received initial corticosteroid treatment. Second-line cyclophosphamide resulted in stable remission in 57% of patients with relapsing MCNS. Acute kidney injury was observed in 50 (40%) patients. Recovery of kidney function occurred almost without exception. Arterial or venous thrombosis occurred in 11 (9%) patients. At the last follow-up, 113 (90%) patients were in remission and had preserved kidney function. 3 patients with steroid-resistant MCNS progressed to end-stage renal disease, which was associated with focal segmental glomerulosclerosis lesions on repeat biopsy. Retrospective design, variable treatment protocols. The large majority of patients who had MCNS with onset in adulthood or late adolescence were treated with corticosteroids and reached remission, but many had relapses. Cyclophosphamide resulted in stable remission

  20. Nephrotic syndrome due to minimal change disease secondary to spider bite: clinico-pathological case of a non-described complication of latrodectism.

    Science.gov (United States)

    Méndez, Gonzalo P; Enos, Daniel; Moreira, José Luis; Alvaredo, Fátima; Oddó, David

    2017-04-01

    The patient was an 18-year-old man who developed nephrotic syndrome after a 'wheat spider' bite (Latrodectus mactans). Due to this atypical manifestation of latrodectism, a renal biopsy was performed, showing minimal change disease. The nephrotic syndrome subsided after 1 week without specific treatment. This self-limited evolution suggests that the mechanism of podocyte damage was temporary and potentially mediated by a secondary mechanism of hypersensitivity or a direct effect of the α-latrotoxin. The patient did not show signs of relapse at subsequent checkups. This is the first reported case of nephrotic syndrome due to a minimal change lesion secondary to latrodectism.

  1. Novel management of distal tibial and fibular fractures with Acumed fibular nail and minimally invasive plating osteosynthesis technique: A case report.

    Science.gov (United States)

    Wang, Tie-Jun; Ju, Wei-Na; Qi, Bao-Chang

    2017-03-01

    Anatomical characteristics, such as a subcutaneous position and minimal muscle cover, contribute to the complexity of fractures of the distal third of the tibia and fibula. Severe damage to soft tissue and instability entail a high risk of delayed bone union and wound complications such as nonunion, infection, and necrosis. This case report discusses management in a 54-year-old woman who sustained fractures of the distal third of the left tibia and fibula, with damage to overlying soft tissue (swelling and blisters). Plating is accepted as the first choice for this type of fracture, as it ensures accurate reduction and rigid fixation, but it increases the risk of complications. Closed fracture of the distal third of the left tibia and fibula (AO: 43-A3). After the swelling was alleviated, the patient underwent closed reduction and fixation with an Acumed fibular nail and minimally invasive plating osteosynthesis (MIPO), ensuring a smaller incision and minimal soft-tissue dissection. At the 1-year follow-up, the patient had recovered well and had regained satisfactory function in the treated limb. The Kofoed score of the left ankle was 95. Based on the experience from this case, the operation can be undertaken safely once the swelling has been alleviated. The minimally invasive technique represents the best approach. Considering the merits and good outcome in this case, we recommend the Acumed fibular nail and MIPO technique for treatment of distal tibial and fibular fractures.

  2. A three-arm (laparoscopic, hand-assisted, and robotic) matched-case analysis of intraoperative and postoperative outcomes in minimally invasive colorectal surgery.

    Science.gov (United States)

    Patel, Chirag B; Ragupathi, Madhu; Ramos-Valadez, Diego I; Haas, Eric M

    2011-02-01

    Robotic-assisted laparoscopic surgery is an emerging modality in the field of minimally invasive colorectal surgery. However, there is a dearth of data comparing outcomes with other minimally invasive techniques. We present a 3-arm (conventional, hand-assisted, and robotic) matched-case analysis of intraoperative and short-term outcomes in patients undergoing minimally invasive colorectal procedures. Between August 2008 and October 2009, 70 robotic cases of the rectum and rectosigmoid were performed. Thirty of these were organized into triplets with conventional and hand-assisted cases based on the following 6 matching criteria: 1) surgeon; 2) sex; 3) body mass index; 4) operative procedure; 5) pathology; and 6) history of neoadjuvant therapy in malignant cases. Demographics, intraoperative parameters, and postoperative outcomes were assessed. Pathological outcomes were analyzed in malignant cases. Data were stratified by postoperative diagnosis and operative procedure. There was no significant difference in intraoperative complications, estimated blood loss (126.1 ± 98.5 mL overall), or postoperative morbidity and mortality among the groups. The robotic technique required longer operative time compared with the conventional laparoscopic and hand-assisted techniques. The robotic approach results in short-term outcomes comparable to conventional and hand-assisted laparoscopic approaches for benign and malignant diseases of the rectum and rectosigmoid. With 3-dimensional visualization, additional freedom of motion, and improved ergonomics, this enabling technology may play an important role when performing colorectal procedures involving the pelvic anatomy.

  3. Deterministic and probabilistic approach to safety analysis

    International Nuclear Information System (INIS)

    Heuser, F.W.

    1980-01-01

    The examples discussed in this paper show that reliability analysis methods can be applied fairly well in order to interpret deterministic safety criteria in quantitative terms. For a further improved extension of applied reliability analysis, it has turned out that the influence of operational and control systems and of component protection devices should be considered in detail with the aid of reliability analysis methods. Of course, an extension of probabilistic analysis must be accompanied by further development of the methods and a broadening of the data base. (orig.)

  4. Nine challenges for deterministic epidemic models

    DEFF Research Database (Denmark)

    Roberts, Mick G; Andreasen, Viggo; Lloyd, Alun

    2015-01-01

    Deterministic models have a long history of being applied to the study of infectious disease epidemiology. We highlight and discuss nine challenges in this area. The first two concern the endemic equilibrium and its stability. We indicate the need for models that describe multi-strain infections, infections with time-varying infectivity, and those where superinfection is possible. We then consider the need for advances in spatial epidemic models, and draw attention to the lack of models that explore the relationship between communicable and non-communicable diseases. The final two challenges concern

  5. An effective multirestart deterministic annealing metaheuristic for the fleet size and mix vehicle-routing problem with time windows

    NARCIS (Netherlands)

    Bräysy, Olli; Dullaert, Wout; Hasle, Geir; Mester, David; Gendreau, Michel

    This paper presents a new deterministic annealing metaheuristic for the fleet size and mix vehicle-routing problem with time windows. The objective is to service, at minimal total cost, a set of customers within their time windows by a heterogeneous capacitated vehicle fleet. First, we motivate and

  6. Severe deterministic effects of external exposure and intake of radioactive material: basis for emergency response criteria

    International Nuclear Information System (INIS)

    Kutkov, V; Buglova, E; McKenna, T

    2011-01-01

    Lessons learned from responses to past events have shown that more guidance is needed for the response to radiation emergencies (in this context, a 'radiation emergency' means the same as a 'nuclear or radiological emergency') which could lead to severe deterministic effects. The International Atomic Energy Agency (IAEA) requirements for preparedness and response for a radiation emergency, inter alia, require that arrangements shall be made to prevent, to a practicable extent, severe deterministic effects and to provide the appropriate specialised treatment for these effects. These requirements apply to all exposure pathways, both internal and external, and all reasonable scenarios, to include those resulting from malicious acts (e.g. dirty bombs). This paper briefly describes the approach used to develop the basis for emergency response criteria for protective actions to prevent severe deterministic effects in the case of external exposure and intake of radioactive material.

  7. A mathematical theory for deterministic quantum mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Hooft, Gerard 't [Institute for Theoretical Physics, Utrecht University (Netherlands); Spinoza Institute, Postbox 80.195, 3508 TD Utrecht (Netherlands)]

    2007-05-15

    Classical, i.e. deterministic theories underlying quantum mechanics are considered, and it is shown how an apparent quantum mechanical Hamiltonian can be defined in such theories, being the operator that generates evolution in time. It includes various types of interactions. An explanation must be found for the fact that, in the real world, this Hamiltonian is bounded from below. The mechanism that can produce exactly such a constraint is identified in this paper. It is the fact that not all classical data are registered in the quantum description. Large sets of values of these data are assumed to be indistinguishable, forming equivalence classes. It is argued that this should be attributed to information loss, such as what one might suspect to happen during the formation and annihilation of virtual black holes. The nature of the equivalence classes follows from the positivity of the Hamiltonian. Our world is assumed to consist of a very large number of subsystems that may be regarded as approximately independent, or weakly interacting with one another. As long as two (or more) sectors of our world are treated as being independent, they all must be demanded to be restricted to positive energy states only. What follows from these considerations is a unique definition of energy in the quantum system in terms of the periodicity of the limit cycles of the deterministic model.

  8. Deterministic prediction of surface wind speed variations

    Directory of Open Access Journals (Sweden)

    G. V. Drisya

    2014-11-01

    Full Text Available Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distribution of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h with a normalised RMSE (root mean square error) of less than 0.02 and reasonably accurate up to 3 h with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within practically tolerable margin of errors.
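
    The deterministic forecasting referred to here exploits the chaotic dynamics directly; a minimal sketch assuming a delay-embedding, nearest-neighbour (method-of-analogues) predictor, which is one common choice:

      import numpy as np

      def analogue_forecast(series, m=4, tau=1, k=5, horizon=1):
          """Forecast by averaging the futures of the k nearest past states
          in an m-dimensional delay embedding of the series."""
          x = np.asarray(series, dtype=float)
          last, span = len(x) - 1, (m - 1) * tau
          idx = np.arange(span, last - horizon + 1)          # usable past states
          states = np.stack([x[i - span:i + 1:tau] for i in idx])
          current = x[last - span:last + 1:tau]              # present state
          d = np.linalg.norm(states - current, axis=1)
          nearest = idx[np.argsort(d)[:k]]
          return x[nearest + horizon].mean()

      rng = np.random.default_rng(2)
      toy = np.sin(np.linspace(0, 60, 600)) + 0.05 * rng.standard_normal(600)
      print(analogue_forecast(toy))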

  9. Deterministic global optimization an introduction to the diagonal approach

    CERN Document Server

    Sergeyev, Yaroslav D

    2017-01-01

    This book begins with a concentrated introduction into deterministic global optimization and moves forward to present new original results from the authors who are well known experts in the field. Multiextremal continuous problems that have an unknown structure with Lipschitz objective functions and functions having the first Lipschitz derivatives defined over hyperintervals are examined. A class of algorithms using several Lipschitz constants is introduced which has its origins in the DIRECT (DIviding RECTangles) method. This new class is based on an efficient strategy that is applied for the search domain partitioning. In addition a survey on derivative free methods and methods using the first derivatives is given for both one-dimensional and multi-dimensional cases. Non-smooth and smooth minorants and acceleration techniques that can speed up several classes of global optimization methods with examples of applications and problems arising in numerical testing of global optimization algorithms are discussed...
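
    As a one-dimensional flavour of the Lipschitz machinery such methods build on, the Piyavskii-Shubert saw-tooth minorant can be sketched compactly (illustrative only, not the diagonal algorithms of the book):

      import math

      def piyavskii(f, a, b, L, n_iter=50):
          """Minimize f on [a, b] given a Lipschitz constant L by always
          refining the interval whose saw-tooth minorant is lowest."""
          pts = sorted([(a, f(a)), (b, f(b))])
          for _ in range(n_iter):
              best_lb, best_x, best_i = math.inf, None, None
              for i in range(len(pts) - 1):
                  (x0, f0), (x1, f1) = pts[i], pts[i + 1]
                  xm = 0.5 * (x0 + x1) + (f0 - f1) / (2 * L)   # minorant minimiser
                  lb = 0.5 * (f0 + f1) - 0.5 * L * (x1 - x0)   # its lower-bound value
                  if lb < best_lb:
                      best_lb, best_x, best_i = lb, xm, i
              pts.insert(best_i + 1, (best_x, f(best_x)))      # keeps pts sorted
          return min(pts, key=lambda p: p[1])                  # best sample found

      # Example: |d/dx (sin x + sin 3x)| <= 4, so L = 4 is a valid constant.
      print(piyavskii(lambda x: math.sin(x) + math.sin(3 * x), 0.0, 6.0, L=4.0))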

  10. Mixed deterministic statistical modelling of regional ozone air pollution

    KAUST Repository

    Kalenderski, Stoitchko

    2011-03-17

    We develop a physically motivated statistical model for regional ozone air pollution by separating the ground-level pollutant concentration field into three components, namely: transport, local production and large-scale mean trend mostly dominated by emission rates. The model is novel in the field of environmental spatial statistics in that it is a combined deterministic-statistical model, which gives a new perspective to the modelling of air pollution. The model is presented in a Bayesian hierarchical formalism, and explicitly accounts for advection of pollutants, using the advection equation. We apply the model to a specific case of regional ozone pollution: the Lower Fraser valley of British Columbia, Canada. As a predictive tool, we demonstrate that the model vastly outperforms existing, simpler modelling approaches. Our study highlights the importance of simultaneously considering different aspects of an air pollution problem as well as taking into account the physical bases that govern the processes of interest. © 2011 John Wiley & Sons, Ltd.
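
    The deterministic backbone referred to above is the advection equation; in generic form (the paper's exact parametrisation and source terms are not reproduced here):

      \[
      \frac{\partial c}{\partial t} + \mathbf{u} \cdot \nabla c = S(\mathbf{x}, t),
      \]

    where c is the ground-level concentration field, u the wind (transport) field, and S collects local production and the large-scale, emission-driven trend; the Bayesian hierarchy then layers statistical models over these components.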

  11. Analysis of deterministic cyclic gene regulatory network models with delays

    CERN Document Server

    Ahsen, Mehmet Eren; Niculescu, Silviu-Iulian

    2015-01-01

    This brief examines a deterministic, ODE-based model for gene regulatory networks (GRN) that incorporates nonlinearities and time-delayed feedback. An introductory chapter provides some insights into molecular biology and GRNs. The mathematical tools necessary for studying the GRN model are then reviewed, in particular Hill functions and Schwarzian derivatives. One chapter is devoted to the analysis of GRNs under negative feedback with time delays and a special case of a homogenous GRN is considered. Asymptotic stability analysis of GRNs under positive feedback is then considered in a separate chapter, in which conditions leading to bi-stability are derived. Graduate and advanced undergraduate students and researchers in control engineering, applied mathematics, systems biology and synthetic biology will find this brief to be a clear and concise introduction to the modeling and analysis of GRNs.
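
    For reference, a cyclic GRN with delayed feedback is commonly written with Hill-type nonlinearities; a generic form (an assumption for illustration, not necessarily the brief's exact equations):

      \[
      \dot{x}_i(t) = -\lambda_i\, x_i(t) + h_i\!\left( x_{i-1}(t - \tau_i) \right),
      \qquad i = 1, \dots, n \ (\text{indices taken mod } n),
      \]

    where each h_i is an increasing (activating) or decreasing (repressing) Hill function, e.g. h(x) = β(x/k)^m / (1 + (x/k)^m) or h(x) = β / (1 + (x/k)^m); a cycle with an odd number of repressing links is under negative feedback.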

  12. Deterministic Safety Analysis of Kozloduy Units 3 and 4

    International Nuclear Information System (INIS)

    Ivanova, A.

    2002-01-01

    During the development of the SARs of Kozloduy NPP, regulatory bases, guides and recommendations are used, such as Regulation order No.3 of CUAEPP, Regulation order No.5 of CUAEPP, Guidelines for accident analyses of WWER NPP, Guidance for Accident Analyses of Commercial Nuclear Power Plants, and many others. The list of initiating events is evaluated on the basis of IAEA requirements, generic WWER data and statistical data from the NPP. The final categorisation is carried out according to the highest frequency in the above sources. Within DBA, anticipated operational occurrences (AOO) and postulated accidents are defined. The list of IEs considered in the SAR of KNPP 3 and 4 is presented. In the process of developing the SAR, 11 DBA cases and 9 BDBA cases are investigated. The acceptance criteria are chosen from the above-mentioned references and depend on the categorisation of the event. The main approaches to the deterministic safety analysis are the use of best-estimate codes with conservatively selected initial and boundary conditions for DBA, and best-estimate codes with relaxed conservatism in the selection of the initial and boundary conditions for BDBA. The computer codes RELAP5/Mod 3.2, MELCOR 1.8.3, DYN3D, SPPS and SMART are used for the SAR KNPP evaluation. The results show that the new SARs of KNPP 3 and 4 cover the whole spectrum of IEs defined in the regulatory documents and IAEA guidelines. The deterministic analyses of the IEs are performed using best-estimate codes with conservative sets of initial and boundary conditions. The worst single failure is selected for each individual IE and different scenarios are specified for the different aspects of the analysis. The analyses show a sufficient margin to the fulfilment of the applicable acceptance criteria and reflect all major plant upgrades except the modification of the SG collectors

  13. Minimally invasive plate osteosynthesis of the distal fibula with the locking compression plate: first experience of 20 cases.

    Science.gov (United States)

    Hess, Florian; Sommer, Christoph

    2011-02-01

    The aim of this study was to evaluate the clinical feasibility and the possible complications associated with minimally invasive plate osteosynthesis of the distal fibula. Regional county hospital. All patients with Orthopaedic Trauma Association 42, 43, 44 fractures of the distal tibia requiring plate fixation of the distal fibula were included in this cohort study. A consecutive series of 701 internally fixed fractures of the tibia and ankle yielded 20 fibular fractures treated with this technique. Fractures were treated with the minimally invasive plate osteosynthesis technique using an angular stable screw-plate system for the fibula. Clinical and radiologic outcomes at 24 months. Seventeen fractures healed without complication at an average of 9 weeks. Three aseptic nonunions were recorded: one in a pilon fracture (Orthopaedic Trauma Association 43-C3) and one in a distal lower leg fracture (Orthopaedic Trauma Association 43-A3), both with severe closed soft tissue injury (as a result of a crush mechanism). The third was in an ankle fracture dislocation (OTA 44-C1) with delayed treatment and inadequate reduction of the simple fibula fracture. Although this technique is comparable to minimally invasive plate osteosynthesis in the tibia or femur, it appears to be more difficult because of the small bone size. We therefore reserve this technique for selected complex fractures of the distal fibula with critical soft tissue conditions.

  14. Mechanics from Newton's laws to deterministic chaos

    CERN Document Server

    Scheck, Florian

    2018-01-01

    This book covers all topics in mechanics from elementary Newtonian mechanics, the principles of canonical mechanics and rigid body mechanics to relativistic mechanics and nonlinear dynamics. It was among the first textbooks to include dynamical systems and deterministic chaos in due detail. As compared to the previous editions the present 6th edition is updated and revised with more explanations, additional examples and problems with solutions, together with new sections on applications in science.   Symmetries and invariance principles, the basic geometric aspects of mechanics as well as elements of continuum mechanics also play an important role. The book will enable the reader to develop general principles from which equations of motion follow, to understand the importance of canonical mechanics and of symmetries as a basis for quantum mechanics, and to get practice in using general theoretical concepts and tools that are essential for all branches of physics.   The book contains more than 150 problems ...

  15. Deterministic quantum annealing expectation-maximization algorithm

    Science.gov (United States)

    Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki

    2017-11-01

    Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM heavily depends on initial configurations and fails to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
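
    For context, the classical EM baseline that DQAEM extends can be sketched for a two-component one-dimensional Gaussian mixture (the quantum-annealing modification of the E-step is not reproduced; data and seeds are arbitrary):

      import numpy as np

      def em_gmm_1d(x, n_iter=100, seed=3):
          """Plain EM for a two-component 1-D Gaussian mixture; DQAEM alters
          the E-step with an annealing term, not shown in this baseline."""
          rng = np.random.default_rng(seed)
          mu = rng.choice(x, size=2)            # initial means (EM is sensitive to these)
          sigma = np.array([x.std(), x.std()])
          pi = np.array([0.5, 0.5])
          for _ in range(n_iter):
              # E-step: responsibilities of each component for each point.
              dens = (pi / (sigma * np.sqrt(2 * np.pi)) *
                      np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
              r = dens / dens.sum(axis=1, keepdims=True)
              # M-step: weighted maximum-likelihood parameter updates.
              nk = r.sum(axis=0)
              mu = (r * x[:, None]).sum(axis=0) / nk
              sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
              pi = nk / len(x)
          return pi, mu, sigma

      rng = np.random.default_rng(4)
      x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 200)])
      print(em_gmm_1d(x))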

  16. Extreme events in multivariate deterministic systems

    Science.gov (United States)

    Nicolis, C.; Nicolis, G.

    2012-05-01

    The probabilistic properties of extreme values in multivariate deterministic dynamical systems are analyzed. It is shown that owing to the intertwining of unstable and stable modes the effect of dynamical complexity on the extremes tends to be masked, in the sense that the cumulative probability distribution of typical variables is differentiable and its associated probability density is continuous. Still, there exist combinations of variables probing the dominant unstable modes displaying singular behavior in the form of nondifferentiability of the cumulative distributions of extremes on certain sets of phase space points. Analytic evaluations and extensive numerical simulations are carried out for characteristic examples of Kolmogorov-type systems, for low-dimensional chaotic flows, and for spatially extended systems.

  17. Inferring hierarchical clustering structures by deterministic annealing

    International Nuclear Information System (INIS)

    Hofmann, T.; Buhmann, J.M.

    1996-01-01

    The detection of hierarchical structures is a major topic in unsupervised learning and one of the key questions in data analysis and representation. We propose a novel algorithm for learning decision trees for data clustering and related problems. In contrast to many other methods based on successive tree growing and pruning, we propose an objective function for tree evaluation and derive a non-greedy technique for tree growing. Applying the principles of maximum entropy and minimum cross entropy, a deterministic annealing algorithm is derived in a mean-field approximation. This technique allows us to canonically superimpose tree structures and to fit parameters to averaged or 'fuzzified' trees

  18. Deterministic effects of interventional radiology procedures

    International Nuclear Information System (INIS)

    Shope, Thomas B.

    1997-01-01

    The purpose of this paper is to describe deterministic radiation injuries reported to the Food and Drug Administration (FDA) that resulted from therapeutic, interventional procedures performed under fluoroscopic guidance, and to investigate the procedure- or equipment-related factors that may have contributed to the injury. Reports submitted to the FDA under both mandatory and voluntary reporting requirements which described radiation-induced skin injuries from fluoroscopy were investigated. Serious skin injuries, including moist desquamation and tissue necrosis, have occurred since 1992. These injuries have resulted from a variety of interventional procedures which have required extended periods of fluoroscopy compared to typical diagnostic procedures. Facilities conducting therapeutic interventional procedures need to be aware of the potential for patient radiation injury and take appropriate steps to limit the potential for injury. (author)

  19. Mechanics From Newton's Laws to Deterministic Chaos

    CERN Document Server

    Scheck, Florian

    2010-01-01

    This book covers all topics in mechanics from elementary Newtonian mechanics, the principles of canonical mechanics and rigid body mechanics to relativistic mechanics and nonlinear dynamics. It was among the first textbooks to include dynamical systems and deterministic chaos in due detail. As compared to the previous editions the present fifth edition is updated and revised with more explanations, additional examples and sections on Noether's theorem. Symmetries and invariance principles, the basic geometric aspects of mechanics as well as elements of continuum mechanics also play an important role. The book will enable the reader to develop general principles from which equations of motion follow, to understand the importance of canonical mechanics and of symmetries as a basis for quantum mechanics, and to get practice in using general theoretical concepts and tools that are essential for all branches of physics. The book contains more than 120 problems with complete solutions, as well as some practical exa...

  20. Primality deterministic and primality probabilistic tests

    Directory of Open Access Journals (Sweden)

    Alfredo Rizzi

    2007-10-01

    In this paper the author comments on the importance of prime numbers in mathematics and in cryptography. He recalls the very important research of Euler, Fermat, Legendre, Riemann and other scholars. There are many expressions that generate prime numbers; among them, the Mersenne primes have interesting properties. There are also many conjectures that still have to be proved or refuted. Deterministic primality tests are algorithms that establish whether a number is prime or not. They are not applicable in many practical situations, for instance in public-key cryptography, because the computing time would be too long. Probabilistic primality tests make it possible to test the null hypothesis that the number is prime. The paper comments on the most important statistical tests.
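
    As a concrete illustration of the probabilistic tests discussed, a minimal Miller-Rabin sketch follows. This is not the paper's code; with the fixed bases below the verdict happens to be provably exact for all 64-bit integers, while random bases give the usual probabilistic guarantee.

        def is_probable_prime(n: int) -> bool:
            # Miller-Rabin test of the null hypothesis 'n is prime'. Each base
            # that passes leaves at most a 1/4 chance a composite slips through.
            if n < 2:
                return False
            small_primes = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
            if n in small_primes:
                return True
            if any(n % p == 0 for p in small_primes):
                return False
            d, s = n - 1, 0
            while d % 2 == 0:        # write n - 1 = d * 2^s with d odd
                d //= 2
                s += 1
            for a in small_primes:   # each base is one round of the test
                x = pow(a, d, n)
                if x in (1, n - 1):
                    continue
                for _ in range(s - 1):
                    x = pow(x, 2, n)
                    if x == n - 1:
                        break
                else:
                    return False     # base a witnesses compositeness of n
            return True

        print([p for p in range(2, 60) if is_probable_prime(p)])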

  1. Taxonomic minimalism.

    Science.gov (United States)

    Beattie, A J; Oliver, I

    1994-12-01

    Biological surveys are in increasing demand while taxonomic resources continue to decline. How much formal taxonomy is required to get the job done? The answer depends on the kind of job but it is possible that taxonomic minimalism, especially (1) the use of higher taxonomic ranks, (2) the use of morphospecies rather than species (as identified by Latin binomials), and (3) the involvement of taxonomic specialists only for training and verification, may offer advantages for biodiversity assessment, environmental monitoring and ecological research. As such, formal taxonomy remains central to the process of biological inventory and survey but resources may be allocated more efficiently. For example, if formal identification is not required, resources may be concentrated on replication and increasing sample sizes. Taxonomic minimalism may also facilitate the inclusion in these activities of important but neglected groups, especially among the invertebrates, and perhaps even microorganisms. Copyright © 1994. Published by Elsevier Ltd.

  2. Deterministic-random separation in nonstationary regime

    Science.gov (United States)

    Abboud, D.; Antoni, J.; Sieg-Zieba, S.; Eltabach, M.

    2016-02-01

    In rotating machinery vibration analysis, the synchronous average is perhaps the most widely used technique for extracting periodic components. Periodic components are typically related to gear vibrations, misalignments, unbalances, blade rotations, reciprocating forces, etc. Their separation from other random components is essential in vibration-based diagnosis in order to discriminate useful information from masking noise. However, synchronous averaging theoretically requires the machine to operate under a stationary regime (i.e. the related vibration signals are cyclostationary) and is otherwise jeopardized by the presence of amplitude and phase modulations. A first object of this paper is to investigate the nature of the nonstationarity induced by the response of a linear time-invariant system subjected to a speed-varying excitation. For this purpose, the concept of a cyclo-non-stationary signal is introduced, which extends the class of cyclostationary signals to speed-varying regimes. Next, a "generalized synchronous average" (GSA) is designed to extract the deterministic part of a cyclo-non-stationary vibration signal, i.e. the analog of the periodic part of a cyclostationary signal. Two estimators of the GSA are proposed. The first one returns the synchronous average of the signal at predefined discrete operating speeds. A brief statistical study of it is performed, aiming to provide the user with confidence intervals that reflect the "quality" of the estimator according to the SNR and the estimated speed. The second estimator returns a smoothed version of the former by enforcing continuity over the speed axis. It helps to reconstruct the deterministic component by tracking a specific trajectory dictated by the speed profile (assumed to be known a priori). The proposed method is validated first on synthetic signals and then on actual industrial signals. The usefulness of the approach is demonstrated on envelope-based diagnosis of bearings in variable
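
    For the stationary case that the paper generalizes, the classical synchronous average is a one-liner: fold the signal into revolutions and average. A minimal constant-speed sketch, with a toy signal invented for illustration:

        import numpy as np

        def synchronous_average(signal, samples_per_rev):
            # Fold the signal into whole revolutions and average them: periodic
            # components add coherently, random ones decay as 1/sqrt(n_rev).
            n_rev = len(signal) // samples_per_rev
            revs = signal[:n_rev * samples_per_rev].reshape(n_rev, samples_per_rev)
            return revs.mean(axis=0)

        spr = 200                                   # samples per revolution
        n = np.arange(100 * spr)                    # 100 revolutions
        x = np.sin(2 * np.pi * 5 * n / spr) + np.random.randn(n.size)
        avg = synchronous_average(x, spr)           # tone kept, noise ~10x weaker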

  3. Masked Visual Analysis: Minimizing Type I Error in Visually Guided Single-Case Design for Communication Disorders

    Science.gov (United States)

    Byun, Tara McAllister; Hitchcock, Elaine R.; Ferron, John

    2017-01-01

    Purpose: Single-case experimental designs are widely used to study interventions for communication disorders. Traditionally, single-case experiments follow a response-guided approach, where design decisions during the study are based on participants' observed patterns of behavior. However, this approach has been criticized for its high rate of…

  4. A critical evaluation of deterministic methods in size optimisation of reliable and cost effective standalone hybrid renewable energy systems

    International Nuclear Information System (INIS)

    Maheri, Alireza

    2014-01-01

    Reliability of a hybrid renewable energy system (HRES) strongly depends on various uncertainties affecting the amount of power produced by the system. In the design of systems subject to uncertainties, both deterministic and nondeterministic design approaches can be adopted. In a deterministic design approach, the designer considers the presence of uncertainties and incorporates them indirectly into the design by applying safety factors. It is assumed that, by employing suitable safety factors and considering worst-case scenarios, reliable systems can be designed. In effect, the multi-objective optimisation problem with the two objectives of reliability and cost is reduced to a single-objective optimisation problem with the objective of cost only. In this paper the competence of deterministic design methods in size optimisation of reliable standalone wind–PV–battery, wind–PV–diesel and wind–PV–battery–diesel configurations is examined. For each configuration, first, using different values of safety factors, the optimal size of the system components which minimises the system cost is found deterministically. Then, for each case, using a Monte Carlo simulation, the effect of the safety factors on reliability and cost is investigated. In performing the reliability analysis, several reliability measures are considered, namely, unmet load, blackout durations (total, maximum and average) and mean time between failures. It is shown that traditional methods of accounting for uncertainties in deterministic designs, such as designing for an autonomy period and employing safety factors, have either little or unpredictable impact on the actual reliability of the designed wind–PV–battery configuration. In the case of the wind–PV–diesel and wind–PV–battery–diesel configurations it is shown that, while using a high-enough margin of safety in sizing the diesel generator leads to reliable systems, the optimum value for this margin of safety leading to a
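
    The type of check performed here can be sketched in a few lines: size a component deterministically with a safety factor, then estimate the resulting unmet-load probability by Monte Carlo. All distributions and numbers below are invented for illustration and are not the paper's data.

        import numpy as np

        rng = np.random.default_rng(1)

        def unmet_load_fraction(pv_area, n_days=100_000):
            # Monte Carlo estimate of the fraction of days on which PV output
            # fails to cover the load, given uncertain irradiance and load.
            irradiance = rng.normal(5.0, 1.5, n_days).clip(min=0)   # kWh/m^2/day
            load = rng.normal(10.0, 2.0, n_days).clip(min=0)        # kWh/day
            supply = 0.15 * pv_area * irradiance                    # 15% efficient panels
            return np.mean(supply < load)

        # Deterministic sizing: cover the mean load from mean irradiance,
        # inflated by a safety factor, then check what reliability it buys.
        for sf in (1.0, 1.5, 2.0):
            area = sf * 10.0 / (0.15 * 5.0)
            print(f"safety factor {sf}: unmet-load probability "
                  f"{unmet_load_fraction(area):.3f}")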

  5. Region-specific deterministic and probabilistic seismic hazard ...

    Indian Academy of Sciences (India)

    Region-specific deterministic and probabilistic seismic hazard analysis of Kanpur city ... A seismic hazard map of Kanpur city has been developed considering the region-specific seismotectonic parameters within a 500-km radius by deterministic and probabilistic approaches. ...

  6. Deterministic Chaos in the X-ray Sources

    Indian Academy of Sciences (India)

    2016-01-27

    ... when a resonant behaviour takes place, quasi-periodic oscillations (QPOs) appear. If the global structure of the flow and its non-linear hydrodynamics affect the fluctuations, the variability is chaotic in the sense of deterministic chaos. Our aim is to solve the problem of stochastic versus deterministic ...

  7. Safety Verification of Piecewise-Deterministic Markov Processes

    DEFF Research Database (Denmark)

    Wisniewski, Rafael; Sloth, Christoffer; Bujorianu, Manuela

    2016-01-01

    We consider the safety problem of piecewise-deterministic Markov processes (PDMP). These are systems that have deterministic dynamics and stochastic jumps, where both the time and the destination of the jumps are stochastic. Specifically, we solve a p-safety problem, where we identify the set...

  8. D2-Tree: A New Overlay with Deterministic Bounds

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Sioutas, Spyros; Tsichlas, Kostas

    2010-01-01

    We present a new overlay, called the Deterministic Decentralized tree (D 2-tree). The D 2-tree compares favourably to other overlays for the following reasons: (a) it provides matching and better complexities, which are deterministic for the supported operations; (b) the management of nodes (peers...

  9. Atomic routing in a deterministic queuing model

    Directory of Open Access Journals (Sweden)

    T.L. Werth

    2014-03-01

    We also consider the makespan objective (arrival time of the last user) and show that optimal solutions and Nash equilibria in these games, where every user selfishly tries to minimize her travel time, can be found efficiently.

  10. The Integration of Production-Distribution on Newspapers Supply Chain for Cost Minimization using Analytic Models: Case Study

    Science.gov (United States)

    Febriana Aqidawati, Era; Sutopo, Wahyudi; Hisjam, Muh.

    2018-03-01

    Newspapers are products with special characteristics: they are perishable, have a short window between production and distribution, carry zero inventory, and lose sales value as time passes. Generally, the production-distribution problem in the newspaper supply chain is the integration of production planning and distribution to minimize the total cost. The approach used in this article to solve the problem is an analytical model. Several parameters and constraints have been considered in the calculation of the total cost of the integrated production and distribution of newspapers over the determined time horizon. This model can be used by production and marketing managers as decision support in determining the optimal quantities of production and distribution that yield minimum cost, so that the company's competitiveness can be increased.
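
    A toy version of such an analytic model is a transportation-style linear program: choose shipments from print sites to sales areas that satisfy demand within capacity at minimum print-plus-delivery cost. The data and structure below are invented for illustration, not taken from the case study.

        import numpy as np
        from scipy.optimize import linprog

        # Two print sites, three sales areas, single edition.
        capacity = np.array([500, 400])           # copies each site can print
        demand = np.array([400, 250, 150])        # copies each area must receive
        cost = np.array([[0.22, 0.25, 0.30],      # unit print+delivery cost,
                         [0.28, 0.21, 0.24]])     # site i -> area j

        # Decision variables x[i, j], flattened row-major.
        c = cost.ravel()
        A_ub = np.kron(np.eye(2), np.ones(3))     # sum_j x[i, j] <= capacity[i]
        A_eq = np.kron(np.ones(2), np.eye(3))     # sum_i x[i, j] == demand[j]
        res = linprog(c, A_ub=A_ub, b_ub=capacity,
                      A_eq=A_eq, b_eq=demand,
                      bounds=[(0, None)] * 6, method="highs")
        print(res.x.reshape(2, 3), res.fun)       # optimal shipments, total cost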

  11. Immediate implant placement following minimally invasive extraction: A case report with a 6-year follow-up

    Directory of Open Access Journals (Sweden)

    Po-Sung Fu

    2011-08-01

    Single tooth replacement with a dental implant has become an increasingly favored treatment option in the anterior maxilla; however, bone resorption following maxillary anterior tooth extraction is very common and often compromises gingival tissue for the implant restoration. Achieving predictable peri-implant esthetics requires a proper understanding and preservation of the osseous and gingival tissue surrounding the failing tooth. Therefore, the key to maintaining the interproximal papillae is to preserve the osseous support with minimally invasive extraction. An immediate implant insertion after tooth extraction may maintain the crest bone and the interdental papillae, thus achieving peri-implant esthetics. This article describes the detailed treatment planning and meticulous techniques in immediate implant placement that reduce treatment time and maintain functional as well as esthetic results through a 6-year follow-up.

  12. MIMO capacity for deterministic channel models: sublinear growth

    DEFF Research Database (Denmark)

    Bentosela, Francois; Cornean, Horia; Marchetti, Nicola

    2013-01-01

    This is the second paper by the authors in a series concerned with the development of a deterministic model for the transfer matrix of a MIMO system. In our previous paper, we started from the Maxwell equations and described the generic structure of such a deterministic transfer matrix. In the current paper, we apply those results in order to study the (Shannon-Foschini) capacity behavior of a MIMO system as a function of the deterministic spread function of the environment and the number of transmitting and receiving antennas. The antennas are assumed to fill in a given fixed volume. Under...

  13. Masked Visual Analysis: Minimizing Type I Error in Visually Guided Single-Case Design for Communication Disorders.

    Science.gov (United States)

    Byun, Tara McAllister; Hitchcock, Elaine R; Ferron, John

    2017-06-10

    Single-case experimental designs are widely used to study interventions for communication disorders. Traditionally, single-case experiments follow a response-guided approach, where design decisions during the study are based on participants' observed patterns of behavior. However, this approach has been criticized for its high rate of Type I error. In masked visual analysis (MVA), response-guided decisions are made by a researcher who is blinded to participants' identities and treatment assignments. MVA also makes it possible to conduct a hypothesis test assessing the significance of treatment effects. This tutorial describes the principles of MVA, including both how experiments can be set up and how results can be used for hypothesis testing. We then report a case study showing how MVA was deployed in a multiple-baseline across-subjects study investigating treatment for residual errors affecting rhotics. Strengths and weaknesses of MVA are discussed. Given their important role in the evidence base that informs clinical decision making, it is critical for single-case experimental studies to be conducted in a way that allows researchers to draw valid inferences. As a method that can increase the rigor of single-case studies while preserving the benefits of a response-guided approach, MVA warrants expanded attention from researchers in communication disorders.

  14. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case
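
    The essence of DUA (propagating parameter distributions through local derivatives rather than by brute-force sampling) can be sketched with first-order, delta-method propagation. The flow function below is a generic stand-in, not the paper's borehole model, and the derivatives come from finite differences where GRESS/ADGEN would supply them analytically:

        import numpy as np

        def flow(p):
            # Toy stand-in for a borehole-style flow model.
            k, dh, r, rw = p
            return 2 * np.pi * k * dh / np.log(r / rw)

        mean = np.array([1e-3, 100.0, 500.0, 0.1])   # parameter means
        std = np.array([2e-4, 10.0, 50.0, 0.01])     # parameter std deviations

        # Sensitivities by central finite differences (illustrative only).
        grad = np.empty(4)
        for i in range(4):
            e = np.zeros(4)
            e[i] = 1e-6 * max(abs(mean[i]), 1.0)
            grad[i] = (flow(mean + e) - flow(mean - e)) / (2 * e[i])

        # First-order variance propagation, independent parameters assumed.
        var = np.sum((grad * std) ** 2)
        print(f"flow = {flow(mean):.3f} +/- {np.sqrt(var):.3f}")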

  15. A Deterministic Approach to Earthquake Prediction

    Directory of Open Access Journals (Sweden)

    Vittorio Sgrigna

    2012-01-01

    The paper aims at giving suggestions for a deterministic approach to investigate possible earthquake prediction and warning. A fundamental contribution can come from observations and physical modeling of earthquake precursors, aiming at seeing the earthquake phenomenon in perspective within the framework of a unified theory able to explain the causes of its genesis, and the dynamics, rheology, and microphysics of its preparation, occurrence, postseismic relaxation, and interseismic phases. Studies based on combined ground and space observations of earthquake precursors are essential to address the issue. Unfortunately, what is lacking up to now is the demonstration of a causal relationship (with explained physical processes) and the search for a correlation between data gathered simultaneously and continuously by space observations and ground-based measurements. In doing this, modern and/or new methods and technologies have to be adopted to try to solve the problem. Coordinated space- and ground-based observations imply available test sites on the Earth's surface to correlate ground data, collected by appropriate networks of instruments, with space data detected on board Low-Earth-Orbit (LEO) satellites. Moreover, a strong new theoretical scientific effort is necessary to try to understand the physics of the earthquake.

  16. Influenza SIRS with Minimal Pneumonitis.

    Science.gov (United States)

    Erramilli, Shruti; Mannam, Praveen; Manthous, Constantine A

    2016-01-01

    Although systemic inflammatory response syndrome (SIRS) is a known complication of severe influenza pneumonia, it has been reported very rarely in patients with minimal parenchymal lung disease. We here report a case of severe SIRS, anasarca, and marked vascular phenomena with minimal or no pneumonitis. This case highlights that viruses, including influenza, may cause vascular dysregulation causing SIRS, even without substantial visceral organ involvement.

  17. Influenza SIRS with Minimal Pneumonitis

    OpenAIRE

    Erramilli, Shruti; Mannam, Praveen; Manthous, Constantine A.

    2016-01-01

    Although systemic inflammatory response syndrome (SIRS) is a known complication of severe influenza pneumonia, it has been reported very rarely in patients with minimal parenchymal lung disease. We here report a case of severe SIRS, anasarca, and marked vascular phenomena with minimal or no pneumonitis. This case highlights that viruses, including influenza, may cause vascular dysregulation causing SIRS, even without substantial visceral organ involvement.

  18. Minimally invasive endoscopic treatment of necrotizing pancreatitis: A case report with images and review of the literature

    Directory of Open Access Journals (Sweden)

    Cassia Lemos Moura

    Necrotizing pancreatitis with fluid collections can occur as a complication of acute pancreatitis. The management of these patients depends on the severity and involves multiple treatment modalities, such as intensive care and surgical intervention. In this article, we present a severe case of walled-off pancreatic necrosis that was managed by endoscopic drainage, with an excellent clinical outcome.

  19. Rare earth elements minimal harvest year variation facilitates robust geographical origin discrimination: The case of PDO "Fava Santorinis".

    Science.gov (United States)

    Drivelos, Spiros A; Danezis, Georgios P; Haroutounian, Serkos A; Georgiou, Constantinos A

    2016-12-15

    This study examines the trace and rare earth element (REE) fingerprint variations of PDO (Protected Designation of Origin) "Fava Santorinis" over three consecutive harvesting years (2011-2013). Classification of samples into harvesting years was studied by performing discriminant analysis (DA), k-nearest neighbours (k-NN), partial least squares (PLS) analysis and probabilistic neural networks (PNN) on rare earth elements and trace metals determined by ICP-MS. DA performed better than k-NN, producing 100% discrimination using trace elements and 79% using REEs. PLS was found to be superior to PNN, achieving 99% and 90% classification for trace elements and REEs, respectively, while PNN achieved 96% and 71% classification for trace elements and REEs, respectively. The information obtained using REEs did not enhance classification, indicating that REEs vary minimally per harvesting year, providing robust geographical origin discrimination. The results show that seasonal patterns can occur in the elemental composition of "Fava Santorinis", probably reflecting seasonality of climate. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. $H^0 \rightarrow Z^0 \gamma$ channel in ATLAS. A study of the Standard Model and Minimal Supersymmetric SM case

    CERN Document Server

    Kiourkos, S

    1999-01-01

    One of the potentially accessible decay modes of the Higgs boson in the mass region $100 < m_H < 180$ GeV is the $H^0 \rightarrow Z^0 \gamma$ channel. The work presented in this note examines the Standard Model and Minimal Supersymmetric Standard Model predictions for the observability of this channel using particle level simulation as well as the ATLAS fast simulation (ATLFAST). It compares present estimates for the signal observability with previously reported ones in \cite{unal}, specifying the changes arising from the assumed energy of the colliding protons and the improvements in the treatment of theoretical predictions. With the present estimates, the expected significance for the SM Higgs does not exceed, in terms of $S/\sqrt{B}$, 1.5 $\sigma$ (including $Z^0 \rightarrow e^+ e^-$ and $Z^0 \rightarrow \mu^+ \mu^-$) for an integrated luminosity of $10^5$ pb$^{-1}$, therefore not favouring this channel for SM Higgs searches. Comparable discovery potential is expected at most for the MSSM $...

  1. Deterministic direct aperture optimization using multiphase piecewise constant segmentation.

    Science.gov (United States)

    Nguyen, Dan; O'Connor, Daniel; Ruan, Dan; Sheng, Ke

    2017-11-01

    DAO-MS achieved essentially the same OAR doses as the DAO-SA plans for the GBM case. The average differences in OAR Dmax and Dmean between the two plans were within 0.05% of the plan prescription dose. The lung case showed slightly improved critical-structure sparing with the DAO-MS approach, where the average OAR Dmax and Dmean were reduced by 3.67% and 1.08% of the prescription dose, respectively. The DAO-MS plan substantially improved OAR dose sparing for the H&N patient, where the average OAR Dmax and Dmean were reduced by over 10% of the prescription dose. The DAO-MS and DAO-SA plans were comparable for the GBM and LNG PTV coverage, while the DAO-MS plan substantially improved the H&N PTV coverage, increasing D99 by 6.98% of the prescription dose. For the GBM and LNG patients, the DAO-MS and DAO-SA plans had comparable high-dose spillage but slightly worse conformity with the DAO-MS approach. For the H&N plan, DAO-MS was considerably superior to DAO-SA in high-dose spillage and conformity. The deterministic approach solves the DAO problem substantially faster than the simulated annealing approach, with a 9.5- to 40-fold decrease in total solve time depending on the patient case. A novel deterministic direct aperture optimization formulation was developed and evaluated. It combines fluence map optimization and multiphase piecewise constant Mumford-Shah segmentation into a unified framework, and the resulting optimization problem can be solved efficiently. Compared to the widely and commercially used simulated annealing DAO approach, it showed comparable dosimetric behavior for simple plans, and substantially improved OAR sparing, PTV coverage, PTV homogeneity, high-dose spillage, and conformity for the more complex head and neck plan. © 2017 American Association of Physicists in Medicine.

  2. Deterministic influences exceed dispersal effects on hydrologically-connected microbiomes: Deterministic assembly of hyporheic microbiomes

    Energy Technology Data Exchange (ETDEWEB)

    Graham, Emily B. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Crump, Alex R. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Resch, Charles T. [Geochemistry Department, Pacific Northwest National Laboratory, Richland WA USA; Fansler, Sarah [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Arntzen, Evan [Environmental Compliance and Emergency Preparation, Pacific Northwest National Laboratory, Richland WA USA; Kennedy, David W. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Fredrickson, Jim K. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA; Stegen, James C. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland WA USA

    2017-03-28

    Subsurface zones of groundwater and surface water mixing (hyporheic zones) are regions of enhanced rates of biogeochemical cycling, yet ecological processes governing hyporheic microbiome composition and function through space and time remain unknown. We sampled attached and planktonic microbiomes in the Columbia River hyporheic zone across seasonal hydrologic change, and employed statistical null models to infer mechanisms generating temporal changes in microbiomes within three hydrologically-connected, physicochemically-distinct geographic zones (inland, nearshore, river). We reveal that microbiomes remain dissimilar through time across all zones and habitat types (attached vs. planktonic) and that deterministic assembly processes regulate microbiome composition in all data subsets. The consistent presence of heterotrophic taxa and members of the Planctomycetes-Verrucomicrobia-Chlamydiae (PVC) superphylum nonetheless suggests common selective pressures for physiologies represented in these groups. Further, co-occurrence networks were used to provide insight into taxa most affected by deterministic assembly processes. We identified network clusters to represent groups of organisms that correlated with seasonal and physicochemical change. Extended network analyses identified keystone taxa within each cluster that we propose are central in microbiome composition and function. Finally, the abundance of one network cluster of nearshore organisms exhibited a seasonal shift from heterotrophic to autotrophic metabolisms and correlated with microbial metabolism, possibly indicating an ecological role for these organisms as foundational species in driving biogeochemical reactions within the hyporheic zone. Taken together, our research demonstrates a predominant role for deterministic assembly across highly-connected environments and provides insight into niche dynamics associated with seasonal changes in hyporheic microbiome composition and metabolism.

  3. DETERMINISTIC EVALUATION OF DELAYED HYDRIDE CRACKING BEHAVIORS IN PHWR PRESSURE TUBES

    Directory of Open Access Journals (Sweden)

    YOUNG-JIN OH

    2013-04-01

    Pressure tubes made of Zr-2.5 wt% Nb alloy are important components of the reactor coolant pressure boundary of a pressurized heavy water reactor, in which unanticipated through-wall cracks and rupture may occur due to delayed hydride cracking (DHC). The Canadian Standards Association has provided deterministic and probabilistic structural integrity evaluation procedures to protect pressure tubes against DHC. However, intuitive understanding and subsequent assessment of flaw behaviors are still insufficient due to the complex degradation mechanisms and diverse influential parameters of DHC compared with those of stress corrosion cracking and fatigue crack growth phenomena. In the present study, a deterministic flaw assessment program was developed and applied to systematic integrity assessment of the pressure tubes. Based on examination results dealing with the effects of flaw shapes, pressure tube dimensional changes, hydrogen concentrations of pressure tubes and plant operation scenarios, a simple and rough method for effective cooldown operation is proposed to minimize DHC risks. The developed deterministic assessment program for pressure tubes can be used to derive further technical bases for probabilistic damage frequency assessment.

  4. A deterministic algorithm for fitting a step function to a weighted point-set

    KAUST Repository

    Fournier, Hervé

    2013-02-01

    Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(n log n)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance to the input points. It matches the expected time bound of the best known randomized algorithm for this problem. Our approach relies on Cole's improved parametric searching technique. As a direct application, our result yields the first O(n log n)-time algorithm for computing a k-center of a set of n weighted points on the real line. © 2012 Elsevier B.V.
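
    The heart of such algorithms is a fast decision procedure ("can k steps achieve maximum distance at most eps?") wrapped in a search over the candidate values of eps. A simplified unweighted sketch follows; the paper's algorithm handles weights and replaces the plain binary search below with Cole's parametric search to reach O(n log n):

        def steps_needed(ys, eps):
            # Greedy decision procedure: scan left to right, extend the current
            # step while one value fits all its points, i.e. (max - min)/2 <= eps.
            count, lo, hi = 1, ys[0], ys[0]
            for y in ys[1:]:
                lo, hi = min(lo, y), max(hi, y)
                if (hi - lo) / 2 > eps:
                    count += 1
                    lo = hi = y
            return count

        def fit_step_function(ys, k):
            # Smallest eps such that k steps suffice. The optimum always has
            # the form (ys[i] - ys[j]) / 2, so search those candidates.
            cand = sorted({(a - b) / 2 for a in ys for b in ys if a >= b})
            lo, hi = 0, len(cand) - 1
            while lo < hi:
                mid = (lo + hi) // 2
                if steps_needed(ys, cand[mid]) <= k:
                    hi = mid
                else:
                    lo = mid + 1
            return cand[lo]

        print(fit_step_function([1, 2, 9, 10, 11, 3, 4], k=3))   # -> 1.0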

  5. Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes

    DEFF Research Database (Denmark)

    Starke, Jens; Reichert, Christian; Eiswirth, Markus

    2007-01-01

    Three levels of modeling, microscopic, mesoscopic and macroscopic, are discussed for the CO oxidation on low-index platinum single crystal surfaces. The introduced models on the microscopic and mesoscopic level are stochastic while the model on the macroscopic level is deterministic. It can […], such that in contrast to the microscopic model the spatial resolution is reduced. The derivation of deterministic limit equations is in correspondence with the successful description of experiments under low-pressure conditions by deterministic reaction-diffusion equations while for intermediate pressures phenomena

  6. Surface plasmon field enhancements in deterministic aperiodic structures.

    Science.gov (United States)

    Shugayev, Roman

    2010-11-22

    In this paper we analyze optical properties and plasmonic field enhancements in large aperiodic nanostructures. We introduce an extension of the Generalized Ohm's Law approach to estimate the electromagnetic properties of Fibonacci, Rudin-Shapiro, cluster-cluster aggregate and random deterministic clusters. Our results suggest that deterministic aperiodic structures produce field enhancements comparable to random morphologies while offering better understanding of field localizations and improved substrate design controllability. Generalized Ohm's Law results for deterministic aperiodic structures are in good agreement with simulations obtained using the discrete dipole method.

  7. Deterministic mode representation of random stationary media for scattering problems.

    Science.gov (United States)

    Li, Jia; Korotkova, Olga

    2017-06-01

    Deterministic mode representation (DMR) is introduced for a three-dimensional random medium with a statistically stationary refractive index distribution. The DMR allows for the design and fine-tuning of novel random media by adjusting the weights of individual deterministic modes. To illustrate its usefulness, we have applied the decomposition to the problem of weak light scattering from a Gaussian Schell-model medium. In particular, we have shown how individual deterministic modes of the medium contribute to the scattered far-field spectral density distribution.

  8. Equivalence relations between deterministic and quantum mechanical systems

    International Nuclear Information System (INIS)

    Hooft, G.

    1988-01-01

    Several quantum mechanical models are shown to be equivalent to certain deterministic systems because a basis can be found in terms of which the wave function does not spread. This suggests that apparently indeterministic behavior typical for a quantum mechanical world can be the result of locally deterministic laws of physics. We show how certain deterministic systems allow the construction of a Hilbert space and a Hamiltonian so that at long distance scales they may appear to behave as quantum field theories, including interactions but as yet no mass term. These observations are suggested to be useful for building theories at the Planck scale

  9. Design and validation of a model to offer environmental consulting services to minimize oil pollution at gas stations: case study: a gas station in Costa Rica

    International Nuclear Information System (INIS)

    Calderon Hernandez, Teresita

    2016-01-01

    An environmental consulting service was designed and validated to minimize hydrocarbon contamination at gas stations, to be used by Migliore S.A. to strengthen and expand the services it offers in this market niche. A working matrix was synthesized using tools such as SWOT analysis, Quality Function Deployment (QFD) and international analysis. With the standardized protocols it will be possible to expand the offer of consulting services smoothly. The final validation of the model verified its functionality through the generation of solid evaluation criteria, which provided a thorough knowledge of the case-study gas station and allowed a timely solution to be offered for its particular situation in a simple and harmonious way. Using the designed model, the environmental consulting company Migliore S.A. can expect better commercial development.

  10. Ultrasonographic diagnosis and minimally invasive treatment of a patent urachus associated with a patent omphalomesenteric duct in a newborn: A case report.

    Science.gov (United States)

    Bertozzi, Mirko; Recchia, Nicola; Di Cara, Giuseppe; Riccioni, Sara; Rinaldi, Victoria Elisa; Esposito, Susanna; Appignani, Antonino

    2017-07-01

    Patent urachus (PU) is due to an incomplete obliteration of the urachus, whereas patent omphalomesenteric duct (POMD) is due to an incomplete obliteration of the vitelline duct. These anomalies are very rarely associated with one another. We describe the case of a newborn with a PU associated with a POMD, diagnosed by abdominal ultrasound (US) and laparoscopy and managed with a minimally invasive excision. A 28-day-old male neonate was referred to our hospital to investigate a delay in umbilical healing, with blood-mucinous material spillage for 3 weeks prior to the referral. The baby had no symptoms and was in good general health. After a thorough cleaning of the umbilical stump, a clear granuloma with a suspected fistula was evident under the seat of the ligature of the stump. An abdominal US examination revealed a full communication starting below the umbilical stump and developing along the anterior abdominal wall to connect with the bladder dome. The US also revealed a tubular formation containing air, compatible with a POMD, in the deepest portion of the same umbilical stump. Considering these findings, the rare diagnosis of a PU associated with a POMD was suspected. The child was then hospitalized for an elective laparoscopy that confirmed the US picture, and a minimally invasive excision was performed. The postoperative course was favorable and uneventful. Our case underlines the importance of evaluating all persisting umbilical lesions without delay when conventional pharmacological therapies fail. US as a first approach is valuable and should be supported by laparoscopy to confirm the diagnosis; minimally invasive excision of the remnants appears to be an effective therapeutic approach.

  11. A case-study of landfill minimization and material recovery via waste co-gasification in a new waste management scheme.

    Science.gov (United States)

    Tanigaki, Nobuhiro; Ishida, Yoshihiro; Osada, Morihiro

    2015-03-01

    This study evaluates municipal solid waste co-gasification technology and a new solid waste management scheme which can minimize final landfill amounts and maximize the material recycled from waste. This new scheme is considered for a region where bottom ash and incombustibles are landfilled or not allowed to be recycled due to their toxic heavy metal concentrations. Waste is processed together with incombustible residues and incineration bottom ash discharged from existing conventional incinerators, using a gasification and melting technology (the Direct Melting System). The inert materials contained in municipal solid waste, incombustibles and bottom ash are recycled as slag and metal in this process, alongside energy recovery. Based on this new waste management scheme with a co-gasification system, a case study of municipal solid waste co-gasification was evaluated and compared with other technical solutions, such as conventional incineration and incineration with an ash melting facility, under certain boundary conditions. From a technical point of view, co-gasification produced high-quality slag with few harmful heavy metals, which was recycled completely without requiring any further post-treatment such as aging. As a consequence, the co-gasification system had an economic advantage over the other systems because of its material recovery and minimization of the final landfill amount. Sensitivity analyses of landfill cost, power price and inert materials in waste were also conducted. The higher the landfill cost, the greater the advantage the co-gasification system has. Co-gasification was beneficial at landfill costs of 80 Euro per ton or more. Higher power prices led to lower operation costs in each case. The inert content of the processed waste had a significant influence on the operating cost. These results indicate that co-gasification of bottom ash and incombustibles with municipal solid waste contributes to minimizing the final landfill amount and has

  12. Deterministic Function Computation with Chemical Reaction Networks*

    Science.gov (United States)

    Chen, Ho-Lin; Doty, David; Soloveichik, David

    2013-01-01

    Chemical reaction networks (CRNs) formally model chemistry in a well-mixed solution. CRNs are widely used to describe information processing occurring in natural cellular regulatory networks, and with upcoming advances in synthetic biology, CRNs are a promising language for the design of artificial molecular control circuitry. Nonetheless, despite the widespread use of CRNs in the natural sciences, the range of computational behaviors exhibited by CRNs is not well understood. CRNs have been shown to be efficiently Turing-universal (i.e., able to simulate arbitrary algorithms) when allowing for a small probability of error. CRNs that are guaranteed to converge on a correct answer, on the other hand, have been shown to decide only the semilinear predicates (a multi-dimensional generalization of “eventually periodic” sets). We introduce the notion of function, rather than predicate, computation by representing the output of a function f : ℕ^k → ℕ^l by a count of some molecular species, i.e., if the CRN starts with x_1, …, x_k molecules of some “input” species X_1, …, X_k, the CRN is guaranteed to converge to having f(x_1, …, x_k) molecules of the “output” species Y_1, …, Y_l. We show that a function f : ℕ^k → ℕ^l is deterministically computed by a CRN if and only if its graph {(x, y) ∈ ℕ^k × ℕ^l ∣ f(x) = y} is a semilinear set. Finally, we show that each semilinear function f (a function whose graph is a semilinear set) can be computed by a CRN on input x in expected time O(polylog ‖x‖_1). PMID:25383068
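
    The flavor of deterministic CRN computation is easy to demonstrate: the two-reaction CRN {X1 → Y, X2 → Y} converges to f(x1, x2) = x1 + x2 on every run, whatever the random order of reaction firings. A toy simulator for illustration only; the paper's construction covers all semilinear functions:

        import random

        def run_crn(reactions, counts):
            # Fire applicable reactions in random order until none applies.
            # reactions: list of (consumed, produced) dicts of species counts.
            while True:
                applicable = [r for r in reactions
                              if all(counts.get(s, 0) >= n for s, n in r[0].items())]
                if not applicable:
                    return counts
                consumed, produced = random.choice(applicable)
                for s, n in consumed.items():
                    counts[s] -= n
                for s, n in produced.items():
                    counts[s] = counts.get(s, 0) + n

        # X1 -> Y and X2 -> Y: converges to y = x1 + x2 on every run.
        adder = [({"X1": 1}, {"Y": 1}), ({"X2": 1}, {"Y": 1})]
        print(run_crn(adder, {"X1": 3, "X2": 4, "Y": 0}))   # Y ends at 7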

  13. Quality and Cost of Deterministic Network Calculus - Design and Evaluation of an Accurate and Fast Analysis

    OpenAIRE

    Bondorf, Steffen; Nikolaus, Paul; Schmitt, Jens B.

    2016-01-01

    Networks are integral parts of modern safety-critical systems and certification demands the provision of guarantees for data transmissions. Deterministic Network Calculus (DNC) can compute a worst-case bound on a data flow's end-to-end delay. Accuracy of DNC results has been improved steadily, resulting in two DNC branches: the classical algebraic analysis and the more recent optimization-based analysis. The optimization-based branch provides a theoretical solution for tight bounds. Its compu...
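
    The elementary bound underlying all DNC analyses is small enough to compute directly: a token-bucket flow (arrival curve alpha(t) = b + r·t) crossing a rate-latency server (service curve beta(t) = R·(t − T)+, with r ≤ R) suffers a worst-case delay of at most T + b/R. A sketch with illustrative numbers:

        def dnc_delay_bound(b, r, R, T):
            # Worst-case delay of a token-bucket flow (burst b, rate r) through
            # a rate-latency server (rate R, latency T); requires r <= R.
            assert r <= R, "flow rate must not exceed service rate"
            return T + b / R

        # 1500-byte burst, 1 Mbit/s flow, 10 Mbit/s server with 2 ms latency.
        print(dnc_delay_bound(b=1500 * 8, r=1e6, R=1e7, T=0.002))  # 0.0032 s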

  14. Strategies to enhance waste minimization and energy conservation within organizations: a case study from the UK construction sector.

    Science.gov (United States)

    Jones, Jo; Jackson, Janet; Tudor, Terry; Bates, Margaret

    2012-09-01

    Strategies for enhancing environmental management are a key focus for the government in the UK. Using a manufacturing company from the construction sector as a case study, this paper evaluates selected interventionist techniques, including environmental teams, awareness raising and staff training, to improve environmental performance. The study employed a range of methods, including questionnaire surveys and audits of energy consumption and waste generation, to examine the outcomes of the selected techniques. The results suggest that initially environmental management was not a focus for either the employees or the company. However, as a result of employing the techniques, the company was able to reduce energy consumption, increase recycling rates and achieve cost savings in excess of £132,000.

  15. Method to deterministically study photonic nanostructures in different experimental instruments

    NARCIS (Netherlands)

    Husken, B.H.; Woldering, L.A.; Blum, Christian; Tjerkstra, R.W.; Vos, Willem L.

    2009-01-01

    We describe an experimental method to recover a single, deterministically fabricated nanostructure in various experimental instruments without the use of artificially fabricated markers, with the aim of studying photonic structures. Therefore, a detailed map of the spatial surroundings of the

  16. Deterministic oscillatory search: a new meta-heuristic optimization ...

    Indian Academy of Sciences (India)

    The paper proposes a new optimization algorithm that is extremely robust in solving mathematical and engineering problems. The algorithm combines the deterministic nature of classical ...

  17. Active Chaotic Flows, Deterministic Modeling, and Communication with Chaos

    National Research Council Canada - National Science Library

    Grebogi, Celso

    2001-01-01

    ...) to establish to what extent a natural chaotic system can be modeled deterministically; and (3) to demonstrate theoretically and experimentally that we can encode a message in a power oscillator...

  18. Cheiloscopy ‑ A diagnostic and deterministic mirror for ...

    African Journals Online (AJOL)

    Cheiloscopy ‑ A diagnostic and deterministic mirror for establishment of person identification and gender discrimination: A study participated by Indian Medical students to aid legal proceedings and criminal investigations.

  19. Non deterministic finite automata for power systems fault diagnostics

    Directory of Open Access Journals (Sweden)

    LINDEN, R.

    2009-06-01

    This paper introduces an application based on non-deterministic finite automata for power systems diagnosis. Automata for the simpler faults are presented and the proposed system is compared with an established expert system.
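
    A non-deterministic diagnoser simply tracks the set of fault hypotheses compatible with the observed events. A minimal sketch, with states, events and transitions invented for illustration:

        def nfa_run(transitions, start, events):
            # Track every state the NFA could be in after the observed events.
            # transitions: dict mapping (state, event) -> set of successor states.
            current = {start}
            for e in events:
                current = set().union(*(transitions.get((s, e), set())
                                        for s in current))
                if not current:
                    return {"unexplained event sequence"}
            return current

        # Toy diagnoser: an overcurrent alarm may mean line fault OR breaker trip.
        t = {
            ("ok", "overcurrent"): {"line_fault?", "breaker_trip?"},
            ("line_fault?", "relay_open"): {"line_fault"},
            ("breaker_trip?", "relay_open"): {"breaker_trip"},
            ("line_fault?", "voltage_dip"): {"line_fault"},
        }
        print(nfa_run(t, "ok", ["overcurrent", "voltage_dip"]))  # {'line_fault'}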

  20. Pseudo-random number generator based on asymptotic deterministic randomness

    International Nuclear Information System (INIS)

    Wang Kai; Pei Wenjiang; Xia Haishan; Cheung Yiuming

    2008-01-01

    A novel approach to generate the pseudorandom-bit sequence from the asymptotic deterministic randomness system is proposed in this Letter. We study the characteristic of multi-value correspondence of the asymptotic deterministic randomness constructed by the piecewise linear map and the noninvertible nonlinearity transform, and then give the discretized systems in the finite digitized state space. The statistic characteristics of the asymptotic deterministic randomness are investigated numerically, such as stationary probability density function and random-like behavior. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos based PRBGs and increase the resistance against entropy attacks and symbolic dynamics attacks

  1. Pseudo-random number generator based on asymptotic deterministic randomness

    Science.gov (United States)

    Wang, Kai; Pei, Wenjiang; Xia, Haishan; Cheung, Yiu-ming

    2008-06-01

    A novel approach to generate the pseudorandom-bit sequence from the asymptotic deterministic randomness system is proposed in this Letter. We study the characteristic of multi-value correspondence of the asymptotic deterministic randomness constructed by the piecewise linear map and the noninvertible nonlinearity transform, and then give the discretized systems in the finite digitized state space. The statistic characteristics of the asymptotic deterministic randomness are investigated numerically, such as stationary probability density function and random-like behavior. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos based PRBGs and increase the resistance against entropy attacks and symbolic dynamics attacks.
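
    A heavily simplified relative of such generators iterates a piecewise linear (tent) map and emits one bit per iterate by thresholding. The sketch below shows only that skeleton; the Letter's scheme additionally applies a noninvertible transform to obtain asymptotic deterministic randomness, which plain tent-map thresholding does not provide:

        def tent_prbg(seed: float, n_bits: int, mu: float = 1.9999):
            # Pseudo-random bits from a piecewise linear (tent) map:
            # x_{k+1} = mu * min(x_k, 1 - x_k); each iterate yields one bit.
            x, bits = seed, []
            for _ in range(n_bits):
                x = mu * min(x, 1.0 - x)
                bits.append(1 if x > 0.5 else 0)
            return bits

        print("".join(map(str, tent_prbg(0.31415926, 64))))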

  2. A data analysis method for identifying deterministic components of stable and unstable time-delayed systems with colored noise

    Energy Technology Data Exchange (ETDEWEB)

    Patanarapeelert, K. [Faculty of Science, Department of Mathematics, Mahidol University, Rama VI Road, Bangkok 10400 (Thailand); Frank, T.D. [Institute for Theoretical Physics, University of Muenster, Wilhelm-Klemm-Str. 9, 48149 Muenster (Germany)]. E-mail: tdfrank@uni-muenster.de; Friedrich, R. [Institute for Theoretical Physics, University of Muenster, Wilhelm-Klemm-Str. 9, 48149 Muenster (Germany); Beek, P.J. [Faculty of Human Movement Sciences and Institute for Fundamental and Clinical Human Movement Sciences, Vrije Universiteit, Van der Boechorststraat 9, 1081 BT Amsterdam (Netherlands); Tang, I.M. [Faculty of Science, Department of Physics, Mahidol University, Rama VI Road, Bangkok 10400 (Thailand)

    2006-12-18

    A method is proposed to identify deterministic components of stable and unstable time-delayed systems subjected to noise sources with finite correlation times (colored noise). Both neutral and retarded delay systems are considered. For vanishing correlation times it is shown how to determine their noise amplitudes by minimizing appropriately defined Kullback measures. The method is illustrated by applying it to simulated data from stochastic time-delayed systems representing delay-induced bifurcations, postural sway and ship rolling.

  3. Deterministic operations research models and methods in linear optimization

    CERN Document Server

    Rader, David J

    2013-01-01

    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations resear...

  4. A Review of Deterministic Optimization Methods in Engineering and Management

    Directory of Open Access Journals (Sweden)

    Ming-Hua Lin

    2012-01-01

    With the increasing reliance on modeling optimization problems in practical applications, a number of theoretical and algorithmic contributions to optimization have been proposed. The approaches developed for treating optimization problems can be classified as deterministic or heuristic. This paper aims to introduce recent advances in deterministic methods for solving signomial programming problems and mixed-integer nonlinear programming problems. A number of important applications in engineering and management are also reviewed to reveal the usefulness of the optimization methods.

  5. Deterministic chaos in the pitting phenomena of passivable alloys

    International Nuclear Information System (INIS)

    Hoerle, Stephane

    1998-01-01

    It was shown that electrochemical noise recorded under stable pitting conditions exhibits deterministic (even chaotic) features. The occurrence of deterministic behaviors depends on the severity of the material/solution combination. Thus, electrolyte composition (the [Cl-]/[NO3-] ratio, pH), passive film thickness or alloy composition can change the deterministic features. A single pit is sufficient to observe deterministic behaviors. The electrochemical noise signals are non-stationary, which hints at a change of the pit behavior with time (propagation speed or mean). Modifications of the electrolyte composition reveal transitions between random and deterministic behaviors. Spontaneous transitions between deterministic behaviors with different features (bifurcations) are also evidenced. Such bifurcations illuminate various routes to chaos. The routes to chaos and the features of the chaotic signals suggest models (both continuous and discontinuous models are proposed) of the electrochemical mechanisms inside a pit that describe the experimental behaviors and the effects of the various parameters quite well. The analysis of the chaotic behaviors of a pit leads to a better understanding of propagation mechanisms and provides tools for pit monitoring. (author)

  6. Minimally invasive presacral approach for revision of an Axial Lumbar Interbody Fusion rod due to fall-related lumbosacral instability: a case report

    Directory of Open Access Journals (Sweden)

    Cohen Anders

    2011-09-01

    Introduction: The purpose of this study was to describe procedural details of a minimally invasive presacral approach for revision of an L5-S1 Axial Lumbar Interbody Fusion rod. Case presentation: A 70-year-old Caucasian man presented to our facility with marked thoracolumbar scoliosis, osteoarthritic changes characterized by high-grade osteophytes, and significant intervertebral disc collapse and calcification. Our patient required crutches during ambulation and reported intractable axial and radicular pain. Multi-level reconstruction of L1-4 was accomplished with extreme lateral interbody fusion, although focal lumbosacral symptoms persisted due to disc space collapse at L5-S1. Lumbosacral interbody distraction and stabilization was achieved four weeks later with the Axial Lumbar Interbody Fusion System (TranS1 Inc., Wilmington, NC, USA) and rod implantation via an axial presacral approach. Despite symptom resolution following this procedure, our patient suffered a fall six weeks postoperatively with direct sacral impaction, resulting in symptom recurrence and loss of L5-S1 distraction. Following seven months of unsuccessful conservative care, a revision of the Axial Lumbar Interbody Fusion rod was performed that utilized the same presacral approach and a larger diameter implant. Minimal adhesions were encountered upon presacral re-entry. A precise operative trajectory to the base of the previously implanted rod was achieved using fluoroscopic guidance. Surgical removal of the implant was successful with minimal bone resection required. A larger diameter Axial Lumbar Interbody Fusion rod was then implanted and joint distraction was re-established. The radicular symptoms resolved following revision surgery and our patient was ambulating without assistance on post-operative day one. No adverse events were reported. Conclusions: The Axial Lumbar Interbody Fusion distraction rod may be revised and replaced with a larger diameter rod using

  7. An alternate protocol to achieve stochastic and deterministic resonances

    Science.gov (United States)

    Tiwari, Ishant; Dave, Darshil; Phogat, Richa; Khera, Neev; Parmananda, P.

    2017-10-01

    Periodic and Aperiodic Stochastic Resonance (SR) and Deterministic Resonance (DR) are studied in this paper. To check the ubiquity of the phenomena, two unrelated systems, namely the FitzHugh-Nagumo model and a particle in a bistable potential well, are studied. Instead of the conventional scenario of varying the noise amplitude (in the case of SR) or the chaotic signal amplitude (in the case of DR), a tunable system parameter ("a" in the FitzHugh-Nagumo model and the damping coefficient "j" in the bistable model) is regulated. The operating values of these parameters are defined as the "setpoint" of the system throughout the present work. Our results indicate that there exists an optimal value of the setpoint for which maximum information transfer between the input and output signals takes place. This information transfer from the input sub-threshold signal to the output dynamics is quantified by the normalised cross-correlation coefficient (|CCC|). |CCC| as a function of the setpoint exhibits a unimodal variation which is characteristic of SR (or DR). Furthermore, |CCC| is computed for a grid of noise (or chaotic signal) amplitude and setpoint values. The heat map of |CCC| over this grid reveals a resonance region in the noise-setpoint plane for which the maximum enhancement of the input sub-threshold signal is observed. This resonance region could possibly explain how organisms maintain their signal detection efficacy with fluctuating amounts of noise present in their environment. Interestingly, the method of regulating the setpoint without changing the noise amplitude was not able to induce Coherence Resonance (CR). A possible qualitative reasoning for this is provided.
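
    The |CCC| statistic used throughout is the magnitude of the normalized zero-lag cross-correlation between the sub-threshold input and the output. The sketch below traces a crude resonance curve with a generic threshold detector standing in for the paper's two systems; all parameters are illustrative.

        import numpy as np

        def ccc(u, v):
            # Magnitude of the normalized zero-lag cross-correlation coefficient.
            u, v = u - u.mean(), v - v.mean()
            nu, nv = np.linalg.norm(u), np.linalg.norm(v)
            if nu == 0.0 or nv == 0.0:
                return 0.0            # a flat output carries no information
            return abs(np.dot(u, v)) / (nu * nv)

        rng = np.random.default_rng(2)
        t = np.linspace(0, 100, 5000)
        signal = 0.3 * np.sin(2 * np.pi * 0.1 * t)   # sub-threshold input

        for noise_amp in (0.05, 0.4, 3.0):           # too little / near-optimal / too much
            x = signal + noise_amp * rng.standard_normal(t.size)
            output = (x > 1.0).astype(float)         # crude threshold detector
            print(noise_amp, round(ccc(signal, output), 3))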

  8. Seismic hazard in Romania associated with the Vrancea subcrustal source: Deterministic evaluation

    CERN Document Server

    Radulian, M; Moldoveanu, C L; Panza, G F; Vaccari, F

    2002-01-01

    Our study presents an application of the deterministic approach to the particular case of Vrancea intermediate-depth earthquakes, to show how efficient numerical synthesis is in predicting realistic ground motion and how some striking peculiarities of the observed intensity maps are properly reproduced. The deterministic approach proposed by Costa et al. (1993) is particularly useful for computing seismic hazard in Romania, where the most destructive effects are caused by the intermediate-depth earthquakes generated in the Vrancea region. Vrancea is unique among the seismic sources of the world because of its striking peculiarities: the extreme concentration of seismicity with a remarkable invariance of the foci distribution, the unusually high rate of strong shocks (an average frequency of 3 events with magnitude greater than 7 per century) inside an exceptionally narrow focal volume, the predominance of a reverse faulting mechanism with the T-axis almost vertical and the P-axis almost horizontal and the mo...

  9. Avulsion of the left internal mammary artery graft after minimally invasive coronary surgery: fatal complication or medical error? A case report.

    Science.gov (United States)

    Viel, Guido; Balmaceda, Ute; Sperhake, Jan P

    2009-01-01

    Minimally invasive direct coronary artery bypass (MIDCAB) is performed through a left anterior mini-thoracotomy without the use of cardiopulmonary bypass and offers greater potential for rapid recovery, reduced pain and a decreased need for blood transfusion than conventional coronary artery bypass grafting. Few major complications of the MIDCAB procedure have been reported in the literature since the first intervention was performed in 1995, but the most serious one is avulsion of the left internal mammary artery (LIMA) graft near the site of anastomosis with the left anterior descending coronary artery. Forensic issues regarding the role of the surgeon in causing this life-threatening emergency condition have not been discussed. We report here the case of a 48-year-old man who died of massive thoracic bleeding due to avulsion of the LIMA graft 18 days after a MIDCAB procedure. We discuss the probable etiopathogenesis of this fatal complication from a forensic point of view.

  10. Longevity, Growth and Intergenerational Equity: The Deterministic Case

    DEFF Research Database (Denmark)

    Andersen, Torben M.; Gestsson, Marias Halldór

    2016-01-01

    We develop an overlapping-generations model in continuous time that encompasses different generations with different mortality rates and thus longevity. Allowing for trend increases in both longevity and productivity, we address the normative issue of intergenerational equity under a utilitarian criterion...

  12. Deterministic effects of the ionizing radiation

    International Nuclear Information System (INIS)

    Raslawski, Elsa C.

    2001-01-01

    The deterministic effect is the somatic damage that appears when the radiation dose exceeds a minimum value, the 'threshold dose'. Above this threshold dose, the frequency and seriousness of the damage increase with the dose given. Sixteen percent of patients younger than 15 years of age with the diagnosis of cancer have the possibility of a cure. The consequences of cancer treatment in children are very serious, as they are physically and emotionally developing. The seriousness of the delayed effects of radiation therapy depends on three factors: a) the treatment (dose of radiation, schedule of treatment, time of treatment, beam energy, treatment volume, distribution of the dose, simultaneous chemotherapy, etc.); b) the patient (state of development, patient predisposition, inherent sensitivity of tissue, the presence of other alterations, etc.); c) the tumor (degree of extension or infiltration, mechanical effects, etc.). The effect of radiation on normal tissue is related to cellular activity and the maturity of the tissue irradiated. Children have a mosaic of tissues in different stages of maturity at different moments in time. On the other hand, each tissue has a different pattern of development, so that sequelae differ among the irradiated tissues of the same patient. We should keep in mind that all tissues are affected to some degree. Bone tissue shows damage through growth delay and altered calcification: damage is small at 10 Gy; between 10 and 20 Gy growth arrest is partial, whereas at doses larger than 20 Gy growth arrest is complete. The central nervous system is the most affected, because radiation injuries produce demyelination, with or without focal or diffuse areas of necrosis in the white matter, causing character alterations, lower IQ and functional level, neurocognitive impairment, etc. The skin is also affected, showing different degrees of erythema as well as ulceration and necrosis, different degrees of

  13. Fault Detection for Nonlinear Process With Deterministic Disturbances: A Just-In-Time Learning Based Data Driven Method.

    Science.gov (United States)

    Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay

    2017-11-01

    Data-driven fault detection plays an important role in industrial systems due to its applicability in case of unknown physical models. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs a JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.
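
    A rough sketch of the just-in-time learning idea in residual-based fault detection (generic lazy local modelling; the actual JITL-DD local model structure and detection thresholds are not reproduced here):

    ```python
    import numpy as np

    def jitl_residual(X_hist, y_hist, x_query, y_query, k=20):
        """Just-in-time learning step: fit a local linear model on the k
        nearest historical samples and return the prediction residual."""
        d = np.linalg.norm(X_hist - x_query, axis=1)
        idx = np.argsort(d)[:k]                      # k nearest neighbours
        A = np.hstack([X_hist[idx], np.ones((k, 1))])
        coef, *_ = np.linalg.lstsq(A, y_hist[idx], rcond=None)
        y_pred = np.append(x_query, 1.0) @ coef
        return abs(y_query - y_pred)

    # Toy usage: residuals stay small in normal operation and jump on a fault.
    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(500, 2))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.01 * rng.standard_normal(500)
    x_new = np.array([0.2, -0.4])
    y_normal = np.sin(0.2) - 0.2                     # fault-free measurement
    y_faulty = y_normal + 0.8                        # additive sensor fault
    print(jitl_residual(X, y, x_new, y_normal))      # small residual
    print(jitl_residual(X, y, x_new, y_faulty))      # large residual -> alarm
    ```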

  14. A minimally invasive technique for closing an iatrogenic subclavian artery cannulation using the Angio-Seal closure device: two case reports

    Directory of Open Access Journals (Sweden)

    Szkup Peter L

    2012-03-01

    Introduction: In the two cases described here, the subclavian artery was inadvertently cannulated during unsuccessful access to the internal jugular vein. The puncture was successfully closed using a closure device based on a collagen plug (Angio-Seal, St Jude Medical, St Paul, MN, USA). This technique is relatively simple and inexpensive. It can provide clinicians, such as intensive care physicians and anesthesiologists, with a safe and straightforward alternative to major surgery and can be a life-saving procedure. Case presentation: In the first case, an anesthetist attempted ultrasound-guided access to the right internal jugular vein during the preoperative preparation of a 66-year-old Caucasian man. A 7-French (Fr) triple-lumen catheter was inadvertently placed into his arterial system. In the second case, an emergency physician inadvertently placed a 7-Fr catheter into the subclavian artery of a 77-year-old Caucasian woman whilst attempting access to her right internal jugular vein. Both arterial punctures were successfully closed by means of a percutaneous closure device (Angio-Seal). No complications were observed. Conclusions: Inadvertent subclavian arterial puncture can be successfully managed with no adverse clinical sequelae by using a percutaneous vascular closure device. This minimally invasive technique may be an option for patients with non-compressible arterial punctures. This report demonstrates two practical points that may help clinicians in decision-making during daily practice. First, it provides a practical solution to a well-known vascular complication. Second, it emphasizes the role of proper vascular ultrasound training for the non-radiologist.

  15. Deterministic and heuristic models of forecasting spare parts demand

    Directory of Open Access Journals (Sweden)

    Ivan S. Milojević

    2012-04-01

    Knowing the demand for spare parts is the basis for successful spare parts inventory management. Inventory management has two aspects. The first one is operational management: acting according to certain models and making decisions in specific situations which could not have been foreseen or have not been encompassed by the models. The second aspect is optimization of the model parameters by means of inventory management. Supply item demand (asset demand) is the expression of customers' needs in units in the desired time, and it is one of the most important parameters in inventory management. The basic task of the supply system is demand fulfillment. In practice, demand is expressed through requisition or request. Given the conditions in which inventory management is considered, demand can be deterministic or stochastic, stationary or nonstationary, continuous or discrete, and satisfied or unsatisfied. The application of a maintenance concept is determined by the technological level of development of the assets being maintained. For example, it is hard to imagine that the concept of self-maintenance can be applied to assets developed and put into use 50 or 60 years ago. Even less complex concepts cannot be applied to those vehicles that only have indicators of engine temperature - those that react only when the engine is overheated. This means that the maintenance concepts that can be applied are the traditional preventive maintenance and the corrective maintenance. In order to be applied in a real system, modeling and simulation methods require a completely regulated system, and that is not the case with this spare parts supply system. Therefore, this method, which also enables model development, cannot be applied. Deterministic models of forecasting are almost exclusively related to the concept of preventive maintenance. Maintenance procedures are planned in advance, in accordance with exploitation and time resources. Since the timing

  16. Influenza SIRS with minimal pneumonitis

    Directory of Open Access Journals (Sweden)

    Shruti Erramilli

    2016-08-01

    While systemic inflammatory response syndrome (SIRS) is a known complication of severe influenza pneumonia, it has been reported very rarely in patients with minimal parenchymal lung disease. We here report a case of severe SIRS, anasarca and marked vascular phenomena with minimal or no pneumonitis. This case highlights that viruses, including influenza, may cause vascular dysregulation leading to SIRS, even without substantial visceral organ involvement.

  17. Calculating complete and exact Pareto front for multiobjective optimization: a new deterministic approach for discrete problems.

    Science.gov (United States)

    Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel

    2013-06-01

    Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted, where by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
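
    A toy illustration of the end product, the non-dominated (Pareto) filter applied to enumerated candidates; exhaustive enumeration stands in here for the paper's k-best single-objective oracles, and the two theoretical stopping conditions are omitted:

    ```python
    from itertools import product

    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (minimisation)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_filter(points):
        return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

    # Toy discrete problem: choose one option per slot; two additive cost objectives.
    slots = [[(1, 9), (4, 4), (9, 1)], [(2, 7), (5, 5), (7, 2)]]
    candidates = [tuple(map(sum, zip(*choice))) for choice in product(*slots)]
    print(sorted(set(pareto_filter(candidates))))    # the complete exact Pareto front
    ```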

  18. Minimizing waste (off-cuts) using a cutting stock model: The case of the one-dimensional cutting stock problem in the wood working industry

    Directory of Open Access Journals (Sweden)

    Gbemileke A. Ogunranti

    2016-09-01

    Purpose: The main objective of this study is to develop a model for solving the one-dimensional cutting stock problem in the wood working industry, and to develop a program for its implementation. Design/methodology/approach: This study adopts the pattern-oriented approach in the formulation of the cutting stock model. A pattern generation algorithm was developed and coded using the Visual Basic .NET language. The cutting stock model developed is a Linear Programming (LP) model constrained by numerous feasible patterns. An LP solver was integrated with the pattern generation algorithm program to develop a one-dimensional cutting stock application named GB Cutting Stock Program. Findings and Originality/value: Applying the model to a real-life optimization problem significantly reduces material waste (off-cuts) and minimizes the total stock used. The result yielded about 30.7% cost savings for company I when the total stock material used is compared with the former cutting plan. Also, to evaluate the efficiency of the application, the Case I problem was solved using two top commercial 1D cutting stock software packages. The results show that the GB program performs better when the related results are compared. Research limitations/implications: This study rounds up the linear programming solution for the number of each pattern to cut. Practical implications: From a managerial perspective, implementing optimized cutting plans increases productivity by eliminating calculation errors and drastically reducing operator mistakes. Also, financial benefits that can annually amount to millions in cost savings can be achieved through significant material waste reduction. Originality/value: This paper developed a linear programming one-dimensional cutting stock model based on a pattern generation algorithm to minimize waste in the wood working industry. To implement the model, the algorithm was coded using Visual Basic .NET and a linear programming solver called lpsolvedll (dynamic
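
    A minimal sketch of the pattern-generation-plus-LP structure described above (illustrative data; the LP relaxation is solved with SciPy rather than the lpsolvedll library named in the abstract, and integrality is left to rounding):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    stock_len = 100
    piece_lens = [45, 36, 31, 14]
    demand = [97, 610, 395, 211]

    def gen_patterns(prefix, rest, i, out):
        """Recursively enumerate all feasible cutting patterns for one stock bar."""
        if i == len(piece_lens):
            out.append(prefix)
            return
        for c in range(rest // piece_lens[i], -1, -1):
            gen_patterns(prefix + [c], rest - c * piece_lens[i], i + 1, out)

    patterns = []
    gen_patterns([], stock_len, 0, patterns)
    A = np.array(patterns).T          # rows: piece types, columns: patterns

    # LP relaxation: minimise the number of stock bars cut while covering demand.
    res = linprog(c=np.ones(A.shape[1]), A_ub=-A, b_ub=-np.array(demand),
                  bounds=(0, None), method="highs")
    print("stock bars used (LP lower bound):", res.fun)
    ```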

  19. Deterministic and stochastic CTMC models from Zika disease transmission

    Science.gov (United States)

    Zevika, Mona; Soewono, Edy

    2018-03-01

    Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes including Aedes aegypti. Pregnant women with the Zika virus are at risk of having a fetus or infant with a congenital defect and suffering from microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain stochastic model. The basic reproduction ratio is constructed from a deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for the deterministic and stochastic models.
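
    An illustrative side-by-side of the two approaches on a deliberately simplified SIR caricature (not the paper's vector-host Zika model; rates and population are invented): the ODE gives one deterministic outcome, while repeated CTMC runs estimate the probability of early extinction:

    ```python
    import numpy as np

    beta, gamma, N = 0.5, 0.25, 500   # illustrative rates (R0 = beta/gamma = 2)

    def deterministic(days=100, dt=0.01):
        S, I, R = N - 1.0, 1.0, 0.0
        for _ in range(int(days / dt)):        # forward Euler on the SIR ODEs
            new_inf = beta * S * I / N * dt
            new_rec = gamma * I * dt
            S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        return R

    def ctmc_final_size(seed):
        """Gillespie simulation of the continuous-time Markov chain."""
        rng = np.random.default_rng(seed)
        S, I, R = N - 1, 1, 0
        while I > 0:
            infect_rate = beta * S * I / N
            if rng.uniform() * (infect_rate + gamma * I) < infect_rate:
                S, I = S - 1, I + 1
            else:
                I, R = I - 1, R + 1
        return R

    print("deterministic final size:", round(deterministic()))
    finals = [ctmc_final_size(s) for s in range(200)]
    print("P(early extinction, CTMC):", np.mean([f < 25 for f in finals]))
    ```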

  20. Learning to Act: Qualitative Learning of Deterministic Action Models

    DEFF Research Database (Denmark)

    Bolander, Thomas; Gierasimczuk, Nina

    2017-01-01

    in the limit (inconclusive convergence to the right action model). We show that deterministic actions are finitely identifiable, while arbitrary (non-deterministic) actions require more learning power—they are identifiable in the limit. We then move on to a particular learning method, i.e. learning via update, which proceeds via restriction of a space of events within a learning-specific action model. We show how this method can be adapted to learn conditional and unconditional deterministic action models. We propose update learning mechanisms for the aforementioned classes of actions and analyse their computational complexity. Finally, we study a parametrized learning method which makes use of the upper bound on the number of propositions relevant for a given learning scenario. We conclude with describing related work and numerous directions of further work.

  1. Application of tabu search to deterministic and stochastic optimization problems

    Science.gov (United States)

    Gurtuna, Ozgur

    During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used for solving many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well-established and significant progress has been made in the theory side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both of these two fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is a deterministic one: finding the optimal servicing tours that minimize energy and/or duration of missions for servicing satellites around Earth's orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems are related to decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation. By merging tabu search and Monte Carlo simulation, a new method for studying options, Tabu Search Monte Carlo (TSMC) method, is
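
    A generic tabu search skeleton on a toy knapsack-style problem, to make the heuristic's core loop concrete; nothing here is specific to the thesis's satellite-servicing or option-valuation applications:

    ```python
    def tabu_search(score, neighbors, x0, iters=200, tenure=7):
        """Move to the best non-tabu neighbour each step, keeping a
        short-term memory (tabu list) of recently visited solutions."""
        x = best = x0
        tabu = [x0]
        for _ in range(iters):
            cand = [n for n in neighbors(x) if n not in tabu]
            if not cand:
                break
            x = max(cand, key=score)          # best admissible move
            tabu = (tabu + [x])[-tenure:]     # bounded tabu tenure
            if score(x) > score(best):
                best = x
        return best

    # Toy problem: pick a subset maximising value under a weight budget.
    vals, wts, budget = [6, 5, 8, 9, 6, 7, 3], [2, 3, 6, 7, 5, 9, 4], 15

    def score(x):
        weight = sum(w for w, b in zip(wts, x) if b)
        return sum(v for v, b in zip(vals, x) if b) if weight <= budget else -1

    def neighbors(x):                         # all single-bit flips
        return [tuple(b ^ (j == i) for j, b in enumerate(x)) for i in range(len(x))]

    print(tabu_search(score, neighbors, (0,) * 7))
    ```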

  2. Deterministic Nanopatterning of Diamond Using Electron Beams.

    Science.gov (United States)

    Bishop, James; Fronzi, Marco; Elbadawi, Christopher; Nikam, Vikram; Pritchard, Joshua; Fröch, Johannes E; Duong, Ngoc My Hanh; Ford, Michael J; Aharonovich, Igor; Lobo, Charlene J; Toth, Milos

    2018-03-27

    Diamond is an ideal material for a broad range of current and emerging applications in tribology, quantum photonics, high-power electronics, and sensing. However, top-down processing is very challenging due to its extreme chemical and physical properties. Gas-mediated electron beam-induced etching (EBIE) has recently emerged as a minimally invasive, facile means to dry etch and pattern diamond at the nanoscale using oxidizing precursor gases such as O2 and H2O. Here we explain the roles of oxygen and hydrogen in the etch process and show that oxygen gives rise to rapid, isotropic etching, while the addition of hydrogen gives rise to anisotropic etching and the formation of topographic surface patterns. We identify the etch reaction pathways and show that the anisotropy is caused by preferential passivation of specific crystal planes. The anisotropy can be controlled by the partial pressure of hydrogen and by using a remote RF plasma source to radicalize the precursor gas. It can be used to manipulate the geometries of topographic surface patterns as well as nano- and microstructures fabricated by EBIE. Our findings constitute a comprehensive explanation of the anisotropic etch process and advance present understanding of electron-surface interactions.

  3. Hybrid deterministic/stochastic simulation of complex biochemical systems.

    Science.gov (United States)

    Lecca, Paola; Bagagiolo, Fabio; Scarpa, Marina

    2017-11-21

    In a biological cell, cellular functions and the genetic regulatory apparatus are implemented and controlled by complex networks of chemical reactions involving genes, proteins, and enzymes. Accurate computational models are indispensable means for understanding the mechanisms behind the evolution of a complex system, not always explorable with wet lab experiments. To serve their purpose, computational models, however, should be able to describe and simulate the complexity of a biological system in many of its aspects. Moreover, they should be implemented by efficient algorithms requiring the shortest possible execution time, to avoid enlarging excessively the time elapsing between data analysis and any subsequent experiment. Besides the features of their topological structure, the complexity of biological networks also refers to their dynamics, which are often non-linear and stiff. The stiffness is due to the presence of molecular species whose abundance fluctuates by many orders of magnitude. A fully stochastic simulation of a stiff system is computationally time-expensive. On the other hand, continuous models are less costly, but they fail to capture the stochastic behaviour of small populations of molecular species. We introduce a new efficient hybrid stochastic-deterministic computational model and the software tool MoBioS (MOlecular Biology Simulator) implementing it. The mathematical model of MoBioS uses continuous differential equations to describe the deterministic reactions and a Gillespie-like algorithm to describe the stochastic ones. Unlike the majority of current hybrid methods, the MoBioS algorithm divides the reaction set into fast reactions, moderate reactions, and slow reactions and implements hysteresis switching between the stochastic model and the deterministic model. Fast reactions are approximated as continuous-deterministic processes and modelled by deterministic rate equations. Moderate reactions are those whose reaction waiting time is
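
    A schematic of the hysteresis switching idea highlighted above, reduced to a two-class fast/slow partition for brevity (the thresholds are invented; MoBioS itself also maintains a moderate class and couples the two solvers):

    ```python
    def classify(propensity, current, lo=10.0, hi=100.0):
        """Hysteresis switching: a reaction changes class only when its
        propensity leaves the band [lo, hi] decisively, which prevents a
        reaction hovering near one threshold from flipping solvers every step."""
        if propensity > hi:
            return "fast"    # -> deterministic rate equations
        if propensity < lo:
            return "slow"    # -> Gillespie-style stochastic events
        return current       # inside the band: keep the previous class

    state = "slow"
    for a in [5.0, 12.0, 60.0, 120.0, 90.0, 40.0, 8.0]:
        state = classify(a, state)
        print(f"propensity={a:6.1f} -> {state}")
    ```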

  4. Ergodicity of Truncated Stochastic Navier Stokes with Deterministic Forcing and Dispersion

    Science.gov (United States)

    Majda, Andrew J.; Tong, Xin T.

    2016-10-01

    Turbulence in idealized geophysical flows is a very rich and important topic. The anisotropic effects of explicit deterministic forcing, dispersive effects from rotation due to the β-plane and F-plane, and topography together with random forcing all combine to produce a remarkable number of realistic phenomena. These effects have been studied through careful numerical experiments in the truncated geophysical models. These important results include transitions between coherent jets and vortices, and direct and inverse turbulence cascades as parameters are varied, and it is a contemporary challenge to explain these diverse statistical predictions. Here we contribute to these issues by proving with full mathematical rigor that for any values of the deterministic forcing, the β- and F-plane effects and topography, with minimal stochastic forcing, there is geometric ergodicity for any finite Galerkin truncation. This means that there is a unique smooth invariant measure which attracts all statistical initial data at an exponential rate. In particular, this rigorous statistical theory guarantees that there are no bifurcations to multiple stable and unstable statistical steady states as geophysical parameters are varied, in contrast to claims in the applied literature. The proof utilizes a new statistical Lyapunov function to account for enstrophy exchanges between the statistical mean and the variance fluctuations due to the deterministic forcing. It also requires careful proofs of hypoellipticity with geophysical effects and uses geometric control theory to establish reachability. To illustrate the necessity of these conditions, a two-dimensional example is developed which has the square of the Euclidean norm as the Lyapunov function and is hypoelliptic with nonzero noise forcing, yet fails to be reachable or ergodic.

  5. A national review of the frequency of minimally invasive surgery among general surgery residents: assessment of ACGME case logs during 2 decades of general surgery resident training.

    Science.gov (United States)

    Richards, Morgan K; McAteer, Jarod P; Drake, F Thurston; Goldin, Adam B; Khandelwal, Saurabh; Gow, Kenneth W

    2015-02-01

    Minimally invasive surgery (MIS) has created a shift in how many surgical diseases are treated. Examining the effect on resident operative experience provides valuable insight into trends that may be useful for restructuring the requirements of resident training. To evaluate changes in general surgery resident operative experience regarding MIS, we performed a retrospective review of the frequency of MIS relative to open operations among general surgery residents using the Accreditation Council for Graduate Medical Education case logs for academic years 1993-1994 through 2011-2012, covering general surgery residency training among accredited programs in the United States. We analyzed the difference in the mean number of MIS techniques and corresponding open procedures across training periods using 2-tailed t tests with statistical significance set at P < .05. Minimally invasive surgery has an increasingly prominent role in contemporary surgical therapy for many common diseases. The open approach, however, still predominates in all but 5 procedures. Residents today must become efficient at performing multiple techniques for a single procedure, which demands a broader skill set than in the past.

  6. Minimal Poems Written in 1979

    Directory of Open Access Journals (Sweden)

    Sandra Sirangelo Maggio

    2008-04-01

    The reading of M. van der Slice's Minimal Poems Written in 1979 (the work, actually, has no title) reminded me of a book I saw a long time ago, called Truth, which had not even a single word printed inside. In either case we have a sample of how often eccentricities can prove efficient means of artistic creativity, in this new literary trend known as Minimalism.

  7. Comparison of some classification algorithms based on deterministic and nondeterministic decision rules

    KAUST Repository

    Delimata, Paweł

    2010-01-01

    We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. The first kind of rules, called inhibitory rules, block only one decision value (i.e., they have all but one of the possible decisions on their right-hand sides). Contrary to this, any rule of the second kind, called a bounded nondeterministic rule, can have only a few decisions on its right-hand side. We show that both kinds of rules can be used for improving the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but the direct generation of rules is not required. Instead, for any new object the considered algorithms efficiently extract from a given decision table some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in the construction of rule-based classifiers. We include the results of experiments showing that by combining rule-based classifiers based on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality. © 2010 Springer-Verlag.

  8. DCBRP: a deterministic chain-based routing protocol for wireless sensor networks.

    Science.gov (United States)

    Marhoon, Haydar Abdulameer; Mahmuddin, M; Nor, Shahrudin Awang

    2016-01-01

    Wireless sensor networks (WSNs) are a promising area for both researchers and industry because of their various applications. The sensor node expends the majority of its energy on communication with other nodes. Therefore, the routing protocol plays an important role in delivering network data while minimizing energy consumption as much as possible. The chain-based routing approach is superior to other approaches. However, chain-based routing protocols still expend substantial energy in the Chain Head (CH) node, and they also have bottleneck issues. A novel routing protocol, the Deterministic Chain-Based Routing Protocol (DCBRP), is proposed. DCBRP consists of three mechanisms: the Backbone Construction Mechanism, the Chain Head Selection (CHS) mechanism, and the Next Hop Connection Mechanism. The CHS mechanism is presented in detail, and it is evaluated through comparison with CCM and TSCP using an ns-3 simulator. The results show that DCBRP outperforms both CCM and TSCP in terms of end-to-end delay by 19.3 and 65%, respectively, CH energy consumption by 18.3 and 23.0%, respectively, overall energy consumption by 23.7 and 31.4%, respectively, network lifetime by 22 and 38%, respectively, and the energy*delay metric by 44.85 and 77.54%, respectively. DCBRP can be used in any deterministic node deployment application, such as smart cities or smart agriculture, to reduce energy depletion and prolong the lifetime of WSNs.

  9. Deterministic multimode photonic device for quantum-information processing

    DEFF Research Database (Denmark)

    Nielsen, Anne E. B.; Mølmer, Klaus

    2010-01-01

    We propose the implementation of a light source that can deterministically generate a rich variety of multimode quantum states. The desired states are encoded in the collective population of different ground hyperfine states of an atomic ensemble and converted to multimode photonic states by exci...

  10. Testing for converging deterministic seasonal variation in European industrial production

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R.M. Kunst (Robert)

    1999-01-01

    In this paper we consider deterministic seasonal variation in quarterly production for several European countries, and we address the question whether this variation has become more similar across countries over time. Due to economic and institutional factors, one may expect convergence

  11. A Deterministic Annealing Approach to Clustering AIRS Data

    Science.gov (United States)

    Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander

    2012-01-01

    We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method called the Deterministic Annealing technique

  12. Mixed motion in deterministic ratchets due to anisotropic permeability

    NARCIS (Netherlands)

    Kulrattanarak, T.; Sman, van der R.G.M.; Lubbersen, Y.S.; Schroën, C.G.P.H.; Pham, H.T.M.; Sarro, P.M.; Boom, R.M.

    2011-01-01

    Nowadays microfluidic devices are becoming popular for cell/DNA sorting and fractionation. One class of these devices, namely deterministic ratchets, seems most promising for continuous fractionation applications of suspensions (Kulrattanarak et al., 2008 [1]). Next to the two main types of particle

  13. Deterministic control of ferroelastic switching in multiferroic materials

    NARCIS (Netherlands)

    Balke, N.; Choudhury, S.; Jesse, S.; Huijben, Mark; Chu, Y.H.; Baddorf, A.P.; Chen, L.Q.; Ramesh, R.; Kalinin, S.V.

    2009-01-01

    Multiferroic materials showing coupled electric, magnetic and elastic orderings provide a platform to explore complexity and new paradigms for memory and logic devices. Until now, the deterministic control of non-ferroelectric order parameters in multiferroics has been elusive. Here, we demonstrate

  14. Deterministic event-based simulation of quantum phenomena

    NARCIS (Netherlands)

    De Raedt, K; De Raedt, H; Michielsen, K

    2005-01-01

    We propose and analyse simple deterministic algorithms that can be used to construct machines that have primitive learning capabilities. We demonstrate that locally connected networks of these machines can be used to perform blind classification on an event-by-event basis, without storing the

  15. Using a satisfiability solver to identify deterministic finite state automata

    NARCIS (Netherlands)

    Heule, M.J.H.; Verwer, S.

    2009-01-01

    We present an exact algorithm for identification of deterministic finite automata (DFA) which is based on satisfiability (SAT) solvers. Despite the size of the low level SAT representation, our approach seems to be competitive with alternative techniques. Our contributions are threefold: First, we

  16. Deterministic oscillatory search: a new meta-heuristic optimization ...

    Indian Academy of Sciences (India)

    The paper proposes a new optimization algorithm that is extremely robust in solving mathematical and engineering problems. The algorithm combines the deterministic nature of classical methods of optimization and global converging characteristics of meta-heuristic algorithms. Common traits of nature-inspired algorithms ...

  17. Deterministic Versus Stochastic Interpretation of Continuously Monitored Sewer Systems

    DEFF Research Database (Denmark)

    Harremoës, Poul; Carstensen, Niels Jacob

    1994-01-01

    An analysis has been made of the uncertainty of input parameters to deterministic models for sewer systems. The analysis reveals a very significant uncertainty, which can be decreased, but not eliminated and has to be considered for engineering application. Stochastic models have a potential for ...

  18. About the Possibility of Creation of a Deterministic Unified Mechanics

    International Nuclear Information System (INIS)

    Khomyakov, G.K.

    2005-01-01

    The possibility of creating a unified deterministic scheme of classical and quantum mechanics, allowing their achievements to be preserved, is discussed. It is shown that the canonical system of ordinary differential equations of Hamiltonian classical mechanics can be augmented with a vector system of ordinary differential equations for the variables of the equations. The interpretational problems of quantum mechanics are considered.

  19. Nonlinear deterministic structures and the randomness of protein sequences

    CERN Document Server

    Huang Yan Zhao

    2003-01-01

    To clarify the randomness of protein sequences, we make a detailed analysis of a set of typical protein sequences representing each structural class by using a nonlinear prediction method. No deterministic structures are found in these protein sequences, which implies that they behave as random sequences. We also give an explanation for the controversial results obtained in previous investigations.

  20. Risk-based versus deterministic explosives safety criteria

    Energy Technology Data Exchange (ETDEWEB)

    Wright, R.E.

    1996-12-01

    The Department of Defense Explosives Safety Board (DDESB) is actively considering ways to apply risk-based approaches in its decision-making processes. As such, an understanding of the impact of converting to risk-based criteria is required. The objectives of this project are to examine the benefits and drawbacks of risk-based criteria and to define the impact of converting from deterministic to risk-based criteria. Conclusions will be couched in terms that allow meaningful comparisons of deterministic and risk-based approaches. To this end, direct comparisons of the consequences and impacts of both deterministic and risk-based criteria at selected military installations are made. Deterministic criteria used in this report are those in DoD 6055.9-STD, "DoD Ammunition and Explosives Safety Standard." Risk-based criteria selected for comparison are those used by the government of Switzerland, "Technical Requirements for the Storage of Ammunition (TLM 75)." The risk-based criteria used in Switzerland were selected because they have been successfully applied for over twenty-five years.

  1. Practical deterministic secure quantum communication in a lossy channel

    Science.gov (United States)

    Qaisar, Saad; Rehman, Junaid ur; Jeong, Youngmin; Shin, Hyundong

    2017-04-01

    Losses in a quantum channel do not allow deterministic communication. We propose a two-way six-state deterministic secure quantum communication scheme that is robust in a lossy channel. Our protocol can be used for two purposes: (a) establishment of a deterministic key, and (b) direct communication of a secret message. Our protocol is directly integrable with the decoy state method while achieving deterministic communication without using a quantum memory. In our protocol, a legitimate party has the control to assign a desired bit value to a successfully transmitted qubit in the public discussion step. Before the public discussion, no information is leaked to the eavesdropper (Eve) even if all the qubits are measured or prepared by her. Hence, our scheme is used as a quantum direct communication (QDC) protocol, to meet the quality of service requirement of swift data communication. We compare the security of our protocol against the photon number splitting attack in the absence of the decoy state method with two QDC protocols. We compute the success probability of Eve when our protocol is used as a multiparty key distribution scheme. We also propose the criteria to compute the efficiency of QDC protocols.

  2. Comparison of deterministic and Monte Carlo methods in shielding design.

    Science.gov (United States)

    Oliveira, A D; Oliveira, C

    2005-01-01

    In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions, which can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shields have been defined, allowing comparison between the capability of both Monte Carlo and deterministic methods in day-by-day shielding calculations, using sensitivity analysis of significant parameters such as energy and geometrical conditions.
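
    For context, a worked sketch of the deterministic point-kernel estimate that build-up factors enter into, uncollided flux through a slab scaled by B(mu*x); the linear build-up form and all numbers are illustrative, not MicroShield's fitted factors:

    ```python
    import math

    def shielded_flux(S, mu, x, r, B=lambda mux: 1.0 + mux):
        """Point source behind a slab: uncollided term S*exp(-mu*x)/(4*pi*r^2),
        multiplied by a build-up factor B(mu*x) to account for scattered
        photons; real codes tabulate or fit B as a function of energy and mu*x."""
        mux = mu * x
        return S * B(mux) * math.exp(-mux) / (4.0 * math.pi * r * r)

    S = 3.7e10   # source strength, photons/s (illustrative)
    mu = 0.06    # linear attenuation coefficient, 1/cm (illustrative)
    for x in (0, 5, 10, 20):   # slab thickness in cm
        print(f"x={x:2d} cm -> {shielded_flux(S, mu, x, r=100.0):.3e} photons/cm^2/s")
    ```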

  4. Deterministic entanglement of Rydberg ensembles by engineered dissipation

    DEFF Research Database (Denmark)

    Dasari, Durga; Mølmer, Klaus

    2014-01-01

    We propose a scheme that employs dissipation to deterministically generate entanglement in an ensemble of strongly interacting Rydberg atoms. With a combination of microwave driving between different Rydberg levels and a resonant laser coupling to a short lived atomic state, the ensemble can be d...

  5. The State of Deterministic Thinking among Mothers of Autistic Children

    Directory of Open Access Journals (Sweden)

    Mehrnoush Esbati

    2011-10-01

    Objectives: The purpose of the present study was to investigate the effectiveness of cognitive-behavior education in decreasing deterministic thinking in mothers of children with autism spectrum disorders. Methods: Participants were 24 mothers of autistic children who were referred to counseling centers in Tehran and whose children's disorder had been diagnosed by at least a psychiatrist and a counselor. They were randomly selected and assigned to control and experimental groups. The measurement tool was the Deterministic Thinking Questionnaire; both groups answered it before and after the education, and the answers were analyzed by analysis of covariance. Results: The results indicated that cognitive-behavior education decreased deterministic thinking among mothers of autistic children; it decreased all four subscales of deterministic thinking as well: interaction with others, absolute thinking, prediction of the future, and negative events (P<0.05). Discussion: By learning cognitive and behavioral techniques, parents of children with autism can reach a higher level of psychological well-being, and it is likely that these cognitive-behavioral skills would have a positive impact on the general life satisfaction of mothers of children with autism.

  6. Multidirectional sorting modes in deterministic lateral displacement devices

    DEFF Research Database (Denmark)

    Long, B.R.; Heller, Martin; Beech, J.P.

    2008-01-01

    Deterministic lateral displacement (DLD) devices separate micrometer-scale particles in solution based on their size using a laminar microfluidic flow in an array of obstacles. We investigate array geometries with rational row-shift fractions in DLD devices by use of a simple model including both...

  7. Deterministic teleportation using single-photon entanglement as a resource

    DEFF Research Database (Denmark)

    Björk, Gunnar; Laghaout, Amine; Andersen, Ulrik L.

    2012-01-01

    We outline a proof that teleportation with a single particle is, in principle, just as reliable as with two particles. We thereby hope to dispel the skepticism surrounding single-photon entanglement as a valid resource in quantum information. A deterministic Bell-state analyzer is proposed which...

  8. Deterministic algorithms for multi-criteria Max-TSP

    NARCIS (Netherlands)

    Manthey, Bodo

    2012-01-01

    We present deterministic approximation algorithms for the multi-criteria maximum traveling salesman problem (Max-TSP). Our algorithms are faster and simpler than the existing randomized algorithms. We devise algorithms for the symmetric and asymmetric multi-criteria Max-TSP that achieve ratios of

  9. Demonstration of deterministic and high fidelity squeezing of quantum information

    DEFF Research Database (Denmark)

    Yoshikawa, J-I.; Hayashi, T-; Akiyama, T.

    2007-01-01

    By employing a recent proposal [R. Filip, P. Marek, and U.L. Andersen, Phys. Rev. A 71, 042308 (2005)] we experimentally demonstrate a universal, deterministic, and high-fidelity squeezing transformation of an optical field. It relies only on linear optics, homodyne detection, feedforward, and an...

  10. The Role of Auxiliary Variables in Deterministic and Deterministic-Stochastic Spatial Models of Air Temperature in Poland

    Science.gov (United States)

    Szymanowski, Mariusz; Kryza, Maciej

    2017-02-01

    Our study examines the role of auxiliary variables in the process of spatial modelling and mapping of climatological elements, with air temperature in Poland used as an example. Multivariable algorithms are the most frequently applied for spatialization of air temperature, and in many studies their results have proved better than those obtained by various one-dimensional techniques. In most of the previous studies, two main strategies were used to perform multidimensional spatial interpolation of air temperature. First, it was accepted that all variables significantly correlated with air temperature should be incorporated into the model. Second, it was assumed that the more spatial variation of air temperature was deterministically explained, the better the quality of spatial interpolation. The main goal of the paper was to examine both above-mentioned assumptions. The analysis was performed using data from 250 meteorological stations and for 69 air temperature cases aggregated at different levels: from daily means to the 10-year annual mean. Two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. Another purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models, multiple linear regression (MLR) and geographically weighted regression (GWR), as well as their extensions to the regression-kriging form, MLRK and GWRK, respectively, were examined. Stepwise regression was used to select variables for the individual models, and the cross-validation method was used to validate the results, with special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to rejection of both assumptions considered. Usually, including more than two or three of the most significantly
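
    A compact sketch of the stepwise-selection-with-cross-validation loop described above, run on synthetic stand-in station data (numpy only; the study's GWR and regression-kriging variants and its significance testing are not reproduced):

    ```python
    import numpy as np

    def loo_mae(X, y):
        """Leave-one-out cross-validated MAE of an ordinary least-squares fit."""
        errs = []
        for i in range(len(y)):
            m = np.ones(len(y), bool); m[i] = False
            A = np.hstack([X[m], np.ones((m.sum(), 1))])
            coef, *_ = np.linalg.lstsq(A, y[m], rcond=None)
            errs.append(abs(y[i] - np.append(X[i], 1.0) @ coef))
        return np.mean(errs)

    def forward_stepwise(X, y, names):
        """Add predictors one at a time, keeping each only if it improves
        the cross-validated MAE criterion."""
        chosen, best = [], loo_mae(np.empty((len(y), 0)), y)
        improved = True
        while improved:
            improved = False
            for j in set(range(X.shape[1])) - set(chosen):
                mae = loo_mae(X[:, chosen + [j]], y)
                if mae < best:
                    best, add, improved = mae, j, True
            if improved:
                chosen.append(add)
        return [names[j] for j in chosen], best

    rng = np.random.default_rng(0)
    preds = rng.standard_normal((120, 5))     # candidate auxiliary variables
    temp = 12 - 6.5 * preds[:, 0] - 1.2 * preds[:, 1] + 0.3 * rng.standard_normal(120)
    names = ["elevation", "latitude", "slope", "aspect", "ndvi"]
    print(forward_stepwise(preds, temp, names))   # only the two real drivers survive
    ```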

  11. Minimal Surfaces for Hitchin Representations

    DEFF Research Database (Denmark)

    Li, Qiongling; Dai, Song

    2018-01-01

    Given a reductive representation $\rho: \pi_1(S)\rightarrow G$, there exists a $\rho$-equivariant harmonic map $f$ from the universal cover of a fixed Riemann surface $\Sigma$ to the symmetric space $G/K$ associated to $G$. If the Hopf differential of $f$ vanishes, the harmonic map is then minimal. In this paper, we investigate the properties of immersed minimal surfaces inside the symmetric space associated to a sublocus of the Hitchin component: the $q_n$ and $q_{n-1}$ case. First, we show that the pullback metric of the minimal surface dominates a constant multiple of the hyperbolic metric in the same conformal class and has a strong rigidity property. Secondly, we show that the immersed minimal surface is never tangential to any flat inside the symmetric space. As a direct corollary, the pullback metric of the minimal surface is always strictly negatively curved. In the end, we find a fully decoupled system...

  12. The integrated model for solving the single-period deterministic inventory routing problem

    Science.gov (United States)

    Rahim, Mohd Kamarul Irwan Abdul; Abidin, Rahimi; Iteng, Rosman; Lamsali, Hendrik

    2016-08-01

    This paper discusses the problem of efficiently managing inventory and routing in a two-level supply chain system. Vendor Managed Inventory (VMI) is a policy integrating the decisions of a supplier and his customers. We assume that the demand at each customer is stationary and that the warehouse implements VMI. The objective of this paper is to minimize the inventory and transportation costs of the customers in a two-level supply chain. The problem is to determine the delivery quantities, delivery times and routes to the customers for the single-period deterministic inventory routing problem (SP-DIRP) system. As a result, a linear mixed-integer program is developed for the solution of the SP-DIRP problem.
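
    A schematic of the kind of linear mixed-integer program meant here, in our own notation (the paper's full SP-DIRP formulation includes complete vehicle-flow routing constraints): minimize $\sum_{i\in C} h_i I_i + \sum_{(i,j)\in A} c_{ij} x_{ij}$ subject to the inventory balance $I_i = I_i^0 + q_i - d_i$ for each customer $i\in C$, a linking constraint $q_i \le Q \sum_j x_{ji}$ so that a positive delivery quantity requires some vehicle arc to enter $i$, and binary routing variables $x_{ij}\in\{0,1\}$ forming feasible vehicle tours; here $h_i$ are holding costs, $c_{ij}$ transportation costs, $d_i$ demands and $Q$ the vehicle capacity.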

  13. Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing

    Science.gov (United States)

    Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier

    2009-01-01

    The increasing technology of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current tendency in classification is oriented towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used for the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm.
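
    A bare-bones deterministic annealing loop in the spirit of the DSA framework (Rose-style soft assignments with a geometric cooling schedule; the paper's energy function combining the Bayesian and fuzzy clustering classifiers is not reproduced):

    ```python
    import numpy as np

    def deterministic_annealing(X, k=3, T0=5.0, Tmin=0.01, cool=0.9, seed=0):
        """Soft cluster assignments via a Gibbs distribution at temperature T;
        lowering T gradually hardens the partition while the annealing
        schedule helps the optimisation avoid poor local minima."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)].astype(float)
        T = T0
        while T > Tmin:
            for _ in range(20):               # fixed-point iterations at this T
                d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
                p = np.exp(-(d2 - d2.min(1, keepdims=True)) / T)
                p /= p.sum(1, keepdims=True)  # soft memberships
                centers = (p.T @ X) / p.sum(0)[:, None]
            T *= cool                         # cooling (annealing) schedule
        return centers, p.argmax(1)

    X = np.vstack([np.random.default_rng(i).normal(m, 0.3, (50, 2))
                   for i, m in enumerate([(0, 0), (3, 3), (0, 3)])])
    centers, labels = deterministic_annealing(X)
    print(np.round(centers, 2))
    ```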

  14. A Hybrid Approach to Simulate X-Ray Imaging Techniques, Combining Monte Carlo and Deterministic Algorithms

    Science.gov (United States)

    Freud, N.; Letang, J.-M.; Babot, D.

    2005-10-01

    In this paper, we propose a hybrid approach to simulate multiple scattering of photons in objects under X-ray inspection, without recourse to parallel computing and without any approximation sacrificing accuracy. Photon scattering is considered from two points of view: it contributes to X-ray imaging and to the dose absorbed by the patient. The proposed hybrid approach consists of a Monte Carlo stage followed by a deterministic phase, thus taking advantage of the complementarity between these two methods. In the first stage, a set of scattering events occurring in the inspected object is determined by means of classical Monte Carlo simulation. Then this set of scattering events is used to compute the energy imparted to the detector, with a deterministic algorithm based on a "forced detection" scheme. Regarding dose evaluation, we propose to assess separately the energy deposited by direct radiation (using a deterministic algorithm) and by scattered radiation (using our hybrid approach). The results obtained in a test case are compared to those obtained with the Monte Carlo method alone (Geant4 code) and found to be in excellent agreement. The proposed hybrid approach makes it possible to simulate the contribution of each type (Compton or Rayleigh) and order of scattering, separately or together, with a single PC, within reasonable computation times (from minutes to hours, depending on the required detector resolution and statistics). It is possible to simulate radiographic images virtually free from photon noise. In the case of dose evaluation, the hybrid approach appears particularly suitable to calculate the dose absorbed by regions of interest (rather than the entire irradiated organ) with computation time and statistical fluctuations considerably reduced in comparison with conventional Monte Carlo simulation.

  15. Order and Chaos in Some Deterministic Infinite Trigonometric Products

    Science.gov (United States)

    Albert, Leif; Kiessling, Michael K.-H.

    2017-08-01

    It is shown that the deterministic infinite trigonometric products $\prod_{n\in\mathbb{N}}\left[1-p+p\cos\left(n^{-s}t\right)\right] =: \mathrm{Cl}_{p;s}(t)$ with parameters $p\in(0,1]$ and $s>1/2$, and variable $t\in\mathbb{R}$, are inverse Fourier transforms of the probability distributions for certain random series $\Omega_p^\zeta(s)$ taking values in the real $\omega$ line; i.e. the $\mathrm{Cl}_{p;s}(t)$ are characteristic functions of the $\Omega_p^\zeta(s)$. The special case $p=1=s$ yields the familiar random harmonic series, while in general $\Omega_p^\zeta(s)$ is a "random Riemann-$\zeta$ function," a notion which will be explained and illustrated, and connected to the Riemann hypothesis. It will be shown that $\Omega_p^\zeta(s)$ is a very regular random variable, having a probability density function (PDF) on the $\omega$ line which is a Schwartz function. More precisely, an elementary proof is given that there exists some $K_{p;s}>0$, and a function $F_{p;s}(|t|)$ bounded by $|F_{p;s}(|t|)|\le\exp\left(K_{p;s}|t|^{1/(s+1)}\right)$, and $C_{p;s}=-\frac{1}{s}\int_0^\infty \ln\left|1-p+p\cos\xi\right|\,\xi^{-1-1/s}\,\mathrm{d}\xi$, such that $\forall t\in\mathbb{R}:\ \mathrm{Cl}_{p;s}(t)=\exp\left(-C_{p;s}|t|^{1/s}\right)F_{p;s}(|t|)$; the regularity of $\Omega_p^\zeta(s)$ follows. Incidentally, this theorem confirms a surmise by Benoit Cloitre, that $\ln\mathrm{Cl}_{1/3;2}(t)\sim -C\sqrt{t}$ ($t\to\infty$) for some $C>0$. Graphical evidence suggests that $\mathrm{Cl}_{1/3;2}(t)$ is an empirically unpredictable (chaotic) function of $t$. This is reflected in the rich structure of the pertinent PDF (the Fourier transform of $\mathrm{Cl}_{1/3;2}$), and illustrated by random sampling of the Riemann-$\zeta$ walks, whose branching rules allow the build-up of fractal-like structures.
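
    The partial products are straightforward to evaluate numerically; a small sketch checking Cloitre's surmise that $-\ln \mathrm{Cl}_{1/3;2}(t)/\sqrt{t}$ levels off for large $t$ (the truncation point n_max is our choice):

    ```python
    import numpy as np

    def Cl(p, s, t, n_max=200000):
        """Truncated product approximation of Cl_{p;s}(t) =
        prod_{n>=1} [1 - p + p*cos(t * n**(-s))]."""
        n = np.arange(1, n_max + 1, dtype=float)
        return np.prod(1.0 - p + p * np.cos(t * n ** -s))

    for t in (10.0, 100.0, 1000.0):
        ratio = -np.log(Cl(1 / 3, 2.0, t)) / np.sqrt(t)
        print(f"t={t:7.1f}  -ln Cl / sqrt(t) = {ratio:.4f}")
    ```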

  16. Molecular detection of minimal residual disease is a strong predictive factor of relapse in childhood B-lineage acute lymphoblastic leukemia with medium risk features. A case control study of the International BFM study group

    NARCIS (Netherlands)

    Biondi, A; Valsecchi, MG; Seriu, T; D'Aniello, E; Willemse, MJ; Fasching, K; Pannunzio, A; Gadner, H; Schrappe, M; Kamps, WA; Bartram, CR; van Dongen, JJM; Panzer-Grumayer, ER

    2000-01-01

    The medium-risk B cell precursor acute lymphoblastic leukemia (ALL) accounts for 50-60% of total childhood ALL and comprises the largest number of relapses still unpredictable with diagnostic criteria. To evaluate the prognostic impact of minimal residual disease (MRD) in this specific group, a case

  17. Information-Theoretic Analysis of Memoryless Deterministic Systems

    Directory of Open Access Journals (Sweden)

    Bernhard C. Geiger

    2016-11-01

    The information loss in deterministic, memoryless systems is investigated by evaluating the conditional entropy of the input random variable given the output random variable. It is shown that for a large class of systems the information loss is finite, even if the input has a continuous distribution. For systems with infinite information loss, a relative measure is defined and shown to be related to Rényi information dimension. As deterministic signal processing can only destroy information, it is important to know how this information loss affects the solution of inverse problems. Hence, we connect the probability of perfectly reconstructing the input to the information lost in the system via Fano-type bounds. The theoretical results are illustrated by example systems commonly used in discrete-time, nonlinear signal processing and communications.
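
    A worked example of the quantity in question, the conditional entropy $H(X|Y)$ for a uniform discrete input and a deterministic map (finite alphabets only; the paper's continuous-input results are beyond this sketch):

    ```python
    import math
    from collections import Counter

    def information_loss(inputs, f):
        """H(X|Y) in bits for X uniform over `inputs` and deterministic Y = f(X).
        Within each preimage f^{-1}(y), X is uniform over |preimage| values,
        so H(X|Y) = sum_y P(y) * log2 |preimage(y)|."""
        groups = Counter(f(x) for x in inputs)
        n = len(inputs)
        return sum(c / n * math.log2(c) for c in groups.values())

    X = range(16)
    print(information_loss(X, lambda x: x))        # bijection: 0 bits lost
    print(information_loss(X, lambda x: x // 4))   # 4-to-1 map: 2 bits lost
    print(information_loss(X, lambda x: 0))        # constant map: all 4 bits lost
    ```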

  18. Deterministic Hydraulic Load Analysis on Reactor Internals of APR1400

    International Nuclear Information System (INIS)

    Kim, Kyu Hyung; Ko, Do Young; Gu, Ja Yeong

    2011-01-01

    The structural integrity of the reactor vessel internals (RVI) of the nuclear power plants that have been constructed should be verified in accordance with the US Nuclear Regulatory Commission Regulatory Guide 1.20 (RG1.20) comprehensive vibration assessment program (CVAP) during preoperational and initial startup testing. The program consists of a vibration and stress analysis, a vibration measurement, an inspection, and an assessment of each program. The vibration and stress analysis program is comprised of a hydraulic load analysis and a structural response analysis. The hydraulic loads include the random hydraulic loads induced by turbulent flow and deterministic hydraulic loads induced by pump pulsation. This paper describes a developed full scope 3-D model and the deterministic hydraulic loads for the RVI of the APR1400

  19. Deterministic Brownian motion generated from differential delay equations.

    Science.gov (United States)

    Lei, Jinzhi; Mackey, Michael C

    2011-10-01

    This paper addresses the question of how Brownian-like motion can arise from the solution of a deterministic differential delay equation. To study this we analytically study the bifurcation properties of an apparently simple differential delay equation and then numerically investigate the probabilistic properties of chaotic solutions of the same equation. Our results show that solutions of the deterministic equation with randomly selected initial conditions display a Gaussian-like density for long time, but the densities are supported on an interval of finite measure. Using these chaotic solutions as velocities, we are able to produce Brownian-like motions, which show statistical properties akin to those of a classical Brownian motion over both short and long time scales. Several conjectures are formulated for the probabilistic properties of the solution of the differential delay equation. Numerical studies suggest that these conjectures could be "universal" for similar types of "chaotic" dynamics, but we have been unable to prove this.
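
    The construction can be illustrated with a toy delay equation; the equation, parameter beta, and the Euler scheme below are assumptions for demonstration only and are not taken from the paper. Summing the chaotic solution as a velocity yields a trajectory whose increments look Brownian on coarse time scales:

        import numpy as np

        # Illustrative chaotic delay equation x'(t) = sin(beta * x(t - 1)).
        def simulate(beta=8.0, T=2000.0, dt=0.01, seed=0):
            rng = np.random.default_rng(seed)
            lag = int(round(1.0 / dt))                 # delay of one time unit
            x = np.empty(int(T / dt) + lag)
            x[:lag] = rng.uniform(-1.0, 1.0, lag)      # random initial history
            for i in range(lag, x.size):
                x[i] = x[i - 1] + dt * np.sin(beta * x[i - lag])
            return x[lag:]

        v = simulate()
        position = np.cumsum(v) * 0.01                 # chaotic solution as velocity
        print(position[-1], v.var())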

  20. On the secure obfuscation of deterministic finite automata.

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, William Erik

    2008-06-01

    In this paper, we show how to construct secure obfuscation for Deterministic Finite Automata, assuming non-uniformly strong one-way functions exist. We revisit the software protection approaches originally proposed by [5, 10, 12, 17] and revise them to the current obfuscation setting of Barak et al. [2]. Under this model, we introduce an efficient oracle that retains some 'small' secret about the original program. Using this secret, we can construct an obfuscator and two-party protocol that securely obfuscates Deterministic Finite Automata against malicious adversaries. The security of this model retains the strong 'virtual black box' property originally proposed in [2] while incorporating the stronger condition of dependent auxiliary inputs in [15]. Additionally, we show that our techniques remain secure under concurrent self-composition with adaptive inputs and that Turing machines are obfuscatable under this model.

  1. Deterministic Properties of Serially Connected Distributed Lag Models

    Directory of Open Access Journals (Sweden)

    Piotr Nowak

    2013-01-01

    Distributed lag models are an important tool in modeling dynamic systems in economics. In the analysis of composite forms of such models, the component models are ordered in parallel (with the same independent variable) and/or in series (where the independent variable is also the dependent variable in the preceding model). This paper presents an analysis of certain deterministic properties of composite distributed lag models composed of component distributed lag models arranged in sequence, and their asymptotic properties in particular. The models considered are in discrete form. Even though the paper focuses on deterministic properties of distributed lag models, the derivations are based on analytical tools commonly used in probability theory such as probability distributions and the central limit theorem.
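
    A serial connection composes the component lag distributions by convolution, which is why central-limit arguments enter the asymptotics; a small sketch with made-up lag weights:

        import numpy as np

        w1 = np.array([0.5, 0.3, 0.2])      # lag weights of the first model (assumed)
        w2 = np.array([0.6, 0.4])           # lag weights of the second model (assumed)

        composite = np.convolve(w1, w2)     # series connection: convolved lag weights
        print(composite, composite.sum())   # weights still sum to 1

        chain = np.array([1.0])
        for _ in range(20):                 # many models in series -> near-Gaussian shape
            chain = np.convolve(chain, w1)
        print(chain.argmax(), chain.max())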

  2. Relationship of Deterministic Thinking With Loneliness and Depression in the Elderly

    Directory of Open Access Journals (Sweden)

    Mehdi Sharifi

    2017-12-01

    Conclusion: According to the results, it can be said that deterministic thinking has a significant relationship with depression and sense of loneliness in older adults. So, deterministic thinking acts as a predictor of depression and sense of loneliness in older adults. Therefore, psychological interventions challenging the cognitive distortion of deterministic thinking, and attention to mental health in older adults, are very important.

  3. Evaluation of Deterministic and Stochastic Components of Traffic Counts

    Directory of Open Access Journals (Sweden)

    Ivan Bošnjak

    2012-10-01

    Traffic counts, or statistical evidence of the traffic process, are often a characteristic of time-series data. In this paper the fundamental problem of estimating the deterministic and stochastic components of a traffic process is considered, in the context of "generalised traffic modelling". Different methods for identification and/or elimination of the trend and seasonal components are applied to concrete traffic counts. Further investigations and applications of ARIMA models, Hilbert space formulations and state-space representations are suggested.
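
    As a minimal illustration of trend and seasonal elimination (an elementary decomposition of my own; the paper's ARIMA and state-space treatments go further):

        import numpy as np

        def decompose(counts, period=7):
            """Split a traffic-count series into trend, seasonal and residual parts."""
            t = np.arange(len(counts))
            trend = np.polyval(np.polyfit(t, counts, 1), t)       # linear trend
            detrended = counts - trend
            seasonal = np.array([detrended[i::period].mean() for i in range(period)])
            seasonal = np.tile(seasonal, len(counts) // period + 1)[:len(counts)]
            residual = detrended - seasonal                       # stochastic component
            return trend, seasonal, residual

        rng = np.random.default_rng(1)
        t = np.arange(28 * 7)
        counts = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 3, t.size)
        trend, seasonal, resid = decompose(counts)
        print(resid.std())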

  4. Efficient deterministic secure quantum communication protocols using multipartite entangled states

    Science.gov (United States)

    Joy, Dintomon; Surendran, Supin P.; Sabir, M.

    2017-06-01

    We propose two deterministic secure quantum communication protocols employing three-qubit GHZ-like states and five-qubit Brown states as quantum channels for secure transmission of information in units of two bits and three bits using multipartite teleportation schemes developed here. In these schemes, the sender's capability in selecting quantum channels and the measuring bases leads to improved qubit efficiency of the protocols.

  5. The deterministic SIS epidemic model in a Markovian random environment.

    Science.gov (United States)

    Economou, Antonis; Lopez-Herrero, Maria Jesus

    2016-07-01

    We consider the classical deterministic susceptible-infective-susceptible epidemic model, where the infection and recovery rates depend on a background environmental process that is modeled by a continuous time Markov chain. This framework is able to capture several important characteristics that appear in the evolution of real epidemics in large populations, such as seasonality effects and environmental influences. We propose computational approaches for the determination of various distributions that quantify the evolution of the number of infectives in the population.
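
    A sketch of the model under illustrative assumptions (two environment states, made-up rates, forward-Euler integration); the paper's computational machinery for the full distributions is more involved:

        import numpy as np

        # Deterministic SIS with environment-dependent rates; two-state CTMC background.
        beta = {0: 0.30, 1: 0.15}   # infection rate per environment state (assumed)
        gamma = {0: 0.10, 1: 0.10}  # recovery rate (assumed)
        q = {0: 0.05, 1: 0.08}      # CTMC exit rates from each environment state

        rng = np.random.default_rng(2)
        i, env, t, dt, T = 0.01, 0, 0.0, 0.01, 400.0
        t_switch = rng.exponential(1.0 / q[env])
        while t < T:
            if t >= t_switch:                    # environment jumps to the other state
                env = 1 - env
                t_switch = t + rng.exponential(1.0 / q[env])
            i += dt * (beta[env] * i * (1.0 - i) - gamma[env] * i)  # dI/dt for SIS
            t += dt
        print(env, i)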

  6. Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates

    Science.gov (United States)

    Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TX; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN

    2012-03-27

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.

  7. A note on controllability of deterministic context-free systems

    Czech Academy of Sciences Publication Activity Database

    Masopust, Tomáš

    2012-01-01

    Roč. 48, č. 8 (2012), s. 1934-1937 ISSN 0005-1098 R&D Projects: GA ČR(CZ) GPP202/11/P028 Institutional support: RVO:67985840 Keywords : discrete-event systems * controllability * deterministic context-free systems Subject RIV: BA - General Mathematics Impact factor: 2.919, year: 2012 http://www.sciencedirect.com/science/article/pii/S0005109812002543

  8. Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes

    International Nuclear Information System (INIS)

    Harrisson, G.; Marleau, G.

    2012-01-01

    The Canadian SCWR has the potential to achieve the goals that the generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06 and different microscopic cross section libraries based on the ENDF/B-VII.0 evaluated nuclear data file have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option to use in this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give the most consistent results with those of SERPENT. (authors)

  9. Probabilistic and deterministic soil structure interaction analysis including ground motion incoherency effects

    Energy Technology Data Exchange (ETDEWEB)

    Elkhoraibi, T., E-mail: telkhora@bechtel.com; Hashemi, A.; Ostadan, F.

    2014-04-01

    Soil-structure interaction (SSI) is a major step for seismic design of massive and stiff structures typical of the nuclear facilities and civil infrastructures such as tunnels, underground stations, dams and lock head structures. Currently most SSI analyses are performed deterministically, incorporating limited range of variation in soil and structural properties and without consideration of the ground motion incoherency effects. This often leads to overestimation of the seismic response particularly the In-Structure-Response Spectra (ISRS) with significant impositions of design and equipment qualification costs, especially in the case of high-frequency sensitive equipment at stiff soil or rock sites. The reluctance to incorporate a more comprehensive probabilistic approach is mainly due to the fact that the computational cost of performing probabilistic SSI analysis even without incoherency function considerations has been prohibitive. As such, bounding deterministic approaches have been preferred by the industry and accepted by the regulatory agencies. However, given the recently available and growing computing capabilities, the need for a probabilistic-based approach to the SSI analysis is becoming clear with the advances in performance-based engineering and the utilization of fragility analysis in the decision making process whether by the owners or the regulatory agencies. This paper demonstrates the use of both probabilistic and deterministic SSI analysis techniques to identify important engineering demand parameters in the structure. A typical nuclear industry structure is used as an example for this study. The system is analyzed for two different site conditions: rock and deep soil. Both deterministic and probabilistic SSI analysis approaches are performed, using the program SASSI, with and without ground motion incoherency considerations. In both approaches, the analysis begins at the hard rock level using the low frequency and high frequency hard rock

  10. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.

  11. Distinguishing deterministic and noise components in ELM time series

    International Nuclear Information System (INIS)

    Zvejnieks, G.; Kuzovkov, V.N

    2004-01-01

    One of the main problems in preliminary data analysis is distinguishing the deterministic and noise components in experimental signals. For example, in plasma physics the question arises when analyzing edge localized modes (ELMs): is the observed ELM behavior governed by complicated deterministic chaos or just by random processes? We have developed a methodology based on financial engineering principles which allows us to distinguish deterministic and noise components. We extended the linear autoregression method (AR) by including nonlinearity (NAR method). As a starting point we have chosen the nonlinearity in polynomial form; however, the NAR method can be extended to any other type of nonlinear function. The best polynomial model describing the experimental ELM time series was selected using the Bayesian Information Criterion (BIC). With this method we have analyzed type I ELM behavior in a subset of ASDEX Upgrade shots. The obtained results indicate that a linear AR model can describe the ELM behavior. In turn, this means that type I ELM behavior is of a relaxation or random type.
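
    A toy version of the model-selection step (the synthetic data and the simple Gaussian BIC form are my assumptions, not the authors' exact procedure):

        import numpy as np

        def fit_bic(x, degree):
            """Least-squares fit x[t] = poly(x[t-1]) of given degree; return BIC."""
            X = np.vander(x[:-1], degree + 1)          # columns: x^d, ..., x, 1
            coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
            n = len(x) - 1
            rss = np.sum((x[1:] - X @ coef) ** 2)
            k = degree + 1
            return n * np.log(rss / n) + k * np.log(n)

        rng = np.random.default_rng(3)
        x = np.zeros(500)
        for t in range(1, 500):                        # synthetic AR(1) "ELM-like" series
            x[t] = 0.7 * x[t - 1] + rng.normal(0, 0.1)
        for d in (1, 2, 3):
            print(d, fit_bic(x, d))                    # BIC should favor the linear model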

  12. Deterministic hazard quotients (HQs): Heading down the wrong road

    International Nuclear Information System (INIS)

    Wilde, L.; Hunter, C.; Simpson, J.

    1995-01-01

    The use of deterministic hazard quotients (HQs) in ecological risk assessment is common as a screening method in remediation of brownfield sites dominated by total petroleum hydrocarbon (TPH) contamination. An HQ ≥ 1 indicates further risk evaluation is needed, but an HQ < 1 generally excludes a site from further evaluation. Is the predicted hazard known with such certainty that differences of 10% (0.1) do not affect the ability to exclude or include a site from further evaluation? Current screening methods do not quantify uncertainty associated with HQs. To account for uncertainty in the HQ, exposure point concentrations (EPCs) or ecological benchmark values (EBVs) are conservatively biased. To increase understanding of the uncertainty associated with HQs, EPCs (measured and modeled) and toxicity EBVs were evaluated using a conservative deterministic HQ method. The evaluation was then repeated using a probabilistic (stochastic) method. The probabilistic method used data distributions for EPCs and EBVs to generate HQs with measurements of associated uncertainty. Sensitivity analyses were used to identify the most important factors significantly influencing risk determination. Understanding uncertainty associated with HQ methods gives risk managers a more powerful tool than deterministic approaches
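
    A compact sketch of the probabilistic alternative (the lognormal parameters are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(4)
        n = 100_000
        # Hypothetical lognormal distributions for exposure point concentration (EPC)
        # and ecological benchmark value (EBV); parameters are illustrative only.
        epc = rng.lognormal(mean=np.log(5.0), sigma=0.6, size=n)   # mg/kg
        ebv = rng.lognormal(mean=np.log(8.0), sigma=0.4, size=n)   # mg/kg

        hq = epc / ebv
        print("deterministic HQ (means):", epc.mean() / ebv.mean())
        print("P(HQ >= 1):", (hq >= 1).mean())   # the uncertainty a point HQ hides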

  13. Are deterministic methods suitable for short term reserve planning?

    International Nuclear Information System (INIS)

    Voorspools, Kris R.; D'haeseleer, William D.

    2005-01-01

    Although deterministic methods for establishing minutes reserve (such as the N-1 reserve or the percentage reserve) ignore the stochastic nature of reliability issues, they are commonly used in energy modelling as well as in practical applications. In order to check the validity of such methods, two test procedures are developed. The first checks if the N-1 reserve is a logical fixed value for minutes reserve. The second test procedure investigates whether deterministic methods can realise a stable reliability that is independent of demand. In both evaluations, the loss-of-load expectation is used as the objective stochastic criterion. The first test shows no particular reason to choose the largest unit as minutes reserve. The expected jump in reliability, resulting in low reliability for reserve margins lower than the largest unit and high reliability above, is not observed. The second test shows that both the N-1 reserve and the percentage reserve methods do not provide a stable reliability level that is independent of power demand. For the N-1 reserve, the reliability increases with decreasing maximum demand. For the percentage reserve, the reliability decreases with decreasing demand. The answer to the question raised in the title, therefore, has to be that the probability based methods are to be preferred over the deterministic methods
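
    The loss-of-load criterion used in both test procedures can be illustrated with a capacity-outage probability table; the unit sizes and forced outage rates below are invented. The output shows the loss-of-load probability rising with demand, the demand dependence the paper finds the deterministic rules cannot track:

        def outage_table(units):
            """Probability distribution of available capacity for independent units.
            units: list of (capacity_MW, forced_outage_rate)."""
            dist = {0: 1.0}
            for cap, rate in units:
                new = {}
                for c, p in dist.items():
                    new[c + cap] = new.get(c + cap, 0.0) + p * (1.0 - rate)
                    new[c] = new.get(c, 0.0) + p * rate
                dist = new
            return dist

        units = [(400, 0.05)] * 2 + [(200, 0.04)] * 3   # illustrative system
        dist = outage_table(units)
        for demand in (800, 1000, 1200):
            lolp = sum(p for c, p in dist.items() if c < demand)
            print(demand, "MW demand -> LOLP", round(lolp, 5))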

  14. Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow

    Science.gov (United States)

    Gupta, Atma Ram; Kumar, Ashwani

    2017-12-01

    Distribution system networks today face the challenge of meeting increased load demands from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and temporal factors such as season of the year, day of the week or time of the day. For deterministic radial distribution load flow studies the load is taken as constant. But load varies continually with a high degree of uncertainty, so there is a need to model probable realistic load. Monte Carlo simulation is used to model the probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load, and then solving a deterministic radial load flow with these values. The probabilistic solution is reconstructed from the deterministic data obtained in each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and comparing the voltage profile and losses with probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.

  15. Combining Deterministic structures and stochastic heterogeneity for transport modeling

    Science.gov (United States)

    Zech, Alraune; Attinger, Sabine; Dietrich, Peter; Teutsch, Georg

    2017-04-01

    Contaminant transport in highly heterogeneous aquifers is extremely challenging and subject of current scientific debate. Tracer plumes often show non-symmetric but highly skewed plume shapes. Predicting such transport behavior using the classical advection-dispersion-equation (ADE) in combination with a stochastic description of aquifer properties requires a dense measurement network. This is in contrast to the available information for most aquifers. A new conceptual aquifer structure model is presented which combines large-scale deterministic information and the stochastic approach for incorporating sub-scale heterogeneity. The conceptual model is designed to allow for a goal-oriented, site specific transport analysis making use of as few data as possible. Thereby the basic idea is to reproduce highly skewed tracer plumes in heterogeneous media by incorporating deterministic contrasts and effects of connectivity instead of using unimodal heterogeneous models with high variances. The conceptual model consists of deterministic blocks of mean hydraulic conductivity which might be measured by pumping tests indicating values differing in orders of magnitudes. A sub-scale heterogeneity is introduced within every block. This heterogeneity can be modeled as bimodal or log-normal distributed. The impact of input parameters, structure and conductivity contrasts is investigated in a systematic manner. Furthermore, a first successful implementation of the model was achieved for the well-known MADE site.

  16. Developments based on stochastic and deterministic methods for studying complex nuclear systems

    Energy Technology Data Exchange (ETDEWEB)

    Giffard, F.X

    2000-05-19

    In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first is the deterministic method, which is applicable in most practical cases but requires approximations. The other is the Monte Carlo method, which does not make these approximations but which generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous-energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained with the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented, as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)

  17. Constrained Minimization Algorithms

    Science.gov (United States)

    Lantéri, H.; Theys, C.; Richard, C.

    2013-03-01

    In this paper, we consider the inverse problem of restoring an unknown signal or image, knowing the transformation suffered by the unknowns. More specifically, we deal with transformations described by a linear model linking the unknown signal to an unnoisy version of the data. The measured data are generally corrupted by noise. This aspect of the problem is presented in the introduction for general models. In Section 2, we introduce the linear models, and some examples of linear inverse problems are presented. The specificities of inverse problems are briefly mentioned and shown on a simple example. In Section 3, we give some information on classical distances or divergences. Indeed, an inverse problem is generally solved by minimizing a discrepancy function (divergence or distance) between the measured data and the model (here linear) of such data. Section 4 deals with likelihood maximization and its links with divergence minimization. The physical constraints on the solution are indicated and the Split Gradient Method (SGM) is detailed in Section 5. A constraint on the inferior bound of the solution is introduced first; the positivity constraint is a particular case of such a constraint. We show how to obtain, strictly, the multiplicative form of the algorithms. In a second step, the so-called flux constraint is introduced, and a complete algorithmic form is given. In Section 6 we give some brief information on acceleration methods for such algorithms. A conclusion is given in Section 7.
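
    A minimal sketch of the split-gradient idea for nonnegative least squares (my own toy instance, not the authors' general SGM): write the negative gradient as U − V with U, V ≥ 0 and update multiplicatively, which preserves positivity:

        import numpy as np

        def multiplicative_nnls(A, b, iters=500):
            """Minimize ||Ax - b||^2 subject to x >= 0 with a multiplicative update.
            Splitting -grad/2 = A^T b - A^T A x into U = A^T b >= 0 and
            V = A^T A x >= 0 gives x <- x * U / V (ISRA-type scheme)."""
            x = np.ones(A.shape[1])
            U = A.T @ b
            for _ in range(iters):
                V = A.T @ (A @ x)
                x *= U / np.maximum(V, 1e-12)
            return x

        rng = np.random.default_rng(5)
        A = rng.random((30, 10))          # nonnegative data, as the scheme requires
        x_true = rng.random(10)
        x_hat = multiplicative_nnls(A, A @ x_true)
        print(np.max(np.abs(x_hat - x_true)))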

  18. Sludge minimization technologies - an overview

    Energy Technology Data Exchange (ETDEWEB)

    Oedegaard, Hallvard

    2003-07-01

    The management of wastewater sludge from wastewater treatment plants represents one of the major challenges in wastewater treatment today. The cost of the sludge treatment amounts to more than the cost of the liquid treatment in many cases. Therefore the focus on and interest in sludge minimization is steadily increasing. In the paper an overview is given for sludge minimization (sludge mass reduction) options. It is demonstrated that sludge minimization may be a result of reduced production of sludge and/or disintegration processes that may take place both in the wastewater treatment stage and in the sludge stage. Various sludge disintegration technologies for sludge minimization are discussed, including mechanical methods (focusing on stirred ball-mill, high-pressure homogenizer, ultrasonic disintegrator), chemical methods (focusing on the use of ozone), physical methods (focusing on thermal and thermal/chemical hydrolysis) and biological methods (focusing on enzymatic processes). (author)

  19. Minimal Reducts with Grasp

    Directory of Open Access Journals (Sweden)

    Iris Iddaly Mendez Gurrola

    2011-03-01

    The proper detection of a patient's level of dementia is important in order to offer suitable treatment. The diagnosis is based on certain criteria, reflected in clinical examinations, from which emerge the limitations and the degree each patient is in. In order to reduce the total number of limitations to be evaluated, we used rough set theory; this theory has been applied in areas of artificial intelligence such as decision analysis, expert systems, knowledge discovery, and classification with multiple attributes. In our case the theory is applied to find the minimal set of limitations, or reduct, that generates the same classification as considering all the limitations; to fulfill this purpose we developed a GRASP (Greedy Randomized Adaptive Search Procedure) algorithm.
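
    A toy sketch of a GRASP-style reduct search on a small decision table (the greedy score and data are invented; the table is assumed consistent so the construction loop terminates):

        import random

        def classifies_like_full(attrs, rows):
            """True if the attribute subset determines the decision column
            (last entry of each row) without conflicts."""
            seen = {}
            for row in rows:
                key = tuple(row[a] for a in attrs)
                if seen.setdefault(key, row[-1]) != row[-1]:
                    return False
            return True

        def grasp_reduct(rows, n_attrs, alpha=0.3, restarts=20, seed=6):
            rng = random.Random(seed)
            best = list(range(n_attrs))
            for _ in range(restarts):
                cand, remaining = [], list(range(n_attrs))
                while not classifies_like_full(cand, rows):
                    # greedy proxy score: prefer attributes with many distinct values
                    remaining.sort(key=lambda a: -len({r[a] for r in rows}))
                    rcl = remaining[:max(1, int(alpha * len(remaining)))]
                    cand.append(rng.choice(rcl))       # randomized greedy choice
                    remaining.remove(cand[-1])
                if len(cand) < len(best):
                    best = cand[:]
            return best

        rows = [(0, 1, 0, 'a'), (1, 1, 0, 'b'), (0, 0, 1, 'a'), (1, 0, 1, 'b')]
        print(grasp_reduct(rows, 3))   # attribute 0 alone decides the class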

  20. Quantifying diffusion MRI tractography of the corticospinal tract in brain tumors with deterministic and probabilistic methods.

    Science.gov (United States)

    Bucci, Monica; Mandelli, Maria Luisa; Berman, Jeffrey I; Amirbekian, Bagrat; Nguyen, Christopher; Berger, Mitchel S; Henry, Roland G

    2013-01-01

    sensitivity (79%) as determined from cortical IES compared to deterministic q-ball (50%), probabilistic DTI (36%), and deterministic DTI (10%). The sensitivity using the q-ball algorithm (65%) was significantly higher than using DTI (23%) (p probabilistic algorithms (58%) were more sensitive than deterministic approaches (30%) (p = 0.003). Probabilistic q-ball fiber tracks had the smallest offset to the subcortical stimulation sites. The offsets between diffusion fiber tracks and subcortical IES sites were increased significantly for those cases where the diffusion fiber tracks were visibly thinner than expected. There was perfect concordance between the subcortical IES function (e.g. hand stimulation) and the cortical connection of the nearest diffusion fiber track (e.g. upper extremity cortex). This study highlights the tremendous utility of intraoperative stimulation sites to provide a gold standard from which to evaluate diffusion MRI fiber tracking methods and has provided an objective standard for evaluation of different diffusion models and approaches to fiber tracking. The probabilistic q-ball fiber tractography was significantly better than DTI methods in terms of sensitivity and accuracy of the course through the white matter. The commonly used DTI fiber tracking approach was shown to have very poor sensitivity (as low as 10% for deterministic DTI fiber tracking) for delineation of the lateral aspects of the corticospinal tract in our study. Effects of the tumor/edema resulted in significantly larger offsets between the subcortical IES and the preoperative fiber tracks. The provided data show that probabilistic HARDI tractography is the most objective and reproducible analysis but given the small sample and number of stimulation points a generalization about our results should be given with caution. Indeed our results inform the capabilities of preoperative diffusion fiber tracking and indicate that such data should be used carefully when making pre-surgical and

  1. Minimal surfaces in Riemannian manifolds

    International Nuclear Information System (INIS)

    Ji Min; Wang Guangyin

    1990-10-01

    The existence of multiple solutions to the Plateau problem in a Riemannian manifold is established. In S^n, the existence of two solutions to this problem is obtained. The Morse-Tompkins-Shiffman theorem is extended to the case when the ambient space admits no minimal sphere. (author). 20 refs

  2. Minimal change disease

    Science.gov (United States)

    Minimal change nephrotic syndrome; Nil disease; Lipoid nephrosis; Idiopathic nephrotic syndrome of childhood ... which filter blood and produce urine. In minimal change disease, there is damage to the glomeruli. These ...

  3. Detection of deterministic and probabilistic convection initiation using Himawari-8 Advanced Himawari Imager data

    Science.gov (United States)

    Lee, Sanggyun; Han, Hyangsun; Im, Jungho; Jang, Eunna; Lee, Myong-In

    2017-05-01

    The detection of convective initiation (CI) is very important because convective clouds bring heavy rainfall and thunderstorms that typically cause severe socio-economic damage. In this study, deterministic and probabilistic CI detection models based on decision trees (DT), random forest (RF), and logistic regression (LR) were developed using Himawari-8 Advanced Himawari Imager (AHI) data obtained from June to August 2016 over the Korean Peninsula. A total of 12 interest fields that contain brightness temperature, spectral differences of the brightness temperatures, and their time trends were used to develop CI detection models. While, in our study, the interest field of 11.2 µm Tb was considered the most crucial for detecting CI in the deterministic models and the probabilistic RF model, the trispectral difference, i.e. (8.6-11.2 µm)-(11.2-12.4 µm), was determined to be the most important one in the LR model. The performance of the four models varied by CI case and validation data. Nonetheless, the DT model typically showed higher probability of detection (POD), while the RF model produced higher overall accuracy (OA) and critical success index (CSI) and lower false alarm rate (FAR) than the other models. The mean CI detection lead times of the four models were in the range of 20-40 min, which implies that convective clouds can be detected about 30 min in advance, before precipitation intensity exceeds 35 dBZ, over the Korean Peninsula in summer using Himawari-8 AHI data.
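
    A hypothetical sketch of the RF variant (feature names, the synthetic data, and the toy CI labeling rule are all assumptions, not the paper's dataset):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(8)
        n = 2000
        tb_112 = rng.normal(260, 15, n)                 # 11.2 um brightness temperature
        trispec = rng.normal(0, 2, n)                   # (8.6-11.2) - (11.2-12.4) um
        trend = rng.normal(0, 1, n)                     # Tb time trend
        X = np.column_stack([tb_112, trispec, trend])
        y = (tb_112 < 250) & (trend < -0.5)             # toy CI rule for illustration

        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        print(dict(zip(["tb_112", "trispec", "trend"], clf.feature_importances_)))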

  4. Regularity of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht

    2010-01-01

    "Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t

  5. Is non-minimal inflation eternal?

    Science.gov (United States)

    Feng, Chao-Jun; Li, Xin-Zhou

    2010-12-01

    The possibility that the non-minimal coupling inflation could be eternal is investigated. We calculate the quantum fluctuation of the inflaton in a Hubble time and find that it has the same value as that in the minimal case in the slow-roll limit. Armed with this result, we have studied some concrete non-minimal inflationary models including the chaotic inflation and the natural inflation, in which the inflaton is non-minimally coupled to the gravity. We find that the non-minimal coupling inflation could be eternal in some parameter spaces.

  6. Is Non-minimal Inflation Eternal?

    OpenAIRE

    Feng, Chao-Jun; Li, Xin-Zhou

    2009-01-01

    The possibility that the non-minimal coupling inflation could be eternal is investigated. We calculate the quantum fluctuation of the inflaton in a Hubble time and find that it has the same value as in the minimal case in the slow-roll limit. Armed with this result, we have studied some concrete non-minimal inflationary models including the chaotic inflation and the natural inflation while the inflaton is non-minimally coupled to the gravity and we find that these non-minimal inflations could...

  7. Parallel Time O(log N) Acceptance of Deterministic CFLs.

    Science.gov (United States)

    1984-03-01

    … algorithm may be used to simulate a space-bounded auxiliary pushdown automaton. In Section 7, we give a complementary result, the simulation of PRAMs by deterministic auxiliary PDAs. We mention some related work, and … a … -bounded, t(n) time-bounded deterministic auxiliary pushdown automaton M with a stack discipline satisfying the assumptions of Section 1. Each surface …

  8. CALTRANS: A parallel, deterministic, 3D neutronics code

    Energy Technology Data Exchange (ETDEWEB)

    Carson, L.; Ferguson, J.; Rogers, J.

    1994-04-01

    Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code, CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.

  9. Methods and models in mathematical biology deterministic and stochastic approaches

    CERN Document Server

    Müller, Johannes

    2015-01-01

    This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and  branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.

  10. A deterministic global optimization using smooth diagonal auxiliary functions

    Science.gov (United States)

    Sergeyev, Yaroslav D.; Kvasov, Dmitri E.

    2015-04-01

    In many practical decision-making problems it happens that the functions involved in the optimization process are black-box, with unknown analytical representations, and hard to evaluate. In this paper, a global optimization problem is considered where both the goal function f(x) and its gradient f′(x) are black-box functions. It is supposed that f′(x) satisfies the Lipschitz condition over the search hyperinterval with an unknown Lipschitz constant K. A new deterministic 'Divide-the-Best' algorithm based on efficient diagonal partitions and smooth auxiliary functions is proposed in its basic version; its convergence conditions are studied and numerical experiments executed on eight hundred test functions are presented.
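
    For flavor, here is a much-simplified relative of such methods: 1-D Piyavskii-Shubert minimization with a known Lipschitz constant for f itself, rather than the paper's diagonal scheme with smooth auxiliary functions. The test function and the constant L are my choices:

        import heapq
        import math

        def piyavskii(f, a, b, L, iters=60):
            """1-D Lipschitz global minimization: repeatedly refine the interval
            with the smallest saw-tooth lower bound. L must dominate |f'|."""
            def split(x1, f1, x2, f2):
                x = 0.5 * (x1 + x2) + (f1 - f2) / (2.0 * L)  # envelope minimizer
                return f1 - L * (x - x1), x
            fa, fb = f(a), f(b)
            best = min((fa, a), (fb, b))
            heap = [(split(a, fa, b, fb)[0], a, fa, b, fb)]
            for _ in range(iters):
                lb, x1, f1, x2, f2 = heapq.heappop(heap)
                if lb >= best[0] - 1e-9:
                    break                         # global lower bound meets incumbent
                _, xm = split(x1, f1, x2, f2)
                fm = f(xm)
                best = min(best, (fm, xm))
                heapq.heappush(heap, (split(x1, f1, xm, fm)[0], x1, f1, xm, fm))
                heapq.heappush(heap, (split(xm, fm, x2, f2)[0], xm, fm, x2, f2))
            return best

        print(piyavskii(lambda x: math.sin(x) + math.sin(10 * x / 3), 2.7, 7.5, L=6.0))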

  11. Deterministic computational modeling of the radioactive decay phenomenon

    International Nuclear Information System (INIS)

    Dias, Hugo Rafael; Barros, Ricardo C.

    2007-01-01

    Based on a deterministic mathematical model, we develop a computational model for the radioactive decay problem, emphasizing the development of a computational application, i.e., the construction of algorithms, programming and presentation of results for this mathematical model. The application models single or composed radioactive decay using classical numerical methods such as the implicit trapezoidal method, as well as more recent numerical methods that are free of time-truncation error, which means greater confidence in the calculated values together with speed and efficiency in obtaining the results. (author)
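
    A minimal sketch of the classical approach mentioned above, applied to a hypothetical two-member decay chain (the decay constants are invented):

        import numpy as np

        # Chain A -> B -> (stable): dN/dt = M N, solved with the implicit
        # trapezoidal (Crank-Nicolson) rule and compared with the exact answer.
        lam_a, lam_b = 0.1, 0.05            # decay constants (1/s), assumed values
        M = np.array([[-lam_a, 0.0],
                      [lam_a, -lam_b]])

        def trapezoidal(N0, T, steps):
            dt = T / steps
            I = np.eye(2)
            step = np.linalg.solve(I - 0.5 * dt * M, I + 0.5 * dt * M)
            N = N0.copy()
            for _ in range(steps):
                N = step @ N
            return N

        N0 = np.array([1000.0, 0.0])
        T = 50.0
        exact_a = N0[0] * np.exp(-lam_a * T)   # exact population of species A
        print(trapezoidal(N0, T, 500), exact_a)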

  12. Enhanced deterministic phase retrieval using a partially developed speckle field

    DEFF Research Database (Denmark)

    Almoro, Percival F.; Waller, Laura; Agour, Mostafa

    2012-01-01

    A technique for enhanced deterministic phase retrieval using a partially developed speckle field (PDSF) and a spatial light modulator (SLM) is demonstrated experimentally. A smooth test wavefront impinges on a phase diffuser, forming a PDSF that is directed to a 4f setup. Two defocused speckle intensity measurements are recorded at the output plane corresponding to axially-propagated representations of the PDSF in the input plane. The speckle intensity measurements are then used in a conventional transport of intensity equation (TIE) to reconstruct directly the test wavefront. The PDSF in our...
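
    A bare-bones sketch of the conventional TIE inversion step (uniform-intensity approximation, unit pixel pitch, FFT-based Poisson solve; all are simplifying assumptions and not the authors' exact pipeline):

        import numpy as np

        def tie_phase(i_minus, i_plus, dz, k, i0):
            """Uniform-intensity TIE: laplacian(phi) = -(k / I0) dI/dz,
            inverted with FFTs (spatial frequencies in cycles per pixel)."""
            didz = (i_plus - i_minus) / (2.0 * dz)
            ny, nx = didz.shape
            fx = np.fft.fftfreq(nx)[None, :]
            fy = np.fft.fftfreq(ny)[:, None]
            k2 = (2.0 * np.pi) ** 2 * (fx ** 2 + fy ** 2)
            k2[0, 0] = 1.0                      # avoid dividing by zero at DC
            phi = np.real(np.fft.ifft2(np.fft.fft2(-(k / i0) * didz) / (-k2)))
            return phi - phi.mean()             # phase is defined up to a constant

        rng = np.random.default_rng(7)
        i0, dz, k = 1.0, 1e-6, 2.0 * np.pi / 633e-9
        i_minus = i0 + 0.01 * rng.standard_normal((64, 64))
        i_plus = i0 + 0.01 * rng.standard_normal((64, 64))
        print(tie_phase(i_minus, i_plus, dz, k, i0).shape)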

  13. Calculating Certified Compilers for Non-deterministic Languages

    DEFF Research Database (Denmark)

    Bahr, Patrick

    2015-01-01

    Reasoning about programming languages with non-deterministic semantics entails many difficulties. For instance, to prove correctness of a compiler for such a language, one typically has to split the correctness property into a soundness and a completeness part, and then prove these two parts... be used to formally derive -- from the semantics of the source language -- a compiler that is correct by construction. For such a derivation to succeed it is crucial that the underlying correctness argument proceeds as a single calculation, as opposed to separate calculations of the two directions of the correctness property. We demonstrate our technique by deriving a compiler for a simple language with interrupts.

  14. The deterministic optical alignment of the HERMES spectrograph

    Science.gov (United States)

    Gers, Luke; Staszak, Nicholas

    2014-07-01

    The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four channel, VPH-grating spectrograph fed by two 400 fiber slit assemblies whose construction and commissioning has now been completed at the Anglo Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles about which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.

  15. Separation of parasites from human blood using deterministic lateral displacement.

    Science.gov (United States)

    Holm, Stefan H; Beech, Jason P; Barrett, Michael P; Tegenfeldt, Jonas O

    2011-04-07

    We present the use of a simple microfluidic technique to separate living parasites from human blood. Parasitic trypanosomatids cause a range of human and animal diseases. African trypanosomes, responsible for human African trypanosomiasis (sleeping sickness), live free in the blood and other tissue fluids. Diagnosis relies on detection and due to their often low numbers against an overwhelming background of predominantly red blood cells it is crucial to separate the parasites from the blood. By modifying the method of deterministic lateral displacement, confining parasites and red blood cells in channels of optimized depth which accentuates morphological differences, we were able to achieve separation thus offering a potential route to diagnostics.
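
    For scale, a widely cited empirical relation for DLD arrays (Davis's formula; the specific gap and row-shift values here are invented) estimates the critical particle diameter separating the displaced and non-displaced flow modes:

        # Empirical critical diameter for a DLD array (Davis, 2006): particles larger
        # than Dc displace laterally; smaller ones follow the flow. Inputs assumed.
        def critical_diameter_um(gap_um, row_shift_fraction):
            return 1.4 * gap_um * row_shift_fraction ** 0.48

        print(critical_diameter_um(gap_um=10.0, row_shift_fraction=0.1))  # ~4.6 um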

  16. Synchronization of linearly coupled networks of deterministic ratchets

    International Nuclear Information System (INIS)

    Lu Pingli; Yang Ying; Huang Lin

    2008-01-01

    This Letter focuses on synchronization in a class of dynamical complex networks in which each node is a deterministic ratchet. By virtue of techniques derived from pendulum-like nonlinear analytic theory and the Kalman-Yakubovich-Popov (KYP) lemma, simple linear matrix inequality (LMI) formulations are established to guarantee stable synchronization of such networks. An interesting conclusion is reached: the stability of synchronization in the coupled whole N-dimensional networks can be converted into that of the simplest 2-dimensional space

  17. Outcomes in cases of lumbar degenerative spondylolisthesis more than 5 years after treatment with minimally invasive decompression: examination of pre- and postoperative slippage, intervertebral disc changes, and clinical results.

    Science.gov (United States)

    Mori, Gen; Mikami, Yasuo; Arai, Yuji; Ikeda, Takumi; Nagae, Masateru; Tonomura, Hitoshi; Takatori, Ryota; Sawada, Koshiro; Fujiwara, Hiroyoshi; Kubo, Toshikazu

    2016-03-01

    There are reports that fusion is the standard treatment of choice for cases of lumbar degenerative spondylolisthesis (LDS) associated with lumbar spinal canal stenosis with a large degree of slippage. The reasons why, however, have not been clarified. On the other hand, it is known that the progress of slippage decreases and restabilization occurs over the natural course of LDS. Therefore, if minimally invasive decompression could be performed, there would be little possibility of it influencing the natural course of LDS, so it would not be necessary to include preoperative percentage slip in the criteria for the selection of fusion. This study examined the course of LDS cases more than 5 years after treatment with minimally invasive decompression to determine whether pre- and postoperative slippage and disc changes influence the clinical results. A total of 51 intervertebral segments in 51 cases with the chief complaint of radicular or cauda equina symptoms due to lumbar spinal canal stenosis were examined after prospective treatment with minimally invasive decompression for LDS. The mean age of the patients at the time of surgery was 66.7 years and the mean follow-up period was 7 years 4 months. Minimally invasive decompression was performed regardless of the degree of low-back pain or percentage slip. The outcome variables were clinical results and changes in imaging findings. Over the follow-up period, postoperative percentage slip increased and disc height decreased, but the Japanese Orthopaedic Association score improved. Regardless of the preoperative percentage slip, disc height, or degree of intervertebral disc degeneration or segmental instability, the clinical results were favorable. In the high preoperative percentage slip group, low disc height group, and progressive disc degeneration group, there was little postoperative progress of slippage. In the group with a postoperative slippage increase of more than 5%, slippage increased significantly at

  18. Simulation of dose deposition in stereotactic synchrotron radiation therapy: a fast approach combining Monte Carlo and deterministic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Smekens, F; Freud, N; Letang, J M; Babot, D [CNDRI (Nondestructive Testing using Ionizing Radiations) Laboratory, INSA-Lyon, 69621 Villeurbanne Cedex (France); Adam, J-F; Elleaume, H; Esteve, F [INSERM U-836, Equipe 6 ' Rayonnement Synchrotron et Recherche Medicale' , Institut des Neurosciences de Grenoble (France); Ferrero, C; Bravin, A [European Synchrotron Radiation Facility, Grenoble (France)], E-mail: francois.smekens@insa-lyon.fr

    2009-08-07

    A hybrid approach, combining deterministic and Monte Carlo (MC) calculations, is proposed to compute the distribution of dose deposited during stereotactic synchrotron radiation therapy treatment. The proposed approach divides the computation into two parts: (i) the dose deposited by primary radiation (coming directly from the incident x-ray beam) is calculated in a deterministic way using ray casting techniques and energy-absorption coefficient tables and (ii) the dose deposited by secondary radiation (Rayleigh and Compton scattering, fluorescence) is computed using a hybrid algorithm combining MC and deterministic calculations. In the MC part, a small number of particle histories are simulated. Every time a scattering or fluorescence event takes place, a splitting mechanism is applied, so that multiple secondary photons are generated with a reduced weight. The secondary events are further processed in a deterministic way, using ray casting techniques. The whole simulation, carried out within the framework of the Monte Carlo code Geant4, is shown to converge towards the same results as the full MC simulation. The speed of convergence is found to depend notably on the splitting multiplicity, which can easily be optimized. To assess the performance of the proposed algorithm, we compare it to state-of-the-art MC simulations, accelerated by the track length estimator technique (TLE), considering a clinically realistic test case. It is found that the hybrid approach is significantly faster than the MC/TLE method. The gain in speed in a test case was about 25 for a constant precision. Therefore, this method appears to be suitable for treatment planning applications.
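
    The splitting mechanism can be caricatured in a few lines (a toy model with invented numbers, not the Geant4 implementation): each scatter spawns m secondaries carrying weight w/m, so the total statistical weight is conserved while the variance of the secondary-dose estimate drops:

        import random

        rng = random.Random(9)

        def track(weight, depth, m=4, p_scatter=0.3):
            """Deposit locally, then with probability p_scatter split into m
            secondaries of weight w/m each -- total weight is conserved."""
            dose = weight * 0.7                # toy local energy deposit
            if depth == 0 or rng.random() >= p_scatter:
                return dose
            return dose + sum(track(weight / m, depth - 1, m, p_scatter)
                              for _ in range(m))

        print(sum(track(1.0, depth=3) for _ in range(1000)))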

  19. Minimal sequential Hausdorff spaces

    Directory of Open Access Journals (Sweden)

    Bhamini M. P. Nayar

    2004-01-01

    A sequential space (X, T) is called minimal sequential if no sequential topology on X is strictly weaker than T. This paper begins the study of minimal sequential Hausdorff spaces. Characterizations of minimal sequential Hausdorff spaces are obtained using filter bases, sequences, and functions satisfying certain graph conditions. Relationships between this class of spaces and other classes of spaces, for example, minimal Hausdorff spaces, countably compact spaces, H-closed spaces, SQ-closed spaces, and subspaces of minimal sequential spaces, are investigated. While the property of being sequential is not (in general) preserved by products, some information is provided on the question of when the product of minimal sequential spaces is minimal sequential.

  20. A novel minimally invasive percutaneous facet augmentation device for the treatment of lumbar radiculopathy and axial back pain: technical description, surgical technique and case presentations

    OpenAIRE

    Khoo, Larry T.; Chen, Nan Fu; Armin, Sean; Stiner, Eric; Dipp, Juan; Flores, Ricardo; Palmer, Sylvain

    2009-01-01

    OBJECTIVE: to describe a new posterior minimally invasive method of facet stabilization for treatment of the degenerating lumbar motion segment. The biomechanics of this Percudyn (Interventional Spine; Irvine, CA) system are distinct from those of other interspinous dynamic stabilization systems, as it acts bilaterally directly within the middle column of the spine. Based on biomechanical evaluation, the paired prosthesis supports, cushions, and reinforces the facet complexes by limiting both ex...

  1. Maximizing overall liking results in a superior product to minimizing deviations from ideal ratings: an optimization case study with coffee-flavored milk.

    Science.gov (United States)

    Li, Bangde; Hayes, John E; Ziegler, Gregory R

    2015-06-01

    In just-about-right (JAR) scaling and ideal scaling, attribute delta (i.e., "Too Little" or "Too Much") reflects a subject's dissatisfaction level for an attribute relative to their hypothetical ideal. Dissatisfaction (attribute delta) is a different construct from consumer acceptability, operationalized as liking. Therefore, we hypothesized minimizing dissatisfaction and maximizing liking would yield different optimal formulations. The objective of this research was to compare product optimization strategies, i.e. maximizing liking vis-à-vis minimizing dissatisfaction. Coffee-flavored dairy beverages (n = 20) were formulated using a fractional mixture design that constrained the proportions of coffee extract, milk, sucrose, and water. Participants (n = 388) were randomly assigned to one of three research conditions, where they evaluated 4 of the 20 samples using an incomplete block design. Samples were rated for overall liking and for intensity of the attributes sweetness, milk flavor, thickness and coffee flavor. Where appropriate, measures of overall product quality (Ideal_Delta and JAR_Delta) were calculated as the sum of the absolute values of the four attribute deltas. Optimal formulations were estimated by: a) maximizing liking; b) minimizing Ideal_Delta; or c) minimizing JAR_Delta. A validation study was conducted to evaluate product optimization models. Participants indicated a preference for a coffee-flavored dairy beverage with more coffee extract and less milk and sucrose in the dissatisfaction model compared to the formula obtained by maximizing liking. That is, when liking was optimized, participants generally liked a weaker, milkier and sweeter coffee-flavored dairy beverage. Predicted liking scores were validated in a subsequent experiment, and the optimal product formulated to maximize liking was significantly preferred to that formulated to minimize dissatisfaction by a paired preference test. These findings are consistent with the view

  2. A deterministic solution of the first order linear Boltzmann transport equation in the presence of external magnetic fields.

    Science.gov (United States)

    St Aubin, J; Keyvanloo, A; Vassiliev, O; Fallone, B G

    2015-02-01

    Accurate radiotherapy dose calculation algorithms are essential to any successful radiotherapy program, considering the high level of dose conformity and modulation in many of today's treatment plans. As technology continues to progress, such as is the case with novel MRI-guided radiotherapy systems, the necessity for dose calculation algorithms to accurately predict delivered dose in increasingly challenging scenarios is vital. To this end, a novel deterministic solution has been developed to the first order linear Boltzmann transport equation which accurately calculates x-ray based radiotherapy doses in the presence of magnetic fields. The deterministic formalism discussed here with the inclusion of magnetic fields is outlined mathematically using a discrete ordinates angular discretization in an attempt to leverage existing deterministic codes. It is compared against the EGSnrc Monte Carlo code, utilizing the emf_macros addition which calculates the effects of electromagnetic fields. This comparison is performed in an inhomogeneous phantom that was designed to present a challenging calculation for deterministic calculations in 0, 0.6, and 3 T magnetic fields oriented parallel and perpendicular to the radiation beam. The accuracy of the formalism discussed here against Monte Carlo was evaluated with a gamma comparison using a standard 2%/2 mm and a more stringent 1%/1 mm criterion for a standard reference 10 × 10 cm² field as well as a smaller 2 × 2 cm² field. Greater than 99.8% (94.8%) of all points analyzed passed a 2%/2 mm (1%/1 mm) gamma criterion for all magnetic field strengths and orientations investigated. All dosimetric changes resulting from the inclusion of magnetic fields were accurately calculated using the deterministic formalism. However, despite the algorithm's high degree of accuracy, it is noticed that this formalism was not unconditionally stable using a discrete ordinate angular discretization. The feasibility of including magnetic field

  3. Minimizing Costs Can Be Costly

    Directory of Open Access Journals (Sweden)

    Rasmus Rasmussen

    2010-01-01

    A quite common practice, even in the academic literature, is to simplify a decision problem and model it as a cost-minimizing problem. In fact, some types of models have been standardized to minimization problems, like Quadratic Assignment Problems (QAPs), where a maximization formulation would be treated as a "generalized" QAP and not solvable by many of the specially designed software packages for QAP. Ignoring revenues when modeling a decision problem works only if costs can be separated from the decisions influencing revenues. More often than we think this is not the case, and minimizing costs will not lead to maximized profit. This will be demonstrated using spreadsheets to solve a small example. The example is also used to demonstrate other pitfalls in network models: the inability to generally balance the problem or allocate costs in advance, and the tendency to anticipate a specific type of solution and thereby make constraints too limiting when formulating the problem.
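
    The core point fits in a few lines (the numbers are invented; the paper's spreadsheet example is richer): when options differ in both cost and revenue, the cost-minimizing and profit-maximizing choices can disagree:

        # Two fulfillment options for the same order; capacity forces a choice.
        options = {"cheap_slow": {"cost": 40.0, "revenue": 90.0},
                   "fast_premium": {"cost": 70.0, "revenue": 140.0}}

        min_cost = min(options, key=lambda o: options[o]["cost"])
        max_profit = max(options, key=lambda o: options[o]["revenue"] - options[o]["cost"])
        print(min_cost, max_profit)   # cheap_slow vs fast_premium: objectives disagree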

  4. Strongly Deterministic Population Dynamics in Closed Microbial Communities

    Directory of Open Access Journals (Sweden)

    Zak Frentz

    2015-10-01

    Biological systems are influenced by random processes at all scales, including molecular, demographic, and behavioral fluctuations, as well as by their interactions with a fluctuating environment. We previously established microbial closed ecosystems (CES) as model systems for studying the role of random events and the emergent statistical laws governing population dynamics. Here, we present long-term measurements of population dynamics using replicate digital holographic microscopes that maintain CES under precisely controlled external conditions while automatically measuring abundances of three microbial species via single-cell imaging. With this system, we measure spatiotemporal population dynamics in more than 60 replicate CES over periods of months. In contrast to previous studies, we observe strongly deterministic population dynamics in replicate systems. Furthermore, we show that previously discovered statistical structure in abundance fluctuations across replicate CES is driven by variation in external conditions, such as illumination. In particular, we confirm the existence of stable ecomodes governing the correlations in population abundances of three species. The observation of strongly deterministic dynamics, together with stable structure of correlations in response to external perturbations, points towards a possibility of simple macroscopic laws governing microbial systems despite numerous stochastic events present on microscopic levels.

  5. Deterministic extinction effect of parasites on host populations.

    Science.gov (United States)

    Hwang, Tzy-Wei; Kuang, Yang

    2003-01-01

    Experimental studies have shown that parasites can reduce host density and even drive host populations to extinction. Conventional mathematical models for parasite-host interactions, while able to address the host density reduction scenario, fail to explain such deterministic extinction phenomena. In order to understand parasite-induced host extinction, Ebert et al. (2000) formulated a plausible but ad hoc epidemiological microparasite model and its stochastic variation. The deterministic model, which resembles a simple SI type model, predicts the existence of a globally attractive positive steady state. Their simulation of the stochastic model indicates that extinction of the host is a likely outcome in some parameter regions. A careful examination of their ad hoc model suggests an alternative and plausible model assumption. With this modification, we show that the revised parasite-host model can exhibit the observed parasite-induced host extinction. This finding strengthens and complements that of Ebert et al. (2000), since all continuous models are likely to break down when all population densities are small. This extinction dynamics resembles that of ratio-dependent predator-prey models. We report here a complete global study of the revised parasite-host model. Biological implications and limitations of our findings are also presented.

  6. Forced Translocation of Polymer through Nanopore: Deterministic Model and Simulations

    Science.gov (United States)

    Wang, Yanqian; Panyukov, Sergey; Liao, Qi; Rubinstein, Michael

    2012-02-01

    We propose a new theoretical model of forced translocation of a polymer chain through a nanopore. We assume that DNA translocation at high fields proceeds too fast for the chain to relax, and thus the chain unravels loop by loop in an almost deterministic way. So the distribution of translocation times of a given monomer is controlled by the initial conformation of the chain (the distribution of its loops). Our model predicts the translocation time of each monomer as an explicit function of the initial polymer conformation. We refer to this concept as "fingerprinting". The width of the translocation time distribution is determined by the loop distribution in the initial conformation as well as by the thermal fluctuations of the polymer chain during the translocation process. We show that the conformational broadening of translocation times of the m-th monomer, δt_m ∼ m^1.5, is stronger than the thermal broadening, δt_m ∼ m^1.25. The predictions of our deterministic model were verified by extensive molecular dynamics simulations.

  7. A survey of deterministic solvers for rarefied flows (Invited)

    Science.gov (United States)

    Mieussens, Luc

    2014-12-01

    Numerical simulations of rarefied gas flows are generally made with DSMC methods. Up to a recent period, deterministic numerical methods based on a discretization of the Boltzmann equation were restricted to simple problems (1D, linearized flows, or simple geometries, for instance). In the last decade, several deterministic solvers have been developed in different teams to tackle more complex problems like 2D and 3D flows. Some of them are based on the full Boltzmann equation. Solving this equation numerically is still very challenging, and 3D solvers are still restricted to monoatomic gases, even if recent works have proved it was possible to simulate simple flows for polyatomic gases. Other solvers are based on simpler BGK-like models: they allow for much more intensive simulations on 3D flows for realistic geometries, but treating complex gases requires extended BGK models that are still under development. In this paper, we discuss the main features of these existing solvers, and we focus on their strengths and inefficiencies. We also review some recent results that show how these solvers can be improved:
    - higher accuracy (higher-order finite volume methods, discontinuous Galerkin approaches);
    - lower memory and CPU costs with special velocity discretizations (adaptive grids, spectral methods);
    - multi-scale simulations by using hybrid and asymptotic-preserving schemes;
    - efficient implementation on high-performance computers (parallel computing, hybrid parallelization).
    Finally, we propose some perspectives to make these solvers more efficient and more popular.

  8. Deterministic direct reprogramming of somatic cells to pluripotency.

    Science.gov (United States)

    Rais, Yoach; Zviran, Asaf; Geula, Shay; Gafni, Ohad; Chomsky, Elad; Viukov, Sergey; Mansour, Abed AlFatah; Caspi, Inbal; Krupalnik, Vladislav; Zerbib, Mirie; Maza, Itay; Mor, Nofar; Baran, Dror; Weinberger, Leehee; Jaitin, Diego A; Lara-Astiaso, David; Blecher-Gonen, Ronnie; Shipony, Zohar; Mukamel, Zohar; Hagai, Tzachi; Gilad, Shlomit; Amann-Zalcenstein, Daniela; Tanay, Amos; Amit, Ido; Novershtern, Noa; Hanna, Jacob H

    2013-10-03

    Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells from successfully and synchronously reprogramming remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency promoting conditions, results in deterministic and synchronized iPS cell reprogramming (near 100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex that potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for the dissection of molecular dynamics leading to the establishment of pluripotency with unprecedented flexibility and resolution.

  9. Bayesian analysis of deterministic and stochastic prisoner's dilemma games

    Directory of Open Access Journals (Sweden)

    Howard Kunreuther

    2009-08-01

    Full Text Available This paper compares the behavior of individuals playing a classic two-person deterministic prisoner's dilemma (PD) game with choice data obtained from repeated interdependent security prisoner's dilemma games with varying probabilities of loss and the ability to learn (or not) about the actions of one's counterpart, an area of recent interest in experimental economics. This novel data set, from a series of controlled laboratory experiments, is analyzed using Bayesian hierarchical methods, the first application of such methods in this research domain. We find that individuals are much more likely to be cooperative when payoffs are deterministic than when the outcomes are probabilistic. A key factor explaining this difference is that subjects in a stochastic PD game respond not just to what their counterparts did but also to whether or not they suffered a loss. These findings are interpreted in the context of behavioral theories of commitment, altruism and reciprocity. The work provides a linkage between Bayesian statistics, experimental economics, and consumer psychology.

  10. Spent Fuel Pool Dose Rate Calculations Using Point Kernel and Hybrid Deterministic-Stochastic Shielding Methods

    International Nuclear Information System (INIS)

    Matijevic, M.; Grgic, D.; Jecmenica, R.

    2016-01-01

    This paper presents a comparison of dose rates for the Krsko Power Plant simplified Spent Fuel Pool (SFP) computed using different shielding methodologies. The analysis was performed to estimate limiting gamma dose rates on wall-mounted level instrumentation in case of a significant loss of cooling water. The SFP was represented with simple homogenized cylinders (point kernel and Monte Carlo (MC)) or cuboids (MC) using uranium, iron, water, and dry air as bulk region materials. The pool is divided into an old and a new section, where the old one has three additional subsections representing fuel assemblies (FAs) with different burnup/cooling times (60 days, 1 year and 5 years). The new section represents the FAs with a cooling time of 10 years. The time-dependent fuel assembly isotopic composition was calculated using the ORIGEN2 code applied to the depletion of one of the fuel assemblies present in the pool (AC-29). The source used in the Microshield calculation is based on the imported isotopic activities. The time-dependent photon spectra with total source intensity from Microshield multigroup point kernel calculations were then prepared for two hybrid deterministic-stochastic sequences. One is based on the SCALE/MAVRIC (Monaco and Denovo) methodology and the other uses the Monte Carlo code MCNP6.1.1b and the ADVANTG3.0.1 code. Even though this model is a fairly simple one, the layers of shielding materials are thick enough to pose a significant shielding problem for the MC method without the use of an effective variance reduction (VR) technique. For that purpose the ADVANTG code was used to generate VR parameters (SB cards in SDEF and the WWINP file) for the MCNP fixed-source calculation using continuous-energy transport. ADVANTG employs the deterministic forward-adjoint transport solver Denovo, which implements the CADIS/FW-CADIS methodology. Denovo implements a structured, Cartesian-grid SN solver based on the Koch-Baker-Alcouffe parallel transport sweep algorithm across x-y domain blocks. This was first

  11. Seismic hazard in Romania associated to Vrancea subcrustal source: Deterministic evaluation

    International Nuclear Information System (INIS)

    Radulian, M.; Mandrescu, N.; Vaccari, F.; Moldoveanu, C.L.; Panza, G.F.

    2002-09-01

    Our study presents an application of the deterministic approach to the particular case of Vrancea intermediate-depth earthquakes to show how efficient the numerical synthesis is in predicting realistic ground motion, and how some striking peculiarities of the observed intensity maps are properly reproduced. The deterministic approach proposed by Costa et al. (1993) is particularly useful to compute seismic hazard in Romania, where the most destructive effects are caused by the intermediate-depth earthquakes generated in the Vrancea region. Vrancea is unique among the seismic sources of the world because of its striking peculiarities: the extreme concentration of seismicity with a remarkable invariance of the foci distribution, the unusually high rate of strong shocks (an average frequency of 3 events with magnitude greater than 7 per century) inside an exceptionally narrow focal volume, the predominance of a reverse faulting mechanism with the T-axis almost vertical and the P-axis almost horizontal and the more efficient high-frequency radiation, especially in the case of large earthquakes, in comparison with shallow earthquakes of similar size. The seismic hazard is computed in terms of peak ground motion values characterizing the complete synthetic seismograms generated by the modal summation technique on a grid covering the Romanian territory. Two representative scenario earthquakes are considered in the computation, corresponding to the largest instrumentally recorded earthquakes, one located in the upper part of the slab (Mw = 7.4; h = 90 km), the other located in the lower part of the slab (Mw = 7.7; h = 150 km). The seismic hazard distribution, expressed in terms of Design Ground Acceleration values, is very sensitive to magnitude, focal depth and focal mechanism. For a variation of 0.3 magnitude units the hazard level generally increases by a factor of two. The increase of the focal depth leads to stronger radiation at large epicentral distance (100 - 200

  12. Minimal TUD spaces

    Directory of Open Access Journals (Sweden)

    A.E. McCluskey

    2002-04-01

    Full Text Available A topological space is TUD if the derived set of each point is the union of disjoint closed sets. We show that there is a minimal TUD space which is not just the Alexandroff topology on a linear order. Indeed the structure of the underlying partial order of a minimal TUD space can be quite complex. This contrasts sharply with the known results on minimality for weak separation axioms.

  13. Comparative analysis among deterministic and stochastic collision damage models for oil tanker and bulk carrier reliability

    Directory of Open Access Journals (Sweden)

    A. Campanile

    2018-01-01

    Full Text Available The influence of collision damage models on oil tanker and bulk carrier reliability is investigated by comparing the IACS deterministic model with GOALDS/IMO database statistics for collision events, which substantiate the probabilistic model. Statistical properties of hull girder residual strength are determined by Monte Carlo simulation, based on random generation of damage dimensions and a modified form of the incremental-iterative method, to account for neutral axis rotation and equilibrium of the horizontal bending moment due to cross-section asymmetry after collision events. Reliability analysis is performed to investigate the influence of collision penetration depth and height statistical properties on hull girder sagging/hogging failure probabilities. Besides, the influence of corrosion on hull girder residual strength and reliability is also discussed, focussing on gross, hull girder net and local net scantlings, respectively. The ISSC double hull oil tanker and single side bulk carrier, assumed as test cases in the ISSC 2012 report, are taken as reference ships.

  14. A direct method for string to deterministic finite automaton conversion for fast text searching

    Energy Technology Data Exchange (ETDEWEB)

    Berlin, G.J.

    1991-12-31

    This paper describes a simple technique for generating a minimum-state deterministic finite automaton (DFA) directly from a restricted set of regular expressions. The resulting DFA is used for string searches that do not alter the target text and require only a single pass through the input. The technique is used for very fast, mixed or same case, single or multiple string searches. The technique is also capable of directly converting multiple strings with wild-card character specifiers by constructing parallel DFAs. Construction of the automaton is performed in a time proportional to the length of the regular expression. Algorithms are given for construction of the automata and recognizers. Although the regular expression to DFA parser does not support all classes of regular expressions, it supports a sufficient subset to make it useful for the most commonly encountered text searching functions.
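
    For readers unfamiliar with the underlying idea, the Python sketch below shows a minimal single-pattern variant of such a construction: a KMP-style DFA whose states encode how much of the pattern has been matched, scanned over the text in a single pass. It only illustrates the principle, not the paper's restricted-regex algorithm, and the function names are ours.

```python
# Minimal sketch: build a deterministic automaton for one pattern and find
# all (possibly overlapping) matches in a single pass over the text.
def build_dfa(pattern, alphabet):
    """KMP-style DFA with states 0..m; reaching state m signals a match."""
    m = len(pattern)  # assumes a non-empty pattern
    dfa = [{c: 0 for c in alphabet} for _ in range(m + 1)]
    dfa[0][pattern[0]] = 1
    x = 0  # restart state: tracks the longest proper border seen so far
    for j in range(1, m):
        for c in alphabet:
            dfa[j][c] = dfa[x][c]      # mismatch: fall back like a failure link
        dfa[j][pattern[j]] = j + 1     # match: advance
        x = dfa[x][pattern[j]]
    for c in alphabet:                 # keep scanning after a full match
        dfa[m][c] = dfa[x][c]
    return dfa

def find_all(text, pattern):
    dfa = build_dfa(pattern, set(text) | set(pattern))
    state, hits = 0, []
    for i, c in enumerate(text):       # single pass, text left unaltered
        state = dfa[state][c]
        if state == len(pattern):
            hits.append(i - len(pattern) + 1)
    return hits

print(find_all("abababa", "aba"))  # [0, 2, 4]
```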

  16. Hydraulic tomography of discrete networks of conduits and fractures in a karstic aquifer by using a deterministic inversion algorithm

    Science.gov (United States)

    Fischer, P.; Jardani, A.; Lecoq, N.

    2018-02-01

    In this paper, we present a novel inverse modeling method called Discrete Network Deterministic Inversion (DNDI) for mapping the geometry and properties of the discrete network of conduits and fractures in karstified aquifers. The DNDI algorithm is based on a coupled discrete-continuum concept to simulate water flows numerically in a model, and a deterministic optimization algorithm to invert a set of observed piezometric data recorded during multiple pumping tests. In this method, the model is partitioned into subspaces piloted by a set of parameters (matrix transmissivity, and geometry and equivalent transmissivity of the conduits) that are considered unknown. In this way, the deterministic optimization process can iteratively correct the geometry of the network and the values of the properties, until it converges to a global network geometry in a solution model able to reproduce the set of data. An uncertainty analysis of this result can be performed from the maps of posterior uncertainties on the network geometry or on the property values. This method has been successfully tested on three different theoretical and simplified study cases with hydraulic response data generated from hypothetical karstic models of increasing complexity in the network geometry and the matrix heterogeneity.

  17. A joint stochastic-deterministic approach for long-term and short-term modelling of monthly flow rates

    Science.gov (United States)

    Stojković, Milan; Kostić, Srđan; Plavšić, Jasna; Prohaska, Stevan

    2017-01-01

    The authors present a detailed procedure for modelling mean monthly flow time series using records of the Great Morava River (Serbia). The proposed procedure overcomes a major challenge of other available methods by disaggregating the time series in order to capture the main properties of the hydrologic process in both the long run and the short run. The main assumption of the conducted research is that a time series of monthly flow rates represents a stochastic process comprised of deterministic, stochastic and random components, the former of which can be further decomposed into a composite trend and two periodic components (short-term or seasonal periodicity and long-term or multi-annual periodicity). In the present paper, the deterministic component of a monthly flow time series is assessed by spectral analysis, whereas its stochastic component is modelled using cross-correlation transfer functions, artificial neural networks and polynomial regression. The results suggest that the deterministic component can be expressed solely as a function of time, whereas the stochastic component changes as a nonlinear function of climatic factors (rainfall and temperature). For the calibration period, the results of the analysis indicate a lower value of the Kling-Gupta Efficiency in the case of transfer functions (0.736), whereas artificial neural networks and polynomial regression suggest a significantly better match between the observed and simulated values, 0.841 and 0.891, respectively. It seems that transfer functions fail to capture high monthly flow rates, whereas the model based on polynomial regression reproduces high monthly flows much better because it is able to successfully capture a highly nonlinear relationship between the inputs and the output. The proposed methodology that uses a combination of artificial neural networks, spectral analysis and polynomial regression for deterministic and stochastic components can be applied to forecast monthly or seasonal flow rates.
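
    As a toy illustration of the decomposition step, the Python sketch below fits a deterministic component (trend plus an annual harmonic, found by least squares) to a synthetic monthly series and leaves a stochastic residual that could then be related to climatic covariates. The data and the single harmonic are assumptions for illustration, not the Great Morava analysis itself.

```python
# Sketch: split a synthetic monthly flow series into a deterministic part
# (linear trend + annual harmonic) and a stochastic residual.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(240.0)                        # 20 years of monthly steps
flow = (50 + 0.02 * t + 15 * np.sin(2 * np.pi * t / 12 + 0.7)
        + rng.normal(0.0, 4.0, t.size))     # synthetic "observed" flows

# Design matrix: intercept, trend, and a sine/cosine pair at the annual period
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12)])
coef, *_ = np.linalg.lstsq(X, flow, rcond=None)
deterministic = X @ coef
stochastic = flow - deterministic           # input for the ANN/regression stage

print("variance explained by the deterministic part:",
      round(1 - stochastic.var() / flow.var(), 3))
```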

  18. Doses from aquatic pathways in CSA-N288.1: deterministic and stochastic predictions compared

    International Nuclear Information System (INIS)

    Chouhan, S.L.; Davis, P.

    2002-04-01

    The conservatism and uncertainty in the Canadian Standards Association (CSA) model for calculating derived release limits (DRLs) for aquatic emissions of radionuclides from nuclear facilities was investigated. The model was run deterministically using the recommended default values for its parameters, and its predictions were compared with the distributed doses obtained by running the model stochastically. Probability density functions (PDFs) for the model parameters for the stochastic runs were constructed using data reported in the literature and results from experimental work done by AECL. The default values recommended for the CSA model for some parameters were found to be lower than the central values of the PDFs in about half of the cases. Doses (ingestion, groundshine and immersion) calculated as the median of 400 stochastic runs were higher than the deterministic doses predicted using the CSA default values of the parameters for more than half (85 out of the 163) of the cases. Thus, the CSA model is not conservative for calculating DRLs for aquatic radionuclide emissions, as it was intended to be. The output of the stochastic runs was used to determine the uncertainty in the CSA model predictions. The uncertainty in the total dose was high, with the 95% confidence interval exceeding an order of magnitude for all radionuclides. A sensitivity study revealed that total ingestion doses to adults predicted by the CSA model are sensitive primarily to water intake rates, bioaccumulation factors for fish and marine biota, dietary intakes of fish and marine biota, the fraction of consumed food arising from contaminated sources, the irrigation rate, occupancy factors and the sediment solid/liquid distribution coefficient. To improve DRL models, further research into aquatic exposure pathways should concentrate on reducing the uncertainty in these parameters. The PDFs given here can be used by other modellers to test and improve their models and to ensure that DRLs

  19. Genera of minimal balance surfaces

    International Nuclear Information System (INIS)

    Fischer, W.; Koch, E.

    1989-01-01

    The genus of a three-periodic intersection-free surface in R^3 refers to a primitive unit cell of its symmetry group. Two procedures for the calculation of the genus are described: (1) by means of labyrinth graphs; (2) via the Euler characteristic derived from a tiling on the surface. In both cases new formulae based on crystallographic concepts are given. For all known minimal balance surfaces the genera and the labyrinth graphs are tabulated. (orig.)

  20. Minimizing Mutual Coupling

    DEFF Research Database (Denmark)

    2010-01-01

    Disclosed herein are techniques, systems, and methods relating to minimizing mutual coupling between a first antenna and a second antenna.

  1. Deterministic learning enhanced neural network control of unmanned helicopter

    Directory of Open Access Journals (Sweden)

    Yiming Jiang

    2016-11-01

    Full Text Available In this article, a neural network-based tracking controller is developed for an unmanned helicopter system with guaranteed global stability in the presence of uncertain system dynamics. Due to the coupling and modeling uncertainties of helicopter systems, neural network approximation techniques are employed to compensate for the unknown dynamics of each subsystem. In order to extend the semiglobal stability achieved by conventional neural control to global stability, a switching mechanism is also integrated into the control design, such that the resulting neural controller is always valid without any concern about either initial conditions or the range of state variables. In addition, deterministic learning is applied to the neural network learning control, such that the adaptive neural networks are able to store the learned knowledge, which can be reused to construct a neural network controller with improved control performance. Simulation studies are carried out on a helicopter model to illustrate the effectiveness of the proposed control design.

  2. HSimulator: Hybrid Stochastic/Deterministic Simulation of Biochemical Reaction Networks

    Directory of Open Access Journals (Sweden)

    Luca Marchetti

    2017-01-01

    Full Text Available HSimulator is a multithreaded simulator for mass-action biochemical reaction systems placed in a well-mixed environment. HSimulator provides optimized implementations of a set of widespread state-of-the-art stochastic, deterministic, and hybrid simulation strategies, including the first publicly available implementation of the Hybrid Rejection-based Stochastic Simulation Algorithm (HRSSA). HRSSA, the fastest hybrid algorithm to date, allows for an efficient simulation of the models while ensuring the exact simulation of a subset of the reaction network modeling slow reactions. Benchmarks show that HSimulator is often considerably faster than the other considered simulators. The software, running on Java v6.0 or higher, offers a simulation GUI for modeling and visually exploring biological processes and a Javadoc-documented Java library to support the development of custom applications. HSimulator is released under the COSBI Shared Source license agreement (COSBI-SSLA).
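
    As background to what the stochastic half of such a hybrid scheme computes, here is a minimal Python sketch of Gillespie's exact direct method for a single assumed reaction A + B -> AB. HRSSA itself partitions the network and reserves this kind of exact simulation for the slow reactions; the sketch does not attempt to reproduce that partitioning.

```python
# Exact stochastic simulation (Gillespie direct method) for A + B -> AB.
import math
import random

def gillespie(x_a, x_b, x_ab, c, t_end, seed=1):
    random.seed(seed)
    t = 0.0
    while True:
        a = c * x_a * x_b                            # mass-action propensity
        if a == 0.0:
            break                                    # no reaction can fire
        tau = -math.log(1.0 - random.random()) / a   # exponential waiting time
        if t + tau > t_end:
            break
        t += tau
        x_a, x_b, x_ab = x_a - 1, x_b - 1, x_ab + 1
        # with several reactions, the next one is drawn with probability
        # proportional to its propensity before the state is updated
    return t, (x_a, x_b, x_ab)

print(gillespie(100, 80, 0, c=0.001, t_end=50.0))
```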

  3. Deterministically entangling multiple remote quantum memories inside an optical cavity

    Science.gov (United States)

    Yan, Zhihui; Liu, Yanhong; Yan, Jieli; Jia, Xiaojun

    2018-01-01

    Quantum memory for the nonclassical state of light and entanglement among multiple remote quantum nodes hold promise for a large-scale quantum network; however, continuous-variable (CV) memory efficiency and the achievable degree of entanglement are limited by imperfect implementations. Here we propose a scheme to deterministically entangle multiple distant atomic ensembles based on CV cavity-enhanced quantum memory. The memory efficiency can be improved with the help of cavity-enhanced electromagnetically induced transparency dynamics. A high degree of entanglement among multiple atomic ensembles can be obtained by mapping the quantum state from multiple entangled optical modes into a collection of atomic spin waves inside optical cavities. Besides being of interest in terms of unconditional entanglement among multiple macroscopic objects, our scheme paves the way towards the practical application of quantum networks.

  4. Molecular dynamics with deterministic and stochastic numerical methods

    CERN Document Server

    Leimkuhler, Ben

    2015-01-01

    This book describes the mathematical underpinnings of algorithms used for molecular dynamics simulation, including both deterministic and stochastic numerical methods. Molecular dynamics is one of the most versatile and powerful methods of modern computational science and engineering and is used widely in chemistry, physics, materials science and biology. Understanding the foundations of numerical methods means knowing how to select the best one for a given problem (from the wide range of techniques on offer) and how to create new, efficient methods to address particular challenges as they arise in complex applications.  Aimed at a broad audience, this book presents the basic theory of Hamiltonian mechanics and stochastic differential equations, as well as topics including symplectic numerical methods, the handling of constraints and rigid bodies, the efficient treatment of Langevin dynamics, thermostats to control the molecular ensemble, multiple time-stepping, and the dissipative particle dynamics method...

  5. Properties of the Statistical Complexity Functional and Partially Deterministic HMMs

    Directory of Open Access Journals (Sweden)

    Wolfgang Löhr

    2009-08-01

    Full Text Available Statistical complexity is a measure of complexity of discrete-time stationary stochastic processes, which has many applications. We investigate its more abstract properties as a non-linear function on the space of processes and show its close relation to Knight's prediction process. We prove lower semi-continuity, concavity, and a formula for the ergodic decomposition of statistical complexity. On the way, we show that the discrete version of the prediction process has a continuous Markov transition. We also prove that, given the past output of a partially deterministic hidden Markov model (HMM), the uncertainty of the internal state is constant over time and knowledge of the internal state gives no additional information on the future output. Using this fact, we show that the causal state distribution is the unique stationary representation on prediction space that may have finite entropy.

  6. Integrated deterministic and probabilistic safety assessment: Concepts, challenges, research directions

    International Nuclear Information System (INIS)

    Zio, Enrico

    2014-01-01

    Highlights: • IDPSA contributes to robust risk-informed decision making in nuclear safety. • IDPSA considers time-dependent interactions among component failures and system process. • Also, IDPSA considers time-dependent interactions among control and operator actions. • Computational efficiency by advanced Monte Carlo and meta-modelling simulations. • Efficient post-processing of IDPSA output by clustering and data mining. - Abstract: Integrated deterministic and probabilistic safety assessment (IDPSA) is conceived as a way to analyze the evolution of accident scenarios in complex dynamic systems, like nuclear, aerospace and process ones, accounting for the mutual interactions between the failure and recovery of system components, the evolving physical processes, the control and operator actions, the software and firmware. In spite of the potential offered by IDPSA, several challenges need to be effectively addressed for its development and practical deployment. In this paper, we give an overview of these and discuss the related implications in terms of research perspectives

  8. Sensitivity analysis in a Lassa fever deterministic mathematical model

    Science.gov (United States)

    Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman

    2015-05-01

    Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The results of the sensitivity analysis show that the most sensitive parameter is human immigration, followed by the human recovery rate, and then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.

  9. A Deterministic Entropy Based on the Instantaneous Phase Space Volume

    Science.gov (United States)

    Diebner, Hans H.; Rössler, Otto E.

    1998-02-01

    A deterministic entropic measure is derived for the time evolution of Newtonian N-particle systems based on the volume of the instantaneously occupied phase space (IOPS). This measure is found as a natural extension of Boltzmann's entropy. The instantaneous arrangement of the particles is exploited in the form of spatial correlations. The new entropy is a bridge between the time-dependent Boltzmann entropy, formulated on the basis of densities in the one-particle phase space, and the static Gibbs entropy which uses densities in the full phase space. We apply the new concept in a molecular dynamics simulation (MDS) using an exactly time reversible "discrete Newtonian equation of motion" recently derived from the fundamental principle of least action in discretized space-time. The simulation therefore is consistent with micro-time-reversibility. Entropy becomes an exact momentary observable in both time directions in fulfillment of a dream of Boltzmann.

  10. Distributed Design of a Central Service to Ensure Deterministic Behavior

    Directory of Open Access Journals (Sweden)

    Imran Ali Jokhio

    2012-10-01

    Full Text Available A central authentication service for the EPC (Electronic Product Code) system architecture was proposed in our previous work. A challenge that always arises for a central service is how to guarantee a certain processing delay as the volume of data grows. The growing data in the EPC system architecture are tag data. Therefore, authenticating an increasing number of tags in the central authentication service with a deterministic response time is investigated, and a distributed authentication service is designed in a layered approach. A distributed design of tag-searching services in the SOA (Service Oriented Architecture) style is also presented. Using the SOA architectural style, a self-adaptive authentication service over the Cloud is also proposed for the central authentication service, which may also be extended to other applications.

  11. A deterministic model of nettle caterpillar life cycle

    Science.gov (United States)

    Syukriyah, Y.; Nuraini, N.; Handayani, D.

    2018-03-01

    Palm oil is a flagship product of the plantation sector in Indonesia, and palm oil productivity has the potential to increase every year. However, actual productivity remains below this potential. Pests and diseases are the main factors that can reduce production levels by up to 40%. The presence of pests in plants can be caused by various factors, so measures for controlling pest attacks should be prepared as early as possible. Caterpillars are the main pests in oil palm. The nettle caterpillars are leaf eaters that can significantly decrease palm productivity. We construct a deterministic model that describes the life cycle of the caterpillar and its mitigation by means of a caterpillar predator. The equilibrium points of the model are analyzed. Numerical simulations are constructed to give a representation of how the predator, as a natural enemy, affects the nettle caterpillar life cycle.

  12. The degree of irreversibility in deterministic finite automata

    DEFF Research Database (Denmark)

    Axelsen, Holger Bock; Holzer, Markus; Kutrib, Martin

    2016-01-01

    for nondeterministic finite state automata (NFA) is PSPACE-complete. The recent DFA method essentially works by minimizing the DFA and inspecting it for a forbidden pattern. We here study the degree of irreversibility for a regular language, the minimal number of such forbidden patterns necessary in any DFA accepting...

  13. Deterministic calculations of radiation doses from brachytherapy seeds

    International Nuclear Information System (INIS)

    Reis, Sergio Carneiro dos; Vasconcelos, Vanderley de; Santos, Ana Maria Matildes dos

    2009-01-01

    Brachytherapy is used for treating certain types of cancer by inserting radioactive sources into tumours. CDTN/CNEN is developing brachytherapy seeds to be used mainly in prostate cancer treatment. Dose calculations play a very significant role in the characterization of the developed seeds. The current state of the art of computational dosimetry relies on Monte Carlo methods using, for instance, MCNP codes. However, deterministic calculations have some advantages, for example, the short computer time needed to find solutions. This paper presents software developed to calculate doses in a two-dimensional space surrounding the seed, using a deterministic algorithm. The analysed seeds consist of capsules similar to IMC6711 (OncoSeed), which are commercially available. The exposure rates and absorbed doses are computed using the Sievert integral and the Meisberger third-order polynomial, respectively. The software also allows isodose visualization on the surface plane. The user can choose between four different radionuclides (¹⁹²Ir, ¹⁹⁸Au, ¹³⁷Cs and ⁶⁰Co). The user also has to enter as input data: the exposure rate constant; the source activity; the active length of the source; the number of segments into which the source will be divided; the total source length; the source diameter; and the actual and effective source thickness. The computed results were benchmarked against results from the literature, and the developed software will be used to support the characterization process of the source being developed at CDTN. The software was implemented using Borland Delphi in a Windows environment and is an alternative to Monte Carlo based codes. (author)
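
    To make the dose-calculation recipe concrete, the Python sketch below combines an inverse-square point-source term with a Meisberger third-order polynomial for tissue attenuation and scatter. The exposure-rate constant and the polynomial coefficients are placeholders of roughly Ir-192-like magnitude, not validated data for any real seed, and the line-source Sievert integral used by the actual software is not reproduced here.

```python
# Illustrative point-source dose-rate estimate with a Meisberger polynomial.
def dose_rate(r_cm, activity_mci, gamma=4.69,
              coeffs=(1.0128, 5.02e-3, -1.18e-3, -2.01e-5)):  # placeholders
    A, B, C, D = coeffs
    tissue = A + B * r_cm + C * r_cm**2 + D * r_cm**3  # Meisberger T(r)
    return gamma * activity_mci * tissue / r_cm**2     # inverse-square falloff

for r in (0.5, 1.0, 2.0, 5.0):
    print(f"r = {r:3.1f} cm -> relative dose rate {dose_rate(r, 1.0):8.3f}")
```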

  14. Absorbing phase transitions in deterministic fixed-energy sandpile models

    Science.gov (United States)

    Park, Su-Chan

    2018-03-01

    We investigate the origin of the difference, which was noticed by Fey et al. [Phys. Rev. Lett. 104, 145703 (2010), 10.1103/PhysRevLett.104.145703], between the steady state density of an Abelian sandpile model (ASM) and the transition point of its corresponding deterministic fixed-energy sandpile model (DFES). Being deterministic, the configuration space of a DFES can be divided into two disjoint classes such that every configuration in one class should evolve into one of absorbing states, whereas no configurations in the other class can reach an absorbing state. Since the two classes are separated in terms of toppling dynamics, the system can be made to exhibit an absorbing phase transition (APT) at various points that depend on the initial probability distribution of the configurations. Furthermore, we show that in general the transition point also depends on whether an infinite-size limit is taken before or after the infinite-time limit. To demonstrate, we numerically study the two-dimensional DFES with Bak-Tang-Wiesenfeld toppling rule (BTW-FES). We confirm that there are indeed many thresholds. Nonetheless, the critical phenomena at various transition points are found to be universal. We furthermore discuss a microscopic absorbing phase transition, or a so-called spreading dynamics, of the BTW-FES, to find that the phase transition in this setting is related to the dynamical isotropic percolation process rather than self-organized criticality. In particular, we argue that choosing recurrent configurations of the corresponding ASM as an initial configuration does not allow for a nontrivial APT in the DFES.
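
    The deterministic dynamics described above is compact enough to state in a few lines. The Python sketch below (parallel-update convention assumed) conserves grains through periodic boundaries, topples every site at or above the threshold simultaneously, and halts once no site can topple; the densities used are illustrative values on either side of the reported transition, which lies close to, but not exactly at, the ASM stationary density.

```python
# Deterministic fixed-energy sandpile with the BTW rule on a periodic lattice.
import numpy as np

def run_fes(L=64, density=2.0, steps=4000, seed=0):
    rng = np.random.default_rng(seed)
    z = np.zeros((L, L), dtype=int)
    for site in rng.integers(0, L * L, size=int(density * L * L)):
        z[site // L, site % L] += 1       # only the initial condition is random
    activity = 0.0
    for t in range(steps):
        active = z >= 4                   # threshold of the BTW toppling rule
        activity = active.mean()
        if activity == 0.0:
            return t, 0.0                 # absorbing state reached
        z -= 4 * active                   # each active site sheds four grains...
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            z += np.roll(active, shift, axis=(0, 1))  # ...one to each neighbour
    return steps, activity                # still active: supercritical density

print(run_fes(density=2.0))   # low density: relaxes into an absorbing state
print(run_fes(density=2.4))   # high density: sustained activity
```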

  15. Deterministic Earthquake Hazard Assessment by Public Agencies in California

    Science.gov (United States)

    Mualchin, L.

    2005-12-01

    Even in its short recorded history, California has experienced a number of damaging earthquakes that have resulted in new codes and other legislation for public safety. In particular, the 1971 San Fernando earthquake produced some of the most lasting results such as the Hospital Safety Act, the Strong Motion Instrumentation Program, the Alquist-Priolo Special Studies Zone Act, and the California Department of Transportation (Caltrans') fault-based deterministic seismic hazard (DSH) map. The latter product provides values for earthquake ground motions based on Maximum Credible Earthquakes (MCEs), defined as the largest earthquakes that can reasonably be expected on faults in the current tectonic regime. For surface fault rupture displacement hazards, detailed study of the same faults apply. Originally, hospital, dam, and other critical facilities used seismic design criteria based on deterministic seismic hazard analyses (DSHA). However, probabilistic methods grew and took hold by introducing earthquake design criteria based on time factors and quantifying "uncertainties", by procedures such as logic trees. These probabilistic seismic hazard analyses (PSHA) ignored the DSH approach. Some agencies were influenced to adopt only the PSHA method. However, deficiencies in the PSHA method are becoming recognized, and the use of the method is now becoming a focus of strong debate. Caltrans is in the process of producing the fourth edition of its DSH map. The reason for preferring the DSH method is that Caltrans believes it is more realistic than the probabilistic method for assessing earthquake hazards that may affect critical facilities, and is the best available method for ensuring public safety. Its time-invariant values help to produce robust design criteria that are soundly based on physical evidence. And it is the method for which there is the least opportunity for unwelcome surprises.

  16. Four small supernumerary marker chromosomes derived from chromosomes 6, 8, 11 and 12 in a patient with minimal clinical abnormalities: a case report

    Directory of Open Access Journals (Sweden)

    Hamid Ahmed B

    2010-08-01

    Full Text Available Abstract Introduction Small supernumerary marker chromosomes are still a problem in cytogenetic diagnostics and genetic counseling. This holds especially true for the rare cases with multiple small supernumerary marker chromosomes. Most such cases are reported to be clinically severely affected due to the chromosomal imbalances induced by the presence of small supernumerary marker chromosomes. Here we report the first case of a patient having four different small supernumerary marker chromosomes who, apart from slight developmental retardation in youth and non-malignant hyperpigmentation, presented no other clinical signs. Case presentation Our patient was a 30-year-old Caucasian man, delivered by caesarean section because of macrosomia. At birth he presented with bilateral cryptorchidism but no other birth defects. At the age of around two years he showed psychomotor delay and a bilateral convergent strabismus. Later he had slight learning difficulties, with normal social behavior, and now lives an independent life as an adult. Apart from hypogenitalism, he has multiple hyperpigmented nevi all over his body, short feet with pes cavus and claw toes. At the age of 30 years, cytogenetic and molecular cytogenetic analysis revealed a karyotype of 50,XY,+min(6)(:p11.1->q11.1:),+min(8)(:p11.1->q11.1:),+min(11)(:p11.11->q11:),+min(12)(:p11.2~12->q10:), leading overall to a small partial trisomy in 12p11.1~12.1. Conclusions Including this case, four single case reports are available in the literature with a karyotype 50,XN,+4mar. For prenatally detected multiple small supernumerary marker chromosomes in particular, we learn from this case that such a cytogenetic condition may be correlated with a positive clinical outcome.

  17. Levy-like behaviour in deterministic models of intelligent agents exploring heterogeneous environments

    International Nuclear Information System (INIS)

    Boyer, D; Miramontes, O; Larralde, H

    2009-01-01

    Many studies on animal and human movement patterns report the existence of scaling laws and power-law distributions. Whereas a number of random walk models have been proposed to explain observations, in many situations individuals actually rely on mental maps to explore strongly heterogeneous environments. In this work, we study a model of a deterministic walker, visiting sites randomly distributed on the plane and with varying weight or attractiveness. At each step, the walker minimizes a function that depends on the distance to the next unvisited target (cost) and on the weight of that target (gain). If the target weight distribution is a power law, p(k) ∼ k^(-β), in some range of the exponent β, the foraging medium induces movements that are similar to Lévy flights and are characterized by non-trivial exponents. We explore variations of the choice rule in order to test the robustness of the model and argue that the addition of noise has a limited impact on the dynamics in strongly disordered media.
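
    A minimal version of this walker is easy to simulate. In the Python sketch below, sites carry approximately power-law weights and the walker greedily minimizes the ratio of distance to weight; the specific cost d/k and all parameter values are our assumptions, standing in for the cost-gain function of the paper.

```python
# Deterministic walker over randomly placed, power-law-weighted sites.
import numpy as np

rng = np.random.default_rng(3)
N, beta = 500, 2.5
xy = rng.random((N, 2))                # site positions on the unit square
k = rng.pareto(beta - 1, N) + 1.0      # weights with roughly p(k) ~ k**(-beta)

pos, visited, steps = np.array([0.5, 0.5]), np.zeros(N, dtype=bool), []
for _ in range(100):
    d = np.linalg.norm(xy - pos, axis=1)
    cost = np.where(visited, np.inf, d / k)   # near and heavy targets win
    nxt = int(np.argmin(cost))
    steps.append(float(np.linalg.norm(xy[nxt] - pos)))
    pos, visited[nxt] = xy[nxt], True

print(f"mean step {np.mean(steps):.3f}, max step {np.max(steps):.3f}")
# heavy-tailed step lengths emerge from the weight disorder, not from noise
```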

  19. Neutron response of silicon carbide semiconductor detectors from deterministic adjoint transport calculations

    International Nuclear Information System (INIS)

    Rowe, M.; Manalo, K.; Plower, T.; Sjoden, G.

    2009-01-01

    Evaluation of silicon carbide (SiC) semiconductor detectors for use in power monitoring is of significant interest because of their distinct advantages, including small size, small mass, and their inactivity both chemically and neutronically. The main focus of this paper is evaluating the predicted response of a SiC detector placed in a 17 x 17 Westinghouse PWR assembly, using the PENTRAN code system for the 3-D deterministic adjoint transport computations. Adjoint transport results indicated that maximum adjoint values of 1, 0.507 and 0.308 were obtained for the thermal, epithermal and fast neutron energy groups, respectively. Within a radial distance of 6.08 cm from the SiC detector, local fuel pins contribute 75.33% of the thermal-group response. A total of 35.85% of the response in the epithermal group is accounted for within the same 6.08 cm radius; similarly, 21.58% of the fast-group response is accounted for within the same radius. This means that for neutrons, the effective monitoring range of the SiC detectors is on the order of five fuel pins away from the detector; pins outside this range in the fuel lattice are minimally 'seen' by the SiC detector. (authors)

  20. Automatic design of deterministic and non-halting membrane systems by tuning syntactical ingredients.

    Science.gov (United States)

    Zhang, Gexiang; Rong, Haina; Ou, Zhu; Pérez-Jiménez, Mario J; Gheorghe, Marian

    2014-09-01

    To solve the programmability issue of membrane computing models, the automatic design of membrane systems is a newly initiated and promising research direction. In this paper, we propose an automatic design method, Permutation Penalty Genetic Algorithm (PPGA), for a deterministic and non-halting membrane system by tuning membrane structures, initial objects and evolution rules. The main ideas of PPGA are the introduction of the permutation encoding technique for a membrane system, a penalty function evaluation approach for a candidate membrane system, and a genetic algorithm for evolving a population of membrane systems toward a successful one fulfilling a given computational task. Experimental results show that PPGA can successfully accomplish the automatic design of a cell-like membrane system for computing the square of n (where n ≥ 1 is a natural number) and can find the minimal membrane systems with respect to their membrane structures, alphabet, initial objects, and evolution rules for fulfilling the given task. We also provide guidelines on how to set the parameters of PPGA.

  1. Review of the Monte Carlo and deterministic codes in radiation protection and dosimetry

    International Nuclear Information System (INIS)

    Tagziria, H.

    2000-02-01

    A drawback of the Monte Carlo technique is that the solutions are given at specific locations only, are statistically fluctuating, and are arrived at with a great deal of computer effort. Sooner rather than later, however, one would expect powerful variance reduction techniques and ever-faster processor machines to balance these disadvantages out. This is especially true if one considers the rapid advances in computer technology and parallel computers, which can achieve a 300-fold faster convergence. In many fields and cases the user would, however, benefit greatly by considering, when possible, alternative methods to the Monte Carlo technique, such as deterministic methods, at least as a means of validation. It can be shown, in fact, that for less complex problems a deterministic approach can have many advantages. In its earlier manifestations, Monte Carlo simulation was primarily performed by experts who were intimately involved in the development of the computer code. Increasingly, however, codes are being supplied as relatively user-friendly packages for widespread use, which allows them to be used by those with less specialist knowledge. This enables them to be used as 'black boxes', which in turn provides scope for costly errors, especially in the choice of cross section data and acceleration techniques. The Monte Carlo method as employed with modern computers goes back several decades, and nowadays science and software libraries would be virtually empty if one excluded work that is either directly or indirectly related to this technique. This is specifically true in the fields of 'computational dosimetry', 'radiation protection' and radiation transport in general. Hundreds of codes have been written and applied with various degrees of success. Some of these have become trademarks, generally well supported and taken up by thousands of users. Other codes, which should be encouraged, are the so-called in-house codes, which still serve their developers and their groups well in their intended

  2. A Kalman-filter bias correction of ozone deterministic, ensemble-averaged, and probabilistic forecasts

    Energy Technology Data Exchange (ETDEWEB)

    Monache, L D; Grell, G A; McKeen, S; Wilczak, J; Pagowski, M O; Peckham, S; Stull, R; McHenry, J; McQueen, J

    2006-03-20

    Kalman filtering (KF) is used to postprocess numerical-model output to estimate systematic errors in surface ozone forecasts. It is implemented with a recursive algorithm that updates its estimate of future ozone-concentration bias by using past forecasts and observations. KF performance is tested for three types of ozone forecasts: deterministic, ensemble-averaged, and probabilistic forecasts. Eight photochemical models were run for 56 days during summer 2004 over northeastern USA and southern Canada as part of the International Consortium for Atmospheric Research on Transport and Transformation New England Air Quality (AQ) Study. The raw and KF-corrected predictions are compared with ozone measurements from the Aerometric Information Retrieval Now data set, which includes roughly 360 surface stations. The completeness of the data set allowed a thorough sensitivity test of key KF parameters. It is found that the KF improves forecasts of ozone-concentration magnitude and the ability to predict rare events, both for deterministic and ensemble-averaged forecasts. It also improves the ability to predict the daily maximum ozone concentration, and reduces the time lag between the forecast and observed maxima. For this case study, KF considerably improves the predictive skill of probabilistic forecasts of ozone concentration greater than thresholds of 10 to 50 ppbv, but it degrades it for thresholds of 70 to 90 ppbv. Moreover, KF considerably reduces probabilistic forecast bias. The significance of KF postprocessing and ensemble-averaging is that they are both effective for real-time AQ forecasting. KF reduces systematic errors, whereas ensemble-averaging reduces random errors. When combined they produce the best overall forecast.
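
    For concreteness, the Python sketch below shows one common way such a recursive bias estimator is written: the systematic forecast error is modelled as a random walk and updated from each forecast-observation pair. The noise variances and the toy numbers are our assumptions; the paper's exact formulation and tuning may differ.

```python
# Scalar Kalman-filter estimate of a slowly varying forecast bias.
def kalman_bias_correct(forecasts, observations, sigma_w=0.1, sigma_v=1.0):
    b, p = 0.0, 1.0                 # bias estimate and its error variance
    corrected = []
    for f, y in zip(forecasts, observations):
        corrected.append(f - b)     # correct today using yesterday's estimate
        p += sigma_w                # predict: random-walk variance growth
        gain = p / (p + sigma_v)    # Kalman gain
        b += gain * ((f - y) - b)   # update with the newly observed error
        p *= 1.0 - gain
    return corrected

fcst = [62.0, 64.0, 70.0, 68.0, 73.0, 75.0]   # raw ozone forecasts (made up)
obs = [55.0, 58.0, 63.0, 60.0, 66.0, 67.0]    # verifying observations (made up)
print([round(c, 1) for c in kalman_bias_correct(fcst, obs)])
```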

  3. Non- and minimally invasive full-mouth rehabilitation of patients with loss of vertical dimension of occlusion using CAD/CAM: an innovative concept demonstrated with a case report.

    Science.gov (United States)

    Bosch, Gabriel; Ender, Andreas; Mehl, Albert

    2015-01-01

    Abrasion and erosion are two increasingly common indications for dental treatment. Thanks to modern digital technologies and new restorative materials, there are novel therapeutic approaches to restoring such losses of tooth structure in a virtually non-invasive manner. The case study in this article demonstrates one such innovative approach. The patient's severely abraded natural dentition was restored in a defect-driven, minimally invasive manner using high-performance composite materials in the posterior region, and the "sandwich technique" in the anterior region. The restorations were milled on an optimized milling machine with milling cycles adapted for the fabrication of precision-fit restorations with thin edges.

  4. Assessing inter- and intra-individual cognitive variability in patients at risk for cognitive impairment: the case of minimal hepatic encephalopathy.

    Science.gov (United States)

    Bisiacchi, Patrizia; Cona, Giorgia; Tarantino, Vincenza; Schiff, Sami; Montagnese, Sara; Amodio, Piero; Capizzi, Giovanna

    2014-12-01

    Recent evidence reveals that inter- and intra-individual variability significantly affects cognitive performance in a number of neuropsychological pathologies. We applied a flexible family of statistical models to elucidate the contribution of inter- and intra-individual variables on cognitive functioning in healthy volunteers and patients at risk for hepatic encephalopathy (HE). Sixty-five volunteers (32 patients with cirrhosis and 33 healthy volunteers) were assessed by means of the Inhibitory Control Task (ICT). A Generalized Additive Model for Location, Scale and Shape (GAMLSS) was fitted for jointly modeling the mean and the intra-variability of Reaction Times (RTs) as a function of socio-demographic and task related covariates. Furthermore, a Generalized Linear Mixed Model (GLMM) was fitted for modeling accuracy. When controlling for the covariates, patients without minimal hepatic encephalopathy (MHE) did not differ from patients with MHE in the low-demanding condition, both in terms of RTs and accuracy. Moreover, they showed a significant decline in accuracy compared to the control group. Compared to patients with MHE, patients without MHE showed faster RTs and higher accuracy only in the high-demanding condition. The results revealed that the application of GAMLSS and GLMM models are able to capture subtle cognitive alterations, previously not detected, in patients' subclinical pathologies.

  5. Minimally Invasive Transforaminal Lumbar Interbody Fusion with Percutaneous Bilateral Pedicle Screw Fixation for Lumbosacral Spine Degenerative Diseases. A retrospective database of 40 consecutive treated cases and literature review.

    Science.gov (United States)

    Millimaggi, Daniele Francesco; DI Norcia, Valerio; Luzzi, Sabino; Alfiero, Tommaso; Galzio, Renato Juan; Ricci, Alessandro

    2017-04-12

    To report our results about minimally invasive transforaminal lumbar interbody fusion (MI-TLIF) with bilateral pedicle screw fixation in patients with degenerative lumbosacral spine disease. To describe the indications, surgical technique and results of a consecutive series of 40 patients who underwent MI-TLIF. Despite the limited number of clinical studies, published data suggest tremendous potential advantages of this technique. Forty patients with radiological findings of degenerative lumbosacral spine disease underwent MI-TLIF between July 2012 and January 2015. Clinical outcomes were assessed by means of the Oswestry Disability Index (ODI) and Health Survey Scoring (SF36) before surgery and at one-year follow-up. Furthermore, the following parameters were retrospectively reviewed: age, sex, working activity, body mass index (BMI), type of degenerative disease, number of levels of fusion, operative time, blood loss, and length of hospital stay. Average operative time was 230 minutes, mean estimated blood loss 170 mL, and average length of hospital stay 5 days. The ODI improved from a score of 59 preoperatively to a post-operative score of 20 at one-year follow-up. The average SF36 score increased from 36 to 54 (Physical Health) and from 29 to 50 (Mental Health) at the one-year outcome evaluation. MI-TLIF with bilateral pedicle screw fixation is an excellent choice for selected patients suffering from symptomatic degenerative lumbosacral spine disease, especially secondary to recurrent disk herniations.

  6. Can statistic adjustment of OR minimize the potential confounding bias for meta-analysis of case-control study? A secondary data analysis.

    Science.gov (United States)

    Liu, Tianyi; Nie, Xiaolu; Wu, Zehao; Zhang, Ying; Feng, Guoshuang; Cai, Siyu; Lv, Yaqi; Peng, Xiaoxia

    2017-12-29

    Different confounder adjustment strategies were used to estimate odds ratios (ORs) in case-control studies, i.e. how many confounders the original studies adjusted for and which variables they were. This secondary data analysis aims to detect whether differences in confounder adjustment strategies cause potential biases in case-control studies, and whether such bias would impact the summary effect size of meta-analyses. We included all meta-analyses that focused on the association between breast cancer and passive smoking among non-smoking women, as well as each original case-control study included in these meta-analyses. The relative deviations (RDs) of each original study were calculated to detect how strongly the adjustment would impact the estimation of ORs compared with crude ORs. At the same time, a scatter diagram was sketched to describe the distribution of adjusted ORs with different numbers of adjusted confounders. Substantial inconsistency existed in meta-analyses of case-control studies, which would influence the precision of the summary effect size. First, mixed unadjusted and adjusted ORs were combined into individual ORs in the majority of meta-analyses. Second, original studies with different confounder adjustment strategies were combined, i.e. with different numbers of adjusted confounders and different factors being adjusted in each original study. Third, adjustment did not make the effect sizes of the original studies converge, which suggests that model fitting might have failed to correct the systematic error caused by confounding. The heterogeneity of confounder adjustment strategies in case-control studies may introduce further bias into the summary effect size of meta-analyses, especially for weak or medium associations, such that the direction of causal inference could even be reversed. Therefore, further methodological research is needed on the assessment of confounder adjustment strategies, as well as on how to take this kind of bias into account.
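
    The relative-deviation measure can be written down directly; the formula below is a plausible reading of the abstract (the paper's exact definition may differ) and the study values are invented:

    ```python
    # Illustrative sketch (not from the paper): quantify how much confounder
    # adjustment shifts each study's effect size relative to the crude estimate.
    def relative_deviation(or_adjusted, or_crude):
        """RD = (adjusted OR - crude OR) / crude OR, as a percentage (assumed form)."""
        return 100.0 * (or_adjusted - or_crude) / or_crude

    # Hypothetical studies: (crude OR, adjusted OR, number of adjusted confounders)
    studies = [(1.30, 1.18, 5), (1.10, 1.25, 2), (1.45, 1.40, 8)]
    for crude, adjusted, k in studies:
        print(f"{k} confounders adjusted: RD = {relative_deviation(adjusted, crude):+.1f}%")
    ```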

  7. Power Minimization techniques for Networked Data Centers

    International Nuclear Information System (INIS)

    Low, Steven; Tang, Kevin

    2011-01-01

    Our objective is to develop a mathematical model to optimize energy consumption at multiple levels in networked data centers, and to develop abstract algorithms that optimize not only individual servers but also coordinate the energy consumption of clusters of servers within a data center and across geographically distributed data centers, so as to minimize the overall energy cost and brown-energy consumption of an enterprise. In this project, we have formulated a variety of optimization models, some stochastic and others deterministic, and have obtained a variety of qualitative results on the structural properties, robustness, and scalability of the optimal policies. We have also systematically derived from these models decentralized algorithms to optimize energy efficiency, and analyzed their optimality and stability properties. Finally, we have conducted preliminary numerical simulations to illustrate the behavior of these algorithms. We draw the following conclusions. First, there is a substantial opportunity to minimize both the amount and the cost of electricity consumed by a network of data centers, by exploiting the fact that traffic load, electricity cost, and availability of renewable generation fluctuate over time and across geographical locations. Judiciously matching these stochastic processes can optimize the tradeoff between brown energy consumption, electricity cost, and response time. Second, given the stochastic nature of these three processes, real-time dynamic feedback should form the core of any optimization strategy. The key is to develop decentralized algorithms that can be implemented at different parts of the network as simple, local algorithms that coordinate through asynchronous message passing.
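
    One ingredient of this picture, matching load to location-dependent electricity prices, can be sketched as a small linear program; everything below (prices, energy intensities, capacities, demand) is hypothetical and is not the project's model:

    ```python
    # Toy geographic load balancing: serve a total workload across data centers
    # with different electricity prices and capacities at minimum electricity cost.
    import numpy as np
    from scipy.optimize import linprog

    price = np.array([0.12, 0.08, 0.15])         # $/kWh at three sites (assumed)
    energy_per_job = np.array([1.0, 1.2, 0.9])   # kWh per unit load at each site (assumed)
    capacity = np.array([60.0, 50.0, 80.0])      # max load units per site (assumed)
    demand = 120.0                               # total load to be served

    c = price * energy_per_job                   # cost per unit load at each site
    res = linprog(c, A_eq=[[1, 1, 1]], b_eq=[demand],
                  bounds=list(zip([0, 0, 0], capacity)))
    print("allocation:", res.x, "| total cost:", round(res.fun, 2))
    ```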

  8. Minimalism. Clip and Save.

    Science.gov (United States)

    Hubbard, Guy

    2002-01-01

    Provides background information on the art movement called "Minimalism" discussing why it started and its characteristics. Includes learning activities and information on the artist, Donald Judd. Includes a reproduction of one of his art works and discusses its content. (CMK)

  9. Ruled Laguerre minimal surfaces

    KAUST Repository

    Skopenkov, Mikhail

    2011-10-30

    A Laguerre minimal surface is an immersed surface in ℝ³ that is an extremal of the functional ∫(H²/K − 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces r(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ) + λ(sin φ, cos φ, 0), where A, B, C, D ∈ ℝ are fixed. To achieve invariance under Laguerre transformations, we also derive all Laguerre minimal surfaces that are enveloped by a family of cones. The methodology is based on the isotropic model of Laguerre geometry. In this model a Laguerre minimal surface enveloped by a family of cones corresponds to a graph of a biharmonic function carrying a family of isotropic circles. We classify such functions by showing that the top view of the family of circles is a pencil. © 2011 Springer-Verlag.

  10. On Notions of Security for Deterministic Encryption, and Efficient Constructions Without Random Oracles

    NARCIS (Netherlands)

    S. Boldyreva; S. Fehr (Serge); A. O'Neill; D. Wagner

    2008-01-01

    The study of deterministic public-key encryption was initiated by Bellare et al. (CRYPTO ’07), who provided the “strongest possible” notion of security for this primitive (called PRIV) and constructions in the random oracle (RO) model. We focus on constructing efficient deterministic

  11. Using EFDD as a Robust Technique for Deterministic Excitation in Operational Modal Analysis

    DEFF Research Database (Denmark)

    Jacobsen, Niels-Jørgen; Andersen, Palle; Brincker, Rune

    2007-01-01

    carried out on a plate structure excited by, respectively, a pure stochastic signal and the same stochastic signal with a deterministic signal superimposed. Good agreement was found in terms of natural frequencies, damping ratios and mode shapes. Even the influence of a deterministic signal located

  12. Minimally invasive, imaging guided virtual autopsy compared to conventional autopsy in foetal, newborn and infant cases: study protocol for the paediatric virtual autopsy trial

    Science.gov (United States)

    2014-01-01

    Background In light of declining autopsy rates around the world, post-mortem MR imaging is a promising alternative to conventional autopsy in the investigation of infant death. A major drawback of this non-invasive autopsy approach is the fact that histopathological and microbiological examination of the tissue is not possible. The objective of this prospective study is to compare the performance of minimally invasive, virtual autopsy, including CT-guided biopsy, with conventional autopsy procedures in a paediatric population. Methods/Design Foetuses, newborns and infants that are referred for autopsy at three different institutions associated with the University of Zurich will be eligible for recruitment. All bodies will be examined with a commercial CT and a 3 Tesla MRI scanner, masked to the results of conventional autopsy. After cross-sectional imaging, CT-guided tissue sampling will be performed by a multifunctional robotic system (Virtobot) allowing for automated post-mortem biopsies. Virtual autopsy results will be classified with regards to the likely final diagnosis and major pathological findings and compared to the results of conventional autopsy, which remains the diagnostic gold standard. Discussion There is an urgent need for the development of alternative post-mortem examination methods, not only as a counselling tool for families and as a quality control measure for clinical diagnosis and treatment but also as an instrument to advance medical knowledge and clinical practice. This interdisciplinary study will determine whether virtual autopsy will narrow the gap in information between non-invasive and traditional autopsy procedures. Trial Registration ClinicalTrials.gov: NCT01888380 PMID:24438163

  13. The combined use of computer-guided, minimally invasive, flapless corticotomy and clear aligners as a novel approach to moderate crowding: A case report.

    Science.gov (United States)

    Cassetta, Michele; Altieri, Federica; Pandolfi, Stefano; Giansanti, Matteo

    2017-03-01

    The aim of this case report was to describe an innovative orthodontic treatment method that combined surgical and orthodontic techniques. The novel method was used to achieve a positive result in a case of moderate crowding by employing a computer-guided piezocision procedure followed by the use of clear aligners. A 23-year-old woman had a malocclusion with moderate crowding. Her periodontal indices, oral health-related quality of life (OHRQoL), and treatment time were evaluated. The treatment included interproximal corticotomy cuts extending through the entire thickness of the cortical layer, without a full-thickness flap reflection. This was achieved with a three-dimensionally printed surgical guide using computer-aided design and computer-aided manufacturing. Orthodontic force was applied to the teeth immediately after surgery by using clear appliances for better control of tooth movement. The total treatment time was 8 months. The periodontal indices improved after crowding correction, but the oral health impact profile showed a slight deterioration of OHRQoL during the 3 days following surgery. At the 2-year retention follow-up, the stability of treatment was excellent. The reduction in surgical time and patient discomfort, increased periodontal safety and patient acceptability, and accurate control of orthodontic movement without the risk of losing anchorage may encourage the use of this combined technique in appropriate cases.

  14. DETERMINISTICALLY-MODIFIED INTEGRAL ESTIMATORS OF GRAVITATIONAL TENSOR

    Directory of Open Access Journals (Sweden)

    Mohsen Romeshkani

    Full Text Available Modelling the Earth's global gravity field is an important subject in physical geodesy, and different satellite gravimetry missions have been designed and launched for this purpose. Satellite gravity gradiometry (SGG) is a technique to measure the second-order derivatives of the gravity field. The Gravity field and steady-state Ocean Circulation Explorer (GOCE) is the first satellite mission to use this technique and is dedicated to recovering Earth gravity models (EGMs) up to medium wavelengths. Existing terrestrial gravimetric data and EGMs can be used for validation of the GOCE data prior to their use. In this research, the tensor of gravitation in the local north-oriented frame is generated using deterministically-modified integral estimators involving terrestrial data and EGMs. The paper shows that the SGG data can be validated with an accuracy of 1-2 mE in Fennoscandia using an integral estimator modified by the Molodensky method. A degree of modification of 100 and a suitable integration cap size for integrating the terrestrial data are proper parameters for the estimator.

  15. A Modified Deterministic Model for Reverse Supply Chain in Manufacturing

    Directory of Open Access Journals (Sweden)

    R. N. Mahapatra

    2013-01-01

    Full Text Available Technology is becoming pervasive across all facets of our lives today. Technology innovation leading to the development of new products and the enhancement of features in existing products is happening at a faster pace than ever. It is becoming difficult for customers to keep up with the deluge of new technology. This trend has resulted in a gross increase in the use of new materials and decreased customer interest in relatively older products. This paper deals with a novel model in which stationary demand is fulfilled by remanufactured products along with newly manufactured products. The model is based on the assumption that items returned from customers can be remanufactured at a fixed rate. The remanufactured products are assumed to be as good as new ones in terms of features, quality, and worth. A methodology is used to calculate the optimum level of newly manufactured items and the optimum level of remanufactured products simultaneously. The model is formulated depending on the relationship between the different parameters. An interpretive-modelling-based approach has been employed to model the reverse logistics variables typically found in supply chains (SCs). For simplicity of calculation, a deterministic approach is implemented for the proposed model.

  16. Conversion of dependability deterministic requirements into probabilistic requirements

    International Nuclear Information System (INIS)

    Bourgade, E.; Le, P.

    1993-02-01

    This report concerns the on-going survey conducted jointly by the DAM/CCE and NRE/SR branches on the inclusion of dependability requirements in control and instrumentation projects. Its purpose is to enable a customer (the prime contractor) to convert deterministic dependability requirements, expressed in the form ''a maximum permissible number of failures, of maximum duration d, in a period t'', into probabilistic terms. The customer selects a confidence level for each previously defined undesirable event by assigning it a maximum probability of occurrence. Using the formulae we propose for two repair policies - constant rate or constant time - these probabilized requirements can then be transformed into equivalent failure rates. It is shown that the same formula can be used for both policies, provided certain realistic assumptions are confirmed, and that for a constant-time repair policy the correct result can always be obtained. The equivalent failure rates thus determined can be included in the specifications supplied to the contractors, who will then be able to carry out the corresponding predictive justification. (author), 8 refs., 3 annexes
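
    A hedged sketch of this kind of conversion, assuming failures follow a homogeneous Poisson process (the report's exact formulae and repair-policy details are not reproduced; the numbers are illustrative):

    ```python
    # Given a probabilized requirement "more than n_max failures over period t must
    # occur with probability at most alpha", solve for the equivalent failure rate
    # lambda under a Poisson-process assumption: P(K > n_max) = alpha, K ~ Poisson(lambda*t).
    from scipy.optimize import brentq
    from scipy.stats import poisson

    def equivalent_failure_rate(n_max, t_hours, alpha):
        f = lambda lam: poisson.sf(n_max, lam * t_hours) - alpha
        return brentq(f, 1e-12, 1.0)   # bracket chosen wide enough for these inputs

    lam = equivalent_failure_rate(n_max=2, t_hours=8760.0, alpha=1e-3)
    print(f"equivalent failure rate: {lam:.3e} per hour")
    ```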

  17. A deterministic seismic hazard map of India and adjacent areas

    International Nuclear Information System (INIS)

    Parvez, Imtiyaz A.; Vaccari, Franco; Panza, Giuliano

    2001-09-01

    A seismic hazard map of the territory of India and adjacent areas has been prepared using a deterministic approach based on the computation of synthetic seismograms complete with all main phases. The input data set consists of structural models, seismogenic zones, focal mechanisms and an earthquake catalogue. The synthetic seismograms have been generated by the modal summation technique. The seismic hazard, expressed in terms of maximum displacement (DMAX), maximum velocity (VMAX), and design ground acceleration (DGA), has been extracted from the synthetic signals and mapped on a regular grid of 0.2° × 0.2° over the studied territory. The estimated values of the peak ground acceleration are compared with the observed data available for the Himalayan region and found to be in good agreement. Many parts of the Himalayan region have DGA values exceeding 0.6 g. The epicentral areas of the great Assam earthquakes of 1897 and 1950 represent the maximum hazard, with DGA values reaching 1.2-1.3 g. (author)

  18. Entrepreneurs, chance, and the deterministic concentration of wealth.

    Science.gov (United States)

    Fargione, Joseph E; Lehman, Clarence; Polasky, Stephen

    2011-01-01

    In many economies, wealth is strikingly concentrated. Entrepreneurs--individuals with ownership in for-profit enterprises--comprise a large portion of the wealthiest individuals, and their behavior may help explain patterns in the national distribution of wealth. Entrepreneurs are less diversified and more heavily invested in their own companies than is commonly assumed in economic models. We present an intentionally simplified individual-based model of wealth generation among entrepreneurs to assess the role of chance and determinism in the distribution of wealth. We demonstrate that chance alone, combined with the deterministic effects of compounding returns, can lead to unlimited concentration of wealth, such that the percentage of all wealth owned by a few entrepreneurs eventually approaches 100%. Specifically, concentration of wealth results when the rate of return on investment varies by entrepreneur and by time. This result is robust to inclusion of realities such as differing skill among entrepreneurs. The most likely overall growth rate of the economy decreases as businesses become less diverse, suggesting that high concentrations of wealth may adversely affect a country's economic growth. We show that a tax on large inherited fortunes, applied to a small portion of the most fortunate in the population, can efficiently arrest the concentration of wealth at intermediate levels.
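
    The core mechanism lends itself to a few lines of simulation. The sketch below is my own illustration, not the authors' model; the population size, horizon and lognormal return spread are arbitrary choices. Chance plus compounding alone drives the richest entrepreneur's share of total wealth upward:

    ```python
    # Identical entrepreneurs whose wealth compounds with returns that vary
    # randomly by individual and by year (same expected return for everyone).
    import numpy as np

    rng = np.random.default_rng(1)
    n_entrepreneurs, n_years = 1000, 500
    wealth = np.ones(n_entrepreneurs)
    for _ in range(n_years):
        # lognormal yearly return factors, independent across people and years
        wealth *= rng.lognormal(mean=0.0, sigma=0.3, size=n_entrepreneurs)

    share_top = wealth.max() / wealth.sum()
    print(f"share of total wealth held by the richest entrepreneur: {share_top:.1%}")
    ```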

  19. Is there a sharp phase transition for deterministic cellular automata?

    International Nuclear Information System (INIS)

    Wootters, W.K.

    1990-01-01

    Previous work has suggested that there is a kind of phase transition between deterministic automata exhibiting periodic behavior and those exhibiting chaotic behavior. However, unlike the usual phase transitions of physics, this transition takes place over a range of values of the parameter rather than at a specific value. The present paper asks whether the transition can be made sharp, either by taking the limit of an infinitely large rule table, or by changing the parameter in terms of which the space of automata is explored. We find strong evidence that, for the class of automata we consider, the transition does become sharp in the limit of an infinite number of symbols, the size of the neighborhood being held fixed. Our work also suggests an alternative parameter in terms of which it is likely that the transition will become fairly sharp even if one does not increase the number of symbols. In the course of our analysis, we find that mean field theory, which is our main tool, gives surprisingly good predictions of the statistical properties of the class of automata we consider. 18 refs., 6 figs

  20. Rapid detection of small oscillation faults via deterministic learning.

    Science.gov (United States)

    Wang, Cong; Chen, Tianrui

    2011-08-01

    Detection of small faults is one of the most important and challenging tasks in the area of fault diagnosis. In this paper, we present an approach for the rapid detection of small oscillation faults based on a recently proposed deterministic learning (DL) theory. The approach consists of two phases: the training phase and the test phase. In the training phase, the system dynamics underlying normal and fault oscillations are locally accurately approximated through DL. The obtained knowledge of system dynamics is stored in constant radial basis function (RBF) networks. In the test phase, rapid detection is implemented. Specifically, a bank of estimators is constructed using the constant RBF neural networks to represent the trained normal and fault modes. By comparing the set of estimators with the monitored system under test, a set of residuals is generated, and the average L1 norms of the residuals are taken as the measure of the differences between the dynamics of the monitored system and the dynamics of the trained normal mode and oscillation faults. The occurrence of a test oscillation fault can be rapidly detected according to the smallest residual principle. A rigorous analysis of the performance of the detection scheme is also given. The novelty of the paper lies in the fact that the modeling uncertainty and nonlinear fault functions are accurately approximated and the knowledge is then utilized to achieve rapid detection of small oscillation faults. Simulation studies are included to demonstrate the effectiveness of the approach.
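
    The smallest-residual idea can be made concrete on a toy system. The sketch below is my own construction, not the paper's scheme: a "normal" and a "fault" Van der Pol oscillation (the fault being an assumed damping change) are each approximated offline by an RBF model of one state derivative, and a test signal is attributed to the stored model with the smallest average L1 residual:

    ```python
    import numpy as np

    def simulate(mu, n=4000, dt=0.01):
        # Van der Pol oscillator integrated with forward Euler (toy system)
        x, y = 2.0, 0.0
        states, dys = [], []
        for _ in range(n):
            dy = mu * (1 - x**2) * y - x
            states.append((x, y)); dys.append(dy)
            x, y = x + dt * y, y + dt * dy
        return np.array(states), np.array(dys)

    def phi(states, centers, width=0.8):
        # Gaussian RBF features over the 2D state
        d2 = ((states[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / width**2)

    g = np.linspace(-3, 3, 8)
    centers = np.array([(a, b) for a in g for b in g])   # 64 RBF centers on a grid

    modes = {"normal": 1.0, "fault": 1.3}                # fault = changed damping (assumed)
    weights = {}
    for name, mu in modes.items():
        S, dy = simulate(mu)
        weights[name], *_ = np.linalg.lstsq(phi(S, centers), dy, rcond=None)

    S_test, dy_test = simulate(1.3)                      # unlabeled test oscillation
    P = phi(S_test, centers)
    residuals = {k: np.mean(np.abs(dy_test - P @ w)) for k, w in weights.items()}
    print(residuals, "->", min(residuals, key=residuals.get))
    ```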

  2. Emergent Minimally Invasive Esophagogastrectomy

    Directory of Open Access Journals (Sweden)

    Thomas Fabian

    2017-01-01

    Full Text Available Introduction. Esophageal perforation in the setting of a malignancy carries high morbidity and mortality. We describe our management of such a patient using a minimally invasive approach. Methods. An 83-year-old female presented with an iatrogenic esophageal perforation during the workup of dysphagia. She was referred for surgical evaluation immediately after the event, which occurred in the endoscopy suite. Minimally invasive esophagectomy was chosen to provide definitive treatment for both her malignancy and her esophageal perforation. Results. Following an uncomplicated operative course, she was discharged to extended care for rehabilitation and remains alive four years after her resection. Conclusion. Although traditional open techniques are the accepted gold standard of treatment for esophageal perforation, minimally invasive esophagectomy plays an important role in experienced hands and may be offered to such patients.

  3. Minimal Walking Technicolor

    DEFF Research Database (Denmark)

    Foadi, Roshan; Frandsen, Mads Toudal; A. Ryttov, T.

    2007-01-01

    Different theoretical and phenomenological aspects of the Minimal and Nonminimal Walking Technicolor theories have recently been studied. The goal here is to make the models ready for collider phenomenology. We do this by constructing the low energy effective theory containing scalars, pseudoscalars, vector mesons and other fields predicted by the minimal walking theory. We construct their self-interactions and interactions with standard model fields. Using the Weinberg sum rules, opportunely modified to take into account the walking behavior of the underlying gauge theory, we find

  4. Minimal open strings

    International Nuclear Information System (INIS)

    Hosomichi, Kazuo

    2008-01-01

    We study FZZT-branes and open string amplitudes in (p, q) minimal string theory. We focus on the simplest boundary changing operators in two-matrix models, and identify the corresponding operators in worldsheet theory through the comparison of amplitudes. Along the way, we find a novel linear relation among FZZT boundary states in minimal string theory. We also show that the boundary ground ring is realized on physical open string operators in a very simple manner, and discuss its use for perturbative computation of higher open string amplitudes.

  5. Minimal genus one curves

    OpenAIRE

    Sadek, Mohammad

    2012-01-01

    In this paper we consider genus one equations of degree $n$, namely a (generalised) binary quartic when $n=2$, a ternary cubic when $n=3$, and a pair of quaternary quadrics when $n=4$. A new definition for the minimality of genus one equations of degree $n$ over local fields is introduced. The advantage of this definition is that it does not depend on invariant theory of genus one curves. We prove that this definition coincides with the classical definition of minimality for all $n \le 4$. As a...

  6. Minimal Walking Technicolor

    DEFF Research Database (Denmark)

    Frandsen, Mads Toudal

    2007-01-01

    I report on our construction and analysis of the effective low energy Lagrangian for the Minimal Walking Technicolor (MWT) model. The parameters of the effective Lagrangian are constrained by imposing modified Weinberg sum rules and by imposing a value for the S parameter estimated from the underlying theory.

  7. Minimalism and Speakers’ Intuitions

    Directory of Open Access Journals (Sweden)

    Matías Gariazzo

    2011-08-01

    Full Text Available Minimalism proposes a semantics that does not account for speakers’ intuitions about the truth conditions of a range of sentences or utterances. Thus, a challenge for this view is to offer an explanation of how its assignment of semantic contents to these sentences is grounded in their use. Such an account was mainly offered by Soames, but also suggested by Cappelen and Lepore. The article criticizes this explanation by presenting four kinds of counterexamples to it, and arrives at the conclusion that minimalism has not successfully answered the above-mentioned challenge.

  8. The Relation between Deterministic Thinking and Mental Health among Substance Abusers Involved in a Rehabilitation Program

    Directory of Open Access Journals (Sweden)

    Seyed Jalal Younesi

    2015-06-01

    Full Text Available Objective: The current research investigates the relation between deterministic thinking and mental health among drug abusers, in which the role of cognitive distortions is considered and clarified by focusing on deterministic thinking. Methods: The present study is descriptive and correlational. All individuals with experience of drug abuse who had been referred to the Shafagh Rehabilitation Center (Kahrizak) were considered as the statistical population. From this population, 110 individuals addicted to drugs (stimulants and methamphetamine) were selected by purposeful sampling to answer questionnaires on deterministic thinking and general health. For data analysis, Pearson correlation coefficients and regression analysis were used. Results: The results showed a positive and significant relationship between deterministic thinking and the lack of mental health [r = 0.22, p < 0.05]; among the factors of mental health, anxiety and depression had the closest relation to deterministic thinking. The two factors of deterministic thinking that most strongly predicted the lack of mental health were definitiveness in predicting tragic events and future anticipation. Discussion: It seems that drug abusers resort to deterministic thinking when confronted with difficult situations, and so are more affected by depression and anxiety. This way of thinking may play a major role in impelling or restraining drug addiction.

  9. Comparison of deterministic and stochastic techniques for estimation of design basis floods for nuclear power plants

    International Nuclear Information System (INIS)

    Solomon, S.I.; Harvey, K.D.

    1982-12-01

    The IAEA Safety Guide 50-SG-S10A recommends that design basis floods be estimated by deterministic techniques, using probable maximum precipitation and a rainfall-runoff model to evaluate the corresponding flood. The Guide indicates that stochastic techniques are also acceptable, in which case floods of very low probability have to be estimated. The paper compares the results of applying the two techniques at a number of locations in two river basins and concludes that the uncertainties of the results of the two techniques are of the same order of magnitude. However, the use of the unit hydrograph as the rainfall-runoff model may lead in some cases to nonconservative estimates. A distributed nonlinear rainfall-runoff model leads to estimates of probable maximum flood flows that are very close to the flows with a 10^6 - 10^7 year return interval estimated using a conservative and relatively simple stochastic technique. Recommendations on the practical application of Safety Guide 50-SG-S10A are made and the extension of the stochastic technique to ungauged sites and other design parameters is discussed
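
    To make the stochastic side concrete, the sketch below fits a Gumbel extreme-value distribution to synthetic annual maximum flows and extrapolates to very long return periods; it illustrates the general idea only, not the paper's technique, and real studies require far more care:

    ```python
    # Fit an extreme-value distribution to annual maxima and read off return levels.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    annual_max = stats.gumbel_r.rvs(loc=1200.0, scale=300.0, size=80, random_state=rng)

    loc, scale = stats.gumbel_r.fit(annual_max)
    for T in (100, 1e4, 1e6):                     # return periods in years
        q = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc, scale)
        print(f"{T:>9,.0f}-year flood estimate: {q:8.0f} m^3/s")
    ```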

  10. Ways To Minimize Bullying.

    Science.gov (United States)

    Mueller, Mary Ellen; Parisi, Mary Joy

    This report delineates a series of interventions aimed at minimizing incidents of bullying in a suburban elementary school. The social services staff was scheduled to initiate an anti-bullying initiative in fall 2001 due to the increased occurrence of bullying during the prior year. The target population consisted of third- and fourth-grade…

  11. Minimal constrained supergravity

    Directory of Open Access Journals (Sweden)

    N. Cribiori

    2017-01-01

    Full Text Available We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so-called “de Sitter” supergravities because we consider constraints that directly eliminate the auxiliary fields of the gravity multiplet.

  12. Minimally invasive distal pancreatectomy

    NARCIS (Netherlands)

    Røsok, Bård I.; de Rooij, Thijs; van Hilst, Jony; Diener, Markus K.; Allen, Peter J.; Vollmer, Charles M.; Kooby, David A.; Shrikhande, Shailesh V.; Asbun, Horacio J.; Barkun, Jeffrey; Besselink, Marc G.; Boggi, Ugo; Conlon, Kevin; Han, Ho Seong; Hansen, Paul; Kendrick, Michael L.; Kooby, David; Montagnini, Andre L.; Palanivelu, Chinnasamy; Wakabayashi, Go; Zeh, Herbert J.

    2017-01-01

    The first International conference on Minimally Invasive Pancreas Resection was arranged in conjunction with the annual meeting of the International Hepato-Pancreato-Biliary Association (IHPBA), in Sao Paulo, Brazil on April 19th 2016. The presented evidence and outcomes resulting from the session

  13. The Minimal Era

    Science.gov (United States)

    Van Ness, Wilhelmina

    1974-01-01

    Described the development of Minimal Art, a composite name that has been applied to the scattering of bland, bleak, non-objective fine arts painting and sculpture forms that proliferated slightly mysteriously in the middle 1960's as Pop Art began to decline. (Author/RK)

  14. Minimal DBM Substraction

    DEFF Research Database (Denmark)

    David, Alexandre; Håkansson, John; G. Larsen, Kim

    In this paper we present an algorithm to compute DBM subtractions with a guaranteed minimal number of splits and disjoint DBMs to avoid any redundancy. The subtraction is one of the few operations that result in a non-convex zone and thus requires splitting. It is of prime importance to reduce

  15. Cluster dynamics modelling of materials: A new hybrid deterministic/stochastic coupling approach

    Science.gov (United States)

    Terrier, Pierre; Athènes, Manuel; Jourdan, Thomas; Adjanor, Gilles; Stoltz, Gabriel

    2017-12-01

    Deterministic simulations of the rate equations governing cluster dynamics in materials are limited by the number of equations to integrate. Stochastic simulations are limited by the high frequency of certain events. We propose a coupling method combining deterministic and stochastic approaches. It allows phenomena on different time scales to be handled in cluster dynamics. This method, based on a splitting of the dynamics, is generic, and we highlight two different hybrid deterministic/stochastic methods. These coupling schemes are highly parallelizable and specifically designed to treat large-size cluster problems. The proof of concept is demonstrated on a simple model of vacancy clustering under thermal ageing.
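
    A schematic toy version of the splitting idea (my own construction, not the authors' scheme; the species, rates and coupling rule are invented): within each time slice, an abundant cluster class is advanced deterministically with an ODE step, while a rare, discrete nucleation channel is advanced stochastically.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    dt, t_end = 0.01, 5.0
    c = 1000.0          # concentration of small (abundant) clusters, treated continuously
    n_big = 0           # number of large (rare) clusters, kept discrete
    k_loss, k_nuc = 0.1, 0.002   # illustrative rate constants

    t = 0.0
    while t < t_end:
        # deterministic substep: smooth depletion of the abundant class
        c += dt * (-k_loss * c)
        # stochastic substep: nucleation of one large cluster as a Poisson event
        if rng.random() < 1.0 - np.exp(-k_nuc * c * dt):
            n_big += 1
            c = max(c - 10.0, 0.0)   # a nucleation consumes 10 small clusters (assumed)
        t += dt

    print(f"t={t:.2f}: small clusters ~ {c:.1f}, large clusters = {n_big}")
    ```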

  16. Anti-deterministic behaviour of discrete systems that are less predictable than noise

    Science.gov (United States)

    Urbanowicz, Krzysztof; Kantz, Holger; Holyst, Janusz A.

    2005-05-01

    We present a new type of deterministic dynamical behaviour that is less predictable than white noise. We call it anti-deterministic (AD) because time series corresponding to the dynamics of such systems do not generate deterministic lines in recurrence plots for small thresholds. We show that although the dynamics is chaotic in the sense of exponential divergence of nearby initial conditions, and although some properties of AD data are similar to white noise, the AD dynamics is, in fact, less predictable than noise and hence different from pseudo-random number generators.

  17. Loop-lifted XQuery RPC with deterministic updates

    NARCIS (Netherlands)

    Y. Zhang (Ying); P.A. Boncz (Peter)

    2006-01-01

    XRPC is a minimal XQuery extension that enables distributed query execution, combining the Remote Procedure Call (RPC) paradigm with the existing concept of XQuery functions. By calling out of a for-loop to multiple destinations, and by calling functions that themselves perform XRPC

  18. Short-term results of microhook ab interno trabeculotomy, a novel minimally invasive glaucoma surgery in Japanese eyes: initial case series.

    Science.gov (United States)

    Tanito, Masaki; Sano, Ichiya; Ikeda, Yoshifumi; Fujihara, Etsuko

    2017-08-01

    To report the first early postoperative results and safety profile after microhook ab interno trabeculotomy (μLOT). This initial retrospective observational case series included 24 consecutive glaucomatous eyes of 17 Japanese patients (7 men, 10 women; mean age ± standard deviation, 66.7 ± 17.9 years) who underwent μLOT. The trabeculotomy extent, surgical time, perioperative complications, interventions for complications and additional glaucoma surgeries during the follow-up for more than 3 months were collected by reviewing the medical and surgical records. The intraocular pressure (IOP), numbers of antiglaucoma medications, logarithm of the minimum angle of resolution visual acuity (VA), anterior chamber (AC) flare and corneal endothelial cell density (CECD) were compared preoperatively and postoperatively. The trabecular meshwork was incised for a mean of 3.6 ± 0.5 clock hours temporally, 3.7 ± 0.5 clock hours nasally and total 7.3 ± 0.6 clock hours during the 6.2 ± 1.6-min surgery. The mean preoperative IOP of 25.9 ± 14.3 mmHg and number of antiglaucoma medication of 3.3 ± 1.0 decreased significantly (p = 0.0002 and p = 0.005, respectively) to 14.7 ± 3.6 mmHg and 2.8 ± 0.8 at the final visit at 188.6 ± 68.8 days postoperatively. Compared with preoperatively, the final VA, AC flare and CECD did not change significantly. Hyphema with niveau formation (nine eyes, 38%) and washout of hyphema (two eyes, 8%) were the most common postoperative complication and intervention, respectively. At the final visit, 19 eyes (79%) achieved successful IOP control of 18 mmHg or less and a 15% reduction or greater. Microhook trabeculotomy normalizes the IOP during the early postoperative period in patients with glaucoma. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  19. Activity modes selection for project crashing through deterministic simulation

    Directory of Open Access Journals (Sweden)

    Ashok Mohanty

    2011-12-01

    Full Text Available Purpose: The time-cost trade-off problem addressed by CPM-based analytical approaches assumes unlimited resources and the existence of a continuous time-cost function. However, given the discrete nature of most resources, activities can often be crashed only stepwise. Activity crashing for discrete time-cost functions is also known as the activity modes selection problem in project management. This problem is known to be NP-hard. Sophisticated optimization techniques such as Dynamic Programming, Integer Programming, Genetic Algorithms and Ant Colony Optimization have been used to find efficient solutions to the activity modes selection problem. This paper presents a simple method that can provide efficient solutions to the activity modes selection problem for project crashing. Design/methodology/approach: A simulation-based method implemented in an electronic spreadsheet to determine activity modes for project crashing. The method is illustrated with the help of an example. Findings: The paper shows that a simple approach based on a simple heuristic and deterministic simulation can give good results, comparable to those of sophisticated optimization techniques. Research limitations/implications: The simulation-based crashing method presented in this paper is designed to return satisfactory solutions, but not necessarily an optimal solution. Practical implications: The use of spreadsheets for solving Management Science and Operations Research problems makes these techniques more accessible to practitioners. Spreadsheets provide a natural interface for model building, are easy to use in terms of inputs, solutions and report generation, and allow users to perform what-if analysis. Originality/value: The paper presents the application of simulation implemented in a spreadsheet to determine efficient solutions to the discrete time-cost tradeoff problem.
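
    The stepwise logic is easy to make concrete. The sketch below is an illustrative greedy heuristic for a purely serial project (so the critical path is the whole chain), not the paper's spreadsheet model; the activities, modes and deadline are invented:

    ```python
    # Each activity: list of (duration_days, direct_cost) modes, slowest/cheapest first.
    activities = {
        "A": [(10, 100), (8, 160), (7, 220)],
        "B": [(6, 80), (5, 130)],
        "C": [(9, 90), (7, 170), (6, 260)],
    }
    mode = {name: 0 for name in activities}     # start everyone in the slowest mode
    deadline = 20

    def duration():
        return sum(activities[a][mode[a]][0] for a in activities)

    while duration() > deadline:
        # candidate crashes: move one activity to its next faster mode,
        # picking the cheapest extra cost per day saved
        best, best_slope = None, float("inf")
        for a in activities:
            if mode[a] + 1 < len(activities[a]):
                d0, c0 = activities[a][mode[a]]
                d1, c1 = activities[a][mode[a] + 1]
                slope = (c1 - c0) / (d0 - d1)
                if slope < best_slope:
                    best, best_slope = a, slope
        if best is None:
            raise RuntimeError("deadline unreachable even with all modes crashed")
        mode[best] += 1

    total_cost = sum(activities[a][mode[a]][1] for a in activities)
    print("modes:", mode, "| duration:", duration(), "| cost:", total_cost)
    ```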

  20. Reduced-Complexity Deterministic Annealing for Vector Quantizer Design

    Directory of Open Access Journals (Sweden)

    Ortega Antonio

    2005-01-01

    Full Text Available This paper presents a reduced-complexity deterministic annealing (DA) approach for vector quantizer (VQ) design, using soft information processing with simplified assignment measures. Low-complexity distributions are designed to mimic the Gibbs distribution, the optimal distribution used in the standard DA method. These low-complexity distributions are simple enough to facilitate fast computation, yet they closely approximate the Gibbs distribution and result in near-optimal performance. We have also derived the theoretical performance loss at a given system entropy due to using the simple soft measures instead of the optimal Gibbs measure. We use the derived result to obtain optimal annealing schedules for the simple soft measures that approximate the annealing schedule for the optimal Gibbs distribution. The proposed reduced-complexity DA algorithms significantly improve the quality of the final codebooks compared to the generalized Lloyd algorithm and standard stochastic relaxation techniques, both with and without pairwise nearest neighbor (PNN) codebook initialization. The proposed algorithms are able to evade local minima, and the results show that they are not sensitive to the choice of the initial codebook. Compared to the standard DA approach, the reduced-complexity DA algorithms can operate over 100 times faster with negligible performance difference. For example, for the design of a 16-dimensional vector quantizer having a rate of 0.4375 bit/sample for a Gaussian source, the standard DA algorithm achieved 3.60 dB performance in 16,483 CPU seconds, whereas the reduced-complexity DA algorithm achieved the same performance in 136 CPU seconds. Beyond VQ design, DA techniques are applicable to problems such as classification, clustering, and resource allocation.
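
    For orientation, here is a compact sketch of the standard DA iteration that the paper accelerates (the reduced-complexity assignment measures themselves are not reproduced; the data, codebook size and cooling schedule are illustrative). Codevectors are updated under Gibbs soft assignments while the temperature T is gradually lowered:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    data = np.concatenate([rng.normal(-2, 0.5, (200, 2)),
                           rng.normal(+2, 0.5, (200, 2))])
    codebook = data[rng.choice(len(data), 4, replace=False)].copy()

    T = 8.0
    while T > 0.01:
        d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)   # squared distances
        p = np.exp(-(d2 - d2.min(1, keepdims=True)) / T)                # Gibbs weights
        p /= p.sum(1, keepdims=True)
        codebook = (p.T @ data) / p.sum(0)[:, None]                     # soft centroid update
        T *= 0.9                                                        # annealing schedule

    print("final codevectors:\n", codebook)
    ```

    At high T every data point is shared almost equally among all codevectors; as T falls, the assignments harden toward nearest-neighbor quantization, which is what lets DA slide past poor local minima.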

  1. Parkinson's disease classification using gait analysis via deterministic learning.

    Science.gov (United States)

    Zeng, Wei; Liu, Fenglin; Wang, Qinghui; Wang, Ying; Ma, Limin; Zhang, Yu

    2016-10-28

    Gait analysis plays an important role in maintaining the well-being of human mobility and health care, and is a valuable tool for obtaining quantitative information on motor deficits in Parkinson's disease (PD). In this paper, we propose a method to classify (diagnose) patients with PD and healthy control subjects using gait analysis via deterministic learning theory. The classification approach consists of two phases: a training phase and a classification phase. In the training phase, gait characteristics represented by the gait dynamics are derived from the vertical ground reaction forces under the usual, self-selected paces of the subjects. The gait dynamics underlying the gait patterns of healthy controls and PD patients are locally accurately approximated by radial basis function (RBF) neural networks. The obtained knowledge of the approximated gait dynamics is stored in constant RBF networks. The gait patterns of healthy controls and PD patients constitute a training set. In the classification phase, a bank of dynamical estimators is constructed for all the training gait patterns. Prior knowledge of the gait dynamics represented by the constant RBF networks is embedded in the estimators. By comparing the set of estimators with a test gait pattern of a certain PD patient to be classified (diagnosed), a set of classification errors is generated, and the average L1 norms of the errors are taken as the classification measure between the dynamics of the training gait patterns and the dynamics of the test PD gait pattern, according to the smallest error principle. When the gait patterns of 93 PD patients and 73 healthy controls are classified with the five-fold cross-validation method, the accuracy, sensitivity and specificity of the results are 96.39%, 96.77% and 95.89%, respectively. Based on the results, it may be claimed that the features and the classifiers used in the present study could effectively separate the gait patterns between the groups of PD patients and healthy

  2. Development of a Deterministic Ethernet Building blocks for Space Applications

    Science.gov (United States)

    Fidi, C.; Jakovljevic, Mirko

    2015-09-01

    The benefits of using commercially based networking standards and protocols have been widely discussed and are expected to include reductions in overall mission cost, shortened integration and test (I&T) schedules, increased operations flexibility, and hardware and software upgradeability/scalability in step with ongoing developments in the commercial world. The deterministic Ethernet technology TTEthernet [1] deployed on the NASA Orion spacecraft has demonstrated the use of this technology for a safety-critical human spaceflight application during Exploration Flight Test 1 (EFT-1). The TTEthernet technology used within the NASA Orion program was matured for that mission but did not lead to broader use in space applications or to an international space standard. TTTech has therefore developed a new version that allows the technology to be scaled for different applications, not only high-end missions: decreasing the size of the building blocks reduces size, weight and power and enables use in smaller applications. TTTech is currently developing a full space product offering for its TTEthernet technology to allow its use in space applications beyond launchers and human spaceflight. A broad space market assessment and the current ESA TRP7594 led to the development of a space-grade TTEthernet controller ASIC based on the ESA-qualified Atmel AT1C8RHA95 process [2]. In this paper we describe our current TTEthernet controller development towards a space-qualified network component allowing future spacecraft to operate in significant radiation environments while using a single onboard network for reliable commanding and data transfer.

  3. Minimal access surgery for multiorgan hydatid cysts

    Directory of Open Access Journals (Sweden)

    Parelkar Sandesh

    2010-01-01

    Full Text Available Multiorgan hydatid disease, caused by larval growth of Echinococcus granulosus, is a rare condition in the paediatric age group. There are very few reports of the management of multiorgan hydatid cysts involving lung, liver and spleen by a minimally invasive approach in this age group. Herewith, we report a case of hydatid cysts involving the lung, liver and spleen in a six-year-old child managed by minimally invasive surgery, along with a review of the literature.

  4. Minimally Invasive Surgery in Thymic Malignancies

    Directory of Open Access Journals (Sweden)

    Wentao FANG

    2018-04-01

    Full Text Available Surgery is the most important therapy for thymic malignancies. The last decade has seen increasing adoption of minimally invasive surgery (MIS) for thymectomy. MIS for early-stage thymoma patients has been shown to yield similar oncological results while helping to minimize surgical trauma, improve postoperative recovery, and reduce incisional pain. Meanwhile, with advances in surgical techniques, patients with locally advanced thymic tumors, preoperative induction therapies or recurrent disease may also benefit from MIS in selected cases.

  5. Transience and capacity of minimal submanifolds

    DEFF Research Database (Denmark)

    Markvorsen, Steen; Palmer, V.

    2003-01-01

    We prove explicit lower bounds for the capacity of annular domains of minimal submanifolds P^m in ambient Riemannian spaces N^n with sectional curvatures bounded from above. We characterize the situations in which the lower bounds for the capacity are actually attained. Furthermore, we apply these bounds to prove that Brownian motion defined on a complete minimal submanifold is transient when the ambient space is a negatively curved Hadamard-Cartan manifold. The proof stems directly from the capacity bounds and also covers the case of minimal submanifolds of dimension m > 2 in Euclidean spaces.

  6. Discrete Minimal Surface Algebras

    Directory of Open Access Journals (Sweden)

    Joakim Arnlind

    2010-05-01

    Full Text Available We consider discrete minimal surface algebras (DMSA) as generalized noncommutative analogues of minimal surfaces in higher dimensional spheres. These algebras appear naturally in membrane theory, where sequences of their representations are used as a regularization. After showing that the defining relations of the algebra are consistent, and that one can compute a basis of the enveloping algebra, we give several explicit examples of DMSAs in terms of subsets of sl_n (any semi-simple Lie algebra providing a trivial example by itself). A special class of DMSAs are Yang-Mills algebras. The representation graph is introduced to study representations of DMSAs of dimension d ≤ 4, and properties of representations are related to properties of graphs. The representation graph of a tensor product is (generically) the Cartesian product of the corresponding graphs. We provide explicit examples of irreducible representations and, for coinciding eigenvalues, classify all the unitary representations of the corresponding algebras.

  7. Minimal Composite Inflation

    DEFF Research Database (Denmark)

    Channuie, Phongpichit; Jark Joergensen, Jakob; Sannino, Francesco

    2011-01-01

    We investigate models in which the inflaton emerges as a composite field of a four dimensional, strongly interacting and nonsupersymmetric gauge theory featuring purely fermionic matter. We show that it is possible to obtain successful inflation via non-minimal coupling to gravity, and that the underlying dynamics is preferred to be near conformal. We discover that the compositeness scale of inflation is of the order of the grand unified energy scale.

  8. Minimally Invasive Abdominal Surgery

    OpenAIRE

    Richardson, William S.; Carter, Kristine M.; Fuhrman, George M.; Bolton, John S.; Bowen, John C.

    2000-01-01

    In the last decade, laparoscopy has been the most innovative surgical movement in general surgery. Minimally invasive surgery performed through a few small incisions, laparoscopy is the standard of care for the treatment of gallbladder disease and the gold standard for the treatment of reflux disease. The indications for a laparoscopic approach to abdominal disease continue to increase, and many diseases may be treated with laparoscopic techniques. At Ochsner, laparoscopic techniques have dem...

  9. Minimal hepatic encephalopathy

    OpenAIRE

    Stinton, Laura M; Jayakumar, Saumya

    2013-01-01

    Minimal hepatic encephalopathy (MHE) is the earliest form of hepatic encephalopathy and can affect up to 80% of cirrhotic patients. By definition, it has no obvious clinical manifestation and is characterized by neurocognitive impairment in attention, vigilance and integrative function. Although often not considered to be clinically relevant and, therefore, not diagnosed or treated, MHE has been shown to affect daily functioning, quality of life, driving and overall mortality. The diagnosis o...

  10. A plateau–valley separation method for textured surfaces with a deterministic pattern

    DEFF Research Database (Denmark)

    Godi, Alessandro; Kühle, Anders; De Chiffre, Leonardo

    2014-01-01

    The effective characterization of textured surfaces presenting a deterministic pattern of lubricant reservoirs is an issue with which many researchers are nowadays struggling. Existing standards are not suitable for the characterization of such surfaces, providing at times values without physical...

  11. Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII data

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset documents the source of the data analyzed in the manuscript "Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII..."

  12. A deterministic approach for performance assessment and optimization of power distribution units in Iran

    International Nuclear Information System (INIS)

    Azadeh, A.; Ghaderi, S.F.; Omrani, H.

    2009-01-01

    This paper presents a deterministic approach for performance assessment and optimization of power distribution units in Iran. The deterministic approach is composed of data envelopment analysis (DEA), principal component analysis (PCA) and correlation techniques. Seventeen electricity distribution units have been considered for the purpose of this study. Previous studies have generally used input-output DEA models for benchmarking and evaluation of electricity distribution units. However, this study considers an integrated deterministic DEA-PCA approach, since the DEA model should be verified and validated by a robust multivariate methodology such as PCA. Moreover, the DEA models are verified and validated by PCA and the Spearman and Kendall's tau correlation techniques, whereas previous studies lack such verification and validation features. Both input- and output-oriented DEA models are also used for sensitivity analysis of the input and output variables. Finally, this is the first study to present an integrated deterministic approach for the assessment and optimization of power distribution units in Iran
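
    For reference, the sketch below solves the standard input-oriented CCR efficiency model in envelopment form, the usual building block of such DEA studies (the paper's integrated DEA-PCA procedure is not reproduced, and the four-unit data set is hypothetical):

    ```python
    # Efficiency of unit k: minimize theta such that a composite unit formed by
    # weights lambda uses at most theta * inputs_k and produces at least outputs_k.
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[20., 5.], [30., 8.], [25., 6.], [40., 9.]])   # inputs  (n x m)
    Y = np.array([[100.], [140.], [120.], [150.]])               # outputs (n x s)
    n, m = X.shape

    def ccr_efficiency(k):
        # decision vector z = [theta, lambda_1 .. lambda_n]
        c = np.r_[1.0, np.zeros(n)]
        A_ub = np.block([[-X[k:k+1].T, X.T],                      # inputs:  sum lam_j x_ij <= theta x_ik
                         [np.zeros((Y.shape[1], 1)), -Y.T]])      # outputs: sum lam_j y_rj >= y_rk
        b_ub = np.r_[np.zeros(m), -Y[k]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
        return res.fun

    for k in range(n):
        print(f"unit {k}: efficiency = {ccr_efficiency(k):.3f}")
    ```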

  13. Performance Analysis of Recurrence Matrix Statistics for the Detection of Deterministic Signals in Noise

    National Research Council Canada - National Science Library

    Michalowicz, Joseph V; Nichols, Jonathan M; Bucholtz, Frank

    2008-01-01

    Understanding the limitations to detecting deterministic signals in the presence of noise, especially additive, white Gaussian noise, is of importance for the design of LPI systems and anti-LPI signal defense...

  14. Handbook of EOQ inventory problems stochastic and deterministic models and applications

    CERN Document Server

    Choi, Tsan-Ming

    2013-01-01

    This book explores deterministic and stochastic EOQ-model based problems and applications, presenting technical analyses of single-echelon EOQ model based inventory problems, and applications of the EOQ model for multi-echelon supply chain inventory analysis.
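
    For reference, the classical EOQ formula at the heart of the surveyed models is Q* = sqrt(2DK/h); the demand D, order cost K and holding cost h below are invented for illustration:

    ```python
    from math import sqrt

    def eoq(demand_per_year, order_cost, holding_cost_per_unit_year):
        # classical economic order quantity
        return sqrt(2.0 * demand_per_year * order_cost / holding_cost_per_unit_year)

    D, K, h = 12000.0, 150.0, 2.4
    q_star = eoq(D, K, h)
    total_cost = D / q_star * K + q_star / 2.0 * h   # ordering + holding cost at optimum
    print(f"optimal lot size: {q_star:.0f} units, annual cost: {total_cost:.0f}")
    ```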

  15. Recent achievements of the neo-deterministic seismic hazard assessment in the CEI region

    International Nuclear Information System (INIS)

    Panza, G.F.; Vaccari, F.; Kouteva, M.

    2008-03-01

    A review of the recent achievements of the innovative neo-deterministic approach for seismic hazard assessment through realistic earthquake scenarios has been performed. The procedure provides strong ground motion parameters for the purposes of earthquake engineering, based on deterministic seismic wave propagation modelling at different scales - regional, national and metropolitan. The main advantage of this neo-deterministic procedure is the simultaneous treatment of the contributions of the earthquake source and of the seismic wave propagation media to the strong motion at the target site/region, as required by basic physical principles. The neo-deterministic seismic microzonation procedure has been successfully applied to numerous metropolitan areas all over the world in the framework of several international projects. In this study, some examples focused on the CEI region, concerning both regional seismic hazard assessment and seismic microzonation of the selected metropolitan areas, are shown. (author)

  16. Matching allele dynamics and coevolution in a minimal predator-prey replicator model

    Energy Technology Data Exchange (ETDEWEB)

    Sardanyes, Josep [Complex Systems Lab (ICREA-UPF), Barcelona Biomedical Research Park (PRBB-GRIB), Dr. Aiguader 88, 08003 Barcelona (Spain)], E-mail: josep.sardanes@upf.edu; Sole, Ricard V. [Complex Systems Lab (ICREA-UPF), Barcelona Biomedical Research Park (PRBB-GRIB), Dr. Aiguader 88, 08003 Barcelona (Spain); Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501 (United States)

    2008-01-21

    A minimal Lotka-Volterra-type predator-prey model describing coevolutionary traits among entities, with an interaction strength influenced by a pair of haploid diallelic loci, is studied with a deterministic continuous-time model. We show a Hopf bifurcation governing the transition from evolutionary stasis to periodic Red Queen dynamics. If predator genotypes differ in their predation efficiency, the more efficient genotype asymptotically achieves lower stationary concentrations.
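
    A sketch of the predator-prey backbone only (the two-locus haploid genetics of the paper's model is omitted, and all parameters are illustrative): integrating the classic Lotka-Volterra equations displays the sustained oscillations that, with genotype-dependent interaction strengths, become Red Queen dynamics.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    a, b, c, d = 1.0, 0.1, 0.075, 1.5   # prey growth, predation, conversion, predator death

    def lotka_volterra(t, z):
        prey, pred = z
        return [a * prey - b * prey * pred, c * prey * pred - d * pred]

    sol = solve_ivp(lotka_volterra, (0, 50), [10.0, 5.0], dense_output=True, max_step=0.05)
    t = np.linspace(0, 50, 6)
    print(np.round(sol.sol(t), 2))      # prey/predator abundances oscillate around (20, 10)
    ```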

  17. On the application of deterministic optimization methods to stochastic control problems

    Science.gov (United States)

    Kramer, L. C.; Athans, M.

    1974-01-01

    A technique is presented by which deterministic optimization techniques, for example the maximum principle of Pontryagin, can be applied to stochastic optimal control problems formulated around linear systems with Gaussian noises and general cost criteria. Using this technique, the stochastic nature of the problem is suppressed except for two expectation operations, the optimization itself being deterministic. The use of the technique in treating problems with quadratic and nonquadratic costs is illustrated.

  18. Deterministic and stochastic simulation and analysis of biochemical reaction networks: the lactose operon example.

    Science.gov (United States)

    Yildirim, Necmettin; Kazanci, Caner

    2011-01-01

    A brief introduction to mathematical modeling of biochemical regulatory reaction networks is presented. Both deterministic and stochastic modeling techniques are covered, with examples from enzyme kinetics, coupled reaction networks with oscillatory dynamics, and bistability. The Yildirim-Mackey model for the lactose operon is used as an example to discuss and show how deterministic and stochastic methods can be used to investigate various aspects of this bacterial circuit. © 2011 Elsevier Inc. All rights reserved.
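
    A minimal deterministic-versus-stochastic comparison in this spirit, on a simple birth-death process rather than the full Yildirim-Mackey model (the rate constants are invented): the rate equation gives the smooth mean behaviour, while the Gillespie stochastic simulation algorithm produces one fluctuating realization.

    ```python
    import numpy as np

    k_prod, k_deg, t_end = 10.0, 0.5, 20.0

    # deterministic rate equation: dn/dt = k_prod - k_deg * n (solved analytically)
    def ode_solution(t, n0=0.0):
        n_ss = k_prod / k_deg
        return n_ss + (n0 - n_ss) * np.exp(-k_deg * t)

    # Gillespie stochastic simulation algorithm (SSA), one trajectory
    rng = np.random.default_rng(5)
    t, n = 0.0, 0
    while t < t_end:
        rates = np.array([k_prod, k_deg * n])   # production and degradation propensities
        total = rates.sum()
        t += rng.exponential(1.0 / total)       # time to next reaction
        n += 1 if rng.random() < rates[0] / total else -1

    print(f"steady state: ODE = {ode_solution(t_end):.1f}, one SSA sample = {n}")
    ```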

  19. Deterministic and Probabilistic Analysis of NPP Communication Bridge Resistance Due to Extreme Loads

    Directory of Open Access Journals (Sweden)

    Králik Juraj

    2014-12-01

    Full Text Available This paper presents the experience gained from the deterministic and probabilistic analysis of the reliability of a communication bridge structure's resistance to extreme loads - wind and earthquake. The efficiency of the bracing systems is considered using the example of the steel bridge between two NPP buildings. The advantages and disadvantages of the deterministic and probabilistic analyses of structural resistance are discussed. The advantages of utilizing the LHS method to analyze the safety and reliability of structures are presented.
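
    To illustrate why stratified sampling such as LHS is attractive for reliability analysis, the sketch below estimates a failure probability for a toy resistance-minus-load limit state; the distributions and their parameters are assumptions for illustration only, not the bridge model of the paper.

        import numpy as np
        from scipy.stats import norm, qmc

        sampler = qmc.LatinHypercube(d=2, seed=1)      # one stratum per sample and dimension
        u = sampler.random(n=10_000)                   # uniform [0,1)^2 LHS design

        # Map to assumed normal resistance R and load effect S (arbitrary units)
        R = norm.ppf(u[:, 0], loc=300.0, scale=30.0)
        S = norm.ppf(u[:, 1], loc=200.0, scale=40.0)

        pf = np.mean(R - S < 0.0)                      # failure when load exceeds resistance
        print(f"estimated failure probability: {pf:.4f}")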

  20. Deterministic methods in radiation transport. A compilation of papers presented February 4-5, 1992

    Energy Technology Data Exchange (ETDEWEB)

    Rice, A.F.; Roussin, R.W. [eds.]

    1992-06-01

    The Seminar on Deterministic Methods in Radiation Transport was held February 4-5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.

  2. f-MAC: A Deterministic Media Access Control Protocol Without Time Synchronization

    OpenAIRE

    Roedig, Utz; Barroso, Andre; Sreenan, Cormac J.

    2006-01-01

    Nodes in a wireless network transmit messages through a shared medium. Thus, a Media Access Control (MAC) protocol is necessary to regulate and coordinate medium access. For some application areas it is necessary to have a deterministic MAC protocol which can give guarantees on message delay and channel throughput. Schedule-based MAC protocols, based on time synchronization among nodes, are currently used to implement deterministic MAC protocols. Time synchronization is difficult and costly, ...

  3. Deterministic Modeling of the High Temperature Test Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Ortensi, J.; Cogliati, J. J.; Pope, M. A.; Ferrer, R. M.; Ougouag, A. M.

    2010-06-01

    Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability of the Next Generation Nuclear Plant (NGNP) project. In order to examine INL’s current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn) methods. A fine-group cross section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full-core solver used in this study and is based on the Green’s function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and the nodal diffusion solver codes. The results from this study show a consistent bias of 2–3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the

  4. Multi-scale dynamical behavior of spatially distributed systems: a deterministic point of view

    Science.gov (United States)

    Mangiarotti, S.; Le Jean, F.; Drapeau, L.; Huc, M.

    2015-12-01

    Physical and biophysical systems are spatially distributed systems. Their behavior can be observed or modelled spatially at various resolutions. In this work, a deterministic point of view is adopted to analyze multi-scale behavior, taking a set of ordinary differential equations (ODE) as the elementary part of the system. To perform the analyses, scenes of study are generated based on ensembles of identical elementary ODE systems. Without any loss of generality, their dynamics is chosen to be chaotic in order to ensure sensitivity to initial conditions, that is, one fundamental property of the atmosphere under unstable conditions [1]. The Rössler system [2] is used for this purpose for both its topological and algebraic simplicity [3,4]. Two cases are thus considered: the chaotic oscillators composing the scene of study are taken either as independent, or in phase synchronization. Scale behaviors are analyzed considering the scene of study as aggregations (basically obtained by spatially averaging the signal) or as associations (obtained by concatenating the time series). The global modeling technique is used to perform the numerical analyses [5]. One important result of this work is that, under phase synchronization, a scene of aggregated dynamics can be approximated by the elementary system composing the scene, but with a modified parameterization [6]. This is shown based on numerical analyses. It is then demonstrated analytically and generalized to a larger class of ODE systems. Preliminary applications to cereal crops observed from satellite are also presented. [1] Lorenz, Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130-141 (1963). [2] Rössler, An equation for continuous chaos, Phys. Lett. A, 57, 397-398 (1976). [3] Gouesbet & Letellier, Global vector-field reconstruction by using a multivariate polynomial L2 approximation on nets, Phys. Rev. E 49, 4955-4972 (1994). [4] Letellier, Roulin & Rössler, Inequivalent topologies of chaos in simple equations, Chaos, Solitons

  5. Modelling the protocol stack in NCS with deterministic and stochastic Petri net

    Science.gov (United States)

    Hui, Chen; Chunjie, Zhou; Weifeng, Zhu

    2011-06-01

    The protocol stack is the basis of networked control systems (NCS). Full or partial reconfiguration of the protocol stack offers both optimised communication service and system performance. Nowadays, field testing is unrealistic for determining the performance of a reconfigurable protocol stack, and the Petri net formal description technique offers the best combination of intuitive representation, tool support and analytical capabilities. Traditionally, separation between the different layers of the OSI model has been common practice. Nevertheless, such a layered modelling analysis framework of the protocol stack leads to a lack of global optimisation for protocol reconfiguration. In this article, we propose a general modelling analysis framework for NCS based on the cross-layer concept, which establishes an efficient system scheduling model by abstracting the time constraint, task interrelation, processor and bus sub-models from the upper and lower layers (application, data link and physical layers). Cross-layer design can help to overcome the inadequacy of global optimisation based on information sharing between protocol layers. To illustrate the framework, we take the controller area network (CAN) as a case study. The simulation results of the deterministic and stochastic Petri net (DSPN) model can help us adjust the message scheduling scheme and obtain better system performance.

  6. Seismic Hazard Assessment for a Characteristic Earthquake Scenario: Probabilistic-Deterministic Method

    Science.gov (United States)

    mouloud, Hamidatou

    2016-04-01

    The objective of this paper is to analyze the seismic activity and the statistical treatment of the seismicity catalogue of the Constantine region between 1357 and 2014, comprising 7007 seismic events. Our research is a contribution to improving seismic risk management by evaluating the seismic hazard in North-East Algeria. In the present study, earthquake hazard maps for the Constantine region are calculated. Probabilistic seismic hazard analysis (PSHA) is classically performed through the Cornell approach by using a uniform earthquake distribution over the source area and a given magnitude range. This study aims at extending the PSHA approach to the case of a characteristic earthquake scenario associated with an active fault. The approach integrates PSHA with a high-frequency deterministic technique for the prediction of peak and spectral ground motion parameters in a characteristic earthquake. The method is based on the site-dependent evaluation of the probability of exceedance for the chosen strong-motion parameter. We propose five seismotectonic zones. Five steps are necessary: (i) identification of potential sources of future earthquakes, (ii) assessment of their geological, geophysical and geometric characteristics, (iii) identification of the attenuation pattern of seismic motion, (iv) calculation of the hazard at a site, and finally (v) hazard mapping for a region. In this study, the procedure for earthquake hazard evaluation recently developed by Kijko and Sellevoll (1992) is used to estimate seismic hazard parameters in the northern part of Algeria.

  7. Efficiency of transport in periodic potentials: dichotomous noise contra deterministic force

    Science.gov (United States)

    Spiechowicz, J.; Łuczka, J.; Machura, L.

    2016-05-01

    We study the transport of an inertial Brownian particle moving in a symmetric and periodic one-dimensional potential, and subjected to both a symmetric, unbiased external harmonic force as well as biased dichotomous noise η (t), also known as a random telegraph signal or a two-state continuous-time Markov process. In doing so, we concentrate on the previously reported regime (Spiechowicz et al 2014 Phys. Rev. E 90 032104) for which non-negative biased noise η (t) in the form of generalized white Poissonian noise can induce anomalous transport processes similar to those generated by a deterministic constant force F = ⟨η (t)⟩, but significantly more effective than F, i.e. the particle moves much faster, the velocity fluctuations are noticeably reduced and the transport efficiency is enhanced several times. Here, we confirm this result for the case of dichotomous fluctuations which, in contrast to white Poissonian noise, can assume positive as well as negative values, and examine the role of thermal noise in the observed phenomenon. We focus our attention on the impact of the bidirectionality of dichotomous fluctuations and reveal that the effect of nonequilibrium-noise-enhanced efficiency is still detectable. This result may explain transport phenomena occurring in strongly fluctuating environments of both physical and biological origin. Our predictions can be corroborated experimentally by use of a setup that consists of a resistively and capacitively shunted Josephson junction.
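
    A bare-bones Euler-Maruyama sketch of this class of systems is given below: an inertial particle in a periodic potential driven by an unbiased harmonic force, biased telegraph noise and thermal noise. All parameter values are assumptions chosen only to show the mechanics, not the regimes studied in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        dt, steps = 1e-3, 500_000
        gamma, a, omega = 0.9, 4.2, 4.9          # damping and harmonic drive (assumed)
        eta_states, mu = (1.0, -0.5), 2.0        # dichotomous noise values and switching rate
        D = 1e-3                                 # thermal noise intensity

        x, v, s = 0.0, 0.0, 0
        for n in range(steps):
            t = n*dt
            if rng.random() < mu*dt:             # telegraph switching (first-order approximation)
                s = 1 - s
            det = -np.cos(x) + a*np.cos(omega*t) + eta_states[s] - gamma*v
            v += det*dt + np.sqrt(2.0*gamma*D*dt)*rng.standard_normal()
            x += v*dt

        print("mean velocity:", x/(steps*dt))    # directed transport induced by the biased noise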

  8. The giant acoustic atom - a single quantum system with a deterministic time delay

    Science.gov (United States)

    Guo, Lingzhen; Grimsmo, Arne; Frisk Kockum, Anton; Pletyukhov, Mikhail; Johansson, Göran

    2017-04-01

    We investigate the quantum dynamics of a single transmon qubit coupled to surface acoustic waves (SAWs) via two distant connection points. Since the acoustic speed is five orders of magnitude slower than the speed of light, the travelling time between the two connection points needs to be taken into account. Therefore, we treat the transmon qubit as a giant atom with a deterministic time delay. We find that the spontaneous emission of the system, formed by the giant atom and the SAWs between its connection points, initially follows a polynomial decay law instead of an exponential one, as would be the case for a small atom. We obtain exact analytical results for the scattering properties of the giant atom up to two-phonon processes by using a diagrammatic approach. The time delay gives rise to novel features in the reflection, transmission, power spectra, and second-order correlation functions of the system. Furthermore, we find the short-time dynamics of the giant atom for arbitrary drive strength by a numerically exact method for open quantum systems with a finite-time-delay feedback loop. L. G. acknowledges financial support from Carl-Zeiss Stiftung (0563-2.8/508/2).

  9. Minimization of the LCA impact of thermodynamic cycles using a combined simulation-optimization approach

    International Nuclear Information System (INIS)

    Brunet, Robert; Cortés, Daniel; Guillén-Gosálbez, Gonzalo; Jiménez, Laureano; Boer, Dieter

    2012-01-01

    This work presents a computational approach for the simultaneous minimization of the total cost and environmental impact of thermodynamic cycles. Our method combines process simulation, multi-objective optimization and life cycle assessment (LCA) within a unified framework that identifies, in a systematic manner, optimal design and operating conditions according to several economic and LCA impacts. Our approach takes advantage of the complementary strengths of process simulation (in which mass and energy balances and thermodynamic calculations are implemented in an easy manner) and rigorous deterministic optimization tools. We demonstrate the capabilities of this strategy by means of two case studies in which we address the design of a 10 MW Rankine cycle modeled in Aspen Hysys, and a 90 kW ammonia-water absorption cooling cycle implemented in Aspen Plus. Numerical results show that it is possible to achieve environmental and cost savings using our rigorous approach. - Highlights: ► Novel framework for the optimal design of thermodynamic cycles. ► Combined use of simulation and optimization tools. ► Optimal design and operating conditions according to several economic and LCA impacts. ► Design of a 10 MW Rankine cycle in Aspen Hysys, and a 90 kW absorption cycle in Aspen Plus.

  10. Legal incentives for minimizing waste

    International Nuclear Information System (INIS)

    Clearwater, S.W.; Scanlon, J.M.

    1991-01-01

    Waste minimization, or pollution prevention, has become an integral component of federal and state environmental regulation. Minimizing waste offers many economic and public relations benefits. In addition, waste minimization efforts can also dramatically reduce potential criminal liability. This paper addresses the legal incentives for minimizing waste under current and proposed environmental laws and regulations.

  11. Studies of criticality Monte Carlo method convergence: use of a deterministic calculation and automated detection of the transient

    International Nuclear Information System (INIS)

    Jinaphanh, A.

    2012-01-01

    Monte Carlo criticality calculation allows one to estimate the effective multiplication factor as well as local quantities such as local reaction rates. Some configurations presenting weak neutronic coupling (high burn-up profile, complete reactor core, ...) may induce biased estimations for k-eff or reaction rates. In order to improve the robustness of the iterative Monte Carlo methods, a coupling with a deterministic code was studied. An adjoint flux is obtained by a deterministic calculation and then used in the Monte Carlo. The initial guess is then automated, the sampling of fission sites is modified and the random walk of neutrons is modified using splitting and Russian roulette strategies. An automated convergence detection method has been developed. It locates and suppresses the transient due to the initialization in an output series, applied here to k-eff and Shannon entropy. It relies on modeling stationary series by an order-1 autoregressive process and applying statistical tests based on a Student bridge statistic. This method can easily be extended to every output of an iterative Monte Carlo. Methods developed in this thesis are tested on different test cases. (author)
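
    The thesis models the stationary part of an output series as an order-1 autoregressive process with Student-bridge-type tests. The sketch below is a deliberately simplified stand-in for that idea: it scans for the earliest truncation point after which the two halves of the remaining series have statistically indistinguishable means. The toy k-eff series and test are assumptions, not the thesis' exact procedure.

        import numpy as np
        from scipy import stats

        def truncation_point(series, alpha=0.05, min_tail=40):
            """Earliest index k such that series[k:] looks stationary (crude mean test)."""
            series = np.asarray(series, dtype=float)
            for k in range(0, len(series) - min_tail):
                tail = series[k:]
                half = len(tail)//2
                _, p = stats.ttest_ind(tail[:half], tail[half:])
                if p > alpha:            # no detectable drift between the two halves
                    return k
            return len(series)

        # Toy k-eff-like series: exponential transient settling onto a noisy plateau
        rng = np.random.default_rng(2)
        cycles = np.arange(500)
        keff = 1.0 + 0.05*np.exp(-cycles/60.0) + 0.002*rng.standard_normal(500)
        print("discard the first", truncation_point(keff), "cycles")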

  12. On Time with Minimal Expected Cost!

    DEFF Research Database (Denmark)

    David, Alexandre; Jensen, Peter Gjøl; Larsen, Kim Guldstrand

    2014-01-01

    ) timed game essentially defines an infinite-state Markov (reward) decision process. In this setting the objective is classically to find a strategy that will minimize the expected reachability cost, but with no guarantees on worst-case behaviour. In this paper, we provide efficient methods for computing...... reachability strategies that will both ensure worst-case time-bounds as well as provide (near-) minimal expected cost. Our method extends the synthesis algorithms of the synthesis tool Uppaal-Tiga with suitably adapted reinforcement learning techniques, that exhibits several orders of magnitude improvements w...

  13. Minimal Super Technicolor

    DEFF Research Database (Denmark)

    Antola, M.; Di Chiara, S.; Sannino, F.

    2011-01-01

    We introduce novel extensions of the Standard Model featuring a supersymmetric technicolor sector (supertechnicolor). As the first minimal conformal supertechnicolor model we consider N=4 Super Yang-Mills which breaks to N=1 via the electroweak interactions. This is a well defined, economical...... and calculable extension of the SM involving the smallest number of fields. It constitutes an explicit example of a natural superconformal extension of the Standard Model featuring a well defined connection to string theory. It allows to interpolate, depending on how we break the underlying supersymmetry...

  14. Minimally Invasive Parathyroidectomy

    Directory of Open Access Journals (Sweden)

    Lee F. Starker

    2011-01-01

    Full Text Available Minimally invasive parathyroidectomy (MIP) is an operative approach for the treatment of primary hyperparathyroidism (pHPT). Currently, routine use of improved preoperative localization studies, cervical block anesthesia in the conscious patient, and intraoperative parathyroid hormone analyses aid in guiding surgical therapy. MIP requires less surgical dissection causing decreased trauma to tissues, can be performed safely in the ambulatory setting, and is at least as effective as standard cervical exploration. This paper reviews advances in preoperative localization, anesthetic techniques, and intraoperative management of patients undergoing MIP for the treatment of pHPT.

  15. Wildfire susceptibility mapping: comparing deterministic and stochastic approaches

    Science.gov (United States)

    Pereira, Mário; Leuenberger, Michael; Parente, Joana; Tonini, Marj

    2016-04-01

    Conservation of Nature and Forests (ICNF) (http://www.icnf.pt/portal), which provides a detailed description of the shape and the size of the area burnt by each fire in each year of occurrence. Two methodologies for susceptibility mapping were compared. The first is the deterministic approach, based on the study of Verde and Zêzere (2010), which includes the computation of the favorability scores for each variable and the fire occurrence probability, as well as the validation of each model resulting from the integration of different variables. Second, as a non-linear method we selected the Random Forest algorithm (Breiman, 2001): this led us to identify the most relevant variables conditioning the presence of wildfire and allowed us to generate a map of fire susceptibility based on the resulting variable importance measures. By means of GIS techniques, we mapped the obtained predictions, which represent the susceptibility of the study area to fires. Results obtained applying both methodologies for wildfire susceptibility mapping, as well as wildfire hazard maps for different total annual burnt area scenarios, were compared with the reference maps, allowing us to assess the best approach for susceptibility mapping in Portugal. References: - Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32. - Verde, J. C., & Zêzere, J. L. (2010). Assessment and validation of wildfire susceptibility and hazard in Portugal. Natural Hazards and Earth System Science, 10(3), 485-497.

  16. A deterministic aggregate production planning model considering quality of products

    International Nuclear Information System (INIS)

    Madadi, Najmeh; Wong, Kuan Yew

    2013-01-01

    Aggregate Production Planning (APP) is a medium-term planning approach concerned with the lowest-cost method of production planning to meet customers' requirements and to satisfy fluctuating demand over a planning time horizon. The APP problem has been studied widely since it was introduced and formulated in the 1950s. However, in the studies conducted in the APP area, most researchers have concentrated on some common objectives such as minimization of cost, of fluctuation in the number of workers, and of inventory level. Specifically, maintaining quality at a desirable level as an objective while minimizing cost has not been considered in previous studies. In this study, an attempt has been made to develop a multi-objective mixed integer linear programming model that serves those companies aiming to incur the minimum level of operational cost while maintaining quality at an acceptable level. In order to obtain the solution to the multi-objective model, the Fuzzy Goal Programming approach and the max-min operator of Bellman-Zadeh were applied to the model. At the final step, IBM ILOG CPLEX Optimization Studio software was used to obtain the experimental results based on the data collected from an automotive parts manufacturing company. The results show that incorporating quality in the model imposes some costs; however, a trade-off should be made between the cost of producing products with higher quality and the cost that the firm may incur due to customer dissatisfaction and sales losses.
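
    The Bellman-Zadeh max-min step can be written as an ordinary linear program: maximize the smallest membership degree lambda over all fuzzified objectives. The two-product toy instance below trades production cost against an aggregate quality score; all coefficients and membership ranges are assumptions, not the paper's automotive data.

        from scipy.optimize import linprog

        # Decision variables: x1, x2 (units of two products), lam (min membership).
        # Cost f1 = 3*x1 + 5*x2, quality f2 = 0.70*x1 + 0.95*x2, demand x1 + x2 = 100.
        # Assumed ideal/anti-ideal values: f1 in [300, 500], f2 in [70, 95].
        # Memberships: mu1 = (500 - f1)/200 >= lam, mu2 = (f2 - 70)/25 >= lam.
        c = [0.0, 0.0, -1.0]                           # maximize lam
        A_ub = [[3.0, 5.0, 200.0],                     # f1 + 200*lam <= 500
                [-0.70, -0.95, 25.0]]                  # -f2 + 25*lam <= -70
        b_ub = [500.0, -70.0]
        A_eq, b_eq = [[1.0, 1.0, 0.0]], [100.0]
        bounds = [(0.0, None), (0.0, None), (0.0, 1.0)]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        x1, x2, lam = res.x
        print(f"x1={x1:.1f}, x2={x2:.1f}, min membership={lam:.2f}")  # balanced compromise

    With these numbers the compromise lands at x1 = x2 = 50 with lambda = 0.5: neither the cost nor the quality membership can be raised without lowering the other.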

  17. When to conduct probabilistic linkage vs. deterministic linkage? A simulation study.

    Science.gov (United States)

    Zhu, Ying; Matsuyama, Yutaka; Ohashi, Yasuo; Setoguchi, Soko

    2015-08-01

    When unique identifiers are unavailable, successful record linkage depends greatly on data quality and the types of variables available. While probabilistic linkage theoretically captures more true matches than deterministic linkage by allowing imperfection in identifiers, studies have shown inconclusive results, likely due to variations in data quality, implementation of the linkage methodology and validation method. This simulation study aimed to understand the data characteristics that affect the performance of probabilistic vs. deterministic linkage. We created ninety-six scenarios that represent real-life situations using non-unique identifiers. We systematically introduced a range of discriminative power, rates of missingness and error, and file sizes to vary linkage patterns and difficulties. We assessed the performance difference of the linkage methods using standard validity measures and computation time. Across scenarios, deterministic linkage showed an advantage in PPV while probabilistic linkage showed an advantage in sensitivity. Probabilistic linkage uniformly outperformed deterministic linkage, as the former generated linkages with a better trade-off between sensitivity and PPV regardless of data quality. However, with low rates of missingness and error in the data, deterministic linkage did not perform significantly worse. The implementation of deterministic linkage in SAS took less than 1 min, and probabilistic linkage took 2 min to 2 h depending on file size. Our simulation study demonstrated that the intrinsic rates of missingness and error of the linkage variables were key to choosing between linkage methods. In general, probabilistic linkage was a better choice, but for exceptionally good quality data (<5% error), deterministic linkage was a more resource-efficient choice. Copyright © 2015 Elsevier Inc. All rights reserved.
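
    The operational difference between the two linkage styles can be reduced to a few lines. Deterministic linkage demands exact agreement on every identifier, while probabilistic (Fellegi-Sunter-style) linkage sums log-likelihood weights per field. The m- and u-probabilities below are assumed for illustration; in practice they would be estimated from the data.

        import math

        KEYS = ("last", "first", "dob")
        M = {"last": 0.95, "first": 0.90, "dob": 0.99}    # P(agree | true match), assumed
        U = {"last": 0.01, "first": 0.05, "dob": 0.001}   # P(agree | non-match), assumed

        def deterministic_link(a, b):
            return all(a[k] == b[k] for k in KEYS)        # one typo breaks the link

        def probabilistic_score(a, b):
            s = 0.0
            for k in KEYS:
                if a[k] == b[k]:
                    s += math.log2(M[k]/U[k])             # agreement weight
                else:
                    s += math.log2((1 - M[k])/(1 - U[k])) # disagreement penalty
            return s                                      # classify against upper/lower thresholds

        a = {"last": "smith", "first": "ann",  "dob": "1980-02-01"}
        b = {"last": "smith", "first": "anne", "dob": "1980-02-01"}
        print(deterministic_link(a, b), round(probabilistic_score(a, b), 2))

    On this pair, deterministic linkage fails (False) because of the "ann"/"anne" typo, while the probabilistic score remains strongly positive, illustrating the sensitivity advantage reported above.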

  18. Deterministic Effects of Occupational Exposures in the Mayak Nuclear Workers Cohort

    International Nuclear Information System (INIS)

    Azinova, T. V.; Okladnikova, N. D.; Sumina, M. V.; Pesternikova, V. S.; Osovets, V. S.; Druzhimina, M. B.; Seminikhina, N. g.

    2004-01-01

    The widespread utilization of nuclear energy in recent decades has led to a steady increase in the number of people exposed to ionizing radiation sources. In order to predict radiation risks, it is important to have and apply all the experience in the assessment of health effects due to radiation exposures accumulated by now in different countries. The proposed report presents results of the long-term follow-up of a cohort of nuclear workers at the Mayak Production Association, which was the first nuclear facility in Russia. The established system of individual dosimetry of external exposure, monitoring of internal radiation and a special system of medical follow-up of healthy nuclear workers during the last 50 years allowed the collection of unique primary data to study radiation effects, their patterns and the mechanisms specific to exposure dose. The study cohort includes 61 percent males and 39 percent females. The vital status is known for 90 percent of cases; 44 percent of the workers are still alive and undergo regular medical examination in our clinic. Unfortunately, by now 50 percent of the workers have died, and 6 percent were lost to follow-up. Total doses from chronic external gamma rays in the cohort ranged from 0.6 to 10.8 Gy (annual exposure doses were from 0.001 to 7.4 Gy), and Pu body burden was from 0.3 to 72.3 kBq. The most intensive chronic exposure of workers was registered from 1948 to 1958. At this time, 19 radiation accidents occurred at the Mayak PA; thus, the highest incidence of deterministic effects was observed in exactly this period. In the cohort of Mayak nuclear workers, there were diagnosed 60 cases of acute radiation syndrome (I to IV degrees of severity), 2079 cases of chronic radiation sickness, 120 cases of plutonium pneumosclerosis, 5 cases of radiation cataracts, and over 400 cases of local radiation injuries. The report will present dependences of the observed effects on absorbed radiation dose and dose rate in terms of acute radiation

  19. Minimally extended SILH

    International Nuclear Information System (INIS)

    Chala, Mikael; Grojean, Christophe; Humboldt-Univ. Berlin; Lima, Leonardo de; Univ. Estadual Paulista, Sao Paulo

    2017-03-01

    Higgs boson compositeness is a phenomenologically viable scenario addressing the hierarchy problem. In minimal models, the Higgs boson is the only degree of freedom of the strong sector below the strong interaction scale. We present here the simplest extension of such a framework with an additional composite spin-zero singlet. To this end, we adopt an effective field theory approach and develop a set of rules to estimate the size of the various operator coefficients, relating them to the parameters of the strong sector and its structural features. As a result, we obtain the patterns of new interactions affecting both the new singlet and the Higgs boson's physics. We identify the characteristics of the singlet field which cause its effects on Higgs physics to dominate over the ones inherited from the composite nature of the Higgs boson. Our effective field theory construction is supported by comparisons with explicit UV models.

  20. Minimal Hepatic Encephalopathy

    Directory of Open Access Journals (Sweden)

    Laura M Stinton

    2013-01-01

    Full Text Available Minimal hepatic encephalopathy (MHE) is the earliest form of hepatic encephalopathy and can affect up to 80% of cirrhotic patients. By definition, it has no obvious clinical manifestation and is characterized by neurocognitive impairment in attention, vigilance and integrative function. Although often not considered to be clinically relevant and, therefore, not diagnosed or treated, MHE has been shown to affect daily functioning, quality of life, driving and overall mortality. The diagnosis of MHE has traditionally been achieved through neuropsychological examination, psychometric tests or the newer critical flicker frequency test. A new smartphone application (EncephalApp Stroop Test) may serve to function as a screening tool for patients requiring further testing. In addition to physician reporting and driving restrictions, medical treatment for MHE includes non-absorbable disaccharides (eg, lactulose), probiotics or rifaximin. Liver transplantation may not result in reversal of the cognitive deficits associated with MHE.

  1. Minimal hepatic encephalopathy.

    Science.gov (United States)

    Stinton, Laura M; Jayakumar, Saumya

    2013-10-01

    Minimal hepatic encephalopathy (MHE) is the earliest form of hepatic encephalopathy and can affect up to 80% of cirrhotic patients. By definition, it has no obvious clinical manifestation and is characterized by neurocognitive impairment in attention, vigilance and integrative function. Although often not considered to be clinically relevant and, therefore, not diagnosed or treated, MHE has been shown to affect daily functioning, quality of life, driving and overall mortality. The diagnosis of MHE has traditionally been achieved through neuropsychological examination, psychometric tests or the newer critical flicker frequency test. A new smartphone application (EncephalApp Stroop Test) may serve to function as a screening tool for patients requiring further testing. In addition to physician reporting and driving restrictions, medical treatment for MHE includes non-absorbable disaccharides (eg, lactulose), probiotics or rifaximin. Liver transplantation may not result in reversal of the cognitive deficits associated with MHE.

  2. Learn with SAT to Minimize Büchi Automata

    Directory of Open Access Journals (Sweden)

    Stephan Barth

    2012-10-01

    Full Text Available We describe a minimization procedure for nondeterministic Büchi automata (NBA). For an automaton A, another automaton A_min with the minimal number of states is learned with the help of a SAT solver. This is done by successively computing automata A' that approximate A in the sense that they accept a given finite set of positive examples and reject a given finite set of negative examples. In the course of the procedure these example sets are successively increased. Thus, our method can be seen as an instance of a generic learning algorithm based on a "minimally adequate teacher" in the sense of Angluin. We use a SAT solver to find an NBA for given sets of positive and negative examples. We use complementation via construction of deterministic parity automata to check candidates computed in this manner for equivalence with A. Failure of equivalence yields new positive or negative examples. Our method proved successful on complete samplings of small automata and on quite a few examples of bigger automata. We successfully ran the minimization on over ten thousand automata with mostly up to ten states, including the complements of all possible automata with two states and alphabet size three, and discuss results and runtimes; single examples had over 100 states.

  3. Deterministic mean-variance-optimal consumption and investment

    DEFF Research Database (Denmark)

    Christiansen, Marcus; Steffensen, Mogens

    2013-01-01

    In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consum...

  4. Swarm robotics and minimalism

    Science.gov (United States)

    Sharkey, Amanda J. C.

    2007-09-01

    Swarm Robotics (SR) is closely related to Swarm Intelligence, and both were initially inspired by studies of social insects. Their guiding principles are based on their biological inspiration and take the form of an emphasis on decentralized local control and communication. Earlier studies went a step further in emphasizing the use of simple reactive robots that only communicate indirectly through the environment. More recently SR studies have moved beyond these constraints to explore the use of non-reactive robots that communicate directly, and that can learn and represent their environment. There is no clear agreement in the literature about how far such extensions of the original principles could go. Should there be any limitations on the individual abilities of the robots used in SR studies? Should knowledge of the capabilities of social insects lead to constraints on the capabilities of individual robots in SR studies? There is a lack of explicit discussion of such questions, and researchers have adopted a variety of constraints for a variety of reasons. A simple taxonomy of swarm robotics is presented here with the aim of addressing and clarifying these questions. The taxonomy distinguishes subareas of SR based on the emphases and justifications for minimalism and individual simplicity.

  5. Minimal dilaton model

    Directory of Open Access Journals (Sweden)

    Oda Kin-ya

    2013-05-01

    Full Text Available Both the ATLAS and CMS experiments at the LHC have reported the observation of a particle of mass around 125 GeV which is consistent with the Standard Model (SM) Higgs boson, but with an excess of events beyond the SM expectation in the diphoton decay channel at each of them. There still remains room for a logical possibility that we are not seeing the SM Higgs but something else. Here we introduce the minimal dilaton model in which the LHC signals are explained by an extra singlet scalar of mass around 125 GeV that slightly mixes with the SM Higgs heavier than 600 GeV. When this scalar has a vacuum expectation value well beyond the electroweak scale, it can be identified as a linearly realized version of a dilaton field. Though the current experimental constraints from the Higgs search disfavour such a region, the singlet scalar model itself still provides a viable alternative to the SM Higgs in interpreting its search results.

  6. Deterministic analysis of operational events in nuclear power plants. Proceedings of a technical meeting

    International Nuclear Information System (INIS)

    2007-03-01

    Computer codes are being used to analyse operational events in nuclear power plants, but until now no special attention has been given to the dissemination of the benefits from these analyses. The IAEA's Incident Reporting System contains more than 3000 reported operational events. Even though deterministic analyses were certainly performed for some of them, only a few reports are supplemented by the results of a computer code analysis. From 23-26 May 2005, a Technical Meeting on Deterministic Analysis of Operational Events in Nuclear Power Plants was organized by the IAEA and held at the International Centre of Croatian Universities in Dubrovnik, Croatia. The objective of the meeting was to provide an international forum for presentations and discussions on how deterministic analysis can be utilized for the evaluation of operational events at nuclear power plants in addition to the traditional root cause evaluation methods.

  7. Deterministic Approach for Estimating Critical Rainfall Threshold of Rainfall-induced Landslide in Taiwan

    Science.gov (United States)

    Chung, Ming-Chien; Tan, Chih-Hao; Chen, Mien-Min; Su, Tai-Wei

    2013-04-01

    Taiwan is an active mountain belt created by the oblique collision between the northern Luzon arc and the Asian continental margin. The inherent complexities of its geological nature create numerous discontinuities through rock masses and relatively steep hillsides on the island. In recent years, the increase in the frequency and intensity of extreme natural events due to global warming or climate change has brought significant landslides. The causes of landslides on these slopes are attributed to a number of factors. As is well known, rainfall is one of the most significant triggering factors for landslide occurrence. In general, rainfall infiltration results in changing the suction and the moisture of the soil, raising the unit weight of the soil, and reducing the shear strength of the soil in the colluvium of a landslide. The stability of a landslide is closely related to the groundwater pressure in response to rainfall infiltration, the geological and topographical conditions, and the physical and mechanical parameters. To assess the potential susceptibility to landslides, effective modeling of rainfall-induced landslides is essential. In this paper, a deterministic approach is adopted to estimate the critical rainfall threshold of rainfall-induced landslides. The critical rainfall threshold is defined as the accumulated rainfall at which the safety factor of the slope is equal to 1.0. First, the deterministic approach establishes the hydrogeological conceptual model of the slope based on a series of in-situ investigations, including geological drilling, surface geological investigation, geophysical investigation, and borehole explorations. The material strength and hydraulic properties of the model were given by field and laboratory tests. Second, the hydraulic and mechanical parameters of the model are calibrated against long-term monitoring data. Furthermore, a two-dimensional numerical program, GeoStudio, was employed to perform the modelling practice. Finally
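
    The core of the deterministic approach - declaring the critical rainfall reached when the factor of safety drops to 1.0 - can be sketched with an infinite-slope stability model in which pore pressure is assumed to rise linearly with accumulated rainfall. This is a crude stand-in for the infiltration modelling done in GeoStudio, and every parameter below is hypothetical.

        import numpy as np
        from scipy.optimize import brentq

        c, phi, beta = 8.0e3, np.radians(30.0), np.radians(35.0)  # cohesion (Pa), friction, slope
        gamma, gamma_w, z = 19.0e3, 9.81e3, 3.0                   # unit weights (N/m3), depth (m)

        def factor_of_safety(rain_mm, k=0.002):
            """Infinite-slope FS; saturation fraction assumed proportional to rainfall."""
            m = min(k*rain_mm, 1.0)                                # saturated fraction of the slab
            u = m*gamma_w*z*np.cos(beta)**2                        # pore water pressure
            resisting = c + (gamma*z*np.cos(beta)**2 - u)*np.tan(phi)
            driving = gamma*z*np.sin(beta)*np.cos(beta)
            return resisting/driving

        # Critical accumulated rainfall: smallest value with FS = 1.0
        critical = brentq(lambda r: factor_of_safety(r) - 1.0, 0.0, 500.0)
        print(f"critical rainfall threshold ~ {critical:.0f} mm")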

  8. Observations and modeling of deterministic properties of human ...

    Indian Academy of Sciences (India)

    Simple models show that in Type-I intermittency a characteristic U-shaped probability distribution is obtained for the laminar phase length. The laminar phase length distribution characteristic for Type-I intermittency may be obtained in human heart rate variability data for some cases of pathology. The heart and its regulatory ...

  9. Improving Deterministic Reserve Requirements for Security Constrained Unit Commitment and Scheduling Problems in Power Systems

    Science.gov (United States)

    Wang, Fengyu

    Traditional deterministic reserve requirements rely on ad-hoc, rule-of-thumb methods to determine adequate reserve in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. Existing deterministic reserve requirements acquire operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserve. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict the transfer capabilities and the network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserve by explicitly modelling uncertainties, there are still scalability as well as pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is their potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. Three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch

  10. Verification & Validation of High-Order Short-Characteristics-Based Deterministic Transport Methodology on Unstructured Grids

    International Nuclear Information System (INIS)

    Azmy, Yousry; Wang, Yaqi

    2013-01-01

    The research team has developed a practical, high-order, discrete-ordinates, short characteristics neutron transport code for three-dimensional configurations represented on unstructured tetrahedral grids that can be used for realistic reactor physics applications at both the assembly and core levels. This project will perform a comprehensive verification and validation of this new computational tool against both a continuous-energy Monte Carlo simulation (e.g. MCNP) and experimentally measured data, an essential prerequisite for its deployment in reactor core modeling. Verification is divided into three phases. The team will first conduct spatial mesh and expansion order refinement studies to monitor convergence of the numerical solution to reference solutions. This is quantified by convergence rates that are based on integral error norms computed from the cell-by-cell difference between the code's numerical solution and its reference counterpart. The latter is either analytic or very fine- mesh numerical solutions from independent computational tools. For the second phase, the team will create a suite of code-independent benchmark configurations to enable testing the theoretical order of accuracy of any particular discretization of the discrete ordinates approximation of the transport equation. For each tested case (i.e. mesh and spatial approximation order), researchers will execute the code and compare the resulting numerical solution to the exact solution on a per cell basis to determine the distribution of the numerical error. The final activity comprises a comparison to continuous-energy Monte Carlo solutions for zero-power critical configuration measurements at Idaho National Laboratory's Advanced Test Reactor (ATR). Results of this comparison will allow the investigators to distinguish between modeling errors and the above-listed discretization errors introduced by the deterministic method, and to separate the sources of uncertainty.

  11. A non-deterministic approach to forecasting the trophic evolution of lakes

    Directory of Open Access Journals (Sweden)

    Roberto Bertoni

    2016-03-01

    Full Text Available Limnologists have long recognized that one of the goals of their discipline is to increase its predictive capability. In recent years, the role of prediction in applied ecology has escalated, mainly due to man's increased ability to change the biosphere. Such alterations often came with unplanned and noticeably negative side effects mushrooming from lack of proper attention to long-term consequences. Regression analysis of common limnological parameters has been successfully applied to develop predictive models relating the variability of limnological parameters to specific key causes. These approaches, though, are biased by the requirement of an a priori cause-relation assumption, oftentimes difficult to find in the complex, nonlinear relationships entangling ecological data. A set of quantitative tools that can help address current environmental challenges while avoiding such restrictions is currently being researched and developed within the framework of ecological informatics. One of these approaches, attempting to model the relationship between a set of inputs and known outputs, is based on genetic algorithms and programming (GP). This stochastic optimization tool is based on the process of evolution in natural systems and was inspired by a direct analogy to sexual reproduction and Charles Darwin's principle of natural selection. GP works through genetic algorithms that use selection and recombination operators to generate a population of equations. Thanks to a 25-year-long time series of regular limnological data, the deep, large, oligotrophic Lake Maggiore (Northern Italy) is the ideal case study to test the predictive ability of GP. Testing of GP on the multi-year data series of this lake has allowed us to verify the forecasting efficacy of the models emerging from GP application. In addition, this non-deterministic approach leads to the discovery of non-obvious relationships between variables and enabled the formulation of new stochastic models.

  12. Deterministic SEIRS Epidemic Model for Modeling Vital Dynamics, Vaccinations, and Temporary Immunity

    Directory of Open Access Journals (Sweden)

    Marek B. Trawicki

    2017-01-01

    Full Text Available In this paper, the author proposes a new SEIRS model that generalizes several classical deterministic epidemic models (e.g., SIR, SIS, SEIR and SEIRS) involving the relationships between the susceptible S, exposed E, infected I, and recovered R individuals for understanding the proliferation of infectious diseases. As a way to incorporate the most important features of the previous models under the assumption of homogeneous mixing (mass-action principle) of the individuals in the population N, the SEIRS model utilizes vital dynamics with unequal birth and death rates, vaccinations for newborns and non-newborns, and temporary immunity. In order to determine the equilibrium points, namely the disease-free and endemic equilibrium points, and study their local stability behaviors, the SEIRS model is rescaled with the total time-varying population and analyzed according to its epidemic condition R0 for two cases of no epidemic (R0 ≤ 1) and epidemic (R0 > 1) using the time series and phase portraits of the susceptible s, exposed e, infected i, and recovered r individuals. Based on the experimental results using a set of arbitrarily defined parameters for horizontal transmission of the infectious diseases, the proportional population of the SEIRS model consisted primarily of the recovered r (0.7-0.9) individuals and susceptible s (0.0-0.1) individuals (epidemic), and recovered r (0.9) individuals with only a small proportional population for the susceptible s (0.1) individuals (no epidemic). Overall, the initial conditions for the susceptible s, exposed e, infected i, and recovered r individuals reached the corresponding equilibrium point for local stability: no epidemic (DFE X̄_DFE) and epidemic (EE X̄_EE).
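
    A direct transcription of such a SEIRS system (vital dynamics with unequal birth and death rates, newborn vaccination, temporary immunity) is sketched below in rescaled proportions; the rate constants are arbitrary assumptions, not the paper's parameter set.

        import numpy as np
        from scipy.integrate import solve_ivp

        b, d = 0.02, 0.015                 # birth and death rates (unequal, assumed)
        beta, sigma = 0.9, 0.25            # transmission and incubation rates
        gam, xi, p = 0.2, 0.05, 0.3        # recovery, immunity loss, newborn vaccination

        def seirs(t, y):
            s, e, i, r = y
            ds = b*(1 - p) - beta*s*i + xi*r - d*s
            de = beta*s*i - (sigma + d)*e
            di = sigma*e - (gam + d)*i
            dr = b*p + gam*i - (xi + d)*r
            return [ds, de, di, dr]

        R0 = beta*sigma/((sigma + d)*(gam + d))   # epidemic threshold condition
        sol = solve_ivp(seirs, (0.0, 400.0), [0.95, 0.0, 0.05, 0.0], max_step=1.0)
        print(f"R0 = {R0:.2f}", "->", "epidemic" if R0 > 1 else "no epidemic")
        print("long-run proportions (s, e, i, r):", np.round(sol.y[:, -1], 3))

    With these assumed rates R0 is well above 1, so the trajectories settle onto the endemic equilibrium rather than the disease-free one, mirroring the two cases analyzed in the paper.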

  13. Neo-Deterministic Seismic Hazard Assessment at Watts Bar Nuclear Power Plant Site, Tennessee, USA

    Science.gov (United States)

    Brandmayr, E.; Cameron, C.; Vaccari, F.; Fasan, M.; Romanelli, F.; Magrin, A.; Vlahovic, G.

    2017-12-01

    Watts Bar Nuclear Power Plant (WBNPP) is located within the Eastern Tennessee Seismic Zone (ETSZ), the second most naturally active seismic zone in the US east of the Rocky Mountains. The largest instrumental earthquakes in the ETSZ are M 4.6, although paleoseismic evidence supports events of M≥6.5. Events are mainly strike-slip and occur on steeply dipping planes at an average depth of 13 km. In this work, we apply the neo-deterministic seismic hazard assessment to estimate the potential seismic input at the plant site, which has recently been targeted by the Nuclear Regulatory Commission for a seismic hazard reevaluation. First, we perform a parametric test on some seismic source characteristics (i.e. distance, depth, strike, dip and rake) using a one-dimensional regional bedrock model to define the most conservative scenario earthquakes. Then, for the selected scenario earthquakes, the estimate of the ground motion input at WBNPP is refined using a two-dimensional local structural model (based on the plant's operator documentation) with topography, thus looking for site amplification and different possible rupture processes at the source. WBNPP features a safe shutdown earthquake (SSE) design with PGA of 0.18 g and maximum spectral amplification (SA, 5% damped) of 0.46 g (at periods between 0.15 and 0.5 s). Our results suggest that, although for most of the considered scenarios the PGA is relatively low, SSE values can be reached and exceeded in the case of the most conservative scenario earthquakes.

  14. Verification & Validation of High-Order Short-Characteristics-Based Deterministic Transport Methodology on Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States); Wang, Yaqi [North Carolina State Univ., Raleigh, NC (United States)

    2013-12-20

    The research team has developed a practical, high-order, discrete-ordinates, short characteristics neutron transport code for three-dimensional configurations represented on unstructured tetrahedral grids that can be used for realistic reactor physics applications at both the assembly and core levels. This project will perform a comprehensive verification and validation of this new computational tool against both a continuous-energy Monte Carlo simulation (e.g. MCNP) and experimentally measured data, an essential prerequisite for its deployment in reactor core modeling. Verification is divided into three phases. The team will first conduct spatial mesh and expansion order refinement studies to monitor convergence of the numerical solution to reference solutions. This is quantified by convergence rates that are based on integral error norms computed from the cell-by-cell difference between the code’s numerical solution and its reference counterpart. The latter is either analytic or very fine- mesh numerical solutions from independent computational tools. For the second phase, the team will create a suite of code-independent benchmark configurations to enable testing the theoretical order of accuracy of any particular discretization of the discrete ordinates approximation of the transport equation. For each tested case (i.e. mesh and spatial approximation order), researchers will execute the code and compare the resulting numerical solution to the exact solution on a per cell basis to determine the distribution of the numerical error. The final activity comprises a comparison to continuous-energy Monte Carlo solutions for zero-power critical configuration measurements at Idaho National Laboratory’s Advanced Test Reactor (ATR). Results of this comparison will allow the investigators to distinguish between modeling errors and the above-listed discretization errors introduced by the deterministic method, and to separate the sources of uncertainty.

  15. Deterministic Simulation of Alternative Breeding Objectives and Schemes for Pure Bred Cattle in Kenya

    International Nuclear Information System (INIS)

    Kahi, A.K.

    2002-01-01

    Alternative breeding objectives and schemes for milk production were evaluated for their economic efficiency using deterministic simulation. A two-tier open nucleus breeding scheme and a young bull system (YBS) were assumed, with intensive recording and 100% artificial insemination (AI) in the nucleus and 35% AI in the commercial population, which was assumed to comprise the smallholder herds. Since most production systems are dual purpose, breeding objectives were defined which represented different scenarios. These objectives represented the present (objective 1 - dual purpose), smallholder (objective 2 - dual purpose with limited mature live weight) and future production situations (objective 3 - dual purpose with fat-based milk price). Breeding objectives differed in the traits included and their economic values, while the breeding schemes differed in the records available for use as selection criteria as well as in the costs and investment parameters. Since the main question for establishing a breeding and recording programme is that of efficiency of investment, the monetary genetic response and profit per cow in the population were used as evaluation criteria. All breeding objectives and schemes realized profits. The objectives and schemes that ranked highly for annual monetary genetic response and total return per cow did not rank the same in profit per cow in all cases. In objective 3, the scheme that assumed records on fat yield (FY) were available for use as a selection criterion and that which assumed no records on FY differed very little in profit per cow (approximately 4%). Therefore, under the current production and marketing conditions, a breeding scheme that requires measuring of the fat content does not seem to be justified from an economic point of view. There is evidence that a well-organised breeding programme utilizing an open nucleus, a YBS and the smallholder farms as well as the commercial population could sustain itself.

  16. Identifying deterministic signals in simulated gravitational wave data: algorithmic complexity and the surrogate data method

    Energy Technology Data Exchange (ETDEWEB)

    Zhao Yi [Hong Kong Polytechnic University, Kowloon, Hong Kong (China); Small, Michael [Hong Kong Polytechnic University, Kowloon, Hong Kong (China); Coward, David [School of Physics, University of Western Australia, Crawley, WA 6009 (Australia); Howell, Eric [School of Physics, University of Western Australia, Crawley, WA 6009 (Australia); Zhao Chunnong [School of Physics, University of Western Australia, Crawley, WA 6009 (Australia); Ju Li [School of Physics, University of Western Australia, Crawley, WA 6009 (Australia); Blair, David [School of Physics, University of Western Australia, Crawley, WA 6009 (Australia)

    2006-03-07

    We describe the application of complexity estimation and the surrogate data method to identify deterministic dynamics in simulated gravitational wave (GW) data contaminated with white and coloured noises. The surrogate method uses algorithmic complexity as a discriminating statistic to decide if noisy data contain a statistically significant level of deterministic dynamics (the GW signal). The results illustrate that the complexity method is sensitive to a small amplitude simulated GW background (SNR down to 0.08 for white noise and 0.05 for coloured noise) and is also more robust than commonly used linear methods (autocorrelation or Fourier analysis)
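
    The logic of the surrogate test is compact: compute a complexity statistic on the observed series, then on an ensemble of phase-randomized surrogates that share its power spectrum, and flag determinism when the observed value falls outside the surrogate distribution. The sketch below uses an LZ78-style phrase count as a simple complexity proxy on a toy signal; both the statistic and the data are assumptions, as the paper uses a specific algorithmic complexity estimator on simulated GW data.

        import numpy as np

        rng = np.random.default_rng(0)

        def lz_phrase_count(bits):
            """LZ78-style parsing: number of distinct phrases in a binary string."""
            phrases, w, count = set(), "", 0
            for ch in "".join(map(str, bits)):
                w += ch
                if w not in phrases:
                    phrases.add(w)
                    count += 1
                    w = ""
            return count

        def surrogate(x):
            """Phase-randomized surrogate preserving the power spectrum."""
            X = np.fft.rfft(x)
            phases = rng.uniform(0.0, 2.0*np.pi, len(X))
            phases[0] = 0.0
            return np.fft.irfft(np.abs(X)*np.exp(1j*phases), n=len(x))

        binarize = lambda x: (x > np.median(x)).astype(int)

        x = np.sin(0.2*np.arange(2048)) + 0.5*rng.standard_normal(2048)  # toy "signal + noise"
        c_obs = lz_phrase_count(binarize(x))
        c_sur = [lz_phrase_count(binarize(surrogate(x))) for _ in range(99)]
        print(c_obs, min(c_sur), max(c_sur))  # determinism suggested if c_obs < min(c_sur)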

  17. Performance for the hybrid method using stochastic and deterministic searching for shape optimization of electromagnetic devices

    International Nuclear Information System (INIS)

    Yokose, Yoshio; Noguchi, So; Yamashita, Hideo

    2002-01-01

    Stochastic methods and deterministic methods are used for the optimization of electromagnetic devices. Genetic Algorithms (GAs) are used as a stochastic method in multivariable designs, while the deterministic method uses the gradient method, which applies the sensitivity of the objective function. These two techniques have benefits and drawbacks. In this paper, the characteristics of these techniques are described. Then, a hybrid technique in which the two methods are used together is evaluated. Finally, the results of the comparison are described by applying each method to electromagnetic devices. (Author)

  18. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing.

    Science.gov (United States)

    Palmer, Tim N; O'Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for 'eureka' moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.

  19. Solving difficult problems creatively: A role for energy optimised deterministic/stochastic hybrid computing

    Directory of Open Access Journals (Sweden)

    Tim ePalmer

    2015-10-01

    How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.

  20. Deterministic and stochastic trends in the Lee-Carter mortality model

    DEFF Research Database (Denmark)

    Callot, Laurent; Haldrup, Niels; Kallestrup-Lamb, Malene

    2015-01-01

    The Lee and Carter (1992) model assumes that the deterministic and stochastic time series dynamics load with identical weights when describing the development of age-specific mortality rates. Effectively this means that the main characteristics of the model simplify to a random walk model with age...... mortality data. We find empirical evidence that this feature of the Lee–Carter model overly restricts the system dynamics and we suggest to separate the deterministic and stochastic time series components at the benefit of improved fit and forecasting performance. In fact, we find that the classical Lee...

  1. Deterministic and stochastic trends in the Lee-Carter mortality model

    DEFF Research Database (Denmark)

    Callot, Laurent; Haldrup, Niels; Kallestrup-Lamb, Malene

    The Lee and Carter (1992) model assumes that the deterministic and stochastic time series dynamics load with identical weights when describing the development of age-specific mortality rates. Effectively this means that the main characteristics of the model simplify to a random walk model...... that characterizes mortality data. We find empirical evidence that this feature of the Lee-Carter model overly restricts the system dynamics and we suggest to separate the deterministic and stochastic time series components at the benefit of improved fit and forecasting performance. In fact, we find...

  2. Tsunamigenic scenarios for southern Peru and northern Chile seismic gap: Deterministic and probabilistic hybrid approach for hazard assessment

    Science.gov (United States)

    González-Carrasco, J. F.; Gonzalez, G.; Aránguiz, R.; Yanez, G. A.; Melgar, D.; Salazar, P.; Shrivastava, M. N.; Das, R.; Catalan, P. A.; Cienfuegos, R.

    2017-12-01

    Definition of plausible worst-case tsunamigenic scenarios plays a relevant role in tsunami hazard assessment focused on emergency preparedness and evacuation planning for coastal communities. During the last decade, the occurrence of major and moderate tsunamigenic earthquakes along subduction zones worldwide has given clues about the critical parameters involved in near-field tsunami inundation processes, i.e. slip spatial distribution, shelf resonance of edge waves and local geomorphology effects. To analyze the effects of these seismic and hydrodynamic variables on the epistemic uncertainty of coastal inundation, we implement a combined methodology using deterministic and probabilistic approaches to construct 420 tsunamigenic scenarios in a mature seismic gap of southern Peru and northern Chile, extending from 17ºS to 24ºS. The deterministic scenarios are calculated using a regional distribution of trench-parallel gravity anomaly (TPGA) and trench-parallel topography anomaly (TPTA), the three-dimensional Slab 1.0 worldwide subduction zone geometry model and published interseismic coupling (ISC) distributions. As a result, we find four zones of higher slip deficit, interpreted as major seismic asperities of the gap, which are used in a hierarchical tree scheme to generate ten tsunamigenic scenarios with seismic magnitudes ranging from Mw 8.4 to Mw 8.9. Additionally, we construct ten homogeneous slip scenarios as an inundation baseline. For the probabilistic approach, we implement a Karhunen-Loève expansion to generate 400 stochastic tsunamigenic scenarios over the maximum extension of the gap, with the same magnitude range as the deterministic sources. All the scenarios are simulated with the non-hydrostatic tsunami model Neowave 2D, using a classical nesting scheme, for five major coastal cities in northern Chile (Arica, Iquique, Tocopilla, Mejillones and Antofagasta), obtaining high-resolution data of inundation depth, runup, coastal currents and sea level elevation.
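
    The Karhunen-Loève step can be illustrated compactly. The sketch below (Python with NumPy) draws one stochastic along-strike slip perturbation from an assumed exponential correlation kernel; the fault length, correlation length and number of retained modes are placeholder values, not those of the study.

        import numpy as np

        L, n, corr_len, n_modes = 500e3, 256, 40e3, 20   # assumed values [m]
        x = np.linspace(0.0, L, n)                       # along-strike coordinate
        C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # correlation matrix

        # KL expansion: eigen-decompose the kernel and keep the dominant modes.
        eigval, eigvec = np.linalg.eigh(C)
        order = np.argsort(eigval)[::-1][:n_modes]
        lam, phi = eigval[order], eigvec[:, order]

        rng = np.random.default_rng(42)
        xi = rng.standard_normal(n_modes)                # independent N(0,1) weights
        slip_perturbation = phi @ (np.sqrt(lam) * xi)    # one stochastic realization
        # Scaling this field and adding it to a mean slip model yields one of the
        # many stochastic source scenarios used for the probabilistic simulations.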

  3. One-dimensional Gromov minimal filling problem

    International Nuclear Information System (INIS)

    Ivanov, Alexandr O; Tuzhilin, Alexey A

    2012-01-01

    The paper is devoted to a new branch in the theory of one-dimensional variational problems with branching extremals, the investigation of one-dimensional minimal fillings introduced by the authors. On the one hand, this problem is a one-dimensional version of a generalization of Gromov's minimal fillings problem to the case of stratified manifolds. On the other hand, this problem is interesting in itself and also can be considered as a generalization of another classical problem, the Steiner problem on the construction of a shortest network connecting a given set of terminals. Besides the statement of the problem, we discuss several properties of the minimal fillings and state several conjectures. Bibliography: 38 titles.

  4. A novel malformation complex of bilateral and symmetric preaxial radial ray-thumb aplasia and lower limb defects with minimal facial dysmorphic features: a case report and literature review.

    Science.gov (United States)

    Al Kaissi, Ali; Klaushofer, Klaus; Krebs, Alexander; Grill, Franz

    2008-10-24

    Radial hemimelia is a congenital abnormality characterised by the partial or complete absence of the radius. Longitudinal hemimelia indicates the absence of one or more bones along the preaxial (medial) or postaxial (lateral) side of the limb. Preaxial limb defects occur more frequently in combination with microtia, esophageal atresia, anorectal atresia, heart defects, unilateral kidney dysgenesis, and some axial skeletal defects. Postaxial acrofacial dysostoses are characterised by distinctive facies and postaxial limb deficiencies involving the 5th finger, metacarpal, ulna, fibula and metatarsal. The patient, an 8-year-old boy, had minimal craniofacial dysmorphic features but profound upper limb defects, with bilateral and symmetrical absence of the radius and the thumbs. In addition, there was a unilateral tibio-fibular hypoplasia (hemimelia) associated with hypoplasia of the terminal phalanges and malsegmentation of the upper thoracic vertebrae, effectively causing the development of thoracic kyphosis. In the typical form of preaxial acrofacial dysostosis, there are aberrations in the development of the first and second branchial arches and limb buds. The craniofacial dysmorphic features are characteristic, such as micrognathia, zygomatic hypoplasia, cleft palate, and preaxial limb defects. It was Nager and de Reynier in 1948 who used the term acrofacial dysostosis (AFD) to distinguish the condition from mandibulofacial dysostosis. Neither the facial features nor the limb defects in our present patient appear to be absolutely typical of previously reported cases of AFD. Our patient expands the phenotype of the syndromic preaxial limb malformation complex. He might represent a new syndromic entity of mild naso-maxillary malformation in connection with an axial and extra-axial malformation complex.

  5. Quantization of the minimal and non-minimal vector field in curved space

    OpenAIRE

    Toms, David J.

    2015-01-01

    The local momentum space method is used to study the quantized massive vector field (the Proca field) with the possible addition of non-minimal terms. Heat kernel coefficients are calculated and used to evaluate the divergent part of the one-loop effective action. It is shown that the naive expression for the effective action that one would write down based on the minimal coupling case needs modification. We adopt a Faddeev-Jackiw method of quantization and consider the case of an ultrastatic...

  6. Optimal deterministic shallow cuttings for 3D dominance ranges

    DEFF Research Database (Denmark)

    Afshani, Peyman; Tsakalidis, Konstantinos

    2014-01-01

    In the concurrent range reporting (CRR) problem, the input is L disjoint sets S1, ..., SL of points in R^d with a total of N points. The goal is to preprocess the sets into a structure such that, given a query range r and an arbitrary set Q ⊆ {1, ..., L}, we can efficiently report all the points in Si...... model (as well as comparison models such as the real RAM model), answering queries requires Ω(|Q| log(L/|Q|) + log N + K) time in the worst case, where K is the number of output points. In one dimension, we achieve this query time with a linear-space dynamic data structure that requires optimal O(log N...... times of O(|Q| log(N/|Q|) + K) and O(2^L L + log N + K). Finally, we give an optimal data structure for three-sided ranges for the case L = O(log N). Copyright © 2014 by the Society for Industrial and Applied Mathematics....

  7. Blackfolds, plane waves and minimal surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Armas, Jay [Physique Théorique et Mathématique, Université Libre de Bruxelles and International Solvay Institutes, ULB-Campus Plaine CP231, B-1050 Brussels (Belgium); Albert Einstein Center for Fundamental Physics, University of Bern,Sidlerstrasse 5, 3012 Bern (Switzerland); Blau, Matthias [Albert Einstein Center for Fundamental Physics, University of Bern,Sidlerstrasse 5, 3012 Bern (Switzerland)

    2015-07-29

    Minimal surfaces in Euclidean space provide examples of possible non-compact horizon geometries and topologies in asymptotically flat space-time. On the other hand, the existence of limiting surfaces in the space-time provides a simple mechanism for making these configurations compact. Limiting surfaces appear naturally in a given space-time by making minimal surfaces rotate but they are also inherent to plane wave or de Sitter space-times in which case minimal surfaces can be static and compact. We use the blackfold approach in order to scan for possible black hole horizon geometries and topologies in asymptotically flat, plane wave and de Sitter space-times. In the process we uncover several new configurations, such as black helicoids and catenoids, some of which have an asymptotically flat counterpart. In particular, we find that the ultraspinning regime of singly-spinning Myers-Perry black holes, described in terms of the simplest minimal surface (the plane), can be obtained as a limit of a black helicoid, suggesting that these two families of black holes are connected. We also show that minimal surfaces embedded in spheres rather than Euclidean space can be used to construct static compact horizons in asymptotically de Sitter space-times.

  8. Deterministic Price Setting Rules to Guarantee Profitability of Unbundling in the Airline Industry

    NARCIS (Netherlands)

    Van Diepen, G.; Curran, R.

    2011-01-01

    Unbundling the traditional airfare is one of the airline industry’s practices to generate ancillary revenue in its struggle for profitability. However, unbundling might just as well negatively affect profit. In this paper, deterministic price setting rules are established to guarantee profitability.

  9. Service-Oriented Architecture (SOA) Instantiation within a Hard Real-Time, Deterministic Combat System Environment

    Science.gov (United States)

    Moreland, James D., Jr

    2013-01-01

    This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…

  10. Use of deterministic sampling for exploring likelihoods in linkage analysis for quantitative traits.

    NARCIS (Netherlands)

    Mackinnon, M.J.; Beek, van der S.; Kinghorn, B.P.

    1996-01-01

    Deterministic sampling was used to numerically evaluate the expected log-likelihood surfaces of QTL-marker linkage models in large pedigrees with simple structures. By calculating the expected values of likelihoods, questions of power of experimental designs, bias in parameter estimates, approximate

  11. 2D deterministic radiation transport with the discontinuous finite element method

    International Nuclear Information System (INIS)

    Kershaw, D.; Harte, J.

    1993-01-01

    This report provides a complete description of the analytic and discretized equations for 2D deterministic radiation transport. This computational model has been checked against a wide variety of analytic test problems and found to give excellent results. We make extensive use of the discontinuous finite element method

  12. Taking Control: Stealth Assessment of Deterministic Behaviors within a Game-Based System

    Science.gov (United States)

    Snow, Erica L.; Likens, Aaron D.; Allen, Laura K.; McNamara, Danielle S.

    2016-01-01

    Game-based environments frequently afford students the opportunity to exert agency over their learning paths by making various choices within the environment. The combination of log data from these systems and dynamic methodologies may serve as a stealth means to assess how students behave (i.e., deterministic or random) within these learning…

  13. Top-down fabrication of plasmonic nanostructures for deterministic coupling to single quantum emitters

    NARCIS (Netherlands)

    Pfaff, W.; Vos, A.; Hanson, R.

    2013-01-01

    Metal nanostructures can be used to harvest and guide the emission of single photon emitters on-chip via surface plasmon polaritons. In order to develop and characterize photonic devices based on emitter-plasmon hybrid structures, a deterministic and scalable fabrication method for such structures

  14. Temperature regulates deterministic processes and the succession of microbial interactions in anaerobic digestion process

    Czech Academy of Sciences Publication Activity Database

    Lin, Qiang; De Vrieze, J.; Li, Ch.; Li, J.; Li, J.; Yao, M.; Heděnec, Petr; Li, H.; Li, T.; Rui, J.; Frouz, Jan; Li, X.

    2017-01-01

    Roč. 123, October (2017), s. 134-143 ISSN 0043-1354 Institutional support: RVO:60077344 Keywords : anaerobic digestion * deterministic process * microbial interactions * modularity * temperature gradient Subject RIV: DJ - Water Pollution ; Quality OBOR OECD: Water resources Impact factor: 6.942, year: 2016

  15. A DETERMINISTIC GEOMETRIC REPRESENTATION OF TEMPORAL RAINFALL: SENSITIVITY ANALYSIS FOR A STORM IN BOSTON. (R824780)

    Science.gov (United States)

    In an earlier study, Puente and Obregón [Water Resour. Res. 32(1996)2825] reported on the usage of a deterministic fractal–multifractal (FM) methodology to faithfully describe an 8.3 h high-resolution rainfall time series in Boston, gathered every 15 s ...

  16. Performance of HSPA Vertical Sectorization System under Semi-Deterministic Propagation Model

    DEFF Research Database (Denmark)

    Nguyen, Huan Cong; Makinen, Jarmo; Stoermer, Wolfgang

    2013-01-01

    The performance of the Vertical Sectorization (VS) system has been evaluated previously using an empirical propagation model and a regular network layout. In this paper, our aim is to investigate the gain of the VS system under a more realistic scenario. A semi-deterministic path loss model run o...

  17. Against explanatory minimalism in psychiatry

    Directory of Open Access Journals (Sweden)

    Tim eThornton

    2015-12-01

    The idea that psychiatry contains, in principle, a series of levels of explanation has been criticised not only as empirically false but also, by Campbell, as unintelligible because it presupposes a discredited pre-Humean view of causation. Campbell’s criticism is based on an interventionist-inspired denial that mechanisms and rational connections underpin physical and mental causation respectively and hence underpin levels of explanation. These claims echo some superficially similar remarks in Wittgenstein’s Zettel. But attention to the context of Wittgenstein’s remarks suggests a reason to reject explanatory minimalism in psychiatry and reinstate a Wittgensteinian notion of level of explanation. Only in a context broader than the one provided by interventionism is the ascription of propositional attitudes, even in the puzzling case of delusions, justified. Such a view, informed by Wittgenstein, can reconcile the idea that the ascription of mental phenomena presupposes a particular level of explanation with the rejection of an a priori claim about its connection to a neurological level of explanation.

  18. Against Explanatory Minimalism in Psychiatry.

    Science.gov (United States)

    Thornton, Tim

    2015-01-01

    The idea that psychiatry contains, in principle, a series of levels of explanation has been criticized not only as empirically false but also, by Campbell, as unintelligible because it presupposes a discredited pre-Humean view of causation. Campbell's criticism is based on an interventionist-inspired denial that mechanisms and rational connections underpin physical and mental causation, respectively, and hence underpin levels of explanation. These claims echo some superficially similar remarks in Wittgenstein's Zettel. But attention to the context of Wittgenstein's remarks suggests a reason to reject explanatory minimalism in psychiatry and reinstate a Wittgensteinian notion of levels of explanation. Only in a context broader than the one provided by interventionism is the ascription of propositional attitudes, even in the puzzling case of delusions, justified. Such a view, informed by Wittgenstein, can reconcile the idea that the ascription of mental phenomena presupposes a particular level of explanation with the rejection of an a priori claim about its connection to a neurological level of explanation.

  19. Global Analysis of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J

    2010-01-01

    Many properties of minimal surfaces are of a global nature, and this is already true for the results treated in the first two volumes of the treatise. Part I of the present book can be viewed as an extension of these results. For instance, the first two chapters deal with existence, regularity and uniqueness theorems for minimal surfaces with partially free boundaries. Here one of the main features is the possibility of 'edge-crawling' along free parts of the boundary. The third chapter deals with a priori estimates for minimal surfaces in higher dimensions and for minimizers of singular integ

  20. Deterministic quantum state transfer between remote qubits in cavities

    Science.gov (United States)

    Vogell, B.; Vermersch, B.; Northup, T. E.; Lanyon, B. P.; Muschik, C. A.

    2017-12-01

    Performing a faithful transfer of an unknown quantum state is a key challenge for enabling quantum networks. The realization of networks with a small number of quantum links is now actively pursued, which calls for an assessment of different state transfer methods to guide future design decisions. Here, we theoretically investigate quantum state transfer between two distant qubits, each in a cavity, connected by a waveguide, e.g., an optical fiber. We evaluate the achievable success probabilities of state transfer for two different protocols: standard wave packet shaping and adiabatic passage. The main loss sources are transmission losses in the waveguide and absorption losses in the cavities. While special cases studied in the literature indicate that adiabatic passages may be beneficial in this context, it remained an open question under which conditions this is the case and whether their use will be advantageous in practice. We answer these questions by providing a full analysis, showing that state transfer by adiabatic passage—in contrast to wave packet shaping—can mitigate the effects of undesired cavity losses, far beyond the regime of coupling to a single waveguide mode and the regime of lossless waveguides, as was proposed so far. Furthermore, we show that the photon arrival probability is in fact bounded in a trade-off between losses due to non-adiabaticity and due to coupling to off-resonant waveguide modes. We clarify that neither protocol can avoid transmission losses and discuss how the cavity parameters should be chosen to achieve an optimal state transfer.

  1. Deterministic factor analysis: methods of integro-differentiation of non-integral order

    Directory of Open Access Journals (Sweden)

    Valentina V. Tarasova

    2016-12-01

    Objective: to summarize the methods of deterministic factor economic analysis, namely the differential calculus and the integral method. Methods: mathematical methods for integro-differentiation of non-integral order; the theory of derivatives and integrals of fractional (non-integral) order. Results: the basic concepts are formulated and new methods are developed that take into account the memory and non-locality effects in the quantitative description of the influence of individual factors on the change in the effective economic indicator. Two methods are proposed for integro-differentiation of non-integral order for the deterministic factor analysis of economic processes with memory and non-locality. It is shown that the method of integro-differentiation of non-integral order can give more accurate results compared with standard methods (the method of differentiation using first-order derivatives and the integral method using first-order integration) for a wide class of functions describing effective economic indicators. Scientific novelty: new methods of deterministic factor analysis are proposed: the method of differential calculus of non-integral order and the integral method of non-integral order. Practical significance: the basic concepts and formulas of the article can be used in scientific and analytical activity for factor analysis of economic processes. The proposed method of integro-differentiation of non-integral order extends the capabilities of deterministic factor economic analysis. The new quantitative method of deterministic factor analysis may become the beginning of quantitative studies of the behavior of economic agents with memory (hereditarity) and spatial non-locality. The proposed methods of deterministic factor analysis can be used in the study of economic processes which follow the exponential law, in which the indicators (endogenous variables) are power functions of the factors (exogenous variables), including the processes

  2. Linking mothers and infants within electronic health records: a comparison of deterministic and probabilistic algorithms.

    Science.gov (United States)

    Baldwin, Eric; Johnson, Karin; Berthoud, Heidi; Dublin, Sascha

    2015-01-01

    To compare probabilistic and deterministic algorithms for linking mothers and infants within electronic health records (EHRs) to support pregnancy outcomes research. The study population was women enrolled in Group Health (Washington State, USA) delivering a liveborn infant from 2001 through 2008 (N = 33,093 deliveries) and infant members born in these years. We linked women to infants by surname, address, and dates of birth and delivery using deterministic and probabilistic algorithms. In a subset previously linked using "gold standard" identifiers (N = 14,449), we assessed each approach's sensitivity and positive predictive value (PPV). For deliveries with no "gold standard" linkage (N = 18,644), we compared the algorithms' linkage proportions. We repeated our analyses in an independent test set of deliveries from 2009 through 2013. We reviewed medical records to validate a sample of pairs apparently linked by one algorithm but not the other (N = 51 or 1.4% of discordant pairs). In the 2001-2008 "gold standard" population, the probabilistic algorithm's sensitivity was 84.1% (95% CI, 83.5-84.7) and PPV 99.3% (99.1-99.4), while the deterministic algorithm had sensitivity 74.5% (73.8-75.2) and PPV 95.7% (95.4-96.0). In the test set, the probabilistic algorithm again had higher sensitivity and PPV. For deliveries in 2001-2008 with no "gold standard" linkage, the probabilistic algorithm found matched infants for 58.3% and the deterministic algorithm, 52.8%. On medical record review, 100% of linked pairs appeared valid. A probabilistic algorithm improved linkage proportion and accuracy compared to a deterministic algorithm. Better linkage methods can increase the value of EHRs for pregnancy outcomes research. Copyright © 2014 John Wiley & Sons, Ltd.
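
    To make the two linkage strategies concrete, here is a minimal sketch (Python). The field names, m- and u-probabilities and the decision threshold are hypothetical: the deterministic rule demands exact agreement on all fields, while the probabilistic (Fellegi-Sunter-style) score sums per-field log-likelihood ratios and tolerates partial disagreement.

        import math

        # Hypothetical m-probabilities (agreement among true matches) and
        # u-probabilities (chance agreement among non-matches) per field.
        WEIGHTS = {"surname": (0.95, 0.01),
                   "address": (0.90, 0.05),
                   "birth_date": (0.99, 0.003)}

        def deterministic_link(mother, infant):
            return all(mother[f] == infant[f] for f in WEIGHTS)

        def probabilistic_score(mother, infant):
            score = 0.0
            for field, (m, u) in WEIGHTS.items():
                if mother[field] == infant[field]:
                    score += math.log2(m / u)              # agreement weight
                else:
                    score += math.log2((1 - m) / (1 - u))  # disagreement penalty
            return score

        mother = {"surname": "NGUYEN", "address": "12 ELM ST", "birth_date": "2005-03-07"}
        infant = {"surname": "NGUYEN", "address": "12 ELM STREET", "birth_date": "2005-03-07"}
        print(deterministic_link(mother, infant))         # False: address differs
        print(probabilistic_score(mother, infant) > 5.0)  # True: still scores as a link

    The higher sensitivity of the probabilistic approach reported above comes precisely from such cases, where minor field discrepancies break an exact-match rule but leave the overall evidence for a link strong.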

  3. Minimally legally invasive dentistry.

    Science.gov (United States)

    Lam, R

    2014-12-01

    One disadvantage of the rapid advances in modern dentistry is that treatment options have never been more varied or confusing. Compounded by a more educated population greatly assisted by online information in an increasingly litigious society, a major concern in recent times is increased litigation against health practitioners. The manner in which courts handle disputes is ambiguous and what is considered fair or just may not be reflected in the judicial process. Although legal decisions in Australia follow a doctrine of precedent, the law is not static and is often reflected by community sentiment. In medical litigation, this has seen the rejection of the Bolam principle with a preference towards greater patient rights. Recent court decisions may change the practice of dentistry and it is important that the clinician is not caught unaware. The aim of this article is to discuss legal issues that are pertinent to the practice of modern dentistry through an analysis of legal cases that have shaped health law. Through these discussions, the importance of continuing professional development, professional association and informed consent will be realized as a means to limit the legal complications of dental practice. © 2014 Australian Dental Association.

  4. Impact of mesh points number on the accuracy of deterministic calculations of control rods worth for Tehran research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Boustani, Ehsan [Nuclear Science and Technology Research Institute (NSTRI), Tehran (Iran, Islamic Republic of); Amirkabir University of Technology, Tehran (Iran, Islamic Republic of). Energy Engineering and Physics Dept.; Khakshournia, Samad [Amirkabir University of Technology, Tehran (Iran, Islamic Republic of). Energy Engineering and Physics Dept.

    2016-12-15

    In this paper two different computational approaches, a deterministic and a stochastic one, were used for calculation of the control rods worth of the Tehran research reactor. For the deterministic approach the MTRPC package, composed of the WIMS code and the diffusion code CITVAP, was used, while for the stochastic one the Monte Carlo code MCNPX was applied. On comparing our results obtained by the Monte Carlo approach with those previously reported in the Safety Analysis Report (SAR) of the Tehran research reactor, produced by the deterministic approach, large discrepancies were seen. To uncover the root cause of these discrepancies some efforts were made, and it was finally discerned that the number of spatial mesh points in the deterministic approach was the critical cause. Therefore, mesh optimization was performed for the different regions of the core such that the results of the deterministic approach based on the optimized mesh points are in good agreement with those obtained by the Monte Carlo approach.

  5. Impact of mesh points number on the accuracy of deterministic calculations of control rods worth for Tehran research reactor

    International Nuclear Information System (INIS)

    Boustani, Ehsan; Amirkabir University of Technology, Tehran; Khakshournia, Samad

    2016-01-01

    In this paper two different computational approaches, a deterministic and a stochastic one, were used for calculation of the control rods worth of the Tehran research reactor. For the deterministic approach the MTRPC package, composed of the WIMS code and the diffusion code CITVAP, was used, while for the stochastic one the Monte Carlo code MCNPX was applied. On comparing our results obtained by the Monte Carlo approach with those previously reported in the Safety Analysis Report (SAR) of the Tehran research reactor, produced by the deterministic approach, large discrepancies were seen. To uncover the root cause of these discrepancies some efforts were made, and it was finally discerned that the number of spatial mesh points in the deterministic approach was the critical cause. Therefore, mesh optimization was performed for the different regions of the core such that the results of the deterministic approach based on the optimized mesh points are in good agreement with those obtained by the Monte Carlo approach.

  6. Analysis of Sources of Large Positioning Errors in Deterministic Fingerprinting.

    Science.gov (United States)

    Torres-Sospedra, Joaquín; Moreira, Adriano

    2017-11-27

    Wi-Fi fingerprinting is widely used for indoor positioning and indoor navigation due to the ubiquity of wireless networks, high proliferation of Wi-Fi-enabled mobile devices, and its reasonable positioning accuracy. The assumption is that the position can be estimated based on the received signal strength intensity from multiple wireless access points at a given point. The positioning accuracy, within a few meters, enables the use of Wi-Fi fingerprinting in many different applications. However, it has been detected that the positioning error might be very large in a few cases, which might prevent its use in applications with high accuracy positioning requirements. Hybrid methods are the new trend in indoor positioning since they benefit from multiple diverse technologies (Wi-Fi, Bluetooth, and Inertial Sensors, among many others) and, therefore, they can provide a more robust positioning accuracy. In order to have an optimal combination of technologies, it is crucial to identify when large errors occur and prevent the use of extremely bad positioning estimations in hybrid algorithms. This paper investigates why large positioning errors occur in Wi-Fi fingerprinting and how to detect them by using the received signal strength intensities.
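
    For context, the core of a deterministic fingerprinting estimator is only a few lines. The sketch below (Python with NumPy; the radio map and query are synthetic) implements a weighted k-nearest-neighbours position estimate in RSSI space, plus the kind of plausibility check, the distance to the closest fingerprint, that hybrid systems can use to flag potentially large errors.

        import numpy as np

        def knn_position(rssi_query, radio_map, positions, k=3):
            # Weighted k-NN in signal space: radio_map is (N, n_aps) of RSSI values.
            dist = np.linalg.norm(radio_map - rssi_query, axis=1)
            nearest = np.argsort(dist)[:k]
            weights = 1.0 / (dist[nearest] + 1e-9)
            estimate = (weights[:, None] * positions[nearest]).sum(0) / weights.sum()
            return estimate, dist[nearest[0]]

        rng = np.random.default_rng(7)
        positions = rng.uniform(0.0, 50.0, (200, 2))          # survey points [m]
        radio_map = -40.0 - 25.0 * np.log10(1.0 + positions @ rng.uniform(0.1, 0.5, (2, 6)))
        query = radio_map[17] + rng.normal(0.0, 2.0, 6)       # noisy re-observation
        est, nearest_dist = knn_position(query, radio_map, positions)
        # A large nearest_dist signals an unreliable fix that a hybrid
        # positioning system may down-weight or discard.
        print(est, positions[17], nearest_dist)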

  7. Asymptotic safety, emergence and minimal length

    International Nuclear Information System (INIS)

    Percacci, Roberto; Vacca, Gian Paolo

    2010-01-01

    There seems to be a common prejudice that asymptotic safety is either incompatible with, or at best unrelated to, the other topics in the title. This is not the case. In fact, we show that (1) the existence of a fixed point with suitable properties is a promising way of deriving emergent properties of gravity, and (2) there is a sense in which asymptotic safety implies a minimal length. In doing so we also discuss possible signatures of asymptotic safety in scattering experiments.

  8. Guidelines for mixed waste minimization

    Energy Technology Data Exchange (ETDEWEB)

    Owens, C.

    1992-02-01

    Currently, there is no commercial mixed waste disposal available in the United States. Storage and treatment for commercial mixed waste is limited. Host States and compacts region officials are encouraging their mixed waste generators to minimize their mixed wastes because of management limitations. This document provides a guide to mixed waste minimization.

  9. Guidelines for mixed waste minimization

    International Nuclear Information System (INIS)

    Owens, C.

    1992-02-01

    Currently, there is no commercial mixed waste disposal available in the United States. Storage and treatment for commercial mixed waste is limited. Host States and compacts region officials are encouraging their mixed waste generators to minimize their mixed wastes because of management limitations. This document provides a guide to mixed waste minimization

  10. Minimal massive 3D gravity

    NARCIS (Netherlands)

    Bergshoeff, Eric; Hohm, Olaf; Merbis, Wout; Routh, Alasdair J.; Townsend, Paul K.

    2014-01-01

    We present an alternative to topologically massive gravity (TMG) with the same 'minimal' bulk properties; i.e. a single local degree of freedom that is realized as a massive graviton in linearization about an anti-de Sitter (AdS) vacuum. However, in contrast to TMG, the new 'minimal massive gravity'

  11. Minimal changes in health status questionnaires: distinction between minimally detectable change and minimally important change

    Directory of Open Access Journals (Sweden)

    Knol Dirk L

    2006-08-01

    Changes in scores on health status questionnaires are difficult to interpret. Several methods to determine minimally important changes (MICs) have been proposed, which can broadly be divided into distribution-based and anchor-based methods. Comparisons of these methods have led to insight into essential differences between these approaches. Some authors have tried to come to a uniform measure for the MIC, such as 0.5 standard deviation and the value of one standard error of measurement (SEM). Others have emphasized the diversity of MIC values, depending on the type of anchor, the definition of minimal importance on the anchor, and characteristics of the disease under study. A closer look makes clear that some distribution-based methods have been focused merely on minimally detectable changes. For assessing minimally important changes, anchor-based methods are preferred, as they include a definition of what is minimally important. Acknowledging the distinction between minimally detectable and minimally important changes is useful, not only to avoid confusion among MIC methods, but also to gain information on two important benchmarks on the scale of a health status measurement instrument. Appreciating the distinction, it becomes possible to judge whether the minimally detectable change of a measurement instrument is sufficiently small to detect minimally important changes.
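
    The distinction can be made concrete with the standard formulas. In the worked example below (Python; the baseline standard deviation and reliability are invented numbers), the standard error of measurement yields a minimally detectable change, which by itself says nothing about what change patients consider important.

        import math

        sd_baseline = 12.0   # hypothetical SD of questionnaire scores
        reliability = 0.90   # hypothetical test-retest reliability (e.g. an ICC)

        sem = sd_baseline * math.sqrt(1.0 - reliability)   # ~3.79 points
        mdc95 = 1.96 * math.sqrt(2.0) * sem                # ~10.52 points: smallest change
                                                           # detectable beyond measurement error
        half_sd = 0.5 * sd_baseline                        # 6.0: a proposed uniform MIC proxy

        print(sem, mdc95, half_sd)
        # An anchor-based MIC may turn out smaller than mdc95, in which case the
        # instrument cannot reliably detect individual changes that patients
        # deem important.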

  12. Treatment of Pseudoarthrosis After Minimally Invasive Hallux Valgus ...

    African Journals Online (AJOL)

    Treatment of Pseudoarthrosis After Minimally Invasive Hallux Valgus Correction. Marco Cianforlini, Cristina Rosini, Mario Marinelli, Luigi de Palma. Journal of Surgical Technique and Case Report, Jan-Jun 2014, Vol. 6, Issue 1, p. 39. INTRODUCTION: Minimally invasive subcapital osteotomy of the first metatarsal is ...

  13. Waste minimization handbook, Volume 1

    International Nuclear Information System (INIS)

    Boing, L.E.; Coffey, M.J.

    1995-12-01

    This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996

  14. Waste minimization handbook, Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Boing, L.E.; Coffey, M.J.

    1995-12-01

    This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996.

  15. Deterministic versus Stochastic Sensitivity Analysis in Investment Problems : An Environmental Case Study

    NARCIS (Netherlands)

    van Groenendaal, W.J.H.; Kleijnen, J.P.C.

    2001-01-01

    Sensitivity analysis in investment problems is an important tool to determine which factors can jeopardize the future of the investment.Information on the probability distribution of those factors that affect the investment is mostly lacking.In those situations the analysts have two options: (i)

  16. Forecasting risk along a river basin using a probabilistic and deterministic model for environmental risk assessment of effluents through ecotoxicological evaluation and GIS.

    Science.gov (United States)

    Gutiérrez, Simón; Fernandez, Carlos; Barata, Carlos; Tarazona, José Vicente

    2009-12-20

    This work presents a computer model for Risk Assessment of Basins by Ecotoxicological Evaluation (RABETOX). The model is based on whole-effluent toxicity testing and water flows along a specific river basin, and is capable of estimating the risk along a river segment using deterministic and probabilistic approaches. The Henares River Basin was selected as a case study to demonstrate the importance of seasonal hydrological variations in Mediterranean regions. As model inputs, two different ecotoxicity tests (the miniaturized Daphnia magna acute test and the D. magna feeding test) were performed on grab samples from 5 wastewater treatment plant effluents. Also used as model inputs were flow data from the past 25 years, water velocity measurements and precise distance measurements using Geographical Information Systems (GIS). The model was implemented in a spreadsheet and the results were interpreted and represented using GIS in order to facilitate risk communication. To better understand the bioassay results, the effluents were screened through SPME-GC/MS analysis. The deterministic model, run for each month of one calendar year, showed a significant seasonal variation of risk and revealed that September represents the worst-case scenario, with values up to 950 Risk Units. This classifies the entire area of study for the month of September as "sublethal significant risk for standard species". The probabilistic approach using Monte Carlo analysis was performed on 7 different forecast points distributed along the Henares River. A 0% probability of finding "low risk" was found at all forecast points, with a more than 50% probability of finding "potential risk for sensitive species". The values obtained through both the deterministic and probabilistic approximations reveal the presence of certain substances, which might be causing sublethal effects in the aquatic species present in the Henares River.
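
    The probabilistic half of such a model reduces, in essence, to propagating flow and toxicity distributions through a dilution calculation. Below is a minimal sketch (Python with NumPy); the lognormal parameters and the benchmark of 1 risk unit are invented, not the RABETOX calibration.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000
        effluent_flow = 0.5                                       # m3/s, assumed constant
        toxicity = rng.lognormal(mean=2.0, sigma=0.5, size=n)     # toxic units of effluent
        river_flow = rng.lognormal(mean=1.0, sigma=0.8, size=n)   # m3/s, seasonal variability

        # Risk downstream of the outfall: effluent toxicity diluted by the mixed flow.
        risk_units = toxicity * effluent_flow / (river_flow + effluent_flow)

        print(np.mean(risk_units))        # deterministic-style expected risk
        print(np.mean(risk_units > 1.0))  # probability of exceeding a risk benchmark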

  17. SCALE6 Hybrid Deterministic-Stochastic Shielding Methodology for PWR Containment Calculations

    International Nuclear Information System (INIS)

    Matijevic, Mario; Pevec, Dubravko; Trontl, Kresimir

    2014-01-01

    The capabilities and limitations of the SCALE6/MAVRIC hybrid deterministic-stochastic shielding methodology (CADIS and FW-CADIS) are demonstrated when applied to a realistic deep-penetration Monte Carlo (MC) shielding problem: a full-scale PWR containment model. The ultimate goal of such automatic variance reduction (VR) techniques is to achieve acceptable precision for the MC simulation in reasonable time by preparing phase-space VR parameters via deterministic transport theory methods (discrete ordinates SN), generating a space-energy mesh-based adjoint function distribution. The hybrid methodology generates VR parameters that work in tandem (biased source distribution and importance map) in an automated fashion, which is a paramount step for MC simulation of complex models with fairly uniform mesh tally uncertainties. The aim in this paper was the determination of the neutron-gamma dose rate distribution (radiation field) over large portions of the PWR containment phase-space with uniform MC uncertainties. The sources of ionizing radiation included fission neutrons and gammas (reactor core) and gammas from the activated two-loop coolant. Special attention was given to a focused adjoint source definition, which gave improved MC statistics in selected materials and/or regions of the complex model. We investigated the benefits and differences of FW-CADIS over CADIS and over manual (i.e., analog) MC simulation of particle transport. Computer memory consumption by the deterministic part of the hybrid methodology represents the main obstacle when using meshes with millions of cells together with high SN/PN parameters, so optimization of the control and numerical parameters of the deterministic module plays an important role in computer memory management. We investigated the possibility of using the deterministic module (memory intense) with the broad-group library v7-27n19g, as opposed to the fine-group library v7-200n47g used with the MC module, to take full effect of low-energy particle transport and secondary gamma emission. Compared with

  18. A new modelling of the multigroup scattering cross section in deterministic codes for neutron transport

    International Nuclear Information System (INIS)

    Calloo, A.A.

    2012-01-01

    In reactor physics, calculation schemes with deterministic codes are validated with respect to a reference Monte Carlo code. The remaining biases are attributed to the approximations and models induced by the multigroup theory (self-shielding models and expansion of the scattering law using Legendre polynomials) to represent physical phenomena (resonant absorption and scattering anisotropy, respectively). This work focuses on the relevance of a polynomial expansion to model the scattering law. Since the outset of reactor physics, the latter has been expanded on a truncated Legendre polynomial basis. However, the transfer cross sections are highly anisotropic, with non-zero values for only a very small range of the cosine of the scattering angle. Moreover, the finer the energy mesh and the lighter the scattering nucleus, the more pronounced the peaked shape of this cross section becomes. As such, the Legendre expansion is poorly suited to represent the scattering law. Furthermore, this model induces negative values which are non-physical. In this work, various scattering laws are briefly described and the limitations of the existing model are pointed out. Hence, piecewise-constant functions have been used to represent the multigroup scattering cross section. This representation requires a different model for the scattering source. The discrete ordinates method, which is widely employed to solve the transport equation, has been adapted accordingly. Thus, the finite volume method for angular discretization has been developed and implemented in the Paris environment, which hosts the SN solver Snatch. The angular finite volume method has been compared to the collocation method with Legendre moments to ensure its proper performance. Moreover, unlike the latter, this method is adapted for both the Legendre-moment and the piecewise-constant representations of the scattering cross section. This hybrid-source method has been validated for different cases: fuel cell in infinite lattice

  19. Deterministic sensitivity analysis for the numerical simulation of contaminants transport; Analyse de sensibilite deterministe pour la simulation numerique du transfert de contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Marchand, E

    2007-12-15

    The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)
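
    The idea of ranking parameter directions by an SVD of the model derivative can be sketched in a few lines (Python with NumPy). The toy model stands in for the flow and transport codes, and the Jacobian is built by finite differences rather than by the manual or automatic differentiation discussed in the thesis.

        import numpy as np

        def model(p):
            # Toy stand-in for the contaminant-flux model: 3 outputs, 3 parameters.
            return np.array([p[0] * p[1],
                             p[0] ** 2 + 3.0 * p[2],
                             np.exp(-p[1]) * p[2]])

        def jacobian_fd(f, p, eps=1e-6):
            y0 = f(p)
            J = np.empty((y0.size, p.size))
            for j in range(p.size):
                dp = p.copy()
                dp[j] += eps
                J[:, j] = (f(dp) - y0) / eps
            return J

        p0 = np.array([1.0, 0.5, 2.0])
        U, s, Vt = np.linalg.svd(jacobian_fd(model, p0))
        # Each singular value s[i] measures how strongly the outputs respond along
        # the parameter combination Vt[i]; tiny s[i] identify locally uninfluential
        # directions, which is exactly the local information the deterministic
        # approach provides cheaply.
        print(s)
        print(Vt)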

  20. The Technological Development of Minimally Invasive Spine Surgery

    Science.gov (United States)

    Snyder, Laura A.; O'Toole, John; Eichholz, Kurt M.; Perez-Cruet, Mick J.; Fessler, Richard

    2014-01-01

    Minimally invasive spine surgery has its roots in the mid-twentieth century with a few surgeons and a few techniques, but it has now developed into a large field of progressive spinal surgery. A wide range of techniques are now called “minimally invasive,” and case reports are submitted constantly with new “minimally invasive” approaches to spinal pathology. As minimally invasive spine surgery has become more mainstream over the past ten years, in this paper we discuss its history and development. PMID:24967347

  1. Cost-minimization analysis of subcutaneous abatacept in the treatment of rheumatoid arthritis in Spain

    Directory of Open Access Journals (Sweden)

    R. Ariza

    2014-07-01

    Objective: To compare the cost of treating rheumatoid arthritis patients that have failed an initial treatment with methotrexate, with subcutaneous abatacept versus other first-line biologic disease-modifying antirheumatic drugs. Method: Subcutaneous abatacept was considered comparable to intravenous abatacept, adalimumab, certolizumab pegol, etanercept, golimumab, infliximab and tocilizumab, based on indirect comparison using mixed treatment analysis. A cost-minimization analysis was therefore considered appropriate. The Spanish Health System perspective and a 3-year time horizon were selected. Pharmaceutical and administration costs (€, 2013) of all available first-line biological disease-modifying antirheumatic drugs were considered. Administration costs were obtained from a local costs database. Patients were assumed to have a weight of 70 kg. A 3% annual discount rate was applied. Deterministic and probabilistic sensitivity analyses were performed. Results: In the base case, subcutaneous abatacept proved to be less costly than all other biologic antirheumatic drugs (cost differences ranging from -€831.42 versus infliximab to -€9,741.69 versus tocilizumab). Subcutaneous abatacept was associated with a cost of €10,760.41 per patient during the first year of treatment and €10,261.29 in subsequent years. The total 3-year cost of subcutaneous abatacept was €29,953.89 per patient. Sensitivity analyses proved the model to be robust. Subcutaneous abatacept remained cost-saving in 100% of probabilistic sensitivity analysis simulations versus adalimumab, certolizumab, etanercept and golimumab, in more than 99.6% versus intravenous abatacept and tocilizumab, and in 62.3% versus infliximab. Conclusions: Treatment with subcutaneous abatacept is cost-saving versus intravenous abatacept, adalimumab, certolizumab, etanercept, golimumab, infliximab and tocilizumab in the management of rheumatoid arthritis patients initiating
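
    The mechanics of the 3-year discounted cost are simple enough to show inline (Python). The exact payment-timing convention of the published model is not stated here, so the figure below will not reproduce €29,953.89 precisely; the sketch only illustrates how a 3% discount rate enters a cost-minimization total.

        yearly_costs = [10760.41, 10261.29, 10261.29]  # EUR: first year, then subsequent years
        discount = 0.03

        total = sum(cost / (1.0 + discount) ** year
                    for year, cost in enumerate(yearly_costs))
        print(round(total, 2))  # ~30,395 EUR with costs booked at the start of each year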

  2. Stochastic Simulation of Integrated Circuits with Nonlinear Black-Box Components via Augmented Deterministic Equivalents

    Directory of Open Access Journals (Sweden)

    MANFREDI, P.

    2014-11-01

    This paper extends recent literature results concerning the statistical simulation of circuits affected by random electrical parameters by means of the polynomial chaos framework. With respect to previous implementations, based on the generation and simulation of augmented and deterministic circuit equivalents, the modeling is extended to generic "black-box" multi-terminal nonlinear subcircuits describing complex devices, like those found in integrated circuits. Moreover, based on recently-published works in this field, a more effective approach to generate the deterministic circuit equivalents is implemented, thus yielding more compact and efficient models for nonlinear components. The approach is fully compatible with commercial (e.g., SPICE-type) circuit simulators and is thoroughly validated through the statistical analysis of a realistic interconnect structure with a 16-bit memory chip. The accuracy and the comparison against previous approaches are also carefully established.
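
    A non-intrusive flavour of the polynomial chaos idea can be shown compactly (Python with NumPy). The one-parameter "circuit response" and the Gaussian resistor spread are placeholders; real augmented-equivalent implementations handle many correlated parameters inside the circuit equations themselves.

        import numpy as np
        from math import factorial, sqrt, pi
        from numpy.polynomial.hermite_e import hermegauss, hermeval

        def response(R):
            # Placeholder smooth nonlinear response of a circuit to a resistance.
            return 5.0 * R / (R + 1.0e3)

        mu_R, sigma_R = 1.0e3, 100.0        # assumed R ~ N(mu_R, sigma_R^2)
        order = 4

        nodes, weights = hermegauss(16)     # Gauss quadrature for weight exp(-x^2/2)
        weights = weights / sqrt(2.0 * pi)  # normalize to the standard normal pdf

        # Projection onto probabilists' Hermite polynomials: c_k = E[g(X) He_k(X)] / k!
        coeffs = []
        for k in range(order + 1):
            basis = np.zeros(k + 1)
            basis[k] = 1.0
            g = response(mu_R + sigma_R * nodes) * hermeval(nodes, basis)
            coeffs.append(weights @ g / factorial(k))

        mean = coeffs[0]
        variance = sum(c ** 2 * factorial(k) for k, c in enumerate(coeffs) if k > 0)
        print(mean, variance)  # output statistics from deterministic evaluations only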

  3. How the growth rate of host cells affects cancer risk in a deterministic way

    Science.gov (United States)

    Draghi, Clément; Viger, Louise; Denis, Fabrice; Letellier, Christophe

    2017-09-01

    It is well known that cancers are significantly more often encountered in some tissues than in other ones. In this paper, by using a deterministic model describing the interactions between host, effector immune and tumor cells at the tissue level, we show that this can be explained by the dependency of tumor growth on parameter values characterizing the type as well as the state of the tissue considered, due to the "way of life" (environmental factors, food consumption, drinking or smoking habits, etc.). Our approach is purely deterministic and, consequently, the strong correlation (r = 0.99) between the number of detectable growing tumors and the growth rate of cells from the nesting tissue can be explained without invoking random mutations arising during DNA replication in nonmalignant cells or "bad luck". Strategies to limit the mortality induced by cancer could therefore be well based on improving the way of life, that is, by better preserving the tissue where mutant cells randomly arise.
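
    A generic sketch of the kind of deterministic model involved is shown below (Python with SciPy). The equations and coefficients are illustrative, not the authors' published system; the point is only that sweeping the growth rate of the host tissue changes the asymptotic tumor burden in a fully deterministic way.

        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, y, rho_host):
            # Normalized host (H), effector immune (E) and tumor (T) populations.
            H, E, T = y
            dH = rho_host * H * (1.0 - H) - 0.10 * H * T
            dE = 0.05 + 0.20 * E * T / (0.30 + T) - 0.10 * E * T - 0.05 * E
            dT = 0.50 * T * (1.0 - T) - 0.30 * T * H - 0.20 * T * E
            return [dH, dE, dT]

        for rho_host in (0.2, 0.6, 1.0):       # sweep the tissue growth rate
            sol = solve_ivp(rhs, (0.0, 400.0), [0.9, 0.1, 0.01],
                            args=(rho_host,), rtol=1e-8)
            print(rho_host, sol.y[2, -1])      # final tumor burden per tissue type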

  4. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
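
    The DUA idea, analytic derivatives combined with parameter probability distributions, reduces at first order to the classic "sandwich" propagation formula. Here is a sketch (Python with NumPy); the model, parameter covariance and finite-difference gradient stand in for the GRESS/ADGEN-computed derivatives.

        import numpy as np

        def model(p):
            # Placeholder scalar model result, e.g. a dose or a concentration.
            return p[0] * np.exp(-p[1] * 2.0)

        def gradient_fd(f, p, eps=1e-7):
            g = np.empty(p.size)
            for i in range(p.size):
                dp = p.copy()
                dp[i] += eps
                g[i] = (f(dp) - f(p)) / eps
            return g

        p0 = np.array([3.0, 0.4])
        cov_p = np.diag([0.10 ** 2, 0.05 ** 2])  # assumed parameter covariance

        g = gradient_fd(model, p0)
        var_result = g @ cov_p @ g               # first-order result variance
        print(model(p0), np.sqrt(var_result))    # central value and 1-sigma uncertainty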

  5. Exponential power spectra, deterministic chaos and Lorentzian pulses in plasma edge dynamics

    International Nuclear Information System (INIS)

    Maggs, J E; Morales, G J

    2012-01-01

    Exponential spectra have been observed in the edges of tokamaks, stellarators, helical devices and linear machines. The observation of exponential power spectra is significant because such a spectral character has been closely associated with the phenomenon of deterministic chaos by the nonlinear dynamics community. The proximate cause of exponential power spectra in both magnetized plasma edges and nonlinear dynamics models is the occurrence of Lorentzian pulses in the time signals of fluctuations. Lorentzian pulses are produced by chaotic behavior in the separatrix regions of plasma E × B flow fields or the limit cycle regions of nonlinear models. Chaotic advection, driven by the potential fields of drift waves in plasmas, results in transport. The observation of exponential power spectra and Lorentzian pulses suggests that fluctuations and transport at the edge of magnetized plasmas arise from deterministic, rather than stochastic, dynamics. (paper)
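
    The link between Lorentzian pulses and exponential spectra is easy to verify numerically. In the sketch below (Python with NumPy; pulse widths, amplitudes and timings are arbitrary), the power spectrum of a train of Lorentzian pulses of width tau falls off as exp(-4*pi*tau*f), a straight line on a semilog plot.

        import numpy as np

        dt, n = 1.0e-3, 2 ** 16
        t = np.arange(n) * dt
        tau = 0.02                         # common pulse width [s]

        rng = np.random.default_rng(3)
        amps = rng.uniform(0.5, 1.5, 30)
        times = rng.uniform(5.0, 60.0, 30)
        signal = sum(a / (1.0 + ((t - t0) / tau) ** 2) for a, t0 in zip(amps, times))

        freq = np.fft.rfftfreq(n, dt)
        power = np.abs(np.fft.rfft(signal)) ** 2
        # Analytically, each pulse contributes |F|^2 ~ exp(-4*pi*tau*|f|), so
        # log(power) versus freq is roughly linear with slope about -4*pi*tau.
        fit = np.polyfit(freq[10:2000], np.log(power[10:2000] + 1e-300), 1)
        print(fit[0], -4.0 * np.pi * tau)  # the two slopes should roughly agree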

  6. Deterministic entanglement purification and complete nonlocal Bell-state analysis with hyperentanglement

    International Nuclear Information System (INIS)

    Sheng Yubo; Deng Fuguo

    2010-01-01

    Entanglement purification is a very important element for long-distance quantum communication. Different from all the existing entanglement purification protocols (EPPs) in which two parties can only obtain some quantum systems in a mixed entangled state with a higher fidelity probabilistically by consuming quantum resources exponentially, here we present a deterministic EPP with hyperentanglement. Using this protocol, the two parties can, in principle, obtain deterministically maximally entangled pure states in polarization without destroying any less-entangled photon pair, which will improve the efficiency of long-distance quantum communication exponentially. Meanwhile, it will be shown that this EPP can be used to complete nonlocal Bell-state analysis perfectly. We also discuss this EPP in a practical transmission.

  7. Spatial and spectral detection of protein monolayers with deterministic aperiodic arrays of metal nanoparticles

    Science.gov (United States)

    Lee, Sylvanus Y.; Amsden, Jason J.; Boriskina, Svetlana V.; Gopinath, Ashwin; Mitropolous, Alexander; Kaplan, David L.; Omenetto, Fiorenzo G.; Negro, Luca Dal

    2010-01-01

    Light scattering phenomena in periodic systems have been investigated for decades in optics and photonics. Their classical description relies on Bragg scattering, which gives rise to constructive interference at specific wavelengths along well defined propagation directions, depending on illumination conditions, structural periodicity, and the refractive index of the surrounding medium. In this paper, by engineering multifrequency colorimetric responses in deterministic aperiodic arrays of nanoparticles, we demonstrate significantly enhanced sensitivity to the presence of a single protein monolayer. These structures, which can be readily fabricated by conventional Electron Beam Lithography, sustain highly complex structural resonances that enable a unique optical sensing approach beyond the traditional Bragg scattering with periodic structures. By combining conventional dark-field scattering micro-spectroscopy and simple image correlation analysis, we experimentally demonstrate that deterministic aperiodic surfaces with engineered structural color are capable of detecting, in the visible spectral range, protein layers with thickness of a few tens of Angstroms. PMID:20566892

  8. A Deterministic Safety Assessment of a Pyro-processed Waste Repository

    International Nuclear Information System (INIS)

    Lee, Youn Myoung; Jeong, Jong Tae; Choi, Jong Won

    2012-01-01

    A GoldSim template program for the safety assessment of a hybrid-type repository system, called 'A-KRS', in which two kinds of pyro-processed radioactive wastes (low-level metal wastes and ceramic high-level wastes arising from the pyro-processing of PWR spent nuclear fuels) are disposed of, has been developed. This program is ready for both deterministic and probabilistic total system performance assessment, and is able to evaluate nuclide release from the repository and its further transport into the geosphere and biosphere under various normal and disruptive natural or man-made events and scenarios. The A-KRS has been deterministically assessed with 5 normal and abnormal scenarios associated with nuclide release and transport in and around the repository. Dose exposure rates to the farming exposure group have been evaluated in accordance with all the scenarios and then compared with each other.

  9. Minimal Gromov-Witten rings

    International Nuclear Information System (INIS)

    Przyjalkowski, V V

    2008-01-01

    We construct an abstract theory of Gromov-Witten invariants of genus 0 for quantum minimal Fano varieties (a minimal class of varieties which is natural from the quantum cohomological viewpoint). Namely, we consider the minimal Gromov-Witten ring: a commutative algebra whose generators and relations are of the form used in the Gromov-Witten theory of Fano varieties (of unspecified dimension). The Gromov-Witten theory of any quantum minimal variety is a homomorphism from this ring to C. We prove an abstract reconstruction theorem which says that this ring is isomorphic to the free commutative ring generated by 'prime two-pointed invariants'. We also find solutions of the differential equation of type DN for a Fano variety of dimension N in terms of the generating series of one-pointed Gromov-Witten invariants

  10. Annual Waste Minimization Summary Report

    International Nuclear Information System (INIS)

    Haworth, D.M.

    2011-01-01

    This report summarizes the waste minimization efforts undertaken by National Security Technologies, LLC, for the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office (NNSA/NSO), during calendar year 2010. The NNSA/NSO Pollution Prevention Program establishes a process to reduce the volume and toxicity of waste generated by NNSA/NSO activities and ensures that proposed methods of treatment, storage, and/or disposal of waste minimize potential threats to human health and the environment.

  11. Deterministic and stochastic control of chimera states in delayed feedback oscillator

    Energy Technology Data Exchange (ETDEWEB)

    Semenov, V. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Zakharova, A.; Schöll, E. [Institut für Theoretische Physik, TU Berlin, Hardenbergstraße 36, 10623 Berlin (Germany); Maistrenko, Y. [Institute of Mathematics and Center for Medical and Biotechnical Research, NAS of Ukraine, Tereschenkivska Str. 3, 01601 Kyiv (Ukraine)

    2016-06-08

    Chimera states, characterized by the coexistence of regular and chaotic dynamics, are found in a nonlinear oscillator model with negative time-delayed feedback. The control of these chimera states by external periodic forcing is demonstrated by numerical simulations. Both deterministic and stochastic external periodic forcing are considered. It is shown that multi-cluster chimeras can be achieved by adjusting the external forcing frequency to appropriate resonance conditions. The constructive role of noise in the formation of chimera states is shown.
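
    The record does not give the model equations, so the sketch below assumes a generic scalar oscillator with negative time-delayed feedback and external periodic forcing, integrated by the Euler method with a history buffer. In the chimera literature such delay systems are read as virtual one-dimensional media by mapping the delay interval onto a spatial coordinate; this sketch only shows the integration scheme, not the chimera analysis.

        import numpy as np

        # Assumed generic form: x'(t) = a*x - x^3 - K*x(t - tau) + A*sin(Omega*t)
        a, K, tau, A, Omega = 1.0, 0.6, 2.0, 0.2, 1.1   # placeholder parameters
        dt, T = 0.01, 200.0
        n, d = int(T / dt), int(tau / dt)

        x = np.zeros(n)
        x[:d + 1] = 0.1                              # constant history on [-tau, 0]
        for i in range(d, n - 1):
            feedback = -K * x[i - d]                 # negative time-delayed feedback
            forcing = A * np.sin(Omega * i * dt)     # external periodic forcing
            x[i + 1] = x[i] + dt * (a * x[i] - x[i]**3 + feedback + forcing)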

  12. Using reputation systems and non-deterministic routing to secure wireless sensor networks.

    Science.gov (United States)

    Moya, José M; Vallejo, Juan Carlos; Fraga, David; Araujo, Alvaro; Villanueva, Daniel; de Goyeneche, Juan-Mariano

    2009-01-01

    Security in wireless sensor networks is difficult to achieve because of the resource limitations of the sensor nodes. We propose a trust-based decision framework for wireless sensor networks coupled with a non-deterministic routing protocol. Both provide a mechanism to effectively detect and confine common attacks, and, unlike previous approaches, allow bad reputation feedback to the network. This approach has been extensively simulated, obtaining good results, even for unrealistically complex attack scenarios.
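
    The record does not spell out the reputation mechanism, so as a purely hypothetical illustration of a trust-based decision, the sketch below uses a beta reputation score, a common building block in sensor-network trust systems: each node keeps pseudo-counts of good and bad interactions with a neighbour and routes around neighbours whose expected trust falls below a threshold.

        from dataclasses import dataclass

        @dataclass
        class Reputation:
            good: int = 1            # Beta(1, 1) uniform-prior pseudo-counts
            bad: int = 1

            def update(self, cooperated: bool) -> None:
                if cooperated:
                    self.good += 1
                else:
                    self.bad += 1

            @property
            def trust(self) -> float:
                return self.good / (self.good + self.bad)   # posterior mean

        r = Reputation()
        for outcome in (True, True, False, True):
            r.update(outcome)
        print(round(r.trust, 2))     # 0.67; route around nodes below a threshold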

  13. Deterministic Seirs Epidemic Model for Modeling Vital Dynamics, Vaccinations, and Temporary Immunity

    OpenAIRE

    Marek B. Trawicki

    2017-01-01

    In this paper, the author proposes a new SEIRS model that generalizes several classical deterministic epidemic models (e.g., SIR and SIS and SEIR and SEIRS) involving the relationships between the susceptible S, exposed E, infected I, and recovered R individuals for understanding the proliferation of infectious diseases. As a way to incorporate the most important features of the previous models under the assumption of homogeneous mixing (mass-action principle) of the individuals in the popula...
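
    A minimal numerical sketch of a SEIRS model with vital dynamics, vaccination, and waning (temporary) immunity follows; the parameter names and values are generic assumptions, not the paper's notation.

        import numpy as np

        # beta: transmission, sigma: incubation rate, gamma: recovery rate,
        # xi: loss of immunity (R -> S), mu: birth/death rate, nu: vaccination (S -> R)
        beta, sigma, gamma, xi, mu, nu = 0.5, 1/5, 1/7, 1/90, 1/(70*365), 0.0

        def step(y, dt):
            S, E, I, R = y
            N = S + E + I + R
            dS = mu*N - beta*S*I/N - (mu + nu)*S + xi*R
            dE = beta*S*I/N - (sigma + mu)*E
            dI = sigma*E - (gamma + mu)*I
            dR = gamma*I + nu*S - (xi + mu)*R
            return y + dt * np.array([dS, dE, dI, dR])

        y = np.array([9990.0, 0.0, 10.0, 0.0])       # S, E, I, R
        for _ in range(10 * 365):                    # ten years, daily Euler steps
            y = step(y, 1.0)
        print(y.round(1))                            # settles to an endemic equilibrium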

  14. Autogenic succession and deterministic recovery following disturbance in soil bacterial communities

    DEFF Research Database (Denmark)

    Jurburg, Stephanie D.; Nunes, Ines Marques; Stegen, James C.

    2017-01-01

    …to understand the successional trajectory of soil bacterial communities following disturbances and the mechanisms controlling these dynamics at a scale relevant for these organisms, we subjected soil microcosms to a heat disturbance and followed the community composition of active bacteria over 50 days… …slowed down, and a stability phase (after 29 days), during which the community tended towards its original composition. Phylogenetic turnover patterns indicated that the community experienced stronger deterministic selection during recovery. Thus, soil bacterial communities, despite their extreme…

  15. Using Reputation Systems and Non-Deterministic Routing to Secure Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Juan-Mariano de Goyeneche

    2009-05-01

    Security in wireless sensor networks is difficult to achieve because of the resource limitations of the sensor nodes. We propose a trust-based decision framework for wireless sensor networks coupled with a non-deterministic routing protocol. Both provide a mechanism to effectively detect and confine common attacks, and, unlike previous approaches, allow bad reputation feedback to the network. This approach has been extensively simulated, obtaining good results, even for unrealistically complex attack scenarios.

  16. A combined deterministic and probabilistic procedure for safety assessment of components with cracks - Handbook.

    Energy Technology Data Exchange (ETDEWEB)

    Dillstroem, Peter; Bergman, Mats; Brickstad, Bjoern; Weilin Zang; Sattari-Far, Iradj; Andersson, Peder; Sund, Goeran; Dahlberg, Lars; Nilsson, Fred (Inspecta Technology AB, Stockholm (Sweden))

    2008-07-01

    SSM has supported research work for the further development of a previously developed procedure/handbook (SKI Report 99:49) for the assessment of detected cracks and defect tolerance analysis. During operative use of the handbook, needs were identified to update the deterministic part of the procedure and to introduce a new probabilistic flaw evaluation procedure, as well as to describe the theoretical basis of the computer program more completely. The principal aim of the project has been to update the deterministic part of the recently developed procedure and to introduce a new probabilistic flaw evaluation procedure. Other objectives of the project have been to validate the conservatism of the procedure, make the procedure well defined and easy to use, and make the handbook that documents the procedure as complete as possible. The procedure/handbook and the computer program ProSACC, Probabilistic Safety Assessment of Components with Cracks, have been extensively revised within this project. The major differences compared to the last revision are within the following areas: it is now possible to deal with a combination of deterministic and probabilistic data; it is possible to include J-controlled stable crack growth; the appendices on material data to be used for nuclear applications and on residual stresses are revised; a new deterministic safety evaluation system is included; the conservatism in the method for evaluation of the secondary stresses for ductile materials is reduced; and a new geometry, a circular bar with a circumferential surface crack, has been introduced. The results of this project will be of use to SSM in safety assessments of components with cracks and in assessments of the interval between inspections of components in nuclear power plants.

  17. A property of the value multifunction of the deterministic mean-field game

    Science.gov (United States)

    Averboukh, Yurii

    2017-11-01

    The note is concerned with the theory of many-player differential games examined within the framework of the mean-field approach. The results presented in the note are as follows. First, we show that the solution to the deterministic mean-field game can be nonunique. Second, we present a property of the value multifunction of the mean-field game that describes the value multifunction using its values at intermediate times.

  18. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession.

    Science.gov (United States)

    Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-03-17

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages--which provide a larger spatiotemporal scale relative to within stage analyses--revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended--and experimentally testable--conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems.

  19. Faster Deterministic Volume Estimation in the Oracle Model via Thin Lattice Coverings

    NARCIS (Netherlands)

    D.N. Dadush (Daniel)

    2015-01-01

    We give a 2^O(n) * (1+1/ε)^n-time and poly(n)-space deterministic algorithm for computing a (1+ε)^n approximation to the volume of a general convex body K, which comes close to matching the (1+c/ε)^(n/2) lower bound for volume estimation in the oracle model by Bárány and Füredi (STOC 1986).

  20. Deterministic fabrication of dielectric loaded waveguides coupled to single nitrogen vacancy centers in nanodiamonds

    DEFF Research Database (Denmark)

    Siampour, Hamidreza; Kumar, Shailesh; Bozhevolnyi, Sergey I.

    We report on the fabrication of dielectric-loaded waveguides that are excited by single nitrogen-vacancy (NV) centers in nanodiamonds. The waveguides are deterministically written onto the pre-characterized nanodiamonds by using electron beam lithography of hydrogen silsesquioxane (HSQ) resist...... on silver-coated silicon substrate. A change in the lifetime of the NV centers is observed after fabrication of the waveguides, and antibunching in the correlation measurement confirms that the nanodiamonds contain single NV centers....

  1. Deterministic Construction of Plasmonic Heterostructures in Well-Organized Arrays for Nanophotonic Materials.

    Science.gov (United States)

    Liu, Xiaoying; Biswas, Sushmita; Jarrett, Jeremy W; Poutrina, Ekaterina; Urbas, Augustine; Knappenberger, Kenneth L; Vaia, Richard A; Nealey, Paul F

    2015-12-02

    Plasmonic heterostructures are deterministically constructed in organized arrays through chemical pattern directed assembly, a combination of top-down lithography and bottom-up assembly, and by the sequential immobilization of gold nanoparticles of three different sizes onto chemically patterned surfaces using tailored interaction potentials. These spatially addressable plasmonic chain nanostructures demonstrate localization of linear and nonlinear optical fields as well as nonlinear circular dichroism.

  2. Ground motion following selection of SRS design basis earthquake and associated deterministic approach

    International Nuclear Information System (INIS)

    1991-03-01

    This report summarizes the results of a deterministic assessment of earthquake ground motions at the Savannah River Site (SRS). The purpose of this study is to assist the Environmental Sciences Section of the Savannah River Laboratory in reevaluating the design basis earthquake (DBE) ground motion at SRS using approaches defined in Appendix A to 10 CFR Part 100. This work is in support of the Seismic Engineering Section's Seismic Qualification Program for reactor restart.

  3. What is Quantum Mechanics? A Minimal Formulation

    Science.gov (United States)

    Friedberg, R.; Hohenberg, P. C.

    2018-03-01

    This paper presents a minimal formulation of nonrelativistic quantum mechanics, by which is meant a formulation which describes the theory in a succinct, self-contained, clear, unambiguous and of course correct manner. The bulk of the presentation is the so-called "microscopic theory", applicable to any closed system S of arbitrary size N, using concepts referring to S alone, without resort to external apparatus or external agents. An example of a similar minimal microscopic theory is the standard formulation of classical mechanics, which serves as the template for a minimal quantum theory. The only substantive assumption required is the replacement of the classical Euclidean phase space by Hilbert space in the quantum case, with the attendant all-important phenomenon of quantum incompatibility. Two fundamental theorems of Hilbert space, the Kochen-Specker-Bell theorem and Gleason's theorem, then lead inevitably to the well-known Born probability rule. For both classical and quantum mechanics, questions of physical implementation and experimental verification of the predictions of the theories are the domain of the macroscopic theory, which is argued to be a special case or application of the more general microscopic theory.

  4. On balanced minimal repeated measurements designs

    Directory of Open Access Journals (Sweden)

    Shakeel Ahmad Mir

    2014-10-01

    Repeated measurements designs are concerned with scientific experiments in which each experimental unit is assigned more than once to a treatment, either different or identical. This class of designs has the property that unbiased estimators for elementary contrasts among direct and residual effects are obtainable. Afsarinejad (1983) provided a method of constructing balanced minimal repeated measurements designs with p < t, when t is odd or a prime power; one or more treatments may occur more than once in some sequences, and the designs so constructed no longer remain uniform in periods. In this paper an attempt has been made to provide a new method to overcome this drawback. Specifically, two cases have been considered: RM[t, n = t(t-1)/(p-1), p], λ2 = 1, for balanced minimal repeated measurements designs, and RM[t, n = 2t(t-1)/(p-1), p], λ2 = 2, for balanced repeated measurements designs. In addition, a method has been provided for constructing extra-balanced minimal designs for the special case RM[t, n = t^2/(p-1), p], λ2 = 1.

  5. The concerted calculation of the BN-600 reactor for the deterministic and stochastic codes

    Science.gov (United States)

    Bogdanova, E. V.; Kuznetsov, A. N.

    2017-01-01

    The solution of the problem of increasing the safety of nuclear power plants requires complete and reliable information about the processes occurring in the core of a working reactor. Nowadays the Monte Carlo method is the most general-purpose method used to calculate the neutron-physical characteristics of a reactor, but it is characterized by long computation times. Therefore, it may be useful to carry out coupled calculations with stochastic and deterministic codes. This article presents the results of research into the possibility of combining stochastic and deterministic algorithms in calculations of the BN-600 reactor. This is only one part of the work, which was carried out in the framework of a graduation project at the NRC “Kurchatov Institute” in cooperation with S. S. Gorodkov and M. A. Kalugin. It considers a 2-D layer of the BN-600 reactor core from the international benchmark test published in the report IAEA-TECDOC-1623. Calculations of the reactor were performed with the MCU code and then with a standard operative diffusion algorithm with constants taken from the Monte Carlo computation. Macroscopic cross sections, diffusion coefficients, the effective multiplication factor, and the distributions of neutron flux and power were obtained in 15 energy groups. Reasonable agreement between the stochastic and deterministic calculations of the BN-600 is observed.
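
    To illustrate the kind of coupling described here (group constants from a Monte Carlo code feeding a standard diffusion solve), the sketch below runs a one-group, one-dimensional slab diffusion eigenvalue calculation by power iteration. All constants are placeholders, not MCU-generated BN-600 data.

        import numpy as np

        # -D phi'' + Sa*phi = (1/k) * nuSf * phi, zero-flux boundaries
        D, Sa, nuSf = 1.2, 0.06, 0.07    # placeholder one-group constants
        L, N = 100.0, 200                # slab width (cm), mesh cells
        h = L / N

        A = np.zeros((N, N))             # finite-difference diffusion operator
        for i in range(N):
            A[i, i] = 2*D/h**2 + Sa
            if i > 0:
                A[i, i - 1] = -D/h**2
            if i < N - 1:
                A[i, i + 1] = -D/h**2

        phi, k = np.ones(N), 1.0
        for _ in range(200):             # power iteration on the fission source
            phi_new = np.linalg.solve(A, nuSf * phi / k)
            k *= phi_new.sum() / phi.sum()
            phi = phi_new
        print(round(k, 5))               # effective multiplication factor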

  6. Strategies for the preparation of large cluster states using non-deterministic gates

    International Nuclear Information System (INIS)

    Rohde, Peter P; Barrett, Sean D

    2007-01-01

    The cluster state model for quantum computation has paved the way for schemes that allow scalable quantum computing, even when using non-deterministic quantum gates. Here the initial step is to prepare a large entangled state using non-deterministic gates. A key question in this context is the relative efficiency of different 'strategies', i.e. in what order the non-deterministic gates should be applied in order to maximize the size of the resulting cluster states. In this paper we consider this issue in the context of 'large' cluster states. Specifically, we assume an unlimited resource of qubits and ask at what steady-state rate 'large' clusters can be prepared from this resource, given an entangling gate with particular characteristics. We measure this rate in terms of the number of entangling gate operations that are applied. Our approach works for a variety of different entangling gate types, with arbitrary failure probability. Our results indicate that strategies whereby one preferentially bonds together clusters of identical length are considerably more efficient than those in which one does not. Additionally, compared to earlier analytic results, our numerical study offers substantially improved resource scaling.
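
    The advantage of bonding equal-length clusters can be seen in a toy cost model. Assume, pessimistically, that a failed entangling gate destroys both input clusters entirely (the gate models in the paper are milder, typically costing single qubits); even then, pairwise doubling scales polynomially with cluster size, while qubit-by-qubit growth scales exponentially. The recursions below are mine, for illustration only.

        def gates_incremental(n, p):
            # Expected gates to grow a chain one qubit at a time when a
            # failed gate destroys the whole chain: exponential in n.
            e = 0.0
            for _ in range(n - 1):
                e = (e + 1) / p
            return e

        def gates_doubling(log2_n, p):
            # Expected gates when equal-length clusters are fused pairwise
            # and a failed fusion destroys both inputs: polynomial in n.
            e = 0.0
            for _ in range(log2_n):
                e = (2 * e + 1) / p
            return e

        p = 0.5
        print(gates_incremental(16, p))  # 65534.0 gates for a 16-qubit chain
        print(gates_doubling(4, p))      # 170.0 gates for the same chain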

  7. Human Resources Readiness as TSO for Deterministic Safety Analysis on the First NPP in Indonesia

    International Nuclear Information System (INIS)

    Sony Tjahyani, D. T.

    2010-01-01

    In Government Regulation No. 43 Year 2006 it is stated that the preliminary safety analysis report and the final safety analysis report are among the requirements that must be fulfilled in the construction and operation licensing of a commercial power reactor (NPP). The purpose of the safety analysis report is to confirm the adequacy and efficiency of provisions within the defence in depth of a nuclear reactor. Deterministic analysis is used in the safety analysis report. One of the TSO tasks is to evaluate this report at the request of the operator or the regulatory body. This paper discusses human resources readiness as a TSO for deterministic safety analysis of the first NPP in Indonesia. The assessment is done by comparing the analysis steps in SS-23 and SS-30 with the current human resources status of BATAN. The assessment results show that human resources for deterministic safety analysis are ready as a TSO, especially to review the preliminary safety analysis report and to revise the final safety analysis report in the licensing of the first NPP in Indonesia. However, preparing the safety analysis report itself still requires many competent human resources. (author)

  8. Simulating the formation of keratin filament networks by a piecewise-deterministic Markov process.

    Science.gov (United States)

    Beil, Michael; Lück, Sebastian; Fleischer, Frank; Portet, Stéphanie; Arendt, Wolfgang; Schmidt, Volker

    2009-02-21

    Keratin intermediate filament networks are part of the cytoskeleton in epithelial cells. They were found to regulate viscoelastic properties and motility of cancer cells. Due to unique biochemical properties of keratin polymers, the knowledge of the mechanisms controlling keratin network formation is incomplete. A combination of deterministic and stochastic modeling techniques can be a valuable source of information since they can describe known mechanisms of network evolution while reflecting the uncertainty with respect to a variety of molecular events. We applied the concept of piecewise-deterministic Markov processes to the modeling of keratin network formation with high spatiotemporal resolution. The deterministic component describes the diffusion-driven evolution of a pool of soluble keratin filament precursors fueling various network formation processes. Instants of network formation events are determined by a stochastic point process on the time axis. A probability distribution controlled by model parameters exercises control over the frequency of different mechanisms of network formation to be triggered. Locations of the network formation events are assigned dependent on the spatial distribution of the soluble pool of filament precursors. Based on this modeling approach, simulation studies revealed that the architecture of keratin networks mostly depends on the balance between filament elongation and branching processes. The spatial distribution of network mesh size, which strongly influences the mechanical characteristics of filament networks, is modulated by lateral annealing processes. This mechanism which is a specific feature of intermediate filament networks appears to be a major and fast regulator of cell mechanics.
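
    A scalar toy version of a piecewise-deterministic Markov process in the spirit described above: deterministic drift of a soluble precursor pool punctuated by random assembly events whose rate depends on the pool. The rates, event types, and weights are invented for illustration; the paper's actual model is spatial.

        import random

        # Pool s(t) obeys ds/dt = inflow - k*s between events; events fire with
        # state-dependent rate lam*s and each consumes `cost` units of precursor.
        inflow, k, lam, cost = 1.0, 0.05, 0.02, 2.0
        rng = random.Random(1)

        s, t, dt, events = 10.0, 0.0, 0.01, []
        while t < 500.0:
            s += dt * (inflow - k * s)           # deterministic flow
            if rng.random() < lam * s * dt:      # jump of the point process
                kind = rng.choices(["nucleation", "elongation", "branching"],
                                   weights=[0.2, 0.6, 0.2])[0]
                s = max(s - cost, 0.0)           # event consumes precursors
                events.append((round(t, 2), kind))
            t += dt
        print(len(events), events[:3])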

  9. Probabilistic approach in treatment of deterministic analyses results of severe accidents

    International Nuclear Information System (INIS)

    Krajnc, B.; Mavko, B.

    1996-01-01

    Severe accident sequences resulting in loss of the core geometric integrity have been found to have a small probability of occurrence. Because of their potential consequences to public health and safety, an evaluation of the core degradation progression and the resulting effects on the containment is necessary to determine the probability of a significant release of radioactive materials. This requires assessment of many interrelated phenomena including: steel and zircaloy oxidation, steam spikes, in-vessel debris cooling, potential vessel failure mechanisms, release of core material to the containment, containment pressurization from steam generation or from generation of non-condensable gases or hydrogen burn, and ultimately the coolability of degraded core material. To assess whether a certain phenomenological event in the containment event trees would happen or not, plant-specific deterministic analyses should be performed. Because there is large uncertainty in the prediction of severe accident phenomena in Level 2 analyses (containment event trees), a combination of the probabilistic and deterministic approaches should be used. In fact, the results of the deterministic analyses of severe accidents are treated in a probabilistic manner due to the large uncertainty of the results, which is a consequence of the lack of detailed knowledge. This paper discusses the approach used in many IPEs, which assures that the probability assigned to a certain question in the event tree represents the probability that the event will or will not happen, and that this probability also includes its uncertainty, which is mainly the result of a lack of knowledge. (author)

  10. Quantitative diffusion tensor deterministic and probabilistic fiber tractography in relapsing-remitting multiple sclerosis

    International Nuclear Information System (INIS)

    Hu Bing; Ye Binbin; Yang Yang; Zhu Kangshun; Kang Zhuang; Kuang Sichi; Luo Lin; Shan Hong

    2011-01-01

    Purpose: Our aim was to study the quantitative fiber tractography variations and patterns in patients with relapsing-remitting multiple sclerosis (RRMS) and to assess the correlation between quantitative fiber tractography and the Expanded Disability Status Scale (EDSS). Material and methods: Twenty-eight patients with RRMS and 28 age-matched healthy volunteers underwent a diffusion tensor MR imaging study. Quantitative deterministic and probabilistic fiber tractography were generated in all subjects, and the mean numbers of tracked lines and fiber densities were counted. Paired-samples t tests were used to compare tracked lines and fiber density in RRMS patients with those in controls. A bivariate linear regression model was used to determine the relationship between quantitative fiber tractography and EDSS in RRMS. Results: Both deterministic and probabilistic tractography's tracked lines and fiber density in RRMS patients were less than those in controls (P < .001). Both deterministic and probabilistic tractography's tracked lines and fiber density were negatively correlated with EDSS in RRMS (P < .001). The fiber tract disruptions and reductions in RRMS were directly visualized on fiber tractography. Conclusion: Changes of white matter tracts can be detected by quantitative diffusion tensor fiber tractography and correlate with clinical impairment in RRMS.

  11. High-fidelity global optimization of shape design by dimensionality reduction, metamodels and deterministic particle swarm

    Science.gov (United States)

    Chen, Xi; Diez, Matteo; Kandasamy, Manivannan; Zhang, Zhiguo; Campana, Emilio F.; Stern, Frederick

    2015-04-01

    Advances in high-fidelity shape optimization for industrial problems are presented, based on geometric variability assessment and design-space dimensionality reduction by Karhunen-Loève expansion, metamodels and deterministic particle swarm optimization (PSO). Hull-form optimization is performed for resistance reduction of the high-speed Delft catamaran, advancing in calm water at a given speed, and free to sink and trim. Two feasible sets (A and B) are assessed, using different geometric constraints. Dimensionality reduction for 95% confidence is applied to high-dimensional free-form deformation. Metamodels are trained by design of experiments with URANS; multiple deterministic PSOs achieve a resistance reduction of 9.63% for A and 6.89% for B. Deterministic PSO is found to be effective and efficient, as shown by comparison with stochastic PSO. The optimum for A has the best overall performance over a wide range of speed. Compared with earlier optimization, the present studies provide an additional resistance reduction of 6.6% at 1/10 of the computational cost.
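
    Deterministic PSO replaces the random multipliers in the particle velocity update with fixed values, so every run of the optimizer is reproducible. A minimal sketch on a convex toy function follows; the coefficients are common constriction values, not those of the paper.

        import numpy as np

        def dpso(f, lo, hi, n_particles=20, iters=200):
            # Constriction-type update with the random multipliers fixed to 1
            chi, c1, c2 = 0.729, 1.494, 1.494
            # Deterministic initialization: particles spread along the diagonal
            x = lo + (hi - lo) * np.linspace(0, 1, n_particles)[:, None]
            v = np.zeros_like(x)
            pbest = x.copy()
            pval = np.apply_along_axis(f, 1, x)
            g = pbest[pval.argmin()].copy()
            for _ in range(iters):
                v = chi * (v + c1 * (pbest - x) + c2 * (g - x))
                x = np.clip(x + v, lo, hi)
                val = np.apply_along_axis(f, 1, x)
                improved = val < pval
                pbest[improved], pval[improved] = x[improved], val[improved]
                g = pbest[pval.argmin()].copy()
            return g, pval.min()

        sphere = lambda z: float((z ** 2).sum())
        best, fbest = dpso(sphere, np.full(2, -5.0), np.full(2, 5.0))
        print(best.round(3), round(fbest, 6))   # converges near the origin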

  12. Temperature regulates deterministic processes and the succession of microbial interactions in anaerobic digestion process.

    Science.gov (United States)

    Lin, Qiang; De Vrieze, Jo; Li, Chaonan; Li, Jiaying; Li, Jiabao; Yao, Minjie; Hedenec, Petr; Li, Huan; Li, Tongtong; Rui, Junpeng; Frouz, Jan; Li, Xiangzhen

    2017-10-15

    Temperature plays a crucial role in the microbial interactions that affect the stability and performance of anaerobic digestion. In this study, the microbial interactions and their succession in the anaerobic digestion process were investigated at three levels, represented by (1) present and (2) active micro-organisms, and (3) gene expression, under a temperature gradient from 25 to 55 °C. Network topological features indicated a global variation in microbial interactions at different temperatures. The variations of microbial interactions in terms of network modularity and deterministic processes, based on topological features, corresponded well with the variations in methane production, but not with temperature. A common successional pattern of microbial interactions was observed at different temperatures: both deterministic processes and network modularity increased over time during the digestion process. It was concluded that the temperature-mediated increases in network modularity and in the deterministic processes shaping the microbial interactions improved the stability and efficiency of the anaerobic digestion process.

  13. Neutronics comparative analysis of plate-type research reactor using deterministic and stochastic methods

    International Nuclear Information System (INIS)

    Liu, Shichang; Wang, Guanbo; Wu, Gaochen; Wang, Kan

    2015-01-01

    Highlights: • DRAGON and DONJON are applied and verified in calculations of research reactors. • Continuous-energy Monte Carlo calculations by RMC are chosen as the references. • The “ECCO” option of DRAGON is suitable for the calculations of research reactors. • Manual modifications of cross-sections are not necessary with DRAGON and DONJON. • DRAGON and DONJON agree well with RMC if appropriate treatments are applied. - Abstract: Simulation of the behavior of plate-type research reactors such as JRR-3M and CARR poses a challenge for traditional neutronics calculation tools and schemes developed for power reactors, due to the complex geometry, high heterogeneity and large leakage of research reactors. Two different theoretical approaches, the deterministic and the stochastic methods, are used for the neutronics analysis of the JRR-3M plate-type research reactor in this paper. For the deterministic method the neutronics codes DRAGON and DONJON are used, while the continuous-energy Monte Carlo code RMC (Reactor Monte Carlo code) is employed for the stochastic approach. The goal of this research is to examine the capability of the deterministic code system DRAGON and DONJON to reliably simulate research reactors. The results indicate that the DRAGON and DONJON code system agrees well with the continuous-energy Monte Carlo simulation on both k-eff and flux distributions if appropriate treatments (such as the ECCO option) are applied.

  14. A Comparison of Monte Carlo and Deterministic Solvers for keff and Sensitivity Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Haeck, Wim [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, Donald Kent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); White, Morgan Curtis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Saller, Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Favorite, Jeffrey A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-12-12

    Verification and validation of our solutions for calculating the neutron reactivity for nuclear materials is a key issue to address for many applications, including criticality safety, research reactors, power reactors, and nuclear security. Neutronics codes solve variations of the Boltzmann transport equation. The two main variants are Monte Carlo versus deterministic solutions, e.g. the MCNP [1] versus PARTISN [2] codes, respectively. There have been many studies over the decades that examined the accuracy of such solvers and the general conclusion is that when the problems are well-posed, either solver can produce accurate results. However, the devil is always in the details. The current study examines the issue of self-shielding and the stress it puts on deterministic solvers. Most Monte Carlo neutronics codes use continuous-energy descriptions of the neutron interaction data that are not subject to this effect. The issue of self-shielding occurs because of the discretisation of data used by the deterministic solutions. Multigroup data used in these solvers are the average cross section and scattering parameters over an energy range. Resonances in cross sections can occur that change the likelihood of interaction by one to three orders of magnitude over a small energy range. Self-shielding is the numerical effect that the average cross section in groups with strong resonances can be strongly affected as neutrons within that material are preferentially absorbed or scattered out of the resonance energies. This affects both the average cross section and the scattering matrix.
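
    A numeric illustration of the self-shielding effect described above, assuming the narrow-resonance limit in which the intra-group flux dips as 1/σ(E): the flux-weighted group constant lands near the smooth background value, far below the unweighted average. All numbers are arbitrary.

        import numpy as np

        E = np.linspace(1.0, 2.0, 10_000)          # one energy group (arbitrary units)
        sigma_b = 10.0                             # smooth background cross section
        sigma = sigma_b + 5000.0 / (1 + ((E - 1.5) / 0.005) ** 2)  # resonance

        unshielded = sigma.mean()                  # flat-flux group average
        phi = 1.0 / sigma                          # narrow-resonance flux depression
        shielded = (sigma * phi).sum() / phi.sum() # flux-weighted group constant

        print(round(unshielded, 1), round(shielded, 1))
        # ~88 vs ~14: the shielded constant sits near the background value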

  15. Monte Carlo simulation of induction time and metastable zone width; stochastic or deterministic?

    Science.gov (United States)

    Kubota, Noriaki

    2018-03-01

    The induction time and metastable zone width (MSZW) measured for small samples (say 1 mL or less) both scatter widely; thus, these two are observed as stochastic quantities. In contrast, for large samples (say 1000 mL or more), the induction time and MSZW are observed as deterministic quantities. The reason for this experimental difference is investigated with Monte Carlo simulation. In the simulation, the time (under isothermal conditions) and supercooling (under polythermal conditions) at which a first single crystal is detected are defined as the induction time t and the MSZW ΔT for small samples, respectively. The number of crystals just at the moment of t and ΔT is unity. A first crystal emerges at random due to the intrinsic nature of nucleation; accordingly, t and ΔT become stochastic. For large samples, the time and supercooling at which the number density of crystals N/V reaches a detector sensitivity (N/V)det are defined as t and ΔT for isothermal and polythermal conditions, respectively. The points of t and ΔT are those at which a large number of crystals have accumulated; consequently, t and ΔT become deterministic according to the law of large numbers. Whether t and ΔT are stochastic or deterministic in actual experiments should not be attributed to a change in nucleation mechanisms at the molecular level; it could be just a matter of differences in the experimental definition of t and ΔT.
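
    The small-sample/large-sample dichotomy has a compact probabilistic reading: if nucleation is a Poisson process with rate J·V, the first-event time is exponential (relative scatter CV = 1), while the time for the number density to reach a detector threshold is a Gamma variable whose relative scatter shrinks as 1/√N. The sketch below assumes placeholder values for J and the detector sensitivity.

        import numpy as np

        rng = np.random.default_rng(42)
        J = 0.1        # nucleation rate per mL per second (placeholder)
        ndet = 1.0     # detector sensitivity, crystals per mL (placeholder)

        def induction_times(V, trials=1000):
            if V <= 10.0:   # small sample: detect the first single crystal
                return rng.exponential(1.0 / (J * V), size=trials)
            n_needed = int(ndet * V)   # large sample: wait for N = ndet*V crystals
            return rng.gamma(shape=n_needed, scale=1.0 / (J * V), size=trials)

        for V in (1.0, 1000.0):
            t = induction_times(V)
            print(f"V={V:6.0f} mL  mean={t.mean():6.1f} s  CV={t.std()/t.mean():.3f}")
        # CV ~ 1 for the 1 mL sample (stochastic), ~ 0.03 for 1000 mL (deterministic)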

  16. Development of a model for unsteady deterministic stresses adapted to the multi-stages turbomachines simulation; Developpement d'un modele de tensions deterministes instationnaires adapte a la simulation de turbomachines multi-etagees

    Energy Technology Data Exchange (ETDEWEB)

    Charbonnier, D.

    2004-12-15

    The physical phenomena observed in turbomachines are generally three-dimensional and unsteady. A recent study revealed that a three-dimensional steady simulation can reproduce the time-averaged unsteady phenomena, provided the steady flow field equations incorporate deterministic stresses. The objective of this work is thus to develop a model of the unsteady deterministic stresses. The analogy with turbulence makes it possible to write transport equations for these stresses. The equations are implemented in a steady flow solver, and a model for the deterministic energy fluxes is also developed and implemented. Finally, this work shows that a three-dimensional steady simulation, taking unsteady effects into account through transport equations for the deterministic stresses, increases the computing time by only approximately 30%, which remains very attractive compared to an unsteady simulation. (author)
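
    The turbulence analogy can be sketched as follows (notation mine, outlining the standard averaging idea rather than the thesis's exact formulation): split the instantaneous velocity into its time average and a deterministic, blade-passing fluctuation, then average the convective term of the momentum equation:

        u_i(\mathbf{x},t) = \bar{u}_i(\mathbf{x}) + \tilde{u}_i(\mathbf{x},t),
        \qquad \overline{\tilde{u}_i} = 0,
        \qquad \overline{u_i u_j} = \bar{u}_i\,\bar{u}_j + \overline{\tilde{u}_i \tilde{u}_j}

    The steady equations thus contain the extra term -\rho\,\overline{\tilde{u}_i \tilde{u}_j}, formally analogous to a Reynolds stress; these are the quantities for which transport equations are written.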

  17. Activity recognition from minimal distinguishing subsequence mining

    Science.gov (United States)

    Iqbal, Mohammad; Pao, Hsing-Kuo

    2017-08-01

    Human activity recognition is one of the most important research topics in the era of the Internet of Things. To separate different activities given sensory data, we utilize a Minimal Distinguishing Subsequence (MDS) mining approach to efficiently find distinguishing patterns among different activities. We first transform the sensory data into a series of sensor triggering events and then apply the MDS mining procedure; gap constraints are also considered in the MDS mining. Given the multi-class nature of most activity recognition tasks, we extend the MDS mining approach from the binary case to the multi-class one to fit the need for multiple-activity recognition. We also study how to select the best parameter set, including the minimal and maximal support thresholds, for finding MDSs that yield effective activity recognition. Overall, the prediction accuracy is 86.59% on the van Kasteren dataset, which consists of four different activities for recognition.
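
    The core primitive here is subsequence containment under a gap constraint; a minimal distinguishing subsequence is then a pattern whose support exceeds a threshold in one activity's event sequences, stays below a threshold in the others, and has no shorter sub-pattern with the same property. A toy sketch follows (greedy left-most matching, where a full miner would backtrack; the event names are invented):

        def occurs(pattern, seq, max_gap):
            # True if `pattern` occurs in `seq` with at most `max_gap` events
            # between consecutive matched elements (greedy left-most match)
            i, last = 0, None
            for pos, ev in enumerate(seq):
                if ev == pattern[i] and (last is None or pos - last - 1 <= max_gap):
                    i, last = i + 1, pos
                    if i == len(pattern):
                        return True
            return False

        def support(pattern, sequences, max_gap=2):
            return sum(occurs(pattern, s, max_gap) for s in sequences) / len(sequences)

        cooking = [["stove", "cupboard", "stove", "fridge"],
                   ["cupboard", "stove", "fridge"]]
        sleeping = [["door", "light"], ["light", "door", "light"]]
        pat = ["stove", "fridge"]
        print(support(pat, cooking), support(pat, sleeping))  # 1.0 0.0 -> distinguishing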

  18. Innovations in minimally invasive facial treatments.

    Science.gov (United States)

    Jurado, José Roberto Parisi; Lima, Leila Freire Rego; Olivetti, Isabela Peixoto; Arroyo, Helena Hotz; de Oliveira, Ingrid Helena Lopes

    2013-06-01

    Patients are seeking healthier lives, and at the same time their concern about having a beautiful face and maintaining a youthful appearance over time has increased. Traditionally, surgeries based on tissue resection and resurfacing were the focus in facial rejuvenation. Over the last decade, minimally invasive procedures have expanded exponentially because of the variety of cosmetic products available on the market and because patients are looking for a better appearance with nonincision methods. The understanding of the aging process, facial anatomy, and ideal proportions is extremely important for successful rejuvenation procedures. Also, neuromodulators, chemical peels, filler properties, correct indications, and effectiveness must be well known by the injector for favorable results. Therefore, knowledge of all facial cosmetic options and an adequate facial analysis are essential for a better performance. In this article, the authors review some different product options and show cases of minimally invasive cosmetic procedures for the face currently used.

  19. Probabilistic Properties of Rectilinear Steiner Minimal Trees

    Directory of Open Access Journals (Sweden)

    V. N. Salnikov

    2015-01-01

    This work concerns the properties of Steiner minimal trees for the Manhattan plane in the context of introducing a probability measure. The problem is important because exact algorithms for the Steiner problem are computationally expensive (NP-hard), and the solution, especially for a large number of points to be connected, has a diversity of practical applications. That is why this work considers the possibility of ranking the possible topologies of minimal trees with respect to the probability of their usage. For this, the known facts about the structural properties of minimal trees for selected metrics have been analyzed for their usefulness for the problem in question. For a small number of boundary (fixed) vertices, the paper offers a way to introduce a probability measure as a corollary of a theorem, proved here, about structural properties of minimal trees. This work continues previous similar work concerning the problem of searching for minimal fillings, and it opens the door to the more general (and more complicated) task. The stated method demonstrates the possibility of reaching the final result analytically, which gives a chance of its applicability to the case of a larger number of boundary vertices (probably with the use of computer engineering). The introduced definition of an essential Steiner point allowed a considerable restriction of the ambiguity of the initial problem's solution and, at the same time, a comparison of this approach with more classical works in the field. The paper also lists the main barriers preventing the use of classical approaches for the task of introducing a probability measure. In prospect, the application areas of the described method are expected to widen both in terms of system enlargement (the number of boundary vertices) and in terms of other metric spaces (the Euclidean case is of especial interest). The main interest is to find the classes of topologies with significantly
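
    For intuition about the structures being ranked, the smallest nontrivial case is explicit: for three terminals in the Manhattan (L1) plane, the Steiner minimal tree is a star through a single Steiner point, the coordinate-wise median, and its length equals the half-perimeter of the bounding box. A short check:

        def steiner_point_3(points):
            # Coordinate-wise median of three terminals in the L1 plane
            xs = sorted(p[0] for p in points)
            ys = sorted(p[1] for p in points)
            return (xs[1], ys[1])

        def l1(a, b):
            return abs(a[0] - b[0]) + abs(a[1] - b[1])

        pts = [(0, 0), (4, 1), (2, 5)]
        s = steiner_point_3(pts)
        print(s, sum(l1(p, s) for p in pts))  # (2, 1) 9 == (4-0) + (5-0)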

  20. Correction of Sunken Upper-Eyelid Deformity in Young Asians by Minimally-Invasive Double-Eyelid Procedure and Simultaneous Orbital Fat Pad Repositioning: A One-Year Follow-up Study of 250 Cases.

    Science.gov (United States)

    Chen, Chen-Chia; Chen, Sheng-Ni; Huang, Chien-Lin

    2015-05-01

    Double-eyelid procedure to construct a supratarsal fold is the most common aesthetic surgery in young Asian adults. More complex surgical procedures, such as fat grafting or filler injection, are often indicated during traditional, long-incision, double-eyelid procedures to achieve better aesthetic results for patients with hollowness of the upper eyelids. The authors sought to determine the efficacy of minimally-invasive double-eyelid procedures with concurrent repositioning of the orbital fat pads to correct sunken upper eyelids in young Asian adults. The study included 250 patients treated between June 2008 and July 2013. Preoperatively, all patients complained of upper-eyelid hollowness and had positive findings on a lower-eyelid compression test. All patients underwent a minimally-invasive double-eyelid procedure plus repositioning of orbital fat. After the minimum follow-up period of 1 year, the overall patient satisfaction rate was 76%. The relapse rate was 10% within the first year, and the complication rate was 8%. This minimally-invasive combination procedure may be an option for young Asian adults who have single upper eyelids and sunken eyes. The surgery resulted in a natural double eyelid and a more youthful orbital appearance in the majority of patients in this study. Proper patient selection and evaluation, including lower-eyelid compression testing, are essential to achieve long-term correction.