Minimization of Deterministic Fuzzy Tree Automata
S. Moghari
2014-03-01
Until now, methods for minimizing deterministic fuzzy finite tree automata (DFFTA) and weighted tree automata have been established by researchers. Those methods are language preserving, but the behavior of the original automaton and the minimized one may differ. This paper considers both language preservation and behavior preservation in the minimization process. We derive Myhill-Nerode-type theorems corresponding to each proposed method and introduce PTIME algorithms for behavioral and linguistic minimization. The proposed minimization algorithms are based on two main steps. The first step is finding dependencies between the equivalence of states, according to the set of transition rules of the DFFTA, and building a merging dependency graph (MDG). The second step is refinement of the MDG and construction of the minimization equivalency set (MES). Additionally, behavior-preserving minimization of a DFFTA requires a pre-processing step, called normalization, that modifies the fuzzy membership grades of rules and final states.
Nonlinear Markov processes: Deterministic case
Frank, T.D. [Center for the Ecological Study of Perception and Action, Department of Psychology, University of Connecticut, 406 Babbidge Road, Storrs, CT 06269 (United States)], E-mail: till.frank@uconn.edu
2008-10-06
Deterministic Markov processes that exhibit nonlinear transition mechanisms for probability densities are studied. In this context, the following issues are addressed: Markov property, conditional probability densities, propagation of probability densities, multistability in terms of multiple stationary distributions, stability analysis of stationary distributions, and basin of attraction of stationary distribution.
Sochi, Taha
2016-09-01
Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton and Global) are investigated in conjunction with the energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all these algorithms for all these types of fluid agree very well with the analytically derived solutions obtained from the traditional methods based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of flow dynamics systems. The investigation also enriches the methods of computational fluid dynamics for solving flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.
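The core idea, that the physically realized flow field is the one minimizing total energy dissipation subject to mass conservation, can be illustrated on a toy case that is not taken from the paper: a hypothetical two-branch parallel network of laminar Newtonian ducts with branch resistances R1 and R2 (so each branch obeys dp = R*q). Minimizing dissipation with a simple derivative-free search recovers the classical equal-pressure-drop solution obtained from conservation principles.

```python
def dissipation(q1, Q, R1, R2):
    # Viscous power dissipated in two parallel laminar Newtonian branches:
    # each branch obeys dp = R * q, so its dissipated power is dp * q = R * q**2.
    q2 = Q - q1  # mass conservation: the flows must sum to the total Q
    return R1 * q1**2 + R2 * q2**2

def minimize_scalar(f, lo, hi, iters=100):
    # Simple golden-section search: deterministic and derivative-free.
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    for _ in range(iters):
        if f(c) < f(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

Q, R1, R2 = 1.0, 2.0, 3.0   # hypothetical total flow and branch resistances
q1 = minimize_scalar(lambda q: dissipation(q, Q, R1, R2), 0.0, Q)
q2 = Q - q1
# At the minimum the branch pressure drops R1*q1 and R2*q2 agree, which is
# exactly the flow split predicted by the traditional conservation analysis.
print(q1, q2, R1 * q1, R2 * q2)
```

Here the minimizer lands at q1 = R2*Q/(R1+R2), so energy minimization and pressure-drop balancing give the same answer, which is the point the abstract generalizes to non-Newtonian fluids and larger networks.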
Stochastic and deterministic multiscale models for systems biology: an auxin-transport case study
King John R
2010-03-01
Background: Stochastic and asymptotic methods are powerful tools in developing multiscale systems biology models; however, little has been done in this context to compare the efficacy of these methods. The majority of current systems biology modelling research, including that of auxin transport, uses numerical simulations to study the behaviour of large systems of deterministic ordinary differential equations, with little consideration of alternative modelling frameworks. Results: In this case study, we solve an auxin-transport model using analytical methods, deterministic numerical simulations and stochastic numerical simulations. Although the three approaches in general predict the same behaviour, they provide different information that we use to gain distinct insights into the modelled biological system. We show in particular that the analytical approach readily provides straightforward mathematical expressions for the concentrations and transport speeds, while the stochastic simulations naturally provide information on the variability of the system. Conclusions: Our study provides a constructive comparison which highlights the advantages and disadvantages of each of the considered modelling approaches. This will prove helpful to researchers when weighing up which modelling approach to select. In addition, the paper goes some way to bridging the gap between these approaches, which in the future we hope will lead to integrative hybrid models.
A Case for Dynamic Reverse-code Generation to Debug Non-deterministic Programs
Jooyong Yi
2013-09-01
Backtracking (i.e., reverse execution) helps the user of a debugger to naturally think backwards along the execution path of a program, and thinking backwards makes it easy to locate the origin of a bug. So far backtracking has been implemented mostly by state saving or by checkpointing. These implementations, however, inherently do not scale. Meanwhile, a more recent backtracking method based on reverse-code generation seems promising because executing reverse code can restore the previous states of a program without state saving. In the literature, two methods that generate reverse code can be found: (a) static reverse-code generation, which pre-generates reverse code through static analysis before starting a debugging session, and (b) dynamic reverse-code generation, which generates reverse code by applying dynamic analysis on the fly during a debugging session. In particular, we espoused the latter in our previous work to accommodate the non-determinism of a program caused by, e.g., multi-threading. To demonstrate the usefulness of our dynamic reverse-code generation, this article presents a case study of various backtracking methods, including ours. We compare the memory usage of various backtracking methods in a simple but nontrivial example, a bounded-buffer program. In the case of non-deterministic programs such as this bounded-buffer program, our dynamic reverse-code generation outperforms the existing backtracking methods in terms of memory efficiency.
McArt, J A A; Nydam, D V; Overton, M W
2015-03-01
The purpose of this study was to develop a deterministic economic model to estimate the costs associated with (1) the component cost per case of hyperketonemia (HYK) and (2) the total cost per case of HYK when accounting for costs related to HYK-attributed diseases. Data from the current literature were used to model the incidence and risks of HYK (defined as a blood β-hydroxybutyrate concentration ≥ 1.2 mmol/L), displaced abomasa (DA), metritis, disease associations, milk production, culling, and reproductive outcomes. The component cost of HYK was estimated based on 1,000 calvings per year; the incidence of HYK in primiparous and multiparous animals; the percent of animals receiving clinical treatment; the direct costs of diagnostics, therapeutics, labor, and death loss; and the indirect costs of future milk production losses, future culling losses, and reproduction losses. Costs attributable to DA and metritis were estimated based on the incidence of each disease in the first 30 DIM; the number of cases of each disease attributable to HYK; the direct costs of diagnostics, therapeutics, discarded milk during treatment and the withdrawal period, veterinary service (DA only), and death loss; and the indirect costs of future milk production losses, future culling losses, and reproduction losses. The component cost per case of HYK was estimated at $134 and $111 for primiparous and multiparous animals, respectively; the average component cost per case of HYK was estimated to be $117. Thirty-four percent of the component cost of HYK was due to future reproductive losses, 26% to death loss, 26% to future milk production losses, 8% to future culling losses, 3% to therapeutics, 2% to labor, and 1% to diagnostics. The total cost per case of HYK was estimated at $375 and $256 for primiparous and multiparous animals, respectively; the average total cost per case of HYK was $289. Forty-one percent of the total cost of HYK was due to the component cost of HYK, 33% to costs
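The structure of such a deterministic cost model can be sketched as a plain sum of direct and indirect components, averaged over parity groups. The component values below are approximate figures reconstructed from the percentage breakdown reported above, and the parity mix of 0.26 is a hypothetical number chosen for illustration so that the weighted average reproduces the reported $117; neither is taken from the paper's parameter tables.

```python
# Hypothetical component values (approximately consistent with the reported
# percentage breakdown of the $117 average component cost, NOT the paper's
# actual inputs), in dollars per case of hyperketonemia.
direct = {"diagnostics": 1.5, "therapeutics": 4.0, "labor": 2.5, "death_loss": 30.0}
indirect = {"milk_loss": 30.0, "culling_loss": 9.0, "repro_loss": 40.0}

def component_cost(direct, indirect):
    # Deterministic model: the per-case cost is just the sum of all components.
    return sum(direct.values()) + sum(indirect.values())

# Herd-level average as a parity-weighted mean of the per-group estimates.
cost_primi, cost_multi = 134.0, 111.0   # per-case estimates quoted in the study
share_primi = 0.26                      # hypothetical fraction of primiparous cases
avg = share_primi * cost_primi + (1 - share_primi) * cost_multi
print(component_cost(direct, indirect), round(avg, 2))
```

The point of the sketch is the model's shape: every input is a point estimate, so the output is a single deterministic number rather than a distribution.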
Krepki, R.; Obermayer, K. [Technische Univ. Berlin (DE). Forschungsgruppe Computergestuetzte Informationssysteme (CIS); Pu, Y.; Meng, H. [State Univ. of New York, Buffalo, NY (United States). Dept. of Mechanical and Aerospace Engineering
2000-12-01
Recently we have presented a new particle tracking algorithm for the interrogation of 2D-PTV data [Kuzmanowski et al. (1998); Stellmacher and Obermayer (2000) Exp Fluids 28: 506-518], which estimates particle correspondences and local flow-field parameters simultaneously. The new method is based on an algorithm recently proposed by Gold et al. [Pattern Recognition (1998) 31:1019-1031], and has two advantages: (1) it allows not only local velocity but also other local components of the flow field, such as rotation and shear, to be determined; and (2) it allows flow-field parameters to be reliably determined in regions of high velocity gradients (e.g., vortices or shear flow). In this contribution we extend this algorithm to the interrogation of 3D holographic particle image velocimetry (PIV) data. Benchmarks with cross-correlation and nearest-neighbor methods show that the algorithm retains the superior performance which we have observed for the 2D case. Because PTV methods scale with the square of the number of particles rather than exponentially with the dimension of the interrogation cell, the new method is much faster than cross-correlation-based methods without sacrificing accuracy, and it is well adapted to the low particle seeding densities of holographic PIV methods. (orig.)
Obendorf, Hartmut
2009-01-01
The notion of Minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction and thus forms a standpoint that allows definition of aspects of simplicity. This book traces the development of minimalism, defines the four types of minimalism in interaction design, and looks at how to apply it.
Minimally Invasive Surgical Treatment of Acute Epidural Hematoma: Case Series
2016-01-01
Background and Objective. Although minimally invasive surgical treatment of acute epidural hematoma attracts increasing attention, no generalized indications for the surgery have been adopted. This study aimed to evaluate the effects of minimally invasive surgery in acute epidural hematoma with various hematoma volumes. Methods. Minimally invasive puncture and aspiration surgery were performed in 59 cases of acute epidural hematoma with various hematoma volumes (13–145 mL); postoperative follow-up was 3 months. Clinical data, including surgical trauma, surgery time, complications, and outcome of hematoma drainage, recovery, and Barthel index scores, were assessed, as well as treatment outcome. Results. Surgical trauma was minimal and surgery time was short (10–20 minutes); no anesthesia accidents or surgical complications occurred. Two patients died. Drainage was completed within 7 days in the remaining 57 cases. Barthel index scores of ADL were ≤40 (n = 1), 41–60 (n = 1), and >60 (n = 55); scores of 100 were obtained in 48 cases, with no dysfunctions. Conclusion. Satisfactory results can be achieved with minimally invasive surgery in treating acute epidural hematoma with hematoma volumes ranging from 13 to 145 mL. For patients with hematoma volume >50 mL and even cerebral herniation, flexible application of minimally invasive surgery would help improve treatment efficacy. PMID:27144170
Probabilistic and Deterministic Seismic Hazard Assessment: A Case Study in Babol
H.R. Tavakoli
2013-01-01
Earthquake ground motion parameters are important in the seismic design of structures and in the vulnerability and risk assessment of these structures against earthquake damage. The damage caused by earthquakes, and its engineering, seismological, social and economic consequences, are assessed. This paper presents a seismic hazard analysis of Babol via deterministic and probabilistic methods. Deterministic and probabilistic methods seem to be practical tools for mutual control of results and for overcoming the weaknesses of either approach alone. In the deterministic approach, the strong-motion parameters are estimated for the maximum credible earthquake, assumed to occur at the closest possible distance from the site of interest, without considering the likelihood of its occurrence during a specified exposure period. On the other hand, the probabilistic approach integrates the effects of all earthquakes expected to occur at different locations during a specified life period, with the associated uncertainties and randomness taken into account. The calculated bedrock horizontal and vertical peak ground acceleration (PGA) for different return periods of the study area are presented.
The concept of fractal geometry has proved to be a useful and fruitful tool for the description of complex systems in various fields of science and technology. Among the diverse types of fractals there are the deterministic ones that enable an easy explanation of basic features of fractal behaviour. (author)
Barbouchi, Meriem; Chokmani, Karem; Ben Aissa, Nadhira; Lhissou, Rachid; El Harti, Abderrazak; Abdelfattah, Riadh
2013-04-01
Soil salinization hazard in semi-arid regions such as Central Morocco is increasingly affecting arable lands due to the combined effects of anthropogenic activities (development of irrigation) and climate change (multiplying drought episodes). In a rational strategy against this hazard, salinity mapping is a key step to ensure effective spatiotemporal monitoring. The objective of this study is to test the effectiveness of the geostatistical approach in mapping soil salinity compared to more straightforward deterministic interpolation methods. Three soil salinity sampling campaigns (27 September, 24 October and 19 November 2011) were conducted over the irrigated area of the Tadla plain, situated between the High and Middle Atlas in Central Morocco. Each campaign consisted of 38 surface soil samples (upper 5 cm). From each sample the electrical conductivity (EC) was determined in saturated paste extract and used subsequently as a proxy of soil salinity. The potential of deterministic interpolation methods (IDW) and geostatistical techniques (Ordinary Kriging) in mapping surface soil salinity was evaluated in a GIS environment through cross-validation. Field measurements showed that soil salinity was generally low except during the second campaign, where a significant increase in EC values was recorded. Interpolation results showed a better performance with the geostatistical approach compared to the deterministic one. Indeed, for all campaigns, cross-validation yielded lower RMSE and bias for Kriging than for IDW. However, the performance of the two methods depended on the range and structure of the spatial variability of salinity. Indeed, Kriging showed better accuracy for the second campaign in comparison with the two others. This could be explained by the wider range of soil salinity values during this campaign, which resulted in a greater range of spatial dependence and better modeling of the spatial variability of salinity, which was
Coefficient of reversibility and two particular cases of deterministic many body systems
We discuss the importance of a new measure of chaos in the study of nonlinear dynamic systems, the coefficient of reversibility. This is defined as the probability of returning to the same point of phase space. It is very interesting to compare this coefficient with other measures like the fractal dimension or the Lyapunov exponent. We have also studied two very interesting many-body systems, both having any number of particles but a deterministic evolution. One system is composed of n particles initially at rest, having the same mass and interacting through harmonic bi-particle forces; the other is composed of two types of particles (with mass m1 and mass m2) initially at rest and also interacting through harmonic bi-particle forces.
Sukanti Rout
2015-04-01
In this study an updated deterministic seismic hazard contour map of Bhubaneswar (20°12'0"N to 20°23'0"N latitude and 85°44'0"E to 85°54'0"E longitude), one of the major cities of India with tourist importance, has been prepared in the form of spectral acceleration values. For assessing the seismic hazard, the study area has been divided into small grids of size 30˝×30˝ (approximately 1.0 km × 1.0 km), and the hazard parameters in terms of spectral acceleration at bedrock level (PGA) are calculated at the center of each of these grid cells by considering the regional seismo-tectonic activity within a 400 km radius around the city center. The maximum credible earthquake, with a moment magnitude of 7.2, has been used for calculation of the hazard parameter, resulting in a PGA value of 0.017g towards the northeast side of the city and a corresponding maximum spectral acceleration of 0.0501g for a predominant period of 0.05 s at bedrock level.
Using CSP To Improve Deterministic 3-SAT
Kutzkov, Konstantin
2010-01-01
We show how one can use certain deterministic algorithms for higher-value constraint satisfaction problems (CSPs) to speed up deterministic local search for 3-SAT. This way, we improve the deterministic worst-case running time for 3-SAT to O(1.439^n).
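Deterministic 3-SAT algorithms of this family build on local search within a Hamming ball: starting from some assignment, one repeatedly picks a falsified clause and branches on flipping each of its (at most three) literals. The sketch below illustrates that generic ball-search primitive (in the spirit of Dantsin et al.), not the CSP-based covering-code construction that achieves the O(1.439^n) bound.

```python
def satisfies(clauses, a):
    # A clause is a tuple of nonzero ints: +v means variable v, -v its negation;
    # assignment `a` is a list of booleans indexed by variable - 1.
    return all(any((lit > 0) == a[abs(lit) - 1] for lit in c) for c in clauses)

def searchball(clauses, a, r):
    # Deterministic local search: look for a satisfying assignment within
    # Hamming distance r of `a`. If a clause is falsified, any satisfying
    # assignment within the ball must flip one of its literals, so branching
    # on just those literals is complete for radius r.
    if satisfies(clauses, a):
        return a
    if r == 0:
        return None
    falsified = next(c for c in clauses
                     if not any((lit > 0) == a[abs(lit) - 1] for lit in c))
    for lit in falsified:
        b = list(a)
        b[abs(lit) - 1] = not b[abs(lit) - 1]   # flip this clause variable
        found = searchball(clauses, b, r - 1)
        if found is not None:
            return found
    return None

# Hypothetical formula: (x1 v x2 v x3) & (~x1 v x3) & (~x2 v x3)
clauses = [(1, 2, 3), (-1, 3), (-2, 3)]
model = searchball(clauses, [False, False, False], 3)
print(model)
```

The derandomized algorithms run this search from the codewords of a covering code so that every possible satisfying assignment lies within radius r of some starting point.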
Asinari, P.
2011-03-01
The Boltzmann equation is one of the most powerful paradigms for explaining transport phenomena in fluids. Since the early fifties, it has received a lot of attention due to aerodynamic requirements for high altitude vehicles, vacuum technology requirements and, nowadays, micro-electro-mechanical systems (MEMS). Because of the intrinsic mathematical complexity of the problem, Boltzmann himself started his work by considering first the case when the distribution function does not depend on space (the homogeneous case), but only on time and the magnitude of the molecular velocity (isotropic collision integral). The interest in the homogeneous isotropic Boltzmann equation goes beyond simple dilute gases. In so-called econophysics, a Boltzmann-type model is sometimes introduced for studying the distribution of wealth in a simple market. Another recent application of the homogeneous isotropic Boltzmann equation is opinion formation modeling in quantitative sociology, also called socio-dynamics or sociophysics. The present work [1] aims to improve the deterministic method for solving the homogeneous isotropic Boltzmann equation proposed by Aristov [2] through two ideas: (a) the homogeneous isotropic problem is reformulated first in terms of particle kinetic energy (this allows one to ensure exact particle number and energy conservation during microscopic collisions), and (b) a DVM-like correction (where DVM stands for Discrete Velocity Model) is adopted for improving the relaxation rates (this allows one to satisfy the conservation laws exactly at the macroscopic level, which is particularly important for describing the late dynamics of the relaxation towards equilibrium).
The title subject is easily explained. The deterministic effect was defined in the ICRP 1990 recommendation. The effect arises from tissue injury caused by death of stem cells induced by acute high-dose radiation, and leads to sterility in the germ line and organ disorders in somatic cells. Clinically, the effect is unobservable at doses lower than a threshold, where the number of dead stem cells is small. The threshold is practically defined as the dose at which the clinical symptom is observable in 1% of exposed humans (ICRP 2008). Restricting the exposed dose to less than the threshold is important from the standpoint of radiation protection. For practical risk assessment, the following are defined: a total low dose of 200 mSv regardless of the dose rate, a rate of 0.1 mSv/min regardless of the total, and a dose and dose-rate effectiveness factor (DDREF) of 3 (UNSCEAR 1993). Dividing stem cells are sensitive to radiation, and the threshold varies depending on the population of those cells in the organ: e.g., the acute threshold doses of the testicle are 0.15 and 3.5-6.0 Gy for temporary and complete infertility, respectively; ovary, 2.5-6.0 Gy for complete infertility; lens, 5.0 Gy for cataract; and bone marrow, 0.5 Gy for hematopoietic reduction. Fetal exposure at organogenesis (3rd-8th week of gestation) results in malformation with a threshold of 0.1-0.2 Gy, and exposure later than the 9th week in lowered IQ and mental retardation of offspring with 0.1 Gy. Death of stem cells is not always specific to radiation, as it also occurs through anoxia and virus infection. Skin is sensitive to radiation because its stem cells exist in the epidermal basal layer, and thereby tends to be injured even by IVR (interventional radiology). Exposed cells/tissues undergo the stochastic effect even when the deterministic effect is not evidently apparent, which is conceivably related to secondary cancer formation derived from radiotherapy. (T.T.)
Kimmeier, Francesco; Bouzelboudjen, Mahmoud; Ababou, Rachid; Ribeiro, Luis
2014-01-01
In the framework of waste storage in geological formations at shallow or greater depths and of accidental pollution, the numerical simulation of groundwater flow and contaminant transport represents an important instrument to predict and quantify the pollution as a function of time and space. The numerical simulation problem, and the required hydrogeologic data, are often approached in a deterministic fashion. However, deterministic models do not allow one to evaluate the uncertainty of results. Fur...
A NEW CASE FOR IMAGE COMPRESSION USING LOGIC FUNCTION MINIMIZATION
Behrouz Zolfaghari
2011-05-01
Sum of minterms is a canonical form for representing logic functions. There are classical methods such as the Karnaugh map or Quine–McCluskey tabulation for minimizing a sum of products. This minimization reduces the minterms to smaller products called implicants. If minterms are represented by bit strings, the bit strings shrink through the minimization process. This can be considered a kind of data compression, provided that there is a way to retrieve the original bit strings from the compressed strings. This paper proposes, implements and evaluates an image compression method called YALMIC (Yet Another Logic Minimization Based Image Compression), which depends on logic function minimization. This method considers adjacent pixels of the image as disjoint minterms constructing a logic function and compresses 24-bit color images by minimizing the function. We compare the compression ratio of the proposed method to those of existing methods and show that YALMIC improves the compression ratio by about 25% on average.
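The minimization step such a scheme relies on is the classical merging of implicants that differ in exactly one bit, as in a single pass of Quine–McCluskey: two bit strings collapse into one shorter description with a don't-care position. The sketch below illustrates only that generic merging step, not the YALMIC algorithm itself.

```python
def combine(m1, m2):
    # Merge two implicants (strings over '0', '1', '-') that differ in exactly
    # one fully specified position, replacing that position with don't-care '-'.
    # Returns None when the pair cannot be merged.
    diff = [i for i in range(len(m1)) if m1[i] != m2[i]]
    if len(diff) != 1:
        return None
    i = diff[0]
    if '-' in (m1[i], m2[i]):
        return None
    return m1[:i] + '-' + m1[i + 1:]

# Four adjacent 3-variable minterms collapse to a single implicant '-0-':
a = combine('000', '001')   # merge on the last bit
b = combine('100', '101')   # merge on the last bit
c = combine(a, b)           # merge the two results on the first bit
print(a, b, c)
```

The compression intuition is visible here: four 3-bit minterms are replaced by one 3-character implicant, and the original strings are recoverable by expanding each '-' back into 0 and 1.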
Deterministic multidimensional nonuniform gap sampling
Worley, Bradley; Powers, Robert
2015-12-01
Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities.
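The idea of replacing random gap deviates with their average behavior can be sketched as follows. The quarter-sine weighting and the scaling constant below are illustrative assumptions, not the exact published schedule: each gap is set to the expected gap of a sine-weighted distribution instead of a random draw, so the schedule is fully deterministic.

```python
import math

def sine_gap_schedule(grid_size, n_samples):
    # Deterministic analogue of sine-weighted Poisson-gap sampling (a sketch):
    # gaps follow a quarter-sine ramp, so early grid points are sampled densely
    # and later ones sparsely. The scale `a` is chosen so the cumulative gaps
    # roughly span the Nyquist grid.
    a = (grid_size - n_samples) * math.pi / (2.0 * n_samples)
    points, acc = [], 0.0
    for i in range(n_samples):
        points.append(int(round(acc)))
        # expected gap at position i: at least 1, growing along the sine ramp
        acc += 1.0 + a * math.sin((math.pi / 2) * (i + 0.5) / n_samples)
    return points

pts = sine_gap_schedule(128, 32)   # 32 samples on a 128-point grid
print(len(pts), pts[0], pts[-1])
```

Because the increments are all at least one grid step, the sampled points are strictly increasing and the same schedule is reproduced on every run, which is the "completely deterministic behavior" the abstract highlights.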
Deterministic Graphical Games Revisited
Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro; Sørensen, Troels Bjerre
2008-01-01
We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical games.
Minimally Invasive Approach to Eliminate Pyogenic Granuloma: A Case Report
B. Chandrashekar
2012-01-01
Pyogenic granuloma is one of the inflammatory hyperplasias seen in the oral cavity. The term is a misnomer because the lesion is not related to infection; it arises in response to various stimuli such as low-grade local irritation, traumatic injury, or hormonal factors. It is most commonly seen in females in their second decade of life due to vascular effects of hormones. Although excisional surgery is the treatment of choice, this paper presents the safest and most minimally invasive procedure for the regression of pyogenic granuloma.
Modeling of deterministic chaotic systems
The success of deterministic modeling of a physical system relies on whether the solution of the model would approximate the dynamics of the actual system. When the system is chaotic, situations can arise where periodic orbits embedded in the chaotic set have distinct number of unstable directions and, as a consequence, no model of the system produces reasonably long trajectories that are realized by nature. We argue and present physical examples indicating that, in such a case, though the model is deterministic and low dimensional, statistical quantities can still be reliably computed. copyright 1999 The American Physical Society
Minimally invasive approaches in pancreatic pseudocyst: a Case report
Rohollah Y
2009-09-01
Background: Given the importance of the postoperative period, duration of admission, postoperative pain, and an acceptable rate of complications, minimally invasive endoscopic approaches to pancreatic pseudocyst management have become more popular, but the best choice of procedure and patient selection are currently not completely established. During the past decade endoscopic procedures have become the first choice in most authors' therapeutic plans; however, open surgery remains the gold standard in pancreatic pseudocyst treatment. Methods: We present a patient with a pancreatic pseudocyst unresponsive to conservative management who underwent endoscopic intervention before the 6th week, and review the current literature to depict a schema for management navigation. Results: A 16-year-old male patient presented with two episodes of acute pancreatitis with abdominal pain, nausea and vomiting. Hyperamylasemia, pancreatic ascites and a pseudocyst were found in our preliminary investigation. Despite optimal conservative management, including NPO (nil per os) and total parenteral nutrition, after four weeks clinical and para-clinical findings deteriorated. Therefore, ERCP and trans-papillary cannulation with placement of a 7Fr stent was
Anonymous
2007-01-01
A 43-year-old Chinese patient with a history of psoriasis developed fulminant ulcerative colitis after immunosuppressive therapy for steroid-resistant minimal change disease was stopped. Minimal change disease in association with inflammatory bowel disease is a rare condition. We here report a case showing an association between ulcerative colitis, minimal change disease, and psoriasis. The possible pathological link between the 3 diseases is discussed.
Inferring deterministic causal relations
Daniusis, Povilas; Mooij, Joris; Zscheischler, Jakob; Steudel, Bastian; Zhang, Kun; Schoelkopf, Bernhard
2012-01-01
We consider two variables that are related to each other by an invertible function. While it has previously been shown that the dependence structure of the noise can provide hints to determine which of the two variables is the cause, we presently show that even in the deterministic (noise-free) case, there are asymmetries that can be exploited for causal inference. Our method is based on the idea that if the function and the probability density of the cause are chosen independently, then the distribution of the effect will, in a certain sense, depend on the function. We provide a theoretical analysis of this method, showing that it also works in the low noise regime, and link it to information geometry. We report strong empirical results on various real-world data sets from different domains.
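The method's slope-based estimator can be sketched as follows: for deterministic data y = f(x), compare the average log-slope in each direction and prefer the direction with the smaller value as the causal one. The cubic example function and the uniform grid below are hypothetical choices for illustration, not data from the paper.

```python
import math

def igci_slope(x, y):
    # Slope-based score (a sketch of the estimator's idea): the average
    # log-slope of the map x -> y over consecutive points sorted by x.
    pairs = sorted(zip(x, y))
    s, n = 0.0, 0
    for (x1, y1), (x2, y2) in zip(pairs, pairs[1:]):
        if x2 != x1 and y2 != y1:
            s += math.log(abs(y2 - y1) / (x2 - x1))
            n += 1
    return s / n

# Deterministic, noise-free example: y = x**3 on a uniform grid in (0, 1).
xs = [(i + 1) / 200.0 for i in range(199)]
ys = [v ** 3 for v in xs]
c_xy = igci_slope(xs, ys)   # score for the direction X -> Y
c_yx = igci_slope(ys, xs)   # score for the reverse direction Y -> X
# The smaller score indicates the inferred causal direction: here X -> Y,
# because the cause's (uniform) density was chosen independently of f.
print(c_xy < c_yx)
```

For an invertible f the two scores are negatives of each other, so the asymmetry exploited here comes entirely from the interplay between the cause's distribution and the function, as the abstract describes.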
Optimal Deterministic Auctions with Correlated Priors
Papadimitriou, Christos; Pierrakos, George
2010-01-01
We revisit the problem of designing the profit-maximizing single-item auction, solved by Myerson in his seminal paper for the case in which bidder valuations are independently distributed. We focus on general joint distributions, seeking the optimal deterministic incentive compatible auction. We give a geometric characterization of the optimal auction, resulting in a duality theorem and an efficient algorithm for finding the optimal deterministic auction in the two-bidder case and an NP-compl...
Fanning, D M
2009-02-03
INTRODUCTION: We report the first described case of minimal deviation adenocarcinoma of the uterine cervix in the setting of a female renal cadaveric transplant recipient. MATERIALS AND METHODS: A retrospective review of this clinical case was performed. CONCLUSION: This rare cancer represents only about 1% of all cervical adenocarcinomas.
Sandon, Luiz Henrique Dias; Choi, Gun; Park, EunSoo; Lee, Hyung-Chang
2016-01-01
Background: Thoracic disc surgeries make up only a small number of all spine surgeries performed, but they can have a considerable number of postoperative complications. Numerous approaches have been developed and studied in an attempt to reduce the morbidity associated with the procedure; however, we still encounter cases that develop serious and unexpected outcomes. Case presentation: This case report presents a patient with abducens nerve palsy after minimally invasive surgery for thoracic d...
Deterministic uncertainty analysis
This paper presents a deterministic uncertainty analysis (DUA) method for calculating uncertainties that has the potential to significantly reduce the number of computer runs compared to conventional statistical analysis. The method is based upon the availability of derivative and sensitivity data such as that calculated using the well known direct or adjoint sensitivity analysis techniques. Formation of response surfaces using derivative data and the propagation of input probability distributions are discussed relative to their role in the DUA method. A sample problem that models the flow of water through a borehole is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. Propagation of uncertainties by the DUA method is compared for ten cases in which the number of reference model runs was varied from one to ten. The DUA method gives a more accurate representation of the true cumulative distribution of the flow rate based upon as few as two model executions, compared to fifty model executions using a statistical approach. 16 refs., 4 figs., 5 tabs
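The response-surface idea can be sketched with a toy model: one reference run plus derivatives (here analytic; in practice supplied by direct or adjoint sensitivity analysis) replaces thousands of model evaluations when propagating input distributions. The simplified flow model and all numbers below are illustrative, not the paper's borehole problem:

```python
import numpy as np

# Toy "model": flow rate through a borehole-like system, q = k * dh / L.
def model(k, dh, L):
    return k * dh / L

# One reference run plus analytic derivatives defines a first-order
# (linear) response surface around the nominal point.
k0, dh0, L0 = 1.0e-5, 100.0, 50.0
q0 = model(k0, dh0, L0)
grad = np.array([dh0 / L0, k0 / L0, -k0 * dh0 / L0**2])  # dq/dk, dq/ddh, dq/dL

rng = np.random.default_rng(1)
n = 100_000
samples = np.column_stack([
    rng.normal(k0, 0.1e-5, n),   # uncertain conductivity
    rng.normal(dh0, 5.0, n),     # uncertain head difference
    rng.normal(L0, 2.0, n),      # uncertain length
])

# Propagate the input distribution through the response surface:
# one model run + derivatives stands in for 10^5 model evaluations.
q_surface = q0 + (samples - np.array([k0, dh0, L0])) @ grad
q_direct = model(samples[:, 0], samples[:, 1], samples[:, 2])

print(np.mean(q_surface), np.mean(q_direct))
```

For this mildly nonlinear model the surface-based distribution closely matches brute-force sampling of the model itself, which is the effect the DUA method exploits.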
Dark matter as a Bose-Einstein Condensate: the relativistic non-minimally coupled case
Bettoni, Dario; Colombo, Mattia; Liberati, Stefano, E-mail: bettoni@sissa.it, E-mail: mattia.colombo@studenti.unitn.it, E-mail: liberati@sissa.it [SISSA, Via Bonomea 265, Trieste, 34136 (Italy)
2014-02-01
Bose-Einstein Condensates have been recently proposed as dark matter candidates. In order to characterize the phenomenology associated to such models, we extend previous investigations by studying the general case of a relativistic BEC on a curved background including a non-minimal coupling to curvature. In particular, we discuss the possibility of a two phase cosmological evolution: a cold dark matter-like phase at the large scales/early times and a condensed phase inside dark matter halos. During the first phase dark matter is described by a minimally coupled weakly self-interacting scalar field, while in the second one dark matter condensates and, we shall argue, develops as a consequence the non-minimal coupling. Finally, we discuss how such non-minimal coupling could provide a new mechanism to address cold dark matter paradigm issues at galactic scales.
Minimal TestCase Generation for Object-Oriented Software with State Charts
Ranjita Kumari Swain; Prafulla Kumar Behera; Durga Prasad Mohapatra
2012-01-01
Today, statecharts are a de facto standard in industry for modeling system behavior. Test data generation is one of the key issues in software testing. This paper proposes a reduction approach to test data generation for state-based software testing. First, a state transition graph is derived from the statechart diagram, and all the required information is extracted from it. Then, test cases are generated. Lastly, the set of test cases is minimized by calcu...
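The flow described (derive a transition graph, generate test cases, then minimize the suite) can be sketched as follows; the statechart, the event names, and the greedy set-cover minimization are illustrative assumptions, not the paper's algorithm:

```python
from collections import deque

# Hypothetical transition graph derived from a statechart:
# (source state, event, target state). All names are illustrative.
transitions = [
    ("Idle", "insertCard", "Active"),
    ("Active", "enterPin", "Verified"),
    ("Active", "cancel", "Idle"),
    ("Verified", "withdraw", "Dispensing"),
    ("Dispensing", "takeCash", "Idle"),
]

def paths_from(start, graph, max_len=6):
    # Enumerate event sequences (candidate test cases) from the start
    # state by breadth-first search, tracking covered transitions.
    out, q = [], deque([(start, [], set())])
    while q:
        state, events, covered = q.popleft()
        if events:
            out.append((tuple(events), frozenset(covered)))
        if len(events) >= max_len:
            continue
        for (s, e, t) in graph:
            if s == state:
                q.append((t, events + [e], covered | {(s, e, t)}))
    return out

def minimize(cases, graph):
    # Greedy set cover: keep the fewest test cases that still
    # exercise every transition at least once.
    need, chosen = set(graph), []
    for events, covered in sorted(cases, key=lambda c: -len(c[1])):
        if covered & need:
            chosen.append(events)
            need -= covered
        if not need:
            break
    return chosen

suite = minimize(paths_from("Idle", transitions), transitions)
print(suite)
```

Here a single event sequence suffices to cover all five transitions, so the minimized suite contains one test case.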
The cointegrated vector autoregressive model with general deterministic terms
Johansen, Søren; Nielsen, Morten Ørregaard
In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t)= Z(t) + Y(t), where Z(t) belongs to a large class...
Deterministic dense coding with partially entangled states
The utilization of a d-level partially entangled state, shared by two parties wishing to communicate classical information without errors over a noiseless quantum channel, is discussed. We analytically construct deterministic dense coding schemes for certain classes of nonmaximally entangled states, and numerically obtain schemes in the general case. We study the dependency of the maximal alphabet size of such schemes on the partially entangled state shared by the two parties. Surprisingly, for d > 2 it is possible to have deterministic dense coding with less than one ebit. In this case the number of alphabet letters that can be communicated by a single particle is between d and 2d. In general, we numerically find that the maximal alphabet size is any integer in the range [d, d^2] with the possible exception of d^2 - 1. We also find that states with less entanglement can have a greater deterministic communication capacity than other, more entangled states
Deterministic hierarchical networks
Barrière, L.; Comellas, F.; Dalfó, C.; Fiol, M. A.
2016-06-01
It has been shown that many networks associated with complex systems are small-world (they have both a large local clustering coefficient and a small diameter) and also scale-free (the degrees are distributed according to a power law). Moreover, these networks are very often hierarchical, as they describe the modularity of the systems that are modeled. Most of the studies for complex networks are based on stochastic methods. However, a deterministic method, with an exact determination of the main relevant parameters of the networks, has proven useful. Indeed, this approach complements and enhances the probabilistic and simulation techniques and, therefore, it provides a better understanding of the modeled systems. In this paper we find the radius, diameter, clustering coefficient and degree distribution of a generic family of deterministic hierarchical small-world scale-free networks that has been considered for modeling real-life complex systems.
Dakwar, Elias; Rifkin, Stephen I; Volcan, Ildemaro J; Goodrich, J Allan; Uribe, Juan S
2011-06-01
Minimally invasive spine surgery is increasingly used to treat various spinal pathologies with the goal of minimizing destruction of the surrounding tissues. Rhabdomyolysis (RM) is a rare but known complication of spine surgery, and acute renal failure (ARF) is in turn a potential complication of severe RM. The authors report the first known case series of RM and ARF following minimally invasive lateral spine surgery. The authors retrospectively reviewed data in all consecutive patients who underwent a minimally invasive lateral transpsoas approach for interbody fusion with the subsequent development of RM and ARF at 2 institutions between 2006 and 2009. Demographic variables, patient home medications, preoperative laboratory values, and anesthetic used during the procedure were reviewed. All patient data were recorded including the operative procedure, patient positioning, postoperative hospital course, operative time, blood loss, creatine phosphokinase (CPK), creatinine, duration of hospital stay, and complications. Five of 315 consecutive patients were identified with RM and ARF after undergoing minimally invasive lateral transpsoas spine surgery. There were 4 men and 1 woman with a mean age of 66 years (range 60-71 years). The mean body mass index was 31 kg/m2 and ranged from 25 to 40 kg/m2. Nineteen interbody levels had been fused, with a range of 3-6 levels per patient. The mean operative time was 420 minutes and ranged from 315 to 600 minutes. The CPK ranged from 5000 to 56,000 U/L, with a mean of 25,861 U/L. Two of the 5 patients required temporary hemodialysis, while 3 required only aggressive fluid resuscitation. The mean duration of the hospital stay was 12 days, with a range of 3-25 days. Rhabdomyolysis is a rare but known potential complication of spine surgery. The authors describe the first case series associated with the minimally invasive lateral approach. Surgeons must be aware of the possibility of postoperative RM and ARF, particularly in
The human ECG: nonlinear deterministic versus stochastic aspects
Kantz, H; Kantz, Holger; Schreiber, Thomas
1998-01-01
We discuss aspects of randomness and of determinism in electrocardiographic signals. In particular, we take a critical look at attempts to apply methods of nonlinear time series analysis derived from the theory of deterministic dynamical systems. We will argue that deterministic chaos is not a likely explanation for the short-time variability of the inter-beat interval times, except for certain pathologies. Conversely, densely sampled full ECG recordings possess properties typical of deterministic signals. In the latter case, methods of deterministic nonlinear time series analysis can yield new insights.
Trefan, Gyorgy
1993-01-01
The goal of this thesis is to contribute to the ambitious program of founding statistical physics on chaos. We build a deterministic model of Brownian motion and provide a microscopic derivation of the Fokker-Planck equation. Since the Brownian motion of a particle is the result of the competing processes of diffusion and dissipation, we create a model where both diffusion and dissipation originate from the same deterministic mechanism: the deterministic interaction of that particle with its environment. We show that standard diffusion, which is the basis of the Fokker-Planck equation, rests on the Central Limit Theorem and, consequently, on the possibility of deriving it from a deterministic process with a quickly decaying correlation function. The sensitive dependence on initial conditions, one of the defining properties of chaos, ensures this rapid decay. We carefully address the problem of deriving dissipation from the interaction of a particle with a fully deterministic nonlinear bath, which we term the booster. We show that the solution of this problem essentially rests on the linear response of a booster to an external perturbation. This raises a long-standing problem concerned with Kubo's Linear Response Theory and the strong criticism against it by van Kampen. Kubo's theory is based on a perturbation treatment of the Liouville equation, which, in turn, is expected to be totally equivalent to a first-order perturbation treatment of single trajectories. Since the boosters are chaotic, and chaos is essential to generate diffusion, the single trajectories are highly unstable and do not respond linearly to weak external perturbation. We adopt chaotic maps as boosters of a Brownian particle, and therefore address the problem of the response of a chaotic booster to an external perturbation. We notice that a fully chaotic map is characterized by an invariant measure which is a continuous function of the control parameters of the map
Reisch, Robert; Koechlin, Nicolas O; Marcus, Hani J
2016-09-01
Despite their predominantly histologically benign nature, intradural tumors may become symptomatic by virtue of their space-occupying effect, causing severe neurological deficits. The gold standard treatment is total excision of the lesion; however, extended dorsal and dorsolateral approaches may cause late complications due to iatrogenic destruction of the posterolateral elements of the spine. In this article, we describe our concept of minimally invasive spinal tumor surgery. Two illustrative cases demonstrate the feasibility and safety of keyhole fenestrations exposing the spinal canal. PMID:25336048
Deterministic manufacturing of large sapphire windows
Lambropoulus, Teddy; Fess, Ed; DeFisher, Scott
2013-06-01
There is a need for precisely figured large sapphire windows with dimensions of up to 20 inches with thicknesses of 0.25 inches that will operate in the 1- to 5-micron wavelength range. In an effort to reduce manufacturing cost during grinding and polishing, OptiPro Systems is developing technologies that provide an optimized deterministic approach to making them. This development work is focusing on two main areas of research. The first is optimizing existing technologies, like deterministic microgrinding and UltraForm Finishing (UFF), for shaping operations and precision controlled sub-aperture polishing. The second area of research consists of a new large aperture deterministic polishing process currently being developed at OptiPro called UltraSmooth Finishing (USF). The USF process utilizes deterministic control with a large aperture polishing tool. This presentation will discuss the challenges associated with manufacturing large sapphire windows and present results on the work that is being performed to minimize manufacturing costs associated with them.
Baum, Rex L.; Godt, Jonathan W.; De Vita, P.; Napolitano, E.
2012-01-01
Rainfall-induced debris flows involving ash-fall pyroclastic deposits that cover steep mountain slopes surrounding the Somma-Vesuvius volcano are natural events and a source of risk for urban settlements located at footslopes in the area. This paper describes experimental methods and modelling results of shallow landslides that occurred on 5–6 May 1998 in selected areas of the Sarno Mountain Range. Stratigraphical surveys carried out in initiation areas show that ash-fall pyroclastic deposits are discontinuously distributed along slopes, with total thicknesses that vary from a maximum value on slopes inclined less than 30° to near zero thickness on slopes inclined greater than 50°. This distribution of cover thickness influences the stratigraphical setting and leads to downward thinning and the pinching out of pyroclastic horizons. Three engineering geological settings were identified in which most of the initial landslides that triggered the May 1998 debris flows occurred; these can be classified as (1) knickpoints, characterised by a downward progressive thinning of the pyroclastic mantle; (2) rocky scarps that abruptly interrupt the pyroclastic mantle; and (3) road cuts in the pyroclastic mantle that occur in a critical range of slope angle. Detailed topographic and stratigraphical surveys coupled with field and laboratory tests were conducted to define geometric, hydraulic and mechanical features of pyroclastic soil horizons in the source areas and to carry out hydrological numerical modelling of hillslopes under different rainfall conditions. The slope stability for three representative cases was calculated considering the real sliding surface of the initial landslides and the pore pressures during the infiltration process. The hydrological modelling of hillslopes demonstrated localised increase of pore pressure, up to saturation, where pyroclastic horizons with higher hydraulic conductivity pinch out and the thickness of pyroclastic mantle reduces or is
White sponge naevus with minimal clinical and histological changes: report of three cases.
Lucchese, Alberta; Favia, Gianfranco
2006-05-01
White sponge naevus (WSN) is a rare autosomal dominant disorder that predominantly affects non-cornified stratified squamous epithelia: oral mucosa, oesophagus, anogenital area. It has been shown to be related to keratin defects, because of mutations in the genes encoding mucosal-specific keratins K4 and K13. We illustrate three cases diagnosed as WSN, following the clinical and histological criteria, with unusual appearance. They presented with minimal clinical and histological changes that could be misleading in the diagnosis. The patients showed diffuse irregular plaques with a range of presentations from white to rose coloured mucosae involving the entire oral cavity. In one case the lesion was also present in the vaginal area. The histological findings included epithelial thickening, parakeratosis and extensive vacuolization of the suprabasal keratinocytes, confirming WSN diagnosis. Clinical presentation and histopathology of WSN are discussed in relation to the differential diagnosis of other oral leukokeratoses. PMID:16630298
Fighting with deterministic disturbances
Tibabishev, V N
2011-01-01
We consider the problem of interference mitigation in identifying the dynamics of multidimensional control systems, within the class of linear stationary models, from single realizations of the observed signals. The concept of uncorrelated processes is not verifiable in this setting. We introduce the concept of system components of a signal measured on a semiring, and define signal properties for systems of sets of linearly dependent and linearly independent measured signals. A frequency-domain method is found to deal with noise on the set of deterministic functions. An example is considered: identifying the dynamic characteristics of an aircraft from data obtained during one automatic landing.
Deterministic Graphical Games Revisited
Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2012-01-01
Starting from Zermelo’s classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn’s deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison...
Deterministic Global Optimization
Scholz, Daniel
2012-01-01
This monograph deals with a general class of solution approaches in deterministic global optimization, namely the geometric branch-and-bound methods, which are popular algorithms, for instance, in Lipschitzian optimization, d.c. programming, and interval analysis. It also introduces a new concept for the rate of convergence and analyzes several bounding operations reported in the literature, from the theoretical as well as from the empirical point of view. Furthermore, extensions of the prototype algorithm for multicriteria global optimization problems as well as mixed combinatorial optimization
Deterministic Walks in Random Media
Deterministic walks over a random set of N points in one and two dimensions (d = 1, 2) are considered. Points ('cities') are randomly scattered in R^d following a uniform distribution. A walker ('tourist'), at each time step, goes to the nearest neighbor city that has not been visited in the past τ steps. Each initial city leads to a different trajectory composed of a transient part and a final p-cycle attractor. Transient times (for d = 1, 2) follow an exponential law with a τ-dependent decay time, but the density of p-cycles can be approximately described by D(p) ∝ p^{-α(τ)}. For τ ≫ 1 and τ/N ≪ 1, the exponent is independent of τ. Some analytical results are given for the d = 1 case
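The walk just described is easy to simulate: because the next step depends only on the current city and the τ most recently visited ones, a repeated memory state marks the entry into the final p-cycle. A minimal sketch (the memory convention, that the walker avoids the current city plus the previous τ, is an assumption of this sketch):

```python
import numpy as np

def tourist_walk(points, start, tau):
    # Deterministic tourist walk: at each step move to the nearest city
    # not among the recently visited ones (current city plus previous tau).
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    path, seen = [start], {}
    while True:
        state = tuple(path[-(tau + 1):])   # memory fully determines the walk
        if state in seen:                  # repeated state: attractor entered
            transient = seen[state]
            p = len(path) - 1 - transient  # period of the final p-cycle
            return path, transient, p
        seen[state] = len(path) - 1
        forbidden = set(path[-(tau + 1):])
        nxt = min((c for c in range(n) if c not in forbidden),
                  key=lambda c: dist[path[-1], c])
        path.append(int(nxt))

# 10 random cities on the unit interval (d = 1), memory tau = 1.
rng = np.random.default_rng(42)
pts = rng.uniform(0.0, 1.0, (10, 1))
path, transient, p = tourist_walk(pts, start=0, tau=1)
print(transient, p)
```

With τ = 1 the shortest possible attractor is a 2-cycle between a pair of mutually nearest neighbors, which is what small instances typically reach.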
Minimally invasive surgery for superior mesenteric artery syndrome: A case report.
Yao, Si-Yuan; Mikami, Ryuichi; Mikami, Sakae
2015-12-01
Superior mesenteric artery (SMA) syndrome is defined as a compression of the third portion of the duodenum by the abdominal aorta and the overlying SMA. SMA syndrome associated with anorexia nervosa has been recognized, mainly among young female patients. The excessive weight loss owing to the eating disorder sometimes results in a reduced aorto-mesenteric angle and causes duodenal obstruction. Conservative treatment, including psychiatric and nutritional management, is recommended as initial therapy. If conservative treatment fails, surgery is often required. Currently, traditional open bypass surgery has been replaced by laparoscopic duodenojejunostomy as a curative surgical approach. However, single incision laparoscopic approach is rarely performed. A 20-year-old female patient with a diagnosis of anorexia nervosa and SMA syndrome was prepared for surgery after failed conservative management. As the patient had body image concerns, a single incision laparoscopic duodenojejunostomy was performed to achieve minimal scarring. As a result, good perioperative outcomes and cosmetic results were achieved. We show the first case of a young patient with SMA syndrome who was successfully treated by single incision laparoscopic duodenojejunostomy. This minimal invasive surgery would be beneficial for other patients with SMA syndrome associated with anorexia nervosa, in terms of both surgical and cosmetic outcomes. PMID:26668518
Deterministic analyses of severe accident issues
Severe accidents in light water reactors involve complex physical phenomena. In the past there has been a heavy reliance on simple assumptions regarding physical phenomena, alongside probability methods, to evaluate risks associated with severe accidents. Recently GE has developed realistic methodologies that permit deterministic evaluations of severe accident progression and of some of the associated phenomena in the case of Boiling Water Reactors (BWRs). These deterministic analyses indicate that with appropriate system modifications and operator actions, core damage can be prevented in most cases. Furthermore, in cases where core-melt is postulated, containment failure can either be prevented or significantly delayed to allow sufficient time for recovery actions to mitigate severe accidents
The deterministic and statistical Burgers equation
Fournier, J.-D.; Frisch, U.
Fourier-Lagrangian representations of the UV-region inviscid-limit solutions of the Burgers (1939) equation are developed for deterministic and random initial conditions. In the deterministic case, the Fourier-mode amplitude behavior is characterized by complex singularities with fast decrease, power-law preshocks with spectral indices of about k^{-4/3}, and shocks with a k^{-1} spectrum. In the random case, shocks are associated with a k^{-2} spectrum which overruns the smaller wavenumbers and appears immediately under Gaussian initial conditions. The use of the Hopf-Cole solution in the random case is illustrated in calculations of the law of energy decay by a modified Kida (1979) method. Graphs and diagrams of the results are provided.
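For reference, the underlying equation and the Hopf-Cole substitution that linearizes it, written in standard notation (the notation is assumed here; the abstract does not fix it):

```latex
% Burgers' equation for the velocity field u(x,t), viscosity \nu:
\partial_t u + u\,\partial_x u = \nu\,\partial_{xx} u .
% The Hopf--Cole substitution
u = -2\nu\,\partial_x \ln \theta
% reduces it to the linear heat equation
\partial_t \theta = \nu\,\partial_{xx} \theta ,
% whose explicit solution underlies the random-case energy-decay calculations.
```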
Deterministically delayed pseudofractal networks
On the basis of pseudofractal networks (PFNs), we propose a family of delayed pseudofractal networks (DPFNs) with a special feature that newly added edges delay producing new nodes, differing from the evolution algorithms of PFNs where all existing edges simultaneously generate new nodes. We obtain analytical formulae for degree distribution, clustering coefficient (C) and average path length (APL). We compare DPFNs and PFNs, and show that the exponent of the degree distribution of DPFNs is smaller than that of PFNs, meaning that the heterogeneity of this kind of delayed network is higher. Compared to PFNs, small-world features of DPFNs are more prominent (larger C and smaller APL). We also find that the delay strengthens the scale-free and small-world characteristics of DPFNs. In addition, we calculate and compare the mean first passage time (MFPT) numerically, revealing that the MFPT of DPFNs is shorter. Our study may help with a deeper understanding of various deterministically growing delayed networks
Minimal access direct spondylolysis repair using a pedicle screw-rod system: a case series
Mohi Eldin Mohamed
2012-11-01
Introduction: Symptomatic spondylolysis is always challenging to treat, because the pars defect causing the instability needs to be stabilized while segmental fusion needs to be avoided. Direct repair of the pars defect is ideal in cases of spondylolysis in which posterior decompression is not necessary. We report clinical results using segmental pedicle-screw-rod fixation with bone grafting in patients with symptomatic spondylolysis, a modification of a technique first reported by Tokuhashi and Matsuzaki in 1996. We also describe the surgical technique, assess the fusion and analyze the outcomes of patients.

Case presentation: At Cairo University Hospital, eight of twelve Egyptian patients’ acute pars fractures healed after conservative management. Of those, two young male patients underwent an operative procedure for chronic low back pain secondary to a pars defect. Case one was a 25-year-old Egyptian man who presented with a one-year history of axial low back pain, not radiating to the lower limbs, after falling from height. Case two was a 29-year-old Egyptian man who presented with a one-year history of axial low back pain, mild claudication and infrequent radiation to the leg, never below the knee. Utilizing a standardized mini-access fluoroscopically-guided surgical protocol, fixation was established with two titanium pedicle screws placed into both pedicles, at the same level as the pars defect, without violating the facet joint. The cleaned pars defect was grafted; a curved titanium rod was then passed under the base of the spinous process of the affected vertebra, bridging the loose fragment, and attached to the pedicle screw heads to uplift the spinous process, followed by compression of the defect. The patients were discharged three days after the procedure, with successful fusion at one-year follow-up. No rod breakage or implant-related complications were reported.

Conclusions: Where there is no
Wang, Wei-Lien; Torres-Cabala, Carlos; Curry, Jonathan L; Ivan, Doina; McLemore, Michael; Tetzlaff, Michael; Zembowicz, Artur; Prieto, Victor G; Lazar, Alexander J
2015-06-01
Atypical fibroxanthoma (AFX) is a dermal mesenchymal neoplasm arising in sun-damaged skin, primarily of the head and neck region of older men. Conservative excision cures most. However, varying degrees of subcutaneous involvement can lead to a more aggressive course and rare metastases. Thus, AFX involving the subcutis are termed pleomorphic dermal sarcomas or other monikers by some to recognize the more threatening natural history. We reviewed cases of "metastatic AFX" from our institution and from the files of a consultative dermatopathology practice. Nine of 152 patients with AFX were identified at a single institution (2000-2011). Two additional patients were identified from the files of a consultative practice. Clinical, radiological, and pathological features were reviewed and cases with histologically verified metastases identified. Median age was 67 (range, 45-91) years, all male, and involving the head and neck region. Two cases had no documented involvement of the subcutis, and 2 cases had only superficial subcutis involvement. Median time to metastases was 13 (range, 8-49) months. Three patients developed solitary regional lymph node metastases while 8 had widespread metastases. Five patients developed local recurrence within 8 months, and all 5 developed widespread metastasis. With median follow-up of 26 (range, 10-145) months, 6 died of disease (median, 19 months; range, 10-35 months), 4 were alive and well, and 1 was alive with disease. AFX has very rare metastatic potential, even those without or with minimal subcutis involvement, and can lead to mortality. Most metastasis and local recurrence occurred within 1 year of presentation. Solitary regional metastases were associated with better outcomes than those with multiple distant metastases. Patients with repeated local recurrences portended more aggressive disease including development of distant metastases. PMID:25590287
Deterministic behavioural models for concurrency
Sassone, Vladimiro; Nielsen, Mogens; Winskel, Glynn
1993-01-01
This paper offers three candidates for a deterministic, noninterleaving, behaviour model which generalizes Hoare traces to the noninterleaving situation. The three models are all proved equivalent in the rather strong sense of being equivalent as categories. The models are: deterministic labelled...
An Approach to Composition Based on a Minimal Techno Case Study
Bougaïeff, Nicolas
2013-01-01
This dissertation examines key issues relating to minimal techno, a sub-genre of electronic dance music (EDM) that emerged in the early 1990s. These key issues are the aesthetics, composition, performance, and technology of minimal techno, as well as the economics of EDM production. The study aims to answer the following question. What is the musical and social significance of minimal techno production and performance? The study is conducted in two parts. The history of minimal music is ...
Automated gauging stations are used to monitor the hydro-ecological effects of nuclear power stations. These stations continuously measure four physical-chemical parameters: water temperature, dissolved oxygen content, pH and electrical conductivity. Every hour, they provide the results of water quality measurements on samples taken upstream, downstream and at the site of the power plants. This work proposes a series of tools for critically analysing and validating the collected data. They should provide a means of detecting the abnormal values, discontinuities and recording drifts most frequently observed. Using conventional statistical tests, the procedure developed compares the measured value with other information: other measurements or model forecasts. The models are based either on the internal properties of the time-dependent series of each variable considered, or on relationships with external hydro-meteorological variables: air temperature, solar radiation and flow rate. These links can be expressed either by a totally or partially deterministic model, or by a statistical model, both of which require prior calibration using past data. In particular, the models of Box and Jenkins, neural networks and deterministic models such as CALNAT or an adaptation of Biomox (EDF-Chatou) have been used. These methods and tools were developed and applied with a cross-validation procedure covering five years of data records for the river Loire at Dampierre (1990-1994). (author)
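One of the validation ideas described, comparing each measurement against a forecast from a time-series model of the variable itself, can be sketched with an AR(1) model and a residual threshold. The model choice, threshold, and synthetic data below are illustrative assumptions, not the tools (CALNAT, Biomox) used in the study:

```python
import numpy as np

# Forecast each water-temperature reading from an AR(1) model fitted to
# past data, and flag readings whose residual exceeds a z-score threshold.
rng = np.random.default_rng(3)

n = 500
temp = np.empty(n)
temp[0] = 15.0
for t in range(1, n):  # synthetic hourly series, mean 15 degC, AR(1) dynamics
    temp[t] = 15.0 + 0.9 * (temp[t - 1] - 15.0) + rng.normal(0, 0.2)
temp[300] += 5.0       # inject one abnormal value (e.g. a sensor spike)

# Fit AR(1) by least squares on a clean leading segment of the record.
past = 250
phi, c = np.polyfit(temp[:past - 1], temp[1:past], 1)

resid = temp[1:] - (c + phi * temp[:-1])   # one-step forecast residuals
sigma = np.std(resid[:past - 1])
flags = np.where(np.abs(resid) > 5 * sigma)[0] + 1  # indices of suspect values

print(flags)
```

The injected spike is flagged twice: once because the reading itself is far from its forecast, and once because the following reading is far from the forecast made *from* the spike.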
Ghirardini, G; Mohamed, M; Bartolamasi, A; Malmusi, S; Dalla Vecchia, E; Algeri, I; Zanni, A; Renzi, A; Cavicchioni, O; Braconi, A; Pazzoni, F; Alboni, C
2013-01-01
The objective of our study was to evaluate the surgical outcome of minimally invasive vaginal hysterectomy (MIVH), using the bipolar vessel sealing system (BVSS; BiClamp®). The design was a retrospective analysis (Canadian Task Force Classification II-3). The setting was a secondary care hospital. Records of patients who underwent vaginal hysterectomy for benign indications in our centre between November 2005 and March 2011 were reviewed. Patients' demographic data, indications for surgery, patient history with regard to previous surgery, duration of surgery, blood loss (postoperative hemoglobin drop '∆Hb'), perioperative complications, and length of inpatient stay were collected from the medical records. The intervention was vaginal hysterectomy using BVSS (BiClamp®). Results showed that the mean duration of surgery was 48.9 ± 15.3 min (95% CI, 49.2-52.5). The mean duration of hospital stay was 3.2 ± 1.2 days (95% CI, 2.8-3.2). The mean ∆Hb was 1.4 ± 1.8 g/dl. Overall, conversion to laparotomy was required in three cases (0.6%). Only one haemoperitoneum occurred (0.2%), and this was the only case that required blood transfusion. The main indication for VH was uterine prolapse in 52.0% (n = 260) of cases; uterine fibroids in 37.4% (n = 187); adenomyosis uteri in 4.2% (n = 21); cervical dysplasia in 22 patients (4.4%); and in 2% (n = 10) of patients, endometrial hyperplasia and other pathologies were the indications for VH. It was concluded that electrosurgical bipolar vessel sealing (BiClamp®) can provide a safe and feasible alternative to sutures in vaginal hysterectomy, resulting in reduced operative time and blood loss, with acceptable surgical outcomes. PMID:23259887
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody J. H.
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.
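For contrast with the deterministic mean-field approximation, a minimal sketch of the standard (perturbed-observation) EnKF analysis step for a scalar state observed directly is shown below. This is not the paper's DMFEnKF, which couples a PDE solver with a quadrature rule; the model and all numbers are illustrative.

```python
import random

def enkf_analysis(ensemble, y, obs_var, rng):
    """One stochastic EnKF analysis step for a scalar state observed
    directly (H = 1), perturbed-observation variant."""
    n = len(ensemble)
    m = sum(ensemble) / n
    c = sum((x - m) ** 2 for x in ensemble) / (n - 1)  # sample variance
    gain = c / (c + obs_var)                           # Kalman gain
    # Each member assimilates an independently perturbed observation.
    return [x + gain * (y + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

rng = random.Random(0)
prior = [rng.gauss(0.0, 1.0) for _ in range(500)]   # prior ~ N(0, 1)
posterior = enkf_analysis(prior, y=2.0, obs_var=1.0, rng=rng)
post_mean = sum(posterior) / len(posterior)
```

With equal prior and observation variances the exact posterior is N(1, 0.5), so the ensemble mean should land near 1 and the ensemble spread should shrink, up to sampling error of order 1/sqrt(500).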
Purpose: To determine patient-specific absorbed peak doses to skin, eye lens, brain parenchyma, and cranial red bone marrow (RBM) of adult individuals subjected to low-dose brain perfusion CT studies on a 256-slice CT scanner, and investigate the effect of patient head size/shape, head position during the examination and bowtie filter used on peak tissue doses. Methods: The peak doses to eye lens, skin, brain, and RBM were measured in 106 individual-specific adult head phantoms subjected to the standard low-dose brain perfusion CT on a 256-slice CT scanner using a novel Monte Carlo simulation software dedicated for patient CT dosimetry. Peak tissue doses were compared to corresponding thresholds for induction of cataract, erythema, cerebrovascular disease, and depression of hematopoiesis, respectively. The effects of patient head size/shape, head position during acquisition and bowtie filter used on resulting peak patient tissue doses were investigated. The effect of eye-lens position in the scanned head region was also investigated. The effect of miscentering and use of narrow bowtie filter on image quality was assessed. Results: The mean peak doses to eye lens, skin, brain, and RBM were found to be 124, 120, 95, and 163 mGy, respectively. The effect of patient head size and shape on peak tissue doses was found to be minimal since maximum differences were less than 7%. Patient head miscentering and bowtie filter selection were found to have a considerable effect on peak tissue doses. The peak eye-lens dose saving achieved by elevating head by 4 cm with respect to isocenter and using a narrow wedge filter was found to approach 50%. When the eye lies outside of the primarily irradiated head region, the dose to eye lens was found to drop to less than 20% of the corresponding dose measured when the eye lens was located in the middle of the x-ray beam. Positioning head phantom off-isocenter by 4 cm and employing a narrow wedge filter results in a moderate reduction of
Perisinakis, Kostas; Seimenis, Ioannis; Tzedakis, Antonis; Papadakis, Antonios E.; Damilakis, John [Department of Medical Physics, Faculty of Medicine, University of Crete, P.O. Box 2208, Heraklion 71003, Crete (Greece); Medical Diagnostic Center 'Ayios Therissos', P.O. Box 28405, Nicosia 2033, Cyprus and Department of Medical Physics, Medical School, Democritus University of Thrace, Panepistimioupolis, Dragana 68100, Alexandroupolis (Greece); Department of Medical Physics, University Hospital of Heraklion, P.O. Box 1352, Heraklion 71110, Crete (Greece); Department of Medical Physics, Faculty of Medicine, University of Crete, P.O. Box 2208, Heraklion 71003, Crete (Greece)
2013-01-15
Submicroscopic Deterministic Quantum Mechanics
Krasnoholovets, V
2002-01-01
So-called hidden variables introduced in quantum mechanics by de Broglie and Bohm have changed their initial enigmatic meanings and acquired quite reasonable outlines of real and measurable characteristics. The starting viewpoint was the following: all the phenomena which we observe in the quantum world should reflect structural properties of the real space. Thus the scale 10^{-28} cm, at which the three fundamental interactions (electromagnetic, weak, and strong) intersect, has been treated as the size of a building block of the space. The appearance of a massive particle is associated with a local deformation of the cellular space, i.e. deformation of a cell. The mechanics of a moving particle that has been constructed is deterministic by its nature and shows that the particle interacts with cells of the space creating elementary excitations called "inertons". The further study has disclosed that inertons are a substructure of the matter waves which are described by the orthodox wave \psi-function formalism. The c...
Height-Deterministic Pushdown Automata
Nowotka, Dirk; Srba, Jiri
We define the notion of height-deterministic pushdown automata, a model where for any given input string the stack heights during any (nondeterministic) computation on the input are a priori fixed. Different subclasses of height-deterministic pushdown automata, strictly containing the class of regular languages and still closed under boolean language operations, are considered. Several such language classes have been described in the literature. Here, we suggest a natural and intuitive model that subsumes all the formalisms proposed so far by employing height-deterministic pushdown automata.
Operational State Complexity of Deterministic Unranked Tree Automata
Xiaoxue Piao
2010-08-01
We consider the state complexity of basic operations on tree languages recognized by deterministic unranked tree automata. For the operations of union and intersection the upper and lower bounds of both weakly and strongly deterministic tree automata are obtained. For tree concatenation we establish a tight upper bound that is of a different order than the known state complexity of concatenation of regular string languages. We show that (n+1)((m+1)2^n - 2^(n-1) - 1) vertical states are sufficient, and necessary in the worst case, to recognize the concatenation of tree languages recognized by (strongly or weakly) deterministic automata with, respectively, m and n vertical states.
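Reading the concatenation bound as (n+1)((m+1)2^n - 2^(n-1) - 1) (a reconstruction of the garbled formula as printed), it is easy to tabulate; growth is exponential in n, the vertical state count of the second automaton, but only linear in m.

```python
def concat_bound(m, n):
    """Vertical states sufficient (and necessary in the worst case) for
    concatenation, per the bound as read from the abstract:
    (n+1) * ((m+1) * 2**n - 2**(n-1) - 1)."""
    return (n + 1) * ((m + 1) * 2 ** n - 2 ** (n - 1) - 1)

# Small cases show exponential growth in n versus linear growth in m.
table = {(m, n): concat_bound(m, n) for m in (1, 2) for n in (1, 2, 3)}
```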
Deterministic methods in radiation transport
Rice, A.F.; Roussin, R.W. (eds.)
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.
Regret Bounds for Deterministic Gaussian Process Bandits
de Freitas, Nando; Zoghi, Masrour
2012-01-01
This paper analyses the problem of Gaussian process (GP) bandits with deterministic observations. The analysis uses a branch and bound algorithm that is related to the UCB algorithm of (Srinivas et al., 2010). For GPs with Gaussian observation noise, with variance strictly greater than zero, (Srinivas et al., 2010) proved that the regret vanishes at the approximate rate of $O(\\frac{1}{\\sqrt{t}})$, where t is the number of observations. To complement their result, we attack the deterministic case and attain a much faster exponential convergence rate. Under some regularity assumptions, we show that the regret decreases asymptotically according to $O(e^{-\\frac{\\tau t}{(\\ln t)^{d/4}}})$ with high probability. Here, d is the dimension of the search space and $\\tau$ is a constant that depends on the behaviour of the objective function near its global maximum.
Deterministic Real-time Thread Scheduling
Yun, Heechul; Sha, Lui
2011-01-01
A race condition is a timing-sensitive problem. A significant source of timing variation comes from nondeterministic hardware interactions such as cache misses. While data race detectors and model checkers can detect races, the enormous state space of complex software makes it difficult to identify all of them, and the residual implementation errors remain a big challenge. In this paper, we propose deterministic real-time scheduling methods to address scheduling nondeterminism in uniprocessor systems. The main idea is to use timing-insensitive deterministic events, e.g., an instruction counter, in conjunction with a real-time clock to schedule threads. By introducing the concept of Worst Case Executable Instructions (WCEI), we guarantee both determinism and real-time performance.
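The core idea, preempting on an instruction counter rather than a wall clock so that thread interleaving (and hence any race outcome) is reproducible, can be illustrated with a toy scheduler. This is a simulation sketch, not the authors' system; the threads and their "instructions" are hypothetical labels.

```python
def run_deterministic(threads, quantum):
    """Round-robin scheduler that preempts after a fixed number of
    'instructions' (an instruction count, not elapsed time), so the
    recorded interleaving is identical on every run."""
    trace = []
    pcs = [0] * len(threads)           # per-thread program counters
    while any(pc < len(t) for pc, t in zip(pcs, threads)):
        for i, t in enumerate(threads):
            for _ in range(quantum):   # execute one quantum
                if pcs[i] >= len(t):
                    break              # this thread has finished
                trace.append(t[pcs[i]])
                pcs[i] += 1
    return trace

# Two hypothetical threads; each instruction is a (thread, step) label.
t0 = [("T0", k) for k in range(5)]
t1 = [("T1", k) for k in range(4)]
traces = {tuple(run_deterministic([t0, t1], quantum=2)) for _ in range(10)}
```

Because preemption points depend only on the instruction count, all ten runs produce exactly the same interleaving, which is the property a timer-driven scheduler cannot guarantee.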
Self-organized criticality in deterministic systems with disorder
Rios, Paolo De Los; Valleriani, Angelo; Vega, Jose Luis
1997-01-01
Using the Bak-Sneppen model of biological evolution as our paradigm, we investigate in which cases noise can be substituted with a deterministic signal without destroying Self-Organized Criticality (SOC). If the deterministic signal is chaotic the universality class is preserved; some non-universal features, such as the threshold, depend on the time correlation of the signal. We also show that, if the signal introduced is periodic, SOC is preserved but in a different universality class, as lo...
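A minimal sketch of the Bak-Sneppen dynamics referred to above: the least-fit site and its two neighbours are repeatedly replaced with fresh random fitnesses, and a nonzero fitness threshold self-organizes. The paper's question is what happens when the random source is replaced by a deterministic (chaotic or periodic) signal; this sketch keeps the noisy version, with illustrative sizes.

```python
import random

def bak_sneppen(n_sites, steps, rng):
    """Minimal Bak-Sneppen model on a ring: replace the least-fit site
    and its two neighbours with fresh uniform random fitnesses."""
    fit = [rng.random() for _ in range(n_sites)]
    for _ in range(steps):
        i = min(range(n_sites), key=fit.__getitem__)  # least-fit site
        for j in (i - 1, i, i + 1):                   # periodic boundary
            fit[j % n_sites] = rng.random()
    return fit

rng = random.Random(1)
final = bak_sneppen(64, 5000, rng)
# After many updates most fitnesses sit above a self-organized
# threshold (about 0.667 in the large-size limit).
```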
Minimally Invasive Antral Membrane Balloon Elevation (MIAMBE): A report of 3 cases
Roberto Arroyo
2013-12-01
Long-standing partial edentulism in the posterior segment of an atrophic maxilla is a challenging treatment. Sinus elevation via the Caldwell-Luc approach has several anatomical restrictions, post-operative discomfort, and the need for complex surgical techniques. The osteotome approach is a notably safe and efficient technique; as a variation of this technique, minimally invasive antral membrane balloon elevation (MIAMBE) has been developed, which uses a hydraulic system. We present three cases in which the MIAMBE system was used for tooth replacement in the posterior sector. This procedure seems to be a relatively simple and safe solution for the insertion of endo-osseous implants in the posterior atrophic maxilla.
Stephen Faddegon
2013-04-01
Background and Purpose Horseshoe kidney is an uncommon renal anomaly often associated with ureteropelvic junction (UPJ) obstruction. Advanced minimally invasive surgical (MIS) reconstructive techniques including laparoscopic and robotic surgery are now being utilized in this population. However, fewer than 30 cases of MIS UPJ reconstruction in horseshoe kidneys have been reported. We herein report our experience with these techniques in the largest series to date. Materials and Methods We performed a retrospective chart review of nine patients with UPJ obstruction in horseshoe kidneys who underwent MIS repair at our institution between March 2000 and January 2012. Four underwent laparoscopic, two robotic, and one laparoendoscopic single-site (LESS) dismembered pyeloplasty. An additional two pediatric patients underwent robotic Hellstrom repair. Perioperative outcomes and treatment success were evaluated. Results Median patient age was 18 years (range 2.5-62 years). Median operative time was 136 minutes (range 109-230 min) and there were no perioperative complications. After a median follow-up of 11 months, clinical (symptomatic) success was 100%, while radiographic success based on MAG-3 renogram was 78%. The two failures were defined by prolonged t1/2 drainage, but neither patient has required salvage therapy as they remain asymptomatic with stable differential renal function. Conclusions MIS repair of UPJ obstruction in horseshoe kidneys is feasible and safe. Although excellent short-term clinical success is achieved, radiographic success may be lower than MIS pyeloplasty in heterotopic kidneys, possibly due to inherent differences in anatomy. Larger studies are needed to evaluate MIS pyeloplasty in this population.
Minimally invasive transforaminal lumbar interbody fusion: Results of 23 consecutive cases
Amit Jhala
2014-01-01
Conclusion: The study demonstrates a good clinicoradiological outcome of minimally invasive TLIF. It is also superior in terms of postoperative back pain, blood loss, hospital stay, recovery time as well as medication use.
Minimally invasive video-assisted thyroidectomy: experience of 200 cases in a single center
Haitao, Zheng; Jie, Xu; Lixin, Jiang
2014-01-01
Introduction Minimally invasive techniques in thyroid surgery including video-assisted technique originally described by Miccoli have been accepted in several continents for more than 10 years. Aim To analyze our preliminary results from minimally invasive video-assisted thyroidectomy (MIVAT) and to evaluate the feasibility and effects of this method in a general department over a 4-year period. Material and methods Initial experience was presented based on a series of 200 patients selected f...
Yunzhi ZHOU
2010-01-01
Background and objective TACE, Ar-He targeted cryosurgery, and radioactive seed implantation are the main minimally invasive methods in the treatment of lung cancer. This article summarizes post-treatment quality of life, clinical efficacy, and survival, and analyzes the advantages and shortcomings of each method, so as to evaluate the clinical effect of multiple minimally invasive treatments for non-small cell lung cancer. Methods All 139 cases were non-small cell lung cancer patients confirmed by pathology and followed up retrospectively from July 2006 to July 2009; all had lost the chance of operation according to comprehensive evaluation. Different combinations of minimally invasive treatments were selected according to the blood supply, size, and location of the lesion. Among the 139 cases, there were 102 cases of primary cancer and 37 cases of metastasis to the mediastinum, lung, and chest wall; 71 cases with abundant blood supply used the combination of superselective target-artery chemotherapy, Ar-He targeted cryoablation, and radiochemotherapy with seed implantation; 48 cases with poor blood supply used Ar-He targeted cryoablation alone; and 20 cases with poor blood supply used the combination of Ar-He targeted cryoablation and radiochemotherapy with seed implantation. The pre- and post-treatment KPS scores, imaging data, and follow-up results were then analyzed. Results The KPS score increased by 20.01 on average after treatment. Over 3 years of follow-up there were 44 cases of CR, 87 cases of PR, 3 cases of NC, and 5 cases of PD; the efficiency was 94.2%. Ninety-nine patients survived 1 year (71.2%), 43 survived 2 years (30.2%), and 4 survived over 3 years; the median survival was 19 months and the mean survival was (16 ± 1.5) months. There were no severe complications, such as spinal cord injury or vessel and pericardial aspiration. Conclusion Minimally invasive technique is a highly successful and effective method with mild complications
Waste minimization in a research and development environment - a case history
Brookhaven National Laboratory (BNL) research and development activities generate small and variable waste streams that present a unique minimization challenge. This paper describes how B&V Waste Science and Technology Corp. successfully planned and organized an assessment of these waste streams. It describes the procedures chosen to collect and evaluate data and the procedure adopted to determine the feasibility of waste minimization methods and program elements. The paper gives a brief account of the implementation of the assessment and summarizes the assessment results and recommendations. Also, the paper briefly describes a manual developed to train staff on materials handling and storage methods and a general information brochure to educate employees and visiting researchers. Both documents covered handling, storage, and disposal procedures that could be used to eliminate or minimize hazardous waste discharges to the environment
Deterministic computation of functional integrals
A new method of numerical integration in functional spaces is described. This method is based on the rigorous definition of a functional integral in complete separable metric space and on the use of approximation formulas which we constructed for this kind of integral. The method is applicable to the solution of some partial differential equations and to the calculation of various characteristics in quantum physics. No preliminary discretization of space and time is required in this method, and no simplifying assumptions such as semi-classical or mean-field approximations, collective excitations, or the introduction of ''short-time'' propagators are necessary in our approach. The constructed approximation formulas satisfy the condition of being exact on a given class of functionals, namely polynomial functionals of a given degree. The employment of these formulas replaces the evaluation of a functional integral by computation of an ''ordinary'' (Riemannian) integral of low dimension, thus allowing the use of the preferable deterministic algorithms (normally Gaussian quadratures) in computations, rather than the traditional stochastic (Monte Carlo) methods which are commonly used for the problem under consideration. The results of application of the method to computation of the Green function of the Schroedinger equation in imaginary time, as well as the study of some models of Euclidean quantum mechanics, are presented. The comparison with results of other authors shows that our method gives a significant (by an order of magnitude) economy of computer time and memory versus other known methods, while providing results with the same or better accuracy. The functional measure of the Gaussian type is considered and some of its particular cases, namely the conditional Wiener measure in quantum statistical mechanics and the functional measure in a Schwartz distribution space in two-dimensional quantum field theory, are studied in detail. Numerical examples demonstrating the
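The key mechanism, replacing stochastic sampling by formulas exact on a given polynomial class, has a familiar one-dimensional analogue in Gaussian quadrature, sketched below. This is only an ordinary-integral illustration, not the paper's functional-space construction.

```python
# 3-point Gauss-Legendre rule on [-1, 1]: exact for all polynomials of
# degree <= 5, the one-dimensional analogue of approximation formulas
# "exact on a given class of functionals".
NODES = (-(3 / 5) ** 0.5, 0.0, (3 / 5) ** 0.5)
WEIGHTS = (5 / 9, 8 / 9, 5 / 9)

def gauss3(f):
    """Deterministic 3-evaluation quadrature, no random sampling."""
    return sum(w * f(x) for w, x in zip(WEIGHTS, NODES))

exact = 2 / 5                      # integral of x^4 over [-1, 1]
approx = gauss3(lambda x: x ** 4)  # matches exactly (degree 4 <= 5)
```

Three deterministic evaluations reproduce the degree-4 integral exactly, whereas a Monte Carlo estimate of the same integral would carry O(1/sqrt(N)) sampling error.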
Deterministic dynamic behaviour
The dynamic load as the second given quantity in a dynamic analysis is less problematic as a rule than structure mapping, except for those cases where it cannot be specified completely independently directly for the structure but interacts with the non-structure environment or is influenced by the latter. In these cases, one should always check whether the study cannot be simplified by separate investigations of the two types of problems. The determination of the system response from the given quantities 'structure' and 'load' is the central function of dynamic analysis, although the importance and problems of the mapping steps should not be underestimated. The paper focuses on some aspects of this problem. The available methods are classified as modal and non-modal (direct) methods. In the first of these, the eigenvectors of the system are used as generalizing coordinates, while the degrees of freedom describing the model are used in the latter. The criteria for assessing methods of calculation are the accuracy and numerical stability of their solutions as well as their simplicity of use. Response quantities are represented (and calculated) in the form of response-time functions, frequency response functions, spectral density functions (or stochastic parameters derived from these), response spectra. The multitude of problems in dynamic studies requires also a multitude of possible approaches, and the selection of the most appropriate method for a given case is no slight task. (orig./GL)
Savaş Karyağar; Karyağar, Sevda S; Orhan Yalçın; Enis Yüney; Mehmet Mülazımoğlu; Tevfik Özpaçacı; Oğuzhan Karatepe; Yaşar Özdenkaya
2013-01-01
Objective: In this study, our aim was to study the efficiency of gamma probe guided minimally invasive parathyroidectomy (GP-MIP), conducted without the intra-operative quick parathyroid hormone (QPTH) measurement in the cases of solitary parathyroid adenomas (SPA) detected with USG and dual phase 99mTc-MIBI parathyroid scintigraphy (PS) in the preoperative period. Material and Methods: This clinical study was performed in 31 SPA patients (27 female, 4 male; mean age 51 ± 11 years) between Febru...
Objective: To investigate the effectiveness, technical points and complications of the minimally-invasive treatment for iatrogenic intravenous foreign bodies. Methods: Five patients with iatrogenic intravenous foreign bodies due to the fracture or shift of a venous catheter were enrolled in this study. By using a grasping device, which was inserted into the target vein via the right femoral vein, the foreign bodies within the venous system were successfully eliminated. Results: The vascular foreign bodies were successfully removed in all five patients, with a success rate of 100%. No operation-related complications, such as vascular rupture or pulmonary embolism, occurred. Conclusion: As a minimally-invasive technique, the use of a grasping device for removing iatrogenic vascular foreign bodies has a high success rate; thus, major surgical procedures can be avoided. (authors)
Minimally invasive video-assisted thyroidectomy: seven-year experience with 240 cases
Barczyński, Marcin; Konturek, Aleksander; Stopa, Małgorzata; Papier, Aleksandra; Nowak, Wojciech
2012-01-01
Introduction Minimally invasive video-assisted thyroidectomy (MIVAT) has gained acceptance in recent years as an alternative to conventional thyroid surgery. Aim Assessment of our 7-year experience with MIVAT. Material and methods A retrospective study of 240 consecutive patients who underwent MIVAT at our institution between 01/2004 and 05/2011 was conducted. The inclusion criterion was a single thyroid nodule below 30 mm in diameter within the thyroid of 25 ml or less in volume. The exclusi...
del Vecchio, Jorge Javier; Ghioldi, Mauricio; Raimondi, Nicolás; De Elias, Manuel
2016-01-01
Fracture dislocations involving the Lisfranc joint are rare; they represent only 0.2% of all fractures. There is no consensus in the medical literature about the surgical management of these lesions. However, both anatomical reduction and tarsometatarsal stabilization are essential for a good outcome. In this clinical study, five consecutive patients with a diagnosis of a low-energy Lisfranc lesion were treated with a novel surgical technique characterized by minimal osteosynthesis performed through a minimally invasive approach. According to the radiological criteria established, the joint reduction was anatomical in four patients, almost anatomical in one patient (#4), and nonanatomical in none of the patients. At the final follow-up, the mean AOFAS score for the midfoot was 96 points (range, 95–100). The mean score according to the VAS (Visual Analog Scale) at the end of the follow-up period was 1.4 points out of 10 (range, 0–3). The surgical technique described in this clinical study is characterized by the use of implants through a novel approach to reduce joint and soft tissue damage. We performed a closed reduction and minimally invasive stabilization with a bridge plate and a screw after achieving a closed anatomical reduction. PMID:27340569
Zietek, Pawel; Karaczun, Maciej; Kruk, Bartosz; Szczypior, Karina
2016-01-01
Achilles injury is a common musculoskeletal disorder. Bilateral rupture of the Achilles tendon, however, is much less common and usually occurs spontaneously. Complete, traumatic, bilateral ruptures are rare and typically require long periods of immobilization before the patient can return to full weightbearing. A 52-year-old male was hospitalized for traumatic rupture of both Achilles tendons. No risk factors for tendon rupture were found. Blood samples revealed no pathologic features of the peripheral blood. Both tendons were repaired with percutaneous, minimally invasive surgery using the Achillon(®) tendon suture system. Rehabilitation was begun 4 weeks later. An ankle-foot orthosis was prescribed to provide ankle support with an adjustable range of movement, and active plantar flexion was set at 0° to 30°. The patient remained non-weightbearing with the ankle-foot orthosis device and performed active range-of-motion exercises. At 8 weeks after surgery, we recommended that he begin walking with partial weightbearing using a foot-tibial orthosis with the range of motion set to 45° plantar flexion and 15° dorsiflexion. At 10 weeks postoperatively, he was encouraged to return to full weightbearing on both feet. Beginning rehabilitation as soon as possible after minimally invasive surgery, compared with 6 weeks of immobilization after surgery, provided a rapid resumption of full weightbearing. We emphasize the clinical importance of a safe, simple treatment program that can be followed for a patient with damage to the Achilles tendons. To our knowledge, ours is the first report of minimally invasive repair of bilateral simultaneous traumatic rupture of the Achilles tendon. PMID:26002678
Mika Oki
2011-10-01
BACKGROUND: Dengue infection is endemic in many regions throughout the world. While insecticide fogging targeting the vector mosquito Aedes aegypti is a major control measure against dengue epidemics, the impact of this method remains controversial. A previous mathematical simulation study indicated that insecticide fogging minimized cases when conducted soon after peak disease prevalence, although the impact was minimal, possibly because seasonality and population immunity were not considered. Periodic outbreak patterns are also highly influenced by seasonal climatic conditions. Thus, these factors are important considerations when assessing the effect of vector control against dengue. We used mathematical simulations to identify the appropriate timing of insecticide fogging, considering seasonal change of vector populations, and to evaluate its impact on reducing dengue cases at various levels of transmission intensity. METHODOLOGY/PRINCIPAL FINDINGS: We created a Susceptible-Exposed-Infectious-Recovered (SEIR) model of dengue virus transmission. Mosquito lifespan was assumed to change seasonally, and the optimal timing of insecticide fogging to minimize dengue incidence under various lengths of the wet season was investigated. We also assessed whether insecticide fogging was equally effective at higher and lower endemic levels by running simulations over a 500-year period with various transmission intensities to produce an endemic state. In contrast to the previous study, the optimal application of insecticide fogging was between the onset of the wet season and the prevalence peak. Although it has less impact in areas that have higher endemicity and longer wet seasons, insecticide fogging can prevent a considerable number of dengue cases if applied at the optimal time. CONCLUSIONS/SIGNIFICANCE: The optimal timing of insecticide fogging and its impact on reducing dengue cases were greatly influenced by seasonality and the level of
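A minimal SEIR sketch with a seasonally varying transmission rate, in the spirit of the model described (a forward-Euler integration; all parameter values below are illustrative, not those of the study):

```python
import math

def seir(days, beta0, season_amp, incub=5.0, infectious=7.0,
         n=100000.0, i0=10.0, dt=0.1):
    """Minimal SEIR integration (forward Euler) with a seasonally
    varying transmission rate beta(t). Parameters are hypothetical."""
    s, e, i, r = n - i0, 0.0, i0, 0.0
    sigma, gamma = 1.0 / incub, 1.0 / infectious
    for step in range(int(days / dt)):
        t = step * dt
        # Seasonal forcing stands in for the seasonal vector population.
        beta = beta0 * (1.0 + season_amp * math.sin(2 * math.pi * t / 365.0))
        new_exp = beta * s * i / n * dt   # S -> E
        new_inf = sigma * e * dt          # E -> I
        new_rec = gamma * i * dt          # I -> R
        s -= new_exp
        e += new_exp - new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, e, i, r

s, e, i, r = seir(days=365, beta0=0.4, season_amp=0.3)
```

With beta0/gamma = 2.8 > 1 an epidemic takes off and most of the population is eventually removed; a fogging intervention could be modeled as a temporary drop in beta, which is the quantity the study's timing question concerns.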
DEVELOPMENT OF OPTIMIZATION STRATEGIES COMBINING RANDOM AND DETERMINISTIC METHODS
Mimoun Younes
2012-01-01
Full Text Available The optimal allocation of powers is one of the main functions of the production, operation and control of electrical energy. The overall objective is to determine the optimal output of production units in order to minimize production cost while the system operates within its safe limits. This article proposes a hybridization of deterministic and stochastic approaches (the Davidon-Fletcher-Powell method and a genetic algorithm) to improve the optimization of the cost function.
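The hybrid idea, a stochastic global search handing its best candidate to a deterministic local method, can be sketched on a toy convex cost. Everything below is a stand-in: the crude evolutionary stage substitutes for a genetic algorithm, and plain gradient descent substitutes for Davidon-Fletcher-Powell; none of it is the authors' implementation.

```python
import random

def hybrid_minimize(cost, grad, bounds, pop=40, gens=30, lr=0.1, steps=200, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    # Stochastic stage: crude evolutionary search for a good starting point.
    population = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=cost)                 # fittest (cheapest) first
        parents = population[: pop // 2]
        children = [p + rng.gauss(0, 0.2) for p in parents]  # mutation
        population = parents + children
    x = min(population, key=cost)
    # Deterministic stage: gradient descent from the stochastic winner.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Toy cost with minimum at x = 3 (hypothetical, for illustration only).
x_star = hybrid_minimize(cost=lambda x: (x - 3) ** 2,
                         grad=lambda x: 2 * (x - 3),
                         bounds=(-10, 10))
print(round(x_star, 3))  # 3.0
```

The design point is the hand-off: the stochastic stage explores broadly and escapes poor basins, while the deterministic stage supplies fast local convergence that pure evolutionary search lacks.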
Minimal access surgery in Castleman disease in a child, a case report
Jan F. Svensson
2015-07-01
Full Text Available This case report describes a child with Castleman disease. We present an overview of the disease, the investigation leading to the diagnosis, the laparoscopic approach for surgical treatment and the follow-up. This rare entity must be considered in cases of long-standing abdominal pain; cross-sectional imaging is beneficial, and we support the use of laparoscopic intervention in the treatment of unifocal abdominal Castleman disease.
Case study: a minimally invasive approach to the treatment of Klippel-Trenaunay syndrome.
Latessa, Victoria; Frasier, Krista
2007-12-01
Klippel-Trenaunay syndrome (KTS) is a congenital developmental disorder characterized by port wine stain, venous abnormalities, and soft tissue and bony deformities of the affected extremity. It is usually diagnosed in early childhood and has many long-term sequelae. Patients not only have physical health problems but also must learn to cope with psychosocial factors that will affect their self-esteem and interpersonal relationships. This article describes the syndrome of KTS and the minimally invasive techniques used in the treatment of superficial varicosities in patients with reasonably mild KTS and an intact deep venous system. Treating the varicosities relatively early to avoid the long-term complications of chronic venous insufficiency may improve the quality of life, maintain limb function, and decrease the risk of long-term venous complications. PMID:18036494
Monte Carlo methods are typically used for simulating radiation fields around gamma-ray spectrometers and pulse-height tallies within those spectrometers. Deterministic codes that discretize the linear Boltzmann transport equation can offer significant advantages in computational efficiency for calculating radiation fields, but stochastic codes remain the most dependable tools for calculating the response within spectrometers. For a deterministic field solution to become useful to radiation detection analysts, it must be coupled to a method for calculating spectrometer response functions. This coupling is done in the RADSAT toolbox. Previous work has been successful using a Monte Carlo boundary sphere around a handheld detector. It is desirable to extend this coupling to larger detector systems such as the portal monitors now being used to screen vehicles crossing borders. Challenges to providing an accurate Monte Carlo boundary condition from the deterministic field solution include the greater possibility of large radiation gradients along the detector and of the detector itself perturbing the field solution, neither of which is a significant concern for smaller detector systems. The method of coupling the deterministic results to a stochastic code for large detector systems can be described as spatially defined rectangular patches that minimize gradients. The coupled method was compared to purely stochastic simulation data of identical problems, showing the methods produce consistent detector responses while the purely stochastic run times are substantially longer in some cases, such as highly shielded geometries. For certain cases, this method has the ability to faithfully emulate large sensors in a more reasonable amount of time than other methods.
Deterministic Soluble Model of Coarsening
Frachebourg, L.; Krapivsky, P. L.
1996-01-01
We investigate a 3-phase deterministic one-dimensional phase ordering model in which interfaces move ballistically and annihilate upon colliding. We determine analytically the autocorrelation function A(t). This is done by computing generalized first-passage type probabilities P_n(t) which measure the fraction of space crossed by exactly n interfaces during the time interval (0,t), and then expressing the autocorrelation function via P_n's. We further reveal the spatial structure of the syste...
Analysis of FBC deterministic chaos
Daw, C.S.
1996-06-01
It has recently been discovered that the performance of a number of fossil energy conversion devices such as fluidized beds, pulsed combustors, steady combustors, and internal combustion engines is affected by deterministic chaos. It is now recognized that understanding and controlling the chaotic elements of these devices can lead to significantly improved energy efficiency and reduced emissions. Application of these techniques to key fossil energy processes is expected to provide important competitive advantages for U.S. industry.
Deterministic Small-World Networks
Comellas, Francesc; Sampels, Michael
2001-01-01
Many real life networks, such as the World Wide Web, transportation systems, biological or social networks, achieve both a strong local clustering (nodes have many mutual neighbors) and a small diameter (maximum distance between any two nodes). These networks have been characterized as small-world networks and modeled by the addition of randomness to regular structures. We show that small-world networks can be constructed in a deterministic way. This exact approach permits a direct calculatio...
Yun Niu; Tieju Liu; Xuchen Cao; Xiumin Ding; Li Wei; Yuxia Gao; Jun Liu
2009-01-01
OBJECTIVE To evaluate core needle biopsy (CNB) as a minimally invasive method to examine breast lesions and discuss the clinical significance of subsequent immunohistochemistry (IHC) analysis. METHODS The clinical data and pathological results of 235 patients with breast lesions, who received CNB before surgery, were analyzed and compared. Based on the results of CNB done before surgery, 87 out of 204 patients diagnosed as invasive carcinoma were subjected to immunodetection for p53, c-erbB-2, ER and PR. The morphological change of cancer tissues in response to chemotherapy was also evaluated. RESULTS Of the 235 cases receiving CNB examination, 204 were diagnosed as invasive carcinoma, reaching a 100% consistent rate with the surgical diagnosis. Sixty percent of the cases diagnosed as non-invasive carcinoma by CNB were identified to have invading elements in surgical specimens, and similarly, 50% of the cases diagnosed as atypical ductal hyperplasia by CNB were confirmed to be carcinoma by the subsequent result of excision biopsy. There was no significant difference between the CNB biopsy and regular surgical samples in the positive rate of immunohistochemistry analysis (p53, c-erbB-2, ER and PR; P > 0.05). However, there was a significant difference in the expression rate of p53 and c-erbB-2 between the cases with and without morphological change in response to chemotherapy (P < 0.05). In most cases positive for p53 and c-erbB-2, there was no obvious morphological change after chemotherapy. CONCLUSION CNB is a cost-effective diagnostic method with minimal invasion for breast lesions, although it still has some limitations. Immunodetection on CNB tissue is expected to have great significance in clinical applications.
Reddy
2015-07-01
Full Text Available CONTEXT: The approximate incidence of periprosthetic supracondylar femur fractures after total knee arthroplasty ranges from 0.3 to 2.5 percent. Various methods of treatment of these fractures have been suggested in the past, such as conservative management, open reduction and plate fixation, and intramedullary nailing. However, there were complications like pain, stiffness, infection and delayed union. Minimally invasive plate osteosynthesis (MIPO) is a relatively newer technique in the treatment of distal femoral fractures, as it preserves the periosteal blood supply and bone perfusion as well as minimizes soft tissue dissection. AIM: To evaluate the effectiveness of the MIPO technique in the treatment of periprosthetic distal femoral fracture. SETTINGS AND DESIGN: In this study, we present a case report of a 54-year-old female patient who sustained a type 2 (Rorabeck et al. classification) periprosthetic distal femoral fracture after TKA. Her fracture fixation was done with distal femoral locking plates using a minimally invasive technique. METHODS AND MATERIAL: We evaluated the clinical (using the Oxford knee scoring system) and radiological outcomes of the patient till six months post-operatively. Radiologically, the fracture showed complete union and she regained her full range of knee motion by the end of three months. CONCLUSION: We conclude that MIPO can be considered an effective surgical treatment option in the management of periprosthetic distal femoral fractures after TKA.
Recovery From Vegetative State to Minimally Conscious State: A Case Report.
Jang, SungHo; Kim, SeongHo; Lee, HanDo
2016-05-01
In this study, we attempted to demonstrate the change of the ascending reticular activating system (ARAS) concurrent with the recovery from a vegetative state (VS) to a minimally conscious state (MCS) in a patient with brain injury. A 54-year-old male patient had suffered from head trauma and underwent cardiopulmonary resuscitation immediately after head trauma. At 10 months after onset, the patient exhibited impaired consciousness, with a Coma Recovery Scale-Revised (CRS-R) score of 7 (auditory function: 1, visual function: 2, motor function: 1, verbal function: 1, communication: 0, and arousal: 2) and underwent a ventriculoperitoneal shunt operation for hydrocephalus. After the operation, he began comprehensive rehabilitative therapy. At post-op 2 and 8 weeks, his CRS-R score had recovered to 15 (3/3/4/1/1/3) and 17 (3/3/4/2/2/3), respectively. In terms of configuration on diffusion tensor tractography (DTT), there was no significant change in the lower portion of the ARAS. Regarding the change of neural connectivity of the thalamic intralaminar nucleus, increased neural connectivities to the hypothalamus, basal forebrain, prefrontal cortex, anterior cingulate cortex, and parietal cortex were observed in both hemispheres on post-op DTTs compared with pre-op DTT. We report on a patient with brain injury who showed change of the ARAS concurrent with the recovery from a VS to a MCS. PMID:26829084
Streamflow disaggregation: a nonlinear deterministic approach
B. Sivakumar
2004-01-01
Full Text Available This study introduces a nonlinear deterministic approach for streamflow disaggregation. According to this approach, the streamflow transformation process from one scale to another is treated as a nonlinear deterministic process, rather than a stochastic process as generally assumed. The approach follows two important steps: (1) reconstruction of the scalar (streamflow) series in a multi-dimensional phase space for representing the transformation dynamics; and (2) use of a local approximation (nearest neighbor) method for disaggregation. The approach is employed for streamflow disaggregation in the Mississippi River basin, USA. Data of successively doubled resolutions between daily and 16 days (i.e. daily, 2-day, 4-day, 8-day, and 16-day) are studied, and disaggregations are attempted only between successive resolutions (i.e. 2-day to daily, 4-day to 2-day, 8-day to 4-day, and 16-day to 8-day). Comparisons between the disaggregated values and the actual values reveal excellent agreement for all the cases studied, indicating the suitability of the approach for streamflow disaggregation. A further insight into the results reveals that the best results are, in general, achieved for low embedding dimensions (2 or 3) and a small number of neighbors (less than 50), suggesting the possible presence of nonlinear determinism in the underlying transformation process. A decrease in accuracy with increasing disaggregation scale is also observed, a possible implication of the existence of a scaling regime in streamflow.
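The two steps the abstract names, phase-space reconstruction by delay embedding followed by local nearest-neighbour approximation, can be sketched roughly as follows. The embedding dimension, delay, neighbour count and the toy sine series are assumptions chosen for illustration, not the study's settings or data.

```python
import numpy as np

def delay_embed(x, m, tau=1):
    """Reconstruct an m-dimensional phase space from a scalar series:
    row i is (x[i], x[i+tau], ..., x[i+(m-1)*tau])."""
    n = len(x) - (m - 1) * tau
    return np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n)])

def local_predict(x, m, k, tau=1):
    """Predict the next value from the k nearest neighbours of the
    current state (zeroth-order local approximation)."""
    emb = delay_embed(x, m, tau)
    current, history = emb[-1], emb[:-1]
    dists = np.linalg.norm(history - current, axis=1)
    nearest = np.argsort(dists)[:k]             # indices of k closest states
    # each neighbour's "next value" is the sample following its window
    nexts = [x[i + (m - 1) * tau + 1] for i in nearest]
    return float(np.mean(nexts))

# Toy series: a noiseless periodic signal is almost perfectly
# predictable this way, since past cycles supply exact analogues.
t = np.linspace(0, 20 * np.pi, 2001)
x = np.sin(t)
pred = local_predict(x[:-1], m=3, k=5)
print(abs(pred - x[-1]) < 0.01)  # True
```

For disaggregation rather than prediction, the same neighbour search would be used to map a coarse-scale state to the fine-scale values observed after its historical analogues.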
Deterministic prediction of localized corrosion damage
The accumulation of damage due to localized corrosion [pitting, stress corrosion cracking (SCC), corrosion fatigue (CF), crevice corrosion (CC), and erosion-corrosion (EC)] in complex industrial systems, such as power plants, refineries, desalination systems, etc., poses a threat to continued safe and economic operation, primarily because of the sudden, catastrophic nature of the resulting failures. Of particular interest in managing these forms of damage is the development of robust algorithms that can be used to predict the integrated damage as a function of time and as a function of the operating conditions of the system. Because complex systems of the same design rapidly become unique, due to differences in operating histories, and because failures are rare events, there is generally insufficient data on any given system to derive reliable empirical models that capture the impact of all (or even some) of the important independent variables. Accordingly, the models should be, to the greatest extent possible, deterministic, with the output being constrained by the natural laws. In this paper, I outline the theory of the initiation of damage, in the form of pitting on aluminum in chloride solution, and then describe the deterministic prediction of the accumulation of damage from SCC in Type 304 SS components in the primary coolant circuits of Boiling Water (Nuclear) Reactors (BWRs). These cases have been selected to illustrate the various phases through which localized corrosion damage occurs.
Chaotic dynamics and control of deterministic ratchets
Deterministic ratchets, in the inertial and also in the overdamped limit, exhibit very complex dynamics, including chaotic motion. This deterministically induced chaos mimics, to some extent, the role of noise, while changing some of the basic properties of thermal ratchets; for example, inertial ratchets can exhibit multiple reversals in the current direction. The direction depends on the amount of friction and inertia, which makes it especially interesting for technological applications such as biological particle separation. In this work we review different strategies to control the current of inertial ratchets. The control parameters analysed are the strength and frequency of the periodic external force, the strength of the quenched noise that models a non-perfectly-periodic potential, and the mass of the particles. Control mechanisms are associated with the fractal nature of the basins of attraction of the mean-velocity attractors. The control of the overdamped motion of noninteracting particles in a rocking periodic asymmetric potential is also reviewed. The analysis is focused on synchronization of the motion of the particles with the external sinusoidal driving force. Two cases are considered: a perfect lattice without disorder, and a lattice with noncorrelated quenched noise. The amplitude of the driving force and the strength of the quenched noise are used as control parameters.
Bottleneck Paths and Trees and Deterministic Graphical Games
Chechik, Shiri; Kaplan, Haim; Thorup, Mikkel; Zamir, Or; Zwick, Uri
2016-01-01
Gabow and Tarjan showed that the Bottleneck Path (BP) problem, i.e., finding a path between a given source and a given target in a weighted directed graph whose largest edge weight is minimized, as well as the Bottleneck spanning tree (BST) problem, i.e., finding a directed spanning tree rooted at a given vertex whose largest edge weight is minimized, can both be solved deterministically in O(m * log^*(n)) time, where m is the number of edges and n is the number of vertices in the graph. We p...
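The BP problem defined above can be illustrated with a simple Dijkstra-style variant that relaxes the maximum edge weight along a path instead of the path sum. This sketch runs in O(m log n) with a binary heap; it is for illustration only and is not the O(m log* n) algorithm of Gabow and Tarjan or the improvements the paper discusses.

```python
import heapq

def bottleneck_path(graph, source, target):
    """Smallest achievable value of the largest edge weight on a
    source-to-target path. graph: {u: [(v, w), ...]}, directed."""
    best = {source: 0}                 # best known bottleneck to each node
    heap = [(0, source)]
    while heap:
        b, u = heapq.heappop(heap)
        if u == target:
            return b
        if b > best.get(u, float("inf")):
            continue                   # stale heap entry
        for v, w in graph.get(u, []):
            nb = max(b, w)             # bottleneck via u and edge (u, v)
            if nb < best.get(v, float("inf")):
                best[v] = nb
                heapq.heappush(heap, (nb, v))
    return None                        # target unreachable

# toy graph (hypothetical weights)
g = {"s": [("a", 5), ("b", 2)], "a": [("t", 1)], "b": [("t", 7)]}
print(bottleneck_path(g, "s", "t"))  # 5: path s-a-t has max edge 5, beating s-b-t's 7
```

The only change from shortest-path Dijkstra is the relaxation `max(b, w)` in place of `b + w`; the same monotonicity argument justifies correctness.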
A Mathematical Programming Approach to a Deterministic Kanban System
Gabriel R. Bitran; Li Chang
1987-01-01
In this paper we present a mathematical programming model for the Kanban system in a deterministic multi-stage capacitated assembly-tree-structure production setting. We discuss solution procedures to the problem and address three special cases of practical interest.
Deterministic treatment of model error in geophysical data assimilation
Carrassi, Alberto
2015-01-01
This chapter describes a novel approach for the treatment of model error in geophysical data assimilation. In this method, model error is treated as a deterministic process fully correlated in time. This allows for the derivation of the evolution equations for the relevant moments of the model error statistics required in data assimilation procedures, along with an approximation suitable for application to large numerical models typical of environmental science. In this contribution we first derive the equations for the model error dynamics in the general case, and then for the particular situation of parametric error. We show how this deterministic description of the model error can be incorporated in sequential and variational data assimilation procedures. A numerical comparison with standard methods is given using low-order dynamical systems, prototypes of atmospheric circulation, and a realistic soil model. The deterministic approach proves to be very competitive with only minor additional computational c...
Minimal invasive surgery for unicameral bone cyst using demineralized bone matrix: a case series
Cho Hwan
2012-07-01
Full Text Available Abstract Background Various treatments for unicameral bone cyst have been proposed. Recent concern focuses on the effectiveness of closed methods. This study evaluated the effectiveness of demineralized bone matrix as a graft material after intramedullary decompression for the treatment of unicameral bone cysts. Methods Between October 2008 and June 2010, twenty-five patients with a unicameral bone cyst were treated with intramedullary decompression followed by grafting of demineralized bone matrix. There were 21 male and 4 female patients with a mean age of 11.1 years (range, 3–19 years). The proximal metaphysis of the humerus was affected in 12 patients, the proximal femur in five, the calcaneum in three, the distal femur in two, the tibia in two, and the radius in one. There were 17 active cysts and 8 latent cysts. Radiologic change was evaluated according to a modified Neer classification. Time to healing was defined as the period required to achieve cortical thickening on the anteroposterior and lateral plain radiographs, as well as consolidation of the cyst. The patients were followed up for a mean period of 23.9 months (range, 15–36 months). Results Nineteen of 25 cysts had completely consolidated after a single procedure. The mean time to healing was 6.6 months (range, 3–12 months). Four had incomplete healing radiographically but no clinical symptoms, with enough cortical thickness to prevent fracture. None of these four cysts needed a second intervention until the last follow-up. Two of 25 patients required a second intervention because of cyst recurrence; both showed radiographic healing of the cyst after a mean of 10 additional months of follow-up. Conclusions A minimally invasive technique including the injection of DBM could serve as an excellent treatment method for unicameral bone cysts.
Deterministic Circular Self Test Path
WEN Ke; HU Yu; LI Xiaowei
2007-01-01
Circular self test path (CSTP) is an attractive technique for testing digital integrated circuits (ICs) in the nanometer era, because it can easily provide at-speed test with small test data volume and short test application time. However, CSTP cannot reliably attain high fault coverage because of the difficulty of testing random-pattern-resistant faults. This paper presents a deterministic CSTP (DCSTP) structure that consists of a DCSTP chain and jumping logic, to attain high fault coverage with low area overhead. Experimental results on ISCAS'89 benchmarks show that 100% fault coverage can be obtained with low area overhead and CPU time, especially for large circuits.
A deterministic width function model
C. E. Puente
2003-01-01
Full Text Available Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features like their overall shape and texture and their observed power-law scaling on their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.
Objective: To evaluate the benefits, efficacy and safety of local cervical plexus block in the performance of carotid endarterectomy, in the absence of sophisticated cerebral perfusion monitoring. Place and Duration of Study: This study was carried out at Combined Military Hospital (CMH) Lahore, Pakistan from January 2012 to May 2013. Study Design: Quasi-experimental study. Patients and Methods: A total of 45 cases of ASA II and ASA III physical status were operated for carotid endarterectomy under local block of the cervical plexus. After thorough preanaesthetic assessment, the patients' physical conditions were optimized before surgery. Premedication was given with midazolam, and patients were sedated during operation with small doses of propofol. Local anaesthesia (LA) was completed by injecting bupivacaine in the cervical plexus C2, C3 and C4 areas. During operation, vital signs and adequacy of cerebral perfusion were monitored by keeping the patient awake and making clinical neurological observations. Verbal contact was maintained with the patient. Breathing patterns and motor power were assessed in the contralateral upper and lower limbs. Postoperatively patients were interviewed and analgesia during operation was assessed with a visual analogue scale. The surgeon's satisfaction regarding intraoperative analgesia was also noted. Patients who required added sedation or local anaesthetic agent were also noted. The average duration of surgery was two hours and the average stay of the patients in hospital was five days. Results: Out of 45 patients, 37 patients (82%) had smooth and comfortable anaesthesia and analgesia. In only 1 patient (2.2%) LA had to be converted into general anaesthesia (GA). In 3 cases (7%) LA was supplemented. One patient (2.2%) developed hoarseness and difficulty in breathing and 1 patient (2.2%) developed hemiparesis intra-operatively; while 1 patient (2.2%) developed hypotension in the immediate postoperative period. One patient (2.2%) developed haematoma at infiltration
Bertl, Kristina; Gotfredsen, Klaus; Jensen, Simon S;
2016-01-01
OBJECTIVES: To report two cases of adverse reaction after mucosal hyaluronan (HY) injection around implant-supported crowns, with the aim to augment the missing interdental papilla. MATERIAL AND METHODS: Two patients with single, non-neighbouring, implants in the anterior maxilla, who were treated within the frames of a randomized controlled clinical trial testing the effectiveness of HY gel injection to reconstruct missing papilla volume at single implants, presented an adverse reaction. Injection of HY was performed bilaterally using a 3-step technique: (i) creation of a reservoir in the mucosa directly above the mucogingival junction, (ii) injection into the attached gingiva/mucosa below the missing papilla, and (iii) injection 2-3 mm apically to the papilla tip. The whole injection session was repeated once after approximately 4 weeks. RESULTS: Both patients presented with swelling and extreme...
Central limit behavior of deterministic dynamical systems
Tirnakli, Ugur; Beck, Christian; Tsallis, Constantino
2007-04-01
We investigate the probability density of rescaled sums of iterates of deterministic dynamical systems, a problem relevant for many complex physical systems consisting of dependent random variables. A central limit theorem (CLT) is valid only if the dynamical system under consideration is sufficiently mixing. For the fully developed logistic map and a cubic map we analytically calculate the leading-order corrections to the CLT if only a finite number of iterates is added and rescaled, and find excellent agreement with numerical experiments. At the critical point of period doubling accumulation, a CLT is not valid anymore due to strong temporal correlations between the iterates. Nevertheless, we provide numerical evidence that in this case the probability density converges to a q-Gaussian, thus leading to a power-law generalization of the CLT. The above behavior is universal and independent of the order of the maximum of the map considered, i.e., relevant for large classes of critical dynamical systems.
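The setting of the paper can be reproduced numerically for the fully developed logistic map, here in the form x → 1 − 2x² on [−1, 1], whose invariant mean is 0. The sketch below merely forms the rescaled sums whose limiting distribution is at issue; the transient length, iterate count and sample size are assumptions for illustration, not the paper's values.

```python
import random

def rescaled_sums(n_iter, n_samples, seed=0):
    """Rescaled sums S = (1/sqrt(N)) * sum of N iterates of the fully
    developed logistic map x -> 1 - 2x^2, from random initial points."""
    rng = random.Random(seed)
    sums = []
    for _ in range(n_samples):
        x = rng.uniform(-1, 1)
        for _ in range(100):          # discard a transient so x sits
            x = 1 - 2 * x * x         # on the invariant measure
        s = 0.0
        for _ in range(n_iter):
            x = 1 - 2 * x * x
            s += x
        sums.append(s / n_iter ** 0.5)
    return sums

vals = rescaled_sums(n_iter=1000, n_samples=2000)
print(len(vals))  # 2000
```

A histogram of `vals` should look approximately Gaussian for this mixing map; the paper's point is precisely the finite-N corrections to that limit, and the breakdown into q-Gaussian behaviour at the period-doubling critical point.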
Ooi, Adrian; Ng, Jonathan; Chui, Christopher; Goh, Terence; Tan, Bien Keem
2016-01-01
Background. Injuries to the elbow have led to consequences varying from significant limitation in function to loss of the entire upper limb. Soft tissue reconstruction with durable and pliable coverage, balanced with the ability to mobilize the joint early to optimize rehabilitation outcomes, is paramount. Methods. Methods of flap reconstruction have evolved from local and pedicled flaps to perforator-based flaps and free tissue transfer. Here we performed a review of 20 patients who have undergone flap reconstruction of the elbow at our institution. Results. 20 consecutive patients were identified and included in this study. Flap types include local (n = 5), regional pedicled (n = 7), and free (n = 8) flaps. The average size of defect was 138 cm2 (range 36–420 cm2). There were no flap failures in our series, and, at follow-up, the average range of movement of elbow flexion was 100°. Discussion. While the pedicled latissimus dorsi flap is the workhorse for elbow soft tissue coverage, advancements in microvascular knowledge and surgery have brought about great benefit, with the use of perforator flaps and free tissue transfer for wound coverage. Conclusion. We present here our case series on elbow reconstruction and an abbreviated algorithm on flap choice, highlighting our decision making process in the selection of a safe flap for soft tissue elbow reconstruction. PMID:27313886
Piecewise deterministic Markov processes: an analytic approach
Alkurdi, Taleb Salameh Odeh
2013-01-01
The subject of this thesis, piecewise deterministic Markov processes, an analytic approach, is on the border between analysis and probability theory. Such processes can either be viewed as random perturbations of deterministic dynamical systems in an impulsive fashion, or as a particular kind of stochastic process in continuous time in which parts of the sample trajectories are deterministic. Accordingly, questions concerning these processes may be approached starting from either side. The a...
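The structure described, deterministic flow between random jump times, can be sketched with a toy simulation. The exponential-decay flow, unit jump size and Poisson jump rate below are hypothetical choices made for illustration; they are not a model from the thesis.

```python
import math
import random

def simulate_pdmp(rate, decay, x0, t_end, seed=0):
    """Sample path of a toy piecewise deterministic Markov process:
    between jumps the state follows dx/dt = -decay * x (deterministic),
    and at the events of a rate-`rate` Poisson process it jumps by +1."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    path = [(t, x)]
    while True:
        wait = rng.expovariate(rate)          # time until the next jump
        if t + wait > t_end:
            # flow deterministically to the end of the horizon, no jump
            path.append((t_end, x * math.exp(-decay * (t_end - t))))
            return path
        t += wait
        x = x * math.exp(-decay * wait) + 1.0  # deterministic flow, then jump
        path.append((t, x))

path = simulate_pdmp(rate=2.0, decay=1.0, x0=0.0, t_end=10.0)
print(path[-1][0])  # 10.0 (the path always ends exactly at t_end)
```

The only randomness is in the jump times; conditional on them, the trajectory is fully deterministic, which is exactly the class of process the thesis analyses.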
Minas D. Leventis
2016-01-01
Full Text Available Ridge preservation measures, which include the filling of extraction sockets with bone substitutes, have been shown to reduce ridge resorption, while methods that do not require primary soft tissue closure minimize patient morbidity and decrease surgical time and cost. In a case series of 10 patients requiring single extraction, in situ hardening beta-tricalcium phosphate (β-TCP) granules coated with poly(lactic-co-glycolic acid) (PLGA) were utilized as a grafting material that does not necessitate primary wound closure. After 4 months, clinical observations revealed excellent soft tissue healing without loss of attached gingiva in all cases. At reentry for implant placement, bone core biopsies were obtained and primary implant stability was measured by final seating torque and resonance frequency analysis. Histological and histomorphometrical analysis revealed pronounced bone regeneration (24.4 ± 7.9% new bone) in parallel to the resorption of the grafting material (12.9 ± 7.7% graft material) while high levels of primary implant stability were recorded. Within the limits of this case series, the results suggest that β-TCP coated with polylactide can support new bone formation at postextraction sockets, while the properties of the material improve the handling and produce a stable and porous bone substitute scaffold in situ, facilitating the application of noninvasive surgical techniques.
Leventis, Minas D.; Fairbairn, Peter; Kakar, Ashish; Leventis, Angelos D.; Margaritis, Vasileios; Lückerath, Walter; Horowitz, Robert A.; Rao, Bappanadu H.; Lindner, Annette; Nagursky, Heiner
2016-01-01
Ridge preservation measures, which include the filling of extraction sockets with bone substitutes, have been shown to reduce ridge resorption, while methods that do not require primary soft tissue closure minimize patient morbidity and decrease surgical time and cost. In a case series of 10 patients requiring single extraction, in situ hardening beta-tricalcium phosphate (β-TCP) granules coated with poly(lactic-co-glycolic acid) (PLGA) were utilized as a grafting material that does not necessitate primary wound closure. After 4 months, clinical observations revealed excellent soft tissue healing without loss of attached gingiva in all cases. At reentry for implant placement, bone core biopsies were obtained and primary implant stability was measured by final seating torque and resonance frequency analysis. Histological and histomorphometrical analysis revealed pronounced bone regeneration (24.4 ± 7.9% new bone) in parallel to the resorption of the grafting material (12.9 ± 7.7% graft material) while high levels of primary implant stability were recorded. Within the limits of this case series, the results suggest that β-TCP coated with polylactide can support new bone formation at postextraction sockets, while the properties of the material improve the handling and produce a stable and porous bone substitute scaffold in situ, facilitating the application of noninvasive surgical techniques. PMID:27190516
Weidenbach, C.
1994-01-01
Minimal resolution restricts the applicability of resolution and factorization to minimal literals. Minimality is an abstract criterion. It is shown that if the minimality criterion satisfies certain properties, minimal resolution is sound and complete. Hyper resolution, ordered resolution and lock resolution are known instances of minimal resolution. We also introduce new instances of the general completeness result, correct some mistakes in the existing literature and give some general redundanc...
Integrated Deterministic-Probabilistic Safety Assessment Methodologies
Kudinov, P.; Vorobyev, Y.; Sanchez-Perea, M.; Queral, C.; Jimenez Varas, G.; Rebollo, M. J.; Mena, L.; Gomez-Magin, J.
2014-02-01
IDPSA (Integrated Deterministic-Probabilistic Safety Assessment) is a family of methods which use tightly coupled probabilistic and deterministic approaches to address the respective sources of uncertainty, enabling risk-informed decision making in a consistent manner. The starting point of the IDPSA framework is that safety justification must be based on the coupling of deterministic (consequences) and probabilistic (frequency) considerations to address the mutual interactions between stochastic disturbances (e.g. failures of the equipment, human actions, stochastic physical phenomena) and the deterministic response of the plant (i.e. transients). This paper gives a general overview of some IDPSA methods as well as some possible applications to PWR safety analyses. (Author)
The Deterministic Dendritic Cell Algorithm
Greensmith, Julie
2010-01-01
The Dendritic Cell Algorithm is an immune-inspired algorithm originally based on the function of natural dendritic cells. The original instantiation of the algorithm is a highly stochastic algorithm. While the performance of the algorithm is good when applied to large real-time datasets, it is difficult to analyse due to the number of random-based elements. In this paper a deterministic version of the algorithm is proposed, implemented and tested using a port scan dataset to provide a controllable system. This version consists of a controllable number of parameters, which are experimented with in this paper. In addition, the effects of the use of time windows and of variation in the number of cells are examined, both of which are shown to influence the algorithm. Finally a novel metric for the assessment of the algorithm's output is introduced and proves to be a more sensitive metric than the metric used with the original Dendritic Cell Algorithm.
Survivability of Deterministic Dynamical Systems.
Hellmann, Frank; Schultz, Paul; Grabow, Carsten; Heitzig, Jobst; Kurths, Jürgen
2016-01-01
The notion of a part of phase space containing desired (or allowed) states of a dynamical system is important in a wide range of complex systems research. It has been called the safe operating space, the viability kernel or the sunny region. In this paper we define the notion of survivability: given a random initial condition, what is the likelihood that the transient behaviour of a deterministic system does not leave a region of desirable states? We demonstrate the utility of this novel stability measure by considering models from climate science, neuronal networks and power grids. We also show that a semi-analytic lower bound for the survivability of linear systems allows a numerically very efficient survivability analysis in realistic models of power grids. Our numerical and semi-analytic work underlines that the type of stability measured by survivability is not captured by common asymptotic stability measures. PMID:27405955
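The definition above suggests a direct Monte Carlo estimate: sample initial conditions at random, integrate the deterministic system, and count the trajectories whose transient never leaves the desirable region. A minimal sketch for a damped oscillator; the system, region and parameters are illustrative, not from the paper:

```python
import random

def survives(x0, v0, steps=2000, dt=0.01, bound=3.0):
    """Integrate x'' = -x - 0.5 x' with explicit Euler and report whether
    the transient stays inside the desirable region |x| <= bound."""
    x, v = x0, v0
    for _ in range(steps):
        x, v = x + dt * v, v + dt * (-x - 0.5 * v)
        if abs(x) > bound:
            return False
    return True

def survivability(n=1000, seed=42):
    """Monte Carlo estimate: fraction of random initial conditions whose
    transient never leaves the desirable region."""
    rng = random.Random(seed)
    hits = sum(survives(rng.uniform(-4, 4), rng.uniform(-4, 4)) for _ in range(n))
    return hits / n
```

Initial conditions well inside the region survive (damping shrinks the transient), while those starting outside do not, so the estimate lands strictly between 0 and 1 here.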
Zhang, Z. J.; Man, Z. X.
2004-01-01
Several theoretical Deterministic Secure Direct Bidirectional Communication protocols are generalized to improve their capacities by introducing superdense coding in the case of high-dimension quantum states.
Savaş Karyağar
2013-04-01
Full Text Available Objective: In this study, our aim was to study the efficiency of gamma probe guided minimally invasive parathyroidectomy (GP-MIP), conducted without intra-operative quick parathyroid hormone (QPTH) measurement, in cases of solitary parathyroid adenomas (SPA) detected with USG and dual phase 99mTc-MIBI parathyroid scintigraphy (PS) in the preoperative period. Material and Methods: This clinical study was performed in 31 SPA patients (27 female, 4 male; mean age 51±11 years) between February 2006 and January 2009. All patients were operated on within 30 days after detection of the SPA with dual phase 99mTc-MIBI PS and USG. The GP-MIP was done 90-120 min after the iv injection of 740 MBq 99mTc-MIBI. GP-MIP was performed under local anesthesia in all cases except one patient, in whom general anesthesia was chosen due to the large size of the SPA. Results: The operation time was 30-60 min (mean 38.2±7 min). On the first postoperative day, there was a more than 50% decrease in PTH levels in all patients, and all but one had normal serum calcium levels. Transient hypocalcemia was detected in one patient. Conclusion: GP-MIP without intra-operative QPTH measurement is a suitable method in the surgical treatment of SPA detected by dual phase 99mTc-MIBI PS and USG.
Piecewise deterministic Markov processes : an analytic approach
Alkurdi, Taleb Salameh Odeh
2013-01-01
The subject of this thesis, piecewise deterministic Markov processes, an analytic approach, is on the border between analysis and probability theory. Such processes can either be viewed as random perturbations of deterministic dynamical systems in an impulsive fashion, or as a particular kind of sto
Control rod worth calculations using deterministic and stochastic methods
Varvayanni, M. [NCSR ' DEMOKRITOS' , PO Box 60228, 15310 Aghia Paraskevi (Greece); Savva, P., E-mail: melina@ipta.demokritos.g [NCSR ' DEMOKRITOS' , PO Box 60228, 15310 Aghia Paraskevi (Greece); Catsaros, N. [NCSR ' DEMOKRITOS' , PO Box 60228, 15310 Aghia Paraskevi (Greece)
2009-11-15
Knowledge of the efficiency of a control rod to absorb excess reactivity in a nuclear reactor, i.e. knowledge of its reactivity worth, is very important from many points of view. These include the analysis and the assessment of the shutdown margin of new core configurations (upgrade, conversion, refuelling, etc.) as well as several operational needs, such as calibration of the control rods, e.g. when reactivity insertion experiments are planned. The control rod worth can be assessed either experimentally or theoretically, mainly through the utilization of neutronic codes. In the present work two different theoretical approaches, i.e. a deterministic and a stochastic one, are used for the estimation of the integral and the differential worth of two control rods utilized in the Greek Research Reactor (GRR-1). For the deterministic approach the neutronics code system SCALE (modules NITAWL/XSDRNPM) and CITATION is used, while the stochastic approach uses the Monte Carlo code TRIPOLI. Both approaches follow the procedure of reactivity insertion steps and their results are tested against measurements conducted in the reactor. The goal of this work is to examine the capability of a deterministic code system to reliably simulate the worth of a control rod, based also on comparisons with the detailed Monte Carlo simulation, while various options are tested with respect to the reliability of the deterministic results.
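For readers unfamiliar with the quantity being computed: at each insertion step the rod worth follows from the effective multiplication factor via the standard static relation ρ = (k − 1)/k. A sketch with hypothetical k-eff values, not GRR-1 data:

```python
def reactivity(keff):
    """Static reactivity in pcm from an effective multiplication factor,
    using the standard relation rho = (k - 1) / k."""
    return (keff - 1.0) / keff * 1e5  # pcm

def integral_worth(keffs):
    """Integral rod worth between fully withdrawn (first entry) and each
    insertion step, as the reactivity difference in pcm."""
    rho0 = reactivity(keffs[0])
    return [rho0 - reactivity(k) for k in keffs]

# Hypothetical k-eff values for successive insertion steps (illustration only)
keff_steps = [1.0050, 1.0030, 1.0000, 0.9965]
worth = integral_worth(keff_steps)
```

The integral worth curve is monotonically increasing with insertion depth, as the decreasing k-eff values imply.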
Deterministic transformations of bipartite pure states
Torun, Gokhan, E-mail: torung@itu.edu.tr; Yildiz, Ali, E-mail: yildizali2@itu.edu.tr
2015-01-23
Highlights: • A new method for deterministic bipartite entanglement transformation is presented. • Solution in lower dimensions is used to obtain transformation in higher dimensions. • Transformation of states in 3×3 dimensions by a single measurement is presented. • Transformation of states in n×n dimensions by three-outcome measurements is presented. - Abstract: We propose an explicit protocol for the deterministic transformations of bipartite pure states in any dimension using deterministic transformations in lower dimensions. As an example, explicit solutions for the deterministic transformations of 3⊗3 pure states by a single measurement are obtained, and an explicit protocol for the deterministic transformations of n⊗n pure states by three-outcome measurements is presented.
Constructing stochastic models from deterministic process equations by propensity adjustment
Wu Jialiang
2011-11-01
Full Text Available Abstract Background Gillespie's stochastic simulation algorithm (SSA) for chemical reactions admits three kinds of elementary processes, namely, mass action reactions of 0th, 1st or 2nd order. All other types of reaction processes, for instance those containing non-integer kinetic orders or following other types of kinetic laws, are assumed to be convertible to one of the three elementary kinds, so that SSA can validly be applied. However, the conversion to elementary reactions is often difficult, if not impossible. Within deterministic contexts, a strategy of model reduction is often used. Such a reduction simplifies the actual system of reactions by merging or approximating intermediate steps and omitting reactants such as transient complexes. It would be valuable to adopt a similar reduction strategy in stochastic modelling. Indeed, efforts have been devoted to manipulating the chemical master equation (CME) in order to achieve a proper propensity function for a reduced stochastic system. However, manipulations of the CME are almost always complicated, and successes have been limited to relatively simple cases. Results We propose a rather general strategy for converting a deterministic process model into a corresponding stochastic model and characterize the mathematical connections between the two. The deterministic framework is assumed to be a generalized mass action system and the stochastic analogue is in the format of the chemical master equation. The analysis identifies situations: where a direct conversion is valid; where internal noise affecting the system needs to be taken into account; and where the propensity function must be mathematically adjusted. The conversion from deterministic to stochastic models is illustrated with several representative examples, including reversible reactions with feedback controls, Michaelis-Menten enzyme kinetics, a genetic regulatory motif, and stochastic focusing. Conclusions The construction of a stochastic
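As background, Gillespie's SSA for a single elementary first-order reaction can be sketched in a few lines; the rate constant and time horizon below are illustrative:

```python
import math, random

def ssa_decay(x0=100, k=1.0, t_end=5.0, seed=1):
    """Gillespie's SSA for the elementary first-order decay X -> 0.

    The single reaction has propensity a(x) = k * x; waiting times between
    firings are exponential with rate a(x)."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    while x > 0:
        a = k * x
        tau = -math.log(1.0 - rng.random()) / a  # exponential waiting time
        if t + tau > t_end:
            break
        t, x = t + tau, x - 1                    # fire the reaction
    return x
```

For non-elementary kinetics (the subject of the abstract), the propensity `a(x)` is exactly the quantity that may need adjustment before such a simulation is valid.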
Bajc, Iztok; Hecht, Frédéric; Žumer, Slobodan
2016-09-01
This paper presents a 3D mesh adaptivity strategy on unstructured tetrahedral meshes by a posteriori error estimates based on metrics derived from the Hessian of a solution. The study is made on the case of a nonlinear finite element minimization scheme for the Landau-de Gennes free energy functional of nematic liquid crystals. Newton's iteration for tensor fields is employed, with the steepest descent method possibly stepping in. Aspects related to driving mesh adaptivity within the nonlinear scheme are considered. The algorithmic performance is found to depend on at least two factors: when to trigger each single mesh adaptation, and the precision of the correlated remeshing. Each factor is represented by a parameter, with its values possibly varying for every new mesh adaptation. We empirically show that the time of the overall algorithm convergence can vary considerably when different sequences of parameters are used, thus posing a question about optimality. The extensive testing and debugging done within this work on the simulation of systems of nematic colloids substantially contributed to the upgrade of the 3D meshing capabilities of an open source finite element-oriented programming language, as well as of an outer 3D remeshing module.
Dierkes, Ulrich; Sauvigny, Friedrich; Jakob, Ruben; Kuster, Albrecht
2010-01-01
Minimal Surfaces is the first volume of a three volume treatise on minimal surfaces (Grundlehren Nr. 339-341). Each volume can be read and studied independently of the others. The central theme is boundary value problems for minimal surfaces. The treatise is a substantially revised and extended version of the monograph Minimal Surfaces I, II (Grundlehren Nr. 295 & 296). The first volume begins with an exposition of basic ideas of the theory of surfaces in three-dimensional Euclidean space, followed by an introduction of minimal surfaces as stationary points of area, or equivalently
Linear systems control deterministic and stochastic methods
Hendricks, Elbert; Sørensen, Paul Haase
2008-01-01
Linear Systems Control provides a very readable graduate text giving a good foundation for reading more rigorous texts. There are multiple examples, problems and solutions. This unique book successfully combines stochastic and deterministic methods.
Cell sorting by deterministic cell rolling
Choi, Sungyoung; Karp, Jeffrey M.; Karnik, Rohit
2011-01-01
This communication presents the concept of “deterministic cell rolling”, which leverages transient cell-surface molecular interactions that mediate cell rolling to sort cells with high purity and efficiency in a single step.
Regret Bounds for Deterministic Gaussian Process Bandits
De Freitas, Nando; Smola, Alex; Zoghi, Masrour
2012-01-01
This paper analyses the problem of Gaussian process (GP) bandits with deterministic observations. The analysis uses a branch and bound algorithm that is related to the UCB algorithm of (Srinivas et al., 2010). For GPs with Gaussian observation noise, with variance strictly greater than zero, (Srinivas et al., 2010) proved that the regret vanishes at the approximate rate of $O(\\frac{1}{\\sqrt{t}})$, where t is the number of observations. To complement their result, we attack the deterministic c...
A Deterministic and Polynomial Modified Perceptron Algorithm
Olof Barr
2006-01-01
Full Text Available We construct a modified perceptron algorithm that is deterministic, polynomial and also as fast as previously known algorithms. The algorithm runs in time O(mn³ log n log(1/ρ)), where m is the number of examples, n the number of dimensions and ρ is approximately the size of the margin. We also construct a non-deterministic modified perceptron algorithm running in time O(mn² log n log(1/ρ)).
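The paper's modifications are not given in the abstract, but the baseline being modified is the classical perceptron, which is deterministic when the examples are swept in a fixed order; a minimal sketch:

```python
def perceptron(samples, labels, max_epochs=100):
    """Classical deterministic perceptron with bias: sweep the examples in
    a fixed order and apply the mistake-driven update until a separating
    hyperplane is found or the epoch budget runs out."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for x, y in zip(samples, labels):  # fixed sweep order: deterministic
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                mistakes += 1
        if mistakes == 0:                  # converged: all examples correct
            return w, b
    return w, b
```

On linearly separable data (here the Boolean AND pattern) the convergence theorem bounds the number of mistakes, so the sweep terminates well within the epoch budget.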
Deterministic algorithm with agglomerative heuristic for location problems
Kazakovtsev, L.; Stupina, A.
2015-10-01
The authors consider the clustering problem solved with the k-means method and the p-median problem with various distance metrics. The p-median problem, and the k-means problem as its special case, are the most popular models of location theory. They are implemented for solving clustering problems and many practically important logistic problems such as optimal factory or warehouse location, oil or gas wells, optimal drilling for oil offshore, and steam generators in heavy oil fields. The authors propose a new deterministic heuristic algorithm based on ideas of the Information Bottleneck Clustering and genetic algorithms with a greedy heuristic. In this paper, results of running the new algorithm on various data sets are given in comparison with known deterministic and stochastic methods. The new algorithm is shown to be significantly faster than the Information Bottleneck Clustering method while achieving comparable precision.
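For reference, the k-means method named above is Lloyd's alternation of assignment and centre-update steps. A minimal deterministic sketch; seeding from the first k points is one simple deterministic choice, not the authors' agglomerative heuristic:

```python
def kmeans(points, k, iters=100):
    """Plain k-means (Lloyd's algorithm) on tuples: assign each point to
    its nearest centre, move each centre to the mean of its cluster, and
    repeat until the assignment stops changing."""
    centres = points[:k]   # deterministic seeding: the first k points
    assign = None
    for _ in range(iters):
        new_assign = [min(range(k),
                          key=lambda j: sum((p - c) ** 2
                                            for p, c in zip(pt, centres[j])))
                      for pt in points]
        if new_assign == assign:   # fixed point reached
            break
        assign = new_assign
        for j in range(k):
            members = [pt for pt, a in zip(points, assign) if a == j]
            if members:
                centres[j] = tuple(sum(col) / len(members)
                                   for col in zip(*members))
    return centres, assign
```

On two well-separated blobs the assignment stabilizes after a couple of sweeps.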
Comparison of Deterministic and Stochastic Models of the lac Operon Genetic Network
Stamatakis, M.; Mantzaris, N. V.
2009-01-01
The lac operon has been a paradigm for genetic regulation with positive feedback, and several modeling studies have described its dynamics at various levels of detail. However, it has not yet been analyzed how stochasticity can enrich the system's behavior, creating effects that are not observed in the deterministic case. To address this problem we use a comparative approach. We develop a reaction network for the dynamics of the lac operon genetic switch and derive corresponding deterministic...
Linear Finite-Field Deterministic Networks With Many Sources and One Destination
Butt, M. Majid; Caire, Giuseppe; Müller, Ralf R.
2010-01-01
We find the capacity region of linear finite-field deterministic networks with many sources and one destination. Nodes in the network are subject to interference and broadcast constraints, specified by the linear finite-field deterministic model. Each node can inject its own information as well as relay other nodes' information. We show that the capacity region coincides with the cut-set region. Also, for a specific case of correlated sources we provide necessary and sufficient conditions for...
Risk-based and deterministic regulation
Both risk-based and deterministic methods are used for regulating the nuclear industry to protect the public safety and health from undue risk. The deterministic method is one where performance standards are specified for each kind of nuclear system or facility. The deterministic performance standards address normal operations and design basis events, which include transient and accident conditions. The risk-based method uses probabilistic risk assessment methods to supplement the deterministic one by (1) addressing all possible events (including those beyond the design basis events), (2) using a systematic, logical process for identifying and evaluating accidents, and (3) considering alternative means to reduce accident frequency and/or consequences. Although both deterministic and risk-based methods have been successfully applied, there is a need for a better understanding of their applications and supportive roles. This paper describes the relationship between the two methods and how they are used to develop and assess regulations in the nuclear industry. Preliminary guidance is suggested for determining the need for using risk-based methods to supplement deterministic ones. However, it is recommended that more detailed guidance and criteria be developed for this purpose.
Linear embedding of free energy minimization
Moussa, Jonathan E.
2016-01-01
Exact free energy minimization is a convex optimization problem that is usually approximated with stochastic sampling methods. Deterministic approximations have been less successful because many desirable properties have been difficult to attain. Such properties include the preservation of convexity, lower bounds on free energy, and applicability to systems without subsystem structure. We satisfy all of these properties by embedding free energy minimization into a linear program over energy-r...
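For the discrete case the convexity claim is easy to check: F(p) = Σᵢ pᵢEᵢ + T Σᵢ pᵢ ln pᵢ is convex in p and is minimized in closed form by the Boltzmann distribution. A small numerical check; the energy levels are illustrative:

```python
import math

def free_energy(p, E, T=1.0):
    """F(p) = sum_i p_i E_i + T sum_i p_i ln p_i (energy minus T*entropy)."""
    return (sum(pi * Ei for pi, Ei in zip(p, E))
            + T * sum(pi * math.log(pi) for pi in p if pi > 0))

def boltzmann(E, T=1.0):
    """Closed-form minimizer of the free energy: p_i proportional to exp(-E_i/T)."""
    w = [math.exp(-Ei / T) for Ei in E]
    Z = sum(w)
    return [wi / Z for wi in w]

E = [0.0, 1.0, 2.0]        # illustrative energy levels
p_star = boltzmann(E)      # the minimizing distribution
```

Any feasible perturbation of `p_star` that keeps the probabilities normalized strictly increases F, which is the convexity the abstract's linear-programming embedding exploits.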
Deterministic finishing of aspheric optical components
Lambropoulos, Teddy; Fess, Ed; DeFisher, Scott
2013-09-01
Manufacturing aspheric optics can present challenges depending on the complexity of their shape. This is especially true during the finishing stage. To tackle this challenge, OptiPro Systems has developed two technologies for deterministic optical polishing: UltraForm Finishing (UFF) and UltraSmooth Finishing (USF). UFF is a deterministic sub-aperture polishing process that polishes spherical, aspheric, and free form surface geometries. In contrast, the USF process is a deterministic mid to large size aperture polishing process that works with a conforming lap. These two technologies have the ability to tackle a wide range of optical shapes by removing sub-surface damage, removing various mid-spatial frequency artifacts that might be left from a grinding process, and correcting the optic's figure error in a controlled fashion. This presentation will describe these technologies, present performance information as to their capabilities, and show how OptiPro is developing these technologies to push the state of the art in manufacturing.
Effect of Uncertainty on Deterministic Runway Scheduling
Gupta, Gautam; Malik, Waqar; Jung, Yoon C.
2012-01-01
Active runway scheduling involves scheduling departures for takeoffs and arrivals for runway crossing subject to numerous constraints. This paper evaluates the effect of uncertainty on a deterministic runway scheduler. The evaluation is done against a first-come-first-serve (FCFS) scheme. In particular, the sequence from a deterministic scheduler is frozen and the times adjusted to satisfy all separation criteria; this approach is tested against FCFS. The comparison is done for both system performance (throughput and system delay) and predictability, and varying levels of congestion are considered. The modeling of uncertainty is done in two ways: as equal uncertainty in runway availability for all aircraft, and as increasing uncertainty for later aircraft. Results indicate that the deterministic approach consistently performs better than first-come-first-serve in both system performance and predictability.
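The first-come-first-serve baseline used for comparison can be sketched directly: serve operations in ready-time order, delaying each just enough to honour the minimum separation behind its predecessor. The separation value below is illustrative:

```python
def fcfs_schedule(ready_times, sep=90.0):
    """First-come-first-serve runway schedule: keep the ready-time order
    and push each operation late enough to honour the minimum separation
    (seconds) behind its predecessor. Returns scheduled times, one per
    input operation."""
    order = sorted(range(len(ready_times)), key=lambda i: ready_times[i])
    times, prev = {}, None
    for i in order:
        t = ready_times[i] if prev is None else max(ready_times[i],
                                                    times[prev] + sep)
        times[i] = t
        prev = i
    return [times[i] for i in range(len(ready_times))]
```

A deterministic scheduler instead optimizes the sequence itself; the paper's approach freezes that optimized sequence and only re-times it, which this helper would support by passing the sequence in place of the ready-time sort.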
Exploiting Deterministic TPG for Path Delay Testing
李晓维
2000-01-01
Detection of path delay faults requires two-pattern tests. BIST technique provides a low-cost test solution. This paper proposes an approach to designing a cost-effective deterministic test pattern generator (TPG) for path delay testing. Given a set of pre-generated test-pairs with pre-determined fault coverage, a deterministic TPG is synthesized to apply the given test-pair set in a limited test time. To achieve this objective, configurable linear feedback shift register (LFSR) structures are used. Techniques are developed to synthesize such a TPG, which is used to generate an unordered deterministic test-pair set. The resulting TPG is very efficient in terms of hardware size and speed performance. Simulation of academic benchmark circuits has given good results when compared to alternative solutions.
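As background, the LFSR at the heart of such a TPG steps a shift register whose feedback bit is the XOR of tapped positions, and the successive register states serve as patterns. A minimal Fibonacci-style sketch; the tap choice is ours, and for width 4 it happens to cycle through all 15 nonzero states:

```python
def lfsr_patterns(seed, taps, width, count):
    """Fibonacci LFSR test-pattern generator: the feedback bit is the XOR
    of the tapped bit positions; each register state is emitted as one
    pattern."""
    state, out = seed, []
    mask = (1 << width) - 1
    for _ in range(count):
        out.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1          # XOR of tapped positions
        state = ((state << 1) | fb) & mask  # shift left, inject feedback
    return out
```

A reconfigurable TPG of the kind the abstract describes would additionally switch the tap configuration to steer the sequence through a pre-generated test-pair set.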
Nine challenges for deterministic epidemic models
Mick Roberts
2015-03-01
Full Text Available Deterministic models have a long history of being applied to the study of infectious disease epidemiology. We highlight and discuss nine challenges in this area. The first two concern the endemic equilibrium and its stability. We indicate the need for models that describe multi-strain infections, infections with time-varying infectivity, and those where superinfection is possible. We then consider the need for advances in spatial epidemic models, and draw attention to the lack of models that explore the relationship between communicable and non-communicable diseases. The final two challenges concern the uses and limitations of deterministic models as approximations to stochastic systems.
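The classic deterministic starting point for such models is the SIR system dS/dt = −βSI, dI/dt = βSI − γI, dR/dt = γI, whose endemic and final-size behaviour several of the listed challenges build on. A minimal sketch with illustrative parameters:

```python
def sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=400, dt=0.1):
    """Deterministic SIR model integrated with explicit Euler:
    dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I."""
    s, i, r = s0, i0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # S -> I flow this step
        new_rec = gamma * i * dt      # I -> R flow this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return s, i, r
```

With β/γ = 3 the epidemic burns out well before day 400, leaving most of the population in the recovered compartment while S + I + R stays conserved.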
A new deterministic model for chaotic reversals
Gissinger, Christophe
2011-01-01
In this article, we present a new chaotic system of three coupled ordinary differential equations, limited to quadratic terms. A wide variety of dynamical regimes are reported. For some parameters, chaotic reversals of the amplitudes are produced by crisis-induced intermittency, following a mechanism different from what is generally observed in similar deterministic models. Despite its simplicity, this system therefore generates a rich dynamics, able to model more complex physical systems. In particular, a comparison with reversals of the magnetic field of the Earth shows a surprisingly good agreement, and highlights the relevance of deterministic chaos to describe geomagnetic field dynamics.
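The abstract does not reproduce the equations; the sketch below integrates a system of the kind described, three coupled ODEs limited to quadratic terms. The exact form and the coefficients are our illustration, not necessarily the article's model:

```python
def integrate_reversal_model(mu=0.119, nu=0.1, gamma=0.9,
                             steps=4000, dt=0.005):
    """Explicit Euler integration of three quadratically coupled ODEs of
    the kind studied for chaotic reversals (illustrative coefficients):
        dQ/dt = mu*Q - V*D
        dD/dt = -nu*D + V*Q
        dV/dt = gamma - V + Q*D
    Returns the trajectory of D, the dipole-like variable whose sign
    changes play the role of reversals."""
    q, d, v = 1.0, 1.0, 0.0
    traj = []
    for _ in range(steps):
        q, d, v = (q + dt * (mu * q - v * d),
                   d + dt * (-nu * d + v * q),
                   v + dt * (gamma - v + q * d))
        traj.append(d)
    return traj
```

Note the quadratic couplings conserve a simple balance: d(Q² + D²)/dt = 2μQ² − 2νD², independent of V, which keeps short transients bounded.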
Introducing Synchronisation in Deterministic Network Models
Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.;
2006-01-01
The paper addresses performance analysis for distributed real time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented, leading to the suggestion of suitable network models. An existing model for flow control is presented and an inherent weakness is revealed and remedied. Examples are given and numerically analysed through deterministic network modelling. Results are presented to highlight the properties of the suggested models...
Stochastic versus deterministic systems of differential equations
Ladde, G S
2003-01-01
This peerless reference/text unfurls a unified and systematic study of the two types of mathematical models of dynamic processes-stochastic and deterministic-as placed in the context of systems of stochastic differential equations. Using the tools of variational comparison, generalized variation of constants, and probability distribution as its methodological backbone, Stochastic Versus Deterministic Systems of Differential Equations addresses questions relating to the need for a stochastic mathematical model and the between-model contrast that arises in the absence of random disturbances/flu
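The contrast between the two model types can be made concrete on the pair dx/dt = ax versus dX = aX dt + σX dW: the explicit Euler and Euler-Maruyama schemes differ only by the noise term, and setting σ = 0 recovers the deterministic path. Parameters are illustrative:

```python
import math, random

def euler(x0=1.0, a=-1.0, t_end=1.0, n=1000):
    """Deterministic model dx/dt = a*x via explicit Euler."""
    x, dt = x0, t_end / n
    for _ in range(n):
        x += a * x * dt
    return x

def euler_maruyama(x0=1.0, a=-1.0, sigma=0.2, t_end=1.0, n=1000, seed=7):
    """Stochastic counterpart dX = a*X dt + sigma*X dW via Euler-Maruyama;
    each step adds a Gaussian increment of variance dt scaled by sigma*X."""
    rng = random.Random(seed)
    x, dt = x0, t_end / n
    for _ in range(n):
        x += a * x * dt + sigma * x * rng.gauss(0.0, math.sqrt(dt))
    return x
```

The deterministic run converges to the exact solution e^{at}, while individual stochastic paths scatter around it, which is exactly the between-model contrast the book studies.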
Deterministic doping and the exploration of spin qubits
Schenkel, T.; Weis, C. D.; Persaud, A. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Lo, C. C. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA 94720 (United States); London Centre for Nanotechnology (United Kingdom); Chakarov, I. [Global Foundries, Malta, NY 12020 (United States); Schneider, D. H. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Bokor, J. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA 94720 (United States)
2015-01-09
Deterministic doping by single ion implantation, the precise placement of individual dopant atoms into devices, is a path for the realization of quantum computer test structures where quantum bits (qubits) are based on electron and nuclear spins of donors or color centers. We present a donor - quantum dot type qubit architecture and discuss the use of medium and highly charged ions extracted from an Electron Beam Ion Trap/Source (EBIT/S) for deterministic doping. EBIT/S are attractive for the formation of qubit test structures due to the relatively low emittance of ion beams from an EBIT/S and due to the potential energy associated with the ions' charge state, which can aid single ion impact detection. Following ion implantation, dopant specific diffusion mechanisms during device processing affect the placement accuracy and coherence properties of donor spin qubits. For bismuth, range straggling is minimal but its relatively low solubility in silicon limits thermal budgets for the formation of qubit test structures.
Shock-induced explosive chemistry in a deterministic sample configuration.
Stuecker, John Nicholas; Castaneda, Jaime N.; Cesarano, Joseph, III (,; ); Trott, Wayne Merle; Baer, Melvin R.; Tappan, Alexander Smith
2005-10-01
Explosive initiation and energy release have been studied in two sample geometries designed to minimize stochastic behavior in shock-loading experiments. These sample concepts include a design with explosive material occupying the hole locations of a close-packed bed of inert spheres and a design that utilizes infiltration of a liquid explosive into a well-defined inert matrix. Wave profiles transmitted by these samples in gas-gun impact experiments have been characterized by both velocity interferometry diagnostics and three-dimensional numerical simulations. Highly organized wave structures associated with the characteristic length scales of the deterministic samples have been observed. Initiation and reaction growth in an inert matrix filled with sensitized nitromethane (a homogeneous explosive material) result in wave profiles similar to those observed with heterogeneous explosives. Comparison of experimental and numerical results indicates that energetic material studies in deterministic sample geometries can provide an important new tool for validation of models of energy release in numerical simulations of explosive initiation and performance.
Hwangbo, Soonho; Lee, In-Beum [POSTECH, Pohang (Korea, Republic of); Han, Jeehoon [University of Wisconsin-Madison, Madison (United States)
2014-10-15
Many networks are constructed in a large-scale industrial complex. Each network meets its demands through production or transportation of the materials needed by the companies in the network. A network either directly produces materials to satisfy demand or purchases them from outside, due to demand uncertainty, financial factors, and so on. Utility networks and hydrogen networks in particular are typical and major networks in a large-scale industrial complex. Many studies have focused mainly on minimizing the total cost or optimizing the network structure, but little research has tried to build an integrated network model by connecting the utility network and the hydrogen network. In this study, a deterministic mixed integer linear programming model is developed for integrating the utility network and the hydrogen network. A Steam Methane Reforming process is necessary for combining the two networks: hydrogen produced by Steam Methane Reforming, whose raw material is steam vented from the utility network, enters the hydrogen network and fulfills its needs. The proposed model can suggest an optimized configuration of the integrated network, an optimized blueprint, and the optimal total cost. The capability of the proposed model is tested by applying it to the Yeosu industrial complex in Korea, which contains one of the biggest petrochemical complexes and on whose data various papers are based. In the case study, the integrated network model yields more optimal conclusions than previous results obtained by studying the utility network and the hydrogen network individually.
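The flavour of such a deterministic cost-minimizing choice can be shown on a toy produce-versus-purchase problem; brute force over the binary build decisions stands in for the MILP solver, and all numbers are invented:

```python
from itertools import product

def cheapest_plan(demand, build_cost, unit_cost, purchase_cost, capacity):
    """Toy deterministic network-design decision: for each site, either
    build capacity and produce (fixed cost + per-unit cost, subject to a
    capacity limit) or purchase from outside, minimizing total cost.

    Brute force over the binary build vector; a MILP solver would handle
    the real, larger problem the same way in principle."""
    best = (float('inf'), None)
    n = len(demand)
    for build in product([0, 1], repeat=n):
        cost, ok = 0.0, True
        for i, b in enumerate(build):
            if b:
                if demand[i] > capacity[i]:
                    ok = False        # infeasible: demand exceeds capacity
                    break
                cost += build_cost[i] + unit_cost[i] * demand[i]
            else:
                cost += purchase_cost[i] * demand[i]
        if ok and cost < best[0]:
            best = (cost, build)
    return best
```

In the example below, building is worthwhile only at the site whose fixed cost is low relative to the purchase premium.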
Mohammadi Mohammad; Hossaini Mohammad Farouq; Mirzapour Bahman; Hajiantilaki Nabiollah
2015-01-01
In order to increase the safety of the working environment and decrease the unwanted costs related to overbreak in tunnel excavation projects, it is necessary to minimize the overbreak percentage. Thus, based on regression analysis and a fuzzy inference system, this paper develops predictive models to estimate overbreak caused by blasting at the Alborz Tunnel. To develop the models, 202 datasets were utilized, out of which 182 were used for constructing the models. To validate and compare the obtained results, the determination coefficient (R2) and root mean square error (RMSE) indexes were chosen. For the fuzzy model, R2 and RMSE are equal to 0.96 and 0.55 respectively, whereas for the regression model they are 0.41 and 1.75 respectively, showing that the fuzzy predictor performs significantly better than the statistical method. Using the developed fuzzy model, the percentage of overbreak was minimized in the Alborz Tunnel.
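The two validation indexes used above are standard and easy to state precisely; the data in the check below are invented for illustration:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error."""
    return math.sqrt(sum((t - p) ** 2
                         for t, p in zip(y_true, y_pred)) / len(y_true))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

A perfect predictor gives R2 = 1 and RMSE = 0; the gap between the fuzzy and regression scores quoted above is measured on exactly these scales.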
Nielsen, Mogens; Rozenberg, Grzegorz; Salomaa, Arto
1974-01-01
The use of nonterminals versus the use of homomorphisms of different kinds in the basic types of deterministic OL-systems is studied. A rather surprising result is that in some cases the use of nonterminals produces a comparatively low generative capacity, whereas in some other cases the use of n...
Chaotic behaviour of deterministic systems
In these Proceedings many dissipative as well as conservative systems are discussed. In some cases one would like to understand chaotic behavior in order to avoid it, e.g. for certain applications of mechanical and electrical engineering, celestial mechanics (satellite orbits), population dynamics, the storage rings of high energy physics, hydrodynamics, plasma physics (fusion), biophysics, etc. In other cases one would like to obtain chaotic behavior, e.g. for certain applications of classical (and quantum) statistical mechanics, hydrodynamics (turbulence), chemical kinetics, etc. Many diverse and notorious problems of nonlinear dynamics exist in one or another of those fields. The basic mathematical tools used in the study of chaotic behavior are introduced in the opening lecture. Chaotic behavior in conservative systems is discussed. For the simplest class of dissipative systems, 'Mappings of the Interval', a well developed theory is treated. More complicated dissipative systems with many degrees of freedom can often be reduced thanks to bifurcation theory. The mathematical basis for chaotic behavior in difference and differential equations is treated in detail. Finally, the lectures on full-fledged real turbulence show how many outstanding problems still remain to be explained from first principles. Nevertheless it is exciting to detect progress on these old problems of chaotic behavior and see some agreement with experiment. (Auth.)
Topologically Ordered Graph Clustering via Deterministic Annealing
Rossi, Fabrice; Villa-Vialaneix, Nathalie
2009-01-01
This paper proposes an organized generalization of Newman and Girvan's modularity measure for graph clustering. Optimized via a deterministic annealing scheme, this measure produces topologically ordered graph partitions that lead to faithful and readable graph representations on a two-dimensional SOM-like planar grid.
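The modularity measure being generalized can be sketched directly from its definition, Q = (1/2m) Σ_ij [A_ij - k_i k_j/(2m)] δ(c_i, c_j); the annealing scheme itself is not reproduced here. A minimal implementation on a toy graph:

```python
# Newman-Girvan modularity Q of a partition of an undirected graph:
# Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)
def modularity(edges, community):
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    nodes = list(degree)
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    q = 0.0
    for i in nodes:
        for j in nodes:
            if community[i] == community[j]:
                a_ij = 1.0 if (i, j) in adj else 0.0
                q += a_ij - degree[i] * degree[j] / (2.0 * m)
    return q / (2.0 * m)

# Two triangles joined by a single bridge edge; the natural split scores high.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(modularity(edges, part))
```

Putting every node in one community gives Q = 0, the baseline the measure is built around.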
Deterministic geologic processes and stochastic modeling
Recent outcrop sampling at Yucca Mountain, Nevada, has produced significant new information regarding the distribution of physical properties at the site of a potential high-level nuclear waste repository. Consideration of the spatial distribution of measured values and geostatistical measures of spatial variability indicates that there are a number of widespread deterministic geologic features at the site that have important implications for numerical modeling of such performance aspects as ground water flow and radionuclide transport. These deterministic features have their origin in the complex, yet logical, interplay of a number of deterministic geologic processes, including magmatic evolution; volcanic eruption, transport, and emplacement; post-emplacement cooling and alteration; and late-stage (diagenetic) alteration. Because the geologic processes responsible for the formation of Yucca Mountain are relatively well understood and operate on a more-or-less regional scale, understanding of these processes can be used in modeling the physical properties and performance of the site. Information reflecting these deterministic geologic processes may be incorporated into the modeling program explicitly, using geostatistical concepts such as soft information, or implicitly, through the adoption of a particular approach to modeling. It is unlikely that any single representation of physical properties at the site will be suitable for all modeling purposes. Instead, the same underlying physical reality will need to be described many times, each in a manner conducive to assessing specific performance issues.
Deterministic Kalman filtering in a behavioral framework
Fagnani, F; Willems, JC
1997-01-01
The purpose of this paper is to obtain a deterministic version of the Kalman filtering equations. We will use a behavioral description of the plant, specifically, an image representation. The resulting algorithm requires a matrix spectral factorization. We also show that the filter can be implemented
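For orientation only, here is the classical stochastic scalar Kalman recursion; the paper's deterministic behavioral version replaces these stochastic assumptions with an image representation and a matrix spectral factorization, which is not reproduced here. All gains and noise parameters below are illustrative:

```python
# Classical scalar Kalman recursion, shown for orientation only; the paper
# derives a deterministic, behavioral counterpart of these equations.
def kalman_1d(measurements, a=1.0, c=1.0, q=0.01, r=0.25, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for y in measurements:
        # predict step
        x, p = a * x, a * p * a + q
        # update step
        k = p * c / (c * p * c + r)        # Kalman gain
        x = x + k * (y - c * x)
        p = (1 - k * c) * p
        estimates.append(x)
    return estimates

est = kalman_1d([1.1, 0.9, 1.05, 0.98, 1.02])
print(est)
```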
DETERMINISTIC HOMOGENIZATION OF QUASILINEAR DAMPED HYPERBOLIC EQUATIONS
Gabriel Nguetseng; Hubert Nnang; Nils Svanstedt
2011-01-01
Deterministic homogenization is studied for quasilinear monotone hyperbolic problems with a linear damping term. It is shown by the sigma-convergence method that the sequence of solutions to a class of multi-scale highly oscillatory hyperbolic problems converges to the solution to a homogenized quasilinear hyperbolic problem.
Reinforcement learning output feedback NN control using deterministic learning technique.
Xu, Bin; Yang, Chenguang; Shi, Zhongke
2014-03-01
In this brief, a novel adaptive-critic-based neural network (NN) controller is investigated for nonlinear pure-feedback systems. The controller design is based on the transformed predictor form, and the actor-critic NN control architecture includes two NNs: the critic NN is used to approximate the strategic utility function, and the action NN is employed to minimize both the strategic utility function and the tracking error. A deterministic learning technique is employed to guarantee that the partial persistent excitation condition of the internal states is satisfied during tracking control to a periodic reference orbit. The uniform ultimate boundedness of closed-loop signals is shown via Lyapunov stability analysis. Simulation results are presented to demonstrate the effectiveness of the proposed control. PMID:24807456
Spatial continuity measures for probabilistic and deterministic geostatistics
Isaaks, E.H.; Srivastava, R.M.
1988-05-01
Geostatistics has traditionally used a probabilistic framework, one in which expected values or ensemble averages are of primary importance. The less familiar deterministic framework views geostatistical problems in terms of spatial integrals. This paper outlines the two frameworks and examines the issue of which spatial continuity measure, the covariance C(h) or the variogram γ(h), is appropriate for each framework. Although C(h) and γ(h) were defined originally in terms of spatial integrals, the convenience of probabilistic notation made the expected value definitions more common. These now classical expected value definitions entail a linear relationship between C(h) and γ(h); the spatial integral definitions do not. In a probabilistic framework, where available sample information is extrapolated to domains other than the one which was sampled, the expected value definitions are appropriate; furthermore, within a probabilistic framework, reasons exist for preferring the variogram to the covariance function. In a deterministic framework, where available sample information is interpolated within the same domain, the spatial integral definitions are appropriate and no reasons are known for preferring the variogram. A case study on a Wiener-Lévy process demonstrates differences between the two frameworks and shows that, for most estimation problems, the deterministic viewpoint is more appropriate. Several case studies on real data sets reveal that the sample covariance function reflects the character of spatial continuity better than the sample variogram. From both theoretical and practical considerations, it is clear that for most geostatistical problems direct estimation of the covariance is better than the traditional variogram approach.
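The two spatial continuity estimators can be contrasted on a short synthetic 1-D transect (invented data); under the classical expected-value definitions they satisfy γ(h) = C(0) - C(h), a relation that sample estimates only approximately obey:

```python
# Sample covariance C(h) and semivariogram gamma(h) on a 1-D transect,
# using the classical estimators; the data below are synthetic.
def covariance(z, h):
    pairs = [(z[i], z[i + h]) for i in range(len(z) - h)]
    m1 = sum(a for a, _ in pairs) / len(pairs)
    m2 = sum(b for _, b in pairs) / len(pairs)
    return sum((a - m1) * (b - m2) for a, b in pairs) / len(pairs)

def variogram(z, h):
    if h == 0:
        return 0.0
    return sum((z[i + h] - z[i]) ** 2 for i in range(len(z) - h)) / (2 * (len(z) - h))

z = [2.0, 2.5, 3.0, 2.8, 2.1, 1.9, 2.4, 3.1, 2.9, 2.2]
for h in range(4):
    print(h, covariance(z, h), variogram(z, h))
```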
The primary impediment to nuclear proliferation is the lack of access to fissile materials. Thus, a recognized international objective has been to minimize the use of HEU and reduce the number of locations where HEU is present. Yet, nearing the 30-year anniversary of this objective, the number of HEU-fuelled research facilities in operation remains high, HEU is still being used in large quantities, and significant quantities of HEU are still found in a large number of unsecured locations worldwide. This paper identifies the most important indicators for measuring progress in historical and future national and international efforts for research reactor conversion and the decommissioning of vulnerable facilities.
Chu, Yi-Zen
2013-01-01
We show how, for certain classes of curved spacetimes, one might obtain its retarded or advanced minimally coupled massless scalar Green's function by using the corresponding Green's functions in the higher dimensional Minkowski spacetime where it is embedded. Analogous statements hold for certain classes of curved Riemannian spaces, with positive definite metrics, which may be embedded in higher dimensional Euclidean spaces. The general formula is applied to (d >= 2)-dimensional de Sitter spacetime, and the scalar Green's function is demonstrated to be sourced by a line emanating infinitesimally close to the origin of the ambient (d+1)-dimensional Minkowski spacetime and piercing orthogonally through the de Sitter hyperboloids of all finite sizes. This method does not require solving the de Sitter wave equation directly. Only the zero mode solution to an ordinary differential equation, the "wave equation" perpendicular to the hyperboloid -- followed by a one dimensional integral -- needs to be evaluated. A t...
Drivelos, Spiros A; Danezis, Georgios P; Haroutounian, Serkos A; Georgiou, Constantinos A
2016-12-15
This study examines the trace element and rare earth element (REE) fingerprint variations of PDO (Protected Designation of Origin) "Fava Santorinis" over three consecutive harvesting years (2011-2013). Classification of samples into harvesting years was studied by performing discriminant analysis (DA), k-nearest neighbours (k-NN), partial least squares (PLS) analysis and probabilistic neural networks (PNN) using rare earth elements and trace metals determined using ICP-MS. DA performed better than k-NN, producing 100% discrimination using trace elements and 79% using REEs. PLS was found to be superior to PNN, achieving 99% and 90% classification for trace elements and REEs, respectively, while PNN achieved 96% and 71%, respectively. The information obtained using REEs did not enhance classification, indicating that REEs vary minimally per harvesting year, providing robust geographical origin discrimination. The results show that seasonal patterns can occur in the elemental composition of "Fava Santorinis", probably reflecting the seasonality of climate. PMID:27451177
Existence of optimal nonanticipating controls in piecewise deterministic control problems
Seierstad, Atle
2008-01-01
Optimal nonanticipating controls are shown to exist in nonautonomous piecewise deterministic control problems with hard terminal restrictions. The assumptions needed are completely analogous to those needed to obtain optimal controls in deterministic control problems. The proof is based on well-known results on the existence of deterministic optimal controls.
Deterministic and Nondeterministic Behavior of Earthquakes and Hazard Mitigation Strategy
Kanamori, H.
2014-12-01
Earthquakes exhibit both deterministic and nondeterministic behavior. Deterministic behavior is controlled by length and time scales such as the dimension of seismogenic zones and plate-motion speed. Nondeterministic behavior is controlled by the interaction of many elements, such as asperities, in the system. Some subduction zones have strong deterministic elements which allow forecasts of future seismicity. For example, the forecasts of the 2010 Mw=8.8 Maule, Chile, earthquake and the 2012 Mw=7.6 Costa Rica earthquake are good examples in which useful forecasts were made within a solid scientific framework using GPS. However, even in these cases, because of the nondeterministic elements, uncertainties are difficult to quantify. In some subduction zones, nondeterministic behavior dominates because of complex plate boundary structures and defies useful forecasts. The 2011 Mw=9.0 Tohoku-Oki earthquake may be an example in which the physical framework was reasonably well understood, but complex interactions of asperities and insufficient knowledge about the subduction-zone structures led to the unexpected tragic consequence. Despite these difficulties, broadband seismology, GPS, and rapid data-processing and telemetry technology can contribute to effective hazard mitigation through a scenario-earthquake approach and real-time warning. A scale-independent relation between M0 (seismic moment) and the source duration, t, can be used for the design of average scenario earthquakes. However, outliers caused by the variation of stress drop, radiation efficiency, and aspect ratio of the rupture plane are often the most hazardous and need to be included in scenario earthquakes. Recent developments in real-time technology will help seismologists cope with, and prepare for, devastating tsunamis and earthquakes. Combining a better understanding of earthquake diversity with modern technology is the key to effective and comprehensive hazard mitigation practices.
Fan, Yong; Zhao, Yanhui; Pang, Lan; Kang, Yingxing; Kang, Boxiong; Liu, Yongyong; Fu, Jie; Xia, Bowei; Wang, Chen; Zhang, Youcheng
2016-01-01
Laparoscopic pancreatic surgery is one of the most sophisticated and advanced applications of laparoscopy in current surgical practice. The adoption of laparoscopic pancreaticoduodenectomy (LPD) has been relatively slow due to its technical challenges. The aim of this study is to review and characterize our successful LPD experiences in patients with distal bile duct carcinoma, periampullary adenocarcinoma, pancreatic head cancer, and duodenal cancer, and to evaluate the clinical outcomes of LPD for its potential in oncologic surgery. We retrospectively analyzed the clinical data from 14 patients who underwent LPD from August 2013 to February 2015 in our institute. None of the 14 cases was converted to open surgery; 10 cases involved laparoscopic digestive tract reconstruction and 4 cases open digestive tract reconstruction. There were no deaths during the perioperative period and no case of gastric emptying disorder or postoperative bleeding. The other clinical indexes were comparable to or better than those of open surgery. Based on our experience, LPD could be potentially safe and feasible for the treatment of early pancreatic head cancer, distal bile duct carcinoma, periampullary adenocarcinoma, and duodenal cancer. Mastering the LPD procedure requires technical expertise, but it can be accomplished with a short learning curve. PMID:27124014
Deterministic nonlinear systems a short course
Anishchenko, Vadim S; Strelkova, Galina I
2014-01-01
This text is a short yet complete course on the nonlinear dynamics of deterministic systems. Conceived as a modular set of 15 concise lectures, it reflects the many years of teaching experience of the authors. The lectures treat in turn the fundamental aspects of the theory of dynamical systems, aspects of stability and bifurcations, the theory of deterministic chaos and attractor dimensions, as well as the elements of the theory of Poincaré recurrences. Particular attention is paid to the analysis of the generation of periodic, quasiperiodic and chaotic self-sustained oscillations and to the issue of synchronization in such systems. This book is aimed at graduate students and non-specialist researchers with a background in physics, applied mathematics and engineering wishing to enter this exciting field of research.
Understanding deterministic diffusion by correlated random walks
Low-dimensional periodic arrays of scatterers with a moving point particle are ideal models for studying deterministic diffusion. For such systems the diffusion coefficient is typically an irregular function under variation of a control parameter. Here we propose a systematic scheme for approximating deterministic diffusion coefficients of this kind in terms of correlated random walks. We apply this approach to two simple examples, a one-dimensional map on the line and the periodic Lorentz gas. Starting from suitable Green-Kubo formulae, we evaluate hierarchies of approximations for their parameter-dependent diffusion coefficients. These approximations converge exactly, yielding a straightforward interpretation of the structure of these irregular diffusion coefficients in terms of dynamical correlations. (author)
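The random-walk approximation can be illustrated on the simplest correlated walk, a persistent walk on the line where each step repeats the previous one with probability p (a toy stand-in, not one of the paper's two systems). For this model the velocity autocorrelation is C(n) = (2p-1)^n, and the discrete Green-Kubo sum D = C(0)/2 + Σ_{n≥1} C(n) can be checked against a direct mean-squared-displacement estimate:

```python
import random

# Persistent random walk: with probability p the walker repeats its previous
# unit step. For this toy model C(n) = a**n with a = 2p - 1, so the discrete
# Green-Kubo formula gives D = 1/2 + a/(1 - a) = (1 + a) / (2 * (1 - a)).
def green_kubo_D(p, terms=200):
    a = 2 * p - 1
    return 0.5 + sum(a ** n for n in range(1, terms))

def msd_D(p, walkers=5000, steps=300, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walkers):
        x, v = 0, rng.choice([-1, 1])
        for _ in range(steps):
            if rng.random() >= p:      # with probability 1-p, flip direction
                v = -v
            x += v
        total += x * x
    return total / (2.0 * steps * walkers)   # <x^2> ~ 2 D n

p = 0.7
print(green_kubo_D(p), msd_D(p))
```

The two estimates agree up to the sampling error of the simulated ensemble.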
Risk from deterministic effects of ionising radiation
This publication provides a review of information for assessing deterministic effects on human health likely to arise from serious overexposure to ionising radiation. It updates information in previous Board publications NRPB-R226 and NRPB-M246. It constitutes a parallel document to Documents of the NRPB, 4, No. 4 (1993), which deals with stochastic effects. These two documents, together with Documents of the NRPB, 6, No. 1 (1995), which deals specifically with stochastic risk at low dose rates, give the current Board view on all health consequences of exposure to ionising radiation. Little new primary information on deterministic effects has become available in recent years. However, advances in techniques for data analysis have been made and are incorporated in the present report. These are presented in a form suitable for use in modelling the consequences to populations of serious radiological incidents. (author)
Microscopy with a Deterministic Single Ion Source
Jacob, Georg; Wolf, Sebastian; Ulm, Stefan; Couturier, Luc; Dawkins, Samuel T; Poschinger, Ulrich G; Schmidt-Kaler, Ferdinand; Singer, Kilian
2015-01-01
We realize a single particle microscope by using deterministically extracted laser cooled $^{40}$Ca$^+$ ions from a Paul trap as probe particles for transmission imaging. We demonstrate focusing of the ions with a resolution of 5.8$\\;\\pm\\;$1.0$\\,$nm and a minimum two-sample deviation of the beam position of 1.5$\\,$nm in the focal plane. The deterministic source, even when used in combination with an imperfect detector, gives rise to much higher signal to noise ratios as compared with conventional Poissonian sources. Gating of the detector signal by the extraction event suppresses dark counts by 6 orders of magnitude. We implement a Bayes experimental design approach to microscopy in order to maximize the gain in spatial information. We demonstrate this method by determining the position of a 1$\\,\\mu$m circular hole structure to an accuracy of 2.7$\\,$nm using only 579 probe particles.
Formal validation of a deterministic MAC protocol
Godary-Dejean K.; Andreu D.
2013-01-01
This article deals with the formal validation of a medium access protocol. This protocol has been designed to meet the specific requirements of an implantable network-based neuroprosthesis. This article presents the modeling of STIMAP with Time Petri Nets (TPN), and the verification of the deterministic medium access it provides, using timed model checking. Doing so, we show that existing formal methods and tools are not perfectly suitable for the validation of real systems, especially when s...
Deterministic quantum teleportation between distant atomic objects
Krauter, H.; D Salart; Muschik, C. A.; Petersen, J. M.; Shen, Heng; Fernholz, T.; Polzik, E. S.
2013-01-01
Quantum teleportation is a key ingredient of quantum networks and a building block for quantum computation. Teleportation between distant material objects using light as the quantum information carrier has been a particularly exciting goal. Here we demonstrate a new element of the quantum teleportation landscape, the deterministic continuous variable (cv) teleportation between distant material objects. The objects are macroscopic atomic ensembles at room temperature. Entanglement required for...
Deterministic MST Sparsification in the Congested Clique
Korhonen, Janne H.
2016-01-01
We give a simple deterministic constant-round algorithm in the congested clique model for reducing the number of edges in a graph to $n^{1+\\varepsilon}$ while preserving the minimum spanning forest, where $\\varepsilon > 0$ is any constant. This implies that in the congested clique model, it is sufficient to improve MST and other connectivity algorithms on graphs with slightly superlinear number of edges to obtain a general improvement. As a byproduct, we also obtain a simple alternative proof...
Deterministic definition of the capital risk
Anna Szczypinska; Piotrowski, Edward W.
2008-01-01
In this paper we propose a view of the capital risk problem inspired by the deterministic problem of juggling, known from classical mechanics. We propose capital equivalents to Newton's laws of motion and on this basis we determine the most secure form of credit repayment with regard to maximisation of profit. We then extend Newton's laws to models in linear spaces of arbitrary dimension with the help of matrix rates of return. The matrix rates describe the evolution of multidimensional ...
Deterministically Deterring Timing Attacks in Deterland
Wu, Weiyi; Ford, Bryan
2015-01-01
The massive parallelism and resource sharing embodying today's cloud business model not only exacerbate the security challenge of timing channels, but also undermine the viability of defenses based on resource partitioning. We propose hypervisor-enforced timing mitigation to control timing channels in cloud environments. This approach closes "reference clocks" internal to the cloud by imposing a deterministic view of time on guest code, and uses timing mitigators to pace I/O and rate-limit po...
This paper analyzes how to measure progress in the minimization of HEU-fuelled research reactors with respect to the International Fuel Cycle Evaluation (INFCE) completed in 1978, and the establishment of new objectives towards 2020. All HEU-fuelled research facilities converted, commissioned or decommissioned after 1978, in total more than 310 facilities, are included. More than 130 HEU-fuelled facilities still remain in operation today. The most important measure has been facility shut-down, accounting for 62% of the reduction in U-235 consumption from 1978 to 2007. Presently, only three regions worldwide use significant amounts of HEU: North America, Russia with the Newly Independent States, and Europe. Projected HEU consumption in 2020 will drop to less than 50 kg as the current HEU-fuelled steady-state reactors are shut down or converted. However, if the current lack of concern for HEU in life-time cores is not changed, in particular in Russia, 50-100 such facilities may continue to be in operation in 2020. (author)
Chu, Yi-Zen [Center for Particle Cosmology, Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)
2014-09-15
Motivated by the desire to understand the causal structure of physical signals produced in curved spacetimes – particularly around black holes – we show how, for certain classes of geometries, one might obtain its retarded or advanced minimally coupled massless scalar Green's function by using the corresponding Green's functions in the higher dimensional Minkowski spacetime where it is embedded. Analogous statements hold for certain classes of curved Riemannian spaces, with positive definite metrics, which may be embedded in higher dimensional Euclidean spaces. The general formula is applied to (d ≥ 2)-dimensional de Sitter spacetime, and the scalar Green's function is demonstrated to be sourced by a line emanating infinitesimally close to the origin of the ambient (d + 1)-dimensional Minkowski spacetime and piercing orthogonally through the de Sitter hyperboloids of all finite sizes. This method does not require solving the de Sitter wave equation directly. Only the zero mode solution to an ordinary differential equation, the “wave equation” perpendicular to the hyperboloid – followed by a one-dimensional integral – needs to be evaluated. A topological obstruction to the general construction is also discussed by utilizing it to derive a generalized Green's function of the Laplacian on the (d ≥ 2)-dimensional sphere.
Derivation Of Probabilistic Damage Definitions From High Fidelity Deterministic Computations
Leininger, L D
2004-10-26
This paper summarizes a methodology used by the Underground Analysis and Planning System (UGAPS) at Lawrence Livermore National Laboratory (LLNL) for the derivation of probabilistic damage curves for US Strategic Command (USSTRATCOM). UGAPS uses high fidelity finite element and discrete element codes on the massively parallel supercomputers to predict damage to underground structures from military interdiction scenarios. These deterministic calculations can be riddled with uncertainty, especially when intelligence, the basis for this modeling, is uncertain. The technique presented here attempts to account for this uncertainty by bounding the problem with reasonable cases and using those bounding cases as a statistical sample. Probability of damage curves are computed and represented that account for uncertainty within the sample and enable the war planner to make informed decisions. This work is flexible enough to incorporate any desired damage mechanism and can utilize the variety of finite element and discrete element codes within the national laboratory and government contractor community.
Using deterministic codes to accelerate continuous energy Monte-Carlo standards calculations
Deterministic codes are usually used for critical-parameter or one-dimensional geometry calculations. The advantages of deterministic codes are the speed of the calculation and the absence of a standard deviation on the keff results. Nevertheless, deterministic results are affected by several intrinsic uncertainties such as energy condensation or self-shielding. The practice at the CEA expert criticality group (CEA/SERMA/CP2C) is therefore to always check the main results (minimum critical or maximum permissible values and un-moderated values) with a pointwise Monte Carlo calculation. In recent years, in particular cases (pure actinide fissile media, exotic reflectors), large discrepancies have been observed between the keff calculated by the CRISTAL V1 reference route (the continuous-energy Monte Carlo code TRIPOLI-4) and the keff target (by the standard route APOLLO2-Sn). The issue in these cases was how to transpose the keff discrepancies observed between the standard and reference routes to the dimensions (mass, thickness...), or how to reduce the keff discrepancies using optimized options of the deterministic code. One solution for transposing discrepancies is to iterate on dimensions using a pointwise Monte Carlo code until the desired keff eigenvalue is reached. But the amount of time needed to obtain a good standard deviation, and also the desired keff eigenvalue within the Monte Carlo calculation uncertainty, can quickly increase. The principle of the method presented in this paper is that the discrepancy between the deterministic code and the Monte Carlo code, calculated at the same dimension, varies only slowly with the dimension. Therefore, correcting the keff eigenvalue to which the deterministic code converges by the observed discrepancy leads to a dimension nearer to the true dimension (i.e. the dimension where the Monte Carlo keff calculation is close to the keff eigenvalue). If the keff eigenvalue is outside the Monte Carlo uncertainty, the discrepancy is recalculated and
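The correction principle can be sketched with hypothetical keff models (keff_mc, keff_det and every constant below are invented for illustration): because the deterministic-vs-Monte-Carlo bias varies slowly with dimension, shifting the deterministic target by the discrepancy observed at one dimension lands close to the true critical dimension.

```python
# Toy illustration of the correction principle: hypothetical keff models in
# which the deterministic code carries a slowly varying bias relative to the
# (assumed exact) Monte Carlo result.
def keff_mc(dim):            # stand-in for a continuous-energy Monte Carlo result
    return 0.6 + 0.004 * dim

def keff_det(dim):           # stand-in for the deterministic (standard-route) result
    return keff_mc(dim) - (0.010 + 0.00001 * dim)   # bias varies slowly with dim

def solve_det(target, lo=0.0, hi=200.0, tol=1e-9):
    """Dimension at which the deterministic code reaches the target keff (bisection)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if keff_det(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

target = 0.95
d0 = solve_det(target)                      # uncorrected deterministic dimension
discrepancy = keff_mc(d0) - keff_det(d0)    # observed at that single dimension
d1 = solve_det(target - discrepancy)        # corrected dimension
print(d0, d1, keff_mc(d1))
```

One observed discrepancy thus replaces the costly iteration of Monte Carlo runs over dimensions.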
Lin, Chia-Hsiang; Ma, Wing-Kin; Li, Wei-Chiang; Chi, Chong-Yung; Ambikapathi, ArulMurugan
2015-10-01
In blind hyperspectral unmixing (HU), the pure-pixel assumption is well-known to be powerful in enabling simple and effective blind HU solutions. However, the pure-pixel assumption is not always satisfied in an exact sense, especially for scenarios where pixels are heavily mixed. In the no pure-pixel case, a good blind HU approach to consider is the minimum volume enclosing simplex (MVES). Empirical experience has suggested that MVES algorithms can perform well without pure pixels, although it was not totally clear why this is true from a theoretical viewpoint. This paper aims to address the latter issue. We develop an analysis framework wherein the perfect endmember identifiability of MVES is studied under the noiseless case. We prove that MVES is indeed robust against lack of pure pixels, as long as the pixels do not get too heavily mixed and too asymmetrically spread. The theoretical results are verified by numerical simulations.
Farm-level nonparametric analysis of cost-minimization and profit-maximization behavior
Allen M. Featherstone; Moghnieh, Ghassan A.; Goodwin, Barry K.
1995-01-01
This study investigates non-parametrically the optimizing behavior of a sample of 289 Kansas farms under profit-maximization and cost-minimization hypotheses. The study uses both deterministic and stochastic non-parametric tests. The deterministic results do not support strict adherence to either optimization hypothesis. The stochastic tests suggest that all 289 farms fail the profit-maximization hypothesis, whereas 171 farms failed the cost-minimization hypothesis. Allowing for non-regressiv...
A deterministic method for transient, three-dimensional neutron transport
A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable of the neutron flux is known as the improved quasi-static (IQS) method. The position, energy, and angle-dependent neutron flux is computed deterministically by using the three-dimensional discrete ordinates code TORT. This paper briefly describes the methodology and selected results. The code developed at the University of Tennessee based on this methodology is called TDTORT. TDTORT can be used to model transients involving voided and/or strongly absorbing regions that require transport theory for accuracy. This code can also be used to model either small high-leakage systems, such as space reactors, or asymmetric control rod movements. TDTORT can model step, ramp, step followed by another step, and step followed by ramp type perturbations. It can also model columnwise rod movement. A special case of columnwise rod movement in a three-dimensional model of a boiling water reactor (BWR) with simple adiabatic feedback is also included. TDTORT is verified through several transient one-dimensional, two-dimensional, and three-dimensional benchmark problems. The results show that the transport methodology and corresponding code developed in this work have sufficient accuracy and speed for computing the dynamic behavior of complex multi-dimensional neutronic systems
A DETERMINISTIC METHOD FOR TRANSIENT, THREE-DIMENSIONAL NEUTRON TRANSPORT
A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable of the neutron flux is known as the improved quasi-static (IQS) method. The position, energy, and angle-dependent neutron flux is computed deterministically by using the three-dimensional discrete ordinates code TORT. This paper briefly describes the methodology and selected results. The code developed at the University of Tennessee based on this methodology is called TDTORT. TDTORT can be used to model transients involving voided and/or strongly absorbing regions that require transport theory for accuracy. This code can also be used to model either small high-leakage systems, such as space reactors, or asymmetric control rod movements. TDTORT can model step, ramp, step followed by another step, and step followed by ramp type perturbations. Columnwise rod movement can also be modeled. A special case of columnwise rod movement in a three-dimensional model of a boiling water reactor (BWR) with simple adiabatic feedback is also included. TDTORT is verified through several transient one-dimensional, two-dimensional, and three-dimensional benchmark problems. The results show that the transport methodology and corresponding code developed in this work have sufficient accuracy and speed for computing the dynamic behavior of complex multidimensional neutronic systems
This book presents an overview of waste minimization. Covers applications of technology to waste reduction, techniques for implementing programs, incorporation of programs into R and D, strategies for private industry and the public sector, and case studies of programs already in effect
Hybrid method of deterministic and probabilistic approaches for multigroup neutron transport problem
A hybrid of deterministic and probabilistic methods is proposed to solve the Boltzmann transport equation. The new method uses a deterministic method, the Method of Characteristics (MOC), for the fast and thermal neutron energy ranges and a probabilistic method, Monte Carlo (MC), for the intermediate resonance energy range. In the case of a continuous-energy problem, the hybrid method will be able to take advantage of the fast MOC calculation and the accurate resonance self-shielding treatment of the MC method. As a proof of principle, this paper presents the hybrid methodology applied to a multigroup form of the Boltzmann transport equation and confirms that the hybrid method can produce results consistent with the MC and MOC methods. (authors)
Linear embedding of free energy minimization
Moussa, Jonathan E
2016-01-01
Exact free energy minimization is a convex optimization problem that is usually approximated with stochastic sampling methods. Deterministic approximations have been less successful because many desirable properties have been difficult to attain. Such properties include the preservation of convexity, lower bounds on free energy, and applicability to systems without subsystem structure. We satisfy all of these properties by embedding free energy minimization into a linear program over energy-resolved expectation values. Numerical results on small systems are encouraging, but a lack of size consistency necessitates further development for large systems.
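The convexity claim above can be checked on a toy discrete system (an illustrative sketch, not the paper's linear embedding): the variational free energy F[p] = ⟨E⟩_p − T·S[p] over probability distributions is minimized by the Gibbs distribution, with minimum value −T·log Z.

```python
import numpy as np

def free_energy(p, energies, T):
    """Variational free energy F[p] = <E>_p - T * S[p] of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    nz = p > 0  # 0 * log 0 is taken as 0
    return float(p @ energies + T * np.sum(p[nz] * np.log(p[nz])))

def gibbs(energies, T):
    """The Gibbs distribution p_i ~ exp(-E_i / T), the unique minimizer of F."""
    w = np.exp(-np.asarray(energies, dtype=float) / T)
    return w / w.sum()

energies = np.array([0.0, 1.0, 2.0, 5.0])
T = 0.7
p_star = gibbs(energies, T)
f_min = -T * np.log(np.exp(-energies / T).sum())  # F at the minimum: -T log Z
```

Any other distribution (uniform, or a point mass) evaluates to a strictly larger F, which is what makes the exact problem convex and well posed.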
Adachi, Koichi; Yamaguchi, Atsushi; Yuri, Koichi; Matsumoto, Harunobu; Kimura, Naoyuki; Okamura, Homare; Shiraishi, Manabu; Hori, Daijirou; Adachi, Hideo
2016-06-01
Standard full median sternotomy for total aortic arch replacement in patients with tracheostomy carries higher risks of mediastinitis and graft infection. To avoid surgical site infection, it is necessary to keep a sufficient distance between the tracheostomy and the site of the surgical skin incision. We herein report a case of a 74-year-old Caucasian man with a permanent tracheostomy after total laryngectomy, who underwent total aortic arch replacement for an aneurysm. Antero-lateral thoracotomy in the 2nd intercostal space with lower partial sternotomy (ALPS approach) provided a sufficient distance between the tracheostomy and the surgical field. It also provided a good view for the surgical procedure and enabled the standard setup of cardiopulmonary bypass, with ascending aortic cannulation, venous drainage from the right atrium, and left ventricular venting through the upper right pulmonary vein. The operation was completed in 345 minutes and the patient was discharged on the 11th postoperative day without any complications. PMID:27246136
Pisaniello, John D.; Tingey-Holyoak, Joanne; Burritt, Roger L.
2012-01-01
Small dam safety is generally being ignored. The potential for dam failure resulting in catastrophic consequences for downstream communities, property, and the environment warrants exploration of the threats and policy issues associated with the management of small/farm dams. The paper achieves this through a comparative analysis of differing levels of dam safety assurance policy: absent, driven, strong, and model. A strategic review is undertaken to establish international dam safety policy benchmarks and to identify a best practice model. A cost-effective engineering/accounting tool is presented to assist the policy selection process and complement the best practice model. The paper then demonstrates the significance of the small-dam safety problem with a case study of four Australian States: policy-absent South Australia, policy-driven Victoria, policy-strong New South Wales, and policy-model Tasmania. Surveys of farmer behavior practices provide empirical evidence of the importance of policy and its proper implementation. Both individual and cumulative farm dam failure threats are addressed and, with supporting empirical evidence, the need for "appropriate" supervision of small dams is demonstrated. The paper adds to the existing international dam policy literature by identifying acceptable minimum level practice in private/farm dam safety assurance policy as well as updated international best practice policy guidelines while providing case study demonstration of how to apply the guidelines and empirical reinforcement of the need for "appropriate" policy. The policy guidelines, cost-effective technology, and comparative lessons presented can assist any jurisdiction to determine and implement appropriate dam safety policy.
Using deterministic methods for research reactor studies
As an alternative to prohibitive Monte Carlo simulations, deterministic methods can be used to simulate research reactors. Using various microscopic cross section libraries currently available in Canada, flux distributions were obtained from DRAGON cell and supercell transport calculations. Then, homogenization/condensation is done to produce few-group nuclear properties, and diffusion calculations were performed using DONJON core models. In this paper, the multigroup modular environment of the code DONJON is presented, and the various steps required in the modelling of SLOWPOKE hexagonal cores are described. Numerical simulations are also compared with experimental data available for the EPM Slowpoke reactor. (author)
Deterministic quantum computation with one photonic qubit
Hor-Meyll, M.; Tasca, D. S.; Walborn, S. P.; Ribeiro, P. H. Souto; Santos, M. M.; Duzzioni, E. I.
2015-07-01
We show that deterministic quantum computing with one qubit (DQC1) can be experimentally implemented with a spatial light modulator, using the polarization and the transverse spatial degrees of freedom of light. The scheme allows the computation of the trace of a high-dimension matrix, limited by the resolution of the modulator panel and by technical imperfections. In order to illustrate the method, we compute the normalized trace of unitary matrices and implement the Deutsch-Jozsa algorithm. The largest matrix that can be manipulated with our setup is 1080 × 1920, which is able to represent a system with approximately 21 qubits.
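As a classical cross-check of the quantity the experiment estimates (a toy numerical sketch, not the optical setup itself): DQC1 returns the normalized trace Tr(U)/d, and for a Deutsch-Jozsa phase oracle that trace has magnitude 1 when f is constant and 0 when f is balanced.

```python
import numpy as np

def normalized_trace(u):
    """The quantity a DQC1 circuit estimates for a d-dimensional unitary."""
    return np.trace(u) / u.shape[0]

def deutsch_jozsa_oracle(f_values):
    """Diagonal phase oracle U|x> = (-1)^f(x) |x> for f: {0,...,d-1} -> {0,1}."""
    return np.diag([(-1) ** f for f in f_values])

constant = deutsch_jozsa_oracle([0, 0, 0, 0])   # |Tr(U)/d| = 1: f is constant
balanced = deutsch_jozsa_oracle([0, 1, 1, 0])   # Tr(U)/d = 0: f is balanced
```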
Experimental Demonstration of Deterministic Entanglement Transformation
CHEN Geng; XU Jin-Shi; LI Chuan-Feng; GONG Ming; CHEN Lei; GUO Guang-Can
2009-01-01
According to Nielsen's theorem [Phys. Rev. Lett. 83 (1999) 436] and as a proof of principle, we demonstrate the deterministic transformation from a maximally entangled state to an arbitrary non-maximally entangled pure state with local operations and classical communication in an optical system. The output states are verified with a quantum tomography process. We further test the violation of a Bell-like inequality to demonstrate the quantum nonlocality of the state we generated. Our results may be useful in quantum information processing.
Identifying left-right deterministic linear languages
CALERA RUBIO, JORGE; Oncina Carratalá, Jose
2004-01-01
Recently an algorithm to identify in the limit, with polynomial time and data, Left Deterministic Linear Languages (Left DLL) and, consequently, Right DLL was proposed. In this paper we show that the class of the Left-Right DLL formed by the union of both classes is also identifiable. To do that, we introduce the notion of an n-negative characteristic sample, that is, a sample that forces an inference algorithm to output a hypothesis of size bigger than n when strings from a non identifiable langu...
Deterministic and probabilistic approach to safety analysis
The examples discussed in this paper show that reliability analysis methods can be applied fairly well to interpret deterministic safety criteria in quantitative terms. For a further improved extension of applied reliability analysis, it has turned out that the influence of operational and control systems and of component protection devices should be considered in detail with the aid of reliability analysis methods. Of course, an extension of probabilistic analysis must be accompanied by further development of the methods and a broadening of the data base. (orig.)
Nine challenges for deterministic epidemic models
Roberts, Mick G; Andreasen, Viggo; Lloyd, Alun;
2015-01-01
Deterministic models have a long history of being applied to the study of infectious disease epidemiology. We highlight and discuss nine challenges in this area. The first two concern the endemic equilibrium and its stability. We indicate the need for models that describe multi-strain infections, infections with time-varying infectivity, and those where superinfection is possible. We then consider the need for advances in spatial epidemic models, and draw attention to the lack of models that explore the relationship between communicable and non-communicable diseases. The final two challenges concern...
Hamada, Yuta
2015-01-01
We propose a novel leptogenesis scenario at the reheating era. Our setup is minimal in the sense that, in addition to the standard model Lagrangian, we only consider an inflaton and higher dimensional operators. The lepton number asymmetry is produced not by the decay of a heavy particle, but by the scattering between the standard model particles. After the decay of the inflaton, the model is described within the standard model with higher dimensional operators. Sakharov's three conditions are satisfied in the following way. The violation of the lepton number is realized by the dimension-5 operator. The complex phase comes from the dimension-6 four-lepton operator. The universe is out of equilibrium before the reheating is completed. It is found that successful baryogenesis is realized for a wide range of parameters, the inflaton mass and the reheating temperature, depending on the cutoff scale. Since we only rely on the effective Lagrangian, our scenario can be applicable to all mechanisms to generate n...
Piazza, Federico; Schücker, Thomas
2016-04-01
The minimal requirement for cosmography—a non-dynamical description of the universe—is a prescription for calculating null geodesics, and time-like geodesics as a function of their proper time. In this paper, we consider the most general linear connection compatible with homogeneity and isotropy, but not necessarily with a metric. A light-cone structure is assigned by choosing a set of geodesics representing light rays. This defines a "scale factor" and a local notion of distance, as that travelled by light in a given proper time interval. We find that the velocities and relativistic energies of free-falling bodies decrease in time as a consequence of cosmic expansion, but at a rate that can be different than that dictated by the usual metric framework. By extrapolating this behavior to photons' redshift, we find that the latter is in principle independent of the "scale factor". Interestingly, redshift-distance relations and other standard geometric observables are modified in this extended framework, in a way that could be experimentally tested. An extremely tight constraint on the model, however, is represented by the blackbody-ness of the cosmic microwave background. Finally, as a check, we also consider the effects of a non-metric connection in a different set-up, namely, that of a static, spherically symmetric spacetime.
Cohen Anders
2011-09-01
Full Text Available Abstract Introduction The purpose of this study was to describe procedural details of a minimally invasive presacral approach for revision of an L5-S1 Axial Lumbar Interbody Fusion rod. Case presentation A 70-year-old Caucasian man presented to our facility with marked thoracolumbar scoliosis, osteoarthritic changes characterized by high-grade osteophytes, and significant intervertebral disc collapse and calcification. Our patient required crutches during ambulation and reported intractable axial and radicular pain. Multi-level reconstruction of L1-4 was accomplished with extreme lateral interbody fusion, although focal lumbosacral symptoms persisted due to disc space collapse at L5-S1. Lumbosacral interbody distraction and stabilization was achieved four weeks later with the Axial Lumbar Interbody Fusion System (TranS1 Inc., Wilmington, NC, USA and rod implantation via an axial presacral approach. Despite symptom resolution following this procedure, our patient suffered a fall six weeks postoperatively with direct sacral impaction resulting in symptom recurrence and loss of L5-S1 distraction. Following seven months of unsuccessful conservative care, a revision of the Axial Lumbar Interbody Fusion rod was performed that utilized the same presacral approach and used a larger diameter implant. Minimal adhesions were encountered upon presacral re-entry. A precise operative trajectory to the base of the previously implanted rod was achieved using fluoroscopic guidance. Surgical removal of the implant was successful with minimal bone resection required. A larger diameter Axial Lumbar Interbody Fusion rod was then implanted and joint distraction was re-established. The radicular symptoms resolved following revision surgery and our patient was ambulating without assistance on post-operative day one. No adverse events were reported. Conclusions The Axial Lumbar Interbody Fusion distraction rod may be revised and replaced with a larger diameter rod using
Minimal Poems Written in 1979
Sandra Sirangelo Maggio
2008-01-01
The reading of M. van der Slice's Minimal Poems Written in 1979 (the work, actually, has no title) reminded me of a book I have seen a long time ago, called Truth, which had not even a single word printed inside. In either case we have a sample of how often eccentricities can prove efficient means of artistic creativity, in this new literary trend known as Minimalism.
Deterministic Aided STAP for Target Detection in Heterogeneous Situations
J.-F. Degurse
2013-01-01
Classical space-time adaptive processing (STAP) detectors are strongly limited when facing highly heterogeneous environments. Indeed, in this case, representative target-free data are no longer available. Single-dataset algorithms, such as the MLED algorithm, have proved their efficiency in overcoming this problem by working only on primary data. These methods are based on the APES algorithm, which removes the useful signal from the covariance matrix. However, a small part of the clutter signal is also removed from the covariance matrix in this operation. Consequently, a degradation of clutter rejection performance is observed. We propose two algorithms that use deterministic aided STAP to overcome this issue of the single-dataset APES method. The results on realistic simulated data and real data show that these methods outperform traditional single-dataset methods in detection and in clutter rejection.
Classification and unification of the microscopic deterministic traffic models
Yang, Bo; Monterola, Christopher
2015-10-01
We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model that can be important both for understanding human driving behaviors and algorithms for autonomous driverless vehicles.
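The optimal velocity (OV) class of models to which the paper reduces the intelligent driver model can be illustrated with a minimal Bando-type simulation on a ring road (the OV function and parameter values below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def optimal_velocity(headway):
    # Bando-type OV function; parameter choices are illustrative assumptions
    return np.tanh(headway - 2.0) + np.tanh(2.0)

def simulate_ring(n_cars=20, road_length=50.0, a=1.0, dt=0.05, steps=2000):
    """Euler integration of dv_i/dt = a * (V(headway_i) - v_i) on a ring road."""
    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, road_length, n_cars))   # car positions
    v = np.zeros(n_cars)                               # car velocities
    for _ in range(steps):
        headway = (np.roll(x, -1) - x) % road_length   # gap to the car ahead
        v += a * (optimal_velocity(headway) - v) * dt  # relax toward V(h)
        x = (x + v * dt) % road_length
    return x, v
```

Expanding any deterministic model around its uniform-flow ground state, as the paper describes, amounts to Taylor-expanding a relaxation law of this form.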
Analysis of deterministic cyclic gene regulatory network models with delays
Ahsen, Mehmet Eren; Niculescu, Silviu-Iulian
2015-01-01
This brief examines a deterministic, ODE-based model for gene regulatory networks (GRN) that incorporates nonlinearities and time-delayed feedback. An introductory chapter provides some insights into molecular biology and GRNs. The mathematical tools necessary for studying the GRN model are then reviewed, in particular Hill functions and Schwarzian derivatives. One chapter is devoted to the analysis of GRNs under negative feedback with time delays, and a special case of a homogeneous GRN is considered. Asymptotic stability analysis of GRNs under positive feedback is then considered in a separate chapter, in which conditions leading to bi-stability are derived. Graduate and advanced undergraduate students and researchers in control engineering, applied mathematics, systems biology and synthetic biology will find this brief to be a clear and concise introduction to the modeling and analysis of GRNs.
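A minimal sketch of the kind of model the brief analyzes, cyclic repression with Hill nonlinearities (delays omitted for brevity; all parameters are assumed for illustration):

```python
import numpy as np

def hill_repression(x, K=1.0, n=2):
    """Decreasing Hill function modelling transcriptional repression."""
    return 1.0 / (1.0 + (x / K) ** n)

def simulate_cyclic_grn(beta=4.0, gamma=1.0, dt=0.01, steps=5000):
    """Euler simulation of a 3-gene cyclic repression loop:
    dx_i/dt = beta * hill_repression(x_{i-1}) - gamma * x_i (no delay here)."""
    x = np.array([1.0, 0.5, 0.25])             # asymmetric initial state
    history = [x.copy()]
    for _ in range(steps):
        x = x + (beta * hill_repression(np.roll(x, 1)) - gamma * x) * dt
        history.append(x.copy())
    return np.array(history)
```

Adding a delay to the feedback term is what opens the door to the oscillatory instabilities studied in the brief.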
Mixed deterministic statistical modelling of regional ozone air pollution
Kalenderski, Stoitchko Dimitrov
2011-03-17
We develop a physically motivated statistical model for regional ozone air pollution by separating the ground-level pollutant concentration field into three components, namely: transport, local production and large-scale mean trend mostly dominated by emission rates. The model is novel in the field of environmental spatial statistics in that it is a combined deterministic-statistical model, which gives a new perspective to the modelling of air pollution. The model is presented in a Bayesian hierarchical formalism, and explicitly accounts for advection of pollutants, using the advection equation. We apply the model to a specific case of regional ozone pollution-the Lower Fraser valley of British Columbia, Canada. As a predictive tool, we demonstrate that the model vastly outperforms existing, simpler modelling approaches. Our study highlights the importance of simultaneously considering different aspects of an air pollution problem as well as taking into account the physical bases that govern the processes of interest. © 2011 John Wiley & Sons, Ltd.
Deterministic Function Computation with Chemical Reaction Networks
Chen, Ho-Lin; Soloveichik, David
2012-01-01
We study the deterministic computation of functions on tuples of natural numbers by chemical reaction networks (CRNs). CRNs have been shown to be efficiently Turing-universal when allowing for a small probability of error. CRNs that are guaranteed to converge on a correct answer, on the other hand, have been shown to decide only the semilinear predicates. We introduce the notion of function, rather than predicate, computation by representing the output of a function f:N^k --> N^l by a count of some molecular species, i.e., if the CRN starts with n_1,...,n_k molecules of some "input" species X_1,...,X_k, the CRN is guaranteed to converge to having f(n_1,...,n_k) molecules of the "output" species Y_1,...,Y_l. We show that a function f:N^k --> N^l is deterministically computed by a CRN if and only if its graph {(x,y) \\in N^k x N^l | f(x) = y} is a semilinear set. Finally, we show that each semilinear function f can be computed on input x in expected time O(polylog |x|).
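The function-computation convention described above can be sketched directly (a toy simulator, not code from the paper): starting from input counts of X1 and X2, the reactions X1 → Y and X2 → Y deterministically converge to f(n1, n2) = n1 + n2 copies of Y, a semilinear function, regardless of the order in which reactions fire.

```python
import random

def run_crn(counts, reactions, seed=0):
    """Fire applicable reactions in arbitrary order until none can fire.
    Each reaction is a pair (consumed, produced) of species->count dicts."""
    rng = random.Random(seed)
    counts = dict(counts)
    while True:
        applicable = [r for r in reactions
                      if all(counts.get(s, 0) >= k for s, k in r[0].items())]
        if not applicable:
            return counts
        consumed, produced = rng.choice(applicable)
        for s, k in consumed.items():
            counts[s] -= k
        for s, k in produced.items():
            counts[s] = counts.get(s, 0) + k

# A CRN computing the semilinear function f(n1, n2) = n1 + n2:
# each reaction converts one input molecule into one output molecule Y.
reactions = [({"X1": 1}, {"Y": 1}), ({"X2": 1}, {"Y": 1})]
result = run_crn({"X1": 3, "X2": 4}, reactions)  # converges to 7 copies of Y
```

The guaranteed convergence to the same output count, whatever the firing order, is exactly the "deterministic computation" property the paper characterizes.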
Focusing a deterministic single-ion beam
We focus down an ion beam consisting of single 40Ca+ ions to a spot size of a few micrometers using an einzel lens. Starting from a segmented linear Paul trap, we have implemented a procedure that allows us to deterministically load a predetermined number of ions by using the potential shaping capabilities of our segmented ion trap. For single-ion loading, an efficiency of 96.7(7)% has been achieved. These ions are then deterministically extracted out of the trap and focused down to a 1σ-spot radius of (4.6±1.3) μm at a distance of 257 mm from the trap center. Compared to previous measurements without ion optics, the einzel lens focuses down the single-ion beam by a factor of 12. Due to the small beam divergence and narrow velocity distribution of our ion source, chromatic and spherical aberrations at the einzel lens are vastly reduced, presenting a promising starting point for focusing single ions on their way to a substrate.
Plausible Suggestion for a Deterministic Wave Function
Schulz, P
2006-01-01
A deterministic axial vector model for photons is presented, which is also suitable for particles. During a rotation around an axis, the deterministic wave function a has the following form: a = ws r exp(±i wb t), where ws is either the axial or the scalar spin rotation frequency (the latter is proportional to the mass), r is the radius of the orbit (and also the amplitude of a vibration arising later from the interaction by fusing of two oppositely circling photons), wb is the orbital angular frequency (proportional to the velocity), and t is time. "+" before the imaginary i means a right-handed and "-" a left-handed rotation. An interaction happens if particles (including photons) meet through collision and melt together.
Moment equations for a piecewise deterministic PDE
Bressloff, Paul C.; Lawley, Sean D.
2015-03-01
We analyze a piecewise deterministic PDE consisting of the diffusion equation on a finite interval Ω with randomly switching boundary conditions and diffusion coefficient. We proceed by spatially discretizing the diffusion equation using finite differences and constructing the Chapman-Kolmogorov (CK) equation for the resulting finite-dimensional stochastic hybrid system. We show how the CK equation can be used to generate a hierarchy of equations for the r-th moments of the stochastic field, which take the form of r-dimensional parabolic PDEs on {{Ω }r} that couple to lower order moments at the boundaries. We explicitly solve the first and second order moment equations (r = 2). We then describe how the r-th moment of the stochastic PDE can be interpreted in terms of the splitting probability that r non-interacting Brownian particles all exit at the same boundary; although the particles are non-interacting, statistical correlations arise due to the fact that they all move in the same randomly switching environment. Hence the stochastic diffusion equation describes two levels of randomness: Brownian motion at the individual particle level and a randomly switching environment. Finally, in the limit of fast switching, we use a quasi-steady state approximation to reduce the piecewise deterministic PDE to an SPDE with multiplicative Gaussian noise in the bulk and a stochastically-driven boundary.
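A stripped-down numerical version of this setup can be sketched as follows (parameters are assumed; for simplicity only the diffusion coefficient switches here, whereas the paper also switches the boundary conditions): discretize the interval by finite differences and average over realizations of the two-state Markov environment to estimate the first moment of the stochastic field.

```python
import numpy as np

def simulate_switching_diffusion(n_x=51, D=(0.5, 2.0), rate=20.0,
                                 dt=5e-5, t_end=0.05, n_trials=50, seed=1):
    """Finite-difference diffusion on [0, 1] with absorbing endpoints and a
    diffusion coefficient switching between D[0] and D[1] at rate `rate`
    (a two-state Markov environment). Returns the trial-averaged field,
    i.e. a Monte Carlo estimate of the first moment."""
    rng = np.random.default_rng(seed)
    dx = 1.0 / (n_x - 1)
    steps = int(t_end / dt)
    mean_u = np.zeros(n_x)
    for _ in range(n_trials):
        u = np.zeros(n_x)
        u[n_x // 2] = 1.0 / dx                 # unit-mass pulse at the center
        state = rng.integers(2)
        for _ in range(steps):
            if rng.random() < rate * dt:       # Markov switch of the environment
                state = 1 - state
            lap = np.zeros(n_x)
            lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
            u = u + D[state] * lap * dt
            u[0] = u[-1] = 0.0                 # absorbing boundary conditions
        mean_u += u / n_trials
    return mean_u
```

Every realization shares the same switching environment across all grid points, which is the source of the statistical correlations between particles discussed in the abstract.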
What Is Minimally Invasive Dentistry?
Deterministic Safety Analysis for Nuclear Power Plants. Specific Safety Guide
The objective of this Safety Guide is to provide harmonized guidance to designers, operators, regulators and providers of technical support on deterministic safety analysis for nuclear power plants. It provides information on the utilization of the results of such analysis for safety and reliability improvements. The Safety Guide addresses conservative, best estimate and uncertainty evaluation approaches to deterministic safety analysis and is applicable to current and future designs. Contents: 1. Introduction; 2. Grouping of initiating events and associated transients relating to plant states; 3. Deterministic safety analysis and acceptance criteria; 4. Conservative deterministic safety analysis; 5. Best estimate plus uncertainty analysis; 6. Verification and validation of computer codes; 7. Relation of deterministic safety analysis to engineering aspects of safety and probabilistic safety analysis; 8. Application of deterministic safety analysis; 9. Source term evaluation for operational states and accident conditions; References.
Tanigaki, Nobuhiro, E-mail: tanigaki.nobuhiro@eng.nssmc.com [NIPPON STEEL & SUMIKIN ENGINEERING CO., LTD., (EUROPEAN OFFICE), Am Seestern 8, 40547 Dusseldorf (Germany); Ishida, Yoshihiro [NIPPON STEEL & SUMIKIN ENGINEERING CO., LTD., 46-59, Nakabaru, Tobata-ku, Kitakyushu, Fukuoka 804-8505 (Japan); Osada, Morihiro [NIPPON STEEL & SUMIKIN ENGINEERING CO., LTD., (Head Office), Osaki Center Building 1-5-1, Osaki, Shinagawa-ku, Tokyo 141-8604 (Japan)
2015-03-15
Highlights: • A new waste management scheme and the effects of co-gasification of MSW were assessed. • A co-gasification system was compared with other conventional systems. • The co-gasification system can produce slag and metal with high-quality. • The co-gasification system showed an economic advantage when bottom ash is landfilled. • The sensitivity analyses indicate an economic advantage when the landfill cost is high. - Abstract: This study evaluates municipal solid waste co-gasification technology and a new solid waste management scheme, which can minimize final landfill amounts and maximize material recycled from waste. This new scheme is considered for a region where bottom ash and incombustibles are landfilled or not allowed to be recycled due to their toxic heavy metal concentration. Waste is processed with incombustible residues and an incineration bottom ash discharged from existing conventional incinerators, using a gasification and melting technology (the Direct Melting System). The inert materials, contained in municipal solid waste, incombustibles and bottom ash, are recycled as slag and metal in this process as well as energy recovery. Based on this new waste management scheme with a co-gasification system, a case study of municipal solid waste co-gasification was evaluated and compared with other technical solutions, such as conventional incineration, incineration with an ash melting facility under certain boundary conditions. From a technical point of view, co-gasification produced high quality slag with few harmful heavy metals, which was recycled completely without requiring any further post-treatment such as aging. As a consequence, the co-gasification system had an economic advantage over other systems because of its material recovery and minimization of the final landfill amount. Sensitivity analyses of landfill cost, power price and inert materials in waste were also conducted. The higher the landfill costs, the greater the
Reliability of a hybrid renewable energy system (HRES) strongly depends on various uncertainties affecting the amount of power produced by the system. In the design of systems subject to uncertainties, both deterministic and nondeterministic design approaches can be adopted. In a deterministic design approach, the designer considers the presence of uncertainties and incorporates them indirectly into the design by applying safety factors. It is assumed that, by employing suitable safety factors and considering worst-case-scenarios, reliable systems can be designed. In fact, the multi-objective optimisation problem with two objectives of reliability and cost is reduced to a single-objective optimisation problem with the objective of cost only. In this paper the competence of deterministic design methods in size optimisation of reliable standalone wind–PV–battery, wind–PV–diesel and wind–PV–battery–diesel configurations is examined. For each configuration, first, using different values of safety factors, the optimal size of the system components which minimises the system cost is found deterministically. Then, for each case, using a Monte Carlo simulation, the effect of safety factors on the reliability and the cost are investigated. In performing reliability analysis, several reliability measures, namely, unmet load, blackout durations (total, maximum and average) and mean time between failures are considered. It is shown that the traditional methods of considering the effect of uncertainties in deterministic designs such as design for an autonomy period and employing safety factors have either little or unpredictable impact on the actual reliability of the designed wind–PV–battery configuration. In the case of wind–PV–diesel and wind–PV–battery–diesel configurations it is shown that, while using a high-enough margin of safety in sizing diesel generator leads to reliable systems, the optimum value for this margin of safety leading to a
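The Monte Carlo reliability check described above can be sketched as follows (the hourly resource statistics and component models below are crude illustrative assumptions, not the paper's data): size the system deterministically, then estimate the unmet-load fraction by simulating hourly operation over a year.

```python
import numpy as np

def unmet_load_probability(pv_kw, wind_kw, battery_kwh, load_kw=10.0,
                           hours=8760, seed=0):
    """Monte Carlo estimate of the fraction of hours with unmet load for a
    standalone wind-PV-battery system (toy resource statistics, assumed)."""
    rng = np.random.default_rng(seed)
    # crude hourly capacity factors -- illustrative distributions only
    solar = pv_kw * np.clip(rng.normal(0.25, 0.15, hours), 0.0, 1.0)
    wind = wind_kw * np.clip(rng.weibull(2.0, hours) / 2.0, 0.0, 1.0)
    soc = battery_kwh / 2.0                    # battery starts half full
    unmet_hours = 0
    for p_solar, p_wind in zip(solar, wind):
        balance = p_solar + p_wind - load_kw   # kWh surplus/deficit this hour
        if balance >= 0.0:
            soc = min(battery_kwh, soc + balance)
        else:
            draw = min(soc, -balance)
            soc -= draw
            if draw < -balance - 1e-9:         # battery could not cover deficit
                unmet_hours += 1
    return unmet_hours / hours
```

Running this for a deterministically sized system and for the same system scaled by a safety factor makes the point of the abstract concrete: the safety factor changes the cost predictably, but its effect on the simulated reliability must be checked, not assumed.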
Szkup Peter L
2012-03-01
Full Text Available Abstract Introduction In the two cases described here, the subclavian artery was inadvertently cannulated during unsuccessful access to the internal jugular vein. The puncture was successfully closed using a closure device based on a collagen plug (Angio-Seal, St Jude Medical, St Paul, MN, USA). This technique is relatively simple and inexpensive. It can provide clinicians, such as intensive care physicians and anesthesiologists, with a safe and straightforward alternative to major surgery and can be a life-saving procedure. Case presentation In the first case, an anesthetist attempted ultrasound-guided access to the right internal jugular vein during the preoperative preparation of a 66-year-old Caucasian man. A 7-French (Fr) triple-lumen catheter was inadvertently placed into his arterial system. In the second case, an emergency physician inadvertently placed a 7-Fr catheter into the subclavian artery of a 77-year-old Caucasian woman whilst attempting access to her right internal jugular vein. Both arterial punctures were successfully closed by means of a percutaneous closure device (Angio-Seal). No complications were observed. Conclusions Inadvertent subclavian arterial puncture can be successfully managed with no adverse clinical sequelae by using a percutaneous vascular closure device. This minimally invasive technique may be an option for patients with non-compressible arterial punctures. This report demonstrates two practical points that may help clinicians in decision-making during daily practice. First, it provides a practical solution to a well-known vascular complication. Second, it emphasizes a role for proper vascular ultrasound training for the non-radiologist.
In future planned accelerator-driven subcritical systems, as well as in some recent related experiments, the neutron source to be used will be a pulsed accelerator. For such cases the application of the Feynman-alpha method for measuring the reactivity is not straightforward. The dependence of the Feynman Y(T) curve (variance-to-mean minus unity) on the measurement time T will show quasi-periodic ripples, corresponding to the periodicity of the source intensity. Correspondingly, the analytical solution will become much more complicated. One can perform such a pulsed Feynman-alpha measurement in two different ways: either by synchronizing the start of each measurement block with the pulses (deterministic pulsing) or by not synchronizing (random pulsing). The variance-to-mean has been derived analytically for both cases and reported briefly in previous publications. However, two different methods were used and the two cases were reported separately. In this paper we give a unified treatment and a comparative analysis of the two cases. It is found that the stochastic pulsing leads to an analytic solution that is much simpler than that for the deterministic case, and the relationship between the pulsed and continuous source is much more straightforward than in the deterministic case. However, the amplitude of the ripples, constituting a deviation of the pulsed Feynman Y curve from the smooth curve corresponding to the traditional constant source case, is much larger for the stochastic pulsing than for the deterministic one. The reasons for this are also analyzed in the paper. The results are in agreement with recent measurements, made by other groups in the European Community-supported project MUSE
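The quantity underlying the method, Y(T), is the variance-to-mean ratio of gated detector counts minus one. It can be estimated directly from a list of detection times; the sketch below is a minimal illustration using whole gates only (function and parameter names are our own, not from the publications discussed):

```python
import numpy as np

def feynman_y(event_times, gate_width, t_max):
    """Empirical Feynman Y(T): variance-to-mean of gated counts, minus one.

    event_times : detection time stamps (same unit as gate_width)
    gate_width  : measurement block length T
    t_max       : total measurement time; only whole gates are used
    For an uncorrelated (Poisson) source Y(T) -> 0; correlated fission
    chains, or a pulsed source, make Y(T) deviate from zero."""
    n_gates = int(t_max // gate_width)           # drop the partial last gate
    edges = np.arange(n_gates + 1) * gate_width
    counts, _ = np.histogram(event_times, bins=edges)
    return counts.var() / counts.mean() - 1.0
```

For a synthetic Poisson event train the estimate stays close to zero, which is a convenient sanity check before applying the estimator to real pulsed-source data.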
Primality deterministic and primality probabilistic tests
Alfredo Rizzi
2007-10-01
Full Text Available In this paper the author comments on the importance of prime numbers in mathematics and in cryptography. He recalls the seminal research of Euler, Fermat, Legendre, Riemann and other scholars. There are many expressions that generate prime numbers; among them, Mersenne primes have interesting properties. There are also many conjectures that still have to be proved or rejected. Deterministic primality tests are algorithms that establish whether a number is prime or not. They are not applicable in many practical situations, for instance in public-key cryptography, because the computation time would be too long. Probabilistic primality tests allow one to test the null hypothesis that the number is prime. The paper comments on the most important statistical tests.
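As an illustration of such a probabilistic test, here is a minimal sketch of Miller-Rabin, one of the standard choices in public-key cryptography (the paper discusses the class of tests, not this particular algorithm):

```python
import random

def is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin probabilistic primality test.

    Tests the null hypothesis 'n is prime'; a composite n survives a
    single round with probability at most 1/4, so after `rounds`
    independent rounds the error probability is at most 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)                 # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # a is a witness: n is composite
    return True                          # prime with high probability
```

The deterministic alternatives (trial division, AKS) give certainty but are far slower on the key sizes used in practice, which is the trade-off the paper describes.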
Deterministic cavity quantum electrodynamics with trapped ions
We have employed radio-frequency trapping to localize a single 40Ca+-ion in a high-finesse optical cavity. By means of laser Doppler cooling, the position spread of the ion's wavefunction along the cavity axis was reduced to 42 nm, a fraction of the resonance wavelength of ionized calcium (λ = 397 nm). By controlling the position of the ion in the optical field, continuous and completely deterministic coupling of ion and field was realized. The precise three-dimensional location of the ion in the cavity was measured by observing the fluorescent light emitted upon excitation in the cavity field. The single-ion system is ideally suited to implement cavity quantum electrodynamics under cw conditions. To this end we operate the cavity on the D3/2-P1/2 transition of 40Ca+ (λ = 866 nm). Applications include the controlled generation of single-photon pulses with high efficiency and two-ion quantum gates.
Deterministic effects of interventional radiology procedures
The purpose of this paper is to describe deterministic radiation injuries reported to the Food and Drug Administration (FDA) that resulted from therapeutic, interventional procedures performed under fluoroscopic guidance, and to investigate the procedure or equipment-related factors that may have contributed to the injury. Reports submitted to the FDA under both mandatory and voluntary reporting requirements which described radiation-induced skin injuries from fluoroscopy were investigated. Serious skin injuries, including moist desquamation and tissues necrosis, have occurred since 1992. These injuries have resulted from a variety of interventional procedures which have required extended periods of fluoroscopy compared to typical diagnostic procedures. Facilities conducting therapeutic interventional procedures need to be aware of the potential for patient radiation injury and take appropriate steps to limit the potential for injury. (author)
Deterministic polishing from theory to practice
Hooper, Abigail R.; Hoffmann, Nathan N.; Sarkas, Harry W.; Escolas, John; Hobbs, Zachary
2015-10-01
Improving predictability in optical fabrication can go a long way towards increasing profit margins and maintaining a competitive edge in an economic environment where pressure is mounting for optical manufacturers to cut costs. A major source of hidden cost is rework - the share of production that does not meet specification in the first pass through the polishing equipment. Rework substantially adds to the part's processing and labor costs as well as bottlenecks in production lines and frustration for managers, operators and customers. The polishing process consists of several interacting variables including: glass type, polishing pads, machine type, RPM, downforce, slurry type, Baumé level and even the operators themselves. Adjusting the process to get every variable under control while operating in a robust space can not only provide a deterministic polishing process which improves profitability but also produce a higher quality optic.
Targeted activation in deterministic and stochastic systems
Eisenhower, Bryan; Mezić, Igor
2010-02-01
Metastable escape is ubiquitous in many physical systems and is becoming a concern in engineering design as these designs (e.g., swarms of vehicles, coupled building energetics, nanoengineering, etc.) become more inspired by dynamics of biological, molecular and other natural systems. In light of this, we study a chain of coupled bistable oscillators which has two global conformations and we investigate how specialized or targeted disturbance is funneled in an inverse energy cascade and ultimately influences the transition process between the conformations. We derive a multiphase averaged approximation to these dynamics which illustrates the influence of actions in modal coordinates on the coarse behavior of this process. An activation condition that predicts how the disturbance influences the rate of transition is then derived. The prediction tools are derived for deterministic dynamics and we also present analogous behavior in the stochastic setting and show a divergence from Kramers activation behavior under targeted activation conditions.
Mechanics From Newton's Laws to Deterministic Chaos
Scheck, Florian
2010-01-01
This book covers all topics in mechanics from elementary Newtonian mechanics, the principles of canonical mechanics and rigid body mechanics to relativistic mechanics and nonlinear dynamics. It was among the first textbooks to include dynamical systems and deterministic chaos in due detail. As compared to the previous editions the present fifth edition is updated and revised with more explanations, additional examples and sections on Noether's theorem. Symmetries and invariance principles, the basic geometric aspects of mechanics as well as elements of continuum mechanics also play an important role. The book will enable the reader to develop general principles from which equations of motion follow, to understand the importance of canonical mechanics and of symmetries as a basis for quantum mechanics, and to get practice in using general theoretical concepts and tools that are essential for all branches of physics. The book contains more than 120 problems with complete solutions, as well as some practical exa...
Deterministic polarization chaos from a laser diode
Virte, Martin; Thienpont, Hugo; Sciamanna, Marc
2014-01-01
Fifty years after the invention of the laser diode and forty years after the report of the butterfly effect - i.e. the unpredictability of deterministic chaos - it is commonly said that a laser diode behaves like a damped nonlinear oscillator; hence no chaos can be generated without additional forcing or parameter modulation. Here we report the first counter-example of a free-running laser diode generating chaos. The underlying physics is a nonlinear coupling between two elliptically polarized modes in a vertical-cavity surface-emitting laser. We identify chaos in experimental time-series and show theoretically the bifurcations leading to single- and double-scroll attractors with characteristics similar to Lorenz chaos. The reported polarization chaos resembles at first sight a noise-driven mode hopping but shows opposite statistical properties. Our findings open up new research areas that combine the high speed performances of microcavity lasers with controllable and integrated sources of optical chaos.
Deterministic seismic hazard macrozonation of India
Sreevalsa Kolathayar; T G Sitharam; K S Vipin
2012-10-01
Earthquakes are known to have occurred in the Indian subcontinent from ancient times. This paper presents the results of seismic hazard analysis of India (6°–38°N and 68°–98°E) based on the deterministic approach using the latest seismicity data (up to 2010). The hazard analysis was done using two different source models (linear sources and point sources) and 12 well recognized attenuation relations considering varied tectonic provinces in the region. The earthquake data obtained from different sources were homogenized and declustered, and a total of 27,146 earthquakes of moment magnitude 4 and above were listed in the study area. The seismotectonic map of the study area was prepared by considering the faults, lineaments and the shear zones which are associated with earthquakes of magnitude 4 and above. A new program was developed in MATLAB for smoothing of the point sources. For assessing the seismic hazard, the study area was divided into small grids of size 0.1° × 0.1° (approximately 10 × 10 km), and the hazard parameters were calculated at the center of each of these grid cells by considering all the seismic sources within a radius of 300 to 400 km. Rock level peak horizontal acceleration (PHA) and spectral accelerations for periods 0.1 and 1 s have been calculated for all the grid points with a deterministic approach using a code written in MATLAB. Epistemic uncertainty in hazard definition has been tackled within a logic-tree framework considering two types of sources and three attenuation models for each grid point. The hazard evaluation has also been performed without the logic-tree approach for comparison of the results. The contour maps showing the spatial variation of hazard values are presented in the paper.
Deterministic-random separation in nonstationary regime
Abboud, D.; Antoni, J.; Sieg-Zieba, S.; Eltabach, M.
2016-02-01
In rotating machinery vibration analysis, the synchronous average is perhaps the most widely used technique for extracting periodic components. Periodic components are typically related to gear vibrations, misalignments, unbalances, blade rotations, reciprocating forces, etc. Their separation from other random components is essential in vibration-based diagnosis in order to discriminate useful information from masking noise. However, synchronous averaging theoretically requires the machine to operate under a stationary regime (i.e. the related vibration signals are cyclostationary) and is otherwise jeopardized by the presence of amplitude and phase modulations. A first object of this paper is to investigate the nature of the nonstationarity induced by the response of a linear time-invariant system subjected to speed-varying excitation. For this purpose, the concept of a cyclo-non-stationary signal is introduced, which extends the class of cyclostationary signals to speed-varying regimes. Next, a "generalized synchronous average" (GSA) is designed to extract the deterministic part of a cyclo-non-stationary vibration signal, i.e. the analog of the periodic part of a cyclostationary signal. Two estimators of the GSA have been proposed. The first one returns the synchronous average of the signal at predefined discrete operating speeds. A brief statistical study of it is performed, aiming to provide the user with confidence intervals that reflect the "quality" of the estimator according to the SNR and the estimated speed. The second estimator returns a smoothed version of the former by enforcing continuity over the speed axis. It helps to reconstruct the deterministic component by tracking a specific trajectory dictated by the speed profile (assumed to be known a priori). The proposed method is validated first on synthetic signals and then on actual industrial signals. The usefulness of the approach is demonstrated on envelope-based diagnosis of bearings in variable
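The classical synchronous average that the paper generalizes can be sketched in a few lines for an integer period in samples; this is a simplified, stationary-regime illustration, not the GSA estimators proposed in the paper:

```python
import numpy as np

def synchronous_average(signal, period):
    """Classical synchronous average: fold a signal over an integer
    period (in samples) and average the folded revolutions.

    The periodic (deterministic) part survives; zero-mean random
    components shrink as 1/sqrt(number of revolutions)."""
    n_rev = len(signal) // period
    folded = signal[: n_rev * period].reshape(n_rev, period)
    return folded.mean(axis=0)
```

Applied to a noisy periodic signal, the average converges to one clean period; under varying speed the period in samples is no longer constant, which is exactly the failure mode motivating the cyclo-non-stationary extension.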
Simple deterministically constructed cycle reservoirs with regular jumps.
Rodan, Ali; Tiňo, Peter
2012-07-01
A new class of state-space models, reservoir models, with a fixed state transition structure (the "reservoir") and an adaptable readout from the state space, has recently emerged as a way for time series processing and modeling. Echo state network (ESN) is one of the simplest, yet powerful, reservoir models. ESN models are generally constructed in a randomized manner. In our previous study (Rodan & Tiňo, 2011), we showed that a very simple, cyclic, deterministically generated reservoir can yield performance competitive with standard ESN. In this contribution, we extend our previous study in three aspects. First, we introduce a novel simple deterministic reservoir model, cycle reservoir with jumps (CRJ), with highly constrained weight values, that has superior performance to standard ESN on a variety of temporal tasks of different origin and characteristics. Second, we elaborate on the possible link between reservoir characterizations, such as eigenvalue distribution of the reservoir matrix or pseudo-Lyapunov exponent of the input-driven reservoir dynamics, and the model performance. It has been suggested that a uniform coverage of the unit disk by such eigenvalues can lead to superior model performance. We show that despite highly constrained eigenvalue distribution, CRJ consistently outperforms ESN (which has much more uniform eigenvalue coverage of the unit disk). Also, unlike in the case of ESN, pseudo-Lyapunov exponents of the selected optimal CRJ models are consistently negative. Third, we present a new framework for determining the short-term memory capacity of linear reservoir models to a high degree of precision. Using the framework, we study the effect of shortcut connections in the CRJ reservoir topology on its memory capacity. PMID:22428595
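The constrained CRJ topology is simple enough to write down directly. The sketch below builds only the reservoir coupling matrix (input weights and readout are omitted); the weight values and the handling of the cycle boundary here are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def crj_reservoir(n, rc, rj, jump):
    """Weight matrix of a cycle reservoir with jumps (CRJ):
    a unidirectional cycle with a single weight rc, plus bidirectional
    'jump' links of a single weight rj every `jump` units.

    Only jumps that fit wholly on the cycle are added in this sketch."""
    W = np.zeros((n, n))
    for i in range(n):
        W[(i + 1) % n, i] = rc            # cycle edge i -> i+1
    for i in range(0, n - n % jump, jump):
        j = (i + jump) % n
        W[j, i] = rj                      # jump links, both directions
        W[i, j] = rj
    return W
```

All non-zero entries take one of two values, which is the point: despite this highly constrained weight structure (and hence constrained eigenvalue distribution), the paper reports performance superior to randomly generated ESN reservoirs.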
C. BORGES, Rodrigo
2012-01-01
The duality between signal and noise is widely observed in the scientific context of the 20th century. Noise is retrospectively associated with nuisance and annoyance, and was even subjectively defined as a non-signal. Definition, however, takes noise away from its original meaning, turns it into signal and keeps this duality alive through time. This work treats the subject as a matter of perception, more specifically, as a matter of two different listening experiences for deterministic and non ...
Atomic routing in a deterministic queuing model
T.L. Werth
2014-03-01
We also consider the makespan objective (arrival time of the last user) and show that optimal solutions and Nash equilibria in these games, where every user selfishly tries to minimize her travel time, can be found efficiently.
MIMO capacity for deterministic channel models: sublinear growth
Bentosela, Francois; Cornean, Horia; Marchetti, Nicola
2013-01-01
This is the second paper by the authors in a series concerned with the development of a deterministic model for the transfer matrix of a MIMO system. In our previous paper, we started from the Maxwell equations and described the generic structure of such a deterministic transfer matrix. In the...
Chaos in discrete maps, deterministic scattering, and nondifferentiable functions
Arguments in favor of the nondifferentiability with respect to initial data of some functions associated with deterministic discrete-time dynamical systems are presented. A correspondence between a discrete-time dynamical system and a deterministic scattering model is found and used to interpret nondifferentiability conditions. A connection with random walks is also found
Recognition of deterministic ETOL languages in logarithmic space
Jones, Neil D.; Skyum, Sven
1977-01-01
It is shown that if G is a deterministic ETOL system, there is a nondeterministic log space algorithm to determine membership in L(G). Consequently, every deterministic ETOL language is recognizable in polynomial time. As a corollary, all context-free languages of finite index, and all Indian par...
FP/FIFO scheduling: coexistence of deterministic and probabilistic QoS guarantees
Pascale Minet
2007-01-01
Full Text Available In this paper, we focus on applications having quantitative QoS (Quality of Service) requirements on their end-to-end response time (or jitter). We propose a solution allowing the coexistence of two types of quantitative QoS guarantees, deterministic and probabilistic, while providing a high resource utilization. Our solution combines the advantages of the deterministic approach and the probabilistic one. The deterministic approach is based on a worst case analysis. The probabilistic approach uses a mathematical model to obtain the probability that the response time exceeds a given value. We assume that flows are scheduled according to non-preemptive FP/FIFO. The packet with the highest fixed priority is scheduled first. If two packets share the same priority, the packet arrived first is scheduled first. We make no particular assumption concerning the flow priority and the nature of the QoS guarantee requested by the flow. An admission control derived from these results is then proposed, allowing each flow to receive a quantitative QoS guarantee adapted to its QoS requirements. An example illustrates the merits of the coexistence of deterministic and probabilistic QoS guarantees.
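The FP/FIFO discipline described above is easy to simulate. The sketch below is a minimal, illustrative model (names and data layout are our own): packets are (arrival, priority, service) triples, a lower priority number means higher priority, and service is non-preemptive with FIFO tie-breaking:

```python
import heapq

def fp_fifo_schedule(packets):
    """Non-preemptive FP/FIFO scheduling on a single server.

    packets: list of (arrival_time, priority, service_time);
    lower priority number = higher priority, FIFO among equals.
    Returns the completion time of each packet, in input order."""
    order = sorted(range(len(packets)), key=lambda i: packets[i][0])
    done = [0.0] * len(packets)
    ready = []                     # heap of (priority, arrival rank, index)
    t, k, finished = 0.0, 0, 0
    while finished < len(packets):
        while k < len(order) and packets[order[k]][0] <= t:
            heapq.heappush(ready, (packets[order[k]][1], k, order[k]))
            k += 1
        if not ready:              # server idle: jump to next arrival
            t = packets[order[k]][0]
            continue
        _, _, i = heapq.heappop(ready)   # highest priority, then FIFO
        t += packets[i][2]               # run to completion
        done[i] = t
        finished += 1
    return done
```

Running such a simulation alongside the paper's worst-case and probabilistic bounds is a quick way to check that observed response times stay within the deterministic guarantee.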
Astakhov, Sergey V., E-mail: s.v.astakhov@gmail.com [Saratov State University, 410012 Saratov (Russian Federation); Anishchenko, Vadim S., E-mail: wadim@info.sgu.ru [Saratov State University, 410012 Saratov (Russian Federation)
2012-11-01
The relation between Lyapunov exponents, the Kolmogorov–Sinai entropy (KS-entropy) and the Afraimovich–Pesin dimension (AP-dimension) has been numerically analyzed in one- and two-dimensional chaotic maps. In our simulations we show that without noise the AP-dimension corresponds to the KS-entropy. In the presence of noise, the AP-dimension corresponds to the relative metric entropy. Since in a deterministic case the relative metric entropy corresponds to the KS-entropy the obtained results enable us to conclude that for considered chaotic maps of different dimension the AP-dimension corresponds to the relative metric entropy in both deterministic and stochastic cases.
A deterministic algorithm for fitting a step function to a weighted point-set
Fournier, Hervé
2013-02-01
Given a set of n points in the plane, each point having a positive weight, and an integer k > 0, we present an optimal O(n log n)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance to the input points. It matches the expected time bound of the best known randomized algorithm for this problem. Our approach relies on Cole's improved parametric searching technique. As a direct application, our result yields the first O(n log n)-time algorithm for computing a k-center of a set of n weighted points on the real line. © 2012 Elsevier B.V.
In this work, 22 cases of deterministic radiation effects were presented in patients who underwent catheterization procedures under X-ray fluoroscopy. Evaluation of the results suggests that most of the patients received potential skin entrance doses over 2 Gy and that some of them may have received doses over 12 Gy. At these doses, radiation-induced erythema, ulceration and necrosis are all possible complications if the same entrance skin surface is exposed for the duration of the procedure
Melkikh, Alexei V.
2004-01-01
The possibility of a complicated internal structure of an elementary particle was analyzed. In this case a particle may represent a quantum computer with many degrees of freedom. It was shown that the probability of new species formation by means of random mutations is negligibly small. A deterministic model of evolution is considered. According to this model, DNA nucleotides can change their state under the control of the elementary particle's internal degrees of freedom.
M. Barbolini; Keylock, C. J.
2002-01-01
The purpose of the present paper is to propose a new method for avalanche hazard mapping using a combination of statistical and deterministic modelling tools. The methodology is based on frequency-weighted impact pressure, and uses an avalanche dynamics model embedded within a statistical framework. The outlined procedure provides a useful way for avalanche experts to produce hazard maps for the typical case of avalanche sites where histor...
Convertito, V.; Istituto Nazionale di Geofisica e Vulcanologia, Sezione OV, Napoli, Italia; Emolo, A.; Dipartimento di Scienze Fisiche Universita` degli Studi “Federico II” di Napoli; Zollo, A.; Dipartimento di Scienze Fisiche Universita` degli Studi “Federico II” di Napoli
2006-01-01
Probabilistic seismic hazard analysis (PSHA) is classically performed through the Cornell approach by using a uniform earthquake distribution over the source area and a given magnitude range. This study aims at extending the PSHA approach to the case of a characteristic earthquake scenario associated with an active fault. The approach integrates PSHA with a high-frequency deterministic technique for the prediction of peak and spectral ground motion parameters in a characteristi...
Deterministically Driven Avalanche Models of Solar Flares
Strugarek, Antoine; Joseph, Richard; Pirot, Dorian
2014-01-01
We develop and discuss the properties of a new class of lattice-based avalanche models of solar flares. These models are readily amenable to a relatively unambiguous physical interpretation in terms of slow twisting of a coronal loop. They share similarities with other avalanche models, such as the classical stick-slip self-organized critical model of earthquakes, in that they are driven globally by a fully deterministic energy loading process. The model design leads to a systematic deficit of small scale avalanches. In some portions of model space, mid-size and large avalanching behavior is scale-free, being characterized by event size distributions that have the form of power-laws with index values, which, in some parameter regimes, compare favorably to those inferred from solar EUV and X-ray flare data. For models using conservative or near-conservative redistribution rules, a population of large, quasiperiodic avalanches can also appear. Although without direct counterparts in the observational global st...
Simple Deterministically Constructed Recurrent Neural Networks
Rodan, Ali; Tiňo, Peter
A large number of models for time series processing, forecasting or modeling follows a state-space formulation. Models in the specific class of state-space approaches, referred to as Reservoir Computing, fix their state-transition function. The state space with the associated state transition structure forms a reservoir, which is supposed to be sufficiently complex so as to capture a large number of features of the input stream that can be potentially exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents us from performing a deeper theoretical investigation of the dynamical properties of successful reservoirs. Reservoir construction is largely driven by a series of (more-or-less) ad-hoc randomized model building stages, with both the researchers and practitioners having to rely on a series of trials and errors. We show that a very simple deterministically constructed reservoir with simple cycle topology gives performances comparable to those of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proved theoretical limit.
A Deterministic Approach to Earthquake Prediction
Vittorio Sgrigna
2012-01-01
Full Text Available The paper aims at giving suggestions for a deterministic approach to investigate possible earthquake prediction and warning. A fundamental contribution can come by observations and physical modeling of earthquake precursors aiming at seeing in perspective the phenomenon earthquake within the framework of a unified theory able to explain the causes of its genesis, and the dynamics, rheology, and microphysics of its preparation, occurrence, postseismic relaxation, and interseismic phases. Studies based on combined ground and space observations of earthquake precursors are essential to address the issue. Unfortunately, up to now, what is lacking is the demonstration of a causal relationship (with explained physical processes) and the search for a correlation between data gathered simultaneously and continuously by space observations and ground-based measurements. In doing this, modern and/or new methods and technologies have to be adopted to try to solve the problem. Coordinated space- and ground-based observations imply available test sites on the Earth surface to correlate ground data, collected by appropriate networks of instruments, with space ones detected on board Low-Earth-Orbit (LEO) satellites. Moreover, a new strong theoretical scientific effort is necessary to try to understand the physics of the earthquake.
Deterministic Random Walks on Regular Trees
Cooper, Joshua; Friedrich, Tobias; Spencer, Joel; 10.1002/rsa.20314
2010-01-01
Jim Propp's rotor router model is a deterministic analogue of a random walk on a graph. Instead of distributing chips randomly, each vertex serves its neighbors in a fixed order. Cooper and Spencer (Comb. Probab. Comput. (2006)) show a remarkable similarity of both models. If an (almost) arbitrary population of chips is placed on the vertices of a grid Z^d and does a simultaneous walk in the Propp model, then at all times and on each vertex, the number of chips on this vertex deviates from the expected number the random walk would have gotten there by at most a constant. This constant is independent of the starting configuration and the order in which each vertex serves its neighbors. This result raises the question if all graphs do have this property. With quite some effort, we are now able to answer this question negatively. For the graph being an infinite k-ary tree (k ≥ 3), we show that for any deviation D there is an initial configuration of chips such that after running the Propp model for a ...
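A one-dimensional version of the Propp machine is easy to simulate. In the sketch below, each vertex of Z splits its chips between its two neighbours as evenly as a deterministic rule allows, with a rotor state deciding who receives the odd chip (data layout and names are illustrative):

```python
from collections import defaultdict

def propp_step(chips, rotors):
    """One parallel step of the rotor-router (Propp) model on the integers.

    chips  : dict vertex -> number of chips
    rotors : dict vertex -> 0 or 1 (0 means the next chip goes right).
    Serving c chips alternately starting from rotor state r sends
    (c + 1 - r) // 2 chips right and the rest left."""
    new = defaultdict(int)
    for v, c in chips.items():
        r = rotors.get(v, 0)
        right = (c + 1 - r) // 2        # rotor alternates chip by chip
        new[v + 1] += right
        new[v - 1] += c - right
        rotors[v] = (r + c) % 2         # rotor state after serving c chips
    return {u: n for u, n in new.items() if n}, rotors
```

On Z (and on grids Z^d) the chip counts stay within a constant of the random-walk expectation, which is the Cooper-Spencer result; the paper shows this bounded-deviation property fails on k-ary trees.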
Traffic chaotic dynamics modeling and analysis of deterministic network
Wu, Weiqiang; Huang, Ning; Wu, Zhitao
2016-07-01
Network traffic is an important and directly acting factor of network reliability and performance. To understand the behaviors of network traffic, chaotic dynamics models were proposed and have helped to analyze nondeterministic networks a great deal. Previous research held that chaotic dynamics behavior was caused by random factors, and that deterministic networks would not exhibit chaotic dynamics behavior because they lack random factors. In this paper, we first adopted chaos theory to analyze traffic data collected from an avionics full-duplex switched Ethernet (AFDX, a typical deterministic network) testbed, and found that chaotic dynamics behavior also exists in a deterministic network. Then, in order to explore the chaos generating mechanism, we applied mean field theory to construct a traffic dynamics equation (TDE) for deterministic network traffic modeling without any network random factors. Through studying the derived TDE, we propose that chaotic dynamics is one of the natural properties of network traffic, and that it can also be viewed as the effect of the TDE control parameters. A network simulation was performed and the results verified that network congestion results in chaotic dynamics for a deterministic network, which is consistent with the expectation from the TDE. Our research will be helpful for analyzing the complicated traffic dynamics behavior of deterministic networks and will contribute to network reliability design and analysis.
Minimal Poems Written in 1979 Minimal Poems Written in 1979
Sandra Sirangelo Maggio
2008-04-01
Full Text Available The reading of M. van der Slice's Minimal Poems Written in 1979 (the work, actually, has no title) reminded me of a book I saw a long time ago, called Truth, which had not even a single word printed inside. In either case we have a sample of how often eccentricities can prove efficient means of artistic creativity, in this new literary trend known as Minimalism.
Secret Writing on Dirty Paper: A Deterministic View
El-Halabi, Mustafa; Georghiades, Costas
2011-01-01
Recently there has been a lot of success in using a deterministic approach to provide approximate characterizations of capacity for Gaussian networks. In this paper, we take a deterministic view and revisit the problem of the wiretap channel with side information. A precise characterization of the secrecy capacity is obtained for a linear deterministic model, which naturally suggests a coding scheme which we show to achieve the secrecy capacity of the Gaussian model (dubbed "secret writing on dirty paper") to within (1/2) log 3 bits.
Equivalence relations between deterministic and quantum mechanical systems
Several quantum mechanical models are shown to be equivalent to certain deterministic systems because a basis can be found in terms of which the wave function does not spread. This suggests that the apparently indeterministic behavior typical of a quantum mechanical world can be the result of locally deterministic laws of physics. We show how certain deterministic systems allow the construction of a Hilbert space and a Hamiltonian such that at long distance scales they may appear to behave as quantum field theories, including interactions but as yet without a mass term. These observations are suggested to be useful for building theories at the Planck scale.
Deterministic versus stochastic trends: Detection and challenges
Fatichi, S.; Barbosa, S. M.; Caporali, E.; Silva, M. E.
2009-09-01
The detection of a trend in a time series and the evaluation of its magnitude and statistical significance is an important task in geophysical research. This importance is amplified in climate change contexts, since trends are often used to characterize long-term climate variability and to quantify the magnitude and statistical significance of changes in climate time series, at both global and local scales. Recent studies have demonstrated that the stochastic behavior of a time series can change the statistical significance of a trend, especially if the time series exhibits long-range dependence. The present study examines the trends in time series of daily average temperature recorded at 26 stations in the Tuscany region (Italy). In this study a new framework for trend detection is proposed. First, two parametric statistical tests, the Phillips-Perron test and the Kwiatkowski-Phillips-Schmidt-Shin test, are applied in order to test for trend-stationary and difference-stationary behavior in the temperature time series. Then long-range dependence is assessed using different approaches, including wavelet analysis, heuristic methods, and fitting fractionally integrated autoregressive moving average models. The trend detection results are further compared with the results obtained using nonparametric trend detection methods: the Mann-Kendall, Cox-Stuart and Spearman's ρ tests. This study confirms an increase in uncertainty when pronounced stochastic behaviors are present in the data. Nevertheless, for approximately one third of the analyzed records, the stochastic behavior itself cannot explain the long-term features of the time series, and a deterministic positive trend is the most likely explanation.
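The nonparametric step of such a trend-detection framework is simple to sketch: the Mann-Kendall statistic is, up to normalization, Kendall's tau between the series and time, so scipy's kendalltau can serve as a stand-in. The synthetic warming series, slope, and significance level below are illustrative assumptions, not the study's Tuscany data.

```python
import numpy as np
from scipy.stats import kendalltau

def mann_kendall_trend(series, alpha=0.05):
    """Nonparametric monotonic-trend test via Kendall's tau against time.

    A significantly positive tau indicates an increasing trend (equivalent
    in spirit to the Mann-Kendall test mentioned in the abstract).
    """
    t = np.arange(len(series))
    tau, p_value = kendalltau(t, series)
    return tau, p_value, p_value < alpha

# Synthetic daily-temperature-like series: weak linear trend plus noise.
rng = np.random.default_rng(0)
n = 365 * 10                                   # ten years of daily values
series = 15.0 + 0.0005 * np.arange(n) + rng.normal(0.0, 2.0, n)

tau, p, significant = mann_kendall_trend(series)
```

Note that, as the abstract warns, this test assumes independent residuals; long-range dependence inflates the false-positive rate, which is why the study pairs it with stationarity tests and ARFIMA modeling.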
Understanding Vertical Jump Potentiation: A Deterministic Model.
Suchomel, Timothy J; Lamont, Hugh S; Moir, Gavin L
2016-06-01
This review article discusses previous postactivation potentiation (PAP) literature and provides a deterministic model for vertical jump (i.e., squat jump, countermovement jump, and drop/depth jump) potentiation. A number of factors must be considered when designing an effective strength-power potentiation complex (SPPC) focused on vertical jump potentiation. Sport scientists and practitioners must consider the characteristics of the subject being tested and the design of the SPPC itself. Subject characteristics that must be considered when designing an SPPC focused on vertical jump potentiation include the individual's relative strength, sex, muscle characteristics, neuromuscular characteristics, current fatigue state, and training background. Aspects of the SPPC that must be considered for vertical jump potentiation include the potentiating exercise, level and rate of muscle activation, volume load completed, the ballistic or non-ballistic nature of the potentiating exercise, and the rest interval(s) used following the potentiating exercise. Sport scientists and practitioners should seek SPPCs that are practical regarding the equipment needed and the rest interval required for a potentiated performance. If practitioners would like to incorporate PAP as a training tool, they must take athletes' training time restrictions into account, as a number of previous SPPCs have been shown to require long rest periods before potentiation can be realized. Thus, practitioners should seek SPPCs that may be effectively implemented in training and that do not require excessive rest intervals that take away from valuable training time. Practitioners may decrease the time needed to realize potentiation by improving their subjects' relative strength. PMID:26712510
Longevity, Growth and Intergenerational Equity - The Deterministic Case
Andersen, Torben M.; Gestsson, Marias Halldór
Challenges raised by ageing (increasing longevity) have prompted policy debates featuring policy proposals justified by reference to some notion of intergenerational equity. However, very different policies ranging from pre-savings to indexation of retirement ages have been justified in this way. … We develop an overlapping generations model in continuous time which encompasses different generations with different mortality rates and thus longevity. Allowing for both trend increases in longevity and productivity, we address the issue of intergenerational equity under a utilitarian criterion … when future generations are better off in terms of both material and non-material well-being. Increases in productivity and longevity are shown to have very different implications for intergenerational distribution. …
Longevity, Growth and Intergenerational Equity: The Deterministic Case
Andersen, Torben M.; Gestsson, Marias Halldór
2016-01-01
Challenges raised by aging (increasing longevity) have prompted policy debates featuring policy proposals justified by reference to some notion of intergenerational equity. However, very different policies ranging from presavings to indexation of retirement ages have been justified in this way. We...
Thumati, Prafulla; Reddy, K. Raghavendra
2013-01-01
Tooth wear and discoloration are normal processes in the lifetime of an individual. Severe wear and discoloration can result in cosmetic concern and loss of vertical dimension. These problems can best be treated with a fixed prosthesis. This case describes management using the concept of Minimally Invasive Cosmetic Dentistry (MICD) with Ceramopolymer as the restorative material. Computer Guided Occlusal Analysis (CGOA) was used to establish uniform occlusal force distribution. Case ...
A Method to Separate Stochastic and Deterministic Information from Electrocardiograms
Gutiérrez, R. M.
2004-01-01
In this work we present a new idea for developing a method to separate the stochastic and deterministic information contained in an electrocardiogram (ECG), which may provide new sources of information for diagnostic purposes. We assume that the ECG carries information corresponding to many different processes related to cardiac activity, as well as contamination from different sources related to the measurement procedure and the nature of the observed system itself. The method starts with the application of an improved archetypal analysis to separate the aforementioned stochastic and deterministic information. From the stochastic point of view we analyze Renyi entropies, and from the deterministic perspective we calculate the autocorrelation function and the corresponding correlation time. We show that healthy and pathologic information may be stochastic and/or deterministic, and can be identified by different measures and located in different parts of the ECG.
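The deterministic-side quantities named above, the autocorrelation function and correlation time, can be sketched in a few lines of numpy. The AR(1) surrogate signal and the 1/e-crossing definition of correlation time are illustrative assumptions; the paper's ECG data and exact estimator are not given in the abstract.

```python
import numpy as np

def autocorrelation(x):
    """Normalized sample autocorrelation function of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
    return acf / acf[0]

def correlation_time(x):
    """Lag at which the ACF first drops below 1/e (one common definition)."""
    acf = autocorrelation(x)
    below = np.where(acf < 1.0 / np.e)[0]
    return int(below[0]) if below.size else len(acf)

# Exponentially correlated AR(1) surrogate with a known correlation time:
# the theoretical ACF is phi**lag, crossing 1/e near lag 1/(-ln 0.95) ~ 20.
rng = np.random.default_rng(1)
phi = 0.95
x = np.zeros(5000)
for i in range(1, len(x)):
    x[i] = phi * x[i - 1] + rng.normal()

tau_c = correlation_time(x)
```

A slowly decaying ACF (large correlation time) points to persistent, deterministic-looking structure, while a rapid drop to the noise floor is the signature of the stochastic component the method aims to separate out.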
Pseudo-random number generator based on asymptotic deterministic randomness
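A generic chaos-based pseudorandom bit generator built from a piecewise linear map can be sketched as follows. The tent map, the threshold symbolization, and the parameter values here are illustrative assumptions, not the paper's asymptotic-deterministic-randomness construction, which additionally passes the dynamics through a noninvertible nonlinearity and works in a discretized state space.

```python
def tent_map(x, mu=1.9999):
    """Piecewise linear (tent) map on [0, 1]; mu just under 2 keeps the
    orbit chaotic while avoiding floating-point collapse at mu = 2."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def prbg(seed, n_bits, burn_in=100):
    """Derive a bit sequence from the symbolic dynamics of the tent map:
    each iterate is mapped to a symbol by thresholding at 1/2."""
    x = seed
    for _ in range(burn_in):          # discard the transient
        x = tent_map(x)
    bits = []
    for _ in range(n_bits):
        x = tent_map(x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

bits = prbg(seed=0.123456789, n_bits=1000)
```

The abstract's point is that a plain scheme like this one is vulnerable to symbolic-dynamics and entropy attacks, because the symbol sequence of an invertible-enough map leaks state information; the multi-value correspondence of the asymptotic deterministic randomness system is what breaks that leakage.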
A novel approach to generate the pseudorandom-bit sequence from the asymptotic deterministic randomness system is proposed in this Letter. We study the characteristic of multi-value correspondence of the asymptotic deterministic randomness constructed by the piecewise linear map and the noninvertible nonlinearity transform, and then give the discretized systems in the finite digitized state space. The statistic characteristics of the asymptotic deterministic randomness are investigated numerically, such as stationary probability density function and random-like behavior. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos based PRBGs and increase the resistance against entropy attacks and symbolic dynamics attacks