Deterministic Discrepancy Minimization
Bansal, N.; Spencer, J.
2013-01-01
We derandomize a recent algorithmic approach due to Bansal (Foundations of Computer Science, FOCS, pp. 3–10, 2010) to efficiently compute low-discrepancy colorings for several problems, for which only existential results were previously known. In particular, we give an efficient deterministic algorithm …
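To make the object of study concrete, here is a small Python sketch (not the paper's algorithm): it defines the discrepancy of a set system under a ±1 coloring and brute-forces the best coloring of a toy instance. The set system itself is invented for illustration.

```python
from itertools import product

def discrepancy(coloring, sets):
    """Maximum imbalance |sum of colors| over all sets, colors in {-1, +1}."""
    return max(abs(sum(coloring[i] for i in s)) for s in sets)

# A tiny, made-up set system on 4 elements.
sets = [{0, 1}, {1, 2, 3}, {0, 2}, {1, 3}]

# Brute-force the minimum discrepancy over all 2^4 colorings.
best = min(discrepancy(chi, sets) for chi in product((-1, 1), repeat=4))
print(best)
```

Brute force is only feasible for tiny instances; the point of such algorithmic results is to reach provably low discrepancy deterministically in polynomial time.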
Asinari, Pietro
2010-01-01
The homogeneous isotropic Boltzmann equation (HIBE) is a fundamental dynamic model for many applications in thermodynamics, econophysics and sociodynamics. Despite recent hardware improvements, the solution of the Boltzmann equation remains extremely challenging from the computational point of view, in particular by deterministic methods (free of stochastic noise). This work aims to improve a deterministic direct method recently proposed [V.V. Aristov, Kluwer Academic Publishers, 2001] for solving the HIBE with a generic collisional kernel and, in particular, for taking care of the late dynamics of the relaxation towards the equilibrium. Essentially (a) the original problem is reformulated in terms of particle kinetic energy (exact particle number and energy conservation during microscopic collisions) and (b) the computation of the relaxation rates is improved by the DVM-like correction, where DVM stands for Discrete Velocity Model (ensuring that the macroscopic conservation laws are exactly satisfied). Both ...
Sochi, Taha
2014-01-01
Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton, and Global) are investigated in conjunction with the energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all these algorithms for all these types of fluid agree very well with the analytically derived solutions obtained from the traditional methods, which are based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of flow dynamics systems. The investigation also enriches the methods of Computational Fluid Dynamics for solving the flow fields in tubes and networks for various types of Newtonian ...
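The energy-minimization idea can be illustrated on the simplest possible network: a flow Q split between two parallel branches with hypothetical resistances R1 and R2. Minimizing the quadratic dissipation recovers the classical split q* = Q·R2/(R1+R2). The sketch below uses a plain ternary search rather than any of the optimizers named in the abstract.

```python
def dissipation(q, Q, R1, R2):
    """Viscous dissipation for flow split q / (Q - q) between two branches."""
    return R1 * q**2 + R2 * (Q - q)**2

def ternary_min(f, lo, hi, iters=200):
    """Minimize a unimodal function on [lo, hi] by ternary search."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

Q, R1, R2 = 1.0, 2.0, 3.0                 # illustrative values
q_star = ternary_min(lambda q: dissipation(q, Q, R1, R2), 0.0, Q)
q_exact = Q * R2 / (R1 + R2)              # from dE/dq = 0: equal pressure drops
print(q_star, q_exact)
```

The optimizer reproduces the split that the conservation-based analysis gives directly, which is the agreement the abstract reports for far richer networks and rheologies.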
D'Silva, Joseph; Loutherback, Kevin; Austin, Robert; Sturm, James
2013-03-01
Deterministic lateral displacement arrays have been used to separate circulating tumor cells (CTCs) from diluted whole blood at flow rates up to 10 mL/min (K. Loutherback et al., AIP Advances, 2012). However, the throughput is limited to 2 mL equivalent volume of undiluted whole blood due to clogging of the array. Since the concentration of CTCs can be as low as 1-10 cells/mL in clinical samples, processing larger volumes of blood is necessary for diagnostic and analytical applications. We have identified platelet activation by the micro-post array as the primary cause of this clogging. In this talk, we (i) show that clogging occurs at the beginning of the micro-post array and not in the injector channels because both acceleration and deceleration in fluid velocity are required for clogging to occur, and (ii) demonstrate how reduction in platelet concentration and decrease in platelet contact time within the device can be used in combination to achieve a 10x increase in the equivalent volume of undiluted whole blood processed. Finally, we discuss experimental efforts to separate the relative contributions of contact activated coagulation and shear-induced platelet activation to clogging and approaches to minimize these, such as surface treatment and post geometry design.
Stochastic and deterministic multiscale models for systems biology: an auxin-transport case study
Directory of Open Access Journals (Sweden)
King John R
2010-03-01
Abstract Background Stochastic and asymptotic methods are powerful tools in developing multiscale systems biology models; however, little has been done in this context to compare the efficacy of these methods. The majority of current systems biology modelling research, including that of auxin transport, uses numerical simulations to study the behaviour of large systems of deterministic ordinary differential equations, with little consideration of alternative modelling frameworks. Results In this case study, we solve an auxin-transport model using analytical methods, deterministic numerical simulations and stochastic numerical simulations. Although the three approaches in general predict the same behaviour, the approaches provide different information that we use to gain distinct insights into the modelled biological system. We show in particular that the analytical approach readily provides straightforward mathematical expressions for the concentrations and transport speeds, while the stochastic simulations naturally provide information on the variability of the system. Conclusions Our study provides a constructive comparison which highlights the advantages and disadvantages of each of the considered modelling approaches. This will prove helpful to researchers when weighing up which modelling approach to select. In addition, the paper goes some way to bridging the gap between these approaches, which in the future we hope will lead to integrative hybrid models.
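The deterministic-versus-stochastic comparison can be reproduced in miniature on a birth-death process dx/dt = K − D·x (the rate constants here are illustrative, not from the auxin model): forward-Euler integration gives the deterministic steady state K/D, while a Gillespie simulation of the same kinetics additionally exposes the fluctuations around it.

```python
import random

K, D = 10.0, 1.0                  # production and degradation rates (illustrative)

# Deterministic: forward-Euler integration of dx/dt = K - D*x.
x, dt = 0.0, 1e-3
for _ in range(20000):            # integrate to t = 20, well past the transient
    x += dt * (K - D * x)

# Stochastic: Gillespie simulation of the same birth-death process.
random.seed(1)
n, t, t_end = 0, 0.0, 2000.0
acc = 0.0                         # time-weighted sum of n for the time average
while t < t_end:
    rate = K + D * n              # total propensity: birth K, death D*n
    wait = random.expovariate(rate)
    acc += n * min(wait, t_end - t)
    t += wait
    if random.random() < K / rate:
        n += 1                    # birth event
    else:
        n -= 1                    # death event
mean_n = acc / t_end
print(x, mean_n)
```

Both approaches agree on the mean (K/D = 10 here), but only the stochastic run carries information about variability, which is the trade-off the case study examines.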
A Case for Dynamic Reverse-code Generation to Debug Non-deterministic Programs
Directory of Open Access Journals (Sweden)
Jooyong Yi
2013-09-01
Backtracking (i.e., reverse execution) helps the user of a debugger to naturally think backwards along the execution path of a program, and thinking backwards makes it easy to locate the origin of a bug. So far backtracking has been implemented mostly by state saving or by checkpointing. These implementations, however, inherently do not scale. Meanwhile, a more recent backtracking method based on reverse-code generation seems promising because executing reverse code can restore the previous states of a program without state saving. In the literature, two methods that generate reverse code can be found: (a) static reverse-code generation, which pre-generates reverse code through static analysis before starting a debugging session, and (b) dynamic reverse-code generation, which generates reverse code by applying dynamic analysis on the fly during a debugging session. In particular, we espoused the latter in our previous work to accommodate non-determinism of a program caused by, e.g., multi-threading. To demonstrate the usefulness of our dynamic reverse-code generation, this article presents a case study of various backtracking methods including ours. We compare the memory usage of various backtracking methods in a simple but nontrivial example, a bounded-buffer program. In the case of non-deterministic programs such as this bounded-buffer program, our dynamic reverse-code generation outperforms the existing backtracking methods in terms of memory efficiency.
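The core idea of dynamic reverse-code generation can be sketched in a few lines of Python (a toy instruction set, not the authors' system): as each instruction executes, its inverse is recorded on the fly. Reversible updates like `add` need no state saving at all; only destructive `set` instructions save the value they overwrite.

```python
def run_forward(program, state):
    """Execute (op, var, value) instructions, emitting reverse code on the fly."""
    reverse = []
    for op, var, val in program:
        if op == "add":
            state[var] += val
            reverse.append(("add", var, -val))        # inverse of += is -=
        elif op == "set":
            reverse.append(("set", var, state[var]))  # destructive: save old value
            state[var] = val
    reverse.reverse()              # undo in the opposite order
    return reverse

def run(program, state):
    """Plain execution, used here to replay the generated reverse code."""
    for op, var, val in program:
        if op == "add":
            state[var] += val
        else:
            state[var] = val

state = {"x": 1, "y": 2}
snapshot = dict(state)             # only for checking; the method itself needs no copy
prog = [("add", "x", 5), ("set", "y", 40), ("add", "y", 2)]
reverse = run_forward(prog, state)
run(reverse, state)                # backtrack by executing the reverse code
print(state == snapshot)
```

The memory cost is one saved value per destructive instruction rather than a full checkpoint, which is the scaling advantage the article quantifies on the bounded-buffer program.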
Deterministic small-world network model based on minimal Cayley graph
Institute of Scientific and Technical Information of China (English)
刘艳霞; 奚建清; 张芩
2014-01-01
The study of deterministic small-world network models is an important branch of complex network modeling. By analyzing the relationship between the minimality of Cayley graphs and the small-world property, this paper proposes a deterministic model for constructing small-world networks based on minimal Cayley graphs. The model selects a minimal Cayley graph satisfying the required conditions and appropriately expands its generating set, constructing a class of highly symmetric, structurally regular small-world networks. The results show that, unlike existing models, this model can construct constant-degree or non-constant-degree networks as required, and the generated networks not only have high clustering coefficients and low network diameter but are also vertex-transitive, with important applications in the topology design of communication networks, structured P2P overlay networks, and other practical domains.
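A minimal Python sketch of the construction's flavor (the group and generating set are an illustrative choice, not taken from the paper): the Cayley graph of Z_24 with generators ±1 and ±5 is vertex-transitive, so a BFS from any one node gives the graph diameter.

```python
from collections import deque

def circulant_distances(n, gens, src=0):
    """BFS distances in the Cayley graph of Z_n with generating set gens
    (gens must be closed under negation so the graph is undirected)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for g in gens:
            v = (u + g) % n
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Illustrative choice: Z_24 with generators +/-1 (ring) and +/-5 (chords).
n, gens = 24, (1, -1, 5, -5)
dist = circulant_distances(n, gens)
diameter = max(dist.values())      # vertex-transitive: one eccentricity suffices
print(diameter)
```

Adding the ±5 chords to the 24-cycle drops the diameter from 12 to 4 while keeping the graph regular and node-symmetric, which is the small-world effect the model exploits.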
McArt, J A A; Nydam, D V; Overton, M W
2015-03-01
The purpose of this study was to develop a deterministic economic model to estimate (1) the component cost per case of hyperketonemia (HYK) and (2) the total cost per case of HYK when accounting for costs related to HYK-attributed diseases. Data from the current literature were used to model the incidence and risks of HYK (defined as a blood β-hydroxybutyrate concentration ≥1.2 mmol/L), displaced abomasa (DA), metritis, disease associations, milk production, culling, and reproductive outcomes. The component cost of HYK was estimated based on 1,000 calvings per year; the incidence of HYK in primiparous and multiparous animals; the percent of animals receiving clinical treatment; the direct costs of diagnostics, therapeutics, labor, and death loss; and the indirect costs of future milk production losses, future culling losses, and reproduction losses. Costs attributable to DA and metritis were estimated based on the incidence of each disease in the first 30 DIM; the number of cases of each disease attributable to HYK; the direct costs of diagnostics, therapeutics, discarded milk during treatment and the withdrawal period, veterinary service (DA only), and death loss; and the indirect costs of future milk production losses, future culling losses, and reproduction losses. The component cost per case of HYK was estimated at $134 and $111 for primiparous and multiparous animals, respectively; the average component cost per case of HYK was estimated to be $117. Thirty-four percent of the component cost of HYK was due to future reproductive losses, 26% to death loss, 26% to future milk production losses, 8% to future culling losses, 3% to therapeutics, 2% to labor, and 1% to diagnostics. The total cost per case of HYK was estimated at $375 and $256 for primiparous and multiparous animals, respectively; the average total cost per case of HYK was $289. Forty-one percent of the total cost of HYK was due to the component cost of HYK, 33% to costs
Accuracy of probabilistic and deterministic record linkage: the case of tuberculosis
Directory of Open Access Journals (Sweden)
Gisele Pinto de Oliveira
2016-01-01
ABSTRACT OBJECTIVE To analyze the accuracy of deterministic and probabilistic record linkage to identify TB duplicate records, as well as the characteristics of discordant pairs. METHODS The study analyzed all TB records from 2009 to 2011 in the state of Rio de Janeiro. A deterministic record linkage algorithm was developed using a set of 70 rules, based on the combination of fragments of the key variables with or without modification (Soundex or substring). Each rule was formed by three or more fragments. The probabilistic approach required a cutoff point for the score, above which the links would be automatically classified as belonging to the same individual. The cutoff point was obtained by linkage of the Notifiable Diseases Information System – Tuberculosis database with itself, followed by manual review and by ROC and precision-recall curves. Sensitivity and specificity were calculated for the accuracy analysis. RESULTS Accuracy ranged from 87.2% to 95.2% for sensitivity and 99.8% to 99.9% for specificity for probabilistic and deterministic record linkage, respectively. The occurrence of missing values for the key variables and the low percentage of similarity measure for name and date of birth were mainly responsible for the failure to identify records of the same individual with the techniques used. CONCLUSIONS The two techniques showed a high level of correlation for pair classification. Although deterministic linkage identified more duplicate records than probabilistic linkage, the latter retrieved records not identified by the former. User need and experience should be considered when choosing the best technique to be used.
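A deterministic rule of the kind described, combining a phonetic name code with an exact key variable, can be sketched as follows. The Soundex routine implements standard American Soundex; the single matching rule is a simplified stand-in for the paper's set of 70 rules.

```python
def soundex(name):
    """American Soundex code: first letter plus three digits."""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    name = name.upper()
    out = name[0]
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        if ch not in "HW":          # H and W do not separate equal codes
            prev = code
    return (out + "000")[:4]

def deterministic_match(rec_a, rec_b):
    """One illustrative linkage rule: phonetic name agreement + exact birth date."""
    return (soundex(rec_a["name"]) == soundex(rec_b["name"])
            and rec_a["dob"] == rec_b["dob"])

a = {"name": "Robert", "dob": "1970-03-02"}
b = {"name": "Rupert", "dob": "1970-03-02"}
print(deterministic_match(a, b))
```

Phonetic coding lets a deterministic rule tolerate spelling variation ("Robert"/"Rupert" share code R163), which is exactly the kind of modification the 70 rules apply to fragments of the key variables.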
Probabilistic and Deterministic Seismic Hazard Assessment: A Case Study in Babol
Directory of Open Access Journals (Sweden)
H.R. Tavakoli
2013-01-01
Earthquake ground motion parameters are important in the seismic design of structures and in the vulnerability and risk assessment of these structures against earthquake damage. The damage addressed by earthquake engineering and seismology is assessed in terms of its social and economic consequences. This paper determines the seismic hazard in Babol via deterministic and probabilistic methods. Deterministic and probabilistic methods are practical tools for mutual control of results and for overcoming the weaknesses of either approach used alone. In the deterministic approach, the strong-motion parameters are estimated for the maximum credible earthquake, assumed to occur at the closest possible distance from the site of interest, without considering the likelihood of its occurrence during a specified exposure period. On the other hand, the probabilistic approach integrates the effects of all earthquakes expected to occur at different locations during a specified life period, with the associated uncertainties and randomness taken into account. The calculated bedrock horizontal and vertical peak ground acceleration (PGA) for different return periods for the study area are presented.
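The deterministic step can be sketched numerically: pick the maximum credible earthquake and the closest source-to-site distance, then evaluate a ground-motion attenuation relation. The coefficients below are hypothetical placeholders, not a published ground-motion model for Babol.

```python
import math

def pga_attenuation(M, R, a=-3.5, b=0.9, c=1.2, h=10.0):
    """Hypothetical attenuation relation ln(PGA[g]) = a + b*M - c*ln(R + h).
    Coefficients are illustrative only, not from any published model."""
    return math.exp(a + b * M - c * math.log(R + h))

# Deterministic scenario: maximum credible earthquake at the closest distance.
M_max, R_closest = 7.2, 15.0       # assumed scenario values (km for distance)
pga_det = pga_attenuation(M_max, R_closest)
print(pga_det)
```

A probabilistic analysis would instead integrate such a relation over all magnitude-distance pairs weighted by their occurrence rates, which is the complementarity the abstract describes.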
Obendorf, Hartmut
2009-01-01
The notion of minimalism is proposed as a theoretical tool supporting a more differentiated understanding of reduction, and thus as a standpoint from which aspects of simplicity can be defined. This book traces the development of minimalism, defines four types of minimalism in interaction design, and looks at how to apply them.
Minimally Invasive Surgical Treatment of Acute Epidural Hematoma: Case Series
Directory of Open Access Journals (Sweden)
Weijun Wang
2016-01-01
Background and Objective. Although minimally invasive surgical treatment of acute epidural hematoma attracts increasing attention, no generalized indications for the surgery have been adopted. This study aimed to evaluate the effects of minimally invasive surgery in acute epidural hematoma with various hematoma volumes. Methods. Minimally invasive puncture and aspiration surgery was performed in 59 cases of acute epidural hematoma with various hematoma volumes (13–145 mL); postoperative follow-up was 3 months. Clinical data, including surgical trauma, surgery time, complications, and outcome of hematoma drainage, recovery, and Barthel index scores, were assessed, as well as treatment outcome. Results. Surgical trauma was minimal and surgery time was short (10–20 minutes); no anesthesia accidents or surgical complications occurred. Two patients died. Drainage was completed within 7 days in the remaining 57 cases. Barthel index scores of ADL were ≤40 (n=1), 41–60 (n=1), and >60 (n=55); scores of 100 were obtained in 48 cases, with no dysfunctions. Conclusion. Satisfactory results can be achieved with minimally invasive surgery in treating acute epidural hematoma with hematoma volumes ranging from 13 to 145 mL. For patients with hematoma volume >50 mL and even cerebral herniation, flexible application of minimally invasive surgery would help improve treatment efficacy.
Barbouchi, Meriem; Chokmani, Karem; Ben Aissa, Nadhira; Lhissou, Rachid; El Harti, Abderrazak; Abdelfattah, Riadh
2013-04-01
Soil salinization hazard in semi-arid regions such as Central Morocco is increasingly affecting arable lands due to the combined effects of anthropogenic activities (development of irrigation) and climate change (multiplying drought episodes). In a rational strategy to fight this hazard, salinity mapping is a key step to ensure effective spatiotemporal monitoring. The objective of this study is to test the effectiveness of the geostatistical approach in mapping soil salinity compared to more straightforward deterministic interpolation methods. Three soil salinity sampling campaigns (27 September, 24 October and 19 November 2011) were conducted over the irrigated area of the Tadla plain, situated between the High and Middle Atlas in Central Morocco. Each campaign consisted of 38 surface soil samples (upper 5 cm). From each sample the electrical conductivity (EC) was determined in saturated paste extract and used subsequently as a proxy of soil salinity. The potential of deterministic interpolation methods (IDW) and geostatistical techniques (Ordinary Kriging) in mapping surface soil salinity was evaluated in a GIS environment through a cross-validation technique. Field measurements showed that soil salinity was generally low except during the second campaign, when a significant increase in EC values was recorded. Interpolation results showed a better performance with the geostatistical approach than with the deterministic one. Indeed, for all the campaigns, cross-validation yielded lower RMSE and bias for Kriging than for IDW. However, the performance of the two methods was dependent on the range and the structure of the spatial variability of salinity. Indeed, Kriging showed better accuracy for the second campaign in comparison with the two others. This could be explained by the wider range of values of soil salinity during this campaign, which resulted in a greater range of spatial dependence and better modeling of the spatial variability of salinity, which was
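The deterministic baseline (IDW) and the cross-validation comparison can be sketched in plain Python. The coordinates and EC values below are invented for illustration, and Kriging is omitted since it requires a fitted variogram model.

```python
def idw(x0, y0, samples, power=2.0):
    """Inverse-distance-weighted estimate at (x0, y0) from (x, y, value) samples."""
    num = den = 0.0
    for x, y, v in samples:
        d2 = (x - x0) ** 2 + (y - y0) ** 2
        if d2 == 0.0:
            return v                     # exact at a sampled location
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

def loo_rmse(samples):
    """Leave-one-out cross-validation RMSE, as used to compare interpolators."""
    errs = [(idw(x, y, [s for s in samples if s != (x, y, v)]) - v) ** 2
            for x, y, v in samples]
    return (sum(errs) / len(errs)) ** 0.5

# Hypothetical EC observations (x, y, EC in dS/m), not the Tadla data.
ec = [(0, 0, 1.2), (1, 0, 1.5), (0, 1, 1.1), (1, 1, 1.9), (0.5, 0.5, 1.4)]
print(loo_rmse(ec))
```

IDW is a weighted average, so estimates always stay within the observed range; the same leave-one-out RMSE computed for a Kriging predictor is what the study uses to rank the two methods.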
Directory of Open Access Journals (Sweden)
Sukanti Rout
2015-04-01
In this study an updated deterministic seismic hazard contour map of Bhubaneswar (20°12'0"N to 20°23'0"N latitude and 85°44'0"E to 85°54'0"E longitude), one of the major cities of India with tourist importance, has been prepared in the form of spectral acceleration values. For assessing the seismic hazard, the study area has been divided into small grids of size 30˝×30˝ (approximately 1.0 km×1.0 km), and the hazard parameters in terms of spectral acceleration at bedrock level (PGA) are calculated at the center of each of these grid cells by considering the regional seismotectonic activity within a 400 km radius around the city center. A maximum credible earthquake of moment magnitude 7.2 has been used for the hazard calculation, resulting in a PGA value of 0.017g towards the northeast side of the city and a corresponding maximum spectral acceleration of 0.0501g for a predominant period of 0.05 s at bedrock level.
Asinari, P.
2011-03-01
The Boltzmann equation is one of the most powerful paradigms for explaining transport phenomena in fluids. Since the early fifties, it has received a lot of attention due to aerodynamic requirements for high-altitude vehicles, vacuum technology requirements and, nowadays, micro-electro-mechanical systems (MEMS). Because of the intrinsic mathematical complexity of the problem, Boltzmann himself started his work by considering first the case when the distribution function does not depend on space (homogeneous case), but only on time and the magnitude of the molecular velocity (isotropic collisional integral). The interest in the homogeneous isotropic Boltzmann equation goes beyond simple dilute gases. In so-called econophysics, a Boltzmann-type model is sometimes introduced for studying the distribution of wealth in a simple market. Another recent application of the homogeneous isotropic Boltzmann equation is opinion-formation modeling in quantitative sociology, also called socio-dynamics or sociophysics. The present work [1] aims to improve the deterministic method for solving the homogeneous isotropic Boltzmann equation proposed by Aristov [2] by two ideas: (a) the homogeneous isotropic problem is reformulated first in terms of particle kinetic energy (this allows one to ensure exact particle number and energy conservation during microscopic collisions) and (b) a DVM-like correction (where DVM stands for Discrete Velocity Model) is adopted for improving the relaxation rates (this allows one to satisfy the conservation laws exactly at the macroscopic level, which is particularly important for describing the late dynamics of the relaxation towards equilibrium).
Minimally invasive treatment of hepatic adenoma in special cases
Energy Technology Data Exchange (ETDEWEB)
Nasser, Felipe; Affonso, Breno Boueri; Galastri, Francisco Leonardo [Hospital Israelita Albert Einstein, São Paulo, SP (Brazil); Odisio, Bruno Calazans [MD Anderson Cancer Center, Houston (United States); Garcia, Rodrigo Gobbo [Hospital Israelita Albert Einstein, São Paulo, SP (Brazil)
2013-07-01
Hepatocellular adenoma is a rare benign tumor that was increasingly diagnosed in the 1980s and 1990s. This increase has been attributed to the widespread use of oral hormonal contraceptives and the broader availability and advances of radiological tests. We report two cases of patients with large hepatic adenomas who were subjected to minimally invasive treatment using arterial embolization. One case underwent elective embolization due to the presence of multiple adenomas and recent bleeding in one of the nodules. The second case was a victim of blunt abdominal trauma with rupture of a hepatic adenoma and clinical signs of hemodynamic shock secondary to intra-abdominal hemorrhage, which required urgent treatment. The development of minimally invasive locoregional treatments, such as arterial embolization, introduced novel approaches for the treatment of individuals with hepatic adenoma. The mortality rate of emergency resection of ruptured hepatic adenomas varies from 5 to 10%, but this rate decreases to 1% when resection is elective. Arterial embolization of hepatic adenomas in the presence of bleeding is a subject of debate. This observation suggests a role for transarterial embolization in the treatment of ruptured and non-ruptured adenomas, which might reduce the indication for surgery in selected cases and decrease morbidity and mortality. Magnetic resonance imaging showed a reduction of the embolized lesions and significant avascular component 30 days after treatment in the two cases in this report. No novel lesions were observed, and a reduction in the embolized lesions was demonstrated upon radiological assessment at a 12-month follow-up examination.
A case of minimal change disease in a Fabry patient.
Zarate, Yuri A; Patterson, Larry; Yin, Hong; Hopkin, Robert J
2010-03-01
Fabry disease is an X-linked lysosomal storage disorder caused by mutations of the GLA gene and deficiency in alpha-galactosidase A activity. Glycosphingolipid accumulation causes renal injury that manifests early during childhood as tubular dysfunction and later in adulthood as proteinuria and renal insufficiency. Nephrotic syndrome as the first evidence of Fabry-related kidney damage is rare. We report the case of a teenager with known Fabry disease and normal renal function who developed acute nephrotic syndrome. He was found to have typical glycosphingolipid accumulation with no other findings suggestive of alternative causes of nephrotic syndrome on kidney biopsy. After treatment with enzyme replacement therapy and oral steroids, he went into complete remission from nephrotic syndrome, a response that is atypical for Fabry disease patients who develop heavy proteinuria as a result of longstanding disease and chronic renal injury. The nephrotic syndrome in this patient appears to have developed secondary to minimal change disease. We recommend considering immunotherapy in addition to enzyme replacement therapy in those patients with confirmed Fabry disease and acute nephrotic syndrome with clinical and microscopic findings suggestive of minimal change disease.
Minimal change disease: A case report of an unusual relationship.
Edrees, Fahad; Black, Robert M; Leb, Laszlo; Rennke, Helmut
2016-01-01
Kidney injury associated with lymphoproliferative disorders is rare, and the exact pathogenetic mechanisms behind it are still poorly understood. Glomerular involvement presenting as a nephrotic syndrome has been reported, usually secondary to membranoproliferative glomerulonephritis. We report a case of a 63-year-old male who presented with bilateral leg swelling due to nephrotic syndrome and acute kidney injury. A kidney biopsy showed minimal change disease with light chain deposition; however, no circulating light chains were present. This prompted a bone marrow biopsy, which showed chronic lymphocytic leukemia (CLL) with deposition of the same kappa monoclonal light chains. Three cycles of rituximab and methylprednisolone resulted in remission of both CLL and nephrotic syndrome, without recurrence during a three-year follow-up.
Modeling of deterministic chaotic systems
Energy Technology Data Exchange (ETDEWEB)
Lai, Y. [Department of Physics and Astronomy and Department of Mathematics, The University of Kansas, Lawrence, Kansas 66045 (United States); Grebogi, C. [Institute for Plasma Research, University of Maryland, College Park, Maryland 20742 (United States); Grebogi, C.; Kurths, J. [Department of Physics and Astrophysics, Universitaet Potsdam, Postfach 601553, D-14415 Potsdam (Germany)
1999-03-01
The success of deterministic modeling of a physical system relies on whether the solution of the model approximates the dynamics of the actual system. When the system is chaotic, situations can arise where periodic orbits embedded in the chaotic set have distinct numbers of unstable directions and, as a consequence, no model of the system produces reasonably long trajectories that are realized by nature. We argue and present physical examples indicating that, in such a case, though the model is deterministic and low dimensional, statistical quantities can still be reliably computed. © 1999 The American Physical Society
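The claim that statistical quantities remain computable even when individual trajectories cannot be tracked can be illustrated with the fully chaotic logistic map: two trajectories from seeds differing by 1e-9 diverge completely, yet their long-time averages both land near 1/2, the mean of the invariant density. (This is a generic illustration of the statistical point, not one of the paper's physical examples.)

```python
def logistic_trajectory_mean(x0, n, r=4.0, burn=1000):
    """Time average of the fully chaotic logistic map x -> r*x*(1-x)."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        total += x
    return total / n

# Nearby seeds: pointwise prediction fails, statistics survive.
m1 = logistic_trajectory_mean(0.1234, 100_000)
m2 = logistic_trajectory_mean(0.1234 + 1e-9, 100_000)
print(m1, m2)
```

Sensitive dependence makes the two orbits uncorrelated after a few dozen iterations, yet the ergodic averages agree to a few parts in a thousand.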
Inferring deterministic causal relations
Daniusis, Povilas; Mooij, Joris; Zscheischler, Jakob; Steudel, Bastian; Zhang, Kun; Schoelkopf, Bernhard
2012-01-01
We consider two variables that are related to each other by an invertible function. While it has previously been shown that the dependence structure of the noise can provide hints to determine which of the two variables is the cause, we presently show that even in the deterministic (noise-free) case, there are asymmetries that can be exploited for causal inference. Our method is based on the idea that if the function and the probability density of the cause are chosen independently, then the distribution of the effect will, in a certain sense, depend on the function. We provide a theoretical analysis of this method, showing that it also works in the low noise regime, and link it to information geometry. We report strong empirical results on various real-world data sets from different domains.
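A slope-based estimator in the spirit of the method (a simplified IGCI-style score with uniform reference measures, not the authors' exact implementation) looks like this in Python: after rescaling both variables to [0, 1], the mean log-slope is negative in the causal direction for a non-affine deterministic mechanism such as y = x³.

```python
import math

def igci_uniform(xs, ys):
    """Slope-based score: mean log |dy/dx| after rescaling both variables
    to [0, 1]. A lower score favours the direction X -> Y."""
    def rescale(v):
        lo, hi = min(v), max(v)
        return [(t - lo) / (hi - lo) for t in v]
    pairs = sorted(zip(rescale(xs), rescale(ys)))
    total, count = 0.0, 0
    for (x1, y1), (x2, y2) in zip(pairs, pairs[1:]):
        if x2 != x1 and y2 != y1:
            total += math.log(abs((y2 - y1) / (x2 - x1)))
            count += 1
    return total / count

# Deterministic, noise-free mechanism y = x^3 on a grid (illustrative data).
xs = [i / 200 for i in range(201)]
ys = [x ** 3 for x in xs]
c_xy = igci_uniform(xs, ys)   # score for X -> Y
c_yx = igci_uniform(ys, xs)   # score for Y -> X
print(c_xy, c_yx)
```

By Jensen's inequality the mean log-slope after rescaling is strictly negative for any non-affine monotone function, so the estimator picks X → Y even though the relation is exactly invertible and noise-free, which is the asymmetry the paper exploits.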
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2008-01-01
We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem.
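On an acyclic instance, solving a deterministic graphical game reduces to backward induction over max and min nodes. The sketch below (a toy game, with no chance nodes, as the definition requires) computes the value of a three-leaf game; the general algorithmic challenge the paper addresses comes from cycles and the comparison-based model.

```python
def solve(game, node):
    """Value of an acyclic deterministic graphical game by backward induction.
    Each node is ('max'|'min', children) or ('leaf', payoff)."""
    kind, data = game[node]
    if kind == "leaf":
        return data
    values = [solve(game, child) for child in data]
    return max(values) if kind == "max" else min(values)

# Tiny illustrative game: player Max moves at A, player Min at B.
game = {
    "A": ("max", ["B", "t3"]),
    "B": ("min", ["t1", "t2"]),
    "t1": ("leaf", 3),
    "t2": ("leaf", 1),
    "t3": ("leaf", 2),
}
print(solve(game, "A"))
```

Min at B secures payoff 1, so Max at A prefers the terminal worth 2; an equilibrium strategy profile can be read off the argmax/argmin choices along the way.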
Deterministic Walks with Choice
Energy Technology Data Exchange (ETDEWEB)
Beeler, Katy E.; Berenhaut, Kenneth S.; Cooper, Joshua N.; Hunter, Meagan N.; Barr, Peter S.
2014-01-10
This paper studies deterministic movement over toroidal grids, integrating local information, bounded memory and choice at individual nodes. The research is motivated by recent work on deterministic random walks, and applications in multi-agent systems. Several results regarding passing tokens through toroidal grids are discussed, as well as some open questions.
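The flavor of such deterministic walks can be shown with a rotor-router walk on a small torus (a standard deterministic analogue of the random walk, here without the bounded-memory "choice" mechanism the paper studies): each node serves its four neighbours in a fixed cyclic order, and the single walker still covers the grid.

```python
def rotor_walk(width, height, steps):
    """Deterministic rotor-router walk on a toroidal grid: each node cycles
    through its neighbours in a fixed order (N, E, S, W), remembering where
    it last sent the walker."""
    dirs = [(0, -1), (1, 0), (0, 1), (-1, 0)]    # N, E, S, W
    rotor = {}                                    # per-node pointer into dirs
    pos = (0, 0)
    visited = {pos}
    for _ in range(steps):
        r = (rotor.get(pos, -1) + 1) % 4          # advance this node's rotor
        rotor[pos] = r
        dx, dy = dirs[r]
        pos = ((pos[0] + dx) % width, (pos[1] + dy) % height)
        visited.add(pos)
    return visited

visited = rotor_walk(3, 3, 500)
print(len(visited))
```

Despite using no randomness, the rotor walk provably visits every node of a finite connected graph, and on this 3×3 torus 500 steps are far more than enough to cover all 9 nodes.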
Uniform deterministic dictionaries
DEFF Research Database (Denmark)
Ruzic, Milan
2008-01-01
We present a new analysis of the well-known family of multiplicative hash functions, and improved deterministic algorithms for selecting “good” hash functions. The main motivation is realization of deterministic dictionaries with fast lookups and reasonably fast updates. The model of computation...
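A member of the multiplicative family in question: h_a(x) = ((a·x) mod 2^W) >> (W − D) with a odd, mapping W-bit keys to D-bit bucket indices. The constant below is a common golden-ratio-derived choice, not necessarily one the paper's deterministic selection procedure would output.

```python
W = 64          # word size in bits
D = 10          # table of 2**D buckets

def mult_hash(x, a):
    """Multiplicative hashing: h_a(x) = floor((a*x mod 2^W) / 2^(W-D)).
    With a odd, this is the classical near-universal family."""
    return ((a * x) & ((1 << W) - 1)) >> (W - D)

a = 0x9E3779B97F4A7C15          # odd constant (golden-ratio based, common choice)
buckets = [mult_hash(x, a) for x in range(1000)]
print(min(buckets), max(buckets))
```

Selecting a "good" a deterministically, rather than at random, is exactly what makes dictionaries built on this family have worst-case rather than expected lookup guarantees.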
Minimally invasive approaches in pancreatic pseudocyst: a Case report
Directory of Open Access Journals (Sweden)
Rohollah Y
2009-09-01
Background: Given the importance of the postoperative period, admission duration, postoperative pain, and an acceptable rate of complications, minimally invasive endoscopic approaches to pancreatic pseudocyst management have become more popular, but the best choice of procedure and patient selection are currently not completely established. During the past decade endoscopic procedures have become the first choice in most authors' therapeutic plans; however, open surgery remains the gold standard in pancreatic pseudocyst treatment. Methods: We present a patient with a pancreatic pseudocyst unresponsive to conservative management who underwent endoscopic intervention before the 6th week, and we review the current literature to depict a schema for management navigation. Results: A 16-year-old male patient presented with two episodes of acute pancreatitis with abdominal pain, nausea and vomiting. Hyperamylasemia, pancreatic ascites and a pseudocyst were found in our preliminary investigation. Despite optimal conservative management, including NPO (nil per os) and total parenteral nutrition, after four weeks clinical and paraclinical findings deteriorated. Therefore, ERCP and transpapillary cannulation with placement of a 7Fr stent was
Minimally invasive repair of Morgagni hernia - A multicenter case series.
Lamas-Pinheiro, R; Pereira, J; Carvalho, F; Horta, P; Ochoa, A; Knoblich, M; Henriques, J; Henriques-Coelho, T; Correia-Pinto, J; Casella, P; Estevão-Costa, J
2016-01-01
Children may benefit from minimally invasive surgery (MIS) in the correction of Morgagni hernia (MH). The present study aims to evaluate the outcome of MIS through a multicenter study. National institutions that use MIS in the treatment of MH were included. Demographic, clinical and operative data were analyzed. Thirteen patients with MH (6 males) were operated using similar MIS technique (percutaneous stitches) at a mean age of 22.2±18.3 months. Six patients had chromosomopathies (46%), five with Down syndrome (39%). Respiratory complaints were the most common presentation (54%). Surgery lasted 95±23min. In none of the patients was the hernia sac removed; prosthesis was never used. In the immediate post-operative period, 4 patients (36%) were admitted to intensive care unit (all with Down syndrome); all patients started enteral feeds within the first 24h. With a mean follow-up of 56±16.6 months, there were two recurrences (18%) at the same institution, one of which was repaired with an absorbable suture; both with Down syndrome. The application of MIS in the MH repair is effective even in the presence of comorbidities such as Down syndrome; the latter influences the immediate postoperative recovery and possibly the recurrence rate. Removal of hernia sac does not seem necessary. Non-absorbable sutures may be more appropriate.
Lok, Ka-Ho; Hung, Hiu-Gong; Yip, Wai-Man; Li, Kin-Kong; Li, Kam-Fu; Szeto, Ming-Leung
2007-01-01
A 43-year-old Chinese patient with a history of psoriasis developed fulminant ulcerative colitis after immunosuppressive therapy for steroid-resistant minimal change disease was stopped. Minimal change disease in association with inflammatory bowel disease is a rare condition. We here report a case showing an association between ulcerative colitis, minimal change disease, and psoriasis. The possible pathological link between 3 diseases is discussed.
Gharibi, Wajeb
2011-01-01
In this paper, we focus on nonlinear infinite-norm minimization problems, which have many applications, especially in computer science and operations research. We develop a reliable Lagrangian dual approach for solving this class of problems in general and, based on this method, propose an algorithm for the mixed linear and nonlinear infinite-norm minimization cases, with numerical results.
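The core computational step behind infinite-norm minimization can be illustrated in one dimension, where minimizing max_i |a_i x - b_i| reduces to bisection on the objective value t, because for fixed t the feasible set is an intersection of intervals. This is only an illustrative sketch of the problem class, not the paper's Lagrangian dual method; the function name and the assumption a_i > 0 are ours.

```python
def minimize_inf_norm_1d(a, b, tol=1e-9):
    """Minimize t = max_i |a[i]*x - b[i]| over a scalar x by bisecting on t.
    Assumes every a[i] > 0, so each constraint is an interval in x."""
    def interval(t):
        # Intersect the intervals {x : |a_i*x - b_i| <= t}; None if empty.
        lo, hi = float("-inf"), float("inf")
        for ai, bi in zip(a, b):
            lo = max(lo, (bi - t) / ai)
            hi = min(hi, (bi + t) / ai)
        return (lo, hi) if lo <= hi else None

    t_lo, t_hi = 0.0, max(abs(bi) for bi in b)  # t_hi is feasible at x = 0
    while t_hi - t_lo > tol:
        mid = 0.5 * (t_lo + t_hi)
        if interval(mid) is None:
            t_lo = mid   # infeasible: the optimum is larger
        else:
            t_hi = mid   # feasible: the optimum is at most mid
    lo, hi = interval(t_hi)
    return t_hi, 0.5 * (lo + hi)
```

For a vector-valued x the same bisection idea works, with each feasibility check becoming a linear program.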
A case of cutaneous paragonimiasis presented with minimal pleuritis.
Singh, T Shantikumar; Devi, Kh Ranjana; Singh, S Rajen; Sugiyama, Hiromu
2012-07-01
Clinically, paragonimiasis is broadly classified into pulmonary, pleuropulmonary, and extrapulmonary forms. The common extrapulmonary forms are cerebral and cutaneous paragonimiasis. Cutaneous paragonimiasis usually presents as a slowly migrating, painless subcutaneous nodule. The correct diagnosis is often difficult, delayed, or missed until the nodule becomes enlarged and painful and the cause is investigated. We report here a case of cutaneous paragonimiasis in a male child who presented with mild respiratory symptoms. The diagnosis of paragonimiasis was based on a history of consumption of crabs, a positive specific serological test, and blood eosinophilia. The swelling and respiratory symptoms subsided after a prescribed course of praziquantel therapy.
Minimally invasive atlantoaxial fusion: cadaveric study and report of 5 clinical cases.
Srikantha, Umesh; Khanapure, Kiran S; Jagannatha, Aniruddha T; Joshi, Krishna C; Varma, Ravi G; Hegde, Alangar S
2016-12-01
OBJECTIVE Minimally invasive techniques are being increasingly used to treat disorders of the cervical spine. They have the potential to reduce postoperative neck discomfort subsequent to the extensive muscle dissection associated with conventional atlantoaxial fusion procedures. The aim of this paper was to elaborate on the technique and results of minimally invasive atlantoaxial fusion. METHODS Minimally invasive atlantoaxial fusion was done initially in 4 fresh-frozen cadavers and subsequently in 5 clinical cases. Clinical cases included patients with reducible atlantoaxial instability and undisplaced or minimally displaced odontoid fractures. The surgical technique is illustrated in detail. RESULTS Among the cadaveric specimens, all C-1 lateral mass screws were in the correct position and 2 of the 8 C-2 screws had a vertebral canal breach. Among the clinical cases, all C-1 lateral mass screws were in the correct position. Only one C-2 screw had a Grade 2 vertebral canal breach, which was clinically insignificant. None of the patients experienced neurological worsening or implant-related complications at follow-up. Evidence of rib graft fusion or C1-2 joint fusion was successfully demonstrated in 4 cases, and flexion-extension radiographs done at follow-up did not show mobility in any case. CONCLUSIONS Minimally invasive atlantoaxial fusion is a safe and effective alternative to the conventional approach in selected cases. Larger series with direct comparison to the conventional approach will be required to demonstrate the clinical benefit presumed to be associated with a minimally invasive approach.
LENUS (Irish Health Repository)
Fanning, D M
2009-02-03
INTRODUCTION: We report the first described case of minimal deviation adenocarcinoma of the uterine cervix in the setting of a female renal cadaveric transplant recipient. MATERIALS AND METHODS: A retrospective review of this clinical case was performed. CONCLUSION: This rare cancer represents only about 1% of all cervical adenocarcinomas.
Dark matter as a Bose-Einstein Condensate: the relativistic non-minimally coupled case
Energy Technology Data Exchange (ETDEWEB)
Bettoni, Dario; Colombo, Mattia; Liberati, Stefano, E-mail: bettoni@sissa.it, E-mail: mattia.colombo@studenti.unitn.it, E-mail: liberati@sissa.it [SISSA, Via Bonomea 265, Trieste, 34136 (Italy)
2014-02-01
Bose-Einstein condensates have recently been proposed as dark matter candidates. In order to characterize the phenomenology associated with such models, we extend previous investigations by studying the general case of a relativistic BEC on a curved background, including a non-minimal coupling to curvature. In particular, we discuss the possibility of a two-phase cosmological evolution: a cold dark matter-like phase at large scales/early times and a condensed phase inside dark matter halos. During the first phase, dark matter is described by a minimally coupled, weakly self-interacting scalar field, while in the second, dark matter condenses and, we shall argue, consequently develops the non-minimal coupling. Finally, we discuss how such a non-minimal coupling could provide a new mechanism for addressing cold dark matter paradigm issues at galactic scales.
The cointegrated vector autoregressive model with general deterministic terms
DEFF Research Database (Denmark)
Johansen, Søren; Nielsen, Morten Ørregaard
In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t)= Z(t) + Y(t), where Z(t) belongs to a large class...
The human ECG: nonlinear deterministic versus stochastic aspects
Kantz, H; Kantz, Holger; Schreiber, Thomas
1998-01-01
We discuss aspects of randomness and of determinism in electrocardiographic signals. In particular, we take a critical look at attempts to apply methods of nonlinear time series analysis derived from the theory of deterministic dynamical systems. We argue that deterministic chaos is not a likely explanation for the short-time variability of the inter-beat intervals, except in certain pathologies. Conversely, densely sampled full ECG recordings possess properties typical of deterministic signals. In the latter case, methods of deterministic nonlinear time series analysis can yield new insights.
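The kind of test alluded to here, asking whether a signal is better explained deterministically, can be sketched with a nearest-neighbour prediction error in delay-embedding space: a deterministic series (here the logistic map) is far more predictable from its neighbours than a random one. A toy illustration under our own choices of embedding and data, not the authors' analysis pipeline:

```python
import random

def nn_prediction_error(x, m=2):
    """Mean one-step prediction error using the nearest neighbour in an
    m-dimensional delay embedding: a crude test for determinism."""
    n = len(x)
    errs = []
    for i in range(m - 1, n - 1):
        best_j, best_d = None, float("inf")
        for j in range(m - 1, n - 1):
            if abs(i - j) <= m:            # skip temporally close neighbours
                continue
            d = sum((x[i - k] - x[j - k]) ** 2 for k in range(m))
            if d < best_d:
                best_d, best_j = d, j
        errs.append(abs(x[i + 1] - x[best_j + 1]))
    return sum(errs) / len(errs)

# Deterministic series: chaotic logistic map; stochastic series: uniform noise.
logistic = [0.3]
for _ in range(299):
    logistic.append(4.0 * logistic[-1] * (1.0 - logistic[-1]))
random.seed(0)
noise = [random.random() for _ in range(300)]
```

On these series the logistic map yields a much smaller prediction error than the noise, which is the qualitative signature such methods look for.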
Pinto, Rodrigo Carlos; Chambrone, Leandro; Colombini, Bella Luna; Ishikiriama, Sérgio Kiyoshi; Britto, Isabella Maria; Romito, Giuseppe Alexandre
2013-05-01
The decision-making process for the treatment of esthetic areas is based on the achievement of a healthy, harmonious, and pleasant smile. These conditions are directly associated with a solid knowledge of tooth anatomy and proportions, as well as the smile line, soft tissue morphology, and osseous architecture. To achieve these objectives, a multidisciplinary approach may be necessary to create long-term harmony between the final restoration and the adjacent teeth, and the health of the surrounding soft and hard tissues. This case report describes the application of a minimally invasive therapy on a 33-year-old woman seeking esthetic treatment. Minimally invasive periodontal plastic surgery associated with porcelain laminate veneers yielded satisfactory esthetics and minimal trauma to dental and periodontal tissues. Such a combined approach may be considered a viable option for the improvement of "white" and "red" esthetics.
Minimal change disease with acute renal failure: a case against the nephrosarca hypothesis.
Cameron, Mary Ann; Peri, Usha; Rogers, Thomas E; Moe, Orson W
2004-10-01
An unusual but well-documented presentation of minimal change disease is nephrotic proteinuria and acute renal failure. One pathophysiological mechanism proposed to explain this syndrome is nephrosarca, or severe oedema of the kidney. We describe a patient with minimal change disease who presented with heavy proteinuria and acute renal failure but had no evidence of renal interstitial oedema on biopsy. Aggressive fluid removal did not reverse the acute renal failure. Renal function slowly returned concomitant with resolution of the nephrotic syndrome following corticosteroid therapy. The time profile of the clinical events is not compatible with the nephrosarca hypothesis and suggests an alternative pathophysiological model for the diminished glomerular filtration rate seen in some cases of minimal change disease.
Deterministic Global Optimization
Scholz, Daniel
2012-01-01
This monograph deals with a general class of solution approaches in deterministic global optimization, namely the geometric branch-and-bound methods, which are popular algorithms in, for instance, Lipschitzian optimization, d.c. programming, and interval analysis. It also introduces a new concept for the rate of convergence and analyzes several bounding operations reported in the literature, from the theoretical as well as the empirical point of view. Furthermore, extensions of the prototype algorithm for multicriteria global optimization problems as well as mixed combinatorial optimization
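A minimal instance of the geometric branch-and-bound idea: boxes (here, intervals) are recursively split, each box gets a lower bound from a Lipschitz constant, and boxes whose bound cannot beat the incumbent are pruned. The function name and the specific bounding operation are our own sketch, not the monograph's prototype algorithm:

```python
import heapq

def lipschitz_bnb(f, L, lo, hi, tol=1e-6):
    """Minimize f on [lo, hi] given a Lipschitz constant L.
    Lower bound on a box [a, b]: f(midpoint) - L * (b - a) / 2."""
    mid = 0.5 * (lo + hi)
    best_x, best = mid, f(mid)
    heap = [(best - L * (hi - lo) / 2, lo, hi)]
    while heap:
        bound, a, b = heapq.heappop(heap)
        if bound >= best - tol:
            break                      # no remaining box can beat the incumbent
        for c, d in ((a, 0.5 * (a + b)), (0.5 * (a + b), b)):
            m = 0.5 * (c + d)
            fm = f(m)
            if fm < best:
                best, best_x = fm, m   # improve the incumbent
            child = fm - L * (d - c) / 2
            if child < best - tol:
                heapq.heappush(heap, (child, c, d))
    return best_x, best
```

The min-heap ensures the box with the smallest lower bound is examined first, so the loop can stop as soon as that bound is within tol of the incumbent.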
Generalized Deterministic Traffic Rules
Fuks, H; Fuks, Henryk; Boccara, Nino
1997-01-01
We study a family of deterministic models for highway traffic flow which generalize cellular automaton rule 184. This family is parametrized by the speed limit $m$ and another parameter $k$ that represents a ``degree of aggressiveness'' in driving, strictly related to the distance between two consecutive cars. We compare two driving strategies with identical maximum throughput: ``conservative'' driving with high speed limit and ``aggressive'' driving with low speed limit. Those two strategies are evaluated in terms of accident probability. We also discuss fundamental diagrams of generalized traffic rules and examine limitations of maximum achievable throughput. Possible modifications of the model are considered.
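Rule 184, the base model this family generalizes, is easy to state in code: each car advances one cell per time step iff the cell ahead is currently empty, synchronously and deterministically. A minimal sketch on a circular road (our own toy, corresponding to speed limit m = 1 and no aggressiveness parameter):

```python
def rule184_step(road):
    """One synchronous update of CA rule 184 on a ring: each car (1)
    moves one cell forward iff the next cell is currently empty."""
    n = len(road)
    new = [0] * n
    for i in range(n):
        if road[i] == 1:
            if road[(i + 1) % n] == 0:
                new[(i + 1) % n] = 1   # free ahead: move
            else:
                new[i] = 1             # blocked: stay
    return new

road = [1, 1, 0, 0, 1, 0, 0, 0]
for _ in range(4):
    road = rule184_step(road)          # car count is conserved every step
```

Because the update depends only on the current configuration, no two cars can ever move into the same cell, and the number of cars is an invariant, which is the conservation property fundamental diagrams are built on.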
A Case of Nephrotic Syndrome With Minimal-Change Disease and Waldenstrom's Macroglobulinemia.
Grabe, Darren W; Li, Bo; Haqqie, Syed S
2013-12-01
Kidney disease is a rare complication of Waldenstrom's macroglobulinemia. We report a case of nephrotic syndrome and minimal change disease in a patient with biopsy-proven Waldenstrom's macroglobulinemia. The patient presented with over 12 grams of proteinuria and was successfully treated with oral prednisone over the course of 4 weeks. At the time of the last office visit, repeat serum protein electrophoresis and serum immunoelectrophoresis revealed no paraproteins, urinalysis was negative for protein and blood by dipstick, and spot urine protein was 9 mg/dL with creatinine of 101 mg/dL. This case illustrates successful treatment with corticosteroids alone, with prolonged complete remission.
Baum, Rex L.; Godt, Jonathan W.; De Vita, P.; Napolitano, E.
2012-01-01
Rainfall-induced debris flows involving ash-fall pyroclastic deposits that cover steep mountain slopes surrounding the Somma-Vesuvius volcano are natural events and a source of risk for urban settlements located at the footslopes in the area. This paper describes experimental methods and modelling results for shallow landslides that occurred on 5–6 May 1998 in selected areas of the Sarno Mountain Range. Stratigraphical surveys carried out in the initiation areas show that ash-fall pyroclastic deposits are discontinuously distributed along slopes, with total thicknesses that vary from a maximum value on slopes inclined less than 30° to near zero on slopes inclined greater than 50°. This distribution of cover thickness influences the stratigraphical setting and leads to downward thinning and pinching out of pyroclastic horizons. Three engineering geological settings were identified, in which most of the initial landslides that triggered the May 1998 debris flows occurred; these can be classified as (1) knickpoints, characterised by a downward progressive thinning of the pyroclastic mantle; (2) rocky scarps that abruptly interrupt the pyroclastic mantle; and (3) road cuts in the pyroclastic mantle that occur in a critical range of slope angle. Detailed topographic and stratigraphical surveys coupled with field and laboratory tests were conducted to define the geometric, hydraulic and mechanical features of pyroclastic soil horizons in the source areas and to carry out hydrological numerical modelling of hillslopes under different rainfall conditions. The slope stability for three representative cases was calculated considering the real sliding surface of the initial landslides and the pore pressures during the infiltration process. The hydrological modelling of hillslopes demonstrated a localised increase of pore pressure, up to saturation, where pyroclastic horizons with higher hydraulic conductivity pinch out and the thickness of the pyroclastic mantle reduces or is
Schemes for Deterministic Polynomial Factoring
Ivanyos, Gábor; Saxena, Nitin
2008-01-01
In this work we relate the deterministic complexity of factoring polynomials (over finite fields) to certain combinatorial objects we call m-schemes. We extend the known conditional deterministic subexponential time polynomial factoring algorithm for finite fields to get an underlying m-scheme. We demonstrate how the properties of m-schemes relate to improvements in the deterministic complexity of factoring polynomials over finite fields assuming the generalized Riemann Hypothesis (GRH). In particular, we give the first deterministic polynomial time algorithm (assuming GRH) to find a nontrivial factor of a polynomial of prime degree n where (n-1) is a smooth number.
Deterministic Graphical Games Revisited
DEFF Research Database (Denmark)
Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro
2012-01-01
Starting from Zermelo’s classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn’s deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison-based algorithm for finding optimal strategies in such games. The existence of a linear time comparison-based algorithm remains an open problem.
Minimally invasive post-mortem CT-angiography in a case involving a gunshot wound.
Ruder, Thomas D; Ross, Steffen; Preiss, Ulrich; Thali, Michael J
2010-05-01
Non-contrast post-mortem computed tomography (pm-CT) is useful in the evaluation of bony pathologies, whereas minimally invasive pm-CT-angiography allows for the detection of subtle vascular lesions. We present a case of an accidentally self-inflicted fatal bullet wound to the chest where pm-CT-angiography revealed a small laceration of the anterior interventricular branch of the left coronary artery and a tiny disruption of the right ventricle with pericardial and pleural effusion. Subsequent autopsy confirmed our radiological findings. Post-mortem CT-angiography has a great potential for the detection of vascular lesions and can be considered equivalent to autopsy for selected cases in forensic medicine.
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody J. H.
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.
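For orientation, the standard stochastic EnKF analysis step that the deterministic approximation is compared against can be written, for a directly observed scalar state, in a few lines: as the ensemble grows, its mean tends to the exact (mean-field) Kalman posterior mean, which is the limit the DMFEnKF approximates with a PDE solver plus quadrature. This is a generic textbook sketch with our own names, not the paper's code:

```python
import random

def enkf_update(ensemble, y, obs_var):
    """Stochastic EnKF analysis for a scalar state x with observation
    y = x + noise: ensemble-estimated Kalman gain plus perturbed observations."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((xi - mean) ** 2 for xi in ensemble) / (n - 1)
    gain = var / (var + obs_var)
    return [xi + gain * (y + random.gauss(0.0, obs_var ** 0.5) - xi)
            for xi in ensemble]

random.seed(0)
prior = [random.gauss(0.0, 1.0) for _ in range(20000)]   # prior N(0, 1)
post = enkf_update(prior, y=1.0, obs_var=1.0)
exact_mean = 1.0 * 1.0 / (1.0 + 1.0)   # exact Kalman posterior mean = 0.5
```

With 20000 members the posterior ensemble mean lands close to the exact value; the deterministic mean-field filter replaces this Monte Carlo sampling with a density evolution, removing the sampling error entirely.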
Submicroscopic Deterministic Quantum Mechanics
Krasnoholovets, V
2002-01-01
So-called hidden variables, introduced into quantum mechanics by de Broglie and Bohm, have changed their initially enigmatic meaning and acquired quite reasonable outlines of real and measurable characteristics. The starting viewpoint was the following: all the phenomena that we observe in the quantum world should reflect structural properties of the real space. Thus the scale 10^{-28} cm, at which the three fundamental interactions (electromagnetic, weak, and strong) intersect, has been treated as the size of a building block of the space. The appearance of a massive particle is associated with a local deformation of the cellular space, i.e. deformation of a cell. The mechanics of a moving particle that has been constructed is deterministic by its nature and shows that the particle interacts with cells of the space, creating elementary excitations called "inertons". Further study has disclosed that inertons are a substructure of the matter waves which are described by the orthodox wave ψ-function formalism. The c...
Energy Technology Data Exchange (ETDEWEB)
Perisinakis, Kostas; Seimenis, Ioannis; Tzedakis, Antonis; Papadakis, Antonios E.; Damilakis, John [Department of Medical Physics, Faculty of Medicine, University of Crete, P.O. Box 2208, Heraklion 71003, Crete (Greece); Medical Diagnostic Center 'Ayios Therissos', P.O. Box 28405, Nicosia 2033, Cyprus and Department of Medical Physics, Medical School, Democritus University of Thrace, Panepistimioupolis, Dragana 68100, Alexandroupolis (Greece); Department of Medical Physics, University Hospital of Heraklion, P.O. Box 1352, Heraklion 71110, Crete (Greece); Department of Medical Physics, Faculty of Medicine, University of Crete, P.O. Box 2208, Heraklion 71003, Crete (Greece)]
2013-01-15
Purpose: To determine patient-specific absorbed peak doses to skin, eye lens, brain parenchyma, and cranial red bone marrow (RBM) of adult individuals subjected to low-dose brain perfusion CT studies on a 256-slice CT scanner, and investigate the effect of patient head size/shape, head position during the examination and bowtie filter used on peak tissue doses. Methods: The peak doses to eye lens, skin, brain, and RBM were measured in 106 individual-specific adult head phantoms subjected to the standard low-dose brain perfusion CT on a 256-slice CT scanner using a novel Monte Carlo simulation software dedicated for patient CT dosimetry. Peak tissue doses were compared to corresponding thresholds for induction of cataract, erythema, cerebrovascular disease, and depression of hematopoiesis, respectively. The effects of patient head size/shape, head position during acquisition and bowtie filter used on resulting peak patient tissue doses were investigated. The effect of eye-lens position in the scanned head region was also investigated. The effect of miscentering and use of narrow bowtie filter on image quality was assessed. Results: The mean peak doses to eye lens, skin, brain, and RBM were found to be 124, 120, 95, and 163 mGy, respectively. The effect of patient head size and shape on peak tissue doses was found to be minimal since maximum differences were less than 7%. Patient head miscentering and bowtie filter selection were found to have a considerable effect on peak tissue doses. The peak eye-lens dose saving achieved by elevating head by 4 cm with respect to isocenter and using a narrow wedge filter was found to approach 50%. When the eye lies outside of the primarily irradiated head region, the dose to eye lens was found to drop to less than 20% of the corresponding dose measured when the eye lens was located in the middle of the x-ray beam. Positioning head phantom off-isocenter by 4 cm and employing a narrow wedge filter results in a moderate reduction of
Deterministic quantitative risk assessment development
Energy Technology Data Exchange (ETDEWEB)
Dawson, Jane; Colquhoun, Iain [PII Pipeline Solutions Business of GE Oil and Gas, Cramlington Northumberland (United Kingdom)
2009-07-01
Current risk assessment practice in pipeline integrity management is to use a semi-quantitative, index-based or model-based methodology. This approach has been found to be very flexible and to provide useful results for identifying high-risk areas and for prioritizing physical integrity assessments. However, as pipeline operators progressively adopt an operating strategy of continual risk reduction, with a view to minimizing total expenditures within safety, environmental, and reliability constraints, the need for quantitative assessments of risk levels is becoming evident. Whereas reliability-based quantitative risk assessments can be and are routinely carried out on a site-specific basis, they require significant amounts of quantitative data for the results to be meaningful. This need for detailed and reliable data tends to make these methods unwieldy for system-wide risk assessment applications. This paper describes methods for estimating risk quantitatively through the calibration of semi-quantitative estimates to failure rates for peer pipeline systems. The methods involve the analysis of the failure rate distribution, and techniques for mapping the rate to the distribution of likelihoods available from currently available semi-quantitative programs. By applying point-value probabilities to the failure rates, deterministic quantitative risk assessment (QRA) provides greater rigor and objectivity than can usually be achieved through the implementation of semi-quantitative risk assessment results. The method permits a fully quantitative approach or a mixture of QRA and semi-QRA to suit the operator's data availability and quality, and analysis needs. For example, consequence analysis can be quantitative or can address qualitative ranges for consequence categories. Likewise, failure likelihoods can be output as classical probabilities or as expected failure frequencies, as required. (author)
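The quantile-matching idea behind such a calibration can be sketched simply: normalize the semi-quantitative index scores and read failure frequencies off the empirical distribution of peer-system failure rates. The function name and the linear quantile mapping are illustrative assumptions of ours, not the authors' actual procedure:

```python
def calibrate_to_peer_rates(scores, peer_rates):
    """Map semi-quantitative likelihood scores onto failure frequencies by
    matching each score's position in the score range to the corresponding
    quantile of the peer failure-rate distribution (an assumed, simple scheme)."""
    ranked = sorted(peer_rates)
    n = len(ranked)
    lo, hi = min(scores), max(scores)
    mapping = {}
    for s in sorted(set(scores)):
        q = (s - lo) / (hi - lo) if hi > lo else 0.5   # normalise score to [0, 1]
        mapping[s] = ranked[min(int(q * (n - 1)), n - 1)]
    return mapping
```

The mapping is monotone by construction, so score ordering is preserved while the output becomes an expected failure frequency rather than a dimensionless index.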
Long, Quan; Wu, Ping; Jiang, Gengru; Zhu, Chun
2014-04-01
The present report describes a case of nephrotic syndrome (NS) with invasive thymoma. A male patient was hospitalized for severe edema with reduced urine output. He had a history of thymectomy and radiotherapy for invasive thymoma 4 years before the development of NS. Renal biopsy showed minimal change disease (MCD). Although imaging showed probable recurrence of the invasive thymoma, the patient received steroid monotherapy for ~9 months and achieved partial remission of NS by the 8th week. We therefore suggest that MCD should be considered as a pathological lesion type in older NS patients with thymoma. Despite the longer time to remission, steroid monotherapy and combination therapy with an immunosuppressant are effective for thymoma-associated MCD.
Li, Li; Li, Jianlan; Li, Guoxia; Tan, Yanhong; Chen, Xiuhua; Ren, Fanggang; Guo, Haixiu; Wang, Hongwei
2012-12-01
Tetraploidy is a rare chromosome number aberration in de novo acute myeloid leukemia (AML), and may be associated with erythrophagocytosis by leukemic blast cells. We report a 48-year-old female patient with minimally differentiated acute myeloblastic leukemia (AML-M0) exhibiting tetraploidy and erythrophagocytosis. The karyotype was 46,XX[2]/92,XXXX[18]. Bone marrow aspirate smears showed large and prominent nuclei, with erythrophagocytosis in leukemic cells. Fluorescence in situ hybridization using RUNX1 dual-color break probes detected four fusion signals, accounting for 95% (190/200), in one interphase nucleus. Mutations of TP53 and the fusion genes RUNX1/ETO, CBFβ/MYH11, and PML/RARα were all negative. The patient showed a poor response to chemotherapy and died 66 days after onset. To our knowledge, this is the first reported case of AML-M0 with tetraploidy and erythrophagocytosis without additional chromosome aberrations. This case of tetraploid AML with poor prognosis suggests that further biological study of additional cases will be of great importance in improving the understanding and prognosis of tetraploid AML.
Rini, Stefano
2012-01-01
In this paper we study the cognitive interference channel with a common message, a variation of the classical cognitive interference channel in which the cognitive message is decoded at both receivers. We derive the capacity for the semi-deterministic channel in which the output at the cognitive decoder is a deterministic function of the channel inputs. We also show capacity to within a constant gap and to within a constant factor for the Gaussian channel, in which the outputs are linear combinations of the channel inputs plus an additive Gaussian noise term. Most of these results are obtained using an interesting transmission scheme in which the cognitive message, decoded at both receivers, is also pre-coded against the interference experienced at the cognitive decoder. The pre-coding of the cognitive message does not allow the primary decoder to reconstruct the interfering signal; the cognitive message instead acts as side information at the primary receiver when decoding its intended message.
[Deterministic and stochastic identification of neurophysiologic systems].
Piatigorskiĭ, B Ia; Kostiukov, A I; Chinarov, V A; Cherkasskiĭ, V L
1984-01-01
The paper deals with deterministic and stochastic identification methods applied to concrete neurophysiological systems. Deterministic identification was carried out for the efferent fibres-muscle system. The obtained transition characteristics demonstrated the dynamic nonlinearity of the system. Identification of the neuronal model and of the "afferent fibres-synapses-neuron" system in the mollusc Planorbis corneus was carried out using stochastic methods. For this purpose, the Wiener method of stochastic identification was extended to the case of pulse trains as input and output signals. The weight of the nonlinear component in the Wiener model and the accuracy of the model prediction were quantitatively estimated. The results obtained prove the possibility of using these identification methods for various neurophysiological systems.
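The classical continuous-signal Wiener approach that the paper extends can be sketched via the Lee-Schetzen cross-correlation formula: with white-noise input, the first-order kernel is the input-output cross-correlation divided by the input variance. A generic illustration with our own names and a toy FIR system, not the pulse-train extension described above:

```python
import random

def first_order_wiener_kernel(u, y, memory, var_u):
    """Lee-Schetzen estimate of the first-order Wiener kernel:
    k1[tau] = E[y(t) * u(t - tau)] / var(u), for white-noise input u."""
    n = len(u)
    return [sum(y[t] * u[t - tau] for t in range(memory, n))
            / ((n - memory) * var_u)
            for tau in range(memory)]

# Identify a known FIR system y(t) = 1.0*u(t) + 0.5*u(t-1) from noisy data.
random.seed(1)
u = [random.gauss(0.0, 1.0) for _ in range(20000)]
y = [u[t] + 0.5 * u[t - 1] for t in range(len(u))]  # y[0] wraps via u[-1],
# but the correlation sums start at t = memory, so it is never used.
k1 = first_order_wiener_kernel(u, y, memory=4, var_u=1.0)
```

The estimated kernel recovers the impulse response (about 1.0 at lag 0, 0.5 at lag 1, near zero beyond), up to sampling error that shrinks with the record length.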
Deterministic Real-time Thread Scheduling
Yun, Heechul; Sha, Lui
2011-01-01
Race conditions are timing-sensitive problems. A significant source of timing variation comes from nondeterministic hardware interactions such as cache misses. While data race detectors and model checkers can detect races, the enormous state space of complex software makes it difficult to identify all of them, and residual implementation errors remain a big challenge. In this paper, we propose deterministic real-time scheduling methods to address scheduling nondeterminism in uniprocessor systems. The main idea is to use timing-insensitive deterministic events, e.g., an instruction counter, in conjunction with a real-time clock to schedule threads. By introducing the concept of Worst Case Executable Instructions (WCEI), we guarantee both determinism and real-time performance.
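The core idea, switching threads on a deterministic logical count rather than on wall-clock interrupts, can be mimicked in miniature with generators standing in for threads and next() standing in for one retired instruction. The WCEI-style budgeting and all names here are our simplification, not the paper's mechanism:

```python
def deterministic_schedule(threads, quantum, rounds):
    """Toy deterministic scheduler: run each 'thread' (a generator) for a
    fixed instruction quantum, switching on the logical count rather than
    wall-clock time, so every run yields the identical interleaving."""
    trace, idx = [], 0
    for _ in range(rounds):
        t = threads[idx]
        for _ in range(quantum):
            try:
                trace.append((idx, next(t)))   # one logical instruction
            except StopIteration:
                break                          # thread finished early
        idx = (idx + 1) % len(threads)
    return trace

def worker(name, n=6):
    for i in range(n):
        yield f"{name}{i}"
```

Running the scheduler twice on fresh workers produces byte-for-byte identical traces, which is exactly the reproducibility property that instruction-count-based scheduling provides for race debugging.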
[A case of AKI-caused minimal change nephrotic syndrome with concomitant pleuritis].
Watanabe, Renya; Abe, Yasuhiro; Sasaki, Masaru; Hamauchi, Aki; Yasunaga, Tomoe; Kurata, Satoshi; Yasuno, Tetsuhiko; Ito, Kenji; Sasatomi, Yoshie; Hisano, Satoshi; Nakashima, Hitoshi
2016-01-01
A twenty-year-old man complaining of chest pain was diagnosed with nephrotic syndrome complicated by pleural effusion and ascites. Despite treatment with antibiotics, his fever and high inflammatory reaction persisted. After hospitalization, his urine volume decreased and his renal function deteriorated. As he was suffering from dyspnea, hemodialysis was performed together with chest drainage. His pleural effusion was exudative, and IVIG treatment was added to the antibiotic treatment. He was diagnosed with suspected minimal change nephrotic syndrome (MCNS) and administered prednisolone intravenously. His renal function improved as a result of this treatment, enabling him to withdraw from hemodialysis. The inflammatory reaction gradually decreased and his general condition improved. A renal biopsy carried out after the hemodialysis treatment confirmed MCNS, suggesting that MCNS had atypically induced acute kidney injury (AKI) in this case. Generally, AKI is not induced by MCNS in young patients, but it may occur under severe inflammatory conditions. Physicians should be aware that MCNS in young patients may lead to the development of AKI requiring hemodialysis treatment.
Silva, Sandra; Maximino, José; Henrique, Rui; Paiva, Ana; Baldaia, Jorge; Campilho, Fernando; Pimentel, Pedro; Loureiro, Alfredo
2007-10-30
Graft-versus-host disease is one of the most frequent complications occurring after haematopoietic stem cell transplantation. Recently, renal involvement has been described as a manifestation of chronic graft-versus-host disease. Immunosuppression seems to play a major role: clinical disease is triggered by its tapering and resolution is achieved with the resumption of the immunosuppressive therapy. Prognosis is apparently favourable, but long term follow up data are lacking. We report a case of a 53-year-old man who developed nephrotic syndrome 142 days after allogeneic stem cell transplantation for acute myeloid leukaemia. Onset of nephrotic syndrome occurred after reduction of immunosuppressants and was accompanied by manifestations of chronic graft-versus-host disease. Histological examination of the kidney was consistent with Minimal Change Disease. After treatment with prednisolone and mycophenolate mofetil he had complete remission of proteinuria and improvement of graft-versus-host disease. Eighteen months after transplantation the patient keeps haematological remission and normal renal function, without proteinuria. Since patients with chronic graft-versus-host disease might be considered at risk for development of nephrotic syndrome, careful monitoring of renal parameters, namely proteinuria, is advisable.
Deterministic joint remote state preparation
Energy Technology Data Exchange (ETDEWEB)
An, Nguyen Ba, E-mail: nban@iop.vast.ac.vn [Center for Theoretical Physics, Institute of Physics, 10 Dao Tan, Ba Dinh, Hanoi (Viet Nam); Bich, Cao Thi [Center for Theoretical Physics, Institute of Physics, 10 Dao Tan, Ba Dinh, Hanoi (Viet Nam); Physics Department, University of Education No. 1, 136 Xuan Thuy, Cau Giay, Hanoi (Viet Nam); Don, Nung Van [Center for Theoretical Physics, Institute of Physics, 10 Dao Tan, Ba Dinh, Hanoi (Viet Nam); Physics Department, Hanoi National University, 334 Nguyen Trai, Thanh Xuan, Hanoi (Viet Nam)
2011-09-26
We put forward a new nontrivial three-step strategy to execute joint remote state preparation via Einstein-Podolsky-Rosen pairs deterministically. At variance with all existing protocols, in ours the receiver contributes actively in both the preparation and reconstruction steps, although he knows nothing about the quantum state to be prepared. -- Highlights: → Deterministic joint remote state preparation via EPR pairs is proposed. → Both general single- and two-qubit states are studied. → Differently from all existing protocols, in ours the receiver participates actively. → This is the first time such a strategy has been adopted.
Interference Decoding for Deterministic Channels
Bandemer, Bernd
2010-01-01
An inner bound to the capacity region of a class of three-user-pair deterministic interference channels is presented. The key idea is to simultaneously decode the combined interference signal and the intended message at each receiver. It is shown that this interference decoding inner bound is strictly larger than the inner bound obtained by treating interference as noise, which includes interference alignment for deterministic channels. The gain comes from judicious analysis of the number of combined interference sequences in different regimes of input distributions and message rates.
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
In this paper, we consider nonlinear infinity-norm minimization problems. We devise a reliable Lagrangian dual approach for solving this kind of problem, and based on this method we propose an algorithm for mixed linear and nonlinear infinity-norm minimization problems. Numerical results are presented.
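For intuition about the problem class, the linear special case of infinity-norm minimization can be posed as a linear program via the standard epigraph reformulation min t subject to -t ≤ (Ax - b)_i ≤ t. The sketch below uses SciPy's LP solver on synthetic data; it is an illustration of the problem being solved, not the authors' Lagrangian dual method.

```python
import numpy as np
from scipy.optimize import linprog

# Minimize ||A x - b||_inf by the epigraph LP reformulation.
# A and b are synthetic; decision variables are [x (n entries), t (1 entry)].
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
b = rng.standard_normal(20)

m, n = A.shape
c = np.zeros(n + 1)
c[-1] = 1.0                                   # objective: minimize t
ones = np.ones((m, 1))
A_ub = np.block([[A, -ones],                  #  A x - b <= t
                 [-A, -ones]])                # -(A x - b) <= t
b_ub = np.concatenate([b, -b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (n + 1))  # all variables free
x, t = res.x[:n], res.x[-1]
# At the optimum, t equals the achieved infinity-norm of the residual.
```

The same epigraph trick underlies nonlinear formulations as well, with the linear constraints replaced by -t ≤ f_i(x) ≤ t.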
Height-Deterministic Pushdown Automata
DEFF Research Database (Denmark)
Nowotka, Dirk; Srba, Jiri
2007-01-01
of regular languages and still closed under boolean language operations, are considered. Several of such language classes have been described in the literature. Here, we suggest a natural and intuitive model that subsumes all the formalisms proposed so far by employing height-deterministic pushdown automata...
Minimally Invasive Antral Membrane Balloon Elevation (MIAMBE): A 3 cases report
Directory of Open Access Journals (Sweden)
Roberto Arroyo
2013-12-01
Long-standing partial edentulism in the posterior segment of an atrophic maxilla is a challenging treatment. Sinus elevation via the Caldwell-Luc approach has several anatomical restrictions, causes post-operative discomfort, and requires complex surgical techniques. The osteotome approach is a significantly safe and efficient technique; as a variation of this technique, "minimally invasive antral membrane balloon elevation" (MIAMBE), which uses a hydraulic system, has been developed. We present three cases in which the MIAMBE system was used for tooth replacement in the posterior region. This procedure seems to be a relatively simple and safe solution for the insertion of endo-osseous implants in the posterior atrophic maxilla.
Directory of Open Access Journals (Sweden)
Stephen Faddegon
2013-04-01
Background and Purpose: Horseshoe kidney is an uncommon renal anomaly often associated with ureteropelvic junction (UPJ) obstruction. Advanced minimally invasive surgical (MIS) reconstructive techniques, including laparoscopic and robotic surgery, are now being utilized in this population. However, fewer than 30 cases of MIS UPJ reconstruction in horseshoe kidneys have been reported. We herein report our experience with these techniques in the largest series to date. Materials and Methods: We performed a retrospective chart review of nine patients with UPJ obstruction in horseshoe kidneys who underwent MIS repair at our institution between March 2000 and January 2012. Four underwent laparoscopic, two robotic, and one laparoendoscopic single-site (LESS) dismembered pyeloplasty. An additional two pediatric patients underwent robotic Hellstrom repair. Perioperative outcomes and treatment success were evaluated. Results: Median patient age was 18 years (range 2.5-62 years). Median operative time was 136 minutes (range 109-230 min), and there were no perioperative complications. After a median follow-up of 11 months, clinical (symptomatic) success was 100%, while radiographic success based on MAG-3 renogram was 78%. The two failures were defined by prolonged t1/2 drainage, but neither patient has required salvage therapy as they remain asymptomatic with stable differential renal function. Conclusions: MIS repair of UPJ obstruction in horseshoe kidneys is feasible and safe. Although excellent short-term clinical success is achieved, radiographic success may be lower than for MIS pyeloplasty in heterotopic kidneys, possibly due to inherent differences in anatomy. Larger studies are needed to evaluate MIS pyeloplasty in this population.
Minimally invasive transforaminal lumbar interbody fusion: Results of 23 consecutive cases
Directory of Open Access Journals (Sweden)
Amit Jhala
2014-01-01
Conclusion: The study demonstrates a good clinicoradiological outcome of minimally invasive TLIF. It is also superior in terms of postoperative back pain, blood loss, hospital stay, recovery time as well as medication use.
Minimal-change renal disease and Graves’ disease: a case report and literature review
Hasnain, Wirasat; Stillman, Isaac E.; Bayliss, George P.
2011-01-01
Objective To describe a possible association between Graves' disease and nephrotic syndrome secondary to minimal change renal disease and to review the literature related to renal diseases in patients with Graves' disease. Methods The clinical, laboratory, and renal biopsy findings in a patient with Graves' disease and minimal change renal disease are discussed. In addition, the pertinent English-language literature published from 1966 to 2009, determined by means of a MEDLINE search, is revi...
Energy Technology Data Exchange (ETDEWEB)
Ahn, Sang Kyu [Korea Institute of Nuclear Safety, 19 Kusong-dong, Yuseong-gu, Daejeon 305-338 (Korea, Republic of); Kim, Inn Seock, E-mail: innseockkim@gmail.co [ISSA Technology, 21318 Seneca Crossing Drive, Germantown, MD 20876 (United States); Oh, Kyu Myung [Korea Institute of Nuclear Safety, 19 Kusong-dong, Yuseong-gu, Daejeon 305-338 (Korea, Republic of)
2010-05-15
The objective of this paper and a companion paper in this issue (part II, risk-informed approaches) is to derive technical insights from a critical review of deterministic and risk-informed safety analysis approaches that have been applied to develop licensing requirements for water-cooled reactors, or proposed for safety verification of the advanced reactor design. To this end, a review was made of a number of safety analysis approaches including those specified in regulatory guides and industry standards, as well as novel methodologies proposed for licensing of advanced reactors. This paper and the companion paper present the review insights on the deterministic and risk-informed safety analysis approaches, respectively. These insights could be used in making a safety case or developing a new licensing review infrastructure for advanced reactors including Generation IV reactors.
Directory of Open Access Journals (Sweden)
Yunzhi ZHOU
2010-01-01
Background and objective: TACE, argon-helium (Ar-He) targeted cryosurgery, and radioactive seed implantation are the main minimally invasive methods in the treatment of lung cancer. This article summarizes post-treatment quality of life, clinical efficacy, and survival, and analyzes the advantages and shortcomings of each method, so as to evaluate the clinical effect of multiple minimally invasive treatments for non-small cell lung cancer. Methods: All 139 cases were non-small cell lung cancer patients confirmed by pathology and followed up retrospectively from July 2006 to July 2009; all of them had lost the chance of operation on comprehensive evaluation. Different combinations of multiple minimally invasive treatments were selected according to the blood supply, size, and location of the lesion. Among the 139 cases, there were 102 cases of primary cancer and 37 cases of metastasis to the mediastinum, lung, and chest wall; 71 cases with abundant blood supply received the combination of superselective target artery chemotherapy, Ar-He targeted cryoablation, and radiochemotherapy with seed implantation; 48 cases with poor blood supply received single Ar-He targeted cryoablation; 20 cases with poor blood supply received the combination of Ar-He targeted cryoablation and radiochemotherapy with seed implantation. The pre- and post-treatment KPS scores, imaging data, and follow-up results were then analyzed. Results: The KPS score increased by a mean of 20.01 after treatment. Over 3 years of follow-up there were 44 cases of CR, 87 cases of PR, 3 cases of NC, and 5 cases of PD; the response rate was 94.2%. Ninety-nine cases survived 1 year (71.2%), 43 cases survived 2 years (30.2%), and 4 cases survived over 3 years; the median survival was 19 months and the average survival was (16 ± 1.5) months. There were no severe complications, such as spinal cord injury or vessel or pericardial puncture. Conclusion: Minimally invasive technique is a highly successful and effective method with mild complications.
Directory of Open Access Journals (Sweden)
A Tewari
2005-01-01
Context: In 2000, the number of new cases of prostate cancer was estimated at 513,000 worldwide [Eur J Cancer 2001; 37 (Suppl 8): S4]. In the next 15 years, prostate cancer is predicted to be the most common cancer in men [Eur J Cancer 2001; 37 (Suppl 8): S4]. Radical prostatectomy is one of the most common surgical treatments for clinically localized prostate cancer. In spite of its excellent oncological results, due to the fear of pain, risk of side effects, and inconvenience (Semin Urol Oncol 2002; 20: 55), many patients seek alternative treatments for their prostate cancer. At the Vattikuti Urology Institute, we have developed a minimally invasive technique for treating prostate cancer, which achieves the oncological results of surgical treatment without causing significant pain, a large surgical incision, or side effects (BJU Int 2003; 92: 205). This technique involves a da Vinci™ (Intuitive Surgical®, Sunnyvale, CA) surgical robot with 3-D stereoscopic visualization and ergonomic multijointed instruments. Presented herein are our results after treating 750 patients. Methods: We prospectively collected baseline demographic data such as age, race, body mass index (BMI), serum prostate specific antigen, prostate volume, Gleason score, percentage cancer, TNM clinical staging, and comorbidities. Urinary symptoms were measured with the International Prostate Symptom Score (IPSS), and sexual health with the Sexual Health Inventory for Men (SHIM). In addition, the patients were mailed the expanded prostate inventory composite at baseline and at 1, 3, 6, 12 and 18 months after the procedure. Results: A Gleason grade of seven or more was noted in 33.5% of patients. The average BMI was high (27.7) and 87% of patients had pathological stage pT2a-b. The mean operative time was 160 min and the mean blood loss was 153 cm3. No patient required blood transfusion. At 6 months, 82% of the men who were younger and 75% of those older than 60 years had return of sexual
Analysis of FBC deterministic chaos
Energy Technology Data Exchange (ETDEWEB)
Daw, C.S.
1996-06-01
It has recently been discovered that the performance of a number of fossil energy conversion devices such as fluidized beds, pulsed combustors, steady combustors, and internal combustion engines is affected by deterministic chaos. It is now recognized that understanding and controlling the chaotic elements of these devices can lead to significantly improved energy efficiency and reduced emissions. Application of these techniques to key fossil energy processes is expected to provide important competitive advantages for U.S. industry.
Time-Minimal Control of Dissipative Two-level Quantum Systems: the Generic Case
Bonnard, B; Sugny, D
2008-01-01
The objective of this article is to complete preliminary results concerning the time-minimal control of dissipative two-level quantum systems whose dynamics is governed by Lindblad equations. The extremal system is described by a 3D-Hamiltonian depending upon three parameters. We combine geometric techniques with numerical simulations to deduce the optimal solutions.
Tang, Hon-Lok; Mak, Yuen-Fun; Chu, Kwok-Hong; Lee, William; Fung, Samuel Ka-Shun; Chan, Thomas Yan-Keung; Tong, Kwok-Lung
2013-04-01
Mercury is a known cause of nephrotic syndrome and the underlying renal pathology in most of the reported cases was membranous nephropathy. We describe here 4 cases of minimal change disease following exposure to mercury-containing skin lightening cream for 2 - 6 months. The mercury content of the facial creams was very high (7,420 - 30,000 parts per million). All patients were female and presented with nephrotic syndrome and heavy proteinuria (8.35 - 20.69 g/d). The blood and urine mercury levels were 26 - 129 nmol/l and 316 - 2,521 nmol/d, respectively. Renal biopsy revealed minimal change disease (MCD) in all patients. The use of cosmetic cream was stopped and chelation therapy with D-penicillamine was given. Two patients were also given steroids. The time for blood mercury level to normalize was 1 - 7 months, whereas it took longer for urine mercury level to normalize (9 - 16 months). All patients had complete remission of proteinuria and the time to normalization of proteinuria was 1 - 9 months. Mercury-containing skin lightening cream is hazardous because skin absorption of mercury can cause minimal change disease. The public should be warned of the danger of using such products. In patients presenting with nephrotic syndrome, a detailed history should be taken, including the use of skin lightening cream. With regard to renal pathology, apart from membranous nephropathy, minimal change disease should be included as another pathological entity caused by mercury exposure or intoxication.
Minimally invasive esophagectomy for cancer: Single center experience after 44 consecutive cases
Directory of Open Access Journals (Sweden)
Bjelović Miloš
2015-01-01
Introduction: At the Department of Minimally Invasive Upper Digestive Surgery of the Hospital for Digestive Surgery in Belgrade, hybrid minimally invasive esophagectomy (hMIE) has been the standard of care for patients with resectable esophageal cancer since 2009. As the next and final step in the change management, from January 2015 we adopted total minimally invasive esophagectomy (tMIE) as the standard of care. Objective: The aim of the study was to report initial experiences with hMIE (laparoscopic approach) for cancer and to analyze surgical technique, major morbidity, and 30-day mortality. Methods: A retrospective cohort study included 44 patients who underwent elective hMIE for esophageal cancer at the Department for Minimally Invasive Upper Digestive Surgery, Hospital for Digestive Surgery, Clinical Center of Serbia in Belgrade, from April 2009 to December 2014. Results: There were 16 (36%) tumors of the middle thoracic esophagus and 28 (64%) tumors of the distal thoracic esophagus. Mean duration of the operation was 319 minutes (approximately five hours and 20 minutes). The average blood loss was 173.6 ml. A total of 12 (27%) patients had postoperative complications, and the mean intensive care unit stay was 2.8 days. Mean hospital stay after surgery was 16 days. The average number of lymph nodes harvested during surgery was 31.9. The 30-day mortality rate was 2%. Conclusion: As long as MIE is oncologically equivalent to open esophagectomy (OE), the better relation between cost savings and potentially increased effectiveness will make MIE the preferred approach in high-volume esophageal centers experienced in minimally invasive procedures.
Design of deterministic OS for SPLC
Energy Technology Data Exchange (ETDEWEB)
Son, Choul Woong; Kim, Dong Hoon; Son, Gwang Seop [KAERI, Daejeon (Korea, Republic of)
2012-10-15
Existing safety PLCs for use in nuclear power plants operate on priority-based scheduling, in which the highest-priority task runs first. This type of scheduling scheme determines processing priorities when there are multiple requests for processing or a lack of resources available for processing, guaranteeing execution of higher-priority tasks. It is prone to exhaustion of resources and continuous preemption by devices with high priorities, and therefore there is uncertainty every period in terms of smooth running of the overall system. Hence, it is difficult to apply this type of scheme where deterministic operation is required, such as in a nuclear power plant. Also, existing PLCs either have no output logic for the redundant selection of devices or have it set in a fixed way; as a result, they are extremely inefficient for redundant systems such as those of a nuclear power plant, and their use is limited. Therefore, functional modules that can manage and control all devices need to be developed by improving the way priorities are assigned among the devices, making it more flexible. A management module should be able to schedule all devices of the system, manage resources, analyze the states of the devices, give warnings in abnormal situations such as device failure or resource scarcity, and decide how to handle them. The management module should also have output logic for device redundancy, as well as deterministic processing capabilities, for example with regard to device interrupt events.
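The contrast the abstract draws, priority-preemptive scheduling versus deterministic operation, can be illustrated with a toy cyclic executive, where every task has a fixed slot in a precomputed table so the execution order never depends on load. This is an illustrative sketch only, not KAERI's SPLC design; the task names and periods are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    period_slots: int                       # run every N minor slots
    runs: list = field(default_factory=list)

def cyclic_executive(tasks, minor_slots):
    """Run a fixed, precomputed schedule: deterministic by construction.

    Unlike priority-preemptive scheduling, the slot a task runs in is known
    at design time, so there is no starvation or resource exhaustion.
    """
    for slot in range(minor_slots):
        for task in tasks:
            if slot % task.period_slots == 0:
                task.runs.append(slot)      # task executes in this slot

tasks = [Task("scan_inputs", 1), Task("control_logic", 2), Task("diagnostics", 4)]
cyclic_executive(tasks, minor_slots=8)
print({t.name: t.runs for t in tasks})
```

Because the schedule is a pure function of the slot index, every major cycle executes identically, which is the property the abstract calls "deterministic operation".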
Streamflow disaggregation: a nonlinear deterministic approach
Directory of Open Access Journals (Sweden)
B. Sivakumar
2004-01-01
This study introduces a nonlinear deterministic approach for streamflow disaggregation. According to this approach, the streamflow transformation process from one scale to another is treated as a nonlinear deterministic process, rather than a stochastic process as generally assumed. The approach follows two important steps: (1) reconstruction of the scalar (streamflow) series in a multi-dimensional phase space for representing the transformation dynamics; and (2) use of a local approximation (nearest neighbor) method for disaggregation. The approach is employed for streamflow disaggregation in the Mississippi River basin, USA. Data of successively doubled resolutions between daily and 16 days (i.e. daily, 2-day, 4-day, 8-day, and 16-day) are studied, and disaggregations are attempted only between successive resolutions (i.e. 2-day to daily, 4-day to 2-day, 8-day to 4-day, and 16-day to 8-day). Comparisons between the disaggregated values and the actual values reveal excellent agreement for all the cases studied, indicating the suitability of the approach for streamflow disaggregation. Further insight into the results reveals that the best results are, in general, achieved for low embedding dimensions (2 or 3) and small numbers of neighbors (less than 50), suggesting the possible presence of nonlinear determinism in the underlying transformation process. A decrease in accuracy with increasing disaggregation scale is also observed, a possible implication of the existence of a scaling regime in streamflow.
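The two steps described above, phase-space reconstruction followed by local nearest-neighbor approximation, can be sketched on synthetic data. The series, embedding dimension, and neighbor count below are illustrative choices, not the paper's Mississippi data or settings.

```python
import numpy as np

def embed(series, m, tau=1):
    """Time-delay embedding: each row is one m-dimensional phase-space vector."""
    n = len(series) - (m - 1) * tau
    return np.array([series[i:i + (m - 1) * tau + 1:tau] for i in range(n)])

def knn_predict(train_X, train_y, query, k=5):
    """Local approximation: average the targets of the k nearest neighbors."""
    d = np.linalg.norm(train_X - query, axis=1)
    idx = np.argsort(d)[:k]
    return train_y[idx].mean(axis=0)

# Synthetic positive "daily" flow, aggregated to a 2-day series.
rng = np.random.default_rng(1)
daily = np.abs(rng.standard_normal(400)).cumsum() % 10 + 1
two_day = daily[0::2] + daily[1::2]
weights = daily[0::2] / two_day        # fraction of each 2-day total in day 1

X = embed(two_day, m=3)                # step (1): phase-space reconstruction
y = weights[2:]                        # weight aligned with each embedded vector
w_hat = knn_predict(X[:-1], y[:-1], X[-1])   # step (2): disaggregate last value
est_day1 = two_day[-1] * w_hat         # estimated day-1 share of the 2-day flow
```

Disaggregating 4-day to 2-day (and so on) follows the same pattern with the coarser series in place of `two_day`.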
Minimally invasive two-incision total hip arthroplasty: a short-term retrospective report of 27 cases
Institute of Scientific and Technical Information of China (English)
ZHANG Xian-long; WANG Qi; SHEN Hao; JIANG Yao; ZENG Bing-fang
2007-01-01
Background: Total hip arthroplasty (THA) is widely applied for the treatment of end-stage painful hip arthrosis. Traditional THA needed a long incision and caused significant soft tissue trauma. Patients usually required a long recovery time after the operation. In this research we aimed to study the feasibility and clinical outcomes of minimally invasive two-incision THA. Methods: From February 2004 to March 2005, 27 patients, 12 males and 15 females with a mean age of 71 years (55-76), underwent minimally invasive two-incision THA in our department. The patients included 9 cases of osteoarthritis, 10 cases of osteonecrosis, and 8 cases of femoral neck fracture. The operations were done with VerSys cementless prostheses and minimally invasive instruments from Zimmer China. Operation time, blood loss, length of incision, postoperative hospital stay, and complications were observed. Results: The mean operation time was 90 minutes (80-170 min). The mean blood loss was 260 ml (170-450 ml), and blood transfusion was carried out in 4 cases of femoral neck fracture (average 400 ml). The average length of the anterior incision was 5.0 cm (4.6-6.5 cm) and of the posterior incision 3.7 cm (3.0-4.2 cm). All of the patients were ambulant the day after surgery. Nineteen patients were discharged 5 days post-operatively and 8 patients 7 days post-operatively. The patients were followed for 18 months (13-25 months). One patient sustained a proximal femoral fracture intraoperatively. No other complications, including infections, dislocations, and vascular injuries, occurred. The mean Harris score was 94.5 (92-96). Conclusions: Two-incision THA has the advantage of being muscle-sparing and minimally invasive, with less blood loss and rapid recovery. However, this technique is time consuming, technically demanding, and requires fluoroscopy.
Primary Sjögren's syndrome with minimal change disease--a case report.
Yang, Mei-Li; Kuo, Mei-Chuan; Ou, Tsan-Teng; Chen, Hung-Chun
2011-05-01
Glomerular involvement in patients with primary Sjögren's syndrome (pSS) has rarely been reported. Among them, membranoproliferative glomerulonephritis and membranous nephropathy are the more common types. We report a middle-aged female presenting concurrently with nephrotic syndrome and microscopic hematuria, and her pSS was diagnosed by positive anti-Ro (SSA)/anti-La (SSB) autoantibodies, dry mouth, severely diffuse impaired function of both bilateral parotid and submandibular glands, and a positive Schirmer test. Renal pathology revealed minimal change disease and thin basement membrane nephropathy. The patient's nephrotic syndrome resolved after treatment with corticosteroids. To our knowledge, this is the first report of minimal change disease in a patient with pSS.
Minimally Invasive Antral Membrane Balloon Elevation (MIAMBE): A 3 cases report
Roberto Arroyo; Diego Cabrera
2013-01-01
Long-standing partial edentulism in the posterior segment of an atrophic maxilla is a challenging treatment. Sinus elevation via the Caldwell-Luc approach has several anatomical restrictions, causes post-operative discomfort, and requires complex surgical techniques. The osteotome approach is a significantly safe and efficient technique; as a variation of this technique, "minimally invasive antral membrane balloon elevation" (MIAMBE), which uses a hydraulic system, has been developed. We present three c...
Minimally invasive intervention in a case of a noncarious lesion and severe loss of tooth structure.
Reston, Eduardo G; Corba, Vanessa D; Broliato, Gustavo; Saldini, Bruno P; Stefanello Busato, Adair L
2012-01-01
The present article describes a minimally invasive technique used for the restoration of loss of tooth structure caused by erosion of intrinsic etiology. First, the cause of erosion was treated and controlled. Subsequently, taking into consideration patient characteristics, especially a young age, a more conservative technique was chosen for dental rehabilitation with the use of composite resin. The advantages and disadvantages of the technique employed are discussed.
Deterministic treatment of model error in geophysical data assimilation
Carrassi, Alberto
2015-01-01
This chapter describes a novel approach for the treatment of model error in geophysical data assimilation. In this method, model error is treated as a deterministic process fully correlated in time. This allows for the derivation of the evolution equations for the relevant moments of the model error statistics required in data assimilation procedures, along with an approximation suitable for application to large numerical models typical of environmental science. In this contribution we first derive the equations for the model error dynamics in the general case, and then for the particular situation of parametric error. We show how this deterministic description of the model error can be incorporated in sequential and variational data assimilation procedures. A numerical comparison with standard methods is given using low-order dynamical systems, prototypes of atmospheric circulation, and a realistic soil model. The deterministic approach proves to be very competitive with only minor additional computational c...
Deterministic Circular Self Test Path
Institute of Scientific and Technical Information of China (English)
WEN Ke; HU Yu; LI Xiaowei
2007-01-01
Circular self test path (CSTP) is an attractive technique for testing digital integrated circuits (ICs) in the nanometer era, because it can easily provide at-speed test with small test data volume and short test application time. However, CSTP cannot reliably attain high fault coverage because of the difficulty of testing random-pattern-resistant faults. This paper presents a deterministic CSTP (DCSTP) structure, consisting of a DCSTP chain and jumping logic, to attain high fault coverage with low area overhead. Experimental results on ISCAS'89 benchmarks show that 100% fault coverage can be obtained with low area overhead and CPU time, especially for large circuits.
A deterministic width function model
Directory of Open Access Journals (Sweden)
C. E. Puente
2003-01-01
Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features, such as their overall shape and texture and the observed power-law scaling of their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.
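A fractal interpolating function of the kind the FM approach builds on can be sketched with a chaos-game iteration over affine maps. The interpolation points and vertical scaling factors below are invented for illustration; this is not the paper's full fractal-multifractal construction.

```python
import numpy as np

# Points (x_i, y_i) to interpolate, and vertical scalings d_i with |d_i| < 1.
pts = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]
d = [0.4, -0.4]

# Build affine maps w_i(x, y) = (a_i x + e_i, c_i x + d_i y + f_i) that send
# the whole interval [x_0, x_N] onto the i-th subinterval and match endpoints.
maps = []
x0, y0 = pts[0]
xN, yN = pts[-1]
for (xa, ya), (xb, yb), di in zip(pts[:-1], pts[1:], d):
    a = (xb - xa) / (xN - x0)
    e = xa - a * x0
    c = (yb - ya - di * (yN - y0)) / (xN - x0)
    f = ya - c * x0 - di * y0
    maps.append((a, e, c, di, f))

# Chaos game: iterate a randomly chosen map; points settle onto the graph of
# the fractal interpolating function, which passes through pts.
rng = np.random.default_rng(2)
x, y = 0.0, 0.0
samples = []
for _ in range(20000):
    a, e, c, di, f = maps[rng.integers(len(maps))]
    x, y = a * x + e, c * x + di * y + f
    samples.append((x, y))
```

The `d_i` control the "texture" (roughness) of the graph, which is what lets the FM construction tune the spectral scaling of simulated width functions.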
Zietek, Pawel; Karaczun, Maciej; Kruk, Bartosz; Szczypior, Karina
2016-01-01
Achilles injury is a common musculoskeletal disorder. Bilateral rupture of the Achilles tendon, however, is much less common and usually occurs spontaneously. Complete, traumatic, and bilateral ruptures are rare and typically require long periods of immobilization before the patient can return to full weightbearing. A 52-year-old male was hospitalized for bilateral traumatic rupture to both Achilles tendons. No risk factors for tendon rupture were found. Blood samples revealed no peripheral blood pathologic features. Both tendons were repaired with percutaneous, minimally invasive surgery using the Achillon(®) tendon suture system. Rehabilitation was begun 4 weeks later. An ankle-foot orthosis was prescribed to provide ankle support with an adjustable range of movement, and active plantar flexion was set at 0° to 30°. The patient remained non-weightbearing with the ankle-foot orthosis device and performed active range-of-motion exercises. At 8 weeks after surgery, we recommended that he begin walking with partial weightbearing using a foot-tibial orthosis with the range of motion set to 45° plantar flexion and 15° dorsiflexion. At 10 weeks postoperatively, he was encouraged to return to full weightbearing on both feet. Beginning rehabilitation as soon as possible after minimally invasive surgery, compared with 6 weeks of immobilization after surgery, provided a rapid resumption to full weightbearing. We emphasize the clinical importance of a safe, simple treatment program that can be followed for a patient with damage to the Achilles tendons. To our knowledge, ours is the first report of minimally invasive repair of bilateral simultaneous traumatic rupture of the Achilles tendon.
Minimal Pairs: Minimal Importance?
Brown, Adam
1995-01-01
This article argues that minimal pairs do not merit as much attention as they receive in pronunciation instruction. There are other aspects of pronunciation that are of greater importance, and there are other ways of teaching vowel and consonant pronunciation. (13 references) (VWL)
Scheau, C; Popa, G A; Ghergus, A E; Preda, E M; Capsa, R A; Lupescu, I G
2013-09-15
Minimal Hepatic Encephalopathy (MHE), previously referred to as infraclinical or subclinical encephalopathy, is a precursor in the development of clinical hepatic encephalopathy (HE). MHE is demonstrated through neuropsychological testing in the absence of clinical evidence of HE, patients showing only a mild cognitive impairment. The neuropsychological tests employed consist of the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) and the portosystemic encephalopathy (PSE) test score. Unfortunately, there are numerous occasions when the tests prove irrelevant: inexperienced investigators, the patient's poor education, vision problems or concurrent central nervous system disease, all of which may delay or deviate from the correct diagnosis.
Hong, Young Hoon; Yun, Dae Young; Jung, Yong Wook; Oh, Myung Jin; Kim, Hyun Je; Lee, Choong Ki
2011-12-01
The World Health Organization classifies lupus nephritis as class I to V or VI. However, a few cases of minimal change glomerulopathy have been reported in association with systemic lupus erythematosus (SLE). Mycophenolate mofetil has been shown to be effective for treatment of minimal change disease and lupus nephritis. A 24-year-old woman diagnosed with SLE five years prior to presentation complained of a mild generalized edema. The urinalysis showed microscopic hematuria and proteinuria. The assessed amount of total proteinuria was 1,618 mg/24 hours. A renal biopsy demonstrated diffuse fusion of the foot processes of podocytes on electron microscopy. Mycophenolate mofetil was started in addition to the maintenance medications of prednisolone 10 mg/day and hydroxychloroquine 400 mg/day. After six months of treatment, the microscopic hematuria and proteinuria resolved, and the total urine protein decreased to 100 mg/24 hours.
Directory of Open Access Journals (Sweden)
Mika Oki
2011-10-01
Full Text Available BACKGROUND: Dengue infection is endemic in many regions throughout the world. While insecticide fogging targeting the vector mosquito Aedes aegypti is a major control measure against dengue epidemics, the impact of this method remains controversial. A previous mathematical simulation study indicated that insecticide fogging minimized cases when conducted soon after peak disease prevalence, although the impact was minimal, possibly because seasonality and population immunity were not considered. Periodic outbreak patterns are also highly influenced by seasonal climatic conditions. Thus, these factors are important considerations when assessing the effect of vector control against dengue. We used mathematical simulations to identify the appropriate timing of insecticide fogging, considering seasonal change of vector populations, and to evaluate its impact on reducing dengue cases with various levels of transmission intensity. METHODOLOGY/PRINCIPAL FINDINGS: We created a Susceptible-Exposed-Infectious-Recovered (SEIR) model of dengue virus transmission. Mosquito lifespan was assumed to change seasonally, and the optimal timing of insecticide fogging to minimize dengue incidence under various lengths of the wet season was investigated. We also assessed whether insecticide fogging was equally effective at higher and lower endemic levels by running simulations over a 500-year period with various transmission intensities to produce an endemic state. In contrast to the previous study, the optimal application of insecticide fogging was between the onset of the wet season and the prevalence peak. Although it has less impact in areas that have higher endemicity and longer wet seasons, insecticide fogging can prevent a considerable number of dengue cases if applied at the optimal time. CONCLUSIONS/SIGNIFICANCE: The optimal timing of insecticide fogging and its impact on reducing dengue cases were greatly influenced by seasonality and the level of
Institute of Scientific and Technical Information of China (English)
Caiyi Lu; Gang Wang; Qi Zhou; Jinwen Tian; Lei Gao; Shenhua Zhou; Jinyue Zhai; Rui Chen; Zhongren Zhao; Cangqing Gao; Shiwen Wang; Yuxiao Zhang; Ming Yang; Qiao Xue; Cangsong Xiao; Wei Gao; Yang Wu
2008-01-01
A 69-year-old female patient was admitted because of 3 days of worsened chest pain. Coronary angiography showed 60% stenosis of the distal left main stem, chronic total occlusion of the left anterior descending (LAD) artery, 70% stenosis at the ostium of a small left circumflex, 70-90% stenosis at the proximal and middle parts of a dominant right coronary artery (RCA), and a normal left internal mammary artery (LIMA) with normal origin and orientation. Percutaneous intervention was attempted on the occluded LAD lesion but failed. The patient underwent minimally invasive direct coronary artery bypass (MIDCAB) with LIMA isolation performed with the da Vinci robot. Eleven days later, the RCA lesion was treated percutaneously with sirolimus (rapamycin)-eluting stent implantation. The patient was discharged uneventfully after 3 days of hospitalization. Our experience suggests that a two-stop-shop hybrid technique is feasible and safe in the treatment of elderly patients with multiple coronary lesions.
Minimal access surgery in Castleman disease in a child, a case report
Directory of Open Access Journals (Sweden)
Jan F. Svensson
2015-07-01
Full Text Available This case report describes a child with Castleman disease. We present an overview of the disease, the investigation leading to the diagnosis, the laparoscopic approach for surgical treatment, and the follow-up. This rare entity must be considered in cases of long-standing abdominal pain; cross-sectional imaging is beneficial, and we support the use of laparoscopic intervention in the treatment of unifocal abdominal Castleman disease.
Deterministic mean-variance-optimal consumption and investment
DEFF Research Database (Denmark)
Christiansen, Marcus; Steffensen, Mogens
2013-01-01
In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in the case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution, including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies.
Survivability of Deterministic Dynamical Systems
Hellmann, Frank; Schultz, Paul; Grabow, Carsten; Heitzig, Jobst; Kurths, Jürgen
2016-07-01
The notion of a part of phase space containing desired (or allowed) states of a dynamical system is important in a wide range of complex systems research. It has been called the safe operating space, the viability kernel or the sunny region. In this paper we define the notion of survivability: given a random initial condition, what is the likelihood that the transient behaviour of a deterministic system does not leave a region of desirable states? We demonstrate the utility of this novel stability measure by considering models from climate science, neuronal networks and power grids. We also show that a semi-analytic lower bound for the survivability of linear systems allows a numerically very efficient survivability analysis in realistic models of power grids. Our numerical and semi-analytic work underlines that the type of stability measured by survivability is not captured by common asymptotic stability measures.
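The survivability measure defined above can be estimated numerically. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: a hypothetical 1-D linear system dx/dt = -a·x + c is integrated with Euler steps, and survivability is the fraction of uniformly drawn initial conditions whose transient never leaves the desirable region |x| <= bound.

```python
import random

def survives(x0, a=1.0, forcing=0.5, dt=0.01, t_max=20.0, bound=2.0):
    """Integrate dx/dt = -a*x + forcing with Euler steps and check
    that the transient never leaves the region |x| <= bound."""
    x = x0
    for _ in range(int(t_max / dt)):
        x += dt * (-a * x + forcing)
        if abs(x) > bound:
            return False
    return True

def survivability(n=10_000, init_range=(-3.0, 3.0), seed=0):
    """Monte Carlo estimate of survivability: the fraction of random
    initial conditions whose transient stays in the desirable region."""
    rng = random.Random(seed)
    hits = sum(survives(rng.uniform(*init_range)) for _ in range(n))
    return hits / n
```

For this toy system the trajectory moves monotonically toward the fixed point x = 0.5, so survivability is roughly the fraction of the initial-condition range lying inside the desirable region (about 2/3 here).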
The Deterministic Dendritic Cell Algorithm
Greensmith, Julie
2010-01-01
The Dendritic Cell Algorithm is an immune-inspired algorithm originally based on the function of natural dendritic cells. The original instantiation of the algorithm is highly stochastic. While the performance of the algorithm is good when applied to large real-time datasets, it is difficult to analyse due to the number of random-based elements. In this paper a deterministic version of the algorithm is proposed, implemented and tested using a port scan dataset to provide a controllable system. This version has a controllable number of parameters, which are experimented with in this paper. In addition, the effects of the use of time windows and of variation in the number of cells are examined, both of which are shown to influence the algorithm. Finally, a novel metric for the assessment of the algorithm's output is introduced and proves to be more sensitive than the metric used with the original Dendritic Cell Algorithm.
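The deterministic variant replaces the original algorithm's random cell populations and sampled thresholds with fixed values. The toy sketch below illustrates that idea only; the signal weights and migration threshold are illustrative assumptions, not the published coefficients.

```python
def ddca(stream, migration_threshold=10.0):
    """Toy deterministic-DCA sketch (weights are assumptions).
    csm accumulates total signal strength; k accumulates an anomaly
    score (danger signals raise it, safe signals lower it). When csm
    crosses a fixed, non-random migration threshold, the cell
    'migrates' and emits a verdict for the window it observed."""
    csm = k = 0.0
    verdicts = []
    for safe, danger in stream:
        csm += safe + danger
        k += danger - 2.0 * safe
        if csm >= migration_threshold:
            verdicts.append("anomalous" if k > 0 else "normal")
            csm = k = 0.0
    return verdicts
```

Because every quantity is computed from the input stream alone, repeated runs on the same data give identical output, which is exactly what makes the deterministic version analysable.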
Sarita; Thumati, Prafulla
2014-12-01
Evidence of dentistry dates back to 7000 B.C., and the field has since come a long, sophisticated way in the treatment management of our dental patients. There have been admirable advances in the field of prosthodontics by way of techniques and materials, enabling production of artificial teeth that feel, function and appear nothing but natural. The following case report describes the management of maxillary edentulousness with a removable complete denture, and of mandibular attrition and missing teeth with onlays and an FPD, by the concept of minimally invasive cosmetic dentistry. Computer-guided occlusal analysis was used to guide sequential occlusal adjustments to obtain measurable bilateral occlusal contacts simultaneously.
Directory of Open Access Journals (Sweden)
Reddy
2015-07-01
Full Text Available CONTEXT: The approximate incidence of periprosthetic supracondylar femur fractures after total knee arthroplasty ranges from 0.3 to 2.5 percent. Various methods of treatment of these fractures have been suggested in the past, such as conservative management, open reduction and plate fixation, and intramedullary nailing. However, there were complications like pain, stiffness, infection and delayed union. Minimally invasive plate osteosynthesis (MIPO) is a relatively new technique in the treatment of distal femoral fractures, as it preserves the periosteal blood supply and bone perfusion as well as minimizes soft tissue dissection. AIM: To evaluate the effectiveness of the MIPO technique in the treatment of periprosthetic distal femoral fracture. SETTINGS AND DESIGN: In this study, we present a case report of a 54-year-old female patient who sustained a type 2 (Rorabeck et al. classification) periprosthetic distal femoral fracture after TKA. Her fracture fixation was done with distal femoral locking plates using a minimally invasive technique. METHODS AND MATERIAL: We evaluated the clinical (using the Oxford knee scoring system) and radiological outcomes of the patient till six months postoperatively. Radiologically, the fracture showed complete union, and she regained her full range of knee motion by the end of three months. CONCLUSION: We conclude that MIPO can be considered an effective surgical treatment option in the management of periprosthetic distal femoral fractures after TKA.
DEFF Research Database (Denmark)
Bertl, Kristina; Gotfredsen, Klaus; Jensen, Simon S;
2016-01-01
OBJECTIVES: To report two cases of adverse reaction after mucosal hyaluronan (HY) injection around implant-supported crowns, with the aim to augment the missing interdental papilla. MATERIAL AND METHODS: Two patients with single, non-neighbouring, implants in the anterior maxilla, who were treate...
Time-minimal control of dissipative two-level quantum systems: The Integrable case
Bonnard, B
2008-01-01
The objective of this article is to apply recent developments in geometric optimal control to analyze the time minimum control problem of dissipative two-level quantum systems whose dynamics is governed by the Lindblad equation. We focus our analysis on the case where the extremal Hamiltonian is integrable.
TRANSPORTATION MODAL CHOICE IN COOLANT IMPORTATION THROUGH TOTAL COSTS MINIMIZATION: A CASE STUDY
Directory of Open Access Journals (Sweden)
Marcela de Souza Leite
2016-07-01
Full Text Available Transportation plays a very significant role in the costs of a company, representing on average 60% of logistics costs, so its management is very important for any company. The choice of transportation mode is one of the most important transportation decisions. The purpose of this article is to select the transportation mode that minimizes total costs, consistent with the customer service objectives, for the import of coolant used in plasma cutting machines. With the installation of a distribution center in Brazil and the professionalization of the company's logistics department, it was decided to re-evaluate the transportation mode previously chosen to import some items. To determine the best mode of transportation, a basic cost trade-off was used: the cost of using the shuttle service weighed against the indirect inventory cost related to the mode's performance. The study showed that up to 73% may be saved on the coolant's international transportation by changing the transportation mode used by the company.
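The cost trade-off described above amounts to comparing, for each mode, the freight cost plus the carrying cost of inventory value tied up in transit. A minimal sketch with purely hypothetical figures (none of these numbers come from the study):

```python
def total_cost(freight_cost, transit_days, shipment_value,
               annual_holding_rate=0.25):
    """Total logistics cost of one shipment: freight plus the
    inventory carrying cost of the value tied up in transit."""
    in_transit_holding = shipment_value * annual_holding_rate * transit_days / 365
    return freight_cost + in_transit_holding

# Hypothetical figures for illustration only.
modes = {
    "air":   total_cost(freight_cost=5_000, transit_days=3,  shipment_value=40_000),
    "ocean": total_cost(freight_cost=1_200, transit_days=35, shipment_value=40_000),
}
best = min(modes, key=modes.get)
```

With these illustrative numbers the slower mode still wins, because its freight saving dwarfs the extra inventory cost; the decision flips only when transit-time inventory costs dominate.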
Directory of Open Access Journals (Sweden)
Hone-Jay Chu
2016-12-01
Full Text Available Outbreaks of infectious diseases or multi-casualty incidents have the potential to generate a large number of patients. It is a challenge for the healthcare system when demand for care suddenly surges. Traditionally, valuation of health care spatial accessibility was based on static supply and demand information. In this study, we proposed an optimal model with the three-step floating catchment area (3SFCA) method to account for the supply to minimize variability in spatial accessibility. We used empirical dengue fever outbreak data in Tainan City, Taiwan in 2015 to demonstrate the dynamic change in spatial accessibility based on the epidemic trend. The x and y coordinates of dengue-infected patients, with precision loss, were provided publicly by the Tainan City government and were used as our model's demand. The spatial accessibility of health care during the dengue outbreak from August to October 2015 was analyzed spatially and temporally by producing accessibility maps and conducting capacity change analysis. This study also utilized the particle swarm optimization (PSO) model to decrease the spatial variation in accessibility and the shortage areas of healthcare resources as the epidemic went on. The proposed method in this study can help decision makers reallocate healthcare resources spatially when the ratios of demand and supply surge too quickly and form clusters in some locations.
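The floating catchment area idea behind the 3SFCA can be illustrated with its simpler two-step ancestor (2SFCA). The sketch below is a simplified stand-in with toy data, not the study's 3SFCA model: step 1 gives each facility a supply-to-demand ratio over the demand points inside its catchment; step 2 sums, for each demand point, the ratios of all facilities it can reach.

```python
def two_step_fca(supply, demand, distance, d0):
    """Simplified two-step floating catchment area (2SFCA).
    Step 1: each facility j gets ratio supply[j] / (demand within d0).
    Step 2: each demand point i sums the ratios of facilities within d0."""
    ratios = {}
    for j, s in supply.items():
        pop = sum(demand[i] for i in demand if distance[i][j] <= d0)
        ratios[j] = s / pop if pop else 0.0
    return {i: sum(r for j, r in ratios.items() if distance[i][j] <= d0)
            for i in demand}

# Hypothetical toy data: two clinics (bed counts), two demand areas
# (population), travel costs in arbitrary units, catchment radius 3.
access = two_step_fca(
    supply={"H1": 10, "H2": 5},
    demand={"A": 100, "B": 50},
    distance={"A": {"H1": 1, "H2": 5}, "B": {"H1": 2, "H2": 1}},
    d0=3,
)
```

Area B reaches both clinics, so its accessibility score is higher than area A's; an optimizer such as PSO would then adjust supply to shrink exactly this kind of spatial variation.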
The case for an error minimizing set of coding amino acids.
Torabi, Noorossadat; Goodarzi, Hani; Shateri Najafabadi, Hamed
2007-02-21
The fidelity of the translation machinery largely depends on the accuracy with which the tRNAs within living cells are charged. Aminoacyl-tRNA synthetases (aaRSs) attach amino acids to their cognate tRNAs, ensuring the fidelity of translation of coding sequences. Based on sequence analysis and catalytic domain structure, these enzymes are classified into two major groups of 10 enzymes each. In this study, we tackled the role of aaRSs in decreasing the effects of mistranslation and, consequently, in the evolution of the translation machinery. To this end, a fitness function was introduced in order to measure the accuracy with which each tRNA is charged with its cognate amino acid. Our results suggest that the aaRSs are very well optimized for "load minimization", based on their classes and their mechanisms for distinguishing the correct amino acids. In addition, our results support the idea that, from an evolutionary point of view, selectional pressure on translational fidelity seems to have been responsible for the occurrence of the 20 coding amino acids.
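The notion of "load minimization" can be made concrete with a toy fitness calculation. The sketch below is a hypothetical illustration, not the paper's fitness function: the expected load of a synthetase is the sum, over wrong amino acids, of the mischarge probability times a chemical distance from the correct residue (here an invented polarity scale).

```python
# Toy polarity values; purely illustrative, not a published scale.
polarity = {"Leu": 4.9, "Ile": 4.9, "Val": 5.6, "Asp": 13.0}

def expected_load(correct, mischarge_probs):
    """Expected cost of mischarging: each wrong amino acid contributes
    its probability times its chemical distance from the correct one."""
    return sum(p * abs(polarity[aa] - polarity[correct])
               for aa, p in mischarge_probs.items())

# An aaRS that confuses Leu mostly with the chemically similar Ile
# carries a smaller load than one that confuses it with dissimilar Asp,
# even at the same total error rate of 10%.
load_similar = expected_load("Leu", {"Ile": 0.09, "Val": 0.01})
load_dissimilar = expected_load("Leu", {"Asp": 0.10})
```

Under such a measure, an error-minimizing code is one in which the likely confusions are exactly the chemically cheap ones.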
Institute of Scientific and Technical Information of China (English)
Yun Niu; Tieju Liu; Xuchen Cao; Xiumin Ding; Li Wei; Yuxia Gao; Jun Liu
2009-01-01
OBJECTIVE To evaluate core needle biopsy (CNB) as a minimally invasive method to examine breast lesions and discuss the clinical significance of subsequent immunohistochemistry (IHC) analysis. METHODS The clinical data and pathological results of 235 patients with breast lesions, who received CNB before surgery, were analyzed and compared. Based on the results of CNB done before surgery, 87 out of 204 patients diagnosed with invasive carcinoma were subjected to immunodetection for p53, c-erbB-2, ER and PR. The morphological change of cancer tissues in response to chemotherapy was also evaluated. RESULTS Of the 235 cases receiving CNB examination, 204 were diagnosed as invasive carcinoma, reaching a 100% consistent rate with the surgical diagnosis. Sixty percent of the cases diagnosed as non-invasive carcinoma by CNB were identified to have invading elements in surgical specimens, and similarly, 50% of the cases diagnosed as atypical ductal hyperplasia by CNB were confirmed to be carcinoma by the subsequent result of excision biopsy. There was no significant difference between the CNB biopsy and regular surgical samples in the positive rate of immunohistochemistry analysis (p53, c-erbB-2, ER and PR; P > 0.05). However, there was a significant difference in the expression rate of p53 and c-erbB-2 between the cases with and without morphological change in response to chemotherapy (P < 0.05). In most cases positive for p53 and c-erbB-2, there was no obvious morphological change after chemotherapy. CONCLUSION CNB is a cost-effective diagnostic method with minimal invasion for breast lesions, although it still has some limitations. Immunodetection on CNB tissue is expected to have great significance in clinical applications.
Grütter, Linda; Vailati, Francesca
2013-01-01
A full-mouth adhesive rehabilitation in a case of severe dental erosion may present a challenge for both the clinician and the laboratory technician, not only because of the multiple teeth to be restored, but also because of the time schedule, which is difficult to fit into the busy agenda of a private practice. Thanks to the simplicity of the 3-step technique, full-mouth rehabilitations become easier to handle. In this article the treatment of a very compromised case of dental erosion (ACE class V) is illustrated, implementing only adhesive techniques. The very pleasing clinical outcome was the result of the esthetic, mechanical and, most of all, biological success achieved, confirming that minimally invasive dentistry should always be the driving motor of any rehabilitation, especially in patients who have already suffered conspicuous tooth destruction.
Iwazu, Y; Nemoto, J; Okuda, K; Nakazawa, E; Hashimoto, A; Fujio, Y; Sakamoto, M; Ando, Y; Muto, S; Kusano, E
2008-01-01
A 63-year-old man was admitted to our hospital for evaluation of generalized edema. Coexistence of severe hypothyroidism and nephrotic syndrome was detected by laboratory examination. High titers of both antimicrosomal antibody and antithyroid peroxidase antibody indicated Hashimoto's disease. Renal biopsy showed minimal change glomerular abnormality, but no findings of membranous nephropathy. A series of medical treatments, including steroid therapy, thyroid hormone and human albumin replacement therapy, was administered. However, acute renal failure, accompanied by hypotension, was not sufficiently prevented. After 9 sessions of plasmapheresis therapy, the severe proteinuria and low serum albumin levels were improved. Even after resting hypotension was normalized, neither renal function nor thyroid function was fully recovered. After discharge, renal function gradually returned to normal, and the blood pressure developed into a hypertensive state concomitant with the normalization of thyroid function. This report is a rare case of autoimmune thyroid disease complicated with minimal change nephrotic syndrome. In most cases of nephrotic syndrome, acute renal failure (ARF) has been reported to coexist with hypertension. Although pseudohypothyroidism is well known in nephrotic pathophysiology, complications of actual hypothyroidism are uncommon. It is suggested that the development of hypotension and ARF could be enhanced not only by hypoproteinemia, but also by severe hypothyroidism.
Optimal Insurance for a Minimal Expected Retention: The Case of an Ambiguity-Seeking Insurer
Directory of Open Access Journals (Sweden)
Massimiliano Amarante
2016-03-01
Full Text Available In the classical expected utility framework, a problem of optimal insurance design with a premium constraint is equivalent to a problem of optimal insurance design with a minimum expected retention constraint. When the insurer has ambiguous beliefs represented by a non-additive probability measure, as in Schmeidler, this equivalence no longer holds. Recently, Amarante, Ghossoub and Phelps examined the problem of optimal insurance design with a premium constraint when the insurer has ambiguous beliefs. In particular, they showed that when the insurer is ambiguity-seeking, with a concave distortion of the insured’s probability measure, then the optimal indemnity schedule is a state-contingent deductible schedule, in which the deductible depends on the state of the world only through the insurer’s distortion function. In this paper, we examine the problem of optimal insurance design with a minimum expected retention constraint, in the case where the insurer is ambiguity-seeking. We obtain the aforementioned result of Amarante, Ghossoub and Phelps and the classical result of Arrow as special cases.
Honda, Masayuki; Daiko, Hiroyuki; Kinoshita, Takahiro; Fujita, Takeo; Shibasaki, Hidehito; Nishida, Toshiro
2015-12-01
We report on a case of synchronous carcinomas of the esophagus and stomach. A 68-year-old man was referred to our hospital for an abnormality found during his medical examination. Further evaluation revealed squamous cell carcinoma in the thoracic lower esophagus and gastric adenocarcinoma located in the middle third of the stomach. Thoracoscopic esophagectomy in the prone position (TSEP), laparoscopic total gastrectomy (LTG) with three-field lymph node dissection, and laparoscopically assisted colon reconstruction (LACR) were performed. The patient did not have any major postoperative complications. His pathological examination revealed no metastases in 56 harvested lymph nodes and no residual tumor. He was followed up for 30 months without recurrence. To our knowledge, this is the first report of esophageal and gastric synchronous carcinomas that were successfully treated with a combination of TSEP, LTG, and LACR. These operations may be a feasible and appropriate treatment for this disease.
Wahbi, M A; Al Sharief, H S; Tayeb, H; Bokhari, A
2013-04-01
Gingival recession causes not only aesthetic problems, but problems with oral hygiene, plaque accumulation, speech, and tooth sensitivity. Replacing the missing gingival tissue with composite resin, when indicated, can be a time- and cost-effective solution. Here we report the case of a 25-year-old female who presented with generalized gingival recession. Black triangles were present between the maxillary and mandibular anterior teeth due to loss of interdental tissues, caused by recent periodontal surgery. She also had slightly malposed maxillary anterior teeth. The patient elected to replace gingival tissue with pink composite resin and to alter the midline with composite resin veneers. The first treatment phase involved placement of pink gingival composite to restore the appearance of interdental papilla to her upper (16, 15, 14, 13, 12, 11, 21, 22, 23, and 24) and lower (34, 33, 32, 31, 41, 42, 43, and 44) teeth. Phase two was to place direct composite resin bonded veneers on her upper (16, 15, 14, 13, 12, 11, 21, 22, 23, and 24) teeth to alter the midline and achieve desired colour. The third treatment phase was to level the lower incisal edge shape by enameloplasty (31, 32, 41, and 42) to produce a more youthful and attractive smile. This case report and brief review attempt to describe the clinical obstacles and the current treatment options along with a suggested protocol. Use of contemporary materials such as gingival coloured composite to restore lost gingival tissue and improve aesthetics can be a simple and cost-effective way to manage patients affected by generalized aggressive periodontitis (AgP).
Ryu, Minsoo
Time-Triggered Controller Area Network (TTCAN) is widely accepted as a viable solution for real-time communication systems such as in-vehicle communications. However, although TTCAN has been designed to support both periodic and sporadic real-time messages, previous studies mostly focused on providing deterministic real-time guarantees for periodic messages while barely addressing the performance of sporadic messages. In this paper, we present an O(n²) scheduling algorithm that can minimize the maximum duration of the exclusive windows occupied by periodic messages, thereby minimizing the worst-case scheduling delays experienced by sporadic messages.
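The min-max objective behind this result can be sketched independently of the paper's algorithm. The code below is a generic stand-in, not the authors' O(n²) method: it splits an ordered sequence of periodic message lengths into at most k contiguous exclusive windows so that the longest window, which bounds how long a sporadic message can be blocked, is as short as possible, using binary search on the answer.

```python
def min_max_window(lengths, k):
    """Partition the ordered periodic messages into at most k
    contiguous exclusive windows, minimizing the longest window.
    Binary-searches the smallest feasible window capacity."""
    def feasible(cap):
        # Greedily fill windows up to 'cap' and count how many we need.
        windows, cur = 1, 0
        for length in lengths:
            if cur + length > cap:
                windows += 1
                cur = 0
            cur += length
        return windows <= k

    lo, hi = max(lengths), sum(lengths)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

In this simplified model, shrinking the longest exclusive window directly shrinks the worst-case wait of a sporadic message that arrives just as that window opens.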
Accomplishing Deterministic XML Query Optimization
Institute of Scientific and Technical Information of China (English)
Dun-Ren Che
2005-01-01
As the popularity of XML (eXtensible Markup Language) keeps growing rapidly, the management of XML-compliant structured-document databases has become a very interesting and compelling research area. Query optimization for XML structured documents stands out as one of the most challenging research issues in this area because of the much enlarged optimization (search) space, which is a consequence of the intrinsic complexity of the underlying data model of XML data. We therefore propose to apply deterministic transformations on query expressions to most aggressively prune the search space and quickly achieve a sufficiently improved alternative (if not the optimal) for each incoming query expression. This idea is not just exciting but practically attainable. This paper first provides an overview of our optimization strategy, and then focuses on the key implementation issues of our rule-based transformation system for XML query optimization in a database environment. The performance results we obtained from experimentation show that our approach is a valid and effective one.
Deterministic patterns in cell motility
Lavi, Ido; Piel, Matthieu; Lennon-Duménil, Ana-Maria; Voituriez, Raphaël; Gov, Nir S.
2016-12-01
Cell migration paths are generally described as random walks, associated with both intrinsic and extrinsic noise. However, complex cell locomotion is not merely related to such fluctuations, but is often determined by the underlying machinery. Cell motility is driven mechanically by actin and myosin, two molecular components that generate contractile forces. Other cell functions make use of the same components and, therefore, will compete with the migratory apparatus. Here, we propose a physical model of such a competitive system, namely dendritic cells whose antigen capture function and migratory ability are coupled by myosin II. The model predicts that this coupling gives rise to a dynamic instability, whereby cells switch from persistent migration to unidirectional self-oscillation, through a Hopf bifurcation. Cells can then switch to periodic polarity reversals through a homoclinic bifurcation. These predicted dynamic regimes are characterized by robust features that we identify through in vitro trajectories of dendritic cells over long timescales and distances. We expect that competition for limited resources in other migrating cell types can lead to similar deterministic migration modes.
Directory of Open Access Journals (Sweden)
Qian Yimei
2009-01-01
Full Text Available Abstract Introduction Acute kidney injury in the setting of adult minimal change disease is associated with proteinuria, hypertension and hyperlipidemia, but anemia is usually absent. Renal biopsies exhibit foot process effacement as well as tubular interstitial inflammation, acute tubular necrosis or intratubular obstruction. We recently managed a patient with unique clinical and pathological features of minimal change disease, who presented with severe anemia and acute kidney injury, an association not previously reported in the literature. Case presentation A 60-year-old Indian-American woman with a history of hypertension and diabetes mellitus for 10 years presented with progressive oliguria over 2 days. Laboratory data revealed severe hyperkalemia, azotemia, heavy proteinuria and progressively worsening anemia. Urine eosinophils were not seen. Emergent hemodialysis, erythropoietin and blood transfusion were initiated. Serologic tests for hepatitis B, hepatitis C, anti-nuclear antibodies, anti-glomerular basement membrane antibodies and anti-neutrophil cytoplasmic antibodies were negative. Complement levels (C3, C4 and CH50) were normal. Renal biopsy unexpectedly displayed 100% foot process effacement. A 24-hour urine collection detected 6.38 g of protein. Proteinuria and anemia resolved during six weeks of steroid therapy. Renal function recovered completely. No signs of relapse were observed at 8-month follow-up. Conclusion Adult minimal change disease should be considered when a patient presents with proteinuria and severe acute kidney injury, even when accompanied by severe anemia. This report adds to a growing body of literature suggesting that in addition to steroid therapy, prompt initiation of erythropoietin therapy may facilitate full recovery of renal function in acute kidney injury.
Constructing stochastic models from deterministic process equations by propensity adjustment
Directory of Open Access Journals (Sweden)
Wu Jialiang
2011-11-01
Full Text Available Abstract Background Gillespie's stochastic simulation algorithm (SSA) for chemical reactions admits three kinds of elementary processes, namely, mass action reactions of 0th, 1st or 2nd order. All other types of reaction processes, for instance those containing non-integer kinetic orders or following other types of kinetic laws, are assumed to be convertible to one of the three elementary kinds, so that the SSA can validly be applied. However, the conversion to elementary reactions is often difficult, if not impossible. Within deterministic contexts, a strategy of model reduction is often used. Such a reduction simplifies the actual system of reactions by merging or approximating intermediate steps and omitting reactants such as transient complexes. It would be valuable to adopt a similar reduction strategy in stochastic modelling. Indeed, efforts have been devoted to manipulating the chemical master equation (CME) in order to achieve a proper propensity function for a reduced stochastic system. However, manipulations of the CME are almost always complicated, and successes have been limited to relatively simple cases. Results We propose a rather general strategy for converting a deterministic process model into a corresponding stochastic model and characterize the mathematical connections between the two. The deterministic framework is assumed to be a generalized mass action system and the stochastic analogue is in the format of the chemical master equation. The analysis identifies situations where a direct conversion is valid; where internal noise affecting the system needs to be taken into account; and where the propensity function must be mathematically adjusted. The conversion from deterministic to stochastic models is illustrated with several representative examples, including reversible reactions with feedback controls, Michaelis-Menten enzyme kinetics, a genetic regulatory motif, and stochastic focusing. Conclusions The construction of a stochastic
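For context, one of the three elementary mass-action processes the abstract mentions can be simulated in a few lines. This is a minimal, generic SSA sketch for a single 2nd-order reaction A + B -> C with propensity k·A·B, not the propensity-adjustment method the paper develops.

```python
import random

def gillespie(k, a0, b0, t_max, seed=0):
    """Minimal Gillespie SSA for the elementary 2nd-order reaction
    A + B -> C with mass-action propensity k * A * B."""
    rng = random.Random(seed)
    t, a, b, c = 0.0, a0, b0, 0
    while t < t_max and a > 0 and b > 0:
        prop = k * a * b                 # current propensity
        t += rng.expovariate(prop)       # exponential waiting time
        if t >= t_max:
            break
        a, b, c = a - 1, b - 1, c + 1    # fire one reaction event
    return a, b, c
```

Reaction firing times are random, but for this irreversible reaction the final copy numbers are fixed once the limiting reactant is exhausted; the paper's contribution concerns cases where the propensity cannot simply be read off a mass-action rate law like this.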
Zagrebaev, A. M.; Ramazanov, R. N.; Lunegova, E. A.
2017-01-01
In this paper we consider the problem of minimizing the energy loss of a nuclear power plant in the case of partial in-core monitoring system failure. Operation may continue at reduced power, or the failed neutron measurement channels may be completely replaced, which requires shutting down the reactor and keeping a stock of detectors. This article examines the reconstruction of the energy release in the core of a nuclear reactor on the basis of the indications of height sensors. The missing measurement information can be reconstructed by mathematical methods, so that replacement of the failed sensors can be avoided. It is suggested that a set of 'natural' functions be constructed, determined by means of statistical estimates obtained from archival data. The proposed procedure makes it possible to reconstruct the field even with a significant loss of measurement information. Improving the accuracy of the reconstruction of the neutron flux density under partial loss of measurement information minimizes the stock of necessary components and the associated losses.
Gkrouzman, Elena; Kirou, Kyriakos A; Seshan, Surya V; Chevalier, James M
2015-01-01
Secondary causes of minimal change disease (MCD) account for a minority of cases compared to its primary or idiopathic form and provide ground for consideration of common mechanisms of pathogenesis. In this paper we report a case of a 27-year-old Latina woman, a renal transplant recipient with systemic lupus erythematosus (SLE), who developed nephrotic range proteinuria 6 months after transplantation. The patient had recurrent acute renal failure and multiple biopsies were consistent with MCD. However, she lacked any other features of the typical nephrotic syndrome. An angiogram revealed a right external iliac vein stenosis in the region of renal vein anastomosis, which when restored resulted in normalization of creatinine and relief from proteinuria. We report a rare case of MCD developing secondary to iliac vein stenosis in a renal transplant recipient with SLE. Additionally we suggest that, in the event of biopsy-proven MCD presenting as an atypical nephrotic syndrome, alternative or secondary, potentially reversible, causes should be considered and explored.
Tsukamoto, Yoshitane; Otsuki, Taiichiro; Hao, Hiroyuki; Kuribayashi, Kozo; Nakano, Takashi; Kida, Aritoshi; Nakanishi, Takeshi; Funatsu, Eriko; Noguchi, Chihiro; Yoshihara, Shunya; Kaku, Koji; Hirota, Seiichi
2015-12-01
Malignant pleural mesothelioma (MPM) is an aggressive disease that typically spreads along the pleural surface and encases the lung, leading to respiratory failure or cachexia. Rare cases with atypical clinical manifestation or presentation have been reported in MPM. We report a unique case of MPM concurrently associated with miliary pulmonary metastases and nephrotic syndrome. A 73-year-old Japanese man with a past history of asbestos exposure was referred to our hospital for investigation of a left pleural effusion. Chest computed tomography showed thickening of the left parietal pleura. A biopsy specimen of the pleura showed proliferating epithelioid tumor cells, leading to the pathological diagnosis of epithelioid MPM with the aid of immunohistochemistry. After the diagnosis of MPM, chemotherapy was performed without effect. Soon after the clinical diagnosis of progressive disease with skull metastasis, edema and weight gain appeared. Laboratory data met the criteria of nephrotic syndrome, and renal biopsy with electron microscopic examination revealed minimal change disease. Steroid therapy was started but showed no effect. Around the time of onset of the nephrotic syndrome, multiple miliary lung nodules appeared on chest CT. A transbronchial biopsy specimen of the nodules showed metastatic MPM in the lung. The patient died of worsening general condition. To our knowledge, this is the first case of MPM concurrently associated with multiple miliary pulmonary metastases and nephrotic syndrome.
Leventis, Minas D.; Fairbairn, Peter; Kakar, Ashish; Leventis, Angelos D.; Margaritis, Vasileios; Lückerath, Walter; Horowitz, Robert A.; Rao, Bappanadu H.; Lindner, Annette; Nagursky, Heiner
2016-01-01
Ridge preservation measures, which include the filling of extraction sockets with bone substitutes, have been shown to reduce ridge resorption, while methods that do not require primary soft tissue closure minimize patient morbidity and decrease surgical time and cost. In a case series of 10 patients requiring single extraction, in situ hardening beta-tricalcium phosphate (β-TCP) granules coated with poly(lactic-co-glycolic acid) (PLGA) were utilized as a grafting material that does not necessitate primary wound closure. After 4 months, clinical observations revealed excellent soft tissue healing without loss of attached gingiva in all cases. At reentry for implant placement, bone core biopsies were obtained and primary implant stability was measured by final seating torque and resonance frequency analysis. Histological and histomorphometrical analysis revealed pronounced bone regeneration (24.4 ± 7.9% new bone) in parallel to the resorption of the grafting material (12.9 ± 7.7% graft material) while high levels of primary implant stability were recorded. Within the limits of this case series, the results suggest that β-TCP coated with polylactide can support new bone formation at postextraction sockets, while the properties of the material improve the handling and produce a stable and porous bone substitute scaffold in situ, facilitating the application of noninvasive surgical techniques. PMID:27190516
A Deterministic and Polynomial Modified Perceptron Algorithm
Directory of Open Access Journals (Sweden)
Olof Barr
2006-01-01
Full Text Available We construct a modified perceptron algorithm that is deterministic, polynomial and also as fast as previously known algorithms. The algorithm runs in time O(mn^3 log n log(1/ρ)), where m is the number of examples, n the number of dimensions and ρ is approximately the size of the margin. We also construct a non-deterministic modified perceptron algorithm running in time O(mn^2 log n log(1/ρ)).
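For context, the classical mistake-driven perceptron that such modified variants build on can be sketched as follows (this shows only the standard update, not the deterministic polynomial modification; the toy data set is invented for illustration):

```python
def perceptron(examples, max_iters=1000):
    """Classical perceptron on examples (x, y) with labels y in {-1, +1}.

    Returns a weight vector w with sign(w . x) == y for every example,
    or None if no separator was found within max_iters passes.
    """
    n = len(examples[0][0])
    w = [0.0] * n
    for _ in range(max_iters):
        updated = False
        for x, y in examples:
            # mistake-driven update: move w toward misclassified example
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                updated = True
        if not updated:          # a full pass with no mistakes: done
            return w
    return None

# Linearly separable toy data: label is the sign of the first coordinate
data = [((1.0, 0.5), 1), ((2.0, -1.0), 1), ((-1.0, 0.3), -1), ((-2.0, 1.0), -1)]
w = perceptron(data)
```

The number of updates is bounded in terms of 1/ρ², which is why the margin ρ appears in the running times quoted above.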
Bajc, Iztok; Hecht, Frédéric; Žumer, Slobodan
2016-09-01
This paper presents a 3D mesh adaptivity strategy on unstructured tetrahedral meshes by a posteriori error estimates based on metrics derived from the Hessian of a solution. The study is made on the case of a nonlinear finite element minimization scheme for the Landau-de Gennes free energy functional of nematic liquid crystals. Newton's iteration for tensor fields is employed with steepest descent method possibly stepping in. Aspects relating the driving of mesh adaptivity within the nonlinear scheme are considered. The algorithmic performance is found to depend on at least two factors: when to trigger each single mesh adaptation, and the precision of the correlated remeshing. Each factor is represented by a parameter, with its values possibly varying for every new mesh adaptation. We empirically show that the time of the overall algorithm convergence can vary considerably when different sequences of parameters are used, thus posing a question about optimality. The extensive testings and debugging done within this work on the simulation of systems of nematic colloids substantially contributed to the upgrade of an open source finite element-oriented programming language to its 3D meshing possibilities, as also to an outer 3D remeshing module.
Deterministic gathering of anonymous agents in arbitrary networks
Dieudonné, Yoann
2011-01-01
A team consisting of an unknown number of mobile agents, starting from different nodes of an unknown network, possibly at different times, have to meet at the same node. Agents are anonymous (identical), execute the same deterministic algorithm and move in synchronous rounds along links of the network. Which configurations are gatherable and how to gather all of them deterministically by the same algorithm? We give a complete solution of this gathering problem in arbitrary networks. We characterize all gatherable configurations and give two universal deterministic gathering algorithms, i.e., algorithms that gather all gatherable configurations. The first algorithm works under the assumption that an upper bound n on the size of the network is known. In this case our algorithm guarantees gathering with detection, i.e., the existence of a round for any gatherable configuration, such that all agents are at the same node and all declare that gathering is accomplished. If no upper bound on the size of the network i...
How Does Quantum Uncertainty Emerge from Deterministic Bohmian Mechanics?
Solé, A.; Oriols, X.; Marian, D.; Zanghì, N.
2016-10-01
Bohmian mechanics is a theory that provides a consistent explanation of quantum phenomena in terms of point particles whose motion is guided by the wave function. In this theory, the state of a system of particles is defined by the actual positions of the particles and the wave function of the system; and the state of the system evolves deterministically. Thus, the Bohmian state can be compared with the state in classical mechanics, which is given by the positions and momenta of all the particles, and which also evolves deterministically. However, while in classical mechanics it is usually taken for granted and considered unproblematic that the state is, at least in principle, measurable, this is not the case in Bohmian mechanics. Due to the linearity of the quantum dynamical laws, one essential component of the Bohmian state, the wave function, is not directly measurable. Moreover, it turns out that the measurement of the other component of the state — the positions of the particles — must be mediated by the wave function; a fact that in turn implies that the positions of the particles, though measurable, are constrained by absolute uncertainty. This is the key to understanding how Bohmian mechanics, despite being deterministic, can account for all quantum predictions, including quantum randomness and uncertainty.
Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones
Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto
2015-04-01
Probabilistic seismic hazard assessment (PSHA), usually adopted in the framework of seismic code redaction, is based on a Poissonian description of temporal occurrence, a negative exponential distribution of magnitude, and an attenuation relationship with log-normal distribution of PGA or response spectrum. The main positive aspect of this approach stems from the fact that it is presently a standard for the majority of countries, but there are weak points, in particular regarding the physical description of the earthquake phenomenon. Factors that could significantly influence the expected motion at the site, such as site effects and source characteristics like the duration of strong motion and directivity, are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their prediction reliability depends on the degree of knowledge of the source, wave propagation and soil parameters. We compare these two approaches at selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction phenomena, unusual for a magnitude of less than 6. We focus on sites liquefiable because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions. The looser the soil and the higher the liquefaction potential, the more suitable is the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long since been recognized as relevant to inducing liquefaction; unfortunately a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant in the choice of approach. The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billions
Exploiting Deterministic TPG for Path Delay Testing
Institute of Scientific and Technical Information of China (English)
李晓维
2000-01-01
Detection of path delay faults requires two-pattern tests. BIST technique provides a low-cost test solution. This paper proposes an approach to designing a cost-effective deterministic test pattern generator (TPG) for path delay testing. Given a set of pre-generated test-pairs with pre-determined fault coverage, a deterministic TPG is synthesized to apply the given test-pair set in a limited test time. To achieve this objective, configurable linear feedback shift register (LFSR) structures are used. Techniques are developed to synthesize such a TPG, which is used to generate an unordered deterministic test-pair set. The resulting TPG is very efficient in terms of hardware size and speed performance. Simulation of academic benchmark circuits has given good results when compared to alternative solutions.
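The configurable-LFSR idea rests on a standard Fibonacci LFSR as the base pattern generator. A minimal sketch, assuming a 4-bit register with feedback taps chosen to give a maximal-length sequence (the synthesis of reconfigurable feedback for a given test-pair set, the paper's actual contribution, is not shown):

```python
def lfsr_patterns(seed, taps, width, count):
    """Generate `count` test patterns from a Fibonacci LFSR.

    seed  -- nonzero initial register state (int)
    taps  -- bit positions XORed together to form the feedback bit
    width -- register width in bits
    """
    state = seed
    patterns = []
    for _ in range(count):
        patterns.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1            # XOR of the tapped bits
        state = ((state << 1) | fb) & ((1 << width) - 1)  # shift in feedback
    return patterns

# 4-bit LFSR with taps (3, 2): recurrence a[n+4] = a[n+1] ^ a[n],
# a primitive polynomial, so all 15 nonzero states are visited
pats = lfsr_patterns(0b0001, taps=(3, 2), width=4, count=15)
```

Because the feedback polynomial is primitive, the 15 patterns are exactly the 15 nonzero 4-bit states, each visited once per period.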
Deterministic mediated superdense coding with linear optics
Energy Technology Data Exchange (ETDEWEB)
Pavičić, Mladen, E-mail: mpavicic@physik.hu-berlin.de [Department of Physics—Nanooptics, Faculty of Mathematics and Natural Sciences, Humboldt University of Berlin (Germany); Center of Excellence for Advanced Materials and Sensing Devices (CEMS), Photonics and Quantum Optics Unit, Ruđer Bošković Institute, Zagreb (Croatia)
2016-02-22
We present a scheme of deterministic mediated superdense coding of entangled photon states employing only linear-optics elements. Ideally, we are able to deterministically transfer four messages by manipulating just one of the photons. Two degrees of freedom, polarization and spatial, are used. A new kind of source of heralded down-converted photon pairs conditioned on detection of another pair with an efficiency of 92% is proposed. Realistic probabilistic experimental verification of the scheme with such a source of preselected pairs is feasible with today's technology. We obtain the channel capacity of 1.78 bits for a full-fledged implementation. - Highlights: • Deterministic linear optics mediated superdense coding is proposed. • Two degrees of freedom, polarization and spatial, are used. • Heralded source of conditioned entangled photon pairs, 92% efficient, is proposed.
Stochastic versus deterministic systems of differential equations
Ladde, G S
2003-01-01
This peerless reference/text unfurls a unified and systematic study of the two types of mathematical models of dynamic processes (stochastic and deterministic) as placed in the context of systems of stochastic differential equations. Using the tools of variational comparison, generalized variation of constants, and probability distribution as its methodological backbone, Stochastic Versus Deterministic Systems of Differential Equations addresses questions relating to the need for a stochastic mathematical model and the between-model contrast that arises in the absence of random disturbances/fluctuations
Neutron noise computation using panda deterministic code
Energy Technology Data Exchange (ETDEWEB)
Humbert, Ph. [CEA Bruyeres le Chatel (France)
2003-07-01
PANDA is a general purpose discrete ordinates neutron transport code with deterministic and non deterministic applications. In this paper we consider the adaptation of PANDA to stochastic neutron counting problems. More specifically we consider the first two moments of the count number probability distribution. In a first part we will recall the equations for the single neutron and source induced count number moments with the corresponding expression for the excess of relative variance or Feynman function. In a second part we discuss the numerical solution of these inhomogeneous adjoint time dependent transport coupled equations with discrete ordinate methods. Finally, numerical applications are presented in the third part. (author)
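The excess of relative variance mentioned above has a simple sample estimator: the Feynman-Y function, the ratio of count variance to count mean minus one, which vanishes for Poisson statistics. A hedged sketch of that estimator (the gate-count data are invented for illustration; the paper computes these moments from adjoint transport equations, not from samples):

```python
def feynman_y(counts):
    """Excess relative variance from a list of per-gate neutron counts:
    Y = Var[N] / E[N] - 1, which is zero for purely Poissonian counts."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)  # sample variance
    return var / mean - 1.0

# Illustrative gate counts; this particular sample is sub-Poissonian
y = feynman_y([3, 4, 5, 4, 3, 5, 4, 4])
```

A positive Y signals correlated (e.g. fission-chain) multiplicity; the deterministic approach in the paper evaluates the same quantity from the first two moments of the count distribution.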
Optimal Deterministic Investment Strategies for Insurers
Directory of Open Access Journals (Sweden)
Ulrich Rieder
2013-11-01
Full Text Available We consider an insurance company whose risk reserve is given by a Brownian motion with drift and which is able to invest the money into a Black–Scholes financial market. As optimization criteria, we treat mean-variance problems, problems with other risk measures, exponential utility and the probability of ruin. Following recent research, we assume that investment strategies have to be deterministic. This leads to deterministic control problems, which are quite easy to solve. Moreover, it turns out that there are some interesting links between the optimal investment strategies of these problems. Finally, we also show that this approach works in the Lévy process framework.
Deterministic Methods for Filtering, part I: Mean-field Ensemble Kalman Filtering
Law, Kody J H; Tempone, Raul
2014-01-01
This paper provides a proof of convergence of the standard EnKF generalized to non-Gaussian state space models, based on the indistinguishability property of the joint distribution on the ensemble. A density-based deterministic approximation of the mean-field EnKF (MFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for d<2k. The fidelity of approximation of the true distribution is also established using an extension of total variation metric to random measures. This is limited by a Gaussian bias term arising from non-linearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.
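A perturbed-observation EnKF analysis step, the stochastic baseline that the mean-field/deterministic approximation is compared against, can be sketched as follows (a generic textbook update for a scalar observation, not the paper's PDE-based filter; the toy ensemble and noise level are illustrative):

```python
import random

def enkf_update(ensemble, H, y, r_var, seed=0):
    """One perturbed-observation EnKF analysis step, scalar observation.

    ensemble -- list of state vectors (lists of floats)
    H        -- observation vector; the observation operator is x -> H . x
    y        -- observed value
    r_var    -- observation noise variance
    """
    rng = random.Random(seed)
    n, d = len(ensemble), len(ensemble[0])
    mean = [sum(m[j] for m in ensemble) / n for j in range(d)]
    anoms = [[m[j] - mean[j] for j in range(d)] for m in ensemble]
    Hx = [sum(H[j] * m[j] for j in range(d)) for m in ensemble]
    Hmean = sum(Hx) / n
    Ha = [h - Hmean for h in Hx]
    # sample covariance terms C H^T and H C H^T
    CH = [sum(anoms[i][j] * Ha[i] for i in range(n)) / (n - 1) for j in range(d)]
    HCH = sum(a * a for a in Ha) / (n - 1)
    K = [c / (HCH + r_var) for c in CH]            # Kalman gain
    out = []
    for m, hx in zip(ensemble, Hx):
        yp = y + rng.gauss(0.0, r_var ** 0.5)      # perturbed observation
        out.append([m[j] + K[j] * (yp - hx) for j in range(d)])
    return out

ens = [[0.0], [1.0], [2.0], [3.0]]                 # prior ensemble, mean 1.5
post = enkf_update(ens, H=[1.0], y=2.0, r_var=0.1)
post_mean = sum(m[0] for m in post) / len(post)
```

With a small observation noise the analysis mean is pulled from the prior mean 1.5 toward the observation 2.0; the sampling noise of the perturbed observations is exactly what the deterministic mean-field variant removes.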
Deterministic doping and the exploration of spin qubits
Energy Technology Data Exchange (ETDEWEB)
Schenkel, T.; Weis, C. D.; Persaud, A. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Lo, C. C. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA 94720 (United States); London Centre for Nanotechnology (United Kingdom); Chakarov, I. [Global Foundries, Malta, NY 12020 (United States); Schneider, D. H. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States); Bokor, J. [Accelerator and Fusion Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA 94720 (United States)
2015-01-09
Deterministic doping by single ion implantation, the precise placement of individual dopant atoms into devices, is a path for the realization of quantum computer test structures where quantum bits (qubits) are based on electron and nuclear spins of donors or color centers. We present a donor - quantum dot type qubit architecture and discuss the use of medium and highly charged ions extracted from an Electron Beam Ion Trap/Source (EBIT/S) for deterministic doping. EBIT/S are attractive for the formation of qubit test structures due to the relatively low emittance of ion beams from an EBIT/S and due to the potential energy associated with the ions' charge state, which can aid single ion impact detection. Following ion implantation, dopant specific diffusion mechanisms during device processing affect the placement accuracy and coherence properties of donor spin qubits. For bismuth, range straggling is minimal but its relatively low solubility in silicon limits thermal budgets for the formation of qubit test structures.
A Gap Property of Deterministic Tree Languages
DEFF Research Database (Denmark)
Niwinski, Damian; Walukiewicz, Igor
2003-01-01
We show that a tree language recognized by a deterministic parity automaton is either hard for the co-Büchi level and therefore cannot be recognized by a weak alternating automaton, or is on a very low level in the hierarchy of weak alternating automata. A topological counterpart of this property...
The mathematical basis for deterministic quantum mechanics
Hooft, G. 't
2007-01-01
If there exists a classical, i.e. deterministic theory underlying quantum mechanics, an explanation must be found of the fact that the Hamiltonian, which is defined to be the operator that generates evolution in time, is bounded from below. The mechanism that can produce exactly such a constraint is
Deterministic Kalman filtering in a behavioral framework
Fagnani, F; Willems, JC
1997-01-01
The purpose of this paper is to obtain a deterministic version of the Kalman filtering equations. We will use a behavioral description of the plant, specifically, an image representation. The resulting algorithm requires a matrix spectral factorization. We also show that the filter can be implemented
DETERMINISTIC HOMOGENIZATION OF QUASILINEAR DAMPED HYPERBOLIC EQUATIONS
Institute of Scientific and Technical Information of China (English)
Gabriel Nguetseng; Hubert Nnang; Nils Svanstedt
2011-01-01
Deterministic homogenization is studied for quasilinear monotone hyperbolic problems with a linear damping term. It is shown by the sigma-convergence method that the sequence of solutions to a class of multi-scale highly oscillatory hyperbolic problems converges to the solution to a homogenized quasilinear hyperbolic problem.
DEFF Research Database (Denmark)
Nielsen, Mogens; Rozenberg, Grzegorz; Salomaa, Arto
1974-01-01
The use of nonterminals versus the use of homomorphisms of different kinds in the basic types of deterministic OL-systems is studied. A rather surprising result is that in some cases the use of nonterminals produces a comparatively low generative capacity, whereas in some other cases the use of n...
Méndez, Gonzalo P; Enos, Daniel; Moreira, José Luis; Alvaredo, Fátima; Oddó, David
2017-04-01
The patient was an 18-year-old man who developed nephrotic syndrome after a 'wheat spider' bite (Latrodectus mactans). Due to this atypical manifestation of latrodectism, a renal biopsy was performed showing minimal change disease. The nephrotic syndrome subsided after 1 week without specific treatment. This self-limited evolution suggests that the mechanism of podocyte damage was temporary and potentially mediated by a secondary mechanism of hypersensitivity or a direct effect of the α-latrotoxin. The patient did not show signs of relapse at subsequent checkups. This is the first reported case of nephrotic syndrome due to a minimal change lesion secondary to latrodectism.
From LTL and Limit-Deterministic Büchi Automata to Deterministic Parity Automata
Esparza, Javier; Křetínský, Jan; Raskin, Jean-François; Sickert, Salomon
2017-01-01
Controller synthesis for general linear temporal logic (LTL) objectives is a challenging task. The standard approach involves translating the LTL objective into a deterministic parity automaton (DPA) by means of the Safra-Piterman construction. One of the challenges is the size of the DPA, which often grows very fast in practice, and can reach double exponential size in the length of the LTL formula. In this paper we describe a single exponential translation from limit-deterministic Büchi a...
Influence of Deterministic Attachments for Large Unifying Hybrid Network Model
Institute of Scientific and Technical Information of China (English)
Anonymous
2011-01-01
The large unifying hybrid network model (LUHPM) introduces the deterministic mixing ratio f_d, on the basis of the harmonious unification hybrid preferential model, to describe the influence of deterministic attachment on the network topology characteristics,
Institute of Scientific and Technical Information of China (English)
Anonymous
2005-01-01
The optimality of a fuzzy logic alternative to the usual treatment of uncertainties in a scheduling system using fuzzy numbers is examined formally. Processing times and due dates are fuzzified and represented by fuzzy numbers. By introducing the necessity measure, we compare the fuzzy completion times of jobs with their fuzzy due dates to decide whether jobs are tardy. The objective is to minimize the number of tardy jobs. An efficient solution method for this problem is proposed. The deterministic counterpart of this single-machine scheduling problem is a special case of the fuzzy version.
Cellular non-deterministic automata and partial differential equations
Kohler, D.; Müller, J.; Wever, U.
2015-09-01
We define cellular non-deterministic automata (CNDA) in the spirit of non-deterministic automata theory. They are different from the well-known stochastic automata. We propose the concept of deterministic superautomata to analyze the dynamical behavior of a CNDA and show especially that a CNDA can be embedded in a deterministic cellular automaton. As an application we discuss a connection between certain partial differential equations and CNDA.
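The embedding of a CNDA into a deterministic cellular automaton can be illustrated with a power-set construction: each cell of the deterministic superautomaton tracks the set of states reachable under some resolution of the non-determinism. A sketch under that assumption (the local rule and configuration are invented for illustration, not taken from the paper):

```python
def powerset_step(config, ndelta):
    """One step of the deterministic superautomaton of a 1D CNDA.

    config -- list of frozensets: the set of possible states of each cell
    ndelta -- non-deterministic local rule (left, cell, right) -> set of states
    Periodic boundary conditions; the update is fully deterministic on sets.
    """
    n = len(config)
    new = []
    for i in range(n):
        possible = set()
        # union over every combination of currently possible neighbour states
        for l in config[(i - 1) % n]:
            for c in config[i]:
                for r in config[(i + 1) % n]:
                    possible |= ndelta(l, c, r)
        new.append(frozenset(possible))
    return new

# Illustrative non-deterministic rule: a cell may keep its state or copy
# its left neighbour, so "uncertainty" spreads to the right
ndelta = lambda l, c, r: {c, l}
start = [frozenset({1}), frozenset({0}), frozenset({0}), frozenset({0})]
step1 = powerset_step(start, ndelta)
```

After one step the two leftmost cells can each be 0 or 1, while cells not yet reached by the spreading choice remain determined, showing how the deterministic set-valued automaton simulates all runs of the CNDA at once.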
Peters, H.P.E.; Kar, NC van de; Wetzels, J.F.M.
2008-01-01
Minimal change nephropathy (MCNS) and focal segmental glomerulosclerosis (FSGS) are the main causes of the idiopathic nephrotic syndrome. MCNS usually responds to steroids and the long-term prognosis is generally good. However, some patients require prolonged treatment with immunosuppressive agents.
Metz, Roderik; van der Heijden, Geert J. M. G.; Verleisdonk, Egbert-Jan M. M.; Kolfschoten, Nicky; Verhofstad, Michiel H. J.; van der Werken, Christiaan
2011-01-01
Background: Complications of acute Achilles tendon rupture treatment are considered to negatively influence outcome, but the relevance of these effects is largely unknown. Purpose: The Achilles Tendon Total Rupture Score (ATRS) was used to determine level of disability in patients with minimally inv
Microscopy with a Deterministic Single Ion Source
Jacob, Georg; Wolf, Sebastian; Ulm, Stefan; Couturier, Luc; Dawkins, Samuel T; Poschinger, Ulrich G; Schmidt-Kaler, Ferdinand; Singer, Kilian
2015-01-01
We realize a single particle microscope by using deterministically extracted laser-cooled ⁴⁰Ca⁺ ions from a Paul trap as probe particles for transmission imaging. We demonstrate focusing of the ions with a resolution of 5.8 ± 1.0 nm and a minimum two-sample deviation of the beam position of 1.5 nm in the focal plane. The deterministic source, even when used in combination with an imperfect detector, gives rise to much higher signal-to-noise ratios as compared with conventional Poissonian sources. Gating of the detector signal by the extraction event suppresses dark counts by 6 orders of magnitude. We implement a Bayes experimental design approach to microscopy in order to maximize the gain in spatial information. We demonstrate this method by determining the position of a 1 μm circular hole structure to an accuracy of 2.7 nm using only 579 probe particles.
Bayesian Uncertainty Analyses Via Deterministic Model
Krzysztofowicz, R.
2001-05-01
Rational decision-making requires that the total uncertainty about a variate of interest (a predictand) be quantified in terms of a probability distribution, conditional on all available information and knowledge. Suppose the state-of-knowledge is embodied in a deterministic model, which is imperfect and outputs only an estimate of the predictand. Fundamentals are presented of three Bayesian approaches to producing a probability distribution of the predictand via any deterministic model. The Bayesian Processor of Output (BPO) quantifies the total uncertainty in terms of a posterior distribution, conditional on model output. The Bayesian Processor of Ensemble (BPE) quantifies the total uncertainty in terms of a posterior distribution, conditional on an ensemble of model output. The Bayesian Forecasting System (BFS) decomposes the total uncertainty into input uncertainty and model uncertainty, which are characterized independently and then integrated into a predictive distribution.
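The flavour of the Bayesian Processor of Output can be conveyed by the textbook normal-normal conjugate update, where the deterministic model's estimate plays the role of the observation (a simplified sketch, not Krzysztofowicz's exact processor; all numbers are illustrative):

```python
def bpo_normal(prior_mean, prior_var, model_output, like_var):
    """Posterior of the predictand given a deterministic model's estimate,
    under a normal prior and a normal likelihood centred on the output.

    like_var encodes how imperfect the deterministic model is: the larger
    it is, the less the posterior moves toward the model output.
    """
    w = prior_var / (prior_var + like_var)          # weight on the model
    post_mean = prior_mean + w * (model_output - prior_mean)
    post_var = prior_var * like_var / (prior_var + like_var)
    return post_mean, post_var

# Vague prior (variance 4), model estimate 2.0, model error variance 1.0
m, v = bpo_normal(prior_mean=0.0, prior_var=4.0, model_output=2.0, like_var=1.0)
```

The posterior mean lands between prior mean and model output, and the posterior variance is strictly smaller than both input variances, quantifying the total uncertainty the abstract refers to.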
Deterministic nonlinear systems a short course
Anishchenko, Vadim S; Strelkova, Galina I
2014-01-01
This text is a short yet complete course on nonlinear dynamics of deterministic systems. Conceived as a modular set of 15 concise lectures it reflects the many years of teaching experience by the authors. The lectures treat in turn the fundamental aspects of the theory of dynamical systems, aspects of stability and bifurcations, the theory of deterministic chaos and attractor dimensions, as well as the elements of the theory of Poincare recurrences.Particular attention is paid to the analysis of the generation of periodic, quasiperiodic and chaotic self-sustained oscillations and to the issue of synchronization in such systems. This book is aimed at graduate students and non-specialist researchers with a background in physics, applied mathematics and engineering wishing to enter this exciting field of research.
Advances in stochastic and deterministic global optimization
Zhigljavsky, Anatoly; Žilinskas, Julius
2016-01-01
Current research results in stochastic and deterministic global optimization including single and multiple objectives are explored and presented in this book by leading specialists from various fields. Contributions include applications to multidimensional data visualization, regression, survey calibration, inventory management, timetabling, chemical engineering, energy systems, and competitive facility location. Graduate students, researchers, and scientists in computer science, numerical analysis, optimization, and applied mathematics will be fascinated by the theoretical, computational, and application-oriented aspects of stochastic and deterministic global optimization explored in this book. This volume is dedicated to the 70th birthday of Antanas Žilinskas who is a leading world expert in global optimization. Professor Žilinskas's research has concentrated on studying models for the objective function, the development and implementation of efficient algorithms for global optimization with single and mu...
Dynamic optimization deterministic and stochastic models
Hinderer, Karl; Stieglitz, Michael
2016-01-01
This book explores discrete-time dynamic optimization and provides a detailed introduction to both deterministic and stochastic models. Covering problems with finite and infinite horizon, as well as Markov renewal programs, Bayesian control models and partially observable processes, the book focuses on the precise modelling of applications in a variety of areas, including operations research, computer science, mathematics, statistics, engineering, economics and finance. Dynamic Optimization is a carefully presented textbook which starts with discrete-time deterministic dynamic optimization problems, providing readers with the tools for sequential decision-making, before proceeding to the more complicated stochastic models. The authors present complete and simple proofs and illustrate the main results with numerous examples and exercises (without solutions). With relevant material covered in four appendices, this book is completely self-contained.
Deterministic Leader Election Among Disoriented Anonymous Sensors
Dieudonné, Yoann; Petit, Franck; Villain, Vincent
2012-01-01
We address the Leader Election (LE) problem in networks of anonymous sensors sharing no kind of common coordinate system. Leader Election is a fundamental symmetry-breaking problem in distributed computing. Its goal is to assign value 1 (leader) to one of the entities and value 0 (non-leader) to all others. In this paper, assuming n > 1 disoriented anonymous sensors, we provide a complete characterization of the sensors' positions that allow a leader to be elected deterministically, provided that all the sensors' positions are known by every sensor. More precisely, our contribution is twofold: First, assuming n anonymous sensors agreeing on a common handedness (chirality) of their own coordinate system, we provide a complete characterization of the sensors' positions that allow deterministic leader election. Second, we also provide such a complete characterization for sensors devoid of a common handedness. Both characterizations rely on a particular object from combinatorics on words, namely the Lyndon words.
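Both characterizations rest on Lyndon words. As a self-contained illustration (independent of the paper's actual encoding of sensor positions, which is not reproduced here), a nonempty string is a Lyndon word when it is strictly smaller, lexicographically, than every one of its proper suffixes, or equivalently when it is the unique minimum among its rotations:

```python
def is_lyndon(w: str) -> bool:
    """A nonempty string is a Lyndon word iff it is strictly smaller,
    lexicographically, than every one of its proper suffixes."""
    return len(w) > 0 and all(w < w[i:] for i in range(1, len(w)))

def is_lyndon_by_rotations(w: str) -> bool:
    """Equivalent test: all rotations are distinct and w is the least one."""
    rotations = {w[i:] + w[:i] for i in range(len(w))}
    return len(rotations) == len(w) and w == min(rotations)
```

For example, "aab" is a Lyndon word, while "aba" (not minimal among its rotations) and "aa" (a repetition) are not.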
Introducing Synchronisation in Deterministic Network Models
DEFF Research Database (Denmark)
Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.
2006-01-01
The paper addresses performance analysis for distributed real-time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented leading ... The suggested models are intended for incorporation into an existing analysis tool, a.k.a. CyNC, based on the MATLAB/Simulink framework for graphical system analysis and design ...
Deterministic definition of the capital risk
Szczypinska, Anna; Piotrowski, Edward W.
2008-01-01
In this paper we propose a look at the capital risk problem inspired by the deterministic problem of juggling, known from classical mechanics. We propose capital equivalents of Newton's laws of motion and, on this basis, determine the most secure form of credit repayment with regard to maximization of profit. We then extend Newton's laws to models in linear spaces of arbitrary dimension with the help of matrix rates of return. The matrix rates describe the evolution of multidimensional ...
Deterministic nanoassembly: Neutral or plasma route?
Levchenko, I.; Ostrikov, K.; Keidar, M.; Xu, S.
2006-07-01
It is shown that, owing to the selective delivery of ionic and neutral building blocks directly from the ionized gas phase and via surface migration, plasma environments offer a greater degree of deterministic control over the synthesis of ordered nanoassemblies than thermal chemical vapor deposition. The results of hybrid Monte Carlo (gas phase) and adatom self-organization (surface) simulations suggest that higher aspect ratios and better size and pattern uniformity of carbon nanotip microemitters can be achieved via the plasma route.
Deterministic Pattern Classifier Based on Genetic Programming
Institute of Scientific and Technical Information of China (English)
LI Jian-wu; LI Min-qiang; KOU Ji-song
2001-01-01
This paper proposes a supervised training-test method with Genetic Programming (GP) for pattern classification. Compared and contrasted with traditional deterministic pattern classifiers, this method works for both linearly separable and linearly non-separable problems. For specific training samples, it can formulate the expression of the discriminant function well without any prior knowledge. Finally, an experiment is conducted, and the result reveals that this system is effective and practical.
Schroedinger difference equation with deterministic ergodic potentials
Suto, Andras
2012-01-01
We review the recent developments in the theory of the one-dimensional tight-binding Schrödinger equation for a class of deterministic ergodic potentials. In the typical examples the potentials are generated by substitutional sequences, like the Fibonacci or the Thue-Morse sequence. We concentrate on rigorous results which will be explained rather than proved. The necessary mathematical background is provided in the text.
Kawtharani, Firas; Masrouha, Karim Z; Afeiche, Nadim
2016-01-01
Fluoroquinolones are widely used antibiotics; however, numerous side effects have been reported in published studies, including a spectrum of tendinopathies, affecting numerous anatomic sites. Several risk factors have been identified, including advanced age (>60 years), corticosteroid use, renal failure or dialysis, female sex, and nonobesity. We present the case of an elderly male with minimal change disease treated with glucocorticoids and acute kidney injury, who sustained spontaneous nontraumatic bilateral Achilles tendon tears 4 days after initiating ciprofloxacin.
Derivation Of Probabilistic Damage Definitions From High Fidelity Deterministic Computations
Energy Technology Data Exchange (ETDEWEB)
Leininger, L D
2004-10-26
This paper summarizes a methodology used by the Underground Analysis and Planning System (UGAPS) at Lawrence Livermore National Laboratory (LLNL) for the derivation of probabilistic damage curves for US Strategic Command (USSTRATCOM). UGAPS uses high-fidelity finite element and discrete element codes on massively parallel supercomputers to predict damage to underground structures from military interdiction scenarios. These deterministic calculations can be riddled with uncertainty, especially when intelligence, the basis for this modeling, is uncertain. The technique presented here attempts to account for this uncertainty by bounding the problem with reasonable cases and using those bounding cases as a statistical sample. Probability-of-damage curves that account for the uncertainty within the sample are computed and represented, enabling the war planner to make informed decisions. This work is flexible enough to incorporate any desired damage mechanism and can utilize the variety of finite element and discrete element codes within the national laboratory and government contractor community.
Van Nieuwenhuyse, Inneke; VANDAELE, Nico
2004-01-01
This paper describes a model for minimizing total costs in a single-product, deterministic flow shop with overlapping operations, in terms of the sublot size used. Three types of costs are considered: inventory holding costs, transportation costs, and the so-called "gap costs" which may result from the intermittent idling of machines between consecutive sublots.
Using EFDD as a Robust Technique for Deterministic Excitation in Operational Modal Analysis
DEFF Research Database (Denmark)
Jacobsen, Niels-Jørgen; Andersen, Palle; Brincker, Rune
2007-01-01
The algorithms used in Operational Modal Analysis assume that the input forces are stochastic in nature. While this is often the case for civil engineering structures, mechanical structures, in contrast, are inherently subject to deterministic forces due to the rotating parts in the machinery. Th...
Tsuchiya, Kazuo; Nishiyama, Takehiro; Tsujita, Katsuyoshi
2001-02-01
We have proposed an optimization method for a combinatorial optimization problem using replicator equations. To improve the solution further, a deterministic annealing algorithm may be applied. During the annealing process, bifurcations of equilibrium solutions will occur and affect the performance of the deterministic annealing algorithm. In this paper, the bifurcation structure of the proposed model is analyzed in detail. It is shown that only pitchfork bifurcations occur in the annealing process, and the solution obtained by the annealing is the branch uniquely connected with the uniform solution. It is also shown experimentally that in many cases, this solution corresponds to a good approximate solution of the optimization problem. Based on the results, a deterministic annealing algorithm is proposed and applied to the quadratic assignment problem to verify its performance.
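The pitchfork scenario described above can be illustrated with a deliberately minimal stand-in (a scalar mean-field iteration, not the paper's replicator model): above the critical temperature the annealing iteration stays on the uniform solution, and below it the iteration follows the nonzero branch that emerges from the pitchfork.

```python
import math

def anneal_fixed_point(T, m0=0.01, iters=2000):
    """Iterate the mean-field equation m = tanh(m / T) to a fixed point.
    For T > 1 the only equilibrium is the uniform one (m = 0); at T = 1 it
    loses stability through a pitchfork bifurcation, and for T < 1 the
    iteration settles on the nonzero branch connected to that bifurcation."""
    m = m0
    for _ in range(iters):
        m = math.tanh(m / T)
    return m

for T in (1.5, 0.75, 0.5):
    print(T, round(anneal_fixed_point(T), 4))
```

Tracking the fixed point while T is lowered mimics how a deterministic annealing schedule follows the branch uniquely connected with the uniform solution.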
Experimental Demonstration of Deterministic Entanglement Transformation
Institute of Scientific and Technical Information of China (English)
CHEN Geng; XU Jin-Shi; LI Chuan-Feng; GONG Ming; CHEN Lei; GUO Guang-Can
2009-01-01
According to Nielsen's theorem [Phys. Rev. Lett. 83 (1999) 436] and as a proof of principle, we demonstrate the deterministic transformation from a maximally entangled state to an arbitrary non-maximally entangled pure state with local operations and classical communication in an optical system. The output states are verified with a quantum tomography process. We further test the violation of a Bell-like inequality to demonstrate the quantum nonlocality of the states we generated. Our results may be useful in quantum information processing.
Deterministic Thinning of Finite Poisson Processes
Angel, Omer; Soo, Terry
2009-01-01
Let Π and Γ be homogeneous Poisson point processes on a fixed set of finite volume. We prove a necessary and sufficient condition on the two intensities for the existence of a coupling of Π and Γ such that Γ is a deterministic function of Π, and all points of Γ are points of Π. The condition exhibits a surprising lack of monotonicity. However, in the limit of large intensities, the coupling exists if and only if the expected number of points is at least one greater in Π than in Γ.
Enhanced piecewise regression based on deterministic annealing
Institute of Scientific and Technical Information of China (English)
ZHANG JiangShe; YANG YuQian; CHEN XiaoWen; ZHOU ChengHu
2008-01-01
Regression is one of the important problems in statistical learning theory. This paper proves the global convergence of the piecewise regression algorithm based on deterministic annealing and the continuity of the global minimum of the free energy w.r.t. temperature, and derives a new simplified formula to compute the initial critical temperature. A new enhanced piecewise regression algorithm using "migration of prototypes" is proposed to eliminate "empty cells" in the annealing process. Numerical experiments on several benchmark datasets show that the new algorithm can remove redundancy and improve the generalization of the piecewise regression model.
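As a concrete, heavily simplified sketch of these ingredients (the piecewise-constant special case of piecewise regression, with illustrative parameter values not taken from the paper): points receive Gibbs memberships in each prototype level at temperature T, the levels are refit by weighted averages, the temperature is cooled geometrically, and a small nudge after each cooling step lets the symmetric solution split once T drops below the critical temperature. The critical temperature printed below uses the classical deterministic-annealing value for clustering, 2 × the data variance, not the paper's new formula.

```python
import math

def da_two_level_fit(y, T0=1.0, Tmin=1e-4, cool=0.8, inner=30):
    """Deterministic annealing for a two-level piecewise-constant fit:
    Gibbs (soft) memberships at temperature T, mass-weighted refits,
    geometric cooling, and a tiny perturbation per cooling step so the
    symmetric solution can split below the critical temperature."""
    m = [sum(y) / len(y)] * 2            # both prototypes start at the mean
    T = T0
    while T > Tmin:
        m = [m[0] + 1e-3, m[1] - 1e-3]   # nudge so the pitchfork can open
        for _ in range(inner):
            w0 = []                       # membership of each point in level 0
            for yi in y:
                g0 = math.exp(-((yi - m[0]) ** 2) / T)
                g1 = math.exp(-((yi - m[1]) ** 2) / T)
                w0.append(g0 / (g0 + g1))
            s0 = sum(w0)
            m = [sum(w * yi for w, yi in zip(w0, y)) / s0,
                 sum((1 - w) * yi for w, yi in zip(w0, y)) / (len(y) - s0)]
        T *= cool
    return sorted(m)

# Two-level toy data; classical first critical temperature: 2 * Var(y)
y = [0.0] * 100 + [1.0] * 100
mean = sum(y) / len(y)
Tc = 2 * sum((yi - mean) ** 2 for yi in y) / len(y)
print("T_c =", Tc, " levels =", da_two_level_fit(y))
```

Above T_c the refits keep both prototypes glued to the data mean; below it the perturbation grows and the levels migrate to the two plateaus, which is the behaviour the convergence analysis above is about.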
Explicit Protocol for Deterministic Entanglement Concentration
Institute of Scientific and Technical Information of China (English)
GU Yong-Jian; GAO Peng; GUO Guang-Can
2005-01-01
We present an explicit protocol for the extraction of an EPR pair from two partially entangled pairs in a deterministic fashion via local operations and classical communication. This protocol consists of a local measurement described by a positive operator-valued measure (POVM), one-way classical communication, and a corresponding local unitary operation or a choice between the two pairs. We explicitly construct the required POVM by analyzing the doubly stochastic matrix connecting the initial and final states. Our scheme might be useful in future quantum communication.
Directory of Open Access Journals (Sweden)
Stacey A. Strong
2016-09-01
Background: We present an interesting case of bilateral retinitis pigmentosa (RP)-associated cystoid macular oedema that responded on two separate occasions to intravitreal injections of aflibercept, despite previously demonstrating only minimal response to intravitreal ranibizumab. This unique case would support a trial of intravitreal aflibercept for the treatment of RP-associated cystoid macular oedema. Case Presentation: A 38-year-old man from Dubai, United Arab Emirates, presented to the UK with a 3-year history of bilateral RP-associated cystoid macular oedema. Previous treatment with topical dorzolamide, oral acetazolamide, and intravitreal ranibizumab had demonstrated only minimal reduction of cystoid macular oedema. Following re-confirmation of the diagnosis by clinical examination and optical coherence tomography imaging, bilateral loading doses of intravitreal aflibercept were given. Central macular thickness reduced and the patient returned to Dubai. After 6 months, the patient was treated with intravitreal ranibizumab due to re-accumulation of fluid and the unavailability of aflibercept in Dubai. Only minimal reduction of central macular thickness was observed. Once available in Dubai, intravitreal aflibercept was administered bilaterally, with further reduction of central macular thickness observed. Visual acuity remained stable throughout. Conclusions: This is the first case report to demonstrate a reduction of RP-associated CMO following intravitreal aflibercept, despite inadequate response to ranibizumab on two separate occasions. Aflibercept may provide superior action to other anti-VEGF medications due to its intermediate size (115 kDa) and higher binding affinity. This is worthy of further investigation in a large prospective cohort over an extended time to determine the safety and efficacy of intravitreal aflibercept for use in this condition.
Drivelos, Spiros A; Danezis, Georgios P; Haroutounian, Serkos A; Georgiou, Constantinos A
2016-12-15
This study examines the trace and rare earth elemental (REE) fingerprint variations of PDO (Protected Designation of Origin) "Fava Santorinis" over three consecutive harvesting years (2011-2013). Classification of samples into harvesting years was studied by performing discriminant analysis (DA), k-nearest neighbours (k-NN), partial least squares (PLS) analysis and probabilistic neural networks (PNN) using rare earth elements and trace metals determined using ICP-MS. DA performed better than k-NN, producing 100% discrimination using trace elements and 79% using REEs. PLS was found to be superior to PNN, achieving 99% and 90% classification for trace elements and REEs, respectively, while PNN achieved 96% and 71% classification for trace elements and REEs, respectively. The information obtained using REEs did not enhance classification, indicating that REEs vary minimally per harvesting year, providing robust geographical origin discrimination. The results show that seasonal patterns can occur in the elemental composition of "Fava Santorinis", probably reflecting the seasonality of climate.
Deterministic prediction of surface wind speed variations
Drisya, G. V.; Kiplangat, D. C.; Asokan, K.; Satheesh Kumar, K.
2014-11-01
Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distribution of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h with a normalised RMSE (root mean square error) of less than 0.02 and reasonably accurate up to 3 h with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within practically tolerable margin of errors.
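A minimal sketch of such a deterministic, phase-space forecaster follows (using a chaotic logistic-map series as a stand-in for measured wind data; the embedding dimension, neighbour count, and series are illustrative assumptions, not the paper's setup): delay-embed the past record, find the past states nearest to the present state, and average their observed successors.

```python
import math

def logistic_series(n, x0=0.4, r=3.9):
    """Chaotic logistic-map series, a stand-in for a wind-speed record."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

def nn_forecast(history, m=3, k=5):
    """One-step deterministic forecast: embed the series in m-dimensional
    delay space, find the k past states nearest to the present state,
    and average their successors."""
    cur = history[-m:]
    cands = []
    for j in range(m - 1, len(history) - 1):
        d = math.dist(history[j - m + 1:j + 1], cur)
        cands.append((d, history[j + 1]))
    cands.sort(key=lambda t: t[0])
    return sum(succ for _, succ in cands[:k]) / k

# One-step-ahead evaluation over a held-out stretch of the series
series = logistic_series(4200)
sq_errs = []
for t in range(4000, 4200):
    pred = nn_forecast(series[:t])
    sq_errs.append((pred - series[t]) ** 2)
rmse = math.sqrt(sum(sq_errs) / len(sq_errs))
print("one-step RMSE:", round(rmse, 4))
```

Because successors of nearby states on a chaotic attractor stay close over short horizons, the one-step error here is small despite the series being aperiodic; the same idea degrades gracefully as the horizon grows, as the abstract reports for real wind records.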
Deterministic Polynomial Factoring and Association Schemes
Arora, Manuel; Karpinski, Marek; Saxena, Nitin
2012-01-01
The problem of finding a nontrivial factor of a polynomial f(x) over a finite field F_q has many known efficient, but randomized, algorithms. The deterministic complexity of this problem is a famous open question even assuming the generalized Riemann hypothesis (GRH). In this work we improve the state of the art by focusing on prime degree polynomials; let n be the degree. If (n-1) has a `large' r-smooth divisor s, then we find a nontrivial factor of f(x) in deterministic poly(n^r,log q) time; assuming GRH and that s > sqrt{n/(2^r)}. Thus, for r = O(1) our algorithm is polynomial time. Further, for r > loglog n there are infinitely many prime degrees n for which our algorithm is applicable and better than the best known; assuming GRH. Our methods build on the algebraic-combinatorial framework of m-schemes initiated by Ivanyos, Karpinski and Saxena (ISSAC 2009). We show that the m-scheme on n points, implicitly appearing in our factoring algorithm, has an exceptional structure; leading us to the improved time ...
A mathematical theory for deterministic quantum mechanics
Energy Technology Data Exchange (ETDEWEB)
't Hooft, Gerard [Institute for Theoretical Physics, Utrecht University (Netherlands); Spinoza Institute, Postbox 80.195, 3508 TD Utrecht (Netherlands)
2007-05-15
Classical, i.e. deterministic theories underlying quantum mechanics are considered, and it is shown how an apparent quantum mechanical Hamiltonian can be defined in such theories, being the operator that generates evolution in time. It includes various types of interactions. An explanation must be found for the fact that, in the real world, this Hamiltonian is bounded from below. The mechanism that can produce exactly such a constraint is identified in this paper. It is the fact that not all classical data are registered in the quantum description. Large sets of values of these data are assumed to be indistinguishable, forming equivalence classes. It is argued that this should be attributed to information loss, such as what one might suspect to happen during the formation and annihilation of virtual black holes. The nature of the equivalence classes follows from the positivity of the Hamiltonian. Our world is assumed to consist of a very large number of subsystems that may be regarded as approximately independent, or weakly interacting with one another. As long as two (or more) sectors of our world are treated as being independent, they all must be demanded to be restricted to positive energy states only. What follows from these considerations is a unique definition of energy in the quantum system in terms of the periodicity of the limit cycles of the deterministic model.
Deterministic Aided STAP for Target Detection in Heterogeneous Situations
Directory of Open Access Journals (Sweden)
J.-F. Degurse
2013-01-01
Classical space-time adaptive processing (STAP) detectors are strongly limited when facing highly heterogeneous environments. Indeed, in this case, representative target-free data are no longer available. Single-dataset algorithms, such as the MLED algorithm, have proved their efficiency in overcoming this problem by working only on primary data. These methods are based on the APES algorithm, which removes the useful signal from the covariance matrix. However, a small part of the clutter signal is also removed from the covariance matrix in this operation. Consequently, a degradation of clutter rejection performance is observed. We propose two algorithms that use deterministic aided STAP to overcome this issue of the single-dataset APES method. The results on realistic simulated data and real data show that these methods outperform traditional single-dataset methods in detection and in clutter rejection.
Analysis of deterministic cyclic gene regulatory network models with delays
Ahsen, Mehmet Eren; Niculescu, Silviu-Iulian
2015-01-01
This brief examines a deterministic, ODE-based model for gene regulatory networks (GRN) that incorporates nonlinearities and time-delayed feedback. An introductory chapter provides some insights into molecular biology and GRNs. The mathematical tools necessary for studying the GRN model are then reviewed, in particular Hill functions and Schwarzian derivatives. One chapter is devoted to the analysis of GRNs under negative feedback with time delays, and a special case of a homogeneous GRN is considered. Asymptotic stability analysis of GRNs under positive feedback is then considered in a separate chapter, in which conditions leading to bi-stability are derived. Graduate and advanced undergraduate students and researchers in control engineering, applied mathematics, systems biology and synthetic biology will find this brief to be a clear and concise introduction to the modeling and analysis of GRNs.
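A toy instance of such a cyclic GRN model (hypothetical parameters, no delay, and a low Hill coefficient n = 2 under which the unique equilibrium is asymptotically stable) can be simulated directly:

```python
def simulate_cyclic_grn(steps=20000, dt=0.01):
    """Euler simulation of a 3-gene cyclic repression network,
    dx_i/dt = 1 / (1 + x_{i-1}^n) - x_i, with Hill coefficient n = 2.
    Illustrative parameters only, not taken from the brief; for this
    weak repression the unique equilibrium x* (root of x^3 + x = 1,
    so x* ~ 0.6823) attracts all three concentrations."""
    n = 2
    x = [0.1, 0.2, 0.3]
    for _ in range(steps):
        # simultaneous Euler update; x[i - 1] with i = 0 wraps to x[2] (ring)
        x = [xi + dt * (1.0 / (1.0 + x[i - 1] ** n) - xi)
             for i, xi in enumerate(x)]
    return x

print([round(v, 4) for v in simulate_cyclic_grn()])
```

Raising the Hill coefficient steepens the repression nonlinearity and can destabilize this equilibrium into oscillations, which is exactly the kind of stability boundary the analysis in the brief characterizes.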
Mixed deterministic statistical modelling of regional ozone air pollution
Kalenderski, Stoitchko Dimitrov
2011-03-17
We develop a physically motivated statistical model for regional ozone air pollution by separating the ground-level pollutant concentration field into three components, namely: transport, local production, and a large-scale mean trend mostly dominated by emission rates. The model is novel in the field of environmental spatial statistics in that it is a combined deterministic-statistical model, which gives a new perspective to the modelling of air pollution. The model is presented in a Bayesian hierarchical formalism and explicitly accounts for advection of pollutants, using the advection equation. We apply the model to a specific case of regional ozone pollution: the Lower Fraser Valley of British Columbia, Canada. As a predictive tool, we demonstrate that the model vastly outperforms existing, simpler modelling approaches. Our study highlights the importance of simultaneously considering different aspects of an air pollution problem, as well as taking into account the physical bases that govern the processes of interest. © 2011 John Wiley & Sons, Ltd.
Linear embedding of free energy minimization
Moussa, Jonathan E
2016-01-01
Exact free energy minimization is a convex optimization problem that is usually approximated with stochastic sampling methods. Deterministic approximations have been less successful because many desirable properties have been difficult to attain. Such properties include the preservation of convexity, lower bounds on free energy, and applicability to systems without subsystem structure. We satisfy all of these properties by embedding free energy minimization into a linear program over energy-resolved expectation values. Numerical results on small systems are encouraging, but a lack of size consistency necessitates further development for large systems.
Setty, Pradeep; Volkov, Andrey; Richards, Boyd; Barrett, Ryan
2015-01-01
Biventricular hydrocephalus caused by a giant basilar apex aneurysm (GBAA) is a rare finding that presents unique and challenging treatment decisions. We report a case of a GBAA causing life-threatening biventricular hydrocephalus in which both the aneurysm and the hydrocephalus were given definitive treatment through a staged, minimally invasive approach. An obtunded 82-year-old male was found to have biventricular hydrocephalus caused by an unruptured GBAA obstructing the foramina of Monro. The patient was treated via a staged, minimally invasive technique that first involved endoscopic fenestration of the septum pellucidum to create communication between the lateral ventricles. A programmable ventriculo-peritoneal shunt was then placed with a high-pressure setting. The patient was then loaded with dual anti-platelet therapy prior to undergoing endovascular coiling of the GBAA with adjacent stenting of the posterior cerebral artery. He remained on dual anti-platelet therapy, and the shunt setting was lowered at the bedside to treat the hydrocephalus. At 6-month follow-up, the patient had returned to his cognitive baseline, speaking fluently and appropriately. Biventricular hydrocephalus caused by a GBAA can successfully be treated in a minimally invasive fashion utilizing a combination of endoscopy and endovascular therapy, even when stent-assisted coiling is needed.
Energy Technology Data Exchange (ETDEWEB)
Chu, Yi-Zen [Center for Particle Cosmology, Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)
2014-09-15
Motivated by the desire to understand the causal structure of physical signals produced in curved spacetimes – particularly around black holes – we show how, for certain classes of geometries, one might obtain its retarded or advanced minimally coupled massless scalar Green's function by using the corresponding Green's functions in the higher dimensional Minkowski spacetime where it is embedded. Analogous statements hold for certain classes of curved Riemannian spaces, with positive definite metrics, which may be embedded in higher dimensional Euclidean spaces. The general formula is applied to (d ≥ 2)-dimensional de Sitter spacetime, and the scalar Green's function is demonstrated to be sourced by a line emanating infinitesimally close to the origin of the ambient (d + 1)-dimensional Minkowski spacetime and piercing orthogonally through the de Sitter hyperboloids of all finite sizes. This method does not require solving the de Sitter wave equation directly. Only the zero mode solution to an ordinary differential equation, the “wave equation” perpendicular to the hyperboloid – followed by a one-dimensional integral – needs to be evaluated. A topological obstruction to the general construction is also discussed by utilizing it to derive a generalized Green's function of the Laplacian on the (d ≥ 2)-dimensional sphere.
Kiourkos, S
1999-01-01
One of the potentially accessible decay modes of the Higgs boson in the mass region $100 < m_H < 180$ GeV is the $H^0 \rightarrow Z^0 \gamma$ channel. The work presented in this note examines the Standard Model and Minimal Supersymmetric Standard Model predictions for the observability of this channel using particle-level simulation as well as the ATLAS fast simulation (ATLFAST). It compares present estimates for the signal observability with previously reported ones in \cite{unal}, specifying the changes arising from the assumed energy of the colliding protons and the improvements in the treatment of theoretical predictions. With the present estimates, the expected significance for the SM Higgs does not exceed, in terms of $S/\sqrt{B}$, 1.5 $\sigma$ (including $Z^0 \rightarrow e^+ e^-$ and $Z^0 \rightarrow \mu^+ \mu^-$) for an integrated luminosity of $10^5$ pb$^{-1}$, therefore not favouring this channel for SM Higgs searches. Comparable discovery potential is expected at most for the MSSM $...
Directory of Open Access Journals (Sweden)
Samuel Kwame Ansah
2014-07-01
This research aims at ascertaining appropriate construction designs and techniques that could be adopted to minimize excessive heat gains in buildings. A random sampling technique was used to select one hundred (100) domestic buildings in each of the three densely populated suburbs considered within the Cape Coast Metropolis in Ghana. In total, three hundred (300) buildings were used as the sample for this study. Structured interviews and observation were used as the main research methods to obtain the necessary data for the study objectives. The results show that appropriate construction design methods and techniques were not adopted in the construction of almost all the buildings investigated. It was also realized that the majority of the occupants (96%) used electric fans and air conditioners to reduce the amount of heat gain in their rooms. The study suggests that shading techniques such as screens on walls, fixed sun breakers and attached canopies should be encouraged in the design and construction of buildings. The study also suggests that all buildings yet to be constructed should be positioned with their longest walls facing north and south in order to reduce intense morning and evening sun entering the building, with more window openings accommodated in both of the longest walls to allow for cross ventilation.
Deterministic, Nanoscale Fabrication of Mesoscale Objects
Energy Technology Data Exchange (ETDEWEB)
Jr., R M; Gilmer, J; Rubenchik, A; Shirk, M
2004-12-08
Neither LLNL nor any other organization has the capability to perform deterministic fabrication of mm-sized objects with arbitrary, µm-sized, 3-D features and with 100-nm-scale accuracy and smoothness. This is particularly true for materials such as high explosives and low-density aerogels, as well as materials such as diamond and vanadium. The motivation for this project was to investigate the physics and chemistry that control the interactions of solid surfaces with laser beams and ion beams, with a view towards their applicability to the desired deterministic fabrication processes. As part of this LDRD project, one of our goals was to advance the state of the art for experimental work, but, in order ultimately to create a deterministic capability for such precision micromachining, another goal was to form a new modeling/simulation capability that could also extend the state of the art in this field. We have achieved both goals. In this project, we have, for the first time, combined a 1-D hydrocode (HYADES) with a 3-D molecular dynamics simulator (MDCASK) in our modeling studies. In FY02 and FY03, we investigated the ablation/surface-modification processes that occur on copper, gold, and nickel substrates with the use of sub-ps laser pulses. In FY04, we investigated laser ablation of carbon, including laser-enhanced chemical reaction on the carbon surface for both vitreous carbon and carbon aerogels. Both experimental and modeling results are presented in the report that follows. The immediate impact of our investigation was a much better understanding of the chemical and physical processes that ensue when solid materials are exposed to femtosecond laser pulses. More broadly, we have better positioned LLNL to design a cluster tool for fabricating mesoscale objects utilizing laser pulses and ion beams, as well as more traditional machining/manufacturing techniques, for applications such as components in NIF
Gollapalli, Rajesh Babu; Naiman, Ana Nusa; Merry, David
2015-07-01
Cervical necrotizing fasciitis secondary to epiglottitis is rare. The standard treatment of this severe condition has long been early and aggressive surgical debridement and adequate antimicrobial therapy. We report the case of an immunocompetent 59-year-old man who developed cervical necrotizing fasciitis as a complication of acute epiglottitis. We were able to successfully manage this patient with conservative surgical treatment (incision and drainage, in addition to antibiotic therapy) that did not involve aggressive debridement.
Deterministic aspects of nonlinear modulation instability
van Groesen, E; Karjanto, N
2011-01-01
Different from statistical considerations on stochastic wave fields, this paper aims to contribute to the understanding of (some of) the underlying physical phenomena that may give rise to the occurrence of extreme, rogue, waves. To that end a specific deterministic wavefield is investigated that develops extreme waves from a uniform background. For this explicitly described nonlinear extension of the Benjamin-Feir instability, the soliton on finite background of the NLS equation, the global down-stream evolving distortions, the time signal of the extreme waves, and the local evolution near the extreme position are investigated. As part of the search for conditions to obtain extreme waves, we show that the extreme wave has a specific optimization property for the physical energy, and comment on the possible validity for more realistic situations.
Deterministic phase slips in mesoscopic superconducting rings
Petković, I.; Lollo, A.; Glazman, L. I.; Harris, J. G. E.
2016-11-01
The properties of one-dimensional superconductors are strongly influenced by topological fluctuations of the order parameter, known as phase slips, which cause the decay of persistent current in superconducting rings and the appearance of resistance in superconducting wires. Despite extensive work, quantitative studies of phase slips have been limited by uncertainty regarding the order parameter's free-energy landscape. Here we show detailed agreement between measurements of the persistent current in isolated flux-biased rings and Ginzburg-Landau theory over a wide range of temperature, magnetic field and ring size; this agreement provides a quantitative picture of the free-energy landscape. We also demonstrate that phase slips occur deterministically as the barrier separating two competing order parameter configurations vanishes. These results will enable studies of quantum and thermal phase slips in a well-characterized system and will provide access to outstanding questions regarding the nature of one-dimensional superconductivity.
Primality deterministic and primality probabilistic tests
Directory of Open Access Journals (Sweden)
Alfredo Rizzi
2007-10-01
In this paper the author comments on the importance of prime numbers in mathematics and in cryptography, recalling the seminal researches of Euler, Fermat, Legendre, Riemann and other scholars. There are many expressions that generate prime numbers; among them, Mersenne primes have interesting properties. There are also many conjectures that still have to be proved or rejected. Primality deterministic tests are algorithms that establish with certainty whether a number is prime or not. They are not applicable in many practical situations, for instance in public-key cryptography, because the computation time would be too long. Primality probabilistic tests allow one to test the null hypothesis that the number is prime. The paper comments on the most important statistical tests.
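A minimal Python sketch of the two kinds of test the abstract contrasts: trial division as a deterministic test, and Miller-Rabin as a probabilistic test of the null hypothesis "n is prime". The function names and the round count are illustrative choices, not from the paper.

```python
import random

def is_prime_deterministic(n: int) -> bool:
    """Trial division: deterministic, but O(sqrt(n)) -- too slow for crypto-sized n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_probably_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin: a composite n survives one round with probability <= 1/4,
    so 'rounds' independent rounds leave an error probability <= 4**(-rounds)."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

# A Mersenne prime, of the kind mentioned in the abstract: 2**31 - 1
print(is_probably_prime(2**31 - 1))  # True
```

The probabilistic test hinges on fast modular exponentiation (Python's three-argument `pow`), which is why it scales to the key sizes used in public-key cryptography while trial division does not.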
Anisotropic permeability in deterministic lateral displacement arrays
Vernekar, Rohan; Loutherback, Kevin; Morton, Keith; Inglis, David
2016-01-01
We investigate anisotropic permeability of microfluidic deterministic lateral displacement (DLD) arrays. A DLD array can achieve high-resolution bimodal size-based separation of micro-particles, including bioparticles such as cells. Correct operation requires that the fluid flow remains at a fixed angle with respect to the periodic obstacle array. We show via experiments and lattice-Boltzmann simulations that subtle array design features cause anisotropic permeability. The anisotropy, which indicates the array's intrinsic tendency to induce an undesired lateral pressure gradient, can lead to off-axis flows and therefore local changes in the critical separation size. Thus, particle trajectories can become unpredictable and the device useless for the desired separation duty. We show that for circular posts the rotated-square layout, unlike the parallelogram layout, does not suffer from anisotropy and is the preferred geometry. Furthermore, anisotropy becomes severe for arrays with unequal axial and lateral gaps...
Deterministic polarization chaos from a laser diode
Virte, Martin; Thienpont, Hugo; Sciamanna, Marc
2014-01-01
Fifty years after the invention of the laser diode, and forty years after the report of the butterfly effect, i.e. the unpredictability of deterministic chaos, it is commonly said that a laser diode behaves like a damped nonlinear oscillator, so that no chaos can be generated without additional forcing or parameter modulation. Here we report the first counter-example of a free-running laser diode generating chaos. The underlying physics is a nonlinear coupling between two elliptically polarized modes in a vertical-cavity surface-emitting laser. We identify chaos in experimental time series and show theoretically the bifurcations leading to single- and double-scroll attractors with characteristics similar to Lorenz chaos. The reported polarization chaos resembles at first sight a noise-driven mode hopping but shows opposite statistical properties. Our findings open up new research areas that combine the high-speed performance of microcavity lasers with controllable and integrated sources of optical chaos.
Deterministic remote preparation via the Brown state
Ma, Song-Ya; Gao, Cong; Zhang, Pei; Qu, Zhi-Guo
2017-04-01
We propose two deterministic remote state preparation (DRSP) schemes by using the Brown state as the entangled channel. Firstly, the remote preparation of an arbitrary two-qubit state is considered. It is worth mentioning that the construction of measurement bases plays a key role in our scheme. Then, the remote preparation of an arbitrary three-qubit state is investigated. The proposed schemes can be extended to controlled remote state preparation (CRSP) with unit success probabilities. At variance with the existing CRSP schemes via the Brown state, the derived schemes have no restriction on the coefficients, while the success probabilities can reach 100%. It means the success probabilities are greatly improved. Moreover, we pay attention to the DRSP in noisy environments under two important decoherence models, the amplitude-damping noise and phase-damping noise.
Mechanics From Newton's Laws to Deterministic Chaos
Scheck, Florian
2010-01-01
This book covers all topics in mechanics from elementary Newtonian mechanics, the principles of canonical mechanics and rigid body mechanics to relativistic mechanics and nonlinear dynamics. It was among the first textbooks to include dynamical systems and deterministic chaos in due detail. As compared to the previous editions the present fifth edition is updated and revised with more explanations, additional examples and sections on Noether's theorem. Symmetries and invariance principles, the basic geometric aspects of mechanics as well as elements of continuum mechanics also play an important role. The book will enable the reader to develop general principles from which equations of motion follow, to understand the importance of canonical mechanics and of symmetries as a basis for quantum mechanics, and to get practice in using general theoretical concepts and tools that are essential for all branches of physics. The book contains more than 120 problems with complete solutions, as well as some practical exa...
Piazza, Federico; Schücker, Thomas
2016-04-01
The minimal requirement for cosmography—a non-dynamical description of the universe—is a prescription for calculating null geodesics, and time-like geodesics as a function of their proper time. In this paper, we consider the most general linear connection compatible with homogeneity and isotropy, but not necessarily with a metric. A light-cone structure is assigned by choosing a set of geodesics representing light rays. This defines a "scale factor" and a local notion of distance, as that travelled by light in a given proper time interval. We find that the velocities and relativistic energies of free-falling bodies decrease in time as a consequence of cosmic expansion, but at a rate that can be different than that dictated by the usual metric framework. By extrapolating this behavior to photons' redshift, we find that the latter is in principle independent of the "scale factor". Interestingly, redshift-distance relations and other standard geometric observables are modified in this extended framework, in a way that could be experimentally tested. An extremely tight constraint on the model, however, is represented by the blackbody-ness of the cosmic microwave background. Finally, as a check, we also consider the effects of a non-metric connection in a different set-up, namely, that of a static, spherically symmetric spacetime.
Deterministic seismic hazard macrozonation of India
Indian Academy of Sciences (India)
Sreevalsa Kolathayar; T G Sitharam; K S Vipin
2012-10-01
Earthquakes are known to have occurred in the Indian subcontinent from ancient times. This paper presents the results of seismic hazard analysis of India (6°–38°N and 68°–98°E) based on the deterministic approach using the latest seismicity data (up to 2010). The hazard analysis was done using two different source models (linear sources and point sources) and 12 well-recognized attenuation relations considering the varied tectonic provinces in the region. The earthquake data obtained from different sources were homogenized and declustered, and a total of 27,146 earthquakes of moment magnitude 4 and above were listed in the study area. The seismotectonic map of the study area was prepared by considering the faults, lineaments and shear zones which are associated with earthquakes of magnitude 4 and above. A new program was developed in MATLAB for smoothing of the point sources. For assessing the seismic hazard, the study area was divided into small grids of size 0.1° × 0.1° (approximately 10 × 10 km), and the hazard parameters were calculated at the center of each of these grid cells by considering all the seismic sources within a radius of 300 to 400 km. Rock-level peak horizontal acceleration (PHA) and spectral accelerations for periods of 0.1 and 1 s have been calculated for all the grid points with a deterministic approach using a code written in MATLAB. Epistemic uncertainty in hazard definition has been tackled within a logic-tree framework considering two types of sources and three attenuation models for each grid point. The hazard evaluation without the logic-tree approach has also been done for comparison of the results. Contour maps showing the spatial variation of hazard values are presented in the paper.
Fielding, Louis C
2017-01-01
Background: Sacroiliac joint (SIJ) disease is increasingly recognized as a common source of low back pain. Arthrodesis of the SIJ has been shown to be clinically effective for this condition. In the last decade, minimally invasive (MI) SIJ fusion procedures have been developed to achieve the clinical effectiveness of open fusion procedures, with lower operative morbidity and faster recovery. However, SIJ fusion patients occasionally present with symptomatic nonunions necessitating revision. Methods: Four patients who previously underwent MI SIJ arthrodesis returned with complaints of SIJ-related pain confirmed by examination. Radiographic assessment showed lucency after fixation with triangular titanium interference implants. Loose implants were removed, and the patients were revised with a different MI SIJ fusion system that utilizes decortication, placement of autograft and graft extender, and fixation with cannulated threaded implants. The revision implants were placed in a more ventral-to-dorsal and caudal-to-cranial trajectory, perpendicularly through the articular portion of the SIJ. Results: The triangular implants typically exhibited haloing lucency on radiographs and CT scans, and most were easily removed using the manufacturer's instrumentation; only one implant was left in place as it was well fixed. The removed implants exhibited little or no bony ongrowth. Decortication of the SIJ was performed, followed by placement of local autograft and fixation with 12.5 mm or 14.5 mm diameter implants, as required. A more ventral-to-dorsal and caudal-to-cranial trajectory was established for the revision implants through the center of the articular region of the joint in order to maximize implant purchase in residual bone stock and achieve bony fusion through the articular portion of the SIJ. By six to twelve months post-revision, the presenting symptoms were successfully resolved in all patients. Conclusions: Patients demonstrating
Jones, Jo; Jackson, Janet; Tudor, Terry; Bates, Margaret
2012-09-01
Strategies for enhancing environmental management are a key focus for the government in the UK. Using a manufacturing company from the construction sector as a case study, this paper evaluates selected interventionist techniques, including environmental teams, awareness raising and staff training to improve environmental performance. The study employed a range of methods including questionnaire surveys and audits of energy consumption and generation of waste to examine the outcomes of the selected techniques. The results suggest that initially environmental management was not a focus for either the employees or the company. However, as a result of employing the techniques, the company was able to reduce energy consumption, increase recycling rates and achieve costs savings in excess of £132,000.
Arveson, W
1995-01-01
It is known that every semigroup of normal completely positive maps of a von Neumann algebra can be "dilated" in a particular way to an E_0-semigroup acting on a larger von Neumann algebra. The E_0-semigroup is not uniquely determined by the completely positive semigroup; however, it is unique (up to conjugacy) provided that certain conditions of minimality are met. Minimality is a subtle property, and it is often not obvious whether it is satisfied for specific examples, even in the simplest case where the von Neumann algebra is $\mathcal{B}(H)$. In this paper we clarify these issues by giving a new characterization of minimality in terms of projective cocycles and their limits. Our results are valid for semigroups of endomorphisms acting on arbitrary von Neumann algebras with separable predual.
Safety Verification of Piecewise-Deterministic Markov Processes
DEFF Research Database (Denmark)
Wisniewski, Rafael; Sloth, Christoffer; Bujorianu, Manuela
2016-01-01
We consider the safety problem of piecewise-deterministic Markov processes (PDMP). These are systems that have deterministic dynamics and stochastic jumps, where both the time and the destination of the jumps are stochastic. Specifically, we solve a p-safety problem, where we identify the set...
Recognition of deterministic ETOL languages in logarithmic space
DEFF Research Database (Denmark)
Jones, Neil D.; Skyum, Sven
1977-01-01
It is shown that if G is a deterministic ETOL system, there is a nondeterministic log space algorithm to determine membership in L(G). Consequently, every deterministic ETOL language is recognizable in polynomial time. As a corollary, all context-free languages of finite index, and all Indian par...
Use of deterministic models in sports and exercise biomechanics research.
Chow, John W; Knudson, Duane V
2011-09-01
A deterministic model is a modeling paradigm that determines the relationships between a movement outcome measure and the biomechanical factors that produce such a measure. This review provides an overview of the use of deterministic models in biomechanics research, a historical summary of this research, and an analysis of the advantages and disadvantages of using deterministic models. The deterministic model approach has been utilized in technique analysis over the last three decades, especially in swimming, athletics field events, and gymnastics. In addition to their applications in sports and exercise biomechanics, deterministic models have been applied successfully in research on selected motor skills. The advantage of the deterministic model approach is that it helps to avoid selecting performance or injury variables arbitrarily and to provide the necessary theoretical basis for examining the relative importance of various factors that influence the outcome of a movement task. Several disadvantages of deterministic models, such as the use of subjective measures for the performance outcome, were discussed. It is recommended that exercise and sports biomechanics scholars should consider using deterministic models to help identify meaningful dependent variables in their studies.
The degree of irreversibility in deterministic finite automata
DEFF Research Database (Denmark)
Axelsen, Holger Bock; Holzer, Markus; Kutrib, Martin
2016-01-01
Recently, Holzer et al. gave a method to decide whether the language accepted by a given deterministic finite automaton (DFA) can also be accepted by some reversible deterministic finite automaton (REV-DFA), and eventually proved NL-completeness. Here, we show that the corresponding problem for n...
Xu, Jiang; Mori, Naofumi; Kawashima, Shuichi
2015-12-01
Continuing the work of [18], we are concerned with the Timoshenko system in the case of non-equal wave speeds, which admits a dissipative structure of regularity-loss type. Firstly, with a modification of the a priori estimates in [18], we construct global solutions to the Timoshenko system for data in the Besov space with regularity s = 3/2. Owing to the weaker dissipative mechanism, extra regularity beyond that needed for global-in-time existence is usually imposed to obtain optimal decay rates of classical solutions, so it is almost impossible to obtain optimal decay rates in the critical space. To overcome this outstanding difficulty, we develop a new frequency-localization time-decay inequality, which captures the information related to integrability at the high-frequency part. Furthermore, by the energy approach in terms of a high-frequency and low-frequency decomposition, we show the optimal decay rate for the Timoshenko system in critical Besov spaces, which greatly improves previous works.
FP/FIFO scheduling: coexistence of deterministic and probabilistic QoS guarantees
Directory of Open Access Journals (Sweden)
Pascale Minet
2007-01-01
In this paper, we focus on applications having quantitative QoS (Quality of Service) requirements on their end-to-end response time (or jitter). We propose a solution allowing the coexistence of two types of quantitative QoS guarantees, deterministic and probabilistic, while providing high resource utilization. Our solution combines the advantages of the deterministic approach and the probabilistic one. The deterministic approach is based on a worst-case analysis. The probabilistic approach uses a mathematical model to obtain the probability that the response time exceeds a given value. We assume that flows are scheduled according to non-preemptive FP/FIFO: the packet with the highest fixed priority is scheduled first, and if two packets share the same priority, the packet that arrived first is scheduled first. We make no particular assumption concerning the flow priorities or the nature of the QoS guarantee requested by each flow. An admission control derived from these results is then proposed, allowing each flow to receive a quantitative QoS guarantee adapted to its QoS requirements. An example illustrates the merits of the coexistence of deterministic and probabilistic QoS guarantees.
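The non-preemptive FP/FIFO rule the abstract describes (highest fixed priority first, FIFO among equal priorities, a packet in service runs to completion) can be sketched in a few lines of Python. The convention that a smaller number means higher priority, and all names and service times, are illustrative assumptions, not from the paper.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Packet:
    priority: int   # smaller value = higher fixed priority (assumed convention)
    arrival: float  # FIFO tie-break among packets of equal priority
    name: str = field(compare=False)
    service: float = field(compare=False, default=1.0)

def fp_fifo_schedule(packets):
    """Non-preemptive FP/FIFO: among waiting packets, the highest fixed
    priority is served first, ties broken by arrival time; a packet in
    service runs to completion.  Returns (name, completion_time) pairs."""
    pending = sorted(packets, key=lambda p: p.arrival)
    ready, out, t, i = [], [], 0.0, 0
    while i < len(pending) or ready:
        if not ready and t < pending[i].arrival:
            t = pending[i].arrival  # server idles until the next arrival
        while i < len(pending) and pending[i].arrival <= t:
            heapq.heappush(ready, pending[i])
            i += 1
        p = heapq.heappop(ready)  # (priority, arrival) ordering = FP/FIFO
        t += p.service            # non-preemptive service
        out.append((p.name, t))
    return out

# Low-priority A starts first and is not preempted by B and C.
print(fp_fifo_schedule([Packet(2, 0.0, "A"), Packet(1, 0.5, "B"), Packet(1, 0.6, "C")]))
# → [('A', 1.0), ('B', 2.0), ('C', 3.0)]
```

Note how the example shows the non-preemptive blocking effect that the worst-case (deterministic) analysis must account for: a higher-priority packet can wait behind one lower-priority packet already in service.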
Directory of Open Access Journals (Sweden)
Cohen Anders
2011-09-01
Introduction: The purpose of this study was to describe procedural details of a minimally invasive presacral approach for revision of an L5-S1 Axial Lumbar Interbody Fusion rod. Case presentation: A 70-year-old Caucasian man presented to our facility with marked thoracolumbar scoliosis, osteoarthritic changes characterized by high-grade osteophytes, and significant intervertebral disc collapse and calcification. Our patient required crutches during ambulation and reported intractable axial and radicular pain. Multi-level reconstruction of L1-4 was accomplished with extreme lateral interbody fusion, although focal lumbosacral symptoms persisted due to disc space collapse at L5-S1. Lumbosacral interbody distraction and stabilization was achieved four weeks later with the Axial Lumbar Interbody Fusion System (TranS1 Inc., Wilmington, NC, USA) and rod implantation via an axial presacral approach. Despite symptom resolution following this procedure, our patient suffered a fall six weeks postoperatively with direct sacral impaction, resulting in symptom recurrence and loss of L5-S1 distraction. Following seven months of unsuccessful conservative care, a revision of the Axial Lumbar Interbody Fusion rod was performed that utilized the same presacral approach and a larger diameter implant. Minimal adhesions were encountered upon presacral re-entry. A precise operative trajectory to the base of the previously implanted rod was achieved using fluoroscopic guidance. Surgical removal of the implant was successful with minimal bone resection required. A larger diameter Axial Lumbar Interbody Fusion rod was then implanted and joint distraction was re-established. The radicular symptoms resolved following revision surgery and our patient was ambulating without assistance on post-operative day one. No adverse events were reported. Conclusions: The Axial Lumbar Interbody Fusion distraction rod may be revised and replaced with a larger diameter rod using
Human gait recognition via deterministic learning.
Zeng, Wei; Wang, Cong
2012-11-01
Recognition of temporal/dynamical patterns is among the most difficult pattern recognition tasks. Human gait recognition is a typical difficulty in the area of dynamical pattern recognition. It classifies and identifies individuals by their time-varying gait signature data. Recently, a new dynamical pattern recognition method based on deterministic learning theory was presented, in which a time-varying dynamical pattern can be effectively represented in a time-invariant manner and can be rapidly recognized. In this paper, we present a new model-based approach for human gait recognition via the aforementioned method, specifically for recognizing people by gait. The approach consists of two phases: a training (learning) phase and a test (recognition) phase. In the training phase, side silhouette lower limb joint angles and angular velocities are selected as gait features. A five-link biped model for human gait locomotion is employed to demonstrate that functions containing joint angle and angular velocity state vectors characterize the gait system dynamics. Due to the quasi-periodic and symmetrical characteristics of human gait, the gait system dynamics can be simplified to be described by functions of joint angles and angular velocities of one side of the human body, thus the feature dimension is effectively reduced. Locally-accurate identification of the gait system dynamics is achieved by using radial basis function (RBF) neural networks (NNs) through deterministic learning. The obtained knowledge of the approximated gait system dynamics is stored in constant RBF networks. A gait signature is then derived from the extracted gait system dynamics along the phase portrait of joint angles versus angular velocities. A bank of estimators is constructed using constant RBF networks to represent the training gait patterns. In the test phase, by comparing the set of estimators with the test gait pattern, a set of recognition errors are generated, and the average L(1) norms
DETERMINISTIC EVALUATION OF DELAYED HYDRIDE CRACKING BEHAVIORS IN PHWR PRESSURE TUBES
Directory of Open Access Journals (Sweden)
YOUNG-JIN OH
2013-04-01
Pressure tubes made of Zr-2.5 wt% Nb alloy are important components of the reactor coolant pressure boundary of a pressurized heavy water reactor, in which unanticipated through-wall cracks and rupture may occur due to delayed hydride cracking (DHC). The Canadian Standards Association has provided deterministic and probabilistic structural integrity evaluation procedures to protect pressure tubes against DHC. However, intuitive understanding and subsequent assessment of flaw behaviors are still insufficient, due to the complex degradation mechanisms and diverse influential parameters of DHC compared with those of stress corrosion cracking and fatigue crack growth. In the present study, a deterministic flaw assessment program was developed and applied for systematic integrity assessment of the pressure tubes. Based on examination results dealing with the effects of flaw shapes, pressure tube dimensional changes, hydrogen concentrations of pressure tubes and plant operation scenarios, a simple and rough method for effective cooldown operation was proposed to minimize DHC risks. The developed deterministic assessment program for pressure tubes can be used to derive further technical bases for probabilistic damage frequency assessment.
On Minimal Constraint Networks
Gottlob, Georg
2011-01-01
In a minimal binary constraint network, every tuple of a constraint relation can be extended to a solution. It was conjectured that computing a solution to such a network is NP-complete. We prove this conjecture true and show that the problem remains NP-hard even when the total domain of all values that may appear in the constraint relations is bounded by a constant.
A deterministic algorithm for fitting a step function to a weighted point-set
Fournier, Hervé
2013-02-01
Given a set of n points in the plane, each point having a positive weight, and an integer k > 0, we present an optimal O(n log n)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance to the input points. It matches the expected time bound of the best known randomized algorithm for this problem. Our approach relies on Cole's improved parametric searching technique. As a direct application, our result yields the first O(n log n)-time algorithm for computing a k-center of a set of n weighted points on the real line. © 2012 Elsevier B.V.
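The core of such algorithms is a greedy decision procedure: for a candidate error eps, sweep the points left to right and count how many steps are forced. The paper drives this procedure with Cole's parametric search to reach exact O(n log n); the sketch below instead wraps it in a plain numeric binary search on eps, which is simpler but only approximate. Points are (x, y, w) triples and the weighted distance is taken as w·|f(x) − y|; names are illustrative.

```python
def fits_with_k_steps(points, k, eps):
    """Decision procedure: can some step function with at most k steps stay
    within weighted vertical distance eps of every point?  points must be
    sorted by x.  Greedy sweep: maintain the interval [lo, hi] of step
    values compatible with the current run; open a new step when it empties."""
    steps, lo, hi = 1, float("-inf"), float("inf")
    for _, y, w in points:
        lo = max(lo, y - eps / w)
        hi = min(hi, y + eps / w)
        if lo > hi:  # current step cannot also cover this point
            steps += 1
            lo, hi = y - eps / w, y + eps / w
    return steps <= k

def min_error(points, k, tol=1e-9):
    """Approximate the optimal error by bisection on eps (the paper instead
    searches the discrete candidate set exactly via parametric search)."""
    points = sorted(points)
    lo, hi = 0.0, max(w * abs(y) for _, y, w in points) * 2 + 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if fits_with_k_steps(points, k, mid):
            hi = mid
        else:
            lo = mid
    return hi
```

The greedy sweep is O(n) per call, so the overall cost is O(n log(1/tol)); the parametric-search machinery exists precisely to replace the numeric bisection with an exact combinatorial search.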
Deterministic Random Walks on Regular Trees
Cooper, Joshua; Friedrich, Tobias; Spencer, Joel; doi:10.1002/rsa.20314
2010-01-01
Jim Propp's rotor-router model is a deterministic analogue of a random walk on a graph. Instead of distributing chips randomly, each vertex serves its neighbors in a fixed order. Cooper and Spencer (Comb. Probab. Comput. (2006)) show a remarkable similarity of both models. If an (almost) arbitrary population of chips is placed on the vertices of a grid $\Z^d$ and does a simultaneous walk in the Propp model, then at all times and on each vertex, the number of chips on this vertex deviates from the expected number the random walk would have gotten there by at most a constant. This constant is independent of the starting configuration and the order in which each vertex serves its neighbors. This result raises the question of whether all graphs have this property. With quite some effort, we are now able to answer this question negatively. For the graph being an infinite $k$-ary tree ($k \ge 3$), we show that for any deviation $D$ there is an initial configuration of chips such that after running the Propp model for a ...
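The rotor-router mechanism itself is easy to simulate. A minimal sketch on the one-dimensional grid $\Z$, where the Cooper-Spencer deviation bound does hold, might be:

```python
from collections import defaultdict
from math import comb

def propp_step(chips, rotors, neighbors):
    """One simultaneous step: each vertex routes its chips to its
    neighbors in fixed cyclic rotor order (Propp's rotor-router model)."""
    out = defaultdict(int)
    for v, count in chips.items():
        nbrs = neighbors(v)
        for _ in range(count):
            rotors[v] = (rotors.get(v, -1) + 1) % len(nbrs)
            out[nbrs[rotors[v]]] += 1
    return out

# Walk on the grid Z: every vertex v has the two neighbors v-1 and v+1.
line = lambda v: (v - 1, v + 1)

chips, rotors = {0: 64}, {}
T = 16
for _ in range(T):
    chips = propp_step(chips, rotors, line)

# Expected occupation of a simple random walk after T steps from the origin.
expected = lambda x: (64 * comb(T, (T + x) // 2) / 2 ** T
                      if (T + x) % 2 == 0 and abs(x) <= T else 0.0)
deviation = max(abs(chips.get(x, 0) - expected(x)) for x in range(-T, T + 1))
```

The final `deviation` stays below the known one-dimensional constant (about 2.29), illustrating the grid behavior that the abstract contrasts with $k$-ary trees.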
Analysis of pinching in deterministic particle separation
Risbud, Sumedh; Luo, Mingxiang; Frechette, Joelle; Drazer, German
2011-11-01
We investigate the problem of spherical particles settling vertically, parallel to the Y-axis (under gravity), through a pinching gap created by an obstacle (spherical or cylindrical, centered at the origin) and a wall (normal to the X-axis), to uncover the physics governing microfluidic separation techniques such as deterministic lateral displacement and pinched flow fractionation: (1) theoretically, by linearly superimposing the resistances offered by the wall and the obstacle separately, (2) computationally, using the lattice Boltzmann method for particulate systems, and (3) experimentally, by conducting macroscopic experiments. Both theory and simulations show that, for a given initial separation between the particle center and the Y-axis, the presence of a wall pushes the particles closer to the obstacle than its absence does. Experimentally, this is expected to result in an early onset of the short-range repulsive forces caused by solid-solid contact. We indeed observe such an early onset, which we quantify by measuring the asymmetry in the trajectories of the spherical particles around the obstacle. This work is partially supported by the National Science Foundation Grant Nos. CBET-0731032, CMMI-0748094, and CBET-0954840.
Deterministic Secure Positioning in Wireless Sensor Networks
Delaët, Sylvie; Rokicki, Mariusz; Tixeuil, Sébastien
2007-01-01
Properly locating sensor nodes is an important building block for a large subset of wireless sensor network (WSN) applications. As a result, the performance of the WSN degrades significantly when misbehaving nodes report false location and distance information in order to fake their actual location. In this paper we propose a general distributed deterministic protocol for accurate identification of faking sensors in a WSN. Our scheme does not rely on a subset of trusted nodes that are not allowed to misbehave and are known to every node in the network. Thus, any subset of nodes is allowed to try faking its position. As in previous approaches, our protocol is based on distance evaluation techniques developed for WSN. On the positive side, we show that when the received signal strength (RSS) technique is used, our protocol handles at most $\lfloor \frac{n}{2} \rfloor - 2$ faking sensors. Also, when the time of flight (ToF) technique is used, our protocol manages at most $\lfloor \frac{n}{2} \rfloor ...
Deterministically Driven Avalanche Models of Solar Flares
Strugarek, Antoine; Charbonneau, Paul; Joseph, Richard; Pirot, Dorian
2014-08-01
We develop and discuss the properties of a new class of lattice-based avalanche models of solar flares. These models are readily amenable to a relatively unambiguous physical interpretation in terms of slow twisting of a coronal loop. They share similarities with other avalanche models, such as the classical stick-slip self-organized critical model of earthquakes, in that they are driven globally by a fully deterministic energy-loading process. The model design leads to a systematic deficit of small-scale avalanches. In some portions of model space, mid-size and large avalanching behavior is scale-free, being characterized by event size distributions that have the form of power-laws with index values, which, in some parameter regimes, compare favorably to those inferred from solar EUV and X-ray flare data. For models using conservative or near-conservative redistribution rules, a population of large, quasiperiodic avalanches can also appear. Although without direct counterparts in the observational global statistics of flare energy release, this latter behavior may be relevant to recurrent flaring in individual coronal loops. This class of models could provide a basis for the prediction of large solar flares.
A Deterministic Approach to Earthquake Prediction
Directory of Open Access Journals (Sweden)
Vittorio Sgrigna
2012-01-01
Full Text Available The paper aims at giving suggestions for a deterministic approach to investigate possible earthquake prediction and warning. A fundamental contribution can come from observations and physical modeling of earthquake precursors, aiming at seeing the earthquake phenomenon in perspective within the framework of a unified theory able to explain the causes of its genesis, and the dynamics, rheology, and microphysics of its preparation, occurrence, postseismic relaxation, and interseismic phases. Studies based on combined ground and space observations of earthquake precursors are essential to address the issue. Unfortunately, what is lacking up to now is the demonstration of a causal relationship, with explained physical processes, between data gathered simultaneously and continuously by space observations and ground-based measurements. In doing this, modern and/or new methods and technologies have to be adopted to try to solve the problem. Coordinated space- and ground-based observations imply available test sites on the Earth's surface to correlate ground data, collected by appropriate networks of instruments, with space data detected on board Low-Earth-Orbit (LEO) satellites. Moreover, a new strong theoretical scientific effort is necessary to try to understand the physics of the earthquake.
Energy Technology Data Exchange (ETDEWEB)
Tanigaki, Nobuhiro, E-mail: tanigaki.nobuhiro@eng.nssmc.com [NIPPON STEEL & SUMIKIN ENGINEERING CO., LTD., (EUROPEAN OFFICE), Am Seestern 8, 40547 Dusseldorf (Germany); Ishida, Yoshihiro [NIPPON STEEL & SUMIKIN ENGINEERING CO., LTD., 46-59, Nakabaru, Tobata-ku, Kitakyushu, Fukuoka 804-8505 (Japan); Osada, Morihiro [NIPPON STEEL & SUMIKIN ENGINEERING CO., LTD., (Head Office), Osaki Center Building 1-5-1, Osaki, Shinagawa-ku, Tokyo 141-8604 (Japan)
2015-03-15
Highlights: • A new waste management scheme and the effects of co-gasification of MSW were assessed. • A co-gasification system was compared with other conventional systems. • The co-gasification system can produce high-quality slag and metal. • The co-gasification system showed an economic advantage when bottom ash is landfilled. • The sensitivity analyses indicate an economic advantage when the landfill cost is high. - Abstract: This study evaluates municipal solid waste co-gasification technology and a new solid waste management scheme, which can minimize final landfill amounts and maximize material recycled from waste. This new scheme is considered for a region where bottom ash and incombustibles are landfilled or not allowed to be recycled due to their toxic heavy metal concentration. Waste is processed together with incombustible residues and incineration bottom ash discharged from existing conventional incinerators, using a gasification and melting technology (the Direct Melting System). The inert materials contained in municipal solid waste, incombustibles and bottom ash are recycled as slag and metal in this process, alongside energy recovery. Based on this new waste management scheme with a co-gasification system, a case study of municipal solid waste co-gasification was evaluated and compared with other technical solutions, such as conventional incineration and incineration with an ash melting facility, under certain boundary conditions. From a technical point of view, co-gasification produced high-quality slag with few harmful heavy metals, which was recycled completely without requiring any further post-treatment such as aging. As a consequence, the co-gasification system had an economic advantage over other systems because of its material recovery and minimization of the final landfill amount. Sensitivity analyses of landfill cost, power price and inert materials in waste were also conducted. The higher the landfill costs, the greater the
Felli, Emanuele; Brunetti, Francesco; Disabato, Mara; Salloum, Chady; Azoulay, Daniel; De'angelis, Nicola
2014-01-01
Right colon cancer rarely presents as an emergency, in which bowel occlusion and massive bleeding are the most common clinical presentations. Although there are no definite guidelines, the first-line treatment for massive right colon cancer bleeding should ideally stop the bleeding using endoscopy or interventional radiology, subsequently allowing proper tumor staging and planning of a definite treatment strategy. Minimally invasive approaches for right and left colectomy have progressively increased and are widely performed in elective settings, with laparoscopy chosen in the majority of cases. Conversely, in emergent and urgent surgeries, minimally invasive techniques are rarely performed. We report a case of an 86-year-old woman who was successfully treated for massive rectal bleeding in an urgent setting by robotic surgery (da Vinci Intuitive Surgical System®). At admission, the patient had severe anemia (Hb 6 g/dL) and hemodynamic stability. A contrast-enhanced computed tomography scan showed a right colon cancer with active bleeding; no distant metastases were found. A colonoscopy did not show any other bowel lesion, while constant bleeding from the right pre-stenotic colon mass was temporarily arrested by endoscopic argon coagulation. A robotic right colectomy in an urgent setting (within 24 hours from admission) was indicated. A three-armed robot was used, with docking on the right side of the patient and a fourth trocar for the assistant surgeon. Because of the patient's poor nutritional status, a double-barreled ileocolostomy was performed. The post-operative period was uneventful. As the neoplasia was a pT3N0 adenocarcinoma, surveillance was decided after a multidisciplinary meeting, and restoration of the intestinal continuity was performed 3 months later, once good nutritional status was achieved. In addition, we reviewed the current literature on minimally invasive colectomy performed for colon carcinoma in emergent or urgent settings. No
Traffic chaotic dynamics modeling and analysis of deterministic network
Wu, Weiqiang; Huang, Ning; Wu, Zhitao
2016-07-01
Network traffic is an important and directly acting factor in network reliability and performance. To understand the behaviors of network traffic, chaotic dynamics models were proposed and have helped greatly in analyzing nondeterministic networks. Previous research held that chaotic dynamics behavior was caused by random factors, and that deterministic networks would not exhibit chaotic dynamics behavior because they lack random factors. In this paper, we first adopted chaos theory to analyze traffic data collected from a typical deterministic network testbed, avionics full-duplex switched Ethernet (AFDX), and found that chaotic dynamics behavior also exists in deterministic networks. Then, in order to explore the chaos-generating mechanism, we applied mean field theory to construct a traffic dynamics equation (TDE) for deterministic network traffic modeling without any network random factors. Through studying the derived TDE, we propose that chaotic dynamics is one of the natural properties of network traffic, and that it can also be viewed as the action effect of the TDE control parameters. A network simulation was performed and the results verified that network congestion results in chaotic dynamics for a deterministic network, which is identical to the expectation of the TDE. Our research will be helpful for analyzing the complicated dynamics behavior of traffic in deterministic networks and will contribute to network reliability design and analysis.
Directory of Open Access Journals (Sweden)
Alexei V. Melkikh
2004-03-01
Full Text Available The possibility of a complicated internal structure of an elementary particle was analyzed. In this case a particle may represent a quantum computer with many degrees of freedom. It was shown that the probability of new species formation by means of random mutations is negligibly small. A deterministic model of evolution is considered. According to this model, DNA nucleotides can change their state under the control of the internal degrees of freedom of elementary particles.
2015-01-01
In this paper we present a deterministic polynomial time algorithm for testing if a symbolic matrix in non-commuting variables over $\mathbb{Q}$ is invertible or not. The analogous question for commuting variables is the celebrated polynomial identity testing (PIT) problem for symbolic determinants. In contrast to the commutative case, which has an efficient probabilistic algorithm, the best previous algorithm for the non-commutative setting required exponential time (whether or not randomization is ...
Optical Realization of Deterministic Entanglement Concentration of Polarized Photons
Institute of Scientific and Technical Information of China (English)
GU Yong-Jian; XIAN Liang; LI Wen-Dong; MA Li-Zhen
2008-01-01
We propose a scheme for optical realization of deterministic entanglement concentration of polarized photons. To overcome the difficulty due to the lack of sufficiently strong interactions between photons, teleportation is employed to transfer the polarization states of two photons onto the path and polarization states of a third photon, which is made possible by the recent experimental realization of the deterministic and complete Bell state measurement. Then the required positive operator-valued measurement and further operations can be implemented deterministically by using a linear optical setup. All of these are within the reach of current technology.
Directory of Open Access Journals (Sweden)
Szkup Peter L
2012-03-01
Full Text Available Abstract Introduction In the two cases described here, the subclavian artery was inadvertently cannulated during unsuccessful access to the internal jugular vein. The puncture was successfully closed using a closure device based on a collagen plug (Angio-Seal, St Jude Medical, St Paul, MN, USA). This technique is relatively simple and inexpensive. It can provide clinicians, such as intensive care physicians and anesthesiologists, with a safe and straightforward alternative to major surgery and can be a life-saving procedure. Case presentation In the first case, an anesthetist attempted ultrasound-guided access to the right internal jugular vein during the preoperative preparation of a 66-year-old Caucasian man. A 7-French (Fr) triple-lumen catheter was inadvertently placed into his arterial system. In the second case, an emergency physician inadvertently placed a 7-Fr catheter into the subclavian artery of a 77-year-old Caucasian woman whilst attempting access to her right internal jugular vein. Both arterial punctures were successfully closed by means of a percutaneous closure device (Angio-Seal). No complications were observed. Conclusions Inadvertent subclavian arterial puncture can be successfully managed with no adverse clinical sequelae by using a percutaneous vascular closure device. This minimally invasive technique may be an option for patients with non-compressible arterial punctures. This report demonstrates two practical points that may help clinicians in decision-making during daily practice. First, it provides a practical solution to a well-known vascular complication. Second, it emphasizes the role of proper vascular ultrasound training for the non-radiologist.
Minimally invasive periodontal therapy.
Dannan, Aous
2011-10-01
Minimally invasive dentistry is a concept that preserves dentition and supporting structures. Minimally invasive procedures in periodontal treatment, however, are largely confined to periodontal surgery, where they represent alternative approaches developed to allow less extensive manipulation of surrounding tissues than conventional procedures, while accomplishing the same objectives. In this review, the concept of minimally invasive periodontal surgery (MIPS) is first explained. An electronic search for all studies regarding the efficacy and effectiveness of MIPS between 2001 and 2009 was conducted. For this purpose, suitable key words from Medical Subject Headings on PubMed were used to extract the required studies. All studies are presented and important results are summarized. Preliminary data from case cohorts and from many studies reveal that the microsurgical access flap, in terms of MIPS, has a high potential to seal the healing wound from the contaminated oral environment by achieving and maintaining primary closure. Soft tissues are mostly preserved and minimal gingival recession is observed, an important feature to meet the demands of the patient and the clinician in the esthetic zone. However, although the potential efficacy of MIPS in the treatment of deep intrabony defects has been proved, larger studies are required to confirm and extend the reported positive preliminary outcomes.
Directory of Open Access Journals (Sweden)
Gbemileke A. Ogunranti
2016-09-01
Full Text Available Purpose: The main objective of this study is to develop a model for solving the one-dimensional cutting stock problem in the woodworking industry, and to develop a program for its implementation. Design/methodology/approach: This study adopts the pattern-oriented approach in the formulation of the cutting stock model. A pattern generation algorithm was developed and coded using the Visual Basic .NET language. The cutting stock model developed is a linear programming (LP) model constrained by numerous feasible patterns. An LP solver was integrated with the pattern generation algorithm program to develop a one-dimensional cutting stock application named GB Cutting Stock Program. Findings and Originality/value: Applying the model to a real-life optimization problem significantly reduces material waste (off-cuts) and minimizes the total stock used. The result yielded about 30.7% cost savings for company I when the total stock material used is compared with the former cutting plan. Also, to evaluate the efficiency of the application, the Case I problem was solved using two top commercial 1D cutting stock software packages. The results show that the GB program performs better when related results are compared. Research limitations/implications: This study rounds up the linear programming solution for the number of each pattern to cut. Practical implications: From a managerial perspective, implementing optimized cutting plans increases productivity by eliminating calculation errors and drastically reducing operator mistakes. Also, financial benefits that can annually amount to millions in cost savings can be achieved through significant material waste reduction. Originality/value: This paper developed a linear programming one-dimensional cutting stock model based on a pattern generation algorithm to minimize waste in the woodworking industry. To implement the model, the algorithm was coded using VisualBasic.net and a linear programming solver called lpsolvedll (dynamic
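The pattern-oriented idea is easy to prototype on a toy instance. A minimal sketch (exhaustive maximal-pattern generation plus a breadth-first search for the minimum number of stock lengths, standing in for the LP solver used in the paper) might look like:

```python
from collections import deque

def maximal_patterns(stock_len, sizes):
    """Enumerate all maximal cutting patterns: tuples giving how many
    pieces of each size are cut from one stock length, such that no
    further piece fits in the leftover."""
    pats = []
    def rec(i, remaining, counts):
        if i == len(sizes):
            if all(remaining < s for s in sizes):   # nothing else fits
                pats.append(tuple(counts))
            return
        for c in range(remaining // sizes[i] + 1):
            rec(i + 1, remaining - c * sizes[i], counts + [c])
    rec(0, stock_len, [])
    return pats

def min_stock_count(stock_len, sizes, demand):
    """Breadth-first search over remaining demand vectors: each step cuts
    one stock according to some maximal pattern, so the first state with
    no remaining demand gives the minimum number of stocks."""
    pats = maximal_patterns(stock_len, sizes)
    start = tuple(demand)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        remaining, used = queue.popleft()
        if all(r == 0 for r in remaining):
            return used
        for p in pats:
            nxt = tuple(max(0, r - c) for r, c in zip(remaining, p))
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, used + 1))
    return None
```

Real cutting stock systems avoid enumerating all patterns up front and instead use delayed column generation (Gilmore-Gomory) against an LP relaxation, which is closer to the solver integration the paper describes.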
Non deterministic finite automata for power systems fault diagnostics
Directory of Open Access Journals (Sweden)
LINDEN, R.
2009-06-01
Full Text Available This paper introduces an application of finite non-deterministic automata to power system fault diagnosis. Automata for the simpler faults are presented, and the proposed system is compared with an established expert system.
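The automata machinery underneath is standard; a generic subset-tracking NFA simulator (a sketch of the formalism, not the diagnostic automata of the paper) can be written as:

```python
def nfa_accepts(delta, start, accept, word):
    """Simulate a non-deterministic finite automaton by tracking the set
    of states reachable after each symbol (the subset construction,
    computed on the fly).
    delta: dict mapping (state, symbol) -> set of next states."""
    states = {start}
    for sym in word:
        states = set().union(*(delta.get((s, sym), set()) for s in states))
        if not states:          # no run survives this symbol
            return False
    return bool(states & accept)
```

For example, with transitions `{(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2}}`, start state 0 and accepting state 2, the automaton accepts exactly the words ending in "ab".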
A proof system for asynchronously communicating deterministic processes
de Boer, F.S.; van Hulst, M.
1994-01-01
We introduce in this paper new communication and synchronization constructs which allow deterministic processes, communicating asynchronously via unbounded FIFO buffers, to cope with an indeterminate environment. We develop for the resulting parallel programming language, which subsumes deterministi
A Method to Separate Stochastic and Deterministic Information from Electrocardiograms
Gutiérrez, R M
2004-01-01
In this work we present a new idea for developing a method to separate the stochastic and deterministic information contained in an electrocardiogram (ECG), which may provide new sources of information for diagnostic purposes. We assume that the ECG contains information corresponding to many different processes related to cardiac activity, as well as contamination from different sources related to the measurement procedure and the nature of the observed system itself. The method starts with the application of an improved archetypal analysis to separate the mentioned stochastic and deterministic information. From the stochastic point of view we analyze Renyi entropies, and with respect to the deterministic perspective we calculate the autocorrelation function and the corresponding correlation time. We show that healthy and pathologic information, whether stochastic and/or deterministic, can be identified by different measures and located in different parts of the ECG.
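The Renyi entropies mentioned here are straightforward to compute from an estimated discrete probability distribution; a minimal sketch might be:

```python
import math

def renyi_entropy(p, alpha):
    """Renyi entropy H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha)
    for a discrete distribution p; reduces to the Shannon entropy
    in the limit alpha -> 1."""
    support = [x for x in p if x > 0]
    if abs(alpha - 1.0) < 1e-12:                    # Shannon limit
        return -sum(x * math.log(x) for x in support)
    return math.log(sum(x ** alpha for x in support)) / (1.0 - alpha)
```

For a uniform distribution every Renyi entropy equals log of the support size; for non-uniform distributions H_alpha decreases as alpha grows, which is what makes the family a useful probe of how concentrated the stochastic component is.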
Deterministic operations research models and methods in linear optimization
Rader, David J
2013-01-01
Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations resear
Deterministic Consistency: A Programming Model for Shared Memory Parallelism
Aviram, Amittai; Ford, Bryan
2009-01-01
The difficulty of developing reliable parallel software is generating interest in deterministic environments, where a given program and input can yield only one possible result. Languages or type systems can enforce determinism in new code, and runtime systems can impose synthetic schedules on legacy parallel code. To parallelize existing serial code, however, we would like a programming model that is naturally deterministic without language restrictions or artificial scheduling. We propose "...
An alternative approach to measure similarity between two deterministic transient signals
Shin, Kihong
2016-06-01
In many practical engineering applications, it is often required to measure the similarity of two signals to gain insight into the conditions of a system. For example, an application that monitors machinery can regularly measure the vibration signal and compare it to a healthy reference signal in order to monitor whether or not any fault symptom is developing. Also, in modal analysis, a frequency response function (FRF) from a finite element model (FEM) is often compared with an FRF from experimental modal analysis. Many different similarity measures are applicable in such cases, and correlation-based similarity measures may be the most frequently used among these, such as the correlation coefficient in the time domain and the frequency response assurance criterion (FRAC) in the frequency domain. Although correlation-based similarity measures may be particularly useful for random signals because they are based on probability and statistics, we frequently deal with signals that are largely deterministic and transient. Thus, it may be useful to develop another similarity measure that takes the characteristics of deterministic transient signals properly into account. In this paper, an alternative approach to measure the similarity between two deterministic transient signals is proposed. This newly proposed similarity measure is based on the fictitious system frequency response function, and it consists of a magnitude similarity and a shape similarity. Finally, a few examples are presented to demonstrate the use of the proposed similarity measure.
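For reference, the FRAC baseline mentioned above reduces to a one-line computation on two complex spectra (the paper's fictitious-FRF measure itself is not reproduced here):

```python
def frac(H1, H2):
    """Frequency Response Assurance Criterion between two complex
    spectra: |sum_k H1[k] * conj(H2[k])|^2 normalized by the two
    spectral energies. Equals 1 for identical (or proportionally
    scaled) FRFs and 0 for orthogonal ones."""
    num = abs(sum(a * b.conjugate() for a, b in zip(H1, H2))) ** 2
    den = sum(abs(a) ** 2 for a in H1) * sum(abs(b) ** 2 for b in H2)
    return num / den
```

Note that FRAC is insensitive to an overall scale factor, which is one reason a separate magnitude similarity is useful for deterministic transient signals.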
Seismic hazard in Romania associated to Vrancea subcrustal source Deterministic evaluation
Radulian, M; Moldoveanu, C L; Panza, G F; Vaccari, F
2002-01-01
Our study presents an application of the deterministic approach to the particular case of Vrancea intermediate-depth earthquakes to show how efficient numerical synthesis is in predicting realistic ground motion, and how some striking peculiarities of the observed intensity maps are properly reproduced. The deterministic approach proposed by Costa et al. (1993) is particularly useful for computing seismic hazard in Romania, where the most destructive effects are caused by the intermediate-depth earthquakes generated in the Vrancea region. Vrancea is unique among the seismic sources of the world because of its striking peculiarities: the extreme concentration of seismicity with a remarkable invariance of the foci distribution, the unusually high rate of strong shocks (an average frequency of 3 events with magnitude greater than 7 per century) inside an exceptionally narrow focal volume, the predominance of a reverse faulting mechanism with the T-axis almost vertical and the P-axis almost horizontal, and the mo...
Longevity, Growth and Intergenerational Equity: The Deterministic Case
DEFF Research Database (Denmark)
Andersen, Torben M.; Gestsson, Marias Halldór
2016-01-01
Challenges raised by aging (increasing longevity) have prompted policy debates featuring policy proposals justified by reference to some notion of intergenerational equity. However, very different policies ranging from presavings to indexation of retirement ages have been justified in this way. We...
Longevity, Growth and Intergenerational Equity - The Deterministic Case
DEFF Research Database (Denmark)
Andersen, Torben M.; Gestsson, Marias Halldór
We develop an overlapping generations model in continuous time which encompasses different generations with different mortality rates and thus longevity. Allowing for both trend increases in longevity and productivity, we address the issue of intergenerational equity under a utilitarian criterion...
Institute of Scientific and Technical Information of China (English)
王鼎; 潘苗; 吴瑛
2011-01-01
To address the self-calibration of direction-dependent gain-phase errors in the case of a deterministic signal model, a maximum likelihood method (MLM) for calibrating the direction-dependent gain-phase errors with carry-on instrumental sensors is presented. In order to maximize the high-dimensional nonlinear cost function appearing in the MLM, an improved alternating projection iteration algorithm, which can jointly optimize the azimuths and the direction-dependent gain-phase errors, is proposed. The closed-form expressions of the Cramér-Rao bound (CRB) for the azimuths and gain-phase errors are derived. Simulation experiments show the effectiveness and advantage of the novel method.
Minimal Surfaces for Hitchin Representations
DEFF Research Database (Denmark)
Li, Qiongling; Dai, Song
2016-01-01
Given a reductive representation $\rho: \pi_1(S)\rightarrow G$, there exists a $\rho$-equivariant harmonic map $f$ from the universal cover of a fixed Riemann surface $\Sigma$ to the symmetric space $G/K$ associated to $G$. If the Hopf differential of $f$ vanishes, the harmonic map is then minimal. In this paper, we investigate the properties of immersed minimal surfaces inside the symmetric space associated to a subloci of the Hitchin component: the $q_n$ and $q_{n-1}$ case. First, we show that the pullback metric of the minimal surface dominates a constant multiple of the hyperbolic metric in the same conformal class and has a strong rigidity property. Secondly, we show that the immersed minimal surface is never tangential to any flat inside the symmetric space. As a direct corollary, the pullback metric of the minimal surface is always strictly negatively curved. In the end, we find a fully decoupled system.
Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel
2013-06-01
Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted, where by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
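The notion of a complete exact Pareto front is easy to pin down for small discrete problems; a minimal dominance-filter sketch (full enumeration, not the paper's k-best/ripple-spreading construction) might be:

```python
def pareto_front(solutions, objectives):
    """Return every non-dominated solution for a minimization problem.
    A solution is dominated if some other solution is no worse in all
    objectives and strictly better in at least one."""
    vals = [tuple(f(s) for f in objectives) for s in solutions]
    front = []
    for s, v in zip(solutions, vals):
        dominated = any(all(u[j] <= v[j] for j in range(len(v))) and u != v
                        for u in vals)
        if not dominated:
            front.append(s)
    return front
```

Full enumeration is exact but exponential for combinatorial problems such as routing, which is precisely the gap the k-best construction in the paper is designed to close.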
CPT-based probabilistic and deterministic assessment of in situ seismic soil liquefaction potential
Moss, R.E.S.; Seed, R.B.; Kayen, R.E.; Stewart, J.P.; Der Kiureghian, A.; Cetin, K.O.
2006-01-01
This paper presents a complete methodology for both probabilistic and deterministic assessment of seismic soil liquefaction triggering potential based on the cone penetration test (CPT). A comprehensive worldwide set of CPT-based liquefaction field case histories were compiled and back-analyzed, and the data then used to develop probabilistic triggering correlations. Issues investigated in this study include improved normalization of CPT resistance measurements for the influence of effective overburden stress, and adjustment to CPT tip resistance for the potential influence of "thin" liquefiable layers. The effects of soil type and soil character (i.e., "fines" adjustment) for the new correlations are based on a combination of CPT tip and sleeve resistance. To quantify probability for performance-based engineering applications, Bayesian "regression" methods were used, and the uncertainties of all variables comprising both the seismic demand and the liquefaction resistance were estimated and included in the analysis. The resulting correlations were developed using a Bayesian framework and are presented in both probabilistic and deterministic formats. The results are compared to previous probabilistic and deterministic correlations. © 2006 ASCE.
Institute of Scientific and Technical Information of China (English)
陈志平
2003-01-01
A new deterministic formulation, called the conditional expectation formulation, is proposed for dynamic stochastic programming problems in order to overcome some disadvantages of existing deterministic formulations. We then check the impact of the new deterministic formulation and two other deterministic formulations on the corresponding problem size, number of nonzero elements, and solution time by solving some typical dynamic stochastic programming problems with different interior point algorithms. Numerical results show the advantage and applicability of the new deterministic formulation.
Theory and applications of a deterministic approximation to the coalescent model.
Jewett, Ethan M; Rosenberg, Noah A
2014-05-01
Under the coalescent model, the random number nt of lineages ancestral to a sample is nearly deterministic as a function of time when nt is moderate to large in value, and it is well approximated by its expectation E[nt]. In turn, this expectation is well approximated by simple deterministic functions that are easy to compute. Such deterministic functions have been applied to estimate allele age, effective population size, and genetic diversity, and they have been used to study properties of models of infectious disease dynamics. Although a number of simple approximations of E[nt] have been derived and applied to problems of population-genetic inference, the theoretical accuracy of the resulting approximate formulas and the inferences obtained using these approximations is not known, and the range of problems to which they can be applied is not well understood. Here, we demonstrate general procedures by which the approximation nt≈E[nt] can be used to reduce the computational complexity of coalescent formulas, and we show that the resulting approximations converge to their true values under simple assumptions. Such approximations provide alternatives to exact formulas that are computationally intractable or numerically unstable when the number of sampled lineages is moderate or large. We also extend an existing class of approximations of E[nt] to the case of multiple populations of time-varying size with migration among them. Our results facilitate the use of the deterministic approximation nt≈E[nt] for deriving functionally simple, computationally efficient, and numerically stable approximations of coalescent formulas under complicated demographic scenarios.
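The flavor of such deterministic approximations can be shown for the simplest case. For a constant-size population in coalescent time units, lineages coalesce at rate n(n-1)/2, and solving the ODE dn/dt = -n(n-1)/2 in closed form gives a standard textbook approximation of E[nt]. This is a minimal sketch of that one-population case only, not the authors' multi-population extension with migration; n0 and t are generic symbols, not the paper's notation:

```python
import math

def expected_lineages(n0, t):
    """Deterministic approximation of the number of ancestral lineages n_t
    for a constant-size population, with t in coalescent time units.
    Closed-form solution of dn/dt = -n(n-1)/2 with n(0) = n0."""
    r = (n0 - 1) / n0 * math.exp(-t / 2.0)
    return 1.0 / (1.0 - r)

print(expected_lineages(100, 0.0))   # 100.0: all lineages present at t = 0
print(expected_lineages(100, 10.0))  # close to 1 for large t (common ancestor)
```

The function is monotone decreasing from n0 toward 1, matching the qualitative behavior of nt that the approximation exploits.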
DETERMINISTIC TRANSPORT METHODS AND CODES AT LOS ALAMOS
Energy Technology Data Exchange (ETDEWEB)
J. E. MOREL
1999-06-01
The purposes of this paper are to: Present a brief history of deterministic transport methods development at Los Alamos National Laboratory from the 1950's to the present; Discuss the current status and capabilities of deterministic transport codes at Los Alamos; and Discuss future transport needs and possible future research directions. Our discussion of methods research necessarily includes only a small fraction of the total research actually done. The works that have been included represent a very subjective choice on the part of the author that was strongly influenced by his personal knowledge and experience. The remainder of this paper is organized in four sections: the first relates to deterministic methods research performed at Los Alamos, the second relates to production codes developed at Los Alamos, the third relates to the current status of transport codes at Los Alamos, and the fourth relates to future research directions at Los Alamos.
Estimating the epidemic threshold on networks by deterministic connections
Energy Technology Data Exchange (ETDEWEB)
Li, Kezan, E-mail: lkzzr@sohu.com; Zhu, Guanghu [School of Mathematics and Computing Science, Guilin University of Electronic Technology, Guilin 541004 (China); Fu, Xinchu [Department of Mathematics, Shanghai University, Shanghai 200444 (China); Small, Michael [School of Mathematics and Statistics, The University of Western Australia, Crawley, Western Australia 6009 (Australia)
2014-12-15
For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random with differing connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks using information from the deterministic connections alone. Moreover, these models also account for generic nonuniform stochastic connections and heterogeneous community structure. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since deterministic connections are easier to detect than stochastic ones, this work provides a feasible and effective method for estimating the epidemic thresholds of real epidemic networks.
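As background for the spectral analysis used here: in the standard mean-field SIS model on an undirected network, the epidemic threshold is the inverse of the spectral radius of the adjacency matrix, so bounds on the leading eigenvalue translate directly into bounds on the threshold. A minimal sketch of that generic mean-field result (not the paper's specific constructed models):

```python
import numpy as np

def sis_threshold(adj):
    """Mean-field SIS epidemic threshold: beta_c / gamma = 1 / lambda_max(A)."""
    lam_max = max(np.linalg.eigvalsh(adj))  # adjacency matrix is symmetric
    return 1.0 / lam_max

# Complete graph on 5 nodes: lambda_max = 4, so the threshold is 0.25
A = np.ones((5, 5)) - np.eye(5)
print(sis_threshold(A))  # 0.25
```

Any upper/lower bound on lambda_max obtained from the deterministic sub-network alone immediately brackets this threshold, which is the spirit of the estimates in the abstract.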
Deterministic sensing matrices in compressive sensing: a survey.
Nguyen, Thu L N; Shin, Yoan
2013-01-01
Compressive sensing is a sampling method which provides a new approach to efficient signal compression and recovery by exploiting the fact that a sparse signal can be suitably reconstructed from very few measurements. One of the main concerns in compressive sensing is the construction of the sensing matrices. While random sensing matrices have been widely studied, only a few deterministic sensing matrices have been considered. These matrices are highly desirable because their structure allows fast implementation with reduced storage requirements. In this paper, a survey of deterministic sensing matrices for compressive sensing is presented. We introduce a basic problem in compressive sensing and some disadvantages of random sensing matrices. Some recent results on the construction of deterministic sensing matrices are discussed.
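A standard figure of merit that deterministic constructions aim to minimize is the mutual coherence of the sensing matrix: the largest normalized inner product between distinct columns. A minimal sketch, using the classical identity-plus-DFT pair as an example rather than any particular construction from the survey:

```python
import numpy as np

def mutual_coherence(Phi):
    """Largest absolute inner product between distinct normalized columns."""
    cols = Phi / np.linalg.norm(Phi, axis=0)
    G = np.abs(cols.conj().T @ cols)   # Gram matrix of normalized columns
    np.fill_diagonal(G, 0.0)           # ignore self inner products
    return G.max()

# Identity stacked with the unitary DFT: coherence 1/sqrt(n), the minimum
# possible for a union of two orthonormal bases
n = 16
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
Phi = np.hstack([np.eye(n), F])
print(mutual_coherence(Phi))  # 0.25 = 1/sqrt(16)
```

Low coherence guarantees exact recovery of sufficiently sparse signals, which is why it is the usual yardstick for comparing deterministic matrices against random ones.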
Deterministic and stochastic features of rhythmic human movement.
van Mourik, Anke M; Daffertshofer, Andreas; Beek, Peter J
2006-03-01
The dynamics of rhythmic movement has both deterministic and stochastic features. We advocate a recently established analysis method that allows for an unbiased identification of both types of system components. The deterministic components are revealed in terms of drift coefficients and vector fields, while the stochastic components are assessed in terms of diffusion coefficients and ellipse fields. The general principles of the procedure and its application are explained and illustrated using simulated data from known dynamical systems. Subsequently, we exemplify the method's merits in extracting deterministic and stochastic aspects of various instances of rhythmic movement, including tapping, wrist cycling and forearm oscillations. In particular, it is shown how the extracted numerical forms can be analysed to gain insight into the dependence of dynamical properties on experimental conditions.
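The decomposition advocated here can be illustrated on simulated data: for an Ornstein-Uhlenbeck process dx = -θx dt + σ dW, conditional-moment estimators recover the drift slope -θ and the diffusion coefficient σ²/2. This is a minimal one-dimensional sketch of the idea, not the authors' full vector-field and ellipse-field procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, dt, n = 1.0, 0.5, 0.01, 200_000

# Simulate an Ornstein-Uhlenbeck process with Euler-Maruyama
x = np.empty(n)
x[0] = 0.0
noise = rng.standard_normal(n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * np.sqrt(dt) * noise[i]

dx = np.diff(x)
# Drift: least-squares slope of E[dx/dt | x] versus x (should be near -theta)
drift_slope = np.sum(x[:-1] * dx) / (dt * np.sum(x[:-1] ** 2))
# Diffusion: D2 = E[dx^2] / (2 dt) (should be near sigma^2 / 2 = 0.125)
diffusion = np.mean(dx ** 2) / (2 * dt)
print(drift_slope, diffusion)
```

In the movement data of the abstract the same conditional moments are estimated locally (binned over state space), yielding the drift vector fields and diffusion ellipse fields.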
The Deterministic Part of IPC-4: An Overview
Edelkamp, S.; DOI: 10.1613/jair.1677
2011-01-01
We provide an overview of the organization and results of the deterministic part of the 4th International Planning Competition, i.e., of the part concerned with evaluating systems doing deterministic planning. IPC-4 attracted even more competing systems than its already large predecessors, and the competition event was revised in several important respects. After giving an introduction to the IPC, we briefly explain the main differences between the deterministic part of IPC-4 and its predecessors. We then formally introduce the language used, PDDL2.2, which extends PDDL2.1 with derived predicates and timed initial literals. We list the competing systems and give an overview of the results of the competition. The entire set of data is far too large to be presented in full. We provide a detailed summary; the complete data is available in an online appendix. We explain how we awarded the competition prizes.
Deterministic dynamics of neural activity during absence seizures in rats
Ouyang, Gaoxiang; Li, Xiaoli; Dang, Chuangyin; Richards, Douglas A.
2009-04-01
The study of brain electrical activity in terms of deterministic nonlinear dynamics has recently received much attention. Forbidden ordinal pattern (FOP) analysis is a recently proposed method to investigate the determinism of a dynamical system through the analysis of intrinsic ordinal properties of a nonstationary time series. The advantages of this method over others include simplicity and low computational complexity, without further model assumptions. In this paper, the FOP of EEG series of genetic absence epilepsy rats from Strasbourg was examined to demonstrate evidence of deterministic dynamics during epileptic states. Experiments showed that the number of FOP of the EEG series grew significantly from the interictal to the ictal state via the preictal state. These findings indicated that the deterministic dynamics of neural networks increased significantly in the transition from the interictal to the ictal state, and also suggested that FOP measures of EEG series could be considered a predictor of absence seizures.
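The FOP idea is easy to illustrate: slide a window of length 3 over the series, record the rank pattern of each window, and see which of the 3! = 6 possible patterns never occur. Deterministic dynamics forbid some patterns, while noise eventually realizes all of them. A minimal sketch on the fully chaotic logistic map, where the strictly decreasing order-3 pattern is known to be forbidden (this is a generic textbook example, not the EEG analysis of the paper):

```python
from itertools import permutations

def missing_patterns(series, order=3):
    """Return the set of ordinal patterns of the given order absent from the series."""
    seen = set()
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # rank pattern: indices of the window sorted by value, ascending
        seen.add(tuple(sorted(range(order), key=lambda k: window[k])))
    return set(permutations(range(order))) - seen

# Fully chaotic logistic map x -> 4x(1-x)
x, series = 0.3, []
for _ in range(2000):
    x = 4.0 * x * (1.0 - x)
    series.append(x)

# Two consecutive decreases are impossible for this map, so the
# strictly decreasing pattern (2, 1, 0) never appears
print(missing_patterns(series))
```

Counting how many patterns are forbidden (here, one of six at order 3) is the determinism indicator that the abstract tracks across interictal, preictal and ictal EEG segments.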
Biondi, A; Valsecchi, MG; Seriu, T; D'Aniello, E; Willemse, MJ; Fasching, K; Pannunzio, A; Gadner, H; Schrappe, M; Kamps, WA; Bartram, CR; van Dongen, JJM; Panzer-Grumayer, ER
2000-01-01
The medium-risk B cell precursor acute lymphoblastic leukemia (ALL) accounts for 50-60% of total childhood ALL and comprises the largest number of relapses still unpredictable with diagnostic criteria. To evaluate the prognostic impact of minimal residual disease (MRD) in this specific group, a case
Ergodicity of Truncated Stochastic Navier Stokes with Deterministic Forcing and Dispersion
Majda, Andrew J.; Tong, Xin T.
2016-10-01
Turbulence in idealized geophysical flows is a very rich and important topic. The anisotropic effects of explicit deterministic forcing, dispersive effects from rotation due to the β-plane and F-plane, and topography together with random forcing all combine to produce a remarkable number of realistic phenomena. These effects have been studied through careful numerical experiments in the truncated geophysical models. These important results include transitions between coherent jets and vortices, and direct and inverse turbulence cascades as parameters are varied; it is a contemporary challenge to explain these diverse statistical predictions. Here we contribute to these issues by proving with full mathematical rigor that for any values of the deterministic forcing, the β- and F-plane effects and topography, with minimal stochastic forcing, there is geometric ergodicity for any finite Galerkin truncation. This means that there is a unique smooth invariant measure which attracts all statistical initial data at an exponential rate. In particular, this rigorous statistical theory guarantees that there are no bifurcations to multiple stable and unstable statistical steady states as geophysical parameters are varied, in contrast to claims in the applied literature. The proof utilizes a new statistical Lyapunov function to account for enstrophy exchanges between the statistical mean and the variance fluctuations due to the deterministic forcing. It also requires careful proofs of hypoellipticity with geophysical effects and uses geometric control theory to establish reachability. To illustrate the necessity of these conditions, a two-dimensional example is developed which has the square of the Euclidean norm as the Lyapunov function and is hypoelliptic with nonzero noise forcing, yet fails to be reachable or ergodic.
Deterministic Quantum Key Distribution Using Gaussian-Modulated Squeezed States
Institute of Scientific and Technical Information of China (English)
何广强; 朱俊; 曾贵华
2011-01-01
A continuous variable ping-pong scheme, which is utilized to generate a deterministic private key, is proposed. The proposed scheme is implemented physically using Gaussian-modulated squeezed states. The deterministic characteristic, i.e., no basis reconciliation between the two parties, leads to nearly twice the efficiency of standard quantum key distribution schemes. In particular, the separate control mode is not needed in the proposed scheme, so it is simpler and more practical than previous ping-pong schemes. An attacker may be detected easily through the fidelity of the transmitted signal, and cannot succeed with the beam-splitter attack strategy.
Deterministic approaches for noncoherent communications with chaotic carriers
Institute of Scientific and Technical Information of China (English)
Liu Xiongying; Qiu Shuisheng; Francis. C. M. Lau
2005-01-01
Two problems are considered: the first is noise decontamination of chaotic carriers using a deterministic approach to reconstruct pseudo-trajectories; the second is the design of communication schemes with chaotic carriers. After presenting our deterministic noise-decontamination algorithm, a conventional chaos shift keying (CSK) communication system is applied. The difference in Euclidean distance between the noisy trajectory and the decontaminated trajectory in phase space can be used to detect the transmitted symbol non-coherently, simply and effectively. It is shown that this detection method achieves a bit error rate performance comparable to other non-coherent systems.
MIMO capacity for deterministic channel models: sublinear growth
DEFF Research Database (Denmark)
Bentosela, Francois; Cornean, Horia; Marchetti, Nicola
2013-01-01
This is the second paper by the authors in a series concerned with the development of a deterministic model for the transfer matrix of a MIMO system. In our previous paper, we started from the Maxwell equations and described the generic structure of such a deterministic transfer matrix… some generic assumptions, we prove that the capacity grows much more slowly than linearly with the number of antennas. These results reinforce previous heuristic results obtained from statistical models of the transfer matrix, which also predict a sublinear behavior…
Structural and Spectral Properties of Deterministic Aperiodic Optical Structures
Directory of Open Access Journals (Sweden)
Luca Dal Negro
2016-12-01
Full Text Available In this comprehensive paper we have addressed structure-property relationships in a number of representative systems with periodic, random, quasi-periodic and deterministic aperiodic geometry, using the interdisciplinary methods of spatial point pattern analysis and spectral graph theory as well as the rigorous Green's matrix method, which provides access, for the first time, to the electromagnetic scattering behavior and spectral fluctuations (distributions of complex eigenvalues as well as of their level spacings) of deterministic aperiodic optical media.
An Efficient and Flexible Deterministic Framework for Multithreaded Programs
Institute of Scientific and Technical Information of China (English)
卢凯; 周旭; 王小平; 陈沉
2015-01-01
Determinism is very useful to multithreaded programs in debugging, testing, etc. Many deterministic approaches have been proposed, such as deterministic multithreading (DMT) and deterministic replay. However, these systems are either inefficient or target a single purpose, which is not flexible. In this paper, we propose an efficient and flexible deterministic framework for multithreaded programs. Our framework implements determinism in two steps: relaxed determinism and strong determinism. Relaxed determinism resolves data races efficiently by using a proper weak memory consistency model. After that, we implement strong determinism by resolving lock contention deterministically. Since we can apply different approaches to these two steps independently, our framework provides a spectrum of deterministic choices, including a nondeterministic system (fast), a weak deterministic system (fast and conditionally deterministic), a DMT system, and a deterministic replay system. Our evaluation shows that the DMT configuration of this framework can even outperform a state-of-the-art DMT system.
Delimata, Paweł
2010-01-01
We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. The first kind of rules, called inhibitory rules, block only one decision value (i.e., they have all but one of the possible decisions on their right-hand sides). Contrary to this, any rule of the second kind, called a bounded nondeterministic rule, can have only a few decisions on its right-hand side. We show that both kinds of rules can be used to improve the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but direct generation of the rules is not required. Instead, for any new object the considered algorithms efficiently extract from a given decision table some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in the construction of rule-based classifiers. We include the results of experiments showing that by combining rule-based classifiers based on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality. © 2010 Springer-Verlag.
Sludge minimization technologies - an overview
Energy Technology Data Exchange (ETDEWEB)
Oedegaard, Hallvard
2003-07-01
The management of wastewater sludge from wastewater treatment plants represents one of the major challenges in wastewater treatment today. The cost of sludge treatment amounts to more than the cost of the liquid treatment in many cases. Therefore the focus on and interest in sludge minimization is steadily increasing. In the paper an overview is given of sludge minimization (sludge mass reduction) options. It is demonstrated that sludge minimization may be a result of reduced production of sludge and/or disintegration processes that may take place both in the wastewater treatment stage and in the sludge stage. Various sludge disintegration technologies for sludge minimization are discussed, including mechanical methods (focusing on the stirred ball-mill, high-pressure homogenizer and ultrasonic disintegrator), chemical methods (focusing on the use of ozone), physical methods (focusing on thermal and thermal/chemical hydrolysis) and biological methods (focusing on enzymatic processes). (author)
Szymanowski, Mariusz; Kryza, Maciej
2017-02-01
Our study examines the role of auxiliary variables in the process of spatial modelling and mapping of climatological elements, with air temperature in Poland used as an example. Multivariable algorithms are the most frequently applied for the spatialization of air temperature, and in many studies their results have proved better than those obtained by various one-dimensional techniques. In most of the previous studies, two main strategies were used to perform multidimensional spatial interpolation of air temperature. First, it was accepted that all variables significantly correlated with air temperature should be incorporated into the model. Second, it was assumed that the more spatial variation of air temperature was deterministically explained, the better the quality of the spatial interpolation. The main goal of the paper was to examine both of the above-mentioned assumptions. The analysis was performed using data from 250 meteorological stations and for 69 air temperature cases aggregated at different levels: from daily means to the 10-year annual mean. Two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. Another purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models, multiple linear regression (MLR) and geographically weighted regression (GWR), as well as their extensions to the regression-kriging form, MLRK and GWRK, respectively, were examined. Stepwise regression was used to select variables for the individual models, and cross-validation was used to validate the results, with special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to rejection of both assumptions considered. Usually, including more than two or three of the most significantly
Genetic algorithm-based wide-band deterministic maximum likelihood direction finding algorithm
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2005-01-01
Wide-band direction finding is one of the important and difficult tasks in array signal processing. This paper generalizes the narrow-band deterministic maximum likelihood direction finding algorithm to the wide-band case and thus constructs an objective function, then utilizes a genetic algorithm for nonlinear global optimization. The direction of arrival is estimated without preprocessing of the array data, so the algorithm eliminates the effect of pre-estimation on the final estimate. The algorithm is applied to a uniform linear array, and extensive simulation results prove its efficacy. In the simulations, we also obtain the relation between the estimation error and the parameters of the genetic algorithm.
Directory of Open Access Journals (Sweden)
Vladimir I. Borodulin
2014-01-01
Full Text Available As is known from previous studies, deterministic turbulence (DeTu) is a post-transitional flow that is turbulent according to the generally accepted statistical characteristics but possesses, meanwhile, a significant degree of determinism, i.e., reproducibility of its instantaneous structure. It is found that the DeTu can occur in those cases when transition is caused by convective instabilities, in boundary layers in particular. The present paper is devoted to a brief description of the history of discovery of the DeTu phenomenon, as well as to some recent advances in the investigation of instantaneous and statistical properties of such turbulent boundary layer flows.
Deterministic Method for Obtaining Nominal and Uncertainty Models of CD Drives
DEFF Research Database (Denmark)
Vidal, Enrique Sanchez; Stoustrup, Jakob; Andersen, Palle;
2002-01-01
In this paper a deterministic method for obtaining the nominal and uncertainty models of the focus loop in a CD-player is presented, based on parameter identification and measurements in the focus loops of 12 actual CD drives that differ by having worst-case behaviors with respect to various… properties. The method provides a systematic way to derive a nominal average model as well as a structured multiplicative input uncertainty model, and it is demonstrated how to apply mu-theory to design a controller based on the models obtained that meets certain robust performance criteria.
Simulation of quantum computation : A deterministic event-based approach
Michielsen, K; De Raedt, K; De Raedt, H
2005-01-01
We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and
Deterministic control of ferroelastic switching in multiferroic materials
Balke, N.; Choudhury, S.; Jesse, S.; Huijben, M.; Chu, Y.-H.; Baddorf, A.P.; Chen, L.Q.; Ramesh, R.; Kalinin, S.V.
2009-01-01
Multiferroic materials showing coupled electric, magnetic and elastic orderings provide a platform to explore complexity and new paradigms for memory and logic devices. Until now, the deterministic control of non-ferroelectric order parameters in multiferroics has been elusive. Here, we demonstrate
Scheme for deterministic Bell-state-measurement-free quantum teleportation
Yang, M; Yang, Ming; Cao, Zhuo-Liang
2004-01-01
A deterministic teleportation scheme for unknown atomic states is proposed in cavity QED. The Bell state measurement is not needed in the teleportation process, and the success probability can reach 1.0. In addition, the current scheme is insensitive to the cavity decay and thermal field.
Using a satisfiability solver to identify deterministic finite state automata
Heule, M.J.H.; Verwer, S.
2009-01-01
We present an exact algorithm for identification of deterministic finite automata (DFA) which is based on satisfiability (SAT) solvers. Despite the size of the low level SAT representation, our approach seems to be competitive with alternative techniques. Our contributions are threefold: First, we p
Limiting Shapes for Deterministic Centrally Seeded Growth Models
Fey-den Boer, Anne; Redig, Frank
2007-01-01
We study the rotor router model and two deterministic sandpile models. For the rotor router model in ℤ^d, Levine and Peres proved that the limiting shape of the growth cluster is a sphere. For the other two models, only bounds in dimension 2 are known. A unified approach for these models with a
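The rotor-router growth dynamics are easy to simulate: each site carries a rotor cycling through the four neighbor directions; each new particle walks from the origin, advancing the rotor at every site it visits, until it reaches an unoccupied site, which it then occupies. A minimal sketch of the model in ℤ² (a generic illustration of the dynamics, not code from the paper):

```python
DIRS = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # E, N, W, S rotor cycle

def rotor_router_cluster(n_particles):
    """Grow a rotor-router aggregation cluster in Z^2 from the origin."""
    occupied = set()
    rotor = {}  # site -> index of the direction its rotor last used
    for _ in range(n_particles):
        pos = (0, 0)
        while pos in occupied:
            r = (rotor.get(pos, -1) + 1) % 4  # advance this site's rotor
            rotor[pos] = r
            pos = (pos[0] + DIRS[r][0], pos[1] + DIRS[r][1])
        occupied.add(pos)  # settle on the first unoccupied site
    return occupied

cluster = rotor_router_cluster(500)
print(len(cluster))  # 500: exactly one site is occupied per particle
```

The process is fully deterministic, and plotting the cluster for large particle counts shows it converging to the disk, consistent with the Levine-Peres sphere result quoted in the abstract.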
Enhanced deterministic phase retrieval using a partially developed speckle field
DEFF Research Database (Denmark)
Almoro, Percival F.; Waller, Laura; Agour, Mostafa;
2012-01-01
A technique for enhanced deterministic phase retrieval using a partially developed speckle field (PDSF) and a spatial light modulator (SLM) is demonstrated experimentally. A smooth test wavefront impinges on a phase diffuser, forming a PDSF that is directed to a 4f setup. Two defocused speckle in...
Deterministic combination of numerical and physical coastal wave models
DEFF Research Database (Denmark)
Zhang, H.W.; Schäffer, Hemming Andreas; Jakobsen, K.P.
2007-01-01
A deterministic combination of numerical and physical models for coastal waves is developed. In the combined model, a Boussinesq model MIKE 21 BW is applied for the numerical wave computations. A piston-type 2D or 3D wavemaker and the associated control system with active wave absorption provides...
Exponential Regret Bounds for Gaussian Process Bandits with Deterministic Observations
de Freitas, N.; Smola, A.J.; Zoghi, M.; Langford, J.; Pineau, J.
2012-01-01
This paper analyzes the problem of Gaussian process (GP) bandits with deterministic observations. The analysis uses a branch and bound algorithm that is related to the UCB algorithm of (Srinivas et al, 2010). For GPs with Gaussian observation noise, with variance strictly greater than zero, Srinivas
A Unit on Deterministic Chaos for Student Teachers
Stavrou, D.; Assimopoulos, S.; Skordoulis, C.
2013-01-01
A unit aiming to introduce pre-service teachers of primary education to the limited predictability of deterministic chaotic systems is presented. The unit is based on a commercial chaotic pendulum system connected with a data acquisition interface. The capabilities and difficulties in understanding the notion of limited predictability of 18…
Deterministic retrieval of complex Green's functions using hard X rays.
Vine, D J; Paganin, D M; Pavlov, K M; Uesugi, K; Takeuchi, A; Suzuki, Y; Yagi, N; Kämpfe, T; Kley, E-B; Förster, E
2009-01-30
A massively parallel deterministic method is described for reconstructing shift-invariant complex Green's functions. As a first experimental implementation, we use a single phase contrast x-ray image to reconstruct the complex Green's function associated with Bragg reflection from a thick perfect crystal. The reconstruction is in excellent agreement with a classic prediction of dynamical diffraction theory.
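In spirit, retrieving a shift-invariant Green's function is a deconvolution problem: if the measured image is a known source convolved with the Green's function, a regularized Fourier-domain division recovers it deterministically. The following is a 1-D sketch on synthetic data, intended only to illustrate that general idea; it is not the authors' x-ray procedure, and the kernel and regularization constant are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256

# Synthetic shift-invariant kernel (damped oscillation) and a known source
t = np.arange(n)
green = np.exp(-0.1 * t) * np.cos(0.3 * t)
source = rng.standard_normal(n)
measured = np.real(np.fft.ifft(np.fft.fft(source) * np.fft.fft(green)))

# Deterministic retrieval: regularized division in the Fourier domain
S = np.fft.fft(source)
eps = 1e-6  # small regularizer to avoid division by near-zero bins
G_hat = np.fft.fft(measured) * np.conj(S) / (np.abs(S) ** 2 + eps)
recovered = np.real(np.fft.ifft(G_hat))

print(np.max(np.abs(recovered - green)))  # small reconstruction error
```

Because the inversion is a closed-form, non-iterative computation per frequency bin, it parallelizes trivially, which is the sense in which such retrievals are "massively parallel" and deterministic.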
Calculating Certified Compilers for Non-deterministic Languages
DEFF Research Database (Denmark)
Bahr, Patrick
2015-01-01
Reasoning about programming languages with non-deterministic semantics entails many difficulties. For instance, to prove correctness of a compiler for such a language, one typically has to split the correctness property into a soundness and a completeness part, and then prove these two parts...
Controllability of deterministic networks with the identical degree sequence.
Ma, Xiujuan; Zhao, Haixing; Wang, Binghong
2015-01-01
Controlling complex networks is an essential problem in network science and engineering. Recent advances indicate that the controllability of a complex network depends on the network's topology. Liu, Barabási, et al. speculated that the degree distribution is one of the most important factors affecting controllability for an arbitrary complex directed network with random link weights. In this paper, we analyse the effect of the degree distribution on the controllability of deterministic networks that are unweighted and undirected. We introduce a class of deterministic networks with identical degree sequence, called (x,y)-flowers. We analyse the controllability of two such deterministic networks, the (1,3)-flower and the (2,2)-flower, in detail using exact controllability theory, and give accurate results for the minimum number of driver nodes for the two networks. In simulations, we compare the controllability of (x,y)-flower networks. Our results show that networks in the (x,y)-flower family have the same degree sequence, but their controllability is totally different. Hence the degree distribution by itself is not sufficient to characterize the controllability of deterministic networks that are unweighted and undirected.
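Exact controllability theory gives the minimum number of driver nodes as N_D = max over eigenvalues λ of [N - rank(λI - A)], i.e., the largest geometric multiplicity among the eigenvalues of the adjacency matrix. A minimal numerical sketch for small unweighted, undirected graphs, using a star graph as a hypothetical example rather than the (x,y)-flower networks of the paper:

```python
import numpy as np

def min_driver_nodes(adj):
    """Exact controllability: N_D = max over eigenvalues lambda of
    N - rank(lambda*I - A), i.e., the largest eigenvalue multiplicity."""
    n = adj.shape[0]
    nd = 1  # at least one driver node is always required
    for lam in np.linalg.eigvalsh(adj):
        rank = np.linalg.matrix_rank(lam * np.eye(n) - adj, tol=1e-8)
        nd = max(nd, n - rank)
    return nd

# Star graph K_{1,3}: eigenvalue 0 has multiplicity 2, so N_D = 2
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
print(min_driver_nodes(A))  # 2
```

Applying this computation to networks with identical degree sequences but different eigenvalue multiplicities is exactly how one can see, as the abstract reports, that the degree distribution alone does not determine controllability.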
Line and lattice networks under deterministic interference models
Goseling, Jasper; Gastpar, Michael; Weber, Jos H.
2011-01-01
Capacity bounds are compared for four different deterministic models of wireless networks, representing four different ways of handling broadcast and superposition in the physical layer. In particular, the transport capacity under a multiple unicast traffic pattern is studied for a 1-D network of re
Deterministic or stochastic choices in retinal neuron specification
Chen, Zhenqing; LI Xin; DESPLAN, CLAUDE
2012-01-01
There are two views on vertebrate retinogenesis: a deterministic model dependent on fixed lineages, and a stochastic model in which choices of division modes and cell fates cannot be predicted. In this issue of Neuron, He et al. (2012) address this question in zebrafish using live imaging and mathematical modeling.
Deterministic event-based simulation of quantum phenomena
De Raedt, K; De Raedt, H; Michielsen, K
2005-01-01
We propose and analyse simple deterministic algorithms that can be used to construct machines that have primitive learning capabilities. We demonstrate that locally connected networks of these machines can be used to perform blind classification on an event-by-event basis, without storing the inform
Deterministic teleportation using single-photon entanglement as a resource
DEFF Research Database (Denmark)
Björk, Gunnar; Laghaout, Amine; Andersen, Ulrik L.
2012-01-01
We outline a proof that teleportation with a single particle is, in principle, just as reliable as with two particles. We thereby hope to dispel the skepticism surrounding single-photon entanglement as a valid resource in quantum information. A deterministic Bell-state analyzer is proposed which...
Demonstration of deterministic and high fidelity squeezing of quantum information
DEFF Research Database (Denmark)
Yoshikawa, J-I.; Hayashi, T.; Akiyama, T.
2007-01-01
By employing a recent proposal [R. Filip, P. Marek, and U.L. Andersen, Phys. Rev. A 71, 042308 (2005)] we experimentally demonstrate a universal, deterministic, and high-fidelity squeezing transformation of an optical field. It relies only on linear optics, homodyne detection, feedforward, and an...
Minimal Exit Trajectories with Optimum Correctional Manoeuvres
Directory of Open Access Journals (Sweden)
T. N. Srivastava
1980-10-01
Minimal exit trajectories with optimum correctional manoeuvres for a rocket between two coplanar, noncoaxial elliptic orbits in an inverse-square gravitational field have been investigated. The case of trajectories with no correctional manoeuvres has been analysed. Finally, minimal exit trajectories through specified orbital terminals are discussed, and the problem of ref. (2) is derived as a particular case.
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
The increasing technology of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and the Fuzzy Clustering. The DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
The integrated model for solving the single-period deterministic inventory routing problem
Rahim, Mohd Kamarul Irwan Abdul; Abidin, Rahimi; Iteng, Rosman; Lamsali, Hendrik
2016-08-01
This paper discusses the problem of efficiently managing inventory and routing in a two-level supply chain system. Vendor Managed Inventory (VMI) is a policy that integrates decisions between a supplier and his customers. We assume that the demand at each customer is stationary and that the warehouse implements VMI. The objective of this paper is to minimize the inventory and transportation costs of the customers in a two-level supply chain. The problem is to determine the delivery quantities, delivery times, and routes to the customers in the single-period deterministic inventory routing problem (SP-DIRP) system. As a result, a linear mixed-integer program is developed for solving the SP-DIRP problem.
Lee, Keun-Hyeun; Jeong, Han-Sol; Rhee, Harin
2014-01-01
Gongjin-dan (GJD) is a traditional formula that is widely used in Korea and China; it has been used in China since 1345 AD to improve the circulation between the kidneys and the heart and to prevent all diseases. However, its adverse effects have not yet been reported. We present a patient with minimal change disease and focal tubulointerstitial nephritis associated with GJD. A 72-year-old man visited the clinic for generalized edema 20 days after starting GJD. His serum albumin level was low and nephrotic-range proteinuria was detected. A kidney biopsy showed minimal change disease and acute tubulointerstitial nephritis. After stopping GJD, a spontaneous complete remission was achieved. We discuss the possible pathogenesis of GJD-induced minimal change disease and review the adverse effects of GJD's ingredients and of traditional Chinese medicines that can induce proteinuria. We report a new adverse effect of GJD, which might induce increased IL-13 production and an allergic response, leading to minimal change disease and focal tubulointerstitial nephritis.
Increasingly minimal bias routing
Energy Technology Data Exchange (ETDEWEB)
Bataineh, Abdulla; Court, Thomas; Roweth, Duncan
2017-02-21
A system and algorithm configured to generate diversity at the traffic source so that packets are uniformly distributed over all of the available paths, but to increase the likelihood of taking a minimal path with each hop the packet takes. This is achieved by configuring routing biases so as to prefer non-minimal paths at the injection point, but increasingly prefer minimal paths as the packet proceeds, referred to herein as Increasing Minimal Bias (IMB).
Minimal distances between SCFTs
Energy Technology Data Exchange (ETDEWEB)
Buican, Matthew [Department of Physics and Astronomy, Rutgers University,Piscataway, NJ 08854 (United States)
2014-01-28
We study lower bounds on the minimal distance in theory space between four-dimensional superconformal field theories (SCFTs) connected via broad classes of renormalization group (RG) flows preserving various amounts of supersymmetry (SUSY). For N=1 RG flows, the ultraviolet (UV) and infrared (IR) endpoints of the flow can be parametrically close. On the other hand, for RG flows emanating from a maximally supersymmetric SCFT, the distance to the IR theory cannot be arbitrarily small regardless of the amount of (non-trivial) SUSY preserved along the flow. The case of RG flows from N=2 UV SCFTs is more subtle. We argue that for RG flows preserving the full N=2 SUSY, there are various obstructions to finding examples with parametrically close UV and IR endpoints. Under reasonable assumptions, these obstructions include: unitarity, known bounds on the c central charge derived from associativity of the operator product expansion, and the central charge bounds of Hofman and Maldacena. On the other hand, for RG flows that break N=2→N=1, it is possible to find IR fixed points that are parametrically close to the UV ones. In this case, we argue that if the UV SCFT possesses a single stress tensor, then such RG flows excite of order all the degrees of freedom of the UV theory. Furthermore, if the UV theory has some flavor symmetry, we argue that the UV central charges should not be too large relative to certain parameters in the theory.
HCPT Minimally Invasive Surgery in the Treatment of Anal Fistula in 40 Cases
Institute of Scientific and Technical Information of China (English)
王宏波; 闫树勋; 薄超刚; 周秀芳; 谢桂珍; 王艳梅
2013-01-01
Objective: To investigate the methods and clinical efficacy of the multi-functional anorectal therapeutic instrument (HCPT) in the treatment of anal fistula. Methods: 40 patients requiring anal fistula surgery were treated with HCPT minimally invasive surgery. We recorded operation time, bleeding on defecation, and recovery time, with clinical follow-up for one year. Results: Operation time was 10-20 min (mean 16 min). There was no intraoperative bleeding and no postoperative pain; patients defecated 24 h after surgery without bleeding or pain. Recovery time was 7-11 days (mean 9 days), and the cure rate was 100%. All patients returned to normal work, study, and life, with no complications, no cases of infection, and normal anal function. No recurrence occurred during the 1-year follow-up, achieving a radical cure. Conclusion: Treatment of anal fistula with the multi-functional anorectal therapeutic instrument is simple, with short operation time, little bleeding, and no pain; patients do not need hospitalization, the cost is low, recovery is quick, and there are no obvious scars, sequelae, or complications after recovery. It is effective and worthy of popularization and application.
Confined Crystal Growth in Space. Deterministic vs Stochastic Vibroconvective Effects
Ruiz, Xavier; Bitlloch, Pau; Ramirez-Piscina, Laureano; Casademunt, Jaume
The analysis of the correlations between characteristics of the acceleration environment and the quality of crystalline materials grown in microgravity remains an open and interesting question. Acceleration disturbances in space environments usually give rise to effective gravity pulses, gravity pulse trains of finite duration, quasi-steady accelerations or g-jitters. To quantify these disturbances, deterministic translational plane-polarized signals have largely been used in the literature [1]. In the present work, we take an alternative approach which models g-jitters in terms of a stochastic process in the form of the so-called narrow-band noise, designed to capture the main statistical properties of realistic g-jitters. In particular, we compare their effects to those of single-frequency disturbances. The crystalline quality has been characterized, following previous analyses, in terms of two parameters, the longitudinal and the radial segregation coefficients. The first one averages the dopant distribution transversally, providing continuous longitudinal information on the degree of segregation along the growth process. The radial segregation characterizes the degree of lateral non-uniformity of the dopant at the solid-liquid interface at each instant of growth. To complete the description, and because heat flux fluctuations at the interface have a direct impact on crystal growth quality (growth striations), the time dependence of a Nusselt number associated with the growing interface has also been monitored. For realistic g-jitters acting orthogonally to the thermal gradient, the longitudinal segregation remains practically unperturbed in all simulated cases. Also, the Nusselt number is not significantly affected by the noise. On the other hand, radial segregation, despite its low magnitude, exhibits a peculiar low-frequency response in all realizations. [1] X. Ruiz, "Modelling of the influence of residual gravity on the segregation in
Bertolami, Orfeu; Páramos, Jorge
2011-01-01
In this work, one shows that a specific non-minimal coupling between the scalar curvature and matter can mimic the dark matter component of relaxed galaxy clusters. For this purpose, one assesses the Abell Cluster A586, a massive strong-lensing nearby relaxed cluster of galaxies in virial equilibrium, where direct mass estimates are possible. The total density, which generally follows a cusped profile and reveals a very small baryonic component, can be effectively described within this framework.
Energy Technology Data Exchange (ETDEWEB)
Giffard, F.X
2000-05-19
In the field of reactor and fuel cycle physics, particle transport plays an important role. Neutronic design, operation and evaluation calculations of nuclear systems make use of large and powerful computer codes. However, current limitations in terms of computer resources make it necessary to introduce simplifications and approximations in order to keep calculation time and cost within reasonable limits. Two different types of methods are available in these codes. The first is the deterministic method, which is applicable in most practical cases but requires approximations. The other is the Monte Carlo method, which does not make these approximations but generally requires exceedingly long running times. The main motivation of this work is to investigate the possibility of a combined use of the two methods in such a way as to retain their advantages while avoiding their drawbacks. Our work has mainly focused on the speed-up of 3-D continuous-energy Monte Carlo calculations (TRIPOLI-4 code) by means of an optimized biasing scheme derived from importance maps obtained from the deterministic code ERANOS. The application of this method to two different practical shielding-type problems has demonstrated its efficiency: speed-up factors of 100 have been reached. In addition, the method offers the advantage of being easily implemented, as it is not very sensitive to the choice of the importance mesh grid. It has also been demonstrated that significant speed-ups can be achieved by this method in the case of coupled neutron-gamma transport problems, provided that the interdependence of the neutron and photon importance maps is taken into account. Complementary studies are necessary to tackle a problem brought out by this work, namely undesirable jumps in the Monte Carlo variance estimates. (author)
The road to deterministic matrices with the restricted isometry property
Bandeira, Afonso S; Mixon, Dustin G; Wong, Percy
2012-01-01
The restricted isometry property (RIP) is a well-known matrix condition that provides state-of-the-art reconstruction guarantees for compressed sensing. While random matrices are known to satisfy this property with high probability, deterministic constructions have found less success. In this paper, we consider various techniques for demonstrating RIP deterministically, some popular and some novel, and we evaluate their performance. In evaluating some techniques, we apply random matrix theory and inadvertently find a simple alternative proof that certain random matrices are RIP. Later, we propose a particular class of matrices as candidates for being RIP, namely, equiangular tight frames (ETFs). Using the known correspondence between real ETFs and strongly regular graphs, we investigate certain combinatorial implications of a real ETF being RIP. Specifically, we give probabilistic intuition for a new bound on the clique number of Paley graphs of prime order, and we conjecture that the corresponding ETFs are R...
Deterministic chaos, fractals and quantumlike mechanics in atmospheric flows
Selvam, A M
1990-01-01
The complex spatiotemporal patterns of atmospheric flows that result from the cooperative existence of fluctuations ranging in size from millimetres to thousands of kilometres are found to exhibit long-range spatial and temporal correlations. These correlations are manifested as the self-similar fractal geometry of the global cloud cover pattern and the inverse power-law form of the atmospheric eddy energy spectrum. Such long-range spatiotemporal correlations are ubiquitous in extended natural dynamical systems and are signatures of deterministic chaos or self-organized criticality. In this paper, a cell dynamical system model for atmospheric flows is developed by consideration of microscopic domain eddy dynamical processes. This nondeterministic model enables formulation of a simple closed set of governing equations for the prediction and description of observed atmospheric flow structure characteristics as follows. The strange-attractor design of the field of deterministic chaos in atmospheric flows consis...
Deterministic error correction for nonlocal spatial-polarization hyperentanglement.
Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu
2016-02-10
Hyperentanglement is an effective quantum source for quantum communication networks owing to its high capacity, low loss rate, and ability to teleport the state of a quantum particle completely. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of errors in the spatial entanglement during transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others in long-distance quantum communication.
On the secure obfuscation of deterministic finite automata.
Energy Technology Data Exchange (ETDEWEB)
Anderson, William Erik
2008-06-01
In this paper, we show how to construct secure obfuscation for Deterministic Finite Automata, assuming non-uniformly strong one-way functions exist. We revisit the software protection approaches originally proposed by [5, 10, 12, 17] and revise them to the current obfuscation setting of Barak et al. [2]. Under this model, we introduce an efficient oracle that retains some 'small' secret about the original program. Using this secret, we can construct an obfuscator and two-party protocol that securely obfuscates Deterministic Finite Automata against malicious adversaries. The security of this model retains the strong 'virtual black box' property originally proposed in [2] while incorporating the stronger condition of dependent auxiliary inputs in [15]. Additionally, we show that our techniques remain secure under concurrent self-composition with adaptive inputs and that Turing machines are obfuscatable under this model.
Deterministic chaos at the ocean surface: applications and interpretations
Directory of Open Access Journals (Sweden)
A. J. Palmer
1998-01-01
Ocean surface, grazing-angle radar backscatter data from two separate experiments, one of which provided coincident time series of measured surface winds, were found to exhibit signatures of deterministic chaos. Evidence is presented that the lowest dimensional underlying dynamical system responsible for the radar backscatter chaos is that which governs the surface wind turbulence. Block-averaging time was found to be an important parameter for determining the degree of determinism in the data as measured by the correlation dimension, by the performance of an artificial neural network in retrieving wind and stress from the radar returns, and by radar detection of an ocean internal wave. The correlation dimensions are lowered and the performance of the deterministic retrieval and detection algorithms is improved by averaging out the higher dimensional surface wave variability in the radar returns.
Deterministic and Probabilistic Approach in Primality Checking for RSA Algorithm
Directory of Open Access Journals (Sweden)
Sanjoy Das
2013-04-01
The RSA cryptosystem, invented by Ron Rivest, Adi Shamir and Len Adleman, was first publicized in the August 1977 issue of Scientific American [1]. The security level of this algorithm very much depends on two large prime numbers [2]. In this paper two distinct approaches to primality checking are dealt with: a deterministic approach and a probabilistic approach. For the deterministic approach, a modified trial division is chosen; for the probabilistic approach, the Miller-Rabin algorithm is considered. The different kinds of attacks on RSA and their remedies are also discussed, including chosen-ciphertext attacks, the short private-key-exponent attack and the frequency attack. Apart from these attacks, we discuss how to choose the primes for the RSA algorithm. The time complexity is demonstrated for the various algorithms implemented and compared with others. Finally, future modifications and expectations arising out of the current limitations are stated at the end.
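The two primality-checking approaches contrasted in the abstract above can be sketched as follows (a minimal illustration, not the paper's implementation; the 40-round default is a common convention, not taken from the paper):

```python
import random

def trial_division(n):
    """Deterministic check: divide by every odd number up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def miller_rabin(n, rounds=40):
    """Probabilistic check: a composite n survives one round with
    probability at most 1/4, so `rounds` rounds give error <= 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True
```

Trial division is exact but exponential in the bit length of n, which is why RSA-size candidates are screened with Miller-Rabin instead.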
Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TN; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN
2012-03-27
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.
Deterministic teleportation using single-photon entanglement as a resource
Björk, Gunnar; Andersen, Ulrik L
2011-01-01
We outline a proof that teleportation with a single particle is in principle just as reliable as with two particles. We thereby hope to dispel the skepticism surrounding single-photon entanglement as a valid resource in quantum information. A deterministic Bell state analyzer is proposed which uses only classical resources, namely coherent states, a Kerr non-linearity, and a two-level atom.
Deterministic and Stochastic Study of Wind Farm Harmonic Currents
DEFF Research Database (Denmark)
Sainz, Luis; Mesas, Juan Jose; Teodorescu, Remus;
2010-01-01
Wind farm harmonic emissions are a well-known power quality problem, but little data based on actual wind farm measurements are available in the literature. In this paper, harmonic emissions of an 18 MW wind farm are investigated using extensive measurements, and the deterministic and stochastic characterization of wind farm harmonic currents is analyzed. Specific issues addressed in the paper include the harmonic variation with the wind farm operating point and the random characteristics of their magnitude and phase angle.
Testing for deterministic monetary chaos: Metric and topological diagnostics
Energy Technology Data Exchange (ETDEWEB)
Barkoulas, John T. [Department of Finance and Quantitative Analysis, Georgia Southern University, Statesboro, GA 30460 (United States)], E-mail: jbarkoul@georgiasouthern.edu
2008-11-15
The evidence of deterministic chaos in monetary aggregates tends to be contradictory in the literature. We revisit the issue of monetary chaos by applying tools based on both the metric (correlation dimension and Lyapunov exponents) and topological (recurrence plots) approaches to chaos. For simple-sum and divisia monetary aggregates over an expanded sample period, the empirical evidence from both approaches is negative for monetary chaotic dynamics.
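The metric diagnostics named in this abstract can be illustrated with a minimal Grassberger-Procaccia correlation-sum sketch (an illustration on a logistic-map series, not the paper's code; the embedding dimension, delay, and radii are chosen arbitrarily):

```python
import numpy as np

def correlation_sum(series, dim, delay, r):
    """C(r): fraction of pairs of delay-embedded points within distance r.
    The correlation dimension is the slope of log C(r) vs log r over a
    scaling region (Grassberger-Procaccia)."""
    n = len(series) - (dim - 1) * delay
    pts = np.column_stack([series[i * delay : i * delay + n] for i in range(dim)])
    dists = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    iu = np.triu_indices(n, k=1)
    return (dists[iu] < r).mean()

# Example series: the logistic map in its chaotic regime, x -> 4x(1-x).
x = np.empty(500)
x[0] = 0.3
for i in range(499):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

radii = np.logspace(-1.5, -0.5, 6)
C = [correlation_sum(x, dim=2, delay=1, r=r) for r in radii]
slope = np.polyfit(np.log(radii), np.log(C), 1)[0]  # rough dimension estimate
```

For monetary data, as in the paper, a low and stable slope across embedding dimensions would be the signature of low-dimensional determinism; its absence is consistent with the paper's negative finding.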
Deterministic-Probabilistic Approach for Determining the Elasticity Moduli of Steels
Directory of Open Access Journals (Sweden)
Popov Alexander
2015-03-01
The known deterministic relationships for estimating the elastic characteristics of materials do not account well for the significant variability of these parameters in solids. Therefore, a probabilistic approach to determining the elastic moduli, treated as random values, is given, which increases the accuracy of the obtained results. By ultrasonic testing, a non-destructive evaluation of the structure and properties of the investigated steels has been made.
Uniform Deterministic Discrete Method for Three Dimensional Systems
Institute of Scientific and Technical Information of China (English)
无
1997-01-01
For radiative direct exchange areas in three-dimensional systems, the Uniform Deterministic Discrete Method (UDDM) was adopted. The spherical-surface dividing method for a sending area element and the regular icosahedron for a sending volume element can handle the computation of direct exchange areas for any kind of zone pair. Numerical examples of direct exchange areas in three-dimensional systems with nonhomogeneous attenuation coefficients indicated that the UDDM can achieve very high numerical accuracy.
Deterministic chaos control in neural networks on various topologies
Neto, A. J. F.; Lima, F. W. S.
2017-01-01
Using numerical simulations, we study the control of deterministic chaos in neural networks on various topologies like Voronoi-Delaunay, Barabási-Albert, Small-World networks and Erdös-Rényi random graphs by "pinning" the state of a "special" neuron. We show that the chaotic activity of the networks or graphs, when control is on, can become constant or periodic.
Minimally Invasive Video-Assisted versus Minimally Invasive Nonendoscopic Thyroidectomy
Directory of Open Access Journals (Sweden)
Zdeněk Fík
2014-01-01
Minimally invasive video-assisted thyroidectomy (MIVAT) and minimally invasive nonendoscopic thyroidectomy (MINET) represent well accepted and reproducible techniques developed with the main goal to improve cosmetic outcome, accelerate healing, and increase patient’s comfort following thyroid surgery. Between 2007 and 2011, a prospective nonrandomized study of patients undergoing minimally invasive thyroid surgery was performed to compare advantages and disadvantages of the two different techniques. There were no significant differences in the length of incision to perform surgical procedures. Mean duration of hemithyroidectomy was comparable in both groups, but it was more time consuming to perform total thyroidectomy by MIVAT. There were more patients undergoing MIVAT procedures without active drainage in the postoperative course and we also could see a trend for less pain in the same group. This was paralleled by statistically significant decreased administration of both opiates and nonopiate analgesics. We encountered two cases of recurrent laryngeal nerve palsies in the MIVAT group only. MIVAT and MINET represent a safe and feasible alternative to conventional thyroid surgery in selected cases, and this prospective study has shown minimal differences between these two techniques.
Minimizing Costs Can Be Costly
Directory of Open Access Journals (Sweden)
Rasmus Rasmussen
2010-01-01
A quite common practice, even in the academic literature, is to simplify a decision problem and model it as a cost-minimizing problem. In fact, some types of models have been standardized as minimization problems, like Quadratic Assignment Problems (QAPs), where a maximization formulation would be treated as a “generalized” QAP and not be solvable by many of the specially designed software packages for QAP. Ignoring revenues when modeling a decision problem works only if costs can be separated from the decisions influencing revenues. More often than we think this is not the case, and minimizing costs will not lead to maximized profit. This will be demonstrated using spreadsheets to solve a small example. The example is also used to demonstrate other pitfalls in network models: the inability to generally balance the problem or allocate costs in advance, and the tendency to anticipate a specific type of solution and thereby make constraints too limiting when formulating the problem.
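The core pitfall, that minimizing cost need not maximize profit when the decision also drives revenue, can be seen in a toy example (illustrative numbers, not the article's spreadsheet model):

```python
# One unit of product can be routed to market A or market B; the routing
# choice fixes both the transport cost and the sales revenue, so the two
# quantities cannot be separated.
options = {
    "A": {"cost": 4.0, "revenue": 10.0},  # profit 6.0
    "B": {"cost": 2.0, "revenue": 5.0},   # profit 3.0
}

cheapest = min(options, key=lambda m: options[m]["cost"])
best = max(options, key=lambda m: options[m]["revenue"] - options[m]["cost"])
# Cost minimization picks B, but profit maximization picks A.
```

Only when revenue is identical across feasible decisions do the two objectives coincide, which is exactly the separability condition the abstract states.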
Universal quantification for deterministic chaos in dynamical systems
Selvam, A M
1993-01-01
A cell dynamical system model for deterministic chaos enables precise quantification of the round-off error growth, i.e., deterministic chaos in digital computer realizations of mathematical models of continuum dynamical systems. The model predicts the following: (a) The phase-space trajectory (strange attractor), when resolved as a function of the computer accuracy, has intrinsic logarithmic spiral curvature with the quasiperiodic Penrose tiling pattern for the internal structure. (b) The universal constant for deterministic chaos is identified as the steady-state fractional round-off error k for each computational step and is equal to 1/τ² (≈0.382), where τ is the golden mean. (c) The Feigenbaum universal constants a and d are functions of k and, further, the expression 2a² = πd quantifies the steady-state ordered emergence of the fractal geometry of the strange attractor. (d) The power spectra of chaotic dynamical systems follow the universal and unique inverse power-law form of the statist...
Deterministic Identity Testing of Read-Once Algebraic Branching Programs
Jansen, Maurice; Sarma, Jayalal
2009-01-01
In this paper we study polynomial identity testing of sums of $k$ read-once algebraic branching programs ($\\Sigma_k$-RO-ABPs), generalizing the work in (Shpilka and Volkovich 2008,2009), who considered sums of $k$ read-once formulas ($\\Sigma_k$-RO-formulas). We show that $\\Sigma_k$-RO-ABPs are strictly more powerful than $\\Sigma_k$-RO-formulas, for any $k \\leq \\lfloor n/2\\rfloor$, where $n$ is the number of variables. We obtain the following results: 1) Given free access to the RO-ABPs in the sum, we get a deterministic algorithm that runs in time $O(k^2n^7s) + n^{O(k)}$, where $s$ bounds the size of any largest RO-ABP given on the input. This implies we have a deterministic polynomial time algorithm for testing whether the sum of a constant number of RO-ABPs computes the zero polynomial. 2) Given black-box access to the RO-ABPs computing the individual polynomials in the sum, we get a deterministic algorithm that runs in time $k^2n^{O(\\log n)} + n^{O(k)}$. 3) Finally, given only black-box access to the polyn...
Non-equilibrium Thermodynamics of Piecewise Deterministic Markov Processes
Faggionato, A.; Gabrielli, D.; Ribezzi Crivellari, M.
2009-10-01
We consider a class of stochastic dynamical systems, called piecewise deterministic Markov processes, with states ( x, σ)∈Ω×Γ, Ω being a region in ℝ d or the d-dimensional torus, Γ being a finite set. The continuous variable x follows a piecewise deterministic dynamics, the discrete variable σ evolves by a stochastic jump dynamics and the two resulting evolutions are fully-coupled. We study stationarity, reversibility and time-reversal symmetries of the process. Increasing the frequency of the σ-jumps, the system behaves asymptotically as deterministic and we investigate the structure of its fluctuations (i.e. deviations from the asymptotic behavior), recovering in a non Markovian frame results obtained by Bertini et al. (Phys. Rev. Lett. 87(4):040601, 2001; J. Stat. Phys. 107(3-4):635-675, 2002; J. Stat. Mech. P07014, 2007; Preprint available online at http://www.arxiv.org/abs/0807.4457, 2008), in the context of Markovian stochastic interacting particle systems. Finally, we discuss a Gallavotti-Cohen-type symmetry relation with involution map different from time-reversal.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Energy Technology Data Exchange (ETDEWEB)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
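The role of the dominance ratio and the idea behind fission-matrix acceleration can be sketched on a toy eigenproblem (illustrative numbers; not the thesis code, which works with Monte Carlo tallies in MCNP):

```python
import numpy as np

def source_iteration(F, iters):
    """Unaccelerated power iteration on the fission source: the error decays
    like the dominance ratio |lambda_2/lambda_1| per iteration, so it stalls
    when that ratio is close to 1."""
    s = np.full(F.shape[0], 1.0 / F.shape[0])
    for _ in range(iters):
        s = F @ s
        s /= s.sum()  # renormalize the source each generation
    return s

def fission_matrix_mode(F):
    """Acceleration idea: once a fission matrix F has been estimated (e.g.
    from tallies), obtain the fundamental mode directly from the small
    eigenproblem instead of waiting for power iteration to converge."""
    w, v = np.linalg.eig(F)
    s = np.abs(v[:, np.argmax(w.real)].real)
    return s / s.sum()

# Toy 2-region "fission matrix" with nearly equal eigenvalues
# (high dominance ratio), the regime where acceleration pays off.
F = np.array([[1.00, 0.05],
              [0.05, 0.98]])
```

The thesis's FDSA method plays the same role with a diffusion operator in place of the tallied fission matrix, and the hybrid method couples the two solvers.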
Locally minimal topological groups
Außenhofer, Lydia; Chasco, María Jesús; Dikranjan, Dikran; Domínguez, Xabier
2009-01-01
A Hausdorff topological group $(G,\tau)$ is called locally minimal if there exists a neighborhood $U$ of 0 in $\tau$ such that $U$ fails to be a neighborhood of zero in any Hausdorff group topology on $G$ which is strictly coarser than $\tau.$ Examples of locally minimal groups are all subgroups of Banach-Lie groups, all locally compact groups and all minimal groups. Motivated by the fact that locally compact NSS groups are Lie groups, we study the connection between local minimality and the ...
Galvan-Sosa, M.; Portilla, J.; Hernandez-Rueda, J.; Siegel, J.; Moreno, L.; Ruiz de la Cruz, A.; Solis, J.
2014-02-01
Femtosecond laser pulse temporal shaping techniques have led to important advances in different research fields like photochemistry, laser physics, non-linear optics, biology, or materials processing. This success is partly related to the use of optimal control algorithms. Due to the high dimensionality of the solution and control spaces, evolutionary algorithms are extensively applied and, among them, genetic ones have reached the status of a standard adaptive strategy. Still, their use is normally accompanied by a reduction of the problem complexity by different modalities of parameterization of the spectral phase. Exploiting Rabitz and co-authors' ideas about the topology of quantum landscapes, in this work we analyze the optimization of two different problems under a deterministic approach, using a multiple one-dimensional search (MODS) algorithm. In the first case we explore the determination of the optimal phase mask required for generating arbitrary temporal pulse shapes and compare the performance of the MODS algorithm to the standard iterative Gerchberg-Saxton algorithm. Based on the good performance achieved, the same method has been applied for optimizing two-photon absorption starting from temporally broadened laser pulses, or from laser pulses temporally and spectrally distorted by non-linear absorption in air, obtaining similarly good results which confirm the validity of the deterministic search approach.
Algorithms for Deterministic Call Admission Control of Pre-stored VBR Video Streams
Directory of Open Access Journals (Sweden)
Christos Tryfonas
2009-08-01
We examine the problem of accepting a new request for a pre-stored VBR video stream that has been smoothed using any of the smoothing algorithms found in the literature. The output of these algorithms is a piecewise constant-rate schedule for a Variable Bit-Rate (VBR) stream. The schedule guarantees that the decoder buffer does not overflow or underflow. The problem addressed in this paper is the determination of the minimal time displacement of each newly requested VBR stream so that it can be accommodated by the network and/or the video server without overbooking the committed traffic. We prove that this call-admission control problem for multiple requested VBR streams is NP-complete and inapproximable within a constant factor, by reduction from the VERTEX COLORING problem. We also present a deterministic morphology-sensitive algorithm that calculates the minimal time displacement of a VBR stream request. The complexity of the proposed algorithm, along with the experimental results we provide, indicates that the proposed algorithm is suitable for real-time determination of the time displacement parameter during the call admission phase.
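The admission decision itself can be caricatured in a few lines: given a piecewise-constant committed-traffic schedule and the new stream's smoothed schedule, scan candidate displacements until the combined rate never exceeds link capacity. This discrete-slot brute force is only a sketch of the problem statement, not the morphology-sensitive algorithm of the paper; all numbers are invented.

```python
def minimal_displacement(committed, new, capacity):
    """Smallest slot shift d >= 0 such that adding `new` delayed by d
    never exceeds `capacity` (toy discrete-time sketch)."""
    T = len(committed)
    for d in range(T + 1):
        ok = True
        for t, rate in enumerate(new):
            idx = t + d
            base = committed[idx] if idx < T else 0.0
            if base + rate > capacity:
                ok = False
                break
        if ok:
            return d
    return None

committed = [8, 8, 6, 4, 2, 2, 0, 0]   # committed traffic per slot (invented)
new       = [3, 3, 5, 5]               # smoothed schedule of the new stream
print(minimal_displacement(committed, new, capacity=10))
```

The brute-force scan is quadratic in the schedule length; the point of the paper's algorithm is to exploit the morphology of the schedules to find the same displacement faster.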
Energy Technology Data Exchange (ETDEWEB)
Farmer, J.C.
1997-10-01
An integrated predictive model is being developed to account for the effects of localized environmental conditions in crevices on the initiation and propagation of pits. A deterministic calculation is used to estimate the accumulation of hydrogen ions (pH suppression) in the crevice solution due to the hydrolysis of dissolved metals. Pit initiation and growth within the crevice is then dealt with by either a probabilistic model, or an equivalent deterministic model. Ultimately, the role of intergranular corrosion will have to be considered. While the strategy presented here is very promising, the integrated model is not yet ready for precise quantitative predictions. Empirical expressions for the rate of penetration based upon experimental crevice corrosion data can be used in the interim period, until the integrated model can be refined. Bounding calculations based upon such empirical expressions can provide important insight into worst-case scenarios.
Energy Technology Data Exchange (ETDEWEB)
Le Bot, O., E-mail: lebotol@gmail.com [Univ. Grenoble Alpes, GIPSA-Lab, F-38000 Grenoble (France); CNRS, GIPSA-Lab, F-38000 Grenoble (France); Mars, J.I. [Univ. Grenoble Alpes, GIPSA-Lab, F-38000 Grenoble (France); CNRS, GIPSA-Lab, F-38000 Grenoble (France); Gervaise, C. [Univ. Grenoble Alpes, GIPSA-Lab, F-38000 Grenoble (France); CNRS, GIPSA-Lab, F-38000 Grenoble (France); Chaire CHORUS, Foundation of Grenoble Institute of Technology, 46 Avenue Félix Viallet, 38031 Grenoble Cedex 1 (France)
2015-10-23
This Letter proposes an algorithm to detect an unknown deterministic signal hidden in additive white Gaussian noise. The detector is based on recurrence analysis. It compares the distribution of the similarity matrix coefficients of the measured signal with an analytic expression of the distribution expected in the noise-only case. This comparison is achieved using divergence measures. Performance analysis based on the receiver operating characteristics shows that the proposed detector outperforms the energy detector, giving a probability of detection 10% to 50% higher, and has a similar performance to that of a sub-optimal filter detector. - Highlights: • We model the distribution of the similarity matrix coefficients of a Gaussian noise. • We use divergence measures for goodness-of-fit test between a model and measured data. • We distinguish deterministic signal and Gaussian noise with similarity matrix analysis. • Similarity matrix analysis outperforms energy detector.
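A stripped-down version of the idea can be sketched as follows: for white Gaussian noise the similarity-matrix coefficients have a known distribution, so a measured signal whose empirical recurrence rate deviates from the analytic noise-only value is flagged. This toy thresholds scalar samples and compares raw rates rather than applying the Letter's divergence measures; the signal and all parameters are invented.

```python
import math, random

def recurrence_rate(x, eps):
    """Fraction of sample pairs within eps of each other (toy similarity matrix)."""
    n = len(x)
    hits = sum(1 for i in range(n) for j in range(i + 1, n)
               if abs(x[i] - x[j]) < eps)
    return hits / (n * (n - 1) / 2)

random.seed(0)
n, sigma, eps = 800, 1.0, 0.5
# Noise-only model: x_i - x_j ~ N(0, 2*sigma^2), hence
# P(|x_i - x_j| < eps) = erf(eps / (2*sigma)).
expected = math.erf(eps / (2 * sigma))

noise  = [random.gauss(0, sigma) for _ in range(n)]
signal = [math.sin(0.2 * t) + random.gauss(0, sigma) for t in range(n)]

# A hidden deterministic component pulls the empirical rate away from `expected`.
print(abs(recurrence_rate(noise, eps) - expected),
      abs(recurrence_rate(signal, eps) - expected))
```

The deterministic component spreads the pairwise differences, so the measured rate drops below the noise-only prediction; thresholding that deviation is the skeleton of the detector.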
Khimshiashvili, G.; Siersma, D.
2001-01-01
We describe the structure of minimal round functions on closed surfaces and three-folds. The minimal possible number of critical loops is determined and typical non-equisingular round function germs are interpreted in the spirit of isolated line singularities. We also discuss a version of Lusternik-
Lind, Marianne
2007-01-01
The article explores aspects of the role of prosody as a contextualization cue in aphasic conversation through auditory and acoustic analysis of an aphasic speaker's use of pitch variation in responses to closed yes/no-requests. The results reveal two prosodic realizations of 'yes' and 'no' contextualizing different kinds of responses: a flat realization with no prolongation and minimal pauses, signalling decisiveness, and a realization with movement in pitch, prolongation and preceding pauses, signalling indecisiveness. The analysis also shows how the aphasic uses a particular realization manipulatively for interactional purposes. The study illustrates the vital role that seemingly unimportant details play in the co-constructive process of creating meaning in interaction. The results indicate an area of competence that seems undisturbed in this speaker.
Energy Technology Data Exchange (ETDEWEB)
Smekens, F; Freud, N; Letang, J M; Babot, D [CNDRI (Nondestructive Testing using Ionizing Radiations) Laboratory, INSA-Lyon, 69621 Villeurbanne Cedex (France); Adam, J-F; Elleaume, H; Esteve, F [INSERM U-836, Equipe 6 ' Rayonnement Synchrotron et Recherche Medicale' , Institut des Neurosciences de Grenoble (France); Ferrero, C; Bravin, A [European Synchrotron Radiation Facility, Grenoble (France)], E-mail: francois.smekens@insa-lyon.fr
2009-08-07
A hybrid approach, combining deterministic and Monte Carlo (MC) calculations, is proposed to compute the distribution of dose deposited during stereotactic synchrotron radiation therapy treatment. The proposed approach divides the computation into two parts: (i) the dose deposited by primary radiation (coming directly from the incident x-ray beam) is calculated in a deterministic way using ray casting techniques and energy-absorption coefficient tables and (ii) the dose deposited by secondary radiation (Rayleigh and Compton scattering, fluorescence) is computed using a hybrid algorithm combining MC and deterministic calculations. In the MC part, a small number of particle histories are simulated. Every time a scattering or fluorescence event takes place, a splitting mechanism is applied, so that multiple secondary photons are generated with a reduced weight. The secondary events are further processed in a deterministic way, using ray casting techniques. The whole simulation, carried out within the framework of the Monte Carlo code Geant4, is shown to converge towards the same results as the full MC simulation. The speed of convergence is found to depend notably on the splitting multiplicity, which can easily be optimized. To assess the performance of the proposed algorithm, we compare it to state-of-the-art MC simulations, accelerated by the track length estimator technique (TLE), considering a clinically realistic test case. It is found that the hybrid approach is significantly faster than the MC/TLE method. The gain in speed in a test case was about 25 for a constant precision. Therefore, this method appears to be suitable for treatment planning applications.
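The weight bookkeeping behind the splitting mechanism is easy to demonstrate: replacing one secondary of weight w by m copies of weight w/m leaves the expected score unchanged while averaging out per-history noise. The toy "physics" below (an exponentially attenuated score along a random path length) is invented for illustration and has nothing to do with Geant4 internals.

```python
import math, random

def secondary_score(rng):
    # Invented toy physics: a secondary photon scores exp(-path length),
    # with an exponentially distributed path length (mean 1).
    return math.exp(-rng.expovariate(1.0))

def estimate(n_primaries, multiplicity, seed):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_primaries):
        # Splitting: m secondaries, each carrying weight 1/m, so the
        # expected score per primary is unchanged.
        w = 1.0 / multiplicity
        for _ in range(multiplicity):
            total += w * secondary_score(rng)
    return total / n_primaries

analog = estimate(20000, 1, seed=1)    # no splitting
split  = estimate(20000, 10, seed=2)   # 10-fold splitting, weight 1/10 each
print(analog, split)                   # both estimate E[exp(-T)] = 1/2
```

Both estimators are unbiased for the same expectation, but the split version averages ten samples per primary, which is the variance-reduction lever the splitting multiplicity optimizes.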
A haplotype inference algorithm for trios based on deterministic sampling
Directory of Open Access Journals (Sweden)
Iliadis Alexandros
2010-08-01
Background: In genome-wide association studies, thousands of individuals are genotyped in hundreds of thousands of single nucleotide polymorphisms (SNPs). Statistical power can be increased when haplotypes, rather than three-valued genotypes, are used in analysis, so the problem of haplotype phase inference (phasing) is particularly relevant. Several phasing algorithms have been developed for data from unrelated individuals, based on different models, some of which have been extended to father-mother-child "trio" data. Results: We introduce a technique for phasing trio datasets using a tree-based deterministic sampling scheme. We have compared our method with the publicly available algorithms PHASE v2.1, BEAGLE v3.0.2 and 2SNP v1.7 on datasets of varying numbers of markers and trios. We have found that the computational complexity of PHASE makes it prohibitive for routine use; on the other hand 2SNP, though the fastest method for small datasets, was significantly inaccurate. We have shown that our method outperforms BEAGLE in both speed and accuracy for small to intermediate numbers of trios, across all marker counts examined. Our method is implemented in the "Tree-Based Deterministic Sampling" (TDS) package, available for download at http://www.ee.columbia.edu/~anastas/tds. Conclusions: Using a tree-based deterministic sampling technique, we present an intuitive and conceptually simple phasing algorithm for trio data. The trade-off between speed and accuracy achieved by our algorithm makes it a strong candidate for routine use on trio datasets.
Deterministic Single-Phonon Source Triggered by a Single Photon
Söllner, Immo; Lodahl, Peter
2016-01-01
We propose a scheme that enables the deterministic generation of single phonons at GHz frequencies triggered by single photons in the near infrared. This process is mediated by a quantum dot embedded on-chip in an opto-mechanical circuit, which allows for the simultaneous control of the relevant photonic and phononic frequencies. We devise new opto-mechanical circuit elements that constitute the necessary building blocks for the proposed scheme and are readily implementable within the current state-of-the-art of nano-fabrication. This will open new avenues for implementing quantum functionalities based on phonons as an on-chip quantum bus.
Deterministic Dynamics and Chaos: Epistemology and Interdisciplinary Methodology
Catsigeras, Eleonora
2011-01-01
We analyze, from a theoretical viewpoint, the bidirectional interdisciplinary relation between mathematics and psychology, focused on the mathematical theory of deterministic dynamical systems and, in particular, on the theory of chaos. On one hand, there is the direct classic relation: the application of mathematics to psychology. On the other hand, we propose the converse relation, which consists in the formulation of new abstract mathematical problems arising from the processes and structures under research in psychology. The bidirectional multidisciplinary relation to and from pure mathematics largely holds with the "hard" sciences, typically physics and astronomy, but it is rather new from the social and human sciences towards pure mathematics.
Steering Multiple Reverse Current into Unidirectional Current in Deterministic Ratchets
Institute of Scientific and Technical Information of China (English)
韦笃取; 罗晓曙; 覃英华
2011-01-01
Recent investigations have shown that when the amplitude of the external force is varied, deterministic ratchets exhibit multiple current reversals, which are undesirable in certain circumstances. To steer the multiple reverse currents into a unidirectional current, an adaptive control law is presented, inspired by the relation between multiple current reversals and the chaos-periodic/quasiperiodic transition of the transport velocity. The designed controller stabilizes the transport velocity of the ratchets to a steady state and suppresses any chaos-periodic/quasiperiodic transition; that is, stable transport in the ratchets is achieved, which keeps the sign of the current unchanged.
Deterministic multimode photonic device for quantum-information processing
DEFF Research Database (Denmark)
Nielsen, Anne Ersbak Bang; Mølmer, Klaus
2010-01-01
We propose the implementation of a light source that can deterministically generate a rich variety of multimode quantum states. The desired states are encoded in the collective population of different ground hyperfine states of an atomic ensemble and converted to multimode photonic states by excitation to optically excited levels followed by cooperative spontaneous emission. Among our examples of applications, we demonstrate how two-photon-entangled states can be prepared and implemented in a protocol for reference-frame-free quantum key distribution, and how one-dimensional as well as higher...
A Deterministic Transport Code for Space Environment Electrons
Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.
2010-01-01
A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.
Noise-based deterministic logic and computing: a brief survey
Kish, Laszlo B; Bezrukov, Sergey M; Peper, Ferdinand; Gingl, Zoltan; Horvath, Tamas
2010-01-01
A short survey is provided about our recent explorations of the young topic of noise-based logic. After outlining the motivation behind noise-based computation schemes, we present a short summary of our ongoing efforts in the introduction, development and design of several noise-based deterministic multivalued logic schemes and elements. In particular, we describe classical, instantaneous, continuum, spike and random-telegraph-signal based schemes with applications such as circuits that emulate the brain's functioning and string verification via a slow communication channel.
Deterministic entanglement of Rydberg ensembles by engineered dissipation
DEFF Research Database (Denmark)
Dasari, Durga; Mølmer, Klaus
2014-01-01
We propose a scheme that employs dissipation to deterministically generate entanglement in an ensemble of strongly interacting Rydberg atoms. With a combination of microwave driving between different Rydberg levels and a resonant laser coupling to a short-lived atomic state, the ensemble can be driven towards a dark steady state that entangles all atoms. The long-range resonant dipole-dipole interaction between different Rydberg states extends the entanglement beyond the van der Waals interaction range, with perspectives for entangling large and distant ensembles.
CALTRANS: A parallel, deterministic, 3D neutronics code
Energy Technology Data Exchange (ETDEWEB)
Carson, L.; Ferguson, J.; Rogers, J.
1994-04-01
Our efforts to parallelize the deterministic solution of the neutron transport equation has culminated in a new neutronics code CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementation of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.
Deterministic ants in a labyrinth -- information gained by map sharing
Malinowski, Janusz
2014-01-01
A few ant robots are dropped into a labyrinth formed by a square lattice with a small number of nodes removed. Ants move according to a deterministic algorithm designed to explore all corridors. Each ant remembers the shape of the corridors she has visited. When two ants meet, they share the information acquired. We evaluate how the time for an ant to obtain complete information depends on the number of ants, and how the corridor length known to an ant depends on time. Numerical results are presented in the form of scaling relations.
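A minimal sketch of the map-sharing mechanism (not the paper's exact exploration rule): ants perform deterministic depth-first walks on a full square lattice, under the simplifying assumption that an ant standing at a junction learns all corridors incident to it; two ants on the same node merge their maps. Since movement does not depend on the shared map, sharing can only shorten the time to a complete map.

```python
def neighbors(node, n):
    x, y = node
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # fixed order: deterministic
        if 0 <= x + dx < n and 0 <= y + dy < n:
            yield (x + dx, y + dy)

def edge(a, b):
    return (a, b) if a < b else (b, a)

def explore(n, starts):
    nodes = [(x, y) for x in range(n) for y in range(n)]
    all_edges = {edge(v, w) for v in nodes for w in neighbors(v, n)}
    pos = list(starts)
    stacks = [[s] for s in starts]
    seen = [{s} for s in starts]
    # Simplifying assumption: an ant sees every corridor incident to its node.
    maps = [{edge(s, w) for w in neighbors(s, n)} for s in starts]
    for step in range(1, 10000):
        for i in range(len(pos)):
            here = pos[i]
            nxt = next((w for w in neighbors(here, n) if w not in seen[i]), None)
            if nxt is None:                      # dead end: backtrack one corridor
                if stacks[i]:
                    stacks[i].pop()
                if stacks[i]:
                    pos[i] = stacks[i][-1]
                continue
            seen[i].add(nxt)
            stacks[i].append(nxt)
            pos[i] = nxt
            maps[i] |= {edge(nxt, w) for w in neighbors(nxt, n)}
        for i in range(len(pos)):                # meeting: share corridor maps
            for j in range(i + 1, len(pos)):
                if pos[i] == pos[j]:
                    merged = maps[i] | maps[j]
                    maps[i], maps[j] = merged, set(merged)
        if any(m == all_edges for m in maps):
            return step
    return None

t1 = explore(4, [(0, 0)])             # one ant
t2 = explore(4, [(0, 0), (3, 3)])     # two ants with map sharing
print(t1, t2)
```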
Methods and models in mathematical biology deterministic and stochastic approaches
Müller, Johannes
2015-01-01
This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.
Lasing in an optimized deterministic aperiodic nanobeam cavity
Moon, Seul-Ki; Jeong, Kwang-Yong; Noh, Heeso; Yang, Jin-Kyu
2016-12-01
We have demonstrated lasing action from partially extended modes in deterministic aperiodic nanobeam cavities generated by a Rudin-Shapiro sequence with two different air holes, at room temperature. By varying the size ratio of the holes, and hence the structural aperiodicity, different optical lasing modes were obtained with maximized quality factors. The lasing characteristics of the partially extended modes were confirmed by numerical simulations based on scanning microscope images of the fabricated samples. We believe that these partially extended nanobeam modes will be useful for label-free optical biosensors.
Deterministic versus stochastic aspects of superexponential population growth models
Grosjean, Nicolas; Huillet, Thierry
2016-08-01
Deterministic population growth models with power-law rates can exhibit a large variety of growth behaviors, ranging from algebraic, exponential to hyperexponential (finite time explosion). In this setup, selfsimilarity considerations play a key role, together with two time substitutions. Two stochastic versions of such models are investigated, showing a much richer variety of behaviors. One is the Lamperti construction of selfsimilar positive stochastic processes based on the exponentiation of spectrally positive processes, followed by an appropriate time change. The other one is based on stable continuous-state branching processes, given by another Lamperti time substitution applied to stable spectrally positive processes.
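The deterministic side of the dichotomy is easy to check numerically. For the power-law growth ODE dN/dt = c·N^p with p > 1, separation of variables gives the closed form N(t) = (N0^(1-p) - c(p-1)t)^(1/(1-p)), which explodes at the finite time t* = N0^(1-p)/(c(p-1)). The sketch below verifies the closed form by a central finite difference; all parameter values are chosen for illustration.

```python
c, p, N0 = 1.0, 2.0, 1.0
t_star = N0 ** (1 - p) / (c * (p - 1))   # finite-time explosion: t* = 1.0 here

def N(t):
    # Closed-form solution of dN/dt = c*N**p, valid for t < t_star.
    return (N0 ** (1 - p) - c * (p - 1) * t) ** (1.0 / (1 - p))

# Central finite-difference check of the ODE at an interior time.
t, h = 0.5, 1e-6
lhs = (N(t + h) - N(t - h)) / (2 * h)   # numerical dN/dt
rhs = c * N(t) ** p
print(t_star, lhs, rhs)
```

For p = 2 this reduces to the familiar hyperbolic solution N(t) = 1/(1 - t): algebraic for p < 1, exponential at p = 1, hyperexponential with finite-time blow-up for p > 1.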
Unambiguous Tree Languages Are Topologically Harder Than Deterministic Ones
Directory of Open Access Journals (Sweden)
Szczepan Hummel
2012-10-01
The paper gives an example of a tree language G that is recognised by an unambiguous parity automaton and is analytic-complete as a set in Cantor space. This already shows that the unambiguous languages are topologically more complex than the deterministic ones, which are all coanalytic. Using the set G as a building block, we construct an unambiguous language that is topologically harder than any countable boolean combination of analytic and coanalytic sets. In particular, the language is harder than any set in the difference hierarchy of analytic sets considered by O. Finkel and P. Simonnet in the context of nondeterministic automata.
Fully fault tolerant quantum computation with non-deterministic gates
Li, Ying; Stace, Thomas M; Benjamin, Simon C
2010-01-01
In certain approaches to quantum computing the operations between qubits are non-deterministic and likely to fail. For example, a distributed quantum processor would achieve scalability by networking together many small components; operations between components should be assumed to be failure prone. In the logical limit of this architecture each component contains only one qubit. Here we derive thresholds for fault tolerant quantum computation under such extreme paradigms. We find that computation is supported for remarkably high failure rates (exceeding 90%) provided that failures are heralded; meanwhile the rate of unknown errors should not exceed 2 in 10^4 operations.
Deterministic secure quantum communication over a collective-noise channel
Institute of Scientific and Technical Information of China (English)
GU Bin; PEI ShiXin; SONG Biao; ZHONG Kun
2009-01-01
We present two deterministic secure quantum communication schemes over collective-noise channels. One is used to complete secure quantum communication against a collective-rotation noise and the other against a collective-dephasing noise. The two parties of quantum communication can exploit the correlation of their subsystems to check eavesdropping efficiently. Although the sender should prepare a sequence of three-photon entangled states for accomplishing secure communication against a collective noise, the two parties need only single-photon measurements, rather than Bell-state measurements, which will make our schemes convenient in practical applications.
Deterministic generation of entangled coherent states for two atomic samples
Institute of Scientific and Technical Information of China (English)
Lu Dao-Ming; Zheng Shi-Biao
2009-01-01
This paper proposes an efficient scheme for the deterministic generation of entangled coherent states for two atomic samples. In the scheme two collections of atoms are trapped in an optical cavity and driven by a classical field. Under certain conditions the two atomic samples evolve from a coherent state to an entangled coherent state. During the interaction the cavity mode is always in the vacuum state and the atoms have no probability of being populated in the excited state. Thus, the scheme is insensitive to both cavity decay and atomic spontaneous emission.
Deterministic Smoluchowski-Feynman ratchets driven by chaotic noise.
Chew, Lock Yue
2012-01-01
We have elucidated the effect of statistical asymmetry on the directed current in Smoluchowski-Feynman ratchets driven by chaotic noise. Based on the inhomogeneous Smoluchowski equation and its generalized version, we arrive at analytical expressions for the directed current that include a source term. The source term indicates that statistical asymmetry can drive the system further away from thermodynamic equilibrium, as exemplified by the constant flashing, the state-dependent, and the tilted deterministic Smoluchowski-Feynman ratchets, with the consequence of an enhancement in the directed current.
Deterministic multidimensional growth model for small-world networks
Peng, Aoyuan
2011-01-01
We propose a deterministic multidimensional growth model for small-world networks. The model can characterize the distinguishing properties of many real-life networks with a geometric space structure. Our results show the model possesses the small-world effect: a large clustering coefficient and a small characteristic path length. We also obtain accurate results for its properties, including the degree distribution, clustering coefficient and network diameter, and discuss them. It is also worth noting that we obtain an exact analytical expression for the characteristic path length. We verify these main features numerically and experimentally.
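As a generic illustration of deterministic small-world growth (the classic pseudofractal construction, used here as a stand-in since the abstract does not specify the model's rules): each generation attaches a new node to both endpoints of every existing edge, which produces a high clustering coefficient and a small diameter with no randomness at all.

```python
import itertools
from collections import deque

def pseudofractal(generations):
    # Start from a triangle; each generation adds, for every existing edge,
    # a new node connected to both of its endpoints.
    edges, n = {(0, 1), (0, 2), (1, 2)}, 3
    for _ in range(generations):
        for a, b in list(edges):
            edges |= {(a, n), (b, n)}
            n += 1
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def clustering(adj):
    # Average local clustering coefficient.
    cs = []
    for v, nb in adj.items():
        k = len(nb)
        if k < 2:
            continue
        links = sum(1 for a, b in itertools.combinations(sorted(nb), 2) if b in adj[a])
        cs.append(2 * links / (k * (k - 1)))
    return sum(cs) / len(cs)

def diameter(adj):
    # Longest shortest path, via BFS from every node.
    best = 0
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        best = max(best, max(dist.values()))
    return best

adj = pseudofractal(3)   # 3 -> 6 -> 15 -> 42 nodes
print(len(adj), diameter(adj))
```

Every newly attached node closes a triangle with its two parents, which is what keeps the clustering coefficient high while the diameter stays small relative to the node count.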
Deterministic homogenization of parabolic monotone operators with time dependent coefficients
Directory of Open Access Journals (Sweden)
Gabriel Nguetseng
2004-06-01
We study, beyond the classical periodic setting, the homogenization of linear and nonlinear parabolic differential equations associated with monotone operators. The usual periodicity hypothesis is here substituted by an abstract deterministic assumption characterized by a great relaxation of the time behaviour. Our main tool is the recent theory of homogenization structures by the first author, and our homogenization approach falls under the two-scale convergence method. Various concrete examples are worked out with a view to pointing out the wide scope of our approach and bringing the role of homogenization structures to light.
Bosch, Gabriel; Ender, Andreas; Mehl, Albert
2015-01-01
Abrasion and erosion are two increasingly common indications for dental treatment. Thanks to modern digital technologies and new restorative materials, there are novel therapeutic approaches to restoring such losses of tooth structure in a virtually non-invasive manner. The case study in this article demonstrates one such innovative approach. The patient's severely abraded natural dentition was restored in a defect-driven, minimally invasive manner using high-performance composite materials in the posterior region, and the "sandwich technique" in the anterior region. The restorations were milled on an optimized milling machine with milling cycles adapted for the fabrication of precision-fit restorations with thin edges.
Minimal deviation adenocarcinoma of the cervix: case report and literature review
Institute of Scientific and Technical Information of China (English)
刘小艳; 巩丽; 李艳红; 姚丽; 封兰兰; 兰淼; 张伟
2013-01-01
Objective: To report a case of minimal deviation adenocarcinoma of the cervix and explore its clinicopathological features. Methods: The histopathological and immunohistochemical findings of a case of minimal deviation adenocarcinoma of the cervix were observed. Results: Microscopically, the glands were composed of mucin-secreting columnar epithelium; they were distorted and irregular in shape, and most were indistinguishable from normal glands. In a minority of glands the nuclei showed moderate atypical hyperplasia, and mitotic figures were visible. Conclusion: Minimal deviation adenocarcinoma of the cervix is a rare, well-differentiated mucinous adenocarcinoma. Clinical diagnosis should rule out benign glandular hyperplasia and endometriosis. Because the majority of cases cannot be diagnosed by cervical biopsy, in routine clinical work a suspected malignancy that appears benign on microscopic examination should raise the possibility of minimal deviation adenocarcinoma of the cervix; the depth of glandular infiltration is the key to diagnosis.
Institute of Scientific and Technical Information of China (English)
黄崧; 陈翔宇; 任小宝; 郭国宁; 任鸿
2012-01-01
Objective: To evaluate the clinical effect of the Achillon minimally invasive Achilles tendon suture system in the repair of acute open Achilles tendon ruptures. Methods: Twenty-three patients with acute open Achilles tendon ruptures were treated with the Achillon minimally invasive Achilles tendon suture system and followed up. The minimally invasive repair was performed after debridement. In all cases the Achilles tendon was successfully explored through the original wound. The Achillon device was inserted to hold the proximal stump of the ruptured tendon and suture threads were introduced; the same was done at the distal stump. The ankle was immobilized in a plaster cast for six weeks after surgery. Results: All twenty-three cases were followed up for 8 to 14 months (10 months on average). According to the Arner-Lindholm score, the clinical outcome was excellent in 21 cases (91.3%) and good in 2 cases (8.7%). No case of infection, Achilles tendon malunion, sural nerve damage, recurrent Achilles tendon rupture or suture rejection was found. Conclusion: Repairing acute open Achilles tendon ruptures with the Achillon minimally invasive Achilles tendon suture system is reliable and minimally invasive, with fast recovery of ankle function.
Directory of Open Access Journals (Sweden)
Roger Chen Zhu
2016-10-01
Arrest in the embryologic intestinal rotation around the superior mesenteric artery prevents proper mesenteric attachment and subjects the gut to volvulus and ischemia, which may lead to bowel resection. The length of non-viable resected bowel has been shown by Teitelbaum et al. to be an independent predictor of survival in patients with postoperative short bowel syndrome (RR = 5.74, P = .003). Non-occlusive mesenteric ischemia (NOMI) is a feed-forward loop of vasoconstriction that aggravates the primary ischemic injury. It is an initially reversible process and a potential point of intervention for the preservation of viable bowel. The Boley et al. algorithm for the management of adult NOMI utilizes intravascular papaverine infusion to increase intracellular cAMP, decreasing calcium concentration and halting vasospasm. We present a modified version of this approach using topical papaverine in the setting of neonatal post-ischemic NOMI, with the goal of minimizing bowel resection.
Salsamendi, Jason; Pereira, Keith; Kang, Kyungmin; Fan, Ji
2015-09-01
Nonalcoholic fatty liver disease (NAFLD) represents a spectrum of disorders from simple steatosis to inflammation leading to fibrosis, cirrhosis, and even hepatocellular carcinoma. With the progressive epidemics of obesity and diabetes, major risk factors in the development and pathogenesis of NAFLD, the prevalence of NAFLD and its associated complications, including liver failure and hepatocellular carcinoma, is expected to increase by 2030, with an enormous health and economic impact. We present a patient who developed hepatocellular carcinoma (HCC) from nonalcoholic steatohepatitis (NASH) cirrhosis. Due to morbid obesity, she was not an optimal transplant candidate and was not initially listed. After attempts at lifestyle modification failed to lead to weight reduction, transarterial embolization of the left gastric artery was performed. This is the sixth such procedure in humans in the literature. Subsequently she had a meaningful drop in BMI from 42 to 36 over the following 6 months, ultimately leading to her being listed for transplant. During this time, the left hepatic HCC was treated with chemoembolization without evidence of recurrence. In this article, we wish to highlight the use of minimally invasive percutaneous endovascular therapies such as transarterial chemoembolization (TACE) in the comprehensive management of the NAFLD spectrum, and percutaneous transarterial embolization of the left gastric artery (LGA), a novel method, for the management of obesity.
Ruled Laguerre minimal surfaces
Skopenkov, Mikhail
2011-10-30
A Laguerre minimal surface is an immersed surface in ℝ³ that is an extremal of the functional ∫ (H²/K − 1) dA. In the present paper, we prove that the only ruled Laguerre minimal surfaces are, up to isometry, the surfaces r(φ, λ) = (Aφ, Bφ, Cφ + D cos 2φ) + λ(sin φ, cos φ, 0), where A, B, C, D ∈ ℝ are fixed. To achieve invariance under Laguerre transformations, we also derive all Laguerre minimal surfaces that are enveloped by a family of cones. The methodology is based on the isotropic model of Laguerre geometry. In this model a Laguerre minimal surface enveloped by a family of cones corresponds to a graph of a biharmonic function carrying a family of isotropic circles. We classify such functions by showing that the top view of the family of circles is a pencil. © 2011 Springer-Verlag.
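As a quick illustration of the classified surfaces, the parametrization above can be evaluated directly; the sketch below (with arbitrary illustrative constants A, B, C, D) checks that the surface is affine in λ, i.e. genuinely ruled:

```python
import numpy as np

def ruled_laguerre_surface(phi, lam, A=1.0, B=0.5, C=0.2, D=0.3):
    """Point on the ruled surface
    r(phi, lam) = (A*phi, B*phi, C*phi + D*cos(2*phi)) + lam*(sin(phi), cos(phi), 0).
    The constants A, B, C, D are arbitrary illustrative values."""
    directrix = np.array([A * phi, B * phi, C * phi + D * np.cos(2 * phi)])
    ruling = np.array([np.sin(phi), np.cos(phi), 0.0])
    return directrix + lam * ruling

# For fixed phi the map is affine in lam: the midpoint of two points on a
# ruling lies on the surface, confirming the surface is ruled.
p0 = ruled_laguerre_surface(0.7, 0.0)
p1 = ruled_laguerre_surface(0.7, 1.0)
p2 = ruled_laguerre_surface(0.7, 2.0)
assert np.allclose((p0 + p2) / 2, p1)
```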
... get worse You develop new symptoms, including side effects from the medicines used to treat the disorder Alternative Names Minimal change nephrotic syndrome; Nil disease; Lipoid nephrosis; Idiopathic nephrotic syndrome of childhood Images ...
Deterministic approach to microscopic three-phase traffic theory
Kerner, B S; Kerner, Boris S.; Klenov, Sergey L.
2005-01-01
A deterministic approach to three-phase traffic theory is presented. Two different deterministic microscopic traffic flow models are introduced. In an acceleration time delay model (ATD-model), different time delays in driver acceleration associated with driver behavior in various local driving situations are explicitly incorporated into the model. Vehicle acceleration depends on the local traffic situation, i.e., whether a driver is within the free flow, synchronized flow, or wide moving jam traffic phase. In a speed adaptation model (SA-model), driver time delays are simulated as a model effect: rather than driver acceleration, vehicle speed adaptation occurs with different time delays depending on which of the three traffic phases the vehicle is in. It is found that the ATD- and SA-models show spatiotemporal congested traffic patterns that are consistent with empirical results. It is shown that, in accordance with empirical results, in the ATD- and SA-models the onset of congestion in free flow at a...
Deterministic Chaos in the X-ray Sources
Grzedzielski, M.; Sukova, P.; Janiuk, A.
2015-12-01
Hardly any of the observed black hole accretion disks in X-ray binaries and active galaxies shows constant flux. When the local stochastic variations of the disk occur at specific regions where a resonant behaviour takes place, quasi-periodic oscillations (QPOs) appear. If the global structure of the flow and its non-linear hydrodynamics affects the fluctuations, the variability is chaotic in the sense of deterministic chaos. Our aim is to resolve the question of the stochastic versus deterministic nature of black hole binary variability. We use both observational and analytic methods. We use recurrence analysis, studying the occurrence of long diagonal lines in the recurrence plot of observed data series and comparing it to surrogate series. We analyze here the data of two X-ray binaries - XTE J1550-564 and GX 339-4 - observed by the Rossi X-ray Timing Explorer. In these sources, non-linear variability is expected because of the global conditions (such as the mean accretion rate) leading to a possible instability of the accretion disk. The thermal-viscous instability and fluctuations around the fixed-point solution occur at high accretion rates, when the radiation pressure gives the dominant contribution to the stress tensor.
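A minimal, hypothetical version of the recurrence analysis described above can be sketched in Python: build a thresholded recurrence matrix and collect diagonal line lengths, which come out long for deterministic (here, periodic) signals. This is an illustration of the technique, not the authors' pipeline:

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 iff |x_i - x_j| < eps."""
    x = np.asarray(x, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

def diagonal_line_lengths(R, min_len=2):
    """Lengths of diagonal line segments off the main diagonal.
    Long diagonals indicate deterministic structure in the series."""
    n = R.shape[0]
    lengths = []
    for k in range(1, n):            # superdiagonals; R is symmetric
        run = 0
        for v in list(np.diagonal(R, offset=k)) + [0]:  # sentinel ends run
            if v:
                run += 1
            else:
                if run >= min_len:
                    lengths.append(run)
                run = 0
    return lengths

# A periodic (deterministic) signal yields long diagonal lines at
# offsets that are multiples of the period.
t = np.arange(200)
periodic = np.sin(2 * np.pi * t / 25)
assert max(diagonal_line_lengths(recurrence_matrix(periodic, 0.1))) > 20
```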
Deterministic chaos in the X-Ray sources
Grzedzielski, M; Janiuk, A
2015-01-01
Hardly any of the observed black hole accretion disks in X-ray binaries and active galaxies shows constant flux. When the local stochastic variations of the disk occur at specific regions where a resonant behaviour takes place, quasi-periodic oscillations (QPOs) appear. If the global structure of the flow and its non-linear hydrodynamics affects the fluctuations, the variability is chaotic in the sense of deterministic chaos. Our aim is to resolve the question of the stochastic versus deterministic nature of black hole binary variability. We use both observational and analytic methods. We use recurrence analysis, studying the occurrence of long diagonal lines in the recurrence plot of observed data series and comparing it to surrogate series. We analyze here the data of two X-ray binaries - XTE J1550-564 and GX 339-4 - observed by the Rossi X-ray Timing Explorer. In these sources, non-linear variability is expected because of the global conditions (such as the mean accretion rate) leadin...
Deterministic nature of the underlying dynamics of surface wind fluctuations
Directory of Open Access Journals (Sweden)
R. C. Sreelekshmi
2012-10-01
Full Text Available Modelling the fluctuations of the Earth's surface wind has a significant role in understanding the dynamics of the atmosphere, besides its impact on various fields ranging from agriculture to structural engineering. Most of the studies on the modelling and prediction of wind speed and power reported in the literature are based on statistical methods or the probabilistic distribution of the wind speed data. In this paper we investigate the suitability of a deterministic model to represent the wind speed fluctuations by employing tools of nonlinear dynamics. We have carried out a detailed nonlinear time series analysis of the daily mean wind speed data measured at Thiruvananthapuram (8.483° N, 76.950° E) from 2000 to 2010. The results of the analysis strongly suggest that the underlying dynamics is deterministic, low-dimensional and chaotic, suggesting the possibility of accurate short-term prediction. As most of the chaotic systems are confined to laboratories, this is another example of a naturally occurring time series showing chaotic behaviour.
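Nonlinear time-series analyses of this kind typically start from a time-delay (Takens) embedding of the scalar series; a minimal sketch, assuming nothing about the authors' actual toolchain:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding of a scalar series into dim-dimensional
    state vectors with lag tau (Takens reconstruction)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Embed a smooth deterministic signal; each row is one reconstructed
# state vector (x_t, x_{t+tau}, x_{t+2*tau}).
t = np.linspace(0, 40, 2000)
x = np.sin(t) + 0.5 * np.sin(2.3 * t)
emb = delay_embed(x, dim=3, tau=10)
assert emb.shape == (2000 - 2 * 10, 3)
```

Dimension estimates (e.g. correlation dimension) and short-term predictors are then computed on these embedded vectors.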
Deterministic nature of the underlying dynamics of surface wind fluctuations
Sreelekshmi, R. C.; Asokan, K.; Satheesh Kumar, K.
2012-10-01
Modelling the fluctuations of the Earth's surface wind has a significant role in understanding the dynamics of atmosphere besides its impact on various fields ranging from agriculture to structural engineering. Most of the studies on the modelling and prediction of wind speed and power reported in the literature are based on statistical methods or the probabilistic distribution of the wind speed data. In this paper we investigate the suitability of a deterministic model to represent the wind speed fluctuations by employing tools of nonlinear dynamics. We have carried out a detailed nonlinear time series analysis of the daily mean wind speed data measured at Thiruvananthapuram (8.483° N,76.950° E) from 2000 to 2010. The results of the analysis strongly suggest that the underlying dynamics is deterministic, low-dimensional and chaotic suggesting the possibility of accurate short-term prediction. As most of the chaotic systems are confined to laboratories, this is another example of a naturally occurring time series showing chaotic behaviour.
On the deterministic and stochastic use of hydrologic models
Farmer, William H.; Vogel, Richard M.
2016-01-01
Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
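The residual-reintroduction idea can be sketched with synthetic data (a toy model, not the paper's watershed models): resampling calibration residuals into the simulated series recovers the observed variance that the raw deterministic output understates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setting: a deterministic model output and noisy observations.
sim = 10 + 2 * np.sin(np.linspace(0, 20, 500))   # simulated responses
obs = sim + rng.normal(0, 1.5, size=sim.size)    # observed responses
residuals = obs - sim                            # calibration residuals

# Stochastic use: reintroduce bootstrap-resampled residuals into the
# simulated responses.
stochastic = sim + rng.choice(residuals, size=sim.size, replace=True)

# The stochastic output matches the observed variance far better than
# the deterministic simulation, whose spread is too small.
assert abs(stochastic.var() - obs.var()) < abs(sim.var() - obs.var())
```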
Bayesian analysis of deterministic and stochastic prisoner's dilemma games
Directory of Open Access Journals (Sweden)
Howard Kunreuther
2009-08-01
Full Text Available This paper compares the behavior of individuals playing a classic two-person deterministic prisoner's dilemma (PD) game with choice data obtained from repeated interdependent security prisoner's dilemma games with varying probabilities of loss and the ability to learn (or not learn) about the actions of one's counterpart, an area of recent interest in experimental economics. This novel data set, from a series of controlled laboratory experiments, is analyzed using Bayesian hierarchical methods, the first application of such methods in this research domain. We find that individuals are much more likely to be cooperative when payoffs are deterministic than when the outcomes are probabilistic. A key factor explaining this difference is that subjects in a stochastic PD game respond not just to what their counterparts did but also to whether or not they suffered a loss. These findings are interpreted in the context of behavioral theories of commitment, altruism and reciprocity. The work provides a linkage between Bayesian statistics, experimental economics, and consumer psychology.
Deterministic direct reprogramming of somatic cells to pluripotency.
Rais, Yoach; Zviran, Asaf; Geula, Shay; Gafni, Ohad; Chomsky, Elad; Viukov, Sergey; Mansour, Abed AlFatah; Caspi, Inbal; Krupalnik, Vladislav; Zerbib, Mirie; Maza, Itay; Mor, Nofar; Baran, Dror; Weinberger, Leehee; Jaitin, Diego A; Lara-Astiaso, David; Blecher-Gonen, Ronnie; Shipony, Zohar; Mukamel, Zohar; Hagai, Tzachi; Gilad, Shlomit; Amann-Zalcenstein, Daniela; Tanay, Amos; Amit, Ido; Novershtern, Noa; Hanna, Jacob H
2013-10-03
Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells from reprogramming successfully and synchronously remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency promoting conditions, results in deterministic and synchronized iPS cell reprogramming (near 100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex that potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for the dissection of the molecular dynamics leading to the establishment of pluripotency at unprecedented flexibility and resolution.
Quantum secure direct communication and deterministic secure quantum communication
Institute of Scientific and Technical Information of China (English)
LONG Gui-lu; DENG Fu-guo; WANG Chuan; LI Xi-han; WEN Kai; WANG Wan-ying
2007-01-01
In this review article, we review the recent development of quantum secure direct communication (QSDC) and deterministic secure quantum communication (DSQC), both of which are used to transmit secret messages, including the criteria for QSDC, some interesting QSDC protocols, the DSQC protocols and QSDC networks, etc. The difference between these two branches of quantum communication is that DSQC requires the two parties to exchange at least one bit of classical information for reading out the message in each qubit, whereas QSDC does not. They are attractive because they are deterministic; in particular, the QSDC protocol is fully quantum mechanical. With sophisticated quantum technology in the future, QSDC may become more and more popular. For ensuring the safety of QSDC with single photons and quantum information sharing of a single qubit in a noisy channel, a quantum privacy amplification protocol has been proposed. It involves very simple CHC operations and reduces the information leakage to a negligibly small level. Moreover, with one-party quantum error correction, a relation has been established between classical linear codes and quantum one-party codes, hence it is convenient to transfer many good classical error correction codes to the quantum world. The one-party quantum error correction codes are especially designed for quantum dense coding and related QSDC protocols based on dense coding.
On the deterministic and stochastic use of hydrologic models
Farmer, William H.; Vogel, Richard M.
2016-07-01
Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
DNF Sparsification and a Faster Deterministic Counting Algorithm
Gopalan, Parikshit; Reingold, Omer
2012-01-01
Given a DNF formula on n variables, the two natural size measures are the number of terms, or size s(f), and the maximum width of a term, w(f). It is folklore that short DNF formulas can be made narrow. We prove a converse, showing that narrow formulas can be sparsified. More precisely, any width-w DNF, irrespective of its size, can be $\epsilon$-approximated by a width-$w$ DNF with at most $(w\log(1/\epsilon))^{O(w)}$ terms. We combine our sparsification result with the work of Luby and Velickovic to give a faster deterministic algorithm for approximately counting the number of satisfying solutions to a DNF. Given a formula on n variables with poly(n) terms, we give a deterministic $n^{\tilde{O}(\log \log n)}$ time algorithm that computes an additive $\epsilon$-approximation to the fraction of satisfying assignments of f for $\epsilon = 1/\mathrm{poly}(\log n)$. The previous best result, due to Luby and Velickovic from nearly two decades ago, had a run-time of $n^{\exp(O(\sqrt{\log \log n}))}$.
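For orientation, the quantity being approximated, the fraction of satisfying assignments of a DNF, can be computed exactly by brute force on tiny instances. This is just the definition, not the Luby-Velickovic algorithm or the new one:

```python
from itertools import product

def dnf_sat_fraction(terms, n):
    """Exact fraction of assignments satisfying a DNF formula.
    `terms` is a list of terms; each term is a list of non-zero ints,
    +i for variable x_i and -i for its negation (1-indexed).
    Brute force over all 2^n assignments: exponential, tiny n only."""
    count = 0
    for bits in product([False, True], repeat=n):
        if any(all(bits[abs(l) - 1] == (l > 0) for l in term)
               for term in terms):
            count += 1
    return count / 2 ** n

# f = (x1 AND x2) OR (NOT x3): Pr = 1/4 + 1/2 - 1/8 = 5/8.
assert dnf_sat_fraction([[1, 2], [-3]], 3) == 5 / 8
```

The deterministic counting algorithms in the paper achieve additive error $\epsilon$ without enumerating the exponentially many assignments.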
Electrocardiogram (ECG) pattern modeling and recognition via deterministic learning
Institute of Scientific and Technical Information of China (English)
Xunde DONG; Cong WANG; Junmin HU; Shanxing OU
2014-01-01
A method for electrocardiogram (ECG) pattern modeling and recognition via deterministic learning theory is presented in this paper. Instead of recognizing ECG signals beat-to-beat, each ECG signal, which contains a number of heartbeats, is recognized as a whole. The method is based entirely on the temporal features (i.e., the dynamics) of ECG patterns, which contain complete information about the patterns. A dynamical model capable of generating synthetic ECG signals is employed to demonstrate the method. Based on the dynamical model, the method proceeds in two phases: the identification (training) phase and the recognition (test) phase. In the identification phase, the dynamics of ECG patterns is accurately modeled and expressed as constant RBF neural weights through deterministic learning. In the recognition phase, the modeling results are used for ECG pattern recognition. The main feature of the proposed method is that the dynamics of ECG patterns is accurately modeled and then used for recognition. Experimental studies using the Physikalisch-Technische Bundesanstalt (PTB) database are included to demonstrate the effectiveness of the approach.
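The phrase "dynamics expressed as constant RBF neural weights" can be illustrated with a stand-in example: a least-squares fit of Gaussian RBF weights to a toy dynamics map. This is illustrative only; the paper's deterministic learning operates on ECG dynamics, not this synthetic function.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian RBF feature matrix for scalar inputs."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Toy "system dynamics" to be modeled as a weighted sum of RBFs.
x = np.linspace(-2, 2, 200)
dynamics = np.sin(2 * x) * np.exp(-x ** 2)

# Fit constant weights w by least squares: Phi @ w ~ dynamics.
centers = np.linspace(-2, 2, 25)
Phi = rbf_features(x, centers, width=0.25)
w, *_ = np.linalg.lstsq(Phi, dynamics, rcond=None)

# The constant weights reproduce the dynamics to high accuracy.
assert np.max(np.abs(Phi @ w - dynamics)) < 1e-2
```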
2012-01-01
Abstract Introduction In the two cases described here, the subclavian artery was inadvertently cannulated during unsuccessful access to the internal jugular vein. The puncture was successfully closed using a closure device based on a collagen plug (Angio-Seal, St Jude Medical, St Paul, MN, USA). This technique is relatively simple and inexpensive. It can provide clinicians, such as intensive care physicians and anesthesiologists, with a safe and straightforward alternative to major surgery an...
Pest persistence and eradication conditions in a deterministic model for sterile insect release.
Gordillo, Luis F
2015-01-01
The release of sterile insects is an environmentally friendly pest control method used in integrated pest management programmes. Difference or differential equations based on Knipling's model often provide satisfactory qualitative descriptions of pest populations subject to sterile release at relatively high densities with large mating encounter rates, but fail otherwise. In this paper, I derive and explore numerically deterministic population models that include sterile release together with scarce mating encounters in the particular case of species with long lifespans and multiple matings. The differential equations account separately for the effects of mating failure due to sterile male release and the frequency of mating encounters. When insect spatial spread is incorporated through diffusion terms, computations reveal the possibility of steady pest persistence in finite-size patches. In the presence of density-dependent regulation, it is observed that sterile release might contribute to inducing sudden suppression of the pest population.
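A Knipling-type model of this kind can be sketched as follows (an illustrative variant, not the paper's exact equations): the fraction N/(N+S) of matings that are fertile couples the wild population N to the sterile release level S, and above a critical S the pest declines.

```python
def simulate_pest(N0, S, r=0.4, mu=0.2, dt=0.05, steps=4000):
    """Euler integration of an illustrative Knipling-type model:
        dN/dt = r * N * (N / (N + S)) - mu * N,
    where N/(N+S) is the probability that a wild female mates with a
    fertile (wild) male given S released sterile males."""
    N = N0
    for _ in range(steps):
        N += dt * (r * N * (N / (N + S)) - mu * N) if N + S > 0 else 0.0
        N = max(N, 0.0)
    return N

# With enough sterile males the pest is driven toward eradication;
# with none it grows exponentially.
assert simulate_pest(100.0, S=200.0) < 1.0
assert simulate_pest(100.0, S=0.0) > 100.0
```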
Deterministic Partial Differential Equation Model for Dose Calculation in Electron Radiotherapy
Duclous, Roland; Frank, Martin
2009-01-01
Treatment with high energy ionizing radiation is one of the main methods in modern cancer therapy that is in clinical use. During the last decades, two main approaches to dose calculation were used, Monte Carlo simulations and semi-empirical models based on Fermi-Eyges theory. A third way to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. Starting from these, we derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free-streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on [BerCharDub], that exactly preserves key properties of the analytical solution on the discrete level. Several numerical results for test cases from the medical physics literature are presented.
Evaluating consistency of deterministic streamline tractography in non-linearly warped DTI data
Adluru, Nagesh; Tromp, Do P M; Davidson, Richard J; Zhang, Hui; Alexander, Andrew L
2016-01-01
Tractography is typically performed for each subject using the diffusion tensor imaging (DTI) data in its native subject space rather than in a space common to the entire study cohort. Although tractography can also be performed on a population average in a normalized space, the latter is viewed less favorably at the individual subject level because it requires spatial transformations of the DTI data that involve non-linear warping and reorientation of the tensors. Although the commonly used reorientation strategies such as finite strain and preservation of principal direction are expected to be adequately accurate for voxel-based analyses of DTI measures such as fractional anisotropy (FA) and mean diffusivity (MD), the reorientations are not exact except in the case of rigid transformations. Small imperfections in reorientation at the individual voxel level accumulate and could potentially affect the tractography results adversely. This study aims to evaluate and compare deterministic white matter fiber t...
Saligrama, Venkatesh
2008-01-01
In this paper we present a new family of discrete sequences having ``random like'' uniformly decaying auto-correlation properties. The new class of infinite length sequences are higher order chirps constructed using irrational numbers. Exploiting results from the theory of continued fractions and diophantine approximations, we show that the class of sequences so formed has the property that the worst-case auto-correlation coefficients for every finite length sequence decays at a polynomial rate. These sequences display doppler immunity as well. We also show that Toeplitz matrices formed from such sequences satisfy restricted-isometry-property (RIP), a concept that has played a central role recently in Compressed Sensing applications. Compressed sensing has conventionally dealt with sensing matrices with arbitrary components. Nevertheless, such arbitrary sensing matrices are not appropriate for linear system identification and one must employ Toeplitz structured sensing matrices. Linear system identification p...
Stojković, Milan; Kostić, Srđan; Plavšić, Jasna; Prohaska, Stevan
2017-01-01
The authors present a detailed procedure for modelling of mean monthly flow time-series using records of the Great Morava River (Serbia). The proposed procedure overcomes a major challenge of other available methods by disaggregating the time series in order to capture the main properties of the hydrologic process in both the long run and the short run. The main assumption of the conducted research is that a time series of monthly flow rates represents a stochastic process comprised of deterministic, stochastic and random components, the former of which can be further decomposed into a composite trend and two periodic components (short-term or seasonal periodicity and long-term or multi-annual periodicity). In the present paper, the deterministic component of a monthly flow time-series is assessed by spectral analysis, whereas its stochastic component is modelled using cross-correlation transfer functions, artificial neural networks and polynomial regression. The results suggest that the deterministic component can be expressed solely as a function of time, whereas the stochastic component changes as a nonlinear function of climatic factors (rainfall and temperature). For the calibration period, the results of the analysis indicate a lower value of the Kling-Gupta Efficiency in the case of transfer functions (0.736), whereas artificial neural networks and polynomial regression suggest a significantly better match between the observed and simulated values (0.841 and 0.891, respectively). It seems that transfer functions fail to capture high monthly flow rates, whereas the model based on polynomial regression reproduces high monthly flows much better because it is able to successfully capture a highly nonlinear relationship between the inputs and the output. The proposed methodology that uses a combination of artificial neural networks, spectral analysis and polynomial regression for deterministic and stochastic components can be applied to forecast monthly or seasonal flow rates.
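The spectral-analysis step for the deterministic component can be sketched on synthetic monthly data (illustrative only, not the Great Morava records): an FFT of the detrended series recovers the dominant seasonal period.

```python
import numpy as np

def dominant_period(series):
    """Dominant periodicity (in samples) of a series via the FFT
    amplitude spectrum, after removing the mean level."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x))
    k = np.argmax(spectrum[1:]) + 1   # skip the zero-frequency bin
    return 1.0 / freqs[k]

# Synthetic monthly flows: annual cycle plus noise over 30 years;
# the 12-month seasonal period is recovered.
rng = np.random.default_rng(1)
months = np.arange(360)
flow = 50 + 20 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, 360)
assert round(dominant_period(flow)) == 12
```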
Doses from aquatic pathways in CSA-N288.1: deterministic and stochastic predictions compared
Energy Technology Data Exchange (ETDEWEB)
Chouhan, S.L.; Davis, P
2002-04-01
The conservatism and uncertainty in the Canadian Standards Association (CSA) model for calculating derived release limits (DRLs) for aquatic emissions of radionuclides from nuclear facilities was investigated. The model was run deterministically using the recommended default values for its parameters, and its predictions were compared with the distributed doses obtained by running the model stochastically. Probability density functions (PDFs) for the model parameters for the stochastic runs were constructed using data reported in the literature and results from experimental work done by AECL. The default values recommended for the CSA model for some parameters were found to be lower than the central values of the PDFs in about half of the cases. Doses (ingestion, groundshine and immersion) calculated as the median of 400 stochastic runs were higher than the deterministic doses predicted using the CSA default values of the parameters for more than half (85 of the 163) of the cases. Thus, the CSA model is not conservative for calculating DRLs for aquatic radionuclide emissions, as it was intended to be. The output of the stochastic runs was used to determine the uncertainty in the CSA model predictions. The uncertainty in the total dose was high, with the 95% confidence interval exceeding an order of magnitude for all radionuclides. A sensitivity study revealed that total ingestion doses to adults predicted by the CSA model are sensitive primarily to water intake rates, bioaccumulation factors for fish and marine biota, dietary intakes of fish and marine biota, the fraction of consumed food arising from contaminated sources, the irrigation rate, occupancy factors and the sediment solid/liquid distribution coefficient. To improve DRL models, further research into aquatic exposure pathways should concentrate on reducing the uncertainty in these parameters. The PDFs given here can be used by other modellers to test and improve their models and to ensure that DRLs
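The deterministic-versus-stochastic comparison can be sketched with a toy multiplicative dose model (hypothetical parameters and distributions, not the CSA model itself): when the recommended defaults sit below the medians of the parameter PDFs, the deterministic run under-predicts the median stochastic dose.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ingestion-dose model: dose = concentration * bioaccumulation
# factor * intake (all names and values are illustrative).
defaults = {"conc": 1.0, "baf": 50.0, "intake": 0.02}
deterministic_dose = defaults["conc"] * defaults["baf"] * defaults["intake"]

# Stochastic use: draw each parameter from a lognormal PDF whose
# median lies above the recommended default.
n = 4000
conc = rng.lognormal(np.log(1.2), 0.5, n)
baf = rng.lognormal(np.log(60.0), 0.8, n)
intake = rng.lognormal(np.log(0.025), 0.3, n)
doses = conc * baf * intake

# The median stochastic dose exceeds the deterministic prediction,
# i.e. the default-parameter run is not conservative here.
assert np.median(doses) > deterministic_dose
```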
Institute of Scientific and Technical Information of China (English)
周明卫; 傅震; 朱风仪; 赵春生; 曹胜武; 骆慧; 刘宁
2015-01-01
Objective: To summarize the clinical outcomes of minimally invasive keyhole craniotomy for infratentorial lesions. Methods: Minimally invasive keyhole surgery was performed under the microscope and neuroendoscope in 285 cases with infratentorial lesions. The skin incision was 3-5 cm in length and the bone flap was 1-3 cm in diameter. A midline keyhole approach was used in 4 cases with cerebellar hemisphere lesions, and a suboccipital retrosigmoid keyhole approach was used in 281 cases with lesions in the cerebellopontine angle area. Results: Tumor resection was performed successfully in 152 cases, of whom 96 had acoustic neuroma, 23 meningioma, 17 cholesteatoma and 12 trigeminal neurinoma (total resection in 8 cases and subtotal resection in 4 cases extending into the middle cranial fossa). Microvascular decompression was completed successfully in 129 cases, resection of deformed vessels in 3 cases and resection of a giant arachnoid cyst in 1 case. Conclusion: Assisted by the microscope and endoscope, the minimally invasive keyhole approach provides effective operating space in infratentorial surgery, with the advantages of minimal trauma, few complications and rapid recovery, and can be applied to surgery for lesions of the cerebellum and cerebellopontine angle area.
Locally minimal topological groups
Außenhofer, Lydia; Dikranjan, Dikran; Domínguez, Xabier
2009-01-01
A Hausdorff topological group $(G,\tau)$ is called locally minimal if there exists a neighborhood $U$ of 0 in $\tau$ such that $U$ fails to be a neighborhood of zero in any Hausdorff group topology on $G$ which is strictly coarser than $\tau$. Examples of locally minimal groups are all subgroups of Banach-Lie groups, all locally compact groups and all minimal groups. Motivated by the fact that locally compact NSS groups are Lie groups, we study the connection between local minimality and the NSS property, establishing that under certain conditions, locally minimal NSS groups are metrizable. A symmetric subset of an abelian group containing zero is said to be a GTG set if it generates a group topology in an analogous way as convex and symmetric subsets are unit balls for pseudonorms on a vector space. We consider topological groups which have a neighborhood basis at zero consisting of GTG sets. Examples of these locally GTG groups are: locally pseudo-convex spaces, groups uniformly free from small subgroups (...
Directory of Open Access Journals (Sweden)
Gama-Rodrigues Joaquim J.
2000-01-01
Full Text Available The Peutz-Jeghers syndrome is a hereditary disease that requires frequent endoscopic and surgical intervention, leading to secondary complications such as short bowel syndrome. CASE REPORT: This paper reports on a 15-year-old male patient with a family history of the disease, who underwent surgery for treatment of an intestinal occlusion due to a small intestine intussusception. DISCUSSION: An intra-operative fiberscopic procedure was included for the detection and treatment of numerous polyps distributed along the small intestine. Enterotomy was performed to treat only the larger polyps, therefore limiting the intestinal resection to smaller segments. The postoperative follow-up was uneventful. CONCLUSION: We point out the importance of conservative treatment for patients with this syndrome, especially those who will undergo repeated surgical interventions because of clinical manifestation while they are still young.
Izumi, Kosuke; Conlin, Laura K; Berrodin, Donna; Fincher, Christopher; Wilkens, Alisha; Haldeman-Englert, Chad; Saitta, Sulagna C; Zackai, Elaine H; Spinner, Nancy B; Krantz, Ian D
2012-12-01
Pallister-Killian syndrome (PKS) is a multisystem sporadic genetic condition characterized by facial anomalies, variable developmental delay and intellectual impairment, hypotonia, hearing loss, seizures, pigmentary skin differences, temporal alopecia, diaphragmatic hernia, congenital heart defects, and other systemic abnormalities. PKS is typically caused by the presence of a supernumerary isochromosome composed of the short arms of chromosome 12 resulting in tetrasomy 12p, which is often present in a tissue limited mosaic state. The PKS phenotype has also often been observed in individuals with complete or partial duplications of 12p (trisomy 12p rather than tetrasomy 12p) as the result of an interstitial duplication or unbalanced translocation. We have identified a proposita with PKS who has two small de novo interstitial duplications of 12p which, along with a review of previously reported cases, has allowed us to define a minimum critical region for PKS.
Adaptive Alternating Minimization Algorithms
Niesen, Urs; Wornell, Gregory
2007-01-01
The classical alternating minimization (or projection) algorithm has been successful in the context of solving optimization problems over two variables or equivalently of finding a point in the intersection of two sets. The iterative nature and simplicity of the algorithm has led to its application to many areas such as signal processing, information theory, control, and finance. A general set of sufficient conditions for the convergence and correctness of the algorithm is quite well-known when the underlying problem parameters are fixed. In many practical situations, however, the underlying problem parameters are changing over time, and the use of an adaptive algorithm is more appropriate. In this paper, we study such an adaptive version of the alternating minimization algorithm. As a main result of this paper, we provide a general set of sufficient conditions for the convergence and correctness of the adaptive algorithm. Perhaps surprisingly, these conditions seem to be the minimal ones one would expect in ...
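The classical (non-adaptive) algorithm the paper builds on can be sketched with two fixed sets, a line and a circle: alternately projecting onto each set converges to a point of their intersection. This is an illustrative instance with fixed problem parameters, not the adaptive variant studied in the paper.

```python
import numpy as np

def alternating_projections(p, line_dir, center, radius, iters=200):
    """Alternate projections between a line through the origin (direction
    line_dir) and a circle (given center and radius), starting from p."""
    d = np.asarray(line_dir, dtype=float)
    d = d / np.linalg.norm(d)
    c = np.asarray(center, dtype=float)
    x = np.asarray(p, dtype=float)
    for _ in range(iters):
        x = np.dot(x, d) * d                              # project onto line
        x = c + radius * (x - c) / np.linalg.norm(x - c)  # project onto circle
    return x

# The line y = x and the unit circle centred at the origin intersect at
# (1/sqrt(2), 1/sqrt(2)); the iterates converge to that point.
x = alternating_projections([3.0, -1.0], [1.0, 1.0], [0.0, 0.0], 1.0)
assert np.allclose(x, [np.sqrt(0.5), np.sqrt(0.5)], atol=1e-6)
```

The adaptive setting considered in the paper replaces the two fixed sets with sets (or problem parameters) that change from iteration to iteration.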
Directory of Open Access Journals (Sweden)
Brazier John E
2003-04-01
Full Text Available Abstract Background The SF-6D is a new single summary preference-based measure of health derived from the SF-36. Empirical work is required to determine the smallest change in SF-6D scores that can be regarded as important and meaningful for health professionals, patients and other stakeholders. Objectives To use anchor-based methods to determine the minimally important difference (MID) for the SF-6D for various datasets. Methods All responders to the original SF-36 questionnaire can be assigned an SF-6D score provided the 11 items used in the SF-6D have been completed. The SF-6D can be regarded as a continuous outcome scored on a 0.29 to 1.00 scale, with 1.00 indicating "full health". Anchor-based methods examine the relationship between a health-related quality of life (HRQoL) measure and an independent measure (or anchor) to elucidate the meaning of a particular degree of change. One anchor-based approach uses an estimate of the MID, the difference on the QoL scale corresponding to a self-reported small but important change on a global scale. Patients were followed for a period of time, then asked, using question 2 of the SF-36 as our global rating scale (which is not part of the SF-6D), if their general health is much better (5), somewhat better (4), stayed the same (3), somewhat worse (2) or much worse (1) compared to the last time they were assessed. We considered patients whose global rating score was 4 or 2 as having experienced some change equivalent to the MID. In patients who reported a worsening of health (global change of 1 or 2) the sign of the change in the SF-6D score was reversed (i.e. multiplied by minus one). The MID was then taken as the mean change on the SF-6D scale of the patients who scored 2 or 4. Results This paper describes the MID for the SF-6D from seven longitudinal studies that had previously used the SF-36. Conclusions From the seven reviewed studies (with nine patient groups) the MID for the SF-6D ranged from 0
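The anchor-based MID estimate described in the Methods reduces to a short computation: reverse the sign of score changes for patients who worsened (global ratings 1 or 2), then average the changes of those rated 2 or 4. The toy records below are illustrative and not data from the seven studies.

```python
# Hedged sketch of the anchor-based MID calculation described above.
# records: list of (global_rating, sf6d_change) tuples; ratings follow
# question 2 of the SF-36 (5 = much better ... 1 = much worse).

def estimate_mid(records):
    changes = []
    for rating, delta in records:
        if rating in (1, 2):          # worsened: reverse the sign of the change
            delta = -delta
        if rating in (2, 4):          # "somewhat" changed: counts toward the MID
            changes.append(delta)
    return sum(changes) / len(changes)

mid = estimate_mid([(4, 0.05), (2, -0.03), (4, 0.07), (3, 0.00), (5, 0.20)])
# mean of [0.05, 0.03, 0.07] = 0.05
```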
Deterministic Squeezed States with Joint Measurements and Feedback
Cox, Kevin C; Weiner, Joshua M; Thompson, James K
2015-01-01
We demonstrate the creation of entangled or spin-squeezed states using a joint measurement and real-time feedback. The pseudo-spin state of an ensemble of $N= 5\\times 10^4$ laser-cooled $^{87}$Rb atoms is deterministically driven to a specified population state with angular resolution that is a factor of 5.5(8) (7.4(6) dB) in variance below the standard quantum limit for unentangled atoms -- comparable to the best enhancements using only unitary evolution. Without feedback, conditioning on the outcome of the joint pre-measurement, we directly observe up to 59(8) times (17.7(6) dB) improvement in quantum phase variance relative to the standard quantum limit for $N=4\\times 10^5$ atoms. This is the largest reported entanglement enhancement to date in any system.
Molecular dynamics with deterministic and stochastic numerical methods
Leimkuhler, Ben
2015-01-01
This book describes the mathematical underpinnings of algorithms used for molecular dynamics simulation, including both deterministic and stochastic numerical methods. Molecular dynamics is one of the most versatile and powerful methods of modern computational science and engineering and is used widely in chemistry, physics, materials science and biology. Understanding the foundations of numerical methods means knowing how to select the best one for a given problem (from the wide range of techniques on offer) and how to create new, efficient methods to address particular challenges as they arise in complex applications. Aimed at a broad audience, this book presents the basic theory of Hamiltonian mechanics and stochastic differential equations, as well as topics including symplectic numerical methods, the handling of constraints and rigid bodies, the efficient treatment of Langevin dynamics, thermostats to control the molecular ensemble, multiple time-stepping, and the dissipative particle dynamics method...
Deterministic simulation of thermal neutron radiography and tomography
Pal Chowdhury, Rajarshi; Liu, Xin
2016-05-01
In recent years, thermal neutron radiography and tomography have gained much attention as one of the nondestructive testing methods. However, the application of thermal neutron radiography and tomography is hindered by their technical complexity, radiation shielding, and time-consuming data collection processes. Monte Carlo simulations have been developed in the past to improve the neutron imaging facility's ability. In this paper, a new deterministic simulation approach has been proposed and demonstrated to simulate neutron radiographs numerically using a ray tracing algorithm. This approach has made the simulation of neutron radiographs much faster than by previously used stochastic methods (i.e., Monte Carlo methods). The major problem with neutron radiography and tomography simulation is finding a suitable scatter model. In this paper, an analytic scatter model has been proposed that is validated by a Monte Carlo simulation.
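The ray-tracing idea behind such a deterministic radiograph simulation can be illustrated with the uncollided-flux (Beer-Lambert) term alone: each detector pixel records the exponential of the negative optical depth accumulated along its ray. The voxel grid and attenuation coefficients below are made up for illustration; the paper's scatter model is not reproduced here.

```python
# Minimal sketch of the attenuation part of a ray-traced radiograph:
# transmitted intensity = exp(-sum of mu * path_length) per ray.
import math

def trace_pixel(mu_column, step):
    """Transmission along one ray crossing voxels with coefficients mu_column."""
    optical_depth = sum(mu * step for mu in mu_column)
    return math.exp(-optical_depth)

# a 3-pixel "radiograph" of an object that is thicker in the middle
image = [trace_pixel(col, step=0.5) for col in ([0.1], [0.1, 0.8], [0.1])]
# the middle pixel transmits less, i.e. appears darker
```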
Simple deterministic dynamical systems with fractal diffusion coefficients
Klages, R
1999-01-01
We analyze a simple model of deterministic diffusion. The model consists of a one-dimensional periodic array of scatterers in which point particles move from cell to cell as defined by a piecewise linear map. The microscopic chaotic scattering process of the map can be changed by a control parameter. This induces a parameter dependence for the macroscopic diffusion coefficient. We calculate the diffusion coefficient and the largest eigenmodes of the system by using Markov partitions and by solving the eigenvalue problems of respective topological transition matrices. For different boundary conditions we find that the largest eigenmodes of the map match those of the simple phenomenological diffusion equation. Our main result is that the diffusion coefficient exhibits a fractal structure as the system parameter is varied. To understand the origin of this fractal structure, we give qualitative and quantitative arguments. These arguments relate the sequence of oscillations in the strength of the parameter-dep...
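Deterministic diffusion in such a map can be sketched directly: iterate a lifted piecewise linear map on an ensemble and estimate the diffusion coefficient from the mean-square displacement. The specific map and parameters below are a simplified stand-in, not the paper's exact model or its Markov-partition method.

```python
# Hedged sketch: a lifted piecewise linear map with slope a on each unit
# cell, and D estimated from the ensemble mean-square displacement.

def lifted_map(x, a):
    n = int(x // 1)          # index of the current unit cell
    r = x - n                # position inside the cell, in [0, 1)
    y = a * r if r < 0.5 else a * r + 1.0 - a
    return y + n             # lift back to the full line

def diffusion_coefficient(a, n_particles=2000, n_steps=200):
    total_sq = 0.0
    for i in range(n_particles):
        x0 = (i + 0.5) / n_particles   # uniform initial conditions in one cell
        x = x0
        for _ in range(n_steps):
            x = lifted_map(x, a)
        total_sq += (x - x0) ** 2
    return total_sq / (2.0 * n_steps * n_particles)

D = diffusion_coefficient(a=3.0)   # positive: particles spread diffusively
```

Sweeping the slope `a` in fine steps and plotting `D` against it is how the fractal parameter dependence reported in the abstract would show up numerically.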
Deterministic Computational Complexity of the Quantum Separability Problem
Ioannou, L M
2006-01-01
Ever since entanglement was identified as a computational and cryptographic resource, effort has been made to find an efficient way to tell whether a given density matrix represents an unentangled, or separable, state. Essentially, this is the quantum separability problem. In Section 1, I begin with a brief introduction to bipartite separability and entanglement, and a basic formal definition of the quantum separability problem. I conclude with a summary of one-sided tests for separability, including those involving semidefinite programming. In Section 2, I treat the separability problem as a computational decision problem and motivate its approximate formulations. After a review of basic complexity-theoretic notions, I discuss the computational complexity of the separability problem (including a Turing-NP-complete formulation of the problem and a proof of "strong NP-hardness" (based on a new NP-hardness proof by Gurvits)). In Section 3, I give a comprehensive survey and complexity analysis of deterministic a...
Deterministic single-file dynamics in collisional representation.
Marchesoni, F; Taloni, A
2007-12-01
We re-examine numerically the diffusion of a deterministic, or ballistic single file with preassigned velocity distribution (Jepsen's gas) from a collisional viewpoint. For a two-modal velocity distribution, where half the particles have velocity +/-c, the collisional statistics is analytically proven to reproduce the continuous time representation. For a three-modal velocity distribution with equal fractions, where less than 1/2 of the particles have velocity +/-c, with the remaining particles at rest, the collisional process is shown to be inhomogeneous; its stationary properties are discussed here by combining exact and phenomenological arguments. Collisional memory effects are then related to the negative power-law tails in the velocity autocorrelation functions, predicted earlier in the continuous time formalism. Numerical and analytical results for Gaussian and four-modal Jepsen's gases are also reported for the sake of comparison.
Capillary-mediated interface perturbations: Deterministic pattern formation
Glicksman, Martin E.
2016-09-01
Leibniz-Reynolds analysis identifies a 4th-order capillary-mediated energy field that is responsible for shape changes observed during melting, and for interface speed perturbations during crystal growth. Field-theoretic principles also show that capillary-mediated energy distributions cancel over large length scales, but modulate the interface shape on smaller mesoscopic scales. Speed perturbations reverse direction at specific locations where they initiate inflection and branching on unstable interfaces, thereby enhancing pattern complexity. Simulations of pattern formation by several independent groups of investigators using a variety of numerical techniques confirm that shape changes during both melting and growth initiate at locations predicted from interface field theory. Finally, limit cycles occur as an interface and its capillary energy field co-evolve, leading to synchronized branching. Synchronous perturbations produce classical dendritic structures, whereas asynchronous perturbations observed in isotropic and weakly anisotropic systems lead to chaotic-looking patterns that remain nevertheless deterministic.
Connection between stochastic and deterministic modelling of microbial growth.
Kutalik, Zoltán; Razaz, Moe; Baranyi, József
2005-01-21
We present in this paper various links between individual and population cell growth. Deterministic models of the lag and subsequent growth of a bacterial population and their connection with stochastic models for the lag and subsequent generation times of individual cells are analysed. We derived the individual lag time distribution inherent in population growth models, which shows that the Baranyi model allows a wide range of shapes for individual lag time distribution. We demonstrate that individual cell lag time distributions cannot be retrieved from population growth data. We also present the results of our investigation on the effect of the mean and variance of the individual lag time and the initial cell number on the mean and variance of the population lag time. These relationships are analysed theoretically, and their consequence for predictive microbiology research is discussed.
Scattering of electromagnetic light waves from a deterministic anisotropic medium
Li, Jia; Chang, Liping; Wu, Pinghui
2015-11-01
Based on the weak scattering theory of electromagnetic waves, analytical expressions are derived for the spectral densities and degrees of polarization of an electromagnetic plane wave scattered from a deterministic anisotropic medium. It is shown that the normalized spectral density of the scattered field depends strongly on the scattering angle and on the degree of polarization of the incident plane wave. The degree of polarization of the scattered field is likewise subject to variations of these parameters. In addition, the anisotropic effective radii of the dielectric susceptibility can essentially influence both the spectral density and the degree of polarization of the scattered field, which depend strongly on the effective radii of the medium. The obtained results may be applicable to determining anisotropic parameters of a medium by quantitatively measuring statistics of the far-zone scattered field.
Deterministic VLSI Block Placement Algorithm Using Less Flexibility First Principle
Institute of Scientific and Technical Information of China (English)
DONG SheQin (董社勤); HONG XianLong (洪先龙); WU YuLiang (吴有亮); GU Jun (顾钧)
2003-01-01
In this paper, a simple yet effective deterministic algorithm for solving the VLSI block placement problem is proposed, considering packing area and interconnect wiring simultaneously. The algorithm is based on a principle inspired by observations of how ancient professionals solved similar problems. Using the so-called Less Flexibility First principle, blocks with the least packing flexibility in shape and interconnect requirement are greedily packed into the empty space with the least packing flexibility. Experimental results demonstrate that the algorithm, though simple, is quite effective in solving the problem. The same philosophy could also be used in designing efficient heuristics for other hard problems, such as placement with preplaced modules, placement with L/T-shaped modules, etc.
Sensitivity analysis in a Lassa fever deterministic mathematical model
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus that causes the Lassa fever is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
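A standard way to rank parameters as in this abstract is the normalized forward-sensitivity (elasticity) index S_p = (p / R0) * dR0/dp, which can be approximated by finite differences. The toy reproduction number below (beta * contact / gamma) is illustrative only, not the Lassa model of the paper.

```python
# Hedged sketch of normalized sensitivity indices of R0 with respect to
# each parameter, via a forward finite difference. Toy model assumed.

def r0(params):
    return params["beta"] * params["contact"] / params["gamma"]

def sensitivity_index(params, name, h=1e-6):
    base = r0(params)
    bumped = dict(params)
    bumped[name] = params[name] * (1 + h)   # relative bump of size h
    return (r0(bumped) - base) / (base * h)  # approximates (p/R0) * dR0/dp

p = {"beta": 0.3, "contact": 4.0, "gamma": 0.2}
indices = {name: sensitivity_index(p, name) for name in p}
# beta and contact have index ~ +1; gamma, appearing in the denominator, ~ -1
```

The parameter with the largest |index| is the one the abstract would flag as the most influential target for control.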
Deterministic superresolution with coherent states at the shot noise limit
DEFF Research Database (Denmark)
Distante, Emanuele; Jezek, Miroslav; Andersen, Ulrik L.
2013-01-01
Interference of light fields plays an important role in various high-precision measurement schemes. It has been shown that superresolving phase measurements beyond the standard coherent state limit can be obtained either by using maximally entangled multiparticle states of light or using complex detection approaches. Here we show that superresolving phase measurements at the shot noise limit can be achieved without resorting to nonclassical optical states or to low-efficiency detection processes. Using robust coherent states of light, high-efficiency homodyne detection, and a deterministic binarization processing technique, we show a narrowing of the interference fringes that scales with 1/√N where N is the mean number of photons of the coherent state. Experimentally we demonstrate a 12-fold narrowing at the shot noise limit.
Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes
DEFF Research Database (Denmark)
Starke, Jens; Reichert, Christian; Eiswirth, Markus;
2007-01-01
Three levels of modeling, microscopic, mesoscopic and macroscopic, are discussed for the CO oxidation on low-index platinum single crystal surfaces. The introduced models on the microscopic and mesoscopic level are stochastic while the model on the macroscopic level is deterministic. It can be shown how effects of stochastic origin can be observed in experiments. The models include a new approach to the platinum phase transition, which allows for a unification of existing models for Pt(100) and Pt(110). The rich nonlinear dynamical behavior of the macroscopic reaction kinetics is investigated and shows good agreement with low pressure experiments. Furthermore, for intermediate pressures, noise-induced pattern formation, which has not been captured by earlier models, can be reproduced in stochastic simulations with the mesoscopic model.
Turning Indium Oxide into a Superior Electrocatalyst: Deterministic Heteroatoms
Zhang, Bo; Zhang, Nan Nan; Chen, Jian Fu; Hou, Yu; Yang, Shuang; Guo, Jian Wei; Yang, Xiao Hua; Zhong, Ju Hua; Wang, Hai Feng; Hu, P.; Zhao, Hui Jun; Yang, Hua Gui
2013-10-01
Efficient electrocatalysts for many heterogeneous catalytic processes in energy conversion and storage systems must possess the necessary surface active sites. Here we identify, from X-ray photoelectron spectroscopy and density functional theory calculations, that controlling charge density redistribution via the atomic-scale incorporation of heteroatoms is paramount for importing surface active sites. We engineer deterministic nitrogen atoms inserted into the bulk material to preferentially expose active sites, turning an inactive material into an efficient electrocatalyst. The excellent electrocatalytic activity of N-In2O3 nanocrystals leads to higher performance of dye-sensitized solar cells (DSCs) than DSCs fabricated with Pt. This successful strategy provides a rational design route for transforming abundant materials into highly efficient electrocatalysts. More importantly, the discovery that the transparent conductive oxide (TCO) commonly used in DSCs can be turned into a counter electrode material means that, besides decreasing the cost, the device structure and processing techniques of DSCs can be simplified in future.
Boyer, D.; Miramontes, O.; Larralde, H.
2009-10-01
Many studies on animal and human movement patterns report the existence of scaling laws and power-law distributions. Whereas a number of random walk models have been proposed to explain observations, in many situations individuals actually rely on mental maps to explore strongly heterogeneous environments. In this work, we study a model of a deterministic walker, visiting sites randomly distributed on the plane and with varying weight or attractiveness. At each step, the walker minimizes a function that depends on the distance to the next unvisited target (cost) and on the weight of that target (gain). If the target weight distribution is a power law, p(k) ~ k^-β, in some range of the exponent β, the foraging medium induces movements that are similar to Lévy flights and are characterized by non-trivial exponents. We explore variations of the choice rule in order to test the robustness of the model and argue that the addition of noise has a limited impact on the dynamics in strongly disordered media.
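The walker's step rule can be sketched directly: from the current position, move to the unvisited target minimizing a cost that grows with distance and shrinks with the target's weight. The specific cost, distance/weight, is one plausible choice for illustration; the paper's exact rule may differ.

```python
# Hedged sketch of the deterministic walker: greedily visit the unvisited
# target with the smallest cost = distance / weight. Toy targets assumed.
import math

def next_target(pos, targets, visited):
    best, best_cost = None, float("inf")
    for i, (x, y, w) in enumerate(targets):
        if i in visited:
            continue
        cost = math.hypot(x - pos[0], y - pos[1]) / w   # cheap if close or heavy
        if cost < best_cost:
            best, best_cost = i, cost
    return best

targets = [(1.0, 0.0, 1.0), (2.0, 0.0, 10.0), (0.0, 5.0, 1.0)]  # (x, y, weight)
visited, pos, path = set(), (0.0, 0.0), []
for _ in range(len(targets)):
    i = next_target(pos, targets, visited)
    visited.add(i)
    path.append(i)
    pos = targets[i][:2]
# the heavy target at distance 2 is chosen first: cost 2/10 beats 1/1
```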
Extraction of the deterministic ingredient of a dynamic geodetic control network
Shahar, L.; Even-Tzur, G.
2012-01-01
A minimum constraints solution, which resolves the datum defect of a control network, is an arbitrary solution that may result in a systematic error in the estimation of the deformation parameters. This error is not derived from measurements and is usually inconsistent with the geophysical reality. A free network is affected only by errors of measurement and, therefore, a free network is an accepted way of coping with this problem. Study of deformations based on geodetic measurements is usually performed today by defining a kinematic model. Such a model, when used to describe a complex geophysical environment, can lead to only a partial estimation of the deterministic dynamics that characterize the entire network. These dynamics are themselves expressed in the measurements, as the adjustment system's residuals. The current paper presents an extension of the definition of the parameters that are re-estimated. This extension enables the cleaning of measurements by means of the extraction of datum elements that have been defined by geodetic measurement. This cleaning minimizes the effects of these elements on the re-estimated deformation. The proposed algorithm may be applied to achieve the simultaneous estimation of the physical parameters that define the geophysical activity in the network.
Covey, Jason
2008-01-01
We provide deterministic, polynomial-time computable voting rules that approximate Dodgson's and (the ``minimization version'' of) Young's scoring rules to within a logarithmic factor. Our approximation of Dodgson's rule is tight up to a constant factor, as Dodgson's rule is $\\NP$-hard to approximate to within some logarithmic factor. The ``maximization version'' of Young's rule is known to be $\\NP$-hard to approximate by any constant factor. Both approximations are simple, and natural as rules in their own right: Given a candidate we wish to score, we can regard either its Dodgson or Young score as the edit distance between a given set of voter preferences and one in which the candidate to be scored is the Condorcet winner. (The difference between the two scoring rules is the type of edits allowed.) We regard the marginal cost of a sequence of edits to be the number of edits divided by the number of reductions (in the candidate's deficit against any of its opponents in the pairwise race against that opponent...
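Both Dodgson and Young scores measure how far a preference profile is from making a given candidate the Condorcet winner, so the basic primitive is the pairwise-majority check sketched below. Profiles here are lists of rankings (best first); the data are illustrative, not from the paper.

```python
# Hedged sketch of the Condorcet-winner primitive underlying the Dodgson
# and Young scoring rules discussed above.

def beats(profile, a, b):
    """True if a strict majority of voters rank a above b."""
    wins = sum(r.index(a) < r.index(b) for r in profile)
    return wins > len(profile) / 2

def condorcet_winner(profile, candidates):
    for c in candidates:
        if all(beats(profile, c, d) for d in candidates if d != c):
            return c
    return None   # a Condorcet winner need not exist

profile = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
w = condorcet_winner(profile, ["a", "b", "c"])
# "a" beats "b" (2-1) and "c" (3-0), so "a" is the Condorcet winner
```

A candidate's Dodgson or Young score is then the minimum number of allowed edits to the profile after which this check succeeds for that candidate.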
Immonen, Taina; Gibson, Richard; Leitner, Thomas; Miller, Melanie A; Arts, Eric J; Somersalo, Erkki; Calvetti, Daniela
2012-11-01
We present a new hybrid stochastic-deterministic, spatially distributed computational model to simulate growth competition assays on a relatively immobile monolayer of peripheral blood mononuclear cells (PBMCs), commonly used for determining ex vivo fitness of human immunodeficiency virus type-1 (HIV-1). The novel features of our approach include incorporation of viral diffusion through a deterministic diffusion model while simulating cellular dynamics via a stochastic Markov chain model. The model accounts for multiple infections of target cells, CD4-downregulation, and the delay between the infection of a cell and the production of new virus particles. The minimum threshold level of infection induced by a virus inoculum is determined via a series of dilution experiments, and is used to determine the probability of infection of a susceptible cell as a function of local virus density. We illustrate how this model can be used for estimating the distribution of cells infected by either a single virus type or two competing viruses. Our model captures experimentally observed variation in the fitness difference between two virus strains, and suggests a way to minimize variation and dual infection in experiments.
Logarithmic Superconformal Minimal Models
Pearce, Paul A; Tartaglia, Elena
2013-01-01
The higher fusion level logarithmic minimal models LM(P,P';n) have recently been constructed as the diagonal GKO cosets (A_1^{(1)})_k oplus (A_1^{(1)})_n / (A_1^{(1)})_{k+n} where n>0 is an integer fusion level and k=nP/(P'-P)-2 is a fractional level. For n=1, these are the logarithmic minimal models LM(P,P'). For n>1, we argue that these critical theories are realized on the lattice by n x n fusion of the n=1 models. For n=2, we call them logarithmic superconformal minimal models LSM(p,p') where P=|2p-p'|, P'=p' and p,p' are coprime, and they share the central charges of the rational superconformal minimal models SM(P,P'). Their mathematical description entails the fused planar Temperley-Lieb algebra which is a spin-1 BMW tangle algebra with loop fugacity beta_2=x^2+1+x^{-2} and twist omega=x^4 where x=e^{i(p'-p)pi/p'}. Examples are superconformal dense polymers LSM(2,3) with c=-5/2, beta_2=0 and superconformal percolation LSM(3,4) with c=0, beta_2=1. We calculate the free energies analytically. By numerical...
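The quoted loop fugacities can be checked numerically from the formula beta_2 = x^2 + 1 + x^{-2} with x = exp(i(p'-p)π/p'): the stated values are beta_2 = 0 for superconformal dense polymers LSM(2,3) and beta_2 = 1 for superconformal percolation LSM(3,4).

```python
# Numeric check of the loop fugacities quoted in the abstract.
import cmath

def beta2(p, pp):
    """beta_2 = x^2 + 1 + x^{-2} with x = exp(i (p'-p) pi / p')."""
    x = cmath.exp(1j * (pp - p) * cmath.pi / pp)
    return (x**2 + 1 + x**-2).real   # imaginary part cancels: 2cos(2θ) + 1

# LSM(2,3): x = e^{i pi/3}, beta_2 = 2cos(2pi/3) + 1 = 0
# LSM(3,4): x = e^{i pi/4}, beta_2 = 2cos(pi/2)  + 1 = 1
```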
Minimal constrained supergravity
Directory of Open Access Journals (Sweden)
N. Cribiori
2017-01-01
Full Text Available We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so-called "de Sitter" supergravities because we consider constraints directly eliminating the auxiliary fields of the gravity multiplet.
Prostate resection - minimally invasive
Ill-Posedness of sublinear minimization problems
Directory of Open Access Journals (Sweden)
S. Issa
2011-04-01
Full Text Available It is well known that minimization problems involving sublinear regularization terms are ill-posed in Sobolev spaces. These results were recently extended to spaces of bounded variation functions BV in the special case of bounded regularization terms. In this note, a generalization to sublinear regularization is presented in BV spaces. Notice that our results are optimal in the sense that linear regularization leads to well-posed minimization problems in BV spaces.
The advantages of minimally invasive dentistry.
Christensen, Gordon J
2005-11-01
Minimally invasive dentistry, in cases in which it is appropriate, is a concept that preserves dentitions and supporting structures. In this column, I have discussed several examples of minimally invasive dental techniques. This type of dentistry is gratifying for dentists and appreciated by patients. If more dentists would practice it, the dental profession could enhance the public's perception of its honesty and increase its professionalism as well.
Two tractable subclasses of minimal unsatisfiable formulas
Institute of Scientific and Technical Information of China (English)
赵希顺; 丁德成
1999-01-01
The minimal unsatisfiability problem is considered for propositional formulas in CNF which, in the case of variables x1,…,xn, consist of n+k clauses including x1 ∨ … ∨ xn and ¬x1 ∨ … ∨ ¬xn. It is shown that when k ≤ 4 the minimal unsatisfiability problem can be solved in polynomial time.
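Minimal unsatisfiability means the formula is unsatisfiable but every proper subset of its clauses is satisfiable. For tiny instances this can be checked by brute force, as sketched below; the example is the n = 2, k = 2 member of the clause family above, not a construction from the paper.

```python
# Hedged sketch: brute-force check of minimal unsatisfiability for a small
# CNF. Clauses are tuples of signed integers, e.g. (1, -2) means x1 or not x2.
from itertools import product

def satisfiable(clauses, n):
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def minimally_unsat(clauses, n):
    if satisfiable(clauses, n):
        return False
    return all(satisfiable(clauses[:i] + clauses[i+1:], n)
               for i in range(len(clauses)))

# n = 2 variables, n + 2 clauses including x1 v x2 and -x1 v -x2
f = [(1, 2), (-1, -2), (1, -2), (-1, 2)]
# f is unsatisfiable, yet dropping any one clause makes it satisfiable
```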
Minimal Braid in Applied Symbolic Dynamics
Institute of Scientific and Technical Information of China (English)
张成; 张亚刚; 彭守礼
2003-01-01
Based on the minimal braid assumption, three-dimensional periodic flows of a dynamical system are reconstructed in the case of unimodal map, and their topological structures are compared with those of the periodic orbits of the Rossler system in phase space through the numerical experiment. The numerical results justify the validity of the minimal braid assumption which provides a suspension from one-dimensional symbolic dynamics in the Poincare section to the knots of three-dimensional periodic flows.
Institute of Scientific and Technical Information of China (English)
王旭菁; 王永坤; 陈波
2016-01-01
Objective: To investigate the safety and efficacy of single-port laparoscopy combined with minimally invasive gallbladder-preserving biliary endoscopy for the treatment of gallbladder stones (polyps). Methods: Retrospective analysis of the clinical data of 161 cases treated between January 2012 and January 2015 by transumbilical single-port laparoscopy combined with biliary endoscopy with minimally invasive preservation of the gallbladder. Results: All operations were completed successfully. Operation time was 30-115 min (mean 70.3 ± 15.8 min), intraoperative blood loss was 5-50 ml (mean 15.5 ± 5.7 ml), and postoperative hospital stay was 2-6 days (mean 3.4 ± 0.7 days). Postoperatively there were 6 cases of nausea and vomiting, 1 case of incisional fat liquefaction, 1 case of urinary retention and 1 case of diarrhea, all cured after symptomatic treatment. Over a follow-up of 3-36 months, 131 cases were followed up (a loss-to-follow-up rate of 18.6%). Recurrence occurred in 3 cases (a recurrence rate of 1.9%), which were cured by a second operation. Conclusion: Transumbilical single-port laparoscopy combined with gallbladder-preserving biliary endoscopy for gallbladder stones (polyps) is safe and feasible, with the advantages of less trauma, fewer complications, a low recurrence rate and fast recovery, and can improve patients' postoperative quality of life.
Blackfolds, plane waves and minimal surfaces
Armas, Jay; Blau, Matthias
2015-01-01
Minimal surfaces in Euclidean space provide examples of possible non-compact horizon geometries and topologies in asymptotically flat space-time. On the other hand, the existence of limiting surfaces in the space-time provides a simple mechanism for making these configurations compact. Limiting surfaces appear naturally in a given space-time by making minimal surfaces rotate but they are also inherent to plane wave or de Sitter space-times in which case minimal surfaces can be static and comp...
On the Value of Job Migration in Online Makespan Minimization
Albers, Susanne
2011-01-01
Makespan minimization on identical parallel machines is a classical scheduling problem. We consider the online scenario where a sequence of $n$ jobs has to be scheduled non-preemptively on $m$ machines so as to minimize the maximum completion time of any job. The best competitive ratio that can be achieved by deterministic online algorithms is in the range $[1.88,1.9201]$. Currently no randomized online algorithm with a smaller competitiveness is known, for general $m$. In this paper we explore the power of job migration, i.e., an online scheduler is allowed to perform a limited number of job reassignments. Migration is a common technique used in theory and practice to balance load in parallel processing environments. As our main result we settle the performance that can be achieved by deterministic online algorithms. We develop an algorithm that is $\alpha_m$-competitive, for any $m\geq 2$, where $\alpha_m$ is the solution of a certain equation. For $m=2$, $\alpha_2 = 4/3$ and $\lim_{m\rightarrow \infty} \al...
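The migration-free baseline against which such algorithms are measured is greedy list scheduling: each arriving job is placed on the currently least loaded machine, with no reassignments. This baseline is a standard reference point, not the paper's migration algorithm.

```python
# Hedged sketch of the classical online baseline (greedy list scheduling,
# no migration) for makespan minimization on m identical machines.

def greedy_schedule(jobs, m):
    loads = [0.0] * m
    for p in jobs:                                   # jobs arrive one by one
        i = min(range(m), key=lambda k: loads[k])    # least loaded machine
        loads[i] += p                                # assign irrevocably
    return max(loads)                                # the makespan

makespan = greedy_schedule([2.0, 3.0, 4.0, 1.0, 5.0], m=2)
# greedy yields makespan 9, while an offline optimum achieves 8
# (machine A: {5, 3}, machine B: {4, 2, 1}) -- migration narrows this gap
```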
Deterministic and Stochastic Analysis of a Prey-Dependent Predator-Prey System
Maiti, Alakes; Samanta, G. P.
2005-01-01
This paper reports on studies of the deterministic and stochastic behaviours of a predator-prey system with prey-dependent response function. The first part of the paper deals with the deterministic analysis of uniform boundedness, permanence, stability and bifurcation. In the second part the reproductive and mortality factors of the prey and…
Allanach, B C; Tunstall, Lewis C; Voigt, A; Williams, A G
2013-01-01
We describe an extension to the SOFTSUSY program that provides for the calculation of the sparticle spectrum in the Next-to-Minimal Supersymmetric Standard Model (NMSSM), where a chiral superfield that is a singlet of the Standard Model gauge group is added to the Minimal Supersymmetric Standard Model (MSSM) fields. Often, a $\\mathbb{Z}_{3}$ symmetry is imposed upon the model. SOFTSUSY can calculate the spectrum in this case as well as the case where general $\\mathbb{Z}_{3}$ violating (denoted as $\\,\\mathbf{\\backslash}\\mkern-11.0mu{\\mathbb{Z}}_{3}$) terms are added to the soft supersymmetry breaking terms and the superpotential. The user provides a theoretical boundary condition for the couplings and mass terms of the singlet. Radiative electroweak symmetry breaking data along with electroweak and CKM matrix data are used as weak-scale boundary conditions. The renormalisation group equations are solved numerically between the weak scale and a high energy scale using a nested iterative algorithm. This paper se...
Fast Algorithm for Finding Unicast Capacity of Linear Deterministic Wireless Relay Networks
Shi, Cuizhu
2009-01-01
The deterministic channel model for wireless relay networks proposed by Avestimehr, Diggavi and Tse '07 has captured the broadcast and interference nature of wireless communications and has been widely used in approximating the capacity of wireless relay networks. The authors generalized the max-flow min-cut theorem to linear deterministic wireless relay networks and characterized the unicast capacity of such a deterministic network as the minimum rank of all the binary adjacency matrices describing source-destination cuts, whose number grows exponentially with the size of the network. In this paper, we develop a fast algorithm for finding the unicast capacity of a linear deterministic wireless relay network by finding the maximum number of linearly independent paths using the idea of path augmentation. We develop a modified depth-first search algorithm tailored for linear deterministic relay networks for finding linearly independent paths whose total number proved to equal the unicast capacity of the u...
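The rank computation underlying the cut characterization is over GF(2): the value of a cut is the rank of its binary transfer matrix, computable by Gaussian elimination with XOR. The sketch below shows that primitive on a made-up matrix; it is not the paper's path-augmentation algorithm.

```python
# Hedged sketch: rank of a binary matrix over GF(2) via Gaussian
# elimination, rows encoded as integer bitmasks.

def gf2_rank(rows):
    rank = 0
    n_cols = max(rows).bit_length() if rows else 0
    for col in range(n_cols):
        # find a pivot row with a 1 in this column
        pivot = next((i for i in range(rank, len(rows))
                      if rows[i] >> col & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):          # XOR-eliminate the column elsewhere
            if i != rank and rows[i] >> col & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return rank

# rows 011, 101, 110: the third is the XOR of the first two, so rank is 2
r = gf2_rank([0b011, 0b101, 0b110])
```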
Bachas, C; Wiese, K J; Bachas, Constantin; Doussal, Pierre Le; Wiese, Kay Joerg
2006-01-01
We study minimal surfaces which arise in wetting and capillarity phenomena. Using conformal coordinates, we reduce the problem to a set of coupled boundary equations for the contact line of the fluid surface, and then derive simple diagrammatic rules to calculate the non-linear corrections to the Joanny-de Gennes energy. We argue that perturbation theory is quasi-local, i.e. that all geometric length scales of the fluid container decouple from the short-wavelength deformations of the contact line. This is illustrated by a calculation of the linearized interaction between contact lines on two opposite parallel walls. We present a simple algorithm to compute the minimal surface and its energy based on these ideas. We also point out the intriguing singularities that arise in the Legendre transformation from the pure Dirichlet to the mixed Dirichlet-Neumann problem.
DEFF Research Database (Denmark)
Channuie, Phongpichit; Jark Joergensen, Jakob; Sannino, Francesco
2011-01-01
We investigate models in which the inflaton emerges as a composite field of a four dimensional, strongly interacting and nonsupersymmetric gauge theory featuring purely fermionic matter. We show that it is possible to obtain successful inflation via non-minimal coupling to gravity, and that the underlying dynamics is preferred to be near conformal. We discover that the compositeness scale of inflation is of the order of the grand unified energy scale.
Minimal triangulations of simplotopes
Seacrest, Tyler
2009-01-01
We derive lower bounds for the size of simplicial covers of simplotopes, which are products of simplices. These also serve as lower bounds for triangulations of such polytopes, including triangulations with interior vertices. We establish that a minimal triangulation of a product of two simplices is given by a vertex triangulation, i.e., one without interior vertices. For products of more than two simplices, we produce bounds for products of segments and triangles. Our analysis yields linear programs that arise from considerations of covering exterior faces and exploiting the product structure of these polytopes. Aside from cubes, these are the first known lower bounds for triangulations of simplotopes with three or more factors. We also construct a minimal triangulation for the product of a triangle and a square, and compare it to our lower bound.
Minimal hepatic encephalopathy.
Zamora Nava, Luis Eduardo; Torre Delgadillo, Aldo
2011-06-01
The term minimal hepatic encephalopathy (MHE) refers to the subtle changes in cognitive function, electrophysiological parameters, cerebral neurochemical/neurotransmitter homeostasis, cerebral blood flow, metabolism, and fluid homeostasis that can be observed in patients with cirrhosis who have no clinical evidence of hepatic encephalopathy; the prevalence is as high as 84% in patients with hepatic cirrhosis. Physicians generally do not perceive these complications of cirrhosis, and the diagnosis can only be made with neuropsychological tests and other special measurements such as evoked potentials and imaging studies like positron emission tomography. Diagnosis of minimal hepatic encephalopathy may have prognostic and therapeutic implications in cirrhotic patients. The present review aims to explore the clinical, therapeutic, diagnostic and prognostic aspects of this complication.
DEFF Research Database (Denmark)
Frandsen, Mads Toudal
2007-01-01
I report on our construction and analysis of the effective low energy Lagrangian for the Minimal Walking Technicolor (MWT) model. The parameters of the effective Lagrangian are constrained by imposing modified Weinberg sum rules and by imposing a value for the S parameter estimated from the underlying Technicolor theory. The constrained effective Lagrangian allows for an inverted vector vs. axial-vector mass spectrum in a large part of the parameter space.
Institute of Scientific and Technical Information of China (English)
韩兴涛; 霍庆祥; 张寒; 魏澎涛
2014-01-01
Objective: To investigate the effect of minimally invasive surgery on renal milk of calcium. Methods: The clinical data of 7 patients diagnosed with renal milk of calcium in Luoyang Central Hospital from 2005 to 2013 were retrospectively analyzed, with cysts in 3 cases and hydronephrosis in 4 cases; all received minimally invasive surgical treatment, and patients with cysts received sclerotherapy after surgery. Results: All seven patients recovered well in the short term after operation; 1 patient with postoperative bleeding and 2 patients with low back pain recovered well after symptomatic treatment. One patient with the cystic type had recurrence of the renal cyst at 2-year follow-up and was cured after repeat color-Doppler-ultrasound-guided puncture therapy. One patient with ureteropelvic junction obstruction (UPJO) combined with hydronephrosis had a good short-term outcome, but the long-term effect remains to be seen. Conclusions: Minimally invasive percutaneous nephrostomy is a viable treatment for renal milk of calcium; the method is effective, with little surgical trauma and less effect on renal function, and is significantly better than traditional open surgery.
Quantization of the minimal and non-minimal vector field in curved space
Toms, David J
2015-01-01
The local momentum space method is used to study the quantized massive vector field (the Proca field) with the possible addition of non-minimal terms. Heat kernel coefficients are calculated and used to evaluate the divergent part of the one-loop effective action. It is shown that the naive expression for the effective action that one would write down based on the minimal coupling case needs modification. We adopt a Faddeev-Jackiw method of quantization and consider the case of an ultrastatic spacetime for simplicity. The operator that arises for non-minimal coupling to the curvature is shown to be non-minimal in the sense of Barvinsky and Vilkovisky. It is shown that when a general non-minimal term is added to the theory the result is not renormalizable with the addition of a local Lagrangian counterterm.
Test Time Minimization for Hybrid BIST of Core-Based Systems
Institute of Scientific and Technical Information of China (English)
Gert Jervan; Petru Eles; Zebo Peng; Raimund Ubar; Maksim Jenihhin
2006-01-01
This paper presents a solution to the test time minimization problem for core-based systems. We assume a hybrid BIST approach, where a test set is assembled, for each core, from pseudorandom test patterns that are generated online, and deterministic test patterns that are generated off-line and stored in the system. In this paper we propose an iterative algorithm to find the optimal combination of pseudorandom and deterministic test sets of the whole system, consisting of multiple cores, under given memory constraints, so that the total test time is minimized. Our approach employs a fast estimation methodology in order to avoid exhaustive search and to speed up the calculation process. Experimental results have shown the efficiency of the algorithm in finding near-optimal solutions.
Chaos theory as a bridge between deterministic and stochastic views for hydrologic modeling
Sivakumar, B.
2009-04-01
Two modeling approaches are prevalent in hydrology: deterministic and stochastic. The deterministic approach may be supported on the basis of the 'permanent' nature of the ocean-earth-atmosphere structure and the 'cyclical' nature of mechanisms that take place within it. The stochastic approach may be favored because of the 'highly irregular and complex nature' of hydrologic phenomena and our 'limited ability to observe' the detailed variations. With these two contrasting concepts, asking the question whether hydrologic phenomena are better modeled using a deterministic approach or a stochastic approach is meaningless. In fact, for most (if not all) hydrologic phenomena, both the deterministic approach and the stochastic approach are complementary to each other. This may be supported by our observation of both 'deterministic' and 'random' nature of hydrologic phenomena at 'one or more scales' in time and/or space; for instance, there exists a significant deterministic nature in river flow in the form of seasonality and annual cycle, whereas the interactions of the various mechanisms involved in the river flow phenomenon and their various degrees of nonlinearity bring randomness. It is reasonable, therefore, to argue that use of an integrated modeling approach that incorporates both the deterministic and the stochastic components will produce greater success compared to either a deterministic approach or a stochastic approach independently. This study discusses the role of chaos theory as a potential avenue to the formulation of an integrated deterministic-stochastic approach. Through presentation of its fundamental principles (nonlinear interdependence, hidden determinism and order, sensitivity to initial conditions) and their relevance in hydrologic systems, the study contends that chaos theory can serve as a bridge between the deterministic and stochastic 'extreme' views and offer a 'middle-ground' approach. Specific examples of chaos theory
Simulation of Broadband Time Histories Combining Deterministic and Stochastic Methodologies
Graves, R. W.; Pitarka, A.
2003-12-01
We present a methodology for generating broadband (0 - 10 Hz) ground motion time histories using a hybrid technique that combines a stochastic approach at high frequencies with a deterministic approach at low frequencies. Currently, the methodology is being developed for moderate and larger crustal earthquakes, although the technique can theoretically be applied to other classes of events as well. The broadband response is obtained by summing the separate responses in the time domain using matched Butterworth filters centered at 1 Hz. We use a kinematic description of fault rupture, incorporating spatial heterogeneity in slip, rupture velocity and rise time by discretizing an extended finite-fault into a number of smaller subfaults. The stochastic approach sums the response for each subfault assuming a random phase, an omega-squared source spectrum and simplified Green's functions (Boore, 1983). Gross impedance effects are incorporated using quarter wavelength theory (Boore and Joyner, 1997) to bring the response to a generic baserock level (e.g., Vs = 1000 m/s). The deterministic approach sums the response for many point sources distributed across each subfault. Wave propagation is modeled using a 3D viscoelastic finite difference algorithm with the minimum shear wave velocity set at 620 m/s. Short- and mid-period amplification factors provided by Borcherdt (1994) are used to develop frequency dependent site amplification functions. The amplification functions are applied to the stochastic and deterministic responses separately since these may have different (computational) reference site velocities. The site velocity is taken as the measured or estimated value of Vs30. The use of these amplification factors is attractive because they account for non-linear response by considering the input acceleration level. We note that although these design factors are strictly defined for response spectra, we have applied them to the Fourier amplitude spectra of our
Matching allele dynamics and coevolution in a minimal predator-prey replicator model
Energy Technology Data Exchange (ETDEWEB)
Sardanyes, Josep [Complex Systems Lab (ICREA-UPF), Barcelona Biomedical Research Park (PRBB-GRIB), Dr. Aiguader 88, 08003 Barcelona (Spain)], E-mail: josep.sardanes@upf.edu; Sole, Ricard V. [Complex Systems Lab (ICREA-UPF), Barcelona Biomedical Research Park (PRBB-GRIB), Dr. Aiguader 88, 08003 Barcelona (Spain); Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501 (United States)
2008-01-21
A minimal Lotka-Volterra type predator-prey model describing coevolutionary traits among entities with a strength of interaction influenced by a pair of haploid diallelic loci is studied with a deterministic time continuous model. We show a Hopf bifurcation governing the transition from evolutionary stasis to periodic Red Queen dynamics. If predator genotypes differ in their predation efficiency the more efficient genotype asymptotically achieves lower stationary concentrations.
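The deterministic time-continuous core of such models is a Lotka-Volterra system. As a hedged sketch (plain two-species Lotka-Volterra with hypothetical parameters, omitting the diallelic loci of the paper), a forward-Euler integration looks like:

```python
def lotka_volterra(x0, y0, a=1.0, b=0.5, c=0.5, d=1.0, dt=1e-3, steps=20000):
    """Forward-Euler integration of dx/dt = x(a - b*y), dy/dt = y(c*x - d)."""
    x, y = x0, y0
    traj = [(x, y)]
    for _ in range(steps):
        dx = x * (a - b * y)   # prey: grows, eaten by predators
        dy = y * (c * x - d)   # predator: grows on prey, dies otherwise
        x, y = x + dt * dx, y + dt * dy
        traj.append((x, y))
    return traj

traj = lotka_volterra(1.0, 1.0)
```

The paper's genotype-dependent predation efficiency would enter through the interaction coefficients b and c; the Hopf bifurcation it reports lives in that extended system, not in this two-variable toy.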
On Time with Minimal Expected Cost!
DEFF Research Database (Denmark)
David, Alexandre; Jensen, Peter Gjøl; Larsen, Kim Guldstrand
2014-01-01
) timed game essentially defines an infinite-state Markov (reward) decision process. In this setting the objective is classically to find a strategy that will minimize the expected reachability cost, but with no guarantees on worst-case behaviour. In this paper, we provide efficient methods for computing reachability strategies that will both ensure worst-case time-bounds as well as provide (near-) minimal expected cost. Our method extends the synthesis algorithms of the synthesis tool Uppaal-Tiga with suitably adapted reinforcement learning techniques, and exhibits several orders of magnitude improvements w...
Fractionation by shape in deterministic lateral displacement microfluidic devices
Jiang, Mingliang; Drazer, German
2014-01-01
We investigate the migration of particles of different geometrical shapes and sizes in a scaled-up model of a gravity-driven deterministic lateral displacement (g-DLD) device. Specifically, particles move through a square array of cylindrical posts as they settle under the action of gravity. We performed experiments that cover a broad range of orientations of the driving force (gravity) with respect to the columns (or rows) in the square array of posts. We observe that as the forcing angle increases, particles initially locked to move parallel to the columns in the array begin to move across the columns of obstacles and migrate at angles different from zero. We measure the probability that a particle moves across a column of obstacles, and define the critical angle $\theta_c$ as the forcing angle at which this probability is 1/2. We show that the critical angle depends on both particle size and shape, thus enabling both size- and shape-based separations. Finally, we show that using the diameter of the inscribe...
Three-dimensional gravity-driven deterministic lateral displacement
Du, Siqi
2016-01-01
We present a simple solution to enhance the separation ability of deterministic lateral displacement (DLD) systems by expanding the two-dimensional nature of these devices and driving the particles into size-dependent, fully three-dimensional trajectories. Specifically, we drive the particles through an array of long cylindrical posts, such that they not only move in the plane perpendicular to the posts as in traditional two-dimensional DLD systems (in-plane motion), but also along the axial direction of the solid posts (out-of-plane motion). We show that the (projected) in-plane motion of the particles is completely analogous to that observed in 2D-DLD systems. In fact, a theoretical model originally developed for force-driven, two-dimensional DLD systems accurately describes the experimental results. More importantly, we analyze the particles' out-of-plane motion and observe that, for certain orientations of the driving force, significant differences in the out-of-plane displacement depending on particle siz...
Deterministic versus evidence-based attitude towards clinical diagnosis.
Soltani, Akbar; Moayyeri, Alireza
2007-08-01
Generally, two basic classes have been proposed for the scientific explanation of events. Deductive reasoning emphasizes reaching conclusions about a hypothesis based on verification of universal laws pertinent to that hypothesis, while inductive or probabilistic reasoning explains an event by calculating the probability of that event being related to a given hypothesis. Although both types of reasoning are used in clinical practice, evidence-based medicine stresses the advantages of the second approach for most instances of medical decision making. While 'probabilistic or evidence-based' reasoning seems to involve more mathematical formulas at first glance, this attitude is more dynamic and less imprisoned by the rigidity of mathematics compared with the 'deterministic or mathematical' attitude. In the field of medical diagnosis, appreciation of uncertainty in clinical encounters and utilization of the likelihood ratio as a measure of accuracy seem to be the most important characteristics of evidence-based doctors. Other characteristics include the use of a series of tests to refine probability, changing diagnostic thresholds in light of external evidence and the nature of the disease, and attention to confidence intervals to estimate the uncertainty of research-derived parameters.
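The likelihood-ratio bookkeeping described above is Bayes' rule in odds form: post-test odds = pre-test odds × LR. A small sketch with hypothetical numbers:

```python
def post_test_probability(pre_test_p, likelihood_ratio):
    """Update a diagnostic probability with a test result via the odds form of Bayes' rule."""
    pre_odds = pre_test_p / (1.0 - pre_test_p)   # convert probability to odds
    post_odds = pre_odds * likelihood_ratio      # apply the test's likelihood ratio
    return post_odds / (1.0 + post_odds)         # convert back to probability

# hypothetical example: a positive test with LR+ = 10 applied to a 20% pre-test probability
p = post_test_probability(0.20, 10.0)
```

Chaining several tests, as the abstract suggests, amounts to multiplying their likelihood ratios into the odds one after another (assuming conditional independence of the tests).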
A Modified Deterministic Model for Reverse Supply Chain in Manufacturing
Directory of Open Access Journals (Sweden)
R. N. Mahapatra
2013-01-01
Full Text Available Technology is becoming pervasive across all facets of our lives today. Technology innovation leading to development of new products and enhancement of features in existing products is happening at a faster pace than ever. It is becoming difficult for the customers to keep up with the deluge of new technology. This trend has resulted in gross increase in use of new materials and decreased customers' interest in relatively older products. This paper deals with a novel model in which the stationary demand is fulfilled by remanufactured products along with newly manufactured products. The current model is based on the assumption that the returned items from the customers can be remanufactured at a fixed rate. The remanufactured products are assumed to be as good as the new ones in terms of features, quality, and worth. A methodology is used for the calculation of optimum level for the newly manufactured items and the optimum level of the remanufactured products simultaneously. The model is formulated depending on the relationship between different parameters. An interpretive-modelling-based approach has been employed to model the reverse logistics variables typically found in supply chains (SCs. For simplicity of calculation a deterministic approach is implemented for the proposed model.
Deterministic Polynomial-Time Algorithms for Designing Short DNA Words
Kao, Ming-Yang; Sun, He; Zhang, Yong
2012-01-01
Designing short DNA words is a problem of constructing a set (i.e., code) of n DNA strings (i.e., words) with the minimum length such that the Hamming distance between each pair of words is at least k and the n words satisfy a set of additional constraints. This problem has applications in, e.g., DNA self-assembly and DNA arrays. Previous works include those that extended results from coding theory to obtain bounds on code and word sizes for biologically motivated constraints and those that applied heuristic local searches, genetic algorithms, and randomized algorithms. In particular, Kao, Sanghi, and Schweller (2009) developed polynomial-time randomized algorithms to construct n DNA words of length within a multiplicative constant of the smallest possible word length (e.g., 9 max{log n, k}) that satisfy various sets of constraints with high probability. In this paper, we give deterministic polynomial-time algorithms to construct DNA words based on derandomization techniques. Our algorithms can construct n DN...
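As a hedged sketch of the feasibility side of the problem statement (not the authors' derandomized construction), one can verify that a candidate set of words meets the pairwise Hamming-distance constraint; the example words are hypothetical:

```python
from itertools import combinations

def hamming(u, v):
    """Number of positions at which two equal-length words differ."""
    return sum(a != b for a, b in zip(u, v))

def is_valid_code(words, k):
    """True iff all words share one length and every pair is at Hamming distance >= k."""
    if len({len(w) for w in words}) > 1:
        return False
    return all(hamming(u, v) >= k for u, v in combinations(words, 2))

# hypothetical toy code over the DNA alphabet: every pair differs in all 4 positions
words = ["ACGT", "TGCA", "CATG"]
```

The paper's additional biologically motivated constraints (e.g. on GC content or self-complementarity) would be extra predicates alongside the distance check.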
Entrepreneurs, chance, and the deterministic concentration of wealth.
Directory of Open Access Journals (Sweden)
Joseph E Fargione
Full Text Available In many economies, wealth is strikingly concentrated. Entrepreneurs--individuals with ownership in for-profit enterprises--comprise a large portion of the wealthiest individuals, and their behavior may help explain patterns in the national distribution of wealth. Entrepreneurs are less diversified and more heavily invested in their own companies than is commonly assumed in economic models. We present an intentionally simplified individual-based model of wealth generation among entrepreneurs to assess the role of chance and determinism in the distribution of wealth. We demonstrate that chance alone, combined with the deterministic effects of compounding returns, can lead to unlimited concentration of wealth, such that the percentage of all wealth owned by a few entrepreneurs eventually approaches 100%. Specifically, concentration of wealth results when the rate of return on investment varies by entrepreneur and by time. This result is robust to inclusion of realities such as differing skill among entrepreneurs. The most likely overall growth rate of the economy decreases as businesses become less diverse, suggesting that high concentrations of wealth may adversely affect a country's economic growth. We show that a tax on large inherited fortunes, applied to a small portion of the most fortunate in the population, can efficiently arrest the concentration of wealth at intermediate levels.
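The core mechanism above (chance plus compounding returns among otherwise identical entrepreneurs) can be reproduced in a few lines; this is an illustrative sketch with hypothetical parameters, not the authors' calibrated model:

```python
import random

def simulate_wealth(n=1000, years=100, mean=1.05, sd=0.3, seed=1):
    """Compound i.i.d. random yearly returns; return the wealth share of the richest 1%."""
    rng = random.Random(seed)
    wealth = [1.0] * n  # everyone starts equal: no skill differences
    for _ in range(years):
        # each entrepreneur draws an independent return each year (floored at 0)
        wealth = [w * max(rng.gauss(mean, sd), 0.0) for w in wealth]
    wealth.sort(reverse=True)
    top = max(1, n // 100)
    return sum(wealth[:top]) / sum(wealth)

share = simulate_wealth()
```

Because log-wealth performs a random walk, the variance of the wealth distribution grows without bound and the top percentile's share keeps rising, which is the concentration effect the paper analyzes.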
Directory of Open Access Journals (Sweden)
G. A. Tsaur
2014-07-01
Full Text Available We performed clinical and laboratory characterization of patients with the rare translocation t(1;11)(p32;q23) leading to MLL-EPS15 fusion gene formation. The study cohort consisted of 33 primary acute leukemia (AL) cases, including 6 newly diagnosed and 27 patients previously described in the literature. Among study group patients, t(1;11)(p32;q23) was found most frequently in infant AL cases (median age 8 months). In acute lymphoblastic leukemia (ALL) the male/female ratio was 1:3, in acute myeloid leukemia (AML) it was 1:1. Additional cytogenetic aberrations were revealed in 38 % of patients. The most frequent breakpoint position in the EPS15 gene was intron 1. Four different types of MLL-EPS15 fusion gene transcripts were detected. A primers-probe-plasmid combination for MLL-EPS15 fusion gene transcript monitoring by real-time quantitative polymerase chain reaction (RQ-PCR) was developed and successfully applied. In 3 patients RQ-PCR was done on genomic DNA for absolute quantification of the MLL-EPS15 fusion gene. A high qualitative concordance rate (92 %) was noted between minimal residual disease data obtained from cDNA and genomic DNA for MLL-EPS15 fusion detection.
Susič, Vasja
2016-06-01
A realistic model in the class of renormalizable supersymmetric E6 Grand Unified Theories is constructed. Its matter sector consists of 3 × 27 representations, while the Higgs sector is $27 + \overline{27} + 351' + \overline{351'} + 78$. An analytic solution for a Standard Model vacuum is found and the Yukawa sector analyzed. It is argued that if one considers the increased predictability due to only two symmetric Yukawa matrices in this model, it can be considered a minimal SUSY E6 model with this type of matter sector. This contribution is based on Ref. [1].
Minimally Invasive Parathyroidectomy
Directory of Open Access Journals (Sweden)
Lee F. Starker
2011-01-01
Full Text Available Minimally invasive parathyroidectomy (MIP is an operative approach for the treatment of primary hyperparathyroidism (pHPT. Currently, routine use of improved preoperative localization studies, cervical block anesthesia in the conscious patient, and intraoperative parathyroid hormone analyses aid in guiding surgical therapy. MIP requires less surgical dissection causing decreased trauma to tissues, can be performed safely in the ambulatory setting, and is at least as effective as standard cervical exploration. This paper reviews advances in preoperative localization, anesthetic techniques, and intraoperative management of patients undergoing MIP for the treatment of pHPT.
Maity, Debaprasad
2016-01-01
In this paper we propose two simple minimal Higgs inflation scenarios through a simple modification of the Higgs potential, as opposed to the usual non-minimal Higgs-gravity coupling prescription. The modification is done in such a way that it creates a flat plateau for a huge range of field values at the inflationary energy scale $\mu \simeq \lambda^{1/4} \alpha$. Assuming a perturbative Higgs quartic coupling, $\lambda \simeq {\cal O}(1)$, the inflation energy scale for both models turns out to be $\mu \simeq (10^{14}, 10^{15})$ GeV, and the predictions of all the cosmologically relevant quantities, $(n_s, r, dn_s^k)$, fit extremely well with observations made by PLANCK. Considering the observed central value of the scalar spectral index, $n_s = 0.968$, our two models predict efolding numbers $N = (52, 47)$. Within a wide range of viable parameter space, we find that the predicted tensor-to-scalar ratio $r (\leq 10^{-5})$ is far below the current experimental sensitivity to be observed in the near future. The ...
Logarithmic superconformal minimal models
Pearce, Paul A.; Rasmussen, Jørgen; Tartaglia, Elena
2014-05-01
The higher fusion level logarithmic minimal models ${\cal LM}(P,P';n)$ have recently been constructed as the diagonal GKO cosets $(A_1^{(1)})_k \oplus (A_1^{(1)})_n / (A_1^{(1)})_{k+n}$, where $n \ge 1$ is an integer fusion level and $k = nP/(P'-P) - 2$ is a fractional level. For $n = 1$, these are the well-studied logarithmic minimal models ${\cal LM}(P,P') \equiv {\cal LM}(P,P';1)$. For $n \ge 2$, we argue that these critical theories are realized on the lattice by $n \times n$ fusion of the $n = 1$ models. We study the critical fused lattice models ${\cal LM}(p,p')_{n\times n}$ within a lattice approach and focus our study on the $n = 2$ models. We call these logarithmic superconformal minimal models ${\cal LSM}(p,p') \equiv {\cal LM}(P,P';2)$, where $P = |2p - p'|$, $P' = p'$ and $p, p'$ are coprime. These models share the central charges $c = c^{P,P';2} = \frac{3}{2}\big(1 - 2(P'-P)^2/(PP')\big)$ of the rational superconformal minimal models ${\cal SM}(P,P')$. Lattice realizations of these theories are constructed by fusing $2 \times 2$ blocks of the elementary face operators of the $n = 1$ logarithmic minimal models ${\cal LM}(p,p')$. Algebraically, this entails the fused planar Temperley-Lieb algebra, which is a spin-1 Birman-Murakami-Wenzl tangle algebra with loop fugacity $\beta_2 = [x]_3 = x^2 + 1 + x^{-2}$ and twist $\omega = x^4$, where $x = e^{i\lambda}$ and $\lambda = (p'-p)\pi/p'$. The first two members of this $n = 2$ series are superconformal dense polymers ${\cal LSM}(2,3)$ with $c = -\frac{5}{2}$, $\beta_2 = 0$, and superconformal percolation ${\cal LSM}(3,4)$ with $c = 0$, $\beta_2 = 1$. We calculate the bulk and boundary free energies analytically. By numerically studying finite-size conformal spectra on the strip with appropriate boundary conditions, we argue that, in the continuum scaling limit, these lattice models are associated with the logarithmic superconformal models ${\cal LM}(P,P';2)$. For system size $N$, we propose finitized Kac character formulae of the form $q^{-c^{P,P';2}/24 + \Delta^{P,P';2}_{r
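The quoted central charges follow directly from the formula $c^{P,P';2} = \frac{3}{2}\big(1 - 2(P'-P)^2/(PP')\big)$ with $P = |2p - p'|$, $P' = p'$; a quick exact check for the first two members of the series:

```python
from fractions import Fraction

def central_charge(p, p_prime):
    """c for LSM(p,p') = LM(P,P';2) with P = |2p - p'|, P' = p'."""
    P, Pp = abs(2 * p - p_prime), p_prime
    return Fraction(3, 2) * (1 - Fraction(2 * (Pp - P) ** 2, P * Pp))

dense_polymers = central_charge(2, 3)  # superconformal dense polymers LSM(2,3)
percolation = central_charge(3, 4)     # superconformal percolation LSM(3,4)
```

These reproduce the values $c = -\frac{5}{2}$ and $c = 0$ quoted in the abstract.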
Deterministic chaos in government debt dynamics with mechanistic primary balance rules
Lindgren, Jussi Ilmari
2011-01-01
This paper shows that with mechanistic primary budget rules and with some simple assumptions on interest rates the well-known debt dynamics equation transforms into the infamous logistic map. The logistic map has very peculiar and rich nonlinear behaviour and it can exhibit deterministic chaos with certain parameter regimes. Deterministic chaos means the existence of the butterfly effect which in turn is qualitatively very important, as it shows that even deterministic budget rules produce unpredictable behaviour of the debt-to-GDP ratio, as chaotic systems are extremely sensitive to initial conditions.
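The butterfly effect invoked above is easy to demonstrate on the logistic map itself: iterate in a chaotic parameter regime (r = 4; the values are illustrative, not calibrated to any debt data) from two nearly identical initial ratios and watch the trajectories diverge:

```python
def logistic_orbit(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x) for a given number of steps."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.300000)
b = logistic_orbit(0.300001)  # initial condition perturbed by 1e-6
gap = max(abs(x - y) for x, y in zip(a, b))  # the tiny perturbation is amplified
```

In the debt-dynamics reading, x is the (rescaled) debt-to-GDP ratio, and the divergence means that even a fully deterministic budget rule yields practically unpredictable long-run ratios.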
Zhang, Xu
2010-01-01
The purpose of this paper is to present a universal approach to the study of controllability/observability problems for infinite dimensional systems governed by some stochastic/deterministic partial differential equations. The crucial analytic tool is a class of fundamental weighted identities for stochastic/deterministic partial differential operators, via which one can derive the desired global Carleman estimates. This method can also give a unified treatment of the stabilization, global unique continuation, and inverse problems for some stochastic/deterministic partial differential equations.
Deterministic Joint Remote Preparation of an Arbitrary Two-Qubit State Using the Cluster State
Institute of Scientific and Technical Information of China (English)
WANG Ming-Ming; CHEN Xiu-Bo; YANG Yi-Xian
2013-01-01
Recently, deterministic joint remote state preparation (JRSP) schemes have been proposed to achieve 100% success probability. In this paper, we propose a new version of a deterministic JRSP scheme for an arbitrary two-qubit state, using the six-qubit cluster state as the shared quantum resource. Compared with previous schemes, our scheme has high efficiency since less quantum resource is required and some additional unitary operations and measurements are unnecessary. We point out that the existing two types of deterministic JRSP schemes, based on GHZ states and EPR pairs, are equivalent.
Strong Sector in non-minimal SUSY model
Costantini, Antonio
2016-01-01
We investigate the squark sector of a supersymmetric theory with an extended Higgs sector. We give the mass matrices of stop and sbottom, comparing the Minimal Supersymmetric Standard Model (MSSM) case and the non-minimal case. We discuss the impact of the extra superfields on the decay channels of the stop searched at the LHC.
Activity modes selection for project crashing through deterministic simulation
Directory of Open Access Journals (Sweden)
Ashok Mohanty
2011-12-01
Full Text Available Purpose: The time-cost trade-off problem addressed by CPM-based analytical approaches assumes unlimited resources and the existence of a continuous time-cost function. However, given the discrete nature of most resources, activities can often be crashed only stepwise. Activity crashing for a discrete time-cost function is also known as the activity modes selection problem in project management and is known to be NP-hard. Sophisticated optimization techniques such as Dynamic Programming, Integer Programming, Genetic Algorithms and Ant Colony Optimization have been used to find efficient solutions to the activity modes selection problem. This paper presents a simple method that can provide an efficient solution to the activity modes selection problem for project crashing. Design/methodology/approach: A simulation-based method implemented on an electronic spreadsheet to determine activity modes for project crashing. The method is illustrated with the help of an example. Findings: The paper shows that a simple approach based on a simple heuristic and deterministic simulation can give good results comparable to sophisticated optimization techniques. Research limitations/implications: The simulation-based crashing method presented in this paper is developed to return satisfactory solutions, but not necessarily an optimal solution. Practical implications: The use of spreadsheets for solving Management Science and Operations Research problems makes the techniques more accessible to practitioners. Spreadsheets provide a natural interface for model building, are easy to use in terms of inputs, solutions and report generation, and allow users to perform what-if analysis. Originality/value: The paper presents the application of simulation implemented on a spreadsheet to determine an efficient solution to the discrete time-cost tradeoff problem.
Accurate deterministic solutions for the classic Boltzmann shock profile
Yue, Yubei
The Boltzmann equation or Boltzmann transport equation is a classical kinetic equation devised by Ludwig Boltzmann in 1872. It is regarded as a fundamental law in rarefied gas dynamics. Rather than using macroscopic quantities such as density, temperature, and pressure to describe the underlying physics, the Boltzmann equation uses a distribution function in phase space to describe the physical system, and all the macroscopic quantities are weighted averages of the distribution function. The information contained in the Boltzmann equation is surprisingly rich, and the Euler and Navier-Stokes equations of fluid dynamics can be derived from it using series expansions. Moreover, the Boltzmann equation can reach regimes far from the capabilities of fluid dynamical equations, such as the realm of rarefied gases---the topic of this thesis. Although the Boltzmann equation is very powerful, it is extremely difficult to solve in most situations. Thus the only hope is to solve it numerically. But soon one finds that even a numerical simulation of the equation is extremely difficult, due to both the complex and high-dimensional integral in the collision operator, and the hyperbolic phase-space advection terms. For this reason, until a few years ago most numerical simulations had to rely on Monte Carlo techniques. In this thesis I will present a new and robust numerical scheme to compute direct deterministic solutions of the Boltzmann equation, and I will use it to explore some classical gas-dynamical problems. In particular, I will study in detail one of the most famous and intrinsically nonlinear problems in rarefied gas dynamics, namely the accurate determination of the Boltzmann shock profile for a gas of hard spheres.
Reduced-Complexity Deterministic Annealing for Vector Quantizer Design
Directory of Open Access Journals (Sweden)
Ortega Antonio
2005-01-01
Full Text Available This paper presents a reduced-complexity deterministic annealing (DA) approach for vector quantizer (VQ) design by using soft information processing with simplified assignment measures. Low-complexity distributions are designed to mimic the Gibbs distribution, where the latter is the optimal distribution used in the standard DA method. These low-complexity distributions are simple enough to facilitate fast computation, but at the same time they can closely approximate the Gibbs distribution to result in near-optimal performance. We have also derived the theoretical performance loss at a given system entropy due to using the simple soft measures instead of the optimal Gibbs measure. We use the derived result to obtain optimal annealing schedules for the simple soft measures that approximate the annealing schedule for the optimal Gibbs distribution. The proposed reduced-complexity DA algorithms have significantly improved the quality of the final codebooks compared to the generalized Lloyd algorithm and standard stochastic relaxation techniques, both with and without the pairwise nearest neighbor (PNN) codebook initialization. The proposed algorithms are able to evade local minima, and the results show that they are not sensitive to the choice of the initial codebook. Compared to the standard DA approach, the reduced-complexity DA algorithms can operate over 100 times faster with negligible performance difference. For example, for the design of a 16-dimensional vector quantizer having a rate of 0.4375 bit/sample for a Gaussian source, the standard DA algorithm achieved 3.60 dB performance in 16 483 CPU seconds, whereas the reduced-complexity DA algorithm achieved the same performance in 136 CPU seconds. Other than VQ design, the DA techniques are applicable to problems such as classification, clustering, and resource allocation.
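As a rough illustration of the standard Gibbs-distribution DA baseline that the paper simplifies, the sketch below anneals a two-codeword scalar quantizer: assignments are soft Gibbs probabilities at high temperature and harden as the temperature is cooled. The data, initialization at the data extremes (which sidesteps full DA's phase-splitting machinery), and cooling schedule are all hypothetical choices for this example.

```python
import math

def da_quantize(data, codebook, t0=10.0, t_min=1e-3, alpha=0.8, sweeps=20):
    """Anneal soft (Gibbs) assignments down to hard nearest-codeword ones."""
    t = t0
    while t > t_min:
        for _ in range(sweeps):
            num = [0.0] * len(codebook)
            den = [0.0] * len(codebook)
            for x in data:
                # Gibbs association probabilities at temperature t
                w = [math.exp(-((x - c) ** 2) / t) for c in codebook]
                s = sum(w)
                for j, wj in enumerate(w):
                    p = wj / s
                    num[j] += p * x
                    den[j] += p
            # each codeword moves to the probability-weighted mean of the data
            codebook = [num[j] / den[j] if den[j] > 0 else codebook[j]
                        for j in range(len(codebook))]
        t *= alpha                      # cooling schedule
    return codebook

data = [-0.1, 0.0, 0.1, 4.9, 5.0, 5.1]
cb = sorted(da_quantize(data, [min(data), max(data)]))
# cb ends up near the two cluster means
```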
Barbieri, Riccardo; Harigaya, Keisuke
2016-01-01
In a Mirror Twin World with a maximally symmetric Higgs sector the little hierarchy of the Standard Model can be significantly mitigated, perhaps displacing the cutoff scale above the LHC reach. We show that consistency with observations requires that the Z2 parity exchanging the Standard Model with its mirror be broken in the Yukawa couplings. A minimal such effective field theory, with this sole Z2 breaking, can generate the Z2 breaking in the Higgs sector necessary for the Twin Higgs mechanism, and has constrained and correlated signals in invisible Higgs decays, direct Dark Matter Detection and Dark Radiation, all within reach of foreseen experiments. For dark matter, both mirror neutrons and a variety of self-interacting mirror atoms are considered. Neutrino mass signals and the effects of a possible additional Z2 breaking from the vacuum expectation values of B-L breaking fields are also discussed.
Fabbrichesi, Marco
2015-01-01
We show how the Higgs boson mass is protected from the potentially large corrections due to the introduction of minimal dark matter if the new physics sector is made supersymmetric. The fermionic dark matter candidate (a 5-plet of $SU(2)_L$) is accompanied by a scalar state. The weak gauge sector is made supersymmetric and the Higgs boson is embedded in a supersymmetric multiplet. The remaining standard model states are non-supersymmetric. Non-vanishing corrections to the Higgs boson mass only appear at the three-loop level, and the model is natural for dark matter masses up to 15 TeV--a value larger than the one required by the cosmological relic density. The construction presented stands as an example of a general approach to naturalness that solves the little hierarchy problem which arises when new physics is added beyond the standard model at an energy scale around 10 TeV.
Resource Minimization Job Scheduling
Chuzhoy, Julia; Codenotti, Paolo
Given a set J of jobs, where each job j is associated with a release date r_j, deadline d_j and processing time p_j, our goal is to schedule all jobs using the minimum possible number of machines. Scheduling a job j requires selecting an interval of length p_j between its release date and deadline, and assigning it to a machine, with the restriction that each machine executes at most one job at any given time. This is one of the basic settings in resource-minimization job scheduling, and the classical randomized rounding technique of Raghavan and Thompson provides an O(log n / log log n)-approximation for it. This result has been recently improved to an O(sqrt(log n))-approximation, and moreover an efficient algorithm for scheduling all jobs on O(OPT^2) machines has been shown. We build on this prior work to obtain a constant factor approximation algorithm for the problem.
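To make the setting concrete, here is a tiny brute-force sketch of the model itself (illustrative only, unrelated to the approximation algorithms above): each job is a triple (r, d, p), a schedule picks an integer start time s in [r, d - p], and the number of machines needed equals the maximum number of overlapping intervals.

```python
from itertools import product

def machines_needed(jobs, starts):
    """Max overlap of the chosen intervals = machines required."""
    events = []
    for (r, d, p), s in zip(jobs, starts):
        events += [(s, 1), (s + p, -1)]
    events.sort()          # an end at time t sorts before a start at t
    cur = best = 0
    for _, delta in events:
        cur += delta
        best = max(best, cur)
    return best

def min_machines(jobs):
    """Exhaustive search over integer start times (feasible toy instances only)."""
    windows = [range(r, d - p + 1) for r, d, p in jobs]
    return min(machines_needed(jobs, s) for s in product(*windows))
```

The exhaustive search is exponential in the number of jobs, which is exactly why the approximation algorithms discussed in the abstract matter.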
Sequential unconstrained minimization algorithms for constrained optimization
Byrne, Charles
2008-02-01
The problem of minimizing a function f(x): R^J → R, subject to constraints on the vector variable x, occurs frequently in inverse problems. Even without constraints, finding a minimizer of f(x) may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the kth step we minimize the function G_k(x) = f(x) + g_k(x) to obtain x^k. The auxiliary functions g_k(x): D ⊆ R^J → R_+ are nonnegative on the set D, each x^k is assumed to lie within D, and the objective is to minimize the continuous function f: R^J → R over x in the set C = cl(D), the closure of D. We assume that such minimizers exist, and denote one such by x̂. We assume that the functions g_k(x) satisfy the inequalities 0 ≤ g_k(x) ≤ G_{k-1}(x) - G_{k-1}(x^{k-1}), for k = 2, 3, .... Using this assumption, we show that the sequence {f(x^k)} is decreasing and converges to f(x̂). If the restriction of f(x) to D has bounded level sets, which happens if x̂ is unique and f(x) is closed, proper and convex, then the sequence {x^k} is bounded, and f(x*) = f(x̂) for any cluster point x*. Therefore, if x̂ is unique, x* = x̂ and {x^k} → x̂. When x̂ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton-Raphson method. The proof techniques used for SUMMA can be extended to obtain related results for the induced proximal
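Barrier methods, one of the particular cases mentioned, give a compact illustration of the SUMMA template G_k(x) = f(x) + g_k(x): take g_k(x) = b(x)/k with b a log barrier for the constraint set. The 1-D toy problem and the golden-section inner solver below are illustrative assumptions, not from the paper.

```python
import math

def golden_min(g, lo, hi, tol=1e-10):
    """Golden-section search for the minimizer of a unimodal g on [lo, hi]."""
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if g(c) < g(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

# Minimize f(x) = x^2 subject to x >= 1, via the log barrier b(x) = -log(x - 1).
f = lambda x: x * x
barrier = lambda x: -math.log(x - 1.0)

xs = []
for k in range(1, 61):
    G_k = lambda x, k=k: f(x) + barrier(x) / k   # G_k(x) = f(x) + g_k(x)
    xs.append(golden_min(G_k, 1.0 + 1e-9, 10.0))
# the iterates x^k decrease toward the constrained minimizer x̂ = 1,
# so f(x^k) decreases toward f(x̂) = 1, as SUMMA guarantees
```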
Minimal projections with respect to various norms
Aksoy, Asuman Guven
2010-01-01
We will show that a theorem of Rudin \\cite{wr1}, \\cite{wr}, permits us to determine minimal projections not only with respect to the operator norm but with respect to quasi-norms in operators ideals and numerical radius in many concrete cases.
Deterministic Computer-Controlled Polishing Process for High-Energy X-Ray Optics
Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian
2010-01-01
A deterministic computer-controlled polishing process for large X-ray mirror mandrels is presented. Using the tool's influence function and the material removal rate extracted from polishing experiments, design considerations for polishing laps and optimized operating parameters are discussed.
Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII data
U.S. Environmental Protection Agency — This dataset documents the source of the data analyzed in the manuscript " Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII...
Deterministic chaos in RL-diode circuits and its application in metrology
Kucheruk, Volodymyr; Katsyv, Samuil; Glushko, Mykhailo; Wójcik, Waldemar; Zyska, Tomasz; Taissariyeva, Kyrmyzy; Mussabekov, Kanat
2016-09-01
The paper investigates the possibility of measuring resistive physical quantities using a deterministic-chaos signal generator based on an RL-diode circuit. A generalized structure of a measuring device built around such a generator is proposed, with an amplitude detector used to separate the useful component of the measurement signal. Mathematical modeling of the RL-diode circuit shows a significant effect of the barrier and diffusion capacitances of the diode on the onset of deterministic chaotic oscillations in the circuit. It is shown that this type of deterministic-chaos signal generator has a high sensitivity of its output voltage to changes of resistance over a range of 250 Ohms, which can be exploited to build measuring devices based on it.
Institute of Scientific and Technical Information of China (English)
王国栋; 王红超
2015-01-01
Objective: To investigate individualized minimally invasive treatment of lower-limb varicose veins and its effect. Methods: A retrospective review of 160 patients with lower-limb varicose veins (229 limbs in total) treated with a variety of minimally invasive techniques from Jan. 2009 to Jun. 2013. 21 limbs in 15 patients were treated by endovenous laser treatment (EVLT) alone; 123 limbs in 80 patients by high ligation combined with EVLT; 63 limbs in 48 patients by high ligation combined with EVLT and local point removal of varicose vein masses; 22 limbs in 17 patients by high ligation combined with EVLT and subfascial endoscopic perforator surgery (SEPS). Results: All incisions healed primarily. 30 cases had pain and cord-like induration at the great saphenous vein trunk and at ablated local varicose veins of the calf; 19 cases had patchy subcutaneous ecchymosis; 18 cases had local skin numbness; 2 cases recurred after operation. Patients with concomitant skin ulcers healed after operation with dressing changes. All patients had a clinically significant reduction in symptoms, with no lower-extremity deep vein thrombosis. 141 patients were followed up for 6-54 months. Conclusion: Individualized therapy for different degrees of lower-limb varicose veins improves the cure rate and is an effective measure.
Deterministic and Probabilistic Analysis of NPP Communication Bridge Resistance Due to Extreme Loads
Directory of Open Access Journals (Sweden)
Králik Juraj
2014-12-01
Full Text Available This paper presents experience from deterministic and probabilistic analyses of the resistance of a communication bridge structure under extreme loads - wind and earthquake. The efficiency of the bracing systems is considered using the example of a steel bridge between two NPP buildings. The advantages and disadvantages of deterministic and probabilistic analyses of structural resistance are discussed. The advantages of using the LHS method to analyze the safety and reliability of structures are presented.
Deterministic methods in radiation transport. A compilation of papers presented February 4-5, 1992
Energy Technology Data Exchange (ETDEWEB)
Rice, A. F.; Roussin, R. W. [eds.
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.
2010-01-01
The purpose of this paper is to present a universal approach to the study of controllability/observability problems for infinite dimensional systems governed by some stochastic/deterministic partial differential equations. The crucial analytic tool is a class of fundamental weighted identities for stochastic/deterministic partial differential operators, via which one can derive the desired global Carleman estimates. This method can also give a unified treatment of the stabilization, global un...
Parallel deterministic neutronics with AMR in 3D
Energy Technology Data Exchange (ETDEWEB)
Clouse, C.; Ferguson, J.; Hendrickson, C. [Lawrence Livermore National Lab., CA (United States)
1997-12-31
AMTRAN, a three-dimensional Sn neutronics code with adaptive mesh refinement (AMR), has been parallelized over spatial domains and energy groups and runs on the Meiko CS-2 with MPI message passing. Block-refined AMR is used with linear finite element representations for the fluxes, which allows for a straightforward interpretation of fluxes at block interfaces with zoning differences. The load balancing algorithm assumes 8 spatial domains, which minimizes idle time among processors.
Multi-scale dynamical behavior of spatially distributed systems: a deterministic point of view
Mangiarotti, S.; Le Jean, F.; Drapeau, L.; Huc, M.
2015-12-01
Physical and biophysical systems are spatially distributed systems. Their behavior can be observed or modelled spatially at various resolutions. In this work, a deterministic point of view is adopted to analyze multi-scale behavior, taking a set of ordinary differential equations (ODE) as the elementary part of the system. To perform the analyses, scenes of study are generated from ensembles of identical elementary ODE systems. Without any loss of generality, their dynamics are chosen to be chaotic in order to ensure sensitivity to initial conditions, that is, one fundamental property of the atmosphere under unstable conditions [1]. The Rössler system [2] is used for this purpose for both its topological and algebraic simplicity [3,4]. Two cases are thus considered: the chaotic oscillators composing the scene of study are taken either independent, or in phase synchronization. Scale behaviors are analyzed considering the scene of study as aggregations (basically obtained by spatially averaging the signal) or as associations (obtained by concatenating the time series). The global modeling technique is used to perform the numerical analyses [5]. One important result of this work is that, under phase synchronization, a scene of aggregated dynamics can be approximated by the elementary system composing the scene, but with a modified parameterization [6]. This is shown based on numerical analyses. It is then demonstrated analytically and generalized to a larger class of ODE systems. Preliminary applications to cereal crops observed from satellite are also presented. [1] Lorenz, Deterministic nonperiodic flow, J. Atmos. Sci., 20, 130-141 (1963). [2] Rössler, An equation for continuous chaos, Phys. Lett. A, 57, 397-398 (1976). [3] Gouesbet & Letellier, Global vector-field reconstruction by using a multivariate polynomial L2 approximation on nets, Phys. Rev. E 49, 4955-4972 (1994). [4] Letellier, Roulin & Rössler, Inequivalent topologies of chaos in simple equations, Chaos, Solitons
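A minimal numerical sketch of the elementary unit described above, the Rössler system with its classic parameterization (a = b = 0.2, c = 5.7), illustrating sensitivity to initial conditions; the integrator, step size, horizon, and initial states are illustrative choices, not from the paper.

```python
import math

def rossler(s, a=0.2, b=0.2, c=5.7):
    """Rössler vector field: dx = -y - z, dy = x + a*y, dz = b + z*(x - c)."""
    x, y, z = s
    return (-y - z, x + a * y, b + z * (x - c))

def rk4_step(f, s, h):
    """One classical 4th-order Runge-Kutta step of size h."""
    k1 = f(s)
    k2 = f(tuple(si + h / 2 * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + h / 2 * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h / 6 * (u1 + 2 * u2 + 2 * u3 + u4)
                 for si, u1, u2, u3, u4 in zip(s, k1, k2, k3, k4))

def trajectory(s0, h=0.01, steps=30000):
    out = [s0]
    s = s0
    for _ in range(steps):
        s = rk4_step(rossler, s, h)
        out.append(s)
    return out

a_traj = trajectory((1.0, 1.0, 1.0))
b_traj = trajectory((1.0, 1.0, 1.0 + 1e-6))  # tiny perturbation of z(0)
# maximum separation of the two trajectories: grows far beyond 1e-6
dmax = max(math.dist(p, q) for p, q in zip(a_traj, b_traj))
```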
Zhong, Zhaopeng; Talamo, Alberto; Gohar, Yousry
2013-07-01
The effective delayed neutron fraction β plays an important role in the kinetics and static analysis of reactor physics experiments. It is used as the reactivity unit referred to as the "dollar". Usually, it is obtained by computer simulation due to the difficulty of measuring it experimentally. In 1965, Keepin proposed a method, widely used in the literature, for the calculation of the effective delayed neutron fraction β. This method requires calculation of the adjoint neutron flux as a weighting function of the phase space inner products and is easy to implement in deterministic codes. With Monte Carlo codes, the solution of the adjoint neutron transport equation is much more difficult because of the continuous-energy treatment of nuclear data. Consequently, alternative methods, which do not require the explicit calculation of the adjoint neutron flux, have been proposed. In 1997, Bretscher introduced the k-ratio method for calculating the effective delayed neutron fraction; this method is based on calculating the multiplication factor of a nuclear reactor core with and without the contribution of delayed neutrons. The multiplication factor set by the delayed neutrons (the delayed multiplication factor) is obtained as the difference between the total and the prompt multiplication factors. Using Monte Carlo calculations, Bretscher evaluated β as the ratio between the delayed and total multiplication factors (therefore the method is often referred to as the k-ratio method). In the present work, the k-ratio method is applied by Monte Carlo (MCNPX) and deterministic (PARTISN) codes. In the latter case, the ENDF/B nuclear data library of the fuel isotopes (235U and 238U) has been processed by the NJOY code with and without the delayed neutron data to prepare multi-group WIMSD neutron libraries for the lattice physics code DRAGON, which was used to generate the PARTISN macroscopic cross sections. In recent years Meulekamp and van der Marck in 2006 and Nauchi and Kameyama
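The k-ratio arithmetic itself is a one-liner: with k_total from a run including delayed-neutron production and k_prompt from a run without it, β ≈ (k_total − k_prompt) / k_total. A sketch with hypothetical eigenvalues (not values from the paper):

```python
def beta_eff_k_ratio(k_total, k_prompt):
    """Bretscher's k-ratio estimate of the effective delayed neutron fraction."""
    return (k_total - k_prompt) / k_total

# hypothetical multiplication factors from a pair of runs,
# with and without delayed-neutron production
beta = beta_eff_k_ratio(1.00000, 0.99280)   # about 0.0072, i.e. ~720 pcm
```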
Park, Y. C.; Chang, M. H.; Lee, T.-Y.
2007-06-01
A deterministic global optimization method applicable to general nonlinear programming problems composed of twice-differentiable objective and constraint functions is proposed. The method hybridizes the branch-and-bound algorithm with a convex cut function (CCF). For a given subregion, a convex underestimator of the objective function, which does not need an iterative local optimizer to determine the lower bound of the objective function, is generated. If the obtained lower bound is located in an infeasible region, then the CCF is generated for the constraints to cut this region. The cutting region generated by the CCF forms a hyperellipsoid and serves as the basis of a discarding rule for the selected subregion. However, the convergence rate decreases as the number of cutting regions increases. To accelerate the convergence rate, an inclusion relation between two hyperellipsoids should be applied in order to reduce the number of cutting regions. It is shown that the two-hyperellipsoid inclusion relation is determined by maximizing a quadratic function over a sphere, which is a special case of a trust region subproblem. The proposed method is applied to twelve nonlinear programming test problems and five engineering design problems. Numerical results show that the proposed method converges in a finite calculation time and produces accurate solutions.
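The branch-and-bound skeleton underlying such methods is easy to see in one dimension. The sketch below uses a simple Lipschitz lower bound f(mid) − L·width/2 in place of the paper's convex underestimator and CCF machinery (which it does not attempt to reproduce); the test function and Lipschitz constant are hypothetical.

```python
import heapq

def bb_minimize(f, lo, hi, lipschitz, tol=1e-6):
    """Branch-and-bound over intervals with lower bound f(mid) - L*width/2."""
    mid = (lo + hi) / 2
    best_x, best_f = mid, f(mid)
    heap = [(best_f - lipschitz * (hi - lo) / 2, lo, hi)]
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb >= best_f - tol:
            continue                      # discarding rule: cannot beat incumbent
        for l, h in ((a, (a + b) / 2), ((a + b) / 2, b)):
            m = (l + h) / 2
            fm = f(m)
            if fm < best_f:
                best_x, best_f = m, fm    # update incumbent
            child_lb = fm - lipschitz * (h - l) / 2
            if child_lb < best_f - tol:
                heapq.heappush(heap, (child_lb, l, h))
    return best_x, best_f

# hypothetical test problem: min (x - 1.3)^2 + 0.5 on [-4, 4]; |f'| <= 10.6 <= 12
x_star, f_star = bb_minimize(lambda x: (x - 1.3) ** 2 + 0.5, -4.0, 4.0, 12.0)
```

Every subregion is either subdivided or discarded once its lower bound cannot improve on the incumbent, which guarantees finite termination, the same deterministic guarantee the paper's method provides with much sharper bounds.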
mouloud, Hamidatou
2016-04-01
The objective of this paper is to analyze the seismic activity and provide a statistical treatment of the seismicity catalog of the Constantine region between 1357 and 2014, comprising 7007 seismic events. Our research is a contribution to improving seismic risk management by evaluating the seismic hazard in North-East Algeria. In the present study, earthquake hazard maps for the Constantine region are calculated. Probabilistic seismic hazard analysis (PSHA) is classically performed through the Cornell approach by using a uniform earthquake distribution over the source area and a given magnitude range. This study aims at extending the PSHA approach to the case of a characteristic earthquake scenario associated with an active fault. The approach integrates PSHA with a high-frequency deterministic technique for the prediction of peak and spectral ground motion parameters in a characteristic earthquake. The method is based on the site-dependent evaluation of the probability of exceedance for the chosen strong-motion parameter. We propose five seismotectonic zones. Five steps are necessary: (i) identification of potential sources of future earthquakes, (ii) assessment of their geological, geophysical and geometric characteristics, (iii) identification of the attenuation pattern of seismic motion, (iv) calculation of the hazard at a site, and finally (v) hazard mapping for a region. In this study, the procedure for earthquake hazard evaluation developed by Kijko and Sellevoll (1992) is used to estimate seismic hazard parameters in the northern part of Algeria.
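The core probabilistic arithmetic behind such hazard estimates can be illustrated in a few lines: a Gutenberg-Richter recurrence law gives the annual rate of events above a magnitude, and a Poisson assumption converts that rate into a probability of exceedance over a time horizon. The a and b values and the 50-year horizon below are hypothetical, not fitted to the Constantine catalog.

```python
import math

def gr_rate(m, a=4.0, b=1.0):
    """Annual rate of events with magnitude >= m (Gutenberg-Richter law)."""
    return 10.0 ** (a - b * m)

def prob_exceedance(annual_rate, years):
    """Poisson probability of at least one such event in the given period."""
    return 1.0 - math.exp(-annual_rate * years)

rate_m6 = gr_rate(6.0)                # 10^(4 - 6) = 0.01 events per year
p50 = prob_exceedance(rate_m6, 50.0)  # P(at least one M >= 6 event in 50 years)
```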
Energy Technology Data Exchange (ETDEWEB)
Kim, Inn Seock, E-mail: innseockkim@gmail.co [ISSA Technology, 21318 Seneca Crossing Drive, Germantown, MD 20876 (United States); Ahn, Sang Kyu; Oh, Kyu Myung [Korea Institute of Nuclear Safety, 19 Kusong-dong, Yuseong-gu, Daejeon 305-338 (Korea, Republic of)
2010-05-15
Technical insights and findings from a critical review of deterministic approaches typically applied to ensure design safety of nuclear power plants were presented in the companion paper (Part I) included in this issue. In this paper we discuss the risk-informed approaches that have been proposed to make a safety case for advanced reactors, including Generation-IV reactors such as the Modular High-Temperature Gas-cooled Reactor (MHTGR), Pebble Bed Modular Reactor (PBMR), or Sodium-cooled Fast Reactor (SFR). Also considered herein are a risk-informed safety analysis approach suggested by Westinghouse as a means to improve conventional accident analysis, together with the Technology Neutral Framework recently developed by the US Nuclear Regulatory Commission as a high-level regulatory infrastructure for safety evaluation of any type of reactor design. The insights from a comparative review of various deterministic and risk-informed approaches could be useful in developing a new licensing architecture for enhanced safety of evolutionary or advanced plants.
Giribet, Gaston; Vásquez, Yerko
2015-01-01
Minimal massive gravity (MMG) is an extension of three-dimensional topologically massive gravity that, when formulated about anti-de Sitter space, accomplishes solving the tension between bulk and boundary unitarity that other models in three dimensions suffer from. We study this theory at the chiral point, i.e. at the point of the parameter space where one of the central charges of the dual conformal field theory vanishes. We investigate the nonlinear regime of the theory, meaning that we study exact solutions to the MMG field equations that are not Einstein manifolds. We exhibit a large class of solutions of this type, which behave asymptotically in different manners. In particular, we find analytic solutions that represent two-parameter deformations of extremal Bañados-Teitelboim-Zanelli black holes. These geometries behave asymptotically as solutions of the so-called log gravity, and, despite the weakened falling off close to the boundary, they have finite mass and finite angular momentum, which we compute. We also find time-dependent deformations of Bañados-Teitelboim-Zanelli that obey Brown-Henneaux asymptotic boundary conditions. The existence of such solutions shows that the Birkhoff theorem does not hold in MMG at the chiral point. Other peculiar features of the theory at the chiral point, such as the degeneracy it exhibits in the decoupling limit, are discussed.
Giribet, Gaston
2014-01-01
Minimal Massive Gravity (MMG) is an extension of three-dimensional Topologically Massive Gravity that, when formulated about Anti-de Sitter space, manages to solve the tension between bulk and boundary unitarity that other models in three dimensions suffer from. We study this theory at the chiral point, i.e. at the point of the parameter space where one of the central charges of the dual conformal field theory vanishes. We investigate the non-linear regime of the theory, meaning that we study exact solutions to the MMG field equations that are not Einstein manifolds. We exhibit a large class of solutions of this type, which behave asymptotically in different manners. In particular, we find analytic solutions that represent two-parameter deformations of extremal Banados-Teitelboim-Zanelli (BTZ) black holes. These geometries behave asymptotically as solutions of the so-called Log Gravity, and, despite the weakened falling-off close to the boundary, they have finite mass and finite angular momentum, which w...
Directory of Open Access Journals (Sweden)
Oda Kin-ya
2013-05-01
Full Text Available Both the ATLAS and CMS experiments at the LHC have reported the observation of a particle of mass around 125 GeV which is consistent with the Standard Model (SM) Higgs boson, but with an excess of events beyond the SM expectation in the diphoton decay channel at each of them. There still remains room for the logical possibility that we are not seeing the SM Higgs but something else. Here we introduce the minimal dilaton model, in which the LHC signals are explained by an extra singlet scalar of mass around 125 GeV that slightly mixes with an SM Higgs heavier than 600 GeV. When this scalar has a vacuum expectation value well beyond the electroweak scale, it can be identified as a linearly realized version of a dilaton field. Though the current experimental constraints from the Higgs search disfavor such a region, the singlet scalar model itself still provides a viable alternative to the SM Higgs in interpreting its search results.
Deterministic Modeling of the High Temperature Test Reactor
Energy Technology Data Exchange (ETDEWEB)
Ortensi, J.; Cogliati, J. J.; Pope, M. A.; Ferrer, R. M.; Ougouag, A. M.
2010-06-01
Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine INL's current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn). A fine-group cross section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full-core solver used in this study and is based on the Green's function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and nodal diffusion solver codes. The results from this study show a consistent bias of 2-3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B VII graphite and 235U cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the
Institute of Scientific and Technical Information of China (English)
WANG JiaMin
2009-01-01
In this paper we consider a retail service facility with cross-trained workers who can perform operations in both the front room and back room. Workers are brought from the back room to the front room and vice versa depending on the number of customers in the system. A loss of productivity occurs when a worker returns to the back room. Two problems are studied. In the first problem, given the number of workers available, we determine an optimal deterministic switching policy so that the expected number of customers in queue is minimized subject to a constraint ensuring that there is a sufficient workforce to fulfill the functions in the back room. In the second problem, the number of workers needed is minimized subject to an additional constraint requiring that the expected number of customers waiting in queue is bounded above by a given threshold value. Exact solution procedures are developed and illustrative numerical examples are presented.
Minimally invasive restorative dentistry: a biomimetic approach.
Malterud, Mark I
2006-08-01
When providing dental treatment for a given patient, the practitioner should use a minimally invasive technique that conserves sound tooth structure as a clinical imperative. Biomimetics is a tenet that guides the author's practice and is generally described as the mimicking of natural life. This can be accomplished in many cases using contemporary composite resins and adhesive dental procedures. Both provide clinical benefits and support the biomimetic philosophy for treatment. This article illustrates a minimally invasive approach for the restoration of carious cervical defects created by poor hygiene exacerbated by the presence of orthodontic brackets.
Minimal String Theory and the Douglas Equation
Belavin, A. A.; Belavin, V. A.
We use the connection between the Frobenius manifold and the Douglas string equation to further investigate Minimal Liouville gravity. We search for a solution of the Douglas string equation and simultaneously a proper transformation from the KdV to the Liouville frame which ensures the fulfilment of the conformal and fusion selection rules. We find that the desired solution of the string equation has an explicit and simple form in the flat coordinates on the Frobenius manifold in the general case of (p,q) Minimal Liouville gravity.
Institute of Scientific and Technical Information of China (English)
李程; 严景元; 刘利权; 王永胜; 岳良; 王波
2012-01-01
Objective: To investigate the clinical utility and efficacy of minimally invasive percutaneous nephroscopic holmium laser treatment of renal pelvis and calyx stenosis or atresia. Methods: From March 2008 to February 2011, 26 patients with postoperative secondary or primary renal pelvis/calyx stenosis or atresia underwent minimally invasive percutaneous renal puncture under C-arm X-ray or B-ultrasound guidance to establish an F16 percutaneous tract; the stenosis or atresia was incised with a holmium laser under ureteroscopy, and an F6 D-J stent was left postoperatively for renal pelvis and calyx drainage. Results: The primary operation succeeded in 25 of the 26 patients; in 1 patient the minimally invasive percutaneous holmium laser treatment failed because the atretic segment of the renal pelvis exceeded 2.0 cm, and open reconstructive surgery was performed in a second stage. In the 25 successful cases, B-ultrasound at 2-3 months showed marked relief of hydronephrosis with normal renal function, and the D-J stents were successfully removed. Intravenous pyelography at 3-6 months showed good visualization of the renal pelvis and calyces in 23 cases; 2 cases showed recurrent stenosis of the renal pelvis, which resolved after a further 3 months of F6 D-J stenting placed ureteroscopically. Conclusion: Minimally invasive percutaneous nephroscopic holmium laser treatment of renal pelvis and calyx stenosis is minimally traumatic, safe and effective, and is particularly suitable for stenosis or atresia secondary to previous open surgery.
Late de novo minimal change disease in a renal allograft
Madhan Krishan; Temple-Camp Cynric
2009-01-01
Among the causes of the nephrotic syndrome in renal allografts, minimal change disease is a rarity with only few cases described in the medical literature. Most cases described have occurred early in the post-transplant course. There is no established treatment for the condition but prognosis is favorable. We describe a case of minimal change disease that developed 8 years after a successful transplantation of a renal allograft in a middle-aged woman. The nephrotic syndrome was accompanied by...
Multiple objectives application approach to waste minimization
Institute of Scientific and Technical Information of China (English)
张清宇
2002-01-01
Besides economics and controllability, waste minimization has now become an objective in designing chemical processes, and usually leads to high costs of investment and operation. An attempt was made to minimize waste discharged from chemical reaction processes during the design and modification process, while the operation conditions were also optimized to meet the requirements of technology and economics. Multi-objective decision nonlinear programming (NLP) was employed to optimize the operation conditions of a chemical reaction process and reduce waste. A modeling language package, SPEEDUP, was used to simulate the process. This paper presents a case study of the benzene production process. The flowsheet factors affecting the economics and waste generation were examined. Constraints were imposed to reduce the number of objectives and carry out optimal calculations easily. After comparison of all possible solutions, a best-compromise approach was applied to meet technological requirements and minimize waste.
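The best-compromise idea described above can be illustrated with a deliberately tiny stand-in problem. The two objectives below are invented quadratics, not the SPEEDUP benzene model; the sketch only shows how a weighted-sum scalarization collapses two objectives (operating cost, waste) into a single minimization:

```python
# Hedged toy illustration (not the paper's process model): weighted-sum
# scalarization of a two-objective problem over one operating variable x
# (think of x as an illustrative reactor operating setting).
def cost(x):
    return (x - 4.0) ** 2        # cheapest operation at x = 4

def waste(x):
    return (x - 1.0) ** 2        # least waste at x = 1

def best_compromise(w_cost, w_waste, lo=0.0, hi=5.0, steps=10000):
    """Grid search for the minimizer of the weighted sum of objectives."""
    xs = [lo + (hi - lo) * k / steps for k in range(steps + 1)]
    return min(xs, key=lambda x: w_cost * cost(x) + w_waste * waste(x))
```

With equal weights the compromise lands midway between the two single-objective optima; shifting the weights trades waste against cost, which is the decision the paper's constrained NLP formalizes.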
Solute Transport in a Heterogeneous Aquifer: A Nonlinear Deterministic Dynamical Analysis
Sivakumar, B.; Harter, T.; Zhang, H.
2003-04-01
Stochastic approaches are widely used for modeling and prediction of uncertainty in groundwater flow and transport processes. An important reason for this is our belief that the dynamics of the seemingly complex and highly irregular subsurface processes are essentially random in nature. However, the discovery of nonlinear deterministic dynamical theory has revealed that random-looking behavior could also be the result of simple deterministic mechanisms influenced by only a few nonlinear interdependent variables. The purpose of the present study is to introduce this theory to subsurface solute transport process, in an attempt to investigate the possibility of understanding the transport dynamics in a much simpler, deterministic, manner. To this effect, salt transport process in a heterogeneous aquifer medium is studied. Specifically, time series of arrival time of salt particles are analyzed. These time series are obtained by integrating a geostatistical (transition probability/Markov chain) model with a groundwater flow model (MODFLOW) and a salt transport (Random Walk Particle) model. The (dynamical) behavior of the transport process (nonlinear deterministic or stochastic) is identified using standard statistical techniques (e.g. autocorrelation function, power spectrum) as well as specific nonlinear deterministic dynamical techniques (e.g. phase-space diagram, correlation dimension method). The sensitivity of the salt transport dynamical behavior to the hydrostratigraphic parameters (i.e. number, volume proportions, mean lengths, and juxtapositional tendencies of facies) used in the transition probability/Markov chain model is also studied. The results indicate that the salt transport process may exhibit very simple (i.e. deterministic) to very complex (i.e. stochastic) dynamical behavior, depending upon the above parameters (i.e. characteristics of the aquifer medium). Efforts towards verification and strengthening of the present results and prediction of salt
Institute of Scientific and Technical Information of China (English)
戎保林; 王君; 梅新宇; 余美青; 马冬春; 魏大中; 郭明发; 徐世斌; 柯立; 田界勇
2011-01-01
Objective: To assess the clinical efficacy of the Nuss procedure in correction of pectus excavatum. Methods: Between April 2008 and October 2010, 35 patients with pectus excavatum were treated by the thoracoscope-assisted minimally invasive Nuss procedure; 29 were male and 6 female, with a mean age of (12.23 ± 5.7) years, including 15 patients aged 15-24 years. Results: The operation was completed successfully and the deformity satisfactorily corrected in all 35 patients. Operation time was 60-120 min, blood loss 3-15 ml, and postoperative hospital stay 5-9 d. No pericardial or cardiac injury occurred. One incision infection with bar exposure healed after dressing changes; one incision closed with surgical adhesive dehisced and healed after re-suturing; one patient developed a bar reaction 7 months after surgery that has not yet healed despite dressing changes; pneumothorax occurred in 3 cases (2 small ones resolved without treatment, 1 resolved after aspiration); and in 1 patient with double-bar implantation, follow-up at 1 month showed mild displacement of the lower bar. All patients received long-term follow-up. Conclusion: The Nuss procedure is a safe and effective correction of congenital pectus excavatum, with the advantages of minimal trauma, rapid recovery and improved appearance.
Institute of Scientific and Technical Information of China (English)
贾春福
2004-01-01
This paper addresses a stochastic scheduling problem in which n jobs are to be processed on a single machine. The machine is subject to stochastic breakdowns characterized by a generalized Poisson process. The objective is to find job schedules that minimize the expected variance of completion times. The deterministic equivalent of the stochastic scheduling problem is developed. Moreover, optimal sequences are derived for the special case with identical processing times.
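The objective being minimized can be sketched under the simplifying assumption of no breakdowns (the paper's deterministic equivalent is derived there, not reproduced here):

```python
# Hedged sketch: completion-time variance (CTV) of one job sequence on a
# single machine with deterministic processing times. Breakdowns, and the
# paper's expectation over them, are deliberately omitted.
def completion_time_variance(processing_times):
    completions, t = [], 0.0
    for p in processing_times:
        t += p                      # each job starts when the previous ends
        completions.append(t)
    mean = sum(completions) / len(completions)
    return sum((c - mean) ** 2 for c in completions) / len(completions)
```

In this deterministic sketch, identical processing times give the same completion times for every ordering, which hints at why that special case admits a clean optimal-sequence result; with distinct times, the sequence matters.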
Wildfire susceptibility mapping: comparing deterministic and stochastic approaches
Pereira, Mário; Leuenberger, Michael; Parente, Joana; Tonini, Marj
2016-04-01
Conservation of Nature and Forests (ICNF) (http://www.icnf.pt/portal), which provides a detailed description of the shape and size of the area burnt by each fire in each year of occurrence. Two methodologies for susceptibility mapping were compared. First, the deterministic approach, based on the study of Verde and Zêzere (2010), which includes the computation of favorability scores for each variable and of the fire occurrence probability, as well as the validation of each model resulting from the integration of different variables. Second, as a non-linear method we selected the Random Forest algorithm (Breiman, 2001): this led us to identify the most relevant variables conditioning the presence of wildfire and allowed us to generate a map of fire susceptibility based on the resulting variable importance measures. By means of GIS techniques, we mapped the obtained predictions, which represent the susceptibility of the study area to fires. Results obtained by applying both methodologies for wildfire susceptibility mapping, as well as wildfire hazard maps for different total annual burnt area scenarios, were compared with the reference maps, allowing us to assess the best approach for susceptibility mapping in Portugal. References: - Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32. - Verde, J. C., & Zêzere, J. L. (2010). Assessment and validation of wildfire susceptibility and hazard in Portugal. Natural Hazards and Earth System Science, 10(3), 485-497.
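For readers unfamiliar with the non-linear method named above, the bagging-and-voting core of Random Forest can be sketched in a few lines. This toy uses depth-one trees (stumps) and invented two-feature data standing in for predictor variables such as slope or vegetation cover; Breiman's full algorithm grows deep trees and additionally subsamples features at each split:

```python
import random

# Hedged, minimal illustration of the Random Forest idea: bootstrap-
# aggregated decision stumps voting fire (1) / no-fire (0). Data,
# feature meanings, and parameters are all illustrative.
def train_stump(X, y):
    """Best single-feature threshold split by training accuracy."""
    best = None  # (accuracy, feature, threshold, sign)
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            for sign in (1, -1):
                pred = [1 if sign * (row[f] - t) > 0 else 0 for row in X]
                acc = sum(p == yi for p, yi in zip(pred, y)) / len(y)
                if best is None or acc > best[0]:
                    best = (acc, f, t, sign)
    _, f, t, sign = best
    return lambda row: 1 if sign * (row[f] - t) > 0 else 0

def random_forest(X, y, n_trees=25, seed=0):
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in X]          # bootstrap sample
        stumps.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return lambda row: int(sum(s(row) for s in stumps) > n_trees / 2)
```

Variable importance, which the study uses to rank the conditioning variables, can then be estimated by measuring how much shuffling one feature degrades the ensemble's accuracy.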
Blackfolds, plane waves and minimal surfaces
Armas, Jay; Blau, Matthias
2015-07-01
Minimal surfaces in Euclidean space provide examples of possible non-compact horizon geometries and topologies in asymptotically flat space-time. On the other hand, the existence of limiting surfaces in the space-time provides a simple mechanism for making these configurations compact. Limiting surfaces appear naturally in a given space-time by making minimal surfaces rotate but they are also inherent to plane wave or de Sitter space-times in which case minimal surfaces can be static and compact. We use the blackfold approach in order to scan for possible black hole horizon geometries and topologies in asymptotically flat, plane wave and de Sitter space-times. In the process we uncover several new configurations, such as black helicoids and catenoids, some of which have an asymptotically flat counterpart. In particular, we find that the ultraspinning regime of singly-spinning Myers-Perry black holes, described in terms of the simplest minimal surface (the plane), can be obtained as a limit of a black helicoid, suggesting that these two families of black holes are connected. We also show that minimal surfaces embedded in spheres rather than Euclidean space can be used to construct static compact horizons in asymptotically de Sitter space-times.
On stable compact minimal submanifolds
Torralbo, Francisco
2010-01-01
Stable compact minimal submanifolds of the product of a sphere and any Riemannian manifold are classified whenever the dimension of the sphere is at least three. The complete classification of the stable compact minimal submanifolds of the product of two spheres is obtained. Also, it is proved that the only stable compact minimal surfaces of the product of a 2-sphere and any Riemann surface are the complex ones.
Against Explanatory Minimalism in Psychiatry.
Thornton, Tim
2015-01-01
The idea that psychiatry contains, in principle, a series of levels of explanation has been criticized not only as empirically false but also, by Campbell, as unintelligible because it presupposes a discredited pre-Humean view of causation. Campbell's criticism is based on an interventionist-inspired denial that mechanisms and rational connections underpin physical and mental causation, respectively, and hence underpin levels of explanation. These claims echo some superficially similar remarks in Wittgenstein's Zettel. But attention to the context of Wittgenstein's remarks suggests a reason to reject explanatory minimalism in psychiatry and reinstate a Wittgensteinian notion of levels of explanation. Only in a context broader than the one provided by interventionism is the ascription of propositional attitudes, even in the puzzling case of delusions, justified. Such a view, informed by Wittgenstein, can reconcile the idea that the ascription of mental phenomena presupposes a particular level of explanation with the rejection of an a priori claim about its connection to a neurological level of explanation.
Minimal surfaces for architectural constructions
Directory of Open Access Journals (Sweden)
Velimirović Ljubica S.
2008-01-01
Minimal surfaces are the surfaces of smallest area spanned by a given boundary; equivalently, they are surfaces of vanishing mean curvature. Minimal surface theory has developed rapidly in recent years: many new examples have been constructed and old ones modified. The minimal-area property makes these surfaces suitable for application in architecture, the main reasons being that weight and the amount of material are reduced to a minimum. Famous architects such as Frei Otto created this new trend in architecture. In recent years it has become possible to enlarge the family of minimal surfaces by constructing new surfaces.
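The defining property quoted above can be written out explicitly; the following are standard facts of differential geometry rather than content of the article itself, with the catenoid as the classical non-planar example:

```latex
% Mean curvature H (average of the principal curvatures) vanishes:
H = \frac{\kappa_1 + \kappa_2}{2} = 0 .
% Classical example: the catenoid, the only non-planar minimal surface
% of revolution, parametrized (up to scale) by
X(u,v) = \bigl(\cosh v \,\cos u,\; \cosh v \,\sin u,\; v\bigr).
```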
Global Analysis of Minimal Surfaces
Dierkes, Ulrich; Tromba, Anthony J
2010-01-01
Many properties of minimal surfaces are of a global nature, and this is already true for the results treated in the first two volumes of the treatise. Part I of the present book can be viewed as an extension of these results. For instance, the first two chapters deal with existence, regularity and uniqueness theorems for minimal surfaces with partially free boundaries. Here one of the main features is the possibility of 'edge-crawling' along free parts of the boundary. The third chapter deals with a priori estimates for minimal surfaces in higher dimensions and for minimizers of singular integ
On minimal artinian modules and minimal artinian linear groups
Directory of Open Access Journals (Sweden)
Leonid A. Kurdachenko
2001-01-01
minimal artinian linear groups. The authors prove that in such classes of groups as hypercentral groups (so also nilpotent and abelian groups) and FC-groups, minimal artinian linear groups have precisely the same structure as the corresponding irreducible linear groups.
Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate
Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing
2014-09-01
We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. Using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease prevails: the infective class persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infectives disappear and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, extending it to a system of stochastic ordinary differential equations. For the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the model; regarding the value of ℛ0, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.
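The threshold behaviour described above can be reproduced with a single-group stand-in. The paper's model is multi-group MSIR; this forward-Euler SIR sketch with a vaccinated fraction p is only illustrative:

```python
# Hedged sketch: forward-Euler SIR with a fraction p vaccinated at the
# outset (moved straight to the removed class). With R0 = beta/gamma,
# the effective reproduction number is R0*(1-p); the epidemic takes off
# when it exceeds 1 and fizzles otherwise.
def attack_rate(beta, gamma, p, i0=1e-3, dt=0.05, steps=40000):
    """Fraction of the initially susceptible population eventually infected."""
    s = 1.0 - p - i0
    i = i0
    s0 = s
    for _ in range(steps):
        new_inf = beta * s * i * dt       # S -> I flow over one step
        s -= new_inf
        i += new_inf - gamma * i * dt     # I -> R flow over one step
    return s0 - s
```

With beta = 0.5 and gamma = 0.25 (R0 = 2), vaccinating 60% of the population pushes the effective reproduction number to 0.8 and the outbreak never takes off, mirroring the ℛ0 threshold in the abstract.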
Minimizing ADMs on WDM Directed Fiber Trees
Institute of Scientific and Technical Information of China (English)
ZHOU FengFeng (周丰丰); CHEN GuoLiang (陈国良); XU YinLong (许胤龙); GU Jun (顾钧)
2003-01-01
This paper proposes a polynomial-time algorithm for the Minimum WDM/SONET Add/Drop Multiplexer Problem (MADM) on WDM directed fiber trees, whether or not wavelength converters are used. It runs in time O(m²n), where n and m are the number of nodes of the tree and the number of requests, respectively. Incorporating T. Erlebach et al.'s work into the proposed algorithm, it also reaches the lower bound on the number of required wavelengths achieved by greedy algorithms for the case without wavelength converters. Combined with some previous work, the algorithm greatly reduces the number of required wavelengths while using the minimal number of ADMs for the case with limited wavelength converters. The experimental results confirm the minimal number of required ADMs on WDM directed fiber trees.
Minimally legally invasive dentistry.
Lam, R
2014-12-01
One disadvantage of the rapid advances in modern dentistry is that treatment options have never been more varied or confusing. Compounded by a more educated population greatly assisted by online information in an increasingly litigious society, a major concern in recent times is increased litigation against health practitioners. The manner in which courts handle disputes is ambiguous and what is considered fair or just may not be reflected in the judicial process. Although legal decisions in Australia follow a doctrine of precedent, the law is not static and is often reflected by community sentiment. In medical litigation, this has seen the rejection of the Bolam principle with a preference towards greater patient rights. Recent court decisions may change the practice of dentistry and it is important that the clinician is not caught unaware. The aim of this article is to discuss legal issues that are pertinent to the practice of modern dentistry through an analysis of legal cases that have shaped health law. Through these discussions, the importance of continuing professional development, professional association and informed consent will be realized as a means to limit the legal complications of dental practice.
Institute of Scientific and Technical Information of China (English)
陈保富; 孔敏; 朱成楚; 张波; 叶中瑞; 王春国; 马德华; 叶敏华
2013-01-01
after McKeown minimally invasive esophagectomy (MMIE) for the treatment of esophageal cancer. Methods: From August 1997 to December 2012, MMIE was performed in 507 patients. Esophageal tumors were located in the upper third in 39 cases (7.69%), the middle third in 312 (61.54%), and the lower third in 156 (30.77%). Preoperative neoadjuvant chemoradiotherapy was used in 21 cases (4.14%). Resection was performed for squamous cancer (463 cases, 91.32%) and adenocarcinoma or other histologic types (44 cases, 8.68%) in patients with stage 0 (55, 10.85%), I (167, 32.94%), II (203, 40.04%), III (69, 13.61%), and IV (13, 2.56%) disease. Surgery was completed by thoracoscopy with laparotomy (281 cases, 55.42%), a total thoracoscopic/laparoscopic approach (179 cases, 35.31%), or thoracotomy with laparoscopy (32 cases, 6.31%); conversion to thoracotomy/laparotomy occurred in 15 cases (2.96%). Results: MMIE was successfully completed in 492 (97.04%) patients. The operative time for thoracoscopic esophageal mobilization and thoracic lymph node dissection was (81.5 ± 34.7) min (60-180 min), and for laparoscopic gastric mobilization and abdominal lymphadenectomy (60.3 ± 17.5) min (40-105 min). Blood loss was (105.2 ± 73.1) ml (55-1080 ml) for the thoracoscopic stage and (43.5 ± 21.4) ml (30-350 ml) for the laparoscopic stage. The total number of lymph nodes dissected was 5-48 [(23.7 ± 11.5)/case]: 3-32 thoracic [(14.6 ± 7.7)/case], 2-29 abdominal [(8.7 ± 5.2)/case], and 0-7 cervical [(1.3 ± 1.1)/case]. Esophageal reconstruction was through the esophageal bed in 198 cases and via the retrosternal route in 309 cases. There were no deaths in the whole group; intraoperative bleeding due to azygos vein/spleen injury occurred in 3 cases, accidental tracheal injury by hook cautery/ultrasonic scalpel in 3 cases, thoracic duct injury in 13 cases, atrial fibrillation in 9 cases, and margin-positive (R1) resection in 3 cases. Among major complications in the early postoperative period, the lung infection rate was
Minimally invasive treatment of multilevel spinal epidural abscess.
Safavi-Abbasi, Sam; Maurer, Adrian J; Rabb, Craig H
2013-01-01
The use of minimally invasive tubular retractor microsurgery for treatment of multilevel spinal epidural abscess is described. This technique was used in 3 cases, and excellent results were achieved. The authors conclude that multilevel spinal epidural abscesses can be safely and effectively managed using microsurgery via a minimally invasive tubular retractor system.
An algorithm for minimization of quantum cost
Banerjee, Anindita; Pathak, Anirban
2009-01-01
A new algorithm for minimization of the quantum cost of quantum circuits has been designed. The quantum costs of different quantum circuits of particular interest (e.g., circuits for EPR pair generation, quantum teleportation, the Shor code, and different quantum arithmetic operations) are computed using the proposed algorithm. The quantum costs obtained are compared with existing results, and it is found that the algorithm produces the minimum quantum cost in all cases.
Heroin-associated anthrax with minimal morbidity.
Black, Heather; Chapman, Ann; Inverarity, Donald; Sinha, Satyajit
2017-03-08
In 2010, during an outbreak of anthrax affecting people who inject drugs, a heroin user aged 37 years presented with soft tissue infection. He was subsequently found to have anthrax. We describe his management and the difficulty in distinguishing anthrax from non-anthrax lesions. His full recovery, despite an overall mortality of 30% for injectional anthrax, demonstrates that some heroin-related anthrax cases can be managed predominantly with oral antibiotics and minimal surgical intervention.
Directory of Open Access Journals (Sweden)
Honório Kanegae Junior
2006-06-01
The stratification of stands for successive forest inventories is usually based on cadastral information such as age, species, spacing, and management regime, among others. The sample size is usually conditioned by the variability of the forest and by the required precision; thus, controlling variation through efficient stratification strongly influences both sampling precision and sample size. This study evaluated the stratification provided by two spatial interpolators, a statistical one represented by kriging and a deterministic one represented by the inverse of the square of the distance (IDW); compared the interpolators with simple random sampling and with traditional stratification based on cadastral data, in terms of reduction of the variance of the mean and of the sampling error; and defined the optimal number of strata when spatial interpolators are used. Four dendrometric variables were studied for the generation of strata: volume, basal area, dominant height, and site index, at two ages (2.5 and 3.5 years). It was concluded that kriging of volume per hectare at 3.5 years of age reduced the stand mean variance by 47% and the inventory sampling error by 32% compared to simple random sampling, while IDW interpolation of volume at 3.5 years reduced the stand mean variance by 74% and the sampling error by 48%. The least efficient stratification was the one based on age, species, and spacing. Although the IDW method showed high efficiency, unlike geostatistical kriging it does not guarantee that this efficiency is maintained if a new sample is taken in the same projects. In forest stands that do not present spatial dependence, the IDW method can be used with great efficiency in traditional stratification.
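The deterministic interpolator compared in the study admits a very short definition. A minimal sketch (power-2 IDW over invented plot coordinates; the study's actual grids and stand variables are not reproduced):

```python
# Hedged sketch of inverse distance weighting (IDW), power 2: predict a
# stand variable (e.g. volume/ha) at an unsampled location as a
# distance-weighted average of the sampled plot values.
def idw(sample_points, values, target, power=2.0):
    num = den = 0.0
    for (x, y), v in zip(sample_points, values):
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return v                      # exact hit: return the plot value
        w = 1.0 / d2 ** (power / 2.0)     # weight = 1 / distance^power
        num += w * v
        den += w
    return num / den
```

Strata can then be formed by thresholding the interpolated surface, which is how either interpolator (IDW or kriging) feeds the stratification compared in the study.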
Uniqueness of PL Minimal Surfaces
Institute of Scientific and Technical Information of China (English)
Yi NI
2007-01-01
Using a standard fact in hyperbolic geometry, we give a simple proof of the uniqueness of PL minimal surfaces, thus filling in a gap in the original proof of Jaco and Rubinstein. Moreover, in order to clarify some ambiguity, we sharpen the definition of PL minimal surfaces, and prove a technical lemma on the Plateau problem in the hyperbolic space.
Bergshoeff, Eric; Hohm, Olaf; Merbis, Wout; Routh, Alasdair J.; Townsend, Paul K.
2014-01-01
We present an alternative to topologically massive gravity (TMG) with the same 'minimal' bulk properties; i.e. a single local degree of freedom that is realized as a massive graviton in linearization about an anti-de Sitter (AdS) vacuum. However, in contrast to TMG, the new 'minimal massive gravity'
Guidelines for mixed waste minimization
Energy Technology Data Exchange (ETDEWEB)
Owens, C.
1992-02-01
Currently, there is no commercial mixed waste disposal available in the United States, and storage and treatment capacity for commercial mixed waste is limited. Because of these management limitations, host state and compact region officials are encouraging their mixed waste generators to minimize their mixed wastes. This document provides a guide to mixed waste minimization.
Directory of Open Access Journals (Sweden)
Knol Dirk L
2006-08-01
Changes in scores on health status questionnaires are difficult to interpret. Several methods to determine minimally important changes (MICs) have been proposed, which can broadly be divided into distribution-based and anchor-based methods. Comparisons of these methods have led to insight into essential differences between the approaches. Some authors have tried to arrive at a uniform measure for the MIC, such as 0.5 standard deviation or the value of one standard error of measurement (SEM). Others have emphasized the diversity of MIC values, depending on the type of anchor, the definition of minimal importance on the anchor, and characteristics of the disease under study. A closer look makes clear that some distribution-based methods have merely focused on minimally detectable changes. For assessing minimally important changes, anchor-based methods are preferred, as they include a definition of what is minimally important. Acknowledging the distinction between minimally detectable and minimally important changes is useful, not only to avoid confusion among MIC methods, but also to gain information on two important benchmarks on the scale of a health status measurement instrument. Appreciating the distinction, it becomes possible to judge whether the minimally detectable change of a measurement instrument is sufficiently small to detect minimally important changes.
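The two distribution-based benchmarks named above reduce to one-line formulas. A sketch using the standard definitions (the scores and reliability value in the test are invented), which also makes the abstract's point concrete: both quantities are functions of score spread and measurement precision, not of what patients consider important:

```python
import math

# Hedged sketch of two distribution-based benchmarks: half a standard
# deviation, and one standard error of measurement,
# SEM = SD * sqrt(1 - reliability). These index *detectable* rather than
# *important* change, which is the distinction the abstract draws.
def _sample_sd(scores):
    m = sum(scores) / len(scores)
    return math.sqrt(sum((s - m) ** 2 for s in scores) / (len(scores) - 1))

def half_sd(scores):
    return 0.5 * _sample_sd(scores)

def sem(scores, reliability):
    return _sample_sd(scores) * math.sqrt(1.0 - reliability)
```

An anchor-based MIC, by contrast, would be estimated from the score change of patients who rate themselves "minimally improved" on an external anchor, which is why only that family carries a definition of importance.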
Optimization of structures subjected to dynamic load: deterministic and probabilistic methods
Directory of Open Access Journals (Sweden)
Élcio Cassimiro Alves
This paper deals with the deterministic and probabilistic optimization of structures against bending when submitted to dynamic loads. The deterministic optimization problem considers the plate submitted to a time-varying load, while the probabilistic one takes into account a random loading defined by a power spectral density function. The two problems are related through a Fourier transform. The finite element method is used to model the structures. The sensitivity analysis is performed through the analytical method, and the optimization problem is solved by an interior point method. A comparison between the deterministic optimization and the probabilistic one, with a power spectral density function compatible with the time-varying load, shows very good results.
Bossche, Adrien Van Den; Campo, Eric
2008-01-01
Today, many network applications require shorter reaction times. Robotics is an excellent example of these needs: a robot's reaction time has a direct effect on the complexity of the tasks it can perform. Here, we propose a fully deterministic medium access method for a wireless robotic application. This contribution is based on low-power wireless personal area networks, such as the ZigBee standard. Indeed, ZigBee has identified limits to Quality of Service due to non-deterministic medium access and probable collisions during medium reservation requests. In this paper, two major improvements are proposed: an efficient polling of the star nodes and a temporal deterministic distribution of peer-to-peer messages. This new collision-free MAC protocol offers some QoS capabilities.
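The first improvement (efficient polling of star nodes) rests on every node owning a dedicated, collision-free slot. A toy schedule builder under assumed names and slot sizes; nothing here is taken from the paper or from the ZigBee specification:

```python
# Hedged toy sketch: a coordinator assigns each star node its own polling
# slot inside a superframe, so no two nodes can ever transmit in the same
# slot and medium access becomes deterministic. Slot width is illustrative.
def build_polling_schedule(node_ids, slot_ms=4):
    """Map each node to the start time (ms) of its dedicated slot."""
    return {node: i * slot_ms for i, node in enumerate(node_ids)}
```

Because the slot assignment is fixed in advance, the worst-case delay before a node is polled is bounded by the superframe length, which is the kind of guarantee contention-based access cannot give.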
Deterministic and stochastic trends in the Lee-Carter mortality model
DEFF Research Database (Denmark)
Callot, Laurent; Haldrup, Niels; Kallestrup-Lamb, Malene
2015-01-01
The Lee and Carter (1992) model assumes that the deterministic and stochastic time series dynamics load with identical weights when describing the development of age-specific mortality rates. Effectively this means that the main characteristics of the model simplify to a random walk model with age-specific drift when characterizing mortality data. We find empirical evidence that this feature of the Lee-Carter model overly restricts the system dynamics, and we suggest separating the deterministic and stochastic time series components, to the benefit of improved fit and forecasting performance. In fact, we find that the classical Lee-Carter model can be viewed as a two (or several)-factor model where one factor is deterministic and the other factors are stochastic. This feature generalizes to the range of models that extend the Lee-Carter model in various directions.
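The random-walk-with-drift point made in this abstract can be stated in code. A sketch under the classical Lee-Carter assumption that the fitted mortality index k_t follows a random walk with drift, so its central forecast is a deterministic straight line (the fitting of k_t from mortality rates is not shown):

```python
# Hedged sketch: central (noise-free) forecast of a random walk with
# drift fitted to an observed mortality index series k. Under the
# classical Lee-Carter setup this is the deterministic trend the
# abstract refers to; the stochastic component adds noise around it.
def rw_drift_forecast(k, horizon):
    """Forecast k ahead by `horizon` steps from its last observation."""
    drift = (k[-1] - k[0]) / (len(k) - 1)   # standard drift estimate
    return [k[-1] + drift * h for h in range(1, horizon + 1)]
```

Separating the deterministic trend from the stochastic factors, as the authors propose, amounts to modeling the straight-line part and the fluctuations around it with their own weights rather than a single common loading.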
Analysis of Photonic Quantum Nodes Based on Deterministic Single-Photon Raman Passage
Rosenblum, Serge
2014-01-01
The long-standing goal of deterministically controlling a single photon using another was recently realized in various experimental settings. Among these, a particularly attractive demonstration relied on deterministic single-photon Raman passage in a three-level Lambda system coupled to a single-mode waveguide. Beyond the ability to control the direction of propagation of one photon by the direction of another photon, this scheme can also perform as a passive quantum memory and a universal quantum gate. Relying on interference, this all-optical, coherent scheme requires no additional control fields, and can therefore form the basis for scalable quantum networks composed of passive quantum nodes that interact with each other only with single photon pulses. Here we present an analytical and numerical study of deterministic single-photon Raman passage, and characterise its limitations and the parameters for optimal operation. Specifically, we study the effect of losses and the presence of multiple excited state...
Experimental demonstration on the deterministic quantum key distribution based on entangled photons
Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu
2016-02-01
As an important resource, entangled light sources have been used in developing quantum information technologies such as quantum key distribution (QKD). Few experiments have implemented entanglement-based deterministic QKD protocols, since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified “Ping-Pong” (PP) protocol. The experimental results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entangled light sources in real-life fiber-based quantum communications.
[Minimally Invasive Treatment of Esophageal Benign Diseases].
Inoue, Haruhiro
2016-07-01
As a minimally invasive treatment of esophageal achalasia, per-oral endoscopic myotomy (POEM) was developed in 2008. More than 1,100 cases of achalasia-related diseases have received POEM. The success rate of the procedure was more than 95% (Eckardt score improvement of 3 points or more). No serious complication (Clavien-Dindo classification IIIb or higher) was experienced. These results suggest that POEM is becoming a standard minimally invasive treatment for achalasia-related diseases. As an offshoot of POEM, submucosal tumor removal through a submucosal tunnel (per-oral endoscopic tumor resection, POET) was developed and is safely performed. The best indication for POET is an esophageal leiomyoma of less than 5 cm. A novel endoscopic treatment of gastroesophageal reflux disease (GERD) was also developed: anti-reflux mucosectomy (ARMS), a nearly circumferential mucosal resection of the gastric cardia mucosa. ARMS has been performed in 56 consecutive cases of refractory GERD; no major complications were encountered, and clinical results were excellent. The best indication for ARMS is refractory GERD without a long sliding hernia. The longest follow-up exceeds 10 years. Minimally invasive treatments for esophageal benign diseases are currently performed by therapeutic endoscopy.
Minimal Webs in Riemannian Manifolds
DEFF Research Database (Denmark)
Markvorsen, Steen
2008-01-01
… into Riemannian manifolds $(N^{n}, h)$. Such immersions we call {\em minimal webs}. They admit a natural 'geometric' extension of the intrinsic combinatorial discrete Laplacian. The geometric Laplacian on minimal webs enjoys standard properties such as the maximum principle and the divergence theorems, which are of instrumental importance for the applications. We apply these properties to show that minimal webs in ambient Riemannian spaces share several analytic and geometric properties with their smooth (minimal submanifold) counterparts in such spaces. In particular we use appropriate versions of the divergence theorems together with the comparison techniques for distance functions in Riemannian geometry and obtain bounds for the first Dirichlet eigenvalues, the exit times and the capacities, as well as isoperimetric type inequalities for so-called extrinsic $R$-webs of minimal webs in ambient Riemannian manifolds.
Waste minimization handbook, Volume 1
Energy Technology Data Exchange (ETDEWEB)
Boing, L.E.; Coffey, M.J.
1995-12-01
This technical guide presents various methods used by industry to minimize low-level radioactive waste (LLW) generated during decommissioning and decontamination (D and D) activities. Such activities generate significant amounts of LLW during their operations. Waste minimization refers to any measure, procedure, or technique that reduces the amount of waste generated during a specific operation or project. Preventive waste minimization techniques implemented when a project is initiated can significantly reduce waste. Techniques implemented during decontamination activities reduce the cost of decommissioning. The application of waste minimization techniques is not limited to D and D activities; it is also useful during any phase of a facility's life cycle. This compendium will be supplemented with a second volume of abstracts of hundreds of papers related to minimizing low-level nuclear waste. This second volume is expected to be released in late 1996.
A non-deterministic approach to forecasting the trophic evolution of lakes
Directory of Open Access Journals (Sweden)
Roberto Bertoni
2016-03-01
Limnologists have long recognized that one of the goals of their discipline is to increase its predictive capability. In recent years, the role of prediction in applied ecology has escalated, mainly due to man's increased ability to change the biosphere. Such alterations often come with unplanned and noticeably negative side effects mushrooming from lack of proper attention to long-term consequences. Regression analysis of common limnological parameters has been successfully applied to develop predictive models relating the variability of limnological parameters to specific key causes. These approaches, though, are biased by the requirement of an a priori cause-relation assumption, oftentimes difficult to find in the complex, nonlinear relationships entangling ecological data. A set of quantitative tools that can help address current environmental challenges while avoiding such restrictions is currently being researched and developed within the framework of ecological informatics. One of these approaches, attempting to model the relationship between a set of inputs and known outputs, is based on genetic algorithms and programming (GP). This stochastic optimization tool is based on the process of evolution in natural systems and was inspired by a direct analogy to sexual reproduction and Charles Darwin's principle of natural selection. GP works through genetic algorithms that use selection and recombination operators to generate a population of equations. Thanks to a 25-year-long time series of regular limnological data, the deep, large, oligotrophic Lake Maggiore (Northern Italy) is the ideal case study to test the predictive ability of GP. Testing of GP on the multi-year data series of this lake has allowed us to verify the forecasting efficacy of the models emerging from GP application. In addition, this non-deterministic approach leads to the discovery of non-obvious relationships between variables and enables the formulation of new stochastic models.
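The selection-and-recombination loop that this abstract describes can be sketched as a toy genetic algorithm. This is illustrative only: the function and parameter names are invented, and real GP evolves equation trees rather than the fixed-length coefficient vectors used here.

```python
import random

def evolve(fitness, pop_size=50, genome_len=2, generations=200,
           bounds=(-10.0, 10.0), sigma=0.15, seed=0):
    """Minimal genetic algorithm: truncation selection, one-point
    crossover (recombination) and Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(*bounds) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness)[: pop_size // 5]  # keep best 20%
        children = [list(g) for g in elite]                # elitism
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)                    # two parents
            cut = rng.randrange(1, genome_len)             # crossover point
            child = [g + rng.gauss(0.0, sigma)             # mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        pop = children
    return min(pop, key=fitness)

# Toy use: recover the coefficients of y = 2x + 1 from exact samples.
xs = [0, 1, 2, 3, 4]
ys = [2 * x + 1 for x in xs]
err = lambda g: sum((g[0] * x + g[1] - y) ** 2 for x, y in zip(xs, ys))
best = evolve(err)
```

The same selection/recombination/mutation skeleton underlies GP; the difference is that GP's "genomes" are symbolic expressions, so crossover swaps subtrees of candidate equations instead of slices of a coefficient list.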
Energy Technology Data Exchange (ETDEWEB)
Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States); Wang, Yaqi [North Carolina State Univ., Raleigh, NC (United States)
2013-12-20
The research team has developed a practical, high-order, discrete-ordinates, short characteristics neutron transport code for three-dimensional configurations represented on unstructured tetrahedral grids that can be used for realistic reactor physics applications at both the assembly and core levels. This project will perform a comprehensive verification and validation of this new computational tool against both a continuous-energy Monte Carlo simulation (e.g. MCNP) and experimentally measured data, an essential prerequisite for its deployment in reactor core modeling. Verification is divided into three phases. The team will first conduct spatial mesh and expansion order refinement studies to monitor convergence of the numerical solution to reference solutions. This is quantified by convergence rates that are based on integral error norms computed from the cell-by-cell difference between the code’s numerical solution and its reference counterpart. The latter is either analytic or very fine- mesh numerical solutions from independent computational tools. For the second phase, the team will create a suite of code-independent benchmark configurations to enable testing the theoretical order of accuracy of any particular discretization of the discrete ordinates approximation of the transport equation. For each tested case (i.e. mesh and spatial approximation order), researchers will execute the code and compare the resulting numerical solution to the exact solution on a per cell basis to determine the distribution of the numerical error. The final activity comprises a comparison to continuous-energy Monte Carlo solutions for zero-power critical configuration measurements at Idaho National Laboratory’s Advanced Test Reactor (ATR). Results of this comparison will allow the investigators to distinguish between modeling errors and the above-listed discretization errors introduced by the deterministic method, and to separate the sources of uncertainty.
Deterministic skill of ENSO predictions from the North American Multimodel Ensemble
Barnston, Anthony G.; Tippett, Michael K.; Ranganathan, Meghana; L'Heureux, Michelle L.
2017-03-01
Hindcasts and real-time predictions of the east-central tropical Pacific sea surface temperature (SST) from the North American Multimodel Ensemble (NMME) system are verified for 1982-2015. Skill is examined using two deterministic verification measures: mean squared error skill score (MSESS) and anomaly correlation. Verification of eight individual models shows somewhat differing skills among them, with some models consistently producing more successful predictions than others. The skill levels of MME predictions are approximately the same as the two best performing individual models, and sometimes exceed both of them. A decomposition of the MSESS indicates the presence of calibration errors in some of the models. In particular, the amplitudes of some model predictions are too high when predictability is limited by the northern spring ENSO predictability barrier and/or when the interannual variability of the SST is near its seasonal minimum. The skill of the NMME system is compared to that of the MME from the IRI/CPC ENSO prediction plume, both for a comparable hindcast period and also for a set of real-time predictions spanning 2002-2011. Comparisons are made both between the MME predictions of each model group, and between the average of the skills of the respective individual models in each group. Acknowledging a hindcast versus real-time inconsistency in the 2002-2012 skill comparison, the skill of the NMME is slightly higher than that of the prediction plume models in all cases. This result reflects well on the NMME system, with its large total ensemble size and opportunity for possible complementary contributions to skill.
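For reference, the two deterministic verification measures named in this abstract can be computed as follows. This is a sketch with illustrative function names; the study's exact climatology and anomaly handling may differ.

```python
import numpy as np

def msess(forecast, observed, reference=None):
    """Mean squared error skill score: 1 - MSE(forecast)/MSE(reference).

    The reference defaults to the climatological mean of the observations,
    i.e. a constant zero-anomaly forecast."""
    f = np.asarray(forecast, dtype=float)
    o = np.asarray(observed, dtype=float)
    r = (np.full_like(o, o.mean()) if reference is None
         else np.asarray(reference, dtype=float))
    return 1.0 - np.mean((f - o) ** 2) / np.mean((r - o) ** 2)

def anomaly_correlation(forecast, observed):
    """Pearson correlation between forecast and observed anomalies."""
    f = np.asarray(forecast, dtype=float)
    o = np.asarray(observed, dtype=float)
    fa, oa = f - f.mean(), o - o.mean()
    return float(np.sum(fa * oa) / np.sqrt(np.sum(fa ** 2) * np.sum(oa ** 2)))
```

A perfect forecast scores MSESS = 1 and anomaly correlation = 1; a forecast no better than climatology scores MSESS = 0, and negative MSESS means it is worse than the climatological reference.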
Deterministic approach for multiple-source tsunami hazard assessment for Sines, Portugal
Wronna, M.; Omira, R.; Baptista, M. A.
2015-11-01
In this paper, we present a deterministic approach to tsunami hazard assessment for the city and harbour of Sines, Portugal, one of the test sites of project ASTARTE (Assessment, STrategy And Risk Reduction for Tsunamis in Europe). Sines has one of the most important deep-water ports, which has oil-bearing, petrochemical, liquid-bulk, coal, and container terminals. The port and its industrial infrastructures face the ocean southwest towards the main seismogenic sources. This work considers two different seismic zones: the Southwest Iberian Margin and the Gloria Fault. Within these two regions, we selected a total of six scenarios to assess the tsunami impact at the test site. The tsunami simulations are computed using NSWING, a Non-linear Shallow Water model wIth Nested Grids. In this study, the static effect of tides is analysed for three different tidal stages: MLLW (mean lower low water), MSL (mean sea level), and MHHW (mean higher high water). For each scenario, the tsunami hazard is described by maximum values of wave height, flow depth, drawback, maximum inundation area and run-up. Synthetic waveforms are computed at virtual tide gauges at specific locations outside and inside the harbour. The final results describe the impact at the Sines test site considering the single scenarios at mean sea level, the aggregate scenario, and the influence of the tide on the aggregate scenario. The results confirm the composite source of Horseshoe and Marques de Pombal faults as the worst-case scenario, with wave heights of over 10 m, which reach the coast approximately 22 min after the rupture. It dominates the aggregate scenario by about 60 % of the impact area at the test site, considering maximum wave height and maximum flow depth. The HSMPF scenario inundates a total area of 3.5 km².
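The cell-wise aggregation over single scenarios described above can be sketched as follows. The grid values and the second scenario name here are invented for illustration, not the study's data.

```python
import numpy as np

# Hypothetical per-scenario maximum wave heights (m) on a tiny coastal grid.
# The aggregate scenario keeps the cell-wise maximum over all scenarios.
scenario_hmax = {
    "HSMPF": np.array([[10.2, 8.1], [6.4, 3.0]]),
    "GF":    np.array([[4.1, 5.0], [2.2, 1.1]]),   # hypothetical Gloria Fault case
}
aggregate = np.maximum.reduce(list(scenario_hmax.values()))

# Fraction of cells in which each scenario attains the aggregate value,
# i.e. how much of the impact area that scenario dominates.
share = {name: float((h == aggregate).mean())
         for name, h in scenario_hmax.items()}
```

The same reduction applied to maximum flow depth grids gives the dominance shares quoted in the abstract (about 60 % for the HSMPF composite source at the real test site).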
Minimally invasive surgical techniques in periodontal regeneration.
Cortellini, Pierpaolo
2012-09-01
A review of the current scientific literature was undertaken to evaluate the efficacy of minimally invasive periodontal regenerative surgery in the treatment of periodontal defects. The impact on clinical outcomes, surgical chair-time, side effects and patient morbidity were evaluated. An electronic search of the PUBMED database from January 1987 to December 2011 was undertaken on dental journals using the keyword "minimally invasive surgery". Cohort studies, retrospective studies and randomized controlled clinical trials referring to treatment of periodontal defects with at least 6 months of follow-up were selected. Quality assessment of the selected studies was done through the Strength of Recommendation Taxonomy Grading (SORT) System. Ten studies (1 retrospective, 5 cohorts and 4 RCTs) were included. All the studies consistently support the efficacy of minimally invasive surgery in the treatment of periodontal defects in terms of clinical attachment level gain, probing pocket depth reduction and minimal gingival recession. Six studies reporting on side effects and patient morbidity consistently indicate very low levels of pain and discomfort during and after surgery, resulting in a reduced intake of pain-killers and very limited interference with daily activities in the post-operative period. Minimally invasive surgery might be considered a true reality in the field of periodontal regeneration. The observed clinical improvements are consistently associated with very limited morbidity to the patient during the surgical procedure as well as in the post-operative period. Minimally invasive surgery, however, cannot be applied in all cases. A stepwise decisional algorithm should support clinicians in choosing the treatment approach.
Quantum field theoretic behavior of a deterministic cellular automaton
Hooft, G. 't; Isler, K.; Kalitzin, S.
1992-01-01
A certain class of cellular automata in 1 space + 1 time dimension is shown to be closely related to quantum field theories containing Dirac fermions. In the massless case this relation can be studied analytically, while the introduction of Dirac mass requires numerical simulations. We show that in
Institute of Scientific and Technical Information of China (English)
左毅; 王海燕; 陈尚军
2013-01-01
Objective: To analyze the clinical effect of high-dose urokinase delivered via a minimally invasive approach for the treatment of ventricular hemorrhage. Methods: A total of 90 cases of ventricular hemorrhage were randomly divided into a control group and a treatment group. In the control group, lateral ventricular injection of urokinase after external ventricular drainage was combined with lumbar cistern drainage; in the treatment group, the same procedure was combined with additional injection of urokinase through the lumbar cistern drainage tube. Results: In the treatment group, the mean duration of lumbar cistern drainage was 6 d, and the time to ventricular hematoma elimination was (5 ± 1.47) d. In the control group, the mean duration of lumbar cistern drainage was 12 d, and the time to ventricular hematoma elimination was (11 ± 3.76) d. Conclusion: Compared with the control group, ventricular injection of urokinase after external ventricular drainage plus lumbar cistern injection of urokinase accelerates hematoma elimination and increases the survival rate.
VISCO-ELASTIC SYSTEMS UNDER BOTH DETERMINISTIC AND BOUND RANDOM PARAMETRIC EXCITATION
Institute of Scientific and Technical Information of China (English)
徐伟; 戎海武; 方同
2003-01-01
The principal resonance of a visco-elastic system under both deterministic and random parametric excitation was investigated. The method of multiple scales was used to determine the equations of modulation of amplitude and phase. The behavior, stability and bifurcation of the steady state response were studied by means of qualitative analysis. The contributions from the visco-elastic force to both damping and stiffness can be taken into account. The effects of damping, detuning, bandwidth, and magnitudes of deterministic and random excitations were analyzed. The theoretical analysis is verified by numerical results.
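In outline, the method of multiple scales mentioned in this abstract introduces a hierarchy of time scales; the following is a generic sketch of the ansatz, not the paper's specific visco-elastic system:

```latex
x(t;\varepsilon) = x_0(T_0, T_1) + \varepsilon\, x_1(T_0, T_1) + \cdots,
\qquad T_n = \varepsilon^n t,
\qquad
\frac{d}{dt} = D_0 + \varepsilon D_1 + \cdots,
\quad D_n \equiv \frac{\partial}{\partial T_n}.
```

Substituting this expansion into the equation of motion, collecting powers of $\varepsilon$, and eliminating secular (resonant) terms at each order yields the slow-time modulation equations for the amplitude and phase that the abstract refers to.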
Directory of Open Access Journals (Sweden)
Tim ePalmer
2015-10-01
How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.
Mainardi Fan, Fernando; Schwanenberg, Dirk; Alvarado, Rodolfo; Assis dos Reis, Alberto; Naumann, Steffi; Collischonn, Walter
2016-04-01
Hydropower is the most important electricity source in Brazil. During recent years, it accounted for 60% to 70% of the total electric power supply. Marginal costs of hydropower are lower than for thermal power plants; therefore, there is a strong economic motivation to maximize its share. On the other hand, hydropower depends on the availability of water, which has a natural variability. Its extremes lead to the risks of power production deficits during droughts and safety issues in the reservoir and downstream river reaches during flood events. One building block of the proper management of hydropower assets is the short-term forecast of reservoir inflows as input for an online, event-based optimization of its release strategy. While deterministic forecasts and optimization schemes are the established techniques for short-term reservoir management, the use of probabilistic ensemble forecasts and stochastic optimization techniques receives growing attention, and a number of studies have shown its benefit. The present work shows one of the first hindcasting and closed-loop control experiments for a multi-purpose hydropower reservoir in a tropical region in Brazil. The case study is the hydropower project (HPP) Três Marias, located in southeast Brazil. The HPP reservoir is operated with two main objectives: (i) hydroelectricity generation and (ii) flood control at Pirapora City located 120 km downstream of the dam. In the experiments, precipitation forecasts based on observed data, together with deterministic and probabilistic forecasts with 50 ensemble members of the ECMWF, are used as forcing of the MGB-IPH hydrological model to generate streamflow forecasts over a period of 2 years. The online optimization depends on a deterministic and multi-stage stochastic version of a model predictive control scheme. Results for the perfect forecasts show the potential benefit of the online optimization and indicate a desired forecast lead time of 30 days. In comparison, the use of
Superspace geometry and the minimal, non minimal, and new minimal supergravity multiplets
Energy Technology Data Exchange (ETDEWEB)
Girardi, G.; Grimm, R.; Mueller, M.; Wess, J.
1984-11-01
We analyse superspace constraints in a systematic way and define a set of natural constraints. We give a complete solution of the Bianchi identities subject to these constraints and obtain a reducible, but not fully reducible, multiplet. By additional constraints it can be reduced to either the minimal, the non-minimal, or the new minimal multiplet. We discuss the superspace actions for the various multiplets.
A deterministic partial differential equation model for dose calculation in electron radiotherapy
Energy Technology Data Exchange (ETDEWEB)
Duclous, R; Dubroca, B [CELIA and IMB Laboratories, Bordeaux University, 33405 Talence (France); Frank, M, E-mail: duclous@celia.u-bordeaux1.f, E-mail: dubroca@celia.u-bordeaux1.f, E-mail: frank@mathcces.rwth-aachen.d [Department of Mathematics and Center for Computational Engineering Science, RWTH Aachen University, Schinkelstr. 2, 52062 Aachen (Germany)
2010-07-07
High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung
Blackfolds, Plane Waves and Minimal Surfaces
Armas, Jay
2015-01-01
Minimal surfaces in Euclidean space provide examples of possible non-compact horizon geometries and topologies in asymptotically flat space-time. On the other hand, the existence of limiting surfaces in the space-time provides a simple mechanism for making these configurations compact. Limiting surfaces appear naturally in a given space-time by making minimal surfaces rotate but they are also inherent to plane wave or de Sitter space-times in which case minimal surfaces can be static and compact. We use the blackfold approach in order to scan for possible black hole horizon geometries and topologies in asymptotically flat, plane wave and de Sitter space-times. In the process we uncover several new configurations, such as black helicoids and catenoids, some of which have an asymptotically flat counterpart. In particular, we find that the ultraspinning regime of singly-spinning Myers-Perry black holes, described in terms of the simplest minimal surface (the plane), can be obtained as a limit of a black helicoid...
Locally minimal topological groups 1
Chasco, María Jesús; Dikranjan, Dikran N.; Außenhofer, Lydia; Domínguez, Xabier
2015-01-01
The aim of this paper is to go deeper into the study of local minimality and its connection to some naturally related properties. A Hausdorff topological group $(G,\tau)$ is called locally minimal if there exists a neighborhood $U$ of 0 in $\tau$ such that $U$ fails to be a neighborhood of zero in any Hausdorff group topology on $G$ which is strictly coarser than $\tau$. Examples of locally minimal groups are all subgroups of Banach-Lie groups, all locally compact groups and all mini...
Minimal flows and their extensions
Auslander, J
1988-01-01
This monograph presents developments in the abstract theory of topological dynamics, concentrating on the internal structure of minimal flows (actions of groups on compact Hausdorff spaces for which every orbit is dense) and their homomorphisms (continuous equivariant maps). Various classes of minimal flows (equicontinuous, distal, point distal) are intensively studied, and a general structure theorem is obtained. Another theme is the ``universal'' approach - entire classes of minimal flows are studied, rather than flows in isolation. This leads to the consideration of disjointness of flows, w
Heart bypass surgery - minimally invasive
... Names: Minimally invasive direct coronary artery bypass; MIDCAB; Robot-assisted coronary artery bypass; RACAB; Keyhole heart surgery